Archive for month: May 2017

7 in 10 smartphone apps share your data with third-party services


Narseo Vallina-Rodriguez, University of California, Berkeley and Srikanth Sundaresan, Princeton University

Our mobile phones can reveal a lot about ourselves: where we live and work; who our family, friends and acquaintances are; how (and even what) we communicate with them; and our personal habits. With all the information stored on them, it isn’t surprising that mobile device users take steps to protect their privacy, like using PINs or passcodes to unlock their phones.

The research that we and our colleagues are doing identifies and explores a significant threat that most people miss: More than 70 percent of smartphone apps are reporting personal data to third-party tracking companies like Google Analytics, the Facebook Graph API or Crashlytics.

When someone installs a new Android or iOS app, the app asks for the user’s permission before accessing personal information. Generally speaking, this is positive. And some of the information these apps collect is necessary for them to work properly: A map app wouldn’t be nearly as useful if it couldn’t use GPS data to get a location.

But once an app has permission to collect that information, it can share your data with anyone the app’s developer wants to – letting third-party companies track where you are, how fast you’re moving and what you’re doing.

The help, and hazard, of code libraries

An app doesn’t just collect data to use on the phone itself. Mapping apps, for example, send your location to a server run by the app’s developer to calculate directions from where you are to a desired destination.

The app can send data elsewhere, too. As with websites, many mobile apps are written by combining various functions, precoded by other developers and companies, in what are called third-party libraries. These libraries help developers track user engagement, connect with social media and earn money by displaying ads and other features, without having to write them from scratch.

However, in addition to their valuable help, most libraries also collect sensitive data and send it to their online servers – or to another company altogether. Successful library authors may be able to develop detailed digital profiles of users. For example, a person might give one app permission to know their location, and another app access to their contacts. These are initially separate permissions, one to each app. But if both apps used the same third-party library and shared different pieces of information, the library’s developer could link the pieces together.

Users would never know, because apps aren’t required to tell users what software libraries they use. And very few apps make their policies on user privacy public; when they do, it’s usually in long legal documents a regular person won’t read, much less understand.

Developing Lumen

Our research seeks to reveal how much data are potentially being collected without users’ knowledge, and to give users more control over their data. To get a picture of what data are being collected and transmitted from people’s smartphones, we developed a free Android app of our own, called the Lumen Privacy Monitor. It analyzes the traffic apps send out, to report which applications and online services actively harvest personal data.

Because Lumen is about transparency, a phone user can see the information installed apps collect in real time and with whom they share these data. We try to show the details of apps’ hidden behavior in an easy-to-understand way. It’s about research, too, so we ask users if they’ll allow us to collect some data about what Lumen observes their apps are doing – but that doesn’t include any personal or privacy-sensitive data. This unique access to data allows us to study how mobile apps collect users’ personal data and with whom they share data at an unprecedented scale.

In particular, Lumen keeps track of which apps are running on users’ devices, whether they are sending privacy-sensitive data out of the phone, what internet sites they send data to, the network protocol they use and what types of personal information each app sends to each site. Lumen analyzes apps’ traffic locally on the device and anonymizes these data before sending them to us for study: If Google Maps registers a user’s GPS location and sends that specific address to maps.google.com, Lumen tells us, “Google Maps got a GPS location and sent it to maps.google.com” – not where that person actually is.

Trackers are everywhere

Lumen’s user interface, showing the data leaks and privacy risks found for an Android mobile game called ‘Odd Socks.’
ICSI, CC BY-ND

More than 1,600 people who have used Lumen since October 2015 allowed us to analyze more than 5,000 apps. We discovered 598 internet sites likely to be tracking users for advertising purposes, including social media services like Facebook, large internet companies like Google and Yahoo, and online marketing companies under the umbrella of internet service providers like Verizon Wireless.

Lumen’s explanation of a leak of a device’s Android ID.
ICSI, CC BY-ND

We found that more than 70 percent of the apps we studied connected to at least one tracker, and 15 percent of them connected to five or more trackers. One in every four trackers harvested at least one unique device identifier, such as the phone number or its device-specific unique 15-digit IMEI number. Unique identifiers are crucial for online tracking services because they can connect different types of personal data provided by different apps to a single person or device. Most users, even privacy-savvy ones, are unaware of those hidden practices.

More than just a mobile problem

Tracking users on their mobile devices is just part of a larger problem. More than half of the app-trackers we identified also track users through websites. Thanks to this technique, called “cross-device” tracking, these services can build a much more complete profile of your online persona.

And individual tracking sites are not necessarily independent of others. Some of them are owned by the same corporate entity – and others could be swallowed up in future mergers. For example, Alphabet, Google’s parent company, owns several of the tracking domains that we studied, including Google Analytics, DoubleClick and AdMob, and through them collects data from more than 48 percent of the apps we studied.

Data transfers observed between locations of Lumen users (left) and third-party server locations (right). Traffic frequently crosses international boundaries.
ICSI, CC BY-ND

Users’ online identities are not protected by their home country’s laws. We found data being shipped across national borders, often ending up in countries with questionable privacy laws. More than 60 percent of connections to tracking sites are made to servers in the U.S., U.K., France, Singapore, China and South Korea – six countries that have deployed mass surveillance technologies. Government agencies in those places could potentially have access to these data, even if the users are in countries with stronger privacy laws such as Germany, Switzerland or Spain.

Connecting a device’s MAC address to a physical address (belonging to ICSI) using Wigle.
ICSI, CC BY-ND

Even more disturbingly, we have observed trackers in apps targeted to children. By testing 111 kids’ apps in our lab, we observed that 11 of them leaked a unique identifier – the MAC address of the Wi-Fi router the device was connected to. This is a problem, because it is easy to search online for physical locations associated with particular MAC addresses. Collecting private information about children, including their location, accounts and other unique identifiers, potentially violates the Federal Trade Commission’s rules protecting children’s privacy.

Just a small look

Although our data include many of the most popular Android apps, it is a small sample of users and apps, and therefore likely a small set of all possible trackers. Our findings may be merely scratching the surface of what is likely to be a much larger problem that spans across regulatory jurisdictions, devices and platforms.

It’s hard to know what users might do about this. Blocking sensitive information from leaving the phone may impair app performance or user experience: An app may refuse to function if it cannot load ads. Actually, blocking ads hurts app developers by denying them a source of revenue to support their work on apps, which are usually free to users.

If people were more willing to pay developers for apps, that may help, though it’s not a complete solution. We found that while paid apps tend to contact fewer tracking sites, they still do track users and connect with third-party tracking services.

Transparency, education and strong regulatory frameworks are the key. Users need to know what information about them is being collected, by whom, and what it’s being used for. Only then can we as a society decide what privacy protections are appropriate, and put them in place. Our findings, and those of many other researchers, can help turn the tables and track the trackers themselves.

Narseo Vallina-Rodriguez, Research Assistant Professor, IMDEA Networks Institute, Madrid, Spain; Research Scientist, Networking and Security, International Computer Science Institute, based at the University of California, Berkeley; and Srikanth Sundaresan, Research Fellow in Computer Science, Princeton University

This article was originally published on The Conversation. Read the original article.

The heavy price we pay for ‘free’ Wi-Fi

Benjamin Dean, Columbia University

For many years, New York City has been developing a “free” public Wi-Fi project. Called LinkNYC, it is an ambitious effort to bring wireless Internet access to all of the city’s residents.

This is the latest in a longstanding trend in which companies offer ostensibly free Internet-related products and services, such as social network access on Facebook, search and email from Google or the free Wi-Fi now commonly provided in cafes, shopping malls and airports.

These free services, however, come at a cost. Use is free on the condition that the companies providing the service can collect, store and analyze users’ valuable personal, locational and behavioral data.

This practice carries with it poorly appreciated privacy risks and an opaque exchange of valuable data for very little.

Is free public Wi-Fi, or any of these other services, really worth it?

Origins of LinkNYC

New York City began exploring a free public Wi-Fi network back in 2012 to replace its aging public phone system and called for proposals two years later.

The winning bid came from CityBridge, a partnership of four companies including advertising firm Titan and designer Control Group.

Their proposal involved building a network of 10,000 kiosks (dubbed “links”) throughout the city that would be outfitted with high-speed Wi-Fi routers to provide Internet, free phone calls within the U.S., a cellphone charging station and a touchscreen map.

Recently, Google created a company called Sidewalk Labs, which snapped up Titan and Control Group and merged them.

Google, a company whose business model is all about collecting our data, thus became a key player in the entity that will provide NYC with free Wi-Fi.

How free is ‘free’?

Like many free Internet products and services, LinkNYC will be supported by advertising revenue.

LinkNYC is expected to generate about US$500 million in advertising revenue for New York City over the next 12 years from the display of digital ads on the kiosks’ sides and via people’s cellphones. The model works by providing free access in exchange for users’ personal and behavioral data, which are then used to target ads to them.

Yet LinkNYC’s privacy policy doesn’t actually use the word “advertising,” preferring instead to vaguely state it “may use your information, including Personally Identifiable Information,” to provide information about goods or services of interest.

It also isn’t clear to what extent the network could be used to track people’s locations.

Titan previously made headlines in 2014 after installing Bluetooth beacons in over 100 pay phone booths, for the purpose of testing the technology, without the city’s permission. Titan was subsequently ordered to remove them.

But the beacons are back as part of the LinkNYC contract, though users have to choose to opt in to the location services. The beacons allow targeted ads to be delivered to cellphones as people pass the hotspots, but their use isn’t spelled out in the privacy policy.

After close examination, it becomes evident that far from being free, use of LinkNYC comes with the price of mandatory collection of potentially sensitive personal, locational and behavioral data.

This is all standard practice in the terms of use and privacy policies for free Internet-based products and services. Can we really consider this to be a fully informed agreement and transparent exchange when the actual uses of the data, and the privacy and security implications of these uses, are not clear?

A privacy paradox

People’s widespread use of products and services with these data-collection and privacy-infringing practices is curiously at odds with what they say they are willing to tolerate in studies.

Surveys consistently show that people value their privacy. In a recent Pew survey, 93 percent of adults said that being in control of who can get information about them is important, and 90 percent said the same about what information is collected.

In experiments, people quote high prices for which they would be willing to sell their data. For instance, in a 2005 study in the U.K., respondents said they would sell one month’s access to their location (via a cellphone) for an average of £27.40 (about US$50 based on the exchange rate at the time or $60 in inflation-adjusted terms). The figure went up even higher when subjects were told third party companies would be interested in using the data.

In practice, though, people trade away their personal and behavioral data for very little. This privacy paradox is on full display in the free Wi-Fi example.

Breaking down the economics of LinkNYC’s business model, recall that an estimated $500 million in total ad revenue will be collected over 12 years. With 10,000 Links, and approximately eight million people in New York City, the monthly revenue per person per link is $0.000043.

Fractions of a cent. This is the indirect valuation that users accept from advertisers in exchange for their personal, locational and behavioral data when using the LinkNYC service. Compare that with the value U.K. respondents put on their locational data alone.
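
For readers who want to check the arithmetic, here is that back-of-the-envelope calculation spelled out in a few lines of Python, using the revenue, kiosk and population figures cited above:

```python
# Rough valuation of LinkNYC users' data, using the figures cited above.
total_revenue = 500_000_000    # dollars in ad revenue over the contract
months = 12 * 12               # 12 years
links = 10_000                 # planned kiosks
population = 8_000_000         # approximate NYC population

monthly_revenue = total_revenue / months           # ~$3.5 million per month
per_link = monthly_revenue / links                 # ~$347 per link per month
per_person_per_link = per_link / population
print(f"{per_person_per_link:.6f}")                # 0.000043
```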

How to explain this paradoxical situation? In valuing their data in experiments, people are usually given the full context of what information will be collected and how it will be used.

In real life, though, a lot of people don’t read the terms of use or privacy policy. Those that do are not always able to understand what these documents are saying owing partly to the legalese used and partly to the intentionally vague wording of some passages.

People thus end up exchanging their data and their privacy for far less than they might in a transparent and open market transaction.

The business model of some of the most successful tech companies is built on this opaque exchange between data owner and service provider. The same opaque exchange occurs on social networks like Facebook, online search and online journalism.

Part of a broader trend

It’s ironic that, in this supposed age of abundant information, people are so poorly informed about how their valuable digital assets are being used before they unwittingly sign their rights away.

To grasp the consequences of this, think about how much personal data you hand over every time you use one of these “free” services. Consider how upset people have been in recent years due to large-scale data breaches: for instance, the more than 22 million who lost their background check records in the Office of Personnel Management hack.

Now imagine the size of a file of all your personal data in 2020 (including financial data, like purchasing history, or health data) after years of data tracking. How would you feel if it were sold to an unknown foreign corporation? How about if your insurance company got ahold of it and raised your rates? Or if an organized crime outfit stole all of it? This is the path that we are on.

Some have already made this realization, and a countervailing trend is already under way, one that gives technology users more control over their data and privacy. Mozilla recently updated its Firefox browser to allow users to block ads and trackers. Apple too has avoided an advertising business model, and the personal data harvesting that it necessitates, instead opting to make its money from hardware, app and digital music or video sales.

Developing a way for people to correctly value their data, privacy and information security would be a major additional step forward in developing financially viable, private and secure alternatives.

With it might come the possibility of an information age where people can maintain their privacy and retain ownership and control over their digital assets, should they choose to.

Benjamin Dean, Fellow for Internet Governance and Cyber-security, School of International and Public Affairs, Columbia University

This article was originally published on The Conversation. Read the original article.

Could a doodle replace your password?

What if you could unlock your smartphone this way?

Janne Lindqvist, Rutgers University

Nearly 80 percent of Americans own a smartphone, and a growing proportion of them use smartphones for internet access, not just when they’re on the go. This leads to people storing considerable amounts of personal and private data on their mobile devices.

Often, there is just one layer of security protecting all that data – emails and text messages, social media profiles, bank accounts and credit cards, even other passwords to online services. It’s the password that unlocks the smartphone’s screen. Usually this involves entering a number, or just laying a fingertip on a sensor.

Over the past couple of years, my research group, my colleagues and I have designed, created and tested a better way. We call it “user-generated free-form gestures,” which means smartphone owners can draw their own security pattern on the screen. It’s a very simple idea that is surprisingly secure.

An explanation of gesture-based passwords in action.

Improving today’s weak security

It might seem that biometric authentication, like a fingerprint, would be stronger. But it’s not, because most systems that allow fingerprint access also require a PIN or a password as an alternate backup method. A user – or thief – could skip the biometric method and instead just enter (or guess) a PIN or a password.

Text passwords can be hard to enter accurately on mobile devices, with small “shift” keys and other buttons to press to enter numbers or punctuation marks. As a result, people tend to use PIN codes instead, which are faster but much more easily guessed, because they are short sequences that humans choose in predictable ways: for example, using birth dates. Some devices allow users to choose a connect-the-dots pattern on a grid on the screen – but those can be even less secure than three-digit PINs.

Compared to other methods, our approach dramatically increases the potential length and complexity of a password. Users simply draw a pattern across an entire touchscreen, using any number of locations on the screen.

Measuring drawings

As users draw a shape or pattern on the screen, we track their fingers, recording where they move and how quickly (or slowly). We compare that track to one recorded when they set up the gesture-based login. This protection can be added just by software changes; it needs no specific hardware or other modifications to existing touchscreen devices. As touchscreens become more common on laptop computers, this method could be used to protect them too.
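
As a rough illustration of the idea – not the actual recognizer our system uses – a gesture can be treated as a sequence of touch points, resampled to a fixed length and compared point by point against the enrolled template. In this minimal Python sketch, the pixel threshold is an assumed tuning value, and finger speed (which the real system also records) is ignored:

```python
import math

# Toy gesture matcher: a gesture is a list of (x, y) touchscreen samples
# (at least two points). This sketch compares shape only.

def resample(points, n=64):
    """Resample a trace to n points evenly spaced along its path."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(points) - 2 and dists[j + 1] < target:
            j += 1
        span = dists[j + 1] - dists[j] or 1e-9   # guard against zero-length segments
        t = (target - dists[j]) / span
        x0, y0 = points[j]
        x1, y1 = points[j + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def gesture_distance(a, b):
    """Mean point-to-point distance between two resampled traces."""
    ra, rb = resample(a), resample(b)
    return sum(math.hypot(xa - xb, ya - yb)
               for (xa, ya), (xb, yb) in zip(ra, rb)) / len(ra)

def matches(template, attempt, threshold=20.0):
    """Accept the login if the attempt is close enough to the template."""
    return gesture_distance(template, attempt) < threshold
```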

Our system also allows people to use more than one finger – though some participants wrongly assumed that making simple gestures with multiple fingers would be more secure than the same gesture with just one finger. The key to improving security using one or more fingers is to make a design that is not easy to guess.

Easy to do and remember, hard to break

Some people who participated in our studies created gestures that could be articulated as symbols, such as digits, geometric shapes (like a cylinder) and musical notations. That made complicated doodles – including ones that require lifting fingers (multistroke) – easy for them to remember.

Simple, but still complex.
Wikimedia Commons

This observation inspired us to study and create new ways to try to guess gesture passwords. We built up a list of possible symbols and tried them. But even a relatively simple symbol, like an eighth note, can be drawn in so many different ways that calculating the possible variations is computationally intensive and time-consuming. This is unlike text passwords, for which variations are simple to try out.

Replacing more than one password

Our research has extended beyond just using a gesture to unlock a smartphone. We have explored the potential for people to use doodles instead of passwords on several websites. It appeared to be no more difficult to remember multiple gestures than it is to recall different passwords for each site.

In fact, it was faster: Logging in with a gesture took two to six seconds less time than doing so with a text password. It’s faster to generate a gesture than a password, too: People spent 42 percent less time generating gesture credentials than people we studied who had to make up new passwords. We also found that people could successfully enter gestures without spending as much attention on them as they had to with text passwords.

Gesture-based interactions are popular and prevalent on mobile platforms, and are increasingly making their way to touchscreen-equipped laptops and desktops. The owners of those types of devices could benefit from a quick, easy and more secure authentication method like ours.

Janne Lindqvist, Assistant Professor of Electrical and Computer Engineering, Rutgers University

This article was originally published on The Conversation. Read the original article.

Why we choose terrible passwords, and how to fix them

How secure are you?
Rawpixel.com via shutterstock.com

Megan Squire, Elon University

The first Thursday in May is World Password Day, but don’t buy a cake or send cards. Computer chip maker Intel created the event as an annual reminder that, for most of us, our password habits are nothing to celebrate. Instead, they – and computer professionals like me – hope we will use this day to say our final goodbyes to “qwerty” and “123456,” which are still the most popular passwords.

The problem with short, predictable passwords

The purpose of a password is to limit access to information. Having a very common or simple one like “abcdef” or “letmein,” or even normal words like “password” or “dragon,” is barely any security at all, like closing a door but not actually locking it.

Hackers’ password cracking tools take advantage of this lack of creativity. When hackers find – or buy – stolen credentials, they will likely find that the passwords have been stored not as the text of the passwords themselves but as unique fingerprints, called “hashes,” of the actual passwords. A hash function mathematically transforms each password into an encoded, fixed-size version of itself. Hashing the same original password will give the same result every time, but it’s computationally nearly impossible to reverse the process, to derive a plaintext password from a specific hash.
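
A couple of lines of Python make the one-way nature of hashing concrete. (SHA-256 stands in here for whatever hash a breached site used; well-run sites also add a per-user “salt” and deliberately slow hash functions, omitted for simplicity.)

```python
import hashlib

# The same input always produces the same fingerprint...
print(hashlib.sha256(b"letmein").hexdigest())
print(hashlib.sha256(b"letmein").hexdigest())   # identical to the line above

# ...while a slightly different input produces a completely unrelated one,
# and nothing about the output reveals the input.
print(hashlib.sha256(b"letmeout").hexdigest())
```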

Instead, the cracking software computes the hash values for large numbers of possible passwords and compares the results to the hashed passwords in the stolen file. If any match, the hacker’s in. The first place these programs start is with known hash values for popular passwords.

More savvy users who choose a less common password might still fall prey to what is called a “dictionary attack.” The cracking software tries each of the 171,000 words in the English dictionary. Then the program tries combined words (such as “qwertypassword”), doubled sequences (“qwertyqwerty”), and words followed by numbers (“qwerty123”).
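
A toy version of that attack fits in a few lines: hash each candidate and compare it with the stolen fingerprint. The wordlist and the “stolen” password below are made up for illustration; real crackers use enormous wordlists and heavily optimized software.

```python
import hashlib

def fingerprint(password):
    return hashlib.sha256(password.encode()).hexdigest()

stolen_hash = fingerprint("qwerty123")   # pretend this came from a breach

wordlist = ["password", "dragon", "letmein", "qwerty"]
candidates = []
for word in wordlist:
    candidates.append(word)                                  # plain word
    candidates.append(word + word)                           # doubled sequence
    candidates.extend(word + other for other in wordlist)    # combined words
    candidates.extend(word + str(n) for n in range(1000))    # word + numbers

for guess in candidates:
    if fingerprint(guess) == stolen_hash:
        print("cracked:", guess)   # prints "cracked: qwerty123"
        break
```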

Moving on to blind guessing

Only if the dictionary attack fails will the attacker reluctantly move to what is called a “brute-force attack,” guessing arbitrary sequences of numbers, letters and characters over and over until one matches.

Mathematics tells us that a longer password is less guessable than a shorter password. That’s true even if the shorter password is made from a larger set of possible characters.

For example, a six-character password made up of the 95 different symbols on a standard American keyboard yields 95^6, or 735 billion, possible combinations. That sounds like a lot, but a 10-character password made from only lowercase English characters yields 26^10, or 141 trillion, options. Of course, a 10-character password from the 95 symbols gives 95^10, or 59 quintillion, possibilities.
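
Those figures are straightforward to check:

```python
# Number of possible passwords = (symbols available) ** (password length)
print(95 ** 6)    # 735,091,890,625             -> ~735 billion
print(26 ** 10)   # 141,167,095,653,376         -> ~141 trillion
print(95 ** 10)   # 59,873,693,923,837,890,625  -> ~59 quintillion
```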

That’s why some websites require passwords of certain lengths and with certain numbers of digits and special characters – they’re designed to thwart the most common dictionary and brute-force attacks. Given enough time and computing power, though, any password is crackable.

And in any case, humans are terrible at memorizing long, unpredictable sequences. We sometimes use mnemonics to help, like the way “Every Good Boy Does Fine” reminds us of the notes indicated by the lines on sheet music. They can also help us remember a password like “freQ!9tY!juNC,” which at first appears very mixed up.

Splitting the password into three chunks, “freQ!,” “9tY!” and “juNC,” reveals what might be remembered as three short, pronounceable words: “freak,” “ninety” and “junk.” People are better at memorizing passwords that can be chunked, either because they find meaning in the chunks or because they can more easily add their own meaning through mnemonics.

Don’t reuse passwords

Suppose we take all this advice to heart and resolve to make all our passwords at least 15 characters long and full of random numbers and letters. We invent clever mnemonic devices, commit a few of our favorites to memory, and start using those same passwords over and over on every website and application.

At first, this might seem harmless enough. But password-thieving hackers are everywhere. Recently, big companies including Yahoo, Adobe and LinkedIn have all been breached. Each of these breaches revealed the usernames and passwords for hundreds of millions of accounts. Hackers know that people commonly reuse passwords, so a cracked password on one site could make the same person vulnerable on a different site.

No! Don’t do this!
designer491 via shutterstock.com

Beyond the password

Not only do we need long, unpredictable passwords, but we need different passwords for every site and program we use. The average internet user has 19 different passwords. It’s easy to see why people write them down on sticky notes or just click the “I forgot my password” link.

Software can help! The job of password management software is to take care of generating and remembering unique, hard-to-crack passwords for each website and application.

These programs sometimes have vulnerabilities of their own that attackers can exploit. Some websites block password managers from functioning. And of course, an attacker could still peek at the keyboard as we type in our passwords.

Multi-factor authentication was invented to solve these problems. This involves a code sent to a mobile phone, a fingerprint scan or a special USB hardware token. However, even though users know that multi-factor authentication is probably safer, they worry it might be more inconvenient or difficult. To make it easier, sites like Authy.com provide straightforward guides for enabling multi-factor authentication on popular websites.
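
For the curious, here is roughly what happens behind the scenes with one common second factor, the time-based one-time password (the kind an authenticator app displays). This sketch uses the third-party pyotp library:

```python
import pyotp  # third-party library: pip install pyotp

# At enrollment, the site generates a secret and shares it with the
# user's authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()            # the six-digit code the app shows right now
print("current code:", code)

# The site verifies the submitted code; it expires after about 30 seconds,
# so a stolen password alone is no longer enough to log in.
print("accepted:", totp.verify(code))
```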

So no more excuses. Let’s put on our party hats and start changing those passwords. World Password Day would be a great time to ditch “qwerty” for good, try out a password manager and turn on multi-factor authentication. Once you’re done, go ahead and have that cake, because you’ll deserve it.

Megan Squire, Professor of Computing Sciences, Elon University

This article was originally published on The Conversation. Read the original article.

It’s easier to defend against ransomware than you might think

Try to make this the only time you see a ransomware warning notice.
Christiaan Colen/flickr, CC BY-SA

Amin Kharraz, Northeastern University

Ransomware – malicious software that sneaks onto your computer, encrypts your data so you can’t access it and demands payment for unlocking the information – has become a significant cyberthreat. Several reports in the past few years document the diversity of ransomware attacks and their increasingly sophisticated methods. Recently, high-profile ransomware attacks on large enterprises such as hospitals and police departments have demonstrated that organizations of all types are at risk of significant real-world consequences if they don’t protect themselves properly against this type of cyberthreat.

The development of strong encryption technology has made it easier to encode data so that it cannot be read without the decryption key. The emergence of anonymity services such as the Tor network, along with bitcoin and other cryptocurrencies, has eased worries about whether people who receive payments might be identified through financial tracking. These trends are likely driving factors in the recent surge of ransomware development and attacks.

Like other classes of malicious software – often called “malware” – ransomware uses a fairly wide range of techniques to sneak into people’s computers. These include attachments or links in unsolicited email messages, or phony advertisements on websites. However, when it comes to the core part of the attack – encrypting victims’ files to make them inaccessible – most ransomware attacks use very similar methods. This commonality provides an opportunity for ransomware attacks to be detected before they are carried out.

My recent research discovered that ransomware programs’ attempts to request access and encrypt files on hard drives are very different from those of benign operating system processes. We also found that diverse types of ransomware, even ones that vary widely in terms of sophistication, interact with computer file systems similarly.

Moving fast and hitting hard

One reason for this similarity amid apparent diversity is the commonality of attackers’ mindsets: the most successful attack is one that encrypts a user’s data very quickly, makes the computer files inaccessible and requests money from the victim. The more slowly that sequence happens, the more likely the ransomware is to be detected and shut down by antivirus software.

What attackers are trying to do is not simple. First, they need to reliably encrypt the victim’s files. Early ransomware used very basic techniques to do this. For example, it used to be that a ransomware application would use a single decryption key no matter where it spread to. This meant that if someone were able to detect the attack and discover the key, they could share the key with other victims, who could then decode the encrypted data without paying.

Today’s ransomware attackers use advanced cryptographic systems and Internet connectivity to minimize the chance that a victim could find a way to get her files back on her own. Once the program makes its way into a new computer, it sends a message back over the internet to a computer the attacker is using to control the ransomware. A unique key pair for encryption and decryption is generated for that compromised computer. The decryption key is saved on the attacker’s computer, while the encryption key is sent to the malicious program in the compromised computer to perform the file encryption. The decryption key, which can unlock the files only on that computer, is what the victim receives upon paying the ransom.
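
The underlying mechanism is ordinary public-key cryptography. The conceptual sketch below – written with Python’s third-party cryptography library, and in no way actual malware code – shows why sharing what a victim’s machine holds no longer helps: the machine only ever sees the public half of the key pair. (In real “hybrid” schemes, what gets encrypted this way is the symmetric key that actually scrambles the files.)

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# A fresh key pair per compromised machine; the private half never
# leaves the attacker's server.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # only this half reaches the victim

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# In a hybrid scheme, this payload would be the per-file symmetric key.
ciphertext = public_key.encrypt(b"per-file symmetric key", oaep)

# Knowing the public key (even if every victim shared it) does not help;
# only the matching private key can recover the plaintext.
print(private_key.decrypt(ciphertext, oaep))
```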

The second part of a “successful” ransomware attack – from the perspective of the attacker – depends on finding reliable ways to get paid without being caught. Ransomware operators continuously strive to make payments harder to trace and easier to convert into their preferred currency. Attackers attempt to avoid being identified and arrested by communicating via the anonymous Tor network and exchanging money in difficult-to-trace cryptocurrencies like bitcoins.

Defending against a ransomware attack

Unfortunately, the use of advanced cryptosystems in modern ransomware families has made recovering victims’ files almost impossible without paying the ransom. However, it is easier to defend against ransomware than to fight off other types of cyberthreats, such as hackers gaining unauthorized entry to company data and stealing secret information.

Back up your data!
Pixabay

The easiest way to protect against ransomware attacks is to have, and follow, a reliable data-backup policy. Companies that do not want to end up as paying victims of ransomware should have their workers conduct real-time incremental backups (which back up file changes every few minutes). In addition, in case their own backup servers get infected with ransomware, these companies should have offsite cloud backup storage that is protected from ransomware. Companies that are attacked can then restore their data from these backups instead of paying the ransom.

Users should also download and install regular updates to software, including third-party plug-ins for web browsers and other systems. These often plug security vulnerabilities that, if left open, provide attackers an easy way in.

Generally, being infected with ransomware has two important messages for an organization. First, it’s a sign of vulnerability in a company’s entire computer system, which also means that the organization is vulnerable to other types of attacks. It is always better to learn of an intrusion earlier, rather than being compromised for several months.

Second, being infected with ransomware also suggests users are engaging in risky online behavior, such as clicking on unidentified email attachments from unknown senders, and following links on disreputable websites. Teaching people about safe internet browsing can dramatically reduce an organization’s vulnerability to a ransomware attack.

Amin Kharraz, Research Assistant, Systems Security Lab, Northeastern University

This article was originally published on The Conversation. Read the original article.

Play video games, advance science

A fun game, plus science advancement.

Scott Horowitz, University of Michigan and James Bardwell, University of Michigan

Computer gaming is now a regular part of life for many people. Beyond just being entertaining, though, it can be a very useful tool in education and in science.

If people spent just a fraction of their play time solving real-life scientific puzzles – by playing science-based video games – what new knowledge might we uncover? Many games aim to take academic advantage of the countless hours people spend gaming each day. In the field of biochemistry alone, there are several, including the popular game Foldit.

In Foldit, players attempt to figure out the detailed three-dimensional structure of proteins by manipulating a simulated protein displayed on their computer screen. They must observe various constraints based in the real world, such as the order of amino acids and how close to each other their biochemical properties permit them to get. In academic research, these tasks are typically performed by trained experts.

Thousands of people – with and without scientific training – play Foldit regularly. Sure, they’re having fun, but are they really contributing to science in ways experts don’t already? To answer this question – to find out how much we can learn by having nonexperts play scientific games – we recently set up a Foldit competition between gamers, undergraduate students and professional scientists. The amateur gamers did better than the professional scientists managed using their usual software.

This suggests that scientific games like Foldit can truly be valuable resources for biochemistry research while simultaneously providing enjoyable recreation. More widely, it shows the promise that crowdsourcing to gamers (or “gamesourcing”) could offer to many fields of study.

Looking closely at proteins

Proteins perform basically all the microscopic tasks necessary to keep organisms alive and healthy, from building cell walls to fighting disease. By seeing the proteins up close, biochemists can much better understand life itself.

Understanding how proteins fold is also critical because if they don’t fold properly, the proteins can’t do their tasks in the cell. Worse, some proteins, when improperly folded, can cause debilitating diseases, such as Alzheimer’s, Parkinson’s and ALS.

Taking pictures of proteins

First, by analyzing the DNA that tells cells how to make a given protein, we know the sequence of amino acids that makes up the protein. But that doesn’t tell us what shape the protein takes.

An electron density map of a protein, generated by X-ray crystallography.
Scott Horowitz, CC BY-ND

To get a picture of the three-dimensional structure, we use a technique called X-ray crystallography. This allows us to see objects that are only nanometers in size. By taking X-rays of the protein from multiple angles, we can construct a digital 3D model (called an electron density map) with the rough outlines of the protein’s actual shape. Then it’s up to the scientist to determine how the sequence of amino acids folds together in a way that both fits the electron density map and also is biochemically sound.

Although this process isn’t easy, many crystallographers think that it is the most fun part of crystallography because it is like solving a three-dimensional jigsaw puzzle.

An electron density map of a protein with the protein threaded through the map, revealing how the protein folds.
Scott Horowitz, CC BY-ND

An addictive puzzle

The competition, and its result, were the culmination of several years of improving biochemistry education by showing how it can be like gaming. We teach an undergraduate class that includes a section on how biochemists can determine what proteins look like.

When we gave an electron density map to our students and had them move the amino acids around with a mouse and keyboard and fold the protein into the map, students loved it – some so much they found themselves ignoring their other homework in favor of our puzzle. As the students worked on the assignment, we found the questions they raised became increasingly sophisticated, delving deeply into the underlying biochemistry of the protein.

In the end, 10 percent of the class actually managed to improve on the structure that had been previously solved by professional crystallographers. They tweaked the pieces so they fit better than the professionals had been able to. Most likely, since 60 students were working on it separately, some of them managed to fix a number of small errors that had been missed by the original crystallographers. This outcome reminded us of the game Foldit.

From the classroom to the game lab

Like crystallographers, Foldit players manipulate amino acids to figure out a protein’s structure based on their own puzzle-solving intuition. But rather than one trained expert working alone, thousands of nonscientist players worldwide get involved. They’re devoted gamers looking for challenging puzzles and willing to use their gaming skills for a good cause.

Playing Foldit.

Foldit’s developers had just finished a new version of the game providing puzzles based on three-dimensional crystallographic electron density maps. They were ready to see how players would do.

We gave students a new crystallography assignment, and told them they would be competing against Foldit players to produce the best structure. We also got two trained crystallographers to compete using the software they’d be familiar with, as well as several automated software packages that crystallographers often use. The race was on!

Amateurs outdo professionals

The students attacked the assignment vigorously, as did the Foldit players. As before, the students learned how proteins are put together through shaping these protein structures by hand. Moreover, both groups appeared to take pride in their role in pioneering new science.

At the end of the competition, we analyzed all the structures from all the participants. We calculated statistics about the competing structures that told us how correct each participant was in their solution to the puzzle. The results ranged from very poor structures that didn’t fit the map at all to exemplary solutions.

The best structure came from a group of nine Foldit players who worked collaboratively to come up with a spectacular protein structure. Their structure turned out to be even better than the structures from the two trained professionals.

Students and Foldit players alike were eager to master difficult concepts because it was fun. The results they came up with gave us useful scientific results that can really improve biochemistry.

There are many other games along similar lines, including the “Discovery” mini-game in the massively multiplayer online role-playing game “Eve Online,” which helps build the Human Protein Atlas, and Eterna, which tries to decipher how RNA molecules fold themselves up. If educators incorporate scientific games into their curricula potentially as early as middle school, they are likely to find students becoming highly motivated to learn at a very deep level while having a good time. We encourage game designers and scientists to work together more to create games with purpose, and gamers of the world should play more to bolster the scientific process.

Scott Horowitz, Research Fellow, University of Michigan and James Bardwell, Professor, Molecular, Cellular and Developmental Biology, University of Michigan

This article was originally published on The Conversation. Read the original article.

The future is in interactive storytelling

Seeking to make stories that surround us.

Noah Wardrip-Fruin, University of California, Santa Cruz and Michael Mateas, University of California, Santa Cruz

Marvel’s new blockbuster, “Guardians of the Galaxy, Vol. 2,” carries audiences through a narrative carefully curated by the film’s creators. That’s also what Telltale’s Guardians-themed game did when it was released in April. Early reviews suggest the game is just another form of guided progress through a predetermined story, not a player-driven experience in the world of the movie and its characters. Some game critics lament this, and suggest game designers let traditional media tell the linear stories.

What is out there for the player who wants to explore on his or her own in rich universes like the ones created by Marvel? Not much. Not yet. But the future of media is coming.

As longtime experimenters and scholars in interactive narrative who are now building a new academic discipline we call “computational media,” we are working to create new forms of interactive storytelling, strongly shaped by the choices of the audience. People want to explore, through play, themes like those in Marvel’s stories, about creating family, valuing diversity and living responsibly.

These experiences will need compelling computer-generated characters, not the husks that now speak to us from smartphones and home assistants. And they’ll need virtual environments that are more than just simulated space – environments that feel alive, responsive and emotionally meaningful.

This next generation of media – which will be a foundation for art, learning, self-expression and even health maintenance – requires a deeply interdisciplinary approach. Instead of engineer-built tools wielded by artists, we must merge art and science, storytelling and software, to create groundbreaking, technology-enabled experiences deeply connected to human culture.

In search of interactivity

One of the first interactive character experiences involved “Eliza,” a language and software system developed in the 1960s. It seemed like a very complex entity that could engage compellingly with a user. But the more people interacted with it, the more they noticed formulaic responses that signaled it was a relatively simple computer program.

In contrast, programs like “Tale-Spin” have elaborate technical processes behind the scenes that audiences never see. The audience sees only the effects, like selfish characters telling lies. The result is the opposite of the “Eliza” effect: Rather than simple processes that the audience initially assumes are complex, we get complex processes that the audience experiences as simple.

An exemplary alternative to both types of hidden processes is “SimCity,” the seminal game by Will Wright. It contains a complex but ultimately transparent model of how cities work, including housing locations influencing transportation needs and industrial activity creating pollution that bothers nearby residents. It is designed to lead users, through play, to an understanding of this underlying model as they build their own cities and watch how they grow. This type of exploration and response is the best way to support long-term player engagement.
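
To make “transparent model” concrete, here is a toy feedback loop in the spirit of – but far simpler than, and only our guess at – SimCity’s actual rules: factories provide jobs but create pollution, and the population grows or shrinks in response. A player poking at such a model quickly discovers its rules, which is exactly the engagement the game is designed around.

```python
# A toy city model (illustrative only, not SimCity's real simulation).
city = {"residents": 1000, "factories": 10}

def step(city):
    jobs = city["factories"] * 80
    pollution = city["factories"] * 5
    # Residents are happier with plentiful jobs and less pollution.
    happiness = min(jobs / max(city["residents"], 1), 1.0) - pollution / 100
    # The population grows when the city is pleasant, shrinks when it isn't.
    city["residents"] = max(0, int(city["residents"] * (1 + 0.1 * happiness)))
    return city

for year in range(5):
    print(step(city))   # watch the population settle toward an equilibrium
```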

Connecting technology with meaning

Creating biased histories can be uncomfortable.
‘Terminal Time,’ by Steffi Domike, Michael Mateas and Paul Vanouse., CC BY-ND

No one discipline has all the answers for building meaningfully interactive experiences about topics more subtle than city planning – such as what we believe, whom we love and how we live in the world. Engineering can’t teach us how to come up with a meaningful story, nor understand if it connects with audiences. But the arts don’t have methods for developing the new technologies needed to create a rich experience.

Today’s most prominent examples of interactive storytelling tend to lean toward one approach or the other. Despite being visually compelling, with powerful soundtracks, neither indie titles like “Firewatch” nor blockbusters such as “Mass Effect: Andromeda” have many significant ways for a player to actually influence their worlds.

Both independently and together, we’ve been developing deeper interactive storytelling experiences for nearly two decades. “Terminal Time,” an interactive documentary generator first shown in 1999, asks the audience several questions about their views of historical issues. Based on the responses (measured as the volume of clapping for each choice), it custom-creates a story of the last millennium that matches, and increasingly exaggerates, those particular ideas.

For example, to an audience who supported anti-religious rationalism, it might begin presenting distant events that match their biases – such as the Catholic Church’s 17th-century execution of philosopher Giordano Bruno. But later it might show more recent, less comfortable events – like the Chinese communist (rationalist) invasion and occupation of (religious) Tibet in the 1950s.

The results are thought-provoking, because the team creating it – including one of us (Michael), documentarian Steffi Domike and media artist Paul Vanouse – combined deep technical knowledge with clear artistic goals and an understanding of the ways events are selected, connected and portrayed in ideologically biased documentaries.

Digging into narrative

“Façade,” released in 2005 by Michael and fellow artist-technologist Andrew Stern, represented a further extension: the first fully realized interactive drama. A person playing the experience visits the apartment of a couple whose marriage is on the verge of collapse. A player can say whatever she wants to the characters, move around the apartment freely, and even hug and kiss either or both of the hosts. It provides an opportunity to improvise along with the characters, and take the conversation in many possible directions, ranging from angry breakups to attempts at resolution.

“Façade” also lets players interact creatively with the experience as a whole, choosing, for example, to play by asking questions a therapist might use – or by saying only lines Darth Vader says in the “Star Wars” movies. Many people have played as different characters and shared videos of the results of their collaboration with the interactive experience. Some of these videos have been viewed millions of times.

As with “Terminal Time,” “Façade” had to combine technical research – about topics like coordinating between virtual characters and understanding natural language used by the player – with a specific artistic vision and knowledge about narrative. In order to allow for a wide range of audience influence, while still retaining a meaningful story shape, the software is built to work in terms of concepts from theater and screenwriting, such as dramatic “beats” and tension rising toward a climax. This allows the drama to progress even as different players learn different information, drive the conversation in different directions and draw closer to one or the other member of the couple.

Engaging with a couple on the rocks.
‘Façade,’ by Michael Mateas and Andrew Stern., CC BY-ND

Bringing art and engineering together

A decade ago, our work uniting storytelling, artificial intelligence, game design, human-computer interaction, media studies and many other arts, humanities and sciences gave rise to the Expressive Intelligence Studio, a technical and cultural research lab at the Baskin School of Engineering at UC Santa Cruz, where we both work. In 2014 we created the country’s first academic department of computational media.

Today, we work with colleagues across campus to offer undergrad degrees in games and playable media with arts and engineering emphases, as well as graduate education for developing games and interactive experiences.

With four of our graduate students (Josh McCoy, Mike Treanor, Ben Samuel and Aaron A. Reed), we recently took inspiration from sociology and theater to devise a system that simulates relationships and social interactions. The first result was the game “Prom Week,” in which the audience is able to shape the social interactions of a group of teenagers in the week leading up to a high school prom.

We found that its players feel much more responsibility for what happens than in pre-scripted games. It can be disquieting. As game reviewer Craig Pearson put it – after destroying the romantic relationship of his perceived rival, then attempting to peel away his remaining friendships, only to realize this wasn’t necessary – “Next time I’ll be looking at more upbeat solutions, because the alternative, frankly, is hating myself.”

That social interaction system is also a base for other experiences. Some address serious topics like cross-cultural bullying or teaching conflict deescalation to soldiers. Others are more entertaining, like a murder mystery game – and a still-secret collaboration with Microsoft Studios. We’re now getting ready for an open-source release of the underlying technology, which we’re calling the Ensemble Engine.

Making friends in ‘Prom Week.’
Prom Week, CC BY-ND

Pushing the boundaries

Our students are also expanding the types of experiences interactive narratives can offer. Two of them, Aaron A. Reed and Jacob Garbe, created “The Ice-Bound Concordance,” which lets players explore a vast number of possible combinations of events and themes to complete a mysterious novel.

Three other students, James Ryan, Ben Samuel and Adam Summerville, created “Bad News,” which generates a new small midwestern town for each player – including developing the town, the businesses, the families in residence, their interactions and even the inherited physical traits of townspeople – and then kills one character. The player must notify the dead character’s next of kin. In this experience, the player communicates with a human actor trained in improvisation, exploring possibilities beyond the capabilities of today’s software dialogue systems.

Kate Compton, another student, created “Tracery,” a system that makes storytelling frameworks easy to create. Authors can fill in blanks in structure, detail, plot development and character traits. Professionals have used the system: Award-winning developer Dietrich Squinkifer made the uncomfortable one-button conversation game “Interruption Junction.” “Tracery” has let newcomers get involved, too, as with the “Cheap Bots Done Quick!” platform. It is the system behind around 4,000 bots active on Twitter, including ones relating the adventures of a lost self-driving Tesla, parodying the headlines of “Boomersplaining thinkpieces,” offering self-care reminders and generating pastel landscapes.
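
The core idea behind a system like “Tracery” is grammar expansion: an author writes rules, and the system recursively fills in the blanks. This toy Python version – our own mini-reimplementation for illustration, not Tracery’s actual API – captures the flavor:

```python
import random
import re

def expand(grammar, text):
    """Replace each #symbol# with a random alternative, recursively."""
    match = re.search(r"#(\w+)#", text)
    while match:
        replacement = random.choice(grammar[match.group(1)])
        text = text[:match.start()] + replacement + text[match.end():]
        match = re.search(r"#(\w+)#", text)
    return text

grammar = {
    "origin": ["The #adj# #thing# #verb# across the #place#."],
    "adj": ["lost", "pastel", "self-driving"],
    "thing": ["Tesla", "landscape bot", "thinkpiece"],
    "verb": ["wanders", "drifts", "rambles"],
    "place": ["prairie", "timeline", "parking lot"],
}

print(expand(grammar, "#origin#"))  # e.g. "The lost Tesla drifts across the timeline."
```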

Many more projects are just beginning. For instance, we’re starting to develop an artificial intelligence system that can understand things usually only humans can – like the meanings underlying a game’s rules and what a game feels like when played. This will allow us to more easily explore what the audience will think and feel in new interactive experiences.

‘Bad News’ is played in physical space. In the installation at Big Pictures in Los Angeles (for the Slamdance DIG exhibition), the player and actor were on one side of a wall (right) and the ‘wizard,’ who combs the lives of the generated characters for interesting story potential, was on the other (left).
James Ryan, CC BY-ND

There’s much more to do, as we and others work to invent the next generation of computational media. But as in a Marvel movie, we’d bet on those who are facing the challenges, rather than the skeptics who assume the challenges can’t be overcome.

Noah Wardrip-Fruin, Professor of Computational Media, University of California, Santa Cruz and Michael Mateas, Professor of Computational Media, University of California, Santa Cruz

This article was originally published on The Conversation. Read the original article.

Computers to humans: Shall we play a game?

Artificial intelligence can bring many benefits to human gamers.

Arend Hintze, Michigan State University

Way back in the 1980s, a schoolteacher challenged me to write a computer program that played tic-tac-toe. I failed miserably. But just a couple of weeks ago, I explained to one of my computer science graduate students how to solve tic-tac-toe using the so-called “Minimax algorithm,” and it took us about an hour to write a program to do it. Certainly my coding skills have improved over the years, but computer science has come a long way too.
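
For the curious, the whole exercise really does fit on one screen. A bare-bones Python version of minimax for tic-tac-toe might look like this:

```python
# Minimax on tic-tac-toe: score a board assuming both sides play perfectly.
# The board is a tuple of 9 cells, each "X", "O" or None; "X" maximizes.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """+1 if X can force a win, -1 if O can, 0 for a forced draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0   # board full: draw
    nxt = "O" if player == "X" else "X"
    values = [minimax(board[:i] + (player,) + board[i + 1:], nxt)
              for i in moves]
    return max(values) if player == "X" else min(values)

print(minimax((None,) * 9, "X"))   # 0: perfect play always ends in a draw
```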

What seemed impossible just a couple of decades ago is startlingly easy today. In 1997, people were stunned when a chess-playing IBM computer named Deep Blue beat international grandmaster Garry Kasparov in a six-game match. In 2015, Google revealed that its DeepMind system had mastered several 1980s-era video games, including teaching itself a crucial winning strategy in “Breakout.” In 2016, Google’s AlphaGo system beat a top-ranked Go player in a five-game tournament.

An artificial intelligence system learns to play ‘Breakout.’

The quest for technological systems that can beat humans at games continues. In late May, AlphaGo will take on Ke Jie, the best player in the world, among other opponents at the Future of Go Summit in Wuzhen, China. With increasing computing power, and improved engineering, computers can beat humans even at games we thought relied on human intuition, wit, deception or bluffing – like poker. I recently saw a video in which volleyball players practice their serves and spikes against robot-controlled rubber arms trying to block the shots. One lesson is clear: When machines play to win, human effort is futile.

Robots play volleyball.

This can be great: We want a perfect AI to drive our cars, and a tireless system looking for signs of cancer in X-rays. But when it comes to play, we don’t want to lose. Fortunately, AI can make games more fun, and perhaps even endlessly enjoyable.

Designing games that never get old

Today’s game designers – whose releases can earn more than a blockbuster movie – see a problem: Creating an unbeatable artificial intelligence system is pointless. Nobody wants to play a game they have no chance of winning.

But people do want to play games that are immersive, complex and surprising. Even today’s best games become stale after a person plays for a while. The ideal game will engage players by adapting and reacting in ways that keep the game interesting, maybe forever.

So when we’re designing artificial intelligence systems, we should look not to the triumphant Deep Blues and AlphaGos of the world, but rather to the overwhelming success of massively multiplayer online games like “World of Warcraft.” These sorts of games are graphically well-designed, but their key attraction is interaction.

It seems most people are drawn not to extremely difficult logical puzzles like chess and Go, but rather to meaningful connections and communities. The real challenge with these massively multiplayer online games is not whether they can be beaten by intelligence (human or artificial), but how to keep the experience of playing them fresh and new every time.

Change by design

At present, game environments allow people lots of possible interactions with other players. The roles in a dungeon raiding party are well-defined: Fighters take the damage, healers help them recover from their injuries and the fragile wizards cast spells from afar. Or think of “Portal 2,” a game focused entirely on collaborating robots puzzling their way through a maze of cognitive tests.

Exploring these worlds together allows you to form common memories with your friends. But any changes to these environments or the underlying plots have to be made by human designers and developers.

In the real world, changes happen naturally, without supervision, design or manual intervention. Players learn, and living things adapt. Some organisms even co-evolve, reacting to each other’s developments. (A similar phenomenon happens in a weapons technology arms race.)

Computer games today lack that level of sophistication. And for that reason, I don’t believe developing an artificial intelligence that can play modern games will meaningfully advance AI research.

We crave evolution

A game worth playing is a game that is unpredictable because it adapts, a game that is ever novel because novelty is created by playing the game. Future games need to evolve. Their characters shouldn’t just react; they need to explore and learn to exploit weaknesses or cooperate and collaborate. Darwinian evolution and learning, we understand, are the drivers of all novelty on Earth. It could be what drives change in virtual environments as well.

Evolution figured out how to create natural intelligence. So instead of trying to code our way to AI, shouldn’t we just evolve it? Several labs – including my own and that of my colleague Christoph Adami – are working on what is called “neuro-evolution.”

In a computer, we simulate complex environments, like a road network or a biological ecosystem. We create virtual creatures and challenge them to evolve over hundreds of thousands of simulated generations. Evolution then produces the best drivers, or the organisms best adapted to the conditions – those are the ones that survive.
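As a toy illustration of that loop, the sketch below (Python with NumPy) evolves the weights of a tiny neural network by selection and mutation rather than gradient descent. The task – learning XOR – and every parameter here are stand-ins chosen for brevity; real experiments use far richer simulated worlds:

```python
# Toy neuro-evolution: evolve network weights by selection and mutation.
# The XOR task and all parameters are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(genome, inputs):
    # Decode a flat genome of 17 weights into a 2-4-1 network.
    w1, b1 = genome[:8].reshape(2, 4), genome[8:12]
    w2, b2 = genome[12:16].reshape(4, 1), genome[16]
    hidden = np.tanh(inputs @ w1 + b1)
    return (1 / (1 + np.exp(-(hidden @ w2 + b2)))).ravel()

def fitness(genome):
    return -np.mean((forward(genome, X) - y) ** 2)  # higher is better

population = rng.normal(size=(100, 17))
for generation in range(300):
    scores = np.array([fitness(g) for g in population])
    elite = population[np.argsort(scores)[-20:]]            # survivors
    parents = elite[rng.integers(0, 20, size=80)]
    children = parents + rng.normal(0, 0.1, size=(80, 17))  # mutation
    population = np.vstack([elite, children])

best = max(population, key=fitness)
print(np.round(forward(best, X), 2))  # typically approaches [0, 1, 1, 0]
```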

A neuro-evolution system learns to drive a car.

Today’s AlphaGo is beginning this process, learning by continuously playing games against itself, and by analyzing records of games played by top Go champions. But it does not learn while playing the way we do, through unsupervised experimentation. And it doesn’t adapt to a particular opponent: For these computer players, the best move is the best move, regardless of an opponent’s style.

Programs that learn from experience are the next step in AI. They would make computer games much more interesting, and enable robots to not only function better in the real world, but to adapt to it on the fly.

Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University

This article was originally published on The Conversation. Read the original article.

No, we’re not all being pickled in deadly radiation from smartphones and wifi

As technology improves our lives, we seem destined to witness a parallel rise in fear-mongering.
Yahoo/Flickr, CC BY

Simon Chapman, University of Sydney

Tomorrow at TedX Sydney’s Opera House event, high-profile neurosurgeon Charlie Teo will talk about brain cancer. Last Saturday Teo was on Channel 9’s Sunrise program talking about the often malignant cancer that in 2012 killed 1,241 Australians. During the program he said:

Unfortunately the jury is still out on whether mobile phones can lead to brain cancer, but studies suggest it’s so.

Teo’s name appears on a submission recently sent to the United Nations. If you Google “Charlie Teo and mobile phones” you will see that his public statements on this issue go back years.

The submission he signed commences:

We are scientists engaged in the study of biological and health effects of non-ionizing electromagnetic fields (EMF). Based upon peer-reviewed, published research, we have serious concerns regarding the ubiquitous and increasing exposure to EMF generated by electric and wireless devices. These include – but are not limited to – radiofrequency radiation (RFR) emitting devices, such as cellular and cordless phones and their base stations, Wi-Fi, broadcast antennas, smart meters, and baby monitors as well as electric devices and infra-structures [sic] used in the delivery of electricity that generate extremely-low frequency electromagnetic field (ELF EMF).

That list just about covers off every facet of modern life: the internet, phones, radio, television and any smart technology. It’s a list the Amish and reclusive communities of “wifi refugees” know all about.

Other than those living in the remotest of remote locations, there are very few in Australia today who are not bathed in electromagnetic fields and radiofrequency radiation, 24 hours a day. My mobile phone shows me that my house is exposed to the wifi systems of six neighbours’ houses as well as my own. Public wifi hotspots are rapidly increasing.

The first mobile phone call in Australia was made over 28 years ago, on February 23, 1987. In December 2013, there were some 30.2 million mobile phones being used in a population of 22.7 million people. Predictions are that there will be 5.9 billion smartphone users globally within four years. More than 100 nations now have more mobile phones than people.

So while Australia has become saturated in electromagnetic field radiation over the past quarter century, what has happened to cancer rates?

Brain cancer is Teo’s surgical speciality and the cancer site that attracts nearly all of the mobile phone panic attention. In 1987 the age-adjusted incidence rate of brain cancer in Australia per 100,000 people was 6.6. In 2011, the most recent year for which national data is available, the rate was 7.3.

The graph below shows brain cancer incidence has all but flat-lined across the 29 years for which data are available. All cancer is notifiable in Australia.

New cases of brain cancer in Australia, 1982 to 2011 (age-adjusted)
Australian Institute of Health and Welfare, CC BY

Brain cancers are a relatively uncommon group of cancers: their 7.3 per 100,000 incidence compares with female breast (116), colorectal (61.5) and lung cancer (42.5). There is no epidemic of brain cancer, let alone of mobile phone-caused brain cancer. The Cancer Council explicitly rejects the link. This US National Cancer Institute fact sheet summarises current research, reaching rather different conclusions from Charlie Teo’s.

Another Australian signatory of the submission, Priyanka Bandara, describes herself as an “Independent Environmental Health Educator/Researcher; Advisor, Environmental Health Trust and Doctors for Safer Schools”.

Last year, a former student of mine asked to meet with me to discuss wifi on our university campus. She arrived at my office with Bandara, who looked worried as she ran an EMF meter over my room. I was being pickled in it, apparently.

Her pitch to me was one I have encountered many times before. The key ingredients are that there are now lots of highly credentialed scientists who are deeply concerned about a particular problem, here wifi. These scientists have published [pick a very large number] of “peer reviewed” research papers about the problem.

The “peer review” often turns out to mean having like-minded people from their networks – typically with words like “former”, “leading” or “senior” next to their names – write gushing appraisals of often unpublished reports.

The neo-Galilean narrative then moves to how this information is all being suppressed by the web of influence of vested industrial interests. These interests are arranging for scientists to be sacked, suppressing publication of alarming reports, and preventing many scientists from speaking out in fear.

Case reports of individuals claiming to be harmed and suffering Old Testament-length lists of symptoms as a result of exposure are then publicised. Here’s one for smart meters, strikingly similar to the 240+ symptom list for “wind turbine syndrome”. Almost any symptom is attributed to exposure.

Historical parallels with the conduct of the tobacco and asbestos industries and Big Pharma are then made. The argument runs “we understand the history of suppression and denial with these industries and this new issue is now experiencing the same”.

There is no room for considering that the claims about the new issue might just be claptrap and that the industries affected by the circulation of false and dangerous nonsense might understandably want to stamp on it.

Bandara’s modest blog offers schools the opportunity to hear her message:

Wireless technologies are sweeping across schools exposing young children to microwave radiation. This is not in line with the Precautionary Principle. A typical classroom with 25 WiFi enabled tablets/laptops (each operating at 0.2 W) generates in five hours about the same microwave radiation output as a typical microwave oven (at 800 W) in two minutes. Would you like to microwave your child for two minutes (without causing heating as it is done very slowly using lower power) daily?
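For what it’s worth, the raw energy arithmetic in that comparison roughly checks out – which is exactly what makes the framing seductive. A quick sanity check (Python, for illustration) shows the numbers, though total emitted energy says nothing about the dose a child’s body would actually absorb at classroom distances:

```python
# Checking the quoted arithmetic. Note: total emitted energy is not absorbed
# dose -- exposure falls off steeply with distance from the source.
tablets_joules = 25 * 0.2 * 5 * 3600  # 25 devices x 0.2 W x 5 hours
oven_joules = 800 * 2 * 60            # 800 W x 2 minutes
print(tablets_joules, oven_joules)    # 90000.0 vs 96000 -- roughly equal
```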

David French/Flickr, CC BY

There can be serious consequences of alarming people about infinitesimally small, effectively non-existent risks. This rural Victorian news story features a woman so convinced that transmission towers are harming her that she covers her head in a “protective” cloth cape.

This woman was so alarmed about the electricity smart meter at her house that she had her electricity cut off, causing her teenage daughter to study by candlelight. Yet she is shown being interviewed by a wireless microphone.

Mobile phones have played important roles in rapid responses to life-saving emergencies. Reducing access to wireless technology would have incalculable effects on billions of people’s lives, many profoundly negative.

Exposing people to fearful messages about wifi has been experimentally demonstrated to increase symptom reporting when subjects were later exposed to sham wifi. Such fears can precipitate contact with charlatans, readily found on the internet, who will come to your house, wave meters around and frighten the gullible into purchasing magic room paint, protective clothing, bed materials and other snake oil at exorbitant prices.

As exponential improvements in technology improve the lifestyles and well-being of the world’s population, we seem destined to witness an inexorable parallel rise in fear-mongering about these benefits.

Simon Chapman, Professor of Public Health, University of Sydney

This article was originally published on The Conversation. Read the original article.

How secure is your smartphone’s lock screen?

There’s a big difference between a 4-digit PIN and a 6-digit PIN.

Clinton Carpene, Edith Cowan University

One consequence of the Apple vs FBI drama has been to shine a spotlight on the security of smartphone lockscreens.

The fact that the FBI managed to hack the iPhone of the San Bernardino shooter without Apple’s help raises questions about whether PIN codes and swipe patterns are as secure as we think.

In fact, they’re probably not as secure as we’d hope. No device as complex as a smartphone or tablet is ever completely secure, but device manufacturers and developers are still doing their best to keep your data safe.

The first line of defence is your lockscreen, typically protected by a PIN code or password.

When it comes to smartphones, the humble four-digit PIN code is the most popular choice. Unfortunately, even ignoring terrible PIN combinations such as “1234”, “1111” or “7777”, four-digit PIN codes are still incredibly weak, since there are only 10,000 unique possible PINs.

If you lose your device, and there are no other protections, it would only take a couple of days for someone to find the correct PIN through brute force (i.e. attempting every combination of four-digit PIN).

A random six-digit PIN will afford you better security, given that there are a million possible combinations. However, with a bit of time and luck, it’s still possible for someone to bypass this using something like a USB Rubber Ducky, a tool designed to try every PIN combination without triggering other security mechanisms.
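To put rough numbers on that, here is a back-of-envelope calculation; the per-attempt timings are assumptions for illustration, not measured figures:

```python
# Worst-case brute-force time: every combination tried once.
def worst_case_hours(keyspace, seconds_per_attempt):
    return keyspace * seconds_per_attempt / 3600

for digits in (4, 6):
    keyspace = 10 ** digits
    by_hand = worst_case_hours(keyspace, 15)   # assumed ~15 s per manual try
    automated = worst_case_hours(keyspace, 1)  # assumed ~1 s per injected try
    print(f"{digits}-digit PIN: ~{by_hand:,.0f} h by hand, "
          f"~{automated:,.0f} h automated")
# 4-digit: ~42 h by hand (the 'couple of days' above), ~3 h automated.
# 6-digit: ~4,167 h by hand, ~278 h automated.
```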

Checks and balances

Fortunately, there are other safeguards in place. On iPhones and iPads, for instance, there is a forced delay of 80 milliseconds between PIN or password attempts.

And after 10 incorrect attempts, the device will either time-out for increasing periods of time, lock out completely, or potentially delete all data permanently, depending on your settings.
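Those escalating timeouts do the heavy lifting: the 80-millisecond floor alone would barely slow an automated tool. A quick sketch, using an assumed lockout schedule (the real steps vary by device, OS version and settings):

```python
# The fixed 80 ms delay alone: sweeping all 10,000 four-digit PINs.
print(10_000 * 0.08 / 60)  # ~13 minutes -- not much of a deterrent

# With an assumed escalating schedule (illustrative, not any vendor's exact
# one), the wait before each of ten successive attempts might look like this:
delays_minutes = [0, 0, 0, 0, 1, 5, 15, 15, 60, 60]
print(sum(delays_minutes))  # 156 minutes for just 10 guesses -- and a
                            # wipe-after-10 setting ends the attack entirely
```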

A swipe pattern can be easier to remember than a PIN.
Mike Dent/Flickr, CC BY-NC-ND

Similarly, Android devices enforce time delays after a number of passcode or password entries. However, stock Android devices will not delete their contents after any number of incorrect entries.

Swipe patterns are also a good security mechanism, as there are more possible combinations than a four-digit PIN. Additionally, you can’t set your swipe pattern to be the same as your banking PIN or password, so if one is compromised, then the others remain secure.
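That claim is easy to verify by exhaustive search. The short Python sketch below counts every valid pattern on the standard 3×3 grid (lengths four to nine, no jumping over an unvisited dot):

```python
# Count valid Android unlock patterns via depth-first search.
# Grid numbered 1-9:   1 2 3
#                      4 5 6
#                      7 8 9
# A jump between two dots is only legal if the dot between them (if any)
# has already been visited.
skip = {}
for a, b, mid in [(1,3,2),(4,6,5),(7,9,8),(1,7,4),(2,8,5),(3,9,6),(1,9,5),(3,7,5)]:
    skip[(a, b)] = skip[(b, a)] = mid

def dfs(current, visited):
    count = 1 if len(visited) >= 4 else 0  # patterns must use 4+ dots
    for nxt in range(1, 10):
        if nxt in visited:
            continue
        mid = skip.get((current, nxt))
        if mid is not None and mid not in visited:
            continue  # can't jump over an unvisited dot
        count += dfs(nxt, visited | {nxt})
    return count

total = sum(dfs(start, {start}) for start in range(1, 10))
print(total)  # 389112 -- nearly 39 times the 10,000 four-digit PINs
```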

However, all of these security controls can potentially be thwarted. Simply by observing the fingerprint smudges on an unclean screen, it is possible to discern a swipe pattern or passcode. When it comes to touchscreen devices, cleanliness is next to secure-ness.

Bypasses

Speaking of fingers, biometrics have increased in popularity recently. Biometric security controls simply mean that traits of the human body can be used to identify someone and therefore unlock something.

Some Android phones now carry built-in fingerprint sensors.
Kārlis Dambrāns/Flickr, CC BY

In the case of smartphones, there are competing systems that offer various levels of security. Android has facial, voice and fingerprint unlocking, while iOS has fingerprint unlocking only.

Generally, biometrics on their own are not inherently secure. When used as the only protection mechanism, they’re often very unreliable, either allowing too many unauthorised users to access a device (false positives) or creating a frustrating user experience by locking out legitimate users (false negatives).

Some methods of bypassing these biometric protections have been widely publicised, such as using a gummi bear or PVA glue to bypass Apple’s TouchID, or using a picture to fool facial recognition on Android.

Watch as a picture of a face can unlock an Android phone.

To combat this, Apple disables TouchID after five incorrect fingerprint attempts, requiring a passcode or password entry to re-enable the sensor. Likewise, current versions of Android enforce increasing time-outs after a number of incorrect entries.

These methods help strike a balance between security and usability, which is crucial for making sure smartphones don’t end up hurled at a wall.

Although these lockscreen protections are in place, your device may still contain bugs in its software that can allow attackers to bypass them. A quick search for “smartphone lockscreen bypasses” on your favourite search engine will yield more results than you’d probably care to read.

Lockscreen bypasses are particularly problematic for older devices that are no longer receiving security updates, but new devices are not immune. For example, the latest major iOS release (iOS 9.0) contained a flaw that allowed users to access the device without entering a valid passcode via the Clock app, which is accessible on the lockscreen. Similar bugs have been discovered for Android devices as well.

All of these efforts could be thrown out the window if you install an app that includes malware.

So lockscreens, PIN codes, passwords and swipe patterns should only be considered your first line of defence, rather than a foolproof means of securing your device.

Clinton Carpene, Postdoctoral Researcher in Network Security, Edith Cowan University

This article was originally published on The Conversation. Read the original article.