Category Archives: Education

Electrically stimulating your brain can boost memory – but here’s one reason it doesn’t always work

Is an electrical pulse to the brain your favorite memory enhancer?
U.S. Air Force photo by J.M. Eddins Jr., CC BY-NC

Shelly Fan, University of California, San Francisco

The first time I heard that shooting electrical currents across your brain can boost learning, I thought it was a joke.

But evidence is mounting. According to a handful of studies, transcranial direct current stimulation (tDCS), the poster child of brain stimulation, is a bona fide cognitive booster: By directly tinkering with the brain’s electrical field, some research has found that tDCS enhances creativity, bolsters spatial and math learning and even language acquisition – sometimes weeks after the initial zap.

For those eager to give their own brains a boost, this is good news. Various communities have sprung up to share tips and tricks on how to test the technique on themselves, often using self-rigged stimulators powered by 9-volt batteries.

Scientists and brain enthusiasts aren’t the only people interested. The military has also been eager to support projects involving brain stimulation with the hope that the technology could one day be used to help soldiers suffering from combat-induced memory loss.

But here’s the catch: The end results are inconsistent at best. While some people swear by the positive effects anecdotally, others report nothing but a nasty scalp burn from the electrodes.

In a meta-analysis covering over 20 studies, a team from Australia found no significant effects of tDCS on memory. Similar disparities pop up for other brain stimulation techniques. It’s not that brain stimulation isn’t doing anything – it just doesn’t seem to be doing something consistently across a diverse population. So what gives?

It looks like timing is everything.

When the zap comes is crucial

We all have good days when our brains feel sharp and bad days when the “brain fog” never lifts. This led scientists to wonder: Because electrical stimulation directly regulates the activity of the brain’s neural networks, what if it gives them a boost when they’re faltering, but conversely disrupts their activity when they’re already performing at peak?

In a new study published in “Current Biology,” researchers tested the idea using the most direct type of brain stimulation – electrodes implanted into the brain. Compared to tDCS, which delivers currents through electrodes on the scalp, implanted ones allow much higher precision in controlling which brain region to target and when.

Blue dots indicate overall electrode placement in the new study from the University of Pennsylvania; the yellow dot (top-right corner) is the electrode used to stimulate the subject’s brain to increase memory performance.
Joel Stein and Youssef Ezzyat, CC BY-ND

The team collaborated with a precious resource: epilepsy patients who already have electrodes implanted into their hippocampi and surrounding areas. These brain regions are crucial for memories about sequences, spaces and life events. The electrodes serve a double purpose: They both record brain activity and deliver electrical pulses.

The researchers monitored the overall brain activity of 102 epilepsy patients as they memorized 25 lists of a dozen unrelated words and tried to recall them later on.

For each word, the researchers used the corresponding brain activity pattern to train a type of software called a classifier. In this way, for each patient the classifier eventually learned what types of brain activity preceded successfully remembering a word, and what predicted failed recall. Using this method, the scientists objectively defined a “foggy” brain state as the pattern of brain activity that preceded an inability to remember a word, and the pattern of activity common before successful recall as characteristic of being on the ball.
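
To make the classifier idea concrete, here is a minimal sketch of how such a per-patient model might be trained, assuming the recorded activity has already been reduced to a numeric feature vector for each studied word (for example, spectral power per electrode). The stand-in random data, the feature layout and the choice of logistic regression are illustrative assumptions, not details taken from the study.

```python
# Illustrative sketch only: a per-patient classifier that predicts whether a
# word will later be recalled, from brain activity recorded during encoding.
# The features and model choice are assumptions for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: one row of spectral-power features per studied word,
# and a label indicating whether that word was later recalled (1) or not (0).
n_words, n_features = 300, 40          # e.g. 8 electrodes x 5 frequency bands
features = rng.normal(size=(n_words, n_features))
recalled = rng.integers(0, 2, size=n_words)

# Train the classifier and check whether it predicts recall better than chance.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, recalled, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.2f}")  # ~0.5 here, since the data are random

# Fit on all words; the predicted probability of recall for a new activity
# pattern serves as the "brain state" estimate (low = foggy, high = on the ball).
clf.fit(features, recalled)
state_estimate = clf.predict_proba(features[:1])[0, 1]
print(f"Estimated probability of successful recall: {state_estimate:.2f}")
```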

Next, in the quarter of patients for whom the classifier performed above chance, the researchers zapped their brains as they memorized and recalled a new list of words. As a control, they also measured memory performance without any stimulation, and the patients were asked whether they could tell when the electrodes were on (they couldn’t).

Here’s what they found: When the zap came before a low, foggy brain state, the patients scored roughly 12 to 13 percent higher than usual on the recall task. But if they were already in a high-performance state, quite the opposite occurred. Then the electrical pulse impaired performance by 15 to 20 percent and disrupted the brain’s encoding activity – that is, the process of actually making memories.

Moving beyond random stimulation

This study is notably different from those before. Rather than indiscriminately zapping the brain, the researchers showed that the brain state at the time of memory encoding determines whether brain stimulation helps or hinders. It’s an invaluable insight for future studies that try to tease apart the effects of brain stimulation on memory.

The next big challenge is to incorporate these findings into brain stimulation trials, preferably using noninvasive technologies. The finding that brain activity can predict recall is promising and builds upon previous research linking brain states to successful learning. These studies may be leveraged to help design “smart” brain stimulators.

For example: Picture a closed-loop system, where a cap embedded with electrodes measures brain activity using EEG or other methods. Then the data go to a control box to determine the brain state. When the controller detects a low functioning state, it signals the tDCS or other stimulator to give a well-timed zap, thus boosting learning without explicit input from the user.
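
Sketched in code, the control logic of such a closed-loop system might look something like this. Every function that touches hardware or the trained classifier is a hypothetical stub invented for illustration; no real EEG or stimulator API is being described.

```python
# Conceptual sketch of a closed-loop stimulator. All functions that touch
# hardware or the trained classifier are hypothetical stubs for illustration.
import random
import time

LOW_STATE_THRESHOLD = 0.3   # assumed cutoff for a "foggy" brain state

def read_eeg_features():
    """Stub: in a real system this would return features from the EEG cap."""
    return [random.random() for _ in range(40)]

def estimate_recall_probability(features):
    """Stub: in a real system this would apply the per-user classifier."""
    return sum(features) / len(features)

def trigger_stimulation():
    """Stub: in a real system this would command the tDCS (or other) device."""
    print("Stimulation pulse delivered")

def control_loop(duration_s=5, interval_s=1.0):
    """Poll the estimated brain state and stimulate only when it looks low."""
    end = time.time() + duration_s
    while time.time() < end:
        p_recall = estimate_recall_probability(read_eeg_features())
        if p_recall < LOW_STATE_THRESHOLD:
            trigger_stimulation()
        time.sleep(interval_s)

if __name__ == "__main__":
    control_loop()
```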

Of course, many questions remain before such a stimulator becomes reality. What are the optimal number and strength of electrical pulses that best bolster learning? Where should we place the electrodes for best effect? And what about unintended consequences? A previous study found that boosting learning may actually impair a person’s ability to automate that skill – quickly and effortlessly perform it – later on. What other hidden costs of brain stimulation are we missing?

I’m not sure if I’ll ever be comfortable with the idea of zapping my brain. But this new study and the many others sure to follow give me more confidence: If I do take the leap into electrical memory enhancement, it’ll be based on data, not on anecdotes.

Shelly Fan, Postdoctoral Scholar in Neuroscience, University of California, San Francisco

This article was originally published on The Conversation. Read the original article.

The heavy price we pay for ‘free’ Wi-Fi

Benjamin Dean, Columbia University

For many years, New York City has been developing a “free” public Wi-Fi project. Called LinkNYC, it is an ambitious effort to bring wireless Internet access to all of the city’s residents.

This is the latest in a longstanding trend in which companies offer ostensibly free Internet-related products and services, such as social network access on Facebook, search and email from Google or the free Wi-Fi now commonly provided in cafes, shopping malls and airports.

These free services, however, come at a cost. Use is free on the condition that the companies providing the service can collect, store and analyze users’ valuable personal, locational and behavioral data.

This practice carries with it poorly appreciated privacy risks and an opaque exchange of valuable data for very little.

Is free public Wi-Fi, or any of these other services, really worth it?

Origins of LinkNYC

New York City began exploring a free public Wi-Fi network back in 2012 to replace its aging public phone system and called for proposals two years later.

The winning bid came from CityBridge, a partnership of four companies including advertising firm Titan and designer Control Group.

Their proposal involved building a network of 10,000 kiosks (dubbed “links”) throughout the city that would be outfitted with high-speed Wi-Fi routers to provide Internet, free phone calls within the U.S., a cellphone charging station and a touchscreen map.

Recently, Google created a company called Sidewalk Labs, which snapped up Titan and Control Group and merged them.

Google, a company whose business model is all about collecting our data, thus became a key player in the entity that will provide NYC with free Wi-Fi.

How free is ‘free’?

Like many free Internet products and services, LinkNYC will be supported by advertising revenue.

LinkNYC is expected to generate about US$500 million in advertising revenue for New York City over the next 12 years from the display of digital ads on the kiosks’ sides and via people’s cellphones. The model works by providing free access in exchange for users’ personal and behavioral data, which are then used to target ads to them.

Yet LinkNYC’s privacy policy doesn’t actually use the word “advertising,” preferring instead to vaguely state it “may use your information, including Personally Identifiable Information,” to provide information about goods or services of interest.

It also isn’t clear to what extent the network could be used to track people’s locations.

Titan previously made headlines in 2014 after installing Bluetooth beacons in over 100 pay phone booths, for the purpose of testing the technology, without the city’s permission. Titan was subsequently ordered to remove them.

But the beacons are back as part of the LinkNYC contract, though users have to choose to opt in to the location services. The beacons allow targeted ads to be delivered to cellphones as people pass the hotspots, but their use isn’t spelled out in the privacy policy.

After close examination, it becomes evident that far from being free, use of LinkNYC comes with the price of mandatory collection of potentially sensitive personal, locational and behavioral data.

This is all standard practice in the terms of use and privacy policies for free Internet-based products and services. Can we really consider this to be a fully informed agreement and transparent exchange when the actual uses of the data, and the privacy and security implications of these uses, are not clear?

A privacy paradox

People’s widespread use of products and services with these data-collection and privacy-infringing practices is curiously at odds with what they say they are willing to tolerate in studies.

Surveys consistently show that people value their privacy. In a recent Pew survey, 93 percent of adults said that being in control of who can get information about them is important, and 90 percent said the same about what information is collected.

In experiments, people quote high prices for which they would be willing to sell their data. For instance, in a 2005 study in the U.K., respondents said they would sell one month’s access to their location (via a cellphone) for an average of £27.40 (about US$50 based on the exchange rate at the time or $60 in inflation-adjusted terms). The figure went up even higher when subjects were told third party companies would be interested in using the data.

In practice, though, people trade away their personal and behavioral data for very little. This privacy paradox is on full display in the free Wi-Fi example.

Breaking down the economics of LinkNYC’s business model, recall that an estimated $500 million in total ad revenue will be collected over 12 years. With 10,000 Links, and approximately eight million people in New York City, the monthly revenue per person per link is $0.000043.

Fractions of a cent. This is the indirect valuation that users accept from advertisers in exchange for their personal, locational and behavioral data when using the LinkNYC service. Compare that with the value U.K. respondents put on their locational data alone.
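
For readers who want to check that figure, the back-of-envelope arithmetic looks like this (the inputs are the article’s own estimates, rounded):

```python
# Back-of-envelope check of the per-person, per-link monthly ad revenue.
total_ad_revenue = 500_000_000   # US$, over the whole contract
years = 12
links = 10_000
population = 8_000_000           # approximate New York City population

monthly_revenue = total_ad_revenue / (years * 12)
per_person_per_link = monthly_revenue / links / population
print(f"Monthly ad revenue per person per link: ${per_person_per_link:.6f}")
# -> roughly $0.000043, i.e. a few thousandths of a cent
```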

How to explain this paradoxical situation? In valuing their data in experiments, people are usually given the full context of what information will be collected and how it will be used.

In real life, though, a lot of people don’t read the terms of use or privacy policy. Those who do are not always able to understand what these documents are saying, owing partly to the legalese used and partly to the intentionally vague wording of some passages.

People thus end up exchanging their data and their privacy for far less than they would in a transparent and open market transaction.

The business model of some of the most successful tech companies is built on this opaque exchange between data owner and service provider. The same opaque exchange occurs on social networks like Facebook, online search and online journalism.

Part of a broader trend

It’s ironic that, in this supposed age of abundant information, people are so poorly informed about how their valuable digital assets are being used before they unwittingly sign their rights away.

To grasp the consequences of this, think about how much personal data you hand over every time you use one of these “free” services. Consider how upset people have been in recent years due to large-scale data breaches: for instance, the more than 22 million who lost their background check records in the Office of Personnel Management hack.

Now imagine the size of a file of all your personal data in 2020 (including financial data, like purchasing history, or health data) after years of data tracking. How would you feel if it were sold to an unknown foreign corporation? How about if your insurance company got ahold of it and raised your rates? Or if an organized crime outfit stole all of it? This is the path that we are on.

Some have already made this realization, and a countervailing trend is already under way, one that gives technology users more control over their data and privacy. Mozilla recently updated its Firefox browser to allow users to block ads and trackers. Apple too has avoided an advertising business model, and the personal data harvesting that it necessitates, instead opting to make its money from hardware, app and digital music or video sales.

Developing a way for people to correctly value their data, privacy and information security would be a major additional step forward in developing financially viable, private and secure alternatives.

With it might come the possibility of an information age where people can maintain their privacy and retain ownership and control over their digital assets, should they choose to.

Benjamin Dean, Fellow for Internet Governance and Cyber-security, School of International and Public Affairs, Columbia University

This article was originally published on The Conversation. Read the original article.

It’s easier to defend against ransomware than you might think

Try to make this the only time you see a ransomware warning notice.
Christiaan Colen/flickr, CC BY-SA

Amin Kharraz, Northeastern University

Ransomware – malicious software that sneaks onto your computer, encrypts your data so you can’t access it and demands payment for unlocking the information – is an emerging cyberthreat. Several reports in the past few years document the diversity of ransomware attacks and their increasingly sophisticated methods. Recently, high-profile ransomware attacks on large enterprises such as hospitals and police departments have demonstrated that organizations of all types are at risk of significant real-world consequences if they don’t protect themselves properly against this type of cyberthreat.

The development of strong encryption technology has made it easier to encode data so that it cannot be read without the decryption key. The emergence of anonymity services such as the Tor network and bitcoin and other cryptocurrencies has eased worries about whether people who receive payments might be identified through financial tracking. These trends are likely driving factors in the recent surge of ransomware development and attacks.

Like other classes of malicious software – often called “malware” – ransomware uses a fairly wide range of techniques to sneak into people’s computers. These include attachments or links in unsolicited email messages, or phony advertisements on websites. However, when it comes to the core part of the attack – encrypting victims’ files to make them inaccessible – most ransomware attacks use very similar methods. This commonality provides an opportunity for ransomware attacks to be detected before they are carried out.

My recent research found that the way ransomware programs request access to and encrypt files on hard drives is very different from the behavior of benign operating system processes. We also found that diverse types of ransomware, even ones that vary widely in sophistication, interact with computer file systems similarly.

Moving fast and hitting hard

One reason for this similarity amid apparent diversity is the commonality of attackers’ mindsets: the most successful attack is one that encrypts a user’s data very quickly, makes the computer files inaccessible and requests money from the victim. The more slowly that sequence happens, the more likely the ransomware is to be detected and shut down by antivirus software.

What attackers are trying to do is not simple. First, they need to reliably encrypt the victim’s files. Early ransomware used very basic techniques to do this. For example, it used to be that a ransomware application would use a single decryption key no matter where it spread to. This meant that if someone were able to detect the attack and discover the key, they could share the key with other victims, who could then decode the encrypted data without paying.

Today’s ransomware attackers use advanced cryptographic systems and Internet connectivity to minimize the chance that victims could find a way to get their files back on their own. Once the program makes its way into a new computer, it sends a message back over the internet to a computer the attacker is using to control the ransomware. A unique key pair for encryption and decryption is generated for that compromised computer. The decryption key is saved on the attacker’s computer, while the encryption key is sent to the malicious program in the compromised computer to perform the file encryption. The decryption key, which is required to decrypt the files only on that computer, is what the victim receives upon paying the ransom.

The second part of a “successful” ransomware attack – from the perspective of the attacker – depends on finding reliable ways to get paid without being caught. Ransomware operators continuously strive to make payments harder to trace and easier to convert into their preferred currency. Attackers attempt to avoid being identified and arrested by communicating via the anonymous Tor network and exchanging money in difficult-to-trace cryptocurrencies like bitcoins.

Defending against a ransomware attack

Unfortunately, the use of advanced cryptosystems in modern ransomware families has made recovering victims’ files almost impossible without paying the ransom. However, it is easier to defend against ransomware than to fight off other types of cyberthreats, such as hackers gaining unauthorized entry to company data and stealing secret information.

Back up your data!
Pixabay

The easiest way to protect against ransomware attacks is to have, and follow, a reliable data-backup policy. Companies that do not want to end up as paying victims of ransomware should have their workers conduct real-time incremental backups (which back up file changes every few minutes). In addition, in case their own backup servers get infected with ransomware, these companies should have offsite cloud backup storage that is protected from ransomware. Companies that are attacked can then restore their data from these backups instead of paying the ransom.
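
As a deliberately simplified illustration of the incremental idea, the sketch below copies only files that have changed since the previous backup run. The directory paths are hypothetical, and a real deployment would rely on dedicated backup software with versioning and protected offsite copies rather than a script like this.

```python
# Minimal sketch of an incremental backup: copy only files whose modification
# time is newer than the existing backup copy. Paths are hypothetical examples.
import shutil
from pathlib import Path

SOURCE = Path("/data/documents")        # hypothetical directory to protect
BACKUP = Path("/mnt/backup/documents")  # hypothetical backup destination

def incremental_backup(source: Path, backup: Path) -> int:
    copied = 0
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        dest_file = backup / src_file.relative_to(source)
        # Copy if the file is new or has changed since the last backup run.
        if not dest_file.exists() or src_file.stat().st_mtime > dest_file.stat().st_mtime:
            dest_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dest_file)
            copied += 1
    return copied

if __name__ == "__main__":
    print(f"Backed up {incremental_backup(SOURCE, BACKUP)} changed files")
```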

Users should also download and install regular updates to software, including third-party plug-ins for web browsers and other systems. These often plug security vulnerabilities that, if left open, provide attackers an easy way in.

Generally, being infected with ransomware has two important messages for an organization. First, it’s a sign of vulnerability in a company’s entire computer system, which also means that the organization is vulnerable to other types of attacks. It is always better to learn of an intrusion earlier, rather than being compromised for several months.

Second, being infected with ransomware also suggests users are engaging in risky online behavior, such as clicking on unidentified email attachments from unknown senders, and following links on disreputable websites. Teaching people about safe internet browsing can dramatically reduce an organization’s vulnerability to a ransomware attack.

Amin Kharraz, Research Assistant, Systems Security Lab, Northeastern University

This article was originally published on The Conversation. Read the original article.

No, we’re not all being pickled in deadly radiation from smartphones and wifi

As technology improves our lives, we seem destined to witness a parallel rise in fear-mongering.
Yahoo/Flickr, CC BY

Simon Chapman, University of Sydney

Tomorrow at TEDx Sydney’s Opera House event, high-profile neurosurgeon Charlie Teo will talk about brain cancer. Last Saturday Teo was on Channel 9’s Sunrise program talking about the often malignant cancer that in 2012 killed 1,241 Australians. During the program he said:

Unfortunately the jury is still out on whether mobile phones can lead to brain cancer, but studies suggest it’s so.

Teo’s name appears on a submission recently sent to the United Nations. If you Google “Charlie Teo and mobile phones” you will see that his public statements on this issue go back years.

The submission he signed commences:

We are scientists engaged in the study of biological and health effects of non-ionizing electromagnetic fields (EMF). Based upon peer-reviewed, published research, we have serious concerns regarding the ubiquitous and increasing exposure to EMF generated by electric and wireless devices. These include – but are not limited to – radiofrequency radiation (RFR) emitting devices, such as cellular and cordless phones and their base stations, Wi-Fi, broadcast antennas, smart meters, and baby monitors as well as electric devices and infra-structures [sic] used in the delivery of electricity that generate extremely-low frequency electromagnetic field (ELF EMF).

That list just about covers off every facet of modern life: the internet, phones, radio, television and any smart technology. It’s a list the Amish and reclusive communities of “wifi refugees” know all about.

Other than those living in the remotest of remote locations, there are very few in Australia today who are not bathed in electromagnetic fields and radiofrequency radiation, 24 hours a day. My mobile phone shows me that my house is exposed to the wifi systems of six neighbours’ houses as well as my own. Public wifi hotspots are rapidly increasing.

The first mobile phone call in Australia was made over 28 years ago, on February 23, 1987. In December 2013, there were some 30.2 million mobile phones in use among a population of 22.7 million people. Predictions are that there will be 5.9 billion smartphone users globally within four years. More than 100 nations now have more mobile phones than people.

So while Australia has become saturated in electromagnetic field radiation over the past quarter century, what has happened to cancer rates?

Brain cancer is Teo’s surgical speciality and the cancer site that attracts nearly all of the mobile phone panic attention. In 1987 the age-adjusted incidence rate of brain cancer in Australia per 100,000 people was 6.6. In 2011, the most recent year for which national data is available, the rate was 7.3.

The graph below shows brain cancer incidence has all but flat-lined across the 29 years for which data are available. All cancer is notifiable in Australia.

New cases of brain cancer in Australia, 1982 to 2011 (age-adjusted)
Australian Institute of Health and Welfare, CC BY

Brain cancers are a relatively uncommon group of cancers: their 7.3 per 100,000 incidence compares with female breast (116), colorectal (61.5) and lung cancer (42.5). There is no epidemic of brain cancer, let alone mobile phone caused brain cancer. The Cancer Council explicitly rejects the link. This US National Cancer Institute fact sheet summarises current research, highlighting rather different conclusions than Charlie Teo.

Another Australian signatory of the submission, Priyanka Bandara, describes herself as an “Independent Environmental Health Educator/Researcher; Advisor, Environmental Health Trust and Doctors for Safer Schools”.

Last year, a former student of mine asked to meet with me to discuss wifi on our university campus. She arrived at my office with Bandara, who looked worried as she ran an EMF meter over my room. I was being pickled in it, apparently.

Her pitch to me was one I have encountered many times before. The key ingredients are that there are now lots of highly credentialed scientists who are deeply concerned about a particular problem, here wifi. These scientists have published [pick a very large number] of “peer reviewed” research papers about the problem.

Peer review often turns out to be having like-minded people from their networks, typically with words like “former”, “leading”, “senior” next to their names, write gushing appraisals of often unpublished reports.

The neo-Galilean narrative then moves to how this information is all being suppressed by the web of influence of vested industrial interests. These interests are arranging for scientists to be sacked, suppressing publication of alarming reports, and preventing many scientists from speaking out in fear.

Case reports of individuals claiming to be harmed and suffering Old Testament-length lists of symptoms as a result of exposure are then publicised. Here’s one for smart meters, strikingly similar to the 240+ symptom list for “wind turbine syndrome”. Almost any symptom is attributed to exposure.

Historical parallels with the conduct of the tobacco and asbestos industries and Big Pharma are then made. The argument runs “we understand the history of suppression and denial with these industries and this new issue is now experiencing the same”.

There is no room for considering that the claims about the new issue might just be claptrap and that the industries affected by the circulation of false and dangerous nonsense might understandably want to stamp on it.

Bandara’s modest blog offers schools the opportunity to hear her message:

Wireless technologies are sweeping across schools exposing young children to microwave radiation. This is not in line with the Precautionary Principle. A typical classroom with 25 WiFi enabled tablets/laptops (each operating at 0.2 W) generates in five hours about the same microwave radiation output as a typical microwave oven (at 800 W) in two minutes. Would you like to microwave your child for two minutes (without causing heating as it is done very slowly using lower power) daily?

David French/Flickr, CC BY

There can be serious consequences of alarming people about infinitesimally small, effectively non-existent risks. This rural Victorian news story features a woman so convinced that transmission towers are harming her that she covers her head in a “protective” cloth cape.

This woman was so alarmed about the electricity smart meter at her house that she had her electricity cut off, causing her teenage daughter to study by candlelight. Yet she is shown being interviewed by a wireless microphone.

Mobile phones have played important roles in rapid response to life-saving emergencies. Reducing access to wireless technology would have incalculable effects in billions of people’s lives, many profoundly negative.

Exposing people to fearful messages about wifi has been experimentally demonstrated to increase symptom reportage when subjects were later exposed to sham wifi. Such fears can precipitate contact with charlatans readily found on the internet who will come to your house, wave meters around and frighten the gullible into purchasing magic room paint, protective clothing, bed materials and other snake-oil at exorbitant prices.

As exponential improvements in technology improve the lifestyles and well-being of the world’s population, we seem destined to witness an inexorable parallel rise in fear-mongering about these benefits.

Simon Chapman, Professor of Public Health, University of Sydney

This article was originally published on The Conversation. Read the original article.

How secure is your smartphone’s lock screen?

There’s a big difference between a 4-digit PIN and a 6-digit PIN.

Clinton Carpene, Edith Cowan University

One consequence of the Apple vs FBI drama has been to shine a spotlight on the security of smartphone lockscreens.

The fact that the FBI managed to hack the iPhone of the San Bernardino shooter without Apple’s help raises questions about whether PIN codes and swipe patterns are as secure as we think.

In fact, they’re probably not as secure as we’d hope. No device as complex as a smartphone or tablet is ever completely secure, but device manufacturers and developers are still doing their best to keep your data safe.

The first line of defence is your lockscreen, typically protected by a PIN code or password.

When it comes to smartphones, the humble four-digit PIN code is the most popular choice. Unfortunately, even ignoring terrible PIN combinations such as “1234”, “1111” or “7777”, four-digit PIN codes are still incredibly weak, since there are only 10,000 unique possible PINs.

If you lose your device, and there are no other protections, it would only take a couple of days for someone to find the correct PIN through brute force (i.e. attempting every combination of four-digit PIN).

A random six-digit PIN will afford you better security, given that there are a million possible combinations. However, with a weak PIN and a bit of time and luck, it’s still possible for someone to bypass this using something like Rubber Ducky, a tool designed to try every PIN combination without triggering other security mechanisms.

Checks and balances

Fortunately, there are other safeguards in place. On iPhones and iPads, for instance, there is a forced delay of 80 milliseconds between PIN or password attempts.

And after 10 incorrect attempts, the device will either time-out for increasing periods of time, lock out completely, or potentially delete all data permanently, depending on your settings.
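
To put rough numbers on this, the quick calculation below estimates the worst-case time to try every PIN when only the 80-millisecond delay applies, ignoring the escalating time-outs and wipe-after-10-attempts protections that make such an attack impractical in practice.

```python
# Rough worst-case time to try every PIN, assuming only the 80 ms delay
# between attempts (lockouts and data-wipe settings are ignored here).
DELAY_S = 0.080

for digits in (4, 6):
    combinations = 10 ** digits
    seconds = combinations * DELAY_S
    print(f"{digits}-digit PIN: {combinations:,} combinations, "
          f"worst case ~{seconds / 3600:.1f} hours")
# -> a 4-digit PIN falls in well under an hour; a 6-digit PIN takes about a day.
```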

A swipe pattern can be easier to remember than a PIN.
Mike Dent/Flickr, CC BY-NC-ND

Similarly, Android devices enforce time delays after a number of passcode or password entries. However, stock Android devices will not delete their contents after any number of incorrect entries.

Swipe patterns are also a good security mechanism, as there are more possible combinations than a four-digit PIN. Additionally, you can’t set your swipe pattern to be the same as your banking PIN or password, so if one is compromised, then the others remain secure.

However, all of these security controls can potentially be thwarted. By simply observing the finger smudges left on an unclean screen, it is possible to discern a swipe pattern or passcode. When it comes to touchscreen devices, cleanliness is next to secure-ness.

Bypasses

Speaking of fingers, biometrics have increased in popularity recently. Biometric security controls simply mean that traits of a human body can be used to identify someone and therefore unlock something.

Some Android phones now carry built-in fingerprint sensors.
Kārlis Dambrāns/Flickr, CC BY

In the case of smartphones, there are competing systems that offer various levels of security. Android has facial, voice and fingerprint unlocking, while iOS has fingerprint unlocking only.

Generally, biometrics on their own are not inherently secure. When used as the only protection mechanism, they’re often very unreliable, either allowing too many unauthorised users to access a device (false positives), or by creating a frustrating user experience by locking out legitimate users (false negatives).

Some methods of bypassing these biometric protections have been widely publicised, such as using a gummi bear or PVA glue to bypass Apple’s TouchID, or using a picture to fool facial recognition on Android.

Watch as a picture of a face can unlock an Android phone.

To combat this, Apple disables TouchID after five incorrect fingerprint attempts, requiring a passcode or password entry to re-enable the sensor. Likewise, current versions of Android enforce increasing time-outs after a number of incorrect entries.

These methods help strike a balance between security and usability, which is crucial for making sure smartphones don’t end up hurled at a wall.

Although these lockscreen protections are in place, your device may still contain bugs in its software that can allow attackers to bypass them. A quick search for “smartphone lockscreen bypasses” on your favourite search engine will yield more results than you’d probably care to read.

Lockscreen bypasses are particularly problematic for older devices that are no longer receiving security updates, but new devices are not immune. For example, the latest major iOS release (iOS 9.0) contained a flaw that allowed users to access the device without entering a valid passcode via the Clock app, which is accessible on the lockscreen. Similar bugs have been discovered for Android devices as well.

All of these efforts could be thrown out the window if you install an app that includes malware.

So lockscreens, PIN codes, passwords and swipe patterns should only be considered your first line of defence rather than a foolproof means of securing your device.

Clinton Carpene, Post Doctoral Researcher in network security, Edith Cowan University

This article was originally published on The Conversation. Read the original article.

Alcamy offers open self-learning to learn or teach anything


There are a lot of great education services on the web, and many of them offer free classes on a ton of different subjects, but have you checked out Alcamy yet? It’s a bit like the Wikipedia of learning courses. Alcamy has set out to offer an open-source method of learning and teaching that relies on a combination of experts and an active learning tool called Darwin. In short, they hope to let anyone learn anything, for free! We love any service that offers opportunities for people to harness the power of their Internet connection for great purposes like learning new skills or just learning for the joy of it!

From their Introduction:

Topics

Experts and self-learners organize the resources of the web into chronological programs that you can learn from.

Resources

Resources are individual articles, projects, videos or presentations that are the actual learning material being curated.

Quizzes

Each resource is attached to a quiz. Actively test your understanding right then and there. Track your progress over time.

Community

Each topic has a community of self-learners and experts. Ask them for help, discuss material. The learning resources under each topic self-adjust and improve as more people take them.

We’re taking an open, Wikipedia-like approach to curating content & information for self-learning. Call it a Wikipedia + Coursera + Reddit mashup. Our mission is to make learning and teaching using the resources already available on the web open, free and exciting!

Visit Alcamy.org to learn more about their exciting new platform!