Robots are in the news a lot these days, and if that seems a little hard to believe, I was right there with you, until they started writing for the Associated Press.
That’s right. They are taking over my job.
The Associated Press is using an Artificial Intelligence (AI) program to write its more data-heavy articles (such as corporate earnings reports), and has been doing so for a couple of years. The program is called Wordsmith (developed by Automated Insights), and “uses natural language generation to turn data into a written, plain-language narrative,” just like I have been doing for all my college career.
At the risk of sounding a little racist (speciesist? Tech-ist?), I say go home, Computron, and quit taking all our jobs!
Apparently, the breakthrough in AI that allows computers to acquire the skill I have spent years developing is called “deep learning.” The computer uses algorithms to learn things that were not specifically programmed into its system. The Google team that developed the technology tested it by feeding the AI YouTube videos. Predictably, the AI learned a lot about cats.
Facebook has also been using AI (or bots) to sniff out fake news articles, locate terrorist activity on its site, and identify suicidal individuals, all tasks that humans have had a lot of trouble managing on such a large-scale platform.
So far AI is being used for big chunks of data-saturated grunt work. But it can’t really replicate human emotion. Right? I still have that going for me. Right?
Affectiva is helping AI learn to recognize feelings
Maybe robots can’t love (that’s right, Wordsmith, I said it. How does that feel? You don’t know, do you?), but they are developing the ability to empathize. Affectiva is a program out of MIT that can recognize human emotion by reading your face. While this sounds a little creepy, the idea is that it could improve recommendation software for sites like Netflix and Google, and even predict which content will go viral. The digital world is a cold and impersonal one, and emotion-recognition software aims to bridge the gap between information and feeling.
Personally, I already get creeped out when Google uses my first name or asks me about something I don’t remember telling it. I can’t imagine if it started asking me why I am sad/angry/excited.
In conclusion, I am preferable to a robot because:
| Robots | Me |
| --- | --- |
| Turns data into sentences | Turns data into sentences |
| Knows about cats | Knows about cats |
| Can’t even love | Loves cats and my husband |
| Doesn’t even know how soft cats feel | Pets kitties every day |
| Can empathize | Is sort of empathetic |
| Will spy on you | Won’t spy on you |
Artificial intelligence can bring many benefits to human gamers.
Way back in the 1980s, a schoolteacher challenged me to write a computer program that played tic-tac-toe. I failed miserably. But just a couple of weeks ago, I explained to one of my computer science graduate students how to solve tic-tac-toe using the so-called “Minimax algorithm,” and it took us about an hour to write a program to do it. Certainly my coding skills have improved over the years, but computer science has come a long way too.
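The Minimax solution to tic-tac-toe really is about an hour's work: recursively try every legal move, assume both players play perfectly, and score finished games. Here is a minimal sketch in Python (the board layout and the +1/−1/0 scoring convention are my own choices, not from any particular assignment):

```python
# Minimax for tic-tac-toe. Board: list of 9 cells ('X', 'O', or ' ').
# 'X' tries to maximize the score, 'O' tries to minimize it.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player                      # try the move...
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '                         # ...then undo it
        if best is None or \
           (player == 'X' and score > best[0]) or \
           (player == 'O' and score < best[0]):
            best = (score, m)
    return best

# Perfect play from an empty board is always a draw:
score, move = minimax([' '] * 9, 'X')
print(score)  # 0
```

Exhaustive search is feasible here because tic-tac-toe has under 400,000 game paths; chess and Go blow this approach up, which is why Deep Blue and AlphaGo needed much more machinery.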
What seemed impossible just a couple of decades ago is startlingly easy today. In 1997, people were stunned when a chess-playing IBM computer named Deep Blue beat international grandmaster Garry Kasparov in a six-game match. In 2015, Google revealed that its DeepMind system had mastered several 1980s-era video games, including teaching itself a crucial winning strategy in “Breakout.” In 2016, Google’s AlphaGo system beat a top-ranked Go player in a five-game tournament.
The quest for technological systems that can beat humans at games continues. In late May, AlphaGo will take on Ke Jie, the best player in the world, among other opponents at the Future of Go Summit in Wuzhen, China. With increasing computing power and improved engineering, computers can beat humans even at games we thought relied on human intuition, wit, deception or bluffing – like poker. I recently saw a video in which volleyball players practice their serves and spikes against robot-controlled rubber arms trying to block the shots. One lesson is clear: When machines play to win, human effort is futile.
This can be great: We want a perfect AI to drive our cars, and a tireless system looking for signs of cancer in X-rays. But when it comes to play, we don’t want to lose. Fortunately, AI can make games more fun, and perhaps even endlessly enjoyable.
Designing games that never get old
Today’s game designers – whose releases can earn more than a blockbuster movie – see a problem: Creating an unbeatable artificial intelligence system is pointless. Nobody wants to play a game they have no chance of winning.
But people do want to play games that are immersive, complex and surprising. Even today’s best games become stale after a person plays for a while. The ideal game will engage players by adapting and reacting in ways that keep the game interesting, maybe forever.
So when we’re designing artificial intelligence systems, we should look not to the triumphant Deep Blues and AlphaGos of the world, but rather to the overwhelming success of massively multiplayer online games like “World of Warcraft.” These sorts of games are graphically well-designed, but their key attraction is interaction.
It seems as if most people are not drawn to extremely difficult logical puzzles like chess and Go, but rather to meaningful connections and communities. The real challenge with these massively multiplayer online games is not whether they can be beaten by intelligence (human or artificial), but rather how to keep the experience of playing them fresh and new every time.
Change by design
At present, game environments allow people lots of possible interactions with other players. The roles in a dungeon raiding party are well-defined: Fighters take the damage, healers help them recover from their injuries and the fragile wizards cast spells from afar. Or think of “Portal 2,” a game focused entirely on collaborating robots puzzling their way through a maze of cognitive tests.
Exploring these worlds together allows you to form common memories with your friends. But any changes to these environments or the underlying plots have to be made by human designers and developers.
In the real world, changes happen naturally, without supervision, design or manual intervention. Players learn, and living things adapt. Some organisms even co-evolve, reacting to each other’s developments. (A similar phenomenon happens in a weapons technology arms race.)
Computer games today lack that level of sophistication. And for that reason, I don’t believe developing an artificial intelligence that can play modern games will meaningfully advance AI research.
We crave evolution
A game worth playing is a game that is unpredictable because it adapts, a game that is ever novel because novelty is created by playing the game. Future games need to evolve. Their characters shouldn’t just react; they need to explore and learn to exploit weaknesses or cooperate and collaborate. Darwinian evolution and learning, we understand, are the drivers of all novelty on Earth. It could be what drives change in virtual environments as well.
Evolution figured out how to create natural intelligence. Shouldn’t we, instead of trying to code our way to AI, just evolve AI instead? Several labs – including my own and that of my colleague Christoph Adami – are working on what is called “neuro-evolution.”
In a computer, we simulate complex environments, like a road network or a biological ecosystem. We create virtual creatures and challenge them to evolve over hundreds of thousands of simulated generations. Evolution itself then develops the best drivers, or the best organisms at adapting to the conditions – those are the ones that survive.
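The loop described above – create a population, evaluate it, keep the survivors, mutate – can be sketched in a few lines. This is a toy stand-in, not the lab's actual simulator: the "environment" here is just a hypothetical fitness function rewarding genomes near a target vector, but the generational structure is the same:

```python
import random

# Toy evolutionary loop: evolve 8-dimensional "genomes" toward a target.
# The fitness function is a hypothetical stand-in for a complex simulated
# environment (a road network, an ecosystem, ...).
TARGET = [0.5] * 8

def fitness(genome):
    # Higher is better: negative squared distance to the target behavior.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Small random perturbation of every gene.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    population = [[random.uniform(-1, 1) for _ in range(8)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 5]     # truncation selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
# After a few hundred generations, the best genome sits near the target.
```

Nobody "programs" the solution here: selection and mutation discover it over generations, which is the core of the neuro-evolution idea (in real experiments the genome encodes a neural network rather than a bare vector).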
Today’s AlphaGo is beginning this process, learning by continuously playing games against itself and by analyzing records of games played by top Go champions. But it does not learn while playing the way we do, through unsupervised experimentation. And it doesn’t adapt to a particular opponent: For these computer players, the best move is the best move, regardless of an opponent’s style.
Programs that learn from experience are the next step in AI. They would make computer games much more interesting, and enable robots not only to function better in the real world, but to adapt to it on the fly.