It's time programmers dug out old computer text adventures like Zork and Colossal Cave Adventure from the 1970s and 1980s.
The new AlphaGo Zero artificial intelligence took just days to learn to play Go from scratch, with no human intervention. It even learned strategies never seen before in human play.
The artificial intelligence that beat a world master at the game of Go is now to be directed at more complex global problems. So what can we expect?
Google's AlphaGo victory over the human world champion shows how far things have come since Deep Blue.
Twenty years after Deep Blue beat Garry Kasparov at chess, artificial intelligence can make games more fun, and perhaps even endlessly enjoyable, if it learns to adapt.
Artificial intelligence researchers have upped the ante and developed a program that has beaten the world's best Heads-Up No-Limit Texas Hold’em poker players.
We need to do more than teach machines to learn. We need to overcome the barriers that separate machines from us – and us from them.
Computers must master football if they are to demonstrate that they can be our equal.
Artificial intelligence gives us machines that can beat humans at games such as chess and Go. How long before we see AI surpass human intelligence?
An artificial intelligence has defeated a world champion of Go, the ancient Chinese strategy game. But what is Go, and why is it worth teaching to a computer?
Google's artificial intelligence made a surprise move in the recent Go challenge, leaving some people worried about what happens when an AI makes a decision no human could have anticipated.
A machine has bested us at yet another intellectually challenging game. It shows artificial intelligence is progressing rapidly, but it doesn't mean humans are redundant quite yet.
While it's impressive, building a computer that can win at Go is not a big step toward the kind of artificial intelligence embodied by the thinking machines we see in the movies.
Even the smartest AIs weren't supposed to beat top humans at Go for another decade or more.