As we near what could be the deciding match in the history-making meeting between AlphaGo and Lee Se-dol, we pause to review what’s taken place so far, and what it all means.
AlphaGo, Google’s Go-playing Artificial Intelligence software, has done the unthinkable. It has taken a 2-0 lead against Lee Se-dol, one of the greatest Go players in the world, in a series of matches already destined for the history books. And there is still more to come!
This is no publicity stunt. The technology is real, the victory genuine, and the change monumental. We’ve woken to a new morning for humankind, a world where machines can learn. Using deep learning techniques, DeepMind (Google’s AI lab) has effectively trained the software to teach itself how to play, to improve with every game, and ultimately to master one of the most complex games ever invented.
Researchers, scientists, programmers and philosophers the world over have been stunned by the results. It was one thing for a machine to win at chess. Raw processing power made that possible: the number of positions worth examining is small enough that a sufficiently powerful computer can search many moves ahead and pick the strongest line. That approach doesn’t work for Go. There are simply too many possibilities to search through by brute force. Something else is required to win.
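The scale gap between the two games can be made concrete with a little back-of-the-envelope arithmetic. The numbers below are the commonly cited ballpark averages (roughly 35 legal moves per chess position over a game of about 80 plies, versus roughly 250 legal moves per Go position over about 150 plies); a minimal sketch:

```python
import math

# Commonly cited ballpark figures for average branching factor
# (legal moves per position) and typical game length in plies.
CHESS_BRANCHING, CHESS_PLIES = 35, 80
GO_BRANCHING, GO_PLIES = 250, 150

# Rough size of each game tree: branching factor ** game length.
chess_tree = CHESS_BRANCHING ** CHESS_PLIES
go_tree = GO_BRANCHING ** GO_PLIES

print(f"Chess game tree: ~10^{int(math.log10(chess_tree))}")  # ~10^123
print(f"Go game tree:    ~10^{int(math.log10(go_tree))}")     # ~10^359
```

Even the chess tree dwarfs the roughly 10^80 atoms in the observable universe, and the Go tree is larger still by hundreds of orders of magnitude. Exhaustive search is hopeless, which is why AlphaGo has to learn which moves are worth considering at all.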
More than 1.5 million people have watched these matches so far. This is an incredible number, and it speaks to the emotional power of what’s taking place. AlphaGo is engaging in something we didn’t really think was possible. It is very nearly thinking. The language observers have used to describe AlphaGo’s performance has been almost poetically anthropomorphic. DeepMind’s founder & CEO Demis Hassabis referred to certain moves as being “beautiful” and “creative.” David Ormerod, covering the matches for GoGameGuru, noted both the “creativity” and “flexibility” of AlphaGo’s play. And Lee Se-dol himself seemed to be speaking of a real player when he registered his shock at the “perfection” of AlphaGo’s performance.
At Udacity, we teach Deep Learning (as part of our Machine Learning Nanodegree program), so we know firsthand that this technology is very real, and very learnable. But that’s not to say there isn’t a bit of magic to it after all. As described in WIRED’s commentary after Match 2:
During the match, the commentators even invited DeepMind research scientist Thore Graepel onto their stage to explain the system’s rather autonomous nature. “Although we have programmed this machine to play, we have no idea what moves it will come up with,” Graepel said. “Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands—and much better than we, as Go players, could come up with.”
This is perhaps what is ultimately so moving about the experience of witnessing AlphaGo’s triumph. None of us actually knows what it’s going to do. But what it does is exactly what it should do. This is no HAL from “2001: A Space Odyssey,” no Frankenstein’s monster turned on its master. This is a well-trained machine that can improvise, pivot, analyze, and grow. It’s not human, but it’s definitely intelligent!
Will AlphaGo sweep the series? Will it all be over after Match 3? You’ll have to tune in to find out. In the meantime, join us in celebrating Google, DeepMind, and AlphaGo for their accomplishments. And join us as well in celebrating Lee Se-dol for so consistently and gracefully embodying, throughout the proceedings, that which we hold so dear: humanity.