That’s 19-year-old world champion Ke Jie after recently losing a game of Go to Google’s AlphaGo AI.
For those who grew up with checkers and chess: Go is an incredibly complex Chinese board game, and a computer mastering it has long been seen as a holy grail of AI, a feat that wasn’t expected for another decade.
So, in other words, he’s not just a sore loser. It’s kind of a big deal.
Haven’t computers played chess since the 90s?
Yes, but they did so with brute force — essentially mapping out all the possible moves in a game and choosing the best one.
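The brute-force idea can be sketched on a toy game (a made-up “21 stones” game, chosen here purely for illustration: players alternate taking 1 or 2 stones, and whoever takes the last stone wins). The approach is exactly what the text describes: enumerate every possible line of play, then pick the move that forces a win.

```python
# A minimal sketch of exhaustive game-tree search, on a hypothetical
# "21 stones" game. Real chess engines refine this same idea heavily
# (pruning, opening books), but the core is: map out every line of
# play and choose the best one.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win, by full search."""
    if stones == 0:
        return False  # no stones left: the previous player already won
    # Try every legal move; if any leaves the opponent losing, we win.
    return any(not can_win(stones - take) for take in (1, 2) if take <= stones)

# With takes of 1 or 2, multiples of 3 are losing positions for the mover.
print(can_win(21))  # → False
print(can_win(22))  # → True
```

Even this tiny game needs to examine every position. Scale the board up and this strategy collapses, which is exactly Go’s problem.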
But this isn’t feasible on Go’s 19×19 board, as there are simply too many possibilities to handle. For a sense of scale: after the first 2 moves in a chess game, there are 400 possible next moves. In Go, there are roughly 130,000.
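The arithmetic behind that comparison is simple: chess opens with 20 legal moves for each side, while a Go player’s first stone can go on any of the board’s 361 intersections.

```python
# Branching-factor arithmetic for "positions after the first two moves".
chess_after_two = 20 * 20    # 20 white openings × 20 black replies
go_after_two = 361 * 360     # 361 first stones × 360 remaining points

print(chess_after_two)  # → 400
print(go_after_two)     # → 129960
```

And the gap only widens with every subsequent move, which is why exhaustive enumeration is hopeless in Go.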
Rather than mapping every move out to the endgame, AlphaGo had to limit how far ahead it looks, and use previous experience to judge how risky the current position is.
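AlphaGo’s actual search is Monte Carlo tree search guided by trained networks, but the cut-off idea can be sketched with a simpler depth-limited minimax: search exactly as far as a depth budget allows, then let an evaluation function (standing in for “previous experience”) estimate everything beyond the horizon. The game interface below is hypothetical.

```python
# Depth-limited search: exact where we can look, estimated where we can't.
def search(state, depth, moves, evaluate, maximizing=True):
    """Minimax that stops after `depth` plies and falls back on a heuristic."""
    options = moves(state)
    if depth == 0 or not options:
        # The heuristic stands in for the unreachable endgame.
        return evaluate(state)
    scores = [search(m, depth - 1, moves, evaluate, not maximizing)
              for m in options]
    return max(scores) if maximizing else min(scores)

# Toy usage: states are integers, moves add 1 or 2, and the "experience"
# heuristic simply prefers larger numbers.
print(search(0, 3, lambda s: [s + 1, s + 2], lambda s: s))  # → 5
```

The quality of the result now hinges entirely on how good the evaluation function is; in AlphaGo’s case, that function was learned rather than hand-written.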
Which makes it kind of… human
AlphaGo “learned” by speeding through the equivalent of playing 80 years straight to develop its technique and strategy.
It even has a brain. The program relies on a series of “neural networks” — systems of connected hardware and software designed to approximate the web of neurons in the human brain. This requires 170 GPU cards and 1,200 CPUs just to operate.
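The basic unit of those networks can be shown in a few lines. The weights and layer sizes below are made up for illustration; AlphaGo’s real policy and value networks are vastly larger and their weights are learned from play, not hand-picked.

```python
# A toy artificial neuron and a tiny two-layer "web" of them.
import math

def neuron(inputs, weights, bias):
    """One neuron: a weighted sum of inputs squashed through a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def tiny_network(x):
    """2 inputs → 2 hidden neurons → 1 output, with arbitrary weights."""
    h1 = neuron(x, [0.5, -0.4], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.2)

print(round(tiny_network([1.0, 0.0]), 3))
```

Training amounts to nudging those weights until the network’s outputs match experience, which is what AlphaGo’s decades’ worth of self-play accomplished.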
But we’re not just talking about an ancient board game
The implications of true artificial intelligence are huge. Obviously.
A robot that can think in real time is ideal for driverless cars, which need to handle continuous, novel situations. And Facebook, which has its own Go bot, has hinted at a virtual assistant that can use pattern recognition to complete purchases.
But theoretically, a robot that can learn from experience to handle new situations can tackle any problem a human could.
Or as one researcher puts it, “What if the universe is just a giant game of Go?”