What can a computer beating a human at the ancient game of Go tell us about how to understand uncertainty better: to see beauty in a threat?
It was in 1997 that a computer, IBM’s Deep Blue, finally beat the best human player, Garry Kasparov, at Chess. It wasn’t until March 2016 that a computer, DeepMind’s AlphaGo, beat one of the world’s best players, Lee Sedol, at Go. 1997 was the dawn of the internet age. What does the difference between these two computing landmarks tell us about how we need to think differently in this, the age of uncertainty?
The task of creating a computer and programme to beat the best at Chess was one of processing grunt. Since 1997 the increasing availability of the requisite processing power has transformed the way Chess is ‘consumed’ as a high-level sport. When Magnus Carlsen played Sergey Karjakin in the 2016 world championship, each move was analysed in real time by chess ‘machines’ to predict its effect on the likely outcome, and that analysis was then further interpreted and debated live online by thousands of chess aficionados.
Go is the oldest board game still being played. It was invented more than 2,500 years ago in ancient China and is played on a 19×19 grid on which players alternately place black and white stones. Beating a human champion at Go has long been considered a ‘grand challenge’ in Artificial Intelligence (AI) research. Despite its simple rules, the size of the board and the number and complexity of possible moves mean that processing grunt is not in itself enough to play the game. Demis Hassabis, DeepMind’s founder, estimates that a typical Go turn offers around 200 legal moves, compared with just 20 or so in Chess. Any one game possesses more possibilities than there are atoms in the universe.
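The scale of that comparison is easy to check with back-of-the-envelope arithmetic. The branching factors below follow the article (roughly 200 legal moves per turn in Go, 20 in Chess); the game lengths (about 80 and 150 plies) are common rough figures I’ve assumed for illustration, not exact values:

```python
import math

# Rough game-tree size: branching_factor ** game_length, expressed as a
# power of ten so the huge numbers stay manageable.
chess_exponent = 80 * math.log10(20)    # ~10^104 possible Chess games
go_exponent = 150 * math.log10(200)     # ~10^345 possible Go games
atoms_exponent = 80                     # ~10^80 atoms in the observable universe

print(f"Chess: ~10^{chess_exponent:.0f}")
print(f"Go:    ~10^{go_exponent:.0f}")
print(go_exponent > atoms_exponent)  # True: far more Go games than atoms
```

Even on these crude assumptions, the number of possible Go games dwarfs the atom count by hundreds of orders of magnitude, which is why brute-force search alone cannot crack the game.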
Go’s complexity doesn’t just arise from the size of its board. Chess is essentially about controlling the board and winning by capturing your opponent’s pieces. The aim and mechanics of Go are based on surrounding more territory than your opponent, and the status of that territory must be continually reassessed and agreed between the players. Even the end of a game of Go is abstract: there is no checkmate; rather, the players simply agree that the game has ended.
So Go is a game about creating and controlling space in an unpredictable environment rather than just extrapolating data to an endpoint. This would also seem to describe the challenge of our age of uncertainty: to define and maintain a space for what is right for us in a fluid, complex environment where no ultimate ‘right’ answer exists.
So…how do you programme a computer to beat the best human at Go?
Deep Blue was directly programmed by humans to play Chess; AlphaGo was designed to learn to play and make decisions for itself. This ‘Machine Learning’ allows computers to figure out for themselves how to do things like recognise human faces and translate languages (Google Translate invented its own language (!) – a kind of Rosetta Stone – in order to do this, for example). So as well as studying zillions of moves from previous games, AlphaGo played untold numbers of games against itself to create new strategies and out-think the collective human experience.
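AlphaGo’s real training combines deep neural networks with tree search, which is far beyond a few lines of code. But the core self-play idea – play yourself, remember which moves led to wins, and prefer them next time – can be sketched on a much smaller game. The toy below (entirely my own illustration, not DeepMind’s method) learns Nim by self-play: players alternately take 1–3 sticks from a pile of 10, and whoever takes the last stick wins.

```python
import random

# Toy self-play learning on Nim: one tabular value Q[(sticks_left, move)]
# is updated from the final result of each self-played game. All the
# parameters (pile size, exploration rate, learning rate) are illustrative.
N, MOVES = 10, (1, 2, 3)
Q = {(n, m): 0.0 for n in range(1, N + 1) for m in MOVES if m <= n}

def choose(n, eps):
    """Pick a move: explore randomly with probability eps, else be greedy."""
    moves = [m for m in MOVES if m <= n]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(n, m)])

random.seed(0)
for _ in range(20000):
    n, history = N, []              # history of (state, move) for both sides
    while n > 0:
        m = choose(n, eps=0.2)
        history.append((n, m))
        n -= m
    # The player who took the last stick wins (+1); alternate the sign
    # walking back through the game, since the players take turns.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += 0.1 * (reward - Q[(state, move)])
        reward = -reward

# In Nim the optimal first move from 10 is to take 2, leaving the
# opponent a multiple of 4; the learned values should come to reflect that.
best = max(MOVES, key=lambda m: Q[(10, m)])
print("learned first move:", best)
```

The point of the sketch is the loop structure: no human ever tells the program *which* moves are good; strategy emerges purely from the statistics of its own games, which is the same principle – at vastly greater scale – behind AlphaGo out-thinking accumulated human experience.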
Let’s bring this to a more mundane level. My best friend lives in Montreal; I live in the UK. We’ve constantly got at least one online chess game on the go. I play chess to the standard that I know what the pieces do, and I follow these simple rules to make my decisions, being able to imagine a couple of moves ahead. I’ve never had instruction, and consequently a competent five-year-old would wipe the board with me…but I can play and it’s enjoyable.
A few years ago – in part inspired by Howard Marks’ book Mr Nice, in which he eulogised Go – I bought a board and some stones, and my wife and I set about learning how to play. We seldom argue – I’m not bragging about that, it’s just not our style – but boy did we fall out over Go. I mean every time we tried to play. The rules were there in print before us, but we couldn’t agree on what they meant when applied to what we were attempting on the board. The experience of trying to play Go was so unsettling, so upsetting, that I don’t even know where the board is today. What we needed was someone who had played before to talk us through a few games together.
Thriving in unpredictability is about learning from experience. But even to get to the start of that game – to play at all – we can’t learn the rules in the abstract. Crucially, we need to engage with people who have done things and experienced stuff that we have not. That doesn’t mean they need to have cracked our problem already, or that we need them to do it for us. By seeking them out and getting their help to understand our situation through a joint, collective assessment, we can begin to play: to experiment with possible responses, possible answers, reassess the effect on our space…and play again.
This besting of the collective human experience by AI can be viewed as rather creepy – a foreshadowing of a dystopian future. It depends on your point of view. I was talking recently with Fred Grzyb, Wardrobe Master for the New York City Ballet and New York City Opera for over 40 years, about change in the arts and in life generally, and he said:
“It’s melancholic if you see change as depletion.”
Fan Hui is the European Go champion, and AlphaGo trounced him five-nil. He went on to help the team at DeepMind (bought by Google in 2014) prepare for the match with Lee Sedol. Wired magazine recorded the effect this had:
“The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In [months] AlphaGo has taught him, a human, to be a better player. He sees things he didn’t see before. And that makes him happy. ‘So beautiful,’ he says. ‘So beautiful.’ ”