Why is the recent ‘Go’ victory so important for AI? (Part 1)

Anyone who has seen any news in the last few days will know that a computer has for the first time beaten a top human player at the ancient Chinese game of ‘Go’.  In fact, at the time of writing, the AI (let’s call it by its name: AlphaGo) has beaten its opponent 3 times, and the human (Lee Se-dol) has won one game – the fifth in the series takes place shortly.  But why is this such important news for AI?

After all, AI has been beating top grandmasters at Chess for a while now – Garry Kasparov was beaten by a computer (IBM’s Deep Blue) in 1997, and although the exact ‘fairness’ of those matches has been questioned by some, it’s certainly been the case since about 2006 that a ‘commercially available’ computer running standard software can beat any human player on the planet.

So why is ‘Go’ so different?  In many ways, it’s a very similar game.  It’s ‘zero-sum’ (meaning one player’s loss exactly matches the other player’s gain), deterministic (meaning there is no random element to the game), partisan (meaning the moves available differ between the players – each plays only their own colour), and ‘perfect-information’ (meaning both players can see the whole game state – there are no hidden elements or information).   Just like Chess.

From an AI point of view, two things make ‘Go’ vastly more difficult than Chess.

Firstly, the board is a lot bigger (19×19 intersections, against 8×8 squares for Chess), meaning that the average number of legal moves per turn is around 200 (compared to an average of about 37 for Chess).  This means that the ‘combinatorial explosion’ (which makes Chess difficult enough) is much worse for ‘Go’:  to calculate the next 4 moves (2 for each player) would need around 1,600,000,000 board positions to be analysed – and looking ahead only 2 moves per player would give a pathetically weak game.
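
To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python (the branching factors are the approximate averages quoted above, not exact figures):

```python
# Rough illustration of the combinatorial explosion: how many positions a
# naive brute-force search would examine, given the approximate average
# branching factors quoted above (37 for Chess, 200 for Go).

CHESS_BRANCHING = 37   # average legal moves per turn in Chess (approximate)
GO_BRANCHING = 200     # average legal moves per turn in Go (approximate)

def positions_to_search(branching_factor: int, plies: int) -> int:
    """Number of positions examined when looking ahead `plies` half-moves."""
    return branching_factor ** plies

for plies in (2, 4, 6):
    chess = positions_to_search(CHESS_BRANCHING, plies)
    go = positions_to_search(GO_BRANCHING, plies)
    print(f"{plies} plies ahead: Chess ~{chess:,} positions, Go ~{go:,} positions")
```

At 4 plies the gap is already stark: roughly 1.9 million positions for Chess against 1.6 billion for Go, and every extra move multiplies the Go figure by another 200 or so.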

The second factor is that for Chess, analysing the ‘strength’ of a board position is fairly easy.  The material ‘pieces’ each player owns can each be given an approximate value with a simple scoring system, which can be made more elaborate with a few extra strategic rules (knights are more valuable near the centre, pawns are best arranged in diagonals, and so on).  But for ‘Go’, simply counting stones is nowhere near a useful enough indicator of the advantage a player has in the game, and no comparably simple rules can be written which help.
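
For Chess, a toy version of that kind of scoring system might look like the sketch below (a minimal illustration using the traditional textbook piece values; the `material_score` helper is hypothetical, not taken from any real engine). The point is that nothing comparably simple gives a useful evaluation for ‘Go’.

```python
# A minimal sketch of 'piece counting' evaluation for Chess, using the
# traditional textbook piece values. A real engine layers many positional
# terms on top of this; no comparably simple function works for Go.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # king is not scored

def material_score(white_pieces: list[str], black_pieces: list[str]) -> int:
    """Material balance: positive favours White, negative favours Black."""
    white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
    black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
    return white - black

# Example: White has lost a knight; Black has lost a rook and a pawn.
white = ["Q", "R", "R", "B", "B", "N"] + ["P"] * 8
black = ["Q", "R", "B", "B", "N", "N"] + ["P"] * 7
print(material_score(white, black))  # prints 3: White is up a minor piece's worth
```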

Instead, good human players (and even relative amateurs) can assess a board position, more or less just by using their intuition, and that intuition is where a lot of the best play comes from.  Computers, of course, are not well known for their use of ‘intuition’.

I’ll write more about the approach ‘AlphaGo’ used – and why this has wider implications for AI in general – in a follow-up article in the next few days.