Wednesday, January 27, 2016

Go Figure

It's over. Humans had a good run on this planet, but our time is nearly up. Google DeepMind's AlphaGo has just clocked a strong professional Go player 5-0 in a five-game match. It will take on Lee Sedol, the world's best human player, month after next, but even if the human ekes out a win, the handwriting is already on the wall, or maybe I should say, the pixels are already in memory.

It has been about twenty years since computers cracked chess, but the ancient game of Go had remained stubbornly resistant. Probably even more interesting than the fact of the accomplishment is the way it was done: not by brute computer power or clever hand-crafted algorithms, but with deep neural networks. Such networks are electronic emulations of the way brains work, and AlphaGo's network learned, in effect, by distilling a kind of essence of the millions of games it studied. Networks of this kind, backed by enormous computing power, are now demolishing artificial intelligence problems that had defied researchers for a couple of generations: face recognition, language translation, spatial navigation.

Very few human intellectual tasks are going to escape the computers' increasing mastery in the next decade or two.

Personal Note: two or three decades ago, I trained a neural network to solve a certain kind of integral equation. I noted at the time that the advantage of the neural network was that you didn't need to know much of anything about how it solved the problem. The disadvantage is the same: you don't really wind up understanding how it does its thing. The programmers who beat chess included some strong professional players. I don't know about those who built AlphaGo, but in principle at least, the programmer really doesn't need to know much of anything about the problem being solved. He just feeds the program millions of situations and how they were solved, and the network does the rest, distilling some extremely complex and quite likely humanly incomprehensible rules from all the data.
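To make that idea concrete, here is a toy sketch, and only that: nothing like AlphaGo's actual architecture or training data, just a tiny two-layer network taught the XOR function purely from example (input, answer) pairs by gradient descent. The network sizes, learning rate, and iteration count are all arbitrary choices for the illustration.

```python
# Toy illustration of "feed it situations and their solutions":
# a small sigmoid network learns XOR from four labeled examples.
import numpy as np

rng = np.random.default_rng(0)

# The "situations and how they were solved" -- here, just four pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights for a 2-4-1 network.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
loss_history = []
for _ in range(5000):
    # Forward pass: compute the network's current answers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    loss_history.append(float(np.mean((out - y) ** 2)))
    # Backward pass: hand-derived gradients of the mean squared error.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h
    # Nudge the weights downhill.
    W2 -= lr * d_W2
    W1 -= lr * d_W1
```

The point of the exercise is the one made above: after training, what the network "knows" lives entirely in the arrays W1 and W2, which are just numbers. Nothing in them looks like a rule a human could read off.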