AlphaGo as BS detector

One of the great things about AlphaGo winning three games and then losing the fourth is that it undermines the most spectacular BS generators, who were proclaiming AlphaGo's superhuman abilities after game three.

I am a great fan of what AlphaGo has done. As someone who, at one point, could probably have beaten any go-playing program on the planet, I find the advance amazing, especially reaching professional level. AlphaGo is awesome and I have a huge amount of respect for its developers.

And then you get people like Eliezer Yudkowsky of the "Machine Intelligence Research Institute". I've become somewhat tired of the whole hyper-rational mysticism thing, where they carefully draw logical conclusions from extremely shaky premises. It rather reminds me of the Pythagorean cults. The ancient Greeks were pretty good at maths, but sucked at science.

Anyway, back to this particular screed. Much of it is about how AlphaGo is a strongly super-human player, of such strength that pros don't understand its moves. Which looks a bit silly when AlphaGo loses the fourth game. (An update says it's perhaps just a flawed strongly super-human player. Of course.)

This seems like rubbish to me because, even at a few stones' difference in strength, I can still understand the moves played by a stronger player - I just can't find them myself - in the same way that I can follow a clever proof that I'd never have been able to devise myself.

The main thing that strikes me about this ill-advised post is that it provides a vast amount of strong opinion, with the minor problems that Yudkowsky a) doesn't know much about go, and b) doesn't know much about how AlphaGo works. A lesser person would perhaps let this stop them from writing.

Instead, the event is a Rorschach test that allows one to demonstrate AI-theory hobby-horses without letting reality get too close.

One quote I particularly enjoyed was this:

Human-equivalent competence is a small and undistinguished region in possibility-space. [...] AI is either overwhelmingly stupider or overwhelmingly smarter than you. The more other AI progress and the greater the hardware overhang, the less time you spend in the narrow space between these regions. There was a time when AIs were roughly as good as the best human Go-players, and it was a week in late January.

This really is pretty naff. If you run the numbers and measure competence as the information-theoretic bits-worth of mistakes per game, human play covers a good and interesting region of the possibility space. There's plenty of room above us, but the very top - perfect play - is inaccessible even to super-intelligent beings: reaching it effectively requires exhaustive search, which is computationally infeasible.
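As a back-of-envelope sketch of what "running the numbers" might look like (every figure below is an illustrative assumption, not a measurement): treat mistakes as the bits of advice an oracle would need to correct a player's moves.

```python
import math

# Back-of-envelope: "mistake bits" as the bits of advice an oracle would
# need to correct a player's moves. All numbers here are illustrative
# assumptions, not measurements.
MOVES_PER_GAME = 120    # moves played by one side in a typical game
AVG_LEGAL_MOVES = 150   # rough average branching factor across a game

def correction_bits(p_oracle_move):
    """Bits per game to patch a player who independently finds the
    oracle's move with probability p_oracle_move; each miss costs about
    log2(branching factor) bits to name the right move instead."""
    return MOVES_PER_GAME * (1 - p_oracle_move) * math.log2(AVG_LEGAL_MOVES)

for label, p in [("beginner", 0.05), ("strong amateur", 0.35), ("pro", 0.65)]:
    print(f"{label:>14}: ~{correction_bits(p):.0f} bits of correction/game")
```

Even with these crude assumptions, the human range spans roughly 300 to 800 bits per game - hardly "a small and undistinguished region".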

A bit below the impossible there is a space for super-human go players. However, there's no real indication of magic here. The space looks pretty smooth - the value difference between a good move and a great move is smaller than that between a good move and a bad one. This means that a) optimisation is harder, making a super-human AI difficult to build, and b) super-human moves won't look utterly alien - just good in a way that is difficult to judge against the alternatives.
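A rough way to see why shrinking value gaps make optimisation harder, under a simple Monte-Carlo-evaluation model (an assumption for illustration - AlphaGo itself mixes rollouts with value networks): the number of rollouts needed to tell two moves apart grows as the inverse square of their win-rate gap.

```python
import math

# Toy model: distinguishing two candidate moves by Monte Carlo rollouts.
# Each rollout is a win/loss sample, so a move's estimated win rate has
# variance at most 0.25/n, and the difference of two such estimates has
# variance at most 0.5/n.
def rollouts_needed(gap, z=2.0):
    """Rollouts per move so that z standard errors of the estimated
    difference fit inside the true win-rate gap."""
    return math.ceil(z ** 2 * 0.5 / gap ** 2)

for gap in [0.10, 0.03, 0.01]:  # bad-vs-good down to good-vs-great
    print(f"win-rate gap {gap:.2f}: ~{rollouts_needed(gap):,} rollouts per move")
```

Telling a great move from a merely good one costs a hundred times the evaluation effort of telling a good move from a bad one, which is exactly why progress above human level is grind rather than magic.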

Fundamentally, go is a really bad task with which to wave the flag for incomprehensible AI. Tasks like "write a program/design a system to do XYZ" are fantastic, since artificial systems can explore unlikely spaces we'd never investigate - look at some of the neat things done with genetic algorithms. Go is an awful screw for Yudkowsky's AI-angst hammer.

Posted 2016-03-13.