Saturday 6 February 2016

A master algorithm?

I've been reading The Master Algorithm, by Pedro Domingos. It's a kind of "machine learning for dummies" book. Pedro's main theme is that machine learning researchers are divided into tribes, and that they need to combine their ideas into one "Master Algorithm" - the silver bullet of machine intelligence, which will then go on to learn everything that it is possible to learn.

I think it is useful to compare and contrast this idea with the idea of memetic algorithms, which seek to emulate cultural evolution. We have one main example of a self-contained self-improving system: human civilization. We don't yet know exactly which elements of civilization are necessary to simulate in order to produce a self-improving system - but we do have the advantage of already having an example of civilization on hand. We can copy from that or, if necessary, tinker with its components and replace them one at a time.

Pedro suggests that his proposed "Master Algorithm" might be relatively simple. Looking at history, the simplest seed of a self-improving system looks like a substantial community of farmers. Maybe a simpler synthetic system could "take off", but that seems like an unproven hypothesis.

Pedro proposes that the different machine learning tribes combine their efforts, insights and algorithms to produce the Master Algorithm. Of course it is often a good idea for smart people to put their heads together to share ideas. However, the simplest and smallest self-improving system could easily consist of a large heterogeneous network of agents with different learning strategies. That seems more like the Minsky vision of the brain as being composed of a patchwork of multiple expert systems.
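To make that picture concrete, here is a minimal toy sketch of such a heterogeneous arrangement: several learners with quite different strategies each fit the same data, and a simple majority vote combines them. The learners and data here are invented purely for illustration, not drawn from the book.

```python
# Toy sketch: a heterogeneous "committee" of learners with different
# strategies, combined by majority vote. Learners and data are
# invented for illustration only.
from collections import Counter

class MajorityClass:
    """Always predicts the most common training label."""
    def fit(self, X, y):
        self.label = Counter(y).most_common(1)[0][0]
    def predict(self, x):
        return self.label

class NearestNeighbour:
    """Predicts the label of the closest training point."""
    def fit(self, X, y):
        self.data = list(zip(X, y))
    def predict(self, x):
        return min(self.data, key=lambda p: abs(p[0] - x))[1]

class Threshold:
    """Learns a single decision threshold on a 1-D feature."""
    def fit(self, X, y):
        pairs = sorted(zip(X, y))
        best = None
        for i in range(1, len(pairs)):
            t = (pairs[i - 1][0] + pairs[i][0]) / 2
            correct = sum((x > t) == (label == 1) for x, label in pairs)
            if best is None or correct > best[0]:
                best = (correct, t)
        self.t = best[1]
    def predict(self, x):
        return 1 if x > self.t else 0

def committee_predict(learners, x):
    """Combine heterogeneous learners by majority vote."""
    votes = [m.predict(x) for m in learners]
    return Counter(votes).most_common(1)[0][0]

X = [1.0, 2.0, 3.0, 8.0, 9.0, 10.0]
y = [0, 0, 0, 1, 1, 1]
learners = [MajorityClass(), NearestNeighbour(), Threshold()]
for m in learners:
    m.fit(X, y)

print(committee_predict(learners, 2.5))   # small feature -> class 0
print(committee_predict(learners, 9.5))   # large feature -> class 1
```

No single learner here needs to be strong; the point is only that a patchwork of dissimilar strategies can act as one system without being unified into a single algorithm.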

Rather than uniting to produce one Master Algorithm, I tend to think of the machine learning community further expanding and diversifying in the future. Different data requires different approaches, and general-purpose tools are rarely the best for any given task. I think that much the same is true of memory, sensors, actuators and compute hardware. This isn't a one-size-fits-all world, and different devices are appropriate for different tasks.

Indeed, Pedro's classification scheme already seems to exclude much that I think is important. Pedro's five tribes of machine learning are the "symbolists", "connectionists", "evolutionaries", "Bayesians" and "analogizers". I don't see where collective intelligence fits into that classification scheme. There are a bunch of "swarm intelligence", "memetic algorithm" and "wisdom of crowds" researchers who seem to be excluded by it. That doesn't seem like a good start.

The whole idea of a "master algorithm" seems like a dodgy meme to me - since it implicitly suggests that we are looking for a single, finite algorithm. The question of whether simple-yet-powerful forms of machine intelligence exist has been addressed by Shane Legg. Legg's conclusion is that such systems do not exist: highly intelligent systems are necessarily highly complex. The search for simple-yet-powerful universal learning systems thus seems kind-of futile - we know that these do not exist.

This suggests a picture of machine learning in which humans are taking the first few steps on an endless path towards wisdom. The idea of a "master algorithm" represents this picture poorly - it is bad poetry.