Saturday 4 May 2013

The "low hanging fruit" model of technological progress

Robin Hanson has constructed models of cultural evolution, consisting of periods of exponential growth interspersed with occasional bursts of synergy - during which the exponent of the exponential growth changes.

The model is fitted to a measure of GDP over time. It appears to rest on a small number of data points, many of them in the distant past and subject to somewhat controversial interpretations.

Because of the bumpy history of innovation, Robin predicts a bumpy future - with a coming transition to an era in which the GDP-doubling time may be roughly two weeks.
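
For concreteness, here is a minimal sketch of this kind of model. The mode dates and doubling times below are rough, illustrative stand-ins for the figures Hanson cites, not his exact numbers:

```python
# Illustrative growth "modes": (approximate start year, GDP doubling time in years).
# Rough stand-ins for Hanson's foraging / farming / industry modes.
MODES = [
    (-100000, 230000),  # foraging era
    (-8000, 900),       # farming era
    (1750, 15),         # industrial era
]

def gdp(year, g0=1.0):
    """World GDP (arbitrary units) under piecewise exponential growth.

    Growth is exponential within each mode; at each transition the
    exponent jumps, because the doubling time shrinks.
    """
    g = g0
    for i, (start, doubling) in enumerate(MODES):
        end = MODES[i + 1][0] if i + 1 < len(MODES) else year
        span = min(year, end) - start
        if span <= 0:
            break
        g *= 2 ** (span / doubling)
    return g

# Recent growth dominates: almost all of the increase comes from the latest mode.
print(gdp(1750), gdp(2013))
```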

I think another model is more obvious, while still being consistent with the data:

Simple innovations are found first, with more complex and difficult innovations following. Early innovations can have a large impact, while later ones are more likely to be small, incremental changes. In a connected world, big and important discoveries can only be made once - and then they are forever off the table. We've already had the invention of sex, the invention of culture and the invention of engineering. Nature's inventiveness can't continue with ground-breaking discoveries forever. Later discoveries could in principle still have relatively large impacts - but after a while, this becomes less likely.
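
As a toy illustration of this model (my own sketch, not anything from Hanson's data): if discoveries come from a fixed pool with heavy-tailed impacts, can each be made only once, and tend to be found roughly in order of size, then cumulative progress starts with large jumps and flattens into small increments:

```python
import random

def innovation_trajectory(n=1000, seed=0):
    """Toy 'low hanging fruit' model of innovation.

    A fixed pool of possible discoveries has heavy-tailed impacts.
    Each can be made only once, and bigger/easier discoveries tend
    to be found earlier - so early progress comes in large jumps,
    later progress in small increments.
    """
    rng = random.Random(seed)
    pool = [rng.paretovariate(1.5) for _ in range(n)]  # a few huge, many tiny
    # Search is biased toward the biggest fruit first; the noise term
    # stands in for the imperfection of the search process.
    order = sorted(pool, key=lambda x: -x * rng.uniform(0.5, 1.5))
    total, trajectory = 0.0, []
    for impact in order:
        total += impact
        trajectory.append(total)
    return trajectory

traj = innovation_trajectory()
# Gains from the first hundred discoveries dwarf those from the last hundred.
print(traj[99], traj[-1] - traj[-101])
```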

This model is similar to what we see when looking at the progress of data compression over time: early techniques captured most of the easily available gains, while later refinements have delivered steadily diminishing returns.

This "low hanging fruit" model of innovation predicts a bumpy start, followed by more gradual and incremental progress. This is not to say that there won't be any spurts of innovation in the future - but they are likely to be milder ones.

Obviously, this doesn't make for such a dramatic story as the one Robin Hanson tells - but I think it is one that is much more likely to be right.

Wednesday 1 May 2013

Response to Intelligence Explosion Microeconomics

Yudkowsky has published a paper on the coming memetic takeover, calling it Intelligence Explosion Microeconomics. The paper defends the fast and risky take-off thesis that he uses to drum up support for his plan to save the world from the march of the machines.

In the paper, he recounts an early conversation with Ray Kurzweil:

The first time I happened to occupy the same physical room as Ray Kurzweil, I asked him why his graph of Moore's Law showed the events for "a $1000 computer is as powerful as a human brain," "a $1000 computer is a thousand times as powerful as a human brain," and "a $1000 computer is a billion times as powerful as a human brain," all following the same historical trend of Moore's Law.

I asked, did it really make sense to continue extrapolating the humanly observed version of Moore’s Law past the point where there were putatively minds with a billion times as much computing power? Kurzweil 2001 replied that the existence of machine superintelligence was exactly what would provide the fuel for Moore’s Law to continue and make it possible to keep developing the required technologies. In other words, Kurzweil 2001 regarded Moore’s Law as the primary phenomenon and considered machine superintelligence a secondary phenomenon which ought to assume whatever shape was required to keep the primary phenomenon on track.

Yudkowsky disagrees and proposes much faster models:

Our main data is the much-better-known Moore’s Law trajectory which describes how fast human engineers were able to traverse the difficulty curve over outside time. But we could still reasonably expect that, if our old extrapolation was for Moore’s Law to follow such-and-such curve given human engineers, then faster engineers should break upward from that extrapolation.

Or to put it more plainly, the fully-as-naive extrapolation in the other direction would be, “Given human researchers of constant speed, computing speeds double every 18 months. So if the researchers are running on computers themselves, we should expect computing speeds to double in 18 months, then double again in 9 physical months (or 18 subjective months for the 2x-speed researchers), then double again in 4.5 physical months, and finally reach infinity after a total of 36 months.”
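
The arithmetic behind that naive extrapolation is just a geometric series: 18 + 9 + 4.5 + ... = 36 months. A few lines verify the limit:

```python
# Naive extrapolation from the quoted passage: researchers running on
# computers halve each successive doubling time of physical months.
doubling_time = 18.0
elapsed = 0.0
for _ in range(50):  # 50 terms get within rounding error of the limit
    elapsed += doubling_time
    doubling_time /= 2
print(elapsed)  # -> 36.0 (approximately), the finite-time "singularity"
```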

The problem here is that Yudkowsky is ignoring cultural evolution - an old problem for him. The process responsible for Moore's law involves human engineers, but it also involves human culture, machines and software. Human engineers' DNA may have stayed unchanged over the last century, but their cultural software has improved dramatically over the same period - resulting in the Flynn effect. Machines and their software have also undergone progressive evolution over the same timescale - producing microcomputers and the internet, and a dramatic acceleration of civilization's progress.

Civilization's evolution involves a man-machine symbiosis. Its future evolution will consist of the machine element being up-regulated, while the man element becomes less prominent. Picturing this as a period of human-caused development followed by an era of machine-caused development is a hopelessly crude and impoverished model. Symbiology has much better models of symbiosis available than this.

Intelligence on the planet is already exploding. Current levels of progress are already due - to a large extent - to education, computers and the internet. As Bo Dahlbom put it:

You can't do much carpentry with your bare hands, and you can't do much thinking with your bare brain.

Moore's law isn't a function of static human engineers. It's humans plus culture, machines, computers and networks. The humans might be close to standing still, but none of the other components involved are.

Viewing the intelligence explosion as purely a future phenomenon fails to carve nature at the joints. Only by considering how this phenomenon is rooted in the present day can it be properly understood.