Yudkowsky has published a paper on the coming
memetic takeover, titled
Intelligence Explosion Microeconomics. The paper defends the
fast and risky thesis that he uses to drum up support for
his plan to save the world from the march of the machines.
In the paper, he recounts an early conversation with Ray Kurzweil:
The first time I happened to occupy the same physical room as Ray Kurzweil, I asked him why his graph of Moore’s Law showed the events for “a $1000 computer is as powerful as a human brain,” “a $1000 computer is a thousand times as powerful as a human brain,” and “a $1000 computer is a billion times as powerful as a human brain,” all following the same historical trend of Moore’s Law.
I asked, did it really make sense to continue extrapolating the humanly observed version of Moore’s Law past the point where there were putatively minds with a billion times as much computing power? Kurzweil (2001) replied that the existence of machine superintelligence was exactly what would provide the fuel for Moore’s Law to continue and make it possible to keep developing the required technologies. In other words, Kurzweil (2001) regarded Moore’s Law as the primary phenomenon and considered machine superintelligence a secondary phenomenon which ought to assume whatever shape was required to keep the primary phenomenon on track.
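For a sense of the scale being extrapolated across, here is a quick back-of-the-envelope check in Python (my illustration; the only input is the fixed 18-month doubling time conventionally attributed to Moore's Law, not anything from Kurzweil's graph):

```python
import math

# With computing power per dollar doubling every 18 months, how
# long does the trend take to span a billion-fold increase?
doublings = math.log2(1e9)   # ~29.9 doublings for a 10^9 factor
years = doublings * 18 / 12  # each doubling takes 1.5 years
print(f"{doublings:.1f} doublings -> about {years:.0f} years")
```

On an unchanging trend, the gap between "as powerful as a human brain" and "a billion times as powerful" is roughly 45 years - and extrapolating straight through that span is exactly the step Yudkowsky questions.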
Yudkowsky disagrees and proposes much faster models:
Our main data is the much-better-known Moore’s Law trajectory which describes how fast human engineers were able to traverse the difficulty curve over outside time. But we could still reasonably expect that, if our old extrapolation was for Moore’s Law to follow such-and-such curve given human engineers, then faster engineers should break upward from that extrapolation.
Or to put it more plainly, the fully-as-naive extrapolation in the other direction would be, “Given human researchers of constant speed, computing speeds double every 18 months. So if the researchers are running on computers themselves, we should expect computing speeds to double in 18 months, then double again in 9 physical months (or 18 subjective months for the 2x-speed researchers), then double again in 4.5 physical months, and finally reach infinity after a total of 36 months.”
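The "reach infinity after a total of 36 months" figure is just the limit of a geometric series. A minimal sketch of the arithmetic, in Python (the loop is my illustration, not code from the paper):

```python
# Naive recursive-speedup model: the researchers run at the current
# computing speed, so each doubling takes half as much physical time
# as the one before: 18 + 9 + 4.5 + ... months.
doubling_time = 18.0  # months for the first doubling (1x researchers)
elapsed = 0.0
for step in range(1, 11):
    elapsed += doubling_time
    print(f"doubling {step}: {2 ** step}x speed after {elapsed:g} months")
    doubling_time /= 2  # the researchers now run twice as fast

# The partial sums 18 * (1 + 1/2 + 1/4 + ...) approach, but never
# pass, 36 months - hence "infinity after a total of 36 months".
```

Each pass through the loop halves the physical doubling time, so the elapsed months converge on 36 while the speed multiplier grows without bound.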
The problem here is that Yudkowsky is ignoring
cultural evolution - an
old problem for him. The process that is responsible for Moore's law
involves human engineers, but it
also involves human culture, machines and software. The human engineers' DNA may have stayed unchanged over the last century, but their
cultural software has improved dramatically over that same period - resulting in
the Flynn effect. Further, machines and their software have
also undergone progressive evolution over the same timescale - resulting in microcomputers and the internet, and in a dramatic acceleration of civilization's progress.
Civilization's evolution involves a man-machine symbiosis. In its future evolution, the machine element will be up-regulated, while the man element becomes less prominent. Picturing this as a period of human-caused development followed by an era of machine-caused development is a hopelessly crude and impoverished model. Symbiology has much better models of symbiosis available than this.
Intelligence on the planet is already exploding. Current levels of progress are already due - to a large extent - to education, computers and the internet. As Bo Dahlbom put it:
You can't do much carpentry with your bare hands, and you can't do much thinking with your bare brain.
Moore's law isn't a function of static human engineers. It's humans plus culture, machines, computers and networks. The humans might be close to standing still, but none of the other components involved are.
Viewing the intelligence explosion as purely a future phenomenon fails to carve nature at the joints. Only by considering how this phenomenon is rooted in the present day can it be properly understood.