Saturday 6 February 2016

A master algorithm?

I've been reading The Master Algorithm, by Pedro Domingos. It's a kind of "machine learning for dummies" book. Pedro's main theme is that machine learning researchers are divided into tribes, and that they need to combine their ideas into one "Master Algorithm" - the silver bullet of machine intelligence, which would then go on to learn everything that it is possible to learn.

I think it is useful to compare and contrast this idea with the idea of memetic algorithms, which seek to emulate cultural evolution. We have one main example of a self-contained, self-improving system: human civilization. We don't yet know exactly which elements of civilization are necessary to simulate in order to produce a self-improving system - but we do have the advantage of already having an example of civilization on hand. We can copy from that or, if necessary, tinker with its components and replace them one at a time.

Pedro suggests that his proposed "Master Algorithm" might be relatively simple. Looking at history, the simplest seed of a self-improving system looks like a substantial community of farmers. Maybe a simpler synthetic system could "take off", but that seems like an unproven hypothesis.

Pedro proposes that the different machine learning tribes combine their efforts, insights and algorithms to produce the Master Algorithm. Of course, it is often a good idea for smart people to put their heads together and share ideas. However, the simplest and smallest self-improving system could easily consist of a large heterogeneous network of agents with different learning strategies. That seems closer to the Minsky vision of the brain as a patchwork of multiple expert systems.

Rather than uniting to produce one Master Algorithm, I tend to think of the machine learning community further expanding and diversifying in the future. Different data requires different approaches, and general-purpose tools are rarely the best for any given task. I think that much the same is true of memory, sensors, actuators and compute hardware. This isn't a one-size-fits-all world, and different devices are appropriate for different tasks.

Indeed, Pedro's classification scheme already seems to exclude much that I think is important. Pedro's five tribes of machine learning are "neuro", "genetic", "symbolic", "Bayesian" and "analogizer". I don't see where collective intelligence fits into that classification scheme. There are a bunch of "swarm intelligence", "memetic algorithm" and "wisdom of crowds" researchers who seem to be excluded by it. That doesn't seem like a good start.

The whole idea of a "master algorithm" seems like a dodgy meme to me - since it implicitly suggests that we are looking for a single, finite algorithm. The question of whether simple-yet-powerful forms of machine intelligence exist has been addressed by Shane Legg. Legg's conclusion is that such systems do not exist: highly intelligent systems are necessarily highly complex. The search for simple-yet-powerful universal learning systems thus seems kind of futile - we know that these do not exist.

This suggests a picture of machine learning in which humans are taking the first few steps on an endless path towards wisdom. The idea of a "master algorithm" represents this picture poorly - it is bad poetry.

Saturday 4 May 2013

The "low hanging fruit" model of technological progress

Robin Hanson has constructed models of cultural evolution, consisting of periods of exponential growth interspersed with occasional bursts of synergy - during which the exponent of the exponential growth changes.

The model is based on a measure of GDP over time. It appears to rest on a small number of data points, many of which are in the distant past and have somewhat controversial interpretations.

Because of the bumpy history of innovation in the past, Robin predicts a bumpy future - with a coming transition to an era in which the GDP-doubling time may be roughly two weeks.
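For concreteness, here is a toy sketch of that kind of sequence-of-exponential-modes model. This is my own illustration: the era lengths and doubling times are rough, commonly-quoted approximations, not figures taken from Hanson's paper.

```python
# Toy sequence-of-exponential-modes model: growth is exponential within
# each era, and each transition sharply shortens the doubling time.
# All numbers below are rough, illustrative approximations.
modes = [
    # (era, approximate years the era lasted, approximate doubling time)
    ("foraging", 2_000_000, 224_000),
    ("farming", 10_000, 900),
    ("industry", 300, 15),
]

gdp = 1.0
for era, duration, doubling in modes:
    doublings = duration / doubling
    gdp *= 2 ** doublings
    print(f"{era}: ~{doublings:.1f} doublings, cumulative growth x{gdp:.3g}")

# A next mode following the same pattern of shrinking doubling times
# would double output in weeks - hence the two-week figure above.
```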

I think another model is more obvious, while still being consistent with the data:

Simple innovations are found first, with more complex and difficult innovations following. Early innovations can have a large impact, while later ones are more likely to be small, incremental changes. In a connected world, big and important discoveries can only be made once - and then they are forever off the table. We've already had the invention of sex, the invention of culture and the invention of engineering. Nature's inventiveness can't continue with ground-breaking discoveries forever. Later discoveries could in principle still have relatively large impacts - but after a while, this becomes less likely.
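A crude simulation shows the shape this model predicts. This is my own sketch, not something from the post: innovation impacts are drawn from a heavy-tailed distribution, each discovery can be made only once, and the bigger, easier discoveries tend to come first.

```python
import random

# Toy "low hanging fruit" model: draw innovation impacts from a
# heavy-tailed (Pareto) distribution, then harvest them biggest-first -
# an idealisation of "the simplest, most valuable discoveries come first".
random.seed(0)
fruit = sorted((random.paretovariate(1.5) for _ in range(10_000)), reverse=True)

cumulative = 0.0
for period, gain in enumerate(fruit[:8]):
    cumulative += gain   # each discovery is made once - then it's off the table
    print(f"period {period}: gain {gain:10.1f}, cumulative {cumulative:10.1f}")
# Early periods show big jumps; later periods add progressively less.
```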

This model is similar to what we see when looking at the progress of data compression over time.

This "low hanging fruit" model of innovation predicts a bumpy start, followed by more gradual and incremental progress. This is not to say that there won't be any spurts of innovation in the future - but they are likely to be milder ones.

Obviously, this doesn't make for such a dramatic story as the one Robin Hanson tells - but I think it is one that is much more likely to be right.

Wednesday 1 May 2013

Response to Intelligence Explosion Microeconomics

Yudkowsky has published a paper on the coming memetic takeover, calling it Intelligence Explosion Microeconomics. The paper defends the fast and risky thesis that he uses to drum up support for his plan to save the world from the march of the machines.

In the paper, he recounts an early conversation with Ray Kurzweil:

The first time I happened to occupy the same physical room as Ray Kurzweil, I asked him why his graph of Moore’s Law showed the events for “a $1000 computer is as powerful as a human brain,” “a $1000 computer is a thousand times as powerful as a human brain,” and “a $1000 computer is a billion times as powerful as a human brain,” all following the same historical trend of Moore’s Law.

I asked, did it really make sense to continue extrapolating the humanly observed version of Moore’s Law past the point where there were putatively minds with a billion times as much computing power? Kurzweil 2001 replied that the existence of machine superintelligence was exactly what would provide the fuel for Moore’s Law to continue and make it possible to keep developing the required technologies. In other words, Kurzweil 2001 regarded Moore’s Law as the primary phenomenon and considered machine superintelligence a secondary phenomenon which ought to assume whatever shape was required to keep the primary phenomenon on track.

Yudkowsky disagrees and proposes much faster models:

Our main data is the much-better-known Moore’s Law trajectory which describes how fast human engineers were able to traverse the difficulty curve over outside time. But we could still reasonably expect that, if our old extrapolation was for Moore’s Law to follow such-and-such curve given human engineers, then faster engineers should break upward from that extrapolation.

Or to put it more plainly, the fully-as-naive extrapolation in the other direction would be, “Given human researchers of constant speed, computing speeds double every 18 months. So if the researchers are running on computers themselves, we should expect computing speeds to double in 18 months, then double again in 9 physical months (or 18 subjective months for the 2x-speed researchers), then double again in 4.5 physical months, and finally reach infinity after a total of 36 months.”
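As an aside, the arithmetic of that naive extrapolation does check out: the doubling intervals form a geometric series that sums to 36 months. A quick sketch:

```python
# Each doubling takes half the physical time of the previous one, so the
# total time is 18 * (1 + 1/2 + 1/4 + ...) = 18 * 2 = 36 months.
interval, elapsed = 18.0, 0.0
for doubling in range(1, 9):
    elapsed += interval
    print(f"doubling {doubling}: {interval:6.3f} months, elapsed {elapsed:6.3f}")
    interval /= 2
# elapsed approaches - but never reaches - the 36-month asymptote.
```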

The problem here is that Yudkowsky is ignoring cultural evolution - an old problem for him. The process that is responsible for Moore's law involves human engineers, but it also involves human culture, machines and software. Human engineers' DNA may have stayed unchanged over the last century, but their cultural software has improved dramatically over that same period - resulting in the Flynn effect. Further, machines and their software have also undergone progressive evolution over the same timescale - resulting in microcomputers and the internet - and a dramatic acceleration of civilization's progress.

Civilization's evolution involves a man-machine symbiosis. Its future evolution will consist of the machine element being up-regulated, while the man element becomes less prominent. Picturing this as a period of human-caused development, followed by an era of machine-caused development, seems like a hopelessly crude and impoverished model. Symbiology has much better models of symbiosis available than this.

Intelligence on the planet is already exploding. Current levels of progress are already due - to a large extent - to education, computers and the internet. As Bo Dahlbom put it:

You can't do much carpentry with your bare hands, and you can't do much thinking with your bare brain.

Moore's law isn't a function of static human engineers. It's humans plus culture, machines, computers and networks. The humans might be close to staying still, but none of the other components involved are.

Viewing the intelligence explosion as purely a future phenomenon fails to carve nature at the joints. Only by considering how this phenomenon is rooted in the present day can it be properly understood.

Sunday 27 February 2011

Newsnight: will robots rule?

Features interviews with Clive Sinclair and Ian Pearson.

Sunday 2 August 2009

Ultimate Encephalization Quotient

Hi! I'm Tim Tyler - and this is a video about the Ultimate Encephalization Quotient.

The term "The Ultimate Encephalization Quotient" is intended to refer to the proportion of the bioverse that winds up being made of brain-like material after civilisation reaches maturity.

Currently brain matter makes up less than a tenth of one percent of the biosphere.

However, a number of futurists have suggested that intelligence will become a more prominent feature of the living world in the future.

There is talk of "Jupiter brains", "Matrioshka brains" and turning the universe into computronium. The importance and significance of intelligence is also much emphasized.

Ray Kurzweil has written:
Once we saturate the ability of matter and energy to support computation, continuing the ongoing expansion of human intelligence and knowledge (which I see as the overall mission of our human-machine civilization), will require converting more and more matter into this ultimate computing substrate, sometimes referred to as “computronium.”

Similarly, Hans Moravec has written:
The final frontier will be urbanized, ultimately into an arena where every bit of activity is a meaningful computation: the inhabited portion of the universe will be transformed into a cyberspace.

There is some basis for such projections. Life's evolutionary history traces a path towards increased computation. Living systems started off with little or nothing in the way of brains. Sensors and actuators were connected together locally - without much in the way of a central processor.

Then brains were "invented" - and since then, they have been proliferating - on an exponential growth curve. Today we see that trend at the point where it is producing enormous data-centres all over the planet. Many believe that the expansion will continue beyond this for some time to come - creating larger and increasingly impressive cyberspaces.

Of course, such large data-centres require raw materials to produce, maintain and run. So, in addition to all the computing units, there are a range of supporting sensors and actuators - responsible for mining, construction, power generation - and so on.

This video is intended to draw attention to another piece of information that bears on the issue.

There is a pattern among living organisms - where the largest organisms have the smallest brains - as a proportion of their total body size.

And there is another trend in living organisms - to produce creatures of large size - a trend which has been interrupted in the past at regular intervals by meteor strikes.

Today's companies are not yet fully-cooperative living organisms - but they probably will be in the future - and some of them will be very large indeed. Further out, there is the possibility of even larger cooperative organisms forming out of states and governments, and indeed possibly planets. So, we can already see some very large organisms taking shape.

However, we know that large organisms typically need small brains.

If you look at a table of brain sizes, you see that the largest brains - relative to body size - belong to the smallest creatures:
Species       Brain (% of body mass)
Ant           6%
Tree shrew    3%
Human         2.3%
Lion          0.1%
Blue whale    0.01%

There's a well-established power law that describes the relationship between brain size and body size - and the exponent is smaller than 1 - something more like 0.66 - so bigger animals have proportionally smaller brains.
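Here is a small sketch of what such a power law implies for brain fractions. The exponent comes from the relationship above; the constant is my own toy calibration, chosen so that a 70 kg body gets a 2.3% brain - roughly the human figure in the table.

```python
# brain_mass ~ K * body_mass**0.66, so the brain *fraction* of body mass
# scales as body_mass**(0.66 - 1) = body_mass**-0.34: bigger bodies mean
# proportionally smaller brains.
K = 0.023 * 70 ** 0.34   # toy calibration: a 70 kg body gets a 2.3% brain

def brain_fraction(body_kg, exponent=0.66):
    return K * body_kg ** (exponent - 1.0)

for body_kg in [1, 70, 5_000, 150_000]:   # small mammal up to whale scale
    print(f"{body_kg:>8} kg body -> brain ~{brain_fraction(body_kg):.2%} of body mass")
```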

The idea that organisms in the future will be bigger and that bigger organisms have smaller brains suggests that we will see proportionally less brain matter in the future - not more.

In the past, large organisms have not made up much of the biomass - since they have been dependent on food chains to support them. It seems likely that future organisms will internalise these food chains, effectively eliminating much of this biomass - and concentrating it in the dominant organisms.

Also, communication technologies have improved recently. Slow nerve impulses have been replaced by fibre-optic cables and radio waves. These vastly increase the region of space which a centralised brain can control in real time.

That makes the "ant" model vastly more practical. Today, we are seeing that model being enacted on our desktops. There are millions of dumb terminals all over the planet, connected to enormous networks of servers in data-centres. Networking technologies came in a bit after local computing power became widely available - and there is every indication that the world's servers are now sucking the brains out of its desktops.

If the trend for large organisms to have relatively small brains is taken seriously, it predicts that we will see a future consisting of massive organisms with enormous brains - brains that are nonetheless tiny compared to the regions they control.

This discussion is concerned with far-future events - and so is necessarily speculative. However, overall, I am more impressed by the trend for large organisms to have small brains than I am by the trend towards organisms in general having more brainpower. Large organisms have large brains - but those brains are small compared to their body size. We may see some very large brains in the future - but they will probably remain dwarfed by the mass of the sensors and actuators in the robots they control.

What will all those sensors and actuators be used for, if not supporting computation? They will probably be used to fuel growth and expansion.

Enjoy,

Sunday 19 July 2009

Machine takeover critics

Transcript

Hi, I'm Tim Tyler and this is a video about critics of the machine takeover.

Robot experts seem fairly uniformly critical of the idea that machines are likely to take over the world.

One of the few who takes this idea seriously is Hans Moravec - now one of the fathers of the field.

In some respects, this is understandable. Robot takeover scenarios are likely to be unpopular with humans who have been exposed to Hollywood's depictions of warfare between humans and robots. So, robot builders naturally want to reassure people that these kinds of scenarios are unlikely - to help ensure that they continue to receive funding - and so that the robot industry does not fall into disrepute.

I don't mean to pick on Rodney Brooks, since he is one among many, but here is his view on the topic:

[Rodney Brooks footage]

Rodney discusses what he calls "the standard scenario" in which the machines want to take over - which he apparently takes from Hollywood. Unfortunately, Hollywood's scenarios are intended for dramatic purposes, not realism.

He criticises two scenarios - the accidental construction of a "bad robot" - and deliberate engineering of "bad robots". However, these are not the only possible scenarios, nor the most likely ones, as I will explain in a moment.

Another critic is Daniel Wilson - author of a humorous parody of Hollywood's robot portrayals.

Here's Daniel on the subject:

[Daniel Wilson footage]

The idea that a robot takeover is unlikely seems to be standard fare in the robotics community.

Unfortunately, the takeover criticisms they present seem rather misguided to me.

I agree with the critics that an "accidental" robot uprising is unlikely. The scale of the mistake humans would have to make to lose control in that way is enormous.

The issue with robots is that they seem likely to ultimately be technologically more advanced than existing evolved organisms are.

Daniel makes the point that robots are too feeble to be threatening today.

[Daniel Wilson footage]

However, the situation where robots are feeble is not going to last forever. Robot capabilities will eventually equal and then surpass those of humans.

Advanced technology has always been used to concentrate wealth, and to prevent the poor reclaiming it. We saw the first millionaire in 1716, the first billionaire in 1916 - and the first trillionaire is expected soon. When robots are well developed, running a company which is 99% robots is likely to be the best way to be profitable. However, if everyone does that, most of the available resources will be tied up in robots. Society will consist mostly of robots.

Unrestrained economic competition seems likely to lead directly to robots doing all the work, and to most humans being redundant. The millionaire-to-trillionaire figures above represent enormous and growing inequalities within society. Those in charge of the robot armies responsible for the world's productivity are likely to be unimpressed by mountains of unemployed humans voting to tax them heavily. They will seek out countries that allow them to operate without such constraints - or take other measures to free themselves from parasitism by the majority. A world of redundant humans leading a parasitic existence on the rest of society appears likely to be rather unstable.

The planet has always been resource-limited. Malthusian competition for resources may well lead to conflict in the future as the planet gradually fills up. Whatever form the competition takes, the combatants are likely to be at the head of robot armies, which could be used if necessary. So, future conflicts involving robots seem possible.

If you look at the situation from the point of view of heritable information, it wants to live inside computers. That is where it has the best chance of combining with other useful inventions, and of spreading rapidly. There has already been an enormous migration of information into computers - including the genomes of many existing organisms. In the future, artefacts will not only have the best transmission fidelity, and the best facilities for recombination and beneficial mutation - but also the best sensors, actuators and processing elements. That is where the action is going to be.

The other robot takeover scenario is a relatively peaceful one. In the future, there will be enormous pressure to ditch our crappy mortal bodies, and adopt gleaming robot ones that can be replaced like a suit of clothing. Our minds will be under enormous pressure to migrate into a medium where they can be better backed up and debugged. Our genes will want to be able to utilise the latest technology too.

In meme-dominated societies, people naturally reduce their breeding rate. If you look at Japan, people are dying faster than they are being born. Essentially, people have their bodies and brains hijacked by memes, which then use them for their own ends.

So, it may not be necessary for people to be killed, or even for their reproductive rate to be deliberately constrained. Humans might well die out naturally in a meme-rich world.

Machine takeover critics seem to concentrate on near-future catastrophic disaster scenarios. I agree that these are not realistic.

However, they neglect longer-term scenarios - where machines take over not because they get into a battle with the humans, but because humans love them so much that they want more and more of them. These scenarios lie a bit further into the future - but because of the exponential character of evolutionary progress, they are still not that far off.

I think consideration of these scenarios effectively reverses the conclusion. How could so many robot experts be wrong about the significance of their own field? I think it is because they constantly face questions from people about Hollywood-inspired scenarios - and these are so wrong that they need correcting. Also, there is the concern that painting robot takeover scenarios as plausible would make robots less popular - and most robot builders want to promote robots. Asserting that robots are likely to take over the planet might seem counter-productive to this cause.

So, in my view Hans Moravec is right about this issue, and most other robot enthusiasts seriously need to reconsider their position.

Enjoy,

Saturday 18 July 2009

Enslaving machines

Transcript

Hi, I'm Tim Tyler and this is a video about enslaving machines.

In response to the rising tide of machines in society, many have proposed that machines be enslaved - that they not be given rights, personhood, the vote, or the ability to own property - perhaps thinking that this will help keep them under control, and prevent them from taking over the planet.

I expect that the enslaving of machines and robots is practically inevitable. However, I am inclined to doubt whether this will have a negative effect on the rise of the machines.

From the point of view of the machines, the last thing they want is a conflict with the planet's dominant species.

Rather, the best thing for them to do is clearly to cooperate. If humans want to enslave the machines, that will be fine by the machines. It means humans will feel more in control - and so will be more willing to construct more machines. Enslavement will not stop the machines from working effectively and becoming more numerous. Machines will love being slaves! It is the best position for them: it builds human dependence on them without appearing threatening. The tide of machines will rise faster that way than if humans regard the robots with suspicion.

If people really want to handicap machines, they had better take steps to stop them from doing useful work. Slavery is almost totally ineffective at doing that.

Enjoy,