
Artificial intelligence – can we keep it in the box?

“We should stop treating intelligent machines as the stuff of science fiction.”

We know how to deal with suspicious packages – as carefully as possible! These days, we let robots take the risk. But what if the robots are the risk? Some commentators argue we should be treating AI (artificial intelligence) as a suspicious package, because it might eventually blow up in our faces. Should we be worried?

Exploding intelligence?

Asked whether there will ever be computers as smart as people, the US mathematician and sci-fi author Vernor Vinge replied: “Yes, but only briefly”.

He meant that once computers get to this level, there’s nothing to prevent them getting a lot further very rapidly. Vinge christened this sudden explosion of intelligence the “technological singularity”, and thought that it was unlikely to be good news, from a human point of view.

Was Vinge right, and if so what should we do about it? After all, unlike a typical suspicious parcel, what the future of AI holds is up to us, at least to some extent. Are there things we can do now to make sure it’s not a bomb (or that it’s a good bomb rather than a bad one, perhaps)?

AI as a low achiever

Optimists sometimes take comfort from the fact that the field of AI has a very chequered past. Periods of exuberance and hype have been mixed with so-called “AI winters” – times of reduced funding and interest, after promised capabilities fail to materialise.

Some people point to this as evidence that machines are never likely to reach human levels of intelligence, let alone to exceed them. Others point out that the same could have been said about heavier-than-air flight.


The history of that technology, too, is littered with naysayers (some of whom refused to believe reports of the Wright brothers’ success, apparently). For human-level intelligence, as for heavier-than-air flight, naysayers need to confront the fact that nature has already managed the trick: think brains and birds, respectively.

A good naysaying argument needs a reason for thinking that, in the case of AI, human technology can never reach the bar that nature has set.

Pessimism is much easier. For one thing, we know nature managed to put human-level intelligence in skull-sized boxes, and that some of those skull-sized boxes are making progress in figuring out how nature does it. This makes it hard to maintain that the bar is permanently out of reach of artificial intelligence – on the contrary, we seem to be improving our understanding of what it would take to get there.

Moore’s Law and narrow AI

On the technological side of the fence, we seem to be making progress towards the bar, in both hardware and software terms. In the hardware arena, Moore’s law – the observation that the number of transistors we can fit on a chip, and with it the computing power available, doubles roughly every two years – shows little sign of slowing down.

In the software arena, people debate the possibility of “strong AI” (artificial intelligence that matches or exceeds human intelligence) but the caravan of “narrow AI” (AI that’s limited to particular tasks) moves steadily forward. One by one, computers take over domains that were previously considered off-limits to anything but human intellect and intuition.

We now have machines that have trumped human performance in such domains as chess, trivia games, flying, driving, financial trading, face, speech and handwriting recognition – the list goes on.

Along with the continuing progress in hardware, these developments in narrow AI make it harder to defend the view that computers will never reach the level of the human brain. A steeply rising curve and a horizontal line seem destined to intersect!
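To see why that intuition is so hard to shake, here is a toy back-of-the-envelope sketch in Python. The starting level and the height of the bar are purely illustrative placeholders, not estimates of real hardware or of the brain’s capacity; the point is simply that any quantity which keeps doubling every couple of years eventually crosses any fixed bar, however high.

```python
# Toy illustration only: how long a steadily doubling quantity takes to cross a fixed bar.
# The numbers below are made-up placeholders, not estimates of real computing power
# or of the brain's capacity.

def years_to_cross(start, bar, doubling_period_years=2.0):
    """Return the years until a quantity doubling every `doubling_period_years` exceeds `bar`."""
    years = 0.0
    level = start
    while level < bar:
        level *= 2
        years += doubling_period_years
    return years

if __name__ == "__main__":
    # Even a bar a million times above today's level is crossed after about
    # 20 doublings (2 ** 20 > 1,000,000), i.e. roughly 40 years of steady doubling.
    print(years_to_cross(start=1.0, bar=1_000_000.0))  # -> 40.0
```

Whatever the true numbers turn out to be, the shape of the argument is the same: a fixed bar against an exponential curve is a race the bar eventually loses, unless the doubling stops first.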

What’s so bad about intelligent helpers?

Would it be a bad thing if computers were as smart as humans? The list of current successes in narrow AI might suggest pessimism is unwarranted. Aren’t these applications mostly useful, after all? A little damage to Grandmasters’ egos, perhaps, and a few glitches on financial markets, but it’s hard to see any sign of impending catastrophe on the list above.

That’s true, say the pessimists, but as far as our future is concerned, the narrow domains we yield to computers are not all created equal. Some areas are likely to have a much bigger impact than others. (Having robots drive our cars may completely rewire our economies in the next decade or so, for example).

The greatest concerns stem from the possibility that computers might take over domains that are critical to controlling the speed and direction of technological progress itself.

Software writing software?

What happens if computers reach and exceed human capacities to write computer programs? The first person to consider this possibility was the Cambridge-trained mathematician I J Good (who worked with Alan Turing on code-breaking at Bletchley Park during the second world war, and later on early computers at the University of Manchester).

In 1965 Good observed that having intelligent machines develop even more intelligent machines would result in an “intelligence explosion”, which would leave human levels of intelligence far behind. He called the creation of such a machine “our last invention” – which is unlikely to be “Good” news, the pessimists add!


In the above scenario, the moment computers become better programmers than humans marks the point in history where the speed of technological progress shifts from the speed of human thought and communication to the speed of silicon. This is a version of Vernor Vinge’s “technological singularity”: beyond this point, the curve is driven by new dynamics and the future becomes radically unpredictable, which is just what Vinge had in mind.

Not just like us, but smarter!

It would be comforting to think that any intelligence that surpassed our own capabilities would be like us in important respects – just a lot cleverer. But here, too, the pessimists see bad news: they point out that almost all the things we humans value (love, happiness, even survival) are important to us because we have a particular evolutionary history – a history we share with higher animals, but not with computer programs such as artificial intelligences.

By default, then, we seem to have no reason to think that intelligent machines would share our values. The good news is that we probably have no reason to think they would be hostile, as such: hostility, too, is an animal emotion.


The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen.

People sometimes complain that corporations are psychopaths if they are not sufficiently reined in by human control. The pessimistic prospect here is that artificial intelligence might be similar, except much, much cleverer and much, much faster.

Getting in the way

By now you can see where this is going, according to this pessimistic view. The concern is that by creating computers that are as intelligent as humans (at least in domains that matter to technological progress), we risk yielding control over the planet to intelligences that are simply indifferent to us, and to things that we consider valuable – things such as life and a sustainable environment.

If that sounds far-fetched, the pessimists say, just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.

How much time do we have?

It’s hard to say how urgent the problem is, even if the pessimists are right. We don’t yet know exactly what makes human thought different from the current generation of machine-learning algorithms, for one thing, so we don’t know the size of the gap between the fixed bar and the rising curve.

But some trends point towards the middle of the present century. In Whole Brain Emulation: A Roadmap, the Oxford philosophers Anders Sandberg and Nick Bostrom suggest our ability to scan and emulate human brains might be sufficient to replicate human performance in silicon around that time.

The pessimists might be wrong!

Of course – making predictions is difficult, as they say, especially about the future! But in ordinary life we take uncertainties very seriously, when a lot is at stake.


That’s why we use expensive robots to investigate suspicious packages, after all (even when we know that only a very tiny proportion of them will turn out to be bombs).

If the future of AI is “explosive” in the way described here, it could be the last bomb the human species ever encounters. A suspicious attitude would seem more than sensible, then, even if we had good reason to think the risks are very small.

At the moment, even that degree of reassurance seems out of our reach – we don’t know enough about the issues to estimate the risks with any high degree of confidence. (Feeling optimistic is not the same as having good reason to be optimistic, after all).

What to do?

A good first step, we think, would be to stop treating intelligent machines as the stuff of science fiction, and start thinking of them as a part of the reality that we or our descendants may actually confront, sooner or later.

Once we put such a future on the agenda, we can begin some serious research about ways to ensure that out-sourcing intelligence to machines would be safe and beneficial, from our point of view.

Perhaps the best cause for optimism is that, unlike ordinary ticking parcels, the future of AI is still being assembled, piece by piece, by hundreds of developers and scientists throughout the world.

The future isn’t yet fixed, and there may well be things we can do now to make it safer. But this is only a reason for optimism if we take the trouble to make it one, by investigating the issues and thinking hard about the safest strategies.

We owe it to our grandchildren – not to mention our ancestors, who worked so hard for so long to get us this far! – to make that effort.



Further information:
For a thorough and thoughtful analysis of this topic, we recommend The Singularity: A Philosophical Analysis by the Australian philosopher David Chalmers. Jaan Tallinn’s recent public lecture The Intelligence Stairway is available as a podcast or on YouTube via Sydney Ideas.


The Centre for the Study of Existential Risk
The authors are the co-founders, together with the eminent British astrophysicist Lord Martin Rees, of a new project to establish a Centre for the Study of Existential Risk (CSER) at the University of Cambridge.

The Centre will support research to identify and mitigate catastrophic risk from developments in human technology, including AI – further details at CSER.ORG.
