Terminate your concerns. San Diego Shooter/Flickr

Super intelligent machines aren’t to be feared

Fear of machines becoming smarter than humans is a standard part of popular culture. In films like I, Robot and Terminator, humans are usurped. Throughout history we can trace stories about humankind overreaching through a desire to understand and copy ourselves, from Ancient Greek mythology to Milton’s Paradise Lost and Shelley’s Frankenstein. Today’s Prometheans are supposedly scientists working on artificial intelligence (AI), who run the risk of creating machines intelligent enough to supersede us.

But this is no mere convention of dystopian science fiction; worries such as these are expressed by academics too. A recent review by Luke Muehlhauser, Executive Director of the Machine Intelligence Research Institute, suggests that “the default outcome from advanced AI is human extinction”. If this is true, we definitely have cause for concern.

Muehlhauser and others are worried we will reach a point in the future where AI will surpass human intelligence, a moment often referred to as the “singularity”. Once the singularity is reached, so-called “runaway AI” might continue to improve itself at an accelerating rate, leaving human intelligence far behind.

It may seem that our foot is on the path to destruction, but this sense of foreboding is based on a particular assumption about how we should compare AI to human intelligence. Human intelligence is usually thought of as the raw brain-power of the average individual. The human brain evolved to its current capacity around 100,000 years ago. It has not changed much since then and is unlikely to improve any time soon. Based on this comparison, it seems plausible that AI could surpass us in many respects in the near future.

But other comparisons might be more appropriate and more informative. For instance, perhaps we should be comparing AI with the collective intelligence of humanity. After all, as an entity, AI can stretch across multiple machines. Likewise, the human race amounts to much more than the sum of its parts when we share our capabilities.

And why strip us humans of our intelligence-enhancing artifacts? Since the Stone Age, about 50,000 years ago, humans have used language to store and communicate knowledge, boosting our individual and collective reasoning capacity. Computers, the internet, even AI itself, are just the most recent additions to a set of technologies whose earlier members include red ochre (for cave painting), papyrus, the abacus, the printing press, typewriter and telephone.

These intelligence-boosting technologies have hugely expanded our ability to apply shared knowledge and control our environment according to our goals. This historical acceleration could easily be described as “runaway human intelligence”, as cultural and scientific development has led to a larger, longer-lived and better-educated human species.

Now, with advanced communication technologies such as smartphones, we can share our intelligence better than ever before. In this way, we contribute the raw processing power of our individual brains to what Francis Heylighen has called the “Global Brain”.

Visions of the future: it’s not all death and destruction for humanity. Cayusa/Flickr

This enhanced, species-level intelligence has no obvious ceiling. We can continue to create technologies that complement our natural intelligence, which will allow us to communicate faster and make us collectively smarter. Comparing future AI with the Global Brain puts a singularity event much further off, and makes it much less plausible that humanity will be left behind in the intelligence race.

Worrying scenarios remain, though. It could be that a split emerges between AI and the Global Brain, where a sneaky and malevolent AI attempts to conceal its advances from humanity. Like Skynet, the self-aware AI in Terminator, it could bide its time until it is ready to eliminate all unnecessary humans.

But this scenario underestimates the contribution of our biological intelligence to any future human-machine collective. There are many things that we do exceptionally well and which are hard for machines to master, because they lack the same richness of sensory and motor interaction with the world. Researchers are working hard on this challenge, and making some progress, with organisations like the Convergent Science Network providing a venue for collaboration and communication.

Still, the limitations of robots are obvious at events like the RoboCup 2013 tournament. Though they can pass the ball around, their awareness, dexterity and flexibility remain a very pale imitation of ours.

These ‘kid-sized’ robots still can’t bend it like Beckham.

Acting in, and understanding, the world are skills in which humans excel, and intelligent machines will need us around to interpret the world for them for a long time to come. There is no real economic incentive for replacing this aspect of human intelligence, either. Machines will continue to be engineered to take on the tasks we do poorly, rather than the ones we do well. Like symbiotic systems in nature, the future partnership of people with intelligent machines will be successful because its two halves complement, rather than copy, each other.

The most plausible scenario is that our collective intelligence will continue its runaway path. Greater and deeper integration between humans and our intelligence-enhancing technologies will result in an increasingly bio-hybrid form: part biological, part artificial. What is good for AI will then also be good for us.
