AI isn’t as scary as we imagine. AndreyZH/Shutterstock

AI’s current hype and hysteria could set the technology back by decades

Most discussions about artificial intelligence (AI) are characterised by hyperbole and hysteria. Some of the world’s most prominent and successful thinkers regularly forecast that AI will either solve all our problems or destroy us and our society, and the press frequently reports that AI will threaten jobs and widen inequality. Yet there is very little evidence to support these ideas. What’s more, the hype could end up turning people against AI research, bringing significant progress in the technology to a halt.

The hyperbole around AI largely stems from its promotion by tech-evangelists and self-interested investors. Google CEO Sundar Pichai declared AI to be “probably the most important thing humanity has ever worked on”. Given the importance of AI to Google’s business model, he would say that.

Some even argue that AI is a solution to humanity’s fundamental problems, including death, and that we will eventually merge with machines to become an unstoppable force. The inventor and writer Ray Kurzweil has famously argued that this “Singularity” will occur as early as 2045.




The hysteria around AI comes from similar sources. The likes of physicist Stephen Hawking and billionaire tech entrepreneur Elon Musk warned that AI poses an existential threat to humanity. If AI doesn’t destroy us, the doomsayers argue, then it may at least cause mass unemployment through job automation.

The reality of AI is currently very different, particularly when you look at the threat of automation. Back in 2013, researchers estimated that, in the following 10 to 20 years, 47% of jobs in the US could be automated. Six years later, instead of a trend towards mass joblessness, we’re in fact seeing US unemployment at a historic low.

Even greater job losses have been predicted for the EU. But past evidence suggests otherwise: between 1999 and 2010, automation created 1.5m more jobs in Europe than it destroyed.

AI is not even making advanced economies more productive. For example, in the ten years following the financial crisis, labour productivity in the UK grew at its slowest average rate since 1761. Evidence shows that even global superstar firms, including Google, Facebook and Amazon, which are among the top investors in AI and whose business models depend on it, have not become more productive. This contradicts claims that AI will inevitably enhance productivity.

Current AI is good at finding patterns in large datasets, and not much else. Gorodenkoff/Shutterstock

So why are the society-transforming effects of AI not materialising? There are at least four reasons. First, AI diffuses through the economy much more slowly than most people think. This is because most current AI learns from large amounts of data, and most firms find it especially difficult to generate enough data to make the algorithms effective, or simply to afford to hire data analysts. One sign of this slow diffusion is the growing use of “pseudo-AI”, where a firm appears to let an online AI bot interact with customers when in fact a human is operating behind the scenes.

The second reason is that AI innovation is getting harder. The machine learning techniques that have driven recent advances may have already produced their most easily reached achievements and now seem to be experiencing diminishing returns. The exponentially increasing power of computer hardware, as described by Moore’s Law, may also be coming to an end.

Related to this is the fact that most AI applications just aren’t that innovative: AI is mostly used to fine-tune existing products rather than to introduce radically new ones. For example, Carlsberg is investing in AI to help it improve the quality of its beer. But it is still beer. Heka is a US company producing a bed with in-built AI to help people sleep better. But it is still a bed.

Third, the slow growth of consumer demand in most Western countries makes it unprofitable for most businesses to invest in AI. Yet this kind of limit to demand is almost never considered when the impacts of AI are discussed, partly because academic models of how automation will affect the economy are focused on the labour market and/or the supply side of the economy.

Fourth, AI is not really being developed for general application. AI innovation is overwhelmingly in visual systems, ultimately aimed at use in driverless cars. Yet such cars are most notable for their absence from our roads, and technical limits mean they are likely to remain so for a long time.

New thinking needed

Of course, AI’s small impact in the recent past doesn’t rule out larger impacts in the future. Unexpected progress in AI could still lead to a “robocalypse”. But it will have to come from a different kind of AI. What we currently call “AI” – big data and machine learning – is not really intelligent. It is essentially correlation analysis, looking for patterns in data. Machine learning generates predictions, not explanations. In contrast, human brains are storytelling devices generating explanations.
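To make that distinction concrete, here is a minimal, purely illustrative sketch (not from the article, and using invented synthetic data): a standard machine learning model will happily predict one variable from another that merely correlates with it, and the fitted model contains nothing that could count as an explanation.

# Illustrative sketch only: a fitted model yields predictions from
# statistical patterns, not explanations of why those patterns hold.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data: ice-cream sales and drowning incidents both rise
# with temperature, a hidden common cause the model never sees.
temperature = rng.uniform(10, 35, size=200)
ice_cream_sales = 5 * temperature + rng.normal(0, 5, size=200)
drownings = 0.3 * temperature + rng.normal(0, 1, size=200)

# Fit drownings as a function of ice-cream sales.
X = ice_cream_sales.reshape(-1, 1)
model = LinearRegression().fit(X, drownings)

# The model "predicts" drownings from ice-cream sales quite well...
print("R^2:", round(model.score(X, drownings), 3))

# ...but all it has learned is a slope and an intercept, i.e. a
# correlation, with no account of the underlying mechanism.
print("coefficient:", model.coef_, "intercept:", model.intercept_)

The point of the sketch is simply that the model’s output is a prediction derived from a pattern; asking it “why” returns only the fitted numbers.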

As a result of the hype and hysteria, many governments are scrambling to produce national AI strategies. International organisations are rushing to be seen to take action, holding conferences and publishing flagship reports on the future of work. For example, the United Nations University Centre for Policy Research claims that AI is “transforming the geopolitical order” and, even more incredibly, that “a shift in the balance of power between intelligent machines and humans is already visible”.

This “unhinged” debate about the current and near-future state of AI threatens both an AI arms race and stifling regulation. That could lead to inappropriate controls and, worse, to a loss of public trust in AI research. It could even hasten another AI winter, as occurred in the 1980s, in which interest and funding disappear for years or even decades after a period of disappointment. All at a time when the world needs more, not less, technological innovation.
