Science, like any other field that attracts investment, is prone to bubbles. Overly optimistic investment in scientific fields, research methods and technologies generates episodes comparable to those financial markets experience before a crash.
Assessing the toxic intellectual debt that builds up when too much liquidity is concentrated in too few assets is an important task if research funders want to avoid being left holding overvalued research.
The cause of the financial market meltdown is obvious: leveraged trading in financial instruments that bear no relation to the things they are supposed to be secured against. Science, too, is a market in which the value of research is ultimately secured against objects in the world. If the world is not as it appears in a research paper, does the research have value?
A paper that claims that smoking causes cancer or that terrorism is caused by poverty is valuable only if it turns out to be a good explanation of cancer or terrorism. As recently noted by Philip Gerrans at the University of Adelaide, “[It] is why an original and true explanation is the gold standard of academic markets.”
Hunting for bubbles
Consider the recent investments in neuroscience. No one with an interest in scientific trends and science policy will have failed to notice that cognitive neuroscience is the next big thing. This narrative has been around for at least a decade, but now it is getting serious.
Take the recent award by the European Commission of €1 billion (US$1.3 billion) to the Human Brain Project to build a “supercomputer replica of the human brain” or the US$1 billion Brain Activity Map project – “the largest and most ambitious effort in fundamental biology since the Human Genome Project” – endorsed by the US president, Barack Obama, in January 2013.
As with leveraged investments in mortgage bonds, the bureaucrats making these awards have little or no competence to determine how such massive projects will turn out. Whether or not the expectations are realised, research funding is framed around the promise that neuroscience will translate into jobs and growth. Neurotechnology – brain-based devices, drugs and diagnostics – is projected to be a US$145 billion industry by 2025.
It should come as little surprise, then, to see newly emerging fields that attach “neuro” to some human trait – neuroeconomics, neuromarketing, neuropsychiatry, neuroethics – with the expectation that the techniques of neuroscience will explain the relevant human behaviour and practice.
Impending neurobubble?
The generous provision of funding for projects in neuroscience creates the first precondition for a science bubble. Add to this a second precondition: the presence of speculators. Researchers and directors of research institutes alike hedge their bets by backing research strategies that follow the current fashions, publication channels and funding streams. Consider, for instance, this visionary statement from the prominent experimental neuroscientist Semir Zeki:
It is only by understanding the neural laws that dictate human activity in all spheres – in law, morality, religion and even economics and politics, no less than in art – that we can ever hope to achieve a more proper understanding of the nature of man.
And this tremendous claim from a recent interview with the principal investigator of the Human Brain Project, Henry Markram:
Once you have built a [supercomputer replica of the human] brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own.
Combine these promises with a series of results from social psychology suggesting that peer reviewers, students and lay citizens find explanations of psychological phenomena more convincing when they contain neuroscientific information – even when that information is irrelevant to the explanation.
This situation resembles two well-documented phenomena from social psychology and behavioural economics: “pluralistic ignorance” (where a majority of a group privately rejects a norm but incorrectly assumes that most others accept it, and so goes along with it) and the “bystander effect” (where the more bystanders there are, the less likely any one of them is to help a victim). In other words, everyone can see something is wrong, but everyone expects someone else to do something about it. Both effects have been shown to significantly distort how people process information and make judgements.
Overly optimistic research programmes and claims of future scientific impact crowd out the more modest and pluralist research strategies pursued by scientists in search of novel explanations and solid evidence. And that, not mainlining what may turn out to be a science bubble, is what science is all about.