The uncertainty of the future is a universal concern. Humans have always tried to see into the future with the aid of specialist knowledge – from the days when rulers consulted haruspices and diviners, to our contemporary obsession with risk assessment. Exposed to natural hazards throughout different phases of human history – from predators when we were hunter-gatherers to droughts and pests when we became agriculturalists – we’ve tried to find ways of knowing more about what might be around the corner.
Today we handle the future using probabilities rather than animal entrails. Scientific tools, of which statistical analysis is one, have allowed us to intervene in the natural world in ways our ancestors would not have imagined possible. At the same time, we still share some of their vulnerability to natural hazards. But thanks to our ability to transform the world around us, we also increasingly face hazards that are no longer simply natural.
Climate change and bacterial antibiotic resistance are two examples, both being unintended and unwanted outcomes of natural processes combined with human interventions. The recent floods in the UK, and their relationship to flood management practices, represent another example – in which climate change is also implicated. The question vexing policymakers in the wake of these events is: how should we decide what to do about their likely future recurrence?
Here, statistics may be less helpful. The picture they give us of the future is only reliable if future outcomes are statistically comparable to past ones. What we may fail to recognise is that this assumption is often wrong.

Estimates of probability based on past data treat past outcomes as random. But when humans act, the outcomes are not random: we act with particular goals in mind.
Our capacity for changing the world – particularly when our efforts combine with complex natural processes – can even create phenomena that are entirely novel in human experience, such as human-caused climate change and ozone depletion.
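The statistical point above can be made concrete with a small sketch. The numbers here are invented purely for illustration: annual peak river levels are simulated with a slow upward trend, so a flood probability estimated from the past 30 years understates how often the threshold is actually crossed in the following 30.

```python
import random

random.seed(42)

THRESHOLD = 110.0  # hypothetical flood level (invented figure)

def peak_level(year, trend=0.3):
    """Annual peak: stationary noise plus a slow upward drift."""
    return 100.0 + trend * year + random.gauss(0, 5)

past = [peak_level(y) for y in range(0, 30)]     # years 0-29
future = [peak_level(y) for y in range(30, 60)]  # years 30-59

# Probability estimated the traditional way: count past exceedances.
p_past = sum(lvl > THRESHOLD for lvl in past) / len(past)
# What actually happens once the trend has moved the baseline.
p_future = sum(lvl > THRESHOLD for lvl in future) / len(future)

print(f"flood probability estimated from the past: {p_past:.2f}")
print(f"actual frequency over the next 30 years:   {p_future:.2f}")
```

Under stationarity the two numbers would agree, up to sampling noise; with even a modest trend, the backward-looking estimate is systematically too low.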
Changing landscapes of risk
Scientists studying climate change concur. They point out that the novel nature of human-caused climate change means that past estimates of the probability of flooding will no longer be reliable guides to the future. Oxford University’s Myles Allen suggests “just looking back at the historical record to plan flood defences or set insurance premiums is increasingly misleading”. And Reading’s Nigel Arnell says: “We have long been exposed to risk from flooding, but climate change is loading the dice.” Among the rest of us, 80% suspect this is true. The extent to which such conclusions are shaping public policy is, however, debatable.
The Enlightenment taught us to make (or at least, claim to be making) decisions about what we should do, now and in the future, on the basis of reliable knowledge about the past. For us, this has come to mean rigorous statistical research. Much of our way of thinking about decisions is therefore tied up with the consequentialist moral philosophy implicit in this belief. We want to know what the outcomes of our actions are likely to be, so that we can weigh up risks and benefits and then act accordingly.
Bureaucrats bind themselves by rules that prescribe decision-making procedures of this kind, such as those in the Treasury’s Green Book. Yet when faced with a genuinely novel phenomenon like human-caused climate change, we find ourselves in a situation in which we do not have access to the data our consequentialist habits of mind require.
The ethics of uncertainty
We cannot wait for this data to become available. Whatever we do about human-caused climate change, we will need to make decisions in a state of uncertainty. However well-constructed climate models are, we cannot afford to test them by waiting to see if the scenarios they outline are in fact correct.
The underdeveloped branch of moral philosophy that deals with the ethics of risk gives us some pointers on how we can proceed when the outcomes of possible courses of action are unknown. For example, if we face situations where our actions create greater uncertainties about how the future will turn out, then we should not act as if business as usual will continue.
This is, however, often not what the rules encourage bureaucrats to do. Following economic orthodoxy, they tend to assume, for example, that future generations will continue to be better off than present ones. The Green Book accordingly instructs policymakers to discount the future – to set a lower value on outcomes in the future than on ones nearer to the present.
But if there are plausible reasons to think business will not continue as usual, then the welfare of future generations has to count for more in policy than it does now. If the future facing us is, thanks to human activity, no longer what it used to be, then uncertainty is a reason for acting to prevent harm rather than for waiting.