
Challenge 9: Decision making amidst increasing complexity

Overwhelmed: to live wisely in a world where complexity seems to be running rampant, we must first grasp what complexity is. Flickr/Elif Ayiter/Alpha Auer/..../

In part nine of our multi-disciplinary Millennium Project series, Cliff Hooker argues that to get any better at decision-making, we must first face up to our limitations.

Global challenge 9: How can the capacity to decide be improved as the nature of work and institutions change?

Our decision making urgently needs improving. But how? And why? This Millennium discussion focuses immediately on handling increasing complexity as our chief challenge, and it identifies new computational processes and digital media resources as the primary source of solutions.

Both claims have merit but are too narrow. Complex systems are, roughly, those that have many nested and organised interrelations among their component entities, as, for example, living creatures do (of which more below). We need to both sharpen our understanding of complexity, and consider more than complexity and digitalisation.

Besides complexity, as just characterised, there are features of our current world that we often conflate with complexity but that are, instead, mostly just complicating for us:

  1. Trans-institutionalisation and globalisation, each of which requires integrating culturally diverse styles, agendas, timetables, and more;

  2. Increasing rapidity of change;

  3. Increasing scales of consequences in space and time.

All three features, for example, afflict the climate change problem. They add to the number of considerations and constraints, creating (massive) complications for our decision making, but they are not at the core of complexity proper.

Besides digitalisation: computers can indeed often help ameliorate these problems, and the Millennium discussion offers useful developments and ideas. But, for example, making decisions speedily often changes the kind of decision process, not just its speed. However useful cost-benefit-risk analysis is elsewhere, anyone who keeps using it to decide what to do while a hungry lion charges them is irrational. Merely having faster or networked computers won’t help us decide when to change decision method, or which alternative method to choose.

The need for expertise and skill to decide wisely on changes in decision method is increasingly crucial to our lives but poorly recognised. It is well illustrated in complex systems, so let’s now consider them.

Complex systems

Core complexity means being more ordered than a gas (fully disordered) but less ordered than a crystal (fully ordered). Neither is a useful model of climate, living organisms, ecologies or cities. These systems sit somewhere between gases and crystals in ordered-ness, but they are also nested (sub-systems within sub-systems) and organised (many distinct roles, all differently interrelated to achieve coherent overall functioning, e.g. the parts of a car engine).

Complex systems typically involve irreversibility, organised levels of phased feedback/feedforward, internal constraints, and so on. Various combinations of these features lead to the behaviours that are distinctive of complexity: sensitivity to conditions, chaos, criticality, fat tails, self-organisation, emergence, path-dependence and so on. Our climate shows all of these features and more. When a complex system has to be managed, it is managing these distinctively complex behaviours that causes the decision problems.

And we have to decide under a new set of restrictions on what can be known. Two illustrations are considered:

  1. Complex systems often sit on a knife edge between diverging behaviours, the slightest change in conditions tipping the system into one path or the other. A steel ball bearing dropping onto a real knife edge is a useful image. Since we can never know system conditions infinitely accurately before it tips, we cannot know which way it has gone and, because the paths diverge, our ignorance grows with time. (The differences can be below computer round-off errors, highlighting computational limits.)

  2. Typically, the dynamical equations for complex systems have no explicit (“analytic”) solutions, so they have to be approximately simulated. This is where computers offer us one uniquely new and important tool. However, approximation can wipe out sensitivities. And simulation typically must be done in a multi-dimensional space, and there is as yet no effective general way to search this space for its important dynamical features. So experts must decide how best to search it, given what is known and the approximations used. This leaves it uncertain how the system will behave in many new conditions. (But we are also rapidly improving our knowledge of particular systems.)
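The knife-edge point can be made concrete with a toy chaotic system. The logistic map below is an illustrative stand-in chosen for this sketch (it is not a model of climate or any system named in the article): two trajectories started closer together than any realistic measurement precision still end up completely unrelated.

```python
# Chaotic logistic map x -> r*x*(1 - x): a standard toy model of
# sensitivity to initial conditions (an illustrative stand-in only).
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2)            # one starting condition...
b = trajectory(0.2 + 1e-12)    # ...and one a hair's breadth away

gap_start = abs(a[0] - b[0])   # initially far below measurement precision
gap_max = max(abs(x - y) for x, y in zip(a, b))  # later: order one
```

The initial gap of one part in a trillion is below anything we could measure, yet within fifty steps the two paths differ by a substantial fraction of the whole range, so our ignorance of which path the real system took grows with time.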

(For the interested, all these subjects and more are discussed and referenced across various chapters in a book I edited, Philosophy of Complex Systems, although it is written for researchers.)

Sharp insights: if a steel ball falls straight onto the edge, we cannot predict which way it will go. Flickr/Falling Outside The Normal Moral Constraints

Fat tails and policy making

To illustrate the issues, let’s consider fat tails. Complex systems can generate unlikely behaviours (“outliers”) more frequently than would occur at random. The increase is the “fat” in their low-probability outlier tails. An ash slope at a critical angle has an equal probability of landslides at all scales, big and small, so big ones are more frequent than if the sliding particles all must come together at random. The difference is often crucial because the unlikely behaviours often cause massive damage (mud-slide, tsunami) or bring massive reward (some stock market bets).
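To see numerically what the “fat” in a fat tail amounts to, here is a minimal sketch (the distributions and parameters are illustrative choices, not drawn from the article): thin-tailed Gaussian draws against fat-tailed Pareto draws with a comparable typical scale, counting how often each produces a “big” event.

```python
import random
random.seed(0)

N = 100_000
# Thin-tailed (Gaussian) magnitudes vs fat-tailed (Pareto) magnitudes.
gauss = [abs(random.gauss(0, 1)) for _ in range(N)]
# Pareto with shape alpha = 1.5: P(X > x) = x**-1.5 for x >= 1
pareto = [random.paretovariate(1.5) for _ in range(N)]

threshold = 10  # a "rare, large" event on this scale
gauss_outliers = sum(x > threshold for x in gauss)
pareto_outliers = sum(x > threshold for x in pareto)
```

On 100,000 draws the Gaussian essentially never exceeds the threshold, while the Pareto does so thousands of times: the extra outliers are the fat tail, and they are where the mud-slides and the stock-market windfalls live.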

Most current statistical decision packages assume your data is generated by sampling random variables, so that deciding on the basis of averages is valid. But if instead you were actually sampling a complex process, you would be caught out by not allowing for the extra outliers. It is conjectured that this was behind the recent $2 billion+ losses by JP Morgan Bank on what seemed a watertight risk-hedging scheme. (Current statistical tests also assume the variables are independent, when they will not be for many complex systems - another basic challenge to current decision making tools.)
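A sketch of how deciding-by-averages gets caught out, assuming a fat-tailed Pareto distribution stands in for the complex process (the actual JP Morgan positions are of course not modelled here): across repeated “experiments”, the sample mean of thin-tailed data is stable, while the fat-tailed sample mean swings wildly because occasional outliers dominate it.

```python
import random
random.seed(1)

def sample_mean(draw, n=1000):
    return sum(draw() for _ in range(n)) / n

# 200 repeated experiments of 1000 samples each.
gauss_means = [sample_mean(lambda: random.gauss(3, 1)) for _ in range(200)]
# Pareto alpha = 1.2: finite mean (alpha/(alpha-1) = 6) but infinite
# variance, so averages converge very badly.
pareto_means = [sample_mean(lambda: random.paretovariate(1.2))
                for _ in range(200)]

gauss_spread = max(gauss_means) - min(gauss_means)
pareto_spread = max(pareto_means) - min(pareto_means)
```

A decision rule calibrated on the stable-looking averages of one experiment would be badly wrong in the next, which is the trap that standard iid-based packages set when the data actually come from a complex process.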

Most suggested revolutions in decision making - for example, those on the Millennium page - simply offer new ways to get deciding-by-averages done more efficiently and do not tackle this issue. For example, dynamic Net Present Value analysis - a basic tool (commonly used in finance) for deciding future courses of action by comparing projected inflows and outflows - can be hastened by simulation, but this typically masks the fat tails and unpredictability of complex future behaviours, often rendering these analyses more dangerous than enlightening. Designing in resilience instead is often far more important and practical. (Interested readers might look at Resilience Thinking, by Brian Walker and David Salt.)
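For readers unfamiliar with the tool being criticised, a minimal sketch of a plain Net Present Value calculation (the figures are made up): it collapses a whole projected cash-flow stream into one number, which is exactly why it can mask fat-tailed outcomes; the distribution of possible futures is reduced to a single average-like summary.

```python
# Plain Net Present Value: discount each projected cash flow to today.
# Illustrative figures only; a real analysis would need the whole
# distribution of futures, not one projected stream.
def npv(rate, cashflows):
    """cashflows[0] is today's outlay (negative); one entry per year."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# Outlay of 100 now, 30 back each year for 5 years, 8% discount rate.
project = [-100.0, 30.0, 30.0, 30.0, 30.0, 30.0]
value = npv(0.08, project)  # positive => project looks worthwhile
```

Nothing in that single number records the small chance of a catastrophic year, which is the fat-tail information that resilience-oriented analysis tries to keep in view.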

Life is run by averages, except where it is run by outliers. It is in deciding to avoid, or grasp, outliers that agents show intelligence: deciding by averages can be automated. (Well, inputting sensible valuations can’t be, but set that aside here.) What is crucial is to be able to judge when a complex domain has been entered that matters for decision making and, specifically, how it matters.

For example, Australian dry grasslands can have an exploitation “tipping point” or threshold beyond which they degrade irreversibly into a less biodiverse, less productive condition. It takes practical expertise in both agriculture and dynamic analysis to identify it. Standard deciding-by-averages assumes exploitation for maximum yield, a condition typically near the threshold, ignoring the risk of crossing it and ignoring our typical uncertainty about where it is. When life is run by outliers, only appropriate decision expertise has any value. This is well expressed for poker: you have to know when to hold, when to fold and when to run away.
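The grassland threshold can be sketched with the textbook logistic-growth-with-harvest model (a generic illustration with made-up parameters, not a calibrated grassland model): below a critical harvest rate the stock settles to a healthy steady state, while pushing the harvest just past the threshold collapses it, and maximum-yield management sits right at that edge.

```python
# Logistic regrowth minus a fixed harvest rate h. For r = K = 1 the
# critical harvest is h* = r*K/4 = 0.25: below it the stock persists,
# beyond it the stock collapses irreversibly. (Illustrative parameters.)
def simulate(h, r=1.0, K=1.0, x0=0.5, dt=0.01, steps=20_000):
    x = x0
    for _ in range(steps):
        x += dt * (r * x * (1.0 - x / K) - h)
        if x <= 0.0:
            return 0.0  # collapsed: no regrowth from zero stock
    return x

safe = simulate(h=0.15)    # below the threshold: settles near 0.82
greedy = simulate(h=0.30)  # beyond the threshold: collapses to zero
```

Note that a modest-looking increase in harvest, from 0.15 to 0.30, is the difference between a durable steady state and total collapse; nothing in the average yield warns you the edge is there.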


Managing complex systems is crucial. To do that we need to at least:

  1. Identify these systems and locate the key decisions requiring expertise.

  2. Integrate systems contexts for key decisions to organise context-dependent decision interrelations and timescales. This replaces the Millennium principle of subsidiarity (defined as having “decisions made by the smallest number of people possible at the level closest to the impact of a decision”), which is unworkable for complexity.

  3. Foster the development of capable experts for these tasks, and also develop their capacity for entering effectively into the range of expert-lay decision making that flows from 1).

  4. Re-vamp existing decision tools like statistical analysis to suit complexity.

  5. Develop the capacity for resilience analysis and practice.

  6. Construct institutions whose decision authority mirrors the interrelations in 2 above, and whose orientation mirrors 3 and 5.

  7. Separate, as far as possible, scientific epistemic scepticism from political pragmatic scepticism and educate the public to the preceding challenges and democratic roles therein. Climate scepticism, e.g., then largely reduces to a political phantasm.

These are massive challenges that humans have only just begun to tackle, yet are crucial to our survival.

Comments welcome below.
