
Did the BOM get it wrong on the hot, dry summer? No – predicting chaotic systems is probability, not certainty

A weather forecast app. Shutterstock

What happened to the scorching El Niño summer we were bracing for? Why has the east coast of Australia been drenched while the north and west get the heat?

For beachgoers, a wrong weather forecast is annoying. For farmers, it can be very expensive. And for northern Queensland residents surprised by flooding after Cyclone Jasper, it can be devastating. Small wonder there’s been plenty of criticism levelled at the Bureau of Meteorology and other forecasting agencies this summer.

The criticism is understandable. But is it fair? No. The reason is that weather forecasting is inherently not about certainty but probability. Our atmosphere and oceans do not behave in simple, easily predictable ways. They are non-linear, chaotic systems. That means we can only predict large weather features such as highs and lows or bands of storms with relative certainty, and even then only a few days in advance.

Heavy rains from Cyclone Jasper triggered flooding in far north Queensland in December. AAP/Department of Defence

We want certainty – but we have to settle for probability

Let’s say you check your weather app and see your location has a 60% chance of rain at midday. What does this actually mean?

It means that if this forecast were issued 100 times, you should expect to get wet about 60 times and stay dry about 40 times.

To forecast rainfall for a whole season ahead, meteorologists generally calculate the chance of exceeding average conditions, rather than stating that we will have a dry or wet summer with certainty.

So if we predict a 25% chance of above-average rain during an El Niño summer, we would expect that one out of every four times we make this prediction, rainfall would turn out higher than average.

So how, then, do we know if we are making good forecasts? Given that a 60% chance of rain can mean wet or dry, albeit with different odds, we certainly can't judge forecast quality from a single event. Instead, we assess the many past forecasts of a 60% chance of rain to see whether the 60-to-40 split of wet and dry days eventuated. If it did, for this and every other forecast probability, the forecasts are working well.
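To see how such a calibration check works, here is a minimal sketch in Python. The reliability function and the simulated forecast history are illustrative assumptions, not the Bureau's actual verification system: we group past forecasts by their stated probability and compare that with how often rain actually fell. If the observed frequencies track the stated probabilities across all groups, the forecasts are well calibrated.

```python
import numpy as np

def reliability(forecast_probs, outcomes, bins=10):
    """Compare stated probabilities with observed frequencies.

    forecast_probs: array of past forecast probabilities (0-1)
    outcomes: array of what actually happened (1 = rain, 0 = dry)
    """
    edges = np.linspace(0.0, 1.0, bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (forecast_probs >= lo) & (forecast_probs < hi)
        if in_bin.sum() == 0:
            continue
        observed = outcomes[in_bin].mean()
        print(f"forecast {lo:.0%}-{hi:.0%}: "
              f"rain observed {observed:.0%} of the time ({in_bin.sum()} days)")

# Hypothetical record: 5,000 past forecasts, with outcomes drawn so the
# forecasts are well calibrated (rain occurs at the stated rate).
rng = np.random.default_rng(0)
probs = rng.uniform(0, 1, 5000)
rained = rng.random(5000) < probs
reliability(probs, rained)
```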

Big weather events such as bands of storms are easier to predict with some certainty. But other weather is much harder. Shutterstock

This isn’t what we’d like. Many of us find probabilistic forecasts confusing. Intuitively, we would prefer to simplify them into absolute statements.

Take a picnic you have planned for tomorrow. If you read the statement “there will be thunderstorms at noon tomorrow at Picnic Spot,” you will feel confident it’s best to cancel the event. But the statement “there’s a 60% chance of thunderstorms at noon tomorrow at Picnic Spot” is far more accurate. The first gives false certainty, by vastly oversimplifying what we really know.

Let’s not forget, there is a 40% chance it will stay dry, which the first statement completely ignores. And if it does stay dry, how will your friends react to the cancelled picnic? How much risk are you willing to take?


Read more: Curious Kids: how do people know what the weather will be?


When we criticise weather forecasts for their inaccuracy, we are usually being unfair. You can't actually say a weather forecast was wrong if you experienced rain when the forecast was for a high chance of dry weather. Because of how our atmosphere and oceans behave, it's simply not possible to tell from a single day, or even a single season, how well our forecasts are working. We've known this since Edward Lorenz's work on atmospheric chaos in the early 1960s.

That is why the Bureau of Meteorology's seasonal forecasts come as likelihoods, such as the rainfall outlook for October to December issued on 28 September. It predicted that "October to December rainfall was likely (60 to 80% chance) to be below median for much of Australia excluding most of central and northwestern WA and south-west Tasmania." Note the forecast still allowed a 20-40% chance of the wetter-than-usual conditions some parts of Australia ended up experiencing.

But beware: we can't declare the success or failure of a likelihood forecast from a single season. What the likelihood gives us is the ability to make better decisions based on the best information we have.

Less than certain but far better than nothing

Given these constraints, how can we best use probabilistic forecasts in making decisions?

Here, weather and climate forecasting alone cannot provide the answers. The use and value of a particular forecast depend strongly on the decisions that need to be made, on our values, and on the economic circumstances in which those decisions are made.

A very simple example is to weigh how much it would cost to protect ourselves against, say, a flood, against the loss we would incur if we did not protect ourselves and the flood then happened.

If the cost of protection is very low and the loss very large, the answer is simple: protect yourself all the time. High protection costs and low losses imply we should never protect ourselves. Both statements can be made without bringing in the forecast probability. But in the middle, it gets tricky. How much should you spend guarding against a highly damaging event that has a low probability of occurring?
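Decision theory gives this middle ground a classic answer, known as the cost-loss model: it pays to protect whenever the forecast probability of the event exceeds the ratio of the protection cost to the potential loss. Here is a minimal Python sketch; the $500 sandbagging cost and $20,000 flood loss are purely hypothetical numbers chosen for illustration.

```python
def should_protect(event_prob, cost, loss):
    """Cost-loss decision rule: protect when the expected loss avoided
    (event_prob * loss) exceeds the cost of protection, i.e. when
    event_prob > cost / loss."""
    return event_prob > cost / loss

# Hypothetical numbers: sandbagging costs $500; flood damage would be $20,000.
cost, loss = 500, 20_000
threshold = cost / loss  # break-even at a 2.5% flood chance
for p in (0.01, 0.025, 0.10, 0.60):
    print(f"flood chance {p:.1%}: protect = {should_protect(p, cost, loss)}")
```

With these made-up numbers, the break-even point is a 2.5% chance of flooding: any forecast probability above that makes protection the cheaper bet on average, even though most of the time the flood never comes.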

Deterministic weather forecasts, which give a single definite outcome, are only possible a week or two ahead, and only for the large features of the weather. This means longer-term forecasts, and forecasts of intense weather systems such as thunderstorms or tropical cyclones, will only ever be possible by assessing how likely different outcomes are and giving us a probability.

It’s fine to complain about the weather. But we can’t complain about the forecasting based on a single event. We want to know what’s coming our way, but the weather doesn’t work like that. We owe it to society to provide and use the best information we have to protect and save property and lives. There is too much at stake to keep it simple.


Read more: Extreme weather is outpacing even the worst-case scenarios of our forecasting models

