Bureau’s weather records to be reviewed again – sure, why not?

Adjusted data from Australian weather stations has been peer-reviewed before. But the government’s new technical panel could still offer useful advice. Bidgee/Wikimedia Commons, CC BY-SA

The federal government’s new “Technical Advisory Forum” on weather data, announced by parliamentary environment secretary Bob Baldwin last week, will “review and provide advice on Australia’s official temperature data set”. This data set, known as ACORN-SAT and maintained by the Bureau of Meteorology, is the primary record used for monitoring temperature trends around the country.

Although the advisory forum’s detailed terms of reference have not been released, it seems clear that when the panel meets in March it will not be tasked with delivering the comprehensive audit demanded by some of the Bureau’s prominent critics, such as the Prime Minister’s business adviser Maurice Newman.

That’s fair enough, considering that a comprehensive, independent and international peer review was carried out as recently as 2011 and concluded that the Bureau’s practices and data meet the world’s best standards.

Instead, the announcement seems to be acting on a recommendation from the 2011 review (see recommendation E4, page 14), which suggests that a technical advisory group be established so that “respected external scientists, statisticians and stakeholders [can] provide an opportunity for external comment on the further development of the ACORN-SAT system”.

The new panel features eight scientists from a variety of backgrounds, mostly in statistics, chaired by University of New South Wales statistician Ron Sandland. What advice are they likely to offer?

What will be reviewed?

While we do not yet know in any detail, with all this statistical expertise we can perhaps assume that one of the main areas for review will be the statistical practices applied to the ACORN-SAT data. In particular, it is likely that the practices around data homogenisation will be covered.

This blending, merging and/or adjusting of weather data is a necessary step in the generation of uniform, continuous climate records of the highest quality.

As described in a previous Conversation article, raw climate data can sometimes be patchy over time, and often contains artificial jumps and/or trends that are due to non-climate factors such as changes in observing practices, the movement of stations, or changes to the surroundings at the instrument’s location (for instance because of urban development).

For changes to our climate to be effectively monitored, continuous records must be generated that show only those influences associated with the climate. To do this, climate scientists apply statistical adjustments based on cross-comparisons: either with documented information about what could have caused an artificial signal, or with records from other nearby instruments.
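The neighbour-comparison idea can be illustrated with a toy sketch. To be clear, this is an illustration only, not the Bureau's actual ACORN-SAT procedure, which uses far more sophisticated statistical tests and many reference stations. Here, a single artificial step (say, from a station move) is detected by comparing the station's record with a stable neighbouring series, and the earlier segment is adjusted to make the record continuous:

```python
# Toy homogenisation sketch (illustrative only, not the Bureau's method):
# detect and correct one artificial step in a station record by comparing
# it with a neighbouring reference series.

def homogenise(station, reference):
    """Remove a single step change from `station`, using `reference` as
    the comparison series. Both are equal-length lists of annual means."""
    diff = [s - r for s, r in zip(station, reference)]
    n = len(diff)
    # Find the breakpoint that maximises the shift in the mean of the
    # difference series (a crude changepoint detector).
    best_k, best_shift = 0, 0.0
    for k in range(1, n):
        before = sum(diff[:k]) / k
        after = sum(diff[k:]) / (n - k)
        if abs(after - before) > abs(best_shift):
            best_k, best_shift = k, after - before
    # Shift the earlier segment so the record is continuous.
    return [x + best_shift if i < best_k else x for i, x in enumerate(station)]

# Hypothetical example: a station moved after year 4, so its earlier
# readings sit 1.0 degree below the homogeneous neighbour's pattern.
reference = [15.0, 15.1, 14.9, 15.2, 15.0, 15.1, 14.8, 15.0]
station   = [14.0, 14.1, 13.9, 14.2, 15.0, 15.1, 14.8, 15.0]
adjusted = homogenise(station, reference)
```

After adjustment, the pre-move years are raised by the detected step so the whole series reflects only climatic variation relative to the neighbour. Real homogenisation must also handle multiple breakpoints, noisy differences and gaps, which is exactly where expert statistical advice is valuable.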

Given the expertise of those listed as members of the Technical Advisory Forum, we might speculate that it is these statistical adjustments that are likely to be examined in more detail by the advisory group.

Fact-checking is a must in science

The “fact-checking” of scientific data is a necessary part of the scientific process. Thorough, objective and critical evaluation of scientific methods and findings must be undertaken to ensure confidence in scientific findings.

Such tenets underpin the process of peer-review in scientific journals. Notionally, all studies are examined with a fine-toothed comb by a series of experts to ensure they are robust before they are published (although there is recent evidence that this comb is, unfortunately, not always as rigorous as we might expect).

The same standards must apply for any form of scientific inquiry, including those areas that have become highly politicised and polarised, such as climate science.

An objective external review can do more than check existing work; it can also be highly constructive and encourage scientific advancement. For example, a panel of experts in statistical theory and application might be able to suggest new statistical techniques that could be applied to the data. Any scientist would welcome such input, as it helps to advance scientific knowledge.

Does Australia’s temperature record need another review?

This will not be the first time that official temperature records, either in Australia or globally, have been reviewed and their statistical techniques critiqued.

One of the most comprehensive reviews of global surface temperature records was the Berkeley Earth project, which also included a section on Australia's data. The review concluded that the evidence of warming in surface temperature records is robust.

Given the previous reviews, it is worth asking whether yet another review is really necessary. There are philosophical arguments as to whether or not it is strictly necessary, but in my opinion it’s worth doing anyway.

On one hand, commissioning review after review does rather give an impression that time and resources are being wasted, especially if the answer is fundamentally the same every time. It is my opinion that any recommendations implemented from the advisory group will not change the key features of Australia's official temperature record, which shows that the country has warmed by around 0.9°C since 1910, and that seven of the ten hottest years have happened since 2002.

But on the other hand, such reviews have the capacity to lead to more transparent and reproducible science. This is fundamental to the scientific method and so, in my opinion, should always be welcomed.

Editor’s note: Ailie will be answering questions between 3:30 and 4:30pm AEDT on Wednesday January 28. You can ask your questions about the article in the comments below.
