
On global temperatures, Berkeley’s BEST is similar to the rest

Global land-surface temperatures are up, but it’s not really news. Stuart Dowell

The Berkeley Earth Surface Temperature (BEST) study recently found that global land-surface temperatures have increased by about 1°C since the 1950s — and 1.5°C since the mid-18th century.

These results have received a lot of press, with some describing them as “game changing”, but how much do they really differ from what has come before?

What the BEST study is

The BEST study re-analyses land-surface temperatures. It was led by Richard Muller, a physics professor at the University of California, Berkeley. Muller has described himself as a “properly sceptical” scientist, and was motivated to analyse global temperature records to satisfy his own scientific misgivings.

Those misgivings related to the veracity of temperature data collected globally. Questions are regularly raised in the blogosphere about corrections to temperature records, and the possible influence of the urban heat island effect on global warming trends.

The BEST study aimed to determine whether the choices made when analysing temperatures influenced the warming trends that were being observed.

What the BEST study did

There are many reasons for correcting temperature data. Over time, changes in technology and observing practices introduce inconsistencies to the data that must be accounted for. These include the influence of weather stations moving from one location to another, and changes from mercury (or alcohol) thermometers to electronic sensors.

Hence, temperature data are sometimes separated into two categories — unanalysed data and adjusted data. In the blogosphere, it has been argued that unanalysed data (sometimes called “raw” data) most accurately represent real temperature changes. This assertion is based on the false assumption that “raw” data are “pure” and have not been subject to any factors dependent on human decision making.

Bloggers wonder what effect urban heat islands have on global warming. Kenith Mun

In fact, just taking temperature observations involves a whole lot of decision making — such as which instruments to use, how they are calibrated, and how they are housed and sited in the field. Those decisions have not been applied universally across different countries and different environments and have changed over time.

There are no observations — from satellites to ocean buoys — that are free of these issues. Once data are collected, analysis has to ensure they represent physical reality.

To create a global temperature record, one has to first assemble a data set with the best coverage, which means acquiring data from more than 200 countries.

Then, it is necessary to appropriately analyse the data. Changes in the observational network over time are the most important thing to correctly account for. For example, unless you account for it, new stations starting up in Antarctica (where there used to be none) will drag the global-average temperature down, relative to the preceding period. Similarly, changes in the local environment, such as urbanisation or vegetation changes, can affect already established observing sites.
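The Antarctic example above can be sketched with a toy calculation. The numbers and the averaging scheme here are purely illustrative (BEST’s actual method is more sophisticated), but they show why a naive average of raw station values plunges when a cold station joins the network, while averaging each station’s departures from its own baseline (its “anomalies”) does not:

```python
# Toy illustration (hypothetical values, not BEST's actual algorithm):
# two warm-region stations report in all years; a cold, Antarctic-like
# station only begins reporting in year 3.
years = [1, 2, 3, 4]
station_data = {
    "warm_a": {1: 15.0, 2: 15.2, 3: 15.4, 4: 15.6},
    "warm_b": {1: 12.0, 2: 12.2, 3: 12.4, 4: 12.6},
    "cold_c": {3: -20.0, 4: -19.8},  # new station in a cold location
}

# Naive average of raw values: drops sharply when cold_c appears,
# even though every individual station is warming.
naive = [sum(d[y] for d in station_data.values() if y in d) /
         sum(1 for d in station_data.values() if y in d)
         for y in years]

# Anomaly approach: subtract each station's own mean first,
# then average the departures.
def anomalies(d):
    base = sum(d.values()) / len(d)
    return {y: t - base for y, t in d.items()}

anom = {name: anomalies(d) for name, d in station_data.items()}
avg_anom = [sum(a[y] for a in anom.values() if y in a) /
            sum(1 for a in anom.values() if y in a)
            for y in years]

print(naive)     # raw average falls off a cliff in year 3
print(avg_anom)  # anomaly average keeps rising through year 3
```

The naive series falls from about 13.7°C to under 3°C the moment the cold station starts reporting; the anomaly series keeps rising, because each station is only compared with itself.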

Before the BEST study, there were three centres analysing global temperatures — the UK Meteorological Office (HadCRU), NASA, and the US National Climatic Data Center (NCDC). All draw on fairly similar data but use different analysis methods. For example, recent global temperatures from NASA have been warmer than those from the UK, due to differences in the way Arctic temperatures are analysed.

All three of the major data sets use a subset of the total available temperature data, with records selected for their length, quality and completeness. The BEST data set uses a different approach, including every piece of data they could lay their hands on, even from stations which have only operated for a few years.

The BEST study did not include temperatures over the oceans. Jens Karlsson

Perhaps the biggest difference between the BEST study and the three established records is in the way that the data are adjusted. Whereas NASA, HadCRU and NCDC use a process of adjustment that includes labour-intensive decisions, such as inspection of metadata (descriptive logs that record information at each site, such as a change of instrument), the BEST study uses an entirely automated process based on data analysis alone. This type of comparison is an important check, since it gauges the impact of the adjustments on the temperature trends.
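The idea of an automated adjustment can be sketched in a few lines. This is a deliberately simplified, hypothetical example — not BEST’s actual algorithm — but it captures the basic move: compare a station with a nearby reference series, and search the difference between them for the point where the mean shifts most, which flags a likely artificial step (such as an instrument change) without consulting any metadata:

```python
# Minimal sketch (illustrative only, not BEST's actual method) of
# automated break detection in a station record.
station   = [10.0, 10.1, 10.0, 10.2, 11.2, 11.3, 11.1, 11.2]  # step at index 4
reference = [10.0, 10.1, 10.1, 10.2, 10.2, 10.3, 10.2, 10.3]  # nearby station

# Differencing against a neighbour removes the shared regional climate
# signal, leaving mostly station-specific artefacts.
diff = [s - r for s, r in zip(station, reference)]

def find_break(d):
    """Return the split index where mean(before) and mean(after) differ most."""
    best_i, best_gap = None, 0.0
    for i in range(1, len(d)):
        before = sum(d[:i]) / i
        after = sum(d[i:]) / (len(d) - i)
        gap = abs(after - before)
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i, best_gap

idx, size = find_break(diff)
print(idx, round(size, 2))  # prints 4 and the size of the step
```

Real homogenisation algorithms test many candidate breaks against many reference stations and assess statistical significance, but the principle is the same: the data themselves, not a human reading station logbooks, locate the discontinuity.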

What the BEST team ended up producing were global land-temperature trends that are slightly larger than those previously reported.

What the BEST study is not

These results received such huge interest because of the suggestion on blogs that warming trends are artefacts of urbanisation and adjustments. Using a different method, the BEST team has suggested that existing data sets may be underestimating the real warming. This finding was counter to Muller’s expectations.

However, it is important to take a step back and look at just how significant the BEST study was in terms of the science.

First, the BEST study has not yet been accepted for publication by a scientific journal. Second, the importance of historically observed temperatures in assessing the influence of greenhouse gases on climate is overstated in much of the commentary. This is explained in more detail in an earlier post on The Conversation.

Further, the BEST study only looked at part of the picture. It did not include temperatures over the oceans or temperatures recorded from satellites. Those two alternative temperature data sets have shown warming that is consistent with terrestrial thermometers over the last 100 years or more. And we know that those records have not been influenced by urbanisation.

Finally, and perhaps most significantly, it should be acknowledged that the BEST research is not actually that novel. While it is great to have others independently verify results, many climate researchers have been surprised by the attention this study has received.

In fact, national meteorological agencies and research centres have been exploring the sensitivities of the records to adjustments and other analysis choices for two decades.

For example, the Bureau of Meteorology has been producing an automated analysis of Australian temperatures — from 1910 to present*, using a method very similar to that employed by BEST — since the late 1990s. This unadjusted data complements the adjusted data, and the two sets are cross-checked against each other.

The upshot of all that research is that the warming trends observed globally over the last century are physically robust, and not particularly sensitive to urbanisation or method of construction.

But it never hurts to find additional evidence.

*The phrase, “from 1910 to present” was added as an update to this article.
