Global Renewable News

The Truth about Global Temperature Data
Volume 7, Issue 32

August 23, 2016

"In June, NOAA employees altered temperature data to get politically correct results," Congressman Lamar Smith (R-Tex.) alleged in a Washington Post letter to the editor last November. The letter was part of Smith's months-long campaign against NOAA climate scientists. Specifically, Smith was unhappy that an update to NOAA's global surface temperature dataset slightly increased the short-term warming trend since 1998. And being a man of action, Smith proceeded to give an anti-climate-change stump speech at the Heartland Institute conference, request access to NOAA's data (which was already publicly available), and subpoena NOAA scientists for their e-mails.

Smith isn't the only politician who questions NOAA's results and integrity. During a recent hearing of the Senate Subcommittee on Space, Science, and Competitiveness, Senator Ted Cruz (R-Tex.) leveled similar accusations against the entire scientific endeavor of tracking Earth's temperature:

"I would note if you systematically add, adjust the numbers upwards for more recent temperatures, wouldn't that, by definition, produce a dataset that proves your global warming theory is correct? And the more you add, the more warming you can find, and you don't have to actually bother looking at what the thermometer says, you just add whatever number you want."

There are entire blogs dedicated to uncovering the conspiracy to alter the globe's temperature. The premise is as follows: through supposed "adjustments," nefarious scientists manipulate raw temperature measurements to create (or at least inflate) the warming trend. People who subscribe to such theories argue that the raw data is the true measurement; they treat the term "adjusted" like a synonym for "fudged."1

Peter Thorne, a scientist at Maynooth University in Ireland who has worked with all sorts of global temperature datasets over his career, disagrees. "Find me a scientist who's involved in making measurements who says the original measurements are perfect, as are. It doesn't exist," he told Ars. "It's beyond a doubt that we have to do some analysis. We can't just take the data as a given."2

Speaking of data, the latest datasets are in and 2015 is (as expected) officially the hottest year on record. It's the first year to hit one degree Celsius above levels of the late 1800s. And to preempt the inevitable backlash that news will receive (spoiler alert): using all the raw data without performing any analysis would actually produce the appearance of more warming since the start of records in the late 1800s.

So how do scientists build datasets that track the temperature of the entire globe? That story is defined by problems. On land, our data comes from weather stations, and there's a reason they are called weather stations rather than climate stations. They were built, operated, and maintained only to monitor daily weather, not to track gradual trends over decades. Lots of changes that can muck up the long-term record, like moving the weather station or swapping out its instruments, were made without hesitation in the past. Such actions simply didn't matter for weather measurements.

The impacts of those changes are mixed in with the climate signal you're after. And knowing that, it's hard to argue that you shouldn't work to remove the non-climatic factors. In fact, removing these sorts of background influences is a common task in science. As an incredibly simple example, chemists subtract the mass of the dish when measuring out material. For a more complicated one, we can look at water levels in groundwater wells. Automatic measurements are frequently collected using a pressure sensor suspended below the water level. Because the sensor feels changes in atmospheric pressure as well as water level, a second device near the top of the well just measures atmospheric pressure so daily weather changes can be subtracted out.3
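To make that groundwater example concrete, here is a minimal sketch in Python. The well, the sensor readings, and the constants are invented for illustration; the point is simply that subtracting the barometric reading isolates the part of the signal that actually reflects the water level.

    # Hypothetical readings: a pressure sensor hanging below the water level in a
    # well, plus a barometer at the top of the well. Subtracting the barometric
    # pressure isolates the pressure exerted by the water column alone.

    RHO_WATER = 1000.0   # density of fresh water, kg/m^3
    G = 9.81             # gravitational acceleration, m/s^2

    def water_column_height(submerged_pa, barometric_pa):
        """Height of water above the submerged sensor, in metres."""
        return (submerged_pa - barometric_pa) / (RHO_WATER * G)

    # A weather front raises atmospheric pressure by 1,000 Pa. The raw submerged
    # reading rises too, but the corrected water level is unchanged.
    print(water_column_height(121_000, 101_300))   # ~2.01 m
    print(water_column_height(122_000, 102_300))   # ~2.01 m, same water level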

If you don't make these sorts of adjustments, you'd simply be stuck using a record you know is wrong.

Some weather station changes are pretty straightforward. The desire for weather information at new airports around the 1940s led to station moves. Some of these stations had been set up on the roofs of post office buildings and later found themselves in an open environment on the edge of town. Looking at the temperature records, you might see a sudden and consistent drop in temperatures by a couple of degrees.4
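Here is a minimal sketch in Python of what such a break looks like in the numbers. The data is synthetic and this is not NOAA's actual homogenization method: a station that ran warm on a rooftop is compared against a stable nearby station, and the pre-move segment is shifted by the estimated offset.

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1900, 1960)
    climate = 0.01 * (years - 1900) + rng.normal(0, 0.2, years.size)  # shared regional signal

    reference = climate.copy()            # nearby station with no documented changes
    station = climate.copy()
    station[years < 1940] += 1.5          # hypothetical rooftop site ran ~1.5 degrees warm

    # Estimate the jump at the documented move year from the station-minus-reference
    # series, then remove it from the pre-move segment.
    diff = station - reference
    before = years < 1940
    offset = diff[before].mean() - diff[~before].mean()
    adjusted = station.copy()
    adjusted[before] -= offset

    print(round(offset, 2))               # ~1.5: the non-climatic step from the move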

Equipment has changed, too, like the installation of a housing to shield thermometers from sunshine or the switch from mercury thermometers to electronic ones. By running them side-by-side, scientists have learned that the two types of thermometers record slightly different low and high temperatures. And because the electronic thermometers necessitated running electricity to the station, some of the stations were moved closer to buildings at the same time.
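The side-by-side comparison itself boils down to simple arithmetic. A minimal sketch with invented paired readings: the average difference over the overlap period is the instrument offset that later has to be accounted for.

    # Hypothetical daily highs recorded side-by-side during an overlap period.
    mercury    = [30.1, 28.4, 31.0, 29.7, 27.9]
    electronic = [29.8, 28.1, 30.6, 29.4, 27.5]

    # Average instrument offset over the overlap; in practice this would be
    # estimated from much longer parallel records.
    offset = sum(m - e for m, e in zip(mercury, electronic)) / len(mercury)
    print(round(offset, 2))   # ~0.34 degrees between the two instrument types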

And while the impact isn't immediately obvious, changing the time of day that the weather station data is recorded is actually a big deal. Most weather stations didn't automatically log measurements, especially in the days of mercury thermometers. Instead, special thermometers were designed to mark the minimum and maximum temperatures that were reached. When someone checked the station to note those measurements, they reset the markers.

Imagine that you reset the thermometer at 4:00pm on a hot summer afternoon. The maximum marker will immediately climb right back to the current temperature, which is still near the day's peak. Even if the following day is considerably cooler, you will return to see the same high temperature on the thermometer; yesterday's warmth is accidentally going to be double-counted. The same goes for minimum temperatures recorded in the morning.

As far as long-term trends are concerned, luckily this doesn't really matter, provided you always check the station at the same time of day. But if you switch from a routine of evening measurements to morning measurements, for example, you'll suddenly be less likely to double-count high temperatures, but much more likely to double-count low temperatures. This is known as "time of observation bias."5
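A minimal sketch with made-up temperatures shows why the reset time matters; the numbers are invented purely for illustration.

    # A max/min thermometer remembers the most extreme reading since its last reset.
    hot_day_peak = 35.0       # day 1 tops out at 35 degrees around 3 pm
    cool_day_peak = 22.0      # day 2 never gets above 22 degrees
    temp_at_4pm_reset = 34.0  # still near the peak when the observer resets at 4 pm

    # Evening observer: day 2's recorded "high" is whichever is larger, the leftover
    # warmth right after the reset or day 2's genuine peak.
    evening_reading_day2 = max(temp_at_4pm_reset, cool_day_peak)
    print(evening_reading_day2)   # 34.0 -> day 1's heat is counted again on day 2

    # Morning observer: the reset happens near the daily minimum, so day 2's
    # recorded high reflects day 2 alone (but lows can be double-counted instead).
    print(cool_day_peak)          # 22.0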

In most of the world, the effect of all these non-climatic factors is neutral. Changes that raised temperatures have been balanced by changes that lowered them. The U.S., however, is different. Here, weather stations are run by volunteers who report to the National Weather Service. Compared to other countries, the U.S. has more stations but less uniformity among those stations.

At times, guidelines for the volunteers in the U.S. have changed, with new equipment or procedures gradually spreading through the network of stations. Around 1960, the guidelines changed from late-afternoon observations to morning observations. That change kicked in over time (many stations didn't switch until a new volunteer took over), and it introduced a substantial cooling bias over that period as a result. In the 1980s, the National Weather Service asked volunteers to switch to electronic thermometers, adding another cooling bias. So in the U.S., accounting for non-climatic factors ends up increasing the warming trend over the raw data, which we know is wrong.
 


1 arstechnica.com/science/2016/01/thorough-not-thoroughly-fabricated-the-truth-about-global-temperature
2 Ibid
3 Ibid
4 Ibid
5 Ibid

For more information

Terry Wildman
Senior Editor
terry@electricenergyonline.com
GlobalRenewableNews.com