
A Guide to Understanding Global Temperature Data by Roy W. Spencer, Ph.D.

July 22, 2016

By Paul Homewood

 

[Embedded booklet: FFP-Global-Temperature-booklet-July-2016-PDF-1]

 

I mentioned earlier the new paper just published by Roy Spencer.

It is only 12 pages long and is well worth reading in full. I have, though, put together a couple of key sections below:

 

INTRODUCTION

 

When we measure temperature in our backyard, we really aren’t that concerned if the thermometer we use is off by a degree or two. Since most people live where the temperature fluctuates by many degrees every day, and the seasonal swing in temperatures can be 80 F or more, a couple of degrees doesn’t matter too much.
But in the case of global warming, one or two degrees is the entire change scientists are trying to measure over a period of 50 to 100 years. Since none of our temperature monitoring systems was designed to measure such a small change over such a long period of time, there is much disagreement over exactly how much warming has or will occur.
Whether we use thermometers, weather balloons, or Earth-orbiting satellites, the measurements must be adjusted for known sources of error. This is difficult if not impossible to do accurately. As a result, different scientists come up with different global warming trends—or no warming trend at all.
So, it should come as no surprise that the science of global warming is not quite as certain as the media and politicians make it out to be.
Increasingly, the “science” of global warming is being based upon theories of what might happen, not on what is being observed to happen. And the observations are increasingly at odds with the theory. The United Nations Intergovernmental Panel on Climate Change (IPCC) relies upon theoretical climate models which predict about 2 C (3.6 F) of warming by the end of this century, due primarily to carbon dioxide emissions resulting from our burning of fossil fuels. The IPCC claims that this rate of warming could be catastrophic for some forms of life.
But is the Earth really warming as rapidly as the IPCC says? And, is that warming entirely the fault of humans?
In this paper I will answer some basic questions about global temperature data in particular, climate change in general, and what it all means for the debate over energy policy. The following questions are some of those most frequently asked of me over the last 20 years I have been performing climate change research under U.S. government funding.
These questions include:
1) Does an increasing CO2 level mean there will be higher global temperatures?
2) Can global temperatures go up naturally, even without rising CO2 levels?
3) How are temperature data adjusted?
4) Are global temperatures really going up? If so, by how much?
5) Is warming enough to be concerned about? Is warming necessarily a bad thing?
6) Could the warming be both natural and human-caused?
7) Why would the climate models produce too much warming?
8) What is climate sensitivity?
9) Don’t 97 percent of climate researchers agree that global warming is a serious man-made problem?
10) Haven’t ocean temperatures been going up, too?
11) What does it mean when we hear “the highest temperature on record”?
12) Is there a difference between weather and climate?
13) Why would climate science be biased? Isn’t global warming research immune from politics?
From the answers to these questions that follow it should be clear that the science of global warming is far from settled.
Uncertainties in the adjustments to our global temperature datasets, the small amount of warming those datasets have measured compared to what climate models expect, and uncertainties over the possible role of Mother Nature in recent warming, all combine to make climate change beliefs as much faith-based as science-based.
Until climate science is funded independent of desired energy policy outcomes, we can continue to expect climate research results to be heavily biased in the direction of catastrophic outcomes.

 

 

3) How are temperature data adjusted?
There are three main methods used to monitor global temperatures, all of which have systematic errors that need to be corrected for.
We have had thermometer data since the mid-1800s, which are our only reliable way of monitoring near-surface temperatures. Over the oceans, the thermometer measurements have mostly come from buoys and ships. Weather balloons (radiosondes) have made measurements of the lower atmosphere only since the 1950s, and for a greatly reduced number of locations. Finally, satellite measurements of the lower atmosphere are our newest technology (since 1979), which have the unique advantage of global coverage.

Unfortunately, all three of these systems have undergone changes with time, and the effects of those changes are often as large as the global warming signal we are trying to measure. That’s why it is not really advisable to just analyze the raw data and expect a meaningful result. Adjustments to the data for known changes in the measurement systems are necessary. But the sizes of those adjustments are quite uncertain, and large differences in calculated global temperature trends can result depending upon who is making the adjustment decisions.
In the case of thermometers, usually placed to measure air temperature about six feet above the ground, there have been changes in the time of day at which the high and low temperatures for the day are reported. Also, natural vegetation around thermometer sites has gradually been replaced with man-made structures, which causes an ‘urban heat island’ (UHI) effect. This effect is experienced by millions of people every day as they commute in and out of cities and towns.
The plot in Fig. 4 shows the average UHI effect I computed from daily surface weather data reported by stations all around the world during the year 2000, based upon daily temperature differences between neighboring temperature stations. As can be seen, even at population densities as low as 10 persons per square kilometer, there is an average warming of 0.6 C (about 1 F), which is almost as large as the global warming signal over the last century.

Fig. 4. Localized warming (Urban Heat Island effect) at temperature monitoring stations occurs as population density increases, as seen in this analysis of one year of daily temperature data from all reporting stations in the world. Note the most rapid warming occurs at the lowest population densities.
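
For readers who want to see the mechanics of this kind of analysis, here is a minimal sketch (not Dr. Spencer’s actual code) of how daily temperature differences between neighboring stations might be binned by population density. The station pairs, temperature differences, and density values below are hypothetical placeholders.

import numpy as np

def uhi_by_density(temp_diffs_c, densities, bin_edges):
    # temp_diffs_c[i]: daily temperature difference (more-populated minus
    #                  less-populated neighbor) for station pair i, in deg C
    # densities[i]:    population density (persons per sq km) at the more-populated station
    # bin_edges:       boundaries of the population-density bins
    temp_diffs_c = np.asarray(temp_diffs_c)
    densities = np.asarray(densities)
    means = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (densities >= lo) & (densities < hi)
        means.append(float(temp_diffs_c[in_bin].mean()) if in_bin.any() else None)
    return means

# Example with made-up numbers: average excess warmth in each density bin
bins = [0, 10, 100, 1000, 10000]
print(uhi_by_density([0.2, 0.7, 1.1, 1.6], [5, 50, 500, 5000], bins))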


Clearly, to make meaningful estimates of global warming, the UHI effect must be taken out of the data. Unfortunately, the UHI effect is difficult to quantify at individual stations, many of which have obvious spurious heat influences around them, like concrete or asphalt paving, exhaust fans, etc. In fact, there is evidence that the UHI effect has not been removed from the surface thermometer data at all. It appears that, rather than the urban stations being adjusted to match the rural stations, the rural stations have instead been adjusted to match the urban stations, which then leads to a false global warming signal.
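
To see why the direction of that adjustment matters, consider a toy example with invented numbers: a rural station with no real trend next to an urban station whose record contains a spurious 1 C-per-century UHI drift. Adjusting the urban record toward the rural one removes the drift; adjusting the rural record toward the urban one imports it into the average.

import numpy as np

years = np.arange(1900, 2001)
rural = np.zeros(years.size)              # true climate: no warming
urban = 0.01 * (years - years[0])         # UHI adds 0.01 C per year

def trend_c_per_century(series):
    return 100.0 * np.polyfit(years, series, 1)[0]

urban_adjusted_to_rural = urban - (urban - rural)   # drift removed
rural_adjusted_to_urban = rural + (urban - rural)   # drift copied in

print(trend_c_per_century(0.5 * (rural + urban_adjusted_to_rural)))   # ~0.0 C/century
print(trend_c_per_century(0.5 * (urban + rural_adjusted_to_urban)))   # ~1.0 C/century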

Besides the UHI effect, older mercury-in-glass thermometers housed in wooden instrument shelters (Fig. 5) have been largely replaced with electronic thermistor-type thermometers in smaller metal housings. The newer sensors measure electrical resistance which is then related to temperature. Such instrument changes do not really affect their use for weather monitoring, but can have a significant impact on long-term temperature monitoring.
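
As a rough illustration of how a thermistor reading becomes a temperature, the widely used Steinhart-Hart equation relates the measured resistance to absolute temperature. The coefficients below are typical published values for a generic 10 kOhm NTC thermistor and are shown only as an example, not as those of any particular weather-station sensor.

import math

def thermistor_temp_c(resistance_ohms,
                      a=1.009249522e-3, b=2.378405444e-4, c=2.019202697e-7):
    ln_r = math.log(resistance_ohms)
    inv_t = a + b * ln_r + c * ln_r ** 3   # 1/T, with T in kelvin
    return 1.0 / inv_t - 273.15            # convert kelvin to Celsius

print(round(thermistor_temp_c(10000.0), 1))   # about 24.7 C near the sensor's nominal resistance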

In the case of weather balloons, which measure the temperature profile up through the lower atmosphere (see Fig. 6), the instrumentation designs have also changed over time. The thermistors themselves have changed, shielding of the thermistors from sunlight has changed, as has the computer software for analyzing the data.
As in the case of the thermometer measurements, these changes have not affected weather forecasting, because they are small (usually a degree or less) compared to the size of day-to-day weather changes. But they are large and detrimental for the purposes of long-term temperature monitoring.

Finally, sensors flown on satellites (Fig. 7) measure how much thermal radiation is emitted by the atmosphere. But these must be replaced with new satellites every few years, and no two sensors are exactly alike. This means that successively launched satellites must be intercalibrated, that is, adjusted so that the newer satellite readings match the readings from the older satellite during the period when both satellites are operating.
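
The basic idea of intercalibration can be sketched very simply: during the overlap period, estimate the mean offset between the new and old satellite readings and subtract it from the new record. The actual processing of the satellite datasets is far more involved; the numbers below are made up purely for illustration.

import numpy as np

def intercalibrate(old_overlap, new_overlap, new_full_record):
    # old_overlap, new_overlap: co-located readings (deg C) from the two
    #                           satellites during the overlap period
    # new_full_record:          the newer satellite's complete record
    offset = np.mean(np.asarray(new_overlap) - np.asarray(old_overlap))
    return np.asarray(new_full_record) - offset

# Example: the newer sensor reads 0.15 C warmer on average during the overlap
old = [0.10, 0.12, 0.08]
new = [0.25, 0.27, 0.23]
print(intercalibrate(old, new, [0.25, 0.27, 0.23, 0.30]))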

Then, most of the satellites slowly drift from measuring temperature at a specific time of day in the early years of a mission to a different time of day in later years, requiring another adjustment. This is due to the satellites slowly falling back to Earth, which takes them out of the original orbit that was intended to keep them measuring at the same local time every day.
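
A drift adjustment of this kind can be sketched as follows: given an assumed mean diurnal cycle (the typical temperature anomaly at each local hour relative to the daily mean), a reading taken at the drifted observation hour is shifted back to what it would have been at the original reference hour. The diurnal-cycle values here are invented solely for illustration.

# invented diurnal cycle: temperature anomaly (deg C) by local hour
diurnal_cycle = {0: -0.8, 6: -1.2, 12: 1.5, 18: 0.5}

def correct_for_drift(reading_c, observed_hour, reference_hour):
    return reading_c - diurnal_cycle[observed_hour] + diurnal_cycle[reference_hour]

# A satellite designed to observe at 06:00 local time that has drifted to 12:00
print(correct_for_drift(15.0, observed_hour=12, reference_hour=6))   # 12.3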
Finally, some of the satellites show small changes in their calibrated temperatures, usually by only hundredths of a degree, for reasons which are unknown.
The errors for satellites are typically hundredths or tenths of a degree, and so are generally smaller than for ground-based thermometer systems. Nevertheless, differences in how adjustments to the satellite data are made can lead to global temperature trend differences of 50 percent or more between different research groups’ results, which has led to some controversy.
While some people criticize the satellite measurements as not really being ‘temperature’ measurements, they are no more indirect than surface thermometers, which use thermistors to measure electrical resistance, which is proportional to temperature. The satellites instead measure the intensity of thermal microwave radiation, which is also proportional to temperature. (Today, some doctors use a similar method to take your temperature with an infrared-measuring instrument pointed into your ear.) Both surface thermometers and satellite sensors involve calibration adjustments to relate their measurements to temperature, and each has its own relative advantages and disadvantages. One advantage of the satellite sensor is that a single measurement samples about 10,000 cubic kilometers of air, while a single thermometer measurement samples maybe a few cubic feet of air.
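
The physics behind that proportionality is straightforward: at the microwave frequencies used for atmospheric sounding, the Rayleigh-Jeans approximation makes the emitted radiance very nearly proportional to the absolute temperature of the emitting air. A minimal sketch of the conversion, using a frequency in the 50-60 GHz oxygen absorption band as an example:

K_BOLTZMANN = 1.380649e-23   # J/K
C_LIGHT = 2.99792458e8       # m/s

def brightness_temperature_k(radiance, freq_hz):
    # radiance in W m^-2 Hz^-1 sr^-1; Rayleigh-Jeans: B = 2 k f^2 T / c^2
    return radiance * C_LIGHT ** 2 / (2.0 * K_BOLTZMANN * freq_hz ** 2)

def radiance_from_temperature(temp_k, freq_hz):
    return 2.0 * K_BOLTZMANN * freq_hz ** 2 * temp_k / C_LIGHT ** 2

freq = 55e9   # Hz, illustrative oxygen-band sounding frequency
print(round(brightness_temperature_k(radiance_from_temperature(250.0, freq), freq), 1))   # 250.0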
So, we can see that the unavoidable adjustments that are necessary to analyze global temperature trends over many years lead to considerable uncertainty. In the case of the UAH satellite dataset I am the co-developer of, we estimate a global temperature trend uncertainty of 0.1 F per decade or less. By way of comparison, the IPCC-expected global warming signal is about 0.5 F per decade over the next 50 to 100 years.
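
For completeness, here is a minimal sketch of how a warming trend per decade is typically estimated from a monthly temperature anomaly series, using an ordinary least-squares fit against time. The anomaly series below is random synthetic data, not the UAH record.

import numpy as np

rng = np.random.default_rng(0)
months = np.arange(480)                                             # 40 years of monthly data
anomalies_c = 0.001 * months + rng.normal(0.0, 0.15, months.size)   # synthetic anomalies

slope_c_per_month = np.polyfit(months, anomalies_c, 1)[0]
trend_f_per_decade = slope_c_per_month * 120 * 9.0 / 5.0            # 120 months/decade, C -> F
print(round(trend_f_per_decade, 2))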

12 Comments
  1. David Richardson
    July 22, 2016 12:26 pm

    The usual sane, measured (sorry) and scientific analysis of the situation you would expect from Dr. Spencer – thank you Sir, and thank you Paul for spreading sight of it to an even wider audience.

    The idea that “new records” can be pronounced to 100th of a degree, while the error in the readings is at least an order of magnitude bigger, is mad – but it is not the rigour but the impression that alarmists want to convey.

  2. July 22, 2016 12:38 pm

    Reblogged this on Patti Kellar.

  3. July 22, 2016 1:12 pm

    yet another complexity in the temperature data is that the de-diurnalized, deseasonalized, and detrended residuals are not gaussian. they tend to show dependence, memory, and persistence in terms of the so called “hurst phenomenon”.

    persistence generates apparent but random decadal and multidecadal trends but these things aren’t really trends that can be ascribed to an external cause because they are the expression of randomness in a time series that exhibits dependence.

    an important assumption of ols regression is independence.

    please see
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2763358
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2689425
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2776867

    there’s more at
    ssrn.com/author=2220942

  4. July 22, 2016 2:35 pm

    Utterly reasonable. Here is a man one can trust.

  5. July 22, 2016 5:24 pm

    He makes a great deal of sense. Considering the wide temperature variations even a mile apart, it has always been a question of mine how you interpolate the values. How can you know if the value you interpolated is even close to what was there? The entire business is overwhelmingly complex and quite probably beyond what we have the capability of doing at this point. The only way to be sure about things would be a very large system of measurements done at the same time every day, calibrated regularly and the instruments not varied over time. This would, of course, require many more years of research and the global warming crowd would not tolerate waiting for an accurate measurement.

    This doesn’t even touch on gridding and so forth while calculating the global average temperature. It’s rather mind-boggling.

    • July 22, 2016 8:26 pm

      A few partial answers. Gridding is done by timing. The orbit is known, the instrument aperture (field of view) is known, so a specific read at a specific time can be mapped to a spot on a gridded globe.
      Testing the UAH algorithms is done by comparison to weather balloon radiosonde data that directly measures temperature at altitude. They use a series of weather stations up the west NA coast from southern Mexico to northern Alaska to verify results by latitude.
      The most recent processing revision (r6) improved the aperture/altitude solution for Earth’s curvature. Dr. Spencer blogged the problem and the r6 algorithm change; the formal paper on it is still in peer review.

  6. July 22, 2016 6:15 pm

    Reblogged this on Petrossa's Blog.

  7. July 23, 2016 1:25 am

    Reblogged this on Climatism and commented:
    Excellent read.

  8. July 23, 2016 3:09 am

    If we have hundreds of thermometers and if their bias is random – some reading too high others too low – their biases will cancel out. Their imprecision is of course random by definition and those errors also cancel. A sufficiently large sample (as in USCRN) should in principle still be able to detect small long term trends in the de-diurnalized and de-seasonalized series in which about 90% of the variance in the measurements has been removed. However the trend estimation cannot be carried out with OLS regression because the temperature time series violates OLS assumptions. It contains dependence in the form of Hurst persistence. This is really the crux of the problem in the hunt for the elusive and ever changing OLS trend in temperature. It’s not because the thermometers are bad. It is because the OLS tool is unreliable in these kinds of time series data.

    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2763358
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2631298
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2689425

    more at
    ssrn.com/author=2220942

  9. July 23, 2016 3:13 am

    oops
    sorry
    accidentally made the same comment twice
    please discard one of them

  10. July 30, 2016 1:10 am

    Reblogged this on Climate Collections.
