
Kansas Temperature Trend Update–Muller Confirms There is a Problem

October 22, 2011

The analysis of USHCN stations in Kansas that I reported a few days ago suggested a large divergence in temperature trends since 1911: some stations showed increases of up to 1.82°C while others showed as little as 0.13°C.

The new paper from Richard Muller’s BEST project confirms that this problem exists not just in Kansas but across the whole US, and by implication worldwide. To quote:

As with the world sample, the ratio of warming sites to cooling ones was in the ratio of 2:1. Though some clumping is present, it is nonetheless possible to find long time series with both positive and negative trends from all portions of the United States. This reemphasises the point that detection of long term climate trends should never rely on individual records.

This message is reinforced by the map they show.


Map of stations in and near the US with at least 70 years of measurements; red stations are those with positive trends and blue stations are those with negative trends.

Muller makes no attempt to identify the reasons for these discrepancies but seems to accept they have nothing to do with long term climate trends. However, in other parts of his paper he talks about the very real causes of UHI warming and also how “rural areas could show temperature biases due to anthropogenic effects, for example, changes in irrigation”. In a very telling comment about possible urban cooling effects he says “For example, if an asphalt surface is replaced by concrete, we might expect the solar absorption to decrease, leading to a net cooling effect”. What’s sauce for the goose etc!

The bottom line is that all the reds and all the blues cannot be right. Many must be incorrect. And yet the approach of BEST, along with GISS and co, is simply to average the lot together, hoping that the errors will even themselves out. This cannot be sound science.
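The averaging question can be made concrete with a toy simulation (standard-library Python; all numbers are illustrative, not real station data). When station errors are independent and zero-mean, the average does converge on the truth; but a bias shared by every station — a systematic siting or instrumentation effect, say — survives the averaging completely untouched:

```python
import random
import statistics

random.seed(42)

TRUE_TEMP = 15.0    # hypothetical "true" regional temperature, deg C
N_STATIONS = 1000

# Case 1: independent, zero-mean errors -- the average converges on truth.
independent = [TRUE_TEMP + random.gauss(0, 0.5) for _ in range(N_STATIONS)]

# Case 2: every station shares the same systematic bias -- no amount of
# averaging across stations can remove it.
BIAS = 0.8
biased = [TRUE_TEMP + BIAS + random.gauss(0, 0.5) for _ in range(N_STATIONS)]

err_independent = abs(statistics.mean(independent) - TRUE_TEMP)
err_biased = abs(statistics.mean(biased) - TRUE_TEMP)

print(f"error with independent noise: {err_independent:.3f}")
print(f"error with shared bias:       {err_biased:.3f}")
```

So whether "average the lot" is sound depends entirely on whether the station errors are independent — which is precisely the question at issue.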

  1. October 22, 2011 4:50 pm

    The bottom line is that all the reds and all the blues cannot be right.

Well, there’s another possibility: the “trends” are well within the true margin of error (given that I’ve personally experienced 5°F of temperature change within ¼ mile in Kansas, and given that no reporting stations are actually accurate to ±0.1°F, let alone ±0.01°F) and are therefore nothing more than what you’d expect from trying to draw trend lines through multiple sources of pink noise.

    • Mike Davis permalink
      October 23, 2011 12:54 pm

I was under the impression that accuracy for thermometers was in the neighborhood of 1C, and here in Tennessee I can see as much as 10F difference in 20 miles at the same elevation.
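The trend-lines-through-noise point above is easy to demonstrate with a toy simulation (standard-library Python; a random walk, i.e. "red" noise, is used here as a stand-in for pink noise, and all parameters are illustrative). Fitting ordinary least-squares trends to many noise-only series with no underlying trend still produces plenty of "warming" and "cooling" records:

```python
import random

random.seed(0)

def ols_slope(y):
    """Ordinary least-squares slope of y against its index 0..n-1."""
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    cov = sum((x - mx) * (v - my) for x, v in enumerate(y))
    var = sum((x - mx) ** 2 for x in range(n))
    return cov / var

def red_noise_series(length, sigma=0.1):
    """Random walk: each 'annual' value drifts from the last -- no real trend."""
    y, val = [], 0.0
    for _ in range(length):
        val += random.gauss(0, sigma)
        y.append(val)
    return y

# 500 independent 100-"year" series, every one of them pure noise.
slopes = [ols_slope(red_noise_series(100)) for _ in range(500)]
n_pos = sum(s > 0 for s in slopes)
n_neg = sum(s < 0 for s in slopes)
print(f"{n_pos} 'warming' series, {n_neg} 'cooling' series -- all pure noise")
```

A mix of positive and negative fitted trends, by itself, is therefore exactly what correlated noise would produce.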

  2. Scott Mebeat permalink
    October 23, 2011 11:41 am

Data on their own say nothing about their causes. The warming industry, with its fixation on atmospheric CO2, forgot to do the science to establish the effect of its greenhouse hypothesis on the climate – computer models offer no more than a circular argument. Or at least, nothing has been published. We need to be careful not to bark up the wrong tree.

  3. Mike Davis permalink
    October 23, 2011 1:16 pm

As a retired troubleshooter I can guarantee you that multiple errors do not average out; they compound and tend to exaggerate each other. Each site “MUST” be treated individually, then the most reliable sites should be compared. The end result can be no better than the worst starting measurement used. It may look good on paper but that is just playing mind games and fooling themselves.
I was in middle management for a short period and went back to hands-on work because management is only interested in statistics / the bottom line and finding ways to improve those, usually by sharpening a pencil rather than making any meaningful changes in methods or procedures. If the work is done right the statistics do not matter because the answer is obvious.

    • October 23, 2011 1:32 pm

      Averaging works when you are taking a random sample, say, the average weight of 100 people. Then the average is …well… the average!

With temperatures, where you are looking for the underlying climatic signal, such an approach is meaningless.

      • October 24, 2011 9:02 pm

        Precisely. If there were tens of thousands of thermometers of varying reliability you could take a sufficiently large random sample and average it to get a product mildly more reliable than the thermometers themselves.

  4. Mike Davis permalink
    October 23, 2011 3:28 pm

With weight you are averaging samples of a known and verifiable quantity, where the error lies only in the measurement devices used. You are not weighing a 5’1″ 95 pound person in one town and extrapolating that to be the weight of all people in that town, with a 6’4″ 300 pound person representing the next town.
Then there is the over-time issue that compounds it all. Fifty years ago I weighed less than 100 pounds and now I weigh just over 200. It must have been caused by Global Warming! 😉

  5. Ron C. permalink
    October 26, 2011 1:53 pm

As a number of people here have noted, climate and weather are definitely local realities. Pielke’s research in Colorado showed large differences in temperature readings among sites located within a few km of each other. Further, no single station was representative of the regional (statistically compiled) trend. Thus, as the map above shows, climate is local; regional, national and global climates are statistical artifacts.

On the matter of precision: yes, you can improve it by taking multiple readings and averaging, but the multiple samples must be taken under the exact same conditions. From what is said above, readings from different gauges at different sites do not meet this requirement. From what I have read, the precision is +/- 0.5F.
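The repeated-readings point above can be sketched with a toy simulation (standard-library Python; the ±0.5 per-reading error and all other numbers are illustrative). Averaging n readings of the *same* quantity shrinks random error roughly as 1/√n — which is exactly why the same-conditions requirement matters, since readings of different quantities at different sites get no such benefit:

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 20.0   # hypothetical true temperature being measured
SIGMA = 0.5         # assumed random error of a single reading (+/- 0.5)

def mean_of_readings(n):
    """Average n noisy readings of the SAME quantity, same conditions."""
    return statistics.mean(TRUE_VALUE + random.gauss(0, SIGMA) for _ in range(n))

# Compare the typical error of one reading vs. the mean of 100 readings.
trials = 2000
err_single = statistics.mean(abs(mean_of_readings(1) - TRUE_VALUE) for _ in range(trials))
err_avg100 = statistics.mean(abs(mean_of_readings(100) - TRUE_VALUE) for _ in range(trials))

print(f"typical error, single reading: {err_single:.3f}")
print(f"typical error, mean of 100:    {err_avg100:.3f}")
```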

Comments are closed.
