Serious quality problems in the surface temperature data sets – Ross McKitrick
By Paul Homewood
When people talk about the widely reported global surface temperature record, it is worth recalling Ross McKitrick’s damning assessment of it in 2010, “A Critical Review of Global Surface Temperature Data Products”.
ABSTRACT
There are three main global temperature histories: the combined CRU-Hadley record (HADCRU), the NASA-GISS (GISTEMP) record, and the NOAA record. All three global averages depend on the same underlying land data archive, the Global Historical Climatology Network (GHCN). Because of this reliance on GHCN, its quality deficiencies will constrain the quality of all derived products.
The number of weather stations providing data to GHCN plunged in 1990 and again in 2005. The sample size has fallen by over 75% from its peak in the early 1970s, and is now smaller than at any time since 1919. The collapse in sample size has increased the relative fraction of data coming from airports to about 50 percent (up from about 30 percent in the 1970s). It has also reduced the average latitude of source data and removed relatively more high-altitude monitoring sites.
Oceanic data are based on sea surface temperature (SST) rather than marine air temperature (MAT). All three global products rely on SST series derived from the ICOADS archive. ICOADS observations were primarily obtained from ships that voluntarily monitored SST. Prior to the post-war era, coverage of the southern oceans and polar regions was very thin. Coverage has improved partly due to deployment of buoys, as well as use of satellites to support extrapolation. Ship-based readings changed over the 20th century from bucket-and-thermometer to engine-intake methods, leading to a warm bias as the new readings displaced the old. Until recently it was assumed that bucket methods disappeared after 1941, but this is now believed not to be the case, which may necessitate a major revision to the 20th century ocean record. There is evidence that SST trends overstate nearby MAT trends.
The quality of data over land, namely the raw temperature data in GHCN, depends on the validity of adjustments for known problems due to urbanization and land-use change. The adequacy of these adjustments has been tested in three different ways, with two of the three finding evidence that they do not suffice to remove warming biases.
The overall conclusion of this report is that there are serious quality problems in the surface temperature data sets that call into question whether the global temperature history, especially over land, can be considered both continuous and precise. Users should be aware of these limitations, especially in policy-sensitive applications.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1653928
Add into the mix the fact that there is little or no data for vast swathes of the world.
https://www.ncdc.noaa.gov/temp-and-precip/global-maps/
And it is clear that the whole thing needs to be taken with a large dose of salt.
Are there any databases that track “average daily high temperature” and “average overnight low temperature” data? As in the raw information, as opposed to anomalies? I’d be curious to see if there is any actual increase in average high temperatures, since to me that would be a better indicator of ongoing global warming. If you are basically only sampling in, say, urban areas, you’re getting a lot of residual heat at the start of each day, so it would seem logical that if there were global warming, the daily highs would have to be rising. They don’t seem to be rising much, if at all, here in Phoenix, AZ, but the average daily temperature and certainly the overnight lows have risen.
Tom… try this? https://www.ncdc.noaa.gov/cag/time-series/us/2/0/tmax/1/5/1895-2017?base_prd=true&firstbaseyear=1901&lastbaseyear=2000&trend=true&trend_base=10&firsttrendyear=1998&lasttrendyear=2017
Maximum May temperatures since 1998 in Arizona have been trending cooler…
Tom O, yes, BEST provides that for land stations. I did a study in which I wanted to compare seasonal highs and lows and needed maxes and mins by hemisphere.
https://rclutz.wordpress.com/2015/06/22/when-is-it-warming-the-real-reason-for-the-pause/
Good article!
“I’d actually assert that there are only two measurements needed to show the existence or absence of global warming. Highs in the hottest month must get hotter and lows in the coldest month must get warmer. BOTH must happen, and no other months matter as they are just transitional.”
I would agree. You need those temperatures.
A number of years ago I did some experimenting with math and found that a day that gets hotter can have a higher average than a cooler day – and that the average could not tell me anything about what the highs or lows were. My next thought was that that would work when averaging temperatures for the entire planet, too. Only actual temperatures mattered. I was floored that they were using an average to “prove” global warming. Then I found Christopher Essex’s article about averages – and it confirmed everything I’d learned. The average is meaningless. The planet not only didn’t have a single temperature, but there was no way to get an average either. Turns out there’s no way to get an absolute surface air temperature period. Even James Hansen admits to that (https://data.giss.nasa.gov/gistemp/faq/abs_temp.html). There isn’t even a standard way to measure the temperatures they are getting now. They are playing with no rules.
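The commenter’s arithmetic point is easy to check with a toy example (the numbers below are invented purely for illustration): a day with a lower maximum can still have a higher average than a day with a higher maximum, so a mean by itself says nothing about the highs and lows behind it.

```python
# Toy example with made-up "hourly" readings (deg C), illustrating that
# a daily mean cannot be inverted to recover the high or the low.
day_a = [10, 14, 22, 30, 24, 16]   # hot afternoon (high 30), cool night
day_b = [16, 18, 21, 25, 22, 19]   # milder day (high only 25), warm night

mean_a = sum(day_a) / len(day_a)   # ~19.3
mean_b = sum(day_b) / len(day_b)   # ~20.2

# day_b has the lower maximum but the higher average
assert max(day_b) < max(day_a) and mean_b > mean_a
```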
The use of anomalies greatly strengthens the credibility of the temperature record.
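For context, anomalies are formed by subtracting each station’s own baseline climatology (commonly a 1961–1990 mean) from its readings, which removes absolute-level differences between stations. A minimal sketch, with all numbers invented:

```python
# Minimal sketch of anomaly calculation (all numbers invented).
# Each station is compared against its own baseline mean, so stations at
# very different absolute temperatures become directly comparable.
baseline = {"valley": 15.0, "mountain": 5.0}   # hypothetical 1961-1990 means, deg C
current  = {"valley": 15.8, "mountain": 5.9}   # hypothetical current-month means

anomalies = {site: current[site] - baseline[site] for site in current}
# valley -> +0.8 C, mountain -> +0.9 C: similar departures despite a
# 10 C difference in absolute temperature
```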
Global surface temperature datasets should be consigned to the dustbin as they are junk.
Why was there this data reduction just as the alleged seriousness of CAGW was being announced?
In all technical work I have been involved in, the importance of the project caused more data, not less, to be gathered. What is the response of climatologists to the reduction? The data is still being collected, just not used. How would Mann support it?
If past data is “poor” and only the current data “good”, how can you create graphs combining the two? Another nature “trick”?
“If past data is ‘poor’ and only the current data ‘good’, how can you create graphs combining the two? Another nature ‘trick’?”
It’s all rather too bloody convenient, and the consensus says 97% are agreed because you’re worth lying to! That’s why the graphs. But it’s all natural, the panic is false, the reasons are unconscionable, and the results of the green agenda will cause deaths through cold and intermittent electricity supply. Mankind’s greatest enemy is hubris and mankind itself. We the people need protecting from the iniquity and mendacity of our own political claque…
Man-made CO2 = warming is a political myth; it’s the greatest political fiction and scam. CO2 rises as a consequence of temperature increases and thus, rationally, in all logic, cannot be causal.
So, stuff what the ‘figures’ say: this never was an ‘emergency’.
Some 6,000 surface temperature stations were reduced to about 1,500.
How many stations at high altitude and/or very northerly or very southerly latitudes were dropped, and when ?
Not so fast, Phillip. Ocean SSTs cover 71% of the global surface, and HadSST is reasonably reliable. Here is the update for May 2017, showing that the ocean cooling has resumed.
https://rclutz.wordpress.com/2017/06/13/ocean-cooling-resumes/
As many have observed, the global surface temperature record is nothing of the sort as it is stitched together from some land and ocean measurements. Unfortunately neither measures a surface temperature.
The land-based datasets just record the temperature of the passing air currents at eye level. Anyone who has tried to walk down a sandy beach on a hot summer’s day can testify that this is not the same as the temperature of the hot sand underfoot. Similarly, ocean measurements are not taken at the surface, but rather reflect the temperature of ocean currents close to the surface.
There is no such thing as a “global average temperature”: that is thermodynamic nonsense. You can’t even define it. Temperature is defined only for systems in thermal equilibrium, which the Earth is not (if it were, we would not be here). Non-equilibrium systems (like boilers, aero engines, car engines … or GCMs) are usually modelled by some assumption of “local” thermodynamic equilibrium (so you can use, e.g., local “equations of state”), but that does not make a non-local average the “temperature” of anything. A global mean temperature is a statistic, yes, carrying the dimensions of temperature, but it isn’t the temperature of anything.
Worse yet, whether a statistic shows “warming” or “cooling” depends on what kind of average you use. For the intrepid, see J. Non-Equilib. Thermodyn. 2007, v. 32, pp. 1–27.
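The paper’s point can be sketched numerically. With made-up temperatures for a hypothetical two-site “world”, two equally defensible averaging rules applied to the same data can trend in opposite directions:

```python
from statistics import mean, harmonic_mean

# Made-up two-site "world" in kelvin: the cold site cools, the hot site warms.
year1 = [260.0, 300.0]
year2 = [250.0, 311.0]

# The arithmetic mean rises, but the harmonic mean (which weights cold
# regions more heavily) falls over the same interval.
arith_change = mean(year2) - mean(year1)                    # +0.5 K
harm_change  = harmonic_mean(year2) - harmonic_mean(year1)  # ~ -1.39 K

assert arith_change > 0 > harm_change
```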
WMO- “In order to assess the state of the climate in any region, regular, distributed and long-term observations are needed within that region. Unfortunately, WMO Regional Baseline Climatological Network (RBCN) stations and GCOS Global Surface Network (GSN) stations across the African continent often lack resources to report on monthly or on annual (see Figure 4) time scales”
The WMO flags up that Africa needs 9,000 temperature stations, which gives the scale of the problem: Africa is bigger than the land masses of the US, China, India, Mexico, Peru, France, Spain, Papua New Guinea, Sweden, Japan, Germany, Norway, Italy, New Zealand, the UK, Nepal, Bangladesh and Greece put together.
WMO- “Some 70 countries around the world do not have the capabilities they need to generate and apply climate information and forecasts with the required timeliness and quality of service”
WMO- “The amount of data available for the period 1901-1930 was rather limited”
Presumably pre-1900 data was practically non-existent, so what are they comparing with?
“Add into the mix the fact that there is little or no data for vast swathes of the world.”
That doesn’t stop claims of “Record Warmest” in some data-deficient areas.
Namibia’s temperature data ended in 1948; Angola’s in 1951.
I wish that when they discuss “global temperature” on the BBC etc., they would actually address how the figures are arrived at, and how the typical advocate of “climate change” thinks they are arrived at.
I suspect that most would be surprised that there is so little actual data.
Harrabin and his mates have absolutely no idea how it is derived. To adapt an old saying:
Those who can, do; those who can’t become climate scientists; and those who don’t make it become BBC Environment reporters.
The BBC declares all contrary fact or opinion on climate ‘false balance’. Yet, as really happened, Roger Harrabin on the main news bulletin showing us a pea-sized, shrivelled green strawberry, in his walled garden, in the middle of a city, four feet from patio doors, in January, is apparently positive proof.
So that leaves us with a 39-year satellite record to reckon global warming.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2985439
I agree that there are many deficiencies in the quality and coverage of surface temperature records. But what choices do we have? Satellite data is great but only exists for the past 35 years or so. Apart from using thermometer records, how can we assess global warming? Tree rings, or ice cores or computer models?
Thermometer records show that in most regions of the world there has been a cyclical temperature trend over the past century or more, with highs around 1930 and 2000 and lows around 1900 and 1970. Using short records, such as satellite data, therefore captures only the rising part of the temperature cycle and produces quite misleading conclusions.
My analysis of several hundred long-term surface records (all raw data, from rural locations or small towns) shows a remarkable similarity of temperature trends. It would appear that data errors are generally smaller than the typical cyclical trendlines at individual stations. https://briangunterblog.wordpress.com/2017/02/17/world-50-temperatures/
This chart shows just how WOEFULLY POOR the SST data, or in fact any data feeding into the calculations of ocean heat content, were before even 2003.
Any SST or OHC reconstruction before 2003 can ONLY be a product of massive, assumption-driven modelling. The real data just aren’t there.
Levitus etc. … just FABRICATION to tell the warming story.
I can confirm that merchant ships were still using buckets for sea surface temp reports to Met Office during 1990s and even later in some cases. The method used was coded into the message sent to Met Office. We too noted on a day to day basis that bucket temps were notably lower than engine intake temps.
“…bucket temps were notably lower…”
Was this difference before or after the standard corrections, for evaporative cooling of the Ashford bucket, etc.? And since the corrections depend on how long the bucket stands and where it stands, how does one allow for different handling by different ships’ crews anyway? It all seems like straining at gnats.
Updates:
(1) On the death spiral:
(2) On the ever-increasing tropical cyclones:
http://models.weatherbell.com/tropical.php