
How Temperature Adjustments Have Transformed Arctic Climate History

February 11, 2015

By Paul Homewood


There seems to have been a campaign of misinformation to downplay the significance of the Arctic temperature adjustments we have been looking at.


The claim is that they make little difference to temperature trends there, so let’s test this out with the example of Akureyri.





Note that the temperatures were cooled from 1922 to 1965, effectively in the middle of the record, which started in 1882. As a result, the overall trend since 1882 has remained virtually the same. But, of course, this is not the point, as it is the trend since the 1920s that is significant.

We can see the effect on trends, using 5-year running averages in the chart below.
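A 5-year running average of this sort is straightforward to compute. As a minimal sketch (the annual values below are hypothetical, not the actual Akureyri series):

```python
# Centred 5-year running mean of a series of annual values.
# Values near the ends, where a full window is unavailable, are left as None.

def running_mean(values, window=5):
    """Centred running mean over an odd-sized window; None where incomplete."""
    half = window // 2
    out = []
    for i in range(len(values)):
        if i < half or i + half >= len(values):
            out.append(None)  # not enough neighbours for a full window
        else:
            segment = values[i - half : i + half + 1]
            out.append(sum(segment) / window)
    return out

# Hypothetical annual mean temperatures (degrees C), for illustration only:
annual = [3.1, 2.8, 3.4, 4.0, 3.6, 2.9, 3.3]
print(running_mean(annual))
```

The running mean smooths year-to-year weather noise, which is why it is the usual way to compare decadal-scale warmth between, say, the 1930s-40s and recent years.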





The original data shows that the period from 1930 to about 1950 was every bit as warm as the last decade. (Raw and adjusted data have been identical since 1990.) There is also a clearly evident cycle.

After adjustments, there is just a steadily increasing trend, albeit with a flattish interval in the middle.

Although some adjustments were made in 1948, the main one, of 1.08C, came in 1966, effectively reducing all previous temperatures. It was a reaction to a sharp fall in temperature over the previous two years, from 4.70C to 2.18C, which the algorithm assumed was due to changes in observation practices.
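In effect, the algorithm behaves like a simple change-point correction: when the difference in mean level either side of a candidate break year exceeds a threshold, the whole earlier segment is shifted to match the later level. A minimal sketch of that idea, using hypothetical numbers (an illustration of the principle, not the actual GHCN pairwise homogenisation code):

```python
# Toy model of a step-change adjustment: if the jump between the mean
# levels before and after a candidate break exceeds `threshold` (deg C),
# shift all earlier values by that jump. NOT the actual GHCN algorithm.

def adjust_step(values, break_index, threshold=1.0):
    """Shift values before break_index by the mean offset across the break,
    but only if that offset exceeds the threshold."""
    before = values[:break_index]
    after = values[break_index:]
    offset = sum(after) / len(after) - sum(before) / len(before)
    if abs(offset) < threshold:
        return list(values)  # jump too small: leave the record alone
    return [v + offset for v in before] + list(after)

# Hypothetical series with a sharp (weather-driven) drop at index 4:
series = [4.5, 4.7, 4.6, 4.4, 2.2, 2.4, 2.3, 2.5]
print(adjust_step(series, 4))  # earlier values are pulled down to the later level
```

The point at issue in the post is exactly this: such a procedure cannot, by itself, distinguish a genuine climatic shift from a change in observation practices.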

When we check around the region, however, we find that there were similar temperature drops all over the place. I offer a few examples below, but, as we have seen, similar adjustments were made at nearly every station in that part of the Arctic at around that time.


  Station                   1964    1966    Diff
  Angmassalik, Greenland   -0.28   -2.29   -2.01
  Stykkisholmur, Iceland    5.05    3.10   -1.95
  Reykjavik, Iceland        6.04    4.24   -1.80
  Jan Mayen, Norway        -1.29   -2.48   -1.19
  Archangel, Russia         1.07   -1.07   -2.14
  Murmansk, Russia          0.58   -2.47   -3.05



It is worth noting what the Iceland Met Office have to say about the sea ice years:


A comparison of annual temperatures at three stations, Stykkishólmur in the west, Akureyri in the north and Reykjavík in the southwest reveals some inter-station differences.

The first cold interval, the "ice years", was the coldest of the three in the north and east, but the 1979 to 1986 period was the coldest in southwestern Iceland.


This would explain why the temperature drop at Akureyri was greater than at the other two stations.

One other thing worth pointing out is the timing of the onset of cold. In Nuuk, which, unlike Angmassalik, lies on the west side of Greenland, the cold came a year later, with the temperature dropping by 2.28C between 1965 and 1967.

Over in Siberia, however, the cold came earlier. For instance, there was a drop of 4.51C between 1962 and 1964. Other stations, such as Salehard and Ust Cilma, experienced similar drops.

Certainly, this different timing may have confused the algorithm.


It is ludicrous to assume that all of these drops in temperature were due to station moves or equipment changes.

But we don’t have to make assumptions at all, as there is overwhelming evidence of this dramatic climatic shift. For instance, Dickson & Overhus:


The East Icelandic Current, which had been an ice-free Arctic current in 1948-1963, became a polar current in 1965-1971, transporting drift ice and preserving it…

Aided by active ice formation in these polar conditions, the Oceanic Polar Front spread far to the south-east of normal, with sea ice extending to the north and east coasts of Iceland…

However, the Great Salinity Anomaly is certainly one of the most dramatic events of the century in the Norwegian Sea.


Or, Lawrence Hamilton, Sea Changes Ashore: The Ocean and Iceland’s Herring Capital:


In the mid-1960s, northwesterly winds associated with a prolonged negative NAO/AO state drove unusual volumes of polar surface water and ice through Fram Strait into the Greenland and Iceland seas.

Dickson et al. (1988:103) described this as “one of the most persistent and extreme variations in global ocean climate yet observed in this century.”…

From 1920 until 1965, relatively warm conditions prevailed over the northern North Atlantic. In 1965, a sudden change occurred; drift ice and polar water covered the north Icelandic shelf during spring.


And then there is HH Lamb, “Climate, History & The Modern World”:


A greatly increased flow of the cold East Greenland Current has in several years (especially 1968 and 1969, but also 1965, 1975 and 1979) brought more Arctic sea ice to the coasts of Iceland than for fifty years. In April-May 1968 and 1969, the island was half surrounded by ice, as had not occurred since 1888.



Let us recall that the GHCN adjustments are triggered by “an abrupt shift in temperature”. There was indeed such an abrupt shift in the 1960s, and it was real.



Final Thoughts

A few further thoughts:


1) Whenever I raise the question of temperature adjustments, some nonentity usually jumps up and down and tells me to check the algorithm and work out how the adjustments were calculated.

I have no intention of doing so. The adjustments have been made by GHCN, and it is up to them to explain and justify them, which is precisely what I have asked them to do on numerous occasions, without response.

Can I make it clear, then, that I do not intend to get into any more debate with anybody who thinks they have the right to tell me what I should or should not be doing. Any such comments will in future go straight into the spam box.


2) All warming and cooling adjustments balance out.

This is a common defence, but I fail to see the relevance. If incorrect warming adjustments are cancelled out by incorrect cooling ones, then we appear to have twice as many errors.

This hardly gives much confidence in the “robustness” of the global temperature datasets.

If, on the other hand, the cooling adjustments are correct, then clearly global warming has been overstated.

All I can say to those who put forward the “cooling adjustments” defence is to contact GHCN if they believe those adjustments are incorrect.


3) Accusations have been made that it is “unscientific” to object to adjustments, as there may be good reasons for them.

My reply is that it is certainly not scientific to defend such adjustments, unless you can offer concrete evidence for them. And it is definitely “unscientific” to defend them, when real world scientific evidence contradicts them.


4) Some have suggested that we should simply trust the algorithms, and that we should let the “scientists” carry on with their work, as they must know best.

In other words, any evidence which suggests they may be wrong should be suppressed (unless, of course, you write a pal-reviewed paper and get it past their gatekeepers).

I find this view astonishing. Scientists, particularly publicly funded ones, should always be held accountable for their work. And any legitimate concerns must be addressed in an open and transparent manner.

The idea that these concerns should be hidden away from the public is abhorrent.

  1. Brad permalink
    February 11, 2015 11:18 pm


  2. February 11, 2015 11:35 pm

    “…we should let the “scientists” carry on with their work, as they must know best.”

    And the sun shall continue to circle the earth.

  3. Quinn the Eskimo permalink
    February 12, 2015 12:14 am

    Obviously, the software’s thresholds for triggering automated adjustment are just wrong and very, very badly designed. They need to go back to square one because the underlying premise has been thoroughly invalidated. At most, exceeding the magnitude of change threshold should trigger only manual and careful contextual review, and not an automated sorcerer’s apprentice wreaking havoc on the temperature record.

  4. February 12, 2015 12:23 am

    Wait…the objection now is that adjustments on average go in both directions, and for every station with a cooled-down past there is another with a warmed-up past.

    So in order to make your claims more robust you’d need to find a large area where adjustments have all gone in the same direction. Ideally you’d analyse all stations and determine if the overall effect is negligible.

  5. February 12, 2015 4:14 am

    Reblogged this on the WeatherAction News Blog and commented:
    Rewriting human history without reference to new discoveries or source documents is never justified, unless you are in the business of propaganda or fantasy fiction. Adjustments may or may not have much effect on our overall measurement of the global temperatures – which is a pointless exercise anyway. What Paul has shown is that through these automated adjustments regional effects are being smeared out of existence. The swings as we know can be massive and localised. The weather just one hundred miles west and north of me bears little relation to the weather I experience.

    What these adjustments are doing is erasing our past and our heritage because they ‘look’ wrong. Regardless of the aesthetics, if the noise matches observations then it is likely to be right. If it does not match observations, then the change is justified. All I have seen is arm-waving about the theory of adjustments. Adjustments are fine, but not when they contradict a fair body of primary and secondary sources documented at the time. Natural variability appears to offend these days. Is it any different from the Victorians defacing ancient art that offended their sensibilities?

    Climate has shaped us and the environment around us. Temperature records, when carefully used with observation, help us understand our past and future. Vast tracts of our heritage already lie under water, and have for thousands of years. Our climate has changed drastically, and sometimes rapidly, over short and long time periods. We adapted.

    But why read about decadal and centennial changes in the climate by Hubert Lamb or Brian Fagan when you can boot up a spreadsheet and show climate never varied, all because like minded people decided to manufacture a blunt instrument to excise the past so it never happened.

  6. February 12, 2015 4:33 am


    Hear, hear! And just remember, these so-called expert scientists are all on the public dole. They need to explain in detail where their drastic and seemingly arbitrary “adjustments” and “quality controls” originate from and just what motivates them. No hand waving, please; straight answers.

    It should be quite simple: if a station is off by 4 degrees (C or F – who cares!), it should be painfully easy to demonstrate just why that adjustment was made. Why the conspiracy of silence? The answers should be simple. And, as public employees, they owe it to the public.

  7. Pragmatist permalink
    February 12, 2015 5:02 am

    I’ve been all over the climate universe online looking for somebody (anybody) to provide an explanation for the NCDC algorithm’s adjustments made for these 19 specific sites. Thus far, lots of obfuscation, distraction, platitudes and generalities and little interest in evaluating your findings. Real Climate has a surprisingly weak response, which suggests the climate community is grasping at straws and is reluctant to open up the logic of the algorithm to scrutiny.

    Here’s an interesting bit of information from NCDC regarding the impact of adjusted data (v3) on global mean temperatures:

    “How does this version of GHCN‐Monthly compare to the previous version? The September 2012 release of v3.2.0 has no effect on the unadjusted (raw) data and little effect on global temperature rankings based on the adjusted data. However, the century‐scale global land surface air temperature trend is higher using the adjusted v3.2.0 data. With v3.1.0, the adjusted annual global land surface air temperature trend for 1901‐2011 was 0.94°C/Century. Using data from version 3.2.0 this trend is 1.07°C/Century. The greatest differences between the two versions of the adjusted datasets are in the data for years prior to 1970. There is little difference in the global surface temperature trend during the 1979‐2011 period.”

    (Source: GHCNM-v3.2.0-FAQ.pdf)

  8. February 12, 2015 6:43 am

    “If incorrect warming adjustments are cancelled out by incorrect cooling ones, then we appear to have twice as many errors.”
    It may add to the noise. But the adjustments are there to remove bias. It is the averaging that damps the noise, and does it very well. So if you can trade bias for noise, that is a success.

    I did an experiment here on the damping of noise. I took a normal computation of the global average since 1900, and added random variation to every single monthly average (including SST). Not just Iceland. Big variation – Gaussian noise with amplitude (sd) 1°C. The result – no discernible difference. The sd for variation of the annual average due to the noise was 0.006°C.
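    The damping described follows from simple statistics: independent noise of standard deviation σ added to n values only moves their mean by about σ/√n. A small-scale sketch of that experiment (the station and trial counts here are hypothetical, not the commenter's actual setup):

```python
import random
import statistics

# Sketch of the noise-damping argument: add Gaussian noise (sd = 1 deg C)
# to every "monthly station value", then measure how far the overall
# average moves. With n independent values, the mean's sd falls as
# 1/sqrt(n). The counts below are illustrative, not the original run.

random.seed(0)
n_values = 12 * 2000   # e.g. 12 months x 2000 hypothetical stations
trials = 100

shifts = []
for _ in range(trials):
    noise = [random.gauss(0.0, 1.0) for _ in range(n_values)]
    shifts.append(sum(noise) / n_values)  # how far the average moved

sd_of_mean = statistics.pstdev(shifts)
print(f"sd of the averaged noise: {sd_of_mean:.4f} deg C")
# Theory predicts roughly 1/sqrt(24000), i.e. of order 0.006 deg C.
```

    This only demonstrates what averaging does to unbiased noise; as the comment itself concedes, it says nothing about whether a particular set of adjustments is biased.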

    Of course, that is unbiased noise. Bias would make a difference. But NOAA et al go to a lot of trouble to design algorithms that don’t create bias. And they can test that extensively on synthetic data.

    You are demonstrating, on a very small sample, the possibility that noise is being introduced. What your analysis can’t possibly tell is whether that introduces bias, and so creates a problem.

    The evidence is that it doesn’t. Here I computed the total effect of adjustments to Arctic (>64°N). Not only is the contribution tiny, but it has very little trend. However, there is a modest jump in the early ’60s.

    • Mikky permalink
      February 12, 2015 9:09 am

      Your graph demonstrates that the potentially flawed algorithms being used introduce negligible changes in trend, as would the definitely flawed method of just adding random numbers to raw data.

      What many here want to know is what UNFLAWED algorithms would do, i.e. what is the TRUE climate history.

      My guess is that this is a bit like the early aviation industry, mostly safe but with a significant number of crashes. But why the apparent lack of interest in the crashes from those who created them?

      • February 12, 2015 9:51 am

        “But why the apparent lack of interest in the crashes from those who created them?”

        These posts do not demonstrate any crashes. They merely establish that adjustments occurred. They seem uninterested in the actual outcome.

      • Mikky permalink
        February 12, 2015 10:42 am

        I would say that Paul has identified potential crash sites in Paraguay and in the Arctic, maybe with many survivors (in terms of long term trend) in the Arctic, but total devastation in Paraguay.

      • February 12, 2015 10:47 am

        ” total devastation in Paraguay.”
        Really? What happened to Paraguay?

        Or to ROW. What is the bad outcome?

      • Mikky permalink
        February 12, 2015 11:49 am

        The bad outcome is that nobody knows for sure the recent history of the climate; just comparing RAW with GISS with BEST etc. and finding them all “similar” does not answer the question. Have different flavours of random numbers been added to RAW, with obviously similar outcomes?

      • Mikky permalink
        February 12, 2015 12:01 pm

        The devastation in Paraguay is that a clearly consistent cooling trend in 1970 has been turned into a dramatic warming:

    • AndyG55 permalink
      February 12, 2015 8:42 pm

      Seems to me that if you discount algorithm problems, you are left with having to come up with another explanation for the huge number of stations that have now been shown to have either a created warming trend, or having had the 1940ish peak squashed.

      I strongly suspect that human “interpretation” might have had a very heavy hand in this.

    • Tom T permalink
      February 12, 2015 10:11 pm

      While finding data on the GHCN adjustment is difficult, I have to call horseshit on your analysis. The USHCN adjustment is readily available, and is on the order of tenths of degrees.


      Trying to argue that the GHCN adjustment is an order of magnitude lower than the USHCN is a hard argument to make.

    • Tom T permalink
      February 13, 2015 12:49 am

      I have a hard time believing your analysis when the USHCN adjustment is easy to find and on the order of tenths of a degree. I doubt that the GCHN is an order of magnitude smaller. You need to seriously go back to the drawing board and find where you messed up.

  9. February 12, 2015 9:06 am

    Another superb piece of work. You are taking on the world’s “climate scientists” and their believers almost single-handed, yet providing far more scientific analysis than they do.

    I particularly liked your point about peer reviewing (pal reviewing), as this apparently noble procedure has been totally devalued by consensus science and used to justify bad science many times. What is needed is criticism of papers and publications in the technical press, as used to occur many years ago when knowledgeable parties gave their positive and negative views on the issues discussed in the papers.

    I find it unbelievable that homogenisation of data and algorithm models should ever be preferred to measured values. Algorithms and calculated adjustment of data are good tools for assessing fuzzy or noisy data but should not replace the measured values.

    It seems that Climate Science should join Social Science and Economic Science as topics where some scientific methods are used but where the underlying theories are of limited value due to high degrees of randomness in the systems. Unfortunately for the world, their opinions have been taken as proven “settled science” by the gullible.

  10. Jim permalink
    February 12, 2015 12:35 pm

    I have a science degree and I am disgusted at what you have unearthed. I work in the public sector and would be fired if I tried to pull something like this. I have emailed my representatives and will continue to help wherever I can. Keep up the good work!

  11. robinedwards36 permalink
    February 12, 2015 3:23 pm

    The subject of “noise” interests me. Is it purely subjective? As typically viewed in the climate world, noise seems to be just the successive differences between observed values (which, in the case of data from the Icelandic Meteorological Office, are very carefully measured, recorded and “digested”, with voluminous metadata) and some unstated but hypothesised values from a climate model which has some (again often unspecified) properties that appeal to the interpreter. The simplest model is just a mean (or possibly median) over the period of interest, the next more complex model being typically a value derived from a linear fit to the data of interest. A further one might be “smoothed” values from a variety of numerical smoothers, which are sometimes specified.

    So I ask, what do you mean by noise? For me, fundamentally it is something that the interpreter has trouble believing. Specifically he/she is expressing scepticism of the experimental scientist’s work. This attitude is typified by the widely adopted technique of smoothing data. “I don’t believe in your professional competence. This is what you should have measured.” In climate work you can’t go back and repeat the experiment. It’s all in real time.

    So I don’t believe in smoothing as being anything other than a sort of mental emollient, applied so that simple people (politicians, journalists and the media in general) think that they have understood or appreciated what the experimental scientists have done, when in fact they know nothing about it. This does not deter them from writing and speaking about the subject – witness the crass outpourings of one Gore (I nearly wrote Gove!).

    This obsession with simplification, which often takes the form of fitting a linear model to data that, when plotted, show gross discrepancies from such a model, has resulted in bizarre outcomes. Aspects of dedicated observational science that are, to my mind, important have suffered, with original records overlooked, unnoticed or deliberately discarded. I hope to write something much more extensive on this when the opportunity arises.

  12. Enzo permalink
    February 12, 2015 7:42 pm

    The word adjustment is enough to make me suspicious!
    Thank you for your work!

  13. February 13, 2015 4:40 am

    Reblogged this on Globalcooler's Weblog.

Comments are closed.
