
Temperature Anomalies–An Illustration

September 20, 2013

By Paul Homewood


There still seems to be a bit of confusion over what temperature anomalies mean. So let me offer a simple illustration.


Figure 1 shows the annual actual temperatures for a theoretical town. From 1990 to 2000, temperatures rose steadily from 10.0C to 20.0C, at a rate of 1.0C a year.

The average temperature for 1990-2000 was 15C. Since 2000, temperatures have not gone up at all, remaining at 20C each year. Nevertheless, current temperatures remain 5.0C higher than the 1990-2000 mean.





Figure 1



Now let’s switch to the same temperatures shown as anomalies, as in Figure 2.

Notice that the shape of the graph remains exactly the same.

The anomalies are the difference between the actual temperature and the baseline, which in this case is the 1990-2000 average. The first year, 1990, is 5.0C lower than the baseline, so its anomaly is -5.0C. As temperatures increase during the 1990s, the anomaly gradually turns positive, until by 2000 it is +5.0C, i.e. 5.0C higher than the baseline.

From 2000 to 2012, the anomaly stays at +5.0C. This does not mean that temperatures are still increasing, as we already know that they have not. It simply means that they are higher now than they were during the selected base period, i.e. 1990-2000.
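The arithmetic can be sketched in a few lines of Python. The temperature series is the made-up one from Figure 1, not real data:

```python
# Hypothetical town: 10.0C in 1990, rising 1.0C a year to 20.0C in 2000,
# then flat at 20.0C through 2012.
years = list(range(1990, 2013))
temps = [10.0 + min(year - 1990, 10) for year in years]

# Baseline: mean actual temperature over the 1990-2000 period (11 years).
baseline = sum(t for y, t in zip(years, temps) if 1990 <= y <= 2000) / 11

# Anomaly = actual temperature minus the baseline mean.
anomalies = [round(t - baseline, 1) for t in temps]

print(baseline)        # 15.0
print(anomalies[0])    # -5.0 (1990)
print(anomalies[-1])   # 5.0  (2012, flat since 2000)
```

Note that subtracting a constant baseline shifts every point by the same amount, which is why the shape of Figure 2 is identical to Figure 1.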






Figure 2



Why Anomalies?

Just imagine if you were trying to work out global temperatures just by using actual temperatures. You would have to average thousands of stations spread between the equator and the poles, with temperatures ranging from +50C down to minus goodness knows what!

If the mix of stations varies slightly, for instance a few more in warm locations and/or a few fewer in cold ones, the average temperature would change, making the whole exercise worthless.

In theory (at least!), the use of anomalies avoids this problem. Take a colder station out of the mix, say one halfway up a hill, and it should not matter, as long as the stations remaining have similar anomalies.
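A toy example makes the point. Suppose two hypothetical stations, a warm valley site and a cold hilltop site, have both warmed by the same 1.0C over their baselines. Dropping the hilltop station shifts the average of the actual temperatures dramatically, but leaves the average anomaly untouched:

```python
# Two made-up stations, each 1.0C warmer than its own baseline.
valley = {"baseline": 15.0, "now": 16.0}   # anomaly +1.0
hill   = {"baseline": 5.0,  "now": 6.0}    # anomaly +1.0

def anomaly(station):
    return station["now"] - station["baseline"]

# Averaging actual temperatures: dropping the hill station shifts the mean.
mean_actual_both = (valley["now"] + hill["now"]) / 2   # 11.0
mean_actual_one  = valley["now"]                       # 16.0 - a 5C jump!

# Averaging anomalies: the mean is unchanged, because the anomalies agree.
mean_anom_both = (anomaly(valley) + anomaly(hill)) / 2  # 1.0
mean_anom_one  = anomaly(valley)                        # 1.0 - no change
```

The caveat in the text is visible here: the trick only works because both stations have similar anomalies. If the hilltop had warmed at a different rate, dropping it would change the average anomaly too.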

  1. Joe Public permalink
    September 20, 2013 10:53 am

    Thanks for that explanation, Paul.

  2. Ian permalink
    September 20, 2013 5:23 pm


    You do a good job summarizing the situation. I’ve always wondered about the following:

    “Take a colder station out of the mix, say half way up a hill, and it should not matter, as long as the stations remaining have similar anomalies.”

    How big a caveat is that last part of the sentence? How well tested is it? Where has it been tested?

    • September 20, 2013 5:49 pm

      I think the biggest problem is when you introduce new stations, as these have no history to baseline against. This applies to a large number of stations, as very few in the full database have long, continuous records.

      When this happens, you need to splice their records together using other stations with overlapping periods, and this is when it gets complicated!


      Station A runs from 1910-1950
      Station B runs from 1940-1980
      Station C runs from 1970-2012

      Somehow you need to work out a long term trend from all three, and this involves a lot of assumptions.
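      One common way to splice such records (a sketch only, not necessarily what any particular dataset does) is to adjust the later station by the mean difference over the years it shares with the earlier one. The station data below is invented to match the A/B example above:

```python
# Hypothetical Station A (1910-1950) and Station B (1940-1980).
# B sits at a warmer site, reading about 2C above A, but both share
# the same underlying trend of 0.01C a year.
a = {y: 10.0 + 0.01 * (y - 1910) for y in range(1910, 1951)}
b = {y: 12.0 + 0.01 * (y - 1910) for y in range(1940, 1981)}

# Offset = mean difference over the overlap years (1940-1950).
overlap = sorted(set(a) & set(b))
offset = sum(b[y] - a[y] for y in overlap) / len(overlap)   # ~2.0

# Express B on A's scale, then merge into one continuous record.
combined = dict(a)
combined.update({y: b[y] - offset for y in b if y not in a})
```

      The key assumption is that both stations track each other during the overlap. If one of them has a problem in just those years (a site move, say), the offset is wrong and the error propagates through the whole spliced record.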

      Take a look at this map of the US, and you will realise that making such assumptions can be dangerous.

  3. Brian H permalink
    September 21, 2013 10:07 am

    The real question: what constitutes an anomalous anomaly?

