
John Christy’s Testimony To Congress

November 25, 2017

By Paul Homewood

 

 

[Graph: climate model temperature trends vs observations (models vs obs)]

John Christy Testimony to Congress

 

 

I published the above graph a couple of weeks ago, to highlight how badly Prof Joanna Haigh of the Grantham Institute had misled listeners on BBC Today about the accuracy of climate models.

The graph came from Dr John Christy as part of his presentation to the US Congress in March 2017.

It is worth republishing his full testimony, which puts the graph into perspective.

 

 


“Science” is not a set of facts but a process or method that sets out a way for us to discover information and which attempts to determine the level of confidence we might have in that information. In the method, a “claim” or “hypothesis” is stated such that rigorous tests might be employed to test the claim to determine its credibility. If the claim fails a test, the claim is rejected or modified then tested again. When the “scientific method” is applied to the output from climate models of the IPCC AR5, specifically the bulk atmospheric temperature trends since 1979 (a key variable with a strong and obvious theoretical response to increasing GHGs in this period), I demonstrate that the consensus of the models fails the test to match the real-world observations by a significant margin. As such, the average of the models is considered to be untruthful in representing the recent decades of climate variation and change, and thus would be inappropriate for use in predicting future changes in the climate or for related policy decisions.

The IPCC inadvertently provided information that supports this conclusion by (a) showing that the tropical trends of climate models with extra greenhouse gases failed to match actual trends and (b) showing that climate models without extra greenhouse gases agreed with actual trends. A report of which I was a co-author demonstrates that a statistical model that uses only natural influences on the climate also explains the variations and trends since 1979 without the need of extra greenhouse gases. While such a model (or any climate model) cannot “prove” the causes of variations, the fact that its result is not rejected by the scientific method indicates it should be considered when trying to understand why the climate does what it does. Deliberate consideration of the major influences by natural variability on the climate has been conspicuously absent in the current explanations of climate change by the well-funded climate science industry.

One way to aid congress in understanding more of the climate issue than what is produced by biased “official” panels of the climate establishment is to organize and fund credible “Red Teams” that look at issues such as natural variability, the failure of climate models and the huge benefits to society from affordable energy, carbon-based and otherwise. I would expect such a team would offer to congress some very different conclusions regarding the human impacts on climate.

 

(1) Applying the scientific method to climate models from the IPCC AR5

In my last appearance before this committee (2 Feb 2016) I addressed the active campaign of negative assertions made against the various sources of data we use to monitor the temperature of the bulk atmosphere. I demonstrated that the main assertions were incorrect and that we can have confidence in the observations, one reason being that we now have several independent sources from around the world providing data with which to intercompare.

In this testimony I shall focus on the temperature of the bulk atmospheric layer from the surface to about 50,000 ft. – a layer which is often called by its microwave profile name TMT (Temperature of Mid-Troposphere). This layer is particularly important because it captures the atmospheric region that is anticipated to warm rapidly and unambiguously if greenhouse theory is well-understood. As such, if the impact of extra greenhouse gases (GHGs) is to be detected, it should be detected here. In Fig. 1 I show an example from a climate model simulation (Canadian Climate Model run CanESM2_rcp45_r3i1p1) of the anticipated temperature change for the period 1979-2016.

 

[Figure 1: Modelled atmospheric temperature change, 1979-2016, from Canadian Climate Model run CanESM2_rcp45_r3i1p1]

 

Figure 1 indicates that, according to theory, the tropical region should have experienced significant warming over the past 38 years due to extra GHGs. (There were 102 model runs to check and they all indicated a warming tropical atmosphere but to different degrees as shown later.) To test this result we follow the traditional scientific method in which a claim (hypothesis) is made and then is tested against independent information to see if the claim can be sustained or whether it is falsified. If the claim is confirmed, then we generally look for another test to confirm the claim again. If many tests are consistent with the claim, then we may have confidence in it. If the claim fails a test, we look for reasons why and modify or reject the original claim and start over. Since the thrust of this Hearing is to see how the scientific method was or was not applied in the pronouncements about climate science, this will serve as an excellent example because it deals with a foundational climate metric that should reveal significant change if theory is correct – the temperature of the bulk atmosphere.

 

(2) Observational data used to test climate models

Recall that the results from climate models are simply hypotheses (claims) about how the climate should have evolved in the past. The claim here is, “The bulk atmospheric temperature trend since 1979 of the consensus of the IPCC AR5 climate models represents the actual trend since 1979.” (1979 is the beginning of the satellite temperature era.) To test this claim we compare the TMT model trends against TMT from several observational datasets. The first type of observational dataset is built from satellites that directly measure the bulk atmospheric temperature through the intensity of microwave emissions. These data are essentially global in coverage and monitor the Earth every day. There are three sources: UAH (University of Alabama in Huntsville), RSS (Remote Sensing Systems, San Rafael CA) and NOAA.

The second type of measurement is produced from the ascent of balloons which carry various instruments including thermistors (which monitor the air temperature) as the balloon rises through this layer. From these measurements a value equivalent to the satellite TMT profile is calculated. Balloon stations are not evenly spaced throughout the Earth, but because the upper air is much more horizontally coherent in its features than the surface, a few balloons can represent a very large area in terms of temperature variability. The sources of these balloon datasets are RAOBCORE and RICH (University of Vienna, Austria), NOAA and UNSW (University of New South Wales, Australia).

Finally, major weather centers around the world generate depictions of the atmospheric conditions of the entire Earth at many vertical levels every six hours or so, called Reanalyses. These products use many sources of data, including satellites and balloons, and merge the observations with a continuously running general circulation model. From the information at the vertical levels the TMT quantity is generated for an apples-to-apples comparison with models, satellites and balloons. The sources of the Reanalyses are ERA-I (European Centre for Medium-Range Weather Forecasts (ECMWF) ReAnalysis-Interim), NASA MERRA v2 and JRA-55 (Japan ReAnalysis). These three types of systems – satellites, balloons and reanalyses – represent very different means of computing the bulk atmospheric temperature and are provided by independent, international entities, giving us confidence in the observational results.
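To make the apples-to-apples step concrete, the sketch below shows one way a TMT-like bulk value could be collapsed out of reanalysis temperatures at discrete pressure levels. The pressure levels and weights here are illustrative placeholders only, not the real MSU/AMSU channel weighting function used by the satellite groups; a genuine comparison would apply the published weighting profiles.

```python
import numpy as np

# Illustrative placeholder weights for a TMT-like bulk average.
# Real comparisons use the published MSU/AMSU channel-2 weighting
# function; these numbers are made up to show the mechanics only.
PRESSURE_LEVELS_HPA = np.array([1000, 850, 700, 500, 400, 300, 250, 200, 150, 100])
TMT_WEIGHTS = np.array([0.05, 0.10, 0.15, 0.20, 0.15, 0.12, 0.09, 0.07, 0.04, 0.03])
TMT_WEIGHTS = TMT_WEIGHTS / TMT_WEIGHTS.sum()  # normalise so weights sum to 1

def tmt_from_levels(temps_kelvin):
    """Collapse a temperature profile (K) at PRESSURE_LEVELS_HPA into a
    single TMT-like bulk value by weighted averaging."""
    return float(np.dot(TMT_WEIGHTS, np.asarray(temps_kelvin, dtype=float)))

# Example with a rough (placeholder) tropical-mean profile:
profile = [299, 290, 283, 267, 258, 244, 236, 223, 212, 198]
print(f"TMT-like bulk temperature: {tmt_from_levels(profile):.1f} K")
```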

 

 

(3) Testing the claim – applying the scientific method

In Figure 2 we show the evolution of the tropical TMT temperature since 1979 for the 102 climate model runs grouped in 32 curves by institution. Some institutions contributed a single simulation, others as many as 18. Multiple runs from a single institution’s model category were averaged into a single time series here. The curves show the temperature evolution of the atmosphere in the tropical box shown in Fig. 1.
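As a rough illustration of the grouping just described, here is a minimal sketch assuming the 102 runs were available as a flat table; the file name and column names (institution, run_id, year, tmt_anomaly) are hypothetical placeholders, not the actual CMIP5 archive layout.

```python
import pandas as pd

# Hypothetical input: one row per model run per year, with columns
# "institution", "run_id", "year" and "tmt_anomaly" (deg C).
runs = pd.read_csv("cmip5_tropical_tmt_runs.csv")  # placeholder file name

# Step 1: average multiple runs from the same institution into one curve,
# giving a (years x 32 institutions) table of annual anomalies.
per_institution = (runs
                   .groupby(["institution", "year"])["tmt_anomaly"]
                   .mean()
                   .unstack("institution"))

# Step 2: the multi-model mean, i.e. the "consensus" curve that is tested
# against the observational datasets.
model_mean = per_institution.mean(axis=1)

# Step 3: re-baseline to a common reference period (here 1979-1983).
model_mean_anomaly = model_mean - model_mean.loc[1979:1983].mean()
print(model_mean_anomaly.head())
```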

 

 

[Figure 2: Tropical TMT temperature evolution since 1979 for 102 CMIP5 model runs, grouped into 32 curves by institution]

 

Here we have climate model results (i.e. “claims” or “hypotheses”) to compare with observational datasets in a test to check whether the model average (i.e. the consensus “claim” or “hypothesis”) agrees with the observed data. We test the model average because it represents the consensus of the theoretical models and is used to develop policy, which is embodied in policy-related products such as the Social Cost of Carbon, the National Climate Assessment and the EPA Endangerment Finding.

I provided the model and observational information as annual temperature anomalies (both tropical and global) to Dr. Ross McKitrick (University of Guelph) who has published extensively as an applied econometrician on the application of statistical techniques to the testing of climate hypotheses. He applied the Vogelsang-Franses F-Test method to these data as described in McKitrick, Ross R., S. McIntyre and C. Herman (2010) “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets”, Atmosph. Sci. Lett., 11. DOI: 10.1002/asl.290. This method is particularly suitable for determining whether the trends of two time series are equivalent or significantly different. [The result found in their 2010 paper indicated model trends were significantly warmer than observations for the earlier datasets available at that time.]
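For readers who want a feel for what a trend-equivalence test involves, here is a simplified sketch using ordinary least squares trends with Newey-West (HAC) standard errors. This is not the Vogelsang-Franses procedure applied by McKitrick et al. (2010), which uses its own nonparametric variance estimator and non-standard critical values; it is only an illustrative stand-in, and the series names in the usage note are placeholders.

```python
import numpy as np
import statsmodels.api as sm

def trend_with_hac_se(series, lags=4):
    """OLS trend (per year) with a Newey-West (HAC) standard error.
    A simple stand-in for the more robust Vogelsang-Franses test."""
    y = np.asarray(series, dtype=float)
    t = np.arange(len(y), dtype=float)
    X = sm.add_constant(t)
    fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": lags})
    return fit.params[1], fit.bse[1]  # slope and its HAC standard error

def compare_trends(model_mean, observed, lags=4):
    """Crude check on trend equality using a normal approximation."""
    b_mod, se_mod = trend_with_hac_se(model_mean, lags)
    b_obs, se_obs = trend_with_hac_se(observed, lags)
    z = (b_mod - b_obs) / np.hypot(se_mod, se_obs)
    return b_mod, b_obs, z

# Usage sketch (annual anomalies 1979-2016, placeholder array names):
# b_mod, b_obs, z = compare_trends(model_mean_anomaly, uah_tropical_tmt)
```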

What we are really testing here are the rates of warming depicted by the models and the observations for the period 1979-2016. In Figure 3 I have simplified the depiction of the test so that the rate of warming can be viewed directly, showing what the test is measuring.

 

[Figure 3: Simplified depiction of the test - tropical TMT warming rates, 1979-2016, model average vs observational datasets]

 

The basic test question is, “Is the red line significantly different from the others?” The results are shown in Table 1, recognizing that there is no equivalence between the model average trend and the observational datasets whenever the value of the test is greater than 84 at the <1% level. As shown, all test values exceed 84, and thus the mean model trend is highly significantly different from the observations.

The scientific conclusion here, if one follows the scientific method, is that the average model trend fails to represent the actual trend of the past 38 years by a highly significant amount. As a result, applying the traditional scientific method, one would accept this failure and not promote the model trends as something truthful about the recent past or the future. Rather, the scientist would return to the project and seek to understand why the failure occurred. The most obvious answer is that the models are simply too sensitive to the extra GHGs that are being added to both the model and the real world.

[We do not use surface temperature as a testable metric because models, to varying degrees, are tuned to agree with the surface temperature observations already – i.e. they’ve been given the answer ahead of time – thus a comparison of the surface would not be a valid scientific test (Hourdin, F. et al., “The art and science of climate model tuning”, 2016, doi:10.1175/BAMS-D-00135.1, and Voosen, P., “Climate scientists open up their black boxes to scrutiny”, 2016, Science, 354, pp 401-402, DOI: 10.1126/science.354.6311.401).]

 

 

(4) The IPCC AR5 (2013) displayed a similar result – the models failed

Oddly enough, such an important result (i.e. that models fail the test of representing the real-world bulk temperature trend) was available to see in the most recent IPCC AR5.

Unfortunately, it was buried in the Supplementary Material of Chapter 10 without comment. In Fig. 4, I present the figure that appeared in this IPCC section. I was a reviewer (a relatively minor position in that report) of the AR5 and had insisted that such a figure be shown in the main text because of its profound importance, but the government-appointed lead authors decided against it. They opted to place it in the Supplementary Material, where little attention would be paid, and to fashion the chart in such a way as to make it difficult to understand and interpret.

 

[Figure 4: IPCC AR5 Fig. 10.SM.1 - tropical atmospheric temperature trends at various vertical levels: observations vs models with and without extra GHGs]

 

I have taken the same information in Fig. 4 (IPCC AR5 Fig. 10.SM.1) and simplified the presentation so as to be clearer in Fig. 5 below. The trends here represent trends at different levels of the tropical atmosphere from the surface up to 50,000 ft. The gray lines are the bounds for the range of observations, the blue for the range of IPCC model results without extra GHGs and the red for IPCC model results with extra GHGs.

 

[Figure 5: Simplified version of Figure 4 - tropical trend ranges by altitude: observations (gray), models without extra GHGs (blue), models with extra GHGs (red)]

 

What is immediately evident is that the model trends in which extra GHGs are included lie completely outside of the range of the observational trends, indicating again that the models, as hypotheses, failed a simple “scientific-method” test applied to this fundamental, climate-change variable. That this information was not clearly and openly presented in the IPCC is evidence of a political process that was not representative of the dispassionate examination of evidence as required by the scientific method. Further (and this took guts), the IPCC then claimed high confidence in knowing why the climate evolved as it did over the past few decades (humans as the main cause), ignoring the fact that the models on which that claim was based had failed an obvious and rather easy-to-perform validation test.
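The statement that the red band lies completely outside the observed range amounts to a simple interval-overlap check at each vertical level. A sketch, with placeholder numbers standing in for the digitised bands of Fig. 5 (units deg C per decade):

```python
# Placeholder trend bands by pressure level (hPa); real values would come
# from the data behind IPCC AR5 Fig. 10.SM.1 or a digitisation of Fig. 5.
obs_band   = {850: (0.05, 0.12), 500: (0.04, 0.14), 300: (0.03, 0.16)}
model_band = {850: (0.18, 0.30), 500: (0.22, 0.38), 300: (0.26, 0.48)}

for level in sorted(obs_band, reverse=True):
    obs_lo, obs_hi = obs_band[level]
    mod_lo, mod_hi = model_band[level]
    overlap = mod_lo <= obs_hi and obs_lo <= mod_hi
    verdict = "bands overlap" if overlap else "no overlap (model band outside observed range)"
    print(f"{level:4d} hPa: {verdict}")
```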

Incredibly, what Fig. 5 shows is that the bulk tropical atmospheric temperature change is modeled best when no extra GHGs are included – a direct contradiction to the IPCC conclusion that observed changes could only be modeled if extra GHGs were included.

 John Christy Testimony to Congress

 

 There are a couple more sections which I have not republished, but the above pretty much says everything. They can be read on the link above.

I would like to re-emphasise and clarify a couple of points:

1) The climate models we are talking about are from CMIP5 (the Coupled Model Intercomparison Project, Phase 5).

CMIP5 is a coordinated set of complex climate model experiments, designed to replicate what has happened to the climate since 1870 and to assess what might happen in the century to come.

CMIP5 models were run between 2010 and 2014, and were a crucial element of IPCC AR5.

In other words, the modelled temperature trends up to the date the models were run are not projections at all, but hindcasts. In particular, they are based on actual GHG emissions, and are not dependent on assumed emission paths.

2) As Christy points out, the IPCC came to similar conclusions in AR5, though disgracefully they buried it away in the Supplementary Material.

Christy has focussed on the Tropics, but let’s take a closer look at the 60S to 60N chart:

 

[Chart: IPCC AR5 Fig. 10.SM.1 trends for 60S-60N - see caption below]

Figure 4. This is Fig. 10.SM.1 of the IPCC AR5 Supplementary Material for Chapter 10. These are trends (1979-2010) for various vertical levels of the atmosphere from (a) observations (gray band – difficult to see), from (b) models without extra GHGs (blue band) and (c) models with extra GHGs and other forcings (red band).

 

As with the Tropics, actual observed trends either run close to the top of the blue band at the lower altitudes, or below. Throughout the range the models including extra GHGs overstate actual temperature trends.

10 Comments
  1. November 25, 2017 7:32 pm

    Looks like curtains for the claimed ‘enhanced greenhouse effect’.

  2. quaesoveritas permalink
    November 25, 2017 7:38 pm

    Unfortunately I doubt if any of this will be published by the MSM.

    • Stonyground permalink
      November 25, 2017 7:55 pm

      The question is, why the hell not? Isn’t exposing this kind of thing their job?

      • quaesoveritas permalink
        November 25, 2017 8:19 pm

        It used to be, but not any more.

  3. Metoak permalink
    November 25, 2017 8:39 pm

    Thank you, Paul for giving the U.S. Congressional testimony wider coverage across the pond and elsewhere!

  4. Tom Dowter permalink
    November 25, 2017 10:30 pm

    The reason that the models perform so poorly is simply that greenhouse gases represent the only long term temperature forcing that is actually built into them.

    Since 1993, outgoing long wave radiation has been on a strong rising trend. This is the exact opposite of what we would expect if greenhouse gases were the main cause of the warming. On the other hand, it is precisely what we would expect if the warming were down to almost anything else but those greenhouse gases.

    Again, if greenhouse gases were the only cause of the warming, we would expect that the sensitivity, i.e. the slope of a plot of temperature versus the logarithm of the greenhouse gas concentration, would remain constant whichever long period were to be chosen. However, on a sixty year view, the highest apparent sensitivity is more than ten times the lowest.

    Unless or until we incorporate at least one other forcing into them, the models will continue to be a poor representation of reality.

  5. jim permalink
    November 26, 2017 3:05 am

    Yes, but so what? We all know it’s GIGO. The so-called ‘scientists’ know, the politicians know, the bankers know, the global financial elite know.
    However, until the tax-payers and energy bill payers can be bothered, the MSM will continue to tell lies, the politicians will continue to tell lies, the stupid ‘greens’ will continue to worship at the Gaia altar, and the financial elite will continue to prosper by gaining the upside on all and every ‘renewable’ investment with no downside risk, as it’s all covered by us mugs, the tax-payers.
    It’s POLITICS, not ‘science’, and it’s politics at its most grubby, with MONEY following MONEY to gain POWER.

  6. November 26, 2017 1:15 pm

    In 2013, I was invited by my Congressman, David B. McKinley (a professional engineer who owns an architectural/engineering firm in Wheeling, WV), to attend “A Panel Discussion Looking at: The Origins and Response to Climate Change” hosted by him in nearby Fairmont. There were 10 on the panel, including Dr. John Christy, although he and one other participated via video conference. David had speakers who presented both sides, although he is firmly on the “we are not doing it and CO2 isn’t to blame” side. David heads up the “Coal Caucus” in the House of Representatives.

    I took notes and wrote a synopsis of each speaker. Dr. Christy said: “Life is brutal and short w/o energy”; current climate “models” are significantly wrong; we need to keep energy costs low for a healthy population; and he tried once again to get across the point that “data shows something different from the models”.

    Myron Ebell was another on the panel. He referred to the “global warming group” who tell us they are the “authority” and the danger of the scientific/technological elite’s position pushing for more government and less energy. Myron pointed out that the global warming debate is driven by modeling and not reality. He also referred to the number of deaths in England caused by “the recent cost of heating in the cold snap.”

    It was my privilege to meet and speak with Dennis Avery, an Agricultural Climate Historian and Director of Center for Global Food Issues, Hudson Institute. I have since purchased and read Avery’s book with Fred Singer: “Unstoppable Global Warming: Every 1,500 Years”.

    One of the saddest of sacks, and there were several from across the US, was the WVU, Dept. of Biology chairman. According to his statements, it was clear to me that either he was lying or did not know the history of the vegetation assemblage of the Southern (unglaciated) Appalachians. He claims to be doing research in the forests of the eastern mountainous areas of WV, but showed no understanding of how it came to be. Of course that feigned ignorance allowed him to make fantastically false predictions.

  7. Old Englander permalink
    November 27, 2017 10:20 am

    Definitely worth a revisit. The most astonishing thing in Christy’s testimony is the Fig 10.SM.1 from the AR5 Supplementary Material. That’s the stuff used to “bury” the more embarrassing results that your narrative can’t explain, without actually censoring them completely. If after downloading 350 Mb and reading 1500-odd pages of the AR5 you have the stomach for the online-only Supplementary Material you have more energy than most.

    Of course Christy probably knew what he was looking for, and included a nice colourful model result of the “Hot Spot” (his Fig 1) in the tropical mid troposphere, which is a feature of virtually all the GCMs. Problem is, the Hot Spot simply isn’t there (that’s the 10.SM.1, and Christy’s bold-line simplification of it), a direct contradiction of the models.

    This failure of models vs experiment/observation should be more widely known. IMO it’s more pronounced than any amount of haggling over the global temperature data series. In any area of physics one looks for testable models. When you have a model prediction and an experimental result, you have a verdict on your theory. Agreement: theory passes a key test, and survives to make more testable predictions. Disagreement: the theory is wrong, or at the very best seriously incomplete. But they bury this in the SM.

  8. November 30, 2017 6:57 pm

    Reblogged this on Climate Collections.
