A Clear Example of IPCC Ideology Trumping Fact

By Paul C. Knappenberger and Patrick J. Michaels

Center for the Study of Science, Cato Institute

Within the U.S. federal government (and governments around the world), the U.N.’s Intergovernmental Panel on Climate Change (IPCC) is treated as the authority on climate change.

This isn’t a good idea.

Here, perhaps, is the clearest example yet. By the time you get to the end of this post, we think you may be convinced that the IPCC does not seek to tell the truth—the truth being that it has overstated the case for climate worry in its previous reports. The “consensus of scientists” instead prefers to obfuscate.

In doing so, the IPCC is negatively impacting the public health and welfare of all of mankind, as it influences governments to limit energy use instead of seeking ways to help expand energy availability (or to just stay out of the way of the market).

Everyone knows that the pace of global warming (as represented by the rise in the earth’s average surface temperature) has slowed during the past decade and a half. Coming up with reasons why is the hottest topic in climate change science these days, with about a dozen different explanations being put forward.

Climate model apologists are scrambling to save their models’ (and their own) reputations—because the one thing that they do not want to have to admit is perhaps the simplest and most obvious answer of all: that climate models exaggerate the amount that the earth’s average temperature will increase as a result of human greenhouse gas emissions. If the models are overheated, then so too are all the impacts that derive from the model projections, which is the death knell for all those proposed regulations limiting our use of fossil fuels for energy.

In the Summary for Policymakers (SPM) section of its Fifth Assessment Report, even the IPCC recognizes the recent divergence of model simulations and real-world observations:

“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2013).”

But, lest this leads you to think that there may be some problem with the climate models, the IPCC clarifies:

“The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.”

Whew! For a minute there it seemed like the models were struggling to capture reality, but we can rest assured that over the long haul, say, since the middle of the 20th century, according to the IPCC, model simulations and observations “agree” as to what is going on.

The IPCC references its “Box 9.2” in support of the statements quoted above.

In “Box 9.2” the IPCC helpfully places the observed trends in the context of the distribution of simulated trends from the collection of climate models it uses in its report. The highlights from Box 9.2 are reproduced below (as our Figure 1). In this Figure, the observed trend for different periods is in red and the distribution of model trends is in grey.


Figure 1. Distribution of the trend in the global average surface temperature from 114 model runs used by the IPCC (grey) and the observed temperatures as compiled by the U.K.’s Hadley Centre (red). (Figure from the IPCC Fifth Assessment Report)

As can be readily seen in Panel (a), during the period 1998-2012, the observed trend lies below almost all the model trends. The IPCC describes this as:

…111 out of 114 realizations show a GMST [global mean surface temperature] trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble

This gives rise to the IPCC SPM statement (quoted above) that “There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2013).”

No kidding!

Now let’s turn our attention to the period 1951-2012, Panel (c) in Figure 1.

The IPCC describes the situation depicted there as:

Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade…

This sounds like the models are doing pretty well—only off by 0.02°C/decade. And this is the basis for the IPCC SPM statement (also quoted above):

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Interestingly, the IPCC doesn’t explicitly tell you how many of the 114 model runs produce a trend greater than the observed trend for the period 1951-2012. And it is basically impossible to figure that out for yourself from its Panel (c), since some of the bars of the histogram run off the top of the chart and the x-axis scale is so coarse that the 114 model runs are bunched into only six populated bins. Consequently, you really can’t assess how well the models are doing or how large a difference of 0.02°C/decade over 62 years really is. You are left to take the IPCC’s word for it.

We don’t.

The website Climate Explorer archives and makes available the large majority of the climate model output used by the IPCC. From there, you can assess 108 (of the 114) climate model runs incorporated into the IPCC graphic—a large enough majority to quite accurately reproduce the results.

We do this in our Figure 2. However, we adjust both axes of the graph so that all the data are shown and you can make out the details of what is going on.

 


Figure 2. Distribution of the trend in the global average surface temperature from 108 model runs used by the IPCC (blue) and the observed temperatures as compiled by the U.K.’s Hadley Centre (red) for the period 1951-2012 (the model trends are calculated from historical runs with the RCP4.5 results appended after 2006). This presents nearly the same data as Figure 1, Panel (c).

What we find is that 90 of the 108 model runs simulate more global warming from 1951-2012 than actually occurred, while 18 simulate less. In other words, the observations fall at the 16th percentile of model runs (the 50th percentile being the median model trend value).
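
For readers who want to check this themselves, here is a minimal sketch of the calculation. It assumes the annual global-average temperature anomalies downloaded from Climate Explorer have been arranged into a NumPy array model_anoms (108 rows, one per model run, by 62 columns for the years 1951-2012), with the matching HadCRUT4 series in obs_anoms; both array names are ours, purely for illustration.

```python
# A sketch (not the authors' published script) of ranking the observed
# 1951-2012 trend within the distribution of model-simulated trends.
import numpy as np

years = np.arange(1951, 2013)  # 62 years, 1951-2012 inclusive

def trend_per_decade(anoms):
    """Least-squares linear trend, converted to degrees C per decade."""
    slope_per_year = np.polyfit(years, anoms, 1)[0]
    return slope_per_year * 10.0

model_trends = np.array([trend_per_decade(run) for run in model_anoms])
obs_trend = trend_per_decade(obs_anoms)

n_above = np.sum(model_trends > obs_trend)  # runs warming faster than observed
n_below = np.sum(model_trends < obs_trend)  # runs warming slower than observed
percentile = 100.0 * n_below / len(model_trends)

# With 90 runs above and 18 below, this is roughly the 16th percentile.
print(f"{n_above} above, {n_below} below; observed at percentile {percentile:.1f}")
```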

So let us ask you this question, on a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

OK. You got your answer?

Our answer is, maybe, “medium.”

After all, there is plenty of room for improvement.

For example, the model range could be much tighter, indicating that the models were in better agreement with one another as to what the simulated trend should be. As it is now, the model range during the period 1951-2012 extends from 0.07°C/decade to 0.21°C/decade (note that the observed trend is 0.107°C/decade). And this is from models which were run largely with observed changes in climate forcings (such as greenhouse gas emissions, aerosol emissions, volcanoes, etc.) and for a period of time (62 years) during which short-term weather variations should all average out. In other words, they are all over the place.

Another way the agreement between model simulations and real-world observations could be improved would be if the observed trend fell closer to the center of the distribution of model projections. For instance, the agreement would be better if, say, 58 model runs produced more warming and the other 50 produced less warming.

What would lower our confidence?

The opposite set of tendencies. The model distribution could be even wider than it is currently, indicating that the models agreed with each other even less than they do now as to how the earth’s surface temperature should evolve in the real world (or that natural variability was very large over the period of trend analysis). Or the observed trend could move further from the center point of the model trend distribution. This would indicate an increased mismatch between observations and models (more similar to that which has taken place over the 1998-2012 period).

In fact, the latter situation is ongoing—that is, the observed trend is moving steadily leftward in the distribution of model simulated trends.

Figure 3 shows at which percentile the observed trend falls for each period of time starting from 1951 and ending each year from 1980 through 2013.


Figure 3. The percentile rank of the observed trend in the global average surface temperature beginning in the year 1951 and ending in the year indicated on the x-axis within the distribution of 108 climate model simulated trends for the same period. The 50th percentile is the median trend simulated by the collection of climate models.

After peaking at the 42nd percentile (still below the median model simulation, which is the 50th percentile) for the period 1951-1998, the observed trend has steadily fallen in percentile rank, and currently (for the period 1951-2013) sits at its lowest point yet (14th percentile) and is continuing to drop. Clearly, this is looking bad for the models, as the level of agreement with observations steadily decreases with time.
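
Figure 3 can be reproduced with the same assumed arrays (now carried through 2013, so model_anoms is 108 by 63 and obs_anoms has 63 entries) by recomputing the trends over an expanding window; again, a hedged sketch rather than the authors’ actual code:

```python
# Percentile rank of the observed trend (1951 through a given end year)
# within the distribution of 108 model-simulated trends over the same window.
import numpy as np

START = 1951

def percentile_rank(model_anoms, obs_anoms, end_year):
    n = end_year - START + 1
    yrs = np.arange(START, end_year + 1)
    # np.polyfit accepts a 2-D y, so all 108 runs are fit in one call;
    # row 0 of the result holds the 108 slopes (degrees C per year).
    model_slopes = np.polyfit(yrs, model_anoms[:, :n].T, 1)[0]
    obs_slope = np.polyfit(yrs, obs_anoms[:n], 1)[0]
    return 100.0 * np.mean(model_slopes < obs_slope)

for end in range(1980, 2014):  # ending years 1980 through 2013
    print(end, round(percentile_rank(model_anoms, obs_anoms, end)))
```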

In statistical parlance, if the observed trend drops beneath the 2.5th percentile, the evidence would be widely considered strong enough to indicate that the observations were not drawn from the population of model results. In other words, a statistician would describe that situation by saying that the models disagree with the observations with “very high confidence.” Some researchers use a more lax standard and would consider falling below the 5th percentile to be enough to conclude that the observations are not in agreement with the models. We could describe that case as “high confidence” that the models and observations disagree with one another.

So, just how far away are we from either of these situations?

It all depends on how the earth’s average surface temperature evolves in the near future.

We explore three different possibilities (scenarios) between now and the year 2030.

Scenario 1: The earth’s average temperature during each year of the period 2014-2030 remains the same as the average temperature observed during the first 13 years of this century (2001-2013). This scenario represents a continuation of the ongoing “pause” in the rise of global temperatures.

Scenario 2: The earth’s temperature increases year-over-year at a rate equal to the trend observed during the period 1951-2012 (a rate of 0.107°C/decade). This represents a continuation of the observed trend.

Scenario 3: The earth’s temperature increases year-over-year during the period 2014-2030 at a rate equal to that observed during the period 1977-1998—the period often identified as the second warming period of the 20th century. The rate of temperature increase during this period was 0.17°C/decade. This represents a scenario in which the temperature rises at the most rapid rate observed during the era often associated with an anthropogenic influence on the climate.

Figure 4 shows how the percentile rank of the observations evolves under all three scenarios from 2013 through 2030. Under Scenario 1, the observed trend would fall below the 5th percentile of the distribution of model simulations in the year 2018 and beneath the 2.5th percentile in 2023. Under Scenario 2, the years to reach the 5th and 2.5th percentiles are 2019 and 2026, respectively. And under Scenario 3, the observed trend (starting in 1951) would fall beneath the 5th percentile of model simulated trends in the year 2020 and beneath the 2.5th percentile in 2030.
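
The bookkeeping behind these dates can be sketched as follows, reusing percentile_rank from the previous sketch. Note that model_anoms must now extend through 2030 (historical runs spliced with RCP4.5 output, as in the IPCC figure), and the scenario construction is our hypothetical reading of the three descriptions above:

```python
# Extend the observed series to 2030 under each scenario, then find the
# first ending year at which the 1951-to-date trend falls below the 5th
# and 2.5th percentiles of the model trend distribution.
import numpy as np

obs_years = np.arange(1951, 2014)      # observed record, 1951-2013
future_years = np.arange(2014, 2031)   # 2014-2030
steps = np.arange(1, future_years.size + 1)

pause_level = obs_anoms[obs_years >= 2001].mean()  # 2001-2013 average

scenarios = {
    # flat at the 2001-2013 mean: the "pause" continues
    "Scenario 1": np.full(future_years.size, pause_level),
    # 0.107 C/decade = 0.0107 C/year, the 1951-2012 observed trend
    "Scenario 2": obs_anoms[-1] + 0.0107 * steps,
    # 0.17 C/decade = 0.017 C/year, the 1977-1998 rate
    "Scenario 3": obs_anoms[-1] + 0.0170 * steps,
}

for name, future in scenarios.items():
    extended = np.concatenate([obs_anoms, future])
    for threshold in (5.0, 2.5):
        for end in future_years:
            if percentile_rank(model_anoms, extended, end) < threshold:
                print(f"{name}: below the {threshold} percentile by {end}")
                break
```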


Figure 4. Percentile rank of the observed trend within the distribution of model simulations beginning in 1951 and ending at the year indicated on the x-axis, under each of the three scenarios for how the observed global average temperature evolves between 2014 and 2030. The climate models are run with historical forcing from 1951 through 2006 and the RCP4.5 scenario thereafter.

It is clearly not a good situation for climate models when even a sustained temperature rise equal to the fastest observed (Scenario 3) still leads to complete model failure within two decades.

So let’s review.

1) Examining 108 climate model runs spanning the period 1951-2012 shows that the model-simulated trends in the global average temperature vary by a factor of three—hardly a high level of agreement among models as to what should have taken place.

2) The observed trend during the period 1951-2012 falls at the 16th percentile of the model distribution, with 18 model runs producing a smaller trend and 90 climate model runs yielding a greater trend. Not particularly strong agreement.

3) The observed trend has been sliding farther and farther away from the model median and towards ever-lower percentiles for the past 15 years. The agreement between the observed trend and the modeled trends is steadily getting worse.

4) Within the next 5 to 15 years, the long-term observed trend (beginning in 1951) will more than likely fall so far below model simulations as to be statistically recognized as not belonging to the modeled population of outcomes. This disagreement between observed trends and model trends would be complete.

So with all this information in hand, we’ll give you a moment to revisit your initial response to this question:

On a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Got your final answer?

OK, let’s compare that to the IPCC’s assessment of the agreement between models and observations.

The IPCC gave it “very high confidence”—the highest level of confidence that they assign.

Do we hear stunned silence?

This in a nutshell sums up the IPCC process. The facts show that the agreement between models and observations is tenuous and steadily eroding and will be statistically unacceptable in about a decade, and yet the IPCC assigns its highest confidence level to the current agreement between models and observations.

If the models are wrong (predict too much warming) then all the impacts from climate change and the urgency to “do something” about it are lessened. The “crisis” dissipates.

This is politically unacceptable.

So the IPCC does not seek to tell the truth, but instead to further the “climate change is bad” narrative. After all, governments around the world have spent a lot of effort in trying to combat climate change based upon previous IPCC assessments. The IPCC can’t very well go back and say, oops, we were wrong, sorry about that! So they continue to perpetuate the myth and lead policymakers astray.

125 Comments
Jimbo
April 16, 2014 6:08 pm

The IPCC was an ill-considered concept. They never allowed for failure.

Latitude
April 16, 2014 6:10 pm

Why do so many people discuss the science or computer models…without first acknowledging they are all based on fraudulent temperature records that have been fudged?
Even if they had invented the perfect model…they would never know it…because the models are all tuned to temp histories that have made the past colder and the present warmer…to show a faster rise in global warming…
They cooked their own goose with this one…they will never get an accurate computer model…without first admitting they cooked the temp record.


April 16, 2014 6:27 pm

one word: “simulation”…in IPCC-speak, this means (A) data, and (B) reality. End.

Pat Frank
April 16, 2014 6:45 pm

Models are tuned to reproduce the 20th century air temperature anomaly trend. It would only be surprising if they didn’t successfully track HadCRUT. The reason they don’t track air temperature since year 2000 or so is because the recent years are out of sample and the air temperature trend has inconveniently changed slope.
When models are tuned to reproduce the trend of years 1880-2000, they need one set of parameters. Since the observed trend has changed slope since year 2000, there is a need for a different set of parameters. The previous set of parameters is no longer adequate.
The embarrassment of the previous trend slow-down, 1940-1974 or so, was fixed by fudging the models with supposed NH aerosols. But aerosols are no longer available. So the modelers are stuck. They haven’t figured out a plausible excuse to re-fudge the models to make them fit the recent data.
This all goes to show that climate models are analogous to engineering models. They’re heavily ad hoc parametrized to fit a certain range of data. Outside that range, they quickly diverge from reality. Inside that range, they can reproduce trends, but they can’t explain the causal physics behind the trends.
Climate models are, in short, useless. I hope to publish a paper showing exactly how useless they are. Meanwhile here’s my recent AGU Meeting poster (2.9 mb pdf) describing the wonderfully predictive utility of CMIP5 climate models.

p@ Dolan
April 16, 2014 6:49 pm

Simple and convincing. Brilliant. And sadly, doomed to be ignored by all the cAGW acolytes out there…

Theo Goodwin
April 16, 2014 6:57 pm

Pat Frank says:
April 16, 2014 at 6:45 pm
Once again, Pat Frank nails it. Can’t wait to read his paper.

April 16, 2014 7:13 pm

An often missed subtlety is that while projections from an IPCC climate model may be erroneous, they are insusceptible to being falsified. It is predictions that are susceptible to being falsified but the IPCC climate models do not make them.

Greg Cavanagh
April 16, 2014 7:23 pm

It sounds as though they are averaging trends over a longer period in order to say the difference in trend, overall, is within 0.02 of each other. They need to say the trend is diverging.
The whole thing reads like statistics trickery 101.
Oh, I see. A “Trick” is a clever thing to do, right?

April 16, 2014 7:26 pm

“This sounds like the model are doing pretty good”
No, it sounds like the models are doing pretty well.

ferd berple
April 16, 2014 7:34 pm

Greg Cavanagh says:
April 16, 2014 at 7:23 pm
The whole thing reads like statistics trickery 101.
==============
If we have our feet in the freezer and our heads in the oven, the IPCC says we are statistically comfortable.

Niff
April 16, 2014 7:42 pm

One can only assume that in using Box 9.2 the IPCC is completely incompetent or is fraudulently misdirecting. Unfortunately the CAGW crowd aren’t interested in what underlies the dogma and the IPCC is not subject to any prosecuting jurisdiction.
No matter. The facts should be shouted loud for any who are interested to hear.

SIGINT EX
April 16, 2014 7:42 pm

IPCC Titanic.
Do not trust the … “Captain” !
The “Watch Maker” turned “Ship Designer” on 2nd Deck standing by the spiral staircase and looking at the Ship-clock and glancing to his Swiss Chronograph on his wrist … knows !

gregole
April 16, 2014 7:48 pm

Thank you Pat and Chip. Thorough and to the point. Pat Frank also has an excellent graphic in his comment above. I appreciate the work you guys do to keep all this straight.
Question: What if temperatures drop over the next ten years?

ossqss
April 16, 2014 7:55 pm

Numbers don’t lie unless you program them to do so.
Pretty much sums up the net results of the trillions it took to get us here.
Think about it… where exactly are we?

M Seward
April 16, 2014 7:55 pm

This whole model-results-vs-observed-results business will soon be either beyond parody or only vaguely understood via parody, it is so bizarre scientifically. The models can be loaded up with some fudge factor to make them mimic the observed trend for some interval. The 1980-2000 period would be a good option, or you might start a bit earlier. That says nothing about the models’ integrity at all but just gets them into a position that is convenient for longer-term comparison. In short, a complete artifice.
This notion that the models reflect the climate system is about as credible as driving a car to the top of a hill, then letting it roll down and claiming it is a self-controlling autonomous vehicle that will drive itself home.

TheLastDemocrat
April 16, 2014 8:03 pm

Fig 2: “observed trend” is at the 13th percentile. And falling.

kylezachary
April 16, 2014 8:04 pm

It would be kind of shocking if the models didn’t agree with the cherry-picked time period they were based on. But the models didn’t exist back then, and ever since the models have existed they have not fit reality even remotely. So basically the models are good at showing past temperature patterns but terrible at predicting future temperature patterns. And since we can just look at the record books to see past trends, what purpose do the models serve? We don’t need a model that predicts the past; we have Google for that. We need a model that predicts the future, and they don’t.

Steve
April 16, 2014 8:06 pm

So are the climate change junkies now trying to get away from “if a significant timeframe of, say, 17 years of cooling occurs, then we can make it 20 or 30 years” on the strength of a small percentage of models predicting close to observation?
I don’t buy it.

April 16, 2014 8:13 pm

The basis for statistical trickery is application of the equivocation fallacy, wherein a logically illicit conclusion is drawn from an equivocation; the latter is an argument in which a term changes meaning in the midst of the argument. The result is an argument that looks like a syllogism (an argument whose conclusion is true) but isn’t one.

Evan Jones
Editor
April 16, 2014 8:18 pm

(a rate of 0.0107°C/decade).
Typo here. You mean 0.107, of course?

April 16, 2014 8:20 pm

It is Pat Michaels, not Pat Frank. Credit where credit is due. This is a stunning presentation of the data.

Evan Jones
Editor
April 16, 2014 8:26 pm

Also, a dozen freakin’ reasons for the pause? It’s obvious, isn’t it? The PDO flip to negative is causing the pause. Same as it did in the 1950s.
The 1950s “pause” was, of course, incorrectly ascribed to aerosols. An excusable mistake: When they were looking at trends in the 1990s, they were smack in the middle of a positive PDO — but PDO was not even described by science until 1996.
The (mild) forcing has applied continuously from 1950 — just at about the rate ol’ Arrhenius predicted it would (+1.1C forcing per CO2 doubling). I wonder what Henny would think about all this if he were around to see it.
So glad to have cleared that up!

Joe Pomykala
April 16, 2014 8:38 pm

Does the observed “trend” looked at by the IPCC (actual observed temperatures compared to their bad forecasts) include 1) the backwards government “adjustment” lowering prior observed temperatures, 2) heat-island effects, and 3) the fact that any “trend” in temperature may be statistically insignificant and just natural variation?
If the IPCC goes back to the decade of the 1950s to start the data-and-forecast comparison (things do not look good with just the last decade and a half), was that not a relatively cold decade? Why not start in the 1930s or 1940s with warmer observed data and compare to the “forecasts”?
As for the $29 billion a year the U.S. now foolishly spends on propaganda and preparedness for forecasted global warming that seems not to be showing up (now “climatic change”), and for the supposedly melting global ice caps that will flood major cities and low countries despite global ice currently running above trend (or in natural variation above normal and not an anomaly): do you think that money could bias IPCC forecasts upward, since funding would dry up for climatic-change alarmists if they could not manufacture forecasts for alarms and more money? It is no surprise at all that IPCC “forecasts” are consistently above observations; if they were accurate there would be no money to pay their salaries, and now they are also going back to adjust the observations to create a trend.
Well, on the bright side, at least the White House is not following the advice of Obama’s top science adviser John Holdren who wanted to do mass sterilization of the population by poisoning the water supply to prevent population growth which was also assumed by alarmists “forecasts” to be leading to imminent disaster.

April 16, 2014 8:42 pm

We are not dealing with stupid people. Many of the IPCC scientists are well trained and fully aware of what is happening. I’m sure they know that they will be hung by their own data tampering, and that the models cannot work unless warming begins again- and soon. I used to play with my statistics students by telling them to use a set of data for various analyses. Then I would have them “fudge” 30% of the data and rerun the analyses. A lot of eyes were opened. The only way one could get back to the “truth” was to reinstate the original data. The IPCC, NASA, NOAA and all the other “manipulators” cannot “politically” go back to the data they have altered, so the models are hung on linear increases, and the real climate, historically, hasn’t followed that pattern. This is why we see all the doubling down on fear – they know that time can kill the whole ruse. Political action NOW signifies their fear.
