A Clear Example of IPCC Ideology Trumping Fact

By Paul C. Knappenberger and Patrick J. Michaels

Center for the Study of Science, Cato Institute

Within the U.S. federal government (and governments around the world), the U.N.’s Intergovernmental Panel on Climate Change (IPCC) is given authority when it comes to climate change opinion.

This isn’t a good idea.

Here perhaps is the clearest example yet. By the time you get to the end of this post, we think you may be convinced that the IPCC does not seek to tell the truth—the truth being that it has overstated the case for climate worry in its previous reports. The “consensus of scientists” instead prefers to obfuscate.

In doing so, the IPCC is negatively impacting the public health and welfare of all of mankind as it influences governments to limit energy use, instead of seeking ways to help expand energy availability (or, at least, to stay out of the way of the market).

Everyone knows that the pace of global warming (as represented by the rise in the earth’s average surface temperature) has slowed during the past decade and a half. Coming up with reasons why is the hottest topic in climate change science these days, with about a dozen different explanations being put forward.

Climate model apologists are scrambling to try to save their models’ (and their own) reputations—because the one thing that they do not want to have to admit is perhaps the simplest and most obvious answer of all: that climate models exaggerate the amount that the earth’s average temperature will increase as a result of human greenhouse gas emissions. If the models are overheated, then so too are all the impacts that derive from the model projections—which is the death knell for all those proposed regulations limiting our use of fossil fuels for energy.

In the Summary for Policymakers (SPM) section of its Fifth Assessment Report, even the IPCC recognizes the recent divergence of model simulations and real-world observations:

“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2012).”

But, lest this leads you to think that there may be some problem with the climate models, the IPCC clarifies:

“The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.”

Whew! For a minute there it seemed like the models were struggling to capture reality, but we can rest assured that, according to the IPCC, over the long haul (say, since the middle of the 20th century) model simulations and observations “agree” as to what is going on.

The IPCC references its “Box 9.2” in support of the statements quoted above.

In “Box 9.2” the IPCC helpfully places the observed trends in the context of the distribution of simulated trends from the collection of climate models it uses in its report. The highlights from Box 9.2 are reproduced below (as our Figure 1). In this Figure, the observed trend for different periods is in red and the distribution of model trends is in grey.


Figure 1. Distribution of the trend in the global average surface temperature from 114 model runs used by the IPCC (grey) and the observed trend as compiled by the U.K.’s Hadley Centre (red). (Figure from the IPCC Fifth Assessment Report)

As can be readily seen in Panel (a), during the period 1998-2012, the observed trend lies below almost all the model trends. The IPCC describes this as:

…111 out of 114 realizations show a GMST [global mean surface temperature] trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble

This gives rise to the IPCC SPM statement (quoted above) that “There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2012).”

No kidding!

Now let’s turn our attention to the period 1951-2012, Panel (c) in Figure 1.

The IPCC describes the situation depicted there as:

Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade…

This sounds like the models are doing pretty well—only off by 0.02°C per decade. And this is the basis for the IPCC SPM statement (also quoted above):

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Interestingly, the IPCC doesn’t explicitly tell you how many of the 114 climate models are greater than the observed trend for the period 1951-2012. And, it is basically impossible to figure that out for yourself based on their Panel (c) since some of the bars of the histogram go off the top of the chart and the x-axis scale is so large as to bunch up the trends such that there are only six populated bins representing the 114 model runs. Consequently, you really can’t assess how well the models are doing and how large a difference of 0.02°C/decade over 62 years really is. You are left to take the IPCC’s word for it.

We don’t.

The website Climate Explorer archives and makes available the large majority of the climate model output used by the IPCC. From there, you can assess 108 (of the 114) climate model runs incorporated into the IPCC graphic—a large enough majority to quite accurately reproduce the results.

We do this in our Figure 2. However, we adjust both axes of the graph so that all the data are shown and you can ascertain the details of what is going on.
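For readers who want to check the mechanics, here is a minimal sketch of the calculation. It is written in Python, with made-up placeholder series standing in for the actual Climate Explorer downloads and the HadCRUT4 record (purely so the script runs end to end); the real analysis simply fits a least-squares trend to each annual series over 1951-2012 and histograms the results.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
years = np.arange(1951, 2013)  # 1951-2012 inclusive, 62 years

# Placeholder data only: 108 "model runs" and one "observed" record of
# annual global-mean temperature anomalies (deg C). Real input would be
# the CMIP5 series from Climate Explorer and the HadCRUT4 compilation.
model_runs = 0.014 * (years - 1951) + rng.normal(0.0, 0.10, (108, years.size))
observed = 0.0107 * (years - 1951) + rng.normal(0.0, 0.10, years.size)

def trend_per_decade(series, yrs=years):
    """Least-squares slope of an annual series, in deg C per decade."""
    return np.polyfit(yrs, series, 1)[0] * 10.0

model_trends = np.array([trend_per_decade(run) for run in model_runs])
obs_trend = trend_per_decade(observed)

# Histogram with bins narrow enough, and axes wide enough, to show everything.
plt.hist(model_trends, bins=np.arange(0.05, 0.23, 0.01), color="steelblue")
plt.axvline(obs_trend, color="red", label=f"observed: {obs_trend:.3f}")
plt.xlabel("1951-2012 trend (deg C per decade)")
plt.ylabel("number of model runs")
plt.legend()
plt.show()
```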


Figure 2. Distribution of the trend in the global average surface temperature from 108 model runs used by the IPCC (blue) and the observed trend as compiled by the U.K.’s Hadley Centre (red) for the period 1951-2012 (the model trends are calculated from historical runs with the RCP4.5 results appended after 2006). This presents nearly the same data as Figure 1, Panel (c).

What we find is that there are 90 (of 108) model runs that simulate more global warming to have taken place from 1951-2012 than actually occurred and 18 model runs simulating less warming to have occurred. That is another way of saying that the observations fall at the 16th percentile of model runs (the 50th percentile being the median model trend value).
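The percentile calculation is nothing more than counting. Continuing the placeholder sketch above (same hypothetical `model_trends` and `obs_trend` variables):

```python
def percentile_rank(value, population):
    """Percent of the population falling below `value`."""
    population = np.asarray(population)
    return 100.0 * np.mean(population < value)

rank = percentile_rank(obs_trend, model_trends)
below = int(np.sum(model_trends < obs_trend))
print(f"{below} of {model_trends.size} runs warm less than observed; "
      f"the observed trend sits at about the {rank:.0f}th percentile")
# With the real data: 18 of 108 runs fall below -> roughly the 16th percentile.
```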

So let us ask you this question. On a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

OK. You got your answer?

Our answer is, maybe, “medium.”

After all, there is plenty of room for improvement.

For example, the model range could be much tighter, indicating that the models were in better agreement with one another as to what the simulated trend should be. As it is now, the model range during the period 1951-2012 extends from 0.07°C/decade to 0.21°C/decade (note that the observed trend is 0.107°C/decade). And this is from models which were run largely with observed changes in climate forcings (such as greenhouse gas emissions, aerosol emissions, volcanoes, etc.) and for a period of time (62 years) during which short-term weather variations should all average out. In other words, they are all over the place.

Another way the agreement between model simulations and real-world observations could be improved would be if the observed trend fell closer to the center of the distribution of model projections. For instance, the agreement would be better if, say, 58 model runs produced more warming and the other 50 produced less warming.

What would lower our confidence?

The opposite set of tendencies. The model distribution could be even wider than it is currently, indicating that the models agreed with each other even less than they do now as to how the earth’s surface temperature should evolve in the real world (or that natural variability was very large over the period of trend analysis). Or the observed trend could move further from the center point of the model trend distribution. This would indicate an increased mismatch between observations and models (more similar to that which has taken place over the 1998-2012 period).

In fact, the latter situation is ongoing—that is, the observed trend is moving steadily leftward in the distribution of model simulated trends.

Figure 3 shows at which percentile the observed trend falls for each period of time starting from 1951 and ending each year from 1980 through 2013.


Figure 3. The percentile rank of the observed trend in the global average surface temperature beginning in the year 1951 and ending in the year indicated on the x-axis within the distribution of 108 climate model simulated trends for the same period. The 50th percentile is the median trend simulated by the collection of climate models.

After peaking at the 42nd percentile (still below the median model simulation, which is the 50th percentile) during the period 1951-1998, the observed trend has steadily fallen in percentile rank and currently (for the period 1951-2013) sits at its lowest point ever (14th percentile), and it is continuing to drop. This trend looks bad for the models, as their level of agreement with observations is steadily decreasing with time.
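The calculation behind Figure 3 is the same trend-and-rank exercise repeated for every end year from 1980 onward. A sketch, again riding on the placeholder series defined earlier (a real run would use the actual records, which extend through 2013; our placeholders stop at 2012):

```python
# For each end year, refit the trend from 1951 and rank the observation
# within the model distribution for the same window.
end_years = list(range(1980, 2013))
ranks = []
for end in end_years:
    n = end - 1951 + 1
    m_tr = [np.polyfit(years[:n], run[:n], 1)[0] for run in model_runs]
    o_tr = np.polyfit(years[:n], observed[:n], 1)[0]
    ranks.append(percentile_rank(o_tr, m_tr))

plt.plot(end_years, ranks, marker="o")
plt.axhline(50, linestyle="--", color="grey", label="model median")
plt.xlabel("end year of trend beginning in 1951")
plt.ylabel("percentile rank of observed trend")
plt.legend()
plt.show()
```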

In statistical parlance, if the observed trend drops beneath the 2.5th percentile, the evidence would widely be considered strong enough to indicate that the observations were not drawn from the population of model results. In other words, a statistician would say, with “very high confidence,” that the models disagree with the observations. Some researchers use a more lax standard and would consider falling below the 5th percentile enough to conclude that the observations are not in agreement with the models. That case could be described as “high confidence” that the models and observations disagree with one another.
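Expressed as code, the decision rule is a one-sided percentile check; a sketch using the cutoffs just described:

```python
def agreement_verdict(rank_percent):
    """Map a percentile rank to the confidence language used above."""
    if rank_percent < 2.5:
        return "models and observations disagree ('very high confidence')"
    if rank_percent < 5.0:
        return "models and observations disagree ('high confidence')"
    return "observations not yet statistically outside the model population"

print(agreement_verdict(14.0))  # the 1951-2013 percentile noted above
```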

So, just how far away are we from either of these situations?

It all depends on how the earth’s average surface temperature evolves in the near future.

We explore three different possibilities (scenarios) between now and the year 2030.

Scenario 1: The earth’s average temperature during each year of the period 2014-2030 remains the same as the average temperature observed during the first 13 years of this century (2001-2013). This scenario represents a continuation of the ongoing “pause” in the rise of global temperatures.

Scenario 2: The earth’s temperature increases year-over-year at a rate equal to the observed rise in temperature during the period 1951-2012 (a rate of 0.107°C/decade). This represents a continuation of the observed trend.

Scenario 3: The earth’s temperature increases year-over-year during the period 2014-2030 at a rate equal to that observed during the period 1977-1998—the second warming period of the 20th century. The rate of temperature increase during this period was 0.17°C/decade. This represents a scenario in which the temperature rises at the most rapid rate observed during the period often associated with an anthropogenic influence on the climate.

Figure 4 shows how the percentile rank of the observations evolves under all three scenarios from 2013 through 2030. Under Scenario 1, the observed trend would fall below the 5th percentile of the distribution of model simulations in the year 2018 and beneath the 2.5th percentile in 2023. Under Scenario 2, the years to reach the 5th and 2.5th percentiles are 2019 and 2026, respectively. And under Scenario 3, the observed trend (starting in 1951) would fall beneath the 5th percentile of model simulated trends in the year 2020 and beneath the 2.5th percentile in 2030.


Figure 4. Percent rank of the observed trend within the distribution of model simulations beginning in 1951 and ending at the year indicated on the x-axis under the application of the three scenarios of how the observed global average temperature will evolve between 2014 and 2030. The climate models are run with historical forcing from 1951 through 2006 and the RCP4.5 scenario thereafter.

It is clearly not a good situation for climate models when even a sustained temperature rise equal to the fastest observed (Scenario 3) still leads to complete model failure within two decades.
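For the curious, here is roughly how the Figure 4 exercise can be set up, continuing the earlier placeholder sketch. Caveats: Scenario 1 is approximated by holding the final value flat rather than pinning each year to the 2001-2013 average, and the RCP4.5 extension of the models is stood in for by continuing each run at its own fitted rate; neither shortcut is the actual procedure, but the bookkeeping is the same.

```python
def extend_at_rate(series, rate_per_decade, n_years):
    """Append n_years to an annual series, continuing at a fixed trend."""
    steps = np.arange(1, n_years + 1)
    return np.concatenate([series, series[-1] + (rate_per_decade / 10.0) * steps])

future = np.arange(2013, 2031)  # 2013-2030
all_years = np.concatenate([years, future])

# Crude stand-in for the RCP4.5 extension of the 108 model runs.
ext_models = [extend_at_rate(run, trend_per_decade(run), future.size)
              for run in model_runs]

scenarios = {
    "Scenario 1 (pause)": extend_at_rate(observed, 0.0, future.size),
    "Scenario 2 (1951-2012 rate)": extend_at_rate(observed, 0.107, future.size),
    "Scenario 3 (1977-1998 rate)": extend_at_rate(observed, 0.17, future.size),
}

for name, series in scenarios.items():
    for end in future:
        n = end - 1951 + 1
        m_tr = [np.polyfit(all_years[:n], m[:n], 1)[0] for m in ext_models]
        o_tr = np.polyfit(all_years[:n], series[:n], 1)[0]
        if percentile_rank(o_tr, m_tr) < 5.0:
            print(f"{name}: trend from 1951 first falls below the 5th percentile in {end}")
            break
```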

So let’s review.

1) Examining 108 climate model runs spanning the period 1951-2012 shows that the model-simulated trends in the global average temperature vary by a factor of three—hardly a high level of agreement among models as to what should have taken place.

2) The observed trend during the period 1951-2012 falls at the 16th percentile of the model distribution, with 18 model runs producing a smaller trend and 90 climate model runs yielding a greater trend. Not particularly strong agreement.

3) The observed trend has been sliding farther and farther away from the model median and towards ever-lower percentiles for the past 15 years. The agreement between the observed trend and the modeled trends is steadily getting worse.

4) Within the next 5 to 15 years, the long-term observed trend (beginning in 1951) will more than likely fall so far below the model simulations as to be statistically recognized as not belonging to the modeled population of outcomes. At that point, the disagreement between observed trends and model trends would be complete.

So with all this information in hand, we’ll give you a moment to revisit your initial response to this question:

On a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Got your final answer?

OK, let’s compare that to the IPCC’s assessment of the agreement between models and observations.

The IPCC gave it “very high confidence”—the highest level of confidence that they assign.

Do we hear stunned silence?

This in a nutshell sums up the IPCC process. The facts show that the agreement between models and observations is tenuous and steadily eroding and will be statistically unacceptable in about a decade, and yet the IPCC assigns its highest confidence level to the current agreement between models and observations.

If the models are wrong (predict too much warming) then all the impacts from climate change and the urgency to “do something” about it are lessened. The “crisis” dissipates.

This is politically unacceptable.

So the IPCC does not seek to tell the truth, but instead to further the “climate change is bad” narrative. After all, governments around the world have spent a lot of effort in trying to combat climate change based upon previous IPCC assessments. The IPCC can’t very well go back and say, oops, we were wrong, sorry about that! So they continue to perpetuate the myth and lead policymakers astray.

125 Comments
Cheshirered
April 17, 2014 1:39 am

R2Dtoo says:
April 16, 2014 at 8:42 pm
That’s a terrific post, and bang on.
Corrupting the original data to secure today’s required result ‘locks in’ future divergence, model failure and, ultimately, complete unsustainability of the theory.
Tick tock….

Adam
April 17, 2014 1:39 am

If I have lots of “models” and none of them work, do I get a better model by examining the distribution of their output? Perhaps if they make a couple more models to add to the distribution they can get a more realistic looking graph? Or maybe they should reject some of the models?
Perhaps they should apply a time-varying weight for each model and choose the weight vectors such that the weighted mean of the models lines up with the observations? Each month they can adjust the weights to maintain the illusion that their models are not junk.

DirkH
April 17, 2014 1:41 am

Terry Oldberg says:
April 16, 2014 at 7:13 pm
“An often missed subtlety is that while projections from an IPCC climate model may be erroneous, they are insusceptible to being falsified. It is predictions that are susceptible to being falsified but the IPCC climate models do not make them.”
A discipline that makes no predictions is not science.
If the IPCC makes no predictions, why do politicians the world over impose cow flatulence taxes and help unviable energy solutions like wind and solar into being with taxpayer money?
If we can’t call it science we need another word. I propose “cult”.

Reply to  DirkH
April 17, 2014 9:01 am

DirkH:
You have drawn the correct conclusion from the lack of falsifiability of the models. It is also true that the models convey no information to a policymaker about the outcomes of his or her policy decisions, thus being worthless for the purpose of making policy.

DirkH
April 17, 2014 1:45 am

pat says:
April 16, 2014 at 9:56 pm
“[ROPEIK:] Germany’s Energiewende program is trying, not without problems, to convert Europe’s biggest economy to renewable energy. China and India are pouring billions into nuclear energy.”
Ropeik is right. And in a few years Germany will have destroyed its energy security and China and India will retake their classic roles of dominant empires.

Jimbo
April 17, 2014 1:53 am

So the IPCC does not seek to tell the truth, but instead to further the “climate change is bad” narrative. After all, governments around the world have spent a lot of effort in trying to combat climate change based upon previous IPCC assessments. The IPCC can’t very well go back and say, oops, we were wrong, sorry about that! So they continue to perpetuate the myth and lead policymakers astray.

This is the dilemma the IPCC finds itself in. It is a dilemma of its own making. By the way, policymakers cannot be led astray, because the IPCC produces the results they want. Policymakers find themselves in a dilemma too.
As long as there is no resumption of warming for a decade or more, the IPCC goes from dust to dust. A zombie that tells us with 97% certainty that he is right.

Jimbo
April 17, 2014 1:54 am

After posting I want to amend the last sentence to read:

A zombie that tells us with 97% certainty that he is ALIVE!

Jimbo
April 17, 2014 2:11 am

evanmjones says:
April 16, 2014 at 8:26 pm
Also, a dozen freakin’ reasons for the pause? It’s obvious, isn’t it? The PDO flip to negative is causing the pause. Same as it did in the 1950s.
The 1950s “pause” was, of course, incorrectly ascribed to aerosols. An excusable mistake: when they were looking at trends in the 1990s, they were smack in the middle of a positive PDO—but the PDO was not even described by science until 1996…

Thanks for that reminder. Here is the real reason why they are busted, and this was AFTER the IPCC’s FIRST report, published in 1990. By this time they were hooked and in a corner.

Nathan J. Mantua, Steven R. Hare, Yuan Zhang, John M. Wallace, and Robert C. Francis, “A Pacific interdecadal climate oscillation with impacts on salmon production,” Bulletin of the American Meteorological Society, Vol. 78, pp. 1069-1079, June 1997. http://www.atmos.washington.edu/~mantua/abst.PDO.html

Abstract: Evidence gleaned from the instrumental record of climate data identifies a robust, recurring pattern of ocean-atmosphere climate variability centered over the mid-latitude Pacific basin. Over the past century, the amplitude of this climate pattern has varied irregularly at interannual-to-interdecadal time scales. There is evidence of reversals in the prevailing polarity of the oscillation occurring around 1925, 1947, and 1977; the last two reversals correspond with dramatic shifts in salmon production regimes in the North Pacific Ocean. This climate pattern also affects coastal sea and continental surface air temperatures, as well as streamflow in major west coast river systems, from Alaska to California.

April 17, 2014 2:47 am

Roger, gregole, Theo:
My apologies to all. My apologies to Pat Frank as well.

Chris Wright
April 17, 2014 3:00 am

The authors quote the IPCC as stating:
“The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.”
and
“Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade”
0.02 degrees C per decade? What utter nonsense. We can’t even measure the climate to that kind of precision. This claim alone shows that it has nothing to do with science.
These statements don’t mention a rather important consideration: when were the models run? If they had been run in 1950, that would be extraordinarily impressive. Most likely these are recent models that may even have been run later than 2012. In other words, they are mostly hindcasts and not forecasts. Any fool can predict what’s already happened.
It does seem that climate model hindcasts are remarkably accurate, while their actual forecasts fail miserably. There’s only one rational explanation for this: they have been adjusted to match past climate. There are huge numbers of parameters that can be turned up and down as needed. Willis mentioned an intriguing possibility: that these parameters evolve over successive runs, rather like Darwinian natural selection. Parameter changes that improve the hindcast will be kept, while parameter changes that make the hindcast worse will be – shall we say – changed.
If the IPCC is claiming their computer models are accurate and reliable on the basis of their hindcasts then this alone is close to fraudulent.
Chris

April 17, 2014 3:10 am

The alarmists are probably saved by the upcoming El Niño, which will tweak the trend upward for a short while.

nevket240
April 17, 2014 3:34 am

http://www.livescience.com/44871-16-foot-great-white-shark-spotted-near-australian-beach.html
May I inject a bit of humour here? Read this article and concentrate hard when you read the last paragraph. This ‘educated’ cretin will, no doubt, be writing ‘Green’ articles at some stage. I hope they are as good as this.
regards.

Bill Marsh
Editor
April 17, 2014 3:39 am

It’s worse than we thought. I thought that the model runs going back to 1950 were ‘tuned’ by adjusting the ‘parameters’ that represent physical processes that the models can’t simulate in order to make them ‘match’ the historical temperature trend. The above gives the impression that the model runs all were set up with a ‘start time’ of 1951 and allowed to run from there, and I don’t think that is the case.
Hans, I fail to understand the glee that some alarmist media types are displaying about an upcoming El Niño. It’s as if they equate a rise in temperature with ‘it’s CO2 what done it’, when El Niño has nothing to do with CO2-induced temperature rise; it’s a natural cycle. If they are hanging their hats on the idea that a ‘super’ El Niño validates Dr. Trenberth’s ‘the heat is hiding in the deep oceans’ theory, then they are headed for disappointment again, as Dr. Trenberth was referring to heat in the oceans at 2000 meters and below, which again has nothing to do with Kelvin-wave El Niño formation or intensity (mostly 0-300 meters).

April 17, 2014 3:53 am

Since I found Fig. 4 to be of particular interest, let me suggest tightening the scenario descriptions if you use this presentation again.
My initial reading was that the respective rate at which the global temperature changes in each scenario is constant, and I’m still not sure that my reading was incorrect. But the fact that the top, Scenario 3 curve first converges with, then diverges from, and then again approaches the Scenario 2 curve suggests instead that the respective changes are not constant, that each year’s change equals that of the corresponding year in the respective paradigm record interval.
In the latter case, perhaps you’ll consider revising the scenario descriptions to make that clear.

richard
April 17, 2014 4:08 am

The climate models will never, ever be right – game over.
My advice to them is to have two more climate models that show static temps for another few years and one that shows cooling – cover all bases.
After all, give enough monkeys typewriters…

Joe Pomykala
April 17, 2014 4:22 am

Thank you Pat and Paul at/and Cato.
Great work by you all to analyze and expose this latest entry in the ongoing series of fraudulent IPCC climate data.
Comparing the IPCC “forecast” to reality (oops, they are way off), what will future IPCC reports/excuses mention?
1.) The NASA and NOAA temperature data sets must be adjusted down for previous decades to bolster claims of global warming, for the greater good of a society that needs to be scared into taking action.
2.) Slowing solar-cycle activity, leading to less radiation hitting earth (the general effect being, say, cooler nights, and hotter/cooler summers/winters), has been counteracting global warming; it is the Maunder Minimum.
3.) It is the Milankovitch cycle, and the coming next ice age is counteracting global warming (the end of the well-documented interglacial warming period).
4.) The lag of CO2 behind temperature changes that we discovered in the Vostok ice-core data is just an anomaly.
5.) We give up; the temperature has not changed, or possibly it is getting cooler. Massive food shortages are possible, with droughts and more extremes forecasted (cooling or warming does not matter; it is “climatic change” now).
6.) Better funding for the global cooling scare forecasted by many climatologists.
7.) To counteract global cooling and climatic change, the UN should get countries to do something quickly, like subsidize fossil fuels and coal burning.
8.) The last updated IPCC report recorded for humans before the “forecasted” imminent collapse: we need to melt the ice caps with nuclear bombs to counteract climatic change and global cooling. https://www.youtube.com/watch?v=DsdWTBNyvX0
9.) We were wrong again with the last dozen IPCC reports and should now follow the observed data, or politically “adjust” the observed data; global warming is back in season, so please give me more funding for propaganda to scare people with BS forecasts, so I have enough income and future funding to buy another SUV, go skiing on the quickly melting ski slopes, or play hockey with my fake hockey stick on the Mann-made graph before it melts.

Jim Cripwell
April 17, 2014 4:56 am

It is all very well writing this sort of thing on WUWT, but are the right people reading it? Who are “the right people”? The people who can blow the lid off the whole CAGW scam.
The APS is currently reviewing its statement on CAGW. There is a committee of six senior members of the APS who are commissioned to write a report that will, hopefully, be the basis for a new statement by the APS. I can only remember the name of one of these people: Dr. Susan Seestrom. Can measures be taken to see that these six people are made aware that this sort of analysis exists?

Bruce Cobb
April 17, 2014 5:02 am

“Everyone knows that the pace of global warming (as represented by the rise in the earth’s average surface temperature) has slowed during the past decade and a half.”
No, what we know is that global warming has stopped, for a period now approaching 18 years. This fact alone means the models are complete bogoid junk. Their predictive value is zero. But wait, there’s more. The temperature record the junk models are based on is contaminated and biased towards warming, possibly by as much as 50%. But wait, there’s more. Starting a temperature record (1951) during a cool period is a cherry-pick tailor-made to show warming. That’s three strikes against the GCMs, and YEROUTTATHERE!!!

April 17, 2014 5:17 am

This quantifies what I have been saying for a while: that in order to get to the IPCC doomsday scenario of 3°C we are going to need warming like never before seen. 8.5 decades for a 2.5°C rise means nearly 0.3°C per decade. Every decade.

April 17, 2014 5:20 am

Even using a Scenario 4, in which temperatures accelerate beyond any level they have shown in the past 70 years, the confidence is still very low. The models are not predicting that, nor have they done any good at predicting the present.
Low confidence? More like No Confidence. They put all their eggs into the CO2 basket and it sprung a leak.

atthemurph
April 17, 2014 5:49 am

It’s an “Intergovernmental Panel”. That’s as far as anyone needs to go with that.

Richard M
April 17, 2014 6:30 am

I wonder how long it will take for the models to fall below the thresholds if the trend since 2005 continues?
http://www.woodfortrees.org/plot/rss/from:2005/plot/rss/from:2005/trend/plot/hadcrut4gl/from:2005/to/plot/hadcrut4gl/from:2005/trend
Since we are now at a solar maximum, if anything, the trend will drop even more in the future (once the El Niño – La Niña events are over).

Psalmon
April 17, 2014 6:48 am

I got to this from Yahoo’s page. That in itself is a monumental and stunning shift.

markx
April 17, 2014 6:52 am

Nice article.
Question….
How much of the model data shown is forecast … and how much is hindcast?
I suspect a lot of the earlier numbers are data fitting?

beng
April 17, 2014 6:52 am

Every time I see “CMIP” models mentioned, I imagine a room full of chimps on keyboards typing model code.

April 17, 2014 6:58 am

The proper application of the Precautionary Principle (oxymoron?) would therefore mean that, since we might be wrong about CO2, we should stop trying to eliminate the possible cause and instead mitigate the expected effects. That way, if some other mechanism is found to be the cause of CAGW, we will still be prepared. /sarc