A Clear Example of IPCC Ideology Trumping Fact

By Paul C. Knappenberger and Patrick J. Michaels

Center for the Study of Science, Cato Institute

Within the U.S. federal government (and governments around the world), the U.N.’s Intergovernmental Panel on Climate Change (IPCC) is treated as the authority on climate change opinion.

This isn’t a good idea.

Here perhaps is the clearest example yet. By the time you get to the end of this post, we think you may be convinced that the IPCC does not seek to tell the truth—the truth being that it has overstated the case for climate worry in its previous reports. The “consensus of scientists” instead prefers to obfuscate.

In doing so, the IPCC is negatively impacting the public health and welfare of all mankind, as it influences governments to limit energy use instead of seeking ways to expand energy availability (or just stay out of the way of the market).

Everyone knows that the pace of global warming (as represented by the rise in the earth’s average surface temperature) has slowed during the past decade and a half. Coming up with reasons why is the hottest topic in climate change science these days, with about a dozen different explanations being put forward.

Climate model apologists are scrambling to save their models’ (and their own) reputations—because the one thing they do not want to admit is perhaps the simplest and most obvious answer of all: that climate models exaggerate the amount the earth’s average temperature will increase as a result of human greenhouse gas emissions. If the models are overheated, then so too are all the impacts derived from the model projections, which is the death knell for all those proposed regulations limiting our use of fossil fuels for energy.

In the Summary for Policymakers (SPM) section of its Fifth Assessment Report, even the IPCC recognizes the recent divergence of model simulations and real-world observations:

“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2013).”

But, lest this leads you to think that there may be some problem with the climate models, the IPCC clarifies:

“The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.”

Whew! For a minute there it seemed like the models were failing to capture reality, but we can rest assured that over the long haul, say, since the middle of the 20th century, according to the IPCC, model simulations and observations “agree” as to what is going on.

The IPCC references its “Box 9.2” in support of the statements quoted above.

In “Box 9.2” the IPCC helpfully places the observed trends in the context of the distribution of simulated trends from the collection of climate models it uses in its report. The highlights from Box 9.2 are reproduced below (as our Figure 1). In this Figure, the observed trend for different periods is in red and the distribution of model trends is in grey.


Figure 1. Distribution of the trend in the global average surface temperature from 114 model runs used by the IPCC (grey) and the observed temperatures as compiled by the U.K.’s Hadley Center (red). (Figure from the IPCC Fifth Assessment Report)

As can be readily seen in Panel (a), during the period 1998-2012, the observed trend lies below almost all the model trends. The IPCC describes this as:

…111 out of 114 realizations show a GMST [global mean surface temperature] trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble

This gives rise to the IPCC SPM statement (quoted above) that “There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2013).”

No kidding!

Now let’s turn our attention to the period 1951-2012, Panel (c) in Figure 1.

The IPCC describes the situation depicted there as:

Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade…

This sounds like the models are doing pretty well, only off by 0.02°C/decade. And this is the basis for the IPCC SPM statement (also quoted above):

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Interestingly, the IPCC doesn’t explicitly tell you how many of the 114 climate model runs produce trends greater than the observed trend for the period 1951-2012. And it is basically impossible to figure that out for yourself from Panel (c), since some of the bars of the histogram run off the top of the chart and the x-axis scale is so coarse that the 114 model runs are bunched into only six populated bins. Consequently, you can’t really assess how well the models are doing or how large a difference of 0.02°C/decade over 62 years really is. You are left to take the IPCC’s word for it.

We don’t.

The website Climate Explorer archives and makes available the large majority of the climate model output used by the IPCC. From there, you can assess 108 (of the 114) climate model runs incorporated into the IPCC graphic—a large enough majority to quite accurately reproduce the results.

We do this in our Figure 2. However, we adjust both axes of the graph so that all the data are shown and you can ascertain the details of what is going on.



Figure 2. Distribution of the trend in the global average surface temperature from 108 model runs used by the IPCC (blue) and the observed temperatures as compiled by the U.K.’s Hadley Center (red) for the period 1951-2012 (the model trends are calculated from historical runs with the RCP4.5 results appended after 2006). This presents nearly the same data as Figure 1, Panel (c).

What we find is that 90 of the 108 model runs simulate more global warming from 1951-2012 than actually occurred, while 18 runs simulate less. Put another way, the observations fall at the 16th percentile of model runs (the 50th percentile being the median model trend value).
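The percentile arithmetic here is simple enough to sketch. Below is a minimal Python illustration (not the authors’ code; the model-trend values are placeholders of our own, with only the counts, 18 runs below the observation and 90 above, taken from the text):

```python
# Minimal sketch of the percentile-rank calculation described above.
# Not the authors' code: the model-trend values below are placeholders;
# only the counts (18 runs below the observation, 90 above) come from the post.
def percentile_rank(model_trends, observed):
    """Percent of model-run trends falling below the observed trend."""
    below = sum(1 for t in model_trends if t < observed)
    return 100.0 * below / len(model_trends)

observed_trend = 0.107                      # degC/decade, HadCRUT4 1951-2012 (from the post)
model_trends = [0.09] * 18 + [0.15] * 90    # placeholder values spanning the 108 runs

print(int(percentile_rank(model_trends, observed_trend)))  # prints 16
```

With 18 of 108 runs below the observation, the fraction is 16.7%, which the post reports (truncated) as the 16th percentile.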

So let us ask you this question, on a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

OK. You got your answer?

Our answer is, maybe, “medium.”

After all, there is plenty of room for improvement.

For example, the model range could be much tighter, indicating that the models were in better agreement with one another as to what the simulated trend should be. As it is now, the model range during the period 1951-2012 extends from 0.07°C/decade to 0.21°C/decade (note that the observed trend is 0.107°C/decade). And this is from models which were run largely with observed changes in climate forcings (such as greenhouse gas emissions, aerosol emissions, volcanoes, etc.) and for a period of time (62 years) during which short-term weather variations should all average out. In other words, they are all over the place.
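For reference, decadal trends like the 0.107°C/decade figure are ordinary least-squares slopes fit to annual temperature anomalies and rescaled to decades. A small self-contained sketch (using a synthetic anomaly series, not the actual HadCRUT4 data):

```python
# Least-squares trend in degC/decade, the way figures such as 0.107 degC/decade
# are typically derived. The anomaly series below is synthetic, not HadCRUT4.
def trend_per_decade(years, anomalies):
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
             / sum((x - mean_x) ** 2 for x in years))
    return 10.0 * slope  # degC/year -> degC/decade

years = list(range(1951, 2013))               # the 62-year period from the post
anoms = [0.0107 * (y - 1951) for y in years]  # synthetic series warming linearly

print(round(trend_per_decade(years, anoms), 3))  # prints 0.107
```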

Another way the agreement between model simulations and real-world observations could be improved would be if the observed trend fell closer to the center of the distribution of model projections. For instance, the agreement would be better if, say, 58 model runs produced more warming and the other 50 produced less warming.

What would lower our confidence?

The opposite set of tendencies. The model distribution could be even wider than it is currently, indicating that the models agreed with each other even less than they do now as to how the earth’s surface temperature should evolve in the real world (or that natural variability was very large over the period of trend analysis). Or the observed trend could move further from the center point of the model trend distribution. This would indicate an increased mismatch between observations and models (more similar to that which has taken place over the 1998-2012 period).

In fact, the latter situation is ongoing—that is, the observed trend is moving steadily leftward in the distribution of model simulated trends.

Figure 3 shows at which percentile the observed trend falls for each period of time starting from 1951 and ending each year from 1980 through 2013.


Figure 3. The percentile rank of the observed trend in the global average surface temperature beginning in the year 1951 and ending in the year indicated on the x-axis within the distribution of 108 climate model simulated trends for the same period. The 50th percentile is the median trend simulated by the collection of climate models.

After peaking at the 42nd percentile (still below the median model simulation, which is the 50th percentile) for the period 1951-1998, the observed trend has steadily fallen in percentile rank, currently (for the period 1951-2013) sits at its lowest point yet (the 14th percentile), and is continuing to drop. Clearly, this is looking bad for the models: their level of agreement with the observations is steadily decreasing with time.

In statistical parlance, if the observed trend drops beneath the 2.5th percentile, the evidence would widely be considered strong enough to indicate that the observations were not drawn from the population of model results. In other words, a statistician would describe that situation as one in which the models disagree with the observations with “very high confidence.” Some researchers use a laxer standard and would consider falling below the 5th percentile enough to conclude that the observations are not in agreement with the models. That case could be described as “high confidence” that the models and observations disagree with one another.
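These thresholds amount to a one-sided test on the observed trend’s percentile. A hedged sketch (the helper and labels are our own, following the informal mapping in the text):

```python
# One-sided significance thresholds as described above (our own helper,
# mirroring the post's informal mapping of percentiles to confidence labels).
def agreement_verdict(observed_percentile):
    if observed_percentile < 2.5:
        return "very high confidence of disagreement"
    if observed_percentile < 5.0:
        return "high confidence of disagreement"
    return "no rejection at these thresholds"

# The 1951-2013 observed trend sits at the 14th percentile: no formal rejection yet.
print(agreement_verdict(14))  # prints "no rejection at these thresholds"
```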

So, just how far away are we from either of these situations?

It all depends on how the earth’s average surface temperature evolves in the near future.

We explore three different possibilities (scenarios) between now and the year 2030.

Scenario 1: The earth’s average temperature during each year of the period 2014-2030 remains the same as the average temperature observed during the first 13 years of this century (2001-2013). This scenario represents a continuation of the ongoing “pause” in the rise of global temperatures.

Scenario 2: The earth’s temperature increases year-over-year at a rate equal to the rise observed during the period 1951-2012 (a rate of 0.107°C/decade). This represents a continuation of the observed trend.

Scenario 3: The earth’s temperature increases year-over-year during the period 2014-2030 at a rate equal to that observed during the period 1977-1998, the period often identified as the second warming of the 20th century. The rate of temperature increase during this period was 0.17°C/decade. This represents a scenario in which the temperature rises at the most rapid rate observed during the period often associated with an anthropogenic influence on the climate.
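The three scenarios are straightforward to construct. A minimal sketch (the baseline value is a placeholder, not the actual 2001-2013 mean; the per-year rates are the post’s decadal rates divided by ten):

```python
# Construct the three 2014-2030 scenario extensions described above.
# The baseline anomaly is a placeholder, not the actual 2001-2013 HadCRUT4 mean.
def extend_record(baseline, scenario, years=range(2014, 2031)):
    rates = {1: 0.0,      # Scenario 1: flat, the "pause" continues
             2: 0.0107,   # Scenario 2: 0.107 degC/decade (1951-2012 trend)
             3: 0.0170}   # Scenario 3: 0.17 degC/decade (1977-1998 trend)
    return [baseline + rates[scenario] * i for i, _ in enumerate(years, start=1)]

pause = extend_record(0.45, 1)     # placeholder baseline of 0.45 degC
fastest = extend_record(0.45, 3)
print(len(pause), fastest[-1] > pause[-1])  # prints: 17 True
```

Appending each extension to the 1951-2013 record and recomputing the long-term trend each year is what produces the percentile trajectories shown in Figure 4.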

Figure 4 shows how the percentile rank of the observations evolves under all three scenarios from 2013 through 2030. Under Scenario 1, the observed trend would fall below the 5th percentile of the distribution of model simulations in the year 2018 and beneath the 2.5th percentile in 2023. Under Scenario 2, the years to reach the 5th and 2.5th percentiles are 2019 and 2026, respectively. And under Scenario 3, the observed trend (starting in 1951) would fall beneath the 5th percentile of model simulated trends in the year 2020 and beneath the 2.5th percentile in 2030.


Figure 4. Percent rank of the observed trend within the distribution of model simulations beginning in 1951 and ending at the year indicated on the x-axis under the application of the three scenarios of how the observed global average temperature will evolve between 2014 and 2030. The climate models are run with historical forcing from 1951 through 2006 and the RCP4.5 scenario thereafter.

It is clearly not a good situation for climate models when even a sustained temperature rise equal to the fastest observed (Scenario 3) still leads to complete model failure within two decades.

So let’s review.

1) Examining 108 climate model runs spanning the period from 1951-2012 shows that the model-simulated trends in the global average temperature vary by a factor of three—hardly a high level of agreement as to what should have taken place among models.

2) The observed trend during the period 1951-2012 falls at the 16th percentile of the model distribution, with 18 model runs producing a smaller trend and 90 climate model runs yielding a greater trend. Not particularly strong agreement.

3) The observed trend has been sliding farther and farther away from the model median and towards ever-lower percentiles for the past 15 years. The agreement between the observed trend and the modeled trends is steadily getting worse.

4) Within the next 5 to 15 years, the long-term observed trend (beginning in 1951) will more than likely fall so far below model simulations as to be statistically recognized as not belonging to the modeled population of outcomes. This disagreement between observed trends and model trends would be complete.

So with all this information in hand, we’ll give you a moment to revisit your initial response to this question:

On a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Got your final answer?

OK, let’s compare that to the IPCC’s assessment of the agreement between models and observations.

The IPCC gave it “very high confidence”—the highest level of confidence that they assign.

Do we hear stunned silence?

This in a nutshell sums up the IPCC process. The facts show that the agreement between models and observations is tenuous and steadily eroding and will be statistically unacceptable in about a decade, and yet the IPCC assigns its highest confidence level to the current agreement between models and observations.

If the models are wrong (predicting too much warming), then all the impacts from climate change, and the urgency to “do something” about it, are lessened. The “crisis” dissipates.

This is politically unacceptable.

So the IPCC does not seek to tell the truth, but instead to further the “climate change is bad” narrative. After all, governments around the world have spent a lot of effort in trying to combat climate change based upon previous IPCC assessments. The IPCC can’t very well go back and say, oops, we were wrong, sorry about that! So they continue to perpetuate the myth and lead policymakers astray.

April 16, 2014 6:08 pm

The IPCC was an ill-considered concept. They never allowed for failure.

April 16, 2014 6:10 pm

Why do so many people discuss the science or computer models…without first acknowledging that they are all based on fraudulent temperature records that have been fudged?
Even if they had invented the perfect model…they would never know it…because the models are all tuned to temp histories that have made the past colder and the present warmer…to show a faster rise in global warming…
They cooked their own goose with this one…they will never get an accurate computer model…without first admitting they cooked the temp record.


Mike Bromley the Kurd
April 16, 2014 6:27 pm

one word: “simulation”…in IPCC-speak, this means (A) data, and (B) reality. End.

Pat Frank
April 16, 2014 6:45 pm

Models are tuned to reproduce the 20th century air temperature anomaly trend. It would only be surprising if they didn’t successfully track HadCRUT. The reason they don’t track air temperature since year 2000 or so is because the recent years are out of sample and the air temperature trend has inconveniently changed slope.
When models are tuned to reproduce the trend of years 1880-2000, they need one set of parameters. Since the observed trend has changed slope since year 2000, there is a need for a different set of parameters. The previous set of parameters is no longer adequate.
The embarrassment of the previous trend slow-down, 1940-1974 or so, was fixed by fudging the models with supposed NH aerosols. But aerosols are no longer available. So the modelers are stuck. They haven’t figured out a plausible excuse to re-fudge the models to make them fit the recent data.
This all goes to show that climate models are analogous to engineering models. They’re heavily ad hoc parametrized to fit a certain range of data. Outside that range, they quickly diverge from reality. Inside that range, they can reproduce trends, but they can’t explain the causal physics behind the trends.
Climate models are, in short, useless. I hope to publish a paper showing exactly how useless they are. Meanwhile here’s my recent AGU Meeting poster (2.9 mb pdf) describing the wonderfully predictive utility of CMIP5 climate models.

p@ Dolan
April 16, 2014 6:49 pm

Simple and convincing. Brilliant. And sadly, doomed to be ignored by all the cAGW acolytes out there…

Theo Goodwin
April 16, 2014 6:57 pm

Pat Frank says:
April 16, 2014 at 6:45 pm
Once again, Pat Frank nails it. Can’t wait to read his paper.

April 16, 2014 7:13 pm

An often missed subtlety is that while projections from an IPCC climate model may be erroneous, they are insusceptible to being falsified. It is predictions that are susceptible to being falsified but the IPCC climate models do not make them.

Greg Cavanagh
April 16, 2014 7:23 pm

It sounds as though they are averaging trends over a longer period in order to say the difference in trend, overall, is within 0.02 of each other. They need to say the trend is diverging.
The whole thing reads like statistics trickery 101.
Oh, I see. A “Trick” is a clever thing to do, right?


ferd berple
April 16, 2014 7:34 pm

Greg Cavanagh says:
April 16, 2014 at 7:23 pm
The whole thing reads like statistics trickery 101.
If we have our feet in the freezer and our heads in the oven, the IPCC says we are statistically comfortable.

April 16, 2014 7:42 pm

One can only assume that in using Box 9.2 the IPCC is completely incompetent or is fraudulently misdirecting. Unfortunately the CAGW crowd aren’t interested in what underlies the dogma and the IPCC is not subject to any prosecuting jurisdiction.
No matter. The facts should be shouted loud for any who are interested to hear.

April 16, 2014 7:42 pm

IPCC Titanic.
Do not trust the … “Captain” !
The “Watch Maker” turned “Ship Designer” on 2nd Deck standing by the spiral staircase and looking at the Ship-clock and glancing to his Swiss Chronograph on his wrist … knows !

April 16, 2014 7:48 pm

Thank you Pat and Chip. Thorough and to the point. Pat Frank also has an excellent graphic in his comment above. I appreciate the work you guys do to keep all this straight.
Question: What if temperatures drop over the next ten years?

April 16, 2014 7:55 pm

Numbers don’t lie unless you program them to do so.
Pretty much sums up the net results of the trillions it took to get us here.
Think about it… where exactly are we?

M Seward
April 16, 2014 7:55 pm

This whole model-results-versus-observed-results business will soon be either beyond parody or only vaguely understood via parody, it is so bizarre scientifically. The models can be loaded up with some fudge factor to make them mimic the observed trend for some interval. The 1980-2000 period would be a good option, or you might start a bit earlier. That says nothing about the models’ integrity at all; it just gets them into a position that is convenient for longer-term comparison. In short, a complete artifice.
This notion that the models reflect the climate system is about as credible as driving a car to the top of a hill then letting it roll down and claiming it is a self controlling autonomous vehicle that will drive itself home.

April 16, 2014 8:03 pm

Fig 2: “observed trend” is at the 13th percentile. And falling.

April 16, 2014 8:04 pm

It would be kind of shocking if the models didn’t agree with the cherry-picked time period they were based off of. But the models didn’t exist back then, and ever since the models have existed they have not fit reality even remotely. So basically the models are good at showing past temperature patterns but terrible at predicting future temperature patterns. And since we can just look at the record books to see past trends, what purpose do the models serve? We don’t need a model that predicts the past; we have Google for that. We need a model that predicts the future, and they don’t.

April 16, 2014 8:06 pm

So are the climate change junkies now trying to get away from “if a significant timeframe of, say, 17 years of cooling occurs, then we can make it 20 or 30 years from the small percentage of models predicting close to observation”?
I don’t buy it.

April 16, 2014 8:13 pm

The basis for statistical trickery is the equivocation fallacy, wherein a logically illicit conclusion is drawn from an equivocation; the latter is an argument in which a term changes meaning in the midst of the argument. The result is an argument that looks like a syllogism (an argument whose conclusion follows from its premises) but isn’t one.


April 16, 2014 8:20 pm

It is Pat Michaels not Pat Frank. Credit where credit is due. This is a stunning presentation of the data.

Evan Jones
April 16, 2014 8:26 pm

Also, a dozen freakin’ reasons for the pause? It’s obvious, isn’t it? The PDO flip to negative is causing the pause. Same as it did in the 1950s.
The 1950s “pause” was, of course, incorrectly ascribed to aerosols. An excusable mistake: When they were looking at trends in the 1990s, they were smack in the middle of a positive PDO — but PDO was not even described by science until 1996.
The (mild) forcing has applied continuously from 1950 — just at about the rate ol’ Arrhenius predicted it would (+1.1C forcing per CO2 doubling). I wonder what Henny would think about all this if he were around to see it.
So glad to have cleared that up!

Joe Pomykala
April 16, 2014 8:38 pm

Does the observed “trend” looked at by the IPCC in actual observed temperatures compared to their bad forecasts include 1.) the backwards government “adjustment” lowering prior observed temperatures? 2.) heat island effects?, 3.) the fact that any “trend” in temperature may be statistically insignificant and just natural variation?
If going back to the decade of the 1950s for the IPCC to start the data-and-forecast comparison (it does not look good with just the last decade and a half), was that not a relatively cold decade? Why not start in the 1930s or 1940s with warmer observed data and compare to the “forecasts”?
That $29 billion a year the US now foolishly spends on propaganda and preparedness for forecasted global warming (which seems not to be showing up), now “climatic change,” and the supposed melting global ice caps that will flood major cities and low countries despite global ice currently being above trend (or in natural variation above normal, and not an anomaly): do you think that money could bias IPCC forecasts upward, since funding would dry up for climatic-change alarmists if they could not manufacture forecasts for alarms and more money? It is no surprise at all that IPCC “forecasts” are consistently above observations; if they were accurate there would be no money to pay their salaries, and now they are also going back to adjust the observations to create a trend.
Well, on the bright side, at least the White House is not following the advice of Obama’s top science adviser John Holdren who wanted to do mass sterilization of the population by poisoning the water supply to prevent population growth which was also assumed by alarmists “forecasts” to be leading to imminent disaster.

April 16, 2014 8:42 pm

We are not dealing with stupid people. Many of the IPCC scientists are well trained and fully aware of what is happening. I’m sure they know that they will be hung by their own data tampering, and that the models cannot work unless warming begins again- and soon. I used to play with my statistics students by telling them to use a set of data for various analyses. Then I would have them “fudge” 30% of the data and rerun the analyses. A lot of eyes were opened. The only way one could get back to the “truth” was to reinstate the original data. The IPCC, NASA, NOAA and all the other “manipulators” cannot “politically” go back to the data they have altered, so the models are hung on linear increases, and the real climate, historically, hasn’t followed that pattern. This is why we see all the doubling down on fear – they know that time can kill the whole ruse. Political action NOW signifies their fear.

April 16, 2014 8:44 pm

I expect to see a lot more nonsense about volcanoes and Chinese aerosols in the near future – it is their only excuse for failure.
BTW is it possible to tweak say carbon sensitivity input to produce a model run which provides a good median agreement with observations? That would be a fascinating calculation :-). Perhaps you could use Willis’ lagged forcing approximation. http://wattsupwiththat.com/2013/06/03/climate-sensitivity-deconstructed/

April 16, 2014 8:54 pm

Fit to adjusted temperature data and based on aerosol values made up as a fudging factor, those models would not reproduce the magnitude of the large ~ 0.5 degree Celsius cooling which occurred, over a three decade period starting in the late 1930s, in non-adjusted original NH temperature history (the cause of the global cooling scare of the 1960s-1970s, with National Geographic then calling it “nearly halfway back to the chill of the Little Ice Age”), nor the likely future (my usual link providing plot & reference).

April 16, 2014 9:15 pm

Results of 108-114 model runs were compared to actual temperatures. The models give a wider spread of results (0.4°C) for the shorter time periods (Fig. 1 a and b) and a narrower spread for the longer time period. This seems intuitively wrong if the models have any capability to match reality.
The models don’t work, so large numbers of them are used to create “reality.” How many wrongs do you have to use to make a right? The ensemble doesn’t do too well at matching reality; it’s total gibberish. How many billion dollars were poured down this rat hole? And they give advanced degrees and nice tenured professorships for this?

Joel O'Bryan
April 16, 2014 9:29 pm

This analysis is devastating to the “CO2 is evil” CAGW believers.
Ayatollah Al “Jezeera” Gore will issue a Fatwah against this blasphemy any day now.



Joel O'Bryan
April 16, 2014 9:37 pm

@Eric Worrall, “I expect to see more nonsense about volcanos and Chinese aerosols… as their only excuse.”
you overlook their more likely alibi, “the solar minimum ate my CAGW project.” So they will also say ” Feed me anyway with research grants, apply carbon taxes, and decree death to coal since ole’ sol may become active again anyday now.”

April 16, 2014 9:40 pm

Theo Goodwin says:
April 16, 2014 at 6:57 pm
Pat Frank says:
April 16, 2014 at 6:45 pm
Once again, Pat Frank nails it. Can’t wait to read his paper.
bernie1815 says:
April 16, 2014 at 8:20 pm
It is Pat Michaels not Pat Frank. Credit where credit is due. This is a stunning presentation of the data.

Actually, it’s Pat Frank. Theo Goodwin referred explicitly to his poster and forthcoming paper described in the comment here:

April 16, 2014 9:40 pm

Joel O’Bryan
you overlook their more likely alibi, “the solar minimum ate my CAGW project.” So they will also say ” Feed me anyway with research grants, apply carbon taxes, and decree death to coal since ole’ sol may become active again anyday now.”
They might try that at the very end – but if solar activity is an important influence on climate, then it opens Pandora’s box for them – how much of 20th century warming was due to solar activity? So this would be an utter desperation move.

Joel O'Bryan
April 16, 2014 9:45 pm

We see today the Obama administration is willing to fudge the Census Bureau data on healthcare coverage data collection to their favor. They’ve already done shady things with Bureau of Labor Stats data releases. No doubt they will infect NOAA and NASA data with this deceit as well,… if they think they can get away with it.

Joel O'Bryan
April 16, 2014 9:50 pm

@Eric W.
I completely agree. But then most non-experts wouldn’t get that technical point about past assumptions on TSI non-involvement with their original models of forcings.

April 16, 2014 9:56 pm

time for the alarmists to bypass democracy:
16 April: NYT Dot Earth: Andrew C. Revkin: Psychology: A Risk Analyst Explains Why Climate Change Risk Misperception Doesn’t Necessarily Matter
David Ropeik, the risk communication consultant and author of “How Risky is it, Really? Why Our Fears Don’t Always Match the Facts,” had some concerns about the way I characterized our “inconvenient minds” in my TEDx talk in Portland, Ore., over the weekend.
He’s right, of course. The 19-minute presentation on how, with sustained work, we’re a perfect fit for a complicated, consequential century was necessarily oversimplified. Here’s his “Your Dot” piece filling in many blanks, and noting that no one should presume better climate change communication is the path to action on global warming…
DAVID ROPEIK: But this brings me to the second and more profound issue. Most climate change communication, like Showtime’s Years of Living Dangerously and the American Academy for the Advancement of Science’s What We Know campaign, websites like Climate Central and Real Climate, or academic programs like Yale’s Project on Climate Change Communication and George Mason University’s Center for Climate Change Communication, is predicated on the belief that if people know the facts about climate change and finally understand just how serious the problem is, they will surely raise their voices and demand that our governments and business leaders DO SOMETHING!
***But I’m just not sure how much public concern matters. I don’t know how much we need to care how much people care. Bear in mind this heresy comes from someone who has worked directly on climate change communication in many ways, and will continue to. (I recently had the opportunity to help write the FAQs of IPCC Working Group 2, presenting their findings in language non-scientists can comprehend…
We’d have to feel we were at war — bullets-flying, bombs-dropping, buildings-burning and body-bags real, live, NOW “I am in Danger” war — before public concern about climate change would grow strong enough to drive those sorts of actions. The psychology of risk perception warns against the naive hope that we can ever achieve that level of concern with effective communication, but even if it is possible, we are just not going to get there in time, a point made dramatically by the latest IPCC Working Group 3 report. They recommend to policy makers that time is very short before we lock the system into a future likely to produce much more disastrous damage.
***Those policy makers, our leaders, are going to have to act, even without a huge public mandate. On Monday, Robert Stavins, director of Harvard’s Environmental Economics Program and a co-author of the IPCC WG 3 report, said this on the OnPoint radio program:
“This bottom up demand which normally we always want to have and rely on in a representative democracy, is in my view unlikely to work in the case of climate change policy as it has for other environmental problems…. It’s going to take enlightened leadership, leaders that take the lead.”
And they are. The Obama Administration has put a price on carbon by regulating emissions from power plants. Germany’s Energiewende program is trying, not without problems, to convert Europe’s biggest economy to renewable energy. China and India are pouring billions into nuclear energy. Nations and U.S. states and communities are creating feed-in tariffs and incentives to encourage production of renewable energy. (Hence the solar panels I just put on my roof!)…

April 16, 2014 9:59 pm

It would have been better had the IPCC sent its so-called experts on courses in the theory of science, to learn what they failed to absorb when they attended such courses once upon a time…

April 16, 2014 10:11 pm

Thanks for that, but just wait till Ben Santer sees you at a scientific meeting.

April 16, 2014 10:24 pm

“Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade…”

Is the above statement true? If the observed trend is 0.107°C/decade, and it agrees with the models to within 0.02°C/decade, then the model ensemble mean should be no more than 0.127. But in Figure 2 the mean appears to be greater than 0.143, which would make the difference almost double the claimed 0.02°C per decade. Does anyone know what the CMIP5 ensemble mean actually is?
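The commenter’s arithmetic is easy to check in a couple of lines of Python. Note that both inputs below are the commenter’s own values (the model mean is eyeballed from Figure 2), not official figures:

```python
# Values as given in the comment above (the model mean is eyeballed
# from Figure 2 of the post, so treat it as approximate)
observed = 0.107      # HadCRUT4 trend, deg C/decade, 1951-2012
model_mean = 0.143    # apparent CMIP5 ensemble-mean trend, deg C/decade
gap = model_mean - observed
print(round(gap, 3))  # 0.036 deg C/decade, nearly double the claimed 0.02
```

If the eyeballed mean is right, the gap is indeed close to twice the 0.02°C/decade the IPCC text claims.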

Peter Miller
April 16, 2014 10:40 pm

And let’s not forget our gratitude for the satellites which measure global temperature, for they have kept the statistics reasonably honest for the past 35 years. In the record prior to the late 1970s, the manipulation/torture/homogenisation of temperature data has run riot, especially in the GISS numbers.
Without the satellites acting as the police, the IPCC models would undoubtedly have been shown to be ‘correct’.

April 16, 2014 11:29 pm

19 April: The Economist: Another week, another report
Options for limiting climate change are narrowing
THE Intergovernmental Panel on Climate Change (IPCC), a gathering of scientists who advise governments, describes itself as “policy-relevant and yet policy-neutral”. Its latest report, the third in six months, ignores that fine distinction. Pressure from governments forced it to strip out of its deliberations a table showing the link between greenhouse gases and national income, presumably because this made clear that middle-income countries such as China are the biggest contributors to new emissions. It also got rid of references to historical contributions, which show that rich countries bear a disproportionate responsibility. That seems more like policy-based evidence than evidence-based policy and bodes ill for talks on a new climate-change treaty, planned to take place in Paris next year…
The IPCC still thinks it might be possible to hit the emissions target by tripling, to 80%, the share of low-carbon energy sources, such as solar, wind and nuclear power, used in electricity generation. It reckons this would require investment in such energy to go up by $147 billion a year until 2030 (and for investment in conventional carbon-producing power generation to be cut by $30 billion a year). In total, the panel says, the world could keep carbon concentrations to the requisite level by actions that would reduce annual economic growth by a mere 0.06 percentage points in 2100.
These numbers look preposterous. Germany and Spain have gone further than most in using public subsidies to boost the share of renewable energy (though to nothing like 80%) and their bills have been enormous: 0.6% of GDP a year in Germany and 0.8% in Spain…
Moreover, the assumptions used to calculate long-term costs in the models are, as Robert Pindyck of the National Bureau of Economic Research, in Cambridge, Massachusetts, put it, “completely made up”. In such circumstances, estimates of the costs and benefits of climate change in 2100 are next to useless. Of the IPCC’s three recent reports, the first two (on the natural science and on adapting to global warming) were valuable. This one isn’t.

April 16, 2014 11:56 pm

Look, if you don’t realise we are on a “spiral to suicide” then you just aren’t being spiritual enough, need some re-education out of the demonic darkness of your egocentric ways, and must accept that any environmental program also has to include social and economic equity. Don’t waste time trying to unravel the settled science by quibbling about minor matters. Now is the time to act.
Ecotheology, the study of religion and ecology, “qualifies as a new field” in academia.
[I remember my geomorphology professor telling me in 1982 that it was already ‘too late’ to save the world, lol]

Brian H
April 17, 2014 12:38 am

So, according to the models, Mankind’s CO2 emissions stopped growing completely in about 1980.
Oh, wait …

Joe Born
April 17, 2014 12:42 am

Excellent post.
Note also that rather than use a line to depict HadCRUT4, as Michaels & Knappenberger do, the IPCC uses bars, each of which spans two Box 9.2 bins and would span about ten bins in Michaels & Knappenberger’s Fig. 2.
Note also that its bar width is the same in all three Box 9.2 plots even though the last plot’s trend covers 62 years, while each of the first two covers only 15. Are we to infer that the uncertainty in the HadCrut4 trend is the same for 62 years as for 15?

Joe Born
April 17, 2014 12:48 am

Does anyone remember offhand where AR5 cites the models’ match to “climatology” as their basis for according the model-exhibited climate sensitivities at least as much weight as the (much lower) observation-based values?
I guess this is what they mean by matching climatology.

Santa Baby
April 17, 2014 1:25 am

“The IPCC was an ill considered concept. They never allowed for failure.”
The basis for the IPCC is the politically established UNFCCC.
The real ill-considered concept is the UNFCCC, and its real concern is less climate and more international Marxism. Remember, we have “had to act now” since the late 1980s: Gore, Gro, etc. A climate treaty creating global government and “getting rid of kapitalism,” as Chavez put it in Copenhagen in 2009.

April 17, 2014 1:31 am

All attempts by WUWT and similar sceptical sites and individuals appear to make no difference to the generally extreme warmist attitude of the media, the politicians, and the army of scientists sponsored by taxpayers’ money.
Reasoned arguments against the most alarming forecasts, using available data, appear to make no impact because there is so much money to be made from embracing the alarmists’ forecasts.
Perhaps the only way to force a return to “good” science is to use money, in the form of, basically, gambling, to counteract this “green” money. If most of the world is betting on a hot, meteorologically turbulent world, but the hard science shows that the future climate will actually be more or less the same as it has been for the last 1000 years, is there not some form of “futures trading” exchange which would handsomely reward the sceptics, because they called it right, and punish the extremists, because they did not?
Surely the US, which invented hedge funds, derivatives and all the other arcane trading mechanisms, could come up with this sort of exchange (apart from Las Vegas). The beauty would be that the more extreme the alarmists and the IPCC become, the greater the money to be made when they are shown to be wrong.
Eventually, as they saw the wealth accruing to the sceptics, it would dawn on the great and the good (Obama, Cameron, the BBC and the Editor of Nature) that the IPCC and much of the state-sponsored science is simply wrong, unless of course access to the raw data on temperature and climate events is withheld from the public.

April 17, 2014 1:33 am

Very cool analysis. As ever, it needs to reach the mainstream media. Have you mailed it to the likes of Matt Ridley, who might get a piece on this accepted by the serious papers?

April 17, 2014 1:36 am

Very nice piece.
Hiding the Decline lives on….

April 17, 2014 1:39 am

R2Dtoo says:
April 16, 2014 at 8:42 pm
That’s a terrific post, and bang on.
Corrupting the original data to secure today’s required result ‘locks in’ future divergence, model failure and, ultimately, complete unsustainability of the theory.
Tick tock….

April 17, 2014 1:39 am

If I have lots of “models” and none of them work, do I get a better model by examining the distribution of their output? Perhaps if they make a couple more models to add to the distribution they can get a more realistic-looking graph? Or maybe they should reject some of the models?
Perhaps they should apply a time-varying weight to each model and choose the weight vectors such that the weighted mean of the models lines up with the observations? Each month they could adjust the weights to maintain the illusion that their models are not junk.

April 17, 2014 1:41 am

Terry Oldberg says:
April 16, 2014 at 7:13 pm
“An often missed subtlety is that while projections from an IPCC climate model may be erroneous, they are insusceptible to being falsified. It is predictions that are susceptible to being falsified but the IPCC climate models do not make them.”
A discipline that makes no predictions is not science.
And if the IPCC makes no predictions, why do politicians the world over impose cow-flatulence taxes and prop up unviable energy solutions like wind and solar with taxpayer money?
If we can’t call it science we need another word. I propose “cult”.

Reply to  DirkH
April 17, 2014 9:01 am

You have drawn the correct conclusion from the lack of falsifiability of the models. It is also true that the models convey no information to a policy maker about the outcomes of his or her policy decisions, and they are thus worthless for the purpose of making policy.

April 17, 2014 1:45 am

pat says:
April 16, 2014 at 9:56 pm
“[ROPEIK:] Germany’s Energiewende program is trying, not without problems, to convert Europe’s biggest economy to renewable energy. China and India are pouring billions into nuclear energy.”
Ropeik is right. And in a few years Germany will have destroyed its energy security, and China and India will retake their classic roles of dominant empires.

April 17, 2014 1:53 am

So the IPCC does not seek to tell the truth, but instead to further the “climate change is bad” narrative. After all, governments around the world have spent a lot of effort in trying to combat climate change based upon previous IPCC assessments. The IPCC can’t very well go back and say, oops, we were wrong, sorry about that! So they continue to perpetuate the myth and lead policymakers astray.

This is the dilemma the IPCC finds itself in, and it is a dilemma of its own making. (Though, strictly speaking, policymakers cannot be led astray, because the IPCC produces the results they want; the policymakers find themselves in a dilemma too.)
As long as there is no resumption of warming for a decade or more, the IPCC goes from dust to dust: a zombie that tells us with 97% certainty that he is right.

April 17, 2014 1:54 am

After posting I want to amend the last sentence to read:

A zombie that tells us with 97% certainty that he is ALIVE!

April 17, 2014 2:11 am

evanmjones says:
April 16, 2014 at 8:26 pm
Also, a dozen freakin’ reasons for the pause? It’s obvious, isn’t it? The PDO flip to negative is causing the pause, same as it did in the 1950s.
The 1950s “pause” was, of course, incorrectly ascribed to aerosols. An excusable mistake: when they were looking at trends in the 1990s, they were smack in the middle of a positive PDO, but the PDO was not even described by science until 1996…

Thanks for that reminder. Here is the real reason why they are busted, and this was AFTER the IPCC’s FIRST report, published in 1990. By that time they were hooked and in a corner.

A Pacific interdecadal climate oscillation with impacts on salmon production
Nathan J. Mantua, Steven R. Hare, Yuan Zhang, John M. Wallace, and Robert C. Francis
Bulletin of the American Meteorological Society, Vol. 78, pp. 1069–1079, June 1997
Abstract: Evidence gleaned from the instrumental record of climate data identifies a robust, recurring pattern of ocean–atmosphere climate variability centered over the mid-latitude Pacific basin. Over the past century, the amplitude of this climate pattern has varied irregularly at interannual-to-interdecadal time scales. There is evidence of reversals in the prevailing polarity of the oscillation occurring around 1925, 1947, and 1977; the last two reversals correspond with dramatic shifts in salmon production regimes in the North Pacific Ocean. This climate pattern also affects coastal sea and continental surface air temperatures, as well as streamflow in major west coast river systems, from Alaska to California.

April 17, 2014 2:47 am

Roger, gregole, Theo:
My apologies to all. My apologies to Pat Frank as well.

Chris Wright
April 17, 2014 3:00 am

The authors quote the IPCC as stating:
“The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.”
“Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade”
0.02 degrees C per decade? What utter nonsense. We can’t even measure the climate to that kind of precision. This claim alone shows that it has nothing to do with science.
These statements don’t mention a rather important consideration: when were the models run? If they had been run in 1950, that would be extraordinarily impressive. Most likely these are recent models that may even have been run later than 2012. In other words, they are mostly hindcasts and not forecasts. Any fool can predict what’s already happened.
It does seem that climate model hindcasts are remarkably accurate, while their actual forecasts fail miserably. There’s only one rational explanation for this: they have been adjusted to match past climate. There are huge numbers of parameters that can be turned up and down as needed. Willis mentioned an intriguing possibility: that these parameters evolve over successive runs, rather like Darwinian natural selection. Parameter changes that improve the hindcast will be kept, while parameter changes that make the hindcast worse will be – shall we say – changed.
If the IPCC is claiming their computer models are accurate and reliable on the basis of their hindcasts then this alone is close to fraudulent.
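Willis’s “Darwinian tuning” idea can be illustrated with a deliberately toy sketch. Nothing here is a real GCM: the secular trend, the cycle amplitude, and the 60-year period are all invented for illustration. A single free “sensitivity” parameter is fitted to a hindcast window that happens to sit on the rising phase of an unmodelled natural cycle; the tuned model then looks fine in-sample and overshoots badly out of sample.

```python
import math

def observed(t):
    # Invented "truth": a 0.01 deg/yr secular trend plus an unmodelled
    # 60-year natural oscillation (all numbers are made up)
    return 0.01 * t + 0.15 * math.sin(2 * math.pi * t / 60)

hindcast = range(0, 30)    # tuning window: rising phase of the cycle
forecast = range(30, 60)   # out-of-sample: falling phase of the cycle

# "Model": a bare trend through the origin, with its slope tuned by
# least squares on the hindcast window only
slope = (sum(t * observed(t) for t in hindcast)
         / sum(t * t for t in hindcast))

hind_err = max(abs(slope * t - observed(t)) for t in hindcast)
fore_err = max(abs(slope * t - observed(t)) for t in forecast)
# The tuned slope overshoots the true 0.01 secular trend, and the
# out-of-sample error is several times the in-sample error
print(round(slope, 4), round(hind_err, 2), round(fore_err, 2))
```

The point of the toy is only that tuning to a hindcast absorbs natural variability into the fitted parameter, which is exactly why good hindcast skill says little about forecast skill.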

April 17, 2014 3:10 am

The alarmists are probably saved by the upcoming El Niño, which will tweak the trend upward for a short while.

April 17, 2014 3:34 am

May I inject a bit of humour here? Read this article and concentrate hard when you read the last paragraph. This ‘educated’ cretin will, no doubt, be writing ‘Green’ articles at some stage. I hope they are as good as this.

DC Cowboy
April 17, 2014 3:39 am

It’s worse than we thought. I thought that the model runs going back to 1950 were ‘tuned’ by adjusting the ‘parameters’ that represent physical processes the models can’t simulate, in order to make them ‘match’ the historical temperature trend. The above gives the impression that the model runs were all set up with a ‘start time’ of 1951 and allowed to run from there, and I don’t think that is the case.
Hans, I fail to understand the glee that some alarmist media types are displaying about an upcoming El Niño. It’s as if they equate a rise in temperature with ‘it’s CO2 what done it’, when El Niño has nothing to do with CO2-induced temperature rise; it’s a natural cycle. If they are hanging their hats on the idea that a ‘super’ El Niño validates Dr. Trenberth’s ‘the heat is hiding in the deep oceans’ theory, then they are headed for disappointment again, as Dr. Trenberth was referring to heat in the oceans at 2,000 meters and below, which again has nothing to do with Kelvin-wave El Niño formation or intensity (mostly 0–300 meters).

Joe Born
April 17, 2014 3:53 am

Since I found Fig. 4 to be of particular interest, let me suggest tightening the scenario descriptions if you use this presentation again.
My initial reading was that the respective rate at which the global temperature changes in each scenario is constant, and I’m still not sure that my reading was incorrect. But the fact that the top, Scenario 3 curve first converges with, then diverges from, and then again approaches the Scenario 2 curve suggests instead that the respective changes are not constant, that each year’s change equals that of the corresponding year in the respective paradigm record interval.
In the latter case, perhaps you’ll consider revising the scenario descriptions to make that clear.

April 17, 2014 4:08 am

The climate models will never, ever be right. Game over.
My advice to them is to have two more climate models that show static temperatures for another few years, and one that shows cooling, to cover all the bases.
After all, give enough monkeys typewriters…

Joe Pomykala
April 17, 2014 4:22 am

Thank you, Pat and Paul, at Cato.
Great of you all to analyze and expose this latest in the ongoing series of fraudulent IPCC climate data; good work.
Comparing the IPCC “forecast” of biased BS to reality (oops, they are way off), what will future IPCC reports/excuses mention?
1.) The NASA and NOAA temperature data sets must be adjusted down for previous decades to bolster claims of global warming, for the good of a society that needs to be scared into taking action.
2.) Slowing solar-cycle activity, leading to less radiation hitting Earth, has been counteracting global warming; it is the Maunder Minimum.
3.) It is the Milankovitch cycles and the coming next ice age counteracting global warming (the end of a well-documented interglacial warm period).
4.) The lag of CO2 behind temperature changes we discovered in the Vostok ice-core data is just an anomaly.
5.) We give up; the temperature has not changed, or possibly it is getting cooler, with massive food shortages and droughts possible and more extremes forecast (cooling or warming does not matter, it is “climate change” now).
6.) Better funding for the global cooling scare forecast by many climatologists.
7.) To counteract global cooling and climatic change, the UN should get countries to do something quickly, like subsidize fossil fuels and coal burning.
8.) Last updated IPCC report recorded for humans before the “forecasted” imminent collapse; we need to melt the ice caps with nuclear bombs to counteract climatic change and global cooling. https://www.youtube.com/watch?v=DsdWTBNyvX0
9.) We were wrong again with the last dozen IPCC reports and should now follow the observed data, or politically “adjust” the observed data, declare global warming back in season, and ask for more funding for propaganda to scare people with BS forecasts, so I have enough income and future funding to buy another SUV, go skiing on the quickly melting ski slopes, or play hockey with my fake hockey stick on the Mann-made graph before it melts.

Jim Cripwell
April 17, 2014 4:56 am

It is all very well writing this sort of thing on WUWT, but are the right people reading it? Who are “the right people”? The people who can blow the lid off the whole CAGW scam.
The APS is currently reviewing its statement on CAGW. There is a committee of six senior members of the APS commissioned to write a report that will, hopefully, be the basis for a new statement by the APS. I can only remember the name of one of these people: Dr. Susan Seestrom. Can measures be taken to see that these six people are made aware that this sort of analysis exists?

Bruce Cobb
April 17, 2014 5:02 am

“Everyone knows that the pace of global warming (as represented by the rise in the earth’s average surface temperature) has slowed during the past decade and a half.”
No, what we know is that global warming has stopped, for a period now approaching 18 years. This fact alone means the models are complete bogoid junk; their predictive value is zero. But wait, there’s more. The temperature record the junk models are based on is contaminated, and biased towards warming, possibly by as much as 50%. But wait, there’s more. Starting a temperature record in 1951, during a cool period, is a cherry-pick tailor-made to show warming. That’s three strikes against the GCMs, and YEROUTTATHERE!!!

Tom Andersen
April 17, 2014 5:17 am

This quantifies what I have been saying for a while: in order to get to the IPCC doomsday scenario of 3°C we are going to need warming like never before seen. A 2.5°C rise over 8.5 decades means nearly 0.3°C per decade. Every decade.

April 17, 2014 5:20 am

Even under a Scenario 4, in which temperatures accelerate beyond any level they have shown in the past 70 years, the confidence is still very low. The models are not predicting that; nor have they done any good at predicting the present.
Low confidence? More like no confidence. They put all their eggs into the CO2 basket, and it sprang a leak.

April 17, 2014 5:49 am

It’s an “Intergovernmental Panel”. That’s as far as anyone needs to go with that.

Richard M
April 17, 2014 6:30 am

I wonder how long it will take for the models to fall below the thresholds if the trend since 2005 continues?
Since we are now at a solar maximum, if anything, the trend will drop even more in the future (once the El Niño – La Niña events are over).

April 17, 2014 6:48 am

I got to this from Yahoo’s page. That in itself is a monumental and stunning shift.

April 17, 2014 6:52 am

Nice article.
How much of the model data shown is forecast … and how much is hindcast?
I suspect a lot of the earlier numbers are data fitting?

April 17, 2014 6:52 am

Every time I see “CMIP” models mentioned, I imagine a room full of chimps on keyboards typing model code.

April 17, 2014 6:58 am

The proper application of the Precautionary Principle (oxymoron?) would therefore mean that, since we might be wrong about CO2, we should stop trying to eliminate the possible cause and instead mitigate the expected effects. That way, if some other mechanism is found to be the cause of CAGW, we will still be prepared. /sarc

April 17, 2014 7:15 am

This suggests with a high confidence that 97.3% (111/114 models) of climate scientists were grossly overpaid grant money for their “work.”

Rod Everson
April 17, 2014 7:47 am

evanmjones says:
April 16, 2014 at 8:18 pm
(a rate of 0.0107°C/decade).
Typo here. You mean 0.107, of course?

Mr. Jones points out what appears to be a typo; it should be corrected if that is what it is. It appears as the assumption for Scenario 2 near the end of the report. As stated, it is little different from the zero assumption of Scenario 1, when it apparently should be roughly midway between the assumptions for Scenarios 1 and 3.

April 17, 2014 7:55 am

In freshman physics lab, we teach students that for a collection of N independent determinations of some quantity, with only random error, the uncertainty is given by the standard deviation of the mean: SDOM = SD/sqrt(N).
This provides another trivially simple but illuminating perspective on Figure 2 here. Assume that the 108 model runs constitute independent predictions, with exclusively random errors. By eyeball, looking at Fig. 2, the mean of the runs is about 0.145 and the standard deviation is at most about 0.05. So the standard deviation of the mean (the standard deviation divided by the square root of the number N of independent determinations) is at most about 0.005. So the observations should be within about 0.005 of 0.145. But they aren’t. Not even close. The observations are instead about *seven* 0.005’s away.
In freshman physics lab, it’s more common to have a single unambiguous theoretical prediction, and then measure the thing N times independently. Here the roles of prediction and observation are reversed, but the statistics don’t care about that. If my students compare 0.11 to 0.145 in a situation where the SDOM is 0.005 and say there is excellent agreement, they lose points. To get full credit they should instead conclude that either the prediction is simply wrong, or the uncertainty has been grossly underestimated (probably because the assumption of “exclusively random errors” is wrong, i.e., probably because there are significant systematic errors in their measurements — or, here, in the models).
This isn’t rocket science.
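The freshman-lab arithmetic above is trivial to reproduce. All inputs are the commenter’s eyeballed values from Fig. 2, so the result is only as good as the eyeballing:

```python
import math

# Eyeballed from Fig. 2: 108 runs, ensemble mean ~0.145 deg C/decade,
# spread (SD) at most ~0.05, observed trend ~0.11 deg C/decade
n, mean, sd, obs = 108, 0.145, 0.05, 0.11

sdom = sd / math.sqrt(n)        # standard deviation of the mean
sigmas = (mean - obs) / sdom    # discrepancy measured in SDOM units
print(round(sdom, 4), round(sigmas, 1))  # ~0.0048 and ~7.3
```

Under the (generous) assumption of purely random, independent model errors, the observed trend sits about seven standard errors from the ensemble mean, which is the commenter’s point.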

Rod Everson
April 17, 2014 8:02 am

I’m surprised at the lack of comment (or questions) regarding the end result of the analysis, as presented in the last figure showing all three of the proposed scenarios falling below the 2.5th-percentile mark over time, including the one that assumes a resumption of the 0.17°C-per-decade increase of the ’80s and ’90s.
Surprised, because the result surprised me. When the scenarios were presented, I assumed that applying Scenario 3 would eventually make the models look more reasonable. Instead, the falling away that one would expect to occur under Scenario 1 occurs in all three scenarios, just at a somewhat slower pace.
Question: Is this because the models are, on average, forecasting an even greater increase than 0.17°C per decade over the next decade or so? That would seem to have to be the case, but the numbers in the paper’s figure 2 wouldn’t seem to support that.
Another question: Why the sudden divergence between the results for Scenarios 2 and 3 in the year 2019, only to converge again a year later? This would seem to be nearly impossible given the assumptions for the two scenarios. (Either that, or I don’t understand the assumptions; I’m assuming a steady annual increase in temps at the rates specified, nothing more.) Similarly, why does a similar divergence suddenly appear in 2024/25? This sort of result makes me skeptical of the underlying analysis, so it would be helpful to have it explained.
If the results presented are indeed accurate, however, the next few years should be quite interesting.

April 17, 2014 8:04 am

evanmjones (April 16, 2014 at 8:18 pm),
Yeah, typo. We gave the correct value earlier in the post. It should be 0.107 as you point out.
Sorry about that,

April 17, 2014 8:09 am

Joe Born (April 17, 2014 at 3:53 am):
Your initial reading is correct. In each scenario, a constant increment is added to the previous year’s value. The curves in Figure 4 jump around a little because of the distribution of the model pdf (which is not smooth).

April 17, 2014 8:11 am

markx (April 17, 2014 at 6:52 am);
The model data are forecasts post 2006. The forecast scenario is the RCP4.5 (which is generous to the models in this comparison).

Robert W Turner
April 17, 2014 8:12 am

IPCC graphs are becoming as ridiculous as their statements. Learning how to adjust the axis on your charts to properly represent the data is elementary. IPCC scientists should start over in grade one, Billy Madison style.

April 17, 2014 8:40 am

Rod Everson (April 17, 2014 at 8:02 am):
Good questions!
The model trend continues to increase. For the period 1951-2030, the mean model trend is 0.175°C/dec. The observed trend is slow to respond to new data points, even when added at an incremental rate of 0.017°C/yr between now and 2030 (our scenario 3). Under scenario 3, the observed trend becomes 0.117°C/dec for the 1951-2030 period. It will take a much longer time before Scenario 3 starts to catch up with a model trend of 0.175, but in the meantime, the models are continuing to run away. So what is required to bring the observations back in line with model expectations is a fairly prolonged observed warming rate in excess of anything yet observed.
The details of Figure 4 (the behavior of the scenarios) is largely dependent on the details of the distribution of model runs—and where the observed trend falls within it (i.e., at what percentile). Since the model pdf is not smooth, the lines in Figure 4 jump around a bit.
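Chip’s scenario-3 numbers can be roughly reproduced with a short ordinary-least-squares sketch. It approximates the observed 1951–2013 record as a straight 0.107°C/decade line, which the real HadCRUT4 series is not, so the result lands near, rather than exactly on, the quoted 0.117°C/decade:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

years = list(range(1951, 2031))
temps = []
for y in years:
    if y <= 2013:
        temps.append(0.0107 * (y - 1951))   # idealized observed era
    else:
        temps.append(temps[-1] + 0.017)     # Scenario 3: +0.017 deg C/yr
trend_per_decade = 10 * ols_slope(years, temps)
print(round(trend_per_decade, 3))  # 0.115, close to the quoted 0.117
```

The small difference from 0.117 comes entirely from replacing the wiggly observed series with a straight line; the qualitative conclusion, that even Scenario 3 leaves the 1951–2030 trend well below the model mean of 0.175°C/decade, is unaffected.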

April 17, 2014 8:59 am


We explore three different possibilities (scenarios) between now and the year 2030.

I think that evaluation of a fourth scenario would be very instructive:
4. How fast would the GAST have to rise between now and 2030 for the observed 1951-2030 trend to end up in the center of the distribution of the 108 model runs?

April 17, 2014 9:10 am

JJ says (April 17, 2014 at 8:59 am):
The observed temperatures would have to rise at a rate of 0.64°C/dec between 2014 and 2030 to reach a trend of 0.175°C/dec (the average model trend) for the period 1951-2030.
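The 0.64°C/decade figure can be cross-checked with the same idealized setup, again approximating the observed record as a straight 0.107°C/decade line through 2013 (the real series wiggles, so only rough agreement is expected):

```python
def trend_1951_2030(rate_per_year):
    """OLS trend (deg C/yr) of a series rising 0.0107 deg C/yr to 2013,
    then by rate_per_year each year through 2030 (idealized, not HadCRUT4)."""
    years = list(range(1951, 2031))
    temps = [0.0107 * (y - 1951) for y in years if y <= 2013]
    for _ in range(2014, 2031):
        temps.append(temps[-1] + rate_per_year)
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(temps) / n
    num = sum((x - xbar) * (t - ybar) for x, t in zip(years, temps))
    den = sum((x - xbar) ** 2 for x in years)
    return num / den

# The trend is affine in the rate, so two evaluations pin down the answer
s0, s1 = trend_1951_2030(0.0), trend_1951_2030(1.0)
required = (0.0175 - s0) / (s1 - s0)   # deg C/yr for a 0.175 deg C/dec trend
print(round(10 * required, 2))         # ~0.66 deg C/decade
```

The idealized series gives about 0.66°C/decade, in the same ballpark as the 0.64°C/decade Chip computes from the actual data; either way, the required warming rate is roughly six times anything yet observed.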

April 17, 2014 9:14 am

The most important part of all of this is that a large majority of the time period where the models are “consistent” with the observed is the time period where the models were calibrated (i.e. forced to match historical records by tweaking the forcings). From my understanding, that is how climate models are built. So, is it really surprising if you say, “Hey, my climate model, which I forced to equal an observed historical trend, is pretty close to that observed historical trend!”

Rod Everson
April 17, 2014 9:14 am

Thanks Chip, I think I understand it now. I do have a suggestion.
A good summary graph to add to the end of the presentation would be a reproduction of figure 2, but with three red lines indicating the 2030 endpoint of Scenarios 1, 2, and 3, and with the blue model results all shifted to their 2030 endpoints. I think that would make it clear how the divergence is occurring, and its extent. I would also suggest adding a fourth red line, centered at the 50% point of the model results along with the required temperature increase to get there (a derived Scenario 4, effectively.) Do you know what that number would be, by any chance?

Rod Everson
April 17, 2014 9:24 am

Thinking on this further, if Scenario 3 does occur the modelers will eventually claim success by simply shifting the beginning point to 2015 or so, after which the models would accurately reflect reality. And frankly, if we go through a couple of decades of real warming again, it would be difficult to fight the political momentum that would inevitably develop in that event. I realize that the models, having failed to this point, would still be worthless, but people are still paying attention to them today, even after having failed. After 15 years of actual warming in line with the models’ annual predictions, grant money would be flowing heavily again despite the analysis here.

April 17, 2014 9:26 am

You know, when they cool the past in their models, the models would naturally balance out against reality over the entire period. However, if their models reflected reality (Panel b would line up, with the models not underestimating), then the 62-year period would, on average, be hotter in the models than in reality. What is also funny is that Panel a shows statistically zero heating on Earth.
Face it: the IPCC is left playing games with data to hide the decline. Fifteen years later, that is still their only trick.

Rod Everson
April 17, 2014 9:29 am

Chip, I see that you had already answered one of my questions while I was in the process of asking it: the 0.64°C/decade increase required to get to the middle of the model distribution.

William Astley
April 17, 2014 10:00 am

In chess there are what are called forced moves, where a player must move to defend against checking attacks, or must move to avoid a large loss of material. The planet is now cooling at both poles (initially more apparent in the Antarctic) in response to the sudden reduction in the solar magnetic cycle. This is not surprising, as the paleo record contains cycles of warming (at high-latitude regions at both poles) that correlate with solar magnetic cycle changes and that were in every case followed by cooling. We will now have a chance, by observation, to decipher how solar magnetic cycle changes modulate planetary temperature. Based on the underlying mechanisms and on what has happened before, it appears the cooling will accelerate.
The complication for the climategate cabal scientists, if the planet cools, will be that they will be forced to defend against the charge of what appears to be almost deliberate fabrication, or at least the ignoring of evidence that unequivocally shows their models are incorrect. For example (one of roughly half a dozen issues and observations that support the same assertion): the general circulation models (GCMs) use a 2% increase in evaporation per degree Celsius of ocean surface warming (tropics), when theoretical calculations support 6.5% to 7% (with no reduction in wind speed), and satellite measurements (tropics) indicate a 10% increase in precipitation per degree Celsius of ocean surface warming (which indicates wind speeds slightly increase, likely due to differential temperatures in the region in question caused by cloudy and clear sky). That one ‘error’ alone reduces the warming from a doubling of atmospheric CO2 to less than a degree Celsius.
Discussions of the evaporation issue:
The Physical Flaws of the Global Warming Theory and Deep Ocean Circulation Changes as the Primary Climate Driver
Observational support for the assertion that the GCMs are incorrect:
The following satellite observational data and analysis from Roy Spencer’s blog site support the assertion that the increase in evaporation for a 1°C rise in ocean surface temperature is at least 6%, rather than the IPCC’s assumed 2%.
On the Observational Determination of Climate Sensitivity and Its Implications
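The 6.5% to 7% theoretical figure cited above follows from the Clausius-Clapeyron relation; a quick back-of-the-envelope check with standard constants reproduces it. This is only the thermodynamic rate of increase of saturation vapor pressure; actual evaporation also depends on wind speed and relative humidity, as the comment notes.

```python
# Clausius-Clapeyron: the fractional change of saturation vapor pressure
# with temperature is d(ln e_s)/dT = L_v / (R_v * T^2).
L_v = 2.5e6   # latent heat of vaporization of water, J/kg
R_v = 461.5   # specific gas constant for water vapor, J/(kg K)

for T in (280.0, 288.0, 300.0):          # typical surface temperatures, K
    rate = L_v / (R_v * T * T)           # fractional increase per kelvin
    print(f"T = {T:.0f} K: {100 * rate:.1f} %/K")
# Roughly 6.9, 6.5, and 6.0 %/K: the thermodynamic ceiling on the
# evaporation increase, before any change in wind speed or humidity.
```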

Pat Frank
April 17, 2014 11:30 am

Thanks, Theo. My manuscript was submitted twice to JGR-Atmospheres, and twice rejected. The experience has been unexpectedly informative.
There were five separate reviewers, who were evidently climate modelers. The rejections were based pretty much entirely on two reviewer objections. The first one was that the “+/-” confidence intervals produced by propagated error are unphysical and amount to an assertion that the model is rapidly oscillating between a green-house and an ice-house climate.
The second was that projections are referenced to an 1850 base-state climate simulation. This base-state simulated climate already has all the errors. Subtracting the base-state climate from the projected climates removes all the error, leaving physically correct trends.
The first reviewer objection is a freshman undergraduate mistake, showing no understanding of the meaning of uncertainty. It became pretty clear that the reviewers were completely unfamiliar with propagated error. That seems to be a pretty basic hole in their education, but it may explain a lot about the state of consensus climatology.
The second reviewer objection relies on linear perturbation theory (LPT). All of science, especially Physics, requires testing a theory against observation. But the case for a reference base-state climate cannot have been tested, much less verified, because there are no observables for the 1850 climate. So, no one knows what the base-state errors look like. How, then, is anyone able to say the errors remain constant and subtract away? The modelers are apparently putting a blind trust in LPT.
I pointed all this out in my responses, but the editor was apparently unmoved. So, now I’m on to journal #2. We’ll see how that goes.
Given the prior reviewer objections, the Auxiliary Material document submitted along with the manuscript now also explains the meaning of uncertainty and of confidence intervals for our climate modeler friends.

Reply to  Pat Frank
April 17, 2014 12:17 pm

Your experience suggests that modelers do not have to be statisticians? Surely they must test the robustness or sensitivity of their models by varying the starting conditions or starting year?

Marlo Lewis
April 17, 2014 12:35 pm

Bravo Chip and Pat. How does scenario #3 (resumption of warming at 0.17°C/decade) compare to Michael Mann’s prediction – http://www.scientificamerican.com/article/earth-will-cross-the-climate-danger-threshold-by-2036/ – that global mean surface temperature will reach 2°C above pre-industrial temperatures by 2036 or, at the latest, 2046?

April 17, 2014 12:42 pm

Some participants in this thread are under the impression that the IPCC climate models make predictions. I’m not aware of any of them. If anyone here is aware of some, I’d appreciate a citation to where they are described.

April 17, 2014 12:51 pm

The analysis of the IPCC modelling done here is all too generous when it includes historic time series. Making a model fit existing, historic data is a no-brainer for even the moderately skilled scientist; the strength of any model instead lies in its ability to predict future values. This is the only way to determine whether a model is good or not, and the models the IPCC bases its hypotheses on are clearly not good at predicting the future (as in scenario a in Figure 1).

April 17, 2014 1:06 pm

Sorry if this has been covered before; the observed decadal trend for 1951-2012 as shown in Fig. 2 is between .108 and .113 deg, yet in the text this value is given as .107. Which is correct? If it’s .107, then the graph (Fig. 2) would show that only 14, not 18, model runs had lower values than the observed rate of warming.

April 17, 2014 2:01 pm

Colin.A (April 17, 2014 at 1:06 pm):
Good question.
The histogram was done in Excel and the x-axis label refers to the upper bound of the bin it is under. So, the bin labeled .108 includes all model runs with a trend >0.103 and <=0.108. It turns out that all four of the members of this bin have a trend less than .1076.
So, the number stands at 18.
I tried to indicate that on the Figure by placing the observed trend on the x-axis as if it were continuous (rather than indicating bin values). Not perfect, I know, but I think it gets the point across.
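The Excel bin-labeling convention described here is easy to mimic in NumPy; the model-run trend values below are made up for illustration, with only the observed trend taken from the discussion above. (NumPy’s bins are half-open on the other side from Excel’s, but no value here falls exactly on an edge, so the counts agree.)

```python
import numpy as np

# Excel-style histogram: each bin spans (lower, upper] and is labeled by
# its UPPER bound, so the bin labeled .108 holds trends in (0.103, 0.108].
edges = np.array([0.103, 0.108, 0.113])          # two bins

trends = np.array([0.104, 0.105, 0.106, 0.107])  # hypothetical model-run trends
observed = 0.1076                                # observed 1951-2012 trend

counts, _ = np.histogram(trends, bins=edges)
print(counts)                     # all four runs land in the bin labeled 0.108
print((trends < observed).sum())  # yet all four are below the observed trend
```

This is why a bar appearing to the right of the observed value on the x-axis can still consist entirely of runs with trends below the observation.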

April 17, 2014 2:08 pm

Popper (April 17, 2014 at 12:51 pm):
See this post for our analysis that primarily examines model forecasts (instead of hindcasts):
In the current post, we examine hindcasts because we were trying to show that even playing the IPCC’s own game, they are losing.

April 17, 2014 2:14 pm

Marlo Lewis (April 17, 2014 at 12:35 pm):
Mann’s statement is based on the pre-industrial period, while our analysis starts in 1951. From the data at hand, your answer is not robustly attainable.

April 17, 2014 2:45 pm

Very good analysis.
It also seems the climate models are fundamentally flawed in their bottom-up, detailed, many-year approach, which propagates small errors. This is obvious because the models vary so much; they should all be rejected on that basis alone. If any one turns out to be correct, it will be only a coincidence. Whoever thought of this approach in the first place had very little common sense, it seems. On the other hand, the models are so complicated that few people understand them. Good job security. And the large variations let the IPCC say the temperature rise might be 50% higher than their expected value, which is itself too high.

April 17, 2014 3:18 pm

Mr. Knappenberger provides us with a link ( http://judithcurry.com/2013/01/19/peer-review-the-skeptic-filter/ ) to “model forecasts.” As “forecast” is synonymous with “prediction,” this sounds like a response to my request for citations of model predictions. However, upon following this link to a blog post by Dr. Curry and a paper by Michaels et al., I find references to model “projections” but not to model “predictions” (aka forecasts). There is no question that the models make projections. At issue is whether they make predictions. I don’t think so.

Farmer Gez
April 18, 2014 3:02 am

Liken the IPCC approach to a sporting contest. If a team leads by a good margin through the first two quarters but then gradually loses ground in the second half and is finally beaten on the siren, you could statistically point to a good result for the losing team. Go “Team IPCC”.

April 18, 2014 7:44 am

It shocks me that the authors of the IPCC report would use 1951-2010 as the basis for their claim of model accuracy. This is a prima facie invalid argument. The data from 1951-2000 have been incorporated into the models. Obviously they jigger the models to fit the data, or the models wouldn’t exist. So, saying you achieved a decent fit is not worthy of a first-year college graduate. The only data that count are the data after you fit the curve. Obviously that’s the problem: they fit to the existing data, and then the subsequent data immediately failed to confirm their fit. This is basically PROOF that their fit is wrong, or at least very likely wrong. Even they have to admit this basic fact of the way science works. I just don’t understand how anyone at all swallows such a stupid argument: that they did a decent (albeit incredibly expensive) fit to the data and now it’s just the recent data that don’t fit. That’s all that matters!!!

April 18, 2014 7:49 am

Terry Oldberg (April 17, 2014 at 3:18 pm):
Climate models make predictions of the future climate given an input of forcings. How forcings may change in the distant future is hard to say, but there is a range of scenarios to cover that. Models predict the climate outcome of any of these scenarios. Model apologists prefer the term “projections” so that when the models are wrong, they can claim “well, we never said they were predictions.” That’s BS in my book. If they are not predictions, then they are worthless from the outset.

Reply to  Chip Knappenberger
April 18, 2014 8:47 am

Chip Knappenberger:
Thank you for taking the time to respond.
A model that makes predictions has a different mathematical and logical structure than that of the IPCC climate models. A “prediction” is an extrapolation to the outcome of an event. A count of events of a particular description is a “frequency.” A ratio of two frequencies is a “relative frequency.” In testing a predictive model, one compares the predicted to the observed relative frequencies of the outcomes of events. If there is not a match, the model is falsified by the evidence. Otherwise, it is validated.
The IPCC climate models are insusceptible to being falsified or validated. In the parlance of the IPCC, they are “evaluated.” In an evaluation, projected global temperatures are plotted on X-Y coordinates together with a selected global temperature time series. An evaluation establishes the magnitudes of the errors of the various projections. However, it neither falsifies nor validates the associated model.
It follows from the lack of falsifiability that the research referenced by the IPCC assessment reports has not had a scientific methodology. One of the consequences is that the models fail to deliver information to policy makers about the outcomes of their policy decisions. Thus, this research has failed to meet its objective of guiding policy. Policy makers have been led by the IPCC to think that they have information when they have none.
That this is true has been obscured by widespread application of the equivocation fallacy in making global warming arguments. An “equivocation” is an argument in which a term changes meanings in the midst of the argument. By logical rule, one cannot legitimately draw a conclusion from an equivocation; to draw an illegitimate conclusion from one is the equivocation fallacy. The equivocation fallacy is invoked when the terms “prediction” and “projection” are treated as synonyms in making global warming arguments, for the two words have differing meanings. Further information on this topic is available at http://wmbriggs.com/blog/?p=7923 .
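The relative-frequency test Oldberg describes can be sketched concretely. The predicted probability and the outcome record below are invented purely for illustration; a real test would use many more events and a proper significance test.

```python
# Toy sketch of testing a predictive model against observed relative
# frequencies of event outcomes. All numbers are invented for illustration.
predicted_p_warm = 0.7                       # model: 70% of decades come out warm

outcomes = ["warm", "warm", "cool", "warm", "cool",
            "warm", "warm", "cool", "warm", "warm"]   # hypothetical record

observed_p_warm = outcomes.count("warm") / len(outcomes)
print(observed_p_warm)                       # observed relative frequency

# Crude match criterion (a real test would assess statistical significance):
falsified = abs(predicted_p_warm - observed_p_warm) > 0.2
print("falsified" if falsified else "not falsified")
```

The point of the structure, in Oldberg’s terms, is that the model makes a claim that observation can contradict; a bare “projection” plotted against a temperature series makes no such claim.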

April 18, 2014 8:22 am

Thanks, Paul and Patrick. Good article.
The IPCC should have very little credibility, based on past performance.

Pat Frank
April 18, 2014 9:52 am

Chip, “Models predict the climate outcome of any of these [various forcing] scenarios.”
Chip, models can’t predict the outcome of any forcing scenario, whatever. They haven’t the physical accuracy to do so. That is, the theory they deploy is a poor theory of climate. Models can’t accurately simulate the terrestrial climate, they can’t make predictions at all, and the scenarios they do produce have no physical meaning.
Terry Oldberg is right about them. The IPCC use equivocal language to give themselves a back door out of failure.

April 18, 2014 10:13 am

Pat Frank (April 18, 2014 at 9:52 am):
“Chip, models can’t predict the outcome of any forcing scenario, whatever.”
Sure they can and they do. Physical accuracy has nothing whatsoever to do with making a prediction. It helps when trying to make a good prediction, though!

Reply to  Chip Knappenberger
April 18, 2014 10:36 am

It seems to me that the only basis for the argument between Knappenberger and Frank is Knappenberger’s use of the polysemic form of “predict” and Frank’s use of the monosemic form of the same word. Use of the polysemic form makes Knappenberger’s argument an equivocation, invalidating his conclusion. Use of the monosemic form makes Knappenberger’s argument a syllogism whose conclusion is therefore true.

Pat Frank
April 18, 2014 11:25 am

Chip, the meaning of a prediction in science is that the statement be derived from theory and be single-valued and unique, so as to pose a threat of theory falsification.
Climate model expectation values do not meet either criterion. They are not single-valued and they are not unique. The reason is that climate theory is incomplete and the boundary conditions are poorly constrained.
That means any single set of forcing conditions, applied within any single model, will produce multiple model expectation values. Climate models are unable to produce unique solutions to any forcing scenario. They do not make falsifiable predictions.
Any given model expectation value, e.g. the global T anomaly at 2050 = +1 C, will be accompanied by a confidence interval that reflects the lack of accuracy in the model; the high multiplicity of model solutions. The confidence interval is so large — in this case a minimum of about (+/-)5 C — that the expectation value (+1 C) has no real physical meaning. It imparts no information about the state of the future climate.
A model expectation value of 1(+/-)5 C is not a prediction. Virtually any air temperature at 2050 will fall within that range. The model doesn’t make unique predictions, it cannot be falsified. All the climate models are equally unreliable in that sense. It will never be possible to choose among their varied solutions because all of their expectation values will be subsumed within their huge confidence intervals.
The short of it is that models cannot reproduce the behavior of the terrestrial climate. They are unable to resolve the response of the climate to emitted GHGs. We presently can’t know, therefore, whether these GHGs are having any effect at all on the climate.
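Frank’s ±5 °C interval can be illustrated with a toy quadrature calculation. The 0.5 °C per-step uncertainty below is an assumed number chosen to reproduce the magnitude in his example, not a value from any published error analysis.

```python
import math

# Toy illustration of error propagation through an iterative projection.
# ASSUMED: each simulated step contributes an independent 0.5 C uncertainty
# (illustrative only; not taken from any published analysis).
per_step_u = 0.5   # deg C per step
steps = 100        # number of iterated model steps

# Independent errors combine in quadrature: sigma_N = u * sqrt(N)
sigma = per_step_u * math.sqrt(steps)
print(f"+/-{sigma:.1f} C")   # +/-5.0 C around, say, a +1 C projected anomaly
# An interval that wide contains almost any plausible future temperature,
# which is the point being made: the projection conveys no usable information.
```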

Pat Frank
April 18, 2014 11:32 am

By the way, Chip, the reason it looks like models make predictions is that climate modelers never include confidence intervals from propagated error with their scenario trends. You get the lines, you don’t get the error bars.
That makes the lines look visually like a prediction. Everyone goes for the visual impact, reacts to that, and concludes that they’ve seen a prediction. But they haven’t. They’ve seen an instance of incompetent presentation.

Reply to  Pat Frank
April 18, 2014 12:48 pm

Pat Frank:
Right on. With error bars, a falsifiable conclusion is reached that the value of the variable will be found to lie within the range of the error bars when observed. Without error bars, a non-falsifiable conclusion is reached which in effect states that the value of the variable is “about” the stated value where “about” is a polysemic term whose meaning varies dependent upon the conclusion that one wishes to reach.

Andrejs Vanags
April 18, 2014 1:23 pm

I had a chance to participate in the review, and I wish that at the time I had more time and had done more. I only contributed a few comments on the summary. I objected to the high level of confidence given an expected increase in temperatures in the next decade and pointed out that the expectation was that temperatures would drop instead, supported by the drop in sunspot numbers.
For sure I expected to be ‘blacklisted’ for those comments, but I noticed that they acknowledged me as a reviewer in one of the report addendums. Good for them.
[Thank you for the courtesy of your reply. Mod]

April 18, 2014 5:21 pm

Keep in mind that these models were written long after 1951, so it’s rather dubious (to put it politely) for the IPCC to include a period in their analysis which was used during the model building process. That proves absolutely nothing. The test is what happens AFTER that.

April 18, 2014 9:15 pm

Pat and Terry,
Oh, I see. You guys have eschewed the common usage of the term in order to carry on some esoteric conversation. At this point, I am content to leave it to you two to carry on.

Reply to  Chip Knappenberger
April 18, 2014 9:29 pm

That sounds like a grudging capitulation. Are you capitulating? If not, what is your argument?

Pat Frank
April 18, 2014 11:30 pm

Chip, I speak as 30 years an experimental scientist. Nothing I wrote is scientifically exotic. It’s a description of the standard way models and physical results are evaluated in science: unique predictions, accurate observables, and error analysis. It’s not mysterious. And climate modeling has failed that standard.
Hmm. Your bio shows you’ve got the training, Chip. Nothing I wrote should be a mystery to you.
On the other hand, not one single climate modeler I’ve encountered has shown the slightest familiarity with propagation of error. Not one has displayed any understanding of the meaning of a confidence interval derived from physical error. I have some of that evidenced in black-and-white. It’s been as though they had never encountered the concepts until I discussed them; concepts that are basic to an education in Physics or Chemistry.
It’s a very peculiar thing that climate modelers apparently have no idea how to evaluate the physical reliability of their own projections, but that seems to be the case. So, tell me, just out of curiosity: did your education include physical error analysis, and propagation of error through a calculation?

April 19, 2014 1:09 am

Since when did they start trying to model the past?

Leo Smith
April 19, 2014 4:52 am

“The IPCC still thinks it might be possible to hit the emissions target by tripling, to 80%, the share of low-carbon energy sources, such as solar, wind and nuclear power, used in electricity generation. It reckons this would require investment in such energy to go up by $147 billion a year until 2030 (and for investment in conventional carbon-producing power generation to be cut by $30 billion a year). In total, the panel says, the world could keep carbon concentrations to the requisite level by actions that would reduce annual economic growth by a mere 0.06 percentage points in 2100.”
It is just possible they might do that with nuclear, but never with intermittent renewables, because without storage the cost per kg of CO2 saved starts to grow exponentially as more and more output is discarded at the peaks of generation to allow the average level of generation to rise.
And in fact the EROI starts to drop below unity after about 70% intermittent renewable generation. That is, you are using more energy to construct the renewable generation than you will ever get out of it, simply because, without storage, you must build more renewable generation than you can use, so that the worst cases of low wind/sun/wave/tide are fully allowed for.
Currently the storage we have to co-operate with intermittent renewables consists of some hydro where the geography is favourable, but that is already built. Plus massive use of stored energy in fossil fuels.
But even if more is built, that takes us into a regime where the cost, financial and energy, of the renewable source PLUS whatever is used to co-operate with it to provide dispatch also rises to a very high level, and possibly into less than unity overall EROI.
It’s the same for the ‘diversity’ solutions, e.g. building a pan-global grid to allow sunlight on one side of the earth to generate power for the other side. The cost of the (undersea) link exceeds the cost of a (nuclear?) power station at distances around the 1,000 km mark. And it uses a lot of copper, and in any case you need more than one in case one goes down.
In theory any and all of these renewable-scheme ‘fixes’ could work, but as with climate change, that is not what is under dispute. What is under dispute is whether they could ever be cost effective and, even more cogently, whether they would actually produce more energy over a service lifetime than they took to build. A renewable solution that doesn’t pay back its own energy cost is unsustainable, under any definition of that term.
With nuclear the situation is fundamentally different, as nuclear power represents a stored energy source, and therefore needs no fixes to allow it to fully supply a grid 24×7. France generates overall around 75% of its national needs with nuclear power, and a large part of the rest with hydro. Like Switzerland, the combination of the two allows an almost completely carbon free grid.
They can even throw in a bit of cosmetic solar and wind using the existing hydro to balance that, too.
France built its nuclear reactors in 15 years from start to finish.
I am no great supporter of the AGW concept, but cheap fossil fuels are becoming rarer by the decade. We sustain such populations as we have by massive per capita generation of energy over and above what is required to sustain life in isolation, simply because of population density. A rural man may drink from a stream, pick fruit from the trees, hunt the odd deer, and shit in the woods, living in a hut made of local wood and thatched with local straw. Nature will, at low population densities, replenish and recycle all that. At higher densities we are forced to farm crops, carry out animal husbandry, and arrange to transport the produce into cities built from similarly transported materials, along with clean water and sewage pumping.
At considerably higher energy cost per capita.
In short, CIVILisation, that is, living in cities, begins with agriculture and cheap slave labour, and ends with access to cheap energy. As does the current population level.
And renewable energy will not mark the survival of civilisation, but its death. Renewable energy is absolutely unsustainable in the short to medium term.
Whereas nuclear is sustainable in the medium to long term. Even if we have to thank the egregious de Gaulle and his dreams of an independent French nuclear deterrent and French electricity grid, to demonstrate it.
I write this to warn those who may not totally believe in AGW but who still think that ‘renewable energy’ is a Good Idea. It’s not. It represents a greater and more immediate threat to mankind’s current population levels than AGW ever did.
As Germany is busy finding out.
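Smith’s curtailment argument can be sketched numerically. The nominal EROI of 8 and the curtailment fractions below are assumed values chosen purely for illustration, not measured figures for any real generation source.

```python
# Toy sketch: how discarding output at generation peaks erodes EROI.
# ASSUMED numbers, illustrative only; not measured values.
nominal_eroi = 8.0                 # energy returned per energy invested, no curtailment

for curtailed in (0.0, 0.5, 0.9):  # fraction of output discarded at the peaks
    effective = nominal_eroi * (1.0 - curtailed)
    print(f"curtailed {curtailed:.0%}: effective EROI = {effective:.1f}")
# Once more than 7/8 of output is discarded, effective EROI falls below 1:
# the plant takes more energy to build than it ever delivers.
```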

April 19, 2014 8:35 am

Over a period of 13 years, my job was to design and manage a succession of scientific studies. In this job, I learned that the first order of business in designing a study was to ensure the falsifiability of the claims that would come from the model serving as the point of delivery for the information conveyed to decision makers. The property of falsifiability was lent to these claims by the statistical population underlying the model.
For global warming climatology, applications of the equivocation fallacy have replaced falsifiability. A result is for policy makers to base their policies upon a pseudo-science that appears to them to be a science. Applications of the equivocation fallacy make it seem to these policy makers as though they have information about the outcomes of the events of the future when they have no such information!

April 21, 2014 8:38 am

Terry and Frank,
I am bristling at you guys trying to co-opt the word “prediction.” There is a much more common usage of the word that is perfectly applicable to pedestrian conversations. If you want to discuss whether or not climate model output fits your definition of the term, go right ahead. But, the results of that conversation will not impact my usage of the term. Or at least that is my prediction.

Pat Frank
April 21, 2014 10:07 am

Chip, we’re discussing science, not pedestrianism.
In science, prediction has one and only one meaning: use of a physical model to describe a future observable. To be useful, the prediction and the observable must be single-valued (have tight error bars). Error propagated through the model tells us the resolution of the theory: the magnitude of the observable the theory can reliably predict.
Climate models do not have the resolution to reliably predict the effects of GHGs. There isn’t any question about that.
You can use prediction any way you like. You’ll be discussing science, however, only when you use it correctly.
