No Matter How the CMIP5 (IPCC AR5) Models Are Presented They Still Look Bad

UPDATE: I’ve added a comment to the end of the post about the use of 1990 as the start year.

# # #

After an initial look at how the IPCC elected to show their model-data comparison of global surface temperatures in Chapter 1, we’ll look at the CMIP5 models a couple of different ways. And we’ll look at the usual misinformation coming from SkepticalScience.

Keep in mind, the models look best when surface temperatures are presented on a global land-plus-sea surface temperature basis. On the other hand, climate models cannot simulate sea surface temperatures in any way, shape or form, nor the coupled ocean-atmosphere processes that drive their warming and cooling.

# # #

There’s a big hubbub about the IPCC’s change in their presentation of the model-data comparison for global surface temperatures. See the comparison of before and after versions of Figure 1.4 from the IPCC’s 5th Assessment Report (My Figure 1). Steve McIntyre commented on the switch here. (Cross post at WattsUpWithThat here.) Judith Curry discussed it here. The switch was one of the topics in my post Questions the Media Should Be Asking the IPCC – The Hiatus in Warming. And everyone’s favorite climate alarmist Dana Nuccitelli nonsensically proclaimed the models “much better than you think” in his posts here and here, as if that comparison of observed and modeled global surface temperature anomalies were a true indicator of model performance. (More on Dana’s second post later.)

Figure 1


Much of what’s presented in the IPCC’s Figure 1.4 is misdirection. The models presented from the IPCC’s 1st, 2nd and 3rd Assessment Reports are considered obsolete, so the only imaginable reason the IPCC included them was to complicate the graph, redirecting the eye from the fact that the CMIP3/AR4 models performed poorly.

Regardless, what it boils down to is this: the climate scientists who prepared the draft of the IPCC AR5 presented the model-data comparison with the models and data aligned at 1990 (left-hand cell), and that version showed the global surface temperature data below the model ranges in recent years. Then, after the politicians met in Stockholm, that graph was replaced by the one in the right-hand cell. There they used the base years of 1961-1990 for the models and data, and they presented AR4 model outputs instead of a range. With all of those changes, the revised graph shows the data within the range of the models…but way down at the bottom edge with all of the models that showed the least amount of warming. Regardless of how the model-data comparison is presented, the models look bad…they just look worse in the original version.

While that revised IPCC presentation is how most people will envision model performance, von Storch et al. (2013) found that the two most recent generations of climate models (CMIP3/IPCC AR4 and CMIP5/IPCC AR5) could NOT explain the cessation of warming.

Bottom line: If climate models can’t explain the hiatus in warming, they can’t be used to attribute the warming from 1975 to 1998/2000 to manmade greenhouse gases and their projections of future climate have no value.


Based on von Storch et al. (2013) we would not expect the CMIP5 models to perform any better on a global basis. And they haven’t. See Figures 2 and 3. The graphs show the simulations of global surface temperatures. Included are the outputs of the 25 individual climate models stored in the CMIP5 archive, for the period of 1950 to 2035 (thin curves), and the mean of all of the models (thick red curve). Also illustrated is the average of GISS LOTI, HADCRUT4 and NCDC global land plus sea surface temperatures from 1950 to 2012 (blue curve). In Figure 2, the models and data are presented as annual anomalies with the base years of 1961-1990, and in Figure 3, the models and data were zeroed at 1990.

Figure 2


# # #

Figure 3


Note how the models look worse with the base years of 1961-1990 than when they’ve been zeroed at 1990. Curious.

The data and model outputs are available through the KNMI Climate Explorer.
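For readers who want to reproduce the two presentations, the baselining itself is simple arithmetic. Here is a minimal sketch of the two methods discussed above; the series below are synthetic stand-ins of my own invention, NOT the GISS/HADCRUT4/NCDC data or CMIP5 output, which you would download from KNMI Climate Explorer:

```python
import numpy as np

def rebase_to_period(years, temps, start, end):
    # Anomalies relative to the mean over a multi-year base period
    mask = (years >= start) & (years <= end)
    return temps - temps[mask].mean()

def rebase_to_year(years, temps, year):
    # Shift the whole series so it passes through zero at a single year
    return temps - temps[years == year][0]

# Synthetic stand-ins (illustration only, not real observations)
years = np.arange(1950, 2013)
obs = 0.01 * (years - 1950) + 0.1 * np.sin(years / 3.0)

anoms_6190 = rebase_to_period(years, obs, 1961, 1990)  # Figure 2 style
anoms_1990 = rebase_to_year(years, obs, 1990)          # Figure 3 style
```

The two results differ only by a constant vertical offset, which is exactly why the choice of baseline changes where the data sit relative to the model spread without changing any trend.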

NOTE: Every time I now look at a model-data comparison of global land plus sea surface temperatures, I’m reminded of the fact that the modelers had to double the observed rate of warming of sea surface temperatures over the past 31 years to get the modeled and observed land surface temperatures to align with one another. See my post Open Letter to the Honorable John Kerry U.S. Secretary of State. That’s an atrocious display of modeling skills.


In his post Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy, Dana Nuccitelli stated (my boldface):

Global mean surface temperature data are plotted not in absolute temperatures, but rather as anomalies, which are the difference between each data point and some reference temperature. That reference temperature is determined by the ‘baseline’ period; for example, if we want to compare today’s temperatures to those during the mid to late 20th century, our baseline period might be 1961–1990. For global surface temperatures, the baseline is usually calculated over a 30-year period in order to accurately reflect any long-term trends rather than being biased by short-term noise.

It appears that the draft version of Figure 1.4 did not use a 30-year baseline, but rather aligned the models and data to match at the year 1990. How do we know this is the case? Up to that date, 1990 was the hottest year on record, and remained the hottest on record until 1995. At the time, 1990 was an especially hot year. Consequently, if the models and data were properly baselined, the 1990 data point would be located toward the high end of the range of model simulations. In the draft IPCC figure, that wasn’t the case – the models and data matched exactly in 1990, suggesting that they were likely baselined using just a single year.

Mistakes happen, especially in draft documents, and the IPCC report contributors subsequently corrected the error, now using 1961–1990 as the baseline. But Steve McIntyre just couldn’t seem to figure out why the data were shifted between the draft and draft final versions, even though Tamino had pointed out that the figure should be corrected 10 months prior. How did McIntyre explain the change?

Dana’s powers of observation are obviously lacking.

First, how do we know the IPCC “aligned the models and data to match at the year 1990”? Because the IPCC said they did. The text for the Second Order Draft discussing Figure 1.4 stated:

The projections are all scaled to give the same value for 1990.

So Dana Nuccitelli didn’t need to speculate about it.

Second, Figure 4 is a close-up view of the “corrected” version of the IPCC’s Figure 1.4, focusing on the models and data around 1990. I’ve added a fine line marking that year. And I’ve also altered the contrast and brightness of the image to bring out the model curves during that time. Contrary to the claims made by Nuccitelli, with the 1961-1990 base years, “the 1990 data point” WAS NOT “located toward the high end of the range of model simulations”.

Figure 4


“Mistakes happen?” That has got to be the most ridiculous comment Dana Nuccitelli has made to date. There was no mistake in the preparation of the original version of Figure 1.4. The author of that graph took special steps to make the models align with the data at 1990, and they aligned very nicely, focusing right in at a pinpoint. And the IPCC stated in the text that the “projections are all scaled to give the same value for 1990.” There’s no mistake in that either.

The only mistakes have been Dana Nuccitelli’s misrepresentations of reality. Nothing new there.

# # #

UPDATE: As quoted above, Dana Nuccitelli noted (my boldface):

At the time, 1990 was an especially hot year. Consequently, if the models and data were properly baselined, the 1990 data point would be located toward the high end of the range of model simulations.

“Especially hot?” Utter nonsense.

Dana appears to be parroting Tamino from Tamino’s blog post here.

The reality: 1990 was an ENSO-neutral year, according to NOAA’s Oceanic NINO Index. Therefore, “1990 was…” NOT “…an especially hot year”. It was simply warmer than previous years because surface temperatures were warming then. I’m not sure why that’s so hard a concept for warmists to grasp. The only reason it might appear warm is that the 1991-94 data were noticeably impacted by the eruption of Mount Pinatubo.

Tamino was simply playing games with data as Tamino likes to do, and Dana Nuccitelli bought it hook, line and sinker.

Or Dana Nuccitelli hasn’t yet learned that repeating bogus statements doesn’t make them any less bogus.



Dana is just awful.

Doesn’t Dana work for Big Oil? We will call him Oily-Dan from now on.


And how do the models compare with the un-homogenized (original) surface temperature data sets?

G. Karst

The games people play

Sweet Old Bob

Alarmists have cried WOLF! WOLF! WOLF! so long that it sounds like WOOF! WOOF! WOOF! (and sometimes like YIPE! YIPE! YIPE!) Are these models the porch they are preparing to crawl under?


Reading the Tamino post he refers to, I think what he meant was that the draft version, starting from 1990, was the one that should have been aligned differently and thus treated an especially warm year as a normal one. Or am I reading it wrong?


Is this another contender for “worst distortion ever” by Dana? –
“the IPCC says that humans have most likely caused all of the global warming over the past 60 years.”
Given how carefully many of the IPCC statements are worded I would have thought that if that is actually the case, they would have said as much.

Pippen Kool

This was all hashed out over at McIntyre’s site, in the comments section, where there are several people who seem to know what they are talking about. Using the 1990 temp as the ref was clearly a mistake in the first graph; the new graph corrects that by setting the start to the trend line. The bottom line is that it was changed to a more logical starting point, whether or not you think the first graph was a mistake.
Now whether you like or don’t like the spaghetti graph is personal taste, but the actual world temp is actually inside the model’s envelope, albeit on the low side. I don’t think that justifies the title of this post.


Bob Tisdale: “Then, after the politicians met in Stockholm, . . . .”

They aren’t politicians, they’re dyed-in-the-wooly greenie-regulators:
Guest Essay by Barry Brill
On 23-26 September, scores of representatives of the world’s Environment Ministries are scheduled to meet in Stockholm . . . .

A notable feature of these models is that none of them make predictions or a predictive inference. Thus, none of them are falsifiable or convey information to a policy maker about the outcomes from his or her policy decisions.


Using the 1990 temp as the ref was clearly a mistake in the first graph; the new graph corrects that by setting the start to the tread line.
So they did it the first time to show the models were right…
…and changed the second one to show the models were right
They can’t both be right….and in that case they both show the models were wrong again

Theo Goodwin

The Nuccitelli Principle 1: If the IPCC publishes something that deeply embarrasses the IPCC then some mistake happened in the IPCC.
Corollary 1: If some mistake happened in the IPCC and something deeply embarrassing to the IPCC was published then the IPCC is not responsible for the content of the deeply embarrassing thing that was published.
The Nuccitelli Principle 2: Mistakes happen.
Conclusion: The IPCC is not responsible for its deeply embarrassing publications.

Matt Skaggs

Since you read the comments at CA, you must have seen my analogy:
“The soccer player launches the penalty kick and it misses the goal to the right by one foot. Tamino sprints along the end line with his measuring tape and discovers that the goal was actually placed three feet closer to the left corner of the field than the right. Now that the discrepancy has been rectified, we are being told that the proper thing to do is credit the kicker with the goal.”
Let’s see if we can fit your statement to the analogy:
“Using the [original location of the goal] as the ref was clearly a mistake [when the ball was kicked]; the new [location] corrects that by setting the [goal where it should have been]. The bottom line is that it was changed to a more logical [location], whether or not you think the first [kick missed the goal].
Now either you [think it was a goal or you don’t], but the [kick was actually inside the envelope of where the goal should have been, so it should be credited as a goal].”
Seems to fit OK.

Theo Goodwin

Pippen Kool says:
October 4, 2013 at 11:00 am
You are reporting half the debate at McIntyre’s site. The glib half.

Fig 9.8 in chapter 9 of AR5 shows the correct comparison between CMIP5 models and the observed temperature trend. The discrepancy after 1998 is very clear. The graph itself can be seen here. This is particularly clear in the comparison between measured and predicted temperature trends from 1998 to 2012.
Using the same parlance as the IPCC we can state: It is “extremely unlikely” that AR5 models can explain the hiatus in global warming (at 95% confidence)!

please read lucia on how to “zero” models. you’ve bodged it as badly as tamino


What happens to the models if the earth starts to cool again? Could the models account for that? Would the cooling be anthropogenic?


Pippen Kool – I agree about the 1990 versus the 30 year part of the discussion on McIntyre’s site. However, the professor from Duke pretty much destroys the spaghetti chart. And it isn’t personal taste.

Gail Combs

Terry Oldberg says: @ October 4, 2013 at 11:13 am
A notable feature of these models is that none of them make predictions or a predictive inference. Thus, none of them are falsifiable or convey information to a policy maker about the outcomes from his or her policy decisions.
Great, Good. Not only can’t the models make PREDICTIONS, but the earth has stopped warming for the past couple of decades in spite of a continued increase in CO2, suggesting saturation of the greenhouse effect, or at least a major slowdown due to the logarithmic nature of the ‘Forcing’ allowing negative feedbacks to swamp the effect of CO2.
Geologists looking into the factors causing the descent into glaciation proclaim that CO2 instead of being a cause for alarm is saving us from glaciation.
The latest IPCC says not only can they not come up with a climate sensitivity but that there is no increase in droughts, hurricanes, tornadoes etc. etc. Other reports show the world is greening. Agricultural crops have higher crop yields per acre.
The crisis has been called off, CO2 is saving the earth, lets all go home and celebrate.

I’m sorry to appear confused but it makes sense to fix the model to 1990 especially for FAR. Anything before this is hindcasting – i.e. not real – and used for initialisation. After 1990 is projection. The key is picking a long enough period to be the baseline but essentially all that matters is that your model matches the real at 1990. It doesn’t matter if that year was cold or hot – that’s the year you use.
The same applies for SAR, TAR and AR4. The data should only be presented for the projection part not the hindcast.
Personally I think that the first graph was fine. It showed enough detail and conveyed a clear enough message rather than the hodgepodge of the second. Adding more error and squiggles demonstrates that you know LESS than before – hardly congruent with the 95% certainty.


@Zek202, who said; “What happens to the models if the earth starts to cool again? Could the models account for that? Would the cooling be anthropgenic?”
Now those are excellent questions. If we could only get a response from the IPCC for the record and hold it accountable to the answers it gives, because global temperatures could very well decline for the next few decades. As far as I can see, the IPCC cannot accommodate any such cooling given the models it uses.

chris y

You know the temperature in 1990. You should zero the models to the known temperature in 1990.
Each model has an uncertainty range.
Each model is the result of hundreds of runs to get to the best performance.
There are now enough years to start tossing most of the models into the rubbish bin.
The IPCC should pick the model that comes closest to the actual data, and report the predicted climate sensitivity, aerosol forcings, etc for that model. I suspect the crisis is much less than we thought.
The rest is handwaving to maintain grant support for the modeling groups, and retain the high-end predictions, as silly as they are at this stage.

Bryan A

Another interesting DATA shift is apparent in the “Figure 1” side-by-side comparison. The 1990 FAR has a Temp Anomaly of almost 0.3 as the starting point in the AR4 graph, but the 1990 FAR anomaly starting point has been shifted to <0.2 in the AR5 Spaghetti Chart. Must be how they lowered the bar


I have to disagree. The trick was eliminating the error bars on the observed data and zooming out on the scale of the graph. No error bars allows them to plot a rising mean trend line, but, it would be obvious that there is no rising mean for the last 15 years if the error bars are added back. They are making the data points as inconspicuous as possible so that your eye only sees the trend line. And for some reason zooming out also gives you the impression that the trend line is right.
Someone should help out the IPCC by recoloring their graph for them. If it becomes known that the color scheme of a graph is essential to its acceptance, well, maybe they might have to add the error bars back themselves.


Too late, the global emissions scheme for airlines is moving ahead. Remember to follow the money always.


Jon Gebarowski says:
“Doesn’t Dana work for Big Oil?”
Yes, he does indeed.
” We will call him Oily-Dan for now on.”
Why not use the name he is known by in the Big Oil business – Drillbit?

If I point the tip of my pen on today’s temperature and drew a bunch of squiggly lines in the same general direction as the last 100 years I would have a more accurate spaghetti graph “projection” than 99% of the model runs.
Their giant swath of possible future predictions includes such a wide variety of possibilities, it’s like saying the temperature tomorrow will be between 0 and 100F. And then they still got it wrong.

RC Saumarez

Predictions make statements about the future.
The IPCC predictions, I mean projections, were explicit. OK, these don’t fit data so we’ll do a post hoc redefinition of the projections.
This is part of the shifting sands of post-normal science and would be ethically and intellectually unacceptable in other branches of science. Now that PNS is getting into real difficulty, let’s hope that we can retreat into traditional science.

Bill Illis

Some people just “like” to mislead themselves into believing the climate models have been accurate so far.
Sorry to burst your self-made bubble, but they are not.
The only accurate global warming predictions made so far are from climate models that have FLAT temperature increases. All 1 of them and this one just has huge decadal variability.
The RCP 4.5 scenario from IPCC AR5 has temperatures at 0.76C this month (using a 1961-1990 baseline). Wake me up when Hadcrut4 gets up to 0.76C – current trends have that happening in about 20 years.

Pamela Gray

Some say that the IPCC model ensembles make projections based on scenarios of CO2 emissions and therefore cannot be falsified or called predictions because they do not in any way resemble reality. Dead dog. Won’t bark. Dead horse. Stop beating it.
Common sense trumps semantics every time.

Bill H

Figure 1 should read.. Before Manipulation and After Manipulation..
These people have no shame. we’re going to plot the observations after we warm them up a bit.. is there any level to which they will not stoop to continue the lie?

Richard M

Bob is correct. 1990 is an especially good year as it was ENSO neutral all year. In many ways it could not have been better for a baseline. The IPCC clearly made the change for political purposes.

Two Labs

Statistically, there was nothing wrong with choosing 1990 as the base year. Nothing wrong with choosing the 61-90 average, either. But if changing the base year (or range) changes the forecast result significantly, that’s a statistical red flag.
From what I could tell, IPCC simply increased the confidence range of the AR4 forecasts so that post-2010 average temps could fall within that range. But since these confidence ranges are not calculated statistically, IPCC is certainly free to do this, but not free to do this without admitting that they are less confident in their modeling. Too bad they weren’t honest about that…

Jeff Alberts

catweazle666 says:
October 4, 2013 at 2:00 pm
” We will call him Oily-Dan for now on.”
Why not use the name he is known by in the Big Oil business – Drillbit?

He’s too obtuse for such a name to stick.

Jeff Alberts

Pippen Kool says:
October 4, 2013 at 11:00 am
Now either you like or don’t like the spaghetti graph, that is personal taste, but the actual world temp is actually inside the model’s envelope, albeit on the low side. I don’t think that justifies the title of this post.

There is no global temperature. It’s an utterly meaningless statistical construct.


Somebody please explain how it can be allowable at all to offset a curve or zero it at some arbitrary year, ex post facto. Is the issue here not what the curves showed at the time they were first published?

gopal panicker

an amazing amount of supercomputer time wasted on these nonsense models


Our present reality over the past 70 years appears to me to lie within the noise cast by so many of these very sophisticated, quantitative models. From my experience of reservoir production modeling, which can be tweaked to provide a very large range of possible outcomes, the ones you tend to believe are the ones that fall out from first principles, with minimal assumptions. They are directionally correct with the least amount of forcing or curve fitting. From my experience, if the trend is wrong, it is time to go back and revisit your assumptions. What strikes me is that if someone (a public company for example, with public shareholders) was paying for the directional accuracy of these climate models to predict the future physical and therefore financial behaviour of a producing asset, a lot of these scientific types would be out of business very quickly.

Greg Goodman

Richard Betts of Hadley Centre commented on Climate Audit, saying the revised AR5 figure1.4 was presenting it “just as” done in AR4 and provides a link:
However, if we look at that graph, we note considerable differences in how the ranges of predictions from the various reports overlap compared to how they are shown in AR5.
It’s not “just as”; there is wholesale shifting of not only the observational data but also the individual reported projections.
It is pretty obvious that if you can find a logic that allows shifting all the data and projections up and down it is a trivial result that they overlap. It demonstrates nothing about the data but a lot about the revisionist nature of the IPCC.
Who was it said: “The future is certain, it is only the past that is unpredictable.”?


Thank you Clivebest 11:20


Matt Skaggs says:
October 4, 2013 at 11:18 am
Ever hear the saying “moving the goal posts”?
What is the “correct location of the goal”? To the cultists it’s where the ball goes in. That’s the only ref point that matters.
The non-stop moving of goal posts – such as suddenly needing 30 years of flat temps for it to be a trend but only 7-12 warming years to be a trend – is classic goal post moving.

Michael Asten

I fear the IPCC authors made the mistake with their earlier AR5 draft but are not letting on. If I take AR4 WG1 Fig 1.1 and overlay it on AR5 WG1 Fig 1.4, then the uncertainty bounds for TAR temperature projections overlay reasonably closely. However, as pointed out above, the draft figure (now abandoned) for AR5, as annotated by Steve McIntyre, does not show the TAR uncertainty bounds as overlaying. So rather than a fudge in revising AR5, perhaps a sloppy author made a mistake in preparing the earlier Fig 1.4 of AR5, then fixed it for the current final draft. That said, I don’t excuse the use of the spaghetti plot – I take a somewhat uncharitable view that use of a completely different plot format may have been a ploy to hide an earlier error, and allow a bit of disinformation to circulate.
I find it very curious that IPCC authors (unlike accountants) feel no need at all to provide comparisons of results for the current time period versus the equivalent for the past time period. An accountant who changed formats, baselines, etc. and deliberately ignored past results/projections would be shot at dawn, professionally speaking. What a pity scientists can’t enforce similar standards.


1990 was a warm year in all data sets. Here’s the HADCru record.
(Had4 global temps)
I plotted the trend prior to the apparent slowdown beginning in 1998, and from 1972 so as not to overemphasise the 1990 anomaly.
To show the problem with centring on 1990, here is a straightforward adjustment centring the trend on 1990, but this time I’ll include all the years 1972 through 2012. (period chosen because the slope has strong statistical significance).
If the blue line was the averaged linear trend from the models (it isn’t of course), this shows why baselining comparisons on a year that lies off the long-term trend presents problems.
You could show the same problem using a cooler year baseline (1985).
Now the post-2000 temps appear warmer – that can’t be right!
A better way would be to select a statistically significant long-term trend from observed data (eg, from 1950 through 1999), and choose a year that lies on or close to that trend. If you baseline the model ensemble average to that year, you’d at least avoid the problem of biasing the results on a single-year anomaly that was warmer or cooler than average.
1982 seems like a good choice.
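[The selection procedure described in the comment above can be sketched in a few lines. This is a hypothetical illustration of my own – the function name and the synthetic series are invented, not from the commenter – assuming an ordinary least-squares trend fit:]

```python
import numpy as np

def year_closest_to_trend(years, temps, start, end):
    # Fit a least-squares linear trend over [start, end] and return the
    # year whose observed value lies closest to that trend line.
    mask = (years >= start) & (years <= end)
    slope, intercept = np.polyfit(years[mask], temps[mask], 1)
    residuals = np.abs(temps[mask] - (slope * years[mask] + intercept))
    return int(years[mask][np.argmin(residuals)])

# Synthetic series: steady warming plus one unusually warm year (1990)
years = np.arange(1950, 2000)
temps = 0.01 * (years - 1950)
temps[years == 1990] += 0.3

baseline_year = year_closest_to_trend(years, temps, 1950, 1999)
```

[Baselining at `baseline_year` then avoids pinning the comparison to a year, like the spiked 1990 here, that sits well off the long-term trend.]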


megawati, you are right. I wondered why they didn’t use 2013.

There is one clear fix in the new IPCC graph, and that is the AR4 predictions. These were made after 2000, and if you look at the “before politicians” graph you see how well they track the data from 1990: the consequent downward trend and then the rise up to 2000. Tamino had to leave AR4 out of his “re-alignment” for this reason. Both the AR4 and AR5 model predictions are above the data. The clever optical illusion in the new graph is to move down FAR, SAR and TAR and smudge everything out with bland colors so this contradiction is invisible.

Those “predictions” aren’t predictions but rather are projections. While predictions are falsifiable and convey information to a policy maker about the outcomes from his or her policy decisions, projections are non-falsifiable and convey no information to a policy maker. Thus, to distinguish between predictions and projections is important.