No Matter How the CMIP5 (IPCC AR5) Models Are Presented They Still Look Bad

UPDATE: I’ve added a comment to the end of the post about the use of 1990 as the start year.

# # #

After an initial look at how the IPCC elected to show their model-data comparison of global surface temperatures in Chapter 1, we’ll look at the CMIP5 models a couple of different ways. And we’ll look at the usual misinformation coming from SkepticalScience.

Keep in mind, the models look best when surface temperatures are presented on a global land-plus-sea surface temperature basis. On the other hand, climate models cannot simulate sea surface temperatures in any way, shape or form, nor the coupled ocean-atmosphere processes that drive their warming and cooling.

# # #

There’s a big hubbub about the IPCC’s change in their presentation of the model-data comparison for global surface temperatures. See the comparison of before and after versions of Figure 1.4 from the IPCC’s 5th Assessment Report (My Figure 1). Steve McIntyre commented on the switch here. (Cross post at WattsUpWithThat here.) Judith Curry discussed it here. The switch was one of the topics in my post Questions the Media Should Be Asking the IPCC – The Hiatus in Warming. And everyone’s favorite climate alarmist Dana Nuccitelli nonsensically proclaimed the models “much better than you think” in his posts here and here, as if that comparison of observed and modeled global surface temperature anomalies were a true indicator of model performance. (More on Dana’s second post later.)

Figure 1

Much of what’s presented in the IPCC’s Figure 1.4 is misdirection. The models presented from the IPCC’s 1st, 2nd and 3rd Assessment Reports are considered obsolete, so the only imaginable reason the IPCC included them was to complicate the graph, redirecting the eye from the fact that the CMIP3/AR4 models performed poorly.

Regardless, what it boils down to is this: the climate scientists who prepared the draft of the IPCC AR5 presented the model-data comparison with the models and data aligned at 1990 (left-hand cell), and that version showed the global surface temperature data below the model ranges in recent years. Then, after the politicians met in Stockholm, that graph was replaced by the one in the right-hand cell. There they used the base years of 1961-1990 for the models and data, and they presented individual AR4 model outputs instead of a range. With all of those changes, the revised graph shows the data within the range of the models…but way down at the bottom edge, alongside the models that showed the least amount of warming. Regardless of how the model-data comparison is presented, the models look bad…they just look worse in the original version.

While that revised IPCC presentation is how most people will envision model performance, von Storch et al. (2013) found that the two most recent generations of climate models (CMIP3/IPCC AR4 and CMIP5/IPCC AR5) could NOT explain the cessation of warming.

Bottom line: If climate models can’t explain the hiatus in warming, they can’t be used to attribute the warming from 1975 to 1998/2000 to manmade greenhouse gases and their projections of future climate have no value.

WHAT ABOUT THE CMIP5/IPCC AR5 MODELS?

Based on von Storch et al. (2013), we would not expect the CMIP5 models to perform any better on a global basis. And they haven’t. See Figures 2 and 3. The graphs show the simulations of global surface temperatures. Included are the model means for the 25 individual climate models stored in the CMIP5 archive, for the period of 1950 to 2035 (thin curves), and the mean of all of the models (thick red curve). Also illustrated is the average of GISS LOTI, HADCRUT4 and NCDC global land plus sea surface temperatures from 1950 to 2012 (blue curve). In Figure 2, the models and data are presented as annual anomalies with the base years of 1961-1990, and in Figure 3, the models and data were zeroed at 1990.
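The two presentation choices in Figures 2 and 3 amount to a one-line difference in how each series is offset. Here is a minimal sketch of both conventions in Python; the function names are my own, and the series is synthetic, not the actual GISS/HADCRUT4/NCDC data:

```python
import numpy as np

def to_anomalies(years, temps, base_start=1961, base_end=1990):
    """Anomalies relative to the mean over a reference period (base years)."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    mask = (years >= base_start) & (years <= base_end)
    return temps - temps[mask].mean()

def zero_at_year(years, temps, year=1990):
    """Shift a series so its value in a single year is exactly zero."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    return temps - temps[years == year][0]

# Synthetic illustration (not real data): a slowly warming series in deg C
years = np.arange(1950, 2013)
temps = 14.0 + 0.01 * (years - 1950)

anoms_base = to_anomalies(years, temps)        # 1961-1990 base period
anoms_1990 = zero_at_year(years, temps, 1990)  # zeroed at 1990
```

Zeroing at a single year forces every curve through one point, so any spread between models and data at that year vanishes by construction; a 30-year base period only forces the 30-year means to agree.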

Figure 2

# # #

Figure 3

Note how the models look worse with the base years of 1961-1990 than when they’ve been zeroed at 1990. Curious.

The data and model outputs are available through the KNMI Climate Explorer.

NOTE: Every time I now look at a model-data comparison of global land plus sea surface temperatures, I’m reminded of the fact that the modelers had to double the observed rate of warming of sea surface temperatures over the past 31 years to get the modeled and observed land surface temperatures to align with one another. See my post Open Letter to the Honorable John Kerry U.S. Secretary of State. That’s an atrocious display of modeling skills.

UNFORTUNATELY FOR DANA NUCCITELLI, HE DOES NOT APPEAR TO BE KIDDING

In his post Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy, Dana Nuccitelli stated (my boldface):

Global mean surface temperature data are plotted not in absolute temperatures, but rather as anomalies, which are the difference between each data point and some reference temperature. That reference temperature is determined by the ‘baseline’ period; for example, if we want to compare today’s temperatures to those during the mid to late 20th century, our baseline period might be 1961–1990. For global surface temperatures, the baseline is usually calculated over a 30-year period in order to accurately reflect any long-term trends rather than being biased by short-term noise.

It appears that the draft version of Figure 1.4 did not use a 30-year baseline, but rather aligned the models and data to match at the year 1990. How do we know this is the case? Up to that date, 1990 was the hottest year on record, and remained the hottest on record until 1995. At the time, 1990 was an especially hot year. Consequently, if the models and data were properly baselined, the 1990 data point would be located toward the high end of the range of model simulations. In the draft IPCC figure, that wasn’t the case – the models and data matched exactly in 1990, suggesting that they were likely baselined using just a single year.

Mistakes happen, especially in draft documents, and the IPCC report contributors subsequently corrected the error, now using 1961–1990 as the baseline. But Steve McIntyre just couldn’t seem to figure out why the data were shifted between the draft and draft final versions, even though Tamino had pointed out that the figure should be corrected 10 months prior. How did McIntyre explain the change?

Dana’s powers of observation are obviously lacking.

First, how do we know the IPCC “aligned the models and data to match at the year 1990”? Because the IPCC said they did. The text for the Second Order Draft discussing Figure 1.4 stated:

The projections are all scaled to give the same value for 1990.

So Dana Nuccitelli didn’t need to speculate about it.

Second, Figure 4 is a close-up view of the “corrected” version of the IPCC’s Figure 1.4, focusing on the models and data around 1990. I’ve added a fine line marking that year. And I’ve also altered the contrast and brightness of the image to bring out the model curves during that time. Contrary to the claims made by Nuccitelli, with the 1961-1990 base years, “the 1990 data point” WAS NOT “located toward the high end of the range of model simulations”.

Figure 4

“Mistakes happen?” That has got to be the most ridiculous comment Dana Nuccitelli has made to date. There was no mistake in the preparation of the original version of Figure 1.4. The author of that graph took special steps to make the models align with the data at 1990, and they aligned very nicely, focusing right in at a pinpoint. And the IPCC stated in the text that the “projections are all scaled to give the same value for 1990.” There’s no mistake in that either.

The only mistakes have been Dana Nuccitelli’s misrepresentation of reality. Nothing new there.

# # #

UPDATE: As quoted above, Dana Nuccitelli noted (my boldface):

At the time, 1990 was an especially hot year. Consequently, if the models and data were properly baselined, the 1990 data point would be located toward the high end of the range of model simulations.

“Especially hot?” Utter nonsense.

Dana appears to be parroting Tamino from Tamino’s blog post here.

The reality: 1990 was an ENSO-neutral year, according to NOAA’s Oceanic NINO Index. Therefore, “1990 was…” NOT “…an especially hot year”. It was simply warmer than previous years because surface temperatures were warming then. I’m not sure why that’s so hard a concept for warmists to grasp. The only reason it might appear warm is that the 1991-94 data were noticeably impacted by the eruption of Mount Pinatubo.
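For readers who want to check the ENSO-neutral claim themselves: NOAA’s convention is that an El Niño (La Niña) episode requires the three-month running-mean Niño-3.4 anomaly to hold at or above +0.5C (at or below -0.5C) for at least five consecutive overlapping seasons; anything else is neutral. A rough sketch of that rule follows; the function name and structure are my own illustration, not NOAA code:

```python
def classify_oni(oni_values, threshold=0.5, min_run=5):
    """Classify a sequence of overlapping 3-month ONI values as
    'el_nino', 'la_nina', or 'neutral' per NOAA's episode convention."""
    def longest_run(conditions):
        # Length of the longest unbroken run of True values
        best = run = 0
        for met in conditions:
            run = run + 1 if met else 0
            best = max(best, run)
        return best

    if longest_run(v >= threshold for v in oni_values) >= min_run:
        return "el_nino"
    if longest_run(v <= -threshold for v in oni_values) >= min_run:
        return "la_nina"
    return "neutral"
```

Feeding in the twelve overlapping-season ONI values for 1990 from NOAA’s table would return “neutral” under this rule, which is the point being made above.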

Tamino was simply playing games with data as Tamino likes to do, and Dana Nuccitelli bought it hook, line and sinker.

Or Dana Nuccitelli hasn’t yet learned that repeating bogus statements doesn’t make them any less bogus.

103 Comments
Resourceguy
October 4, 2013 12:40 pm

Too late, the global emissions scheme for airlines is moving ahead. Remember to follow the money always.

catweazle666
October 4, 2013 2:00 pm

Jon Gebarowski says:
“Doesn’t Dana work for Big Oil?”
Yes, he does indeed.
” We will call him Oily-Dan for now on.”
Why not use the name he is known by in the Big Oil business – Drillbit?

October 4, 2013 2:13 pm

If I pointed the tip of my pen at today’s temperature and drew a bunch of squiggly lines in the same general direction as the last 100 years, I would have a more accurate spaghetti graph “projection” than 99% of the model runs.
Their giant swath of possible future predictions includes such a wide variety of possibilities that it’s like saying the temperature tomorrow will be between 0 and 100F. And then they still got it wrong.

RC Saumarez
October 4, 2013 2:19 pm

Predictions make statements about the future.
The IPCC predictions, I mean projections, were explicit. OK, these don’t fit data so we’ll do a post hoc redefinition of the projections.
This is part of the shifting sands of post-normal science and would be ethically and intellectually unacceptable in other branches of science. Now that PNS is getting into real difficulty, let’s hope that we can retreat into traditional science.

Bill Illis
October 4, 2013 4:10 pm

Some people just “like” to mislead themselves into believing the climate models have been accurate so far.
Sorry to burst your self-made bubble, but they are not.
The only accurate global warming predictions made so far are from climate models that have FLAT temperature increases. All 1 of them and this one just has huge decadal variability.
The RCP 4.5 scenario from IPCC AR5 has temperatures at 0.76C this month (using a 1961-1990 baseline). Wake me up when HadCRUT4 gets up to 0.76C – current trends have that happening in about 20 years.

Pamela Gray
October 4, 2013 4:17 pm

Some say that the IPCC model ensembles make projections based on scenarios of CO2 emissions and therefore cannot be falsified or called predictions because they do not in any way resemble reality. Dead dog. Won’t bark. Dead horse. Stop beating it.
Common sense trumps semantics every time.

Bill H
October 4, 2013 5:00 pm

Figure 1 should read: Before Manipulation and After Manipulation.
These people have no shame. “We’re going to plot the observations after we warm them up a bit.” Is there any level to which they will not stoop to continue the lie?

Richard M
October 4, 2013 5:01 pm

Bob is correct. 1990 is an especially good year as it was ENSO neutral all year. In many ways it could not have been better for a baseline. The IPCC clearly made the change for political purposes.

Two Labs
October 4, 2013 6:07 pm

Statistically, there was nothing wrong with choosing 1990 as the base year. Nothing wrong with choosing the 61-90 average, either. But if changing the base year (or range) changes the forecast result significantly, that’s a statistical red flag.
From what I could tell, the IPCC simply increased the confidence range of the AR4 forecasts so that post-2010 average temps could fall within that range. Since these confidence ranges are not calculated statistically, the IPCC is certainly free to do this, but not free to do it without admitting that they are less confident in their modeling. Too bad they weren’t honest about that…

October 4, 2013 6:39 pm

catweazle666 says:
October 4, 2013 at 2:00 pm
” We will call him Oily-Dan for now on.”
Why not use the name he is known by in the Big Oil business – Drillbit?

He’s too obtuse for such a name to stick.

October 4, 2013 6:41 pm

Pippen Kool says:
October 4, 2013 at 11:00 am
Now either you like or don’t like the spaghetti graph, that is personal taste, but the actual world temp is actually inside the model’s envelope, albeit on the low side. I don’t think that justifies the title of this post.

There is no global temperature. It’s an utterly meaningless statistical construct.

megawati
October 4, 2013 7:17 pm

Somebody please explain how it can be allowable at all to offset a curve or zero it at some arbitrary year, ex post facto. Is the issue here not what the curves showed at the time they were first published?

gopal panicker
October 4, 2013 8:07 pm

an amazing amount of supercomputer time wasted on these nonsense models

Leo
October 4, 2013 8:15 pm

Our present reality over the past 70 years appears to me to lie within the noise cast by so many of these very sophisticated, quantitative models. From my experience of reservoir production modeling, which can be tweaked to provide a very large range of possible outcomes, the models you tend to believe are the ones that fall out from first principles, with minimal assumptions. They are directionally correct with the least amount of forcing or curve fitting. From my experience, if the trend is wrong, it is time to go back and revisit your assumptions. What strikes me is that if someone (a public company, for example, with public shareholders) was paying for the directional accuracy of these climate models to predict the future physical and therefore financial behaviour of a producing asset, a lot of these scientific types would be out of business very quickly.
Leo

October 4, 2013 9:44 pm

Richard Betts of Hadley Centre commented on Climate Audit, saying the revised AR5 figure1.4 was presenting it “just as” done in AR4 and provides a link: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-1-1.html
However, if we look at that graph, we note considerable differences in how the ranges of predictions from the various reports overlap compared to how they are shown in AR5.
It’s not “just as”; there is wholesale shifting of not only the observational data but also the individual reported projections.
It is pretty obvious that if you can find a logic that allows shifting all the data and projections up and down it is a trivial result that they overlap. It demonstrates nothing about the data but a lot about the revisionist nature of the IPCC.
Who was it said: “The future is certain, it is only the past that is unpredictable.”?

dalyplanet
October 4, 2013 9:53 pm

Thank you Clivebest 11:20

temp
October 4, 2013 10:15 pm

Matt Skaggs says:
October 4, 2013 at 11:18 am
Ever hear the saying “moving the goal posts”.
What is the “correct location of the goal”? To the cultists it’s wherever the ball goes in. That’s the only reference point that matters.
The non-stop moving of goal posts – such as suddenly needing 30 years of flat temps for a trend but only 7-12 warming years for a trend – is classic goal post moving.

Michael Asten
October 4, 2013 10:39 pm

I fear the IPCC authors made the mistake with their earlier AR5 draft but are not letting on. If I take AR4 WG1 Fig 1.1 and overlay it on AR5 WG1 Fig 1.4, then the uncertainty bounds for the TAR temperature projections overlay reasonably closely. However, as pointed out above, the draft figure (now abandoned) for AR5, as annotated by Steve McIntyre, does not show the TAR uncertainty bounds as overlaying. So rather than a fudge in revising AR5, perhaps a sloppy author made a mistake in preparing the earlier Fig 1.4 of AR5, then fixed it for the current final draft. That said, I don’t excuse the use of the spaghetti plot – I take a somewhat uncharitable view that use of a completely different plot format may have been a ploy to hide an earlier error, and allow a bit of disinformation to circulate.
I find it very curious that IPCC authors (unlike accountants) feel no need at all to provide comparisons of results for the current time period versus the equivalent for the past time period. An accountant who changed formats, baselines, etc., and deliberately ignored past results/projections would be shot at dawn, professionally speaking. What a pity scientists can’t enforce similar standards.

barry
October 5, 2013 12:15 am

1990 was a warm year in all data sets. Here’s the HADCru record.
(Had4 global temps)
I plotted the trend prior to the apparent slowdown beginning in 1998, and from 1972 so as not to overemphasise the 1990 anomaly.
To show the problem with centring on 1990, here is a straightforward adjustment centring the trend on 1990, but this time I’ll include all the years 1972 through 2012. (period chosen because the slope has strong statistical significance).
(example)
If the blue line was the averaged linear trend from the models (it isn’t of course), this shows why baselining comparisons on a year that lies off the long-term trend presents problems.
You could show the same problem using a cooler year baseline (1985).
(example)
Now the post-2000 temps appear warmer – that can’t be right!
A better way would be to select a statistically significant long-term trend from observed data (eg, from 1950 through 1999), and choose a year that lies on or close to that trend. If you baseline the model ensemble average to that year, you’d at least avoid the problem of biasing the results on a single-year anomaly that was warmer or cooler than average.
1982 seems like a good choice.
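[The baseline-year selection described above can be sketched in a few lines. This is only an illustration of the idea, not barry’s actual method: the function name is mine, and the series below is synthetic, not HadCRUT4. Fit a trend over a long period, then pick the year whose observation falls closest to the fitted line.]

```python
import numpy as np

def trend_consistent_baseline_year(years, temps, fit_start=1950, fit_end=1999):
    """Fit a linear trend over [fit_start, fit_end] and return the year
    whose observation lies closest to the fitted line, i.e. a baseline
    year that avoids single-year warm/cool bias."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    mask = (years >= fit_start) & (years <= fit_end)
    slope, intercept = np.polyfit(years[mask], temps[mask], 1)
    residuals = np.abs(temps[mask] - (slope * years[mask] + intercept))
    return int(years[mask][np.argmin(residuals)])

# Synthetic illustration: linear warming plus one anomalously warm year
# at 1990, standing in for an off-trend spike.
years = np.arange(1950, 2013)
temps = 0.01 * (years - 1950) + np.where(years == 1990, 0.5, 0.0)
best_year = trend_consistent_baseline_year(years, temps)
```

By construction, the spiked year 1990 has the largest residual from the fitted trend, so it is never selected as the baseline year.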

Rabe
October 5, 2013 1:18 am

megawati, you are right. I wondered why they didn’t use 2013.

October 5, 2013 1:23 am

There is one clear fix in the new IPCC graph, and that is the AR4 predictions. These were made after 2000, and if you look at the “before politicians” graph you can see how well they track the data from 1990: the consequent downward trend and then the rise to 2000. Tamino had to leave AR4 out of his “re-alignment” for this reason. Both the AR4 and AR5 model predictions are above the data. The clever optical illusion in the new graph is to move FAR, SAR and TAR down and smudge everything out with bland colors so this contradiction is invisible.

Reply to  clivebest
October 5, 2013 9:13 am

clivebest:
Those “predictions” aren’t predictions but rather are projections. While predictions are falsifiable and convey information to a policy maker about the outcomes from his or her policy decisions, projections are non-falsifiable and convey no information to a policy maker. Thus, to distinguish between predictions and projections is important.
