No Matter How the CMIP5 (IPCC AR5) Models Are Presented They Still Look Bad

UPDATE: I’ve added a comment to the end of the post about the use of 1990 as the start year.

# # #

After an initial look at how the IPCC elected to show their model-data comparison of global surface temperatures in Chapter 1, we’ll look at the CMIP5 models a couple of different ways. And we’ll look at the usual misinformation coming from SkepticalScience.

Keep in mind that the models look best when surface temperatures are presented on a global land-plus-sea surface temperature basis. On the other hand, climate models cannot simulate sea surface temperatures, or the coupled ocean-atmosphere processes that drive their warming and cooling, in any way, shape or form.

# # #

There’s a big hubbub about the IPCC’s change in their presentation of the model-data comparison for global surface temperatures. See the comparison of before and after versions of Figure 1.4 from the IPCC’s 5th Assessment Report (my Figure 1). Steve McIntyre commented on the switch here. (Cross post at WattsUpWithThat here.) Judith Curry discussed it here. The switch was one of the topics in my post Questions the Media Should Be Asking the IPCC – The Hiatus in Warming. And everyone’s favorite climate alarmist Dana Nuccitelli nonsensically proclaimed the models “much better than you think” in his posts here and here, as if that comparison of observed and modeled global surface temperature anomalies were a true indicator of model performance. (More on Dana’s second post later.)

Figure 1

Much of what’s presented in the IPCC’s Figure 1.4 is misdirection. The models presented from the IPCC’s 1st, 2nd and 3rd Assessment Reports are considered obsolete, so the only imaginable reason the IPCC included them was to complicate the graph, redirecting the eye from the fact that the CMIP3/AR4 models performed poorly.

Regardless, what it boils down to is this: the climate scientists who prepared the draft of the IPCC AR5 presented the model-data comparison with the models and data aligned at 1990 (left-hand cell), and that version showed the global surface temperature data below the model ranges in recent years. Then, after the politicians met in Stockholm, that graph was replaced by the one in the right-hand cell. There they used the base years of 1961-1990 for the models and data, and they presented AR4 model outputs instead of a range. With all of those changes, the revised graph shows the data within the range of the models…but way down at the bottom edge, alongside the models that showed the least amount of warming. Regardless of how the model-data comparison is presented, the models look bad…they just look worse in the original version.

While that revised IPCC presentation is how most people will envision model performance, von Storch et al. (2013) found that the two most recent generations of climate models (CMIP3/IPCC AR4 and CMIP5/IPCC AR5) could NOT explain the cessation of warming.

Bottom line: If climate models can’t explain the hiatus in warming, they can’t be used to attribute the warming from 1975 to 1998/2000 to manmade greenhouse gases and their projections of future climate have no value.

WHAT ABOUT THE CMIP5/IPCC AR5 MODELS?

Based on von Storch et al. (2013), we would not expect the CMIP5 models to perform any better on a global basis. And they haven’t. See Figures 2 and 3. The graphs show the simulated global surface temperatures: the model means for each of the 25 individual climate models stored in the CMIP5 archive, for the period of 1950 to 2035 (thin curves), and the mean of all of the models (thick red curve). Also illustrated is the average of the GISS LOTI, HADCRUT4 and NCDC global land plus sea surface temperatures from 1950 to 2012 (blue curve). In Figure 2, the models and data are presented as annual anomalies with the base years of 1961-1990, and in Figure 3, the models and data are zeroed at 1990.

Figure 2

# # #

Figure 3

Note how the models look worse with the base years of 1961-1990 than when they’ve been zeroed at 1990. Curious.

The data and model outputs are available through the KNMI Climate Explorer.
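For readers who want to see the difference between the two presentations for themselves, here is a minimal sketch of the two re-baselining choices, in Python. The annual series below is a made-up placeholder rather than actual KNMI Climate Explorer output; only the arithmetic is the point.

```python
# Two ways of expressing the same annual series as anomalies:
# (1) relative to its own 1961-1990 mean, and (2) shifted so 1990 = 0.
import numpy as np

years = np.arange(1950, 2013)
# Placeholder series: a weak warming trend plus noise, NOT real data.
obs = 0.007 * (years - 1950) + np.random.default_rng(0).normal(0.0, 0.1, years.size)

def rebaseline_1961_1990(years, series):
    """Express a series as anomalies from its own 1961-1990 mean."""
    mask = (years >= 1961) & (years <= 1990)
    return series - series[mask].mean()

def zero_at_1990(years, series):
    """Shift a series so that its 1990 value is exactly zero."""
    return series - series[years == 1990][0]

obs_base_period = rebaseline_1961_1990(years, obs)
obs_zeroed_1990 = zero_at_1990(years, obs)
```

Applying the same two operations to each model curve and to the observations reproduces the difference between Figures 2 and 3: the curves themselves never change, only the vertical offset between them.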

NOTE: Every time I now look at a model-data comparison of global land plus sea surface temperatures, I’m reminded of the fact that the modelers had to double the observed rate of warming of sea surface temperatures over the past 31 years to get the modeled and observed land surface temperatures to align with one another. See my post Open Letter to the Honorable John Kerry U.S. Secretary of State. That’s an atrocious display of modeling skills.

UNFORTUNATELY FOR DANA NUCCITELLI, HE DOES NOT APPEAR TO BE KIDDING

In his post Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy, Dana Nuccitelli stated (my boldface):

Global mean surface temperature data are plotted not in absolute temperatures, but rather as anomalies, which are the difference between each data point and some reference temperature. That reference temperature is determined by the ‘baseline’ period; for example, if we want to compare today’s temperatures to those during the mid to late 20th century, our baseline period might be 1961–1990. For global surface temperatures, the baseline is usually calculated over a 30-year period in order to accurately reflect any long-term trends rather than being biased by short-term noise.

It appears that the draft version of Figure 1.4 did not use a 30-year baseline, but rather aligned the models and data to match at the year 1990. How do we know this is the case? Up to that date, 1990 was the hottest year on record, and remained the hottest on record until 1995. At the time, 1990 was an especially hot year. Consequently, if the models and data were properly baselined, the 1990 data point would be located toward the high end of the range of model simulations. In the draft IPCC figure, that wasn’t the case – the models and data matched exactly in 1990, suggesting that they were likely baselined using just a single year.

Mistakes happen, especially in draft documents, and the IPCC report contributors subsequently corrected the error, now using 1961–1990 as the baseline. But Steve McIntyre just couldn’t seem to figure out why the data were shifted between the draft and draft final versions, even though Tamino had pointed out that the figure should be corrected 10 months prior. How did McIntyre explain the change?

Dana’s powers of observation are obviously lacking.

First, how do we know the IPCC “aligned the models and data to match at the year 1990”? Because the IPCC said they did. The text for the Second Order Draft discussing Figure 1.4 stated:

The projections are all scaled to give the same value for 1990.

So Dana Nuccitelli didn’t need to speculate about it.

Second, Figure 4 is a close-up view of the “corrected” version of the IPCC’s Figure 1.4, focusing on the models and data around 1990. I’ve added a fine line marking that year, and I’ve also altered the contrast and brightness of the image to bring out the model curves during that time. Contrary to the claims made by Nuccitelli, with the 1961-1990 base years, “the 1990 data point” WAS NOT “located toward the high end of the range of model simulations”.

Figure 4
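For what it’s worth, that kind of contrast-and-brightness tweak takes only a few lines with Pillow. The file name and enhancement factors below are placeholders, not the settings actually used for Figure 4.

```python
# Sketch: boost contrast and brightness of a figure image to make
# faint model curves easier to pick out.
from PIL import Image, ImageEnhance

img = Image.open("figure_1_4.png")               # placeholder file name
img = ImageEnhance.Contrast(img).enhance(1.5)    # +50% contrast (arbitrary factor)
img = ImageEnhance.Brightness(img).enhance(1.2)  # +20% brightness (arbitrary factor)
img.save("figure_1_4_enhanced.png")
```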

“Mistakes happen?” That has got to be the most ridiculous comment Dana Nuccitelli has made to date. There was no mistake in the preparation of the original version of Figure 1.4. The author of that graph took special steps to make the models align with the data at 1990, and they aligned very nicely, converging on a pinpoint at that year. And the IPCC stated in the text that the “projections are all scaled to give the same value for 1990.” There’s no mistake in that either.

The only mistakes have been Dana Nuccitelli’s misrepresentations of reality. Nothing new there.

# # #

UPDATE: As quoted above, Dana Nuccitelli noted (my boldface):

At the time, 1990 was an especially hot year. Consequently, if the models and data were properly baselined, the 1990 data point would be located toward the high end of the range of model simulations.

“Especially hot?” Utter nonsense.

Dana appears to be parroting Tamino from Tamino’s blog post here.

The reality: 1990 was an ENSO-neutral year, according to NOAA’s Oceanic NINO Index. Therefore, “1990 was…” NOT “…an especially hot year”. It was simply warmer than previous years because surface temperatures were warming then. I’m not sure why that’s so hard a concept for warmists to grasp. The only reason it might appear warm is that the 1991-94 data were noticeably impacted by the eruption of Mount Pinatubo.
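As a simplified illustration of how a year gets classed from NOAA’s index: the usual convention is that Oceanic NINO Index values of +0.5°C or more indicate El Niño conditions and -0.5°C or less indicate La Niña conditions (the official classification works on overlapping three-month seasons). The monthly values below are placeholders, not the actual 1990 ONI record.

```python
# Crude ENSO classification from a year of ONI values (placeholder data).
def enso_state(oni_values, threshold=0.5):
    """Return a rough ENSO label for a year of ONI values (deg C)."""
    if all(v >= threshold for v in oni_values):
        return "El Nino"
    if all(v <= -threshold for v in oni_values):
        return "La Nina"
    if all(abs(v) < threshold for v in oni_values):
        return "neutral"
    return "mixed"

oni_1990 = [0.1, 0.2, 0.3, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4, 0.3, 0.4, 0.4]  # placeholder values
print(enso_state(oni_1990))  # -> "neutral"
```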

Tamino was simply playing games with data as Tamino likes to do, and Dana Nuccitelli bought it hook, line and sinker.

Or Dana Nuccitelli hasn’t yet learned that repeating bogus statements doesn’t make them any less bogus.

Comments
Another Gareth
October 5, 2013 1:29 am

By revising the chart to zero the models at 1990 it makes the warming before then look like a return to a normal rather than a dangerous shift from a previous normal. The IPCC has sacrificed the observed warming pre-1990 in order to protect the models from appearing to be falsified.
Is this something sceptics could exploit? We need to insist that the IPCC be consistent – they can say the warming pre-1990 is nothing exceptional and the models are still worthy of consideration *or* that the pre-1990 warming is the beginning of a man made climate trend and admit the models are not good enough. They cannot say both (but they will).

Geoff Sherrington
October 5, 2013 2:39 am

Bob,
In your depiction of temperature as average of GISS, HADCRUT4 and NCDC, the region around 2010 shows as higher than 1998. It does not show higher on, for example, RSS. There are reasons to expect a difference, as we know, but this is a rather critical difference when one comes to look at the hiatus.
I’m still left with an impression that the small positive slope upwards in the averaged data is, in part, due to adjustments +/- UHI and the difficulty of assessing it.
Therefore, I have a preference for the UAH or RSS data over surface-based observation, particularly because the satellite data has a better chance over the poles, Africa & Sth America.
If you could see in detail how the Aussie record is adjusted by the time the adjusters finish with it, I’d think you might have similar preferences.
So, do you have a strong reason to stick with the average?

Cheshirered
October 5, 2013 2:57 am

Dana doesn’t like it when people question him or his orthodoxy. Almost every post of mine – currently on pre-mod at The G – gets deleted now, even the funny ones that take just a little dig at him or AGW.
What’s happening here is that as one alarmist claim after another turns to rubble, the louder they squeal and shout. Diversion tactics. (“If the law *and* the evidence are against you – bang the table.”) Hence the current spate of ‘worse than we ever thought possible’ articles.
They’re losing the argument because the data isn’t falling their way, and they know they’re losing.

barry
October 5, 2013 3:05 am

Somebody please explain how it can be allowable at all to offset a curve or zero it at some arbitrary year, ex post facto. Is the issue here not what the curves showed at the time they were first published?

Models are not run baselined to recent temps, so you have to make a choice. My two cents about that choice is here.

mwhite
October 5, 2013 3:28 am

“Let’s be honest – the global warming debate isn’t about science”
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/oct/04/global-warming-debate-not-about-science#comment-27639487
Dana Nuccitelli

barry
October 5, 2013 3:51 am

mwhite here.

“Let’s be honest – the global warming debate isn’t about science”
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/oct/04/global-warming-debate-not-about-science#comment-27639487
Dana Nuccitelli

I wonder how many will read the full article, which includes,
“The scientific evidence on human-caused global warming is clear. Opposition stems from politics, not science.”
and
“There are of course open questions yet to be answered by climate scientists – precisely how sensitive the climate is to the increased greenhouse effect, for example.”

October 5, 2013 4:20 am

barry quotes Nutticelli:
“The scientific evidence on human-caused global warming is clear.”
That is a baseless assertion.
There is no testable, measurable scientific evidence proving that human CO2 emissions are the cause of global warming. None.
What is it about “none” that barry and Nutticelli do not understand?

Richard M
October 5, 2013 5:14 am

barry says:
October 5, 2013 at 12:15 am
1990 was a warm year in all data sets.

barry, thanks for showing your religious approach to science. When you start calling an ENSO neutral year “warm” it is obvious you have given up on logic.

John Whitman
October 5, 2013 5:27 am

barry on October 5, 2013 at 3:51 am

mwhite here.
“Let’s be honest – the global warming debate isn’t about science”
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/oct/04/global-warming-debate-not-about-science#comment-27639487
Dana Nuccitelli

I wonder how many will read the full article, which includes,
“The scientific evidence on human-caused global warming is clear. Opposition stems from politics, not science.”
and
“There are of course open questions yet to be answered by climate scientists – precisely how sensitive the climate is to the increased greenhouse effect, for example.”

– – – – – – –
barry,
You, of course, may wonder that.
I, on the other hand, wonder how any reasonably rational human being cannot see that there is little credibility in exclamations like this: AGW is unambiguous in the scientifically documented observational record.
I pity Nuccitelli; it is a difficult time to be an apprentice apologist trying to ‘rationalize’ an excuse for the IPCC’s publicly exposed integrity failure.
John

Bill Illis
October 5, 2013 6:10 am

Comment from Jochem Marotzke of the Max Planck Institute in a presentation at the Royal Society about the IPCC report.
“As a result of the hiatus, explained Marotzke, the IPCC report’s chapter 11 revised the assessment of near-term warming downwards from the “raw” CMIP5 model range. It also included an additional 10% reduction because some models have a climate sensitivity that’s slightly too high.”
http://environmentalresearchweb.org/cws/article/news/54904

barry
October 5, 2013 7:47 am

1990 was preceded by the strong 1988/89 La Nina and followed by the eruption of Mount Pinatubo. Therefore, 1990 stands out.

Even detrended, 1990 is a warmer year than average.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1972/to:1999/mean:12/detrend:0.482/plot/hadcrut4gl/from:1972/to:1999/trend/detrend:0.482

But it was an ENSO-neutral year, and as a result, it was a prime year to start a model-data comparison, because it was NOT exceptionally warm in response to an El Nino.

ENSO is not the only factor that accounts for interannual global temperatures. I’m not persuaded that we should baseline to the ENSO indices alone. I still think it’s better to determine a long-term temperature trend and baseline by selecting a year that lies on the trend, which evens out all the wiggles in the long run, not just ENSO.
If, say, the above-the-trend warmth of 1990 was caused by massive, once-a-century solar flare activity, it would not be reasonable to use 1990. Seeing as we don’t know what caused 1990 to pop out above the trend, we are left to make a purely statistical decision. If ENSO is a vital consideration, then select a year that satisfies both requirements – it must be ENSO-neutral and lie on the long-term trend line. That should not be hard to do if ENSO is overwhelmingly the principal driver of interannual fluctuations. ENSO indices are, after all, trendless over the long term – by design. And it also has the virtue of being less biased by other interannual influences.
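A rough sketch of the selection rule just described: fit a long-term linear trend, then pick a year that is both ENSO-neutral and close to that trend line. The temperature series and the set of ENSO-neutral years below are hypothetical placeholders, not real records.

```python
# Pick a baseline year that is ENSO-neutral AND lies near the long-term trend.
import numpy as np

years = np.arange(1972, 1999)
# Placeholder anomalies: trend plus noise, NOT a real surface record.
temps = 0.017 * (years - 1972) + np.random.default_rng(1).normal(0, 0.1, years.size)
enso_neutral = {1980, 1981, 1985, 1990, 1993, 1996}  # hypothetical ENSO-neutral years

# Long-term linear trend, then each year's residual from that trend.
slope, intercept = np.polyfit(years, temps, 1)
residuals = temps - (slope * years + intercept)

# Candidates: ENSO-neutral years within a small tolerance of the trend line.
candidates = [
    (abs(r), y) for y, r in zip(years, residuals)
    if y in enso_neutral and abs(r) < 0.05
]
best_year = min(candidates)[1] if candidates else None
print(best_year)
```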
(I didn’t introduce Nuticelli’s article here, nor would I have. I don’t think it’s a good article, but I took more exception to the slanted way in which it was introduced, as if Nuticelli thinks the debate should be political. He’s saying the opposite. At the same time, Nuticelli and SkS certainly have a political agenda. And ‘political’ is not referring to governments, but the political ideology of individuals.)

barry
October 5, 2013 7:48 am

“And it also has the virtue of being less biased by other interannual influences.”
“it” = “this method”

Pamela Gray
October 5, 2013 7:58 am

There should be at least four sets of graphs, each one depicting the modeled output for the 4 different model ensembles (FAR, SAR, TAR, and AR4), marking the hindcasting period and then changing colors to mark the beginning of the “projection” period. The range of runs should be shaded in. Plot the average and range of real observations and add them to the graph. Statistical error bars should be calculated and depicted for both the models and the real observations. If anomalies and robustness are important, then the climatological average should be more than 30 years; it should be at least 50. These researchers shouldn’t be afraid of doing this. That they are speaks volumes about their own doubts.
Why four? There are 4 different investigations here, each with two parts: hindcast and projection periods. So there should be 4 separate graphs which clarify the two-phased experiments of each model ensemble. Why more than four? Because within the ensembles, it is possible that input parameter scenarios may differ, i.e., CO2 stays at zero percent increase, or increases by 1 percentage point each year, or increases by 2 percentage points each year, etc.
The way the current graph of either version is done leaves out important methodological information.
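A bare-bones sketch of the sort of figure being proposed above: one model ensemble, a shaded min-max range, a colour change where the hindcast ends and the projection begins, and the observations overlaid. All of the series here are synthetic placeholders, and the 2000 split year is arbitrary.

```python
# One ensemble panel: shaded run range, hindcast vs projection mean, observations.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
years = np.arange(1950, 2036)
runs = 0.015 * (years - 1950) + rng.normal(0, 0.15, (20, years.size))  # 20 fake model runs
obs_years = np.arange(1950, 2013)
obs = 0.012 * (obs_years - 1950) + rng.normal(0, 0.1, obs_years.size)  # fake observations

split = 2000                      # hypothetical start of the projection period
hind = years <= split
proj = years >= split

fig, ax = plt.subplots()
ax.fill_between(years, runs.min(axis=0), runs.max(axis=0), color="0.85", label="model range")
ax.plot(years[hind], runs.mean(axis=0)[hind], color="green", label="ensemble mean (hindcast)")
ax.plot(years[proj], runs.mean(axis=0)[proj], color="red", label="ensemble mean (projection)")
ax.plot(obs_years, obs, color="blue", label="observations")
ax.set_xlabel("Year")
ax.set_ylabel("Temperature anomaly (deg C)")
ax.legend()
plt.show()
```

Repeating the same panel for each ensemble (FAR, SAR, TAR, AR4) and each forcing scenario would give the set of graphs described in the comment.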

Steve Obeda
October 5, 2013 8:05 am

If the current “pause” is due to natural variation, then the forecasts for the next 20 years should show a much steeper increase than they did five years ago. That’s because we’ll soon have not only the reversal of the natural variation but also the cumulative effects of the CO2, no?

barry
October 5, 2013 9:06 am

Regardless of how the model-data is presented, the models looked bad…they just look worse in the original version.

Yes, they do. The graphic from the leaked report is 25 years long, and emphasises the recent apparent downturn. The approved graphic is 85 years long (40 more years of hindcast, 20 more of forecast), and therefore gives more context. As global climate change is a long-term (multi-decadal) phenomenon, the second graphic is more appropriate. Regardless of whether scientists or politicians changed it.

barry
October 5, 2013 12:07 pm

Those “predictions” aren’t predictions but rather are projections. While predictions are falsifiable and convey information to a policy maker about the outcomes from his or her policy decisions, projections are non-falsifiable and convey no information to a policy maker.

Falsifiable predictions are a function of science, not policy-making. They are called projections because the policy makers wanted to know what might happen under different forcing scenarios. So they are given a series of ranges – CO2 increase at various different rates, or stabilising at a certain value. This provides more, not less information to policy makers. Commonly decision-makers on any issue at least want to know the ‘best case/worst case’ scenario to get an idea of the range. Individuals frequently weigh decisions on this basis for ordinary life stuff. We try to pick options that balance cost and outcome.

Reply to  barry
October 5, 2013 4:46 pm

Barry:
Thanks for giving me an opportunity to clarify. It is a fact that no events underlie the IPCC climate models. However, it is by counting events of various descriptions that one arrives at the entities which statisticians call “frequencies.” The ratio of two frequencies of particular descriptions is called a “relative frequency.” A relative frequency is the empirical counterpart of a probability. As there are no frequencies or relative frequencies, there are no probabilities. It is by comparison of probability values to relative frequency values that a model is falsified. Thus, the claims that are made by the IPCC climate models are not falsifiable. Also, as “information” is defined in terms of probabilities, “information” is not a concept for the IPCC climate models.
Predictions have a one-to-one relationship with events. As no events underlie the IPCC climate models, there can be no predictions from them. As there are no predictions, the methodology of the associated research cannot truthfully be said to be “scientific.”

October 5, 2013 1:05 pm

barry says:
“…decision-makers on any issue at least want to know the ‘best case/worst case’ scenario…”
That is not what the IPCC does. When have they ever made a “best case scenario”?
‘Best case’ is that a couple of degrees of global warming is a net benefit to humanity. ‘Best case’ is that more CO2 is beneficial to the biosphere.
Give it up, barry. The IPCC never provides a “best case scenario”. Their scenarios go from very, very bad, to Catastrophic.

wrecktafire
October 5, 2013 1:15 pm

I’m with JDN: the zoom out makes the flat spot look much less “significant” (in the subjective sense).
http://www.amazon.com/How-To-Lie-With-Charts/dp/1419651439

barry
October 5, 2013 7:55 pm

Terry,
I disagree that models are not falsifiable. But they are complex, and describe much more than a one to one relationship. A failure of a particular component of climate models (say, the replicability of cloud behaviour) only tells us that cloud modeling is poor (or falsified, if you want to express it in a binary way). Other components do well, like predicting the cooling of the stratosphere. Should I assume you are focussed exclusively on the evolution of global surface temperatures?
Most commenters in the mainstream (such as realclimate) agree that if something like the trajectory of surface temperatures deviated over a sufficient amount of time from the models, then the ability of models to predict surface temps would be falsified.
Predictions and events are not always a one to one relationship, especially not for modeling of complex systems exhibiting chaotic tendencies. Most modeling is probabilistic. There is usually a range given in the prediction. Falsifying occurs not when the real trajectory deviates from the central estimate, but when it consistently falls outside the range.
The envelope for an ensemble at a particular rate of CO2 rise is fairly broad, but not infinite. A year or two of temps outside the envelope would not falsify the models, but a decade of annual temperatures centred around the 0.3% probability range would falsify the models that had the same forcings trajectory as the real world.
Seems to me that people get disgruntled that falsification hasn’t been conceded yet, based on the last few years lying near the bottom of the envelope. But they are too hasty. Time is an important component of climate model prediction/projections. On a related note, 5, 10, or 15 years of an apparent flat trend of global surface temperatures is not falsification of AGW. Plenty of commenters in the debate aligned with the mainstream view (eg, Tamino) have stated what they think the conditions would be – how long with no global warming, or how many years outside the range – that would falsify predictions and put the current understanding of AGW into serious doubt.
Regarding the oft-cited trend from 1998 – the huge El Nino anomaly – my own condition for falsifying the understanding of the relationship between global temperature change and CO2 increase is this: 25 years is a fair length of time to get a statistically significant trend from surface data, so if the global surface temperature has not increased by a statistically significant margin from 1998 to 2023, then the central estimates of the relationship between CO2 and global temperature will have been falsified.
This is assuming that no freakish, non-CO2 events have an influence (this cuts both ways, whether a strong forcing event warms or cools the planet late in the trend), just the normal interannual fluctuations.
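A minimal sketch of the 1998-2023 test described in this comment, with a placeholder series standing in for a real surface record. It uses a plain least-squares trend on annual anomalies and ignores autocorrelation, so it somewhat understates the uncertainty; it is only meant to show the shape of the calculation.

```python
# Is the 1998-2023 trend statistically different from zero at the 95% level?
import numpy as np
from scipy import stats

years = np.arange(1998, 2024)
# Placeholder anomalies: weak trend plus noise, NOT a real data set.
anoms = 0.01 * (years - 1998) + np.random.default_rng(3).normal(0, 0.1, years.size)

result = stats.linregress(years, anoms)
significant = result.pvalue < 0.05
print(f"trend = {result.slope:.4f} C/yr, p = {result.pvalue:.3f}, significant = {significant}")
```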

Reply to  barry
October 6, 2013 9:09 am

Barry:
Thanks for taking the time to reply. In the literature of climatology, “predict” and “prediction” are polysemic. In other words, they have more than one meaning. When a word changes meaning in the midst of an argument, this argument is an example of an “equivocation.” By logical rule, one cannot draw a proper conclusion from an equivocation. To draw an IMPROPER conclusion is the equivocation fallacy. By drawing conclusions from equivocations, climatologists are repeatedly guilty of instances of the equivocation fallacy in making arguments about global warming. For details, please see my peer-reviewed article at http://wmbriggs.com/blog/?p=7923 .
The equivocation fallacy may be avoided through disambiguation of the terms of the language in which an argument is framed, such that each term of significance to the conclusion is monosemic (has a single meaning). When this is done in reference to arguments about global warming, logically valid conclusions emerge about the nature of the research that is described by the IPCC in its recent assessment reports. One such conclusion is that the methodology of this research was not truly scientific (ibid).
Many of the methodological shortcomings of global warming climatology stem from the absence of reference by the models to the events that underlie them. In the absence of these events it is not possible for one of these models to make a predictive inference. Thus, it is not possible for one of them to make an unconditional predictive inference, that is, “prediction.” A predictive inference is an extrapolation from one observable state of nature to another; conventionally, the first of the two states is called the “condition” while the second is called the “outcome.” In a “prediction,” the condition is observed and the outcome is inferred.
In the falsification of a model, one or more predicted probability values belonging to outcomes are shown not to match observed relative frequency values of the same outcomes in a randomly selected sampling of the events. Absent these events, to falsify a model is obviously impossible.
By the way, events are the entities upon which probabilities are defined. Absent these events, there is no such thing as a probability. Mathematical statistics, which incorporates probability theory as a premise, is out the window.
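As a toy illustration of the comparison described in this comment (every number here is invented), one might check a model’s claimed probability for an outcome against the relative frequency of that outcome in a sample of observed events, for example with a binomial test:

```python
# Compare a claimed outcome probability to an observed relative frequency.
from scipy import stats

predicted_p = 0.30    # model's claimed probability of the outcome (made up)
observed_hits = 12    # outcome occurred in 12 of 100 observed events (made up)
n_events = 100

relative_frequency = observed_hits / n_events
# Two-sided binomial test: is the observed frequency consistent with the claim?
pvalue = stats.binomtest(observed_hits, n_events, predicted_p).pvalue
print(relative_frequency, pvalue)
```

The point of the comment, of course, is that without a defined population of events there is nothing to count, so no such comparison can even be set up.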

Sedron L
October 5, 2013 8:43 pm

If there was anything to Bob Tisdale’s book, it would have been put out by a real publisher, and not via a vanity press.

Patrick
October 5, 2013 11:26 pm

I wonder what the graph in figure 1.4 would look like if the temperature scale was not so granular? My guess would be it would not look scary enough.
