Tisdale on model initialization in wake of the leaked IPCC draft

Should Climate Models Be Initialized To Replicate The Multidecadal Variability Of The Instrument Temperature Record During The 20th Century?

Guest post by Bob Tisdale

The coupled climate models used to hindcast past and project future climate in the IPCC’s 2007 report, AR4, were not initialized so that they could reproduce the multidecadal variations that exist in the global temperature record. This has been known for years. For those who weren’t aware of it, refer to the Nature Climate Feedback post Predictions of climate, written by Kevin Trenberth.

The question this post asks is: should the IPCC’s coupled climate models be initialized so that they reproduce the multidecadal variability that exists in the instrument-based global temperature records of the past 100 years, and project those multidecadal variations into the future?

Coincidentally, as I finished writing this post, I discovered Benny Peiser’s post with the title Leaked IPCC Draft: Climate Change Signals Expected To Be Relatively Small Over Coming 20-30 Years at WattsUpWithThat. It includes a link to the following quote from Richard Black of BBC News:

And for the future, the [IPCC] draft gives even less succour to those seeking here a new mandate for urgent action on greenhouse gas emissions, declaring: “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.

That’s IPCC speak, and it really doesn’t say they’re expecting global surface temperatures to flatten for the next two or three decades. And we have already found that at least one of the climate models submitted to the CMIP5 archive for inclusion in the IPCC’s AR5 does not reproduce a multidecadal temperature signal. In other words, that model shows no skill at matching the multidecadal temperature variations of the 20th Century. So the question still stands:

Should IPCC climate models be initialized so that they replicate the multidecadal variability of the instrument temperature record during the past 100 years and project those multidecadal variations into the future?

In the post An Initial Look At The Hindcasts Of The NCAR CCSM4 Coupled Climate Model, after illustrating that the NCAR CCSM4 (from the CMIP5 Archive, being used for the upcoming IPCC AR5) does not reproduce the multidecadal variations of the instrument temperature record of the 20th Century, I included the following discussion under the heading NOTE ON MULTIDECADAL VARIABILITY OF THE MODELS:

…And when the models don’t resemble the global temperature observations, inasmuch as the models do not have the multidecadal variations of the instrument temperature record, the layman becomes wary. They casually research and discover that natural multidecadal variations have stopped global warming for 30 years at a time in the past, and they believe it can happen again. Also, the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades. In short, to the layman, the models appear bogus.

To help clarify those statements and to present them using Sea Surface Temperatures, the source of the multidecadal variability, I’ve prepared Figure 1. It compares observations to climate model outputs for the period of 1910 to year-to-date 2011. The Global Sea Surface Temperature anomaly dataset is HADISST. The model output is the model mean of the Sea Surface Temperature anomaly hindcasts and projections from the coupled climate models prepared for the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC), published in 2007. As shown, the period of 1975 to 2000 is really the only multidecadal period when the models come close to matching the observed data. The two datasets diverge before and after that period.

Figure 1

Refer to Animation 1 for further clarification. (It’s a 4-frame gif animation, with 15 seconds between frames.) It compares the linear trends of the Global Sea Surface Temperature anomaly observations and the model mean (the same two datasets) for the periods 1910 to 1945, 1945 to 1975, and 1975 to 2000. It sure does look like the models were programmed to latch onto that 1975 to 2000 portion of the data, which is an upward swing in the natural multidecadal variations.

Animation 1

A NOTE ABOUT BASE YEARS: Before somebody asks, I used the period of 1910 to 1940 as base years for anomalies. This period was chosen for an animation that I removed and posted separately. The base years make sense for the graphs included in that animation. But I used the same base years for the graphs that remain in this post, which is why all of the data has been shifted up from where you would normally expect to see it.
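For anyone who wants to reproduce that vertical shift, re-baselining an anomaly series only requires subtracting the mean of the chosen base years. The short Python sketch below is a minimal illustration under assumed names (an annual series indexed by calendar year); it is not the code actually used to prepare the graphs in this post.

```python
import pandas as pd

def rebaseline(series: pd.Series, start: int, end: int) -> pd.Series:
    """Return anomalies relative to the mean of the years start..end (inclusive).

    `series` is assumed to be an annual temperature (or anomaly) series
    indexed by calendar year.
    """
    base_mean = series.loc[start:end].mean()
    return series - base_mean

# Example: shift a series onto 1910-1940 base years, as done in this post.
# sst_annual = ...  # annual global SST means indexed by year (hypothetical)
# sst_anomalies = rebaseline(sst_annual, 1910, 1940)
```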

Figure 2 includes the linear trends of the Global Sea Surface Temperature observations from 1910 to 2010 and from 1975 to 2000, and it includes the trend for the model mean of the IPCC AR4 projection from 2000 to 2099. The data for the IPCC AR4 hindcast from 1910 to 2000 is also illustrated. The three trends are presented to show the disparity between them. The long-term (100-year) trend in the observations is only 0.054 deg C/decade. And keeping in mind that the trends for the models and observations were basically identical for the period of 1975 to 2000 (and approximately the same as for the early warming period of 1910 to 1945), the high-end (short-term) trend for a warming period during those 100 years of observations is about twice the long-term trend, or approximately 0.11 deg C per decade.

And then there’s the model forecast from 2000 to 2099. Its trend appears to go off at a tangent, skyrocketing at a pace that’s almost twice as high as the high-end short-term trend from the observations. The model trend is 0.2 deg C per decade. I said in the earlier post, “the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades.” The models not only continued that trend, they increased it substantially, and they’ve clearly overlooked the fact that there is a multidecadal component to the instrument temperature record for Sea Surface Temperatures. The IPCC projection looks bogus to anyone who takes the time to plot it. It really does.

Figure 2
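For readers who want to check numbers like 0.054, 0.11, or 0.2 deg C per decade for themselves, a least-squares slope over the chosen window is all that is needed. Here is a minimal sketch, again assuming an annual series indexed by year; the function and variable names are illustrative, not taken from the original analysis.

```python
import numpy as np
import pandas as pd

def trend_per_decade(series: pd.Series, start: int, end: int) -> float:
    """Least-squares linear trend of an annual series over the years
    start..end (inclusive), returned in deg C per decade."""
    window = series.loc[start:end].dropna()
    years = window.index.values.astype(float)
    slope_per_year = np.polyfit(years, window.values, 1)[0]
    return slope_per_year * 10.0

# Example usage (hypothetical series names):
# trend_per_decade(sst_obs_annual, 1910, 2010)      # long-term observed trend
# trend_per_decade(sst_obs_annual, 1975, 2000)      # late warming period
# trend_per_decade(model_mean_annual, 2000, 2099)   # AR4 projection trend
```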

CLOSING

The climate models used by the IPCC appear to be missing a number of components that produce the natural multidecadal signal that exists in the instrument-based Sea Surface Temperature record. And if these multidecadal components continue to exist over the next century at similar frequencies and magnitudes, future Sea Surface Temperature observations could fall well short of those projected by the models.

SOURCES

Both the HADISST Sea Surface Temperature data and the IPCC AR4 Hindcast/Projection (TOS) data used in this post are available through the KNMI Climate Explorer. The HADISST data is found at the Monthly observations webpage, and the model data is found at the Monthly CMIP3+ scenario runs webpage. I converted the monthly data to annual averages for this post to simplify the graphs and discussions. And again, the period of 1910 to 1940 was used as the base years for the anomalies.
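For anyone repeating the exercise, converting the monthly series downloaded from the KNMI Climate Explorer to annual averages is straightforward. The sketch below assumes the data have already been read into a monthly pandas Series with a datetime index; the file name and variable names are illustrative only.

```python
import pandas as pd

def monthly_to_annual(monthly: pd.Series) -> pd.Series:
    """Average a monthly, datetime-indexed series into calendar-year means."""
    annual = monthly.resample("YS").mean()   # one value per calendar year
    annual.index = annual.index.year         # index by year for easier plotting
    return annual

# Example (hypothetical file exported from the KNMI Climate Explorer):
# monthly = pd.read_csv("hadisst_global.csv", index_col=0, parse_dates=True).squeeze()
# annual = monthly_to_annual(monthly)
```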

ABOUT: Bob Tisdale – Climate Observations

119 Comments
HAS
November 15, 2011 12:38 am

I think it would be fairer if you used absolute temperatures rather than anomalies in this comparison, and included the range derived from the individual model runs.

TBear (Sydney, where it has still not warmed, and almost finished the coldest freakin' October in 50 yrs ...)
November 15, 2011 12:52 am

So, in ‘layman’s terms’, the models are crap?
Is that the gist of this post?
And given that the whole argument seems to be about tenths of a degree Celsius, even if they are just a bit crap, that makes the models well nigh useless. Yeah?
God help us.

November 15, 2011 1:02 am

The models have to have some initial conditions, so how difficult could it be to use actual conditions? For that matter, how difficult could it be to make a few runs with realistic sensitivity values, ones that match what the ERBE satellite produced?

Stephen Wilde
November 15, 2011 1:32 am

Absolutely right. Not much left to say after that.
Of course it will still leave the upward multicentennial trend from LIA to date but it is hard to attribute that to human emissions.
Furthermore, adjustment for the ENSO multidecadal signal should make it easier to link variations in the background multicentennial temperature trend to multicentennial variations in solar activity.
It is the net balance between solar and oceanic influences over centuries that dictates the shifting of the permanent climate zones to give us what we perceive as climate change.

John Marshall
November 15, 2011 1:37 am

I have said this before and I repeat, even though nobody listens (sarc): throw these models away, they are lying.
When the IPCC states that the multidecadal variations not being replicated in the models makes laymen wary, I weep. Does this problem not make scientists wary?
Obviously not, since they are wedded to the AGW theory.

Bloke down the pub
November 15, 2011 1:37 am

When I first took notice of the global warming debate, one of the glaring issues with the warmists’ argument was how they treated natural variability. They were claiming that the rise in observed temperatures was greater than could be explained by climate cycles and that CO₂ was the cause. Yet they wanted to have their cake and eat it: when temperatures failed to rise, they claimed that natural variability was stronger than GHGs. This was when I, like many others, cried foul. Always a pleasure, Bob.

Luther Bl't
November 15, 2011 1:42 am

>> Should Climate Models Be Initialized To Replicate The Multidecadal Variability Of The Instrument Temperature Record During The 20th Century?
A rhetorical question?

Brian H
November 15, 2011 2:04 am

The eternal modus operandi of prognosticators: seize on a selected chunk of a complex curve and use linear extrapolation to concoct a scary scenario. Claim special competence in preventing same, and demand profligate funding to achieve that.
You’d think the world’s fools would have grown weary of being fleeced by the same old scam. But I guess cutting governments in on the action has given it new life.

RACookPE1978
Editor
November 15, 2011 2:27 am

“That’s IPCC speak, and it really doesn’t say they’re expecting global surface temperatures to flatten for the next two or three decades. “
Should that not be: “That’s IPCC speak, and it really does say that they are expecting global surface temperatures to flatten for the next two or three decades.”

H.R.
November 15, 2011 2:34 am

Excellent! Thank you, Bob.
If a model can’t replicate what is known, there’s not much reason to believe it will forecast what is unknown. (Now that I understand that, may I please be promoted to 5th grade?)

John B
November 15, 2011 2:46 am

John Marshall says:
November 15, 2011 at 1:37 am
I have said this before and I repeat, even though nobody listens (sarc): throw these models away, they are lying.
When the IPCC states that the multidecadal variations not being replicated in the models makes laymen wary, I weep. Does this problem not make scientists wary?
Obviously not, since they are wedded to the AGW theory.
———————-
It does not make scientists weep, at least not those who understand what is being said.
The multidecadal oscillations do not exhibit a trend, they oscillate. Some are periodic, some, like ENSO, not so much, but they are all oscillations (as far as we know). That is why it is OK not to attempt to model them: over a long enough time, they even out.
Take a look at Figure 1. See how the hindcast does not get the peak at around 1940, but it does get the trend all the way back to 1910 (and look elsewhere you will see the trend captured even further back).
There are many sources of variability, but the trend emerges from them, and all the evidence indicates that it is going in one direction.

Braddles
November 15, 2011 2:51 am

“Initialized To Replicate Multidecadal Variability” sounds suspiciously like curve fitting.
It’s been said before, but every post on modelling should say it again: any model that uses curve fitting has no predictive value.

Espen
November 15, 2011 3:10 am

John B says:
Take a look at Figure 1. See how the hindcast does not get the peak at around 1940, but it does get the trend all the way back to 1910 (and look elsewhere you will see the trend captured even further back).

Take a look again, and note how the models follow the temperatures quite closely in 1975-2000. Obviously, since the natural variability due to oscillations was in an upwards trend from the seventies and until the 1998 El Niño, a model that didn’t include the multidecadal oscillations should trend significantly lower than the actual temperatures in the 1975-2000 period.

Editor
November 15, 2011 3:11 am

RACookPE1978 says: “Should that not be: ‘That’s IPCC speak, and it really does say that they are expecting global surface temperatures to flatten for the next two or three decades.'”
The quote from the IPCC report is about extremes. Here it is again, “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.
I don’t interpret that to say they’re expecting global temperatures to flatten, just that extremes (droughts, floods, record high temperatures, record low temperatures) are going to be of both signs, and they may not necessarily be extremes that one would associate with a warming world, because the effects of natural variability are still stronger than an anthropogenic signal, whatever that is.

Editor
November 15, 2011 3:13 am

H.R. says: “Now that I understand that, may I please be promoted to 5th grade?”
You may even be able to skip a few grades.

John B
November 15, 2011 3:31 am

Bob said, in the OP:
“the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades.”
—————
No, they haven’t done that at all. The models do not work by latching on to trends. They work by simulating the effects of known physics and various forcings on a simplified, gridded model of the atmosphere and oceans. If it happens to generate something close to linear, so be it, but they do not in any way assume a linear trend.
The hindcast shows that the models replicate past climate pretty well over scales of decades to centuries. The forecasts show what they show as a result of the same modelling under various emission scenarios, not as a result of extrapolating a linear trend.

1DandyTroll
November 15, 2011 3:37 am

So, essentially, they stated that there are no signals of climate change in natural climate, which would indicate that they don’t think natural climate ever changes, which of course it never does in either their bong-smoked statistics or their bubble-bong models.

John B
November 15, 2011 3:38 am

Espen says:
November 15, 2011 at 3:10 am
John B says:
Take a look at Figure 1. See how the hindcast does not get the peak at around 1940, but it does get the trend all the way back to 1910 (and look elsewhere you will see the trend captured even further back).
Take a look again, and note how the models follow the temperatures quite closely in 1975-2000. Obviously, since the natural variability due to oscillations was in an upwards trend from the seventies and until the 1998 El Niño, a model that didn’t include the multidecadal oscillations should trend significantly lower than the actual temperatures in the 1975-2000 period.
————————————
But the models do not include multidecadal oscillations, and were pretty good, though obviously they missed the spike due to the 1998 El Nino. So, maybe “natural variability” did not dominate from 1975-2000. Maybe CO2 was to blame. At least in part.

November 15, 2011 3:43 am

Since there are signs that the current burst of global warming is either slowing down or even stalling, a lot of attention is being directed to the existing natural variability factors.
The Atlantic Multidecadal Oscillation, better known as the AMO, has frequently been presented as some ‘mystifying’ natural force driving the sea surface temperatures (SST) of the North Atlantic.
New research shows that the AMO is simply a result of the thermohaline circulation powered by the North Atlantic Subpolar Gyre driving the Atlantic-Arctic exchange.
Put in the most simplistic terms: the AMO is a delayed response (with R^2 = 0.74) to the semi-permanent low atmospheric pressure system over Iceland (measured at Reykjavik / Stykkisholmur), as graphically shown here:
http://www.vukcevic.talktalk.net/theAMO.htm
including the link to the relevant pre-print paper (currently in ‘document technical moderation’ at the CCSd / HAL science archive).
Hi Bob, you did ask for details some time ago. Although the above is a bit of an over-the-top promo, any constructive comments will be considered for the final paper version. Tnx.

Brian H
November 15, 2011 4:06 am

John B, et al;
The model runs are not “initialized” on real world conditions at all. They’re just tweaked to see what effect fiddling the forcings has on their simplified assumptions. It’s not for nothing that the IPCC says in the fine print that they create ‘scenarios’, not projections. Even though they then go on to treat them as projections.

Vince Causey
November 15, 2011 4:10 am

A little bit of background explaining what is meant by “initialised” would be helpful. I am struggling to understand how a model could be run without being initialised. Surely it has to have initial values. Does it mean that these initial values are not as observed in the temperature record? It’s not very clear.

Ask why is it so?
November 15, 2011 4:11 am

My question is a little off subject, but when I look at a graph I want to know what the ‘0’ represents. I know it’s using a mean, say 1961-1990, but I don’t actually know what the temperature is, only the degrees above and below that mean. Is there a reason why this figure is not shown?

Brian H
November 15, 2011 4:16 am

Vince;
Not just temperature; there are a myriad of parameters and measures. The IPCC explicitly states that real world measurements at any point in time are not used to initialize the models, though I don’t have the link to hand. The “initialization” is made up out of whole cloth, just like the rest of the models.

EternalOptimist
November 15, 2011 4:18 am

Same question as Vince Causey here.
The way I understand the term, initialisation is a one-off starting point. Surely the models need to start in the right place, but they also need natural variability as part of their intrinsic functionality. It needs to be included somehow.
