Tisdale on model initialization in the wake of the leaked IPCC draft

Should Climate Models Be Initialized To Replicate The Multidecadal Variability Of The Instrument Temperature Record During The 20th Century?

Guest post by Bob Tisdale

The coupled climate models used to hindcast past climate and project future climate in the IPCC’s 2007 report (AR4) were not initialized so that they could reproduce the multidecadal variations that exist in the global temperature record. This has been known for years. For those who weren’t aware of it, refer to the Nature Climate Feedback post Predictions of climate, written by Kevin Trenberth.

The question this post asks is: should the IPCC’s coupled climate models be initialized so that they reproduce the multidecadal variability that exists in the instrument-based global temperature records of the past 100 years and project those multidecadal variations into the future?

Coincidentally, as I finished writing this post, I discovered Benny Peiser’s post with the title Leaked IPCC Draft: Climate Change Signals Expected To Be Relatively Small Over Coming 20-30 Years at WattsUpWithThat. It includes a link to the following quote from Richard Black of BBC News:

And for the future, the [IPCC] draft gives even less succour to those seeking here a new mandate for urgent action on greenhouse gas emissions, declaring: “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.

That’s IPCC speak, and it really doesn’t say they’re expecting global surface temperatures to flatten for the next two or three decades. And we have already found that at least one of the climate models submitted to the CMIP5 archive for inclusion in the IPCC’s AR5 does not reproduce a multidecadal temperature signal. In other words, that model shows no skill at matching the multidecadal temperature variations of the 20th Century. So the question still stands:

Should IPCC climate models be initialized so that they replicate the multidecadal variability of the instrument temperature record during the past 100 years and project those multidecadal variations into the future?

In the post An Initial Look At The Hindcasts Of The NCAR CCSM4 Coupled Climate Model, after illustrating that the NCAR CCSM4 (from the CMIP5 Archive, being used for the upcoming IPCC AR5) does not reproduce the multidecadal variations of the instrument temperature record of the 20th Century, I included the following discussion under the heading of NOTE ON MULTIDECADAL VARIABILITY OF THE MODELS:

…And when the models don’t resemble the global temperature observations, inasmuch as the models do not have the multidecadal variations of the instrument temperature record, the layman becomes wary. They casually research and discover that natural multidecadal variations have stopped global warming for 30 years at a time in the past, and they believe it can happen again. Also, the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades. In short, to the layman, the models appear bogus.

To help clarify those statements, and to present them using Sea Surface Temperatures, the source of the multidecadal variability, I’ve prepared Figure 1. It compares observations to climate model outputs for the period of 1910 to year-to-date 2011. The Global Sea Surface Temperature anomaly dataset is HADISST. The model output is the multi-model mean of the Sea Surface Temperature anomaly hindcasts and projections from the coupled climate models prepared for the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), published in 2007. As shown, the period of 1975 to 2000 is really the only multidecadal period when the models come close to matching the observed data. The two datasets diverge before and after that period.

Figure 1

Refer to Animation 1 for a further clarification. (It’s a 4-frame gif animation, with 15 seconds between frames.) It compares the linear trends of the Global Sea Surface Temperature anomaly observations to the model mean, same two datasets, for the periods of 1910 to 1945, 1945 to 1975, and 1975 to 2000. Sure does look like the models were programmed to latch onto that 1975 to 2000 portion of the data, which is an upward swing in the natural multidecadal variations.

Animation 1
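For readers who want to reproduce the trend comparison in Animation 1, here is a minimal sketch of the calculation. It assumes you have already exported annual-mean global Sea Surface Temperature anomalies for the observations and the model mean from the KNMI Climate Explorer; the array names are hypothetical, not something from the original post.

import numpy as np

def decadal_trend(years, series, start, end):
    # Least-squares linear trend of an annual series over [start, end], in deg C per decade
    mask = (years >= start) & (years <= end)
    slope_per_year = np.polyfit(years[mask], series[mask], 1)[0]
    return slope_per_year * 10.0

# 'years', 'obs_sst' and 'model_mean_sst' would be annual arrays loaded from
# the KNMI exports (hypothetical names):
# for start, end in [(1910, 1945), (1945, 1975), (1975, 2000)]:
#     print(start, end,
#           decadal_trend(years, obs_sst, start, end),
#           decadal_trend(years, model_mean_sst, start, end))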

A NOTE ABOUT BASE YEARS: Before somebody asks, I used the period of 1910 to 1940 as base years for anomalies. This period was chosen for an animation that I removed and posted separately. The base years make sense for the graphs included in that animation. But I used the same base years for the graphs that remain in this post, which is why all of the data has been shifted up from where you would normally expect to see it.

Figure 2 includes the linear trends of the Global Sea Surface Temperature observations from 1910 to 2010 and from 1975 to 2000, and it includes the trend for the model mean of the IPCC AR4 projection from 2000 to 2099. The data for the IPCC AR4 hindcast from 1910 to 2000 is also illustrated. The three trends are presented to show the disparity between them. The long-term (100-year) trend in the observations is only 0.054 deg C/decade. And keeping in mind that the trends for the models and observations were basically identical for the period of 1975 to 2000 (and approximately the same as the early warming period of 1910 to 1945), the high-end (short-term) trend for a warming period during those 100 years of observations is about twice the long-term trend, or approximately 0.11 deg C per decade. And then there’s the model forecast from 2000 to 2099. Its trend appears to go off on a tangent, skyrocketing at a pace that’s almost twice as high as the high-end short-term trend from the observations. The model trend is 0.2 deg C per decade. I said in the earlier post, “the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades.” The models not only continued that trend, they increased it substantially, and they’ve clearly overlooked the fact that there is a multidecadal component to the instrument temperature record for Sea Surface Temperatures. The IPCC projection looks bogus to anyone who takes the time to plot it. It really does.

Figure 2
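To make the disparity concrete, here is the back-of-the-envelope arithmetic implied by the three per-decade rates quoted above. The snippet only re-uses the figures in the post; it is illustrative, not a reconstruction of the spreadsheet behind Figure 2.

# Trends quoted in the post, in deg C per decade
obs_long_term   = 0.054  # observations, 1910 to 2010
obs_warming_leg = 0.11   # observations, 1975 to 2000 (roughly matches 1910 to 1945)
ar4_projection  = 0.2    # AR4 model-mean projection, 2000 to 2099

for name, rate in [("observed long-term rate", obs_long_term),
                   ("observed warming-leg rate", obs_warming_leg),
                   ("AR4 projected rate", ar4_projection)]:
    print(name, "implies about", round(rate * 10, 2), "deg C per century")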

CLOSING

The climate models used by the IPCC appear to be missing a number of components that produce the natural multidecadal signal that exists in the instrument-based Sea Surface Temperature record. And if these multidecadal components continue to exist over the next century at similar frequencies and magnitudes, future Sea Surface Temperature observations could fall well short of those projected by the models.

SOURCES

Both the HADISST Sea Surface Temperature data and the IPCC AR4 Hindcast/Projection (TOS) data used in this post are available through the KNMI Climate Explorer. The HADISST data is found at the Monthly observations webpage, and the model data is found at the Monthly CMIP3+ scenario runs webpage. I converted the monthly data to annual averages for this post to simplify the graphs and discussions. And again, the period of 1910 to 1940 was used as the base years for the anomalies.
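A minimal sketch of the two processing steps just described, converting monthly values to annual averages and then re-expressing them as anomalies against the 1910 to 1940 base period. The file name, delimiter and column layout are assumptions about the KNMI export format, so adjust them to whatever you actually download.

import pandas as pd

# Hypothetical two-column export (decimal year, value) with '#' header lines
df = pd.read_csv("hadisst_global_monthly.txt", sep=r"\s+",
                 names=["time", "sst"], comment="#")
df["year"] = df["time"].astype(int)

# Step 1: collapse the twelve monthly values to an annual mean
annual = df.groupby("year")["sst"].mean()

# Step 2: anomalies relative to the 1910-1940 base period used in this post
base_mean = annual.loc[1910:1940].mean()
anomalies = annual - base_mean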

ABOUT: Bob Tisdale – Climate Observations

119 Comments
Steve Garcia
November 15, 2011 7:34 am

Back in about 1998 I worked with a degreed meteorologist (since deceased) who couldn’t get a job as one (because of, basically, reverse discrimination) and who was one of the brighter people I’ve known. It was when I was first getting interested in the CAGW issue. I’d read enough by then to start questioning how much due diligence had been done before natural causes were ruled out.
He expressed dismay at the idea that they were projecting the temperatures out to the year 2100. He had done atmospheric modeling himself, in school, and knew the whys and wherefores of it all. He specifically pointed out that the models did not hindcast worth a diddly, and said that any model that can’t replicate the recent past is nowhere near reliable enough to depend on.
He was also doubtful of the data. He was the one who turned me on to the “Great Dying of the Thermometers” at the end of the 1980s, though he did not refer to it by that name. He was extremely scornful of any possible reason to exclude meteorological stations at a time when data storage and processing capacity were increasing by orders of magnitude.
Truthfully, I have not seen ONE bit of info since that day that has been capable of swaying me to the pro-CAGW side. VERY little even argues in that direction at all, much less being capable of convincing me. So much argument on that side points to the models, and the models are incapable of replicating the past, of vetting themselves, as Bob is pointing out here.
Nothing has really changed, then, since 1998. The models are still not proven out, and any reference to them as a basis for anything whatsoever is useless, and should be SEEN as useless, even by the people pointing at the models. What could possibly be in their minds, thinking that the models are convincing?

November 15, 2011 7:35 am

“What variables should be used in a model that replicates the multidecadal variability of the 20th Century or more centuries back?”
i) The level of solar activity. Though there needs to be more knowledge as to which components of such variability have most impact on the size and intensity of the polar vortices. It is that which controls the behaviour of the mobile polar highs described by Marcel Leroux and thus ultimately gives the sun power over items ii) to iv) below.
ii) The Pacific Multidecadal Oscillation (not the PDO, which is merely a derivative of ENSO data). Unfortunately we are not yet aware of the reason for the 50 to 60 year cycling of that phenomenon, but the current position within the cycle would be highly relevant for prediction purposes.
iii) The size, intensity and latitudinal positions of the main permanent climate zones because they control the rate of energy flow from surface to stratosphere which results in net warming or net cooling at any given time.
iv) Global cloudiness and albedo, because they indicate how much solar energy is actually getting into the oceans to fuel the climate system.
Anything else would be subsumed within those parameters.
Needless to say the current models are not well designed in any of those areas.

Theo Goodwin
November 15, 2011 7:36 am

John B says:
November 15, 2011 at 3:38 am
Would some Moderator please give John B the standard Troll Treatment?
[No. One does not strengthen things (mental or physical) unless you exercise against resistance. 8<) Robt.]

November 15, 2011 7:38 am

Bob:
You are a WONDER.
However, I must point out that THE KING HAS NO NEW CLOTHES!
The “sea surface temperatures” are based, prior to the ’80s or ’90s or even the ARGO BUOYS, fundamentally on a FICTION.
Ships’ logs? Guys throwing BUCKETS over the side? Calibration, consistency, QUALITY ASSURANCE? Absolutely lacking. Those data have been manipulated to show WHAT THEY WANTED. I think they are completely BOGUS.
I think others should QUESTION THE SOURCE OF THIS HIGHLY PROCESSED DATA and not accept it at face value.
Max

Neo
November 15, 2011 7:46 am

What kind of modeler only uses 25 years of “training data” when there is over a century of data available? None who are reputable.

Theo Goodwin
November 15, 2011 7:54 am

Yet another brilliant article by Bob Tisdale. Thanks, Mr. Tisdale.
The big lessons are clear to the many who made comments above. The modelers put all their eggs in the CO2 basket. CO2 concentration in the atmosphere increases linearly, at least given their relatively simple-minded assumptions. So the warming had to go up linearly, as in the 1975 to 2000 period. In other words, they treated CO2 and its effects on radiation from the sun as the only natural processes that required modeling. Now they are being forced to admit that other natural processes must be treated as important in their own right. The sum total of all those natural processes makes up most of what is called natural variability. The natural processes must be understood in their own right, as physical theory (physics) has always done. Climate science must investigate those natural processes and create physical hypotheses that describe the natural regularities found in them. Once this project is well under way, climate science will be on its way to becoming a genuine physical science.
Isn’t it amazing that Trenberth can show that he understands the problems with the models yet continues to act as an attack dog for climate science?

John B
November 15, 2011 7:58 am

Bob Tisdale says:
November 15, 2011 at 7:09 am
John B says: “But the models do not include multidecadal oscillations…”
That’s what this post is about, John B. The question posed by the post is: should they include them? I believe I asked the question twice. Did you read the post? If not, why are you wasting our time? Also, this comment by you contradicts one of your earlier ones, which means you’re not a very good troll.
————————
Yes, I read the post. No, I did not contradict myself. Here is my point:
The models do not include multidecadal oscillations. They do not need to in order to model long-term trends. Your Figure 1 shows this. Your Animation 1 is cherry-picking, presumably aimed at showing otherwise.

ferd berple
November 15, 2011 8:02 am

“Dusty says:
November 15, 2011 at 5:30 am
Will somebody please explain to this retired engineer why a linear extrapolation of a curve based on ill defined chaotic sources (especially out to a hundred years) should have any predictive value?”
Given the forecasting power of climate models, why not use the GCM’s to forecast stock prices and make a killing in the market to pay off the national debt and pay to turn the economy green? Isn’t it time we asked the climate scientists to use their CO2 driven Ouija Boards to help pay the cost of going green?
Surely it is simpler to forecast the future value of a limited mix of industrial stocks than it is to forecast the future climate. Why do climate scientists keep asking for money? Surely the GCM’s in their spare time should be able to tell Hansen and Co where to invest, to pay for the computers and conferences, so they don’t need government assistance.
Like temperature, the Dow has been going up for 100+ years. So surely if we can predict climate we can predict stock market values with even greater accuracy. So why hasn’t every climate scientist, like every economist, retired long ago on their investments? Why do they need any government grants?
http://en.wikipedia.org/wiki/File:DJIA_historical_graph_to_jan09_%28log%29.svg

Theo Goodwin
November 15, 2011 8:03 am

Dear Moderator:
John B is creating posts to waste Bob Tisdale’s time. Mr. Tisdale is too nice a man to simply ignore John B, though he should ignore John B. Let us please not enable John B’s harassment of Mr. Tisdale.

bubbagyro
November 15, 2011 8:09 am

Max:
Yes, GIGO.
I, as a layman to street gambling, became very wary of the street gambler’s game in New York City, when the marks (the “investors”) could not find the pea under the walnut shell, even though the first dude always found it. When you see the word “layman”, substitute “mark”. This makes the rest of the IPCC blurb more easily understood.

Editor
November 15, 2011 8:16 am

Ask why is it so? says: “my question is a little off subject but when I look at a graph I want to know what the ’0′ represents. I know it’s using a mean say 1961-1990 but I don’t actually know what the temperature is, only the degrees above and below that mean. Is there a reason why this figure is not shown?”
First: In general, the Global Land Plus Sea Surface Temperature anomaly datasets supplied by GISS, Hadley Centre, and NCDC are only supplied as anomalies. So there is no way that I could present that to you if this post was about that data.
Second: This post, however, is about Sea Surface Temperature data, and Sea Surface Temperature is presented by the suppliers in absolute form rather than as anomalies (except for one Hadley Centre dataset), so I could present it. But I normally download the data after it’s already been converted to anomalies by the KNMI Climate Explorer website (or the NOAA NOMADS website), because that eliminates another step in my data handling. But since you’ve asked, the observed (HADISST) average Sea Surface Temperature during the years of 1910 to 1940 (those were the base years I used in this post) was 17.9 deg C, while for the models it was 17.6 deg C.
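In code form, the conversion described in that reply is just a subtraction of the base-period mean. A tiny sketch with made-up numbers (only the 17.9 deg C figure comes from the reply above):

import pandas as pd

# Illustrative annual series of global-mean SST in deg C (made-up values)
absolute_sst = pd.Series({1910: 17.8, 1925: 17.9, 1940: 18.0, 2000: 18.3})

# Anomaly = absolute value minus the 1910-1940 base-period mean (~17.9 deg C for HADISST)
base_mean = absolute_sst.loc[1910:1940].mean()
anomalies = absolute_sst - base_mean
print(anomalies)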

TomRude
November 15, 2011 8:16 am

If my memory does not betray me, I recall Climate Audit had a post on something similar, showing model compliance with the temperature record. And that Weaver’s model only worked with increasing temps while diverging otherwise…

Latitude
November 15, 2011 8:19 am

Max Hugoson says:
November 15, 2011 at 7:38 am
===================================
Max, it is all based on fiction……but the fiction was Goldilocks
too hot, too cold, too much ice, too little ice, etc

Pascvaks
November 15, 2011 8:20 am

If you want to move the world you need a long, strong plank, a fulcrum, and a high place to stand before you jump. Actual measurements from the 20th Century sure seem a mite more sturdy than guesstimates. At least you get the last hundred years right if nothing else succeeds. What the hay, we ought to give it a try; I’m sure the Chinese will lend us a few more $trillions$ to reprogram their new super-duper computers at 40% interest compounded hourly. It’s only money, right? And we are at war with nature, the biggest, baddest, meanest SOB on the planet, so what-the-hay mate, let’s do it! Oh yes, nearly forgot, (SarcOff)
PS: What are we putting in the water? Everyone seems to be going crazy.

ferd berple
November 15, 2011 8:22 am

John B says:
November 15, 2011 at 5:41 am
And the analogy goes further: we can’t model the details of the beach, the turbulence in the waves, the wind on a particular day, etc., but we can still produce tide tables.
Tide tables are not modeled based on forcing and feedback. They are modeled based on observed, repeatable cycles, similar to the way humans first forecast the cycle of the seasons, the migration of animals and the orbits of the planets. We observe, find the pattern, then forecast based on this pattern repeating. If the forecast works, it may have skill. If not, it is wrong and we start over.

Editor
November 15, 2011 8:23 am

Dusty says: “Will somebody please explain to this retired engineer why a linear extrapolation of a curve based on ill defined chaotic sources (especially out to a hundred years) should have any predictive value?”
Dusty, I haven’t used the trends for predictions. I haven’t made any predictions or projections in this post.

November 15, 2011 8:27 am

IPCC: “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.
Hansen has also said similar things. What is really happening is a signal-to-noise problem of the kind we always have with scientific measurements. It appears that in trying to observe climate change the noise is so high that the signal simply gets lost in it. That being the case, why bother looking for a signal that you can neither see nor measure? For my money, the coming climate catastrophe that this invisible signal is supposed to predict is a pseudoscientific delusion, and their claims of anthropogenic global warming are simply fantasies.

Let’s take a look at the climate of the last 100 years. The last IPCC report predicted warming of 0.2 degrees per decade for this century. We got zero warming for 13 years. That comes from using climate models that are invalid. Ferenc Miskolczi has shown, using the NOAA weather balloon database that goes back to 1948, that the transmittance of the atmosphere in the infrared has been constant for the last 61 years. Carbon dioxide at the same time increased by 21.6 percent. This means that the addition of all this carbon dioxide had no effect whatsoever on the absorption of IR by the atmosphere. And no absorption means no greenhouse effect, the foundation stone of IPCC models.

If you look further you realize that no observations of nature exist that can be said to verify the existence of greenhouse warming. Satellites show that within the last 31 years there was only one short four-year spurt of warming. It started in 1998, raised global temperature by a third of a degree, and then stopped. It was oceanic, not greenhouse, in nature. The only other warming in the twentieth century started in 1910 and stopped with World War II. Bjørn Lomborg sees it as caused by solar influence. He is probably right, because you cannot turn carbon dioxide on and off like that. Between these two warmings temperature went nowhere while carbon dioxide kept going up. This leaves just Arctic warming to explain. It started suddenly at the turn of the twentieth century, after two thousand years of cooling. It cannot be greenhouse warming because carbon dioxide did not increase in synch with it. This covers the last one hundred years, and as required by Miskolczi no warming within this time period can be called a greenhouse warming. And going back to the signal and noise problem, we can say that not being able to detect climate change is simply a case of an absent signal, not too much noise.

TomT
November 15, 2011 8:27 am

I agree with Bob, John B, why don’t you let people who want to discuss this discuss it, and just go away? It isn’t helpful either to say that skeptics are led by ideology. We are led by facts.
I started being interested in this long before I heard of Gore, and I bet before he heard of global warming. In 1972 my teachers told me that pollution might cause an ice age. By 1980 the New York Times was talking about global warming. I wanted to know what had happened to the ice age that had the same cause. So I have been following this ever since. The day that I see proof that AGW will cause a catastrophe is the day I believe it. So far no such proof exists.

John B
November 15, 2011 8:31 am

ferd berple says:
November 15, 2011 at 8:02 am
“Dusty says:
November 15, 2011 at 5:30 am
Will somebody please explain to this retired engineer why a linear extrapolation of a curve based on ill defined chaotic sources (especially out to a hundred years) should have any predictive value?”
Given the forecasting power of climate models, why not use the GCM’s to forecast stock prices and make a killing in the market to pay off the national debt and pay to turn the economy green?

————–
Short answer, the “efficient market hypothesis”:
http://en.wikipedia.org/wiki/Efficient-market_hypothesis

Editor
November 15, 2011 8:32 am

juanslayton says: “At the top of fig 2 it says y=0.0054x. Should that be 0.054x?”
EXCEL presents the trend on a yearly basis; the slope of 0.0054 deg C per year is the same as 0.054 deg C per decade. I simply converted it to decades for the post.

November 15, 2011 8:57 am

>>
John B says:
November 15, 2011 at 2:46 am
The multidecadal oscillations do not exhibit a trend, they oscillate. Some are periodic, some, like ENSO, not so much, but they are all oscillations (as far as we know). That is why it is OK to not attempt to model them – over a long enough time, they even out.
<<
This is nonsense. The only way two oscillations will cancel (even out) is if they are exactly tuned to the same frequency, are exactly 180 degrees out of phase, and are exactly the same amplitude. I doubt anyone (but you) is making this statement. Activate two tuning forks whose fundamental frequencies differ by a few Hertz and they will “beat.” The “beat” tone will noticeably get louder and quieter. The beat frequency is the difference of the two oscillators. (Superheterodyne receivers work by this principle.) All the natural oscillation frequencies and amplitudes in climate differ, so that none will magically “even out.”
Jim
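For reference, the identity behind the beat Jim describes is standard trigonometry, not something from the thread:

\sin(2\pi f_1 t) + \sin(2\pi f_2 t) = 2\,\cos\!\left(2\pi\,\tfrac{f_1 - f_2}{2}\,t\right)\sin\!\left(2\pi\,\tfrac{f_1 + f_2}{2}\,t\right)

The summed signal oscillates near the average frequency while its loudness envelope rises and falls at the beat frequency |f_1 - f_2|; two equal-amplitude oscillations cancel completely only in the special equal-frequency, 180-degrees-out-of-phase case Jim lists.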

Craig Moore
November 15, 2011 8:58 am

Similar to but slightly different in thrust is Pielke, Sr.’s latest post of the tremendous waste of money and resources in propping up the model memes. http://pielkeclimatesci.wordpress.com/2011/11/15/the-huge-waste-of-research-money-in-providing-multi-decadal-climate-projections-for-the-new-ipcc-report/

Hoser
November 15, 2011 8:59 am

There are many GCMs and many climate scientists playing with their toy models. It is up to them to decide how to make them work. Clearly the GCMs have no scientifically relevant predictive power. Consequently, they should not be used to justify policy decisions. However, they do have politically useful predictive power, and that is precisely why they are used for policy decisions. Therefore, the models currently are “working” from the perspective of funding.
A solution to the dilemma would be to create a correct model. However, even if the computing power were available, there are unpredictable external factors that cannot be modeled. Predicting climate is about the same as predicting the future Dow Jones average. If anyone really could predict stock prices, they’d be very rich. On the other hand, if anyone could predict climate accurately, they would be ridiculed by the majority of their peers, their work would not be published, and their research would lose funding. Would it all be worthwhile even if you eventually got a Nobel, after say 20+ years of abuse?

Downdraft
November 15, 2011 9:07 am

Including the multidecadal variation of the observed temperatures in the models, even if the cause is not known, would be a valid change to the models and improve projections significantly. Ignoring it degrades the accuracy of the temperature projections, but accurate projections are not the purpose of the models. The models serve the purpose for which they were written.
I suspect the modelers began with the assumption that without increasing CO2 concentrations, temperatures would be stable, and then built the model to replicate the short term trend in the temperature record. In an email from EPA some years ago, the responder indicated that, indeed, without increasing CO2, they believed temperatures would be stable. He also included a list of all the catastrophes that would occur if nothing is done. There were no positive outcomes, of course.

John B
November 15, 2011 9:12 am

Jim Masterson says:
November 15, 2011 at 8:57 am
>>
John B says:
November 15, 2011 at 2:46 am
The multidecadal oscillations do not exhibit a trend, they oscillate. Some are periodic, some, like ENSO, not so much, but they are all oscillations (as far as we know). That is why it is OK to not attempt to model them – over a long enough time, they even out.
<<
This is nonsense. The only way two oscillations will cancel (even out) is if they are exactly tuned to the same frequency, are exactly 180 degrees out of phase, and are exactly the same amplitude. I doubt anyone (but you) is making this statement. Activate two tuning forks whose fundamental frequencies differ by a few Hertz and they will “beat.” The “beat” tone will noticeably get louder and quieter. The beat frequency is the difference of the two oscillators. (Superheterodyne receivers work by this principle.) All the natural oscillation frequencies and amplitudes in climate differ, so that none will magically “even out.”
Jim
============
Jim, I obviously didn't make myself clear enough. By "even out", I meant "not contribute to the long term trend". Even if there were only one oscillation, as long as it oscillates around some mean, it will have no net contribution to any longer term trend. I was not referring to beat frequencies between oscillations. My apologies for sloppy wording.
You can see quite clearly what I am referring to in Bob's Figure 1. Lots of ups and downs that are not captured by the models, but the long-term trend is as generated by the models (which do not model the oscillations).
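John B's claim can be checked numerically. A minimal sketch with synthetic data (nothing here comes from any model run): fit a linear trend to a series with and without a zero-mean 60-year oscillation added, over a window much longer than the oscillation, and the recovered slopes are nearly identical.

import numpy as np

years = np.arange(1900, 2100)
trend = 0.01 * (years - years[0])                              # underlying warming: 0.1 deg C per decade
cycle = 0.15 * np.sin(2 * np.pi * (years - years[0]) / 60.0)   # zero-mean 60-year oscillation

fit_trend_only = np.polyfit(years, trend, 1)[0] * 10
fit_with_cycle = np.polyfit(years, trend + cycle, 1)[0] * 10

print("fitted slope, trend only      :", round(fit_trend_only, 3), "deg C/decade")
print("fitted slope, trend plus cycle:", round(fit_with_cycle, 3), "deg C/decade")

Over a 25 to 30 year sub-window, by contrast, the same 0.15 deg C oscillation swings roughly 0.3 deg C from trough to peak, comparable to the underlying warming over that window; that difference in timescale is what the disagreement in this thread comes down to.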