Should Climate Models Be Initialized To Replicate The Multidecadal Variability Of The Instrument Temperature Record During The 20th Century?
Guest post by Bob Tisdale
The coupled climate models used to hindcast past and project future climate in the IPCC’s 2007 report AR4 were not initialized so that they could reproduce the multidecadal variations that exist in the global temperature record. This has been known for years. For those who weren’t aware of it, refer to Nature’s Climate Feedback: Predictions of climate post, written by Kevin Trenberth.
The question this post asks is: should the IPCC's coupled climate models be initialized so that they reproduce the multidecadal variability that exists in the instrument-based global temperature records of the past 100 years, and so that they project those multidecadal variations into the future?
Coincidentally, as I finished writing this post, I discovered Benny Peiser’s post with the title Leaked IPCC Draft: Climate Change Signals Expected To Be Relatively Small Over Coming 20-30 Years at WattsUpWithThat. It includes a link to the following quote from Richard Black of BBC News:
And for the future, the [IPCC] draft gives even less succour to those seeking here a new mandate for urgent action on greenhouse gas emissions, declaring: “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.
That’s IPCC speak, and it really doesn’t say they’re expecting global surface temperatures to flatten for the next two or three decades. And we have already found that at least one of the climate models submitted to the CMIP5 archive for inclusion in the IPCC’s AR5 does not reproduce a multidecadal temperature signal. In other words, that model shows no skill at matching the multidecadal temperature variations of the 20th Century. So the question still stands:
Should IPCC climate models be initialized so that they replicate the multidecadal variability of the instrument temperature record during the past 100 years and project those multidecadal variations into the future?
In the post An Initial Look At The Hindcasts Of The NCAR CCSM4 Coupled Climate Model, after illustrating that the NCAR CCSM4 (from the CMIP5 Archive, being used for the upcoming IPCC AR5) does not reproduce the multidecadal variations of the instrument temperature record of the 20th Century, I included the following discussion under the heading of NOTE ON MULTIDECADAL VARIABILITY OF THE MODELS:
…And when the models don’t resemble the global temperature observations, inasmuch as the models do not have the multidecadal variations of the instrument temperature record, the layman becomes wary. They casually research and discover that natural multidecadal variations have halted global warming for 30 years at a time in the past, and they believe it can happen again. Also, the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades. In short, to the layman, the models appear bogus.
To help clarify those statements and to present them using Sea Surface Temperatures, the source of the multidecadal variability, I’ve prepared Figure 1. It compares observations to climate model outputs for the period of 1910 to year-to-date 2011. The Global Sea Surface Temperature anomaly dataset is HADISST. The model output is the model mean of the Sea Surface Temperature anomaly hindcasts and projections from the coupled climate models prepared for the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), published in 2007. As shown, the period of 1975 to 2000 is really the only multidecadal period when the models come close to matching the observed data. The two datasets diverge before and after that period.
Figure 1
Refer to Animation 1 for a further clarification. (It’s a 4-frame gif animation, with 15 seconds between frames.) It compares the linear trends of the Global Sea Surface Temperature anomaly observations to the model mean, same two datasets, for the periods of 1910 to 1945, 1945 to 1975, and 1975 to 2000. Sure does look like the models were programmed to latch onto that 1975 to 2000 portion of the data, which is an upward swing in the natural multidecadal variations.
Animation 1
A NOTE ABOUT BASE YEARS: Before somebody asks, I used the period of 1910 to 1940 as base years for anomalies. This period was chosen for an animation that I removed and posted separately. The base years make sense for the graphs included in that animation. But I used the same base years for the graphs that remain in this post, which is why all of the data has been shifted up from where you would normally expect to see it.
Figure 2 includes the linear trends of the Global Sea Surface Temperature observations from 1910 to 2010 and from 1975 to 2000, and it includes the trend for the model mean of the IPCC AR4 projection from 2000 to 2099. The data for the IPCC AR4 hindcast from 1910 to 2000 is also illustrated. The three trends are presented to show the disparity between them. The long-term (100-year) trend in the observations is only 0.054 deg C/decade. And keeping in mind that the trends for the models and observations were basically identical for the period of 1975 to 2000 (and approximately the same as the early warming period of 1910 to 1945), the high-end (short-term) trend for a warming period during those 100 years of observations is about twice the long-term trend, or approximately 0.11 deg C per decade. And then there’s the model projection from 2000 to 2099. Its trend appears to go off at a tangent, skyrocketing at a pace that’s almost twice as high as the high-end short-term trend from the observations. The model trend is 0.2 deg C per decade. I said in the earlier post, “the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades.” The models not only continued that trend, they increased it substantially, and they’ve clearly overlooked the fact that there is a multidecadal component to the instrument-based Sea Surface Temperature record. The IPCC projection looks bogus to anyone who takes the time to plot it. It really does.
Figure 2
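For readers who want to check this kind of trend arithmetic themselves, here is a minimal sketch of how a least-squares trend in deg C per decade can be computed from annual anomalies. The series below is synthetic and purely illustrative; it is not the HADISST data or the model mean.

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """Least-squares linear trend, converted from deg C/year to deg C/decade."""
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return slope_per_year * 10.0

# Synthetic stand-in for an annual SST anomaly series, 1910-2010.
years = np.arange(1910, 2011)
rng = np.random.default_rng(0)
anoms = (0.0054 * (years - 1910)
         + 0.1 * np.sin(2 * np.pi * (years - 1910) / 65.0)
         + rng.normal(0.0, 0.05, years.size))

print("1910-2010 trend:", round(trend_per_decade(years, anoms), 3), "deg C/decade")

recent = (years >= 1975) & (years <= 2000)
print("1975-2000 trend:", round(trend_per_decade(years[recent], anoms[recent]), 3), "deg C/decade")
```

Applying the same function to the 1910-1945, 1945-1975, and 1975-2000 sub-periods gives the kind of trend comparison shown in Animation 1.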
CLOSING
The climate models used by the IPCC appear to be missing a number of components that produce the natural multidecadal signal that exists in the instrument-based Sea Surface Temperature record. And if these multidecadal components continue to exist over the next century at similar frequencies and magnitudes, future Sea Surface Temperature observations could fall well short of those projected by the models.
SOURCES
Both the HADISST Sea Surface Temperature data and the IPCC AR4 Hindcast/Projection (TOS) data used in this post are available through the KNMI Climate Explorer. The HADISST data is found at the Monthly observations webpage, and the model data is found at the Monthly CMIP3+ scenario runs webpage. I converted the monthly data to annual averages for this post to simplify the graphs and discussions. And again, the period of 1910 to 1940 was used as the base years for the anomalies.
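As a rough sketch of the data handling described above (monthly values averaged into annual means, then expressed as anomalies from the 1910-1940 mean), assuming a simple pair of year/value arrays rather than the exact layout of a KNMI Climate Explorer export:

```python
import numpy as np

def annual_means(month_years, month_values):
    """Average monthly values into calendar-year means."""
    years = np.unique(month_years)
    annual = np.array([month_values[month_years == y].mean() for y in years])
    return years, annual

def to_anomalies(years, annual, base=(1910, 1940)):
    """Express an annual series as anomalies from the mean of the base period."""
    in_base = (years >= base[0]) & (years <= base[1])
    return annual - annual[in_base].mean()

# Synthetic monthly series: each year label repeated 12 times.
month_years = np.repeat(np.arange(1910, 2011), 12)
month_values = np.random.default_rng(1).normal(18.0, 0.3, month_years.size)

years, annual = annual_means(month_years, month_values)
anomalies = to_anomalies(years, annual)   # relative to 1910-1940, as in the figures above
```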
ABOUT: Bob Tisdale – Climate Observations



Brian H says:
November 15, 2011 at 4:06 am
John B, et al;
The model runs are not “initialized” on real world conditions at all. They’re just tweaked to see what effect fiddling the forcings has on their simplified assumptions. It’s not for nothing that the IPCC says in the fine print that they create ‘scenarios’, not projections. Even though they then go on to treat them as projections.
————————
The IPCC publishes projections based on scenarios. For example, if CO2 emissions level off (scenario), we would see something like this (projection).
EternalOptimist says:
November 15, 2011 at 4:18 am
Same question as Vince Causey here.
The way I understand the term, Initialisation is a one-off starting point. Surely the models need to start in the right place, but they also need natural variability as part of their intrinsic functionality. It needs to be included somehow.
————————–
Actually, they don’t. A model can even be “cold started”. That means that you can assume no wind, no currents, and “climatological” (i.e. average) temperatures, etc., at start up. You then run the model for a while to let it “spin up”. If it is a good model, it will start to resemble reality – winds will start to blow, regional differences will appear, etc. And you can verify how good your model is by starting it from different places and seeing if it converges on reality.
Here is an example from oceanic modeling:
http://www.oc.nps.edu/nom/modeling/initial.html
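A toy illustration of the cold-start/spin-up idea, and nothing like a real ocean model: a damped, forced system forgets its initial condition after enough spin-up steps, so very different starting states converge toward the same quasi-equilibrium.

```python
def spin_up(initial_state, steps=500, damping=0.9, forcing=1.0):
    """Iterate a trivially simple damped, forced system from a given starting state."""
    state = initial_state
    for _ in range(steps):
        state = damping * state + (1.0 - damping) * forcing
    return state

# A "cold" start (0.0) and a wildly "hot" start (25.0) both settle near the
# forced equilibrium of 1.0 once the spin-up is long enough.
print(spin_up(0.0), spin_up(25.0))
```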
This is not new news; here’s Kevin Trenberth in his Nature blog, June 4, 2007:
“In fact there are no predictions by IPCC at all. And there never have been. The IPCC instead proffers “what if” projections of future climate that correspond to certain emissions scenarios. There are a number of assumptions that go into these emissions scenarios. They are intended to cover a range of possible self consistent “story lines” that then provide decision makers with information about which paths might be more desirable. But they do not consider many things like the recovery of the ozone layer, for instance, or observed trends in forcing agents. There is no estimate, even probabilistically, as to the likelihood of any emissions scenario and no best guess.
Even if there were, the projections are based on model results that provide differences of the future climate relative to that today. None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models. There is neither an El Niño sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond. The Atlantic Multidecadal Oscillation, that may depend on the thermohaline circulation and thus ocean currents in the Atlantic, is not set up to match today’s state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe. Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.”
John B, my understanding is that the models can only hindcast the climate if sulphates are fed into them. In fact, Trenberth himself says that El Niño and the PDO are critical modes of variability that affect the Pacific Rim countries. How does that square with them evening out over a decade?
climate change signals are expected to be relatively small compared to natural climate variability
===============================
And I say that “climate change signals” are way too small to be modeled…
…and we don’t know enough about “natural climate variability” to model it
Which makes climate models fun games……but worth less than the paper they are printed on
Yes, they should be initialized with a natural variability component.
Just call it a temporary ocean circulation energy transfer (additional absorbed for periods and then additional energy released in other periods).
What they will find, however, is that their greenhouse forcing formulae do not work. The GHG impact is much lower in that scenario – less than half in fact.
So, I’m sure some of the climate modelers have played around with some cycles (regular or not) but they soon abandon the effort because then the rest of their math / the GHG forced climate model does not work.
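A toy version of the bookkeeping being suggested here, with entirely synthetic series; it only shows that adding an assumed multidecadal oscillation to a fit can change the coefficient attributed to the greenhouse forcing, not that the real-world figure would fall by half:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2011)

ghg = (years - 1900) / 110.0                               # synthetic rising forcing
oscillation = np.sin(2 * np.pi * (years - 1910) / 65.0)    # assumed ~65-year cycle
temperature = 0.3 * ghg + 0.15 * oscillation + 0.03 * rng.normal(size=years.size)

def fitted_coefficients(target, *predictors):
    """Ordinary least squares with an intercept; returns one coefficient per predictor."""
    design = np.column_stack(list(predictors) + [np.ones_like(target)])
    return np.linalg.lstsq(design, target, rcond=None)[0][:-1]

print("GHG coefficient, forcing only:         ", round(fitted_coefficients(temperature, ghg)[0], 3))
print("GHG coefficient, forcing + oscillation:", round(fitted_coefficients(temperature, ghg, oscillation)[0], 3))
# How much the coefficient moves depends on how strongly the assumed oscillation
# happens to be correlated with the forcing over the fit window.
```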
Bill Illis says:
November 15, 2011 at 4:56 am
Just call it a temporary ocean circulation energy transfer (additional absorbed for periods and then additional energy released in other periods).
It could be far more precise than that; at least in the Atlantic there is a direct forewarning, by at least 4 but more often up to 10 years in advance, by the Icelandic low atmospheric pressure system. See my post above or http://www.vukcevic.talktalk.net/theAMO.htm
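One way to test a claimed multi-year lead like this is a simple lagged correlation. The sketch below uses synthetic series standing in for the pressure index and the AMO; it is not Vukcevic's data or method.

```python
import numpy as np

def lagged_corr(leader, follower, lag):
    """Correlation of `leader` with `follower` shifted `lag` years later."""
    if lag == 0:
        return np.corrcoef(leader, follower)[0, 1]
    return np.corrcoef(leader[:-lag], follower[lag:])[0, 1]

rng = np.random.default_rng(3)
pressure_index = rng.normal(size=150)                                # synthetic "leader"
amo_like = np.roll(pressure_index, 7) + 0.3 * rng.normal(size=150)   # built-in 7-year lag

best_lag = max(range(0, 15), key=lambda k: lagged_corr(pressure_index, amo_like, k))
print("lag with the highest correlation:", best_lag, "years")
```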
M.A.Vukcevic says:
November 15, 2011 at 5:10 am
——————
I think the AMO and the ENSO are the most important cycles. There are a few others that could be used. But not everyone agrees that the AMO could be like a “forcing” – operating independently and causing its own climate variability.
If some want to use the ocean in general rather than these specific components, then it doesn’t matter and they still end up with a climate model that does not work.
geronimo says:
November 15, 2011 at 4:48 am
John B, my understanding is that the models can only hindcast the climate if sulphates are fed into them. In fact, Trenberth himself says that El Niño and the PDO are critical modes of variability that affect the Pacific Rim countries. How does that square with them evening out over a decade?
—————
Trenberth is saying that climate models do not forecast regional climate effects. They produce projections that are meaningful at global or hemispheric scales. For example, the models say “it’s going to get hotter”, but they do not say “it’s going to get hotter in (say) Mexico”. Yes, El Niño and other effects are important if you live in an area affected by them, but climate models will not help predict when those effects will occur – that is not what they are trying to do. The climate models say, ENSO notwithstanding, that it is going to get hotter on average, though not at every single point on Earth and not that every year will be hotter than the last.
I am sure you are right about sulphates. What is your point?
Will somebody please explain to this retired engineer why a linear extrapolation of a curve based on ill defined chaotic sources (especially out to a hundred years) should have any predictive value? It seems to me to be a naive and extremely unscientific approach.
An excellent piece of work. In finance we refer to backtesting, not hindcasting, and any model that does not backtest with reasonable results (and even many that do) is automatically rejected.
John B says:
“The models do not work by latching on to trends. They work by simulating the effects of known physics and various forcings on a simplified, gridded model of the atmosphere and oceans. If it happens to generate something close to linear, so be it, but they do not in any way assume a linear trend.”
Comparing the Mauna Loa CO2 chart from 1975 to the present day, it appears that the only “known physics and various forcings” required to replicate the gradient in the IPCC AR4 sea temperature anomalies model is CO2.
Latitude says:
November 15, 2011 at 4:50 am
…
And I say that “climate change signals” are way too small to be modeled…
…
Which makes climate models fun games……but worth less than the paper they are printed on
——————
That doesn’t follow. As an analogy, we can model the tides pretty accurately, but if you stand on a beach and watch the waves crashing in, you might be tempted to say “the tide signal is too small to be modelled”. So it is with climate. And the analogy goes further: we can’t model the details of the beach, the turbulence in the waves, the wind on a particular day, etc., but we can still produce tide tables. The waves may be hugely important if you are swimming on that beach, but they do not affect the longer term trend of the tide.
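The tide-versus-waves analogy can be made concrete with a toy decomposition: a slow, predictable component plus fast, unpredictable chop. A fit to the known slow component recovers the "tide" even though no attempt is made to model the individual "waves". Purely illustrative, not a real tidal model:

```python
import numpy as np

hours = np.linspace(0.0, 48.0, 2000)
tide = 1.5 * np.sin(2 * np.pi * hours / 12.42)                   # semi-diurnal tide, metres
waves = 0.4 * np.random.default_rng(4).normal(size=hours.size)   # unpredictable chop
sea_level = tide + waves

# Fit only the known tidal frequency (sine and cosine terms) to the noisy record.
design = np.column_stack([np.sin(2 * np.pi * hours / 12.42),
                          np.cos(2 * np.pi * hours / 12.42)])
coefficients, *_ = np.linalg.lstsq(design, sea_level, rcond=None)
recovered_amplitude = np.hypot(*coefficients)
print("recovered tidal amplitude ~", round(recovered_amplitude, 2), "m (true value 1.5 m)")
```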
John B (November 15, 2011 at 2:46 am) wrote:
“There are many variabilities, but the trend emerges from them and all the evidence points that it is going in one direction.”
Scarce few disagree. The issue remains that nature’s variability is being ignored and misrepresented.
Chaotic model “initialization” to extrapolate imaginarily-stationary multidecadal oscillations would be a uselessly abstract wool-over-eyes exercise. No one sensible would trust it since there remains so much work to finish sorting out the spatiotemporal geometry of constraints: http://wattsupwiththat.files.wordpress.com/2011/10/vaughn-sun-earth-moon-harmonies-beats-biases.pdf
Solomon Green says:
November 15, 2011 at 5:34 am
Comparing the Mauna Loa CO2 chart from 1975 to the present day, it appears that the only “known physics and various forcings” required to replicate the gradient in the IPCC AR4 sea temperature anomalies model is CO2.
—————-
But we don’t only look at 1975 to the present day. Even Bob’s post goes back to 1910 and, short term variations aside, the models look pretty good. And if CO2 is the major forcing that determines how well they depict reality, that should be telling us something about the reality of the effect of CO2.
Unless, of course, all them lib’rul scientists is lyin’ 🙂
Bob, your post convinces me that these scientists are not very good at hiding things. Not even when they do it on purpose, but especially when they don’t. If your investigation is close to the truth, the finding that a short snippet of the temperature record was used to “tune” the model bespeaks one thing: you can’t fix stupid models. In this case, I highly recommend they throw the baby out with the bath water.
The real problem with the GCMs is that they don’t “predict” the past record, they can’t explain the present, and they generally aren’t very good at extrapolating into the future.
Ultimately, they are nonlinear curve fitting routines, with parametric components in them that can match any curve you like for at least some small chunk of it, especially when that curve is predominantly monotonic.
This is the fundamental problem with the whole debate. We have at least moderately accurate/believable temperature data from e.g. ice cores and other proxies that extends back over hundreds of thousands to millions of years. This record clearly indicates that we are in a five-million-year-long “ice age” interrupted by interglacials that seem to be on the order of ten thousand years long or less, the Holocene being one of the longest, warmest interglacials in the last million years.
We don’t know why the world was warmer 5 million years ago — MUCH warmer than it is today, in all probability. We don’t know why it started to cool. We don’t know why it cooled to the point where extensive glaciation became the rule rather than the exception, with CO_2 levels that dropped to “dangerously low”, almost too low to support atmospheric plant life at the worst/coldest parts of it. We don’t know why it periodically emerges from glaciation for a few thousand years of warmth. We don’t know why it stops and returns to being cold again. Within the interglacials and/or glacial periods, we don’t know why the temperatures vary as they do, up and down by several degrees C on a timescale of millennia.
What we do know is that the best predictor of temperature variation on a timescale of centuries is the state not of the atmosphere but of the Sun, in particular the magnetic state of the sun. The evidence for this is, IMO, overwhelming. I don’t particularly care if the physics of this isn’t well understood — that’s a challenge, not a problem with the conclusion. The analysis of the data itself permits no other conclusion to be made. Sure, other factors (Milankovitch cycles, orbital eccentricity changes in response to orbital resonances, axial precession) seem to come into play with the longer time scales but THEY aren’t predictive either.
I am unaware of any model that can predict the PAST, and this is the biggest problem of all. AGW is predicated on the assumption that we know what the temperature “should” be by some means other than numerology and reading tea leaves. And we don’t. It’s a non-Markovian problem laced with chaotic dynamics on an entire spectrum of timescales from seconds to epochs. Most of the local (decadal) dynamics is determined by a mix of the solar cycle, various effectively chaotic limit cycles (e.g. the PDO or AO) on TOP of the strong correlation with the magnetic state of the sun, with additional near-random modulation by aerosols and greenhouse gases and cloud cover and with various unknown positive and negative feedbacks.
The problem, in other words, is “difficult”. It is complex, in the precise sense of the term. Locally, one can ALWAYS fit a segment of the temperature curve over decades to centuries as long as it has a monotonic trend. In the case of the 20th century, there are at least two completely distinct fits that can be made to work “well enough” given average monotonic warming. One is “CO_2 is dominant; previous thermal history, solar state, and so on are perturbations”. One fits the general upward trend to the general upward trend of the CO_2, then finds parameters to scale the correlations between solar state and temperature so that they match the perturbations.
The other is the opposite — explain all or nearly all of the temperature trend on the basis of solar state and dynamical history, and treat the CO_2 forcing as a small component that contributes to a general monotonic trend that would be there anyway even if there was no anthropogenic CO_2 at all. After all, only somebody very, very stupid would think that the mean temperature would be CONSTANT in the absence of humans. The entire climatic record stands ready to refute any such silliness. Nearly all of that record consists of century-long stretches of monotonic warming and cooling, with occasional abrupt changes bespeaking the existence of multiple Poincaré attractors in the general phase space and chaotic transitions between them.
Both of these can work, but the fundamental problem with the former is that it DOES NOT WORK outside of the 20th century. It is utterly incapable of explaining not JUST the MWP and LIA, but all of the OTHER significant variations in average global temperature visible in the long term proxy based temperature reconstructions. It will not work if indeed global temperatures drop — quite possibly drop precipitously — over the next thirty years as the sun’s state returns to levels not seen since the 1800s if not the 1600s.
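A minimal sketch of the within-the-window ambiguity described above, using entirely synthetic stand-ins for a CO_2 index and a solar-state index: when both candidate predictors rise over the fitting window, either one can be given most of the explanatory weight, and goodness of fit over that window alone cannot discriminate between them.

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1900, 2001)

co2_index = (years - 1900) / 100.0                                   # synthetic, monotonic
solar_index = (years - 1900) / 100.0 + 0.1 * np.sin(years / 10.0)    # synthetic, also rising
temperature = 0.6 * (years - 1900) / 100.0 + 0.05 * rng.normal(size=years.size)

def r_squared(predictor, target):
    """R^2 of a one-predictor least-squares fit (with intercept)."""
    design = np.column_stack([predictor, np.ones_like(predictor)])
    fitted = design @ np.linalg.lstsq(design, target, rcond=None)[0]
    return 1.0 - (target - fitted).var() / target.var()

print("CO2-only fit,   R^2:", round(r_squared(co2_index, temperature), 3))
print("solar-only fit, R^2:", round(r_squared(solar_index, temperature), 3))
# Both score well over a single monotonic stretch; only data outside that
# stretch (or physics) can separate the two explanations.
```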
One simple fact ignored by the AGW crowd is this. The last time the sun was as active as it was in the 20th century was 9000 years ago, at the very start of the Holocene. Indeed, that earlier burst of solar activity (a double maximum that spanned some 300 years with an interval in between peaks) may have been the event that really gave the Holocene a robust start and finally ended the previous glacial period after the Younger Dryas. More or less monotonic warming over the 20th century has to be interpreted in light of that fact — the assumption that “everything BUT CO_2 levels is either the same or unimportant in determining climate, so CO_2 is the explanation” is simply not justified when something else, a grand solar maximum unmatched in 9000 years, is so spectacularly not the same across the very interval where the greatest warming was observed.
So, requiring GCMs to fit all of the data is good, but it isn’t enough. Not only all of the 20th century data, but ALL of the data. Until they can fit the data over the last 2000 years (say) or the last 10,000 years, their models are pointless, the moral equivalent of saying “look, a monotonic trend, I can fit that” with any function that (for some range of parameters) exhibits a monotonic trend. They have to be able to predict the UPS and the DOWNS, the long term variations. And the only way one can possibly do this is by including the sun — not just insolation but its actual state — as one of the fit parameters.
Once this is done — and we can, any of us, perform such a fit in our own minds, just from looking at a graph — the entire argument inverts. Suddenly, because solar state is the BEST predictor, the ONLY predictor capable of getting the right gross features of the T(t) curve, everything ELSE becomes the perturbations, the moderators of a general trend slaved to the sun. Suddenly the quality (e.g. R) of the curves goes way up. Suddenly CO_2 is a modulator, sure, but a relatively unimportant one, tenths of a degree C and part of that moderated by negative feedback from clouds, almost lost in the chaotic noise.
rgb
(Could somebody in Big Oil now send me a check, please? I keep hearing about how anybody who is a “denier” is in the pay of Big Oil, but I can never seem to find BO and send them a bill…)
At the top of fig 2 it says y=0.0054x. Should that be 0.054x?
OK, scratch that, I think I get it.
What variables should be used in a model that replicates the multidecadal variability of the 20th Century or more centuries back?
John B: Your November 15, 2011 at 2:46 am comment discussed how well the trends matched prior to 1975. I suggest you look at Animation 1, which is why I provided it. It clearly shows that the models do not match the multidecadal variations of the data.
John B says: “No, they haven’t done that at all. The models do not work by latching on to trends. They work by simulating the effects of known physics and various forcings on a simplified, gridded model of the atmosphere and oceans. If it happens to generate something close to linear, so be it, but they do not in any way assume a linear trend.”
John B, where did I write that the models “assume a linear trend”? Are you twisting what I wrote? Read what I wrote again.
You continued, “The hindcast shows that the models replicate past climate pretty well over scales of decades…”
The data illustrated in Animation 1 very clearly contradicts this statement. You are wasting your time…and the time of those who have come to discuss this.
John B says: “But the models do not include multidecadal oscillations…”
That’s what this post is about, John B. The question posed by the post is: should they include them? I believe I asked the question twice. Did you read the post? If not, why are you wasting our time? Also, this comment by you contradicts one of your earlier ones, which means you’re not a very good troll.
It is relatively easy to check the various models’ outputs for different emission scenarios against the instrumental record via the KNMI Climate Explorer.
North Atlantic:
http://oi56.tinypic.com/wa6mia.jpg
Debrecen station, Hungary:
http://oi56.tinypic.com/10pt7y9.jpg
As I have said before, models are tuned to fit the warm AMO phase (1975-2005), and they are off before and after. They are not able to capture the observed variability of the 20th century alone, never mind running them further backwards and comparing them to the CET record, for example. Unbelievable that this playstation crap is used for policy decisions.
Bob Tisdale says:
November 15, 2011 at 7:01 am
John B says: “No, they haven’t done that at all. The models do not work by latching on to trends. They work by simulating the effects of known physics and various forcings on a simplified, gridded model of the atmosphere and oceans. If it happens to generate something close to linear, so be it, but they do not in any way assume a linear trend.”
John B, where did I write that the models “assume a linear trend”? Are you twisting what I wrote? Read what I wrote again.
You continued, “The hindcast shows that the models replicate past climate pretty well over scales of decades…”
The data illustrated in Animation 1 very clearly contradicts this statement. You are wasting your time…and the time of those who have come to discuss this.
————–
Bob, you said “latched on to a trend”, as if that is how the models work. You then go on to linearly extrapolate short periods of the model projections, as if that has some meaning.
Figure 1, the one that shows the largest portion of data, is the one that shows most clearly how well the models have worked. Your animation, extrapolating linear trends to arbitrary parts of the data, is at best misleading.
Robert Brown says:
November 15, 2011 at 6:02 am
…
rgb
(Could somebody in Big Oil now send me a check, please? I keep hearing about how anybody who is a “denier” is in the pay of Big Oil, but I can never seem to find BO and send them a bill…)
——————
I think we’ve pretty much gotten over the “big oil” theory. The predominant theory among “warmists” now is that most “skeptics” (not calling them “deniers”) are driven by their own ideology. Something like, “I do not like Obama/Gore/regulation/taxation/socialism/greens/whatever, therefore I don’t want AGW to be real, therefore I will exaggerate uncertainties in the science, leap on any mistakes, invoke conspiracies and cheer for any contrary ideas (even when mutually contradictory) to support my views.” Or something like that.
NO!
The models should not be initialized. That is a form of curve fitting that locks errors in the temperature record into the models.
Unfortunately, the models have been initialized to hindcast, as a result of parameter selection by the model builders – which in part explains their lack of skill.
In effect, the model builders have decided on the relative importance of CO2 versus aerosols in the model, so they choose the parameters to match. If this doesn’t give a believable hindcast, then they adjust the weights until it does.
Unfortunately, by doing so, they have been misled as to the importance of CO2, which caused virtually every IPCC model to go badly off track in 2000.
So NO, do not initialize the models to hindcast. Remove the parameter weighting that has been used to create the current fit and go back to first principles.
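A deliberately crude caricature of the weight-adjusting described above, not how any actual GCM is built or tuned: hold an assumed CO2 weight fixed and search for the aerosol weight that best matches a synthetic "observed" record.

```python
import numpy as np

years = np.arange(1900, 2001)
co2 = (years - 1900) / 100.0                            # synthetic rising forcing
aerosols = -np.clip((years - 1940) / 60.0, 0.0, None)   # synthetic cooling influence after 1940
observed = 0.5 * co2 + 0.3 * aerosols                   # synthetic "observations"

def toy_hindcast(co2_weight, aerosol_weight):
    """Caricature 'model': temperature response as a weighted sum of two forcings."""
    return co2_weight * co2 + aerosol_weight * aerosols

def tune_aerosol_weight(co2_weight, candidates):
    """Pick the aerosol weight that minimises the mean absolute hindcast error."""
    errors = [np.abs(toy_hindcast(co2_weight, w) - observed).mean() for w in candidates]
    return candidates[int(np.argmin(errors))]

# With the CO2 weight assumed too high (0.8 instead of the 'true' 0.5), the search
# compensates by picking a larger aerosol weight than the 'true' 0.3.
print("tuned aerosol weight:", round(tune_aerosol_weight(0.8, np.linspace(0.0, 1.5, 151)), 2))
```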
Climate science is at the same stage in its development that orbital mechanics was 2000 years ago. The consensus is that earth (CO2) is at the center of the universe (climate), but there is evidence this might just be wrong.
Like people 2000 years ago, we want to believe that humans are the most important thing in the universe, that the climate, like the universe, revolves around us. Our beliefs in self-importance often lead us astray.