Tisdale on model initialization in the wake of the leaked IPCC draft

Should Climate Models Be Initialized To Replicate The Multidecadal Variability Of The Instrument Temperature Record During The 20th Century?

Guest post by Bob Tisdale

The coupled climate models used to hindcast past climate and project future climate in the IPCC’s 2007 report (AR4) were not initialized so that they could reproduce the multidecadal variations that exist in the global temperature record. This has been known for years. Those who weren’t aware of it can refer to Kevin Trenberth’s post, Predictions of climate, at Nature’s Climate Feedback blog.

The question this post asks is: should the IPCC’s coupled climate models be initialized so that they reproduce the multidecadal variability that exists in the instrument-based global temperature records of the past 100 years, and so that they project those multidecadal variations into the future?

Coincidentally, as I finished writing this post, I discovered Benny Peiser’s post with the title Leaked IPCC Draft: Climate Change Signals Expected To Be Relatively Small Over Coming 20-30 Years at WattsUpWithThat. It includes a link to the following quote from Richard Black of BBC News:

And for the future, the [IPCC] draft gives even less succour to those seeking here a new mandate for urgent action on greenhouse gas emissions, declaring: “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.

That’s IPCC speak, and it really doesn’t say they’re expecting global surface temperatures to flatten for the next two or three decades. And we have already found that at least one of the climate models submitted to the CMIP5 archive for inclusion in the IPCC’s AR5 does not reproduce a multidecadal temperature signal. In other words, that model shows no skill at matching the multidecadal temperature variations of the 20th Century. So the question still stands:

Should IPCC climate models be initialized so that they replicate the multidecadal variability of the instrument temperature record over the past 100 years and project those multidecadal variations into the future?

In the post An Initial Look At The Hindcasts Of The NCAR CCSM4 Coupled Climate Model, after illustrating that the NCAR CCSM4 (from the CMIP5 Archive, being used for the upcoming IPCC AR5) does not reproduce the multidecadal variations of the instrument temperature record of the 20th Century, I included the following discussion under the heading of NOTE ON MULTIDECADAL VARIABILITY OF THE MODELS:

…And when the models don’t resemble the global temperature observations, inasmuch as the models do not have the multidecadal variations of the instrument temperature record, the layman becomes wary. They casually research and discover that natural multidecadal variations have stopped the global warming in the past for 30 years, and they believe it can happen again. Also, the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades. In short, to the layman, the models appear bogus.

To help clarify those statements, and to present them using Sea Surface Temperatures (the source of the multidecadal variability), I’ve prepared Figure 1. It compares observations to climate model outputs for the period of 1910 to year-to-date 2011. The Global Sea Surface Temperature anomaly dataset is HadISST. The model output is the model mean of the hindcasts and projections of Sea Surface Temperature anomalies from the coupled climate models prepared for the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC), published in 2007. As shown, the period of 1975 to 2000 is really the only multidecadal period when the models come close to matching the observed data. The two datasets diverge before and after that period.

Figure 1

Refer to Animation 1 for further clarification. (It’s a 4-frame gif animation, with 15 seconds between frames.) It compares the linear trends of the Global Sea Surface Temperature anomaly observations and the model mean (the same two datasets) for the periods of 1910 to 1945, 1945 to 1975, and 1975 to 2000. It sure does look like the models were programmed to latch onto that 1975-to-2000 portion of the data, which is an upward swing in the natural multidecadal variations.

Animation 1
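
For readers who want to check those sub-period trends themselves, here is a minimal sketch of the calculation (not the code used to produce the figures or the animation). The file name and column names are hypothetical placeholders; the input is assumed to be an annual global SST anomaly series.

```python
# Minimal sketch: least-squares linear trends over the three sub-periods
# discussed above. "annual_anomalies.csv" and its columns are hypothetical.
import numpy as np
import pandas as pd

data = pd.read_csv("annual_anomalies.csv")   # assumed columns: year, anomaly (deg C)
periods = [(1910, 1945), (1945, 1975), (1975, 2000)]

for start, end in periods:
    subset = data[(data["year"] >= start) & (data["year"] <= end)]
    # np.polyfit returns the slope in deg C per year; multiply by 10 for deg C per decade
    slope_per_year = np.polyfit(subset["year"], subset["anomaly"], 1)[0]
    print(f"{start}-{end}: {10 * slope_per_year:+.3f} deg C/decade")
```

The same per-decade conversion (slope per year times ten) is how the trends quoted with Figure 2 below are expressed.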

A NOTE ABOUT BASE YEARS: Before somebody asks, I used the period of 1910 to 1940 as base years for anomalies. This period was chosen for an animation that I removed and posted separately. The base years make sense for the graphs included in that animation. But I used the same base years for the graphs that remain in this post, which is why all of the data has been shifted up from where you would normally expect to see it.

Figure 2 includes the linear trends of the Global Sea Surface Temperature observations from 1910 to 2010 and from 1975 to 2000, and it includes the trend of the model mean of the IPCC AR4 projection from 2000 to 2099. The data for the IPCC AR4 hindcast from 1910 to 2000 is also illustrated. The three trends are presented to show the disparity between them. The long-term (100-year) trend in the observations is only 0.054 deg C/decade. And keeping in mind that the trends for the models and observations were basically identical for the period of 1975 to 2000 (and approximately the same as the early warming period of 1910 to 1945), the high-end (short-term) trend for a warming period during those 100 years of observations is about twice the long-term trend, or approximately 0.11 deg C per decade.

And then there’s the model forecast from 2000 to 2099. Its trend appears to go off at a tangent, skyrocketing at a pace that’s almost twice the high-end short-term trend from the observations. The model trend is 0.2 deg C per decade. I said in the earlier post, “the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades.” The models not only continued that trend, they increased it substantially, and they’ve clearly overlooked the fact that there is a multidecadal component to the instrument temperature record for Sea Surface Temperatures. The IPCC projection looks bogus to anyone who takes the time to plot it. It really does.

Figure 2

CLOSING

The climate models used by the IPCC appear to be missing a number of components that produce the natural multidecadal signal that exists in the instrument-based Sea Surface Temperature record. And if these multidecadal components continue to exist over the next century at similar frequencies and magnitudes, future Sea Surface Temperature observations could fall well short of those projected by the models.

SOURCES

Both the HadISST Sea Surface Temperature data and the IPCC AR4 hindcast/projection (TOS) data used in this post are available through the KNMI Climate Explorer. The HadISST data is found on the Monthly observations webpage, and the model data is found on the Monthly CMIP3+ scenario runs webpage. I converted the monthly data to annual averages for this post to simplify the graphs and discussions. And again, the period of 1910 to 1940 was used as the base years for the anomalies.
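
For anyone who wants to repeat that data handling, the following is a rough sketch of the two steps described above (monthly-to-annual averaging, then anomalies relative to 1910-1940). The file name and two-column layout are assumptions for illustration, not the exact format of the KNMI Climate Explorer download.

```python
# Minimal sketch, assuming a monthly global-mean SST series saved as two
# whitespace-separated columns: decimal year and SST in deg C (hypothetical layout).
import pandas as pd

monthly = pd.read_csv("hadisst_global_monthly.txt", sep=r"\s+",
                      names=["time", "sst"], comment="#")
monthly["year"] = monthly["time"].astype(int)

annual = monthly.groupby("year")["sst"].mean()   # step 1: monthly -> annual averages
base = annual.loc[1910:1940].mean()              # step 2: mean over the 1910-1940 base period
anomaly = annual - base                          # anomalies relative to that base
print(anomaly.loc[1910:2011])
```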

ABOUT: Bob Tisdale – Climate Observations

HAS
November 15, 2011 12:38 am

I think it would be fairer if you used absolute temperatures rather than anomalies in this comparison, and included the range derived from the individual model runs.

TBear (Sydney, where it has still not warmed, and almost finished the coldest freakin' October in 50 yrs ...)
November 15, 2011 12:52 am

So, in ‘layman’s terms’, the models are crap?
Is that the gist of this post?
And given that the whole argument seems to be about tenths of a Celsius degree, even if they are just a bit crap, that makes the models well-nigh useless. Yeah?
God help us.

November 15, 2011 1:02 am

The models have to have Some initial conditions, so how difficult could it be to use actual conditions? For that matter, how difficult could it be to make a few runs with realistic sensitivity values, ones that match what the ERBE satellite produced?

Stephen Wilde
November 15, 2011 1:32 am

Absolutely right. Not much left to say after that.
Of course it will still leave the upward multicentennial trend from LIA to date but it is hard to attribute that to human emissions.
Furthermore, adjustment for the ENSO multidecadal signal should make it easier to link variations in the background multicentennial temperature trend to multicentennial variations in solar activity.
It is the net balance between solar and oceanic influences over centuries that dictates the shifting of the permanent climate zones to give us what we perceive as climate change.

John Marshall
November 15, 2011 1:37 am

I have said this before and I repeat, even though nobody listens (sarc): throw these models away; they are lying.
When the IPCC states that the multidecadal variations not being replicated in the models makes laymen wary, I weep. Does this problem not make scientists wary?
Obviously not since they are wedded to the AGW theory.

Bloke down the pub
November 15, 2011 1:37 am

When I first took notice of the global warming debate, one of the glaring issues with the warmists’ argument was how they treated natural variability. They were claiming that the rise in observed temperatures was greater than could be explained by climate cycles and that CO₂ was the cause. Yet they wanted to have their cake and eat it: when temperatures failed to rise, they claimed that natural variability was stronger than GHGs. This was when I, like many others, cried foul. Always a pleasure, Bob.

Luther Bl't
November 15, 2011 1:42 am

>> Should Climate Models Be Initialized To Replicate The Multidecadal Variability Of The Instrument Temperature Record During The 20th Century?
A rhetorical question?

Brian H
November 15, 2011 2:04 am

The eternal modus operandi of prognosticators: seize on a selected chunk of a complex curve and use linear extrapolation to concoct a scary scenario. Claim special competence in preventing same, and demand profligate funding to achieve that.
You’d think the world’s fools would have gotten weary of being fleeced by the same old scam. But I guess cutting governments in on the action has given it new life.

RACookPE1978
Editor
November 15, 2011 2:27 am

“That’s IPCC speak, and it really doesn’t say they’re expecting global surface temperatures to flatten for the next two or three decades. “
Should that not be: “That’s IPCC speak, and it really does say that they are expecting global surface temperatures to flatten for the next two or three decades.”

H.R.
November 15, 2011 2:34 am

Excellent! Thank you, Bob.
If a model can’t replicate what is known, there’s not much reason to believe it will forecast what is unknown. (Now that I understand that, may I please be promoted to 5th grade?)

John B
November 15, 2011 2:46 am

John Marshall says:
November 15, 2011 at 1:37 am
I have said this before and I repeat, even though nobody listens (sarc): throw these models away; they are lying.
When the IPCC states that the multidecadal variations not being replicated in the models makes laymen wary, I weep. Does this problem not make scientists wary?
Obviously not since they are wedded to the AGW theory.
———————-
It does not make scientists weep, at least not those who understand what is being said.
The multidecadal oscillations do not exhibit a trend, they oscillate. Some are periodic, some, like ENSO, not so much, but they are all oscillations (as far as we know). That is why it is OK to not attempt to model them – over a long enough time, they even out.
Take a look at Figure 1. See how the hindcast does not get the peak at around 1940, but it does get the trend all the way back to 1910 (and look elsewhere you will see the trend captured even further back).
There are many variabilities, but the trend emerges from them, and all the evidence points to it going in one direction.

Braddles
November 15, 2011 2:51 am

“Initialized To Replicate Multidecadal Variability” sounds suspiciously like curve fitting.
It’s been said before, but every post on modelling should say it again: any model that uses curve fitting has no predictive value.

Espen
November 15, 2011 3:10 am

John B says:
Take a look at Figure 1. See how the hindcast does not get the peak at around 1940, but it does get the trend all the way back to 1910 (and look elsewhere you will see the trend captured even further back).

Take a look again, and note how the models follow the temperatures quite closely in 1975-2000. Obviously, since the natural variability due to oscillations was in an upwards trend from the seventies and until the 1998 El Niño, a model that didn’t include the multidecadal oscillations should trend significantly lower than the actual temperatures in the 1975-2000 period.

Editor
November 15, 2011 3:11 am

RACookPE1978 says: “Should that not be: ‘That’s IPCC speak, and it really does say that they are expecting global surface temperatures to flatten for the next two or three decades.'”
The quote from the IPCC report is about extremes. Here it is again, “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.
I don’t interpret that to say they’re expecting global temperatures to flatten, just that extremes–droughts, floods, record high temperatures, record low temperatures–are going to be of both signs and they may not necessarily be extremes that one would associate with a warming world because the effects of natural variability are still stronger than an anthropogenic signal, whatever that is.

Editor
November 15, 2011 3:13 am

H.R. says: “Now that I understand that, may I please be promoted to 5th grade?”
You may even be able to skip a few grades.

John B
November 15, 2011 3:31 am

Bob said, in the OP:
“the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades.”
—————
No, they haven’t done that at all. The models do not work by latching on to trends. They work by simulating the effects of known physics and various forcings on a simplified, gridded model of the atmosphere and oceans. If it happens to generate something close to linear, so be it, but they do not in any way assume a linear trend.
The hindcast shows that the models replicate past climate pretty well over scales of decades to centuries. The forecasts show what they show as a result of the same modelling under various emission scenarios, not as a result of extrapolating a linear trend.

1DandyTroll
November 15, 2011 3:37 am

So, essentially, they stated that there are no signals of climate change in natural climate, which would indicate that they don’t think natural climate ever changes, which of course it never does in either their bong-smoked statistics or their bubble-bong models.

John B
November 15, 2011 3:38 am

Espen says:
November 15, 2011 at 3:10 am
John B says:
Take a look at Figure 1. See how the hindcast does not get the peak at around 1940, but it does get the trend all the way back to 1910 (and look elsewhere you will see the trend captured even further back).
Take a look again, and note how the models follow the temperatures quite closely in 1975-2000. Obviously, since the natural variability due to oscillations was in an upwards trend from the seventies and until the 1998 El Niño, a model that didn’t include the multidecadal oscillations should trend significantly lower than the actual temperatures in the 1975-2000 period.
————————————
But the models do not include multidecadal oscillations, and were pretty good, though obviously they missed the spike due to the 1998 El Nino. So, maybe “natural variability” did not dominate from 1975-2000. Maybe CO2 was to blame. At least in part.

November 15, 2011 3:43 am

Since there are signs that the current burst of global warming is either slowing down or even stalling, a lot of attention is directed to the existing natural variability factors.
The Atlantic Multidecadal Oscillation better known as the AMO has frequently been presented as some ‘mystifying’ natural force driving the sea surface temperatures ( SST) of the North Atlantic.
New research shows that the AMO is simply a result of the thermo-haline circulation powered by the North Atlantic Subpolar Gyre driving the Atlantic–Arctic exchange.
Put in the most simplistic terms: the AMO is a delayed response (with R^2 = 0.74 ) to the semi-permanent low atmospheric pressure system over Iceland (measured at Reykjavik / Stykkisholmur) as graphically shown here:
http://www.vukcevic.talktalk.net/theAMO.htm
including the link to the relevant pre-print paper (currently in ‘document technical moderation’ at the CCSd / HAL science archive).
Hi Bob,
You did ask for details some time ago. Although the above is a bit of an over-the-top promo, any constructive comments will be considered for the final paper version. Tnx.

Brian H
November 15, 2011 4:06 am

John B, et al;
The model runs are not “initialized” on real world conditions at all. They’re just tweaked to see what effect fiddling the forcings has on their simplified assumptions. It’s not for nothing that the IPCC says in the fine print that they create ‘scenarios’, not projections. Even though they then go on to treat them as projections.

Vince Causey
November 15, 2011 4:10 am

A little bit of background explaining what is meant by “initialised” would be helpful. I am struggling to understand how a model could be run without being initialised. Surely it has to have initial values. Does it mean that these initial values are not as observed in the temperature record? It’s not very clear.

Ask why is it so?
November 15, 2011 4:11 am

my question is a little off subject but when I look at a graph I want to know what the ‘0’ represents. I know it’s using a mean say 1961-1990 but I don’t actually know what the temperature is, only the degrees above and below that mean. Is there a reason why this figure is not shown?

Brian H
November 15, 2011 4:16 am

Vince;
Not just temperature; there are a myriad of parameters and measures. The IPCC explicitly states that real world measurements at any point in time are not used to initialize the models, though I don’t have the link to hand. The “initialization” is made up out of whole cloth, just like the rest of the models.

EternalOptimist
November 15, 2011 4:18 am

Same question as Vince Causey here.
The way I understand the term, Initialisation is a one-off starting point. Surely the models need to start in the right place, but they also need natural variability as part of their intrinsic functionality. It needs to be included somehow.

John B
November 15, 2011 4:21 am

Brian H says:
November 15, 2011 at 4:06 am
John B, et al;
The model runs are not “initialized” on real world conditions at all. They’re just tweaked to see what effect fiddling the forcings has on their simplified assumptions. It’s not for nothing that the IPCC says in the fine print that they create ‘scenarios’, not projections. Even though they then go on to treat them as projections.
————————
The IPCC publishes projections based on scenarios. e.g. if CO2 emissions level off (scenario), we would see something like this (projection).

John B
November 15, 2011 4:47 am

EternalOptimist says:
November 15, 2011 at 4:18 am
Same question as Vince Causey here.
The way I understand the term, Initialisation is a one-off starting point. Surely the models need to start in the right place, but they also need natural variability as part of their intrinsic functionality. It needs to be included somehow.
————————–
Actually, they don’t. A model can even be “cold started”. That means that you can assume no wind, no currents, and “climatological” (i.e. average) temperatures, etc., at start up. You then run the model for a while to let it “spin up”. If it is a good model, it will start to resemble reality – winds will start to blow, regional differences will appear, etc. And you can verify how good your model is by starting it from different places and seeing if it converges on reality.
Here is an example from oceanic modeling:
http://www.oc.nps.edu/nom/modeling/initial.html
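[For illustration only, here is a toy, zero-dimensional energy-balance model “spun up” from two different cold-start temperatures; both runs relax toward the same equilibrium, which is the sense in which a cold-started model can converge regardless of its starting point. The parameter values below are illustrative round numbers, not taken from any GCM.]

```python
# Toy spin-up sketch (not a GCM): a single global temperature relaxing toward
# radiative balance. All parameter values are illustrative assumptions.
SOLAR_IN = 240.0     # absorbed solar flux, W/m^2
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.61    # effective emissivity, chosen so equilibrium sits near 288 K
HEAT_CAP = 4.0e8     # heat capacity of the column, J/(m^2 K)
DT = 30 * 86400.0    # time step of one model "month", in seconds

def spin_up(temp_k, months=1200):
    """Step the toy model forward long enough for it to settle."""
    for _ in range(months):
        net_flux = SOLAR_IN - EMISSIVITY * SIGMA * temp_k ** 4
        temp_k += net_flux * DT / HEAT_CAP
    return temp_k

for cold_start in (250.0, 320.0):
    print(f"start {cold_start:.0f} K -> after spin-up {spin_up(cold_start):.1f} K")
```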

geronimo
November 15, 2011 4:48 am

This is not new news; here’s Kevin Trenberth in his Nature blog, June 4, 2007:
“In fact there are no predictions by IPCC at all. And there never have been. The IPCC instead proffers “what if” projections of future climate that correspond to certain emissions scenarios. There are a number of assumptions that go into these emissions scenarios. They are intended to cover a range of possible self consistent “story lines” that then provide decision makers with information about which paths might be more desirable. But they do not consider many things like the recovery of the ozone layer, for instance, or observed trends in forcing agents. There is no estimate, even probabilistically, as to the likelihood of any emissions scenario and no best guess.
Even if there were, the projections are based on model results that provide differences of the future climate relative to that today. None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models. There is neither an El Niño sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond. The Atlantic Multidecadal Oscillation, that may depend on the thermohaline circulation and thus ocean currents in the Atlantic, is not set up to match today’s state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe. Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.”
John B, my understanding is that the models can only hindcast the climate if sulphates are fed into them. In fact Trenberth himself says that el Nino and the PDO are critical modes of variability that affect the PAC rim countries. How does that square with them evening out over a decade?

Latitude
November 15, 2011 4:50 am

climate change signals are expected to be relatively small compared to natural climate variability
===============================
And I say that “climate change signals” are way too small to be modeled…
…and we don’t know enough about “natural climate variability” to model
Which makes climate models fun games……but worth less than the paper they are printed on

Bill Illis
November 15, 2011 4:56 am

Yes, they should be initialized with a natural variability component.
Just call it a temporary ocean circulation energy transfer (additional absorbed for periods and then additional energy released in other periods).
What they will find, however, is that their greenhouse forcing formulae do not work. The GHG impact is much lower in that scenario – less than half in fact.
So, I’m sure some of the climate modelers have played around with some cycles (regular or not) but they soon abandon the effort because then the rest of their math / the GHG forced climate model does not work.

November 15, 2011 5:10 am

Bill Illis says:
November 15, 2011 at 4:56 am
Just call it a temporary ocean circulation energy transfer (additional absorbed for periods and then additional energy released in other periods).
It could be far more precise than that; at least in the Atlantic there is a direct forewarning, by at least 4 but more often up to 10 years in advance, from the Icelandic low atmospheric pressure system. See my post above or http://www.vukcevic.talktalk.net/theAMO.htm

Bill Illis
November 15, 2011 5:18 am

M.A.Vukcevic says:
November 15, 2011 at 5:10 am
——————
I think the AMO and the ENSO are the most important cycles. There are a few others that could be used. But not everyone agrees that the AMO could be like a “forcing” – operating independently and causing its own climate variability.
If some want to use the ocean in general rather than these specific components, then it doesn’t matter and they still end up with a climate model that does not work.

John B
November 15, 2011 5:26 am

geronimo says:
November 15, 2011 at 4:48 am
John B, my understanding is that the models can only hindcast the climate if sulphates are fed into them. In fact Trenberth himself says that el Nino and the PDO are critical modes of variability that affect the PAC rim countries. How does that square with them evening out over a decade?
—————
Trenberth is saying that climate models do not forecast regional climate effects. They produce projections that are meaningful at global or hemispheric scales. For example, the models say “it’s going to get hotter”, but they do not say “it’s going to get hotter in (say) Mexico”. Yes, El Nino and other effects are important if you live in an area affected by them, but climate models will not help predict when those effects will occur – that is not what they are trying to do. The climate models say, ENSO notwithstanding, that it is going to get hotter on average, though not at every single point on Earth and not that every year will be hotter than the last.
I am sure you are right about sulphates. What is your point?

Dusty
November 15, 2011 5:30 am

Will somebody please explain to this retired engineer why a linear extrapolation of a curve based on ill defined chaotic sources (especially out to a hundred years) should have any predictive value? It seems to me to be a naive and extremely unscientific approach.

Solomon Green
November 15, 2011 5:34 am

An excellent piece of work. In finance we refer to backtesting, not hindcasting, and any model that does not backtest with reasonable results (and even many that do) is automatically rejected.
John B says:
“The models do not work by latching on to trends. They work by simulating the effects of known physics and various forcings on a simplified, gridded model of the atmosphere and oceans. If it happens to generate something close to linear, so be it, but they do not in any way assume a linear trend.”
Comparing the Mauna Loa CO2 chart from 1975 to the present day, it appears that the only “known physics and various forcings” required to replicate the gradient in the IPCC AR4 sea temperature anomalies model is CO2.

John B
November 15, 2011 5:41 am

Latitude says:
November 15, 2011 at 4:50 am
,,,
And I say that “climate change signals” are way too small to be modeled…
,,,
Which makes climate models fun games……but worth less than the paper they are printed on
——————
That doesn’t follow. As an analogy, we can model the tides pretty accurately, but if you stand on a beach and watch the waves crashing in, you might be tempted to say “the tide signal is too small to be modelled”. So it is with climate. And the analogy goes further: we can’t model the details of the beach, the turbulence in the waves, the wind on a particular day, etc., but we can still produce tide tables. The waves may be hugely important if you are swimming on that beach, but they do not affect the longer term trend of the tide.

Paul Vaughan
November 15, 2011 5:47 am

John B (November 15, 2011 at 2:46 am) wrote:
“There are many variabilities, but the trend emerges from them, and all the evidence points to it going in one direction.”
Scarce few disagree. The issue remains that nature’s variability is being ignored and misrepresented.
Chaotic model “initialization” to extrapolate imaginarily-stationary multidecadal oscillations would be a uselessly abstract wool-over-eyes exercise. No one sensible would trust it since there remains so much work to finish sorting out the spatiotemporal geometry of constraints: http://wattsupwiththat.files.wordpress.com/2011/10/vaughn-sun-earth-moon-harmonies-beats-biases.pdf

John B
November 15, 2011 5:52 am

Solomon Green says:
November 15, 2011 at 5:34 am
Comparing the Mauna Loa CO2 chart from 1975 to the present day, it appears that the only “known physics and various forcings” required to replicate the gradient in the IPCC AR4 sea temperature anomalies model is CO2.
—————-
But we don’t only look at 1975 to the present day. Even Bob’s post goes back to 1910 and, short term variations aside, the models look pretty good. And if CO2 is the major forcing that determines how well they depict reality, that should be telling us something about the reality of the effect of CO2.
Unless, of course, all them lib’rul scientists is lyin’ 🙂

Pamela Gray
November 15, 2011 6:02 am

Bob, your post convinces me that these scientists are not very good at hiding things, not even when they do it on purpose, and especially when they don’t. If your investigation is close to the truth, the fact that a short snippet of the temperature record was used to “tune” the model bespeaks one thing: you can’t fix stupid models. In this case, I highly recommend they throw the baby out with the bath water.

November 15, 2011 6:02 am

The real problem with the GCMs is that they don’t “predict” the past record, they can’t explain the present, and they generally aren’t very good at extrapolating into the future.
Ultimately, they are nonlinear curve fitting routines, with parametric components in them that can match any curve you like for at least some small chunk of it, especially when that curve is predominantly monotonic.
This is the fundamental problem with the whole debate. We have at least moderately accurate/believable temperature data from e.g. ice cores and other proxies that extends back over hundreds of thousands to millions of years. This record clearly indicates that we are in a five-million year long “ice age” interrupted by interglacials that seem to be order of or less than ten thousand years long, the Holocene being one of the longest, warmest interglacials in the last million years.
We don’t know why the world was warmer 5 million years ago — MUCH warmer than it is today, in all probability. We don’t know why it started to cool. We don’t know why it cooled to the point where extensive glaciation became the rule rather than the exception, with CO_2 levels that dropped to “dangerously low”, almost too low to support atmospheric plant life at the worst/coldest parts of it. We don’t know why it periodically emerges from glaciation for a few thousand years of warmth. We don’t know why it stops and returns to being cold again. Within the interglacials and/or glacial periods, we don’t know why the temperatures vary as they do, up and down by several degrees C on a timescale of millennia.
What we do know is that the best predictor of temperature variation on a timescale of centuries is the state not of the atmosphere but of the Sun, in particular the magnetic state of the sun. The evidence for this is, IMO, overwhelming. I don’t particularly care if the physics of this isn’t well understood — that’s a challenge, not a problem with the conclusion. The analysis of the data itself permits no other conclusion to be made. Sure, other factors (Milankovitch cycles, orbital eccentricity changes in response to orbital resonances, axial precession) seem to come into play with the longer time scales but THEY aren’t predictive either.
I am unaware of any model that can predict the PAST, and this is the biggest problem of all. AGW is predicated on the assumption that we know what the temperature “should” be by some means other than numerology and reading tea leaves. And we don’t. It’s a non-Markovian problem laced with chaotic dynamics on an entire spectrum of timescales from seconds to epochs. Most of the local (decadal) dynamics is determined by a mix of the solar cycle, various effectively chaotic limit cycles (e.g. the PDO or AO) on TOP of the strong correlation with magnetic state of the sun, with additional near-random modulation by aerosols and greenhouse gases and cloud cover and with various unknown positive and negative feedbacks.
The problem, in other words, is “difficult”. It is complex, in the precise sense of the term. Locally, one can ALWAYS fit a segment of the temperature curve over decades to centuries as long as it has a monotonic trend. In the case of the 20th century, there are at least two completely distinct fits that can be made to work “well enough” given average monotonic warming. One is “CO_2 is dominant; previous thermal history, solar state and so on are perturbations”. One fits the general upward trend to the general upward trend of the CO_2, then finds parameters to scale the correlations between solar state and temperature so that they match the perturbations.
The other is the opposite — explain all or nearly all of the temperature trend on the basis of solar state and dynamical history, and treat the CO_2 forcing as a small component that contributes to a general monotonic trend that would be there anyway even if there was no anthropogenic CO_2 at all. After all, only somebody very, very stupid would think that the mean temperature would be CONSTANT in the absence of humans. The entire climatic record stands ready to refute any such silliness. Nearly all of that record consists of century-long stretches of monotonic warming and cooling, with occasional abrupt changes bespeaking the existence of multiple Poincaré attractors in the general phase space and chaotic transitions between them.
Both of these can work, but the fundamental problem with the former is that it DOES NOT WORK outside of the 20th century. It is utterly incapable of explaining not JUST the MWP and LIA, but all of the OTHER significant variations in average global temperature visible in the long term proxy based temperature reconstructions. It will not work if indeed global temperatures drop — quite possibly drop precipitously — over the next thirty years as the sun’s state returns to levels not seen since the 1800s if not the 1600s.
One simple fact ignored by the AGW crowd is this. The last time the sun was as active as it was in the 20th century was 9000 years ago, at the very start of the holocene. Indeed, that earlier burst of solar activity (a double maximum that spanned some 300 years with an interval in between peaks) may have been the event that really gave the Holocene a robust start and finally ended the previous glacial period after the Younger Dryas. More or less monotonic warming over the 20th century has to be interpreted in light of that fact — the assumption of “everything BUT CO_2 levels is either the same or unimportant in determining climate, so CO_2 is the explanation” is simply not justified when something else is so spectacularly not the same as a 9000-year grand maximum in the sun across the very interval where the greatest warming was observed.
So, requiring GCMs to fit all of the data is good, but it isn’t enough. Not only all of the 20th century data, but ALL of the data. Until they can fit the data over the last 2000 years (say) or the last 10,000 years, their models are pointless, the moral equivalent of saying “look, a monotonic trend, I can fit that” with any function that (for some range of parameters) exhibits a monotonic trend. They have to be able to predict the UPS and the DOWNS, the long term variations. And the only way one can possibly do this is by including the sun — not just insolation but its actual state — as one of the fit parameters.
Once this is done — and we can, any of us, perform such a fit in our own minds, just from looking at a graph — the entire argument inverts. Suddenly, because solar state is the BEST predictor, the ONLY predictor capable of getting the right gross features of the T(t) curve, everything ELSE becomes the perturbations, the moderators of a general trend slaved to the sun. Suddenly the quality (e.g. R) of the curves goes way up. Suddenly CO_2 is a modulator, sure, but a relatively unimportant one, tenths of a degree C and part of that moderated by negative feedback from clouds, almost lost in the chaotic noise.
rgb
(Could somebody in Big Oil now send me a check, please? I keep hearing about how anybody who is a “denier” is in the pay of Big Oil, but I can never seem to find BO and send them a bill…)

juanslayton
November 15, 2011 6:16 am

At the top of fig 2 it says y=0.0054x. Should that be 0.054x?

juanslayton
November 15, 2011 6:18 am

OK, scratch that, I think I get it.

Warren in Minnesota
November 15, 2011 6:46 am

What variables should be used in a model that replicates the multidecadal variability of the 20th Century or more centuries back?

Editor
November 15, 2011 6:52 am

John B: Your November 15, 2011 at 2:46 am comment discussed how well the trends matched prior to 1975. I suggest you look at Animation 1, which is why I provided it. It clearly shows that the models do not match the multidecadal variations of the data.

Editor
November 15, 2011 7:01 am

John B says: “No, they haven’t done that at all. The models do not work by latching on to trends. They work by simulating the effects of known physics and various forcings on a simplified, gridded model of the atmosphere and oceans. If it happens to generate something close to linear, so be it, but they do not in any way assume a linear trend.”
John B, where did I write that the models “assume a linear trend”? Are you twisting what I wrote? Read what I wrote again.
You continued, “The hindcast shows that the models replicate past climate pretty well over scales of decades…”
The data illustrated in Animation 1 very clearly contradicts this statement. You are wasting your time…and the time of those who have come to discuss this.

Editor
November 15, 2011 7:09 am

John B says: “But the models do not include multidecadal oscillations…”
That’s what this post is about, John B. The question posed by the post is: should they include them? I believe I asked the question twice. Did you read the post? If not, why are you wasting our time? Also, this comment by you contradicts one of your earlier ones, which means you’re not a very good troll.

November 15, 2011 7:15 am

It is relatively easy to check the various models’ outputs for different emission scenarios against the instrumental record via the KNMI Climate Explorer.
North Atlantic:
http://oi56.tinypic.com/wa6mia.jpg
Debrecen station, Hungary:
http://oi56.tinypic.com/10pt7y9.jpg
As I said before, the models are tuned to fit the warm AMO phase (1975-2005) and they are off before and after. They are not able to catch the observed variability in the 20th century alone, let alone when running them backwards and comparing to the CET record, for example. Unbelievable that this playstation crap is used for policy decisions.

John B
November 15, 2011 7:17 am

Bob Tisdale says:
November 15, 2011 at 7:01 am
John B says: “No, they haven’t done that at all. The models do not work by latching on to trends. They work by simulating the effects of known physics and various forcings on a simplified, gridded model of the atmosphere and oceans. If it happens to generate something close to linear, so be it, but they do not in any way assume a linear trend.”
John B, where did I write that the models “assume a linear trend”? Are you twisting what I wrote? Read what I wrote again.
You continued, “The hindcast shows that the models replicate past climate pretty well over scales of decades…”
The data illustrated in Animation 1 very clearly contradicts this statement. You are wasting your time…and the time of those who have come to discuss this.
————–
Bob, you said “latched on to a trend”, as if that is how the models work. You then go on to linearly extrapolate short periods of the model projections, as if that has some meaning.
Figure 1, the one that shows the largest portion of data, is the one that shows most clearly how well the models have worked. Your animation, extrapolating linear trends to arbitrary parts of the data, is at best misleading.

John B
November 15, 2011 7:28 am

Robert Brown says:
November 15, 2011 at 6:02 am

rgb
(Could somebody in Big Oil now send me a check, please? I keep hearing about how anybody who is a “denier” is in the pay of Big Oil, but I can never seem to find BO and send them a bill…)
——————
I think we’ve pretty much gotten over the “big oil” theory. The predominant theory among “warmists” now is that most “skeptics” (not calling them “deniers”) are driven by their own ideology. Something like, “I do not like Obama/Gore/regulation/taxation/socialism/greens/whatever, therefore I don’t want AGW to be real, therefore I will exaggerate uncertainties in the science, leap on any mistakes, invoke conspiracies and cheer for any contrary ideas (even when mutually contradictory) to support my views.” Or something like that.

ferd berple
November 15, 2011 7:34 am

NO!
The models should not be initialized. That is a form of curve fitting that locks errors in the temperature record into the models.
Unfortunately, the models have been initialized to hindcast, as a result of parameter selection by the model builders – which in part explains their lack of skill.
In effect, the model builders have decided that CO2 versus aerosols has a relative level of importance in the model, so they choose the parameters to match. If this doesn’t give a believable hindcast, then they adjust the weights until it does.
Unfortunately, by doing so, they have been misled as to the importance of CO2, which caused virtually every IPCC model to go badly off track in 2000.
So NO, do not initialize the models to hindcast. Remove the parameter weighting that has been used to create the current fit and go back to first principles.
Climate science is at the same stage in its development that orbital mechanics was 2000 years ago. The consensus is that earth (CO2) is at the center of the universe (climate), but there is evidence this might just be wrong.
Like people 2000 years ago, we want to believe that humans are the most important thing in the universe, that the climate, like the universe, revolves around us. Our beliefs in self-importance often lead us astray.

Steve Garcia
November 15, 2011 7:34 am

Back in about 1998 I worked with a degreed meteorologist (who has since died) who couldn’t get a job as one (because of, basically, reverse discrimination) who was one of the brighter people I’ve known. It was when I was initially getting interested in the CAGW issue. I’d read enough by then to get me questioning how much due diligence had been done before ruling natural causes out.
He expressed dismay at the idea that they were projecting the temperatures out to the year 2100. He had done atmospheric modeling himself, in school, and knew the whys and wherefores of it all. He specifically pointed out that the models did not hindcast worth a diddly, and said that any model that couldn’t replicate the recent past was not close to being reliable enough to depend on.
He was also doubtful of the data. He was the one who turned me on to the “Great Dying of the Thermometers” at the end of the 1980s, though he did not refer to it by that name. He was extremely scornful of any possible reason to exclude meteorological stations at a time when data storage and processing capacity was increasing by magnitudes.
Truthfully, I have not seen ONE bit of info since that day that has been capable of swaying me to the pro-CAGW side. VERY little even argues in that direction at all, much less being capable of convincing me. So much argument on that side points to the models, and the models are incapable of replicating – of vetting themselves – as Bob is pointing out here.
Nothing has really changed, then, since 1998. The models are not proven out, and any reference to them as a basis for anything whatsoever is useless – and should be SEEN as useless, even by the people pointing at the models. What could possibly be in their minds, thinking that the models are convincing?

Stephen Wilde
November 15, 2011 7:35 am

“What variables should be used in a model that replicates the multidecadal variability of the 20th Century or more centuries back?”
i) The level of solar activity. Though there needs to be more knowledge as to which components of such variability have most impact on the size and intensity of the polar vortices. It is that which controls the behaviour of the mobile polar highs described by Marcel Leroux and thus ultimately gives the sun power over items ii) to iv) below.
ii) The Pacific Multidecadal Oscillation (not PDO which is merely a derivative of ENSO data). Unfortunately we are not yet aware of the reason for the 50 to 60 year cycling as regards that phenomenon but the current position within the cycle would be highly relevant for prediction purposes.
iii) The size, intensity and latitudinal positions of the main permanent climate zones because they control the rate of energy flow from surface to stratosphere which results in net warming or net cooling at any given time.
iv) Global cloudiness and albedo because they indicate how much solar energy is actually getting into the oceans to fuel the climate system.
Anything else would be subsumed within those parameters.
Needless to say the current models are not well designed in any of those areas.

Theo Goodwin
November 15, 2011 7:36 am

John B says:
November 15, 2011 at 3:38 am
Would some Moderator please give John B the standard Troll Treatment?
[No. One does not strengthen things (mental or physical) unless you exercise against resistance. 8<) Robt.]

November 15, 2011 7:38 am

Bob:
You are a WONDER.
However, I must point out that THE KING HAS NO NEW CLOTHES!
The “sea surface temperatures” are based, prior to the 80’s or 90’s or even the ARGO BUOYS, on what is fundamentally a FICTION.
Ships’ logs? Guys throwing BUCKETS over the side? Calibration, consistency, QUALITY ASSURANCE? Absolutely lacking. Those data have been manipulated to show WHAT THEY WANTED. I think they are completely BOGUS.
I think others should QUESTION THE SOURCE OF THIS HIGHLY PROCESSED DATA and not accept it at face value.
Max

Neo
November 15, 2011 7:46 am

What kind of modeler only uses 25 years of “training data” when there is over a century of data available? None who are reputable.

Theo Goodwin
November 15, 2011 7:54 am

Yet another brilliant article by Bob Tisdale. Thanks, Mr. Tisdale.
The big lessons are clear to the many who made comments above. The modelers put all their eggs in the CO2 basket. CO2 concentration in the atmosphere increases linearly, at least given their relatively simple-minded assumptions. So, the warming had to go up linearly as in the 1975 to 2000 period. In other words, they treated CO2 and its effects on radiation from the sun as the only natural processes that required modeling. Now they are being forced to admit that other natural processes must be treated as important in their own right. The sum total of all those natural processes makes up most of what is called natural variability. The natural processes must be understood in their own right, as physical theory (physics) has always done. Climate science must investigate those natural processes and create physical hypotheses that describe the natural regularities found in them. Once this project is well under way, climate science will be on its way to becoming a genuine physical science.
Isn’t it amazing that Trenberth can show that he understands the problems with the models yet continue to act as an attack dog for climate science?

John B
November 15, 2011 7:58 am

Bob Tisdale says:
November 15, 2011 at 7:09 am
John B says: “But the models do not include multidecadal oscillations…”
That’s what this post is about, John B. The question posed by the post is, should they include them. I believe I asked the question twice. Did you read the post? If not why are you wasting our time? Also, this comment by you contradicts an one of your earlier ones, which means you’re not a very good troll.
————————
Yes, I read the post. No, I did not contradict myself. Here is my point:
The models do not include multidecadal oscillations. They do not need to in order to model long-term trends. Your Figure 1 shows this. Your Animation 1 is cherry picking, presumably aimed at showing otherwise.

ferd berple
November 15, 2011 8:02 am

“Dusty says:
November 15, 2011 at 5:30 am
Will somebody please explain to this retired engineer why a linear extrapolation of a curve based on ill defined chaotic sources (especially out to a hundred years) should have any predictive value?”
Given the forecasting power of climate models, why not use the GCM’s to forecast stock prices and make a killing in the market to pay off the national debt and pay to turn the economy green? Isn’t it time we asked the climate scientists to use their CO2 driven Ouija Boards to help pay the cost of going green?
Surely it is simpler to forecast the future value of a limited mix of industrial stocks than it is to forecast the future climate. Why do climate scientists keep asking for money? Surely the GCM’s in their spare time should be able to tell Hansen and Co where to invest, to pay for the computers and conferences, so they don’t need government assistance.
Like temperature, the Dow has been going up for 100+ years. So surely if we can predict climate we can predict stock market values with even greater accuracy. So why hasn’t every climate scientist, like every economist, retired long ago on their investments? Why do they need any government grants?
http://en.wikipedia.org/wiki/File:DJIA_historical_graph_to_jan09_%28log%29.svg

Theo Goodwin
November 15, 2011 8:03 am

Dear Moderator:
John B is creating posts to waste Bob Tisdale’s time. Mr. Tisdale is too nice a man to simply ignore John B, though he should ignore him. Let us please not enable John B’s harassment of Mr. Tisdale.

bubbagyro
November 15, 2011 8:09 am

Max:
Yes, GIGO.
I, as a layman to street gambling, became very wary of the street gambler’s game in New York City, when the marks (investors) could not find the pea in the walnut, even though the first dude always found it. When you see the word “layman”, substitute “mark”. This makes the rest of the IPCC blurb more easily understood.

Editor
November 15, 2011 8:16 am

Ask why is it so? says: “my question is a little off subject but when I look at a graph I want to know what the ’0′ represents. I know it’s using a mean say 1961-1990 but I don’t actually know what the temperature is, only the degrees above and below that mean. Is there a reason why this figure is not shown?”
First: In general, the Global Land Plus Sea Surface Temperature anomaly datasets supplied by GISS, Hadley Centre, and NCDC are only supplied as anomalies. So there is no way that I could present that to you if this post was about that data.
Second: But this post is about Sea Surface Temperature data, and Sea Surface Temperature is not presented by the suppliers as anomalies (except for one dataset by the Hadley Centre), so I could present it. But I normally download the data after it’s already been converted to anomalies by the KNMI Climate Explorer website (or the NOAA NOMADS website) because that eliminates another step in my data handling. But since you’ve asked, the observed (HadISST) average Sea Surface Temperature during the years of 1910 to 1940 (those were the base years I used in this post) was 17.9 deg C, while for the models it was 17.6 deg C.

TomRude
November 15, 2011 8:16 am

If my memory does not betray me, I recall Climate Audit had a post on something similar, showing model compliance with the temperature record, and that Weaver’s model only worked with increasing temps while diverging otherwise…

Latitude
November 15, 2011 8:19 am

Max Hugoson says:
November 15, 2011 at 7:38 am
===================================
Max, it is all based on fiction……but the fiction was Goldilocks
too hot, too cold, too much ice, too little ice, etc

Pascvaks
November 15, 2011 8:20 am

If you want to move the world you need a long, strong plank, a fulcrum, and a high place to stand before you jump. Actual measurements from the 20th Century sure seem a mite more sturdy than guesstimates. At least you get the last hundred years right if nothing else succeeds. What the hay, we ought to give it a try; I’m sure the Chinese will lend us a few more $trillions$ to reprogram their new super-duper computers at 40% interest compounded hourly. It’s only money, right? And we are at war with nature, the biggest, baddest, meanest SOB on the planet, so what-the-hay mate let’s do it! Oh yes, nearly forgot, (SarcOff)
PS: What are we putting in the water? Everyone seems to be going crazy.

ferd berple
November 15, 2011 8:22 am

John B says:
November 15, 2011 at 5:41 am
And the analogy goes further: we can’t model the details of the beach, the turbulence in the waves, the wind on a particular day, etc., but we can still produce tide tables.
Tide tables are not modeled based on forcing and feedback. They are modeled based on observed, repeatable cycles, similar to the way humans first forecast the cycle of the seasons, the migration of animals and the orbits of the planets. We observe, find the pattern, then forecast based on this pattern repeating. If the forecast works, it may have skill. If not, it is wrong and we start over.

Editor
November 15, 2011 8:23 am

Dusty says: “Will somebody please explain to this retired engineer why a linear extrapolation of a curve based on ill defined chaotic sources (especially out to a hundred years) should have any predictive value?”
Dusty, I haven’t used the trends for predictions. I haven’t made any predictions or projections in this post.

November 15, 2011 8:27 am

IPCC: “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.
Hansen has also said similar things. What is happening really is a signal-to-noise problem that we always have with scientific measurements. It appears that in trying to observe climate change the noise is so high that the signal simply gets lost in the noise. That being the case, why bother looking for a signal that you can’t either see or measure? For my money, the coming climate catastrophe that this invisible signal is supposed to predict is a pseudoscientific delusion and their claims of anthropogenic global warming are simply fantasies. Let’s take a look at the climate of the last 100 years. The last IPCC report predicted warming of 0.2 degrees per decade for this century. We got zero warming for 13 years. That comes from using climate models that are invalid. Ferenc Miskolczi has shown, using the NOAA weather balloon database that goes back to 1948, that the transmittance of the atmosphere in the infrared has been constant for the last 61 years. Carbon dioxide at the same time increased by 21.6 percent. This means that addition of all this carbon dioxide had no effect whatsoever on the absorption of IR by the atmosphere. And no absorption means no greenhouse effect, the foundation stone of IPCC models. If you look further you realize that no observations of nature exist that can be said to verify the existence of greenhouse warming. Satellites show that within the last 31 years there was only one short four year spurt of warming. It started in 1998, raised global temperature by a third of a degree, and then stopped. It was oceanic, not greenhouse in nature. The only other warming in the twentieth century started in 1910 and stopped with World War II. Bjørn Lomborg sees it as caused by solar influence. He is probably right because you cannot turn carbon dioxide on and off like that. Between these two warmings temperature went nowhere while carbon dioxide kept going up. This leaves just Arctic warming to explain. It started suddenly at the turn of the twentieth century, after two thousand years of cooling. It cannot be greenhouse warming because carbon dioxide did not increase in synch with it. This covers the last one hundred years and, as required by Miskolczi, no warming within this time period can be called a greenhouse warming. And going back to the signal and noise problem, we can say that not being able to detect climate change is simply a case of absence of a signal, not too much noise.

November 15, 2011 8:27 am

I agree with Bob. John B, why don’t you let people who want to discuss this discuss it, and just go away? It isn’t helpful, either, to say that skeptics are led by ideology. We are led by facts.
I started being interested in this long before I heard of Gore, and I bet before he heard of global warming. In 1972 my teachers told me that pollution might cause an ice age. By 1980 the New York Times was talking about global warming. I wanted to know what had happened to the ice age that had the same cause. So I have been following this ever since. The day that I see proof that AGW will cause a catastrophe is the day I believe it. So far no such proof exists.

John B
November 15, 2011 8:31 am

erd berple says:
November 15, 2011 at 8:02 am
“Dusty says:
November 15, 2011 at 5:30 am
Will somebody please explain to this retired engineer why a linear extrapolation of a curve based on ill defined chaotic sources (especially out to a hundred years) should have any predictive value?”
Given the forecasting power of climate models, why not use the GCM’s to forecast stock prices and make a killing in the market to pay off the national debt and pay to turn the economy green?

————–
Short answer, the “efficient market hypothesis”:
http://en.wikipedia.org/wiki/Efficient-market_hypothesis

Editor
November 15, 2011 8:32 am

juanslayton says: “At the top of fig 2 it says y=0.0054x. Should that be 0.054x?”
EXCEL presents the trend on a yearly basis (0.0054 deg C per year). I simply converted it to decades (0.0054 x 10 = 0.054 deg C per decade) for the post.

Jim Masterson
November 15, 2011 8:57 am

>>
John B says:
November 15, 2011 at 2:46 am
The multidecadal oscillations do not exhibit a trend, they oscillate. Some are periodic, some, like ENSO, not so much, but they are all oscillations (as far as we know). That is why it is OK to not attempt to model them – over a long enough time, they even out.
<<
This is nonsense. The only way two oscillations will cancel (even out) is if they are exactly tuned to the same frequency, are exactly 180 degrees out of phase, and are exactly the same amplitude. I doubt anyone (but you) is making this statement. Activate two tuning forks whose fundamental frequencies differ by a few Hertz and they will “beat.” The “beat” tone will noticeably get louder and quieter. The beat frequency is the difference of the two oscillators. (Superheterodyne receivers work by this principle.) All the natural oscillation frequencies and amplitudes in climate differ, so that none will magically “even out.”
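A quick numerical check of that beat arithmetic (toy frequencies, nothing more):

```python
# Quick check of the beat arithmetic with two toy "tuning forks".
import numpy as np

f1, f2 = 440.0, 443.0                                # Hz, a few hertz apart
t = np.linspace(0.0, 2.0, 20000)
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# sin(a) + sin(b) = 2*sin((a+b)/2)*cos((a-b)/2), so the loudness envelope is
# |2*cos(pi*(f1 - f2)*t)|, which rises and falls |f1 - f2| times per second.
envelope = np.abs(2 * np.cos(np.pi * (f1 - f2) * t))
print("beat frequency:", abs(f1 - f2), "Hz")         # 3.0 Hz here
```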
Jim

Craig Moore
November 15, 2011 8:58 am

Similar to, but slightly different in thrust, is Pielke Sr.’s latest post on the tremendous waste of money and resources in propping up the model memes. http://pielkeclimatesci.wordpress.com/2011/11/15/the-huge-waste-of-research-money-in-providing-multi-decadal-climate-projections-for-the-new-ipcc-report/

Hoser
November 15, 2011 8:59 am

There are many GCMs and many climate scientists playing with their toy models. It is up to them to decide how to make them work. Clearly the GCMs have no scientifically relevant predictive power. Consequently, they should not be used to justify policy decisions. However, they do have politically useful predictive power, and that is precisely why they are used for policy decisions. Therefore, the models currently are “working” from the perspective of funding.
A solution to the dilemma would be to create a correct model. However, even if the computing power were available, there are unpredictable external factors that cannot be modeled. Predicting climate is about the same as predicting the future Dow Jones average. If anyone really could predict stock prices, they’d be very rich. On the other hand, if anyone could predict climate accurately, they would be ridiculed by the majority of their peers, their work would not be published, and their research would lose funding. Would it all be worthwhile even if you eventually got a Nobel, after say 20+ years of abuse?

Downdraft
November 15, 2011 9:07 am

Including the multidecadal variation of the observed temperatures in the models, even if the cause is not known, would be a valid change to the models and improve projections significantly. Ignoring it degrades the accuracy of the temperature projections, but accurate projections are not the purpose of the models. The models serve the purpose for which they were written.
I suspect the modelers began with the assumption that without increasing CO2 concentrations, temperatures would be stable, and then built the model to replicate the short term trend in the temperature record. In an email from EPA some years ago, the responder indicated that, indeed, without increasing CO2, they believed temperatures would be stable. He also included a list of all the catastrophes that would occur if nothing is done. There were no positive outcomes, of course.

John B
November 15, 2011 9:12 am

Jim Masterson says:
November 15, 2011 at 8:57 am
>>
John B says:
November 15, 2011 at 2:46 am
The multidecadal oscillations do not exhibit a trend, they oscillate. Some are periodic, some, like ENSO, not so much, but they are all oscillations (as far as we know). That is why it is OK to not attempt to model them – over a long enough time, they even out.
<<
This is nonsense. The only way two oscillations will cancel (even out) is if they are exactly tuned to the same frequency, are exactly 180 degrees out of phase, and are exactly the same amplitude. I doubt anyone (but you) is making this statement. Activate two tuning forks whose fundamental frequencies differ by a few Hertz and they will “beat.” The “beat” tone will noticeably get louder and quieter. The beat frequency is the difference of the two oscillators. (Superheterodyne receivers work by this principle.) All the natural oscillation frequencies and amplitudes in climate differ, so that none will magically “even out.”
Jim
============
Jim, I obviously didn't make myself clear enough. By "even out", I meant "not contribute to the long term trend". Even if there were only one oscillation, as long as it oscillates around some mean, it will have no net contribution to any longer term trend. I was not referring to beat frequencies between oscillations. My apologies for sloppy wording.
You can see quite clearly what I am referring to in Bob's Figure 1. Lots of ups and downs that are not captured by the models, but the long term trend is as generated by models (that do not model the oscillations).
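A toy illustration of the point in dispute (synthetic numbers only, nothing fitted to real data): a fixed oscillation adds little to a trend estimated over a full century, but it can dominate a trend estimated over a 25-year window sitting on its rising phase:

```python
# Synthetic example only: a ~65-year oscillation around a rising trend.
import numpy as np

years = np.arange(1900, 2001)
underlying = 0.006                                        # deg C/yr, assumed
oscillation = 0.12 * np.cos(2 * np.pi * (years - 1942) / 65.0)
temp = underlying * (years - 1900) + oscillation

slope_full = np.polyfit(years, temp, 1)[0]                # fit over the century
window = (years >= 1975) & (years <= 2000)
slope_short = np.polyfit(years[window], temp[window], 1)[0]  # fit over 1975-2000

# The full-century slope stays near the assumed 0.006; the 1975-2000 slope
# comes out well over twice that, because the window sits on the rising phase
# of the oscillation.
print(round(slope_full, 4), round(slope_short, 4))
```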

John B
November 15, 2011 9:23 am

Jim,
A better way of putting it: the net effect of an individual oscillation “evens out” to zero over a long enough timescale.
John

Disko Troop
November 15, 2011 9:29 am

Max, how can you say such a thing? I was one of the guys that ran a maritime mobile weather station. We religiously threw the bucket over the side, or gave it to the apprentice to do, or phoned the engine room for sea inlet temps, depending on the weather or how much time we had. Sometimes we hauled an uninsulated bucket of water 40 feet to the bridge, other times we heaved it in over the rail 6 feet above the water. The sea inlet would be 54 feet deep at max draft and 22 feet deep in ballast. The thermometer was a mercury one calibrated in WHOLE degrees, not halves, tenths or hundredths. I have to agree with you, manipulating data which was never designed to be used for trending is a fool’s game. The statistical premise that if you build a high enough pile of garbage it will somehow, above a certain height, magically transmute into gold dust is a typical academic fallacy. There were many occasions when we would be the ONLY reporting ship in the entire South Indian Ocean, yet today I see these pretty pictures of temperature trends and computerised charts, and hear people telling me that they can identify a trend of 0.7 degrees in a hundred and fifty years. Honest Guv….me computer says it so it must be right. Utter garbage.

Gail Combs
November 15, 2011 9:32 am

Max Hugoson says:
November 15, 2011 at 7:38 am
…The “sea surface temperatures” are based, prior to the 80′s or 90′s or even the ARGO BUOYS, on fundamentally a FICTION.
Ships logs? Guys throwing BUCKETS over the side? Calibration, consistency, QUALITY ASSURANCE? Absolutely lacking. Those data have been manipulated to show WHAT THEY WANTED. I think they are completely BOGUS.
I think others should QUESTION THE SOURCE OF THIS HIGHLY PROCESSED DATA and not accept it at face value.
__________________________________
I read here at WUWT that someone who had done this type of measurement at sea is going back and getting the actual records to look at. He had asked for help but I do not have the pointer bookmarked.
Perhaps someone else does. (He noted that the data was actually taken in a “rigorous manner” at least by UK seamen.)

Gail Combs
November 15, 2011 9:38 am

John B says:
November 15, 2011 at 7:58 am
….. No, I did not contradict myself. Here is my point:
The models do not include multidecadal oscillations. They do not need to in order to model long term trends. Your Figure 1 shows this. Your Animation 1 is cherry picking, presumably aimed at showing otherwise.
_______________________________
So what you are saying is the model long term trends, which are always shown as a straight line headed off the paper, are correct and the earth is going to become VERY VERY hot??
You are also saying that any influence from “natural variability” is minor and should be ignored???
What is your basis for such belief?

Roger Knights
November 15, 2011 9:41 am

Tisdale:
In online disputes with warmists, I’ve encountered the claim that the global SST is rising. I haven’t been able to find a chart that I could link to that would counter that–there isn’t one here on WUWT. Is there one anywhere? (Ideally one provided by some supposedly neutral official or academic source.)
If such a one exists, I hope it gets added to this site’s Reference section.

Jiri Moudry
November 15, 2011 9:53 am

We know enough to make 100-year climate predictions but not enough to make a 100-hour weather forecast. That’s a settled science.

November 15, 2011 10:15 am

Hi Bob,
The models are not programmed to “latch” onto the period 1975-2000.
The models produce absolute temperatures in deg C (all over the place); anomalies are created from those.
Guess the reference period and the effect it will have.
You have to pick an alignment period.

Steve Garcia
November 15, 2011 10:19 am

@Theo Goodwin November 15, 2011 at 7:54 am:

The big lessons are clear to the many who made comments above. The modelers put all their eggs in the CO2 basket. CO2 concentration in the atmosphere increases linearly, at least given their relatively simple minded assumptions. So, the warming had to go up linearly as in the 1975 to 2000 period. In other words, they treated CO2 and its effects on radiation from the sun as the only natural processes that required modeling. Now they are being forced to admit that other natural processes must be treated as important in their own right. The sum total of all those natural processes make up most of what is called natural variability.

These are all things that should have been addressed back in the late 1980s, before drawing any conclusions. If some of these forcings/processes were not known then, then the ones that were known should have been addressed. To have to be dragged, kicking and screaming, to address them at this late date is an utter scandal.
My VERY FIRST approach to this subject in the 1990s was to go out looking for such studies – looking for the ones that falsified ALL other possible forcings, as individual forcings and also as possible combined forcings. When I didn’t find them, I knew this was a case of the Emperor’s New Clothes. Thank Allah and God and Rama and all the other gods, present and past, for Anthony and Steve M, in particular, and Bob and Willis, too, for keeping at it and holding their feet to the fire.

John B
November 15, 2011 10:20 am

Gail Combs says:
November 15, 2011 at 9:38 am
John B says:
November 15, 2011 at 7:58 am
….. No, I did not contradict myself. Here is my point:
The models do not include multidecadal oscillations. They do not need to in order to model long term trends. Your Figure 1 shows this. Your Animation 1 is cherry picking, presumably aimed at showing otherwise.
_______________________________
So what you are saying is the model long term trends, which are always shown as a straight line headed off the paper, are correct and the earth is going to become VERY VERY hot??
You are also saying that any influence from “natural variability” is minor and should be ignored???
What is your basis for such belief?
————————–
Pretty much, with a few provisos…
Model long term trends are only shown as straight lines by the likes of those who want to discredit them, but they do indeed show the Earth becoming hot (whether it is “VERY VERY hot” depends on how hot you like things). How hot depends mainly on future CO2 emissions.
It is not that “natural variability” is small, rather that it does not show a trend over the timescales we are interested in (decades to centuries). AMO, PDO, ENSO all have an ‘O’ because they are oscillations. And there are no natural variabilities that can plausibly explain the trends in observations. For example, GCRs: even if it could be shown that GCRs help clouds form, there has been no trend in GCRs that would explain post-industrial temperature trends. And so on.
And the basis for my belief? Well, it’s not a belief, it is an acceptance of the science. The physics says CO2+feedbacks will cause warming; observations and models confirm it. Science has looked for (and in detail at) alternative explanations, but none of them stack up. Yes, the science could be wrong… and you could have a winning lottery ticket in your hand. I don’t think either is a very safe bet.

John B
November 15, 2011 10:24 am

Jiri Moudry says:
November 15, 2011 at 9:53 am
We know enough to make 100-year climate predictions but not enough to make a 100-hour weather forecast. That’s a settled science.
———————–
Exactly! In the same way that I do not know if it will be warmer next Tuesday than it was today, but I am pretty sure it will be warmer in July.

November 15, 2011 10:43 am

Climate models should not exist.
Period.
Observe nature.
Period.

Warren in Minnesota
November 15, 2011 10:47 am

Stephen Wilde says:
November 15, 2011 at 7:35 am…
Anything else would be subsumed within those parameters.
Needless to say the current models are not well designed in any of those areas.

I had thought of three of the four parameters that you list, but I hadn’t thought of point three: the latitudinal positions. However, I would add the outside influence of aerosols, such as those from volcanic eruptions.

Keith
November 15, 2011 10:53 am

If the climate models are so good, have been tested to destruction and incorporate everything that is relevant and material, then it must’ve occurred to the modellers to perform runs removing one factor one at a time, i.e. does the model show skill if volcanic aerosols are removed, if variations in solar TSI are removed, solar magnetic flux, manmade aerosols, UV/EUV, oceanic cycles, cloud cover, atmospheric water vapour, etc. If manmade CO2 is dominant, then there should still be a moderate-to-high degree of skill displayed.
We’re often told that only through including man’s CO2 emissions are we able to recreate 20th century temperature trends. OK then, in papers that show 20th century temp recreations by one or numerous models when CO2 is removed as a forcing, are we informed as to what forcings/factors remain and their assigned weightings? If so, are all suspected factors incorporated and, by adjusting weightings, is it impossible to get closer to measured temp trends than the CO2-driven model versions?
I won’t be stunned if there is a model version that does a much better job than anything else published by limiting CO2 to a very minor role and focusing on ALL solar activity, global cloud cover, volcanic aerosols and oceanic cycles. I WILL be stunned, though, if such a model version were stuck with and its results and methodology ever published.
Bob, excellent work as ever.

John B
November 15, 2011 11:12 am


Is this the kind of thing you are looking for:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch9s9-2-2.html
So yes, they have looked at forcings in isolation and in combination. Lots of people have looked at exactly that.

Editor
November 15, 2011 11:12 am

John B says: “Yes, I read the post. No, I did not contradict myself.”
Anyone reading your comments can see you’ve contradicted yourself. In your November 15, 2011 at 3:31 am comment you wrote, “The hindcast shows that the models replicate past climate pretty well over scales of decades to centuries.” Pretty well is subjective, but anyone looking at Animation 1 above can see that this is incorrect. Then in your November 15, 2011 at 3:38 am comment you wrote, “But the models do not include multidecadal oscillations, and were pretty good, though obviously they missed the spike due to the 1998 El Nino.” That appears to be a contradiction, John B. One would think that for the models to “replicate past climate pretty well over scales of decades to centuries,” multidecadal variability would have to be included in those timeframes.
Back to your November 15, 2011 at 7:58 am comment: There you wrote, “The models do not include mulitdecadal oscillations. They do not need to in order to model long term trends. Your Figure 1 shows this.”
Again, John B, you continue to overlook the topic of this post. The post is not about long-term trends. They’re not being discussed on this thread. The title of my post is, Should Climate Models Be Initialized To Replicate The Multidecadal Variability Of The Instrument Temperature Record During The 20th Century? That’s the subject discussed in the post. A similar question is asked twice in it, which is why I asked you earlier if you had read it.
You continued, “Your Animation 1 is cherry picking, presumably aimed at showing otherwise”
Cherry picking? You actually made me laugh with that one, John B. Cherry picking? Really? There is a well-known multidecadal signal in the instrument temperature record. It’s even acknowledged by the IPCC in AR4. In Chapter 3, page 249, they state, “Clearly, the changes [in Global Temperature] are not linear and can also be characterized as level prior to about 1915, a warming to about 1945, leveling out or even a slight decrease until the 1970s, and a fairly linear upward trend since then (Figure 3.6 and FAQ 3.1).”
If I had wanted to cherry pick, I would have presented this:
http://i40.tinypic.com/o7n23s.jpg
Note that the mid-century flat spell is now slightly negative. And of course a couple more moderate El Nino events, followed by back-to-back La Nina events, as have happened since about 2005, would lower the trend after 1998 even more. I could have cherry picked for this post but I didn’t.
Just in case you missed the link, refer to this post and click on the link marked “IMAGINE, IF YOU WILL…” toward the bottom of it. That’s the worst case scenario the IPCC is facing because they don’t consider multidecadal variability:
http://bobtisdale.wordpress.com/2011/11/14/imagine-if-you-will/
Now, if you’re not aware, climate model outputs of SST data bear no likeness to observations over the past 30 years or so. Refer to the following two posts. They’ve also been cross posted here at WUWT. I’m sure you’re aware of the implications of that with respect to atmospheric circulation, since much of atmospheric circulation is dependent on the oceans:
http://bobtisdale.wordpress.com/2011/04/10/part-1-%e2%80%93-satellite-era-sea-surface-temperature-versus-ipcc-hindcastprojections/
And:
http://bobtisdale.wordpress.com/2011/04/19/492/
I have the feeling you also need to be made aware that the SST data for the past 30 years does not support the hypothesis of AGW. In the following two posts, I discuss and illustrate that fact pretty well. They too have been cross posted here. The first one starts off with an introductory discussion of ENSO. My guess is you misunderstand the El Nino-Southern Oscillation as well:
http://bobtisdale.wordpress.com/2011/07/26/enso-indices-do-not-represent-the-process-of-enso-or-its-impact-on-global-temperature/
And:
http://bobtisdale.wordpress.com/2011/08/07/supplement-to-enso-indices-do-not-represent-the-process-of-enso-or-its-impact-on-global-temperature/

Matt
November 15, 2011 11:12 am

TBear,
No, this post does not say that the climate models are crap. Crap can be used as fertilizer and thus has some redeeming value. The climate models on the other hand….

November 15, 2011 11:49 am

http://climexp.knmi.nl/data/tcet_mean1a.png
This is the longest instrumental record, CET. Can anyone run the model for the given area backwards to see the result? Only when all the ups and downs are replayed can we then claim “the Sun it is not, nor clouds or aerosols, so it must be CO2 because what else could it be”.

P. Solar
November 15, 2011 12:07 pm

Here are a couple of simple models that show the relative magnitudes of the cyclic and quadratic components (the quadratic being the result of an exponential rise in CO2 concentration).
The basic method is to fit a straight line to dT/dt to account for CO2 and try to characterise the majority of the residual with cosines.
Even one ~58y cosine plus a quadratic is better than just about any supercomputer model.
CO2 emissions:
http://tinypic.com/r/r76l4h/5
Trivial model:
http://tinypic.com/r/2nrn24m/5
Better model (also shows Scafetta):
http://tinypic.com/r/2dw924i/5
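For anyone who wants to try the recipe, here is a sketch of that kind of fit (synthetic stand-in data and assumed coefficients; it is not the code behind the plots linked above):

```python
# Sketch only: a quadratic (a linearly increasing rate of change, standing in
# for the CO2 response) plus one ~58-year cosine, fitted by least squares.
# 'years' and 'anoms' are synthetic placeholders; substitute a real annual
# anomaly series (e.g. HadCRUT) to try the method for real.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
years = np.arange(1880, 2011)
anoms = (4e-5 * (years - 1880) ** 2 - 0.3
         + 0.1 * np.cos(2 * np.pi * (years - 1940) / 58.0)
         + rng.normal(0, 0.08, years.size))          # stand-in "observations"

def trivial_model(year, a, b, c, amp, phase, period=58.0):
    x = year - 1880.0
    return a * x**2 + b * x + c + amp * np.cos(2 * np.pi * x / period + phase)

p0 = [1e-5, 0.0, -0.3, 0.1, 0.0]                      # rough starting guesses
popt, _ = curve_fit(trivial_model, years, anoms, p0=p0)
fitted = trivial_model(years, *popt)
print("RMS residual:", round(float(np.sqrt(np.mean((anoms - fitted) ** 2))), 3))
```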

November 15, 2011 12:08 pm

Keith:
“If the climate models are so good, have been tested to destruction and incorporate everything that is relevant and material, then it must’ve occurred to the modellers to perform runs removing one factor one at a time, i.e. does the model show skill if volcanic aerosols are removed, if variations in solar TSI are removed, solar magnetic flux, manmade aerosols, UV/EUV, oceanic cycles, cloud cover, atmospheric water vapour, etc. If manmade CO2 is dominant, then there should still be a moderate-to-high degree of skill displayed.”
You clearly don’t understand how climate models work, how they are tested and how attribution is done.
1. Does the model show skill if volcanic aerosols are removed? This has been explicitly tested by seeing how models respond to volcanic eruptions. It is one of the better (but still not perfect) aspects of the models. Also, the entire sulphur cycle can be flipped on and off. Skill improves when it’s on.
2. Variations in solar TSI are removed: TSI is also tested. The biggest issue with TSI is NOT how the models handle it, but rather A. the historical forcings and B. projecting the future. In the runs that everybody is looking at, some of the models projected flat TSI going forward, flat from a high baseline. This leads models to overestimate in the short term, which they have done.
3. Solar magnetic flux: to incorporate a physical cause you need physics that connects the variable to other variables in the model. Missing physics.
4. Manmade aerosols: yup, they are in there. You can see the response to adding them or not.
5. UV/EUV: this is an area that some models will cover better than others, based on their atmospheric chemistry modules.
6. Oceanic cycles: this is an OUTPUT of the models, not an input you vary! Many people make this mistake. They think that the EMERGENT properties of the system should somehow be inputs. They are not. What Bob is showing you is that the output of the models does not capture the emergent properties perfectly.
7. Cloud cover: cloud cover is not an input, you don’t vary it. Cloud cover is an output.
8. Atmospheric water vapour: this is also an output. Although NASA did one test where they ZEROED the water vapor. Naturally, the model responded correctly and water vapor returned to the atmosphere.
CO2: Here is the simple fact. If you run the models without CO2 forcing they perform poorly in hindcast. They miss the current warming. If you include CO2 the models do better.
Why?
Simple. GHGs cause warming. More GHGs, more warming. Ask Anthony, Willis, Lindzen, Christy, Spencer, Monckton; all skeptics (with backgrounds in physics or good reading skills) understand that more GHGs means a warmer planet.
Does that mean that all the warming is caused by CO2? No.
It means exactly what it shows: without CO2, the models get an F. With CO2 the models get a C or B. Perfect? Hardly. Do they confirm (not prove) that our core understanding is correct? Yes. Should they be used to set policy?
That is a whole different question.
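For what it is worth, the with/without comparison can be caricatured in a few lines (a toy regression on made-up forcing series, nothing like an actual GCM hindcast):

```python
# Toy caricature of the with/without test: regress synthetic "temperatures"
# on made-up forcing shapes and see how much hindcast skill is lost when the
# CO2-like term is left out. None of these series are real data.
import numpy as np

rng = np.random.default_rng(4)
n = 130
co2_like = np.linspace(0.0, 1.0, n) ** 2               # slowly accelerating forcing
solar_like = 0.1 * np.sin(np.linspace(0.0, 12 * np.pi, n))
temps = 0.8 * co2_like + 1.0 * solar_like + rng.normal(0, 0.08, n)

def hindcast_rmse(predictors):
    X = np.column_stack(list(predictors) + [np.ones(n)])
    beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return float(np.sqrt(np.mean((temps - X @ beta) ** 2)))

print("with the CO2-like term:   ", round(hindcast_rmse([co2_like, solar_like]), 3))
print("without the CO2-like term:", round(hindcast_rmse([solar_like]), 3))  # clearly worse
```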

P. Solar
November 15, 2011 12:14 pm

>>
The models do not include multidecadal oscillations. They do not need to in order to model long term trends.
>>
No, but if you use “the last 50y of 20th c” as your reference period and ignore the cyclic components you are going to confound GH warming and natural cycles and head off in the wrong direction after y2k.

Editor
November 15, 2011 12:22 pm

Ross Sheehy – “I suppose they could always just whack in a giant sine curve and then adjust the parameters to pretend they know what is actually going on in the world. Even that seems too hard.”
Try this:
http://members.westnet.com.au/jonas1/HadleyCurveFit20111114.jpg
I would suggest that it gives a much better clue than fitting a straight line to the last 30 years of the 20thC.
Braddles – “It’s been said before, but every post on modelling should say it again: any model that uses curve fitting has no predictive value.”
Correct. So the above graph does indeed have no predictive value. One value it does have, however, is that it demonstrates very clearly that any straight line or curve fitted to 30-odd years of data cannot possibly have any predictive value. Another of its values is that it can point to possible actual influences on temperature which can then be investigated – once there is a mechanism the rules change. See Vukcevic’s link
http://www.vukcevic.talktalk.net/theAMO.htm
“Global importance of the AMO is underlined by the recent Berkeley Earth Project:
We find that the strongest cross-correlation of the decadal fluctuations in (global) land surface temperature is not with ENSO but with the AMO.”
I would have expected PDO or PDO+AMO, but the point is made – natural factors drive global temperature far more than is understood by the IPCC.
John B – The models do not work by latching on to trends. They work by simulating the effects of known physics and various forcings on a simplified, gridded model of the atmosphere and oceans.
For “various” read “selected”. Even that’s being generous; try: for “various” read “CO2”. And they do latch on to trends, that’s how they calibrate the models – look in the IPCC report for the words “parametrization” and “constrained by observation”.

Jim Masterson
November 15, 2011 12:34 pm

>>
John B says:
November 15, 2011 at 9:23 am
Jim,
A better way of putting it: the net effect of an individual oscillation “evens out” to zero over a long enough timescale.
John
<<
In physics, the usual purpose of models is to investigate and discover the processes that are really happening. Apparently the purpose of climate models is to dumb them down so they only show that CO2 is the boogeyman.
>>
John B says:
November 15, 2011 at 10:20 am
And the basis for my belief? Well, it’s not a belief, it is an acceptance of the science. The physics says CO2+feedbacks will cause warming, observations and models confirm it. Science has looked for (and in detail at) alternative explanations, but not of them stack up.
<<
Even Trenberth’s simple cartoon energy model requires that the atmosphere warm faster than the surface. It’s a physical requirement of the feedback model used in the cartoon. The problem appears to be lack of imagination–not lack of alternative explanations. Albedo change (that stays well within the current albedo error ranges) explains the surface warming while warming the atmosphere by the correct, lesser amount.
Jim

P. Solar
November 15, 2011 12:43 pm

“7 cloud cover: cloud cover is not an input, you dont vary it. cloud cover is an output”
Not quite true. They have minimal understanding of cloud formation and precipitation and therefore cannot model the physics. Instead they “parametrise” it. At that point it becomes an input.
Currently used “parameters” cause the models to produce a climate sensitivity that is questionable, to say the least.

P. Solar
November 15, 2011 12:45 pm

>> The physics says CO2+feedbacks will cause warming
No, the feedbacks are pure speculation, not science.

Editor
November 15, 2011 12:47 pm

Roger Knights says: “In online disputes with warmists, I’ve encountered the claim that the global SST is rising. I haven’t been able to find a chart that I could link to that would counter that–there isn’t one here on WUWT. Is there one anywhere?”
Hi, Roger. The rise in Global SST anomalies has slowed considerably over the past decade or so. It really depends on the timeframe they’re looking at. Here are a couple of graphs I prepared for this post, but I didn’t feel they contributed to it once I finished writing it. On a decadal basis, the ten-year trends are back to below zero (2010 and YTD2011) for the first time since 1979, while model projections are nowhere close to zero:
http://i40.tinypic.com/dg44uh.jpg
This one really highlights the differences between the observations and the models: On a multidecadal basis, the thirty-year trends peaked around 2005 and appear to be dropping in response to a multidecadal signal that exists over the term of the data, while the model trends continue their march skywards:
http://i41.tinypic.com/280i3bl.jpg
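For reference, running trends of that kind can be reproduced with a simple sliding-window regression (a generic sketch, not the exact spreadsheet calculation behind those graphs):

```python
# Generic recipe for running-trend curves: slide an N-year window along an
# annual anomaly series and keep the least-squares slope of each window,
# expressed in deg C per decade.
import numpy as np

def running_trends(years, anoms, window):
    end_years, trends = [], []
    for i in range(len(years) - window + 1):
        y = years[i:i + window]
        a = anoms[i:i + window]
        slope = np.polyfit(y, a, 1)[0]      # deg C per year over this window
        end_years.append(y[-1])             # label each window by its final year
        trends.append(slope * 10.0)         # convert to deg C per decade
    return np.array(end_years), np.array(trends)

# e.g. running_trends(years, sst_anoms, 10) for the ten-year trends and
#      running_trends(years, sst_anoms, 30) for the thirty-year trends.
```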
And I would also disagree with any assumption on their parts that the cause of the warming is greenhouse gases. They would need to supply peer-reviewed papers that explain why Sea Surface Temperature anomalies for the East Pacific Ocean (90S-90N, 180-80W) have not risen for the past 30 years:
http://i40.tinypic.com/a5hyti.jpg
And why, between significant El Niño events, Sea Surface Temperature anomalies for the rest of the global oceans (90S-90N, 80W-180) don’t rise for decade-long stretches:
http://i44.tinypic.com/r7jbdf.jpg
ENSO is a process, so there is no way to remove its effects through linear regression, as is so often attempted.
As far as I know, there are no papers that address this very obvious relationship that exists in the data. All one needs to do is volcano-adjust the data, and those stand out like sore thumbs. The most recent discussions of that are here (part 1):
http://bobtisdale.wordpress.com/2011/07/26/enso-indices-do-not-represent-the-process-of-enso-or-its-impact-on-global-temperature/
And here (part 2):
http://bobtisdale.wordpress.com/2011/08/07/supplement-to-enso-indices-do-not-represent-the-process-of-enso-or-its-impact-on-global-temperature/

Christopher Hanley
November 15, 2011 12:51 pm

Dr. Syun Akasofu is making a similar point here I think:
http://wattsupwiththat.com/2009/03/20/dr-syun-akasofu-on-ipccs-forecast-accuracy/

Editor
November 15, 2011 12:52 pm

steven mosher says: “The models are not programmed to ‘latch’ onto the period 1975-2000.”
It was a poor choice of words in my post last week that was carried over to this one because I could not, for the life of me, remember the term curve fitting. I will fix this tomorrow in both posts.
Regards

More Soylent Green!
November 15, 2011 1:30 pm

Brian H says:
November 15, 2011 at 4:06 am
John B, et al;
The model runs are not “initialized” on real world conditions at all. They’re just tweaked to see what effect fiddling the forcings has on their simplified assumptions. It’s not for nothing that the IPCC says in the fine print that they create ‘scenarios’, not projections. Even though they then go on to treat them as projections.

Given what I understand to be the chaotic nature of our climate, a small variation in initial conditions should mean greatly different results from model runs. Unless the models don’t work the way our climate actually works.
Regardless, I’m not sure our data are good enough to give an accurate set of initial conditions.
I’d also like to see some test runs of various models using multiple standard sets of initial data. Create perhaps a dozen different sets of initial data and run each model with each set. It would be interesting to see the results of each.

John B
November 15, 2011 1:46 pm

More Soylent Green! says:
November 15, 2011 at 1:30 pm

Given what I understand to be the chaotic nature of our climate, a small variation in initial conditions should mean greatly different results from model runs. Unless the models don’t work the way our climate actually works.
Regardless, I’m not sure our data are good enough to give an accurate set of initial conditions.
I’d also like to see some test runs of various models using a multiple standard sets of initial data. Create perhaps a dozen different sets of initial data and run each model with each set. It would be interesting to see the results of each.
—————–
As I understand it, weather is chaotic, climate not so much. As Steven Mosher pointed out upthread, NASA started a model run with zero water vapour and the water vapour “appeared”. If the model is good, and is run for long enough, it will be relatively insensitive to starting conditions.

Roger Knights
November 15, 2011 3:21 pm

Tisdale:
Thanks for those two charts. I hope they do get posted here in the Reference section. But if they do, I suggest that each have documentation added to the caption describing and linking to the source data and the charting procedure used. They would then be useful “ammo” in the battle.

Editor
November 15, 2011 4:15 pm

Roger Knights says: “Thanks for those two charts. I hope they do get posted here in the Reference section.”
Assuming you’re talking about the East Pacific and Rest-Of-The-World SST anomaly graphs, they are included in my monthly SST anomaly updates. Example:
http://bobtisdale.wordpress.com/2011/11/07/october-2011-sea-surface-temperature-sst-anomaly-update/

November 15, 2011 4:18 pm

Bob, it’s not curve fitting either.
The models output absolute temperatures in deg C.
The actual values are not even close to the real temperature. Those values are then averaged and anomalized. Then they are base shifted (as I recall) to align with the temperature record.
The period selected is 1975 to 2000.
Jones even discusses this issue in the mails.
It has nothing whatsoever to do with “curve” fitting models to the data.
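A sketch of that anomaly-and-alignment step, purely for illustration (placeholder numbers, not code from any modelling centre):

```python
# Illustration of the anomaly/alignment step (placeholder numbers throughout).
# A model's absolute temperatures are converted to anomalies against its own
# 1975-2000 mean, so that they sit on the same baseline as an observed record
# referenced to that same period.
import numpy as np

years = np.arange(1900, 2011)
model_abs = 13.2 + 0.007 * (years - 1900)          # hypothetical absolute deg C
base = (years >= 1975) & (years <= 2000)

model_anom = model_abs - model_abs[base].mean()    # anomalies vs 1975-2000
# An observed anomaly series on the same base period would, by construction,
# share a near-zero mean over 1975-2000, which is what aligns the two curves.
```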

Keith
November 15, 2011 4:46 pm

Hi Steven (may I call you Mosh?),
Thanks for your comments. While I’ve looked into this area a little bit (gotta try to prevent brain atrophy when you’re ill somehow), I’ve certainly not invested the time that you and many others have! I don’t understand how the GCMs work, but I’m interested, and also wondering how they should work.
I’m of the view that atmospheric gases should cause warming and that they have their part to play. From what I’ve seen I still don’t think we’re close to pinning down a value (for CO2 particularly, given the overlap with water vapour at certain wavelengths). It seems that known physics has been applied to a number of known potential factors, with a balancing item of ‘feedbacks’ to try to square the circle. This is still open to refinement, or perhaps wholesale reassessment, with every new study and discovery.
My initial point was around whether we know enough about all conceivable material factors to be able to assign accurate weightings to them in models, and whether by adjusting the weightings and playing with the possible impacts of not-well-understood potential factors we could obtain a better match to past temp trends. Given the resources available and the potential importance of the issue, I would expect that someone may have run some what-ifs to try to get ever-better results, assuming cognitive dissonance doesn’t get in the way.
Clearly we don’t know 100%, but how far out are we?
We seem to have volcanic/SO2 forcings tied down quite well, with a decent amount of empirical evidence of effects on reflected shortwave to back it up. Solar TSI in itself is also a comparatively basic calculation. Quantification of the empirical impact of CO2 in an open system seems more shaky, while we’ve not had particularly good measurements of UV/EUV variation, so this area is shakier still. Solar magnetic flux and other suppositions related to it are seemingly still at the speculative stage, with no clear physical mechanism, but this doesn’t preclude a mechanism ever being identified. I’m not comfortable with what appears to be an assumption that manmade CO2 should be held to be the cause of warming not clearly identified by other physical processes.
I mentioned oceanic cycles not just because of the PDO and AMO, but also due to the vast timescales involved in thermohaline circulation. 20th century upper oceanic heat content may not necessarily be purely determined by 20th century forcings. Bob looks to have as good an understanding of the multidecadal variations as anyone around, but I’ve not seen anybody demonstrate how the centuries-long cycles may affect climate and vice versa. It’s another world down in the depths, so I don’t know if we’ll ever get the data coverage to fully understand it.
Clouds may be an output but, if Svensmark and Stephen Wilde are on the right tracks, there may be variations in global cloud cover that have a solar cause, which will have a complex and variable effect on climate. Might be another area of attribution towards 20th century warming that is not due to CO2 increase. And then there may be factors that nobody has even thought of yet, never mind identified and quantified.
The various models are not in the same county as perfect yet, as you say. Surface temp forecasts based on a dominant role for CO2 haven’t been stunningly accurate, while the tropical mid-tropospheric hotspot isn’t as predicted. Any what-if studies using GCMs that could rule in or out other factors for closer investigation (i.e. to seek potential causation given sufficient correlation, in the looser sense) would be of interest.

Legatus
November 15, 2011 5:53 pm

“Leaked IPCC Draft: Climate Change Signals Expected To Be Relatively Small Over Coming 20-30 Years”
“That’s IPCC speak, and it really doesn’t say they’re expecting global surface temperatures to flatten for the next two or three decades.”

One must ask, well, then, what does “climate change signals” mean, exactly? The only signal it can mean is warming; if it means fewer signals, and it clearly does, then it means less warming. The only possible effect CO2 can cause, according to physics, is increased downward longwave radiation. Over the oceans (71% of the earth), this results in a slightly greater amount of evaporation, almost instantly countered by that evaporation creating clouds and shade, reducing the incoming shortwave radiation and thus reducing the longwave radiation upward which could be trapped and radiated back down. Only over land does CO2 cause warming, and that only when it does not cause too much evaporation, which would produce the above effect. The result: CO2 causes some warming only over some of the land, but it is warming. Thus, if we have fewer “climate change signals”, we must have (slightly) less warming.
I think this is what the IPCC is saying, basically the same thing you are saying here, that the ocean-driven variability means that the warming over the next several decades will be quite small. Of course, the IPCC will then claim that it will start to go up again, and will go up even faster, thus saying that their prediction of doom and gloom was right all along. What you are showing is that, if it goes up at the rate it has been with the natural variability included (which the IPCC has now admitted needs to be included), it will not go up faster in the future, and will slow down one or more times in the future again. Thus 100 years from now it will be warmer, but not much warmer, not nearly as warm as the IPCC has been claiming all along (not warm enough to be worth wrecking the world economy over by eliminating CO2).
The simple fact however is that these models have no predictive value. There is no understanding of what causes the ocean circulations that they are now admitting cause “natural variability” or how and why they add or subtract from atmospheric heat, and there is no understanding of solar dynamics enough to be able to tell us why the sun sometimes goes into long periods of quietness which can cause a little ice age or even perhaps a full blown ice age. As such, any statement that their models can predict anything over 100 years is wrong (which the IPCC makes but Bob Tisdale does not).
Simply put, the models are not based on reality. The reason for that is seen in this quote from Upton Sinclair: “It is difficult to get a man to understand something when his job depends on not understanding it.” Basically, the stated job of the IPCC is to “prove” that CO2 causes enough warming that they, the IPCC, need to take over to prevent this disaster. Because of this, they investigate CO2; they do not investigate things like ENSO or other ocean effects, or solar variability and its effect on climate. In short, they do not investigate natural climate change causes at all, because they do not wish to. As such, they simply do not understand the climate, and thus cannot tell us what it will do in any future.
The only forecast I saw that may explain some of what will happen in the future was one that correctly hindcasted the Dalton and Maunder minimums, forecast the current quiet sun (unlike NASA, which got it completely wrong), and forecast that the next solar cycle (or two) will be quieter yet. This will of course be moderated by the oceans, which will slow everything down and make it harder to tell if the sun is doing it, since what the sun does will not immediately show down here due to the stored heat of the oceans. I also expect, however, that upper latitudes will most definitely notice it, as the cold from the poles invades their land, especially well away from the ocean’s moderating influence. I expect increasingly cold winters in places like upper North America and upper Europe and Asia. The IPCC will, of course, try either (or both!) to explain this away as merely local conditions, or somehow blame it on CO2 (and there are plenty of people dumb enough to believe them).
I just had a thought… (it won’t happen again, I promise!):
Let us say that the earth has natural temperature regulators, that tend to keep the temperature relatively stable, despite what the sun does.
The sun was [quiet] several hundred years ago (Maunder and Dalton minimums).
The earth then went into La Nina mode to try and capture more sunlight in the oceans to keep things warm.
This means more trade winds.
In those days, ships used sails.
The discovery and colonization of the new world, and the age of sail and discovery, may have been aided and abetted by the quiet sun and the earth’s La Nina adjustment to it.
The only way we will know if this may be true is to wait a decade or two and see if the sun goes quiet as predicted, and then see if the earth goes into La Nina mode to catch more rays.

Editor
November 15, 2011 6:11 pm

steven mosher says: “Bob its not curve fitting either.”
Steven: It appears as though the models are programmed so that the outputs create a multidecadal trend from the mid-1970s to 2000 that is close to the observed trend. They make no effort to reproduce the early 20th century rise in temperature that is comparable in trend to the latter warming period, and they make no effort to reproduce the mid-century flattening. The appearance is of a simple fit of the forcings curve so that the model output aligns with the first couple of years of observations and the last few decades of the 20th Century. I could overlay the forcings curve atop the model output if you like to illustrate what I’m talking about.

Richard M
November 15, 2011 7:30 pm

I keep wondering… why, if it’s claimed that the short term cooling is natural variability that will eventually turn around, haven’t the models factored this in?
Oh, you mean they don’t understand exactly what has caused this cooling and yet we are to believe they KNOW it will turn around. Yeah, right.
The problem is simple. Unknowns and a few complete errors in their knowledge. The believers simply can’t or don’t want to process the idea of unknowns. We’ve seen this in science over and over again, why would anyone believe we won’t see it with climate? Of course, we already know the answer to that as well. Politics.

P. Solar
November 15, 2011 10:42 pm

John B says:
>>
That doesn’t follow. As an analogy, we can model the tides pretty accurately, but if you stand on a beach and watch the waves crashing in, you might be tempted to say “the tide signal is too small to be modelled”. So it is with climate. And the analogy goes further: we can’t model the details of the beach, the turbulence in the waves, the wind on a particular day, etc., but we can still produce tide tables. The waves may be hugely important if you are swimming on that beach, but they do not affect the longer term trend of the tide.
>>
Your analogy is a good choice.
We can accurately predict tides by analysing the cycles. We have no idea whatsoever how to model ocean dynamics in a way that can predict tides.
The same is true of climate.
We can do better by looking for cycles and the linearly increasing _rate of change_ of temp due to increasing CO2 than by trying to calculate everything from scratch when we don’t know how to do it.
The most trivial curve fitting does better than 20 supercomputer models and budgets of several billions:
http://tinypic.com/view.php?pic=2nrn24m&s=5

P. Solar
November 15, 2011 11:06 pm

steven mosher says: “Bob its not curve fitting either.”
Sorry, I don’t think that is correct. Until models can derive ALL the fundamentals from first principles, the basic operation does come down to curve fitting. They have a good understanding of a lot of things, but of the really important factors like cloud formation, precipitation, ocean currents and atmospheric circulation they really have no idea whatsoever.
The bits they can’t model, they have to make up based on empirical data. They give this a fancy name like parametrisation to make it sound scientific but this is basically empirical tweaking of model INPUTS until the hindcast is reasonably close. In other words : curve fitting.
The “curves” in question are the little bits of climate they do know how to model precisely and the “parameters”, rather than pure cosines, but the process is fundamentally the same thing.
Don’t get confused by the detail, climate models are all fundamentally an exercise in curve fitting.

marcoinpanama
November 16, 2011 5:03 am

Theo Goodwin says:
“Isn’t it amazing that Trenberth can show that he understands the problems with the models yet continue to act as an attack dog for climate science?”
Not if his motivation is politics instead of science. I often refer to AGW extremists as the new Bolsheviks. This is where the extreme left wraps around to meet the extreme right and says “Stop the world, I want to get off…” by tearing down the institutions and industries that have brought the human race progress, redistributing the wealth and returning to a “simpler time.” As so aptly described in Atlas Shrugged. AGW is merely the convenient whipping boy.

Editor
November 16, 2011 3:10 pm

Thanks, Anthony

November 16, 2011 5:01 pm

Warning – Titanic Disaster Ahead
Arguing here whether or not the ~65year AMO natural oceanic temperature cycle should be included in future IPCC-approved climate models smacks of discussing how best to re-arrange the deck chairs on the Titanic. Reading through the blog comments to Bob Tisdale’s excellent and timely posting, it is clear that climate models are bunk and of no value in the climate debate. So we really do need to move on to focus on real climate data.
I have been monitoring the HadCRUT world land/sea temperature series for the past decade.  When I started in 2001, and much to my astonishment, I discovered that the temperature data I plotted out (from 1850 to 2001, a span of 152 years) looked decidedly unalarming. Although the data showed much wild variability from year to year, the long term average temperature rise over the full period was a puny 0.4degC per century.  
As each year has gone by, and despite the ever-growing climate alarmism of this last decade, my original observation that the temperature trend was completely benign has been increasingly confirmed. Here is that temperature chart, now brought up to date:
   http://www.thetruthaboutclimatechange.org/tempsworld.html 
The HadCRUT3 annual average world land/sea temperature data points are plotted in grey. 
The blue linear regression line shows an average rate of temperature increase over the 161 year span of 0.0041degC per year (hence 0.41degC per century). 
The red line shows an 11 year running mean version of the temperature data. This eliminates year-to-year natural variability and reveals a cyclic oscillation of about 67 year period, swinging approximately 0.25degC above and below the long term trend line. This oscillation is generally regarded as due to long term cyclic changes in ocean heat distribution and is certainly natural.
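For anyone who wants to reproduce those two curves, a minimal sketch follows (random stand-in data is used so the snippet runs; the real input would be the annual HadCRUT3 series):

```python
# Sketch of the two curves described above: an ordinary least-squares trend
# line and an 11-year centred running mean. Random stand-in numbers are used
# here; 'anoms' would really be the annual HadCRUT3 land/sea anomalies.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1850, 2012)
anoms = 0.0041 * (years - 1850) - 0.4 + rng.normal(0, 0.1, years.size)  # placeholder

slope, intercept = np.polyfit(years, anoms, 1)
print("linear trend:", round(slope * 100, 2), "deg C per century")

window = 11
running_mean = np.convolve(anoms, np.ones(window) / window, mode="valid")
centre_years = years[window // 2 : -(window // 2)]   # year at the centre of each window
```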
Those who believe in the power of increased greenhouse gases to raise the world temperature alarmingly have fixated on the 30 year period from 1970 to 2000 when the ocean oscillation was in its rising phase. This allowed alarmists to report an apparently dangerous rate of warming which they ascribed to the big increase in atmospheric CO2 that occurred in the second half of the 20th century due to post-World War 2 industrialisation. They consequently, and erroneously, projected this apparently permanent temperature trend out to show an alarming rise of several degrees C by 2100.
I am not alone in making the above skeptical analysis. Those of us who did so have expected that the temperature curve would turn downwards as we got into the falling phase of the ocean oscillation. Since that is exactly what has happened during the last decade, panic has increasingly set in amongst the climate alarmist fraternity. If this downward swing continues for the next 20 years or so, the skeptical position will surely be overwhelmingly verified. The ‘great global warming panic’ will by then have become simply an object of historical curiosity.
In summary, therefore, I believe that skeptics do their cause no good by wasting time agonising over whether or not the climate models should be tweaked by including this or that new feature. That is surely something for the climate alarmists in their ivory towers to worry about as they contemplate the possible demise of their grand theories.

November 17, 2011 10:17 pm

I know I’m a couple of days late to this, but I must say I’m disturbed by the state of global climate modeling, if the results are only discussed on a global scale.
My comments:
1) If multidecadal (20 to 50+ years) timescale variations are important to the global temperature signal, then the model should be run for a minimum of 200 to 400 years, with observed data to compare to. As John B states, multidecadal variations average out over time, so the only way to know if your model is accounting for them well enough, or if they’re averaging out, is to run it over multiple oscillations. Assuming there aren’t multi-century variations that need to be accounted for (not necessarily a good assumption), then 400 years seems to be a good minimum model run time (8 fifty year cycles).
2) How is comparing a global average temperature supposed to indicate the skill of a model?
If it’s a decent global model, then regional variations (at least dividing the globe into 16 to 64 regions or more), should all have results that are reasonable, compared to observed data. If it hasn’t enough skill to match regional temperature variations, then why should I believe its global results?
I model hydrology for a living, and it’s not enough to know that the flow at the outlet of your system compares well to observed data. If you have 4 ‘subwatersheds’ and have data for each, then you should compare your model at the outlet of each ‘subwatershed,’ as well as the combined ‘watershed’ level results. Just because the watershed-level results look OK, doesn’t mean that the model is actually representing reality very well. It’s possible you’re just getting lucky.
Global average results are a horrible comparison, if your model is designed to consider regional or local physical processes.
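To make the region-by-region point concrete, here is a sketch using the usual hydrology skill score (placeholder flows; the only point is that an aggregate score can look fine while one sub-region is modelled badly):

```python
# Sketch of per-region skill checking with the Nash-Sutcliffe efficiency (NSE),
# the usual hydrology score. All flows below are placeholders.
import numpy as np

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 5.0, size=(4, 365))                     # 4 subwatersheds, daily flow
sim = obs + rng.normal(0.0, [[1.0], [2.0], [6.0], [1.0]], obs.shape)  # third region modelled badly

per_region = [round(nse(o, s), 2) for o, s in zip(obs, sim)]  # skill, region by region
outlet = round(nse(obs.sum(axis=0), sim.sum(axis=0)), 2)      # skill at the combined outlet
print(per_region, outlet)    # the outlet score stays respectable despite one poor region
```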

November 18, 2011 1:37 am

Anthony Holder is eloquently confirming my previous point above that the existing climate models are bunk. In addition to the fact that they only predict over a comparatively short time span (in climate change terms) rather than the hundreds of years that would be necessary to even out ~67 year natural cyclic variations, they have no skill at all at the subset level – i.e. regional predictions.
All this confirms that we should simply stop looking at them, obsessing over them, or giving them the oxygen of our attention. To do so deflects honest people who are trying to understand the climate change controversy away from the fact that the “emperor truly has no clothes”, as is evidenced by the long term instrumental temperature record which shows only a paltry 0.4degC per century rise since 1850. You might be surprised how few people know this simple fact which puts this whole silly debate into proper perspective.