Climate projections: Past performance no guarantee of future skill?


The forecasting accuracy of Global Climate Models has been at the very heart of the global warming debate for some time. Leif Svalgaard turned me on to this paper in GRL today:

Reifen, C., and R. Toumi (2009), Climate projections: Past performance no guarantee of future skill?, Geophys. Res. Lett., 36, L13704, doi:10.1029/2009GL038082.

PDF available here

It makes a very interesting point about the “stationarity” of climate feedback strengths. In a nutshell, it says that climate models break down after a time because neither forcings nor feedbacks remain static, and the models cannot predict such changes.
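
A minimal sketch of that point, with entirely invented numbers: a toy “model” keeps the feedback strength it was tuned to in one period, so when the real feedback drifts later on, the projection degrades no matter how good the calibration fit looked.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "reality": forcing rises steadily, but the feedback factor
# drifts partway through (non-stationary), plus unforced noise.
years = np.arange(1950, 2051)
forcing = 0.02 * (years - 1950)
feedback = np.where(years < 2000, 0.8, 1.3)
observed = feedback * forcing + rng.normal(0.0, 0.05, years.size)

# "Model": a single constant feedback strength, fitted to the early period only
# (least-squares slope through the origin).
calib = years < 2000
fitted = forcing[calib] @ observed[calib] / (forcing[calib] @ forcing[calib])
projection = fitted * forcing

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("RMSE, calibration period:", round(rmse(projection[calib], observed[calib]), 3))
print("RMSE, projection period: ", round(rmse(projection[~calib], observed[~calib]), 3))
```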

Gavin Schmidt of NASA GISS says something similar in a recent interview:

The problem with climate prediction and projections going out to 2030 and 2050 is that we don’t anticipate that they can be tested in the way you can test a weather forecast. It takes about 20 years to evaluate because there is so much unforced variability in the system which we can’t predict — the chaotic component of the climate system — which is not predictable beyond two weeks, even theoretically. That is something that we can’t really get a handle on.

From Edge: THE PHYSICS THAT WE KNOW: A Conversation With Gavin Schmidt [with video]

Some excerpts from the paper:

The principle of selecting climate models based on their agreement with observations has been tested for surface temperature using 17 of the IPCC AR4 models.

There is no evidence that any subset of models delivers significant improvement in prediction accuracy compared to the total ensemble.

With the ever increasing number of models, the question arises of how to make a best estimate prediction of future temperature change. The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) combines the results of the available models to form an ensemble average, giving all models equal weight. Other studies argue in favor of treating some models as more reliable than others [Shukla et al., 2006; Giorgi and Mearns, 2002]. However, determining which models, if any, are superior is not straightforward. The IPCC comments:

“What does the accuracy of a climate model’s simulation of past or contemporary climate say about the accuracy of its projections of climate change? This question is just beginning to be addressed…” [Intergovernmental Panel on Climate Change, 2007, p. 594].

One key assumption, on which the principle of performance-based selection rests, is that a model which performs better in one time period will continue to perform better in the future. This has been studied in terms of pattern-scaling using the “perfect model assumption” [Whetton et al., 2007]. We examine the question in an observational context for temperature here for the first time. We will also quantify the effect of ensemble size on the global mean, Siberian and European temperature error.

The principle of averaging results from different models to form a multi-model ensemble prediction also has potential problems, since models share biases and there is no guarantee that their errors will neatly cancel out. For this reason groups of models thus combined have been termed “ensembles of opportunity” [Piani et al., 2005]. Various studies have showed that multi-model ensembles produce more accurate results than single models [Kiktev et al., 2007; Mullen and Buizza, 2002]. Our examination of ensemble performance aims to address the question in the context of the current generation of climate models.

In our analysis there is no evidence of future prediction skill delivered by past performance-based model selection. There seems to be little persistence in relative model skill, as illustrated by the percentage turnover in Figure 3. We speculate that the cause of this behavior is the non-stationarity of climate feedback strengths. Models that respond accurately in one period are likely to have the correct feedback strength at that time. However, the feedback strength and forcing is not stationary, favoring no particular model or groups of models consistently. For example, one could imagine that in certain time periods the sea-ice albedo feedback is more important, favoring those models that simulate sea-ice well. In another period, El Nino may be the dominant mode, favoring those models that capture tropical climate better. On average all models have a significant signal to contribute.

While the authors of this paper still profess faith in model ensembles, the issues they point out with non-stationarity call into question the ability of any model to remain on track for an extended forecast period.

charles platt
July 7, 2009 11:43 pm

Maybe it’s time to do what stock-market observers sometimes do: Pick future performance with a dart board and compare with the accuracy of “official” predictions. Or, better, write some “simulation code” which is driven purely by a suitably weighted pseudo-random number generator.
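
A throwaway sketch of that benchmark (the drift and wobble numbers are arbitrary, not taken from any dataset): a “projection” that is nothing but a suitably weighted pseudo-random walk, which could then be scored against the official ones on equal terms.

```python
import numpy as np

rng = np.random.default_rng(42)

def dartboard_projection(start_anomaly, n_years, drift=0.01, wobble=0.15):
    """A 'climate projection' driven purely by a weighted pseudo-random walk."""
    return start_anomaly + np.cumsum(rng.normal(drift, wobble, n_years))

# An "ensemble" of 20 dart-board projections, 40 years ahead.
runs = np.array([dartboard_projection(0.4, 40) for _ in range(20)])
print("final-year anomaly: mean %.2f C, spread %.2f C"
      % (runs[:, -1].mean(), runs[:, -1].std()))
```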

UK Sceptic
July 7, 2009 11:54 pm

Let me get this straight. One of the major proponents of AGW claims that the current climate models will invariably fail the farther into the future their predictions are extrapolated because they cannot predict accurately? (Who knew?) Yet they expect us to commit trillions to their cause and destroy our economies and ways of life? And they call us delusional?
Are some of the warmists finally getting around to admitting that AGW is reductio ad absurdum?

anna v
July 7, 2009 11:56 pm

Ah, well. They use models to show something that is inherent from first principles in the way models are constructed.
For two years now, since I started reading into this mess of models, I have been saying that there are inherent problems in the construction of the models.
Ignoring for the moment the large problems of data sampling and approximations, which are another large factor, and the dubious concept of “forcings” that contorts physics, I will stress, and have been stressing, the nonlinearity of the solutions of the differential equations used for the modeling.
If one looks at the structure of the models it is obvious that linear approximations to the solutions of the fluid equations are used all over, explicitly and implicitly (for example, average values are the first-order term in the expansion of a well-behaved solution in a power series). Linear approximations of solutions can be used if the solutions are expandable in a perturbative expansion, where the next highest term is always much smaller than the previous one. This is not true in the case of climate. The solutions are notoriously nonlinear. The implied solutions of equations that are not even considered, but where an average value has been used, will also diverge from reality after a similar interval. Hence the whole construct will fail after a number of time steps.
In the case of GCMs used for weather, we see that a week or two at most makes the projections irrelevant. For the climate models the time interval is larger, but they still fail, as we have seen, within a few years.
It is sad that such theoretically obvious conclusions, and I am an experimentalist but these are elementary concepts for users of models, need to go through the rigmarole of model testing to be acceptable to what is “the climate community”.
I disagree that relevant models cannot be constructed. Tsonis et al ( there is a thread here and in CA) have made a start at creating a nonlinear model that takes into account the chaotic nature of weather/climate. That is the way modeling should go, IMO.

Boudu
July 8, 2009 12:03 am

Are you telling me that these models can’t tell the future? Who would have thought that was a possibility? Surely no one committing stellar quantities of money to ‘tackling climate change’.

David Ball
July 8, 2009 12:03 am

Are they finally admitting that the models may not be that great a prediction tool? Too bad back-pedaling isn’t an Olympic event. Models certainly have their place, but there has to be awareness of their shortcomings and the humility to admit that this is so.

tallbloke
July 8, 2009 12:18 am

Gavin is checking that the unbolting mechanism on the fire escape is working ok.
He knew all this years ago, but only now publicly admits it?
What happened to ‘Robust’?

July 8, 2009 12:21 am

Any model defines the relationship between the parameters responsible for change in weather and climate. It’s a set of inputs on the one side and output on the other. Let’s ignore interactions and feedbacks.
Let us assume that equatorial stratospheric ozone is influential in determining cloud cover and sea surface temperature in the tropics. (Some ENSO models are reported to include elements of this dynamic).
Let us further assume that stratospheric ozone in the tropics depends upon the episodic influx of mesospheric nitrogen oxides into the stratosphere via the polar vortexes.
Let us further assume that mesospheric nitrogen oxide concentration varies with solar activity.
If we are not in a position to predict the level of solar activity we can not build a model to show how weather and climate respond to change in ozone levels in the stratosphere.
In this situation we are unable to define the relationship between a critical input and the output we want to predict. Model building is possible, outputs may be predictable so long as stratospheric ozone levels are static, but that’s as far as it can go.

Andrew P
July 8, 2009 1:03 am

That such fundamental problems with models are being accepted by the likes of Gavin Schmidt is a step in the right direction. However, surely the fact that all 40 models assume that increased water vapour is a positive feedback, when clearly it is not, makes their linear limitations fairly irrelevant?

James Griffiths
July 8, 2009 2:20 am

“While the authors of this paper still profess faith in model ensembles, the issues they point out with non-stationarity call into question the ability of any model to remain on track for an extended forecast period.”
I don’t know whether to be awed or disgusted by the sheer persistence and bloody mindedness here.
In the face of even their own evidence, there still seems to be a fundamental belief that aggregating any number of clearly flawed models with no skill will magically produce an average with real predictive power.
You’d hope that at some stage someone might realise that the whole concept of “400 wrongs make a right” is not a valid avenue for public funding, let alone public policy. Sadly, in these times of over regulation and institutionally bloated government, somebody predicting something/anything is considered a vital grease for decision making.
The realism of uncertainty is a policy-making taboo, I’m afraid.

John Finn
July 8, 2009 2:45 am

However, surely the fact that all 40 models assume that increased water vapour is a positive feedback, when clearly it is not, makes their linear limitations fairly irrelevant?
Why do you say “clearly not”? I’d agree that observations suggest a lower feedback than that which is evident in the models, but I’m not sure it’s totally “clear” yet.

jon
July 8, 2009 3:23 am

Who would have predicted a frost in eastern Newfoundland on July 8 … now my veggies are dead 🙁

anna v
July 8, 2009 3:43 am

Andrew P (01:03:36) :
However, surely the fact that all 40 models assume that increased water vapour is a positive feedback, when clearly it is not, makes their linear limitations fairly irrelevant?
Not really. If linearity were correct it would mean that there was a chance to get a model to fit the past data and project correctly in the future. I am saying there is no such chance anyway by construction.
I see that they glibly talk of ensemble errors, whereas it has been demonstrated that no true errors are calculated for the model samples: errors from varying the parameters entering the fits by 1 sigma (true error). If you do that for albedo, for example, the fits go all over the place. Error is this virtual-reality construct of climate modelers:
Error in the ensemble mean decreases systematically with ensemble size, N, and for a random selection as approximately 1/N^a, where a lies between 0.6 and 1.

from the abstract of the paper.
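
For what it is worth, the shape of that quoted result is easy to play with using invented numbers: draw random subsets of N “model” errors from a fixed pool and average them. With independent errors the decrease goes roughly as 1/sqrt(N) and levels off at whatever bias the whole pool shares; the exponent of 0.6 to 1 is the paper’s empirical finding, not something this sketch reproduces.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented errors (relative to observations) for a fixed pool of 17 "models".
pool = rng.normal(0.0, 0.3, size=17)

for n in (1, 2, 4, 8, 16):
    # Mean absolute error of the ensemble mean over many random n-model subsets.
    err = np.mean([abs(rng.choice(pool, n, replace=False).mean())
                   for _ in range(5000)])
    print(f"N = {n:2d}   mean |ensemble-mean error| = {err:.3f}")
```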

Curiousgeorge
July 8, 2009 4:09 am

Why bother? Just throw them chicken bones.

John Doe
July 8, 2009 4:19 am

James Griffiths (02:20:19) :
I agree. Applying statistics to models and their ensembles is dangerous. Let’s dig into this with a very simple climate model that contains just one equation:
Temperature change = Sensitivity * ln(CO2 increase factor)
Set the CO2 increase factor to 1.01 according to the IPCC’s common exponential 1% yearly growth scenario. A sensitivity value of 5.33 fits well with the historical temperature record.
Now let’s create an ensemble. To show how confident we are, let’s have runs with sensitivities 5.321..5.5339. If you want to be sure that our predictions fit the future measurements, we could use 4.8, 5.0, 5.2, 5.4, 5.6. You see the point. Inventing new runs is based on the decisions of the researchers. I admit that you could see ensembles as research groups’ opinion polls, but is that the right way to do climate research?
In ensembles you have entirely different models, not just runs. So, we could replace the growth factor with Michael Hammer’s second order polynomial (see Jennifer Marohasy’s blog). Now we have generated more models. Satisfied?
Mother Nature uses just a single very exact model that is based on physics and other sciences. Our goal is to find it.
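
The one-equation model and the hand-picked “ensemble” above take only a few lines to write out (a sketch using the commenter’s numbers; nothing here resembles an actual GCM):

```python
import numpy as np

def toy_model(sensitivity, n_years, co2_growth=1.01):
    """Temperature change = sensitivity * ln(CO2 increase factor)."""
    co2_factor = co2_growth ** np.arange(n_years + 1)
    return sensitivity * np.log(co2_factor)

# An "ensemble" that is really just the author's choice of sensitivities.
for s in (4.8, 5.0, 5.2, 5.33, 5.4, 5.6):
    print(f"sensitivity {s:4.2f} -> warming after 100 years: "
          f"{toy_model(s, 100)[-1]:.2f} C")
```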

Skeptic Tank
July 8, 2009 4:37 am

charles platt (23:43:58) :
Maybe it’s time to do what stock-market observers sometimes do:

The stock market allusion is not far off. Technical trading statisticians and climate modelers do basically the same thing: empirically test their prediction models on past scenarios and continually tweak them until their results match the observation in hindsight.
It doesn’t work. Economic data from, say, a 4-month period 50 years ago may match data seen today. But there are other macroeconomic and geopolitical factors that exist today which could not possibly have existed then. How do you predict the impact of those factors? You can’t. Or, you guess and get lucky. It’s all very scientific.

hunter
July 8, 2009 4:40 am

The models AGW uses to sell its policies are engineering models, not physics models.
They will never be accurate until all of the interactions between the variables are known.
AGW promoters have not even defined all of the variables, yet.

Poptech
July 8, 2009 4:43 am

“…all of our models have errors which mean that they will inevitably fail to track reality within a few days irrespective of how well they are initialized.” – James Annan, William Connolley, RealClimate.org
“These codes are what they are – the result of 30 years and more effort by dozens of different scientists (note, not professional software engineers), around a dozen different software platforms and a transition from punch-cards of Fortran 66, to fortran 95 on massively parallel systems. […] No complex code can ever be proven ‘true’ (let alone demonstrated to be bug free). Thus publications reporting GCM results can only be suggestive.” – Gavin Schmidt, RealClimate.org

Mark Fawcett
July 8, 2009 4:46 am

Maybe Dr Schmidt is starting to realise that all politicians are ultimately answerable to their voters and that the tide may be turning (deity of your choice willing).
You don’t then need the crystal-ball gazing and rune-stone reading powers of climate models to understand that, should such a turnaround come to pass, the aforementioned public servants will hang the scientists out to dry; especially if the same academics had not uttered any words of caution or restraint during the hyperbole of earlier times…
Cheers
Mark

Curiousgeorge
July 8, 2009 4:51 am

There is a fundamental psychology at work with this. People are naturally averse to uncertainty, because it threatens our survival. Going back to the stone age caveman days, if you could not predict with some accuracy where your next meal was coming from, you would have less chance of finding a mate (she wanted a male who could provide for offspring ). The same psychology has been with us ever since, in the form of oracles, fortune tellers, etc. This is no different simply because it is dressed up with fancy mathematics and pretty charts. It’s still the same subconscious striving for certainty about the future. Sorry to say, that we really aren’t any better at prediction today than we were 10,000 years ago.

Wade
July 8, 2009 4:52 am

You have to crawl before you can walk. A skyscraper starts with the foundation first. After years and years of study, the 5-day forecast is still filled with much uncertainty. If the short-term forecast still has much work to do, how much more so the long-term forecast. When a tropical system forms, look at all the uncertainty in the forecast of it. You have to perfect the simpler short-term forecast before you can put faith in the much more complex long-term forecast.

July 8, 2009 4:52 am

I suggest that Demetris Koutsoyiannis has provided a comprehensive analysis of these problems. The relevant papers are on his university homepage here, http://www.itia.ntua.gr/dk
Readers of WUWT could profitably spend months studying Demetris’ published papers. The ones most relevant to the theme “climate projections: past performance no guarantee of future skill”, namely ‘are future climate projections reliable and do they provide grounds to assess impacts on hydrological processes?’ and ‘how well does current climate research represent the intrinsic climate uncertainty?’, can be found under the heading “climate stochastics”.
I wonder if Gavin Schmidt mentions Demetris’ penetrating analysis and devastating findings and responds to them? If he doesn’t he has not addressed the relevant science.
I wonder if Reifen and Toumi cite Demetris’ work (and that of Cohn and Lins) and build on the results of both?
I suggest further that any useful discussion of these questions has to have regard to Demetris’ analysis and findings.
Richard Mackey

John W.
July 8, 2009 5:16 am

An “ensemble” is when I use Lagrangian and Eulerian Finite Element Models, each with a different formulation of the equation of state, to model the behavior of a system. I compare the results, and after analysis (a process occurring in a human brain), make a prediction about how the real world will behave.
Then I run a test or experiment.
After I’ve repeated the simulate-analyze-test cycle a few times, and the code results are doing a reasonable job of predicting test outcomes, I can claim that the codes are useful for making predictions of real world behavior of the subject system.
This is not what our climate modeling friends are doing. Their activity is political agenda driven numerology. I’ll also add that millions of scientists and engineers working in the pharmaceutical and defense industries would wind up fired (if we’re lucky) or in jail (if we’re not) for pulling the fraudulent crap these clowns have. Which is why, among my colleagues, it is rare to find anyone who gives AGW more than a snicker.

July 8, 2009 5:19 am

Hi Anthony
Off-topic, so please forgive the intrusion. Not sure how to contact you hence this, but one of the readers emailed me a week or so back to point out a reply where you said you had not seen a review copy of my new book Air Con yet.
This was my fault, as we’ve been securing an on-demand print facility in the US and UK and wanted to ensure we could reliably supply Amazon and all orders within a few days. We didn’t want to rattle anyone’s cages too much until we knew people could access Air Con swiftly.
I’m happy to announce that as of the start of this week Air Con is now officially available on tap from distributors in the US, Canada and Europe, which means I can finally send you a review copy. If you wish to flick me a postal address via email to editorial@investigatemagazine.com, I’ll get one sent immediately.
Regards
Ian Wishart
PS, to you and all those who post and comment here, this website is truly a credit to collective wisdom and the ability to maintain a healthy skepticism in the face of AGW spin
REPLY: Ian, thank you for the kind words, I’ll be happy to take a look. Email is on the way to you – Anthony

Rhys Jaggar
July 8, 2009 5:22 am

On a slightly different tack, there is an interesting piece in the July 2nd edition of Nature magazine, which describes statements of James Hansen of about a decade ago, in an article about soot/particulate carbon’s role in climate change (which apparently might now be more than thought years ago).
10 years ago, Hansen said that ‘we can address soot emissions now, whereas we can’t address carbon dioxide emissions’.
Now the implication of that for any sane politician should be this: well, right now, we’ll address the soot issue as a near term research goal leading to implementable policy, whereas we’ll need to fund carbon capture/storage technology research until we find a way to bring that within range.
What was the article this week about? Scientists ‘GETTING EXCITED’ about research into soot control.
So here we are: a decade after something addressable was identified, scientists are talking about RESEARCH?
And we wonder why such skepticism exists around this entire field?????
Any chance of one of your experts checking firstly that I got that interpretation of the Nature article right and, if so, what their take on policy direction should be in that arena??

Bill Illis
July 8, 2009 5:56 am

The models seem reasonably accurate when they are hindcasting – running the models against the known temperature record.
But they have not shown any skill in predicting the future climate.
They have only been producing actual predictions for about 20 years now, going back to Hansen’s ABC predictions from 1988.
Hansen’s Scenario B used input assumptions that are very close to what actually happened, and this prediction is way off.
http://img4.imageshack.us/img4/5277/hansenscenariobandc.png
The predictions from the IPCC’s First Assessment Report in 1990 are way off (it had temps at about +0.8C by now). The predictions from the IPCC’s Second Report from 1996 are closer since they dropped the warming prediction to +2.0C by 2100 (which is now considered too low even though the trendline to date is close).
Here are the predictions made by the IPCC’s Third Assessment Report in 2000 (at least the climate model predictions that are available from the Climate Explorer) – off by 0.25C in just 9 years.
http://img213.imageshack.us/img213/2509/ippctaraverage.png
Here is the Spaghetti graph of the individual IPCC TAR models. (Hard to put much faith in Spaghetti).
http://img18.imageshack.us/img18/8950/ippctarmodels.png
Here is GISS’s predictions submitted to the IPCC’s Fourth Report. The cut-off date for using actual temp records to date was the beginning of 2006 so GISS is off by 0.25C in just 3 years.
http://img189.imageshack.us/img189/7442/gissar4forecasts.png
So, yeah, they can hindcast the climate after tweaking and plugging and knowing what the actual climate has done. Hindcasting models of any type are known for being accurate – the newly reconstructed financial market models are accurately predicting the market meltdown in September, for example.

Charlie
July 8, 2009 6:12 am

Richard Mackey says “I suggest that Demetris Koutsoyiannis has provided a comprehensive analysis of these problems. The relevant papers are on his university homepage here, http://www.itia.ntua.gr/dk
Thanks for the pointer. In addition to climate stochastics he also has an interesting editorial in the Hydrological Sciences Journal – Journal des Sciences Hydrologiques titled “The peer-review system: prospects and challenges”, August 2005.
http://www.atypon-link.com/IAHS/doi/pdf/10.1623/hysj.2005.50.4.577

July 8, 2009 6:37 am

The public is convinced that climate models are reliable, and that we are all doomed. The news and science media slams them in the head with this message of certain climate doom every day.
Politicians believe it too, and they fund the climate centers like NASA GISS.
It is a circle of doom: Government funded climate scientists –> News and Science media –> Gullible Public –> Politicians –> Government funding for climate

Lazlo
July 8, 2009 6:39 am

“No complex code can ever be proven ‘true’ (let alone demonstrated to be bug free). Thus publications reporting GCM results can only be suggestive.” – Gavin Schmidt, RealClimate.org
The dumbing down of scientific journals, for left wing political motives of course. Happened in the social sciences years ago. Now in so-called climate science.

Hank
July 8, 2009 6:39 am

Here are some modelers from University of Texas. They seem to be focused on abrupt climate change.
http://www.tacc.utexas.edu/research/users/features/climatechange.php

Edward
July 8, 2009 6:41 am

Richard
Here is the team’s and Gavin’s response to Koutsoyiannis:
[Response:Your comments suggest a misunderstanding of a fundamental issue here, namely the distinction between stochastic and deterministic behavior of the climate. There is a fundamental difference between the underlying statistical behavior of climate forcings and the underlying statistical behavior of the climate response to a specified forcing. The stochastic model of AR(1) noise is only ever invoked to explain the unforced component of surface temperature variability. It is nonsensical to attempt to fit a stochastic model to the sum of both unforced and deterministic forced variability. The changes in mean temperature in simulations of e.g. the past 1000 years show that the low-frequency changes in hemispheric mean temperature can be explained quite well in terms of an approximately linear response to changes in natural changes in radiative forcing (see our discussion here, and the additional reviews cited). This is analogous to the fact that the annual cycle in surface temperature at most locations can be described well in terms of an essentially linear response to seasonal changes in insolation. Obviously, the underlying statistical behavior of the forcings themselves on these two timescales is quite different. But it doesn’t matter, from the point of view of understanding the physics of the climate system, what the underlying statistical nature of the variations in forcing is. The response of the system to those changes in forcing is deterministic—any two realizations with small differences in initial conditions will converge, not diverge, in their trajectories with regard to e.g. the global or hemispheric mean temperature, and those trajectories are essentially linearly related to the changes in forcing themselves, at least over the time interval and range of changes in forcing over this timescale. Finally, none of this has any bearing on the statistical description of “noise” present in proxy climate records. It is extremely difficult to reject the null hypothesis of weakly autocorrelated AR(1) red noise in this case. Why one would entertain highly elaborate models of “long-range dependence” and “random walk behavior” when such a simple null hypothesis cannot be rejected, is beyond me. –mike ]
[Response:If I may interject, David’s point is that if the non-climatic ‘noise’ in the proxy series can be modelled as AR(1), what is the likely magnitude of the correlation? He points out that it is much smaller than was recently assumed. Your statements are related to whether the whole series can be modelled as AR(1). These are obviously different issues. However, your statements above seem to imply that climatic series should be thought of as purely stochastic with no deterministic component. This is not likely to be well accepted by most climatologists – because of course would imply that there is no predictability of climate response to any change in external conditions. The fact that many climate changes can be understood in terms of changing solar forcing, volcanic eruptions, greenhouse gas changes, orbital forcing etc. are obvious counter examples to this idea. Therefore the more accepted description is that climate time series consist of a deterministic component together with intrinsic variability and some ‘noise’. In your stochastic descripitions, I am unaware of how you can distinguish these different components, and thus make a claim about the intrinsic variability characteristics. As Mike indicates, I don’t think you can reject the simplest AR(1) hypothesis. – gavin]

Paul Linsay
July 8, 2009 6:46 am

The resort to ensembles of models shows how unscientific this entire bunch is. Each model embodies a different set of physical assumptions about how the climate works, otherwise why have all those models in the first place? At best one is right and the rest are wrong, though it’s certainly possible for them all to be wrong. They clearly do the averaging because none of them can decide which model, if any, is correct. How does any kind of averaging improve the predictions of a bunch of incorrect models, and if one of them by chance is correct, why aren’t its predictions swamped by the bad models?

henrychance
July 8, 2009 6:47 am

tallbloke (00:18:07) :
Gavin is checking that the unbolting mechanism on the fire escape is working ok.
He knew all this years ago, but only now publicly admits it?
What happened to ‘Robust’?
<<<<<<<<<<
I appreciate your clarity in summarizing Schmidt. Also your vividness.

JIm Clarke
July 8, 2009 6:57 am

This information was known 20 years ago! The GCMs only ‘suggest’ future climates based entirely on the assumption that CO2 is a primary driver of global climate. In other words, the models do nothing more than what they were programmed to do and have no predictive skills whatsoever! The model output is almost TOTALLY determined by the input assumptions!
The only pertinent question in the debate is the sensitivity of global climate to increasing greenhouse gases. This has always been the ‘only question’ and has almost always been avoided by AGW supporters! All real world evidence shows a low sensitivity and no climate crisis! All observed climate change fits a natural variability pattern and not an anthropogenic one.
I am beginning to think that Jim Jones had a more compelling argument for drinking his Kool-Aid than AGW supporters have for carbon mitigation. The result of both actions, however, is remarkably similar!

July 8, 2009 7:04 am

Slightly on topic, here’s a link to an article on the Nature.com website illustrating that models are only as good as the data entered. It’s reporting on a paper that examines how AGW could impact the geographic range of Sasquatch. A key point is that “…even if all the data are all highly dubious, a model based on them can still give a plausible-looking result…”.
http://www.nature.com/news/2009/090707/full/news.2009.641.html?s=news_rss
Mike.

henrychance
July 8, 2009 7:09 am

In the course of pharmaceutical research we use testing and apply “double blind” methods to the subjects. We can test the testers. We can eliminate influence from the testers.
I see a way to test the “model writers”. I have conducted experiments in several fields that are not related.
If we took Gavin, for example: let’s say 10 data sets. Many of the data sets would not be real but would look like the real data sets. Give Gavin the sets from assorted time periods and ask him to predict, using his model, what the next 10 years of mean temp results would look like. We could give him 20 years, like from 1928 to 1948, 1957 to 1977, and some other randomly selected 20-year periods. We could include some periods of totally made-up readings, but readings within a sensible range.
If we told Mr math wizard what we were doing, he would refuse to “run” the data. There would be too much fear in being wrong. Again to keep the experiment clean, we would not disclose the actual years for the range.
Let me give you a medical example of a non-drug nature. People say some are born gay. It is genetic. If that is true, then they would be very comfortable if I brought them some DNA samples and asked them which were gay.

Antonio San
July 8, 2009 7:15 am

“The problem with climate prediction and projections going out to 2030 and 2050 is that we don’t anticipate that they can be tested in the way you can test a weather forecast. It takes about 20 years to evaluate because there is so much unforced variability in the system which we can’t predict — the chaotic component of the climate system — which is not predictable beyond two weeks, even theoretically. That is something that we can’t really get a handle on.” says Gavin Schmidt
Ah the chaotic element of weather and thus climate… Of course these people want anyone to believe that 1) weather is not climate 2) weather is chaotic therefore you cannot use weather to predict climate. Yet when climate models run, they all predict weather…
This is the classic argument that was debunked by the late Marcel Leroux: “Observation of concrete reality suppresses the so-called border between meteorology and climatology, between weather and climate”. And he proves it. Weather is highly regulated and logical, and thus its evolution can be used to predict climatic trends. This is of course weather in a slightly different sense than the “is it going to rain 5mm in this county?” type of predictive value. But meteorology offers an observation-based rebuttal to the AGW theory, and that is why in Schmidt’s viewpoint weather has to be and stay chaotic, unpredictable and thus unusable.

Chris
July 8, 2009 7:19 am

Joe Weizenbaum on models… he knew just a tad about computer models:
What is important in the present context is that models embody only the essential features of whatever it is they are intended to represent. … What aspects of reality are and what are not embodied in a model is entirely a function of the model builder’s purpose. But no matter what the purpose, a model, and here I am concerned with computer models of aspects of reality, must necessarily leave out almost everything that is actually present in the real thing. Whoever knows and appreciates this fact, and keeps it in mind while teaching students about the use of computers, has a chance to immunize his or her students against believing or making excessive claims for much of their computer work.
(Weizenbaum 1984, xvii)
Weizenbaum, J. (1984). Computer Power and Human reason. From Judgement to Calculation. Harmondsworth, Middlesex: Penguin.

July 8, 2009 7:35 am

Someone said it before – that anyone who thinks he/she can model something as complex as the earth’s climate simply does not understand the complexity of the earth’s climate.
The earth’s climate is not a game of chess – chess programmes can now beat the best human players – because there are a finite number of chess moves, albeit a very large finite number, but finite nonetheless.
Others have also said recently – and I agree – that GS is looking for a way out. I think he wants to be the first into the lifeboats.

Demesure
July 8, 2009 7:38 am

The notion of the average of untested models is junk science.
It’s like saying that because the mean of 21 meteo models is 20 °C for next week, the “ensemble prediction” must be better than any individual model. It’s not only theoretical nonsense, it’s proven false.

J. Bob
July 8, 2009 8:08 am

Hunter – These are not even engineering models. Engineering models MUST reflect reality, or standard practices, or you could end up in court.

Jim
July 8, 2009 8:17 am

@anna v (23:56:16) : Modeling the climate with the goal of predicting the temperature some years X in the future is a futile endeavor. If we had a model that took into account all the physics of the Sun and Earth perfectly, it would tell us how the climate behaves. It would give us limits on temperature, precipitation, etc.; and show us generally how it works. But climate is chaotic. That renders any model, no matter how perfect, incapable of predicting the future climate.

James Griffiths
July 8, 2009 8:27 am

Bill Illis (05:56:38) :
“The models seem reasonably accurate when they are hindcasting – running the models against the known temperature record.”
In the case of the climate, I would suggest that hindcasting is a gigantic waste of time.
The averages of temperatures over time and areas that are used in measuring the climate give so little information that, at the scales the models work on, there must be a practically infinite number of ways of hitting the target, and all but one of those ways are wrong.
If the models could hindcast a temperature and explain accurately all the fluxes and processes involved at the resolution necessary to make an accurate prediction going forwards, then that would be something special! Then again, that wouldn’t really be a model, it would be more of an actual backwards running earth!
I’m waffling a little, but my point is I imagine there’s an awful lot of ways to tune a model to the past that work perfectly. Even if you get it right, chaos theory suggests even the tiniest difference in resolution of any input means you won’t recreate the actual sequence of events anyway!

July 8, 2009 8:28 am

“People say some are born gay. It is genetic. If that is true, then they would be very comfortible if I brought them some DNA samples and asked them which were gay.”
/rasp. Why did you even bring this up? It’s not relevant to your point.
It’s quite possible to deduce that an environmental influence is a factor in some end effect without knowing what the direct mechanism is.
Eye color is undoubtedly genetic, but we can’t (yet) examine a set of DNA samples and determine the eye color of each.
Mike.

Jeff Alberts
July 8, 2009 8:35 am

Climate Modelers are basically dowsers. When they know where the target is (past climate), they’re always “accurate”. But when presented with a blind test (future climate), they do no better than chance would dictate.

hunter
July 8, 2009 8:38 am

Hindcasting is simply knowing the answer to the test before you take it.

theduke
July 8, 2009 8:42 am

Next step ahead for Warmists: introduction of models that purportedly account for non-stationarity?
One step ahead of the posse . . .

Poptech
July 8, 2009 8:43 am

Testing a model against past climate (hindcasting) is an advanced exercise in curve fitting, nothing more and proves absolutely nothing. What this means is you are attempting to have your model’s output match the existing historical output that has been recorded. For example matching the global mean temperature curve over 100 years. Even if you match this temperature curve with your model it is meaningless. Your model could be using some irrelevant calculation that simply matches the curve but does not relate to the real world. With a computer model there are an infinite number of ways to match the temperature curve but only one way that represents the real world. It is impossible for computer models to prove which combination of climate physics correctly matches the real world.
Virtual reality can be whatever you want it to be and computer climate models are just that, they are the code based on the subjective opinions of the scientists creating them. The real world has no such bias.

Russ R.
July 8, 2009 9:25 am

A few years back, when a good portion of the arctic ice was blown into warmer waters, the folks over at RC were risking shoulder dislocation to pat themselves on the back at their amazing predictive abilities.
I posted that it was a short time period, based on a short timescale, and that they had no real idea if it was an unusual event, or a natural event, or temperature driven, or due to wind, or warmer water temps. I compared it to having a few good holes in golf, and then extrapolating those results to a round.
Gavin explained to me the error of my ways. He said it was more like rolling balls down an incline, and using the results to predict future rolls on different inclines.
I am not hearing the same Gavin, I heard back in those glory days. I am sure a few of these “climate gurus”, have decided it is time to engineer a soft landing.

July 8, 2009 9:35 am

One issue vis-a-vis hindcasting: Curve fitting to WHAT?
Curve fitting to GISS will cause you to tune the model for much greater sensitivity than curve fitting to UAH or RSS.
Since we now have 30 years of satellite data (climate!), it would be interesting to see how the various models fit that data, even in hindcast…

July 8, 2009 9:52 am

Ian Wishart (05:19:43) :
Good to see you here Sir!
I’ve been looking for your book here in Thailand where I live but I haven’t found it yet…
The Thais don’t have much concept of climate change: they don’t like the temperature above 30C (86F) or below 27C (79F)!

brazil84
July 8, 2009 10:14 am

“Testing a model against past climate (hindcasting) is an advanced exercise in curve fitting, nothing more and proves absolutely nothing.”
I totally agree. This is especially so if failed models can be quietly discarded or tweaked until they match history.
“With a computer model there are an infinite number of ways to match the temperature curve but only one way that represents the real world. ”
One way at the very most. It’s possible, even likely, that we simply cannot predict the climate 50 or 100 years from now any more accurately than simply guessing that the climate will be roughly the same as it is now.
“At best one is right and the rest are wrong, though it’s certainly possible for them all to be wrong”
I agree. And yet, all of those models match history. The reasonable inference is that “matching history” does not mean a model is a good one.

Don S.
July 8, 2009 10:16 am

Any second year IT tech school student could construct a model to predict whatever you want. That’s exactly what the “climate researchers” have done. They have delivered us into the hands of mendacious politicians who will seize absolute power. When that happened in late 19th century Montana the citizenry solved the problem with a rope.

brazil84
July 8, 2009 10:21 am

Off topic, but I had an idea for a new blog post, which is to ask the following question:
Let’s assume for the sake of argument that the warmists are correct that most of the observed warming in the second half of the 20th century was due to human CO2 emissions.
In that case, one can ask: What would global surface temperatures be like now but for such warming? I did a calculation (I’m happy to share it!). It turns out that current global surface temperatures would be shockingly low right now without such warming.
The conclusion is that if warmists are right, then we should be happy so much CO2 has been emitted.

Mike86
July 8, 2009 10:38 am

A bit OT, but NASA kicked out another news blurb stating the arctic ice is melting faster now because of the reduction in multi-year ice. Global Warming to blame!
http://www.kcautv.com/Global/story.asp?S=10658610&nav=1kgl

Squidly
July 8, 2009 10:59 am

After watching Gavin’s video, a few things that he says stand out in my mind. 1) He claims that they have nearly perfect knowledge of cloud formation and behavior, and precipitation production and behavior, yet, if this were true, why is there not a single climate model that is capable of representing such skillful knowledge? His models do not. And 2) he says that increasing CO2 in the atmosphere will warm the ground. Well, any self respecting physicist will tell you, that is impossible. Even if increased CO2 is capable of increasing atmospheric temperature, it is NOT capable of increasing surface temperature, as that would violate some very fundamental physical laws.
Although I can relate to, and I agree with, some of what he says, several of his assertions are simply false, and false in areas he is well versed.

Squidly
July 8, 2009 11:05 am

brazil84 (10:21:08) :

What would global surface temperatures be like now but for such warming? I did a calculation (I’m happy to share it!). It turns out that current global surface temperatures would be shockingly low right now without such warming.

I have often thought about this same thing. If we are warming at an unprecedented rate, even faster than we thought, while setting cold temperature records all around the globe, one has to wonder just how cold we would really be without this “unprecedented global warming”.
I shiver to think…

Edward
July 8, 2009 11:10 am

Keep in mind that Dr. Spencer has said that the 30 years of Satellite data are not sufficient to disprove the models at this point. Spencer has stated that 50 years might be a minimum but that 100 years of data might be required to sort out natural fluctuations vs human induced temperature changes.
Thanks
Ed

Sean
July 8, 2009 12:37 pm

I read Gavin’s quote as meaning something different from the effect this paper is discussing. I think Gavin is saying that it will take 30 years before we know if the current divergence between models and measurements is the effect of an oscillation or measurement noise, or if it reflects a collection of models which are inaccurate. To some extent his point has merit. The 1970-2000 period is far too short to provide robust validation of the models (and we have poor understanding of a good period). It seems the logic is that ‘we only know of one effect which could have caused the sudden rise…’ yet the current pause is just a pipeline stall…
What I believe the paper is suggesting is that if we assume that the climate system has (coincidentally) a well balanced set of feedback mechanisms, then it is possible to achieve a good fit whilst neglecting any feedback term, so long as the influence of that term is small in the testing period. If clouds can provide a regulating effect, we would only expect to see that regulation in action once other conditions are met – and we are in the realms of non-linear (total) feedback (which is required in order to achieve stable oscillation – ask any EE; amplifiers are easier to build than oscillators).
Question is, how do we determine if the calibration period covers sufficient of the forcings space to be valid? Have the models been tested to demonstrate stability in different geological time periods? Would they be expected to be stable?

brazil84
July 8, 2009 12:39 pm

“I have often thought about this same thing. If we are warming at an unprecedented rate, even faster than we thought, while setting cold temperature records all around the globe, one has to wonder just how cold we would really be without this ‘unprecedented global warming’.”
My guess is that some of the months in 2009 would be the coldest months in the last 100 years.

Allan M R MacRae
July 8, 2009 1:42 pm

Bill Illis (05:56:38) :
“The models seem reasonably accurate when they are hindcasting – running the models against the known temperature record.”
Absolutely false Bill.
I have posted here recently that, in order to hindcast, the models use fabricated (false) aerosol data to reproduce the cooling from ~1945-1975. Actual measurements as described by Doug Hoyt et al show no such aerosol trends.

Allan M R MacRae
July 8, 2009 1:49 pm

Edward (11:10:10) :
“Keep in mind that Dr. Spencer has said that the 30 years of Satellite data are not sufficient to disprove the models at this point. Spencer has stated that 50 years might be a minimum but that 100 years of data might be required to sort out natural fluctuations vs human induced temperature changes.”
______________
I disagree.
We can say from satellite data that there has been no global warming since 1979, and we can also say from pre-satellite data that there has been no global warming since 1940, and perhaps even ~0.3C of cooling. And now we have experienced further global cooling for the past decade or so.
Yet the models continue to predict catastrophic warming.
I can confidently conclude that there is adequate data to demonstrate that the models are invalid.

Alan Haile
July 8, 2009 2:37 pm

In today’s ‘Daily Mail’ (popular UK newspaper) a sensible article!
http://www.dailymail.co.uk/debate/article-1198188/Hysteria-real-threat-global-warming.html

AlexB
July 8, 2009 2:46 pm

This is a fair point. I have a random number generator which will generate an integer from 1-6. I roll 18 dice to see if any of them can predict the number before it is generated. For the next run I only select the dice which predicted the right answer as of course they will be more likely to predict it the next time.
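
That thought experiment takes only a few lines to run (a sketch: six-sided dice, keep the ones that “predicted” the first roll, then test the survivors on the next one):

```python
import numpy as np

rng = np.random.default_rng(7)
n_dice, trials = 18, 10000

hits_after_selection = []
for _ in range(trials):
    target1, target2 = rng.integers(1, 7, size=2)   # the numbers to "predict"
    round1 = rng.integers(1, 7, size=n_dice)
    skilful = round1 == target1                     # dice that got round 1 right
    if skilful.any():
        round2 = rng.integers(1, 7, size=skilful.sum())
        hits_after_selection.append((round2 == target2).mean())

print("hit rate of the 'selected' dice next round: %.3f (chance = %.3f)"
      % (np.mean(hits_after_selection), 1 / 6))
```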

Max
July 8, 2009 3:11 pm

Okay, maybe this is a dumb question, but with regard to hindcasting, has anyone ever run the GCM models against a given period of ice-core data? I mean, initializing from ice-core values for CO2 and temp, and making no attempt at curve fitting, what would result? Or do these models simply not function over so wide a range of CO2 and temp?

July 8, 2009 3:19 pm

My favorite model is Kathy Ireland.

July 8, 2009 4:02 pm

I work with people who do plasma physics models. (Polywell Fusion Reactor) All the equations are known to a very high degree of precision (8 or 10 significant figures). And yet, due to the fact that EVERY particle affects every other particle, simulations are not very good. They may give you the general trend (increase the density and the value of x rises), but exact predictions are out of the question. The joke we often use is that we need a real time computer that can run the equations to perfect precision (experiment).
Now compare this to climate where all the equations are NOT known to a high degree of precision and in fact ALL the equations are not even known.
To predict the future with such (#$@*&!!) is not possible. In fact with so many still unknown significant factors (Svensmark) it is useless. And we are just now getting a handle on cosmic rays and clouds. And there are likely still unknown unknowns.

July 8, 2009 4:05 pm

My favorite model is Louisa Lockhart. I have yet to see a good simulation and she is a very good model.

July 8, 2009 4:11 pm

My guess is that some of the months in 2009 would be the coldest months in the last 100 years.
I live in the northern Illinois area and today we had a day where the temperature did not get above 63F. This seems rather unusual for July.

Sandy
July 8, 2009 4:12 pm

A perfect model of our climate would, like our climate, behave chaotically. Give it identical starting conditions ten times and run it for 100 virtual years, and I’m sure 10 very different endpoint climates would be produced.
It seems to me that the climate system is not calculable because any given set of starting parameters will not produce the same result if the model is run twice.
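
The textbook illustration of that kind of sensitivity uses the Lorenz 1963 system rather than a GCM, and strictly it needs a tiny initial difference rather than literally identical starts, but the flavour is the same: the two runs below separate by many orders of magnitude.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz 1963 system (fine for a sketch)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # tiny initial-condition difference

for step in range(3000):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 999:
        print(f"step {step + 1}: separation = {np.linalg.norm(a - b):.3g}")
```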

July 8, 2009 4:26 pm

If clouds can provide a regulating effect, we would only expect to see that regulation in action once other conditions are met – and we are in the realms of non-linear (total) feedback (which is required in order to achieve stable oscillation – ask any EE; amplifiers are easier to build than oscillators).
Yes. Say you want to make a low distortion audio sine wave directly from an oscillator. We have ways of doing that (a Wien bridge with a light bulb in the feedback circuit) that are not too bad. After the oscillator settles we can, without too much difficulty, get a wave with distortion 60 dB (.1%) down. Going to 80 dB without other tricks (filters) is tough. And even filters are rough because they can introduce distortion. And even then there are limits due to the intrinsic noise of resistors and amplifiers. A PERFECT sine wave is impossible to generate.

Louis Hissink
July 8, 2009 5:54 pm

I suspect Gavin will be slowly distancing himself from the politicians who are driving this thing. It is, after all, government science, and thus politically directed.

Bill Illis
July 8, 2009 7:39 pm

Allan M. R. MacRae (13:42),
I agree with you. I’ve posted a lot about the made-up Aerosols (and volcano) plugs that are used to make the hindcasts work.
This is GISS Aerosols forcing from 1880 to 2003. Take these numbers and multiply by 0.32 to change the forcing to temperature impact. Total direct and indirect temp impact from Aerosols in GISS models is -0.6C. Obviously, these forcings are manufactured in Hansen’s laboratory.
http://img58.imageshack.us/img58/855/modelaerosolsforcingp.png
The latest study on sulfate aerosols is that they combine with black carbon and soot to produce warming in the atmosphere rather than cooling. This matches better with the temperature experience of China, south Asia, southern California and the northern hemisphere for example.

Richard S Courtney
July 9, 2009 4:58 am

Friends:
There is only one fact that needs to be known about ensemble climate models, and it needs no discussion: i.e.
Average wrong is still wrong.
Richard

DaveE
July 9, 2009 5:08 am

henrychance (07:09:00) :
We could include some periods that were totally made up readings but readings within a sensible range.
Haven’t they already done that one?
DaveE

anna v
July 9, 2009 5:10 am

This thread has been coming to an end, and I am tempted to tell one of my stories, relevant to my reaction to the “skill” of models.
A man goes to the next village to get himself a wife. He meets many young girls but one of them, who smiles very sweetly and only says “yes” and “no” appeals to him, and the marriage is arranged.
He takes the sweet girl to his home and they have a lovely honeymoon, the wife is a good cook too, the only thing is, she keeps saying only “yes”, “no”, and smiling sweetly.
After a while this gets on his nerves, and he tries to get some other reactions from her, tries to make her angry by doing irrational and sometimes cruel things.
Once he brought back a piece of marble pretending it was cheese, asked her to bring it to the table, and made a big show of anger when she did not. Still, she trembled sweetly and did not get angry or say more than “yes”.
Once he bought her a tight pair of shoes and forced her to wear them, still no reaction from her.
Once he hid behind the door and jumped at her scaring her out of her wits, but not out of the “yes” or “no”.
He decided on drastic measures. He pretended he dropped dead, not responding to anything she tried to do with him, just lay there dead. After a while she was convinced he was dead. She started a dirge crying and crying:
Oh, deal huthband, what thall I lemember filst?
The malble cheethe, the tight thoes, or the BAH behind the dool?
She had a speech impediment and had been told not to speak because she would lose the bridegroom.
The dirge is what comes to my mind when I think of the skill of GCM models:
What shall I remember first?
The lack of error propagation? The spaghetti graphs? The insolent use of linearity in a chaotic system? The manipulated data?
So as not to leave the story hanging, the man resurrected himself, hugged his wife, and asked her to please speak up, and he did not care about the lisp! A happy ending, which I do not foresee for the GCMs.

Mark Stewart
July 9, 2009 6:21 am

The modelers all ignore the sun
They rain and snow on everyone
So many models have been run
without clouds in the fray
They’ve ignored the clouds for some time now
From up and down, and still somehow
It’s modeled illusions they recall
They really don’t know clouds…. at all

Jeff Alberts
July 9, 2009 9:11 am

Mark Stewart (06:21:46) :
Hehe, nice!

Bob Cormack
July 9, 2009 3:38 pm

James Griffiths (02:20:19) :
In the face of even their own evidence, there still seems to be a fundamental belief that aggregating any number of clearly flawed models with no skill will magically produce an average with real predictive power.

Dogbert has this same idea, in a different context: http://tinypic.com/view.php?pic=2liwwvn&s=4