From the University of Arizona (h/t to WUWT reader Miguel Rakiewicz):
A new study has found that climate-prediction models are good at predicting long-term climate patterns on a global scale but lose their edge when applied to time frames shorter than three decades and on sub-continental scales.
![Observed vs. model-predicted temperature trend maps](http://wattsupwiththat.files.wordpress.com/2012/09/visual041.jpeg?resize=612%2C273&quality=83)
Published in the Journal of Geophysical Research-Atmospheres, the study is one of the first to systematically address a longstanding, fundamental question asked not only by climate scientists and weather forecasters, but the public as well: How good are Earth system models at predicting the surface air temperature trend at different geographical and time scales?
Xubin Zeng, a professor in the University of Arizona department of atmospheric sciences who leads a research group evaluating and developing climate models, said the goal of the study was to bridge the communities of climate scientists and weather forecasters, who sometimes disagree with respect to climate change.
According to Zeng, who directs the UA Climate Dynamics and Hydrometeorology Center, the weather forecasting community has demonstrated skill and progress in predicting the weather up to about two weeks into the future, whereas the track record has remained less clear in the climate science community tasked with identifying long-term trends for the global climate.
“Without such a track record, how can the community trust the climate projections we make for the future?” said Zeng, who serves on the Board on Atmospheric Sciences and Climate of the National Academies and the Executive Committee of the American Meteorological Society. “Our results show that actually both sides’ arguments are valid to a certain degree.”
“Climate scientists are correct because we do show that on the continental scale, and for time scales of three decades or more, climate models indeed show predictive skills. But when it comes to predicting the climate for a certain area over the next 10 or 20 years, our models can’t do it.”
To test how accurately various computer-based climate prediction models can turn data into predictions, Zeng’s group used the “hindcast” approach.
“Ideally, you would use the models to make predictions now, and then come back in say, 40 years and see how the predictions compare to the actual climate at that time,” said Zeng. “But obviously we can’t wait that long. Policymakers need information to make decisions now, which in turn will affect the climate 40 years from now.”
Zeng’s group evaluated seven computer simulation models used to compile the reports that the Intergovernmental Panel on Climate Change, or IPCC, issues every six years. The researchers fed them historical climate records and compared their results to the actual climate change observed between then and now.
“We wanted to know at what scales are the climate models the IPCC uses reliable,” said Koichi Sakaguchi, a doctoral student in Zeng’s group who led the study. “These models considered the interactions between the Earth’s surface and atmosphere in both hemispheres, across all continents and oceans and how they are coupled.”
Zeng said the study should help the community establish a track record whose accuracy in predicting future climate trends can be assessed as more comprehensive climate data become available.
“Our goal was to provide climate modeling centers across the world with a baseline they can use every year as they go forward,” Zeng added. “It is important to keep in mind that we talk about climate hindcast starting from 1880. Today, we have much more observational data. If you start your prediction from today for the next 30 years, you might have a higher prediction skill, even though that hasn’t been proven yet.”
The skill of a climate model depends on three criteria at a minimum, Zeng explained. The model has to use reliable data, its prediction must be better than a prediction based on chance, and its prediction must be closer to reality than a prediction that only considers the internal climate variability of the Earth system and ignores processes such as variations in solar activity, volcanic eruptions, greenhouse gas emissions from fossil fuel burning and land-use change, for example urbanization and deforestation.
“If a model doesn’t meet those three criteria, it can still predict something but it cannot claim to have skill,” Zeng said.
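Zeng's three criteria boil down to error comparisons against baselines. A minimal sketch of that logic follows; all numbers here are made up for illustration (they are not the study's data), and a squared-error metric is assumed, since the article does not specify one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed 30-year temperature trend (deg C / decade)
observed_trend = 0.17

# Hypothetical predictions (illustrative numbers only)
model_trend = 0.20                                  # full model, all forcings
internal_only_trend = 0.03                          # internal variability only
chance_trends = rng.uniform(-0.3, 0.3, size=1000)   # random-chance baseline

def sq_error(pred, obs):
    """Squared error of a trend prediction (broadcasts over arrays)."""
    return (pred - obs) ** 2

model_err = sq_error(model_trend, observed_trend)
internal_err = sq_error(internal_only_trend, observed_trend)
chance_err = sq_error(chance_trends, observed_trend).mean()

# The model can claim "skill" only if it beats BOTH baselines
has_skill = model_err < chance_err and model_err < internal_err
print(f"model: {model_err:.4f}  chance: {chance_err:.4f}  "
      f"internal-only: {internal_err:.4f}  skill: {has_skill}")
```

The point of the sketch is simply that "skill" is a relative claim: the model's error must be smaller than both the chance baseline and the internal-variability-only baseline before the word applies.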
According to Zeng, global temperatures have increased in the past century by about 1.4 degrees Fahrenheit or 0.8 degrees Celsius on average. Barring any efforts to curb global warming from greenhouse gas emissions, the temperatures could further increase by about 4.5 degrees Fahrenheit (2.5 degrees Celsius) or more by the end of the 21st century based on these climate models.
“The scientific community is pushing policymakers to avoid the increase of temperatures by more than 2 degrees Celsius because we feel that once this threshold is crossed, global warming could be damaging to many regions,” he said.
Zeng said that climate models represent the current understanding of the factors influencing climate, and then translate those factors into computer code and integrate their interactions into the future.
“The models include most of the things we know,” he explained, “such as wind, solar radiation, turbulence mixing in the atmosphere, clouds, precipitation and aerosols, which are tiny particles suspended in the air, surface moisture and ocean currents.”
Zeng described how the group did the analysis: “With any given model, we evaluated climate predictions from 1900 into the future – 10 years, 20 years, 30 years, 40 years, 50 years. Then we did the same starting in 1901, then 1902 and so forth, and applied statistics to the results.”
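The rolling-start evaluation Zeng describes can be sketched as follows. This is a toy illustration with synthetic series and a plain least-squares trend, not the study's method or data:

```python
import numpy as np

rng = np.random.default_rng(1)

years = np.arange(1900, 2001)
# Synthetic "observed" record: slow warming plus noise (illustrative only)
observed = 0.007 * (years - 1900) + rng.normal(0, 0.15, years.size)
# Synthetic "model hindcast": similar but slightly biased trend
modeled = 0.009 * (years - 1900) + rng.normal(0, 0.15, years.size)

def trend(series, yrs):
    """Least-squares linear trend (units per year)."""
    return np.polyfit(yrs, series, 1)[0]

window = 30  # evaluate 30-year trends, as in the article
errors = []
for start in range(0, years.size - window):
    sl = slice(start, start + window)
    obs_tr = trend(observed[sl], years[sl])
    mod_tr = trend(modeled[sl], years[sl])
    errors.append(mod_tr - obs_tr)

errors = np.array(errors)
print(f"{errors.size} overlapping {window}-yr windows, "
      f"mean trend error {errors.mean():+.4f} deg/yr")
```

Sliding the start year forward one year at a time yields a distribution of trend errors, which is what "applied statistics to the results" would then summarize.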
Climate models divide the Earth into grid boxes whose size determines the model’s spatial resolution. According to Zeng, the state of the art is about one degree, equaling about 60 miles (100 kilometers).
“There has to be a simplification because if you look outside the window, you realize you don’t typically have a cloud cover that measures 60 miles by 60 miles. The models cannot reflect that kind of resolution. That’s why we have all those uncertainties in climate prediction.”
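For a sense of scale, here is a back-of-the-envelope sketch of what one-degree resolution means. This uses standard spherical geometry, not anything from the paper:

```python
import math

EARTH_RADIUS_KM = 6371.0
deg = 1.0  # grid spacing in degrees

# Length of one degree of latitude (roughly constant everywhere)
lat_km = math.pi * EARTH_RADIUS_KM / 180.0 * deg

def lon_km_at(lat_deg):
    """Length of one degree of longitude, which shrinks toward the poles."""
    return lat_km * math.cos(math.radians(lat_deg))

cells = int(360 / deg) * int(180 / deg)
print(f"~{lat_km:.0f} km per degree of latitude, "
      f"~{lon_km_at(45):.0f} km per degree of longitude at 45N, "
      f"{cells} grid boxes")
```

One degree of latitude works out to roughly 111 km, consistent with the "about 100 kilometers" figure quoted above, and a one-degree global grid has 64,800 boxes, each far larger than a typical cloud.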
“Our analysis confirmed what we expected from the last IPCC report in 2007,” said Sakaguchi. “Those climate models are believed to have good skill on large scales, for example predicting temperature trends over several decades, and we confirmed that by showing that the models work well for time spans longer than 30 years and across geographical scales spanning 30 degrees or more.”
The scientists pointed out that although the IPCC issues a new report every six years, they didn’t see much change with regard to the prediction skill of the different models.
“The IPCC process is driven by international agreements and politics,” Zeng said. “But in science, we are not expected to make major progress in just six years. We have made a lot of progress in understanding certain processes, for example airborne dust and other small particles emitted from the surface into the air, either through human activity or from natural sources. But climate and the Earth system still are extremely complex. Better understanding doesn’t necessarily translate into better skill in a short time.”
“Once you go into details, you realize that for some decades, models are doing a much better job than for some other decades. That is because our models are only as good as our understanding of the natural processes, and there is a lot we don’t understand.”
Michael Brunke, a graduate student in Zeng’s group who focused on ocean-atmosphere interactions, co-authored the study, which is titled “The Hindcast Skill of the CMIP Ensembles for the Surface Air Temperature Trend.”
Funding for this work was provided by NASA grant NNX09A021G, National Science Foundation grant AGS-0944101 and Department of Energy grant DE-SC0006773.
Is it really this bad? They create and fine-tune the models BY hindcasting!!! And then validate them by checking hindcasts? The word “forecast” has no place in this piece.
LoL, in 40 yrs these fools will be screaming ice age, just in time for it to start warming back up. How can you say your long-term forecast is accurate when no long-term observation period has passed to prove that claim? ’Tis truly mind-boggling.
tallbloke says:
September 18, 2012 at 1:17 pm
Tallbloke, good to hear from you. I wondered the same thing. It turns out that the model “predicted” warming over the last hundred-year span because they were using a model that is tuned to reproduce the warming over the last 100 years.
Shockingly, the model was able to reproduce the warming it was tuned to reproduce, which proves that the models “show skills” in forecasting long trends.
The logic in their thought processes is simple, clean and wrong … what more could you want?
w.
It seems that the author is saying the model forecast could [should?] drift away from reality for some indefinite period of time and then drift back toward reality at, say, 90–100 years later.
If that is correct then one would expect the hindcasts to do the same thing. Do they? I doubt it; I would suppose that they would tune the model until the hindcast tracks pretty close to the [questionable=adjusted] historical record. Are graphs of the hindcast available?
From http://www.agu.org/pubs/crossref/2012/2012JD017765.shtml
[+emphasis]
Cutting through the flowery double talk it sounds like they are saying the models don’t work at longer scales either.
Did anyone notice the caption to the maps at the head of the post?
“These maps show the observed (left) and model-predicted (right) air temperature trend from 1970 to 1999.”
Assuming that the choropleth scale is the same for both images, that model has grossly overpredicted temperature trends vs. actual observations. And that’s just for the side of the planet that they are showing us. The brick red on the central Eurasian and Antarctic portions of the limb indicates that there’s some unrealized heat over there as well. And the Pampas is substantially cooler than predicted, too.
Is this one of the examples of “skillful” 30 year prediction?
Sounds to me like the models are optimized for whatever the dominant global climate driver is every 30 yrs, but don’t have the sensitivity to distinguish the shorter term natural variations. Sounds a lot like the development of scientific instrumentation over the years in terms of the difference in detection limits between an atomic spectrometer and a mass spectrometer.
Richard M
September 18, 2012 at 2:10 pm
MIGO — Money In Garbage Out
You have nailed it. How better can you describe the Chicken Little Science of Global Warming?
Eugene WR Gallun
The extreme IPCC predictions of 1.5C to 5C of warming for a doubling of CO2 require that the planet amplify the CO2 warming, which is positive feedback. If the planet’s feedback response to a change in forcing is negative, a doubling of atmospheric CO2 will result in less than 1C of warming, with most of the warming occurring at high-latitude regions of the planet, which would cause the biosphere to expand.
There is no extreme AGW warming problem to solve.
http://www.drroyspencer.com/2012/09/uah-global-temperature-update-for-august-2012-0-34-deg-c/
http://wattsupwiththat.com/2012/09/06/uah-global-temperature-up-06c-not-much-change/
http://www-eaps.mit.edu/faculty/lindzen/236-Lindzen-Choi-2011.pdf
On the Observational Determination of Climate Sensitivity and Its Implications
We estimate climate sensitivity from observations, using the deseasonalized fluctuations in sea surface temperatures (SSTs) and the concurrent fluctuations in the top-of-atmosphere (TOA) outgoing radiation from the ERBE (1985–1999) and CERES (2000–2008) satellite instruments. … We argue that feedbacks are largely concentrated in the tropics, and the tropical feedbacks can be adjusted to account for their impact on the globe as a whole. Indeed, we show that including all CERES data (not just from the tropics) leads to results similar to what are obtained for the tropics alone, though with more noise. We again find that the outgoing radiation resulting from SST fluctuations exceeds the zero-feedback response, thus implying negative feedback. In contrast to this, the calculated TOA outgoing radiation fluxes from 11 atmospheric models forced by the observed SST are less than the zero-feedback response, consistent with the positive feedbacks that characterize these models. The results imply that the models are exaggerating climate sensitivity. … However, warming from a doubling of CO2 would only be about 1C (based on simple calculations where the radiation altitude and the Planck temperature depend on wavelength in accordance with the attenuation coefficients of well-mixed CO2 molecules; a doubling of any concentration in ppmv produces the same warming because of the logarithmic dependence of CO2’s absorption on the amount of CO2) (IPCC, 2007). …
This modest warming is much less than current climate models suggest for a doubling of CO2. Models predict warming of from 1.5C to 5C and even more for a doubling of CO2. Model predictions depend on the ‘feedback’ within models from the more important greenhouse substances, water vapor and clouds. Within all current climate models, water vapor increases with increasing temperature so as to further inhibit infrared cooling. Clouds also change so that their visible reflectivity decreases, causing increased solar absorption and warming of the earth….
http://www.forbes.com/sites/jamestaylor/2012/04/11/a-new-global-warming-alarmist-tactic-real-temperature-measurements-dont-matter/
A New Global Warming Alarmist Tactic: Real Temperature Measurements Don’t Matter
What do you do if you are a global warming alarmist and real-world temperatures do not warm as much as your climate model predicted? Here’s one answer: you claim that your model’s propensity to predict more warming than has actually occurred shouldn’t prejudice your faith in the same model’s future predictions. Thus, anyone who points out the truth that your climate model has failed its real-world test remains a “science denier.”
This, clearly, is the difference between “climate science” and “science deniers.” Those who adhere to “climate science” wisely realize that defining a set of real-world parameters or observations by which we can test and potentially falsify a global warming theory is irrelevant and so nineteenth century. Modern climate science has gloriously progressed far beyond such irrelevant annoyances as the Scientific Method.
@JJ says
“Xubin Zeng, a professor in the University of Arizona department of atmospheric sciences who leads a research group evaluating and developing climate models, said the goal of the study was to bridge the communities of climate scientists and weather forecasters, who sometimes disagree with respect to climate change.”
Huh. The goal of the study was political, not scientific.
Imagine that.
++++++++++++
Is it true that the opinion of most US weather forecasters is that the alarmists are wrong? No wonder they disagree. Weather forecasters are dealing with the real world, after all. The climate modelers are dealing with an artifice of their own devising. I can’t say they will never meet, but they are not yet on the same planet, that’s for sure.
“Our goal was to provide climate modeling centers across the world with a baseline they can use every year as they go forward,” Zeng added. “It is important to keep in mind that we talk about climate hindcast starting from 1880. Today, we have much more observational data. If you start your prediction from today for the next 30 years, you might have a higher prediction skill, even though that hasn’t been proven yet.”
I confidently predict that with better data, the model predictions get worse. It’s well known that poor or absent data on things like aerosols and clouds allows the modellers to use these as tunable parameters.
They don’t accurately predict the present, but they do accurately predict the future…. More Kool Aid please… 😉
Willis Eschenbach says:
September 18, 2012 at 3:43 pm
Shockingly, the model was able to reproduce the warming it was tuned to reproduce, which proves that the models “show skills” in forecasting long trends.
As usual, I can’t decide if these people are being deliberately deceptive or they are just delusional.
Although, there is a third possibility.
Zeng said the study should help the community establish a track record whose accuracy in predicting future climate trends can be assessed as more comprehensive climate data become available.
Perhaps this a sugar coated attempt to establish baselines for the model predictions and stop the modellers usual practice of shifting the goalposts every few years, and then claiming their predictions were accurate.
To echo what others have said, hindcasting in no way, shape, or form validates a model’s ability to predict the future. If you tweak the input and twiddle the forcings, any climate model can hindcast anything you want. Every climate modeller (including Zeng) knows this. It’s scientific fraud to imply that hindcasting can “test how accurately various computer-based climate prediction models can turn data into predictions.” Hindcasting is cheating, plain and simple.
“A new study has found that climate-prediction models are good at predicting long-term climate patterns on a global scale ”
THIS CANNOT POSSIBLY BE PROVEN!!!
Another Lew paper? Did this also pass peer/pal review?
As someone working professionally in computational fluid dynamics for over 20 years, I agree that there’s NO way they can prove models are reliable for > 30 years, and not reliable for < 30 years. This is pure bunk.
Unfortunately, none of our warmist friends will EVER talk about or debate the numerical models in any detail. Whenever you say "differential equations" and "initial/boundary conditions" they start talking about switch grass and sea ice…
So if the models can accurately predict, say, 100 years out, then all that is necessary to predict, say, the climate of 2014 or 2016, is to set up the initial conditions as they were in 1914 or 1916, and voila, the models can now accurately give us predictions 2 or 4 years or any other period into the future! Let’s see the test of that.
tallbloke says:
September 18, 2012 at 1:17 pm
“Climate-prediction models show skills in forecasting climate trends over time spans of greater than 30 years”
How do they know?
####
Simple. Start in 1900 and predict. Then look at 1900–1940, 1901–1941, etc.
For every year from 1900 to 1980, see how your 40-year prediction held up versus these alternatives:
1. Naive prediction. Everything stays the same
2. Internal variability. The future is like the past
3. shoulder shrugs
The key is this: the definition of skill. Skill does not mean perfect. Skill means better than hand-waving assertions about the sun. Skill means better than “I dunno, natural variability.” Skill means better than the alternatives. If you have alternatives for modelling the climate (temperatures, rain, etc.), then show how your alternative has skill. Note that your alternative must be able to predict on a regional basis and predict more than just a global average temp.
When faced with uncertainty, you build a model. That model will always be wrong. The question is: does it have skill as measured against the alternatives?
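Mosher's "better than the alternatives" criterion is usually formalized as a skill score relative to a baseline. A minimal sketch with made-up numbers; the two baselines follow his list (persistence and a past-climatology mean), and the metric is assumed to be mean squared error:

```python
import numpy as np

# Hypothetical observed anomalies and competing predictions (illustrative)
obs = np.array([0.10, 0.15, 0.12, 0.20, 0.25])
model = np.array([0.12, 0.14, 0.15, 0.18, 0.27])
naive = np.full_like(obs, obs[0])        # 1. "everything stays the same"
past_mean = np.full_like(obs, 0.05)      # 2. "the future is like the past"

def mse(pred, obs):
    """Mean squared error of a prediction series."""
    return np.mean((pred - obs) ** 2)

def skill_score(pred, baseline, obs):
    """1 = perfect, 0 = no better than the baseline, < 0 = worse."""
    return 1.0 - mse(pred, obs) / mse(baseline, obs)

print("skill vs naive:     ", skill_score(model, naive, obs))
print("skill vs past mean: ", skill_score(model, past_mean, obs))
```

A positive score against a baseline is exactly the sense in which a model "has skill" against that alternative; a perfect hindcast is not required.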
“That is because our models are only as good as our understanding of the natural processes, and there is a lot we don’t understand.”
But I thought it was all settled science!
We’re doomed.
Steven Mosher: For every year from 1900 to 1980, see how your 40-year prediction held up versus these alternatives:
1. Naive prediction. Everything stays the same
2. Internal variability. The future is like the past
3. shoulder shrugs
The key is this. The definition of skill. Skill does not mean perfect. Skill means better than …
Well said. The models show “skill” in that their forecasts were better than simple extrapolations, in mean squared error. The amount of inaccuracy demonstrated over the most recent 30 year period shows that the models are not accurate enough for policy decisions relative to 30+ years in the future. Which model now makes the most reliable prediction for 30 years from now isn’t known yet. What they have documented is a stage of progress, as though to say they have constructed the two railroad lines 1/3 of the way to Promontory or thereabouts; or when the medical profession achieved a good success rate against Hodgkin’s lymphoma. Lots of examples of mixed progress come to mind.
I think they have produced a respectable statement of the state of the art.
Steven Mosher: The question is does it have skill as measured against alternatives.
The other question is does it have sufficient skill to achieve a given goal. For example, the knowledge base that supported the Golden Gate Bridge was insufficient for the Tacoma Narrows Bridge; there were hints of inadequacy in the second case, that was all.
I have a clock that is right twice per day. They’ve got models that can’t get 30 years right, but they get it right on time scales greater than that? So my clock is right twice per day but their models are only right twice per century or so?
That, folks, is what they are trying to convince us is evidence that the models “work”. They are full of fudge factors based on the data for the last 100 years, so taken over periods that are a substantial portion of 100 years, they seem to get it right. In brief, they’ve been curve-fitted to 100 years of data, so any increment smaller than that is going to be less accurate than the whole data set by default!
All they’ve done is draw a line from 1900 to 2012 and said LOOK! The end points are right! Just ignore all that stuff in the middle that isn’t anywhere near the line!
…and BTW, Dr. Zeng, here is the link to IPCC AR4 WG1 2.9.1, Uncertainties in Radiative Forcing, where the IPCC authors rank the Level of Scientific Understanding of no fewer than 14 parameters. Their own scientists rank the understanding (that feeds the models) of no fewer than NINE of the 14 parameters as either LOW or VERY LOW. In fact, only a SINGLE parameter is ranked as high:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-9-1.html
So Dr Zeng, could you please explain why we would trust models that are admitted to having a rotten understanding of the science and which are demonstrably wrong (demonstrated by YOU!) except for perhaps two or three times in a hundred years?
@Louis. If they had used a walk-forward testing method, hindcasting of a sort may be used properly. It would require that they start in, say, 1900. Tune the model based only on data known in 1900. Then forecast. Then go to 1901. Tune the model on data known only in 1901. Then forecast . . . . This technique is used by quants when actual money is on the line and gives a pretty good assessment on whether the model has skill. The key is tuning based only on what would have been known had the model been created at the past time from which the forecast is made.
Even with this approach, some bleeding of later information into earlier times occurs because the form of the model is usually determined with knowledge of what is coming. But the parameters are not set using the later information.
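The walk-forward scheme described above can be sketched like this. The data are synthetic, and the "model" being tuned at each step is just a linear trend fit standing in for a tunable climate model; the point is only that parameters are refit using data available up to each forecast origin:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2001)
# Synthetic "observed" record (illustrative only)
obs = 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)

horizon = 10   # forecast 10 years ahead from each origin
errors = []
for cut in range(30, years.size - horizon):
    # "Tune" using ONLY data known up to the cut year...
    train_y, train_t = years[:cut], obs[:cut]
    slope, intercept = np.polyfit(train_y, train_t, 1)
    # ...then forecast forward and score against what actually happened
    target_year = years[cut + horizon - 1]
    forecast = slope * target_year + intercept
    errors.append(forecast - obs[cut + horizon - 1])

print(f"{len(errors)} walk-forward forecasts, "
      f"RMSE {np.sqrt(np.mean(np.square(errors))):.3f}")
```

Because no forecast ever sees data past its own origin, the resulting error statistics measure genuine out-of-sample skill, which is exactly what plain hindcasting on the training period does not.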
Hind-casting using the training data that generated their parameter weights? Something is rotten in the state of Denmark. Or is it California? Hard to tell …
Hindcasts aren’t predictions. How well a model hindcasts could, and likely is, 100% a function of how well it has been tuned to the data. A model could hindcast perfectly and have no predictive accuracy.
Gavin Schmidt to his credit will talk about prediction. He found a statistical model (which means a model that knows nothing about the climate) beat all climate models used by the IPCC in its predictions.
http://www1.ccls.columbia.edu/~cmontel/mss10.pdf
That a statistical model beats a ‘state of the art’ climate model is proof there is effectively no predictive science in the models. In other words the science in the models is wrong.