Climate Models shown to be inaccurate less than 30 years out

From the University of Arizona (h/t to WUWT reader Miguel Rakiewicz):

A new study has found that climate-prediction models are good at predicting long-term climate patterns on a global scale but lose their edge when applied to time frames shorter than three decades and on sub-continental scales.

These maps show the observed (left) and model-predicted (right) air temperature trend from 1970 to 1999. The climate model developed by the National Center for Atmospheric Research (NCAR) is used here as an example. More than 50 such simulations were analyzed in the published study. (Illustration: Koichi Sakaguchi)
Climate-prediction models show skills in forecasting climate trends over time spans of greater than 30 years and at the geographical scale of continents, but they deteriorate when applied to shorter time frames and smaller geographical regions, a new study has found.

Published in the Journal of Geophysical Research-Atmospheres, the study is one of the first to systematically address a longstanding, fundamental question asked not only by climate scientists and weather forecasters, but the public as well: How good are Earth system models at predicting the surface air temperature trend at different geographical and time scales?

Xubin Zeng, a professor in the University of Arizona department of atmospheric sciences who leads a research group evaluating and developing climate models, said the goal of the study was to bridge the communities of climate scientists and weather forecasters, who sometimes disagree with respect to climate change.

According to Zeng, who directs the UA Climate Dynamics and Hydrometeorology Center, the weather forecasting community has demonstrated skill and progress in predicting the weather up to about two weeks into the future, whereas the track record has remained less clear in the climate science community tasked with identifying long-term trends for the global climate.

“Without such a track record, how can the community trust the climate projections we make for the future?” said Zeng, who serves on the Board on Atmospheric Sciences and Climate of the National Academies and the Executive Committee of the American Meteorological Society. “Our results show that actually both sides’ arguments are valid to a certain degree.”

“Climate scientists are correct because we do show that on the continental scale, and for time scales of three decades or more, climate models indeed show predictive skills. But when it comes to predicting the climate for a certain area over the next 10 or 20 years, our models can’t do it.”

To test how accurately various computer-based climate prediction models can turn data into predictions, Zeng’s group used the “hindcast” approach.

“Ideally, you would use the models to make predictions now, and then come back in say, 40 years and see how the predictions compare to the actual climate at that time,” said Zeng. “But obviously we can’t wait that long. Policymakers need information to make decisions now, which in turn will affect the climate 40 years from now.”

Zeng’s group evaluated seven computer simulation models used to compile the reports that the Intergovernmental Panel on Climate Change, or IPCC, issues every six years. The researchers fed them historical climate records and compared their results to the actual climate change observed between then and now.

“We wanted to know at what scales are the climate models the IPCC uses reliable,” said Koichi Sakaguchi, a doctoral student in Zeng’s group who led the study. “These models considered the interactions between the Earth’s surface and atmosphere in both hemispheres, across all continents and oceans and how they are coupled.”

Zeng said the study should help the community establish a track record whose accuracy in predicting future climate trends can be assessed as more comprehensive climate data become available.

“Our goal was to provide climate modeling centers across the world with a baseline they can use every year as they go forward,” Zeng added. “It is important to keep in mind that we talk about climate hindcast starting from 1880. Today, we have much more observational data. If you start your prediction from today for the next 30 years, you might have a higher prediction skill, even though that hasn’t been proven yet.”

The skill of a climate model depends on at least three criteria, Zeng explained: the model has to use reliable data; its prediction must be better than a prediction based on chance; and its prediction must be closer to reality than a prediction that considers only the internal climate variability of the Earth system and ignores processes such as variations in solar activity, volcanic eruptions, greenhouse gas emissions from fossil-fuel burning, and land-use change, for example urbanization and deforestation.

“If a model doesn’t meet those three criteria, it can still predict something but it cannot claim to have skill,” Zeng said.
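A minimal sketch of what such a three-way comparison might look like in code (illustrative numbers only, not the study's data or method; the trends and the chance distribution are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

obs_trend = 0.17           # observed trend over the window, deg C per decade (illustrative)
model_trend = 0.21         # trend from the forced model hindcast (illustrative)
natural_only_trend = 0.02  # trend from a natural/internal-variability-only run (illustrative)
chance_trends = rng.normal(0.0, 0.3, size=1000)   # stand-in for "prediction based on chance"

model_err = abs(model_trend - obs_trend)
natural_err = abs(natural_only_trend - obs_trend)
chance_err = float(np.mean(np.abs(chance_trends - obs_trend)))

# Skillful only if the model beats both reference predictions.
skillful = model_err < chance_err and model_err < natural_err
print(f"model={model_err:.2f}  chance={chance_err:.2f}  "
      f"natural-only={natural_err:.2f}  skillful={skillful}")
```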

According to Zeng, global temperatures have increased in the past century by about 1.4 degrees Fahrenheit or 0.8 degrees Celsius on average. Barring any efforts to curb global warming from greenhouse gas emissions, the temperatures could further increase by about 4.5 degrees Fahrenheit (2.5 degrees Celsius) or more by the end of the 21st century based on these climate models.

“The scientific community is pushing policymakers to avoid the increase of temperatures by more than 2 degrees Celsius because we feel that once this threshold is crossed, global warming could be damaging to many regions,” he said.

Zeng said that climate models represent the current understanding of the factors influencing climate, and then translate those factors into computer code and integrate their interactions into the future.

“The models include most of the things we know,” he explained, “such as wind, solar radiation, turbulence mixing in the atmosphere, clouds, precipitation and aerosols, which are tiny particles suspended in the air, surface moisture and ocean currents.”

Zeng described how the group did the analysis: “With any given model, we evaluated climate predictions from 1900 into the future – 10 years, 20 years, 30 years, 40 years, 50 years. Then we did the same starting in 1901, then 1902 and so forth, and applied statistics to the results.”
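The bookkeeping of that rolling evaluation might look roughly like this (a toy sketch with a synthetic observation series and an invented stand-in hindcast, not the paper's actual procedure):

```python
import numpy as np

def linear_trend(x, y):
    """Least-squares trend in units per year."""
    return np.polyfit(x, y, 1)[0]

years = np.arange(1900, 2000)
obs = 0.007 * (years - 1900) + np.random.default_rng(1).normal(0.0, 0.1, years.size)
hindcast = 0.009 * (years - 1900)            # stand-in for one model's hindcast

for horizon in (10, 20, 30, 40, 50):
    errs = []
    for start in range(1900, 2000 - horizon):
        sel = (years >= start) & (years < start + horizon)
        # Difference between hindcast trend and observed trend over the same window.
        errs.append(linear_trend(years[sel], hindcast[sel]) -
                    linear_trend(years[sel], obs[sel]))
    rms = float(np.sqrt(np.mean(np.square(errs))))
    print(f"{horizon}-year windows: RMS trend error = {rms:.4f} deg C/yr")
```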

Climate models divide the Earth into grid boxes whose size determines the model’s spatial resolution. According to Zeng, the state of the art is about one degree, equaling about 60 miles (100 kilometers).

“There has to be a simplification because if you look outside the window, you realize you don’t typically have a cloud cover that measures 60 miles by 60 miles. The models cannot reflect that kind of resolution. That’s why we have all those uncertainties in climate prediction.”
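As a back-of-envelope check on the quoted figure (simple spherical-Earth arithmetic, nothing from the paper), a degree of latitude is always about 111 km, while a degree of longitude shrinks toward the poles:

```python
import math

EARTH_RADIUS_KM = 6371.0
KM_PER_DEG_LAT = math.pi * EARTH_RADIUS_KM / 180.0   # ~111.2 km everywhere

for lat in (0, 30, 45, 60, 80):
    km_per_deg_lon = KM_PER_DEG_LAT * math.cos(math.radians(lat))
    print(f"lat {lat:2d} deg: 1 deg lon ~ {km_per_deg_lon:5.1f} km, "
          f"1x1 deg box ~ {KM_PER_DEG_LAT * km_per_deg_lon:8.0f} km^2")
```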

“Our analysis confirmed what we expected from the last IPCC report in 2007,” said Sakaguchi. “Those climate models are believed to have good skill on large scales, for example predicting temperature trends over several decades, and we confirmed that by showing that the models work well for time spans longer than 30 years and across geographical scales spanning 30 degrees or more.”

The scientists pointed out that although the IPCC issues a new report every six years, they didn’t see much change with regard to the prediction skill of the different models.

“The IPCC process is driven by international agreements and politics,” Zeng said. “But in science, we are not expected to make major progress in just six years. We have made a lot of progress in understanding certain processes, for example airborne dust and other small particles emitted from the surface into the air, either through human activity or from natural sources. But climate and the Earth system are still extremely complex. Better understanding doesn’t necessarily translate into better skill in a short time.”

“Once you go into details, you realize that for some decades, models are doing a much better job than for some other decades. That is because our models are only as good as our understanding of the natural processes, and there is a lot we don’t understand.”

Michael Brunke, a graduate student in Zeng’s group who focused on ocean-atmosphere interactions, co-authored the study, which is titled “The Hindcast Skill of the CMIP Ensembles for the Surface Air Temperature Trend.”

Funding for this work was provided by NASA grant NNX09A021G, National Science Foundation grant AGS-0944101 and Department of Energy grant DE-SC0006773.

MarkW
September 18, 2012 1:04 pm

If models have such a good track record predicting more than 30 years out, then why were the predictions made back in the ’80s so far off?

Jack
September 18, 2012 1:06 pm

Zeng is applying for more of that sweet AGW grant money.

Alvin
September 18, 2012 1:08 pm

Come on guys, who mounts a Precision station in a server rack?

Alvin
September 18, 2012 1:11 pm

So, the conclusion is that their current data is wrong and they know that, but if you will just believe them for 30 years then it will all clear up. Thirty years of global climate change based socialist policies. What could go wrong?

Myron Mesecke
September 18, 2012 1:13 pm

This is just the latest excuse for why the models didn’t predict the current cooling.
“But it will warm back up! Just give it 40 years.”

tallbloke
September 18, 2012 1:17 pm

“Climate-prediction models show skills in forecasting climate trends over time spans of greater than 30 years”
How do they know?

September 18, 2012 1:19 pm

I love the way the ‘heat’ just kind of stays put.
As if someone turned off the surface winds, the jet-stream and the ocean currents then forgot to turn them back on again.
Don’t these people ever look at the animations from weather satellites?

PaulH
September 18, 2012 1:20 pm

“Policymakers need information to make decisions now, which in turn will affect the climate 40 years from now.”
I am afraid this is wishful thinking on their part.

Louis
September 18, 2012 1:22 pm

How can you possibly know that climate-prediction models “are good at predicting long-term climate patterns on a global scale” if they have gotten everything wrong so far? What principle of science allows you to assume that model predictions will be right in the future? (I can’t predict what the stock market will do over the next 30 years but I know exactly what the market will do a hundred years from now.)

September 18, 2012 1:22 pm

You mean there is a climate model that is accurate minutes after the run(s)?

Stuck-Record
September 18, 2012 1:22 pm

Brilliant. The get-out-of-jail-free card for modellers’ inaccuracy.
“Look. I know we’re totally wrong about everything that has happened, or will happen in the lifetimes of your career/industry/economy, but trust us anyway because we’ll be proved right after we’ve retired and are living on our untouchable pensions.”

JJ
September 18, 2012 1:23 pm

“Xubin Zeng, a professor in the University of Arizona department of atmospheric sciences who leads a research group evaluating and developing climate models, said the goal of the study was to bridge the communities of climate scientists and weather forecasters, who sometimes disagree with respect to climate change.”
Huh. The goal of the study was political, not scientific.
Imagine that.

cd_uk
September 18, 2012 1:23 pm

Hindcasts. Surely if you’re testing your models on the training set you used to fine tune them in the first place, then you’re likely to emulate the training set – it might also explain why they don’t do so well at a regional scale. This is retrospective fitting, and as far as I know you cannot use it to validate the predictive properties of a model – so I’m not sure how this got published unless they actually said something far more nuanced in the paper.
The other thing to note is that there is drift in the control data (the 20th century warming). This is a real problem as a basic stochastic model with a structural drift would probably be just as good. Therefore the test should have included such a baseline where the drift is an extrapolation of the late 19th century trend through the 20th century.
But then we’ve been through this all before and they’ll wheel another one out before another conference in a few years time.
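A rough sketch of the drift-only baseline cd_uk describes (synthetic numbers and a made-up stand-in hindcast, purely to show the comparison):

```python
import numpy as np

years = np.arange(1880, 2000)
obs = 0.006 * (years - 1880) + np.random.default_rng(2).normal(0.0, 0.1, years.size)

# Fit the drift on the late-19th-century portion only, then extrapolate forward.
train = years < 1900
slope, intercept = np.polyfit(years[train], obs[train], 1)
drift_forecast = slope * years + intercept

model_hindcast = 0.007 * (years - 1880)   # stand-in for a GCM hindcast

test = years >= 1900
rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("drift-only baseline RMSE:", round(rmse(drift_forecast[test], obs[test]), 3))
print("model hindcast RMSE:    ", round(rmse(model_hindcast[test], obs[test]), 3))
```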

September 18, 2012 1:30 pm

So on what BASIS did they assert they were “accurate” for time periods of more than three decades?
Sophistry, I think that’s the term here. A more gentle term than “pure fabrication”…
Max

tallbloke
September 18, 2012 1:32 pm

“The scientific community is pushing policymakers to avoid the increase of temperatures by more than 2 degrees Celsius”
In other news, King Canute is going to stop the tide rising.

Bruckner8
September 18, 2012 1:34 pm

Um, hourly updates [via model!] to Hurricane Isaac were incorrect…

H.R.
September 18, 2012 1:35 pm

How can they know that the models are good longer than 30 years out? The earliest models are barely 30 years old and as MarkW points out, the track record isn’t so good. We’ll have to wait until 2040 to find out if the most recent models are any good. Sooo… I’ll let my heirs be the judge of how good the models are in 30-50-100 years.
Meanwhile, I’m totally on board that they’re not very good under 30 years. Darn proud they recognize that and they deserve an attaboy.

Roger Longstaff
September 18, 2012 1:42 pm

This is complete nonsense. GCMs use numerical time step integration to calculate a “climate trajectory”. They lose contact with reality after just a few days of simulated elapsed time. In order to avoid violation of physical laws (conservation of momentum and energy) they employ low pass filters and pause/reset/restart techniques. The UK Metoffice models are completely useless at forecasting weather for the coming season, yet we are asked to believe that the same models can somehow recover numerical fidelity and proceed to make a meaningful prediction of the climate decades into the future, and that the more money we spend on models and supercomputers the more accurate the predictions will become.
It is mathematically impossible to model a complex, multivariate, non-linear and sometimes chaotic system using simple deterministic equations. Hindcasting is NOT validation. The models are neither verified nor validated.

Disko Troop
September 18, 2012 1:45 pm

The trouble is that with Hansen and Jones and the like in charge we can’t even predict the past anymore. They keep changing it. I’ve had to go back and change all my photographs.
Ivor Ward

Dave
September 18, 2012 1:45 pm

Looks like more Climate Related Atmospheric Predictions (CRAP) from a recipient of the gravy train

JJ
September 18, 2012 1:46 pm

““Climate scientists are correct because we do show that on the continental scale, and for time scales of three decades or more, climate models indeed show predictive skills. But when it comes to predicting the climate for a certain area over the next 10 or 20 years, our models can’t do it.”
Funny, because we have been told not only that they can do it, but that they have done it. Hansen is still running around claiming that his predictions from the 1980’s are demonstrably accurate. Maybe you fellahs should dedicate a little time to setting him straight. Not gonna hold my breath waiting for that to happen.
““Ideally, you would use the models to make predictions now, and then come back in say, 40 years and see how the predictions compare to the actual climate at that time,” said Zeng. “But obviously we can’t wait that long.”
Not at all obvious to me. Especially given the fact that we do have some prominent and dire predictions made 20 and 30 years ago that have turned out bust. Given that global temps are flat when you guys said that they would be rising precipitously, I say we wait and find out what else you are wrong about.
“Policymakers need information to make decisions now, …”
The information that the policy makers need now is the candid and forthright admission that we don’t have sufficient information right now to make science based decisions about the future.
Policymakers could then rely on adaptive management strategies that do not require extended predictions for decades or centuries out. For scientists to push that logical decision making process away from adaptive management toward untested predictions of doom and gloom scenarios is for scientists to act as policymakers.
“The scientific community is pushing policymakers to avoid the increase of temperatures by more than 2 degrees Celsius because we feel that once this threshold is crossed, global warming could be damaging to many regions,” he said.”
See, when you “push” you aren’t doing science. You are making policy.

wsbriggs
September 18, 2012 1:46 pm

I’m with MarkW, given that they’ve been so wrong so far, how can they conclude that they’re accurate 100 years out? WUWT? To say they’re accurate in any meaning of the word, would mean that we could compare what they projected (not predicted), and what actually occurred. We’re a long way from that.

September 18, 2012 1:53 pm

Apparently the models miscounted the number of butterflies flapping their wings 40 years ago. Otherwise they’d be spot on.

Green Sand
September 18, 2012 2:02 pm

Taking any forecast about any subject, just where would you expect the greatest degree of accuracy? Close to the inception of the forecast or at the end of the projected timescale?

Editor
September 18, 2012 2:02 pm

“We wanted to know at what scales are the climate models the IPCC uses reliable,” said Koichi Sakaguchi, a doctoral student in Zeng’s group who led the study. “These models considered the interactions between the Earth’s surface and atmosphere in both hemispheres, across all continents and oceans and how they are coupled.”
  The boldface portion is utter nonsense.

DocMartyn
September 18, 2012 2:06 pm

It’s like me with race horses, I can predict the winning horse in every horse race that will be run 100 years hence, but doing so just before the start causes my abilities to wilt.

Mickey Reno
September 18, 2012 2:06 pm

In a crowd of just 23 random people, the odds are about 50-50 that two of those people will have the same birthday. Hmmm, I wonder why I brought that up?

Ally E.
September 18, 2012 2:09 pm

“Ideally, you would use the models to make predictions now, and then come back in say, 40 years and see how the predictions compare to the actual climate at that time,” said Zeng. “But obviously we can’t wait that long. Policymakers need information to make decisions now, which in turn will affect the climate 40 years from now.”
*
How convenient. Wasn’t the tipping point supposed to come by 2012? Why does this 30-40 year margin keep tracking into the future? It’s ALWAYS 30-40 years away, always the next generation when “trouble will show”. C’mon, even the warmest of the warmists must be wondering about that by now. We’ve HAD our 30-40 years. Pack it in already. We KNOW it’s for the money. Sheesh!

Richard M
September 18, 2012 2:10 pm

I think we need a new acronym for climate modelling, MIGO … money in … garbage out.

TinyCO2
September 18, 2012 2:10 pm

I suspect that Bernie Madoff might have said his investments would have come good if only they’d given him another thirty years.

LongCat
September 18, 2012 2:12 pm

So models that are built to show warming no matter what can effectively hindcast a warming period.
Shocking.

richardscourtney
September 18, 2012 2:13 pm

Friends:
At September 18, 2012 at 1:42 pm Roger Longstaff says all that needs to be said about the climate models.
But it needs to be said loudly, again and again and again and …
Richard

Richard of NZ
September 18, 2012 2:15 pm

When a report does not even get its description of resolution correct, one wonders what else they have wrong. One degree of latitude is 60 nautical miles, about 111 kilometres; one degree of longitude varies from 60 nautical miles at the equator to zero distance at the poles. They have managed to get their resolution out by anywhere from 11 percent understated to infinitely overstated.

TinyCO2
September 18, 2012 2:22 pm

This means that Hansen’s 1988 prediction should be accurate in another six years. Wow, that’s a big El Nino.

JJ
September 18, 2012 2:25 pm

The skill of a climate model depends on three criteria at a minimum, Zeng explained. The model has to use reliable data, its prediction must be better than a prediction based on chance, and its prediction must be closer to reality than a prediction that only considers the internal climate variability of the Earth system and ignores processes such as variations in solar activity, volcanic eruptions, greenhouse gas emissions from fossil fuel burning and land-use change, for example urbanization and deforestation.
Given that what we think we know about the relationship between the internal climate variability of the Earth system and fossil fuel burning is determined by the choice of the assumptions designed into the models, such a comparison of predictive skill is not possible. You don’t know clouds, for example. If you don’t know clouds, you can’t make an “internal climate variability only” based prediction. And given that you arrive at the alleged fossil fuel effect by making assumptions about how clouds are acting when you calibrate the models, you can’t get the other one either.
Zeng described how the group did the analysis: “With any given model, we evaluated climate predictions from 1900 into the future – 10 years, 20 years, 30 years, 40 years, 50 years. Then we did the same starting in 1901, then 1902 and so forth, and applied statistics to the results.
Then what period of data were the models calibrated to? Fifty years into the future from 1900 does not get you into the period of alleged anthro global warming, so you aren’t testing how well those components of the model work. Once you get into the CO2 era, you’re overrunning the data you used to come up with the anthro parameters. You need to be freezing your parameterizations, making falsifiable predictions about unseen (i.e. future) data, and seeing how well you do.

Andi Cockroft
September 18, 2012 2:28 pm

Colour me stupid, but if you use a series of data to create a model, and create a good model, then its backcast capabilities should be spot-on – no matter which way you run your backcast.
What this doesn’t tell us at all is its validity as a forecast model.
If I have a model that is based on the sine-wave of my electricity supply over the past year, it might seem a viable forecast for the frequency next week – except if I forget to pay my bill !!!!
External factors that are unexpected or not well understood can put a wrench in the works of any forecast model – what if the Sun refuses to play ball with the climate models !!!
Andi

Martin A
September 18, 2012 2:39 pm

Hindcasting.
In the 1970’s, it became apparent to researchers on pattern recognition that a fundamental error was to test a pattern recognition system’s accuracy on the same data used to train it. This inevitably leads to over optimistic assessments of a system’s capability. The same error is now seen with climate models, where they are assessed on their ability to reproduce the statistics of the data used to tune them.
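The training-versus-test point in miniature (a toy polynomial fit with made-up data, no climate content): in-sample error always flatters the fit; only held-out data shows whether it generalizes.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

train, test = slice(0, 20), slice(20, 40)
coeffs = np.polyfit(x[train], y[train], 9)    # heavily tuned on the training half
fit = np.polyval(coeffs, x)

mse = lambda a, b: float(np.mean((a - b) ** 2))
print("in-sample (training) MSE :", round(mse(fit[train], y[train]), 3))
print("out-of-sample (test) MSE :", round(mse(fit[test], y[test]), 3))
```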

Philip Finck
September 18, 2012 2:41 pm

cd-uk
My thoughts exactly. The models are tuned to match past observations. So if you go into the past, apply data from that point back and make a prediction for the future…. well…. it sure as heck better have some skill. It is kind of like starting at a stop sign and walking back one kilometer. You turn around and predict that if you walk a kilometer forward you will end up at a stop sign. Sigh.
“The models include most of the things we know,” he explained, “such as wind, solar radiation, turbulence mixing in the atmosphere, clouds, precipitation and aerosols, which are tiny particles suspended in the air, surface moisture and ocean currents.”
Hmmm…… I didn’t know `we know’ clouds, aerosols, precipitation, etc all that well, certainly not to qualify as `we know’.

Hot under the collar
September 18, 2012 2:42 pm

“Once you go into details, you realize that for some decades, models are doing a much better job than for some other decades”.
Try tossing a coin guys you may get the same (or more accurate) predictions!

Coach Springer
September 18, 2012 2:45 pm

Climate models > 30 years = Accuracy? Now we know that the advancing ice age consensus of the 70s was accurate for 2012. Good thing the study cleared that up, because there’s been some confusion about those predictions up until now. Too soon to tell about the 80s models, so they’re batting 0-for-1?
I respect the U of A basketball team more than their climate government-is-the-basis-of-science team. At least basketball scores reflect something that actually happened.

September 18, 2012 2:47 pm

Ally E. says:
September 18, 2012 at 2:09 pm
“Ideally, you would use the models to make predictions now, and then come back in say, 40 years and see how the predictions compare to the actual climate at that time,” said Zeng. “But obviously we can’t wait that long. Policymakers need information to make decisions now, which in turn will affect the climate 40 years from now.”
*
How convenient. Wasn’t the tipping point supposed to come by 2012? Why does this 30-40 year margin keep tracking into the future? It’s ALWAYS 30-40 years away, always the next generation when “trouble will show”. C’mon, even the warmest of the warmists must be wondering about that by now. We’ve HAD our 30-40 years. Pack it in already. We KNOW it’s for the money. Sheesh!
================================================================
I wonder if just as “Global Warming” became “Climate Change” we’ll start to see fewer actual years mentioned and more “years from now” in the predictions?
Again I’m reminded of the seafood place that advertised on their buildings, “Free crabs tomorrow!”.

Tazilon
September 18, 2012 2:48 pm

How does anyone know what the models’ accuracy is 30+ years out? They are not 30 years old!

Sean
September 18, 2012 2:55 pm

This paper is just nonsense.

pat
September 18, 2012 2:56 pm

Of course accuracy will only increase with time as the variables narrow. Oh wait. That is absurd.

Dave
September 18, 2012 3:04 pm

Knowing how they’ve “corrected” old data to undoubtedly increase the fit during hindcasting, perhaps they’ve already made up future data that fits the models perfectly. I wouldn’t put it past them…

DocMartyn
September 18, 2012 3:06 pm

” Philip Finck says:
My thoughts exactly. The models are tuned to match past observations.”
no, No, No, and No. They SWEAR on all they love that the models are not fitted to the past in any way whatsoever, pinky promise.
We have to take their word for it that they chose their various constants a priori, and didn’t fit them to the past at all.

cui bono
September 18, 2012 3:09 pm

Zeng described how the group did the analysis: “With any given model, we evaluated climate predictions from 1900 into the future – 10 years, 20 years, 30 years, 40 years, 50 years. Then we did the same starting in 1901, then 1902 and so forth, and applied statistics to the results.”
——-
So from 1900 they could predict – what? Arctic and Antarctic ice extent in the 1930s and 1940s? Air temperatures? Adjusted or unadjusted? Changes in ocean currents? Cloud cover? Rainfall? The 1930s dustbowl (oh no, sorry, too regional).
This is moonshine! Worse than fantasy.
Let Judith Curry check their methods. She had a post recently about abject lack of model testing.
And give the “applied statistics” to Lucia!

JJ
September 18, 2012 3:26 pm

“The models include most of the things we know,” he explained, “such as wind, solar radiation, turbulence mixing in the atmosphere, clouds, precipitation and aerosols, which are tiny particles suspended in the air, surface moisture and ocean currents.”
Most? Ring back when your models include all of the things you (think you) know.
And BTW, you don’t know clouds. Or precip. Or aerosols. And we have a very good inkling that you don’t know solar as well as you need to.
“Once you go into details, you realize that for some decades, models are doing a much better job than for some other decades. That is because our models are only as good as our understanding of the natural processes, and there is a lot we don’t understand.”
Go with that. And understand that a lot of what you don’t understand, you don’t yet know you don’t understand. Until you do understand, STFU with respect to policy. You shouldn’t be “pushing” on anything when your feet are anchored in ignorance.

richardscourtney
September 18, 2012 3:30 pm

DocMartyn:
At September 18, 2012 at 3:06 pm you say

We have to take their word for it that they chose their various constants a priori, and didn’t fit them to the past at all.

Really?
I do not know of any modeler who claims “they chose their various constants a priori, and didn’t fit them to the past”.
And we know for certain fact that they DID fit to the past by use of assumed aerosol cooling.
(ref. Courtney RS ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’, Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
We also know that each climate model uses a different aerosol ‘fudge factor’ to every other climate model.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
So, at most only one of the climate models emulates the climate system of the real Earth and there are good reasons to think none of them do.
Richard

son of mulder
September 18, 2012 3:33 pm

Have they tested the models they ran on the Babbage Difference Engine 100 years ago to show how accurately they predicted todays climate? (;>)

Gary Pearse
September 18, 2012 3:34 pm

Is it really this bad? They create and fine tune the models BY hindcasting!!! and then validate them by checking out hindcasts? The word forecast has no place in this piece.

pkatt
September 18, 2012 3:36 pm

LoL, in 40 yrs these fools will be screaming ice age, just in time for it to start warming back up. How can you say your long-term forecast is accurate when no long-term observation has passed to prove that claim? Tiz truly mind-boggling.

Editor
September 18, 2012 3:43 pm

tallbloke says:
September 18, 2012 at 1:17 pm

“Climate-prediction models show skills in forecasting climate trends over time spans of greater than 30 years”

How do they know?

Tallbloke, good to hear from you. I wondered the same thing. It turns out that the model “predicted” warming over the last hundred year span, because they were using a model that is tuned to reproduce the warming over the last 100 years.
Shockingly, the model was able to reproduce the warming it was tuned to reproduce, which proves that the models “show skills” in forecasting long trends.
The logic in their thought processes is simple, clean and wrong … what more could you want?
w.

F. Ross
September 18, 2012 3:52 pm

“Climate scientists are correct because we do show that on the continental scale, and for time scales of three decades or more, climate models indeed show predictive skills. But when it comes to predicting the climate for a certain area over the next 10 or 20 years, our models can’t do it.”

It seems that the author is saying the model forecast could [should?] drift away from reality for some indefinite period of time and then drift back toward reality at, say, 90–100 years later.
If that is correct then one would expect the hindcasts to do the same thing. Do they? I doubt it; I would suppose that they would tune the model until the hindcast tracks pretty close to the [questionable=adjusted] historical record. Are graphs of the hindcast available?
From http://www.agu.org/pubs/crossref/2012/2012JD017765.shtml

“The mean bias and ensemble spread relative to the observed variability, which are crucial to the reliability of the ensemble distribution, are not necessarily improved with increasing scales and may impact probabilistic predictions more at longer temporal scales.”

[+emphasis]
Cutting through the flowery double talk it sounds like they are saying the models don’t work at longer scales either.

JJ
September 18, 2012 4:08 pm

Did anyone notice the caption to the maps at the head of the post?
“These maps show the observed (left) and model-predicted (right) air temperature trend from 1970 to 1999.”
Assuming that the choropleth scale is the same for both images, that model has grossly overpredicted temperature trends versus actual observations. And that’s just for the side of the planet that they are showing us. The brick red on the central Eurasian and Antarctic portions of the limb indicates that there’s some unrealized heat over there as well. And the Pampas is substantially cooler than predicted, too.
Is this one of the examples of “skillful” 30 year prediction?

Gibby
September 18, 2012 4:43 pm

Sounds to me like the models are optimized for whatever the dominant global climate driver is every 30 yrs, but don’t have the sensitivity to distinguish the shorter term natural variations. Sounds a lot like the development of scientific instrumentation over the years in terms of the difference in detection limits between an atomic spectrometer and a mass spectrometer.

Eugene WR Gallun
September 18, 2012 5:08 pm

Richard M
sept 18 2:10pm
MIGO — Money In Garbage Out
You have knocked it. How better can you describe the Chicken Little Science of Global Warming?
Eugene WR Gallun

William
September 18, 2012 5:49 pm

The extreme warming IPCC predictions of 1.5 C to 5 C warming for a doubling of CO2 require that the planet amplifies the CO2 warming, which is positive feedback. If the planet’s feedback response to a change in forcing is negative, a doubling of atmospheric CO2 will result in less than 1 C of warming, with most of the warming occurring at high-latitude regions of the planet, which will cause the biosphere to expand.
There is no extreme AGW warming problem to solve.
http://www.drroyspencer.com/2012/09/uah-global-temperature-update-for-august-2012-0-34-deg-c/
http://wattsupwiththat.com/2012/09/06/uah-global-temperature-up-06c-not-much-change/
http://www-eaps.mit.edu/faculty/lindzen/236-Lindzen-Choi-2011.pdf
On the Observational Determination of Climate Sensitivity and Its Implications
We estimate climate sensitivity from observations, using the deseasonalized fluctuations in sea surface temperatures (SSTs) and the concurrent fluctuations in the top-of-atmosphere (TOA) outgoing radiation from the ERBE (1985-1999) and CERES (2000-2008) satellite instruments. … ….We argue that feedbacks are largely concentrated in the tropics, and the tropical feedbacks can be adjusted to account for their impact on the globe as a whole. Indeed, we show that including all CERES data (not just from the tropics) leads to results similar to what are obtained for the tropics alone – though with more noise. We again find that the outgoing radiation resulting from SST fluctuations exceeds the zero-feedback response, thus implying negative feedback. In contrast to this, the calculated TOA outgoing radiation fluxes from 11 atmospheric models forced by the observed SST are less than the zero-feedback response, consistent with the positive feedbacks that characterize these models. The results imply that the models are exaggerating climate sensitivity…. ….However, warming from a doubling of CO2 would only be about 1C (based on simple calculations where the radiation altitude and the Planck temperature depend on wavelength in accordance with the attenuation coefficients of well-mixed CO2 molecules; a doubling of any concentration in ppmv produces the same warming because of the logarithmic dependence of CO2’s absorption on the amount of CO2) (IPCC, 2007)….
This modest warming is much less than current climate models suggest for a doubling of CO2. Models predict warming of from 1.5C to 5C and even more for a doubling of CO2. Model predictions depend on the ‘feedback’ within models from the more important greenhouse substances, water vapor and clouds. Within all current climate models, water vapor increases with increasing temperature so as to further inhibit infrared cooling. Clouds also change so that their visible reflectivity decreases, causing increased solar absorption and warming of the earth….
http://www.forbes.com/sites/jamestaylor/2012/04/11/a-new-global-warming-alarmist-tactic-real-temperature-measurements-dont-matter/
A New Global Warming Alarmist Tactic: Real Temperature Measurements Don’t Matter
What do you do if you are a global warming alarmist and real-world temperatures do not warm as much as your climate model predicted? Here’s one answer: you claim that your model’s propensity to predict more warming than has actually occurred shouldn’t prejudice your faith in the same model’s future predictions. Thus, anyone who points out the truth that your climate model has failed its real-world test remains a “science denier.”
This, clearly, is the difference between “climate science” and “science deniers.” Those who adhere to “climate science” wisely realize that defining a set of real-world parameters or observations by which we can test and potentially falsify a global warming theory is irrelevant and so nineteenth century. Modern climate science has gloriously progressed far beyond such irrelevant annoyances as the Scientific Method.

Crispin in Waterloo
September 18, 2012 6:00 pm

@JJ says
“Xubin Zeng, a professor in the University of Arizona department of atmospheric sciences who leads a research group evaluating and developing climate models, said the goal of the study was to bridge the communities of climate scientists and weather forecasters, who sometimes disagree with respect to climate change.”
Huh. The goal of the study was political, not scientific.
Imagine that.
++++++++++++
Is it true that the opinion of most US weather forecasters is that the alarmists are wrong? No wonder they disagree. Weather forecasters are dealing with the real world, after all. The climate modelers are dealing with an artifice of their own devising. I can’t say they will never meet, but they are not yet on the same planet, that’s for sure.

September 18, 2012 6:19 pm

“Our goal was to provide climate modeling centers across the world with a baseline they can use every year as they go forward,” Zeng added. “It is important to keep in mind that we talk about climate hindcast starting from 1880. Today, we have much more observational data. If you start your prediction from today for the next 30 years, you might have a higher prediction skill, even though that hasn’t been proven yet.”
I confidently predict that with better data, the model predictions get worse. It’s well known that poor or absent data on things like aerosols and clouds allows the modellers to use these as tunable parameters.

September 18, 2012 7:03 pm

They don’t accurately predict the present, but they do accurately predict the future…. More Kool Aid please… 😉

September 18, 2012 7:36 pm

Willis Eschenbach says:
September 18, 2012 at 3:43 pm
Shockingly, the model was able to reproduce the warming it was tuned to reproduce, which proves that the models “show skills” in forecasting long trends.

As usual, I can’t decide if these people are being deliberately deceptive or they are just delusional.
Although, there is a third possibility.
Zeng said the study should help the community establish a track record whose accuracy in predicting future climate trends can be assessed as more comprehensive climate data become available.
Perhaps this is a sugar-coated attempt to establish baselines for the model predictions and stop the modellers’ usual practice of shifting the goalposts every few years and then claiming their predictions were accurate.

Louis Hooffstetter
September 18, 2012 7:45 pm

To echo what others have said, hindcasting in no way, shape, or form validates a model’s ability to predict the future. If you tweak the input and twiddle the forcings, any climate model can hindcast anything you want. Every climate modeller (including Zeng) knows this. It is scientific fraud to imply that hindcasting can “test how accurately various computer-based climate prediction models can turn data into predictions.” Hindcasting is cheating, plain and simple.

AndyG55
September 18, 2012 7:48 pm

“A new study has found that climate-prediction models are good at predicting long-term climate patterns on a global scale ”
THIS CANNOT POSSIBLY BE PROVEN !!!
Another Lew paper? Did this also pass peer/pal review?

Frank K.
September 18, 2012 8:14 pm

As someone working professionally in computational fluid dynamics for over 20 years, I agree that there’s NO way they can prove models are reliable for > 30 years, and not reliable for < 30 years. This is pure bunk.
Unfortunately, none of our warmist friends will EVER talk about or debate the numerical models in any detail. Whenever you say "differential equations" and "initial/boundary conditions" they start talking about switch grass and sea ice…

John Kannarr
September 18, 2012 8:14 pm

So if the models can accurately predict, say, 100 years out, then all that is necessary to predict, say, the climate of 2014 or 2016, is to set up the initial conditions as they were in 1914 or 1916, and voila, the models can now accurately give us predictions 2 or 4 years or any other period into the future! Let’s see the test of that.

September 18, 2012 8:37 pm

tallbloke says:
September 18, 2012 at 1:17 pm (Edit)
“Climate-prediction models show skills in forecasting climate trends over time spans of greater than 30 years”
How do they know?
####
Simple. Start in 1900 and predict. Then look at 1900–1940, 1901–1941, etc.
For every year from 1900 to 1980, see how your 40-year prediction held up versus these alternatives:
1. Naive prediction. Everything stays the same.
2. Internal variability. The future is like the past.
3. Shoulder shrugs.
The key is this: the definition of skill. Skill does not mean perfect. Skill means better than
hand-waving assertions about the sun. Skill means better than “I dunno, natural variability.”
Skill means better than the alternatives. If you have alternatives for modelling the climate
(temperatures, rain, etc.), then show how your alternative has skill. Note that your alternative must be able to predict on a regional basis and predict more than just a global average temperature.
When faced with uncertainty you build a model. That model will always be wrong. The question is does it have skill as measured against alternatives.
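One conventional way to score “skill against alternatives” is a skill score of 1 − MSE_model / MSE_reference, which is positive only when the model beats the reference forecast. A minimal sketch with invented trend numbers (the real comparison would use the CMIP hindcasts and observed trends):

```python
import numpy as np

obs   = np.array([0.10, 0.15, 0.08, 0.20, 0.17])   # observed 40-yr trends, deg C/decade (made up)
model = np.array([0.14, 0.13, 0.12, 0.23, 0.15])   # model hindcast trends (made up)

naive     = np.zeros_like(obs)                     # "everything stays the same": zero trend
past_mean = np.full_like(obs, obs.mean())          # "the future is like the past": mean past trend

mse = lambda f: float(np.mean((f - obs) ** 2))
for name, ref in (("naive/no-change", naive), ("past-mean", past_mean)):
    skill = 1.0 - mse(model) / mse(ref)            # > 0 means better than the reference
    print(f"skill vs {name}: {skill:+.2f}")
```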

RoHa
September 18, 2012 9:01 pm

“That is because our models are only as good as our understanding of the natural processes, and there is a lot we don’t understand.”
But I thought it was all settled science!
We’re doomed.

Matthew R Marler
September 18, 2012 9:36 pm

Steven Mosher: for every year from 1900 to 1980, see how your 40 year prediction held up versus these alternatives.
1. Naive prediction. Everything stays the same
2. Internal variability. the future is like the past
3. shoulder shrugs
The key is this. The definition of skill. Skill does not mean perfect. Skill means better than …

Well said. The models show “skill” in that their forecasts were better than simple extrapolations, in mean squared error. The amount of inaccuracy demonstrated over the most recent 30 year period shows that the models are not accurate enough for policy decisions relative to 30+ years in the future. Which model now makes the most reliable prediction for 30 years from now isn’t known yet. What they have documented is a stage of progress, as though to say they have constructed the two railroad lines 1/3 of the way to Promontory or thereabouts; or when the medical profession achieved a good success rate against Hodgkin’s lymphoma. Lots of examples of mixed progress come to mind.
I think they have produced a respectable statement of the state of the art.

Matthew R Marler
September 18, 2012 9:42 pm

Steven Mosher: The question is does it have skill as measured against alternatives.
The other question is does it have sufficient skill to achieve a given goal. For example, the knowledge base that supported the Golden Gate Bridge was insufficient for the Tacoma Narrows Bridge; there were hints of inadequacy in the second case, that was all.

davidmhoffer
September 18, 2012 9:56 pm

I have a clock that is right twice per day. They’ve got models that can’t get 30 years right, but they get it right on time scales greater than that? So my clock is right twice per day but their models are only right twice per century or so?
That folks, is what they are trying to convince us is evidence that the models “work”. They are full of fudge factors based on the data for the last 100 years, so taken over periods that are a substantial portion of 100 years, they seem to get it right. In brief, they’ve been curve fitted to 100 years of data, so any increment smaller than that is going to be less accurate than the whole data set by default!
All they’ve done is draw a line from 1900 to 2012 and said LOOK! The end points are right! Just ignore all that stuff in the middle that isn’t anywhere near the line!

davidmhoffer
September 18, 2012 10:00 pm

….and |BTW Dr Zeng, here is the link to IPCC AR4 WG1 2.9.1 Uncertainties in Radiative Forcing where the IPCC authors rank the Level of Scientific Understanding of no less than 14 parameters. Their own scientists rank the understanding (that feeds the models) of no less than NINE of the 14 parameters as either LOW or VERY LOW. In fact, only a SINGLE parameter is ranked as high:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-9-1.html
So Dr Zeng, could you please explain why we would trust models that are admitted to having a rotten understanding of the science and which are demonstrably wrong (demonstrated by YOU!) except for perhaps two or three times in a hundred years?

Breaker
September 18, 2012 11:25 pm

@Louis. If they had used a walk-forward testing method, hindcasting of a sort may be used properly. It would require that they start in, say, 1900. Tune the model based only on data known in 1900. Then forecast. Then go to 1901. Tune the model on data known only in 1901. Then forecast . . . . This technique is used by quants when actual money is on the line and gives a pretty good assessment on whether the model has skill. The key is tuning based only on what would have been known had the model been created at the past time from which the forecast is made.
Even with this approach, some bleeding of later information into earlier times occurs because the form of the model is usually determined with knowledge of what is coming. But the parameters are not set using the later information.
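Breaker’s walk-forward idea in skeleton form (the tune() and forecast() helpers here are hypothetical stand-ins, not any real model API): at each origin year the model is re-tuned using only data available up to that year, then scored on what came after.

```python
import numpy as np

def tune(years, obs):
    """Stand-in for model calibration: here, just fit a linear trend."""
    return np.polyfit(years, obs, 1)

def forecast(params, future_years):
    """Stand-in for running the tuned model forward."""
    return np.polyval(params, future_years)

years = np.arange(1880, 2000)
obs = 0.006 * (years - 1880) + np.random.default_rng(4).normal(0.0, 0.1, years.size)

horizon = 30
errors = []
for origin in range(1900, 2000 - horizon):
    past = years <= origin
    future = (years > origin) & (years <= origin + horizon)
    params = tune(years[past], obs[past])      # uses no information after `origin`
    pred = forecast(params, years[future])
    errors.append(float(np.sqrt(np.mean((pred - obs[future]) ** 2))))

print(f"mean {horizon}-yr walk-forward RMSE: {np.mean(errors):.3f} deg C")
```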

Brian H
September 18, 2012 11:50 pm

Hind-casting using the training data that generated their parameter weights? Something is rotten in the state of Denmark. Or is it California? Hard to tell …

September 19, 2012 12:06 am

Hindcasts aren’t predictions. How well a model hindcasts could be, and likely is, 100% a function of how well it has been tuned to the data. A model could hindcast perfectly and have no predictive accuracy.
Gavin Schmidt to his credit will talk about prediction. He found a statistical model (which means a model that knows nothing about the climate) beat all climate models used by the IPCC in its predictions.
http://www1.ccls.columbia.edu/~cmontel/mss10.pdf
That a statistical model beats a ‘state of the art’ climate model is proof there is effectively no predictive science in the models. In other words the science in the models is wrong.

Mike McMillan
September 19, 2012 12:31 am

Thirty years, that’s the rough length of each half of the climate’s sixty year warm/cool cycle.

tumpys
September 19, 2012 2:26 am

Force-fit the model to the historic temperature record, then claim that somehow means it has predictive skill. What a waste of taxpayer money. Can’t they find something useful to do research on?

DirkH
September 19, 2012 2:30 am

Zeng talks complete and utter garbage, as commenters above have pointed out (models have been trained/designed using the information in the set used for validation, rendering the validation meaningless).
He’s one of the confidence tricksters of the CO2AGW-scientific apparatus. In a just world, he would be defunded, fired and made to wait tables.

richardscourtney
September 19, 2012 2:40 am

Steven Mosher:
At September 18, 2012 at 8:37 pm you make the plain wrong assertion concerning forecasting skill

When faced with uncertainty you build a model. That model will always be wrong. The question is does it have skill as measured against alternatives.

NO!
The question is does it have skill as measured against expectation from chance.
“Alternatives” have nothing to do with it.
There may be several “alternative” models of the same thing. Indeed, each of the climate models is unique so there are several “alternatives”. But a climate model does not demonstrate “skill” by being “measured against [those] alternatives”.
A climate model demonstrates skill by predicting climate behaviour better than chance would anticipate.
And to date no climate model has demonstrated any predictive skill of any kind.
Richard

Espen
September 19, 2012 2:46 am

The last century had strong natural oscillations with a period of roughly 60 years. That makes it maximally difficult to predict 30 years into the future if your models don’t model these oscillations properly. Hence, you will get a slightly better result for 40 and 50 years, not because the models are better for longer periods, but because they’re lousy.
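Espen’s point can be illustrated with a toy series (pure arithmetic, no climate model): add a 60-year oscillation to a steady trend and the fitted trend in 30-year windows swings widely, while longer windows average the oscillation out.

```python
import numpy as np

years = np.arange(1900, 2020)
# Steady trend of +0.007 deg C/yr plus a 60-year oscillation of amplitude 0.15 deg C.
series = 0.007 * (years - 1900) + 0.15 * np.sin(2 * np.pi * (years - 1900) / 60.0)

for window in (30, 60, 90):
    trends = [np.polyfit(years[i:i + window], series[i:i + window], 1)[0]
              for i in range(years.size - window)]
    print(f"{window}-yr windows: fitted trend ranges from {min(trends):+.4f} "
          f"to {max(trends):+.4f} deg C/yr (true trend +0.0070)")
```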

Chris Wright
September 19, 2012 3:32 am

It does look as if climate models are being abused on an almost industrial scale.
As already pointed out, the claim that the models are accurate over thirty years is laughable – or it would be if it weren’t so serious.
.
I would think that hindcasts are also useless as a means of evaluating predictive skills, because most likely they have been adjusted to match historical data. I have seen some hindcasts that are so accurate over decades as to be completely impossible. These models can’t predict the weather in a few weeks’ time, for Heaven’s sake. The only realistic explanation is that the models were indeed adjusted to match historical data.
.
Because many climate scientists can’t be trusted, the only honest way to test a model’s predictions is to wait until the forecast has matured. So far, all the forecasts from several decades ago have turned out completely wrong.
.
Climate models certainly have their uses, but long term forecasting isn’t one of them. The claim that the models can forecast the climate 50 or 100 years in the future is close to fraudulent.
Chris

phlogiston
September 19, 2012 3:57 am

The head honcho Zeng at least takes a humble and cautious tone here; his key statement is that “our models are only as good as our understanding of the natural processes, and there is a lot that we don’t understand”.
However, in terms of the Popperian criterion that science be falsifiable, this is just about as evasive and slippery as it gets. We are told retro-casts from past times forward can’t be expected to succeed due to insufficient data quality. And modelling skill can’t be expected to improve over not one but several six-year periods. And if a model is wrong after 30 years, no worries, just wait another 70 and it will all come right. So only a model made today, with current data, can be expected to have any skill, but we will have to wait 100 years to find out.
No pressure then.
I just read Antony Beevor’s history of D-Day. Contrast the situation of the UK’s head meteorologist, James Stagg, and the responsibility he faced: a complex Atlantic weather outlook, stark disagreements between British and American meteorologists, and Eisenhower demanding a forecast on which alone hinged the decision of when to launch the invasion.
That was meteorology that mattered.
And, thank God, he got it right.

John Marshall
September 19, 2012 4:48 am

What a dream world these people live in. If a model is no good under 30 years it will be useless beyond 30 years. Divergence will increase because climate is a chaotic system.
Get real!
[300 years?]

September 19, 2012 6:09 am

The climate models are fudged. The modelers use false, fabricated (mostly aerosol) data to force their models to hindcast, and then they falsely claim the ability to forecast! This false hindcasting practice is scientific fraud.
The subject study is based on this climate model fraud, and is utter nonsense.

G. Karst
September 19, 2012 6:47 am

How can he claim short-term skill? Climate modelling predicted significant warming for the last thirty years, of which the last 15 show no significant warming. Seems to be no more skillful than random chance to me. Perhaps “skill” is now defined as “not completely wrong”. GK

September 19, 2012 8:08 am

MarkW says: September 18, 2012 at 1:04 pm
If models have such a good track record predicting more than 30 years out, then why were the predictions made back in the ’80s so far off?
-============================================
My reaction too. When did models of any note start being used? i.e., what models do we have that predicted the decade we are now in? And where are the results? This sounds like weapons-grade [snip . . BS . .mod] to me.

Curt
September 19, 2012 8:32 am

The financial markets are littered with models that do extremely well in hindcasting but are utter failures going forward.

john robertson
September 19, 2012 9:42 am

So now the MIGO are predictions? How come they are always projections when they are shown to be rubbish? Richard M, thanks: money in, garbage out. Good thing my keyboard is semisubmersible.

Peridot
September 19, 2012 9:44 am

This may be OT but I keep seeing references to “Chicken Little” on WUWT. I have absolutely no idea to what this refers as it seems to be an American thing. Anyone tell me (in the UK) what it is, please?
[Reply: It’s Chicken Licken in the U.K. — mod.]

Peridot
September 19, 2012 10:19 am

This may be OT but I keep seeing references to “Chicken Little” on WUWT. I have absolutely no idea to what this refers as it seems to be an American thing. Anyone tell me (in the UK) what it is, please?
[Reply: It’s Chicken Licken in the U.K. — mod.]
Oh dear, I haven’t heard of Chicken Licken either. I didn’t realize my village was so remote.

Chris R.
September 19, 2012 11:18 am

Didn’t Barnett et al. publish a paper a few years back looking at the power spectral density (PSD) of GCM models compared with real temperature data? I am certain that their conclusion was that the PSD of the models they studied was right for very short time periods – up through approx. ten years; right for century-scale time periods; but rather badly wrong for periods between a decade and a century. That seems to be right in line with what these boys and girls are saying.

Chris R.
September 19, 2012 11:25 am

Hi Peridot:
“Chicken Little” refers to a children’s story. Chicken Little is struck by a falling acorn, concludes “The Sky Is Falling!” and proceeds to convince other animals that the sky is indeed falling. The animals all are panicked. In their state of panic they are easily tricked by a fox.
You may readily determine the similarities of the story to what climate skeptics feel the pushers of the AGW theory are saying.

barry
September 19, 2012 8:09 pm

A new study has found that climate-prediction models are good at predicting long-term climate patterns on a global scale but lose their edge when applied to time frames shorter than three decades and on sub-continental scales.

This is hardly news. Projections are more confident for global scale changes over the long-term, and much less so for local-scale changes in the short-term. Same thing has been said in the IPCC and at realclimate (pretty much ever since that website was created). Has anyone ever said differently?

HelmutU
September 20, 2012 2:44 am

Does Mr. Zeng know the paper from the Alfred Wegener Institute in Germany? As far as I remember it was discussed here at WUWT. That paper showed that the models could not calculate the correct climate for the last 6,000 years, even though all the parameters were known.

Peridot
September 20, 2012 8:03 am

Chris R. says:
September 19, 2012 at 11:25 am
“Chicken Little” refers to a children’s story. Chicken Little is struck by a falling acorn, concludes “The Sky Is Falling!” and proceeds to convince other animals that the sky is indeed falling. The animals all are panicked. In their state of panic they are easily tricked by a fox.
Thanks Chris – I understand the reference at last! I am a pensioner and Chicken Little must have flown under my radar. The analogy is first class!

matt v.
September 20, 2012 8:22 am

The authors state that the climate models “…lose their edge when applied to time frames shorter than three decades…” This is very apparent with the Met Office forecasts using million-dollar computers. They predicted a 0.48 C rise during 2012 for the global temperature anomaly [based on HadCRUT3gl]. The actual is running closer to 0.371 C to the end of July. The more problematic forecast is their decadal forecast to 2020. They are predicting the global temperature anomaly to go up to 0.8 C by 2020, when many models based on historic climate cycles show a decline all the way to 2030 and later. They missed their 2011 forecast and missed 12 of the last 13 annual forecasts, per the CLIMATE EDINBURGH analysis. How can they possibly be more accurate on 30-years-plus forecasts when they are so far off year after year? Would you invest your dollars in an institution for the long term if it fails to meet its current projections year after year? Are these climate modelers counting on many of us having short memories and not remembering a ridiculous forecast made some 30 years ago? Perhaps some will no longer be around to be held accountable. The unfortunate thing is that these long-term models are being used to influence public policy, diverting our limited financial resources to items that will have little if any impact on climate, and our mainstream media will not do their homework to tell the public that they are potentially spending money on the wrong priorities.

matt v.
September 20, 2012 9:45 am

Does it have skill when measured against the alternative.? This can lead to false reasoning and bad choices.. If the only skill you have is inadequate due to lack of understnding of all the variables and most importantly it has not been properly validated and if the alternatives[skills] are no better, then one has little justification to yell the sky is falling . It would be totally irresponsible. Its much too early to alarm people about some threat that you do not yet even understand properly yourself . The classic cases are medicines that are put on the market without proper testing and long term evaluations first where the side effects of the medicine start killing people rather than curing them and the medicine has to be withdrawn [but only after great suffering for the public.]