Scientists Develop New Method to Quantify Climate Modeling Uncertainty

(From PhysOrg.com, h/t to Leif Svalgaard) – Climate scientists recognize that climate modeling projections include a significant level of uncertainty. A team of researchers using computing facilities at Oak Ridge National Laboratory has identified a new method for quantifying this uncertainty.

Photo: Martin Koser of Denmark

The new approach suggests that the range of uncertainty in climate projections may be greater than previously assumed. One consequence is the possibility of greater warming and more heat waves later in the century under the Intergovernmental Panel on Climate Change’s (IPCC) high fossil fuel use scenario.

The team performed an ensemble of computer “runs” using one of the most comprehensive climate models, the Community Climate System Model version 3, developed by the National Center for Atmospheric Research (NCAR), on each of three IPCC scenarios. The first scenario, known as A1FI, assumes high global economic growth and continued heavy reliance on fossil fuels for the remainder of the century. The second, known as B1, assumes a major move away from fossil fuels toward alternative and renewable energy as the century progresses. The third, known as A2, is a middling scenario, with less even economic growth and some adoption of alternative and renewable energy sources as the century unfolds.

The team computed uncertainty by comparing model outcomes with historical climate data from the period 2000-2007. Model runs over historical periods typically depart from the weather actually recorded during those spans. The team used statistical methods to develop a range of temperature variance for each of the three scenarios, based on how far the model runs departed from the historical record.
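The paper’s statistical machinery is more formal than this, but the basic idea of turning hindcast departures into an uncertainty band around a projection can be sketched in a few lines of Python. The numbers, variable names, and the roughly normal, two-standard-deviation error band below are illustrative assumptions, not values or methods taken from the study.

```python
import numpy as np

# Hypothetical annual-mean temperatures (deg C) for a 2000-2007 hindcast:
# one series from the model ensemble, one from observations.
model_hindcast = np.array([14.52, 14.60, 14.65, 14.63, 14.70, 14.74, 14.71, 14.76])
observed       = np.array([14.40, 14.51, 14.55, 14.49, 14.54, 14.61, 14.56, 14.58])

# Departure of the model from the observed record over the evaluation period.
departure = model_hindcast - observed

# Treat the spread of those departures as the model's error scale
# (the paper uses more formal statistics; this is the simplest analogue).
error_scale = departure.std(ddof=1)

# Attach a ~95% band (about +/- 2 standard deviations, assuming roughly
# normal errors) to a hypothetical end-of-century projection.
projection_2090 = 17.1
lower, upper = projection_2090 - 2 * error_scale, projection_2090 + 2 * error_scale
print(f"projection: {projection_2090:.2f} C, error bar: [{lower:.2f}, {upper:.2f}] C")
```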

The result is roughly analogous to the National Weather Service’s hurricane-track forecasts familiar to TV viewers: a dark line on the weather map shows the hurricane’s predicted path over the next few days, while a gray or colored area on either side of the line shows how far the storm may diverge from that path within a certain level of probability. The ORNL team developed a similar range of variance, technically known as “error bars,” for each of the scenarios.

Using resources at ORNL’s Leadership Computing Facility, the team then performed ensemble runs on three decade-long periods at the beginning, middle, and end of the twenty-first century (2000-2009, 2045-2055, and 2090-2099) to get a sense of how the scenarios would unfold over the course of the century.

Interestingly, when the variance or “error bars” are taken into account, there is no statistically significant difference between the projected temperatures resulting from the high fossil fuel A1FI scenario and the middling A2 scenario up through 2050. That is, the A1FI and A2 error bars overlap. After 2050, however, the A1FI range of temperature projections rises above that of A2, until the two begin to overlap again toward the century’s end.

Climate scientists have typically understood the range of uncertainty in projections to be the spread between the high and low scenarios. But when the error bars are added in, the range between the high and low possibilities actually widens, indicating greater uncertainty.
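A rough sketch of both effects, using entirely hypothetical mid-century warming numbers rather than values from the paper, might look like this:

```python
# Hypothetical mid-century warming projections (deg C) and error bars.
scenarios = {
    "A1FI": {"mean": 2.4, "err": 0.6},  # high fossil fuel use
    "A2":   {"mean": 2.1, "err": 0.6},  # middling scenario
    "B1":   {"mean": 1.5, "err": 0.5},  # shift toward renewables
}

def interval(s):
    """Return the (low, high) bounds implied by a scenario's error bar."""
    return s["mean"] - s["err"], s["mean"] + s["err"]

# Do the A1FI and A2 error bars overlap (i.e. no clear separation)?
a1fi_lo, a1fi_hi = interval(scenarios["A1FI"])
a2_lo, a2_hi = interval(scenarios["A2"])
print("A1FI/A2 error bars overlap:", a1fi_lo <= a2_hi and a2_lo <= a1fi_hi)

# Range of possibilities without error bars: spread of the scenario means.
means = [s["mean"] for s in scenarios.values()]
print("mean-to-mean range:", round(max(means) - min(means), 2))

# Range with error bars included: lowest lower bound to highest upper bound.
bounds = [b for s in scenarios.values() for b in interval(s)]
print("range with error bars:", round(max(bounds) - min(bounds), 2))
```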

“We found that the uncertainties obtained when we compare model simulations with observations are significantly larger than what the ensemble bounds would appear to suggest,” said ORNL’s Auroop R. Ganguly, the study’s lead author.

In addition, the error bars in the A1FI scenario suggest at least the possibility of even higher temperatures and more heat waves after 2050 if fossil fuel use is not curtailed.

The team also looked at regional effects and found large geographical variability under the various scenarios. The findings reinforce the IPCC’s call for greater focus on regional climate studies in an effort to understand specific impacts and develop strategies for mitigation of and adaptation to climate change.

The study was published in the Proceedings of the National Academy of Sciences. Co-authors include Marcia Branstetter, John Drake, David Erickson, Esther Parish, Nagendra Singh, and Karsten Steinhaeuser of ORNL, and Lawrence Buja of NCAR. Funding for the work was provided by ORNL’s new cross-cutting initiative called Understanding Climate Change Impacts through the Laboratory Directed Research and Development program.

More information: The paper can be accessed electronically here: http://www.pnas.org/content/106/37/15555


104 Comments
Ben
October 26, 2009 2:14 pm

Are the rest of you guys reading the same article that I am? After the first 15 comments, I gave up reading because I couldn’t understand the basis of the criticism. The modelers put error bars on their models based on 2000-2007 data and came up with very high error. This is the correct answer.
Of course, the “error bars suggest higher heat waves” line is nonsensical. All it suggests is higher uncertainty. It appears to be an addition by the PhysOrg author (showing ignorance of how error works). However, aside from that line, it was a decent article.
Why so virulent, guys?

Kurt
October 26, 2009 4:25 pm

“Ben (14:14:33) :
Are the rest of you guys reading the same article that I am? After the first 15 comments, I gave up reading because I couldn’t understand the basis of the criticism.”
None of the model runs could produce temperatures as low as what was actually seen from 2000-2007. The question is how to interpret this failure. What this paper seems to do is presume that the fundamentals of the model are correct, meaning that the expected, or mean, outcome of the model runs is correct, but that the error has a wider symmetrical distribution around that expected outcome. That’s why they conclude that policymakers should brace for the possibility of larger temperature increases: under these assumptions there is no reason to think that future temperatures could not exceed the expected outcome by the same margin as they have undershot it from 2000-2007.
But there is no logical basis for that assumption; it’s just a deus ex machina. An equally plausible explanation is that the models are simply constructed incorrectly. For example, if the models were to assume a net negative temperature feedback on greenhouse gas emissions rather than a positive one, it is plausible that the preexisting method of measuring error by the outer boundaries of the model runs would yield a range of outcomes that includes the temperatures seen in the last decade. But if that were the case, then the projected impact of CO2 would diminish considerably.
Basically all this article does is present a lame excuse as to why the models don’t replicate the temperatures in the last decade.
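A back-of-the-envelope sketch of the two readings contrasted above, with purely hypothetical numbers, makes the difference concrete:

```python
# Hypothetical numbers: the ensemble-mean hindcast ran 0.3 C warm against
# 2000-2007 observations, and the ensemble spread was 0.2 C.
hindcast_miss = 0.3      # model minus observed, deg C
ensemble_spread = 0.2
projection_mean = 3.0    # hypothetical end-of-century warming, deg C

# Reading 1 (the paper's, as described above): keep the mean and widen the
# error bars symmetrically so they cover the size of the miss.
widened = ensemble_spread + hindcast_miss
print("symmetric widening:", round(projection_mean - widened, 2),
      "to", round(projection_mean + widened, 2))

# Reading 2 (the alternative): treat the miss as a bias and shift the
# expected outcome down, keeping the original spread.
shifted = projection_mean - hindcast_miss
print("bias-corrected:", round(shifted - ensemble_spread, 2),
      "to", round(shifted + ensemble_spread, 2))
```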

Kurt
October 26, 2009 4:41 pm

“Steven Mosher (15:58:58) :
Looking at the charts of “reanalysis” versus the models, it struck me that the reanalysis wasn’t observation data at all.”
At the beginning of the paper, the authors mentioned that the IPCC worst-case emissions scenario actually undershot the actual emissions since the time the model runs were conducted. Though the article was vague, my assumption is that the reanalysis was the model’s output using a worst-of-the-worst case scenario that assumed that the enhanced emissions growth rate would continue.
My first reaction to this is the stupidity of continuing to use a model that dramatically overshot recent temperatures, even though the model assumed greenhouse gas emissions that were lower than reality. In other words, because greenhouse gas emission rates were even higher than projected, the inability of the model to reproduce recent temperatures is even more of a failure than it appears at first blush. To then simply run that same model again under the enhanced emissions growth rate, and expand the error bars in both directions so as to include recent temperatures, is like betting pregame on a football team that is favored by 5 points, and doubling down at halftime when your pick is actually losing to the underdog by 14 points.

Vangel
October 26, 2009 4:56 pm


“Are the rest of you guys reading the same article that I am? After the first 15 comments, I gave up reading because I couldn’t understand the basis of the criticism. The modelers put error bars on their models based on 2000-2007 data and came up with very high error. This is the correct answer.
Of course, the “error bars suggest higher heat waves” line is nonsensical. All it suggests is higher uncertainty. It appears to be an addition by the PhysOrg author (showing ignorance of how error works). However, aside from that line, it was a decent article.
Why so virulent, guys?”

Because it is nonsense masquerading as science, written by empty suits who know far less than they think they do. And even as their analysis shows that the models are useless, they still play their game of deceit. Had the authors been clear about the much greater uncertainty and stopped there, the article would have been fine. Instead they made it look as if the uncertainty lent support to some of the more extreme heat scenarios.
