Scientists Develop New Method to Quantify Climate Modeling Uncertainty

(From PhysOrg.com, h/t to Leif Svalgaard) – Climate scientists recognize that climate modeling projections include a significant level of uncertainty. A team of researchers using computing facilities at Oak Ridge National Laboratory has identified a new method for quantifying this uncertainty.

Photo: Martin Koser of Denmark

The new approach suggests that the range of uncertainty in climate projections may be greater than previously assumed. One consequence is the possibility of greater warming and more heat waves later in the century under the Intergovernmental Panel on Climate Change’s (IPCC) high fossil fuel use scenario.

The team performed an ensemble of computer “runs” using one of the most comprehensive climate models–the Community Climate System Model version 3, developed by the National Center for Atmospheric Research (NCAR)–on each of three IPCC scenarios. The first IPCC scenario, known as A1F1, assumes high global economic growth and continued heavy reliance on fossil fuels for the remainder of the century. The second scenario, known as B1, assumes a major move away from fossil fuels toward alternative and renewable energy as the century progresses. The third scenario, known as A2, is a middling scenario, with less even economic growth and some adoption of alternative and renewable energy sources as the century unfolds.

The team computed uncertainty by comparing model outcomes with historical climate data from the period 2000-2007. Models run on historical periods typically depart from the actual weather data recorded for those time spans. The team used statistical methods to develop a range of temperature variance for each of the three scenarios, based on their departure from actual historical data.

The approach’s outcome is roughly similar to the National Weather Service’s computer predictions of a hurricane’s path, familiar to TV viewers. There is typically a dark line on the weather map showing the hurricane’s predicted path over the next few days, and there is a gray or colored area to either side of the line showing how the hurricane may diverge from the predicted path, within a certain level of probability. The ORNL team developed a similar range of variance–technically known as “error bars”–for each of the scenarios.
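As a rough sketch of how such residual-based error bars can be built (invented numbers, not the team’s actual code or data):

```python
import numpy as np

# Illustrative only: derive an uncertainty band from model-minus-observation
# residuals over a hindcast period, then attach it to a projection.
obs = np.array([14.2, 14.3, 14.1, 14.4, 14.5, 14.3, 14.6, 14.4])       # "observed" 2000-2007 (made up)
hindcast = np.array([14.5, 14.6, 14.4, 14.8, 14.9, 14.7, 15.0, 14.8])  # model over the same years (made up)

residuals = hindcast - obs
bias = residuals.mean()            # systematic offset of the model
sigma = residuals.std(ddof=1)      # spread of the departures

projection = np.linspace(15.0, 18.0, 10)       # some future model trajectory (made up)
lower = projection - bias - 2.0 * sigma        # rough 95% band, assuming Gaussian residuals
upper = projection - bias + 2.0 * sigma
print(f"bias = {bias:.2f} C, sigma = {sigma:.2f} C")
```

In this analogy the projection is the hurricane’s dark line, and the band, shifted by the bias and widened by about two standard deviations, is the gray cone.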

Using resources at ORNL’s Leadership Computing Facility, the team then performed ensemble runs on three decade-long periods at the beginning, middle, and end of the twenty-first century (2000-2009, 2045-2055, and 2090-2099) to get a sense of how the scenarios would unfold over the course of the century.

Interestingly, when the variance or “error bars” are taken into account, there is no statistically significant difference between the projected temperatures resulting from the high fossil fuel A1F1 scenario and the middling A2 scenario up through 2050. That is, the A1F1 and A2 error bars overlap. After 2050, however, the A1F1 range of temperature projections rises above that of A2, until they begin to overlap again toward the century’s end.

Typically, climate scientists have understood the range of uncertainty in projections to be the variance between high and low scenarios. But when the error bars are added in, the range between high and low possibilities actually widens, indicating greater uncertainty.
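A toy calculation, with invented figures, shows why the overall range widens once per-scenario error bars are attached:

```python
# Invented numbers: end-of-century warming under a low and a high scenario,
# plus a per-scenario error bar derived from hindcast residuals.
low_scenario, high_scenario = 1.8, 4.0   # deg C, illustrative only
error_bar = 0.7                          # +/- deg C, illustrative only

scenario_spread = high_scenario - low_scenario
full_range = (high_scenario + error_bar) - (low_scenario - error_bar)
print(f"{scenario_spread:.1f} vs {full_range:.1f}")   # 2.2 vs 3.6: the overall range widens
```

The same arithmetic extends the range on the cool side of each scenario as well.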

“We found that the uncertainties obtained when we compare model simulations with observations are significantly larger than what the ensemble bounds would appear to suggest,” said ORNL’s Auroop R. Ganguly, the study’s lead author.

In addition, the error bars in the A1F1 scenario suggest at least the possibility of even higher temperatures and more heat waves after 2050, if fossil fuel use is not curtailed.

The team also looked at regional effects and found large geographical variability under the various scenarios. The findings reinforce the IPCC’s call for greater focus on regional climate studies in an effort to understand specific impacts and develop strategies for mitigation of and adaptation to climate change.

The study was published in the Proceedings of the National Academy of Sciences. Co-authors include Marcia Branstetter, John Drake, David Erickson, Esther Parish, Nagendra Singh, and Karsten Steinhaeuser of ORNL, and Lawrence Buja of NCAR. Funding for the work was provided by ORNL’s new cross-cutting initiative called Understanding Climate Change Impacts through the Laboratory Directed Research and Development program.

More information: The paper can be accessed electronically here: http://www.pnas.org/content/106/37/15555

October 22, 2009 2:50 pm

Come on! A model of a model? …or a self-indulging model?

October 22, 2009 2:53 pm

Wait, this is like dividing by zero or like pouring the empty into the void!

KimW
October 22, 2009 2:56 pm

So even more computer models predict disaster, and the more they ran them, the more disaster. How very scientific. That was sarcasm, by the way.

dearieme
October 22, 2009 2:57 pm

“The team computed uncertainty by comparing model outcomes with historical climate data from the period 2000-2007.” Is 2000-2007 a cherry?

Ron de Haan
October 22, 2009 3:00 pm

“One consequence is the possibility of greater warming and more heat waves later in the century under the Intergovernmental Panel on Climate Change’s (IPCC) high fossil fuel use scenario”.
The article starts with crap so it probably ends with crap so I stopped reading.
It’s a pathetic attempt to clean talk the faulty computer models.
I have adopted Lord Monckton’s assessment that it is not possible to make any reliable climate prediction using computer models.
Anyone who claims to have a reliable climate model is a fraud serving the First Global Revolution.

Sean
October 22, 2009 3:02 pm

The models can’t get clouds right. Shouldn’t they try to get the physics of the weather better before they play these statistical games?

Michael D Smith
October 22, 2009 3:08 pm

I wonder if they also put a distribution around climate sensitivity, since most recent studies show CO2 has a MUCH smaller effect than has been used in models of the past.
Start doing some model runs with a reasonable H2O feedback parameter to CO2 and guess what, there will be no statistically significant difference between ANY CO2 emissions scenarios… (big DUH). Let me guess, they didn’t do that…

DaveE
October 22, 2009 3:12 pm

More uncertainty but it’s WTWT
DaveE.

peter_dtm
October 22, 2009 3:13 pm

Feed the data in up to 1995 – do any of the models predict the current state of the weather?
If they don’t, why does anyone bother with them?

jack mosevich
October 22, 2009 3:15 pm

If the range of uncertainty is greater than previously assumed then cooler temperatures could be in the offing too. They only claim that higher temperatures might happen. A bit of bias, no?

Richard Wright
October 22, 2009 3:22 pm

Let’s see, the uncertainty increases but the consequence is that there may be higher temperatures and more heat waves. Of course higher uncertainty couldn’t possibly mean that there may be lower temperatures and fewer heat waves. Using their illustration from projected hurricane paths, it would be like the band of uncertainty was higher on one side of the projected path than the other. No bias in this reporting, is there? I also find it interesting that they judge the uncertainty by the models’ ability to predict the past. They have yet to be able to predict the future.

crosspatch
October 22, 2009 3:24 pm

In addition, the error bars in the A1F1 scenario suggest at least the possibility of even higher temperatures and more heat waves after 2050, if fossil fuel use is not curtailed.

This is the sort of idiotic nonsense that pervades the people pushing the anti-fossil fuel agenda. Again it is proof that it isn’t about “global warming” but about using “global warming” to achieve a different agenda.
As jack mosevich (15:15:04) stated, the wider error bars ALSO produce the same possibility that temperatures will be even colder DESPITE the continued use of fossil fuel. What really chaps my cheeks is this implied notion that fossil fuel use is responsible for any warming or cooling of the climate and that if we somehow eliminated the use of it, we could be assured that the climate wouldn’t warm. There has been absolutely no observed relationship between climate behavior and fossil fuel use. That they continue spouting this as if it is some kind of truth makes rubbish of the entire article.
What a bunch of idiots!

michel
October 22, 2009 3:24 pm

“Anyone who claims to have a reliable climate model is a fraud serving the First Global Revolution.”
No, he/she is mistaken. Possibly because of lack of experience with the use of models in forecasting. Possibly because of a political agenda. Possibly because of confirmation bias – one fools oneself. We all do it sometime. Wrong, almost certainly. Fraud, most likely not, and anyway you can’t prove it, so it adds nothing to the observation that they are wrong.

DR
October 22, 2009 3:25 pm

Until they can correctly simulate clouds, solar, ENSO and other aspects of climate processes, reports like that are useless.

Murray Duffin
October 22, 2009 3:29 pm

There isn’t enough fossil fuel to supply the CO2 for A1F1, and maybe not enough for B1, so what does that do for their uncertainty?

Craigo
October 22, 2009 3:32 pm

The uncertainty is “worse than we predicted”.
Could the error bars also suggest “lower than we predicted” temperatures and fewer heat waves? (Aren’t heat waves weather, not climate?)
This is all too uncertain for me. Sounds like the science is a little unsettled today.

Bob Koss
October 22, 2009 3:34 pm

Actual global temperature is 14C-15C. How can they be modeling the earth when the temperatures shown in their fig. 1 graphs stay below 10C throughout the period? It seems to me the emulation of snow and ice coverage, along with the albedo figures, has to be totally erroneous.

Glenn
October 22, 2009 3:35 pm

Not much surprise that these gavin clone computer geek warmingologists would “suggest” “greater urgency and complexity of adaptation or mitigation decisions.”
The paper “edited” by Stephen H. Schneider after April 2009…
who in March 2009 authored and signed this letter to Congress
http://stephenschneider.stanford.edu/Publications/PDF_Papers/Congressional_Ldrs_Ltr.pdf

October 22, 2009 3:40 pm

“The team computed uncertainty by comparing model outcomes with historical climate data from the period 2000-2007.”
That’s very strange. Hasn’t it been revealed over and over again that 30 years are required to establish “climate”?

SemiChemE
October 22, 2009 3:40 pm

So, if I’m reading this correctly, the study found that the range of uncertainty in the model was larger than they thought, which is all well and good. What is disturbing is how this article cherry picks the interpretation of this result, stating:
“In addition, the error bars in the A1F1 scenario suggest at least the possibility of even higher temperatures and more heat waves after 2050, if fossil fuel use is not curtailed.”
The problem is that they forget the equally likely possibility of even lower temperatures and cold waves after 2050, if fossil fuel use is not curtailed.

Tenuc
October 22, 2009 3:42 pm

Just illustrates the futility of trying to model a dynamic chaotic system with insufficient knowledge of how each part of our climate operates and interacts, then using data of insufficient granularity and accuracy, on a computer whose arithmetic unit has far too few decimal places to capture the subtlety of the initial conditions.
Another good example of GIGO.

P Walker
October 22, 2009 3:44 pm

A “modest proposal” – Why not scrap current models, which obviously don’t work, and start over again using all the known variables instead of relying on CO2? BTW, this is a mostly rhetorical question, although I don’t understand why no one has tried this.

DennisA
October 22, 2009 3:55 pm

For a very revealing background to IPCC modeling, check out the 1999-2001 ECLAT series of seminars on Representing Uncertainty in Climate Models:
http://www.cru.uea.ac.uk/eclat/ All downloadable pdf’s.
Just a few choice selections…..
“Projecting the future state(s) of the world with respect to demographic, economic, social, and technological developments at a time scale consistent with climate change projections is a daunting task, some even consider as straightforward impossible.
Over a century time scale, current states and trends simply cannot be extrapolated. The only certainty is that the future will not be just more of the same of today, but will entail numerous surprises, novelties and discontinuities.”
“The probability of occurrence of long-term trends is inversely proportional to the ‘expert’ consensus.”
“….excessive self-cite and “benchmarking” of modeling studies to existing scenarios creates the danger of artificially constructing “expert consensus””
That was two years after Kyoto, when current UK chief climate scientist, Bob Watson, then IPCC Chairman, had said the science was settled.
Since the ECLAT seminars, they became so worried at how uncertain everything was, that they decided the only thing to do was ignore it and boast of ever more robust climate models, because they didn’t want policy makers to think there was any doubt about global warming.
Funny though, eight years later, in 2007, Professor Lenny Smith, a statistician at the London School of Economics, warned about the “naïve realism” of current climate modelling. That’s eight years of ever more expansive and expensive computer systems and ever more complex models, giving rise to almost weekly calls of “new research suggests it’s worse than we thought.”
“Our models are being over-interpreted and misinterpreted,” he said. Over-interpretation of models is already leading to poor financial decision-making, Smith says. “We need to drop the pretence that they are nearly perfect.”
“He singled out for criticism the British government’s UK Climate Impacts Programme and Met Office. He accused both of making detailed climate projections for regions of the UK when global climate models disagree strongly about how climate change will affect the British Isles.” (From New Scientist magazine, 16 August 2007.)
I think the theme song is “I can do MAGICC”…..

tallbloke
October 22, 2009 3:55 pm

HAHAHAHAHAHAHAHAHAHAHAHA! THEY’VE LOST THE PLOT.
Give it up guys, you’re just making yourselves look sillier all the time.

realitycheck
October 22, 2009 3:55 pm

Haven’t posted in a while, but this one definitely got the alarm bells ringing.
These climate modelers really do see the world through some form of catastrophe-tinted spectacles don’t they?
The uncertainty is worse than we first thought – therefore it could get EVEN WARMER!!
???
It could also mean that all the warm projections are a pile of horse excrement too, couldn’t it? Ever thought of that one, Batman? Huh? Huh?
I’m sorry, they must be using that new kind of mathematics in which 5-5 = 10
Now I see.

Steven Mosher
October 22, 2009 3:58 pm

Well, it took about 5 minutes of reading the SI to find an interesting thing.
A quick scan through the charts and the comparison of models to “observations” led to this. When they want to look at the bias of the models (to test for things like normality with Q-Q plots) they look at the difference between the models and NCEP “reanalysis data.” Looking at the charts of “reanalysis” versus the models, it struck me that the reanalysis wasn’t observation data at all. So, if you read through the SI you will find this toward the end: reanalysis data is… you guessed it… output from a model. Now, that doesn’t make it wrong on its face, but I’ll see if lucia wants to take it up.
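A minimal sketch of the normality check being described, a Q-Q comparison of model-minus-reanalysis differences against a normal distribution; the residuals here are synthetic stand-ins, not the paper’s data:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in residuals (e.g. 96 monthly model-minus-reanalysis differences).
rng = np.random.default_rng(2)
residuals = rng.normal(0.3, 0.5, size=96)

# probplot orders the residuals against theoretical normal quantiles;
# an r close to 1 means the Gaussian assumption is not obviously violated.
(osm, osr), (slope, intercept, r) = stats.probplot(residuals, dist="norm")
print(f"Q-Q fit: slope={slope:.2f}, intercept={intercept:.2f}, r={r:.3f}")
```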

son of mulder
October 22, 2009 4:00 pm

If the models are judged more uncertain then they are less likely to be right, so it says even less about what will happen. Additionally they say “more uncertain than assumed”. Is that not like saying the answer was assumed to be hot, horrible and dangerous, and now we aren’t so sure of our assumption? I’ve assumed the models are useless and I seem to be getting more confidence that they are useless.

David Ermer
October 22, 2009 4:02 pm

I know I’m late to the party……but you can’t calculate real uncertainty by comparing model runs. All they’ve really done is calculate the instability of their computational method.

John M
October 22, 2009 4:04 pm

The new approach suggests that the range of uncertainty in climate projections may be greater than previously assumed…

And in related news, scientists can now confidently say that the Kansas City Royals will not play in this year’s World Series.
But seriously, now with more uncertainty, it’s even more likely that temperature measurements will not be inconsistent with the models.

AlexB
October 22, 2009 4:09 pm

Oh WOW this is amazing. I mean just when you think some branches of mathematical modelling couldn’t be pushed further away from the scientific method they come and surprise you with something like this. So when a mathematical model doesn’t predict observed data, then that indicates uncertainty in the model. It is then (apparently) reasonable to conclude from this uncertainty that you should be concerned with your proposed mechanism being more dominant than you previously imagined. So the more your model doesn’t fit the data, the more you should be concerned about your proposed mechanism being dominant. Incredible! And all this time here’s me, Johnny Chumpo over here thinking my proposed mechanism was just WRONG!

Sandy
October 22, 2009 4:33 pm

And the error bars on the ‘data’??

Gordon Ford
October 22, 2009 4:47 pm

Headline – IPCC Climate Model Uncertainty Greater Than Expected –Ice Age Possible By End Of Century
Climate scientists recognize that climate modeling projections include a significant level of uncertainty. A team of researchers using computing facilities at Oak Ridge National Laboratory has identified a new method for quantifying this uncertainty. One consequence is the possibility of greater cooling and more cold waves later in the century under the Intergovernmental Panel on Climate Change’s (IPCC) high fossil fuel use scenario.
Sound familiar???

Telboy
October 22, 2009 4:55 pm

I don’t think it’s fraud. If it was they’d at least try to make their story believable.

John F. Pittman
October 22, 2009 4:57 pm

Steven Mosher (15:58:58) :
Well, it took about 5 minutes of reading the SI to find an interesting thing.
Great catch Mosher.
Is that the same re-analysis data that previously touched off a thread?

TA
October 22, 2009 4:59 pm

So, what’s the uncertainty of the degree of uncertainty? Can they predict the uncertainty of the uncertainty with a high degree of uncertainty? Uncertainly they can!

October 22, 2009 5:00 pm

Though this be madness, yet there is method in’t.
–William Shakespeare, Hamlet

October 22, 2009 5:50 pm

I love it! So predictable – the uncertainty is only to the warm side. What about uncertainty to the low side? No chance of that? I’ll take that bet. Looks like yet another Copenhagen run-up paper. What will these people do after Copenhagen (especially when they come home with a hollow agreement at best)?
OT- 5.5 in of snow here at the house in the foothills of SW Denver. GFS showing more snow next Tuesday & on Halloween (it always snows on Halloween here)

Philip_B
October 22, 2009 5:53 pm

A “modest proposal” – Why not scrap current models, which obviously don’t work, and start over again using all the known variables instead of relying on CO2? BTW, this is a mostly rhetorical question, although I don’t understand why no one has tried this.
For a period I worked as a consultant assessing the prospects of ultimate success of computer software projects of similar scale to the climate models.
The individuals who had worked on the projects had enormous psychological investment in their systems and would go to great lengths to ensure the systems’ continued development.
All evidence presented to them that their system was unrecoverably flawed and had little or no chance of ultimate success was rationalized away or avoided. Often combined with the kind of ad hominem attacks we are so familiar with in climate circles.
I have little doubt the climate models are irretrievably flawed and should be scrapped and the development process started again from scratch under a public and transparent process.
I am also certain that this will never happen if the modellers are left to make the decision themselves.
What will likely happen is one or more open source climate models already under development will produce better predictions and make the HadCRU, etc models irrelevant. However, this will take a few years.

Mac
October 22, 2009 5:54 pm

“Anyone who claims to have a reliable climate model is a fraud serving the First Global Revolution.”
I have one. I call it Earth. It is 100% accurate at modeling the current state of the climate. Sorry, no forward-looking capability.

Pamela Gray
October 22, 2009 5:55 pm

A model of a model and a paper that investigates the data from this model of a model as if it were a thermometer. Reminds me of a book. Anybody read Fahrenheit 451? Remember the virtual family scenes projected on the walls? These models of family life became reality. Now I don’t know which is stranger, reality or fantasy.

F Ross
October 22, 2009 5:57 pm

Just more GIGO

Paul Vaughan
October 22, 2009 6:18 pm

Uncertainty-quantification depends on assumptions.

Paddy
October 22, 2009 6:24 pm

As I recall, the risk assessment models used by AIG for credit default swaps failed because the modelers obviously could not identify or provide for unknown unknowns. This is the disaster that brought the company down when one or more of them occurred.
Isn’t this AIG all over again? How much does this folly with super computers cost us?

Smokey
October 22, 2009 6:32 pm

No GCM [computer climate model] predicted the flat to declining global temperatures for most of the past decade, just as no computer financial model predicted the severity and extent of the global economic decline over the past year. And the universe of stocks is much simpler than the individual entities affecting the climate.
The only truly successful financial computers are those that take advantage of arbitrage in the microsecond delays between buy/sell orders and execution. But there are no truly successful computer financial models that accurately predict future stock prices.
What is ignored by the faction of the climate community engaged in selling AGW is the fact that computer models are not data. Models are simply a tool. And they are not a very good tool for predicting the climate. They are not evidence, although this paper typically implies that they are.
Empirical evidence is raw, verifiable data. Arguments can be made for adjusting the raw data. And arguments can be made for using different methodologies and computer algorithms in the models. But none of these are evidence.
The problem emerges when the people arguing the case that computer models can be tweaked sufficiently to predict the future refuse to disclose their raw data and their specific methodology, thus making replication and falsification of their hypothetical claims almost impossible. They are uncooperative because they are hiding something.
The scientific method requires that all information resulting in a new hypothesis must be promptly made available to anyone who requests it. Yet the AGW contingent disrespects and ignores the scientific method.
The undeniable fact that certain climate scientists of the CO2=AGW persuasion are still deliberately withholding the data and methodologies that they claim shows that a tiny trace gas will cause runaway global warming means that they are engaging in self-serving propaganda, not science.
Anyone withholding any such information should be barred from receiving further public funding of any kind, because they are deliberately running a scam to defraud the public, nothing less. For example, Michael Mann, of the debunked UN/IPCC hockey stick, still refuses to cooperate with other scientists.
Mann stonewalls requests for his data. And when his methodology is finally teased out of his results, it turns out that not only is Mann devious, but he is incompetent as well, as is shown in his paper published with upside-down Tiljander sediments — which was presumably refereed, and then simply hand-waved through the climate peer review process by his cronies, who perceive scientific skepticism as their enemy.
The scientific method obligates all scientists to do their best to falsify every hypothesis, even those scientists proposing their own hypotheses. This is done mainly through replication. But those climate scientists who have both front feet in the public trough fight the scientific method and scientific skepticism every step of the way.
As we are well aware here, the climate alarmist clique has learned how to game the system for their own personal benefit. They feather their own nests by lining their pockets with undeserved taxpayer money when they refuse to abide by the scientific method; instead they adopt a siege mentality against skepticism in place of the required scientific cooperation.

Back2Bat
October 22, 2009 6:51 pm

Smokey,
Great comment! Here is one reason the scientific method has been thrown out the door, IMO: the AGW side reasons, “What is the harm if we are wrong? The rich countries will ‘waste’ less and the poor countries will be compensated.”
They are clueless with regard to economics.
“Convenient Lies” is the order of the day. Now they are terrified of fessing up, is my guess.

Alvin
October 22, 2009 6:52 pm

Infinite improbability drive? I’m just saying…

Smokey
October 22, 2009 6:56 pm

B2B,
Regarding the Copenhagen proposal to compensate ‘poor’ countries like China: click

Back2Bat
October 22, 2009 7:06 pm

Smokey,
You are a wonderful throwback. Too bad you are not President. We need another Andrew Jackson badly. (Sorry about the Cherokees, though)

Back2Bat
October 22, 2009 7:08 pm

“I have one. I call it Earth. It is 100% accurate at modeling the current state of the climate. Sorry, no forward-looking capability.” Mac
I’m reading the Owner’s manual; it says nothing about not burning carbon.

Evan Jones
Editor
October 22, 2009 7:15 pm

We need another Andrew Jackson badly. (Sorry about the Cherokees, though)
Two words: pet banks.

Tony Hansen
October 22, 2009 7:36 pm

They are the very model,
Of modern climate modelers.

Richard M
October 22, 2009 7:43 pm

When they don’t even know what they don’t know, there isn’t any way to set reliable error bars. This is simply bad science applied to bad science.

Back2Bat
October 22, 2009 7:45 pm

“Two words: pet banks.” Evan
Touché!
Banking is such a corrupting business which is why there should be no government privilege. If only this were a banking blog. I have some ideas …
But better 23 pet banks than 1.
Thanks Evan.

red432
October 22, 2009 7:55 pm

Balderdash. The uncertainty can’t be bounded because the only thing they really know is that there is a lot they don’t know, if they’re not completely self-deluded.

savethesharks
October 22, 2009 7:55 pm

Philip_B (17:53:51) :
For a period I worked as a consultant assessing the prospects of ultimate success of computer software projects of similar scale to the climate models.
The individuals who had worked on the projects had enormous psychological investment in their systems and would go to great lengths to ensure the systems’ continued development.
All evidence presented to them that their system was unrecoverably flawed and had little or no chance of ultimate success was rationalized away or avoided. Often combined with the kind of ad hominem attacks we are so familiar with in climate circles.
I have little doubt the climate models are irretrievably flawed and should be scrapped and the development process started again from scratch under a public and transparent process.
I am also certain that this will never happen if the modellers are left to make the decision themselves.
This all bears repeating…again and again. May the truth win out. The truth is out there.
Chris
Norfolk, VA, USA

Evan Jones
Editor
October 22, 2009 8:01 pm

But better 23 pet banks than 1.
Come to think of it, I’m also part Cherokee . . .

savethesharks
October 22, 2009 8:08 pm

Smokey, I must admit I have saved your words for the future (but I will give credit). Worth saving… and worth repeating… again and again… again.
May the truth win out…
Smokey (18:32:41) :
No GCM [computer climate model] predicted the flat to declining global temperatures for most of the past decade, just as no computer financial model predicted the severity and extent of the global economic decline over the past year. And the universe of stocks is much simpler than the individual entities affecting the climate.
The only truly successful financial computers are those that take advantage of arbitrage in the microsecond delays between buy/sell orders and execution. But there are no truly successful computer financial models that accurately predict future stock prices.
What is ignored by the faction of the climate community engaged in selling AGW is the fact that COMPUTER MODELS ARE NOT DATA [emphasis mine] . Models are simply a tool. And they are not a very good tool for predicting the climate. They are not evidence, although this paper typically implies that they are.
Empirical evidence is RAW [emphasis mine LOL], verifiable data. Arguments can be made for adjusting the raw data. And arguments can be made for using different methodologies and computer algorithms in the models. But none of these are evidence.
The problem emerges when the people arguing the case that computer models can be tweaked sufficiently to predict the future refuse to disclose their raw data and their specific methodology, thus making replication and falsification of their hypothetical claims almost impossible. They are uncooperative because they are hiding something.
The scientific method requires that all information resulting in a new hypothesis must be promptly made available to anyone who requests it. Yet the AGW contingent disrespects and ignores the scientific method.
The undeniable fact that certain climate scientists of the CO2=AGW persuasion are still deliberately withholding the data and methodologies that they claim shows that a tiny trace gas will cause runaway global warming means that they are engaging in self-serving propaganda, not science.
Anyone withholding any such information should be barred from receiving further public funding of any kind, because they are deliberately running a scam to defraud the public, nothing less. For example, Michael Mann, of the debunked UN/IPCC hockey stick, still refuses to cooperate with other scientists.
Mann stonewalls requests for his data. And when his methodology is finally teased out of his results, it turns out that not only is Mann devious, but he is INCOMPETENT AS WELL [emphasis mine], as is shown in his paper published with upside-down Tiljander sediments — which was presumably refereed, and then simply hand-waved through the climate peer review process by his cronies, who perceive scientific skepticism as their enemy.
The scientific method obligates all scientists to do their best to falsify every hypothesis, even those scientists proposing their own hypotheses. This is done mainly through replication. But those climate scientists who have both front feet in the public trough fight the scientific method and scientific skepticism every step of the way.
As we are well aware here, the climate alarmist clique has learned how to game the system for their own personal benefit. They feather their own nests by lining their pockets with undeserved taxpayer money when they refuse to abide by the scientific method; instead they adopt a siege mentality against skepticism in place of the required scientific cooperation.
Bravo. The BEST apologia yet.
DAMN worth repeating.
Chris
Norfolk, VA, USA

Evan Jones
Editor
October 22, 2009 8:10 pm

Maybe we should develop our own simplified, easily revisable, tentative, “top down” climate model that could be continually adjusted whenever the data jumped the error bars.
As a matter of fact, I think I may do just that . . .
I designed a Civil War game (Blue Vs Gray, GMT Games), which was storyboarded down to the card pick, step loss, and die roll. (If you play wargames you know how rare that is.) So I have a little hands-on experience in relatively simple modeling of immeasurably complex issues.

Back2Bat
October 22, 2009 8:17 pm

“So I have a little hands-on experience in relatively simple modeling of immeasurably complex issues.” Evan
Cool. I love modeling myself. My current model is an Excel model of a new non-fractional reserve banking model. It is honest and profitable. One day, maybe it will be made into “Sim-Banker”

Mike Bryant
October 22, 2009 8:25 pm

Evan… that is a great idea… Mike

Dave Dodd
October 22, 2009 8:55 pm

Why write a book when GIGO says it all?

October 22, 2009 8:57 pm

Ron de Haan (15:00:08) :
I’m with you all the way on this one, Ron.

Craig
October 22, 2009 9:26 pm

Silly question: if they are saying that the uncertainty is greater than first calculated, does that mean temperatures may be lower than models predict? It seems the article is trying to say the temperatures will be so much higher and forgets to say they may be so much lower. Why can’t they admit the uncertainty is too great for them to predict anything at all? My crystal ball works real well though. Send me a buck and I’ll tell you the high temperature of July 4, 2109. If I’m wrong, I’ll refund your money. 🙂

rbateman
October 22, 2009 9:29 pm

I got it. You take today’s stock market close. Tomorrow should be within 10% at the most, under most circumstances. You can run that for a year down the road, and Oct 22, 2010 will be between 9,000 and 11,000, provided nothing really bad happens or someone hasn’t re-invented the Bubble.
Go out 10 years, and you could be looking at 5,000 or 15,000.
In 20 years, you could be near zero or 20,000.
All of this assumes no outliers. The things that turn the game upside down.
In 40 years, you could have crashed and rebuilt, and the error bars start converging, adjusting for inflation.
Climate should be, and most often is, self-correcting. In 20 years, we may be back to pre-1900, and in 40 years we could be back to 1800’s. Or it could end up being another Medieval Warm period, with balmy temps.
Or it could cool & cool & cool, like what happened after the MWP.
What the danger ahead is all about is that some folks have it in their heads to monkey with that natural process, and thereby rob & imperil our ability to adapt to natural change, citing man’s ability to ruin the climate. Well, if you set out to radically alter it at a global and abrupt scale, that is playing with fire in my book.
The chances of predicting correctly the sequence of climate cycles ahead is nigh impossible.
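A quick sketch of that intuition, using an invented 1%-per-day random walk rather than real market data:

```python
import numpy as np

# The 95% range of outcomes widens roughly with the square root of the horizon,
# which is the widening-error-bar behaviour described above. Values are invented.
rng = np.random.default_rng(0)
start, daily_sigma, n_paths = 10_000, 0.01, 2000

for label, days in [("1 year", 252), ("10 years", 2520), ("20 years", 5040)]:
    log_returns = rng.normal(0.0, daily_sigma, size=(n_paths, days)).sum(axis=1)
    endpoints = start * np.exp(log_returns)
    lo, hi = np.percentile(endpoints, [2.5, 97.5])
    print(f"{label}: 95% range {lo:,.0f} to {hi:,.0f}")
```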

October 22, 2009 9:33 pm

dearieme (14:57:29) :
“The team computed uncertainty by comparing model outcomes with historical climate data from the period 2000-2007.” Is 2000-2007 a cherry?

That being cherry-picking is a certainty, 100%.

Ray
October 22, 2009 9:39 pm

[no profanity or implied profanity ~ ctm]

savethesharks
October 22, 2009 9:46 pm

[no profanity or implied profanity ~ ctm]

savethesharks
October 22, 2009 9:47 pm

Correction: WELL SAID (lol)
Reply: Wow, you anticipated my reaction in advance. Kudos ~ ctm

rbateman
October 22, 2009 9:51 pm

evanmjones (20:10:02) :
You’ll be a smash hit with that. The sophistication of those board wargames has grown exponentially since the first AH boardgame release. Tactics II was the name, if I remember right.

October 22, 2009 10:33 pm

I guess I’m first with:
It is worse than we thought.
Heh.

Andrew
October 22, 2009 11:29 pm

If you look carefully at the references to this paper, most are at least two years old, and the reference to the data used in the ‘reanalysis’ is in fact eight years old. Is the data more recent than that?
This sort of paper is like a group of Biblical scholars reanalysing the Gospels: even though the source information has either been lost or mangled in translation, they find that although the texts are somewhat inconsistent in their telling of the life of Christ they are close enough. And from this the scholars find that the apocalyptic visions of the Book of Revelation are a valid view of the future.
Replace the appropriate words in the above and it is almost an Abstract for a peer reviewed paper on Climate Science.

Paul Vaughan
October 22, 2009 11:31 pm

Re: Paddy (18:24:29)
We all have a responsibility to oppose what you describe like our lives depend on it (…and some will legitimately debate whether I should be using the word ‘like’). The sustainable defense of civilization cannot be sacrificed to bad assumptions (something completely within our control).

October 23, 2009 12:56 am

“If the range of uncertainty is greater than previously assumed then cooler temperatures could be in the offing too.”
No, the models are programmed to create warming; the only variability is in the amount of warming. Any model parameters that create cooling are rejected. CO2 = warming is a central assumption coded in, so any increase in CO2 will always produce warming over the long term in the model’s output, as this is what they are programmed to produce.

tty
October 23, 2009 12:58 am

Bob Koss (15:34:02)
You mustn’t be too hard on those models. It is very difficult to get them to both give a temperature increase (AGW) and a global temperature that looks plausible. Since the AGW is the important thing, that is what they tweak the models for. Many of them model global temperatures that are way off target. That’s the reason you always see temperature anomalies displayed, not absolute temperatures.

TJA
October 23, 2009 3:26 am

What Ron de Haan said. After that first statement, my bs detector prevented me from reading further.

anna v
October 23, 2009 6:06 am

I keep talking about the lack of propagation of errors in these GCMs and the lack of error bars in the projections, which make them meaningless. I thought this might be an effort to correct this but I am mistaken.
In this paper:
http://www.pnas.org/content/106/37/15555.full.pdf
It seems to me that they are using the differences of the model’s runs with the “reanalyzed” data to estimate an error, assuming a Gaussian distribution of this difference; they test it over the interval that is not part of the projections, and then project to the end of the century. If I am wrong, I wait to be corrected.
Sounds like a sure fire method to create a hockey stick type situation: observations and models diverge because of statistics, and not because of wrong assumptions in the models.
I have looked at the “supporting information”
http://www.pnas.org/content/suppl/2009/09/08/0904495106.DCSupplemental/0904495106SI.pdf
Look at figure S4. The error bar is 1C. Note it is temperature, so it looks small. If this were a propagated error it would make the anomaly projections nonsense.
Suppose that the true error bars, from error propagation of the input parameters, of the model projection are of the same order of magnitude, the whole thing becomes even more bizarre.
I confess that in my days of analysis, which ended ten years ago, bias did not exist in evaluating models against data. Systematic errors, yes, but they were just added linearly to errors and not in one direction to the curve, but in the +/-, possibly different for each sign, so I cannot understand adding this bias and getting an even warmer curve in s4.
We need a statistician to get hold of this, but to me it sounds like another obfuscation instead of error propagation: varying the input parameters within their errors and estimating the chi-square per degree of freedom and the error thereof.
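For illustration, a toy version of the error propagation being asked for, with an invented one-parameter model standing in for a GCM:

```python
import numpy as np

# Perturb one uncertain input parameter within its stated error and watch the
# output spread grow with time, instead of attaching a fixed residual-based
# Gaussian. The "sensitivity" and forcing ramp below are invented.
rng = np.random.default_rng(1)

def toy_projection(sensitivity, years=90):
    forcing = np.linspace(0.0, 4.0, years)   # made-up forcing ramp
    return sensitivity * forcing             # warming relative to year 0

samples = rng.normal(0.8, 0.3, size=5000)    # sensitivity with an assumed +/- error
runs = np.array([toy_projection(s) for s in samples])
spread = runs.std(axis=0)
print(f"1-sigma spread after 10 yr: {spread[9]:.2f} C, after 90 yr: {spread[-1]:.2f} C")
```

The propagated parameter error grows along the projection, unlike a constant bar lifted from hindcast residuals.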

son of mulder
October 23, 2009 6:16 am

evanmjones (20:10:02) :
“Maybe we should develop our own simplified, easily revisable, tentative, “top down” climate model that could be continually adjusted whenever the data jumped the error bars.
As a matter of fact, I think I may do just that . . .”
Agreed, start at the top with Lindzen’s recent result, work down through the tropical troposphere where there is no hot spot and predict what should happen at the surface.

DaveF
October 23, 2009 7:00 am

Why am I reminded of Star Trek’s Heisenberg Compensator?

timbrom
October 23, 2009 7:33 am

So…. the models got it wildly wrong in the period up to 2007. Factor in the amount they were wrong and do another bunch of runs, but with larger error bars. Oh, and don’t use actual climatic data to work out how wrong they were in the first place. This isn’t science. It isn’t even pseudo-science. It’s fraud.

Scott
October 23, 2009 7:42 am

“The new approach suggests that the range of uncertainty in climate projections may be greater than previously assumed. One consequence is the possibility of greater warming and more heat waves later in the century…”
One consequence? Well, yes, that is one, but it is certainly not the most obvious one. The obvious one is that if the uncertainty is that high then the climate projections are worthless and any previous hypotheses drawn from those flawed projections are null and void.
Basically, the only conclusion that should’ve been reached is that they have to start over from scratch.

bsagat
October 23, 2009 8:54 am

So in practical terms it could be 4 degrees C warmer or 4 degrees C colder in the next 50 years, with the long-term average being zero.

Ray
October 23, 2009 9:37 am

Ray (21:39:23) :
[no profanity or implied profanity ~ ctm]
Wow, that’s a new one!

P Walker
October 23, 2009 10:27 am

Philip B (17:53:51) – Exactly. Like I said, it was a rhetorical question. A bit naive perhaps, but rhetorical.

Vincent
October 23, 2009 10:32 am

As most of you have noticed, this paper is a textbook case of “what’s the worst that could happen” thinking. Of course, we know that just as likely as temperatures going higher is the possibility that they could go lower. The second possibility is irrelevant to these people, because if temperatures end up lower than forecast, then all that means is that humanity has “dodged a bullet”.
You can see this mode of thought pervading all pro-AGW research. What’s the worst that can happen, what’s the worst possible outcome? The mantra never ends.

Roger Knights
October 23, 2009 12:44 pm

“One consequence? Well, yes, that is one, but it is certainly not the most obvious one. The obvious one is that if the uncertainty is that high then the climate projections are worthless and any previous hypotheses drawn from those flawed projections are null and void.”
Yeah, but (shhh) they couldn’t SAY that. (It may, however, be their real message.)

Paul Vaughan
October 23, 2009 1:23 pm

anna v (06:06:31) “We need a statistician to get hold of this”
Be careful with that suggestion. Statisticians are more guilty than anyone of promoting the unquestioning application of absolutely-crazy assumptions. It is the wizardry of their algebraic weaving that holds all this nonsense (in economics, climate science, etc.) together.
What is (very seriously) needed is non-mathematician & non-statistician auditing of assumptions. The issue is not one of advanced credentials & complicated academics. The issue is the absence of common sense at the base of “reasoning”. This is epistemological. No matter how sophisticated the algorithms get, they produce garbage (that threatens the sustainable defense of civilization) if they are underpinned by indefensible assumptions.
Half-serious musing:
Nonetheless, sheeple seem content with the wizardry pulling wool over their eyes. I actually get the impression people like to support corruption and be dominated by evil forces because this is somehow (twistedly) perceived as “more cool”. Note to sheeple: I invite you to prove the musing wrong.

AnonyMoose
October 23, 2009 4:37 pm

No matter the errors in this analysis, the pattern is that however wrong the projections are, these people keep the projections and just predict a wider range of catastrophes. Except, as we’ve seen in other situations, all the adjustments somehow are on the warm side, even if the adjustments are in the descriptions rather than in the error bars.

davidc
October 23, 2009 6:53 pm

From the caption to Figure 1: “The shaded areas indicate uncertainties caused by five initial-condition ensembles.”
Anyone know what this means? I note that over many time periods the shaded areas get smaller with time so it can’t be uncertainties due to initial conditions.
From SI p3/10:
“Because bias and variance are stationary in hindcasts (shown in this SI Text under Statistical Methods), we assume the same will be true for projections as well.”
I think I know what this means. The standard deviation was worked out for the hindcast period and that same standard deviation (±3 SD) was added at every time point to the model predictions (an average of?).
Since the models are somehow adjusted to the hindcast data this might be reasonable if the model extrapolations were the actual future temperatures. In reality it looks like a massive underestimate of the true uncertainty.
Anna v: Yes, Figure 1 (top) shows no sign of error propagation at all. In effect they use the hindcast data (10 years) to extrapolate forward in time for 90 years and there is no increase in “uncertainty” at the end. Ridiculous.

davidc
October 23, 2009 7:21 pm

As noted above, from SI p 3/10:
“NCEP Reanalysis data are taken as a proxy for observations, even though we are cognizant that these data are not actual ground measurements, but the product of a model applied to observed data from a variety of sources.”

anna v
October 23, 2009 10:02 pm

davidc (18:53:05) :
From the caption to Figure 1: “The shaded areas indicate uncertainties caused by five initial-condition ensembles.”
Anyone know what this means? I note that over many time periods the shaded areas get smaller with time so it can’t be uncertainties due to initial conditions.

Yes, I do. In my search for what is happening with error propagation due to the errors in the input parameters, I found out that:
“initial conditions” means a value of the input parameters;
these are then varied within the errors according to the taste of the modeler (check for “likelihood” in chapter 8 of AR4), and each resulting output is called an “experiment”, becomes a spaghetti line in the graph, and is used in an average;
the “experiments” are treated as different measurements and used because, they say, the system is chaotic and this is a way of simulating chaos.
Obviously when one does this there is no reason why the spread of differences could not diminish in time as well as expand.
My strong suspicion is that if they did calculate true errors, the bars would go out of the page making nonsense of the climate projections. As it is, the spaghetti of these “experiments” in other variables are extremely out of phase with data and with each other, and they are still used to create the false image of error studies.
Take this bias that moves the curve a degree. In my analysis it would move my error by a degree, since it is systematic and has to be added linearly to the errors .
To take manipulated data, subtract them from this famous model average, and call the difference a statistical effect is so absurd that it is a disgrace for the misuse of the scientific method.
Why, the difference might be how the modeler had slept the day he changed the parameters.

davidc
October 23, 2009 11:05 pm

anna v,
Thanks.
“Why, the difference might be how the modeler had slept the day he changed the parameters”
Figure S1 (SI p4/10) might give a clue on one constraint that might apply. That is, parameters are chosen to give results that have the appearance of a normal distribution. They then take that as a justification for doing statistical tests. Of course, they can get any “standard deviation” they want and therefore pass or fail any statistical test they want.
“My strong suspicion is that if they did calculate true errors, the bars would go out of the page making nonsense of the climate projections”
I’m sure that’s right. The obvious test of the models is some kind of Monte Carlo simulation. Or at least, running all parameter combinations at their max and min plausible values (but “plausible” implies that the parameters have a physical meaning, which might be a problem). I haven’t seen anything like that, have you?
Another question you might be able to answer. How can they get any runs at all to work? They must be constraining the dynamic equations somehow as the calculation progresses, so the results might still be chaotic but within those constraints. Any idea how they do that? I know people have tried reverse engineering from the programs but haven’t seen any progress of significance.
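A bare-bones sketch of that max/min-combination test, applied to a deliberately toy two-parameter response; the parameter names and ranges are invented, and a real GCM could not be interrogated this cheaply:

```python
import itertools

# Toy response: equilibrium warming for a fixed forcing, minus an aerosol offset.
def toy_warming(sensitivity, aerosol_offset, forcing=3.7):
    return sensitivity * forcing - aerosol_offset

param_ranges = {
    "sensitivity": (0.5, 1.2),      # deg C per (W/m^2), assumed bounds
    "aerosol_offset": (0.2, 1.0),   # deg C, assumed bounds
}
outcomes = [toy_warming(s, a) for s, a in itertools.product(*param_ranges.values())]
print(f"outcome range: {min(outcomes):.2f} to {max(outcomes):.2f} C")
```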

Paul Vaughan
October 24, 2009 12:21 am

anna v (22:02:57) “[…] so absurd that it is a disgrace for the misuse of the scientific method.”
Well-said. It would be laughable, but this type of “reasoning” constitutes a threat to the sustainable defense of civilization. Step 1 is to make the practitioners aware that their actions can generate a destabilizing multi-wave backlash. Step 2 is to afford them opportunity to save face …which is arguably necessary for the greater-good, since under this scenario they will change course more rapidly —- nevermind revenge — too much is at stake to play games – (perhaps only the foolish think this is a game).

anna v
October 24, 2009 12:39 am

davidc (23:05:22) :

I’m sure that’s right. The obvious test of the models is some kind of monte-carlo simulation. Or at least, with all parameter combinations at their max and min plausible values (but “plausible” implies that the parameters have a physical meaning, which might be a problem). I haven’t seen anything like that, have you?

There exist methods of finding the maximum likelihood function when comparing theory and experiment; see http://cdsweb.cern.ch/record/310399/files/CM-P00059682.pdf (published in 1975 but still behind a paywall). It has been widely used in the particle physics community for calculating errors.
I think the problem exists for climate models because their “theory” is very complicated and expressed numerically. The models by themselves are very computer-time consuming, and introducing the variations necessary to get the chi-square per degree of freedom would probably put them beyond present computational power.
Their way of playing would be OK in this case, if it were just that, scientific curiosity. Unfortunately politicians are trying to stampede the world into decisions based on what essentially are video games.
Another question you might be able to answer. How can they get any runs at all to work? They must be constraining the dynamic equations somehow as the calculation progresses, so the results might still be chaotic but within those constraints. Any idea how they do that? I know people have tried reverse engineering from the programs but haven’t seen any progress of significance.
I think E.M. Smith has successfully reverse engineered a version of GISS. He contributes here and also has everything up on his blog http://chiefio.wordpress.com/
Roughly, from what I have generally gleaned, they make a three-dimensional grid of the world, 200×200 and 20 km high, and impose continuity boundary conditions on the fluid equations. I think the time steps are 20 minutes.
What they do not/cannot know, they use an average of (which is a different way of saying that for unknown terms entering the equations they use a linear approximation).
They hindcast, as it is fashionable to call fitting previous data by fiddling with many parameters. That is where the talent of the modeler, like a violinist, enters.
Inherently they are relying on linear approximations within the boxes, so it is not surprising that, for the highly non-linear chaotic system that climate/weather is, the projected solutions will start to diverge after a number of time steps forward from the back fit.
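A drastically reduced sketch of that kind of gridded time-stepping, shrunk to one-dimensional diffusion with invented grid spacing, time step and diffusivity:

```python
import numpy as np

# Real GCMs integrate coupled fluid equations on a 3-D grid; this toy keeps
# only the skeleton: a grid, a fixed time step, and an explicit linearised
# update in each cell.
nx, n_steps = 200, 500
dx, dt, kappa = 1.0, 0.2, 1.0      # satisfies kappa*dt/dx**2 <= 0.5 for stability
T = np.zeros(nx)
T[nx // 2] = 100.0                 # initial hot spot

for _ in range(n_steps):
    # finite-difference diffusion update; boundary cells held fixed at 0
    T[1:-1] += kappa * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(f"peak value after {n_steps} steps: {T.max():.2f}")
```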

anna v
October 24, 2009 12:44 am

davidc (23:05:22) :
They must be constraining the dynamic equations somehow as the calculation progresses, so the results might still be chaotic but within those constraints.
Their calculations are completely deterministic, nothing chaotic about them. Each spaghetti line is numerically computed deterministically. They say they simulate chaos by changing the initial conditions, i.e. the input parameters, within errors but at the taste of the modeler.
For true chaotic simulations check the literature for Tsonis et al

Vangel
October 24, 2009 8:15 am

“The team computed uncertainty by comparing model outcomes with historical climate data from the period 2000-2007.”
What exactly does this mean? Are we to accept that there is clear agreement with the historical climate data at a time when the USHCN network has a very high warming bias, when adjustments underestimate the UHI effect, when there is a divergence between the surface and satellite data, and when the global data set has been ‘lost’ and is unavailable for verification? It would appear that these people should spend more of their energy on ensuring that they have access to an objective measure of temperature and less on playing mathematical games with models that use inaccurate inputs.

davidc
October 24, 2009 10:17 am

anna v:
From Wiki:
“Chaos theory is a branch of mathematics which studies the behavior of certain dynamical systems that may be highly sensitive to initial conditions. This sensitivity is popularly referred to as the butterfly effect. As a result of this sensitivity, which manifests itself as an exponential growth of error, the behavior of chaotic systems appears to be random. That is, tiny differences in the starting state of the system can lead to enormous differences in the final state of the system even over fairly small timescales. This gives the impression that the system is behaving randomly. This happens even though these systems are deterministic, meaning that their future dynamics are fully determined by their initial conditions with no random elements involved. This behavior is known as deterministic chaos, or simply chaos.”
So there are two quite different issues here: 1) sensitivity to parameter values and 2) sensitivity to the initial conditions (of the dynamic variables). Mostly people (e.g. Wiki) mean 2) when they talk of chaos, but in “Climate Science” it is more 1), I think, which needs to be addressed before 2) can even be considered.
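A minimal demonstration of issue 2), using the logistic map rather than any climate model: two starting values differing by one part in a billion part ways within a few dozen iterations.

```python
# Deterministic chaos in the logistic map (illustrative only; not a climate model).
x, y = 0.4, 0.4 + 1e-9          # two nearly identical initial conditions
for step in range(1, 61):
    x, y = 3.9 * x * (1.0 - x), 3.9 * y * (1.0 - y)
    if step % 20 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```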

davidc
October 24, 2009 10:30 am

Vangel (08:15:23)
That’s the reason for this (SI p3/10)
“NCEP Reanalysis data are taken as a proxy for observations, even though we are cognizant that these data are not actual ground measurements, but the product of a model applied to observed data from a variety of sources.”
Even “Climate Scientists” can’t extrapolate a declining trend in “global average temperature” (whatever that means) to produce a warming catastrophe (people are not ready for an ice age catastrophe just yet), so they are using “NCEP Reanalysis” (whatever that means), which it seems went up and is therefore profitably extrapolatable.

Vangel
October 24, 2009 8:23 pm

Even “Climate Scientists” can’t extrapolate a declining trend in “global average temperature” (whatever that means) to produce a warming catastrophe (people are not ready for an ice age catastrophe just yet), so they are using “NCEP Reanalysis” (whatever that means), which it seems went up and is therefore profitably extrapolatable.
That is exactly my point. They have no clue about the actual temperature readings or the meaning of the ‘average global temperature’ figure they come up with but have no trouble using their algorithms to model the model uncertainties.

Kurt
October 26, 2009 12:33 am

“The team computed uncertainty by comparing model outcomes with historical climate data from the period 2000-2007.”
What kind of dolt would think that this procedure quantifies the uncertainty of a model? All this paper does is present a post-hoc justification of the same expected model outcome, but move the error bars so that the actual historical data fits within them. In other words, you never have to test the model prediction against the actual outcome – you just fudge the error quantification and assume that the error is randomly distributed above and below the model outcome. Neat. Do you think we can get my auto insurance company to do the same thing if I get into five accidents in a month, so that they don’t raise my rates (expected cost)?

Kurt
October 26, 2009 12:39 am

“We found that the uncertainties obtained when we compare model simulations with observations are significantly larger than what the ensemble bounds would appear to suggest,” said ORNL’s Auroop R. Ganguly, the study’s lead author.”
Isn’t this a euphemism for saying that none of the model runs can produce the temperatures we’ve seen over the last seven years or so?

Ben
October 26, 2009 2:14 pm

Are the rest of you guys reading the same article that I am? After the first 15 comments, I gave up reading because I couldn’t understand the basis of the criticism. The modelers put error bars on their models based on 2000-2007 data and came up with very high error. This is the correct answer.
Of course, the “error bars suggest higher heat waves” is nonsensical. All it suggests is higher uncertainty. It appears to be an addition by the PhysOrg author (showing ignorance of how error works). However, aside from that line, it was a decent article.
Why so virulent, guys?

Kurt
October 26, 2009 4:25 pm

“Ben (14:14:33) :
Are the rest of you guys reading the same article that I am? After the first 15 comments, I gave up reading because I couldn’t understand the basis of the criticism.”
None of the model runs could produce temperatures as low as what was seen from 2000-2007. The question is how to interpret this failure. What this paper seems to do is presume that the fundamentals of the model are correct, meaning that the expected, or mean outcome, of the model runs are correct, but that the error has a wider symmetrical distribution around that expected outcome. That’s why they conclude that policymakers should brace for the possibility of larger temperature increases, since under these assumptions there is no reason to think that temperatures in the future could not exceed the expected outcome by the same margin as they have undershot the expected outcome from 2000-2007.
But there is no logical basis for that assumption – it’s just deus ex machina. An equally plausible explanation is that the models are simply constructed incorrectly. For example, it is plausible that if the models were to assume net negative temperature feedback on greenhouse gas emissions rather than positive feedback, the preexisting method of measuring error by the outer boundaries of the model runs would result in a range of outcomes that includes the temperatures seen in the last decade. But if that were the case, then the projected expected impact of CO2 would diminish considerably.
Basically all this article does is present a lame excuse as to why the models don’t replicate the temperatures in the last decade.

Kurt
October 26, 2009 4:41 pm

“Steven Mosher (15:58:58) :
Looking at the charts of “reanalysis” versus the models, it struck me that the reanalysis wasn’t observation data at all.”
At the beginning of the paper, the authors mentioned that the IPCC worst-case emissions scenario actually undershot the actual emissions since the time the model runs were conducted. Though the article was vague, my assumption is that the reanalysis was the model’s output using a worst-of-the-worst case scenario that assumed that the enhanced emissions growth rate would continue.
My first reaction to this is the stupidity of continuing to use a model that dramatically overshot recent temperatures, even though the model assumed greenhouse gas emissions that were lower than reality. In other words, because greenhouse gas emissions rates were even higher than projected, the inability of the model to produce recent temperatures is even more of a failure than would appear at first blush. To then simply run that same model again under the enhanced emissions growth rate, and expand the error bars in both directions so as to include the recent temperature is like betting pregame for a football team that is favored by 5 points, and doubling down at halftime when your pick is actually losing to the underdog by 14 points.

Vangel
October 26, 2009 4:56 pm


Are the rest of you guys reading the same article that I am? After the first 15 comments, I gave up reading because I couldn’t understand the basis of the criticism. The modelers put error bars on their models based on 2000-2007 data and came up with very high error. This is the correct answer.
Of course, the “error bars suggest higher heat waves” is nonsensical. All it suggests is higher uncertainty. It appears to be an addition by the PhysOrg author (showing ignorance of how error works). However, aside from that line, it was a decent article.
Why so virulent, guys?

Because it is nonsense masquerading as science written by empty suits who know far less than they think they do. And even as their analysis shows that the models are useless they still play their game of deceit. Had the authors been clear about the much greater uncertainty and stopped there the article would have been fine. Instead they made it look as if the uncertainty lent support for some of the more extreme heat scenarios.