(From PhysOrg.com, h/t to Leif Svalgaard) – Climate scientists recognize that climate modeling projections include a significant level of uncertainty. A team of researchers using computing facilities at Oak Ridge National Laboratory has identified a new method for quantifying this uncertainty.

The new approach suggests that the range of uncertainty in climate projections may be greater than previously assumed. One consequence is the possibility of greater warming and more heat waves later in the century under the Intergovernmental Panel on Climate Change’s (IPCC) high fossil fuel use scenario.
The team performed an ensemble of computer “runs” using one of the most comprehensive climate models – the Community Climate System Model version 3, developed by the National Center for Atmospheric Research (NCAR) – on each of three IPCC scenarios. The first IPCC scenario, known as A1FI, assumes high global economic growth and continued heavy reliance on fossil fuels for the remainder of the century. The second scenario, known as B1, assumes a major move away from fossil fuels toward alternative and renewable energy as the century progresses. The third scenario, known as A2, is a middling scenario, with less even economic growth and some adoption of alternative and renewable energy sources as the century unfolds.
The team computed uncertainty by comparing model outcomes with historical climate data from the period 2000-2007. Models run on historical periods typically depart from the actual weather data recorded for those time spans. The team used statistical methods to develop a range of temperature variance for each of the three scenarios, based on their departure from actual historical data.
The approach’s outcome is roughly similar to the National Weather Service’s computer predictions of a hurricane’s path, familiar to TV viewers. There is typically a dark line on the weather map showing the hurricane’s predicted path over the next few days, and there is a gray or colored area to either side of the line showing how the hurricane may diverge from the predicted path, within a certain level of probability. The ORNL team developed a similar range of variance–technically known as “error bars”–for each of the scenarios.
Using resources at ORNL’s Leadership Computing Facility, the team then performed ensemble runs on three decade-long periods at the beginning, middle, and end of the twenty-first century (2000-2009, 2045-2055, and 2090-2099) to get a sense of how the scenarios would unfold over the course of the century.
Interestingly, when the variance or “error bars” are taken into account, there is no statistically significant difference between the projected temperatures resulting from the high fossil fuel A1FI scenario and the middling A2 scenario up through 2050. That is, the A1FI and A2 error bars overlap. After 2050, however, the A1FI range of temperature projections rises above that of A2, until they begin to overlap again toward the century’s end.
Typically, climate scientists have understood the range of uncertainty in projections to be the variance between high and low scenarios. But when the error bars are added in, the range between high and low possibilities actually widens, indicating greater uncertainty.
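In plain terms, the overlap test described above amounts to checking whether the two scenarios’ uncertainty intervals intersect. A minimal sketch, with invented numbers rather than the study’s actual projections:

```python
# Toy overlap test with invented numbers: two scenarios are statistically
# indistinguishable when their uncertainty intervals intersect.
def intervals_overlap(mean_a, half_width_a, mean_b, half_width_b):
    """True if the intervals mean +/- half_width intersect."""
    return abs(mean_a - mean_b) <= half_width_a + half_width_b

# Hypothetical 2050 warming projections (deg C) with error-bar half-widths.
a1fi_2050, a2_2050 = (2.1, 0.9), (1.8, 0.9)
print("distinguishable in 2050?", not intervals_overlap(*a1fi_2050, *a2_2050))

# Hypothetical late-century values, where the A1FI range has risen above A2's.
a1fi_2075, a2_2075 = (3.9, 0.9), (2.0, 0.9)
print("distinguishable in 2075?", not intervals_overlap(*a1fi_2075, *a2_2075))
```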
“We found that the uncertainties obtained when we compare model simulations with observations are significantly larger than what the ensemble bounds would appear to suggest,” said ORNL’s Auroop R. Ganguly, the study’s lead author.
In addition, the error bars in the A1FI scenario suggest at least the possibility of even higher temperatures and more heat waves after 2050, if fossil fuel use is not curtailed.
The team also looked at regional effects and found large geographical variability under the various scenarios. The findings reinforce the IPCC’s call for greater focus on regional climate studies in an effort to understand specific impacts and develop strategies for mitigation of and adaptation to climate change.
The study was published in the Proceedings of the National Academy of Sciences. Co-authors include Marcia Branstetter, John Drake, David Erickson, Esther Parish, Nagendra Singh, and Karsten Steinhaeuser of ORNL, and Lawrence Buja of NCAR. Funding for the work was provided by ORNL’s new cross-cutting initiative called Understanding Climate Change Impacts through the Laboratory Directed Research and Development program.
More information: The paper can be accessed electronically here: http://www.pnas.org/content/106/37/15555
I keep talking about the lack of propagation of errors in these GCMs and the lack of error bars in the projections, which make them meaningless. I thought this might be an effort to correct this, but I was mistaken.
In this paper:
http://www.pnas.org/content/106/37/15555.full.pdf
It seems to me that they are using the differences between the model runs and the “reanalyzed” data to estimate an error, assuming a Gaussian distribution of this difference; they test it over an interval that is not part of the projections, and then project to the end of the century. If I am wrong, I am happy to be corrected.
Sounds like a surefire method to create a hockey-stick-type situation: observations and models diverge because of statistics, and not because of wrong assumptions in the models.
I have looked at the “supporting information”
http://www.pnas.org/content/suppl/2009/09/08/0904495106.DCSupplemental/0904495106SI.pdf
Look at Figure S4. The error bar is 1 degree C. Note that it is plotted as temperature, so it looks small. If this were a propagated error, it would make the anomaly projections nonsense.
Suppose that the true error bars of the model projection, obtained by propagating the errors of the input parameters, are of the same order of magnitude; then the whole thing becomes even more bizarre.
I confess that in my days of analysis, which ended ten years ago, “bias” did not exist in evaluating models against data. Systematic errors, yes, but they were added linearly to the errors, not applied in one direction to the curve; they entered as +/- bounds, possibly different for each sign. So I cannot understand adding this bias and getting an even warmer curve in S4.
We need a statistician to get hold of this, but to me it sounds like another obfuscation instead of real error propagation: varying the input parameters within their errors and estimating the chi-square per degree of freedom and the error thereof.
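For concreteness, the kind of check I mean would look roughly like the sketch below: draw the input parameters from within their quoted errors, run the model, and compute the chi-square per degree of freedom against the observations. The toy model, parameter values, and data here are all invented placeholders, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations with measurement errors (placeholders, deg C).
years = np.arange(2000, 2008)
obs = np.array([14.3, 14.5, 14.5, 14.6, 14.5, 14.6, 14.5, 14.4])
obs_err = np.full_like(obs, 0.1)

def toy_model(years, sensitivity, baseline):
    """Stand-in for a climate model run: a linear trend set by 'sensitivity'."""
    return baseline + sensitivity * (years - years[0])

# Nominal parameter values and their quoted 1-sigma errors (assumed).
params = {"sensitivity": (0.02, 0.01), "baseline": (14.4, 0.1)}

# Vary the inputs within their errors and report chi-square per degree of freedom.
n_dof = len(obs) - len(params)
for _ in range(5):
    s = rng.normal(*params["sensitivity"])
    b = rng.normal(*params["baseline"])
    resid = obs - toy_model(years, s, b)
    chi2_per_dof = np.sum((resid / obs_err) ** 2) / n_dof
    print(f"sensitivity={s:+.3f}, baseline={b:.2f} -> chi2/dof = {chi2_per_dof:.2f}")
```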
evanmjones (20:10:02) :
“Maybe we should develop our own simplified, easily revisable, tentative, “top down” climate model that could be continually adjusted whenever the data jumped the error bars.
As a matter of fact, I think I may do just that . . .”
Agreed, start at the top with Lindzen’s recent result, work down through the tropical troposphere where there is no hot spot and predict what should happen at the surface.
Why am I reminded of Star Trek’s Heisenberg Compensator?
So…. the models got it wildly wrong in the period up to 2007. Factor in the amount they were wrong and do another bunch of runs, but with larger error bars. Oh, and don’t use actual climatic data to work out how wrong they were in the first place. This isn’t science. It isn’t even pseudo-science. It’s fraud.
“The new approach suggests that the range of uncertainty in climate projections may be greater than previously assumed. One consequence is the possibility of greater warming and more heat waves later in the century…”
One consequence? Well, yes, that is one, but it is certainly not the most obvious one. The obvious one is that if the uncertainty is that high, then the climate projections are worthless and any previous hypothesis drawn from those flawed projections is null and void.
Basically, the only conclusion that should’ve been reached is that they have to start over from scratch.
So in practical terms it could be 4 degrees C warmer or 4 degrees C colder in the next 50 years, with the long-term average being zero.
Ray (21:39:23) :
[no profanity or implied profanity ~ ctm]
Wow, that’s a new one!
Philip B (17:53:51) – Exactly. Like I said, it was a rhetorical question. A bit naive perhaps, but rhetorical.
As most of you have noticed, this paper is a textbook case of “what’s the worst that could happen” thinking. Of course, we know that the possibility that temperatures go lower is just as likely as the possibility that they go higher. The second possibility is irrelevant to these people, because if temperatures end up lower than forecast, then all that means is that humanity has “dodged a bullet”.
You can see this mode of thought pervading all pro AGW research. What’s the worst that can happen, what’s the worst possible outcome? The mantra never ends.
“One consequence? Well, yes, that is one, but it is certainly not the most obvious one. The obvious one is that if the uncertainty is that high, then the climate projections are worthless and any previous hypothesis drawn from those flawed projections is null and void.”
Yeah, but (shhh) they couldn’t SAY that. (It may, however, be their real message.)
anna v (06:06:31) “We need a statistician to get hold of this”
Be careful with that suggestion. Statisticians are more guilty than anyone of promoting the unquestioning application of absolutely crazy assumptions. It is the wizardry of their algebraic weaving that holds all this nonsense (in economics, climate science, etc.) together.
What is (very seriously) needed is non-mathematician & non-statistician auditing of assumptions. The issue is not one of advanced credentials & complicated academics. The issue is the absence of common sense at the base of “reasoning”. This is epistemological. No matter how sophisticated the algorithms get, they produce garbage (that threatens the sustainable defense of civilization) if they are underpinned by indefensible assumptions.
Half-serious musing:
Nonetheless, sheeple seem content with the wizardry pulling wool over their eyes. I actually get the impression people like to support corruption and be dominated by evil forces because this is somehow (twistedly) perceived as “more cool”. Note to sheeple: I invite you to prove the musing wrong.
No matter the errors in this analysis, the pattern is that the more wrong the projections are, the more these people keep the projections and simply predict a wider range of catastrophes. Except, as we’ve seen in other situations, all the adjustments somehow land on the warm side, even if the adjustments are in the descriptions rather than in the error bars.
From the caption to Figure 1: “The shaded areas indicate uncertainties caused by five initial-condition ensembles.”
Anyone know what this means? I note that over many time periods the shaded areas get smaller with time so it can’t be uncertainties due to initial conditions.
From SI p3/10:
“Because bias and variance are stationary in hindcasts (shown in this SI Text under Statistical Methods), we assume the same will be true for projections as well.”
I think I know what this means. The standard deviation was worked out for the hindcast period, and that same standard deviation (as a +/- 3 SD band) was added at every time point to the model predictions (an average of them?).
Since the models are somehow adjusted to the hindcast data, this might be reasonable if the model extrapolations were the actual future temperatures. In reality it looks like a massive underestimate of the true uncertainty.
Anna v: Yes, Figure 1 (top) shows no sign of error propagation at all. In effect they use the hindcast data (10 years) to extrapolate forward in time for 90 years, and there is no increase in “uncertainty” at the end. Ridiculous.
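If I am reading the SI correctly (and I may not be), the recipe amounts to something like the following sketch: compute the residuals of the model against the hindcast period, take their mean and standard deviation, and then paste that same bias and +/- 3 SD band onto every projected time point. All the numbers and the projection itself are made-up placeholders.

```python
import numpy as np

# Made-up hindcast: model output vs. "observations" for 2000-2007 (placeholders, deg C).
hindcast_model = np.array([14.4, 14.5, 14.6, 14.6, 14.7, 14.7, 14.8, 14.8])
hindcast_obs   = np.array([14.3, 14.5, 14.5, 14.6, 14.5, 14.6, 14.5, 14.4])

residuals = hindcast_obs - hindcast_model
bias  = residuals.mean()          # systematic offset over the hindcast window
sigma = residuals.std(ddof=1)     # spread over the hindcast window

# Made-up projection out to 2099 (placeholder trend, not a model run).
years_proj = np.arange(2008, 2100)
projection = 14.8 + 0.03 * (years_proj - 2008)

# The "stationarity" assumption: the same bias and +/- 3 SD band is pasted onto
# every future time point, so the band never widens with lead time.
lower = projection + bias - 3.0 * sigma
upper = projection + bias + 3.0 * sigma
print(f"band half-width = {3.0 * sigma:.2f} C in 2010 and in 2099 alike")
```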
As noted above, from SI p3/10:
“NCEP Reanalysis data are taken as a proxy for observations, even though we are cognizant that these data are not actual ground measurements, but the product of a model applied to observed data from a variety of sources.”
davidc (18:53:05) :
From the caption to Figure 1: “The shaded areas indicate uncertainties caused by five initial-condition ensembles.”
Anyone know what this means? I note that over many time periods the shaded areas get smaller with time so it can’t be uncertainties due to initial conditions.
Yes, I do. In my search for what is happening with error propagation due to the errors in the input parameters, I found out that:
“initial conditions” means a set of values for the input parameters;
these are then varied within their errors according to the taste of the modeler (check for “likelihood” in chapter 8 of AR4); each resulting output is called an “experiment”, becomes one spaghetti line in the graph, and is used in an average;
the “experiments” are treated as different measurements because, they say, the system is chaotic and this is a way of simulating chaos.
Obviously, when one does this there is no reason why the spread of the differences could not diminish in time as well as expand.
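A toy illustration of that last point, using the chaotic logistic map as a stand-in for a climate model (my choice of toy, nothing from the paper): an ensemble of runs started from slightly perturbed initial conditions has a spread that can be smaller at a later time than at an earlier one.

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic_run(x0, steps, r=4.0):
    """Iterate the chaotic logistic map x -> r * x * (1 - x) from initial value x0."""
    x = np.empty(steps)
    x[0] = x0
    for t in range(1, steps):
        x[t] = r * x[t - 1] * (1.0 - x[t - 1])
    return x

# Five "experiments": the same model, initial condition perturbed within a small error.
ensemble = np.array([logistic_run(0.4 + rng.normal(0.0, 1e-3), 60) for _ in range(5)])

# Ensemble spread (max minus min across members) at each time step.
spread = ensemble.max(axis=0) - ensemble.min(axis=0)
shrinking_steps = int(np.sum(np.diff(spread) < 0))
print(f"spread at t=5: {spread[5]:.3f}, at t=30: {spread[30]:.3f}, at t=59: {spread[59]:.3f}")
print(f"time steps at which the spread shrank: {shrinking_steps} of {len(spread) - 1}")
# The spread bounces around rather than growing monotonically, so it is not a
# measure of how far the model can be from reality.
```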
My strong suspicion is that if they did calculate true errors, the bars would go out of the page, making nonsense of the climate projections. As it is, the spaghetti of these “experiments” for other variables is extremely out of phase with the data and with each other, yet they are still used to create the false image of error studies.
Take this bias that moves the curve a degree. In my analysis it would move my error by a degree, since it is systematic and has to be added linearly to the errors.
To take manipulated data, subtract them from this famous model average, and call the difference a statistical effect is so absurd that it is a disgraceful misuse of the scientific method.
Why, the difference might be how the modeler had slept the day he changed the parameters.
anna v,
Thanks.
“Why, the difference might be how the modeler had slept the day he changed the parameters”
Figure S1 (SI p4/10) might give a clue on one constraint that might apply. That is, parameters are chosen to give results that have the appearance of a normal distribution. They then take that as a justification for doing statistical tests. Of course, they can get any “standard deviation” they want and therefore pass or fail any statistical test they want.
“My strong suspicion is that if they did calculate true errors, the bars would go out of the page, making nonsense of the climate projections”
I’m sure that’s right. The obvious test of the models is some kind of Monte Carlo simulation. Or at least a set of runs with all parameter combinations at their max and min plausible values (but “plausible” implies that the parameters have a physical meaning, which might be a problem). I haven’t seen anything like that, have you?
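Roughly what I have in mind, as a sketch only: run the model at every combination of the parameters set to their minimum and maximum plausible values and look at the spread of outcomes. The toy model and the parameter ranges below are invented purely for illustration.

```python
import itertools
import numpy as np

def toy_model(sensitivity, aerosol_forcing, ocean_uptake, years=90):
    """Placeholder for a climate model: a crude forced trend, not real physics."""
    t = np.arange(years)
    return (0.012 * sensitivity - 0.004 * aerosol_forcing - 0.002 * ocean_uptake) * t

# Assumed plausible (min, max) ranges for each parameter -- purely illustrative.
bounds = {
    "sensitivity":     (1.5, 4.5),   # deg C per CO2 doubling
    "aerosol_forcing": (0.5, 2.0),
    "ocean_uptake":    (0.5, 1.5),
}

# Run the toy model at every corner of the parameter box (2^3 = 8 combinations).
outcomes = [toy_model(*combo)[-1] for combo in itertools.product(*bounds.values())]

print(f"end-of-century warming spans {min(outcomes):.1f} to {max(outcomes):.1f} C "
      "across the parameter corners")
```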
Another question you might be able to answer. How can they get any runs at all to work? They must be constraining the dynamic equations somehow as the calculation progresses, so the results might still be chaotic but within those constraints. Any idea how they do that? I know people have tried reverse engineering the programs, but I haven’t seen any significant progress.
anna v (22:02:57) “[…] so absurd that it is a disgrace for the misuse of the scientific method.”
Well said. It would be laughable, but this type of “reasoning” constitutes a threat to the sustainable defense of civilization. Step 1 is to make the practitioners aware that their actions can generate a destabilizing multi-wave backlash. Step 2 is to afford them an opportunity to save face, which is arguably necessary for the greater good, since under this scenario they will change course more rapidly. Never mind revenge; too much is at stake to play games (perhaps only the foolish think this is a game).
davidc (23:05:22) :
I’m sure that’s right. The obvious test of the models is some kind of Monte Carlo simulation. Or at least a set of runs with all parameter combinations at their max and min plausible values (but “plausible” implies that the parameters have a physical meaning, which might be a problem). I haven’t seen anything like that, have you?
There exist methods of finding the maximum likelihood function when comparing theory and experiment; see http://cdsweb.cern.ch/record/310399/files/CM-P00059682.pdf. It was published in 1975 but is still behind a paywall. It has been widely used in the particle physics community for calculating errors.
I think the problem exists for climate models because their “theory” is very complicated and expressed numerically. The models are already very expensive in computer time, and introducing the variations necessary to get the chi-square per degree of freedom would probably put them beyond present computational power.
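For what it is worth, the bare bones of that approach on a toy problem: minimize a chi-square (equivalent to maximizing a Gaussian likelihood) with respect to a parameter and read the 1-sigma error off the interval where the chi-square rises by 1 from its minimum. Everything below is a made-up illustration, not the method of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy "experiment": measurements with a common Gaussian error (placeholders, deg C).
data = np.array([14.3, 14.5, 14.5, 14.6, 14.5, 14.6, 14.5, 14.4])
err = 0.1

def chi2(mu):
    """Chi-square of a one-parameter 'theory' (a constant mu) against the data."""
    return np.sum(((data - mu) / err) ** 2)

# Best fit: minimize the chi-square (same as maximizing the Gaussian likelihood).
fit = minimize_scalar(chi2, bounds=(13.0, 16.0), method="bounded")
mu_hat, chi2_min = fit.x, fit.fun

# 1-sigma error: the half-width of the interval where chi-square <= minimum + 1.
grid = np.linspace(mu_hat - 0.2, mu_hat + 0.2, 2001)
inside = grid[np.array([chi2(m) for m in grid]) <= chi2_min + 1.0]
print(f"mu = {mu_hat:.3f} +/- {mu_hat - inside.min():.3f}")
print(f"chi2/dof at the minimum = {chi2_min / (len(data) - 1):.2f}")
```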
Their way of playing would be OK in this case, if it were just that: scientific curiosity. Unfortunately politicians are trying to stampede the world into decisions based on what are essentially video games.
Another question you might be able to answer. How can they get any runs at all to work? They must be constraining the dynamic equations somehow as the calculation progresses, so the results might still be chaotic but within those constraints. Any idea how they do that? I know people have tried reverse engineering the programs, but I haven’t seen any significant progress.
I think E.M. Smith has successfully reverse engineered a version of GISS. He contributes here and also has everything up on his blog http://chiefio.wordpress.com/
Roughly, from what I have gleaned, they make a three-dimensional grid of the world, something like 200×200 cells horizontally and 20 km in height, and impose continuity boundary conditions on the fluid equations. I think the time steps are 20 minutes.
What they do not or cannot know, they use an average of (which is a different way of saying that for unknown terms entering the equations they use a linear approximation).
They “hindcast”, as it is now fashionable to call fitting past data by fiddling with many parameters. That is where the talent of the modeler, like that of a violinist, enters.
Inherently they are relying on linear approximations within the boxes, so it is not surprising that, for the highly non-linear chaotic system that climate/weather is, the projected solutions will start to diverge after a number of time steps forward from the back fit.
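To make the picture concrete, here is a deliberately crude caricature of that kind of scheme: one scalar field on a coarse latitude-longitude grid, stepped forward in fixed 20-minute time steps with a simple diffusion term, periodic in longitude. It bears no relation to the actual CCSM3 code; it only shows the grid-plus-time-step structure.

```python
import numpy as np

# Coarse global grid (placeholder resolution) and a 20-minute time step.
n_lat, n_lon = 46, 72          # roughly 4 x 5 degree cells, one vertical level only
dt = 20 * 60.0                 # seconds
kappa = 1.0e-5                 # crude diffusion coefficient (grid units^2 per second)

# Toy initial temperature field: warm equator, cold poles.
lat = np.linspace(-90.0, 90.0, n_lat)
T = np.repeat((30.0 * np.cos(np.radians(lat)))[:, None], n_lon, axis=1)

def step(T):
    """One explicit time step: diffusion with wrap-around in longitude."""
    north = np.vstack([T[:1, :], T[:-1, :]])   # clamped at the poles
    south = np.vstack([T[1:, :], T[-1:, :]])
    east = np.roll(T, -1, axis=1)              # periodic in longitude
    west = np.roll(T, 1, axis=1)
    laplacian = north + south + east + west - 4.0 * T
    return T + dt * kappa * laplacian

# March forward three model days: 3 steps per hour, 24 hours, 3 days.
for _ in range(3 * 24 * 3):
    T = step(T)
print(f"global mean after 3 days: {T.mean():.2f} C")
```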
davidc (23:05:22) :
They must be constraining the dynamic equations somehow as the calculation progresses, so the results might still be chaotic but within those constraints.
Their calculations are completely deterministic; there is nothing chaotic about them. Each spaghetti line is computed numerically and deterministically. They say they simulate chaos by changing the initial conditions, i.e. the input parameters, within errors, but at the taste of the modeler.
For truly chaotic simulations, check the literature for Tsonis et al.
“The team computed uncertainty by comparing model outcomes with historical climate data from the period 2000-2007.”
What exactly does this mean? Are we to accept that there is clear agreement with the historical climate data at a time when the USHCN network has a very high warming bias, when adjustments underestimate the UHI effect, when there is a divergence between the surface and satellite data, and when the global data set has been ‘lost’ and is unavailable for verification? It would appear that these people should spend more of their energy ensuring that they have access to an objective measure of temperature and less playing mathematical games with models that use inaccurate inputs.
anna v:
From Wiki:
“Chaos theory is a branch of mathematics which studies the behavior of certain dynamical systems that may be highly sensitive to initial conditions. This sensitivity is popularly referred to as the butterfly effect. As a result of this sensitivity, which manifests itself as an exponential growth of error, the behavior of chaotic systems appears to be random. That is, tiny differences in the starting state of the system can lead to enormous differences in the final state of the system even over fairly small timescales. This gives the impression that the system is behaving randomly. This happens even though these systems are deterministic, meaning that their future dynamics are fully determined by their initial conditions with no random elements involved. This behavior is known as deterministic chaos, or simply chaos.”
So there are two quite different issues here: 1) sensitivity to parameter values and 2) sensitivity to the initial conditions (of the dynamic variables). Mostly people (e.g. Wiki) mean 2) when they talk of chaos, but in “Climate Science” I think it is more 1), which needs to be addressed before 2) can even be considered.
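A quick way to see the two issues side by side, using the Lorenz-63 system as the usual textbook stand-in for a chaotic model (my example, nothing from the paper): perturb an initial condition in one run and a parameter in another, and compare how far each drifts from a reference run.

```python
import numpy as np

def lorenz_run(x0, y0, z0, rho, steps=12000, dt=0.002, sigma=10.0, beta=8.0 / 3.0):
    """Integrate the Lorenz-63 system with a simple fixed-step Euler scheme."""
    x, y, z = x0, y0, z0
    traj = np.empty((steps, 3))
    for i in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj[i] = (x, y, z)
    return traj

ref = lorenz_run(1.0, 1.0, 1.0, rho=28.0)

# 2) Sensitivity to initial conditions: nudge x0 by one part in a million.
ic_run = lorenz_run(1.0 + 1e-6, 1.0, 1.0, rho=28.0)

# 1) Sensitivity to a parameter: nudge rho by the same relative amount.
par_run = lorenz_run(1.0, 1.0, 1.0, rho=28.0 * (1.0 + 1e-6))

for name, run in [("initial-condition", ic_run), ("parameter", par_run)]:
    dist = np.linalg.norm(run - ref, axis=1)
    print(f"{name} nudge: distance from the reference run at the end = {dist[-1]:.1f}")
# Both tiny nudges end up macroscopically far from the reference trajectory.
```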
Vangel (08:15:23)
That’s the reason for this (SI p3/10):
“NCEP Reanalysis data are taken as a proxy for observations, even though we are cognizant that these data are not actual ground measurements, but the product of a model applied to observed data from a variety of sources.”
Even “Climate Scientists” can’t extrapolate a declining trend in “global average temperature” (whatever that means) to produce a warming catastrophe (people are not ready for an ice age catastrophe just yet), so they are using “NCEP Reanalysis” (whatever that means), which it seems went up, and is therefore profitably extrapolatable.
Even “Climate Scientists” can’t extrapolate a declining trend in “global average temperature” (whatever that means) to produce a warming catastrophe (people are not ready for an ice age catastrophe just yet), so they are using “NCEP Reanalysis” (whatever that means), which it seems went up, and is therefore profitably extrapolatable.
That is exactly my point. They have no clue about the actual temperature readings or the meaning of the ‘average global temperature’ figure they come up with but have no trouble using their algorithms to model the model uncertainties.
“The team computed uncertainty by comparing model outcomes with historical climate data from the period 2000-2007.”
What kind of dolt would think that this procedure quantifies the uncertainty of a model? All this paper does is present a post-hoc justification of the same expected model outcome, but it moves the error bars so that the actual historical data fit within them. In other words, you never have to test the model prediction against the actual outcome – you just fudge the error quantification and assume that the error is randomly distributed above and below the model outcome. Neat. Do you think we can get my auto insurance company to do the same thing if I get into five accidents in a month, so that they don’t raise my rates (expected cost)?
“We found that the uncertainties obtained when we compare model simulations with observations are significantly larger than what the ensemble bounds would appear to suggest,” said ORNL’s Auroop R. Ganguly, the study’s lead author.
Isn’t this a euphemism for saying that none of the model runs can produce the temperatures we’ve seen over the last seven years or so?