From an Ohio State University press release where they see a lot of red, and little else, yet another warm certainty model:
STATISTICAL ANALYSIS PROJECTS FUTURE TEMPERATURES IN NORTH AMERICA
![warming_figure2](http://wattsupwiththat.files.wordpress.com/2012/05/warming_figure21.jpg?resize=640%2C470&quality=83)
They performed advanced statistical analysis on two different North American regional climate models and were able to estimate projections of temperature changes for the years 2041 to 2070, as well as the certainty of those projections.
The analysis, developed by statisticians at Ohio State University, examines groups of regional climate models, finds the commonalities between them, and determines how much weight each individual climate projection should get in a consensus climate estimate.
Through maps on the statisticians’ website, people can see how their own region’s temperature will likely change by 2070 – overall, and for individual seasons of the year.
Given the complexity and variety of climate models produced by different research groups around the world, there is a need for a tool that can analyze groups of them together, explained Noel Cressie, professor of statistics and director of Ohio State’s Program in Spatial Statistics and Environmental Statistics.
Cressie and former graduate student Emily Kang, now at the University of Cincinnati, present the statistical analysis in a paper published in the International Journal of Applied Earth Observation and Geoinformation.
“One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe,” he said. “We wanted to develop a way to determine the likelihood of different outcomes, and combine them into a consensus climate projection. We show that there are shared conclusions upon which scientists can agree with some certainty, and we are able to statistically quantify that certainty.”
For their initial analysis, Cressie and Kang chose to combine two regional climate models developed for the North American Regional Climate Change Assessment Program. Though the models produced a wide variety of climate variables, the researchers focused on temperatures during a 100-year period: first, the climate models’ temperature values from 1971 to 2000, and then the climate models’ temperature values projected for 2041 to 2070. The data were broken down into blocks of area 50 kilometers (about 30 miles) on a side, throughout North America.
Averaging the results over those individual blocks, Cressie and Kang’s statistical analysis estimated that average land temperatures across North America will rise around 2.5 degrees Celsius (4.5 degrees Fahrenheit) by 2070. That result is in agreement with the findings of the United Nations Intergovernmental Panel on Climate Change, which suggest that under the same emissions scenario as used by NARCCAP, global average temperatures will rise 2.4 degrees Celsius (4.3 degrees Fahrenheit) by 2070. Cressie and Kang’s analysis is for North America – and not only estimates average land temperature rise, but regional temperature rise for all four seasons of the year.
Cressie cautioned that this first study is based on a combination of a small number of models. Nevertheless, he continued, the statistical computations are scalable to a larger number of models. The study shows that climate models can indeed be combined to achieve consensus, and the certainty of that consensus can be quantified.
The statistical analysis could be used to combine climate models from any region in the world, though, he added, it would require an expert spatial statistician to modify the analysis for other settings.
The key is a special combination of statistical analysis methods that Cressie pioneered, which use spatial statistical models in what researchers call Bayesian hierarchical statistical analyses.
The latter techniques come from Bayesian statistics, which allows researchers to quantify the certainty associated with any particular model outcome. All data sources and models are more or less certain, Cressie explained, and it is the quantification of these certainties that are the building blocks of a Bayesian analysis.
In the case of the two North American regional climate models, his Bayesian analysis technique was able to give a range of possible temperature changes that includes the true temperature change with 95 percent probability.
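The paper's full Bayesian hierarchical machinery is far more elaborate than this, but the basic idea of weighting each model by its certainty and reporting an interval can be sketched with a toy inverse-variance combination of two projections. Every number below is invented purely for illustration and does not come from the paper:

```python
import math

# Hypothetical projected warming (deg C) and standard error for two
# regional climate models -- illustrative numbers only, not from the paper.
models = [(2.2, 0.6), (2.9, 0.8)]

# Inverse-variance weights: a more certain model gets more weight.
weights = [1.0 / s ** 2 for _, s in models]
total = sum(weights)

consensus = sum(w * m for (m, _), w in zip(models, weights)) / total
se = math.sqrt(1.0 / total)  # standard error of the combined estimate

lo, hi = consensus - 1.96 * se, consensus + 1.96 * se  # ~95% interval
print(f"consensus: {consensus:.2f} C, 95% interval: ({lo:.2f}, {hi:.2f})")
```

Note how the combined interval is narrower than either input model's alone; that shrinkage is what "quantifying the certainty of the consensus" buys you, at least in this simplified setting.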
After producing average maps for all of North America, the researchers took their analysis a step further and examined temperature changes for the four seasons. On their website, they show those seasonal changes for regions in the Hudson Bay, the Great Lakes, the Midwest, and the Rocky Mountains.
In the future, the region in the Hudson Bay will likely experience larger temperature swings than the others, they found.
That Canadian region in the northeast part of the continent is likely to experience the biggest change over the winter months, with temperatures estimated to rise an average of about 6 degrees Celsius (10.7 degrees Fahrenheit) – possibly because ice reflects less energy away from the Earth’s surface as it melts. Hudson Bay summers, on the other hand, are estimated to experience only an increase of about 1.2 degrees Celsius (2.1 degrees Fahrenheit).
According to the researchers’ statistical analysis, the Midwest and Great Lakes regions will experience a rise in temperature of about 2.8 degrees Celsius (5 degrees Fahrenheit), regardless of season. The Rocky Mountains region shows greater projected increases in the summer (about 3.5 degrees Celsius, or 6.3 degrees Fahrenheit) than in the winter (about 2.3 degrees Celsius, or 4.1 degrees Fahrenheit).
In the future, the researchers could consider other climate variables in their analysis, such as precipitation.
This research was supported by NASA’s Earth Science Technology Office. The North American Regional Climate Change Assessment Program is funded by the National Science Foundation, the U.S. Department of Energy, the National Oceanic and Atmospheric Administration, and the U.S. Environmental Protection Agency office of Research and Development.
###
The man and his methods may be sound, but the object of his work sure isn’t.
Absolute unadulterated garbage. But then predicting things is very difficult, particularly things in the future.
The models should hindcast the past as a test for their forecast capabilities.
How many of the forecasting models have correctly backcast the past?
Jo Nova has a post on this:
http://joannenova.com.au/2012/05/we-cant-predict-the-climate-on-a-local-regional-or-continental-scale/
Why is the ensemble not analysed for a 1940-1970 hindcast before forecasting 2040-2070?
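The kind of hindcast scoring being asked for is at least easy to state: run the model over a held-out historical window and score it against observations. A minimal sketch, where both series are invented placeholder anomalies rather than real model or station data:

```python
# Toy hindcast check: score a model's hindcast anomalies (deg C) against
# observed anomalies over a held-out period. All values are invented.
observed = [0.10, 0.05, 0.20, 0.15, 0.30]
hindcast = [0.12, 0.10, 0.15, 0.20, 0.25]

n = len(observed)
rmse = (sum((o - h) ** 2 for o, h in zip(observed, hindcast)) / n) ** 0.5
bias = sum(h - o for o, h in zip(observed, hindcast)) / n

print(f"RMSE: {rmse:.3f} C, mean bias: {bias:+.3f} C")
```

A real verification would use a long out-of-sample window and multiple skill scores, but even this crude RMSE-and-bias check is more than the press release reports.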
What is funny about these forecasts is that they target 2040-2070, so that no direct reality check of their forecasting ability is possible in the next couple of decades.
As already said by others, I see in this red forecast only GIGO and BS, and no science.
Tell me when you get your new blue cartridge. It will reflect reality more closely.
Someone earlier ended their comment, “O-H”.
I-O
It sounds like Cressie has developed a better way to compare/combine the output of computer models in general. The problem is that when applying it to the climate models, he’s trying to make a silk purse out of a sow’s ear.
Steven Mosher says:
May 16, 2012 at 12:29 am
Cressie is the main man in spatial stats today, specifically spatio-temporal stats.
If the best we’ve got is producing this level of dreck it suggests to me that we have mostly proven that the best statistical techniques we have available are entirely inadequate to the task at hand.
Using NOAA data, they are predicting 3.6 F by 2070.
The trend in the USA from 1990 to 2011 is 0.22 F/decade, which works out to about 164 years to reach 3.6 F.
HOWEVER
The trend from 2000 to 2011 is -0.58 F/decade, which means that in 55 years it will be about 3.2 F COLDER.
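For what it's worth, the arithmetic behind those two extrapolations is just the linear trend times the elapsed time; a quick check, treating the quoted trends purely as inputs:

```python
def extrapolate(trend_per_decade_f, years):
    """Linearly extrapolate a temperature trend given in deg F per decade."""
    return trend_per_decade_f * years / 10.0

# 1990-2011 trend: +0.22 F/decade -> years needed to reach +3.6 F
years_to_3p6 = 3.6 / (0.22 / 10.0)
print(f"{years_to_3p6:.0f} years")         # ~164 years at that rate

# 2000-2011 trend: -0.58 F/decade over the ~55 years to 2070
print(f"{extrapolate(-0.58, 55):+.1f} F")  # ~-3.2 F
```

The wildly different answers from two overlapping windows are, of course, the commenter's point: naive linear extrapolation of short noisy trends proves almost nothing.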
I suspect the real number will be in the middle.
I would like to third Mario Lento’s proposition for the adoption of the phrase “guess laundering” – and further propose that the outfit doing it be known as the “guess laundry”.
Yes – it will all come out in the laundry…
Steven Mosher says: May 16, 2012 at 12:29 am Cressie is the main man in spatial stats today, specifically spatio-temporal stats.
What a pity he does not direct his special skills into topics that matter more, using data that mean more. There is plenty of hard science needing such skills.
Dave Wendt says:
May 16, 2012 at 1:31 pm
If the best we’ve got is producing this level of dreck it suggests to me that we have mostly proven that the best statistical techniques we have available are entirely inadequate to the task at hand.
Maybe I’m looking at this too hard, but perhaps the difficulties lie in the fact that most people calling themselves “climate scientists” are statisticians rather than, oh, say, scientists whose field of study actually includes climate in some fashion…
John has a nice post here about why these models are wrong. With increased Earth temperature the outgoing longwave radiated energy increases as proven by measurement and not by models:
http://theinconvenientskeptic.com/2012/05/the-science-of-why-the-theory-of-global-warming-is-incorrect/
The model predicts Canadian North East winter temperatures to rise by 6 C by 2070. During the 64-year period from 1948 to 2011, Canadian national winter temperatures according to Environment Canada rose only by 1.5 C, including the Arctic Mountains and Fiords. In the Arctic Tundra region they rose 2.1 C. In the North Eastern Forest region they rose 1 C. In the Atlantic Coast area they rose 0.7 C. So nowhere in the northeastern region of Canada have there been winter temperature changes over a much longer period that even remotely approach the model predictions. I am constantly amazed how the modellers are able to predict with great accuracy the temperatures for some remote future period when they themselves may not be around to account for their past predictions, and are complete failures, with no proven credibility, when it comes to predicting the next year or next decade. This latest modelling attempt does not seem credible when looking at current or past climate trends in Canada.
I just noticed that in my previous post I stated the annual temperature rises during the last 64 years for the various regions of Eastern Canada, not the winter rises as I intended. Here are the correct winter temperature departures, or rises, for the northeastern regions of Canada during the last 65 years:
ATLANTIC COAST 0.5C
NORTHEASTERN FORESTS 1.9C
ARCTIC MTNS & FIORDS 2.3 C
ARCTIC TUNDRA 3.2 C
[data per Environment Canada]
The model of the above paper seems to project a rise of 6 C in a period of about 30 years, between 2041 and 2070.
“Though the models produced a wide variety of climate variables, the researchers focused on temperatures during a 100-year period: first, the climate models’ temperature values from 1971 to 2000, and then the climate models’ temperature values projected for 2041 to 2070.”
What about the prediction for 2012-2041? What, is the model no good for the near term? Wouldn’t that mean that by the year 2030 they could no longer predict what was going to happen in 2041? So in 2030 all bets are off; anything could happen in 2041. How stupid is this study? If you can’t predict what the temperature will be next year or ten years from now, you can’t predict what it will be 40 years from now either, because the temperature 40 years from now depends on the earlier temperatures.
I think they should use the period from 950-1050 AD as their calibration period and then predict what the temperature is going to be in 2013 and let’s see how accurate they are. The model would most likely be wrong by about 6C if not more.
This is just another one of those unfalsifiable DOOM AND GLOOM scenarios to try and scare the public and politicians into action.
ALCHESON
You make some valid observations. My take is that the authors seem to use the temperature data from the past warming phase of the last 60-year climate cycle [1970-2000] in order to predict the warming phase [2040-2070] of the next climate cycle, but they ignore the possible cooling phase in between [2010-2040]. If the cooling phase is severe, like 1880-1910, there can be considerable temperature drops in between, and what the temperature will be at the end of the cooling phase is anyone’s guess. The conditions during 2040-2070 may not be similar to 1970-2000.
Well, I stopped reading when I got to that statistical consensus prediction; excuse me, projection, from several climate models. Statistically sophisticated consensus or no, a consensus of idiots is still an idiot consensus.
Since the climate models do not agree with each other, you can be sure that none of them is reliable, and a mathematical hodge-podge is no more believable than any one of them. I don’t think you get any more credible information if you statisticate the average telephone number in the phone books of New York, DC, and Philadelphia; even throwing in LA and San Fran doesn’t make the “consensus” any more informative.
Why not fix the models so they track the observed data, and forget about statistical consensus?
And for that matter, why not fix the data, so that it really is a valid sampling of the continuous climate function of at least space and time, instead of a handful of meaningless random samples?
If you core-drill a tree to get a tree-ring stack, you still get a one-dimensional sample of a three-dimensional function, which tells you nothing significant about even that one tree’s history, let alone the whole earth’s. Oops; if you counted the rings correctly you do get its age reasonably well. I guess that’s why they call it dendrochronology, and not climatology.
“One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe.”
This is an incorrect assumption about this scientist (me), and probably about a majority of other skeptics, for two reasons. First, I do not work on a belief system. I work on data, mostly improving its woeful quality in climate work and cautioning sensible people not to waste time on it without validating it first.
Second, if you make an ensemble, you have to put uncertainty around it. This usually means that you have to know the uncertainty of the various input models, but that uncertainty is seldom, if ever, calculated correctly: first, because some variables are constrained before the model is run; and second, because the uncertainty of a model’s design should be derived from all of the runs that have been put through it, unless there are large and agreed valid reasons for rejecting a run (like a typo). If you calculate the uncertainty of a modelling team’s efforts in this inclusive way, and then form the ensemble, the overall error bounds would be so large that any curve that looked about right would fit between them, meaning that nothing of value has been demonstrated.
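The commenter's point about inclusive uncertainty can be made concrete with a toy sketch: compare the combined interval when each model's spread comes from a single favoured run versus when it is recomputed from all of that model's runs. All numbers below are invented for illustration:

```python
import math

# Each entry: (projection in deg C, sd from the single submitted run,
#              sd recomputed from all runs of that model). Invented numbers.
models = [(2.0, 0.3, 1.2), (2.6, 0.4, 1.5), (3.1, 0.35, 1.8)]

def interval(sds):
    # Equal-weight ensemble mean; variance of the mean of independent models.
    mean = sum(m for m, _, _ in models) / len(models)
    var = sum(s ** 2 for s in sds) / len(sds) ** 2
    half = 1.96 * math.sqrt(var)
    return mean - half, mean + half

narrow = interval([s for _, s, _ in models])  # favoured-run spreads
wide = interval([s for _, _, s in models])    # all-runs spreads

print("favoured-run interval:", narrow)
print("all-runs interval:    ", wide)
```

With these made-up inputs the all-runs interval is several times wider than the favoured-run one, which is exactly the "any curve would fit between the bounds" problem being described.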
That’s part of the reason for some scepticism. You can’t cherry-pick model runs any more than you can cherry-pick trees for dendrothermometry, a topic of recent debate. I have noticed the absence of statements that modellers did not share results with other teams before submitting their favourite model run to the ensemble calculation. I have seen data that suggest some did. So ‘a priori’ has been degraded in meaning.
Steven Mosher: “However, if you look at the information from various hindcasts ( and how they are wrong in some ways and right in others ) that information can be used in getting a better ensemble forecast.”
Sophisticated mathematics cannot get past correlated error among all the models. Even the things they got “right” in the past, such as temperature, must have been “right” for the wrong reasons. Correlated errors in precipitation (Wentz) and surface albedo feedback (Roesch) are larger than the energy imbalance of interest. It will be interesting to see what their review of the diagnostic literature for the models says about the documented correlated errors.