From an Ohio State University press release, in which they see a lot of red and little else, comes yet another warm-certainty model:
STATISTICAL ANALYSIS PROJECTS FUTURE TEMPERATURES IN NORTH AMERICA
![Map of projected North American temperature change from the OSU analysis](http://wattsupwiththat.files.wordpress.com/2012/05/warming_figure21.jpg?resize=640%2C470&quality=83)
They performed advanced statistical analysis on two different North American regional climate models and were able to estimate projections of temperature changes for the years 2041 to 2070, as well as the certainty of those projections.
The analysis, developed by statisticians at Ohio State University, examines groups of regional climate models, finds the commonalities between them, and determines how much weight each individual climate projection should get in a consensus climate estimate.
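The release does not spell out the weighting scheme, so the following is only a minimal sketch, assuming a simple precision-weighted average of model projections for one grid block; the function name and all numbers are hypothetical, not taken from the paper.

```python
import numpy as np

def consensus_projection(projections, variances):
    """Combine per-model temperature projections for one grid block.

    Each projection is weighted by its precision (1/variance), so the
    more certain a model's projection, the more it counts in the consensus.
    """
    projections = np.asarray(projections, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    consensus = np.sum(weights * projections) / np.sum(weights)
    consensus_var = 1.0 / np.sum(weights)  # variance of the weighted mean
    return consensus, consensus_var

# Two hypothetical regional-model projections of warming by 2070 (deg C)
mean, var = consensus_projection([2.3, 2.8], [0.16, 0.25])
print(f"consensus: {mean:.2f} C +/- {1.96 * var ** 0.5:.2f} C")
```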
Through maps on the statisticians’ website, people can see how their own region’s temperature will likely change by 2070 – overall, and for individual seasons of the year.
Given the complexity and variety of climate models produced by different research groups around the world, there is a need for a tool that can analyze groups of them together, explained Noel Cressie, professor of statistics and director of Ohio State’s Program in Spatial Statistics and Environmental Statistics.
Cressie and former graduate student Emily Kang, now at the University of Cincinnati, present the statistical analysis in a paper published in the International Journal of Applied Earth Observation and Geoinformation.
“One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe,” he said. “We wanted to develop a way to determine the likelihood of different outcomes, and combine them into a consensus climate projection. We show that there are shared conclusions upon which scientists can agree with some certainty, and we are able to statistically quantify that certainty.”
For their initial analysis, Cressie and Kang chose to combine two regional climate models developed for the North American Regional Climate Change Assessment Program. Though the models produced a wide variety of climate variables, the researchers focused on temperatures during a 100-year period: first, the climate models’ temperature values from 1971 to 2000, and then the climate models’ temperature values projected for 2041 to 2070. The data were broken down into blocks 50 kilometers (about 30 miles) on a side, throughout North America.
Averaging the results over those individual blocks, Cressie and Kang’s statistical analysis estimated that average land temperatures across North America will rise around 2.5 degrees Celsius (4.5 degrees Fahrenheit) by 2070. That result is in agreement with the findings of the United Nations Intergovernmental Panel on Climate Change, which suggest that under the same emissions scenario as used by NARCCAP, global average temperatures will rise 2.4 degrees Celsius (4.3 degrees Fahrenheit) by 2070. Cressie and Kang’s analysis is for North America – and not only estimates average land temperature rise, but regional temperature rise for all four seasons of the year.
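As a rough illustration of the gridding described above, the per-block change is the mean over the projection period minus the mean over the baseline period, and the continental figure is an average over blocks. The arrays below are random stand-ins, not NARCCAP output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in annual-mean temperatures on a 50 km grid: (years, rows, cols), deg C
baseline = 10.0 + rng.normal(0.0, 1.0, size=(30, 120, 160))  # proxy for 1971-2000
future = 12.5 + rng.normal(0.0, 1.0, size=(30, 120, 160))    # proxy for 2041-2070

# Change per 50 km block: projection-period mean minus baseline-period mean
block_change = future.mean(axis=0) - baseline.mean(axis=0)

# Continental average (a real analysis would mask ocean cells and area-weight)
print(f"mean projected change over all blocks: {block_change.mean():.2f} C")
```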
Cressie cautioned that this first study is based on a combination of a small number of models. Nevertheless, he continued, the statistical computations are scalable to a larger number of models. The study shows that climate models can indeed be combined to achieve consensus, and the certainty of that consensus can be quantified.
The statistical analysis could be used to combine climate models from any region in the world, though, he added, it would require an expert spatial statistician to modify the analysis for other settings.
The key is a special combination of statistical analysis methods that Cressie pioneered, which use spatial statistical models in what researchers call Bayesian hierarchical statistical analyses.
“We show that there are shared conclusions upon which scientists can agree with some certainty, and we are able to statistically quantify that certainty.”
The latter techniques come from Bayesian statistics, which allows researchers to quantify the certainty associated with any particular model outcome. All data sources and models are more or less certain, Cressie explained, and it is the quantification of these certainties that are the building blocks of a Bayesian analysis.
In the case of the two North American regional climate models, his Bayesian analysis technique was able to give a range of possible temperature changes that includes the true temperature change with 95 percent probability.
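The paper’s actual model is a spatial Bayesian hierarchical one; the toy one-block sketch below, with an invented vague prior, only illustrates the general mechanism of turning several model projections into a posterior range that covers the true change with 95 percent probability.

```python
def posterior_change(model_means, model_vars, prior_mean=0.0, prior_var=25.0):
    """Normal-normal Bayesian update for a single grid block.

    Each model's projected change is treated as a noisy observation of the
    true change; the vague prior contributes little unless the models are
    very uncertain. Returns the posterior mean, variance, and 95% interval.
    """
    precision = 1.0 / prior_var + sum(1.0 / v for v in model_vars)
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var +
                            sum(m / v for m, v in zip(model_means, model_vars)))
    half_width = 1.96 * post_var ** 0.5
    return post_mean, post_var, (post_mean - half_width, post_mean + half_width)

mean, var, (lo, hi) = posterior_change([2.3, 2.8], [0.16, 0.25])
print(f"posterior mean {mean:.2f} C, 95% credible interval ({lo:.2f}, {hi:.2f}) C")
```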
After producing average maps for all of North America, the researchers took their analysis a step further and examined temperature changes for the four seasons. On their website, they show those seasonal changes for regions in the Hudson Bay, the Great Lakes, the Midwest, and the Rocky Mountains.
In the future, the Hudson Bay region will likely experience larger temperature swings than the others, they found.
That Canadian region in the northeast part of the continent is likely to experience the biggest change over the winter months, with temperatures estimated to rise an average of about 6 degrees Celsius (10.7 degrees Fahrenheit) – possibly because ice reflects less energy away from the Earth’s surface as it melts. Hudson Bay summers, on the other hand, are estimated to experience only an increase of about 1.2 degrees Celsius (2.1 degrees Fahrenheit).
According to the researchers’ statistical analysis, the Midwest and Great Lakes regions will experience a rise in temperature of about 2.8 degrees Celsius (5 degrees Fahrenheit), regardless of season. The Rocky Mountains region shows greater projected increases in the summer (about 3.5 degrees Celsius, or 6.3 degrees Fahrenheit) than in the winter (about 2.3 degrees Celsius, or 4.1 degrees Fahrenheit).
In the future, the researchers could consider other climate variables in their analysis, such as precipitation.
This research was supported by NASA’s Earth Science Technology Office. The North American Regional Climate Change Assessment Program is funded by the National Science Foundation, the U.S. Department of Energy, the National Oceanic and Atmospheric Administration, and the U.S. Environmental Protection Agency Office of Research and Development.
###
I have copied and pasted this from Christopher Booker’s excellent weekly column in the Sunday Telegraph.
“Someone whom I was delighted to meet again in Australia was Professor Ian Plimer, a prominent “climate sceptic”, who is one of Abbott’s advisers. In his latest entertaining book, How To Get Expelled From School (by asking the teachers 101 awkward scientific questions about their belief in global warming), Plimer cites a vivid illustration of how great is the threat posed to the planet by man-made CO2.
If one imagines a length of the Earth’s atmosphere one kilometre long, 780 metres of this are made up of nitrogen, 210 are oxygen and 10 metres are water vapour (the largest greenhouse gas). Just 0.38 of a metre is carbon dioxide, to which human emissions contribute one millimetre.”
This is why I am sceptical and smile to myself as these climate “scientists” constantly run around in metaphorical circles trying to prove that their computer models are right, when clearly they are not!
These folks are brilliant… uhm… religious! CO2 the newest false god. We cannot disprove their models for decades. But we should act prudently just in case they are right! The IPCC’s models continue to be wrong, yet the AGW believers still believe. Another sad day for people disguising themselves as scientists.
Complicate a climate model as much as you like, it’s still as scientifically relevant as an astrolabe.
This is all Bayesian statistics. It started out as a technique to formalize “soft” data, e.g. the opinions of knowledgeable persons. That is the “prior”, which is then modified by the actual data (in this case, not real data, but modelling results), and the result is the “posterior”. So this is a guess updated by another guess, ending up as shiny new consensus climate predictions.
Bayesian statistics, properly used, is a legitimate technique, but its popularity in climate science is probably due to the fact that you can get essentially any result you want by a judicious choice of prior. Nor is there any objective method for evaluating the validity of the prior, and you can go back and change the prior any number of times without having to tell anybody.
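To make that point concrete, here is a small sketch (all numbers invented) of a normal-normal conjugate update in which the same “data” lands on noticeably different posteriors depending on the prior chosen. With informative data the prior washes out; with weak data it can dominate the headline number.

```python
def posterior(prior_mean, prior_var, obs_mean, obs_var):
    """Normal-normal conjugate update: returns posterior mean and variance."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var

obs_mean, obs_var = 2.5, 1.0  # hypothetical model result treated as the "data"

for label, (pm, pv) in {"vague prior": (0.0, 100.0),
                        "strong warm prior": (4.0, 0.25)}.items():
    m, v = posterior(pm, pv, obs_mean, obs_var)
    print(f"{label:>17}: posterior mean {m:.2f}, sd {v ** 0.5:.2f}")
```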
Seems quite simple to me – no matter how many variables are included, how many million lines of code, no matter how airflows and ocean currents are taken into account, if the effect of increasing carbon dioxide is predicated as a ‘nett energy gain to the system’;
Guess what, when you run that model the output WILL tell you that the system will get warmer.
Bill Tuttle says:
May 15, 2012 at 9:11 pm
“One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe,” he said.
Actually that is true.
It’s just that the models give results different from what actually happens, and that is what causes the doubt.
No-one cares that they disagree in many different wrong ways.
Boring! Let’s have some variety. It’s always warming everywhere. Red, red, red.
There is an old saying “Don’t put all your eggs in one basket”.
Cressie is the main man in spatial stats today, specifically spatio-temporal stats.
Worse than GIGO. More like G²IG²O.
“One of the criticisms from climate-change skeptics is that different climate models give different results, so they argue that they don’t know what to believe,” he said.
I would feel more confident if his project was driven by his own critical perspective and that of other sceptical scientists.
Can someone tell me if this is called “Guess-Laundering”? I may have just been the first to coin a phrase for these new attempts to persuade useful idiots…
benfrommo says:
May 15, 2012 at 10:40 pm
“But as the quote shows here, they cherry pick a time when temperatures went up and CO2 went up to show their case. Why don’t they look at either the ENTIRE time period of say 1950 – today instead of cherry picking the time when CO2 levels and temperatures both went up?”
Exactly. The approach might have merit but by ignoring the period 2000-2010 – the period with which all the models have problems – they are reducing their own work to a caricature.
Cherrypicking the data for the validation means that the validation is worthless. During validation of a model, you should always strive to use as much data as is available.
Furthermore, even if the validation had been done properly, this by no means proves predictive skill out to 2070. What are they thinking? If, as Mosher says, “Cressie is the main man in spatial stats today”, I would expect him to know that. If he pretends he doesn’t… well…
LOL otter, I was about to say Go Bucks too! But I think they chose all the red to show their team colors. Scarlet and Grey! 🙂
All joking aside, I never take those model studies seriously because they are never true. Just wish the MSM would realize how far off those things are.
Steven Mosher says:
May 16, 2012 at 12:29 am
Cressie is the main man in spatial stats today, specifically spatio-temporal stats.
SO WHAT?
M Courtney says:
May 16, 2012 at 12:06 am
@ me, May 15, 2012 at 9:11 pm
“…so they argue that they don’t know what to believe,” he said.
Actually that is true.
It’s just that the models give results different from what actually happens, and that is what causes the doubt.
No-one cares that they disagree in many different wrong ways.
Cressie’s statement means he assumes that if they can just bring all their models into agreement, the 60-watt cartoon light bulb (a CFL, naturally) over our collective heads will suddenly blink “on” and we’ll cease being so contrary — which is why he’s concentrating on getting the models to agree. Therein lies the rub:
All data sources and models are more or less certain, Cressie explained…
He’s saying the models are just fine — we’re saying the models are *not* fine. The models are fundamentally flawed because they have no means of replicating all the factors influencing the climate other than by inputting assumptions, and those assumptions are colored by bias, e.g., CO2 drives temperature rather than follows it, or that an increase in temperature automatically triggers an increase in water vapor. In essence, their macro-models rely on a host of micro-models, and all of them are programmed to run on assumptions which don’t replicate reality. And here’s Cressie saying we can fix that merely by adding more models to the mix, rather than working on refining the assumptions.
It’s not that we don’t know *which* of their climate models to believe, we don’t believe *any* of their climate models, because none of them are capable of producing either the kind of results or “high certainty” that they claim.
The garbage is built into the models — to add multiple models and then expect filet mignon to come out is unrealistic.
otter17~ You understand! Bucks, moola, $$$, cha-ching….! That’s what ‘science’ like this, is all about.
“One of the criticisms from climate-change skeptics” Any chance of them pointing out who actually claims climate does not change, or is this just a standard throwaway line designed not for its scientific use but for political purposes?
Meanwhile it’s the standard approach: start with base assumptions that support your views, and never mind their actual validity (since this is ‘the cause’, that is a minor issue), then run models which tell you how bad things will get. Finish by asking for more research cash to do the same again.
“We show that there are shared conclusions upon which scientists can agree with some certainty, and we are able to statistically quantify that certainty.”
Certainty?
What they have measured is the extent scientists agree, which has nothing to do with certainty in what they believe. Only certainty (in a statistical sense) that they do believe it.
Scientific certainty results from predictive accuracy. Something climate models are not known for.
One step further – GIGO + lies, damn lies + Bayesian Stats = Consensus = Certainty
Ehem – Guess again
My mother always told me that two wrongs don’t make a right; pity no-one told these guys that two wrong models don’t make a right one.
The posterior mean of the average temperature change projections
Posterior my arse.
a.) What are they smoking??!?!
b.) Where can I buy some?
I see a lot of criticism above, with which I agree. Without comparing models to reality, statements such as this are meaningless.
They performed advanced statistical analysis on two different North American regional climate models and were able to estimate projections of temperature changes for the years 2041 to 2070, as well as the certainty of those projections.
Without a track record, there is no way to estimate ‘certainty’ other than SWAG.
Bloke down the pub says:
May 16, 2012 at 2:52 am
“The posterior mean of the average temperature change projections”
Posterior my arse.
Threadwinner!
Steven Mosher says:
May 16, 2012 at 12:29 am
Cressie is the main man in spatial stats today, specifically spatio-temporal stats.
How is he with spatio-temperature stats?