Statistics research could build consensus around climate predictions
Philadelphia, PA—Vast amounts of data related to climate change are being compiled by research groups all over the world. These many and varied sources yield different climate projections; hence the need arises to combine information across data sets to arrive at a consensus regarding future climate.
In a paper published last December in the SIAM/ASA Journal on Uncertainty Quantification, authors Matthew Heaton, Tamara Greasby, and Stephan Sain propose a statistical hierarchical Bayesian model that consolidates climate change information from observation-based data sets and climate models.
“The vast array of climate data—from reconstructions of historic temperatures and modern observational temperature measurements to climate model projections of future climate—seems to agree that global temperatures are changing,” says author Matthew Heaton. “Where these data sources disagree, however, is by how much temperatures have changed and are expected to change in the future. Our research seeks to combine many different sources of climate data, in a statistically rigorous way, to determine a consensus on how much temperatures are changing.”
Using a hierarchical model, the authors combine information from these various sources to obtain an ensemble estimate of current and future climate along with an associated measure of uncertainty. “Each climate data source provides us with an estimate of how much temperatures are changing. But, each data source also has a degree of uncertainty in its climate projection,” says Heaton. “Statistical modeling is a tool to not only get a consensus estimate of temperature change but also an estimate of our uncertainty about this temperature change.”
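The paper’s full hierarchical Bayesian model is far richer than this, but the core idea of pooling several uncertain estimates into one consensus value, with its own uncertainty, can be sketched with a simple inverse-variance (precision-weighted) average. This is only a minimal illustration, not the authors’ method; the function name and the numbers are hypothetical.

```python
import numpy as np

def combine_estimates(means, sds):
    """Pool independent estimates by inverse-variance (precision) weighting.

    Returns the pooled mean and its standard deviation; this is the
    conjugate-normal result a Bayesian hierarchy reduces to in the
    simplest case of known variances and a flat prior.
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(sds, dtype=float) ** 2
    pooled_mean = np.sum(precisions * means) / np.sum(precisions)
    pooled_sd = np.sqrt(1.0 / np.sum(precisions))
    return pooled_mean, pooled_sd

# Hypothetical warming estimates (deg C) from three sources, with their uncertainties
mean, sd = combine_estimates([2.0, 3.5, 4.0], [0.5, 1.0, 1.5])
print(f"consensus: {mean:.2f} +/- {sd:.2f} deg C")
```

Note that the pooled standard deviation is smaller than that of any single source, which is the main payoff of combining data sets.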
The approach proposed in the paper combines information from observation-based data, general circulation models (GCMs) and regional climate models (RCMs).
Regional analysis for climate change assessment. Image credit: Melissa Bukovsky, National Center for Atmospheric Research (NCAR/IMAGe)
Observation-based data sets, which focus mainly on local and regional climate, are obtained by taking raw climate measurements from weather stations and mapping them onto a grid defined over the globe. This allows the final data product to provide an aggregate measure of climate rather than being restricted to individual weather records. Such data sets are limited to current and historical time periods. A related source of information is reanalysis data sets, in which numerical model forecasts and weather station observations are combined into a single gridded reconstruction of climate over the globe.
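As a rough illustration of how point measurements become a gridded product, the sketch below bins hypothetical station readings into latitude/longitude cells and averages within each cell. Real gridded data sets rely on far more careful interpolation, homogenization, and quality control; everything here is made up for illustration.

```python
import numpy as np

def grid_average(lats, lons, temps, cell=2.5):
    """Average station temperatures onto a regular lat/lon grid.

    A deliberately crude stand-in for real gridding: each station is
    assigned to a cell `cell` degrees wide, and a cell's value is the
    plain mean of the stations that fall inside it.
    """
    grid = {}
    for lat, lon, temp in zip(lats, lons, temps):
        key = (float(np.floor(lat / cell) * cell), float(np.floor(lon / cell) * cell))
        grid.setdefault(key, []).append(temp)
    return {key: float(np.mean(vals)) for key, vals in grid.items()}

# Hypothetical stations: latitudes, longitudes, and summer-mean temperatures (deg C)
cells = grid_average([40.1, 40.8, 42.3], [-75.2, -74.9, -71.1], [23.4, 24.1, 21.7])
print(cells)
```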
GCMs are computer models that capture the physical processes governing the atmosphere and oceans to simulate the response of temperature, precipitation, and other meteorological variables under different scenarios. While a GCM portrayal of temperature would not be accurate for any given day, these models give fairly good estimates of long-term average temperatures, such as 30-year averages, which closely match observed data. A big advantage of GCMs over observed and reanalyzed data is that GCMs are able to simulate climate systems in the future.
RCMs are used to simulate climate over a specific region, as opposed to global simulations created by GCMs. Since climate in a specific region is affected by the rest of the earth, atmospheric conditions such as temperature and moisture at the region’s boundary are estimated by using other sources such as GCMs or reanalysis data.
By combining information from multiple observation-based data sets, GCMs, and RCMs, the model obtains an estimate, with an associated measure of uncertainty, for the average temperature and its temporal trend, as well as for the variability of seasonal average temperatures. The model was used to analyze average summer and winter temperatures for the Pacific Southwest, Prairie, and North Atlantic regions (seen in the image above)—regions that represent three distinct climates. The expectation is that the climate models behave differently in each of these regions. Data from each region were considered individually so that the model could be fit to each region separately.
“Our understanding of how much temperatures are changing is reflected in all the data available to us,” says Heaton. “For example, one data source might suggest that temperatures are increasing by 2 degrees Celsius while another source suggests temperatures are increasing by 4 degrees. So, do we believe a 2-degree increase or a 4-degree increase? The answer is probably ‘neither,’ because combining data sources together suggests that increases would likely be somewhere between 2 and 4 degrees. The point is that no single data source has all the answers. And only by combining many different sources of climate data are we really able to quantify how much we think temperatures are changing.”
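Heaton’s 2-versus-4-degree example is exactly what the precision weighting sketched above captures: two equally uncertain sources average to 3 degrees, while a more confident source pulls the consensus toward its own estimate. The uncertainties below are invented purely for a back-of-the-envelope check.

```python
# Two hypothetical warming estimates (deg C) with standard deviations
est = [(2.0, 0.8), (4.0, 0.8)]  # equally uncertain sources
w = [1 / s**2 for _, s in est]
print(sum(wi * m for wi, (m, _) in zip(w, est)) / sum(w))  # -> 3.0, the midpoint

est = [(2.0, 0.5), (4.0, 1.5)]  # the 2-degree source is more certain
w = [1 / s**2 for _, s in est]
print(sum(wi * m for wi, (m, _) in zip(w, est)) / sum(w))  # -> 2.2, pulled toward 2
```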
While most previous work of this kind focuses on mean or average values, the authors of this paper acknowledge that climate in the broader sense encompasses averages, trends, year-to-year variations, and extreme events. Hence the hierarchical Bayesian model used here simultaneously considers the average, the linear trend, and the interannual variability (variation between years). Many previous models also assume independence between climate models, whereas this paper accounts for commonalities shared by the various models—such as physical equations or fluid dynamics—and allows for correlation between data sets.
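For a single region and a single data source, “average plus linear trend plus interannual variability” reduces to something as simple as a least-squares fit of yearly averages against time, with the residual spread standing in for year-to-year variability. The paper’s model does this jointly across all sources, with correlations between them, inside a Bayesian hierarchy; the sketch below uses synthetic numbers, not the paper’s data.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2011)
# Synthetic summer-mean temperatures: 24 deg C baseline, 0.03 deg C/yr trend, noisy years
temps = 24.0 + 0.03 * (years - years[0]) + rng.normal(0.0, 0.6, years.size)

# Ordinary least squares: temperature = mean level + trend * (years since 1980)
X = np.column_stack([np.ones(years.size), years - years[0]])
coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
resid = temps - X @ coef
interannual_sd = resid.std(ddof=2)  # spread of yearly averages around the trend line

print(f"mean level ~ {coef[0]:.2f} deg C, trend ~ {coef[1] * 10:.2f} deg C/decade, "
      f"interannual variability ~ {interannual_sd:.2f} deg C")
```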
“While our work is a good first step in combining many different sources of climate information, we still fall short in that we still leave out many viable sources of climate information,” says Heaton. “Furthermore, our work focuses on increases/decreases in temperatures, but similar analyses are needed to estimate consensus changes in other meteorological variables such as precipitation. Finally, we hope to expand our analysis from regional temperatures (say, over just a portion of the U.S.) to global temperatures.”
To read other SIAM Nuggets, which explain current high-level research involving applications of mathematics in popular-science terms, go to http://connect.siam.org/category/siam-nuggets/.
Source article:
Matthew J. Heaton, Tamara A. Greasby, and Stephan R. Sain
SIAM/ASA Journal on Uncertainty Quantification, 1(1), 535–559 (online publication date: December 17, 2013). The source article is available for free access at the link above through December 31, 2014.
About the authors:
Matthew Heaton is currently an assistant professor in the Department of Statistics at Brigham Young University; the majority of this work was done while he was a postgraduate scientist at the National Center for Atmospheric Research (NCAR). Tamara Greasby is a postdoctoral researcher and Stephan Sain is a scientist at the Institute for Mathematics Applied to the Geosciences (IMAGe) at NCAR.
Global warming’s worst year ever: http://www.breitbart.com/Big-Peace/2014/02/19/2014-is-Global-Warming-s-Worst-Year-Ever#
There is nothing rigorous about inputting results from models which fail. Any inputting of ‘model data’ which does not represent reality is unacceptable, anti-scientific nonsense.
The only things which can be combined in a rigorous study are primary data sets which are accepted as accurate and consistent.
“The everlasting delusion of statisticians that they can mine gold from worthless ore simply by applying statistics to a giant boatload of crappy data and model estimates.”
Not true. The delusion of people who think they are statisticians. Or even know that they aren’t but pretend that they are.
As several people have already said: there is so much wrong statistically with this “new approach” that it is meaningless to even start listing the faults.
Just for the record, I’m not a professional statistician myself, but I have read enough statistics, and worked enough with statistics and with statisticians, to know what I don’t know.
Consensus building = GOEBBLARIAN PROPAGANDA RIGHT OUT THE BOOK
Confucius says Consensus is for fools
Consensus says I agree.
The climate was the inspiration for Chaos Theory. The point about chaotic systems like this is that if you know everything about every molecule on the planet, and you have infinite computing power, you still can not predict the future state of the system.
So this is just a completely pointless waste of money.
Without any models this should end up looking much like the Stadium Wave, with all traces dropping fast and decades to pass before levels once more attain those at the end of the last century.
Add in the models and you may obtain whatever your fevered mind can conjure.
I can happily use Pythagoras’ theorem or Avogadro’s Hypothesis and they seem to work as close as I can measure. They are not proven but everyone agrees they do a pretty good job. Nothing wrong with consensus as part of your argument if it works, but therein lies the problem.
We will soon have a new and improved consensus, one based on reality. Have you noticed after each IPCC report the graph showing temperature projections keeps getting lowered towards observations? It keeps getting lowered to the sceptics’ point of view. This is truly the biggest failure of their ‘science’ – alarmism. Alarmism to force us to act on a non-problem while wasting billions on useless, improbable research deserving of the Ig Nobel Prize. / end rant.
Science is the search for true theories, correct explanations. A true theory has no chance of being refuted. Many a ‘probably true’ theory was wrong from the start and would have been found to be so much sooner had it been tested. Looking for confirmation and data ‘consistent with’ a conjecture is unholy work best left to those Bayesian Priors of the Church of Latter-day Alarmists.
“A big advantage of GCMs over observed and reanalyzed data is that GCMs are able to simulate climate systems in the future.”
Really? How can a computer model simulate the unknown? Nobody, least of all a modeller, can know what the climate will be in the years or decades ahead. The truth is modellers cannot simulate climate systems in the future because it is impossible. It’s gobbledegook!
A little OT … I can’t recall where I read it, but someone finally came up with what I think is the perfect antithesis for their word “denier” – “CLIMAPHOBE”, climaphobic, climaphobism, etc.
(Plus you can Homer-ize it too! “Climamaphobe”)
A meta-study that accumulates underlying real data might be valuable.
A meta-study based on models that do not match observations is of very dubious value except to attract funding.
And it looks like an open ended job as it will need regression every time someone revises the numbers.
It has already been said here, but GIGO seems to apply.
Mervyn says: “Really? How can a computer model simulate the unknown?”
Well that won’t matter. Eventually no one will know what’s going on outside because everyone will be plugged into The Matrix (or some sort of 24/7 online 100% immersive experience) and then ‘they’ can make up whatever they want for ‘reality’.
Bob Layson says: February 20, 2014 at 2:13 am ” Looking for confirmation and data ‘consistent with’ a conjecture is unholy work best left to those Bayesian Priors of the Church of Latter-day Alarmists.”
.. and their tree in Yamal.
Having no consensus, Galileo was obviously wrong about that solar-centric orbits theory. The fool!
I asked my wife, a retired biostatistician with many publications, to read the paper. Her conclusion was that it was too complex for her to judge without more effort than she was willing to exert. IMHO the complexity itself is both a strength and a weakness. It’s extremely difficult to show that the authors are correct or to show that they’re wrong.
I wonder how the Committee on Review of Papers dealt with this problem.
How many years of math study does it take to learn that the average of wrong is still wrong?
markstoval says:
February 19, 2014 at 1:37 pm
“They can’t even predict the past.”
Funniest comment in a while. I’ll be stealing that one! 🙂
========================================
RH says: February 19, 2014 at 1:39 pm
And I thought they ran out of ways to fudge data.
——————————————————————
Now that’s the funniest comment I’ve heard.
cn
Hello all: Long-time watcher, first-time commenter.
I’m reminded of an old adage in marketing. “97% of money spent on advertising is wasted, but it’s very hard to know in advance which 97%.”
Personally, I’d skew my sample as far as possible to the “denier” tail, increasing the chances of the “consensus” underestimating the “problem”. Alarmists got it wrong from the beginning, when they could just as easily have convinced an unwitting public that a very small dT would produce disastrous results. Now they’re stuck trying to deny temperature reality. Had they set up the rules differently at the start, the “cause” of all that’s bad in the world would be a given but uncontrollable, and the only way for the human race to survive would be to submit to totalitarian control of all global resources.
There’s only one thing t o say to that, and it would get moderated.
Really? I want to see it happen for myself rather than take your word for it.
It is interesting that the Farmers Almanac correctly predicted a bitterly cold winter in the States back in August. This is not the first time that the Almanac has predicted weather at odds with all the usual weather bureaus, and been proved correct. But then they use sunspots, tidal action, lunar cycles, and planetary positions to make their predictions. Given their [claimed] 80% success rate it is an area climate science should be investigating.
By ignoring almost all possible influences on climate the climate ‘science’ can never hope to successfully predict the climate. I look forward to the day that climate science studies the climate and tries to understand it rather than thinking up propaganda to support a false idea.
http://www.climatedepot.com/2014/02/20/feds-failed-with-winter-forecast-but-farmers-almanac-predicted-a-bitterly-cold-winter/
In Bayesian parameter estimation, a non-informative prior probability density function (PDF) serves as a premise to an argument. As prior PDFs are of infinite number, the law of non-contradiction is violated by this method.
There is an exception to the rule that Bayesian methods violate non-contradiction. This exception occurs when the quantities being estimated are the relative frequencies of events. By their decisions, the designers of the study of global warming that is referenced by the IPCC in its periodic assessment reports have ensured that there are no observed events or relative frequencies. Thus, the Bayesian method cannot be used without violation of non-contradiction. It follows that conclusions cannot be logically drawn through the use of this method about Earth’s climate.