Statistics research could build consensus around climate predictions
Philadelphia, PA—Vast amounts of data related to climate change are being compiled by research groups all over the world. These many and varied sources yield different climate projections; hence the need to combine information across data sets to arrive at a consensus on future climate.
In a paper published last December in the SIAM/ASA Journal on Uncertainty Quantification, authors Matthew Heaton, Tamara Greasby, and Stephan Sain propose a statistical hierarchical Bayesian model that consolidates climate change information from observation-based data sets and climate models.
“The vast array of climate data—from reconstructions of historic temperatures and modern observational temperature measurements to climate model projections of future climate—seems to agree that global temperatures are changing,” says author Matthew Heaton. “Where these data sources disagree, however, is by how much temperatures have changed and are expected to change in the future. Our research seeks to combine many different sources of climate data, in a statistically rigorous way, to determine a consensus on how much temperatures are changing.”
Using a hierarchical model, the authors combine information from these various sources to obtain an ensemble estimate of current and future climate along with an associated measure of uncertainty. “Each climate data source provides us with an estimate of how much temperatures are changing. But, each data source also has a degree of uncertainty in its climate projection,” says Heaton. “Statistical modeling is a tool to not only get a consensus estimate of temperature change but also an estimate of our uncertainty about this temperature change.”
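To make the idea concrete, here is a minimal sketch of such a combination (not the authors' model; the per-source estimates and standard errors are invented for illustration): each source reports a temperature-change estimate with its own uncertainty, and a simple hierarchical model yields a posterior for the consensus change and for the spread between sources.

```python
import numpy as np

# Hypothetical per-source estimates of temperature change (deg C) and their
# standard errors; these numbers are invented for illustration only.
d = np.array([2.1, 2.8, 3.6, 4.0])
s = np.array([0.4, 0.6, 0.5, 0.8])

# Classic random-effects (hierarchical) model, marginalized:
#   d[j] ~ Normal(mu, s[j]^2 + tau^2),
# with mu the consensus change and tau the extra between-source spread.
# Evaluate the posterior on a grid with flat priors on mu and tau >= 0.
mu_grid = np.linspace(0.0, 6.0, 601)
tau_grid = np.linspace(0.0, 3.0, 301)
MU, TAU = np.meshgrid(mu_grid, tau_grid, indexing="ij")

var = s[None, None, :] ** 2 + TAU[..., None] ** 2            # (n_mu, n_tau, n_sources)
loglik = -0.5 * np.sum((d - MU[..., None]) ** 2 / var + np.log(var), axis=-1)
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Marginal posterior for the consensus change mu: mean and spread.
post_mu = post.sum(axis=1)
mu_mean = np.sum(mu_grid * post_mu)
mu_sd = np.sqrt(np.sum((mu_grid - mu_mean) ** 2 * post_mu))
print(f"consensus change: {mu_mean:.2f} +/- {mu_sd:.2f} deg C")
```

The printed consensus estimate reflects both the individual standard errors and the disagreement between sources, which is the role uncertainty quantification plays in the ensemble estimate described above.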
The approach proposed in the paper combines information from observation-based data, general circulation models (GCMs) and regional climate models (RCMs).
Regional analysis for climate change assessment. Image credit: Melissa Bukovsky, National Center for Atmospheric Research (NCAR/IMAGe)
Observation-based data sets, which focus mainly on local and regional climate, are obtained by taking raw climate measurements from weather stations and mapping them onto a grid defined over the globe. This allows the final data product to provide an aggregate measure of climate rather than being restricted to individual weather data sets. Such data sets are limited to current and historical time periods. Another source of information related to observation-based data sets is the reanalysis data set, in which numerical model forecasts and weather station observations are combined into a single gridded reconstruction of climate over the globe.
GCMs are computer models that capture the physical processes governing the atmosphere and oceans in order to simulate how temperature, precipitation, and other meteorological variables respond under different scenarios. While a GCM's portrayal of temperature is not accurate for any given day, these models give fairly good estimates of long-term averages, such as 30-year means, which closely match observed data. A big advantage of GCMs over observed and reanalysis data is that they can simulate the climate system into the future.
RCMs are used to simulate climate over a specific region, as opposed to global simulations created by GCMs. Since climate in a specific region is affected by the rest of the earth, atmospheric conditions such as temperature and moisture at the region’s boundary are estimated by using other sources such as GCMs or reanalysis data.
By combining information from multiple observation-based data sets, GCMs and RCMs, the model obtains an estimate, with an associated measure of uncertainty, of the average temperature, the temporal trend, and the variability of seasonal average temperatures. The model was used to analyze average summer and winter temperatures for the Pacific Southwest, Prairie and North Atlantic regions (seen in the image above)—regions that represent three distinct climates and in which climate models would be expected to behave differently. Data from each region were considered individually so that the model could be fit to each region separately.
“Our understanding of how much temperatures are changing is reflected in all the data available to us,” says Heaton. “For example, one data source might suggest that temperatures are increasing by 2 degrees Celsius while another source suggests temperatures are increasing by 4 degrees. So, do we believe a 2-degree increase or a 4-degree increase? The answer is probably ‘neither’ because combining data sources together suggests that increases would likely be somewhere between 2 and 4 degrees. The point is that no single data source has all the answers. And, only by combining many different sources of climate data are we really able to quantify how much we think temperatures are changing.”
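For intuition about why the consensus lands between the two figures in the quote, treat them as independent estimates and combine them by precision weighting; the standard errors below are invented for illustration and are not from the paper. With a 2 °C estimate carrying a standard error of 0.5 °C and a 4 °C estimate carrying a standard error of 1.0 °C,

\[
\hat{\Delta} \;=\; \frac{\Delta_1/\sigma_1^2 + \Delta_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}
\;=\; \frac{2/0.25 + 4/1}{1/0.25 + 1/1} \;=\; 2.4\ ^{\circ}\mathrm{C},
\qquad
\mathrm{sd}(\hat{\Delta}) \;=\; \Bigl(\tfrac{1}{\sigma_1^2} + \tfrac{1}{\sigma_2^2}\Bigr)^{-1/2} \;\approx\; 0.45\ ^{\circ}\mathrm{C},
\]

which lies between the two sources and is more precise than either alone; the paper's hierarchical model generalizes this weighting to many sources with structured dependence.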
While most previous such work focuses on mean or average values, the authors in this paper acknowledge that climate in the broader sense encompasses variations between years, trends, averages and extreme events. Hence the hierarchical Bayesian model used here simultaneously considers the average, linear trend, and interannual variability (variation between years). Many previous models also assume independence between climate models, whereas this paper accounts for commonalities shared by the various models—such as common physical equations or fluid dynamics—and for correlations between data sets.
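As a concrete illustration of those three components, the sketch below (synthetic data, not the paper's model or data) fits a mean level and a linear trend to a single series of yearly averages and takes the interannual variability from the residuals; the paper's hierarchical model estimates these quantities jointly across data sources and regions rather than one series at a time.

```python
import numpy as np

# One hypothetical region/season: yearly average temperatures with a chosen
# mean level, trend, and interannual noise (all values invented).
rng = np.random.default_rng(0)
years = np.arange(1980, 2010)
temps = 15.0 + 0.03 * (years - years.mean()) + rng.normal(0.0, 0.6, years.size)

# Ordinary least squares for [mean level, trend]; the residual spread is the
# interannual variability around the trend line.
X = np.column_stack([np.ones(years.size), years - years.mean()])
coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
resid = temps - X @ coef
interannual_sd = resid.std(ddof=2)

print(f"mean level {coef[0]:.2f} deg C, trend {10 * coef[1]:.2f} deg C/decade, "
      f"interannual sd {interannual_sd:.2f} deg C")
```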
“While our work is a good first step in combining many different sources of climate information, we still fall short in that we still leave out many viable sources of climate information,” says Heaton. “Furthermore, our work focuses on increases/decreases in temperatures, but similar analyses are needed to estimate consensus changes in other meteorological variables such as precipitation. Finally, we hope to expand our analysis from regional temperatures (say, over just a portion of the U.S.) to global temperatures.”
To read other SIAM Nuggets, which explain current high-level research involving applications of mathematics in popular-science terms, go to http://connect.siam.org/category/siam-nuggets/.
Source article:
Matthew J. Heaton, Tamara A. Greasby, and Stephan R. Sain
SIAM/ASA Journal on Uncertainty Quantification, 1(1), 535–559 (Online publish date: December 17, 2013). The source article is available for free access at the link above through December 31, 2014.
About the authors:
Matthew Heaton is currently an assistant professor in the department of statistics at Brigham Young University, but the majority of this work was done while the author was a postgraduate scientist at the National Center for Atmospheric Research. Tamara Greasby is a postdoctoral researcher and Stephan Sain is a scientist at the Institute for Mathematics Applied to the Geosciences (IMAGe) at the National Center for Atmospheric Research (NCAR).

Signal THEORY…there is a point at which NOISE dominates INFORMATION.
I have YET to hear anyone talk about SIGNAL THEORY. (Oh, I got it….too much NOISE out there, no SIGNAL..)
@Cynical Scientst: What I didn’t realize, upon interacting with many scientists who work in the environmental sciences, is that they firmly believe that cause and effect can be linked through statistics alone. The physics-based connection doesn’t seem to be all that important. Worse, if you try to argue the problems associated with a black-box statistical approach, you will be chastised for not knowing anything about statistics. Worse still, if you happen to be an engineer, you will be chastised because, apparently, engineers only understand deterministic systems. Thankfully, I have a PhD in statistical data analysis along with an electrical engineering degree, so I simply ignore the ad hominem attacks.
However, there seems to be a problem with how we are educating some students of the sciences. They are abusing the idea that correlation, with all other factors known, is equivalent to causation. This is essentially impossible to establish with complex systems. Worse again, after they mistakenly assume that causation has been established through the use of statistics, everything that they observe a posteriori becomes evidence. Bad news all around.
I would truly like to see how they combine data “in a statistically rigorous way”, with data drawn from a daily sample size of ONE, non-random, non-replicated grab sample!!!
Amaze me….
Spoiler alert — You don’t get “data” from a climate model! You don’t get data from any model. Model outputs are not facts.
@Lattitude,
Love the article (http://www.businessweek.com/articles/2014-02-18/the-official-forecast-of-the-u-dot-s-dot-government-never-saw-this-winter-coming) and especially love this little quote:
‘Climatologists are trying to use their big miss this winter as a learning experience’. Really? There’s stuff to learn? But this is Settled Science.
Looks like a long-winded but potentially face-saving exercise for those ready to distance themselves from the increasingly Baroque and increasingly embarrassing GCMs. If that is what it will take to calm things down a bit on both the policy and the science fronts, then it is good news.
Why is it wrong to shoot people? It is blatantly obvious that some people are seriously in need of shooting (twice just to make sure) ^.^
Seeking qualitative vs. quantitative rationalization of inherently deficient GCMs is a first-order exercise in turd-polishing. No-one is or can be “expert on the future”, and “error bars” pretending otherwise are naught but academic snares, delusions.
“Climate research” is a classificatory discipline akin to botany. If and when a Mendelian genetic analog appears, benighted catalogers will be the last to know.
Gamecock says:
February 19, 2014 at 1:20 pm
They can’t even predict the past.
———————————————
Haha. That’s because they keep changing it!
Steve from Rockwood says:
February 19, 2014 at 2:28 pm
“Gamecock says:
February 19, 2014 at 1:20 pm
They can’t even predict the past.
———————————————
Haha. That’s because they keep changing it!”
The past is easy to predict. The past always cools over time.
Science by a committee of politicians and parasites — OH, How Wonderful.
Good grief, can’t they even READ?
The IPCC actually said in the Science Report in TAR:
So they are going to ignore the fact that climate is ‘a complex non linear chaotic signature’.
Instead:
We’ll toss eye of newt and toe of frog, wool of bat and tongue of dog, adder’s fork and blind-worm’s sting, lizard’s leg and owlet’s wing, into the charmèd computer all shall go, chanting incantations statistical while stirring well. We’ll call it Gore-bull Worming, this charm of trouble, this bubbling broth of he!!.
(Apologies to the Bard.)
Nuff said (“The term hierarchical model is sometimes considered a particular type of Bayesian network, but has no formal definition”). For starters: anything goes.
Just to make it absolutely clear, Bayesian statistics is a way to build a formidable castle on quicksand by formalizing stubborn prejudices as priors, while “hierarchical” is a buzzword making this process as convoluted, opaque and intractable as possible.
On the plus side, the method was developed to handle marketing issues, a class to which “climate change” belongs.
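For what it’s worth, the prior sensitivity described above is easy to demonstrate with a toy normal-normal model (all numbers invented): a tight prior dominates the posterior when data are scarce and washes out as data accumulate.

```python
import numpy as np

def posterior(prior_mean, prior_sd, data, noise_sd):
    """Closed-form normal-normal posterior for the mean, with known noise_sd."""
    precision = 1.0 / prior_sd**2 + len(data) / noise_sd**2
    mean = (prior_mean / prior_sd**2 + np.sum(data) / noise_sd**2) / precision
    return mean, np.sqrt(1.0 / precision)

rng = np.random.default_rng(1)
truth, noise_sd = 2.0, 1.0
for n in (3, 30, 300):
    data = rng.normal(truth, noise_sd, n)
    # A tight ("stubborn") prior centered at 0, far from the truth of 2.
    mean, sd = posterior(prior_mean=0.0, prior_sd=0.2, data=data, noise_sd=noise_sd)
    print(f"n={n:3d}: posterior mean {mean:.2f} +/- {sd:.2f} (truth {truth})")
```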
“statistical hierarchical Bayesian model”
This idea will definitely work because the simple ideas are always closer to nature and there is nothing more simple and easy to understand and less convoluted than a good old “statistical hierarchical Bayesian model”. Honestly guv, it is not BS, it is a really simple way of getting to the predefined truth – that global temperatures have been soaring over the past few decades and urgent action is required NOW to save the planet. For our children! [/sarc]
“only by combining many different sources of climate data are we really able to quantify how much we think temperatures are changing.”
Two basic errors here:
1. Not all data sources are equally reliable. You need a good crystal ball to assign weights.
2. Even in the best case, it will only tell us how the temperatures WERE changing.
Better throw some climate models in there or else they might just get it right by accident. We can’t have that, can we?
Science by committee, at its worst.
Gamecock says:
February 19, 2014 at 1:20 pm
They can’t even predict the past.
Nice one!
BioBob says: February 19, 2014 at 2:04 pm
I would truly like to see how they combine data “in a statistically rigorous way”, with data drawn from a daily sample size of ONE, non-random, non-replicated grab sample!!!
>>>>>>>>>>>>>>>>
Yes, one of my favorite nits to pick.
It’s subprime mortgage thinking, applied to climate models. Over here we have a dog turd, but if I put it in a bag with dozens of other dog turds it becomes a reliable low risk investment, er, model.
It will probably turn out the same way as subprime, with perpetrators getting rich and victims still not sure how the people “in charge” let it happen.
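The statistical point behind the analogy can be sketched in a few lines (invented numbers, nothing from the paper): averaging many model outputs shrinks their independent noise, but an error they all share does not average away.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 2.0          # "true" temperature change (invented)
shared_bias = 1.0    # error every model has in common (e.g. shared assumptions)
n_models = 50

# Each model output = truth + shared bias + its own independent noise.
outputs = truth + shared_bias + rng.normal(0.0, 0.5, n_models)

print(f"ensemble mean {outputs.mean():.2f} vs truth {truth:.2f}; "
      f"model-to-model spread {outputs.std(ddof=1):.2f}")
```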
It’s not unheard of; there were in fact various meetings to decide which gospels should stay in and which stay out of the Bible, and much rewriting of scripture to suit the political purposes of the church. These people are just carrying on in that ‘fine tradition’. We already have St Gore with the ‘prophet’ Mann, whose words should never be questioned, calls for the prosecution of ‘unbelievers’ and hatred for ‘heretics’, so this is hardly a big step. It’s just the next logical step.
Come to think of it, there is this joke about how many French it takes to defend Paris, and the answer is that the number is not known; it hasn’t been tried. I suspect the remake of this for the future would be how many climate scientists it takes to solve the climate problem, and from this post it very much feels like that number is still not known; it hasn’t been tried.
Now the thing is that France had all that modern (military) equipment at the time but didn’t use it, and that pretty much resembles climate science. All that stuff is only being used to scare people, but beyond that it’s not doing much good.
A wonderful new map of the world to be sure. Looks like the USA is the actual center of the universe.
I wonder how many orders of magnitude of Nyquist under-sampling THIS landmark study has??
So we have a whole new raft of statistical prestidigitation of climate data that was already known, but that helps nowt in foretelling what happens tomorrow.
This is kind of like putting a bunch of turds in a blender, setting it on puree, and seeing what happens. It’ll look a lot like the stuff I get when I eat at a buffet-style all-you-can-eat restaurant. I don’t get it; wouldn’t this just reproduce the same ensemble of bad models that we have already?
I believe that these activities, such as chasing consensus and arguing over theories which are unsupported empirically, are all an orchestrated diversion and smokescreen.
The real danger comes from the United Nations and UN Agenda 21 which is the parent of AGW.
Is the UN a dangerous and malignant bureaucracy? My research points to this.
There is no conspiracy as everything is clearly described and published on official websites etc. All we need to do is read it!
Here is an example on Agenda 21 policy on land.
See my highlights in red on page 8.
http://thedemiseofchristchurch.files.wordpress.com/2014/02/unitednations-conference-on-human-settlements_habitat1.pdf
No wonder property rights are being abused in my city under the guise of an earthquake recovery.
Ever consider the Agenda 21 wetlands policy? It works like this. 1. Persuade a farmer to part with a swampy area of his farm and label it a ‘wetlands restoration’. 2. Do nothing and let the wetland ‘recover’, which means the original drainage will block and the wetland/swamp will spread further onto the farmer’s land. 3. If the farmer in desperation clears the drainage problem, then fine or gaol him. 4. Observing that there are now further spreading swamps (‘wetlands’) encroaching on said farmer’s land, well, take that off him too.
http://www.stuff.co.nz/marlborough-express/news/8790787/Farmer-guilty-of-damaging-Rarangi-wetland
Note that this guy once owned the land in question; it is not mentioned how it was transferred to the government. Also, willows are mentioned. Willows are not native to New Zealand and used to be regarded as pests because they block waterways.
In my country, maybe more than 50% of farms are comprised of more than 90% drained swamp.
Check out the warped thinking on page 12. Re: educating men or women!
http://thedemiseofchristchurch.files.wordpress.com/2014/02/education-for-sustainable-development-toolkit.pdf
My blog has more real life examples!
Please visit both posts.
http://www.thedemiseofchristchurch.com
Cheers
Roger
I take it that a “Bayesian Model” is a German way to make Aspirin.
Come to think of it, I could use an aspirin right about now after reading this “peer reviewed” (?) paper. I’ll go with the Monsanto cheap aspirin.
And they don’t need to try and convince me that their statistical mathematics is rigorous. I assume that, a priori (Latin words I don’t understand), about virtually any branch of mathematics. It’s the mischief they get up to with it that grabs my attention.
Climate models are a good idea and they do work as far as they are designed to.
What is a climate model? It is an attempt to take the known historical data, put it on a grid, apply all of the physics that we *can compute*, and see whether we can make predictions about the future.
Climate models do work.
We can make very good predictions out to a couple of weeks and excellent 24 hour predictions on a local scale.
BUT.
1. We cannot test predictions on a global scale because we do not even have the global data collection system in place to collect the empirical data that we are trying to predict to anywhere near the precision being spoken about.
2. We cannot (and will never) compute the small scale processes, which are vital and which cannot be ignored and cannot be “averaged”, due to lack of computational power.
3. The equations themselves tell us that predictions beyond a few weeks, on any spatial scale, are COMPLETE NONSENSE because of the error due to the chaotic nature of the nonlinearities. Any real mathematician will tell you that. Just show him the equations and tell him how closely spaced in space and time you have measured the historical data fields. He will tell you that your forward run integrations will get less and less accurate into the future and that after a week (at best!) the uncertainty in your measurement will completely swamp the “signal”.
On point number three: where are all the Professors of Mathematics? Why are they so silent? Why don’t they pipe up and say something about the insanity of pumping billions of dollars into Climate Predictions? Even if you try to predict the end positions of 10 snooker balls which start out spaced by 1 foot in a line, with the first one being struck into the next and so on, you will fail. Why? Because you cannot measure the starting locations of the balls and the strike point of the cue well enough for the error in that measurement not to swamp the “signal”. How can we expect to predict where the energy and moisture concentrations in the Ocean and Atmosphere will end up 20 years from now? How? Why do we think it is even possible?
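The snooker-ball argument is the standard sensitive-dependence-on-initial-conditions point, and it can be illustrated with a toy system; the sketch below uses the Lorenz-63 equations with a crude Euler integrator (an illustration only, not a climate model), showing two runs whose starting points differ by one part in a million ending up far apart within a few tens of model time units.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz-63 system (illustration only)."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # initial conditions differ by one part in a million
for step in range(1, 10001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 2000 == 0:
        print(f"t = {step * 0.005:4.0f}: separation = {np.linalg.norm(a - b):.3e}")
```

The separation grows roughly exponentially until it saturates at the size of the attractor, after which the two runs share statistics but not trajectories.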
The thing that has gone wrong in climate science is that the people who are now modelling the climate do not understand the mathematical facts about the equations they are modelling. The first set of people to write GCMs did understand. They were mathematicians who understood chaos theory and the limitations of what could be predicted, i.e., they knew that the best they could achieve were very good weather models.
But now we have people who are called “Climate Scientists” doing the work, and they do not understand the fundamental mathematics of the equations they are modelling. It is beyond their capacity. Because as Professor Lindzen said recently, “Climate science is for second-raters”. They do not understand that it is IMPOSSIBLE to predict global temperature out to many years or decades.
I REPEAT AT THE TOP OF MY VOICE: IT IS IMPOSSIBLE TO PREDICT TEMPERATURE OUT TO MANY YEARS OR DECADES, THE EQUATIONS TELL US THAT.