Stat Model Predicts Flat Temperatures Through 2050

Doug L. Hoffman

The Resilient Earth

Friday, Dec 18th, 2009

While climate skeptics have gleefully pointed to the past decade’s lack of temperature rise as proof that global warming is not happening as predicted, climate change activists have claimed that this is just “cherry picking” the data. They point to their complex and error-prone general circulation models which, after significant refactoring, now predict a stretch of stable temperatures followed by a resurgent global warming onslaught. In a recent paper, a new type of model, based on a test for structural breaks in surface temperature time series, is used to investigate two common claims about global warming. This statistical model predicts no temperature rise until 2050, but the more interesting prediction is what happens between 2050 and 2100.

David R.B. Stockwell and Anthony Cox, in a paper submitted to the International Journal of Forecasting entitled “Structural break models of climatic regime-shifts: claims and forecasts,” have applied advanced statistical analysis to both Australian temperature and rainfall trends and global temperature records from the Hadley Centre’s HadCRU3GL dataset. The technique they used is called the Chow test, invented by economist Gregory Chow in 1960. The Chow test is a statistical test of whether the coefficients in two linear regressions on different data sets are equal. In econometrics, it is commonly used in time series analysis to test for the presence of a structural break.
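A minimal sketch of the Chow F-statistic for a single candidate break point, assuming simple intercept-plus-trend regressions on each segment (illustrative Python only, not the authors' actual code):

```python
import numpy as np

def rss_linear(t, y):
    """Residual sum of squares of an OLS fit y ~ intercept + slope * t."""
    X = np.column_stack([np.ones_like(t), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def chow_statistic(t, y, break_idx, k=2):
    """Chow F-statistic for a structural break at index break_idx.

    Compares one pooled regression over the full series against
    separate regressions on the two segments; k is the number of
    estimated coefficients per regression (intercept and slope).
    """
    n = len(y)
    rss_pooled = rss_linear(t, y)
    rss_split = (rss_linear(t[:break_idx], y[:break_idx])
                 + rss_linear(t[break_idx:], y[break_idx:]))
    return ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
```

The statistic is compared against an F(k, n − 2k) critical value; a large value rejects the hypothesis that one regression describes both segments, i.e. it flags a break.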

A structural break appears when an unexpected shift in a time series occurs. Such sudden jumps in a series of measurements can lead to huge forecasting errors and unreliability of a model in general. Stockwell and Cox are the first researchers I know of to apply this econometric technique to temperature and rainfall data (a description of computing the Chow test statistic is available here). They explain their approach in the paper’s abstract:

A Chow test for structural breaks in the surface temperature series is used to investigate two common claims about global warming. Quirk (2009) proposed that the increase in Australian temperature from 1910 to the present was largely confined to a regime-shift in the Pacific Decadal Oscillation (PDO) between 1976 and 1979. The test finds a step change in both Australian and global temperature trends in 1978 (HadCRU3GL), and in Australian rainfall in 1982 with flat temperatures before and after. Easterling & Wehner (2009) claimed that singling out the apparent flatness in global temperature since 1997 is ‘cherry picking’ to reinforce an arbitrary point of view. On the contrary, we find evidence for a significant change in the temperature series around 1997, corroborated with evidence of a coincident oceanographic regime-shift. We use the trends between these significant change points to generate a forecast of future global temperature under specific assumptions.
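Under the paper's stated assumption of no further breaks, the forecast machinery amounts to extending the linear trend fitted after the last detected change point. A hypothetical sketch (the function name and data below are illustrative, not taken from the paper):

```python
import numpy as np

def trend_forecast(years, temps, last_break_year, horizon):
    """Fit a linear trend to the data after the last detected break
    and extend it 'horizon' years past the end of the record."""
    mask = years >= last_break_year
    slope, intercept = np.polyfit(years[mask], temps[mask], deg=1)
    future = np.arange(years[-1] + 1, years[-1] + 1 + horizon)
    return future, intercept + slope * future
```

With an essentially flat post-1997 segment, the extended trend is flat too, which is how a forecast of no temperature rise until mid-century falls out of the model.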

Read the rest of the article here.


121 Comments
John Cooke
January 6, 2010 3:23 pm

What a waste of time. They are basically saying that you look at a (relatively limited) range of data and how it varies, and attempt to predict what is going to happen based on that.
There is no underlying model, no attempt to understand why things change, and no way that longer term effects can be incorporated at all, since they don’t look at a long time series.
Another example of why climate “science” isn’t. It’s certainly not the sort of science I grew up with.
Haven’t checked the journal, but the version referred to by David Stockwell (14:11:00) above is submitted, not accepted yet by the journal – i.e. the peer review process may not be complete.

Chris D.
January 6, 2010 3:30 pm

Personally, I prefer the (Johnny) Cochran test: If the model won’t fit, you must submit!

rbateman
January 6, 2010 3:31 pm

M. Simon (14:58:32) :
The Great Recession has its Deep Solar-Polar Bowl.
What is the point spread?

No point spread, as the Warmists are all-in hoping for a hot summer card to bail them out.
FOI is about to call.

Editor
January 6, 2010 3:32 pm

One thing that the researchers are ignoring is that in the field of personality testing, structural breaks in the results are seen to indicate when a person is lying: since a personality would not change over the course of a test, if the answers given suddenly start changing, it means the subject is lying.
It’s also useful for testing to see if someone is cheating, either on a test, or… on the data they are submitting for a scientific paper.

kwik
January 6, 2010 3:32 pm

A model will need very good start-point data.
The only point that will do …… is the Big Bang.

E.M.Smith
Editor
January 6, 2010 3:37 pm

rbateman (14:29:44) :
E.M.Smith (13:50:41) :
Climate has been known to impact economic mood. Sometimes it ominously precedes it, but only in hindsight.
The Great Depression had its Dust Bowl.
The Great Recession has its Deep Solar-Polar Bowl.

One of the founding lights of Economics, Jevons, noticed the climate economy link early on. Also found a correlation between sun spots and grain prices. He also invented what he called a “Logic Piano” or early computer. Seems like sunspots, climate, crops, computers and Econ have been linked from the very beginning…
http://chiefio.wordpress.com/2009/05/12/jevons-paradox-coal-oil-conservation/
Jevons was also the first to notice that more efficiently using coal increased consumption, rather than decreasing it. This is called “Jevons Paradox” and is why “increased efficiency” seems to be consistently followed by higher consumption in aggregate… We swapped from gas guzzler barges in the 1970s to Japanese econoboxes to save gas. Then, because the cost of driving was so low, moved WAAyyy out of town to get a better house, but with a long commute. Gas consumption went up, not down…
Oh, and spot me $5 on “Solar” in the Solar – Polar bowl… 😉

latitude
January 6, 2010 3:37 pm

geo (14:24:16) :
“What, no “return with a vengeance” after 2050?”
geo, I think it just goes with the territory.
Like when they say a tropical storm – not a hurricane mind you – SLAMS into the coast.
They have all those words written on the back of their pocket protectors.

View from the Solent
January 6, 2010 3:41 pm

E.M.Smith (15:22:51) :
Always bet on the engineer over the theoretician.
——————————————
Never worry about theory as long as the machine does what it’s supposed to.

Charles Rossi
January 6, 2010 3:44 pm

Look where mathematics got us in the financial world!

Richard Sharpe
January 6, 2010 3:48 pm

George E. Smith (15:13:23) says:

It might be useful to note that the last time that atmospheric CO2 levels were as low as they are today, was about 267 million years ago; go figure.

Hmmm, however, we are told that before the industrial revolution they were lower than now.

Dave F
January 6, 2010 3:53 pm

’Til 2050? Unless, of course, there is another break. The two more recent breaks happened very close together, comparatively speaking.

Kermit
January 6, 2010 3:59 pm

There are people who at least seem to be able to guess market direction consistently enough to make more money on trades than they lose. But take any of the known indicators and put them into a program like NeuroShell Trader Professional, and you soon see that things like trend following indicators or a tool like the MACD are useless. Note that I’m not saying that they cannot be useful in the hands of a good trader. It’s just that trying to actually quantify these indicators is an exercise in frustration.
No, the thing that is so hard for most to understand, especially the modelers, it seems, is that climate models are optimized on the historical climate data. And, there is no additional data to validate the models. We must wait for a considerable amount of time to even have an idea if the models are useful or not. Those who model markets know that it is folly to even try to use a model that has not been validated on strictly out-of-sample data. Even when validated on what should be out-of-sample data, one proceeds very cautiously when betting money on the output of the models.
Now, I have had some experience in discussing modeling with scientists. I have had them insist that the modelers do not use the historical climate data at all when building the models. One in particular, in comments to an Ars Technica article, insisted that the models were based only on physics. This, of course, seemed far-fetched to me, so I persisted in my questions. It finally was clear that the models used the historical climate data for validation. But that’s not quite right either. They were optimized on the data. But it still was impossible for posters there to realize the significance of this fact. To me, it was as if huge red flags were waving right in front of me. But then, I guess you need to build hundreds of models – and watch them run in real time over many years (in the markets) – to understand why I saw the red flags waving. It helps to have the experience of betting your own money on the results of the models you have built. Your trading account statement regularly forces you to assess the usefulness of your models. I guess without that, I would be like the climate modelers and say that, yes, the models weren’t quite right, but they are new and improved now and I’m really sure now that they are right.
Not only do they not have good data for either the independent variables or the dependent variable, they don’t even know what the independent variables should be in the models.
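The in-sample/out-of-sample distinction being made here can be shown with a toy example (pure noise standing in for the climate record; nothing below is from any actual climate model):

```python
import numpy as np

rng = np.random.default_rng(42)

# A pure-noise "temperature" record: there is no real signal to learn.
n = 200
t = np.linspace(0.0, 1.0, n)
y = rng.normal(size=n)

# "Optimize" a flexible model on the historical half of the record.
half = n // 2
coeffs = np.polyfit(t[:half], y[:half], deg=8)

def mse(tt, yy):
    """Mean squared error of the fitted polynomial on (tt, yy)."""
    return float(np.mean((np.polyval(coeffs, tt) - yy) ** 2))

in_sample = mse(t[:half], y[:half])      # flatters the model
out_of_sample = mse(t[half:], y[half:])  # exposes the overfit
```

Here the out-of-sample error is dramatically larger than the in-sample error; tuning and validating a model on the same data can only overstate its skill.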

Basil
Editor
January 6, 2010 4:03 pm

Having done a fair bit of this kind of statistical analysis myself, I’m inclined to quibble a bit with the headline. Looking at the chart, and the error bars, they are not predicting flat temps through 2050 so much as they are predicting “pretty much anything goes” through 2050. Certainly, flat describes the middle of the prediction range, but the range is so broad as to pretty much cover anything. Now I think this is important to understand, because what it reflects is just how broad the range of natural climate variability is. I certainly think flat temps over the next two or three decades are a very strong possibility. But there will be lots of ups and downs along the way.

P Wilson
January 6, 2010 4:06 pm

The Met Office, on the other hand, predict that 2010 will probably be the hottest year on record.
From their usual roulette-and-ball style of reasoning, which pervades their entire website, they say that it is not a certainty. In fact, they allow for one heck of a lot of uncertainty in their web pages. Reading through them, nothing has any certainty – models, climate, weather…
Despite the uncertainty, they predict the average global temperature in 2010 to be 14.58 °C, which looks like a very precise certainty.
http://www.metoffice.gov.uk/corporate/pressoffice/2009/pr20091210b.html

Ron de Haan
January 6, 2010 4:06 pm

Solar Magnetic field still decreasing:
http://www.heliogenic.net/?p=2180

Tom G(ologist)
January 6, 2010 4:07 pm

phlogiston:
“This reminds me of Asterix and the Soothsayer”
True (and you resurrected a delightful memory for me).
However, I think it reminds me more of Old Thrashbarg’s pronouncement that “It is the ineffable will of Bob.”

Greg Cavanagh
January 6, 2010 4:11 pm

This looks like a simple extrapolation, which I guess statistically it would be.
They are expecting that the period 1910 to 1970 is representative of the future. This is not a very good approach to model any form of reality. It’s like measuring the first 600 metres of a hill covered in fog and then predicting that it’s a mountain 1.2km in height.
Meaningless in any context.

Tom G(ologist)
January 6, 2010 4:11 pm

Now that I think about it, this situation is very much like H2G2 (come on fans).
If you remember, Old Thrashbarg told everyone that he always knew whatever it was that was going to happen (had just happened) because he knew the ineffable mind of Bob. When asked what ineffable meant, he told the Lamuellans to “Look it up.” Considering he had the ONLY dictionary on Lamuella, it was a safe thing for him to tell them.

POUNCER
January 6, 2010 4:34 pm

It seems to me a trader’s market in weather/climate forecasting is called for. Iowa Markets, isn’t it?

Pascvaks
January 6, 2010 4:38 pm

Attempt at humor only:
Met Office Secret Hi-Tech Prediction Method Discovered?
See at –
http://www.intrade.com/partners.jsp?ZID=2040&AID=34&CID=292&page=trade&selConID=215324
Weather forecasts are done by the most underrated sensatific system available to man in the 21st Century.
In the pull-down “Selected Markets” block go to “Climate and Weather – Global Temperatures”, then click “Will Global Average Temperatures for 2009-2014 be among Top 5?”

niphredilflower
January 6, 2010 4:49 pm

wow… an entire half of their prediction is a fifty-fifty chance, that’s their best odds yet. Shame about the other half; one day they might work out that a one-sided bet on the unknown is less probable than a no-change scenario.
Oh well, they’ll get there one day (if this trend continues).

Dev
January 6, 2010 5:01 pm

An econ stat model to predict the climate 100 years from now? What a load of pure rubbish.
Desperate Statistical Wanking, pure and simple.

cohenite
January 6, 2010 5:09 pm

I did suggest to David that this might be a useful correlation for a predictive model:
http://plus.maths.org/latestnews/may-aug08/oilcricket/oilcricket.jpg
I think some people are missing the point; the predictive model rests on the assumption of no more breaks, with the [cooling] ‘trend’ produced by the ’97 break dominating the assumed AGW warming until 2050; that is internally consistent with the model, but the point is that natural factors dominate AGW, which is discussed on page 9 of the paper.
In respect of the markets; my wife is an econometrics whizz and we have little wagers; I have never lost; my ‘model’ is to look at the company essentials and then the company’s degree of affiliation with [a] government and the time to the next election; cause and effect really.

January 6, 2010 5:10 pm

As an engineer, I found that, unlike scientists, I had to deal with the real world. Nature can be very cruel and humiliating. I had to deal with inconvenient things, such as friction, tolerances and technological capabilities. Now scientists are able to take temperature measurements from around the world for over a hundred years and calculate that the earth had warmed by one degree. This is a precise number, with no tolerances or possible margin of error. If you do not believe this, you are, in the words of a Nature Magazine editorial, a member of the “climate-change-denialist fringe.” Any engineer would have to be brain-dead not to be skeptical of the exactitude of the scientist’s claims.
But wait, there’s more. Other scientists have created global climate models which are coupled with ocean circulation models and, using these exact temperatures mentioned above, calibrated them to the same degree of precision. (Never mind the fact that much less complex models can’t forecast next week’s weather.) Here there is a problem. Using their all-encompassing ability to precisely model all of the natural causes of climate change, these scientists have determined that they can’t explain the observed warming. But, if they create a factor for Anthropogenic Global Warming (AGW) and run their climate models with this factor, they can then match the observed warming perfectly. Using this manufactured evidence, the IPCC can now say, with 90% certainty, that humans are causing global warming. I’m speechless.
http://www.socratesparadox.com

rbateman
January 6, 2010 5:22 pm

Ron de Haan (16:06:57) :
Sure is a weird one, that AP. In fact, the Sun is rather strange these days.
On another note, the Sea Ice around Greenland is about to touch Iceland.
It’s been wanting to do that for a couple of weeks now.