Finally, a climate forecast model that works?

Note: Short-term predictions are relatively easy; it remains to be seen whether this holds up over the long term. I have my doubts. – Anthony

Guest post by Frank Lemke

The Global Warming Prediction Project is an impartial, transparent, and independent project in which no public, private or corporate funding is involved. It presents original concepts and results from inductive self-organizing modeling and prediction of global warming and related problems.

In September 2011, we presented a medium-term (79 months) quantitative prediction of monthly global mean temperatures based on an interdependent system model of the atmosphere developed by KnowledgeMiner, which was also discussed at Climate Etc. in October 2011. This model describes a non-linear dynamic system of the atmosphere consisting of 5 major climate drivers: Ozone concentration, aerosols, radiative cloud fraction, and global mean temperature as endogenous variables, and sun activity (sunspot numbers) as the exogenous variable of the system. This system model was obtained exclusively from monthly observation data of the past 33 years by unique self-organizing knowledge extraction technologies. Six variables were considered in total: the five the system is actually composed of (see above) plus CO2, which, however, was not identified as a relevant system variable.
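
The post does not spell out the algorithm, but self-organizing modeling of this kind is generally associated with GMDH (Group Method of Data Handling), the family of methods behind KnowledgeMiner. As a rough illustration only, under that assumption, a single GMDH layer fits many simple candidate models and keeps the ones that generalize best on held-out data; every name and the quadratic form below are illustrative assumptions, not the actual implementation:

```python
# Illustrative GMDH-style layer: fit all pairwise quadratic candidate
# models, keep the few that generalize best on a held-out split.
import numpy as np
from itertools import combinations

def design(xi, xj):
    """Design matrix for y = a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    """Score every variable pair by validation error and return the `keep` best.
    Selecting structure by out-of-sample error is the 'self-organizing' step."""
    scored = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        coef, *_ = np.linalg.lstsq(design(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
        err = np.mean((y_va - design(X_va[:, i], X_va[:, j]) @ coef) ** 2)
        scored.append((err, i, j, coef))
    scored.sort(key=lambda s: s[0])
    return scored[:keep]
```

Outputs of the surviving candidates become inputs to the next layer, so model complexity grows only as long as validation error keeps falling.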

Now, more than a year has passed, and we can compare what was predicted with the temperatures that have actually been measured (Fig. 1).


Fig. 1: Ex-ante forecast of the system model as of March 2011 (most likely: red; high/low bounds: pink; April 2011 – November 2017) vs. observed values (black-and-white square dots; HadCRUT3) from April 2011 to December 2012. These 21 months are used to verify the out-of-sample predictive power of the system model.

Verifying the prediction skill of the system model from April 2011 to December 2012, the accuracy of the most likely forecast (solid red line) remains at a high 75%, and the accuracy relative to the prediction uncertainty (pink band) is an exceptional 98%. Given the noise in the data (from a presumably incomplete set of system variables, noise added during measurement and preprocessing of the raw observations, or random events, for example), this clearly confirms the validity of the system model and its forecast.

In comparison, the IPCC AR4 A1B projection currently shows a prediction accuracy of 23% (September 2007 – December 2012, 64 months) and just 7% over the same forecast horizon as the system model (April 2011 – December 2012, 21 months).
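
The post does not define its accuracy measure. Two plausible readings, sketched below as assumptions rather than the project's published metric, are a point-forecast score normalized by the observed range and the share of observations falling inside the high/low uncertainty band:

```python
import numpy as np

def band_hit_rate(obs, lo, hi):
    """Share of observed months that fall inside the high/low forecast band."""
    return np.mean((obs >= lo) & (obs <= hi))

def point_accuracy(obs, pred):
    """One plausible point score: 1 minus MAE normalized by the observed range."""
    return 1.0 - np.mean(np.abs(obs - pred)) / (obs.max() - obs.min())
```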

The two models, the IPCC model and the atmospheric system model, use two very different modeling approaches: theory-driven vs. data-driven. The IPCC model is based essentially on the AGW theory of greenhouse-gas emissions, chiefly CO2; the atmospheric system model, by contrast, is a CO2-free prediction model described by five other variables. For the same most recent 21 months, the IPCC model shows a prediction accuracy of 7% and the atmospheric system model an accuracy of 75%.

The climate system is a complex system: a number of variables connected interdependently, nonlinearly, and dynamically, where it is not clear which are the causes and which are the effects. The simplistic linear cause-effect relationship “more atmospheric CO2 = higher temperatures” on which the IPCC model is based is not an adequate tool for describing the complexity of the atmosphere.

Read the complete post here:

http://climateprediction.eu/cc/Main/Entries/2013/1/21_What_Drives_Global_Warming_-_Update.html

crosspatch
January 24, 2013 1:08 pm

How does it “predict” the response to El Niño in 1998? And I find this confusing:

In September 2011, we presented a medium-term (79 months) quantitative prediction of monthly global mean temperatures based on an interdependent system model of the atmosphere developed by KnowledgeMiner, which was also discussed at Climate Etc. in October 2011. This model describes a non-linear dynamic system of the atmosphere consisting of 5 major climate drivers: Ozone concentration, aerosols, radiative cloud fraction, and global mean temperature

So you use global mean temperature to predict global mean temperature? How does that work, exactly?

January 24, 2013 1:12 pm

“no public, private or corporate funding is involved”
How do they manage that? Who pays the electric bills, etc.? I’d sure love to see how that works. 🙂

TRM
January 24, 2013 1:14 pm

“is a CO2-free prediction model” – OMG. Really? Wow. And they got 75% instead of 7% …..
Go figure. It will be very interesting to see if this approach works over decades. I would love to see it expanded so others could re-weight variables and add their own to publicly make predictions.

Pamela Gray
January 24, 2013 1:14 pm

Sunspot numbers? Why? And I am serious. Why? What algorithm do you use for sunspot numbers? And what is that algorithm based on mechanistically (not correlationally)?

Henry Galt
January 24, 2013 1:16 pm

Betting on weather= tomorrow will be mostly similar to today
Betting on climate= the present decade’s average anomaly will be next decade’s average anomaly.
/Bet
WAG= for the next 20 years the average anomaly will be 0.5C
/Sarc
/Fey
😉

Michael John Graham
January 24, 2013 1:16 pm

bye-bye carbon di

YEP
January 24, 2013 1:18 pm

crosspatch says:
“So you use global mean temperature to predict global mean temperature? How does that work, exactly?”
Presumably lagged actual temperature as a partial vector autoregression (VAR). There’s bound to be persistence in the system.
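
A minimal sketch of the kind of first-order VAR YEP is describing; this is a generic illustration, not a claim about how the KnowledgeMiner model actually works:

```python
import numpy as np

def fit_var1(X):
    """Fit X_t = A @ X_{t-1} + c by least squares; X has shape (T, k)."""
    Z = np.column_stack([X[:-1], np.ones(len(X) - 1)])
    B, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
    return B[:-1].T, B[-1]            # A is (k, k); c is the intercept vector

def iterate(A, c, x0, steps):
    """Roll the fitted system forward; persistence lives in A's eigenvalues."""
    path, x = [], x0
    for _ in range(steps):
        x = A @ x + c
        path.append(x)
    return np.array(path)
```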

Eric H.
January 24, 2013 1:20 pm

It’s kind of like using parasitic drag as the main determinant of speed and ETs (elapsed times) for a dragster. Though in theory it makes a difference, adding another layer of wax isn’t the solution to your low ETs.

Robinson
January 24, 2013 1:25 pm

“unique self-organizing knowledge extraction technologies.”
What the hell? You mean a Neural Network, don’t you? Why not just say it? Why use this stupid jargon?

Admin
January 24, 2013 1:32 pm

So you use global mean temperature to predict global mean temperature? How does that work, exactly?
The BAU (Business as Usual) theory to date is the most accurate way of predicting next year’s temperatures – same as last year.
However this makes me suspicious too. I once made a horrible mistake in a merchant banking model, in which I accidentally incorporated previous model results into the new run. This slipped through testing, because the inclusion of previous data masked problems with the rest of the model.
Once the mistake was corrected, the rest of the model went wild – very embarrassing.
So I’m very suspicious of any system which places a heavy reliance on previous values. It may well be, and probably is, necessary when predicting global temperature, but my experience shows such inclusion can also easily mask problems with the model, at least in the short term.
At the very least I would expect inclusion of previous temperatures to lead to a cumulative error: any slight mistake in predicting this year’s temperature would create an even larger mistake in predicting next year’s temperature, which over a few iterations would render the model prediction worthless.
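
Eric Worrall’s compounding-error worry is easy to demonstrate with a toy AR(1) model: give it a slightly wrong persistence coefficient and iterate it forward so that each step feeds on the previous forecast. The coefficients below are made up purely for illustration:

```python
# Toy AR(1): true coefficient 0.98, fitted coefficient 1.00 (made-up values).
# Each forecast step reuses the previous *forecast*, so the error compounds.
phi_true, phi_hat = 0.98, 1.00
x_true = x_hat = 0.5
for month in range(1, 25):
    x_true *= phi_true
    x_hat *= phi_hat
    if month in (1, 12, 24):
        print(f"month {month:2d}: forecast error = {abs(x_hat - x_true):.3f}")
# month  1: 0.010,  month 12: 0.108,  month 24: 0.192
```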

crosspatch
January 24, 2013 1:32 pm

YEP says:
January 24, 2013 at 1:18 pm
Presumably lagged actual temperature as a partial vector autoregression (VAR). There’s bound to be persistence in the system.

How well does that work out in the last eight observations shown in the graphic? The prediction is for a downward trend; the observations show an upward trend and then, suddenly, out of nowhere, a massive reversal. I dunno. Color me skeptical.

Pamela Gray
January 24, 2013 1:33 pm

IMHO, the temperature lag is probably a function of ENSO and exerts the greater influence on the prediction. It has been definitively demonstrated that correlating sunspot numbers with temperature is not robust, not reliable, and not valid. But the dang things have legs as much as CO2 does, which is equally not robust, not reliable, and not valid.
The modeler who gets it right will use ENSO patterns of oceanic circulation and SST (a much slower lagged effect) together with variables related to other atmospheric circulation patterns that come and go (more immediate effects) and that kick in after a certain value is reached (i.e., beyond neutral). Multiple scenarios will demonstrate these long- and short-term teleconnections: given an ENSO condition, and depending on whether or not a shorter-term atmospheric pattern kicks in, the temperatures will be thus.

Kasuha
January 24, 2013 1:39 pm

I came across that site a few months ago. An interesting approach indeed, but it’s nothing more than yet another extrapolated regression. A sophisticated and slightly obscure regression, but still just a regression. There’s no guarantee the relations their self-organizing prediction machine established are real. They may be, or they may be just an artifact of the method.

YEP
January 24, 2013 1:41 pm

Data-driven models are good exercises to go through when analyzing a complex, dynamic, nonlinear system. But nothing is theory-free, other than simple vector autoregression. Choosing the 6 variables, for example, had to be based on theory. And simple predictive skill doesn’t tell you much, except as something to compare the performance of a theory-based structural model against. The coefficients that emerge should be meaningful, and simulations should be used to test for things like stability and results that make sense given what we know about natural processes. What was it von Neumann said? “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

DirkH
January 24, 2013 1:49 pm

Eric Worrall says:
January 24, 2013 at 1:32 pm
“The BAU (Business as Usual) theory to date is the most accurate way of predicting next year’s temperatures – same as last year.
However this makes me suspicious too. I once made a horrible mistake in a merchant banking model, in which I accidentally incorporated previous model results into the new run. This slipped through testing, because the inclusion of previous data masked problems with the rest of the model.”
You had state leftover from a PREVIOUS run. (Forgot to clear all variables, I guess)
In the kind of time series extrapolation this model uses, you “look back” to the time series so far – of THIS model run. Which is legit.
Still, I’m not convinced it’ll have predictive skill for climate. Climate is the 30-year mean, in other words the low-frequency component. The validation period is too short to tell us much about the low-frequency component.
I’m using models of this kind for “next day” trading decisions, so my models have to guess the next day right, in a way. And are trained on a history of a thousand days ATM. I wouldn’t trust these models to look far into the future. It’s a probabilistic guess at best.
But if you need to… well I would say if you want to look 30 years into the future the smartest thing would be to train the model on a history of a thousand consecutive real 30 year intervals of climate.
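
DirkH’s suggestion amounts to rolling-origin (walk-forward) validation. A generic sketch, where `fit` and `predict` stand in for whatever model is under test; nothing here reflects the project’s actual procedure:

```python
import numpy as np

def walk_forward(series, train_len, horizon, fit, predict):
    """Rolling-origin test: refit on each training window, then score the
    forecast against the `horizon` points that immediately follow it."""
    errors = []
    for start in range(len(series) - train_len - horizon + 1):
        train = series[start:start + train_len]
        actual = series[start + train_len:start + train_len + horizon]
        errors.append(np.mean(np.abs(predict(fit(train), horizon) - actual)))
    return np.array(errors)
```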

Matthew R Marler
January 24, 2013 1:50 pm

KnowledgeMiner is nice software, but it would be good to read a complete description of how it was implemented in this case. A bunch of us wrote the same thing about neural networks just a short time ago. Nothing is “self-organizing” here: the modelers made choices, such as what data to input to the algorithms.
It is good to see the model forecast tested against new data.

Steve C
January 24, 2013 1:50 pm

“The Global Warming Prediction Project is an impartial, transparent, and independent project”? From the name of the project alone, you know that none of those adjectives applies.

January 24, 2013 1:51 pm

The above prediction is quite similar (if you filter out the fast fluctuations) to the prediction I published in papers in 2010 and, later, 2012, e.g.:
Scafetta N., 2012. Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models. Journal of Atmospheric and Solar-Terrestrial Physics 80, 124-137.
http://www.sciencedirect.com/science/article/pii/S1364682611003385
Scafetta N., 2010. Empirical evidence for a celestial origin of the climate oscillations and its implications. Journal of Atmospheric and Solar-Terrestrial Physics 72, 951-970.
http://www.sciencedirect.com/science/article/pii/S1364682610001495
See here for the latest update of the prediction (since 2000), which agrees well with the data:
http://people.duke.edu/~ns2002/#astronomical_model_1
For example, the model above predicts a peak in 2015, as mine does.
The only problem with the figure in the post is that the latest temperature dot, for Dec 2012, seems to be located at Jan 2012.

Ian
January 24, 2013 1:51 pm

Has the model been used to make “hind-casts”? If so were they accurate? If not will hind-casting be attempted?

Steve Oregon
January 24, 2013 1:54 pm

This climate forecast model is a real travesty.
Plus it got me thinking.
How will alarmists cope if warming never returns for the rest of their lives?
A few more years of the same will be bad for them. 6, 7 or 8 will be painful.
But 10, 20 or 30 years of a non-warming planet will be catastrophic for their funny little fictitious world.
I sure hope they cry us a river.

Truthseeker
January 24, 2013 1:56 pm

Let us assume that they have correctly identified the most significant variables (which do not include CO2 – IPCC and alarmists please note) and can get a good correlation for past data. The problem is still predicting the values of those variables. Maybe they can use models to predict the variables they are using in the model to predict climate. Of course then they will have other variables to predict, which will mean other models to predict the variables they need for the models to predict the variables they need to predict the variables they need to predict the climate. Then they will need models … ad infinitum …

Rud Istvan
January 24, 2013 1:58 pm

The neural net was fit to 33 years and 3 sunspot cycles. ‘Out of range’ accuracy was “good”, but only for 21 months, less than 2 years. Given the predictability of the seasons (winter is colder than summer), the time-series autocorrelations, and the fact that climate changes very slowly, it is not surprising that a sophisticated data fit did better than first-principles physics in GCMs, for a couple of years. But that is useless for multidecadal predictions, for all of the known problems inherent in out-of-range forecasting from data fits. The Arts of Truth used a medically peer-reviewed correlation between BMI and Miss America pageant winners to “prove” the winner would show up dead from starvation by 2020. You should believe that about as much as CAGW.
And even if CO2 wasn’t a predictor in this net, it is still “there” in the temperature side of the neural-net fit. So this says nothing about climate sensitivity, either. An interesting question is whether, had it been explicitly added, the neural net would have given better predictions. One suspects yes, but for the simple reason that it’s another variable for the net to massage. I believe it was von Neumann who said, “give me four variables and I can model an elephant. Give me five, and I can model its trunk.”
I agree with Anthony. Doubtful for the long run. Trivial for the short run.

AndyG55
January 24, 2013 2:02 pm

What temperature sets did they use to calibrate to the past?
If they used GISS or HadCrud, they have serious problems matching any future reality.

bill
January 24, 2013 2:02 pm

Neural networks can be very good; the test is to keep adding strong historical data sets to keep validating things.
My suspicion is that it will not predict past the “momentum” of the current data, or about 10 years.
I love the concept of an agnostic neural-network data mining, if that is what they did.
The work these days is in the data assembly, not the processing, so funding could be quite modest if you value the required very smart people at enthusiast rates.

January 24, 2013 2:07 pm

YEP says:
January 24, 2013 at 1:18 pm
crosspatch says:
“So you use global mean temperature to predict global mean temperature? How does that work, exactly?”
Presumably lagged actual temperature as a partial vector autoregression (VAR). There’s bound to be persistence in the system.

But what is the source of that persistence?
I doubt there is much persistence from atmospheric temperatures, that is, from the thermal energy in the air.
Moreover, over the 21-month forecast period we have seen a large difference between summer and winter anomalies, which wasn’t the case for most of the prior period. To get such a good forecast it must be forecasting the summer/winter shift, which raises the question: how well does it hindcast the prior period, when this shift was absent?
