Cooling of the Multidecadal Cyclic GMST until about the 2030s suggests La Nina conditions will dominate in the next twenty years.
Guest post by Girma Orssengo, PhD
The IPCC’s climate model prediction of about 0.2 deg C of global warming per decade for the next two decades is contrary to the observed climate pattern.
In the following graphs, which show the climate data analysis results, the Observed Global Mean Surface Temperature (GMST) shown in Graph “a” has an oscillating Residual GMST of +/- 0.2 deg C, shown in Graph “b”, and a Multidecadal Cyclic GMST of +/- 0.1 deg C, shown in Graph “e”.
Because of these two oscillating components of the Observed GMST, it is incorrect for the IPCC to claim a constant warming rate of 0.2 deg C per decade sustained over two decades.
Note that for the parameters of the model given in Equation 1, the Residual GMST from 1885 to 2011 shown in Graph “b” has zero mean and zero trend. The result shown in Graph “e” indicates cooling of the Multidecadal Cyclic GMST until about the 2030s. This result suggests La Nina conditions will dominate in the next twenty years. Finally, Graph “f” demonstrates there was no change in the climate pattern before and after the mid-20th century, contrary to the IPCC claim.
Observed GMST (Graph a) = Residual GMST (Graph b) + Model Smoothed GMST (Graph c)
Model Smoothed GMST = a*Cos[2*Pi*(Year-1910)/60] + b*(Year-1910)^2 + c*(Year-1910) + d
Where a = -0.1050, b = 3.598*10^(-5), c = 3.27*10^(-3), d = -0.345 (Equation 1)
Secular GMST = b*(Year-1910)^2 + c*(Year-1910) + d (Equation 2)
Multidecadal Cyclic GMST = a*Cos[2*Pi*(Year-1910)/60] (Equation 3)
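As a sanity check on the arithmetic, here is a minimal sketch (in Python; not part of the original analysis) that evaluates Equations 1 to 3 with the parameter values quoted above and prints the secular, cyclic and combined model values for a few years:

# Minimal sketch (not from the original post): evaluate Equations 1-3 with the
# parameter values quoted above. "Year" is the calendar year, as in the equations.
import math

A = -0.1050    # cosine amplitude, deg C
B = 3.598e-5   # quadratic coefficient
C = 3.27e-3    # linear coefficient
D = -0.345     # offset, deg C

def secular_gmst(year):
    # Equation 2: quadratic secular component
    t = year - 1910
    return B * t ** 2 + C * t + D

def cyclic_gmst(year):
    # Equation 3: 60-year multidecadal cyclic component
    return A * math.cos(2 * math.pi * (year - 1910) / 60)

def model_smoothed_gmst(year):
    # Equation 1: secular plus cyclic components
    return secular_gmst(year) + cyclic_gmst(year)

for year in (1910, 1940, 1970, 2000, 2030):
    print(year,
          round(secular_gmst(year), 3),
          round(cyclic_gmst(year), 3),
          round(model_smoothed_gmst(year), 3))

With these parameters the cyclic term sits near -0.105 deg C around 1910 and 1970 and near +0.105 deg C around 1940 and 2000, which is the +/- 0.1 deg C oscillation referred to for Graph “e”.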
Looks like a lame curve fitting exercise without any physical basis.
I could come up with a dozen function forms, fit the function to the data and come up with tiny residuals. It would all be meaningless.
It is only meaningful if there is a physical basis for the functional form.
So, another computer model says cooling for the next 15 years. Somehow, I think “Mother Nature” will throw us a curve ball.
From LazyTeenager on September 3, 2012 at 4:16 pm:
Good idea, let’s see what the possibilities are. Post your work when done.
“””””…..Empirical Model Of The Global Mean Surface Temperature…..”””””
So we have an “empirical” (made up) model of the GMST, something we have no believable idea of. Or alternatively, of which, we have no believable idea.
Back in the 1960s, there was a famous paper on an “empirical” model of the fine structure constant alpha, or more strictly of 1/alpha, known to be around 137.
The “empirical” model was: 1/alpha = ((pi)^a * b^c * d^e * f^g)^(1/4), where a through g are small integers, not necessarily different. Well, the paper gave the actual values of a through g.
The “empirical” model’s predicted (excuse me, projected) value agreed with the best peer-reviewed experimentally measured value for 1/alpha to less than 2/3 of the standard deviation of that measured value, which happens to be known to a few parts in 10^8. I would say that’s a pretty “empirical” agreement with reality.
Like Dr Orssengo’s “empirical” model, this fine structure model had no known connection to the physical universe; but it obviously was correct, because it was accurate to a few parts in 10^8, so it was widely embraced, although no-one could discern how it connected to reality.
A month later, a computer geek published a list of several other sets of values (small integers) for a, b, c, d, e, f, g, which also agreed with 1/alpha to less than the standard deviation (currently about 4.5E-8).
One of those was twice as accurate as the earlier result.
A month after that, a more theoretical geek published a model of an N-dimensional sphere whose radius was 1/alpha; the solutions were the lattice points in this N-space, for different a through g, lying within a thin shell whose inner and outer radii were less than or greater than 1/alpha by the standard-deviation increment; and he derived a complete list of at least 8 values that fit the “empirical” model.
So if you don’t think you can get a believable “empirical” result by simply f*****ing around with numbers, think again: you CAN!
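A quick toy search makes the same point (a sketch for illustration only, not from any of the papers mentioned; the integer ranges are kept small so the loop finishes quickly, and the agreement it finds is far looser than the parts-in-10^8 fits quoted above):

# Toy illustration: hunt for small-integer exponents that make
# ((pi)^a * b^c * d^e * f^g)^(1/4) land near 1/alpha, and report the ten
# closest combinations found. Not from any of the papers discussed above.
import heapq
import itertools
import math

TARGET = 137.035999          # approximate value of 1/alpha
best = []                    # heap holding the 10 closest combinations

for a in range(-3, 4):
    pi_a = math.pi ** a
    for b, d, f in itertools.product(range(2, 10), repeat=3):
        for c, e, g in itertools.product(range(-5, 6), repeat=3):
            value = (pi_a * b ** c * d ** e * f ** g) ** 0.25
            rel_err = abs(value - TARGET) / TARGET
            if len(best) < 10:
                heapq.heappush(best, (-rel_err, (a, b, c, d, e, f, g), value))
            elif rel_err < -best[0][0]:
                heapq.heapreplace(best, (-rel_err, (a, b, c, d, e, f, g), value))

for neg_err, combo, value in sorted(best, reverse=True):
    print("relative error %.2e  (a..g) = %s  value = %.6f" % (-neg_err, combo, value))

Whatever the exact numbers it prints, the exercise shows how quickly unrelated integer combinations crowd in around any target value you care to name.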
I’ll mention to Dr Orssengo these two graphs where HadCRUT3 since 1850 is detrended by the quadratic y = 0.000028*(x-1850) – 0.41.
The next logical step is to incorporate the correlation of previous solar cycle length to temperature. After combining this and the cycle shown in Dr Orssengo’s graph (e) the residual appears to fit well with Lindzen & Choi’s value for 2xCO2.
Sorry, slight correction. The quadratic should be y = 0.000028*(x-1850)^2 – 0.41.
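For anyone who wants to reproduce that detrending, a minimal sketch (illustration only, not the code behind the linked graphs), assuming yearly HadCRUT3 anomalies are already loaded into NumPy arrays:

# Sketch of the quadratic detrending described above; the arrays below are
# placeholders, load real HadCRUT3 data however you normally do.
import numpy as np

def detrend_quadratic(years, anomaly):
    # remove the quadratic y = 0.000028*(x - 1850)^2 - 0.41 from the series
    trend = 0.000028 * (years - 1850.0) ** 2 - 0.41
    return anomaly - trend

years = np.arange(1850, 2012, dtype=float)   # placeholder year axis
anomaly = np.zeros_like(years)               # placeholder anomaly values
residual = detrend_quadratic(years, anomaly)
print(residual[:5])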
LazyTeenager: “Looks like a lame curve fitting exercise without any physical basis. ”
Yep. Just like Newton’s gravity, Planck’s quanta, Einstein’s relativity, the Higgs Boson, Physics, Astronomy, Psychology, Sociology, Climate Science, MMTS adjustments, TOBS adjustments, treemometers and practical proxy practices. Physics may seem particularly queer in that list. But physics is based on empirical modelling tasks taken to Platonic abstraction; for better or worse.
Now, if you’re asking for materialist explanans then that’s an entirely different affair and important only to Philosophers and Metaphysicians (They have tonics.). For otherwise it’s a case of ‘E pur si muove’. Though the work in the OP is still behind the curve of the resident Vukcevic when speaking of Fourier analysis of climatological cycles.
See Klyashtorin and Lyubushin, 2007. While modern “Atmospheric CO2 increase as a lagging effect of ocean warming” may have foundered on the shoals of statistics lacking physical basis, my sense is this one will not. It was from fish, after all, that we learned of PDO.
This posting is handwaving without even bothering to make clear how the results were derived. Since the author has a Ph.D., the author should know that there is published literature based on unit root time series analysis used to determine whether the global temperature anomaly (GTA) time series is stationary and whether there is an upward trend in the series. The author should know the following peer-reviewed papers on this topic:
http://www.uoguelph.ca/~rmckitri/research/warming.pdf
http://www.buseco.monash.edu.au/ebs/pubs/wpapers/2011/wp4-11.pdf
http://www.earth-syst-dynam-discuss.net/3/561/2012/esdd-3-561-2012.html
Looks like a lame curve fitting exercise without any physical basis.
No supporter of GCMs should object to curve fitting.
While the writers of GCMs give explanations for their epicycles, that doesn’t make those explanations true. Occam’s Razor and all that.
I’d call this “yet another regression”. What I’m missing is an explanation of the physical relevance of the chosen regression components, particularly the 60-year cycle and the quadratic baseline.
The purpose of an empirical model is to represent the observed data as well as possible. It is clear that in Graph “f” 100% of the observed data is bounded by the model. That is the purpose of an empirical model, rather than explanation of the physics.
Tallbloke says:
This doesn’t allow for the sudden and deep solar slowdown which has begun, and is likely to reach a nadir around 2035. It is a non-linear non-sinusoidal interregnum which occurs on a complex cycle. For those who don’t think solar variation affects climate much – keep watching.
Henry@tallbloke or anyone who can help
On the deceleration of maximum temperatures I discovered it is an ac wave, but I don’t know how to do the plot (best fit). See:
http://wattsupwiththat.com/2012/08/23/agu-link-found-between-cold-european-winters-and-solar-activity/#comment-1067753
Anybody here who can help, please?
A model is only useful if it allows predicting future behaviour. I see no predictions by the author that could be later falsified by the real outcome.
Nylo
A model is only useful if it allows predicting future behaviour. I see no predictions by the author that could be later falsified by the real outcome.
The model established a pattern as shown in Graph “f”. From this graph, it is easy to predict the climate if the pattern continues => little warming in the next 15 years.
Maus says: September 3, 2012 at 7:38 pm
…….the resident Vukcevic when speaking of….
The resident Vukcevic here on WUWT is well behind the curve of himself, but there is always hope that he may catch up soon with his own private research.
Nylo says:
September 3, 2012 at 11:58 pm
A model is only useful if it allows predicting future behavior
Doing predictions is froth with danger.
I do extrapolation. When it fails, it is fault of the used data set limitations such as length, resolution, compilation and many other factors I can’t be held accountable for.
🙂
Really? But the output of the sun is still falling and the sun is the ONLY source of heat we have to drive climate.
HenryP says:
Henry@tallbloke or anyone who can help
On the deceleration of maximum temperatures I discovered it is an ac wave, but I don’t know how to do the plot (best fit). See:
http://wattsupwiththat.com/2012/08/23/agu-link-found-between-cold-european-winters-and-solar-activity/#comment-1067753
Anybody here who can help, please?
Try gnuplot. It has a fit command for non-linear least squares fitting of your desired function and very flexible ways to plot it all. It takes a bit of reading to learn how to get the best from it, but it’s an effort well worth the time. It goes well beyond just plotting once you master it.
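For anyone who prefers Python to gnuplot, scipy.optimize.curve_fit does roughly the same non-linear least-squares job; a minimal sketch, in which the sinusoidal form, the placeholder data and the starting guesses are all assumptions for illustration rather than HenryP’s actual series:

# Rough Python counterpart to gnuplot's "fit" for a sinusoidal ("AC wave")
# best fit. Replace the placeholder series with your own data.
import numpy as np
from scipy.optimize import curve_fit

def ac_wave(t, amplitude, period, phase, offset):
    # simple sinusoid: amplitude * sin(2*pi*t/period + phase) + offset
    return amplitude * np.sin(2.0 * np.pi * t / period + phase) + offset

# Placeholder series standing in for a (year, maximum temperature) record.
years = np.arange(1950.0, 2012.0)
values = 0.3 * np.sin(2.0 * np.pi * years / 60.0) + np.random.normal(0.0, 0.1, years.size)

# Starting guesses matter for non-linear fits; a poor period guess can leave
# the optimiser stuck in a meaningless local minimum.
p0 = [0.3, 60.0, 0.0, 0.0]
params, cov = curve_fit(ac_wave, years, values, p0=p0)
print("amplitude, period, phase, offset =", params)

gnuplot’s fit command runs a Marquardt-Levenberg minimisation under the hood, so either route should give comparable parameters.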
vukcevic says:
Doing predictions is froth with danger.
I do extrapolation. When it fails, it is fault of the used data set limitations such as length, resolution, compilation and many other factors I can’t be held accountable for.
🙂
Data does not extrapolate. Only fitting a model can do that. Assumptions about the suitability of the model, the method of determining the fitted parameters and the assumption that the model will still be valid beyond the range of the source data are also key factors.
All of which can be confounded by poor quality or manipulated data sets. So the whole exercise is indeed fraught with froth. 😉
Girma says: The model established a pattern as shown in Graph “f”. From this graph, it is easy to predict the climate if the pattern continues => little warming in the next 15 years.
Your residuals are about twice the amplitude of your 60y cycle. So either noise is twice as big as your signal or there are other more significant factors you are not accounting for.
Why do you assume the modelled part will dominate the larger residuals you do not capture in the model?
Why did you choose to fit a parabola and what could cause such a variation?
BTW, I agree that what you suggest is likely but don’t see that it follows from what you show here.
LazyTeenager says:
“It is only meaningful if there is a physical basis for the functional form.”
So empirical tide predictions are “meaningless” then. Nonetheless, they have proved incredibly reliable the world over for more than a century.
I suppose being LazyTeenager means you don’t need to think before posting.
P. Solar says: September 4, 2012 at 3:22 am
Data does not extrapolate
Agree, perhaps I should have been less circumspect. The first part of my post relates to Maus’ reference to the Fourier analysis, so I continued with the ‘extrapolation’ comment, which is often used once the spectral content of the data is found, to show what extrapolation either forward or back in time may reveal. Using MS Word’s auto spellchecker is also ‘fraught with froth’.
A model is only useful if it allows predicting future behaviour. I see no predictions by the author that could be later falsified by the real outcome.
Spoken like a chartist and completely wrong. The major benefit of good models is to help one understand how systems work.
P. Solar
Your residuals are about twice the amplitude of your 60y cycle. So either noise is twice as big as your signal or there are other more significant factors you are not accounting for.
Good question.
Though the magnitude of the Multidecadal Cyclic GMST is only half of the Residual GMST, it appears that the Multidecadal Cyclic GMST drives the Residual GMST. For example, in Graph “f”, in the 1910s, the Multidecadal Cyclic GMST was below the secular trend curve and the Residual GMST was also near the bottom of the GMST band. In contrast, in the 1940s, the Multidecadal Cyclic GMST was above the secular trend curve and the Residual GMST was also near the top of the GMST band. In the 1970s, the Multidecadal Cyclic GMST was below the secular trend curve and the Residual GMST was also near the bottom of the GMST band.
Therefore, it appears that the Multidecadal Cyclic GMST drives the Residual GMST.
Check out my paper on Central UK Max Temp vs Sunshine Hours at NothingSettledNothingCertain.com (also at Tallbloke’s Talkshop): by comparing Bright Sunshine Hours to Max Temperatures from 1930, I derived very much the same thing, except that instead of the curvilinear trend I got a linear trend (which doesn’t look unreasonable here, either). The AMO/PDO cyclicity, loaded onto a sunshine-hours linear rise, accounted for all but 0.1C/century, which could easily be UHIE or land-use related.
I don’t think there is a global maximum sunshine hours record going back to 1920. Too bad: any increase in bright sunshine is a decrease in cloud cover. Ipso facto, Lord Monford might say.
Of course you have to explain why there has been an historical reduction in cloud cover, but perhaps the warmists could say that more moisture causes more rain which causes less clouds at some point in the day which leads to more bright sunshine hours in the records: there’s a political spin for everything.
“””””…..Eli Rabett says:
September 4, 2012 at 5:41 am
A model is only useful if it allows predicting future behaviour. I see no predictions by the author that could be later falsified by the real outcome.
Spoken like a chartist and completely wrong. The major benefit of good models is to help one understand how systems work……"""""
Well george e. smith believes that the major benefit of good models is to help one understand how the models work. He thinks we should be so lucky as to have real systems behave the same as our models. In the case of the GCMs, he believes the climate system does not behave the same as the models; or else we wouldn’t need 13 of them, or whatever the count is now up to.