Guest essay by M. S. Hodgart
(Visiting Reader, Surrey Space Centre, University of Surrey)
The figure presented here is a new graph of the story of global warming – and cooling. The graph makes no predictions and should be used only to see what has been happening historically.
The boxed points in the figure are the ‘raw data’ – the annualised global average surface temperature known as HadCRUT4, as released by the UK Meteorological Office. Strictly these are ‘temperature anomalies’. The plot runs from 1870 up to the last complete calendar year, 2012. The raw data cannot of course be treated as absolutely true – but let us give the Met Office the benefit of the doubt – this is hopefully their best effort so far.
It is a difficult statistical problem to estimate the historical trend in this kind of time series. The solution requires some kind of smoothing of the data – but how, exactly? There is an unlimited number of ways of drawing a curve through the data.
A popular method – much used in the climate science literature – is the moving average. One trouble with it is that quite different-looking curves result depending on the width of the smoothing window – and on the shape of the window itself. Another difficulty is its poor dynamic tracking capability.
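As a minimal illustration of this window-width sensitivity – using made-up anomaly data in place of the real HadCRUT4 series:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1870, 2013)
anoms = 0.005 * (years - 1870) + rng.normal(0.0, 0.1, years.size)  # toy data

def moving_average(x, window):
    """Centred moving average; the output is shorter at the edges."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Different window widths give visibly different 'trends' from the same data.
for w in (5, 11, 21):
    print(w, moving_average(anoms, w)[:3].round(3))
```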
The other popular method is to fit straight lines (least-squares estimates) to selected spans of years. The notorious difficulty here is the quite different impression one gets depending on the choice of start and stop years.
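The same point is easy to demonstrate with a toy series (again, made-up data, not HadCRUT4):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1870, 2013)
anoms = 0.005 * (years - 1870) + rng.normal(0.0, 0.1, years.size)  # toy data

# The fitted slope depends strongly on the chosen start year.
for start in (1900, 1950, 1975, 1998):
    mask = years >= start
    slope = np.polyfit(years[mask], anoms[mask], 1)[0]  # least-squares line
    print(f"{start}-2012: {10 * slope:+.3f} deg/decade")
```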
The difficulty is finding a best estimate – the curve most likely to be closest to the truth. This is an outstanding problem in what the statistical literature identifies as model selection.
The source of the problem is what telecommunications and control engineers call noise in the data – a random-looking variation from one year to the next.
As a conspicuous example of this random variation: in recent years, according to the record, the global temperature (anomaly) was 0.18 deg in 1996; it had jumped to 0.52 deg by 1998 but had fallen again to 0.29 deg by 2000.
Respecting normal linguistic usage and common sense, we would not want to describe a jump of 0.34 deg in only two years as a phenomenon of ‘global warming’; nor a drop of 0.23 deg over the next two years as ‘global cooling’. Ordinary language, when expressed in mathematics, envisages some smooth, slowly varying curve which passes on a middle course through the scattered data, ignoring these rapid changes but responsive over a longer term to general movement. There needs to be an explicit decomposition:
HadCRUT4 annual data = trend in the data + temperature noise
The problem is to estimate that trend in the data when it is corrupted by the presence of this significant noise.
Figure: HadCRUT4 global annual averaged temperature anomaly, 1870–2012 (connected brown box points). Brown curve: 26-year-span cubic loess estimate. Dashed brown curve: 10th-degree polynomial regression estimate. Red curve: mean trend. Blue curve: the offset cyclic component of the loess. Red circled points identify coincident years of trend and mean trend: 1870, 1891, 1927, 1959, 1992 and 2012. Blue circled points delineate alternating cooling and warming in the cyclic variation: 1877, 1911, 1943, 1976 and 2005.
A novel principle of joint estimation is proposed here – using two relatively simple methods of smoothing.
In the figure the continuous brown curve is an estimate by locally weighted regression (loess) – using a locally fitted cubic polynomial and the standard ‘tri-cube’ weighting. Loess is a greatly superior generalisation of the moving average [1]. Professor Mills deserves credit for first pointing out the superiority of a cubic over the usual linear or quadratic local polynomial [2]. Unfortunately the standard statistical tools seem not to have caught up with him here – nor with his ‘natural’ solution to the end-point problem (where the data run out after 2012 and before 1870 on this graph).
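Since the standard tools stop at local linear or quadratic fits, a local cubic has to be coded by hand. Here is a minimal sketch of the idea – an illustrative reading of the method, not the author's code, and with no special end-point treatment:

```python
import numpy as np

def loess_cubic(x, y, span_years):
    """Trend estimate at each x: weighted cubic fit over a sliding window."""
    half = span_years / 2.0
    trend = np.empty_like(y, dtype=float)
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        inside = d <= half
        u = d[inside] / half
        w = (1.0 - u**3) ** 3  # the standard tri-cube weighting
        # Weighted least-squares cubic, centred on x0 for conditioning.
        coeffs = np.polyfit(x[inside] - x0, y[inside], 3, w=np.sqrt(w))
        trend[i] = np.polyval(coeffs, 0.0)  # fitted value at x0 itself
    return trend
```

With annual data a 26-year span gives windows of up to 27 points – comfortably more than the four a cubic needs, even where the window truncates at the ends of the record.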
The dashed brown curve is a standard (unweighted) polynomial regression. The principle of joint estimation is to look for a span of years in the loess and a degree in the polynomial regression at which the two curves most closely resemble each other – a least disparity.
An empirical search finds a span of 26 years for the loess and a 10th degree for the polynomial. No other combination of loess span and polynomial degree gives such close agreement. The condition is unique and therefore automatically solves the problem of model selection.
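The essay does not define the disparity measure, so a root-mean-square difference between the two smooth curves is assumed in this sketch, which reuses the loess_cubic function above:

```python
from numpy.polynomial import Polynomial

def joint_estimate(x, y, spans, degrees):
    """Search (span, degree) for the pair whose two smooth curves agree best."""
    # Polynomial.fit scales the years internally, so high degrees stay stable.
    polys = {deg: Polynomial.fit(x, y, deg)(x) for deg in degrees}
    best = (np.inf, None, None)
    for span in spans:
        smooth = loess_cubic(x, y, span)
        for deg in degrees:
            disparity = np.sqrt(np.mean((smooth - polys[deg]) ** 2))  # assumed measure
            if disparity < best[0]:
                best = (disparity, span, deg)
    return best  # least disparity, with its span and degree

# e.g. joint_estimate(years, anoms, spans=range(10, 41, 2), degrees=range(4, 13))
```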
In the author’s view this joint estimate is really the best that can be done in finding the trend of global surface temperature. For various reasons the loess estimate should be prioritised.
The optimal estimate identifies alternating cooling and warming intervals from 1877 to 2005. Two cooling intervals alternated with two warming intervals. These two cycles of alternating cooling and warming were barely conceded, and certainly not discussed let alone explained, in the influential IPCC 4th report (AR4) published in 2007 and based on data available to 2005.
But this property conflicts with a different requirement: that a trend should be a “smooth broad movement non-oscillatory in nature” (see 1.22 in Kendall and Ord’s classic text [3]). To reconcile these different requirements the estimated trend must be further decomposed into a non-oscillatory mean trend (red curve) and a quasi-periodic oscillation (blue curve):
trend in the data = mean trend in the data + quasi-periodic oscillation
A unique decomposition is achieved by computer-assisted iterative adjustment of four intersecting common years (red circled points). The mean trend is a cubic spline interpolation which deviates least from a straight line while the oscillatory component has a zero average over the record.
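A minimal sketch of this final decomposition, continuing with the toy series and the loess_cubic sketch from above, and taking the common years from the figure caption as given (the iterative adjustment that finds them is not reproduced here):

```python
import numpy as np
from scipy.interpolate import CubicSpline

common_years = np.array([1870, 1891, 1927, 1959, 1992, 2012])
trend = loess_cubic(years, anoms, 26)           # from the earlier sketch
knots = np.interp(common_years, years, trend)   # trend values at those years
mean_trend = CubicSpline(common_years, knots)(years)
oscillation = trend - mean_trend                # should average near zero
print(oscillation.mean().round(4))
```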
The strong oscillating component – the blue curve – is seen to be contributing more than half of the rate of increase when global warming was at a peak in the early 1990s.
What goes up may come down. This oscillating component looks to be continuing. Assessment becomes increasingly uncertain the closer one gets to the last data year, 2012. But despite this difficulty it can be stated with high confidence (IPCC terminology: better than 80% probability) that there has again been global cooling in recent years.
In the author’s view the whole climate debate has been muddled – and continues to be muddled – by not differentiating between this trend in the data (which oscillates) and the mean trend (which does not).
So yes – global warming looks to have stopped (if you believe in HadCRUT4) when one defines global surface temperature in terms of that trend – the brown curve. In fact it has more than stopped – it looks very much to have gone into reverse.
But no – average global warming continues ever upwards (still believing in HadCRUT4) when one defines an average global surface temperature in terms of that mean trend – the red curve.
An unambiguous computation of the rate of temperature increase is achieved by working from those common years (red circled points) when the two estimates coincide. The increase for HadCRUT4 from 1870 to 2012 of 0.75 ± 0.24 deg is equivalent to an average rate of 0.053 ± 0.017 deg/decade. From 1959 to 2012 this average rate looks to have increased to 0.090 ± 0.034 deg/decade. (The error limits here are the usual ±2 standard deviations, or 95% confidence limits.)
If this faster trend were to continue then we would be looking at an average rise from now of 0.8 deg by the end of this century (not choosing to set controversial error limits into the future). It is not however safe to make any predictions on the basis of the plot and the methodology adopted here.
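Both the quoted average rates and the end-of-century figure are simple arithmetic:

```python
rise_total = 0.75                          # deg, 1870 to 2012
print(rise_total / ((2012 - 1870) / 10))   # ~0.053 deg/decade, as quoted

rate_recent = 0.090                        # deg/decade, 1959 to 2012
print(rate_recent * (2100 - 2013) / 10)    # ~0.78 deg further rise by 2100, i.e. ~0.8
```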
It should not need to be stressed that there is no contradiction between these results and finding that regional warming may be continuing – particularly in high Northern latitudes and the Arctic.
There is a great deal more that can and needs to be said to justify these results. Interested readers can apply to the author for longer treatments and in particular a full and detailed mathematical justification.
[1] W. S. Cleveland and S. J. Devlin, “Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting”, Journal of the American Statistical Association, Vol. 83, No. 403 (Sep. 1988), pp. 596–610.
[2] T. C. Mills, “Modelling Current Trends in Northern Hemisphere Temperatures”, International Journal of Climatology, Vol. 26, pp. 867–884 (2006).
[3] M. G. Kendall and J. K. Ord, Time Series, 3rd edition, Edward Arnold (1990).
In a series of posts at
http://climatesense-norpag.blogspot.com
I have estimated the timing and extent of the possible coming cooling simply by considering the 60- and 1000-year solar cycles and looking also at the current state of solar activity as a guide. The key question is well illustrated in the graph in this piece: is the recent peak a peak in both the 60-year and 1000-year solar cycles – the blue and the red? Looking at the state of the sun it seems more likely than not that it is. Here are the conclusions of the latest post on my site.
“To summarize: using the 60- and 1000-year quasi-repetitive patterns in conjunction with the solar data leads straightforwardly to the following reasonable predictions for global SSTs:
1. Continued modest cooling until a more significant temperature drop at about 2016–17.
2. Possible unusual cold snap 2021–22.
3. Built-in cooling trend until at least 2024.
4. Temperature Hadsst3 moving average anomaly 2035 – 0.15.
5. Temperature Hadsst3 moving average anomaly 2100 – 0.5.
6. General conclusion: by 2100 all the 20th-century temperature rise will have been reversed.
7. By 2650 earth could possibly be back to the depths of the Little Ice Age.
8. The effect of increasing CO2 emissions will be minor but beneficial – they may slightly ameliorate the forecast cooling, and more CO2 would help maintain crop yields.
9. Warning! The Solar Cycles 2, 3, 4 correlation with cycles 21, 22, 23 would suggest that a Dalton minimum could be imminent. The Livingston and Penn solar data indicate that a faster drop to Maunder Minimum Little Ice Age temperatures might even be on the horizon. If either of these actually occurs there would be a much more rapid and economically disruptive cooling than that forecast above, which may turn out to be a best-case scenario.
How confident should one be in these predictions? The pattern method doesn’t lend itself easily to statistical measures. However, statistical calculations only provide an apparent rigor for the uninitiated, and in relation to the IPCC climate models they are entirely misleading because they make no allowance for the structural uncertainties in the model set-up. This is where scientific judgment comes in – some people are better at pattern recognition and meaningful correlation than others. A past record of successful forecasting such as indicated above is a useful but not infallible measure. In this case I am reasonably sure – say 65/35 – for about 20 years ahead. Beyond that, certainty drops rapidly. I am sure, however, that it will prove closer to reality than anything put out by the IPCC, Met Office or the NASA group. In any case this is a Bayesian-type forecast, in that it can easily be amended on an ongoing basis as the temperature and solar data accumulate.
Would still love to see a genuinely RAW data set.
My understanding is that ALL the major “official” data sets (such as the one referenced here) include “corrections” — that, generally, tend to “increase” the apparent slope of the data. On the other hand, they leave out “corrections” that might decrease the slope (e.g. UHI effect).
There may have been very little net change since roughly the 1940s and today — in the genuine, unmanipulated global temperature data.
There are some who want to believe that the long term trend MUST be the result of CO2, while the short term oscillation is the natural component.
It’s possible that at least some of the long-term trend is the result of UHI increases; then there is also the growing solar activity during the 20th century.
I have to say that the left-wing press is ratcheting up something chronic in the UK. Censorship of dissenting blog comments is rife, and articles are being written by those totally incognisant of what science is, how it is done and what constitutes scientific consensus.
Nature moves in cycles. The reason for this is remarkably simple. Linear trends lead to extinction. We the observer would not be here if Nature was linear. Historically humans have learned to successfully predict Nature by first identifying the length of the cycles, long before we understood the mechanism behind the cycles.
The failing of modern climate science and the IPCC lies in their insistence that climate is a linear function of the forcings. This of course leads to scary conclusions because all linear trends eventually lead to future extinction. And of course it leads to failed predictions because Nature is not linear.
Once you accept that Nature is not linear but cyclic, the scary predictions go away. A change in the forcings is not like pushing an object in space, where there is no friction or gravity to affect the outcome. A change in the forcings is more like changing how and when you push a child’s swing. You can change the amplitude somewhat, but it is very difficult to change the period of the underlying cycles.
Selection of the x and y axes gives an impression of great increase. Forget the actual numbers – look at the slope.
Lots of math, but the picture tells the story. The UN IPCC took a short period when the blue and red curves were both positive and said it would continue that way forever. Well, it couldn’t and it didn’t.
The 10th-order polynomial does give concern, because it is easily over-fitted to match the noise rather than the signal. This could make it reasonably useless for predictions.
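The concern is easy to illustrate: fit a 10th-degree polynomial to pure noise and it still produces an apparently smooth ‘trend’, wildest near the end points (toy data only):

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)
t = np.arange(1870, 2013)
noise = rng.normal(0.0, 0.1, t.size)        # no signal at all
fit = Polynomial.fit(t, noise, 10)(t)       # 10th-degree fit, scaled domain
print(fit[:3].round(3), fit[-3:].round(3))  # a non-zero 'trend' out of noise
```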
Wayne @3.18 My cooling forecast at 6.33 essentially does what you suggest, using the mirror SSTs as a guide – I’m a great believer in Ockham’s razor.
1. The earth’s climate is known to be cyclical – witness the geologic ages. For the most part the temperature oscillates, with a warming trend followed by a flat trend followed by a cooling trend. Fitting any type of continuous curve through long-term temperature does not reflect the earth’s climate variations, and thus is not an acceptable mathematical modeling technique. Breaking the timeline down into warming, flat and cooling periods, then fitting mathematical curves for those periods, offers a better way to model the earth’s known cycles.
2. There is no sign of carbon dioxide from burning hydrocarbons being the root cause of the temperature rises or drops in Figure 1.
All analyses of this type, which use only a portion of the natural rise from the Little Ice Age, still lack proper context. Even this short time period shows a strong natural component.
Yet if the scale goes back to the end of the last ice age, a completely clear picture emerges that we are well within normal variability. We are on the 6th temperature uptick since the end of the last ice age, and each one is weaker than the last. The overall temperature curve remains downward, toward the return of this epoch’s normal temps – which are not those of this temporary interglacial we are enjoying.
That hockey-stick graph still seems to be the meme even though it has been completely refuted. People in analyses like this seem to be forgetting that the oft-used phrase “we are experiencing the hottest temperatures in the modern era, where we were actually recording temperatures” is a completely misleading statement that implies we are experiencing exceptional temperatures. The Al Gore movie clip from yesterday hammered this misleading theme.
Posts with graphs like this encourage the continuation of this misleading impression. We are not in an exceptional period of temperatures, either in the rate of rise or in its extent.
Looking at real temperature reconstructions using ice cores, sediment studies, etc. gives a very clear context to the whimper of an effect, if any, we as humans are having on temperatures.
If only we did have an effect then we could stop the ultimate return of our shores to the continental shelves.
I wish that with all posts like this a good ice-core temperature proxy graph were either linked to or included in the article, as a constant reminder – to casual readers who are not up on the science as well as to the well informed – never to lose sight of the actual context in which all these discussions should be held.
Perhaps the red curve is just another, longer-period, larger-amplitude oscillating cycle overlaid upon the blue.
Here is my impartial take on climate modeling and global warming. The PDO was first described in 1997 and the AMO in 1994. Reducing these significant multi-decadal cycles to smooth and stable cycles for mathematical handling is a mistake, and yet another example of overreach and overconfidence given our limited data and understanding of long cycles and of the stability of those cycles.
“In the author’s view the whole climate debate has been muddled – and continues to be muddled – by not differentiating between this trend in the data (which oscillates) and the mean trend (which does not).”
I would say the climate debate has been muddled by trying to tease out a signal in a system we don’t seem to understand very well. If we understood the processes better, I would be more inclined to be interested in what various trends over various time frames might be telling us.
Given how noisy and chaotic the data is and how many processes are at work in the climate system, it blows my mind that anyone would use a temperature data trendline to prove any proposition in climate science. Statistical methods are elegant but in a world where you can’t hold any of the other variables constant, and where some of the variables are unknown, it all seems like bafflegab to me!
John Mason
For the context you want see Fig 6 in the latest post at the link in my 7:47 post
some curve which is most likely to be closest to the truth
Are you looking for
A) a mathematical formula to best describe a curve? Or
B) functions and parameters that represent physical processes and fit an observed outcome?
These are very different things, of very different utility. It is clear the author is doing (A).
envisages some smooth slow-varying curve … ignoring these rapid changes, but responsive [to what?] over a longer term to general movement [of what?]
An empirical search finds a span of 26 years for the loess and a 10th degree for the polynomial. No other combination of loess span and polynomial degree gives such close agreement. The condition is unique and therefore automatically solves the problem of model selection.
(Skeptic meter pegs off scale high)
First off, it is an empirical search. Over what ranges? How many trials and combinations? How is the agreement measured? For a finite set of trials there will be at least one maximum, but there may be several combinations that come close. To focus on the one maximum without even reporting the runners-up turns me completely against the author.
He is letting the data solve the problem of model selection. Therefore, with different data, you could choose different models. That does not get you any closer to understanding. It is just describing.
The mean trend is a cubic spline interpolation which deviates least from a straight line
Completely disconnected from reality and any physical process. Infinite end points. Only the year matters. This is a trend for trend sake and brings us no closer to any understanding of physical processes.
Dave in Canmore – you are right: the patterns are better selected by eyeballing the actual data and not slavishly following mathematical curves. See Figs 5–9 in the link provided above.
Does anyone know the accuracy of any of the temperature measurements such as Hadcrut4, etc.? Is the data meaningful in measuring a so-called average temp? Lastly, what does a confidence level mean – does it mean only statistical confidence? I may be asking the wrong questions because I’m not very familiar with statistics, but would you build anything based on how solid these temp measurements are?
The warming in recent years was caused by natural causes – not for more than half, but for 100 percent. The part of the warming that can’t be explained by natural causes is the result of data tampering. http://iceagenow.info/2013/09/warmists-fiddling-data-years-astrophysicist/
As Piers Corbyn explains: Without data fraud the World is COOLING NOT WARMING.
Andries Rosema and his team studied satellite data and concluded: The earth is steadily cooling off since 1982: http://climategate.nl/2013/07/18/meteosat-satelliet-waarnemingen-1982-2006-aarde-koelt-aanhoudend-af/ This is a Dutch language website, to see the report click on the red hyperlink “download hier”. Global warming is the biggest hoax in our present time, like Alan Caruba has said: http://iceagenow.info/2013/09/global-warming-biggest-lie-exposed/
This is a nice analysis.
Two comments are very profound, and hit on my thought. The two comments are:
NewEnglandDevil (@NewEnglandDevil): “Trend analysis is fine for what it is. What this doesn’t do (as far as I can tell) is link specific physical processes to each of the two smoothing processes. Without linking physical processes to the results I don’t believe any projection using this method would be valid or accurate. In other words, it’s fine for reviewing the past, but tells us nothing about the future.”
RC Saumarez: “The only problem is that there is no mechanistic component to the model and therefore cannot be used to extrapolate temperature – or at least it can but this is rather difficult to justify. At the moment a cubic polynomial is the best fit to the mean temperature but in future this may not be the case.”
The profound issue is this: the natural world does what it will do, and we are fortunate when mathematical models are able to provide some sort of guide to what nature will do. The models are still just models, and are inferior. I agree with the commenters who note here, and have noted elsewhere on WUWT, that any model of the climate needs to have some governor function or functions at some organizing level higher than that of a sine wave cycling from peak to trough and back again. Higher-level processes govern how these cycles run. As noted, there are plenty of step functions as well as sine functions and trends.
Global temp is not governed or regulated by a mathematical function, and so there is no underlying genuine mathematical function to discover.
If you show me a spirograph picture, or a fractal, there is an underlying mathematical model that accounts for all data seen.
We need to avoid the temptation to buy into the idea that there is a mathematical model underlying global temp.
“Paul Homewood says:
September 24, 2013 at 2:49 am
The difficulty is finding a best estimate – the curve most likely to be closest to the truth. This is an outstanding problem in what the statistical literature identifies as model selection.
I get nervous when a statistician tells me he can get different results depending on what model he uses.
#################
That’s the nature of the beast.
We can never observe the data-generating process; we can only observe its output.
And from any given set of data there are innumerable ways to fit it.
You have two choices:
A) Try to infer the data-generating process from the data: make assumptions and fit curves. They are countless.
B) Construct a model of the data-generating process (a theory) using physical laws and physical entities, and attempt to hindcast and forecast.
Neither method has any epistemic priority. Both can work. For “understanding”, method B is preferred because it has the chance of tying into other known physics in a mathematical and ontological manner. Sometimes A will outperform B. Sometimes we have no way of even beginning B, and all we can do is fit curves. Sometimes we can use A even though we know it can’t be correct. The current post models temperature in a way that is unphysical, but it may work over the next few decades. We can be fairly certain that it will fail if we go forward or backward over great periods of time.
There are many good comments above.
I have no issues with the math, but serious concerns about a significant warming bias in the data.
richardbriscoe says: September 24, 2013 at 12:39 am
http://wattsupwiththat.com/2013/09/24/an-impartial-look-at-global-warming/#comment-1425095
“The post 1960 trend is around 1° per century, and shows no sign of increasing. The trend in the early 20th century is at least a third of that, so the man-made element would appear to be only about 0.6° to 0.7° per century. Hardly a cause for panic.”
About a decade ago I estimated a probable warming bias in Hadcrut3 of about 0.07 C per decade, based on satellite temperatures. This warming bias probably extends back to before the satellites were launched, to about 1940.
Let us assume that Hadcrut4 exhibits a similar warming bias to Hadcrut3: about 0.07 C per decade, or 0.7 C per century.
Adjusting richardbriscoe’s sentence for this warming bias: “…the man-made element would appear to be only about 0.0° C per century. Hardly a cause for panic.”
We wrote with confidence more than a decade ago:
“Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”
http://www.apegga.org/Members/Publications/peggs/WEB11_02/kyoto_pt.htm
I suggest that it was warmer in the USA during the 1930’s than it is today, and quite possibly the entire world was warmer then too.
There is strong evidence that the Medieval Warming Period was also warmer than today.
Repeating from 2002, with even greater confidence:
“The alleged global warming crisis does not exist.”
The only thing that may be correct in the analysis is the oscillation, because the warming trend from the Little Ice Age is in the data too. The next question is: when has the planet recovered from the Little Ice Age? Is it when the planet is as warm as in Medieval times? If the temperature increases significantly beyond the temperatures of Medieval times, then do we have global warming?
@Nigel Harris 5:03am
These [sharp reversals at end points] do not appear to my eye to be even remotely justified by the raw data.
Good catch Nigel. Look at the Blue Curve. The center looks nicely sinusoidal, but the end inflections get contorted and tortured.
We must be looking at side lobes of gigantic scale from the 10th-order polynomial. It is telling that Hodgart doesn’t include the 11 parameters of that 10th-order polynomial fit. After all, it is “unique.”