An impartial look at global warming…

Guest essay by M.S.Hodgart

(Visiting Reader Surrey Space Centre University of Surrey)

The figure presented here is a new graph of the story of global warming – and cooling. The graph makes no predictions and should be used only to see what has been happening historically.

The boxed points in the figure are the ‘raw data’ – the annualised global average surface temperature known as HadCRUT4 as released by the UK Meteorological Office. Strictly these are ‘temperature anomalies’. The plot runs from 1870 up to the last complete calendar year 2012. The raw data cannot of course be treated as absolutely true – but let us give the Met Office the benefit of the doubt – this is hopefully their best effort so far.

It is a difficult statistical problem to estimate the historical trend in this kind of time series. The solution requires some kind of smoothing of the data – but how, exactly? There are an unlimited number of ways of drawing a curve through the data.

A popular method – much used in the climate science literature – is the moving average. One trouble with it is that quite different-looking curves result depending on the width of the smoothing window used in that average – and also on the type of window chosen. Another difficulty is its poor dynamic tracking capability.
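To make the window-width sensitivity concrete, here is a minimal sketch (not from the essay; the data values are made up) of a centred moving average applied to a short noisy series – note how widening the window changes the smoothed values, and how estimates disappear at the ends:

```python
def moving_average(series, window):
    """Centred moving average; returns None where the window runs off the ends."""
    half = window // 2
    out = []
    for i in range(len(series)):
        if i < half or i + half >= len(series):
            out.append(None)  # the end-point problem: no estimate here
        else:
            out.append(sum(series[i - half:i + half + 1]) / window)
    return out

data = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.3, 0.7, 0.5, 0.8]
print(moving_average(data, 3))
print(moving_average(data, 5))  # a wider window gives a visibly different curve
```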

The other popular method is to fit a selection of straight lines (least-squares estimates) to selected spans of years. The notorious difficulty here is the quite different impression one gets depending on the choice of start and stop years.
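The start/stop sensitivity is easy to demonstrate. A sketch using ordinary least squares on the three anomaly values quoted later in the essay (0.18 deg in 1996, 0.52 deg in 1998, 0.29 deg in 2000): the fitted slope reverses sign depending on which span is chosen.

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years, temps = [1996, 1998, 2000], [0.18, 0.52, 0.29]
print(ols_slope(years[:2], temps[:2]))  # 1996-1998: apparent warming
print(ols_slope(years[1:], temps[1:]))  # 1998-2000: apparent cooling
```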

The difficulty is finding a best estimate – some curve which is most likely to be closest to the truth. This is an instance of an outstanding problem which the statistical literature identifies as model selection.

The source of the problem is what the telecommunication and control engineers call noise in the data – a random-looking variation from one year to the next.

As a conspicuous example of this random variation: in recent years, according to the record, the global temperature (anomaly) was 0.18 deg in 1996; it had jumped to 0.52 deg by 1998 but had fallen again to 0.29 deg by 2000.

Respecting normal linguistic usage and common sense, we would not want to describe a jump of 0.34 deg in only two years as a phenomenon of ‘global warming’, nor a drop of 0.23 deg over the next two years as ‘global cooling’. Ordinary language, when expressed in mathematics, envisages some smooth, slow-varying curve which passes on a middle course through the scattered data, ignoring these rapid changes but responsive over the longer term to general movement. There needs to be an explicit decomposition:

HadCRUT4 annual data = trend in the data + temperature noise

The problem is to estimate that trend in the data when it is corrupted by the presence of this significant noise.
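The model can be illustrated with synthetic data (purely hypothetical numbers, chosen only to be roughly the size of the variations discussed above): a smooth underlying trend plus Gaussian year-to-year noise, of which an estimator only ever sees the sum.

```python
import random

random.seed(42)  # reproducible toy example
years = list(range(1870, 2013))
trend = [0.005 * (y - 1870) for y in years]      # hypothetical smooth trend
noise = [random.gauss(0.0, 0.1) for _ in years]  # year-to-year 'temperature noise'
series = [t + n for t, n in zip(trend, noise)]   # what the estimator actually sees
print(len(series))
```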


HadCRUT4 global annual averaged temperature anomaly 1870 – 2012 (connected brown box points). Brown curve 26-year span cubic loess estimate. Dashed brown curve 10th degree PR estimate. Red curve is a mean trend. Blue curve is the offset cyclic component of loess. The red circled points identify coincident years of trend and mean trend: in years 1870, 1891, 1927, 1959, 1992, & 2012. Blue circled points delineate alternating cooling and warming in cyclic variation: 1877, 1911, 1943, 1976, & 2005.

A novel principle of joint estimation is proposed here – using two relatively simple methods of smoothing.

In the figure the continuous brown curve is an estimate by locally weighted regression (loess) – using a locally fitted cubic polynomial and the standard ‘tri-cube’ weighting. Loess is a greatly superior generalisation of the moving average [1]. Professor Mills deserves credit for first pointing out the superiority of a cubic over the usual linear or quadratic local polynomial [2]. Unfortunately the standard statistical tools seem not to have caught up with him here – nor with his ‘natural’ solution to the end-point problem (where data runs out after 2012 and before 1870 on this graph).
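A minimal sketch of the local fitting step may help (tri-cube weights and a locally fitted cubic, as described above; this is illustrative code, not the author's implementation – a full loess would repeat this fit at every data year):

```python
def tricube(u):
    """Standard tri-cube weight: (1 - |u|^3)^3 inside the window, 0 outside."""
    u = abs(u)
    return (1.0 - u ** 3) ** 3 if u < 1.0 else 0.0

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def loess_point(x0, xs, ys, halfwidth, degree=3):
    """Weighted polynomial fit centred on x0; the intercept is the value at x0."""
    w = [tricube((x - x0) / halfwidth) for x in xs]
    dx = [x - x0 for x in xs]  # centre the regressor for conditioning
    m = degree + 1
    A = [[sum(wi * d ** (i + j) for wi, d in zip(w, dx)) for j in range(m)]
         for i in range(m)]
    b = [sum(wi * d ** i * y for wi, d, y in zip(w, dx, ys)) for i in range(m)]
    return solve(A, b)[0]

# sanity check: exactly cubic data is reproduced exactly
xs = list(range(21))
ys = [0.001 * x ** 3 - 0.05 * x ** 2 + x for x in xs]
print(loess_point(10, xs, ys, halfwidth=8))  # recovers y(10) = 6.0
```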

The dashed brown curve is a standard (unweighted) polynomial regression. The principle of joint estimation is to look for a span of years in the loess and a degree in the polynomial regression for which the two curves most closely resemble each other – for which there is least disparity.

Empirical search finds a span of 26 years for the loess and a 10th degree for the polynomial. No other combination of loess span and polynomial degree gives such close agreement. The condition is unique and therefore automatically solves the problem of model selection.

In the author’s view this joint estimate is really the best that can be done in finding the trend of global surface temperature. For various reasons the loess estimate should be prioritised.

The optimal estimate identifies alternating cooling and warming intervals from 1877 to 2005. Two cooling intervals alternated with two warming intervals. These two cycles of alternating cooling and warming were barely conceded, and certainly not discussed let alone explained, in the influential IPCC 4th report (AR4) published in 2007 and based on data available to 2005.

But this property conflicts with a different requirement: that a trend should be a “smooth broad movement non-oscillatory in nature” (see 1.22 in Kendall and Ord’s classic text [3]). To reconcile these different requirements the estimated trend must be further decomposed into a non-oscillatory mean trend (red curve) and a quasi-periodic oscillation (blue curve):

trend in the data = mean trend in the data + quasi-periodic oscillation

A unique decomposition is achieved by computer-assisted iterative adjustment of four intersecting common years (red circled points). The mean trend is a cubic spline interpolation which deviates least from a straight line while the oscillatory component has a zero average over the record.

The strong oscillating component – the blue curve – is seen to be contributing more than half of the rate of increase when global warming was at a peak in the early 1990s.

What goes up may come down. This oscillating component looks to be continuing. Assessment is increasingly uncertain the closer one gets to the last data year of 2012. But despite this difficulty the probability that there is again global cooling in recent years can be stated with high confidence (IPCC terminology – better than 80%).

In the author’s view the whole climate debate has been muddled – and continues to be muddled – by not differentiating between this trend in the data (which oscillates) and the mean trend (which does not).

So yes – global warming looks to have stopped (if you believe in HadCRUT4) when one defines global surface temperature in terms of that trend – the brown curve. In fact it has more than stopped – it looks very much to have gone into reverse.

But no – average global warming continues ever upwards (still believing in HadCRUT4) when one defines an average global surface temperature in terms of that mean trend – the red curve.

A non-ambiguous computation of the rate of temperature increase is achieved by working from those common years (red circled points) when the two estimates coincide. The increase for HadCRUT4 from 1870 to 2012 of 0.75 ± 0.24 deg is equivalent to an average rate of 0.053 ± 0.017 deg/decade. From 1959 to 2012 this average rate looks to have increased to 0.090 ± 0.034 deg/decade. (The error limits here are the usual ± 2 standard deviations, or 95% confidence limits.)
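The headline conversion can be checked directly (the 0.75 deg rise over 1870–2012 is taken from the text; the arithmetic is simply rise divided by elapsed decades):

```python
def rate_per_decade(delta_deg, start_year, end_year):
    """Average warming rate in deg/decade over the interval."""
    return delta_deg / (end_year - start_year) * 10.0

print(round(rate_per_decade(0.75, 1870, 2012), 3))  # 0.053 deg/decade, as quoted
```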

If this faster trend were to continue then we would be looking at an average rise from now of 0.8 deg by the end of this century (not choosing to set controversial error limits into the future). It is not however safe to make any predictions on the basis of the plot and the methodology adopted here.

It should not need to be stressed that there is no contradiction between these results and finding that regional warming may be continuing – particularly in high Northern latitudes and the Arctic.

There is a great deal more that can and needs to be said to justify these results. Interested readers can apply to the author for longer treatments and in particular a full and detailed mathematical justification.


[1] W. S. Cleveland and S. J. Devlin, “Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting”, Journal of the American Statistical Association, Vol. 83, No. 403 (Sep. 1988), pp. 596–610.

[2] T. C. Mills, “Modelling Current Trends in Northern Hemisphere Temperatures”, International Journal of Climatology, Vol. 26, pp. 867–884 (2006).

[3] M. Kendall and J. K. Ord, Time Series, 3rd edition, Edward Arnold (1990).

commieBob
September 24, 2013 4:03 am

If you torture the data enough, it will confess. Of course a confession obtained by torture should never be accepted as valid.
IMHO, this is just too much processing on too little data. I hope Prof. Briggs weighs in here.

lgl
September 24, 2013 4:07 am

“The strong oscillating component – the blue curve – is seen to be contributing more than half of the rate of increase when global warming was at a peak in the early 1990s.”
And it’s even worse since the ~20 yrs component is filtered out.
http://virakkraft.com/Hadcrut-20-60.png

Alan Millar
September 24, 2013 4:22 am

This fits in with my thoughts about CO2. Yes it does have an effect but much smaller than the alarmists claim.
Clearly we had a natural warming trend before the huge increase in CO2 after 1945. Since then these emissions seem to have ameliorated the following cooling spell and exacerbated the following, expected, warming trend from the middle 70s.
Due to the logarithmic nature of increasing atmospheric CO2 we can expect the forcing effect on the natural trends to lessen as time progresses. So there is no catastrophic warming anywhere in prospect and we can expect a slight cooling for the next decade or so.
I think the warming this century will be much less than 0.8C, more like 0.3C.
In addition to the CO2 emissions Solar outputs were very high during the latter half of the 20th century and are likely to average much less during this century. Also we will have two cooling spells and one warming spell this century, the exact opposite of the 20th.
Also I don’t particularly trust the changes made in Hadcrut 4. Like all databases in the hands of confirmed alarmists any changes always cool the past and warm the present. The chances that that is just coincidental are vanishingly small.
Alan

Eliza
September 24, 2013 4:25 am

HADCRUT is adjusted data and not valid in my view (UHI etc). CET in my view is the only valid data (rural since 1850). The only recent valid data is radiosonde, RSS and satellite AMSU. Neither CET nor RSS nor AMSU show any significant warming since records began. This person is again analyzing selective data.

John West
September 24, 2013 4:31 am

“differentiating between this trend in the data (which oscillates) and the mean trend (which does not).”
The mean trend does not oscillate over this snippet of time, but it’s too soon to claim that it doesn’t oscillate.

September 24, 2013 4:58 am

Once again an analysis that fails to find any CO2 signal in a modern temperature/time graph. How many more such analyses do we need to have before the Royal Society and the American Physical Society accept that there is no CO2 signal, and ALL temperature variations are natural in origin?

Carbon500
September 24, 2013 5:02 am

I note with interest Lance Wallace’s comments on the CET. I have a book by meteorologist William James Burroughs (Climate Change – a Multidisciplinary Approach, Cambridge University Press 2001) in which (p107) he comments on the CET.
He says: ‘The CET series confirms the exceptionally low temperatures of the 1690s and in particular the cold late springs of this decade. Equally striking is the sudden warming from the 1690s to the 1730s. In less than forty years the conditions went from the depths of the Little Ice Age to something comparable to the warmest decades of the twentieth century. This balmy period came to a sudden halt with the extreme cold of 1740 and a return to colder conditions, especially in the winter half of the year. Various other series for other European cities confirm this conclusion.’
Later he continues: ‘A more striking feature is the general evidence of interdecadal variability. So, the poor summers of the 1810s are in contrast to the hot ones of the late 1770s and early 1780s, and around 1800. The same interdecadal variability shows up in more recent data. The 1880s and 1890s were marked by more frequent cold winters, while both the 1880s and 1910s had more than their fair share of cool wet summers. A similar variable story emerges from other parts of the world.’

Nigel Harris
September 24, 2013 5:03 am

Is anyone else even slightly sceptical about the sharp reversals in trend shown by the brown line at both the start and end of the data set? These do not appear to my eye to be even remotely justified by the raw data. It looks as though the author’s method has somehow constrained the end-points.

Bruce Cobb
September 24, 2013 5:14 am

Nothing unprecedented, nothing alarming, and no CO2 forcing signal either. Climate is simply doing what it has always done. Alarmists, who should be relieved, will instead hate it as it threatens their Climatist ideology.

John Haddock
September 24, 2013 5:16 am

Whatever combination of cycles and trends is used, the curve fitting is always backward looking. The bigger question is how long we have to wait (statistically speaking) before we can be confident in the forecasting value of any predicted curve, however constructed; 10 years (probably not), 20 years (maybe) or 50 years? Given how little we seem to understand about all the various cycles it may well be another whole academic generation before we really understand.

September 24, 2013 5:21 am

What I get from the article is “an average rise of 0.8 deg by the end of this century.” And that’s taking the HADCRUT records as reliable, despite their dubious provenance.
Compare that with the IPCC prediction of circa 3 degrees. M.S.Hodgart’s Reality Check indicates that the value of S is still hugely overstated.

Nigel Harris
September 24, 2013 5:35 am

On a second look, it appears that the author must have constrained his oscillating component to have zero magnitude at the start and end points of the data. This seems an arbitrary and unwarranted constraint. And it looks to me as though it is only with this constraint that the end of the chart can be made to point so dramatically downwards.

Bill_W
September 24, 2013 5:39 am

Trenberth really sounds like a religious fanatic (prophet) here. ““The previous three have had signature messages,” Trenberth said. “Maybe this one is that warming signs are everywhere in melting Arctic sea ice, melting Greenland, warming oceans, rising sea levels, and more intense storms as well as higher surface temperatures. ” Then he adds that there are well-funded “deniers” out there.
Sad.

cynical_scientist
September 24, 2013 5:42 am

Some comments.
The question you address at the end is to what extent the current “pause” is a temporary cyclic phenomenon and to what extent it is a true change in the long term trend. In other words you are interested in the decomposition of the brown curve into the sum of the blue and the red, particularly in the last 15 years in the time of the “pause”. Your conclusion seems to be that a warming trend continues unabated and the pause is purely due to short term variation.

So yes – global warming looks to have stopped (if you believe in HadCRUT4) when one defines global surface temperature in terms of that trend – the brown curve. In fact it has more than stopped – it looks very much to have gone into reverse.
But no – average global warming continues ever upwards (still believing in HadCRUT4) when one defines an average global surface temperature in terms of that mean trend – the red curve.

The problem is that the decomposition into trend and oscillation you are using to draw this conclusion becomes very uncertain near the edges of the curve. In particular I note that you seem to have “pinned” the long term trend to the smoothed temperature at each end – observe how the brown and red curves touch there. This pinning creates an artifact in the decomposition into trend and variation near the edges of the graph. You can also see the effect of this artifact by looking at the variation curve – see how the blue curve appears to have abruptly steep portions at both ends.
I generally like what you have done. It is certainly a better way to draw a curve through the graph than brutally sticking a straight line through it all. But I think you are stretching when you try to use this analysis to try to attribute the recent pause to short term variation and make the claim that a long term warming trend continues unabated underneath. The decomposition into trend and variation on which you base this conclusion appears to suffer from end effects which render it unreliable precisely near the edges of the curve, a region which includes the recent pause which you are trying to draw conclusions about.

September 24, 2013 5:46 am

Trend analysis is fine for what it is. What this doesn’t do (as far as I can tell) is link specific physical processes to each of the two smoothing processes. Without linking physical processes to the results I don’t believe any projection using this method would be valid or accurate. In other words, it’s fine for reviewing the past, but tells us nothing about the future.

minarchist
September 24, 2013 5:50 am

M.S.Hodgart
“The other popular method is to fit a selection of straight lines (least square estimate) to a selected span of years. The notorious difficulty here is the quite different impression one gets depending on the choice of start and stop years.”
Does not your method suffer from the same notorious difficulty? Granted, that is when the Hadley Center begins its record, but 1880 is a poor choice for a start date. The globe has been cooling steadily for the last 8,000 years since the Holocene Optimum, punctuated by warm periods of decreasing magnitude.
http://wattsupwiththat.files.wordpress.com/2013/03/greenland-ice-core-isotope-past-4000-yrs.png

Editor
September 24, 2013 5:56 am

rogerknights says:
September 24, 2013 at 1:05 am
> This is consistent with Akasofu’s interpretation.
Not really, he settles on something like a linear component for the LIA recovery, with my interpretation that slope should flatten as we get further away from the LIA. This shows a curve with an ever-increasing slope (one that does not match the Mauna Loa CO2 curve).
“10th degree for the polynomial”
Yeah, that implies extending the polynomial a few years in either direction will send it zooming downwards. Looks like some of that is evident on the graph, e.g. the last half cycle is only about 20 years instead of the 30 or so that Akasofu fits. Like Roy Spencer’s caveat when he included a polynomial fit on the UAH temperature record, “for amusement purposes only.”

rogerknights
September 24, 2013 5:57 am

Here’s the link to the story earlier this month on Akasofu’s similar-looking interpretation:
http://wattsupwiththat.com/2013/09/09/syun-akasofus-work-provokes-journal-resignation/

cynical_scientist
September 24, 2013 5:57 am

@Nigel Harris: I fully agree. I reached the exact same conclusion.

Gerry - England
September 24, 2013 6:00 am

I agree that it is a well-reasoned look at recent temperature records, but the missing issue is the anecdotal evidence that exists to suggest that the planet has had major periods where is has been as warm, if not warmer, than it is today. That then begs the question that since this couldn’t have been attributable to human activity, why can’t what caused these warm periods have caused the warmth we have seen recently? And since they appear to have no answer for this – especially as CO2 concentration increases while temperatures don’t – this was why Mann worked so hard to try to erase the MWP and any other early warming to take this inconvenience out of the equation.

Dodgy Geezer
September 24, 2013 6:02 am

Tell me, M.S.Hodgart, would you consider offering such a paper up to a peer-reviewed Climate publication? And if not, why not?

September 24, 2013 6:08 am

I like fitting the data to Fourier-type harmonic cycles. Even the “noise” can be treated similarly. Considering the harmonics as independent variables in multiple-regression analysis, the statistical significance of each can be determined, as well as the shape of the cycles. For example, for an annual cycle first multiply the years by 2*Pi so that x = 2*Pi*t. The factors in the regression are then: the primary cycle cos(x) and sin(x), first harmonic cos(2x) and sin(2x), second harmonic cos(3x) and sin(3x), etc. Include as many harmonics as are statistically significant. For cycles longer than a year, estimate the primary cycle length by counting the peaks within a time period and divide x by the cycle length. Vary the cycle length to determine maximum R^2.
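A minimal sketch of the harmonic-fit idea this comment describes, under a simplifying assumption: with evenly spaced samples covering a whole number of cycles, discrete orthogonality gives each cosine/sine coefficient in closed form, so each harmonic can be estimated independently (the full multiple-regression approach is needed for irregular or partial-cycle data).

```python
import math

def harmonic_coeffs(samples, k):
    """Cosine/sine coefficients of the k-th harmonic of an evenly sampled,
    whole-number-of-cycles record, via discrete orthogonality."""
    n = len(samples)
    a = 2.0 / n * sum(y * math.cos(2 * math.pi * k * i / n)
                      for i, y in enumerate(samples))
    b = 2.0 / n * sum(y * math.sin(2 * math.pi * k * i / n)
                      for i, y in enumerate(samples))
    return a, b

# recover a known second harmonic from synthetic data
n = 24
ys = [3.0 * math.cos(2 * math.pi * 2 * i / n) + 1.0 * math.sin(2 * math.pi * 2 * i / n)
      for i in range(n)]
print(harmonic_coeffs(ys, 2))  # close to (3.0, 1.0)
```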

wayne
September 24, 2013 6:12 am

I think many here also see a real problem with the circa +0.7°C adjustments made to the temperature records – generally always downward in the far past, upward in the more recent decades, as others have mentioned above – and you know what that does to the graph shown above? It’s really a good chart; I’ll use it in the future for this exact example.
If those adjustments published by NOAA and GISS on their sites are applied to the blue curve you end up with the brown curve. If you remove the adjustments from the brown curve, just roughly eyeballing it, you get back the blue curve. Most of the rural cities in my state show no warming at all since 1890–1895 when the records began, but this is but one area, and maybe it has some special properties that protect it, or shield it, from this assumed increase in accumulated global energy (therefore a raising of temperature) – but in physics I learned that is not possible over a century of time even in a system as large as the entire Earth. So I agree with most questioning these adjustments themselves. Something is amiss. Maybe a small portion of the 0.7°C is proper, but it sure seems the majority is in UHI that should have adjusted temperatures downward, starting when cities began and scaled larger as the cities matured, so as not to just ignore the 3% of land occupied by cities. That kind of adjustment I could see for the urban sites.
Now the cities here all show an increase as they grew from empty grassland into huge metropolises and that type of warming is naturally expected, but that warming is not global, it only affects a very small few percent of area.

RC Saumarez
September 24, 2013 6:20 am

Certainly we can describe the data in this way and yes this optimal in one sense. It certainly emphasises the periodic component (which is obviously not purely sinusoidal), which is interesting because there seems to be considerable debate over whether this exists.
I agree that smoothing per se isn’t particularly useful.
The only problem is that there is no mechanistic component to the model, and it therefore cannot be used to extrapolate temperature – or at least it can, but this is rather difficult to justify. At the moment a cubic polynomial is the best fit to the mean temperature, but in future this may not be the case.

September 24, 2013 6:28 am

Trenberth said: “Maybe this one is that warming signs are everywhere in melting Arctic sea ice, melting Greenland, warming oceans, rising sea levels, and more intense storms as well as higher surface temperatures.”
===========
Unfortunately Trenberth demonstrates selection bias in the quote, which is what separates an activist from a scientist. The world is warming at the north pole, but not at the equator or the south pole. This is not the signature of CO2 warming. The warming of the north pole reduces the temperature differential between the pole and the equator, which is what drives storms. As a result storm intensity is decreasing in the northern hemisphere.
This is generally good news for people, most of whom live in the northern hemisphere. However, it is not good news for folks like Trenberth that rely upon increasingly scary predictions to separate taxpayers from their hard earned money.