Guest essay by M. S. Hodgart
(Visiting Reader, Surrey Space Centre, University of Surrey)
The figure presented here is a new graph of the story of global warming – and cooling. The graph makes no predictions and should be used only to see what has been happening historically.
The boxed points in the figure are the ‘raw data’ – the annualised global average surface temperature known as HadCRUT4, as released by the UK Meteorological Office. Strictly these are ‘temperature anomalies’. The plot runs from 1870 up to the last complete calendar year, 2012. The raw data cannot of course be treated as absolutely true – but let us give the Met Office the benefit of the doubt: this is, hopefully, their best effort so far.
It is a difficult statistical problem to estimate the historical trend in this kind of time series. The solution requires some kind of smoothing of the data – but how, exactly? There are an unlimited number of ways of drawing a curve through the data.
A popular method – much used in the climate science literature – is a moving average. One trouble with it is that quite different-looking curves result depending on the width of the smoothing window used in the average, and also on the shape of the window itself. Another difficulty is its poor dynamic tracking capability.
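The window-width sensitivity is easy to demonstrate. A minimal sketch in Python – synthetic data standing in for HadCRUT4, and arbitrary window widths, none of it from the actual record:

```python
import numpy as np

def moving_average(y, window):
    """Centred moving average; the output is shorter than the input
    because the window cannot be placed over the end points."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="valid")

# Synthetic "temperature" series: a slow trend plus year-to-year noise
rng = np.random.default_rng(0)
years = np.arange(1870, 2013)          # 143 annual points, as in the figure
series = 0.005 * (years - 1870) + 0.1 * rng.standard_normal(years.size)

# Different window widths give visibly different smoothed curves,
# and both lose end points that a loess estimate can still cover
ma5 = moving_average(series, 5)        # 139 points remain
ma21 = moving_average(series, 21)      # 123 points remain
```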
The other popular method is to fit straight lines (least-squares estimates) to selected spans of years. The notorious difficulty here is the quite different impression one gets depending on the choice of start and stop years.
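The same point, sketched on a synthetic series containing a deliberate 64-year oscillation. The windows below are chosen to straddle its half-cycles; nothing here is fitted to the real record:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1870, 2013)
# Slow rise plus a 64-year oscillation plus noise (illustrative only)
temps = (0.005 * (years - 1870)
         + 0.15 * np.sin(2 * np.pi * (years - 1877) / 64)
         + 0.08 * rng.standard_normal(years.size))

def decadal_trend(start, stop):
    """Least-squares slope over [start, stop], in deg per decade."""
    mask = (years >= start) & (years <= stop)
    return 10 * np.polyfit(years[mask], temps[mask], deg=1)[0]

# Same series, very different impressions depending on the window:
up = decadal_trend(1925, 1957)     # spans a warming half-cycle
down = decadal_trend(1893, 1925)   # spans a cooling half-cycle
full = decadal_trend(1870, 2012)   # whole record
```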
The difficulty is in finding a best estimate – some curve which is most likely to be closest to the truth. This is an outstanding problem in what the statistical literature identifies as model selection.
The source of the problem is what the telecommunication and control engineers call noise in the data – a random-looking variation from one year to the next.
As a conspicuous example of this random variation: in recent years, according to the record, the global temperature (anomaly) was 0.18 deg in 1996; it had jumped to 0.52 deg by 1998 but had fallen again to 0.29 deg by 2000.
Respecting normal linguistic usage and common sense, we would not want to describe a jump of 0.34 deg in only two years as a phenomenon of ‘global warming’; nor a drop of 0.23 deg over the next two years as ‘global cooling’. Ordinary language, when expressed in mathematics, envisages some smooth, slowly varying curve which passes on a middle course through the scattered data, ignoring these rapid changes but responsive over a longer term to general movement. There needs to be an explicit decomposition
HadCRUT4 annual data = trend in the data + temperature noise
The problem is to estimate that trend in the data when it is corrupted by the presence of this significant noise.
Figure: HadCRUT4 global annual averaged temperature anomaly, 1870–2012 (connected brown box points). Brown curve: 26-year span cubic loess estimate. Dashed brown curve: 10th-degree polynomial regression estimate. Red curve: mean trend. Blue curve: the offset cyclic component of the loess. The red circled points identify coincident years of trend and mean trend: 1870, 1891, 1927, 1959, 1992 and 2012. Blue circled points delineate alternating cooling and warming in the cyclic variation: 1877, 1911, 1943, 1976 and 2005.
A novel principle of joint estimation is proposed here – using two relatively simple methods of smoothing.
In the figure the continuous brown curve is an estimate by locally weighted regression (loess) – using a locally fitted cubic polynomial and the standard ‘tri-cube’ weighting. Loess is a greatly superior generalisation of the moving average [1]. Professor Mills deserves credit for first pointing out the superiority of a cubic over the usual linear or quadratic local polynomial [2]. Unfortunately the standard statistical tools seem not to have caught up with him here – nor with his ‘natural’ solution to the end-point problem (where the data runs out after 2012 and before 1870 on this graph).
The dashed brown curve is a standard (unweighted) polynomial regression. The principle of joint estimation is to look for a span of years in the loess and a degree in the polynomial regression where the two curves most closely resemble each other – the condition of least disparity.
Empirical search finds a span of 26 years for the loess and a 10th degree for the polynomial. No other combination of loess span and polynomial degree gives such close agreement. The condition is unique and therefore automatically solves the problem of model selection.
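The joint-estimation search can be sketched in a few dozen lines. This is the principle only in outline, not the author's code: a hand-rolled cubic/tri-cube loess, a brute-force grid search over (span, degree), and a synthetic series standing in for HadCRUT4. All function names and grid limits are the sketch's own assumptions:

```python
import numpy as np

def loess_cubic(x, y, span):
    """Locally weighted regression: cubic local polynomial, tri-cube
    weights, `span` = number of nearest years in each local fit."""
    fitted = np.empty(x.size)
    for i in range(x.size):
        dist = np.abs(x - x[i])
        idx = np.argsort(dist)[:span]
        d = dist[idx] / dist[idx].max()
        w = (1.0 - d**3) ** 3                     # tri-cube weights
        # centre on x[i] for conditioning; the constant term is the
        # local polynomial evaluated at x[i]
        coef = np.polyfit(x[idx] - x[i], y[idx], deg=3, w=np.sqrt(w))
        fitted[i] = coef[-1]
    return fitted

def poly_fit(x, y, degree):
    """Ordinary (unweighted) polynomial regression over the whole record."""
    xc = (x - x.mean()) / x.std()                 # scale for conditioning
    return np.polyval(np.polyfit(xc, y, degree), xc)

def joint_estimate(x, y, spans, degrees):
    """Search for the (span, degree) pair whose two smoothed curves
    agree most closely -- the 'least disparity' condition."""
    best = None
    for s in spans:
        smooth = loess_cubic(x, y, s)
        for d in degrees:
            disparity = np.sqrt(np.mean((smooth - poly_fit(x, y, d)) ** 2))
            if best is None or disparity < best[0]:
                best = (disparity, s, d)
    return best

# Synthetic record standing in for HadCRUT4 (no claim about real data)
rng = np.random.default_rng(2)
years = np.arange(1870, 2013).astype(float)
temps = (0.005 * (years - 1870)
         + 0.15 * np.sin(2 * np.pi * (years - 1877) / 64)
         + 0.08 * rng.standard_normal(years.size))

disparity, span, degree = joint_estimate(years, temps,
                                         spans=range(16, 41, 2),
                                         degrees=range(4, 13))
print(span, degree)   # whichever pair agrees best on this synthetic series
```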
In the author’s view this joint estimate is really the best that can be done in finding the trend of global surface temperature. For various reasons the loess estimate should be prioritised.
The optimal estimate identifies alternating cooling and warming intervals from 1877 to 2005. Two cooling intervals alternated with two warming intervals. These two cycles of alternating cooling and warming were barely conceded, and certainly not discussed let alone explained, in the influential IPCC 4th report (AR4) published in 2007 and based on data available to 2005.
But this property conflicts with a different requirement: that a trend should be a “smooth broad movement non-oscillatory in nature” (see 1.22 in Kendall and Ord’s classic text [3]). To reconcile these different requirements the estimated trend must be further decomposed into a non-oscillatory mean trend (red curve) and a quasi-periodic oscillation (blue curve).
trend in the data = mean trend in the data + quasi-periodic oscillation
A unique decomposition is achieved by computer-assisted iterative adjustment of four intersecting common years (red circled points). The mean trend is a cubic spline interpolation which deviates least from a straight line while the oscillatory component has a zero average over the record.
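A sketch of this second decomposition, assuming SciPy's `CubicSpline` as the interpolator. The iterative adjustment of the common years is omitted, and the trend curve is a synthetic stand-in for the loess estimate:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Stand-in trend curve (in the post this is the loess/polynomial estimate)
years = np.arange(1870, 2013)
trend = 0.005 * (years - 1870) + 0.15 * np.sin(2 * np.pi * (years - 1877) / 64)

# Common years where trend and mean trend coincide (from the figure caption)
common_years = np.array([1870, 1891, 1927, 1959, 1992, 2012])
at_common = trend[np.searchsorted(years, common_years)]

# Natural cubic spline through the common points = mean trend;
# the residual is the quasi-periodic oscillation
mean_trend = CubicSpline(common_years, at_common, bc_type="natural")(years)
oscillation = trend - mean_trend

# In the post the common years are adjusted iteratively until the
# oscillation averages to zero over the record; no adjustment is done here
print(round(float(oscillation.mean()), 3))
```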
The strong oscillating component – the blue curve – is seen to be contributing more than half of the rate of increase when global warming was at a peak in the early 1990s.
What goes up may come down. This oscillating component looks to be continuing. Assessment is increasingly uncertain the closer one gets to the last data year of 2012. But despite this difficulty the probability that there is again global cooling in recent years can be stated with high confidence (IPCC terminology – better than 80%).
In the author’s view the whole climate debate has been muddled – and continues to be muddled – by not differentiating between this trend in the data (which oscillates) and the mean trend (which does not).
So yes – global warming looks to have stopped (if you believe in HadCRUT4) when one defines global surface temperature in terms of that trend – the brown curve. In fact it has more than stopped – it looks very much to have gone into reverse.
But no – average global warming continues ever upwards (still believing in HadCRUT4) when one defines an average global surface temperature in terms of that mean trend – the red curve.
A non-ambiguous computation of the rate of temperature increase is achieved by working from those common years (red circled points) when the two estimates coincide. The increase for HadCRUT4 from 1870 to 2012 of 0.75 ± 0.24 deg is equivalent to an average rate of 0.053 ± 0.017 deg/decade. From 1959 to 2012 this average rate looks to have increased to 0.090 ± 0.034 deg/decade. (The error limits here are the usual ±2 standard deviations, or 95% confidence limits.)
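The arithmetic behind these rates, and behind the 0.8 deg figure in the next paragraph, is simple unit conversion on the figures quoted above:

```python
# Figures taken from the text above; simple unit conversion only
rise_1870_2012 = 0.75                       # deg over the 142-year record
rate_total = rise_1870_2012 / ((2012 - 1870) / 10)
print(round(rate_total, 3))                 # 0.053 deg/decade

rate_recent = 0.090                         # deg/decade, 1959-2012
# Naive extrapolation to 2100 (the post itself warns against prediction):
projected_rise = rate_recent * (2100 - 2013) / 10
print(round(projected_rise, 2))             # 0.78 deg, i.e. roughly 0.8
```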
If this faster trend were to continue then we would be looking at an average rise from now of 0.8 deg by the end of this century (not choosing to set controversial error limits into the future). It is not however safe to make any predictions on the basis of the plot and the methodology adopted here.
It should not need to be stressed that there is no contradiction between these results and finding that regional warming may be continuing – particularly in high Northern latitudes and the Arctic.
There is a great deal more that can and needs to be said to justify these results. Interested readers can apply to the author for longer treatments, and in particular a full and detailed mathematical justification.
[1] W. S. Cleveland and S. J. Devlin, “Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting”, Journal of the American Statistical Association, Vol. 83, No. 403 (Sep. 1988), pp. 596–610.
[2] T. C. Mills, “Modelling Current Trends in Northern Hemisphere Temperatures”, International Journal of Climatology, Vol. 26 (2006), pp. 867–884.
[3] M. Kendall and J. K. Ord, “Time Series”, 3rd edition, Edward Arnold (1990).
Mathematically that analysis is correct.
It is too early yet to say that the long-term trend has changed.
However, one should also look at the real world. If the sun has as much of an effect as history suggests, then the recent change in solar behaviour, if maintained, should result in a change in the long-term trend in due course – simply because the global air circulation has also changed, and that appears to affect the proportion of solar energy able to enter the oceans via global cloudiness and albedo changes.
It is still earlier than history suggests for the millennial solar cycle to be going into reverse, so the current period of inactive sun may not be maintained for long; but we know so little about the reasons for solar variability that predictive ability is low.
Well, not to be a spoil sport, but:
I think there are a number of “infinities.” The number of integers is less than the number of real numbers, which is less than the number of functions, and there are other infinities bigger still.
The comparison of the 1870 to 2012 period, with an average rate of 0.053 ± 0.017 deg/decade, to the 1959 to 2012 period, with an average rate of 0.090 ± 0.034 deg/decade, suffers from selection bias. The first includes two complete cooling periods, whereas the second includes only half a cooling period and a full warming period. The rate comparisons are not equivalent.
Very nice. The numbers also tie up nicely with those Judith Curry has recently been talking about, where the recent warming spurt (1980 – 2005) was likely more than 50% ‘natural’.
“impartial” – what’s that?
23 Sept: Live Science: Becky Oskin: Climate Scientists: IPCC Report Must Communicate Consensus
Climate experts also told LiveScience they would like to see the new report stress the scientific consensus on climate change, and emphasize the link between human activities and global warming.
“I hope this report will stress the virtual certainty among the scientific community that humans are affecting the climate system in profound ways, mainly through burning ever-increasing amounts of fossil fuels,” said Jennifer Francis, an atmospheric scientist at Rutgers University in New Jersey. “I hope it will emphasize the high confidence in attribution of many aspects of climate change to increasing greenhouse gases, and de-emphasize the discussion of uncertainty. The public hears “uncertainty” and thinks there is no consensus.”…
Critics of the leaked drafts have focused on what climate scientist Kevin Trenberth said is the “mistaken idea that warming has slowed”…
“A key will be whether there is a major succinct message out of this report,” said Trenberth, a climate scientist at the National Center for Atmospheric Research, also in Boulder, Colo.
“The previous three have had signature messages,” Trenberth said. “Maybe this one is that warming signs are everywhere in melting Arctic sea ice, melting Greenland, warming oceans, rising sea levels, and more intense storms as well as higher surface temperatures. This would also go some way toward addressing [this] mistaken idea.” [6 Unexpected Effects of Climate Change]…
“It is not just a scientific document — it should have policy implications,” Trenberth said. “And, of course, this is why there are well-financed and organized denier campaigns out in force.”…
http://www.livescience.com/39869-rethinking-ipcc-climate-change-report.html
Wow . . . Paging RGB!
Irrelevant. We were told human CO2 would overpower all natural variability. The science was settled. That obviously has not been the case. Until the consensus scientists admit they were wrong on both points, I won’t be putting any value on anything further they have to say.
This is an excellent post. A genuine attempt to make sense of the data, rather than to promote an agenda – if only the IPCC worked like this.
On this basis, it does indeed look as though the underlying trend is still upwards, so far. In addition, there is a clear indication of the warming trend increasing after 1960, so I would be inclined – on these figures – to accept the likelihood of a man-made element.
However, check the numbers. The post-1960 trend is around 1° per century, and shows no sign of increasing. The trend in the early 20th century is at least a third of that, so the man-made element would appear to be only about 0.6° to 0.7° per century. Hardly a cause for panic.
History may record that the IPCC was not 100% wrong, but massively overplayed its hand, and consequently exaggerated the threat, by refusing to accept that the rapid warming seen in the late 20th century was principally due to natural oscillation.
A very interesting and nicely balanced discussion
I know you have stated “(if you believe in HadCRUT4)” in your analysis; my problem is not with your approach but with the data you have used. Using HadCRUT4 adds a degree of legitimacy to that temperature approximation that is not deserved.
I have yet to see a raw temperature record that highlights the discrepancy we see in HadCRUT4 from around the 1940s to the recent warming episode at the end of the century. I believe Willis showed this in previous posts using the CET temperature record, and Chris Monkton’s recent post showed the temperature adjustments made in Darwin and other locations.
The problem, as we all know, is that UHI effects and temperature adjustments making the 1940s cooler mean your analysis may well be measuring only these two effects and not warming at all.
I would suggest using your analysis on some raw rural temperature sets and then seeing if you can detect warming in a temperature signal.
Lots of people have pointed out that the climate variation can be modeled assuming an (approximately) 60-year cycle and a gradual rise. They have been told that this is not politically acceptable.
This analysis looks to me like showing TWO curves – a 60-year one and one at about 160. This suggest that we should look for a hot peak at about 2040, followed by another ‘little ice age’ bottoming out in 2200.
But good luck getting anyone in charge to listen to this….. 🙁
Great analysis on trend maths for simulated thermometers. I can’t wait to see how it looks when you test it against actual data.
Certainly looks like there is an upward trend, but not enough to worry about, let alone spend a trillion dollars on ….
Whoops- sorry. Should have said “two SINE curves” above…
I once did something very similar to this analysis. I made a fit to HadCRUT3 data including identified harmonics. Once you include the 60-year oscillation (AMO?), the climate sensitivity (TCR) for CO2 works out to be 1.4 C. The fit shows that the current pause in warming will continue until at least 2025.
see: http://clivebest.com/blog/?p=2353
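A fit of that general form – a log-CO2 term plus a 60-year harmonic – can be sketched as follows. Synthetic data only, with an assumed sensitivity of 1.4 C built in to show that the regression recovers it; none of these numbers come from the linked post:

```python
import numpy as np

# Synthetic stand-ins for the temperature and CO2 series (illustrative only)
years = np.arange(1870, 2013).astype(float)
co2 = 285.0 + 110.0 * ((years - 1870) / 142.0) ** 2      # ppm, rough shape
rng = np.random.default_rng(3)
temps = (1.4 * np.log2(co2 / co2[0])                      # built-in "TCR" 1.4 C
         + 0.12 * np.sin(2 * np.pi * (years - 1877) / 60) # ~60-year cycle
         + 0.08 * rng.standard_normal(years.size))

# Linear model: offset + TCR * log2(CO2) + 60-year harmonic (sin & cos pair,
# which absorbs any phase of the oscillation)
w = 2 * np.pi / 60.0
A = np.column_stack([np.ones_like(years),
                     np.log2(co2 / co2[0]),
                     np.sin(w * years),
                     np.cos(w * years)])
coef, *_ = np.linalg.lstsq(A, temps, rcond=None)
print(round(coef[1], 2))   # recovered sensitivity, close to the 1.4 built in
```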
For any student of cycles, it is clear that there is a cycle of around 60 years in temperatures. This cycle is found in many other climate-related variables. There is also a cycle of around 208 years in solar output and temperature, called either the Suess or de Vries cycle. This cycle was rising for the entire twentieth century but is now in its down phase. Another cycle of 2,300 years, called the Hallstatt cycle, is now rising, and will be for hundreds of years yet. These cycles can be clearly seen in temperature and solar proxies over thousands of years.
As far as any human influence is concerned the temperature trend prior to ~1945 is irrelevant.
This is consistent with Akasofu’s interpretation.
The non-oscillatory mean trend (red curve) is a quasi-periodic variation too – just a longer cycle. The ~60 year cycle is not the only climatic cycle – there are many longer cycles (~200 years and longer). The cycles are of course quasi-periodic (variable cycle length, just like the solar ~11-year cycle). Both the ~60 year and the longer ~200 year cycle seem to be plateauing/shifting at this point. That means the cooling will be much more dramatic than in the 50s/60s. Man (and CO2) is irrelevant.
Nice treatment of the data which should, but won’t, mitigate some of the quibbling.
It does, however, need to be kept in perspective.
Data over 140 years is insufficient to make overly broad claims about natural variability, and it would require a leap of imagination to use this data in and of itself to draw conclusions about cause and effect.
Pat reports on what Kevin “jai mitchell” Trenberth says:
Trenberth is clearly nuts, like Michael Mann, assigning conspiratorial motives to un-named, shadowy “denier campaigners” who he believes are out to get him.
But why won’t Trenberth or Mann name those “deniers” or their organizations? Sunlight is a disinfectant. If there were actually any such “well-financed” organizations “out in force”, then let’s compare their finances with what Trenberth and Mann rake in to spread their climate pseudo-science.
If it were not for psychological projection, the alarmist crowd would lose one of its biggest arguments.
Nice analysis with clearly stated data source and reasoning. It appears to me that the overall global warming conversation has shifted from “all warming is anthropogenic” and “there is no warming” to an acknowledgement that natural warming is also in play. Thanks.
Oops. I meant to say “natural variation” rather than “natural warming” is also in play.
If you look carefully at the data, you can see that only the middle part shows a nice sine, and the outer edges are cramped together due to the obvious lack of temperature data and correct trending at the outer sides. If you take only the middle part, the cycle is 64 years, aligning perfectly with the solar cycle: solar and sine minimum at 1912, then two solar minima in between (1923 and 1934), then a solar minimum and sine maximum at 1944, then again two solar minima in between (1954 and 1965), then a solar and sine minimum at 1976, etc. I don’t know what this exactly means – I just look at the data. I know, however, that there are also longer solar cycles, like the “De Vries” cycle, which is just getting into another phase according to, for instance, Prof. De Jager in my country.
As the spread between HadCRUT4 and RSS/UAH satellite data increases, it would be interesting to see the same statistical analysis done on RSS and UAH, to see how well they compare to the HadCRUT4 analysis.
I realize that 34 years of satellite data is a little short, but certainly long enough for statistical significance.
It’s amazing to see how closely the blue oscillating curve fits the PDO warming/cooling cycles. Since the PDO entered its 30-yr cool phase in 2008, it would tend to support future falling temps as indicated on the graph, especially in light of weakening solar cycles which also started from 2008, leading to, in scientific parlance, a super-duper double whammy….