Guest essay by M.S.Hodgart
(Visiting Reader Surrey Space Centre University of Surrey)
The figure presented here is a new graph of the story of global warming – and cooling. The graph makes no predictions and should be used only to see what has been happening historically.
The boxed points in the figure are the ‘raw data’ – the annualised global average surface temperature known as HadCRUT4 as released by the UK Meteorological Office. Strictly these are ‘temperature anomalies’. The plot runs from 1870 up to the last complete calendar year 2012. The raw data cannot of course be treated as absolutely true – but let us give the Met Office the benefit of the doubt: this is hopefully their best effort so far.
It is a difficult statistical problem to estimate the historical trend in this kind of time series. The solution requires some kind of smoothing of the data – but how, exactly? There is an unlimited number of ways of drawing a curve through the data.
A popular method – much used in the climate science literature – is the moving average. One trouble with it is that quite different-looking curves result depending on the width of the smoothing window used in that average – and also on the choice of window shape. Another difficulty is its poor dynamic tracking capability.
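The window-width sensitivity is easy to demonstrate. Here is a minimal sketch in Python, with a synthetic series standing in for the real data: the same series smoothed with two window widths gives two noticeably different curves, and each loses years at both ends.

```python
import numpy as np

def moving_average(y, window):
    """Centred moving average. The result is shorter than the input because
    the window runs out of data at both ends (the end-point problem)."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="valid")

# Synthetic stand-in for an annual temperature series: slow trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1870, 2013)                 # 143 years
y = 0.005 * (years - 1870) + 0.2 * rng.standard_normal(years.size)

smooth_5 = moving_average(y, 5)
smooth_21 = moving_average(y, 21)
# The wider the window, the smoother the curve - and the more end years lost.
print(smooth_5.size, smooth_21.size)          # 139 123
```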
The other popular method is to fit a selection of straight lines (least-squares estimates) to selected spans of years. The notorious difficulty here is the quite different impression one gets depending on the choice of start and stop years.
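The start/stop sensitivity can be sketched the same way. The data here are synthetic and the three spans are arbitrary illustrations, not the spans anyone has used in the literature:

```python
import numpy as np

def ols_slope(years, y, start, stop):
    """Least-squares trend (deg/decade) over the inclusive span [start, stop]."""
    m = (years >= start) & (years <= stop)
    slope, _ = np.polyfit(years[m], y[m], 1)
    return 10.0 * slope

# Synthetic stand-in: a steady 0.05 deg/decade underlying trend plus noise.
rng = np.random.default_rng(1)
years = np.arange(1870, 2013)
y = 0.005 * (years - 1870) + 0.15 * rng.standard_normal(years.size)

# Same data, three choices of start and stop years, three impressions:
for start, stop in [(1910, 1945), (1945, 1975), (1975, 2012)]:
    print(start, stop, round(ols_slope(years, y, start, stop), 3))
```

Over the full record the noise averages out and the fitted slope lands near the true 0.05 deg/decade; over short spans it can come out far higher or lower.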
The difficulty is finding a best estimate – some curve which is most likely to be closest to the truth. This is an outstanding problem in what the statistical literature identifies as model selection.
The source of the problem is what the telecommunication and control engineers call noise in the data – a random-looking variation from one year to the next.
As a conspicuous example of this random variation: in recent years, according to the record, the global temperature (anomaly) was 0.18 deg in 1996; it had jumped to 0.52 deg by 1998 but had fallen again to 0.29 deg by 2000.
Respecting normal linguistic usage and common sense, we would not want to describe a jump of 0.34 deg in only two years as a phenomenon of ‘global warming’; nor a drop of 0.23 deg over the next two years as ‘global cooling’. Ordinary language, when expressed in mathematics, envisages some smooth slow-varying curve which passes on a middle course through the scattered data, ignoring these rapid changes but responsive over the longer term to general movement. There needs to be an explicit decomposition:
HadCRUT4 annual data = trend in the data + temperature noise
The problem is to estimate that trend in the data when it is corrupted by the presence of this significant noise.
HadCRUT4 global annual averaged temperature anomaly 1870–2012 (connected brown box points). Brown curve: 26-year span cubic loess estimate. Dashed brown curve: 10th-degree polynomial regression (PR) estimate. Red curve: mean trend. Blue curve: the offset cyclic component of the loess. The red circled points identify coincident years of trend and mean trend: 1870, 1891, 1927, 1959, 1992 and 2012. Blue circled points delineate alternating cooling and warming in the cyclic variation: 1877, 1911, 1943, 1976 and 2005.
A novel principle of joint estimation is proposed here – using two relatively simple methods of smoothing.
In the figure the continuous brown curve is an estimate by locally weighted regression (loess) – using a locally fitted cubic polynomial and the standard ‘tri-cube’ weighting. Loess is a greatly superior generalisation of the moving average [1]. Professor Mills deserves credit for first pointing out the superiority of a cubic over the usual linear or quadratic local polynomial [2]. Unfortunately the standard statistical tools seem not to have caught up with him here – nor with his ‘natural’ solution to the end-point problem (where data runs out after 2012 and before 1870 on this graph).
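A minimal sketch of such a smoother follows, assuming the standard Cleveland-style construction with a local cubic and tri-cube weights. This is an illustration run on synthetic data, not the author's exact code, and it makes no special provision for the end-point problem beyond letting the window truncate.

```python
import numpy as np

def loess_cubic(x, y, span_years=26):
    """Locally weighted regression with a cubic local polynomial and the
    standard tri-cube weights, evaluated at every data point."""
    yhat = np.empty_like(y, dtype=float)
    half = span_years / 2.0
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        m = d <= half                        # points inside the local window
        w = (1.0 - (d[m] / half) ** 3) ** 3  # tri-cube weights
        # np.polyfit weights multiply the residuals, hence the square root.
        coeffs = np.polyfit(x[m] - x0, y[m], 3, w=np.sqrt(w))
        yhat[i] = coeffs[-1]                 # value of the local fit at x0
    return yhat

# Synthetic stand-in: a ~65-year oscillation on a slow rise, plus noise.
rng = np.random.default_rng(2)
years = np.arange(1870, 2013).astype(float)
truth = 0.004 * (years - 1870) + 0.3 * np.sin(2 * np.pi * (years - 1870) / 65.0)
y = truth + 0.12 * rng.standard_normal(years.size)

smooth = loess_cubic(years, y, span_years=26)
# RMS distance of the smoothed curve from the underlying truth:
print(round(float(np.sqrt(np.mean((smooth - truth) ** 2))), 3))
```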
The dashed brown curve is a standard (unweighted) polynomial regression. The principle of joint estimation is to look for a span of years in the loess and a degree in the polynomial regression where the two curves most closely resemble each other – the condition of least disparity.
Empirical search finds a span of 26 years for the loess and a 10th degree for the polynomial. No other combination of loess span and polynomial degree gives such close agreement. The condition is unique and therefore automatically solves the problem of model selection.
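The search itself can be sketched as a brute-force scan over candidate spans and degrees, keeping the pair with least RMS disparity. This is a hypothetical reconstruction of the principle, not the author's implementation; the candidate lists and the synthetic data are arbitrary.

```python
import numpy as np
from numpy.polynomial import Polynomial

def loess_cubic(x, y, span):
    """Local-cubic loess with tri-cube weights, evaluated at every point."""
    yhat = np.empty_like(y, dtype=float)
    half = span / 2.0
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        m = d <= half
        w = (1.0 - (d[m] / half) ** 3) ** 3
        yhat[i] = np.polyfit(x[m] - x0, y[m], 3, w=np.sqrt(w))[-1]
    return yhat

def joint_estimate(x, y, spans, degrees):
    """Return (disparity, span, degree) for the loess/polynomial pair whose
    two smoothed curves agree most closely in the RMS sense."""
    best = None
    for span in spans:
        ls = loess_cubic(x, y, span)
        for deg in degrees:
            # Polynomial.fit rescales x internally, which keeps even a
            # 10th-degree fit numerically well conditioned.
            pr = Polynomial.fit(x, y, deg)(x)
            disparity = float(np.sqrt(np.mean((ls - pr) ** 2)))
            if best is None or disparity < best[0]:
                best = (disparity, span, deg)
    return best

rng = np.random.default_rng(3)
x = np.arange(1870, 2013).astype(float)
y = (0.004 * (x - 1870) + 0.3 * np.sin(2 * np.pi * (x - 1870) / 65.0)
     + 0.12 * rng.standard_normal(x.size))

best = joint_estimate(x, y, spans=[16, 26, 40], degrees=[4, 7, 10])
print(best)
```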
In the author’s view this joint estimate is really the best that can be done in finding the trend of global surface temperature. For various reasons the loess estimate should be prioritised.
The optimal estimate identifies alternating cooling and warming intervals from 1877 to 2005. Two cooling intervals alternated with two warming intervals. These two cycles of alternating cooling and warming were barely conceded, and certainly not discussed let alone explained, in the influential IPCC 4th report (AR4) published in 2007 and based on data available to 2005.
But this property conflicts with a different requirement: that a trend should be a “smooth broad movement non-oscillatory in nature” (see 1.22 in Kendall and Ord’s classic text [3]). To reconcile these different requirements the estimated trend must be further decomposed into a non-oscillatory mean trend (red curve) and a quasi-periodic oscillation (blue curve).
trend in the data = mean trend in the data + quasi-periodic oscillation
A unique decomposition is achieved by computer-assisted iterative adjustment of four intersecting common years (red circled points). The mean trend is a cubic spline interpolation which deviates least from a straight line while the oscillatory component has a zero average over the record.
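The decomposition can be sketched with a natural cubic spline through the coincident years quoted in the figure caption. The trend series below is an illustrative stand-in for the loess estimate, and the sketch does one pass of what the author achieves by iterative adjustment of the knots.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Knot years taken from the figure caption; the trend values are synthetic
# stand-ins, not the actual HadCRUT4 estimates.
years = np.arange(1870, 2013).astype(float)
trend = 0.005 * (years - 1870) + 0.15 * np.sin(2 * np.pi * (years - 1877) / 65.0)

knot_years = np.array([1870.0, 1891.0, 1927.0, 1959.0, 1992.0, 2012.0])
knot_vals = np.interp(knot_years, years, trend)

# 'Natural' end conditions give the interpolating spline that deviates
# least from a straight line at the ends.
mean_trend = CubicSpline(knot_years, knot_vals, bc_type="natural")(years)

oscillation = trend - mean_trend
oscillation -= oscillation.mean()        # enforce the zero-average condition
mean_trend = trend - oscillation         # keep trend = mean trend + oscillation
print(round(float(np.abs(oscillation).max()), 3))
```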
The strong oscillating component – the blue curve – is seen to be contributing more than half of the rate of increase when global warming was at a peak in the early 1990s.
What goes up may come down. This oscillating component looks to be continuing. Assessment is increasingly uncertain the closer one gets to the last data year of 2012. But despite this difficulty the probability that there is again global cooling in recent years can be stated with high confidence (IPCC terminology – better than 80%).
In the author’s view the whole climate debate has been muddled – and continues to be muddled – by not differentiating between this trend in the data (which oscillates) and the mean trend (which does not).
So yes – global warming looks to have stopped (if you believe in HadCRUT4) when one defines global surface temperature in terms of that trend – the brown curve. In fact it has more than stopped – it looks very much to have gone into reverse.
But no – average global warming continues ever upwards (still believing in HadCRUT4) when one defines an average global surface temperature in terms of that mean trend – the red curve.
A non-ambiguous computation of the rate of temperature increase is achieved by working from those common years (red circled points) when the two estimates coincide. The increase for HadCRUT4 from 1870 to 2012 of 0.75 ± 0.24 deg is equivalent to an average rate of 0.053 ± 0.017 deg/decade. From 1959 to 2012 this average rate looks to have increased to 0.090 ± 0.034 deg/decade. The error limits here are the usual ±2 standard deviations (95% confidence limits).
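As a quick arithmetic check, the first quoted rate follows directly from the coincident endpoint years:

```python
# 0.75 deg of increase spread over the 1870 -> 2012 record:
rise_1870_2012 = 0.75                     # deg
decades = (2012 - 1870) / 10.0            # 14.2 decades
rate = rise_1870_2012 / decades
print(round(rate, 3))                     # 0.053 deg/decade
```

The 1959–2012 rate follows the same way from its own span and increase.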
If this faster trend were to continue then we would be looking at an average rise from now of 0.8 deg by the end of this century (not choosing to set controversial error limits into the future). It is not however safe to make any predictions on the basis of the plot and the methodology adopted here.
It should not need to be stressed that there is no contradiction between these results and finding that regional warming may be continuing – particularly in high Northern latitudes and the Arctic.
There is a great deal more that can and needs to be said to justify these results. Interested readers can apply to the author for longer treatments and in particular a full and detailed mathematical justification.
[1] W. S. Cleveland and S. J. Devlin, “Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting”, Journal of the American Statistical Association, Vol. 83, No. 403 (Sep. 1988), pp. 596–610.
[2] T. C. Mills, “Modelling Current Trends in Northern Hemisphere Temperatures”, International Journal of Climatology, Vol. 26 (2006), pp. 867–884.
[3] Kendall and Ord, “Time Series”, 3rd edition, Edward Arnold (1990).
Dr Norman Page says:
September 24, 2013 at 7:26 am
Wayne @3.18 My cooling forecast at 6.33 essentially does what you suggest, using the mirror SSTs as a guide – I’m a great believer in Ockham’s razor.
—
Dr. Page, I was out of place all day but I’ll read your comment above and your page at climatesense this evening. Sounds like you put some weight on what the sun is up to, and I agree – I have always felt climatologists in general are stuck in a self-important mindset, thinking the sun can be ignored and that the Earth controls its own climate; I also think not. Will get back to you in a few hours after I get a better handle on your views.
fhhaynie For the empirical ice core and proxy temperature data on which the 1000 year cycle is based see Figs 6 and 7 in the latest post at
http://climatesense-norpag.blogspot.com
Interestingly, Fig 6 also suggests that if you believe (which I obviously don’t) that CO2 is the main climate driver, its long-term effect is to cool the earth.
M.S.Hodgart
Good work.
What is your correlation coefficient between your Smoothed GMST model and the Annual GMST?
In my view, Matthew R. Marler errs in loving curve-fitting. While Marler loves a curve, science provides us with no reason to believe that nature loves it.
Terry Oldburg: In my view, Matthew R. Marler errs in loving curve-fitting. While Marler loves a curve, science provides us with no reason to believe that nature loves it.
Nature’s love is irrelevant. There is no reason to believe that nature loves Newton’s laws, but they are accurate descriptions over a wide range of cases. There’s no reason to believe that nature loves Kepler’s laws either, but Newton found them informative. Curve-fitting is a species of honest labor, like gold prospecting and inventing, that sometimes produces good results. But each fitted curve has to be stringently tested, and I don’t love any curve that hasn’t been tested.
Apologies to the author for the tenor of my last comment (“…this analysis is useless. Curve-fitting is meaningless without any real-world mechanisms.”). In writing it I had missed the statement “The graph makes no predictions and should be used only to see what has been happening historically.“. So I should have phrased it as agreement with the author, not criticism.
Dr Norman Page said: “It will be seriously in question if there is not about 0.15 – 0.2 degrees of cooling by 2018-20”- but said in opening “the graph makes no predictions and should be used only to see what has been happening historically”. Also, this 2018-20 short-term projection really deals only with the blue “oscillating component” element, not the continuing red trend in surface temperatures running at ~0.6C per century from 1960 to 2010. But does the 3rd order term in the cubic fit cause the surface temperature also to trend downwards – ie. predict the start of the next glaciation – or does it now predict ever-increasing rises in surface temperatures, so we had all better redirect our efforts to cost-effective amelioration?
Wouldn’t it be fair (and objective) to say that this is really just curve fitting on historical events – perhaps of interest in itself, though one can always look up the actual data? However, it does not truly involve application of the scientific method…?
Dear M.S. Hodgart,
Thank you for your work.
As you can see, this is a forum where critical opinion, sometime too critical, can exist alongside more civilized commentary.
A suggestion was made to re-run your analysis using Central England Temperatures (CET). I hope you will do so.
Here is one data source – I am uncertain if it is the best one, and I cannot comment on the existence or absence of a warming bias in CET’s.
The CET website says “The mean daily data series begins in 1772 and the mean monthly data in 1659. … Since 1974 the data have been adjusted to allow for urban warming.”
http://www.metoffice.gov.uk/hadobs/hadcet/
Some concern has been expressed about the increased curvature of the end-points of your “oscillating component”. This could be a fair comment – “end point effects” are not uncommon in this sort of analysis. However, I fail to see this as a serious flaw – it is only necessary to note it.
This page of “record breakers” is interesting:
http://www.metoffice.gov.uk/hadobs/hadcet/cet_record_breakers.html
Regards, Allan
Something seemed odd to me yesterday when I looked at the graph above. I figured it out. The HadCRUT4 anomaly for 1998 is near 0.7, yet the axis of the graph above only goes to 0.5 or so. Where is the HadCRUT4 data set that created the graph in this post? You can see the data here: http://woodfortrees.org/plot/hadcrut4gl but it is different and I don’t know why.
John M Reynolds
Sorry. My bad. I am used to seeing monthly graphs. The average for all 1998 months is .53. — John M Reynolds
TimC – I’m sorry, you have misunderstood my earlier comment. I said:
“TimC I’m not sure what the hypothesis is here .The hypothesis in my cooling forecast linked above is simple and clear. i.e. that the current cooling peak at about 2003 is a peak in both the 60 year and 1000 year quasi cycles,
It will be seriously in question if there is not about 0.15 – 0.2 degrees of cooling by 2018–20”
In the first sentence “here” refers to this Hodgart post . The rest refers to my own cooling forecast at
http://climatesense-norpag.blogspot.com
I’m saying that the idea that the recent peak includes both the 1000 year and 60 year peaks would be in question if there is not the quoted amount of cooling by 2018-20.
I hope that clarifies things.
Dr Page: thank you, all is now clear! And thanks for the link to your own blog – but (to get this absolutely correct) am I then right to infer it is common ground that there is actually no hypothesis, falsifiable or otherwise, in this article – which I think would have to take the form of some future forecast such as your own?
Absent that, this would seem to be just a “wiggles and loops” analysis exercise: interesting but valid only as from 1870 to 2012 – the blink of an eye in astronomical terms of course.
Theo Goodwin says:
September 24, 2013 at 11:45 am
There are no laws for conservation of temperature in any branch of climate science.
==========
Indeed. Temperature must be combined with humidity to determine the energy content of air. As you reduce the humidity you must increase the temperature for the energy to remain constant.
Perhaps the “unexplained” increase in late 20th century temperature has simply been a response to decreasing humidity over the oceans?
During 1976–2004, global changes in surface RH are small (within 0.6% in absolute value), although decreasing trends of −0.11% to −0.22% per decade for global oceans are statistically significant.
http://journals.ametsoc.org/doi/abs/10.1175/JCLI3816.1
TimC Hodgart says ” It is not however safe to make any predictions on the basis of the plot and the methodology adopted here.”
I think that’s clear enough.
In reply to:
PMHinSC says:
September 24, 2013 at 1:17 am
…Data over 140 years is insufficient to make overly broad claims about natural variability, and it would require a leap of imagination to use this data in and of itself to draw conclusions about cause and effect.
William:
Yes, however, there are sets of other observations that logically supports the assertion that the majority of the warming in the last 150 years was due to solar magnetic cycle changes rather than the increase in atmospheric CO2 and that the planet is about to significantly cool due to the current solar magnetic cycle change.
The process to solve physical problems is analogous to fitting together the pieces of a model puzzle or solving a crime investigation. The correct solution explains all observations. There is a physical explanation for what has happened in the past and what will happen in the future. We all understand and agree that it would be ineffective and immoral for a criminal investigator to start an investigation by picking one suspect and then hiding or ignoring evidence that would exonerate the suspect such as an alibi.
The warmists have thrown away or ignored the observations and analysis (evidence) that do not support their assertion that 100% of the warming in the last 50 years was due to the increase in atmospheric CO2. For example (an excerpt of the observations/analysis and the reasoning used to solve the problem):
1. There is observational evidence of 23 cycles of warming and cooling (nine of the cycles occurred in the current interglacial period, the Holocene). The 23 cycles of warming and cooling correlate with solar magnetic cycle changes and have a period of roughly 1500 years. These cyclic warming and cooling periods are called Dansgaard-Oeschger (D-O) cycles named after the two researchers that discovered the cycle in the paleo data.
Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper.
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
2. The latitudinal regions of the planet that warmed in the last 150 years are the same regions of the planet that warmed in the past during a D-O cycle.
3. Detailed analysis of the paleo record shows atmospheric CO2 levels have increased and decreased with no change in planetary temperature. Planetary temperature does not correlate with atmospheric CO2 changes. CO2 has an alibi.
4. The pattern of warming in the last 150 years cannot be explained by increases in atmospheric CO2. As CO2 is more or less evenly distributed in the atmosphere, the potential for warming due to the increase in atmospheric CO2 is more or less the same for all latitudes on the planet. As the magnitude of the CO2 forcing is proportional to both the level of CO2 in the atmosphere and the amount of long-wave radiation emitted at the latitude in question, the greatest warming due to the increase in atmospheric CO2 should occur in the tropics, as that is the region of the planet that emitted the most long-wave radiation to space prior to the increase in atmospheric CO2. That is not what is observed. The majority of the warming in the last 150 years has been at high latitudes of the Northern Hemisphere, with the greatest warming occurring on the Greenland ice sheet – the same pattern of warming that occurred in past D-O cycles.
http://arxiv.org/ftp/arxiv/papers/0809/0809.0581.pdf
Limits on CO2 Climate Forcing from Recent Temperature Data of Earth
5. And so on.
William Astley says:
September 25, 2013 at 8:51 am
‘The warmists have thrown away or ignored the observations and analysis (evidence) that does not support their assertion that 100% of the warming in the last 50 years was due to the increase in atmospheric CO2.’
This is quite true in my experience. Another item of contra-evidence to their assertion, which they have chosen to overlook or ignore, is provided by HadCRUT4 itself. It can be seen from basic greenhouse theory that greenhouse warming should amplify not only the global mean surface temperature but also, at the same rate, any variations in that temperature arising from non-greenhouse sources. Therefore if the surface warming indicated by the trend in HadCRUT4 were the result of an increased net greenhouse effect from the rise of atmospheric CO2 since 1850, we should expect the magnitudes of surface temperature variations to have increased by the same proportion over that period too.
In fact the opposite turns out to be the case. The linear trend in HadCRUT4 temperature variations, as measured by standard deviations over relatively short time periods (e.g. 1, 5 or 10 years), is actually slightly negative instead of slightly positive as expected!
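That diagnostic can be sketched as follows, on synthetic data with a constant noise amplitude (on such data the fitted trend in the windowed standard deviations should hover near zero; the claim above is that HadCRUT4 gives a slightly negative value):

```python
import numpy as np

def std_trend(y, window):
    """Least-squares linear trend of the standard deviations taken over
    consecutive non-overlapping windows of the series."""
    n = y.size // window
    stds = y[: n * window].reshape(n, window).std(axis=1)
    slope, _ = np.polyfit(np.arange(n), stds, 1)
    return float(slope)

# Synthetic check: slow warming trend with constant year-to-year noise.
rng = np.random.default_rng(4)
y = 0.005 * np.arange(160) + 0.1 * rng.standard_normal(160)
print(std_trend(y, 10))
```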
These results suggest to me that either HadCRUT4 is seriously inaccurate, or else the surface warming that has occurred since 1850 was not due to an enhanced greenhouse effect after all. But since we have no means of checking the veracity of HadCRUT4 or the magnitude of the net greenhouse effect independently of the warmists who are in control of the science, we seem to be in a position of irreducible uncertainty over the matter and the CAGW-advocates remain in a position of being able to dismiss all such inconvenient counter-evidence with a wave of the hand.
MONTHLY MEAN CENTRAL ENGLAND TEMPERATURE (Degrees C)
http://www.metoffice.gov.uk/hadobs/hadcet/data/download.html
1659-1973 MANLEY (Q.J.R.METEOROL.SOC., 1974)
1974ON PARKER ET AL. (INT.J.CLIM., 1992)
PARKER AND HORTON (INT.J.CLIM., 2005)
Brief description of the data
These daily and monthly temperatures are representative of a roughly triangular area of the United Kingdom enclosed by Lancashire, London and Bristol. The monthly series, which begins in 1659, is the longest available instrumental record of temperature in the world. The daily series begins in 1772. Manley (1953,1974) compiled most of the monthly series, covering 1659 to 1973. These data were updated to 1991 by Parker et al (1992), when they calculated the daily series. Both series are now kept up to date by the Climate Data Monitoring section of the Hadley Centre, Met Office. Since 1974 the data have been adjusted to allow for urban warming.
AVERAGES (deg C)   20-year   30-year
to 1950              9.64      9.55
to 2010             10.14      9.97
difference          +0.50C    +0.43C
to 1945              9.53      9.42
to 2005             10.06      9.87
difference          +0.53C    +0.45C
My comments:
This database suggests a 0.4 to 0.5C net warming in the 20 or 30 year periods ending in 1945 or 1950 versus the 20 or 30 year period ending in 2005 or 2010.
Note that “Since 1974 the data have been adjusted to allow for urban warming.” The degree of adjustment may be worthy of investigation. It may be inadequate.
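The trailing averages in the table are straightforward to reproduce given an annual series. A sketch with a synthetic stand-in for CET (the real monthly file lives at the Met Office URL quoted above; the numbers here are illustrative only):

```python
import numpy as np

def trailing_mean(years, temps, end_year, span):
    """Mean of the `span` years ending at `end_year` inclusive."""
    mask = (years > end_year - span) & (years <= end_year)
    return float(temps[mask].mean())

# Synthetic stand-in for annual CET means: slow rise plus year-to-year noise.
rng = np.random.default_rng(5)
years = np.arange(1659, 2011)
temps = 9.4 + 0.002 * (years - 1659) + 0.5 * rng.standard_normal(years.size)

delta_30 = trailing_mean(years, temps, 2010, 30) - trailing_mean(years, temps, 1950, 30)
print(round(delta_30, 2))
```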
Allan MacRae says:
September 26, 2013 at 12:34 pm
If their UHI effect adjustments are like Hansen’s, then they make the adjusted temperatures warmer rather than cooler.
Hi Milodon,
But this is the Hadley Centre, not Hansen.
Wait a second – they both start with Ha!
Maybe it’s a secret code. 🙂
M.S.Hodgart gives us an example of Natural Science in action: observing what exists, and creating a tool or method to help us see what is going on. The hypothesis put forward is the author’s novel method of ‘joint estimation’ and the test is ‘does it increase our understanding?’ In this case the method clearly unlocks new meaning from an already well-studied data series, and must therefore be of seminal interest to climate science as a whole.
I echo others in wanting to see the results from this new method (… that the author is apparently willing to supply on request) when it is applied to the other time-series data sets that are relevant to climate science.
Question re AMO:
PDO 1950 to 2013
http://www1.ncdc.noaa.gov/pub/data/cmb/teleconnections/pdo-f-pg.gif
PDO solidly in Cool Phase since 1998 and/or 2004
AMO 1856 to 2009
http://upload.wikimedia.org/wikipedia/commons/1/1b/Amo_timeseries_1856-present.svg
AMO still in Warm Phase but declining rapidly
Monthly AMO updates at
http://www.esrl.noaa.gov/psd/data/timeseries/AMO/
NOAA: “Since the mid-1990s we have been in a warm phase.”
http://www.aoml.noaa.gov/phod/amo_faq.php#faq_2
Does anyone have an estimate when the AMO changes to Cool Phase – it looks imminent according to some data.
Regards, Allan