An impartial look at global warming…

Guest essay by M. S. Hodgart

(Visiting Reader, Surrey Space Centre, University of Surrey)

The figure presented here is a new graph of the story of global warming – and cooling. The graph makes no predictions and should be used only to see what has been happening historically.

The boxed points in the figure are the ‘raw data’ – the annualised global average surface temperature known as HadCRUT4 as released by the UK Meteorological Office. Strictly these are ‘temperature anomalies’. The plot runs from 1870 up to the last complete calendar year 2012. The raw data cannot of course be treated as absolutely true – but let us give the Met Office the benefit of the doubt – this is hopefully their best effort so far.

It is a difficult statistical problem to estimate the historical trend in this kind of time series. The solution requires some kind of smoothing of the data – but how, exactly? There is an unlimited number of ways of drawing a curve through the data.

A popular method – much used in the climate-science literature – is the moving average. One trouble with it is that quite different-looking curves are obtained depending on the width of the smoothing window used in the average – and on the shape of that window. Another difficulty is its poor dynamic tracking capability.
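The window-width sensitivity is easy to demonstrate on synthetic data (a sketch only – this is not the HadCRUT4 series):

```python
import numpy as np

# Synthetic "annual anomaly" series (illustrative, not HadCRUT4): slow trend + noise
rng = np.random.default_rng(0)
years = np.arange(1870, 2013)                  # 143 annual values
data = 0.005 * (years - 1870) - 0.3 + rng.normal(0.0, 0.1, size=years.size)

def moving_average(y, width):
    """Centred moving average; note the ends of the record are lost."""
    return np.convolve(y, np.ones(width) / width, mode="valid")

# Different window widths give visibly different "trends" from the same data
smooth_11 = moving_average(data, 11)
smooth_31 = moving_average(data, 31)
print(smooth_11.size, smooth_31.size)          # 133 113 - shorter than the data
```

The two smoothed curves differ visibly, and both are shorter than the record itself: a centred moving average says nothing about the crucial years at either end.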

The other popular method is to fit straight lines (least-squares estimates) to selected spans of years. The notorious difficulty here is the quite different impression one gets depending on the choice of start and stop years.
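The start/stop sensitivity can likewise be shown on synthetic data with a built-in oscillation (again an illustrative sketch, not the real series):

```python
import numpy as np

# Synthetic series (illustrative only): gentle rise + ~60-year oscillation + noise
rng = np.random.default_rng(1)
years = np.arange(1870, 2013)
y = (0.005 * (years - 1870)
     + 0.15 * np.sin(2 * np.pi * (years - 1870) / 60.0)
     + rng.normal(0.0, 0.08, size=years.size))

def decadal_slope(start, stop):
    """Least-squares straight-line slope over [start, stop], in deg/decade."""
    m = (years >= start) & (years <= stop)
    slope, _ = np.polyfit(years[m], y[m], 1)
    return 10.0 * slope

# The same data give very different 'trends' depending on the chosen years
print(decadal_slope(1910, 1940), decadal_slope(1940, 1970), decadal_slope(1870, 2012))
```

The fitted slopes over 1910–1940, 1940–1970 and the full record differ substantially, even though the underlying series is one and the same.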

The difficulty is in finding a best estimate – some curve which is most likely to be closest to the truth. This is an instance of what the statistical literature identifies as the outstanding problem of model selection.

The source of the problem is what the telecommunication and control engineers call noise in the data – a random-looking variation from one year to the next.

As a conspicuous example of this random variation: in recent years, according to the record, the global temperature (anomaly) was 0.18 deg in 1996; had jumped to 0.52 deg by 1998; but had fallen again to 0.29 deg by 2000.

Respecting normal linguistic usage and common sense, we would not want to describe a jump of 0.34 deg in only two years as a phenomenon of ‘global warming’; nor a drop of 0.23 deg over the next two years as ‘global cooling’. Ordinary language, when expressed in mathematics, envisages some smooth, slowly varying curve which passes on a middle course through the scattered data, ignoring these rapid changes but responsive over the longer term to general movement. There needs to be an explicit decomposition:

HadCRUT4 annual data = trend in the data + temperature noise

The problem is to estimate that trend in the data when it is corrupted by the presence of this significant noise.


HadCRUT4 global annual averaged temperature anomaly 1870–2012 (connected brown box points). Brown curve: 26-year-span cubic loess estimate. Dashed brown curve: 10th-degree polynomial regression (PR) estimate. Red curve: mean trend. Blue curve: the offset cyclic component of the loess. The red circled points identify coincident years of trend and mean trend: 1870, 1891, 1927, 1959, 1992 and 2012. Blue circled points delineate alternating cooling and warming in the cyclic variation: 1877, 1911, 1943, 1976 and 2005.

A novel principle of joint estimation is proposed here – using two relatively simple methods of smoothing.

In the figure the continuous brown curve is an estimate by locally weighted regression (loess) – using a locally fitted cubic polynomial and the standard ‘tri-cube’ weighting. Loess is a greatly superior generalisation of the moving average [1]. Professor Mills deserves credit for first pointing out the superiority of a cubic over the usual linear or quadratic local polynomial [2]. Unfortunately the standard statistical tools seem not to have caught up with him here – nor with his ‘natural’ solution to the end-point problem (where the data run out after 2012 and before 1870 on this graph).
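For readers who want to experiment, here is a minimal numpy sketch of locally weighted cubic regression with tri-cube weights. It is not the author's code, it ignores the end-point refinements discussed above, and `loess_cubic` and its `span_frac` parameter are names invented for this illustration:

```python
import numpy as np

def loess_cubic(x, y, span_frac=0.25):
    """Minimal loess: at each point fit a cubic to the nearest span of
    neighbours, weighted by the tri-cube function, and keep the local value."""
    n = len(x)
    k = max(5, int(round(span_frac * n)))       # points in each local window
    fitted = np.empty(n)
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]                 # k nearest neighbours
        w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3   # tri-cube weights
        coeffs = np.polyfit(x[idx], y[idx], 3, w=np.sqrt(w))
        fitted[i] = np.polyval(coeffs, x0)
    return fitted

# Demo: recover a smooth curve from noisy samples
rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 120)
y = np.sin(x) + rng.normal(0.0, 0.2, size=x.size)
smooth = loess_cubic(x, y)
print(float(np.max(np.abs(smooth - np.sin(x)))))  # much smaller than the raw noise excursions
```

Note that the common off-the-shelf implementations (e.g. the lowess in statsmodels) fit locally linear polynomials, which is precisely the gap the author points out.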

The dashed brown curve is a standard (unweighted) polynomial regression. The principle of joint estimation is to look for a span of years in the loess and a degree in the polynomial regression for which the two curves most closely resemble each other – where there is least disparity.

Empirical search finds a span of 26 years for the loess and a 10th degree for the polynomial. No other combination of loess span and polynomial degree gives such close agreement. The condition is unique and therefore automatically solves the problem of model selection.
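The joint-estimation search can be sketched as a simple grid search. To keep the sketch short and self-contained, a Gaussian kernel smoother stands in for the cubic loess (an assumption of this illustration, not the author's method), and the disparity is taken as the RMS difference between the smoothed curve and an unweighted polynomial fit:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Synthetic stand-in series (illustrative only, not HadCRUT4)
rng = np.random.default_rng(2)
years = np.arange(1870, 2013).astype(float)
y = (0.005 * (years - 1870)
     + 0.15 * np.sin(2 * np.pi * (years - 1870) / 60.0)
     + rng.normal(0.0, 0.08, size=years.size))

def kernel_smooth(x, y, span):
    """Gaussian kernel smoother - a stand-in here for the cubic loess."""
    out = np.empty_like(y)
    for i, x0 in enumerate(x):
        w = np.exp(-0.5 * ((x - x0) / (span / 2.0)) ** 2)
        out[i] = np.sum(w * y) / np.sum(w)
    return out

# One polynomial-regression curve per candidate degree
poly_fits = {deg: Polynomial.fit(years, y, deg)(years) for deg in range(2, 13)}

# Grid search for the (span, degree) pair with least disparity (RMS difference)
best = None
for span in range(10, 41, 2):
    s = kernel_smooth(years, y, span)
    for deg, p in poly_fits.items():
        disparity = np.sqrt(np.mean((s - p) ** 2))
        if best is None or disparity < best[0]:
            best = (disparity, span, deg)
print("least disparity %.4f at span %d years, degree %d" % best)
```

On the real HadCRUT4 series the winning pair is, per the essay, a 26-year span and a 10th degree; this sketch merely shows the mechanics of the search.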

In the author’s view this joint estimate is really the best that can be done in finding the trend of global surface temperature. For various reasons the loess estimate should be prioritised.

The optimal estimate identifies alternating cooling and warming intervals from 1877 to 2005. Two cooling intervals alternated with two warming intervals. These two cycles of alternating cooling and warming were barely conceded, and certainly not discussed let alone explained, in the influential IPCC 4th report (AR4) published in 2007 and based on data available to 2005.

But this property conflicts with a different requirement: that a trend should be a “smooth broad movement non-oscillatory in nature” (see 1.22 in Kendall and Ord’s classic text [3]). To reconcile these different requirements the estimated trend must be further decomposed into a non-oscillatory mean trend (red curve) and a quasi-periodic oscillation (blue curve):

trend in the data = mean trend in the data + quasi-periodic oscillation

A unique decomposition is achieved by computer-assisted iterative adjustment of four intersecting common years (red circled points). The mean trend is a cubic spline interpolation which deviates least from a straight line while the oscillatory component has a zero average over the record.
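The decomposition can be illustrated with a stand-in mean trend. Here a least-squares straight line replaces the author's constrained cubic spline (purely for brevity – the spline-knot adjustment is not shown), and the residual oscillation is checked to have zero average over the record, as the decomposition requires:

```python
import numpy as np

years = np.arange(1870, 2013).astype(float)
# A synthetic 'trend in the data': slow rise plus a ~60-year oscillation
trend = 0.005 * (years - 1870) + 0.15 * np.sin(2 * np.pi * (years - 1870) / 60.0)

# Stand-in mean trend: a least-squares straight line (the author uses a cubic
# spline constrained to deviate least from a straight line - not shown here)
mean_trend = np.polyval(np.polyfit(years, trend, 1), years)
oscillation = trend - mean_trend

# The oscillatory component averages to zero over the record, as required
print(round(abs(float(np.mean(oscillation))), 10))  # 0.0
```

The real procedure adjusts spline knots at the red circled years iteratively; the straight line here simply makes the zero-average property concrete.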

The strong oscillating component – the blue curve – is seen to be contributing more than half of the rate of increase when global warming was at a peak in the early 1990s.

What goes up may come down. This oscillating component looks to be continuing. Assessment is increasingly uncertain the closer one gets to the last data year of 2012. But despite this difficulty the probability that there is again global cooling in recent years can be stated with high confidence (IPCC terminology – better than 80%).

In the author’s view the whole climate debate has been muddled – and continues to be muddled – by not differentiating between this trend in the data (which oscillates) and the mean trend (which does not).

So yes – global warming looks to have stopped (if you believe in HadCRUT4) when one defines global surface temperature in terms of that trend – the brown curve. In fact it has more than stopped – it looks very much to have gone into reverse.

But no – average global warming continues ever upwards (still believing in HadCRUT4) when one defines an average global surface temperature in terms of that mean trend – the red curve.

An unambiguous computation of the rate of temperature increase is achieved by working from those common years (red circled points) when the two estimates coincide. The increase for HadCRUT4 from 1870 to 2012 of 0.75 ± 0.24 deg is equivalent to an average rate of 0.053 ± 0.017 deg/decade. From 1959 to 2012 this average rate looks to have increased to 0.090 ± 0.034 deg/decade. (The error limits here are the usual ±2 standard deviations, or 95% confidence limits.)
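The quoted average rates follow directly from the stated increases and spans; a one-line check on the figures in the text:

```python
# Check the quoted full-record rate: 0.75 deg over 1870-2012
rise_full = 0.75                      # deg, from the text
decades_full = (2012 - 1870) / 10.0   # 14.2 decades
rate_full = rise_full / decades_full
print(round(rate_full, 3))            # 0.053 deg/decade, matching the text
```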

If this faster trend were to continue then we would be looking at an average rise from now of 0.8 deg by the end of this century (not choosing to set controversial error limits into the future). It is not however safe to make any predictions on the basis of the plot and the methodology adopted here.

It should not need to be stressed that there is no contradiction between these results and finding that regional warming may be continuing – particularly in high Northern latitudes and the Arctic.

There is a great deal more that can, and needs to, be said to justify these results. Interested readers can apply to the author for longer treatments, and in particular a full and detailed mathematical justification.

[1] W. S. Cleveland and S. J. Devlin, “Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting”, Journal of the American Statistical Association, Vol. 83, No. 403 (Sep. 1988), pp. 596–610.

[2] T. C. Mills, “Modelling Current Trends in Northern Hemisphere Temperatures”, International Journal of Climatology, Vol. 26 (2006), pp. 867–884.

[3] M. Kendall and J. K. Ord, Time Series, 3rd edition, Edward Arnold (1990).



Mathematically that analysis is correct.
It is too early yet to say that the long term trend has changed,
However, one should also look at the real world. IF the sun has as much of an effect as history suggests, then the recent change in solar behaviour, if maintained, should result in a change in the long-term trend in due course – simply because the global air circulation has also changed, and that appears to affect the proportion of solar energy able to enter the oceans via global cloudiness and albedo changes.
It is still earlier than history suggests for the millennial solar cycle to be going into reverse so the current period of inactive sun may not be maintained for long but we know so little about the reasons for solar variability that predictive ability is low.

Ed Barbar

Well, not to be a spoil sport, but:
I think there are a number of “infinities.” Number of integers is less than number of real numbers, is less than number of functions, and there are a number of other infinities even bigger.

Peter Jones

The comparison of the 1870 to 2012 period, with an average rate of 0.053 ± 0.017 deg/decade, to the 1959 to 2012 period, with an average rate of 0.090 ± 0.034 deg/decade, suffers from selection bias. The first included two complete cooling periods, whereas the last period only includes half a cooling period and a full warming period. The rate comparisons are not equivalent.


Very nice. The numbers also tie up nicely with those Judith Curry has recently been talking about, where the recent warming spurt (1980 – 2005) was likely more than 50% ‘natural’.


“impartial” – what’s that?
23 Sept: Live Science: Becky Oskin: Climate Scientists: IPCC Report Must Communicate Consensus
Climate experts also told LiveScience they would like to see the new report stress the scientific consensus on climate change, and emphasize the link between human activities and global warming.
“I hope this report will stress the virtual certainty among the scientific community that humans are affecting the climate system in profound ways, mainly through burning ever-increasing amounts of fossil fuels,” said Jennifer Francis, an atmospheric scientist at Rutgers University in New Jersey. “I hope it will emphasize the high confidence in attribution of many aspects of climate change to increasing greenhouse gases, and de-emphasize the discussion of uncertainty. The public hears “uncertainty” and thinks there is no consensus.”…
Critics of the leaked drafts have focused on what climate scientist Kevin Trenberth said is the “mistaken idea that warming has slowed…
“A key will be whether there is a major succinct message out of this report,” said Trenberth, a climate scientist at the National Center for Atmospheric Research, also in Boulder, Colo.
“The previous three have had signature messages,” Trenberth said. “Maybe this one is that warming signs are everywhere in melting Arctic sea ice, melting Greenland, warming oceans, rising sea levels, and more intense storms as well as higher surface temperatures. This would also go some way toward addressing [this] mistaken idea.” [6 Unexpected Effects of Climate Change]…
“It is not just a scientific document — it should have policy implications,” Trenberth said. “And, of course, this is why there are well-financed and organized denier campaigns out in force.”…


Wow . . . Paging RGB!


Irrelevant. We were told human CO2 would overpower all natural variability. The science was settled. That obviously has not been the case. Until the consensus scientists admit they were wrong on both points, I won’t be putting any value on anything further they have to say.

This is an excellent post. A genuine attempt to make sense of the data, rather than to promote an agenda – if only the IPCC worked like this.
On this basis, it does indeed look as though the underlying trend is still upwards, so far. In addition, there is a clear indication of the warming trend increasing after 1960, so I would be inclined – on these figures – to accept the likelihood of a man-made element.
However, check the numbers. The post 1960 trend is around 1° per century, and shows no sign of increasing. The trend in the early 20th century is at least a third of that, so the man-made element would appear to be only about 0.6° to 0.7° per century. Hardly a cause for panic.
History may record that the IPCC was not 100% wrong, but massively overplayed its hand, and consequently exaggerated the threat, by refusing to accept that the rapid warming seen in the late 20th century was principally due to natural oscillation.

Karl Wiedemann

A very interesting and nicely balanced discussion


I know you have stated (if you believe in HadCRUT4) in your analysis, my problem is not with your approach but the data you have used. Using HadCRUT4 adds a degree of legitimacy to that temperature approximation that is not deserved.
I have yet to see a raw temperature record that highlights the discrepancy we see in HadCRUT4 from around the 1940s to the recent warming episode at the end of the century. I believe Willis showed this in previous posts using the CET temperature record, and Christopher Monckton’s recent post showed the temperature adjustments made in Darwin and other locations.
The problem, as we all know, with UHI effects and temperature adjustments making the 1940s cooler is that your analysis may well be measuring only these two effects and not warming at all.
I would suggest using your analysis on some raw rural temperature sets and then seeing if you can detect warming in a temperature signal.

Dodgy Geezer

Lots of people have pointed out that the climate variation can be modeled assuming an (approximately) 60-year cycle and a gradual rise. They have been told that this is not politically acceptable.
This analysis looks to me like showing TWO curves – a 60-year one and one at about 160 years. This suggests that we should look for a hot peak at about 2040, followed by another ‘little ice age’ bottoming out in 2200.
But good luck getting anyone in charge to listen to this….. 🙁


Great analysis on trend maths for simulated thermometers. I can’t wait to see how it looks when you test it against actual data.


Certainly looks like there is an upward trend, but not enough to worry about, let alone spend a trillion dollars on ….

Dodgy Geezer

Whoops- sorry. Should have said “two SINE curves” above…

I once did something very similar to this analysis. I made a fit to HadCRUT3 data including identified harmonics. Once you include the 60-year oscillation (AMO?), climate sensitivity (TCR) for CO2 works out to be 1.4 C. The fit shows that the current pause in warming will continue until at least 2025.

For any student of cycles, it is clear that there is a cycle of around 60 years in temperatures. This cycle is found in many other climate-related variables. Also, there is a cycle of around 208 years in solar output and temperature, called either the Suess or the de Vries cycle. This cycle was rising for the entire twentieth century but is now in the down phase. Another cycle of 2300 years, called the Hallstatt cycle, is now rising, and will be for hundreds of years yet. These cycles can be clearly seen in temperature and solar proxies over thousands of years.

Christopher Hanley

As far as any human influence is concerned the temperature trend prior to ~1945 is irrelevant.


This is consistent with Akasofu’s interpretation.


The non-oscillatory mean trend (red curve) is a quasi-periodic variation too – just a longer cycle. The ~60 year cycle is not the only climatic cycle – there are many longer cycles (~200 years and longer). The cycles are of course quasi-periodic (variable cycle length, just like the solar ~11-year cycle). Both the ~60 year and the longer ~200 year cycle seem to be plateauing/shifting at this point. That means the cooling will be much more dramatic than in the 50s/60s. Man (and CO2) is irrelevant.


Nice treatment of the data which should, but won’t, mitigate some of the quibbling.
It does, however, need to be kept in perspective.
Data over 140 years are insufficient to make overly broad claims about natural variability, and it would require a leap of imagination to use these data in and of themselves to draw conclusions about cause and effect.

Pat reports on what Kevin “jai mitchell” Trenberth says:

The public hears “uncertainty” and thinks there is no consensus.”…
“…warming signs are everywhere in melting Arctic sea ice, melting Greenland, warming oceans, rising sea levels, and more intense storms as well as higher surface temperatures…
“It is not just a scientific document — it should have policy implications,” Trenberth said. “And, of course, this is why there are well-financed and organized denier campaigns out in force.”…

Trenberth is clearly nuts, like Michael Mann, assigning conspiratorial motives to un-named, shadowy “denier campaigners” who he believes are out to get him.
But why won’t Trenberth or Mann name those “deniers” or their organizations? Sunlight is a disinfectant. If there were actually any such “well-financed” organizations “out in force”, then let’s compare their finances with what Trenberth and Mann rake in to spread their climate pseudo-science.
If it were not for psychological projection, the alarmist crowd would lose one of its biggest arguments.

David in Michigan

Nice analysis with clearly stated data source and reasoning. It appears to me that the overall global warming conversation has shifted from “all warming is anthropogenic” and “there is no warming” to an acknowledgement that natural warming is also in play. Thanks.

David in Michigan

Oops. I meant to say “natural variation” rather than “natural warming” is also in play.


If you look carefully at the data, you can see that only the middle part shows a nice sine, and the outer edges are cramped together due to the obvious lack of temperature data and correct trending on the outer sides. If you take only the middle part, the cycle is 64 years, aligning perfectly with the solar cycle: solar and sine minimum at 1912; then two solar minima in between (1923 and 1934); then a solar minimum and sine maximum at 1944; then again two solar minima in between (1954 and 1965); then a solar and sine minimum at 1976; etc. I don’t know what this means exactly – I just look at the data. I know, however, that there are also longer solar cycles, like the “De Vries” cycle, which is just entering another phase according to, for instance, Prof. De Jager in my country.


As the spread between HadCRUT4 and RSS/UAH satellite data increases, it would be interesting to see the same statistical analysis done on RSS and UAH, to see how well they compare to the HadCRUT4 analysis.
I realize that 34 years of satellite data is a little short, but certainly long enough for statistical significance.
It’s amazing to see how closely the blue oscillating curve fits the PDO warming/cooling cycles. Since the PDO entered its 30-yr cool phase in 2008, it would tend to support future falling temps as indicated on the graph, especially in light of weakening solar cycles which also started from 2008, leading to, in scientific parlance, a super-duper double whammy….

Warning signs of what? Presumably that we live on a planet in space where the weather changes daily – storms, hurricanes, floods, tornadoes, cyclones. If God is looking down – which I doubt – he must worry continuously about just how many half-wits have been created by evolution, and that common sense is such a rare commodity. I see a number of highly gifted people given the innate ability to take mathematics to the nth degree but who still can’t tell sh.t from sugar.

The fact is that ordinary folk have contributed billions, maybe trillions, in the pursuit of trying to understand why or how our climate behaves, and have failed dismally. Everyone involved is still dancing on the head of a pin with no end in sight. “Uncertainty” equals no consensus, but consensus is not proof. This is just one classic example of more humans trying to justify their existence – and no, the IPCC, Judith Curry and everyone else involved continue dancing on the head of a pin while Mr Ordinary gets his wealth sequestrated in order to pay for these guys to indulge in their pet hobby, when the person who is making their life possible derives no benefit whatsoever except higher and higher energy bills and more restrictions on his ability to travel, while the lauded few get to travel across the planet first class to tout their jaded theories of how, what and where. All I hope and pray is that we get another five years of flat temperatures; then you are all toast and in great need of having to work for a living, or get another hobby.

Because whatever happens, humanity will only get to tinker at the edges, at great cost to everyone, for no benefit whatsoever – except for those at the heart of this giant scam, like Gore, who will continue to grow his fortune. The EU will spend €165 billion every year, ending up at €20 trillion – and for what? The total devastation of our environment with useless wind and solar, supposedly in order to save the planet.


I reckon the red line is more likely a longer period sine wave, not unidirectional increasing.


Any polynomials used for regression (with the exception of degree-0 polynomials) run off to either plus infinity or minus infinity beyond the end of the data – unlike world temperatures, which have stayed within relatively narrow limits for several billion years already. As a smoothing method it is definitely interesting, but with no validity not just beyond either end of the data, but also anywhere near them. And regarding the natural processes behind the data, it doesn’t tell us anything either.
I’d call it yet another regression.


Just one more comment. I wonder about two things:
1/ how much the resulting red curve differs from simple degree-2 polynomial regression of the data
2/ what the result of this analysis would be if applied to periods 1910-today or 1970-today


Well I studied under Stephen Hodgart when I was an undergraduate in Electronic Engineering at Surrey University so I can vouch for his ability. He is an exceptional engineer rather than a theoretical scientist so he has to use science to make things that work – which is always good practice I feel.
However, that said, this is just another example of curve fitting. Yes, the curve fitted is quite credible, as eye-balling the data would suggest, but with so little data available (and that being rather corrupted by the HADCRUT process) it is difficult to say whether there is any truth to it. I suspect that Stephen Hodgart would find it more profitable, given his knowledge of control systems and feedback, to posit the question “What kind of system would the Earth’s climate need to be to have an exponential input but a non-exponential output?”, because it seems to me that the graph above shows, from 1960 onwards, a linear trend superimposed on a sine.
To me it seems that a continuous sine is unlikely to be related to the exponential increase in CO2, but the linear trend could potentially be related to the exponential increase in CO2 if there is significant negative feedback which just happens to have a similar characteristic to the exponential CO2 increase. Of course, the rate of increase is in any case low, so it should not be of immediate concern to any of us.

Phillip Bratby

I don’t like the way the blue curve is turning down at the end.


And we are continuing the cooling trend since the Minoan high and the mediaeval warm period. No matter how you smooth it, it’s always a start-point/end-point issue. Taken from the Maunder minimum, of course, there is a gradual warming, but it’s inconsistent with CO2.


I liked this analysis. It makes a good case for the presence of one (and likely more?) cycles in the surface temperatures. Data from the next 50–100 years will show whether the “trend” is the CAGW the IPCC prefer, or another 200-year, 0.5 K-amplitude cycle, perhaps overlaid on some smaller CO2/AGW trend.
If we could instead get a much better surface temperature reconstruction going back 1000+ years soon, it would help so much in addressing this issue of internal “climate” variability. I’m really too old to wait for new data 🙁

tom roche

An excellent post and an honest attempt to quantify carbon’s influence on temperature. The hugely positive influence of increased carbon on plant life and “sustainability” deserves similar attention.

I enjoyed reading the article, and I find that it is just this sort of thing that makes WUWT a wonderful science resource. No wonder Anthony can’t get a dime out of “Big Oil”. 🙂
But I do think such an analysis would be better if I was told it was on UN-adjusted data. I expect a gradual warming since we have had such since the end of the little ice age, but I remain unconvinced that anyone has taken a strong look at real data.
Further, can “real data” even be had? Look at the project of Anthony’s group, where they saw that the data sets are based on horribly sited and maintained temperature stations. What good does a statistical analysis of garbage data do?
And even further, what can we really tell about tenths of a degree of average warming or cooling over such a short span of time, given how recently we have had instrumentation that can measure to a decimal place?
PS: Please bring back the preview function. Please. Please. …

The difficulty is finding for a best estimate – some curve which is most likely to be closest to the truth. There is an outstanding problem in what the statistical literature identifies as model selection.
I get nervous when a statistician tells me he can get different results depending on what model he uses.
I get even more nervous when he picks one that “is closest to the truth”.

Mr Green Genes

… but let us give the Met Office the benefit of the doubt …
Sorry, you lost me at this point.

Dr. John M. Ware

A sweet and reasonable-sounding article; however, questions arise:
What temperature data does he actually use? We read about how actual temperatures from the past have been changed in the record, mostly to cool down earlier periods so that the recent warming looks more obvious. Has the author been able to get around this issue? How do we know?
From my inspection of the graph, it appears that from starting date to ending date the average temperature has risen from -.33 to +.18 or so, or half a degree. How many people can even detect such a small difference? How can the analyst separate it from reader error, urban heat island siting, or other issues so frequently raised on this forum?
There has been a very significant fluctuation in the number and placement of weather stations in the period of the graph. As sites gradually become surrounded by people, cars, planes, asphalt, and the like, the recorded temperature must surely rise; has that issue been allowed for in making the graph?
More questions come, but my point should be obvious: How much can we trust this graph, or any such graph; and why should we?

Tony McGough

Fascinating. Well done. May I suggest to Mr Hodgart that he apply the same expertise to the Central England Temperature record, which goes back an extra century or two? The result could be very enlightening.


I’d like to see the Data for a minimum of three, better yet five, cycles of the blue curve – oscillating component.
I’ll wait…


Mr. Hodgart, quite interesting, but I do see a needed test to be performed before I can see it really working correctly in action.
Seems a good test of your method would be to duplicate the entire 1870-2005 series but rotate all values on the year of 2005 bringing the temperature back to where it was in 1870 in 2140 and see how your ‘mean’ method, the red curve, handles the roll over near 2005 (I think it was actually 2009), that being the peak of the entire 270 year span. I’m just curious how well your mean method handles a now downward century long segment if one occurs. We all see how it handles a general upward trend. Maybe it is just me but I see that red mean curve taking one very, very long time to ever bending over backwards on the downward slide should that happen in the unknown future and might be a flaw in the way you create the mean curve. Try it for us, or, I’d love you to tell us the formulas, I will program it since most stat packages don’t include such methods.
I see no problem in the brown curve, the cubic fit and I’m sure it is better than a simple moving average.

A Crooks

“In the author’s view the whole climate debate has been muddled – and continues to be muddled – by not differentiating between this trend in the data (which oscillates) and the mean trend (which does not).”
My thoughts entirely. For what it is worth I would add one caveat. I suspect that the red curve is distorted by the historic adjustment of old temperatures down and modern temperatures up. I suspect this is the reason for the distortion on the blue oscillation away from a neat sine wave. I’m suspicious but watching closely.
Importantly, the satellite data series from 1979 is probably still not long enough to make any sensible comment about global warming trends since all we can see is the oscillation overprint.
On the other hand though, if one isolates the 60-year sinusoidal oscillation from the satellite data, the long term trend, here in red, does appear to have flattened out rather than steepening up – so it is possible that, as someone else noted above, the red line is actually a bigger sinusoidal oscillation of the order of 250 – 300 years? But we will have to wait at least 20 years to be able to tell.
In the meantime, as I believe the UK Met Office, Pachauri, and others (on notricks?) are already conceding, it will be 17 years before there is any warming, which means they have conceded this 60-year oscillation, and therefore, by implication, this underlying red trend.

Don K

My first reaction: the article is an amazingly clear explanation of extremely difficult material.
A tenth order polynomial? Doesn’t give me warm fuzzy feelings. Intuitively, I’m inclined to have huge doubts about any solution to anything that involves that many variables. Maybe this is an exception. I’ll have to think about whether this is an exception. Probably for years, not hours.
It is encouraging to see a (roughly) sixty year climate cycle component in there. There’s enough evidence for something of that sort that I think one should have serious doubts about any solution to climate prediction that doesn’t exhibit it. e.g. current climate models.
I like the idea of describing first, explaining later. Historic example. Kepler described planetary motion. (And, in passing, he was reportedly not happy with his description — planets follow elliptical paths. He’d expected more elegant circles). Newton came along later and explained the motion — gravity.


Are there examples where adjustments to raw data lead to reductions in temperatures/temperature anomalies? The trend shown by the Hadcrut data may be true but its magnitude may be amplified by corrupted data as noted by our mad lord and others (eg weather stations located next to air conditioner units etc).

Lance Wallace

While searching for Mills 2006 I ran across Mills 2009, (Modelling Current Temperature Trends – Journal of Data Science) which indeed does what Tony McGough was asking for, analyze the CET record:
“The CET series, however, is almost 200 years longer, and examining Figure 3 reveals that there are (at least) two earlier periods that display similar behaviour to the current warming trend. Figure 4 therefore compares the low-pass trend fitted to the complete record with similar trends fitted to the record ending in 1736 and 1834 respectively (the other trend fits are very similar). Prior to 1736, trend temperatures increased by 1.5 °C in the 46 years since 1690, while the 24 years from 1810 saw trend temperatures increase by 0.75 °C. By comparison, the current warming period has seen trend temperatures increase by almost 1 °C in forty years. Both the earlier trends have ‘current’ slopes, estimated to be 0.047 and 0.045 respectively, in excess of the 2005 slope of 0.040 °C, and both periods contain temperature extremes that are comparable to those reached in the last decade. As can be seen from Figure 4, both trends quickly reversed themselves after these two dates, which were, of course, before serious industrialisation had begun.
Thus the recent warming trend in the CET series is by no means unprecedented. While we are not suggesting that the current warming trend will necessarily be quickly reversed, this statistical exercise reveals that examining temperature records over a longer time frame may offer a different perspective on global warming than that which is commonly expressed. Indeed, examining much longer records of temperature reconstructions from proxy data reveals a very different picture of climate change than just focusing on the last 150 years or so of temperature observations, with several historical epochs experiencing temperatures at least as warm as those being encountered today: see, for example, Mills (2004, 2007b) for trend modelling of long-run temperature reconstructions. At the very least, proponents of continuing global warming and climate change would perhaps be wise not to make the recent warming trend in recorded temperatures a central plank in their argument.”
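[A quick arithmetic check on the rates quoted above. Note these are simple average rates (total rise divided by years); the paper's quoted "current" slopes of 0.047, 0.045 and 0.040 °C/yr are endpoint slopes of a fitted low-pass trend, so the two sets of numbers need not coincide.]

```python
# Average warming rates implied by the Mills (2009) excerpt, in °C per year.
pre_1736 = 1.5 / 46    # 1690-1736: 1.5 °C over 46 years
pre_1834 = 0.75 / 24   # 1810-1834: 0.75 °C over 24 years
recent = 1.0 / 40      # "almost 1 °C in forty years"

for label, rate in [("pre-1736", pre_1736), ("pre-1834", pre_1834),
                    ("recent", recent)]:
    print(f"{label}: {rate:.4f} deg C/yr")
```

On these averages both pre-industrial episodes warmed at least as fast as the recent one, which is the point the excerpt is making.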


The main point for everyone reading this (without getting too immersed in scientific detail) is that we still see JUST a whole ONE DEGREE CENTIGRADE of warming between around 1910 and 2008.
Gosh . . . . ONE DEGREE in 98 years . . . . we’re doomed. How my body possibly survives an unbelievable 30-degree temperature difference in one year between our winter and summer, I just don’t know. And this morning, within two minutes, I walked from my conservatory upstairs to the study and the difference in temperature was about 10 degrees C. I’m still alive, the water level in my dog’s bowl hasn’t risen and the ice in the fridge hasn’t melted. However, to help save the planet from this unprecedented warming catastrophe, I am obliged to give the UK government a shed load of vehicle road tax calculated purely on the amount of CO2 spewing from my car’s exhaust. And all because they think I won’t cope with a ONE DEGREE CELSIUS temperature difference during my lifetime.
Please will someone tell the emperor that he hasn’t got any clothes on.
PS Nice article, good analysis with clearly stated data source and reasoning. Thank you.

Steven Hill from Ky (the welfare state)

No, no, no, no…we are really in an ice age offset by human activity. When the ice age is over the temp will rise 10C…. 😉 Long live Hansen, Gore, Mann and all the others that make a living on taxes.

James Schrumpf

I don’t see how the natural variability of the temperature data can be called “noise.” If it’s up one year and down the next, how is that noise if it’s actually the true data? The human mind tries to see patterns everywhere it looks. It’s a valuable survival strategy, allowing humans to identify recurring dangers and benefits in their environment. It’s also a problem in that we tend to find patterns where none exist.
I think referring to the natural variability of data that has already been strenuously massaged to get to that one temperature point for the entire Earth as “noise” is a false concept. Rather than trying to get a smooth curve from it, we should step back and say, “Damn, that number moves around a LOT, doesn’t it?”
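[For what it's worth, here is a sketch of the kind of smoothing being objected to: a centred 5-point moving average, which is one of the methods the essay itself criticises. The anomaly values below are made up for illustration, loosely echoing the essay's 1996/1998/2000 example, not real HadCRUT4 figures.]

```python
import numpy as np

# Made-up yearly anomalies (deg C) with large year-to-year swings.
anom = np.array([0.25, 0.30, 0.18, 0.40, 0.52, 0.33, 0.29, 0.35, 0.40])

# Centred moving average via convolution with a flat 5-point window;
# "valid" mode drops the edge years where the window would overhang.
window = 5
smooth = np.convolve(anom, np.ones(window) / window, mode="valid")

# The smoothed series swings far less than the raw one.
print(f"raw range: {np.ptp(anom):.2f}, smoothed range: {np.ptp(smooth):.2f}")
```

Whether the variation thus averaged away is "noise" or the true signal is exactly the question the comment raises; the smoother cannot tell the difference.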


Very well done, Mr. M. S. Hodgart, indeed. But wouldn’t the next analysis be to back out the “corrections & adjustments” made by Hadley & NASA et al. over the last 30+ years and re-run your analysis? Regards. Nice work: simple, subtle, sophisticated, as in K.I.S.S.