An impartial look at global warming…

Guest essay by M.S.Hodgart
(Visiting Reader Surrey Space Centre University of Surrey)

The figure presented here is a new graph of the story of global warming – and cooling. The graph makes no predictions and should be used only to see what has been happening historically.

The boxed points in the figure are the ‘raw data’ – the annualised global average surface temperature known as HadCRUT4 as released by the UK Meteorological Office. Strictly these are ‘temperature anomalies’. The plot runs from 1870 up to the last complete calendar year 2012. The raw data cannot of course be treated as absolutely true – but let us give the Met Office the benefit of the doubt – this is hopefully their best effort so far.

It is a difficult statistical problem to estimate the historical trend in this kind of time series. The solution requires some kind of smoothing of the data – but how, exactly? There are an unlimited number of ways of drawing a curve through the data.

A popular method – much used in the climate science literature – is the moving average. One trouble with it is that quite different-looking curves result depending on the width of the smoothing window used in that average – and also on the shape of the window. Another difficulty is its poor dynamic tracking capability.
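
The window-width sensitivity is easy to demonstrate. The sketch below is illustrative only – it uses a synthetic series (slow trend plus oscillation plus noise), not the HadCRUT4 data – and smooths it with three boxcar windows of different widths:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1870, 2013)
# synthetic "temperature anomaly": slow trend + oscillation + noise
signal = 0.005 * (years - 1870) + 0.1 * np.sin(2 * np.pi * (years - 1870) / 65)
data = signal + rng.normal(0.0, 0.1, size=years.size)

def moving_average(x, width):
    # simple boxcar window; 'valid' mode shortens the series at both ends
    return np.convolve(x, np.ones(width) / width, mode="valid")

for width in (5, 11, 21):
    sm = moving_average(data, width)
    print(width, sm.size, round(float(sm.max() - sm.min()), 3))
```

Each width gives a visibly different curve, and the smoothed series also loses `width - 1` points at the ends – the end-point problem mentioned below.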

The other popular method is to fit a selection of straight lines (least-squares estimates) to selected spans of years. The notorious difficulty here is the quite different impression one gets depending on the choice of start and stop years.

The difficulty is in finding a best estimate – some curve which is most likely to be closest to the truth. This is an outstanding problem in what the statistical literature identifies as model selection.

The source of the problem is what the telecommunication and control engineers call noise in the data – a random-looking variation from one year to the next.

As a conspicuous example of this random variation: in recent years, according to the record, the global temperature (anomaly) was 0.18 deg in 1996; had jumped to 0.52 deg in 1998; but had fallen again to 0.29 deg by 2000.

Respecting normal linguistic usage and common sense, we would not want to describe a jump of 0.34 deg in only two years as a phenomenon of ‘global warming’; nor a drop of 0.23 deg over the next two years as ‘global cooling’. Ordinary language, when expressed in mathematics, envisages some smooth slow-varying curve which passes on a middle course through the scattered data, ignoring these rapid changes but responsive over a longer term to general movement. There needs to be an explicit decomposition:

HadCRUT4 annual data = trend in the data + temperature noise

The problem is to estimate that trend in the data when it is corrupted by the presence of this significant noise.

Figure: HadCRUT4 global annual averaged temperature anomaly 1870–2012 (connected brown box points). Brown curve: 26-year span cubic loess estimate. Dashed brown curve: 10th-degree polynomial regression estimate. Red curve: mean trend. Blue curve: the offset cyclic component of the loess. The red circled points identify coincident years of trend and mean trend: 1870, 1891, 1927, 1959, 1992 & 2012. Blue circled points delineate alternating cooling and warming in the cyclic variation: 1877, 1911, 1943, 1976 & 2005.

A novel principle of joint estimation is proposed here – using two relatively simple methods of smoothing.

In the figure the continuous brown curve is an estimate by locally weighted regression (loess) – using a locally-fitted cubic polynomial and the standard ‘tri-cube’ weighting. Loess is a greatly superior generalisation of the moving average [1]. Professor Mills deserves credit for first pointing out the superiority of a cubic over the usual linear or quadratic local polynomial [2]. Unfortunately the standard statistical tools seem not to have caught up with him here – nor with his ‘natural’ solution to the end-point problem (where data runs out after 2012 and before 1870 on this graph).
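
For readers who want to experiment, here is a minimal sketch of a tri-cube weighted local cubic regression. It is this editor's illustration, not the author's actual code: the function name `loess_cubic` is invented, there are no robustness iterations, and no special end-point treatment is made.

```python
import numpy as np

def loess_cubic(x, y, span_points, x_eval):
    """Tri-cube weighted local cubic regression (illustrative sketch only).

    span_points: number of nearest neighbours used at each evaluation point.
    """
    y_hat = np.empty(len(x_eval), dtype=float)
    for i, x0 in enumerate(x_eval):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:span_points]       # nearest neighbours
        h = d[idx].max()                        # local bandwidth
        w = (1.0 - (d[idx] / h) ** 3) ** 3      # tri-cube weights
        # weighted least-squares cubic, centred at x0;
        # the constant term is the fitted value at x0
        coeffs = np.polyfit(x[idx] - x0, y[idx], deg=3, w=np.sqrt(w))
        y_hat[i] = coeffs[-1]
    return y_hat
```

A quick sanity check: on data that is exactly a cubic, the local cubic fit reproduces the data exactly, whatever the weights.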

The dashed brown curve is a standard (unweighted) polynomial regression. The principle of joint estimation is to look for a span of years in the loess and a degree in the polynomial regression where the two curves most closely resemble each other – the combination with the least disparity.

Empirical search finds a span of 26 years for the loess and a 10th degree for the polynomial. No other combination of loess span and polynomial degree gives such close agreement. The condition is unique and therefore automatically solves the problem of model selection.
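
That empirical search can be sketched as a plain grid search. Everything below is a simplified illustration of the principle – the compact `local_cubic_smooth` helper and the RMS disparity measure are this sketch's own assumptions, with no end-point correction – not the author's actual algorithm:

```python
import numpy as np

def local_cubic_smooth(x, y, span):
    """Compact tri-cube weighted local cubic smoother (illustration only)."""
    out = np.empty(len(x), dtype=float)
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:span]
        w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3
        out[i] = np.polyfit(x[idx] - x0, y[idx], 3, w=np.sqrt(w))[-1]
    return out

def joint_estimate(x, y, spans, degrees):
    """Return (rms_disparity, span, degree) minimising the RMS difference
    between the loess-style smooth and a global polynomial regression."""
    xc = (x - x.mean()) / x.std()   # rescale for a well-conditioned polyfit
    best = None
    for span in spans:
        smooth = local_cubic_smooth(x, y, span)
        for deg in degrees:
            poly = np.polyval(np.polyfit(xc, y, deg), xc)
            rms = float(np.sqrt(np.mean((smooth - poly) ** 2)))
            if best is None or rms < best[0]:
                best = (rms, span, deg)
    return best
```

On a smooth test signal the two smooths agree closely for some (span, degree) pair, which is exactly the condition the joint estimate looks for.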

In the author’s view this joint estimate is really the best that can be done in finding the trend of global surface temperature. For various reasons the loess estimate should be prioritised.

The optimal estimate identifies alternating cooling and warming intervals from 1877 to 2005. Two cooling intervals alternated with two warming intervals. These two cycles of alternating cooling and warming were barely conceded, and certainly not discussed let alone explained, in the influential IPCC 4th report (AR4) published in 2007 and based on data available to 2005.

But this property conflicts with a different requirement: that a trend should be a “smooth broad movement non-oscillatory in nature” (see 1.22 in Kendall and Ord’s classic text [3]). To reconcile these different requirements the estimated trend must be further decomposed into a non-oscillatory mean trend (red curve) and a quasi-periodic oscillation (blue curve):

trend in the data = mean trend in the data + quasi-periodic oscillation

A unique decomposition is achieved by computer-assisted iterative adjustment of four intersecting common years (red circled points). The mean trend is a cubic spline interpolation which deviates least from a straight line while the oscillatory component has a zero average over the record.
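
The spline part of this decomposition can be sketched as follows. The common years are taken from the figure caption, but the anomaly values below are illustrative placeholders only – they are not read off the actual HadCRUT4 trend – and the iterative adjustment of the interior years is omitted:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Years where trend and mean trend coincide (from the figure caption).
# The anomaly values are illustrative placeholders, NOT the real trend.
common_years = np.array([1870.0, 1891.0, 1927.0, 1959.0, 1992.0, 2012.0])
trend_values = np.array([-0.33, -0.28, -0.18, -0.05, 0.25, 0.42])

# The mean trend interpolates the common years with a cubic spline; the
# quasi-periodic oscillation is then (trend - mean trend), which by
# construction is zero at each common year.
mean_trend = CubicSpline(common_years, trend_values)

grid = np.arange(1870.0, 2013.0)
mean_on_grid = mean_trend(grid)
```

Because the spline interpolates, the mean trend passes exactly through each common-year point, so the oscillatory residual vanishes there – matching the red circled points in the figure.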

The strong oscillating component – the blue curve – is seen to be contributing more than half of the rate of increase when global warming was at a peak in the early 1990s.

What goes up may come down. This oscillating component looks to be continuing. Assessment is increasingly uncertain the closer one gets to the last data year of 2012. But despite this difficulty the probability that there is again global cooling in recent years can be stated with high confidence (IPCC terminology – better than 80%).

In the author’s view the whole climate debate has been muddled – and continues to be muddled – by not differentiating between this trend in the data (which oscillates) and the mean trend (which does not).

So yes – global warming looks to have stopped (if you believe in HadCRUT4) when one defines global surface temperature in terms of that trend – the brown curve. In fact it has more than stopped – it looks very much to have gone into reverse.

But no – average global warming continues ever upwards (still believing in HadCRUT4) when one defines an average global surface temperature in terms of that mean trend – the red curve.

A non-ambiguous computation of the rate of temperature increase is achieved by working from those common years (red circled points) when the two estimates coincide. The increase for HadCRUT4 from 1870 to 2012 of 0.75 ± 0.24 deg is equivalent to an average rate of 0.053 ± 0.017 deg/decade. From 1959 to 2012 this average rate looks to have increased to 0.090 ± 0.034 deg/decade. (The error limits here are the usual ±2 standard deviations, or 95% confidence limits.)
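
The quoted central rates are simple arithmetic on the coincident-year values and can be checked directly (central values only – the error limits require the full statistical treatment):

```python
# Check the quoted central rates (deg per decade); error limits omitted.
rise_full, y0, y1 = 0.75, 1870, 2012
rate_full = rise_full / ((y1 - y0) / 10.0)    # 0.75 deg over 14.2 decades
print(round(rate_full, 3))                    # 0.053

# The 1959-2012 rate of 0.090 deg/decade implies a rise over that span of:
rise_recent = 0.090 * ((2012 - 1959) / 10.0)  # 5.3 decades
print(round(rise_recent, 3))                  # 0.477
```

So the faster recent rate corresponds to a rise of roughly half a degree over 1959–2012, against three quarters of a degree over the whole record.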

If this faster trend were to continue then we would be looking at an average rise from now of 0.8 deg by the end of this century (not choosing to set controversial error limits into the future). It is not however safe to make any predictions on the basis of the plot and the methodology adopted here.

It should not need to be stressed that there is no contradiction between these results and finding that regional warming may be continuing – particularly in high Northern latitudes and the Arctic.

There is a great deal more that can and needs to be said to justify these results. Interested readers can apply to the author for longer treatments and in particular a full and detailed mathematical justification.


[1] W. S. Cleveland and S. J. Devlin, “Locally Weighted Regression: An Approach to Regression Analysis by Local Fitting”, Journal of the American Statistical Association, Vol. 83, No. 403 (Sep. 1988), pp. 596–610.

[2] T. C. Mills, “Modelling Current Trends in Northern Hemisphere Temperatures”, International Journal of Climatology, 26, pp. 867–884 (2006).

[3] M. Kendall and J. K. Ord, Time Series, 3rd edition, Edward Arnold (1990).


147 thoughts on “An impartial look at global warming…”

  1. Mathematically that analysis is correct.

    It is too early yet to say that the long term trend has changed,

    However, one should also look at the real world. IF the sun has as much of an effect as history suggests, then the recent change in solar behaviour, if maintained, should result in a change in the long-term trend in due course – simply because the global air circulation has also changed, and that appears to affect the proportion of solar energy able to enter the oceans via global cloudiness and albedo changes.

    It is still earlier than history suggests for the millennial solar cycle to be going into reverse so the current period of inactive sun may not be maintained for long but we know so little about the reasons for solar variability that predictive ability is low.

  2. Well, not to be a spoil sport, but:
    I think there are a number of “infinities.” Number of integers is less than number of real numbers, is less than number of functions, and there are a number of other infinities even bigger.

  3. The comparison of the 1870 to 2012 period, with an average rate of 0.053 ± 0.017 deg/decade, to the 1959 to 2012 period, which has an average rate of 0.090 ± 0.034 deg/decade, suffers from selection bias. The first includes two complete cooling periods, whereas the last period only includes half a cooling period and a full warming period. The rate comparisons are not equivalent.

  4. Very nice. The numbers also tie up nicely with those Judith Curry has recently been talking about, where the recent warming spurt (1980 – 2005) was likely more than 50% ‘natural’.

  5. “impartial” – what’s that?

    23 Sept: Live Science: Becky Oskin: Climate Scientists: IPCC Report Must Communicate Consensus
    Climate experts also told LiveScience they would like to see the new report stress the scientific consensus on climate change, and emphasize the link between human activities and global warming.
    “I hope this report will stress the virtual certainty among the scientific community that humans are affecting the climate system in profound ways, mainly through burning ever-increasing amounts of fossil fuels,” said Jennifer Francis, an atmospheric scientist at Rutgers University in New Jersey. “I hope it will emphasize the high confidence in attribution of many aspects of climate change to increasing greenhouse gases, and de-emphasize the discussion of uncertainty. The public hears “uncertainty” and thinks there is no consensus.”…
    Critics of the leaked drafts have focused on what climate scientist Kevin Trenberth said is the “mistaken idea that warming has slowed…
    “A key will be whether there is a major succinct message out of this report,” said Trenberth, a climate scientist at the National Center for Atmospheric Research, also in Boulder, Colo.
    “The previous three have had signature messages,” Trenberth said. “Maybe this one is that warming signs are everywhere in melting Arctic sea ice, melting Greenland, warming oceans, rising sea levels, and more intense storms as well as higher surface temperatures. This would also go some way toward addressing [this] mistaken idea.” [6 Unexpected Effects of Climate Change]…
    “It is not just a scientific document — it should have policy implications,” Trenberth said. “And, of course, this is why there are well-financed and organized denier campaigns out in force.”…

    http://www.livescience.com/39869-rethinking-ipcc-climate-change-report.html


  6. Irrelevant. We were told human CO2 would overpower all natural variability. The science was settled. That obviously has not been the case. Until the consensus scientists admit they were wrong on both points I won’t be putting any value on anything further they have to say.

  7. This is an excellent post. A genuine attempt to make sense of the data, rather than to promote an agenda – if only the IPCC worked like this.
    On this basis, it does indeed look as though the underlying trend is still upwards, so far. In addition, there is a clear indication of the warming trend increasing after 1960, so I would be inclined – on these figures – to accept the likelihood of a man-made element.
    However, check the numbers. The post-1960 trend is around 1° per century, and shows no sign of increasing. The trend in the early 20th century is at least a third of that, so the man-made element would appear to be only about 0.6° to 0.7° per century. Hardly a cause for panic.
    History may record that the IPCC was not 100% wrong, but massively overplayed their hand, and consequently exaggerated the threat, by refusing to accept that the rapid warming seen in the late 20th century was principally due to natural oscillation.

  8. I know you have stated “(if you believe in HadCRUT4)” in your analysis; my problem is not with your approach but with the data you have used. Using HadCRUT4 adds a degree of legitimacy to that temperature approximation that is not deserved.

    I have yet to see a raw temperature record that highlights the discrepancy we see in HadCRUT4 from around the 1940s to the recent warming episode at the end of the century. I believe Willis showed this in previous posts using the CET temperature record, and Christopher Monckton’s recent post showed the temperature adjustments made in Darwin and other locations.

    The problem, as we all know, with UHI effects and temperature adjustments making the 1940s cooler is that your analysis may well be measuring only these two effects and not warming at all.
    I would suggest using your analysis on some raw rural temperature sets and then seeing if you can detect warming in a temperature signal.

  9. Lots of people have pointed out that the climate variation can be modeled assuming an (approximately) 60-year cycle and a gradual rise. They have been told that this is not politically acceptable.

    This analysis looks to me like it shows TWO curves – a 60-year one and one of about 160 years. This suggests that we should look for a hot peak at about 2040, followed by another ‘little ice age’ bottoming out in 2200.

    But good luck getting anyone in charge to listen to this….. :(

  10. Great analysis on trend maths for simulated thermometers. I can’t wait to see how it looks when you test it against actual data.

  11. Certainly looks like there is an upward trend, but not enough to worry about, let alone spend a trillion dollars on ….

  12. I once did something very similar to this analysis. I made a fit to Hadcrut3 data including identified harmonics. Once you include the 60-year oscillation (AMO?), then climate sensitivity (TCR) for CO2 works out to be 1.4C. The fit shows that the current pause in warming will continue until at least 2025.
    see: http://clivebest.com/blog/?p=2353

  13. For any student of cycles, it is clear that there is a cycle of around 60 years in temperatures. This cycle is found in many other climate-related variables. Also, there is a cycle of around 208 years in solar output and temperature, called either the Suess or de Vries cycle. This cycle was rising for the entire twentieth century but is now in the down phase. Another cycle of 2300 years, called the Hallstatt cycle, is now rising, and will be for hundreds of years yet. These cycles can be clearly seen in temperature and solar proxies over thousands of years.

  14. The non-oscillatory mean trend (red curve) is a quasi-periodic variation too – just a longer cycle. The ~60 year cycle is not the only climatic cycle – there are many longer cycles (~200 years and longer). The cycles are of course quasi-periodic (variable cycle length, just like the solar ~11-year cycle). Both the ~60 year and the longer ~200 year cycle seem to be plateauing/shifting at this point. That means the cooling will be much more dramatic than in the 50s/60s. Man (and CO2) is irrelevant.

  15. Nice treatment of the data which should, but won’t, mitigate some of the quibbling.
    It does, however, need to be kept in perspective.
    Data over 140 years is insufficient to make overly broad claims about natural variability, and it would require a leap of imagination to use this data in and of itself to draw conclusions about cause and effect.

  16. Pat reports on what Kevin “jai mitchell” Trenberth says:

    The public hears “uncertainty” and thinks there is no consensus.”…

    “…warming signs are everywhere in melting Arctic sea ice, melting Greenland, warming oceans, rising sea levels, and more intense storms as well as higher surface temperatures…

    “It is not just a scientific document — it should have policy implications,” Trenberth said. “And, of course, this is why there are well-financed and organized denier campaigns out in force.”…

    Trenberth is clearly nuts, like Michael Mann, assigning conspiratorial motives to un-named, shadowy “denier campaigners” who he believes are out to get him.

    But why won’t Trenberth or Mann name those “deniers” or their organizations? Sunlight is a disinfectant. If there were actually any such “well-financed” organizations “out in force”, then let’s compare their finances with what Trenberth and Mann rake in to spread their climate pseudo-science.

    If it were not for psychological projection, the alarmist crowd would lose one of its biggest arguments.

  17. Nice analysis with clearly stated data source and reasoning. It appears to me that the overall global warming conversation has shifted from “all warming is anthropogenic” and “there is no warming” to an acknowledgement that natural warming is also in play. Thanks.

  18. If you look carefully at the data, you can see that only the middle part shows a nice sine, and the outer edges are cramped together due to the obvious lack of temperature data and correct trending on the outer sides. If you only take the middle part, the cycle is 64 years and aligns perfectly with the solar cycle: solar and sine minimum at 1912; then two solar minima in between (1923 and 1934); then a solar minimum and sine maximum at 1944; then again two solar minima in between (1954 and 1965); then a solar and sine minimum at 1976; etc. I don’t know exactly what this means – I just look at the data. I know however that there are also longer solar cycles, like the ‘De Vries’ cycle, which is just entering another phase according to, for instance, Prof. De Jager in my country.

  19. As the spread between HADCRUT4 and RSS/UAH satellite data increases, it would be interesting to see the same statistical analysis done on RSS and UAH to see how well they compare to the HADCRUT4 statistical analysis.

    I realize that 34 years of satellite data is a little short, but certainly long enough for statistical significance.

    It’s amazing to see how closely the blue oscillating curve fits the PDO warming/cooling cycles. Since the PDO entered its 30-yr cool phase in 2008, it would tend to support future falling temps as indicated on the graph, especially in light of weakening solar cycles which also started from 2008, leading to, in scientific parlance, a super-duper double whammy….

  20. Warning signs of what? Presumably that we live on a planet in space where the weather changes daily – storms, hurricanes, floods, tornadoes, cyclones. If God is looking down – which I doubt – he must worry continuously about just how many half-wits have been created by evolution, and that common sense is such a rare commodity. I see a number of highly gifted people given the innate ability to take mathematics to the nth degree but who still can’t tell sh.t from sugar. The fact is that ordinary folk have contributed billions, maybe trillions, in the pursuit of trying to understand why or how our climate behaves, and have failed dismally; everyone involved is still dancing on the head of a pin with no end in sight. “Uncertainty” equals no consensus, but consensus is not proof; this is just one classic example of more humans trying to justify their existence. And so the IPCC, Judith Curry and everyone else involved continue dancing on the head of a pin whilst Mr Ordinary gets his wealth sequestrated in order to pay for these guys to indulge in their pet hobby, when the person who is making their life possible derives no benefit whatsoever except higher and higher energy bills and more restrictions on his ability to travel – whilst again the lauded few get to travel across the planet 1st class to tout their jaded theories of how, what and where. All I hope and pray is that we get another five years of flat temperatures; then you are all toast and in great need of having to work for a living or get another hobby. Because whatever happens, humanity will only get to tinker at the edges, at great cost to everyone, for no benefit whatsoever – except for those at the heart of this giant scam, like Gore, who will continue to grow his fortune. The EU will spend 165 billion euros every year, ending up at 20 trillion euros – and for what? The total devastation of our environment with useless wind and solar, supposedly in order to save the planet.

  21. Any polynomials used for regression (with exception of level 0 polynomials) run off to either plus infinity or minus infinity beyond the end of data – unlike world temperatures which stayed within relatively narrow limits for several billions of years already. As a smoothing method it is definitely interesting, but with no validity not just beyond either end of the data, but also anywhere near them. And regarding natural processes behind the data, it doesn’t tell us anything either.
    I’d call it yet another regression.

  22. Just one more comment. I wonder about two things:
    1/ how much the resulting red curve differs from simple degree-2 polynomial regression of the data
    2/ what the result of this analysis would be if applied to periods 1910-today or 1970-today

  23. Well I studied under Stephen Hodgart when I was an undergraduate in Electronic Engineering at Surrey University so I can vouch for his ability. He is an exceptional engineer rather than a theoretical scientist so he has to use science to make things that work – which is always good practice I feel.

    However, that said, this is just another example of curve fitting. Yes, the curve fitted is quite credible, as eye-balling the data would suggest, but with so little data available (and that being rather corrupted by the HADCRUT process) it is difficult to say whether there is any truth to it. I suspect that Stephen Hodgart would find it more profitable, given his knowledge of control systems and feedback, to posit the question “What kind of system would the Earth’s climate need to be to have an exponential input but a non-exponential output?”, because it seems to me that the graph above shows, from 1960 onwards, a linear trend superimposed on a sine.

    To me it seems that a continuous sine is unlikely to be related to the exponential increase in CO2, but the linear trend could potentially be related to the exponential increase in CO2 if there is significant negative feedback which just happens to have a similar characteristic to the exponential CO2 increase. Of course, the rate of increase is in any case low, so it should not be of immediate concern to any of us.

  24. And we are continuing the cooling trend since the Minoan high and the mediaeval warm period. No matter how you smooth it, it’s always a start-point/end-point issue. Taken from the Maunder minimum, of course, there is a gradual warming, but it’s inconsistent with CO2.

  25. I liked this analysis. It makes a good case for the presence of one (and likely more?) cycles in the surface temperatures. Data from the next 50–100 years will show whether the “trend” is the CAGW the IPCC prefer, or another 200-year, 0.5K-amplitude cycle, perhaps overlaid with some smaller CO2/AGW trend.
    If we could instead get a much better surface temperature reconstruction going back 1000+ years soon, it would help so much in addressing this issue of “Climate” internal variability. I’m really too old to wait for new data :-(

  26. An excellent post and an honest attempt to quantify carbon’s influence on temperature. The hugely positive influence of increased carbon on plant life and “sustainability” deserves similar attention.

  27. I enjoyed reading the article, and I find that it is just this sort of thing that makes WUWT a wonderful science resource. No wonder Anthony can’t get a dime out of “Big Oil”. :-)

    But I do think such an analysis would be better if I were told it was done on UN-adjusted data. I expect gradual warming, since we have had such since the end of the Little Ice Age, but I remain unconvinced that anyone has taken a strong look at real data.

    Further, can “real data” even be had? Look at the project of Anthony’s group, where they saw that the data sets are based on horribly sited and maintained temperature stations. What good does a statistical analysis of garbage data do?

    And even further, what can we really tell about tenths of a degree of average warming or cooling over such a short span of time – the period for which we have had instrumentation that can measure to a decimal place?

    PS: Please bring back the preview function. Please. Please. …

  28. The difficulty is in finding a best estimate – some curve which is most likely to be closest to the truth. This is an outstanding problem in what the statistical literature identifies as model selection.

    I get nervous when a statistician tells me he can get different results depending on what model he uses.

    I get even more nervous when he picks one that “is closest to the truth”.

  29. A sweet and reasonable-sounding article; however, questions arise:

    What temperature data does he actually use? We read about how actual temperatures from the past have been changed in the record, mostly to cool down earlier periods so that the recent warming looks more obvious. Has the author been able to get around this issue? How do we know?

    From my inspection of the graph, it appears that from starting date to ending date the average temperature has risen from −0.33 to +0.18 or so – about half a degree. How many people can even detect such a small difference? How can the analyst separate it from reader error, urban heat island siting, or other issues so frequently raised on this forum?

    There has been a very significant fluctuation in the number and placement of weather stations in the period of the graph. As sites gradually become surrounded by people, cars, planes, asphalt, and the like, the recorded temperature must surely rise; has that issue been allowed for in making the graph?

    More questions come, but my point should be obvious: How much can we trust this graph, or any such graph; and why should we?

  30. Fascinating. Well done. May I suggest to Mr Hodgart that he apply the same expertise to the Central England Temperature record, which goes back an extra century or two? The result could be very enlightening.

  31. I’d like to see the Data for a minimum of three, better yet five, cycles of the blue curve – oscillating component.

    I’ll wait…

  32. Mr. Hodgart, quite interesting, but I do see a needed test to be performed before I can see it really working correctly in action.

    Seems a good test of your method would be to duplicate the entire 1870–2005 series but mirror all values about the year 2005, bringing the temperature back to where it was in 1870 by 2140, and see how your ‘mean’ method, the red curve, handles the rollover near 2005 (I think it was actually 2009), that being the peak of the entire 270-year span. I’m just curious how well your mean method handles a now-downward century-long segment, if one occurs. We can all see how it handles a general upward trend. Maybe it is just me, but I see that red mean curve taking a very, very long time to ever bend over on the downward slide should that happen in the unknown future, and that might be a flaw in the way you create the mean curve. Try it for us – or, I’d love you to tell us the formulas; I will program it, since most stat packages don’t include such methods.

    I see no problem in the brown curve, the cubic fit and I’m sure it is better than a simple moving average.

  33. “In the author’s view the whole climate debate has been muddled – and continues to be muddled – by not differentiating between this trend in the data (which oscillates) and the mean trend (which does not).”

    My thoughts entirely. For what it is worth I would add one caveat. I suspect that the red curve is distorted by the historic adjustment of old temperatures down and modern temperatures up. I suspect this is the reason for the distortion on the blue oscillation away from a neat sine wave. I’m suspicious but watching closely.

    Importantly, the satellite data series from 1979 is probably still not long enough to make any sensible comment about global warming trends since all we can see is the oscillation overprint.

    On the other hand though, if one isolates the 60-year sinusoidal oscillation from the satellite data, the long term trend, here in red, does appear to have flattened out rather than steepening up – so it is possible that, as someone else noted above, the red line is actually a bigger sinusoidal oscillation of the order of 250 – 300 years? But we will have to wait at least 20 years to be able to tell.

    In the meantime, as I believe it, the UK Met Office, Pachauri, and others (on notricks?) are already conceding that it will be 17 years before there is any warming, which means they have conceded this 60-year oscillation, and therefore, by implication, this underlying red trend.

  34. My first reactions: The article is an amazingly clear explanation of extremely difficult material.

    A tenth order polynomial? Doesn’t give me warm fuzzy feelings. Intuitively, I’m inclined to have huge doubts about any solution to anything that involves that many variables. Maybe this is an exception. I’ll have to think about whether this is an exception. Probably for years, not hours.

    It is encouraging to see a (roughly) sixty year climate cycle component in there. There’s enough evidence for something of that sort that I think one should have serious doubts about any solution to climate prediction that doesn’t exhibit it. e.g. current climate models.

    I like the idea of describing first, explaining later. Historic example. Kepler described planetary motion. (And, in passing, he was reportedly not happy with his description — planets follow elliptical paths. He’d expected more elegant circles). Newton came along later and explained the motion — gravity.

  35. Are there examples where adjustments to raw data lead to reductions in temperatures/temperature anomalies? The trend shown by the Hadcrut data may be true but its magnitude may be amplified by corrupted data as noted by our mad lord and others (eg weather stations located next to air conditioner units etc).

  36. While searching for Mills 2006 I ran across Mills 2009 (“Modelling Current Temperature Trends”, Journal of Data Science), which indeed does what Tony McGough was asking for – it analyses the CET record:

    “The CET series, however, is almost 200 years longer, and examining Figure 3
    reveals that there are (at least) two earlier periods that display similar behaviour
    to the current warming trend. Figure 4 therefore compares the low-pass trend
    fitted to the complete record with similar trends fitted to the record ending in
    1736 and 1834 respectively (the other trend fits are very similar). Prior to 1736,
    trend temperatures increased by 1.5 °C in the 46 years since 1690, while the 24
    years from 1810 saw trend temperatures increase by 0.75 °C. By comparison, the
    current warming period has seen trend temperatures increase by almost 1 °C in
    forty years. Both the earlier trends have ‘current’ slopes, estimated to be 0.047
    and 0.045 respectively, in excess of the 2005 slope of 0.040 °C, and both periods
    contain temperature extremes that are comparable to those reached in the last
    decade. As can be seen from Figure 4, both trends quickly reversed themselves
    after these two dates, which were, of course, before serious industrialisation had
    occurred.
    Thus the recent warming trend in the CET series is by no means unprecedented.
    While we are not suggesting that the current warming trend will necessarily
    be quickly reversed, this statistical exercise reveals that examining temperature
    records over a longer time frame may offer a different perspective on global
    warming than that which is commonly expressed. Indeed, examining much longer
    records of temperature reconstructions from proxy data reveals a very different
    picture of climate change than just focusing on the last 150 years or so of temperature
    observations, with several historical epochs experiencing temperatures
    at least as warm as those being encountered today: see, for example, Mills (2004,
    2007b) for trend modelling of long-run temperature reconstructions. At the very
    least, proponents of continuing global warming and climate change would perhaps
    be wise not to make the recent warming trend in recorded temperatures a
    central plank in their argument.”

    Delightful!

  37. The main point for everyone reading this (without getting too immersed in science detail) is that we still see JUST a whole ONE DEGREE CENTIGRADE warming between around 1910 and 2008.

    Gosh . . . . ONE DEGREE in 98 years . . . . we’re doomed. How my body possibly survives an unbelievable 30 degree temperature difference in one year between our winter and summer, I just don’t know. And this morning, within two minutes, I walked from my conservatory upstairs to the study and the difference in temperature was about 10 degrees C. I’m still alive, the water level in my dog’s bowl hasn’t risen and the ice in the fridge hasn’t melted. However, to help save the planet from this unprecedented warming catastrophe, I am obliged to give the UK government a shed load of vehicle road tax calculated purely on the amount of CO2 spewing from my car’s exhaust. And all because they think I won’t cope with a ONE DEGREE CELSIUS temperature difference during my lifetime.

    Please will someone tell the emperor that he hasn’t got any clothes on.

    PS Nice article, good analysis with clearly stated data source and reasoning. Thank you.

  38. No, no, no, no…we are really in an ice age offset by human activity. When the ice age is over the temp will rise 10C…. ;-) Long live Hansen, Gore, Mann and all the others that make a living on taxes.

  39. I don’t see how the natural variability of the temperature data can be called “noise.” If it’s up one year and down the next, how is that noise if it’s actually the true data? The human mind tries to see patterns everywhere it looks. It’s a valuable survival strategy, allowing humans to identify recurring dangers and benefits in their environment. It’s also a problem in that we tend to find patterns where none exist.

    I think referring to the natural variability of data that has already been strenuously massaged to get to that one temperature point for the entire Earth as “noise” is a false concept. Rather than trying to get a smooth curve from it, we should step back and say, “Damn, that number moves around a LOT, doesn’t it?”

  40. Very well done Mr. M.S.Hodgart, indeed. But wouldn’t the next analysis be to back out the “corrections & adjustments” made by Hadley & NASA, et al., for the last 30+ years & re-run your analysis? Regards, nice work, simple, subtle, sophisticated, as in K.I.S.S.

  41. If you torture the data enough, it will confess (source). Of course a confession obtained by torture should never be accepted as valid.

    IMHO, this is just too much processing on too little data. I hope Prof. Briggs weighs in here.

  42. “The strong oscillating component – the blue curve – is seen to be contributing more than half of the rate of increase when global warming was at a peak in the early 1990s.”

    And it’s even worse since the ~20 yrs component is filtered out.

  43. This fits in with my thoughts about CO2. Yes it does have an effect but much smaller than the alarmists claim.

    Clearly we had a natural warming trend before the huge increase in CO2 after 1945. Since then these emissions seem to have ameliorated the following cooling spell and exacerbated the following, expected, warming trend from the middle 70s.

    Due to the logarithmic nature of increasing atmospheric CO2 we can expect the forcing effect on the natural trends to lessen as time progresses. So there is no catastrophic warming anywhere in prospect and we can expect a slight cooling for the next decade or so.
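The logarithmic point can be made concrete with the standard simplified forcing expression ΔF = 5.35·ln(C/C0) W/m² (Myhre et al., 1998); this formula is an addition for illustration here, not something the comment itself cites:

```python
import math

def co2_forcing(c, c0=280.0):
    """Simplified radiative forcing (W/m^2) for CO2 concentration c (ppm)
    relative to a pre-industrial baseline c0, per Myhre et al. (1998)."""
    return 5.35 * math.log(c / c0)

# Each doubling contributes the same forcing, so equal ppm increments
# matter less and less as the concentration grows:
print(round(co2_forcing(560), 2))                        # one doubling: ~3.71 W/m^2
print(round(co2_forcing(1120) - co2_forcing(560), 2))    # second doubling: also ~3.71
print(round(co2_forcing(420) - co2_forcing(400), 2))     # a 20 ppm step today: ~0.26
```

Whether that forcing translates into a large or small temperature response is a separate question of sensitivity, which is exactly what this thread disputes.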

    I think the warming this century will be much less than 0.8C, more like 0.3C.

    In addition to the CO2 emissions, solar output was very high during the latter half of the 20th century and is likely to average much less during this century. Also we will have two cooling spells and one warming spell this century, the exact opposite of the 20th.

    Also, I don’t particularly trust the changes made in Hadcrut 4. Like all databases in the hands of confirmed alarmists, any changes always cool the past and warm the present. The chances that that is just coincidental are vanishingly small.

    Alan

  44. HADCRUT is adjusted data and not valid in my view (UHI etc). CET in my view is the only valid data (rural since 1850). The only recent valid data is radiosonde RSS and Satellite AMSU. Neither CET nor RSS nor AMSU show any significant warming since records began. This person is again analyzing selective data.

  45. “differentiating between this trend in the data (which oscillates) and the mean trend (which does not).”

    The mean trend does not oscillate over this snippet of time, but it’s too soon to claim that it doesn’t oscillate.

  46. Once again an analysis that fails to find any CO2 signal in a modern temperature/time graph. How many more such analyses do we need to have before the Royal Society and the American Physical Society accept that there is no CO2 signal, and ALL temperature variations are natural in origin?

  47. I note with interest Lance Wallace’s comments on the CET. I have a book by meteorologist William James Burroughs (Climate Change – a Multidisciplinary Approach, Cambridge University Press 2001) in which (p107) he comments on the CET.
    He says: ‘The CET series confirms the exceptionally low temperatures of the 1690s and in particular the cold late springs of this decade. Equally striking is the sudden warming from the 1690s to the 1730s. In less than forty years the conditions went from the depths of the Little Ice Age to something comparable to the warmest decades of the twentieth century. This balmy period came to a sudden halt with the extreme cold of 1740 and a return to colder conditions, especially in the winter half of the year. Various other series for other European cities confirm this conclusion.’
    Later he continues: ‘A more striking feature is the general evidence of interdecadal variability. So, the poor summers of the 1810s are in contrast to the hot ones of the late 1770s and early 1780s, and around 1800. The same interdecadal variability shows up in more recent data. The 1880s and 1890s were marked by more frequent cold winters, while both the 1880s and 1910s had more than their fair share of cool wet summers. A similar variable story emerges from other parts of the world.’

  48. Is anyone else even slightly sceptical about the sharp reversals in trend shown by the brown line at both the start and end of the data set? These do not appear to my eye to be even remotely justified by the raw data. It looks as though the author’s method has somehow constrained the end-points.

  49. Nothing unprecedented, nothing alarming, and no CO2 forcing signal either. Climate is simply doing what it has always done. Alarmists, who should be relieved, will instead hate it as it threatens their Climatist ideology.

  50. Whatever combination of cycles and trends is used, the curve fitting is always backward looking. The bigger question is how long we have to wait (statistically speaking) before we can be confident in the forecasting value of any predicted curve, however constructed; 10 years (probably not), 20 years (maybe) or 50 years? Given how little we seem to understand about all the various cycles it may well be another whole academic generation before we really understand.

  51. What I get from the article is “an average rise of 0.8 deg by the end of this century.” And that’s taking the HADCRUT records as reliable, despite their dubious provenance.

    Compare that with the IPCC prediction of circa 3 degrees. M.S.Hodgart’s Reality Check indicates that the value of S is still hugely overstated.

  52. On a second look, it appears that the author must have constrained his oscillating component to have zero magnitude at the start and end points of the data. This seems an arbitrary and unwarranted constraint. And it looks to me as though it is only with this constraint that the end of the chart can be made to point so dramatically downwards.

  53. Trenberth really sounds like a religious fanatic (prophet) here. “The previous three have had signature messages,” Trenberth said. “Maybe this one is that warming signs are everywhere in melting Arctic sea ice, melting Greenland, warming oceans, rising sea levels, and more intense storms as well as higher surface temperatures.” Then he adds that there are well-funded “deniers” out there.
    Sad.

  54. Some comments.

    The question you address at the end is to what extent the current “pause” is a temporary cyclic phenomenon and to what extent it is a true change in the long term trend. In other words you are interested in the decomposition of the brown curve into the sum of the blue and the red, particularly in the last 15 years in the time of the “pause”. Your conclusion seems to be that a warming trend continues unabated and the pause is purely due to short term variation.

    So yes – global warming looks to have stopped (if you believe in HadCRUT4) when one defines global surface temperature in terms of that trend – the brown curve. In fact it has more than stopped – it looks very much to have gone into reverse.

    But no – average global warming continues ever upwards (still believing in HadCRUT4) when one defines an average global surface temperature in terms of that mean trend – the red curve.

    The problem is that the decomposition into trend and oscillation you are using to draw this conclusion becomes very uncertain near the edges of the curve. In particular I note that you seem to have “pinned” the long term trend to the smoothed temperature at each end – observe how the brown and red curves touch there. This pinning creates an artifact in the decomposition into trend and variation near the edges of the graph. You can also see the effect of this artifact by looking at the variation curve – see how the blue curve appears to have abruptly steep portions at both ends.

    I generally like what you have done. It is certainly a better way to draw a curve through the graph than brutally sticking a straight line through it all. But I think you are stretching when you try to use this analysis to try to attribute the recent pause to short term variation and make the claim that a long term warming trend continues unabated underneath. The decomposition into trend and variation on which you base this conclusion appears to suffer from end effects which render it unreliable precisely near the edges of the curve, a region which includes the recent pause which you are trying to draw conclusions about.
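The end-effect worry in this comment is easy to demonstrate numerically: a centred smoother runs out of data at the series boundary, so the estimate there depends entirely on what boundary constraint you impose. A minimal sketch using a plain 5-point moving average (a simple stand-in, not the author's actual loess/spline machinery):

```python
import numpy as np

# Synthetic series: linear trend plus an oscillation (no noise, to isolate the edge effect)
t = np.arange(40, dtype=float)
y = 0.02 * t + 0.3 * np.sin(2 * np.pi * t / 12)

def smooth(y, w, mode):
    """Centred moving average of width w; `mode` chooses how the ends are padded."""
    half = w // 2
    padded = np.pad(y, half, mode=mode)
    return np.convolve(padded, np.ones(w) / w, mode="valid")

# Two equally defensible padding choices agree in the interior but
# disagree at the very last point, i.e. where the "pause" would be judged.
last_reflect = smooth(y, 5, "reflect")[-1]
last_edge = smooth(y, 5, "edge")[-1]
print(last_reflect, last_edge)  # the two edge estimates differ
```

The interior of the smoothed curve is identical under both choices; only the boundary region is an artifact of the constraint, which is the commenter's point about drawing conclusions from the ends.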

  55. Trend analysis is fine for what it is. What this doesn’t do (as far as I can tell) is link specific physical processes to each of the two smoothing processes. Without linking physical processes to the results I don’t believe any projection using this method would be valid or accurate. In other words, it’s fine for reviewing the past, but tells us nothing about the future.

  56. M.S.Hodgart
    “The other popular method is to fit a selection of straight lines (least square estimate) to a selected span of years. The notorious difficulty here is the quite different impression one gets depending on the choice of start and stop years.”

    Does not your method suffer from the same notorious difficulty? Granted, that is when the Hadley Center begins its record, but 1880 is a poor choice for a start date. The globe has been cooling steadily for the last 8,000 years since the Holocene Optimum, punctuated by warm periods of decreasing magnitude.

  57. rogerknights says:
    September 24, 2013 at 1:05 am

    > This is consistent with Akasofu’s interpretation.

    Not really, he settles on something like a linear component for the LIA recovery, with my interpretation that slope should flatten as we get further away from the LIA. This shows a curve with an ever-increasing slope (one that does not match the Mauna Loa CO2 curve).

    “10th degree for the polynomial”

    Yeah, that implies extending the polynomial a few years in either direction will send it zooming downwards. Looks like some of that is evident on the graph, e.g. the last half cycle is only about 20 years instead of the 30 or so that Akasofu fits. Like Roy Spencer’s caveat when he included a polynomial fit on the UAH temperature record, “for amusement purposes only.”

  58. I agree that it is a well-reasoned look at recent temperature records, but the missing issue is the anecdotal evidence that exists to suggest that the planet has had major periods where it has been as warm, if not warmer, than it is today. That then begs the question: since this couldn’t have been attributable to human activity, why can’t what caused these warm periods have caused the warmth we have seen recently? And since they appear to have no answer for this – especially as CO2 concentration increases while temperatures don’t – this was why Mann worked so hard to try to erase the MWP and any other early warming, to take this inconvenience out of the equation.

  59. Tell me, M.S.Hodgart, would you consider offering such a paper up to a peer-reviewed Climate publication? And if not, why not?

  60. I like fitting the data to Fourier-type harmonic cycles. Even the “noise” can be treated similarly. Considering the harmonics as independent variables in multiple-regression analysis, the statistical significance of each can be determined as well as the shape of the cycles. For example, for an annual cycle first multiply the years by 2*Pi so that x = 2*Pi*(years). The factors in the regression are then: the primary cycle cos(x) and sin(x), first harmonic cos(2x) and sin(2x), second harmonic cos(3x) and sin(3x), etc. Include as many harmonics as are statistically significant. For cycles longer than a year, estimate the primary cycle length by counting the peaks within a time period and divide x by the cycle length. Vary the cycle length to determine maximum R^2.
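The harmonic-regression recipe above can be sketched in a few lines. This is a toy example on a synthetic signal, not the HadCRUT data; the amplitudes (2 and 0.5) are invented for illustration:

```python
import numpy as np

# Synthetic series: a primary cycle (amplitude 2) plus a first harmonic (amplitude 0.5)
t = np.linspace(0.0, 10.0, 500)   # time in "years"
x = 2.0 * np.pi * t               # scale so one year = one full cycle of cos(x), sin(x)
y = 2.0 * np.cos(x) + 0.5 * np.sin(2 * x)

# Design matrix: intercept, primary cycle, first harmonic. Add further
# cos(kx)/sin(kx) columns, keeping only harmonics that test as significant.
X = np.column_stack([np.ones_like(t),
                     np.cos(x), np.sin(x),
                     np.cos(2 * x), np.sin(2 * x)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(beta, 3))  # recovers [0, 2, 0, 0, 0.5]
```

The cos/sin coefficient pair for each harmonic gives its amplitude, sqrt(a² + b²), and phase, atan2(b, a); for an unknown cycle length one would rescale x and rerun the fit, as the comment describes.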

  61. Think many here also see a real problem with the circa +0.7°C adjustments made to the temperature records, generally always downward in the far past, upward in the more recent decades as others have mentioned above, and you know what that does to the graph shown above? It’s really a good chart, I’ll use it in the future for this exact example.

    If those adjustments published by NOAA and GISS on their sites are applied to the blue curve you end up with the brown curve. If you remove the adjustments from the brown curve, just roughly eyeballing it, you get back the blue curve. Most of the rural cities in my state show no warming at all since 1890-1895 when the records began, but this is but one area, and maybe it has some special properties that protect it, or shield it, from this assumed increase in accumulated global energy (therefore a raising of temperature); but in physics I learned that is not possible over a century of time even in a system as large as the entire Earth. So I agree with most questioning these adjustments themselves. Something is amiss. Maybe a small portion of the 0.7°C is proper, but it sure seems the majority is UHI, which should have adjusted temperatures downward starting when cities began and scaled larger as the cities matured, rather than just ignoring the 3% of land occupied by cities. That kind of adjustment I could see for the urban sites.

    Now the cities here all show an increase as they grew from empty grassland into huge metropolises and that type of warming is naturally expected, but that warming is not global, it only affects a very small few percent of area.

  62. Certainly we can describe the data in this way, and yes, this is optimal in one sense. It certainly emphasises the periodic component (which is obviously not purely sinusoidal), which is interesting because there seems to be considerable debate over whether this exists.

    I agree that smoothing per se isn’t particularly useful.

    The only problem is that there is no mechanistic component to the model, and it therefore cannot be used to extrapolate temperature – or at least it can, but this is rather difficult to justify. At the moment a cubic polynomial is the best fit to the mean temperature, but in future this may not be the case.

  63. Trenberth said. “Maybe this one is that warming signs are everywhere in melting Arctic sea ice, melting Greenland, warming oceans, rising sea levels, and more intense storms as well as higher surface temperatures.
    ===========
    Unfortunately Trenberth demonstrates selection bias in the quote, which is what separates an activist from a scientist. The world is warming at the north pole, but not at the equator or the south pole. This is not the signature of CO2 warming. The warming of the north pole reduces the temperature differential between the pole and the equator, which is what drives storms. As a result storm intensity is decreasing in the northern hemisphere.

    This is generally good news for people, most of whom live in the northern hemisphere. However, it is not good news for folks like Trenberth that rely upon increasingly scary predictions to separate taxpayers from their hard earned money.

  64. In a series of posts at

    http://climatesense-norpag.blogspot.com

    I have estimated the timing and extent of the possible coming cooling simply by considering the 60 and 1000 year solar cycles and looking also at the current state of solar activity as a guide. The key question is well illustrated in the graph in this piece. Is the recent peak a peak in both the 60 year and 1000 year solar cycles – the blue and the red? Looking at the state of the sun it seems more likely than not that it is. Here are the conclusions of the latest post on my site.
    “To summarize- Using the 60 and 1000 year quasi repetitive patterns in conjunction with the solar data leads straightforwardly to the following reasonable predictions for Global SSTs
    1 Continued modest cooling until a more significant temperature drop at about 2016-17
    2 Possible unusual cold snap 2021-22
    3 Built in cooling trend until at least 2024
    4 Temperature Hadsst3 moving average anomaly 2035 – 0.15
    5 Temperature Hadsst3 moving average anomaly 2100 – 0.5
    6 General Conclusion – by 2100 all the 20th century temperature rise will have been reversed.
    7 By 2650 earth could possibly be back to the depths of the Little Ice Age.
    8 The effect of increasing CO2 emissions will be minor but beneficial – they may slightly ameliorate the forecast cooling, and more CO2 would help maintain crop yields.
    9 Warning!!
    The Solar Cycles 2,3,4 correlation with cycles 21,22,23 would suggest that a Dalton minimum could be imminent. The Livingston and Penn Solar data indicate that a faster drop to the Maunder Minimum Little Ice Age temperatures might even be on the horizon. If either of these actually occur there would be a much more rapid and economically disruptive cooling than that forecast above which may turn out to be a best case scenario.

    How confident should one be in these above predictions? The pattern method doesn’t lend itself easily to statistical measures. However, statistical calculations only provide an apparent rigor for the uninitiated, and in relation to the IPCC climate models they are entirely misleading because they make no allowance for the structural uncertainties in the model set-up. This is where scientific judgment comes in – some people are better at pattern recognition and meaningful correlation than others. A past record of successful forecasting such as indicated above is a useful but not infallible measure. In this case I am reasonably sure – say 65/35 – for about 20 years ahead. Beyond that, certainty drops rapidly. I am sure, however, that it will prove closer to reality than anything put out by the IPCC, Met Office or the NASA group. In any case this is a Bayesian-type forecast, in that it can easily be amended on an ongoing basis as the Temperature and Solar data accumulate.

  65. Would still love to see a genuinely RAW data set.

    My understanding is that ALL the major “official” data sets (such as the one referenced here) include “corrections” — that, generally, tend to “increase” the apparent slope of the data. On the other hand, they leave out “corrections” that might decrease the slope (e.g. UHI effect).

    There may have been very little net change since roughly the 1940s and today — in the genuine, unmanipulated global temperature data.

  66. There are some who want to believe that the long term trend MUST be the result of CO2, while the short term oscillation is the natural component.
    It’s possible that at least some of the long term trend is the result of UHI increases; then there has also been the growing solar activity during the 20th century.

  67. I have to say that the left wing Press is ratcheting up something chronic in the UK. Censorship of dissenting blog comments is rife, and articles are being written by those totally incognisant of what science is, how it is done and what constitutes scientific consensus.

  68. Nature moves in cycles. The reason for this is remarkably simple. Linear trends lead to extinction. We the observer would not be here if Nature was linear. Historically humans have learned to successfully predict Nature by first identifying the length of the cycles, long before we understood the mechanism behind the cycles.

    The failing of modern climate science and the IPCC lies in their insistence that climate is a linear function of the forcings. This of course leads to scary conclusions because all linear trends eventually lead to future extinction. And of course it leads to failed predictions because Nature is not linear.

    Once you accept that Nature is not linear but rather cyclic, then the scary predictions go away. A change in the forcings is not like pushing an object in space, where there is no friction or gravity to affect the outcome. A change in the forcings is more like changing how and when you push on a child’s swing. You can change the amplitude somewhat, but it is very difficult to change the period of the underlying cycles.
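The swing analogy can be made quantitative with a toy damped, driven oscillator: by linearity, doubling the push exactly doubles the steady-state amplitude, while the period stays fixed by the system. A rough sketch (simple Euler integration; the damping and frequency values are arbitrary, for illustration only):

```python
import math

def steady_amplitude(force, omega=1.0, zeta=0.05, dt=0.001, t_end=200.0):
    """Integrate x'' + 2*zeta*omega*x' + omega**2 * x = force*cos(omega*t)
    and return the peak displacement over the last few cycles (steady state)."""
    x, v, t = 0.0, 0.0, 0.0
    peak = 0.0
    while t < t_end:
        a = force * math.cos(omega * t) - 2 * zeta * omega * v - omega**2 * x
        v += a * dt
        x += v * dt
        t += dt
        if t > t_end - 20.0:          # only record once transients have died away
            peak = max(peak, abs(x))
    return peak

a1 = steady_amplitude(1.0)
a2 = steady_amplitude(2.0)
print(a2 / a1)  # ~2.0: the response amplitude scales with the forcing; the period does not
```

Changing `force` rescales the whole trajectory without moving the zero crossings, which is the swing-pushing point: forcing modulates amplitude, not the timing of the underlying cycle.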

  69. Selection of the x and y axes gives an impression of great increase. Forget the actual numbers; look at the slope.

  70. Lots of math, but the picture tells the story. The UNIPCC took a short period when the blue and red curves were both positive and said it would continue that way forever. Well it couldn’t and it didn’t.

  71. The 10th order polynomial does give concern, because it is easily over-fit to match the noise rather than the signal. This could make it reasonably useless for predictions.
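That concern is the textbook Runge phenomenon: a degree-10 polynomial through equally spaced points tracks the interior well but oscillates violently near the endpoints, which is exactly where any recent "pause" sits. A quick illustration on Runge's classic test function (not the temperature data):

```python
import numpy as np

# Runge's function: perfectly smooth, yet a degree-10 polynomial through 11
# equally spaced samples of it misbehaves badly near the interval ends.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)

nodes = np.linspace(-1.0, 1.0, 11)
coeffs = np.polyfit(nodes, f(nodes), 10)   # 11 points, degree 10: exact interpolation

err_center = abs(np.polyval(coeffs, 0.05) - f(0.05))
err_edge = abs(np.polyval(coeffs, 0.95) - f(0.95))
print(err_center, err_edge)  # the edge error dwarfs the central error
```

A least-squares fit to noisy data, as in the article, is better behaved than pure interpolation, but the qualitative lesson stands: high-degree polynomial fits are least trustworthy at the ends of the record.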

  72. 1. The earth’s climate is known to be cyclical, witness the geologic ages. For the most part the temperature oscillates, with a warming trend followed by a flat trend followed by a cooling trend. Fitting any type of continuous curve through long term temperature does not reflect the earth’s climate variations, and thus is not an acceptable mathematical modeling technique. Breaking the time line down into warming, flat and cooling periods and then fitting mathematical curves for each period offers a better way to model the earth’s known cycles.

    2. There is no sign of carbon dioxide from burning hydrocarbons being the root cause of the temperature rises or drops in Figure 1.

  73. All of these types of analysis that use only a portion of the natural rise from the Little Ice Age still lack proper context. Even this short time period shows a strong natural component.

    Yet, if the scale goes back to the end of the last ice age, a completely clear picture emerges that we are well within normal variability. We are on the 6th temperature uptick since the end of the last ice age, and each one is weaker than the last. The overall temperature curve remains downward, toward the return of this epoch’s normal temps, which is not this temporary interglacial we are enjoying.

    That hockey stick graph still seems to be the meme even though it’s been completely refuted. People in analyses like this seem to forget that the oft-used phrase “we are experiencing the hottest temperatures in the modern era, where we were actually recording temperatures” is a completely misleading statement that implies we are experiencing exceptional temperatures. The Al Gore movie clip from yesterday hammered this misleading theme.

    Posts with graphs like this encourage the continuation of this misleading impression. We are not in an exceptional period of temperatures, either in the rate of rise or in the extent of the rise.

    Looking at real temperature reconstructions using ice cores, sediment studies, etc. gives a very clear context to the whimper of an effect, if any, we as humans are having on temperatures.

    If only we did have an effect then we could stop the ultimate return of our shores to the continental shelves.

    I wish that, with all posts like this, a good ice core temperature proxy graph were either linked to or included in the article, as a constant reminder to casual readers who are not up on the science, as well as to the well informed, never to lose sight of the actual context in which all these discussions should be held.

  74. Perhaps the red curve is just another, longer-period, larger-amplitude oscillating cycle overlaid upon the blue.

  75. Here is my impartial take on climate modeling and global warming. The PDO was first described in 1997 and the AMO in 1994. Reducing these significant multi-decadal cycles to smooth and stable cycles for mathematical handling is a mistake, and yet another example of overreach and overconfidence given limited data and understanding of long cycles and the stability of those cycles.

  76. “In the author’s view the whole climate debate has been muddled – and continues to be muddled – by not differentiating between this trend in the data (which oscillates) and the mean trend (which does not).”

    I would say the climate debate has been muddled by trying to tease out a signal in a system we don’t seem to understand very well. If we understood the processes better, I would be more inclined to be interested in what various trends over various time frames might be telling us.

    Given how noisy and chaotic the data is and how many processes are at work in the climate system, it blows my mind that anyone would use a temperature data trendline to prove any proposition in climate science. Statistical methods are elegant but in a world where you can’t hold any of the other variables constant, and where some of the variables are unknown, it all seems like bafflegab to me!

  77. some curve which is most likely to be closest to the truth
    Are you looking for
    A) a mathematical formula to best describe a curve? Or
    B) functions and parameters that represent physical processes and fit an observed outcome?
    These are very different things and of very different utility. It is clear, the author is doing (A).

    envisages some smooth slow-varying curve … ignoring these rapid changes, but responsive [to what?] over a longer term to general movement [of what?]

    Empirical search finds for a span of 26 years for the loess and a 10th degree for the polynomial. No other combination of loess span and polynomial degree gives such a close agreement. The condition is unique and therefore automatically solves the problem of model selection

    (Skeptic meter pegs off scale high)
    First off, it is an empirical search. Over what ranges? How many trials and combinations? How is the agreement measured? For a finite set of trials, there will be at least one maximum, but there may be several combinations that come close. To focus on the one maximum without even reporting the runners-up turns me completely against the author.

    He is letting the data solve the problem of model selection. Therefore, with different data, you could choose different models. That does not get you any closer to understanding. It is just describing.

    The mean trend is a cubic spline interpolation which deviates least from a straight line
    Completely disconnected from reality and any physical process. Infinite end points. Only the year matters. This is a trend for trend sake and brings us no closer to any understanding of physical processes.

  78. Dave in Canmore – you are right: the patterns are better selected by eyeballing the actual data rather than slavishly following mathematical curves; see Figs 5–9 in the link provided above.

  79. Does anyone know the accuracy of any of the TEMP measurements such as Hadcrut4, etc.? Is the data meaningful, measuring a so-called average temp? Lastly, what does a confidence level mean? Does it mean only statistical confidence? I may be asking the wrong questions because I’m not very familiar with statistics, but would you build anything based on how solid these temp measurements are?

  80. The warming in recent years was caused by natural causes, not for more than half, but for 100 percent. The part of the warming that can’t be explained by natural causes is the result of data tampering. http://iceagenow.info/2013/09/warmists-fiddling-data-years-astrophysicist/
    As Piers Corbyn explains: Without data fraud the World is COOLING NOT WARMING.
    Andries Rosema and his team studied satellite data and concluded: The earth is steadily cooling off since 1982: http://climategate.nl/2013/07/18/meteosat-satelliet-waarnemingen-1982-2006-aarde-koelt-aanhoudend-af/ This is a Dutch language website, to see the report click on the red hyperlink “download hier”. Global warming is the biggest hoax in our present time, like Alan Caruba has said: http://iceagenow.info/2013/09/global-warming-biggest-lie-exposed/

  81. This is a nice analysis.

    Two comments are very profound, and hit on my thought. The two comments are:

    NewEnglandDevil (@NewEnglandDevil): “Trend analysis is fine for what it is. What this doesn’t do (as far as I can tell) is link specific physical processes to each of the two smoothing processes. Without linking physical processes to the results I don’t believe any projection using this method would be valid or accurate. In other words, it’s fine for reviewing the past, but tells us nothing about the future.”

    RC Saumarez: “The only problem is that there is no mechanistic component to the model and therefore cannot be used to extrapolate temperature – or at least it can but this is rather difficult to justify. At the moment a cubic polynomial is the best fit to the mean temperature but in future this may not be the case.”

    The profound issue is this: the natural world does what it will do, and we are fortunate when mathematical models are able to provide some sort of guide to what nature will do. The models are still just models, and are inferior. I agree with the commenters who note here and have noted elsewhere on WUWT that any model of the climate needs to have some governor function or functions at some organizing level higher than that of a sine wave cycling from peak to trough and back again. Higher-level processes govern how these cycles run. As noted, there are plenty of step functions as well as sine functions and trends.

    Global temp is not governed or regulated by a mathematical function, and so there is no underlying genuine mathematical function to discover.

    If you show me a spirograph picture, or a fractal, there is an underlying mathematical model that accounts for all data seen.

    We need to avoid the temptation to buy into the idea that there is a mathematical model underlying global temp.

  82. “Paul Homewood says:
    September 24, 2013 at 2:49 am
    The difficulty is finding for a best estimate – some curve which is most likely to be closest to the truth. There is an outstanding problem in what the statistical literature identifies as model selection.

    I get nervous when a statistician tells me he can get different results depending on what model he uses.”

    #################

    that’s the nature of the beast.

    we can never observe the data generating process. we can only observe its output.
    and from any given set of data there are innumerable ways to fit it.

    you have two choices

    A) try to infer the data generating process from the data. make assumptions and fit curves.
    they are countless
    B) construct a model of the data generation process ( a theory ) using physical laws and physical entities
    and attempt to hindcast and forecast.

    Neither method has any epistemic priority. Both can work. For “understanding” method B is preferred because it has the chance of tying into other known physics in a mathematical and ontological manner. Sometimes A will outperform B. Sometimes we have no way of even beginning B and all we can do is fit curves. Sometimes we can use A even though we know it can’t be correct. The current post models temperature in a way that is unphysical, but it may work over the next few decades. We can be fairly certain that it will fail if we go forward or backward over great periods of time.

  83. There are many good comments above.

    I have no issues with the math, but serious concerns about a significant warming bias in the data.

    richardbriscoe says: September 24, 2013 at 12:39 am

    http://wattsupwiththat.com/2013/09/24/an-impartial-look-at-global-warming/#comment-1425095

    “The post 1960 trend is around 1° per century, and shows no sign of increasing. The trend in the early 20th century is at least a third of that, so the man-made element would appear to be only about 0.6° to 0.7° per century. Hardly a cause for panic.”

    About a decade ago I estimated a probable warming bias in Hadcrut3 of about 0.07C per decade, based on satellite temperatures. This warming bias probably extends back before the satellites were launched, to about 1940.

    Let us assume that Hadcrut4 and Hadcrut3 exhibit a similar warming bias, about 0.07C per decade or 0.7C per century.

    Adjusting richardbriscoe’s sentence for this warming bias: “…the man-made element would appear to be only about 0.0° C per century. Hardly a cause for panic.”

    We wrote with confidence more than a decade ago:

    “Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”

    http://www.apegga.org/Members/Publications/peggs/WEB11_02/kyoto_pt.htm

    I suggest that it was warmer in the USA during the 1930’s than it is today, and quite possibly the entire world was warmer then too.

    There is strong evidence that the Medieval Warming Period was also warmer than today.

    Repeating from 2002, with even greater confidence:

    “The alleged global warming crisis does not exist.”

  84. The only thing that may be correct in the analysis is the oscillation, because the warming trend from the mini ice age is in the data too. The next question is: when has the planet recovered from the mini ice age? Is it the time when the planet is as warm as Medieval Times? If the temperature significantly increases beyond the temperatures of Medieval Times, then do we have Global Warming?

  85. @Nigel Harris 5:03am
    These [sharp reversals at end points] do not appear to my eye to be even remotely justified by the raw data.

    Good catch Nigel. Look at the Blue Curve. The center looks nicely sinusoidal, but the end inflections get contorted and tortured.

    We must be looking at side lobes of the 10th order polynomial of gigantic scale. It is telling that Hodgart doesn’t include the 11 parameters of that 10th order polynomial fit. After all, it is “unique.”

  86. richardbriscoe says:
    September 24, 2013 at 12:39 am

    There’s a third component besides natural & human, ie a second human input, the fudging of data by “climate scientists”. Taking out that effect & the natural leaves very little room for net general human activity, which produces both cooling & warming. The net effect of human GHGs is vanishingly small, well within margin of error & no cause for concern. To the negligible extent that there might be man-made warming, it’s a good thing, as of course too is more CO2 in the air.

  87. ‘– but let us give the Met Office the benefit of the doubt –’

    Fatal mistake.

    We have no objective reason to give the Met Office, or the CRU with which it collaborated in producing HadCRUT4, the benefit of any doubts. On the contrary we have every reason to be suspicious of these government-sponsored agencies that are openly committed to CAGW-advocacy, especially since the Climategate revelations.

    Trusting in the unverifiable word of these official data-manufacturers is like trusting the tailors who made the King’s New Clothes. It is a fundamentally unscientific act that constitutes our first basic step into the make-believe reality which they have concocted and which they totally control. From there on in it is just an endless, labyrinthine trip through their pre-calculated fantasy-world like Donald Duck’s wanderings in Mathmagicland.

  88. This analysis is along the right line but ends up showing too much warming by the data choices. A better source of data would be hadsst3 which goes back to 1850. It shows the warming is closer to 0.6C over 160+ years and matches what has been seen in the satellite data more closely. With biased adjustments, siting problems and UHI there is no good global data that includes land data.

    If done in this manner the future warming looks more like 0.05C/decade or 0.4C by the year 2100.

  89. @James Schrumpf 4:00 am
    I don’t see how the natural variability of the temperature data can be called “noise.” ….
    I think referring to the natural variability of data that has already been strenuously massaged to get to that one temperature point for the entire Earth as “noise” is a false concept. Rather than trying to get a smooth curve from it, we should step back and say, “Damn, that number moves around a LOT, doesn’t it?”

    I agree with you in spirit, James. What is noise to some is signal to others. I’ve been scientifically amused by seismic interpretation over the past 35 years. What was noise and out-of-plane artifact is now stratigraphic information. “The rocks really do look like that!”

    But noise is real. And so is error masquerading as noise.
    The path from the raw temperatures to HadCRUT4 passes through a lot of hands, and each one adds error, seen as noise. To get to the anomaly, the “keepers” (1) of the data do not subtract, but ADD corrections for what we THINK are KNOWN phenomena to better analyze an unknown residual. Adding corrections ADDS ERROR (noise) equal to the difference (what we THINK − what it really is). So what we are all studying is:

    Anomaly = (raw data = “local signal” + “local contamination bias and error” + recording error) + corrections for what is known + error in corrections + bias in corrections.

    This is all bad enough if the bias = 0. But the bias isn’t zero. Too many hands are on the data and in the till at the same time.

    Note 1: “keepers” of the data – a term used with deliberate sarcastic irony.
    See: An Open Letter to Dr. Phil Jones of the UEA CRU , Willis Eschenbach, WUWT 11/27/2011.
    (another nomination to a Watts’ Best collection as well as “Climate Fail Files”)

  91. The second derivative of the mean trend appears to be going negative after 1960, which is exactly the opposite of what would be expected from more CO2 under CAGW theory.
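
    The sign check described here is easy to sketch numerically. A toy illustration on synthetic data (not the HadCRUT4 trend itself): build a smooth series whose curvature turns negative after 1960, and recover that with a discrete second derivative.

```python
import numpy as np

# Synthetic trend (hypothetical numbers): linear warming, with a quadratic
# term switched on after 1960 so the curvature goes negative there.
years = np.arange(1900, 2013)
trend = 0.005 * (years - 1900) - 0.00004 * (years - 1960) ** 2 * (years > 1960)

# Two applications of np.gradient give a discrete second derivative.
d2 = np.gradient(np.gradient(trend, years), years)

# Mean curvature well after 1960 versus well before it.
after = d2[years > 1965].mean()
before = d2[years < 1955].mean()
print(after < before)  # True: curvature is negative in the later period
```

    On real smoothed data the same check works, but second differences amplify noise badly, so they should only be applied to an already-smoothed trend.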

  92. While I suppose it’s fun to break down temperature data into a sum of a variety of different curves – the curves have to at least correlate to ~something~ in reality to be of any value. For example, the oscillating component has a period of 65 years so what feature in our climate has a 65 year period?

    If there isn’t one then this is nothing more than an exercise in advanced numerology. Just as you said: “There are an unlimited number of ways of drawing some curve through the data.” – there are an unlimited number of CURVES that will add up to that data.

  93. “Empirical search finds for a span of 26 years for the loess and a 10th degree for the polynomial. No other combination of loess span and polynomial degree gives such a close agreement. The condition is unique and therefore automatically solves the problem of model selection.

    In the author’s view this joint estimate is really the best that can be done in finding for the trend of global surface temperature. For various reasons the loess estimate should be prioritised.”

    I love curve-fitting, and of course I respect loess smoothing. This goes into the collection of models that will be stringently tested by the upcoming decades. As I wrote for Vaughan Pratt: if you know the main trend (he had a particular model for the trend), you can estimate the noise; if you know the noise, you can estimate the main trend. If both have to be estimated from the same time series data (with or without “optimal” smoothing), then you have “curve fitting”.

    Can the components of the model, trend and residual oscillation, be related (linearly or polynomially) to measurements of known physical processes?
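
    Whether a loess with a 26-year span and a 10th-degree polynomial can really agree that closely is easy to explore in outline. A rough, self-contained sketch, with synthetic data standing in for HadCRUT4 and a hand-rolled tricube local-linear smoother standing in for the author’s loess:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1870, 2013, dtype=float)
# Synthetic stand-in series: slow warming plus a ~65-year oscillation and noise.
signal = 0.004 * (years - 1870) + 0.1 * np.sin(2 * np.pi * (years - 1870) / 65)
temps = signal + rng.normal(0, 0.08, years.size)

def loess(x, y, span):
    """Minimal tricube-weighted local linear smoother (loess, degree 1)."""
    out = np.empty_like(y)
    for i, xi in enumerate(x):
        d = np.abs(x - xi)
        h = np.sort(d)[span - 1]          # bandwidth: distance to span-th point
        w = np.clip(1.0 - (d / h) ** 3, 0.0, None) ** 3
        slope, intercept = np.polyfit(x - xi, y, 1, w=np.sqrt(w))
        out[i] = intercept                # local line evaluated at xi
    return out

smooth = loess(years, temps, span=26)

# Polynomial.fit rescales the years internally, avoiding the severe
# ill-conditioning of a raw 10th-degree fit on values near 2000.
poly = np.polynomial.Polynomial.fit(years, temps, 10)(years)

rms_disagreement = np.sqrt(np.mean((smooth - poly) ** 2))
print(rms_disagreement)
```

    On data like this the two estimates track each other closely in the interior and disagree most near the end points – which is exactly where the commenters above are directing their suspicion.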

  94. You keep saying “noise” when what you mean is DATA. Raw temperatures are not noise. Noise would be the non-temperature input to the data: human error, equipment inaccuracy and imprecision, changes in measurement technique. All of which clearly exist. However, treating the actual data as noise to be vanished away by statistical magic, in favor of a mythical “global temperature anomaly”, destroys any trace of credibility you may have had to start with. Averages are a unique mathematical contrivance: they actually contain less information the more information you cram into them. Furthermore, creating a graph with a scale of 12 hundredths of a degree against 142 years, in order to produce an alarmingly sharp red upward trend instead of the very slow gentle rise actually indicated, is hardly impartial. Report real temperatures with 100 degrees on the vertical for 100 years on the horizontal and your basis for alarm disappears.

  95. With respect to curve fitting, I get an R^2 of 0.8785 regressing these data on a 250 year cycle with statistically significant harmonics at 125 years and 62.5 years. Interestingly, projecting into the future, we are in a peak now and can expect a gradual decline until around 2037, followed by a gradual rise for around 30 years, then a more rapid decline. This is not a CAGW prediction.
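
    For anyone who wants to try a fit of this form, a minimal sketch of such a harmonic regression (on a synthetic series – the commenter’s data, cycle amplitudes and R² are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1870, 2013, dtype=float)
# Synthetic series: a long cycle, one harmonic, and noise (made-up amplitudes).
y = (0.2 * np.sin(2 * np.pi * t / 250.0)
     + 0.1 * np.cos(2 * np.pi * t / 62.5)
     + rng.normal(0, 0.05, t.size))

# Design matrix: intercept plus a sine/cosine pair per period, so each
# cycle's phase is estimated along with its amplitude.
periods = [250.0, 125.0, 62.5]
X = np.column_stack(
    [np.ones_like(t)]
    + [f(2 * np.pi * t / p) for p in periods for f in (np.sin, np.cos)]
)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta
r2 = 1.0 - np.sum((y - fit) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2)
```

    Note the caveat raised elsewhere in the thread: a 250-year fundamental estimated from 143 years of data is constrained by less than one full cycle, so the fit can be excellent in-sample and still be worthless for projection.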

  96. “But no – average global warming continues ever upwards (still believing in HadCRUT4) when one defines an average global surface temperature in terms of that mean trend – the red curve.”

    The “average” is an artifact of the statistics. One cannot assume that there is a meaningful “average” temperature for Earth. If the problem were formulated in terms of joules, units of energy, then an average would make sense. There are nicely formulated laws for conservation of energy in all branches of physics. There are no laws for conservation of temperature in any branch of climate science. For example, we might all agree that ENSO can redistribute temperatures across the Pacific but that the short term effects of these redistributions must sum to zero over the long run because conservation of energy requires it. Yet all the warming from 1979 to 1998 could have been the result of redistributions of temperature from myriad local phenomena such as ENSO. To imagine that temperatures can be precisely modeled as energy can is to impose on nature a uniformity that might not exist.

  97. I’d have preferred Hadcrut 3. “In fact it has more than stopped – it looks very much to have gone into reverse.” A projection to 2030 or so would have been interesting, particularly given the imminent release of AR5.

    • Dr. Page,
      I didn’t select the primary wavelength based on an expected physical forcing. The trial and error regression on this particular set of data to maximize R^2 gives me those results. You would need a much longer set of data to identify a 1000 year cycle. These observed cycles are most likely riding on progressively longer cycles, and projecting outside 1.5 times the observed range is risky business. However, I agree with you that these cycles are likely to be associated with solar cycles.

  98. Anyone familiar with Thom, Zeeman, Ilya Prigogine will know of “Butterfly,” “Cusp,” “Fold,” and “Swallowtail” catastrophes – abrupt transitions, “breaking points” wholly unanticipated by any measure of central tendency (average, mean, median, mode et al.). “Catastrophes” are not mere math/statistical artifacts, but fundamental natural processes. Citing “persistence fallacies” to enshrine long-term norms as permanent is dangerously misleading.

  99. Curve fitting is only curve fitting, however good it might seem to be – “with four parameters I can fit an elephant, with five I can make his trunk wiggle”.

    Science is making falsifiable hypotheses: could you please indicate what the hypothesis is here, and in what circumstances you consider it will be nullified?

  100. TimC I’m not sure what the hypothesis is here. The hypothesis in my cooling forecast linked above is simple and clear, i.e. that the current cooling peak at about 2003 is a peak in both the 60 year and 1000 year quasi cycles.
    It will be seriously in question if there is not about 0.15–0.2 degrees of cooling by 2018–20.

  101. I want to see the coefficients of that 10th degree polynomial. I suspect that they will either decrease rapidly in magnitude, or lead to a lot of nearly equal terms cancelling each other out. In either case, there is no need (and, as Steven Mosher points out, no physical motivation) to go to such a high degree.

    However, I must admit it’s fun to see an analysis that includes the statement that:
    “… From 1959 to 2012 this average rate looks to have increased to 0.090 ± 0.034 deg/decade.”
    Hmmm. Less than half of the 0.2 deg/decade that the IPCC is claiming.
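
    Without the author’s actual parameters this can’t be settled, but the cancellation suspicion is easy to illustrate on a synthetic stand-in series: fit in a rescaled window (as numpy’s Polynomial.fit does), convert back to plain powers of the year, and compare the sizes of the individual terms.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1870, 2013, dtype=float)
# Synthetic stand-in data (not the actual HadCRUT4 fit).
temps = (0.004 * (years - 1870)
         + 0.1 * np.sin(2 * np.pi * (years - 1870) / 65)
         + rng.normal(0, 0.08, years.size))

P = np.polynomial.Polynomial.fit(years, temps, 10)   # well-conditioned fit
Pc = P.convert()                                     # plain powers of x

# Individual terms c_k * year**k evaluated at year 2000: enormous and of
# opposite signs, cancelling down to a residual many orders of magnitude
# smaller than the largest term.
terms = Pc.coef * 2000.0 ** np.arange(Pc.coef.size)
print(np.abs(terms).max(), terms.sum())
```

    This is why quoting the eleven raw coefficients would be nearly uninterpretable: in the unscaled basis they encode huge, delicately cancelling terms, and only in a centred or rescaled basis do the magnitudes say anything about how much each degree contributes.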

  102. The sharpness of the downtrend is what is of most interest to me. I have kept similar tabs on HadCRUT3v but using an 11 year binomial smoothing, and the same sharp downtrend is present over the past decade (the peak being 2003 to 2005 depending on whether you look at the global data, the SH or the NH). I also run the three data sets (GL, NH and SH) together plus the NH-SH difference, as I think that gives a clearer overall picture. What is at issue here is getting the clearest possible picture of what is happening. I cannot think of a dumber way to illustrate same than using a linear regression, but why would I be surprised when the CAGW proponents include such narcissistic nongs as Pachauri, Hansen, Nuccitelli, Mann, Cook and Lewandowski. It is not science we are looking at but agitprop and marketing when nitwits like these are involved.

  103. As others have pointed out, this analysis is useless. Curve-fitting is meaningless without any real-world mechanisms. The fact that there are a couple of visible cycles does not allow any forward projection, because the mechanisms behind the cycles are not known.

    Consider this : Suppose that the analysis was of sunspots instead of global temperature, and that only a couple of cycles were available. How meaningful would the analysis be? The answer is – absolutely useless. Let’s face it, in reality we’ve had how many cycles already – 23? – and how well can we forecast the amplitude and duration of just the next cycle, let alone the next several cycles? We simply can’t do it, our skill level is absolute zero. The wildly wrong forecasts of cycle 24 prove that. Before cycle 24 started, some ‘experts’ expected it to be a big one. It turned out to be the smallest for yonks, much smaller than all forecasts.

    So this temperature analysis is absolutely meaningless and a waste of time in all respects bar one: it does suggest that there are cyclical influences and that alone is sufficient to cast serious doubt on the totally cycle-free vision of the IPCC.

  104. Good post. The author writes: “The raw data cannot of course be treated as absolutely true – but let us give the Met Office the benefit of the doubt – this is hopefully their best effort so far.”

    The data has been fraudulently corrupted. Early temperatures have been depressed in order to create a spurious warming trend. Handwritten records from 1902 show 0.7C. Today’s electronic record shows that reading as -0.2C. You might as well interpret Tony Soprano’s tax return.

    http://endisnighnot.blogspot.co.uk/2013/08/the-past-is-getting-colder.html

  105. At the beginning of the post it said that this would be based on the ‘raw data’. I was hopeful that the author had somehow backed out a significant portion of the historical adjustments, but then I see it’s HadCRUT4, which was rather disappointing. I’ll only make passing mention of the fact that this is not ‘data’ at all, but a statistical construct.

    Leaving that aside, and also leaving aside the issues with fitting a 10th order polynomial to such ‘data’ (lots of degrees of freedom…) what is becoming apparent to me is that there is a cyclical trend that can be linked to physical processes such as the PDO/AMO, as well as a long-term linear trend.

    Given this, the way we can identify an anthropogenic signature in the temperature data is by attempting to identify the cyclical component as best we can, and then compare the long-term linear slope to the near-term linear slope on datasets with long enough temperature histories. If the alarmists are correct, then there should be a clear difference in linear trends over time and in fact we should be seeing an accelerating linear trend in the nearest-term data.

    If the trend is less alarmist and more lukewarm then obviously this acceleration in trend would not appear, and we do not have a justification for spending trillions on Al Gore’s pet projects.

    CET is an obvious candidate; are there other datasets which are of a long enough duration for such an analysis to be undertaken? Using single-site data may also help in identifying the cyclical component; if we find similar periodicity in data we separately analyse from different sites we can be more confident that the periodicity is genuine.

  106. Dr Norman Page says:
    September 24, 2013 at 7:26 am
    Wayne @3.18 My cooling forecast at 6.33 essentially does what you suggest using the mirror SSTs as a guide – I’m a great believer in Ockham’s razor

    Dr. Page, I was out of place all day but I’ll read your comment above and your page at climatesense this evening. Sounds like you put some weight on what the sun is up to, and I agree; I have always felt climatologists in general are stuck in a self-important mindset, thinking the sun can be ignored and that the Earth controls its own climate. I think not. Will get back to you in a few hours after I get a better handle on your views.

  107. M.S.Hodgart

    Good work.

    What is your correlation coefficient between your Smoothed GMST model and the Annual GMST?

  108. Terry Oldburg: In my view, Matthew R. Marler errs in loving curve-fitting. While Marler loves a curve, science provides us with no reason to believe that nature loves it.

    Nature’s love is irrelevant. There is no reason to believe that nature loves Newton’s laws, but they are accurate descriptions over a wide range of cases. There’s no reason to believe that nature loves Kepler’s laws either, but Newton found them informative. Curve-fitting is a species of honest labor, like gold prospecting and inventing, that sometimes produces good results. But each fitted curve has to be stringently tested, and I don’t love any curve that hasn’t been tested.

  109. Apologies to the author for the tenor of my last comment (“…this analysis is useless. Curve-fitting is meaningless without any real-world mechanisms.”). In writing it I had missed the statement “The graph makes no predictions and should be used only to see what has been happening historically.“. So I should have phrased it as agreement with the author, not criticism.

  110. Dr Norman Page said: “It will be seriously in question if there is not about 0.15–0.2 degrees of cooling by 2018–20” – but said in opening “the graph makes no predictions and should be used only to see what has been happening historically”. Also, this 2018–20 short-term projection really deals only with the blue “oscillating component” element, not the continuing red trend in surface temperatures running at ~0.6C per century from 1960 to 2010. But does the 3rd order term in the cubic fit cause the surface temperature also to trend downwards – i.e. predict the start of the next glaciation – or does it now predict ever-increasing rises in surface temperatures, so we had all better redirect our efforts to cost-effective amelioration?

    Wouldn’t it be fair (and objective) to say that this is really just curve fitting on historical events – perhaps of interest in itself, though one can always look up actual data. However, it does not truly involve application of the scientific method…?

  111. Dear M.S. Hodgart,

    Thank you for your work.

    As you can see, this is a forum where critical opinion, sometime too critical, can exist alongside more civilized commentary.

    A suggestion was made to re-run your analysis using Central England Temperatures (CET). I hope you will do so.

    Here is one data source – I am uncertain if it is the best one, and I cannot comment on the existence or absence of a warming bias in CET’s.

    The CET website says “The mean daily data series begins in 1772 and the mean monthly data in 1659. … Since 1974 the data have been adjusted to allow for urban warming.”

    http://www.metoffice.gov.uk/hadobs/hadcet/

    Some concern has been expressed about the increased curvature of the end-points of your “oscillating component”. This could be a fair comment – “end point effects” are not uncommon in this sort of analysis. However, I fail to see this as a serious flaw – it is only necessary to note it.

    This page of “record breakers” is interesting:

    http://www.metoffice.gov.uk/hadobs/hadcet/cet_record_breakers.html

    Regards, Allan

  112. Something seemed odd to me yesterday when I looked at the graph above. I figured it out. The HadCRUT4 anomaly for 1998 is near 0.7, yet the axis of the graph above only goes to 0.5 or so. Where is the HadCRUT4 data set that created the graph in this post? You can see the data here: http://woodfortrees.org/plot/hadcrut4gl but it is different and I don’t know why.

    John M Reynolds

  113. Sorry. My bad. I am used to seeing monthly graphs. The average for all 1998 months is 0.53. — John M Reynolds

  114. TimC I’m sorry – you have misunderstood my earlier comment. I said
    “TimC I’m not sure what the hypothesis is here. The hypothesis in my cooling forecast linked above is simple and clear, i.e. that the current cooling peak at about 2003 is a peak in both the 60 year and 1000 year quasi cycles.
    It will be seriously in question if there is not about 0.15–0.2 degrees of cooling by 2018–20.”

    In the first sentence “here” refers to this Hodgart post . The rest refers to my own cooling forecast at

    http://climatesense-norpag.blogspot.com

    I’m saying that the idea that the recent peak includes both the 1000 year and 60 year peaks would be in question if there is not the quoted amount of cooling by 2018-20.
    I hope that clarifies things.

  115. Dr Page: thank you, all is now clear! And thanks for the link to your own blog – but (to get this absolutely correct) am I then right to infer it is common ground that there is actually no hypothesis, falsifiable or otherwise, in this article – which I think would have to take the form of some future forecast such as your own?

    Absent that, this would seem to be just a “wiggles and loops” analysis exercise: interesting but valid only as from 1870 to 2012 – the blink of an eye in astronomical terms of course.

  116. Theo Goodwin says:
    September 24, 2013 at 11:45 am
    There are no laws for conservation of temperature in any branch of climate science.
    ==========
    Indeed. Temperature must be combined with humidity to determine the energy content of air. As you reduce the humidity you must increase the temperature for the energy to remain constant.

    Perhaps the “unexplained” increase in late 20th century temperature has simply been a response to decreasing humidity over the oceans?

    During 1976–2004, global changes in surface RH are small (within 0.6% for absolute values), although decreasing trends of −0.11% to −0.22% per decade for global oceans are statistically significant.

    http://journals.ametsoc.org/doi/abs/10.1175/JCLI3816.1
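
    The point about temperature and humidity can be made concrete with a rough moist-enthalpy estimate, h ≈ cp·T + Lv·q. The constants below are round textbook values, and the comparison ignores several smaller terms:

```python
# Round textbook constants (approximate values).
CP = 1005.0    # J/(kg K), specific heat of dry air
LV = 2.5e6     # J/kg, latent heat of vaporisation of water

def moist_enthalpy(t_kelvin, q):
    """Rough energy content per kg of moist air: sensible plus latent heat."""
    return CP * t_kelvin + LV * q

h_humid = moist_enthalpy(288.0, 0.010)     # 15 C air carrying 10 g/kg vapour
dT = LV * 0.001 / CP                       # warming that offsets losing 1 g/kg
h_dry = moist_enthalpy(288.0 + dT, 0.009)  # drier but warmer: same energy
print(dT)
```

    On these numbers, losing one gram of water vapour per kilogram of air has to be offset by roughly 2.5 K of warming for the energy content to stay constant – which is why a temperature trend on its own says little about the energy budget.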

  117. In reply to:

    PMHinSC says:
    September 24, 2013 at 1:17 am

    …Data over 140 years is insufficient to make over broad claims about natural variability and it would require a leap of imagination to use this data in and of itself to draw conclusions about cause and effect.
    William:
    Yes; however, there are sets of other observations that logically support the assertion that the majority of the warming in the last 150 years was due to solar magnetic cycle changes rather than the increase in atmospheric CO2, and that the planet is about to significantly cool due to the current solar magnetic cycle change.

    The process to solve physical problems is analogous to fitting together the pieces of a model puzzle or solving a crime investigation. The correct solution explains all observations. There is a physical explanation for what has happened in the past and what will happen in the future. We all understand and agree that it would be ineffective and immoral for a criminal investigator to start an investigation by picking one suspect and then hiding or ignoring evidence that would exonerate the suspect such as an alibi.

    The warmists have thrown away or ignored the observations and analysis (evidence) that does not support their assertion that 100% of the warming in the last 50 years was due to the increase in atmospheric CO2. For example (excerpt of the observations/analysis and reasoning to solve the problem):

    1. There is observational evidence of 23 cycles of warming and cooling (nine of the cycles occurred in the current interglacial period, the Holocene). The 23 cycles of warming and cooling correlate with solar magnetic cycle changes and have a period of roughly 1500 years. These cyclic warming and cooling periods are called Dansgaard-Oeschger (D-O) cycles named after the two researchers that discovered the cycle in the paleo data.
    Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper.

    2. The latitudinal regions of the planet that warmed in the last 150 years are the same regions of the planet that warmed in the past during a D-O cycle.

    3. Detailed analysis of the paleo record shows atmospheric CO2 levels have increased and decreased with no change in planetary temperature. Planetary temperature does not correlate with atmospheric CO2 changes. CO2 has an alibi.

    4. The pattern of warming in the last 150 years cannot be explained by increases in atmospheric CO2. As CO2 is more or less evenly distributed in the atmosphere, the potential for warming due to the increase in atmospheric CO2 is more or less the same for all latitudes on the planet. As the magnitude of the CO2 forcing is proportional to both the level of CO2 in the atmosphere and the amount of long wave radiation that is emitted at the latitude in question, the most warming due to the increase in atmospheric CO2 should occur in the tropics, as that is the region of the planet that emitted the most long wave radiation to space prior to the increase in atmospheric CO2. That is not observed. The majority of the warming in the last 150 years has been in high latitudes of the Northern hemisphere, with the most warming occurring on the Greenland ice sheet – the same pattern of warming that occurred in past D-O cycles.

    http://arxiv.org/ftp/arxiv/papers/0809/0809.0581.pdf

    Limits on CO2 Climate Forcing from Recent Temperature Data of Earth

    5. And so on.

  118. William Astley says:
    September 25, 2013 at 8:51 am

    ‘The warmists have thrown away or ignored the observations and analysis (evidence) that does not support their assertion that 100% of the warming in the last 50 years was due to the increase in atmospheric CO2.’

    This is quite true in my experience. Another item of contra-evidence to their assertion, which they have chosen to overlook or ignore, is provided by HadCRUT4 itself. It follows from basic greenhouse theory that an enhanced greenhouse effect should amplify not only the global mean surface temperature but also, at the same rate, any variations in that temperature arising from non-greenhouse sources. Therefore, if the surface warming indicated by the trend in HadCRUT4 were the result of an increased net greenhouse effect from the rise in atmospheric CO2 since 1850, we should expect to see the magnitudes of surface temperature variations increase in the same proportion over that period too.

    In fact the opposite turns out to be the case. The linear trend in HadCRUT4 temperature variations, as measured by standard deviations over relatively short time periods (e.g. 1, 5 and 10 years), is actually slightly negative rather than slightly positive as expected!
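    The variability check described above can be sketched as follows. This is not the commenter's actual code, and the anomaly series here is a synthetic stand-in, not HadCRUT4:

```python
import numpy as np

# Sketch of the variability check: compute standard deviations over
# consecutive non-overlapping windows of annual anomalies, then fit a
# straight line to those standard deviations. A negative slope means
# year-to-year variability is shrinking over time.
rng = np.random.default_rng(0)
years = np.arange(1850, 2013)
# Synthetic stand-in: a slow linear trend plus constant-variance noise.
anomalies = 0.005 * (years - 1850) + rng.normal(0.0, 0.1, size=years.size)

def variability_trend(values, window):
    """Slope of a least-squares line through windowed standard deviations."""
    n = values.size // window
    stds = np.array([values[i * window:(i + 1) * window].std(ddof=1)
                     for i in range(n)])
    centres = np.arange(n) * window + window / 2.0
    slope, _ = np.polyfit(centres, stds, 1)
    return slope

for w in (5, 10):
    print(f"window {w} yr: slope = {variability_trend(anomalies, w):+.6f}")
```

    Applied to the real HadCRUT4 annual series, a negative slope would correspond to the shrinking variability the comment reports.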

    These results suggest to me that either HadCRUT4 is seriously inaccurate, or else the surface warming since 1850 was not due to an enhanced greenhouse effect after all. But since we have no means of checking the veracity of HadCRUT4, or the magnitude of the net greenhouse effect, independently of the warmists who are in control of the science, we seem to be in a position of irreducible uncertainty over the matter, and the CAGW-advocates remain able to dismiss all such inconvenient counter-evidence with a wave of the hand.

  119. MONTHLY MEAN CENTRAL ENGLAND TEMPERATURE (Degrees C)

    http://www.metoffice.gov.uk/hadobs/hadcet/data/download.html

    1659-1973 MANLEY (Q.J.R.METEOROL.SOC., 1974)
    1974 ONWARDS: PARKER ET AL. (INT.J.CLIM., 1992)
    PARKER AND HORTON (INT.J.CLIM., 2005)

    Brief description of the data

    These daily and monthly temperatures are representative of a roughly triangular area of the United Kingdom enclosed by Lancashire, London and Bristol. The monthly series, which begins in 1659, is the longest available instrumental record of temperature in the world. The daily series begins in 1772. Manley (1953,1974) compiled most of the monthly series, covering 1659 to 1973. These data were updated to 1991 by Parker et al (1992), when they calculated the daily series. Both series are now kept up to date by the Climate Data Monitoring section of the Hadley Centre, Met Office. Since 1974 the data have been adjusted to allow for urban warming.

    AVERAGE to 1950:   20-year 9.64    30-year 9.55
    AVERAGE to 2010:   20-year 10.14   30-year 9.97
    Difference:        +0.50 C         +0.43 C

    AVERAGE to 1945:   20-year 9.53    30-year 9.42
    AVERAGE to 2005:   20-year 10.06   30-year 9.87
    Difference:        +0.53 C         +0.45 C

    My comments:

    This database suggests a net warming of 0.4 to 0.5 C between the 20- or 30-year periods ending in 1945/1950 and the corresponding periods ending in 2005/2010.
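    The trailing averages quoted above can be reproduced from the monthly CET file with a few lines of code. This sketch uses made-up placeholder annual means rather than the real download, and `trailing_average` is a hypothetical helper written for illustration, not Met Office code:

```python
# `trailing_average` computes the mean of the `span` years ending at
# `end_year` inclusive, given a dict of annual means keyed by year.
def trailing_average(annual_means, end_year, span):
    window = [annual_means[y] for y in range(end_year - span + 1, end_year + 1)]
    return sum(window) / span

# Placeholder annual mean CET values (deg C); the real series comes from
# the Met Office file linked above.
annual = {year: 9.5 for year in range(1921, 1951)}         # flat 9.5 C, made up
annual.update({year: 10.0 for year in range(1981, 2011)})  # flat 10.0 C, made up

early = trailing_average(annual, 1950, 30)
late = trailing_average(annual, 2010, 30)
print(f"30-year average to 1950: {early:.2f} C")
print(f"30-year average to 2010: {late:.2f} C")
print(f"Difference: {late - early:+.2f} C")
```

    With the real annual means in `annual`, the same calls give the 20- and 30-year figures tabulated in the comment.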

    Note that “Since 1974 the data have been adjusted to allow for urban warming.” The degree of adjustment may be worthy of investigation. It may be inadequate.

  120. Allan MacRae says:
    September 26, 2013 at 12:34 pm

    If their UHI effect adjustments are like Hansen’s, then they make the adjusted temperatures warmer rather than cooler.

    M.S.Hodgart gives us an example of natural science in action: observing what exists, and creating a tool or method to help us see what is going on. The hypothesis put forward is the author's novel method of 'joint estimation', and the test is whether it increases our understanding. In this case the method clearly unlocks new meaning from an already well-studied data series, and should therefore be of real interest to climate science as a whole.

    I echo others in wanting to see the results from this new method (… that the author is apparently willing to supply on request) when it is applied to the other time-series data sets that are relevant to climate science.

  122. Question re AMO:

    PDO 1950 to 2013

    PDO solidly in Cool Phase since 1998 and/or 2004

    AMO 1856 to 2009

    http://upload.wikimedia.org/wikipedia/commons/1/1b/Amo_timeseries_1856-present.svg

    AMO still in Warm Phase but declining rapidly

    Monthly AMO updates at

    http://www.esrl.noaa.gov/psd/data/timeseries/AMO/

    NOAA: “Since the mid-1990s we have been in a warm phase.”

    http://www.aoml.noaa.gov/phod/amo_faq.php#faq_2

    Does anyone have an estimate of when the AMO will change to its Cool Phase? It looks imminent according to some data.

    Regards, Allan

Comments are closed.