To Tell The Truth: Will the Real Global Average Temperature Trend Please Rise? Part 3


A guest post by Basil Copeland

Again, I want to thank Anthony for the kind invitation to guest blog these musings about what is going on with global average temperature metrics. It has been a most interesting, and personally rewarding, experience. My original aim was quite modest, but I fear that the passion that many feel for this issue prevented them from seeing that. So in this final part of the series, I want to try to make my aim clearer, and to show how a lively exchange of ideas can lead to new insights.

The IPCC has made the earth’s global average temperature trend a central focus in the debate over anthropogenic global warming.  In the AR4 report of Working Group 1, they state:

The range (due to different data sets) of global surface warming since 1979 is 0.16°C to 0.18°C per decade compared to 0.12°C to 0.19°C per decade for MSU estimates of tropospheric temperatures. (Chapter 3, Page 237)

Similar, if not the same, estimates are reported in Table 3.3, Page 61, of the Synthesis and Assessment Product 1.1 of the U.S. Climate Change Science Program (accessible here: http://www.climatescience.gov/Library/sap/sap1-1/finalreport/sap1-1-final-all.pdf ).  Presumably, these estimates provide some kind of basis for the IPCC SRES scenarios that assume 0.2C per decade warming over the next two decades. 

[Figure 1: part3figure1-520.png]

From what I can tell from the descriptions of the sources for these estimates, they are based on straight-line linear regressions that include corrections for serial correlation. In other words, regressions that look something like those shown in Figure 1. The trend at the top is from Appendix A, Page 130, Figure 1, of the U.S. Climate Change Science Program report just cited. The second is taken from the RSS website (http://www.remss.com/data/msu/graphics/plots/sc_Rss_compare_TS_channel_tlt.png, accessed on March 15, 2008). Both show a warming trend of 0.17C/decade since 1979.
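For readers who want to see what such a fit involves in practice, here is a minimal sketch in Python. It is not the exact procedure used by the CCSP report or RSS; it simply fits an ordinary least squares trend to a monthly anomaly series and reports Newey-West (HAC) standard errors as one common way of allowing for serial correlation. The file name hadcrut_monthly.csv and the column names are placeholders.

```python
# Minimal sketch: straight-line trend on monthly anomalies with
# serial-correlation-robust (Newey-West/HAC) standard errors.
# Assumes a CSV with placeholder columns "date" and "anomaly".
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hadcrut_monthly.csv", parse_dates=["date"])
df = df[df["date"] >= "1979-01-01"].reset_index(drop=True)

# Time measured in decades so the slope reads directly in C per decade.
t = df.index.values / 120.0
X = sm.add_constant(t)

fit = sm.OLS(df["anomaly"].values, X).fit(
    cov_type="HAC", cov_kwds={"maxlags": 12})  # 12-month lag window

print(f"trend: {fit.params[1]:.3f} C/decade (HAC s.e. {fit.bse[1]:.3f})")
```

Any of the four anomaly series could be substituted; the point of the parameterization is only that the fitted slope is already a decadal rate.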

Are these “good” estimates of the historical trend since 1979? Forgive me, but I refuse to accept them as authoritative ex cathedra, nor will any true scientist expect me to. Bear in mind, I’m taking the data for what it’s worth, and am overlooking any questions about the reliability of the surface record, such as what Anthony is looking into (or Steve McIntyre at www.climateaudit.org), or the kind of urbanization and land use effects reported by Ross McKitrick and Patrick Michaels. My concern is solely with the technical procedures used to estimate the “trends” that are commonly cited as evidence of global warming. Bottom line? There are problems with the way those trends are computed that overstate the degree of global warming since 1979 by 16.3% to 41.3% (based on the results presented below).

In Part II I attempted a demonstration of this using what might be considered a rather blunt, brute-force approach — a test of whether there was a significant “structural break” (as we describe it in my field of study) after 2001, along with a test of whether linear trends are distorted by the effect of the 1998 El Nino. Nothing in the comments that followed the posting of Part II fundamentally undermined the validity of my conclusions. The chief concerns seemed to be that my decision to test for a structural break (or “change point”) at the end of 2001 was arbitrary (it wasn’t), or whether one could say anything meaningful about a cyclical system like climate from linear trend lines. Well, with respect to the latter, that horse is out of the barn: we’re being told — by supposed authorities — that there has been X degrees of global warming per decade since 1979 on the basis of linear trend lines. If they can use linear regression to claim that global warming is proceeding apace, then please excuse me for doing the same in questioning them.

Still, the comments were provocative, and encouraged me to dig further into my toolbox of econometric techniques to see if I might be able to come up with something that would alleviate some of the concerns commenters had about what I did.  So it occurred to me that I might treat the weather like a “business cycle” and model it with Hodrick-Prescott smoothing.  (If you want an explanation of what that is, look here: http://en.wikipedia.org/wiki/Hodrick-Prescott_filter ).  The results are presented, for the four global average temperature metrics we are using, in Figures 2 through 5.
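For those who want to experiment with the technique, statsmodels includes an implementation of the Hodrick-Prescott filter. The sketch below is only illustrative and is not a reproduction of the figures: the smoothing parameter lamb is a judgment call (129600 is a value often suggested for monthly data), and the data file and column names are again placeholders.

```python
# Illustrative Hodrick-Prescott decomposition of a monthly anomaly series
# into a smooth "trend" and a "cycle" (the deviations from that trend).
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

df = pd.read_csv("hadcrut_monthly.csv", parse_dates=["date"])
series = df.set_index("date")["anomaly"]

# lamb controls how stiff the trend is; 129600 is a conventional choice
# for monthly data, but other values are defensible.
cycle, trend = hpfilter(series, lamb=129600)

print(trend.head())
print("net change along the smoothed trend:",
      round(trend.iloc[-1] - trend.iloc[0], 3))
```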

[Figure 2: part3figure2-520.png]

[Figure 3: part3figure3-520.png]

[Figure 4: part3figure4-520.png]

[Figure 5: part3figure5-520.png]

Those who think we should let the data tell us where the “change points” are should find this approach more appealing, as should those who believe we should be modeling the data with non-linear techniques. But in the end, the point is the same: the “real trend” over the 29 years we are looking at is substantially less than what we get from straight-line regression. With the exception of GISS, Hodrick-Prescott smoothing results in even lower estimates of the degree of global warming over the past 29 years than the method used in Part II. As shown in Table 1 below, compared to the two methods I’ve employed, the straight-line regression method relied upon by the IPCC and the U.S. Climate Change Science Program overstates global warming since 1979 by anywhere from 16.3% (GISS) to 41.3% (HadCRUT).

[Table 1: part3table1.png]
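As a rough guide to how a comparison like Table 1 can be assembled, the sketch below contrasts the decadal rate implied by a straight OLS slope with the rate implied by the net change along an HP-smoothed trend. It reuses the placeholder data of the earlier sketches and is not the exact calculation behind the table.

```python
# Rough comparison of two ways to state a decadal warming rate:
# (1) the slope of a straight-line OLS fit, and
# (2) the net change along the HP-smoothed trend, converted to a
#     per-decade figure.  Placeholder data, illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.filters.hp_filter import hpfilter

df = pd.read_csv("hadcrut_monthly.csv", parse_dates=["date"])
y = df["anomaly"].values
t_decades = np.arange(len(y)) / 120.0

ols_rate = sm.OLS(y, sm.add_constant(t_decades)).fit().params[1]  # C/decade

_, trend = hpfilter(y, lamb=129600)
hp_rate = (trend[-1] - trend[0]) / t_decades[-1]  # net change / elapsed decades

print(f"OLS slope:         {ols_rate:.3f} C/decade")
print(f"HP end-point rate: {hp_rate:.3f} C/decade")
print(f"OLS exceeds HP by: {100 * (ols_rate / hp_rate - 1):.1f}%")
```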

No one should be offended by what I’ve done, or by what I’m saying. True science is always open to the possibility of refutation. Given the policy implications that hang on conclusions about the degree of global warming that has occurred in recent decades, we should take a closer look at what the supposed authorities are telling us, and see whether there are significant shortcomings in the way those trends have been calculated.


47 Comments
Stu Miller
March 15, 2008 7:55 pm

Pure bookkeeping issues. Is Figure 1 the combined first two graphs? Where is Table 1 promised in the next to last paragraph?
REPLY: Fixed, thanks for pointing it out. -Anthony

Atmoz
March 15, 2008 9:43 pm

What does it mean for data to exist in an I(2) trend?

Patrick Hadley
March 16, 2008 3:08 am

Thank you for doing this work. The graphs are interesting and the smoothing seems pretty reasonable.
Perhaps you need a little more explanation for those of us who are unfamiliar with the Hodrick-Prescott filter. How have you worked out the “non-linear” trend of the smoothed data? Is it the gradient of the line of best fit through the smoothed data, or simply the gradient of a line drawn between the end points of the smoothed curve? Or is there some other special way the filter produces a trend? In any case, surely the figures you come up with are intended to be thought of as linear trends, i.e. the gradients of straight-line graphs drawn on the time series. E.g. 0.146 per decade is a linear trend.
Just by looking at the graphs it seems that a line of best fit drawn through the smoothing would be very similar to the standard regression line on the unfiltered data. Taking a line from the end points of a smoothed time series could be thought of as a bit dodgy.
I am not yet convinced that you can beat the standard line of best fit used by the IPCC and NASA.

Paul Clark
March 16, 2008 4:32 am

I’m afraid the stats here go over my head (my fault, not Basil’s!), but all analyses of the last decade seem to agree there has been little or no warming, so even I can see that the longer it goes on, the shallower the post-1979 trend gets, even assuming there is a real trend at all…
But of course it’s being argued by warming believers that the last decade is just a downside blip on top of an otherwise steady trend. Which brings me to my question, which I’m afraid isn’t really directly to do with Basil’s analysis, but seems to be the logical thing to ask as a result of it…
Do any of the IPCC’s models predict a decade or more of flatness like this, temporarily overriding the overall trend, and then (under their assumptions) followed by a quick return to it? Not necessarily in 1998-2008, but anywhere in the past or future? If not, why not? Surely an absence of such an output effect would indicate that they have underestimated factors such as solar variability, PDO etc., maybe missed out some as-yet completely unknown linkage, and hence are just GIGO?

steven mosher
March 16, 2008 5:46 am

basil,
one correction. you wrote: “Presumably, these estimates provide some kind of basis for the IPCC SRES scenarios that assume 0.2C per decade warming over the next two decades. ”
these are Emissions scenarios. they are projections over the next hundred years as to what levels of emissions we will see from mankind. There are low level scenarios and high level scenarios. these assumptions about future emissions are fed into GCMs. the gcm then project the temperature.
Great post!

Mike Bryant
March 16, 2008 5:59 am

It seems to me that an investor who is expert on stock charts might be able to tell us if this is a “buy” or a “sell”. If I had shares in this… I’d be nervous.
Mike Bryant

JamesG
March 16, 2008 6:08 am

Well I like it, so thank you Basil. I didn’t mean to be negative about stats before but when you see multiple professionals coming up with multiple, conflicting results you have to wonder about the bias behind it – on both sides. Of course the only reason to use a single straight line is to allow an extrapolation despite not having a clue about the mechanism of the underlying model, which is downright unscientific.

Stan Needham
March 16, 2008 6:37 am

First of all, Basil, thanks for all the work you put into this project. I am not, nor will I ever be, a statistics geek, but I learned more from your series than from anything I’ve read in a long time. As someone whose educational background is in business and history, this statement, in particular, got my attention:
So it occurred to me that I might treat the weather like a “business cycle” and model it with Hodrick-Prescott smoothing.
Anyone who is in their early 30’s or older may remember that back in the late 90’s a number of young economists and Wall Street types were proclaiming that the business cycle had been defeated, and that we would likely NEVER have another significant down-turn. Government surpluses were projected as far as the eye could see. Remember?
Anyway, thanks for a wonderful educational experience, and thanks to Anthony for providing you with this forum. I think it’s absolutely essential that, absent empirical proof one way or the other, this debate continue until one or both of two things occurs. Either (a) as Anthony has noted, Nature will be the final arbiter, in which case it will become quite evident that what we’re witnessing is mostly a natural cycle; or (b) energy production will evolve into non carbon-based processes within the next 2 or 3 decades, to a point where man can no longer be blamed. My guess is that it will be a combination of the two.

Gary Gulrud
March 16, 2008 8:41 am

Thank you for your series which I found fair, well-reasoned and informative as well as interesting which is apparently a tall order for statisticians. I especially respect your limiting the discussion to features of the data rather than leaping ahead to causation which statistics seldom informs. The careful analysis you’ve modelled herein ought to be imitated, where not congenitally prohibited, by your critics and readers generally.

steven mosher
March 16, 2008 9:34 am

Atmoz, I haven’t been able to track down the I(2) trend stuff yet. If you google “I(2) tests” I think you will find some articles behind the green wall. It might be nice to walk over to the grad school of econ and get some time series weenie to provide a helping hand.
If you go to the wiki page Basil cited and look at the footnotes you will find some references (critiques of course) and alternative methods. A Kalman filter is another option that more engineering-oriented folks might be familiar with…
At the core… if you believe that CO2 forcing is a log-like response function, then I would hazard that your underlying trend won’t be linear; your underlying trend will be a log-like function with weather stuff (cyclical), shot-like noise (volcanoes) and other stuff superimposed.

Basil
Editor
March 16, 2008 11:06 am

Atmoz,
That reference to an I(2) trend on Wikipedia is new to me. It is not mentioned in the documentation for the software I use. But I think it means something like a “second order trend,” i.e. a trend that trends, as opposed to a fixed (linear) trend, or a random walk. So if the data is better represented by a linear trend, with shocks (represented by dummy constant terms), and structural breaks (abrupt changes in the slope), then HP may not be appropriate. Since the linear trend with shocks and breaks is what I modeled originally, in retrospect it might have been useful to superimpose both on the data at one time. In fact, to also superimpose the straight-line linear trend.
So here goes, for HadCRUT:
http://i26.tinypic.com/51cwaw.jpg
This illustrates effectively the bias in the straight linear trend: it starts out lower, and ends up higher, than either of the other two. As for the other two, I don’t think it is a clear choice as to which is better (let alone “best,” though I suppose I could compute which has the lower error sum of squares). Just eyeballing the data, the HP series probably doesn’t adequately reflect the 1998 El Nino, as compared to the linear model with a dummy for the El Nino, suggesting that it might well have been something like a “shock” to the climate system, superimposed on what is otherwise a cyclical pattern. The smoother, more continuous representation of the data since 2001 in the HP series looks more “natural” though. From my perspective, it is a wash, because they both start and end up at about the same place, and in an importantly different way from where the straight-line trend begins and ends.
I’ll do the same — superimpose all three trends — on the other three data sets, and link to them here, later today.
Steven Mosher,
On the “correction” about the SRES scenarios, thanks for pointing that out. Of course, that raises a question. Are the GCM’s tested, or in some sense calibrated, against historical relationships between emissions and temperature trends? If so, wouldn’t that require an accurate estimate of the historical trend, and if they are in some way baselining or calibrating the GCM’s against an inflated notion of the historical trend, then wouldn’t that overstate the result of the emission scenarios. I.e., GIGO?
Thanks for the other comments. I’ll get back to them later.
Basil

Basil
Editor
March 16, 2008 12:01 pm

Atmoz,
Another thought. HP works on the logarithms of the trend series, so it needs to be data that can be meaningfully represented in log form. In economic time series, if the data are increasing (or decreasing) at a constant rate of growth, they will be curving upward (or downward) when plotted in linear space. I think all we are saying here is that this is the kind of data that HP presumes.
I agree with what Steven Mosher is saying. In truth, we’ll have some data driven by log like response, and some that is like a shock. If solar cycles have any influence, then the non-linear representations of the solar cycles would imply some kind of log function response in the climate system too, I should think. I would imagine that decadal oscillations of various sorts are also non-linear. All of the non-linear impulses would be what HP is better suited for, as opposed to linear impulses and shocks.
I’ve uploaded images now of all four series with the three types of trends superimposed on each:
GISS: http://i31.tinypic.com/358tly0.jpg
HadCRUT: http://i26.tinypic.com/51cwaw.jpg
UAH_MSU: http://i31.tinypic.com/xpr9ya.jpg
RSS_MSU: http://i32.tinypic.com/1zzhrwo.jpg
Enjoy.

Basil
Editor
March 16, 2008 1:05 pm

Patrick Hadley,
“Just by looking at the graphs it seems that a line of best fit drawn through the smoothing would be very similar to the standard regression line on the unfiltered data.”
You’ve got good eyes, or good intuition. It is not exact, but it is close to what you say. Here’s an example:
http://i28.tinypic.com/2ahxkxt.jpg
The “yhat7” is what results from “a line of best fit drawn through the smoothing.”
“Taking a line from the end points of a smoothed time series could be thought of as a bit dodgy.”
Why? I’d appreciate some discussion about this, as it is really central to my point.
What are we doing here, and why are we doing it? I presume we are looking at the past as a window to the future, assuming that the future is a repeat of the past.
Over the past 29 years, which line gives the best estimate of the total change in anomaly? That’s open for discussion, of course, but I think that either the smoothed HP line, or the lines from Part II, give better estimates of the total “climate change” under “average” conditions, than the straight linear trend.
From the smoothed HP line, the total change in anomaly for HadCRUT over the past 29 years is 0.30. From the straight linear trend, the total change in anomaly for HadCRUT over the past 29 years is 0.45. That is a big, big difference! What makes the number taken from the end points of the straight trend line better or more reasonable than the number taken from the end points of the smoothed trend line? Yes, at the end of the period, the smoothed line is going down. But at the beginning of the period, it is going up. As Steven Mosher would say, that’s just the weather. But over a long period of time, the ups and downs that make up the climate are more accurately represented in the smoothed series, so that it more accurately represents the total change likely to occur over the next 29 years.
Is there something wrong with that line of reasoning?
Thanks for the discussion!

Lucia
March 16, 2008 4:53 pm

I’m more simple minded than you. I just averaged all four data sets together and fit straight lines from 1979 to now.
Fitting monthly data, and using Cochrane-Orcutt to deal with the strong serial autocorrelation in the residuals, I get a best fit trend of 1.5 C/century ±0.3 C/century. (That’s a standard error in the trend.)
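For readers curious what such a fit looks like in code, statsmodels’ GLSAR class performs an iterative feasible-GLS fit with AR(1) errors, which is similar in spirit to a Cochrane-Orcutt correction; the sketch below is illustrative only, using a placeholder file for the averaged series rather than Lucia’s actual data.

```python
# Sketch of an AR(1)-corrected trend fit, in the spirit of Cochrane-Orcutt.
# GLSAR iteratively estimates the AR(1) coefficient of the residuals and
# refits; placeholder data, not the actual calculation quoted above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("averaged_monthly_anomalies.csv", parse_dates=["date"])
y = df["anomaly"].values
t_centuries = np.arange(len(y)) / 1200.0      # slope reads in C per century

model = sm.GLSAR(y, sm.add_constant(t_centuries), rho=1)  # AR(1) errors
fit = model.iterative_fit(maxiter=10)

print(f"trend: {fit.params[1]:.2f} C/century "
      f"(s.e. {fit.bse[1]:.2f}), rho = {model.rho[0]:.2f}")
```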

TCO
March 16, 2008 6:53 pm

When I see people on our side using “ex cathedra” and other such pomposity, it makes me cringe. It’s not just that it’s dorky. It’s that the people prone to such behaviour usually are not as bright as they like to blather on about. See this cartoon:
http://redwing.hutman.net/~mreed/warriorshtm/profundusmaximus.htm
And the basic analysis shown basically hangs on the issue that a trend connecting the end points (that’s what a smoothed curve basically does for you) gives less trend than a linear fit. Of course we know that the parameter of interest, CO2 versus time, is monotonically increasing amidst a background of multi-year effects like ENSO. And we happen to be in a down period the last few years. Wonder if this guy would also have advocated using the smoothed curve in 1998 to hit the end point then?!

Basil
Editor
March 16, 2008 7:45 pm

Lucia,
1.5C per century works out to 0.15C/decade. If you average my data, you get about 0.12C/decade, or 1.2C per century; either way, by my way of reckoning, about 20 percent less warming. I think that is a difference worth considering.
Basil

Jd
March 16, 2008 8:39 pm

Overall…left-to-right..time moves on..goes up…looks like warming..
REPLY: No argument there, but post 2002 appears flat, and decreasing. This could suggest connection to a larger, long period cycle, such as PDO and solar. LOD is also a possibility.

Richard S Courtney
March 17, 2008 4:29 am

Basil asks:
“On the “correction” about the SRES scenarios, thanks for pointing that out. Of course, that raises a question. Are the GCM’s tested, or in some sense calibrated, against historical relationships between emissions and temperature trends? If so, wouldn’t that require an accurate estimate of the historical trend, and if they are in some way baselining or calibrating the GCM’s against an inflated notion of the historical trend, then wouldn’t that overstate the result of the emission scenarios. I.e., GIGO?”
Chapter 2 from Working Group 3 in the IPCC’s Third Assessment Report (TAR) reports on the methodology used to conduct the SRES analyses. It says;
“Most generally, it is clear that mitigation scenarios and mitigation policies are strongly related to their baseline scenarios, but no systematic analysis has published on the relationship between mitigation and baseline scenarios”.
This statement is in the middle of the Chapter and is not included in the Chapter’s Conclusions. The “mitigation” is a supposed change to mean global temperature as a result of alterations to anthropogenic emissions of greenhouse gases (notably carbon dioxide).
Failure to list this statement as a Conclusion is strange because this statement is an admission that the assessed models do not provide useful predictions of effects of mitigation policies. How could the SRES predictions be useful if the relationship between mitigation and baseline is not known?
Also, the only valid baseline scenario is an extrapolation from current trends. The effect of an assumed change from current practice cannot be known if there is no known systematic relationship between mitigation and baseline scenario. But each of the SRES scenarios is a claimed effect of changes from current practice. So, the TAR says the SRES scenarios are meaningless gobbledygook.
The above statement in the IPCC TAR (that is hidden in the middle of TAR WG3 Chapter 2) should always be kept in mind when considering global temperature trends and greenhouse gas emissions.
All the best
Richard S Courtney

steven mosher
March 17, 2008 4:42 am

basil, the gcm are run in a hindcast mode. the goodness of fit is unknown to me.

terry
March 17, 2008 4:51 am

impressive series of entries, Anthony et al. Thanks for putting this out there.

Basil
Editor
March 17, 2008 6:42 am

Lee,
As in the last exchange, this one has probably reached a point of negative marginal utility. You keep saying things that show utterly no understanding whatsoever of what I’m doing. It is as if you’ve concluded that I cannot possibly be on to anything, and you are determined to prove it. There’s nothing wrong with trying to disprove what I’ve done — that’s part of the scientific method — but when it so blinds you that you cannot even see what is before you, then skepticism loses its usefulness.
In your latest, you say
“Remember, you smoothed curves is NOT your analytical result. All the smooth curve does for you is determine the values of the two endpoints that you then use for a linear fit to those two points.”
This is a complete misstatement that betrays your determination to refuse to even acknowledge an understanding of what I’m saying. With all three methods — straight line linear regression, linear regression with discontinuities and slope changes, and the smoothed series, I’m interested in what they say about the total change in temperature over the past 29 years as a way of establishing a current climatological norm for measuring climate change using globally averaged temperature metrics. Now I could have used the net change from beginning of the period to the end of the raw data. For HadCRUT, that number equates to 0.067C/decade. That is the one and only case where a statement like what I quote above even comes close to an accurate depiction of something I might have done. But as a statement of what I actually did, it doesn’t begin to come close.
All three methods I’ve been discussing involve “smoothing” the actual data, so as to get a better sense of what was “normal” for the past 29 years, as opposed to relying on the delta from the end points of the raw data. In the case of straight line linear regression, that approach involves the most smoothing, in effect by removing all evidence of cycles or shocks. So if you want to talk about a method that ignores some of the intervening data, the worst villain of the lot is the straight line linear regression. That method, for HadCRUT, yields an estimate of 0.159C/decade as a measure of the climatological norm for the past 29 years.
The other two methods involve less extreme smoothing than straight line linear regression, so as to take some measure of the influence of cycles, shocks, or trend changes. So we should expect them to yield some estimate of the total climate change over the past 29 years that is somewhere between the two extremes of no smoothing, 0.067C/decade, and extreme smoothing, 0.159C/decade. And that’s what we get: 0.122C/decade using the technique described in Part II, and 0.103C/decade using the technique described in Part III.
Incidentally, as to the way I calculated the 0.103C/decade, your persistent pestering has paid off. Rather than calculate it from the end points, it occurred to me that I could calculate it from the first differences of the smoothed series. So I differenced the smoothed series for HadCRUT, and the average monthly first difference is
0.00086229
That, my friend, is computed using every single point along the curved series. Now multiply it by 349, and divide it by 120, and see what you get.
Happy now?
And, that number may be amenable to the calculation of a confidence interval. The number above has a standard error of 1.82303E-4. Multiply that by 1.96 for the 95% confidence limit of the monthly number. Then multiply that result by 349 and divide by 120 to get the decadal equivalent. If I’ve done the math correctly, for HadCRUT it all works out to about plus or minus 0.001.
By the way, I’m a practitioner, not a theoretician, so I’d want a number like that reviewed by a professional statistician before I made anything of it, since it is not a number that I can read directly off the output of a statistical analysis program. But I do think we’re on to something here. I take back my remark about this discussion having wandered into the territory of negative marginal utility.
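For reference, here is a minimal sketch of the kind of first-difference calculation described above, using the same placeholder HP trend as in the earlier sketches. The 0.103C/decade figure quoted for HadCRUT corresponds to the mean monthly difference multiplied by 120, and a naive standard error of that mean ignores the strong autocorrelation of a smoothed series, which is one more reason to have any confidence interval reviewed.

```python
# Sketch: mean first difference of an HP-smoothed trend and its decadal
# equivalent.  Placeholder data; this illustrates the calculation described
# above, not the exact series used in the post.
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

df = pd.read_csv("hadcrut_monthly.csv", parse_dates=["date"])
_, trend = hpfilter(df["anomaly"].values, lamb=129600)

diffs = np.diff(trend)                  # month-to-month changes in the smoothed trend
mean_diff = diffs.mean()                # average monthly change
total_change = mean_diff * len(diffs)   # equals trend[-1] - trend[0]
decadal_rate = mean_diff * 120          # 120 months per decade

# Naive standard error of the mean monthly change; it ignores the strong
# autocorrelation that smoothing induces, so treat it with caution.
se_mean = diffs.std(ddof=1) / np.sqrt(len(diffs))

print(f"mean monthly change:  {mean_diff:.6f}")
print(f"total change:         {total_change:.3f}")
print(f"decadal rate:         {decadal_rate:.3f}")
print(f"naive s.e. (monthly): {se_mean:.2e}")
```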

steven mosher
March 17, 2008 7:26 am

Don’t get me started on the SRES. It’s funny everybody focuses on the historical
data and not the “projections” for future emissions. The SRES are the inputs
that drive the GCMs to conclude warming for the future. You all go google
SRES. you read how they predict what the future emissions will be.
post when you stop laughing or crying

Josh
March 17, 2008 8:47 am

Basil, thanks for this. I commented on your last post asking about linear trends for cyclical data. You seemed a bit agitated by some of the comments and I wanted to make sure it was the comments of others and not mine, since I absolutely meant my question as constructive (and as much for my own information as trying to debunk anyone else). I’ve seen that everyone uses linear fits for the (presumably) mostly-cyclical temperature data and so my question was most definitely not directed at your analysis. Your post just made me think about it so that’s where I posted my question. I’ve learned a lot from reading your posts (I had never heard of this Hodrick-Prescott filter but it’s certainly something I plan to understand further). Thank you for taking the time!

Stan Needham
March 17, 2008 8:58 am

post when you stop laughing or crying
I’ll make this short because I’m laughing so hard I’ve got tears streaming down my face, and I’m afraid they’ll short out my keyboard. Thanks for the laugh, Steven.
