To Tell The Truth: Will the Real Global Average Temperature Trend Please Rise?

Part III

A guest post by Basil Copeland

Again, I want to thank Anthony for the kind invitation to guest blog these musings about what is going on with global average temperature metrics.  It has been a most interesting, and personally rewarding, experience.  My original aim was quite modest, but I fear that the passion that many feel for this issue prevented them from seeing that.  So in this final part to this series, I want to try to make my aim more clear, and to show how a lively exchange of ideas can lead to new insights.

The IPCC has made the earth’s global average temperature trend a central focus in the debate over anthropogenic global warming.  In the AR4 report of Working Group 1, they state:

The range (due to different data sets) of global surface warming since 1979 is 0.16°C to 0.18°C per decade compared to 0.12°C to 0.19°C per decade for MSU estimates of tropospheric temperatures.  (Chapter 3, Page 237)

Similar, if not the same, estimates are reported in Table 3.3, Page 61, of the Synthesis and Assessment Product 1.1 of the U.S. Climate Change Science Program (accessible here: http://www.climatescience.gov/Library/sap/sap1-1/finalreport/sap1-1-final-all.pdf ).  Presumably, these estimates provide some kind of basis for the IPCC SRES scenarios that assume 0.2C per decade warming over the next two decades. 

[Image: part3figure1-520.png]

Figure 1

From what I can tell from the descriptions of the sources for these estimates, they are based on a straight-line linear regression that includes corrections for serial correlation.  In other words, regressions that look something like those shown in Figure 1.  The trend at the top is from Appendix A, Page 130, Figure 1, of the U.S. Climate Change Science Program report just cited.  The second is taken from the RSS website (http://www.remss.com/data/msu/graphics/plots/sc_Rss_compare_TS_channel_tlt.png  accessed on March 15, 2008).  Both show a warming trend of 0.17C/decade since 1979.
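For readers who want to see the mechanics, here is a minimal sketch of how a trend of this sort can be computed: a straight-line fit with serial-correlation-robust (HAC/Newey-West) standard errors. The series, variable names and lag length are illustrative assumptions; the cited reports do not publish their code here.

```python
# Minimal sketch of a straight-line trend fit with a serial-correlation
# correction (HAC / Newey-West standard errors). The series, variable
# names and lag length are illustrative assumptions, not taken from the
# cited reports.
import numpy as np
import statsmodels.api as sm

def decadal_trend(anomalies):
    """anomalies: 1-D array of monthly temperature anomalies (deg C)."""
    t = np.arange(len(anomalies))            # time index in months
    X = sm.add_constant(t)                   # intercept + linear term
    fit = sm.OLS(anomalies, X).fit(cov_type="HAC", cov_kwds={"maxlags": 12})
    return fit.params[1] * 120, fit.bse[1] * 120   # deg C per decade, std. error

# Example with synthetic data standing in for a real anomaly series:
rng = np.random.default_rng(0)
y = 0.0014 * np.arange(349) + rng.normal(0.0, 0.1, 349)
trend, se = decadal_trend(y)
print(f"{trend:.3f} +/- {se:.3f} C/decade")
```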

Are these “good” estimates of the historical trend since 1979?  Forgive me, but I refuse to accept them as authoritative ex cathedra, nor will any true scientist expect me to.  Bear in mind, I’m taking the data for what it’s worth, and am overlooking any questions about the reliability of the surface record, such as what Anthony is looking into (or Steve McIntyre at www.climateaudit.org), or the kind of urbanization and land use effects reported by Ross McKitrick and Patrick Michaels. My concern is solely with the technical procedures used to estimate the “trends” that are commonly cited as evidence of global warming.  Bottom line?  There are problems with the way those trends are computed that overestimate the degree of global warming since 1979 by 16.3% to 41.3% (based on results presented below).

In Part II I attempted a demonstration of this using what might be considered a rather blunt or brute-force approach — a test of whether there was a significant “structural break” (the way we describe it in my field of study) after 2001, along with whether or not linear trends are distorted by the effect of the 1998 El Nino.  Nothing in the comments that followed the posting of Part II fundamentally undermined the validity of my conclusions.  The chief concerns seemed to be that my decision to test for a structural break (or “change point”) at the end of 2001 was arbitrary (it wasn’t), or whether one could say anything meaningful about a cyclical system like climate from linear trend lines.  Well, with respect to the latter, that horse is out of the barn, and we’re being told — by supposed authorities — that there has been X degrees of global warming per decade since 1979 on the basis of linear trend lines.  If they can use linear regression to claim that global warming is proceeding apace, well please excuse me for doing the same in questioning them.

Still, the comments were provocative, and encouraged me to dig further into my toolbox of econometric techniques to see if I might be able to come up with something that would alleviate some of the concerns commenters had about what I did.  So it occurred to me that I might treat the weather like a “business cycle” and model it with Hodrick-Prescott smoothing.  (If you want an explanation of what that is, look here: http://en.wikipedia.org/wiki/Hodrick-Prescott_filter ).  The results are presented, for the four global average temperature metrics we are using, in Figures 2 through 5.
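For those who want to experiment, here is a minimal sketch of Hodrick-Prescott smoothing using statsmodels. The smoothing parameter shown (lambda = 129,600, one common convention for monthly data) is only an illustrative choice, not necessarily the value used for Figures 2 through 5, and the series is synthetic.

```python
# Minimal sketch of Hodrick-Prescott smoothing on a monthly anomaly
# series. lamb=129600 is one common convention for monthly data, not
# necessarily the value used for Figures 2-5; the series is synthetic.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def hp_trend(anomalies, lamb=129600):
    """Return (cycle, trend) components of a monthly anomaly series."""
    return hpfilter(np.asarray(anomalies, dtype=float), lamb=lamb)

# Example: decompose a synthetic series into a cycle and a smooth trend.
rng = np.random.default_rng(1)
months = np.arange(349)
y = 0.0012 * months + 0.2 * np.sin(months / 20.0) + rng.normal(0.0, 0.1, 349)
cycle, trend = hp_trend(y)
print(trend[:5], trend[-5:])
```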

[Image: part3figure2-520.png]

Figure 2

[Image: part3figure3-520.png]

Figure 3

[Image: part3figure4-520.png]

Figure 4

[Image: part3figure5-520.png]

Figure 5

Those who think we should let the data tell us where the “change points” are should find this approach more appealing, as should those who believe we should be modeling the data with non-linear techniques.  But in the end, the point is the same: the “real trend” over the 29 years we are looking at is substantially less than we get using straight-line regression.  With the exception of GISS, Hodrick-Prescott smoothing results in even lower estimates of the degree of global warming over the past 29 years than the structural break approach of Part II.  As shown in Table 1 below, compared to the two methods I’ve employed, the straight line regression method relied upon by the IPCC and the U.S. Climate Change Science Program overstates global warming since 1979 by anywhere from 16.3% (using GISS) to 41.3% (HadCRUT).

[Table 1: part3table1.png]
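Exactly which alternative estimate each percentage is measured against is not spelled out above, but one reading that is consistent with the HadCRUT numbers discussed later in the thread (0.159C/decade for the straight line, 0.122 and 0.103C/decade for the two alternative methods) is a simple ratio of decadal trends:

$$\text{overstatement (\%)} = \left(\frac{b_{\text{linear}}}{b_{\text{alt}}} - 1\right) \times 100,$$

where $b_{\text{alt}}$ is taken as the average of the Part II and Part III estimates. For HadCRUT that gives $0.159/0.1125 - 1 \approx 41.3\%$, matching the table. This is an inference about how the percentages can be read, not a statement of the exact calculation used.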

No one should be offended by what I’ve done, or what I’m saying.  True science is always open to the possibility of refutation.  Given the policy implications that hang on conclusions about the degree of global warming that has occurred in recent decades, we should take a closer look at what the supposed authorities are telling us, and see if there are not perhaps some significant shortcomings in the way they have calculated the degree of global warming in recent decades.

47 Comments
Stu Miller
March 15, 2008 7:55 pm

Pure bookkeeping issues. Is Figure 1 the combined first two graphs? Where is Table 1 promised in the next to last paragraph?
REPLY: Fixed, thanks for pointing it out. -Anthony

March 15, 2008 9:43 pm

What does it mean for data to exist in an I(2) trend?

Patrick Hadley
March 16, 2008 3:08 am

Thank you for doing this work. The graphs are interesting and the smoothing seems pretty reasonable.
Perhaps you need a little more explanation for those of us who are unfamiliar with the Hodrick-Prescott filter. How have you worked out the “non-linear” trend of the smoothed data? Is it the gradient of the line of best fit through the smoothed data, or simply the gradient of a line drawn between the end points of the smoothed curve? Or is there some other special way the filter produces a trend? In any case surely the figures you come up with are intended to be thought of as linear trends, i.e. the gradients of straight line graphs drawn on the time series. E.g. 0.146 per decade is a linear trend.
Just by looking at the graphs it seems that a line of best fit drawn through the smoothing would be very similar to the standard regression line on the unfiltered data. Taking a line from the end points of a smoothed time series could be thought of as a bit dodgy.
I am not yet convinced that you can beat the standard line of best fit used by the IPCC and NASA.

Paul Clark
March 16, 2008 4:32 am

I’m afraid the stats here go over my head (my fault, not Basil’s!), but all analyses of the last decade seem to agree there has been little or no warming, so even I can see that the longer it goes on, the shallower the post-1979 trend gets, even assuming there is a real trend at all…
But of course it’s being argued by warming believers that the last decade is just a downside blip on top of an otherwise steady trend. Which brings me to my question, which I’m afraid isn’t really directly to do with Basil’s analysis, but seems to be the logical thing to ask as a result of it…
Do any of the IPCC’s models predict a decade or more of flatness like this, temporarily overriding the overall trend, and then (under their assumptions) followed by a quick return to it? Not necessarily in 1998-2008, but anywhere in the past or future? If not, why not? Surely an absence of such an output effect would indicate that they have underestimated factors such as solar variability, PDO etc., maybe missed out some as yet completely unknown linkage, and hence are just GIGO?

steven mosher
March 16, 2008 5:46 am

basil,
one correction. you wrote: “Presumably, these estimates provide some kind of basis for the IPCC SRES scenarios that assume 0.2C per decade warming over the next two decades. ”
these are Emissions scenarios. they are projections over the next hundred years as to what levels of emissions we will see from mankind. There are low level scenarios and high level scenarios. these assumptions about future emissions are fed into GCMs. the gcm then project the temperature.
Great post!

Mike Bryant
March 16, 2008 5:59 am

It seems to me that an investor who is expert on stock charts might be able to tell us if this is a “buy” or a “sell”. If I had shares in this… I’d be nervous.
Mike Bryant

JamesG
March 16, 2008 6:08 am

Well I like it, so thank you Basil. I didn’t mean to be negative about stats before but when you see multiple professionals coming up with multiple, conflicting results you have to wonder about the bias behind it – on both sides. Of course the only reason to use a single straight line is to allow an extrapolation despite not having a clue about the mechanism of the underlying model, which is downright unscientific.

Stan Needham
March 16, 2008 6:37 am

First of all, Basil, thanks for all the work you put into this project. I am not, nor will I ever be, a statistics geek, but I learned more from your series than from anything I’ve read in a long time. As someone whose educational background is in business and history, this statement, in particular, got my attention:
So it occurred to me that I might treat the weather like a “business cycle” and model it with Hodrick-Prescott smoothing.
Anyone who is in their early 30’s or older may remember back in the late 90’s, that a number of young economists and Wall Street types were proclaiming that the business cycle had been defeated, and that we would likely NEVER have another significant down-turn. Government surpluses were projected as far as the eye could see. Remember?
Anyway, thanks for a wonderful educational experience, and thanks to Anthony for providing you with this forum. I think it’s absolutely essential that, absent empirical proof one way or the other, this debate continue until one or both of two things occurs. Either, (a) as Anthony has noted, Nature will be the final arbiter, in which case it will become quite evident that what we’re witnessing is mostly a natural cycle; or (b) energy production will evolve into non carbon-based processes within the next 2 or 3 decades, to a point where man can no longer be blamed. My guess is that it will be a combination of the two.

Gary Gulrud
March 16, 2008 8:41 am

Thank you for your series which I found fair, well-reasoned and informative as well as interesting which is apparently a tall order for statisticians. I especially respect your limiting the discussion to features of the data rather than leaping ahead to causation which statistics seldom informs. The careful analysis you’ve modelled herein ought to be imitated, where not congenitally prohibited, by your critics and readers generally.

steven mosher
March 16, 2008 9:34 am

Atmoz, I haven’t been able to track down the I(2) trend stuff yet. If you
google ” I(2) tests” I think you will find some articles behind the green wall.
It might be nice to walk over to the grad school of econ and get some
time series weenie to provide a helping hand.
If you go to the wiki page Basil cited and look at the footnotes you will
find some references ( critiques of course) and alternative methods.
A Kalman filter is another option that more engineering oriented folks might
be familiar with…
At the core… if you believe that CO2 forcing is a log like response function, then I would hazard that your underlying trend won’t be linear, your underlying trend will be a log like function with weather stuff (cyclical), shot-like noise (volcano)
and other stuff superimposed.
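(For reference, the simplified expression usually quoted for that log like response is the Myhre et al. (1998) formula, $\Delta F \approx 5.35\,\ln(C/C_0)\ \mathrm{W\,m^{-2}}$; a rough reference point for the shape of the forcing, not a claim about what any particular GCM does.)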

Basil
Editor
March 16, 2008 11:06 am

Atmoz,
That reference to an I(2) trend on Wikipedia is new to me. It is not mentioned in the documentation for the software I use. But I think it means something like a “second order trend,” i.e. a trend that trends, as opposed to a fixed (linear) trend, or a random walk. So if the data is better represented by a linear trend, with shocks (represented by dummy constant terms), and structural breaks (abrupt changes in the slope), then HP may not be appropriate. Since the linear trend with shocks and breaks is what I modeled originally, in retrospect it might have been useful to superimpose both on the data at one time. In fact, to also superimpose the straight-line linear trend.
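If the usage is the standard time series one, a series $x_t$ is said to be integrated of order $d$, written $I(d)$, when it has to be differenced $d$ times to become stationary: $\Delta^d x_t$ is stationary but $\Delta^{d-1} x_t$ is not, where $\Delta x_t = x_t - x_{t-1}$. On that reading an I(2) series is one whose rate of change itself wanders, which is roughly the “trend that trends” idea above.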
So here goes, for HadCRUT:
http://i26.tinypic.com/51cwaw.jpg
This illustrates effectively the bias in the straight linear trend: it starts out lower, and ends up higher, than either of the other two. As for the other two, I don’t think it is a clear choice as to which is better (let alone “best,” though I suppose I could compute which has the lower error sum of squares). Just eyeballing the data, the HP series probably doesn’t adequately reflect the 1998 El Nino, as compared to the linear model with a dummy for the El Nino, suggesting that it might well have been something like a “shock” to the climate system of some kind, superimposed on what is otherwise a cyclical pattern. The smoother, more continuous representation of the data since 2001 in the HP series looks more “natural” though. From my perspective, it is a wash, because they both start and end up at about the same place, and in an importantly different way than where the straight line trend begins and ends.
I’ll do the same — superimpose all three trends — on the other three data sets, and link to them here, later today.
Steven Mosher,
On the “correction” about the SRES scenarios, thanks for pointing that out. Of course, that raises a question. Are the GCM’s tested, or in some sense calibrated, against historical relationships between emissions and temperature trends? If so, wouldn’t that require an accurate estimate of the historical trend, and if they are in some way baselining or calibrating the GCM’s against an inflated notion of the historical trend, then wouldn’t that overstate the result of the emission scenarios? I.e., GIGO?
Thanks for the other comments. I’ll get back to them later.
Basil

Basil
Editor
March 16, 2008 12:01 pm

Atmoz,
Another thought. HP works on the logarithms of the trend series, so it needs to be data that can be meaningfully represented in log form. In economic time series, if the data are increasing (or decreasing) at a constant rate of growth, it will be curving upward (or downward) when plotted in linear space. I think all we are saying here is that this is the kind of data that HP presumes.
I agree with what Steven Mosher is saying. In truth, we’ll have some data driven by log like response, and some that is like a shock. If solar cycles have any influence, then the non-linear representations of the solar cycles would imply some kind of log function response in the climate system too, I should think. I would imagine that decadal oscillations of various sorts are also non-linear. All of the non-linear impulses would be what HP is better suited for, as opposed to linear impulses and shocks.
I’ve uploaded images now of all four series with the three types of trends superimposed on each:
GISS: http://i31.tinypic.com/358tly0.jpg
HadCRUT: http://i26.tinypic.com/51cwaw.jpg
UAH_MSU: http://i31.tinypic.com/xpr9ya.jpg
RSS_MSU: http://i32.tinypic.com/1zzhrwo.jpg
Enjoy.

Basil
Editor
March 16, 2008 1:05 pm

Patrick Hadley,
“Just by looking at the graphs it seems that a line of best fit drawn through the smoothing would be very similar to the standard regression line on the unfiltered data.”
You’ve got good eyes, or good intuition. It is not exact, but it is close to what you say. Here’s an example:
http://i28.tinypic.com/2ahxkxt.jpg
The “yhat7” is what results from “a line of best fit drawn through the smoothing.”
“Taking a line from the end points of a smoothed time series could be thought of as a bit dodgy.”
Why? I’d appreciate some discussion about this, as it is really central to my point.
What are we doing here, and why are we doing it? I presume we are looking at the past as a window to the future, assuming that the future is a repeat of the past.
Over the past 29 years, which line gives the best estimate of the total change in anomaly? That’s open for discussion, of course, but I think that either the smoothed HP line, or the lines from Part II, give better estimates of the total “climate change” under “average” conditions, than the straight linear trend.
From the smoothed HP line, the total change in anomaly for HadCRUT over the past 29 years is 0.30. From the straight linear trend, the total change in anomaly for HadCRUT over the past 29 years is 0.45. That is a big, big difference! What makes the number taken from the end points of the straight trend line better or more reasonable than the number taken from the end points of the smoothed trend line? Yes, at the end of the period, the smoothed line is going down. But at the beginning of the period, it is going up. As Steven Mosher would say, that’s just the weather. But over a long period of time, we have the ups and downs that make up the climate more accurately represented in the smoothed series, so that it more accurately represents the total change likely to occur over the next 29 years.
Is there something wrong with that line of reasoning?
Thanks for the discussion!

March 16, 2008 4:53 pm

I’m more simple minded than you. I just averaged all four data sets together and fit straight lines from 1979 to now.
Fitting monthly data, and using Cochrane-Orcutt to deal with the strong serial autocorrelation in the residuals, I get a best fit trend of 1.5 C/century ±0.3 C/century. (That’s a standard error in the trend.)
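A rough sketch of a fit along these lines is below. The GLSAR iterative fit in statsmodels is a feasible-GLS procedure in the same family as Cochrane-Orcutt; it is not necessarily the exact routine used for the figure above, and the data and variable names are placeholders.

```python
# Rough sketch of a trend fit with an AR(1) correction for serially
# correlated residuals, in the spirit of Cochrane-Orcutt. GLSAR's
# iterative fit is a related feasible-GLS procedure, not necessarily
# the exact routine used for the figure quoted above.
import numpy as np
import statsmodels.api as sm

def ar1_corrected_trend(monthly_anomalies):
    t = np.arange(len(monthly_anomalies))
    X = sm.add_constant(t)
    model = sm.GLSAR(monthly_anomalies, X, rho=1)     # AR(1) error structure
    fit = model.iterative_fit(maxiter=10)             # re-estimate rho, refit
    return fit.params[1] * 1200, fit.bse[1] * 1200    # deg C per century

# Example with a synthetic stand-in for the averaged anomaly series:
rng = np.random.default_rng(2)
y = 0.00125 * np.arange(349) + rng.normal(0.0, 0.1, 349)
trend, se = ar1_corrected_trend(y)
print(f"{trend:.2f} +/- {se:.2f} C/century")
```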

TCO
March 16, 2008 6:53 pm

When I see people on our side using “ex cathedra” and other such pomposity, it makes me cringe. It’s not just that it’s dorky. It’s that the people prone to such behaviour usually are not as bright as they like to blather on about. See this cartoon:
http://redwing.hutman.net/~mreed/warriorshtm/profundusmaximus.htm
And the basic analysis shown basically hangs on the issue, that a trend connecting the end points (that’s what a smoothed curve basically does for you) gives less trend than linear fit. Of course we know that the parameter of interest, CO2 versus time, is monotonically increasing amidst a background of multi-year effects like ENSO. And we happen to be during a down period last few years. Wonder if this guy would also have advocated using the smoothed curve in 1998 to hit the end point then?!

Basil
Editor
March 16, 2008 7:45 pm

Lucia,
1.5C per century works out to .15C/decade. If you average my data, you get about .12C/decade, or 1.2C per century; either way, by my way of reckoning, about 20 percent less warming. I think that is a difference worth considering.
Basil

Jd
March 16, 2008 8:39 pm

Overall…left-to-right..time moves on..goes up…looks like warming..
REPLY: No argument there, but post 2002 appears flat, and decreasing. This could suggest connection to a larger, long period cycle, such as PDO and solar. LOD is also a possibility.

Richard S Courtney
March 17, 2008 4:29 am

Basil asks:
“On the “correction” about the SRES scenarios, thanks for pointing that out. Of course, that raises a question. Are the GCM’s tested, or in some sense calibrated, against historical relationships between emissions and temperature trends? If so, wouldn’t that require an accurate estimate of the historical trend, and if they are in some way baselining or calibrating the GCM’s against an inflated notion of the historical trend, then wouldn’t that overstate the result of the emission scenarios. I.e., GIGO?”
Chapter 2 from Working Group 3 in the IPCC’s Third Assessment Report (TAR) reports on the methodology used to conduct the SRES analyses. It says;
“Most generally, it is clear that mitigation scenarios and mitigation policies are strongly related to their baseline scenarios, but no systematic analysis has been published on the relationship between mitigation and baseline scenarios”.
This statement is in the middle of the Chapter and is not included in the Chapter’s Conclusions. The “mitigation” is a supposed change to mean global temperature as a result of alterations to anthropogenic emissions of greenhouse gases (notably carbon dioxide).
Failure to list this statement as a Conclusion is strange because this statement is an admission that the assessed models do not provide useful predictions of effects of mitigation policies. How could the SRES predictions be useful if the relationship between mitigation and baseline is not known?
Also, the only valid baseline scenario is an extrapolation from current trends. The effect of an assumed change from current practice cannot be known if there is no known systematic relationship between mitigation and baseline scenario. But each of the SRES scenarios is a claimed effect of changes from current practice. So, the TAR says the SRES scenarios are meaningless gobbledygook.
The above statement in the IPCC TAR (that is hidden in the middle of TAR WG3 Chapter 2) should always be kept in mind when considering global temperature trends and greenhouse gas emissions.
All the best
Richard S Courtney

steven mosher
March 17, 2008 4:42 am

basil, the gcm are run in a hindcast mode. the goodness of fit is unknown to me.

terry
March 17, 2008 4:51 am

Impressive series of entries, Anthony et al. Thanks for putting this out there.

Basil
Editor
March 17, 2008 6:42 am

Lee,
As in the last exchange, this one has probably reached a point of negative marginal utility. You keep saying things that show utterly no understanding whatsoever of what I’m doing. It is as if you’ve concluded that I cannot possibly be on to anything, and you are determined to prove it. There’s nothing wrong with trying to disprove what I’ve done — that’s part of the scientific method — but when it so blinds you that you cannot even see what is before you, then skepticism loses its usefulness.
In your latest, you say
“Remember, you smoothed curves is NOT your analytical result. All the smooth curve does for you is determine the values of the two endpoints that you then use for a linear fit to those two points.”
This is a complete misstatement that betrays your determination to refuse to even acknowledge an understanding of what I’m saying. With all three methods — straight line linear regression, linear regression with discontinuities and slope changes, and the smoothed series, I’m interested in what they say about the total change in temperature over the past 29 years as a way of establishing a current climatological norm for measuring climate change using globally averaged temperature metrics. Now I could have used the net change from beginning of the period to the end of the raw data. For HadCRUT, that number equates to 0.067C/decade. That is the one and only case where a statement like what I quote above even comes close to an accurate depiction of something I might have done. But as a statement of what I actually did, it doesn’t begin to come close.
All three methods I’ve been discussing involve “smoothing” the actual data, so as to get a better sense of what was “normal” for the past 29 years, as opposed to relying on the delta from the end points of the raw data. In the case of straight line linear regression, that approach involves the most smoothing, in effect by removing all evidence of cycles or shocks. So if you want to talk about a method that ignores some of the intervening data, the worst villain of the lot is the straight line linear regression. That method, for HadCRUT, yields an estimate of 0.159C/decade as a measure of the climatological norm for the past 29 years.
The other two methods involve less extreme smoothing than straight line linear regression, so as to take some measure of the influence of cycles, shocks, or trend changes. So we should expect them to yield some estimate of the total climate change over the past 29 years that is somewhere between the two extremes of no smoothing, 0.067C/decade, and extreme smoothing, 0.159C/decade. And that’s what we get: 0.122C/decade using the technique described in Part II, and 0.103C/decade using the technique described in Part III.
Incidentally, as to the way I calculated the 0.103C/decade, your persistent pestering has paid off. Rather than calculate it from the end points, it occurred to me that I could calculate it from the first differences of the smoothed series. So I differenced the smoothed series for HadCRUT, and the average monthly first difference is
0.00086229
That, my friend, is computed using every single point along the curved series. Now multiply it by 349, and divide it by 120, and see what you get.
Happy now?
And, that number may be amenable to the calculation of a confidence interval. The number above has a standard error of 1.82303E-4. Multiply that by 1.96 for the 95% confidence limit of the monthly number. Then multiply that result by 349 and divide by 120 to get the decadal equivalent. If I’ve done the math correctly, for HadCRUT it all works out to about plus or minus 0.001.
By the way, I’m a practitioner, not a theoretician, so I’d want a number like that reviewed by a professional statistician before I made anything of it, since it is not a number that I can read directly off the output of a statistical analysis program. But I do think we’re on to something here. I take back my remark about this discussion having wandered into the territory of negative marginal utility.
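For anyone who wants to check the mechanics, something like the following reproduces the mean first difference and its standard error. The smoothed series itself is not reproduced here, so the numbers are placeholders, and the conversion to a decadal figure is taken up again further down the thread.

```python
# Sketch of the calculation described above: take the first differences of
# the smoothed series, average them, and attach a standard error to that
# average. The series here is a placeholder, not the actual HP trend.
import numpy as np

def mean_first_difference(smoothed):
    d = np.diff(smoothed)                      # monthly first differences
    mean_d = d.mean()                          # average monthly change
    se_d = d.std(ddof=1) / np.sqrt(len(d))     # standard error of the mean
    return mean_d, se_d

smoothed = np.linspace(0.0, 0.30, 350)         # stand-in for an HP trend line
mean_d, se_d = mean_first_difference(smoothed)
print(f"mean first difference: {mean_d:.8f} +/- {1.96 * se_d:.8f} (95%)")
```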

steven mosher
March 17, 2008 7:26 am

Don’t get me started on the SRES. It’s funny everybody focuses on the historical
data and not the “projections” for future emissions. The SRES are the inputs
that drive the GCMs to conclude warming for the future. You all go google
SRES. you read how they predict what the future emissions will be.
post when you stop laughing or crying

Josh
March 17, 2008 8:47 am

Basil, thanks for this. I commented on your last post asking about linear trends for cyclical data. You seemed a bit agitated by some of the comments and I wanted to make sure it was the comments of others and not mine, since I absolutely meant my question as constructive (and as much for my own information as trying to debunk anyone else). I’ve seen that everyone uses linear fits for the (presumably) mostly-cyclical temperature data and so my question was most definitely not directed at your analysis. Your post just made me think about it so that’s where I posted my question. I’ve learned a lot from reading your posts (I had never heard of this Hodrick-Prescott filter but it’s certainly something I plan to understand further). Thank you for taking the time!

Stan Needham
March 17, 2008 8:58 am

post when you stop laughing or crying
I’ll make this short because I’m laughing so hard I’ve got tears streaming down my face, and I’m afraid they’ll short out my keyboard. Thanks for the laugh, Steven.

Basil
Editor
March 17, 2008 9:43 am

Just a quick addition. It occurs to me that the confidence interval I calculated in my last response to Lee may not be what people think it is. It is what I represented it to be, which is a confidence interval for the mean first difference from the smoothed HadCRUT series. It is not a confidence interval that infers anything about how well the smoothed series fits the raw data. Now the smoothed series fits the raw data better than a straight linear regression, so a confidence interval based on that should be less than a confidence interval based on a straight line regression. But as a practitioner rather than a theoretician or professional statistician, I’m always cautious about claiming statistical significance when using techniques that do not have standard metrics for statistical inference associated with them. It is a cautiousness sort of like not wanting to make claims about PC’s that might prove bogus, if you know what I mean. 🙂
Incidentally, as a practitioner, it has been my experience that practitioners often have a different perspective on things than academics, and can sometimes see things that the academics miss, or have insights that the academics do not. In my own narrow little field of expertise, I have had some modest success in challenging the way academics look at things. I think it notable that it appears that meteorologists, as a group, are more skeptical about the claims of AGW than climate scientists. I’m not surprised, though. And I don’t think their insights, or perspective, are any less valuable than what gets published in peer-reviewed journals. Having published in peer reviewed journals, and having been a referee for a couple of them — something pretty uncommon for a non-academic lacking a Ph.D., I think — I know well both the strengths and limitations of peer review.
So take what I’ve been saying in my blog posts and the dialog in comments with however many grains of salt you wish. I have no axe to grind, or hidden agenda. There’s no question that there’s been “global warming,” especially since the beginning of the instrumental record, which roughly corresponds with the earth coming out of the Little Ice Age. The questions are “how much?” and “why?” I’m not saying anything at all about “why” because that is outside my domain of expertise or practical experience. But as to the question of “how much,” I’m comfortable making some modest claims to expertise based on my past academic training and 30 years of experience as a professional economist and economic consultant. If I bring to the task of looking at “how much” a perspective that is different than the conventional wisdom, and yields some insights perhaps missed by the conventional wisdom, it will not be the first time.
Of course, as to the “why” of the earth warming in the 20th century, I’d consider myself a curious layperson with a kind of well-informed and rational skepticism about the claims of AGW. But that’s not what I’ve been blogging about.

Richard
March 17, 2008 9:56 am

Hello, I’m new here and have been looking for information that might relate to these two links below. They are not peer reviewed papers but I think they highlight important data that was omitted by the IPCC. First link on water vapor gives radiative forcing of 131 w/m^2, and much more. Second link claims CO2 only absorbs 8% of long wave radiation. Taken together they make the IPCC claims look very suspect. Please delete my comment if I am intruding.
http://www-ramanathan.ucsd.edu/FCMTheRadiativeForcingDuetoCloudsandWaterVapor.pdf
http://www.nov55.com/ntyg.html (CO2 Absorption Spectrum)
Richard

Ian
March 17, 2008 10:29 am

Basil,
Thanks for taking so much time with these posts and responses. One of your comments above caught my eye – I have a question out of ignorance of the smoothing method you used.
You mentioned the avg monthly first difference of 0.00086229, and a confidence interval (I believe for this avg) of +/-0.001. That gives a range of 1.86229 to -0.13771. What, if anything, should I conclude from that (esp. that the range takes in 0)?

steven mosher
March 17, 2008 10:58 am

Fitting monthly or yearly temperature data with a linear regression
is just plain stupid. That’s a theorem somewhere. ok, it’s not stupid.
It’s easily communicated.
I’m less certain about alternatives. A piecewise linear ( say Tammy) is a cheap
and easy fix. Basil’s approach, I haven’t got to the bottom of. Beware the cookbook chaps. However, eschew the cookbook at your peril Dr. Mann.
I’d prefer to model the system and devise a technique to detect trend changes
in that system. hmm
Atmoz has an interesting post on Short term cycles (like ENSO) and the trend excursions you see during these periodic episodes.
At the bottom. This cooling weather phase is probably a very good thing for climate science. Why?

Gary Gulrud
March 17, 2008 11:31 am

TCO: I may be a good example of those affecting the ‘pomposity’ you loathe, but we as a group find Basil’s seamless use of ‘ex cathedra’ amusing and deft.
Regardless of our individual levels of success those who aspire to word-smithery do so to communicate well. It’s a work-in-progress.
I find your tendency toward criticism of your cohort, “people on our side”(here and in adjoining threads), PC.
Might I suggest ‘Eeyore’ as a more fitting, excuse the pun, nom de plume?

randomengineer
March 17, 2008 12:15 pm

Basil, it seems to me that when all is said and done Lee is correct; all you are ultimately doing is a curve fit that, if you were to run a linear regression on it, would merely show the same trend as before the curve fit. It also seems that what you have done is — again as he points out — merely manipulated what you call the trend TODAY based on the endpoints.
As mosher points out a regression is probably the worst way to do this except for all of the other methods.
That being said, bear in mind that although I disagree with your conclusions, I applaud the post series. Certainly, by looking at the data from a different perspective, this gives food for thought… meaning you have made your point, even if I figure you are wrong. This is reminiscent of Edison in a way; he said he’d discovered 2000 ways to not make light bulbs but never once failed.
Congratulations on a thought provoking series!

Enochson
March 17, 2008 12:40 pm

Thank you for posting this. I found it enlightening. I would like to see similar statistical analyses of other climate data.

Basil
Editor
March 17, 2008 3:31 pm

A potpourri of replies
Paul Clark,
I don’t know the answer to your question. I would be surprised, though. Roger Pielke Jr. has a little write up today over at http://www.icecap.us on what Lucia is doing to validate some IPCC projections. If you haven’t been following what she’s doing, head over to her web site (http://rankexploits.com/musings/) and take a look.
JamesG,
I understand your point. When I teach anything having to do with statistics, I usually begin “If you torture the data long enough it will confess, even to crimes it did not commit.” I’m sure that’s what Lee thinks I’m doing here, but I’m not. I’m asking some questions of the data that don’t appear to have occurred to many, but not for the purpose of forcing it to confess to any preconceived notions of what it should be saying.
Stan Needham, Gary Gulrud, Josh, Enochson,
Thanks for the kind words. Gary, your comment (“I especially respect your limiting the discussion to features of the data rather than leaping ahead to causation which statistics seldom informs.”) especially made my day when I first read it.
jd,
Yes, it has been going up. But by how much, “on average?” That’s obviously a question that a lot of people are interested in, and I don’t think the answer is as simple as what you get from fitting a straight line through the data.
Stephen Mosher,
“Fitting monthly or yearly temperature data with a linear regression is just plain stupid. That’s a theorem somewhere. ok, it’s not stupid. It’s easily communicated.”
I wouldn’t call it “stupid,” but it is often done to data that don’t deserve it, or which should be analyzed more carefully before concluding too much from the slope of a straight line trend.
Ian,
The .001 applies to the decadal form of the estimate. Take 0.00086229 and multiply it by 120, and you have 0.1034748. The latter is what the .001 applies to.

Basil
Editor
March 17, 2008 3:33 pm

randomengineer,
Is that what they call damning with faint praise? 🙂 I appreciate the attitude, but you are still not getting it.
Let’s you (and Lee) put aside the smoothed series in Part III for a moment, and think back to the linear trends estimated in Part II. How do you propose calculating the “average” trend for a trend line which slopes up modestly for a while, shoots sharply up and back down before resuming the modest trend, then jumps up and begins to trend down after 2001? Everybody knows how to read the trend of a straight-line regression. How do you read the “average” trend of a trend line that varies like the trends plotted in Part II? Don’t suggest that I fit a straight line trend through the trend (which is basically what you are proposing for the smoothed series in Part III).
I took a short cut and calculated it from the delta in the end points. But let’s do for the trends in Part II what I did for Lee on the smoothed lines in Part III: first difference the trend, and take the average of the first differences. For HadCRUT in Part II, this produces an “average” trend, per month, of
0.00093967
This compares to the straight line linear trend of
0.00132630
The first number is lower than the second number because of the declining rate of growth at the end of the series. Multiply these numbers by 120 for decadal equivalents. BTW, in my reply to Lee where I calculated the equivalent HadCRUT number for the smoothed series,
0.00086229
I said “Now multiply it by 349, and divide it by 120, and see what you get.” That was incorrect. For these numbers, we just multiply by 120. We divide by 349 when using the delta from the endpoints, which should just give us the same numbers we have above. I.e., if we do the math right, it makes no difference whether we compute the average from the end points, divided by 349, or from the average of the monthly first differences. The numbers should be the same.
Everybody understands the 0.00132630 from a straight line regression, because that is what it is, and can be read from the output of a statistical regression. That doesn’t make it a better, or more correct number than the other two, which I think people are having a hard time grasping because of the novelty of the technique used to derive them. But the technique is sound, and nobody has yet shown otherwise.
In the end, the “trend” is no more meaningful than the delta from the end points. One is simply an estimate of the average monthly change, and the other is a measure of the cumulative change. Choosing which model is “best” has nothing to do with how I’ve calculated the cumulative or monthly or decadal change.
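To make the equivalence explicit: the average of the first differences telescopes to the endpoint delta divided by the number of differences,

$$\frac{1}{n-1}\sum_{t=2}^{n}\left(x_t - x_{t-1}\right) = \frac{x_n - x_1}{n-1},$$

so for a series with 349 monthly first differences, multiplying the average monthly change by 120, or taking the endpoint delta divided by 349 and then multiplying by 120, gives the same decadal figure.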


tommoriarty
March 17, 2008 5:18 pm

I reproduced Basil’s smoothing using the Hodrick-Prescott filter. I also repeated the smoothing by ending three months early (Nov. ‘07), six months early (Aug. ‘07), nine months early (May ‘07), and 12 months early (Feb. ‘07). The results can be seen here
This simply demonstrates that this smoothing technique, like most others, can be quite unstable near the beginning and end of a time series.
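Something like the following reproduces that check: re-run the filter on progressively truncated copies of the series and compare where each smoothed trend ends up. The series and the lambda value here are illustrative stand-ins, not necessarily what was used for the linked plots.

```python
# Sketch of the endpoint-sensitivity check described above: re-run the
# HP filter on progressively truncated copies of the series and compare
# the final value of each smoothed trend. The series and lambda value
# are illustrative stand-ins.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def endpoint_sensitivity(series, months_dropped=(0, 3, 6, 9, 12), lamb=129600):
    results = {}
    for k in months_dropped:
        truncated = series[:len(series) - k] if k else series
        _, trend = hpfilter(truncated, lamb=lamb)
        results[k] = trend[-1]            # last value of the smoothed trend
    return results

# Example with a placeholder series standing in for a real anomaly record:
rng = np.random.default_rng(3)
y = 0.0012 * np.arange(349) + rng.normal(0.0, 0.1, 349)
print(endpoint_sensitivity(y))
```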
By the way, I made a donation to Watts’ tip jar , today and recommend that readers give what they can.
REPLY: Thanks Tom!

steven mosher
March 17, 2008 5:30 pm

basil,
I think we are in violent agreement. The descriptions I hear from climate scientists lead me to believe that the underlying model ( reality) is not
linear. So, fitting the temps to a linear trend is excel easy and not very
informative. That said, I scratch my head when pressed for an alternative.
Corrections for serial correlation are a good start..
I liked your approach, I just need to wrap my skull around it.

Forrest
March 17, 2008 6:31 pm

Basil,
Thank you. I have enjoyed this read… Please forgive me for what I am about to post if I am wrong about what you are trying to say.
Basil is simply saying that a straight linear progression may be leaving out possible information about a larger overall trend. To be honest we do not know what this larger overall trend pertains to, but in looking at a smoothed dataset you can see where the nuances of said trend may be occurring rather than looking at the trend as a whole.
Look at the DOW over the last year. If this were the average temperature, the trend on linear regression would be positive. This is true even though the DOW is now basically at the same level as it was a year ago. Now if you were to look at this as a smoothed trend you would see that it was positive, and now it is negative. You get a better idea of what is really going on from the smoothed trend.
Again sorry if I am speaking out of turn, just trying to help explain what is going on by attempting to give an outside example that may not have people as biased in their thinking… If that is what is going on.

Basil
Editor
March 17, 2008 6:55 pm

Tom,
I’ve replied to you over on your own web page.
Basil

Basil
Editor
March 18, 2008 4:59 am

As the discussion draws to its close, I want to thank everyone who responded, even Lee! Exposing one’s view of things to criticism and the possibility of refutation is the essence of objectivity and critical thinking. While Lee, and perhaps others, remain unconvinced that there is utility in looking at the cumulative change implied by various trending or smoothing methods, I’m still convinced that there is.
But in all the discussion of whether it makes any sense to imply a trend from the average cumulative change of a series, or concern about too much weight being given to the downturn at the end of the period of analysis, a significant observation made in Part II has largely gone overlooked. There I noted:
“Incidentally, this [a lower implied decadal average than what results from a straight trend line] is not entirely owing to fitting a downward trend through the data since 2001. Separate slope and constant dummy variables are also included for the 1998 El Nino, and this accounts for some of the difference. In fact, somewhat surprisingly, when a constant dummy is added for the 1998 El Nino, it reduces the slope (trend) for the non-El Nino part of the time series through 2001. We usually expect a constant dummy to affect the model constant term, not the slope. But in every case here it reduces the slope in a significant way as well, so some [maybe most?] of the difference in the “dT” and the result we’d get from a straight trend line owes to the effect of controlling for the 1998 El Nino.”
For anyone still interested in trying to learn something from all of this, I invite you to look carefully at the following chart, for HadCRUT, which superimposes all three trending/smoothing methods on the data, paying particular attention to around 1995 and 1996, in the months before the 1998 El Nino:
http://i26.tinypic.com/51cwaw.jpg
First, note how poorly the HP smoothing captures what is happening at that point: it begins trending upward while the actual anomalies are moving downward before they begin the rapid climb to the 1998 peak. In the same way, the straight linear trend, that so many seem convinced is the only way to do this, is well above the x-axis at a time when the anomalies are moving downward below the x-axis. Both the straight line trend, and HP smoothing, are excessively influenced by the 1998 El Nino, i.e. “pulled upward” by it, in a way that is unjustified by the data.
Of the three techniques, only the one that models 1998 El Nino as a shock (and “change point” also), avoids this bias and accurately measures the straight line trend in the period prior to the break point used to control for the 1998 El Nino. And that trend, taken directly from my regression output, and not some “dodgy” number I’ve calculated from endpoints (though it would be exactly the same, rendering moot the concerns or criticisms about my “dodgy” technique), converted to decadal form, is 0.118C/decade, well below the 0.159C/decade derived from the straight line for the entire period. The latter is unduly influenced and biased by the 1998 El Nino.
Here’s a prediction for the future. As we move forward in time, the trend from a straight line regression through the data since the beginning of the satellite period, 1979, will drift downward as we move away from 1998 El Nino. When the El Nino becomes part of the earlier half of the data series, those looking for evidence of AGW will finally see the wisdom of controlling for it somehow, because at that point it will actually become a negative influence on the overall trend, being on the other side of the fulcrum point as it were.
Again, thanks for all the dialog.

randomengineer
March 18, 2008 7:57 am

Basil, I wasn’t damning you with faint praise. I simply didn’t agree with your conclusion. This is an example of what I meant by your stuff being interesting. Superimpose your HP plot on this —
http://www.climateaudit.org/?p=2868#comment-225274
— and you appear to have invented the cosmic ray detector that can be seen in temp data. Seems to confirm that temp cycles look to be the same as cosmic ray cycles… then again it could also be that cosmic ray detection equipment is temp sensitive… whoops. 🙂
(Here’s a prediction for the future. As we move forward in time, the trend from a straight line regression through the data since the beginning of the satellite period, 1979, will drift downward…)
Well… Duh. I could tell you that without the fancier versions of showing statistical trends. I’d guess though that you could treat 1998 as an outlier and ignore it and conclude similarly. Or.. you could calc 2nd derivatives and plot regressions on that; it would do the same thing, especially if you tossed 1998 as an outlier. I merely wanted to point out that your plots didn’t add any more than what I already knew or could derive using simpler means.
On the other hand refer back to the cosmic ray and neutron plots and now you have a statistical tool that tells me something I didn’t already know.

steven mosher
March 18, 2008 8:18 am

basil,
In fact they are waiting for the next El nino to bring the “trend” back to the
norm.
There is no climate trend. there is the mean of a calculation. we never observe
the climate. It does not exist.

March 18, 2008 10:47 am

The phase transition at the end of 2001, noted in this excellent series, is also evident in the data of the International Satellite Cloud Climatology Project. Any connection?
REPLY: We’ll take a look. – Anthony

Basil
Editor
March 18, 2008 3:17 pm

Randomengineer,
I’m looking at using LAD (least absolute deviation) to do the regressions. This will produce a straight line, which everybody seems to be able to understand, but will not be as influenced by outliers as least squares regression is. I may return to the use of it with the same four metrics this discussion has focused on, but right now, I’ve turned my attention to the IPCC’s claims about what has happened over the hundred year period 1906-2005.
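Something along these lines is what I have in mind; median (quantile) regression is one standard way of getting an LAD fit, though the routine I eventually use may differ, and the series below is only a placeholder.

```python
# Sketch of a least-absolute-deviation (LAD) trend fit via median
# regression (quantile regression at q=0.5). One standard way to get an
# LAD line; not necessarily the exact routine referred to above.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def lad_decadal_trend(monthly_anomalies):
    t = np.arange(len(monthly_anomalies))
    X = sm.add_constant(t)
    fit = QuantReg(monthly_anomalies, X).fit(q=0.5)   # q=0.5 gives the LAD line
    return fit.params[1] * 120                        # deg C per decade

# Example with a placeholder series containing a 1998-style spike:
rng = np.random.default_rng(4)
y = 0.0012 * np.arange(349) + rng.normal(0.0, 0.1, 349)
y[228:240] += 0.5                                     # crude stand-in for the 1998 El Nino
print(f"{lad_decadal_trend(y):.3f} C/decade")
```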
Steven Mosher,
🙂


JM
March 20, 2008 9:50 am

“I’m taking the data for what it’s worth, and am overlooking any questions about the reliability of the surface record,”
Then why do you constantly point to those questions?
The usual – the primary – assumption with any data series is that a.) it will have errors, and b.) those errors will be randomly distributed.
Every analysis made on this site is based on those assumptions. The statistical techniques waved at every opportunity here as some sort of Better Business Bureau badge of approval, just flat out don’t work without the assumption of random distribution of errors. That is a fact of life.
But for apparently agenda related reasons, you – and others – constantly insinuate that the errors are systemic – in one direction only, rather than having the random character assumed in your own analysis. Yet you use bogus application of statistical techniques to support those insinuations.
A claim of unidirectional error in all data series – that all errors in the measurement of temperature are positive, with none negative – is an extraordinary claim, and you better supply extraordinary evidence. Muttered insults aren’t going to cut it.
If you believe there are systemic errors you should make that case, not just say ‘abracadabra’ with statistical magic and prestidigitation, while stroking your beard in false concern.
You can’t have it both ways. You can’t on the one hand make arguments from data, and then hint that the data is suspect. Make an honest argument from the data available, or alternatively question – with real argument, not just concern trolling – that the data should not be relied on.
Otherwise you’re just substituting major league noise and nitpicking for honest discourse.

Steve Fitzpatrick
March 20, 2008 11:02 am

Basil,
Thanks for an interesting analysis of the recent temperature data. My only comment is for people like Lee: Please tell us when a temperature trend is long enough/clear enough to infer that the trend is real. There is no statistically significant trend in average global temperature since 2001. How long would the current lack of statistically significant warming have to continue before it would be reasonable for someone to doubt IPCC’s 2007 projections of warming (and resulting environmental disruptions) due to carbon dioxide over the coming two decades? 3 more years? 5 more years? 20 more years? Under what circumstances could people reasonably conclude that the IPCC projections of 0.2C per decade are simply wrong? Would a falling temperature trend for the next 5 years be enough?
The researchers at the Hadley Center have demonstrated the courage of their convictions, and have predicted no warming until the end of 2009 (due mostly to ENSO), followed by a return to “rapid warming” in 2010 and beyond.
This prediction by the Hadley Center is enormously helpful, for it begins to place the climate modelers in the same boat as everyone else who works in science: good science makes accurate predictions, while bad science makes incorrect predictions. With the Hadley Center on record about the next several years, reasonable people will be able to evaluate the validity of the climate models the Hadley Center is using to make their predictions.
If there are no circumstances under which we can conclude from incorrect predictions that the models are wrong, or if the modelers simply refuse to make or be judged by the model predictions, then the modelers have left the field of science and entered the field of theology….. and they should just be ignored by scientists (and everyone else).