To Tell the Truth: Will the Real Global Average Temperature Trend Please Rise? Part I

A guest post by Basil Copeland

[NOTE: After seeing some other analyses posted in comments by Basil, I’ve invited him to post his work here. I hope you will enjoy it as much as I have so far – Anthony]

Everybody talks about the weather, but rarely has a scientific debate engaged the public as much as concerns about climate change and anthropogenic global warming.  It is a scientific issue or debate that everyone can have an “informed” opinion about just by going outside, or by thinking about how the climate has changed in their lifetime.  If they cannot understand the physics of GCMs (global climate models), they can read a thermometer and opine on whether it is getting colder or warmer “than it used to be.”  Few scientific issues or debates are as reducible to an everyday metric — a thermometer reading — as the debate over global warming.

The experts merely fan the fires when they issue press releases about how this year or that is the warmest since whenever, or that the earth’s temperature is rising at X degrees per decade and is likely to continue rising Y to Z degrees through the rest of the century.  The truth is that taking the earth’s temperature is no easy task.  Some would argue that it is not even possible to speak of a global temperature as such, e.g. that climate is regional, not global.  Others, such as the host of this blog, have drawn attention to serious questions about the accuracy of the station records on which estimates of global average temperatures are frequently based.  Then there are the stat geeks, like myself, who understand how hard it is to accurately or meaningfully measure the “average” of anything!  It calls to mind the old saw about a statistician being someone who can stand around with one foot in a bucket of boiling water, and the other foot in a bucket of ice water, and say that “on the average” they feel fine.

But despite all the legitimate reasons to question the usefulness of global average temperature metrics as measures of climate change or global warming, we’re not likely to stop using them any time soon.  So we should at least use them as well as we can, especially when it comes to divining trends in the data, and even more so when it comes to extrapolating such trends.  In a series of recent posts, our host has drawn attention to the dramatic drop in global average temperature from January 2007 to January 2008, and more recently to what appear to be essentially flat trends in global average temperature metrics over the past decade.  Not surprisingly, a vigorous discussion has ensued about how reliable or meaningful it is to base inferences on a period as short as ten years, not to mention a one-year drop like the one from January 2007 to January 2008.  While there are legitimate questions one might raise regarding the choice of any period for discerning a trend in global average temperature, there is no a priori reason why a period of 10 years could not yield meaningful insights.  It all depends on the “skill” with which we look at the data.

I’m going to suggest that we begin by looking at an even shorter period of time: 2002:01 through 2008:01.  Before I explain why, I need to explain how we will be looking at the data.  Rather than the familiar plot of monthly temperature anomalies, I want to call attention to the seasonal difference in monthly anomalies.  That, in a sense, is how this all started, when our host called attention to the sharp drop from January 2007 to January 2008.  That 12 month difference is a “seasonal difference,” when looking at monthly data.  The average of 12 monthly seasonal differences is an estimate of the annual “trend” in the data.  To illustrate, consider the following series of monthly seasonal differences:

0.077, 0.056, 0.116, 0.036, -0.067, -0.03, -0.119, -0.007, -0.121, -0.176, -0.334, -0.595

These are the 12 monthly seasonal differences for the HadCRUT anomalies from February 2007 through January 2008.  During that 12 month span of time, the average monthly seasonal difference was -0.097, and this is an estimate of the annual “trend” in the anomaly for this 12 month period.
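To make the arithmetic concrete, here is a minimal sketch in Python (the function and variable names are mine, purely for illustration) that forms seasonal differences and then averages the 12 values quoted above:

```python
# Minimal sketch: seasonal (12-month) differences and their average.

def seasonal_differences(anomalies, lag=12):
    """Return the series of lag-month differences, x[t] - x[t - lag]."""
    return [anomalies[t] - anomalies[t - lag] for t in range(lag, len(anomalies))]

# The 12 HadCRUT seasonal differences for February 2007 through January 2008 quoted above:
diffs = [0.077, 0.056, 0.116, 0.036, -0.067, -0.03,
         -0.119, -0.007, -0.121, -0.176, -0.334, -0.595]

annual_trend = sum(diffs) / len(diffs)
print(round(annual_trend, 3))  # -0.097, the estimated annual "trend" for this span
```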

With that by way of introduction, take a look now at Figure 1.  This figure plots cumulative seasonal differences going back in time from the most recent month, January 2008, for each of the four global average temperature metrics under consideration. 


Figure 1

While they vary in the details, they all turn negative around the end of 2001 or the beginning of 2002.  At the point where a series crosses the x-axis, the cumulative seasonal difference from that point until January 2008 is zero.  Since the “trend” over any period of time is simply the sum of the seasonal differences divided by the number of seasonal differences, that is just another way of saying that since near the end of 2001, there has been no “net” global warming or cooling, i.e. the “trend” has been basically flat, or near zero.  Yet another way to put it is that over that period of time, negative and positive seasonal differences have worked to cancel each other out, resulting in little or no change in global average temperature.

But Figure 1 tells us more than just that.  Whenever the cumulative monthly seasonal difference is below zero, the average monthly seasonal difference over that time frame is negative, and the annual trend is negative also.  For most of the time since 2001, the cumulative seasonal difference has been negative, indicating that the average seasonal difference, and hence “trend,” has been negative. 
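For readers who want to reproduce Figure 1, the cumulative series can be built along these lines; this is only a sketch, assuming sd is a chronological Python list of seasonal differences ending with January 2008:

```python
def cumulative_from_latest(sd):
    """Cumulative seasonal differences, accumulated backwards from the
    most recent month, so cum[i] is the sum of sd[i:].  Where the curve
    crosses zero, the average seasonal difference (the "trend") from
    that month through the end of the series is zero."""
    cum = []
    running = 0.0
    for x in reversed(sd):
        running += x
        cum.append(running)
    cum.reverse()
    return cum
```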

This is shown, in somewhat different fashion, in Figure 2. In the most recent 12 months, the trends vary from -5.04% to -9.70%.  They diminish as we go back in time toward 2001, but are mostly negative until then, with the exception of positive trends at 36 months for GISS and UAH_MSU. 


Figure 2
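Figure 2 amounts to averaging the seasonal differences over trailing windows of increasing length. A sketch of that calculation (again assuming sd holds the seasonal differences in chronological order, ending with the most recent month) might look like this:

```python
def trailing_trends(sd, step=12):
    """Average seasonal difference over the last k months, for
    k = 12, 24, 36, ...; each value estimates the annual trend over
    the trailing window ending at the most recent month."""
    return {k: sum(sd[-k:]) / k for k in range(step, len(sd) + 1, step)}
```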

Finally, in Figure 3, we have the more familiar anomalies plotted, but just for the period 2001:01 through 2008:01.  The basic picture is the same.  At the end of the period the anomalies are below where they were at the beginning of the period, indicating an overall decline in the anomalies over this period of time.  Interestingly, the UAH_MSU series dips below the x-axis four times during this period.  When we consider that the metrics have all been normalized to a zero anomaly around their 1979:01 to 2008:01 means, that indicates that within the last six years, the UAH_MSU series has returned to, and dipped below, the 1979:01 to 2008:01 mean anomaly four times.  All of the metrics have dipped below their 29-year mean twice in the last six years, and are well below the mean at the end, in January 2008.


Figure 3 – click for larger image

However you look at the data, since 2001 the “trend” in all four metrics has been either flat or negative.  There has been no “global warming” since 2001, and if anything, there has been “global cooling.”  But is it “statistically significant”?  I imagine that one could fit some simple trend lines through the data in Figure 3 and show that the trend is negative.  I would also imagine that given the variability in the data, the trends might not be “statistically significant.”  But since statistical significance is often measured by reference to zero, that would be just another way of saying that there has been no statistically significant warming since 2001.
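For what it is worth, the simple trend lines mentioned above would ordinarily be fit by ordinary least squares of the anomaly against time. The sketch below shows the idea; note that it ignores the serial correlation raised in the comments, so its significance test is optimistic:

```python
import numpy as np
from scipy import stats

def ols_trend(anomalies):
    """OLS slope of anomaly versus time, with its p-value.
    Serial correlation is ignored, so the p-value is optimistic."""
    t = np.arange(len(anomalies))
    result = stats.linregress(t, anomalies)
    return result.slope * 12, result.pvalue  # slope converted from per month to per year
```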

But that may not be the most insightful way to look at the data, or to frame the issue.  Prior to 2001 we have a much longer series of data in which there has likely been a positive trend, or “global warming.”  What can we say, if anything, about how the period since 2001 compares to the period before it?  Rather than test whether the trends since 2001 are significantly different from zero, why not test whether the trends since 2001 are significantly different from the trends in the 23 years that preceded 2002?  We will look at that intriguing possibility in Part II.

69 Comments
Chris Knight
March 13, 2008 4:27 am

Thank you Nick, and Basil
All I ended up doing was:
Row n | A: date | B: raw HadCRUT | C: =Bn-B(n-12) | D: =AVERAGE(C(n-11):Cn)
Plot A vs D as a scatter plot, and change chart type to an area plot. Lots of regular oscillations after 1882 – much noisier before.
You lose the first two years of data due to the method of calculation (n has to be >= 24), so the first plotted data point is December 1851. Then the timeline needs to be adjusted so that events line up with their dates, rather than being displaced by the 12/2 = 6 month lag that this method introduces.
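For anyone who would rather script this than build it in a spreadsheet, the same calculation might look like the following in pandas (a sketch; hadcrut is assumed to be a monthly anomaly Series indexed by date, not any particular published data file):

```python
import pandas as pd

def smoothed_seasonal_difference(hadcrut: pd.Series) -> pd.Series:
    """Column C: the 12-month (seasonal) difference; column D: a trailing
    12-month average of C.  The first 23 values come out NaN, which is the
    two lost years noted above (n has to be >= 24)."""
    c = hadcrut.diff(12)                 # C: Bn - B(n-12)
    d = c.rolling(window=12).mean()      # D: AVERAGE(C(n-11):Cn)
    # Shifting the index back 6 months would center each average and remove
    # the half-year displacement mentioned above.
    return d
```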
So what does it really tell us apart from a smoothed down version of the anomalies?
They go up and down and sweep out areas, which could be measured by integration, but it seems a little OTT.
I look forward to Basil’s Part II.

steven mosher
March 13, 2008 4:28 am

Atmoz, i thought your post and graph were spot on. 10 years is too short to
capture the underlying climate trend accurately if you have weather events that
can last 3-7 years.

March 13, 2008 7:34 am

Atmoz…. I got confused. I read the first part in detail, and then must have skimmed over the later bits. That’s a problem with me in blog comments! (I should learn to never comment on a post on “blog B” at “blog A”, relying on my memory.)
I see what you intended to show now. I posted comments over there about a few caveats.

randomengineer
March 13, 2008 7:53 am

(10 years is too short to capture the underlying climate trend accurately if you have weather events that can last 3-7 years.)
I believe you.
You’re stating it’s a Nyquist problem: you need a record N times the length of the events (3-7 years) to see a worthwhile signal, just as the sampling rate needs to be N times the highest frequency you intend to detect, where N should be at least two.
So let’s assume you’re correct.
This is exactly why the Tamino chart Lee shows is worthless; if you look at ENSO and PDO and solar cycles then you have the same underlying Nyquist problem. Anything less than say 120 years of data is suspect; you can’t see the PDO/solar/ENSO signal effect otherwise.

steven mosher
March 13, 2008 8:46 am

I knew somebody would bring up Nyquist. I was hesitant to. I’m thinking that Atmoz should do some artificial temp series, with some random ENSO-type events, and see what pops out.
It seems intuitive that if you are looking for a DC bias of sorts, you can’t find it by sampling over a period that is dominated by “natural” weather frequencies.
Conceptually, if you have events that last 7 years, then you’re not going to get any good look at the underlying bias unless you sample out beyond that period (2x being a good guess, hat tip to Nyquist).
Maybe LUCIA will look at this.
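That suggestion is easy to try. Below is a purely synthetic sketch (made-up numbers and parameters, not any real temperature metric) that adds persistent, ENSO-like weather noise to a small built-in trend and then compares a 10-year trend fit with a fit to the whole record:

```python
import numpy as np

rng = np.random.default_rng(0)

months = 50 * 12                        # 50 years of synthetic monthly data
true_trend = 0.15 / 120                 # a built-in trend of 0.15 deg per decade, per month

# "ENSO-like" weather noise: a slowly decaying AR(1) process, so that excursions
# persist for years at a time (made-up parameters, purely illustrative)
phi, innov_sd = 0.95, 0.05
weather = np.zeros(months)
for i in range(1, months):
    weather[i] = phi * weather[i - 1] + rng.normal(0.0, innov_sd)

series = true_trend * np.arange(months) + weather

# Straight-line (OLS) trend over just the last 10 years vs. the full record
slope_10yr = np.polyfit(np.arange(120), series[-120:], 1)[0] * 120
slope_full = np.polyfit(np.arange(months), series, 1)[0] * 120
print(f"10-year fit: {slope_10yr:.2f}, full-record fit: {slope_full:.2f} (true: 0.15 deg/decade)")
```

Re-run with different seeds, the 10-year estimate wanders widely around the built-in trend, while the full-record fit stays close to it, which is the intuition behind needing a record much longer than the weather events themselves.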

Bruce
March 13, 2008 9:02 am

Atmoz,
I was looking at your post on your site.
What struck me is the implication that the .2C per decade warming is relatively recent … and therefore CO2 could be the culprit.
What about 1910 to 1940? Isn’t that a .2C per decade rise?
What would happen if you used 1940 as a starting point? Would not the rise drop in half?
Starting points and end points matter.
Picking the mid 1970’s as a starting point maximizes warming.
Picking the 1940’s would minimize it.
Picking 1998 flattens it.
Starting in 1934 in the USA should show a flat line.
Ignoring 1910 to 1940, in my opinion, wrongly aids the faulty theory that CO2 is the culprit.
I like this graph: http://www7.ncdc.noaa.gov/CDO/cdodivisionalselect.cmd?nationSelect=110&regionSelect=101&startMonthSelect=01&startYearSelect=1988&endMonthSelect=01&endYearSelect=2008&outputRadio=staticGraph&staticGraphElementSelect=TMP&filterSelect=00&method=doStaticGraphOutput&reqtype=nation
It clearly says, “You want to spend 20 trillion to fix WHAT?????? There is nothing to fix.”

Dell
March 13, 2008 9:10 am

JM (17:45:15) :
“Dell (11:55:56), your referenced article is analysing US land temperature.”
“The US is not the world.”
Yes, it is based upon temps in the US and not the world, but the US temp data set is the most widespread, complete, long-term land data set in the world. Do we really know what actual temps were in the first half of the 20th century in third world countries? Yet much of the pre-1950 world temps are based on estimates, not real temps.
If you can show me a global temp database that is based entirely on actual temps, and not guesstimates for pre-1950, then we can talk century-long global temps and solar trends.
But instead, we have the very persons who are promoting global warming alarmism estimating much of the pre-1950 global temps. The fact that there is so much discrepancy between the patterns of the US actual data and the global estimated data pre-1950, yet almost identical patterns post-1960, makes the pre-1950 global temp estimates highly suspect.

Bob B
March 13, 2008 9:28 am

randomengineer you are exactly correct. It would be preferable to have N=2x or higher and the signal must be bandlimited.

Jim Arndt
March 13, 2008 10:13 am

Hi,
Bob B and randomengineer, the frequency of events varies: ENSO 3 to 7 years, and PDO 25 to 30 years. I would think for something like the PDO you would need at least 3 cycles, and maybe 4, to see the net effect on climate, and we won’t go into solar cycles of 200 years, 400 years, and so forth.
Lee, yes, we are still waiting for the formula and data for the hinge points.

Stan Needham
March 13, 2008 11:03 am

Bruce,
Just exactly what does the graph in your 9:02:32 post show, and for what geographic area?

March 13, 2008 11:22 am

My understanding of the Nyquist frequency is that it’s used to determine the minimum sampling rate needed to detect the high-frequency, i.e. fast, variations. When discussing the trend in AGW, we don’t care about those so much; they are already averaged out in the monthly or annual averages.
Here, we are discussing problems detecting low-frequency oscillations. Detecting the lowest frequencies is also difficult.
And the arguments over whether or not they exist are impossible to fully resolve without long data sets.
On the low-frequency side: we know there are 11-year solar cycles; that means if you don’t know when they start or end, you would need to sample for at least 11 years to average over a cycle. Suppose someone thinks this matters a lot. To disprove that, and convince them, you need to collect data over several 11-year periods and show it to them. (Others might be convinced by order-of-magnitude estimates, but in some cases in AGW discussions these order-of-magnitude estimates presuppose that we can estimate the effects we are arguing over.)
How much solar variability matters to the final computation of the underlying trend in C/century depends on the magnitude of the solar variability relative to other forcings.
AGW theory says the solar variability is now piddly compared to the underlying trend due to GHG forcing. But to test that claim empirically, we need to collect data over at least one or two solar cycles. If we can detect the effect of solar variations, that’s a problem for AGW theory. If we can’t detect the effect of solar variations, that tends to confirm AGW. (Of course, due to volcanism and other factors, we might need many solar cycles!)
Similarly, if the PDO is thought (by anyone) to affect GMST significantly, and we wanted to convince them they were wrong, we would need to measure GMST over at least one or two PDOs and see whether or not the PDO influence could be detected, and to show its effect is less than the overall trend.
Once again, the only empirical way to test whether the PDO matters enough to affect any underlying trend due to AGW is to get data over several PDO cycles.

Bruce
March 13, 2008 11:37 am

Stan,
USA. Temperature

Bob B
March 13, 2008 12:51 pm

Tripe like “tipping point,” warmest year 1998, 2006, 2007, polar bears dying, Arctic sea ice?

Stan Needham
March 13, 2008 1:18 pm

Bruce,
I was actually being a little facetious. I assumed TMP meant temperature and the “National” in “National Summary” probably wasn’t Guatemala. It’s just that it looks so odd without the hockey stick on the right end, heh.

randomengineer
March 13, 2008 1:36 pm

Lucia — (My understanding of the Nyquist frequency is that it’s used to determine the minimum sampling rate to detect the high frequency –i.e. fast — variations.)
Sort of… think of this more as the number of cycles you have to have before you can detect them. It works for any frequency if you think of it that way. You are right that you need 2x PDOs to detect any signal, 2x of the 11-year solar cycle, etc.; thus Nyquist still applies.
And you’re right in that Nyquist is certainly more associated with frequency-domain stuff like audio CDs and 44.1 kHz sampling to reliably capture 20 kHz audio.

Bruce
March 13, 2008 10:31 pm

Stan, the hockey stick is there, but it’s on its side. 🙂

Stan Needham
March 14, 2008 6:35 am

Stan, the hockey stick is there, but it’s on its side.
Well, that is surely a sneaky way to hide it. You’re right, though, when I turn my head sideways it becomes quite obvious — thanks Bruce. ROTFL!!!

TCO
March 16, 2008 4:28 pm

With this kind of multi-year periodicity in the data, you can’t tell anything from a 10-year run. It’s amazing that the same guys (“my side”) who complain about the reductions in degrees of freedom from autocorrelation would advocate an analysis like this with too little data.
The stuff about seasons and such is poorly explained and I don’t see how it adds more degrees of freedom to deal with the short sample period (given ENSO).
In addition, the agnosticism on Watts’s earlier one-year analysis is troubling. It’s like he doesn’t want to call out “our side”.
Well, guys… our side should be truth.

April 1, 2008 7:46 am

[…] other heat stores in the climate system are too small (and the atmosphere has clearly not warmed over the last few years). Global sea ice cover is actually above average at present (the Antarctic sea ice is at a near […]
