Guest Post by Jeff Id:
“Tamino” has made a couple of posts arguing that the last 10-year drop in temperature is not statistically significant, and therefore isn’t real. He went too far in his latest one, claiming it was a tactic of some kind of creature called a denialist to confuse and confound the public.
Let’s see what Tamino has been saying on his blog link HERE.
Some of you might wonder why I make so many posts about the impact of noise on trend analysis, and how it can not only lead to mistaken conclusions about temperature trends, it can be abused by those who wish deliberately to mislead readers. The reason is that this is still a common tactic by denialists to confuse and confound the public.
I just hate bad science. First he points out how Bjorn Lomborg made some comments about temperatures decreasing, after placing the ever more popular label of denialist on him, implying Lomborg’s statements were intended to confound and confuse the public. Here’s the main point of what Bjorn Lomborg said:
They (temperatures) have actually decreased by between 0.01 and 0.1C per decade.
Ok, so graphs like the one below are the reason Bjorn Lomborg is a denialist.
I copied this graph from Digital Diatribes of a Random Idiot – a great unbiased site for trends (link on the right). Note the slope of -0.0082 (in 0.01 C/month units, or -0.00098 degC/year – thanks to Digital Diatribes’ comment below) in the equation on the graph. Most of us know this is actual data and is correct; in fact every measure is showing similar results. The earth stopped warming – a very inconvenient truth. So, Tamino, what’s the argument? Why are the evil and uncooperative denialists wrong?
Statistics of course.
Here come the numbers from Tamino.
The most natural meaning of “this decade” is — well, this decade, i.e., the 2000’s. So I computed the trend and its uncertainty (in deg.C/decade) for three data sets: NASA GISS, RSS TLT, and UAH TLT, using data from 2000 to the present. To estimate the uncertainties, I modelled the noise as an ARMA(1,1) process. Here are the results:
Data   Rate (deg.C/decade)   Uncertainty (2-sigma)
GISS   +0.11                 0.28
RSS    +0.03                 0.40
UAH    +0.05                 0.42

All three of these show warming during “this decade,” although for none of them is the result statistically significant.
Ok, Tamino has calculated GISS, RSS and UAH – one ground measurement and two satellite. For those of you who don’t spend your afternoons and weekends digging into this: ARMA is a fancy-sounding method for what ends up being a simple process Tamino has used to estimate the standard deviation of the temperature. Sometimes it seems the global warming guys believe the more complicated the better, but no matter. He has a 2-sigma column, which represents about 95% confidence. He then goes on to say that because the 2-sigma of 0.28 or 0.40 is bigger than the trend, the trend is not statistically significant. He repeats the comment below.
Let’s make the same calculation using data from January 1998 to the present:
Data   Rate (deg.C/decade)   Uncertainty (2-sigma)
GISS   +0.10                 0.22
RSS    -0.07                 0.38
UAH    -0.05                 0.38
Finally one can obtain negative trend rates, but only for 2 of the 3 data sets. But again, none of the results is statistically significant. Even allowing this dreadfully dishonest cherry-picked start date, the most favorable …
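To make Tamino’s significance test concrete, here is a minimal sketch (my own illustration, not his code) of fitting a trend to a monthly anomaly series and comparing the slope against its 2-sigma uncertainty. The data here are synthetic, and I use plain OLS errors; Tamino’s ARMA(1,1) step inflates that uncertainty further to account for autocorrelated noise, which widens the interval.

```python
# Illustrative sketch (not Tamino's actual code): fit a linear trend to a
# monthly anomaly series and check whether the slope clears the 2-sigma bar.
import numpy as np

def trend_with_2sigma(y):
    """Return (slope, 2-sigma slope uncertainty) in y-units per time step."""
    x = np.arange(len(y), dtype=float)
    n = len(y)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = np.sum(resid**2) / (n - 2)                # residual variance
    se = np.sqrt(s2 / np.sum((x - x.mean())**2))   # std error of the slope
    return slope, 2.0 * se

rng = np.random.default_rng(0)
months = 12 * 9                       # roughly "this decade", 2000-2008
# tiny true trend plus noise at an assumed 0.1 C scatter
y = 0.0001 * np.arange(months) + rng.normal(0.0, 0.1, months)

slope, two_sigma = trend_with_2sigma(y)
print(f"decadal rate: {slope*120:+.2f} C, 2-sigma: {two_sigma*120:.2f} C")
print("significant" if abs(slope) > two_sigma else "not significant")
```

On data this short and this noisy, the 2-sigma band usually straddles zero, which is exactly the situation in Tamino’s tables above.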
Now, Tamino claims to be a statistician, so I can’t see how he made such a simple boneheaded error, but if he wants to pitch softballs, I’ll hit ’em. Just to make sure he’s in good and deep, here’s one more quote:
I’ve previously said “Those who point to 10-year “trends,” or 7-year “trends,” to claim that global warming has come to a halt, or even slowed, are fooling themselves.” I may have been mistaken; is Lomborg fooling himself, or does he know exactly what he’s doing?
So, Mr. Lomborg, we’re all very curious: how did you get those numbers?
Wrong turns everywhere
The first and really obvious error Tamino makes is referring to the short-term variation in temperature as noise. Noise, in the context of sigma, is related to measurement error. How can we determine the measurement error of the three methods GISS, RSS and UAH? Well, the graph of the three is below.
The first thing you notice from this graph is that the 3 measurements track each other pretty well. The signal is therefore not completely noise. Well, what is the level of noise? We have, above, 12 measurements per year times 29 years. So we don’t need ARMA or other BS; we can simply subtract the data. I put the numbers in a spreadsheet and calculated the difference between RSS and GISS, RSS and UAH, and UAH and GISS. With 348 measurements for each type of instrument I was able to get a very good estimate of the standard deviation of the actual measurements. Again, no ARMA, just the difference between the graphs.
GISS – RSS: one sigma 0.099, two sigma 0.198
RSS – UAH: one sigma 0.101, two sigma 0.202
GISS – UAH: one sigma 0.058, two sigma 0.116
These are actual numbers, and they are substantially lower than Tamino’s estimated two sigma, but still bigger than the 0.1 C per decade – although the two-sigma GISS – UAH is within a 90% confidence interval already!
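The subtraction approach above can be sketched like this (my own illustration, with synthetic series standing in for GISS/RSS/UAH; the 0.07 per-instrument noise level is an assumption chosen for the demo). If two instruments see the same underlying signal plus independent noise, the standard deviation of their difference captures the combined measurement noise of the pair:

```python
# Sketch of the "just subtract the series" noise estimate (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 348                                    # 12 months x 29 years
signal = 0.0005 * np.arange(n)             # common underlying trend
giss = signal + rng.normal(0, 0.07, n)     # assumed per-instrument noise
rss  = signal + rng.normal(0, 0.07, n)

diff = giss - rss                          # common signal cancels out
one_sigma = diff.std(ddof=1)
print(f"one sigma of difference: {one_sigma:.3f}, two sigma: {2*one_sigma:.3f}")
```

With independent noise of 0.07 in each series, the difference has a sigma of about sqrt(2) × 0.07 ≈ 0.099 – the same ballpark as the GISS – RSS figure above.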
This isn’t the end though. Tamino ended his discussion there implying shenanigans and other things of those who see a trend.
Both of our standard deviation calcs are for a SINGLE measurement NOT a trend.
This is a big screw-up. How a self-proclaimed statistical expert can miss this is beyond me. Anyway, none of us is universally right every day, but most hold their tongue rather than post a big boner on the internet. Most scientists realize that when you take more than one measurement of a value you improve the accuracy. So, being a non-genius, I used R to calculate the statistical certainty of the slope when taken over 10-year trends. Thanks again to Steve McIntyre for pointing me to this software. I don’t love it, but it is convenient.
library(nlme)  # gls() lives in the nlme package

# load the difference series (GISS-RSS, RSS-UAH, GISS-UAH) as three columns
t = read.csv("c:/agw/giss data/10 year variation.csv", header = FALSE)
x = 1:length(t[, 1])

# fit a line to each column; the difference of the confint bounds is the
# width of the 95% confidence interval on the slope
y = t[, 1]
a = gls(y ~ x)
confint(a)
confint(a)[2, 2] - confint(a)[2, 1]

y = t[, 2]
a = gls(y ~ x)
confint(a)
confint(a)[2, 2] - confint(a)[2, 1]

y = t[, 3]
a = gls(y ~ x)
confint(a)
confint(a)[2, 2] - confint(a)[2, 1]
What this script does is load the difference files, i.e. GISS-UAH, fit a line to each, and present the width of the confidence interval on the slope coefficient at 95 percent confidence, which is about two sigma. The confidence of the slope of the trend is as follows:
GISS – RSS Two sigma 0.00108 DegC/year
RSS-UAH Two sigma 0.001068 DegC/year
GISS-UAH Two sigma 0.0005154 DegC/year
Despite a standard deviation of 0.02, we have a slope measurement twenty times more accurate: 0.001 degC/year!
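For readers without R, here is a rough Python analogue of the script above (illustrative only: ordinary least squares instead of gls, and a synthetic flat difference series with an assumed 0.05 scatter). It demonstrates the point in the text: with 348 monthly points, the 95% confidence interval on the slope is far tighter than the scatter of any single point.

```python
# OLS slope confidence interval on a synthetic 348-point difference series.
import numpy as np

rng = np.random.default_rng(2)
n = 348
x = np.arange(n, dtype=float)
y = rng.normal(0.0, 0.05, n)              # a flat, noisy difference series

slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s2 = resid @ resid / (n - 2)              # residual variance
se = np.sqrt(s2 / np.sum((x - x.mean())**2))
ci_width = 2 * 1.96 * se                  # full width of the ~95% interval

print(f"point scatter (1 sigma): {y.std(ddof=1):.3f} degC")
print(f"95% CI width on slope:   {ci_width:.5f} degC/month")
```

The slope interval comes out around 1e-4 degC/month, roughly 0.001 degC/year – the same order as the gls results above – because the slope uncertainty shrinks with both the noise level and the length of the record.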
Conclusions
1. We can say with a high degree of certainty that we know the trend of temperature for any ten-year plot to within 0.01 degC/decade.
2. We can say that temperatures have dropped this past decade, just as our eyes looking at the graphs had already told us.
3. We can also say that Tamino owes a few more apologies.
He and Real Climate still don’t let me post on their blogs!
I wonder why?
“I wonder why?”
You’ve gotta fit the agenda….
Conformance to the AGW mantra = Expertise and Opportunity to Speak!
I’m actually reminded of the scene of Neo’s mouth being closed in the first Matrix movie… No opportunity to complain or contribute if you cannot be heard.
Wait a minute here. Does this mean that certain of the AGW proponents are now *gasp* in a state of denial?!!
Are you confusing measurement noise (i.e. difference between RSS and Uah) for ‘weather’ noise (i.e. the difference in temperature due to variations such as ENSO?)
This guy is a politically motivated nobody, why even waste your time with him?
REPLY: Funny, he says the same thing about me. 😉
– Anthony
Jeff: A couple of questions:
Does this also work with the RSS, GISS and UAH data, individually, for each slope? Not just the slopes of the difference?
If I am interpreting this right, this means that the slope over a period of time (10 years), is much more accurate than a single point. No surprise there. But, one of the standard counter arguments, is that longer term trends (1979-2008) show positive trends, vs 1998-2008. What does this show over the period 1979-2008? Does the two-sigma value get higher? Does the two-sigma value get higher as the end point progresses toward the present (1979-1998 vs 1979-2008)? If so, would this indicate that the longer term trend is getting less reliable, as the two-sigma gets a higher value?
Lastly, would you like someone else to try to post this at Tamino’s?
Ok, now I am confused. Isn’t what Tamino refers to as noise NOT measurement error, but the “weather” noise hiding the linear trend?
Thank you Mr Id, I enjoyed that.
One other note to add regarding the trendline. The R-squared number is a measly 0.0003, indicating no statistically significant temperature change over time, down or up.
Another question to consider. Which of these statements is true:
The accuracy of our temperature measurement is better than 0.2 degrees. Therefore the lack of warming over the last 8 years is not due to measurement error. And the warming over the last 30 years is definitely not due to measurement error.
The accuracy of the temperature measurement is worse than 0.2 degrees. Therefore the lack of warming over the last 8 years could easily be measurement error.
The accuracy of the temperature measurement has been good in the last 10 years while we have observed no warming, and poor over the last 40 years when we observed warming. I justify this claim due to ??
Since many of the historical measurements, and even some of the current ones, are people reading a min/max thermometer, I’d say, just at a very rough guess, that the accuracy of the thermometer is worse than 0.2 C and the skill of the reader adds another order of magnitude of measurement error.
Tamino’s Folly is simply another example of insignificant climate trends (too small a time frame or relative degree of change) that are overwrought with questionable analysis trying to prove a flawed point or support a political agenda.
This may be a little OT, but underscores the notions of time frame and significance of change. Did anyone see in Nature that Global Warming causes giant magnets? The article said, “Scientists have unearthed giant magnetic fossils, the remnants of microbes buried in 55-million-year-old sediment. The growth of these unusual structures during a period of massive global warming provides clues about how climate change might alter the behaviour of organisms.”
OMG! If we don’t stop AGW now, we’re going to get giant magnets! A missing detail from the article is that there was no ice on the planet at the time, with a sea level 200 meters higher than now. The IPCC predicts a sea level rise in the next century of less than 1 meter. I guess we can all still be concerned about GW-caused magnets.
The URL is here: http://www.nature.com/news/2008/081020/full/news.2008.1180.html
I think Tamino has made the transition from Bulldog to Mad Dog.
He’s howling, not at the moon, but at the Sun, the quiet Sun.
sportute: Granted, the R2 value is low.
It approaches unity, though, as you move the start date forward. Using HadleyCRUT3 data, I get R2 at 0.9872, for 2005-2008 (using Hadley’s projected 2008 global average anomaly).
R2 is high, but the record short.
But, in my opinion, this only shows the value(?) of using arbitrary start and stop dates. I can get nearly any trend, and nearly any value for R2, by changing the dates.
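The start-date sensitivity described here is easy to demonstrate with a quick sketch (my own synthetic example, not Hadley data): on a noisy series that is flat by construction, the fitted slope and R-squared swing widely as the start month moves forward.

```python
# Fit trends to the same flat-but-noisy series from different start dates.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(0.0, 0.15, 120)        # ten years of monthly "anomalies"

results = {}
for start in (0, 60, 96):             # start 10, 5, and 2 years from the end
    seg = y[start:]
    x = np.arange(len(seg), dtype=float)
    slope, b = np.polyfit(x, seg, 1)
    fit = slope * x + b
    r2 = 1 - np.sum((seg - fit)**2) / np.sum((seg - seg.mean())**2)
    results[start] = (slope, r2)
    print(f"start month {start:3d}: slope {slope:+.5f}/month, R^2 {r2:.3f}")
```

The shorter the window, the more the fitted trend is at the mercy of whichever wiggle happens to sit at the endpoints.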
You know, temperature is a state function. That means that the temperature is an absolute, and is independent of the path that it took to get there.
It actually means that the trend is not particularly important at any given point in time – its the actual value that matters.
Put it this way: I get a glass and fill it with water. It takes 10 seconds to fill. I then drink it in 3 seconds. Now: I could plot water in the glass over time, fit a trend line, and claim that the trendline “proves” that the glass is filling with water. But who cares? The point is that once I have drunk it, the water is gone and the glass is empty.
First, I have to thank Anthony for carrying my post. I will drop by and answer some of the questions.
I have to address this issue because it was a common point on my blog.
Michael Hauber asked
“Are you confusing measurement noise (i.e. difference between RSS and Uah) for ‘weather’ noise”
Michael I believe has read a different post by Tamino. One which matches ARMA noise to temp and he demonstrates that 10 year down trends happen even though the net change is up. Tamino did a decent job on the post but in his conclusion he conveniently fails to mention that a downtrend also may be a real change in direction.
My point in this post regards a different article by Tamino, where he tries, and fails, to prove that a ten-year trend is not significant because of the overall noise of the data. I am working on a follow-up post which will demonstrate that even using the full (and incorrect) noise, the trend in these numbers is statistically significant.
Long story but my argument is independent of which type of noise (weather or instrument). Simply that from the variation in this data we can show that a ten year trend is also significant. The AGW guys must also accept the down with the up. Even if Tamino doesn’t want to.
Hi John
So we don’t need ARMA or other BS we can simply subtract the data. I put the numbers in a spreadsheet and calculated the difference between RSS and GISS, RSS and UAH and UAH and GISS.
I am trying to follow your reasoning and would like to try and check your results. When you were subtracting the values for UAH, RSS and GISS, could you tell me the offset you used to cater for the fact that they have different baselines?
thanks,
JP
we don’t need ARMA or other BS– this post is satire, right?
The graph looks like a hockey stick, lying flat on the ground. It is pretty funny when people like Tamino are forced to take positions about whether or not there is a slight cooling trend.
Wasn’t it supposed to be really hot now?
John,
For this calc we only need the variation between measurements, so I took the means of each 30-yr trend and offset the graphs to have equal means.
I am trying to reconcile these statements:
1. Digital Diatribes of a Random Idiot – A great unbiased site for trends
2. We can say that temperatures have dropped this past decade, just as our eyes looking at the graphs had already told us.
With these graphs from the aforementioned site ..
http://digitaldiatribes.files.wordpress.com/2008/10/giss120raw0908.jpg
http://digitaldiatribes.files.wordpress.com/2008/10/uah120raw0908.jpg
which seem to show a warming 120-month trend. Can anybody tell me where I am going wrong?
JP
Having used trend analysis to trade commodities for over 30 years, I have noticed that each commodity has a unique time structure. Using a 30-year moving average to observe PDO would be of very little use, likewise a 2-year moving average for ENSO. Assuming all data is equally accurate (MSU), I like an exponential moving average that gives higher weighting to more recent data. Anthony, I enjoy your website, and visit daily, sometimes twice. I also enjoy the humorous comments.
Well I have a problem with the whole premise. This UAH plot, and the quite similar (but different) GISStemp plot are plots of “anomalies”, not plots of the global mean surface temperature.
Now the Mauna Loa CO2 data has a distinct trend, because it is derived from raw data made at a single point on the planet in presumably a consistent manner. Remember that “trends” are the derivatives of functions, not the integrals, and the UAH plot above is totally erratic in trend, being sometimes positive and sometimes negative; but that is because it is the result of applying a (consistent) algorithm to a set of data taken from many different places at many different times, and the similar GISS temp is presumably taken from the readings of actual thermometers somewhere.
Just because you can perform some mathematical transformation on a set of data, does not mean there is any validity to the result of doing so.
If ML gathers daily CO2 values, and averages them to come up with a monthly number to plot, that is a valid process for reducing the effect of experimental errors, and measurement noise, to get a better value for a slowly varying function.
But GISStemp is a conglomerate of disparate measurement locations where the variable being measured is quite different at each location, because it is supposed to be different. Each of those locations will suffer from real noise sources such as weather.
The “true” mean surface temperature of the planet is practically indeterminate, because of the problem of Nyquist violation in the sampling methodology. There aren’t enough thermometers in the universe to properly sample the earth’s surface temperature in compliance with the Nyquist sampling theorem; and the practical violation is so huge that the aliasing noise corrupts even the zero-frequency signal, which of course is the average value being sought.
The lay public takes GISStemp as being the actual global mean temperature. UAH say their graph represents lower Troposphere “anomalies”, which presumably is all in the atmosphere, and mostly is not in the life inhabited portions, so it is hardly a misery index for living things.
Trying to low-pass filter any of these anomaly plots, and call the result a trend is subject to different results depending on where you start and finish the trend line, and how long a time interval elapses.
Taking the oldest reliable information from maybe a million years ago, and today’s date data would presumably give the best value for what is being called the trend; but it is not useful for any human decision making processes.
However it does keep the grant money coming in to keep generating such non-information. Those with an axe to grind, or a prediction to support, will pick and choose their start and stop dates to demonstrate the “trend” they are trying to sell.
It is clear from either UAH or GISS, anomaly plots, that the next point to appear on those graphs is quite unpredictable. So much for the notion that there is a trend.
It has been said that random white noise is the highest information content signal possible; because no future data point can be predicted, and there is no valid interpolation process to obtain some past but unmeasured intermediate data point.
In the end we are just having a love affair with the mathematical process, since the data tells us nothing about whether the earth is gaining energy or losing energy; it can’t, since that question is not a simple function of any kind of “anomaly”.
If 2009 is similar to 2008 in global temp anomalies, then the slightly negative trend goes back to 1995 (confirmed with RSS dataset, my preference). Thus, from 1995-2009 (15 years), no warming. Wait till this time next year for more hysterics from the AGW crowd. I believe someone at RC (or Atmoz) said that 15 years of no warmth would mean the climate models are seriously in error. I can’t wait till next year when they start denying their own comments.
Tamino and Real Climate won’t allow any dissenting views on their blogs. They are the climate police, so scared that factual opinion will demolish their houses of straw.
I wonder why Tamino didn’t use HadCRUT data?
That is after all, the official database of the IPCC and the UN.