**Guest Post by Jeff Id:**

“Tamino” has made a couple of posts arguing that the last ten years' drop in temperature is not statistically significant, and therefore isn't real. He went too far in his latest one and began claiming it was a tactic of some kind of creature called a denialist to confuse and confound the public.

Let’s see what Tamino has been saying on his blog link HERE.

Some of you might wonder why I make so many posts about the impact of noise on trend analysis, and how it can not only lead to mistaken conclusions about temperature trends, it can be abused by those who wish deliberately to mislead readers. The reason is that this is still a common tactic by denialists to confuse and confound the public.

I just hate bad science. First he points out how Bjorn Lomborg made some comments about temperature decreasing, after placing the ever more popular label of denialist on him, implying Lomborg's statements were intended to confound and confuse the public. Here's the main point of what Bjorn Lomborg said.

They (temperatures) have actually decreased by between 0.01 and 0.1C per decade.

Ok, so graphs like the one below are the reason Bjorn Lomborg is a denialist.

I copied this graph from Digital Diatribes of a Random Idiot – a great unbiased site for trends (link on the right). Note the slope of -.0082 (.01C/month units, or .00098 degC/year – thanks to the Digital Diatribes comment below) in the equation on the graph. Most of us know this is actual data and is correct; in fact every measure is showing similar results. The earth stopped warming – a very inconvenient truth. So Tamino, what's the argument? Why are the evil and uncooperative denialists wrong?

Statistics of course.

Here come the numbers from Tamino.

The most natural meaning of “this decade” is — well, this decade, i.e., the 2000’s. So I computed the trend and its uncertainty (in deg.C/decade) for three data sets: NASA GISS, RSS TLT, and UAH TLT, using data from 2000 to the present. To estimate the uncertainties, I modelled the noise as an ARMA(1,1) process. Here are the results:

| Data | Rate (deg.C/decade) | Uncertainty (2-sigma) |
| --- | --- | --- |
| GISS | +0.11 | 0.28 |
| RSS | +0.03 | 0.40 |
| UAH | +0.05 | 0.42 |

All three of these show *warming* during “this decade,” although for none of them is the result statistically significant.

Ok, Tamino has calculated GISS, RSS and UAH: one ground measurement and two satellite. For those of you who don't spend your afternoons and weekends digging into this, ARMA is a fancy-sounding method for what ends up being a simple process Tamino has used to estimate the standard deviation of the temperature. Sometimes it seems the global warming guys believe the more complicated the better, but no matter. He has a 2-sigma column, which represents about 95%. He then goes on to say that because the 2-sigma value of 0.28 or 0.40 is bigger than the trend, the trend is not statistically significant. He repeats the comment below.
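For readers who want to see what this kind of calculation looks like, here is a quick Python sketch (not Tamino's actual code; the phi, theta and sigma parameters are made up purely for illustration) of how ARMA(1,1) noise widens the spread of trends that can be fitted to a ten-year window:

```python
import numpy as np

rng = np.random.default_rng(0)

def arma11(n, phi=0.7, theta=0.3, sigma=0.1):
    """Generate ARMA(1,1) noise: x[t] = phi*x[t-1] + e[t] + theta*e[t-1]."""
    e = rng.normal(0, sigma, n + 1)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t + 1] + theta * e[t]
    return x

# Monte Carlo: what trends does pure ARMA(1,1) noise produce over ten years?
n_months = 120
t_dec = np.arange(n_months) / 120.0      # time measured in decades
slopes = [np.polyfit(t_dec, arma11(n_months), 1)[0] for _ in range(2000)]

print("2-sigma spread of fitted trends on pure noise:",
      round(2 * float(np.std(slopes)), 2), "degC/decade")
```

The point is only mechanical: the more autocorrelated the assumed noise, the wider the range of decadal trends that noise alone can produce, and hence the bigger the 2-sigma column.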

Let’s make the same calculation using data from January 1998 to the present:

| Data | Rate (deg.C/decade) | Uncertainty (2-sigma) |
| --- | --- | --- |
| GISS | +0.10 | 0.22 |
| RSS | -0.07 | 0.38 |
| UAH | -0.05 | 0.38 |

Finally one can obtain negative trend rates, but only for 2 of the 3 data sets. But again, none of the results is statistically significant. Even allowing this dreadfully dishonest cherry-picked start date, the most favorable…

Now Tamino claims to be a statistician, so I can't see how he made such a simple boneheaded error, but if he wants to pitch softballs, I'll hit 'em. Just to make sure he's in good and deep, here's one more quote.

I’ve previously said “Those who point to 10-year “trends,” or 7-year “trends,” to claim that global warming has come to a halt, or even slowed, are fooling themselves.” I may have been mistaken; is Lomborg fooling himself, or does he know exactly what he’s doing?

So, Mr. Lomborg, we're all very curious: how *did* you get those numbers?

**Wrong turns everywhere**

The first and really obvious error Tamino makes is referring to the short-term variation in temperature as noise. Noise in the context of sigma is related to measurement error. How can we determine the measurement error of the three methods GISS, RSS and UAH? Well, the graph of the three is below.

The first thing you notice from this graph is that the 3 measurements track each other pretty well. The signal is therefore **not completely noise**. Well, what is the level of noise? We have 12 measurements per year over 29 years, so we don't need ARMA or other BS; we can simply subtract the data. I put the numbers in a spreadsheet and calculated the difference between RSS and GISS, RSS and UAH, and UAH and GISS. With 348 measurements for each type of instrument I was able to get a very good estimate of the standard deviation of the actual measurements. Again, no ARMA, just using the difference between the graphs.
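The subtraction trick is easy to reproduce. Here is a minimal Python sketch using simulated stand-in series (the real numbers would of course come from the GISS/RSS/UAH downloads); only the difference-then-standard-deviation step matters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for 348 months of anomalies; the real series would
# come from the GISS, RSS and UAH downloads.
n = 348
climate = np.cumsum(rng.normal(0, 0.02, n))       # shared "real" signal
giss = climate + rng.normal(0, 0.07, n)           # independent measurement noise
rss = climate + rng.normal(0, 0.07, n)

# Subtracting the two series cancels the shared signal, so the standard
# deviation of the difference estimates the combined measurement noise.
diff = giss - rss
print("one sigma:", round(float(np.std(diff)), 3),
      " two sigma:", round(2 * float(np.std(diff)), 3))
```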

GISS – RSS one sigma 0.099 Two sigma 0.198

RSS-UAH one sigma 0.101 Two sigma 0.202

GISS-UAH one sigma 0.058 Two sigma 0.116

These are actual numbers, and they are substantially lower than the two sigma estimated by Tamino, but still bigger than the 0.1 C per decade, although the two-sigma GISS – UAH is within a 90% confidence interval already!

This isn’t the end though. Tamino ended his discussion there implying shenanigans and other things of those who see a trend.

**Both of our standard deviation calcs are for a SINGLE measurement NOT a trend.**

This is a big screw-up. How a self-proclaimed statistical expert could miss this is beyond me. Anyway, none of us is universally right every day, but most hold their tongue rather than post a big boner on the internet. Most scientists realize that when you take more than one measurement of a value you improve the accuracy. So, being a non-genius, I used R to calculate the statistical certainty of the slope when taken over 10-year trends. Thanks again to Steve McIntyre for pointing me to this software. I don't love it, but it is convenient.

library(nlme)  # gls() comes from the nlme package

# Load the three difference series (GISS-RSS, RSS-UAH, GISS-UAH), one per column
t = read.csv("c:/agw/giss data/10 year variation.csv", header=FALSE)
x = (1:length(t[,1]))

# Column 1: GISS - RSS
y = t[,1]
a = gls(y ~ x)
confint(a)
confint(a)[2,1] - confint(a)[2,2]   # spread of the 95% CI on the slope

# Column 2: RSS - UAH
y = t[,2]
a = gls(y ~ x)
confint(a)
confint(a)[2,1] - confint(a)[2,2]

# Column 3: GISS - UAH
y = t[,3]
a = gls(y ~ x)
confint(a)
confint(a)[2,1] - confint(a)[2,2]

What this script does is load the difference files (e.g. GISS – UAH), fit a line to each, and report the confidence interval of the slope coefficient at 95 percent confidence, which is about two sigma. The confidence of the slope of the trend is as follows:

GISS – RSS Two sigma 0.00108 DegC/year

RSS-UAH Two sigma 0.001068 DegC/year

GISS-UAH Two sigma 0.0005154 DegC/year

Despite a standard deviation of 0.02, we have a slope measurement roughly twenty times more accurate: about 0.001 degC/year!
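The reason the slope comes out so much tighter than a single point is the textbook least-squares result that the slope's standard error is sigma divided by the square root of the sum of squared deviations in x, which shrinks rapidly as points are added. A small Python illustration (the 0.1 sigma is just an assumed per-point noise value):

```python
import numpy as np

sigma = 0.1                      # assumed per-point noise level (degC)
se = {}
for n in (12, 120, 348):
    x = np.arange(n, dtype=float)
    # textbook OLS result: se(slope) = sigma / sqrt(sum((x - mean(x))^2))
    se[n] = sigma / np.sqrt(np.sum((x - x.mean()) ** 2))
    print(f"n={n:4d}  2-sigma on slope = {2 * se[n]:.5f} degC per time step")
```

With 348 monthly points instead of 12, the 2-sigma bound on the slope is dramatically smaller, even though the per-point noise is unchanged.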

**Conclusions**

1. We can say with a high degree of certainty that we know the trend of temperature for any ten-year plot to within 0.01 degC/decade.

2. We can say that temperatures have dropped this past decade, just as our eyes looking at the graphs had already told us.

3. We can also say that Tamino owes a few more apologies.

**He and Real Climate still don’t let me post on their blogs!**

I wonder why?

“I wonder why?”

You've gotta fit the agenda….

Conformance to the AGW mantra = Expertise and Opportunity to Speak!

I'm actually reminded of the scene of Neo's mouth being closed in the first Matrix movie… No opportunity to complain or contribute if you cannot be heard.

Wait a minute here. Does this mean that certain of the AGW proponents are now *gasp* in a state of denial?!!

Are you confusing measurement noise (i.e. difference between RSS and Uah) for ‘weather’ noise (i.e. the difference in temperature due to variations such as ENSO?)

This guy is a politically motivated nobody, why even waste your time with him?

REPLY: Funny, he says the same thing about me. 😉 – Anthony

Jeff: A couple of questions:

Does this also work with the RSS, GISS and UAH data, individually, for each slope? Not just the slopes of the difference?

If I am interpreting this right, this means that the slope over a period of time (10 years), is much more accurate than a single point. No surprise there. But, one of the standard counter arguments, is that longer term trends (1979-2008) show positive trends, vs 1998-2008. What does this show over the period 1979-2008? Does the two-sigma value get higher? Does the two-sigma value get higher as the end point progresses toward the present (1979-1998 vs 1979-2008)? If so, would this indicate that the longer term trend is getting less reliable, as the two-sigma gets a higher value?

Lastly, would you like someone else to try to post this at Tamino’s?

Ok now I am confused. Isn’t what Tamino refers to as noise is NOT measurement error but the “weather” noise hiding the linear trend?

Thank you Mr Id, I enjoyed that.

One other note to add regarding the trendline: the R-squared number is a measly 0.0003, thus indicating that there is no statistically significant temperature change over time, down or up.

Another question to consider. Which of these statements is true:

The accuracy of our temperature measurement is better than 0.2 degrees. Therefore the lack of warming over the last 8 years is not due to measurement error. And the warming over the last 30 years is definitely not due to measurement error.

The accuracy of the temperature measurement is worse than 0.2 degrees. Therefore the lack of warming over the last 8 years could easily be measurement error.

The accuracy of the temperature measurement has been good in the last 10 years while we have observed no warming, and poor over the last 40 years when we observed warming. I justify this claim due to ??

Since many of the historical measurements, and even some of the current ones, are people reading a min/max thermometer, I'd say, just at a very rough guess, that the accuracy of the thermometer is worse than 0.2 C and that the skill of the reader adds another order of magnitude of measurement error.

Tamino's Folly is simply another example of insignificant climate trends (too small a time frame or relative degree of change) that are overwrought with questionable analysis trying to prove a flawed point or support a political agenda.

This may be a little OT, but underscores the notions of time frame and significance of change. Did anyone see in Nature that Global Warming causes giant magnets? The article said, “Scientists have unearthed giant magnetic fossils, the remnants of microbes buried in 55-million-year-old sediment. The growth of these unusual structures during a period of massive global warming provides clues about how climate change might alter the behaviour of organisms.”

OMG! If we don't stop AGW now, we're going to get giant magnets! A missing detail from the article is that there was no ice on the planet at the time, with a sea level 200 meters higher than now. The IPCC predicts a sea level rise in the next century of less than 1 meter. I guess we can all still be concerned about GW-caused magnets.

The URL is here: http://www.nature.com/news/2008/081020/full/news.2008.1180.html

I think Tamino has made the transition from Bulldog to Mad Dog.

He’s howling, not at the moon, but at the Sun, the quiet Sun.

sportute: Granted, the R2 value is low.

It approaches unity, though, as you move the start date forward. Using HadCRUT3 data, I get R2 at 0.9872 for 2005-2008 (using Hadley's projected 2008 global average anomaly).

R2 is high, but the record short.

But, in my opinion, this only shows the value(?) of using arbitrary start and stop dates. I can get nearly any trend, and nearly any value for R2, by changing the dates.
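That sensitivity is easy to demonstrate. A small Python sketch with simulated, trendless but autocorrelated data (not real anomaly data) shows R² swinging with the chosen start date:

```python
import numpy as np

rng = np.random.default_rng(2)

# A trendless but autocorrelated series stands in for the anomaly record.
n = 360
noise = np.convolve(rng.normal(0, 0.1, n), np.ones(24) / 24, mode="same")

def r_squared(y):
    x = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

# Same data, different start dates, wildly different R^2
r2 = {start: r_squared(noise[start:]) for start in (0, 300, 336)}
for start, val in r2.items():
    print(f"start month {start:3d}:  R^2 = {val:.3f}")
```

Short, late windows can make a wiggle look like a near-perfect line; the underlying series has no trend at all.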

You know, temperature is a state function. That means that the temperature is an absolute, and is independent of the path that it took to get there.

It actually means that the trend is not particularly important at any given point in time – its the actual value that matters.

Put it this way: I get a glass and fill it with water. It takes 10 seconds to fill. I then drink it in 3 seconds. Now: I could plot water in the glass over time, fit a trend line, and claim that the trendline “proves” that the glass is filling with water. But who cares? The point is that once I have drunk it, the water is gone and the glass is empty.

First, I have to thank Anthony for carrying my post. I will drop by and answer some of the questions.

I have to address this issue because it was a common point on my blog.

Michael Hauber asked

“Are you confusing measurement noise (i.e. difference between RSS and Uah) for ‘weather’ noise”

Michael, I believe, has read a different post by Tamino, one which matches ARMA noise to temperature and demonstrates that 10-year downtrends happen even though the net change is up. Tamino did a decent job on that post, but in his conclusion he conveniently fails to mention that a downtrend may also be a real change in direction.

My point in this post regards a different article by Tamino, where he tries, and fails, to prove that a ten-year trend is not significant because of the overall noise of the data. I am working on a follow-up post which will demonstrate that even using the full (and incorrect) noise, the trend in these numbers is statistically significant.

Long story but my argument is independent of which type of noise (weather or instrument). Simply that from the variation in this data we can show that a ten year trend is also significant. The AGW guys must also accept the down with the up. Even if Tamino doesn’t want to.

Hi John

“So we don’t need ARMA or other BS we can simply subtract the data. I put the numbers in a spreadsheet and calculated the difference between RSS and GISS, RSS and UAH and UAH and GISS.”

I am trying to follow your reasoning and would like to try and check your results. When you were subtracting the values for UAH, RSS and GISS, could you tell me the offset you used to cater for the fact that they have different baselines?

thanks,

JP

“we don’t need ARMA or other BS” – this post is satire, right?

The graph looks like a hockey stick, lying flat on the ground. It is pretty funny when people like Tamino are forced to take positions about whether or not there is a slight cooling trend.

Wasn’t it supposed to be really hot now?

John,

For this calc we only need the variation between measurements so I took the means of each 30yr trend and offset the graphs to have equal mean.

I am trying to reconcile these statements:

1. Digital Diatribes of a Random Idiot – A great unbiased site for trends

2. We can say that temperatures have dropped this past decade, just as our eyes looking at the graphs had already told us.

With these graphs from the aforementioned site ..

http://digitaldiatribes.files.wordpress.com/2008/10/giss120raw0908.jpg

http://digitaldiatribes.files.wordpress.com/2008/10/uah120raw0908.jpg

which seem to show a warming 120-month trend. Can anybody tell me where I am going wrong?

JP

Having used trend analysis to trade commodities for over 30 years, I have noticed that each commodity has a unique time structure. Using a 30-year moving average to observe the PDO would be of very little use, likewise a 2-year moving average for ENSO. Assuming all data is equally accurate (MSU), I like an exponential moving average that gives higher weighting to more recent data. Anthony, I enjoy your website, and visit daily, sometimes twice. I also enjoy the humorous comments.

Well I have a problem with the whole premise. This UAH plot, and the quite similar (but different) GISStemp plot are plots of “anomalies”, not plots of the global mean surface temperature.

Now the Mauna Loa CO2 data has a distinct trend, because it is derived from raw data collected at a single point on the planet in presumably a consistent manner. Remember that “trends” are the derivatives of functions, not the integrals. The UAH plot above is totally erratic in trend, being sometimes positive and sometimes negative; but that is because it is the result of applying a (consistent) algorithm to a set of data taken from many different places at many different times. The similar GISS temp is presumably taken from the readings of actual thermometers somewhere.

Just because you can perform some mathematical transformation on a set of data, does not mean there is any validity to the result of doing so.

If ML gathers daily CO2 values and averages them to come up with a monthly number to plot, that is a valid process for reducing the effect of experimental errors and measurement noise, to get a better value for a slowly varying function.

But GISStemp is a conglomerate of disparate measurement locations where the variable being measured is quite different at each location, because it is supposed to be different. Each of those locations will suffer from real noise sources such as weather.

The “true” mean surface temperature of the planet is practically indeterminate, because of the problem of Nyquist violation in the sampling methodology. There aren’t enough thermometers in the universe to properly sample the earth’s surface temperature in compliance with the Nyquist sampling theorem; and the practical violation is so huge that the aliasing noise corrupts even the zero-frequency signal, which of course is the average value being sought.

The lay public takes GISStemp as being the actual global mean temperature. UAH say their graph represents lower Troposphere “anomalies”, which presumably is all in the atmosphere, and mostly is not in the life inhabited portions, so it is hardly a misery index for living things.

Trying to low-pass filter any of these anomaly plots, and call the result a trend is subject to different results depending on where you start and finish the trend line, and how long a time interval elapses.

Taking the oldest reliable information from maybe a million years ago, and today’s date data would presumably give the best value for what is being called the trend; but it is not useful for any human decision making processes.

However it does keep the grant money coming in to keep generating such non-information. Those with an axe to grind, or a prediction to support, will pick and choose their start and stop dates to demonstrate the “trend” they are trying to sell.

It is clear from either UAH or GISS, anomaly plots, that the next point to appear on those graphs is quite unpredictable. So much for the notion that there is a trend.

It has been said that random white noise is the highest information content signal possible; because no future data point can be predicted, and there is no valid interpolation process to obtain some past but unmeasured intermediate data point.

In the end we are just having a love affair with the mathematical process, since the data tells us nothing about whether the earth is gaining energy or losing energy; it can’t, since that question is not a simple function of any kind of “anomaly”.

If 2009 is similar to 2008 in global temp anomalies, then the slightly negative trend goes back to 1995 (confirmed with RSS dataset, my preference). Thus, from 1995-2009 (15 years), no warming. Wait till this time next year for more hysterics from the AGW crowd. I believe someone at RC (or Atmoz) said that 15 years of no warmth would mean the climate models are seriously in error. I can’t wait till next year when they start denying their own comments.

Tamino & Real Climate wouldn't let any dissenting views on their blogs. They are the climate police, so scared that factual opinion will demolish their houses of straw.

I wonder why Tamino didn’t use HadCRUT data?

That is after all, the official database of the IPCC and the UN.

Tamino has a graph at his own site on this subject from earlier this year. It uses a running mean of ten years. It is obvious from his graph that his next data point (when 2008 is complete) will show significant cooling. The very warm year of 1998 will come off the average and the cool year of 2008 will come on. It will happen as soon as the December temperatures are out. If he has any intellectual honesty, he will have to admit that significant cooling has occurred, at least by his definition.

Several things about your temperature measurements.

Once measured you will never increase your accuracy.

By computing averages of multiple measurements you may very well increase your confidence level of where the actual peak of the normal-distribution bell curve of measurement accuracy is, and your difference from it, but your accuracy is still what it is.
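That distinction (better knowledge of the mean, unchanged accuracy of each individual reading) is just the sigma-over-root-n rule. A quick Python illustration with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(4)

true_temp = 15.0
sigma = 0.5                       # assumed accuracy of one thermometer reading

readings = rng.normal(true_temp, sigma, 100)

# Averaging does not make any single reading better, but the uncertainty
# of the *mean* shrinks like sigma / sqrt(n).
print("single-reading sigma:   ", sigma)
print("standard error of mean: ", sigma / np.sqrt(len(readings)))
print("observed mean of 100:   ", round(float(readings.mean()), 3))
```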

It is also interesting to note that all these single measurements you cite are actually averages of multiple measurements. Not only that, but GISS has been adjusted by our pal Herr Hansen, so it is a candidate for being tossed out as unreliable.

Among the technologies used to measure temperature the old mercury thermometer is one of the more accurate ones for the range of temperature that we are most interested in.

I also wonder about the graph labeled May 1997 to Current.

How come there are multiple year marks for 2007, 2006, 2004 and so forth, but poor little 2005 only gets one mention? I think some tick marks on the horizontal axis would be extremely helpful.

John Philip:

You are correct that the 120-point trends are positive. It is also true that for HadCrut, RSS, and UAH, this positive trend flattens, and even goes negative, if extended back to 1997. This, of course, all has to do with arbitrary starting points that impact the slope of the line. The particular chart that Jeff cites is the chart that was put together precisely to show exactly how far back in the data to go without seeing a warming trend. That doesn’t mean there can’t be a warming trend shown for shorter (in this example, 120-month) periods.

In Tamino's post, he muses philosophically about what the writer of the article meant by “this decade.” I think he mistakenly assumed that “this decade” starts at the year 2000 in his first analysis. I am guessing that the author began this decade in 2001. And if so, it makes sense, since every temperature measure, including GISS and NCDC data, shows a negative trend line since the beginning of 2001. I've posted those charts in the past. I don't specifically show charts for HadCrut, RSS, and UAH starting in January 2001, but my spreadsheets verify that these trends are negative.

Steve in SC: Well, there are tickmarks at the zero anomaly. Sorry about 2005 getting dissed… Excel decided for me how the points would be spaced. I suppose I could include month for a better reference point.

Oh, and since my site is referenced as a link on the side, but this was originally posted at Jeff’s site, I will shamelessly provide the link. 🙂

http://digitaldiatribes.wordpress.com

I calculated the trend per decade awhile ago using the Hadley Centre monthly temperature dataset going back to 1850. I started the analysis in Jan. 1850 – then Feb 1850, March 1850 etc. etc. all the way up to the most recent month.

The trend per decade depends on when one starts the measurement and there is a definitive trend in that trend.

[The warming climate models predict 0.2C per decade more-or-less since CO2 started rising at the modern rate – it really should apply starting in 1850 because the logarithmic relationship of GHGs to temperature means temps should have increased pretty close to the 0.2C per decade over the entire period but they often like to pretend it didn’t really kick off until very modern times but this is not accurate – it is simple math afterall – CO2 up – temps up on a logarithmic basis.]

But …

If one starts measuring in 1850, the trend is only 0.04C per decade.

The trend rises slowly so that if you start measuring in 1920, the trend is about 0.07C per decade.

By 1940, the trend is 0.08C, by 1960 the trend is 0.14C per decade, by 1970 we are at 0.17C per decade and by 1992, the magic 0.2C per decade is reached.

Then the trend starts falling precipitously, so that if one starts measuring in 1997, the trend per decade is 0.0C.

From 2002, we are at -0.2C per decade and a scary -0.7C per decade is reached if you start measuring in 2005.

The biggest cause of these varying trends is, in fact, the PDO and El Ninos. There were so many El Ninos in the 1986 to 2006 period that the trendline gets skewed upward.

If you take the PDO impact/changes out of the equation, the trend per decade is a fairly stable 0.08C per decade over most of the period, less than half of the predictions of the climate models.
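The start-date dependence described above is easy to reproduce on synthetic data. A Python sketch with a made-up series (a slow warming plus smoothed wiggles, not the actual Hadley data) shows the fitted trend per decade drifting as the start year moves forward:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1850-2008 monthly anomalies: slow warming plus smoothed wiggles.
months = np.arange(1908)                              # 159 years of data
anoms = 0.005 / 12 * months + np.convolve(
    rng.normal(0, 0.15, len(months)), np.ones(36) / 36, mode="same")

def trend_per_decade(y):
    x = np.arange(len(y)) / 120.0                     # time in decades
    return np.polyfit(x, y, 1)[0]

# The fitted trend drifts as the start date moves forward
trends = {yr: trend_per_decade(anoms[(yr - 1850) * 12:]) for yr in (1850, 1940, 1997)}
for yr, tr in trends.items():
    print(f"start {yr}:  {tr:+.3f} degC/decade")
```

The long-window trend recovers the underlying rate; the short, late windows are dominated by the wiggles, just as the comment describes for the PDO and El Nino years.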

Those are tiny things. I thought it was just a hairy line to emphasize the zero line.

You are making me get out the bifocals now.

I have to follow up Steve real quick and point out that 1998 also only gets one year mark. It goes: 1997 1997 1998 1999 1999…. I have no idea if that matters, but just to say.

(follow-up to my last posting): as does 2001 while we are at it!

Mr Steve in SC asked (18:32:08) :

“I also wonder about the graph labeled May 1997 to Current.

How come there are multiple years 2007, 2006, 2004 and so forth but poor little 2005 only gets one mention.”

2005 is when I had my first heart attack and is, therefore, treated with particular caution.

(A word from one who knows, if you’re thinking of having a heart attack, think again; there’s a lot of ouch involved.)

Tamino is tightly linked to both Mann and Josh Halpern aka “Eli Rabbit.”

REPLY: More than that, he also has a paper out with Hansen, Gavin et al of RealClimate. He's in the employ of big no-carbon. – Anthony

Ah, you uncovered the man behind the curtain. I would love to narrow it down further.

That explains a lot.

I am curious because I read that he recently stated his professional expertise in statistics on real climate. How expert can he be to miss such easy points as above?

REPLY: Here is his paper with the RC crew and dendro-Mann: http://www.jamstec.go.jp/frcgc/research/d5/jdannan/comment_on_schwartz.pdf

– Anthony

Tamino’s been bought off? Oh boy, whatta surprise!

Isn't that the group of people who think exaggerating and falsifying data in order to get their point across is O.K.?

“year drop in temperature is not statistically significant,”

Note to Tamino: If you can accurately measure the change, it is significant….

Bjorn Lomborg? a denialist? Is he serious? I guess a denialist is anyone who does not agree with Tamino?

Re: “He [Tamino] and Real Climate still don’t let me post on their blogs!”

Why? Because Real Climate is actually Faux Climate and is staffed by folks who cannot stand to be questioned or challenged and Tamino is a simply clown with similar fears.

Without AGW predominance, hysteria, pseudoscience, and outright junk science these folks would be out of a job. Anyone who legitimately questions the AGW hypothesis (it does not merit being called a theory, imho) threatens their funding and standing.

Re:

Mike C (15:44:07) :

This guy is a politically motivated nobody, why even waste your time with him?

REPLY: Funny, he says the same thing about me. 😉

– Anthony

Funny, an anonymous blogger who inserts distinctively political ad hominem attacks into his writing and who is by intent and effect an anonymous nobody considers Anthony a politically motivated nobody.

Somebody is looking quite silly. No wonder “Tamino” hides his identity from his readers, peers, and colleagues.

Agree with David. Just as most of us haven't even bothered to look at GISS data (re manipulation of data etc.) for about a year now, maybe no attention whatsoever should be given to these folks, re: science only, otherwise… I'm sure they are nice guys, LOL. I think they are digging their own climate grave quite nicely; they don't need any help.

I believe that Obama and McCain are confident that AGW is real. If these numbers are not reviewed and claims debunked, we will all be paying carbon credits to Gore, Hansen, Mann, Tamino and many others. We may be paying in any case; that is how successful the AGW propaganda machine has been.

“He and Real Climate still don’t let me post on their blogs!

I wonder why?”

Just this single fact is all I need to know who are right or who are wrong.

Infidels are not allowed in their church.

I have encountered this problem also, getting posts deleted on AGW bloggs (not CA though) or blocked when I started to point out that real world data do not support AGW and linked to the actual data.

Aren't you all tired of this seemingly never-ending cherry picking of 1998? Fortunately 2009 is coming, so skeptics will have to refer to an 11-year trend to “prove” temperatures are decreasing – then I'll come back with the 10-year average.

First, let me say that having followed this site for a long time, I’m pretty convinced that GW isn’t happening. But…

The statistics in this post are wrong. Jeff Id has confused measurement noise (which he doesn't really estimate 100% correctly – rather he uses the difference between different measurement series as a, probably reasonable, proxy) with weather fluctuations.

Jeff correctly points out that the measurement error is probably very small. However, the accuracy of a trend line isn’t based on that alone, but also weather fluctuations about the trend (which are not due to noise, but instead are real changes that don’t conform to a trend). So in talking about the “significance” of a slope, what we’re really talking about is how adequate a straight line is as a description of the weather changes.

I think it’s fair to say that eyeballing the graphs, you’d be hesitant about fitting a straight line to them, either to show global cooling or to show global warming.

In any case, what Tamino *really* showed was that there is no statistically significant warming trend over the decade, and only one of the three series shows any evidence of warming at all. Which strikes me as agreeing with the general attitude of this website, no matter how he tries to spin it.

“For this calc we only need the variation between measurements so I took the means of each 30yr trend and offset the graphs to have equal mean.”

Got it, thanks.

I am still unsure, though, how subtracting e.g. UAH from GISS anomalies gives a measurement of ‘noise’. They are, after all, measuring different physical quantities – the tropospheric and surface temperatures respectively. One would expect a close, but not exact, correlation – for example, the troposphere seems to be more sensitive to ENSO events, rising more in an El Nino, cooling more in a La Nina. The amount of this difference is interesting, but I am unclear as to how it translates into an uncertainty value for the slope.

Similarly, the RSS and UAH use the same raw MSU data and process it differently to get the atmospheric temperature. Subtracting one from the other highlights these different results but I am not sure how it is a metric of noise in the original measurements?

Tamino (and I) calculate the trend for the last 120 months at +0.11C, but with a large uncertainty, meaning that the actual trend may be substantially different. Your assertion is that “We can say that temperatures have dropped this past decade, just as our eyes looking at the graphs had already told us.” It is not clear from your method above how the positive but uncertain trend is transformed into a cooling trend. Could you expand?

Tamino w.r.t. denialists:

A perfect example of the pot calling the white appliances black.

Jeff Id:

I don’t get this. You are looking at the difference between the measurements of 3 independent methods.

Quote: “What this script does is load the difference files i.e. GISS-UAH, fits a line to them and presents a number for the statistical confidence interval of the slope coefficient at 95 percent confidence which is about two sigma.”

Well done. You have shown that there is a trend between the measurements.

YOu missunderstand what Tamino has done. His 2 sigma error is not on You are testing different hypothesis to Tamino.

In the first part of your post you calculate the error on each indevidual point by look at the standard devidation of the differences between the different data sets. Your conclusion is that the error in the measurements of the individual points is larger than the size of the trend.

In the second part you attempt to find an uncertainty on the trend. But you use the difference between the measurements… WHAT?

For example, if the GISS data and the RSS data were identical, that is, if the difference were zero at all points, that would not mean we can be certain of the resulting trend. What you have tested is whether there is a trend in the differences between the data sets; that is, whether the data sets converge or diverge over time. And you find that they do not … big surprise.

Tamino’s approach has been to ask whether we can reject the null hypothesis that the trend is -0.1C per decade, given the evidence presented. He tests this using the standard hypothesis-testing procedure. First he assumes that the null hypothesis is correct – that there is a cooling trend of 0.1C per decade. Assuming this, he calculates the standard deviation of the slope of the trend, given a noise model for the individual points. He then asks whether the trend actually seen in the data is within two standard deviations of the null (Lomborg’s) trend, and finds that it is not. It’s not how I would have done it, but there you go.

So what is the conclusion?

Well, obviously we can say that the average temperature of the last ten years has been lower.

We can reject, at the 95% confidence level, the hypothesis that there is a cooling trend of 0.1C per decade.

There is no evidence to reject the hypothesis that the trend is a 0.01C per decade cooling.

Similarly, there is no evidence to reject the hypothesis that the trend is a 0.1C per decade warming.
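Ian’s description of the two-sigma slope test can be sketched in a few lines of code. This is only an illustration on synthetic data – an assumed flat trend plus white noise, which (per the ARMA discussion above) understates the uncertainty for real, autocorrelated temperature data. None of the numbers are from the actual series.

```python
import numpy as np

def slope_with_stderr(y):
    """OLS slope and its standard error, assuming i.i.d. residuals."""
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = resid @ resid / (n - 2)                    # residual variance
    se = np.sqrt(s2 / ((x - x.mean()) ** 2).sum())  # std error of the slope
    return slope, se

# Synthetic monthly anomalies: no underlying trend, 0.1C white noise.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 0.1, 120)

slope, se = slope_with_stderr(y)             # deg C per month
slope_dec, se_dec = slope * 120, se * 120    # deg C per decade

null = -0.1  # hypothesised cooling of 0.1C/decade
reject = abs(slope_dec - null) > 2 * se_dec  # the two-sigma test
print(f"trend {slope_dec:+.3f} +/- {2 * se_dec:.3f} C/decade; reject -0.1? {reject}")
```

The point of contention in the thread is which sigma to plug in: Jeff’s instrument-only sigma makes the interval tight, while Tamino’s ARMA noise model makes it much wider.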

Tamino is the nom-de-plume (or should that be ‘de guerre’?) of Greg Foster, is it not?

I meant of course Grant Foster.

“Put it this way: I get a glass and fill it with water. It takes 10 seconds to fill. I then drink it in 3 seconds. Now: I could plot water in the glass over time, fit a trend line, and claim that the trendline “proves” that the glass is filling with water. But who cares? The point is that once I have drunk it, the water is gone and the glass is empty.”

Hmm… change it to beer and I begin to see what you’re talking about. But I’m not sure I agree that it is a pointless figure. Surely you have just provided the key description of the reason why I have to wait so long at the bar on a Friday night?

This is a really important finding. It indicates that for a bar to be operating properly (that is, for more glasses to be emptying than filling at any moment in time), the critical parameter is that drinks should be served in less time than it takes to drink them. Surely this research should be passed to the brewing industry immediately? Or at least my local?

The task of the sceptic of a theory is always much easier than that of the proponent. If someone proposes that all swans are white, it only requires the sceptic to find one black swan to disprove the theory. No matter how many white swans are produced by the other side, they cannot win the argument.

If the sceptic can show that on several of the respected temperature records there has been a slight cooling trend for over 11 years, then that serves as their black swan. For basic global warming alarmism to be true it is necessary that all periods as long as 11 years, without any extraordinary factors such as volcanoes, should have a warming trend.

Sceptics should not waste their time on responding to arguments that other, shorter periods, have warming trends. That is just as irrelevant as a white swan after the black swan has been found.

We don’t need ARMA? Hmm, the Tamino guy uses an ARMA model because the variation on the long-term trend is very much like red noise, that is, noise which is auto-correlated on some typical time scale. That’s indeed a valid approach, and much better than the simple-minded approach taken by Id. I’m afraid it’s you, Jeff, who made the “big boner” here.

However, Tamino’s analysis shows no significant underlying trend in the data used. An increase of 0.05-0.10C (predicted by the AGW hypothesis) over that time span would have stood out like a sore thumb, and therefore isn’t there. The magic flute of a warming trend is not to be found in there.

So, perhaps you are both wrong?


May I suggest a great book by Shumway and Stoffer,

http://anson.ucdavis.edu/~shumway/tsa.html

In Example 6.2, both signal and observation errors are treated as stochastic processes. Not sure if every alarmist agrees with a random-walk model for the global temperature signal, but the analysis in the book is very illustrative – something that combines Tamino’s and Jeff’s approaches.

“3. We can also say that Tamino owes a few more apologies.”

Hold your breath…..

Ed Zuiderwijk – You are right. The data over the last 10 years doesn’t provide the evidence to say that the underlying trend is not flat, but neither does it let us reject the hypothesis that the underlying trend is warming. It doesn’t provide enough data to say anything very much about the underlying trend, except that we can reject the hypothesis that the underlying trend is a 0.1C per decade cooling.

Two comments:

On the importance of picking a starting and ending date for determining trends.

If you look at the 20 or 30 year trend, everything is OK on the stock market. Just an adjustment. However, if you are an investor, you know there has been a shift in trends.

Knowing when trends have truly changed versus a minor blip is the key to understanding anything worth measuring. If you could just figure out the difference, you would be a very wealthy person in the stock market. Lots of people claim to have discovered techniques for doing so and are always peddling their software or methods or whatever. The nagging question I always have for these types of folks is: why don’t you just apply your methods to your own investments?

In any event, it is really hard to determine whether changes in data represent a change in trend or not while we are in the period of transition. Later on, with 20/20 hindsight, we can all agree when a trend took place.

My second point: when looking at these charts I like to pretend I’m looking at a chart from my stockbroker depicting my portfolio. Then I ask myself, “Is this going up or down or nowhere?” OT – I really am longing for those good old days of going nowhere.

My conclusions on temperature trends?

We most definitely have not had a warming trend over the last ten years.

IPCC and Hansen type projections for the future have already departed from observations. Others might quibble about the mathematically precise point we have to say the projections have failed, but in my mind, they have failed. I have to wonder, is it possible for these projections to ever be falsified in the minds of some?

We might have a slight cooling trend underway, but again, if the drops of the last couple of years continue, then we will most definitely say this was the turning point.

I am not a climate expert, but it seems to me that the notion of “weather noise” in a thirty-year data set is a little bit disingenuous. Over thirty years, the only thing we could possibly measure is weather. We could compare this thirty year trend to historical trends of similar length, or compare an end point to another point in the distant past to get an idea of climate. The problem with the whole AGW idea is that, in order to get people concerned about global warming, the concept has to be within the experience of living humans. Historical data can be manipulated (hockey stick) because the common man trusts his memory better than any technical analysis. It was colder 30 years ago. I remember the July 4th snow when I was in elementary school. AGW is all about the weather, not climate, so skeptics get sucked into these futile debates about short-term trends. Again, I’m not an expert, this is just my opinion.

John Phillip said

Tamino (and me) calculate the trend for the last 120 months at +0.11C but with a large uncertainty, meaning that the actual trend may be substantially different.

Check your uncertainty. If you think only in stochastic terms you are correct; however, you know the slope to a high degree of certainty (minus the corrections to the satellites and ground stations, of course) – the same as if you had measured the same value 300 times.

Ian Sandbury

“Assuming this he calculates the standard deviation of the slope of the trend, given a noise model for the indevidual points.”

This is where Tamino didn’t finish his math. We know the slope of the line to within 0.001. Tamino didn’t do any calculation on the slope, and instead presented the standard deviation as his proof of the null. When you actually calculate the slope confidence intervals you find a much smaller value, and find to a very high degree of certainty that we have a downslope.

William

“The statistics in this post are wrong. Jeff Id has confused measurement noise (which he doesn’t really estimate 100% correctly – rather he uses the difference in different measurement series as a, probably reasonable, proxy) with weather fluctuations.”

While he is correct that my sigma isn’t 100% perfect, he misses the point that I have intentionally gone out of my way to eliminate the weather noise. A common mistake in this post, I’m afraid.

Regarding the statistics, my method of subtraction, sig_RSS - sig_UAH, overestimates the instrument error because the error from both instruments is combined.

I believe the correct value should be sig_delta = sqrt(sig_RSS^2 + sig_UAH^2), but I haven’t bothered to look it up because it simply means I have overestimated my sig_delta. I have already made the point above that, because of the measurement precision, we know the slope well enough to show that Bjorn was correct.
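For what it’s worth, that quadrature rule is easy to check numerically. The sketch below uses made-up sigmas and synthetic white noise (an assumption – real instrument errors need not be independent or Gaussian): the standard deviation of the difference of two independent series comes out as sqrt(sig1^2 + sig2^2), not sig1 + sig2.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
sig_rss, sig_uah = 0.08, 0.06          # assumed instrument sigmas, deg C

rss_noise = rng.normal(0.0, sig_rss, n)
uah_noise = rng.normal(0.0, sig_uah, n)

measured = (rss_noise - uah_noise).std()
expected = np.hypot(sig_rss, sig_uah)  # sqrt(0.08**2 + 0.06**2)

print(f"std of difference: {measured:.4f}  (quadrature predicts {expected:.4f})")
```

So straight addition of the two sigmas would indeed overstate the combined error, just as the comment says.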

Alright, one more then I need to work. There are a lot of smart people on these threads, I am a skeptic but only based on the ridiculous science and politics of this issue. I don’t claim AGW is false anywhere, some of the science is. Without specifics and unrelated to this post above I will say I believe now that some of it is done with intent. I wouldn’t have said that two months ago.

Patrick Hadley

The task of the sceptic of a theory is always much easier than that of the proponent.

I disagree with this. Certainly a skeptic can pick out a weak argument and hit it, but when reasonable argument is squelched in government, media and even on some blogs (as I point out above), it makes the skeptic’s argument much more difficult.

” If the sceptic can show that on several of the respected temperature records there has been a slight cooling trend for over 11 years, then that serves as their black swan. ”

I don’t make the claim anywhere that this ends global warming – only that it happened. Tamino treated it like a political issue: in his first post he was reasonable yet biased, but in his second he made the point that the trend wasn’t statistically different from slopes of 0.4.

Since I can’t discuss it with him on his blog, I do it on mine.

And thanks to Anthony, I had a chance to do it here. I assume Tamino is welcome to post here as all people are on my blog. Is that correct?

I believe Lucia’s had some discussions about this:

http://rankexploits.com/musings/category/statistics/

Ed, indeed it is often assumed that variation on a long-term temperature trend is very much like red noise. This is of course only valid if there is indeed a trend during the period of measurement, so this premise should never be forgotten. Given that, however, ARMA could indeed be used.

But Jeff Id’s estimate should also give a good indication at first sight. This is because the four means of measurement are reasonably independent, which is a premise for being able to use differences to calculate sigma. Otherwise you’d be ignoring structurally non-constant (e.g. offset-only) errors.

However, if the two methods give very different results, one should get suspicious. Probably one of the two premises then does not hold.

In the end, nevertheless, not that many dispute the core statement made by Lomborg, which is that the temperature indeed did not rise, or at least not by that much, over the last 10 years. What is disputed is whether this can, or perhaps even should, be seen as natural variation. Tamino assumes it is natural and that everybody who claims otherwise is a denier.

The truth is that Tamino could be right (except for the denier part). It could be just natural, but after 10 years of no (strong) warming, with the theory predicting a trend that strong, the likelihood of that is decreasing.

This is a bit OT but weather related. I just spent some time wandering over the new NOAA NWS maps that go with each area clicked on for the current temp. If you click on a town in Wallowa County, the temp will be somewhere around 15 to 25 degrees F right now, depending on the town you click on. But if you click in an area outside a little town circle (like in the middle of someone’s field), the temp is 43 degrees F. Every time. No matter where you click. The other thing I have begun to notice is that the automatic weather station that I check here: http://www.enterpriseweather.com/ is not updating like it used to. The temp reading right now at that weather site is from 9:00 last night.

I have also noticed that even though I am directly across from the Pendleton, Oregon airport station (its on the south facing mesa hill and I am on the other side of our little canyon on the north facing hill), the temp being recorded there ain’t nothin like it is where I am standing. I have crinkly grass under my feet and a chilly 29 degrees while the airport is recording a balmy 38 degrees F. When I get a chance, I will try to survey that station. Anthony, what do I need to do for that?

Am I alone in finding “Grant Foster” a hoot? Does your writing foster many grants, Anthony? Probably not.

Armin

There is nothing wrong with ARMA for Tamino’s first post; I would use it for that. In the case above we shouldn’t use ARMA, because the difference gives us a method to isolate instrument variation. Still, if you do use ARMA, you actually get similar results when done correctly.

Using ARMA, Tamino got a sigma two times my sigma. This means that if he had applied slope confidence calculations he would have a result of two times my 0.001, i.e. 0.002 deg C/year, at 95% confidence.

None of this makes any conclusion about what happens next. Only that Bjorn is correct.
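Whether ARMA matters comes down to how autocorrelated the residual noise is. A quick Monte Carlo sketch (the AR(1) coefficient of 0.7 and the noise level are illustrative choices, not values fitted to any temperature series) shows the effect Tamino’s model captures: red noise spreads the fitted slopes out far more than white noise of the same driving amplitude, so the honest sigma on the trend is larger.

```python
import numpy as np

def slope_sigma(noise_maker, n=120, trials=2000, seed=2):
    """Empirical std of OLS slopes fitted to pure-noise series."""
    rng = np.random.default_rng(seed)
    x = np.arange(n, dtype=float)
    slopes = [np.polyfit(x, noise_maker(rng, n), 1)[0] for _ in range(trials)]
    return float(np.std(slopes))

def white(rng, n):
    return rng.normal(0.0, 0.1, n)

def red(rng, n, phi=0.7):
    # AR(1) "red" noise: each point keeps 70% of the previous one (assumed phi)
    e = rng.normal(0.0, 0.1, n)
    y = np.empty(n)
    y[0] = e[0]
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    return y

sig_w, sig_r = slope_sigma(white), slope_sigma(red)
print(f"slope sigma – white: {sig_w:.5f}, red: {sig_r:.5f}, ratio {sig_r / sig_w:.1f}x")
```

A several-fold ratio like this is roughly why Tamino’s ARMA sigma comes out bigger than a white-noise calculation on the same data.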

As a person who is neither a scientist nor a mathematician, I have a question that perhaps someone could help me with.

When someone refers to “noise” in a climate trend, it seems to me that they could be referring to at least one of two things. First, they may be asserting that temperature randomly fluctuates around a given trendline. Second, they may be asserting that temperature fluctuates around a given trendline, that the fluctuation is not random, but that we lack the knowledge to determine what is causing the fluctuation.

It seems to me that Tamino is asserting that the “noise” is a random fluctuation. Does anyone know whether this true? What if the so-called “noise” has a cause that we lack the sophistication to identify?

PaulD:

The noise is chaotic, not random. Yes, in theory it does have a cause, but it is not predictable. Some will argue over whether chaotic systems are inherently unpredictable or whether it is just that we cannot predict them. The point, however, is that it acts as though it were random in terms of the fluctuations about the trendline.
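A toy illustration of “chaotic, not random”: the logistic map below is completely deterministic, yet two runs whose starting points differ by one part in a billion diverge to order one within a few dozen steps. (The map has nothing to do with climate models; it just shows how deterministic dynamics can masquerade as noise.)

```python
def logistic_orbit(x0, r=4.0, steps=60):
    """Iterate the deterministic logistic map x -> r*x*(1-x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-9)   # nudge the start by one part per billion

gap = max(abs(u - v) for u, v in zip(a, b))
print(f"largest divergence within 60 steps: {gap:.3f}")
```

Every value in both orbits has a definite cause, yet a measurement error of a billionth is enough to make the future look random.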

Thanks to Anthony on behalf of those who have a good logical grip of the relevant science but are susceptible to disinformation from those who might be better mathematicians but who use their maths to obscure truth rather than to find it.

There was a time when maths was supposed to be a simple logical form of expression which from time to time could be more effective than words. That time has long gone and a good deal of modern maths usage is designed to confuse and deceive.

Noise: (noiz) n. any data in a set that either a. serves to disprove the theory upon which your grant is funded, or b. fails to prove the theory upon which your reputation is based.

Here’s my attempt at a clarifying point, as I understand it:

There is a difference between the standard deviation of the data itself, which gives a confidence interval around a mean value, and the confidence interval of the observed trend line. The ARMA analysis originally done proves the point that observed data showing a negative trend can occur within a longer-range data set that has a positive trend. It does not prove that the current observations fall into that increasing trend model, only that it is possible, and a reasonable hypothesis. Nor does it negate the hypothesis that the observed trend is a correct reflection of the actual trend.

The statement Tamino seems to dispute is a statement of observed fact. The observed fact is that trend lines (I’m assuming the author meant 2001-current as “this decade”) show that there has been cooling. This is not the same as saying that global warming has stopped, nor is it a statement that says this could be one of those aberrations that occur with fluctuating data. It says that, given the observed data, the trends are negative.

Tamino used a valid statistical process to make an invalid claim: that the author made an incorrect statement about the cooling trend lines. Whatever one’s thoughts about the statistical validity of a low r-squared of the trend line or high standard deviation of the observed data, the best that Tamino should have done with his point is to refer back to his ARMA presentation and remind the readers that the observed negative trend may well be the effect of what he refers to as “noise” and leave it at that.

What Jeff has done is to show that the calculated trend line, which is the best fit regardless of your comfort level with the best fit, is properly calculated – given the observed data – to a high degree of certainty. The only uncertainty with the observed data that would impact the certainty of the trend calculation then, is the measurement error in the observations.

A proxy to test this is the consistency of the different sets of temperature measurements. By determining that these differences are not significant enough to turn a negative trend line into a positive one, the conclusion is that we can be reasonably certain that the best-fit trend line of the temperature measurements is, in fact, negative.

The argument is simply that making such a statement is a true statement, based on observed data, and is in no way some kind of manipulation of fact by a “denialist.” It does not argue that other considerations as to why it is negative shouldn’t be considered through techniques such as an ARMA analysis.
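The mean-versus-trend distinction drawn above can be made concrete. The sketch below uses an invented series (an assumed slight cooling plus white noise, nothing fitted to real data) and computes both intervals side by side; they answer different questions and come from different formulas.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
x = np.arange(n, dtype=float)
y = -0.001 * x + rng.normal(0.0, 0.1, n)   # assumed cooling of 0.001/step + noise

# 1) confidence interval around the mean of the data itself
mean_ci = 1.96 * y.std(ddof=1) / np.sqrt(n)

# 2) confidence interval of the fitted trend line's slope
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
se_slope = np.sqrt(resid @ resid / (n - 2) / ((x - x.mean()) ** 2).sum())
slope_ci = 1.96 * se_slope

print(f"mean +/- {mean_ci:.4f};  slope {slope:+.5f} +/- {slope_ci:.5f} per step")
```

A wide spread in the data is compatible with a tightly determined best-fit slope, which is the nub of the disagreement in the post.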

Statistics is a shady science. This fact is easily verifiable 2-4 times a year when employment figures are released. Invariably the assumptions underlying the statistics are accused of being skewed, and both political parties claim contradictory “victories” based on the same data set. Statistics have validated clinical trials that have led to dangerous drugs being released for general consumption, only to victimize many innocents, clog our courts with lawsuits, and raise everyone’s healthcare costs. Statistics are used misleadingly to support everything from racist ideals to racism in standardized testing. Even the statistical methods used in Quantum Mechanics are believed by many to be cherry-picked and self-limiting in the hunt for a unified field theory. But we all know the biggest goose-egg of all statistical undertakings is weather prediction and climate science, perhaps the most inaccurate, incomplete, and most “fudged” excuse for science going on in the world today.

See, statistics, like computer modeling and politics, cannot be done without some consideration of the desired outcome, and a set of human assumptions to get there. Thus attempting to “prove science” through the use of statistics is not only hopeless, but shows a complete lack of understanding of either. If you are capable of making accurate assumptions and handling complex mathematical calculations, then setting out to prove something statistically rather than empirically is to stray as far from doing true science as is possible. Physics/Thermodynamics holds the answers to the AGW riddle, which statistics will only serve to convolute until long after we are enslaved to legislation. Chemistry is the true SCIENCE behind the storage and transfer of heat at the molecular level, under varying atmospheric conditions. The whole of the science is built around this. Statistics is a footnote tool occasionally used to help clarify a point to the chemist, not the sole means to prove the point. STATISTICS NEVER PROVES SCIENCE.

It should be painfully obvious that the heavy use of statistics to support AGW theory is indicative of the disingenuous nature of the science and political effort behind it. This is why I have asserted many times over, and assert again now, that any effort to combat statistics with statistics is already a concession to bad science. It is fighting on their turf, and therefore one for the loss column. The statistics need to be rejected on the basis that even the best statistics are a far cry from scientific certainty. Meeting flawed premise with flawed premise proves nothing, nor does it advance the effort at all. Rather it sets it back, by failing to stick to the basic “Denier” assertion that “the science needs to be better”. Statistics are never good science, so why should we attempt to dabble there too?

As much as I love seeing a “warmist” get a much-needed comeuppance, I must assert that you “made” Tamino in the first place by validating his very invalid approach to “proving” global warming. That will NEVER HAPPEN through the use of statistics. You gave him his clout, you gave him his validity, so at the end of the day, you only managed to wound a monster of your own creation. Not a significant victory, enjoyable as it may be.

AGW is a very old, very well thought out, and very well funded movement. Distraction, surprise, and digression will be par for the course. To fight this fight we must do better. This entire article is really a victory for the warmists, as a significant amount of well-intentioned brain power was exhausted NOT proving a totally insignificant and diversionary point, using methods that will never PROVE anything. Anthony, I love and respect your efforts, and all the heart and soul you put into fighting the good fight. Nobody was going to believe Tamino’s idiotic statistical assertions over the clearly visible trends in the data. But you lost sight of the big picture, and got locked into some “due diligence” effort over some contrived, meaningless statistics, and gave them validity where there was none. These guys fight dirty. Sometimes you gotta call a spade a spade, and a liar a liar, and move on. Your work is a threat to them. They studied you, figured out a way to draw you into their game, then deployed a tactician to do just that. Tamino is that person, and he accomplished his mission. The beauty of science is never having to give the untrue or discredited a moment of consideration. Neither does a scientific blog. If (when) Tamino fails to subscribe to that, he will show his true colors and discredit himself. It’s not as fun, but that’s how you gotta play it…

Magnus A made a comment about Tamino’s bad math at the link above. Here’s his reply

“I’ve seen Jeff Id’s post; its reproduced at the garbage dump Anthony Watts’ blog. That’s where it belongs. It’s good for a laugh.”

How do you know when someone is backed into a corner?

Anthony,

Since [Tamino] won’t let you post on his blog you may not have seen this sarcastic parody that insults you, Leif, Senator Inhofe, Sarah Palin and anyone else with a dissenting opinion of AGW in a thinly veiled ad hominem attack.

http://tamino.wordpress.com/2008/10/19/ipc-projection-falsified/

Even one of the commenters takes a mocking swipe at you and your blog traffic. Maybe it’s a good thing he doesn’t let you post.

So, the debate on climate change has degenerated to derision and ridicule. The following quote comes to mind:

Ridicule is the first and last argument of fools. -Charles Paul Simmons, (1924), US Author

REPLY: I worry not what Tamino nor his commenters say. When they stop hiding behind web monikers, and use a real name like you have done, then they become real people with valid opinions. As it stands now they just exist in the noise band. – Anthony

How far back do you need to go to get a significant warming trend (outside 2 sigmas) using Tamino’s method? Has there been a significant trend since, say, 1988 or 1979?

The discussion of when “this decade” starts reminds me of news stories published in 1999. Squabbles erupted with the traditionalists, who point out that the calendar system has no year zero, so the centuries are supposed to run from 1-100, 101-200, etc. In this context, all the big parties were supposed to occur on the night of 31 Dec 2000.

However, the new-fangled usage assumes the centuries are 100-199, 200-299, etc. Of course one just pretends there is no difficulty with the first century AD being 1-99, which is flanked by 99-1 BCE.

Some of this is discussed at wikipedia where they say:

Those following ordinal year names naturally choose

* 2001–2010 as the current decade

* 2001–2100 as the current century

* 2001–3000 as the current millennium

Those following cardinal year names equally naturally choose

* 2000–2009 as the current decade

* 2000–2099 as the current century

* 2000–2999 as the current millennium

So, as you see, there is no “one true answer” telling us when “this decade” began. Some will say 2001; some will say 2000!
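The two conventions Wikipedia describes are easy to pin down in code (the function names here are mine, purely for illustration):

```python
def decade_cardinal(year):
    """Cardinal style: the decade containing 2005 is 2000-2009."""
    start = (year // 10) * 10
    return start, start + 9

def decade_ordinal(year):
    """Ordinal style: decades run 2001-2010, since there is no year zero."""
    start = ((year - 1) // 10) * 10 + 1
    return start, start + 9

print(decade_cardinal(2000))  # cardinal crowd: 2000 opens "this decade"
print(decade_ordinal(2000))   # ordinal crowd: 2000 still closes the previous one
print(decade_ordinal(2001))
```

The year 2000 is the one point where the two camps genuinely disagree about which decade you are in.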

Anthony,

I disagree with Derek D (10:21:44).

Statistics has useful tools, and statistical results are meaningful when used correctly and understood in context.

Thank you for what you are doing.

Lyman Horne

Statistics. Actually, you’re both off base. Sigma measures the spread (confidence interval) around the mean for a given sample to a certain degree of accuracy. The 95% level indicates an alpha of .05, which says that the actual mean has a 1 in 20 chance of residing outside the confidence interval. An alpha of .05 is the minimum for a statistical sample to be considered statistically significant. However, what counts as “statistically significant” will change depending on the consequences of being wrong: political polls are filled with this alpha, while new drugs require a much tighter alpha.

To determine the alpha, you would first need to know what a complete measurement of world temperature would require. I’ve never seen anyone say what that would be, and can’t imagine what one would look like. It would be inherently complex, given the size, distribution, and dynamic nature of atmospheric temperature. The number of physical locations would be extreme, and the measurements would have to be near-continuous. From what I’ve read, satellite measurements give a pretty nearly comprehensive data set. However, in that case the measurements would not be statistical (i.e. a sample of the whole) but actual measurements of the whole, and statistical calculations would not be relevant. How you combine and evaluate the numbers would obviously involve mathematics, but not confidence intervals for actual temperatures.

Lyman,

So I don’t make any enemies here, understand that I fully agree that statistics are useful tools and can produce meaningful results. Furthermore, I have a TREMENDOUS appreciation for Anthony running this site, and undertaking the efforts he has in the interest of truth.

However, in this case of Anthony vs Tamino, we have two totally opposite conclusions derived from THE EXACT SAME DATA SET. This is where statistics become a trap, a point not lost on the Warmists. If you set out to claim something like AGW that certainly will never be proven factually and scientifically, it is in your best interests to stay out of the arena of proof entirely. Thus the warmists prefer to present one manipulated statistical conclusion after another. The result is that right-minded people like Anthony feel compelled to diligently rebuke the data, pointing out procedural errors as any good scientist would. The problem is he’s not dealing with scientists, he’s dealing with political propagandists, who feel no compulsion to bear the burden of proof. So after spending much time and effort debunking one claim, he is met with 10 more, and it goes without saying that they will be flawed too. So how many weeks, months or years should he waste giving them thorough review? It’s a diversion. The agenda will be pushed on through other avenues while Anthony pores over reams of statistics that we all know are bunk from the outset. And at that point, his good intentions and diligent efforts are all for naught.

So while Anthony’s strength is statistics, mine is outwitting shitheads. And in this case, my strengths may pay better dividends against such an enemy as the Warmist crowd. As such my post is nothing more than the best intentioned, best reasoned, most altruistic advice I can offer after considering the bigger picture and the cast of characters involved. Anthony’s talents are too valuable to waste grading the half-assed statistics homework of amateur scientists and professional propagandists.

Derek D (13:07:38) :

Whew! A thread ending blog if I ever saw one!

I read, and appreciated, Derek D’s post. And I think I understand the points he was making. There were two which I would like to add to….

I agree that statistics can certainly be a valuable tool, but a bad master. Derek said that “STATISTICS NEVER PROVES SCIENCE…” – I thought it was the case that statistics do not PROVE anything. The scholium is simply a set of techniques for understanding trends in large masses of data. What never seems to be realised is that statistics will happily indicate any trend you care to look for – people seem to think that just because I have calculated a rising trend in pet ownership amongst left-handed Norwegians, this is somehow of importance….

I am not so sure I am with Derek when he warns against joining battle on the enemy’s ground. It is true that there you will be mired in manipulation, and your words will be twisted to make it seem as if you are always losing.

But the battle will not be won only amongst the politicians and warmist propagandists. Many non-mathematical scientists and other opinion formers currently believe because they have no time or inclination to read beyond the propaganda, and cannot believe that some complex maths can be anything but correct. They will need to be led gently away from this belief – just telling them that they are dead wrong will not persuade them. They need to see disputation in maths to understand for themselves that lies can be numeric as well as verbal. This is where Steve McIntyre scores highly. He stresses that he is neither a ‘warmer’ nor a ‘denier’ – he just wants to see good science, so he spends a lot of time uncovering appalling statistical malfeasance. And this difficult position which he has held is now standing him in good stead – ‘warmer’ science institutions are able to listen to him, because he is scrupulously polite and talks their language. It is from work such as his that the huge walls of the AGW hypothesis will first start to crumble.

Well, I read this entire list of posts, looking for some climate insights; but it seems like I am reading from the lecture notes of the statistical mathematics lecturer in the political science department.

The beautiful thing about mathematics, particularly statistical mathematics, is that you can apply such analyses to any set of data; whether such data is the computed output of f(xyz), or is random, or chaotic, or simply quite arbitrary.

A good example would be to start with say the Manhattan telephone directory, where all the numbers derive from some sort of rules, at least they are all ten digits, counting area codes, but otherwise there is no rational connection between the number and the exact location of the telephone that rings, if you dial that number. But it is trivial to compute the average, or mean, the median, or any other statistical parameter associated with that book of raw data. But it doesn’t mean anything to anybody (but a statistician). And when they print a new phone book in say three years; nothing meaningful can be ascertained from the differences between the statistical analysis of the two editions of the telephone book.

Do any of the people who call themselves climatologists actually do anything that involves Physics, or Physical Chemistry, or Oceanography, or Geology; or anything that is real science?

Mathematics, as you know, is all pure fiction; it isn’t any universal truth. We made it all up in our heads, out of whole cloth. We had darn good reasons for doing that: to create tools to describe the functioning of our science models, which are themselves all fictional as well.

Absolutely NOTHING, that we use or talk about in mathematics, actually exists anywhere in the real universe.

There are no points, no lines, no planes, no circles or spheres; none of those things exist.

The equation x^2 + y^2 + z^2 = r^2 describes a sphere in Euclidean geometry, but in no way does it account for something like 8 km high mountains on the surface of such an object!

It is no wonder that the GCM computer geeks seem to rule the roost in climatology; they don’t seem to even need any input from the real universe to work their magic.

All such things are perfectly good data sets to work the wonders of statistical mathematics on.

The results are quite meaningless of course.

A good example of statistics run amok arose in the Vietnam War era, when somebody decided to have a draft lottery based on birthdates. The calendar days were assigned sequential numbers from 1 for Jan 1 to 366 for Dec 31, including Feb 29 of course.

Numbers were drawn out of a hat, figuratively speaking, and persons born on the dates drawn first were the first chosen for the military conscription program.

Within days of that very first draft lottery, the walls leaked statisticians, who started to pronounce the lottery unfair and biased, because more people with birthdays in Jan-Feb drew low numbers and were chosen early, in the view of these math whizzes.

Now in such a draft lottery, there are factorial 366 possible draft outcomes; each one different. That is one huge number; I’ll let you math jocks calculate that for yourselves, since I don’t have a good calculator handy at the moment.
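For the record, the “huge number” is easy to compute exactly, since Python integers have arbitrary precision; a quick sketch:

```python
import math

# Number of possible orderings of 366 birthdates in a draft lottery
outcomes = math.factorial(366)

# Far too large to print in full, but easy to characterize
digits = len(str(outcomes))
print(f"366! has {digits} digits")  # 781 digits, i.e. roughly 10^780
```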

So these mathematical geniuses pronounced the first draft lottery to be not random; and they made this momentous gaffe, on the result of one single sample out of factorial 366 possible outcomes.

One of those possible outcomes was to have the numbers come out as Jan 1, Jan 2, Jan 3, …, Dec 28, Dec 29, Dec 30, Dec 31, in exact calendar order.

That outcome, which would have shocked the world, is no more unlikely than the real outcome that occurred with that first draft lottery.

Statistics gives no information about any single event. Now if they had held a draft lottery every second for the entire estimated age of the universe, you might get enough “experimental” data to say the lottery was non-random; but genius-level statisticians reached that conclusion based on a single data point.

Who knew my ranting would provoke such a rational and intellectual discussion. But that is, after all, why I love to come here. Real scientists can handle a little tough language as long as the day is ultimately ruled by the facts.

You are the last of a dying breed. Keep fighting the good fight…

I do not disagree with Derek’s postings.

After spending years in academia with computer modelers (sic), I am disgusted with the output, based as it can only be on (often predetermined) assumptions.

I also think Derek has a good point. The AGW group is so well funded that thousands of people can spend their lives producing papers which must individually be disputed. In the meantime some of them might have validity. It is a nightmarishly and impossibly huge task.

I believe the only thing that can stop it is cold weather. No science, statistics, posturing or calling of shenanigans will do it; there is too much money involved. Currently we are in a downtrend or flat trend, the sun is quiet, and it is becoming evident that the trend is continuing through winter. I am not particularly religious, but we are definitely not in the driver’s seat on this one.

Pointing out that the downtrend is clearly real, using as simple an example as I could find seemed important. I think Anthony Watts understood the statistics and significance of the demonstration immediately just as he has followed the solar, ice and measurement stations so well.

Statistics are part of the ammunition of the politician. When the government (of any party, in any country) claims prices are stable or educational standards are rising they seek to justify their claim with statistics, and they do so because they know some people will be persuaded (even if they have no understanding of the statistics). We can say those people should not be persuaded, but they are. What are we meant to do, leave them persuaded by something we do not accept and say “bah, it’s only statistics”?

The political advantage to be gained by use of statistics is real. Opposing politicians would be ill-advised to say “bah, it’s only statistics”, that would be an admission of defeat. They have to answer like with like and produce their own statistics to seek to persuade the volatile audience against the government’s position. That the refutation might be based on just as flimsy ground as that being refuted is neither here nor there. It is not, at heart, an argument about who is right but about whether one side is to be allowed to deploy machine guns while the other side limits itself to water pistols.

Undertaking a statistical analysis on a set of data is, of itself, no more useful than doing a crossword. Its utility comes from the conclusions you draw (or invite others to draw) and from the consequences of those conclusions. As has been said here many times no one would give two hoots whether the world is warming unless it had adverse consequences. The warmists use statistics to conclude both that the world is warming and that the warming will have a seriously detrimental effect. From that point they advocate radical changes to the way we live. None of that stands without their supporting statistics and none of it can be challenged effectively without challenging those statistics.

One victory might spur your opponent into undertaking another statistical exercise which (as if by magic) comes to the same conclusion as the one you disproved. Your choice then is to say either “I give up, I’m tired of this game” or “ok, let’s see if his second attempt is any better”. And if you can cast sufficient doubt on his second method you are faced with the same choice when his third appears. There is no way out of this apart from surrender.

There is, however, an important side effect of being able to challenge a succession of analyses sufficiently strongly to force your opponent to have another go. Where his work is relied on by politicians to press for a particular policy, the fact that the reasons for that policy are re-cast time and again because flaws have been identified can (not will, but can) throw doubt on the policy itself. It is a war of attrition – you cannot avoid the attrition if you want to win the war.

I thoroughly enjoyed reading Derek’s posts, and find them enlightening, as they present a view from the 30,000′ level that is easy to overlook.

Are the statistical arguments without value?…I’d say no, because at the 500′ level, which is where many of us engage in this lunacy, it’s nice to have a few things in your hip pocket so you can say “Yeah…well if that’s true, why isn’t THIS true?”, in an attempt to make someone realize that the “science” is maybe not what they think it is.

In a war, you need both privates and generals…and the pay is the same, whether you’re fighting or marching.

JimB

Tamino is part of the “country club” of climate science referenced by Tenured Prof in the thread at Climate Audit titled “Peter Brown and Mann et al 2008” at comment #161. I posted a comment in response which I expect Steve McIntyre will snip. But this needs to be pointed out to the club members who somehow think they should never be subjected to criticism. My comment:

stan Says:

Re: Tenured Prof (#161),

Prof,

The country club of climate scientists would not bother anyone, if they didn’t use their club to influence govt policy. But they use their little club to try to impose enormous costs on everyone else in the world. Some in the club are even demanding that anyone who disagrees with the club be thrown in jail.

This may not be a focus of Steve’s, but it is a really big deal to a lot of his readers.

Bottom line — when the country club members use the club’s activities to demand significant sacrifices be imposed on the rest of the world, the rest of the world should have the right to demand that the club be competent, open, transparent, and honest in its dealings. And that right is not dependent on whether they like McIntyre or want to blackball him.

When the club started pushing the world around, the world got the right to push back. And looking under the rock is the first step.

This just in. I wonder how accurate each of these stations is.

These data are preliminary and have not undergone final quality control by the National Climatic Data Center (NCDC). Therefore, these data are subject to revision. Final and certified climate data can be accessed at the NCDC – http://www.ncdc.noaa.gov.

Record Report

000
SXUS76 KPDT 231815
RERPDT

RECORD EVENT REPORT
NATIONAL WEATHER SERVICE PENDLETON OR
1115 AM PDT THU OCT 23 2008

…NEW DAILY RECORD LOW TEMPERATURES FOR OCTOBER 23RD…

NOTE: STATIONS MARKED WITH * INDICATE THAT THE STATION REPORTS ONCE PER DAY. FOR CONSISTENCY…THESE VALUES ARE CONSIDERED TO HAVE OCCURRED ON THE DAY THE OBSERVATION WAS TAKEN BUT MAY HAVE ACTUALLY OCCURRED (ESPECIALLY FOR MAX TEMPERATURE) ON THE PREVIOUS DAY.

STATION               PREVIOUS      NEW         RECORDS
                      RECORD/YEAR   RECORD      BEGAN
PENDLETON(ARPT), OR   29 / 1984     29 (TIED)   1934 :SINCE MID
*UNION ES, OR         20 / 1980     17          1928
WALLA WALLA, WA       32 / 2000     32 (TIED)   1949 :SINCE MID

Sobering, I thank the previous writers.

Three points however.

One. The ENTIRE premise of ALL of the AGW-power-based taxes and government controls is based on a rise of half of one degree in temperatures that is (claimed to) correspond to rising CO2 levels since WWII.

However, the ONLY time in the earth’s 4+ billion year history when CO2 and temperatures have BOTH been rising at the same time is the 26-year period from 1972 through 1998. In the ten-year period since 1998, temperatures have been steady, or arguably falling slightly, despite a steady increase in CO2 levels. If only one year in 37 showed a declining temperature, that’s noise/weather/whether/wonder/why/why not/weird results and means nothing.

But when ten years of the AGW extremists’ precious 37-year trend show the complete opposite of Hansen’s politics? That becomes meaningful!

Two. Realists MUST use any means available (including statistics) to show that the AGW extremists’ politically driven conclusion of taxes, government controls and death is wrong.

Three. Why a straight line? Up, down, flat, or vertical: temperatures across time are NOT linear, and any model that pretends to compare real-world data with a straight line will be wrong.

Use a sine wave. (Or a cosine wave if you are left handed, or prefer the metric system for sum reason.) Model the temperatures since 1908 WITH A SINE WAVE and you will see a long, natural rise and fall that closely matches a 70-year cycle: the high point in 1935-1945, the lows in 1970, the high point in 1995-2005, and today’s slow fall from the 2004-2006 peak. That curve, though harder to plot, WILL match real-world measured dates. Er, data. 8<)
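A minimal sketch of such a fit, assuming a fixed 70-year period and using synthetic data (not real station records): once the period is fixed, the sine fit reduces to ordinary linear least squares in the sine, cosine, and offset coefficients.

```python
import numpy as np

# Illustrative only: synthetic "anomalies" built from a 70-year cycle
# plus noise -- not real temperature records.
rng = np.random.default_rng(0)
years = np.arange(1908, 2009)
anomalies = (0.25 * np.sin(2 * np.pi * (years - 1905) / 70.0)
             + rng.normal(0.0, 0.1, years.size))

# With the period fixed at 70 years, fitting
# A*sin(w*t) + B*cos(w*t) + C is ordinary linear least squares.
w = 2.0 * np.pi / 70.0
design = np.column_stack([np.sin(w * years),
                          np.cos(w * years),
                          np.ones(years.size)])
coef, *_ = np.linalg.lstsq(design, anomalies, rcond=None)

amplitude = np.hypot(coef[0], coef[1])  # should land near the 0.25 used above
print(f"fitted amplitude: {amplitude:.2f} degC")
```

A fit in which the period itself is a free parameter would instead need nonlinear least squares (e.g. `scipy.optimize.curve_fit`).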

But to try to match a flat line? No – you can ALWAYS be shown by an AGW extremist to be wrong. Because, with a straight line, you ARE going to be wrong.

My eyeball field geologist computer says the last decade trend is flat. The 1998 record El Nino flood year is an anomaly.

1980-1997 = flat

1997-1998 = step up about 0.3

1999-2008 = flat

Hard to believe CO2 warming produced this step function, which looks like noise, weather, what have you.

You can also say with a very high confidence level that this decade has been almost 0.2 degrees Celsius warmer than the previous decade.

I think that’s what counts. The word “climate” refers to conditions over a long period of time. Usually 30 years, but shorter periods can be used if seen necessary. However, I think anything less than 10 years is more weather than climate.

Climate change hasn’t stopped since this decade is record warm and a lot warmer than the previous one.

I think this shows perfectly what’s this all about: http://tamino.wordpress.com/2008/12/31/stupid-is-as-stupid-does/