CRUTEM3 error getting attention by Met Office

This is a repost of two articles from John Graham-Cumming’s blog. I watched with interest earlier this month as he and a colleague identified what they believed to be a math error in the station-error calculation applied to grid cells. It now appears, through a journalistic backchannel, that the Met Office is taking the issue seriously.

https://i0.wp.com/hadobs.metoffice.com/hadcrut3/diagnostics/CRUTEM3_bar.png?resize=501%2C356

What I found most interesting is that while the error he found may lead to slightly less uncertainty, the magnitude of the uncertainty (especially in homogenization) is quite large in the context of the AGW signal being sought. John asks in his post: “If you see an error in our working please let us know!” I’m sure WUWT readers can weigh in. – Anthony


The station errors in CRUTEM3 and HadCRUT3 are incorrect

I’m told by a BBC journalist that the Met Office has said through their press office that the errors that were pointed out by Ilya Goz and me have been confirmed. The station errors are being incorrectly calculated (almost certainly because of a bug in the software), and the Met Office is rechecking all the error data.

I haven’t heard directly from the Met Office yet; apparently the Met Office is waiting to write to me when they have rechecked their entire dataset.

The outcome is likely to be a small reduction in the error bars surrounding the temperature trend. The trend itself should stay the same, but the uncertainty about the trend will be slightly less.

===============================================

Something odd in the CRUTEM3 station errors

Out of the blue I got a comment on my blog about CRUTEM3 station errors. The commenter wanted to know if I’d tried to verify them: I said I hadn’t since not all the underlying data for CRUTEM3 had been released. The commenter (who I now know to be someone called Ilya Goz) correctly pointed out that although a subset had been released, for some years and some locations on the globe that subset was in fact the entire set of data and so the errors could be checked.

Ilya went on to say that he was having a hard time reproducing the Met Office’s numbers. I encouraged him to write a blog post with an example. He did that (and it looks like he had to create a blog to do it). Sitting in the departures lounge at SFO I read through his blog post and Brohan et al. Ilya’s reasoning seemed sound, his example was clear, and I checked his underlying data against that given by the Met Office.

The trouble was Ilya’s numbers didn’t match the Met Office’s. And his numbers weren’t off by a constant factor or constant difference. They followed a similar pattern to the Met Office’s, but they were not correct. At first I assumed Ilya was wrong, so I checked and double-checked his calculations. His calculations looked right; the Met Office numbers looked wrong.

Then I wrote out the mathematics from the Brohan et al. paper and looked for where the error could be. And I found the source. I quickly emailed Ilya and boarded the plane to dream of CRUTEM and HadCRUT as I tried to sleep upright.

Read the details at JGC’s blog: Something odd in the CRUTEM3 station errors

RockyRoad
February 25, 2010 8:18 am

Please fix this sentence:
What I found most interesting is that while the may error he found may lead to slightly less uncertainty
Thanks!
REPLY: Fixed. Thank you. One too many mays, that’s what I get for writing with a raging head cold. – A

February 25, 2010 8:32 am

Well, hopefully the Met is going to start again, with transparency this time. We shall see.

Pamela Gray
February 25, 2010 8:32 am

I would want a clear time line of when the subsets and their locations became the global set. Depending on oceanic/atmospheric conditions at the time, this could help explain the overall anomaly discrepancy between satellite and ground sensors in the NH as demonstrated in the Lean and Rind paper Leif linked for us.

Kacynski
February 25, 2010 8:34 am

Wow. So they can’t even get the math right, and yet they want the world to spend billions on a fix for a problem they say may be coming?
I think we should tell them where to go.

Pamela Gray
February 25, 2010 8:39 am

Anthony, I am at home too dealing with a raging head cold and solidly blocked up Eustachian tubes. But this gives me time to engage in my favorite website and my favorite topics.
REPLY: Just try to keep the CODE out of your nose – A

Rob
February 25, 2010 8:39 am

I assume the retraction of the sea levels rising has already shown up on your radar screen:
http://www.nature.com/ngeo/journal/vaop/ncurrent/full/ngeo780.html
REPLY: Yes story done here several days ago, thanks- A

February 25, 2010 8:43 am

Ha ha –http://bushynews.com/wp-content/uploads/2010/02/ipcc2.gif

TerryS
February 25, 2010 8:52 am

It is a pity the Met Office didn’t let JG-C know that he was correct. Instead he had to find out from a journalist.

Cold Englishman
February 25, 2010 8:58 am

Fiddling with it at this end of the computation is all very well, but is it based on the raw data, the value-added data, or the homogenised data?
In other words, more accurately computed rubbish is still rubbish.

brian
February 25, 2010 9:05 am

Why don’t they release the source code for the software? I’m sure WUWT readers would be able to help out with a bit of QA work.

Phil.
February 25, 2010 9:11 am

The error bars are 95%, i.e. 2 sigma; the blogger hasn’t realized this and is just calculating sigma.

February 25, 2010 9:12 am

My personal take on Ben Santer’s latest rant on Real Climate.

Baa Humbug
February 25, 2010 9:23 am

I’m not quite clear on this. The error only affects the uncertainty range? What exactly does this mean?
One thing I do understand: yet again it takes a couple of “amateurs” to correct the pros at the Met, backed by dollars and sophisticated equipment. Or am I making too much of it?

JMurphy
February 25, 2010 9:27 am

Mr Watts: With regard to temperature data, what is your response to Tamino, who has finished processing the GHCN data and now states:
“The claim that the station dropout is responsible for any, let alone most, of the modern warming trend, is utterly, demonstrably, provably false. The claim that adjustments introduced by analysis centers such as NASA GISS have introduced false warming is utterly, demonstrably, provably false.”
http://tamino.wordpress.com/2010/02/25/false-claims-proven-false/#more-2346
http://tamino.wordpress.com/2010/02/25/shame/
Is he correct in also stating that no-one else (particularly anyone who had doubt in that data) has done this to see whether there was in fact a real bias ? That can’t be true, surely ?
REPLY: I had a look, but reproducing will be rather difficult when he takes stands like this:
Tamino’s response to request for code from Zeke:

[Response: I haven’t decided whether or not to publish this (peer reviewed). If I don’t I’ll probably make the code available to those who I consider serious investigators. That does not include denialists.]

It’s Jones’s “Why would I make the data available to you, when your aim is to find something wrong with it?” quote all over again. I’ve sent out messages to others involved, we’ll see if there’s any merit. -A

Nick
February 25, 2010 9:49 am

Is homogenization error a constant value for all stations, or does it differ by station? Graham-Cumming and Goz treat it as constant. I’m not versed in statistics, so any insight would be appreciated!

MattN
February 25, 2010 9:51 am

I am unclear how fixing an error can make the uncertainty ranges *smaller*. Wouldn’t it make them larger, as in, we’re now less certain?

psi
February 25, 2010 9:55 am

[Response: I haven’t decided whether or not to publish this (peer reviewed). If I don’t I’ll probably make the code available to those who I consider serious investigators. That does not include denialists.]
Without being able to pass judgment on the technical merits in question, I would proffer the response that anyone who feels the need to continue using phrases like “denialist” is not operating as a scientist, but as an ideologue. Based on this, I am prejudiced to suspect that when Tamino states that x is “utterly, demonstrably, provably false,” he is either wrong or engaging in some kind of straw man argument. This kind of data should be part of the public record, period. AGW proponents can’t have it both ways. The more right they are about the potentially catastrophic consequences of warming, the more imperative it would seem that the data used to support the conclusion that warming is actually occurring are a matter of public concern. Therefore the more they practice this kind of selectivity and secrecy (which reminds one irresistibly of the ethical practices of immature junior high school students and their cliques), the less they deserve public confidence.

Pamela Gray
February 25, 2010 10:01 am

Oh good heavens Tamino. You are in need of a strong-tempered mother willing to put your stingy behind, after a good paddle, in a corner. Now share your candy. There is one word we mothers use for children who behave like this. Snot.

February 25, 2010 10:04 am

reproducing will be rather difficult
Why?
Reproducing Tamino’s results with the same data and the same code seems like it would be trivial, and not prove anything – unless you think he’s just flat making up his results.
The reference to Jones would be relevant if the data were private, but it’s not. Running those numbers should settle the question.
REPLY: The point is that I, or the main author behind this, E.M. Smith, can undertake a replication of our own. I’ve sent him an email and I hope he does. But reproducing exactly what “Tamino” has done would be difficult without his code. If it’s no big deal, and there are no trade secrets or proprietary data involved, and it is, as you say, “trivial”, why then does he offer to share only with people who might agree with his results and exclude others?
Do I trust “Tamino”, a man supposedly of science but who won’t put his name to his criticisms, who regularly denigrates others, and who now won’t share what he claims falsifies the work of people who do put their name to their work? In a word, no. -A

February 25, 2010 10:06 am

How good are former Geography students at software development for the HADCRUT code… ? (Tim Mitchell , missing tim Harry_read_me.txt)
http://www.cru.uea.ac.uk/~timm/personal/index.html
Tim Mitchell:
“At Oxford University I read geography (1994-1997, School of Geography). My college was Christ Church. At Oxford I developed a special interest in the study of climate change.
In 1997 I moved to Norwich to carry out the research for a PhD at the Climatic Research Unit (CRU) of the University of East Anglia. My subject was the development of climate scenarios for subsequent use by researchers investigating the impacts of climate change. I was supervised by Mike Hulme and by John Mitchell (Hadley Centre of the UK Meteorological Office). The PhD was awarded in April 2001.”
Of course, if you get eco-green evangelical Christian students with a geography degree to write the HADCRUT code…?
“…Although I have yet to see any evidence that climate change is a sign of Christ’s imminent return, human pollution is clearly another of the birth pangs of creation, as it eagerly awaits being delivered from the bondage of corruption (Romans. 19-22).
Tim Mitchell works at the Climatic Research Unit, UEA, Norwich, and is a member of South Park Evangelical Church.
http://www.e-n.org.uk/p-1129-Climate-change-and-the-Christian.htm
What do you get? (See for yourselves in Harry_read_me.txt.)
Even the scientists seem to want to believe in AGW rather than test for it experimentally; like John Houghton, they seem to want to believe that humans are polluting and destroying the earth. If you don’t look for the null hypothesis, you may not find the good news that AGW may have a minuscule impact when compared to natural processes.
CRU provides one of three datasets (HadCRU) for OTHER researchers:
(we have since seen that the three are more interlinked than previously thought)
Tim Mitchell:
“An important part of my work is to develop climate data-sets. My intention is that these data-sets will then be used by researchers investigating the impacts of climate change. Here I provide access to these data-sets.”
CRU TS 1.2 10′ Europe 1901-2000 time-series pre, tmp, dtr, vap, cld MITCHELL et al, 2003 this site
CRU TS 2.0 0.5° globe 1901-2000 time-series pre, tmp, dtr, vap, cld MITCHELL et al, 2003 this site
(full list see the links)
http://www.cru.uea.ac.uk/~timm/data/index.html
http://www.cru.uea.ac.uk/~timm/data/index-table.html
There are three centres which calculate global-average temperature each month:
•Met Office, in collaboration with the Climatic Research Unit (CRU) at the University of East Anglia (UK)
•Goddard Institute for Space Studies (GISS), which is part of NASA (USA)
•National Climatic Data Center (NCDC), which is part of the National Oceanic and Atmospheric Administration (NOAA) (USA)”
Tim Mitchell again:
http://www.cru.uea.ac.uk/~timm/research/index.html
I imagine an audit of the software development processes would be interesting to see.
If you’re going to do good science, release the computer code too:
http://www.guardian.co.uk/technology/2010/feb/05/science-climate-emails-code-release
Steve Mosher sums it up:
“So, I take a hard, hard line on this. If you don’t freely release your data and freely release your code in all cases, then I am not rationally bound to even consider your claims.
You haven’t produced science, you’ve just advertised it.
The real science is not the paper describing the data; it’s not the words describing the algorithm. The real science is the data AS YOU USED IT and the code AS YOU RAN IT.”

kadaka
February 25, 2010 10:06 am

Pamela Gray (08:39:18) :
Anthony, I am at home too dealing with a raging head cold and solidly blocked up Eustachian tubes. But this gives me time to engage in my favorite website and my favorite topics.

Ah, this may explain why “Phil.” (http://deleted/) has suddenly sprung to the attack in the O’Reilly post after almost exactly 24 hours (within six seconds; he might have had a jumpy “submit comment” twitch). Like a jackal on the African plains, he sensed you were ill and weak, and decided to pounce!

Ruhroh
February 25, 2010 10:06 am

I think the point is that if you hold down the button on the food processor long enough, everything is entirely uniform. Thereafter, it is impossible to backtrack to the roles of the various similar ingredients.
JGC and Ilya found gridding errors that varied improperly with the number of stations.
That problem was not masked by homogenization.
IMHO
RR

Alan the Brit
February 25, 2010 10:16 am

One does wonder at times why a public body, funded annually by the hard-pressed British taxpayer to the tune of £270M, with 1800 staff, with a new £30M XBox360 that does 2 billion calculations per second (allegedly) and a carbon footprint the size of a small town, housed in a state-of-the-art green building just outside Exeter, Devon, should be having “errors” at all, no matter how “small”, considering that it appears to be a veritable idyllic, relaxed working environment! For that kinda money, I expect perfection! The same basic model that they use for 5-day weather forecasting is used for medium-term forecasting and long-term climate change prediction. If they get the weather wrong, what chance climate? But of course weather isn’t climate, it’s only, well, weather really, except when it’s climate!
Pamela Gray, I do hope you & Anthony get better soon!

Scott B.
February 25, 2010 10:23 am

What’s up with that chart? My eyes can’t focus on it. LOL. That’s a good way to hide errors…

Steve M.
February 25, 2010 10:25 am

Do I trust “Tamino”, a man supposedly of science but who won’t put his name to his criticisms, who regularly denigrates others, and who now won’t share what he claims falsifies the work of people who do put their name to their work? In a word, no. -A

I was thinking the same thing. It’s good to see that you openly put your name on this site, along with your credentials. I read through the comments on Tamino’s site, and note that it’s all cheer-leading. I couldn’t prove that opposing opinions are moderated out, but have a gut feeling they are.

Douglas DC
February 25, 2010 10:42 am

Pamela Gray (10:01:13) :
Oh good heavens Tamino. You are in need of a strong-tempered mother willing to put your stingy behind, after a good paddle, in a corner. Now share your candy. There is one word we mothers use for children who behave like this. Snot.
Great visual. Reminds me of an incident involving my cousins, me, and Grandma’s cookie jar. For a crippled Cherokee woman she could move fast and was creative with her cane…
To share is good; don’t be afraid. Being dragged to the front porch by the ear, now that’s real fear…

kadaka
February 25, 2010 10:43 am

Alan the Brit (10:16:50) :
…with a new £30M XBox360 that does 2 billion calculations per second (allegedly)…

Well, there’s the problem right there! You need Sony PlayStation 3s for decent supercomputing. The US Air Force thinks so, therefore it must be true.

Pamela Gray
February 25, 2010 10:49 am

For a readable review of calculating error, see the following link. Determinate errors are what I refer to as sloppy math, sloppy data entry, biased assumptions ending up in mathematical calculations, etc. For example, after I entered my data into the computer spreadsheet, I rechecked each entry against the original recorded numbers to make sure I had not made a “sloppy work”, i.e. determinate, error.
Indeterminate errors happen when the environment in which the data is taken changes slightly (or even greatly). To reduce indeterminate errors, repeated measures are sometimes used, which should reduce the calculated error value. I did this by taking the same auditory brainstem response reading repeatedly (i.e. 1500 beeps, 1500 readings). The screen that displayed the electrical brainstem response clearly demonstrated that the more often a reading was taken, the clearer and more stable the signal response became. I was reducing indeterminate errors.
The land-based temperature data set has so far been peppered with both kinds of errors.
http://www.lhup.edu/~dsimanek/errors.htm
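The repeated-measures point can be illustrated with a small simulation (a hypothetical sketch; the numbers are invented for illustration and nothing here comes from the ABR equipment or any Met Office code): for independent readings, the standard error of the mean falls roughly as 1/sqrt(N), so 1500 sweeps give a far tighter average than 10.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is repeatable

def standard_error_of_mean(n_readings, noise_sd=1.0, true_value=0.0):
    """Simulate n_readings noisy measurements of true_value and return
    the estimated standard error of their mean (indeterminate error)."""
    readings = [true_value + random.gauss(0, noise_sd) for _ in range(n_readings)]
    return statistics.stdev(readings) / n_readings ** 0.5

# More repeats -> smaller indeterminate (random) error in the average.
se_small = standard_error_of_mean(10)
se_large = standard_error_of_mean(1500)  # e.g. 1500 beeps, 1500 readings
print(se_small, se_large)
```

The exact values depend on the simulated noise, but the 1500-reading standard error is roughly an order of magnitude smaller than the 10-reading one, matching the clearer, more stable signal described above.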

February 25, 2010 10:49 am

The Times reports that the BBC is considering dropping weather forecasting from the Met Office and putting it out to tender to other weather services. Mind you, that is a case of the pot (the BBC) calling the kettle (the Met Office) black!

Pamela Gray
February 25, 2010 10:56 am

Addendum
The ABR equipment included computer code that would average responses together after each one was taken, thus calculating the electrical response with a finer and finer error number as repeated samples were taken. It wasn’t that the brainstem itself was getting better at it the longer it was subjected to the beeps.

Bob Kutz
February 25, 2010 11:03 am

JMurphy (09:27:58) :
REPLY: I had a look, but reproducing will be rather difficult when he takes stands like this:
Tamino’s response to request for code from Zeke:
[Response: I haven’t decided whether or not to publish this (peer reviewed). If I don’t I’ll probably make the code available to those who I consider serious investigators. That does not include denialists.]
Tony; I believe you’ll get the chance;
From;
http://tamino.wordpress.com/2010/02/25/false-claims-proven-false/#comment-39807;
Bob Kutz // February 25, 2010 at 6:45 pm | Reply
I won’t sink to comments such as yours, I will again ask; can I see your work?
[Response: (from Tamino, presumably) Other than the page caption, the words “GHCN” and “global” don’t appear on that page. And the caption to the graph you linked is “DIFFERENCE BETWEEN RAW AND FINAL USHCN DATA SETS.”
As for seeing my work, you bet your ass you can. I’ve decided to publish.]
I think that’s really his way of saying; I’ll hold it private until it’s published, which is certainly his right to do. We’ll see.
Give him credit though; at least he didn’t delete my comments, as RC would have.

Mark
February 25, 2010 11:08 am

I’d like to know if this error works in favor of the IPCC and AGW or against. Seems like most ‘errors’ have been in their favor.

Steve Keohane
February 25, 2010 11:15 am

It seems to me that using ‘95% confidence limits’, +/-2 sigma, exaggerates the quality of the data by showing only 2/3rds of the natural variance. Another way to use sigma is to point out that +/- 3 sigma is considered normal distribution, wherein 99.7% of natural occurrences will fall. A lot of manufacturing processes became very successful by controlling process variance +/- 3 sigma to within output specifications. It is not rational to place a specification (CO2 restrictions) on a process with greater inherent variation than the specifications and further expect (bank on) an outcome within specifications. Not something I would invest in.
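The coverage figures quoted here (95% for ±2 sigma, 99.7% for ±3 sigma) are standard normal-distribution results, not anything specific to CRUTEM; they are easy to verify with the error function:

```python
import math

def normal_coverage(k):
    """Fraction of a normal distribution falling within +/- k standard deviations."""
    return math.erf(k / math.sqrt(2))

print(round(normal_coverage(2), 4))  # → 0.9545, the "95%" band
print(round(normal_coverage(3), 4))  # → 0.9973, the "99.7%" band
```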

sturat
February 25, 2010 11:21 am

So, can you answer the following question?
Have you or any one you know performed a statistical analysis that contradicts Tamino’s claim:
“The claim that the station dropout is responsible for any, let alone most, of the modern warming trend, is utterly, demonstrably, provably false. The claim that adjustments introduced by analysis centers such as NASA GISS have introduced false warming is utterly, demonstrably, provably false.”
What I’m looking for is evidence of any reproducible analysis that supports the claim that the observed warming trend is a result of station dropout and/or adjustments.
Stating that one would need to have access to Tamino’s code is not applicable as stated by Paul Daniel Ash above.
One would assume that since you have been stating the opposite of Tamino (and others) for some time, you have performed, or have access to someone who has performed, this analysis.
It’s a simple question with a simple answer: Yes I have or No I haven’t.
REPLY: Two things. 1) I’ve been studying USHCN, not GHCN; they are different networks. 2) Yes, it is a simple question, but I’m not the person to answer it. E.M. Smith is the one who raised this claim and did all the GHCN analysis, and I’ve sent him an email with the link and the question. I’m sure he’ll respond either here or at his blog. http://chiefio.wordpress.com
Your defense of the phantom researcher “Tamino” withholding code is rubbish. How well would “I’ve disproved you but I won’t show you how, nor will I give my name” hold up in a court of law? Not at all.
-A

NickB.
February 25, 2010 11:23 am

Coming soon in CRUTEM4… the 30’s now the coldest decade on record
/sarcoff
But seriously, is there any way to plot the delta trends between the different versions of CRU and GISS. Seems like every few years the 30’s get colder and colder.

Jason F
February 25, 2010 11:23 am

OT, but I wonder how the folk at http://www.skepticalscience.com/ with their handy iPhone app will rebut this. Actually, I don’t think I’ve seen a more vile AGW propaganda site or app.

rbateman
February 25, 2010 11:34 am

Pamela Gray (10:49:38) :
Astronomical imagers perform the same kind of indeterminate-error reduction, taking multiple images through a noisy atmosphere to improve the signal/noise ratio.
They even ‘dither’ about, moving the centerpoint of the camera with respect to the object to smooth out local atmospheric disturbances and pixel sensitivities.
You can do the same with climate, taking several nearby stations to get a signal means. Erratic readings and missing data can be suppressed in such a manner.
The ratio of rural to urban stations should accurately reflect the current land use of any area, to suppress the noise of land use on the climatic signal.
Current GISS practice of taking a single station to represent a large area is the opposite and noisiest way imaginable to represent climate.

Michael Jankowski
February 25, 2010 11:50 am

“I couldn’t prove that opposing opinions are moderated out, but have a gut feeling they are.”
Anyone who has spent any time there posting opposing opinions knows they are frequently moderated out and selectively edited (before the poster is banned, of course).

February 25, 2010 12:22 pm

Anthony,
In this case replicating what Tamino did wouldn’t be that difficult. There are two primary components that you would need to figure out:
1) How to undertake the spatial weighting. While this is a bit beyond my programming ability (hence my interest in Tamino’s script), the basic idea is simple: split up the world into grid cells based on lat/lon, and calculate the average anomaly of stations in the “pre-cutoff” and “post-cutoff” series (or calculate the average anomaly of all stations in each grid to compare with GISS et al). Weight each grid cell by its area to determine the contribution to global temp anomalies. This leads us to…
2) How to combine station records within grid cells. Here Tamino developed his own “optimal” method (described briefly here: http://tamino.wordpress.com/2010/02/08/combining-stations/) that would be difficult to replicate at present, since he intends to publish a paper with this as the primary innovation and doesn’t want to jump the gun (something you should empathize with, given your similar position on surfacestations). However, Chad has been nice enough to outline a number of other methods of combining stations over at his blog (http://treesfortheforest.wordpress.com/) that you could easily take advantage of.
It’s a bit more work than looking at individual stations or simply averaging anomalies, but it is the “correct” way to go about a rigorous analysis to see if the differing number of stations available in GHCN over time introduces any bias into the global temperature record. Once you do it, you can also easily dive into a regional analysis of station availability, checking to see if the much-touted “Bolivia” and “Canada” station data availability issues over the past decade have any real effect, by comparing grid cells with both “pre-cutoff” and “post-cutoff” stations over the period where reports are available for both.

REPLY:
Well, like I keep telling everyone else, GHCN is E.M. Smith’s specialty, and I’ve notified him. Like you, it’s a bit beyond my programming ability to replicate, and I wouldn’t undertake it now anyway, since I’m busy collaborating on the USHCN paper with co-authors. Let’s see what Mr. Smith (who does have that programming skill) has to say about it. – A
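The two steps described in the comment above (bin station anomalies into lat/lon grid cells, then area-weight the cell means) can be sketched in a few lines. This is a toy illustration with made-up station coordinates and anomalies, using a simple cos-latitude proxy for cell area; it is not Tamino's method or code.

```python
import math
from collections import defaultdict

# Hypothetical station records: (latitude, longitude, temperature anomaly in °C)
stations = [
    (51.5, -0.1, 0.30), (51.0, -0.6, 0.25),  # two UK stations, same cell
    (-16.5, -68.1, 0.60),                     # one Bolivian station
    (64.8, -147.7, 1.10),                     # one Alaskan station
]

LAT_STEP, LON_STEP = 4, 5  # 4° x 5° grid cells

def cell_of(lat, lon):
    """Index of the grid cell containing (lat, lon)."""
    return (math.floor(lat / LAT_STEP), math.floor(lon / LON_STEP))

# Step 1: average all station anomalies within each occupied cell.
cells = defaultdict(list)
for lat, lon, anom in stations:
    cells[cell_of(lat, lon)].append(anom)
cell_means = {c: sum(v) / len(v) for c, v in cells.items()}

# Step 2: weight each occupied cell by its area (proportional to the
# cosine of the cell-centre latitude) to form the large-area average.
def cell_weight(cell):
    centre_lat = (cell[0] + 0.5) * LAT_STEP
    return math.cos(math.radians(centre_lat))

total_w = sum(cell_weight(c) for c in cell_means)
weighted = sum(cell_weight(c) * m for c, m in cell_means.items()) / total_w
print(round(weighted, 3))
```

Note how the area weighting pulls the average toward the low-latitude (larger-area) cell: the Bolivian cell counts for more than the Alaskan one, which is the "crucial area-weighting step" at issue in this thread.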

February 25, 2010 12:28 pm

On spatial weighting, it might be easy to just break the world up into grid cells of 4 deg of latitude by 5 deg longitude. Per Gavin, the approximate area of each grid cell would be:
grid_area = 4 * pi^2 * 6378.1^2 * cos(lat*pi/180) * 5/360 * 4/360
where lat is the latitude of the grid cell in question in degrees.
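A quick sanity check of the formula above (assuming `lat` is the cell-centre latitude in degrees): summed over every 4° × 5° cell on the globe, the areas should recover the Earth's total surface area, 4πR².

```python
import math

R = 6378.1  # Earth radius in km, as in the comment above

def grid_area(lat):
    """Area (km^2) of a 4° x 5° cell centred at latitude lat (degrees)."""
    return 4 * math.pi**2 * R**2 * math.cos(math.radians(lat)) * 5/360 * 4/360

# Cell-centre latitudes -88, -84, ..., 88; 360/5 = 72 cells per latitude band.
total = sum(72 * grid_area(lat) for lat in range(-88, 89, 4))
sphere = 4 * math.pi * R**2
print(total / sphere)  # very close to 1 (midpoint cos() is a tiny approximation)
```

The ratio comes out within about 0.02% of 1, so the formula is a sound flat-cell approximation to the exact spherical-band area.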

February 25, 2010 12:29 pm

er, I meant degrees in the previous post, not radians. Mind editing oh helpful moderator?

Pamela Gray
February 25, 2010 12:30 pm

That was also my take on a single station, measured just once a day, standing in for an entire grid cell, instead of several nearby stations measured at the same time once a day and then averaged together, with the lower calculated error that averaging makes available. The indeterminate noisy error of one station is copied into every grid cell filled in with the data from that one sensor. Sloppy research design.

Ken Harvey
February 25, 2010 12:31 pm

Anthony, Pamela. Take half a teaspoon of bicarbonate of soda (baking soda, an alkaliser) in a little water, four times on day one. The same twice on day two. If you should really need it by then, once on day three. Cold gone. It will work equally well on full blown ‘flu. Courtesy of my grandmother, some seventy odd years ago. It works just as certainly now as it did back then.

steven mosher
February 25, 2010 12:37 pm

JMurphy (09:27:58) :
WRT Tamino.
here is a simple test for you.
Visit this site:
http://statpad.wordpress.com/
This site is run by a published statistics professor.
He reviewed Tamino’s work and had a question about Tamino’s method.
He attempted to post a comment.
He was banned.
I attempted to post a comment. ” tamino have you looked at this?”
THAT was banned.
so you try.
In fact EVERYONE should try this. Send a simple note to tamino.
Tamino, have you read this blog by the statistics professor.
http://statpad.wordpress.com/
AND WHEN TAMINO REFUSES TO LET YOUR QUESTION GET THROUGH YOU HAVE A GOOD INDICATION ABOUT THE OPENNESS OF HIS MIND.
I’m sure he won’t mind the traffic.
Go ask. everybody ask.
Heck, Tell people on every CAGW site that you asked, have them ask.

RomanM
February 25, 2010 12:38 pm

Re: sturat (Feb 25 11:21),
It is just as well that Tamino is withholding his code, because that way he does not have to defend the use of his “optimal” method of calculating grid temperatures.
I tried to discuss this with him by leaving a comment on his web site, but the comment got deleted. After I wrote a post discussing his method, he again refused to discuss the methodology.
I guess we’ll have to wait until he publishes it in a (climate science) peer reviewed journal. 😉

steven mosher
February 25, 2010 12:56 pm

“Tamino’s response to request for code from Zeke:
[Response: I haven’t decided whether or not to publish this (peer reviewed). If I don’t I’ll probably make the code available to those who I consider serious investigators. That does not include denialists.]”
Can I ask a question please, Tamino?
can I ask a question?
Have you read RomanM’s site?
Did you read his comment that you banned?
I think Grant Foster has put himself in a funny spot here.
Grant Foster is Tamino.
If Grant decides to publish his paper he has to make a decision.
Does he change his analysis in light of Roman’s work or does he ignore it?
A. If he changes his work, then he has to credit Roman. To make sure of this, people need to alert Tamino to this. If he publishes, we will remember. We will raise the issue of attribution. We will look at the guidelines from his employer. So, people need to make this issue more public. Now.
B. If he doesn’t change his approach, then I suppose Roman will have a nice comment to make, or somebody can use his work to do a nice paper.
If Grant doesn’t publish a paper, we can speculate as to why.

steven mosher
February 25, 2010 12:59 pm

Zeke Hausfather (12:22:17) :
Here is a simple request. If you haven’t read Roman’s work, please have a read.
Then ask Tamino what he thinks; include a link to Roman’s site in your request.
That’s a fair request. Can you do that? If not, why not?

Michael Jankowski
February 25, 2010 1:02 pm

“…Here Tamino developed his own ‘optimal’ method…”
Oh, developing his own method? At least it’s not called “robust.”
I see Tamino has a new comment:
“…As for me, I’ve seen lots of people do an analysis which is wrong. But none have come close to doing it right. In particular, none seems to pay attention to the crucial area-weighting step…”
In other words, his way is the right way, and everyone else is wrong.
It’s interesting that he notes, “It’ll be more work to finish the southern hemisphere.” He admittedly hasn’t even finished, and he’s already got his conclusion.

Brandon Sheffield
February 25, 2010 1:07 pm

Steve M. (10:25:37) :
Do I trust “Tamino”, a man supposedly of science but who won’t put his name to his criticisms, who regularly denigrates others, and who now won’t share what he claims falsifies the work of people who do put their name to their work? In a word, no. -A
“I was thinking the same thing. It’s good to see that you openly put your name on this site, along with your credentials. I read through the comments on Tamino’s site, and note that it’s all cheer-leading. I couldn’t prove that opposing opinions are moderated out, but have a gut feeling they are.”
They are moderating us out. I have posted 6 times within 1 hr and, lo and behold, not one of my comments has appeared. Read what I was posting below:
I find your claims to be very bold. The fact that you are resorting to name calling, however, is very bullish. I know that there is not one scientist, skeptical or not, who would ever say the Holocaust did not happen/was a non-event. When you use such words toward those that are skeptical, it is very offensive, both to skeptics and to those who hear it being used who are not skeptical. So, please do everybody a favor and retract the word “Denialists” from your paper. Otherwise, in my book, whether I agree with you or not, you sound like a heartless and racist weather man. Besides, the skeptics are not out to destroy the science; they are trying to help you get it right. The science is not even 5% perfectly understood. They are your counterparts in a system of checks and balances. So, again, refrain from bringing yourself down to the level of the MSM, Greenpeace, and the eco-terrorists. You are better than that. We all are.

Brandon Sheffield
February 25, 2010 1:18 pm

I have also posted this:
Zach // February 25, 2010 at 2:05 pm | Reply………………………………………”and that their preferred politicians and commentators say it’s a reasonable position.”
I have to disagree with you on this. According to the constitution, politicians do not tell the people what the issues are and whether they are right or wrong. The constitution clearly gives power to the people to decide what the issues of the day are and what the government should do about them. When I say the people, I mean normal, everyday average citizens, not a small group of scientists. The scientists are wrong on both sides of the fence as well. They should not be trying to convince the government about anything; unless the citizens on a majority vote want the scientists to be heard by the congress, then the judicial branch will determine the legality of any plan of action to be taken by the government.

Editor
February 25, 2010 1:18 pm

Comments for this entry bring up Tamino. Also mentioned is Tamino’s attitude regarding release of code and information.
Quite interesting given the recent post by Judith Curry on ‘trust’ and the perception by the public of the pro-AGW side of the discussion.
—————————-

Jean Parisot
February 25, 2010 1:20 pm

“1) How to undertake the spatial weighting. While this is a bit beyond my programming ability (hence my interest in Tamino’s script), the basic idea is simple: split up the world into grid cells based on lat/lon, and calculate the average anomaly of stations in the “pre-cutoff” and “post-cutoff” series (or calculate the average anomaly of all stations in each grid to compare with GISS et al). Weight each grid cell by its area to determine the contribution to global temp anomalies. This leads us to…”
Where and when was the decision to use arbitrary grid polygons made? While it may have made the “math” easier, it is not an optimal approach. Modern spatial statistics and GIS systems can handle far more complex data aggregations. Has anyone considered making the “grid” as homogeneous as possible, with an “even” depth of datum?
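The gridding-and-area-weighting recipe quoted above (split the world into lat/lon cells, average stations within a cell, weight each cell by its area) can be sketched in a few lines. This is a minimal illustration with assumed inputs, not any group's actual code; the function name, cell size, and station tuples are all made up:

```python
import math
from collections import defaultdict

def gridded_mean(stations, cell_deg=5.0):
    """stations: iterable of (lat, lon, anomaly).
    Bin stations into cell_deg x cell_deg lat/lon boxes, average the
    anomalies within each box, then weight each box by cos(latitude),
    which is proportional to the box's area on the sphere."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        cells[(math.floor(lat / cell_deg),
               math.floor(lon / cell_deg))].append(anom)
    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        centre_lat = (ilat + 0.5) * cell_deg  # box-centre latitude
        w = math.cos(math.radians(centre_lat))
        num += w * sum(anoms) / len(anoms)
        den += w
    return num / den
```

Because of the cosine weight, a station near the equator counts for more area than one at high latitude, so the result differs from a plain average of the stations.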

carrot eater
February 25, 2010 1:34 pm

Michael Jankowski (13:02:24) :
There are many right ways to do something, but averaging together absolute temperatures is definitely a wrong way. And for getting decent global figures, not using spatial averaging is also definitely wrong.
That still leaves you with many good ways of doing it.
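carrot eater's point about averaging absolute temperatures can be illustrated with a toy example (hypothetical stations, not any real dataset): two stations warm at exactly the same rate, but when the colder one stops reporting, the average of absolutes fakes a jump while the average of anomalies does not:

```python
# Two stations warming at the same 0.1/yr rate; the colder one stops
# reporting after year 4.
warm = [15.0 + 0.1 * y for y in range(10)]
cold = [-5.0 + 0.1 * y for y in range(10)]

# Averaging absolute temperatures: a spurious step at the dropout.
abs_avg = [(w + c) / 2 if y < 5 else w
           for y, (w, c) in enumerate(zip(warm, cold))]

# Averaging anomalies, each station relative to its own baseline:
# the station mix no longer matters.
base_w = sum(warm[:5]) / 5
base_c = sum(cold[:5]) / 5
anom_avg = [((w - base_w) + (c - base_c)) / 2 if y < 5 else w - base_w
            for y, (w, c) in enumerate(zip(warm, cold))]
```

abs_avg jumps by roughly ten degrees at the dropout even though neither station changed its trend; anom_avg keeps the steady 0.1/yr warming throughout.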

February 25, 2010 1:42 pm

Mosh,
Tamino is reading this thread (he even has a comment from here in a blog post), so I’m sure he is well aware of your advocacy that he should respond to Roman’s augmentation of his optimal method. I try not to go out of my way to needlessly antagonize folks :-p

sturat
February 25, 2010 1:56 pm

“REPLY:Two things. 1) I’ve been studying USHCN, not GHCN, different networks 2) Yes it is a simple question, but I’m not the person to answer it. E.M. Smith is the one that raised this claim and did all the GHCN analysis, and I’ve sent him an email with the link and the question. I’m sure he’ll respond either here or at his blog. http://chiefio.wordpress.com
1) I guess you’re saying that if one performs a similar analysis using the USHCN network then you would arrive at the opposite conclusion to one using the GHCN network. So, where is your reproducible analysis of the USHCN network?
2) Please provide a link to the specific analysis by E.M. Smith using his methods (or Roman’s) that shows that the station dropout problem is significant. By analysis I mean a set of runs, graphs, and supported conclusions that are comparable to those of Tamino and show the opposite effect. Comparable means using more than a handful of stations and covering at least the entire Northern Hemisphere.
3) As to your assertion of my “defense of the phantom researcher”, I am puzzled by a couple of things. While it can be inferred from my original post that I lean towards Tamino’s analysis being correct, my question to you was in regards to whether you can show (by yourself or through the work of others) that it is incorrect. As to having to have Tamino’s exact algorithm in order to show that it is incorrect, that would be the case if the algorithm were the important item in question. But, in this case, the conclusion of the algorithm is in question, or rather your assertion that it is incorrect, and this could be shown by the presentation of analysis supporting the opposite conclusion (as I asked in the first place).
Try to keep your responses (and those of your readers) to the technical aspects of the question: is there a reproducible, rigorous analysis that shows that the station dropouts are a problem?
Oh, and I see that Tamino will be publishing his algorithm and analysis. I would like to believe that if no significant errors are found in his analysis that you would be willing to publicly accept the results.
REPLY: Like I said I’ll let Mr. Smith respond on GHCN, you can view his many different GHCN analyses here: http://chiefio.wordpress.com/
I suggest you ask him questions directly rather than relying upon me as a proxy. As for USHCN, there’s a full analysis in process now, being prepared for a journal paper. When that paper is accepted, we’ll publish an SI that has the data, the methods, code, and results. This is the standard practice that I will follow. – Anthony

David Segesta
February 25, 2010 2:10 pm

Pamela Gray (08:39:18) :
“Anthony, I am at home too dealing with a raging head cold and solidly blocked up Eustachian tubes.”
Well that makes three of us. Can you catch a cold over the internet?
REPLY: Not if you have an anti-virus program that is up to date ;-P

February 25, 2010 2:17 pm

“Jason F (11:23:33) : …actually I don’t think I’ve seen a more vile AGW propaganda site or app”
Then you haven’t seen this:
http://www.climatecops.com/
On topic, we wouldn’t be having this discussion if all the raw data, code, documentation (if there is any), and such had been made publicly available. So many errors have been found to date it is mind boggling.
Have they ever thought of the idea that other people could help? Only if you’re in the club, I guess.

Peter of Sydney
February 25, 2010 2:18 pm

No matter how small they make the error bars, the trend depicted is still meaningless and does not prove anything, least of all that AGW is factual. I suppose if the world temperatures were declining instead of rising gradually over the past 100+ years, they would be calling it AGC instead of AGW. That in fact was the case for a short time during the panic of the new ice-age mania a few decades ago. They didn’t call it AGC, but the threat of a new ice age was blamed on man. So, unless the climate stops changing for the first time ever in the history of the planet, there will always be alarmists running around like mad hatters declaring we are doomed unless we do such and such, which, when analyzed in detail, is proven to be useless and nothing more than a scheme by the very rich and powerful to become even richer and more powerful at the expense of the rest of us.

Dave Andrews
February 25, 2010 2:21 pm

Zeke
“I try not to go out of my way to needlessly antagonize folks “
Judging from your posts on a number of blogs, I would agree with your statement. Where I might disagree with you is in aiming that statement at Steven M when Tamino’s record of antagonising is incomparably worse.

carrot eater
February 25, 2010 2:29 pm

If people are finding that Tamino moderates heavily, I’ve found the same of E.M. Smith. So unless one of the two opens up, the only place cross-discussion can occur is here.
Which is unfortunate, because the topic of this particular thread is something entirely unrelated, and interesting in its own right.

steven mosher
February 25, 2010 2:30 pm

Zeke Hausfather (13:42:06) :
Mosh,
Tamino is reading this thread (he even has a comment from here in a blog post), so I’m sure he is well aware of your advocacy that he should respond to Roman’s augmentation of his optimal method. I try not to go out of my way to needlessly antagonize folks :-p
I can appreciate that. here is what you know. Tamino will probably be antagonized if you don't ask. precautionary principle. You can be fairly certain that if Tamino does not answer the question, then he will be antagonized A LOT. But if you antagonize him a little, and suggest that he avoid the comment catastrophe by taking a little pain today.. why, you are doing a noble thing!

steven mosher
February 25, 2010 2:38 pm

Zeke, I think what happens is that people read stuff from Tamino and then send him mails. People like you and Nick Stokes who have math ability should just go read the stuff Roman did. Then make a private comment to Tamino, behind the scenes. Explain to him that he might be wrong about the optimality of his approach.
Then Tamino will give you his argument. You can then carry that argument into Roman's blog. It's a bit bizarre, but that is how it happens. Tamino can't be seen talking to anybody in the CA gang, unless of course he is calling them a criminal or stooge. He can't even take questions from Lucia, and she believes in AGW! Anyway, I trust you to have a fair read of things. Go talk to Tammy in private. Who knows, Roman's approach may show MORE WARMING.
Dunno. I do know it's silly not to talk.

steven mosher
February 25, 2010 2:48 pm

Roman,
Tamino will now not publish his paper. He'll post a PDF or something.
Probably not share code. He will say "I'll share code with people who are not denialists" but then won't explain what he means by denialist (I believe in AGW).
The climate-science dittoheads will link his stuff ("as Tamino's optimal method shows..") and the issue of his behavior will remain front
and center.
Now, Tamino is stuck. If he publishes his PDF or whatever, then the story
of how he could not address simple questions will always be there.
And if he doesn't publish, people will point to the episode and say "what the heck?" Was he that afraid of answering questions from a retired professor?
Hansen's bulldog. Right.

February 25, 2010 2:54 pm

sturat (13:56:01) :
Question. Do you read the replies to your questions? Do you read other posts here responding? Did you go to the link that was provided by steven mosher (12:37:35)? Did you read Tamino’s post in its entirety? While the graphs were pretty, I didn’t see anything other than “It is so, because I said it is so.” He’s only asserted he’s done something. He hasn’t shown anything. Would it make you feel better if I make up a graph and label it for you, stating I proved something? What is with you guys that you believe everything you read, but only from people who claim to know everything about climate science? (And now, apparently, he’s developed a statistical analysis skill beyond a statistician’s. Strange that he didn’t seem to possess that ability in some of his earlier works.) For the love of all that is holy, TRY SOME CRITICAL THINKING!!

wayne
February 25, 2010 2:57 pm

rbateman (11:34:27) :
The ratio of rural to urban stations should accurately reflect the current land use of any area…

I assume you mean that since there is much more rural area than urban area, the two anomalies should not be merely summed together. It is commonly seen as ruralAnom + urbanAnom in GISS calculations (I think it was GISS). Instead it should, more accurately, be ruralAnom * ruralAreaPct + urbanAnom * urbanAreaPct, with ruralAreaPct + urbanAreaPct equal to one. Was this your point to Pamela on urban-rural?
However, I’ve never seen data on the ratio of urban to rural land use stated as such, or even per grid cell. The rural anomaly is usually a small fraction of the urban anomaly.
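wayne's proposed weighting is a one-line formula; here it is as a sketch (an illustration of the formula in the comment only, with made-up numbers in the usage note):

```python
def blended_anomaly(rural_anom, urban_anom, urban_area_pct):
    """Weight each class of station by the fraction of land area it
    represents, so the two fractions sum to one."""
    rural_area_pct = 1.0 - urban_area_pct
    return rural_anom * rural_area_pct + urban_anom * urban_area_pct
```

For example, with a hypothetical 3% urban land fraction, a 1.2-degree urban anomaly and a 0.3-degree rural anomaly blend to 0.327, far closer to the rural value than a plain (0.3 + 1.2) / 2 = 0.75 average.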

1DandyTroll
February 25, 2010 3:02 pm

@Anthony “Do I trust “Tamino”, a man supposedly of science but who won’t put his name to his criticisms, who regularly denigrates others, and who now won’t share what he claims falsifies the work of people who do put their name to their work? In a word, no. -A”
You mean the dude who’s into math and the AAVSO, who used to have “tamino” as part of his email address, who does (or did) work as a data analyst concerning customer service or some such, and who wrote some data-analysis software for that amateur astronomy club, the AAVSO? Who only chipped in on the climate arena, and apparently likes to write simply for simple folks and newbies?
But I’m sure it’s just a seized-upon coincidence turned into a trick.

carrot eater
February 25, 2010 3:03 pm

Well, I wandered over to this statpad website, and was surprised to see myself being referenced. Is the main complaint coming from my question over whether the offsets are month-specific or not? I think they should be month-specific, but I also don’t think it’ll make that big a deal to the final results. It certainly won’t affect the main points.

February 25, 2010 3:04 pm

carrot eater (14:29:47) :
“Which is unfortunate, because the topic of this particular thread is something entirely unrelated, and interesting in its own right.”
Yes, it is. Error margins are probably out of my range of understanding, though. But the dual discussion on this thread does have a commonality. It is dreadfully apparent that the scientists doing the research haven’t properly engaged mathematicians. (Nor, apparently, database administrators.) Much of climate science requires proper statistical analysis to come to the proper conclusions, and data integrity is of the utmost importance. Hopefully, when some of our friends engage in their “do over”, they won’t fail to seek expertise when they engage in something outside their purview.
Cheers

sturat
February 25, 2010 3:26 pm

James Sexton (14:54:39) :
Yes, I do read the threads and the link. At least the ones that don’t resort to name calling and shouting. From your comments I take it that you have generated your own analysis on the topic of station dropouts. I look forward to your publication of your results.
As to whether he has shown anything, I agree that the atomic details of his algorithm have not been presented (yet), but his explanations and “pretty” graphs seem to me to merit consideration.
Since Mr. Watts and others have stated many times and quite loudly that there is something wrong with the temperature record (one aspect being the station dropout issue), I felt it was appropriate to ask for similar analysis that contradicts Tamino. If, and when, they can produce such an analysis that does “prove” their assertion, I will be interested in following the critiques that are sure to follow.
As to the link from steven mosher, I did go there, but all I saw was an interesting post on a potential error in Tamino’s algorithm, with no supporting analysis to show that it made a significant difference when applied to actual data. It is entirely possible that I have not read the relevant post with a complete analysis, though.
I do agree with carrot eater, steven mosher, and others that it is unfortunate that these discussions can’t be held in a more civil manner. Summarily blocking posts at either site adds to the mistrust and detracts from the important results.
Perhaps Tamino, E.M. Smith, and RomanM could each publish their analyses and the discussion could center on the technicalities of analysis rather than the personalities of the parties.

George E. Smith
February 25, 2010 3:26 pm

“”” RockyRoad (08:18:32) :
Please fix this sentence:
What I found most interesting is that while the error he found may lead to slightly less uncertainty
Thanks! “””
And please fix the conclusion; fixing of errors can only steer away from absolute uncertainty.
There is no measure of certainty in results known to contain errors.
Seems to me that, in the case of the Hubble Telescope main mirror, the known error in construction was also known to a high degree of certainty.
Mox nix; the resulting images were still garbage.

Phillep Harding
February 25, 2010 3:34 pm

Time for someone to create an AGW persona to get the inside scoop on this?

Steve M. From TN
February 25, 2010 4:02 pm

sturat (15:26:30) :
James Sexton (14:54:39) :
Yes, I do read the threads and the link. At least the ones that don’t resort to name calling and shouting

You must not read Tamino’s website then.

carrot eater
February 25, 2010 4:16 pm

James Sexton (14:54:39) :
Tamino more than just asserted that he did something; he described what he was doing, step by step. His results seem reasonable, though there is always a chance there is some basic math error in there somewhere. We may yet see revisions as he improves his methods, but it’s highly unlikely that the basic point will change.
The question is, had EM Smith done this sort of analysis, in order to support his ideas? It doesn’t have to be the exact same, but you do need some sort of calculation that emulates the GISS processing. So far as anybody can tell, the answer is no; it certainly isn’t in the SPPI document, but we’ll have to see what he says.

Richard Saumarez
February 25, 2010 4:21 pm

I have trawled through the HADCRUT data. The lack of sophistication is extraordinary. The mean temperature is the integral of temperature over a period divided by the period in question, not the arithmetic mean. The analysis of “mean” temperature is mathematically illiterate. The spatial analysis is deficient (“Delaunay Triangulation is done by a package and we don’t have the code”, paraphrased from Harry_read_me.txt). The gridding allows interpolation across the Himalayas and the Alps. Some stations are 700 km apart (i.e., the length of the British Isles).
The only hope on the horizon is that the EPA endangerment finding has led to a legal challenge from multiple sources, and the data and analysis will be subpoenaed. This will be scrutinised by some respectable mathematicians who know what they are talking about.
On a slightly O/T remark: I have developed medical equipment that involves digital signal processing. The regulatory requirements need a truckload of documentation (what color do you dream in…?) and every statement has to be justified. We are supposed to overturn our economies on the basis of adjusted data when the raw data has been lost?
A legal challenge is going to expose the quality of the data on which AGW is based.
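The distinction Richard Saumarez draws, a time-integral mean versus the usual (Tmax+Tmin)/2 estimator, can be shown with a toy diurnal cycle (entirely made-up numbers, purely illustrative):

```python
import math

def true_mean(samples):
    """Time integral of temperature over the period (trapezoid rule
    on evenly spaced samples) divided by the period length."""
    n = len(samples) - 1
    integral = sum((samples[i] + samples[i + 1]) / 2 for i in range(n))
    return integral / n

# A skewed diurnal cycle: mild all day, with a short warm spike
# peaking around 15:00 (hypothetical hourly values).
temps = [10 + 8 * math.exp(-((h - 15) ** 2) / 8) for h in range(25)]

minmax_mean = (max(temps) + min(temps)) / 2  # the common (Tmax+Tmin)/2
```

Here true_mean(temps) comes out well over a degree below minmax_mean, because the min/max estimator gives a brief afternoon spike the same weight as the long mild remainder of the day.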

WAG
February 25, 2010 4:25 pm

I’ve noticed that there hasn’t been one substantive response to Tamino’s critique. Not one. All you do is make ad hominems and say that he won’t reveal his name, but that doesn’t make the merits of his criticisms less valid. The fact that you even make this criticism proves you are operating in bad faith. Anyone who was operating in good faith would not only respond directly to the criticism, but also would refrain from making redirecting arguments like “Tamino won’t reveal his name.”
Can anyone–ANYONE–prove Tamino is wrong?
REPLY: Probably, just not in insta-time. You want it IMMEDIATELY. Science isn’t about insta-results nor should any be expected. I haven’t even heard back from E.M. Smith yet. Did Tammy do his whole thing in an afternoon while at his day job office? I think not. Get some patience or bugger off – A

OldOne
February 25, 2010 4:35 pm

@ Brandon Sheffield
Thanks for sharing that your comments were “moderated out” of “Tamino”s blog.
How revealing that someone who runs a blog called “Open Mind” is too “Closed-minded” to post comments that disagree with or question his position. Not gonna find any honest search for truth there! A leopard cannot change its spots.

Malcolm
February 25, 2010 4:35 pm

Can someone tell me if measurement and calibration errors are considered in these global averages since 1850?
i.e., how accurate was global temperature measurement in 1850?

WAG
February 25, 2010 4:40 pm

“You haven’t released your code” is such a cop-out. Anyone who uses those words should immediately lose credibility. If you can’t answer Tamino’s claims, do research of your own.
[Sorry, but releasing one’s code is the essence of scientific transparency and an essential requirement for attaining credibility…. you evidently are just a warmist advocate… – The Night Watch]

David44
February 25, 2010 4:41 pm

Off topic but temperature related: http://news.bbc.co.uk/2/hi/asia-pacific/8535792.stm
Extreme cold in Mongolia has killed so much livestock that the United Nations is starting a programme to pay herders to clean and collect the carcasses. More than 2.7m livestock have already died and another 3m carcasses are expected by June. The UN Development Programme cash-for-work programme aims to produce income for herders whose livelihoods have disappeared due to the weather. Concern is high for the risk of disease posed by piles of rotting dead animals. Once melting of the snow starts, this poses the threat of the spread of diseases such as anthrax and salmonella, infection and soil pollution.
In addition to the Minnesotans for Global Warming, I suspect there are a very many Mongolians for Global Warming, too.
Zud phenomenon
Persistent snowfall has created a blanket of snow over the entire country, with 60% covered by 20-40cm (8-16in) of snow, the UNDP says. This year is particularly harsh because of a phenomenon called Zud, which occurs when severely cold winters of below -50C (-58F) are preceded by dry summers that preclude sufficient grazing.
Fodder supplies have run out, resulting in the loss of millions of livestock in a country where a third of the population rely on herding and agriculture, the UNDP said.

Ross M
February 25, 2010 4:42 pm

I read John Graham-Cumming’s blog post and I can’t really make sense of it. It is probably my maths skills that are lacking, but why is it that he says:
“You’d expect the error to be smaller with more samples in a grid square, but because of the division by the number of stations it’s actually getting larger.”
Dividing by a bigger number yields a smaller number, doesn’t it? I hope he can clarify his post a little more.
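For the confusion above, the expected behaviour is the textbook propagation of independent errors into the error of a mean, which shrinks as station count grows. This is a sketch of the standard formula only; the actual CRUTEM3 station-error calculation (Brohan et al. 2006) has additional terms:

```python
import math

def station_error_of_cell_mean(station_errors):
    """Combine independent per-station errors e_i into the error of
    the grid-cell mean: sqrt(sum(e_i^2)) / N.
    For N equal errors e this is e / sqrt(N), so it SHRINKS as more
    stations report -- the opposite of what the bug produced."""
    n = len(station_errors)
    return math.sqrt(sum(e * e for e in station_errors)) / n
```

With one station of error 0.5 the cell error is 0.5; with four such stations it drops to 0.25. An implementation whose cell error grows with the station count is therefore immediately suspect.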

WAG
February 25, 2010 4:46 pm

The funniest thing about you folks complaining that Tamino is anonymous and won’t release his code, is that this comment at Tamino predicted your response exactly:
http://tamino.wordpress.com/2010/02/25/shame/#comment-39779

Excellent work, Tamino, but you know what the response will be, right? You will need to post your methods, code, interim analyses, etc., etc., etc. and who’s Tamino, anyway; then they will go onto the next item.

Try to be more original next time.

E.M.Smith
Editor
February 25, 2010 4:52 pm

@ JMurphy (09:27:58) :
Per Tamino: he shows that two sets of over-averaged data match in one period of time, then asserts this means they must match in another period of time (when one of them is missing). That is a logic failure.
There are differences in response to long-cycle events, such as the PDO for example, where areas can have similar behaviours in one part and opposite in another. Look, for example, at the “loopy jet stream” we have now, where being on one side or the other matters a great deal, yet a few years ago we had a flatter jet stream and the USA was more interchangeable. And no, I’m not saying that is THE answer; I’m saying that is AN example of the class of issue. I’ve seen these “contra areas” in several regional reports I’ve done. You are depending on luck and faith that they are balanced. They are not.
And averaging all of bucket A and all of bucket B simply hides too much. I’ve been down that path before and it’s a fruitless one. (For example, it hides that the Pacific Basin is not participating in ‘Global Warming’).
So take New Zealand. If you leave Campbell Island out of the entire set, you get no warming in the averages. The “tilt” comes from having that lone cold island in the data during the early years, then out in later years. That’s with the “raw” data. It is clear that that single station biases the New Zealand set.
Now you could average New Zealand in with some other place that has a colder later year series and then say that you didn’t find anything, or you could average in a warmer station later and say you found ‘trend’ or you can do all kinds of over averaging that hides the fact that the basic individual station data for New Zealand is not warming significantly but the average set with a cold station dropout in the middle has a warming bias. But that would still be an error of over averaging. And you are depending on HOPE to get (or not get) any given trend. And HOPE is not a strategy.
Does CRUtemp(sub-n) or NCDC (whatever process) have a way of removing that inherent bias in the data? We don’t know because they have not released their code. (For GIStemp we know that it is not a perfect filter. I’ve benchmarked the code and a warming bias in station deletions leaks through as a warmer anomaly map. So for 1 out of 3 software sets at a minimum any bias in the input data leaks to the output). But that the base data is made biased is not in doubt. That’s what all those ‘average the data and look at the trends’ reports I made were all about. Measuring the measurements.
Some areas have no inherent warming to speak of, others have significant warming. And in many cases where there is warming, it only shows up as a “bolus” when thermometer drops happen. But to see it, you have to get finer grained than “average it all together”. (There is a reason a mix of all the soda flavors on a fountain is called a “suicide”… the result is “not good”.)
For what it’s worth, here is a (VERY preliminary) version of a dT/dt report for New Zealand. The dT/dt method differs from “First Difference” in that I do not reset a series of ‘anomalies’ to zero at a missing data item. I just carry the state forward until there IS a data item, and then all that “Delta T” shows up in that time period. This puts more variation in individual yearly values if there are many missing years, but has the virtue of preserving overall trend better. I emphasize that this is “young code” and not completely QA’d yet, so subject to revision. It will also have a known tendency to “thermometer start of lifetime” bias, in that if a thermometer-month first reports in a very cold year, it would report warming from that time forward due to that starting-time bias. I’ve not finished the code to remove that bias, but to the extent thermometers arrive over time, that bias is reduced.
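The carry-forward rule described above can be sketched as a toy function (an illustration of the stated idea only, not E.M. Smith's actual code; None marks a missing month):

```python
def dt_dt_series(monthly):
    """Per-thermometer 'self to self' first differences, where a gap
    (None) does not reset the series: the accumulated change shows up
    when the next real value arrives."""
    out = []
    last = None
    for t in monthly:
        if t is None:
            out.append(0.0)   # no new information this period
        elif last is None:
            out.append(0.0)   # first report: baseline, no delta yet
            last = t
        else:
            out.append(t - last)  # delta spans any intervening gap
            last = t
    return out
```

For the series [10.0, 10.5, None, 11.5] this yields deltas [0.0, 0.5, 0.0, 1.0]: the gap contributes nothing, and the full change since the last real value lands on the next reporting period, so the cumulative sum still preserves the overall trend.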
So here you can see that it’s possible to mitigate the warming bias introduced by that cold Campbell Island station if you do your anomalies “self to self” (but that is not how GIStemp does them). You can see that we had a very warm 1998 by about the same amount that we had a cold 1902, but in the end we are now just about back where we started (and not as warm as 1956 or 1916 in New Zealand).
New Zealand is not warming beyond what you would expect from UHI, Airport Tarmac (that warms without regard for landing numbers) and random wanderings (along with cyclical wanderings like the PDO, where we’ve just entered a cooling phase for the next 20 to 30 years.)
That second column, “dT”, is the “money quote”. That’s the cumulative average change of temperature from all thermometers over their lifetime to date. The third column, “dT/yr”, is the change in that specific year.

Produced from input file: ./DTemps/Temps.M507
Thermometer Records, Average of Monthly dT/dt, Yearly running total
by Year Across Month, with a count of thermometer records in that year
-----------------------------------------------------------------------------------
YEAR     dT dT/yr  Count JAN  FEB  MAR  APR  MAY  JUN JULY  AUG SEPT  OCT  NOV  DEC
-----------------------------------------------------------------------------------
1864  0.000  0.00    4   0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
1865 -0.067 -0.07    4   0.5  0.7  1.4 -0.5 -0.6  0.5 -1.5  0.4 -0.3 -0.6  0.0 -0.8
1866  0.200  0.27    5  -0.6 -0.5 -0.9 -0.5  0.3  0.0  0.4  1.2  0.9  1.1  1.5  0.3
1867  0.158 -0.04    5   0.8  0.3  0.6  1.5  0.0 -0.7  1.2 -0.8 -0.6 -0.4 -2.5  0.1
1868 -0.875 -1.03    5  -2.4 -2.4 -0.8 -1.1 -0.5 -0.2 -1.5 -1.2 -1.0 -0.8  0.9 -1.4
1869 -0.300  0.57    5   0.5  1.3  0.1  0.0 -0.6  0.7 -0.7  1.4  1.1  0.7  0.0  2.4
1870 -0.375 -0.08    5   1.0  0.3 -0.4 -1.2  0.5  1.1  0.6 -0.8 -0.7  0.7  1.1 -3.1
1871 -0.517 -0.14    5   0.0 -1.1  0.9  0.9  0.0 -1.4  0.4  0.6 -0.6 -0.8 -1.8  1.2
1872  0.192  0.71    5   0.7  1.2  0.0  1.0  1.1  0.0 -0.2 -0.9  1.2 -0.2  1.7  2.9
1873 -0.183 -0.38    5  -1.7 -1.0  0.1 -1.2  0.0  1.2 -0.6  1.4 -1.0  0.8 -0.9 -1.6
1874 -0.517 -0.33    5   1.0  0.3  0.0  0.4 -1.4 -1.8 -0.2 -0.3 -0.2 -1.4  0.1 -0.5
1875 -0.267  0.25    5   0.0  0.4 -0.3  1.5  0.7  0.9  1.0 -0.8  0.0  0.5 -1.4  0.5
1876  0.150  0.42    5   0.0  0.5  0.6 -1.0  0.3 -0.3 -0.8  0.6  0.7  2.1  2.4 -0.1
1877 -0.467 -0.62    5  -0.1 -0.7 -0.7 -1.1 -1.6  0.4 -0.4  0.1 -0.2 -1.8 -0.7 -0.6
1878 -0.550 -0.08    6  -1.8 -1.2 -0.2  1.3  0.1 -0.2  0.4 -1.1  1.3  0.0 -0.1  0.5
1879 -0.392  0.16    6   1.5  0.0  0.6 -1.4  2.2 -0.4 -0.3  0.6 -0.2  0.1 -0.3 -0.5
1880  0.300  0.69    6   0.6  2.5  0.4  1.9 -0.3  0.5  1.2  0.7  0.0 -0.2  1.3 -0.3
1881  0.125 -0.17    4  -1.3 -0.4 -0.3  0.0  0.0  0.7  0.4  0.3 -0.4  0.1 -1.2  0.0
1882 -0.058 -0.18    4   0.7 -1.7  0.4 -0.2 -0.7  0.1 -0.5 -0.7  0.2 -0.7 -0.1  1.0
1883 -0.258 -0.20    4   1.0  2.4 -0.2 -1.4  0.2 -0.5 -0.4  0.5 -1.5  0.0 -0.8 -1.7
1884 -1.000 -0.74    4  -3.2 -2.9 -1.6 -0.2 -1.5 -0.4  0.0  0.2  0.7  0.0  0.1 -0.1
1885 -0.483  0.52    4   1.1  1.5  1.2  0.4  0.7  0.9  0.1 -0.3 -0.4  0.2  0.8  0.0
1886 -0.375  0.11    4   0.8  0.8  0.0  1.1  1.0 -1.7 -0.7 -0.7 -0.2  0.1  0.5  0.3
1887 -0.133  0.24    4   2.7  0.1  0.8  0.2 -0.8  1.1  0.5 -0.1 -0.1 -0.2 -1.4  0.1
1888 -0.875 -0.74    4  -2.5 -2.3 -1.7 -2.1 -0.2 -0.1  0.0  1.4  0.3  0.4 -0.7 -1.4
1889 -0.175  0.70    4   1.3  1.5  0.1  0.8  0.5 -0.3 -0.4 -1.0  1.0  0.6  1.7  2.6
1890 -0.133  0.04    4  -2.1  0.1  0.2  1.2 -0.5  0.6  0.4  0.4  0.0  0.3 -0.1  0.0
1891 -0.408 -0.28    4   0.2 -0.5 -0.1 -1.1 -0.3 -2.0 -0.6 -0.1  0.1  0.0  0.4  0.7
1892 -0.017  0.39    4   0.3  0.2  0.7  0.7  0.5  1.7  1.0  1.2 -0.7 -1.0  1.7 -1.6
1893  0.158  0.18    4   0.5 -0.1 -1.9  0.0  1.5 -0.2  0.2  0.4  0.7  1.1 -0.1  0.0
1894  0.150 -0.01    5   1.0  0.7  1.0 -1.4 -1.0  0.7 -0.2 -1.0 -1.2  0.0 -1.1  2.4
1895 -0.392 -0.54    5   0.3 -0.3 -0.5 -0.7 -0.2 -0.6 -2.0 -1.1  0.9 -0.9 -0.8 -0.6
1896 -0.358  0.03    5  -1.2 -0.9  0.4  1.3 -0.1  0.4  2.1  0.5  0.0 -0.9 -0.6 -0.6
1897 -0.383 -0.02    5   0.7  0.4  0.1  0.0  0.3 -0.4 -0.4 -0.1 -0.2  0.2  0.8 -1.7
1898 -0.533 -0.15    5  -1.2 -1.9 -1.3  0.0  0.0  0.0 -0.2  0.1  0.0  0.5  0.3  1.9
1899 -0.508  0.03    5   0.5  1.2  1.5  0.6 -0.5  0.2 -1.2 -0.4  0.3  0.0 -0.4 -1.5
1900 -0.267  0.24    5  -1.1 -0.5  0.3  0.4  0.7 -0.7  1.4  1.9 -0.2  0.9 -0.2  0.0
1901 -0.533 -0.27    5   0.3  0.5 -1.5 -0.7 -0.2  1.5 -0.7 -1.8  0.2 -0.4 -0.2 -0.2
1902 -1.025 -0.49    5   0.4  0.4  1.7 -0.5 -0.9 -0.9  0.0  0.1 -2.3 -1.8 -0.8 -1.3
1903 -0.583  0.44    5  -1.8 -0.5 -1.2 -0.3  0.8 -1.1  0.6 -0.6  1.4  2.9  1.9  3.2
1904 -0.483  0.10    5   2.1  0.8  0.8  0.4  0.1  1.1 -0.1  0.5  0.5 -1.7 -1.3 -2.0
1905 -0.825 -0.34    6  -2.4 -0.3  0.0 -0.5  0.0 -0.6 -0.1  0.3 -0.5 -0.2  0.2  0.0
1906 -0.950 -0.13    6   0.4 -1.7 -1.6 -0.5 -0.4  0.6  0.4  0.0  0.0  0.8 -0.2  0.7
1907 -0.117  0.83    6   2.3  2.6  2.7  2.2  0.1 -0.7 -0.3 -0.1 -0.2 -1.1  1.1  1.4
1908 -0.475 -0.36    6  -0.3 -1.1 -0.9 -1.3  0.8  1.1 -0.8 -0.7  1.2  0.8 -0.7 -2.4
1909  0.033  0.51    6  -1.8  0.7  0.6 -0.1  0.4  0.1  1.4  1.7 -0.4  0.4  0.6  2.5
1910  0.092  0.06    6   1.7  0.8 -0.1 -0.3  0.0 -0.2 -0.9 -0.3  0.0  0.5  0.4 -0.9
1911 -0.367 -0.46    6  -0.5 -1.7  0.6  2.8 -0.4  0.0  0.2 -0.2 -0.3 -1.1 -1.9 -3.0
1912 -1.042 -0.68    6  -1.1 -1.7 -3.1 -2.5 -1.5 -0.8  0.0 -0.8  0.7  0.2  0.2  2.3
1913 -0.808  0.23    6   1.3  1.2  1.5 -1.0 -1.1 -0.5  0.4  0.7  0.2  0.5  0.5 -0.9
1914 -0.742  0.07    6   0.9  0.8  0.3  1.6  1.0  0.3 -0.5 -0.5 -1.0 -0.5 -0.7 -0.9
1915 -0.408  0.33    5  -1.3 -1.1 -1.8 -1.2  1.4  0.5  1.1  0.8  1.6  1.3  0.6  2.1
1916  0.575  0.98    5   0.1  2.7  3.8  2.0  0.8  2.2 -0.3 -0.2 -0.9 -1.2  1.4  1.4
1917  0.367 -0.21    5   1.5 -1.7 -1.6  0.3  0.4 -1.2  0.7 -0.1  0.5  0.6 -0.1 -1.8
1918 -0.542 -0.91    5  -0.9  1.6  0.0 -0.8 -1.3 -0.6 -3.1 -0.3 -1.4 -0.6 -2.0 -1.5
1919 -1.000 -0.46    5  -2.0 -1.2 -0.9 -1.4 -0.8 -0.3  1.8  0.5 -0.8 -0.2 -0.5  0.3
1920 -0.792  0.21    5   0.0  0.1  0.2  1.2 -0.5  0.1  0.2 -1.2  0.4  0.4  0.5  1.1
1921 -0.600  0.19    5   1.2 -0.8 -0.3 -1.0  1.2  0.3 -0.5  0.8  1.1  0.0  0.7 -0.4
1922 -0.400  0.20    5   0.5  1.1 -0.5  0.8  0.2 -1.2 -0.2  0.4 -0.4  1.1  0.0  0.6
1923 -0.567 -0.17    5   0.1 -2.0 -0.1 -1.6  0.1  0.3 -0.5 -1.2  0.5 -1.1  2.1  1.4
1924  0.408  0.97    5   0.6  2.7  2.2  4.3  0.0  0.8  0.6  1.4  0.8  0.9 -0.8 -1.8
1925 -0.800 -1.21    5  -0.5 -1.7 -2.0 -3.0 -0.7 -1.2  0.5 -0.9 -2.2 -0.9 -1.9  0.0
1926 -0.608  0.19    5  -0.2 -1.2 -0.1  1.5  0.5  0.7  0.0  0.4  1.0 -0.1  0.1 -0.3
1927 -0.725 -0.12    5   0.3  2.4  0.9 -2.2 -0.6 -1.2 -0.1 -0.6  0.0  0.0 -0.2 -0.1
1928  0.133  0.86    5  -0.2  0.6  1.4  2.9  1.3  1.3  0.7  0.7 -0.1  0.2  1.0  0.5
1929 -0.633 -0.77    5  -0.2 -1.7 -1.2 -2.1 -1.5  0.9 -1.2 -0.6 -0.9 -0.1  0.0 -0.6
1930 -1.242 -0.61    5  -1.2 -0.2 -0.7  0.0  0.2 -1.7 -0.9  0.4 -0.3 -1.5 -1.6  0.2
1931 -1.042  0.20    5   0.1 -1.3 -0.4 -0.3  0.1 -0.1  0.9 -0.7 -0.3  1.6  2.3  0.5
1932 -0.875  0.17    5  -0.3  1.1  0.8  0.7 -0.3  0.5 -0.8 -0.7  1.2  0.3 -0.3 -0.2
1933 -0.442  0.43    5   2.0  1.3  0.8 -0.4  0.0 -0.8  1.0  1.2  0.5 -0.5 -0.7  0.8
1934 -0.242  0.20    5  -1.9 -0.6 -1.2  0.8 -0.1  0.9 -0.6  0.4  0.1  0.1  2.0  2.5
1935 -0.067  0.18    5   3.5  2.0  1.8  0.5  0.0  0.0  0.3 -0.2 -1.8  0.2 -3.1 -1.1
1936 -0.642 -0.58    5  -1.8 -2.3 -3.0 -0.3 -0.5  0.1 -0.1  0.7  1.1  0.7  1.4 -2.9
1937 -0.850 -0.21    5  -1.4 -1.8  1.5 -1.5  1.1 -1.2 -0.1 -0.4  0.1 -1.5  0.5  2.2
1938  0.283  1.13    5   2.2  4.6  2.2  3.4  1.1  1.4 -0.3 -0.3  0.3  1.1  0.1 -2.2
1939 -0.550 -0.83    6  -2.9 -3.1 -1.4 -2.1 -0.8  0.9 -0.7 -0.6 -0.2 -0.8 -0.2  1.9
1940 -0.592 -0.04    7   2.2 -0.1 -1.1 -1.6 -0.9 -1.1  1.2  1.0  0.4  0.2 -0.7  0.0
1941 -0.550  0.04    8   0.0  1.0  1.5  0.3  0.8 -0.6  0.1 -1.1 -0.1 -0.6  0.2 -1.0
1942 -0.250  0.30    8  -1.6 -0.6 -1.0  1.2  0.3  1.4  0.5  1.0  0.5  1.4  0.5  0.0
1943 -0.408 -0.16    8   0.9  0.4 -0.2  0.1 -1.2 -0.3 -0.7 -1.1 -0.7 -0.9  0.5  1.3
1944 -0.408  0.00    8   0.4  0.4  0.9  0.4  0.4 -0.7  0.5  0.4 -0.4  0.1 -0.8 -1.6
1945 -0.608 -0.20    8   0.2  0.0 -0.4 -0.9 -0.4 -0.5 -1.0  1.3  0.5 -1.1  0.5 -0.6
1946 -0.517  0.09    8  -1.3 -0.4  0.2  0.2  1.4  1.5  1.5 -0.7  0.1  0.7 -2.3  0.2
1947 -0.300  0.22    8  -0.3 -0.5  0.0 -0.3 -0.5 -0.4 -0.4  0.2  0.2  0.5  2.4  1.7
1948 -0.117  0.18    8   1.9  0.4  0.3  0.2  0.2  0.0  0.5 -0.2  0.0 -0.1 -0.4 -0.6
1949 -0.383 -0.27    8  -1.7  0.8 -1.1 -1.2 -0.5  0.3  0.1 -0.3 -0.5  0.7  0.2  0.0
1950 -0.308  0.07    9   1.1 -1.0 -0.2  0.3  1.3 -0.2 -0.7 -0.3  0.2  0.0  0.4  0.0
1951 -0.442 -0.13   14  -0.3  0.2  0.7  0.7 -1.2 -0.5  0.0  0.0 -0.1 -0.4 -0.2 -0.5
1952 -0.233  0.21   14  -0.9  0.5 -1.0 -0.4  0.6  1.2 -0.7  1.2  0.4  0.3 -0.1  1.4
1953 -0.200  0.03   15   0.4 -1.0  0.3 -0.3  0.3  0.1  0.6 -0.1 -0.1 -0.6  0.8  0.0
1954  0.092  0.29   15   0.6  1.7  1.2 -0.1  0.5  0.7 -0.1 -0.6 -0.6  0.0  0.3 -0.1
1955  0.350  0.26   15   0.3  0.4 -0.3  1.5  0.5 -1.3 -0.3  0.9  0.8  1.3 -1.0  0.3
1956  0.525  0.18   15   1.8 -1.0 -1.2  1.4 -0.7  1.5  0.7 -0.5  0.0 -0.1  0.3 -0.1
1957  0.167 -0.36   15  -1.5  1.0  1.8 -1.5  0.1 -0.9 -0.7  0.5 -0.1 -1.2 -0.6 -1.2
1958  0.158 -0.01   15  -1.2  0.0 -0.2 -1.8 -0.5  0.0 -0.1 -0.3 -0.5  1.8  0.9  1.8
1959 -0.100 -0.26   15   1.6 -1.0 -0.6  1.1 -1.8 -0.6  0.6 -0.1  0.8 -2.0 -0.6 -0.5
1960  0.017  0.12   15  -0.5 -0.1 -1.0 -0.4  2.3  1.2  0.2 -0.3 -0.2  1.6 -0.1 -1.3
1961 -0.050 -0.07   16  -0.5  0.0 -0.1  0.0 -0.7 -0.6 -0.5 -0.1 -0.6  0.4  0.2  1.7
1962  0.500  0.55   18   1.4  0.2  1.0  0.2  1.7  1.0  0.8  0.7  0.5  0.0  0.0 -0.9
1963 -0.317 -0.82   18  -0.8  0.6 -0.7 -1.0 -1.6 -1.3 -0.6 -1.2 -0.1 -1.1 -1.1 -0.9
1964 -0.225  0.09   18  -1.4 -1.1  0.3  0.4 -0.4  0.2  0.8  0.8 -0.2 -0.3  0.6  1.4
1965 -0.433 -0.21   18   2.0 -0.6 -0.1  0.0  0.0  0.0 -1.6 -0.3  0.1 -0.8 -0.2 -1.0
1966 -0.242  0.19   18  -1.2  2.2  0.9  0.5 -0.2 -0.2  0.4 -0.4 -0.2  0.6  0.0 -0.1
1967 -0.075  0.17   18  -0.1 -1.9 -0.1  0.0  0.8 -0.3  0.0  2.2 -0.3  0.9  0.2  0.6
1968 -0.083 -0.01   18   0.1  0.3  1.4  0.0  0.7  1.0 -0.3 -0.9  0.0 -1.1 -0.3 -1.0
1969 -0.208 -0.13   18   0.0 -0.6 -2.1 -0.8 -0.8 -1.3 -0.3 -0.4  1.8 -0.4  1.1  2.3
1970  0.450  0.66   18   1.5  0.7  1.2  1.4 -0.4  1.6  1.6  0.6 -0.7  1.3 -0.1 -0.8
1971  0.542  0.09   16  -0.4  1.0 -0.7 -0.2  1.4  0.8 -0.8  0.2  0.0 -0.1 -0.2  0.1
1972 -0.142 -0.68   16  -1.3 -1.9  0.7 -0.1 -1.1 -2.9  0.4 -1.9  0.2  0.1  1.1 -1.5
1973  0.250  0.39   16   0.5  1.0 -0.6 -0.3  0.5  2.0 -0.6  1.2  0.1  0.0 -0.5  1.4
1974  0.417  0.17   16  -0.3  1.5 -1.1  0.6  0.0 -0.5  1.2 -0.4  0.0 -0.1  0.1  1.0
1975  0.325 -0.09   16   1.9 -0.8  2.1  0.1  0.5 -0.6 -1.0  0.7 -0.6  0.1 -1.3 -2.2
1976 -0.442 -0.77   16  -1.6 -2.9 -1.1 -0.5 -1.3 -0.2  0.0 -0.1 -0.6 -1.0 -0.6  0.7
1977 -0.450 -0.01   16  -0.7  1.8  0.2 -0.1 -0.8  0.7  0.3 -0.4 -1.0  0.2  0.3 -0.6
1978  0.325  0.78   16   1.4  0.9  0.2  1.6  1.7 -0.1  0.3  0.4  1.5 -0.2  0.9  0.7
1979  0.275 -0.05   16   0.0 -0.9  0.4 -1.5 -0.6  1.0 -0.1 -0.6  0.3  0.8  0.5  0.1
1980 -0.025 -0.30   16  -0.1  0.3 -1.6 -0.5  0.4 -0.7 -0.5  0.1  0.3  0.6 -1.5 -0.4
1981  0.392  0.42   12   0.4  0.2  1.8  1.4 -0.2  0.9  0.4 -0.6 -0.9 -0.7  0.9  1.4
1982 -0.275 -0.67   11  -0.6  0.2 -1.2 -2.2  0.3 -1.5 -1.0  0.7  0.1 -1.0  0.3 -2.1
1983 -0.350 -0.08   11  -1.2 -2.0 -0.3  0.8 -0.6  0.5  0.0  0.1  0.4  1.5 -0.4  0.3
1984  0.300  0.65   11   0.0  1.1  1.3  0.3  0.0  0.8  1.4  0.6  0.0 -0.6  1.0  1.9
1985  0.325  0.03   11   2.4  0.6 -0.8  0.0  0.2  0.0  0.0 -0.7  0.2 -0.2 -0.9 -0.5
1986  0.283 -0.04   10   0.2  0.3  0.2  0.6  0.4 -0.6 -1.3 -0.5 -0.8  0.9  0.1  0.0
1987  0.183 -0.10   11  -0.4 -1.0 -0.9 -0.7  0.1  0.0  0.3  1.3  0.4 -0.1  0.3 -0.5
1988  0.383  0.20   11  -1.1  0.9  0.1 -0.7 -0.5  0.5  0.9 -0.5  1.0  0.7  0.3  0.8
1989  0.433  0.05   11   1.1 -0.5  0.7  0.7  0.4  0.0 -1.2  0.2  0.0  0.2  0.0 -1.0
1990  0.400 -0.03    9  -1.1  1.2  0.5  0.2  0.3 -0.4  0.9  0.0 -1.6 -0.7 -0.3  0.6
1991 -0.133 -0.53   10  -0.1 -1.1 -0.8 -0.9 -0.7 -0.5 -0.5  0.5  0.8 -0.5 -1.6 -1.0
1992 -0.525 -0.39   10   0.0 -0.2 -1.1 -0.9 -0.7 -0.2  0.5 -1.3 -1.5 -0.6  1.4 -0.1
1993 -0.567 -0.04   10  -1.1 -0.6 -0.2  0.3  0.5  1.1 -0.2 -0.3  0.1  0.8 -0.9  0.0
1994 -0.375  0.19    9   1.6  1.2  0.5  0.9  0.3 -1.3 -0.6  0.1 -0.2 -0.9  0.0  0.7
1995  0.275  0.65    6  -0.3  0.3  1.3  1.7  0.2  0.6  0.0  0.4  1.5  0.7  0.3  1.1
1996  0.217 -0.06    9   0.4 -0.1 -0.6 -0.3 -0.5 -0.3  0.3 -0.2  0.7  0.4 -0.1 -0.4
1997  0.158 -0.06    9  -1.2  0.1  0.3 -0.8  0.6  0.5  0.0  0.4 -0.8 -0.2  0.6 -0.2
1998  1.058  0.90    9   0.9  1.4  1.5  1.1  0.2  0.5  1.7  0.1  1.2  1.0  0.5  0.7
1999  0.975 -0.08    9   1.1 -0.9  0.2 -0.3  0.6  0.1 -0.8  0.0 -0.1  0.0  0.2 -1.1
2000  0.625 -0.35    9  -1.6 -1.0 -1.5  0.0 -0.4  0.2  0.7  0.0 -0.1 -0.4 -1.5  1.4
2001  0.692  0.07    9  -0.6  0.4  0.0 -0.2 -0.1 -0.5 -1.8  0.6  0.4  0.7  1.4  0.5
2002  0.500 -0.19    9   1.2 -0.4  0.6  0.0 -0.4  0.9  0.8 -0.3 -0.1 -2.1 -1.1 -1.4
2003  0.467 -0.03    8  -0.8 -0.2  0.3  0.0  0.3  0.1 -0.9 -0.2 -0.1  0.7  0.0  0.4
2004  0.142 -0.32    8   1.3  0.0 -1.8 -1.0  0.0 -0.5  0.3 -0.5 -1.0  0.2  0.9 -1.8
2005  0.792  0.65    8  -0.7  1.6  1.2  0.5  0.1 -1.2  1.1  1.2  1.1  0.1 -0.2  3.0
2006  0.325 -0.47    8   0.0 -1.1 -1.3  1.8 -0.5 -0.4 -0.5 -0.5  0.2 -0.1  0.0 -3.2
2007  0.583  0.26    8  -0.2 -0.4  1.6 -1.5  1.2  0.6  0.1  0.4 -0.3 -0.2 -0.4  2.2
2008  0.592  0.01    8   0.4  0.0 -0.2  0.1 -1.2  0.4  0.1 -0.3  0.2  0.0  0.6  0.0
2009  0.233 -0.36    8   0.4  0.5 -0.6  0.1 -1.3 -1.2 -0.9  1.0 -0.3 -0.6 -0.5 -0.9
2010  0.167 -0.07    8  -0.8  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
For Country Code 507
From input file data/EMS.inv11.M1800.dt

In the “raw data simple average” of the New Zealand report (at this URL:
http://chiefio.wordpress.com/2009/11/01/new-zealand-polynesian-polarphobia/ )
you will find about 1.5 C of warming bias “kick in” when that station is dropped from the series. (Look for the “temperature series” report in the link.) And just below that one is a simple average with Campbell Island missing from the whole set. It shows little to no warming.
So you see, a thermometer drop most certainly DOES bias New Zealand. And all it takes is an existence proof of ONE to show that the same bias can show up in any of the data from the planet. But we also can see that if you do your anomaly calculations right (i.e. not cold season Apples to warm season Oranges) you can suppress that Survivor Bias in the basic data series.
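The per-month anomaly approach E.M. Smith alludes to (not comparing “cold season Apples to warm season Oranges”) can be sketched as follows. This is an illustrative toy, not his actual code; the station data and function names are invented:

```python
from collections import defaultdict

def monthly_anomalies(records):
    """records: (station, year, month, temp_c) tuples. Each reading is
    compared to that station's own mean for that calendar month, so a
    January value is never measured against a July baseline."""
    sums, counts = defaultdict(float), defaultdict(int)
    for station, _, month, t in records:
        sums[(station, month)] += t
        counts[(station, month)] += 1
    baseline = {k: sums[k] / counts[k] for k in sums}
    return [(s, y, m, t - baseline[(s, m)]) for s, y, m, t in records]

data = [("A", 2000, 1, -5.0), ("A", 2001, 1, -4.0),
        ("A", 2000, 7, 20.0), ("A", 2001, 7, 22.0)]
result = monthly_anomalies(data)
# January anomalies: -0.5, +0.5; July anomalies: -1.0, +1.0 — the seasonal
# cycle drops out, so a lost warm-season station cannot fake a warm trend.
```

Because each station is measured against its own monthly baseline, a station dropping out changes the sample but does not inject its absolute warmth or coldness into the average.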
If you look at similar reports for other regions, you can find one warming (Canada) while another is cooling ( Africa). Averaging all them together does not increase your knowledge, it decreases it. And you are just hoping that all those averages of averages have some meaning. Dividing those “mixed fruit baskets” into kept and tossed and averaging those two sets of “mixed fruit baskets” to compare against each other is just hiding what you are looking for.
So I don’t see any reason to “duplicate” something that is not very useful to begin with.
I could say more on that point, but I, too, am contemplating a potential to publish so the exact way in which other folks have ‘missed it’ is not something I’m willing to share just yet.
FWIW, the original “average the raw data and look at it” that shows an introduced bias in the data was for the purpose of seeing what had to be done to “clean it up” and for seeing what magnitude of “issue” was facing GIStemp. It was NOT to say that was exactly what happened to the planet, but rather “that was what happened to the data”. GIStemp was measured and found wanting to the task of removing that bias from the data.
To the extent the data were more representative of reality, GIStemp would work better. If the data set were about 1/2 to 3/4 C less “biased” GIStemp would be usable (IMHO – based on measured bias of data in and what was done to it by GIStemp).
The basic problem is that GIStemp does anomalies as “Basket A at time A” to “Basket B at time B” and those baskets are very non-representative now. You have about 1500 stations (about 1000 after GIStemp drops a bunch) that must fill 8000 boxes. Many boxes have only one station in them. So in many cases you are comparing one thermometer in the present to a different thermometer in the past (and in some cases they are fictional and ‘filled in’ from somewhere else…) Under those circumstances any “station survivor bias” will be amplified, not diminished. And station drops damage the GIStemp product, not improve it. For that reason alone, you need more stations in GHCN, not less.
Unless, of course, you wish to stop using GIStemp (which would be fine with me…)

Malcolm
February 25, 2010 4:57 pm

Does anyone believe that it is possible to know the global temperature to an accuracy of +/- 0.4 deg C in 1850?????
I certainly don’t!
I have read Brohan et al, and it does not convince me!

sturat
February 25, 2010 4:58 pm

BTW, I did send a copy of my original post question to E. M. Smith http://chiefio.wordpress.com/ , but it must not have made it past his input filters. I sent it a couple of hours ago to this thread: “Canadian Concatenation Conundrum” and there have been posts added since then, but not mine.
I can understand that he might not have an immediate answer, but I don’t see why my post should be held up for that. I thought that’s what the comments on a blog are for.
So, since you suggested (twice) that my question about alternative analyses that contradict Tamino’s analysis regarding station dropout effects should be sent to Mr. Smith, would you consider asking him to respond or at least release my comment from quarantine?

North of 43 south of 44
February 25, 2010 5:00 pm

Ross M (16:42:56) :
Likely it is because they were dividing by a fixed number of stations in the grid box instead of the actual number that were there, and the grid squares that they had all of the data for contained fewer than that fixed number of stations.
If you divide 100 by 30 you get 3.333; if you divide 100 by 25 you get 4, and the last I looked 4 > 3.333, but I might be wrong.
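The arithmetic described above — a denominator fixed at an “expected” station count even when fewer stations actually report — can be sketched like this. It is a guess at the general kind of bug, not the Met Office’s actual code; all names and numbers are invented:

```python
# Hypothetical sketch: the grid-box denominator is a fixed "expected"
# station count rather than the number of stations actually reporting.
def gridbox_mean_buggy(anomalies, assumed_n=30):
    return sum(anomalies) / assumed_n      # wrong when fewer stations report

def gridbox_mean_correct(anomalies):
    return sum(anomalies) / len(anomalies)  # divide by the real count

readings = [0.4, 0.6, 0.5, 0.3, 0.7]   # five stations, deg C anomalies
print(gridbox_mean_buggy(readings))    # 0.0833... badly understated
print(gridbox_mean_correct(readings))  # 0.5
```

The same denominator mistake applied to a station-error term would shrink (or inflate) the quoted uncertainty rather than the mean, which is consistent with the Met Office expecting only the error bars, not the trend, to change.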

Michael Jankowski
February 25, 2010 5:22 pm

Mosher, you said, “Tamino will now not publish his paper. he’ll post a PDF or something.”
His latest comments are pre-emptively striking the odds of making it through peer-review…too “scientifically unimportant,” lol.

Pamela Gray
February 25, 2010 5:27 pm

From my link above: “Determinate errors can be more serious than indeterminate errors for three reasons. (1) There is no sure method for discovering and identifying them just by looking at the experimental data. (2) Their effects can not be reduced by averaging repeated measurements. (3) A determinate error has the same size and sign for each measurement in a set of repeated measurements, so there is no opportunity for positive and negative errors to offset each other.”
This is why, when I have gained 5 lbs, and it shows up every morning, I subtract 5 lbs from my bathroom scale from then on. Then I average the lot together and can say with utmost confidence I haven’t gained a single pound. Do the math. It works.
PS, You say maths, I say math.
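The quoted point — averaging beats down random (indeterminate) error but leaves a determinate, systematic offset completely untouched — is easy to demonstrate numerically. The weights and error sizes here are invented for illustration:

```python
import random

random.seed(0)
true_weight = 150.0   # invented numbers for illustration
bias = 5.0            # determinate error: same size and sign every reading

# Random (indeterminate) error shrinks as readings are averaged...
readings = [true_weight + bias + random.gauss(0, 0.5) for _ in range(10_000)]
mean = sum(readings) / len(readings)

# ...but the averaged result is still ~5 lb high: the systematic offset
# survives averaging completely, just as the quoted passage says.
print(round(mean - true_weight, 1))
```

No number of repeated measurements can reveal or remove the 5 lb offset from within this data alone; only an independent calibration can.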

February 25, 2010 5:38 pm

E.M.Smith,
If there is no evidence of divergence prior to station “dropping”, how on earth could NCDC who, in your words, “is seriously complicit in data manipulation and fraud…by creating a strong bias toward warmer temperatures through a system that dramatically trimmed the number… of weather observation stations” know which stations to drop a priori?
The match between past temperatures doesn’t necessarily preclude divergence in future temperatures, it merely suggests it’s unlikely. It does, however, put a damper on your conspiracy theory.
By the way, Lucia has a new post on the matter that is worth checking out: http://rankexploits.com/musings/2010/effect-of-dropping-station-data

February 25, 2010 6:14 pm

carrot eater (14:29:47) :
If people are finding that Tamino moderates heavily, I’ve found the same of EM Smith. So unless one of the two open up, the only cross-discussion can occur here.
Which is unfortunate, because the topic of this particular thread is something entirely unrelated, and interesting in its own right.

That’s one of the things I like about Slashdot: only one comment has ever been deleted, and that was due to a court order. Another is that it is not only community moderated; there is a meta-moderation to moderate the moderations.

carrot eater
February 25, 2010 6:29 pm

Michael Jankowski (17:22:18) :
“His latest comments are pre-emptively striking the odds of making it through peer-review…too “scientifically unimportant,” lol.”
It is fairly unimportant. It’d be a minor paper, as it is now. Maybe he could expand the analysis a bit to something that hadn’t really been shown before.

E.M.Smith
Editor
February 25, 2010 6:34 pm

WAG (16:25:52) : Anyone who was operating in good faith would not only respond directly to the criticism,
Or: “has a life and limited time and a dozen commitments already in queue and can’t jump just because somebody else wants it.”
So I was at the hospital today (it was relatively minor, but still I was sitting there…) and I was at the hospital (last night? night before? it’s a blur … ) with an elderly relative in the emergency room. Again for a long time. (She recovered “fine”).
Somewhere in between those two I was picking up a “Bigendian” box so I can run the final STEP of GIStemp (which also means about 2 to 4 days of sys admin work to make the box ready… yet to be done. But it has Open Office already on it so I’ll be able to do more graphs faster.)
All of which involve being Away From Keyboard (as my “about” tab says will happen on my blog). So frankly, the first time I realized some folks had their panties in a bunch and were ranting was about 1 hour before I posted the New Zealand Existence Proof above. I still have not read more than about 1/2 dozen of the comments in this thread (and probably will need to do that tomorrow… maybe… if I get some more time.)
So anyone who wants to demand specific services on a specific schedule will just have to join the trolls at the back of the queue. It’s just not a life priority right now. (Or you can pay my billing rate. $100 / hour for commercial operations. $200 / hour for “Climate Stability Deniers”. Discounts available for bulk purchase or for Friends of Anthony. And move to the head of the priority queue.)
The good news is that I got the dT/dt ‘merged modification flag’ code written while waiting in the emergency room… and had some good “think time” about how to best handle the “start of time” bias in dT/dt. So as of now I’ve got a good “characterization of the data” and measured bias in the data from thermometer drops over time by location; and I’ve got a decent tool to assess the actual warming / cooling in a region. I’ve also got GIStemp benchmarked through STEP1 fairly detailed and through STEP3 in rough form. I’m darned close to being able to do a proper end to end comparison of “what goes in” vs “what comes out” vs “what ought to come out”. And that is a bit more of a priority to me than how someone else has made The Usual Errors. [ over averaging, assuming things not in evidence, insufficient level of detail in investigation, “Hypothetical Cows” of all sorts, etc.]
Can anyone–ANYONE–prove Tamino is wrong?
No one needs to prove him / her / it wrong (do we know the gender of a Tamino?). They must prove they are right.
I’m “unimpressed” with the hand waving done. It’s over averaged and it depends on the faith that a trend at time A in bulk data continues into the period when the data is not in existence. At best it’s an inference, not a proof, of anything.
I’ve got a nice tidy little existence proof in New Zealand. It has the virtue that you can calculate it by hand if desired. (There are others, too, but one existence proof ought to be enough.) I’ve also got a benchmark of GIStemp using the whole USHCN set from 5/2007 to 11/2009. That is, you take the GHCN 136 USA thermometers and add in the rest of the (missing) USA and you get less warming. BTW, for the inevitable “no it doesn’t” per the USHCN.v2 addition point in 11/2009: USHCN.v2 is “warmer” than USHCN. There was a nice bit of magic wand usage there, where they took out USHCN in 2007 via letting it run obsolescent, let GHCN warm things, then add back in a pre-warmed USHCN.v2 so “nothing changes”. So you must compare Same to Same to see the bias. Not USHCN to USHCN.v2. But it is clear that the drop for the USA thermometers in GHCN from 1200+ IIRC to 136 did warm the USA data. Adding the USA back in cools the anomaly map. (There are also monthly changes that demonstrate warming of the data via thermometer drops, but that level of detail is a bit too much for this thread.)
Two existence proofs trump one hand waving any day.
Oh, and a free sidebar: Note that the large dropping of USA stations in GHCN does not happen because the data are not available. NCDC has the data in the USHCN.v2 data set. It happens by a choice (or implicit choice), not for lack of NCDC reporting the data to itself…
“REPLY: Probably, just not in insta-time. You want it IMMEDIATELY. Science isn’t about insta-results nor should any be expected. I haven’t even heard back from E.M. Smith yet. ”
My apologies for my ‘sloth’, but now you know “the rest of the story”… I’ll get to my email queue a bit later tonight and respond to the (undoubtedly dozens of emails…) as time permits.
“Did Tammy do his whole thing in an afternoon while at his day job office? I think not. Get some patience or bugger off – A ”
May I buy you the beverage of your choice when next we meet ? 😉
I couldn’t have said it better myself…
FWIW, my priorities are very simple:
Family first.
Make money to support family.
(i.e. clients schedules, if any.)
Friends and commitments to friends.
Make money to support interests. Including classic Mercedes repairs 8-{
(i.e. stock trading)
Interests such as moving forward my deconstruction of GIStemp and GHCN change analysis. This includes writing the code I want written to learn the things I think need to be learned and to complete commitments that I’ve made to others. (This is all done “Space Available” and Space-A is not deterministic. Unlike what AGW folks chant, I get no funding from Exxon nor any other company. Some tips in the tip jar are about it. And a donated computer from a friend.)
Loads of other stuff like house maintenance, pet care, shopping with the spouse, catching up on sleep, oh, and I had to get a flat fixed Tuesday(?) too…
Then, after all that:
“Demands” by folks who could easily be making such demands only to disrupt my time or my productivity (or just because they are too impatient to chew their food before swallowing).
So let me make it perfectly clear to folks in that last category: I’m the driver of my own schedule. I work on my time to my priorities (and those of clients and friends) and not on your time. Things that serve to deflect from the priorities I’ve listed go to the bit bucket. Sorry to disappoint you, but your ‘needs’ are just not very important.

February 25, 2010 7:19 pm

Science isn’t about insta-results nor should any be expected.
That’s a bit histrionic. Anthony, you’ve been making assertions about the missing station data for quite some time. Nobody should make any assumptions – pro or con – about Tamino’s study until he actually publishes something substantive. Both sides need to crank back on the triumphalism until there’s more to go on.
“Demands” by folks who could easily be making such demands only to disrupt my time or my productivity
Phil Jones, is that you?

E.M.Smith
Editor
February 25, 2010 7:26 pm

carrot eater (14:29:47) : If people are finding that Tamino moderates heavily, I’ve found the same of EM Smith.
There is one of me. If I had a team of moderators who could keep things orderly, I could leave it more open. As it is, for simple reasons of limited time I can’t let it turn into a free for all. Sorry, but you have plenty of other places to run wild.
I’ve also seen a pattern of what appears to be organized graffiti and attempts to hijack threads. So no, you don’t get carte blanche. Get over it.
The topics covered at ‘my place’ are what interest me, and the comments and commenters that get through are those that support a positive and productive environment. AND don’t thread hijack or worse, hijack my time. If you don’t like that, you can always go somewhere else. Please.
So unless one of the two open up, the only cross-discussion can occur here.
Nope. I have very limited time for “cross-discussion”. My time goes preferentially into my work plan. So I’m going to be ‘out of time budget’ on this thread in about, oh, another 2 minutes. Then you can “cross-discuss” with someone else.
Frankly, I find it a complete waste of my time to explain where other folks have gotten something wrong. I really don’t care. Intelligence is limited but stupidity knows no bounds. So it’s an infinite time sink. One I choose not to indulge. It’s a manager thing… And I’ve been one for a few decades. It’s not a habit that is going away.
Which is unfortunate, because the topic of this particular thread is something entirely unrelated, and interesting in its own right.
So why don’t you get back to the topic of the thread.
Frankly, I had half decided to ignore the Tamino thread hijack, but it looked like Anthony was expecting me to say something. Otherwise I’d have just ignored it and stayed on topic. (A place where I intend to be shortly).
Tamino is of no interest to me. I would much rather be comparing my dT/dt reports (after dealing with the ‘start of time’ bias) to GIStemp anomaly maps and getting a true end to end bias benchmark.
I’m close enough now that I can do the comparison of “data with bias as computed from ‘unadjusted’ GHCN” and compare it with dT/dt by continent and make some very interesting observations about how much bias is in each subset as a measured value. Then compare that with GIStemp maps and see what bleeds through. Now that’s an interesting topic.
And about 2 weeks work elapsed time. Sigh.
Unfortunately, it will have to wait a day or two. Email queue calling.
So what you need to have as a “takeaway” is that I’m not interested in “cross-talk” as a matter of time management not as a matter of any agenda. If you don’t understand that, then I have a suggestion.
Get a Republican friend and a Democrat friend. BOTH of you try to call the Governor on the phone. That nobody gets through is because of time management, not because of a political orientation.
So I’m going to do what I’m going to do, and I’m not particularly interested in what the AGW folks think of it. And I’m negatively interested in “cross-discussion” that accomplishes nothing at the expense of very limited available time.
So with that, I’m going on to other topics.

carrot eater
February 25, 2010 7:32 pm

E.M.Smith (16:52:31) :
“Per Tamino: He shows that two sets of over averaged data match in one period of time then asserts this means they must match in another period of time (when one of them is missing). That is a logic failure. ”
Slow down a bit, as this is quite a shift from the claim in the SPPI report. At that time, it was claimed that ‘systematically and purposefully’, certain stations were removed.
We have, from SPPI, “Calculating the average temperatures this way would ensure that the mean global surface temperature for each month and year would show a false-positive temperature anomaly – a bogus warming.”
This is saying that simply removing those stations, in itself, would ‘ensure’ a spurious warming, as the removed stations tended to be from ‘cooler’ locations. What Tamino has done shows that simply dropping those stations didn’t, in itself, ‘ensure’ any warming.
Yes, you could get an inaccurate read if the removed stations then would have started diverging from their grid-box neighbors, after the time of dropping. But to drop stations with that purpose in mind would require time travel. If the subset of dropped stations had been correlating with the other stations just fine up to that point, how would you know to drop that particular subset? Without time travel, you wouldn’t know.
So from Tamino’s results, we have no particular reason to think that dropping that subset of stations has in itself caused much more undersampling, where you miss local differences in trends because of a lack of stations. I suppose it’s possible, and some parts of Africa certainly do look undersampled. But we don’t know if the error due to undersampling would be warming or cooling; that’s what the error bars are for. Based on my reading of the SPPI report, this is a very different issue from that posed in the SPPI. It’s an issue that could be addressed by filling in data from SYNOPs, or the database that Dr. Spencer is now using. It sounds like NOAA is collecting more archived data for the next release of GHCN, as well.
By the way, you can see that nobody wants or wanted undersampling; see Peterson 1997, “Initial Selection of a GCOS Surface Network”. They wanted a nicely filled in map, including some high elevation stations where they were afraid a valley station would not correlate with the mountains.
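The dropped-station argument above can be illustrated with a toy simulation (all numbers invented): if stations share a regional trend and each station is compared against its own baseline, dropping even the coldest half of them barely moves the anomaly average, because a station’s absolute warmth or coldness cancels out of its own anomaly.

```python
import random

random.seed(1)
years = list(range(1950, 2011))
base = list(range(1951, 1981))   # each station's own baseline period
n_stations = 50

# Every station sees the same regional trend; absolute offsets differ
# widely (some stations are 'cold', some 'hot').
offsets = [random.uniform(-10, 10) for _ in range(n_stations)]
trend = {y: 0.01 * (y - 1950) for y in years}
series = [{y: off + trend[y] + random.gauss(0, 0.2) for y in years}
          for off in offsets]

def anomaly_mean(subset, year):
    # Each station is compared to its OWN 1951-1980 mean, so its
    # absolute offset cancels out of the anomaly.
    vals = []
    for s in subset:
        baseline = sum(s[y] for y in base) / len(base)
        vals.append(s[year] - baseline)
    return sum(vals) / len(vals)

all_mean = anomaly_mean(series, 2005)
# Drop the 25 coldest stations entirely:
kept = sorted(series, key=lambda s: sum(s.values()))[25:]
kept_mean = anomaly_mean(kept, 2005)
# The two anomaly means agree to within the noise; no warm bias appears.
```

A bias would only appear if the dropped stations went on to diverge from their grid-box neighbors after being dropped — which, as argued above, could not have been known in advance.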

E.M.Smith
Editor
February 25, 2010 7:34 pm

sturat (16:58:48) : BTW, I did send a copy of my original post question to E. M. Smith http://chiefio.wordpress.com/ ,but it must not have made it past his input filters. I sent it a couple of hours ago to this thread: “Canadian Concatenation Conundrum” and there have been posts added since then, but not mine.
Well, I’ve been here, not moderating there… Yes, that “time management” issue… See the answer up thread here with two existence proofs.
BTW, some folks and some content goes “straight through” so some folks stuff ‘goes up’ even if I’m asleep… It’s not me actively doing anything in most cases.
So look up thread here and read what I’ve already written. And I’ll leave here and go look at your comment. I see it’s a duplicate of one here, so I’m not going to start a discussion of it under the Canada thread.

February 25, 2010 7:53 pm

sturat
“I do agree with carrot eater, steven mosher, and others that it is unfortunate that these discussions can’t be held in a more civil manner.”
Steve Mosher is civil. But don’t be so quick to enable carrot eater’s dissembling. I’ve noticed his repeated comments on various alarmist sites, where he fully engages in calling skeptics “denialist,” “deniers” and similar terms when referring to commentators on WUWT and similar sites.
Is he being a chameleon? A hypocrite? Kissing up to closed-minded people like tamino?
You can decide for yourself. But I’ve got his number.

sturat
February 25, 2010 8:04 pm

EM Smith
I am sympathetic regarding your recent trip to the hospital, etc. Life does seem to get in the way of life sometimes, doesn’t it.
But, it seems you still had time to post several rambling, little value added posts that did little to add to the discussion.
I’m particularly intrigued by your accusation that Tamino is handwaving his arguments when all you have stated is that you have a couple of “tidy little existence proofs” that invalidate all of his work if we would only trust you. And don’t bother asking for details, since you are above question. Please, give us all a break.
Now, I can’t say myself whether Tamino’s analysis is absolutely correct, partially correct, or just down right wrong. But, at least he and others are willing to discuss the analysis and examine the disagreements. All I’ve seen on your site and this one is trust me, I’m too busy, I’m going to publish it someday, other people stole my data, and …
What I have asked for I believe is a reasonable request. Show a competing analysis on a comparable set of data that contradicts Tamino’s conclusion. I know Tamino took a couple of weeks to produce his results and that he continues to expand and refine the work. (He has stated that he is open to incorporating RomanM’s comments into his analysis.) So, I don’t expect you or anyone else to be able to respond instantly. It would be civil to say that you are “interested” and would work on a response to be published (blogged?) in a couple of weeks. Assuming that it fits into your “time management” constraints.
That said, Mr. Watts, since it appears that your go to guy, EM Smith, is not interested in responding (or allowing similar questions on his blog) where is the reproducible analysis that shows that Tamino is wrong.
Oh, please don’t drag in the “court of law” meme. It really has no applicability here. Besides, Mr. Stokes was convicted by a court of law.

sturat
February 25, 2010 8:08 pm

EM Smith
Crossing posts. Wouldn’t you know it.
I just read your response on this site acknowleging my cross post to your site. I do (although I might not sound like it) understand about too many things to do and the lack of time. So, I see your point in not addressing my questions on your site.
I look forward to your and others’ responses on this site, though.

sturat
February 25, 2010 8:15 pm

Smokey
wrt carrot eater
A couple of phrases spring to mind
“A rose by any other name … will still have thorns”
“If the shoe fits, …”
I realize that either of the above comments could be construed as close to name calling, and if you or someone else takes offence, then so be it.
I’m just asking that arguments be presented with as complete and robust a technical, data-driven nature as possible. Continued assertions that are not backed up with real analysis do not help anybody.

sturat
February 25, 2010 8:56 pm

Make that Scopes, not Stokes.
Sigh, ….

February 25, 2010 9:04 pm

sturat (20:15:43),
I’ve read carrot eater’s denigration of skeptics, using insulting terms like “denialist” on alarmist blogs, so quit wasting your time apologizing for him.
Instead, let’s see those ‘robust’ analyses by tammy. No need to have secrets about climate data and algorithms; they’re not nuclear defense secrets.
The fact that tammy hides his methodologies indicates that he’s all bluster. All show and no go. All hat and no cattle. That dog won’t hunt.
That’s what we’ve come to expect from tammy’s closed mind.

Dr A Burns
February 25, 2010 9:50 pm

Can anyone explain how temperatures are claimed to be accurate to +/- 0.2 degrees in the 1930’s, when electronic thermistors hadn’t been invented ?
Even later, temperatures were measured electronically to 0.1 degrees but recorded to only +/- 0.5 degrees:
http://www.srh.noaa.gov/ohx/dad/coop/EQUIPMENT.pdf page 11
Lots of other errors on top of this, as has been much discussed.
Early temperature measurements were a bit of a joke:
http://climate.umn.edu/doc/twin_cities/Ft%20snelling/1850sum.htm

rabbit eater
February 25, 2010 10:06 pm

Can anyone -ANYONE- say what the temperature anomaly would have been if CO2 had remained the same?

Dave F
February 25, 2010 10:10 pm

If you have to delete my other comment to hold to policy, I understand.
Looking at the graph that started this thread (not involving Tammyno) it looks like the warming since 1850 could be as much as 1C or as little as ~.2C? Am I really supposed to be quaking in my boots here? The dart board graphic Anthony uses has a whole new meaning, imho.

Dave F
February 25, 2010 10:25 pm

Malcolm (16:57:12) :
I don’t believe that for 1850, 1950, or even 2009. I wouldn’t even give anyone decimals.
I don’t believe anyone who thinks they can calculate the temperature of the entire planet using air temperature and sea temperature over the entire planet, using spatially non-uniform measurements, introducing adjustments to the data using an algorithm, using adjustments that have dubious necessity, all the while retracting statements in the ‘most comprehensive’ scientific document supporting their position. So they lose me as soon as they say ‘point…’ I am just not buying it. My BS-dar goes off every time.
Open question:
Why even consider the minimum temperature if you are looking for a warming signal? Why not look at only maximum (since we are supposed to fry)? Or, better yet, why not look at max and min separately instead of smashing them together? Or, even better, why use averages? Temperature seems like one of those quantities where the better statistic to study would be median.

E.M.Smith
Editor
February 25, 2010 10:30 pm

sturat (20:04:38) : But, it seems you still had time to post several rambling, little value added posts that did little to add to the discussion.
Your POV. I was responding to what folks asked. That you don’t like my answers is not very valuable.
I posted a clear link to the analysis that shows New Zealand has no warming trend without a specific station and a warming trend with it. A station that is dropped in GHCN half way through so it cools the past. I see no need to re-do already done work. It is one OF MANY existence proofs that the thermometer changes are toward warming bias of the data set.
I’ve also shown a comparison with a method for finding a (more or less) clear actual trend, and shown that the dT/dt method results in an answer roughly in sync with the “non-biased constant station” set for New Zealand in the link (since that was not already up at my site).
Further, I’ve pointed out that there is a GIStemp benchmark up on the site (said link having been posted here too many times already, and easily found under the GIStemp tab on my site). Hardly hiding anything nor putting anything where it can’t be found.
So you see, I’ve not only shown the data have a warming bias, but I’ve shown that the GIStemp technique does not correctly handle it while another technique can.
Further, I gave my evaluation of what Tamino did based on a cursory examination. That you don’t like my evaluation is of slightly less interest to me than was actually making the evaluation. I stated specific areas where I saw “issues” so other folks could explore them if they wished.
Now what is probably lost on you is the simple fact that Tamino might well have shown that some particular anomaly processing method could keep the trends of both data sets similar. Just like I showed that GIStemp fails to block all warming signal and another technique does pretty good.
NONE of those means the data were not biased by change. They only show that the process performed on that data has better or worse handling of that bias. To see the bias in the data does NOT require looking at Tamino’s stuff, it requires looking at the data. And in addition to the specific example with link given, there are dozens of others on my blog.
Thermometer change clearly biases the data toward a warming profile.
The only question is how well do various methods handle it.
Then you want to insult me by saying that my responses were not up to some hypothetical imaginary standard you have internalized? And folks wonder why I have no interest in playing these “You must prove FOO is wrong” games….
if we would only trust you.
I said nothing about trust. I said the existence proofs are up for inspection. Go inspect.
And, don’t bother you about asking for details since you are above question.
You would have me retype all the details that are already in postings you are too lazy to go read? Yeah, don’t ASK me for details that are already up in written form.

Please, give us all a break.

Happy to. Didn’t want this time sink to begin with. So your wish is granted. I’ll take a break from this complete thread hijack and pointless troll-fest until further notice.
All I’ve seen on your site and this one is trust me,
As I’ve pointed out above, I’ve never said “trust me”. I publish my code as soon as it’s QA’d and stable, and everything I do is described to the point where anyone can reproduce it (and many have). That you don’t see that speaks volumes.

I’m too busy, I’m going to publish it someday,

The first is true. The other is bent. I’m “looking into perhaps publishing” one feature which I’m in active discussions on. Sorry you don’t like that, but it’s fairly standard practice as I understand it. If it’s ‘not new’ it gets shut down. So tough.
But that portion is not needed to demonstrate there is warming bias introduced into the data by thermometer dropping. I’ve got 2 selected entry point examples for you AND several dozen other very detailed postings about the specific kinds of bias. Go read them. Start with the “GIStemp” tab at the top and pay particular attention to the “GHCN Global Analysis”.
What happens to that bias in processing in the various software packages for the various data series is up for debate. That the data is “warmed” is not.

other people stole my data, and …

Now you are just making up flat out lies.
I have never said anyone “stole my data”. First off, I don’t have any data. It’s all NCDC’s data. I just process it in interesting ways. Secondly, my code is put up under open software rules for folks to do with what they will. I have nothing proprietary for anyone to “steal”.
Sorry, but that puts you in flat out Troll land.

What I have asked for I believe is a reasonable request.

Sink a few days of my life into working out what someone else did wrong and showing how they can be proven wrong when they have chosen a broken method to begin with, and you expect me to use that method. Yeah, that’s real reasonable /sarcoff>
I’ll do it right after you prove that my zero point energy machine idea doesn’t work. It uses cold fusion and palladium, and you have to use them in your proof. (Do you see how impossible that is and how it can be an infinite time sink?)

Show a competing analysis on a comparable set of data that contradicts Tamino’s conclusion. I know Tamino took a couple of weeks to produce his results and that he continues to expand and refine the work.

So you “only” want to consume a “couple of weeks” of my life (or perhaps more since I’d have to reverse engineer whatever he dreamed up). And that’s “reasonable”…
It would be civil to say that you are “interested”
No. It would be a flat out lie. And I don’t do that. I have NO interest whatsoever (as you would have known if you had bothered to read my prior statements, but you seem to have decided they had nothing to say…)
where is the reproducible analysis that shows that Tamino is wrong.
Prove a negative. Uh Huh… Look. I stand by what I do. I don’t play negative proof games.

Oh, please don’t drag in the “court of law” meme. It really has no applicability here. Besides, Mr. Stokes was convicted by a court of law.

I have no idea where this is coming from. “Court of law”? I didn’t say anything about courts or law. And I’ve got no idea who this Mr. Stokes is that you’re dragging in.
At this point, you seem to have gone completely off the deep end.
So here is my suggestion. Look at the cases where I’ve demonstrated that the thermometer changes over time move station data from cold places to warmer places. Then consider that GIStemp compares “basket A in early times” to “basket B in later times” (as already described above) and that there is a benchmark run on NCDC data using GISS code that shows this bias comes through the code.
Then ponder the meaning of documented reproducible existence proof.
And yes, it’s already published on the site. And it won’t even take 2 weeks of your life for you to find it. And no, I’m not going to spoon-feed it to you. I’m going to “give you a break”.

John Whitman
February 26, 2010 12:39 am

JGC & Ilya Goz ,
Wonderful effort.
This case of correcting the Met Office to show they had less error in their grid cells helps show the balance of the blogosphere.
Also, your case shows the profound capability of the blogosphere in real time.
John

steven livingston
February 26, 2010 1:08 am

Should I wear 3-D glasses to figure out what the graph shows ?

Frank Lansner
February 26, 2010 1:15 am

Well, at http://www.hidethedecline.eu a tiny, tiny problem with the CRU Brohan 2006 data was reported.
http://hidethedecline.eu/pages/posts/temperature-corrections-of-the-northern-hemisphere-144.php
It’s just the Northern Hemisphere area…

carrot eater
February 26, 2010 5:37 am

Smokey (21:04:32) :
“The fact that tammy hides his methodologies indicates that he’s all bluster. All show and no go. All hat and no cattle. That dog won’t hunt. ”
Nothing is hidden. He’s described everything he did in pretty close detail. Anybody could program what he did, for themselves. Just as anybody could pretty well emulate what GISS does, for themselves, just by reading their papers. Which is what Tamino did, except he made a few changes, which he specifically mentions.
This is half the point of the exercise – you can do this stuff for yourself. Tamino did it for himself. It took him a month, presumably working on it in his spare time, but he did it. So it’s doable, even if it isn’t your day job. So if you make a claim, like those in the SPPI report, then you should also put the work in, and find out if the claim has any merit. Without doing the analysis, it’s a little empty.

February 26, 2010 5:40 am

E.M. Smith,
I’m curious about your use of existence proofs. Such a logical proof would seem to be applicable to the question “Can a warming bias exist if a reporting station is dropped from a data set?” but not to the question of whether such a bias does, in actuality, exist. The question at hand is not a logical one, but a statistical one.
He shows that two sets of over averaged data match in one period of time then asserts this means they must match in another period of time (when one of them is missing).
“Assertion” is, again, a term from predicate logic. The data presented by Tamino show that the trend up to 1990 without the missing stations is the same as the trend with those stations included. He also showed that NASA’s adjustments did not increase the warming trend. His methods may be flawed, and when/if he makes his results and calculations public they can be evaluated. But it’s wrong to say Tamino merely made an assertion.
I am not sure why you have such a tone of being put-upon, or why the word “demand” so frequently appears in inverted commas. No one has “demanded” anything. The only person who suggested you weigh in was Anthony Watts, and I feel sure that it was a request, not a demand. Perhaps the word “demand” appeared in your personal correspondence. It is nowhere in this thread, except in your comments.
Certainly, those that say Tamino’s analysis was flawed have a responsibility to show how he erred; those, like me and you, who have either not enough time or not enough expertise – or neither – can’t really make any judgment, can we?

February 26, 2010 6:08 am

Only by publicly posting his complete code & methodology, and anything else requested by skeptical scientists, can Tammy regain credibility.
That’s how the scientific method works. Wake me when s/he provides full and complete transparency, openness and cooperation.

Steve Keohane
February 26, 2010 6:25 am

E.M.Smith (22:30:42) : neither cast ye your pearls before trolls, lest they trample them under their feet, and turn again and rend you

Pamela Gray
February 26, 2010 6:27 am

Folks, one of the best ways for scientists, mathematicians, etc to demonstrate whether or not a hypothesis has legs (what I call substantiation research as opposed to original research) is to “do it a different way”, as well as repeat it. Both repetition and coming to the hypothesis from another angle are standard methods of developing hypotheses into theories.
Repetition: The standard form is to repeat exactly what was done but on a grander scale, IE with more subjects, soil plots, more sensors, improved sensors, etc. Any mathematician or scientist who has been in the trenches doing research understands this must be done. You can’t skip this test. And it is done best by a fresh set of eyes. To do this, all data and methods must be readily available. If you are not willing to do this, be prepared to have your hypothesis debunked in the second form of substantiation research:
Robustness (IE same hypothesis, different angle): The standard form is to study the problem from a different angle, using different experimental methods and analysis, and discover that the hypothesis is robust and powerful in other areas as well. For example if the CO2 theory not only increases temps, it should also increase water vapor absorption and re-radiation. So analyze water vapor absorption and/or OLR. If the CO2 hypothesis is both powerful and robust, it will affect other areas as well.
In my study, others had done the original research in several labs. But we wanted to find out if high frequency tone bursts were differentiated by the auditory brainstem in waves I as well as waves III and IV (these are labels for major synaptic junctions). We did indeed find this to be true. But just to be sure, the lab repeated the experiment with more subjects. Hypothesis confirmed. It was and is a stable and robust hypothesis. The auditory brainstem response is capable of differentiating high frequencies (stable) at its earliest synaptic junction (robust) as well as at later junctions.
So Tamino, put it all out there. Your research will be vastly improved, as will your standing in the blogosphere science community, if you are willing to put it out there.

kim
February 26, 2010 6:27 am

Ian Jolliffe.
=====

kim
February 26, 2010 6:30 am

His bias makes it seem as if he’s wearing blinders, but really, he just sees as through a glass darkly.
==================

carrot eater
February 26, 2010 6:35 am

Smokey (06:08:21) :
The complete methodology is posted there. When there was something left ambiguous about what he did, I asked and he answered. He says he’ll include his code in the supplementary info of whatever paper he writes, so we’ll see about that then.
But you’re gravely mistaken about the scientific method. If I describe what I did, then that is enough. You don’t need all my spreadsheets and code and scribbled notes to read, understand, verify or build upon my work. It’s generally better if you do all those things for yourself, as there’s then a higher chance of any errors being caught (or, demonstrating that the initial description wasn’t good enough).
As I said – Tamino started from scratch, and was able to build a program that roughly emulates (with some differences) what GISS does. That’s half the point – you can do this yourself, and you don’t need to copy somebody else’s code in order to do so.
The other point is this: The SPPI report says, “Calculating the average temperatures this way would ensure that the mean global surface temperature for each month and year would show a false-positive temperature anomaly – a bogus warming.” This is a strong statement. Where is the analysis that shows this to be true? Is it in the SPPI report?
Zeke Hausfather (17:38:49) :
Exactly. Unless somebody at the WMO, NCDC or the collection of the national met services knows how to time travel, the original assertion seems more than a little weak.

DirkH
February 26, 2010 6:37 am

“carrot eater (05:37:16) :
[…]
Just as anybody could pretty well emulate what GISS does, for themselves, just by reading their papers. ”
Now, maybe carrot eater has never written a program so he just can’t know, but let me assure the readership that it is impossible to replicate the exact behaviour of a program including all its bugs and side effects just from a description of the way it *should* work or the observation of a test run of the original program.
So, carrot eater is wrong here. 100% certified wrong.

Pamela Gray
February 26, 2010 7:03 am

I agree DirkH. Confirmation research is what it means. Same data set. Same code. Same analysis. But fresh pair of eyes. And if you are good, fresh interpretation and new and different angles to propose for further study into the robustness of the findings and/or technique. Skip this step at your peril. No drug company would EVER skip this step.

February 26, 2010 7:05 am

carrot eater (06:35:01) :
“The complete methodology is posted there… [Tammy] says he’ll include his code in the supplementary info of whatever paper he writes, so we’ll see about that then.”
Tammy – and you – can say anything. Either the complete methodology leading to his conclusions is posted, or it’s not.
In this case it’s not.

carrot eater
February 26, 2010 7:11 am

DirkH (06:37:53) :
Dirk, I can assure you I’ve written more than a couple programs. We’re not talking about replicating every behavior to the 10th digit. That isn’t what reproducibility is about. We’re talking about getting basically the same results. And this is exactly what Tamino has just demonstrated. He started from scratch, and even did a bunch of things differently from what GISS does, and still got the same basic results. If I were him, I’d go back and also write a version more faithful to the GISS methods, but that’s up to him.
This is just how things work in science. If somebody describes what calculations they did, I should be able to go home and try it for myself. Without copying his code. If I get basically the same results, all is well. If I can’t, then I ask a couple questions. Maybe that will uncover that one of us made some error in the programming, or math, or physics, or whatever.

carrot eater
February 26, 2010 7:17 am

Smokey (07:05:19) :
You can also just say anything. Can you back it up? What major description of his methodology is missing, from his posts over the last month?
If you sit down and implement what he did for yourself, you’ll have some minor differences. But all the important bits are described there. Unless one of you make a basic error, you should be able to get consistent results.
The processing that GISS does is really quite simple, people. You can make your own version for yourself, just as Tamino did (with a few modifications). It won’t be exactly the same, but that isn’t the point. The point is, can you start with the same sort of data, apply the same sorts of processing steps, and get consistent results?

A C Osborn
February 26, 2010 8:18 am

carrot eater, Paul Daniel Ash, sturat: E M Smith doesn’t need to do the work.
This Post http://wattsupwiththat.com/2010/02/26/a-new-paper-comparing-ncdc-rural-and-urban-us-surface-temperature-data/
completely contradicts Tamino’s work, and I know who I would rather believe.
OK before you say it, it only covers 48 Sites, but the same algorithms are applied everywhere else by the NCDC.

February 26, 2010 8:20 am

Only by publicly posting his complete code & methodology, and anything else requested by skeptical scientists, can Tammy regain credibility.
That’s how the scientific method works.

Well, he’s said (http://tamino.wordpress.com/2010/02/25/show-and-tell/#comment-39877) he’ll make the code and results available. We’ll have to wait and see before concluding it’s an empty promise.
I don’t know where “anything else requested” fits into the scientific method; that sounds a bit more like the “demands” E.M. Smith finds so offensive.
The central point carrot eater made is a good one, though: there’s nothing stopping anyone who wants to from carrying out an analysis on their own. The question at hand is not Tamino per se, it is the validity of the temperature records. The onus is certainly on those who assert that there is a warming bias to show, not merely assert, that it exists. That is how the scientific method works.

February 26, 2010 8:41 am

E M Smith doesn’t need to do the work.
No one is saying he – or anyone – needs to do anything. The observation that assertions need to be backed up by evidence to be taken seriously is uncontroversial in the extreme.
before you say it, it only covers 48 Sites
Thank you. So you see the problem.
I know who I would rather believe.
Belief and science are completely orthogonal. This is another self-evident, uncontroversial premise.

Steve Keohane
February 26, 2010 8:45 am

E.M.Smith: (Or you can pay my billing rate. $100 / hour for commercial operations. $200 / hour for “Climate Stability Deniers”. Discounts available for bulk purchase or for Friends of Anthony. And move to the head of the priority queue.)
You need the caveat of adjusting price based on attitude!
carrot eater (19:32:50) : This is saying that simply removing those stations, in itself, would ‘ensure’ a spurious warming, as the removed stations tended to be from ‘cooler’ locations. What Tamino has done shows that simply dropping those stations didn’t, in itself, ‘ensure’ any warming.
So this is just a coincidental correlation, similar to that of CO2 and Temperature, eh?
http://i27.tinypic.com/14b6tqo.jpg

carrot eater
February 26, 2010 8:48 am

If anybody really wants code to play with, instead of working it out for themselves, the ccc people have done their own version of the work. This is a true GISS emulation, with none of the differences from GISS that Tamino has. It’s global, not just NH.
http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
Paul Daniel Ash (08:20:42) :
Yes. The original SPPI document made a strong assertion. Forget about providing code; I cannot even find an analysis that backs up that assertion. This is why that document is attracting criticism.

1DandyTroll
February 26, 2010 9:06 am

@Paul Daniel Ash ‘The central point carrot eater made is a good one, though: there’s nothing stopping anyone who wants to from carrying out an analysis on their own. The question at hand is not Tamino per se, it is the validity of the temperature records.’
The devourer of carrots also stated that all the information was there for anyone to emulate the flute and GISS even. Point is, the point carrot makes ain’t a good one. The little flute thinks hi’self debunked something, on his blog, but doesn’t want to show the implementation of the equations, hence he begs the reader to have faith in his words instead of being able to judge for oneself. That’s not science. Neither is it any point to believe that two different peoples coding in different space and time will yield the same implementation of equations, but carrot isn’t in to computers like that so he don’t know better, but the little flute should’ve, with all that he has claimed to be.
The point is for everyone to see and be able to judge for themselves. That the flute and carrot don’t have a problem is no wonder, since neither of them are scientists, but believers who seem to not mind a one-sided scientific process, no peer review please, and don’t really look at my kind of raw data.

carrot eater
February 26, 2010 9:46 am

1DandyTroll (09:06:26) :
What equations could you possibly want?
The only equations he’d need to provide were already given here, for how he combined the stations. He needed to give them, because he’s doing it differently from anybody else.
http://tamino.wordpress.com/2010/02/08/combining-stations/
Everything else he did, you can figure out for yourself. Just as he figured it out for himself, instead of copying and pasting GISS code. If there’s something else you don’t understand, you need to build up a bit of background knowledge by reading previous papers on the topic. If as you go along, there’s something major that you feel he didn’t describe, then you might ask him.
But in any case, the clear climate code guys did their own version, and provided the code. So perhaps we could get over the red herring of the code.
Steve Keohane (08:45:42) :
You seriously think a simple average of absolute temperatures means anything at all? No, it doesn’t. It’s meaningless.

February 26, 2010 10:22 am

The little flute thinks hi’self debunked something, on his blog, but doesn’t want to show the implementation of the equations, hence he begs the reader to have faith in his words instead of being able to judge for one self.
This is the strangest reasoning, even separate from the Br’er Rabbit phrasing. Just using Tamino’s code to get Tamino’s results would show nothing, other than that Tamino’s code produces Tamino’s results. Doing one’s own analysis would be replicating, rather than merely repeating, the work.
And as carrot eater showed, downloadable code is available now for an analysis using GIStemp that gives the same results as Tamino’s: just go to http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
In any regard, Tamino has – as has been repeatedly noted – announced that he will release the R code he used. Skepticism is warranted, even suspicion, talk is cheap, etc. But the blithe assertion that Tamino is hiding his code is unwarranted at this point.

A C Osborn
February 26, 2010 11:49 am

Paul Daniel Ash
carrot eater
Perhaps you can explain the logic behind this description
Each new station is offset so that it has the same average value during the time of overlap with the reference.
Can you tell me what the “Offset” is and how you arrive at “the same average value” for different stations.
Neither of you have responded to my previous post about this thread
http://wattsupwiththat.com/2010/02/26/a-new-paper-comparing-ncdc-rural-and-urban-us-surface-temperature-data/
Which just slightly contradicts the work being done by Tamino.

carrot eater
February 26, 2010 12:17 pm

A C Osborn (11:49:46) :
First off, that other thread has little to nothing to do with what Tamino is showing. He’s looking at the effects of station dropoffs and GISS adjustments, not the effect of UHI on USHCN stations. That other thread has problems of its own (why only 48 stations?), but that’s a separate issue.
The offset is referring to a method of combining anomalies. In climate, you work with anomalies, not absolute temperatures. So what’s important is how a station changes over time, not how hot or cold it is in absolute terms. So when combining stations to get an average at a certain location, you want to combine them in a way such that you combine the trends (warming or cooling). One way is to use these offsets. This is illustrated clearly in Figure 5 in Hansen/Lebedeff (1987), as well as the equations on the page before. http://pubs.giss.nasa.gov/abstracts/1987/Hansen_Lebedeff.html
Tamino took this method, and changed it by changing how the offsets are calculated. In his way, it doesn’t matter which station you start with. In GISS’s way, it matters slightly which one you start with.
These descriptions make sense, if you take the time to learn about the field by reading the literature. I don’t know anything about medicine; I would not expect to understand a random paper from JAMA if I just picked it up. Same goes with climate; once you educate yourself, then things like that will have meaning.
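[Editor’s note: the offset method described in the comment above can be sketched in a few lines. This is a minimal illustration of the reference-station idea from Hansen/Lebedeff (1987), with invented station values; it is not GISS’s or Tamino’s actual code.]

```python
# Minimal sketch of the reference-station "offset" method: shift a new
# station so its mean over the overlap with the reference matches the
# reference mean there, then average the two. Station values invented.

def combine_stations(reference, new):
    """Each series is a dict mapping year -> temperature (deg C)."""
    overlap = sorted(set(reference) & set(new))
    if not overlap:
        raise ValueError("no overlapping years to compute an offset")
    # Offset chosen so the two series agree on average during the overlap.
    offset = (sum(reference[y] for y in overlap) / len(overlap)
              - sum(new[y] for y in overlap) / len(overlap))
    combined = {}
    for year in set(reference) | set(new):
        values = []
        if year in reference:
            values.append(reference[year])
        if year in new:
            values.append(new[year] + offset)
        combined[year] = sum(values) / len(values)
    return combined

# Two hypothetical stations with the same trend but different absolute levels.
a = {1950: 10.0, 1951: 10.2, 1952: 10.4}
b = {1951: 15.2, 1952: 15.4, 1953: 15.6}
merged = combine_stations(a, b)  # trend continues smoothly past 1952
```

Note that the combined series preserves the common warming trend rather than jumping when the warmer station enters, which is the point of using offsets instead of averaging absolute temperatures.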

February 26, 2010 12:41 pm

Perhaps you can explain the logic behind this description
Each new station is offset so that it has the same average value during the time of overlap with the reference.
Can you tell me what the “Offset” is and how you arrive at “the same average value” for different stations.

As far as “logic” goes, carrot eater understands this much better than I do, but in case she/he is gone and not coming back I will take a stab: all the stations in a particular grid don’t necessarily have readings for each given time period. Computing offsets is a way of combining station data in a way that temperature readings can be averaged for that grid.
Neither of you have responded to my previous post
Look again.

February 26, 2010 12:43 pm

carrot eater (12:17:52) :
I thought that Tamino had combined raw data for his analysis, not anomalies. Sheesh, I need to go back to school…

A C Osborn
February 26, 2010 12:53 pm

carrot eater (12:17:52) :
Sorry, Tamino is working with Absolute temperatures and still using Offsets.
Doesn’t Tamino get his Data from NCDC?

carrot eater
February 26, 2010 1:19 pm

Paul Daniel Ash (12:43:56) :
Tamino started with v2.mean. This holds raw data.
By raw, it is this: somebody at the weather service of the relevant country took the max and min temperatures for all the days of the month and found a mean temperature for the month. So by ‘raw’, it’s a monthly average mean temperature. Somebody had to do a basic math operation to get the number. They could conceivably mess up their addition and division, so that is a possible source of error.
From this raw data, you can then calculate anomalies using a variety of methods. The method Tamino used is described on his blog in the page I linked above. GISS, CRU and NCDC each use their own method, and each is different from Tamino’s.
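[Editor’s note: the arithmetic behind a “raw” v2.mean-style value, as described in the comment above, is simple enough to sketch. All numbers below are invented for illustration.]

```python
# A "raw" monthly value as described above: the monthly mean of daily
# (max + min) / 2, plus an anomaly relative to a baseline mean.
# All numbers are invented for illustration.

daily_max = [21.0, 23.5, 22.0, 24.5]   # pretend these cover a month
daily_min = [11.0, 12.5, 12.0, 13.5]

daily_means = [(hi + lo) / 2 for hi, lo in zip(daily_max, daily_min)]
monthly_mean = sum(daily_means) / len(daily_means)  # the raw monthly value

baseline = 16.0                  # hypothetical long-term mean for this month
anomaly = monthly_mean - baseline  # departure from the baseline
```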

DirkH
February 26, 2010 1:47 pm

“carrot eater (07:11:06) :
[…]
Dirk, I can assure you I’ve written more than a couple programs. We’re not talking about replicating every behavior to the 10th digit. That isn’t what reproducibility is about. We’re talking about getting basically the same results.”
Talk about error propagation. Talk about weird homogenization algorithms that take temperatures from the Amazon and shove them over to Bolivia. “Basically the same results”????? Are you kidding me????
For very trivial programs you’re right. You’re not right for the complexity we’re talking about.

carrot eater
February 26, 2010 1:53 pm

A C Osborn (12:53:38) :
The very moment you add or subtract an offset, it is no longer an absolute temperature. I would call it an anomaly at that point; it just isn’t centered on zero.
What you do (well, you don’t have to, but GISS and Tamino do) is add whatever offset you need to each station to get them to overlap. See the figure and math in the 1987 paper. Then you can combine them. After you combine them, you can recenter the whole thing on zero.
You can experiment with this yourself. So long as there aren’t any station drops, it is mathematically equivalent to centering each station on zero first, and then combining them.
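That equivalence is easy to check numerically. A toy Python sketch (invented station values, and simple means as offsets rather than the least-squares offsets of the Hansen and Lebedeff 1987 method):

```python
# Two overlapping stations with no gaps; station 2 runs ~5 degrees colder.
st1 = [10.0, 10.5, 11.0, 11.5]
st2 = [5.0, 5.5, 6.0, 6.5]

def mean(xs):
    return sum(xs) / len(xs)

# Method 1: offset station 2 up to station 1's level, average, then recenter.
offset = mean(st1) - mean(st2)
combined = [(a + b + offset) / 2 for a, b in zip(st1, st2)]
recentered = [x - mean(combined) for x in combined]

# Method 2: turn each station into anomalies first, then average.
anom1 = [x - mean(st1) for x in st1]
anom2 = [x - mean(st2) for x in st2]
averaged = [(a + b) / 2 for a, b in zip(anom1, anom2)]

# With no station drops, the two methods agree.
print(all(abs(a - b) < 1e-9 for a, b in zip(recentered, averaged)))
```

With gaps or station drops the two routes can diverge slightly, which is exactly the case the different published methods handle differently.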

carrot eater
February 26, 2010 1:56 pm

DirkH (13:47:00) :
The distance-weighted interpolation that GISS does is conceptually pretty simple. If you wanted to recreate it, you could. Tamino chose not to.
I know the GISS code looks like a mess, but conceptually, it’s actually pretty simple. But the ccc guys will have made it less messy to look at, by the time they’re done.

Stu
February 26, 2010 3:51 pm

DirkH (13:47:00) :
“Talk about error propagation. Talk about weird homogenization algorithms that take temperatures from the Amazon and shove them over to Bolivia. “Basically the same results”????? Are you kidding me????”
Temperature or temperature anomalies? Big difference isn’t it.

carrot eater
February 26, 2010 4:25 pm

Stu (15:51:47) :
I don’t think people realise just how far anomalies can correlate. Further than you’d guess, before looking into it.
If somebody’s worried about Bolivia, then why not go back to the period where Bolivia data are in the GHCN, and see how well they correlate with the neighboring stations? If the answer is not well, then you could argue there is a sampling problem there.

1DandyTroll
February 26, 2010 5:02 pm

@Paul Daniel Ash ‘This is the strangest reasoning, even separate from the Br’er Rabbit phrasing. Just using Tamino’s code to get Tamino’s results would show nothing, other than that Tamino’s code produces Tamino’s results. Doing one’s own analysis would be replicating, rather than merely repeating, the work.’
What it would show is whether the result has any integrity and is not f:ed up due to bugs or, worse, some creative but crappy implementation.
“And as carrot eater showed, downloadable code is available now for an analysis using GIStemp that gives the same results as Tamino’s: just go to http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
Well, happy friggin’ compiling then. Personally I wouldn’t expect anything else, since the flute did what he could to mimic the original result while doing his best to disprove a critical result. He obviously succeeded in mimicking the original result and therefore claims he has now disproved the critical result. So the question remains what exactly he did, and how, so we can judge whether it is valid or not.
“In any regard, Tamino has – as has been repeatedly noted – announced that he will release the R code he used. Skepticism is warranted, even suspicion, talk is cheap, etc. But the blithe assertion that Tamino is hiding his code is unwarranted at this point.”
Unwarranted, really? It seems to have taken some peer-to-peer pressure to drive his ego to the breaking point, but still he chose, for some reason, to try to save his ego by getting published. In what, the official peer-review press? For what? Trying to debunk a critic’s claim that was essentially made on a blog.
Yeah, that’s a rational process… maybe for someone trying to get out from under the rats in a sinking boat, oars and all.

Pamela Gray
February 26, 2010 10:03 pm

I am astonished at the amount of misinformation showing up here. Replication means exactly what it means: duplicating the work, but yes, with a fresh pair of eyes, in a different lab. All else must be under the same conditions, including the code. However, one can analyze the resultant data using different kinds of statistical calculations, graphing it differently, interpreting it differently, and suggesting further areas of study not suggested by the original author. But the bottom line is this: the way the data were obtained originally must be the same in replication in order to prove that the work can be replicated.
There are two reasons for this.
1. One of them is designed to keep us from becoming cheaters: to keep us from fudging the data and not telling, or from doing a secret little trick to the data that no one else knows about. Replication is designed to keep us honest.
2. The other reason is to uncover mistakes the original author didn’t catch. Remember the plot in the movie “Medicine Man”? He could not replicate the initial experiment. So a fresh pair of eyes discovered, while repeating the experiment one last time, that it was the ants in the sugar bowl used for the calibration sugar solution that were the source of the peak he couldn’t reproduce in all his replication experiments.
Publish, then release the code Tamino. It’s just the way research is done. Get used to it or stop playing scientist.

A C Osborn
February 27, 2010 3:39 am

carrot eater (13:53:07) :
You may call it an anomaly at that point, but Tamino calls it a temperature in his description of his latest work; only after adding the offset does he then work out the anomalies.
Thank you for pointing out the link to http://pubs.giss.nasa.gov/abstracts/1987/Hansen_Lebedeff.html
Now I know why E M Smith is so dismissive of the methodology.

carrot eater
February 27, 2010 4:24 am

A C Osborn (03:39:44) :
It doesn’t matter what you call it; the point is, you are not averaging together absolute temperatures. If a station was colder than the others, it isn’t anymore once you add it to the pile. What you do preserve is its trend – whether it was warming or cooling. This understanding alone should tell you that it doesn’t matter if the station you’re adding is hot or cold; it matters what its trend was.
The method works absolutely fine when there are no gaps in data. In fact, pretty much all the methods do. Even with gaps in data, it works well enough. You can play with it yourself, and see what it takes to make it break.
It looks like EM Smith was re-inventing something like the First Difference method (see Peterson et al., J. Geophys. Res. 103: 25,967–25,974 (1998)). Reading the literature before you start working on a topic can save you a lot of trouble, and spare you some blushes. But if he likes that better, you’ll see it won’t really affect the results; you’ll just be using what NOAA uses.
So perhaps you shouldn’t be so dismissive – different methods have strengths and weaknesses, but all roads lead to Rome in the end.
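For reference, the First Difference method mentioned above works roughly like this. A simplified Python sketch with invented annual means; the published method handles missing values and station weighting more carefully:

```python
# Simplified First Difference method: average year-to-year changes
# across stations, then accumulate them into a relative series.
stations = [
    [14.0, 14.2, 14.1, 14.5],   # station A annual means (deg C)
    [9.0, 9.3, 9.2, 9.6],       # station B runs colder, similar trend
]

# First differences per station: T[i+1] - T[i].
diffs = [[s[i + 1] - s[i] for i in range(len(s) - 1)] for s in stations]

# Average the differences across stations for each interval.
avg_diff = [sum(col) / len(col) for col in zip(*diffs)]

# Accumulate back into an anomaly-like series starting at 0.
series = [0.0]
for d in avg_diff:
    series.append(series[-1] + d)
print([round(x, 2) for x in series])
```

Note that the absolute levels of the stations (14 vs. 9 degrees) never enter; only their trends do, which is the point being argued in this thread.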

carrot eater
February 27, 2010 4:34 am

Pamela Gray (22:03:22) :
He’s said he’ll release his code when he publishes. But you don’t need his code to check his work. There is nothing stopping you from writing your own, and seeing if you get the same results. If you or anybody interested starts today, you would probably have it done before his paper is published. That’s the whole point. If Tamino made some little math error someplace, then when you write your own code without looking at his, you probably won’t make that same error.
You don’t even have to follow the exact same methodology as him. If you use a slightly different methodology, and still can make the same conclusion, then you know the conclusion is not that sensitive to processing method. Which gives you confidence that it’s a reliable conclusion. If you have to cherry-pick some peculiar processing method to get the conclusion, then it isn’t a strong conclusion.

kim
February 27, 2010 5:13 am

I’ve got a warrant for the arrest of that code. Just show me where the rascal is hiding.
============================

A C Osborn
February 27, 2010 7:50 am

carrot eater (04:34:12) :
Pamela Gray (22:03:22) :
Has Tamino published a list of the Stations and Grid references for the boxes that he has used?
As I didn’t see it.
He has also used the much-manipulated GHCN data, not the NCDC “Quality Controlled” data, or better still the NCDC “Raw” data.

DirkH
February 27, 2010 8:12 am

The excuses carrot eater brings up for not releasing code are astonishing. Imagine this: Researcher A writes a program, creates a graph with it, proving heating up of the globe. He doesn’t release the program but gives a verbal description of what he thinks his program does. (You wouldn’t believe how often people believe their program does a certain thing when it in fact does something completely different; actually this is a normal part of developing a program – finding and eliminating these discrepancies)
Researcher B replicates the program using the carrot-eater-approach and the verbal description given by Researcher A. He gets a different graph, showing cooling of the globe, publishes it and what now? (He also doesn’t release his code; it’s not necessary in carrot-eater-science. Verbal descriptions of what the program is supposed to do are accepted as sufficient by all researchers in carrot-eater-world.)
Do we conclude that Researcher A has lied? That Researcher B has lied? That one of them has made a mistake? Both?
Now, wouldn’t it be the obvious solution to treat the programs as part of the published result?
A program, BTW, is not functionally different from a mathematical proof or a mathematical argument. Following carrot eater’s logic, the mathematicians in his world would write papers with conclusions like “I just proved that the speed of light in vacuum is constant, but I can’t give you the equations because they’re mine.”
That would be a pretty bizarre world – or is it an accurate description of the reality of climate science?

carrot eater
February 27, 2010 8:40 am

DirkH (08:12:42) :
First off, Tamino will release his code when he publishes.
Second, it’s all a red herring. The ccc guys released their code, to the same end. Now what, Dirk? You’ve got the code.
Then, this is indeed how science works. I publish a paper, saying what I did. I should give enough of a description for somebody to recreate and get consistent results, if they were curious. I’ll cite other previous papers as appropriate.
So researcher B is interested, and tries the same thing. If it’s experimental, he’ll do his own experiments. If it’s mathematical processing like this, he’ll write his own code. If he gets consistent results, great. Maybe he’ll go on to improve on my method. If not, then he looks to see if he made a mistake. Then re-reads my paper to make sure he didn’t miss something. Maybe it becomes clear that I made a mistake. If he still can’t figure out the reason for the divergence, then he might send me an email. Maybe it gets to the point where we compare code. It’ll get sorted out in the end.
But the point you’re missing is that what Tamino did is quite simple, assuming you have a little bit of background information. If you think he maybe messed up, then try it for yourself. If you don’t have the background information, then read 3-4 papers and the documentation for GHCN, and you’ll be set. You didn’t have to wait for Tamino to do this; you could have done it yourself before he even started it.
And the critical point goes back to the original SPPI document. It says:
“Calculating the average temperatures this way would ensure that the mean global surface temperature for each month and year would show a false-positive temperature anomaly – a bogus warming.”
What is the basis for this statement? All I see is a bunch of absolute temperatures averaged together. (Please note that I’m not asking for the code anybody used to do that; I can average numbers together for myself). But a bunch of absolute temperatures just averaged together is absolutely meaningless. What you need is something like what Tamino, or the ccc guys, did. Where is it?
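The point about averaging absolute temperatures is easy to demonstrate with a toy sketch (invented numbers): dropping a cold station mid-record produces a spurious jump in a naive average of absolute temperatures, but not in an average of each station’s anomalies.

```python
# Two stations, neither warming. The cold one stops reporting after year 2.
warm = [20.0, 20.0, 20.0, 20.0]
cold = [0.0, 0.0, None, None]   # dropped mid-record

def mean(xs):
    return sum(xs) / len(xs)

# Naive average of absolute temperatures: jumps from 10 to 20 when
# the cold station drops out, a bogus "warming".
absolute = [mean([t for t in pair if t is not None]) for pair in zip(warm, cold)]

# Anomaly average: each station relative to its own baseline (years 1-2),
# so the drop causes no jump.
def anomalies(xs):
    base = mean([t for t in xs[:2] if t is not None])
    return [None if t is None else t - base for t in xs]

anoms = [anomalies(s) for s in (warm, cold)]
anomaly_avg = [mean([a for a in col if a is not None]) for col in zip(*anoms)]
print(absolute)      # jumps from 10.0 to 20.0
print(anomaly_avg)   # stays flat at 0.0
```

This is why an analysis of the station-drop claim has to be done on anomalies, as Tamino and the ccc project did, rather than on raw absolute averages.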

A C Osborn
February 27, 2010 10:48 am

carrot eater (08:40:24) :
Oh dear, wrong again. “The ccc guys released their code” – they may have, but it is no longer available, as they are conveniently rewriting it in Python. The links to the code lead nowhere.
Plus of course they are using the totally corrupted GISS data, or haven’t you been reading the work done by other countries using their original data to show that GISS’s representations of their temperatures are completely wrong?
See the work by Finland, Russia, Australia and New Zealand; even the UK is having to review its own data and is calling for a complete re-appraisal of temperature data.

carrot eater
February 27, 2010 11:37 am

A C Osborn (10:48:44) :
What are you talking about? The ccc code for splitting between dropped and not-dropped stations is right there. The current release of ccc-gistemp is up there. The link leads right to what you want, if you want to test this on ccc-gistemp. It wouldn’t be my choice, but it’s all right there.
Tamino used the GHCN raw. No adjustments. Nothing ‘corrupted’. Just saying things doesn’t make them so. The only adjustment GISS does is a crude one for UHI, and that has nothing to do with the station drop issue.
Again, you don’t have to copy Tamino’s code to do this. Did Tamino have to copy anybody’s code? Try writing something for yourself. If you doubt Tamino’s or the ccc’s result, that’s what you should do.

A C Osborn
February 27, 2010 12:32 pm

carrot eater (11:37:02) : please provide a link to the code where it is visible, as the link that they provide does not take you to the code: http://code.google.com/p/ccc-gistemp/.
CCC used GISS, I didn’t say Tamino used it.

carrot eater
February 27, 2010 2:02 pm

A C Osborn (12:32:33) :
GISS starts with GHCN raw data for everywhere except the US lower 48. The only adjustment GISS makes is the UHI one. But in any case, what’s being tested here is whether the stations that were dropped in 1990 were somehow different, such that dropping them gave a warming bias.
Did you not follow the link above? The description is here,
http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
the code to split the input is here
http://code.google.com/p/ccc-gistemp/source/browse/trunk/tool/v2split.py
and the latest version of ccc-gistemp is here
http://code.google.com/p/ccc-gistemp/downloads/list
Even the page you linked has links to the download pages. I don’t use ccc, so any questions specific to ccc, direct to them, not me.

1DandyTroll
February 27, 2010 5:53 pm

Eater…. of Carrots
“What equations could you possibly want?”
I’ve been thinking long and hard about this maybe-simple question. To me it isn’t really a simple question, since I, like I imagine most teachers, actually want to see how the simpleton, er, pupil, solved the problem.
But by all means, believe what ever people tell you to believe. After all you are, I hope, free enough to believe what ever you want to believe in. Personally I rather want to know the details of what’s what.
With that said, can you produce the equations the little flute used? That you can’t produce the actual implementation is obvious, but maybe you do know all the equations used? How about which equation he used in the selection process? Do you actually know which stations he used and which he disregarded?
“Everything else he did, you can figure out for yourself.”
For all the special person I am, I’m not supposed to need to figure everything out for myself. If some fool claims he owns the moon he better be able to f*cking prove it.

carrot eater
February 27, 2010 6:38 pm

1DandyTroll (17:53:25) :
Sit and wait for his code then. In the meantime, you can read the ccc’s code, which of course uses the existing ccc-gistemp code. Or Zeke Hausfather’s; he’s done his own version now, with the code available. I don’t care. The conclusion on this one isn’t going to change with minor changes in implementation.
Forget Tamino. Suppose you had just read the SPPI report. Did that prove its claim? How would you approach the question of seeing whether the station drop claim had any merit?

Smokey
February 27, 2010 7:50 pm

carrot eater (18:38:45):
“Forget Tamino. Suppose you had just read the SPPI report. Did that prove its claim?”
Earth to carrot eater: yes, we would like to forget that Tamino, who hides his methods, is exempt from scrutiny… if you ever stopped being his publicity agent.
And keep in mind that skeptics have nothing to prove. The promoters of any new hypothesis have the burden of showing that it explains reality better than the long held theory of natural climate variability. So far, CO2=CAGW fails; natural variability prevails.

A C Osborn
February 28, 2010 5:18 am

carrot eater (14:02:30) :
The whole point is that by using GISS the damage is already done.
Raw values must be used, otherwise the “FIX” is already in: lowering old readings, raising new ones, adding in missing data points or deleting data points to suit the required “Trend”.
You obviously do not believe all the evidence that other scientists around the world have been submitting, or you would not gloss over starting the analysis process with garbage.
Have you read the Finnish and Russian reports?
Do you believe them?

A C Osborn
February 28, 2010 5:19 am

carrot eater (14:02:30) :
Thank you for finding the CCC Code for me.

kadaka
February 28, 2010 6:07 am

So to summarize:
GISS adds 2 +2, gets 5.
Tamino adds 2 + 2, gets 5.
carrot eater says Tamino used a different method, got the same result, therefore 2 + 2 equals 5 is confirmed. And since it was the same result as GISS, Tamino’s work is confirmed. You don’t have to know exactly how Tamino got that result, take a month yourself and see if you too can get 2 + 2 to equal 5. For further confirmation, the Clear Climate Code people followed the method used for GISS and their code also gets 2 + 2 to equal 5.
There is a consensus, the science is settled, the debate is over. 2+2=5 and that’s that.

carrot eater
February 28, 2010 6:13 am

Smokey (19:50:22) :
Like I said, if you really want code, there are now two groups who’ve put theirs out, for this exact problem. Or you could wait for Tamino’s. Or you could try to work it out for yourself. I don’t care, but just complaining about not having one person’s code isn’t going to get you anywhere.
“And keep in mind that skeptics have nothing to prove. ”
Really? They can just say anything they want, without having anything to back it up? A specific claim was made. They don’t have to prove it? Interesting.

carrot eater
February 28, 2010 6:39 am

kadaka (06:07:30) :
You think everybody’s getting a wrong result, just because you don’t like the result?
The question here is, were the stations that were dropped circa 1990 somehow different, such that simply dropping would cause a spurious warming?
You are convinced that this statement is true? Why? What have you seen, to make you think that?

carrot eater
February 28, 2010 6:44 am

A C Osborn (05:18:55) :
GISS starts with raw data for everywhere except the US. Do you realise that? You can then study the UHI adjustment they make.

kadaka
February 28, 2010 8:07 am

carrot eater (06:39:23) :
You think everybody’s getting a wrong result, just because you don’t like the result?

Not at all.
Repeating an error is not confirmation there is no error.
Besides, “everybody” is not getting the same result anyway.
And, if the original data is bad, then the result and the methods used to get the result do not matter anyway.
That’s that. Have a nice day!

kadaka
February 28, 2010 8:42 am

Curious.

[italics added]
carrot eater (18:38:45) :
1DandyTroll (17:53:25) :
Sit and wait for his code then. In the meantime, you can read the ccc’s code, which of course uses the existing ccc-gistemp code. Or Zeke Hausfather’s; he’s done his own version now, with the code available. (…)

Yet earlier in the thread:

[italics added]
Zeke Hausfather (12:22:17) :
Anthony,
In this case replicating what Tamino did wouldn’t be that difficult. There are two primary components that you would need to figure out:
1) How to undertake the spatial weighting. While this is a bit beyond my programming ability (hence my interest in Tamino’s script), (…)

So, is carrot eater saying that Zeke Hausfather has now done something, as of the 27th of February, that Zeke himself said he could not do back on the 25th?
Wow, that Zeke fellow sure can learn and code in a hurry!
Anyone got a link to Mr. Hausfather’s original code?

kadaka
February 28, 2010 11:33 am

Interesting.

carrot eater (06:13:39) :
Smokey (19:50:22) :
Like I said, if you really want code, there are now two groups who’ve put theirs out, for this exact problem. Or you could wait for Tamino’s. Or you could try to work it out for yourself. I don’t care, but just complaining about not having one person’s code isn’t going to get you anywhere.

Hansen releases Method A (Hansen and Lebedeff 1987) with details.
GISS uses Method A, releases “GISTemp” code.
Clear Climate Code uses Method A, releases ccc-gistemp code (not yet finished).
Tamino uses Method B, does not release Method B with details, does not release code.
Proposed: It does not get you anywhere to complain about not having code that uses Method B (nor about not even knowing what Method B is), as you already have two sources of Method A code, and you could always make up your own Method C and your own code for that.
Meanwhile, just sit back and accept that Method B yields valid results according to its code.
O-kay… Moving along…

SteveGinIL
March 1, 2010 1:32 am

The more this goes on, the more it seems the Hockey Team simply wasn’t very good at the science, AND that they knew enough about their inadequacies to be insecure about it.
Back in the 1960s there was a book out named The Peter Principle. The Peter Principle stated as its observation that people are promoted to their level of incompetence.
I don’t feel sorry for these losers, because they were such blowhards, and who can feel sorry for jagoffs? It is such a good thing that – inch by inch – the adults are beginning to have their say in this.
What will come of it all?
The IPCC will eventually be disbanded, because the numbers underlying all their arguments are, piece by piece, being shown to be just flat wrong. The scientists were so in love with being the center of attention (remember that before 1988 climatology was just a scientific backwater) that they fudged – consciously or unconsciously (and does it matter which?) – to keep the limelight on themselves, and the free trips, and the treatment as royalty.
In their insecurity, the rats were not about to be cornered; they ad hominem attacked anyone who threatened to put them in that corner.
GOD BLESS WHOEVER LEAKED THE EMAILS, for the rats are being shown for what they are – not very good at statistics and programming.
WHY they didn’t have a statistician on staff is beyond me. No one was there to correct them? How does that happen?
And why not true programmers? Some need to keep it all close to the vest? That is exactly what people do who are insecure about their work and their status – keep everyone else from seeing them for the frauds that they feel they are.

Nick Barnes
March 1, 2010 2:29 am

@A C Osborn: I am genuinely concerned. What could we, Clear Climate Code, do to make finding the code easier?

Nick Barnes
March 1, 2010 4:35 am

Various people have asserted that ccc-gistemp is not finished. This is true in the same sense that, say, Microsoft Windows, or Firefox, is not finished. Since our goal is code clarity, and there are ways in which our code is not perfectly clear, the project is not finished. However, we have published releases which consist entirely of our code, which reimplement the GISTEMP algorithm entirely, and which match the GISTEMP results.
The most recent release is 0.3.0. Clarification work since 0.3.0 can be seen here. This ongoing work includes changes to eliminate all intermediate rounding and truncation of data, and to expose and document all numerical parameters in a separate module. There will be a release 0.4.0 at some point in the next couple of weeks, to package up these changes.

kadaka
March 1, 2010 9:22 am

Nick Barnes (04:35:40) :
Various people have asserted that ccc-gistemp is not finished. This is true in the same sense that, say, Microsoft Windows, or Firefox, is not finished. (…)

Both of which use release numbers starting with an integer greater than zero. Thus it has not been “officially” signaled that ccc-gistemp is a finished product, as has been done for both Windows and Firefox. They have entered the normal cycle of ongoing revisions and modifications as needed; there are versions you can point to and say they are finished products, such as Win 95. This has not yet happened with ccc-gistemp. Thus ccc-gistemp is not finished in the same sense that Windows or Firefox is not finished.

kadaka
March 1, 2010 10:53 am

Re: kadaka (09:22:23) :
Should be “Thus ccc-gistemp is not not finished in the same sense that Windows or Firefox is not finished.”
But, heck with it, time to stop abusing the English language. ccc-gistemp does not have a finished version, Windows and Firefox have finished versions. ‘Nuff said.

March 1, 2010 10:56 am

kadaka: Both of which use release numbers starting with an integer greater than zero. Thus it has not been “officially” signaled that ccc-gistemp is a finished product, which has been done for both Windows and Firefox.
Don’t pay too much attention to release numbers. Years ago I worked on a project which used the names of fish for releases (Whitebait, Halibut, and so on), which avoided all this reading of release number tea-leaves. I run the project, and I can tell you “officially” that release 0.3.0 is more finished than a lot of software ever gets. It is being continually improved – 0.4.0 will be quite a bit better than 0.3.0; I have a notion of what release 1.0.0 will be like, and a vaguer notion of when it might be produced – but as it stands it is already pretty good, and it is certainly “finished” (in the sense that it is a complete implementation of the GISTEMP algorithm in freshly-written Python).
If you really want to understand the GISTEMP algorithm, I recommend that you download ccc-gistemp 0.3.0 (or the current sources, or wait for 0.4.0). If you look at that code and find it is unclear, that is a bug, and we would welcome your bug report.
You do really want to understand the GISTEMP algorithm, right?

Paul Daniel Ash
March 1, 2010 11:24 am

Smokey (19:50:22) :
And keep in mind that skeptics have nothing to prove. The promoters of any new hypothesis have the burden
Uh oh, Anthony, one of your readers is saying you have the burden of proving that “instrumental temperature data for the pre-satellite era (1850-1980) have been so widely, systematically, and unidirectionally tampered with that it cannot be credibly asserted there has been any significant “global warming” in the 20th century.”
Smokey says you better get busy on proving that.

March 1, 2010 11:34 am

kadaka (08:42:28) :
Anyone got a link to Mr. Hausfather’s original code?
http://drop.io/0yhqyon
Wow, that Zeke fellow sure can learn and code in a hurry!
He said it was “a bit” beyond his programming ability, but he also said it wasn’t “that difficult.” It’s an example of what you can do if you have some skill and a desire to learn.
More about his method and his results here:
http://rankexploits.com/musings/2010/a-simple-model-for-spatially-weighted-temp-analysis/

carrot eater
March 1, 2010 1:06 pm

kadaka (08:42:28) :
Yup, it took Zeke all of one or two days to figure out how to do spatial averages. He was unnecessarily intimidated by it at first, and needed a pointer on how to work out the surface area of a grid box given the coordinates of the corners. This stuff isn’t that hard.
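The pointer in question is standard spherical geometry: the area of a latitude-longitude box is proportional to its longitude span times the difference of the sines of its bounding latitudes. A minimal Python sketch (unit sphere, hypothetical 10-degree boxes):

```python
import math

def grid_box_area(lat1, lat2, lon1, lon2):
    """Area of a lat/lon box on a unit sphere (degrees in, steradians out)."""
    dlon = math.radians(lon2 - lon1)
    return dlon * (math.sin(math.radians(lat2)) - math.sin(math.radians(lat1)))

# A 10x10 degree box at the equator vs. one touching the pole:
equator = grid_box_area(0, 10, 0, 10)
polar = grid_box_area(80, 90, 0, 10)
print(round(equator / polar, 1))   # the equatorial box is ~11x larger
```

These areas are what get used as weights when grid-box anomalies are averaged into a global mean, so that the shrinking boxes near the poles don’t get counted as heavily as the big ones at the equator.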

carrot eater
March 1, 2010 1:19 pm

kadaka (08:07:31) :
“Besides, “everybody” is not getting the same result anyway.”
Really? Exactly who has found that the station drops are ensuring a warming bias to the global record? Where is this analysis?
kadaka (11:33:24) :
It really doesn’t much matter how you slice it. The main differences between Tamino and GISS are:
– how the offset is computed when combining anomalies
– the grid box dimensions
– Tamino is not using any adjustments, and is not using USHCN for the US, so he’s using purely raw data everywhere
whereas the ccc guys do just what GISS does. Zeke on the other hand uses yet another method for combining stations (and one that I think needs to be improved, by the way). GISS, CRU and NCDC also go about all these things in their own way.
But the conclusion on the question of station drop is not sensitive to these differences in implementation.
kadaka (10:53:44) :
You’re reduced to arguing about release numbers?

carrot eater
March 1, 2010 1:30 pm

kadaka (11:33:24) :
Oh, and you keep saying the details of Tamino’s method are missing. Yet you can’t identify which.
Anybody with a familiarity with the subject should be able to pretty well reconstruct Tamino’s version, based on what he’s said. In the cases where I was unclear on what he did, I asked and he clarified. He’s said what data he is using. He’s said how he’s combining duplicates. He’s said how he’s combining stations. He’s said what sort of grid he’s using, and how he’s combining those. He might not have said what he did about relatively empty grid boxes; I don’t remember. But again, you don’t even need to use Tamino’s exact version. You could use Zeke’s. You could use the ccc’s. Either way, the conclusion is there.

Smokey
March 2, 2010 9:55 am

Paul Daniel Ash (11:24:21),
You shouldn’t fabricate statements when they’re so easy to check. Who do you think you are, Dan Rather? Here’s your misrepresentation:
“Smokey says you better get busy on proving that.”
Smokey said no such thing, nor was it implied. What I said, verbatim, was, “keep in mind that skeptics have nothing to prove. The promoters of any new hypothesis have the burden of showing that it explains reality better than the long held theory of natural climate variability. So far, CO2=CAGW fails; natural variability prevails.”
Anthony has nothing to prove; as I clearly stated, it is the promoters of the CO2=CAGW hypothesis who have that burden.
By attempting to re-frame the debate in order to score a minor point, you not only lost the point, but by prevaricating you exposed your own mendacity.
Next time, make sure you’re quoting others accurately. It will help you avoid being exposed as a deliberate inventor of fabricated assertions.

carrot eater
March 2, 2010 11:18 am

Smokey (09:55:29) :
No, there is indeed a new hypothesis here: that stations from cold locations were intentionally dropped, and that this would somehow “ensure” a warming bias to global trends.
If something is “ensured”, doesn’t that mean that the people putting forth the hypothesis should have done some analysis to back this up?
If you think the skeptics have nothing to prove, to you that means they can just say whatever they like, without having a firm basis?

Smokey
March 2, 2010 7:42 pm

carrot eater (11:18:43),
There is no new station hypothesis as stated in your first paragraph. That is a fact, not an hypothesis.
Scientific skeptics never have anything to prove, in any discipline. They can certainly offer proof if they like, but it is not required. What is required is that those putting forth a new hypothesis like CAGW have the obligation under the scientific method to fully cooperate with anyone who wants to replicate their experiments or validate their assertions.
If that requires their code and the raw temperatures, and every change made in converting raw to adjusted temps, it is not up to the promoters of the new hypothesis to refuse — without simultaneously discarding the scientific method. All data and methodologies must be scrutinized for error. That is how science arrives at the truth [or as close to the truth as we can currently get].
So no, skeptics have nothing to prove. It is the purveyors of the CAGW hypothesis who have the burden of showing that their model explains reality better than natural climate variability.
So far, they have failed. If there were truth to the CAGW hypothesis, transparency and cooperation with skeptical scientists would immediately rescue it. But the ongoing stonewalling of information means they got nothin’.