New Compendium Paper on Surface Temperature Records

NOTE: An update to the compendium has been posted; it now has bookmarks. Please download again.

I have a new paper out with Joe D’Aleo.

First I want to say that without E.M. Smith, aka “Chiefio” and his astounding work with GISS process analysis, this paper would be far less interesting and insightful. We owe him a huge debt of gratitude. I ask WUWT readers to visit his blog “Musings from the Chiefio” and click the widget in the right sidebar that says “buy me a beer”. Trust me when I say he can really use a few hits in the tip jar more than he needs beer.

[surface temp compendium cover image]

The report is over 100 pages, so if you are on a slow connection, it may take a while.

For the Full Report in PDF Form, please click here or the image above.

As many readers know, there have been a number of interesting analysis posts on surface data that have been on various blogs in the past couple of months. But, they’ve been widely scattered. This document was created to pull that collective body of work together.

Of course there will be those who say "but it is not peer reviewed" in the way formal scientific papers are. But the sections in it were reviewed by thousands of blog readers before being combined into this new document. We welcome constructive feedback on this compendium.

Oh, and I should mention: the word "robust" only appears once, on page 89, and its use is somewhat in jest.

The short read: The surface record is a mess.

tokyoboy
January 27, 2010 4:50 pm

Great work Anthony! Hats off to your truly perseverant efforts.
Just one point. I expected inclusion of more straightforward graphics, like those Peter the 6th Grader and his Papa presented comparing rural and urban temperature trends in a video of 9 Dec 2009, but could not find one, though enough information surrounding that issue is given in your Compendium…

Deech56
January 27, 2010 5:00 pm

RE Jan (12:54:02) :

Just for the fun and for the MJK understanding I made also this 2001-2010 chart
There is the GISTEMP+CO2 slightly ascending trend, the UAH+RSS descending trend and of course the sunspot number descending trend.
I didn't make it up – it's the woodfortrees engine which makes such funny charts… 😉
BTW if you add HadCRUT3 it has the same trend as the satellites…

But are those trends from UAH and RSS significant? I doubt it.
I hope you aren’t trying to make some kind of solar-temp correlation based on only 10 years worth of data.
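A minimal sketch of why a trend over only ~10 years of monthly data is rarely statistically significant (all data below are synthetic, not the actual GISTEMP/UAH/RSS series; the 0.2 C noise level is an assumption for illustration):

```python
# Hedged sketch: ordinary least squares trend and its standard error.
# Synthetic monthly anomalies only -- not real temperature data.
import math
import random

def ols_trend(y):
    """Return (slope, standard error of slope) for y against t = 0..n-1."""
    n = len(y)
    t = list(range(n))
    tbar = sum(t) / n
    ybar = sum(y) / n
    sxx = sum((ti - tbar) ** 2 for ti in t)
    sxy = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    slope = sxy / sxx
    resid = [yi - (ybar + slope * (ti - tbar)) for ti, yi in zip(t, y)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    se = math.sqrt(s2 / sxx)
    return slope, se

random.seed(1)
# 120 monthly anomalies: a tiny trend (0.001 C/month) buried in 0.2 C noise
y = [0.001 * i + random.gauss(0.0, 0.2) for i in range(120)]
slope, se = ols_trend(y)
print(f"slope = {slope:.4f} C/month, t = {slope / se:.2f}")
# |t| below roughly 2 means the trend is indistinguishable from zero
# at about the 95% level (and serial correlation makes it worse).
```

With so few years, the noise term dominates the slope estimate, which is the substance of the "are those trends significant?" objection.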

tokyoboy
January 27, 2010 5:08 pm

Dave McK (17:02:03) :

Yeah, that’s the video I referred to at 16:50:32.

dlr
January 27, 2010 5:12 pm

I think you’ve got a typo on page 7. It says “This divergence is not new and has been growing. NOAA proclaimed June 2008 to be the eighth-warmest for the globe in 129 years. Meanwhile NASA showed it was the 9th-coldest June in the 30 years of its record.”
I think you meant to say “RSS and UAH” not “NASA” in the second sentence.

Deech56
January 27, 2010 5:13 pm

RE boballab (14:10:34) :

luminous beauty (11:57:01) :
Maybe this map from NCDC of the GHCN temperature Anomalies from Jan-Dec 2008 will help:
http://www.ncdc.noaa.gov/img/climate/research/2008/dec/map-land-sfc-mntp-200801-200812-pg.gif

That’s a map of grids, not stations. LB’s question still stands.

Tom in Texas
January 27, 2010 5:17 pm

“Paul K2 (16:04:06) : I am having trouble seeing your point by looking at the graphs at the end of that article…”
I think you are having trouble for several reasons:
First, for some reason, Menne only used 40% of the USHCN stations in his “article”.
Since about 90% of the stations have been surveyed, I’d call this cherry picking #1.
Yes, I know, that’s all the information he was able to purloin at the time.
Second, the graphs start at 1980. USHCN sites were specifically chosen because their datasets extend back to the 1880’s. Why 1980? Cherry pick #2.

Jan
January 27, 2010 5:38 pm

“I hope you aren’t trying to make some kind of solar-temp correlation based on only 10 years worth of data.”
There is always a solar-temp correlation – a causal correlation – the sun is undoubtedly one of the crucial climate drivers, no doubt about it. Or do you contest the assumption that 99.999…% of the radiant energy comes from the sun? Good luck.

Tom Graney
January 27, 2010 5:39 pm

I think the video is actually more persuasive than the paper we're talking about here. Good job.

Phil Clarke
January 27, 2010 5:46 pm

Apart from the worrying lack of understanding of the way global anomalies are calculated – something of a prerequisite, one would have thought – this document is littered with factual errors. Here are three I found on a cursory read-through:
1. The CRU email quote is out of context. To quote John Nielsen-Gammon: "Mann, in his quote, was accusing Steve McIntyre of having little regard for the truth. By taking it out of context, D'Aleo and Watts are intentionally making it look like Mann is admitting to having little regard for the truth."
2. “Ian “Harry” Harris, a programmer at the Climate Research Unit, kept extensive notes of the defects he had found in the data and computer programs that the CRU uses in the compilation of its global mean surface temperature anomaly dataset”. That would be CRU’s flagship product – HADCrut, but Harris was working on upgrading CRU TS 2.1 (to v3.0) a completely different and much less widely-used product.
3.”Jones used data by Wang which Keenan has shown was fabricated.” Keenan did indeed allege scientific fraud by Wang, but Wang was investigated and cleared. In any case the fraud case was about station selection and not data fabrication.
Clarification, retraction and corrections required to be taken seriously.

David
January 27, 2010 6:59 pm

Paul my response to you was mainly because of this post…
” Since this report clearly says the distribution of cooler and warmer stations is important, then the authors seem to think that an average temperature is being calculated.”
Attributing the misunderstanding of a blog commenter to the authors of the paper, as well as your intimating that anomalies could not be transposed via the methods you outlined, was the point of my post.
So at least I got you back on track with the link you posted. Tom in Texas (17:17:24) : was an acceptable response.
And by the way, in non peer reviewed papers they do throw out mean averages and there are any number of ways that could be affected.
Nick Stokes (15:57:48) : see rbateman (16:09:03) : It truly is amazing: if the "average temperature of all the GHCN stations being used in each year", plotted by year, shows a decline, and the higher-warm-anomaly (high-altitude) stations claimed by the faithful were dropped, where is the warming?
Phil Clarke (17:46:49) : assigning the confusion of bloggers to the authors is way not cool. Your other three assertions are just that. "Wang was investigated and cleared." This was not via a statute of limitations, like in climategate, so your assigning double jeopardy is also way not cool.

John from MN
January 27, 2010 7:27 pm

Anthony, Joe and Michael: great work. But if you rework the paper, leave out the name calling and political undertone. Just stick to the facts. Don't follow the other side's mistake of embellishing the facts with name calling or disparaging grand-standing adjectives… Keep up the good work. Sincerely, John

January 27, 2010 8:00 pm

rbateman (16:09:03) :
Nick Stokes (15:57:48) :
If, after lopping off all the cooler stations, the GHCN averages continue to fall, it can only mean one thing:
There ain’t no Global Warming going on, and even more to the point, there’s a whole lot of Global Cooling.

Ah yes, the style of argument here:
“It can be shown that they systematically and purposefully, country by country, removed higher-latitude, higher-altitude and rural locations, all of which had a tendency to be cooler. “
Reality: GHCN station averages have declined.
Global Cooling! Yay! Told you so!

E.M.Smith
Editor
January 27, 2010 9:26 pm

Rhys Jaggar (02:40:36) : Mr Watts/Dr D’Aleo
To say that your paper represents a ’smoking gun’ in the refutation of AGW would be akin to saying that Omaha beach was recaptured by one man and his dog………..

Woof! 😉
Nick Stokes (04:55:40) : The report makes a very misleading comparison on p 12. It shows a GISS global temp map for April 1978, and a purported corresponding map for April 2008, to show how coverage has shrunk.
But that is an incomplete map for 2008. […]. Or you can generate it by going to this GISS page. There are a lot more stations than the preliminary map showed.

Oh please. Talk about misleading. You know better than most that 2 months ago GIStemp revamped their processing and now include USHCN.v2 stations (which they did not until 15 November 2009; since USHCN 'cut off' in May of 2007, a few thousand stations were, effectively, 'put back in' via that move). So it is accurate to show 2008 as it looked in 2008 and not as reimagined 2 months ago. When making a document you stick a stake in the ground, gather your data, and write.
So pointing at the “now” version of GIStemp is just saying the walnut shell moved, and we’d all better keep an eye on it or we won’t find the pea. Yeah, right /sarcoff>
BTW, USHCN.v2 has the warming moved into the historical record. It’s “warmer” than USHCN. So the swap from USHCN (minus the most recent couple of years) to USHCN.v2 (with a colder past) is a nice “shell game” but not very good science. Trying to get folks to confound “what it was as written” with “what it is with the new improved 4 walnut shell monte” is just… oh. (SELF SNIP!).
There are nice ‘blink charts’ showing this at the links in this article:
http://chiefio.wordpress.com/2010/01/15/ushcn-vs-ushcn-version-2-more-induced-warmth/
So no, I don’t think we need to keep our eye on the walnut shells, no matter how many of them you trot out. We need to clear the board of such misdirections and look directly at the data. And a key part of that is stabilizing the constant changes over time (stop the walnut shells from moving) so we can see how this game is really being played.
Karl B. (07:11:41) : “Smith found that in New Zealand the only stations remaining had the words “water” or “warm” in the descriptor code. Some 84% of the sites are at airports, with the highest percentage in southern cold latitudes.”
What does that last sentence mean? It reads as if colder sites were kept, but I think you mean that the colder sites that were kept were airports, which are normally warmer?

If I remember the context of the original correctly, I was pointing out that the warming bias of excess airports was being placed in larger part in what ought to have been the colder places. i.e. Want to warm up 'the cold bits'? Put the thermometer closer to the jet exhaust… Basically, the "most bang for the buck" with the least "visible oddity" (i.e. a very hot place up north, like one of the tropical islands, would stand out if the added warmth was too much).
From the original report here:
http://chiefio.wordpress.com/2009/12/08/ncdc-ghcn-airports-by-year-by-latitude/
This chart has the percentage of sites that are airports on the very far right. That is also shown as percentage of airports in each latitude band.
The part about New Zealand says:

First, New Zealand:
This chart is by latitude bands from "South Pole up to 44 S" latitude to "above 36 S" labeled "NP" (as in "everything to the North Pole…"). The far-right column is the total percentage of airports:

        Year SP--44  -43  -42  -41  -40  -39  -38  -37  -36  -NP  Tot
DArPct: 1869  0.0 21.4 14.3  0.0  0.0  0.0  0.0 21.4  0.0  0.0 57.1
DArPct: 1879  0.0 23.1 19.2  0.0  0.0  0.0  0.0 19.2  0.0  0.0 61.5
DArPct: 1889  0.0 26.2  2.4  0.0  0.0  0.0  0.0 23.8  0.0  0.0 52.4
DArPct: 1899  0.0 21.7 13.0  0.0  0.0  0.0  0.0 21.7  0.0  0.0 56.5
DArPct: 1909  0.0 37.5 15.6  0.0  0.0  0.0  0.0 15.6  0.0  0.0 68.8
DArPct: 1919  0.0 33.3 16.7  0.0  0.0  0.0  0.0 16.7  0.0  0.0 66.7
DArPct: 1929  0.0 20.0 20.0  0.0  0.0  0.0  0.0 20.0  0.0  0.0 60.0
DArPct: 1939  0.0 23.1 19.2  0.0  0.0  0.0  0.0 19.2  0.0  0.0 61.5
DArPct: 1949  0.0 27.8  9.3  0.0  0.0  0.0  0.0  9.3  0.0  0.0 46.3
DArPct: 1959  6.1 23.5  4.7  4.2  0.0  4.2  0.0  4.7  4.2  0.0 51.6
DArPct: 1969  9.8 20.6  3.5  3.5  0.0  3.5  2.8  4.9  3.5  3.1 55.2
DArPct: 1979 11.6 20.3  5.7  3.0  0.0  5.7  3.0  6.0  3.0  5.7 63.9
DArPct: 1989 11.3 24.8  5.0  0.9  0.0  5.0  4.5 10.4  0.5  4.5 66.7
DArPct: 1999 12.3 23.6  9.4  0.0  0.0  9.4  9.4  6.6  0.0  9.4 80.2
DArPct: 2009 12.0 24.1 12.0  0.0  0.0 12.0 12.0  0.0  0.0 12.0 84.3
For COUNTRY CODE: 507
From source ./vetted/v2.inv.id.withlat

Clearly the 50 percent early values are places that started life as “flat but not an airport” fields and later got tarmac.
(I explained this earlier in the posting. Due to the, frankly, ignorant data structure design, you get ONE status flag for many things that change constantly over time. So any place that is an airport today is called an airport for all prior history. Even in the 1800's… So this report is, by nature, very conservative. Those early years of 50%+ airports were clearly NOT airports in those years; there were really none then. But since we KNOW it was zero airports, I went ahead and included it as an interesting indication of how broken the GHCN data structure is, and because it does inform about land use changes over time, in an amusing sort of way. OK, caveat out of the way, back to the quote.)

The startling value is that ending value of 84.3% Airports in New Zealand. This is one of the highest I’ve seen so far. The temperature in New Zealand IS the temperature at the airports. Especially the most southern, cold, latitudes. In fact, if we zoom in on that last year:

LATpct: 2009 12.5 25.0 12.5  0.0  0.0 12.5 12.5  0.0  0.0 25.0 100.0
AIRpct:      12.5 25.0 12.5  0.0  0.0 12.5 12.5  0.0  0.0 12.5 87.5

That LATpct is the percentage of total thermometers in a given latitude band. There is only one value, the most northerly, that has a higher percentage than the airports percentage. It looks like there is ONE non-airport thermometer in New Zealand. Looking at those stations still active in 2009, we find it is Raoul Island.
So the whole point here is that the one thermometer that is not subject to jet turbine heating pollution is on a nice more tropical island… but those colder more southern locations, well, they are snuggled up to the airplanes.
FWIW, a look at New Zealand temperatures is here:
http://chiefio.wordpress.com/2009/11/01/new-zealand-polynesian-polarphobia/
That includes an exercise of taking Campbell Island out of the whole data set (i.e. not leaving that cold station in the baseline) so a more consistent set of places is averaged to see what’s going on. The result is that New Zealand shows no warming in the base data. The “warming signal” is largely carried by leaving Campbell Island in the baseline but taking it out of the present… Couple this with the Airports Percentage and I’d even hazard a guess that New Zealand may well be actually cooling over time.
That would take a more fine grained study, but it is clear that the base data do not support any warming hypothesis. (Unless, of course, you run them through the meat grinder and cook them in the oven… then hide the process with a final ‘anomaly’ at the end…)
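The baseline-dropout effect described above (a cold station left in the baseline period but missing from the present) can be illustrated with a toy example. The numbers below are entirely made up, not real New Zealand data, and the simple average here is deliberately naive; the point is only that station composition, not just station readings, can move the result:

```python
# Toy illustration (synthetic numbers) of the baseline-dropout effect:
# a cold station reports during the baseline period but not recently,
# so the average-based "anomaly" rises even though no station warmed.
def grid_anomaly(recent, baseline):
    """Anomaly of the mean of currently-reporting stations vs the baseline mean."""
    return sum(recent) / len(recent) - sum(baseline) / len(baseline)

# Baseline-period station means: two mild sites plus one cold island site
baseline = [14.0, 13.0, 7.0]       # cold station = 7.0

recent_all  = [14.0, 13.0, 7.0]    # nothing changed, everyone still reporting
recent_drop = [14.0, 13.0]         # same readings, cold station dropped

print(grid_anomaly(recent_all,  baseline))   # 0.0 -> no warming at all
print(grid_anomaly(recent_drop, baseline))   # > 2 C of "warming" from the dropout alone
```

Per-station anomaly methods are designed to avoid exactly this artifact, which is why the argument above turns on whether the code really computes anomalies the way the papers describe.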

E.M.Smith
Editor
January 27, 2010 9:51 pm

MJK (08:59:27) : Still no response to my post (MJK 6:26:30) regarding your failure to provide a supporting reference for the assertion in your report that there has been cooling since 2001.
Well, last December was certainly cool:
http://chiefio.files.wordpress.com/2010/01/ghcn_giss-dec_250km_anom12_2009_2009_1991_2006.gif?w=500&h=294
as described in this posting:
http://chiefio.wordpress.com/2010/01/27/temperatures-now-compared-to-maintained-ghcn/
That chart is the December "anomaly map" with a 'currently used data' baseline of 1991-2006 and a 250 km "spread" to the data (the smallest GISS lets you make). Note that the Arctic values come from optimal interpolations of temperature estimates from ice estimates from satellites… so the Arctic red is basically a permanent fantasy and is not in any way a temperature or a temperature anomaly… Notice the deep purple over North America and Asia?

rbateman
January 27, 2010 10:15 pm

Bruce (16:49:14) :
Briffa could certainly have motive if indeed he was fed to the dogs.
In which case he who laughs last…
Your 2nd point: There is the lament that 'it is a travesty we cannot account for the recent decline', meaning if they push it any further, the wheels will come off. CRU leaked; IPCC and NOAA have pushed it too far.
The wheels have now come off.

E.M.Smith
Editor
January 27, 2010 10:23 pm

mikelorrey (07:54:54) :
Thank you! I knew something had happened. I knew there had to have been a meeting and a decision made. I had not had the time to chase down exactly who and when. You have. Bravo!
Richard M (07:58:21) : Next, I too appreciate the work of EM Smith. However, I am a bit worried. I am also a software engineer and have commented more than once on the sad shape of the GISS software. This problem makes me concerned about having a single source of examination for exactly the same reasons I question the correctness of the GISS code itself. I know I wouldn’t trust myself and, although I would say without hesitation that EM Smith appears much more meticulous than I, it still leaves me uncomfortable.
Thanks for the positive evaluation of work quality. BTW, I publish methods and software specifically so that folks can double check my homework. I’ve had several folks duplicate key parts (often with entirely different hardware and software platforms).
Frankly, nothing I've done is all that complicated. It's just "meat and potatoes" work: characterize the data (originally for benchmarking GIStemp; then I got pulled off into looking at GHCN based on what the benchmark evaluation said about the pattern of the input data…). Most of it can be done in Excel if folks wanted to. (In fact, some of the confirmation I've had is from folks using MS and Excel… though at least 2 groups have now done database implementations.)
There is an interesting investigation based on one of those databases going on here:
http://diggingintheclay.blogspot.com/2010/01/climate-database-development.html
With a look at the cool map interface to the data under development here:
http://82.42.138.62/GISSMaps/stationtrendsraw.asp
and with some rather interesting graphics in their report here:
http://diggingintheclay.blogspot.com/2010/01/mapping-global-warming.html
with some, IMHO, spectacular conclusions from the visualization.
While I’m happy to think I might have in some small way helped them to do what they are doing, my decision to focus on ‘minimal change to GIStemp FORTRAN’ for maximal validity of benchmarking has left the field clear for folks to do this much more “sexy” and much more valuable approach.
It’s what I would have liked to have done had I not been doing “other things”… A real database with a real graphical interface.
So, want confirmation? Go "knock yourself out"… and enjoy the much more visual approach they have built. (Sigh… I have "graphics envy" again 😉 )
But back to what I’ve done:
Further, I’ve done some of this on two different platforms (Mac and Linux) with two different methods ( FORTRAN and things like grep / wc ) with the same result. Best I could do at turning up any ‘pathological failure mode’ on my own. I also tend to ‘spot check’ the program results. So when I run a bit of FORTRAN that sums, oh, airstations by latitude; then I’ll pick a place like New Zealand where the absolute number is small enough to do by hand and do it by hand. The two results have to match. (and I usually do this for more than 2 test cases… belt and suspenders…)
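The "two independent methods must agree" spot-check described above can be sketched in a few lines. The inventory lines below are invented for illustration and do not reproduce the real fixed-column GHCN v2.inv format; the point is only the cross-check pattern (a structured parse vs. a crude grep/wc-style scan, with the counts required to match):

```python
# Hedged sketch of cross-checking one count with an independent method,
# as described in the comment above. Toy inventory lines, not real GHCN data.
inventory = [
    "50793001 RAOUL ISLAND      -29.2",
    "50793002 CHRISTCHURCH AIRP -43.5",
    "50793003 INVERCARGILL AIRP -46.4",
]

# Method 1: structured parse -- look only in the station-name/flags field
count_parse = sum(1 for line in inventory if "AIRP" in line.split(None, 2)[2])

# Method 2: crude grep/wc-style scan over the whole raw line
count_grep = sum(1 for line in inventory if "AIRP" in line)

# The belt-and-suspenders check: independent methods must agree exactly
assert count_parse == count_grep, "independent methods disagree"
print(count_parse)   # 2
```

On a set small enough to tally by hand (as with New Zealand), a third, manual count closes the loop.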

rbateman
January 27, 2010 10:58 pm

E.M.Smith (21:26:23) :
A question for you:
In attempting to come up with values to plug holes in raw data sets should I
1.) take the slope between the previous and following data points or
2.) use the average hi/low values for the date in the stations history.
3.) Your suggestion
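The two options asked about above can be sketched side by side (made-up numbers; which is better generally depends on the gap: interpolation suits short gaps, while a climatological mean for the same calendar slot suits longer ones):

```python
# Sketch of the two gap-filling options from the question above, on toy data.
def fill_linear(series, i):
    """Option 1: interpolate a missing value at index i from its neighbours."""
    return (series[i - 1] + series[i + 1]) / 2.0

def fill_climatology(history, slot):
    """Option 2: mean of past values recorded for the same calendar slot."""
    vals = history[slot]
    return sum(vals) / len(vals)

series = [10.0, 12.0, None, 11.0]    # one missing daily mean
history = {2: [9.0, 13.0, 14.0]}     # past values for that same date

print(fill_linear(series, 2))        # 11.5
print(fill_climatology(history, 2))  # 12.0
```

Either way, flagging filled values so they can be excluded later is probably more important than the choice of method.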

E.M.Smith
Editor
January 27, 2010 11:06 pm

PaulH from Scotland (08:01:58) :
I'm going to interleave my comments with Gavin's non-response response. The summary is "the anomaly will save us", along with the usual mythology about a station anomaly being computed early and only against itself, when that is NOT what the code does. It is, in essence, a self-delusion. What the published papers showed may well have validity, but it is NOT what the code does.

[Response: This is, was, and forever will be, nonsense.

Nice slammed shut mind. It will never find the error staring it in the face.
The temperature analyses are not averages of all the stations absolute temperature. Instead, they calculate how much warmer or colder a place is compared to the long term record at that location.
The PAPERS that support the Reference Station Method and the anomaly process may well have done “selfing” but that is NOT what is done in GIStemp. This is a common delusion among warmers and one they cherish dearly.
The reality is that a thermometer anomaly IS calculated against a “basket of others”. It is also a reality that this is done long after all the ‘in-fill’, homogenizing and UHI calculations. Basically, the “anomaly” can not protect you from all the broken bits done before it is calculated.
Until they get past the fantasy of what they believe is being done and look at what the code actually does do, they will get nowhere.
This “Belief in the Anomaly” is just that. A “Faith Based Belief” in how they think the world works. It is not based on an inspection of what the GIStemp code actually does. Since they then go on to describe the miracles worked by the fantasy anomaly (that is not the one the code does) I’m not going to bother commenting on those bits…

This anomaly turns out to be well correlated across long distances – which serves as a check on nearby stations and as a way to credibly fill in data poor regions.

This, of course, will fail when the “nearby stations” are up to 1000 km away (and things like the PDO were on one phase during the baseline 30 years; and a different phase now, breaking the old pattern of correlation…) and have changed over time (that is, the baseline was made between 2 rural areas and the survivor bias shows up when the survivor is, oh, an Airport and it fills in a cow pasture…). But why look at reality…
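The distinction being argued here can be sketched with toy numbers (these are not GIStemp's actual algorithm or data; the weights and baselines below are invented). The point is that a self-anomaly depends only on the station's own history, while a "basket" anomaly shifts whenever the set of neighbour baselines shifts:

```python
# Hedged sketch: self-anomaly vs anomaly against a weighted "basket" of
# neighbour baselines. Synthetic values throughout.
def self_anomaly(current, own_baseline):
    """Anomaly of a station against its own long-term mean."""
    return current - own_baseline

def basket_anomaly(current, neighbour_baselines, weights):
    """Anomaly against a weighted mean of neighbour baselines."""
    wsum = sum(weights)
    basket = sum(b * w for b, w in zip(neighbour_baselines, weights)) / wsum
    return current - basket

current = 15.0
print(self_anomaly(current, 14.0))                        # 1.0, fixed

# Same reading, two different neighbour baskets -> two different anomalies
print(basket_anomaly(current, [14.0, 13.0], [1.0, 1.0]))  # 1.5
print(basket_anomaly(current, [14.0, 16.0], [1.0, 1.0]))  # 0.0
```

Which of the two the code actually computes, and at what stage relative to in-fill and homogenization, is exactly what is in dispute in this exchange.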
Me, I prefer existence proofs:
1) Pisa Italy gets a “wrong way anomaly” UHI correction of 1.4 C in the wrong direction. Busted…
2) The benchmark shows that the actual stations used DOES matter. Busted…
3) Airports now make up 92% of the GHCN for the USA (and as we saw above, ALL of New Zealand other than Raoul Island). Good luck getting a nice pristine “rural reference station” for use in that UHI correction… and that then means that the “anomaly boxes” made from those stations are ALL comparing an airport today to an open area not filled with jet exhaust and tarmac in the past. Your paper no longer applies… Busted… (HOW can you do a “reference station method” where there is NO rural reference station? The GIStemp code just passes a station through if it can’t find enough ‘reference stations’. So your choices are a) Wrong way via airports or b) NO UHI correction due to not enough reference stations.)
Theory, meet reality. Reality wins.
There has been no deliberate reduction in temperature stations, rather the change over time is simply a function of how the data set was created in the first place (from 31 different datasets, only 3 of which update in real time).
See above about Wunderground having no problem getting real time data. But this basically is just saying “sin of omission not commission”. Frankly, HOW the sin was done does not interest me that much (unless I’m hired as a forensics investigator… then it becomes much fun!) It also ignores things like USHCN where they clearly DID have the ‘real time data’ and chose not to use it.
Read Peterson and Vose (1997) or NCDC’s good description of their procedures or Zeke Hausfather’s very good explanation of the real issues on the Yale Forum. – gavin]
Appeal to authority and deflection to a set of procedures that do NOT address the actual facts on the ground.
Frankly, what is written in some paper is only useful for showing what fantasy someone THINKS is happening. It’s what is really done to the data that matters. So somebody has a written procedure. Good for them. I’ve run the data (their data) through the code (their code) and the anomaly changes. Reality just is.
It’s about as useful as an annual report from Lehman Brothers or Bernie Madoff. Nice fantasy. I’d like to look in the vault now…
So look, these folks REALLY do believe this stuff. It’s a simple pat answer and they don’t have to bother reading FORTRAN or doing benchmarks. I’ve got NO problem with that. Leave me and the economy alone and you can indulge in that all you want.
Where I have a problem is when the guys who wrote and run the code did not do any QA benchmarks and have not bothered to show that the code does what the paper validated. (it doesn’t) Then they want to take their computer fantasies and tell me what car I can drive, what food I can eat, and how much my heating bill is going to be. Sorry, but “no”.
FWIW, this is entirely normal.
I've never met a computer programmer who didn't think "this time for sure" after a "one line fix". I've never met a researcher who did not believe they had found a basic truth that could now be applied to all sorts of places beyond where it was demonstrated. But that does not make them right.
SIDEBAR: The Devil’s Data Processing Dictionary has a definition for “one line fix”. It’s a single line change of code that will fix a bug with certainty. It will also have the bug it introduces fixed by the NEXT “one line fix”…

January 27, 2010 11:20 pm

E.M.Smith (21:26:23) :
“2 months ago GIStemp revamped their processing and now include USHCN.v2 stations”

That’s an absurd excuse for the difference between this purported plot from the report, and this current GISS plot. Both have the US pretty much covered. The differences are in places like Africa, Australia and Canada. No USHCN there.

Editor
January 27, 2010 11:21 pm

0220 here. Sleep is definitely overrated.

E.M.Smith
Editor
January 27, 2010 11:21 pm

Doug S (08:29:08) : I would dearly like to see the code that is used to calculate this (assuming I have their explanation correct). This approach seems like it would have many challenges to model correctly – so many variables to account for.
The code is "up" on my site for casual observation. I also have a link to the NASA download site if you want a full set. The version I have up is from "prior to 15 Nov 2009" and the download link is a newer version that I've not inspected yet (been building a new machine to run it on…).
See: http://chiefio.wordpress.com/gistemp/
as a general entry point. It is more ‘human oriented’ but has a “geek corner” down at the bottom. A general technical brief look is in this link, along with a link to the download location at NASA here:
http://chiefio.wordpress.com/2009/02/25/inside-gistemp-an-overview/
Be advised. It’s pretty messy code…

Patrick Davis
January 27, 2010 11:24 pm

Tut tut tut. NIWA in New Zealand will have lots of tricky questions to answer in the coming years (good, I say – it's about time their bad science was exposed; NZ$800k to move some native worms from one place to another so a road could be extended is one extreme example, IMO).
“brc (02:31:41) :
I assume you’ll use your new vote wisely at the next election!”
I've made it crystal clear to the main (Australian) parties that if they want my vote, they'll need to earn it (after all, I qualified for and earned citizenship; I now hold 3). And, as I am sure you know, *ALL* the main parties are in on the ground floor and support an ETS in one form or another. Labor and KRudd747 are one-hit wonders – that was my prediction when they came to power – and they won't win this year if an election is called. The Greens are less than useless IMO. The Liberals are slimy so-and-so's and were the main party to think about an ETS in the 1990's (Howard trying to capture the "Green Vote"; in fact Labor's CPRS is the Liberal's ETS, just with different shorts on). They are all as bad as one another IMO.
February will be an interesting month for the Senate.

Patrick Davis
January 27, 2010 11:46 pm

Sorry, it was DoC in New Zealand not NIWA re: NZ$800k wormgate (I couldn’t help myself). Regardless, NIWA have manipulated temperature data, ignoring site issues etc.

Andrew P.
January 27, 2010 11:49 pm

Tom Graney (10:44:23) :
@Smokey;
I agree that a rural station is less likely to have extraneous heat effects, and if you shift away from stations with no heat effects to a population of sites that is gradually gaining heat effects then this will cause the system to exhibit an upward bias in temperature over time. But, the heat effects are not cumulative; a parking lot, once constructed, is not going to continue to influence the trend so over time the impact of these heat affected sites is going to peter out.

Yes but if development / land use change continues in the vicinity the effect will be cumulative. The magnitude of the UHI effect is dependent on a number of factors but one of them is the size of the settlement. In my village (pop. 2500) it is only about 1C but in Edinburgh (pop. 500,000) it is at least 2C, and in London (c. 8,000,000) it is typically about 5 or 6C. So as settlements grow, and buildings become more developed with central heating / air conditioning systems (which has unquestionably been the global trend) I would argue that the UHI effect has become cumulative. You also have to bear in mind that UHI isn’t just about having the potential to artificially raise maximum temperatures, but more significant is the effect it has on reducing the extremes of night time minimums; sun-warmed asphalt/concrete, radiation from warm buildings and warm air from AC vents are the key to the UHI effect. This graph showing the seasonal trend in Salehard in Russia shows a dramatic rise in winter temperatures in the last 10 years – which I would suggest correlates with the economic recovery of the region after the hardships and subsequent collapse of the Soviet empire:
http://www.neutralpedia.com/wiki/File:Salehard_seasonal.gif
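The population dependence described above has a classic empirical form. The sketch below uses coefficients from Oke's (1973) fit for European settlements, dT_max ≈ 2.01·log10(population) − 4.06; note this predicts the *maximum* UHI under calm, clear conditions, which is typically much larger than the average offsets quoted in the comment, so treat the numbers as illustrative only:

```python
# Hedged sketch: empirical maximum-UHI vs settlement population.
# Coefficients follow Oke's (1973) European fit; illustrative, not exact.
import math

def uhi_max_c(population):
    """Approximate maximum urban heat island intensity (C) for a settlement."""
    return 2.01 * math.log10(population) - 4.06

for name, pop in [("village", 2500), ("Edinburgh", 500_000), ("London", 8_000_000)]:
    print(f"{name}: ~{uhi_max_c(pop):.1f} C max UHI")
```

The logarithmic form is why continued growth keeps adding warmth, but at a diminishing rate per added resident, which bears on the "cumulative or not" question above.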

Editor
January 28, 2010 12:04 am

E.M.Smith (23:06:45) :
Ah, now I REALLY get it. And I can see, from that huge change in my understanding, why the anomaly is so flawed – it magnifies all the bias you have seen and I am starting to see (by a different method). Steven Mosher (this thread/another thread?) is right – we really need a flow chart of what is done by each organisation that does its own adjustments.