Why Automatic Temperature Adjustments Don't Work

The automatic adjustment procedure is almost guaranteed to produce spurious, artificial warming, and here’s why.

Guest essay by Bob Dedekind

Auckland, NZ, June 2014

In a recent comment on Lucia’s blog The Blackboard, Zeke Hausfather had this to say about the NCDC temperature adjustments:

“The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.”

In other words, an automatic computer algorithm searches for breakpoints, and then automatically adjusts the whole prior record up or down by the amount of the breakpoint.
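As a rough illustration of the mechanics being described (a minimal Python sketch, not NCDC's actual code; the station series, the breakpoint year and the step size are all invented):

```python
import numpy as np

def apply_breakpoint(series, break_index, step):
    """Remove a detected step change by shifting every value BEFORE the
    breakpoint by the step, so the most recent data are kept as 'true'."""
    adjusted = series.copy()
    adjusted[:break_index] += step
    return adjusted

# Invented 100-year annual series with a small -0.3 degC step in 2006
years = np.arange(1910, 2010)
rng = np.random.default_rng(0)
temps = 14.0 + 0.1 * rng.standard_normal(years.size)
temps[years >= 2006] -= 0.3                      # e.g. a site change

# Removing that one step moves ALL values prior to 2006 down by 0.3 degC
adjusted = apply_breakpoint(temps, int(np.searchsorted(years, 2006)), -0.3)
```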

This is not something new; it’s been around for ages, but something has always troubled me about it. It’s something that should also bother NCDC, but I suspect confirmation bias has prevented them from even looking for errors.

You see, the automatic adjustment procedure is almost guaranteed to produce spurious, artificial warming, and here’s why.

Sheltering

Sheltering occurs at many weather stations around the world. It happens when something (anything) stops or hinders airflow around a recording site. The most common causes are vegetation growth and human-built obstructions, such as buildings. A prime example of this is the Albert Park site in Auckland, New Zealand. Photographs taken in 1905 show a grassy, bare hilltop surrounded by newly-planted flower beds, and at the very top of the hill lies the weather station.

If you take a wander today through Albert Park, you will encounter a completely different vista. The Park itself is covered in large mature trees, and the city of Auckland towers above it on every side. We know from the scientific literature that the wind run measurements here dropped by 50% between 1915 and 1970 (Hessell, 1980). The station history for Albert Park mentions the sheltering problem from 1930 onwards. The site was closed permanently for temperature measurements in 1989.

So what effect does the sheltering have on temperature? According to McAneney et al. (1990), each 1m of shelter growth increases the maximum air temperature by 0.1°C. So for trees 10m high, we can expect a full 1°C increase in maximum air temperature. See Fig 5 from McAneney reproduced below:

[Figure: Fig. 5 from McAneney et al. (1990), showing the increase in maximum air temperature with shelter-belt height]

It’s interesting to note that the trees in the McAneney study grow to 10m in only 6 years. For this reason weather stations will periodically have vegetation cleared from around them. An example is Kelburn in Wellington, where cut-backs occurred in 1949, 1959 and 1969. What this means is that some sites (not all) will exhibit a saw-tooth temperature history, where temperatures increase slowly due to shelter growth, then drop suddenly when the vegetation is cleared.
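As a rough sketch of that saw-tooth mechanism, using the numbers quoted above (0.1°C per metre of shelter, roughly 10 m of growth in about 6 years, clearings every ten years; Python, all values synthetic):

```python
import numpy as np

def shelter_bias(n_years=30, clear_years=(10, 20), growth_m_per_yr=10 / 6,
                 max_height_m=10.0, degc_per_m=0.1):
    """Synthetic maximum-temperature bias from shelter growth: warms slowly
    as the trees grow, then drops back to zero when they are cut back."""
    bias, height = np.zeros(n_years), 0.0
    for yr in range(n_years):
        if yr in clear_years:
            height = 0.0                     # vegetation cleared this year
        bias[yr] = degc_per_m * height
        height = min(height + growth_m_per_yr, max_height_m)
    return bias

print(np.round(shelter_bias(), 2))
# Climbs towards ~1 degC as the trees approach full height, then snaps back
# to zero at each clearing -- the saw-tooth shape shown below.
```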

[Figure: saw-tooth temperature history, slow warming from shelter growth followed by sudden drops at each clearing]

So what happens now when the automatic computer algorithm finds the breakpoints at year 10 and 20? It automatically removes them, shifting each earlier segment down, as follows.

[Figure: the same series after both breakpoints are removed, now showing an artificial warming trend]

So what have we done? We have introduced a warming trend for this station where none existed.

Now, not every station is going to have sheltering problems, but there will be enough of them to introduce a certain amount of warming. The important point is that there is no countering mechanism – there is no process that will produce slow cooling, followed by sudden warming. Therefore the adjustments will always be only one way – towards more warming.
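To make the argument concrete, here is a minimal, self-contained sketch (Python; the saw-tooth series and the "adjustment" are synthetic and deliberately simplified, not any agency's actual algorithm):

```python
import numpy as np

# Synthetic 30-year series: no underlying climate trend, but a sheltering
# bias that creeps up 0.1 degC/yr and is reset by clearings at years 10 and 20.
years = np.arange(30)
raw = 14.0 + 0.1 * (years % 10)

# Naive homogenisation: find the two sudden drops and remove them by
# shifting everything BEFORE each breakpoint to match the later data.
adjusted = raw.copy()
for bp in (10, 20):
    step = adjusted[bp] - adjusted[bp - 1]   # about -0.9 degC at each clearing
    adjusted[:bp] += step

trend_raw = np.polyfit(years, raw, 1)[0] * 100        # degC per century
trend_adj = np.polyfit(years, adjusted, 1)[0] * 100
print(f"raw trend      {trend_raw:+.1f} degC/century")  # roughly +1, only from the incomplete last cycle
print(f"adjusted trend {trend_adj:+.1f} degC/century")  # roughly +9: a strong warming trend introduced
```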

UHI (Urban Heat Island)

The UHI problem is similar (Zhang et al. 2014). A diagram from Hansen (2001) illustrates this quite well.

[Figures: diagrams from Hansen et al. (2001) illustrating urban warming and a station move from the city centre to a more rural setting]

In this case the station has moved away from the city centre, out towards a more rural setting. Once again, an automatic algorithm will most likely pick up the breakpoint and perform the adjustment. There is also no countering mechanism that produces a long-term cooling trend. If even relatively few stations are affected in this way (say 10%), it will be enough to skew the trend.
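A back-of-the-envelope check on that last sentence (Python; every number here is invented purely for illustration):

```python
import numpy as np

n_stations = 100
true_trend = 0.0          # assume no real climate trend, degC/decade
uhi_drift = 0.3           # invented UHI creep at the affected stations, degC/decade
n_affected = 10           # 10% of the network

# If homogenisation removes only the step at the station move (treating the
# warm urban years as "true"), the slow UHI creep stays in each record and
# leaks into the network average:
trends = np.full(n_stations, true_trend)
trends[:n_affected] += uhi_drift
print(f"network-mean trend: {trends.mean():+.3f} degC/decade")   # +0.030
```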

References

1. Hansen, J., Ruedy, R., Sato, M., Imhoff, M., Lawrence, W., Easterling, D., Peterson, T. and Karl, T. (2001) A closer look at United States and global surface temperature change. Journal of Geophysical Research, 106, 23947–23963.

2. Hessell, J. W. D. (1980) Apparent trends of mean temperature in New Zealand since 1930. New Zealand Journal of Science, 23, 1-9.

3. McAneney, K. J., Salinger, M. J., Porteus, A. S. and Barber, R. F. (1990) Modification of an orchard climate with increasing shelter-belt height. Agricultural and Forest Meteorology, 49, 177–189.

4. Zhang, L., Ren, G.-Y., Ren, Y.-Y., Zhang, A.-Y., Chu, Z.-Y. and Zhou, Y.-Q. (2014) Effect of data homogenization on estimate of temperature trend: a case of Huairou station in Beijing Municipality. Theoretical and Applied Climatology, 115(3–4), 365–373.

Wayne Findley
June 10, 2014 9:54 pm

A good thread (and greetings, Bob, from the southern Christchurch).
I’ve long been of the opinion, having worked for decades in accounting and BI systems, that it’s high time the big-data practices used throughout that area were used in the temperature record.
Put simply, there’s gear and software around that can handle the volume of data points in the record, AND differentiate raw from ‘this adjustment by that process on this date by which user’ – so that the raw data stays untouched.
But the crew who run the temperature datasets seem to have just the one data point per time period per station, AND then they go and adjust those!
To accounting types, that’s absolute sacrilege. It is obscuring the audit trail.
So instead of having a Kelburn temp of (I’m making this up, don’t take it as Gospel) 15.3 C on Thursday the 43rd of Germinal, 1934, 1330 local time, and then wondering what adjustments were made (which results in the sort of hand-waving observed in this here thread) it should be possible to layer adjustments like this invented set of data records:
Type: Raw temp Value: 13.6C DateTime: 1330 43/10/1934 or (whatever Zulu, cannot find a UTC converter for French Revolutionary dates), location: (lat/long) GUID (to make sure the thing really is unique) Processor: reader . Process description: Actual Reading by some fallible Human.
Type: Adjustment Value: -0.6C DateTime: 1330 43/10/1934, location: (lat/long) GUID, Processor: NIWA homogenator. Process description: NZ UHI assessment removal
Type: Adjustment Value: +0.37C DateTime: 1330 43/10/1934, location: (lat/long) GUID Processor: GISS krige step 1. Process description: Temp field harmonisation radius 200Km
Type: Adjustment Value: +0.15C DateTime: 1330 43/10/1934, location: (lat/long) GUID Processor: GISS krige step 2. Process description: Temp field harmonisation radius 1500Km
Then, using standard data query techniques, it would be possible to Both say:
Adjusted temp at lat/long On 1330 43/10/1934 Was 13.52 C
AND
Three adjustments were made to arrive at this sum:
1 local (NIWA) of -0.6 C
2 international (GISS Krige) of +0.52 C
AND
Raw, as-observed temp at lat/long On 1330 43/10/1934 Was 13.6 C
Big data. Big cubes. Lotsa layers. Full transparency.
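Wayne's layered-record idea is straightforward to express; here is a minimal sketch (Python; the field names and the NIWA/GISS adjustment labels are just his invented example carried over, not any real data model):

```python
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class Layer:
    kind: str            # "raw" or "adjustment"
    value_c: float       # degC reading, or degC delta for an adjustment
    processor: str       # who or what produced it
    description: str

@dataclass
class Observation:
    station: str
    timestamp: str
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))
    layers: List[Layer] = field(default_factory=list)

    def raw(self) -> float:
        return next(l.value_c for l in self.layers if l.kind == "raw")

    def adjusted(self) -> float:
        return sum(l.value_c for l in self.layers)     # raw plus every adjustment

    def audit_trail(self) -> List[Layer]:
        return [l for l in self.layers if l.kind == "adjustment"]

obs = Observation("Kelburn", "1330 43/10/1934")
obs.layers += [
    Layer("raw",        13.60, "reader",            "actual reading by a fallible human"),
    Layer("adjustment", -0.60, "NIWA homogenator",  "NZ UHI assessment removal"),
    Layer("adjustment", +0.37, "GISS krige step 1", "temp field harmonisation, 200 km"),
    Layer("adjustment", +0.15, "GISS krige step 2", "temp field harmonisation, 1500 km"),
]
print(obs.raw(), round(obs.adjusted(), 2))   # 13.6 13.52 -- the raw value stays untouched
```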
But this is all too much to expect, eh….
Sigh.

Bob Dedekind
June 10, 2014 11:09 pm

Wayne Findley says: June 10, 2014 at 9:54 pm
I agree Wayne, the days of the climate data manipulators saying “Trust us, we’re from the Government” are way past. In the new sceptical era they have to demonstrate some transparency, or they simply won’t be taken seriously.
That’s not to say that the manipulations are necessarily wrong in every case, but the public is getting less and less tolerant of ‘black box’ processing, with vague assurances that the job is always done right.

June 11, 2014 12:14 am

Tom Wigley Climategate e-mail to Phil Jones: “Phil, Here are some speculations on correcting SSTs to partly explain the 1940s warming blip. If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know). So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean — but we’d still have to explain the land blip. I’ve chosen 0.15 here deliberately. This still leaves an ocean blip, and i think one needs to have some form of ocean blip to explain the land blip (via either some common forcing, or ocean forcing land, or vice versa, or all of these). When you look at other blips, the land blips are 1.5 to 2 times (roughly) the ocean blips — higher sensitivity plus thermal inertia effects. My 0.15 adjustment leaves things consistent with this, so you can see where I am coming from. Removing ENSO does not affect this. It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.”
This is agenda-driven adjustment, revealed in an e-mail to the guy who creates the most widely used global average temperature plot, HadCRUT, whose version four update in 2012 listed Phil’s new Saudi Arabian university affiliation.

June 11, 2014 12:17 am

Sorry, moderator, in trying to log on successfully to various news sites throughout the day, my log on here changes accounts sporadically. They are all me, from NYC, this time as nikfrommanhattan.
-=NikFromNYC=-

June 11, 2014 12:30 am

Thanks for a good read Bob – great to see those GISS diagrams being used – again and again.

June 11, 2014 1:01 am

Tailingsproject, thanks for posting the link for me.

JohnnyCrash
June 11, 2014 1:13 am

This Kelburn station that we keep referring to has a step adjustment and a slope change. The slope change makes no physical sense. The step change, notwithstanding the accuracy of the value chosen for the step, at least makes some sense. Is the slope change the time-of-measurement bias? How is that bias always a positive slope? How do we know when the measurements were actually taken? Is the slope from averaging to nearby stations?
Averaging with other stations, even close by, makes no physical sense, because the temperature of a station 20 miles away or even 100 feet away has no bearing on the temperature of a station. A station’s temperature is a function of the air temperature immediately adjacent to the thermometer and not at all a function of the temperature of another station. You simply cannot remove station errors by averaging with other stations. You can use averaging if you take multiple samples of the same piece of air with the same thermometer. You cannot average different station data. You cannot fill in gaps by averaging to other stations or averaging surrounding days.
There are 50 error-inducing “things” going on around these ground stations: paint weathering, repainting with a different batch of paint, changing land use, plant growth, insect nests, soil moisture, damage to the housing, instrument drift, bored temp readers who make up numbers so they don’t have to go out and actually read the data, time of measurement, height and vision quality of the person reading the data, dirt and grime, etc. The data, adjusted or unadjusted, cannot be used to determine long-term fraction-of-a-degree trends. I don’t have a problem with the data being inaccurate. I have a problem with using this data to say with a straight face that it proves the earth’s temperature is changing one way or the other.

Victor Venema
June 11, 2014 2:42 am

Bob Dedekind says: “I actually don’t care. All I ask is that someone understands and communicates exactly why it happens.”
If that is your position, then why did you not do a little work to understand the Pairwise Homogenization Algorithm of NOAA so that you could communicate exactly what it does?
Bob Dedekind says: “I agree Wayne, the days of the climate data manipulators saying “Trust us, we’re from the Government” are way past. In the new sceptical era they have to demonstrate some transparency, or they simply won’t be taken seriously.”´
The raw data, the time-of-observation-bias-corrected data and the homogenized data are all freely available. You can download the algorithm and check how it is working; if you are unable to, you can feed it with data and see what it does. You can read the articles that describe the algorithm. Are you sure there is anything NOAA could do that would make you take them seriously?
P.S. If Fortran is too hard for you, there are many more homogenization methods, coded in other languages, which you could use to check whether your problem is real.
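For anyone who does want to experiment without Fortran, a bare-bones pairwise comparison takes only a few lines (Python; this is a crude stand-in for a real changepoint test, nothing like the full NOAA PHA):

```python
import numpy as np

def largest_step(candidate, neighbour):
    """Flag the most likely breakpoint in `candidate`, judged from the
    candidate-minus-neighbour difference series (shared climate cancels out)."""
    diff = np.asarray(candidate) - np.asarray(neighbour)
    best = (None, 0.0, 0.0)                      # (index, step, score)
    for i in range(2, len(diff) - 2):
        step = diff[i:].mean() - diff[:i].mean()
        score = abs(step) * np.sqrt(i * (len(diff) - i) / len(diff))
        if score > best[2]:
            best = (i, step, score)
    return best[0], best[1]

# Synthetic test: two stations share the same climate signal; the candidate
# picks up an artificial +0.5 degC jump at year 60.
rng = np.random.default_rng(1)
climate = np.cumsum(rng.normal(0.0, 0.1, 100))
neighbour = climate + rng.normal(0.0, 0.15, 100)
candidate = climate + rng.normal(0.0, 0.15, 100)
candidate[60:] += 0.5

print(largest_step(candidate, neighbour))   # should land near index 60, step near +0.5
```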

June 11, 2014 4:45 am

I think the evidence of the systematic problem in the corrections applied is shown by the difference between adjusted and raw. There is a systematic positive trend in the difference. I have computed this myself from GHCN previously, as have others. The exact result depends on the vintage of GHCN used, but they are all fundamentally the same result. There is an example plot (graph 6 down the page) at:
http://stevengoddard.wordpress.com/maps-and-graphs/
Every time I ask this question there is a stony silence, I suspect because it is the elephant in the room:
What is the physical explanation for the systematic upward trend in corrections with time between the raw and the adjusted temperature data? Mosher? Stokes? Hausfather? Anybody?
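For what it is worth, the check being asked about is simple to state in code (Python sketch; `raw` and `adjusted` here are invented stand-ins for whichever matched station series one downloads):

```python
import numpy as np

def correction_trend(years, raw, adjusted):
    """Linear trend, in degC per century, of the corrections (adjusted - raw)
    applied to a single station record."""
    corrections = np.asarray(adjusted) - np.asarray(raw)
    return np.polyfit(years, corrections, 1)[0] * 100.0

# Invented example: corrections that grow by 0.4 degC per century
years = np.arange(1900, 2000)
raw = 14.0 + 0.002 * (years - 1900)
adjusted = raw + 0.004 * (years - 1900)
print(f"{correction_trend(years, raw, adjusted):+.2f} degC/century of correction")   # +0.40
```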

Bill Illis
June 11, 2014 6:38 am

How about this Berkeley station – Amundsen Scott at the south pole.
26 quality control failures identified by the automatic algorithm despite the fact this is supposed to be one of the highest quality weather research stations on Earth. Tell that to the scientists freezing their butts off in -60.0C temperatures. All of the stations in Antarctica have the occasional extremely cold month compared to average (there are actually far more of these than extremely warm months at most stations – not so much at Amundsen Scott). The Berkeley/Mosher algorithm assumes there is a quality-control failure, but it is just what Mother Nature delivers in Antarctica, and an automatic algorithm should not flag a failure when that is simply what the climate does. And why does the algorithm only identify the downspikes and none of the upspikes? There must be a bias in the algorithm (something I’ve mused about before, but this example makes it pretty clear that is the case).
http://berkeleyearth.lbl.gov/stations/166900
The actual (and yes fully quality-controlled) raw data is reported here and it has virtually no trend despite Berkeley having revised it to +1.0C over 50 years.
http://www.antarctica.ac.uk/met/READER/surface/Amundsen_Scott.All.temperature.html
A chart of the above raw temps (a couple of months out of date):
http://s13.postimg.org/6w98pvd8n/Amund_Scott_90_S.png

mpainter
June 11, 2014 8:27 am

This interminable discussion about the desirability or reliability of data “correction” convinces me that such corrections should be made rarely and with the utmost discretion, and certainly not by automation. Automation simply introduces reasonable doubts about the adjustments and clouds the whole issue with uncertainty.
For science to work, the data needs to be acceptable to all and its reliability should never be an issue, or you are shot in the foot at the start.
To me, the algorithmic adjustment of data is the antithesis of proper science. Talk about error creeping in, or assumptions that may or may not be valid! It is foolish to undertake a work when your data methodology is questionable; the focus ends up on your methods instead of your conclusions.

phi
June 11, 2014 8:54 am

A graph in relation to the discussion: http://img837.imageshack.us/img837/5687/fontea.jpg
This is a comparison of regional temperature (Alps):
1. Homogenized temperature of Davos (red).
2. The same set but before adjustments (light blue).
3. The same set again, but homogenized according to the recommendations of Hansen et al. 2001 (dark blue).
4. A glacier proxy, melting anomaly, Huss et al. 2009 (green).

NikFromNYC
June 11, 2014 10:02 am

Bill Illis on Antarctica: “26 quality control failures identified by the automatic algorithm despite the fact this is supposed to be one of the highest quality weather research stations on Earth.”
Confusing still is what those really represent since they don’t lead to break points, and are all for valleys but not peaks despite the overall spiky noise in both directions.

NikFromNYC
June 11, 2014 10:08 am

Phil Jones’ official FAQ on lack of raw data for the standard global average product still used in climatology:
“Since the early 1980s, some NMSs, other organizations and individual scientists have given or sold us (see Hulme, 1994, for a summary of European data collection efforts) additional data for inclusion in the gridded datasets, often on the understanding that the data are only used for academic purposes with the full permission of the NMSs, organizations and scientists and the original station data are not passed onto third parties. Below we list the agreements that we still hold. We know that there were others, but cannot locate them, possibly as we’ve moved offices several times during the 1980s. Some date back at least 20 years. Additional agreements are unwritten and relate to partnerships we’ve made with scientists around the world and visitors to the CRU over this period. In some of the examples given, it can be clearly seen that our requests for data from NMSs have always stated that we would not make the data available to third parties. We included such statements as standard from the 1980s, as that is what many NMSs requested. The inability of some agencies to release climate data held is not uncommon in climate science.”

“We are not in a position to supply data for a particular country not covered by the example agreements referred to earlier, as we have never had sufficient resources to keep track of the exact source of each individual monthly value. Since the 1980s, we have merged the data we have received into existing series or begun new ones, so it is impossible to say if all stations within a particular country or if all of an individual record should be freely available. Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e. quality controlled and homogenized) data.”
http://www.cru.uea.ac.uk/cru/data/availability/

Michael D
June 11, 2014 10:13 am

Hi Willis:
a) I can’t see the image
b) I still think the “raw” data is the way to go. Though of course remove broken data (e.g. bear knocked over the weather station). The remaining anomalies should then be addressed verbally.
The raw data is, however, politically inconvenient (it tells the wrong story), so they “fix” it.

Dougmanxx
June 11, 2014 10:32 am

Bill Illis says:
June 11, 2014 at 6:38 am
Awesome post! Does Berkeley ever release actual “average temperature” data like you link to? Or is their “data” always just an “anomaly”? This is currently my pet peeve, as release of the “average temperature” allows even someone who is unsophisticated to see what changes have been made to the record over time. And it makes more sense to most people than something slippery like an “anomaly”. So if the “average temperature” used to calculate the anomaly changes, it’s blindingly obvious to anyone what is going on. TBH this IS what’s happening, it’s simply hidden by the disingenuous veil of “anomaly”.
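For readers unsure what the fuss is about, the anomaly conversion itself is trivial, which is part of the point (Python sketch with invented numbers):

```python
import numpy as np

def anomalies(values, base):
    """Anomaly = value minus the mean over a chosen base period (a slice)."""
    return values - values[base].mean()

temps = np.array([14.1, 14.3, 13.9, 14.6, 14.8, 15.0])   # invented annual means, degC

print(np.round(anomalies(temps, slice(0, 3)), 2))         # relative to the first three years
print(np.round(anomalies(temps + 0.5, slice(0, 3)), 2))   # shift the whole absolute record by 0.5 degC
# Both lines print identical anomalies: a uniform change to the absolute
# record is invisible once only anomalies are published.
```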

June 11, 2014 10:48 am

Science 101 sniff test:
A Berkeley BEST test, in fact, of what their Al-Gore-ithm does for one of the most obvious cases of urban warming of all: the Central Park castle on a hill versus rural West Point military academy, only a few miles up the Hudson River. NYC should obviously be adjusted *down*, but alas, it’s just gradual urban heating, not worthy of a breakpoint at all, even though BEST adds a whopping 24 slice-and-dice breakpoints for other reasons, most of which only HAL 9000 understands the frantic meaning of:
http://www.john-daly.com/stations/WestPoint-NY.gif
The Berkeley BEST versions, in which Central Park shows no proper adjustment *down* to account for clear urban heating effects:
http://berkeleyearth.lbl.gov/stations/167589
For Berkeley BEST, the “raw” data for West Point (which, unlike the *other* “raw data” that poor old John Daly must simply have looked up “wrong”, likewise shows warming) is oddly broken into two separate records, without any explanation that would let third parties understand the procedure:
http://berkeleyearth.lbl.gov/stations/36834
http://berkeleyearth.lbl.gov/stations/167589
That’s why we must always leave things to “experts” I guess. Thermometers are too complicated. Maybe we should ask a genuine rocket scientist then:
“In my background of 46 years in aerospace flight testing and design I have seen many examples of data presentation fraud. That is what prompted my interest in seeing how the scientists have processed the climate data, presented it and promoted their theories to policy makers and the media. What I found shocked me and prompted me to do further research. I researched data presentation fraud in climate science from 1999 to 2010.” – Burt Rutan, winner of the X-Prize for the first private space vehicle.
He continues:
“In general, if you as an engineer with normal ethics, study the subject you will conclude that the theory that man’s addition of CO2 to the atmosphere (a trace amount to an already trace gas content) cannot cause the observed warming unless you assume a large positive feedback from water vapor. You will also find that the real feedback is negative, not positive!”
http://scholarsandrogues.com/2012/01/31/climate-science-discussion-between-burt-rutan-and-brian-angliss/
Adjustments towards a more accurate view on the ground are one thing, but Berkeley’s chopping and re-joining of data sets at the same level renders their result utterly meaningless, since their real input is but an average of eight-year-long snippets without real-world station histories to support it. Like any other highly parametrized black box that not even open-source code can untangle for outsiders, it can make the elephant’s tail wiggle to match recent climate model predictions, as desired by what is commonly known as a “brazen liar” in the form of the highly activist Richard Muller, who quite actively promoted a proven-to-be-*false* media narrative that he started his BEST project as a skeptic and was converted by its results – and that is in fact *how* he obtained Koch brothers funding for it in the first place.
-=NikFromNYC=-, Ph.D. in carbon chemistry (Columbia/Harvard)

Editor
June 11, 2014 11:39 am

Green Sand says:
June 10, 2014 at 12:46 pm

Willis Eschenbach says:
June 10, 2014 at 12:32 pm

Does anyone else have problems with it? If so I could embed it in a different manner.

————————————————-
Yes, no can see, Win 7, Firefox

Dang, go figure. Well, I’ve swapped it out for a jpg, that should do it.
w.

Editor
June 11, 2014 11:49 am

Victor Venema says:
June 10, 2014 at 2:36 pm

In climatology relative homogenization methods are used that also remove trends if the local trend in one station does not fit to the trends in the region. Evan Jones may be able to tell you more and is seen here as a more reliable source and not moderated.

Sadly, this is done using the rubric of the known (relatively) good correlation between nearby temperature sets. What the authors of these methods never seem to have either considered or tested is whether there is (relatively) good correlation between nearby temperature trends … and it turns out that despite the correlation of the data, the trends are very poorly correlated … here are the trends from Alaska, for example:

All of these stations are within 500 miles of Anchorage, and all of them have a (relatively) good correlation with the Anchorage temperature (max 0.94, mean 0.75) … but their trends are all over the map, with the largest being no less than three times the smallest, hardly insignificant. Further discussion of the graphic is here.
As a result, I’m totally unimpressed with the trend-based “homogenization methods”. I have never, ever seen a valid practical demonstration that it is a valid method. To me, removing or “adjusting” a climate station because its trend doesn’t agree with other local trends is a Procrustean joke that has no place in climate science.
w.
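Willis’s distinction between correlated data and correlated trends is easy to reproduce with synthetic series (Python; the numbers below are invented, not the Alaska stations):

```python
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(600)                           # 50 years of monthly anomalies
weather = rng.normal(0.0, 2.0, months.size)       # month-to-month variability shared by both

station_a = weather + (0.10 / 120) * months       # drifts up at 0.10 degC/decade
station_b = weather + (0.30 / 120) * months       # same weather, 0.30 degC/decade

corr = np.corrcoef(station_a, station_b)[0, 1]
trend_a = np.polyfit(months, station_a, 1)[0] * 120
trend_b = np.polyfit(months, station_b, 1)[0] * 120
print(f"correlation {corr:.2f}; trends {trend_a:.2f} vs {trend_b:.2f} degC/decade")
# The correlation stays well above 0.9, yet the two fitted trends still differ
# by the full 0.2 degC/decade that was built in.
```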

June 11, 2014 11:51 am

“No, there’s an observed change of about 0.8°C, and that’s when the altitude change happened. They are saying that that isn’t a climate effect, and changing (for computing the index) the Thorndon temps to match what would have been at Kelburn, 0.8°C colder.”
Seems to me that it would be a lot more on the up-and-up to adjust the NEW temperatures to match what it would have been at the older site if you’re going to adjust at all. At least then it would be clear to all what you’re doing… pretending the site hasn’t moved, rather than pretending the site has been in the new location all along.

Editor
June 11, 2014 12:17 pm

NikFromNYC says:
June 11, 2014 at 10:02 am

Bill Illis on Antarctica:

“26 quality control failures identified by the automatic algorithm despite the fact this is supposed to be one of the highest quality weather research stations on Earth.”

Confusing still is what those really represent since they don’t lead to break points, and are all for valleys but not peaks despite the overall spiky noise in both directions.

I couldn’t make any sense out of that one. The parts I didn’t understand are:
1. Why are all of the “quality control” flags on cold temperatures, and not warm temperatures?
2. I can think of a host of reasons why a thermometer in Antarctica might read too high, from exhaust from an idling sno-cat to waste heat from the local buildings. But I cannot think of any reason why it would read too low … so in addition to the question of why they are all on the low side, we have the question of why they are reading up to 6°C low at all?
3. Are we truly to assume that these measurements, taken by trained scientists at great effort, are so terribly bad? Seems unlikely.
4. A number of the QC flagged temperatures are less than half a degree from the “regional expectation”, while other temperatures that are up to 6°C !!! different from the “regional expectation” are not flagged … why?
5. Exactly what algorithm decided that these points needed “quality control”?
6. Were these results ever checked by a human being for reasonableness? And if not, why?
In short, the adjustments to this record are an impenetrable mystery. Zeke or Mosh, if you don’t explain this, folks will assuredly just assume you are churning out junk … some answers are required here.
w.
PS—Why doesn’t Richard Muller have the stones to come and answer these questions, and instead lets Mosh and Zeke take the heat? My theory is that Richard is AWOL because he saw a microphone in the next room and has rushed to grab it, trampling two old ladies in the process, but that’s just a hypothesis … Mosh? Zeke? Any insights on this one as well?

Editor
June 11, 2014 12:23 pm

NikFromNYC says:
June 11, 2014 at 10:08 am

Phil Jones’ official FAQ on lack of raw data for the standard global average product still used in climatology:
“Since the early 1980s, some NMSs, other organizations and individual scientists have given or sold us (see Hulme, 1994, for a summary of European data collection efforts) additional data for inclusion in the gridded datasets, often on the understanding that the data are only used for academic purposes with the full permission of the NMSs, organizations and scientists and the original station data are not passed onto third parties. Below we list the agreements that we still hold. We know that there were others, but cannot locate them, possibly as we’ve moved offices several times during the 1980s. Some date back at least 20 years. Additional agreements are unwritten and relate to partnerships we’ve made with scientists around the world and visitors to the CRU over this period. In some of the examples given, it can be clearly seen that our requests for data from NMSs have always stated that we would not make the data available to third parties. We included such statements as standard from the 1980s, as that is what many NMSs requested. The inability of some agencies to release climate data held is not uncommon in climate science.”

This nonsense is nothing but a crocodile crossed with an abalone. Phil Jones made the same totally bogus claims back when I made my FOI request for his data. In fact, when he actually tried to dig them out, he could only find three such agreements, only one of which had any constraints on the further use or revelation of the data. Nor was he able to show that “our requests for data from NMSs have always stated that we would not make the data available to third parties”, that’s an outright lie.
In short, this is just another typical Phil “Pantsonfire” Jones crockabaloney …
w.

June 11, 2014 1:21 pm

Willis Eschenbach says: “As a result, I’m totally unimpressed with the trend-based “homogenization methods”. I have never, ever seen a valid practical demonstration that it is a valid method.”
Then I have two articles for you to read:
Venema et al. 2012 discusses benchmarking results for a range of algorithms: OA at http://www.clim-past.net/8/89/2012/cp-8-89-2012.html
Williams et al., 2012 discusses results of applying the US benchmarks to USHCN: Available at ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/algorithm-uncertainty/williams-menne-thorne-2012.pdf
Willis Eschenbach says: Nor was he able to show that “our requests for data from NMSs have always stated that we would not make the data available to third parties”, that’s an outright lie.
The situation is getting better; like the USA, more and more countries release their climate data freely. However, many still do not release all their data, mostly because the finance ministers want the weather services to earn a little money by selling the data. The weather services themselves would love their data to be used.
I would say, just try to gather climate data yourself. Then you will see that Phil Jones was right.

Bob Dedekind
June 11, 2014 1:28 pm

Victor Venema says: June 11, 2014 at 2:42 am
I have no need to do that, I can see the results of the adjustments with my own eyes. And yes, the problem really exists, because I have presented a real-life example for you.
Did you read my comment above regarding Albert Park? If not, read it, and then come back and tell me why the homogenisation technique failed to do the following:
1) detect and correct the distorted Albert Park trend, and
2) account for it in the 1966 breakpoint adjustment.
It’s not my job, nor that of other folk here, to work through the code to find the faults. Programs are designed to do things, and when they fail to do those things then it’s obvious from the outputs.
Remember, the Albert Park trend has been shown to be 0.9°C/century higher than surrounding sites. That’s a significant amount (the delta alone is higher than the global trend!), yet the claim has been made that trend checks were performed. I suggest they should revisit their code – it has a bug!
And by the way, I used to program in Fortran. I did so for almost a decade.

Rob
June 11, 2014 2:00 pm

The Jones et al. reconstruction was perhaps the first to employ this method. From what data I was able to obtain for my region here in the Southeastern U.S., Urban Heat Island effects were never rectified.
I’m working up some better station and 5×5 grid points than either CRU, GHCN, or USHCN. Warwick Hughes has some similar work.