Why Automatic Temperature Adjustments Don't Work

The automatic adjustment procedure is almost guaranteed to produce spurious, artificial warming, and here’s why.

Guest essay by Bob Dedekind

Auckland, NZ, June 2014

In a recent comment on Lucia’s blog The Blackboard, Zeke Hausfather had this to say about the NCDC temperature adjustments:

“The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.”

In other words, an automatic computer algorithm searches for breakpoints, and then automatically adjusts the whole prior record up or down by the amount of the breakpoint.
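For readers who think in code, here is a minimal sketch of that adjustment step (my own illustration with invented numbers, not NCDC’s actual code; the real pairwise algorithm estimates each breakpoint by comparison with neighbouring stations, which is not shown):

import numpy as np

def remove_breakpoint(series, break_index, step):
    # Remove a detected non-climatic step (mean after minus mean before)
    # by shifting everything before break_index, so the post-break values
    # are treated as the "true" level.
    adjusted = series.copy()
    adjusted[:break_index] += step
    return adjusted

# Toy 100-year record, flat except for a +0.5 degC jump at index 96 (say, 2006)
raw = np.zeros(100)
raw[96:] += 0.5

step = raw[96:].mean() - raw[:96].mean()     # crude step estimate: +0.5 degC
adjusted = remove_breakpoint(raw, 96, step)  # every value before 2006 is raised by 0.5 degC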

This is not something new; it’s been around for ages, but something has always troubled me about it. It’s something that should also bother NCDC, but I suspect confirmation bias has prevented them from even looking for errors.

You see, the automatic adjustment procedure is almost guaranteed to produce spurious, artificial warming, and here’s why.

Sheltering

Sheltering occurs at many weather stations around the world. It happens when something (anything) stops or hinders airflow around a recording site. The most common causes are vegetation growth and human-built obstructions, such as buildings. A prime example of this is the Albert Park site in Auckland, New Zealand. Photographs taken in 1905 show a grassy, bare hilltop surrounded by newly-planted flower beds, and at the very top of the hill lies the weather station.

If you take a wander today through Albert Park, you will encounter a completely different vista. The Park itself is covered in large mature trees, and the city of Auckland towers above it on every side. We know from the scientific literature that the wind run measurements here dropped by 50% between 1915 and 1970 (Hessell, 1980). The station history for Albert Park mentions the sheltering problem from 1930 onwards. The site was closed permanently for temperature measurements in 1989.

So what effect does the sheltering have on temperature? According to McAneney et al. (1990), each 1m of shelter growth increases the maximum air temperature by 0.1°C. So for trees 10m high, we can expect a full 1°C increase in maximum air temperature. See Fig 5 from McAneney reproduced below:

[Figure: Fig. 5 from McAneney et al. (1990)]

It’s interesting to note that the trees in the McAneney study grow to 10m in only 6 years. For this reason weather stations will periodically have vegetation cleared from around them. An example is Kelburn in Wellington, where cut-backs occurred in 1949, 1959 and 1969. What this means is that some sites (not all) will exhibit a saw-tooth temperature history, where temperatures increase slowly due to shelter growth, then drop suddenly when the vegetation is cleared.

[Figure: idealised saw-tooth station temperature history – slow rise from shelter growth, sudden drop at each clearing]

So what happens now when the automatic computer algorithm finds the breakpoints at year 10 and 20? It automatically removes them, shifting everything before each breakpoint down by the size of the sudden drop, as follows.

[Figure: the same series after automatic breakpoint removal]

So what have we done? We have introduced a warming trend for this station where none existed.

Now, not every station is going to have sheltering problems, but there will be enough of them to introduce a certain amount of warming. The important point is that there is no countering mechanism – there is no process that will produce slow cooling, followed by sudden warming. Therefore the adjustments will always be only one way – towards more warming.
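A back-of-the-envelope simulation of the saw-tooth argument (again my own sketch with invented numbers, not NCDC’s algorithm: a flat true climate, roughly 0.1°C of shelter warming per year, and clearing every ten years):

import numpy as np

years = np.arange(30)

# Flat true climate plus ~0.1 degC/yr of shelter warming, reset to zero
# when the vegetation is cleared at years 10 and 20 (the saw-tooth).
record = 0.1 * (years % 10)

# Naive breakpoint removal: treat each sudden drop as non-climatic and
# shift everything before it down by the size of the drop.
adjusted = record.copy()
for b in (10, 20):
    step = adjusted[b] - adjusted[b - 1]   # -0.9 degC at each clearing
    adjusted[:b] += step

raw_trend = np.polyfit(years, record, 1)[0] * 100
adj_trend = np.polyfit(years, adjusted, 1)[0] * 100
print(f"raw: {raw_trend:+.1f} degC/century, adjusted: {adj_trend:+.1f} degC/century")

With these made-up numbers the adjusted trend comes out far steeper than the raw one, even though the underlying climate signal was flat. That is exactly the one-way bias described above.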

UHI (Urban Heat Island)

The UHI problem is similar (Zhang et al. 2014). A diagram from Hansen (2001) illustrates this quite well.

[Figures: diagrams from Hansen et al. (2001) showing a station move away from the city centre]

In this case the station has moved away from the city centre, out towards a more rural setting. Once again, an automatic algorithm will most likely pick up the breakpoint and perform the adjustment. There is also no countering mechanism that produces a long-term cooling trend. If even relatively few stations are affected in this way (say 10%), that will be enough to skew the trend.

References

1. Hansen, J., Ruedy, R., Sato, M., Imhoff, M., Lawrence, W., Easterling, D., Peterson, T. and Karl, T. (2001) A closer look at United States and global surface temperature change. Journal of Geophysical Research, 106, 23947-23963.

2. Hessell, J. W. D. (1980) Apparent trends of mean temperature in New Zealand since 1930. New Zealand Journal of Science, 23, 1-9.

3. McAneney K.J., Salinger M.J., Porteus A.S., and Barber R.F. (1990) Modification of an orchard climate with increasing shelter-belt height. Agricultural and Forest Meteorology, 49, 177-189.

4. Zhang, L., Ren, G.-Y., Ren, Y.-Y., Zhang, A.-Y., Chu, Z.-Y. and Zhou, Y.-Q. (2014) Effect of data homogenization on estimate of temperature trend: a case of Huairou station in Beijing Municipality. Theoretical and Applied Climatology, 115(3-4), 365-373.

Comments

Bloke down the pub
June 10, 2014 3:40 am

The fact that adjustments always seem, at least, to cool the past and warm the present should have set alarm bells ringing, but there’s none so deaf as those that don’t want to hear.

johnmarshall
June 10, 2014 3:44 am

Very interesting, many thanks.

June 10, 2014 3:58 am

And thus the problem with all the models. GIGO will not be denied.

Rob
June 10, 2014 4:03 am

Changing the entire prior record is a quick, dirty and often erroneous methodology. More accurately, all validated change points should be treated as entirely new and independent stations.

Nick Stokes
June 10, 2014 4:26 am

Here is some detail about the GHCN temperature record in Wellington WMO 93436, which I believe is Kelburn. There weren’t any adjustments in 1949 or 1959, when the trees were cut. Nor is a change clear in 1969, though there was an interruption to adjustment in the early ’70s.
The main big event was in 1928, when the site moved from Thorndon at sea level to Kelburn at 128 m. The algorithm picked that one.

June 10, 2014 4:27 am

I don’t understand the rationale for the breakpoints and why they would adjust all the station’s
past data. Exactly what are they supposedly correcting for? Bad temp data? Bad station location?

Stephen Wilde
June 10, 2014 4:29 am

Climate scientists really aren’t all that bright are they ?

Alex
June 10, 2014 4:41 am

It’s difficult to work out the rationale when some people are not rational.

Admin
June 10, 2014 4:42 am

In my old science class we had a name for data which required adjustments which were of a similar magnitude to the trend we were attempting to analyse.

Paul Carter
June 10, 2014 4:51 am

Nick Stokes says:
“… Wellington WMO 93436, which I believe is Kelburn. There weren’t any adjustments in 1949 or 1959, when the trees were cut.”
Wellington is very windy – one of the windiest places in NZ and the Kelburn Stevenson screen is on the brow of a hill which is exposed to strong winds from every angle. The site is visible (about 2kms) from my house and I get much the same winds. With the strength of those winds, the shelter from trees makes less difference to the overall temperature at the site compared with other, less windy tree-sheltered sites. The biggest impact to temperature at Kelburn is the black asphalt car-park next to the Stevenson screen.

June 10, 2014 4:53 am

We all know that coming up with one “average temperature” for the globe is stupid beyond belief. Your post highlights some of the problems with doing that. But we all also should have known that the government “scientists” will see what they want to see and disregard the rest. Does anyone in the world really think that Hansen was trying to get accurate measurements when he had the past cooled and the present heated up artificially?
The best we can do is use satellites for measurement to try to get some sort of “global temperature” and we will have to wait a long time before that record is long enough to have real meaning. Why is it that the long term stations that have been in rural areas and undisturbed by the heat island effect always seem to show no real 20th century warming outside the normal and natural variation? F’ing luck?

Stephen Richards
June 10, 2014 5:04 am

How many times does it need to be said that the modification of past, pre-calibrated data is unacceptable as part of any scientific activity?

Alex
June 10, 2014 5:04 am

Clearly the man responsible for this is the Marlboro man (X files)

ferdberple
June 10, 2014 5:11 am

Nick Stokes says:
June 10, 2014 at 4:26 am
Here is some detail about the GHCN temperature record
===========
the raw data for this site shows decreasing temperatures over the past 130 years. the adjusted data shows increasing temperatures over the past 130 years.
man-made global warming indeed.
the author has raised a valid point with automated adjustment. as cities and vegetation grow up around a weather station, this will lead to a slow, artificial warming due to sheltering. human intervention to reduce the effects of sheltering will lead to a sudden cooling.
the pairwise homogenization algorithm is biased to recognize sudden events, but fails to recognize slow, long term events. Since sudden events are more likely to be cooling events and slow events are more likely to be warming events (due to human actions) the algorithm over time will induce a warming bias in the signal. thus it can be said that global warming is caused by humans.
the author also correctly identifies that the human subconscious prevents us from recognizing these sorts of errors, because the scientific consensus is that temperatures are warming. thus, the experimenters expect to see warming. any errors that lead to warming are thus not seen as errors, but rather as confirmation.
this article raises a very valid signal processing defect in the pairwise homogenization algorithm.

Jonathan Abbott
June 10, 2014 5:15 am

Could anyone post up explicit examples of these types of adjustments in any of the various temperature series?

Nick Stokes
June 10, 2014 5:26 am

ferdberple says: June 10, 2014 at 5:11 am
“the raw data for this site shows decreasing temperatures over the past 130 years. the adjusted data shows increasing temperatures over the past 130 years.”

No, what it shows is mostly steady temperatures up to about 1928, then a big dive, then increasing temperatures since. In 1928 the site moved from Thorndon at 3 m altitude to Kelburn at 128 m. That caused a 0.8°C drop in temperature. The automatic algorithm discovered that and made the correct adjustment. That is why the trend quite properly changed.

ferdberple
June 10, 2014 5:51 am

No computer algorithm can correctly adjust the temperature record based on temperature alone. this is a basic truism of all computer testing. you cannot tell if your “correction” is correct unless you have a “known correct” or “standard” answer to compare against.
to correctly adjust temperatures, you need an additional column of data. something that gives you more information about the temperature, that allows you to determine if an adjustment is valid.
thus the author is correct. the pairwise homogenization algorithm is likely to create errors, because it is more sensitive to errors in the short term than the long term. thus, any bias in the temperature distribution of short and long term errors will guarantee that the pairwise homogenization algorithm will introduce bias in the temperature record.
Unless and until it can be shown that there is no temperature bias in the distribution of short term and long term temperature errors, the use of the pairwise homogenization algorithm is unwarranted. The author’s sheltering argument strongly suggests such a bias exists, and thus any temperature record dependent on the pairwise homogenization algorithm is likely to be biased.

Steve Wood
June 10, 2014 5:52 am

Nick Stokes, 5.26 : “……… In 1928 the site moved from Thorndon at 3 m altitude to Kelburn at 128 m. That caused a 0.8°C drop in temperature. The automatic algorithm discovered that and made the correct adjustment. That is why the trend quite properly changed.”
‘Properly changed’? Isn’t there an incorrect assumption here that temperatures at 3m will trend the same as the recorded temperatures at 128m? 125m is a big height difference. Or is it me?

Nick Stokes
June 10, 2014 6:04 am

ferdberple says:
“The author’s sheltering argument strongly suggests such a bias exists”

Well, it’s a theoretical argument. But the examples don’t support it. Kelburn does not show adjustment when the trees were cut. And as for Auckland, it’s a composite record between Albert Park and the airport at Mangere, which opened in 1966. I don’t know when the record switched, but there is a break at 1966. Before that there is 100 years of Albert Park, with no adjustment at all except right at the beginning, around 1860.

ferdberple
June 10, 2014 6:04 am

Nick Stokes says:
June 10, 2014 at 5:26 am
No, what it shows
========
The unadjusted data shows temperatures decreasing over 130 years. The adjusted data shows temperatures increasing over 130 years. This is a simple fact.
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/5/50793436001.gif
you are rationalizing that the corrections have “improved” the data quality. I am arguing that this is unknown based on the temperature data.
your argument that the data is improved is that the station location was changed in 1928. however, that information is not part of the temperature record, which confirms my argument above. you cannot know if the temperature adjustment is valid based on temperature alone. you need to introduce another column of data. in this case station location.
this is the fundamental problem with trying to use the temperature record itself to adjust temperature: it contains insufficient information to validate that the corrections are in fact corrections and not errors.

Alex
June 10, 2014 6:07 am

Nick Stokes says:
June 10, 2014 at 5:26 am
‘The automatic algorithm discovered that and made the correct adjustment. That is why the trend quite properly changed.’
Does that mean that you approve of data change?
To me, raw data is sacrosanct. It may have been gathered ‘incorrectly’ but it should stay the same. It may be considered faulty at some other time but you don’t change it. You explain why it was faulty or different
This is not an experiment you can try on different ‘runs’. You only get one shot at getting it right or wrong.

Truthseeker
June 10, 2014 6:11 am

To Nick Stokes,
Try and explain away all of the warming bias that has been introduced into USHCN data that Steve Goddard has uncovered in many posts. For some of the recent analysis, start at this post and go from there.
http://stevengoddard.wordpress.com/2014/06/08/more-data-tampering-forensics/

Peter Azlac
June 10, 2014 6:11 am

Research results from China and India confirm the critical role of wind speed and vapour pressure in changes in surface temperature, whilst answering an apparent paradox: the IPCC claims that increased surface temperature will induce a positive feedback from water vapour, by increasing surface evaporation and so producing higher back radiation from greater low-level cloud formation, yet the measured global decreases in evaporation from Class A pan evaporation units do not support this claim. An example is these data from India, which show the critical influence of soil moisture, hence precipitation, in combination with changes in wind speed that affect the rate of evapo-transpiration.
http://www.tropmet.res.in/~bng/bngpaper/999238-Climatic_Change_2013_Reprint.pdf
Precipitation levels are linked to ocean cycles (ENSO, PDO, AMO etc.), and so we might expect temperature anomaly breakpoints to be affected by them also, especially minimum temperatures. The main effect of shading of the meteorological sites is to reduce evapo-transpiration, hence the cooling effect, whilst lowered precipitation reduces soil moisture and hence ground cover, allowing greater retention of surface radiation that is released at night to increase minimum temperatures. Thus in many, if not most, instances temperature anomalies are a measure of changes in precipitation and wind speed, and not in any significant way of the effects of increases in non-condensing GHGs such as CO2 and methane.

Latitude
June 10, 2014 6:15 am

but there will be enough of them to introduce a certain amount of warming…
Like a fraction of a degree that can’t even be read on a thermometer… that can only be produced by math
http://suyts.wordpress.com/2014/05/27/how-global-warming-looks-on-your-thermometer/

Bob Dedekind
June 10, 2014 6:15 am

Hi Nick,
Nobody said that the algorithm can’t pick up breakpoints, it’s obvious it would pick up 1928. Also, as Paul mentioned before, Kelburn is less affected than other sites – I just used it because the station history specifically mentioned the cut-back dates.
What you have to do is explain to us all exactly what checks are implemented in the algorithms that PREVENT the artificial adjustments I listed in my post.
My apologies for the slow reply, we have high winds here at the moment and the power went out for a while.
I’ll also be offline for a few hours as it’s past 1AM over here and I could do with some sleep.
Good night all.

ferdberple
June 10, 2014 6:19 am

This problem with automated corrections is not specific to temperature data. Think of the human body. a disease that causes a large, sudden change is almost always recognized and eliminated by the immune system. however, a disease that causes a slow change in the body is poorly recognized by the immune system and can be extremely difficult to eliminate.
data errors act in a similar fashion. normally, if you are only interested in the short term you need not worry about slow acting errors. TB and cancer contracted yesterday do not much affect you today. However, when you want to build a temperature record over 130 years it is the slow acting errors that prove fatal to data quality.

Nick Stokes
June 10, 2014 6:24 am

Alex says: June 10, 2014 at 6:07 am
“Does that mean that you approve of data change?”

The data hasn’t changed. It’s still there in the GHCN unadjusted file.
People adjust data preparatory to calculating a global index. Wellington is included as representative of the temperature history of its region. Now the region didn’t undergo a 0.8°C change in 1928. They moved the weather station. That isn’t information that should affect the regional or global history.
ferdberple
” which confirms my argument above. you cannot know if the temperature adjustment is valid based on temperature alone”

Well, this one doesn’t confirm it. The computer looked at the temperature record and got it right.
Steve Wood says: June 10, 2014 at 5:52 am
“Isn’t there an incorrect assumption here that temperatures at 3m will trend the same as the recorded temperatures at 128m? 125m is a big height difference. or is it me?”

No, there’s an observed change of about 0.8°C, and that’s when the altitude change happened. They are saying that that isn’t a climate effect, and changing (for computing the index) the Thorndon temps to match what would have been at Kelburn, 0.8°C colder.

NikFromNYC
June 10, 2014 6:25 am

The elephant in the room is the fake “former skeptic” Richard Muller and his sidekick Steven Mosher with their extreme and highly parameterized example of Steven Goddard worthy black box data slicing and dicing to form a claimed hockey stick, but oddly enough the alarmist version was funded directly by the Koch brothers. Oh, look, it suddenly matches climate models in the last decade, just like the SkepticalScience.com tree house club Frankenstein version does where Cowtan & Way used satellite data to up-adjust the last decade in a way that the satellite data itself falsifies.
“I have no idea how one deals with this– to be candid, McIntyre or Watts in handcuffs is probably the only thing that will slow things down.” – Robert Way in the exposed secret forum of John Cook’s site.

Alex
June 10, 2014 6:39 am

Ok Nick.
You don’t approve of mangling raw data

ThinkingScientist
June 10, 2014 6:39 am

Nick stokes says:
“No, what it shows is mostly steady temperatures up to about 1928, then a big dive, then increasing temperatures since. In 1928 the site moved from Thorndon at 3 m altitude to Kelburn at 128 m. That caused a 0.8°C drop in temperature.”
The elevation change was 125 m, which at the typical lapse rate of 0.64 degC/100 m gives a shift for the elevation change of…0.8 degC, as you state.
BUT…
1. The actual shift from unadjusted to adjusted data over the period 1864 to 1927 in the GHCN data for Wellington, NZ is 0.98 degC, NOT 0.8 degC.
2. There are additional and complex adjustments applied after about 1965.
If we simply apply a correction of 0.8 degC to pre-1928 unadjusted data the regression slope (through annual averages) is +0.44 degC/century
If we only apply a correction of 0.98 degC to pre-1928 unadjusted data the regression slope (through annual averages) is +0.65 degC/century
If we use the final GHCN adjusted data the regression slope (through annual averages) is 0.93 degC/century.
So the simple elevation correction is not the whole picture. The final trend is over 2X greater than the trend with just the elevation correction.

ferdberple
June 10, 2014 6:47 am

Nick Stokes says:
June 10, 2014 at 6:24 am
The computer looked at the temperature record and got it right
===========
what the computer got right was the date.
however, you needed to introduce a new column of data to determine that. you could not determine even the date from the temperature record alone.
thus, if you need to add another column to validate the data, then the added column (location) is what should be driving your adjustments. not the temperature column.
this is basic data processing. you don’t use the column you are trying to correct to adjust itself, because this introduces new errors. rather you introduce an additional, independent column on which to base your adjustments.

Alex
June 10, 2014 7:00 am

ThinkingScientist says:
June 10, 2014 at 6:39 am
Perhaps I was premature in my earlier comment to Nick. I didn’t have your information at my fingertips. I have a great interest in AGW stuff but I’m not that ‘anal’ about it. I mean no disrespect with that last comment. Some people go into deeper research about things that I don’t.

ferdberple
June 10, 2014 7:04 am

The final trend is over 2X greater than the trend with just the elevation correction.
==========
and the algorithm fails to recognize slow changes, such as the growth of vegetation or the growth of cities and farms.
instead the algorithm is only sensitive to rapid changes. thus, it will introduce bias unless the temperature distribution of slow and fast acting changes is identical. something that is highly unlikely to be true worldwide.
thus, the pairwise homogenization algorithm is unsuitable for data quality enhancement of the temperature record.
what is required is an algorithm that is insensitive to the rate of change. it needs to correct changes that take decades with the same accuracy as it corrects changes that take days.
this cannot be done using only the temperature record, if your intent is to determine if there is a long term trend in the data. what is required is an algorithm based on non-temperature data, such as station location.

Alex
June 10, 2014 7:06 am

I apologise for the last sentence. Truely disgusting for an alleged english teacher in a university

Alex
June 10, 2014 7:08 am

I’m not getting any better. I’m outta here

Tom In Indy
June 10, 2014 7:10 am

ThinkingScientist says:
June 10, 2014 at 6:39 am

My thoughts as well. Can you also take a look at the post 1928 trend before the adjustment and after the adjustment?
Maybe Zeke can explain where the increase in trend comes from in the post 1928 data. It looks pretty flat in the “QCU” Chart compared to the post 1928 uptrend in the “QCA” Chart at this link –
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/5/50793436001.gif
.

Gary Pearse
June 10, 2014 7:11 am

So abrupt changes in temperature are assumed to need adjustment (it is automatic). Are abrupt changes not possible except by foibles of location and operation of equipment? Balderdash. This is how the all-time high in the US, 1937, got adjusted some tenths of a degree C below 1998 by Hansen and his homog crew.

latecommer2014
June 10, 2014 7:15 am

While airport temperatures are necessary at airports, no such stations should be allowed in the national system. Example from yesterday: I live in a semi-rural area 20 miles from an airport station surrounded by concrete. Yesterday the reported temp from this site was 106F while my personal station recorded 101F, and a nearby station more embedded in suburbia read 103F. Which do you think was reported as the official temp? Surprise, it’s a new record and the team says “see?!”.

ThinkingScientist
June 10, 2014 7:18 am

Just to be clear, the trend is for the full series from 1864 to the latest measure, not just pre-1928. Sorry if not clear.

John Slayton
June 10, 2014 7:26 am

While sheltering by plant growth may be the most obvious and frequent case of gradual biasing, the fact is that any gradual process may change the measured temperatures and will not be caught by the adjustment algorithm unless there is a correction that occurs suddenly. So a Stevenson screen that is ill-maintained, as the whitewash deteriorates to bare wood, will likely show rising temperatures. When Oscar the Observer (or more likely his wife) decides that it needs painting, will the change be large and sudden enough to be caught by the system and invoke an adjustment? One question I’d like to see answered: “Exactly how big and how sudden a discrepancy will trigger adjustment?”
There are any number of potential gradual changes in station environments. In Baker City, Oregon,
the current station is at the end of a graveled access road running alongside the airport runway. Since runways tend to be laid out parallel to prevailing winds, it is possible that air warmed by transit along the road frequently passes directly into the station. (OK, it is also possible that exactly the opposite occurs; I have never been able to establish which way the wind blows up there. Sigh…)
What is of interest here is an unexpected change in the road. If the gallery were up, I would put a link here to a photo titled “Baker City Gravel Weathering.” What it shows is that the gravel immediately under the surface is much lighter in color than at the very top. There is a surprising amount of weathering that has taken place in the short time since that road was graveled. To what extent the change would affect heating of passing air, to what extent the air would be traveling into the weather station, etc, I don’t know. I think it unlikely in this case that it has much effect. But that’s not my point.
The point is that any number of unexpected changes in the micro-environment of a station can influence the readings, and no general computer algorithm will even catch them, much less correct the record.

Andrew
June 10, 2014 7:28 am

Wellington is very windy – one of the windiest places in the solar system (FTFY)

ARW
June 10, 2014 7:46 am

Taking the Wellington example (ferdberple post at 6.04am): if the original 3m ASL station location data was simply recorded as ending in 1928 and the new station at 128m ASL was recorded as a completely new station, then there would be no need to apply an automatic adjustment to the data. They are different station locations. What are the rules (if any) about moving stations and then recombining the data into a single station? What percentage of the long-term stations suffer from this “mangling” of the data if there was a change in location but not name? How far apart in xyz do they have to be to be considered new stations?

ThinkingScientist
June 10, 2014 7:47 am

Tom in Indy says:
“Can you also take a look at the post 1928 trend before the adjustment and after the adjustment?”
Yes, the linear regression trends for the periods 1929 – 1988 (annual averages) are:
Unadjusted GHCN 1929 – 1988 is +0.96 degC / Century
Adjusted GHCN 1929 – 1988 is +1.81 degC / Century

Theodore
June 10, 2014 8:13 am

The painting of Stevenson screens provides another source of a spurious breakpoint, one that ends up adjusting the past temps down, because fresh white paint absorbs less heat than faded wood.

June 10, 2014 8:19 am

Bob,
For Kelburn, at least, I don’t see any sort of saw-tooth pattern in the data for either Berkeley or NCDC, and the detected breakpoints don’t correspond with your clearing dates.
Berkeley – http://berkeleyearth.lbl.gov/stations/18625
NCDC – ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/5/50793436001.gif
Sawtooth patterns are one of the harder inhomogeneities to deal with, though they can go both ways (e.g. sheltering can also progressively reduce the amount of sunlight hitting the instrument, as in the case of a tree growing over the sensor in Central Park, NY). Most homogenization approaches are structured to find larger breaks (station moves, instrument changes) and not to over-correct smaller ones for exactly this reason.
We are working on putting together methods of testing and benchmarking automated homogenization approaches that will include sawtooth pattern inhomogeneities. You can read more about it here: http://www.geosci-instrum-method-data-syst-discuss.net/4/235/2014/gid-4-235-2014.pdf
As far as UHI goes, the concern that homogenization will not effectively deal with trend biases is a reasonable one. For the U.S., at least, homogenization seems to do a good job at removing urban-rural differences: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013.pdf

June 10, 2014 8:26 am

Rob,
I agree that treating breakpoints as the start of a new station record is a better approach. We do that at Berkeley Earth.
.
Gary Pearse,
It’s not just abrupt changes. It’s sustained step changes. If all temperatures after, say, 1928 are on average 0.8 C colder, AND this pattern is not seen at nearby stations, then it will be flagged as a localized bias (in the Wellington case due to a station move to a new location 100 meters higher than the old one).

Barrybrill
June 10, 2014 8:36 am

Paul Carter says shelter is less important at Kelburn because it is an exceptionally windy site. On the contrary, this windiness means the data is particularly susceptible to contamination by vegetation growth.
After cutbacks in 1949, 1959 and 1969, the Met Service request for a cutback in 1981 was declined by the Wellington City Council. Complaining that the trees were causing routine 20% distortions in wind speed, the Met Service re-built its anemometer in a new location. But the thermometer has stayed put while the trees have continued to grow over the last 32 years.
Amusingly, the Wellington daily, The Dominion, reported the Council’s refusal to allow tree trimming as a deliberate attempt to produce warmer reported temperatures. The Council felt that Wellington’s “Windy City” sobriquet was damaging its image!

June 10, 2014 9:24 am

The other problem comes in, as the number of stations is reduced, if we see a loss of “colder” stations. I *believe* (but am not certain) that I read a few years back that as the number of stations in NOAA’s network has declined, the number of higher-latitude and higher-altitude stations had been declining fastest. If that is true, when looking at surrounding stations and trying to grid temperatures, that would be expected to introduce a warm bias as the colder stations have been removed from the process. I suppose some of this could be compensated for by using some of the data from the SNOTEL sites in some, but not all, parts of the country. Does anyone have any more current information on the nature of the stations being removed from the network?

Ashby Manson
June 10, 2014 9:41 am

This is an interesting explanation for the systemic cooling of past records. An innocent error that makes sense. How does it check against the data and revisions?

pochas
June 10, 2014 10:09 am

Every adjustment gives you a little wiggle room to favor your own hypothesis. That’s where Global Warming comes from.

barrybrill
June 10, 2014 10:09 am

“Thinking Scientist” says:
“The [Wellington] linear regression trends for the periods 1929 – 1988 (annual averages) are:
Unadjusted GHCN 1929 – 1988 is +0.96 degC / Century
Adjusted GHCN 1929 – 1988 is +1.81 degC / Century”
This suggests that opaque GHCN adjustments almost doubled an already high warming trend during this 60-year period. What could have triggered them? Did they record the Thorndon-Kelburn site relocation of December 1927 as occurring post January 1929?
The unadjusted data show a temperature increase way above the global average during that period, presumably as a result of shelter/UHI contamination. Nearby stations show only mild warming during the same period.

Gary Palmgren
June 10, 2014 10:13 am

“The important point is that there is no countering mechanism – there is no process that will produce slow cooling, followed by sudden warming.”
Actually there is a process. If a forest grows on a ridge next to a temperature station, the air will cool significantly under the forest canopy and the cool air will flow down the ridge and cool the thermometer. Harvest the trees and the temperature will go up. This is very apparent when riding a motorcycle past such ridges on a warm day. This is just another example of why temperature adjustments cannot be automated and must be done on a site by site basis.

June 10, 2014 10:16 am

I should mention that NCDC’s PHA doesn’t just look for step changes; it also looks for (and corrects) divergent trends relative to neighboring stations. It should be able to correct equally for the gradual trend bias and the sharp revision to the mean, though as I mentioned earlier this could be better tested using synthetic data (a project that ISTI folks are working on). Having a standard set of benchmarks (which include different types of inhomogeneities) to test different algorithms against should help ensure that there is no residual bias.
Berkeley does things slightly differently; it mainly looks for step changes, but downweights stations whose trends sharply diverge from their neighbors when creating regional temperature fields via kriging.
The Kelburn station is a good example of the need for -some- sort of homogenization. The station move in 1928 had an impact similar to the century-scale warming at the location. Not correcting for that (or similar biases due to TOBs changes or instrument changes) does not give you a pure, unbiased record.

MikeH
June 10, 2014 10:23 am

Would this be a good analogy? And would anyone be able to get away with this?
Let’s say I purchased IBM stock in 1990 at $100 per share.
And for argument’s sake, I sold it today at $200.
BUT, since there was inflation in between 1990 and 2014, I calculated the original $100 per share purchase would be equivalent to a $150 share purchase price in today’s finances. Therefore, my capital gain is really $50 per share, not the actual $100 per share.
Would the IRS let me use that creative math? To me, this is the same creative math being used in the temperature record.
BTW, if this is a real stock tax strategy, please let me know. I usually buy high and sell low, I need all of the help I can get.

Reply to  MikeH
June 10, 2014 12:53 pm

BTW, if this is a real stock tax strategy, please let me know. I usually buy high and sell low, I need all of the help I can get.

Your income taxes are indexed (thanks to Ronald Reagan), your capital gains are not. Sorry, you just made $100 profit and are going to be taxed on the whole $100.

Michael D
June 10, 2014 10:35 am

Just go back to the raw data for large-scale roll-ups of the temperature, and trust that the discontinuities will either a) all average out due to their random distribution, or b) introduce temporary artefacts that wise climate scientists can point to, wisely, and explain. Don’t remove the artefacts – that introduces more complex artefacts that are much more difficult to explain.

Tom In Indy
June 10, 2014 10:36 am

The claim is that it’s appropriate to make adjustments to breakpoints in the micro-data at the individual station level in order to get a more accurate picture of the underlying trend. If true, then why is it not equally valid to make similar adjustments to breakpoints at the macro-data level? For example, the 1998 El-Nino? There is a clear step change that represents an anomalous deviation from the underlying trend. A second claim is that Enso is a “random” element of climate, so why isn’t a portion of this anomalous and random event removed from the macro-data in order to get a more accurate picture of the underlying trend?
Here is the image from the new post on the UAH Global Temperature Update for May.
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_May_2014_v5.png
It’s obvious that the 1998 El-Nino contributed to the trend, but according to the logic that supports adjustments to the GHCN data, a portion of that El-Nino effect on the trend should be removed. It’s an outlier from surrounding data points (monthly anomalies) just like break points at the local station level are outliers from surrounding stations.

Victor Venema
June 10, 2014 10:37 am

Bob Dedekind, you can download the free objective (automatic) homogenization algorithm of NOAA. Did you do so and test it with a saw-tooth signal to see if your (or more accurately James Hansen’s) potential problem is a real one?

June 10, 2014 10:39 am

Tom In Indy,
It’s really a question of scale. Changes in climate over time tend to be pretty highly spatially correlated. If one station has a big step change that doesn’t appear in any other nearby stations, it’s likely an artifact of some localized bias (station move, instrument change, TOBs change) rather than a real climate signal. ENSO, on the other hand, affects broad regions of the world and is not in any way a result of instrument-related issues.

Doug Proctor
June 10, 2014 11:03 am

Hiking in the Canadian Rockies I’ve noted, to my distaste, that coming off a pleasant open ridge in full sun, if I drop down into a limber pine stand (max 5 m high) the temp jumps and I am almost claustrophobic with the sudden heat AND humidity. And, of course, no breeze. I didn’t know there was an empirical relationship.
The GISS temperature adjustment vs time graph was the first Whoa! moment for me: the implication was that all previous temperature measurements were essentially incorrect, reading far too high – relative to what we measure now. The concept didn’t bother me too much if the data was pre-1930, but I found the continual “correction” disturbing, especially for temperature data collected in the post 1970 period. I didn’t believe that there was a fundamental problem with measuring temperatures back then UNLESS a warmer current temperature was desired that caused GISS to be comparing apples to oranges.

Editor
June 10, 2014 11:03 am

First, my thanks for an interesting post.
Next, some folks here seem to think that “raw data” is somehow sacred. Well, it is, but only for preservation purposes. All real data has to go through some kind of QC, quality control. So I have no theoretical problem with doing that … but as always the devil is in the details.
Now, in my opinion Berkeley Earth has done both a very good and a not-so-good job of curating and processing the data. It’s very good for a simple reason—they have been totally transparent about the data and the code. Not only that, but as in the example under discussion, whose Berkeley Earth record is here, they display very clearly the points where they think the data has problems, and what they’ve done about it.

They’ve done a not-so-good job of it, in my opinion, for a couple of reasons. First, they use the data website as a propaganda platform to spread their own political views about the climate issues. For example, at the top of the individual data pages it says in big bold type,

Read our new Skeptic’s Guide to Climate Change, and learn the facts about Global Warming

To me, that’s a sales pitch, and it is a huge mistake. If you as the curator of a dataset use that data as a platform for propagandizing your own alarmism, you call into question your impartiality in the handling of the data. It may not be true, but as the handler of the data, there’s a “Caesar’s Wife” issue here, where they should avoid the appearance of impropriety. Instead, they have been very vocal proponents of a point of view that, curiously, will make them lots of money … shocking, I know, but Pere et Fille Mueller have a for-profit business arm of their “impartial” handling of the climate data. It reminds me of the joke in the Pacific islands about the missionaries—”They came to the islands to do good … and they have done very well indeed.” For me, using the data web site to pitch their alarmism is both a huge tactical error, and an insight into how far they are willing to go to alarm people and line their own pockets … unsettling.
I also say they’ve done a not-so-good job because in my opinion they have overcorrected the data. Take a look at the Wellington data above. They say that there are no less than ten “empirical breaks” in the data, by which they mean places where the data is not like the average of the neighborhood.
I’m sorry, but I find that hard to swallow. First off, they show such “empirical breakpoints” in the 1890s … I find it very difficult to credit that there are enough neighboring thermometers in 1890s New Zealand to even begin to make such a determination.
It’s part of the difficult question of discontinuity. Let me use the example of where I live, on the Northern California coast an hour north of San Francisco. I live in a weather zone which has such anomalous weather that it has its own name. It’s called the “Banana Belt”, because it almost never freezes. It is a very, very narrow but long zone between about 600-800′ (180-240m) in elevation on the ocean side of the first ridge of mountains inland from the coast. It’s most curious. It freezes uphill from us, and downhill from us, hard frosts, but it almost never freezes here.
So if you have a year with very few freezes (it is California, after all), the temperature record at my house isn’t too different from the temperatures recorded at the weather station in the valley.
But if you have, say, a three-year stretch with a number of hard frosts, all of a sudden we have an “empirical break” between the temperature at my house and the regional average temperature, one which the Berkeley Earth folks might “adjust” out of existence.
In addition, temperatures here are very wind dependent. Because we’re on the coast and the wind typically is running along the coast, if the wind on average switches by only a few degrees, we get a warm land breeze instead of a cool sea breeze … and such shifts in wind are sometimes quite long-lasting. Again, when this happens, we get an “empirical break” between the weather here, and what is being recorded at the local weather station.
Note also that in general there is no “halfway” in the wind. We’re either getting a sea breeze or a land breeze, and when one changes to the other, it’s quick and boy, do you notice a difference. It is not a continuous process. It is an abrupt discontinuous shift from one thermal regime to another.
This highlights the problem—just how discontinuous do we expect our temperatures to be, both in time and space?
Berkeley Earth uses “kriging” to create a “temperature field”. Now, this is not a bad choice of how to go about it, and sadly, it might even be our best choice. It certainly beats the hell out of gridcell averaging …
But kriging, like all such methods, doesn’t handle edges very well. It assumes (as we almost must assume despite knowing it’s not true) that if at point A we have a measurement of X, and at point B we have a measurement of Y, that half-way between A and B the best guess is the average of X and Y.
But that’s not how nature works. If point A is in the middle of a cloud and point B is near it in clear air, the best guess is that at the midway point it is either 100% clear air or 100% cloud. And guessing “half-cloud” will almost never be correct. Nature has edges and discontinuities and spots and stripes. And although our best guess is (and almost has to be) smooth transitions, that’s not what is actually happening. Actually, it’s either a sea breeze or a land breeze, with discontinuous shift between them. In fact, nature is mostly made up of what the Berkeley Earth folks call “empirical breaks” …
I mentioned above the question of how discontinuous we expect our weather to be. The problem is made almost intractable by the fact that we expect to find discontinuities such as those where I live even if our records are perfect. This means that we cannot determine the expected prevalence of discontinuities using our records, because we cannot tell the real discontinuities like my house from the spurious. If my temperatures here at my house are different from those down in the valley, there is no way to tell from just the temperature data alone whether that is an actual discontinuity, or whether it is an error in the records—it could be either one. So we don’t even know how discontinuous we expect the temperature record to be. And that makes the level at which we “adjust” the temperature purely a judgement call.
Berkeley Earth defines what they call a “regional expectation” of temperature. If a given station departs from that regional expectation, it is “adjusted” back into compliance with the group-think. The obvious problem with that procedure, of course, is that at some setting of their thresholds for action, the temperatures at my house will be “adjusted” to match the region. After all, the “Banana Belt” is a very narrow strip of land which is distinctly different from the surrounding region, we defy “regional expectations” every day.
So the real question in this is, where do you set the rejection level? At what degree of difference do you say OK, this station needs adjusting?
Looking at the Wellington record above, I’d say they’ve set the rejection level, the level where they start messing with the data, far too low. I’m not buying that we can tell that for a couple of years in the 1890s the Wellington record was reading a quarter of a degree too high, and that when it dropped down, it resumed a bit higher than when it left off. I’d say they need to back off on the sensitivity of their thresholds.
This is where their political posturing returns to bite them in the gearshift knob. As I mentioned, at some level of setting of the dials, the temperatures at my house get “adjusted” out of existence … and the level of the setting of those dials is in the hands of Richard Mueller et al., who have a clearly demonstrated political bias and who have shown a willingness to use the data for propaganda purposes.
The huge problem with this situation is, of course, that the long-term temperature trend is inversely proportional to the setting of the level at which you begin adjustment. If you set the level low, you adjust a lot, and the long-term trend goes up. If you set the level high, you only adjust a little, and the long-term trend is smaller.
And if you think Richard Mueller doesn’t know that … think again. In my estimation, that is the very reason why the level is set as low as it is, a threshold so easily reached that their automatic algorithm is adjusting a couple of years in 1890 in Wellington … because the more adjustments, the higher the trend.
So I’d disagree with the title of this post. The problem is not that the automatic adjustments don’t work. The problem is that with Richard Mueller’s hand on the throttle, automatic adjustments work all too well …
Best to everyone on a foggy cool morning here, with water dripping off of the magically self-watering redwood trees who can pluck their moisture from the very air, on a day when the nearest weather station says it’s hot, dry, and sunny …
w.

PeterB in Indianapolis
June 10, 2014 11:12 am

With modern technology, a properly calibrated digital thermometer can take individual readings every few seconds which can all be put into a computer file as a 24-hour time series. Every station, using the proper technology, could reasonably have a MINIMUM of 3600 temperature observations per day, which would give a MUCH better resolution of actual temperature at a given station for each given day.
The problem comes in when you attempt to AVERAGE such things into one “observation”.
One of the best examples I can give for this is one of my favorite days when I was a young boy.
I was asleep at midnight, but I know that the temperature in my area was in the mid 40s (F). By 10:30 in the morning, the temperature was 57 (again F). Then a powerful cold front ripped through the area, and by 1:30 PM local time the temperature was 7 (yes, F). By 11:59 PM, it had dropped to 5F.
So…. if you only had ONE station reading from a nearby station for that day, or if you AVERAGED a bunch of readings for that particular day, it wouldn’t tell you squat about what ACTUALLY happened on that day.
To me, the best you could do is take as many observations as possible over 24 hours at a station, and average them out over the whole 24 hours, but even THAT wouldn’t reflect reality in any meaningful way.
To take old station data that could have all SORTS of problems like the one I described above, and then to try to AVERAGE ALL STATIONS to create a “global temperature” is simply ludicrous. Global Temperature has ABSOLUTELY NO MEANING WHATSOEVER under those conditions.
It MIGHT have SOME meaning using modern satellite data, but prior to modern satellites, trying to calculate a global average temperature is about the most idiotic exercise I can conceivably imagine. Even with modern satellite data, the concept of “global average temperature” is still pretty dubious, but at least it is based on real data that we know the method of collection for….

kadaka (KD Knoebel)
June 10, 2014 12:06 pm

From Willis Eschenbach on June 10, 2014 at 11:03 am:

(…) Not only that, but as in the example under discussion, whose Berkeley Earth record is here, they display very clearly the points where they think the data has problems, and what they’ve done about it.
[image???]

How did you manage to embed http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Figures/18625-TAVG-Alignment.pdf which is clearly a PDF, as an image? It is not coming up for me, just a blank space with a broken image icon.

Jim S
June 10, 2014 12:14 pm

I’m sorry, when did changing data become acceptable in science?

kadaka (KD Knoebel)
June 10, 2014 12:18 pm

From Nick Stokes on June 10, 2014 at 4:26 am:

Here is some detail about the GHCN temperature record in Wellington WMO 93436, which I believe is Kelburn. There weren’t any adjustments in 1949 or 1959, when the trees were cut.

And thank you for the URL, Nick. Backing it up led me to discover a very interesting global map:
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/cas.ghcnm.tavg.v3.2.2.20140610.trends.gif
Trends in Annual TAVG, 1901 to 2013
Save and peruse before it can be “disappeared”. You may note the odd pockets of cooling in the midst of heating. How did those happen?
But mainly notice how at 70% oceans, with huge chunks of continents unaccounted for, there is very little coverage showing. Practically all of it is Northern Hemisphere land, where you’d find UHI contamination.
And from such is crafted a global average temperature? That is deliberate deception, or extreme hubris.

Editor
June 10, 2014 12:32 pm

kadaka (KD Knoebel) says:
June 10, 2014 at 12:06 pm

From Willis Eschenbach on June 10, 2014 at 11:03 am:

(…) Not only that, but as in the example under discussion, whose Berkeley Earth record is here, they display very clearly the points where they think the data has problems, and what they’ve done about it.
[image???]

How did you manage to embed http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Figures/18625-TAVG-Alignment.pdf which is clearly a PDF, as an image? It is not coming up for me, just a blank space with a broken image icon.

Curious, KD, it displays fine on my computer (Mac/Safari). Does anyone else have problems with it? If so I could embed it in a different manner.
w.

Dougmanxx
June 10, 2014 12:39 pm

Nick and Zeke, Interesting discussion, but pointless because no one knows what the “average temperature” was for those stations you bandy on about. What were they? Any clue? Or is the only thing you have an “anomaly” that changes at a whim? What are the “average temperatures” for say… 1928? What was the “average temperature” for 1928 in say… 1999? 2011? 2012? 2013? Were they different? If so, that simply means you are making “adjustments” on top of “adjustments”. Anyone who publishes “anomaly” information should be required to also publish what they are using as their “average temperature”, that way you can put to rest, quickly and quietly any of us who have questions. Why won’t anyone answer this simple question for me? What is the “average temperature”?

Louis
June 10, 2014 12:43 pm

Stephen Wilde says:
June 10, 2014 at 4:29 am
Climate scientists really aren’t all that bright are they ?

Your conclusion is based on the assumption that the methods used to adjust temperature data create a warming bias due to some dumb mistake rather than by intelligent design. If it was just incompetence or stupidity, the adjustments would have an equal chance of creating a cooling bias as a warming bias. These things may not be going according to proper science, but they are going according to plan.

Green Sand
June 10, 2014 12:46 pm

Willis Eschenbach says:
June 10, 2014 at 12:32 pm
Does anyone else have problems with it? If so I could embed it in a different manner.

————————————————-
Yes, no can see, Win 7, Firefox

Bob Dedekind
June 10, 2014 12:59 pm

Hi all,
Awake now. The important points here are (I believe):
1) Adjustments are necessary if you want an “accurate” station record. An example is 1928 in Kelburn. It is, however, important to note that you cannot just apply (for example) a generic altitude adjustment for similar situations. Why not? Well, take Albert Park. It is over 100m higher than Mangere, the site that replaced it. Yet during an overlap period it was shown to be 0.66°C warmer! Now normally there is no overlap period, and any automatic adjuster would have made a mess of it.
2) The question of need for adjustments is a red herring. What is actually under discussion is whether there are any checks done during the automatic homogenisation process that detect and prevent incorrect adjustments of the slow-then-sudden variety. I think it’s pretty clear there aren’t. Nick mentioned the detection of spurious trends, but I know that in the NZ case almost all our long-term records come from urban sites that are themselves contaminated by sheltering or UHI. Also, I’m less convinced by this argument, considering some of the adjustments I’ve seen that make a steep trend worse.

Bob Dedekind
June 10, 2014 1:03 pm

Oops, apologies, about 50m higher. Albert Park should be 0.3°C cooler than Mangere. It was 0.66°C warmer.
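To put those Albert Park/Mangere numbers side by side (a sketch only, using the ~0.64°C per 100 m lapse rate quoted earlier in the thread and the ~50 m height difference from the correction above):

lapse_rate = 0.64 / 100   # degC per metre, the typical value quoted earlier in the thread
height_diff = 50          # metres Albert Park sits above Mangere (approx.)

expected = -lapse_rate * height_diff   # about -0.3 degC: Albert Park "should" be cooler
observed = +0.66                       # degC warmer during the overlap period

print(f"expected {expected:+.2f} degC, observed {observed:+.2f} degC, "
      f"so a generic altitude adjustment would be off by {observed - expected:.2f} degC")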

Nick Stokes
June 10, 2014 1:05 pm

kadaka (KD Knoebel) says: June 10, 2014 at 12:18 pm
“And from such is crafted a global average temperature? That is deliberate deception, or extreme hubris.”

The map doesn’t show the information used to get a global temperature. It only shows individual stations with more than a century of data. And of course it doesn’t show all the ocean data.
I have maps showing that information here. You can choose 1902 as a start date. It can show the individual stations, and it does show the ocean data.

Bob Dedekind
June 10, 2014 1:14 pm

Victor Venema says:
June 10, 2014 at 10:37 am
Good grief, is that Fortran? Cool, I haven’t used that in twenty years.
Are you suggesting that the Hansen-type issue never occurs? Or that there is in fact a mechanism built into the algorithm to detect and prevent it?

Bob Dedekind
June 10, 2014 1:29 pm

Willis Eschenbach says: June 10, 2014 at 11:03 am
Thanks Willis. You’re quite right regarding the early records. NIWA had this to say about the NZ stations generally, and I’m sure it applies equally to most of the rest of the world:
“In the process of documenting the revised adjustments for all the ‘seven-station’ series, it was recognised that there was lower confidence in New Zealand’s early temperature measurements, and there were fewer comparison sites from which to derive adjustments for non-overlapping temperature series. Thus, a decision was made not to include temperatures prior to 1900. Furthermore, if there were site changes around 1910 that were difficult to justify, then the time series was truncated at that point.”

Bob Dedekind
June 10, 2014 1:37 pm

Victor Venema says: June 10, 2014 at 10:37 am
If you’re suggesting that Hansen-like problems don’t occur, then Williams (2012) disagrees with you, since they postulate exactly that mechanism for why there is a bias:
“This suggests that there are factors causing breaks with a negative sign bias before 1979 (in addition to the TOB) that are offsetting the largely positive shifts caused by the transition to MMTS afterwards. For example, there may have been a preference for station relocations to cooler sites within the network, that is, away from city centers to more rural locations especially around the middle of the twentieth century [Hansen et al., 2001].”

Nick Stokes
June 10, 2014 1:49 pm

ThinkingScientist says:
“If we simply apply a correction of 0.8 degC to pre-1928 unadjusted data the regression slope (through annual averages) is +0.44 degC/century”

That doesn’t sound right to me. I found that placing such a change 64 years into a 125-year stretch changes the trend by 0.96 °C/century, which is close to the total change.
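
To see where a number like that comes from, here is a minimal numpy sketch of the effect of a single 0.8 °C step placed 64 years into an otherwise trendless 125-year series, fitted with an ordinary least-squares line. The series length and break position are assumptions chosen to match the figures quoted above, not anyone's actual station data.

```python
import numpy as np

# Trendless 125-year series with a single 0.8 °C step 64 years in:
# everything before the break sits 0.8 °C lower than everything after it.
n_years, break_at, step = 125, 64, 0.8

years = np.arange(n_years)
temps = np.where(years < break_at, 0.0, step)

# Ordinary least-squares slope, converted from °C/year to °C/century.
slope_per_year = np.polyfit(years, temps, 1)[0]
print(f"Trend introduced by the step alone: {slope_per_year * 100:.2f} °C/century")
# prints ~0.96 °C/century, i.e. the step by itself accounts for nearly a
# degree per century of apparent trend in a series with no real trend at all.
```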

Bob Dedekind
June 10, 2014 2:06 pm

Nick Stokes says: June 10, 2014 at 6:04 am
“And as for Auckland, it’s a composite record between Albert Park and the airport at Mangere, which opened in 1966. I don’t know when the record switched, but there is a break at 1966. Before that there is 100 years of Albert Park, with no adjustment at all except right at the beginning, around 1860.”
The break at Albert Park is a downward adjustment of about 0.6°C. (At least it was in v2, I’ll have to check now with v3, unless you can tell me.)
It’s a perfect example of the second (Hansen) type of adjustment error. We have a station with known long-term sheltering problems that were never resolved (no clearing of vegetation) which drove up the temperatures. Even NIWA acknowledged the sheltering problem.
Then in 1966 the whole previous record is adjusted down because of the 0.6°C difference with Mangere. Textbook case. Not only was no trend reduction performed on the station, they made it considerably worse by an incorrect adjustment!
These are the sanity checks that seem never to be performed after automatic adjustments.

Victor Venema
June 10, 2014 2:36 pm

Bob Dedekind says: “If you’re suggesting that Hansen-like problems don’t occur, then Williams (2012) disagrees with you, since they postulate exactly that mechanism for why there is a bias:”
Why would I ask you to check whether a saw tooth is a serious problem if I thought that a saw tooth never occurs?
The problem Hansen worried about was homogenization using only information from the station history. Information that leads to jumps (relocations, new instruments, shelters) is better documented than changes that lead to gradual inhomogeneities (less wind or more shade due to growing trees, urbanization or irrigation).
That is why you should not only correct jumps known from metadata, but also perform statistical homogenization to remove the unknown jumps and gradual inhomogeneities. Other fields of science (finance and biology) often use absolute homogenization methods, with which you can only remove jumps. In climatology, relative homogenization methods are used, which also remove trends if the local trend at one station does not fit the trends in the region. Evan Jones may be able to tell you more; he is seen here as a more reliable source and is not moderated.
P.S. To all the people that are shocked that the raw data is changed before computing a trend: that is called data processing. Not much science or engineering is done without it.
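
For readers who have not met the idea before, the sketch below is a bare-bones illustration of relative (pairwise) break detection: form the difference series between a candidate station and a neighbour, then look for the split point that most sharply separates that difference series into two different means. It is only a toy stand-in for what Victor describes, not the NOAA Pairwise Homogenization Algorithm; the function name, the simple t-statistic test and the synthetic numbers are all assumptions for illustration.

```python
import numpy as np

def detect_break(candidate, neighbour):
    """Toy relative break detection: find the most likely step change in the
    candidate-minus-neighbour difference series (largest two-sample t-statistic)."""
    diff = np.asarray(candidate) - np.asarray(neighbour)
    best_idx, best_t = None, 0.0
    for k in range(5, len(diff) - 5):          # keep a few points on each side
        left, right = diff[:k], diff[k:]
        se = np.sqrt(left.var(ddof=1) / len(left) + right.var(ddof=1) / len(right))
        t = abs(left.mean() - right.mean()) / se
        if t > best_t:
            best_idx, best_t = k, t
    return best_idx, best_t

# Synthetic example: shared regional climate, independent station noise,
# and a 0.5 °C non-climatic jump 40 years into a 60-year record.
rng = np.random.default_rng(0)
regional = rng.normal(0.0, 0.3, 60)
candidate = regional + rng.normal(0.0, 0.2, 60) + np.where(np.arange(60) < 40, 0.0, 0.5)

print(detect_break(candidate, regional))       # break found at (or very near) index 40
```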

Bob Dedekind
June 10, 2014 3:00 pm

Victor Venema says: June 10, 2014 at 2:36 pm
None of what you say addresses the problem. Whether the jump is known from metadata or found via automatic processing is irrelevant. The problem is that an incorrect adjustment is made that increases the trend artificially. That is what Hansen shows.
And that is what GHCN does to at least one site I know of: Albert Park in Auckland. There will be many others around the globe, and it doesn’t take too many errors like this to skew the trend.
Didn’t it ever worry you that the adjustments always had a net warming effect, across all stations and all long-term timeframes? What if the effect had been a net cooling, would it have gone unchecked?

John in Oz
June 10, 2014 3:02 pm

So when can we expect the ‘sudden change’ shown in Mann’s hockey stick graph to be corrected?

Victor Venema
June 10, 2014 3:33 pm

Like I wrote, relative statistical homogenization methods used in climate do not only correct for jumps, but also for gradual non-climatic changes. I hope Evan Jones can convince you.
“Didn’t it ever worry you that the adjustments always had a net warming effect, across all stations and all long-term timeframes? What if the effect had been a net cooling, would it have gone unchecked?”
No, that is not a reason to automatically worry. If it were, we would worry. It is somewhat strange to assume that scientists are stupid.
There are many reasons why there might be a net cooling in the raw data. Many stations started in cities to be used for meteorology, for which less accuracy is sufficient. Now that climatology has become more important, stations are relocated to positions outside of cities. Stations were also often relocated from warm cities to relatively cool airports outside the urban heat island in the 1940s. We have more irrigation nowadays. Old measurements were often not as well protected against radiation errors (solar and heat radiation). In the 19th century, the measurements were often performed on a north wall of an unheated room or on a free-standing stand in the garden. Quite often the sun did get on the instrument or warm the wall underneath it. Thermometers are nowadays very often mechanically ventilated; in the past they were not.
These cooling effects are mostly understudied, unfortunately, so I cannot quantify them or tell you which effects are most important. Most studies have been on non-climatic effects that would produce a net warming effect. We wanted to be sure that these effects are smaller than the observed temperature increases. These studies are important for the research on the detection of climate change.

Nick Stokes
June 10, 2014 3:33 pm

Bob Dedekind says: June 10, 2014 at 3:00 pm
“And that is what GHCN does to at least one site I know of: Albert Park in Auckland. There will be many others around the globe, and it doesn’t take too many errors like this to skew the trend.
Didn’t it ever worry you that the adjustments always had a net warming effect, across all stations and all long-term timeframes?”

You have no quantification of the effect of sheltering at Albert Park. And you don’t know that the adjustment for the move was incorrect.
It certainly isn’t true that adjustments always have a warming effect. Someone mentioned Barrow AK at Lucia’s. It turned out that there the adjustment greatly reduced the trend.

ripshin
Editor
June 10, 2014 3:34 pm

After reading Mr. Dedekind’s post, as well as NCDC’s paper on the subject, here are a couple of additional thoughts to consider:
1) The NCDC automated homogenization algorithm (AHA) is trying to accomplish a task for which it is fundamentally unsuited. No matter how carefully crafted the algorithm, it is, at its most basic level, trying to get the effect of adding information to the raw data record (in order to make it usable), without actually adding the required information. It’s interesting to note that the authors of the paper acknowledge the fact that the “correction” of the raw data used to be a manual process by which the specific circumstances of any given site were taken into account (information added to the record). Since this became too cumbersome, the AHA was created to automate this process. Unfortunately, I believe the creators of the AHA forgot that the whole point of the exercise was to add in any necessary information required to render the raw data record usable, and instead, shifted their focus to statistical analysis, WHICH PRESUPPOSES THAT ALL REQUIRED INFORMATION ALREADY EXISTS WITHIN THE DATA RECORD.
2) On the surface, the claim that “sheltering” has caused a statistically significant number of breakpoints seems anecdotal to me. Is there some data out there to suggest that it’s the prevalent mechanism causing breakpoints? I ask because it seems like there could be any number of external factors that might cause said breakpoints. Then again, maybe it’s irrelevant. If the primary behavior of the breakpoints is “slow up, fast down” and the AHA always addresses this by (in math terms) shifting the Y-Intercept of the older data, rather than adjusting its slope, then it would seem like a pretty easy case to make that the algorithm is inappropriately making a judgement call in the absence of the actual information by which to make it.
3) At a philosophical level, it seems like this incessant effort to “correct” the historical data record is based on the assumption that it’s necessary, NOW, to know what the climate trend is so “we can do something.” If you subtract out the belief that “we must do something now” then there’s no reason we can’t just wait for the trending data to be established by these new temperature measuring systems. (Didn’t we just read about this great US system, like, yesterday?)
4) Back to the AHA for a moment, notwithstanding my observation above, it’s really difficult not to respect the effort that went into creating the algorithm. Reading through the paper, to me, was a testament to the thoughtfulness of the authors who were diligently trying to make their own proverbial “chicken salad”. Still, the mere fact that such manipulation is required tells me that the whole data set should have a big fat “FOR INFORMATION ONLY” stamped all across it. Meaning, you can review it for curiosity’s sake, but it’s not valid to use as a basis for engineering purposes (my world) or policy decisions.
Anyway, these are just thoughts…maybe valid, maybe not.
rip

milodonharlani
June 10, 2014 3:37 pm

It doesn’t work as science, but it achieves its objective.

June 10, 2014 3:51 pm

June 10, 2014 at 12:32 pm | Willis Eschenbach says:

Does anyone else have problems with it? If so I could embed it in a different manner.

Here neither … Win 7, Chrome

Bob Dedekind
June 10, 2014 3:54 pm

Nick Stokes says: June 10, 2014 at 3:33 pm
“You have no quantification of the effect of sheltering at Albert Park. And you don’t know that the adjustment for the move was incorrect.”
Actually I do. NIWA compared the Albert Park record to Te Aroha, a suitable rural site, and found a differential of 0.09°C/decade. We know that the wind run decreased from 1915 to at least 1976 (Hessell, 1980). That’s about 0.5°C over sixty years, right there.
We checked this and arrived at the same result. When we performed our Auckland analysis we therefore reduced the Albert Park slope à la Aguilar (2003) and only then did we check the offset wrt Mangere. Needless to say, this accurate manual approach produced a trend way lower than the incorrect GHCN adjustment, which not only didn’t detect and correct the inflated Albert Park trend but actually made the Auckland combined record much worse by introducing an erroneous 0.6°C downwards adjustment on top!
“It certainly isn’t true that adjustments always have a warming effect. Someone mentioned Barrow AK at Lucia’s. It turned out that there the adjustment greatly reduced the trend.”
Classic strawman. Where did I say that adjustments “always have a warming trend”? I said there is a net warming trend after adjustments.
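
For what it's worth, the manual procedure Bob describes (reduce the sheltered site's slope first, then work out the offset against the replacement site over the overlap) can be written down in a few lines. The 0.09 °C/decade figure is the excess trend quoted above; the function and its inputs are placeholders for illustration, not NIWA's or anyone else's actual data or code.

```python
import numpy as np

def combine_records(old_site, new_site, overlap, excess_trend_per_decade=0.09):
    """Toy manual splice: remove a known non-climatic trend from the old
    (sheltered) record, then join it to the replacement record using the mean
    offset over the overlap years. Inputs are annual means in °C; the last
    `overlap` values of old_site cover the same years as the first `overlap`
    values of new_site. The default 0.09 °C/decade is the excess trend quoted
    in the comment above."""
    old_site = np.asarray(old_site, dtype=float)
    new_site = np.asarray(new_site, dtype=float)

    # 1. Reduce the old record's slope by the known excess (°C/decade),
    #    anchoring the correction at the start of the record.
    t = np.arange(len(old_site))
    old_adj = old_site - (excess_trend_per_decade / 10.0) * t

    # 2. Offset between the two sites, estimated only over the overlap period.
    offset = old_adj[-overlap:].mean() - new_site[:overlap].mean()

    # 3. Put the old record on the new site's level and join the two.
    return np.concatenate([old_adj - offset, new_site[overlap:]])
```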

Steven Mosher
June 10, 2014 4:02 pm

A good test would be what happens to the global trend before and after breakpoints. I wonder what you all suppose. Willis?
Another good test is what happens with synthetic data. Do the changes move you toward the truth or away? Willis?
More later.
But ask yourself this. If your theory is that breakpoints move the global average in a significant way, what will you say if the evidence shows otherwise? If your theory is that changes or adjustments move your estimate away from the truth, what will you say when double-blind tests show the opposite?
Hypothetical questions … perhaps those tests have been done, perhaps not.
Hmm, what would Feynman say if the tests contradicted a theory of how adjusting works?
More later. Can’t text and drive.

June 10, 2014 4:03 pm

Jonathan Abbott says:
June 10, 2014 at 5:15 am
Could anyone post up explicit examples of these types of adjustments in any of the various temperature series?
Right from the birth of GW in the 1986 papers of Jones et al – this html version shows 4 diagrams with steps corrected as discussed.
Full TR027 Southern Hemisphere Book html version
http://www.warwickhughes.com/cru86/tr027/index.htm
scroll down to “STATION HOMOGENEITY ASSESSMENT”
If you go back to –
http://www.warwickhughes.com/cru86/
there are pdf of TR022 for the northern hemisphere

June 10, 2014 4:06 pm

Zeke Hausfather says; June 10, 2014 at 10:39 am
“Its really a question of scale. Changes in climate over time tend to be pretty highly spatially correlated.”
Please provide data, a data analysis, or a credible research paper that confirms your assertion. And does the correlation truly depend on data, or is it corrupted by assumptions or a priori relationships that guarantee the outcome?
Thanks
Dan

Bob Dedekind
June 10, 2014 4:27 pm

Mosher:
“But ask yourself this. If your theory is that breakpoints
Move the global average in a significant way.. what
Will you say if the evidence shows otherwise?”

I actually don’t care. All I ask is that someone understands and communicates exactly why it happens. This of course also implies that they check the actual stations (or a reasonable subset) before and after adjustments to ensure that sanity prevails.
However, right now it’s patently obvious that a station like Albert Park has been royally screwed up by the adjustments, and nobody noticed. But as usual we are treated to various arguments about why we, and not the automatic adjustment system, are wrong.
The fact is that there are no checks in place to prevent this sort of error occurring all around the world, otherwise Albert Park wouldn’t have happened. It could well be one of the contributors to the +0.3°C/century trend increase after adjustments, and I suggest it is.

Nick Stokes
June 10, 2014 4:29 pm

Bob Dedekind says: June 10, 2014 at 3:54 pm
“Classic strawman. Where did I say that adjustments “always have a warming trend.”? I said there is a net warming trend after adjustments.”

You said:
“Didn’t it ever worry you that the adjustments always had a net warming effect, across all stations and all long-term timeframes?”
Maybe the comma has a subtle effect.

David Riser
June 10, 2014 4:30 pm

Well, I am with Bob Dedekind on this one. I know it’s hard work, but adjustments, to be accurate and useful, have to be manual, with research. Otherwise you’re putting some kind of bias in the works and essentially making the entire data series useless for understanding climate. We really have no clue if there is a warming or cooling trend over who knows how much of the record. When you throw in the error bars it gets worse. Even picking station moves and doing auto-adjusts for height over a land surface using the free-air average lapse rate is going to introduce bias. In the Wellington case the bias is warm, and it’s documented. From NIWA’s website:
“The offset of 0.8°C used in the webpage illustration agrees with what we would expect for the altitude difference between the sites, based on the free atmosphere lapse rate of 0.65°C per 100 metres. In practice, the adjustment is calculated by comparing overlapping temperature records for other sites across the 1927/28 site change. The altitude effect is an a priori reason for expecting Kelburn to be colder than Thorndon, but there is no straightforward theoretical way of calculating the actual difference along a sloping land surface in the same way as there is for the free atmosphere. In fact, over a much broader spatial scale, the lapse rate along a land surface in New Zealand tends to average around 0.5°C/100m (Norton, 1985). This would equate to Kelburn being colder than Thorndon by 0.6°C; the larger calculated difference of 0.8°C being used in the “seven-station” series therefore suggests other local influences such as exposure or aspect may also be affecting the sites in question.”
By the way, the overlap across the site change was for 31 days or so and was actually 1°C, but they couldn’t swallow that, so they went with the free-atmosphere lapse rate anyhow. Silliness really; there is a bias there, pretty significant, and warm or cold it doesn’t matter, because the purpose of doing this averaging business is to understand what is happening globally and the data is too trashed to do that.
v/r,
David Riser
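
As a quick check on the arithmetic in that NIWA quote, the two lapse rates imply different offsets for the same height difference. The ~123 m figure below is simply backed out of the quoted 0.8 °C at 0.65 °C per 100 m; it is an assumption for illustration, not an official survey value.

```python
# Offsets implied by the two lapse rates quoted by NIWA for the
# Thorndon -> Kelburn site change. Altitude difference assumed ~123 m,
# back-calculated from 0.8 °C at 0.65 °C per 100 m.
altitude_diff_m = 123

free_air_rate = 0.65 / 100.0      # °C per metre, free-atmosphere lapse rate
land_surface_rate = 0.50 / 100.0  # °C per metre, NZ land-surface average (Norton, 1985)

print(f"Free-air offset:     {free_air_rate * altitude_diff_m:.1f} °C")      # ~0.8 °C
print(f"Land-surface offset: {land_surface_rate * altitude_diff_m:.1f} °C")  # ~0.6 °C
```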

Bob Dedekind
June 10, 2014 4:31 pm

One more point Mosher. If someone fixes the system, I’ll go first to the adjusted Albert Park to see if it matches common sense, viz: trend reduced, correct offset applied relative to Mangere.
Until then, argue away, you won’t convince me.

Bob Dedekind
June 10, 2014 4:32 pm

Nick,
No, it was you leaving out the vital word “net” that did it, and it wasn’t all that subtle either.

Editor
June 10, 2014 4:38 pm

Bob Dedekind says:
June 10, 2014 at 2:06 pm

… Then in 1966 the whole previous record is adjusted down because of the 0.6°C difference with Mangere. Textbook case. Not only was no trend reduction performed on the station, they made it considerably worse by an incorrect adjustment!
These are the sanity checks that seem never to be performed after automatic adjustments.

Thanks, Bob. Indeed. I’ve never seen any computer-based decision-making system that doesn’t at times make ridiculously bad decisions when faced with the eternal variability of nature. Which is OK if you check through them afterwards and find the bad ones and figure out how they happened and change your algorithm.
Regards,
w.

Bob Dedekind
June 10, 2014 4:40 pm

David Riser says: June 10, 2014 at 4:30 pm
Yes, and there are plenty of examples where lower stations have colder temperatures than higher ones. Hokitika comes to mind – an inversion effect operated on the town-based site that didn’t affect the airport site.

Bob Dedekind
June 10, 2014 4:43 pm

Willis Eschenbach says: June 10, 2014 at 4:38 pm
“Which is OK if you check through them afterwards and find the bad ones and figure out how they happened and change your algorithm.”
Exactly.

Editor
June 10, 2014 4:44 pm

Steven Mosher says:
June 10, 2014 at 4:02 pm

A good test would be what happens to the global trend
Before and after breakpoints.
I wonder what you all suppose.
Willis?

As Holmes remarked, it is a grave error to theorize in advance of the facts …
And in any case, the question is not answerable as stated, because it depends entirely on the level at which you set the thresholds for intervention into the temperatures.
w.

DR
June 10, 2014 6:36 pm

Why are there so many more high max temperatures during the 1930’s (you know, the dust bowl years) than the last 20-30 years? Or have they been adjusted away or deleted?
Is it assumed weather station observers were too stupid to read thermometers?

Rex
June 10, 2014 7:15 pm

Bob, what do you think of the claim made by NIWA on its website that the Seven Station Series is “representative of New Zealand” … a load of codswallop if you ask me, given that six of the 7 stations are perimetral. Good for measuring the effects of sea breezes!

Bob Dedekind
June 10, 2014 7:19 pm

Rex says: June 10, 2014 at 7:15 pm
According to Jim Salinger, there are six climatic regions in NZ, and the seven stations are representative, with one repeat obviously. I don’t know if that’s valid one way or the other, but I know that there are few long-term stations (well, pretty much seven), and I suppose that’s all that really matters in the end.

Wayne Findley
June 10, 2014 9:54 pm

A good thread (and greetings, Bob, from the southern Christchurch).
I’ve long been of the opinion, having worked for decades in accounting and BI systems, that it’s high time the big-data practices used throughout that area were used in the temperature record.
Put simply, there’s gear and software around that can handle the volume of data points in the record, AND differentiate raw from ‘this adjustment by that process on this date by which user’ – so that the raw data stays untouched.
But the crew who run the temperature datasets seem to have just the one data point per time period per station, AND then they go and adjust those!
To accounting types, that’s absolute sacrilege. It is obscuring the audit trail.
So instead of having a Kelburn temp of (I’m making this up, don’t take it as Gospel) 15.3 C on Thursday the 43rd of Germinal, 1934, 1330 local time, and then wondering what adjustments were made (which results in the sort of hand-waving observed in this here thread) it should be possible to layer adjustments like this invented set of data records:
Type: Raw temp Value: 13.6C DateTime: 1330 43/10/1934 or (whatever Zulu, cannot find a UTC converter for French Revolutionary dates), location: (lat/long) GUID (to make sure the thing really is unique) Processor: reader . Process description: Actual Reading by some fallible Human.
Type: Adjustment Value: -0.6C DateTime: 1330 43/10/1934, location: (lat/long) GUID, Processor: NIWA homogenator. Process description: NZ UHI assessment removal
Type: Adjustment Value: +0.37C DateTime: 1330 43/10/1934, location: (lat/long) GUID Processor: GISS krige step 1. Process description: Temp field harmonisation radius 200Km
Type: Adjustment Value: +0.15C DateTime: 1330 43/10/1934, location: (lat/long) GUID Processor: GISS krige step 2. Process description: Temp field harmonisation radius 1500Km
Then, using standard data query techniques, it would be possible to Both say:
Adjusted temp at lat/long On 1330 43/10/1934 Was 13.52 C
AND
Three adjustments were made to arrive at this sum:
1 local (NIWA) of -0.6 C
2 international (GISS Krige) of +0.52 C
AND
Raw, as-observed temp at lat/long On 1330 43/10/1934 Was 13.6 C
Big data. Big cubes. Lotsa layers. Full transparency.
But this is all too much to expect, eh….
Sigh.
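
Wayne's layered-record idea maps naturally onto a small immutable data structure: keep the raw reading untouched and store every adjustment as its own row, deriving the adjusted value by summation. The sketch below just re-encodes his invented Kelburn example; the class and field names are mine, not any agency's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)          # immutable: the audit trail cannot be edited in place
class Adjustment:
    value_c: float               # size of the adjustment, °C
    processor: str               # who or what applied it
    description: str             # why it was applied

@dataclass(frozen=True)
class Observation:
    station: str
    timestamp: str               # whatever calendar/time convention is in use
    raw_c: float                 # the as-read value, never overwritten
    adjustments: List[Adjustment]

    def adjusted_c(self) -> float:
        return self.raw_c + sum(a.value_c for a in self.adjustments)

# Wayne's invented Kelburn reading, re-expressed as layered records.
obs = Observation(
    station="Kelburn",
    timestamp="1330, 43 Germinal 1934 (local)",
    raw_c=13.6,
    adjustments=[
        Adjustment(-0.60, "NIWA homogenator", "NZ UHI assessment removal"),
        Adjustment(+0.37, "GISS krige step 1", "Temp field harmonisation radius 200 km"),
        Adjustment(+0.15, "GISS krige step 2", "Temp field harmonisation radius 1500 km"),
    ],
)
print(obs.raw_c, round(obs.adjusted_c(), 2))   # 13.6 13.52 -- raw preserved, adjusted derived
```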

Bob Dedekind
June 10, 2014 11:09 pm

Wayne Findley says: June 10, 2014 at 9:54 pm
I agree Wayne, the days of the climate data manipulators saying “Trust us, we’re from the Government” are way past. In the new sceptical era they have to demonstrate some transparency, or they simply won’t be taken seriously.
That’s not to say that the manipulations are necessarily wrong in every case, but the public is getting less and less tolerant of ‘black box’ processing, with vague assurances that the job is always done right.

NikFromNYC
June 11, 2014 12:14 am

Tom Wigley Climategate e-mail to Phil Jones: “Phil, Here are some speculations on correcting SSTs to partly explain the 1940s warming blip. If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know). So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean — but we’d still have to explain the land blip. I’ve chosen 0.15 here deliberately. This still leaves an ocean blip, and i think one needs to have some form of ocean blip to explain the land blip (via either some common forcing, or ocean forcing land, or vice versa, or all of these). When you look at other blips, the land blips are 1.5 to 2 times (roughly) the ocean blips — higher sensitivity plus thermal inertia effects. My 0.15 adjustment leaves things consistent with this, so you can see where I am coming from. Removing ENSO does not affect this. It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.”
These are agenda-driven adjustments, revealed in an e-mail to the guy who creates the most widely used global average temperature plot, HadCRUT, whose latest version-four update in 2012 included Phil’s attribution of his new Saudi Arabian university appointment.

June 11, 2014 12:17 am

Sorry, moderator, in trying to log on successfully to various news sites throughout the day, my log on here changes accounts sporadically. They are all me, from NYC, this time as nikfrommanhattan.
-=NikFromNYC=-

June 11, 2014 12:30 am

Thanks for a good read Bob – great to see those GISS diagrams being used – again and again.

Jonathan Abbott
June 11, 2014 1:01 am

Tailingsproject, thanks for posting the link for me.

JohnnyCrash
June 11, 2014 1:13 am

This Kelburn station that we keep referring to has a step adjustment and a slope change. The slope change makes no physical sense. The step change, notwithstanding the accuracy of the value chosen for the step, at least makes some sense. Is the slope change the time of measurement bias? How is that bias always a positive slope? How do we know when the measurements were actually taken? Is the slope from averaging to nearby stations? Averaging with other stations, even close by, makes no physical sense, because the temperature of a station 20 miles away or even 100 feet away has no bearing on the temperature of a station. A station’s temperature is a function of the air temperature immediately adjacent to the thermometer and not at all a function of the temperature of another station. You simply cannot remove station errors by averaging with other stations. You can use averaging if you take multiple samples of the same piece of air with the same thermometer. You cannot average different station data. You cannot fill in gaps by averaging to other stations or averaging surrounding days. There are 50 error-inducing “things” going on around these ground stations, from the paint weathering, repainting with a different batch of paint, changing land use, plant growth, insect nests, soil moisture, damage to the housing, instrument drift, bored temp readers who make up #’s so they don’t have to go out and actually read the data, time of measurement, height and vision quality of the person reading the data, dirt and grime, etc., etc. The data, adjusted or unadjusted, cannot be used to determine long-term fraction-of-a-degree trends. I don’t have a problem with the data being inaccurate. I have a problem with using this data to say with a straight face that this proves that the earth’s temperature is changing one way or the other.

Victor Venema
June 11, 2014 2:42 am

Bob Dedekind says: “I actually don’t care. All I ask is that someone understands and communicates exactly why it happens.”
If that is your position, then why did you not do a little work to understand the Pairwise Homogenization Algorithm of NOAA so that you could communicate exactly what it does?
Bob Dedekind says: “I agree Wayne, the days of the climate data manipulators saying “Trust us, we’re from the Government” are way past. In the new sceptical era they have to demonstrate some transparency, or they simply won’t be taken seriously.”
The raw data, the time-of-observation-bias-corrected data and the homogenized data are all freely available. You can download the algorithm and check how it works; if you are unable to, you can feed it with data and see what it does. You can read the articles that describe the algorithm. Are you sure there is anything NOAA could do that would make you take them seriously?
P.S. If Fortran is too hard for you, there are many more homogenization methods, coded in other languages, which you could use, to check if your problem is real.

June 11, 2014 4:45 am

I think the evidence of the systematic problem in the corrections applied is shown by the difference between adjusted and raw. There is a systematic positive trend in the difference. I have computed this myself from GHCN previously, as have others. The exact result depends on the vintage of GHCN used, but the results are all fundamentally the same. There is an example plot (graph 6 down the page) at:
http://stevengoddard.wordpress.com/maps-and-graphs/
Every time I ask this question there is a stony silence, I suspect because it is the elephant in the room:
What is the physical explanation for the systematic upward trend in corrections with time between the raw and the adjusted temperature data? Mosher? Stokes? Hausfather? Anybody?
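
The quantity being asked about is easy to compute once matched raw and adjusted series are in hand: difference them and fit a trend to the difference. A generic sketch (plain arrays only; obtaining and aligning the actual GHCN files is left out) might look like this:

```python
import numpy as np

def adjustment_trend(years, raw, adjusted):
    """OLS trend of (adjusted - raw) in °C per century.
    A positive value means the adjustments themselves add warming to the record."""
    years = np.asarray(years, dtype=float)
    diff = np.asarray(adjusted, dtype=float) - np.asarray(raw, dtype=float)
    ok = ~np.isnan(diff)                       # skip years missing in either series
    slope_per_year = np.polyfit(years[ok], diff[ok], 1)[0]
    return slope_per_year * 100.0
```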

Bill Illis
June 11, 2014 6:38 am

How about this Berkeley station – Amundsen Scott at the south pole.
26 quality control failures identified by the automatic algorithm, despite the fact that this is supposed to be one of the highest-quality weather research stations on Earth. Tell that to the scientists freezing their butts off in -60.0°C temperatures. All of the stations in Antarctica have the occasional extremely cold month compared to average (there are actually far more of these than extremely warm months in most stations – not so much at Amundsen Scott). The Berkeley/Mosher algorithm assumes there is a quality control failure, but it is just what Mother Nature delivers in Antarctica, and an automatic algorithm should not flag a failure when that is just what the climate does. And why does the algorithm only identify the downspikes and none of the upspikes? There must be a bias in the algorithm (something I’ve mused about before, but this example makes it pretty clear that is the case).
http://berkeleyearth.lbl.gov/stations/166900
The actual (and yes fully quality-controlled) raw data is reported here and it has virtually no trend despite Berkeley having revised it to +1.0C over 50 years.
http://www.antarctica.ac.uk/met/READER/surface/Amundsen_Scott.All.temperature.html
A couple of months out-of-date chart of the above raw temps.
http://s13.postimg.org/6w98pvd8n/Amund_Scott_90_S.png

mpainter
June 11, 2014 8:27 am

This interminable discussion about the desirability or reliability of data “correction” convinces me that such corrections should be made rarely and with the utmost discretion, and certainly not by automation. Automation simply introduces reasonable doubts about such adjustments and clouds the whole issue with uncertainty.
For science to work, the data needs to be acceptable to all and reliability should never be an issue, or you are shot in the foot at the start.
To me, the algorithmic adjustments of data are the antithesis of proper science. Talk about error creeping in, or assumptions that may or may not be valid! It is foolish to undertake work when your data methodology is questionable: the focus ends up on your methods instead of your conclusions.

phi
June 11, 2014 8:54 am

A graph in relation to the discussion: http://img837.imageshack.us/img837/5687/fontea.jpg
This is a comparison of regional temperature (Alps):
1. Homogenized temperature of Davos (red).
2. The same set but before adjustments (light blue).
3. Again the same set, but homogenized according to the recommendations of Hansen et al. 2001 (dark blue).
4. A glacier proxy, melting anomaly, Huss et al. 2009 (green).

NikFromNYC
June 11, 2014 10:02 am

Bill Illis on Antarctica: “26 quality control failures identified by the automatic algorithm despite the fact this is supposed to be one of the highest quality weather research stations on Earth.”
Confusing still is what those really represent since they don’t lead to break points, and are all for valleys but not peaks despite the overall spiky noise in both directions.

NikFromNYC
June 11, 2014 10:08 am

Phil Jones’ official FAQ on lack of raw data for the standard global average product still used in climatology:
“Since the early 1980s, some NMSs, other organizations and individual scientists have given or sold us (see Hulme, 1994, for a summary of European data collection efforts) additional data for inclusion in the gridded datasets, often on the understanding that the data are only used for academic purposes with the full permission of the NMSs, organizations and scientists and the original station data are not passed onto third parties. Below we list the agreements that we still hold. We know that there were others, but cannot locate them, possibly as we’ve moved offices several times during the 1980s. Some date back at least 20 years. Additional agreements are unwritten and relate to partnerships we’ve made with scientists around the world and visitors to the CRU over this period. In some of the examples given, it can be clearly seen that our requests for data from NMSs have always stated that we would not make the data available to third parties. We included such statements as standard from the 1980s, as that is what many NMSs requested. The inability of some agencies to release climate data held is not uncommon in climate science.”

“We are not in a position to supply data for a particular country not covered by the example agreements referred to earlier, as we have never had sufficient resources to keep track of the exact source of each individual monthly value. Since the 1980s, we have merged the data we have received into existing series or begun new ones, so it is impossible to say if all stations within a particular country or if all of an individual record should be freely available. Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e. quality controlled and homogenized) data.”
http://www.cru.uea.ac.uk/cru/data/availability/

Michael D
June 11, 2014 10:13 am

Hi Willis:
a) I can’t see the image
b) I still think the “raw” data is the way to go. Though of course remove broken data (e.g. bear knocked over the weather station). The remaining anomalies should then be addressed verbally.
The raw data is however politically inconvenient (tells the wrong story) so they “fix” it.

Dougmanxx
June 11, 2014 10:32 am

Bill Illis says:
June 11, 2014 at 6:38 am
Awesome post! Does Berkeley ever release actual “average temperature” data like you link to? Or is their “data” always just an “anomaly”? This is currently my pet peeve, as release of the “average temperature” allows even someone who is unsophisticated to see what changes have been made to the record over time. And it makes more sense to most people than something slippery like an “anomaly”. So if the “average temperature” used to calculate the anomaly changes, it’s blindingly obvious to anyone what is going on. TBH this IS what’s happening, it’s simply hidden by the disingenuous veil of “anomaly”.

June 11, 2014 10:48 am

Science 101 sniff test:
A Berkeley BEST test, in fact, of what their Al-Gore-ithm does for one of the most obvious cases of urban warming of all: the Central Park castle on a hill versus the rural West Point military academy only a few miles up the Hudson River. NYC should obviously be adjusted *down*, but alas, it’s just gradual urban heating, not worthy of a breakpoint at all, even though BEST adds a whopping 24 slice-and-dice breakpoints for other reasons, most of which only HAL 9000 understands the frantic meaning of:
http://www.john-daly.com/stations/WestPoint-NY.gif
The Berkeley BEST versions, in which Central Park shows no proper adjustment *down* to account for clear urban heating effects:
http://berkeleyearth.lbl.gov/stations/167589
For Berkeley BEST, the “raw” data for West Point (which, unlike the *other* “raw data” that poor old John Daly must have just looked up “wrong”, likewise shows warming) is oddly broken into two separate records, without any of the explanation third parties would need to understand the procedure:
http://berkeleyearth.lbl.gov/stations/36834
http://berkeleyearth.lbl.gov/stations/167589
That’s why we must always leave things to “experts” I guess. Thermometers are too complicated. Maybe we should ask a genuine rocket scientist then:
“In my background of 46 years in aerospace flight testing and design I have seen many examples of data presentation fraud. That is what prompted my interest in seeing how the scientists have processed the climate data, presented it and promoted their theories to policy makers and the media. What I found shocked me and prompted me to do further research. I researched data presentation fraud in climate science from 1999 to 2010.” – Burt Rutan, winner of the X-Prize for the first private space vehicle.
He continues:
“In general, if you as an engineer with normal ethics, study the subject you will conclude that the theory that man’s addition of CO2 to the atmosphere (a trace amount to an already trace gas content) cannot cause the observed warming unless you assume a large positive feedback from water vapor. You will also find that the real feedback is negative, not positive!”
http://scholarsandrogues.com/2012/01/31/climate-science-discussion-between-burt-rutan-and-brian-angliss/
Adjustments towards a more accurate view on the ground are one thing, but Berkeley chopping data sets up and re-joining them at the same level renders their result utterly meaningless, since their real input is but an average of eight-year-long snippets without real-world station histories to support it. Like any other highly parametrized black box that not even open source code can untangle for outsiders, it can make the elephant’s tail wiggle to match recent climate model predictions, as desired by what is commonly known as a “brazen liar” in the form of the highly activist Richard Muller, who quite actively promoted a proven-to-be-*false* media narrative that he started his BEST project as a skeptic and was converted by its results, and that is in fact *how* he obtained Koch brothers funding for it in the first place.
-=NikFromNYC=-, Ph.D. in carbon chemistry (Columbia/Harvard)

Editor
June 11, 2014 11:39 am

Green Sand says:
June 10, 2014 at 12:46 pm

Willis Eschenbach says:
June 10, 2014 at 12:32 pm

Does anyone else have problems with it? If so I could embed it in a different manner.

————————————————-
Yes, no can see, Win 7, Firefox

Dang, go figure. Well, I’ve swapped it out for a jpg, that should do it.
w.

Editor
June 11, 2014 11:49 am

Victor Venema says:
June 10, 2014 at 2:36 pm

In climatology relative homogenization methods are used that also remove trends if the local trend in one station does not fit to the trends in the region. Evan Jones may be able to tell you more and is seen here as a more reliable source and not moderated.

Sadly, this is done using the rubric of the known (relatively) good correlation between nearby temperature sets. What the authors of these methods never seem to have either considered or tested is whether there is (relatively) good correlation between nearby temperature trends … and it turns out that despite the correlation of the data, the trends are very poorly correlated … here are the trends from Alaska, for example:

All of these stations are within 500 miles of Anchorage, and all of them have a (relatively) good correlation with the Anchorage temperature (max 0.94, mean 0.75) … but their trends are all over the map, with the largest being no less than three times the smallest, hardly insignificant. Further discussion of the graphic is here.
As a result, I’m totally unimpressed with the trend-based “homogenization methods”. I have never, ever seen a practical demonstration that it is a valid method. To me, removing or “adjusting” a climate station because its trend doesn’t agree with other local trends is a Procrustean joke that has no place in climate science.
w.
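
The distinction Willis draws between correlated data and correlated trends is easy to demonstrate with made-up series: give several stations the same year-to-year weather but independent slow drifts, and the correlations stay high while the fitted trends scatter widely. All the numbers below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_stations = 50, 6
years = np.arange(n_years)

shared_weather = rng.normal(0.0, 1.0, n_years)       # common year-to-year variability
drifts = rng.uniform(0.0, 0.03, n_stations)          # independent slow drifts, °C/year

stations = np.array([shared_weather + d * years + rng.normal(0.0, 0.2, n_years)
                     for d in drifts])

corrs = [np.corrcoef(stations[0], s)[0, 1] for s in stations[1:]]
trends = [np.polyfit(years, s, 1)[0] * 100 for s in stations]    # °C/century

print("correlation with station 0:", np.round(corrs, 2))   # all high
print("fitted trends, °C/century: ", np.round(trends, 1))   # spread over a wide range
```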

kcrucible
June 11, 2014 11:51 am

“No, there’s an observed change of about 0.8°C, and that’s when the altitude change happened. They are saying that that isn’t a climate effect, and changing (for computing the index) the Thorndon temps to match what would have been at Kelburn, 0.8°C colder.”
Seems to me that it would be a lot more on the up-and-up to adjust the NEW temperatures to match what they would have been at the older site, if you’re going to adjust at all. At least then it would be clear to all what you’re doing… pretending the site hasn’t moved, rather than pretending the site has been in the new location all along.

Editor
June 11, 2014 12:17 pm

NikFromNYC says:
June 11, 2014 at 10:02 am

Bill Illis on Antarctica:

“26 quality control failures identified by the automatic algorithm despite the fact this is supposed to be one of the highest quality weather research stations on Earth.”

Confusing still is what those really represent since they don’t lead to break points, and are all for valleys but not peaks despite the overall spiky noise in both directions.

I couldn’t make any sense out of that one. The parts I didn’t understand are:
1. Why are all of the “quality control” flags on cold temperatures, and not warm temperatures?
2. I can think of a host of reasons why a thermometer in Antarctica might read too high, from exhaust from an idling sno-cat to waste heat from the local buildings. But I cannot think of any reason why it would read too low … so in addition to the question of why they are all on the low side, we have the question of why they are reading up to 6°C low at all?
3. Are we truly to assume that these measurements, taken by trained scientists at great effort, are so terribly bad? Seems unlikely.
4. A number of the QC flagged temperatures are less than half a degree from the “regional expectation”, while other temperatures that are up to 6°C !!! different from the “regional expectation” are not flagged … why?
5. Exactly what algorithm decided that these points needed “quality control”?
6. Were these results ever checked by a human being for reasonableness? And if not, why?
In short, the adjustments to this record are an impenetrable mystery. Zeke or Mosh, if you don’t explain this, folks will assuredly just assume you are churning out junk … some answers are required here.
w.
PS—Why doesn’t Richard Muller have the stones to come and answer these questions, and instead lets Mosh and Zeke take the heat? My theory is that Richard is AWOL because he saw a microphone in the next room and has rushed to grab it, trampling two old ladies in the process, but that’s just a hypothesis … Mosh? Zeke? Any insights on this one as well?

Editor
June 11, 2014 12:23 pm

NikFromNYC says:
June 11, 2014 at 10:08 am

Phil Jones’ official FAQ on lack of raw data for the standard global average product still used in climatology:
“Since the early 1980s, some NMSs, other organizations and individual scientists have given or sold us (see Hulme, 1994, for a summary of European data collection efforts) additional data for inclusion in the gridded datasets, often on the understanding that the data are only used for academic purposes with the full permission of the NMSs, organizations and scientists and the original station data are not passed onto third parties. Below we list the agreements that we still hold. We know that there were others, but cannot locate them, possibly as we’ve moved offices several times during the 1980s. Some date back at least 20 years. Additional agreements are unwritten and relate to partnerships we’ve made with scientists around the world and visitors to the CRU over this period. In some of the examples given, it can be clearly seen that our requests for data from NMSs have always stated that we would not make the data available to third parties. We included such statements as standard from the 1980s, as that is what many NMSs requested. The inability of some agencies to release climate data held is not uncommon in climate science.”

This nonsense is nothing but a crocodile crossed with an abalone. Phil Jones made the same totally bogus claims back when I made my FOI request for his data. In fact, when he actually tried to dig them out, he could only find three such agreements, only one of which had any constraints on the further use or revelation of the data. Nor was he able to show that “our requests for data from NMSs have always stated that we would not make the data available to third parties”; that’s an outright lie.
In short, this is just another typical Phil “Pantsonfire” Jones crockabaloney …
w.

Victor Venema
June 11, 2014 1:21 pm

Willis Eschenbach says: “As a result, I’m totally unimpressed with the trend-based “homogenization methods”. I have never, ever seen a valid practical demonstration that it is a valid method.”
Then I have two articles for you to read:
Venema et al. 2012 discusses benchmarking results for a range of algorithms: OA at http://www.clim-past.net/8/89/2012/cp-8-89-2012.html
Williams et al., 2012 discusses results of applying the US benchmarks to USHCN: Available at ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/algorithm-uncertainty/williams-menne-thorne-2012.pdf
Willis Eschenbach says: Nor was he able to show that “our requests for data from NMSs have always stated that we would not make the data available to third parties”, that’s an outright lie.
The situation is getting better: like the USA, more and more countries release their climate data freely. However, many still do not release all their data, mostly because the finance ministers want the weather services to earn a little money by selling the data. The weather services themselves would love their data to be used.
I would say, just try to gather climate data yourself. Then you will see that Phil Jones was right.

Bob Dedekind
June 11, 2014 1:28 pm

Victor Venema says: June 11, 2014 at 2:42 am
I have no need to do that, I can see the results of the adjustments with my own eyes. And yes, the problem really exists, because I have presented a real-life example for you.
Did you read my comment above regarding Albert Park? If not, read it, and then come back and tell me why the homogenisation technique failed to do the following:
1) detect and correct the distorted Albert Park trend, and
2) account for it in the 1966 breakpoint adjustment.
It’s not my job, nor that of other folk here, to work through the code to find the faults. Programs are designed to do things, and when they fail to do those things then it’s obvious from the outputs.
Remember, the Albert Park trend has been shown to be 0.9°C/century higher than surrounding sites. That’s a significant amount (the delta alone is higher than the global trend!), yet the claim has been made that trend checks were performed. I suggest they should revisit their code – it has a bug!
And by the way, I used to program in Fortran. I did so for almost a decade.

Rob
June 11, 2014 2:00 pm

The Jones et al. reconstruction was perhaps the first to employ this method. From what data I was able to obtain for my region here in the Southeastern U.S., Urban Heat Island effects were never rectified.
I’m working up some better station and 5×5 grid points than either CRU, GHCN, or USHCN. Warwick Hughes has some similar work.

Bob Dedekind
June 11, 2014 2:48 pm

Victor Venema (@VariabilityBlog) says: June 11, 2014 at 1:21 pm
“Venema et al. 2012 discusses benchmarking results for a range of algorithms: OA at http://www.clim-past.net/8/89/2012/cp-8-89-2012.html
That’s an impressive author list. Now, how about getting one of those guys to run Albert Park through their algorithms, and see which one produces a correct result?
Conversely, if none of them do, then you have a topic for ‘further research’ grant applications.
Everybody wins.

Victor Venema
June 11, 2014 2:51 pm

Bob Dedekind, homogenization can only improve the trend estimates for larger regions. It is known that it cannot improve every single station, it can only do so on average.
Thank you for finding a problem with this one station. I am sure NOAA will be interested in trying to understand what went wrong; that may provide useful information to improve their homogenization method, just like the problems it has in Iceland. They are also looking into what made the Pairwise Homogenization Algorithm produce trends in the Arctic that are too shallow (as Cowtan and Way just found).
However your claim was: “You see, the automatic adjustment procedure is almost guaranteed to produce spurious, artificial warming, and here’s why.”
That sounded more general. Then one would expect you to be able to back that up with evidence that there is a general problem that significantly changes the global mean temperature. Especially when you write:
“In the new sceptical era they have to demonstrate some transparency, or they simply won’t be taken seriously.”
Or is that rule only for other people making claims? Demonstrate transparently that there is something wrong. Alternatively, send a polite email to NOAA saying that one of their homogenized stations does not look right. They will be interested.

Bob Dedekind
June 11, 2014 3:07 pm

Victor Venema (@VariabilityBlog) says: June 11, 2014 at 2:51 pm
“However your claim was: “You see, the automatic adjustment procedure is almost guaranteed to produce spurious, artificial warming, and here’s why.”
That sounded more general.”

It is more general. It’s not just one site.
I showed clearly in my post above that there is an inherent problem with automatic breakpoint analysis, in general. To date nobody has shown why my contention is wrong. I produced peer-reviewed references to back up my argument. I showed an example, Albert Park. Why Albert Park? It’s in Auckland, where I happen to live, and I’ve done detailed analysis on its temperature history. We have a lot of data on it. Are there others? Absolutely – Hansen identified the problem back in 2001. We expect that by now the problem would have been solved in the algorithms. It isn’t, otherwise GHCN v3 would not have the incorrect adjustments for Albert Park.
And how can you “improve the trend estimates for large regions” by introducing incorrect artificial warming trends for individual stations in those regions?

Victor Venema
June 11, 2014 3:21 pm

Our comments crossed due to moderation.
“Now, how about getting one of those guys to run Albert Park through their algorithms, and see which one produces a correct result?”
We will, in the International Surface Temperature Initiative. There, multiple algorithms will be applied to a new global temperature dataset that includes GHCN and thus also Albert Park.
With Zeke and NOAA we are working on a study that includes, especially, a lot of scenarios more difficult than realistic ones, to see if and when the algorithms no longer improve the data. That also includes a test dataset with many saw-tooth inhomogeneities.
Both studies are volunteer efforts and will still take some time. There is unfortunately more funding for global climate models than for studying the quality of our invaluable historical observations. If you are in a hurry and want to be sure that the strong claim you made in the post actually holds, you will have to brush up your Fortran skills.

Stephen Rasey
June 11, 2014 4:54 pm

The BEST method of finding “break points” to adjust temperature records to improve the record’s accuracy is fundamentally flawed in several areas.
First, as in the saw-toothed example at the head of this post:
From: Rasey Jan 23, 2013 at 11:30 am

I believe that many times, perhaps most, we should not create a new record even if the jump is obvious. …..
Let me nominate the occasional “painting of a Stephenson screen” as a member of a class of events called recalibration of the temperature sensor. Other members of the class might be: weeding around the enclosure, replacement of degrading sensors, trimming of nearby trees, removal of a bird’s nest, other actions that might fall under the name “maintenance”.
A property of this “recalibration class” is that there is slow buildup of instrument drift, then quick, discontinuous offset to restore calibration. At time t=A0 the sensor is set up for use at a quality satisfactory for someone who signs the log. The station operates with some degree of human oversight. At time t=A9, a human schedules some maintenance (painting, weeding, trimming, sensor replacement, whatever). The maintenance is performed and at the time tools are packed up the station is ready to take measurements again at time t=B0. A recalibration event happened between A9 and B0. The station operates until time t=B9 when the human sees the need for more work. Tools up, work performed, tools down. t=C0 and we take measurements again. The intervals between A0-A9, B0-B9 are wide, likely many years. A9-B0 and B9-C0 recalibration events are very short, probably within a sample period. My key point is that A0-A9 and B0-B9 contain instrument drift as well as temperature record. A9-B0, B9-C0 are related to the drift estimation and correction.
At what points in the record are the temperatures most trustworthy? How can they be any other but the “tools down” points of A0, B0, C0?….
1. From everything I have read about the BEST process, it would slice the record into an A0-A9 segment and a B0-B9 segment and treat the A9-B0, B9-C0 displacement as a discontinuity and discard it. BEST will honor the A0-A9, B0-B9 trends and codify two episodes of instrument drift into real temperature trends. Not only will instrument drift and climate signal be inseparable, we have multiplied the drift in the overall record by discarding the correcting recalibration at the discontinuities. (more)

What the BEST process does, by slicing and dicing nice long temperature records into separate segment station records, is bake in slow instrument drift and station contamination as climate signal while discarding the critical recalibration information. This is madness.
Second: A fundamental assumption of the BEST process is that the potential instrument error does not change over the length of a segment. In reality, if there is any physical reason to split a record into two separate stations, then the beginning of each segment is far more reliable than the end of the segment. Someone had to have set up the station and calibrate it. At the end of the station, they might have recalibrated it, but it is unlikely when it is temporarily abandoned, permanently shut down, or destroyed.
The third fundamental flaw in the BEST process is that by slicing long records and using the slope of the segments, they are taking a low-pass temperature signal and turning it into a band-pass signal, eliminating the lowest frequencies found in the signal. Then they are integrating the slope segments to reconstruct a signal with (what they say) reliable low frequency content.
I want to be very clear of my meaning here. I am NOT saying I expect to see any particular dominant frequency in temperature data. This is totally an information content issue.

Power vs Phase & Frequency is the dual formulation of Amplitude vs Time. There is a one-to-one correspondence. If you apply a filter to eliminate low frequencies in the Fourier Domain, and a scalpel does that, where does it ever come back? (Rasey: 12/13/12 in “Circular Logic Not Worth a Millikelvin”)

Low frequency information is not contained in the high frequency content. You cannot use high frequency information to predict the low frequency stuff. If you filter out the Low Frequencies with a scalpel, they are gone. Regional homogenization cannot restore them if all the data has been subjected to the same scalpel process.
Fourth: the segment lengths that come out of the BEST automated scalpel are absurdly short to be used in a climate study. Look at them for yourselves. Here is my look at Denver Stapleton Airport, a BEST record that runs from 1873 to 2011 with TEN!! breakpoints, six of them between 1980 and 1996 inclusive. The station record has temperatures from before the airport existed, after the airport closed, and ten breakpoints, some as quick as 2 years in a record officially 130+ years long when the station itself probably existed only from after 1919 to 1995. That seems like an excessive number of breakpoints, especially when they don’t correlate well with documented airport expansion events.

Bob Dedekind
June 11, 2014 4:56 pm

Victor Venema (@VariabilityBlog) says: June 11, 2014 at 3:21 pm
“If you are in a hurry and want to be sure that the strong claim you made in the post actually holds you will have to brush up your Fortran skills.”
No fear, I have no intention of deliberately subjecting myself to Fortran again!
I’m happy to wait for the results of the study, but in the meantime I regard the GHCN v3 adjustments as incorrect, for the reasons I’ve laid out above, and because nobody has shown me any reason to conclude that something as basic as the Hansen problem has been catered for in the algorithms.
I’m not saying, by the way, that the Hansen problem is easy to solve using automatic means. Far from it. What do you use as the reference for trend comparisons, the homogenised or the unhomogenised stations? Homogenised makes sense on the surface, but if you use the homogenised stations, you are using stations that have already had incorrect Hansen-type warming trends introduced, so your analysis is flawed. For example, if the current homogenised Auckland station is used as a reference, the calculated trend differences will be hopelessly wrong. And if you adjust another station (say Wellington) using Auckland, it becomes wrong too, but is then used as a homogenised reference. And so on.
If on the other hand you use the unhomogenised stations, genuine breakpoints such as 1928 in Kelburn have not yet been dealt with, and the trends will be wrong there too.
You have to remove the non-climatic trends first before you can do the breakpoint checks, but how do you find the non-climatic trends without resorting to trend comparison checks with stations that have themselves been incorrectly altered?
Tricky. But I’m sure you’ll find the solution.

Two Labs
June 11, 2014 6:35 pm

As a statistician who has reviewed the adjustment data, I can guarantee you that the author’s suspicion about spurious warming adjustments and confirmation bias is correct. My impression is that there has, indeed, been warming, but not as much as the adjusted records indicate. I’d have to do a much more thorough analysis to calculate how much, and I don’t know if anyone has attempted that before. Fair warning – that would take a lot of personal time, time which I certainly don’t have…

Editor
June 11, 2014 6:57 pm

Mosh and/or Zeke, Stephen Rasey above and Bob Dedekind in the head post raise several points that I hadn’t considered. Let me summarize them; they can correct me if I’m wrong.
• In any kind of sawtooth-shaped temperature record subject to periodic or episodic maintenance or change, e.g. painting a Stevenson screen, the most accurate measurements are those immediately following the change. After that, there is a gradual drift in the temperature until the next maintenance.
• Since the Berkeley Earth “scalpel” method would slice these into separate records at the time of the discontinuities caused by the maintenance, it throws away the trend correction information obtained at the time when the episodic maintenance removes the instrumental drift from the record.
• As a result, the scalpel method “bakes in” the gradual drift that occurs in between the corrections.
Now this makes perfect sense to me. You can see what would happen with a thought experiment. If we have a bunch of trendless sawtooth waves of varying frequencies, and we chop them at their respective discontinuities, average their first differences, and cumulatively sum the averages, we will get a strong positive trend despite the fact that there is absolutely no trend in the sawtooth waves themselves.
So I’d like to know if and how the “scalpel” method avoids this problem … because I sure can’t think of a way to avoid it.
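For what it’s worth, a toy version of that thought experiment is easy to run (illustrative Python of my own, not anyone’s actual homogenization code; the periods and drift rate are arbitrary):

    import numpy as np

    # Trendless sawtooths of different periods: each drifts up by 0.1 per year,
    # then snaps back to where it started. Cut at the resets, first-difference,
    # average across "stations", then re-integrate.
    n_years = 100
    t = np.arange(n_years)
    periods = (7, 11, 15, 20, 25)
    drift = 0.1

    stations = [drift * (t % p) for p in periods]

    diffs = []
    for p, s in zip(periods, stations):
        d = np.diff(s)
        d[(t[1:] % p) == 0] = np.nan      # the step across each reset is discarded
        diffs.append(d)

    mean_diff = np.nanmean(np.vstack(diffs), axis=0)
    composite = np.concatenate([[0.0], np.cumsum(mean_diff)])

    def trend_per_decade(x):
        return round(np.polyfit(t, x, 1)[0] * 10, 2)

    print("Raw sawtooth trends    :", [trend_per_decade(s) for s in stations])
    print("Spliced composite trend:", trend_per_decade(composite))

The raw sawtooths carry only small residual trends, yet the spliced composite warms at the full drift rate, because the only information thrown away is the downward step that was cancelling the drift.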
In your reply, please consider that I have long thought and written that the scalpel method was the best of a bad lot of methods (all methods have problems, but I thought the scalpel method avoided most of them) … so don’t thump me on the head; I’m only the messenger here.
w.

Bob Dedekind
June 11, 2014 7:13 pm

Willis Eschenbach says: June 11, 2014 at 6:57 pm
“…we will get a strong positive trend despite the fact that there is absolutely no trend in the sawtooth waves themselves.”
Bingo. Until this issue is solved the “most accurate” trend is the unadjusted one. My reasons are:
1) In a large dataset such as the land stations, it is to be expected that true breakpoints from station moves are randomly distributed, in sign and time. It is fair then (or at least less error-prone) to use the unadjusted series “as is”, at least as far as breakpoints are concerned.
2) The issue of shelter growth and/or urban heat islands means that the overall unadjusted trend is an upper bound. We can at least state with confidence that the true trend is less than the unadjusted trend.

Editor
June 11, 2014 7:39 pm

Victor Venema (@VariabilityBlog) says:
June 11, 2014 at 1:21 pm

Willis Eschenbach says: “As a result, I’m totally unimpressed with the trend-based “homogenization methods”. I have never, ever seen a valid practical demonstration that it is a valid method.”
Then I have two articles for you to read:
Venema et al. 2012 discusses benchmarking results for a range of algorithms: OA at http://www.clim-past.net/8/89/2012/cp-8-89-2012.html

Thanks for that, Victor. I took a look at your study. It is interesting, but I fear you haven’t dealt with the issue I identified above. Let me repeat that section:

But kriging, like all such methods, doesn’t handle edges very well. It assumes (as we almost must assume, despite knowing it’s not true) that if at point A we have a measurement of X, and at point B we have a measurement of Y, then half-way between A and B the best guess is the average of X and Y.
But that’s not how nature works. If point A is in the middle of a cloud and point B is near it in clear air, the best guess is that at the midway point it is either 100% clear air or 100% cloud. And guessing “half-cloud” will almost never be correct. Nature has edges and discontinuities and spots and stripes. And although our best guess is (and almost has to be) smooth transitions, that’s not what is actually happening. Actually, it’s either a sea breeze or a land breeze, with a discontinuous shift between them. In fact, nature is mostly made up of what the Berkeley Earth folks call “empirical breaks” …
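A trivial numerical version of that point (my own toy numbers, not any particular kriging implementation):

    import numpy as np

    # Point A is inside cloud at 15 C, point B in clear air at 25 C. The midpoint
    # is, in reality, always one or the other; the smooth estimate is their average.
    rng = np.random.default_rng(1)
    a_val, b_val = 15.0, 25.0
    truth = rng.choice([a_val, b_val], size=100_000)
    guess = (a_val + b_val) / 2.0

    rms_error = np.sqrt(np.mean((truth - guess) ** 2))
    print("RMS error of the averaged guess:", round(rms_error, 1))
    print("Fraction of cases guessed exactly right:", (truth == guess).mean())

The averaged guess minimizes the mean-square error, which is why interpolation uses it, but it is never the value that actually occurs at the midpoint.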
I mentioned above the question of how discontinuous we expect our weather to be. The problem is made almost intractable by the fact that we expect to find discontinuities such as those where I live even if our records are perfect. This means that we cannot determine the expected prevalence of discontinuities using our records, because we cannot tell the real discontinuities like my house from the spurious. If my temperatures here at my house are different from those down in the valley, there is no way to tell from just the temperature data alone whether that is an actual discontinuity, or whether it is an error in the records—it could be either one. So we don’t even know how discontinuous we expect the temperature record to be. And that makes the level at which we “adjust” the temperature purely a judgement call.

In your study you use both actual inhomogeneous observational data and artificial “homogeneous” data to test the algorithms. But that assumes you know how homogeneous the natural dataset would be if we had perfect data … and since we don’t (we only have inhomogeneous data), I see no way to tell the real inhomogeneities from the artificial.
Let me raise a point here. In your paper you encapsulate the problem when you say:

In essence, a homogeneous climate time series is defined as one where variations are caused only by variations in weather and climate. Long instrumental records are rarely if ever homogeneous.

Now, every “homogenization” method implicitly makes the following assumption:

I have a computer algorithm which can reliably tell variations caused by weather and climate from variations not caused by weather and climate.

But since almost every record we have is known to be inhomogeneous … how can we tell the difference? To solve this, many algorithms make a further assumption:

If a signal only appears in one station, it’s an inhomogeneity.

I would question both those assumptions, for the obvious reasons, and I would again point out that we don’t know much about what a long-term homogeneous temperature record might look like … which makes depending on a computer algorithm a very dubious procedure, particularly without manual individual quality control (the lack of which seems to be all the rage).

Williams et al., 2012 discusses results of applying the US benchmarks to USHCN: Available at ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/algorithm-uncertainty/williams-menne-thorne-2012.pdf

That one I had read, including this

Nevertheless, creating a comprehensive breakpoint identification and adjustment scheme requires making a number of judgment calls at various decision points in the algorithm [Thorne et al., 2005] no matter how robust the underlying statistical methods might be. Decisions are required for all processing steps from how to define target and reference series to the particular statistical breakpoint tests applied and mechanisms for adjusting each detected break. Seemingly innocuous choices could, in theory, have large impacts upon the final product.

And at that point, the guy with his hand on the throttle gets to decide where to set the breakpoints … and as a result, the guy with his hand on the throttle gets to set the eventual trend. I wouldn’t mind so much if the full range of possibilities were spread out so we could see which one was chosen … but generally we don’t get that, we get the chosen, anointed result with little or no exploration of what happens with different “judgement calls at various decision points”.

Willis Eschenbach says:

Nor was he able to show that “our requests for data from NMSs have always stated that we would not make the data available to third parties”, that’s an outright lie.

The situation is getting better; like the USA, more and more countries release their climate data freely. However, many still do not release all their data, mostly because the finance ministers want the weather services to earn a little money by selling it. The weather services themselves would love their data to be used.
I would say, just try to gather climate data yourself. Then you will see that Phil Jones was right.

Phil Jones was right about what? Right to refuse to release his data? Right to tell porkies? Victor, I was the poor fool who made the first FOI request to Phil Jones for his data. In response, he told me a number of flat-out fairy tales. Among them was the claim that much of his data was subject to confidentiality agreements. When he was forced by a subsequent FOI request to produce the purported agreements covering much of his data, he came up with … exactly one. And that truth-free claim about confidentiality agreements was just one among many of his bogus excuses for not producing his data for public examination.
You obviously haven’t heard the details of the squalid episode known as Climategate. My own small part in it is detailed here … read’m and weep …
w.

vicgallus
June 11, 2014 8:10 pm

Nick Stokes linked to this site ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/5/50793436001.gif
It clearly shows that the fit from 1930 to the present would show no warming. Adjusting for the one-off event creates a warming trend for 70+ years. A cooling trend prior to the one-off event becomes a constant temperature. Even a child could spot that the correction is not correcting anything; it’s creating a trend where there was none.

Bob Dedekind
June 11, 2014 8:33 pm

vicgallus says: June 11, 2014 at 8:10 pm
“It clearly shows that the fit from 1930 to the present would show no warming. Adjusting for the one-off event creates a warming trend for 70+ years.”
Vic,
There are times when an adjustment is necessary. I would say that the Kelburn jump in 1928 is reasonable. The reason is that the site of the weather station moved up a hill, resulting in generally colder temperatures.
Have a look at the Wellington section here:
http://www.climateconversation.wordshine.co.nz/docs/Statistical%20Audit%20of%20the%20NIWA%207-Station%20Review%20Aug%202011%20SI.pdf

Bill Illis
June 11, 2014 9:48 pm

I am not expecting the algorithms to be fixed.
There was a lot of testing done on these algorithms. They have been designed to produce the desired result. If anything, the adjustment process will have to create another 2.5°C of warming in the next 86 years, so any “fixes” have to be in a certain direction.

vicgallus
June 12, 2014 12:14 am

Dedekind – Granted, the pre-1928 data only looks like the trend has been changed, but the post-1930 data does have a change to the trend. It is not merely shifted up uniformly. Why does 1990 need to be shifted up more than 1930?

Bob Dedekind
June 12, 2014 2:52 am

vicgallus says: June 12, 2014 at 12:14 am
“…the post-1930 data does have a change to the trend.”
Yes, I have no idea what that’s all about. Neither NIWA nor our group identified any problems there. But I’ve noticed that with the GHCN results – they don’t necessarily coincide with any real changes.

Victor Venema (@VariabilityBlog)
June 12, 2014 3:45 am

Another cross comment. These moderation pauses are getting tedious. Is my opinion so dangerous?
Bob Dedekind says: “It is more general. It’s not just one site.”
Maybe I missed something. A sawtooth-type pattern naturally happens more often, but do you have more examples besides Albert Park where such a problem is not handled correctly?
And how can you “improve the trend estimates for large regions” by introducing incorrect artificial warming trends for individual stations in those regions?
Analogies seem to be dangerous in the climate “debate”, but let me still try one.
Do you or your boss always make the perfect decision? Is your firm still profitable? Would it be more profitable if the CEO were paralyzed by trying to make every single decision perfectly?
The stations around Albert Park almost surely also have non-climatic jumps. Think back 100 years; it is nearly impossible to keep the measurements and the surroundings constant. These jumps create wrong trends in these stations and in the regional mean climate. As long as the improvements in the other stations are larger than the problem created at Albert Park, the regional mean trend will become more accurate.

Reply to  Victor Venema (@VariabilityBlog)
June 12, 2014 12:14 pm

Is my opinion so dangerous?

Your opinion is not. Your behavior is.

NikFromNYC
June 12, 2014 4:43 am

The issue of BEST baking in slow errors due to gradually fading paint, by removing the periodic repainting jumps that in fact self-correct those errors over time, means it barely passes the laugh test, being designed to automatically *remove* sudden station maintenance back to pristine conditions while leaving in the bad conditions. Their claim that absolute temperatures, perfectly calibrated to the unwavering freezing and boiling points of water since the Fahrenheit scale of the 1700s, can be adjusted downwards in the past requires much more support than they afford by mere cheerleading about the superiority of black-box algorithms.
To test the validity of BEST’s claim that the US is warming at all, we can skirt around the urban heating effect issue by simply looking at October, when no heating or air conditioning is used around temperature stations or in whole big cities. And in October most states show absolutely no warming going back to the 1920s:
http://themigrantmind.blogspot.com/2009/12/hundred-years-of-october-cooling.html
The United States isn’t some oddball location, even though it’s only a small percentage of Earth’s surface, and it’s by far the best data archive we have. October falsifies the warming claims, bluntly, and by logical extension falsifies the entire global average temperature claim too. Only if the US is climatically isolated and anomalous can you wiggle out of this, acting as a lawyer.
Knowing any jury might acquit carbon dioxide since the canary didn’t die and the dog didn’t bark and the USA didn’t warm when the big machines were shut off, BEST adjusts the USA to create a warming trend all based on adjustments by activist “experts”:
http://oi58.tinypic.com/68wm5u.jpg
But where are the recent stories of heat waves in the states?
“Not guilty!” says the jury.

Paul Carter
June 12, 2014 5:33 am

At June 10, 2014 at 8:36 am Barrybrill says:
“Paul Carter says shelter is less important at Kelburn because it is an exceptionally windy site. On the contrary, this windiness means the data is particularly susceptible to contamination by vegetation growth.”
The Kelburn Stevenson screen is on the top of a hill – see Google Earth at -41.284499° 174.767920°. From the aerial view it looks like there are a lot of trees surrounding the site, but they are largely downhill from the screen and don’t have the same impact on wind that a flat area surrounded by such trees would have. You can use Street View to verify this – there are Street View photos at the entrance to the car park. Cutting the trees back would make little difference to the amount of stationary air, as testified by the recorded temperatures. You need to appreciate just how windy Wellington is, and particularly how exposed that spot is, to understand why the trees have relatively little impact on overall temperature at that site.

David Riser
June 12, 2014 7:14 am

Well Victor, in reply to your last bit, the financial analogy: I would like to point out that CEOs who make decisions as poorly as the temperature adjustments at GHCN (i.e. changing a level trend to a rising one) usually end up getting sent away for fraud. So it’s not just about making the occasional mistake. I think our commenter from the financial sector would point out that in his realm total transparency is a requirement, as it should be in any endeavor. With total transparency you can go back and look at something and decide if the adjustment was warranted, based on facts. This takes time, but at the end of the day, if you’re creating a temperature record for the ages, you have all the time in the world.
v/r,
David Riser

phi
June 12, 2014 8:05 am

Victor Venema,
“These jumps create wrong trends in these stations and in the regional mean climate.”
The majority of the examples given here show precisely that the jumps are not an alteration of the trend but rather partial corrections that should not be canceled:
– the increase in tree height, corrected by cutting;
– the deterioration of the paint, corrected by repainting;
– increasing anthropogenic perturbations, corrected by moving to a less disturbed area.

Editor
June 12, 2014 10:30 am

Bob Dedekind says:
June 11, 2014 at 7:13 pm

Willis Eschenbach says: June 11, 2014 at 6:57 pm

“…we will get a strong positive trend despite the fact that there is absolutely no trend in the sawtooth waves themselves.”

Bingo. Until this issue is solved the “most accurate” trend is the unadjusted one. My reasons are:
1) In a large dataset such as the land stations, it is to be expected that true breakpoints from station moves are randomly distributed, in sign and time. It is fair then (or at least less error-prone) to use the unadjusted series “as is”, at least as far as breakpoints are concerned.

Mmmm … I’d guess that station moves on average would be from more urban to more rural. As a result they’d average cooler. The data is there in the BEST dataset. So many questions … so little time. Mosh or Zeke might know.

2) The issue of shelter growth and/or urban heat islands means that the overall unadjusted trend is an upper bound. We can at least state with confidence that the true trend is less than the unadjusted trend.

I don’t have any numbers to back this up, but in general the permanent changes from human occupation, such as roads, buildings, parking lots, and the like, all make a location warmer. As does the replacement of forests with fields. In addition, in many locations we have large amounts of thermal energy being released (airports, near highways, Arctic towns in winter, in all cities, from air-conditioning exhaust, industrial operations, etc.).
So in general, over time we’d expect to see the record increasingly affected by human activities.
w.

Editor
June 12, 2014 10:35 am

Victor Venema (@VariabilityBlog) says:
June 12, 2014 at 3:45 am

Another cross comment. These moderation pauses are getting tedious. Is my opinion so dangerous?

The moderators on this site are unpaid volunteers. We need moderators 24/7, so they are spread around the planet. And there’s not always as many of them available as we might like.
And yes, Victor, sometimes they need to get some sleep, or one of them has other things to do.
So no, Victor … sometimes it’s not all about you and your opinion …
My rule of thumb, which I follow at least somewhat successfully, is:

“Never assign to malice what is adequately explained by error, accident, or foolishness.”

w.
[As of this morning, only 1,279,591 items have been reviewed and accepted. Thank you for your compliments Willis. 8<) .mod]

phi
June 12, 2014 11:35 am

The number and amplitude of the adjustments show that thermometers are not very reliable, especially for long-term trends. The problem of quantification is not yet hopeless, though. We may use proxies whose high-frequency correlation with temperature is proven. If several proxies are consistent, we can legitimately think they give a reasonable estimate of the trend. Two examples:
http://img38.imageshack.us/img38/1905/atsas.png
http://imageshack.us/a/img21/1076/polar2.png

Bob Dedekind
June 12, 2014 1:11 pm

Willis Eschenbach says: June 12, 2014 at 10:30 am
“Mmmm … I’d guess that station moves on average would be from more urban to more rural. As a result they’d average cooler.”
Quite possibly, but judging from the NZ record that only happened in the latter years. But regardless of that, think about the process – we have urban sites slowly increasing, often over many decades, with a non-climatic trend. Then there’s a move to a more rural site. So we have a saw-tooth. Can we correct for it automatically? Not easily, and it’s not being done right now.
So until the problem is resolved, we must leave the unadjusted record as is. Adjusting it is known to produce a Hansen error. Not adjusting it is maybe not perfect, but it’s better than the alternative.
Unfortunately, we do have data towards the right hand side of the saw-tooth that is artificially too high, so the trend is skewed up a little, which is why I believe it will be an upper bound on the trend. But that may be debatable – the main point is that the unadjusted is a better model right now until the issue is corrected.

Bob Dedekind
June 12, 2014 1:24 pm

Victor Venema (@VariabilityBlog) says: June 12, 2014 at 3:45 am
“A sawtooth-type pattern naturally happens more often, but do you have more examples besides Albert Park where such a problem is not handled correctly?”
Do I need more? I showed that this is a general problem. I showed how it affected an example. It is very clear from comments made by Zeke, Nick and yourself that there are no built-in checks to prevent this happening; in fact, it almost seems as if nobody even thought of it, judging from the reaction I’m getting.
And then on top of that we can all see that the textbook example of Albert Park/Mangere fails the test, so we know that the software does not solve this problem.
Now, Hansen identified the problem in 2001, Williams et al. (2012) specifically state that the pre-1979 negative bias is possibly due to movement of stations in the Hansen manner, and Zhang et al. (2014) deal with this head-on, even quantifying it.
Quote from Zhang:
“Our analysis shows that data homogenization for [temperature] stations moved from downtowns to suburbs can lead to a significant overestimate of rising trends of surface air temperature.”
The only one in denial seems to be your good self.

Bob Dedekind
June 12, 2014 1:54 pm

Willis Eschenbach says: June 12, 2014 at 10:30 am
“Mmmm … I’d guess that station moves on average would be from more urban to more rural. As a result they’d average cooler”
I don’t think I expressed myself very well in my last reply to you Willis, so let me re-phrase it.
You mention that they’d average cooler, simply because of the move to a rural setting.
Why would that be? Why would moving a site a few kilometers one way or the other automatically make it cooler?
Only because the original site was artificially too warm (sheltering, UHI, etc.).
So should we adjust for this problem? No, the first site was too warm. It got corrected. The correction is inherent in the station move.
The real problem is differentiating between this sort of corrective move and the other type that may (for example) involve an altitude change.

Bob Dedekind
June 12, 2014 2:21 pm

Victor Venema (@VariabilityBlog) says: June 12, 2014 at 3:45 am
“The stations around Albert Park almost surely also have non-climatic jumps. Think back 100 years; it is nearly impossible to keep the measurements and the surroundings constant. These jumps create wrong trends in these stations and in the regional mean climate. As long as the improvements in the other stations are larger than the problem created at Albert Park, the regional mean trend will become more accurate.”
Not necessarily. As long as any uncorrected non-climatic trend exists in station records, you are guaranteed to create a worse trend by adjusting breakpoints blindly, unless you first remove the non-climatic trends. The reason is that the non-climatic trends are usually in the same direction. And most long-term stations around the world are sited near cities and areas of population growth, and are likely to be affected. Even rural sites can be affected by sheltering growth.
Now, the non-climatic breakpoints are randomly distributed, so as long as there is a statistically large number of them they should cancel out.
Peterson et al. (1998, p. 1513) noted that “Easterling and Peterson (1995a,b) found that on very large spatial scales (half a continent to global), positive and negative homogeneity adjustments in individual station’s maximum and minimum time series largely balance out, so when averaged into a single time series, the adjusted and unadjusted trends are similar.”
At least by not adjusting breakpoints you aren’t introducing an error, which is what the peer-reviewed literature tells us the current adjustments are doing.
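A toy sketch of what blind breakpoint removal does to a single drifting, periodically corrected record (my own illustrative Python, not NCDC’s pairwise algorithm; the numbers are arbitrary):

    import numpy as np

    # A site that drifts warm by 0.1 per year and is cleared back to truth every
    # 20 years. Each downward "break" is detected and removed by shifting the
    # entire earlier record down, as described in the head post.
    n_years, period, drift = 60, 20, 0.1
    t = np.arange(n_years)
    raw = drift * (t % period)

    adjusted = raw.copy()
    steps = np.diff(raw)
    for i in np.where(steps < -0.5)[0]:       # each detected downward breakpoint
        adjusted[: i + 1] += steps[i]         # shift all earlier values down

    def trend_per_decade(x):
        return round(np.polyfit(t, x, 1)[0] * 10, 2)

    print("Raw trend (per decade)     :", trend_per_decade(raw))
    print("Adjusted trend (per decade):", trend_per_decade(adjusted))

The raw record retains only a small residual trend from the within-cycle drift, while the step-adjusted record warms at roughly the full drift rate. That is the Hansen-type error in miniature.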
One should first follow Hippocrates here, and do no harm. The remedy shouldn’t be worse than the symptom.

NikFromNYC
June 13, 2014 3:16 am

(A) Three days ago:
“More later. Cant text and drive..” – Steven Mosher of the BEST project
(B) Two days ago:
“If we have a bunch of trendless sawtooth waves of varying frequencies, and we chop them at their respective discontinuities, average their first differences, and cumulatively sum the averages, we will get a strong positive trend despite the fact that there is absolutely no trend in the sawtooth waves themselves. / So I’d like to know if and how the “scalpel” method avoids this problem … because I sure can’t think of a way to avoid it.” – Willis Eschenbach
(C) Today:
“Chirp, chirp, chirp….” – Crickets strumming alight the BEST black box
(D) Possible hint two days ago:
“Berkeley does things slightly differently; it mainly looks for step changes, but downweights stations whose trends sharply diverge from their neighbors when creating regional temperature fields via kriging.” – Zeke Hausfather of the BEST project

Dinostratus
June 13, 2014 9:57 am

Can I get Bob Dedekind’s email from someone?

Bob Dedekind
June 13, 2014 4:16 pm

Dinostratus says: June 13, 2014 at 9:57 am
“Can I get Bob Dedekind’s email from someone?”
Anthony has it, I don’t mind if he gives it to you.

Victor Venema (@VariabilityBlog)
June 14, 2014 3:47 pm

Bob Dedekind says: “Quote from Zhang:
“Our analysis shows that data homogenization for [temperature] stations moved from downtowns to suburbs can lead to a significant overestimate of rising trends of surface air temperature.”
The only one in denial seems to be your good self.”

The algorithm used by Zhang was designed to correct only jump inhomogeneities and not the gradual ones, because he wanted to study how urbanization affected this station. Thus he made an effort not to remove the urbanization signal. Normal homogenization methods also remove gradual inhomogeneities, as I have written so often, but you do not respond to that.
NikFromNYC says: “(C) Today: “Chirp, chirp, chirp….””
If there are no new arguments and the author still puts the word “guaranteed” in bold, there comes a moment when further discussion does not make much sense any more, and one just hopes that the reader forms his own informed opinion.

Bob Dedekind
June 14, 2014 7:02 pm

Victor Venema (@VariabilityBlog) says: June 14, 2014 at 3:47 pm
“Normal homogenization methods also remove gradual inhomogeneities, as I have written so often, but you do not respond to that.”
You have written it, but I see no evidence of it in the NOAA adjustments. I have looked through all the NZ stations, and there is no evidence of gradual adjustments that cool the trend. Yet most of these records are affected by either sheltering or UHI, as is well documented (Hessell, 1980).
Until I see evidence of gradual adjustments that decrease the trend over decades, and then (and only then) breakpoint analysis being applied, I maintain that it isn’t being done.
Of course, there’s another option: that these algorithms exist and work well but NOAA isn’t using them. That’s another kettle of fish, though.

Editor
June 14, 2014 11:32 pm

Victor Venema (@VariabilityBlog) says:
June 14, 2014 at 3:47 pm

NikFromNYC says:

“(C) Today: “Chirp, chirp, chirp….””

If there are no new arguments and the author still puts the word “guaranteed” in bold, there comes a moment when further discussion does not make much sense any more, and one just hopes that the reader forms his own informed opinion.

Thanks, Victor. You seem to misunderstand Nik’s point, perhaps because you have cut the context out of his quote. He said in full:

(A) Three days ago:

“More later. Cant text and drive..”

– Steven Mosher of the BEST project
(B) Two days ago:

“If we have a bunch of trendless sawtooth waves of varying frequencies, and we chop them at their respective discontinuities, average their first differences, and cumulatively sum the averages, we will get a strong positive trend despite the fact that there is absolutely no trend in the sawtooth waves themselves.
So I’d like to know if and how the “scalpel” method avoids this problem … because I sure can’t think of a way to avoid it.”

– Willis Eschenbach
(C) Today:

“Chirp, chirp, chirp….”

– Crickets strumming alight the BEST black box

In other words, he was commenting on the lack of response to my question. It has nothing to do with the author, or whether the word “guaranteed” is in bold type.
Since indeed neither you, Mosh, Zeke, nor anyone else has answered my simple question, let me state it again:

So I’d like to know if and how the “scalpel” method avoids this problem … because I sure can’t think of a way to avoid it.

Your comments appreciated,
w.

NikFromNYC
June 15, 2014 1:13 am

The purposefully misleading propaganda produced by the BEST team is jaw dropping. In their front page linked glossy brochure, “Skeptic’s Guide to Climate Change” they make this unsupportable claim:
“Yes, natural variability exists, and the Earth’s temperature has changed in the past. However, for the past century we know that CO2 is coming from human burning of fossil fuels. While climate has changed in the past, possibly even as quickly and dramatically as it is changing today, we nevertheless can tell from the unique carbon fingerprint that today’s warming is human caused.”
Get the purposefully *false* logic here, meant to sway layperson readers and policy makers into giving their team more money:
(A) Recent warming has repeated, perfect precedent in the past, *unrelated* to any burst in CO2.
(B) Because we have an isotopic signature showing that the recent CO2 burst is indeed caused by fossil fuel use, therefore *all* of today’s warming is human caused. Of course the “unique carbon fingerprint” also relies on utterly falsified hockey sticks, which have now become such outright scams that all of peer review in climate “science” now delegitimizes all alarmist studies, including BEST:
http://s6.postimg.org/jb6qe15rl/Marcott_2013_Eye_Candy.jpg
Their blunt claim is an outright logic-twisting lie, and coming from Berkeley with its historically rigorous reputation, it is a positively immoral and self-destructive one.
They go on to inflate this false logic that equates mere continuation of the warming trend since the Little Ice Age bottomed out hundreds of years ago with *all* warming being human caused:
“The science is clear: global warming is real, and caused by human greenhouse gas emissions.”
NOBODY SAYS GLOBAL WARMING ISN’T REAL, and implying it to be so represents willful *slander*, but that’s the implication here by the words on the page. Then they make the monstrous leap to a call for action as if American policy would make any dent in future emissions anyway:
“Demand sustainable and cost-effective solutions in the US and around the world”
When these scammers act like they have the high moral ground, they are just acting, as their promoted policies threaten to become genocidal.
“In the event that I am reincarnated, I would like to return as a deadly virus, in order to contribute something to solve overpopulation.” – Prince Philip

Victor Venema (@VariabilityBlog)
June 15, 2014 5:34 am

Willis Eschenbach says: “The moderators on this site are unpaid volunteers. We need moderators 24/7, so they are spread around the planet. And there’s not always as many of them available as we might like.”
I appreciate the work of the moderators; they have a very important function in keeping the discussion free of bad language and off-topic comments. Below this post they have been kind enough to release my comments quite quickly. No complaints about their work with respect to my comments.
I was just wondering why I am under moderation; I am relatively friendly and do not call people “enema” or “WC”. It is unfortunately hard to avoid words like “mistake” when I think this blog is in error.
Bob Dedekind says: “The only one in denial seems to be your good self.”
That is not a very friendly remark, as I am sure everyone here would agree after all the complaints about the word “denier”. Moderators, is this a reason to put Bob Dedekind under moderation? Or is moderation only for dissidents of the local party line?
Willis Eschenbach says: “In other words, he was commenting on the lack of response to my question. It has nothing to do with the author, or whether the word “guaranteed” is in bold type.”
Could be; I thought he was commenting on the general lack of response. And by now I unfortunately feel that responding to Dedekind does not make much sense any more.
Willis, could you please explain to him the relative homogenization approach and how it also removes gradual inhomogeneities? Maybe he will listen to you.
Willis Eschenbach says: Since indeed neither you, Mosh, Zeke, nor anyone else has answered my simple question, let me state it again:
So I’d like to know if and how the “scalpel” method avoids this problem … because I sure can’t think of a way to avoid it.
Your comments appreciated,

I did not respond because I do not feel qualified to judge the BEST method. I did read the article, but did not understand key parts of it. Thus, I am not surprised it was published in an unknown journal.
If I understand that part of the method right, I would prefer them to remove data with gradual inhomogeneities, rather than just give them a lower weight. Whether this leads to significant problems I do not know, but that is something that should be studied. That is a planned project, but that will still take some time.
[ Mr. Venema, you wonder why you are on moderation; it is because you are sneering and taunting on your own blog. An example is that you took a comment by our host suggesting you have a fixation on WUWT and then added words not said to make it “My immature and neurotic fixation on WUWT”, writing a 4,225-word blog post to that effect. It proved exactly the point about you having a fixation on WUWT and Mr. Watts. You will probably write about this too. -mod]

Bob Dedekind
June 15, 2014 2:30 pm

Victor Venema says: June 15, 2014 at 5:34 am
“Moderators, is this a reason to put Bob Dedekind under moderation? Or is moderation only for dissidents of the local party line?”
I was also under moderation when I posted comments here. As Willis mentioned before, it’s actually not all about you. And there is a difference between saying someone is “in denial” regarding recent developments in the peer-reviewed literature, and labelling them a “denier”, with its inherent Holocaust denier implications.
“Willis, could you please explain to him the relative homogenization approach and how it also removes gradual inhomogeneities? Maybe he will listen to you.”
The proof is in the pudding. If I see that none of the NZ GHCN sites I’ve looked at show any signs of removal of gradual inhomogeneities, when those sites have known gradual inhomogeneities, then I must conclude that such removals do not exist.
Now, they may exist in all manner of new, exciting algorithms, but in v3 of GHCN they do not exist. Or they don’t work, take your pick.

Bob Dedekind
June 15, 2014 2:39 pm

OK Victor, let’s get specific. Does NOAA use a gradual inhomogeneity reduction scheme in their GHCN v3, before applying breakpoint adjustments? Yes or no.
If yes, show me some examples. I have shown a counter example (Auckland) so you’ll also have to explain why the scheme failed in that case.
If no, why are you still here, arguing that they do?

Bob Dedekind
June 15, 2014 3:43 pm

Someone once wrote (emphasis added):
“So, yes, if you are interested in the global climate, you should use a homogenization method that not only removes break inhomogeneities, but also gradual ones. Thus, in that case you should not use a detection method that can only detect breaks like Zhang et al. (2013) did.
Furthermore, you should only use the station history to pin down the date of the break, but not for the decision of whether to remove the break or not. The latter is actually probably the biggest problem. There are climatologists who use statistical homogenization to detect breaks, but only correct these breaks if they can find evidence of the break in the station history, sometimes going to great lengths and reading the local newspapers from around that time.
If you did this wrong, you would notice that the urban station has a stronger trend than the surrounding stations. This is a clear sign that the station is inhomogeneous and that your homogenization efforts failed. A climatologist would thus reconsider his methodology, and such a station would not be used to study changes in the regional or global climate.”

Note the bit about climatologists reading station histories, newspapers, etc. before making adjustments.
Note also the title of this post at the very top of this page: “Why Automatic Temperature Adjustments Don’t Work”.
Note that Auckland, and other NZ sites known to have gradual inhomogeneities are included in GHCN v3.
Note that these stations, after adjustment, show no signs of removal of gradual inhomogeneities, suggesting strongly that NCDC only detect breakpoints.
Note that the adjusted stations have stronger trends than surrounding stations.
Note lastly that according to Victor, NCDC should reconsider their methodology.
It seems we’re all in agreement then.

Bob Dedekind
June 15, 2014 4:22 pm

Ha, I just realised I said this: “The proof is in the pudding.”
The correct saying is of course “The proof of the pudding is in the eating.”
My apologies to all those offended by my poor idioms, they certainly offend me.