Why Automatic Temperature Adjustments Don't Work

The automatic adjustment procedure is almost guaranteed to produce spurious, artificial warming, and here’s why.

Guest essay by Bob Dedekind

Auckland, NZ, June 2014

In a recent comment on Lucia’s blog The Blackboard, Zeke Hausfather had this to say about the NCDC temperature adjustments:

“The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.”

In other words, an automatic computer algorithm searches for breakpoints, and then automatically adjusts the whole prior record up or down by the amount of the breakpoint.
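To make the mechanics concrete, here is a minimal sketch of that adjustment step, assuming the simplest possible case. The function, station values and breakpoint below are purely illustrative; this is not NCDC’s actual code.

```python
# Illustrative sketch only (not NCDC's code): remove a detected step change
# by shifting every value before the breakpoint, so that the most recent
# data are treated as the "true" values.
import numpy as np

def remove_breakpoint(series, break_idx, step):
    """Shift all values before break_idx by step (the size of the detected jump)."""
    adjusted = series.copy()
    adjusted[:break_idx] += step   # the entire prior record moves up or down
    return adjusted

# Example: a 100-year record with a -0.3 C step detected in year 96 (say, 2006).
rng = np.random.default_rng(1)
temps = 14.0 + 0.1 * rng.standard_normal(100)
temps[95:] -= 0.3                  # the non-climatic step change
adjusted = remove_breakpoint(temps, break_idx=95, step=-0.3)
```

Every value before the break moves down by 0.3°C; nothing after the break changes.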

This is not something new; it’s been around for ages, but something has always troubled me about it. It’s something that should also bother NCDC, but I suspect confirmation bias has prevented them from even looking for errors.

You see, the automatic adjustment procedure is almost guaranteed to produce spurious, artificial warming, and here’s why.

Sheltering

Sheltering occurs at many weather stations around the world. It happens when something (anything) stops or hinders airflow around a recording site. The most common causes are vegetation growth and human-built obstructions, such as buildings. A prime example of this is the Albert Park site in Auckland, New Zealand. Photographs taken in 1905 show a grassy, bare hilltop surrounded by newly-planted flower beds, and at the very top of the hill lies the weather station.

If you take a wander today through Albert Park, you will encounter a completely different vista. The Park itself is covered in large mature trees, and the city of Auckland towers above it on every side. We know from the scientific literature that the wind run measurements here dropped by 50% between 1915 and 1970 (Hessell, 1980). The station history for Albert Park mentions the sheltering problem from 1930 onwards. The site was closed permanently for temperature measurements in 1989.

So what effect does the sheltering have on temperature? According to McAneney et al. (1990), each 1m of shelter growth increases the maximum air temperature by 0.1°C. So for trees 10m high, we can expect a full 1°C increase in maximum air temperature. See Fig 5 from McAneney reproduced below:

[Figure: Fig. 5 from McAneney et al. (1990), showing the increase in maximum air temperature with increasing shelter-belt height]

It’s interesting to note that the trees in the McAneney study grow to 10m in only 6 years. For this reason weather stations will periodically have vegetation cleared from around them. An example is Kelburn in Wellington, where cut-backs occurred in 1949, 1959 and 1969. What this means is that some sites (not all) will exhibit a saw-tooth temperature history, where temperatures increase slowly due to shelter growth, then drop suddenly when the vegetation is cleared.

[Figure: idealised saw-tooth temperature record; gradual warming from shelter growth, with sudden drops when the vegetation is cleared]

So what happens when the automatic computer algorithm finds the breakpoints at years 10 and 20? It removes them automatically, shifting all the earlier data downward, as follows.

[Figure: the same record after automatic breakpoint adjustment; the sudden drops are removed and the series now shows a continuous warming trend]

So what have we done? We have introduced a warming trend for this station where none existed.

Now, not every station is going to have sheltering problems, but there will be enough of them to introduce a certain amount of warming. The important point is that there is no countering mechanism – there is no process that will produce slow cooling, followed by sudden warming. Therefore the adjustments will always be only one way – towards more warming.
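The effect is easy to reproduce. Below is a minimal sketch (illustrative only, with deliberately simple, exaggerated numbers, and not any agency’s actual algorithm): a trendless saw-tooth record, warming slowly from shelter growth and cleared back every ten years, is “homogenized” by removing only the sudden drops.

```python
# Illustrative sketch only: a saw-tooth station record with +0.1 C/yr of
# sheltering warming, cleared back every 10 years. Removing only the sudden
# drops (as a step-change algorithm would) leaves the slow warming in place.
import numpy as np

years = np.arange(100)
raw = 0.1 * (years % 10)                     # trendless saw-tooth: no real long-term warming

adjusted = raw.copy()
drops = np.where(np.diff(raw) < -0.5)[0]     # the clearance breakpoints
for i in drops:
    adjusted[:i + 1] += raw[i + 1] - raw[i]  # shift all prior data down by the step size

raw_trend = np.polyfit(years, raw, 1)[0] * 100
adj_trend = np.polyfit(years, adjusted, 1)[0] * 100
print(f"Raw trend:      {raw_trend:+.2f} C/century")   # small (about +0.1)
print(f"Adjusted trend: {adj_trend:+.2f} C/century")   # about +9, all of it spurious
```

The raw record has essentially no trend; the adjusted record warms strongly, and every bit of that warming is an artefact of removing the breakpoints while leaving the gradual sheltering signal untouched.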

UHI (Urban Heat Island)

The UHI problem is similar (Zhang et al. 2014). A diagram from Hansen (2001) illustrates this quite well.

[Figures: diagram from Hansen et al. (2001) illustrating the urban heat island effect and a station move from an urban site to a more rural location]

In this case the station has moved away from the city centre towards a more rural setting. Once again, an automatic algorithm will most likely pick up the breakpoint and perform the adjustment. Again, there is no countering mechanism that produces a long-term cooling trend. Even if only a relatively small fraction of stations (say 10%) are affected in this way, it will be enough to skew the trend.
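As a rough back-of-the-envelope illustration of that last point (both numbers below are assumptions chosen only for the arithmetic, not measured values):

```python
# If a fraction of stations each acquire a one-way spurious warming trend
# from adjustments, the network average inherits a proportional share of it.
affected_fraction = 0.10    # assumed share of stations with spurious adjustments
spurious_trend = 1.0        # assumed artificial warming at each such station, C/century

network_bias = affected_fraction * spurious_trend
print(f"Network-mean trend bias: +{network_bias:.2f} C/century")   # +0.10
```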

References

1. Hansen, J., Ruedy, R., Sato, M., Imhoff, M., Lawrence, W., Easterling, D., Peterson, T. and Karl, T. (2001) A closer look at United States and global surface temperature change. Journal of Geophysical Research, 106, 23947–23963.

2. Hessell, J. W. D. (1980) Apparent trends of mean temperature in New Zealand since 1930. New Zealand Journal of Science, 23, 1–9.

3. McAneney, K. J., Salinger, M. J., Porteus, A. S. and Barber, R. F. (1990) Modification of an orchard climate with increasing shelter-belt height. Agricultural and Forest Meteorology, 49, 177–189.

4. Zhang, L., Ren, G.-Y., Ren, Y.-Y., Zhang, A.-Y., Chu, Z.-Y. and Zhou, Y.-Q. (2014) Effect of data homogenization on estimate of temperature trend: a case of Huairou station in Beijing Municipality. Theoretical and Applied Climatology, 115(3–4), 365–373.

166 Comments
Bob Dedekind
June 12, 2014 1:54 pm

Willis Eschenbach says: June 12, 2014 at 10:30 am
“Mmmm … I’d guess that station moves on average would be from more urban to more rural. As a result they’d average cooler”
I don’t think I expressed myself very well in my last reply to you, Willis, so let me rephrase it.
You mention that they’d average cooler, simply because of the move to a rural setting.
Why would that be? Why would moving a site a few kilometers one way or the other automatically make it cooler?
Only because the original site was artificially too warm (sheltering, UHI, etc.).
So should we adjust for this problem? No, the first site was too warm. It got corrected. The correction is inherent in the station move.
The real problem is differentiating between this sort of corrective move and the other type that may (for example) involve an altitude change.

Bob Dedekind
June 12, 2014 2:21 pm

Victor Venema (@VariabilityBlog) says: June 12, 2014 at 3:45 am
“The stations around Albert Park almost surely also have non-climatic jumps. Think back 100 years, it is nearly impossible to keep the measurements and the surrounding constant. These jumps create wrong trends in these stations and in the regional mean climate. As long as the improvements in the other stations are larger than the problem created in Albert Park, the regional mean trend will become more accurate.”
Not necessarily. As long as any uncorrected non-climatic trend exists in station records, you are guaranteed to create a worse trend by adjusting breakpoints blindly, unless you first remove the non-climatic trends. The reason is that the non-climatic trends are usually in the same direction. And most long-term stations around the world are sited near cities and areas of population growth, and are likely to be affected. Even rural sites can be affected by sheltering growth.
Now the non-climatic breakpoints are randomly distributed and so as long as there are a statistically large number of them they should cancel out.
Peterson et al. (1998, p. 1513) noted that “Easterling and Peterson (1995a,b) found that on very large spatial scales (half a continent to global), positive and negative homogeneity adjustments in individual station’s maximum and minimum time series largely balance out, so when averaged into a single time series, the adjusted and unadjusted trends are similar.”
At least by not adjusting breakpoints you aren’t introducing an error, which is what the peer-reviewed literature is telling us you are doing currently.
One should first follow Hippocrates here, and do no harm. The remedy shouldn’t be worse than the symptom.

NikFromNYC
June 13, 2014 3:16 am

(A) Three days ago:
“More later. Cant text and drive..” – Steven Mosher of the BEST project
(B) Two days ago:
“If we have a bunch of trendless sawtooth waves of varying frequencies, and we chop them at their respective discontinuities, average their first differences, and cumulatively sum the averages, we will get a strong positive trend despite the fact that there is absolutely no trend in the sawtooth waves themselves. / So I’d like to know if and how the “scalpel” method avoids this problem … because I sure can’t think of a way to avoid it.” – Willis Eschenbach
(C) Today:
“Chirp, chirp, chirp….” – Crickets strumming alight the BEST black box
(D) Possible hint two days ago:
“Berkeley does things slightly differently; it mainly looks for step changes, but downweights stations whose trends sharply diverge from their neighbors when creating regional temperature fields via kriging.” – Zeke Hausfather of the BEST project

Dinostratus
June 13, 2014 9:57 am

Can I get Bob Dedekind’s email from someone?

Bob Dedekind
June 13, 2014 4:16 pm

Dinostratus says: June 13, 2014 at 9:57 am
“Can I get Bob Dedekind’s email from someone?”
Anthony has it, I don’t mind if he gives it to you.

Victor Venema (@VariabilityBlog)
June 14, 2014 3:47 pm

Bob Dedekind says: “Quote from Zhang:
“Our analysis shows that data homogenization for [temperature] stations moved from downtowns to suburbs can lead to a significant overestimate of rising trends of surface air temperature.”
The only one in denial seems to be your good self.”

The algorithm used by Zhang was designed to only correct jump inhomogeneities and not the gradual ones, because he wanted to study how urbanization affected this station. Thus he made an effort not to remove the urbanization signal. Normal homogenization methods also remove gradual inhomogeneities, as I have written so often, but you do not respond to.
NikFromNYC says: “(C) Today: “Chirp, chirp, chirp….””
If there are no new arguments and the author still makes the word guaranteed bold, there is a moment that further discussion does not make much sense any more and one just hopes that the reader forms his own informed opinion.

Bob Dedekind
June 14, 2014 7:02 pm

Victor Venema (@VariabilityBlog) says: June 14, 2014 at 3:47 pm
“Normal homogenization methods also remove gradual inhomogeneities, as I have written so often, but you do not respond to.”
You have written it, but I see no evidence of it in the NOAA adjustments. I have looked through all the NZ stations, and there is no evidence of gradual adjustments that cool the trend. Yet most of these records are affected by either sheltering or UHI, as is well documented (Hessell, 1980).
Until I see evidence of gradual adjustments that decrease the trend over decades, and then (and only then) breakpoint analysis being applied, I maintain that it isn’t being done.
Of course, there’s another option: that these algorithms exist and work well, but NOAA isn’t using them. That, however, is another kettle of fish.

Editor
June 14, 2014 11:32 pm

Victor Venema (@VariabilityBlog) says:
June 14, 2014 at 3:47 pm

NikFromNYC says:

“(C) Today: “Chirp, chirp, chirp….””

If there are no new arguments and the author still makes the word guaranteed bold, there is a moment that further discussion does not make much sense any more and one just hopes that the reader forms his own informed opinion.

Thanks, Victor. You seem to misunderstand Nik’s point, perhaps because you have cut the context out of his quote. He said in full:

(A) Three days ago:

“More later. Cant text and drive..”

– Steven Mosher of the BEST project
(B) Two days ago:

“If we have a bunch of trendless sawtooth waves of varying frequencies, and we chop them at their respective discontinuities, average their first differences, and cumulatively sum the averages, we will get a strong positive trend despite the fact that there is absolutely no trend in the sawtooth waves themselves.
So I’d like to know if and how the “scalpel” method avoids this problem … because I sure can’t think of a way to avoid it.”

– Willis Eschenbach
(C) Today:

“Chirp, chirp, chirp….”

– Crickets strumming alight the BEST black box

In other words, he was commenting on the lack of response to my question. It has nothing to do with the author, or whether the word “guaranteed” is in bold type.
Since indeed neither you, Mosh, Zeke, nor anyone else has answered my simple question, let me state it again:

So I’d like to know if and how the “scalpel” method avoids this problem … because I sure can’t think of a way to avoid it.
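For concreteness, here is a minimal sketch of the effect I mean. It is purely illustrative and is not BEST’s actual code: just trendless saw-tooths cut at their discontinuities and recombined from averaged first differences.

```python
# Illustrative only: trendless saw-tooth series, "scalpeled" at their
# discontinuities, then recombined by averaging first differences and
# cumulatively summing them.
import numpy as np

n_years = 100
t = np.arange(n_years)
periods = [7, 9, 11, 13]                     # saw-tooths of varying frequency
series = [0.1 * (t % p) for p in periods]    # each rises 0.1/yr then resets: no long-term trend

diffs = []
for s in series:
    d = np.diff(s)
    d[d < 0] = np.nan                        # cut at each discontinuity
    diffs.append(d)

mean_diff = np.nanmean(np.vstack(diffs), axis=0)         # average the first differences
rebuilt = np.concatenate([[0.0], np.cumsum(mean_diff)])  # cumulative sum

for p, s in zip(periods, series):
    print(f"saw-tooth (period {p:2d}) trend: {np.polyfit(t, s, 1)[0]:+.3f} per year")
print(f"recombined series trend:     {np.polyfit(t, rebuilt, 1)[0]:+.3f} per year")
```

Each input series is essentially trendless, yet the recombined series comes out at about +0.1 per year, purely from the way the pieces are stitched back together.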

Your comments appreciated,
w.

NikFromNYC
June 15, 2014 1:13 am

The purposefully misleading propaganda produced by the BEST team is jaw dropping. In their front page linked glossy brochure, “Skeptic’s Guide to Climate Change” they make this unsupportable claim:
“Yes, natural variability exists, and the Earth’s temperature has changed in the past. However, for the past century we know that CO2 is coming from human burning of fossil fuels. While climate has changed in the past, possibly even as quickly and dramatically as it is changing today, we nevertheless can tell from the unique carbon fingerprint that today’s warming is human caused.”
Get the purposefully *false* logic here, meant to sway layperson readers and policy makers into giving their team more money:
(A) Recent warming repeatedly has perfect precedent in the past, *unrelated* to any burst in CO2.
(B) Therefore, because we have an isotopic signature that shows that the recent CO2 burst is indeed caused by fossil fuel use, thus *all* of today’s warming is human caused. Of course the “unique carbon fingerprint” also relies on utterly falsified hockey sticks which have now become such outright scams that all of peer review in climate “science” now delegitimizes all alarmist studies, including BEST:
http://s6.postimg.org/jb6qe15rl/Marcott_2013_Eye_Candy.jpg
Their blunt claim is an outright logic-twisting lie, and coming from Berkeley with its historically rigorous reputation, it is a positively immoral and self-destructive one.
They go on to inflate this false logic that equates mere continuation of the warming trend since the Little Ice Age bottomed out hundreds of years ago with *all* warming being human caused:
“The science is clear: global warming is real, and caused by human greenhouse gas emissions.”
NOBODY SAYS GLOBAL WARMING ISN’T REAL, and implying it to be so represents willful *slander*, but that’s the implication here by the words on the page. Then they make the monstrous leap to a call for action as if American policy would make any dent in future emissions anyway:
“Demand sustainable and cost-effective solutions in the US and around the world”
When these scammers act like they have the high moral ground, they are just acting, as their promoted policies threaten to become genocidal.
“In the event that I am reincarnated, I would like to return as a deadly virus, in order to contribute something to solve overpopulation.” – Prince Philip

Victor Venema (@VariabilityBlog)
June 15, 2014 5:34 am

Willis Eschenbach says: “The moderators on this site are unpaid volunteers. We need moderators 24/7, so they are spread around the planet. And there’s not always as many of them available as we might like.”
I appreciate the work of the moderators; they have a very important function in keeping the discussion free of bad language and off-topic comments. Below this post they have been kind enough to release my comments quite fast. No complaints about their work wrt my comments.
I was just wondering why I am under moderation; I am relatively friendly and do not call people enema or WC. It is unfortunately hard to avoid words like “mistake” when I think this blog is in error.
Bob Dedekind says: “The only one in denial seems to be your good self.”
That is not a very friendly remark, as I am sure everyone here would agree after all the complaints about the word “denier”. Moderators, is this a reason to put Bob Dedekind under moderation? Or is moderation only for dissidents of the local party line?
Willis Eschenbach says: “In other words, he was commenting on the lack of response to my question. It has nothing to do with the author, or whether the word “guaranteed” is in bold type.”
Could be; I thought he was commenting on the general lack of response. And by now I unfortunately feel that responding to Dedekind does not make much sense any more.
Willis, could you please explain to him the relative homogenization approach and how it also removes gradual inhomogeneities? Maybe he listens to you.
Willis Eschenbach says: Since indeed neither you, Mosh, Zeke, nor anyone else has answered my simple question, let me state it again:
So I’d like to know if and how the “scalpel” method avoids this problem … because I sure can’t think of a way to avoid it.
Your comments appreciated,

I did not respond because I do not feel qualified to judge the BEST method. I did read the article, but did not understand key parts of it. Thus, I am not surprised it was published in an unknown journal.
If I understand that part of the method right, I would prefer them to remove data with gradual inhomogeneities, rather than just give them a lower weight. Whether this leads to significant problems I do not know, but that is something that should be studied. That is a planned project, but that will still take some time.
[Mr. Venema, you wonder why you are on moderation; it is because you are sneering and taunting on your own blog. An example is that you took a comment by our host suggesting you have a fixation on WUWT and then added words not said to make it “My immature and neurotic fixation on WUWT”, writing a 4,225-word blog post to that effect. It proved exactly the point about you having a fixation on WUWT and Mr. Watts. You will probably write about this too. -mod]

Bob Dedekind
June 15, 2014 2:30 pm

Victor Venema says: June 15, 2014 at 5:34 am
“Moderators, is this a reason to put Bob Dedekind under moderation? Or is moderation only for dissidents of the local party line?”
I was also under moderation when I posted comments here. As Willis mentioned before, it’s actually not all about you. And there is a difference between saying someone is “in denial” regarding recent developments in the peer-reviewed literature, and labelling them a “denier”, with its inherent Holocaust denier implications.
“Willis, could you please explain him the relative homogenization approach and how it also removes gradual inhomogeneities? Maybe he listens to you.”
The proof is in the pudding. If I see that none of the NZ GHCN sites I’ve looked at show any signs of removal of gradual inhomogeneities, when those sites have known gradual inhomogeneities, then I must conclude that such removals do not exist.
Now, they may exist in all manner of new, exciting algorithms, but in v3 of GHCN they do not exist. Or they don’t work, take your pick.

Bob Dedekind
June 15, 2014 2:39 pm

OK Victor, let’s get specific. Does NOAA use a gradual inhomogeneity reduction scheme in their GHCN v3, before applying breakpoint adjustments? Yes or no.
If yes, show me some examples. I have shown a counter example (Auckland) so you’ll also have to explain why the scheme failed in that case.
If no, why are you still here, arguing that they do?

Bob Dedekind
June 15, 2014 3:43 pm

Someone once wrote (emphasis added):
“So, yes, if you are interested in the global climate, you should use a homogenization method that not only removes break inhomogeneities, but also gradual ones. Thus, in that case you should not use a detection method that can only detect breaks like Zhang et al. (2013) did.
Furthermore, you should only use the station history to precise the date of the break, but not for the decision whether to remove the break or not. The latter is actually probably the biggest problem. There are climatologists that use statistical homogenization to detect breaks, but only correct these breaks if they can find evidence of this break in the station history, sometimes going at great length and reading the local newspapers around that time.
If you would do this wrong, you would notice that the urban station has a stronger trend than the surrounding stations. This is a clear sign that the station is inhomogeneous and that your homogenization efforts failed. A climatologist would thus reconsider his methodology and such a station would not be used to study changes in the regional or global climate.”

Note the bit about climatologists reading station histories, newspapers, etc. before making adjustments.
Note also the title of this post at the very top of this page: “Why Automatic Temperature Adjustments Don’t Work”.
Note that Auckland, and other NZ sites known to have gradual inhomogeneities are included in GHCN v3.
Note that these stations, after adjustment, show no signs of removal of gradual inhomogeneities, suggesting strongly that NCDC only detect breakpoints.
Note that the adjusted stations have stronger trends than surrounding stations.
Note lastly that according to Victor, NCDC should reconsider their methodology.
It seems we’re all in agreement then.

Bob Dedekind
June 15, 2014 4:22 pm

Ha, I just realised I said this: “The proof is in the pudding.”
The correct saying is of course “The proof of the pudding is in the eating.”
My apologies to all those offended by my poor idioms, they certainly offend me.
