Why Automatic Temperature Adjustments Don't Work

The automatic adjustment procedure is almost guaranteed to produce spurious, artificial warming, and here’s why.

Guest essay by Bob Dedekind

Auckland, NZ, June 2014

In a recent comment on Lucia’s blog The Blackboard, Zeke Hausfather had this to say about the NCDC temperature adjustments:

“The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.”

In other words, an automatic computer algorithm searches for breakpoints, and then automatically adjusts the whole prior record up or down by the amount of the breakpoint.
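
To make that operation concrete, here is a minimal Python sketch of the basic step (my own illustration with made-up numbers, not NCDC's actual pairwise homogenisation code): detect a break, then shift everything before it by the size of the break, so that the present is treated as "true".

```python
import numpy as np

def remove_breakpoint(series, break_index, step):
    """Remove a detected non-climatic step by shifting the earlier record.

    `step` is the estimated jump (level after the break minus level before).
    The present is treated as "true", so every value before the break is
    shifted by `step` to line up with the later data.
    """
    adjusted = series.copy()
    adjusted[:break_index] += step
    return adjusted

# Hypothetical example: a +0.5 °C step at index 80 of a 100-year record
rng = np.random.default_rng(0)
raw = rng.normal(15.0, 0.3, 100)   # a flat record with a little noise
raw[80:] += 0.5                    # the non-climatic break
homogenised = remove_breakpoint(raw, 80, 0.5)   # years 0-79 all move up by 0.5 °C
```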

This is not something new; it’s been around for ages, but something has always troubled me about it. It’s something that should also bother NCDC, but I suspect confirmation bias has prevented them from even looking for errors.

You see, the automatic adjustment procedure is almost guaranteed to produce spurious, artificial warming, and here’s why.

Sheltering

Sheltering occurs at many weather stations around the world. It happens when something (anything) stops or hinders airflow around a recording site. The most common causes are vegetation growth and human-built obstructions, such as buildings. A prime example of this is the Albert Park site in Auckland, New Zealand. Photographs taken in 1905 show a grassy, bare hilltop surrounded by newly-planted flower beds, and at the very top of the hill lies the weather station.

If you take a wander today through Albert Park, you will encounter a completely different vista. The Park itself is covered in large mature trees, and the city of Auckland towers above it on every side. We know from the scientific literature that the wind run measurements here dropped by 50% between 1915 and 1970 (Hessell, 1980). The station history for Albert Park mentions the sheltering problem from 1930 onwards. The site was closed permanently for temperature measurements in 1989.

So what effect does the sheltering have on temperature? According to McAneney et al. (1990), each 1m of shelter growth increases the maximum air temperature by 0.1°C. So for trees 10m high, we can expect a full 1°C increase in maximum air temperature. See Fig 5 from McAneney reproduced below:

[Figure: Fig. 5 from McAneney et al. (1990), maximum air temperature increase plotted against shelter-belt height]
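
In code form that relationship is just a proportionality (a sketch of the rule of thumb quoted above, not McAneney's fitted regression):

```python
def shelter_warming_degC(shelter_height_m):
    """Approximate rise in maximum air temperature from shelter growth,
    using the ~0.1 °C per metre figure quoted above."""
    return 0.1 * shelter_height_m

print(shelter_warming_degC(10.0))   # 1.0 °C for 10 m trees, as in the text
```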

It’s interesting to note that the trees in the McAneney study grow to 10m in only 6 years. For this reason weather stations will periodically have vegetation cleared from around them. An example is Kelburn in Wellington, where cut-backs occurred in 1949, 1959 and 1969. What this means is that some sites (not all) will exhibit a saw-tooth temperature history, where temperatures increase slowly due to shelter growth, then drop suddenly when the vegetation is cleared.

[Figure: idealised saw-tooth station record, slow warming from shelter growth punctuated by sudden drops when the vegetation is cleared at years 10 and 20]

So what happens now when the automatic computer algorithm finds the breakpoints at years 10 and 20? It automatically removes them, shifting the whole earlier record down each time, as follows.

[Figure: the same record after automatic breakpoint removal, with the sudden drops gone and a steady warming trend remaining]

So what have we done? We have introduced a warming trend for this station where none existed.
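
You can check this mechanism with a toy simulation. Everything below is illustrative: a flat "true" climate, a shelter effect of 0.1°C per year of growth, clearances at years 10 and 20, and a deliberately naive stand-in for an automatic adjuster (not any agency's real algorithm). The point is only that removing the two sudden drops manufactures a warming trend where none exists.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(30)
true_climate = 15.0 + rng.normal(0.0, 0.1, years.size)   # no real trend at all

# Shelter warming builds up, then is reset when vegetation is cleared (years 10 and 20)
shelter = 0.1 * (years % 10)          # saw-tooth contamination, °C
measured = true_climate + shelter     # what the station actually reports

def naive_homogenise(series, breaks):
    """Remove each sudden drop by shifting everything before it to match the present."""
    adjusted = series.copy()
    for b in breaks:
        step = adjusted[b] - adjusted[b - 1]   # the clearance drop (negative)
        adjusted[:b] += step                   # earlier record pulled down by that amount
    return adjusted

adjusted = naive_homogenise(measured, breaks=[10, 20])

def trend_per_century(series):
    return np.polyfit(years, series, 1)[0] * 100.0

print(f"true trend     : {trend_per_century(true_climate):+.2f} °C/century")   # close to zero
print(f"adjusted trend : {trend_per_century(adjusted):+.2f} °C/century")       # strongly positive
```

The numbers in this toy are exaggerated by the short record, but the direction is the point: once the saw-tooth is "homogenised" this way, it becomes a steady warming trend.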

Now, not every station is going to have sheltering problems, but there will be enough of them to introduce a certain amount of warming. The important point is that there is no countering mechanism – there is no process that will produce slow cooling, followed by sudden warming. Therefore the adjustments will always be only one way – towards more warming.

UHI (Urban Heat Island)

The UHI problem is similar (Zhang et al., 2014). A diagram from Hansen et al. (2001) illustrates this quite well.

[Figures: schematic from Hansen et al. (2001) showing urban warming at a city-centre station, and the step change introduced when the station is moved to a more rural site]

In this case the station has moved away from the city centre, out towards a more rural setting. Once again, an automatic algorithm will most likely pick up the breakpoint and perform the adjustment. There is also no countering mechanism that produces a long-term cooling trend. If even relatively few stations are affected in this way (say 10%), it will be enough to skew the trend.
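
As a back-of-the-envelope check of that last point (the 10% share and the 1°C/century spurious trend below are assumptions for illustration, not measured values):

```python
# Network-average bias if a minority of stations pick up a spurious trend
fraction_bad   = 0.10   # share of stations affected by shelter/UHI-style adjustments
spurious_trend = 1.0    # °C/century artificially added to each affected station
true_trend     = 0.0    # assume no real warming, for simplicity

network_trend = (1 - fraction_bad) * true_trend + fraction_bad * (true_trend + spurious_trend)
print(f"network-average bias: {network_trend:+.2f} °C/century")   # +0.10
```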

References

1. Hansen, J., Ruedy, R., Sato, M., Imhoff, M., Lawrence, W., Easterling, D., Peterson, T. and Karl, T. (2001) A closer look at United States and global surface temperature change. Journal of Geophysical Research, 106, 23947-23963.

2. Hessell, J. W. D. (1980) Apparent trends of mean temperature in New Zealand since 1930. New Zealand Journal of Science, 23, 1-9.

3. McAneney K.J., Salinger M.J., Porteus A.S., and Barber R.F. (1990) Modification of an orchard climate with increasing shelter-belt height. Agricultural and Forest Meteorology, 49, 177-189.

4. Zhang, L., Ren, G.-Y., Ren, Y.-Y., Zhang, A.-Y., Chu, Z.-Y. and Zhou, Y.-Q. (2014) Effect of data homogenization on estimate of temperature trend: a case of Huairou station in Beijing Municipality. Theoretical and Applied Climatology, 115(3-4), 365-373.

Comments

barrybrill
June 10, 2014 10:09 am

“Thinking Scientist” says:
“The [Wellington] linear regression trends for the periods 1929 – 1988 (annual averages) are:
Unadjusted GHCN 1929 – 1988 is +0.96 degC / Century
Adjusted GHCN 1929 – 1988 is +1.81 degC / Century”
This suggests that opaque GHCN adjustments almost doubled an already high warming trend during this 60-year period. What could have triggered them? Did they record the Thorndon-Kelburn site relocation of December 1927 as occurring post January 1929?
The unadjusted data show a temperature increase way above the global average during that period, presumably as a result of shelter/UHI contamination. Nearby stations show only mild warming during the same period.

Gary Palmgren
June 10, 2014 10:13 am

“The important point is that there is no countering mechanism – there is no process that will produce slow cooling, followed by sudden warming.”
Actually there is a process. If a forest grows on a ridge next to a temperature station, the air will cool significantly under the forest canopy and the cool air will flow down the ridge and cool the thermometer. Harvest the trees and the temperature will go up. This is very apparent when riding a motorcycle past such ridges on a warm day. This is just another example of why temperature adjustments cannot be automated and must be done on a site by site basis.

June 10, 2014 10:16 am

I should mention that NCDC’s PHA doesn’t just look for step changes; it also looks for (and corrects) divergent trends relative to neighboring stations. It should be able to correct equally for the gradual trend bias and the sharp revision to the mean, though as I mentioned earlier this could be better tested using synthetic data (a project that ISTI folks are working on). Having a standard set of benchmarks (which include different types of inhomogeneities) to test different algorithms against should help ensure that there is no residual bias.
Berkeley does things slightly differently; it mainly looks for step changes, but downweights stations whose trends sharply diverge from their neighbors when creating regional temperature fields via kriging.
The Kelburn station is a good example of the need for -some- sort of homogenization. The station move in 1928 had an impact similar to the century-scale warming at the location. Not correcting for that (or similar biases due to TOBs changes or instrument changes) does not give you a pure, unbiased record.

MikeH
June 10, 2014 10:23 am

Would this be a good analogy? And would anyone be able to get away with this?
Let’s say I purchased IBM stock in 1990 at $100 per share.
And for argument’s sake, I sold it today at $200.
BUT, since there was inflation between 1990 and 2014, I calculated the original $100 per share purchase would be equivalent to a $150 share purchase price in today’s finances. Therefore, my capital gain is really $50 per share, not the actual $100 per share.
Would the IRS let me use that creative math? To me, this is the same creative math being used in the temperature record.
BTW, if this is a real stock tax strategy, please let me know. I usually buy high and sell low, I need all of the help I can get.

Reply to  MikeH
June 10, 2014 12:53 pm

BTW, if this is a real stock tax strategy, please let me know. I usually buy high and sell low, I need all of the help I can get.

Your income taxes are indexed (thanks to Ronald Reagan), your capital gains are not. Sorry, you just made $100 profit and are going to be taxed on the whole $100.

Michael D
June 10, 2014 10:35 am

Just go back to the raw data for large-scale roll-ups of the temperature, and trust that the discontinuities will either a) all average out due to their random distribution, or b) introduce temporary artefacts that wise climate scientists can point to, wisely, and explain. Don’t remove the artefacts – that introduces more complex artefacts that are much more difficult to explain.

Tom In Indy
June 10, 2014 10:36 am

The claim is that it’s appropriate to make adjustments to breakpoints in the micro-data at the individual station level in order to get a more accurate picture of the underlying trend. If true, then why is it not equally valid to make similar adjustments to breakpoints at the macro-data level? For example, the 1998 El Niño? There is a clear step change that represents an anomalous deviation from the underlying trend. A second claim is that ENSO is a “random” element of climate, so why isn’t a portion of this anomalous and random event removed from the macro-data in order to get a more accurate picture of the underlying trend?
Here is the image from the new post on the UAH Global Temperature Update for May.
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_May_2014_v5.png
It’s obvious that the 1998 El Niño contributed to the trend, but according to the logic that supports adjustments to the GHCN data, a portion of that El Niño effect on the trend should be removed. It’s an outlier from surrounding data points (monthly anomalies) just like break points at the local station level are outliers from surrounding stations.

Victor Venema
June 10, 2014 10:37 am

Bob Dedekind, you can download NOAA’s free objective (automatic) homogenization algorithm. Did you do so and test it with a saw-tooth signal to see whether your (or, more accurately, James Hansen’s) potential problem is a real one?

June 10, 2014 10:39 am

Tom In Indy,
It’s really a question of scale. Changes in climate over time tend to be pretty highly spatially correlated. If one station has a big step change that doesn’t appear in any other nearby stations, it’s likely an artifact of some localized bias (station move, instrument change, TOBs change) rather than a real climate signal. ENSO, on the other hand, affects broad regions of the world and is not in any way a result of instrument-related issues.

June 10, 2014 11:03 am

Hiking in the Canadian Rockies I’ve noted, to my distaste, that coming off a pleasant open ridge in full sun, if I drop down into a limber pine stand (max 5 m high) the temp jumps and I am almost claustrophobic with the sudden heat AND humidity. And, of course, no breeze. I didn’t know there was an empirical relationship.
The GISS temperature adjustment vs time graph was the first Whoa! moment for me: the implication was that all previous temperature measurements were essentially incorrect, reading far too high – relative to what we measure now. The concept didn’t bother me too much if the data was pre-1930, but I found the continual “correction” disturbing, especially for temperature data collected in the post-1970 period. I didn’t believe that there was a fundamental problem with measuring temperatures back then, UNLESS a desire for a warmer current temperature was causing GISS to compare apples to oranges.

Willis Eschenbach
June 10, 2014 11:03 am

First, my thanks for an interesting post.
Next, some folks here seem to think that “raw data” is somehow sacred. Well, it is, but only for preservation purposes. All real data has to go through some kind of QC, quality control. So I have no theoretical problem with doing that … but as always the devil is in the details.
Now, in my opinion Berkeley Earth has done both a very good and a not-so-good job of curating and processing the data. It’s very good for a simple reason—they have been totally transparent about the data and the code. Not only that, but as in the example under discussion, whose Berkeley Earth record is here, they display very clearly the points where they think the data has problems, and what they’ve done about it.

They’ve done a not-so-good job of it, in my opinion, for a couple of reasons. First, they use the data website as a propaganda platform to spread their own political views about the climate issues. For example, at the top of the individual data pages it says in big bold type,

Read our new Skeptic’s Guide to Climate Change, and learn the facts about Global Warming

To me, that’s a sales pitch, and it is a huge mistake. If you as the curator of a dataset use that data as a platform for propagandizing your own alarmism, you call into question your impartiality in the handling of the data. It may not be true, but as the handler of the data, there’s a “Caesar’s Wife” issue here, where they should avoid the appearance of impropriety. Instead, they have been very vocal proponents of a point of view that, curiously, will make them lots of money … shocking, I know, but Pere et Fille Mueller have a for-profit business arm of their “impartial” handling of the climate data. It reminds me of the joke in the Pacific islands about the missionaries—”They came to the islands to do good … and they have done very well indeed.” For me, using the data web site to pitch their alarmism is both a huge tactical error, and an insight into how far they are willing to go to alarm people and line their own pockets … unsettling.
I also say they’ve done a not-so-good job because in my opinion they have overcorrected the data. Take a look at the Wellington data above. They say that there are no less than ten “empirical breaks” in the data, by which they mean places where the data is not like the average of the neighborhood.
I’m sorry, but I find that hard to swallow. First off, they show such “empirical breakpoints” in the 1890s … I find it very difficult to credit that there are enough neighboring thermometers in 1890s New Zealand to even begin to make such a determination.
It’s part of the difficult question of discontinuity. Let me use the example of where I live, on the Northern California coast an hour north of San Francisco. I live in a weather zone which has such anomalous weather that it has its own name. It’s called the “Banana Belt”, because it almost never freezes. It is a very, very narrow but long zone between about 600-800′ (180-240m) in elevation on the ocean side of the first ridge of mountains inland from the coast. It’s most curious. It freezes uphill from us, and downhill from us, hard frosts, but it almost never freezes here.
So if you have a year with very few freezes (it is California, after all), the temperature record at my house isn’t too different from the temperatures recorded at the weather station in the valley.
But if you have, say, a three-year stretch with a number of hard frosts, all of a sudden we have an “empirical break” between the temperature at my house and the regional average temperature, one which the Berkeley Earth folks might “adjust” out of existence.
In addition, temperatures here are very wind dependent. Because we’re on the coast and the wind typically is running along the coast, if the wind on average switches by only a few degrees, we get a warm land breeze instead of a cool sea breeze … and such shifts in wind are sometimes quite long-lasting. Again, when this happens, we get an “empirical break” between the weather here, and what is being recorded at the local weather station.
Note also that in general there is no “halfway” in the wind. We’re either getting a sea breeze or a land breeze, and when one changes to the other, it’s quick and boy, do you notice a difference. It is not a continuous process. It is an abrupt discontinuous shift from one thermal regime to another.
This highlights the problem—just how discontinuous do we expect our temperatures to be, both in time and space?
Berkeley Earth uses “kriging” to create a “temperature field”. Now, this is not a bad choice of how to go about it, and sadly, it might even be our best choice. It certainly beats the hell out of gridcell averaging …
But kriging, like all such methods, doesn’t handle edges very well. It assumes (as we almost must assume despite knowing it’s not true) that if at point A we have a measurement of X, and at point B we have a measurement of Y, then halfway between A and B the best guess is the average of X and Y.
But that’s not how nature works. If point A is in the middle of a cloud and point B is near it in clear air, the best guess is that at the midway point it is either 100% clear air or 100% cloud. And guessing “half-cloud” will almost never be correct. Nature has edges and discontinuities and spots and stripes. And although our best guess is (and almost has to be) smooth transitions, that’s not what is actually happening. Actually, it’s either a sea breeze or a land breeze, with discontinuous shift between them. In fact, nature is mostly made up of what the Berkeley Earth folks call “empirical breaks” …
I mentioned above the question of how discontinuous we expect our weather to be. The problem is made almost intractable by the fact that we expect to find discontinuities such as those where I live even if our records are perfect. This means that we cannot determine the expected prevalence of discontinuities using our records, because we cannot tell the real discontinuities like my house from the spurious. If my temperatures here at my house are different from those down in the valley, there is no way to tell from just the temperature data alone whether that is an actual discontinuity, or whether it is an error in the records—it could be either one. So we don’t even know how discontinuous we expect the temperature record to be. And that makes the level at which we “adjust” the temperature purely a judgement call.
Berkeley Earth defines what they call a “regional expectation” of temperature. If a given station departs from that regional expectation, it is “adjusted” back into compliance with the group-think. The obvious problem with that procedure, of course, is that at some setting of their thresholds for action, the temperatures at my house will be “adjusted” to match the region. After all, the “Banana Belt” is a very narrow strip of land which is distinctly different from the surrounding region, we defy “regional expectations” every day.
So the real question in this is, where do you set the rejection level? At what degree of difference do you say OK, this station needs adjusting?
Looking at the Wellington record above, I’d say they’ve set the rejection level, the level where they start messing with the data, far too low. I’m not buying that we can tell that for a couple of years in the 1890s the Wellington record was reading a quarter of a degree too high, and that when it dropped down, it resumed a bit higher than when it left off. I’d say they need to back off on the sensitivity of their thresholds.
This is where their political posturing returns to bite them in the gearshift knob. As I mentioned, at some level of setting of the dials, the temperatures at my house get “adjusted” out of existence … and the level of the setting of those dials is in the hands of Richard Mueller et al., who have a clearly demonstrated political bias and who have shown a willingness to use the data for propaganda purposes.
The huge problem with this situation is, of course, that the long-term temperature trend is inversely proportional to the setting of the level at which you begin adjustment. If you set the level low, you adjust a lot, and the long-term trend goes up. If you set the level high, you only adjust a little, and the long-term trend is smaller.
And if you think Richard Mueller doesn’t know that … think again. In my estimation, that is the very reason why the level is set as low as it is, a threshold so easily reached that their automatic algorithm is adjusting a couple of years in 1890 in Wellington … because the more adjustments, the higher the trend.
So I’d disagree with the title of this post. The problem is not that the automatic adjustments don’t work. The problem is that with Richard Mueller’s hand on the throttle, automatic adjustments work all too well …
Best to everyone on a foggy cool morning here, with water dripping off of the magically self-watering redwood trees who can pluck their moisture from the very air, on a day when the nearest weather station says it’s hot, dry, and sunny …
w.

PeterB in Indianapolis
June 10, 2014 11:12 am

With modern technology, a properly calibrated digital thermometer can take individual readings every few seconds which can all be put into a computer file as a 24-hour time series. Every station, using the proper technology, could reasonably have a MINIMUM of 3600 temperature observations per day, which would give a MUCH better resolution of actual temperature at a given station for each given day.
The problem comes in when you attempt to AVERAGE such things into one “observation”.
One of the best examples I can give for this is one of my favorite days when I was a young boy.
I was asleep at midnight, but I know that the temperature in my area was in the mid 40s (F). By 10:30 in the morning, the temperature was 57 (again F). Then a powerful cold front ripped through the area, and by 1:30 PM local time the temperature was 7 (yes, F). By 11:59 PM, it had dropped to 5F.
So…. if you only had ONE station reading from a nearby station for that day, or if you AVERAGED a bunch of readings for that particular day, it wouldn’t tell you squat about what ACTUALLY happened on that day.
To me, the best you could do is take as many observations as possible over 24 hours at a station, and average them out over the whole 24 hours, but even THAT wouldn’t reflect reality in any meaningful way.
To take old station data that could have all SORTS of problems like the one I described above, and then to try to AVERAGE ALL STATIONS to create a “global temperature” is simply ludicrous. Global Temperature has ABSOLUTELY NO MEANING WHATSOEVER under those conditions.
It MIGHT have SOME meaning using modern satellite data, but prior to modern satellites, trying to calculate a global average temperature is about the most idiotic exercise I can conceivably imagine. Even with modern satellite data, the concept of “global average temperature” is still pretty dubious, but at least it is based on real data that we know the method of collection for….

kadaka (KD Knoebel)
June 10, 2014 12:06 pm

From Willis Eschenbach on June 10, 2014 at 11:03 am:

(…) Not only that, but as in the example under discussion, whose Berkeley Earth record is here, they display very clearly the points where they think the data has problems, and what they’ve done about it.
[image???]

How did you manage to embed http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Figures/18625-TAVG-Alignment.pdf which is clearly a PDF, as an image? It is not coming up for me, just a blank space with a broken image icon.

Jim S
June 10, 2014 12:14 pm

I’m sorry, when did changing data become acceptable in science?

kadaka (KD Knoebel)
June 10, 2014 12:18 pm

From Nick Stokes on June 10, 2014 at 4:26 am:

Here is some detail about the GHCN temperature record in Wellington WMO 93436, which I believe is Kelburn. There weren’t any adjustments in 1949 or 1959, when the trees were cut.

And thank you for the URL, Nick. Backing it up led me to discover a very interesting global map:
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/cas.ghcnm.tavg.v3.2.2.20140610.trends.gif
Trends in Annual TAVG, 1901 to 2013
Save and peruse before it can be “disappeared”. You may note the odd pockets of cooling in the midst of heating. How did those happen?
But mainly notice how at 70% oceans, with huge chunks of continents unaccounted for, there is very little coverage showing. Practically all of it is Northern Hemisphere land, where you’d find UHI contamination.
And from such is crafted a global average temperature? That is deliberate deception, or extreme hubris.

Willis Eschenbach
June 10, 2014 12:32 pm

kadaka (KD Knoebel) says:
June 10, 2014 at 12:06 pm

From Willis Eschenbach on June 10, 2014 at 11:03 am:

(…) Not only that, but as in the example under discussion, whose Berkeley Earth record is here, they display very clearly the points where they think the data has problems, and what they’ve done about it.
[image???]

How did you manage to embed http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Figures/18625-TAVG-Alignment.pdf which is clearly a PDF, as an image? It is not coming up for me, just a blank space with a broken image icon.

Curious, KD, it displays fine on my computer (Mac/Safari). Does anyone else have problems with it? If so I could embed it in a different manner.
w.

Dougmanxx
June 10, 2014 12:39 pm

Nick and Zeke, Interesting discussion, but pointless because no one knows what the “average temperature” was for those stations you bandy on about. What were they? Any clue? Or is the only thing you have an “anomaly” that changes at a whim? What are the “average temperatures” for say… 1928? What was the “average temperature” for 1928 in say… 1999? 2011? 2012? 2013? Were they different? If so, that simply means you are making “adjustments” on top of “adjustments”. Anyone who publishes “anomaly” information should be required to also publish what they are using as their “average temperature”, that way you can put to rest, quickly and quietly any of us who have questions. Why won’t anyone answer this simple question for me? What is the “average temperature”?

Louis
June 10, 2014 12:43 pm

Stephen Wilde says:
June 10, 2014 at 4:29 am
Climate scientists really aren’t all that bright are they ?

Your conclusion is based on the assumption that the methods used to adjust temperature data create a warming bias due to some dumb mistake rather than by intelligent design. If it were just incompetence or stupidity, the adjustments would have an equal chance of creating a cooling bias as a warming bias. These things may not be going according to proper science, but they are going according to plan.

Green Sand
June 10, 2014 12:46 pm

Willis Eschenbach says:
June 10, 2014 at 12:32 pm
Does anyone else have problems with it? If so I could embed it in a different manner.

————————————————-
Yes, no can see, Win 7, Firefox

Bob Dedekind
June 10, 2014 12:59 pm

Hi all,
Awake now. The important points here are (I believe):
1) Adjustments are necessary if you want an “accurate” station record. An example is 1928 in Kelburn. It is, however, important to note that you cannot just apply (for example) a generic altitude adjustment for similar situations. Why not? Well, take Albert Park. It is over 100m higher than Mangere, the site that replaced it. Yet during an overlap period it was shown to be 0.66°C warmer! Now normally there is no overlap period, and any automatic adjuster would have made a mess of it.
2) The question of need for adjustments is a red herring. What is actually under discussion is whether there are any checks done during the automatic homogenisation process that detect and prevent incorrect adjustments of the slow-then-sudden variety. I think it’s pretty clear there aren’t. Nick mentioned the detection of spurious trends, but I know that in the NZ case almost all our long-term records come from urban sites that are themselves contaminated by sheltering or UHI. Also, I’m less convinced by this argument, considering some of the adjustments I’ve seen that make a steep trend worse.

Bob Dedekind
June 10, 2014 1:03 pm

Oops, apologies, about 50m higher. Albert Park should be 0.3°C cooler than Mangere. It was 0.66°C warmer.

Nick Stokes
June 10, 2014 1:05 pm

kadaka (KD Knoebel) says: June 10, 2014 at 12:18 pm
“And from such is crafted a global average temperature? That is deliberate deception, or extreme hubris.”

The map doesn’t show the information used to get a global temperature. It only shows individual stations with more than a century of data. And of course it doesn’t show all the ocean data.
I have maps showing that information here. You can choose 1902 as a start date. It can show the individual stations, and it does show the ocean data.

Bob Dedekind
June 10, 2014 1:14 pm

Victor Venema says:
June 10, 2014 at 10:37 am
Good grief, is that Fortran? Cool, I haven’t used that in twenty years.
Are you suggesting that the Hansen-type issue never occurs? Or that there is in fact a mechanism built into the algorithm to detect and prevent it?

Bob Dedekind
June 10, 2014 1:29 pm

Willis Eschenbach says: June 10, 2014 at 11:03 am
Thanks Willis. You’re quite right regarding the early records. NIWA had this to say about the NZ stations generally, and I’m sure it applies equally to most the rest of the world:
“In the process of documenting the revised adjustments for all the ‘seven-station’ series, it was recognised that there was lower confidence in New Zealand’s early temperature measurements, and there were fewer comparison sites from which to derive adjustments for non-overlapping temperature series. Thus, a decision was made not to include temperatures prior to 1900. Furthermore, if there were site changes around 1910 that were difficult to justify, then the time series was truncated at that point.”

Bob Dedekind
June 10, 2014 1:37 pm

Victor Venema says: June 10, 2014 at 10:37 am
If you’re suggesting that Hansen-like problems don’t occur, then Williams (2012) disagrees with you, since they postulate exactly that mechanism for why there is a bias:
“This suggests that there are factors causing breaks with a negative sign bias before 1979 (in addition to the TOB) that are offsetting the largely positive shifts caused by the transition to MMTS afterwards. For example, there may have been a preference for station relocations to cooler sites within the network, that is, away from city centers to more rural locations especially around the middle of the twentieth century [Hansen et al., 2001].”

Nick Stokes
June 10, 2014 1:49 pm

ThinkingScientist says:
“If we simply apply a correction of 0.8 degC to pre-1928 unadjusted data the regression slope (through annual averages) is +0.44 degC/century”

That doesn’t sound right to me. I found that placing such a change 64 years into a 125-year stretch makes a change of 0.96 °C/century, which is close to the total change.
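
That figure is easy to check with a quick sketch (my own toy example: a single 0.8°C step and nothing else, fitted with an ordinary least-squares line):

```python
import numpy as np

years = np.arange(125)
series = np.where(years >= 64, 0.8, 0.0)          # 0 before the step, 0.8 °C after
slope_per_century = np.polyfit(years, series, 1)[0] * 100.0
print(f"{slope_per_century:.2f} °C/century")       # ≈ 0.96
```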