Unwarranted Temperature Adjustments and Al Gore's Unwarranted Call for Intellectual Tyranny

Guest essay by Jim Steele, Director emeritus Sierra Nevada Field Campus, San Francisco State University

For researchers like myself examining the effect of local microclimates on the ecology of local wildlife, the change in the global average is an absolutely useless measure. Although it is wise to think globally, wildlife only responds to local climate change. To understand how local climate change has affected wildlife in California’s Sierra Nevada and Cascade Mountains, I examined data from stations that make up the US Historical Climate Network (USHCN).

I was quickly faced with a huge dilemma that began my personal journey toward climate skepticism. Do I trust the raw data, or do I trust the USHCN’s adjusted data?

For example, the raw data for minimum temperatures at Mt Shasta suggested a slight cooling trend since the 1930s. In contrast, the adjusted data suggested a 1 to 2°F warming trend. What to believe? The confusion resulting from skewing trends is summarized in a recent study that concluded their “results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.”13

[Figure: raw versus adjusted minimum temperatures at Mt Shasta]

I began exploring data at other USHCN stations from around the country and realized that a very large percentage of the stations had been adjusted very similarly. The warm peaks from the 1930s and 40s had been adjusted downward by 3 to 4°F, and these adjustments created dubious local warming trends, as seen in examples from other USHCN stations at Reading, Massachusetts and Socorro, New Mexico.

[Figure: raw versus adjusted temperatures at Reading, Massachusetts and Socorro, New Mexico]

Because these adjustments were so widespread, many skeptics have suspected some sort of conspiracy. Although scientific papers are often retracted for fraudulent data, I found it very hard to believe climate scientists would allow such blatant falsification. Data correction is often needed and well justified in all scientific disciplines: wherever there are documented changes to a weather station, such as a change in instrumentation, an adjustment is justified. However, unwitting systematic biases in the adjustment procedure could readily fabricate such a trend, and these dramatic adjustments were typically based on “undocumented changes” detected when climate scientists attempted to “homogenize” the regional data. The rationale for homogenization rests on the dubious assumption that all neighboring weather stations should display the same climate trends. However, due to the effects of landscape changes and differently vegetated surfaces,1,2 local temperatures often respond very differently, and minimum temperatures are especially sensitive to different surface conditions.

For example, even in relatively undisturbed regions, Yosemite’s varied landscapes respond in very contrary ways to a weakening of the westerly winds. Over a 10-year period, one section of Yosemite National Park cooled by 1.1°F, another warmed by 0.72°F, while in a third location temperatures did not change at all.16 Depending on the location of a weather station, very different trends are generated. The homogenization process blends neighboring data, obliterates local differences, and then fabricates an artificial trend.

Ecologists and scientists who assess regional climate variability must only use data that has been quality controlled but not homogenized. In a climate variability study, scientists computed the non-homogenized changes in maximum and minimum temperatures for the contiguous United States.12 The results seen in Figure A (their figure 1b) suggest recent climate change has been more cyclical. Those cyclical changes parallel the Pacific Decadal Oscillation (PDO). When climate scientists first began homogenizing temperature data, the PDO had yet to be named, so I suggest that, instead of a deliberate climate science conspiracy, it was ignorance of the PDO, coupled with overwhelming urbanization effects, that caused the unwarranted adjustments: these cycles produced “natural change points” that climate scientists had yet to comprehend. Let me explain.

[Figure A: non-homogenized changes in maximum and minimum temperatures for the contiguous United States]
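To make that comparison concrete, here is a minimal sketch of how one might average quality-controlled but non-homogenized station data and set the result beside the PDO index. It is not the code used in the study cited above; the file names, column layout (“stations_raw.csv”, “pdo_index.csv”, tmin, tmax), and baseline period are hypothetical placeholders.

```python
# Minimal sketch: average RAW (non-homogenized) station anomalies for the
# contiguous US and correlate the result with the PDO index.
# All file names and column names below are hypothetical.
import pandas as pd

# Hypothetical input: one row per station per year, with raw annual-mean
# minimum and maximum temperatures in degrees F.
raw = pd.read_csv("stations_raw.csv")        # columns: station_id, year, tmin, tmax

# Convert each station to anomalies relative to its own 1951-1980 mean so that
# stations with different absolute climates can be averaged together.
base = raw[raw.year.between(1951, 1980)].groupby("station_id")[["tmin", "tmax"]].mean()
anom = raw.join(base, on="station_id", rsuffix="_base")
anom["tmin_anom"] = anom.tmin - anom.tmin_base
anom["tmax_anom"] = anom.tmax - anom.tmax_base

# Simple (unweighted) national average of the raw anomalies, year by year.
national = anom.groupby("year")[["tmin_anom", "tmax_anom"]].mean()

# Hypothetical annual PDO index file with columns: year, pdo.
pdo = pd.read_csv("pdo_index.csv").set_index("year")["pdo"]

# How closely do the raw national curves track the PDO?
print("max-temp vs PDO:", national["tmax_anom"].corr(pdo))
print("min-temp vs PDO:", national["tmin_anom"].corr(pdo))
```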

Homogenizing Contrasting Urban and Natural Landscape Trends

The closest USHCN weather station to my research was Tahoe City (below). Based on the trend in maximum temperatures, the region was not overheating nor accumulating heat; otherwise the annual maximum temperature would be higher than in the 1930s. My first question was: why such a contrasting rise in minimum temperatures? Here changing cloud cover was not an issue. Dr. Thomas Karl, who now serves as the director of NOAA’s National Climatic Data Center, partially answered the question when he reported that in over half of North America “the rise of the minimum temperature has occurred at a rate three times that of the maximum temperature during the period 1951-90 (1.5°F versus 0.5°F).”3 Rising minimum temperatures were driving the average, but Karl never addressed the higher temperatures in the 1930s. Karl simply demonstrated that as populations increased, so did minimum temperatures, even though the maximums did not. A town of two million people experienced a whopping 4.5°F increase in the minimum, which was the sole cause of the 2.25°F increase in average temperature.4

[Figure: Tahoe City maximum and minimum temperatures]

Although urban heat islands are undeniable, many CO2 advocates argue that growing urbanization has not contributed to recent climate trends because both urban and rural communities have experienced similar warming trends. However, those studies failed to account for the fact that even small population increases in designated rural areas generate high rates of warming. For example, in 1967 Columbia, Maryland was a newly established, planned community designed to end racial and social segregation. Climate researchers following the city’s development found that over a period of just three years, a heat island of up to 8.1°F appeared as the land filled with 10,000 residents.5 Although Columbia would be classified as a rural town, that small population raised local temperatures by roughly five times a century’s worth of global warming. If we extrapolated that trend, as so many climate studies do, growing populations in rural areas would cause a whopping warming trend of 26°F per decade.

CO2 advocates also downplay urbanization, arguing it represents only a small fraction of the earth’s land surface and therefore contributes very little to the overall warming. However, arbitrary designations of urban versus rural do not address the effects of growing population on the landscape. California climatologist James Goodridge found that the average rate of 20th-century warming for weather stations located in counties with more than one million people was 3.14°F per century, twice the rate of the global average. In contrast, the average warming rate for stations situated in counties with fewer than 100,000 people was a paltry 0.04°F per century.6 The warming rate of sparsely populated counties was roughly 35 times lower than the global average.
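A rough sketch of the kind of station-by-county comparison Goodridge made is shown below. It is not Goodridge’s actual method or data; the input file (“county_stations.csv”), its columns, and the middle population class are assumptions used only to illustrate the bookkeeping.

```python
# Minimal sketch: group hypothetical station warming trends by the population
# of the county each station sits in, then compare the group averages.
import pandas as pd

# Hypothetical input: one row per station with its county, the county
# population, and a 20th-century least-squares trend in degrees F per century.
stations = pd.read_csv("county_stations.csv")   # columns: county, population, trend_f_per_century

def population_class(pop):
    # The two classes quoted in the text, plus everything in between.
    if pop > 1_000_000:
        return "more than 1 million"
    if pop < 100_000:
        return "fewer than 100,000"
    return "100,000 to 1 million"

stations["class"] = stations["population"].map(population_class)
print(stations.groupby("class")["trend_f_per_century"].mean())
```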

Furthermore results similar to Goodridge’s have been suggested by tree ring studies far from urban areas. Tree ring temperatures are better indicators of “natural climate trends” and can help disentangle distortions caused by increasing human populations. Not surprisingly, most tree-ring studies reveal lower temperatures than the urbanized instrumental data. A 2007 paper by 10 leading tree-ring scientists reported, “No current tree ring based reconstruction of extratropical Northern Hemisphere temperatures that extends into the 1990s captures the full range of late 20th century warming observed in the instrumental record.”8

Because tree ring temperatures disagree with a sharply rising instrumental average, climate scientists officially dubbed this the “divergence problem.”9 However, when studies compared tree ring temperatures with only maximum temperatures (instead of the average temperatures that are typically inflated by urbanized minimum temperatures) they found no disagreement and no divergence.10 Similarly, a collaboration of German, Swiss, and Finnish scientists found that where average instrumental temperatures were minimally affected by population growth in remote rural stations of northern Scandinavia, tree ring temperatures agreed with instrumental average temperatures.11 As illustrated in Figure B, the 20th-century temperature trend in the wilds of northern Scandinavia is strikingly similar to the maximum temperature trends of the Sierra Nevada and the contiguous 48 states. All those regions experienced peak temperatures in the 1940s, and the recent rise since the 1990s has never exceeded that peak.

Figure B. 2000-year summer temperature reconstruction for northern Scandinavia. The warmest 30-year periods are highlighted by light gray bars (i.e. 27–56 and 1918–1947) and the coldest 30-year periods by dark gray bars (i.e. 1453–1482). Reprinted from Global and Planetary Change, vol. 88–89, Esper, J. et al., Variability and extremes of northern Scandinavian summer temperatures over the past two millennia (ref. 11).

How Homogenizing Urbanized Warming Has Obliterated Natural Oscillations

It soon became obvious that the homogenization process was unwittingly blending rising minimum temperatures caused by population growth with temperatures from more natural landscapes. Climate scientists cloistered in their offices have no way of knowing to what degree urbanization or other landscape factors have distorted each weather station’s data. So they developed an armchair statistical method that blended trends amongst several neighboring stations,17 using what I term the “blind majority rules” method. The most commonly shared trend among neighboring stations became the computer’s reference, and temperatures from “deviant stations” were adjusted to create a chimeric climate smoothie. Wherever there was a growth in population, this unintentionally allowed urbanization warming effects to alter the adjusted trend.

Climate computers had been programmed to seek unusual “change-points” as a sign of “undocumented” station modifications. Any natural change-points caused by cycles like the Pacific Decadal Oscillation looked like deviations relative to the steadily rising trends of increasingly populated regions like Columbia, Maryland or Tahoe City. The widespread adjustments to minimum temperatures reveal this erroneous process.
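The toy example below illustrates how such an automated change-point search can flag a natural shift. It is emphatically not NCDC’s pairwise homogenization algorithm; the synthetic station series, the neighbor count, and the detection threshold are all assumptions, chosen only to show how a step caused by something like a PDO reversal at a rural station stands out against steadily warming neighbors and gets flagged as an “undocumented change.”

```python
# Toy change-point search on a difference series (rural station minus the
# mean of its neighbors). All station data here are synthetic.
import numpy as np

np.random.seed(0)  # reproducible toy data

def difference_series(target, neighbors):
    """Target station minus the mean of its neighbors, year by year."""
    return target - neighbors.mean(axis=0)

def largest_step(diff):
    """Return the index and size of the largest mean shift in the series."""
    best_split, best_shift = None, 0.0
    for i in range(5, len(diff) - 5):            # require a few years on each side
        shift = diff[i:].mean() - diff[:i].mean()
        if abs(shift) > abs(best_shift):
            best_split, best_shift = i, shift
    return best_split, best_shift

years = np.arange(1930, 2000)
# Hypothetical rural station: a ~1 F step down at a 1945 PDO reversal, no trend.
rural = np.where(years < 1945, 0.5, -0.5) + np.random.normal(0, 0.3, years.size)
# Hypothetical urbanizing neighbors: a steady 0.03 F/yr rise and no step.
neighbors = (np.tile(0.03 * (years - years[0]), (5, 1))
             + np.random.normal(0, 0.3, (5, years.size)))

split, shift = largest_step(difference_series(rural, neighbors))
if abs(shift) > 0.5:                             # hypothetical detection threshold
    print(f"Change-point flagged at {years[split]} (shift {shift:+.2f} F)")
    # A homogenization routine would now adjust the rural record toward its
    # neighbors' steady trend, even though this step is a natural shift.
```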

I first stumbled onto Anthony Watts’ surface station efforts when investigating climate factors that controlled the upslope migration of birds in the Sierra Nevada. To understand the population declines in high-elevation meadows on the Tahoe National Forest, I surveyed birds at several low-elevation breeding sites and examined the climate data from foothill weather stations.

Marysville, CA was one of those stations, but its warming trend sparked my curiosity because it was one of the few stations where the minimum was not adjusted markedly. I later found a picture of the Marysville weather station at the SurfaceStations.org website. The Marysville weather station was Watts’ poster child for a bad site; he compared it to the less-disturbed surface conditions at a neighboring weather station in Orland, CA. The Marysville station was located on an asphalt parking lot just a few feet from air conditioning exhaust fans.

The proximity to buildings also altered the winds and added heat radiating from the walls. These urbanization effects at Marysville created the rising trend that CO2 advocate scientists expect. In contrast, the minimum temperatures at nearby Orland showed the cyclic behavior we would expect the Pacific Decadal Oscillation (PDO) to cause. Orland’s data was not overwhelmed by urbanization and was thus more sensitive to cyclical temperature changes brought by the PDO. Yet it was Orland’s data that was markedly adjusted, not Marysville’s! (Figure C)

[Photos: the Marysville and Orland, CA weather station sites]

Figure C. Raw and adjusted minimum temperatures for Marysville and Orland, California.

Several scientists have warned against homogenization for just this reason. Dr. Xiaolan Wang of the Meteorological Service of Canada wrote, “a trend-type change in climate data series should only be adjusted if there is sufficient evidence showing that it is related to a change at the observing station, such as a change in the exposure or location of the station, or in its instrumentation or observing procedures.”14

That warning went unheeded. In the good old days, a weather station such as the one in Orland, CA (pictured above) would have been a perfect candidate to serve as a reference station. It was well sited, away from pavement and buildings, and its location and thermometers had not changed throughout its history. Clearly Orland did not warrant an adjustment, but the data revealed several “change points.” Although those change points were naturally caused by the Pacific Decadal Oscillation (PDO), they drew the computer’s attention as evidence that an “undocumented change” had occurred.

To understand the PDO’s effect, it is useful to see the PDO as a period of more frequent El Niños that ventilate heat and raise the global average temperature, alternating with a period of more frequent La Niñas that absorb heat and lower global temperatures. For example, heat ventilated during the 1997 El Niño raised global temperatures by ~1.6°F. During the following La Niña, temperatures dropped by ~1.6°F. California’s climate is extremely sensitive to El Niño and the PDO. Reversals in the Pacific Decadal Oscillation caused natural temperature change-points around the 1940s and 1970s. The rural station of Orland was minimally affected by urbanization, and thus more sensitive to the rise and fall of the PDO. Similarly, the raw data for other well-sited rural stations like Cuyamaca in southern California also exhibited the cyclical temperatures predicted by the PDO (see Figure D, lower panel). But in each case those cyclical temperature trends were homogenized to look like the linear urbanized trend at Marysville.

Figure D. Upper panel: PDO index. Lower panel: Cuyamaca, CA raw versus adjusted minimum temperatures.

Marysville, however, was overwhelmed by California’s growing urbanization and less sensitive to the PDO. Thus it exhibited a steadily rising trend. Ironically, a computer program seeking any and all change-points dramatically adjusted the natural variations of rural stations to make them conform to the steady trend of more urbanized stations. Around the country, very similar adjustments lowered the peak warming of the 1930s and 1940s in the original data. Those homogenization adjustments now distort our perceptions and affect our interpretations of climate change. Cyclical temperature trends were unwittingly transformed into rapidly rising warming trends, suggesting a climate on “CO2 steroids.” However, the unadjusted average for the United States suggests the natural climate is much more sensitive to cycles such as the PDO. Climate fears have been exaggerated due to urbanization and homogenization adjustments on steroids.

Skeptics have highlighted the climate effects of the PDO for over a decade, but CO2 advocates dismissed this alternative climate viewpoint. As recently as 2009, Kevin Trenberth emailed Michael Mann and other advocates regarding the PDO’s effect on natural climate variability, writing “there is a LOT of nonsense about the PDO. People like CPC are tracking PDO on a monthly basis but it is highly correlated with ENSO. Most of what they are seeing is the change in ENSO not real PDO. It surely isn’t decadal. The PDO is already reversing with the switch to El Nino. The PDO index became positive in September for first time since Sept 2007.”

However, contrary to Trenberth’s email rant, the PDO continued trending to its cool phase and global warming continued its “hiatus.” Now forced to explain the warming hiatus, Trenberth has flip-flopped about the PDO’s importance, writing “One of the things emerging from several lines is that the IPCC has not paid enough attention to natural variability, on several time scales,” “especially El Niños and La Niñas, the Pacific Ocean phenomena that are not yet captured by climate models, and the longer term Pacific Decadal Oscillation (PDO) and Atlantic Multidecadal Oscillation (AMO) which have cycle lengths of about 60 years.”18 No longer is CO2 overwhelming natural systems; now they must argue natural systems are overwhelming CO2 warming. Will they also rethink their unwarranted homogenization adjustments?

Skeptics highlighting natural cycles were ahead of the climate science curve and provided a much needed alternative viewpoint. Still, to keep the focus on CO2, Al Gore is stepping up his attacks on all skeptical thinking. In a recent speech, he rightfully took pride that we no longer accept intolerance and abuse against people of different races or with different sexual preferences. Then, totally contradicting his examples of tolerance and open-mindedness, he asked his audience to make people “pay a price for denial.”

Instead of promoting more respectful public debate, he in essence suggests Americans should hate “deniers” for thinking differently than Gore and his fellow CO2 advocates. He and his ilk are fomenting a new intellectual tyranny. Yet his “hockey stick beliefs” are based on adjusted data that are not supported by the raw temperature data and unsupported by natural tree ring data. So who is in denial? Whether or not Gore’s orchestrated call to squash all skeptical thought is based solely on ignorance of natural cycles, his rant against skeptics is far more frightening than the climate change evidenced by the unadjusted data and the trees.

Literature cited

1. Mildrexler, D.J., et al. (2011) Satellite Finds Highest Land Skin Temperatures on Earth. Bulletin of the American Meteorological Society.

2. Lim, Y-K., et al. (2012) Observational evidence of sensitivity of surface climate changes to land types and urbanization.

3. Karl, T.R. et al., (1993) Asymmetric Trends of Daily Maximum and Minimum Temperature. Bulletin of the American Meteorological Society, vol. 74

4. Karl, T., et al., (1988), Urbanization: Its Detection and Effect in the United States Climate Record. Journal of Climate, vol. 1, 1099-1123.

5. Erell, E., and Williamson, T. (2007) Intra-urban differences in canopy layer air temperature at a mid-latitude city. Int. J. Climatol. 27: 1243–1255.

6. Goodridge, J., (1996) Comments on Regional Simulations of Greenhouse Warming Including Natural Variability. Bulletin of the American Meteorological Society. Vol.77, p.188.

7. Fall, S., et al., (2011) Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. Journal Of Geophysical Research, Vol. 116

8. Wilson R., et al., (2007) Matter of divergence: tracking recent warming at hemispheric scales using tree-ring data. Journal of Geophysical Research–A, 112, D17103, doi: 10.1029/2006JD008318.

9. D’Arrigo, R., et al., (2008) On the ‘Divergence Problem’ in Northern Forests: A review of the tree-ring evidence and possible causes. Global and Planetary Change, vol. 60, p. 289–305

10. Youngblut, D., and Luckman, B., (2008) Maximum June–July temperatures in the southwest Yukon region over the last three hundred years reconstructed from tree-rings. Dendrochronologia, vol. 25, p.153–166.

11. Esper, J. et al. (2012) Variability and extremes of northern Scandinavian summer temperatures over the past two millennia. Global and Planetary Change 88–89 (2012) 1–9.

12. Shen, S., et al., (2011) The twentieth century contiguous US temperature changes indicated by daily data and higher statistical moments. Climatic Change Volume 109, Issue 3-4, pp 287-317.

13. Steirou, E., and Koutsoyiannis, D. (2012) Investigation of methods for hydroclimatic data homogenization. Geophysical Research Abstracts, vol. 14, EGU2012-956-1

14. Wang, X., (2003) Comments on ‘‘Detection of Undocumented Changepoints: A Revision of the Two-Phase Regression Model’’. Journal of Climate; Oct2003, Vol. 16 Issue 20, p. 3383-3385.

15. Nelson, T., (2011) Email conversations between climate scientists. ClimateGate 2.0: This is a very large pile of “smoking guns.” http://tomnelson.blogspot.com/

16. Lundquist, J. and Cayan, D. (2007) Surface temperature patterns in complex terrain: Daily variations and long-term change in the central Sierra Nevada, California. Journal of Geophysical Research, vol. 112, D11124, doi:10.1029/2006JD007561.

17. Menne, M. (2009) The U.S. Historical Climatology Network Monthly Temperature Data, Version 2. Bulletin of the American Meteorological Society, p. 993-1007.

18. Appell, D. (2013) Whither Global Warming? Has It Slowed Down? The Yale Forum on Climate Change and the Media. http://www.yaleclimatemediaforum.org/2013/05/wither-global-warming-has-it-slowed-down/

Adapted from the chapter Why Average Isn’t Good Enough in Landscapes & Cycles: An Environmentalist’s Journey to Climate Skepticism

Read previous essays at landscapesandcycles.net

185 Comments
richardscourtney
September 26, 2013 4:03 am

Kev-in-Uk:
Thank you for your post at September 26, 2013 at 3:34 am
http://wattsupwiththat.com/2013/09/25/unwarranted-temperature-adjustments-and-al-gores-unwarranted-call-for-intellectual-tyranny/#comment-1427080
in reply to my post at September 26, 2013 at 2:10 am
http://wattsupwiththat.com/2013/09/25/unwarranted-temperature-adjustments-and-al-gores-unwarranted-call-for-intellectual-tyranny/#comment-1427020
You say and expand on

That’s correct, of course, the temperature dataset is in effect ‘undefined’ – if it were ‘defined’, they would not be able to keep adjusting its contents willy-nilly to suit the agenda!
The sooner folk get away from these datasets as ‘gospel’ the better – but in the meantime it is all we have to work with, and worse still, it (the data) is under the control of the ‘climate data gatekeepers’, and cannot be thoroughly questioned as per Jim Steele’s examples.

Yes, of course you are right: I agree that the measurement errors have importance. However the measurement errors are a trivial matter in the context of the issue I raised.
As I said, the metrics constructed from those measurements have no definition and they have no possibility of calibration. Hence, each team that provides such a construct ‘adjusts’ the measurement results and combines the ‘adjusted’ results in a manner of their choosing. And they each choose to do that in a different way most months.
In this circumstance, the accuracy and precision of the original measurements have little or no relevance because the original measurement results are ‘adjusted’ (i.e. changed) in an arbitrary manner many times.
When the original measurements are subjected to arbitrary alterations, is the result science?
Richard

September 26, 2013 4:07 am

Tonyb of:
http://climatereason.wordpress.com/
appears to be impersonating tonyb of:
http://climatereason.com/LittleIceAgeThermometers/
Am I right, or not ?

Rathnakumar
September 26, 2013 4:26 am

Thank you for a very interesting post!

bit chilly
September 26, 2013 4:34 am

nick stokes, please write “the earth is not a climate model” 5000 times whilst beating yourself over the head with a hammer.

Iggy Slanter
September 26, 2013 4:38 am

A thousand years ago lay people (peasants) were not allowed to read the Bible. They had to believe it, they had to accept what was preached about it, but they could not read it until they had been properly taught just how to read it.
Evidently it is the same with climate data. Like milk it must be pasteurized and filtered before it is safe for public consumption.

tonyb
Editor
September 26, 2013 4:52 am

Stephen
Many thanks for your comment. It’s ok thanks, WordPress asked me to set it up but I use my own specialist site or normally get other sites to publish my material.
I certainly have no intention of policing an active blog. How people like Judith or Anthony find the time I don’t know.
tonyb

Paul Mackey
September 26, 2013 5:35 am

Excellent piece. Takes down the whole climate science edifice piece by piece. I have certainly been taught to rigorously ensure there are no systematic errors in either my experimental equipment or in any analysis methods used. Seems this homogenisation was performed based on an incorrect assumption and blindly applied autonomously by a computer. Where were the scientists in this, checking and validating their assumptions and processes?

Kev-in-Uk
September 26, 2013 5:42 am

richardscourtney says:
September 26, 2013 at 4:03 am
Yes, I must agree again – but they do tend to like to use measurement errors, etc, as a ‘cover’ for the adjustments too – so within that framework, the ‘errors’ are indeed important.
As I have said before, until someone manually goes through EACH and EVERY set of RAW station data, using a defined set of rules for ‘analysis’ and ‘interpretation’ (ignoring any current computerised algorithms) – breaking station data up into appropriate ‘segments’ as required, comparing observer records, etc, etc – no-one can ‘estimate’ the amount of corrections, IF ANY, that may (or may not) be required.
Even then, that single set of station data remains unique – and does not constitute an equivalent ‘yardstick’ for a nearby station (e.g. for homogenisation purposes) – at least, not without doing the same for every other nearby station, whereupon it may be apparent that local microclimate type factors exist, etc, etc.
I fail to see how any computer program can analyse the raw data as effectively as a knowledgeable human, manually cross checking suspect data as required (for example, with local press reports?), etc. It follows that any constructed dataset, without this kind of careful preparation (as opposed to some computer run algorithm), is likely to be worthless!

catweazle666
September 26, 2013 5:53 am

Good piece, Jim.
With regard to UHI, you don’t need to be a climate scientist to observe it, anyone with a modern car which is fitted with an external temperature gauge can experience the effect simply by keeping an eye on it while normally driving around.
I can assure any doubters that even in the UK the effect can be very marked indeed, I have seen 6°C and higher between open country and the environs of cities such as London and Manchester.
Even small towns and villages can generate a couple of degrees.

beng
September 26, 2013 5:58 am

Jim Steele, you’re a warrior in a nest of sycophants.

beng
September 26, 2013 7:24 am

***
jim Steele says:
September 25, 2013 at 10:21 pm
Johanna, If you are going to add an interlude, when you mention “lots of “boids” around…
***
She must be a closet New Yorker…:)

Rod Everson
September 26, 2013 8:29 am

1. I agree with most here that there’s no way that the “adjustments” were honestly made, but I understand why you wrote your paper in a way that gives them the benefit of the doubt.
2. Although I can’t claim to understand everything you’ve written, yours is one of the clearest papers I’ve read about the issue of adjustment of the historical temperature record, so thank you.
3. The part I found most interesting, and in a way most disturbing, was your initial point that you had to decide which temp records to use when analyzing a local, not global, climate issue. By messing with the data, as I believe “they” almost certainly have, they corrupt science at all levels, not just the global level. Who knows how many studies will now generate inconclusive, or even dead wrong, results because the researchers rely upon the adjusted data? (as someone like Nick Stokes presumably would, for example….sorry, Nick, but you’ve got to admit this is an interesting paper as it relates to homogenization of the data.)
4. Perhaps the “global” scientists will eventually be brought to heel by true scientists attempting to research, and make sense of, issues in their own, localized, areas? What would 100 local sea-level researchers conclude separately, for example, and how would their combined results compare with what the “global” scientists are telling us about the global sea level?

scarletmacaw
September 26, 2013 8:38 am

Nick Stokes says:
September 26, 2013 at 1:55 am
Socorro is interesting. BEST also identified a break at 1968, which is very obvious on your graph. And there was a station move then. Looks like the algorithm was onto something.

Eyeballing the raw data on your link, it appears that Socorro had a reasonably smooth temperature trend that was then adjusted upwards based on disagreement with surrounding measurement(s), and that the error was in the surrounding measurements. There were no obvious discontinuities in the BEST Socorro raw graph, but there were in the comparison graph.
I think that’s an algorithm fail.
Also, your comments to others are very much along the line of ‘Read the Bible.’ What does that say about the nature of Climate Science?

September 26, 2013 9:27 am

Jim Steele,
Nick identified the correct station for Reading MA (the difference in name was throwing me off).
As far as cooling biases due to moves from city centers to airports/waste water treatment plants go, a disproportionate number of urban stations pre-1940s were located on rooftops, and even when they were not, siting concerns were not paramount. While there are certainly going to be some cases where airport locations would be warmer than the prior urban center location, these will be the exception rather than the rule.
That said, airports are not free of their own UHI concerns. If you want, you can look only at non-airport rural stations, though the trends are not particularly different post-1950: http://rankexploits.com/musings/2010/airports-and-the-land-temperature-record/
I think the fundamental issue here is the assumption (by NCDC and Berkeley) that climate changes are highly spatially correlated, and that any significant local variance that is not reflected in other nearby stations is a non-climatic factor. That’s not to say that local variations might not reflect a real local change (e.g. through urbanization, vegetative cover changes, a tree growing over the instrument, etc.). Rather, these local variations are not indicative of changes over the broader region, so spatially interpolating without homogenization will result in biased estimates of broader regional climate change. For specific local temperature work (e.g. ecological studies), raw data may in some cases be preferable to use, though you have to be quite careful. For example, I was looking at heat waves in the Chicago area and discovered a whole slew of days in the ~40C range back in the 1930s at the Midway Airport station, with nothing even remotely close to that thereafter. I dug around a bit and found out that prior to 1940 the instrument was located on a black rooftop!
We (Berkeley Earth) are doing a paper for the AGU this year (and, later, for publication) regarding a new quarter-degree resolution homogenized CONUS dataset we’ve created. As part of that, we are analyzing the spatial coherence of temperature trends compared to unhomogenized products (e.g. PRISM), satellite data (RSS and UAH), and reanalysis products (MERRA and NARR). While the analysis is not complete, we’ve found that the spatial structure of warming from 1979 to 2012 in the homogenized Berkeley product is quite similar to that of the satellite record, which has nominally complete CONUS spatial coverage, and quite different from the unhomogenized PRISM product which tends to show stations with dramatically different trends within a few miles of each other. I’ll send you a copy once we formally present it.
Hope that helps,
-Zeke

September 26, 2013 9:30 am

Reblogged this on wwlee4411 and commented:
Global Warming is a NON-issue!

September 26, 2013 9:31 am

scarletmacaw,
Which is more likely: the surrounding 50 stations all had step changes at the same time while Socorro remained unchanged, or Socorro had a step change not seen at the surrounding 50 stations? It’s worth pointing out that the Socorro station has moved at least 5 times in its history, and likely more, as documentation tends to be somewhat shoddy prior to 1950.
Difference series with surrounding stations often reveal step-change inhomogeneities that are not easily visible to the naked eye. The reason is simple: difference series remove any common weather or climate variability, so the only thing left is the divergence.

Kev-in-Uk
September 26, 2013 9:45 am

Zeke Hausfather says:
September 26, 2013 at 9:27 am
with reference to your black rooftop example – this illustrates perfectly why homogenisation, gridding, and averaging are flawed in trying to create a spatial dataset without absolute, detailed knowledge (and understanding) of the individual station data and what it represents.
Just out of curiosity, using your example, was the station data subsequently ‘adjusted’ – and by how much? How was the adjustment figure arrived at? Was a note put ‘on file’ to say why the adjustment was made, etc.?
My point being that any subsequent use of the adjusted data may have concluded that there was still something wrong, and adjusted it further…etc, etc – you get the picture.
If I go to one of the websites and download temperature data, or if I were just joining the climate elite – how would I get details of these kinds of adjustments?

September 26, 2013 9:46 am

I throw out this comment every once in a while. Please forgive me if you find it redundant, but those who have not seen it before may find the subject illustrative of what may have happened viz-a-viz climate research.
Please research the now-disgraced historian Michael Bellesiles, author of the (retracted) Bancroft Award winning book, “Arming of America.” The late 1990s early 2000s scandal is fascinating and, I believe, holds many parallels as to what has happened in the climate research community. In a nutshell, Bellesiles fabricated research, “proving” that gun ownership was not widespread in early America, and the gun culture, along with the accepted interpretation of the Second Amendment is a recent invention. A ‘consensus’ of historians supported it unreservedly, and his award-winning book citing his research was a Times best seller. The research was quoted in court cases concerning the Constitutionality of gun control laws. The deception was finally brought down by the dogged efforts of an attorney and a computer programmer. You will see many of the same elements then as now – lost data, disdain for “non-experts”, reluctance of experts to question accepted theory, falsification of data. I think the ultimate shame, and final blow, came when the “deniers” showed that Bellesiles claimed to have research documents that were known to have been destroyed in fires caused by the 1906 San Francisco earthquake.
If you think it is not possible for someone or a group with malicious intentions to completely mislead a scholarly community and experts in a field of study, this will change your mind. People who want to believe in a fraud will do so, even if it goes against their prior understanding and beliefs.

September 26, 2013 10:28 am

Kev-in-Uk,
Thankfully all the raw data is archived, and the two major adjustment approaches (NCDC and Berkeley) each use the same raw data as inputs, so there is no risk of double adjusting. Also, as long as station moves (e.g. rooftop to airport) are somewhat stochastic in time, they can be easily identified and corrected through difference series from neighbors due to obvious step changes. The only time you would run into issues is if lots of stations in an area all moved at the same time, something that records show is generally not the case. Adjustments are documented, though they are also automated. Unfortunately, not all station moves, instrument changes, time of observation changes, vegetative changes, site characteristics changes, etc. are documented, so manual homogenization using station metadata is both prohibitively time-consuming (given the 40,000 or so stations in use) and also necessarily incomplete, especially outside of the U.S. where station metadata is often lacking. Automated methods that use neighbor-comparisons to identify breakpoints have proven much more effective.
As far as data access goes, you can get raw, TOBs-only adjusted, and fully homogenized data from NCDC for all USHCN stations here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/
You can find more information about the NCDC algorithm here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/menne-williams2009.pdf
You can also find tests done using synthetic temperature data (to make sure that homogenization correctly addresses both cool and hot biases) here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf
The Berkeley approach is outlined here: http://www.scitechnol.com/2327-4581/2327-4581-1-103.pdf (technical details here: http://www.scitechnol.com/2327-4581/2327-4581-1-103a.pdf)
You can also see individual raw and adjusted station records with break points highlighted for each station: http://berkeleyearth.lbl.gov/stations/163658

September 26, 2013 10:29 am

The Chestnut Hill lat and long given by Berkeley are still about 7.5 miles away from the USHCN’s Reading station, so I doubt we are referring to the same data. Also, your Berkeley links are average temp. However, more often than not, maximum temperatures are adjusted much differently than minimums. Because maximums occur during midday, when convection likely mixes the entire air column, whereas minimums are typically measured when the air is stillest, minimums are more sensitive to surface conditions that naturally vary with location. That suggests local variations that should never be homogenized.
Zeke says, “non-climatic factor. That’s not to say that local variations might not reflect a real local change (e.g. through urbanization, vegetative cover changes, a tree growing over the instrument, etc.).”
That is precisely the problem and why I mentioned the Yosemite study where a change in the winds caused “one section of Yosemite National Park cooled by 1.1°F, another rose by 0.72°F, while in a third location temperatures did not change at all”
The methodology denies natural variations. Instead of simply averaging that variation, homogenization fabricates a trend based on the other stations of choice, thus amplifying their impact. Furthermore, changes in vegetation add another curious tweak. Much of the eastern United States was deforested, which typically dries the land, reduces heat capacity and evapotranspiration, and raises temperatures. Reforestation usually cools local temperatures. When the forest recovers, we should see a cooling trend that accurately represents the local climate. To call that artificial and adjust that trend into a warming trend misleads our perceptions about what is actually happening.
You still take the argument elsewhere. Your Chicago airport example is a good validation of the need to make some adjustments, but I never suggested such adjustments were uncalled for. However, I still await your analysis of why the adjustments of Orland vs. Marysville occurred, because it suggests other biases.
You also have yet to address the tree ring studies. All the statistical tests can be biased. Berkeley bases its adjustments on “expected means.” However, as I referenced, no tree ring study that extends into the 90s, when the instrumental data suggests rapid CO2-caused warming, reproduces that warming. The statistical adjustments all inflate temperatures relative to the expected mean temperatures in the most natural settings. If we are to make meaningful long-term climate comparisons for natural habitat, we must rely on tree rings. Yet when faced with the tree ring contradiction, Mann, Briffa and others in effect suggest trees have become wooden-headed deniers. They argue that worldwide the trees suffered a sudden case of insensitivity. Mann and Jones dealt with the cooling tree ring problem by “hiding the decline” in their graphic presentations.

September 26, 2013 10:41 am

Zeke, your Chicago rooftop and its effect on record high temperatures is an excellent example of the fallacy of parading around record high temperatures as proof of CO2 warming. Satellite data clearly shows that as vegetation is removed, skin temperatures rise by as much as 40°F. I argue landscape changes have had a far greater impact on record temperatures. If people are concerned about record heat, add some greenery. Altering CO2 will have at most a trivial impact.

September 26, 2013 10:54 am

Jim Steele,
Would you agree that satellite measurements of TLT should be free of any urbanization or land-use related biases and have complete spatial coverage? Both UAH and RSS show high correlations of 1979-2012 trends over distance, with no cases of cooling and warming stations within a few miles of each other. While I agree that for specific locations (e.g. your Yosemite case) there might be local variations due to localized factors, extrapolating these localized factors into estimates of regional climate change would be inaccurate (it would be the same as extrapolating urban station records to surrounding areas without considering UHI bias).
As far as tree rings go, I admit to having no real expertise in dendro-paleoclimatology, but I was led to believe that tree rings imperfectly mirror temperatures at best. There have been some recent studies of proxy records with high temporal resolution up to present that show quite good correspondence to post-1950s observations: http://www.agu.org/pubs/crossref/pip/2012GL054271.shtml
Here is Berkeley’s homogenization of the Orland station: http://berkeleyearth.lbl.gov/stations/34846
There are clear step-changes in the difference series relative to neighbors that are identified. The record is cut at each of these breakpoints and recombined using a least-squares approach. I find it hard to believe that there would be a nearly 1C persistent drop in Orland temperatures in 1940, not seen at any nearby stations, caused by actual climatic factors. Rather, it was likely either an undocumented station move, local vegetative change, or other factor. These are removed as inhomogeneities in the estimate of the regional temperature field.
Marysville has similar large step changes not seen in nearby stations, especially in the earlier part of the record, of a magnitude of 1C or more. It also has four documented station moves: http://berkeleyearth.lbl.gov/stations/34100
Again, this works both ways. While you’ve focused on cases where homogenization increases the trend, in the case of the Tahoe airport you can see a clear removal of UHI signal: http://berkeleyearth.lbl.gov/stations/162484

September 26, 2013 10:58 am

Here is one more issue with averaging. If we assume that homogenization is valid and represents the regional temperature trend, then maximum temperatures at Orland agree with tree ring data. Homogenized maximums show a cooling trend. If maximum temperatures have cooled since the 30s, there is no accumulation of heat energy in the atmosphere, no matter what the trend for minimum temperature. By averaging a more rapidly rising minimum with a cooling maximum, a misleading rise in the global average is portrayed. And Orland’s homogenized maximum shows a far greater cooling than the minimum raw data.
http://cdiac.ornl.gov/cgi-bin/broker?id=046506&_PROGRAM=prog.gplot_meanclim_mon_yr2012.sas&_SERVICE=default&param=TMAX&minyear=1883&maxyear=2012

September 26, 2013 11:15 am

Jim,
I agree averages can be misleading, and looking at min and max temperatures separately can be far more interesting. We currently have min and max temps shown for regional climate estimates, but not for individual stations (min and max are homogenized separately, but we don’t currently generate graphs for each). Here is the regional climate expectation for the Orland area based on 79 stations within 100 km, with min and max temperatures shown: http://berkeleyearth.lbl.gov/locations/39.38N-120.69W

September 26, 2013 11:28 am

Also, I have a longer comment addressing some of your questions, but it appears to be stuck in moderation.