Unwarranted Temperature Adjustments and Al Gore's Unwarranted Call for Intellectual Tyranny

Guest essay by Jim Steele, Director emeritus Sierra Nevada Field Campus, San Francisco State University

For researchers like myself examining the effect of local microclimates on the ecology of local wildlife, the change in the global average is an absolutely useless measure. Although it is wise to think globally, wildlife only responds to local climate change. To understand how local climate change has affected wildlife in California’s Sierra Nevada and Cascade Mountains, I examined data from stations that make up the US Historical Climate Network (USHCN).

I was quickly faced with a huge dilemma that began my personal journey toward climate skepticism. Do I trust the raw data, or do I trust the USHCN’s adjusted data?

For example, the raw data for minimum temperatures at Mt Shasta suggested a slight cooling trend since the 1930s. In contrast, the adjusted data suggested a 1 to 2°F warming trend. What to believe? The confusion resulting from skewing trends is summarized in a recent study that concluded their “results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.”13
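To make the dilemma concrete, here is a minimal sketch in Python using made-up numbers (not the actual Mt Shasta record) of how lowering the early decades of a series can turn a slight cooling trend into a 1 to 2°F warming trend:

```python
import numpy as np

# Hypothetical minimum-temperature series (deg F), NOT the real Mt Shasta data:
# a slight cooling trend plus year-to-year weather noise.
years = np.arange(1930, 2010)
rng = np.random.default_rng(0)
raw = 38.0 - 0.01 * (years - 1930) + rng.normal(0, 0.5, years.size)

# Mimic a homogenization adjustment that lowers everything before 1960 by 1.5 F.
adjusted = raw.copy()
adjusted[years < 1960] -= 1.5

def trend_per_century(y):
    """Ordinary least-squares slope, expressed in deg F per century."""
    return np.polyfit(years, y, 1)[0] * 100.0

print(f"raw trend:      {trend_per_century(raw):+.2f} F/century")
print(f"adjusted trend: {trend_per_century(adjusted):+.2f} F/century")
```

The step adjustment alone flips the sign of the fitted trend; nothing about the underlying climate changed.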

[Figure: raw versus adjusted minimum temperatures at Mt Shasta]

I began exploring data at other USHCN stations from around the country and realized that a very large percentage of the stations had been adjusted very similarly. The warm peaks of the 1930s and 40s had been adjusted downward by 3 to 4°F, and these adjustments created dubious local warming trends, as seen in examples from other USHCN stations at Reading, Massachusetts and Socorro, New Mexico.

[Figure: raw versus adjusted temperatures at Reading, Massachusetts and Socorro, New Mexico]

Because these adjustments were so widespread, many skeptics have suspected some sort of conspiracy. Although scientific papers are often retracted for fraudulent data, I found it very hard to believe climate scientists would allow such blatant falsification. Data correction in all scientific disciplines is often needed and well justified: wherever there are documented changes to a weather station, such as a change in instrumentation, an adjustment is justified. However, unwitting systematic biases in the adjustment procedure could readily fabricate such a trend, and these dramatic adjustments were typically based on “undocumented changes” flagged when climate scientists attempted to “homogenize” the regional data. The rationale for homogenization rests on the dubious assumption that all neighboring weather stations should display the same climate trends. However, due to the effects of landscape changes and differently vegetated surfaces,1,2 local temperatures often respond very differently, and minimum temperatures are especially sensitive to different surface conditions.

For example, even in relatively undisturbed regions, Yosemite’s varied landscapes respond in very contrary ways to a weakening of the westerly winds. Over a 10-year period, one section of Yosemite National Park cooled by 1.1°F, another warmed by 0.72°F, while in a third location temperatures did not change at all.16 Depending on the location of a weather station, very different trends are generated. The homogenization process blends neighboring data, obliterating those local differences and fabricating an artificial trend.
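A toy calculation, using the three Yosemite figures just quoted, shows what blending does to such contrary local trends:

```python
# Decadal trends (deg F) at the three Yosemite locations cited above.
trends = {"site_A": -1.1, "site_B": +0.72, "site_C": 0.0}

# Blending the neighbors produces one "regional" trend...
blended = sum(trends.values()) / len(trends)
print(f"blended trend: {blended:+.2f} F per decade")

# ...that none of the individual sites actually experienced.
```

The blended number is a statistical artifact; no thermometer on the ground recorded it.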

Ecologists and scientists who assess regional climate variability should use only data that has been quality-controlled but not homogenized. In one climate variability study, scientists computed the non-homogenized changes in maximum and minimum temperatures for the contiguous United States.12 The results, seen in Figure A (their figure 1b), suggest recent climate change has been more cyclical. Those cyclical changes parallel the Pacific Decadal Oscillation (PDO). When climate scientists first began homogenizing temperature data, the PDO had yet to be named. So rather than a deliberate climate science conspiracy, I suggest it was ignorance of the PDO, coupled with overwhelming urbanization effects, that caused the unwarranted adjustments by producing “natural change points” that climate scientists had yet to comprehend. Let me explain.

[Figure A: non-homogenized changes in maximum and minimum temperatures for the contiguous United States (their figure 1b)]

Homogenizing Contrasting Urban and Natural Landscape Trends

The closest USHCN weather station to my research was Tahoe City (below). Based on the trend in maximum temperatures, the region was neither overheating nor accumulating heat; otherwise the annual maximum temperature would be higher than in the 1930s. My first question was why minimum temperatures rose so sharply in contrast. Here changing cloud cover was not an issue. Dr. Thomas Karl, who now serves as director of NOAA’s National Climatic Data Center, partially answered the question when he reported that in over half of North America “the rise of the minimum temperature has occurred at a rate three times that of the maximum temperature during the period 1951-90 (1.5°F versus 0.5°F).”3 Rising minimum temperatures were driving the average, but Karl never addressed the higher temperatures of the 1930s. Karl simply demonstrated that as populations increased, so did minimum temperatures, even though the maximums did not. A town of two million people experienced a whopping increase of 4.5°F in the minimum, which was the sole cause of the 2.25°F increase in average temperature.4
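Karl’s arithmetic is worth spelling out: the conventional “average” temperature is simply the midpoint of the daily maximum and minimum, so a rise confined entirely to the minimum still moves the average:

```python
# Figures from Karl et al. (1988) for a town of two million people.
min_rise = 4.5   # deg F rise in minimum temperature
max_rise = 0.0   # maximum temperatures essentially unchanged

# The "average" is the midpoint of daily max and min, so the whole
# rise in the average comes from the urbanized minimum alone.
avg_rise = (min_rise + max_rise) / 2
print(avg_rise)  # 2.25
```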

[Figure: Tahoe City maximum and minimum temperatures]

Although urban heat islands are undeniable, many CO2 advocates argue that growing urbanization has not contributed to recent climate trends because both urban and rural communities have experienced similar warming trends. However, those studies failed to account for the fact that even small population increases in designated rural areas generate high rates of warming. For example, in 1967 Columbia, Maryland was a newly established, planned community designed to end racial and social segregation. Climate researchers following the city’s development found that over a period of just three years, a heat island of up to 8.1°F appeared as the land filled with 10,000 residents.5 Although Columbia would be classified as a rural town, that small population raised local temperatures by roughly five times a century’s worth of global warming. If we extrapolated that trend, as so many climate studies do, growing populations in rural areas would cause a whopping warming trend of 26°F per decade.
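The extrapolation is easy to check. As a back-of-envelope sketch using the 8.1°F and three-year figures quoted above (the result lands at the order of magnitude cited, with any small difference down to rounding):

```python
heat_island_rise = 8.1   # deg F heat island that appeared at Columbia, MD
years_elapsed = 3        # over roughly three years of development

per_decade = heat_island_rise / years_elapsed * 10
print(f"{per_decade:.0f} F per decade")  # on the order of the ~26 F/decade cited
```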

CO2 advocates also downplay urbanization, arguing it represents only a small fraction of the earth’s land surface and therefore contributes very little to the overall warming. However, arbitrary designations of urban versus rural do not address the effects of growing population on the landscape. California climatologist James Goodridge found that the average rate of 20th-century warming for weather stations located in counties exceeding one million people was 3.14°F per century, twice the rate of the global average. In contrast, the average warming rate for stations situated in counties with fewer than 100,000 people was a paltry 0.04°F per century.6 The warming rate of sparsely populated counties was roughly 35 times lower than the global average.
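Goodridge’s rates can be put side by side. The global figure of roughly 1.4°F per century below is my assumed round number, chosen only because it is consistent with both ratios the paragraph cites:

```python
urban_rate  = 3.14  # deg F/century, counties with over one million people
rural_rate  = 0.04  # deg F/century, counties with fewer than 100,000 people
global_rate = 1.4   # deg F/century, assumed 20th-century global average

print(f"urban vs global: {urban_rate / global_rate:.1f}x")   # about twice
print(f"global vs rural: {global_rate / rural_rate:.0f}x")   # about 35x
```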

Furthermore, results similar to Goodridge’s have been suggested by tree ring studies far from urban areas. Tree ring temperatures are better indicators of “natural climate trends” and can help disentangle distortions caused by increasing human populations. Not surprisingly, most tree-ring studies reveal lower temperatures than the urbanized instrumental data. A 2007 paper by 10 leading tree-ring scientists reported, “No current tree ring based reconstruction of extratropical Northern Hemisphere temperatures that extends into the 1990s captures the full range of late 20th century warming observed in the instrumental record.”8

Because tree ring temperatures disagree with a sharply rising instrumental average, climate scientists officially dubbed this the “divergence problem.”9 However, when studies compared tree ring temperatures with only maximum temperatures (instead of the average temperatures that are typically inflated by urbanized minimum temperatures), they found no disagreement and no divergence.10 Similarly, a collaboration of German, Swiss, and Finnish scientists found that where average instrumental temperatures were minimally affected by population growth, at remote rural stations of northern Scandinavia, tree ring temperatures agreed with instrumental average temperatures.11 As illustrated in Figure B, the 20th-century temperature trend in the wilds of northern Scandinavia is strikingly similar to maximum temperature trends of the Sierra Nevada and the contiguous 48 states. All those regions experienced peak temperatures in the 1940s, and the recent rise since the 1990s has never exceeded that peak.

Figure B. 2000-year summer temperature reconstruction of northern Scandinavia. The warmest 30-year periods are highlighted by light gray bars (i.e., 27-56, or 1918-1947) and the coldest 30-year periods by dark gray bars (i.e., 1453-1482). Reprinted from Global and Planetary Change, vol. 88-89, Esper, J. et al., Variability and extremes of northern Scandinavian summer temperatures over the past two millennia (ref. 11).

How Homogenizing Urbanized Warming Has Obliterated Natural Oscillations

It soon became obvious that the homogenization process was unwittingly blending rising minimum temperatures caused by population growth with temperatures from more natural landscapes. Climate scientists cloistered in their offices have no way of knowing to what degree urbanization or other landscape factors have distorted each weather station’s data. So they developed an armchair statistical method that blended trends amongst several neighboring stations,17 using what I term the “blind majority rules” method. The most commonly shared trend among neighboring stations became the computer’s reference, and temperatures from “deviant stations” were adjusted to create a chimeric climate smoothie. Wherever population was growing, this unintentionally allowed urbanization warming effects to alter the adjusted trend.
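A minimal sketch of this “blind majority rules” idea follows. It is my deliberately simplified stand-in, not NOAA’s actual pairwise algorithm: build a reference from urbanizing neighbors, then shift segments of a rural target toward it.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2010)

# Five urbanizing neighbors sharing a steady warming trend (deg F anomalies)...
neighbors = np.stack([0.03 * (years - 1950) + rng.normal(0, 0.2, years.size)
                      for _ in range(5)])

# ...and one rural target whose only "trend" is a natural ~60-year oscillation.
target = 0.8 * np.sin(2 * np.pi * (years - 1950) / 60.0) \
         + rng.normal(0, 0.2, years.size)

# Reference series: the majority behavior of the neighbors.
reference = np.median(neighbors, axis=0)

# Crude homogenization: in each half of the record, remove the target's
# mean departure from the reference.
adjusted = target.copy()
for seg in (slice(0, 30), slice(30, 60)):
    adjusted[seg] -= (target - reference)[seg].mean()

raw_slope = np.polyfit(years, target, 1)[0] * 100    # deg F per century
adj_slope = np.polyfit(years, adjusted, 1)[0] * 100
print(f"rural raw trend:      {raw_slope:+.1f} F/century")
print(f"rural adjusted trend: {adj_slope:+.1f} F/century")
```

After adjustment, the rural station’s natural cycle reads as a steady warming that matches its urbanized neighbors.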

Climate computers had been programmed to seek unusual “change-points” as a sign of “undocumented” station modifications. Any natural change‑points caused by cycles like the Pacific Decadal Oscillation looked like deviations relative to steadily rising trends of an increasingly populated region like Columbia, Maryland or Tahoe City. And the widespread adjustments to minimum temperatures reveal this erroneous process.
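To see why, here is a deliberately simple mean-shift change-point scan (a toy stand-in for the real detection algorithms, which are more sophisticated) applied to a purely natural, PDO-like series:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1920, 2000)

# A purely natural series: warm phase to ~1945, cool phase to ~1977,
# warm again, plus weather noise. No station change ever occurs.
series = np.where((t < 1945) | (t >= 1977), 0.5, -0.5) \
         + rng.normal(0, 0.3, t.size)

def best_changepoint(y):
    """Index of the single split that minimizes within-segment variance."""
    costs = {k: y[:k].var() * k + y[k:].var() * (y.size - k)
             for k in range(5, y.size - 5)}
    return min(costs, key=costs.get)

k = best_changepoint(series)
print("flagged 'undocumented change' at", t[k])
```

The scan dutifully flags one of the natural phase reversals as an “undocumented change”: exactly the false positive described above.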

I first stumbled onto Anthony Watts’ surface station efforts when investigating climate factors that controlled the upslope migration of birds in the Sierra Nevada. To understand the population declines in high-elevation meadows on the Tahoe National Forest, I surveyed birds at several low-elevation breeding sites and examined the climate data from foothill weather stations.

Marysville, CA was one of those stations, and its warming trend sparked my curiosity because it was one of the few stations where the minimum was not adjusted markedly. I later found a picture of the Marysville weather station at the SurfaceStations.org website. The Marysville station was Watts’ poster child for a bad site; he compared it to the less-disturbed surface conditions at a neighboring weather station in Orland, CA. The Marysville station was located on an asphalt parking lot just a few feet from air conditioning exhaust fans.

The proximity to buildings also altered the winds and added heat radiating from the walls. These urbanization effects at Marysville created the rising trend that CO2 advocate scientists expect. In contrast, the minimum temperatures at nearby Orland showed the cyclic behavior we would expect the Pacific Decadal Oscillation (PDO) to cause. Orland’s data was not overwhelmed by urbanization and thus was more sensitive to cyclical temperature changes brought by the PDO. Yet it was Orland’s data that was markedly adjusted, not Marysville’s! (Figure C)

[Photos: the Marysville and Orland weather stations]

Figure C. Raw and adjusted minimum temperatures for Marysville and Orland, California.

Several scientists have warned against homogenization for just this reason. Dr. Xiaolan Wang of the Meteorological Service of Canada wrote, “a trend-type change in climate data series should only be adjusted if there is sufficient evidence showing that it is related to a change at the observing station, such as a change in the exposure or location of the station, or in its instrumentation or observing procedures.”14

That warning went unheeded. In the good old days, a weather station such as the one in Orland, CA (pictured above) would have been a perfect candidate to serve as a reference station. It was well sited, away from pavement and buildings, and its location and thermometers had not changed throughout its history. Clearly Orland did not warrant an adjustment, but its data revealed several “change points.” Although those change points were caused naturally by the Pacific Decadal Oscillation (PDO), to the computer they looked like evidence that an “undocumented change” had occurred.

To understand the PDO’s effect, it is useful to see the PDO as a period of more frequent El Niños, which ventilate heat and raise the global average temperature, alternating with a period of more frequent La Niñas, which absorb heat and lower global temperatures. For example, heat ventilated during the 1997 El Niño raised global temperatures by ~1.6°F; during the following La Niña, temperatures dropped by ~1.6°F. California’s climate is extremely sensitive to El Niño and the PDO. Reversals in the Pacific Decadal Oscillation caused natural temperature change-points around the 1940s and 1970s. The rural station at Orland was minimally affected by urbanization, and thus more sensitive to the rise and fall of the PDO. Similarly, the raw data for other well-sited rural stations, like Cuyamaca in southern California, also exhibited the cyclical temperatures predicted by the PDO (see Figure D, lower panel). But in each case those cyclical temperature trends were homogenized to look like the linear urbanized trend at Marysville.

Figure D. Upper panel: PDO index. Lower panel: Cuyamaca, CA raw versus adjusted minimum temperatures.

Marysville, however, was overwhelmed by California’s growing urbanization and less sensitive to the PDO; thus it exhibited a steadily rising trend. Ironically, a computer program seeking any and all change-points dramatically adjusted the natural variations of rural stations to make them conform to the steady trend of more urbanized stations. Around the country, very similar adjustments lowered the peak warming of the 1930s and 1940s in the original data. Those homogenization adjustments now distort our perceptions and affect our interpretations of climate change. Cyclical temperature trends were unwittingly transformed into rapidly rising warming trends, suggesting a climate on “CO2 steroids.” However, the unadjusted average for the United States suggests the natural climate is much more sensitive to cycles such as the PDO. Climate fears have been exaggerated by urbanization and by homogenization adjustments on steroids.

Skeptics have highlighted the climate effects of the PDO for over a decade, but CO2 advocates dismissed this alternative climate viewpoint. As recently as 2009, Kevin Trenberth emailed Michael Mann and other advocates regarding the PDO’s effect on natural climate variability, writing, “there is a LOT of nonsense about the PDO. People like CPC are tracking PDO on a monthly basis but it is highly correlated with ENSO. Most of what they are seeing is the change in ENSO not real PDO. It surely isn’t decadal. The PDO is already reversing with the switch to El Nino. The PDO index became positive in September for first time since Sept 2007.”

However, contrary to Trenberth’s email rant, the PDO continued trending toward its cool phase and global warming continued its “hiatus.” Now forced to explain the warming hiatus, Trenberth has flip-flopped about the PDO’s importance, writing, “One of the things emerging from several lines is that the IPCC has not paid enough attention to natural variability, on several time scales,” “especially El Niños and La Niñas, the Pacific Ocean phenomena that are not yet captured by climate models, and the longer term Pacific Decadal Oscillation (PDO) and Atlantic Multidecadal Oscillation (AMO) which have cycle lengths of about 60 years.”18 No longer is CO2 overwhelming natural systems; now they must argue that natural systems are overwhelming CO2 warming. Will they also rethink their unwarranted homogenization adjustments?

Skeptics highlighting natural cycles were ahead of the climate science curve and provided a much-needed alternative viewpoint. Still, to keep the focus on CO2, Al Gore is stepping up his attacks against all skeptical thinking. In a recent speech, he rightfully took pride that we no longer accept intolerance and abuse against people of different races or with different sexual preferences. Then, totally contradicting his own examples of tolerance and open-mindedness, he asked his audience to make people “pay a price for denial.”

Instead of promoting more respectful public debate, he in essence suggests Americans should hate “deniers” for thinking differently than Gore and his fellow CO2 advocates. He and his ilk are fomenting a new intellectual tyranny. Yet his “hockey stick beliefs” are based on adjusted data that are not supported by the raw temperature data and unsupported by natural tree ring data. So who is in denial? Whether or not Gore’s orchestrated call to squash all skeptical thought is based solely on ignorance of natural cycles, his rant against skeptics is far more frightening than the climate change evidenced by the unadjusted data and the trees.

Literature cited

1. Mildrexler,D.J. et al., (2011) Satellite Finds Highest Land Skin Temperatures on Earth. Bulletin of the American Meteorological Society

2. Lim,Y-K, et al., (2012) Observational evidence of sensitivity of surface climate changes to land types and urbanization,

3. Karl, T.R. et al., (1993) Asymmetric Trends of Daily Maximum and Minimum Temperature. Bulletin of the American Meteorological Society, vol. 74

4. Karl, T., et al., (1988), Urbanization: Its Detection and Effect in the United States Climate Record. Journal of Climate, vol. 1, 1099-1123.

5. Erell, E., and Williamson, T., (2007) Intra-urban differences in canopy layer air temperature at a mid-latitude city. Int. J. Climatol. 27: 1243–1255

6. Goodridge, J., (1996) Comments on Regional Simulations of Greenhouse Warming Including Natural Variability. Bulletin of the American Meteorological Society. Vol.77, p.188.

7. Fall, S., et al., (2011) Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. Journal Of Geophysical Research, Vol. 116

8. Wilson R., et al., (2007) Matter of divergence: tracking recent warming at hemispheric scales using tree-ring data. Journal of Geophysical Research–A, 112, D17103, doi: 10.1029/2006JD008318.

9. D’Arrigo, R., et al., (2008) On the ‘Divergence Problem’ in Northern Forests: A review of the tree-ring evidence and possible causes. Global and Planetary Change, vol. 60, p. 289–305

10. Youngblut, D., and Luckman, B., (2008) Maximum June–July temperatures in the southwest Yukon region over the last three hundred years reconstructed from tree-rings. Dendrochronologia, vol. 25, p.153–166.

11. Esper, J. et al. (2012) Variability and extremes of northern Scandinavian summer temperatures over the past two millennia. Global and Planetary Change 88–89 (2012) 1–9.

12. Shen, S., et al., (2011) The twentieth century contiguous US temperature changes indicated by daily data and higher statistical moments. Climatic Change Volume 109, Issue 3-4, pp 287-317.

13. Steirou, E., and Koutsoyiannis, D. (2012) Investigation of methods for hydroclimatic data homogenization. Geophysical Research Abstracts, vol. 14, EGU2012-956-1

14. Wang, X., (2003) Comments on ‘‘Detection of Undocumented Changepoints: A Revision of the Two-Phase Regression Model’’. Journal of Climate; Oct2003, Vol. 16 Issue 20, p. 3383-3385.

15. Nelson, T., (2011) Email conversations between climate scientists. ClimateGate 2.0: This is a very large pile of “smoking guns.” http://tomnelson.blogspot.com/

16. Lundquist, J. and Cayan, D. (2007) Surface temperature patterns in complex terrain: Daily variations and long-term change in the central Sierra Nevada, California. Journal of Geophysical Research, vol. 112, D11124, doi:10.1029/2006JD007561.

17. Menne, M., (2009) The U.S. Historical Climatology Network Monthly Temperature Data, version 2. Bulletin of the American Meteorological Society. p. 993-1007

18. Appell, D. (2013) Whither Global Warming? Has It Slowed Down? The Yale Forum on Climate Change and the Media. http://www.yaleclimatemediaforum.org/2013/05/wither-global-warming-has-it-slowed-down/

Adapted from the chapter Why Average Isn’t Good Enough in Landscapes & Cycles: An Environmentalist’s Journey to Climate Skepticism

Read previous essays at landscapesandcycles.net

September 26, 2013 11:37 am

You make my point. The 100 km radius includes Chico, Yuba City and Marysville as well as intense agricultural regions which are often left bare for months so a monthly breakdown would be more informative. Yuba City went from 37,000 people in the year 2000 to 65,000 in 2010. Chico has grown from 9,000 in 1930 to almost 90,000 in 2010. As Goodridge’s study showed, growing regional populations cause a rising trend in temperature. All those areas will raise temperatures via landscape changes and that is what you are using to homogenize Orland’s data.

Janice Moore
September 26, 2013 12:04 pm

{Brief Bird, etc., Interlude — GO TO Next Post for Science}
Thanks, Johanna, for responding. I went to bed earlier than usual. LOL, did you notice how Patrick and Dudley (thank you, gentlemen, for also taking the time to reply — much appreciated) answered my query about signs of spring? With temperature and weather statistics. Useful, but, well, not exactly what I was looking for… . So, the boids have flown (parrots, anyway, for the time being). One of them (a scarlet MacCaw sp?) had his or her “eye on” this thread and tapped out a comment; smart bird! Re: cute children’s song: I think that New Yorkers (I’ve only known a few) would say that song was about how New Joisey people tohlk, lol.
Here, in addition to what I wrote above, the woods are much quieter, now. The robins that stayed around or who moved in from up north are in a contemplative mood. The Stellar’s Jay (I’m going to name him Nick Stokes) has FINALLY run out of things to squawk about and shut — up, and those badun’s, the sparrows, with nobody’s nests to invade and take over are more subdued. Perhaps, they will turn over a new leaf. Not likely. As likely as members of the Fantasy Science Cult acknowledging reality. American Bald Eagles and the swans will grace the scene for the next few months as Winter and the joy of bare-branches-against-stars comes out in all its majestic splendor. (To me, Spring is Hope, Summer is Happiness, Autumn is Splendour, and Winter is Joy)
Do you think it is a female-male thing (v. a v. Patrick’s and Dudley’s answers to me versus your response re: signs of spring)? I wonder. Well, I don’t really care, for labeling and homogenizing (heh) people only leads to false assumptions. There are trends, but, individuals are often anomalous (and I try to not assume that “all men are…” or “we women … .”
Thanks, again, for responding!
Warm regards from your bird loving pal in the U.S.A.,
Janice
**************************
{END OF BIRD INTERLUDE}

September 26, 2013 12:20 pm

Jim,
If you are worried about urban biases being aliased into rural stations during homogenization, you can try using only rural stations to homogenize. That’s what we did in our recent JGR paper, and we found that rural-only homogenization was as good as all-station homogenization in detecting and eliminating urban warming bias, at least in the latter part of the record (and still eliminated most of the difference in the early part). In the earlier part you run into issues where there aren’t enough stations available to detect inhomogeneities.

KevinM
September 26, 2013 12:21 pm

Please!
This is excellent work, but it will be dismissed by nonskeptics due to the needless name calling. Remove language like:
“Climate scientists cloistered in their offices have no way of knowing to what degree urbanization or other landscape factors have distorted each weather station’s data. So they developed an armchair statistical method that blended trends amongst several neighboring stations”
I agree with your assessment, but the delivery is casual and borderline offensive.

kadaka (KD Knoebel)
September 26, 2013 12:44 pm

From Zeke Hausfather on September 26, 2013 at 10:54 am:

Would you agree that satellite measurements of TLT should be free of any urbanization of land-use related biases and have complete spatial coverage?

As reported here 3 and 1/2 years ago, the heat plume from large cities increases formation of thunderstorms.
I had read earlier how the mega-city Tokyo makes its own weather, the waste heat is so great the upwelling heat plume leads to cool air along the ground being sucked in at the outskirts generating wind, and the heat plume travels upwards generating thunderstorms.
As these heat plumes rise they naturally spread out in the atmosphere.
Thus as the satellites are measuring temperatures of the lower troposphere (TLT), they would see those heat plumes thus their measurements would be contaminated by UHI.
The question is dilution, at how high above what size population center generating how much waste heat is the effect negligible?

benofhouston
September 26, 2013 1:08 pm

A perfect example of why I never had the computer automatically fix anything. Auto-fixers almost always mutate into auto-make-worsers.

KevinM
September 26, 2013 1:14 pm

David Hoffer,
Awesome catch. I copy-paste-saved that one for later reference. The game might be over before you win the bet though.

September 26, 2013 1:52 pm

Zeke
It is not clear in any manner how your method eliminates urbanized warming by simply using a “statistical category” of all rural stations, or why such a method should be trusted. As I reported, “following the city’s development found that over a period of just three years, a heat island of up to 8.1°F appeared as the land filled with 10,000 residents.” Columbia, Maryland would be categorized as rural, but clearly a growing population and associated landscape changes raised temperatures. Only by comparing temperatures before and after population growth can we reliably separate population effects. Your statistical methods more likely blind us to that powerful population effect and erroneously suggest it represents CO2 warming. Again, I trust the tree ring trends as indicators of climate change much more, because those tree locations are more removed from those human influences. And no tree ring study finds the warming that your methods suggest. Who to believe?

September 26, 2013 2:33 pm

Average and what it means. At a meeting with a contractor and the plant representative over the actual thickness of asphalt concrete installed, the contractor produced wonderful data that showed that while there were areas deficient in thickness, the average was greater. The plant manager responded with “Go home, turn on your oven and put your head in it while your a__ is in the kitchen; how does that average out for you?”

Romcconn
September 26, 2013 2:34 pm

Re Zeke Hausfather on September 26 at 9:27 pm
According to his study at http://rankexploits.com/musings/2010/airports-and-the-land-temperature-record/ which Zeke links to, roughly 600 of the 2000 GHCN stations that existed in 1900 were located at airports. Really? Wilbur and Orville didn’t get off the ground until 1903.
GIGO, I’d say.

September 26, 2013 3:30 pm

From Zeke Hausfather on September 26, 2013 at 10:54 am:
Would you agree that satellite measurements of TLT should be free of any urbanization of land-use related biases and have complete spatial coverage?
Why on earth would you assume that satellite measurements are free of land-use effects? Such assumptions taint adjustments and make no sense. Satellite data clearly show that as vegetation is removed, temperatures rise dramatically from forested to grassland to barren ground, which causes the highest temperatures. (Read Mildrexler, D.J. et al., (2011) Satellite Finds Highest Land Skin Temperatures on Earth. Bulletin of the American Meteorological Society.) The lower troposphere is heated primarily by contact with those surfaces. As the surface heats, so does the air above it. Convection then carries that heated air toward the stratosphere. A hotter surface also emits more infrared. So even if CO2 remained unchanged, if a forest were removed to create an airport runway, the resulting increase in infrared would increase the greenhouse effect.

September 26, 2013 3:41 pm

Iggy Slanter says September 26, 2013 at 4:38 am
A thousand years ago lay people (peasants) were not allowed to read the Bible.

a) if there were any to be read (printing press ca. 1450) and
b) pre-supposes they could read as well …
Probably not a good example.

scarletmacaw
September 26, 2013 4:32 pm

Zeke Hausfather says:
September 26, 2013 at 9:31 am
scarletmacaw,
Which is more likely: the surrounding 50 stations all had step changes at the same time while Socorro remained unchanged, or Socorro had a step change not seen at the surrounding 50 stations? It’s worth pointing out that the Socorro station has moved at least 5 times in its history, and likely more, as documentation tends to be somewhat shoddy prior to 1950.

The adjustments in question were not pre-1950. The one Nick pointed out was in 1968.
It is not clear just how many stations were used to adjust the Socorro data. I count 31 stations within 100 km, 19 of which were not in operation in 1968. I can certainly conceive of one station out of the remaining 12 creating a false step change.

September 26, 2013 5:04 pm

Romcconn says September 26, 2013 at 2:34 pm

roughly 600 of the 2000 GHCN stations that existed in 1900 were located at airports. Really? Wilbur and Orville didn’t get off the ground until 1903.

Dirigibles, zeppelins and blimps?

September 26, 2013 5:24 pm

Perhaps relevant:
“TOBS” by Steve McIntyre, posted on Sep 24, 2007 at 9:15 PM
http://climateaudit.org/2007/09/24/tobs/

johanna
September 26, 2013 7:19 pm

Zeke
“Rather, these local variations are not indicative of changes over the broader region, so spatially interpolating without homogenization will result in biased estimates of broader regional climate change. For specific local temperature work (e.g. ecological studies), raw data may in some cases be preferable to use, though you have to be quite careful. For example, I was looking at heat waves in the Chicago area and discovered a whole slew of days in the ~40C range back in the 1930s at the Midway Airport station, with nothing even remotely close to that thereafter. I dug around a bit and found out that prior to 1940 the instrument was located on a black rooftop!”
——————————————————–
You keep evading the point, which is that “homogenisation” mixes good data with bad. The fact that a screaming outlier was picked up in your example does not address the question. It is the classification of stations which is at issue here, not just their outputs.
The climate change debate, for reasons known only to “climate scientists”, is conducted in terms of fractions of a degree. That seems to be prima facie evidence that there is not a lot of substance there, since fractions of a degree matter not at all to plants or animals. The methods used to compute the fractions are at best speculative. The notion that multiplying the number of calculations inherently brings forth better results is absurd.
Like Jim, and many others who comment and post here, I have real world experience of using data and statistics. The sheer bloody hubris and ignorance of those who feed massive datasets into a computer and assume that whatever comes out is “the answer” is staggering. You say:
“Rather, these local variations are not indicative of changes over the broader region, so spatially interpolating without homogenization will result in biased estimates of broader regional climate change.”
Yep, let’s just ignore inconvenient facts, because they “will result in biased estimates”.
When I was analysing public opinion surveys, that was the sort of thinking that lost elections. Ponder on that.

Jeff
Reply to  johanna
September 26, 2013 8:34 pm

“The climate change debate, for reasons known only to “climate scientists”, is conducted in terms of fractions of a degree. ”
You raise an excellent point that I have long wondered about. One of the first things taught in a basic science class is that you cannot create data of greater precision than your instrumentation. Even if you combine and manipulate data, it cannot have any greater precision than the least precise instrument used to gather it, and the result must be rounded off to that level of precision.
Prior to the age of electronic thermometers, which roughly corresponds to the age before satellite-based temperature measurement, it seems that all the historical temperature data was gathered using variations of glass tube thermometers. I have informally tried to read up on the history of such instruments, and from what I have been able to find so far, there does not seem to have been a standard design of thermometer used for gathering the weather records. Anecdotally, it seems most of these thermometers were graduated no more finely than increments of 1 degree F. I did find one example of an antique weather station thermometer graduated as finely as 1/2 degree F, but that seemed to be rare and was the most precise I have seen so far. Given such limits on the precision of the historical data, I don’t understand how statements can meaningfully be made about trends in the data to precision of tenths of a degree prior to the age of better instrumentation.
Does anybody know of a good discussion about the history and capabilities of weather station thermometers and the standards to which they were designed, manufactured, calibrated, used, maintained, etc.? In depth understanding of the instrumentation used to gather the original data seems to me to be a topic somewhat lacking coverage in the discussion of the temperature archives given the levels of precision upon which arguments hinge.
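The statistical question Jeff raises can at least be sketched in code: whether the average of many readings rounded to 1 °F can be known more finely than 1 °F. Under the textbook assumption of independent random variation larger than the rounding step, the rounding error tends to average out; whether those assumptions hold for real station records is exactly what is in dispute here. The numbers below (true mean, variability) are made up for illustration only.

```python
import random

random.seed(0)  # reproducible illustration

true_mean = 55.3   # hypothetical "true" mean temperature, °F
n = 10_000         # number of simulated readings
sigma = 10.0       # assumed day-to-day variability, °F

# Each reading is rounded to the nearest whole degree,
# mimicking a thermometer graduated in 1 °F increments.
readings = [round(random.gauss(true_mean, sigma)) for _ in range(n)]

estimated_mean = sum(readings) / n
print(f"estimated mean: {estimated_mean:.2f} °F (true: {true_mean} °F)")
```

With these assumptions the estimated mean typically lands within about 0.1 °F of the true value, even though no single reading is better than 1 °F. The catch, of course, is that this only works if the errors are random and independent; systematic biases (siting, time of observation, instrument drift) do not average out, which is the substance of this whole thread.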

Nick Stokes
September 26, 2013 7:23 pm

scarletmacaw says: September 26, 2013 at 4:32 pm
“It is not clear just how many stations were used to adjust the Socorro data. I count 31 stations within 100 km, 19 of which were not in operation in 1968.”

This site gives you an interactive map – just choose NM. I made some plots at random of annual dailymin data:
Socorro
Mountain Park
Los Lunas
You can see that Socorro makes a big dive in 1966 (not 1968, as I had estimated from the plot above), and it’s a real step. That’s what BEST and USHCN adjusted for, based on the time series. It’s near the time of an actual move. The nearby locations do not show a step change there.
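The kind of step detection Nick describes can be sketched with a toy least-squares breakpoint search: try every split point and keep the one that best divides the series into two roughly constant segments. This is only an illustration of the general idea, not the actual BEST or USHCN pairwise-homogenization algorithm, and the data below are invented.

```python
def find_step(series):
    """Return the index that best splits the series into two
    constant segments (minimum total squared error).
    A toy changepoint detector, not BEST's actual method."""
    best_i, best_cost = None, float("inf")
    n = len(series)
    for i in range(2, n - 1):
        left, right = series[:i], series[i:]
        mean_l = sum(left) / len(left)
        mean_r = sum(right) / len(right)
        cost = (sum((x - mean_l) ** 2 for x in left)
                + sum((x - mean_r) ** 2 for x in right))
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# synthetic annual TMIN-like series with a ~2 °F step down at index 6
data = [40.1, 39.8, 40.3, 40.0, 39.9, 40.2,
        37.9, 38.1, 38.0, 37.8, 38.2, 38.0]
print(find_step(data))  # → 6
```

Real algorithms also have to decide whether a detected break is statistically significant and whether neighboring stations confirm it, which is where the disagreement in this thread lies: a genuine station move produces a step like this, but so can a real, local climatic shift.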

September 26, 2013 8:15 pm

Reblogged this on gottadobetterthanthis and commented:
Worth understanding. Data is important. Often data is manipulated just to make it mathematically easy to use. It is more important to respect the data than to show one’s math skills.

scarletmacaw
September 26, 2013 9:35 pm

Nick Stokes says:
September 26, 2013 at 7:23 pm
You can see that Socorro makes a big dive in 1966 (not 1968, as I had estimated from the plot above), and it’s a real step. That’s what BEST and USHCN adjusted for, based on the time series. It’s near the time of an actual move. The nearby locations do not show a step change there.

Thanks for the link.
I plotted the TMIN for Socorro, and there was a decrease between 1965 and 1980. It was not clear to me from looking at the graph that it was a step change. That was the same time frame where it was getting so cold the Newsweek cover questioned the next ice age, so the cold drop could have been real. Also there was no corresponding step in TMAX.
And is there a link to the number of nearby stations used in the homogenization? Was it 12, 50, or some other number? Does BEST use a fixed number of stations, a fixed radius, or some other method of deciding which stations to use?

scarletmacaw
September 26, 2013 9:48 pm

Nick, to add to that, I looked at the other two sites. Los Lunas also shows a drop in TMIN from 1965 to 1980 in an otherwise fairly flat temperature vs. time plot, but shows a monotonic increase in TMAX that looks artificial. Mountain Park is fairly flat in both TMIN and TMAX. None of these three sites are at all similar except for the 1970s decrease, which is weak in Mountain Park but strong in the other two (before it was homogenized out of Socorro).

Nick Kermode
September 27, 2013 1:31 am

Hi Mr Steele, I had a couple of questions up thread you may have missed, seeing as there were quite a few around the same time. I’ll repost here for your convenience. Cheers, Nick
“Mr Steele, your Fig. A (their figure 5b not 1b) is in fact GDCN not GHCN, a minor difference perhaps for the US, but NOAA do say “Unlike GDCN, however, GHCN-Daily contains numerous data streams for updated data that enhance the latency of the dataset through rapid and frequent updates. Relative to GDCN, GHCN-Daily also contains a much more comprehensive set of QA checks as well as a more expansive set of historical data sources.”, so there is a difference. Also the graph is the average of a graph the authors say is “only a rough and visual assessment of the change points. Rigorous detection of change points in a time series needs to go through a statistical procedure”. My main point, though, is that you use that graph and say “The results seen in Figure A (their figure 1b) (sic) suggest recent climate change has been more cyclical” whilst they say “such data sets must be homogenized before they can be used for climatological analysis”. Is there a reason you can make a climatological judgement from that graph when the authors you cite say specifically you can’t?
TIA Nick”

Ryan
September 27, 2013 3:10 am

Well I would only say this. The adjustments have all been done. In fact they were done some time ago. Since the end of the adjustment process, we happen to have had “the pause”.
I have a suspicion that “the pause” will continue from here on in. Right now the IPCC is trying to claim that “the pause” is statistically insignificant. How anybody can claim that a change in trend spanning 30% of the overall data period is “statistically insignificant” I don’t know. But there will come a time when nobody with a brain will be able to claim such a thing.
There is one good thing I would like to say about climate scientists. They may have had the brass balls to create a historical warming trend out of thin air, but at least so far they haven’t had the brass balls to create a current warming trend out of thin air in full view of a watchful public.

aaron
September 27, 2013 3:59 am

This is another of those posts that are good at their core, substantive, but the title and crankish rant about Al Gore at the end make it un-shareable.

September 27, 2013 8:24 am

Kermode, I did not miss your earlier question. I simply thought you missed the point. I offered a published graph of non-homogenized data to show what the data looks like before the homogenization process occurs. Like the authors, I also acknowledged that adjustments are sometimes needed, but the debate here is about how and why that homogenization is done. I apologize if I misinterpreted your intent, but your question implied my use of that graph was misleading. You also seemed to be cherry-picking that inconsequential acknowledgement while ignoring others, so it felt as if your intent was to denigrate, not discuss.
For example, the authors also said, “Fortunately, the random component of such error tends to average out in large area averages and in calculations of temperature change over long periods; therefore, stations’ data do not always need to be homogenized.” You and Nick Stokes implied my calling it a GHCN data set was misleading, but the authors said, “The updated dataset of the GDCN is the GHCN-D (Global Historical Climatology Network-Daily),” and then, “The differences between GHCN-D and GDCN are reflected in some individual stations and have little effect on the regional average results like those presented in this paper.” So why would you both make it an issue?
As with my analysis the authors also suggested El Nino and La Nina events generated the peaks.
You also failed to mention that the authors found, “It is thus intriguing that the minimum temperature is more symmetrically distributed than the mean temperature. These observations indicate the existence of more cold extremes in all temperatures”. Their observations certainly suggest that the lack of recent cold records may indeed be a product of homogenization.

September 27, 2013 8:33 am

“crankish rant about Al Gore at the end make it un-shareable”
Aaron, I realize my Gore statements may turn off people who are allied with Al Gore’s politics, but when Gore makes several speeches urging us to put a “price on denial,” it is nothing less than a call for intellectual tyranny, and such orchestrated attempts to demonize skeptics as evil deniers are probably the greatest threat to the scientific process and respectable debate. I am curious how you would characterize Gore’s comments, and the battering of skeptical scientists with the word “denier,” so that they could indeed be shareable. Or do you accept his comments?