Guest essay by Jim Steele, Director emeritus Sierra Nevada Field Campus, San Francisco State University
For researchers like me, examining the effect of local microclimates on the ecology of local wildlife, the change in the global average is a useless measure. Although it is wise to think globally, wildlife responds only to local climate change. To understand how local climate change has affected wildlife in California’s Sierra Nevada and Cascade Mountains, I examined data from stations that make up the US Historical Climatology Network (USHCN).
I was quickly faced with a huge dilemma that began my personal journey toward climate skepticism. Do I trust the raw data, or do I trust the USHCN’s adjusted data?
For example, the raw data for minimum temperatures at Mt Shasta suggested a slight cooling trend since the 1930s. In contrast, the adjusted data suggested a 1 to 2°F warming trend. What to believe? The confusion created by such skewed trends is summarized in a recent study which concluded that its “results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.”13
I began exploring data from other USHCN stations around the country and realized that a very large percentage of the stations had been adjusted in very similar ways. The warm peaks of the 1930s and 40s had been adjusted downward by 3 to 4°F, and these adjustments created dubious local warming trends, as seen in examples from other USHCN stations at Reading, Massachusetts and Socorro, New Mexico.
Because these adjustments were so widespread, many skeptics have suspected there has been some sort of conspiracy. Although scientific papers are often retracted for fraudulent data, I found it very hard to believe climate scientists would allow such blatant falsification. Data correction in all scientific disciplines is often needed and well justified. Wherever there are documented changes to a weather station, such as a change in instrumentation, an adjustment is justified. However, unwitting systematic biases in the adjustment procedure can readily fabricate a trend, and these dramatic adjustments were typically based on “undocumented changes” introduced when climate scientists attempted to “homogenize” the regional data. The rationale for homogenization rests on the dubious assumption that all neighboring weather stations should display the same climate trends. However, due to the effects of landscape changes and differently vegetated surfaces,1,2 local temperatures often respond very differently, and minimum temperatures are especially sensitive to differing surface conditions.
For example, even in relatively undisturbed regions, Yosemite’s varied landscapes respond in contrary ways to a weakening of the westerly winds. Over a 10-year period, one section of Yosemite National Park cooled by 1.1°F, another warmed by 0.72°F, while in a third location temperatures did not change at all.16 Depending on the location of a weather station, very different trends are generated. The homogenization process blends neighboring data, obliterating those local differences and fabricating an artificial trend.
Ecologists and scientists who assess regional climate variability must use only data that has been quality controlled but not homogenized. In one climate variability study, scientists computed the non-homogenized changes in maximum and minimum temperatures for the contiguous United States.12 The results, seen in Figure A (their figure 1b), suggest recent climate change has been more cyclical, and those cycles parallel the Pacific Decadal Oscillation (PDO). When climate scientists first began homogenizing temperature data, the PDO had yet to be named. So instead of a deliberate climate science conspiracy, I suggest it was ignorance of the PDO, coupled with overwhelming urbanization effects, that led to the unwarranted adjustments: the PDO produced “natural change points” that climate scientists had yet to comprehend. Let me explain.
Homogenizing Contrasting Urban and Natural Landscape Trends
The closest USHCN weather station to my research was Tahoe City (below). Based on the trend in maximum temperatures, the region was neither overheating nor accumulating heat; otherwise the annual maximum temperature would now be higher than in the 1930s. My first question was: why such a contrasting rise in minimum temperatures? Changing cloud cover was not an issue here. Dr. Thomas Karl, who now serves as director of NOAA’s National Climatic Data Center, partially answered the question when he reported that in over half of North America “the rise of the minimum temperature has occurred at a rate three times that of the maximum temperature during the period 1951-90 (1.5°F versus 0.5°F).”3 Rising minimum temperatures were driving the average, but Karl never addressed the higher temperatures of the 1930s. Karl simply demonstrated that as populations increased, so did minimum temperatures, even though maximums did not. A city of two million people experienced a whopping 4.5°F increase in its minimum temperature, and because the reported average is simply the mean of the daily maximum and minimum, that rise alone accounts for the 2.25°F increase in average temperature.4
Although urban heat islands are undeniable, many CO2 advocates argue that growing urbanization has not contributed to recent climate trends because both urban and rural communities have experienced similar warming trends. However, those studies failed to account for the fact that even small population increases in designated rural areas generate high rates of warming. For example, in 1967 Columbia, Maryland was a newly established, planned community designed to end racial and social segregation. Climate researchers following the city’s development found that over a period of just three years, a heat island of up to 8.1°F appeared as the land filled with 10,000 residents.5 Although Columbia would be classified as a rural town, that small population raised local temperatures by five times more than a century’s worth of global warming. If we extrapolated that trend, as so many climate studies do, growing populations in rural areas would cause a whopping warming trend of roughly 26°F per decade.
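For anyone who wants to check that extrapolation, here is the back-of-envelope arithmetic in Python (a hedged sketch of my own; the straight-line extrapolation and rounding are assumptions, not measurements):

    # Columbia, MD heat island (ref. 5): ~8.1°F appeared over ~3 years
    heat_island_F = 8.1
    years_observed = 3
    print(heat_island_F / years_observed * 10)   # 27.0, i.e. the ~26°F per decade quoted above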
CO2 advocates also downplay urbanization, arguing it covers only a small fraction of the earth’s land surface and therefore contributes very little to the overall warming. However, arbitrary designations of urban versus rural do not address the effects of a growing population on the landscape. California climatologist James Goodridge found that the average rate of 20th-century warming for weather stations located in counties exceeding one million people was 3.14°F per century, roughly twice the rate of the global average. In contrast, the average warming rate for stations situated in counties with fewer than 100,000 people was a paltry 0.04°F per century.6 The warming rate of sparsely populated counties was 35 times lower than the global average.
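Those ratios can be reproduced by assuming a 20th-century global trend of about 1.4°F per century (~0.8°C); that assumed global figure is mine, not Goodridge’s:

    global_rate = 1.4           # assumed global 20th-century trend, °F per century
    print(3.14 / global_rate)   # ~2.2: urbanized counties warm at roughly twice the global rate
    print(global_rate / 0.04)   # 35.0: sparsely populated counties, 35 times lower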
Furthermore, results similar to Goodridge’s have been suggested by tree ring studies far from urban areas. Tree ring temperatures are better indicators of “natural climate trends” and can help disentangle distortions caused by increasing human populations. Not surprisingly, most tree-ring studies reveal lower temperatures than the urbanized instrumental data. A 2007 paper by 10 leading tree-ring scientists reported, “No current tree ring based reconstruction of extratropical Northern Hemisphere temperatures that extends into the 1990s captures the full range of late 20th century warming observed in the instrumental record.”8
Because tree ring temperatures disagree with the sharply rising instrumental average, climate scientists officially dubbed this the “divergence problem.”9 However, when studies compared tree ring temperatures with only maximum temperatures (instead of the average temperatures that are typically inflated by urbanized minimum temperatures), they found no disagreement and no divergence.10 Similarly, a collaboration of German, Swiss, and Finnish scientists found that in remote rural stations of northern Scandinavia, where average instrumental temperatures were minimally affected by population growth, tree ring temperatures agreed with instrumental average temperatures.11 As illustrated in Figure B, the 20th-century temperature trend in the wilds of northern Scandinavia is strikingly similar to the maximum temperature trends of the Sierra Nevada and the contiguous 48 states. All those regions experienced peak temperatures around the 1940s, and the recent rise since the 1990s has never exceeded that peak.
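The mechanism claimed here is easy to illustrate with synthetic data (a toy sketch of my own, not the analysis of refs. 10 or 11): give the maximums and minimums a shared natural signal, add an urban drift only to the minimums, and a proxy that tracks the natural signal “diverges” from the mean but not from the maximums.

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1900, 2001)
    natural = 0.5 * np.sin(2 * np.pi * (years - 1900) / 60.0)   # shared natural cycle
    tmax = natural + rng.normal(0, 0.2, years.size)             # maximums: no urban effect
    tmin = natural + 0.03 * (years - 1900) + rng.normal(0, 0.2, years.size)  # urban drift
    tmean = (tmax + tmin) / 2
    proxy = natural + rng.normal(0, 0.2, years.size)            # "tree rings" track the natural signal

    print(np.corrcoef(proxy, tmax)[0, 1])    # high: no divergence from maximums
    print(np.corrcoef(proxy, tmean)[0, 1])   # noticeably lower: the drifted mean pulls away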

How Homogenizing Urbanized Warming Has Obliterated Natural Oscillations
It soon became obvious that the homogenization process was unwittingly blending rising minimum temperatures caused by population growth with temperatures from more natural landscapes. Climate scientists cloistered in their offices have no way of knowing to what degree urbanization or other landscape factors have distorted each weather station’s data. So they developed an armchair statistical method that blends trends among several neighboring stations,17 using what I term the “blind majority rules” method. The most commonly shared trend among neighboring stations becomes the computer’s reference, and temperatures from “deviant stations” are adjusted to create a chimeric climate smoothie. Wherever population is growing, this unintentionally allows urbanization warming effects to alter the adjusted trend.
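As a caricature of that “blind majority rules” idea (my own toy sketch; the actual pairwise algorithm of ref. 17 is far more elaborate), consider nudging any station that strays from the median of its neighbors back onto that median:

    import numpy as np

    def homogenize_majority(stations, tolerance=0.5):
        """Toy neighbor-reference adjustment: the median across stations plays
        the 'majority' reference, and any station straying from it by more
        than `tolerance` is forced onto the reference."""
        names = list(stations)
        reference = np.median(np.vstack([stations[n] for n in names]), axis=0)
        adjusted = {}
        for n in names:
            departure = stations[n] - reference
            if np.abs(departure).max() > tolerance:    # flagged as "deviant"
                adjusted[n] = stations[n] - departure  # i.e. replaced by the reference
            else:
                adjusted[n] = stations[n].copy()
        return adjusted

    # Three urbanizing stations with a steady rise, one rural station with a
    # PDO-like 60-year cycle: the cycle gets blended into the rising majority.
    years = np.arange(1900, 2001)
    rng = np.random.default_rng(0)
    stations = {f"urban{i}": 0.02 * (years - 1900) + rng.normal(0, 0.1, years.size)
                for i in range(3)}
    stations["rural"] = (0.6 * np.sin(2 * np.pi * (years - 1900) / 60.0)
                         + rng.normal(0, 0.1, years.size))
    adjusted = homogenize_majority(stations)

In this toy, the rural station’s cyclical history is simply overwritten by the urban consensus, which is the complaint above in miniature.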
Climate computers had been programmed to seek unusual “change-points” as signs of “undocumented” station modifications. Any natural change-points caused by cycles like the Pacific Decadal Oscillation looked like deviations relative to the steadily rising trends of increasingly populated regions like Columbia, Maryland or Tahoe City. The widespread adjustments to minimum temperatures reveal this erroneous process.
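Here is how a purely natural oscillation can trip a naive “undocumented change-point” test (again a minimal sketch of my own; operational tests such as SNHT or the two-phase regression discussed in ref. 14 are more sophisticated):

    import numpy as np

    def largest_mean_shift(series, guard=10):
        """Scan every candidate break and return the split with the largest
        jump in mean, a crude stand-in for a change-point test."""
        best_k, best_shift = None, 0.0
        for k in range(guard, series.size - guard):
            shift = abs(series[:k].mean() - series[k:].mean())
            if shift > best_shift:
                best_k, best_shift = k, shift
        return best_k, best_shift

    # A PDO-like 60-year cycle with no station change whatsoever
    years = np.arange(1900, 2001)
    natural = 0.6 * np.sin(2 * np.pi * (years - 1900) / 60.0)
    k, shift = largest_mean_shift(natural)
    print(years[k], round(shift, 2))   # the bare cycle still yields a sizeable "break"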
I first stumbled onto Anthony Watts’ surface station efforts when investigating climate factors that controlled the upslope migration of birds in the Sierra Nevada. To understand the population declines in high-elevation meadows on the Tahoe National Forest, I surveyed birds at several low-elevation breeding sites and examined the climate data from foothill weather stations.
Marysville, CA was one of those stations, but its warming trend sparked my curiosity because it was one of the few stations where the minimum was not adjusted markedly. I later found a picture of the Marysville weather station at the SurfaceStations.org website. The Marysville station was Watts’ poster child for a bad site; he compared it to the less-disturbed surface conditions at a neighboring weather station in Orland, CA. The Marysville station was located on an asphalt parking lot just a few feet from air conditioning exhaust fans.
The proximity to buildings also altered the winds and added heat radiating from the walls. These urbanization effects at Marysville created the rising trend that CO2 advocate scientists expect. In contrast, the minimum temperatures at nearby Orland showed the cyclical behavior we would expect the Pacific Decadal Oscillation (PDO) to cause. Orland’s data was not overwhelmed by urbanization and was thus more sensitive to the cyclical temperature changes brought by the PDO. Yet it was Orland’s data that was markedly adjusted, not Marysville’s! (Figure C)

Several scientists have warned against homogenization for just this reason. Dr. Xiaolan Wang of the Meteorological Service of Canada wrote, “a trend-type change in climate data series should only be adjusted if there is sufficient evidence showing that it is related to a change at the observing station, such as a change in the exposure or location of the station, or in its instrumentation or observing procedures.”14
That warning went unheeded. In the good old days, a weather station such as the one in Orland, CA (pictured above) would have been a perfect candidate to serve as a reference station. It was well sited, away from pavement and buildings, and its location and thermometers had not changed throughout its history. Clearly Orland did not warrant an adjustment, but its data revealed several “change points.” Although those change points were caused naturally by the Pacific Decadal Oscillation (PDO), they attracted the computer’s attention as evidence that an “undocumented change” had occurred.
To understand the PDO’s effect, it is useful to see the PDO as a period of more frequent El Niños, which ventilate heat and raise the global average temperature, alternating with a period of more frequent La Niñas, which absorb heat and lower global temperatures. For example, heat ventilated during the 1997 El Niño raised global temperatures by ~1.6°F. During the following La Niña, temperatures dropped by ~1.6°F. California’s climate is extremely sensitive to El Niño and the PDO. Reversals in the Pacific Decadal Oscillation caused natural temperature change-points around the 1940s and 1970s. The rural station at Orland was minimally affected by urbanization and thus more sensitive to the rise and fall of the PDO. Similarly, the raw data for other well-sited rural stations, such as Cuyamaca in southern California, also exhibited the cyclical temperatures predicted by the PDO (see Figure D, lower panel). But in each case those cyclical temperature trends were homogenized to look like the linear urbanized trend at Marysville.

Marysville, however, was overwhelmed by California’s growing urbanization and less sensitive to the PDO; thus it exhibited a steadily rising trend. Ironically, a computer program seeking any and all change-points dramatically adjusted the natural variations of rural stations to make them conform to the steady trends of more urbanized stations. Around the country, very similar adjustments lowered the peak warming of the 1930s and 1940s in the original data. Those homogenization adjustments now distort our perceptions and affect our interpretations of climate change. Cyclical temperature trends were unwittingly transformed into rapidly rising warming trends, suggesting a climate on “CO2 steroids.” However, the unadjusted average for the United States suggests the natural climate is much more sensitive to cycles such as the PDO. Climate fears have been exaggerated by urbanization and by homogenization adjustments on steroids.
Skeptics have highlighted the climate effects of the PDO for over a decade, but CO2 advocates dismissed this alternative climate viewpoint. As recently as 2009, Kevin Trenberth emailed Michael Mann and other advocates regarding the PDO’s effect on natural climate variability, writing, “there is a LOT of nonsense about the PDO. People like CPC are tracking PDO on a monthly basis but it is highly correlated with ENSO. Most of what they are seeing is the change in ENSO not real PDO. It surely isn’t decadal. The PDO is already reversing with the switch to El Nino. The PDO index became positive in September for first time since Sept 2007.”
However, contrary to Trenberth’s email rant, the PDO continued trending toward its cool phase, and global warming continued its “hiatus.” Now forced to explain the hiatus, Trenberth has flip-flopped about the PDO’s importance, writing, “One of the things emerging from several lines is that the IPCC has not paid enough attention to natural variability, on several time scales,” “especially El Niños and La Niñas, the Pacific Ocean phenomena that are not yet captured by climate models, and the longer term Pacific Decadal Oscillation (PDO) and Atlantic Multidecadal Oscillation (AMO) which have cycle lengths of about 60 years.”18 No longer is CO2 said to be overwhelming natural systems; now they must argue that natural systems are overwhelming CO2 warming. Will they also rethink their unwarranted homogenization adjustments?
Skeptics highlighting natural cycles were ahead of the climate science curve and provided a much needed alternative viewpoint. Still, to keep the focus on CO2, Al Gore is stepping up his attacks on all skeptical thinking. In a recent speech, he rightfully took pride that we no longer accept intolerance and abuse against people of different races or with different sexual preferences. Then, totally contradicting his own examples of tolerance and open-mindedness, he asked his audience to make people “pay a price for denial.”
Instead of promoting more respectful public debate, he in essence suggests Americans should hate “deniers” for thinking differently than Gore and his fellow CO2 advocates. He and his ilk are fomenting a new intellectual tyranny. Yet his “hockey stick beliefs” are based on adjusted data that are not supported by the raw temperature data and unsupported by natural tree ring data. So who is in denial? Whether or not Gore’s orchestrated call to squash all skeptical thought is based solely on ignorance of natural cycles, his rant against skeptics is far more frightening than the climate change evidenced by the unadjusted data and the trees.
Literature cited
1. Mildrexler, D.J., et al. (2011) Satellite Finds Highest Land Skin Temperatures on Earth. Bulletin of the American Meteorological Society.
2. Lim, Y-K., et al. (2012) Observational evidence of sensitivity of surface climate changes to land types and urbanization.
3. Karl, T.R., et al. (1993) Asymmetric Trends of Daily Maximum and Minimum Temperature. Bulletin of the American Meteorological Society, vol. 74.
4. Karl, T., et al. (1988) Urbanization: Its Detection and Effect in the United States Climate Record. Journal of Climate, vol. 1, p. 1099–1123.
5. Erell, E., and Williamson, T. (2007) Intra-urban differences in canopy layer air temperature at a mid-latitude city. Int. J. Climatol., vol. 27, p. 1243–1255.
6. Goodridge, J. (1996) Comments on Regional Simulations of Greenhouse Warming Including Natural Variability. Bulletin of the American Meteorological Society, vol. 77, p. 188.
7. Fall, S., et al. (2011) Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. Journal of Geophysical Research, vol. 116.
8. Wilson, R., et al. (2007) Matter of divergence: tracking recent warming at hemispheric scales using tree-ring data. Journal of Geophysical Research–A, vol. 112, D17103, doi:10.1029/2006JD008318.
9. D’Arrigo, R., et al. (2008) On the ‘Divergence Problem’ in Northern Forests: A review of the tree-ring evidence and possible causes. Global and Planetary Change, vol. 60, p. 289–305.
10. Youngblut, D., and Luckman, B. (2008) Maximum June–July temperatures in the southwest Yukon region over the last three hundred years reconstructed from tree-rings. Dendrochronologia, vol. 25, p. 153–166.
11. Esper, J., et al. (2012) Variability and extremes of northern Scandinavian summer temperatures over the past two millennia. Global and Planetary Change, vol. 88–89, p. 1–9.
12. Shen, S., et al. (2011) The twentieth century contiguous US temperature changes indicated by daily data and higher statistical moments. Climatic Change, vol. 109, issue 3–4, p. 287–317.
13. Steirou, E., and Koutsoyiannis, D. (2012) Investigation of methods for hydroclimatic data homogenization. Geophysical Research Abstracts, vol. 14, EGU2012-956-1.
14. Wang, X. (2003) Comments on “Detection of Undocumented Changepoints: A Revision of the Two-Phase Regression Model”. Journal of Climate, vol. 16, issue 20, p. 3383–3385.
15. Nelson, T. (2011) Email conversations between climate scientists. ClimateGate 2.0: This is a very large pile of “smoking guns.” http://tomnelson.blogspot.com/
16. Lundquist, J., and Cayan, D. (2007) Surface temperature patterns in complex terrain: Daily variations and long-term change in the central Sierra Nevada, California. Journal of Geophysical Research, vol. 112, D11124, doi:10.1029/2006JD007561.
17. Menne, M., et al. (2009) The U.S. Historical Climatology Network Monthly Temperature Data, Version 2. Bulletin of the American Meteorological Society, p. 993–1007.
18. Appell, D. (2013) Whither Global Warming? Has It Slowed Down? The Yale Forum on Climate Change and the Media. http://www.yaleclimatemediaforum.org/2013/05/wither-global-warming-has-it-slowed-down/
Adapted from the chapter Why Average Isn’t Good Enough in Landscapes & Cycles: An Environmentalist’s Journey to Climate Skepticism
Read previous essays at landscapesandcycles.net



He’s not a Dr.–he has no PhD.
I found this article to be a very clear and concise summary of a topic that is central to the debate, and I concluded that humans are having a truly major impact on our climate: by using data homogenization techniques. Perhaps Nick Stokes could come up with a new term to describe it…?
rogerknights says:
September 26, 2013 at 12:15 am
for some reason Australia works on 1 September, etc, instead of the equinoxes and solstices to determine the season.
That’s the “meteorological” way of counting the seasons. It gets the coldest months into the winter and the warmest months into the summer.
————————————————————————
Yup. Where I live, the coldest month is July, and the hottest is February. Still, no matter where you draw a line, there are always arguments pro and con.
Re my earlier post regarding the relative absence of birds in early Spring, I should add that a lot of them are staying close to home because they are flat out feeding their chicks. In Summer, they reappear, often with offspring in tow (especially magpies and crested pigeons, which are very familial).
“You say that the USHCN adjustments are unwarranted, but you say very little about what they actually are”
Doesn’t really matter, does it Nick. These are one-sided adjustments made for the wrong reason (homogenization, where the good sites are adjusted and not the bad), not the typical adjustments that ensure all individual adjustments as a group do not affect the mean or trend and are statistically symmetric: one a little high, one equally a little low at the same x, but never tainting the overall dataset. These do. I have personally run an analysis on BEST data, and if that’s the best we have, it is a shame. It seems the real BEST dataset would be to just drop back to the raw data and let any variances cancel themselves out.
Oops, should be “the hottest is January” above. The point is, the seasons reflect the coldest and hottest months in the centre of Summer and Winter.
wayne,
Adjustments should only be symmetric if biases are stochastic. In the case of TOBs or MMTS transitions this is clearly not the case. That said, there are still plenty of cases where the PHA (and Berkeley’s scalpel) lower trends relative to the raw data. You can find histograms of global GHCN adjustments at Nick’s blog: http://moyhu.blogspot.com/2012/02/ghcn-v3-homogeneity-adjustments-revised.html
Similarly, here is a good example of the Berkeley approach removing UHI from a station (lowering the trend at the Reno/Tahoe airport from 1.84 C per decade to 0.83 C; a more than 50% decrease).
http://berkeleyearth.lbl.gov/stations/167546
A couple of years ago WUWT carried my article, which dealt with exactly the inconsistency of historic temperatures mentioned in this excellent article by Jim Steele. I provide a few excerpts below and would urge people to read the source book from which they are derived.
“This material is taken from Chapter 6 which describes how mean daily temperatures are taken;
“If the mean is derived from frequent observations made during the daytime only, as is still often the case, the resulting mean is too high…a station whose mean is obtained in this way seems much warmer with reference to other stations than it really is and erroneous conclusions are therefore drawn on its climate, thus (for example) the mean annual temperature of Rome was given as 16.4c by a seemingly trustworthy Italian authority, while it is really 15.5c.”
That readings should be routinely taken in this manner as late as the 1900s, even in major European centers, is somewhat surprising.
There are numerous veiled criticisms in this vein;
“…the means derived from the daily extremes (max and min readings) also give values which are somewhat too high, the difference being about 0.4c in the majority of climates throughout the year.”
Other complaints made by Doctor von Hann include this comment, concerning the manner in which temperatures are observed;
“…the combination of (readings at) 8am, 2pm, and 8pm, which has unfortunately become quite generally adopted, is not satisfactory because the mean of 8+2+ 8 divided by 3 is much too high in summer.”
And; “…observation hours which do not vary are always much to be preferred.”
That the British (and presumably those countries influenced by them) had habits of which he did not approve demonstrates the inconsistency of methodology between countries, cultures and amateurs/professionals.”
http://wattsupwiththat.com/2011/05/23/little-ice-age-thermometers-%E2%80%93-history-and-reliability-2/
—— ——- —-
The long and the short of it is that we know the past climate and its direction of travel only in general terms, and even that becomes obscured when already fragile readings are then substantially altered, for whatever reason may be considered valid at the time. To believe we know temperatures from way back that are accurate to tenths of a degree is sheer hubris.
tonyb
Zeke Hausfather wrote, “Here is Reading, MA, for example: http://berkeleyearth.lbl.gov/stations/163049. The station has had three documented station moves and two documented TOBs changes in its history, as well as a number of other notable undocumented step changes relative to surrounding stations.”
Your link is confusing. The Berkeley homogenization data refers to Reading WB, located at 40.38 N, 75.946 W. In contrast, the USHCN Reading station I referred to is located at latitude 42.5242, longitude -71.1264. I believe you mistakenly confused Reading, Pennsylvania with Reading, Massachusetts. Do you have a link where we can compare Reading, MA in my graph?
I read your paper a while ago and will look at it again to refresh my memory on the details. But I was curious about the mechanism and methods by which your paper suggested that a move from an urban center to an airport would have a cooling effect. You wrote, “Moreover, cooling biases can be introduced into the temperature record when stations move from city centers to more rural areas on the urban periphery. This may have occurred, for example, during the period between about 1940 and 1960 when stations were moved from urban centers to newly constructed airports.”
When I step barefoot from a grassy park to the pavement of a parking lot, I notice an intense increase in temperature. Did you specifically measure and compare temperatures at each airport and the weather station’s previous location? Although urban centers are typically warmer, the conditions immediately surrounding the weather station may be more important than its proximity to the urban center. I suspect the difference between a park’s microclimate and an airport’s would be huge, and whether you characterized the airport as urban or rural would be insignificant.
I was always taught to go to the basic data, as recorded, preferably in the original notebook. This way one could identify any corrections, misread figures, etc.
If there was a change in instrumentation or location (including any change in surrounding vegetation), then a substantial time of overlap is necessary to get a more accurate figure for subsequent ‘adjustments’. Otherwise there was a real risk of ‘cooking the books’ to get an answer that pleased the teacher/boss/paymaster.
Has anyone homogenised the global temperature data, but based on stations such as Orland that do not seem to have undergone any substantial changes, in order to reduce any urban heat island effect? I seem to remember that Dr Jones refused to release temperature data because they might try to prove the CRU’s conclusions wrong!
Jim Steele
See my comment above yours regarding the uncertainty of the historic temperature record.
If you really want to see in forensic detail all the adjustments that need to be made to raw data, then read the book referenced here, ‘Improved Understanding…’:
http://www.isac.cnr.it/~microcl/climatologia/improve.php
It is a very difficult read and I had to borrow it three times from the Met Office Library in order to fully absorb it. Even then, whether we end up with the TRUE temperature recorded on any one day in a specific city in any particular century remains open to question.
Tonyb
Zeke, you provide links to take our attention away from what was presented here. Let’s focus on what is here in front of all of us. How does your analysis explain and justify the difference between the way Marysville and Orland were adjusted? Then people can decide if you or Berkeley have a valid argument that should be pursued further.
jim Steele says: September 25, 2013 at 9:13 pm
” True, my essay did not mention it was GHCN data but the paper I referenced does. You might read it instead of simply pronouncing “I don’t think it is.” In the future I encourage you to delve more deeply into the facts before simply choosing what is convenient for your beliefs.”
No, in fact the paper (Shen, Sec 2) says:
“We used the US National Climatic Data Center’s Global Daily Climatology Network (GDCN) v1.0 (Gleason 2002).”
Quite different.
@Nick Stokes — “If you don’t think they are experts, why look at their data? ”
Two things:
1) “Science is the belief in the ignorance of experts.” — Richard Feynman
2) What data? There’s the data that folks testify was collected from actual instruments on a given date and time. And then there’s the counterfactual computer game of “This is what the data might look like if we had instruments we don’t have.” That latter one? That’s not data in anyone’s dictionary. It’s the elevation of Mario Bros. as best practices for household plumbing.
jim Steele says: September 26, 2013 at 1:26 am
“Do you have a link where we can compare Reading MA.in my graph?”
Reading MA goes by the name of Chestnut Hill (with Reading given as an alternative). It is here. BEST also identified a breakpoint at 1970. There was a station move in 1960.
Socorro is interesting. BEST also identified a break at 1968, which is very obvious on your graph. And there was a station move then. Looks like the algorithm was onto something.
Fabi says: September 25, 2013 at 9:03 pm
“Any links to these numerous papers, please? Thank you.”
There’s a long list at the end of this USHCN post, most with links.
Dudley Horscroft:
At September 25, 2013 at 10:59 pm you respond to Mark Albright and say to him in total
There is an underlying issue that so far has been ignored in the thread.
The individual temperature measurements were obtained as meteorological data but they are being adjusted to enable their combination as climatological data of regional (e.g. global, hemispheric, etc.) “average” temperature. There is no definition of the metric obtained by the combination and no possibility of a calibration for that metric.
The process to combine the meteorological data alters the empirically obtained temperature data (as Jim Steele clearly shows). Of itself this only has importance if the unaltered data is discarded and so lost for future use. Any such discarding of unaltered data is a severe, and elementary, error which it is hard to imagine a competent scientist would commit except egregiously. Several comments in this thread have expressed suspicion at the adjustments of data, but it is the discarding of unaltered data which should be the cause of righteous indignation.
And that discarding returns us to the really serious underlying issue. Perhaps one day it may be possible to define e.g. mean global temperature and to devise a calibration standard for it. However, such metrics are NOT now defined and, therefore, each team which provides time series of such data defines the metric in its own way and alters the definition almost every month.
davidmhoffer provides a good explanation (at September 25, 2013 at 7:57 pm) of one reason the definition of the metrics is changed each month. If the definition of a datum were not altered then its value would not alter, but it does. See e.g.
http://jonova.s3.amazonaws.com/graphs/giss/hansen-giss-1940-1980.gif
The basic problem is that the climate temperature metrics are not defined so people can ‘process’ the meteorological data in any arbitrary way to obtain a climate datum they desire to compute at any time. This garbles the meteorological data (and the unaltered data is often discarded) and provides completely arbitrary climatological data.
A group of us attempted to address this problem over a decade ago but our attempt was prevented; see especially Appendix B of this
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
Richard
I consider Al Gore the living embodiment of Barquan Blasdel (lacking only the mental discipline required for success).
Perhaps Al Gore knows that political influence can come from manipulating people’s religious tendencies. Gore has had contacts with various religious, psychological, and spiritual organisations, and when I watched “An Inconvenient Truth” I couldn’t help but notice the religious heartstrings being activated. The psychology says that “religion” is a mental structure which can be filled with content from any mainstream religion, but it can also be secular, filled with other material, like environmentalism. Not that the environment isn’t real, just that the environment can become a secular religious conviction. We’ve inherited a monotheistic us-v-them, one-true-way, good-v-evil, saint-v-sinner culture, and to some extent we all still have it in us, waiting to be activated.
Well spotted Stefan – Gore’s highest academic qualification is, I believe, in Theology.
Excellent contribution, but the author is being just a little too kind to the adjusters. When it comes to this subject, I have become convinced, by way of numerous examples seen, that one should not put down to ignorance that which can be adequately explained by a good little old conspiracy. (Apologies to the original version)
Nice job, Jim. Hope the book takes off.
This comments thread is as good a read as the paper itself. Not unusual for WUWT.
richardscourtney says:
September 26, 2013 at 2:10 am
That’s correct, of course: the temperature dataset is, in effect, ‘undefined’. If it were ‘defined’, they would not be able to keep adjusting its contents willy-nilly to suit the agenda!
The sooner folk get away from treating these datasets as ‘gospel’ the better. But in the meantime they are all we have to work with and, worse still, the data is under the control of the ‘climate data gatekeepers’ and cannot be thoroughly questioned, as per Jim Steele’s examples.
And of course, we are still forgetting the elephant in the room: the actual accuracy of the thermometers, and reader/human error (especially) in the earlier data. In short, we are back to the signal-to-noise problem. We simply cannot be assured of detecting a signal over the last 150 years within the noise, natural variation and other errors combined. Anyone stupid enough to believe this is feasible is in a dream world. Using the best (pun) statistical analysis or not, the bottom line is that the data is ‘dirty’ (contains a lot of noise) and spans an awful lot of changes, so any form of estimation (and adjustment) merely adds to the errors involved.
The temperature datasets have been CONSTRUCTED – that is the best description – and cannot be assumed to reflect reality or fact. At best (pun again) they are but a tiny ‘indicator’ and should certainly not be used for earth-shattering policy decision-making… that’s my view and I’m not going to be changing it anytime soon!
An excellent article, Jim. Thanks!
I can only hope that the raw data will not be deleted.