Unwarranted Temperature Adjustments and Al Gore’s Unwarranted Call for Intellectual Tyranny

Guest essay by Jim Steele, Director emeritus Sierra Nevada Field Campus, San Francisco State University

For researchers like myself examining the effect of local microclimates on the ecology of local wildlife, the change in the global average is an absolutely useless measure. Although it is wise to think globally, wildlife responds only to local climate change. To understand how local climate change has affected wildlife in California’s Sierra Nevada and Cascade Mountains, I examined data from stations that make up the US Historical Climatology Network (USHCN).

I was quickly faced with a huge dilemma that began my personal journey toward climate skepticism. Do I trust the raw data, or do I trust the USHCN’s adjusted data?

For example, the raw data for minimum temperatures at Mt Shasta suggested a slight cooling trend since the 1930s. In contrast, the adjusted data suggested a 1 to 2°F warming trend. What to believe? The confusion resulting from skewed trends is summarized in a recent study that concluded their “results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.”13
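At bottom, the raw-versus-adjusted comparison is just a matter of fitting a linear trend to each version of a station record. A minimal sketch, using synthetic stand-in series rather than the actual Mt Shasta USHCN data; the “adjustment” here simply imposes an extra 2°F/century of warming for illustration:

```python
# Sketch: comparing linear trends in raw vs. adjusted annual minimum
# temperatures. Both series are synthetic stand-ins, NOT actual USHCN
# data; the "adjusted" series just has +0.02 F/yr of warming imposed.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1930, 2010)

# Hypothetical raw series: flat-to-slightly-cooling, with weather noise.
raw = 34.0 - 0.005 * (years - years[0]) + rng.normal(0.0, 0.8, years.size)
# Hypothetical adjusted series: the same record with warming imposed.
adjusted = raw + 0.02 * (years - years[0])

def trend_per_century(yr, temps):
    """Least-squares slope expressed in degrees F per 100 years."""
    return np.polyfit(yr, temps, 1)[0] * 100.0

print(f"raw trend:      {trend_per_century(years, raw):+.1f} F/century")
print(f"adjusted trend: {trend_per_century(years, adjusted):+.1f} F/century")
```

With real USHCN files, one would substitute a station’s annual raw and adjusted minimum temperatures for the two synthetic arrays.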

[Figure: Mt Shasta raw vs. adjusted minimum temperatures]

I began exploring data at other USHCN stations from around the country and realized that a very large percentage of the stations had been adjusted very similarly. The warm peaks from the 1930s and 40s had been adjusted downward by 3 to 4°F and these adjustments created dubious local warming trends as seen in examples from other USHCN stations at Reading, Massachusetts and Socorro, New Mexico.

[Figure: Raw vs. adjusted temperatures at the USHCN stations in Reading, Massachusetts and Socorro, New Mexico]

Because these adjustments were so widespread, many skeptics have suspected some sort of conspiracy. Although scientific papers are often retracted for fraudulent data, I found it very hard to believe climate scientists would allow such blatant falsification. Data correction in all scientific disciplines is often needed and well justified: wherever there are documented changes to a weather station, such as a change in instrumentation, an adjustment is justified. However, unwitting systematic biases in the adjustment procedure could readily fabricate such a trend, and these dramatic adjustments were typically based on “undocumented changes” when climate scientists attempted to “homogenize” the regional data. The rationale for homogenization rests on the dubious assumption that all neighboring weather stations should display the same climate trends. However, due to the effects of landscape changes and differently vegetated surfaces,1,2 local temperatures often respond very differently, and minimum temperatures are especially sensitive to different surface conditions.

For example, even in relatively undisturbed regions, Yosemite’s varied landscapes respond in contrary ways to a weakening of the westerly winds. Over a 10-year period, one section of Yosemite National Park cooled by 1.1°F, another warmed by 0.72°F, while in a third location temperatures did not change at all.16 Depending on the location of a weather station, very different trends are generated. The homogenization process blends neighboring data, obliterating local differences and fabricating an artificial trend.

Ecologists and scientists who assess regional climate variability must use only data that has been quality controlled but not homogenized. In one climate variability study, scientists computed the non-homogenized changes in maximum and minimum temperatures for the contiguous United States.12 The results seen in Figure A (their figure 1b) suggest recent climate change has been more cyclical, and those cycles parallel the Pacific Decadal Oscillation (PDO). When climate scientists first began homogenizing temperature data, the PDO had yet to be named. So I suggest that, instead of a deliberate climate science conspiracy, it was ignorance of the PDO, coupled with overwhelming urbanization effects, that caused the unwarranted adjustments: the PDO produced “natural change points” that climate scientists had yet to comprehend. Let me explain.

[Figure A: Non-homogenized changes in maximum and minimum temperatures for the contiguous United States]

Homogenizing Contrasting Urban and Natural Landscape Trends

The closest USHCN weather station to my research was Tahoe City (below). Based on the trend in maximum temperatures, the region was neither overheating nor accumulating heat; otherwise the annual maximum temperatures would now be higher than in the 1930s. My first question was why such a contrasting rise in minimum temperatures? Here changing cloud cover was not an issue. Dr. Thomas Karl, who now serves as the director of NOAA’s National Climatic Data Center, partially answered the question when he reported that in over half of North America “the rise of the minimum temperature has occurred at a rate three times that of the maximum temperature during the period 1951-90 (1.5°F versus 0.5°F).”3 Rising minimum temperatures were driving the average, but Karl never addressed the higher temperatures of the 1930s. Karl simply demonstrated that as populations increased, so did minimum temperatures, even though the maximums did not. A town of two million people experienced a whopping increase of 4.5°F in the minimum, which was the sole cause of the 2.25°F increase in average temperature.4

[Figure: Tahoe City maximum and minimum temperatures]

Although urban heat islands are undeniable, many CO2 advocates argue that growing urbanization has not contributed to recent climate trends because both urban and rural communities have experienced similar warming trends. However, those studies failed to account for the fact that even small population increases in designated rural areas generate high rates of warming. For example, in 1967 Columbia, Maryland was a newly established planned community designed to end racial and social segregation. Climate researchers following the city’s development found that over a period of just three years, a heat island of up to 8.1°F appeared as the land filled with 10,000 residents.5 Although Columbia would be classified as a rural town, that small population raised local temperatures by five times a century’s worth of global warming. If we extrapolated that trend, as so many climate studies do, growing populations in rural areas would cause a whopping warming trend of 26°F per decade.

CO2 advocates also downplay urbanization, arguing that it covers only a small fraction of the earth’s land surface and therefore contributes very little to the overall warming. However, arbitrary designations of urban versus rural do not address the effects of a growing population on the landscape. California climatologist James Goodridge found that the average rate of 20th-century warming for weather stations located in counties with more than one million people was 3.14°F per century, twice the rate of the global average. In contrast, the average warming rate for stations in counties with fewer than 100,000 people was a paltry 0.04°F per century.6 The warming rate of sparsely populated counties was 35 times less than the global average.

Furthermore, results similar to Goodridge’s have been suggested by tree-ring studies far from urban areas. Tree-ring temperatures are better indicators of “natural climate trends” and can help disentangle distortions caused by increasing human populations. Not surprisingly, most tree-ring studies reveal lower temperatures than the urbanized instrumental data. A 2007 paper by 10 leading tree-ring scientists reported, “No current tree ring based reconstruction of extratropical Northern Hemisphere temperatures that extends into the 1990s captures the full range of late 20th century warming observed in the instrumental record.”8

Because tree-ring temperatures disagree with a sharply rising instrumental average, climate scientists officially dubbed this the “divergence problem.”9 However, when studies compared tree-ring temperatures with only maximum temperatures (instead of the average temperatures that are typically inflated by urbanized minimum temperatures), they found no disagreement and no divergence.10 Similarly, a collaboration of German, Swiss, and Finnish scientists found that where average instrumental temperatures were minimally affected by population growth, as at remote rural stations in northern Scandinavia, tree-ring temperatures agreed with instrumental average temperatures.11 As illustrated in Figure B, the 20th-century temperature trend in the wilds of northern Scandinavia is strikingly similar to maximum temperature trends of the Sierra Nevada and the contiguous 48 states. All those regions experienced peak temperatures in the 1940s, and the recent rise since the 1990s has never exceeded that peak.


Figure B. 2000-year summer temperature reconstruction of northern Scandinavia. The warmest 30-year periods are highlighted by light gray bars (e.g., AD 27-56, or 1918-1947) and the coldest 30-year periods by dark gray bars (e.g., 1453-1482). Reprinted from Global and Planetary Change, vol. 88-89, Esper, J., et al., Variability and extremes of northern Scandinavian summer temperatures over the past two millennia (ref. 11).

How Homogenizing Urbanized Warming Has Obliterated Natural Oscillations

It soon became obvious that the homogenization process was unwittingly blending rising minimum temperatures caused by population growth with temperatures from more natural landscapes. Climate scientists cloistered in their offices have no way of knowing to what degree urbanization or other landscape factors have distorted each weather station’s data. So they developed an armchair statistical method that blended trends among several neighboring stations,17 using what I term the “blind majority rules” method. The most commonly shared trend among neighboring stations became the computer’s reference, and temperatures from “deviant stations” were adjusted to create a chimeric climate smoothie. Wherever there was population growth, this unintentionally allowed urbanization’s warming effects to alter the adjusted trend.
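The “blind majority rules” procedure can be sketched in a few lines. This is a deliberately simplified reading, not NOAA’s actual pairwise algorithm, and every station series below is synthetic: the neighbor median serves as the reference, and any sustained offset of the target station from that reference around a flagged break is “corrected” away.

```python
# Minimal sketch (an illustrative simplification, not the operational
# USHCN algorithm): homogenize one station against the majority
# behaviour of its neighbours.
import numpy as np

def homogenize(target, neighbors, break_year_idx):
    """Shift the target series before a flagged break so that its
    offset from the neighbour-median reference matches the offset
    after the break."""
    reference = np.median(neighbors, axis=0)
    diff = target - reference
    offset = diff[break_year_idx:].mean() - diff[:break_year_idx].mean()
    adjusted = target.copy()
    adjusted[:break_year_idx] += offset  # drag the past toward the majority
    return adjusted

years = np.arange(1920, 1980)
# Neighbours (urbanizing): steady linear warming, differing baselines.
neighbors = np.array([0.02 * (years - 1920) + b for b in (50.0, 51.0, 49.5)])
# Target (rural): a cyclical, PDO-like series with no net trend.
target = 50.0 + 0.5 * np.cos(2.0 * np.pi * (years - 1920) / 60.0)

adjusted = homogenize(target, neighbors, break_year_idx=30)
# The trendless cyclical station inherits part of its neighbours' warming.
print(np.polyfit(years, target, 1)[0], np.polyfit(years, adjusted, 1)[0])
```

Even in this toy version, the cyclical rural station acquires a spurious warming trend from its urbanizing neighbors.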

Climate computers had been programmed to seek unusual “change-points” as a sign of “undocumented” station modifications. Any natural change‑points caused by cycles like the Pacific Decadal Oscillation looked like deviations relative to steadily rising trends of an increasingly populated region like Columbia, Maryland or Tahoe City. And the widespread adjustments to minimum temperatures reveal this erroneous process.
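A toy illustration of that failure mode: the naive change-point search below simply looks for the split that maximizes the jump in segment means (the operational tests are more sophisticated, so this is only a stand-in). It is handed a purely cyclical, PDO-like series with no trend, no station move, and no instrument change, yet it still flags an apparent “undocumented change.”

```python
# Naive change-point search: find the split that maximizes the jump in
# segment means. Applied to a purely cyclical (PDO-like) series with no
# station changes, it still flags an "undocumented change".
import numpy as np

def best_changepoint(x, min_seg=10):
    """Return (index, shift) for the split with the largest difference
    between the mean after the split and the mean before it."""
    best_i, best_shift = min_seg, 0.0
    for i in range(min_seg, len(x) - min_seg):
        shift = x[i:].mean() - x[:i].mean()
        if abs(shift) > abs(best_shift):
            best_i, best_shift = i, shift
    return best_i, best_shift

years = np.arange(1915, 1995)
# ~60-year oscillation: no trend, no documented station change.
series = 0.6 * np.sin(2.0 * np.pi * (years - 1915) / 60.0)

idx, shift = best_changepoint(series)
print(f"flagged a 'change' at {years[idx]} with apparent shift {shift:+.2f} F")
```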

I first stumbled onto Anthony Watts’ surface station efforts when investigating climate factors that controlled the upslope migration of birds in the Sierra Nevada. To understand the population declines in high-elevation meadows on the Tahoe National Forest, I surveyed birds at several low-elevation breeding sites and examined the climate data from foothill weather stations.

Marysville, CA was one of those stations, but its warming trend sparked my curiosity because it was one of the few stations where the minimum was not markedly adjusted. I later found a picture of the Marysville weather station at the SurfaceStations.org website. The Marysville station was Watts’ poster child for a bad site; he compared it to the less-disturbed surface conditions at a neighboring weather station in Orland, CA. The Marysville station was located on an asphalt parking lot just a few feet from air conditioning exhaust fans.

The proximity to buildings also altered the winds and added heat radiating from the walls. These urbanization effects at Marysville created the rising trend that CO2 advocate scientists expect. In contrast, the minimum temperatures at nearby Orland showed the cyclic behavior we would expect the Pacific Decadal Oscillation (PDO) to cause. Orland’s data was not overwhelmed by urbanization and was thus more sensitive to cyclical temperature changes brought by the PDO. Yet it was Orland’s data that was markedly adjusted, not Marysville’s! (Figure C)


Figure C. Raw and adjusted minimum temperature for Marysville and Orland California.

Several scientists have warned against homogenization for just this reason. Dr. Xiaolan Wang of the Meteorological Service of Canada wrote, “a trend-type change in climate data series should only be adjusted if there is sufficient evidence showing that it is related to a change at the observing station, such as a change in the exposure or location of the station, or in its instrumentation or observing procedures.”14

That warning went unheeded. In the good old days, a weather station such as the one in Orland, CA (pictured above) would have been a perfect candidate to serve as a reference station. It was well sited, away from pavement and buildings, and its location and thermometers had not changed throughout its history. Clearly Orland did not warrant an adjustment, but the data revealed several “change points.” Although those change points were naturally caused by the Pacific Decadal Oscillation, to the computer they signaled that an “undocumented change” had occurred.

To understand the PDO’s effect, it is useful to see the PDO as alternating periods: more frequent El Niños that ventilate heat and raise the global average temperature, followed by more frequent La Niñas that absorb heat and lower global temperatures. For example, heat ventilated during the 1997 El Niño raised global temperatures by ~1.6°F; during the following La Niña, temperatures dropped by ~1.6°F. California’s climate is extremely sensitive to El Niño and the PDO. Reversals in the Pacific Decadal Oscillation caused natural temperature change-points around the 1940s and 1970s. The rural station of Orland was minimally affected by urbanization, and thus more sensitive to the rise and fall of the PDO. Similarly, the raw data for other well-sited rural stations, like Cuyamaca in southern California, also exhibited the cyclical temperatures predicted by the PDO (see Figure D, lower panel). But in each case those cyclical temperature trends were homogenized to look like the linear urbanized trend at Marysville.


Figure D. Upper panel: PDO index. Lower panel: Cuyamaca, CA raw versus adjusted minimum temperatures.

Marysville, however, was overwhelmed by California’s growing urbanization and less sensitive to the PDO; thus it exhibited a steadily rising trend. Ironically, a computer program seeking any and all change-points dramatically adjusted the natural variations of rural stations to make them conform to the steady trends of more urbanized stations. Around the country, very similar adjustments lowered the peak warming of the 1930s and 1940s in the original data. Those homogenization adjustments now distort our perceptions and affect our interpretations of climate change. Cyclical temperature trends were unwittingly transformed into rapidly rising warming trends, suggesting a climate on “CO2 steroids.” However, the unadjusted average for the United States suggests the natural climate is much more sensitive to cycles such as the PDO. Climate fears have been exaggerated by urbanization and homogenization adjustments on steroids.

Skeptics have highlighted the climate effects of the PDO for over a decade, but CO2 advocates dismissed this alternative climate viewpoint. As recently as 2009, Kevin Trenberth emailed Michael Mann and other advocates regarding the PDO’s effect on natural climate variability, writing, “there is a LOT of nonsense about the PDO. People like CPC are tracking PDO on a monthly basis but it is highly correlated with ENSO. Most of what they are seeing is the change in ENSO not real PDO. It surely isn’t decadal. The PDO is already reversing with the switch to El Nino. The PDO index became positive in September for first time since Sept 2007.”

However, contrary to Trenberth’s email rant, the PDO continued trending to its cool phase and global warming continued its “hiatus.” Now forced to explain the warming hiatus, Trenberth has flip-flopped about the PDO’s importance, writing, “One of the things emerging from several lines is that the IPCC has not paid enough attention to natural variability, on several time scales,” “especially El Niños and La Niñas, the Pacific Ocean phenomena that are not yet captured by climate models, and the longer term Pacific Decadal Oscillation (PDO) and Atlantic Multidecadal Oscillation (AMO) which have cycle lengths of about 60 years.”18 No longer is CO2 overwhelming natural systems; now they must argue that natural systems are overwhelming CO2 warming. Will they also rethink their unwarranted homogenization adjustments?

Skeptics highlighting natural cycles were ahead of the climate science curve and provided a much needed alternative viewpoint. Still, to keep the focus on CO2, Al Gore is stepping up his attacks against all skeptical thinking. In a recent speech, he rightfully took pride that we no longer accept intolerance and abuse against people of different races or with different sexual preferences. Then, totally contradicting his examples of tolerance and open-mindedness, he asked his audience to make people “pay a price for denial.”

Instead of promoting more respectful public debate, he in essence suggests Americans should hate “deniers” for thinking differently than Gore and his fellow CO2 advocates. He and his ilk are fomenting a new intellectual tyranny. Yet his “hockey stick beliefs” are based on adjusted data supported neither by the raw temperature data nor by natural tree-ring data. So who is in denial? Whether or not Gore’s orchestrated call to squash all skeptical thought is based solely on ignorance of natural cycles, his rant against skeptics is far more frightening than the climate change evidenced by the unadjusted data and the trees.

Literature cited

1. Mildrexler,D.J. et al., (2011) Satellite Finds Highest Land Skin Temperatures on Earth. Bulletin of the American Meteorological Society

2. Lim, Y-K., et al., (2012) Observational evidence of sensitivity of surface climate changes to land types and urbanization.

3. Karl, T.R. et al., (1993) Asymmetric Trends of Daily Maximum and Minimum Temperature. Bulletin of the American Meteorological Society, vol. 74

4. Karl, T., et al., (1988), Urbanization: Its Detection and Effect in the United States Climate Record. Journal of Climate, vol. 1, 1099-1123.

5. Erell, E., and Williamson, T., (2007) Intra-urban differences in canopy layer air temperature at a mid-latitude city. Int. J. Climatol. 27: 1243–1255

6. Goodridge, J., (1996) Comments on Regional Simulations of Greenhouse Warming Including Natural Variability. Bulletin of the American Meteorological Society. Vol.77, p.188.

7. Fall, S., et al., (2011) Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. Journal Of Geophysical Research, Vol. 116

8. Wilson R., et al., (2007) Matter of divergence: tracking recent warming at hemispheric scales using tree-ring data. Journal of Geophysical Research–A, 112, D17103, doi: 10.1029/2006JD008318.

9. D’Arrigo, R., et al., (2008) On the ‘Divergence Problem’ in Northern Forests: A review of the tree-ring evidence and possible causes. Global and Planetary Change, vol. 60, p. 289–305

10. Youngblut, D., and Luckman, B., (2008) Maximum June–July temperatures in the southwest Yukon region over the last three hundred years reconstructed from tree-rings. Dendrochronologia, vol. 25, p.153–166.

11. Esper, J. et al. (2012) Variability and extremes of northern Scandinavian summer temperatures over the past two millennia. Global and Planetary Change 88–89 (2012) 1–9.

12. Shen, S., et al., (2011) The twentieth century contiguous US temperature changes indicated by daily data and higher statistical moments. Climatic Change Volume 109, Issue 3-4, pp 287-317.

13. Steirou, E., and Koutsoyiannis, D. (2012) Investigation of methods for hydroclimatic data homogenization. Geophysical Research Abstracts, vol. 14, EGU2012-956-1

14. Wang, X., (2003) Comments on “Detection of Undocumented Changepoints: A Revision of the Two-Phase Regression Model”. Journal of Climate, vol. 16, p. 3383–3385.

15. Nelson, T., (2011) Email conversations between climate scientists. ClimateGate 2.0: This is a very large pile of “smoking guns.” http://tomnelson.blogspot.com/

16. Lundquist, J. and Cayan, D. (2007) Surface temperature patterns in complex terrain: Daily variations and long-term change in the central Sierra Nevada, California. Journal of Geophysical Research, vol. 112, D11124, doi:10.1029/2006JD007561.

17. Menne, M., (2009) The U.S. Historical Climatology Network Monthly Temperature Data, version 2. Bulletin of the American Meteorological Society, p. 993-1007

18. Appell, D. (2013) Whither Global Warming? Has It Slowed Down? The Yale Forum on Climate Change and the Media. http://www.yaleclimatemediaforum.org/2013/05/wither-global-warming-has-it-slowed-down/

Adapted from the chapter Why Average Isn’t Good Enough in Landscapes & Cycles: An Environmentalist’s Journey to Climate Skepticism

Read previous essays at landscapesandcycles.net


184 thoughts on “Unwarranted Temperature Adjustments and Al Gore’s Unwarranted Call for Intellectual Tyranny”

  1. ‘I found it very hard to believe climate scientists would allow such blatant falsification.’

    Not sure why that is; climate ‘scientists’ have shown time and again that it is not just an acceptable but a rewarded approach to working in this area.

    Adjustments are not a problem if they’re justified, documented correctly, and the old data remains available. All three of which get a big F for fail when it comes to climate ‘scientists.’ For the bottom line remains the same: it’s the objective, ‘the cause,’ that matters; how many facts have to die to get there means nothing.

  2. I can think of a number of climate psientists, not based at the University of East Anglia or Penn State, of course, who would allow such blatant falsification.

  3. Excellent! I too believe one should consider only the “raw” data. Averaging global temperatures is too simplistic.

    Since this is so easy to see, the question is begged: Why does the “establishment” not see it? I have come to believe they do and CAGW is a hoax to justify “redistributing our wealth” to those “in on” the game. Not a new thought.

  4. There may be an error near the end where it is written:

    based on adjusted data that are not unsupported by the raw temperature data and

  5. As a retired scientist, I can say that the mistreatments of data described here should never have been accepted. This is a black eye for journal editors and reviewers.

  6. It’s very difficult NOT to think conspiracy. I know: “Do not attribute to malice what is more easily attributed to incompetence”. The level of incompetence required, however, exceeds the believable; even taking account of the decline in academic quality of the past half century.

  7. Systemic, perhaps. Unintentional, I have problems with. If only, say, 2/3 of the adjustments went in one direction, then there would possibly be room for a sliver of doubt.

    But all adjustments lead the same way, mostly to make the past colder, therefore the rate of warming greater.

    This is a clever strategy, because if they made The Now warmer, people could go outside and verify the temperature. We cannot go outside 100 years ago and verify the temperature. It baffles my mind that they can bamboozle so many “intelligent”, OK, educated, people. What effects are being adjusted for, because UHI should result in adjustments in the opposite direction?

  8. That is a great write up sir.

    We just need to see a couple of valiant PhD students in atmospheric physics or some other relevant field (with equally valiant supervisors) pull apart the entire raw and adjusted US temperature databases, using the station location quality designator as a simple filter. Compare, contrast, then report.

  9. You say that the USHCN adjustments are unwarranted, but you say very little about what they actually are, or the extensive justifications that have been given. Paper here, with more refs. And the adjustments aren’t all homogeneity. At least as large a component is TOBS. This is simply an adjustment that arises where the time of observation has undergone a recorded change, which has a predictable effect on measured minmax temp. For reasons of NWS policy, that has a systematic bias. Is that “unwarranted”?

  10. The AGW hypesters hide from this by endless filibusters and name calling of skeptics.

  11. Wait, you used tree ring data to disprove non-UHI global warming?
    I thought there was a hockey stick welded into every tree in the world.

    At least that’s what the Mann says /sarc

  12. But what can you do about it? That’s the proper question. Please try to interest a major daily or TV and radio news service in interviewing you.
    Andrew Bolt and Alan Jones are good targets for an Australian audience.

  13. If only all the graphs were readable, or at least linked to larger legible versions. Socorro NM vs Reading MA is borderline, but especially the caption is too squashed to read on Figures A, B, C, and D. “View image” on Fig D gets me a PNG 504×523 image, with small graphs in the upper left using maybe only half of the image area. Looks like a bad cut-and-paste onto a blank canvas. Someone goofed up.

    Dear management, please fix soon.

  14. Nick Stokes says:
    September 25, 2013 at 4:12 pm

    >>>>>

    Nick jumps in with both feet…

    “This is simply an adjustment that arises where the time of observation has undergone a recorded change, which has a predictable effect on measured minmax temp.”

    Spin, spin, spin..that’s all they have left.

  15. @Nick Stokes. You miss the point by a wide measure. Indeed there are justifiable adjustments and I said as much. But Figure A for the contiguous USA is quality controlled GHCN data that accounts for all the documented changes that you mention. The issue is homogenization of data despite the lack of documentation. I used raw data instead of TOBS adjustments, because the validity of those adjustments is questionable. Minimum temperatures should be immune to observation time if they use the standard minimum-maximum thermometers I am accustomed to. That said, if we use the USHCN TOBS for Mt Shasta, Reading MA and Socorro NM there still is no warming trend.

  16. To me this temperature thing has the same problem as the man with many clocks. He never knows the exact true time and no one knows the exact true temperature and never will. What’s normal, who knows? What’s average, who knows? What’s mean, who knows? And we never will, never. It’s all bull excrement.

  17. What I find disheartening is the fact that they adjust the better stations to look like the stations which are heavily influenced by unnatural heating. Instead of finding out why the bad stations are wrong, they made them all wrong.

    That’s not incompetence; that is manipulation with malicious intent. Ottmar Edenhofer said it best:
    “(EDENHOFER): First of all, developed countries have basically expropriated the atmosphere of the world community. But one must say clearly that we redistribute de facto the world’s wealth by climate policy. Obviously, the owners of coal and oil will not be enthusiastic about this. One has to free oneself from the illusion that international climate policy is environmental policy. This has almost nothing to do with environmental policy anymore, with problems such as deforestation or the ozone hole.”

    http://www.theblaze.com/stories/2010/11/18/u-n-official-admits-we-redistribute-worlds-wealth-by-climate-policy/

    It has never been about the climate; it’s been about power, money, and control.

  18. This has always been a means to the end of total populace control through unsubstantiated fear-mongering. Al Gore is nothing more than a charlatan and snake oil salesman.

  19. Nick,

    STOP the handwaving and address the stations at Marysville and Orland only. Explain those two. Give us the rock solid reasons for those two stations only. I’m sure you can do it, but it will require a cessation of handwaving, walnut shell maneuvering and double speak. You know well that those stations deserve better, so hop to it. Then we can discuss the corruption of the rest of the data whether from ignorance, hubris, greed or patronizing arrogance.

    pbh

  20. Think about it. The CRU (supposed to be a repository for world climate records) adjusts the data and then dumps the raw data, which is forever lost. Climate-gate exposed just what the game is. How can you go back and check what the raw data was if it no longer exists? Jim, you give them the benefit of the doubt far greater than I ever would. I am old and can’t throw them far, so I don’t trust them at all.

  21. “undocumented changes”

    That there is a clear pattern of this makes the adjusters’ innocence implausible. Where there should be a paper trail — and there isn’t one? Something is almost certainly wrong.

    Take heart, Mr. Stokes! At least it shows that those you support above are not incompetent.

    ****************
    Another fine article, Mr. Steele — thank you for all you do for the truth.

  22. Nice job!

    Even a CO2 advocate can easily understand this.

    Now, could we integrate the influence of 1,200 km smoothing and that infamous use of extrapolation for the “Paul Harvey” rest of the story on terrestrial-based temperature data? (Not sarc.)

    How many stations do we actually have above 80N, even 70N for that matter? Just curious……

  23. Janice,

    In my opinion, what the globalist propaganda people (IPCC, Al Gore, etc.) have done amounts to organized crime against humanity. Sadly, it is not incompetence but blatant deception. Mr. Steele shows just how badly muddled the mess is and just how the deception is being carried out.

    I applaud you Mr Steele for showing just how easily deceived we have all been over the years. I am placing this one on my Facebook as this merits wide distribution.

    Bill

  24. Marcos says:
    September 25, 2013 at 4:20 pm

    http://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_chapter_07.pdf

    I don’t know how heavily Jones & Hansen had stepped on the raw data in IPCC’s First AR (1990), Figure 7.6, but it’s closer to reality than post-Hockey Stick manipulation. They seem to have played loose in smoothing the data, at any rate.

    IIRC, which I may not, IMO Jones admitted in an interview after Climategate that the 1930s were warmer than the 1990s globally. Please correct me if wrong. The earlier decade definitely was hotter in the US.

  25. McComberBoy says:
    September 25, 2013 at 5:12 pm


    Nick,

    STOP the handwaving and address the stations at Marysville and Orland only. Explain those two. Give us the rock solid reasons for those two stations only. I’m sure you can do it, but it will require a cessation of handwaving, walnut shell maneuvering and double speak. You know well that those stations deserve better, so hop to it. Then we can discuss the corruption of the rest of the data whether from ignorance, hubris, greed or patronizing arrogance.

    pbh

    Nick Stokes et. al.

    Please reply to McComberBoy and provide a justification for the documented adjustments to Orland and Marysville in the manner described.

  26. jim Steele says: September 25, 2013 at 4:51 pm
    ” But Figure A for the contiguous USA is quality controlled GHCN data that accounts for all the documented changes that you mention.”

    Really? It doesn’t say that it’s GHCN, and I don’t think it is. But it does say that it is non-homogenized.

    McComberBoy says: September 25, 2013 at 5:12 pm
    “STOP the handwaving and address the stations at Marysville and Orland only.”

    I don’t have any particular knowledge of those two stations. I’m saying that the adjustments were made, by stated algorithm, for reasons that have been extensively described in numerous papers, and those reasons should be looked at.

  27. I can’t stand to see flat-out lying and cheating… combined with siting issues.
    They are not the same…

    But adjusting the past temps to show a warming that didn’t exist… a more rapid warming…
    has been the downfall of every computer model that was tuned/backcast to those fake past temps.

    No wonder every computer game shows faster warming than really happened…
    they are all tuned to past temps that showed faster warming than really happened.

  28. I don’t buy that the confusion in climate science was accidental. All the discrepancies I’m aware of, and there are many of them when looking at the raw data, go in ONE direction: supporting the claim that CO2 is a major climate driver. Even with the adjusted data the claims have vast holes in them, to the point that it’s surprising the theory has any traction. The claimed dangers range from drastically inflated to outright lies. If this was all an accident it would have taken an amazing set of coincidences and completely incompetent scientists.

    Then of course you have the way the mythology has been pushed onto the population, not only top-down but with constant appeals to authority and FEAR. We are told all the skeptic arguments have been accounted for, but any skeptic who knows a decent amount about the topic knows this is a bald-faced lie on the bulk of points one can raise. Why lie if the data supports your case? Why the need to block ANY debate, right from the start, of this mythology based 100% on cherry-picked datasets that insist you must ignore raw and real-world data? We have barely started the debate on this topic, and what part of it we have “debated” has been mostly on blogs like this rather than among those in the field. In any other field of science that is still debated you see both sides of the argument honestly portrayed in context as those in the field ferret it out. This isn’t even remotely the case with CAGW. In fact it’s worse than that: any time a skeptic stance gains the slightest traction, the mythology and story change without any real, meaningful debate, often via “studies” that claim to support the official stance, almost always with obvious, extreme flaws. There isn’t a chance this is all accidental, IMO.

    At no juncture in the rise of the theory that CO2 is a major climate driver has this issue resembled anything like science, or the way science proves itself to have merit. As a person who has studied the IMF and how it uses lies, media and politicians to crush and subvert as it concentrates power, coupled with very real and pressing environmental issues such as those involving our oceans, we also have the motive for all this: it was used to justify global taxation and, essentially, global government right from the start. In fact, in those same power centers a few years back, as they failed to meet goals set on advancing CAGW, they proposed a global poverty tax, literally while noting they were unlikely to get their tax at a global level unless temps suddenly rose. You don’t believe banksters could do such things? Study the IMF in depth: their claims versus what happens, what our media says versus what happens in reality. This topic is drastically different than most realize. Much of the third world is such by design. There is a lot of conspiracy involved, but I wouldn’t call it theory when you can trace enough of it out to see it really clearly, if you take the time.

    I suspect most on this bandwagon are true believers, but there is no doubt in my mind that in coming decades we will see very clearly this was agenda-driven the entire time. Honestly, it would be much scarier to me if it wasn’t. If it was agenda-driven, then it makes sense, and humans are the greedy, self-serving beings most of us have always been. If it wasn’t purposeful, it says much worse things about our species.

  29. From Nick Stokes on September 25, 2013 at 6:27 pm:

    I don’t have any particular knowledge of those two stations. I’m saying that the adjustments were made, by stated algorithm, for reasons that have been extensively described in numerous papers, and those reasons should be looked at.

    Translation: “I don’t know why! They’re the experts, they must have had their reasons! Look, just read their papers, they say it’s all in there. LEAVE ME ALONE!”

  30. FWIW. I lived in the 40s, 50s, 60s, 70s, 80s, 90s, 00s, 10s.

    I remember the buds on the trees and short-sleeve shirts in early March in the early 50s. Then, the falling temperatures of the 60s and 70s. Magazines wrote about the coming Ice Age. Even in the early 90s, in our area, the last big snow was in early April. Now the last snow tends to be in March, later than I remember from the early 50s.

    Anecdotal, I know. But many of us who lived during the period in question do not think it’s warmer now than then. The problem is, perhaps, that most of us who personally experienced the changes from the 40s to now are “retired” and no longer relevant in the working/political world.

  31. I know how to convict those worldwide temperature adjusters of fraud. They adjust the data so it shows the same warming around the world, then write a paper about it and say, “see, this is the proof.” That’s impossible: during PDO, AMO and ENSO cycles (prolonged La Niña and El Niño) some continents must be 0.2–0.3 deg cooler or warmer than the 0 deg anomaly. That means there should be an overall 0.4–0.6 deg difference between continents during the 30-year cycles. Do we see that currently? No we don’t, and that, I would say, is a deadly mistake.

  32. Excellent article that explains in more or less layman’s terms the temperature measurement issue beyond siting.
    It was just such statistical manipulation, of devious intent or not, which alerted a mathematically astute friend of mine to the erroneous observations behind the whole warming theory, which he then passed on to me.

  33. It appears you’ve addressed many of the adjustments Nick Stokes mentioned earlier but I didn’t see whether an adjustment for observation time or a change in thermometer was part of the reason for the original adjustment. Do you know if this was a factor in those adjustments?

  34. Psh. The temperature data products are based on a true story. In the same way that the Texas Chainsaw Massacre is based on a true story.

    Only heretics and apostates worry about True. It’s still got Truthiness.

    /obligatory tag to indicate levity.

  35. kadaka (KD Knoebel) says: September 25, 2013 at 6:48 pm
    “They’re the experts, they must have had their reasons! Look, just read their papers, they say it’s all in there.”

    If you don’t think they are experts, why look at their data? This post asked:
    “Do I trust the raw data, or do I trust the USHCN’s adjusted data?”
    If that’s a real question, then it can only be answered by finding out why they made the adjustments. They have said plenty about it.

  36. Edit:

    Yet his “hockey stick beliefs” are based on adjusted data that are not unsupported by the raw temperature data and unsupported by natural tree ring data.

    Contradicts the argument. Remove “not”.

  37. Nick Stokes & McComberBoy: To put what I feel in simple, earthy terms, here is what I think Jim is saying. If we have two buckets, one with water in it and the other with urine in it, and I mix the two, are you two going to drink it? Obviously, when it comes to climate measurements, you two will drink that “water”, and drink heartily. In my case, I prefer to keep my “water” uncontaminated. The problem I have with so-called climate scientists is that they seem to think if you mix contaminated “water” with uncontaminated “water” you can end up with something drinkable. That might be true if you had one contaminated bucket for every hundred thousand buckets, but it looks to me like the contaminated buckets outnumber the uncontaminated ones two to one. In that case, I am not about to drink that water. If you are willing to drink, so be it, but please do not ask or force me to join you. To the rest of you skeptics, sorry that I am so crude in stating how I feel. I like to put things in plain, down-to-earth terms so that even the intellectually challenged might grasp what I am saying.

  38. Excellent work, Jim, thank you.

    I posted this a week ago.

    http://wattsupwiththat.com/2013/09/16/what-people-will-read-and-see-with-the-ipccs-lead-off-illustration-from-the-ar5-spm/#comment-1419131

    Allan MacRae says: September 17, 2013 at 3:01 am

    I suggest that the Surface Temperature datasets exhibit a significant warming bias and should not be used as-is for rational discussions of global warming.

    In the USA, it appears that the warmest years in the modern data record occurred in the 1930’s. This may be true globally as well. Hadcrut3 probably has a warming bias of about 0.2C since ~1980 and this warming bias may extend back several more decades.

    ….

  39. McIntyre backs up Lucia, Curry and others in their pursuit of the BSh…ing IPCC cover-up and excuses.
    Some good graphs to look at and consider. Very good comment from McKitrick in the comments as well.

    http://climateaudit.org/2013/09/24/two-minutes-to-midnight/#more-18392

    Let’s hope that when this version 5 BS is officially released, the blogger baseball bats are ready to smash it up quickly.
    But the pre-release efforts have been good so far. Go Steve, Judith, Lucia, Ross, Bob, Anthony, etc.

  40. Re: Making the past colder

    My understanding is that this is in part due to the process of “infilling”. Where NASA/GISS has missing data for a given year in a given grid cell, but does have it for enough other years (over 50%, I think), they draw a linear trend through the data and calculate the missing years from it. Since current temps actually ARE warmer than, say, 100 years ago, the linear slope increases as we go forward in time adding new data, and the older a missing year is, the more likely it will be “infilled” with a calculated value that is cooler than the previous calculation. In addition, more old blank years come to clear the 50% threshold over the total time span, so grid cells that were previously blank are now “infilled” with temps calculated from that linear trend. As we add new data, the infilling process, as I understand it, continues to cool the past.

    You can actually see this for yourself at the NASA/GISS web site:

    http://data.giss.nasa.gov/gistemp/maps/

    Using Land/Ocean, Annual Trends, and a 1200 km radius, I got the following results:

    1900 – 1949 trend = 0.45
    1950 – 2000 trend = 0.53

    1900 – 2000 trend = 0.72

    Now it doesn’t take a genius to figure out that the trend from 1900–2000 cannot possibly exceed the trends of 1900–1949 and 1950–2000, but that is exactly what the tool claims. The reason is “infilling”. With the additional spatial and temporal coverage of the current time period, which is in fact warmer, using the 1900–2000 trend provides a much larger number of older grid cells which can be “infilled” via a linear trend. The problem with this, of course, is that climate is cyclical, not linear, and it is utterly daft to assume that as we collect more warm data in the current era, this justifies adjusting grid cells for which we have no data increasingly downward.

    But the joke may be on them in the end. If we go through a cooling cycle, and it is looking like we will, then as temps decline, the linear trend extended across all the data will become smaller, causing the infilling process to calculate increasingly warmer temps for missing data from the past.

    I’m betting that when that starts to become apparent they will declare an update to the methodology.
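    A toy sketch of the infilling idea described above (this is not GISS’s actual procedure; the `infill` helper and all anomaly numbers are invented for illustration):

```python
import numpy as np

def infill(years, temps, missing_years):
    """Estimate missing years from a linear trend fitted to the
    available data (a toy stand-in for the infilling idea above)."""
    slope, intercept = np.polyfit(years, temps, 1)
    return {y: slope * y + intercept for y in missing_years}

# Invented anomalies: flat until 1980, then steady warming.
years = np.arange(1950, 2001)
temps = np.where(years < 1980, 0.0, 0.02 * (years - 1980))

# Infill a missing early year, first using data through 1990,
# then again after a decade of warmer data has been added.
early = years <= 1990
v1990 = infill(years[early], temps[early], [1940])[1940]
v2000 = infill(years, temps, [1940])[1940]

# The extra warm data steepens the fitted slope, so the estimate
# for the missing early year comes out cooler than before.
assert v2000 < v1990 < 0
```

    Under these assumptions, each recalculation with newer, warmer data steepens the fitted slope and pushes the infilled early values further down, which is the “cooling the past” effect the comment describes.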

  41. Allen63 says:
    September 25, 2013 at 6:54 pm

    FWIW. I lived in the 40s, 50s, 60s, 70s, 80s, 90s, 00s, 10s.

    I remember the buds on the trees and short-sleeve shirts in early March in the early 50s. Then, the falling temperatures of the 60s and 70s. Magazines wrote about the coming Ice Age. Even in the early 90s, in our area, the last big snow was in early April. Now the last snow tends to be in March, later than I remember from the early 50s.

    Anecdotal, I know. But many of us who lived during the period in question do not think it’s warmer now than then. The problem is, perhaps, that most of us who personally experienced the changes from the 40s to now are “retired” and no longer relevant in the working/political world.

    ======================================================================
    I wouldn’t say “anecdotal”. You didn’t say where you live or lived, but you remember what it was like and realize that it doesn’t match up with what’s being pushed. Not that age is a guarantee of wisdom or honesty (there are a lot of old snakes out there), but wisdom with age, and honesty, are not things to be ignored.

  42. There is no possible rationale for a TOBS adjustment of min/max data.
    Get real, Nick. It doesn’t matter what anybody writes about it.

  43. Nick Stokes says:

    If that’s a real question, then it can only be answered by finding out why they made the adjustments. They have said plenty about it.
    —-

    Really?

    What have they said specifically about the adjustments made to stations mentioned in this post? And why exactly they were made?

    Please enlighten us, Nick.

  44. David Hoffer, thanks for that lucid and helpful (to a definite non-genius) explanation of what I had only a moderately firm grasp on till now. LOL, imagine if the early sea-chart drawers or map makers had used that kind of “thinking.” Taking a bearing from, say, the Horn of Africa, then a point or two northward, ending with one at Gibraltar, they would have had ships sailing full ahead right into Côte d’Ivoire! And they would have been out of business in a hurry.

    IPCC (Intn’l Pseudoscientists and Climate Crooks) — your days are numbered.

    “Mene, mene, tekel, upharsin… .”

  45. Nick Stokes says:
    September 25, 2013 at 4:12 pm

    ………At least as large a component is TOBS. This is simply an adjustment that arises where the time of observation has undergone a recorded change, which has a predictable effect on measured minmax temp. For reasons of NWS policy, that has a systematic bias. Is that “unwarranted”?

    =======================================================================
    How would TOBS produce 7 new record highs for 2010 where I live? At most it would change which day the record was “set”.

    http://wattsupwiththat.com/2013/09/25/unwarranted-temperature-adjustments-and-al-gores-unwarranted-call-for-intellectual-tyranny/#comment-1426755

  46. Gunga Din, right on! You are so cool to support Allen, and so correct. Anecdotal evidence is not “no evidence.” I think that you, Allen, are likely a more reliable data source than the above-discussed “data” sets. And I KNOW my great-grandfather’s daily weather journal (admitted into evidence in a court case, even), jotted down faithfully and honestly in his careful, geological surveyor’s, manner was FAR more accurate.

    As has been said in other contexts, data infilling is “precision without accuracy.”

  47. An excellent and compelling post. Thank you, Jim Steele. There is interrogating the data, and then there is torturing it!

    I would be more reluctant than some here to attribute this process to fraud. We know from numerous areas of science that preconceptions can colour the way in which research is conducted. As Spencer Weart said, climate science is a collective undertaking, requiring large teams, so it is necessarily socially constructed — but that doesn’t mean it is only socially constructed.

    While there are undoubtedly some instances of fraudulent behaviour in climate science, it is the ‘overriding’ of the usual quality-assurance processes that has allowed mistakes to go uncorrected. Many of these institutional factors have become politicised: the undermining of anonymous peer review by ‘virtual’ communities of self-appointed gatekeepers, and the empowering of consensus through the IPCC, have all played a part. There are others: reliance on models, noble causes, funding availability, etc., that have corrupted the scientific process and created a ‘value slope’ (to use a term of the philosopher of science Iain Boale).

    It’s all coming unstuck as the deficiencies of models tuned to errant data sets are exposed. It has been a costly lesson, not just because of the billions spent on research that is no better in its predictive efficacy than 30 years ago, but because of the costs of the wrong policies. The best example here in Australia is probably the Wonthaggi desalination plant, built as a public–private partnership, with costs now north of $5 billion, helped (ironically) by delays caused by rain: the state of Victoria will force consumers to pay each year for water they don’t need.

    People and their governments will not be kind, but we might hope that from the wreckage we can extract the valuable knowledge on the PDO, ENSO, etc. that might allow better decisions in future (by governments, farmers and other risk managers).

  48. AND…. Gunga Din, to continue, #(:)) (just saw your 8:28pm post) I’ve not complimented you on this before, but WAY TO GO with that historic research and detective work. That you kept such fine records of all that is impressive. Thanks for the several times (and keep it up, too — lots of new readers all the time) you have shared on WUWT about how those historic temps just — WHOA! — changed…. . (cue: Twilight Zone music… nuh, nuh, NUH, nuh — nuh, nuh, NUH, nuh…) {0.0}

  49. The data is the data, until it is adjusted. Then it’s no longer absolute data, but climate model modelled data.

    By specifying a climate model, you have established theoretical relationships between actual observing stations. By adjusting the raw data, you can decrease your model hindcast uncertainty. If your model forecast blows up after 17 additional years of new collected raw data, then your model either sucks, or you adjusted the data too much, trying to squeeze a spurious postulated signal from too much uncertainty in the raw data.

    So Professor Steele is right. GISS output (and its ilk) is useless for many/most/all real-world applications. Real world applications using actual observations have more uncertainty in their forecasts, leading to more modest claims of new scientific knowledge. Well, in the military-industrial-educational-scientific government-funded by We the People taxes complex, modest claims don’t bring home the bacon.

    I agree with Nick Stokes too. There was nothing wrong with the underpinnings of GISS and its ilk. Until, of course, this d*** pesky warming pause occurred, which the models didn’t predict. I.e., it’s a beautiful theory; it’s a shame it doesn’t predict reality.

    Live and learn. Try some stuff, and if it doesn’t work, then it means you learned something and you’re smarter now. Time to try something else. But if you keep doing it, then it means you’re stupid.

  50. Nick Stokes says:
    September 25, 2013 at 6:27 pm

    “I’m saying that the adjustments were made, by stated algorithm, for reasons that have been extensively described in numerous papers, and those reasons should be looked at.”

    Mr. Steele did just look at it, in a very clear and understandable way. Not only did he identify the reasons given for, and the problems with, the homogenization adjustment algorithm, he described exactly why it happens and verified it with two different methods (tree rings and unaltered data from unaltered rural stations).

    He even goes out of his way to argue that it is an ‘honest’ mistake. Perhaps it was made honestly, but the net results are too convenient and very suspicious. After honest mistakes are made, honest people try to correct them, and not remain silently complicit to falsehoods generated by those mistakes.

    So the issue has been looked at, and all you can say is that it should be looked at. Is Jim Steele the wrong person to look at this? Do we need some high muckety-mucks to work on this before we can say that there is a problem here? It seems we have been down this road before:

  51. “Instead of promoting more respectful public debate, he in essence suggests Americans should hate “deniers” for thinking differently than Gore and his fellow CO2 advocates. He and his ilk are fomenting a new intellectual tyranny”.

    The tyranny Gore is fomenting isn’t new, and it certainly isn’t “intellectual”.

    He’s like Hitler making the case for the Aryan race.
    His semi-science is as crooked as can be, just like the Agenda (21) behind this hoax.

    We all know what happened to Hitler and his Third Reich, so giving it a second try by participating in the creation of a tyrannical One World Government is just plain stupid.

    The day will come when he will be charged with initiating and participating in a criminal conspiracy against humankind, treason against the US Constitution, fraud, financial crimes and racketeering.

    Present at his trial will be the entire staff and board of the BBC and the Church of England, whose money was burned in failed green-tech speculations and the Chicago Carbon Exchange, and his movie “An Inconvenient Truth” will be shown in evidence of the biggest fraud in history after the banking scheme.

  52. Nick Stokes says:
    September 25, 2013 at 7:43 pm

    Having the experts explain their homogenization rationale is not enough. Nick, the post tells us that they use a computer algorithm to make the adjustments and that these aren’t always justified. An inflection caused by the PDO should not be adjusted away, for example. I can see why this might have been done, given that Trenberth, an expert in these matters, didn’t even acknowledge the PDO to be a real issue, despite, or perhaps because of, skeptics trying to point out its importance for a decade at least. You know full well that WUWT and other quality science-based blogs don’t just pick up their debating points from the news media. They delve into the science, and this work has contributed most of the worthwhile climate science of the last decade. Skeptics aren’t being suitably acknowledged, but the chopping of climate sensitivity by two-thirds or more, the restoration of the LIA, the MWP and all the other warm periods, and the return of natural variability to its prominent place, vindicated by nature’s own reassertion on the subject, are now becoming the new “discoveries” claimed by IPCC scientists as their own. In the process, standards in the science have been raised beyond the competence of many of the proponents of CO2 warming, and papers with shoddy logic, statistics and unsupported conclusions have had to be reluctantly withdrawn, because not doing so went beyond the bounds of decency. Nick, the highly regarded Trenberth argued that the PDO was skeptic baloney in 2009, but now, needing to save something after 17 years of flat temps, he is putting it forward to explain the phenomenon.

    Tell me, Nick, why did the CO2 scientists accept shovelfuls of terrible papers without stepping in and criticizing them? Even Greenpeace and WWF grey literature by political scientists got accepted. Why? Because they supported the meme. This tells me why you accept, holus-bolus, the methods used to alter the historical temperature series. Even you know that the 1998 “high” was manufactured by pushing the hotter 1930s temperatures down. It was even done in 1998 by an underling of Hansen (I believe it was an FOI revelation; it was in a post on WUWT) because of the desperation to get the hottest, driest period of the century submerged below 1998. Nick, are you going to support the dwindling position of the CO2 control knobs until the end? Are you as strong and confident a CO2 climate-control guy today as you were, say, before 2009? Whether I get a straight answer regarding these things will tell me all I need to know about you.

  53. Nick Stokes says:
    September 25, 2013 at 6:27 pm

    ‘I don’t have any particular knowledge of those two stations. I’m saying that the adjustments were made, by stated algorithm, for reasons that have been extensively described in numerous papers, and those reasons should be looked at.’

    Any links to these numerous papers, please? Thank you.

  54. Janice Moore says:
    September 25, 2013 at 8:35 pm

    AND…. Gunga Din, to continue, #(:)) (just saw your 8:28pm post) I’ve not complimented you on this before, but WAY TO GO with that historic research and detective work. That you kept such fine records of all that is impressive.

    ===========================================================================
    Thank you, but I’m just a peon who got curious about all the global warming talk and thought to copy/paste a list of record temps for my area from my local NWS site into Excel back in 2007. As time went on, and after I found WUWT, I did it again every few years. If you saw my spreadsheet I doubt you’d call it a “fine record” (none of the years’ data paste cleanly into columns), but it is a record, and some of the “scratches” stand out enough that even I could see them.

  55. When you look through the window into the sausage factory that is “climate science”, as bad as you are afraid it is going to be, it’s worse, every time. People are rightfully cautious about suggesting a conspiracy, but it is becoming difficult to explain it otherwise.

  56. Nick Stokes: “I don’t have any particular knowledge of those two stations. I’m saying that the adjustments were made, by stated algorithm, for reasons that have been extensively described in numerous papers, and those reasons should be looked at.”

    Short form of above: “I refuse to answer the question. I demand you admire my handwaving!”

  57. Yep, David H., that dodgy methodology is already backfiring big time on them. Heh, heh.

    Jim, congratulations on a post which explains in plain English, with easily comprehended visual aids, some concepts that are not always understood by non-scientists. I note that the thrust of your post supports our host’s forthcoming paper, which I hope to see before I die. :)

    My knowledge of using statistics and data homogenisation, such as it is, is based more in areas like human surveys and economic models. These fields are quite different to temperature data evaluation in many ways. But, I learned very early on to ask the most searching, irritating, “dumb” questions about data collection and anything that was subsequently done to it. The tendency of lazy statisticians to assume that if your head was in the oven and your feet were in the freezer, your body temperature was “normal” came to light surprisingly often.

    Like you with your micro climate studies, what we really wanted to know was what was happening in a specific subset of the data, as well as the big picture. Delving into these similarities and differences can produce startling insights into crappy methodology, as in this case. Mixing good data with rubbish to spit out a number is endemic, sadly.

    More power to your arm, and I hope that you will continue to post here!

  58. @ Nick Stokes “It doesn’t say that it’s GHCN, and I don’t think it is. But it does say that it is non-homogenized.”

    True, my essay did not mention it was GHCN data, but the paper I referenced does. You might read it instead of simply pronouncing “I don’t think it is.” In the future I encourage you to delve more deeply into the facts before simply choosing what is convenient for your beliefs.

  59. “and thought to” {Gunga Din}

    And that, my brother in the battle for Truth, is what makes you exceptional. SO WHAT if your data presentation isn’t professional looking — who cares? You can find it; that’s all that matters.

    Wasn’t it Einstein who said, “… leave elegance to my tailor”?

    You go, Raj (you’ve been promoted from “peon” #(:)).

  60. Hey, Johanna,

    I hope you see this — here, the nights are getting quite chilly. The swallows have flown south and the Canada geese have arrived. Soon, the Trumpeter Swans will fly in from the NW, plaintively crying, “We’re here! We’re here!” As we head into fall here (about Lat. 45 in the NW corner of the U.S.A.), I have often thought of you folks, esp. you, in Australia as spring is getting underway. Tell me, what are you seeing (or hearing or smelling) that tells Australians that spring has begun? Sorry for the off-topic, but I so enjoyed talking about birds with you awhile back.

    Hope those next-door dogs aren’t a problem!

    Take care, down there,

    Janice

  61. UHI effects can be quite dramatic, as I found yesterday evening while driving along Interstate 20. In eight minutes I crossed from the somewhat built-up downtown of Shreveport to the outskirts of the suburbs east of town in Bossier City, a distance of about 7 miles. The ambient air temperature fell a whopping 10 degrees!

  62. Bill H says: “Think about it. The CRU (supposed to be a repository for world climate records) adjusts the data and then dumps the raw data which is forever lost. Climate-gate exposed just what the game is.”

    Bill, every political movement has many strange bedfellows. The dumping of the CRU data does indeed raise great suspicions. We taught students they should keep their lab notebooks as evidence to support their results if ever challenged. That CRU experts dumped raw data flies in the face of all professional logic, and in that case I too suspect the dumping was a move to hide other wrongdoing.

    That said, I believe such fraudulent acts are carried out by a minority. The adjustments are not always similar to what I have shown, so I think the problem was a one-size-fits-all method that was tainted wherever urbanization effects were strong. However, there is a legitimate criticism that their advocacy encouraged them to blindly accept adjustments that make no sense unless you believe CO2 is the root of all climate change. Such blindness will always happen because we are all blinded by our beliefs. Only respectful debate can free us of our illusions. For me the real crime is the attempt to stifle that debate, and the varied suggestions to criminalize skepticism. Science is a process of suggesting good and bad ideas that then must be rigorously tested. Trenberth, Mann and Gore and their followers are actively trying to stifle scientific debate by calling everyone who disagrees “deniers”. Gore’s call to “put a price on denial” and David Suzuki’s article urging us to deny the deniers the right to deny smack of nascent totalitarian politics. They defile good science, and such tactics should be condemned by the scientific community. I suspect most scientists are troubled by such tactics but remain quiet so as not to attract the advocates’ wrath and threaten their funding.
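    A toy illustration of how homogenizing a trend-free rural record toward urbanized neighbours can manufacture a warming trend (the nudge-toward-the-neighbourhood-mean rule and all numbers here are invented; the real pairwise homogenization algorithm is far more elaborate):

```python
import numpy as np

years = np.arange(1900, 2001)

# Invented anomalies: one rural station with no trend at all, and
# three urban neighbours whose records carry growing UHI warming.
rural = np.zeros(years.size)
urban = np.array([0.01 * k * (years - 1900) for k in (1, 2, 3)])

# Crude "homogenization": nudge the station halfway toward the
# neighbourhood mean wherever it diverges from its neighbours.
neighbourhood = urban.mean(axis=0)
adjusted = rural + 0.5 * (neighbourhood - rural)

# The flat rural record acquires a warming trend (deg per century).
trend = np.polyfit(years, adjusted, 1)[0] * 100
assert trend > 0.5
```

    Under these assumptions the adjusted rural series warms by about a degree per century even though the station itself measured no change at all, which is the urbanization-contamination concern raised above.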

  63. IBSMDAHARCOWATPTIHAEATV

    .
    .
    .

    ( in before Steve Mosher drops a hit and run comment of why altering the past temperatures is honest and ethical and then vanishes )

  64. @Nick Stokes

    The TOBS adjustments may actually have a decent justification. It is odd, however, that their net result is a colder past and a warmer present, with a 0.3-degree slope.

    What the author is referring to, however, is the homogenization process, which is very poorly explained in the USHCN article. It also is not represented in their handy little graphs. I’d be interested to know what the net results of the changes actually are…

  65. The idea that vetted observers in Reading, Massachusetts, back in the 1930s and 1940s were misreading peak temperatures on the order of several degrees Centigrade is so untenable as to require that either the entire data set and all such similar sets be discarded, or the “adjustments” be rescinded.

  66. “Janice Moore says:

    September 25, 2013 at 9:20 pm”

    Here in Sydney, Aus, we are experiencing a record-breaking spring. We already had the warmest winter on record, apparently. We also have a new nationwide record average temperature…all determined by the use of 112 thermometers, one for every ~68,500 square kilometres or less. No weather reporter mentions wind direction with these records, but they always compare a measured temperature with a calculated average, stressing any measurement that is ABOVE that average.

  67. If a station has changed its time of observation during the historical observing period, then an adjustment is necessary. Unfortunately it is difficult to know the proper size of the adjustment, since this is site and season dependent. The adjustment can often be as large as the signal you are trying to resolve, which in this case is the change in temperature over many years. For instance, if a site changed its TOBS from 5 PM to 8 AM, that site will report cooler temperatures for no other reason than they changed their time of observation. A 5 PM observer will often sample a warm daily maximum temperature twice, once on the day it occurred and again the next day. This leads to a warm bias in reported temperature which is removed when the TOBS changes to 8 AM. For this reason I prefer to work with data from sites that report calendar day max/min temperatures so I don’t have to deal with a rather large and uncertain TOBS correction.
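
A toy simulation can make the double-sampling bias described above concrete. Everything here is invented (a smooth sinusoidal diurnal cycle with one anomalously hot day), so it is a sketch of the mechanism, not of any real station: a max-registering thermometer reset at 5 PM books the hot day’s lingering evening warmth as the next day’s maximum, while an 8 AM reset counts the hot day only once.

```python
import math

# Invented hourly temperatures: a sinusoidal daily cycle peaking at
# 15:00, with day 1 anomalously hot (+10 all day).
def temp(day, hour):
    base = 15 + 10 * math.sin(math.pi * (hour - 9) / 12)
    return base + (10 if day == 1 else 0)

hours = [(d, h) for d in range(4) for h in range(24)]

def recorded_maxima(reset_hour):
    """A max-registering thermometer read and reset once daily."""
    maxima, window = [], []
    for d, h in hours:
        window.append(temp(d, h))
        if h == reset_hour:
            maxima.append(max(window))
            window = [temp(d, h)]   # the reset carries the current reading
    return maxima[1:]               # discard the partial first window

pm = recorded_maxima(17)   # 5 PM observer
am = recorded_maxima(8)    # 8 AM observer
print([round(t, 1) for t in pm])   # [35.0, 33.7, 25.0]: hot day booked twice
print([round(t, 1) for t in am])   # [25.0, 35.0, 25.0]: hot day booked once
```

The 5 PM series averages warmer than the 8 AM series from identical weather, which is exactly why a recorded switch from afternoon to morning observation produces a spurious cooling step if left unadjusted.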

  68. Hi Janice

    Spring has sprung here in Canberra, and the parrots have all gone bush, where there are rich pickings at this time of the year. It seems that there is wisdom in the old rhyme:

    Spring is sprung
    Da grass is riz
    I wonder where them boidies is?

    “Da liddle boid is on da wing.”
    “How absoid!
    Da liddle wing is on da boid!”

    This was taught to us in 4th class as an example of a New York accent.

    Anyway, there are not a lot of birds about just now; they will return as the weather heats up and they want a drink and a bath at my place. What’s happening at yours?

    I’m sure that Jim won’t object to this interlude.

    Best – J

  69. Nick Stokes says:
    September 25, 2013 at 6:27 pm
    I’m saying is that the adjustments were made, by stated algorithm, for reasons that have been extensively described, in numerous papers, and those reasons should be looked at.
    ============
    The excellent article did just that and explained how the computer algorithms behind the adjustments were faulty. They saw urbanization as a natural process not needing adjustment, and the PDO as an unnatural process needing adjustment.

    The author has gone on to argue that this was not a result of malicious intent but of ignorance. At the time, climate scientists did not realize that there were cyclical natural processes, so they did not allow for them in the computer code.

    I would add to that the experimenter expectation effect. The adjustments coincided with what the researchers believed was happening – temperatures were increasing due to human activity – so they didn’t question the accuracy of the adjustments.

    Had the adjustments shown that temperatures were not increasing, that instead they were cyclical, this would have gone against what was expected, and the assumption would have been that the adjustments were faulty and in need of revision.

    So, over time the results meet expectations, whether they are correct or not. And since the effect is subconscious, the researchers involved are powerless to detect it without extremely careful experimental design.

  70. Conspiracy to falsify the temperature record?

    Or, no conspiracy — just incompetence?

    You decide.

    But notice that every USHCN temperature ‘adjustment’ goes in the most alarming direction…

  71. “Nick Stokes says:
    September 25, 2013 at 4:12 pm
    [...] And the adjustments aren’t all homogeneity. At least as large a component is TOBS. This is simply an adjustment that arises where the time of observation has undergone a recorded change, which has a predictable effect on measured minmax temp. [...]”

    I read your linked site. Apparently the time-of-observation adjustments aren’t based on actual recorded data but are just estimated adjustments based on assumptions about time-of-observation variations, which has essentially the same problems as the homogeneity assumptions.

    Boy do I ever agree with you, Jim Steele. If you have the capability, keep posting such articles; they need to be held up to their eyes weekly. This is why I keep saying temperature CHARTS, not the actual temperatures, are never going to return to 70′s levels even if they actually do in reality.

    Manufactured artificial warming.

  73. Janice, what we have here in Banora Point (just south of the Queensland border) is “WEATHER”. When the wind blows from the south, it is cool. When it blows from the north, it is warm. We are officially in spring – for some reason Australia works on 1 September, etc, instead of the equinoxes and solstices to determine the season.
    These prevailing winds can last for a fortnight or more. Sometimes it rains, sometimes it doesn’t. And sometimes it absolutely buckets down, like when in June a few years ago we had 530 mm in 36 hours. And we had a few years of drought, so several of the states built desalination plants, because the “Climate Commission” said the dams would never fill again. When the desal plants were ready, it rained (they haven’t been used since!), and the operators of a dam west of Brisbane had to let water out of the dam as they were worried that the water level would rise so high as to overtop the dam – which might have been too much for the strength of the dam. Unfortunately, when the water reached the valley below, it met a bit of even heavier rain – disaster and then floods in Brisbane. See http://en.wikipedia.org/wiki/2010%E2%80%932011_Queensland_floods
    for comment. Luckily, unlike New York and New Orleans, Queensland Rail parked its electric trains on high ground to wait it out!
    We have seen the photos of water flooding Colorado – a well known “desert”! Hope that the missing 500 people have been found safe by now.

  74. I just checked Orland in the Historical Observing Metadata Repository (HOMR) and it shows no change in ObsTime since 1931. The ObsTime seems to have remained at 0800 LT throughout the period from 1931-2013:

    http://www.ncdc.noaa.gov/homr/#ncdcstnid=10100110&tab=PHR

    The metadata shows, and SurfaceStations.org documentation confirms, that Orland has not changed over to the electronic Maximum/Minimum Temperature Sensor (MMTS), so no adjustment should be necessary for a change of instrumentation. It looks like the ObsTime changed from 1700 LT to 0700 LT in 1934, meaning an ObsTime adjustment should have been applied to the portion of the temperature record preceding 1934. Here is a raw observer form from Orland which shows the ObsTime in Sept 1929 was 1700 LT:

    http://www1.ncdc.noaa.gov/pub/orders/IPS-8F4F6DD6-5EFE-4FA5-BDC7-B38262D85EEA.pdf

    As is often the case, the metadata from HOMR seems to conflict with the ObsTime found on the raw observer forms. Yes, working with climate data is messy given the sparse and conflicting metadata available.

  75. BTW, re accuracy of data. Many years ago, the Australian Met Office got worried about the data received from one of the outback weather stations. It bore little or no resemblance to the data from other stations, which were probably a few hundred miles away, but it should have been consistent. Eventually a man was sent out to check the instruments. When he arrived, he found that the lady’s husband had died during the previous year, and she did not know how to read the instruments. As she needed the honorarium her husband had received for telegraphing in the data, she just telegraphed in the data for the previous year. I believe that the gentleman decided the best thing to do was to teach her how to use the instruments correctly. All well from then on.

  76. I’m not sure that I want to go there, Jim. :)

    However, for the more literal minded, Canberra is awash with parrots during autumn and winter, principally because it is an artificially created national capital in the middle of bare sheep paddocks. It has millions of European trees which produce nuts and seeds, plus lots of well watered grass, in an otherwise bleak landscape (from a parrot’s point of view).

    There are also some artificial lakes, and thousands of acres of heavily planted suburban backyards, which many other kinds of birds thrive in. Because summers are very hot and dry, just having a daily refreshed birdbath and some sympathetic plantings in my modest backyard brings me a daily nature documentary of birdlife in summer.

  77. wayne says: “If you have the capabilities keep posting such articles, they needs to be held up to their eyes weekly.”

    I intend to adapt an essay from the book every few weeks. I wrote the book to be scientifically robust yet readily understood by the layperson. So far, comments on Amazon suggest I succeeded. My hope is the book will be read widely by the public as well as used in environmental studies classes. I just got word it has already made the suggested reading list for one such university class.

  78. Re Mark Albright

    If the OBS Time changes, you note that a break in series occurred at that date, and DO NOT MODIFY the previous data. The same applies if there is a change of location, or if the immediate vicinity changes from open grassland to a suburban car park. Why would only climate persons (cannot call them scientists) modify previous data? No financial, unemployment or other record that I know of has ‘modified data’ – there is always an indication of a “break of series” when the definitions or something else has changed.
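
A minimal sketch of the “break of series” bookkeeping this comment describes, with invented station values and an invented 1934 break year: the record is split at each documented change and each segment is summarised separately, while the raw values are never modified.

```python
def segment(years, temps, break_years):
    """Split (year, temp) pairs at documented breaks; raw values untouched."""
    cuts = sorted(break_years)
    segments, current = [], []
    for y, t in zip(years, temps):
        if cuts and y >= cuts[0]:
            if current:
                segments.append(current)
            current = []
            cuts.pop(0)
        current.append((y, t))
    segments.append(current)
    return segments

years = list(range(1930, 1940))
temps = [14.1, 14.3, 14.0, 14.2, 15.2, 15.1, 15.3, 15.0, 15.2, 15.4]

# A documented change (say, a station move) in 1934 splits the record:
segs = segment(years, temps, [1934])
for seg in segs:
    mean = sum(t for _, t in seg) / len(seg)
    print(f"{seg[0][0]}-{seg[-1][0]}: {len(seg)} values, mean {mean:.2f}")
```

Any trend is then assessed within segments; the apparent 1 °C jump between them is attributed to the documented change, not to climate, and no earlier value is rewritten.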

  79. Mr Steele, your Fig. A (their figure 5b, not 1b) is in fact GDCN, not GHCN; a minor difference perhaps for the US, but NOAA do say “Unlike GDCN, however, GHCN-Daily contains numerous data streams for updated data that enhance the latency of the dataset through rapid and frequent updates. Relative to GDCN, GHCN-Daily also contains a much more comprehensive set of QA checks as well as a more expansive set of historical data sources.”, so there is a difference. Also, the graph is the average of a graph the authors say offers “only a rough and visual assessment of the change points. Rigorous detection of change points in a time series needs to go through a statistical procedure”. My main point, though, is that you use that graph and say “The results seen in Figure A (their figure 1b) (sic) suggest recent climate change has been more cyclical”, whilst they say “such data sets must be homogenized before they can be used for climatological analysis”. Is there a reason you can make a climatological judgement from that graph when the authors you cite say specifically that you can’t?

    TIA Nick

  80. From Nick Kermode on September 25, 2013 at 11:04 pm:

    Mr Steele, your Fig. A (their figure 5b not 1b) is in fact GDCN not GHCN…

    Reference in question, Shen et al (2011), “The twentieth century contiguous US temperature changes indicated by daily data and higher statistical moments”

    Available here on Springer for $40 (has abstract, references, but no supplementary info):

    http://link.springer.com/article/10.1007/s10584-011-0033-9

    Free version:

    http://www-rohan.sdsu.edu/~shen/pdf/shen_climc_2011.pdf

  81. Jim Steele,

    You might find our recent paper on urbanization and homogenization interesting: http://onlinelibrary.wiley.com/doi/10.1029/2012JD018509/abstract

    It turns out that pairwise homogenization tends to do a good job of removing urban-correlated biases even under very stringent definitions of urbanity and when only rural stations are used in the homogenization process. It’s also worth pointing out that the net effect of CONUS homogenization is actually to lower TMin relative to TOBS-only adjustments.

    As far as general reasons for homogenization goes, they are covered pretty well here: http://rankexploits.com/musings/2013/a-defense-of-the-ncdc-and-of-basic-civility/

    Similarly, MMTS transition biases are one of the larger contributors to the increase in CONUS max temps post-homogenization: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

    In general, raw data is problematic due to the fact that weather stations were not designed to be climate stations. Over the last century most have moved two or three times, had at least two different instruments, have had time of observation changes, and other issues whose impact can dwarf the background climate signal. Some of these are true biases (e.g. instrument changes), while others are changes in true local condition that are not representative of the broader regional climate (e.g. urbanization). One of the current challenges is that homogenization cannot effectively differentiate between the two.

    You might also find the Berkeley homogenization process worth a look. We provide charts showing specific breakpoints as well as difference series used to detect step changes. Here is Reading, MA, for example: http://berkeleyearth.lbl.gov/stations/163049. The station has had three documented station moves and two documented TOBS changes in its history, as well as a number of other notable undocumented step changes relative to surrounding stations.
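
The step-detection idea behind the pairwise approach Zeke describes can be sketched in a few lines. The series are invented, and the detector below (pick the split that maximizes the shift in mean between the two halves of a target-minus-neighbour difference series) is a bare-bones stand-in for the far more elaborate published algorithms:

```python
# Differencing a target station against a neighbour removes the shared
# regional signal, so a station-specific step (instrument change, move)
# stands out in the difference series.
def best_breakpoint(diff):
    """Return (index, shift) for the split maximizing the mean shift."""
    best_i, best_shift = None, 0.0
    for i in range(2, len(diff) - 2):
        left = sum(diff[:i]) / i
        right = sum(diff[i:]) / (len(diff) - i)
        if abs(left - right) > best_shift:
            best_i, best_shift = i, abs(left - right)
    return best_i, best_shift

regional = [0.5 * t for t in range(20)]            # shared regional trend
neighbour = regional
target = [r + (1.0 if t >= 12 else 0.0)            # +1.0 step at t = 12
          for t, r in enumerate(regional)]
diff = [a - b for a, b in zip(target, neighbour)]

print(best_breakpoint(diff))   # (12, 1.0): the step is found, the trend is not
```

The shared trend cancels entirely in `diff`, so only the station-specific step is flagged; this also shows why, as the essay argues, the method stumbles when the “regional signal” itself is contaminated, e.g. by widespread urbanization of the neighbours.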

  82. jorgekafkazar says:
    September 25, 2013 at 4:06 pm

    Al Gore: still more of WHAT LYSENKO SPAWNED.

    Hey, that is a perfect anagram of Stephan Lewandowsky …

    I stand corrected. According to tonight’s weathercast on Channel 10, Australia has had its warmest 12 MONTHS on record.

  84. for some reason Australia works on 1 September, etc, instead of the equinoxes and solstices to determine the season.

    That’s the “meteorological” way of counting the seasons. It gets the coldest months into the winter and the warmest months into the summer.

  85. Here changing cloud cover was not an issue. Dr. Thomas Karl, who now serves as the director of the NOAA’s National Climatic Data Center . . . .

    He’s not a Dr.–he has no PhD.

    I found this article to be a very clear and concise summary of a topic that is central to the debate, and I concluded that humans are having a truly major impact on our climate – by using data homogenization techniques. Perhaps Nick Stokes could come up with a new term to describe it…?

  87. rogerknights says:
    September 26, 2013 at 12:15 am

    for some reason Australia works on 1 September, etc, instead of the equinoxes and solstices to determine the season.

    That’s the “meteorological” way of counting the seasons. It gets the coldest months into the winter and the warmest months into the summer.
    ————————————————————————
    Yup. Where I live, the coldest month is July, and the hottest is February. Still, no matter where you draw a line, there are always arguments pro and con.

    Re my earlier post regarding the relative absence of birds in early Spring, I should add that a lot of them are staying close to home because they are flat out feeding their chicks. In Summer, they reappear, often with offspring in tow (especially magpies and crested pigeons, which are very familial).

  88. “You say that the USHCN adjustments are unwarranted, but you say very little about what they actually are”

    Doesn’t really matter, does it Nick? These are one-sided adjustments made for the wrong reason (homogenization, where the good sites are adjusted and not the bad), not the typical adjustments that, as a group, do not affect the mean or trend and are statistically symmetric: one a little high, one equally a little low at the same x, never tainting the overall dataset, as these do. I have personally run an analysis on BEST data, and if it’s the best we have, it is a shame. It seems the real BEST dataset would be to just drop back to the raw data and let any variances cancel themselves out.

  89. Oops, should be “the hottest is January” above. The point is, the seasons reflect the coldest and hottest months in the centre of Summer and Winter.

  90. A couple of years ago WUWT carried my article which dealt with exactly the inconsistency of historic temperatures mentioned in this excellent article by Jim Steele. I provide a few excerpts below and would urge people to read the source book from which they are derived

    “This material is taken from Chapter 6 which describes how mean daily temperatures are taken;

    “If the mean is derived from frequent observations made during the daytime only, as is still often the case, the resulting mean is too high…a station whose mean is obtained in this way seems much warmer with reference to other stations than it really is and erroneous conclusions are therefore drawn on its climate, thus (for example) the mean annual temperature of Rome was given as 16.4c by a seemingly trustworthy Italian authority, while it is really 15.5c.”

    That readings should be routinely taken in this manner as late as the 1900′s, even in major European centers, is somewhat surprising.

    There are numerous veiled criticisms in this vein;

    “…the means derived from the daily extremes (max and min readings) also give values which are somewhat too high, the difference being about 0.4c in the majority of climates throughout the year.”

    Other complaints made by Doctor von Hann include this comment, concerning the manner in which temperatures are observed;

    “…the combination of (readings at) 8am, 2pm, and 8pm, which has unfortunately become quite generally adopted, is not satisfactory because the mean of 8+2+ 8 divided by 3 is much too high in summer.”

    And; “…observation hours which do not vary are always much to be preferred.”

    That the British- and presumably those countries influenced by them- had habits of which he did not approve, demonstrate the inconsistency of methodology between countries, cultures and amateurs/professionals.”

    http://wattsupwiththat.com/2011/05/23/little-ice-age-thermometers-%E2%80%93-history-and-reliability-2/

    —— ——- —-
    The long and the short of it is that we know the past climate, and its direction of travel, only in general terms, and even that becomes obscured when already fragile readings are then substantially altered, for whatever reason may be considered valid at the time. To believe we know temperatures from way back that are accurate to tenths of a degree is sheer hubris.
    tonyb
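
Von Hann’s two complaints quoted above are easy to reproduce on a toy diurnal curve. The numbers below are invented (a sinusoid clipped to give the long, flat night real stations show), so this only illustrates the direction of the biases, not their historical size:

```python
import math

# Toy diurnal cycle: sinusoid peaking at 15:00, clipped at night.
def hour_temp(h):
    return 20 + 8 * max(math.sin(math.pi * (h - 9) / 12), -0.5)

hourly = [hour_temp(h) for h in range(24)]

true_mean = sum(hourly) / 24                           # true 24-hour mean
three_obs = (hourly[8] + hourly[14] + hourly[20]) / 3  # 8am + 2pm + 8pm scheme
minmax    = (max(hourly) + min(hourly)) / 2            # max/min thermometer mean

print(round(true_mean, 2))   # 20.86
print(round(three_obs, 2))   # 22.58: "much too high in summer"
print(round(minmax, 2))      # 22.0: also somewhat too high
```

Both historical estimators overshoot the true 24-hour mean, as von Hann complained; splicing records built from different observing schemes therefore bakes in offsets comparable to the trends being sought.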

    Zeke Hausfather wrote “Here is Reading, MA, for example: http://berkeleyearth.lbl.gov/stations/163049. The station has had three documented station moves and two documented TOBS changes in its history, as well as a number of other notable undocumented step changes relative to surrounding stations.”

    Your link is confusing. The Berkeley homogenization data refers to a Reading WB located at 40.38 N, 75.946 W. In contrast, the USHCN Reading station I referred to is located at Latitude 42.5242, Longitude -71.1264. I believe you mistakenly confused Reading, Pennsylvania with Reading, Massachusetts. Do you have a link where we can compare the Reading, MA in my graph?

    I had read your paper a while ago and will look at it again to refresh my memory on the details. But I was curious about the mechanism and methods by which your paper suggested that a move from an urban center to an airport would have a cooling effect. You wrote “Moreover, cooling biases can be introduced into the temperature record when stations move from city centers to more rural areas on the urban periphery. This may have occurred, for example, during the period between about 1940 and 1960 when stations were moved from urban centers to newly constructed airports.”

    When I step barefoot from a grassy park to the pavement of the parking lot, I notice an intense increase in temperature. Did you specifically measure and compare temperatures at each airport and the weather station’s previous location? Although urban centers are typically warmer, conditions immediately surrounding the weather station may be more important than its proximity to the urban center. I suspect the difference between a park’s micro climate and the airport’s would be huge, and it would be insignificant whether or not you characterized the airport to be in an urban or rural setting.

  92. I was always taught to go to the basic data, as recorded, preferably in the original notebook. This way one could identify any corrections, misread figures etc.
    If there was a change in instrumentation or location (including any change in surrounding vegetation), then a substantial time of overlap is necessary to get a more accurate figure for subsequent ‘adjustments’. Otherwise there was a real risk of ‘cooking the books’ to get an answer that pleased the teacher/boss/paymaster.
    Has anyone homogenised the global temperature data based only on stations, such as Orland, that do not seem to have undergone any substantial changes, in order to reduce any urban heat island effect? I seem to remember that Dr Jones refused to release temperature data because they might try to prove the CRU’s conclusions wrong!

  93. Jim Steele

    See my comment above yours regarding the uncertainty of the historic temperature record.

    If you really want to see in forensic detail all the adjustments that need to be made to raw data then read the book title referenced here ‘Improved Understanding…’

    http://www.isac.cnr.it/~microcl/climatologia/improve.php

    It is a very difficult read and I had to borrow it three times from the Met Office Library in order to fully absorb it. Even then, whether we end up with the TRUE temperature recorded on any one day in a specific city in any particular century remains open to question.

    Tonyb

    Zeke, you provide links that take our attention away from what was presented here. Let’s focus on what is here in front of all of us. How do your analyses explain and justify the difference between the way Marysville and Orland were adjusted? Then people can decide whether you or Berkeley have a valid argument that should be pursued further.

  95. jim Steele says: September 25, 2013 at 9:13 pm
    ” True, my essay did not mention it was GHCN data but the paper I referenced does. You might read it instead of simply pronouncing “I don’t think it is.” In the future I encourage you to delve more deeply into the facts before simply choosing what is convenient for your beliefs.”

    No, in fact the paper (Shen, Sec 2) says:
    “We used the US National Climatic Data Center’s GlobalDaily Climatology Network (GDCN) v1.0 (Gleason 2002).”

    Quite different.

  96. @Nick Stokes — “If you don’t think they are experts, why look at their data? ”

    Two things:

    1) “Science is the belief in the ignorance of experts.” — Richard Feynman

    2) What data? There’s the data that folks testify was collected from actual instruments on a given date and time. And then there’s the counterfactual computer game of “This is what the data might look like if we had instruments we don’t have.” That latter one? That’s not data in anyone’s dictionary. It’s the elevation of Mario Bros. as best practices for household plumbing.

  97. jim Steele says: September 26, 2013 at 1:26 am
    “Do you have a link where we can compare Reading MA.in my graph?”

    Reading MA goes by the name of Chestnut Hill (with Reading given as an alternative). It is here. BEST also identified a breakpoint at 1970. There was a station move in 1960.

    Socorro is interesting. BEST also identified a break at 1968, which is very obvious on your graph. And there was a station move then. Looks like the algorithm was onto something.

  98. Dudley Horscroft:

    At September 25, 2013 at 10:59 pm you respond to Mark Albright and say to him in total

    If the OBS Time changes, you note that a break in series occurred at that date, and DO NOT MODIFY the previous data. The same applies if there is a change of location, or if the immediate vicinity changes from open grassland to a suburban car park. Why would only climate persons (cannot call them scientists) modify previous data? No financial, unemployment or other record that I know of has ‘modified data’ – there is always an indication of a “break of series” when the definitions or something else has changed.

    There is an underlying issue that so far has been ignored in the thread.

    The individual temperature measurements were obtained as meteorological data but they are being adjusted to enable their combination as climatological data of regional (e.g. global, hemispheric, etc.) “average” temperature. There is no definition of the metric obtained by the combination and no possibility of a calibration for that metric.

    The process to combine the meteorological data alters the empirically obtained temperature data (as Jim Steele clearly shows). Of itself this only has importance if the unaltered data is discarded so is lost for future use. Any example of such discarding of unaltered data is a severe – and elementary – error which it is hard to imagine a competent scientist would do except egregiously. Several comments in this thread have expressed suspicion at the adjustments of data, but it is the discarding of unaltered data which should be cause of righteous indignation.

    And that discarding returns us to the really serious underlying issue. Perhaps one day it may be possible to define e.g. mean global temperature and to devise a calibration standard for it. However, such metrics are NOT now defined and, therefore, each team which provides time series of such data defines the metric in its own way and alters the definition almost every month.

    davidmhoffer provides a good explanation (at September 25, 2013 at 7:57 pm) of one reason the definition of the metrics is changed each month. If the definition of a datum were not altered then its value would not alter, but it does. See e.g.

    The basic problem is that the climate temperature metrics are not defined so people can ‘process’ the meteorological data in any arbitrary way to obtain a climate datum they desire to compute at any time. This garbles the meteorological data (and the unaltered data is often discarded) and provides completely arbitrary climatological data.

    A group of us attempted to address this problem over a decade ago but our attempt was prevented; see especially Appendix B of this

    http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm

    Richard

  99. I consider Al Gore the living embodiment of Barquan Blasdel (lacking only the mental discipline required for success).

    Perhaps Al Gore knows that political influence can come by manipulating people’s religious tendencies. Gore has had contacts with various religious, psychological and spiritual organisations, and when I watched “An Inconvenient Truth” I couldn’t help but notice the religious heartstrings being activated. The psychology says that “religion” is a mental structure which can be filled with content from any mainstream religion, but it can also be secular, filled with other material, like environmentalism. Not that the environment isn’t real, just that the environment can become a secular religious conviction. We’ve inherited a monotheistic us-v-them, one-true-way, good-v-evil, saint-v-sinner culture, and to some extent we all still have it in us, waiting to be activated.

  101. Excellent contribution, but the author is being just a little too kind to the adjusters. When it comes to this subject, I have become convinced, by way of numerous examples seen, that one should not put down to ignorance that which can be adequately explained by a good little old conspiracy. (Apologies to the original version)

  102. Nice job, Jim. Hope the book takes off.

    This comments thread is as good a read as the paper itself. Not unusual for WUWT.

  103. richardscourtney says:
    September 26, 2013 at 2:10 am

    That’s correct, of course – the temperature dataset is in effect ‘undefined’; if it were ‘defined’, they would not be able to keep adjusting its contents willy-nilly to suit the agenda!
    The sooner folk get away from treating these datasets as ‘gospel’ the better – but in the meantime they are all we have to work with and, worse still, the data are under the control of the ‘climate data gatekeepers’ and cannot be thoroughly questioned, as per Jim Steele’s examples.

    And of course, we are still forgetting the elephant in the room – the actual accuracy of the thermometers and reader/human error, especially in earlier data. In short, we are back to the signal-to-noise problem: we simply cannot be assured of detecting a signal over the last 150 years within the noise, natural variation and other errors combined. Anyone who believes this is feasible is in a dream world. Best (pun) statistical analysis or not, the bottom line is that the data are ‘dirty’ (they contain a lot of noise) and span an awful lot of changes, so any form of estimation (and adjustment) merely adds to the errors involved.
    The temperature datasets have been CONSTRUCTED – that is the best description – and cannot be assumed to reflect reality or fact. At best (pun again) they are but a tiny ‘indicator’ and should certainly not be used for earth-shattering policy decision-making… that’s my view and I’m not going to be changing it anytime soon!

  104. Kev-in-Uk:

    Thankyou for your post at September 26, 2013 at 3:34 am

    http://wattsupwiththat.com/2013/09/25/unwarranted-temperature-adjustments-and-al-gores-unwarranted-call-for-intellectual-tyranny/#comment-1427080

    in reply to my post at September 26, 2013 at 2:10 am

    http://wattsupwiththat.com/2013/09/25/unwarranted-temperature-adjustments-and-al-gores-unwarranted-call-for-intellectual-tyranny/#comment-1427020

    You say and expand on

    That’s correct, of course – the temperature dataset is in effect ‘undefined’; if it were ‘defined’, they would not be able to keep adjusting its contents willy-nilly to suit the agenda!
    The sooner folk get away from treating these datasets as ‘gospel’ the better – but in the meantime they are all we have to work with and, worse still, the data are under the control of the ‘climate data gatekeepers’ and cannot be thoroughly questioned, as per Jim Steele’s examples.

    Yes, of course you are right: I agree that the measurement errors have importance. However the measurement errors are a trivial matter in the context of the issue I raised.

    As I said, the metrics constructed from those measurements have no definition and they have no possibility of calibration. Hence, each team that provides such a construct ‘adjusts’ the measurement results and combines the ‘adjusted’ results in a manner of their choosing. And they each choose to do that in a different way most months.

    In this circumstance, the accuracy and precision of the original measurements have little or no relevance because the original measurement results are ‘adjusted’ (i.e. changed) in an arbitrary manner many times.

    When the original measurements are subjected to arbitrary alterations, is the result science?

    Richard

  105. Nick Stokes, please write “the earth is not a climate model” 5,000 times whilst beating yourself over the head with a hammer.

  106. A thousand years ago lay people (peasants) were not allowed to read the Bible. They had to believe it, they had to accept what was preached about it, but they could not read it until they had been properly taught just how to read it.
    Evidently it is the same with climate data. Like milk it must be pasteurized and filtered before it is safe for public consumption.

  107. Stephen

    Many thanks for your comment. It’s OK thanks – WordPress asked me to set it up, but I use my own specialist site or normally get other sites to publish my material.

    I certainly have no intention of policing an active blog. How people like Judith or Anthony find the time I don’t know.

    tonyb

  108. Excellent piece. It takes down the whole climate science edifice piece by piece. I have certainly been taught to check rigorously that there are no systematic errors, either in my experimental equipment or in any analysis methods used. It seems this homogenisation was performed on the basis of an incorrect assumption and blindly applied autonomously by a computer. Where were the scientists in this, checking and validating their assumptions and processes?

  109. richardscourtney says:
    September 26, 2013 at 4:03 am

    Yes, I must agree again – but they do tend to use measurement errors, etc., as a ‘cover’ for the adjustments too – so within that framework, the ‘errors’ are indeed important.
    As I have said before, until someone manually goes through EACH and EVERY set of RAW station data, using a defined set of rules for ‘analysis’ and ‘interpretation’ (ignoring any current computerised algorithms) – breaking station data up into appropriate ‘segments’ as required, comparing observer records, etc., etc. – no-one can ‘estimate’ the amount of corrections, IF ANY, that may (or may not) be required.
    Even then, that single set of station data remains unique – and does not constitute an equivalent ‘yardstick’ for a nearby station (e.g. for homogenisation purposes), at least not without doing the same for every other nearby station, whereupon it may be apparent that local microclimate-type factors exist, etc., etc.
    I fail to see how any computer program can analyse the raw data as effectively as a knowledgeable human, manually cross-checking suspect data as required (for example, with local press reports?), etc. It follows that any constructed dataset without this kind of careful preparation (as opposed to some computer-run algorithm) is likely to be worthless!

  110. Good piece, Jim.

    With regard to UHI, you don’t need to be a climate scientist to observe it: anyone with a modern car fitted with an external temperature gauge can experience the effect simply by keeping an eye on it while driving around normally.

    I can assure any doubters that even in the UK the effect can be very marked indeed; I have seen differences of 6°C and higher between open country and the environs of cities such as London and Manchester.

    Even small towns and villages can generate a couple of degrees.

  111. ***
    jim Steele says:
    September 25, 2013 at 10:21 pm

    Johanna, If you are going to add an interlude, when you mention “lots of “boids” around…
    ***

    She must be a closet New Yorker…:)

  112. 1. I agree with most here that there’s no way that the “adjustments” were honestly made, but I understand why you wrote your paper in a way that gives them the benefit of the doubt.

    2. Although I can’t claim to understand everything you’ve written, yours is one of the clearest papers I’ve read about the issue of adjustment of the historical temperature record, so thank you.

    3. The part I found most interesting, and in a way most disturbing, was your initial point that you had to decide which temp records to use when analyzing a local, not global, climate issue. By messing with the data, as I believe “they” almost certainly have, they corrupt science at all levels, not just the global level. Who knows how many studies will now generate inconclusive, or even dead wrong, results because the researchers rely upon the adjusted data? (as someone like Nick Stokes presumably would, for example….sorry, Nick, but you’ve got to admit this is an interesting paper as it relates to homogenization of the data.)

    4. Perhaps the “global” scientists will eventually be brought to heel by true scientists attempting to research, and make sense of, issues in their own, localized, areas? What would 100 local sea-level researchers conclude separately, for example, and how would their combined results compare with what the “global” scientists are telling us about the global sea level?

  113. Nick Stokes says:
    September 26, 2013 at 1:55 am

    Socorro is interesting. BEST also identified a break at 1968, which is very obvious on your graph. And there was a station move then. Looks like the algorithm was onto something.

    Eyeballing the raw data on your link, it appears that Socorro had a reasonably smooth temperature trend that was then adjusted upwards based on disagreement with surrounding measurement(s), and that the error was in the surrounding measurements. There were no obvious discontinuities in the BEST Socorro raw graph, but there were in the comparison graph.

    I think that’s an algorithm fail.

    Also, your comments to others are very much along the lines of ‘Read the Bible.’ What does that say about the nature of Climate Science?

  114. Jim Steele,

    Nick identified the correct station for Reading MA (the difference in name was throwing me off).

    As far as cooling biases due to moves from city centers to airports/waste water treatment plants go, a disproportionate number of urban stations pre-1940s were located on rooftops, and even when they were not, siting concerns were not paramount. While there are certainly going to be some cases where airport locations would be warmer than the prior urban center location, these will be the exception rather than the rule.

    That said, airports are not free of their own UHI concerns. If you want, you can look only at non-airport rural stations, though the trends are not particularly different post-1950: http://rankexploits.com/musings/2010/airports-and-the-land-temperature-record/

    I think the fundamental issue here is the assumption (by NCDC and Berkeley) that climate changes are highly spatially correlated, and that any significant local variance not reflected in other nearby stations is a non-climatic factor. That’s not to say that local variations might not reflect a real local change (e.g. through urbanization, vegetative cover changes, a tree growing over the instrument, etc.). Rather, these local variations are not indicative of changes over the broader region, so spatially interpolating without homogenization will result in biased estimates of broader regional climate change. For specific local temperature work (e.g. ecological studies), raw data may in some cases be preferable to use, though you have to be quite careful. For example, I was looking at heat waves in the Chicago area and discovered a whole slew of days in the ~40°C range back in the 1930s at the Midway Airport station, with nothing even remotely close to that thereafter. I dug around a bit and found out that prior to 1940 the instrument was located on a black rooftop!

    We (Berkeley Earth) are doing a paper for the AGU this year (and, later, for publication) regarding a new quarter-degree-resolution homogenized CONUS dataset we’ve created. As part of that, we are analyzing the spatial coherence of temperature trends compared to unhomogenized products (e.g. PRISM), satellite data (RSS and UAH), and reanalysis products (MERRA and NARR). While the analysis is not complete, we’ve found that the spatial structure of warming from 1979 to 2012 in the homogenized Berkeley product is quite similar to that of the satellite record, which has nominally complete CONUS spatial coverage, and quite different from the unhomogenized PRISM product, which tends to show stations with dramatically different trends within a few miles of each other. I’ll send you a copy once we formally present it.

    Hope that helps,

    -Zeke

  115. scarletmacaw,

    Which is more likely: the surrounding 50 stations all had step changes at the same time while Socorro remained unchanged, or Socorro had a step change not seen at the surrounding 50 stations? It’s worth pointing out that the Socorro station has moved at least 5 times in its history, and likely more, as documentation tends to be somewhat shoddy prior to 1950.

    Difference series with surrounding stations often reveal step-change inhomogeneities that are not easily visible to the naked eye. The reason is simple: difference series remove any common weather or climate variability, so the only thing left is the divergence.
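    The neighbor-difference idea described here can be sketched in a few lines of Python. This is only a toy illustration with synthetic data, not the actual NCDC or Berkeley algorithm: two stations share the same regional weather noise, one suffers an undocumented 1°C step, and differencing cancels the shared variability so the step stands out.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2000)
weather = rng.normal(0.0, 0.8, years.size)   # shared regional weather variability

neighbor = 15.0 + weather                                    # neighbor station: no break
target = 15.0 + weather + rng.normal(0.0, 0.2, years.size)   # plus small local noise
target[years >= 1975] -= 1.0                                 # undocumented 1 C step (e.g. a move)

diff = target - neighbor   # common weather cancels out, leaving the divergence

# Crude breakpoint scan: the split point with the largest jump in segment means
scores = [abs(diff[:i].mean() - diff[i:].mean()) for i in range(5, years.size - 5)]
break_year = int(years[5 + int(np.argmax(scores))])
print(break_year)   # the step year is recovered from the difference series
```

Note that the step is hard to see in the target series alone (its standard deviation is comparable to the step size), but obvious in the difference series; this is the point being made about step changes "not easily visible to the naked eye."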

  116. Zeke Hausfather says:
    September 26, 2013 at 9:27 am

    With reference to your black rooftop example – this illustrates perfectly why homogenisation, gridding, and averaging are flawed ways of trying to create a spatial dataset without absolutely detailed knowledge (and understanding) of the individual station data and what it represents.
    Just out of curiosity, using your example: was the station data subsequently ‘adjusted’, and by how much? How was the adjustment figure arrived at? Was a note put ‘on file’ to say why the adjustment was made, etc.?
    My point being that any subsequent use of the adjusted data may have concluded that there was still something wrong, and adjusted it further… etc., etc. – you get the picture.
    If I go to one of the websites and download temperature data, or if I were just joining the climate elite – how would I get details of these kinds of adjustments?

  117. I throw out this comment every once in a while. Please forgive me if you find it redundant, but those who have not seen it before may find the subject illustrative of what may have happened vis-à-vis climate research.

    Please research the now-disgraced historian Michael Bellesiles, author of the book “Arming America,” whose Bancroft Prize was later rescinded. The late-1990s/early-2000s scandal is fascinating and, I believe, holds many parallels to what has happened in the climate research community. In a nutshell, Bellesiles fabricated research “proving” that gun ownership was not widespread in early America, and that the gun culture, along with the accepted interpretation of the Second Amendment, is a recent invention. A ‘consensus’ of historians supported it unreservedly, and his award-winning book citing his research was a Times best seller. The research was quoted in court cases concerning the constitutionality of gun control laws. The deception was finally brought down by the dogged efforts of an attorney and a computer programmer. You will see many of the same elements then as now – lost data, disdain for “non-experts”, reluctance of experts to question accepted theory, falsification of data. I think the ultimate shame, and final blow, came when the “deniers” showed that Bellesiles claimed to have researched documents that were known to have been destroyed in fires caused by the 1906 San Francisco earthquake.

    If you think it is not possible for someone or a group with malicious intentions to completely mislead a scholarly community and experts in a field of study, this will change your mind. People who want to believe in a fraud will do so, even if it goes against their prior understanding and beliefs.

  118. Kev-in-Uk,

    Thankfully all the raw data is archived, and the two major adjustment approaches (NCDC and Berkeley) each use the same raw data as inputs, so there is no risk of double adjusting. Also, as long as station moves (e.g. rooftop to airport) are somewhat stochastic in time, they can be easily identified and corrected through difference series from neighbors due to obvious step changes. The only time you would run into issues is if lots of stations in an area all moved at the same time, something that records show is generally not the case. Adjustments are documented, though they are also automated. Unfortunately, not all station moves, instrument changes, time-of-observation changes, vegetative changes, site characteristics changes, etc. are documented, so manual homogenization using station metadata is both prohibitively time-consuming (given the 40,000 or so stations in use) and also necessarily incomplete, especially outside of the U.S., where station metadata is often lacking. Automated methods that use neighbor comparisons to identify breakpoints have proven much more effective.

    As far as data access goes, you can get raw, TOBs-only adjusted, and fully homogenized data from NCDC for all USHCN stations here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/

    You can find more information about the NCDC algorithm here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/menne-williams2009.pdf

    You can also find tests done using synthetic temperature data (to make sure that homogenization correctly addresses both cool and hot biases) here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf

    The Berkeley approach is outlined here: http://www.scitechnol.com/2327-4581/2327-4581-1-103.pdf (technical details here: http://www.scitechnol.com/2327-4581/2327-4581-1-103a.pdf)

    You can also see individual raw and adjusted station records with break points highlighted for each station: http://berkeleyearth.lbl.gov/stations/163658

  119. The Chestnut Hill lat and long given by Berkeley are still about 7.5 miles away from the USHCN’s Reading station, so I doubt we are referring to the same data. Also, your Berkeley links are average temperature; however, more often than not, maximum temperatures are adjusted much differently than minimums. Maximums occur during midday, when convection likely mixes the entire air column, whereas minimums are typically measured when the air is stillest, so minimums are more sensitive to surface conditions that will naturally vary with location. That suggests local variations that should never be homogenized.

    Zeke says, “non-climatic factor. Thats not to say that local variations might not reflect a real local change (e.g. through urbanization, vegetative cover changes, a tree growing over the instrument, etc.).”

    That is precisely the problem and why I mentioned the Yosemite study where a change in the winds caused “one section of Yosemite National Park cooled by 1.1°F, another rose by 0.72°F, while in a third location temperatures did not change at all”

    The methodology denies natural variations. Instead of simply averaging that variation, homogenization fabricates a trend based on the other stations of choice, thus amplifying their impact. Furthermore, changes in vegetation add another curious tweak. Much of the eastern United States was deforested, which typically dries the land, reduces heat capacity and evapotranspiration, and raises temperatures. Reforestation usually cools local temperatures. When the forest recovers, we should see a cooling trend that accurately represents the local climate. To call that artificial and adjust that trend into a warming trend misleads our perceptions about what is actually happening.

    You still take the argument elsewhere. Your Chicago airport example is a good illustration of a valid need to make adjustments, but I never suggested such adjustments were uncalled for. However, I still await your analysis of why the adjustments at Orland vs. Marysville occurred, because it suggests other biases.

    You also have yet to address the tree ring studies. All the statistical tests can be biased. Berkeley bases its adjustments on “expected means”. However, as I referenced, no tree ring study that extends into the 90s, when the instrumental data suggest rapid CO2-caused warming, reproduces that warming. The statistical adjustments all inflate temperatures relative to the expected-mean temperatures in the most natural settings. If we are to make meaningful long-term climate comparisons for natural habitat, we must rely on tree rings. Yet when faced with the tree ring contradiction, Mann, Briffa and others in effect suggest trees have become wooden-headed deniers. They argue that worldwide the trees suffered a sudden case of insensitivity. Mann and Jones dealt with the cooling tree ring problem by “hiding the decline” in their graphic presentations.

  120. Zeke, your Chicago rooftop and its effects on record high temperatures is an excellent example of the fallacy of parading around record high temperatures as proof of CO2 warming. Satellite data clearly show that as vegetation is removed, skin temperatures rise by as much as 40°F. I argue landscape changes have had a far greater impact on record temperatures. If people are concerned about record heat, add some greenery. Altering CO2 will have at most a trivial impact.

  121. Jim Steele,

    Would you agree that satellite measurements of TLT should be free of any urbanization or land-use related biases and have complete spatial coverage? Both UAH and RSS show high correlations of 1979-2012 trends over distance, with no cases of cooling and warming stations within a few miles of each other. While I agree that for specific locations (e.g. your Yosemite case) there might be local variations due to localized factors, extrapolating these localized factors into estimates of regional climate change would be inaccurate (it would be the same as extrapolating urban station records to surrounding areas without considering UHI bias).

    As far as tree rings go, I admit to having no real expertise in dendro-paleoclimatology, but I was led to believe that tree rings imperfectly mirror temperatures at best. There have been some recent studies of proxy records with high temporal resolution up to present that show quite good correspondence to post-1950s observations: http://www.agu.org/pubs/crossref/pip/2012GL054271.shtml

    Here is Berkeley’s homogenization of the Orland station: http://berkeleyearth.lbl.gov/stations/34846

    There are clear step changes in the difference series relative to neighbors that are identified. The record is cut at each of these breakpoints and recombined using a least-squares approach. I find it hard to believe that there would be a nearly 1°C persistent drop in Orland temperatures in 1940, caused by actual climatic factors, that did not occur in any nearby stations. Rather, it was likely an undocumented station move, a local vegetative change, or some other factor. These are removed as inhomogeneities in the estimate of the regional temperature field.

    Marysville has similar large step changes not seen in nearby stations, especially in the earlier part of the record, of a magnitude of 1C or more. It also has four documented station moves: http://berkeleyearth.lbl.gov/stations/34100

    Again, this works both ways. While you’ve focused on cases where homogenization increases the trend, in the case of the Tahoe airport you can see a clear removal of UHI signal: http://berkeleyearth.lbl.gov/stations/162484

  122. Here is one more issue with averaging. If we assume that homogenization is valid and represents the regional temperature trend, then maximum temperatures at Orland agree with the tree ring data: homogenized maximums show a cooling trend. If maximum temperatures have cooled since the 30s, there is no accumulation of heat energy in the atmosphere, no matter what the trend for minimum temperature. By averaging a rapidly rising minimum with a cooling maximum, a misleading rise in the global average is portrayed. And Orland’s homogenized maximum shows a far greater cooling than the raw minimum data.

    http://cdiac.ornl.gov/cgi-bin/broker?id=046506&_PROGRAM=prog.gplot_meanclim_mon_yr2012.sas&_SERVICE=default&param=TMAX&minyear=1883&maxyear=2012
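    The arithmetic behind that point is trivial to demonstrate (hypothetical trend numbers, chosen only to show how a mean can mask a cooling maximum):

```python
# Hypothetical trends in degrees F per century, for illustration only
tmax_trend = -1.5   # cooling daytime maximums, as with the homogenized Orland TMAX
tmin_trend = +2.5   # more rapidly rising nighttime minimums

mean_trend = (tmax_trend + tmin_trend) / 2
print(mean_trend)   # +0.5: the average "warms" even though daytime highs cooled
```

The sign of the averaged trend here depends entirely on which of the two opposing trends is larger in magnitude, which is why looking at min and max separately is more informative.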

  123. Jim,

    I agree averages can be misleading, and looking at min and max temperatures separately can be far more interesting. We currently have min and max temps shown for regional climate estimates, but not for individual stations (min and max are homogenized separately, but we don’t currently generate graphs for each). Here is the regional climate expectation for the Orland area based on 79 stations within 100 km, with min and max temperatures shown: http://berkeleyearth.lbl.gov/locations/39.38N-120.69W

  124. You make my point. The 100 km radius includes Chico, Yuba City and Marysville, as well as intense agricultural regions which are often left bare for months, so a monthly breakdown would be more informative. Yuba City went from 37,000 people in the year 2000 to 65,000 in 2010. Chico has grown from 9,000 in 1930 to almost 90,000 in 2010. As Goodridge’s study showed, growing regional populations cause a rising trend in temperature. All those areas will raise temperatures via landscape changes, and that is what you are using to homogenize Orland’s data.

  125. {Brief Bird, etc., Interlude — GO TO Next Post for Science}

    Thanks, Johanna, for responding. I went to bed earlier than usual. LOL, did you notice how Patrick and Dudley (thank you, gentlemen, for also taking the time to reply — much appreciated) answered my query about signs of spring? With temperature and weather statistics. Useful, but, well, not exactly what I was looking for… . So, the boids have flown (parrots, anyway, for the time being). One of them (a scarlet MacCaw sp?) had his or her “eye on” this thread and tapped out a comment; smart bird! Re: cute children’s song: I think that New Yorkers (I’ve only known a few) would say that song was about how New Joisey people tohlk, lol.

    Here, in addition to what I wrote above, the woods are much quieter, now. The robins that stayed around or who moved in from up north are in a contemplative mood. The Stellar’s Jay (I’m going to name him Nick Stokes) has FINALLY run out of things to squawk about and shut — up, and those badun’s, the sparrows, with nobody’s nests to invade and take over are more subdued. Perhaps, they will turn over a new leaf. Not likely. As likely as members of the Fantasy Science Cult acknowledging reality. American Bald Eagles and the swans will grace the scene for the next few months as Winter and the joy of bare-branches-against-stars comes out in all its majestic splendor. (To me, Spring is Hope, Summer is Happiness, Autumn is Splendour, and Winter is Joy)

    Do you think it is a female-male thing (v. a v. Patrick’s and Dudley’s answers to me versus your response re: signs of spring)? I wonder. Well, I don’t really care, for labeling and homogenizing (heh) people only leads to false assumptions. There are trends, but, individuals are often anomalous (and I try to not assume that “all men are…” or “we women … .”

    Thanks, again, for responding!

    Warm regards from your bird loving pal in the U.S.A.,

    Janice
    **************************
    {END OF BIRD INTERLUDE}

  126. Jim,

    If you are worried about urban biases being aliased into rural stations during homogenization, you can try using only rural stations to homogenize. That’s what we did in our recent JGR paper, and we found that rural-only homogenization was as good as all-station homogenization in detecting and eliminating urban warming bias, at least in the latter part of the record (and it still eliminated most of the difference in the early part). In the earlier part you run into issues where there aren’t enough stations available to detect inhomogeneities.

  127. Please!

    This is an excellent work, but it will be dismissed by non-skeptics due to the needless name-calling. Remove language like:

    “Climate scientists cloistered in their offices have no way of knowing to what degree urbanization or other landscape factors have distorted each weather station’s data. So they developed an armchair statistical method that blended trends amongst several neighboring stations”

    I agree with your assessment, but the delivery is casual and borderline offensive.

  128. From Zeke Hausfather on September 26, 2013 at 10:54 am:

    Would you agree that satellite measurements of TLT should be free of any urbanization or land-use related biases and have complete spatial coverage?

    As reported here 3 and 1/2 years ago, the heat plume from large cities increases formation of thunderstorms.

    I had read earlier how the mega-city Tokyo makes its own weather, the waste heat is so great the upwelling heat plume leads to cool air along the ground being sucked in at the outskirts generating wind, and the heat plume travels upwards generating thunderstorms.

    As these heat plumes rise they naturally spread out in the atmosphere.

    Thus, as the satellites are measuring temperatures of the lower troposphere (TLT), they would see those heat plumes, and their measurements would be contaminated by UHI.

    The question is one of dilution: at what height, above what size of population center generating how much waste heat, does the effect become negligible?

  129. A perfect example of why I never had the computer automatically fix anything. Auto-fixers almost always mutate into auto-make-worsers.

  130. David Hoffer,
    Awesome catch. I copy-paste-saved that one for later reference. The game might be over before you win the bet though.

  131. Zeke

    It is not clear in any manner how your method eliminates urbanized warming by simply using a “statistical category” of all rural stations, or why such a method should be trusted. As I reported, the study “following the city’s development found that over a period of just three years, a heat island of up to 8.1°F appeared as the land filled with 10,000 residents”. Columbia, Maryland would be categorized as rural, but clearly a growing population and associated landscape changes raised temperatures. Only by comparing temperatures before and after population growth can we reliably separate population effects. Your statistical methods more likely blind us to that powerful population effect and erroneously suggest it represents CO2 warming. Again, I trust the tree ring trends as indicators of climate change much more, because those tree locations are more removed from those human influences. And no tree ring study finds the warming that your methods suggest. Who to believe?

  132. Average and what it means. At a meeting with a contractor and the plant representative over the actual thickness of asphalt concrete installed, the contractor produced wonderful data that showed that while there were areas deficient in thickness, the average was greater. The plant manager responded with “Go home, turn on your oven and put your head in it while your a__ is in the kitchen; how does that average out for you?”

  133. From Zeke Hausfather on September 26, 2013 at 10:54 am:
    Would you agree that satellite measurements of TLT should be free of any urbanization or land-use related biases and have complete spatial coverage?

    Why on earth would you assume that satellite measurements are free of land-use effects? Such assumptions taint adjustments and make no sense. Satellite data clearly show that as vegetation is removed, temperatures dramatically rise from forested to grassland to barren ground, which produces the highest temperatures. (Read Mildrexler, D.J. et al. (2011), Satellite Finds Highest Land Skin Temperatures on Earth, Bulletin of the American Meteorological Society.) The lower troposphere is heated primarily by contact with those surfaces. As the surface heats, so does the air above it. Convection then carries that heated air toward the stratosphere. A hotter surface also emits more infrared. So even if CO2 remained unchanged, if a forest were removed to create an airport runway, the resulting increase in infrared would increase the greenhouse effect.

  134. Iggy Slanter says September 26, 2013 at 4:38 am

    A thousand years ago lay people (peasants) were not allowed to read the Bible.

    a) if there were any to be read (printing press ca. 1450) and
    b) pre-supposes they could read as well …

    Probably not a good example.

  135. Zeke Hausfather says:
    September 26, 2013 at 9:31 am
    scarletmacaw,

    Which is more likely: the surrounding 50 stations all had step changes at the same time while Socorro remained unchanged, or Socorro had a step change not seen at the surrounding 50 stations? It’s worth pointing out that the Socorro station has moved at least 5 times in its history, and likely more, as documentation tends to be somewhat shoddy prior to 1950.

    The adjustments in question were not pre-1950. The one Nick pointed out was in 1968.

    It is not clear just how many stations were used to adjust the Socorro data. I count 31 stations within 100 km, 19 of which were not in operation in 1968. I can certainly conceive of one station out of the remaining 12 creating a false step change.

  136. Romcconn says September 26, 2013 at 2:34 pm

    “roughly 600 of the 2000 GHCN stations that existed in 1900 were located at airports. Really? Wilbur and Orville didn’t get off the ground until 1903.”

    Dirigibles, zeppelins and blimps?


  137. @ Zeke

    “Rather, these local variations are not indicative of changes over the broader region, so spatially interpolating without homogenization will result in biased estimates of broader regional climate change. For specific local temperature work (e.g. ecological studies), raw data may in some cases be preferable to use, though you have to be quite careful. For example, I was looking at heat waves in the Chicago area and discovered a whole slew of days in the ~40C range back in the 1930s at the Midway Airport station, with nothing even remotely close to that thereafter. I dug around a bit and found out that prior to 1940 the instrument was located on a black rooftop!”
    ——————————————————–
    You keep evading the point, which is that “homogenisation” mixes good data with bad. The fact that a screaming outlier was picked up in your example does not address the question. It is the classification of stations which is at issue here, not just their outputs.

    The climate change debate, for reasons known only to “climate scientists”, is conducted in terms of fractions of a degree. That seems to be prima facie evidence that there is not a lot of substance there, since fractions of a degree matter not at all to plants or animals. The methods used to compute the fractions are at best speculative. The notion that multiplying the number of calculations inherently brings forth better results is absurd.

    Like Jim, and many others who comment and post here, I have real world experience of using data and statistics. The sheer bloody hubris and ignorance of those who feed massive datasets into a computer and assume that whatever comes out is “the answer” is staggering. You say:

    “Rather, these local variations are not indicative of changes over the broader region, so spatially interpolating without homogenization will result in biased estimates of broader regional climate change.”

    Yep, let’s just ignore inconvenient facts, because they “will result in biased estimates”.

    When I was analysing public opinion surveys, that was the sort of thinking that lost elections. Ponder on that.

    • “The climate change debate, for reasons known only to “climate scientists”, is conducted in terms of fractions of a degree. ”

      You raise an excellent point that I have long wondered about. One of the first things taught in basic science class is that you cannot create data of greater precision than your instrumentation. Even if you combine and manipulate data, it cannot have any greater precision than the least precise instrument used to gather the data, and the data must be rounded off to that level of precision.

      Prior to the age of electronic thermometers, which roughly corresponds to the age before satellite-based temperature measurement, it seems that all the historical temperature data was gathered using variations of glass tube thermometers. I have informally tried to read up on the history of such instruments, and from what I have been able to find so far, there does not seem to have been a standard thermometer design used for gathering the weather records. Anecdotally, it seems most of these thermometers were graduated no more precisely than increments of 1 degree F. I did find one example of an antique weather station thermometer graduated as precisely as 1/2 degree F, but that seemed to be rare and was the most precise I have seen so far. Given such limits on the precision of the historical data, I don’t understand how statements can meaningfully be made about trends in the data to a precision of tenths of a degree prior to the age of better instrumentation.

      Does anybody know of a good discussion of the history and capabilities of weather-station thermometers and the standards to which they were designed, manufactured, calibrated, used, and maintained? An in-depth understanding of the instrumentation used to gather the original data seems to me a topic somewhat lacking in the discussion of the temperature archives, given the levels of precision upon which arguments hinge.
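    [To make the precision point concrete, here is a minimal sketch with entirely made-up readings: temperatures read off a thermometer graduated in whole degrees F, then averaged. Reporting the mean to hundredths of a degree implies precision no individual reading possessed.]

```python
# Illustrative only: made-up "true" temperatures read off a thermometer
# graduated in whole-degree-F increments, then averaged.
true_temps = [41.37, 42.81, 40.12, 43.55, 41.96]

# Each reading is recorded to the nearest whole-degree graduation.
readings = [round(t) for t in true_temps]          # [41, 43, 40, 44, 42]

mean_reading = sum(readings) / len(readings)       # arithmetic gives 42.0
# Significant-figure practice would round the result back to the
# graduation interval rather than report, say, 42.00.
reported = round(mean_reading)                     # 42
print(readings, mean_reading, reported)
```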

  138. scarletmacaw says: September 26, 2013 at 4:32 pm
    “It is not clear just how many stations were used to adjust the Socorro data. I count 31 stations within 100 km, 19 of which were not in operation in 1968.”

    This site gives you an interactive map – just choose NM. I made some plots at random of annual dailymin data:
    Socorro
    Mountain Park
    Los Lunas

    You can see that Socorro makes a big dive in 1966 (not 1968, as I had estimated from the plot above), and it’s a real step. That’s what BEST and USHCN adjusted for, based on the time series. It’s near the time of an actual move. The nearby locations do not show a step change there.
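    [For readers wondering what adjusting “based on the time series” means in practice, here is a minimal illustrative sketch of single-breakpoint detection: for each candidate split, compare the segment means and pick the split that produces the largest apparent step. This is not the actual BEST or USHCN pairwise algorithm, and the data are synthetic, just the basic idea behind statistical changepoint detection.]

```python
# Minimal step-change detector (illustrative only, NOT the BEST/USHCN
# algorithm): scan candidate breakpoints and score each by the size of
# the jump between the mean before and the mean after the split.
def find_step(series):
    n = len(series)
    best_split, best_score = None, 0.0
    for k in range(2, n - 1):                  # keep 2+ points per segment
        left, right = series[:k], series[k:]
        mean_l = sum(left) / len(left)
        mean_r = sum(right) / len(right)
        score = abs(mean_l - mean_r)           # size of the apparent step
        if score > best_score:
            best_split, best_score = k, score
    return best_split, best_score

# Synthetic annual TMIN-like series with a deliberate ~2-unit drop at index 5
data = [40.1, 40.3, 39.9, 40.2, 40.0, 38.1, 37.9, 38.2, 38.0, 37.8]
split, step = find_step(data)
print(split, round(step, 2))   # the drop is detected at index 5
```

    A real homogenization procedure would use a significance test (e.g. a t-statistic) rather than a raw mean difference, and would compare against neighboring stations before accepting a break.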

  139. Nick Stokes says:
    September 26, 2013 at 7:23 pm

    You can see that Socorro makes a big dive in 1966 (not 1968, as I had estimated from the plot above), and it’s a real step. That’s what BEST and USHCN adjusted for, based on the time series. It’s near the time of an actual move. The nearby locations do not show a step change there.

    Thanks for the link.

    I plotted the TMIN for Socorro, and there was a decrease between 1965 and 1980. It was not clear to me from looking at the graph that it was a step change. That was the same time frame when it was getting so cold that a Newsweek cover questioned whether the next ice age was coming, so the cold drop could have been real. Also, there was no corresponding step in TMAX.

    And is there a link to the number of nearby stations used in the homogenization? Was it 12, 50, or some other number? Does BEST use a fixed number of stations, a fixed radius, or some other method of deciding which stations to use?

  140. Nick, to add to that, I looked at the other two sites. Los Lunas also shows a drop in TMIN from 1965 to 1980 in an otherwise fairly flat temperature-vs.-time record, but shows a monotonic increase in TMAX that looks artificial. Mountain Park is fairly flat in both TMIN and TMAX. None of these three sites are at all similar except for the 1970s decrease, which is weak in Mountain Park but strong in the other two (before it was homogenized out of Socorro).
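    [The neighbor comparison being argued about here can be sketched in a few lines. The core idea of pairwise homogenization is that subtracting a neighbor’s series removes the shared regional signal, so a station-specific break stands out as a step in the difference series, while a genuine regional event (like a real 1970s cooling) cancels out. The numbers below are synthetic and purely illustrative.]

```python
# Sketch of the neighbor-difference idea behind pairwise homogenization
# (illustrative, synthetic data): two stations share a regional signal,
# but station_b has an artificial -1.5 break starting at index 4.
regional = [0.0, 0.2, -0.1, 0.1, -0.2, -0.3, -0.1, 0.0]   # shared signal
station_a = [40 + r for r in regional]                     # tracks region
station_b = [40 + r + (-1.5 if i >= 4 else 0.0)
             for i, r in enumerate(regional)]              # has a break

# The difference series cancels the shared signal and exposes the break.
diff = [a - b for a, b in zip(station_a, station_b)]
print([round(d, 2) for d in diff])  # flat at 0.0, then steps to 1.5 at index 4
```

    Whether a step found this way reflects a station move or a real local difference in surface conditions is, of course, exactly the point under debate in this thread.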

  141. Hi Mr Steele, I had a couple of questions upthread that you may have missed, seeing as there were quite a few around the same time. I’ll repost here for your convenience. Cheers, Nick

    “Mr Steele, your Fig. A (their figure 5b, not 1b) is in fact GDCN, not GHCN; a minor difference perhaps for the US, but NOAA do say “Unlike GDCN, however, GHCN-Daily contains numerous data streams for updated data that enhance the latency of the dataset through rapid and frequent updates. Relative to GDCN, GHCN-Daily also contains a much more comprehensive set of QA checks as well as a more expansive set of historical data sources.”, so there is a difference. Also the graph is the average of a graph the authors say is “only a rough and visual assessment of the change points. Rigorous detection of change points in a time series needs to go through a statistical procedure”. My main point, though, is that you use that graph and say “The results seen in Figure A (their figure 1b) (sic) suggest recent climate change has been more cyclical”, whilst they say “such data sets must be homogenized before they can be used for climatological analysis”. Is there a reason you can make a climatological judgement from that graph when the authors you cite say specifically you can’t?

    TIA Nick”

  142. Well I would only say this. The adjustments have all been done. In fact they were done some time ago. Since the end of the adjustment process, we happen to have had “the pause”.

    I have a suspicion that “the pause” will continue from here on in. Right now the IPCC is trying to claim that “the pause” is statistically insignificant. How anybody can claim that a change in trend spanning 30% of the overall data period is “statistically insignificant” I don’t know. But there will come a time when nobody with a brain will be able to claim such a thing.

    There is one good thing I would like to say about climate scientists. They may have had the brass balls to create a historical warming trend out of thin air, but at least so far they haven’t had the brass balls to create a current warming trend out of thin air in full view of a watchful public.

  143. This is another of those posts that are good at their core, substantive, but the title and crankish rant about Al Gore at the end make it un-shareable.

  144. @Nick Kermode I did not miss your earlier question. I simply thought you missed the point. I offered a published graph of non-homogenized data to show what the data look like before the homogenization process occurs. Like the author, I also acknowledged that adjustments are sometimes needed, but the debate here is about how and why that homogenization is done. I apologize if I misinterpreted your intent, but your question implied my use of that graph was misleading. You also seemed to be cherry-picking that inconsequential acknowledgement while ignoring others, so it felt as if your intent was to denigrate, not discuss.

    For example, the authors also said, “Fortunately, the random component of such error tends to average out in large area averages and in calculations of temperature change over long periods; therefore, stations’ data do not always need to be homogenized”. You and Nick Stokes implied my calling it a GHCN data set was misleading, but the authors said “The updated dataset of the GDCN is the GHCN-D (Global Historical Climatology Network-Daily)” and then “The differences between GHCN-D and GDCN are reflected in some individual stations and have little effect on the regional average results like those presented in this paper.” So why would you both make it an issue?

    As with my analysis, the authors also suggested that El Niño and La Niña events generated the peaks.

    You also failed to mention that the authors found, “It is thus intriguing that the minimum temperature is more symmetrically distributed than the mean temperature. These observations indicate the existence of more cold extremes in all temperatures”. Their observations certainly suggest that the lack of recent cold records may indeed be a product of homogenization.

  145. @aaron “crankish rant about Al Gore at the end make it un-shareable”

    Aaron, I realize my Gore statements may turn off people who are allied with Al Gore’s politics, but when Gore makes several speeches urging us to put a “price on denial”, that is nothing less than a call for intellectual tyranny, and such orchestrated attempts to demonize skeptics as evil deniers are probably the greatest threat to the scientific process and respectable debate. I am curious how you would characterize Gore’s comments, and the battering of skeptical scientists with the word “denier”, so that they could indeed be shareable. Or do you accept his comments?

  146. (A follow up to this http://wattsupwiththat.com/2013/09/25/unwarranted-temperature-adjustments-and-al-gores-unwarranted-call-for-intellectual-tyranny/#comment-1426755)

    With the official release of AR5 summary, nobody may ever see this but, TWC this morning was running a bar across the bottom of the screen while they were interviewing “the usual suspects”.
    One thing caught my eye. (I had the volume on mute. I use the weather on the 8’s to know when it’s time to leave for work.) It was something about record highs in the USA outpacing lows between 1998 and 2012. Maybe that’s true but my confidence is a bit less than 95%. At least my confidence in the records is not robust.
    In my little spot of the USA, a check in April of 2012 showed 39 record highs and 7 record lows since 1998. In January of 2013 there were (suddenly) 50 record highs and only 5 record lows. All of 2012 added only 3 new records, all highs.
    The numbers are being diddled with and “scicobabble” given to excuse it.

  147. Al Gore — American Bloviator

    Forever, forever, it’s all Al Gore
    Now, in the future and always before
    Spinning himself with the words he can whirl
    The earth is his oyster, he is its pearl

  148. OOPS!
    ” In January of 2013 there were (suddenly) 50 record highs and only 5 record lows.”
    Should read:
    ” In January of 2013 there were (suddenly) 50 record highs and only 5 record lows since 1998.”

  149. Eugene WR Gallun says:
    September 27, 2013 at 2:05 pm

    Al Gore — American Bloviator

    Forever, forever, it’s all Al Gore
    Now, in the future and always before
    Spinning himself with the words he can whirl
    The earth is his oyster, he is its pearl

    ==========================================================================
    May I suggest:
    “The earth is his oyster, he is its spoor”
    (But I guess it all depends on your point of view.8-)

  150. Hi again Mr Steele, sorry if it seemed that way; I rarely make questioning comments, as they often come out the wrong way when it’s not face to face. I wasn’t saying it was misleading, I was asking why it could be used for a climate hypothesis. I understand your post is discussing the hows and whys of homogenization (which you agree data sometimes need), so I was just confused as to why you then made a hypothesis from the un-homogenized graph. I also don’t think that their acknowledgement that it can’t be used for climate analysis is inconsequential, nor the fact that they say it is based on a “rough and visual assessment”. If you agree about the need but argue about the why, when and how (and I agree it’s a worthwhile area of debate), it is confusing that you would do this, and for me it detracted from your overall argument. I was just asking for an explanation, not accusing you of misleading.
    On the GDCN and GHCN issue I did acknowledge straight up that it made little difference (to the US) as the authors also say but why not just use the correct reference? A little difference in a topic debating little differences could be important, no?

    “You also failed to mention……” I didn’t fail to mention; in fact I didn’t comment on the actual homogenization results, because I don’t know enough about the processes. I recognise I know just enough to get myself in trouble, so I wouldn’t question that point. I am here reading your stuff, along with Nick’s and Zeke’s, to try to learn more on that issue.

    Thanks for your reply. Nick

  151. Gunga Din Sept 27, 2:12

    Thank you for taking notice of the beginning of a poem I will eventually finish.

    Eugene WR Gallun

  152. Nick Stokes says September 25, 2013 at 6:27 pm
    “STOP the handwaving and address the stations at Marysville and Orland only.”
    I don’t have any particular knowledge of those two stations. I’m saying that the adjustments were made, by stated algorithm,
    ————
    Ahhh! The adjustments were made by stated Algorithm. All is well now.

    Stated without the lisp that would be “Al Gore-ism”.
