I’ve been waiting for this statement, and the National Climate Assessment has helpfully provided it

The National Climate Assessment report denies that siting and adjustments to the national temperature record have anything to do with increasing temperature trends. Note the newest hockey stick below.

NCA_siting

h/t to Steve Milloy

Source: http://nca2014.globalchange.gov/system/files_force/downloads/low/NCA3_Climate_Change_Impacts_in_the_United%20States_LowRes.pdf?download=1

Yet as this simple comparison between raw and adjusted USHCN data makes clear…

2014_USHCN_raw-vs-adjusted

Click for graph source – Source Data: NOAA USHCN V2.5 data http://www.ncdc.noaa.gov/oa/climate/research/ushcn/

…adjustments to the temperature record are increasing – dramatically. The present is getting warmer, the past is getting cooler, and it has nothing to do with real temperature data – only adjustments to temperature data. The climate reality our government is living in is little more than a self-serving construct.
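The size of that gap is easy to quantify once the raw and final series are in hand. A minimal sketch of the comparison — the series and numbers below are illustrative placeholders, not actual USHCN data, and `ols_slope` is just an ordinary least-squares fit; real use would first parse the NOAA v2.5 station files into annual CONUS means:

```python
# Sketch: compare a "raw" and a "final" (adjusted) annual series and
# their least-squares trends. The two series below are illustrative
# placeholders, NOT actual USHCN numbers.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

years = list(range(1900, 2014))
# Placeholder raw anomalies with a mild trend, plus an adjustment
# ("final minus raw") that cools the past and warms the present:
raw = [0.004 * (y - 1900) for y in years]
adjustment = [-0.3 + 0.005 * (y - 1900) for y in years]
final = [r + a for r, a in zip(raw, adjustment)]

trend_raw = ols_slope(years, raw) * 10     # deg C per decade
trend_final = ols_slope(years, final) * 10
print(f"raw trend:   {trend_raw:.3f} C/decade")
print(f"final trend: {trend_final:.3f} C/decade")
print(f"trend contributed by adjustments: {trend_final - trend_raw:.3f} C/decade")
```

This is the kind of comparison the final-minus-raw graph summarizes: whatever trend the adjustment series itself carries is added directly to the reported trend.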

Our findings show that trend is indeed affected, not only by siting, but also by adjustments:

Watts_et_al_2012 Figure20 CONUS Compliant-NonC-NOAA

The conclusions from the graph above (from Watts et al 2012 draft) still hold true today, though the numbers have changed a bit since we took all the previous criticisms to heart and worked through them. It has been a long, detailed rework, but now that the NCA has made this statement, it’s go time. (Note to Mosher, Zeke, and Stokes – please make your most outrageous comments below so we can point to them later and note them with some satisfaction.)


258 thoughts on “I’ve been waiting for this statement, and the National Climate Assessment has helpfully provided it”

  1. Note to Mosher, Zeke, and Stokes – please make your most outrageous comments below so we can point to them later and note them with some satisfaction.

    I dunno, Mr. Watts, Dad used to say to let sleeping dogs lie.
    : > )

  2. I wonder if people realize the adjustments are more than the claimed global warming…..
    and without the adjustments……it would show cooling

  3. “..quality analyses of these uncertainties have not found any major issues of concern affecting the conclusion..”

    And if you like your healthcare plan, you can keep your healthcare plan.

  4. I have a question regarding another graph used in the NCA report. It’s the one showing average global temperature data (black line) since 1900 with a green bar showing model predictions with natural influences only. It implies that human activity has recently caused warming. But how do they explain the 1910 data, which show average temperatures much lower than they would have been with natural forces alone? This seems to imply that humans caused cooling just after the turn of the century. Does anyone know what the reasoning is?

  5. The post-normal scientific method :

    “If the data does not fit the theory, adjust the data.”

  6. I have written an entire essay on this for the next book on energy and climate. Essay title: When Data Isn’t. Getting worse, steadily worse. USHCN V2 over V1, GHCN after YE2013 (new nClimDiv), HadCRUT4 r2 over r1, Aus BOM, NZ BM, everywhere except the satellite records UAH and RSS. All cool the past and warm the present, the opposite of proper homogenization for UHI. The adjustments account for at least 3/7 of the global record since 1900, and more likely half. In many places, cooling was turned into warming. Reykjavik, Sulina (Romania), and Darwin (Aus) are some better-known specific examples of complete inversion.

  7. Latitude says:
    May 6, 2014 at 11:31 am
    Matt…..because you adjust up for UHI (/sarcasm)…there, fixed.

  8. The present is getting warmer, the past is getting cooler, and it has nothing to do with real temperature data – only adjustments to temperature data.

  9. Sorry, my last post escaped and was incomplete.

    Anthony said;

    “The present is getting warmer, the past is getting cooler, and it has nothing to do with real temperature data – only adjustments to temperature data. ”

    Mosh has explained to me several times why his algorithm makes it OK to cool the past but I still don’t really understand his rationale. Perhaps YOU can explain it?

    tonyb

    REPLY: Other than to say it is bullshit of the highest order, I can’t. In business, people would go to jail for doing things like that. Ever since Mosher joined BEST, he stopped thinking rationally about this issue. – Anthony

  10. This is a completely GOVERNMENT-controlled report:

    “USGCRP is a confederation of the research arms of 13 Federal agencies, which carry out research and develop and maintain capabilities that support the Nation’s response to global change.

    USGCRP is steered by the Subcommittee on Global Change Research (SGCR) of the National Science and Technology Council’s Committee on Environment, Natural Resources, and Sustainability (CENRS), and overseen by the White House Office of Science and Technology Policy (OSTP).”
    {Source: http://www.globalchange.gov/about/organization-leadership — emphasis mine}

    Full of bold-faced lies and recklessly unsupported conjecture such as this:

    “Climate change is happening now. The U.S. and the world are warming, global sea level is rising, and some types of extreme weather events are becoming more frequent and more severe.”
    {Here: http://www.globalchange.gov/climate-change}

    it is OBVIOUSLY biased to the point of uselessness.

    **************************************************************

    {from my comment yesterday on this thread: http://wattsupwiththat.com/2014/05/05/how-not-to-measure-temperature-part-95-new-temperature-record-of-102-in-wichita-but-look-where-they-measure-it/}

    NOAA is just a propaganda machine for the Envirostalinists and Enviroprofiteers (such as windmill project investors):

    For “Breezy” — NOAA’s graphic is….. you guessed it! A bunch of windmills:

    from: http://forecast.weather.gov/MapClick.php?lat=33.448376495000446&lon=-112.07403860799968&site=all&smap=1#.U2gI0sJOVDw

    (yesterday at about 3 PM)

    “This
    Afternoon

    Breezy

    (windmill graphic)

    Breezy”
    ******************************************************

    Also related are these WUWT posts:
    1. http://wattsupwiththat.com/2013/07/22/why-the-noaa-global-temperature-product-doesnt-comply-with-wmo-standards/

    2. http://wattsupwiththat.com/2013/07/05/questions-for-noaa-and-nps-death-valley-that-have-gone-unanswered-related-to-the-100-year-celebration-of-the-hottest-ever-temperature/

    ****************************************

    HOWEVER…. TAKE HEART!

    The Average American Isn’t Buying It

    USGCRP Man {knock, knock, knock – front door}:

    Joe the Plumber {opens door, cocks head, narrows eyes}: What do you want?

    G Man: To talk to you about climate change. Trust me. I’m from the government.

    JP: Yeah, sure…… . LOL!

    G Man: You can feel it getting warmer, er, colder, er… more extreme.

    JP: Nope. I don’t. Sorry, but you’ll have to go. You’re making me miss the game. {gentle nudge and…. firmly SHUTS the front door…. G man left standing on front porch…. alone.}
    *********************************************

    Bwah, ha, ha, ha, haaa! CO2 UP. WARMING STOPPED. More extreme weather events — NOT.

    You can SMELL the desperation — that report reeks of it, heh, heh, heh.

  11. As with the IPCC, the Adaptation Chapter lays out the guts of planned actions. It also suggests Learning by Doing. That would be what Marxists call Theory in Action. CAGW is just the excuse.

    Don’t miss the Risk Disk.

  12. “REPLY: … it is bullshit of the highest order, … .” An-tho-ny {to TonyB at 11:46am}

    Precisely!

    Amen.

  13. Here’s an interview with the lead author, Alan Robock, who still thinks the Soviet Union is in power (I kid you not):

    http://therealnews.com/t2/index.php?option=com_content&task=view&id=31&Itemid=74&jumival=11824

    ROBOCK: John Holdren, his science advisor, knows that. The question is: politically what can you do? And money talks. You know, so as they used to say at a clothing store New Jersey, money talks, nobody walks. So money is very important in power.

    And, now, you say the problem is capitalism. We could get in a whole discussion of what other economic system. But the Soviet Union, which isn’t that capitalist, is living on their fossil fuels and they’re selling them. They aren’t even–.

    JAY: In Russia.

    ROBOCK: In Russia, yeah.

  14. “Other than to say it is bullshit of the highest order, I can’t.”

    Gosh, and they call me terse…

  15. They are trying to Hide the Recent Decline by padding the temperatures. But how long can they keep padding, until the petticoats of reality begin to show beneath the fabricated hemline?


    It’s a bit like the UK Labour Party trying to keep a straight face while promising that the money supply and government spending were not out of control. The give-away was when Gordon Brown said he had ended the economic boom-bust cycle – everyone then knew that we were in for a great monetary bust.

    You can only dam the tide of reality for so long…..

    Ralph

    P.S. Thanks for your inept gold sale, Gordon, I made a packet on that one…

  16. This is so discouraging. How can we communicate this to the media in such a way that they understand it? Can the auditor general check the facts, and check your calculation, so that there is a formal statement from a national official? Can the scientific and political resources of other large countries (e.g. Australia?) provide a critique?

  17. re: “You can only dam the tide of reality for so long” :

    How long? How many more times can they push down the past temperatures?

  18. climatereason says:
    May 6, 2014 at 11:46 am
    Mosh has explained to me several times why his algorithm makes it OK to cool the past
    =====
    cr, here’s a trick question for Mosh….
    Ask him how is it possible they can publish adjusted data….from stations where they have no raw data
    ============
    Steve has some great ‘blink’ charts on the adjustments………

    http://stevengoddard.wordpress.com/tracking-us-temperature-fraud/

  19. oh for crying out loud…..what bad word did I use that time
    I’m trying to not use any and thought I had most of them figured out by now…

    got a post in moderation hell again…………..

  20. Honestly, that Table 28.6 on page 683 could be renamed Barriers to the Intended Revolution, Whether Voters Want It or Not. For someone like me who has tracked the intentions of changing the US social, political, and economic systems based on the supposed Age of Abundance from 1960 to 2014, it really does read as the blueprint for the nonconsensual political coup that has been sought for so long.

  21. The small adjustment made by most datasets for UHI could be disproved if they considered only the rural stations over the period. I mean the real rural stations, not the satellite-chosen ones. Stations unencumbered by streets, buildings, people, vehicles and the like should give an accurate record of temperature.

    REPLY – Fear not. Our team is ALL ABOUT microsite. We have isolated the well sited stations and obtained the “true signal”. ~ Evan

  22. Conspiracy theorists are crazy.
    I think not in this case.
    I’m a Criminal Conspiracy theorist.
    In this post there’s some of the evidence.
    In “Climategate” there’s more.

    Let’s hear Algore say, “This is the type of catastrophic fraud we can expect as ‘Climate Change’ progresses,” in his condescending preacher’s voice.

  25. So indeed, there isn’t anything new here that we didn’t already know. Instead of detrending urban stations to get natural temperature variations without any side effect of UHI, they adjust the temperature trends of rural stations upwards to hide the UHI phenomenon…

  26. The graph of temperature adjustments from Steve Goddard is devastating. In view of the critical remarks about Steve’s methods often made by one of your more cryptic commenters, it would be good to have a full discussion of the data that went into that graph.

  27. Anthony, I know we have only data for a relatively short time, but what does the REFERENCE network show versus the full USHCN at this point? I assume the network of reference sites is still intact?

    I’m really just curious – my belief is that surface temperature (at least in the way we are currently measuring it) is useless for climate studies (but great for weather) and we should turn mainly to the satellite data.

  28. The conclusions from the graph above (from Watts et al 2012 draft) still hold true today, though the numbers have changed a bit since we took all the previous criticisms to heart and worked through them.

    Talk about the endless nights, the lost weekends . . . I’ll be needing to attend HCN Anonymous.

    Our final results are (roughly):
    Class 1\2 (raw+MMTS): 0.185 C/decade
    Class 3\4\5 (raw+MMTS): 0.335 C/decade

    Class 1\2 (NOAA-adjusted): 0.324 C/decade
    Class 3\4\5 (NOAA-adjusted): 0.325 C/decade

    To demonstrate conclusively that the stations we dropped are not a result of cherrypicking:
    Class 1\2 stations we dropped (raw+MMTS): 0.118 C/decade
    Class 3\4\5 stations we dropped (raw+MMTS): 0.213 C/decade

    That’s after dropping TOBS-biased stations, dropping stations with known moves, and adjusting for MMTS conversion. Those were the objections back in 2012. Anthony’s decision to make a pre-release was one of great wisdom and foresight: It elicited these criticisms which allowed the corrections before we hit peer review. I’ll be making a set of new maps Real Soon Now.

    “quality analyses of these uncertainties have not found any major issues of concern affecting the conclusion..”

    Well, except for the “quality” part. (Okay, and the “analysis” part.) But fear not! “Rev. Anthony and his screeching mercury monkeys” are on the job!

  29. Sorry, but I can’t seem to find the USHCN final minus raw graph on the link.

    Any help?

  30. Anthony,

    Methinks the last point in your raw vs adjusted USHCN graph is in error.

    As far as the need for homogenization goes, we’ve been over this time and time again. There are certain network transitions (TOBs, CRS to MMTS, de-urbanization of stations post 1940s) that introduce some pretty significant biases into U.S. temperature records, most of which are (unfortunately) in the same direction. Not correcting for these gives you a skewed picture of what is actually going on. It’s relatively easy to check whether pairwise homogenization approaches are leading to systemic bias; simply create experiments using synthetic data, as Williams et al did (cited in the paragraph in your post). They found that worlds with positive biases were addressed just as effectively as worlds with negative biases. We found similar results when testing the Berkeley homogenization approach.

    Also, your initial graph conflates TOBs adjustments with other homogenization (e.g. for station moves or sensor transitions). TOBs represents the bulk of the adjustment, at least for minimum temperature; homogenization in the U.S. actually reduces the century scale trend in minimum temperatures relative to TOBs-only adjustments.

    I was pretty amused to find the Assessment citing Fall et al as evidence that homogenization is effective in removing biases in the station network. I can’t comment on your new work until the data is available, so we will have to see how it turns out.
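The synthetic-data check described above can be sketched in miniature. This toy version is deliberately simplified (the break date is handed to the estimator; the real pairwise algorithm must also find the breakpoints) and is not Williams et al’s actual code, but it shows the core move: impose a known step bias, estimate it from a target-minus-neighbor difference series, and confirm warm and cool biases are recovered symmetrically.

```python
# Miniature synthetic-data homogenization check: impose a KNOWN step
# bias on a target station, estimate it from the target-minus-neighbor
# difference series, and confirm warm and cool biases are recovered
# equally well. Simplified illustration, not Williams et al's code.

def apply_break(series, at, size):
    """Add a step of `size` to every value from index `at` onward."""
    return [v + size if i >= at else v for i, v in enumerate(series)]

def estimate_break(target, reference, at):
    """Estimate the step from the target-minus-reference series."""
    diff = [t - r for t, r in zip(target, reference)]
    before = sum(diff[:at]) / at
    after = sum(diff[at:]) / (len(diff) - at)
    return after - before

n = 60
truth = [0.01 * i for i in range(n)]   # shared climate signal
reference = truth[:]                    # unbroken neighbor
for size in (+0.5, -0.5):               # warm bias, then cool bias
    target = apply_break(truth, 30, size)
    est = estimate_break(target, reference, 30)
    print(f"imposed step {size:+.2f}, estimated {est:+.2f}")
```

Because the estimator works on the difference series, the shared climate signal cancels and the sign of the bias makes no difference to how well it is removed — which is the symmetry claim at issue.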

  31. Anthony said in reply to me (with respect to cooling the past)

    ‘REPLY: Other than to say it is bullshit of the highest order, I can’t. In business, people would go to jail for doing things like that. Ever since Mosher joined BEST, he stopped thinking rationally about this issue. – Anthony’

    I’m glad it’s not just me; I thought I was being incredibly stupid in not understanding the rationale, as Mosh says it with such assurance.

    tonyb

  32. They will make a warming trend regardless of the actual data.

    There is a warming trend. In order for bad microsite to exaggerate a trend, there must be a real warming trend to exaggerate.

    But their homogenization procedure identifies the “outliers” (i.e., the lower-trend Class 1\2 stations, ~20% of our sample) and adjusts them to conform with the poorly sited 80% majority. Homogenization therefore eliminates all trace of the true signal. (At some point I will ask Anthony if I can make an actual post on that.)
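The arithmetic of that point can be caricatured in a few lines. Purely for illustration — this is a toy, not NCDC’s actual pairwise algorithm — take a network that is 20% well sited at roughly 0.185 C/decade and 80% poorly sited at roughly 0.335 C/decade (the raw class trends quoted elsewhere in this thread), and let “homogenization” pull every station strongly toward the network mean:

```python
# Toy illustration: pulling every station toward the network mean
# erases a low-trend, well-sited minority while barely moving the
# poorly sited majority. A caricature for intuition only, NOT
# NCDC's actual pairwise homogenization algorithm.

well_sited = [0.185] * 2      # 20% of network, raw trend in C/decade
poorly_sited = [0.335] * 8    # 80% of network
network = well_sited + poorly_sited
network_mean = sum(network) / len(network)

def homogenize(trend, weight=0.9):
    """Nudge a station trend toward the network mean."""
    return (1 - weight) * trend + weight * network_mean

print(f"network mean: {network_mean:.3f}")
print(f"well sited:   {well_sited[0]:.3f} -> {homogenize(well_sited[0]):.3f}")
print(f"poorly sited: {poorly_sited[0]:.3f} -> {homogenize(poorly_sited[0]):.3f}")
```

Because the majority dominates the mean, the minority moves a long way and the majority hardly at all — the asymmetry being described.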

  33. Anthony,

    Regarding your upcoming paper, I hope you look in detail at what is happening to CRN12 stations in the late 1940s. There were definitely some massive step changes being picked up by the homogenization algorithms: http://rankexploits.com/musings/wp-content/uploads/2010/03/Picture-145.png

    What is happening is mainly that stations are being moved from city centers to newly constructed airports and wastewater treatment plants. Many of these stations are (oddly enough) CRN12 despite being located at airports. This move leads to a big step change downward, which is removed via homogenization.

    For your new results, how do the 1970s-present trend differences between unhomogenized and homogenized stations look? Is the difference still dominated by a big step change in the 1940s?

  34. Dale Hartz

    The small adjustment made by most datasets for UHI could be disproved if they considered only the rural stations over the period. I mean the real rural stations, not the satellite-chosen ones. Stations unencumbered by streets, buildings, people, vehicles and the like should give an accurate record of temperature.

    In East Africa there are just four, REPEAT FOUR, stations currently reporting.

    Three are at airports, and the other is at Mwanza, which has a population of 700,000.

    All bar Dar es Salaam are classified as rural, as they are dark at night (e.g. the airports don’t operate at night), so the other three have no UHI adjustment.

    At Dar es Salaam, there is no UHI adj since 1960, despite the population growing from about 129,000 to 4 million since then.

    Does the word FRAUD spring to mind?

    http://notalotofpeopleknowthat.wordpress.com/2014/04/11/how-reliable-are-temperature-records-in-africa/

  35. obviously they left Alaska out?

    Yes, but Alaska is not part of USHCN. Waste heat rather than heat sink will tend to be more of an issue up there. Both the delta and the dampening.

  36. I’m glad it’s not just me, I thought I was being incredibly stupid in not understanding the rationale, as Mosh says it with such assurance.
    tonyb
    >>>>>>>>>>>>>>>>>>>>>

    Absolutely not just you. In fact, Mosher is on record as claiming that station drop out and hence fewer stations doesn’t affect anything. Well, if that is so, adding stations ought not to affect anything either. He can’t have it both ways, but was silent when I pointed that out.

  37. Let’s make a $1000 bet — you say the climate will be warmer or unchanged on 1 January 2050 if the USA doesn’t take steps to address the problem! I say that regardless of any steps the United States takes, the climate will be warmer, colder or unchanged on 1 January 2050!

    I’m pretty sure no one — not even climate-change fanatics — will take this bet! In addition, the climate-change fanatics will have to admit that regardless of any steps the USA may take the efficacy of those steps won’t be measurable until 2060 at the earliest!

    But leaving that fact aside, the effects of any steps the USA may take will be negligible unless China, India and the Third World take comparable steps, i.e., China and India will have to reduce their GDPs by roughly 50% and Third-World folks will have to find a more climate-friendly substitute for animal dung when cooking or staying warm!

    Make that bet $10,000!

  38. Does anyone here think it is coincidental that Brookings picked today to launch its new Planet Policy blog? http://www.brookings.edu/blogs/planetpolicy

    For those who do not follow what Brookings pushes under the name of Metropolitanism or Global Cities, the NCA becomes the reason the already desired changes that used to go by the name Regional Equity suddenly become a federal mandate.

  39. “He who controls the past controls the future. He who controls the present controls the past”
    – George Orwell

  40. The present is getting warmer, the past is getting cooler, and it has nothing to do with real temperature data – only adjustments to temperature data.

    There is the TOBS issue. Either one must adjust (upward) for that or else drop the biased stations. Or at the very least, split the trends.

  41. evanmjones,

    Station moves and instrument changes also introduce real bias. I’ll be interested to see how MMTS transitions are dealt with in the final paper. Congratulations on all the hard work by the way; we may not always agree on things, but doing the grunt work needed to get a paper published helps advance science in the long run, no matter which of our conclusions stand the test of time.

  42. @ Tony B (12:37pm) — “Mosh says it with such assurance.”

    B. S.ers are good at that.

    Truth-tellers only assert with high confidence what they firmly and reasonably believe to be true and they tend to assume that the B.S.er does likewise. B. S.ers count on this.

    That you want to believe him says good things about YOU, Tony B!
    *********************************************

    Well, Latitude! I’ve been watching for your moderated post to appear for nearly an hour, now… I wonder what in the WORLD you did say (lol — just mod on lunch break, no doubt), heh. At least you now have a “watch this space” interest going in some of us… .
    #(:))

    Good luck!

  43. The average reasonable person may not follow advanced math, but they know the smell of bs when they smell it.
    This report, attempting to once again silence skeptics, will fail.

  44. For reference, here are the max and min temperature for stations with MMTS transitions compared to nearby stations without transitions for the 10 years before and after the instrument change. The effect is not subtle, though there is a fair amount of variation at the individual station level.

  45. LOL….Hey Janice (insert wavy hand thingy here)

    I figured it out….it was a word in the link

    ======

    climatereason says:
    May 6, 2014 at 11:46 am
    Mosh has explained to me several times why his algorithm makes it OK to cool the past
    =====
    cr, here’s a trick question for Mosh….
    Ask him how is it possible they can publish adjusted data….from stations where they have no raw data
    ============
    Steve has some great ‘blink’ charts on the adjustments………pages of them

    http://stevengoddard.wordpress.com/tracking-us-temperature-xxxxx/

    insert f………r………..a……..u……….d where the x’s are

  46. climatereason says:
    May 6, 2014 at 12:49 pm
    Davidmhoffer
    Perhaps the additional stations are magic ones whereas the reduced stations are just plain ordinary ones
    tonyb
    >>>>>>>>>>>>>>>

    I suspect you are correct. Mosh didn’t answer the question as to where the extra stations from the past came from either. I see Zeke H has weighed in, perhaps he will enlighten us. But my suspicion is that the extra stations Mosh refers to are indeed artificial constructs.

  47. climatereason says:
    May 6, 2014 at 12:37 pm
    I’m glad its not just me, I thought I was being incredibly stupid in not understanding the rationale, as Mosh says it with such assurance.

    tonyb
    ——————————————–

    Mosh’s confidence in his work is not evidence that it is correct. And the fact that the data shows such consistency of adjustments is strong evidence that there is an inherent problem, though not necessarily where that problem may be. Which leads us to the reality that there are currently no usable long-term global average temp data sets. All BEST did for me was prove that we started measuring global temp in 1979 when the satellites went up.

    Also, I have never heard Mosh argue that the adjustments are correct. I have only heard him argue that they are the best we can do. Quite different things. Although, I don’t want to put words in his mouth.

  48. One silver lining is that if modelers conform their models to reproduce the adjusted data they don’t have a snowball’s chance in hell of making accurate predictions of future climate. Not that they were doing all that well anyway.

    Have they given any rationale for the huge addition they made to what I guess are last year’s temperatures? Is no one there honest enough to blow the whistle on these shenanigans?

    Instead of detrending urban stations to get natural temperature variations without any side effect of UHI, they adjust the temperature trends of rural stations upwards to hide the UHI phenomenon…

    UHI does play a role, certainly regarding offset — but MICROSITE is king when it comes to trend.

  50. Regarding Mosh: Our Leroy (2010) ratings will be available when we publish. If Mosh substitutes those ratings for what he is now using, I predict he will get a different result.

  51. @ Latitude — Hi! (insert wavy hand thingy here) — lol.

    Glad you got that figured out. Thanks for letting me know!

    ******************************************
    Well, Mr. Murphy… . New to these parts, eh? LOL, slightly modifying your words, THIS is what Mr. M0sher regularly prevaricates (at least he is consistent):

    “I have only heard him argue that they are the best {“adjustments”} we {he and his henchmen} can do.”

  52. “It has been a long, detailed rework, but now that the NCA has made this statement, it’s go time.”

    So the above statement could perhaps be taken by some as a threat, or something else entirely.

    So has Watts et al. (201X) even been sent to any journal to date?

    Or should we all expect Watts et al. (201X) V2.0 (or is it now V3.0) to be published in draft form on the interwebs once more?

    Also, why wait specifically for the NCA? By the time you actually get Watts et al. (201X) published in a journal, the NCA will be long gone.

  53. Doing this at the aggregate is a questionable methodology. Each and every site adjustment needs a peer reviewed paper and public comment. There are too many local variables that need to be considered.

  54. “…It has been a long, detailed rework, but now that the NCA has made this statement, it’s go time… ”
    Anthony, this is important. I will send the link to everyone in a position of power I can think of to bring the truth to the public. I even have a niece who is an NYC reporter for CBS.
    Also Sharyl Attkisson (retired from CBS) has a website and still has influence, I think. She has reported on failed energy projects related to “climate change” in the past.

  55. Station moves and instrument changes also introduce real bias.

    Indeed they do. As for moved stations, the short answer is that we simply drop them.

    I’ll be interested to see how MMTS transitions are dealt with in the final paper.

    I use Menne (2009 & 2010) as the basis. In short, since this is a step change and not a “blip”, the closer MMTS conversion is to the middle of the study period, the greater the adjustment and the closer to either end, the lesser the adjustment. The final results conform with Menne’s.

    Congratulations on all the hard work by the way; we may not always agree on things, but doing the grunt work needed to get a paper published helps advance science in the long run, no matter which of our conclusions stand the test of time.

    Thanks.

    Well, as I said, once y’all plug in the new set of station ratings (Leroy 2010), we may find ourselves in better agreement than you might imagine. The ratings used by BEST are via Leroy (1999).

    And let’s not forget all the volunteers. A shout out to every man present who has observed a station. Every man who has observed a station is my brother.
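The position-dependence described in the MMTS discussion above falls straight out of least squares: for a record of unit length, a step of size s occurring a fraction p of the way through biases the fitted OLS trend by 6*s*p*(1-p), which peaks when the break falls mid-period. A quick numerical check of that geometry — my own illustration, not Menne’s actual procedure:

```python
# Why mid-period breaks need the biggest trend correction: a step of
# size `step` a fraction `frac` of the way through a record biases
# the OLS trend by about 6*step*frac*(1-frac) over the record length,
# maximal at frac = 0.5. Illustration of the geometry only.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    xs, ys = list(xs), list(ys)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def step_trend_bias(frac, step=0.2, n=1000):
    """Trend bias (deg C over the whole record) from a step of
    `step` deg C occurring `frac` of the way through the record."""
    at = int(n * frac)
    series = [step if i >= at else 0.0 for i in range(n)]
    return ols_slope(range(n), series) * n

for frac in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"break at {frac:.0%} of record: bias {step_trend_bias(frac):+.3f} C")
```

A break at the midpoint distorts the fitted trend the most, and breaks near either end hardly at all, consistent with an adjustment that is largest for mid-period conversions.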

  56. So the above statement could perhaps be taken by some as a threat, or something else entirely.

    We like to think of it as “kind assistance”.

    (Okay, maybe not the kind they are after.)

  57. Based on what Zeke said, I don’t believe he’ll accept anything we publish. Like with Mosher and his defense of the indefensible, they’ll always find some way to adjustify the unjustifiable. And no Zeke, I’m not going to answer your questions. Having been burned by my trusting BEST at the get go, I won’t share any information again until after publication.

  58. If you’re fishing for catfish you use a bait that stinks.
    Perhaps they are fashioning a hook rather than a hockey stick?

  59. These are not the temperature adjustments you are looking for.
    Move along. Move along.

    Apologies to George Lucas. :-)

  60. Doing adjustments is not an issue on its own; the issue is doing adjustments without good reason based on sound science. The lack of information on what they did, how they did it, and why they did it, along with the failure to retain the unadjusted data, is the problem for climate ‘science’.

  61. Wait Whut… do you mean to say that last year’s temperature adjustment was UP 1.0 degrees relative to all of the other adjustments?

    REALLY?

    That can’t be right.

  62. Also, your initial graph conflates TOBs adjustments with other homogenization (e.g. for station moves or sensor transitions). TOBs represents the bulk of the adjustment, at least for minimum temperature; homogenization in the U.S. actually reduces the century scale trend in minimum temperatures relative to TOBs-only adjustments.

    Yeah. And homogenization also deletes utterly the signal of the well sited stations and has almost no effect on the poorly sited set. To paraphrase le Carre, “It is an outrage. I shall tell everybody.”

    We drop all TOBS-biased stations, and to confirm this J-NG ran TOBS-adjusted data on our final set and that result is even a little lower than our results. We also drop moved stations. And we account for MMTS conversion.

  63. evanmjones,

    We may well be in agreement using the new Leroy ratings; I look forward to having the chance to run the numbers myself. If your results hold up, there is also the question of why CRN12 stations are subject to such large adjustments. Showing trend differences correlated with CRN rating is a useful first step, though additional work needs to be done to track down the causation (e.g. what exact breakpoints are being detected in pairwise comparisons, and why are they being made?).

    Anthony,

    If homogenization is “indefensible” then the bulk of the scientific literature in the field is wrong. This may well be true, but as Carl Sagan was fond of saying, extraordinary claims require extraordinary evidence. I’ll be a skeptic till I run the analysis myself :-)

    • And in Zeke’s reply, he illustrates perfectly the problem of the mindset of the university AGW culture. He’s already dictating terms to me without having the data and procedure in hand.

      I feel sad for you. You have the same affliction as Venema, who thinks I’m anti-homogeniziation.

      Homogenization, done properly, is a useful tool. The simple fact is, it is done improperly by NCDC and is an oversized hammer, and the wholesale bludgeoning of the true climate signal with that homo-hammer ends up creating confirmation bias and a signal that is not representative of reality.

  64. As always I will wait to examine the data as used
    And code as run.
    That means the data used to classify stations
    The actual data not merely links.
    The protocals
    Who did the rating
    How they were trained
    Records of differences between raters.
    Time of rating to see if their is drift over time

    Lots of data.

    And then the methods to check.

    In short the same skeptical treatment all science should
    Get.

    Nice pre announcement however.

  65. Shocking really. If you tried this sort of thing to your profit figure before floating your company on the stock exchange you’d be in prison.

  66. Maybe off topic but The Weather Channel used to frequently include the record high and low for the day in the local “Weather on the 8’s”.
    I haven’t seen the days record temps for my area mentioned since last November.
    “Things that make you go ‘Hmmmm'”.

  67. @Latitude at 11:12 am
    I wonder if people realize the adjustments are more than the claimed global warming…..
    and without the adjustments……it would show cooling

    That is easy. No they don’t. And how can they realize it when the smoking gun is titled:
    “USHCN Final Minus Raw Temperatures as of May 5, 2014.”

    Let’s try:
    NOAA fudge factors used to change the US Historical Climate Network temperature record to turn a slight cooling or raw temperatures into an artificial warming to fit Political Objectives.

  68. That is a rather shameful graph – unless it’s true. Sadly there’s no way to know the truth. We do know though that it is a heavily tampered record, and each tampering is an admission they discovered they’d been wrong about all previous data set tamperings, and that will surely be shown to be true with this latest tampering when it too is fudged.

  69. We may well be in agreement using the new Leroy ratings; I look forward to have the chance to run the numbers myself. If your results hold up, there is also the question of why CRN12 stations are subject to such large adjustments.

    I can tell you exactly why. Only 20% of stations are Class 1\2 (ave. low trend). 80% are Class 3\4\5 (ave. high trend). Homogenization looks for outliers. So which stations do you think get identified as outliers and in which direction do you think they get adjusted?

    If 80% were well sited and 20% were poorly sited, homogenization would work as intended. I am a wargame designer, and any developer worth half his salt would pick that problem in no time during playtest. These guys don’t have a facility with numbers.

    They are mathematicians, surely, but they do not roll around in the numbers like a wargamer and they can’t seem to figure out that after the tenth snake-eyes in a row it is time to examine the dice.

    If homogenization is “indefensible” than the bulk of the scientific literature in the field is wrong.

    The bulk of the scientific literature in the field is WRONG, WRONG, WRONG. Homogenization reduces the error bars, doesn’t it? After all, you have just adjusted away your outilers, haven’t you? See my error bar. See how pleasingly small it is. Drinks with little umbrellas all round.

    Meanwhile, the pea (the correct signal of the Class 1\2s) has vanished.

    What is left is not even pea soup. All trace of the true signal has been eliminated. They have made complete pap out of their data. It is a travesty.

    This may well be true, but as Carl Sagan was fond of saying, extraordinary claims require extraordinary evidence. I’ll be a skeptic till I run the analysis myself :-)

    We make an extraordinary claim. We provide extraordinary proof. You will be able to run the analysis yourself; full data and methods will be provided and complete replication will be possible.

  70. From the Climategate emails # – 2328
    date: Wed, 3 Jun 2009 15:07:25 +010 ???
    from: “Parker, David”
    subject: RE: Tom’s thoughts on urban errors …
    Everybody wants to add an estimate of what UHI bias might be into their error bars, but it seems to me that rather than trust folk lore that there is a uhi bias, they first need to find one systematically in the network. Until they do that, the former is just hand waving to appease the know-littles. Jim Hansen adjusts his urban stations (based on night-lights) to nearby rural stations, but if I recall correctly (I’ll send that paper shortly), he warms the trend in 42 percent of the urban stations indicating that nearly half have an urban cold bias. Yet error analyzers want to add a one sided extra error bar for uhi…..
    Regards,
    Tom

    http://www.ecowho.com/foia.php?file=1057.txt&search=Hansen+adjust

    Bold in the original.

    Can we FOIA this paper and confirm that Hansen thinks UHI makes 42 percent of cities colder?

  71. Zeke Hausfather says:
    May 6, 2014 at 12:35 pm
    Anthony,

    Methinks the last point in your raw vs adjusted USHCN graph is in error.

    As far as the need for homogenization goes, we’ve been over this time and time again. There are certain network transitions (TOBs, CRS to MMTS, de-urbanization of stations post 1940s) that introduce some pretty significant biases into U.S. temperature records, most of which are (unfortunately) in the same direction.
    __________________________________________

    I would simply point out that not only are the biases (nearly) always in the same direction, they seem, somehow, to be (nearly) monotonically increasing from the beginning of the record to present time. This seems all too convenient to me and doesn’t pass the sniff test.

    Please, Zeke, I ask that you show both the unadjusted and unadjusted “global mean temperature anomaly” on the same plot. Just once. It will give all some very important context.

  72. Of course by “monotonically increasing” I meant “monotonically warming” in case it wasn’t clear.

  73. One last request from Zeke and Mosher: when would you predict that the network will be stable enough such that the trend in the adjustments (from decreasing Ts in the past to increasing Ts in the present) flattens?

  74. BEST wanted another scientist.. so they hired Mosher . roflmao !

    The ONLY real reason they could have for hiring Mosher is because they want a low level journalist !

  75. Who controls the past controls the future; who controls the present controls the past.

    — George Orwell, 1984

  76. Did I miss something in the comments? If so, please direct me to them. There is a massive hockey-stick like tail at the end of the graph. It looks, at face, value, utterly indefensible. The elephant in the room. Did Zeke, Mosher et al., address this? Because if they didn’t, that speaks volumes to me. I suppose I’m asking this question because I’d like to know if I should be weighing their commentary as well intentioned (although not necessarily correct) skeptical criticism or as spin doctoring.

  77. “… datacode… .” (Steven M0sher at 1:58pm)

    Ingredients…

    Sausage making machine…

    Stretchers like flour make profits go UP…

    Inspectors looking the other way…

    There is MUCH more to Mr. M0sher’s words than meets the eye… .

  78. As always I will wait to examine the data as used

    Right.

    And code as run.

    The only “code” we use is for MMTS adjustment and maybe regional weighting, if you can even call it “code”).

    That means the data used to classify stations

    Leroy (2010) ratings for heat sink, only.

    The actual data not merely links.

    All data will be included in a very comprehensive SI

    The protocals

    Will be provided.

    Who did the rating

    Ultimately, me. (but double checked by Anthony and John-NG and his students)

    How they were trained

    Leroy U. Class of 2010. (“That’s a fact, Jack!”)

    Records of differences between raters.

    I made all the final ratings (much rechecking), which are checked by others on the team.

    Time of rating to see if their is drift over time

    Legitimate concern, but one I have dealt with. Review has eliminated any “drift”.

    FWIW, I also check the GE historical wayback machine to judge if a rating has changed over time. if the rating changes, that counts as a station move and the station is dropped.

  79. “… they’ll always find some way to adjustify the unjustifiable. An-tho-ny Watts (1:43pm).

    Good one!

    And sad, but true.

  80. evanmjones,

    Pairwise homogenization approaches by-and-large look for step-change breakpoints in difference series between stations. If there is a good CRN12 stations surrounded by bad CRN345 stations, it shouldn’t be “adjusted upwards” incorrectly unless there are simultaneous breakpoints at the majority of surrounding stations. Its a good check against false positives, as divergent trends will not necessarily be corrected unless they have a breakpoint that is not shared by surrounding stations.

    Here is an example of how Berkeley does it for Orland, CA: http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Figures/34846-TAVG-Alignment.pdf

    That said, its definitely worth looking into in more detail. I’d enjoy the chance to look through the difference series between CRN12 stations and their surrounding neighbors, identifying which step changes are flagged by the NCDC and Berkeley methods, seeing whether those correspond to documented station moves or other inhomogenities, etc.

    REPLY: Better yet, why not simply throw out all the questionable data for questionable stations, by giving up on trying to “salvage” obviously polluted data with various corrections and methodologies, and just go with data you know to be free of such problems? Why is there this persistent idea that you have to use all the data, no matter how corrupted, because of a belief it can be “fixed”? – Anthony

    REPLY – What Anthony said, in spades. ~ Evan

  81. You have the same affliction as Venema, who thinks I’m anti-homogeniziation.

    I have been back and forth with VV fairly extensively since 2012. He damnwell knows I’m anti-homogenization!

  82. It may just be a typo that causes that last point to be so high. Or taking a value for part of a year
    and multiplying it to adjust it for a full year which does not seem like the right way to do it. So Zeke said he thought that was an error. 2014 is not over so that value can’t be the final one.

  83. Bill, you are likely correct. However, you can count on the adjusted adjustment to still be warmer than the same for 2013… know how? Because EVERY adjustment is (nearly) ALWAYS warmer than that for the previous period just a year earlier… but noooooo, that doesn’t mean we’re cooking the books.

    Let’s see… looks like a duck, walks like a duck, sounds like a duck……..

  84. It may just be a typo that causes that last point to be so high. Or taking a value for part of a year

    I think it may be that new way of calculating US temperatures Anthony reported on recently.

  85. REPLY – Fear not. Our team is ALL ABOUT microsite. We have isolated the well sited stations and obtained the “true signal”. ~ Evan
    ===
    You’re my hero! :D

  86. Anthony,

    Should I throw out Orland? It has two TOBs changes and a station move documented. If I throw out every USHCN station with a documented inhomogenity there would be no USHCN stations. Not to mention that station metadata is spotty at best; there are likely quite a few undocumented station moves and similar things in the past. There is not always a bright line to demarcate “good” and “bad” stations. I’d rather have a consistent approach to deal with all breakpoints than an add-hoc rule of which stations to keep and which to drop.

    REPLY: So you are suggesting what we are doing is “ad hoc” now? Jeez. OK I’m done. – Anthony

  87. Why is there this persistent idea that you have to use all the data, no matter how corrupted, because of a belief it can be “fixed”? – Anthony
    ====
    You’ll never fix a problem that way….they’re not really trying to fix a problem though

    This all went down the cans….when we started paying them

  88. evanmjones,

    There has been no change in the official US temperature record methodology this year. This change that was reported here a few weeks back referred to calculations of climate divisions, a specific product not used in calculating CONUS temperatures.

  89. The National Climate Assessment should have been subcontracted out to the Kardashians or Paris Hilton to give it credibility or even just a bit more gravitas.

    Perhaps it should just be called the national Climate Adjustment for the sake of truth in advertising.

  90. “One silver lining is that if modelers conform their models to reproduce the adjusted data they don’t have a snowball’s chance in hell of making accurate predictions of future climate. Not that they were doing all that well anyway.”

    How about having the modelers re-jigger their models and then attempt to hindcast over the past 20 years, to see if they can FINALLY claim to have models that have a chance at actually predicting something accurately?

  91. It’s like they’re taking their cues from the Twilight Zone. How’s that go again?

    “There is nothing wrong with your television set. Do not attempt to adjust the picture. We are controlling transmission. If we wish to make it louder, we will bring up the volume. If we wish to make it softer, we will tune it to a whisper. We will control the horizontal. We will control the vertical. We can roll the image, make it flutter. We can change the focus to a soft blur or sharpen it to crystal clarity.”

    Isn’t that their modus operandi when it comes to global warming?

  92. I recommend “Supplement 3″ for graphic enticements.

    Fig. 5 The 800,000 year ice core CO2 concentration record. Admits that “natural factors” have caused the concentration over time to “vary”, but does not mention the strong heat connection, because, of course, that would undermine what they are selling. Very specious way of slipping a hockey stick into the report. Anti-science to the nth.

    Fig. 12 Hockeystick “adapted” from Mann et al 2008. Does “adapted” mean erasing all “proxy based” black ink after 1850 circa? Or is this report claiming the “thermometer” based record is exactly the same, graphically, with the proxies post-1850 or so?

  93. MattN asks: “What is their justification for a 1.6F positive adjustment?”

    Very simple: The more the planet cools, the larger the disparity between reality and ideology.

  94. it’s go time

    Best (no pun intended) of luck with the gatekeepers (aka review process).

  95. Sorry, I meant “Appendix 3″ above, titled “Climate Science Supplement”. See also Supplemental Message 6″ in Appendix three for some funny stuff about “averaging” models.

  96. evanmjones says:
    May 6, 2014 at 2:17 pm
    I can tell you exactly why. Only 20% of stations are Class 1\2 (ave. low trend). 80% are Class 3\4\5 (ave. high trend). Homogenization looks for outliers. So which stations do you think get identified as outliers and in which direction do you think they get adjusted?
    ====
    This deserves repeating…..

  97. Janice Moore says:
    May 6, 2014 at 2:34 pm

    “There is MUCH more to Mr. M0sher’s words than meets the eye…”

    The Mosh is now just a paid mouthpiece shill for Best . I’m sure he will enjoy it for a while.

    Wonder if its also his job to turn on the red light in the evening.

  98. Zeke Hausfather says:
    May 6, 2014 at 12:35 pm
    Anthony,

    Methinks the last point in your raw vs adjusted USHCN graph is in error.

    As far as the need for homogenization goes, we’ve been over this time and time again. There are certain network transitions (TOBs, CRS to MMTS, de-urbanization of stations post 1940s) that introduce some pretty significant biases into U.S. temperature records, most of which are (unfortunately) in the same direction.
    _________________________________

    Actually, the biases aren’t mostly in the same direction, are they? Seems they changed directions sometime after the year 2000 from making the raw data cooler to making it warmer… why is this?

    • Seems they changed directions sometime after the year 2000 from making the raw data cooler to making it warmer… why is this?

      @K Scott – actually, as the data ages, the adjustments start changing as well. Notice that 1998 was adjusted up, before it was adjusted down (to accommodate 05 and 10 looking more like records). I call 98 the kerry year.

  99. After another lustrum, or perhaps two, of no warming that can no longer be concealed, even the slowest of politicians is going to back away from the panic stricken carbon strangulation policies that have become so Politically Correct these days.

    We can only hope that their enlightenment occurs before they’ve inflicted irreparable harm on our economies.

  100. Wow Anthony. Looking forward to reading the paper! This answers that question I posted last week about why you focus on the temperature record nicely, thank you.

  101. That graph puzzles me. It shows an ‘adjustment’ of nearly 1 degree from 1979 to 2013 (the 2014 point looks spurious), yet the surface temperature record is in reasonable (i.e better than 1 degree) agreement with the satellite record over that period. How can that be the case? Has the satellite record been adjusted too?

  102. Anthony, You know I love you. An affection I have demonstrated with my treasure.

    That said, this is post hoc arm waving.

    Your surface station work was groundbreaking and under appreciated by the establishment. Bravo for you!

    May I suggest another group sourced experiment to “test” the adjustments. I believe there have to be high fidelity proxies to temperature in the historical that can be used to test the adjustments. Being a citified farm boy, I’ll go back to my roots and suggest agriculture. If the 30s really were comparatively cooler than they were a now, are there agricultural records we can consult? Do individual farms/farm families maintain records?

    So my suggestion to you is to:
    1. Call on your community to suggest the proxies.I wonder if tree farms might be a good source of data.
    2. Design a statistically valid experiment.Run it and compare the results to the proxies.
    3. Do it a couple of times.
    4. Write another paper.

    Very respectfully (and affectionately) yours
    RobertInAz

  103. I’ll followup on my tree farm suggestion. I mentioned it because somebody was writing about the difficulty of paleo temperature reconstructions to get temperature as an independent variable. IIRC, it was a most excellent series of 4 posts on the extraordinary difficulty of such an endeavor. He mentioned tree growth models. Given the agricultural importance of lumbar and the noncontroversial nature of local rainfall records, I wonder if it might be possible to back out temperature to a reasonable degree of precision using tree growth models, historical tree growth results and the historical precipitation record.

    This may work for other crops. But tree rings are so very well established as treenometers to fractions of a degree.

  104. Zeke Hausfather says:

    Pairwise homogenization approaches by-and-large look for step-change breakpoints in difference series between stations. If there is a good CRN12 stations surrounded by bad CRN345 stations, it shouldn’t be “adjusted upwards” incorrectly unless there are simultaneous breakpoints at the majority of surrounding stations. Its a good check against false positives, as divergent trends will not necessarily be corrected unless they have a breakpoint that is not shared by surrounding stations.

    How is the systematic difference in variability between CRN12 and CRN345 stations addressed?

    CRN345 stations are poorly rated mostly because of asphalt; concrete; buildings etc; too close to the instrument. In part the effect of this material is to buffer temperature and diminish the extent of abrupt change. In a cold snap the CRN3/4/5 instruments sitting on their warm concrete pads near their brick walls will react more slowly to the temperature change than the less buffered rural CRN1/2 instruments. Will this not lead to spurious detection of a step change in the CRN1/2 stations causing them to be adjusted upwards to match the nearby poorer quality thermally buffered ones?

    Furthermore the immediate environment around a CRN5 instrument sited in a concrete yard near buildings while far from ideal, may actually be more stable than that of a CRN 1/2 instrument in a rural setting where grass can be cut, there are going to be seasonal changes in vegetation, and where the effects of agriculture – ploughing a field or cutting trees – on nearby land may lead to step changes which should not be adjusted for because they are temporary and will be reversed as grass grows, vegetation grows back and so on. Such changes are natural and do not bias the record over the long term. Because of this natural variability more step changes are likely to be detected at CRN1/2 stations. Each step change results in the CRN1/2 station being adjusted to match its CRN3/4/5 neighbours which are subject to the more gradual effect of UHI.

    A step change detection algorithm is useful to flag possible bad instruments or changes to the measurement environment which should prompt a reevaluation of the site. But it should not be used in the absence of other evidence to automatically adjust the temperature record. Step changes reflect changes to the instrument or its close environment. These changes may be a cause for reclassifying the station. But changes which do not lead to reclassification are not expected to bias the temperature record and shouldn’t be adjusted for. If the effect of the step change detection algorithm is to change the measured rate of warming then it is spuriously introducing a bias into the temperature record.

    Systematic UHI warming bias will not be picked up with a step change detector. The problem is the slow creeping growth of cities resulting in a systematic warming bias, not step changes to the immediate environment of the instrument.

  105. k scott denison,

    The zero line on the y-axis is an artifact of the baseline period chosen for calculating anomalies in individual station records and isn’t really physically meaningful. In general, the NCDC’s approach to homogenization assumes that the present records are correct, and will adjust past records up in down to align everything when removing detected breakpoints that occur at one at one station but none of the surrounding stations.

  106. White House report on Recent US Temperature Trends cites Anthony Watts’ paper here:

    http://nca2014.globalchange.gov/report/our-changing-climate/recent-us-temperature-trends

    Fall, S., A. Watts, J. Nielsen-Gammon, E. Jones, D. Niyogi, J. R. Christy, and R. A. Pielke, Sr., 2011: Analysis of the impacts of station exposure on the US Historical Climatology Network temperatures and temperature trends. Journal of Geophysical Research, 116, D14120, doi:10.1029/2010JD015146.↩

    Maybe not so fringe after all.

  107. Zeke, are you saying they just detected a 1 degree increase step change in the present records…
    so they adjusted the past down 1 1/2 degrees?

    There’s no way to look at that graph and not realize they were detecting present day increases…
    …and adjusting the past down (decrease) each time

    But then getting rid of truly rural stations…and homogenizing the rest would get the same result

    …and results look like what they are after

  108. If I was a computer games modeler….I would be really pissed

    No wonder they can’t even get a trend right….they are working with numbers that have been worked to show a trend that doesn’t even exist

    Even if they designed the perfect climate computer game….it would never be right…and they wouldn’t even know it

  109. “Better yet, why not simply throw out all the questionable data for questionable stations, by giving up on trying to “salvage” obviously polluted data with various corrections and methodologies, and just go with data you know to be free of such problems? Why is there this persistent idea that you have to use all the data, no matter how corrupted, because of a belief it can be “fixed””

    Well cynical me thinks that if they discounted the bad, stations, or established realistic error bands, or stopped twerking around with adjustments, then the results wouldn’t be scary enough to keep the grant money flowing.

    I’ve had the pleasure of running a statistical control system for critical components for a tier 1 auto supplier. (Critical in our lingo means that if you stuff it up people are maimed or killed) BEST admit to 70% of US stations being between +-2C and +-5C out. We are looking to detect a mean temperature shift of 0.6C. You cannot hope to find one with the other. Were you to take a similar proposition to an Auto OEM, even concerning a purely decorative component you’d be told to f*ck off out the room, they wouldn’t be coy about using that language either.

  110. Zeke, with regard to your statement about UAH Satellite temperatures being in ‘quite good’ agreement with USHCN numbers… Where are you getting those UAH values..? I decided to go over to Dr. Spencer’s page to see what they have, being purveyors of that data, and they’ve got.

    Are you parsing out just USA data from the satellite record? To me, the data sets (your UAH and Spencer’s UAH) look quite different, and without a provided explanation.

  111. You can’t just adjust data based on random numerical algorithms. All that will do is adjust in the direction of the mathematicians bias. Any adjustments would have to take into account the equipment (for example MMS equipment fails hot) (paint failure creates added heat) etc. It would also have to take into account the geography, as in nearby water will have a limiting effect, people create additional heat, and its not uncommon for elevation differences to create mathematically detectable anomalies due to air mass temperature changes. In fact the geography can cause legitimate differences. Homogenization improperly done just causes bias. Interestingly we have the satellite records that are near bias free, though short, this offers us a sort of limit for the mathematicians. Any trend from a land data set that is significantly different than satellite is probably biased one way or another; given those bias’s were probably better off with unadjusted data. At least with unadjusted data were not fooling ourselves.

  112. In a land where constants aren’t and variables won’t, the Fairfax Law strikes again.
    The Fairfax Law clearly states that any facts which support the outcome you want are fair facts for the discussion. This especially seems to apply to CAGW, where made-up facts trump actual measurements. We HAVE TO HAVE a man-made catastrophe, so any facts that support that outcome are fair facts for the argument.

  113. Could Zeke explain why the adjustments are on a nice slope? If major changes took place in the 1940’s like Zeke said then we should see no adjustments UNTIL the 1940’s then no adjustments again. Sorry Zeke but the fan is on and you are throwing a lot of …. into it and I’m just glad I’m on the right side of the fan because it’s all falling back in your face. Try the truth next time.

  114. Dumb question: Is there a plot of the USHCN raw temp data over the time span of the corrections applied – 1880 – present? Cheers -

  115. As popular as this site is, if you really want to reach and communicate with people who aren’t already convinced global warming is a scam, you have to do a better job of explaining stories like this. Who is the USHCN and what temperature data are they adjusting? Is it US data? Global data? Is it a data set nobody looks at? How can they make this much of an adjustment to one set of temperature data, won’t it look ridiculous when compared to other measurement data sets of the same thing? I’m sure your loyal climate gurus get it and catch on quickly and trust any data you show, and I trust you too, but many people won’t. And I can just see anyone who visits this site who isn’t already convinced global warming is a scam, seeing this graph, not believing it, asking those questions above, clicking on your data links and not seeing the graph, and not seeing enough explanation of where the data is that was used to make the graph, and especially seeing that 1.3 F bump above all previous data points, and just saying its BS. I can’t forward this story to anyone because it is so spectacular anyone I send it to will surely ask me “Where did he get that, show me the data, I don’t see it in his links” and I wouldn’t be able to tell them. If the data is in your links its too hard to find. So they won’t believe it, its too easy to dismiss. Reaching your fellow climate gurus is one thing, reaching the other 96% is another, and I don’t know how you would do it, but this isn’t doing it. Maybe its just too complicated.

  116. I do not understand how government employees and government funded scientists can clearly admit they have fudged the data since 1960 (that is what the “final minus raw” graph shows) and not be charged with fraud. If a company sent in an SEC, EPA or IRS form with fraud as blatant as this they would be charged with a criminal offense. These people really do belong in prison.

  117. USHCN doesn’t make the adjustments, the people there make them. Who are these people?

  118. Well if I had ever “adjustified” ANY report of ANY parameter in ANY R&D work, I ever did, for ANY of my former employers (all first line companies); or worst yet changed someone else’s reported measured results; they would have fired my arse faster than I could take the last swallow from my coffee cup.

    I have always assumed that you could get thrown in jail for adjustifying any public reports of anything; erasing e-mails, destroying records; any of that stuff.

    One of my employers; a big one, but not the biggest, was VERY big on lab bench data taken from standard production line samples of supposedly “final version” products, to verify, that products shipped to customers conformed to the written specs guaranteed in the company product catalog.

    Falsifying any observed results, was grounds for immediate dismissal.

    Archives of past information are vital for when problems show up , and need to be researched for causes.

    The whole idea of altering public records of taxpayer funded information; just makes me puke.

    If some past result is believed to have had some sort of anomaly; you don’t change the record, you add to the information, with a statement, of what possible sources of error might have been in play at the time.

    Sounds like government climate science is akin to having a hook and ladder fire truck with three drivers, front, middle and rear, each driving what (s)he thinks is the better route; with nobody knowing exactly where the fire is.

  119. Stepped out for several hours, no response from Zeke or Mosh so I will ask again:

    If it is Mosher’s position that station drop out does not affect the over all answer, how can he justify the past becoming colder by increasing the number of stations? If the latter were true, then station drop out would have to warm the present.

    Pick one guys. Which is it?

  120. The Mosh is now just a paid mouthpiece shill for Best .

    I know you’re not wild about Mosh. But you have to understand that the questions he is asking are just the sort of ones I would ask, myself. It is part and parcel to the scientific method that our data, means, and methods be fully and easily available, and that there must be ability to replicate the results.

    When we publish, all of that material will be made fully available.

  121. Dear Jimmi the Dalek,

    Re: “(the 2014 point looks spurious),” — that is correct. THAT, if I’m not mistaken, goes to the main point of this post. We must, as Gunga Din said yesterday: “Nip it in the bud.”
    (http://wattsupwiththat.com/2014/05/05/how-not-to-measure-temperature-part-95-new-temperature-record-of-102-in-wichita-but-look-where-they-measure-it/#comment-1629465)

    “NOAA final adjusted data says: + .309°C/decade
    (above at “Comparison…”)

    “USHCN Final Minus Raw Temperature May 5, 2014″ — shows about a 1 degree jump for 2014 (above graph),

    which certainly does NOT agree with the satellite temperature record:

    UAH (Satellite) Temperature Anomalies:

    YEAR MON NH (Northern Hemisphere)
    2013…….1…….+0.517
    2013…….2…….+0.372
    2013…….3…….+0.333
    2013…….4…….+0.128
    2013…….5…….+0.180
    2013…….6…….+0.335
    2013…….7…….+0.134
    2013…….8…….+0.111
    2013…….9…….+0.339
    2013……10……+0.331
    2013……11……+0.160
    2013……12……+0.272
    2014…….1…….+0.387
    2014…….2…….+0.320
    2014…….3…….+0.337

    (Source: http://wattsupwiththat.com/2014/04/07/uah-global-temperature-update-for-march-2014-status-quo/)
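    For anyone wanting to sanity-check what trend the fifteen months above imply, a quick least-squares slope will do it. This is only an illustrative sketch (fifteen months is far too short a span to mean anything climatically), not anyone’s official method:

```python
# Ordinary least-squares trend of the 15 UAH NH monthly anomalies quoted
# above (Jan 2013 - Mar 2014), expressed in degrees C per decade.
anoms = [0.517, 0.372, 0.333, 0.128, 0.180, 0.335, 0.134, 0.111,
         0.339, 0.331, 0.160, 0.272, 0.387, 0.320, 0.337]

def decadal_trend(values):
    """OLS slope against month index, scaled to per-decade (x 120 months)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return (num / den) * 120

print(round(decadal_trend(anoms), 2), "C/decade")
```

    The same two-pass mean-then-slope arithmetic works on any of the monthly series discussed in this thread.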

    Finally,

    A Little Perspective to Keep the Facts in View:

    “{Per t}he monthly satellite lower-troposphere temperature anomaly from Remote Sensing Systems, Inc., … there has now been no global warming – at all – for 17 years 5 months.”

    (Source: http://wattsupwiththat.com/2014/02/06/satellites-show-no-global-warming-for-17-years-5-months/ — emphasis mine)

    And the Forest….:

    The Medieval Warm Period is normally given as 950 AD to 1250 AD or 1063 years BP to 763 years BP. In the beginning of this period, temperatures in Central Greenland rose by 1.5°C in less than 200 years. This has been fairly well documented as a worldwide event [48]. It is uncertain what the global average temperature was during the period and whether the world as a whole was warmer than now, or not. But, certainly in areas where we have records, such as Greenland, the UK, and in China, temperatures were comparable to temperatures today and in some cases warmer. A considerable amount of recent research attempts to compare temperatures during the Medieval Warm Period to today on a global basis [48].

    Little Ice Age

    The Little Ice Age was not a true ice age, but the cooler period after the end of the Medieval Warm Period. It is generally considered to have started by 1350 AD (663 years BP) [49] and it pretty much ended by 1850 AD … .

    (emphasis mine)

    (Source: http://wattsupwiththat.com/2013/11/17/climate-and-human-civilization-over-the-last-18000-years/)

  122. Zeke and others,

    I think you should look carefully at a systematic choosing of stations by their quality rather than their trends. The result of each method simply cannot be significantly different if the bulk processing methods and station choosing methods both work.

    For Anthony’s result to be inaccurate and the bulk math method to be accurate, you simply need to identify how choosing the best possible stations is somehow biasing the record downward. Nick Stokes wrote a compelling post on the reliability of using only 60 stations for global temp. If that is the case and Anthony chooses stations based on their quality, there should be no difference whatsoever. That said, if he finds ANY difference at all in trend between station quality levels, I would think that BEST would be highly interested in the result rather than dismissive.

    Excepting some of the historic unexplained pre-usage of privately disclosed data, I fail to see why this is a heated discussion. It should be resolvable by looking closely at station sorting criteria to determine whether an error was made or whether there is merit to the findings.

  123. Mr. X (at 3:08pm) — I think you said it best (lol):

    “We will control all that you see and hear….. sit quietly… .”

    “The Outer Limits” — Intro.

  124. yet the surface temperature record is in reasonable (i.e better than 1 degree) agreement with the satellite record over that period. How can that be the case? Has the satellite record been adjusted too?

    Satellite readings are, in one sense, a proxy. They are based on microwave reflections; clouds and ice can be an issue. They are not adjusted for anything else, I think, and we are directly assured that UAH does not use the surface record in any way. But, more to the point, satellites measure Lower Troposphere temperatures (+ the other atmospheric layers), not surface.

    Dr. Christy (a co-author in this) had previously calculated that LT trends must, necessarily, be 20% higher than surface trends (1.2 amplification), and up to 40% higher (1.4 amplification), heading towards the equator. He was perplexed that this did not show up in the record. Our current results split the uprights at an amplification factor of 1.25.

    So Dr. Christy’s theory is vindicated, as well as Anthony’s grand vision.
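    The arithmetic behind the amplification check above is simple enough to sketch in a couple of lines. The trend value used below is hypothetical, chosen only to keep the numbers round; the 1.25 factor is the one mentioned in the comment:

```python
# Illustrative arithmetic for the amplification relationship described above:
# LT trend = amplification x surface trend, so an observed LT trend implies
# a lower surface trend. The 0.25 C/decade input is hypothetical.
def implied_surface_trend(lt_trend, amplification=1.25):
    """Surface trend implied by a lower-troposphere (LT) trend."""
    return lt_trend / amplification

print(implied_surface_trend(0.25))  # a hypothetical 0.25 C/decade LT trend
```

    Run in reverse (surface trend times 1.25), the same relationship says what the satellites should be showing if the surface record were right.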

  125. Actually, the biases aren’t mostly in the same direction, are they? Seems they changed directions sometime after the year 2000 from making the raw data cooler to making it warmer… why is this?

    Because that was the one year they got it right?

  126. Satellite readings are, in one sense, a proxy. They are based on microwave reflections; clouds and ice can be an issue. they are not adjusted for anything else, I think, and we are directly assured that UAH does not use the surface record in any way.
    ===
    Evan, I’m curious about this…
    ..initially, how were they tuned?….and what, if anything, are their readings compared to in order to account for drift, etc
    I know they are trying to “divine” a temp…but like sea levels….they would have to have some way to check for accuracy…and they would have had to have some way to initially get them on track

  127. Because that was the one year they got it right?
    ===
    No, they are just saying that 2001 and 2002 were the only two years that anyone could read a thermometer right…….. :)

  128. Even the raw data produced by the NCDC cannot be trusted. Someday, the auditors, forensic accountants and justice lawyers will be going in and I hope people will be held to account.

    Meteorologists in 1870, and 1880 and 1910 and 1930 and 1950 and 1990 and 2013 were too dumb to understand that the temperature should be recorded at the same time of day or that a simple minimum and maximum would suffice. They NEVER learned how to properly record the temperature. All 1 million of them through history.

    That is the justification of continuing to adjust the historical temperature record every single month. In fact, even last month’s temperature recorders were just as dumb as those in 1870 who received the directive from the Weather Bureau on the time of day temperature recording. They never got it right and hence even last month’s records require an adjustment.

    It cannot be justified by “another” paper (among 28 done before) showing how records were screwed up, even last month.

  129. Even if they designed the perfect climate computer game….it would never be right…and they wouldn’t even know it

    It wouldn’t even be fun.

    Some of these so-called mathematicians simply have not rolled in the mud with the numbers the way some of us have. They seem to have forgotten the top-down world where things actually have to add up.

    If you want to know if the dice are loaded, don’t ask one of them; they can’t give you an answer you can use — give me a good old common-sense hard copy wargamer’s quantification assessment every time.

  130. Why adjust the data at all?

    If you are really trying to find “change”, homogenization is the exact opposite of what you should be doing!

    If you are honestly attempting to find “real” trends, then it is the relative temperature that is required from the local raw data, and not the absolute temperatures! 

    Each record should be examined in its own context.
    Every site move, instrument or methodology change creates a discontinuity that should be treated as a new and unique dataset, that should be examined independently. 

    It amazes me that climate scientists can make outrageous claims for the veracity of proxy records and then in the same breath completely discount recorded weather data. What they should be seeing is that local raw data is the best ‘proxy’ record the world has ever had, and treat it with according reverence!!

    Just one well sited station with consistent instrumentation and record keeping methodologies will tell you the truth about so called “Climate Change”. It will show you if it is Global, how it affects that zone, if there is a ‘change’ in any direction, how long, of what magnitude and more.

    If a site begins as a rural paddock and ends as a car park in the concrete jungle, all other things being equal, the data alone is useful because it can tell us a lot about the history of that process!

    It is the data from the trees and not the forest that matter and as one great Mann demonstrated, even a single tree can be very useful! ;-)

  131. jimmi_the_dalek says:
    May 6, 2014 at 4:18 pm
    That graph puzzles me. It shows an ‘adjustment’ of nearly 1 degree from 1979 to 2013 (the 2014 point looks spurious), yet the surface temperature record is in reasonable (i.e better than 1 degree) agreement with the satellite record over that period. How can that be the case? Has the satellite record been adjusted too?
    —————————————————————-
    The agreement between RSS and the surface record was reasonable. Lately the divergence is increasing rapidly, at a rate greater than the warming rate.

    http://stevengoddard.wordpress.com/2013/05/10/giss-rapidly-diverging-from-rss/

  132. Even the raw data produced by the NCDC cannot be trusted.

    I know. I brought the case to Mac a couple of years ago over some “inhomogeneities” regarding USHCN1 vs. USHCN2. (I’ll omit the various four-letter words.) He did a water test and found some discrepancies. We never followed up on that, though.

  133. Further supporting the likely error of the adjustments is the fact that of all continuously active USHCN stations, the vast majority of the record highs occurred in the 30s and 40s. (There are no adjustments on a high record; it just is.) If anything, UHI in conjunction with CAGW should have smashed those records.

  134. In general, the NCDC’s approach to homogenization assumes that the present records are correct

    Ah there’s the rub. And if it turns out it ain’t correct, then homogenization not only adjusts in the wrong direction, but it makes whatever correct signal you may or may not have vanish, leaving not a trace.

    In my old profession, we called that “crappy design and development”. In my current profession, too, come to think of it.

  135. evanmjones says (May 6, 2014 at 6:25 pm): “Dr. Christy (a co-author in this) had previously calculated that LT trends must, necessarily, be 20% higher than surface trends (1.2 amplification), and up to 40% higher (1.4 amplification), heading towards the equator. He was perplexed that this did not show up in the record.”

    Wasn’t this mentioned in an article/comment at WUWT? I remember reading about this before, i.e. that the “official” global surface temp trend pretty well matches the LT trend, and it shouldn’t, so there is something wrong with the “official” surface temps or with the theory of troposphere temp trend amplification. I’ve looked, but can’t find it.

  136. ..initially, how were they [satellites] tuned?….and what, if anything, are their readings compared to in order to account for drift, etc

    They weren’t at first, for drift. That was later corrected.

    I know they are trying to “devine” a temp…but like sea levels….they would have to have some way to check for accuracy…and they would have had to have some way to initially get them on track

    That I do not know. All I do know is that they deny using surface data to adjust, and I believe them.

  137. evanmjones says (May 6, 2014 at 2:17 pm): “I am a wargame designer…”

    What a coincidence! I’ve played wargames! :-)

    May I ask which titles you’ve worked on?

  138. Meteorologists in 1870, and 1880 and 1910 and 1930 and 1950 and 1990 and 2013 were too dumb to understand that the temperature should be recorded at the same time of day or that a simple minimum and maximum would suffice.

    IIRC, TOBS-bias was not discovered until the 1950s. It’s a nasty error, and I did not really understand it until I blocked out an example.

    The new CRN network is pristine. I don’t think I spotted a Class 3, and many of them are so Class 1 it hurts. Makes you think America is a big place, all of a sudden. They have triple-redundant PRT sensors (so much for homogenization) and are 24-hour records (so much for TOBS).

    Unfortunately we will have to wait a couple decades until that data is useful.
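    For anyone who, like the commenter above, finds TOBS hard to grasp until they block out an example, here is a toy simulation of the effect. It uses made-up synthetic weather and is not NOAA’s actual TOBS adjustment: a max/min thermometer reset at 5 pm immediately re-registers the hot afternoon reading, so an exceptionally hot day inflates two consecutive observation days, while a morning reset double-counts cold mornings the same way.

```python
import math
import random

random.seed(0)

def hourly_temps(days):
    """Synthetic hourly temperatures: a sinusoidal daily cycle (coolest ~3 am,
    warmest ~3 pm) on top of random day-to-day weather swings."""
    temps = []
    for _ in range(days):
        base = random.gauss(15.0, 4.0)          # day-to-day variability
        for h in range(24):
            temps.append(base + 5.0 * math.sin(math.pi * (h - 9) / 12))
    return temps

def mean_of_maxmin(temps, reset_hour):
    """Average of (Tmax + Tmin)/2 from a max/min thermometer read and reset
    once a day at reset_hour. Each observation day covers the 24 hours ending
    at the reset, so a still-hot 5 pm reset carries the heat into the next
    day's maximum -- the TOBS double-count."""
    n_days = len(temps) // 24
    daily = []
    for d in range(1, n_days):
        window = temps[(d - 1) * 24 + reset_hour : d * 24 + reset_hour]
        daily.append((max(window) + min(window)) / 2)
    return sum(daily) / len(daily)

temps = hourly_temps(3650)
afternoon = mean_of_maxmin(temps, 17)  # 5 pm reset (old observer practice)
morning = mean_of_maxmin(temps, 7)     # 7 am reset
print(round(afternoon - morning, 2), "C warm bias of afternoon vs morning resets")
```

    The same station, with the same synthetic weather, reads warmer with afternoon resets than with morning resets, which is why a network-wide shift in observation time puts a spurious trend into the record.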

  139. Should Anthony and Steven Goddard be the only sites discussing the increasing divergence between the satellites and GISS? Is this addressed anywhere in the “approved” literature?

  140. Is there a plot of the USHCN raw temp data over the time span of the corrections applied – 1880 – present?

    Yes, raw USHCN data is available. The HOMR metadata going back to the late 70’s is very good these days, greatly improved. Someone at NCDC made a good hire.

  141. Regarding Mosher’s questions, let me answer by saying we will be doing the exact opposite of this famous quote from CRU’s Phil Jones:

    “Why should I make the data available to you, when your aim is to try and find something wrong with it?”

    We plan a very extensive SI with the paper, so that people can replicate the work. If it isn’t replicable, it isn’t science. I feel pretty good about all of this; I’m sure there will be some people who will try to pull fast ones, but there will be others who will follow through without playing games and see how this long and painstaking work has paid off.

  142. What a coincidence! I’ve played wargames! :-)
    May I ask which titles you’ve worked on?

    Ever play Blue Vs Gray? #B^)

    Scott Wilmot Bennett says:
    May 6, 2014 at 6:41 pm (Edit)

    They should use the whole dang USHCN as a study group.

  143. evanmjones @ 6:25 pm

    So are you saying that the satellite records have a warming bias? Has Dr Spencer commented on this?

  144. Wasn’t this mentioned in an article/comment at WUWT?

    I have mentioned it, but I can’t say exactly where.

    So are you saying that the satellite records have a warming bias? Has Dr Spencer commented on this?

    No, they measure Lower Troposphere (more or less) correctly, as designed. Surface trends should be 20%+ lower than LT. Therefore it is not the satellite trends, but the surface station trends which are too high.

  145. So, how did the attempt to bait Steven Mosher work out for you guys?

    “I know you’re not wild about Mosh. But you have to understand that the questions he is asking are just the sort of ones I would ask, myself. It is part and parcel to the scientific method that our data, means, and methods be fully and easily available, and that there must be ability to replicate the results.

    When we publish, all of that material will be made fully available.”

    How much did you expect him to be able to say without the missing information Anthony?

  146. jimmi_the_dalek says:
    May 6, 2014 at 7:32 pm
    Latitude @ 6:56
    It looks as if whoever constructed that graph omitted to ensure that the baselines are the same, because they seem to be comparing these two series without adjusting the offset,

    http://www.woodfortrees.org/plot/gistemp/from:1998/plot/rss/from:1998

    because that is what the page links to.
    ====================================
    Latitude’s 6:56 link was to a US surface T chart. This was originally accepted by Hansen.
    I think you are referring to this…
    The agreement between RSS and the surface record was reasonable. Lately the divergence is increasing rapidly, at a rate greater than the warming rate.

    http://stevengoddard.wordpress.com/2013/05/10/giss-rapidly-diverging-from-rss/

    I do not read it as a baseline chart. It is showing how much the two metrics have separated from each other since 1998, where they start at that distance from each other, and the mean separation is increasing.
    Indeed, as pointed out, in absolute terms the surface is warming faster than the RSS reading, the opposite of what is supposed to happen, per here…
    Evanjones
    “No, they measure Lower Troposphere (more or less) correctly, as designed. Surface trends should be 20%+ lower than LT. Therefore it is not the satellite trends, but the surface station trends which are too high” and here…
    “Dr. Christy (a co-author in this) had previously calculated that LT trends must, necessarily, be 20% higher than surface trends (1.2 amplification), and up to 40% higher (1.4 amplification), heading towards the equator. He was perplexed that this did not show up in the record.”

  147. ““Why should I make the data available to you, when your aim is to try and find something wrong with it?””

    Did Steven ever say that, or anything with the same meaning, here? It seems pretty weak to drag other people’s statements into it while talking about Steven. Why not just stick to things he actually said?

  148. Indeed, over the U.S. the agreement between satellite and surface records is quite good

    The point being that the agreement between LT and surface trend should not be “quite good”.

    LT trend should be 20% or more higher than surface trend.

    Evan, I’ve played it. Old time wargamers unite!

    Holy Cow. Well, I designed and developed that one. And researched/wrote all the historical script. If you’ve got any rules (or strategy) questions, just ask. Any friend of BvG is a friend of mine. BTW, that game is a top-down model, and I storyboarded it from 1st Bull Run to Appomattox, down to the last card pick, step loss, and die roll.

    I have a few helpful comments for climate scientists who imagine they can “model” the Eastern Front — using Advanced Squad Leader rules . . . (Are you MAD? What are you THINKING? And downhill from there.)

  149. O.K. now is when skepticism has gone off the rails. I was called crazy on Goddard’s site for merely asking him to finally offer confirmation that this sudden hockey stick of claimed 2014 adjustments was real instead of either an artifact of his personal software or an artifact of how different station data arrives over the year etc., by offering a dirt simple raw vs. adjusted plot. The expectation would be since, no, there *is* no sudden FULL DEGREE jump in any final plots I can find, which would of course be highly suspicious and an obvious sign of a glitch, then the real hockey stick would have to be an upside down one in the *raw* data, and in a great public relations breakthrough, skeptics could claim yet another Hide The Decline event. Yet even so, such sudden glitches have in the past simply led to discovery of an error somewhere and its correction, such as Hansen’s Y2K bug.

    So where are those plots of monthly or weekly raw/final data extending into 2014? Anybody with current graphing software out there?! I only had good software set up several years ago, alas.

    Whatever happened to the civil tradition of sending a letter to the organization about a sudden glitch? Without that, skeptics are effectively implying motive for a surreal and nearly unbelievably sudden adjustment glitch, and instead of enjoying a PR coup that could gain headlines, skeptics will be seen as crackpots since it’s all just the personal software of one blogger. Goddard has released the data extraction code, written in C++, so has anybody actually checked it? I am not qualified. That I was attacked for even asking makes me highly suspicious, akin to alarmist blog experiences.

  150. It would probably help if you threw out some one liners about corrupt scientists and the propagation of a great hoax on the world by people who want to make us all communists, or something like that so that people can tell for certain which side you are on :P

  151. David Riser: Suggesting commenter Steve is a troll for trying to help out by offering a sense of layperson perspective and then asking him to do general homework instead suggests groupthink.

    Steve nails it, perfectly, asking simply for clear supporting evidence, rather than a naked plot:

    “I can’t forward this story to anyone because it is so spectacular anyone I send it to will surely ask me “Where did he get that, show me the data, I don’t see it in his links” and I wouldn’t be able to tell them.”

    Steve, you have to click on the hockey stick graph itself, and that leads to an anonymous blog by one “Steven Goddard” who regularly posts conspiracy theories about Obama killing school kids in arranged shooting events in order to enact anti-gun laws, and regularly posts Holocaust imagery too, to the delight of his small army of cheerleaders, so using his site as your reference will scare away any normal everyday person, probably for good. One of his top commenters is a notorious Net spammer of a crackpot iron sun theory who happens to be a convicted son/daughter rapist, who is banned here. His site is a PR disaster since organized AGW enthusiasts are well aware of his failings and leverage it quite often to successfully stereotype all skeptics.

  152. evanmjones says: May 6, 2014 at 2:17 pm
    “What is left is not even pea soup. All trace of the true signal has been eliminated. They have made complete pap out of their data. It is a travesty.”

    Mr. Jones,
    Did you really just compare their efforts to homeopathy, as their homogenization has watered down the meaningful stuff until there is only a memory of it? This deserves a new name coined, though climateopathy seems too obvious and hides the homeo issue. Homeoclimatopathy seems quite the mouthful, as well. Hmmmm.

    BTW, I live in Beijing, where taking pictures of things as simple as weather stations can get a visa terminated (at best) if the wrong people are annoyed (and one reason I use both a pseudonym and a VPN to post things that could theoretically be taken the wrong way), but if you need better close-ups of Beijing area weather stations, I could make this effort if you can get me the location information of those where Google Earth isn’t adequate. Yes, that is permission to email me under the assumption that Anthony has the access to my non-public address.

  153. Evan: IIRC, TOBS-bias was not discovered until the 1950s. It’s a nasty error, and I did not really understand it until I blocked out an example.

    Actually this subject has been up for discussion since the 1890’s. There’s a good, short article in The Monthly Weather Review, October 1934, pp 375-376, by W.F. Rumbaugh, titled The Effect of Time of Observation on Mean Temperature. A footnote by the Review editor calls attention to earlier discussions in the Monthly Weather Review of September 1934 and November 1919, as well as an 1891 Publication by McAdie titled Mean Temperatures and Their Correction in the United States, and another by Hahn in Der Lehrbuch der Meteorologie, undated. I have no idea how to get hold of these last two, but the Monthly Review articles are easy to get at the AMS site.

    I had an exchange with brother Stokes on an earlier thread about the possibility of corrections for elevated temperatures of urban stations during this period; you may have seen it. I am impressed/depressed about how little the subject of discussion has changed in a hundred + years.

    There is another possible correction that I have not seen discussed, and I’d be interested in anything you may have seen about it. During an extended period in the second half of the 19th century, Signal Corps/Weather Bureau doctrine was to get the thermometer as high as possible off the ground. The weatherstations gallery has a number of pictures showing stations on the roofs of multi-storied buildings. I came across a good one when I was looking in the history of San Luis Obispo. I’d put a link in here, but the site is down for maintenance at the moment. Anyway, at some point this policy was changed, and it seems to me that the effect might have been as regular and significant as the more recent change from CRS to MMTS instruments. Has anyone done a study of this?

  154. To Mosher, Zeke and any others who support the corrections shown at the top of the post, please address the following questions:

    1. Please explain the physical problem or process associated with the temperature measurements that requires a systematic and increasing correction to the data.
    2. Given that a possible UHI correction would be expected to be a systematic and increasingly negative adjustment with time to present, what physical processes are affecting the temperature measurements such that the cumulative corrections are positive and larger than the expected UHI corrections?
    3. Given the problem of boundary conditions and temperature inversion at night, why is the average of Tmin and Tmax used, rather than just Tmax?

    Answer me those questions clearly with good physical reasoning and I might start to take these corrections seriously.

  155. Zeke Hausfather (May 6, 2014 at 3:02 pm) “If I throw out every USHCN station with a documented inhomogenity there would be no USHCN stations”

    Do you keep Norfolk? It has bad micro-siting: http://shpud.com/weather/main.php?g2_itemId=48 It has urbanized in spurts up to the present: http://shpud.com/weather/main.php?g2_itemId=142 To show that garbage sites like Norfolk can be used for any purpose requires showing that there is no deleterious effect from the inclusion of the data. That applies equally to 100 or more other sites, but it can’t be demonstrated with any kind of bulk analysis. One must show (1) that Norfolk has been properly adjusted and (2) that using Norfolk in regional averages did not result in improper adjustments to any neighboring sites. I am quite frankly very tired of reading bulk analyses like this: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/algorithm-uncertainty/williams-menne-thorne-2012.pdf which does not answer my questions.

  156. NikFromNYC (May 6, 2014 at 10:10 pm) “One of his top commenters is a notorious Net spammer of a crackpot iron sun theory who happens to be a convicted son/daughter rapist, who is banned here”

    My understanding is that the charges were dropped. There are valid reasons to exclude people from these discussions, but the dredging up of old personal attacks, no matter how justified they may seem in any particular case, is not a valid reason.

  157. @NikFromNYC

    “Steve Goddard” has many many failings so his claims have to be read very sceptically. On the other hand I would suggest your attempt to link him to a crackpot by virtue of the fact that a crackpot has posted comments on his site, is itself indicative of your credibility.

  158. Nick Adams, May 6, 2014 at 11:25 am, provided the following link detailing the adjustments that have been made to the temperature record:

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html#QUAL

    The one that interests me is SHAP.

    That page says “Application of the Station History Adjustment Procedure (yellow line) resulted in an average increase in US temperatures, especially from 1950 to 1980. During this time, many sites were relocated from city locations to airports and from roof tops to grassy areas. This often resulted in cooler readings than were observed at the previous sites. When adjustments were applied to correct for these artificial changes, average US temperature anomalies were cooler in the first half of the 20th century and effectively warmed throughout the later half.”

    Doesn’t this just export any historic UHI to the new location? The SHAP adjustments surely ought to be negative on temp data before a move to a cooler site and unadjusted after the move as you would think the new site would be chosen to be better. The temp data instead appears to be unadjusted before a move to a cooler site and positively adjusted after a move, with those adjustments reaching a plateau once the moves were largely done.

  159. I have been saying for a long time that this stratagem is so clever, it can only be deliberate. Past temperatures cannot be verified while present ones can, so to create warming by cooling the past is shrewd. Dishonest, yes, but shrewd.

  160. Let the temperature fit the crime

    Mann’s objective all sublime
    He shall achieve within time
    A data torture crime
    Temperature is a crime
    And make each data point
    Unwillingly represent
    A source of innocent increment
    A source of increment!

    Gratis Gilbert & Sullivan Mikado

  161. Did you really just compare their efforts to homeopathy, as their homogenization has watered down the meaningful stuff until there is only a memory of it?

    Worse. There is no trace of it, whatever. After homogenization, the well sited stations average 0.324 C/decade and the poorly sited stations average 0.325.

    Unless you go back to formula you would never suspect there was “meaningful stuff” in the first place.

  162. That chart as done by NOAA has been around for a long time, although it only goes to 2000:
    http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

  163. DC.sunsets says:
    May 6, 2014 at 11:42 am
    All science is Political Science when politics writes the grant-money checks.

    Maybe this has already been done, but if not I’d like to see someone do a “funding study.”

    The purpose would be to categorize all federal grants awarded to climate-related studies as being awarded to known skeptics, known alarmists, and neutral parties. And of the neutral parties, how many of them were awarded future grants if their original results were classified as skeptic, neutral, or alarmist.

    Put another way, do climate skeptics have trouble getting federal grant money and is there a study that indicates that to be the case?

  164. If they keep lowering past temperatures people will think we’re emerging from a little ice age! Oh, we are, aren’t we.

    Anyway, it’s good to see they keep increasing the choco ration.

  165. Lots of questions from folks; let me see if I can catch up now that I’ve had my morning coffee.

    ——–

    Cynical Scientist – Interesting suggestion regarding recurring step changes like cutting grass at rural stations. I suspect those effects would be too small to be picked up by the homogenization algorithms, though they could result in problems over time. Williams et al tried to address this by looking at how the results change when you raise or lower the threshold for detecting step changes such that only major changes (e.g. station moves, instrument changes, paving the area under the station) trigger homogenization.

    Variability differences shouldn’t really affect the trend unless there is also a change in the mean, something that can be easily picked up by pairwise comparisons.

    As far as UHI goes, it is likely a combination of step changes (many triggered by microsite changes) and gradual slope (more influenced by macro-scale changes). Our recent paper found that step-change homogenization is pretty good at removing UHI impacts in the U.S., at least for the last 60 years or so: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013.pdf

    ———

    Latitude – Each record is compared to its surrounding neighbors, and the algorithm looks for break points at one station that are not shared by its 20 or so surrounding stations. The assumption is that climate change is by-and-large a regional phenomenon, and any persistent step changes on a monthly scale seen at one station but none of its surrounding stations are due to some localized bias rather than a real climate effect.

    The current temperatures are used as the reference, so if there is a 1/2 degree step change down at a specific station at some point in the past, the NCDC method corrects it by moving past temperatures up by 1/2 degree. Berkeley does something somewhat different, cutting station records whenever it detects a breakpoint and treating them as individual stations. The results of the two approaches are quite similar, however, especially for the U.S.: http://rankexploits.com/musings/wp-content/uploads/2013/01/USHCN-adjusted-raw-berkeley.png

    You do get pretty much the same result if you do homogenization using only rural stations and toss out all the urban stations. We did this test in our UHI paper.
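    In code, the shift-style correction described above amounts to something like the following toy Python sketch. The numbers are invented and the fixed comparison window is an illustrative assumption; the real pairwise algorithm estimates the step from neighbor difference series rather than from the station alone.

```python
import numpy as np

def align_to_present(series, break_idx, window=10):
    """Remove a detected step change by shifting the pre-break segment,
    leaving the most recent data untouched (the present is the reference).
    The local step size is estimated from `window` points on each side of
    the break; `window` is an arbitrary toy choice, not the PHA's."""
    step = (series[break_idx:break_idx + window].mean()
            - series[break_idx - window:break_idx].mean())
    out = series.copy()
    out[:break_idx] += step  # the past is shifted by the step size
    return out

# Toy station: flat climate, but the first 50 months read 1.0 C high
# (say, a rooftop site before a move).
raw = np.concatenate([np.full(50, 11.0), np.full(50, 10.0)])
fixed = align_to_present(raw, 50)
# The corrected record is continuous at 10.0 C throughout.
```

    The key property is that recent data are never altered: the entire pre-break segment moves as a block so the record joins up with the present.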

    ———

    Salamano – I’m using a subset of UHI only over land areas in the U.S. It agrees quite well with homogenized data. As does the USCRN since it began having complete U.S. data in 2004: http://rankexploits.com/musings/wp-content/uploads/2013/01/Screen-Shot-2013-01-16-at-10.40.46-AM.png

    ———

    Jared – The figure shows the cumulative effect of adjustments, and it’s far from linear. Also, while there were some major adjustments to CRN12 stations in the 1940s, the biggest adjustments were TOBs changes and the MMTS transition in the 1960s-1980s, which is where you see the bulk of the increase. I also get something slightly different from Anthony when I compare raw and homogenized data: http://i81.photobucket.com/albums/j237/hausfath/USHCNHomogenizedminusRaw_zps284d69fe.png

    ———

    davidmhoffer – Adding stations can cool the past if they are in areas with no spatial coverage prior to adding those stations (e.g. in the Arctic) and if they have a higher trend than the global average. This is not really relevant for the U.S., however, where spatial coverage is fine. In general, unless there is an area with low spatial coverage, adding more stations will have a minor effect on the temperature record.

    ———

    Jeff Id – I’d hope no one ever chooses stations based on their trends :-p. It’s worth pointing out that Anthony finds that the best sited stations have the same trend as the badly sited ones post-homogenization. They appear to have a lower trend prior to homogenization, though in Fall et al. they had the same trend, so it’s worth looking in more detail at exactly what changed in the ratings between the old and new papers.

    The difference between CRN12 and CRN345 trends in the raw data could have a number of explanations: (1) homogenization is biasing the trends upward by “spreading” the warming from badly sited stations; (2) there are some inhomogeneities (like the 1940s move from city centers to airports) that are correlated with CRN ratings; (3) there is bias in the spatial coverage between the different sets of stations contributing to the trend differences.

    Once the ratings are released, I’d like to look more in-depth at the specific breakpoints detected in the CRN12 stations to see what is driving these differences. I’d also like to compare CRN12 temperatures to nearby Climate Reference Network stations, satellite data (UAH/AIRS), and reanalysis data. Anthony may well be correct, though I’ll defer judgement until I actually have the data to see for myself.

    ———

    ThinkingScientist – I’d be happy to answer your questions to the extent I can.

    1) There are a number of systemic biases introduced in the U.S. record over the past century. First, a significant number of stations were moved from urban rooftops to newly constructed airports in the 1940s, resulting in a step change downward in readings after the move. Second, a large portion of the network had its time of observations changed in the 1960s and 1970s, also resulting in a negative bias. Third, most of the network transitioned from liquid in glass thermometers to MMTS electronic instruments in the 1980s, resulting in an average max cooling bias of around 0.6 C. There are also numerous documented and undocumented station moves and microsite changes, as well as a non-negligible UHI effect. All of these are biases that, ideally, should be addressed in creating our best estimate of U.S. (and global) temperatures.

    2) UHI is real, but its impact isn’t huge. Time of observation changes introduce a larger bias, for example. Homogenization (excluding TOBs corrections) actually lowers min temperatures, which is what we would expect when UHI is being corrected. I’d suggest reading my recent paper on UHI and homogenization in the U.S. for more details: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013.pdf

    3) Homogenization is generally done separately on max and min temperatures. Mean is calculated from the resulting homogenized max and min.

  166. “Zeke Hausfather says:
    May 6, 2014 at 12:35 pm
    Methinks the last point in your raw vs adjusted USHCN graph is in error”

    And if it is not in error, will he agree to call it “bullshit of the highest order”, too?

  167. evanmjones,

    If I recall correctly, the TLT amplification factor over land is actually right around 1.

    See http://www.realclimate.org/index.php/archives/2009/11/muddying-the-peer-reviewed-literature/

    and http://climateaudit.org/2011/11/07/un-muddying-the-waters/

    REPLY: Yes, but nothing on RealClimate is actually real. And, I simply don’t trust a NASA organization that uses noisy and maladjusted surface temperature data over a satellite sensing program – i.e. the business of NASA. You’d think that would be their goal.

    We’ll get the answer from the people that actually DO the work in satellite sounding and post here. – Anthony

  168. Salamano –

    Oops, I meant UAH, not UHI in my reply above (which looks to still be in moderation). Climate science is something of an acronym soup, and sometimes it’s hard to keep them all straight…

  169. Anthony,

    Well, given that Klotzbach et al. get their 1.25 amplification figure from Schmidt 2009, I think he might be the right person to ask :-p

    Steve McIntyre did the math himself and got an amplification factor of 1.05 over land. Read the Nov. 8th update to Steve’s post.

    REPLY: I did, and while Steve’s work is admirable, he’s not in the business of remote sensing (this appears to be his first effort), and neither is Schmidt. I prefer to ask somebody who actually does the work, and the person who designed the instrument on the bird, Spencer, is the best choice. – Anthony

  170. I’m very much looking forward to reading this paper. Considering all the work that went into this project, it’s very exciting to see that it was not a waste of time. Perhaps it will even end up being a landmark paper. I’m chilling some bubbly in any event.

  171. The adjusted temperature record is a joke. In countries with strong global warming political movements, the supposedly scientific data gets adjusted in the same direction. In other countries, that doesn’t happen. Strange, that.
    Multitudes of papers are written based on a joke of a temperature record, rendering those papers a joke as well. If the foundation is weak, so is whatever you build on top of it.
    The next step will be for the believers to delete the facts (i.e. the raw data) so that their own twisted version of reality becomes the new “raw” data. If you can control the facts…

  172. Zeke, I read through the paper that you linked in a post (in moderation?) hausfather-etal2013.pdf and have a couple of questions. First, what is the actual homogenization algorithm used in USHCN? Your paper says “Homogenization of the USHCN monthly version 2 temperature data does not specifically target changes associated with urbanization. Rather, the procedure used involves identifying and accounting for shifts in the monthly temperature series that appear to be unique to a specific station‐‐the assumption being that a spatially isolated and sustain shift in a station series is caused by factors unrelated to background climate variations [Menne et al. 2010].”

    I found the Menne 2010 paper here: http://onlinelibrary.wiley.com/doi/10.1029/2009JD013094/full It says “In version 2 of the USHCN temperature data [Menne et al., 2009], the apparent impacts of documented and undocumented inhomogeneities were quantified and removed through automated pairwise comparisons of mean monthly maximum and minimum temperature series as described by Menne and Williams [2009].” I found the Menne 2009 paper here: http://journals.ametsoc.org/doi/pdf/10.1175/2008BAMS2613.1 and I’ve seen it before several times. It claims “Use of a simple difference in means test does, however, address both gradual and sudden changes,…”. What is the difference in means test? It is not explained in the paper.

  173. Evan (and any other interested persons):

    Here’s another early paper dealing with TOB: W. Ellis, “On the Difference produced in the Mean Temperature derived from Daily Maxima and Minima as dependent on the Time at which the Thermometers are read,” Quart. Journ. Roy. Met. Soc. XVL, 1890, 213-218.

    Tony B: Thanks for the references.

  174. Zeke, thanks for the graphic. It shows means separated by “empirical breaks.” How are those breaks determined? I assume they are station specific? From metadata? And if there is no metadata? Then presumably the station mean is compared to the regional mean. Then it is adjusted? How?

    Thanks in advance.

  175. jimmi_the_dalek says:
    May 6, 2014 at 4:18 pm
    That graph puzzles me. It shows an ‘adjustment’ of nearly 1 degree from 1979 to 2013 (the 2014 point looks spurious), yet the surface temperature record is in reasonable (i.e. better than 1 degree) agreement with the satellite record over that period. How can that be the case? Has the satellite record been adjusted too?

    ————————————————————————————————————–

    The surface warms and cools more than the troposphere above any one point in the satellite data, so surface data should show more cooling or warming than satellite, with the exception of strong ENSO signals. This indicates that cooling over recent years should show up more in surface data than in satellite data. This has not been observed, because they are dishonestly adjusting out this cooling to keep pace with the satellite data.

    When global temperature warms over a longer period the surface does so more than the satellite record, and this is then not adjusted to match the satellite data. It is a cherry-picking, warm-biased, incompetent way of managing surface data. Interpolating over sparse data regions instead of using satellite data says all anyone needs to know about the mismanagement of surface data. Despite the differences between the two, interpolation is far less accurate.

  176. Thanks, John.

    I remember hearing the explanation and having to figure out what was actually going on by constructing a top-down model.

    By the way, folks, John is one of our best and most determined surface station surveyors.

  177. eric1skeptic,

    The pairwise homogenization process used by NCDC iterates through each station, calculating the difference between each station and all neighbors in its proximity (say, the nearest 20 stations, though that parameter is tunable). It looks through these difference series for sudden step changes that are consistent across all neighbor pairs. Effectively it’s looking for changes that occur at a particular point in time at one station but not at any of the surrounding stations, with the assumption that abrupt localized changes that occur at one station but not at any of its neighbors reflect localized biases like station moves or instrument changes.

    This can result in problems if there are simultaneous changes at all the neighbors but not at the target station. Thankfully most of the inhomogeneities in the surface temperature record like station moves, TOBs changes, or instrument changes were phased in gradually, and were not adopted simultaneously across the network. Problems can also occur when there are few neighboring stations, forcing the algorithm to go further afield to find neighbors and potentially misclassifying true regional climate changes as localized biases. This seems to have occurred in the case of a number of Arctic stations, as this recent piece by Robert Way points out: http://www.skepticalscience.com/how_global_warming_broke_the_thermometer_record.html

    In the U.S., thankfully, station coverage is dense enough (especially since all ~7000 coop stations are used in breakpoint detection) that this shouldn’t be much of an issue here.

    Homogenization also doesn’t necessarily deal well with slope inhomogeneities (vs. step-change inhomogeneities). Thankfully most of the inhomogeneities (including UHI) appear to show up more as a set of smaller step changes rather than a very gradual warming bias, though more work could be done analyzing this.

    I’d also suggest reading Williams et al if you haven’t yet. They set out to test homogenization by creating synthetic temperature data where the “truth” is known and artificial biases of different types are added. They explore how well the algorithm deals with different types of issues and different bias signs, to make sure that the algorithm doesn’t end up artificially introducing cooling or warming bias when correcting errors. They also look at different possible permutations of the number of neighbors needed, the size of the break needed, or the distance from the station needed to see how they affect the results. Their paper is here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf

    The code used in the pairwise homogenization method is available here: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/software/52i/
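    To make the difference-series idea concrete, here is a toy Python sketch. It is not the actual PHA code linked above, just an illustration: synthetic neighbors share a regional signal, the target has an undocumented step, and a crude difference-of-means scan flags the same break in every pair. All numbers here are invented.

```python
import numpy as np

def find_break(diff, min_seg=12):
    """Return the index where a target-minus-neighbor difference series
    shows its largest mean shift (a crude stand-in for the PHA's
    changepoint test), plus a rough signal-to-noise score."""
    best_t, best_score = None, 0.0
    for t in range(min_seg, len(diff) - min_seg):
        score = abs(diff[:t].mean() - diff[t:].mean()) / (diff.std() + 1e-12)
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score

rng = np.random.default_rng(42)
months = 240
regional = np.cumsum(rng.normal(0, 0.1, months))   # shared regional climate
neighbors = [regional + rng.normal(0, 0.2, months) for _ in range(5)]
target = regional + rng.normal(0, 0.2, months)
target[120:] -= 0.8                                # undocumented station move

# Differencing removes the shared climate, leaving the localized step
# visible in every target-neighbor pair at the same point in time.
breaks = [find_break(target - nb)[0] for nb in neighbors]
# Every pair flags a break near month 120, so the bias is attributed
# to the target station rather than to a regional climate change.
```

    A break that showed up in only one pair, by contrast, would point at that particular neighbor instead of the target.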

  178. eric1skeptic,

    To address your other questions, the size of the step change in the difference series needed to flag an inhomogeneity is configurable; I’m not sure what exact value is used in the PHA, though the Williams et al paper tested a number.

    Once a breakpoint is detected, different algorithms “correct” it in different ways. NCDC’s PHA essentially just collapses any step changes identified in the difference series to create a continuous record. Berkeley Earth cuts the station records at that point, treating everything before and after as different stations and using a least squares/kriging approach to combine all the fragments into a spatial temperature field.

    Metadata is treated differently in different algorithms. The PHA lowers the threshold for detecting breakpoints when metadata indicates that a breakpoint has occurred. Berkeley just creates breaks at any metadata-determined breakpoint even if it doesn’t show up in the difference series. It’s worth pointing out that both methods detect quite a few breakpoints that are not documented in the metadata, as the metadata is rather poor (especially further back in time) in the U.S. and nearly non-existent for the rest of the world.
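    The Berkeley “scalpel” step can be sketched in a few lines. This hypothetical helper only does the cutting; the least-squares/kriging reconciliation of the fragments is the hard part and is omitted here.

```python
def scalpel(series, breakpoints):
    """Cut a station record at each detected or metadata-documented
    breakpoint, returning the fragments to be treated as separate
    stations.  No values are adjusted; the offsets between fragments
    are left for the later spatial fit to resolve."""
    cuts = [0] + sorted(breakpoints) + [len(series)]
    return [series[a:b] for a, b in zip(cuts, cuts[1:])]

# A 10-month toy record with breaks after months 4 and 7 becomes
# three independent fragments.
fragments = scalpel(list(range(10)), [4, 7])
# fragments == [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```

    The contrast with the NCDC approach is that nothing is ever shifted; any bias in a fragment is absorbed as that fragment’s offset in the spatial fit.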

    Zeke Hausfather (May 7, 2014 at 11:55 am) “Thankfully most of the inhomogeneities (including UHI) appear to show up more as a set of smaller step changes rather than a very gradual warming bias, though more work could be done analyzing this.”

    Zeke, thanks for the information. It seems like the stepwise urbanization needs to be tested particularly when the step threshold is tuned for other discontinuities. Thanks for the link to the Williams paper. I just scanned it. I agree with their conclusion that increases in Tmax are probably underestimated with their method. But it seems likely to me that Tmin rises are overestimated. Those are the most significant effects of urbanization. Tmax rarely changes due to UHIE in my experience.

  180. But… but UHI adjustments must be in the warmer direction! Look at it this way: the more people are at any place, the more energy is drawn from the surrounding environment because people’s body temperature is about 37°C and this is warmer than the air at most places. What do you think where the energy to heat those people up comes from? It has a cooling effect.
    /sarc

  181. Nik, go jump in a lake! Steve asked questions, I pointed him in the direction of honest answers. He never replied back to this post, so he is either doing his homework as he should or he was a troll. The post is fairly easy to understand and the information he asked about is easy to get to. If you intend to post on a blog you should at least understand the subject. Or possibly think you understand it.

  182. eric1skeptic: [trimmed]

    Will Nitschke, since Iron Man is often Steve’s most prolific commenter, I am exactly trying to DISCONNECT Goddard from being associated with him and by extension every skeptic too. If this issue isn’t blindingly obvious to you then it suggests a severe lack of both political savvy and emotional intelligence. It’s called constructive criticism. I say that as one of the most active online skeptics on news sites who also lives but four blocks from Hansen’s old office here on the Upper West Side. Goddard is just really bad at PR in what has become a PR war. Criticizing him isn’t some act of treason that reflects on my “credibility.”

    David A: This is like that old episode of Saturday Night Live: “What’s the *PRICE* of the car?” but the sales guy won’t ever say it. I *know* where the archive of obscurely formatted data is, but it’s been months now that I’ve asked somebody, anybody with graphics software set up (like I did three years ago but no longer have) to finally just PLOT it, raw and adjusted. Each passing month makes me more suspicious of whether a classic Hide The Decline issue really exists, because nobody has shown the actual decline.

    David Riser, what skeptics need to do isn’t offer homework and accusations but to offer quickly comprehensible infographics, which I happen to be expert at producing, but I don’t have graphing software that can parse obscure climate data files that rely on metadata for station identification. This is so terribly silly: such an *extreme* claim of fraudulent data adjustment comes only from a single blogger’s software, when he refuses to show the two plots that created the difference curve! Am I dreaming or something? It’s quite surreal that Goddard’s plot is suddenly appearing here without comment, just as if it’s handed down from on high. Commenter Steve’s concern was spot on, that he couldn’t yet even show friends this claim for fear of ridicule. Goddard’s claim still exists in a bizarre sideshow vacuum, which is a shame.

    • @NikFromNYC – Love him or revile him, Steve Goddard is no “bizarre sideshow”. He is a pit bull that has found a target and will not let go – the temperature adjustments. As such, he has been cited numerous times on both blogs and news sites for his insightful articles.

      As long as you follow his rules of posting, he does not care about your private life. To tar him with a “guilt by association” is merely a diversionary tactic to those who either do not like him, or cannot debate him.

      You are better than that.

  184. @NikFromNYC

    The Steve Goddard site is not interested in your opinions on how he should run his site. If you think of SkS and RealClimate as propaganda sites, think of Real Science as counter-propaganda. I’m not big on any form of propaganda, but others obviously feel it has its “uses.”

    Goddard, according to my understanding, tolerates all comments as long as they are not overt trolling and not completely off topic. I presume he has that policy because the propaganda sites heavily censor negative commentary. Hence his desire to be “open”.

    Goddard provided links to the data and explained how he calculated the graph. It seems to me that it would be far more interesting if you’d done an analysis rather than just concoct an accusation. I’m certainly open minded either way.

  185. Well, give it a try you guys. Find someone you know who doesn’t have a strong position on any of these issues, and send them to Goddards site. What exactly will you tell them when they ask “Don’t you have any sources who aren’t nutcases and who don’t hide their identities? Can’t you send me to one of them instead?”

  186. Zeke Hausfather (May 6, 2014 at 12:43 pm)

    “This move leads to a big step change downward, which is removed via homogenization.”

    Indeed. And it is very strange, especially if we assume:

    Zeke Hausfather (May 7, 2014 at 8:43 am)
    “UHI is real, but its impact isn’t huge.”

    In fact, you do not know the magnitude of this impact because you cannot assess the gradual increase in perturbations.

    You could, however, use proxies such as TLT:

    Zeke Hausfather (May 7, 2014 at 8:56 am)
    “If I recall correctly, the TLT amplification factor over land is actually right around 1.”

    But with a better memory: http://img215.imageshack.us/img215/5149/plusuah.png

  187. @Steven Mosher says: at May 6, 2014 at 1:58 pm:

    “As always I will wait to examine the data as used
    And code as run…”

    Mr Mosher, you are to be congratulated on your rigorous approach to the science, and I’m heartened to see that you apply it universally.

    I have no real standing here, and I was wondering if you could do the world of science a favor?

    Could you please have a chat with Professor Michael Mann, Professor Stephan Lewandowsky, Dr Phil Jones, aah …

    … I think I’ve got a list here somewhere …

    … Can I get back to you?

  188. @ Matt G This indicates that cooling over recent years should show more with surface data than satellite data.

    Yes. It works both ways. Presence of heat sink will exaggerate both a warming and cooling trend. Since our 1979 – 2008 raw+MMTS data shows overall warming, the bias is towards warming. Same goes for 1998 – 2008, but reversed, with the cooling spuriously exaggerated.

  189. Its worth pointing out that Anthony finds that the best sited stations have the same trend as the badly sited ones post-homogenization. They appear to have a lower trend prior to homogenization, though in Fall et al they had the same trend, so its worth looking in more detail exactly what changed in the ratings between the old and new papers.

    I can answer that.

    The old ratings are based on Leroy (1999) which measure only distance to heat sink. The new ratings are based on Leroy (2010), which account for both area and distance.

    We noticed the problem with Leroy (1999) back when we were working on Fall et al. We had a running joke about how all Class 4 stations are equal, but some Class 4 stations are more equal than others.

  190. evanmjones,

    ——
    “@ Matt G This indicates that cooling over recent years should show more with surface data than satellite data.”

    Yes. It works both ways. Presence of heat sink will exaggerate both a warming and cooling trend.
    ——

    This is only true for high latitudes (above 60°). Overall, the opposite is expected: tropospheric amplification due to change in absolute humidity.

    This is only true for high latitudes (above 60°). Overall, the opposite is expected: tropospheric amplification due to change in absolute humidity.

    Oh, sorry for the confusion: I was only referring to heat sink effect, not LT/surface comparison. (I’ll let Dr. Christy address that; he’s the expert.)

    I.e., if it cools, the badly sited surface net will exaggerate the cooling. If it warms, then warming will be exaggerated. Therefore, during the “pause”, trend will not be exaggerated in either direction.

  192. evanmjones,
    Thank you for the reply. I’m intrigued, what are these heat sinks that can act in both directions?

    My understanding of the problem (based on an energy balance) is that the main anthropogenic influences are always going to be warming (energy consumption and urban drainage).

  193. @ phi

    Be sure not to confuse the actual reading and the trend. Theoretically, a station could have very high (spurious) readings, yet show no trend over time at all.

    A heat sink is generally a structure or paved area. They store heat and release it at Tmin. The disparity between the sink and the sensor temp widens as warming continues. When cooling intrudes, the effects are reversed: there is disproportionately less heat stored by the sink as it cools, just as there was disproportionately more heat being stored on the way up.

    Or, to look at it another way, when there is warming (natural or anthropogenic), the effect exaggerates the trend on the way up. On the way back down, there is less and less heat in the sink and the process reverses itself.

    What goes up, must come down.

    The sensor will be reading too high in both cases. But the higher the temperature, the greater the spuriously high offset. So it actually increases the trend.

    That’s what NOAA missed. They figured if a station was offset 1C too high in 1979, it would also be offset 1C too high in 2008. But during that warming period, the heat sink effect came into play and by 2008 it is reading over 1.5C too high. So the trend (sic) was exaggerated by 0.5C.

    Then say it cools back down to 1979 levels: It will read just the same as it did in 1979. The same effect that created the extra warming has reversed itself to produce extra cooling. It is still reading 1C too high, of course. But it was reading 1.5C too high in 2008. So it cooled by an extra 0.5C on the way down just the same as it warmed 0.5C on the way up.

    Waste heat (burn barrel, BBQ, AC exhaust) is an entirely different dynamic and may decrease trend (esp. at TMAX) as the effect swamps the sensor, even as the offset remains way too high.
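    Evan’s offset arithmetic can be written out directly. The absolute temperatures below are invented for illustration; only the 1.0 C and 1.5 C offsets come from his example.

```python
# A badly sited station reads 1.0 C too high in 1979.  Real warming of
# 1.0 C follows, and the heat-sink effect grows the offset to 1.5 C by
# 2008 (offsets per the example above; the 10.0 C base is arbitrary).
true_1979, true_2008 = 10.0, 11.0
offset_1979, offset_2008 = 1.0, 1.5

true_trend = true_2008 - true_1979                                      # 1.0 C real
measured_trend = (true_2008 + offset_2008) - (true_1979 + offset_1979)  # 1.5 C measured
spurious = measured_trend - true_trend                                  # 0.5 C extra

# If temperatures then fall back to 1979 levels, the offset shrinks back
# to 1.0 C and the station records 1.5 C of cooling: the same mechanism
# exaggerates the trend in both directions.
measured_cooling = (true_1979 + offset_1979) - (true_2008 + offset_2008)  # -1.5 C
```

    Note that the station reads too high at every point in time; it is the *growth* of the offset with warming, not the offset itself, that inflates the trend.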

  194. evanmjones,

    I think I got a good estimate of perturbations for a regional case thanks to the use of multiple high-quality, consistent proxies. I have not seen the phenomenon you describe on Tavg, but rather a warming bias gradually increasing whether in an overall situation of warming or cooling.

    In my opinion, the main effect of various pavements is not related to thermal inertia but to the lack of evaporation induced by the drainage. If you do the math, you will see that the additional energy available for sensible heat is about one third of the solar energy received. This is huge and more than enough (with energy consumption) to explain the bias. We can roughly estimate that bias when a change of site occurs.

  195. Nik,
    Infographics don’t help folks who ask questions that are plainly answered on the graphic and in the post (if you bother to read it), who demonstrate basic knowledge later in the post, and who then say no one will read this without those questions being answered. Hence I imply he may be a troll and give him homework, if you will, that answers the questions he already knows, for someone who may actually care but has a hard time reading the post for some bizarre reason. Kind of like how you seem a bit trollish at times. Nothing personal, but this post is pretty easy to understand and the hockey stick graph is pretty easy to figure out and duplicate, so maybe you didn’t actually read the whole thing, in which case I would cut you a tiny bit of slack.

  196. @phi

    On a regional basis, yes. All sorts of things can be going on there. I like the drainage construct. But you are using Leroy (1999). It’s insufficient for rating.

    But that is a mesosite issue. I am looking at microsite, only. For my purposes, if 10% or less area within 30 m. is heat sink (etc.), it makes it a Class 2 and that’s as good as it gets. My “local USHCN site” in Central Park has very good siting (Frankie and Johnnie, paired Hygro and MMTS backup.). And — in the middle of Manhattan — it has quite a low trend.

    Mesosite does matter. Trends in urban areas are a bit higher than the undeveloped. ’tis true. Cropland is even worse. “Rural” my patoot. It’s not even bucolic. But that is not what is making the GHCN trends a travesty.

    Sorry UHI guys. But not to fear, I’m just going to draw you in a little closer. Microsite is where it’s at. The place to be.

    Microsite is the new UHI. You heard it here first. Word up.

    By the way, our study period is only from 1979 – 2008, a nice patch of US warming, coinciding with the satellite record and compatible with Menne (2009). And that means the metadata is quite tight, as far as these things go. COOP as a whole has sparse metadata, but USHCN meta is far, far better.

  197. Anthony & Evan,
    Best of luck in your submission and I hope you can get it published soon and in a journal you’re happy with!

    Have you read our study of the Surfacestations results yet? Pdf here: http://oprj.net/articles/climate-science/11. Obviously we did not have access to your new Leroy, 2010 results, and so we were using the Fall et al., 2011 dataset. However, our results seem to roughly concur with your findings.

    We found poor siting increased the unadjusted trends by about 32% and TOB-adjusted trends by about 18%. The nominal “good-poor” difference for the fully-adjusted trends is close to zero, but this seems to predominantly be a result of the blending problem with the Menne et al. homogenization algorithm.

    We found two blending problems were occurring:

    1. As Evan mentioned above, because the good stations are in the minority, the homogenization algorithm tends to adjust the good stations to better match the poor stations, i.e., more siting biases are introduced to the good stations than are removed from the poor stations.

    2. Many rural USHCN stations are affected by urban blending in the fully-adjusted dataset. This introduces a general “warming” trend into the entire dataset, substantially increasing the average trends of the USHCN.

    For anyone who doesn’t want to read our full paper (it’s quite long & detailed), we wrote a shorter summary of our main findings on our blog here

    We have also uploaded all the data and code for our paper to FigShare: http://dx.doi.org/10.6084/m9.figshare.1004025

  198. Wow. I must be on a moderation list. My instant comment was the most innocent question I’ve ever posted. AFAIK, I’ve never posted an immoderate comment anywhere.

  199. Coming late to this, the major error may be faith in Steve Goddard. Perhaps someone might inquire about that last step and y axis scale?

    REPLY: Nick made some errors of his own, but we’ve got it all sorted now. Look for a new post showing the reason behind the spike. – Anthony

  200. Claimsguy says “This is interesting”

    It is, and the comments are interesting, and the identities of the commentators more so.
    Suffice it to say that the last point in Goddard’s graph is spurious, and there are problems with the rest of it as well.

  201. “It is, and the comments are interesting, and the identities of the commentators more so.”

    Very interesting, especially given how this discussion was started:

    “(Note to Mosher, Zeke, and Stokes – please make your most outrageous comments below so we can point to them later and note them with some satisfaction.).”

Comments are closed.