July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC

Andrew Freedman

I’ve noticed there’s a lot of frenetic tweeting and re-tweeting of this “sound bite” sized statement from this Climate Central piece by Andrew Freedman.

July was the fourth-warmest such month on record globally, and the 329th consecutive month with a global-average surface temperature above the 20th-century average, according to an analysis released Wednesday by the National Climatic Data Center (NCDC).

It should be noted that Climate Central is funded for the sole purpose of spreading worrisome climate missives. Yes, it was a hot July in the USA too, approximately as hot as July 1936 when comparing within the USHCN; no debate there. It is also possibly slightly cooler if you compare to the new state-of-the-art Climate Reference Network.

But, those comparisons aside, here’s what Climate Central’s Andrew Freedman and NOAA/NCDC won’t show you when discussing the surface temperature record:

Final USHCN adjusted data minus raw USHCN data Graph created by Steve Goddard

It isn’t hard to stay above the average temperature value when your adjustments outpace the temperature itself. There’s about 0.45°C of temperature rise in the adjustments since about 1940.

Since I know some people (and you know who you are) won’t believe the graph above, which was created by taking the final adjusted USHCN data used for public statements and subtracting the raw data straight from the weather station observers to show the magnitude of adjustments, I’ll also put up the NCDC graph that they provided here:

http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

But they no longer update it, nor provide an equivalent for USHCN2 (as shown above), because, well, it just doesn’t look so good.

As discussed in Warming in the USHCN is mainly an artifact of adjustments on April 13th of this year, this graph shows that when you compare the US surface temperature record to an hourly dataset (ISH) that doesn’t require a cartload of adjustments in the first place, and apply a population growth factor (as a proxy for UHI), all of a sudden the trend doesn’t look so hot. The graph was prepared by Dr. Roy Spencer.

There’s quite an offset in 2012, about 0.7°C, between Dr. Spencer’s ISH PDAT and USHCN/CRU. It should be noted that CRU uses the USHCN data in their dataset, so it is no surprise to find no divergence between those.

Similar adjustments, though not all of them, are applied to the GHCN, which is used to derive the global surface temperature average. That data is also managed by NCDC.

Now of course many will argue that the adjustments are necessary to correct the data, which has all sorts of problems with inhomogeneity, time of observation, siting, missing data, etc. But none of that negates this statement: July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC

In fact, since the positive adjustments clearly go back to about 1940, it would be accurate to say that: July was also the 864th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC.
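A quick sanity check of that month count, as a sketch: the assumption here is that the run of positive adjustments starts in August 1940 and ends in July 2012, which is one way of reading the “since about 1940” claim above.

```python
# Count calendar months inclusively between two (year, month) endpoints.
def months_inclusive(y0, m0, y1, m1):
    return (y1 - y0) * 12 + (m1 - m0) + 1

# August 1940 through July 2012:
print(months_inclusive(1940, 8, 2012, 7))  # 864
```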

Dr Spencer concluded in his essay Warming in the USHCN is mainly an artifact of adjustments :

And I must admit that those adjustments constituting virtually all of the warming signal in the last 40 years is disconcerting. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.

To counter all the Twitter madness out there over that “329th consecutive month of above normal temperature”, I suggest that WUWT readers tweet back to the same people that it is also the 329th or 864th consecutive month (your choice) of upwards adjustments to the U.S. temperature record.

Here’s the shortlink to make it easy for you:

http://wp.me/p7y4l-i66


154 thoughts on “July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC”

  1. “Lies. Damn lies. And statistics.” We in England have a statue to the guy who (apparently) said that – President Lincoln. It’s in Parliament Square.
    And, I suggest, he would recognise the import of the post above, that much – at least, of the touted CAGW is a statistical artefact.
    Have a wonderful one!

  2. How come we can’t get the Main Stream Media to report on this Graph from the NOAA??

    It would seem to me that the Graph itself would put much of the AGW hyperventilating to rest.

  3. What are the physical processes that underlie the upwards adjustment of the raw temperature records?

    After much reading, I have not yet heard one good explanation, based on a physical process for doing so.

    If you consider Urban Heat Island, then present temperatures should be adjusted down. But they adjust them up?

  4. Freedman??? I wonder if he sees the irony in there.
    I wonder if the warmistas get away with a lot of their “unprecedented”, “never seen”, etc. statements because they are aimed at, and devoured by, KIDS? Of course 100F is going to be all of the above, if you are only 20 yrs young.

  5. Well, when you realise the headline scary stuff was made by government, and/or, quasi-government bureaucrats just doing the bidding of their political masters, which is:

    “Make us look green and good when we increase taxes and give out insane subsidies for renewable energy to our buddies.”.

  6. It looks like during the ’90s they found other ways to manipulate the temperature – probably by killing thermometers – so they stopped adjusting upwards. Do they have a political reason to show ever-increasing temperatures? Maybe they get paid more on hot days?

  7. Anthony:

    Your post pertains to the frequent adjustment to the U.S. temperature record by NOAA/NCDC. However, the same problem exists for all the global temperature data sets obtained from ‘surface’ measurements.

    The frequency of these adjustments prevented publication of a paper of which I was lead author. That paper shows the global temperature data sets are ‘not fit for purpose’. An email about this from me was one of the documents leaked from CRU by ‘climategate’. Hence, I made a submission about it to the UK Parliament’s Select Committee that did the so-called investigation of (actually a whitewash of) ‘climategate’.

    My submission explains how the frequent changes to the global temperature data sets prevented publication of the paper, and why those data sets are not ‘fit for purpose’. It includes as Appendices the pertinent email and a draft of the paper. It is on the UK Parliamentary Record where it can be read at

    http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm

    Richard

  8. Dr. Deanster: “How come we can’t get the Main Stream Media to report on this Graph from the NOAA??”

    Because that would go against the MSM’s political agenda. The MSM is not about reporting truth or fact. The MSM is about disseminating propaganda and driving society to a predetermined goal. Global Warming has never been about man-made CO2 and its claimed effects on earth’s climate.

    Global Warming is about driving society to an elitist group’s idea of Utopia. The United Nations saw Global Warming as a way to redistribute wealth, power, and control via guilt trips and fines, using a well-orchestrated ruse. Naturally, the Greenies, who despise wealth and power and longed for Utopia, were all too eager to join in the SCAM. As for the so-called scientists of Global Warming, they jumped on board since it meant nearly unlimited funding forced out of Taxpayers. Global Warming continues now because they are all in so deep; some to the point of prosecution for waste, fraud, and abuse.

  9. Smokey on August 19, 2012 at 2:14 pm

    That’s nice Smokey, pity we don’t live in the
    Lower Stratosphere!
    RSS shows a trend of 0.133K per decade down here, but you already know that, don’t you….

  10. There is a very clear description of the reasons behind the adjustments here:

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

    You really do not address the reasons for the adjustments in this post. If you want to make the point that the adjustments are not justified, or not correct, you need to give arguments for that!

    REPLY: And yet, it still doesn’t change the headline whether the adjustments are valid or not. – Anthony

  11. Could someone please tell me why the general public is never told that the temperature records are changed by government activists to make the temperatures look hotter? If FaceBook was cooking the corporate records in such a fashion, someone would go to jail. How do Hansen and crew get away with this so blatantly? Serious question: how do they get away with it?

  12. I believe 329 is a lie.

    Here are the most recent Januarys:

    Year Temperature Rank
    2012 36.49 115
    2011 29.87 35
    2010 30.92 56
    2009 31.19 59
    2008 30.83 55
    2007 31.66 67
    2006 39.71 118
    2005 33.61 95
    2004 30.61 51

    Look at all the low rankings!

  13. Well, we are still creeping out of the little ice age. What would you rather have, a frost every month?

  14. Smokey, the global satellite record seems to reveal the real source of the current USHCN U.S. data adjustment amount. The drop in the satellite record from 1980 looks a lot like an inversion of the increasing USHCN adjustments upward. Perhaps they monitor the most accurate temperature measurement from the satellites to see what they have to add each month to keep up with Nature.

  15. Jan 1901 to Jul 2012 = 1339 months.

    1200 months for 1901 through 2000, leaving 139 months for the start of the 21st century.

    So far, then, 139 of the 21st century’s 1200 months (about 11%) are in the books, all above the average of the 20th century. In another 1061 months, we’ll see if this century is hotter or not.

    But now to the 20th century values. Their values imply that the last 190 months of the 20th century were warmer than the average of the 1010 months prior. Since they like ratios, a quick calculation means there was at least a 1010:190 (101:19) ratio of cooler months (below the average) to warmer months (above the average).

    Wow. Sure seems that it’s worse than we thought.

    Well, it is.

    For example, GISS says that the best ESTIMATE for the absolute global mean (based on an averaging period of 1951-1980) is 57.2 deg-F. So that’s their “zero”, the point at which they’d say a particular month is above the 20th century average.

    But this article is based on NCDC data, and their page says the 20th century average is 60.4 deg-F. And they used the entire century for their averaging period.

    NCDC’s 20th century average is 3.2 degrees HIGHER than GISS’s 20th century average.

    Seems that none of the “climate scientists” can say exactly what the 20th century average was – which makes the whole “it’s been the 329th consecutive month with a global-average surface temperature above the 20th-century average” an exercise in stupidity.
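For readers who want to follow the arithmetic in the comment above, here is a minimal sketch. The 60.4 and 57.2 deg-F baselines are simply the figures quoted in the comment, not independently verified here, and the 20th century is assumed to span Jan 1901 through Dec 2000, with the 329-month streak ending in July 2012.

```python
# Month bookkeeping behind the commenter's ratios.
months_20th = 100 * 12               # 1200 months in the 20th century
months_elapsed_21st = 11 * 12 + 7    # Jan 2001 .. Jul 2012 = 139 months
streak = 329                         # consecutive above-average months

# Months of the streak that fall inside the 20th century:
months_in_20th_above = streak - months_elapsed_21st              # 190
months_in_20th_below_or_at = months_20th - months_in_20th_above  # 1010

print(months_in_20th_above, months_in_20th_below_or_at)  # 190 1010

# Baseline discrepancy quoted in the comment (deg F, NCDC minus GISS):
print(round(60.4 - 57.2, 1))  # 3.2
```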

  16. Looking at those two graphs you might wonder if they changed the correction from degF to degC but forgot to adjust the numbers. Reminds me of when we decimalised the pound sterling and a lot of shops simply changed the d’s to p’s. The effect was similar.

  17. I read the adjustments description at the site given by Peter Roessingh, and it is clear that they are happily adjusting against the UHI when sites are moved from urban to rural or airports. But that presupposes that the UHI was constant at the old site throughout its history, when in fact the UHI would have been gradually increasing throughout. The old sites would have been well-placed at their inception, with subsequent urban development necessitating the eventual shift of sites. I daresay that process of steady UHI build-up throughout the life of the old sites has not been adjusted for in the official datasets, as that would constitute a significant downwards adjustment, negating much of the existing upward adjustments.

  18. Smokey says: August 19, 2012 at 2:14 pm
    “USHCN is only the U.S. ["adjusted"] data. But the central question concerns global warming. So let’s look at the global satellite record, which is by far the most accurate temperature measurement.”
    Smokey, you should understand the data you cut & paste before drawing any conclusions. “TLS” data is for the stratosphere. Since the total energy from the earth is about constant, a drop in temperature (and hence a drop in energy from the stratosphere) would be expected in order to counteract the warming (and increased energy output from the ground).

    The DROP that you point out for the STRATOSPHERE is, in fact, exactly what the IPCC expects to accompany WARMING in the TROPOSPHERE. So your data is supporting the IPCC’s models and conclusions. Thanks! :-)

    Climate model results summarized by the IPCC in their third assessment show overall good agreement with the satellite temperature record. In particular, both models and the satellite record show a global average warming trend for the troposphere (models range for TLT/T2LT: 0.06 to 0.39°C/decade; avg 0.2°C/decade) and a cooling of the stratosphere (models range for TLS/T4: −0.70 to −0.08°C/decade; avg −0.25°C/decade)
    Wikipedia

  19. To Dr Deanster
    You stated “How come we can’t get the Main Stream Media to report on this Graph from the NOAA??
    It would seem to me that the Graph itself would put much of the AGW hyperventilating to rest.”

    I believe you answered your own question. They are part of the progressive team pushing for the real agenda of controlling people’s actions. I think it has long ago passed from being about saving the planet. The climate, along with environmentalism in general, has just become a means to an end.

  20. And why is Mr. Watts talking about the US surface temperatures and the adjustments in the USHCN data, although the quote he cites is about the globally averaged surface temperature anomaly?

    I don’t know what Mr. Watts’s argument is supposed to be. I don’t see it. Is he suggesting that the statement in the quote is false because there were adjustments made to the USHCN data set (which represents only ca. 1.6% of the area of the whole globe and doesn’t matter much for the globally averaged temperature anomaly anyway, even if the adjustments in the USHCN data were significantly flawed, an assertion for which Mr. Watts still has to provide the evidence)?

    REPLY: Oh gosh, Mr. Perlwitz, why don’t you write up the answer on your new GISS approved smear page? I wouldn’t want to offend your delicate sensitivities here. Either way, whether the adjustments are valid or not, it still doesn’t change the headline. – Anthony

  21. Well, Anthony has praised the new CONUS USCRN network as being the REAL data.
    This site has some of those stations plotted

    http://climateandstuff.blogspot.co.uk/2012/08/uscrnusrcrn-conus-data.html

    It seems the data shows that the average temperature increase across some 40 stations is +0.3°C/decade, and the maximum figures show +0.6°C per decade, over the 2002 to 2012 range.

    It also shows a 6.8 deg C over since 2010 (somewhat irrelevant but interesting)

    REPLY:
    Yes, but if I tried to make a claim for cooling with 3 years’ worth of data, the climate attack dogs would be all over me. Even Ben Santer says 17 years is the minimum needed – Anthony

  22. Can’t for the life of me understand why the adjustment quantity is continuously increasing (and aligning with the rise in the record, just coincidentally?). Those events which warrant an adjustment happen in an instant and would register as a (near) constant shift thereafter.

  23. It is truly amusing that these adjustments are somehow causing glaciers to melt and numerous species to shift their habits. How do they do that?

  24. Peter Roessingh says:
    August 19, 2012 at 2:47 pm
    There is a very clear description of the reasons behind the adjustments here:

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

    You really do not adress the reasons for the adjustments in this post. If you want to make the point that the adjustments are not justified, or not correct, you need to give arguments for that!

    ========================================
    From the link you provided, the second sentence of NOAA’s Introduction says “The USHCN is comprised of 1221 high-quality stations from the U.S. Cooperative Observing Network…….. ”

    Peter, do you think that statement is justified or correct?

  25. Peter Roessingh: “There is a very clear description of the reasons behind the adjustments here:
    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html”

    Sure, let’s give things the most minimal inspection possible. The UHI adjustment procedure is mentioned at your link as the final step of their fiddling, where it ‘uses the regression approach outlined in Karl, et al. (1988)’.

    But this is meaningless by itself. Are they using his 1988 calculated average effect based on the siting, network, climate, technology and population at the time? Are they using his exact derived coefficients regarding the individual population/UHI correlation? Have they ever rebased the thing? And if yes or no, do they do it on a yearly basis to capture the varying nature of the human hives the network stations are scattered about in?

    I don’t know, as they don’t say, and they leave too much open to interpretation. But what is clear is that Karl found just what Goodrich found in ’77[1]: the UHI effect that completely escaped Muller’s attention ranges from an average of 0.06 for a pop of 2000 to 2.57 for a pop of 10 million, where ‘average’ is the alteration of the midpoint of Tmin and Tmax as used everywhere else in the field.

    But this poses a significant problem when facing population growth and station resitings. Let me state here that I am unaware of what alterations have occurred in the situation of the population distribution with respect to the COOP stations. I haven’t bothered to look since I don’t get paid to do this nonsense.[2] But the population in the US (including everything, not just CONUS) is 128.5% of what it was in 1988. But let’s assume there are no siting differences with respect to the distribution and that only population has changed, equally distributed as an average affair. Then all of the biases for UHI will be on the increase in general. How much? That depends. Assuming a decadal increase of 0.2 C, a city of pop 100,000 (1988) will explain just less of the difference, while a city of pop 200,000 will have heated *more than* the decadal increase over the last 24 years.

    All of this is aside from any issues of siting alterations, or the very plain fact that the population in city centers has outpaced the population in rural areas over the same time frame. Such that if the USHCN is doing anything at all close to what Karl (1988) is on about — whether or not they are rebiasing or simply using the 1988 regression curves — then we know that there is an increasing *negative correction* against the unadjusted data.

    Now why does this matter? Because the UHI correction is the very last thing performed in the entire series of adjustments, and occurs only after data selection and cleaning (outliers), homogenization, TOB (with all the nonsense that implies), MMTS bias corrections, and backfill of missing data by interpolation. Every one of which, save MMTS, precedes Karl 1988 (Quayle 1991), and except for that one, they are all authored by Karl.

    But here’s the kicker: Aside from the MMTS instrumentation correction, there are no other possible ways to introduce a constant and consistent bias in the measured temperatures *other than the UHI correction* of Karl 1988. Such that unless there are significant shenanigans in the siting changes over time, it is *absolutely mandatory* that the correction over time will be *increasingly negative*. Or it’s just plain ol’ fraud.

    But if we assume that all the positive difference is due to siting changes, then it must be that the current siting is *more rural on a population basis* than it was in 1988. But the average national effect of an 85% rural network composition in 1988 was estimated to be 0.06 C on the high side. Such that if we had moved *every single station to the middle of nowhere by now*, then the total *possible* positive correction with respect to 1988 has to be ~0.06 C. And yet the correction over that time, by eyeballing the graphs above, is nearly 0.2 C.

    The entire notion is unsupportable without appeal to a desperately unwarranted cock-up of epic magnitude in the period between 1988 and the present. Full stop. But all that tells us is that the USHCN adjustments are entirely out of line with respect to the only time-sensitive changing bias in their data-fiddling routines. E.g., the post-adjust temps are pure GIGO, by accident or by fraud.

    Some closing notes: Karl (1988) defines ‘rural’ — for the baseline — as populations of less than 2000, with an average of 700. He also hazarded that the regression values were no good on a global basis, and strongly emphasized that the method could not be used to predict UHI per station, due to the variances involved (his R values were in the range of roughly 0.5 to 0.6), and that it should only be used for *regional* alterations. The regions he provided a rough map of, but did not go into the justifications for, in the 1988 paper. There is more, of course, and some questions to be asked. But on the whole there is nothing terribly offensive about Karl (1988) other than considerations about accuracy and error bars.

    [1] I believe it was 77. It’s a thread just a few days back on UHI effects in California.
    [2] Big Oil can send me funding propositions care of the Koch Brothers, Fox News, Anthony Watts, or any other Secular Satan that is supposed to be a front-man for ‘denialist’ propaganda.
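To make the population scaling discussed in the comment above concrete, here is a hypothetical log-linear interpolation through the two endpoint magnitudes it quotes (0.06 C at a population of 2,000 and 2.57 C at 10 million). The functional form and coefficients are illustrative assumptions, not Karl (1988)’s published regression.

```python
# Hypothetical log-linear fit through the two UHI magnitudes quoted in
# the comment. Illustration only; NOT Karl (1988)'s actual coefficients.
import math

P_LO, UHI_LO = 2_000, 0.06         # quoted: rural endpoint
P_HI, UHI_HI = 10_000_000, 2.57    # quoted: large-city endpoint
SLOPE = (UHI_HI - UHI_LO) / (math.log10(P_HI) - math.log10(P_LO))

def uhi_estimate(pop):
    """Interpolated UHI bias (deg C) for a station in a town of `pop` people."""
    return UHI_LO + SLOPE * (math.log10(pop) - math.log10(P_LO))

# Under this toy model, a town growing 28.5% (the national growth figure
# cited above) picks up extra warm bias:
extra = uhi_estimate(128_500) - uhi_estimate(100_000)
print(round(extra, 3))  # about 0.074
```

Note that because the fit is linear in log10(population), a fixed percentage of growth adds the same extra bias regardless of town size, which is consistent with the comment’s point that general population growth pushes all stations’ UHI bias upward together.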

  26. mbw says:
    August 19, 2012 at 4:20 pm

    It is truly amusing that these adjustments are somehow causing glaciers to melt and numerous species to shift their habits. How do they do that?

    Simple answer – They don’t because they haven’t

  27. If, as one would expect given its importance, the monitoring systems are getting better, why are the largest adjustments being made to the latest data?

    Surely, if the new systems are automatic, well-sited and therefore free of biases, they need LESS adjustment rather than more.

    As one of our famous Oz politicians is well known for saying – “Please explain!”

  28. Anthony is celebrating the record number of adjustments. Take it or leave it. I say Hoorah! for the record number of upward adjustments. I see many more to come. Everyone making those upward adjustments should get a green jacket like at the Masters Golf Tournament.

  29. There are undoubtedly reasons for the adjustments. However, the adjustments include assumptions and estimates. When the selection of these values determines whether there has been a recent warming trend or not, please excuse our skepticism. I don’t think there has been an intentional effort to raise temperatures, but it seems likely that once an adjustment was found that increased temperatures, it was deemed satisfactory and there wasn’t much work to verify assumptions and estimates. Finally, I have asked many people on many occasions where I can find a record and explanation for the adjustments on a station-by-station basis. If the adjustments are so easily defended, why not get them out into the public domain? The reason they aren’t could just be that it hasn’t been a priority and people are busy. But it should be a priority now, because commenters on this blog are representative of millions of voters who are demanding accountability for the use of federal funds on climate issues. Resistance to getting this done simply fuels suspicion, which is already at a high level.

  30. @James Allison
    That is not the point. Anthony defends his headline, but does not address the reasons behind it.

  31. I think the total adjustments are even higher in the most recent several months, closing in on over 0.8C since 1934 (less than that since 1895).

    The problem is that no one knows what the total adjustments are. Do you know? Does anyone? Does Tom Karl even know? We could ask him perhaps, but I don’t think he ever responds to questions from non-Team players.

  32. mbw says:
    August 19, 2012 at 4:20 pm
    It is truly amusing that these adjustments are somehow causing glaciers to melt and numerous species to shift their habits. How do they do that?
    ================================================
    Then tell us what should happen instead to “numerous” species and glaciers as the Earth continues to climb out of the LIA?

  33. So Climate Central is yet another P.R. wing of the catastrophists.

    As if SEJ weren’t enough to inculcate “right-thinking”. On their front page they write:

    “I have a built-in bias against reporters who have axes to grind. I think there are reporters that allow their own bias to encroach on their journalism, and that’s a crime against journalism.” — Don Hewitt

    Flip over to their page on Climate Change and read the predetermined views to be propagated. It also does a “nice” innuendo on WUWT:

    Watts Up With That is one of the more civil and well-read of the denier blogs. It is not reliable as a source of factual information. It does not disclose its funding sources. Anthony Watts, its proprietor, has worked as a broadcast weatherman for years but has no degree.

    (bold mine)

  34. Peter Roessingh,

    I thought the reasons behind Anthony’s headline were quite clear: USHCN and other government agencies “adjust” the temperature record. They either lower past temps to show a scary rise, or they show artificially higher current temperatures.

    That is not OK. That is dishonest, no? Flagrantly dishonest! You, as a taxpayer, are paying for that deceptive propaganda. Is that A-OK with you? Because it’s not OK with me.

  35. Smokey on August 19, 2012 at 7:12 pm

    No smokey, the adjustments are there for a reason and are clearly explained.
    What is dishonest is the TLS graph you put up earlier in this thread, and called it global temps.

  36. Jan P Perlwitz says:
    August 19, 2012 at 4:12 pm

    “And why is Mr. Watts talking about the US surface temperatures and the adjustments in the USHCN data, although the quote he cites is about the globally averaged surface temperature anomaly?”

    I guess Mr. Perlwitz GISS thinks that the U.S. is not part of the globe [heh]… /sarc

    (I wish that instead of blogging, GISS would turn their attention to addressing real problems with the crappy Model E and other “products” they produce. I wonder what government charge code they use for blogging anyway…hmmm).

  37. So in the 21st century, with literally millions of dollars thrown at it and with computer power beyond the dreams of those who lived in the early to mid 1900s, we can’t measure surface temperature as accurately as some bloke with a mercury thermometer in 1940? Hardly surprising that a lot of people don’t believe the temperature adjustments are valid.
    If these adjustments are really true, then it is a scandal of stupendous proportions that with all of our technology we can’t even measure the temperature correctly!
    On the other hand, if the adjustments are not required then that is an equal scandal.
    This is the equivalent of pleading guilty either way!

  38. Bernd Felsche: It’s cute how that bolded phrase is linked. Presumably the selected phrases are ones they wish associated by search engines with the link.

  39. Smokey says:
    August 19, 2012 at 2:14 pm

    USHCN is only the U.S. ["adjusted"] data. But the central question concerns global warming. So let’s look at the global satellite record, which is by far the most accurate temperature measurement.

    Sorry for my dumb, but what is TLS as labeled on the graph?

  40. mbw wrote: “It is truly amusing that these adjustments are somehow causing glaciers to melt and numerous species to shift their habits. How do they do that?”

    We only have good ice data since 1979, and there is good evidence that the ice has reached this level before. Also, Hansen himself used to attribute a significant amount of ice melt to the black soot that settles on it, but that is hushed now.

  41. Daveo says:
    August 19, 2012 at 7:35 pm
    Smokey on August 19, 2012 at 7:12 pm

    No smokey, the adjustments are there for a reason and are clearly explained.
    What is dishonest is the TLS graph you put up earlier in this thread, and called it global temps.

    That is so much BS. I and many others have seen the Menne 2011 presentation. The only positive adjustment would be sensor movement (TOB is done as of the MMTS migration). After 2000, when they said they were done, we have had 12 more years of adjustments that ONLY go UP. In my industry this is called “tampering” and the product gets recalled. Unless you can prove to me that the population around the sensors has been declining, your “CLEARLY EXPLAINED” carries no value. I looked at this “CLEARLY EXPLAINED” and it is nothing more than a description of papers that many of us are quite aware DO NOT justify the adjustments. If anyone were actually looking, all of this data product would have to be recalled.

    As population increases around a sensor, temperatures go DOWN in your world (which is why one would bump the temperature UP)? TOB adjustments are no longer acceptable; we have seconds-of-observation frequency with the MMTS systems. What is your or Jan’s defense of such obvious tampering?

  42. Zeke Hausfather says:
    August 19, 2012 at 7:59 pm
    “Final USHCN adjusted data minus raw USHCN data Graph created by Steve Goddard”

    What?

    If you have a better representation of these *UP* adjustments, please, indulge us…

  43. Walter Dnes: is it possible for the general public to get ahold of raw and adjusted GHCN data for analysis?

    Raw and corrected data is available for most major temperature series, see http://www.realclimate.org/index.php/data-sources/ for the links. However:

    Regarding the “Headline”, the primary upward adjustment is for time-of-observation (TOBS) changes, as seen in http://wattsupwiththat.files.wordpress.com/2012/04/ushcn-adjustments.jpg – adjustments needed because stations (mostly rural) shifted the time of day at which they take their measurements to earlier in the day. See Vose 2003 (http://www.agu.org/pubs/crossref/2003/2003GL018111.shtml) and Karl et al 1986 (http://adsabs.harvard.edu/abs/1986JApMe..25..145K), where the effects of these shifts are discussed – they artificially bias the record colder.

    TOBS is clearly seen in the records when looking at individual stations (offsets when recording times change), easily replicable via Monte Carlo testing, and if ignored you will be working with data that has known errors. This is also one of the issues with the Watts et al unsubmitted paper – TOBS primarily affects rural (less staffed) stations, and the use of uncorrected data in that work makes separating out station differences far more challenging.

    In the meantime, while technically correct, the tone of the opening post insinuates that the temperature record has been artificially modified, rather than (as per the literature and the records themselves) having had errors corrected to provide the best data possible. This is seen in many of the commenters happily extending that into accusations of lies, deception, and other nonsense.

    I consider that unfortunate, and (IMO) the opening post simply rabble-rousing rhetoric.

  44. David Sanger says:
    August 19, 2012 at 8:25 pm

    Satellite record for Lower Troposphere from MSU:
    Trend +0.133K/decade

    I notice you like simple linear trends, regardless of what the data actually follows. 8<)

    So, if the actual thermometer readings need to be "adjusted" upwards to approximate a linear trend, then what does the raw data do?

    [Can we] justify all of the consistent, repeating positive changes by NOAA, 24 years later, based on a single, unverified, unrepeated 1988 paper by one person? (Well, two papers: Hansen is spreading his NASA-GISS temperature trends over 1200 km based on HIS 1987 paper…)

  45. Maus says:
    August 19, 2012 at 4:48 pm

    Compelling information, good analysis and summary.

    Thank you.

  46. It is misleading to put up a headline which in effect questions the authenticity of the method but gives no explanation of the reasons.

    I suggest the reason the adjustments are positive is that many stations have been/are moved to areas with less localized influence and hence are cooler. To make the stations comparable, a positive adjustment would need to be made, which then influences the overall average adjustment.

    It would seem overly trite and implausible to suggest that people make adjustments, openly declare them, and yet deliberately bias them.

  47. July was also the 329th consecutive month of positive upwards adjustment

    Am I interpreting this correctly that each of the last 329 months had a 50/50 chance of being adjusted up or down, depending on circumstances, but it was always up for 329 consecutive months? The chance of this happening by pure luck is 1 in 2^329 ≈ 1 x 10^99. That is more than the number of atoms in a billion billion universes!
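    Werner's arithmetic checks out under his stated coin-flip model; a few lines verify it (noting that the model's independence assumption is exactly what is in dispute, since one systematic correction such as TOBS moves every month the same way):

```python
# If each of 329 months were independently adjusted up or down with
# probability 1/2, the chance of all 329 being upward is 1 in 2**329.
odds = 2 ** 329
print(f"2**329 is about {float(odds):.3e}")  # about 1.094e+99
```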

  48. Regarding my last post, my apologies – the graph I linked to (here at WUWT) does not appear to have the correct labeling: TOBS corrections have been erroneously labeled as “station location quality adjustments”. See http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif for a correctly labeled graph, with the individual effects of the various corrections on the record shown at http://www.ncdc.noaa.gov/img/climate/research/ushcn/mean2.5X3.5_pg.gif

    From the USHCN page, http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html :

    Applying the Time of Observation adjustment (black line) resulted in approximately a 0.3F warming from the late 1960’s to the 1990’s. The shift from Cotton Region Shelters to the Maximum/Minimum Thermometer System in the mid-1980’s is clearly evident in the difference between the TOBS and the MMTS time series (red line). This adjustment created a small warming in the US annual time series during the mid to late 1980’s. Application of the Station History Adjustment Procedure (yellow line) resulted in an average increase in US temperatures, especially from 1950 to 1980. During this time, many sites were relocated from city locations to airports and from roof tops to grassy areas. This often resulted in cooler readings than were observed at the previous sites. When adjustments were applied to correct for these artificial changes, average US temperature anomalies were cooler in the first half of the 20th century and effectively warmed throughout the later half. Filling in missing data (blue line) produced cooler temperatures prior to 1915. Adjustments to account for warming due to the effects of urbanization (purple line) cooled the time series an average of 0.1F throughout the period of record.

    Raw and corrected data are shown in http://www.ncdc.noaa.gov/img/climate/research/ushcn/rawurban3.5_pg.gif – note that the adjustments are actually quite small compared to the observed temperature swings.

  49. KR says:
    “…adjustments needed due to stations (mostly rural) shifting when they take their measurements to earlier in the day.”

    Have the vast majority of US temperature readings shifted to earlier in the day? Is a similar adjustment in the opposite direction made for those stations where measurements have shifted to later in the day? Have the temperature readings continued to be taken earlier and earlier in the day to justify making ever larger adjustments?

    TonyM says:
    “I suggest the reasons that the adjustments are positive is that many stations have been/are moved to areas with less localized influence and hence are cooler.”

    I find it hard to believe that stations that have been moved to cooler areas outnumber the ones that used to be rural and have gradually become more urban and thus warmer. Is there any evidence that UHI has become less of a factor for the majority of US stations?

  50. TonyM says:
    August 19, 2012 at 8:54 pm

    I suggest the reasons that the adjustments are positive is that many stations have been/are moved to areas with less localized influence and hence are cooler. To make the stations comparable a positive adjustment would need to be made which then influences the overall average adjustment…

    Prove to me with the *CLEARLY EXPLAINED* USHCN that these were moved stations over the past 10 years. If you can, I will donate $10 to your charity. Not enough? Imagine that.

    Moved stations CANNOT outweigh UHI, siting, etc.

    Jan P Perlwitz:

    I am sure there is a supercomputer at your disposal that can give us the list of site moves since 2000?

  51. For some of the latest work on these adjustments it’s worth looking at Williams et al 2012 (ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/williams-menne-thorne-2012.pdf), where they examine random pairwise stations, noting that:

    When applied to the real‐world observations, the randomized ensemble reinforces previous understanding that the two dominant sources of bias in the U.S. temperature records are caused by changes to time of observation (spurious cooling in minimum and maximum) and conversion to electronic resistance thermometers (spurious cooling in maximum and warming in minimum).

    It could be argued that it’s better to look at raw temperature data than data with these various adjustments for known biases. It could also be argued that it’s worth not cleaning the dust and oil off the lenses of your telescope when looking at the stars. I consider these statements roughly equivalent, and (IMO) would have to disagree.

    REPLY: Perhaps if you had the benefit of knowing what I do, you’d understand better. For example, demonstrate that each station’s actual TOBS change-time data was used to adjust the temperature data for that station.

    To use the cleaning analogy, would you use the appropriate lens certified cloth/paper/brush for the job, or would any old random rag do for cleaning a precision lens? – Anthony

  52. Werner Brozek says:
    August 19, 2012 at 9:12 pm
    July was also the 329th consecutive month of positive upwards adjustment

    Am I interpreting this correctly that each of the last 329 months had a 50/50 chance of being adjusted up or down, depending on circumstances, but it was always up for 329 consecutive months? The chance of this happening by pure luck is 1 in 2^329 ≈ 1 x 10^99. That is more than the number of atoms in a billion billion universes!

    Yup. That is how auditors in the real world find the *huh?* or “What do you think you are doing?”.

  53. KR says:
    August 19, 2012 at 9:46 pm
    For some of the latest work on these adjustments it’s worth looking at Williams et al 2012 (ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/williams-menne-thorne-2012.pdf), where they examine random pairwise stations, noting that:

    When applied to the real‐world observations, the randomized ensemble reinforces previous understanding that the two dominant sources of bias in the U.S. temperature records are caused by changes to time of observation (spurious cooling in minimum and maximum) and conversion to electronic resistance thermometers (spurious cooling in maximum and warming in minimum).

    Okay, but when does the TOB stop being an influence? God, this is just something else.

    When, the, fancy, electronic, devices, replace, the, old, nasty, mercury, sensors, that, old, man, Cricket, records, on, his, stone, and, charcoal… does the TOB adjustment(S) stop?

    So, this “spurious cooling in maximum” is okay with you. Have you verified this with a mercury device? Do we just build cities around all of our agriculture to minimize the *HOT* rural effect? It does not work both ways?

    If you understood the literature, even a little, the biggest impact was on the 6am LOW in urban areas. Also, there is apparently a LAMINAR issue with big tall buildings: nature’s wind cannot flow over such rough surfaces without leaving a little heat behind. Oh my… gasp!

    The only thing “spurious” is the fact that buildings in a CITY confound measurements, and therefore must be excluded from measurement records.

  54. Dr. Deanster says:
    August 19, 2012 at 1:59 pm

    How come we can’t get the Main Stream Media to report on this Graph from the NOAA??

    It would seem to me that the Graph itself would put much of the AGW hyperventilating to rest.
    ____________________________________
    Because there is a lot of money to be made off CAGW and the owners of the press are linked to those fraudsters in one way or another. At minimum through advertising dollars, or in the case of General Electric, outright ownership.

    A look at the ownership/boards of our news media is enlightening.

  55. @RACookPE1978 the comment was made earlier by @Smokey “let’s look at the global satellite record,” and then he [she] gave a link to a graph which, as pointed out by @Daveo and @tjfolkerts was actually “Lower Stratosphere” temperatures, which as expected are declining.

    The source for the graph, and also the TLT “Lower Troposphere” graph I linked to (which shows an upward trend) is this page of satellite data from Remote Sensing Systems:

    http://www.ssmi.com/msu/msu_data_description.html#msu_decadal_trends

    Also of interest is an interactive presentation of recent and historical satellite temperature readings (since 1980 or so I think) from different channels corresponding to frequencies and altitudes.

    There’s quite a lot of info on the site also about their methods and algorithms.

  56. Anthony Watts writes:

    REPLY: Oh gosh, Mr. Perlwitz, why don’t you write up the answer on your new GISS approved smear page?

    That I had set up a “GISS approved smear page” is a made up assertion by Mr. Watts.

    Either way, whether the adjustments are valid or not, it still doesn’t change the headline.

    So, apparently there is no argument by Mr. Watts in his posting regarding the quote cited by him, which was about the statistics of the globally averaged surface temperature anomaly.

    And my reply here to Smokey’s comment, which he made in http://wattsupwiththat.com/2012/08/19/july-was-also-the-329th-consecutive-month-of-positive-upwards-adjustment-to-the-u-s-temperature-record-by-noaancdc/#comment-1061075, has vanished altogether, so it appears. Well, Mr. Watts can do this at his discretion, as he likes, since it’s his blog.

  57. Frank K. quotes me and replies:

    “And why is Mr. Watts talking about the US surface temperatures and the adjustments in the USHCN data, although the quote he cites is about the globally averaged surface temperature anomaly?”

    I guess Mr. Perlwitz of GISS thinks that the U.S. is not part of the globe [heh]… /sarc

    Since I already had said something about this in the comment to which Frank K replies, I guess he is overburdened with reading more than one sentence. /sarc

  58. Daveo says:
    August 19, 2012 at 7:35 pm

    Smokey on August 19, 2012 at 7:12 pm

    No smokey, the adjustments are there for a reason and are clearly explained.
    What is dishonest is the TLS graph you put up earlier in this thread, and called it global temps.
    ___________________________________
    So how come, when these “Adjustments” were taken to a court of law in New Zealand, there was so much dancing around?

    From the “A goat ate my homework” excuse book: More major embarrassment for New Zealand’s ‘leading’ climate research unit NIWA

    NZ sceptics v. NIWA – summary of case

    Affidavits are for ever

    Don’t mention the Peer Review! New Zealand’s NIWA bury the Australian review:
    The Australians must have said something awful.

    And the same dancing around happened in Australia –
    Senator Cory Bernardi put in a Parliamentary request to get our Australian National Audit Office to reassess the BOM records.

    Threat of ANAO Audit means Australia’s BOM throws out temperature set, starts again, gets same results

    And of course there is the Phil Jones of the CRU excuse
    The Dog Ate Global Warming

    The Climategate e-mails shed New Light on Jones’ Document Deletion Enterprise

    Now you want us to believe a rabid Luddite activist like Hansen would not tinker with the US temperature data to support his obsession with banishing coal “Death Trains” and all other carbon based fuels?

  59. The headline looks wrong. “July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC”.

    Certainly the TOB corrections are mostly upwards, but looking at the NOAA chart there were drops in the early 90s, a couple in the late 90s and also one in the late 80s.

    I mean I understand that you need a soundbite that’s as strong as the science-based climate side, but just using a false one doesn’t bring any reason to your cause.

    I guess you figured that wasn’t important.

  60. TonyM says:
    August 19, 2012 at 8:54 pm

    It is misleading to put a headline up which in effect questions the authenticity of the method but does not give an explanation about the reasons.

    I suggest the reasons that the adjustments are positive is that many stations have been/are moved to areas with less localized influence and hence are cooler.
    __________________________
    No, that is incorrect. They tossed out a lot of stations but kept airports, which I guess they call “Rural”.

    The ‘Station drop out’ problem
    Graphs of stations dropped by region.
    Graph 1

    Graph 2

    WUWT: On the “march of the thermometers”
    …Most of the station dropout issue covered in that report [ compendium report ] is based on the hard work of E. M. Smith, aka “chiefio“, who has been aggressively working through the data bias issues that develop when thermometers have been dropped from the Global Historical Climate Network.
    …the GHCN station dropout Smith has been working on is a significant event, going from an inventory of 7000 stations worldwide to about 1000 now, and with lopsided spatial coverage of the globe. According to Smith, there’s also been an affinity for retaining airport stations over other kinds of stations. His count shows 92% of GHCN stations in the USA are sited at airports, with about 41% worldwide….

    These individual weather stations show how really bad airports are:
    Here is a quick look at the only city & close-by airport listed for North Carolina. The city is on the North Carolina/Virginia border and right on the ocean. Take a look at the city vs the airport: Norfolk City and

    Norfolk International Airport

    North to south through the middle of the state
    Raleigh NC

    Large city in the middle of NC – Fayetteville NC

    South – Lumberton NC

    Other Coastal Cities:
    Middle – Elizabeth City

    South – Wilmington NC

    “small cities”
    North – Louisburg

    South – Southport

    Note that Raleigh/Durham airport at 4:12 AM is 2-4F warmer than the surrounding stations. A closer look: WUWT: RDU’s paint by numbers temperature and climate monitoring

    Here is the raw 1856-to-current Atlantic Multidecadal Oscillation. Amazing how the temperatures follow the Atlantic Ocean oscillation as long as the weather station is not sitting at an airport, isn’t it?

  61. Jan P Perlwitz says:

    August 19, 2012 at 4:12 pm

    What he is saying is that the adjustment of past, present or future data is an unforgivable sin. In the real sciences, the adjustment of recorded data without clear, calculated and reasoned judgement would be a dismissable offence. However, it seems that in climate scamming anything goes as long as the money keeps rolling in.

    I hope this helps you to understand. THE ADJUSTMENT OF RECORDED DATA IS CHEATING !!!!

  62. Friends:

    Let us be clear.

    There is no justifiable reason to alter values measured decades in the past.

    For example, if temperature measurements were taken at different times of day then that is a cause of uncertainty (i.e. inherent error). Using assumptions to adjust for these times of measurement does not reduce uncertainty: it introduces additional unquantifiable error from the assumptions.

    The entire subject of the surface temperature data sets shows that the compilers of these data sets are ignorant of basic measurement theory.

    For example, the plotted temperatures are averages (i.e. means) intended to show trends over a region (e.g. the contiguous US, a hemisphere, the globe, etc.). But if such trends are to be meaningful indications of changes over the region then the mean has to be obtained from
    (a) a statistically random sample
    or
    (b) the same population used as the sample for each datum.

    However, (a) is not possible because the measurement sites are not randomly distributed. And (b) is not possible because the measurement sites differ from year-to-year (e.g. individual sites ‘move’ or close).

    The adopted solution has been to compensate for the lack of a random sample by adjusting the available data. In principle this can be correct. The lack of randomness is a distortion to what is being measured (this is like viewing an image at an angle: the angle distorts the image). So, a model of the distortion is obtained and the data is adjusted according to the model (this is like determining the wrong viewing angle for an image and adjusting the image by that angle).

    However, the distortion created by the lack of a random sample is not known and cannot be determined (this is like not knowing the angle at which an image is viewed). Therefore, any model of the distortion is a guess: and any compensation for the distortion by use of the model is a guess.

    So, the ‘adjustments’ to the data sets are mere guesses. And these guesses have no validity because there is no calibration possible for determination of their validity. Arguments about UHI magnitude – and similar issues – do not change this because the problem is a sampling problem.

    Re-adjusting data from decades in the past can only be an alteration to the compensation model: i.e. use of a different guess.

    There is no justifiable reason to alter values measured decades in the past.

    Richard
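    richardscourtney's sampling point, that a non-random station network can bias a regional mean, can be illustrated with a toy field (hypothetical numbers; this shows only that such a bias exists in principle, not the size or sign of any real-world bias):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy regional "temperature field": cooler at higher latitude,
# plus local noise at each candidate site.
lats = rng.uniform(25, 50, 100_000)
field = 30.0 - 0.6 * lats + rng.normal(0.0, 1.0, lats.size)
true_mean = field.mean()

# (a) A random sample of sites gives an unbiased estimate of the mean.
random_sample = rng.choice(field, 100, replace=False)

# (b) A non-random sample (sites clustered in the warmer south) is
# biased, and nothing in the sample itself reveals by how much.
clustered_sample = rng.choice(field[lats < 35], 100, replace=False)

print(f"true mean {true_mean:.2f}, random {random_sample.mean():.2f}, "
      f"clustered {clustered_sample.mean():.2f} (biased warm)")
```

    Whether any given correction model removes or compounds such a bias is exactly the question the comment raises; the toy only demonstrates that the bias is real when sampling is not random.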

  63. Wombat says:
    August 20, 2012 at 12:40 am

    The headline looks wrong. “July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC”.

    Certainly the TOB corrections are mostly upwards, but looking at the NOAA chart there were drops in the early 90s, a couple in the late 90s and also one in the late 80s.
    ___________________________________
    “Drops” are NOT the same as going below zero (negative), so pull another rabbit out of your hat. There is a positive upwards adjustment added to everything; some are just not as big an adjustment as others.

    Also, thanks to the drop in “official” temperature stations, we now have 92% of US GHCN stations sited at airports. A discussion of one airport site that illustrates the problems with weather stations at airports: RDU’s paint by numbers temperature and climate monitoring. Despite the fact that the station is, at 4:12 am, 2-4F warmer than any of the surrounding area temperatures, it is adjusted UP!!! This also supports the 2F warmer problem link

    If you look at the photos you can see that the area is surrounded by blacktop.

    …there are relatively few (less than 500) stations that have raw temperature data prior to 1880. From 1880 onwards there is a more or less linear increase in the no. of reporting stations (i.e. stations that have raw data) from 1880 to about 1950, when the number reaches a little over 3300. After this point, within the space of four years, there is a sudden expansion in the number to over 4500, which then reaches a peak of 5348 stations in 1966. It’s worth mentioning that there are 7250 records in the GHCN station inventory file (v2.temperature.inv), some of which are for ‘Ships’, but it is clear from this peak count that there isn’t raw temperature data available in the GHCN v2.mean file for all the stations listed in the NOAA GHCN station inventory file.

    After peaking in 1966 the total raw data station count then declines in a more or less linear fashion to about 3750 in 1989. Over the next couple of years there is a sudden ‘drop out’ of stations from the total station count to about 1900 in 1992… After 1992 there is a more or less linear decline in the raw data station count to about 1630 stations in 2005. There is then a further sudden inexplicable ‘drop out’ to about 960 stations in 2006, with the total station count in 2009 reduced to a mere 840 stations. http://diggingintheclay.wordpress.com/2010/01/21/the-station-drop-out-problem/

    So since around 1989 stations have been dropped, and now the USA is represented by airport stations (92%) KNOWN to have higher readings than the surrounding areas, and on top of that we add a steadily increasing “adjustment”.

    Is Hansen still going to be steadily increasing the “adjustment” as a glacier builds over NYC?

  64. richardscourtney says:
    August 20, 2012 at 1:32 am

    Friends:

    Let us be clear.

    There is no justifiable reason to alter values measured decades in the past.

    For example, if temperature measurements were taken at different times of day then that is a cause of uncertainty (i.e. inherent error). Using assumptions to adjust for these times of measurement does not reduce uncertainty: it introduces additional unquantifiable error from the assumptions.

    The entire subject of the surface temperature data sets shows that the compilers of these data sets are ignorant of basic measurement theory…..
    _______________________________
    AMEN!

    The manipulation, massaging and adjustment of the data to fit the preconceived notions of CAGW is the common thread throughout Climastrology. From Callendar’s tossing out all the historical results for CO2 above 280 ppm, to Mauna Loa tossing out “outliers”, to the burying of ice core results showing CO2 in the past was well over 350 ppm, to denying the existence of the historically documented LIA and Medieval Warm Period, it is data manipulation all the way down.

    Any hypothesis that requires this much data manipulation should be shot and buried with a stake in its heart!

  65. @Mike Jonas
    Did you actually look at it? The reasons are clearly stated: changes in the design of thermometer housing, changes in the type of thermometers, the urban heat island effect, and so on. And each step is backed up by peer-reviewed papers for the details.

  66. You asked me to rephrase. I’m not sure how to make my meaning more clear but I’ll have a go.

    The mod should not edit my post without making it clear it was a mod’s edit not mine. If he or she doesn’t like my observation, then he or she can say so using their own nic. Either that or not publish my post at all. Mods shouldn’t be replacing commenters’ words with their own preferred wording. Hope my meaning is clear now.

    (My comment in question should be removed from this thread in any case, since as I already said in a follow up comment I inadvertently sent it to the wrong thread.)

  67. “There’s quite an offset in 2012, about 0.7°C between Dr. Spencer’s ISH PDAT and USHCN/CRU.”

    There’s also quite an offset between Dr Spencer’s ISH PDAT and Dr Spencer’s UAH USA48 data. His +0.013C per decade trend in ISH PDAT since 1973 contrasts against his +0.23C per decade trend in UAH USA48 since 1979. The UAH USA48 trend is 18 times faster. Not an inconsiderable difference.

    Perhaps the proof of the pudding is in the eating? For example, is the observed rate of glacier retreat in Glacier National Park, Montana over the past 30 years more suggestive of a +0.013C or a +0.23C per decade rate of temperature increase?

  68. DWR54:

    At August 20, 2012 at 2:41 am you ask:

    For example, is the observed rate of glacier retreat in Glacier National Park, Montana over the past 30 years more suggestive of a +0.013C or a +0.23C per decade rate of temperature increase?

    I answer, neither. It is mostly indicative of precipitation changes.

    Richard

  69. @richardscourtney (August 20, 2012 at 1:32 am)
    You are assuming the distortion cannot be determined, but that is not correct; the nature of the main changes (measurement time, switching housing, use of modern electronic sensors) can be modeled very well. See also KR’s response above (August 19, 2012 at 8:32 pm). To call it “mere guesses” has no basis. Read the related papers; you will see they all have adequate discussion of error margins.

  70. Peter Roessingh says:

    “Read the related papers…”

    I think Richard Courtney has read more learned papers than Peter Roessingh is even aware exist.

  71. If this summer’s US climate is considered not to be ‘normal’, then most likely it is not caused by CO2 concentration, since there is not much abnormal about CO2 this year compared to the last or the one before. Instead this summer should be compared to 1934 and 1953/54.
    This map shows:

    California current appears to be ‘much cooler’ than normal (less evaporation), and by the ocean current loop’s appearance that was the case for some time.
    Less evaporation in the west Pacific means less rain in the US mid-west
    One reason why that could happen most likely is the turn over in the Kuroshio-Oyashio currents system. These current systems have been temporarily disturbed by tectonic movements of Honshu in March of 2011 (M9 earthquake and many subsequent strong aftershocks), the powerful tsunami causing break in the thermohaline layering.
    Wikipedia’s list of the Japan’s M8+ earthquakes

    http://en.wikipedia.org/wiki/List_of_earthquakes_in_Japan

    September 1, 1923 M8.3
    March 2, 1933 M8.4 – Major drought 1934
    December 20, 1946 M8.1
    March 4, 1952 M8.1 – Major drought 1953-54
    May 16, 1968 M8.2
    September 25, 2003 M8.3
    March 11, 2011 M9.0 – Major drought 2012

    A major Japanese earthquake in the month of March thus appears to have a high probability of being followed by a major drought in the USA (3 of the 7 quakes above, and all three March events). The current takes one year to reach Canada and a few more months to get down to California, coinciding with the time when strong evaporation is needed to provide summer rains across the mid-west.
    A Kuroshio current turn-over caused by the two September quakes would reach California 15 months later, in mid-winter, so by the summer the effect may peter out, causing only a minor drought.

  72. I have very little confidence in Dr. Spencer’s ISH PDAT. Or rather, I have none – the whole way it was created is fishy. First of all, it is not based on population changes; it is based on a single population snapshot from 2000 and a number of assumptions that this single snapshot will somehow be reflected in a linear temperature trend over the whole period for that site. Apart from that, it basically declares all stations with low population to be the only correct ones and adjusts all the rest to match them. Not to mention that the overall trend acquired from the processing is even lower than the average trend from the lowest-population stations, so I’m afraid there are some more errors in the math.

    That being said, I don’t think GISS is ok either. My personal guess is there is some tiny little rounding error somewhere in their homogenization procedure which leads to accumulating slightly more positive than negative adjustments, resulting in the net change we can see today. But that’s nothing more than a guess, I would need to spend a lot of time looking into their code and methods to confirm or disprove that.

  73. NOAA habitually (monthly) makes adjustments to their entire global temperature dataset. So far in 2012, we have downloaded 21 different versions of their dataset. They appear to be adjusting the temperatures several times per month. In fact, since we only check once every 6 to 7 days, the number ’21’ might be low since we could have missed a few over the last 7.5 months.

    To give you an example of what this means, their record for January 1880 (yes, 1880) has been adjusted 21 times – that is, they have reported 21 different temperatures for January 1880 since December 31, 2011. And remember, these are only the adjustments applied in 2012 so far, and they have been doing this for years. (These are global temperature adjustments, not U.S. alone.)

    We do have a wide variety of fake temperature charts that may be of interest: http://www.c3headlines.com/fabricating-fake-temperatures.html
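    For readers who want to track such revisions themselves, the bookkeeping is simple to sketch. This assumes a hypothetical two-column "YYYY-MM,anomaly" CSV layout; NOAA's actual file formats differ, so the loader would need adapting:

```python
import csv

def load_anomalies(path):
    """Read a two-column 'YYYY-MM,anomaly' CSV into a dict.
    (Hypothetical layout for illustration only.)"""
    with open(path, newline="") as f:
        return {row[0]: float(row[1]) for row in csv.reader(f) if row}

def changed_months(old_path, new_path, tol=0.0005):
    """Months whose reported value differs between two downloads."""
    old, new = load_anomalies(old_path), load_anomalies(new_path)
    return sorted(m for m in old.keys() & new.keys()
                  if abs(old[m] - new[m]) > tol)
```

    Run against two successive downloads, this would list every historical month, January 1880 included, whose reported value changed between versions.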

  74. Was the following what the mod took exception to, since the other part of my post was published? Not sure why that would be. This is the gist of it:

    People may want to come back to this article after the promised adjustments have been made to the data in the recent unpublished Watts et al paper. This includes adjustments such as TOBs, UHI and other relevant changes to raw land surface temperature records in the USA.

    Probably worth noting that had the USA had a standard time of observation over the years, like many other countries, a lot of confusion would be avoided (except by those who intend confusion).

  75. Peter Roessingh:

    At August 20, 2012 at 3:52 am you say to me

    @richardscourtney (August 20, 2012 at 1:32 am)
    You are assuming the distortion cannot be determined, but that is not correct; the nature of the main changes (measurement time, switching housing, use of modern electronic sensors) can be modeled very well. See also KR’s response above (August 19, 2012 at 8:32 pm). To call it “mere guesses” has no basis. Read the related papers; you will see they all have adequate discussion of error margins

    Firstly, thank you for your advice, but I assure you that I have read the related papers and have had direct communication with compilers of the data sets. In case you doubt this, I draw your attention to what I wrote in my post in this thread at August 19, 2012 at 2:32 pm, and I ask you to read the document (especially its Appendices) linked from that post.

    I point out that the link is to the UK Parliamentary Record and it shows my submission to the Select Committee of the UK Parliament that ‘investigated’ (i.e. whitewashed) ‘climategate’. If that submission were false then I would be guilty of perjury.

    Appendix A is an email from me which was leaked from CRU as part of ‘climategate’ and – as is clear from what it says – it is part of discussion of these issues with compilers of the global temperature data sets and their major associates. Appendix B is a draft paper (of which I was Lead Author) which fully justifies what I have written in this thread. And the document itself is an explanation of how the frequent – and unjustifiable – changes to the data sets were used to prevent publication of that paper.

    Secondly, the adjustments are pure “guesses” for the reason I explained in my post at August 20, 2012 at 1:32 am. Simply, there is no way to validate the correction model for sampling error so it cannot be known if any ‘adjustments’ will increase or reduce the effects of the sampling error. In other words, you are plain wrong when you write saying to me

    You are assuming the distortion cannot be determined, but that is not correct,

    The effect(s) of sampling error are not known, and there is no way they can be known, so there is no known way to model them correctly.

    Any ‘adjustments’ to address

    (measurement time, switching housing, use of modern electronic sensors)

    or anything else are assumed to improve the accuracy of the determination of global temperature. But they may be compounding the major effect of sampling error, and there is no way to determine whether that is the case. Simply, each time an ‘adjustment’ is made, it is a guess that the ‘adjustment’ will not make the total error larger. “Explanations” of how and why the ‘adjustments’ are conducted cannot alter that.

    My email in the link was discussion of this very problem with specific reference to masking.

    Richard

  76. @ Jan Perlwitz

    And why is Mr. Watts talking about the US surface temperatures and the adjustments in the USHCN data, although the quote he cites is about the globally averaged surface temperature anomaly?

    If a high quality USHCN dataset needs such heavy adjustments, there is not a cat in hell’s chance that the vast majority of low quality global stations can provide reliable information.

  77. @ Ian W

    It is truly amusing that these adjustments are somehow causing glaciers to melt . How do they do that?

    Glaciers have been melting back since the late 1800’s, and this reflects the fact that temperatures are higher now than during the LIA, when glaciers expanded hugely. This does not mean that temperatures are continuing to increase.

    I’ll give you a clue. Take an ice cube out the freezer and it will start to melt. If it is still melting 10 minutes later, does this mean the temperature in the room is increasing?

  78. As Anthony has referred to before, although we can get individual station data, nobody outside NCDC seems to know just how the individual data is amalgamated together to provide national temperatures, and they seem determined not to release this information.

    It is surely time for NCDC to publish each year a full comparison of raw and final temperatures, with a full explanation of the difference. I can’t imagine there is any other organisation, public or private, who could get away with massaging data in the way they do without full transparency and independent justification.

    I would guess the public at large would be furious if they were told the truth.

  79. KR says:
    August 19, 2012 at 8:32 pm

    “TOBS is clearly seen in the records when looking at individual stations (offsets when recording times change), easily replicable via Monte Carlo testing, and if ignored you will be working with data that has known errors.”

    KR – could you please:

    (1) State the time of observation bias (TOBS) algorithm completely so we know what you’re talking about here.
    (2) Show how it is clearly seen when looking at individual stations.
    (3) Show how it is easily replicable via Monte Carlo testing.

    Please, no reference to papers. I want to read your explanations and see your data. Thanks in advance.

  80. NOAA habitually (monthly) makes adjustments to their entire global temperature dataset. So far in 2012, we have downloaded 21 different versions of their dataset. They appear to be adjusting the temperatures several times per month. In fact, since we only check once every 6 to 7 days, the number ’21’ might be low since we could have missed a few over the last 7.5 months.

    To give you an example of what this means, their record of January 1880 (yes, 1880) has been adjusted 21 times – that is, they have reported 21 different temperatures for January 1880 since December 31, 2011. And remember, these are only the adjustments applied in 2012, so far, and they have been doing this for years. (These are global temp adjustments, not U.S. alone.)

    We do have a wide variety of fake temperature charts that may be of interest: http://www.c3headlines.com/fabricating-fake-temperatures.html

  81. TonyM says:
    “I suggest the reasons that the adjustments are positive is that many stations have been/are moved to areas with less localized influence and hence are cooler.”

    NCDC say this about USHCN

    The stations were chosen using a number of criteria including length of period of record, percent missing data, number of station moves and other station changes that may affect the data homogeneity,

    There would therefore be no significant location changes of the type you describe, merely local, minor moves which could be warmer or cooler (e.g. a change in altitude). Where there is a major move, a new station would be created.

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

  82. ****
    Frank K. says:
    August 19, 2012 at 7:36 pm

    I wonder what government charge code they use for blogging anyway…hmmm).
    ****

    It goes under the catch-all category of “media”, “education”, or “communication” — word-speak for propaganda.

    Anthony: “Perhaps if you had the benefit of knowing what I do, you’d understand better. For example, demonstrate that each station’s actual TOBS change time data was used to adjust the temperature data for that station.”

    An argument from authority? A great deal of work has been done on the TOBS issue, which has been known for >150 years (http://agwobserver.wordpress.com/2012/08/01/papers-on-time-of-observation-bias/). If you have evidence of TOBS data being incorrectly applied, please share it – in particular (since I’m sure that there are the odd stations here and there where TOBS was incorrectly recorded), whether it’s widespread enough to have an effect on continental US temperatures. Given the amount of work that has been done, published, and made publicly available demonstrating the issue, the effects, and the corrections – the burden of proof is on you in this case.

    [ Note that it isn't even necessary to use the TOBS metadata to locate and correct for these effects: examining local groups of stations, as in the BEST methods or in Williams et al 2012 (ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/williams-menne-thorne-2012.pdf), losses of local correlation coincide with instrumentation or TOBS changes, which can then be corrected. And given the correlation between stations as shown in Hansen 1987 (http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf), with US stations averaging a distance of ~85km apart, the ~0.9 correlation between neighboring stations is quite sufficient to spot these changes.

    However, the USHCN uses the metadata on TOBS and instrument changes. I've seen no evidence supporting an assertion that they use it incorrectly. ]

    REPLY: Have you looked at the data -vs- the application of it? You might be surprised. Remember we are dealing with people here. Assigned times and actual times YMMV – Anthony

  84. I see the GISS climate charlatan didn’t like the chart I posted. Tough noogies, Perlwitz. Reality intrudes on your fantasy.☺

    @Peter Roessingh:

    “There is a very clear description of the reasons behind the adjustments here:

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html

    You really do not adress the reasons for the adjustments in this post. If you want to make the point that the adjustments are not justified, or not correct, you need to give arguments for that!”

    Do you find it rather strange that the entire “signal” can be accounted for by subtracting the sum total of the adjustments? NOAA knows that when it posts its monthly data, the public at large believes it to be raw information. And I cannot think of another field that relies on adjusted data to the degree the Climate field does (and that’s saying something when one considers the Bureau of Labor Statistics).

    As I’ve said in earlier posts, Climate is nothing more than a human construct. You can create any narrative you wish if you apply the correct statistical algorithm.

  86. Frank K. at August 20, 2012 at 6:02 am

    KR – could you please:

    (1) State the time of observation bias (TOBS) algorithm completely so we know what you’re talking about here.
    (2) Show how it is clearly seen when looking at individual stations.
    (3) Show how it is easily replicable via Monte Carlo testing.

    Please, no reference to papers. I want to read your explanations and see your data. Thanks in advance.

    I see no reason whatsoever to replicate multiple man-years of effort for your sake, not when you have demonstrated literacy. Read the papers I referenced, do some searching at Google Scholar (http://scholar.google.com/) if you want more. See also the Argument by Question fallacy (http://www.don-lindsay-archive.org/skeptic/arguments.html#question).

    You can lead a horse to water, but you cannot make him drink. Or, apparently, read the references…

    I’m sure this has been talked about before, but what are the official explanations as to why they make this systematic upward adjustment?

    Smokey says: August 19, 2012 at 8:52 pm
    “Some folks didn’t like my previous chart. So they will probably hate this one.”
    No, we like the graph just fine. The first graph shows perfectly legitimate data and a perfectly legitimate trend line for stratospheric temperatures. And the second graph is just fine too — it shows a small, 10-year-long decline in the surface temperature, together with a reasonable trend line. All data help provide a more complete picture of what the climate is doing, and what effect, if any, people have on the climate. This sort of cooling trend forces people to think about the relative effects of “internal chaotic variation”, external factors (e.g. solar cycles) and CO2 (but it does not a priori mean any one factor can be thrown out).

    What we don’t like is when you post link after link, but as often as not, the graph is wrong or misinterpreted. We know you are passionate about this issue, but inaccurate and/or misinterpreted information does nothing to move the discussion forward, but instead muddies the waters.

    REPLY:
    I assume then you’ll take issue with Dr. James Hansen’s presentation in 1988 before the Senate, where we had just 10 years of warming from 1978 (prior to which people were talking about a new ice age). If Hansen can use a ten-year warming trend from 1978-1988 to raise alarm, why can’t Smokey use a ten-year cooling trend to say “hey, not so fast”? Goose, gander, and all that. – Anthony
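    The fragility of decade-scale trends either way is easy to demonstrate with a toy simulation: fit ordinary least squares to every 10-year window of pure red noise – a series with no underlying trend at all – and you will find plenty of apparent “warming” and “cooling” decades. A minimal sketch in Python (the AR(1) parameters are illustrative assumptions, not real temperature statistics):

```python
import random

random.seed(7)

# 120 years of red noise (AR(1)) around a flat mean -- no real trend at all.
temps, prev = [], 0.0
for _ in range(120):
    prev = 0.7 * prev + random.gauss(0.0, 0.15)
    temps.append(prev)

def slope(y):
    """Ordinary least-squares slope of y against 0..len(y)-1."""
    n = len(y)
    mx = (n - 1) / 2.0
    my = sum(y) / n
    return sum((i - mx) * (yi - my) for i, yi in enumerate(y)) / \
           sum((i - mx) ** 2 for i in range(n))

# Fit a trend to every overlapping 10-year window.
decadal = [slope(temps[i:i + 10]) for i in range(111)]
up = sum(s > 0 for s in decadal)
print(f"10-year trends: {up} 'warming', {len(decadal) - up} 'cooling'")
print(f"steepest 'warming': {max(decadal) * 100:.2f} C/century, "
      f"steepest 'cooling': {min(decadal) * 100:.2f} C/century")
```

    Both signs show up in abundance even though the generating process is trendless, which is the goose-and-gander point: a ten-year slope, in either direction, carries little weight on its own.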

  89. tjfolkerts says:

    “What we don’t like is when you post link after link, but as often as not, the graph is wrong or misinterpreted.”

    “We”? Do you presume to speak for everyone, or do you have a mouse in your pocket? I post charts because they convey information at a glance. The charts cannot be “wrong” because they are based on empirical observations and data. They are certainly not wrong “as often as not” [I post hundreds of charts, so you're going to have to back up that claim with lots of examples, or climb down]. And if you misinterpret the charts I post, well, I don’t hear that complaint very much. Maybe you should work on your chart comprehension. Most folks have no problem understanding what they mean.

  90. KR says:
    August 20, 2012 at 7:31 am

    “I see no reason whatsoever to replicate multiple man-years of effort for your sake, …”

    Can’t state the TOBS algorithm eh??

    [sigh] That’s OK KR…don’t worry about it. Usually when I ask these kinds of questions, the people who think it’s so simple and obvious can’t seem to answer even simple and obvious questions.

  91. Lest people think I’m being picky in my asking questions about TOBS, here is the explanation of TOBS from NOAA (bolding is mine).

    Next, the temperature data are adjusted for the time-of-observation bias (Karl, et al. 1986) which occurs when observing times are changed from midnight to some time earlier in the day. The TOB is the first of several adjustments. The ending time of the 24 hour climatological day varies from station to station and/or over a period of years at a given station. The TOB introduces a non climatic bias into the monthly means. The TOB software is an empirical model used to estimate the time of observation biases associated with different observation schedules and the routine computes the TOB with respect to daily readings taken at midnight. Details on the procedure are given in, “A Model to Estimate the Time of Observation Bias Associated with Monthly Mean Maximum, Minimum, and Mean Temperatures.” by Karl, Williams, et al.1986, Journal of Climate and Applied Meteorology 15: 145-160.

    Note that TOBS is: (1) an empirical model and (2) estimates the actual TOBS. Hmmm. Estimates? Really? What data were used? Error range?

    Also note the referral to “TOB software”. Has anyone seen the listing for the TOB software? Is it generally available? If not, why not?

    It has been stated that TOBS is only relevant in the United States and not the rest of the world. Why is that? Did all of the other climate monitors around the world from 1880 to the present day always log their min/max temperatures at precisely the same time?

    Questions, questions…
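    For readers who want to see the bias mechanism itself, separate from NOAA’s empirical correction model, the effect can be reproduced with a toy simulation: read a max/min thermometer once a day, and the chosen reset hour changes the monthly mean, because an afternoon reset can let one hot afternoon set the maximum in two consecutive 24-hour windows, while a morning reset does the same with cold mornings. A hedged sketch on synthetic data (this is a demonstration of the mechanism only, not the Karl et al. algorithm):

```python
import math
import random

random.seed(42)
DAYS = 120  # about four months of synthetic hourly data

# Synthetic hourly series: a diurnal sine (coolest ~03:00, warmest ~15:00)
# plus a random day-to-day offset that creates hot and cold spells.
temps = []
for day in range(DAYS):
    offset = random.gauss(0.0, 3.0)
    for hour in range(24):
        temps.append(offset + 10.0 * math.sin(2 * math.pi * (hour - 9) / 24))

def mean_minmax(reset_hour):
    """Mean of (max+min)/2 from a thermometer read and reset once a day."""
    readings = []
    for day in range(1, DAYS):
        start = (day - 1) * 24 + reset_hour  # 24-h window ending at reset
        window = temps[start:start + 24]
        readings.append((max(window) + min(window)) / 2.0)
    return sum(readings) / len(readings)

midnight = mean_minmax(0)   # reference: the midnight climatological day
morning = mean_minmax(7)    # a typical morning observer
evening = mean_minmax(17)   # a typical late-afternoon observer

print(f"midnight reset: {midnight:+.2f} C")
print(f"07:00 reset:    {morning:+.2f} C (bias {morning - midnight:+.2f})")
print(f"17:00 reset:    {evening:+.2f} C (bias {evening - midnight:+.2f})")
```

    With the 17:00 reset, a hot afternoon can set the recorded maximum in two consecutive observation windows, so the mean drifts warm relative to the midnight day; a morning reset double-counts cold mornings and drifts cool. That drift, which appears only when the observation schedule changes, is what the TOB adjustment is meant to estimate.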

  92. Anthony – “Have you looked at the data -vs- the application of it? You might be surprised.”

    I’m quite willing to be surprised. Please – show the data. In particular, demonstrate that there is a consistent bias or error in TOBS corrections (as opposed to random offsets from someone’s poor record keeping at a few stations, which would imply loss of accuracy but not bias) that makes the TOBS corrections erroneous.

    You have asserted or implied that TOBS and instrumental corrections are being done incorrectly, with bias – I await evidence to that effect. Barring that evidence, the corrections for instrumental changes and TOBS changes are (IMO) being done properly, based on the last 150 years of study of this issue.

    REPLY: I didn’t say with bias – those are your words – just improperly/sloppily done. It will actually take another crowd-sourcing project to disentangle the mess (and determine the bias) they created through not paying attention to what observers actually do. There’s a disconnect. See Frank K.’s comment. – Anthony

  93. For those (unlike KR) who wish to delve more into the TOBS algorithm, the original Karl et al. paper is here.

    Note that the data used to derive the TOBS equations, as described in the paper:

    “seven years of hourly data (1958-64) were used at 107 first order stations in the United States…”
    “Of these 107 stations, 79 were used to develop the equations, and 28 were reserved as an independent test sample.”

    Note that the period 1958-64 is in Jim Hansen’s climate sweet spot (1950 – 1980), when the world was (briefly) in climate nirvana…

    Question: Has anyone updated the TOBS equations/algorithm with more recent data?

    And remember – TOBS only affects the continental U.S.!! For Canada, Mexico, Alaska + Hawaii (!), Puerto Rico, … rest of the world, non-TOBS data is A-OK! (Apparently).

  94. mbw says: August 19, 2012 at 4:20 pm
    It is truly amusing that these adjustments are somehow causing glaciers to melt and numerous species to shift their habits. How do they do that?
    ————————————————

    In Northern Europe, we had the latest spring on record this year, with many trees being three or more weeks late in coming into leaf.


  95. Anthony – Quite seriously, if you have evidence indicating incorrect handling of TOBS changes, by all means publish it. Not just a blog post, but a submission to a peer-reviewed journal. If such claims can be demonstrated and proven, it will refute numerous papers and methods (which ones depending on whether you are discussing the methodology or the application thereof). If you have such evidence, it would be worth looking at.

    Until such time, however, I’m going to apportion belief to evidence, and consider the 150 years of considering TOBS bias and the published methodologies for correcting it to be properly executed.

    TOBS and instrumental corrections remove spurious biases from the data. Unless you can demonstrate that such corrections are improper, introduce enough uncertainty to seriously degrade the data, or introduce a bias larger than the one they remove, the conclusions drawn from corrected data will be more accurate than those drawn from raw data.

    (It’s very important to clean the telescope…)

    Frank K. – My compliments on actually reading the Karl et al 1986 paper that I and several other commenters mentioned. I would suggest following up with Vose et al 2003, “An evaluation of the time of observation bias adjustment in the U.S. Historical Climatology Network” (ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/vose-etal2003.pdf).

    TOBS changes have been primarily a US issue due to the number of stations in the US (mostly rural) that have changed from sunset to early morning temperature readings. The US weather network relies on a large number of volunteer weather reporters (the Cooperative Observer Program, http://www.nws.noaa.gov/om/coop/ and http://www.nws.noaa.gov/om/coop/Publications/coop_factsheet.pdf), and quite frankly morning observations are more convenient as they match hydrological reading times.

  96. Re: richardscourtney @August 20, 2012 at 1:32 am

    Friends:
    Let us be clear.
    There is no justifiable reason to alter values measured decades in the past.

    I respectfully disagree, as some adjustments do appear to be justifiable. However, I do agree that the original data should never be altered. Raw data should be retained, warts and all, and any suggested adjustments should accompany it, along with appropriate cautions.

    There is an example of a data adjustment in the current ECMWF newsletter discussing air temperature data collected from aircraft sensors. They say:

    it has been clear for some time that the temperature measurements are biased, often by 0.5°C compared with radiosonde measurements. This has a considerable impact on temperature analyses, especially over Europe and North America at the tropopause level, where the aircraft data volumes are large and dominate the analysis… When aircraft data is bias corrected in this way the fit of the analysis and short-range forecast (i.e.background) to the data improves considerably.

    Both adjusted and original data are available, and this is a nice example of a negative temperature adjustment improving model skill. It is also reassuring to see the ECMWF testing the model fit to the data, rather than trying to retrofit the data to the model as NOAA may be attempting.

    Smokey claims: “The charts cannot be ‘wrong’ because they are based on empirical observations and data.”
    That would be true if the charts actually were all correct. As I have pointed out before, you have posted charts where you claimed the data spanned centuries when it spanned decades, and charts where poor eyeball trends were passed off as mathematical fits. And here we have evidence of global warming passed off as evidence of global cooling.

    If you really want to make your point, then post a graph, explain what it is, and what it means.

    PS Anthony. Yes, Hansen was definitely a bit brash in going to congress with a 10 year temperature trend, especially in light of the cooling trend the decade or two before that. The 1990’s definitely fit with his predictions. The 2000’s have definitely raised some question marks. It will be fascinating to see how this plays out.

    So for this supposedly so critically important problem, they used: (a) just 7 years of data (1958-1964), (b) a small subset of all stations, and (c) just 28 stations as an independent test sample.

    And – the TOBS algorithm may well not have been updated using more recent station data.

    Oh, and we can’t forget … the entire global temp record, supported by the fairly extensive set of US station data – is ALL based, not on hard data, but rather on a mishmash of multi-layered mathematical equations, algorithms and assumptions which consistently produce a strong upward bias.

    Nope – nothing fishy there.

    And the defenders here rarely offer technical insight and detailed answers, but rather veiled ad-hominem attacks and cites to authority/papers etc.

    If it’s all so easy and we’re all such fools, you’d think real scientist types like Perlwitz could easily explain and make us all look like the silly rubes we are. Funny thing though – they rarely if ever offer any meaningful contribution …

  99. And this from the same type folks who insist there is no UHI effect.

    It must be a figment of my imagination that when I drive from my home appx. 25 miles into the downtown area, I can consistently watch the thermometer in my vehicle climb appx. 3 deg F, and as I drive home, see the same appx. 3 deg F drop.

    Ahhh they say – but you are talking temperature not trend – and UHI does not affect trend.

    To that I simply say bullpoop.

    First, in the US and in many parts of the globe there has been significant growth in and expansion of urban areas. And as a specific area continues to grow and urbanize, the UHI effect increases. While it may be true to say UHI does not affect daily or short term temp “trends” – it is silly – beyond common sense – to say increasing urbanization of the world, and the resultant increase in UHI areas and size of existing UHI locations, has no effect on temp trends.

    Second, as to long term trends … it is equally ridiculous to claim UHI has no effect on temp trends (again vs actual temps). In 1900 there were extremely few stations affected by UHI … it largely did not exist. Since then we have seen massive urbanization of the US and the world.

    Every one of those areas unaffected by UHI in 1900, that has become urbanized, MUST SHOW a UHI effect on the long term temp trends. A station that was rural and is still rural will show a natural temp trend. A station that was rural and became UHI affected will absolutely show a HIGHER temp trend over the same period. It is impossible not to – the increase in temps due to UHI must be incorporated into the station’s trend.

    Put another way, a station that went from rural to UHI must incorporate the UHI bias into its trend data over time – and when you introduce a 2-3 degree F increase in temps you absolutely will increase the trend over time.

    Which is why the claim there is no difference in current “trend” between UHI and non-UHI stations is so silly. A true statement on a short term basis – where a city is largely fully built out and its UHI “mass” is no longer changing. A simply and completely false statement when talking about long term trends.

    IMO a paired-station approach – where known UHI-affected stations are compared to a group of high quality stations outside the UHI area to determine the UHI effect and bias – is the better way. This should also be applied to any lower quality station data. Then you get a true measure of the UHI effect and can compensate accordingly.
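    The arithmetic behind the rural-to-urban argument is easy to check with a toy example: give two stations the identical climate signal, add a UHI offset to one that grows as the town around it grows, and fit a least-squares trend to each. A hedged sketch (all magnitudes – the 0.7 C/century signal, the 2 C UHI ramp after 1950 – are illustrative assumptions, not measurements):

```python
import random

random.seed(0)
YEARS = list(range(1900, 2021))

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

# True climate signal: +0.7 C per century, plus interannual noise.
def rural(year):
    return 0.007 * (year - 1900) + random.gauss(0.0, 0.3)

# Same climate, but the town around the station grows after 1950,
# adding a UHI offset that ramps from 0 up to +2 C by 2020.
def urbanizing(year):
    uhi = 2.0 * max(0, year - 1950) / 70.0
    return rural(year) + uhi

rural_trend = ols_slope(YEARS, [rural(y) for y in YEARS])
urban_trend = ols_slope(YEARS, [urbanizing(y) for y in YEARS])

print(f"rural station trend:      {rural_trend * 100:.2f} C/century")
print(f"urbanizing station trend: {urban_trend * 100:.2f} C/century")
```

    The urbanizing station recovers the climate signal plus a large spurious extra trend from the growing UHI offset, even though on any given recent day the urban-minus-rural *difference* is roughly stable, which is the short-term/long-term distinction the comment is drawing.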

  100. Richardscourtney says on August 20, 2012 at 5:23 am

    “The effect(s) of sampling error are not known, and there is no way they can be known, so there is no known way to model them correctly.”

    The corrections are for well defined changes (observation time, new housing, electronic sensors) that can be modeled well. This can be verified by looking at the first-order station records that show the “real” effects. As an example, Karl et al (1986) use an independent dataset to compare their model for TOB to the “real” bias known from the first-order (hourly) station records, and show convincingly that the overall quality of the measurement is drastically improved.
    The fact that sampling errors are not easy to quantify does not preclude the application of a correction for some other error, as the comparison to the real bias shows. Your argument is theoretical, and trumped by empirically collected data showing that the model *is* working.

  101. A. Scott:

    At August 20, 2012 at 1:39 pm you make a good argument about the importance of UHI. But I write to comment on the part of your post that says

    IMO a paired station approach – where known UHI affected stations are compared to a group of high quality stations outside the UHI area to determine the UHI affect and bias – is the better way. This should also be applied to any lower quality station data. Then you get a true measure of the UHI affect and can compensate accordingly.

    You may well be right, but such sets of stations don’t exist (especially not over long times), so your suggestion is of merely academic interest.

    Richard

  102. Atomic Hairdryer:

    Thank you for your post addressed to me at August 20, 2012 at 12:42 pm.

    I agree that there are specific examples where data from the past can be justified as needing correction for it to be compared with newer data obtained by an alternative method.

    And I agree with your excellent example of such a case.

    Importantly, I very strongly agree with the need to retain – and to report – the original and the corrected data.

    However, with respect, your argument and your illustration support what I said.

    You have discussed two data sets which are of similar kind except that they are obtained using different methods. In this case the two data sets can be intercalibrated and – when the calibration difference is known – then either data set can be adjusted to enable direct comparison of the data obtained by the different methods.

    Now, imagine that such intercalibration were not possible for the two data sets and you did not know to which set each datum belonged. It would not then be reasonable to guess how to adjust each datum so it could be compared to the others. And the global temperature issue is two orders of magnitude worse than that.

    In your example you have only two data sets. A global temperature time series of a century has 100 data sets because each year in the global temperature series has a unique data set (measurement locations, number of measurement sites, the measurement methods and measurement equipment all differ between years). And there is no possibility of knowing how to intercalibrate any of them.

    The ‘adjustments’ to original measurement results are intended to overcome this problem but they cannot because the effects of sampling error are not known.

    The sampling errors change from year to year as the number and locations of measurement sites change. Hence, any ‘adjustment’ to measurements at individual sites will affect the sampling error. It cannot be known whether the adjustment will increase or decrease the sampling error, which is of unknown but probably large magnitude. Hence, adjustments intended to reduce one error (e.g. the UHI effect) may increase the effect of sampling error, with a resulting increase in the total error, and it cannot be known if and when this is true. Therefore, no adjustment can be justified.

    This is basic stuff in measurement theory, and – as I said – it is clear that the compilers of the temperature time series are inadequately educated in measurement theory.

    Richard

  103. Peter Roessingh:

    re. your post addressed to me at August 20, 2012 at 2:03 pm.

    Please see the reply to Atomic Hairdryer which I have just posted. It explains the point which you seem to have not understood.

    Richard

  104. Smokey wrote:

    I see the GISS climate charlatan didn’t like the chart I posted. Tough noogies, Perlwitz. Reality intrudes on your fantasy.

    Why would you say that, Smokey? Are you playing games now? I very much doubt you don’t know that my comments to your “charts” have mysteriously vanished from here. And now you pretend I didn’t know what to reply to it.

    Btw: Moderator “dbs” didn’t seem to see a problem with Smokey insulting some other person in his comment here.

  105. In politics, it is not who votes that counts; it is who counts the votes. In weather reporting, it is not what the thermometers show; it is who says what the thermometers show.

  106. Richard, I don’t disagree, however …

    … where there are currently stations that could be used in a paired approach, there should IMO be a study done to quantify UHI and its effect on trends vs. paired stations.

    … where there are not good paired stations outside the UHI areas, they should be added, so that going forward we can establish a meaningful comparison – using hard data to study the issue and trends, and over time accumulating accurate data which would allow a more accurate understanding to be developed.

    I think use might possibly be made of the large number of home reporting stations for something such as this. Where you have strength in the number of reporting stations, the accuracy of any one station becomes somewhat less important.

  107. Paul Homewood says:
    August 20, 2012 at 6:01 am

    ….It is surely time for NCDC to publish each year a full comparison of raw and final temperatures, with a full explanation of the difference. I can’t imagine there is any other organisation, public or private, who could get away with massaging data in the way they do without full transparency and independent justification.

    I would guess the public at large would be furious if they were told the truth.
    ___________________________________
    I keep thinking of the IRS and “Creative Accounting”. Too bad we cannot sic the IRS auditors on the Climastrologists, although if any of them read this blog they may have a few doubts by now about how the Climastrologists treat other data sets, like their income tax records – all those conferences, for example.

  108. KR says:
    August 20, 2012 at 11:57 am

    Anthony – Quite seriously, if you have evidence indicating incorrect handling of TOBS changes, by all means publish it….
    _____________________________
    Good grief, Anthony is ONE person working part time on his own dime. The amount of disentangling by UNPAID volunteers of the mess made of the climate records by Climastrologists is really quite astonishing. Especially when you consider the active and malicious thwarting of FOIAs and even lawsuits, as I showed in my earlier comment.

    Climastrologists are also among the highest paid academics in the USA, so there is no excuse for the trash they are generating at the taxpayers’ expense.

    …What about climate scientists? Well, university lecturers and professors earn an average of $49.88 an hour over a 1,600-hour work year, for a total salary of about $80,000. In the public sector, “atmospheric, earth, marine, and space sciences teachers, postsecondary” earn considerably more than the average university teacher ($70.61 per hour). They also work much less (1,471 hours each year), and despite their lower workload, they pull down about $104,000 a year….

    So climate scientists are very well compensated, out-earning all other faculty outside of law in hourly-wage terms. What about the rest of the public sector? Astonishingly, only one other public-sector profession — psychiatrist — pays better than climate science, at just over $73 an hour. In other words, climate scientists have the third-highest-paid public-sector job, ranking above judges.

    What about the private sector? That’s led by airline pilots, who earn about $112 an hour, but work for only 1,100 hours a year, followed by company CEOs at an average of $91 an hour. Physicians and surgeons earn almost as much as CEOs, at $89.51 an hour. Private-sector law-school professors, interestingly enough, earn far less than their public-school counterparts, at $82 an hour. After that come professor-level jobs in engineering, at $76.11, and dentists, at $73.19. These are the only private-sector professions that pay more than climate science….

    http://www.nationalreview.com/articles/261776/all-aboard-climate-gravy-train-iain-murray

    It should not be up to volunteers to scrutinize the weather stations and the data when we have paid trillions to have it done. Time to fire the lot of you and save ourselves the money and aggravation.

  109. Once officials ranging from the Club of Rome in 1991 and earlier to Ottmar Edenhofer, head economist of the IPCC, openly admit, “But one must say clearly that we redistribute de facto the world’s wealth by climate policy. Obviously, the owners of coal and oil will not be enthusiastic about this. One has to free oneself from the illusion that international climate policy is environmental policy. This has almost nothing to do with environmental policy anymore.”

    Now once these open statements have been made, what is there left to argue about on the climate? It is not, and never has been, about the climate, but about a world government (again, Edenhofer): “Basically it’s a big mistake to discuss climate policy separately from the major themes of globalization. The climate summit in Cancun at the end of the month is not a climate conference, but one of the largest economic conferences since the Second World War.”

    Basically we have the top spokesman for the IPCC itself happily telling the world it is not about the climate. Now had anyone outside the internet bothered to print this (they didn’t), we wouldn’t get victims like the one today who posted on my Facebook about low-lying cities being underwater after we’re dead. It’s not her fault; she is probably busy and trusts what the papers tell her, as most people do. I like to check things for myself, and with the internet have absolutely no trouble doing so. Of course, the longer this drags on the harder it’ll be to produce warming and its effects when the sea level drops and the ice caps are overall stable, whatever they do locally. If I won the lottery (as was being discussed on the radio today after someone here won over £100 million) I’d buy a paper for a day and publish it myself, and commission an independent company for an hour-long documentary.

    We already have the data. If the media simply shared what we already know this would be over in a week, trust me.

  110. KR (again):

    “TOBS changes have been primarily a US issue…”

    Why? What evidence do you have for the rest of the world? Why not Canada, Mexico, Alaska, Hawaii (by the way, the last two are part of the U.S.)? Show us the metadata…

    By the way, we really don’t know that the TOBS algorithm is being applied correctly at NOAA since no one can find the TOBS adjustment computer program. It’s not on their web site. Please let us know if anyone finds it.

  111. In 2007 I copied a list of record temperatures for my area. I did it again in April of 2012 and compared the record highs of the two. 21 of the records covering common years have been changed, some by as much as 5°F.
    I can understand correcting typos from when the handwritten or typed paper records were entered into a database. (“OOPS! That 7 was really a 9.”) I can understand correcting problems that may have cropped up when, say, a DOS database was converted to a Windows database.
    But to adjust a number in order to “correct” it? Put an asterisk on it with a notation explaining why you think it may be suspect.

  112. JP wrote on August 20, 2012 at 7:20 am

    “Do you find it rather strange when the entire “signal” can be accounted for by subtracting the sum total of the adjustments?”

    Not in this case, no. It depends on the nature of the correction. If instrumental changes cause a well-defined change in measured values that is two times as big as the signal, it is quite possible that the correction is double the signal magnitude, yet the signal is no less real because of that. If you want to argue that there is something wrong with the correction you need to point out what the problem is. Just stating the magnitude of the correction is meaningless.

    Richardscourtney does indicate a problem by pointing out that sampling error is unknown and makes it impossible to do a meaningful correction. I disagree. In no way does the presence of one error make it impossible to correct for another well-described one. In addition, the problem of sampling errors was investigated recently (Shen et al. 2012). Here is their conclusion:

    “The sampling error analysis, the previous studies on observational errors, and the comparison between our current work and Menne et al. (2009) reveal the impact of station errors, sampling errors, and consequences resulting from different grid sizes and data aggregation methods. Although these errors may be of nontrivial magnitude and may influence the rank of the hottest or coldest years, they are not large enough to alter the trend of the [contiguous U.S. surface air temperature]”

    In summary, the original post suggests a problem with the corrections, but the discussion here did not yield any evidence for that.

    Shen, S.S.P., Lee, C.K. & Lawrimore, J. (2012)
    Uncertainties, Trends, and Hottest and Coldest Years of U.S. Surface Air Temperature since 1895: An Update Based on the USHCN V2 TOB Data
    Journal of Climate 25: 4185-4203. DOI: 10.1175/JCLI-D-11-00102.1

  113. Peter Roessingh:

    Your post at August 21, 2012 at 1:29 am says

    Richardscourtney does indicate a problem by pointing out that sampling error is unknown and makes it impossible to do a meaningful correction. I disagree. In no way does the presence of one error make it impossible to correct for another well-described one. In addition, the problem of sampling errors was investigated recently (Shen et al. 2012). Here is their conclusion:

    “The sampling error analysis, the previous studies on observational errors, and the comparison between our current work and Menne et al. (2009) reveal the impact of station errors, sampling errors, and consequences resulting from different grid sizes and data aggregation methods. Although these errors may be of nontrivial magnitude and may influence the rank of the hottest or coldest years, they are not large enough to alter the trend of the [contiguous U.S. surface air temperature]”

    In summary, the original post suggests a problem with the corrections, but the discussion here did not yield any evidence for that.

    I refuse to accept that you are sufficiently stupid as to believe what you have written, so I conclude that you are being disingenuous. I explain my conclusion as follows.

    Firstly, you admit that “sampling error is unknown and makes it impossible to do a meaningful correction” but assert, “In no way does the presence of one error make it impossible to correct for another well-described one.”

    Your assertion would be true if sampling error were a constant, but – as I explained – it is not.
    (a) There is no way to discern the magnitude of sampling error in any one year.
    and
    (b) Sampling error differs from year-to-year.
    therefore
    (c) There is no way to discern if the observed changes are an effect of varying sample error.

    And the quotation from the paper you cite states point (a) and dismisses it in the same sentence; viz.

    “Although these errors may be of nontrivial magnitude and may influence the rank of the hottest or coldest years, they are not large enough to alter the trend of the [contiguous U.S. surface air temperature]”

    So the error magnitudes are not quantified (i.e. they “may be of nontrivial magnitude”) but they are known to be too small to alter the results (i.e. “they are not large enough to alter the trend”).

    You claim to swallow that pseudoscientific nonsense!?

    Furthermore, the issue of (c) is not addressed at all.
    (As an aside, I point out that in the same post I am answering, you dismiss JP’s observation asking,
    “Do you find it rather strange when the entire “signal” can be accounted for by subtracting the sum total of the adjustments?” I stress that point (c) states:
    there is no way to discern if the observed changes are an effect of varying sample error.)

    Papers like those of Shen et al 2012 and Menne et al. 2009 pass pal review and get published in climastrology. Similar papers in the real sciences are rejected by peer review and get rejected for publication.

    As you say

    the original post suggests a problem with the corrections, but the discussion here did not yield any evidence for that.

    Similarly, when Nelson put his telescope to his blind eye he said, “I see no ships”.

    Richard

  114. I’ve noted the raw vs adjusted numbers before, mostly in slashdot comments but at least once on climate audit. It’s good to see this problem being noted more widely. I’ve never seen any explanation for why TOBS adjustments fairly neatly follow a parabolic curve (excel fitted a quadratic with r=.98). However… when I checked USHCN v2 the adjustments were negative, but larger. One of the graphs is wrong, or it’s using different data than USHCN v2 as it was in 2010.
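    The quadratic-fit check described above can be reproduced in a few lines. This is only a sketch: the adjustment series below is an invented stand-in with a roughly parabolic shape, not the actual USHCN adjustment values, which would have to be loaded from NOAA’s published files.

    ```python
    import numpy as np

    # Invented stand-in for an annual adjustment series (°C); the real
    # USHCN adjustment values would come from NOAA's published files.
    years = np.arange(1900, 2010)
    t = years - 1900                      # offset years for numerical stability
    adjustments = 0.0001 * t**2 / 10.0    # toy, roughly parabolic shape

    # Fit a quadratic and measure how well it tracks the series --
    # the same kind of check as the Excel fit mentioned above.
    coeffs = np.polyfit(t, adjustments, 2)
    fitted = np.polyval(coeffs, t)
    r = np.corrcoef(adjustments, fitted)[0, 1]
    print(round(r, 2))  # 1.0 for this noiseless toy series
    ```

    With real, noisy adjustment data the correlation would be below 1, as in the r = .98 fit mentioned above.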

    GHCN is almost as interesting, and is actually where I started checking after the back and forth about Darwin. The last century has a definite upward trend in adjustments. Nearly linear, in fact.

  115. richardscourtney August 21, 2012 at 7:54 am wrote:

    “You claim to swallow that pseudoscientific nonsense!?”

    I suggest you write up your critique of Shen et al 2012 and publish it, either in a peer reviewed journal or elsewhere on the internet. Publishing your data is how real science is done, not by making ad hominem attacks in a blog. I am (again) done here.

  116. Peter Roessingh:

    Your pathetic post at August 22, 2012 at 12:42 am says in total;

    richardscourtney August 21, 2012 at 7:54 am wrote:

    “You claim to swallow that pseudoscientific nonsense!?”

    I suggest you write up your critique of Shen et al 2012 and publish it, either in a peer reviewed journal or elsewhere on the internet. Publishing your data is how real science is done, not by making ad hominem attacks in a blog. I am (again) done here.

    That was not ad hom.

    You quoted Shen et al. 2012 saying
    1. The errors may be large
    2. and are not known
    3. but don’t affect the result.

    And I asked – with incredulity – if you accepted that pseudoscientific nonsense.

    My incredulous question is warranted because if the errors are large but not known then it is impossible to determine whether or not they affect the result.

    If you did not like what the quotation says then why did you present it?

    And I answered your evasion about “publishing your data is how real science is done” in my post where I commented on peer review. I add that in real science any pertinent information is considered. Only pseudoscience uses excuses to ignore pertinent data (e.g. only information published in particular places will be considered).

    Richard

  117. Richard,

    Regarding the ad hom: I was talking about your qualification of Shen et al. If you want to call their work pseudoscience – a fairly strong statement-, you need to back that up with detailed criticism.

    I did not say the errors are unknown, those are your words. I cited Shen at al. as a source that explores the magnitude of those errors, and cited their conclusion.

    Neither did I say i did not like Shen at al.’s conclusion. Those are again your words. If *you* don’t like their conclusion then please detail the problems you see with their data treatment, either here, or better, in a paper.

    Finally what makes you think I, or anybody else wants to ignore pertinent data? I certainly did not say so.

    Peter.

  118. Peter Roessingh:

    Your post August 22, 2012 at 4:37 am makes a series of untrue assertions.

    You say to me:

    Regarding the ad hom: I was talking about your qualification of Shen et al. If you want to call their work pseudoscience – a fairly strong statement-, you need to back that up with detailed criticism.

    No, one example of pseudoscience is sufficient and I provided two.

    I remind that
    science consists of seeking the nearest possible approximation to ‘truth’ by formulating ideas and seeking information which refutes an idea then amending or rejecting the idea in light of the information
    but
    pseudoscience consists of deciding an idea is ‘truth’ then seeking information which supports the idea while ignoring or rejecting information which refutes the idea.

    My first example of their pseudoscience was citing the statement you had quoted and showing it to be logical nonsense used to support a contention; i.e. I said;

    So the error magnitudes are not quantified (i.e. they “may be of nontrivial magnitude”) but they are known to be too small to alter the results (i.e. “they are not large enough to alter the trend”).

    I could have used ridicule by saying something like this.
    So, these ‘scientists’ admit the errors “may be of nontrivial magnitude”, admit they don’t know how big the errors are, but conclude the errors don’t affect their results. And that is what they call science! It is excusing information which refutes their adopted ‘truth’.

    Secondly, I pointed out that they made no consideration of the effect of the sample differing from year-to-year and that, therefore, there is no way to discern if the observed changes are an effect of varying sample error. Either their having ignored that was pseudoscience or it was incompetence. I say it was pseudoscience: are you saying it was incompetence? Either way, it demolishes their conclusion.

    You say to me:

    I did not say the errors are unknown, those are your words. I cited Shen at al. as a source that explores the magnitude of those errors, and cited their conclusion.

    Er, Ahem. How can I put this? Ah, I know how, I say you are using sophistry.

    I said the effects of sampling errors are unknown and you cited and quoted Shen et al. as saying those errors “may be of nontrivial magnitude” (which means they are not known). Shen et al. do NOT quantify the sampling errors. Indeed, they cannot because nobody knows how to quantify the error of a single datum when there is no independent calibration available: the sample of each year is a unique set (i.e. it is a datum) that cannot be assessed as part of a collection of different sets. (The diameters of an apple, a pear and a banana cannot be used to assess errors in the determination of the apple’s diameter in the absence of independent calibration.)

    You say to me:

    Neither did I say i did not like Shen at al.’s conclusion. Those are again your words.

    Oh! Really? So you posted their conclusion because you did not like it?

    I find that strange, especially when it was all you provided in your post, and you wrote on the basis of it

    In summary, the original post suggests a problem with the corrections, but the discussion here did not yield any evidence for that.

    That “discussion” consisted solely of your presentation of your quotations from Shen et al..

    And you say

    If *you* don’t like their conclusion then please detail the problems you see with their data treatment, either here, or better, in a paper.

    I stated the problems with their analysis; i.e. it makes an illogical deduction and fails to address the major problem with the data. No more detail is needed. (Having proven the contents of a bin are garbage then one does not need to detail each item of rubbish in the bin.)

    And you conclude

    Finally what makes you think I, or anybody else wants to ignore pertinent data? I certainly did not say so.

    Alarmists often try to ignore pertinent data and examples are legion. For example, they want everybody to ignore that statistically significant global warming stopped for the last 10 years but existed for each of the three previous 10-year periods.

    However, since you don’t want to ignore pertinent data, I await your response to the fact that observed variations in global temperature may be induced by variations in the sample used to obtain global temperature because the sample changes from year-to-year and the sample error of each year cannot be determined.

    Richard

  119. The ball is clearly in your court. Shen et al. have written an 18-page paper addressing the topic of sampling errors and you call that pseudoscience. As far as I can see, you base that conclusion on my quote of the last 9 lines. That simply will not do.

  120. Also relevant is Weithmann 2011 “Optimal Averages of US Temperature Error Estimates and Inferences” (http://sdsu-dspace.calstate.edu/bitstream/handle/10211.10/1792/Weithmann_Alexander.pdf?sequence=1).

    Richard Courtney – Nobody who understands the science claims that we know everything exactly. But claiming, as you do, that “The effect(s) of sampling error are not known, and there is no way they can be known, so there is no known way to model them correctly.” is just nonsense, and appears to be an Argument from Complexity (http://www.don-lindsay-archive.org/skeptic/arguments.html#complexity) – a rhetorical trick.

    Sampling errors are a quite well understood topic in science, and analysis of error sources allows determining the bounds, the possible range of errors.

    A simple Reductio ad absurdum of your argument: If uncertainty was (as you claim) complete ignorance, we could never risk getting out of bed – because our sampling of personal experience is never random, never complete, and we cannot know everything about our circumstances. Yet somehow we manage in the face of that uncertainty…

  121. Peter Roessingh and KR:

    Peter Roessingh, OK, choose to ignore what I wrote if you want. Do whatever makes you feel comfortable. Truth is truth whether or not you ignore it.

    KR, you (deliberately?) misrepresent what I have written.

    At no time have I claimed “uncertainty is complete ignorance”. On the contrary, I have stated that
    (a) statistical uncertainty is easily assessed for the data in a data set
    but
    (b) the uncertainty of an individual datum cannot be determined unless it is part of a data set or has an independent calibration.

    I am surprised that you claim to be ignorant of those truths.

    The problem in this case is that each datum (e.g. annual or monthly) for average global temperature is an individual datum. It is NOT part of a data set. This is because the measurements used to provide each datum are a unique data set.

    I will simplify to extreme so hopefully you understand the point.

    In one year all the measurements are obtained in the tropics,
    and
    in the next year half the measurements are taken in the tropics and half in the Arctic.

    An average temperature is obtained from the data in each of those years. And the averages show that year two has a lower average than year one. That does not indicate the globe cooled between the two years: it is an effect of different measurement sites.

    A statistical analysis can be conducted on the data of each year. And it will give probability limits (e.g. 95% confidence) for the obtained average temperature of each year. However, it will not deconvolute the effects of the different measurement sites. Indeed, the calculated confidence limits will be misleading. The average in year one will provide narrower 95% confidence limits than the confidence limits of the average for year two. However, the average obtained for year one is a less accurate indication of global temperature than the average obtained in year two.

    And a statistical analysis can be conducted on the total data of both years to obtain confidence limits of them both. But the result will be an error of unknown magnitude. This is because the calculation of confidence assumes a random sample, and the samples are NOT random. Indeed, they are not even consistent from year to year.

    So, there is no known way to assess the effects on the average of changes to the measurement sites.
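    The simplified two-year example above can be put into a toy simulation. Everything here is invented for illustration (made-up regional temperatures and station counts, not real data):

    ```python
    import random

    random.seed(0)

    # Toy "true" climate: tropics average 26 °C, Arctic average -16 °C,
    # and neither region changes between year 1 and year 2.
    def tropical_reading():
        return 26.0 + random.gauss(0, 1)

    def arctic_reading():
        return -16.0 + random.gauss(0, 1)

    # Year 1: all 100 stations are in the tropics.
    year1 = [tropical_reading() for _ in range(100)]

    # Year 2: 50 tropical stations, 50 Arctic stations.
    year2 = [tropical_reading() for _ in range(50)] + \
            [arctic_reading() for _ in range(50)]

    mean1 = sum(year1) / len(year1)   # ≈ 26 °C
    mean2 = sum(year2) / len(year2)   # ≈ 5 °C

    # The averages differ by ~21 °C even though the climate did not change:
    # the difference is entirely an artifact of the changed station sample.
    print(f"Year 1 mean: {mean1:.1f} °C, Year 2 mean: {mean2:.1f} °C")
    ```

    The apparent “cooling” between the two years is produced entirely by the change in the sample, which is the point of the example.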

    Any assertion (e.g. Shen et al.) that error limits (i.e. confidence limits) have been determined cannot be correct unless a revolutionary method of statistical analysis is also presented. Nobody has yet devised such a revolutionary method.

    Richard

  122. Richard Courtney – Thank you for clarifying your point.

    “…there is no known way to assess the effects on the average of changes to the measurement sites.”

    Incorrect: the data from different (or changing) sites are not independent – we do know something about areas not sampled. See Hansen and Lebedeff 1987 (http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf), the correlation of temperature _anomalies_ (note: anomalies, changes, not absolute temperatures) out to quite large distances. Measuring the temperature anomaly at any point gives you considerable information about the surrounding area, albeit with a correlation and certainty decreasing with distance (providing computable probability limits, dependent on sample distances).

    Samples of temperature anomalies taken from different points within a range of correlation are related, they are all part of an interdependent data set: and hence your claim of independence and lack of relationships between different sample sites simply does not hold, and the uncertainties can be determined.
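    The distance-weighted anomaly estimation described here can be sketched as follows. The linear taper to zero at 1200 km loosely follows the Hansen & Lebedeff (1987) / GISTEMP approach; the station positions and anomaly values are invented, and the flat-grid distance is a simplification of the real spherical geometry:

    ```python
    import math

    def weight(distance_km, cutoff_km=1200.0):
        """Weight falls linearly from 1 at the station to 0 at the cutoff,
        loosely following the Hansen & Lebedeff (1987) scheme."""
        return max(0.0, 1.0 - distance_km / cutoff_km)

    def estimate_anomaly(target, stations):
        """Distance-weighted average of nearby station *anomalies*
        (not absolute temperatures) at a target point."""
        num = den = 0.0
        for (x, y), anomaly in stations:
            d = math.hypot(x - target[0], y - target[1])  # km, toy flat grid
            w = weight(d)
            num += w * anomaly
            den += w
        return num / den if den else None  # None: no station within range

    # Invented example: three stations with anomalies in °C on a km grid.
    stations = [((0, 0), 0.5), ((300, 400), 0.7), ((900, 0), 0.2)]
    print(estimate_anomaly((100, 100), stations))  # ≈ 0.52
    ```

    The estimate leans most heavily on the nearest station, with the confidence of the interpolation decreasing as stations get farther from the target point.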

  123. KR:

    I am answering your post at August 23, 2012 at 7:04 am.

    You say the data from measurement sites are not independent. So what? I did not say they are. I said the data sets from individual years are independent because the numbers and positions of measurement sites are unique for each year.

    I am familiar with Hansen & Lebedeff (1987). But it proves my point.

    As you and they say, information from immediately adjacent to a measuring station can be inferred from the data of the measuring station. But the quality of the inferred data degrades very rapidly with distance from the measuring station. Hence, assumptions are made as to the nature of the degradation.

    If H&L did provide definitive information on the degradation then different teams (e.g. GISS and HadCRUT) would all use the same assumptions for the degradation with distance from a measuring station. They don’t.

    Indeed, H&L admit their system is merely a guess. They say their method is “designed to provide accurate long-term variations” and they assess whether it does that by comparison with performance of a climate model.

    How does one know if their guess is a good representation of the model performance or if the model is a good representation of the guess? All one can say is that the model and the guess agree to described standards. That agreement is meaningless unless the climate model emulates real-world changes at a regional level, and no climate model does that.

    Indeed, the fact that their guess emulates the climate model is indicative that their guess is wrong because the model does not emulate changes at a regional level: no climate model does.

    So, the use of data measured at a site to infer data at another place is merely a guess. And the confidence in the accuracy of the inferred data is zero because one cannot determine the confidence of a guess.

    Furthermore, data for regions far distant from any measurement station is a pure guess. This is so obvious that each team does not include regions far distant from any measurement site. But they use different assumptions concerning which areas to not include. And the effect of this is exactly the same as my extremely simplified example in my explanation you answered.

    The quality, the number, and the sites of measurements vary from year to year. Hence, the guesses vary from year to year. In other words, the application of the guesses makes no difference to the basic issue which I stated; viz.

    The problem in this case is that each datum (e.g. annual or monthly) for average global temperature is an individual datum. It is NOT part of a data set. This is because the measurements used to provide each datum are a unique data set.

    And all this brings us back to where I ended my last post; i.e.

    Any assertion (e.g. Shen et al.) that error limits (i.e. confidence limits) have been determined cannot be correct unless a revolutionary method of statistical analysis is also presented. Nobody has yet devised such a revolutionary method.

    Richard

  124. Richard Courtney – You’ve digressed _hugely_ from the discussion, which (AFAICS) regards estimating temperature changes in observations. Climate models are a different topic entirely, and not at all relevant to estimating uncertainties in observations.

    GISTEMP uses scaled anomaly correlations, HadCRUT uses spatial blocking for area weighting – different researchers, different approaches. While I have my personal opinions as to which approach works better, they have their reasons for their choices, their approaches; both methods are supportable.

    “So, the use of data measured at a site to infer data at another place is merely a guess. And the confidence in the accuracy of the inferred data is zero because one cannot determine the confidence of a guess.” False.

    Use of data to infer values at another place are estimates, not guesses, supported by _measurements_ of local correlations. Not assumptions (as you claim), but measurements leading to correlations with confidence intervals. Local correlation (whether by distance weighted averaging as per GIS or simple regional block weighting as per HadCRUT) means that location sampling at any time point consists of a single, interrelated data set. Spatial sampling has calculable uncertainties, and hence error limits can be determined.

    “The problem in this case is that each datum (e.g. annual or monthly) for average global temperature is an individual datum. It is NOT part of a data set. This is because the measurements used to provide each datum are a unique data set.” False.

    Hello? Year to year measurements are also part of an interdependent data set, as there is continuity in sampling across those years. Unless you can document a single moment when the entire set of weather stations were abandoned and a new set built, breaking continuity – one, mind you, with no calibration against previous data. Even if the entire network is replaced over time, continuity and cross-calibration ensure that there is only one data set. The “revolutionary method of statistical analysis” you seem to be demanding is called time series analysis (more literature than I care to quote), from which comes information about temporal trends. Claims that time series data cannot be acquired, dealt with, or have its error limits calculated are just nonsense.
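    As a minimal illustration of extracting a temporal trend from a series (a tiny piece of the time-series machinery referred to here, not the full apparatus), an ordinary least-squares slope on an invented anomaly series:

    ```python
    def ols_slope(series):
        """Ordinary least-squares trend (units per step) of a 1-D series."""
        n = len(series)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(series) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
        den = sum((x - mean_x) ** 2 for x in xs)
        return num / den

    # Invented anomaly series with a built-in 0.02 °C/yr trend plus
    # an alternating "wiggle" standing in for year-to-year variability.
    series = [0.02 * year + 0.1 * ((-1) ** year) for year in range(50)]
    print(round(ols_slope(series), 3))  # ≈ 0.02
    ```

    The slope recovers the built-in trend despite the year-to-year wiggles; real analyses would add confidence intervals that account for autocorrelation.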

    Claiming that climate data points in time and space (in the presence of temporal continuity and spatial correlation) are not related is a specious argument. Error limits and confidence intervals _can_ be determined for both.

    I have to say that I find the _sheer variety_ of claims that amount to “There’s uncertainty, therefore we know nothing, therefore don’t believe what anyone says about the data” both astounds and appalls me. There are limits to our knowledge, uncertainties – but uncertainty does not mean ignorance.

    Adieu

  125. KR:

    You begin your fallacious rant at August 23, 2012 at 1:40 pm saying

    Richard Courtney – You’ve digressed _hugely_ from the discussion, which (AFAICS) regards estimating temperature changes in observations.

    SAY WHAT!?

    I stuck rigidly to the subject and concluded my reply to you by directly quoting from my post which you questioned and saying nothing you had presented changed that in any way.

    The remainder of my answer to you directly addressed the relevance of the paper you (not me) cited.

    Your only response to my explanations of faults with that paper is to make the mistaken (or deliberately untrue) assertion

    Year to year measurements are also part of an interdependent data set, as there is continuity in sampling across those years.

    As I have repeatedly explained to you, that is not true. Indeed, in the post you dispute, I wrote an explanation of how changing some measurement sites altered both the obtained average and its statistical significance. But in your response you make the daft assertion

    Year to year measurements are also part of an interdependent data set, as there is continuity in sampling across those years. Unless you can document a single moment when the entire set of weather stations were abandoned and a new set built, breaking continuity – one, mind you, with no calibration against previous data.

    That is risible!
    The “entire set has to be changed” for “continuity” to be lost? Did you think before writing that? It says you are claiming that if all except one measurement site closed then the data would still be contiguous so the results would be comparable to the data obtained using many sites. OK. Which one do you want to choose because using only that one would save a lot of bother?

    And you again misrepresent what I have said. Your repetition of the falsehood is egregious because I directly refuted it to you at August 23, 2012 at 3:26 am, saying

    KR, you (deliberately?) misrepresent what I have written.

    At no time have I claimed “uncertainty is complete ignorance”. On the contrary, I have stated that
    (a) statistical uncertainty is easily assessed for the data in a data set
    but
    (b) the uncertainty of an individual datum cannot be determined unless it is part of a data set or has an independent calibration.

    I am surprised that you claim to be ignorant of those truths.

    I am not surprised that you have run away.

    Richard

    PS I shall be absent and unable to reply for some days after this. (My absence will probably be good for my blood pressure because I will not be able to read any more of your twaddle.)

  126. Dear Richard,

    This is a good time to give a short overview of the discussion so far.
    But let me first say that I think your approach is less than constructive. KR aptly summarized that as:

    “…claims that amount to “There’s uncertainty, therefore we know nothing, therefore don’t believe what anyone says about the data”.

    However, I do not want to start a new sideline in the discussion about that, but will instead try to focus on the actual science: the corrections of the raw data and the associated errors.

    The main post of Anthony made a very simple statement:

    “July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC”

    Although not explicitly stated, the message of the post is that the correction is the only real source of the rising temperature, and that there is no true signal in the data.

    I commented that the statement was meaningless unless the reasons behind the adjustments and their validity were taken into account. To give an extreme example, if a new thermometer were introduced that gave a consistent reading one degree higher than the old model, then a correction of one degree down would be needed for meaningful comparisons. It is not at all a problem that the correction is of the same order as (or larger than) the signal itself, as long as the correction (and associated errors!) are calculated correctly. I also provided sources to the original literature that explain how and why the corrections were done.
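    The thermometer example above can be sketched directly. The series, the bias size, and the changeover year are all invented; the point is only that a documented instrument offset can be removed even when the total adjustment exceeds the signal:

    ```python
    # Toy series: true warming of 0.01 °C/yr over 100 years (0.99 °C total).
    true_temps = [10.0 + 0.01 * year for year in range(100)]

    # Suppose a new thermometer introduced in year 50 reads 1.0 °C high
    # (an invented, documented instrument bias, larger than the signal).
    BIAS = 1.0
    raw = [t + (BIAS if year >= 50 else 0.0)
           for year, t in enumerate(true_temps)]

    # The correction: subtract the documented bias from the affected years.
    adjusted = [t - (BIAS if year >= 50 else 0.0)
                for year, t in enumerate(raw)]

    # The adjustment (1.0 °C) is larger than the century signal,
    # yet the adjusted series recovers the true trend exactly.
    print(adjusted == true_temps)  # True
    ```

    The correction is only as good as the knowledge of the bias, which is why the associated error estimates matter as much as the correction itself.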

    Anthony just defended his headline, arguing it remains true, irrespective of the accuracy of the corrections. That is true, but not informative. You made a (much) stronger statement. I quote:

    “The effect(s) of sampling error are not known, and there is no way they can be known, so there is no known way to model them correctly.”

    This changes the topic slightly: the original post concerns the positive corrections in the data, which are largely due to the correction of time-of-observation bias (TOBS), but your (false) statement is about sampling errors.

    I provided a reference about TOBS (Karl 1986) and KR mentioned Vose et al. 2003. Anybody questioning the validity of the TOBS correction should point out the flaws in the methods detailed in these papers. That has not happened.

    But as said, you brought up the problem of sampling errors and make the (false) claim that they cannot be estimated. I quoted Shen et al. 2012, which shows the sampling errors can be, and have been, estimated. KR mentioned Weithmann 2011.

    You called the Shen et al. paper “pseudoscientific nonsense” and based that very strong statement on a (distorted) representation of the last 9 lines of the paper.
    If you read the paper you will see that Shen et al. *do* estimate the errors (hence your claim they are unknown is false). They quantify the errors using the fact that the data are correlated, and find that especially in the older measurements the variances are “non-trivial” (and that is something else than unknown!), but in other periods are nearly zero. The authors subsequently evaluate the effect of the error estimates on the overall trend, and conclude that although the errors were sometimes of “non-trivial” size, they were not large enough to alter the trend of the contiguous U.S. surface air temperature. If you want to defend the position that the error estimates in Shen et al. 2012 are not correctly computed, you need to demonstrate that in a more rigorous way than just pointing to a (in your eyes) logical inconsistency in the formulation of their end conclusion.
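    The general idea that correlation makes sampling error *quantifiable* rather than unknowable can be illustrated with a textbook formula. This is a simplified sketch with invented numbers, not Shen et al.'s actual method: for a network of stations whose anomalies share a common pairwise correlation, the variance of the network average has a closed form.

```python
# Illustrative sketch (not Shen et al.'s actual method): if station anomalies
# have variance sigma2 and share a common pairwise correlation rho, the
# variance of their average is larger than sigma2 / n, but it is still
# perfectly computable - the error is "non-trivial", not "unknown".
def sampling_error_variance(sigma2, n_stations, rho):
    """Variance of the mean of n equally correlated measurements."""
    return (sigma2 / n_stations) * (1 + (n_stations - 1) * rho)

# Sparse early network: the error is non-trivial.
early = sampling_error_variance(sigma2=1.0, n_stations=5, rho=0.3)

# Dense modern network: the error shrinks toward the floor sigma2 * rho.
modern = sampling_error_variance(sigma2=1.0, n_stations=500, rho=0.3)

assert early > modern
```

    With these invented numbers the early-network variance is 0.44 against 0.30 for the dense network, mirroring the paper's qualitative finding that errors are largest in the sparse early record yet quantifiable throughout.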

    The other point you assert is this:
    “each datum (e.g. annual or monthly) for average global temperature is an individual datum. It is NOT part of a data set. This is because the measurements used to provide each datum are a unique data set.”

    This is not true: a series of measurements of April temperatures from a particular gridbox are all correlated over time because they are at about the same geographic location. This correlation holds over distances on the order of a thousand kilometers, as demonstrated by Hansen & Lebedeff (1987). You dismiss this argument by boldly stating that Hansen & Lebedeff’s results are also wrong, just as you say that Shen et al. 2012 are wrong. You give two arguments. One is that not all groups use these results (an argument that in no way invalidates the reasoning in the paper itself). The other is that H&L87 use a comparison with global climate models to validate their results, and that this proves their results are invalid since the correlations describe local effects and GCMs are global. Apart from the fact that the logic here is questionable to say the least, you ignore the fact that this method is only one of the ways used in H&L87 to validate their result. But we can argue forever. I am willing to do that if you insist, but I want to make a more general point.
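    The correlation argument can be illustrated with simulated station data. This is a hypothetical sketch with invented noise levels, not H&L87's real station analysis: two nearby stations share a common regional signal plus independent local noise, so their anomaly series are correlated and together form one data set rather than isolated individual data.

```python
import math
import random

random.seed(0)

# Hypothetical sketch: two nearby stations observe the same regional signal
# plus independent local noise, so their monthly anomalies are correlated
# even though neither station measures the other directly.
n = 240  # 20 years of monthly anomalies
regional = [random.gauss(0, 1) for _ in range(n)]
station_a = [r + random.gauss(0, 0.5) for r in regional]
station_b = [r + random.gauss(0, 0.5) for r in regional]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(station_a, station_b)

# With these noise levels the expected correlation is 1 / (1 + 0.25) = 0.8,
# so the two series are far from statistically independent.
assert r > 0.6
```

    The design choice here is deliberate: the correlation arises purely from the shared regional signal, which is exactly the mechanism that makes neighboring station records mutually informative.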

    So to sustain your position you have consistently argued that *all* the papers that were cited as evidence against it were faulty, and that their authors were badly informed about measurement techniques or, worse, that the papers were pseudoscience.

    There is a point where such a strategy becomes untenable. If you can only maintain your position by arguing that everybody except yourself is misguided (or a fraud), without backing your statements up with peer-reviewed papers, you have to start to question your own position.

    Peter.

  127. Peter Roessingh:

    I apologise that I have been unable to reply to your post at August 25, 2012 at 12:56 pm until now. If you are a regular reader of WUWT then you will know I often have such absences.

    For the record, I absolutely reject your following unjustifiable and completely untrue assertions aimed at me.

    Untruth #1
    You say I am claiming:
    “There’s uncertainty, therefore we know nothing, therefore don’t believe what anyone says about the data”.

    There is no possible excuse for that lie because I have twice refuted it, saying:

    At no time have I claimed “uncertainty is complete ignorance”. On the contrary, I have stated that
    (a) statistical uncertainty is easily assessed for the data in a data set
    but
    (b) the uncertainty of an individual datum cannot be determined unless it is part of a data set or has an independent calibration.

    I am surprised that you claim to be ignorant of those truths.

    Untruth #2
    You say:

    I quote:

    “The effect(s) of sampling error are not known, and there is no way they can be known, so there is no known way to model them correctly.”

    This changes the topic slightly: the original post concerns the positive corrections in the data, and those are largely due to the correction of time of observation bias (TOBS), but your (false) statement is about sampling errors.

    I provided a reference about TOBS (Karl 1986) and KR mentioned Vose et al. 2003. Anybody questioning the validity of the TOBS correction should point out the flaws in the methods detailed in these papers. That has not happened.

    NO! ABSOLUTELY NOT!
    The papers you cite make no assessment of the effects of the samples not being random. Indeed, they cannot do that because there is no known way to make such an assessment.

    “Correcting” wrong data prior to processing it may make the results of the processing more or less accurate, and there is no way to determine which it does. Hence, as I said, the “corrections” cannot be justified. Hence, as I explained, the adoption of any “methods” for the “corrections” is a “flawed” procedure.

    Untruth #3
    You say:
    I quoted Shen et al. 2012, which shows the sampling errors can be, and have been, estimated. KR mentioned Weithmann 2011.

    Effects of changing non-random samples cannot be assessed by any known method. A claim to make such an assessment is NOT an “estimate”: it is a guess with no possibility of validation.

    Untruth #4
    You say to me:
    “You called the Shen et al. paper “pseudoscientific nonsense” and based that very strong statement on a (distorted) representation of the last 9 lines of the paper.
    If you read the paper you will see that Shen et al. *do* estimate the errors (hence your claim they are unknown is false).”

    Indeed, they do claim to assess effects of non-random samples. Any such claim is pseudoscientific nonsense because there is no known way to do it. Correlations certainly do not.

    Untruth #5
    You say to me:
    “You dismiss this argument by boldly stating that Hansen & Lebedeff’s results are also wrong”

    NO! I explained how and why they are wrong. Also, I pointed out that no other team which provides estimates of global temperature accepts the silly arguments of Hansen & Lebedeff because they, too, know those arguments are wrong.

    Untruth #6
    You say to me:
    “There is a point where such a strategy becomes untenable. If you can only maintain your position by arguing everybody except yourself is misguided (or a fraud), without backing your statements up with peer reviewed papers, you have to start to question your own position.”

    That is offensive in the extreme.
    At August 20, 2012 at 5:23 am (n.b. it was addressed to you) I cited my earlier post at August 19, 2012 at 2:32 pm which explained how the Team used nefarious methods to prevent publication of a paper with me as Lead Author that addressed these issues. I linked to the UK Parliamentary Record where an explanation of the problems is spelled out.

    Richard

  128. Richard,

    You wrote:

    “Indeed, they [Shen et al. 2012] do claim to assess effects of non-random samples. Any such claim is pseudoscientific nonsense because there is no known way to do it.”

    So we have at least established that Shen et al. 2012 provide a method to assess the effects of non-random samples, and that you claim their method is nonsense.

    As proof for this claim you provide a link to the UK Parliamentary Record and a reference to a draft of a paper with the title: “A call for revision of Mean Global Temperature (MGT) data sets”.

    I failed to find any argument in your sources that relates to the methods used by Shen et al. 2012. Can you cite the lines in your document that are relevant, and explain in more detail what is wrong with the method of Shen et al.? If I misunderstood you, can you point me to another source to back up your strong claim that Shen et al. are wrong and wrote “pseudoscientific nonsense”?

    Peter.

Comments are closed.