Spiking temperatures in the USHCN – an artifact of late data reporting

Correcting and Calculating the Size of Adjustments in the USHCN

By Anthony Watts and Zeke Hausfather

A recent WUWT post included a figure showing the difference between raw and fully adjusted data in the United States Historical Climatology Network (USHCN). The figure used in that WUWT post was from Steven Goddard’s website, and in addition to the delta from adjustments over the last century, it included a large spike of over 1 degree F for the first three months of 2014. That spike struck some as unrealistic, but knowing that a lot of adjustment goes into producing the final temperature record, others weren’t surprised at all. This essay is about finding the true reason behind that spike.

[Figure: 2014_USHCN_raw-vs-adjusted]

One commenter on that WUWT thread, Chip Knappenberger, said he didn’t see anything amiss when plotting the same data in other ways, and wondered in an email to Anthony Watts if the spike was real or not.

Anthony replied to Knappenberger via email that he thought it was related to late data reporting, and later repeated the same comment in an email to Zeke Hausfather, while simultaneously posting it to Nick Stokes’ blog; Stokes had also been looking into the spike.

This spike at the end may be related to the “late data” problem we see with GHCN/GISS and NCDC’s “state of the climate” reports. They publish the numbers ahead of dataset completeness, and they have warmer values, because I’m betting a lot of the rural stations come in later, by mail, rather than the weathercoder touch tone entries. Lot of older observers in USHCN, and I’ve met dozens. They don’t like the weathercoder touch-tone entry because they say it is easy to make mistakes.

And, having tried it myself a couple of times, and being a young agile whippersnapper, I screw it up too.

The USHCN data seems to show completed data where there is no corresponding raw monthly station data (since it isn’t in yet) which may be generated by infilling/processing….resulting in that spike. Or it could be a bug in Goddard’s coding of some sorts. I just don’t see it since I have the code. I’ve given it to Zeke to see what he makes of it.

Yes the USHCN 1 and USHCN 2.5 have different processes, resulting in different offsets. The one thing common to all of it though is that it cools the past, and many people don’t see that as a justifiable or even an honest adjustment.

It may shrink as monthly values come in.

Watts had asked Goddard for his code to reproduce that plot, and he kindly provided it. It consists of a C++ program to ingest the USHCN raw and finalized data and average it to create annual values, plus an Excel spreadsheet to compare the two resultant data sets. Upon first inspection, Watts couldn’t see anything obviously wrong with it, nor could Knappenberger. Watts also shared the code with Hausfather.

After Watts sent the email to him regarding the late reporting issue, Hausfather investigated that idea, and ran some different tests and created plots which demonstrate how the spike was created due to that late reporting problem. Stokes came to the same conclusion after Watts’ comment on his blog.

Hausfather, in the email exchange with Watts on the reporting issue wrote:

Goddard appears just to average all the stations’ readings for each year in each dataset, which will cause issues since you aren’t converting things into anomalies or doing any sort of gridding/spatial weighting. I suspect the remaining difference between his results and those of Nick/myself is due to that. Not using anomalies would also explain the spike, as some stations not reporting could significantly skew absolute temps because of baseline differences due to elevation, etc.

From that discussion came the idea to do this joint essay.

To figure out the best way to estimate the effect of adjustments, we look at four different methods:

1. The All Absolute Approach – Taking absolute temperatures from all USHCN stations, averaging them for each year for raw and adjusted series, and taking the difference for each year (the method Steven Goddard used).

2. The Common Absolute Approach – Same as the All Absolute Approach, but discarding any station-months where either the raw or adjusted series is missing.

3. The All Gridded Anomaly Approach – Converting absolute temperatures into anomalies relative to a 1961-1990 baseline period, gridding the stations in 2.5×3.5 lat/lon grid cells, applying a land mask, averaging the anomalies for each grid cell for each month, calculating the average temperature for the whole contiguous U.S. by an area-weighted average of all grid cells for each month, averaging monthly values by year, and taking the difference each year for the resulting raw and adjusted series.

4. The Common Gridded Anomaly Approach – Same as the All Gridded Anomaly Approach, but discarding any station-months where either the raw or adjusted series is missing.
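To see why method #1 can manufacture a spike while the anomaly-based methods cannot, here is a minimal Python sketch using made-up station baselines (the numbers are illustrative only, not USHCN data):

```python
# Hypothetical illustration: ten stations whose baseline climates differ
# (e.g. due to elevation). Annual means in deg F, with no climate signal.
baselines = [45 + 3 * i for i in range(10)]           # 45F .. 72F

def mean(values):
    return sum(values) / len(values)

# Every station reports:
full_year = baselines[:]
# "Late reporting" year: the five coolest stations (say, rural observers
# mailing in forms) have not reported yet, so only the warm half remains.
partial_year = baselines[5:]

# Method #1 (absolute averaging) sees a warm spike that is pure artifact:
spike = mean(partial_year) - mean(full_year)
print(f"spurious spike: {spike:.1f} F")               # spurious spike: 7.5 F

# Anomaly approach: subtract each station's own baseline first, so a
# changing station mix cannot shift the average.
anoms_full = [t - b for t, b in zip(full_year, baselines)]
anoms_partial = [t - b for t, b in zip(partial_year, baselines[5:])]
print(mean(anoms_partial) - mean(anoms_full))         # 0.0
```

The 7.5 F "spike" here comes entirely from which stations happen to be averaged, not from any temperature change, which is the mechanism being discussed.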

The results of each approach are shown in the figure below; note that the spike has been reproduced using method #1, the All Absolute Approach:

[Figure: USHCN-Adjustments-by-Method-Year]

The latter three approaches all find fairly similar results; the third method (The All Gridded Anomaly Approach) probably best reflects the difference in “official” raw and adjusted records, as it replicates the method NCDC uses in generating the official U.S. temperatures (via anomalies and gridding) and includes the effect of infilling.
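The gridded averaging step in method #3 can be sketched roughly as follows. The cell values and the simple cosine-latitude weighting are illustrative assumptions; NCDC’s actual procedure (land masking, 2.5×3.5 cells, infilling) is more involved:

```python
import math

def grid_average(cells):
    """Area-weighted average of per-cell anomalies: each lat/lon cell is
    weighted by cos(latitude), since cells shrink toward the poles."""
    weights = [math.cos(math.radians(lat)) for lat, _ in cells]
    total = sum(w * anom for w, (_, anom) in zip(weights, cells))
    return total / sum(weights)

# Hypothetical grid cells for one month: (center latitude, mean anomaly F)
cells = [(30.0, 0.5), (40.0, 0.7), (48.0, 0.2)]
print(round(grid_average(cells), 3))
```

Because each cell carries one value regardless of how many stations it contains, gridding keeps densely sampled regions from dominating the national average.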

The All Absolute Approach used by Goddard gives a somewhat biased impression of what is actually happening, as using absolute temperatures when raw and adjusted series don’t have the same stations reporting each month will introduce errors due to differing station temperatures (caused by elevation and similar factors). Using anomalies avoids this issue by looking at the difference from the mean for each station, rather than the absolute temperature. This is the same reason why we use anomalies rather than absolutes in creating regional temperature records, as anomalies deal with changing station composition.

The figure shown above also handles data from 2014 incorrectly. Because it treats the first four months of 2014 as complete data for the entire year, it gives them more weight than other months and risks exaggerating the effect of incomplete reporting or any seasonal cycle in the adjustments. We can correct this problem by showing trailing 12-month averages rather than yearly values, as shown in the figure below. When we look at the data this way, the large spike in 2014 shown in the All Absolute Approach is much smaller.

[Figure: USHCN-Adjustments-by-Method-12M-Smooth]
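The trailing 12-month averaging can be sketched like this (toy numbers, not the actual adjusted-minus-raw series):

```python
def trailing_12m(values):
    """Trailing 12-month means: each point averages the latest 12 months,
    so a partial year never gets the weight of a complete one."""
    return [sum(values[i - 11:i + 1]) / 12.0 for i in range(11, len(values))]

# Made-up monthly (adjusted minus raw) differences: flat at 0.9 F, then
# four recent months inflated to 2.0 F by late-reporting infill.
monthly = [0.9] * 24 + [2.0] * 4

# Treating the four reported months of the final "year" as a full year
# jumps straight to the inflated value...
print(sum(monthly[24:]) / 4)                  # 2.0
# ...while the trailing 12-month mean rises only modestly, since the four
# inflated months are diluted by eight complete ones.
print(round(trailing_12m(monthly)[-1], 2))    # 1.27
```

This is why the 12-month-smoothed chart shows only a small bump at the end rather than a full-size spike.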

There is still a small spike in the last few months, likely due to incomplete reporting in April 2014, but it’s much smaller than in the annual chart.

While Goddard’s code and plot produced a mathematically correct result, the procedure he chose (#1, the All Absolute Approach, comparing absolute raw USHCN data with absolute finalized USHCN data) was not appropriate. It allowed non-climatic differences between the two datasets, likely caused by missing data (late reports), to create the spike artifact in the first four months of 2014, and it somewhat overstated the difference between adjusted and raw temperatures by using absolute temperatures rather than anomalies.


176 thoughts on “Spiking temperatures in the USHCN – an artifact of late data reporting”

  1. Hi Anthony,

    It looks like there is still a little bit less than 1°F of temperature adjustment with your last 3 methods since the ’40s. Why didn’t you comment on that?

    REPLY:
    I did, see -“The one thing common to all of it though is that it cools the past, and many people don’t see that as a justifiable or even an honest adjustment.” – Anthony

  2. So there is a need to adjust some data even after the year 2000. Is there any chance that the meteorologists in the USA will learn to report their results so that no adjustments are necessary?
    And will the temperatures from about 2005 to 2014, which at this point of time seem unadjusted, stay that way?

  4. So what I take away from this analysis is, under any of the 4 approaches, the raw temperature data has been adjusted to reflect an additional temperature increase of about 1 degree F over the past 60 to 70 years. Why is this? It does not on its face seem to be reasonable. It needs to be explained in a clear and believable fashion so that I and others do not conclude that, intentional or not, it reflects the biases of those who want to believe in the worst case scenarios of climate change.

  5. While Goddard’s analysis may have exaggerated the differences between raw and adjusted data, I still find it more than curious that past temperatures are lowered and present or near-present temperatures are adjusted higher, always resulting in a rising trend. Sorry, I find it hard to believe that this is coincidental.

  6. Shouldn’t the effects of UHI dictate that adjustments should be cooling the present to reflect the UHI phenomenon?

  7. The fact that they adjust data at all means they aren’t measuring it correctly.

    Andrew

  8. So, an increase of ~ 0.7 deg F (< 0.4 deg C) in 115 years, of which 1/2 (~ 0.2 deg C) can be attributed to mankind. Phew, it's making me sweat!

  9. Why is the most negative adjustment centered around 1940? Seems convenient to apply the largest cooling adjustment to the hottest period of the century because it came before significant CO2.

  10. From here:

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/

    Under “Station siting and U.S. surface temperature trends”

    “Photographic documentation of poor siting conditions at stations in the USHCN has led to questions regarding the reliability of surface temperature trends over the conterminous U.S. (CONUS). To evaluate the potential impact of poor siting/instrument exposure on CONUS temperatures, Menne et al. (2010) compared trends derived from poor and well-sited USHCN stations using both unadjusted and bias-adjusted data. Results indicate that there is a mean bias associated with poor exposure sites relative to good exposure sites in the unadjusted USHCN version 2 data; however, this bias is consistent with previously documented changes associated with the widespread conversion to electronic sensors in the USHCN during the last 25 years (Menne et al. 2009). Moreover, the sign of the bias is counterintuitive to photographic documentation of poor exposure because associated instrument changes have led to an artificial negative (“cool”) bias in maximum temperatures and only a slight positive (“warm”) bias in minimum temperatures.”

    Are they saying that a poorly sited station, say one near an airport tarmac, does this:
    On a day with an actual high of 95 F, it reports 98 F and later when the low is 85 F it reports 89 F, which then appears that the max temp is “only” being reported 3 degrees high while the low is 4 degrees high. Would the average bias then be 3.5 degrees, making the bias in the max temp (3 degrees) “cool” while the min temp (4 degrees) would be a “warm” bias?

    Just wondering.

  11. This issue speaks clearly to the problem of method. Without a standard anyone can pretty much mash the data as they like and adjust away. This makes it easy for alarmists to start their caterwauling. Without a standard there is a justifiable call of foul when anyone publishes an adjusted data set. So sorry Steve Mosher but your BEST ain’t! It also legitimizes the discussion that an average temperature is meaningless and has no purpose. Perhaps an interim solution would be to publish a Steve Goddard style absolute raw vs adjusted as a fair warning device. I think his method has the most merit in terms of showing what is going on. What it demonstrates is the total adjustment and the fact that stations are massaged even when they are putting out good data. I think we should remember the only actual purpose for a large area average is to observe change over time. The more you hash and adjust the less likely you are to actually observe that difference and the more likely you’re going to see artificial non-real changes. Particularly in a chaotic system.
    v/r,
    David Riser

  12. Gridding, infilling, adjusting, why is this so complicated?

    Step one: read thermometer.
    Step two: compare to previous reading.
    Everything else is nonsense.

    If you don’t have data, don’t make it up.

  13. Interesting post. However, could I suggest not misusing the term “absolute temperature” in this context? It has a well-defined meaning (the temperature in kelvin), so it does not seem an appropriate choice for referring to a real temperature as opposed to a temperature anomaly.

    Could I suggest “actual temperature” to emphasise that you are referring to a real temperature measurement and not anomalies, where that is needed.

  14. Two weeks ago I was called crazy on Steve Goddard’s blog for suggesting that his new hockey stick of adjustments was due indeed to late data reporting as I pressed him to also display both raw/final plots and single station plots.

    (A) I wrote:

    “There’s just no easy five minute way to confirm your claim, so far, something that can be conveniently plotted using data plotted outside of your own web site. Normally such a jump would be considered a glitch somewhere, possibly a bug, and such things periodically lead to correspondence and a correction being issued. But here, motive is attached and around in circles skeptics spin, saying the same thing over and over that few outside of the blogosphere take seriously. I can’t even make an infographic about this since I am currently not set up as a programmer and I can’t exactly have much impact referencing an anonymous blogger as my data source. How do I easily DIY before/after pairs of plots to specific stations? Or is this all really just a temporary artifact of various stations reporting late etc? As a serious skeptic I still have no idea since the presentation here is so cryptic. I mean, after all, to most of the public, “Mosher” is an unknown, as is he to most skeptical Republicans who enjoy a good update for their own blogs.”

    (B) After pressing the issue, asking for before/after plots, Goddard replied:

    “I see. Because you have zero evidence that I did anything wrong, I must be doing something wrong. / Sorry if the data upsets your “real skeptic” pals.”

    “Take your paranoid meds.”

    “I believe you have completely lost your mind, and are off in Brandonville.”

    (C) His cheerleader team added:

    “Jesus, is PCP unpredictable.”

    “I’m gonna have to call transit police to have you removed from the subway.”

    http://stevengoddard.wordpress.com/2014/04/26/noaa-blowing-away-all-records-for-data-tampering-in-2014/

    (D) I think Goddard has one of the most important blogs of all since his ingenious variations on a theme are captivating and often quite entertaining and he’s great at headline writing and he is responsible for many DrudgeReport.com links to competent skeptical arguments and Marc Morano of ClimateDepot.com along with Instapundit.com and thus many conservative bloggers too often pick up on his posts. He has recently added a donation button and certainly deserves support, overall. He is highly effective exactly because his output is never buried in thousand word essays full of arcane plots, like is necessarily the case with this clearinghouse mothership blog.

  15. JohnWho says:
    May 10, 2014 at 7:06 am

    From here:

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/

    Under “Station siting and U.S. surface temperature trends”

    “Photographic documentation of poor siting conditions at stations in the USHCN has led to questions regarding the reliability of surface temperature trends over the conterminous U.S. (CONUS). To evaluate the potential impact of poor siting/instrument exposure on CONUS temperatures, Menne et al. (2010) compared trends derived from poor and well-sited USHCN stations using both unadjusted and bias-adjusted data. Results indicate that there is a mean bias associated with poor exposure sites relative to good exposure sites in the unadjusted USHCN version 2 data; however, this bias is consistent with previously documented changes associated with the widespread conversion to electronic sensors in the USHCN during the last 25 years (Menne et al. 2009). Moreover, the sign of the bias is counterintuitive to photographic documentation of poor exposure because associated instrument changes have led to an artificial negative (“cool”) bias in maximum temperatures and only a slight positive (“warm”) bias in minimum temperatures.”

    Are they saying that a poorly sited station, say one near an airport tarmac, does this:
    On a day with an actual high of 95 F, it reports 98 F and later when the low is 85 F it reports 89 F, which then appears that the max temp is “only” being reported 3 degrees high while the low is 4 degrees high. Would the average bias then be 3.5 degrees, making the bias in the max temp (3 degrees) “cool” while the min temp (4 degrees) would be a “warm” bias?

    Just wondering.
    ——————–
    They’re saying that their historical instrument calibration history is so sketchy that they have to try to adjust for an unmeasured (and probably unmeasurable) instrument bias over time, complicated by new biases introduced with new instruments, in uncalibrated and nonstandardized data environments. In short, the record is GIGO…

  16. …so, we’re still too stupid to read a thermometer
    There was just this short window around 2002, when we were able to

  17. All adjustments are negative and generally greater the farther back from present. The supposedly “problematic” 1940’s show greater adjustments in all but S. Goddard’s method. This doesn’t explain the rationale of seemingly always cooling the past with lesser adjustments as the climate “warms.”

  18. Nik, Goddard is correct, he did nothing wrong, he explained his methodology clearly. Is the hockey stick real in this case, it would depend on your standard….. oh there isn’t one. So yes over time the hockey stick may go down but that will be because of the “readjustment” of the record and the completion of the 2014 year. There is probably a good case that it sensationalizes the issue, which creates discussion. hmmm not a bad thing I think, thanks Anthony for this continuation of the discussion!

  19. Much of the apparent spike in absolute temperatures is an artifact of infilling coupled with late reporting. USHCN comprises 1218 stations, and was a selected subset of the larger ~7000-station coop network chosen because nearly all the stations have complete data for 100 years or more. After homogenization, NCDC infills missing data points in the data with the average monthly climatology plus a distance-weighted average of surrounding station anomalies. Unfortunately not all stations report on time, so in the last few months a lot more stations are infilled. These infilled values will be replaced with actual station values once those stations report.

    Infilling has no real effect on the overall temperature record when calculated using anomalies, as it mimics what already happens during gridding/spatial interpolation. However, when looking at adjusted absolute temperatures vs raw absolute temperatures it can wreak havoc on your analysis, because suddenly for April 2014 you are comparing 1218 adjusted stations (with different elevations and other factors driving their climatology) to only 650 or so raw stations reporting so far. Hence the spike.

    Chip Knappenberger also pointed out that a good test of whether or not the spike was real was to compare the USHCN adjusted data to the new climate reference network. NCDC conveniently has a page where we can see them side-by-side, and there is no massive divergence in recent months (USHCN anomalies are actually running slightly cooler than USCRN):

    http://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/

    Unfortunately there has been relatively little adjustments via homogenization from 2004 to present (the current length of well-spatially-distributed CRN data), so comparing raw and adjusted data to USCRN doesn’t allow us to determine if one or the other fits better.

    For folks arguing that we should be using absolute temperatures rather than anomalies: no, that’s a bad idea, unless you want to limit your temperature estimates only to stations that have complete records over the timeframe you are looking at (something that isn’t practical given how few stations have reported every single day for a century or more). Anomalies are a useful tool to deal with a changing set of stations over time while avoiding introducing any bias.

    As far as the reasons why data is “adjusted”, I don’t really want to rehash everything we’ve been arguing about on another thread for the last two days, but the short version is that station records are a bit of a mess. They were set up as weather stations more than climate stations, and they have been subject to stations moves (~2 per station over its lifetime on average), instrument changes (liquid in glass to MMTS), time of observation changes, microsite changes over 100 years, and many other factors. Both time of observation changes and MMTS changes introduced a cooling bias; station moves are a mixed bag, but in the 1940s and earlier many stations were moved from building rooftops in city centers to airports or wastewater treatment plants, also creating a cooling bias. Correcting these biases is why the adjustments end up increasing the century-scale trend, particularly in the max data (adjustments apart from TOBs actually slightly lower the century scale trend in min data).

    It’s also worth pointing out that satellite and reanalysis data (which do not use the same surface records) both agree better with adjusted data than raw data: http://rankexploits.com/musings/2013/a-defense-of-the-ncdc-and-of-basic-civility/uah-lt-versus-ushcn-copy/

  20. Anthony or Zeke: For those who suspect there are similar results globally, is there also a GHCN Final MINUS Raw Temperature analysis graph anywhere?

  21. What is the rationale for adjusting data in the first place? Why not just use the raw data?

  22. Anthony,

    Thanks for the explanation of what caused the spike.

    The simplest approach of averaging all final minus all raw per year which I took shows the average adjustment per station year. More likely the adjustments should go the other direction due to UHI, which has been measured by the NWS as 8F in Phoenix and 4F in NYC.

    • @stevengoddard

      You are welcome. You should make a note of the issue on those posts that use the graph, so that people who see it don’t think it is a data tampering issue, but simply an artifact of a method combined with missing data due to late reporting.

  23. Zeke and I are in agreement, especially about

    “…the short version is that station records are a bit of a mess. They were set up as weather stations more than climate stations, and they have been subject to stations moves (~2 per station over its lifetime on average), instrument changes (liquid in glass to MMTS), time of observation changes, microsite changes over 100 years, and many other factors.”

    But there is more to it than that, and the cooling biases are smaller than other biases that are not being dealt with properly, or at all.

  24. Zeke,
    I get what you’re saying, but you don’t get it! Anomalies or no anomalies is not the issue. Adjusting the record destroys its usefulness and creates bias in the record. Yes, you’re limited to stations that have records over the period you want to compare – that is good science. Making things up is not; it performs no useful purpose other than to create debate. So your point about climate is valid: we have weather stations, and so does the rest of the world. Live with it. Making things up does not tell a useful story.
    As for the satellite record, that has been adjusted/calibrated as well, covers a short period etc. I will leave it to others to discuss that (Roy Spencer is the top dog for that). Overall I don’t think its particularly valid to use the satellite record to validate your adjustment/homogenization when that was used to calibrate the things in the first place.
    v/r,
    David Riser

  25. @David Riser. The satellite data is not calibrated using surface data, see this post and note the section on calibration.

    “Once every Earth scan, the radiometer antenna looks at a “warm calibration target” inside the instrument whose temperature is continuously monitored with several platinum resistance thermometers (PRTs). PRTs work somewhat like a thermistor, but are more accurate and more stable. Each PRT has its own calibration curve based upon laboratory tests.”

    http://wattsupwiththat.com/2010/01/12/how-the-uah-global-temperatures-are-produced/

  26. Bob, see the presentation by Steirou and Koutsoyiannis at the European Geosciences Union Assembly 2012, session HS7.4/AS4.17/CL2.10. It is available online at itia.ntua.gr/en/docinfo/1212. It compares a large global GHCN homogenization sample. The cool-the-past bias is everywhere. My favorite example from this paper is Sulina, Romania, a Danube delta town of 3500 reachable only by boat. No change in the raw data became 4C of warming in GHCN v2 in a small town surrounded by water.

  27. David Riser,

    Using anomalies rather than absolute temperatures isn’t adjusting the data per se. Homogenization does adjust the data, but the alternative is only using stations with no moves, instrument changes, time of observation changes, etc. These simply do not exist, at least over the last 100 years. The U.S. has arguably the best network in the world and even here most of our “best” sited stations have moved at least once and changed instruments as well.

    Some adjustment is necessary (even Anthony’s new paper adjusts for MMTS transitions), and I’d argue that the automated pairwise approach does a reasonably good job. I’d suggest reading the Williams et al. 2012 paper for some background on how it’s tested to make sure that both warming and cooling biases are properly addressed: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/williams-etal2012.pdf (NCDC’s site is experiencing some issues at the moment, but hopefully the link will work soon).

    Thankfully going forward we won’t need any adjustments for U.S. data, as we have the climate reference network. The difference between raw and adjusted USHCN data and the climate reference network should provide a good empirical test for the validity of adjustments to the USHCN network. Unfortunately the last 10 years is still inconclusive; while the adjusted data has a trend closer to the climate reference network than the raw data, the results are not statistically significant.

  28. “Thankfully going forward we won’t need any adjustments for U.S. data”

    LOL

    Starting… now.

    Wow. No more adjustments. I don’t believe it. Seriously.

    Andrew

  29. Thanks Anthony for the post about the calibration of the satellite record, but after reading the post I still don’t believe that a comparison of the two is valid. But I will say that after reading it I feel much better about using that record for climate variability over the long haul, and would think that if we’re going to sink money into understanding climate, the satellite record is much more important than ground-based. Folks just need to be a bit more patient in terms of getting a handle on what the actual natural variability is. Ground-based is still useful for regional weather forecasting, but all the computer time spent torturing the data is probably misplaced.
    v/r,
    David Riser

  30. Could the removal, in the last year or so, of reporting stations play a role? I recall some 600 stations impacted.

  31. Zeke, see my post to Anthony above (when it gets out of moderation, hehe the curse of using Anthony’s name). I understand the desire to know, but I take exception to wasting my tax money on a fanciful experiment using station records when we have satellites that will probably tell us the answer long before the climate stations do. Particularly when a subset of people use this travesty as a means of destroying ours and others economy.
    v/r,
    David Riser

  32. Bad Andrew,

    It’s easy enough to use USHCN data up to 2004 and CRN data after 2004. That’s the magic of anomalies :-p

    Here is what that graph looks like: http://rankexploits.com/musings/wp-content/uploads/2013/01/Screen-Shot-2013-01-16-at-10.40.46-AM.png

    Of course, the CRN can’t help us improve our estimates of temperatures before 2004, apart from validating (or invalidating) our adjustments to the USHCN network post-2004. If we learned that our pairwise homogenization methods after 2004 were systemically wrong (or right), it would shed light on whether they were similarly biased (or unbiased) prior to 2004, since it’s the same automated method detecting and correcting for breakpoints. That’s why the CRN will be a good empirical experiment for the validity of homogenization in the U.S.

  33. ossqss,

    Some people have the unfortunate tendency to conflate late reporting with station removal (which, as far as I know, has generally not occurred). USHCN stations can take a few months to report and be processed, and GHCN stations can take much longer (some countries are pretty lax about sending in CLIMAT reports).

    If anything, we should get a lot more stations to work with in the next year, since GHCN version 4 will have about 25,000 more stations than the current 7,000 or so in GHCN version 3.

  34. Congratulations to the authors for running this check, and also to Nick Stokes.

    Although it is obvious that the enormous data sets used in climate research need adjustments to correct systematic errors, the fact that adjustments have been made doesn’t reassure me. I guess I’m thinking about it this way. There is a large number of potential errors that creep up in such data sets, some known and some unknown. The surface stations project provides conclusive evidence that this is true. When a correction is made one (or at most a small number) of such possible errors is taken as the basis for modifying the data. Making such a correction then makes a small selection from a large population of potential errors. What assurance can there be that such a correction does not in fact increase the bias in the data rather than reduce it? Probably there is an extensive literature dealing with this problem, but when I see that data has been adjusted I find that I have less confidence in it, despite the fact that the adjustment may have been perfectly appropriate.

  35. There is no other field in science where the data is routinely corrected in one direction only. In real science you only correct your data for analyzer drift or bias from a calibration standard, and that sort of adjustment will always be random in both directions, like you see in the data prior to 1960. The consistent upward adjustment of the temperature data in one direction is bogus; there can be no valid reason for always adjusting your data up, and conveniently by just the amount needed to prove your AGW hypothesis.

  36. David Riser asserts: “Nik, Goddard is correct, he did nothing wrong, he explained his methodology clearly.”

    Goddard attached motive and called this “tampering” and in that he was indeed very wrong and nearly caused a PR disaster had this claim appeared more widely before it was debunked here. Given that he has the full technical background to within minutes determine the reason for the glitch, yet he proceeded to push it repeatedly while publicly asserting that I was a lunatic for asking for before/after plots, means that your assertion is false. This claim has made skeptical criticism of overall adjustments less credible and has provided ammo to online activists who relish such incidents of gross incompetence.

  37. Why is it okay to make adjustments to the global temperature record that cause the world to go out and waste $358 billion per year, …

    … but if you did the same with your company’s financial statements, or your prospectus or your country’s economic data (just to prove a personal pet economic theory), and you caused someone to lose $358 billion, there would be serious repercussions.

  38. I have a few comments still in moderation, so they may end up after this one. However, I would like to point out that this silliness of homogenization and adjustment feeds the monster found here: http://www.climate.gov/, which I am both horrified and upset about. That whole website is the biggest pack of bs I have ever seen, and it is being paid for by my tax dollars. Thank God for someone like Steve Goddard who is at least willing to show this stuff for what it is.

    That website is the biggest reason that I oppose any kind of adjustment without a standard. Zeke, you may think that the data is done being adjusted, but I guarantee you that is not the case. You’re like a frog in a slowly heating pot of water: you agree to things because on the surface they appear reasonable, but if you take a closer look you will realize that politics is involved.
    v/r,
    David Riser

  39. Zeke says:
    “If anything, we should get a lot more stations to work with in the next year, since GHCN version 4 will have about 25,000 more stations than the current 7,000 or so in GHCN version 3.”

    The real issue though is quality of these stations, not quantity. My surfacestations work in Fall et al. showed that over 3/4 of the stations had siting issues in the USA. That number is expected to be the same or worse in GHCN.

    Adding noisy signals doesn’t improve the accuracy of the signal being extracted.

  40. For me, it is a bit frustrating that folks do not remember that the US temperature monitoring network was not and is not broken. It is performing as designed. It was put in place over a century ago when USA climate information was primarily anecdotal and inconsistent. It has provided a wealth of information to science. Urban Heat Island: not a problem in system design as people want to know what the temperature actually is to go about their daily lives, not what it might be if there was no city there.

    The problem for everyone in the climate science world is that the climate monitoring system was never designed to reliably detect long-term temperature trends of only a degree or two Fahrenheit. After all, with daily high-to-low temperature swings in the range of twenty degrees, and annual high-to-low swings of over one hundred degrees Fahrenheit not at all unusual, an overall accuracy of plus or minus a couple of degrees Fahrenheit was certainly considered adequate (±0.5 degree accuracy thermometers, plus siting, installation, and human errors).

    So, we should always take any statements about temperature or temperature trends with a grain of salt if the accuracy claimed is much better than plus or minus two degrees Fahrenheit or one degree Celsius. It is always possible that all the various manipulations of our historic temperature data are actually useless at improving our understanding of global climate.

  41. Sometimes I get the impression that we lose the forest for the trees. Data manipulation is a dangerous game. Explaining an “adjustment” doesn’t necessarily justify an adjustment, and once it becomes the “official” set of data, all other interpretations and uses are based on the assumption that it is “right”. I have trouble getting past the missing 1930s. My family were settlers who lost everything in the late 30s. Also, I think we lose sight of the fact that an adjustment of 0.4C is a high percentage of projected change if one buys the 2C increase projected. Year by year, “new record high temps” can be announced based on changes of hundredths of a degree. Any adjustment upward could allow this game to be continued virtually without notice if temps continue to be “flat”.

  42. Nik,
    I wouldn’t call this debunked. Goddard explained his methodology and provided his data. His method was used to reproduce the graph. This post explains the hockey stick, which is fine; it’s good for clarity, and it doesn’t change the message. When the climate report comes out and it’s way hot, it’s published as such; when it comes back down to earth, nothing is said. This post and the previous one generated a ton of discussion, which is also good. I would say that Steve Goddard is correct: the adjustments are deliberate, they tell an incorrect story, and that story is shouted to the masses. So I say again, thank you, Steven Goddard!
    v/r,
    David Riser

  43. [snip - if you have criticism, explain it. I'm not going to tolerate any more of your drive by crypto-comments that contain the word "hint". For a person who's always on about full disclosure with data and code, you sure do a crappy job of following your own advice when you make comments - Anthony]

  44. Zeke Hausfather says:
    May 10, 2014 at 8:16 am
    Using anomalies rather than absolute temperatures isn’t adjusting the data per se. Homogenization does adjust the data, but the alternative is only using stations with no moves, instrument changes, time of observation changes, etc.

    Individuals have analysed some of those adjustments and shown that they are incorrect when compared to other local stations, but no one goes back into the system and removes or corrects the adjustment. The mass adjustment process is crap, as has been shown many times, one of the worst cases being Iceland, whose records are adjusted after already having been adjusted by their own process.
    Also, the classification of rural/urban etc. is completely broken, as has been proved many times.
    The notion of an “Average Global Temperature” is nonsense, and when handled by people with an agenda it can be, and has been, manipulated to prove AGW.
    If the world needs to know whether it is warming or not, you can take each station, calculate any change over a period, noting any “changes” made to the station, and then let people make up their own minds, not force a gridded, homogenized, and badly adjusted version on us.

  45. “Its good for clarity, it doesn’t change the message.”

    This is what he actually said.

    “Two things stand out about the current USHCN data tampering graph. The most obvious is the huge amount of tampering going on in 2014, but almost as bizarre is the exponential increase in tampering since about the year 1998.

    *insert graph that purports to show the exponential increase in tampering*

    There is no rational reason for either of these – so here is my guess. Obama wants credit for healing the climate. He has been engaging in every imaginable form of BS to get an international agreement through this year or next, and after he gets the agreement he will tell NOAA to stop tampering – and will then take credit for the drop in temperature.”

    Just being technically correct in a certain way with the graph doesn’t excuse his attempt to misrepresent what it means.

  46. “The notion of an “Average Global Temperature” is nonsense and when handled by people with an agenda can and has been shown to be manipulated to prove AGW.”

    You do realize that you are talking to one of the people who actively works on that subject?

  47. Anthony,

    I agree with your assessment of what causes the spike mathematically, and I saw the same thing. I don’t agree that it is proper for USHCN to fabricate adjusted data for stations where they don’t have raw data.

    REPLY: Yes, which speaks to the numbers they release in the monthly State of the Climate reports, which are published with US historical temperature rankings for the month before all the data is in. Invariably, those rankings change later.

    You really should issue a correction on your blog to make the reason for the spike clear, that it isn’t “tampering” per se, but an artifact of missing data and your method – Anthony

  48. drumphil says:
    May 10, 2014 at 9:11 am
    You do realize that you are talking to one of the people who actively works on that subject?

    Yes, and do you think he has adequately explained the 1 degree cooling of the past in his various explanations?

  49. tgasloli said at 8:42 am
    There is no other field in science where the data is routinely corrected in one direction only. …

    B I N G O !

    And it’s not just temperature. Ocean heat and sea level are also adjusted, and I’m sure every other aspect of global warming, from polar bears to hurricanes and glaciers, is similarly fudged. Winston Smith and the Animal Farm pigs have nothing on these guys.

  50. Zeke Hausfather says:
    May 10, 2014 at 8:16 am
    David Riser,
    Using anomalies rather than absolute temperatures isn’t adjusting the data per se.
    >>>>>>>>>>>>>>>>.

    No it isn’t. But both methods mask the real problem. We’re trying to understand how CO2 affects energy balance. As energy flux varies with T raised to the 4th power, averaging either absolute values or anomalies simply dispenses with this fact and produces a trend that is at best only loosely related to the problem at hand. In fact, there are circumstances when the trends in average energy flux and average temperature (or anomalies of temperature) can be in opposite directions.

    Unless and until the laws of physics are applied to the data, all your work to produce an accurate temperature record will result in little more than a more accurate trend of the wrong metric.
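    The T⁴ point above can be illustrated with a toy example (entirely made-up numbers, assuming simple Stefan-Boltzmann scaling): a flat average temperature can coincide with a falling average flux.

```python
# Toy illustration (made-up numbers): radiated flux scales as T^4
# (Stefan-Boltzmann), so averaging temperatures is not the same as
# averaging energy fluxes.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def mean(xs):
    return sum(xs) / len(xs)

def flux(t):
    return SIGMA * t ** 4  # W m^-2 for temperature t in kelvin

# A hot site cools by 1 K while a cold site warms by 1 K.
before = [300.0, 250.0]
after = [299.0, 251.0]

print(mean(before), mean(after))             # identical mean temperature
print(mean([flux(t) for t in before]),
      mean([flux(t) for t in after]))        # yet the mean flux has dropped
```

    Because T⁴ is convex, the 1 K loss at the hot site removes more flux than the 1 K gain at the cold site adds, so the flux trend and temperature trend point in opposite directions here.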

  51. The other thing that I question about “Global warming” is how can warming be “global” when it is not even “Continental”, let alone hemispherical?
    How many times have we seen one half of the USA or Europe warm while the other half cools? To me, global means everywhere at once.

  52. David Riser,

    Goddard’s problem is that he is just averaging all the absolute temperatures together each year. Everyone else (Anthony included, in his papers) uses anomalies, because absolutes run into lots of problems when the set of stations is not consistent over time. That’s why the three other methods we explore (absolute temps with forced consistency, and anomaly methods) all get pretty much the same result, and why they are all fairly different from Goddard’s result.
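    The composition problem can be sketched with two hypothetical stations (made-up numbers): when the cold station drops out, the average of absolute temperatures jumps, while the average of per-station anomalies does not.

```python
# Hypothetical two-station example: a changing station mix biases the
# average of absolute temperatures but not the average of anomalies.
baseline = {"mountain": 5.0, "valley": 15.0}  # long-term station means, deg C

# Year 1: both stations report, each 0.5 deg above its baseline.
# Year 2: the cold mountain station is missing; the valley is 1.0 above.
readings = {
    1: {"mountain": 5.5, "valley": 15.5},
    2: {"valley": 16.0},
}

for year, obs in sorted(readings.items()):
    absolute = sum(obs.values()) / len(obs)
    anomaly = sum(t - baseline[s] for s, t in obs.items()) / len(obs)
    print(year, absolute, anomaly)
# The absolute average jumps 5.5 deg purely from losing the cold station;
# the anomaly average moves only 0.5 deg, reflecting the real warming.
```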

    R2Dtoo,

    We all know the raw data is biased because of station moves, instrument changes, TOBs changes, etc. The question is whether we can effectively detect and remove these biases using statistical techniques without adding any additional bias in the process. The method used by NCDC (and Berkeley) is conceptually simple: look at the difference between a station and all its surrounding neighbors, look for step-change breakpoints at one station that are not shared by the neighbors, and flag that as an inhomogeneity. For example, if one station changes its instrument from liquid-in-glass to MMTS in January 1984, but most of its neighbors don’t make the same change at the same time, this pairwise approach will detect a step change at that station in 1984 and remove it (or cut the record and treat everything afterwards as a separate station, in the Berkeley approach). This will generally work unless all the surrounding stations change at the same time, something that is fairly unlikely given that the major changes (TOBs, MMTS) were all slowly phased in over 5-10 years.

    For folks saying that dealing with biases in raw data is somehow unique to climatology, I’ve had to deal with it in other fields as well. For example, databases of manual electricity meter reads contain all kinds of weird artifacts, missing data, or errors that need to be corrected during data normalization and that, if not addressed, would skew the results of a disaggregation analysis. What is needed is a consistent, automated, well-documented, and tested approach to dealing with errors in raw input data. That way, if you discover that your algorithm isn’t working properly, it’s easy enough to make changes and rerun the whole dataset. You start running into problems when you make one-off or manual adjustments to individual stations, because you don’t necessarily end up with consistent standards.
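    A rough sketch of the pairwise idea (illustrative only, not NCDC’s actual code): difference the target station against a neighbor and find the split point that maximizes the shift in the mean of the difference series.

```python
# Sketch of pairwise breakpoint detection (illustrative only): a step
# change at one station that is absent from a neighbor shows up as a
# step in the difference series.
def find_step(series_a, series_b):
    """Return the split index and size of the largest mean shift in a - b."""
    diff = [a - b for a, b in zip(series_a, series_b)]
    best_idx, best_shift = None, 0.0
    for i in range(1, len(diff)):
        left = sum(diff[:i]) / i
        right = sum(diff[i:]) / (len(diff) - i)
        if abs(right - left) > abs(best_shift):
            best_idx, best_shift = i, right - left
    return best_idx, best_shift

# Target station drops ~0.5 deg after an instrument change at index 5;
# the neighbor is unaffected.
target = [10.0, 10.2, 9.9, 10.1, 10.0, 9.5, 9.6, 9.4, 9.5, 9.6]
neighbor = [10.1, 10.3, 10.0, 10.2, 10.1, 10.1, 10.2, 10.0, 10.1, 10.2]
print(find_step(target, neighbor))  # breakpoint at index 5, shift ~ -0.5
```

    A real implementation compares against many neighbors and applies significance tests, but the core signal is the same: a shift that appears in every pairwise difference involving one station points at that station.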

  53. I have noticed NCDC announcing “nth warmest” statements about the previous month. In all cases the satellite data does not seem to agree with these statements. Are these statements also an artifact of missing stations? If so, I would have to say that Goddard has a point. NCDC is making false statements about the climate. Since that is the result of their own process, it is by definition… “tampering”.

  54. Richard M,
    That is correct. They will usually go back and update it once the data is in, but there isn’t any discussion or announcement. It’s one reason the top 10 appear so often: NCDC publishes it as a top 5 or top 10, then later changes it to where it’s supposed to be, setting up the next opportunity for another “top” ranking, and so on. It’s one reason I have issues with adjustments. Steven Goddard’s work demonstrates this neatly, and yes, the spike looks bizarre, but it doesn’t stop NCDC from doing what they do.
    So Zeke I get your point but you don’t get mine, so we just have to agree to disagree.
    v/r,
    David Riser

  55. David Riser says:
    May 10, 2014 at 9:53 am

    I notice that Zeke does a lot of “justifying” the adjustments, but he has not really commented on the at least 1 degree cooling of the past that they appear to have caused. As most have noted, error adjustments should never all go in one direction, except by mistake or by design.

  56. Ima,
    The reasons for adjustments are real.
    The adjustments are validated.
    The adjustments have been investigated by skeptics and vindicated.

    Let’s take three types of adjustments.

    Station moves: a station located at 1000 m asl moves to sea level. This will create a false warming signal, so it’s adjusted.

    Next, a station changes instruments. Side-by-side tests are run and an adjustment is created.

    Lastly, the station changes time of observation. This bias is removed by a TOBs adjustment.

    TOBs adjustments are the biggest. Those adjustments were verified many times, including at the John Daly site.

    REPLY:
    Sure, that’s an opinion, but the stations that don’t need adjustments to fix problems are where the true signal exists; everything else is just guesswork and “best estimates” with error bars. Adding more noisy stations to the mix does not improve the accuracy of extracting the sought-after AGW signal, it only increases the uncertainty.

    You’ll soon be having to revise your world view. In the meantime, please limit your drive-by expositions here unless they contain something of substance, backed up by some citation. – Anthony

  57. Is there a published reason why past temperatures have been adjusted downward? Or is there even an acknowledgement by the climate science community that this is happening?

  58. NCDC reports absolute temperatures, not anomalies. Most of my comparisons are GHCN HCN vs. NCDC. Once in a while I do the USHCN comparisons like this one.

    REPLY: Yes, but it’s wrong, so learn from the mistake, issue a correction and move on. – Anthony

  59. It is interesting that the adjustment in 1940s US temperature data is greater than the difference between the MWP and the LIA in the famous hockey stick graph. So the measurement error in 1940s US temperature data is claimed to be greater than Michael Mann’s estimate of the actual range of Northern Hemisphere average temperatures over almost 900 years. Isn’t it amazing how stable Northern Hemisphere average temperature was for the nine centuries preceding the introduction of significant CO2 by humans? Or, rather, isn’t it amazing how gullible the IPCC, the press, and the public were when the hockey stick graph came out?

  60. Mr. Mosher,
    I would say adjustments have not been vindicated; I would suggest the opposite is true. The difficulty with most of what you say is that the adjustments are automated within a grid; they are not found by reviewing the thousands of station records and doing a station-by-station adjustment. The grid resolution is too large to effectively capture all the changes that can occur within that grid; the fact that a 1,000 ft differential in altitude can and does exist within your grid demonstrates the fallacy of which you speak. TOBs is a tiny issue that you all assume was unknown in the past, yet I am pretty sure the folks doing the observations were very aware of what they were doing, so finding TOB changes in the record by looking for step changes in the data and doing a massive rewrite of that data is not an acceptable practice.
    v/r,
    David Riser

  61. Mosher, I don’t think they should be adjusted at all. There should instead be a break in the data indicating changed conditions (e.g., construction of a road over what was once grass, a new building going up, or a new kind of unit). And when a station is moved from its pole stuck in the ground (up, down, or sideways), it should be declared dead: end of data for that station. I think a new data series should be put together with these criteria, and a new set of station ID numbers provided per the above criteria. Any other research entity (e.g., agricultural seed plots) gathering subject data would do this. That climate researchers don’t is just plain laziness, or a desire to create long records where none exist. I am thoroughly unimpressed.

  62. Pamela Gray says:
    May 10, 2014 at 10:52 am

    The unwarranted adjustments are made for the express purpose of lying. Same goes for the notations that should be made along your suggested lines but aren’t. I hope that enough original observational data, not adjusted, bent, folded, spindled & mutilated by mendacious, trough-feeding climastrologists, remain intact for a more honest temperature record someday to be constructed for purposes of actual science rather than warmunista advocacy.

  63. We should note also that from the website:

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/

    we discover that the data is not adjusted it is “corrected” (following is a direct quote):

    “USHCN temperature records have been “corrected” to account for various historical changes in station location, instrumentation, and observing practice.”

    The word “corrected” is in quotes. I guess they couldn’t figure out how to get a “wink” emoticon to display.

    :)

  64. NikFromNYC
    You say
    thus many conservative bloggers too often pick up on his posts

    Do you mean too = to = also
    or too as in too many = an excess?

    It’s not clear from your text and to and too and two are in fact three different words with different meanings.

  65. Any chance that a couple of these records could be maintained here on WUWT? I suggest that USHCN data be used, gridded, with missing data not counted and no other adjustments. It should be stand-alone, without reference to USHCN computed data, which changes whenever somebody gets the idea for a new “validated” adjustment. After a certain interval it could be considered “research grade.” The simple raw average should also be maintained.

  66. How can anyone say Stephen Goddard is flat out wrong?

    It does not matter that there may be an odd station error here or there when he is demonstrating that the so-called corrections to the data (which he ignores) are causing a greater than 1 degree cooling of the past, and also that using incomplete latest-month data produces the “hottest month ever” syndrome.

    Just about everybody can see and agrees that this is what is happening, so how is it “wrong”?

  67. Nobody has tried to answer Paul Homewood’s question either, was he doing it wrong as well?
    And all the others in the past that have shown data tampering on a massive scale in the name of “Correction”.

  68. The point JohnWho makes is key here –
    “USHCN temperature records have been “corrected” to account for various historical changes in station location, instrumentation, and observing practice.”

    It would be fine if our lords and masters were using the data as Anthony and Zeke are – ie in a genuine joint attempt to determine the optimum data.

    But when it’s being used for out and out propaganda then I’d suggest Steve G blowing his whistle for the obvious foul is acceptable.

    Sadly until the “team” et al releases all data and are prepared to accept full discussion of the issues at hand (as indeed our two authors are doing), then “playing fair” will be rather naive to say the least.

    As a pleb taxpayer in the UK, having had to put up with complete BS for several years now and having my pocket picked with impunity, I’d say the burden of proof on the CAGW crowd has now reached quite a considerable level; the chances of any of the existing criminals reaching that level are very close to zero.

  69. AAARGH! I’m not colorblind, but looking at those charts I feel as if I am. Can you PLEASE please please increase the color saturation and brightness of the lines? They look like four shades of mud.

    • Doug Jones, why not simply adjust your monitor rather than make us redo the graph? As far as I can tell there is nothing wrong with it, and no one else is complaining about the color scheme.

  70. I think there is little value in ground-based thermometers for the purpose of determining a global temperature. They can be locally useful if well placed, maintained and reported.

  71. Steven Mosher says:
    May 10, 2014 at 10:12 am
    Ima.
    The reasons for adjustments are real.
    The adjustments are validated
    The adjustments have been investigated by skeptics and vindicated.

    Steven: Perhaps you can address the following question as a way to confirm your comments:

    Why did the temperature gauges that were used prior to 1940 overstate actual temperatures by approximately 1 or more degrees F? (If the raw data has been adjusted downward by over a degree, is this not stating that the original readings had overstated temperatures by a like amount?)

    The encroachment of the UHI has been in the years afterwards. I would have thought that the temperature readings in these earlier years would have required less correction.

    If there is a clear and reasonable explanation as to why historical temperature readings consistently overstated the actual temperature of that period of time, then this needs to be articulated so that people such as myself can comprehend.

    If there is not a clear and reasonable explanation, then perhaps we need to step back and ask ourselves if the temperature adjustment process has not perhaps been compromised by some unintended infusion of bias or through methodology error.

    Please help me to understand. This is all that I am asking.

  72. From my understanding of the BEST methodology, which I invite Mr. Mosher to correct if wrong, a station move is considered a new station, a station change in instrumentation is considered a new station, and a station change in TOBs is a new station. Then, after chopping the data into pieces that are internally consistent and comparable, anomalies are created, with no adjustment to any raw numbers (other than simply erroneous ones). The anomalies are then woven together into regional and global trends.

    The BEST methodology strikes me as the right way to go about things, although it is still susceptible to a changing mix of site qualities. The USHCN method adds adjustments that are well-founded, but have errors of their own.

  73. @ Steve Goddard
    Is it possible to differentiate between reported (by the weather stations) and infilled data? If so, it should be possible to run the comparison between the absolute temps as reported (climate report) and the raw. In that way one should be able to compare only the stations that have already reported. I have a feeling the hockey stick will remain. I realize there is a confounding use of the word “reported”; what I am trying to say is just leave out the missing stations, if that is possible.

  74. Anthony,
    not to pick, but Doug is right! It’s pretty difficult to parse the graph; I had to break out the glasses earlier. I wouldn’t have said anything except for your reply, but I am pretty sure messing with the monitor would not fix the issue.
    v/r,
    David Riser

    • David Riser, I honestly don’t understand why some people are having so much trouble reading this simple graph, especially when you can click on it to enlarge it. Do you not get that you can click on graphics to enlarge them? Virtually every graphic posted on WUWT is this way.

  75. Yes. Issue a correction. It’s a road well traveled by NOAA, NCDC, NASA,etc who have issued corrections over the years to the point where nobody believes anything they publish. Raw data is missing, but they go ahead and publish a monthly climate report to justify their existence. (Monthly climate report, a nice oxymoron).

    They shouldn’t be expected to get the numbers right. They’re government employees after all.

    ” stevengoddard says:
    May 10, 2014 at 10:28 am

    NCDC reports absolute temperatures, not anomalies. Most of my comparisons are vs. GHCN HCN vs. NCDC. Once in a while I do the USHCN comparisons like this one.

    REPLY: Yes, but it’s wrong, so learn from the mistake, issue a correction and move on. – Anthony

  76. Ima,

    Back in the 1940s virtually all the stations used liquid-in-glass thermometers, which read about 0.6 degrees warmer in max temperatures (and about 0.2 degrees colder in min temperatures) than the new MMTS instruments introduced in the 1980s. This means that actual max temperatures (as measured by MMTS instruments) would have been ~0.6 degrees colder, which contributes part of the reason for adjusting past temps downward. Time of observation biases introduce similar warming, as shown in this figure from Menne et al 2009: http://stevengoddard.files.wordpress.com/2014/01/screenhunter_28-jan-18-13-25.gif

    Here are changes in TOBs over time: http://rankexploits.com/musings/wp-content/uploads/2012/07/TOBs-adjustments.png

    And a detailed discussion of MMTS biases: http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

    The combination of MMTS and TOBs drives the bulk of the downward adjustment in past mean temperatures. Min temperatures are actually adjusted down slightly via homogenization, likely due to detecting and correcting for some UHI bias.

    UnfrozenCavemanMD,

    Indeed, Berkeley doesn’t technically “adjust” anything; we cut stations at breakpoints detected through neighbor comparisons and treat everything after a breakpoint as a new station. Interestingly enough, you end up with pretty much the same result as using NCDC’s method: http://rankexploits.com/musings/wp-content/uploads/2013/01/USHCN-adjusted-raw-berkeley.png

    FundMe,

    That is exactly what our method 2 in this post does: only look at station-months where both raw and adjusted series have readings. It nicely eliminates the “spike”.

    To other folks: Sorry if I’m slow to respond, I’m heading out on a camping trip and will have little if any internet access.

  77. -=NikFromNYC=- says:

    May 10, 2014 at 8:43 am
    ============
    Yep, I’ve been banned from Goddard’s site.
    Maybe I deserved it ?

  78. Sorry, that last post should have read “Past min temperatures are actually adjusted up slightly via the non-TOBs part of homogenization, likely due to detecting and correcting for some UHI bias.” It’s somewhat confusing when down and up mean opposite things in terms of trend impact depending on whether they happen in the present or in the past…

  79. I had noted a diagnosis on my blog here. The problem isn’t late notification by USHCN, or even not using anomalies, though with anomalies the problem would not arise. I didn’t use anomalies. The problem is just wrong methodology. You can’t subtract the average of a whole lot of adjusted readings from the average of a different whole lot of unadjusted readings and expect the difference to reflect the effect of adjustment, unless you have avoided other major reasons for difference. And those reasons are the disparate station/month combinations that have gone into the averages. It’s not apples to apples.

    Here the dominant problem is that in 2014 all 1,218 stations had final data for all four months, while raw data covered only 891 stations for Jan, 883 for Feb, 883 for Mar, and 645 for Apr: biased toward winter. To exaggerate: you’re comparing winter raw with spring final. You’ll get a big difference, but it’s not due to adjustment.

    But there can be biases due to different stations in the selection too. It’s just the wrong thing to do. You can do it right by forming differences first for months where stations have both raw and final data and averaging those. Then you know that adjustment is the reason for the difference.
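    The difference-first fix described above can be sketched in a few lines (hypothetical numbers; None stands for a raw report that hasn’t arrived yet):

```python
# Illustrative numbers only: the naive difference of averages mixes
# missing-data seasonality into the "adjustment", while pairing raw
# and final records first isolates the actual effect of adjustment.
raw = [1.0, 2.0, None, None]   # later (warmer) months not yet reported
final = [1.2, 2.2, 8.1, 9.1]   # estimates fill in the missing months

raw_present = [r for r in raw if r is not None]
naive = sum(final) / len(final) - sum(raw_present) / len(raw_present)

paired = [f - r for r, f in zip(raw, final) if r is not None]
honest = sum(paired) / len(paired)

print(naive)   # inflated: warm estimated months appear on only one side
print(honest)  # the true per-record adjustment, about 0.2
```

    The naive difference here is over 3 degrees even though every paired record was adjusted by only 0.2, which is the same arithmetic that produces the spike in the Goddard graph.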

  80. A 1 degree anomaly in Pumpkin Center does not have the same significance as a 1 degree anomaly in central Phoenix. Anomalies at a location need to be normalized, e.g. by converting to sigmas of the measurements for that location.

  81. Anthony,
    Yes, you can blow it up, but the colors are so close together and so bland that the graph is hard to read. You picked some colors that are particularly close together spectrum-wise. Many people have color perception issues that don’t impact them in day-to-day reading, but the colors you picked are exactly those that cause most of the issues (blue and green, and brown and tan). I imagine most folks don’t have a lot of trouble with it, because the three are so far apart from the one. Many others don’t really care because, quite frankly, there isn’t a lot of love for statistical abuse of the temperature record. A lot of folks look at what Steven Goddard did and say he has a valid point; he was forthright in providing code and data, he just disagrees with how folks interpret his graph. But the point is well made.
    v/r,
    David Riser

    • David Riser, well, that’s too bad, because I plan to take the rest of the weekend off; it can wait till Monday, as I have more important and fun things to do. Zeke is also going on a camping trip, and he has the source of the graph.

  82. Can someone take a minute and explain to me how you can get an anomaly…
    …and not know what the temperature is

  83. Zeke Hausfather says: May 10, 2014 at 7:52 am
    “For folks arguing that we should be using absolute temperatures rather than anomalies: no, thats a bad idea, unless you want to limit your temperature estimates only to stations that have complete records over the timeframe you are looking at”

    Zeke, it is a bad idea, but it’s what NOAA does for CONUS temperatures. I think it’s just because that’s how it has always been done. They come up with an average temp for the US of 54°F or whatever.

    To do that, as you say, you must have complete records for every station/month, or else you’ll get artifacts like this spike. That’s why, as part of their “final” data, they have included FILNET, which in effect interpolates all missing data. No one else needs to do that, because they use anomalies instead.

    In fact this works, and if you do it right, it gives a fair result. But it’s very trappy; great caution is needed in dealing with averages of absolute temperatures. Here is another place where it goes wrong. If the stations aren’t the same, then you get differences due to their different situations; there it was probably that USCRN stations are on average at higher altitude. But it’s an apples-to-apples issue.

  84. In case it’s not clear: ROYGBIV. Green and blue are adjacent, and brown and tan both sit in the orange-yellow range. So maybe a red line, a yellow line, a blue line, and a violet line would have been better. Just a thought for the future.
    v/r,
    David Riser

  85. Zeke Hausfather says:
    May 10, 2014 at 1:03 pm
    Ima,

    Back in the 1940s virtually all the stations used liquid-in-glass thermometers, which read about 0.6 degrees warmer in max temperatures (and about 0.2 degrees colder in min temperatures) than the new MMTS instruments introduced in the 1980s

    My sources (Wikipedia yuk yuk) tell me that mercury thermometers are more precise than that:

    “According to British Standards, correctly calibrated, used and maintained liquid-in-glass thermometers can achieve a measurement uncertainty of ±0.01 °C in the range 0 to 100 °C, and a larger uncertainty outside this range: ±0.05 °C up to 200 or down to −40 °C, ±0.2 °C up to 450 or down to −80 °C.[37]”

    I’ve yet to be convinced that we have been measuring temperatures incorrectly for hundreds of years. If you have any further documentation I would appreciate it. I suppose this is why they call us skeptics.

  86. So. All the adjustments are still going the same way. Eventually, the charts will say “yesterday it was near absolute zero, but tomorrow we will burn!”

  87. Zeke,
    Dad knew what he was doing; it is complete arrogance to say otherwise without proof. And using statistics to detect step changes in a chaotic system, and using close stations for pairwise comparison when differentials of tens of degrees are possible over very short distances (fractions of a mile) due to land use, elevation, bodies of water and even vegetation, is not defensible.
    v/r,
    David Riser

  88. A C Osborn says: May 10, 2014 at 11:31 am
    “How can anyone say Stephen Goddard is flat out wrong?”

    It’s just wrong method. The thing he’s graphing does not reflect only adjustment. The spike is the most obvious issue. It’s predictable through the year. Here he has one in Illinois.

    April has about 200 more stations (than earlier months) where final data has added estimated values. That's adding spring data to a mainly winter average, which pushes up the mean. If you keep doing SG-type plots through the year, you'll see the spike diminish first, because the proportional effect of the extra 200 fades. By about November, the spike will go negative. That's because the latest estimated data added will be autumn data, colder than the year-to-date average. It brings the average down. It's predictable, and reflects just seasonality.
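    The seasonal mechanism described here can be mimicked with a one-station toy (the climatology numbers are invented; this is not the USHCN code): the "final" year-to-date average simply contains one more warm, spring month than the raw one.

    ```python
    # Invented monthly climatology for one station, deg C.
    monthly_climo = {"Jan": -2.0, "Feb": 0.0, "Mar": 5.0, "Apr": 12.0}

    raw_months = ["Jan", "Feb", "Mar"]           # months actually reported
    final_months = ["Jan", "Feb", "Mar", "Apr"]  # final set adds estimated April

    raw_avg = sum(monthly_climo[m] for m in raw_months) / len(raw_months)
    final_avg = sum(monthly_climo[m] for m in final_months) / len(final_months)

    print(raw_avg)              # 1.0
    print(final_avg)            # 3.75
    print(final_avg - raw_avg)  # 2.75 -> a warm "spike" from seasonality alone
    ```

    Run the same comparison late in the year, with cold autumn months as the freshly estimated ones, and the sign of the difference flips, matching the prediction that the spike goes negative by about November.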

  89. @ Zeke
    I see that now... a smaller hockey stick than Steve Goddard's, but man o man, that is a decade and a half of model warming manufactured right there, and it only took three months. Wow.

  90. These “adjustments” falsely exacerbate warming. The past is adjusted down while the near term is adjusted up.
    The bias apparent here is not in the thermometers.
    The bias is with the US government agents’ conclusions about the thermometers.

  91. Zeke Hausfather (May 10, 2014 at 8:16 am) – “Using anomalies rather than absolute temperatures isn’t adjusting the data per se. “.

    It isn’t adjusting the data at all. It’s simply a way of using the data. The data consists of the original measurements, ie. the absolute temperatures. For some purposes – eg. to relate changes across different data sources – it is appropriate to use anomalies. Note that, together with their basis (which must therefore always be properly documented), anomalies still carry the original measurements, ie. the absolute temperatures. [Similarly for adjustments].

    Steven Mosher – I’m a bit bemused by your (May 10, 2014 at 10:12 am) “Station moves. [...] So its adjusted.“. I thought the BEST method, which you participate in, was to treat a station move (or similar) as creating a new station. In such a situation, every time an adjustment is made, a possible error is introduced. In other situations, adjustments may be legitimate, but (as for anomalies) their basis must also be kept so that the adjusted temperatures still carry the originals.
    Pamela Grey (May 10, 2014 at 10:52 am) and UnfrozenCavemanMD (May 10, 2014 at 12:27 pm) and perhaps others comment along these lines too.

    At the very least, Oregon should take its stations back from any kind of national ownership and fund proper data handling. In fact, every state should do that. Are we really becoming one big state with a King in power? Really? Take back our weather stations, our plains and forests, our mineral resources, and our lands from national ownership. I for one have had enough of this feudal set up.

  93. Bob Greene says:
    May 10, 2014 at 7:39 am

    “All adjustments are negative and generally greater the farther back from present. The supposedly “problematic” 1940′s show greater adjustments in all but S. Goddard’s method. This doesn’t explain the rationale of seemingly always cooling the past with lesser adjustments as the climate “warms.””

    Bob, there can be little question that there have been a lot of agenda-driven adjustments. I can see why there might need to be some adjustment for station moves, etc., but the first serious adjustments (made by Hansen et al. at GISS) came in 1997-98, when it became clear that the much-anticipated new CONUS temperature high WASN'T going to obliterate the pesky 1937 all-time (instrumental) high. There were some emails between an underling working on the "problem" and Hansen that, after a few iterations, came up with the 1998 high (because of the super El Niño of that year, they realized it might be some time before there was an actual new high if they didn't act then). I haven't found the WUWT article on this yet, but probably someone here can dig it up for you. Without this "work", 1937 would still reign as the top average temperature today. The mid-thirties to mid-forties highs also showed up elsewhere, including Greenland. There has been almost continuous adjusting (as shown by the plots above) since – essentially rotating the record counterclockwise with the thumbtack at ~1950 or so, and pushing the "jumps" back into alignment – older ones downward, more recent ones upward – without being assured that the jumps aren't real. Anyone with the links, please supply them.

  94. Oh. And I would take back our wildlife and the management there of. The animals that live here belong to Oregon. And the ones that are not native to Oregon will just have to go back where they came from. If we screw up, it will be our fault. I get really irritated with outsiders telling me, a life-long Oregon resident, whether or not I can use my own &^%$# forest!

    Sorry, but this station thing has my Irish red hair flaming!!!!

  95. Zeke Hausfather says:
    May 10, 2014 at 1:03 pm

    I am surprised that the thermometers were not regularly calibrated. Back in the 60's, when I took chemistry and physics in high school, that was one of the first things we learned. Put a thermometer in boiling water and check what it read (or, yes, melted naphthalene is a better test). Put it in ice water and see what it read, and develop a correction formula for the thermometer if necessary. (We were also given "calibration" thermometers to calibrate our thermometers. This is like a grade 9 or 10 experiment.) I thought thermometers in weather stations were calibrated intermittently, but that they didn't realize that glass creep over a period of time could also cause issues. Did no one ever do simple calibrations by going from station to station with a calibration thermometer? I am gobsmacked if they didn't. My physics and chemistry teachers were very clear on the need for calibration, and we are talking 1959/60, so why would weather station equipment not be subject to regular calibrations? Maybe I just don't understand, as it is something I was trained to do from junior high school, to high school, to university and in my engineering career.
    These adjustments seem odd if there was any sort of calibration. There has to be something else like a bias of some sort or systemic error like maybe having used the wrong calibration procedure.

    Many agencies require regular calibration, I am surprised this would not be done on weather stations so why the adjustment? (maybe I have not read enough of Anthony’s work) http://www.albertahealthservices.ca/EnvironmentalHealth/wf-eh-thermometer-calibration.pdf

    I am also very suspicious of the mean/average temperatures being used, after looking at many stations' data, since the trend of the low temperatures and the trend of the high temperatures seem to paint a picture of convergence. That is, a station may show flat, decreasing or slightly increasing HIGHS over 60 to 100+ years, while the low temperatures show LESS COLD trends, resulting in the appearance of convergence of highs and lows over the years of record. I assume this is a short-term phenomenon, but it does lead to a warming trend in the MEAN temperatures. It seems somewhat meaningless for most of the year: -38°C doesn't feel a lot warmer than -40°C as a low, nor does -20 versus -22, while the highs stay the same at -10 or go down a degree to -11°C. And summer high temperatures don't seem to be increasing, but summer lows are getting higher (in most cases). The averages give us a measure, but they don't tell us what is happening.
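    The convergence effect described here is easy to illustrate with invented numbers: flat highs plus "less cold" lows still raise the mean.

    ```python
    # Invented station series, deg C: maxima unchanged, minima warming.
    years = [1960, 1980, 2000, 2020]
    highs = [-10.0, -10.0, -10.0, -10.0]   # flat daily highs
    lows  = [-40.0, -38.0, -36.0, -34.0]   # lows getting "less cold"

    means = [(h + l) / 2 for h, l in zip(highs, lows)]
    print(means)  # [-25.0, -24.0, -23.0, -22.0] -> mean warms, highs do not
    ```

    The mean series shows a steady warming trend even though the highs never moved, which is exactly why looking only at means can mislead.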

    But that is just my simple opinion or maybe it’s a pet peeve. But “mean” temperature is but one calculated “measurement” variable that does not tell the whole story.

    Sample (note there is a labelling error on the second to last graph but the comments still apply)

    https://www.dropbox.com/s/mumeba3ox98vdaj/TisdalePost.pdf

    I have never seriously studied statistics, so I have no argument from authority to offer you, but I have realised that this post is just wrong on a number of counts.

    Steve Goddard is looking at two data sets. One is the observed or raw data and one is the published or adjusted data. Both data sets are snapshots as far as Steve’s analysis goes as new values will be added as time passes. Therefore Steve’s approach is valid as it is comparing one data set to another at the one point in time. It is not doing a grid or station by station analysis.

    The fact that stations are missing from either data set is part of the adjustments that have been done. If some stations are “late” for the raw data set, but are being given a value by the reported data set, then that is part of the adjustment. If these data points become available at a later point in time, then doing the analysis again with a different snapshot is valid, but it does not make the analysis that was done at that point in time any less valid.

    Using anomalies or other data torture methods is a different analysis. Doing anything else to the data is just adding an unnecessary layer of manipulation that is not required and not really valid when you are looking at the data set to data set approach that Steve has done.

    Another failed gatekeeping exercise by WUWT.

  97. Bad Andrew says:
    May 10, 2014 at 6:59 am

    The fact that they adjust data at all means they aren’t measuring it correctly.
    ========================================================
    The fact that they adjust the data is the fact that they adjust the data, regardless of measurement accuracy.

  98. Gary Pearse says: May 10, 2014 at 3:23 pm
    “Without this “work”, 1937 would still reign as the top average temperature today.”

    No, 2012 is hottest on any reckoning.

  99. Nick,
    Not really, if you look at the highly inflated bs temp records you would see that 2010 reigns supreme. 2012 is number 9 apparently and if you used raw data 1937 would be unbeatable until the next ENSO step change. I would point you at the NCDC's climate page, but it's been wiped out and replaced by http://www.Climate.gov, a total propaganda piece.
    v/r,
    David Riser

  100. “The fact that they adjust the data is the fact that they adjust the data, regardless of measurement accuracy.”

    In science, you don’t adjust data you think is accurate. To adjust data that is accurate would be a sign of mental illness. Is that what you are suggesting is happening here?

    Andrew

  101. Just a short thought regarding the Mercury-in-glass thermometers. When I started
    working for the “Weather Bureau” in the early 1960’s, we used the standard USWB
    thermometers, which were provided, for years by Wexler (sp). These thermometers
    were etched with a serial number, and came with an individual corrections card,
    generally at 5 degree intervals, valid to within 0.1 degree F. However, the Max
    and Min thermometers had only a plus/minus accuracy of 1 degree F.

    Gradually, due to cost considerations, the regular thermometers were allowed to
    be replaced by thermometers which had an overall accuracy of plus/minus 1
    degree F, as they were broken and replaced. So, as time went on, the
    liquid-in-glass thermometer readings were degraded.

    At larger airports, and then gradually at smaller airports, the remote thermometers
    were installed (HO-60, HO-63, and in the military, the TMQ-11 series). At least, in
    the early 1960s when I was working as an observer trainee, the electronics
    technician went out to the instrument and used a mercury-in-glass thermometer
    held up to the intake of the electronic sensor, and would compare its reading
    with the temperature indicated on the readout inside the USWB office. If it was
    outside the plus/minus 1 degree F range, it was adjusted back within tolerance.

    I went into research for about 20 years, and came back to observing in the late
    1970s. At the sites where I worked, the HO-60 series temperature sensors had
    been replaced by the HO-83 series sensors due to costs, and better relative
    humidity/dewpoint sensors, and the data quality checks had
    been reduced to around once a month for comparison. Note that the
    temperature sensor’s claimed accuracy was plus/minus 1 degree C (1.8 degrees
    F). There have been many discussions concerning temperature inaccuracies
    with this sensor in the literature due to deficient ventilation.

    When ASOS replaced the HO-83 sensor, only a little improvement was made,
    with the same problems, although they reversed the flow – from in at the top/
    out at the bottom in the HO-83, to in at the bottom/out at the top in the
    ASOS system – with a much greater air flow. However, the same sensor
    package was used, with the same dew-point mirror with thermocooler for
    dewpoint. With the increased ventilation, the claimed accuracy was stated
    to be plus/minus 0.5 degree C (plus/minus 0.9 degree F). Later, a modification
    to the housing was made, where a “skirt” helped to cut down the air “recycling”
    around the sensor during very light winds. This modification was redesignated
    the “1088” sensor package, from the HO-88 sensor package used previously.
    Although a new relative humidity sensor has replaced the cooled mirror to
    compute dewpoint, as of November 2011 the power was still being applied to
    the thermoelectrically cooled dew point sensor.

    You should realize that the temperature reading on the 1088 series sensor
    is a 5 minute average, with the “center” reading used unless rejected as out
    of bounds (more than 2 degrees F from the average of the two preceding and
    two following readings). This center point is used as the temperature at
    observation time in all ASOS observations from airport sites using this
    instrument package.
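    The center-point check described above can be sketched like this (our reconstruction from the description, not NWS source code; the fallback used when the center reading is rejected is an assumption, since the text doesn't specify one):

    ```python
    def center_observation(readings):
        """Five one-minute temperatures (deg F); return the observation value.

        The middle reading is used unless it differs from the mean of its
        four neighbors by more than 2 deg F (the bound quoted above)."""
        assert len(readings) == 5
        center = readings[2]
        neighbor_mean = sum(readings[:2] + readings[3:]) / 4
        if abs(center - neighbor_mean) <= 2.0:
            return center
        return neighbor_mean  # assumed fallback; the text doesn't specify one

    print(center_observation([70.0, 70.2, 70.1, 70.3, 70.4]))  # 70.1 (accepted)
    print(center_observation([70.0, 70.2, 75.0, 70.3, 70.4]))  # spike rejected
    ```

    The second call shows a 5-degree transient in the middle minute being thrown out rather than reported as the observation.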

    Sorry to be so long, but it gives you some more insight to changes in the
    thermometer readings in the aviation sensor department of NWS/FAA
    airport sites. These sensors are used to report the highs and lows at sites
    located at airports across the United States.

  102. David Riser says: May 10, 2014 at 6:15 pm
    “Nick,
    Not really, if you look at the highly inflated bs temp records you would see that 2010 reigns supreme. 2012 is number 9 apparently and if you used raw data 1937 would be unbeatable until the next ENSO step change.”

    Gary was talking about CONUS. I think some global numbers are coming in here. Here is a news story about 2012. Not #9.

  103. Steven Mosher (May 10, 2014 at 10:12 am) “Lets take two types of adjustments. Station moves. A station is located at 1000m asl. It moves to sea level. This will create a false Warming signal. So its adjusted.”

    Earlier Zeke said: “station moves are a mixed bag, but in the 1940s and earlier many stations were moved from building rooftops in city centers to airports or wastewater treatment plants, also creating a cooling bias.”

    Stations on rooftops are not necessarily warmer. They usually were placed on tall platforms well above the roof and would mainly be warmer on radiational cooling nights. The move may or may not show up as a discontinuity in the difference of means from the site to the region. But it probably doesn’t matter much.

    After the move to the airport we would generally see warming, both in microsite and sometimes in the vicinity. In the case of Reagan National (not in USHCN, thank goodness for small favors), they keep adding gravel around the ASOS sensor, and it has increased year after year: http://shpud.com/weather/main.php?g2_itemId=155 and http://shpud.com/weather/main.php?g2_itemId=158 – slowly raising the average, presumably without discontinuities.

    The observable effect is that with calm winds (no influence from the nearby Potomac) the temperature will actually rise on radiational cooling nights when the wind goes calm (whereas it will modestly drop with continuing winds). That same effect will generally not occur at Patuxent and Annapolis (both also on the water) or any other area stations. Only Baltimore’s Science Center has a worse-sited thermometer in the DC area.

    If there are groundskeepers at National Airport, the “station of record” for DC slowly warming the site, they can certainly exist elsewhere.

    Other problems are increasing parking areas at airports. At National, new parking areas are relatively close by, since it is a cramped location. Other airports have increasing urbanization following the 70's and 80's exoduses to the suburbs.

    There does not seem to be a good solution for dealing with this data prior to 1979. But for most purposes data after 1979 should be obtained from the satellite record, for areas like the continental US and larger.

    Scientific disagreement is not fraud, but in the poisoned waters (thank you Michael Mann and Co.) of climate debate, paranoid ideation occurs spontaneously when, virtually without exception, all the old temperatures get downward adjustments, while all the new temperatures get adjusted upwards. The vigorous inquiry in this thread is refreshing and makes me question my own reflexive skepticism and look harder at the actual science. More would be accomplished if others would follow suit, and shut up and calculate, as no one looks dumber than when they are cheer-leading phenomenology.

  105. The cooling of the past seems to be systematically applied to a large proportion of the minimum temperatures.

    Unhomogenized quality controlled data for the USA shows cyclical temperatures as one would expect from cycles of the Pacific Decadal Oscillation.

    So, let's see: every time we come out with a new upgrade of the world's historical climate database, 1936 gets adjusted down another couple of tenths of a degree. Imagine, historical record temperatures are just like money – they get blown away by inflation. The 1930s now look sort of like the Little Ice Age; someone run and check – was London, in 1815, under a mile of ice?

    The USHCN needs to be sent to prison, along with just about every other person who got named in the ClimateGate emails, and maybe a few since then. They are crooks, charlatans, liars, communists, bad scientists, or some combination of all those things. 20 years at a minimum. And I mean it.

  107. “You really should issue a correction on your blog to make the reason for the spike clear, that it isn’t “tampering” per se, but an artifact of missing data and your method – Anthony”

    “Yes, but it’s wrong, so learn from the mistake, issue a correction and move on. – Anthony”

    So, what do you think of Goddards “correction” Anthony?

  108. So here we have the Bullshite reason for adjusting past temperatures lower “Back in the 1940s virtually all the stations used liquid-in-glass thermometers, which read about 0.6 degrees warmer in max temperatures (and about 0.2 degrees colder in min temperatures) than the new MMTS instruments introduced in the 1980s.”
    Now I have worked in “Quality” all of my working life, including metrology labs; under no circumstances would it be acceptable to go back and change a hundred years of “calibrated” values ad hoc because a new measuring method was obviously WRONG. The MMTS thermometers should have been calibrated to the glass/mercury ones, not used to throw a hundred years of readings in the bin.
    Does Zeke actually believe that the Victorians were so bad at engineering and science that they could not get a thermometer scale right? It just beggars belief.
    As others have mentioned Glass/Mercury Thermometers were accurate and I would far rather believe one than an “Electronic” device, anyone tried taking their temperature with a modern Thermometer to see what crap some of them are?

    James Hall (NM) says:
    May 10, 2014 at 6:32 pm
    says it all for me; he speaks from experience. Add to that all the siting problems we know about, and the “natural” differences in temperature within a few hundred yards, let alone miles, and you can see that each thermometer measures a REAL temperature for where it is.
    Just because it doesn't agree with another thermometer 10 km away does not make it wrong or in need of “correction”.

    Nick Stokes says:
    May 10, 2014 at 2:05 pm

    It’s just wrong method.

    No it is exactly the right method to show the “Manipulation” that is taking place and I am not just talking about the so called Corrections, I am talking about the presentation of the data to the Public to push an Agenda. You can deny it as much as you like but most people on here do not believe you.

    It is quite odd how the same people that have argued that UHI does not affect the global temp trend are now saying that adding in shorter-length records or thermometers with broken records does.
    Perhaps they are prepared to present their findings on this?
    The WUWT study shows that “poor quality” sites are a bigger problem, and 1000 km “gridding” is a complete joke.

    According to all four described methods, the temperatures measured in the USA in 2014 have to be adjusted up by 0.1°C or more!?

  110. A C Osborn says: May 11, 2014 at 2:42 am
    “The MMTS thermometers should have been calibrated to the Glass/Mercury ones, not used to thorw a hundred years of readings in the bin.”

    The readings are not in the bin. They are online, and Steven Goddard posted them. It's not a calibration issue; the actual readings are not disputed. The question is whether a warm afternoon has been counted twice because of TOB. That is a bias and needs correction. It is wrong not to correct it.

    Try Mosh’s experiment.
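    The TOB double-counting referred to here can be sketched with a toy max-register thermometer (the hourly values are invented; this is an illustration of the mechanism, not the NCDC TOB model):

    ```python
    def recorded_maxima(hourly, reset_hour):
        """Simulate a max-register thermometer reset once per day.

        hourly: temperatures (deg F), one per hour, hour 0 = midnight day 1.
        At reset the register restarts at the *current* temperature, which
        is what lets a hot afternoon leak into the next day's maximum."""
        maxima, register = [], hourly[0]
        for h, t in enumerate(hourly):
            register = max(register, t)
            if h % 24 == reset_hour:
                maxima.append(register)
                register = t
        return maxima

    day1 = [60] * 16 + [96, 90] + [60] * 6   # hot spike at 4 pm, cool evening
    day2 = [65] * 24                          # uniformly mild day

    print(recorded_maxima(day1 + day2, 17))  # [96, 90] -> afternoon counted twice
    print(recorded_maxima(day1 + day2, 23))  # [96, 65] -> late reset avoids it
    ```

    With the 5 pm reset, day 2's "maximum" is the residual 90 from day 1's afternoon; averaging such maxima gives a warm bias, which is what a TOB correction is meant to remove.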

  111. Station move adjustments should be 50% up and 50% down. Not 99% down (in terms of the past record).

    The MMTS sensor change should be a single small adjustment somewhere in 1984 to 2000 – not one that seems to get a new number every month, right through the record from 1895 to July 2014 (July's change is already programmed, it seems).

    TOBs – I'm tired of hearing about this. The weather bureau was issuing directives regarding proper recording times for TOBs in 1871 already. Do you think the observers in 1870 or 1934 or 2014 did not know that the time of day affects the reading?

    BEST station chopping – there should be just as many station cuts when there is an increase in temperatures as there are when there is a drop. BEST adjusts out all the drops but leaves all the increases in. Prove me wrong, Mosher. I've asked for this info going on a dozen times now and it is not reported anywhere.

    These are merely “justifications” to adjust the temperature record. It does not mean that the adjustments were implemented properly.

    In fact, the systematic nature of the adjustment through time is all the proof one needs that this is done to merely increase the trend.

    Why is there a systematic adjustment over the whole record? There was a smooth systematic impact from all these changes – bollocks, it should be variable over the record.

    Why does it change every month? (I get an email of the changes in GISS temperatures every month – from the changes at the NCDC – and literally half the monthly record changes every month, sometimes 3 or 4 times a month.)

    Here is a ScreenCap of what got changed in advance of the April temperature report. Yellow and strikeout are the changes.

  112. It is interesting to see that the discussion keeps referring to calibration of sensors.

    I note from a link earlier in this thread http://wattsupwiththat.com/2010/01/12/how-the-uah-global-temperatures-are-produced/ about satellite sensors.

    It seems strange that satellites are orbiting without any external calibration standards or methods.

    It seems as though they rely upon PT100s being averaged to determine the ‘hot’ body temperature.

    All well and good, but how are the PT100s calibrated and how often?

    It appears as though each satellite is ‘adjusted’ by using the results of a previous satellite.

    By ‘flying’ over a certain spot within a few minutes of each other, the drift in one is ‘adjusted’ to match that of the other.

    That seems to be a bit incestuous to me.

    I know of no use of PT100s where you can allow operation without regular calibration to a traceable, higher standard.

    The email between Wigley and Jones explains the scientific methods used to deal with unsavory blips:

    Phil,
    Here are some speculations on correcting SSTs to partly explain the 1940s warming blip.
    If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know). So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean — but we’d still have to explain the land blip.

    I’ve chosen 0.15 here deliberately. This still leaves an ocean blip, and i think one needs to have some form of ocean blip to explain the land blip (via either some common forcing, or ocean forcing land, or vice versa, or all of these). When you look at other blips, the land blips are 1.5 to 2 times (roughly) the ocean blips — higher sensitivity plus thermal inertia effects. My 0.15 adjustment leaves things consistent with this, so you can see where I am coming from.
    Removing ENSO does not affect this. It would be good to remove at least part of the 1940s blip,
    but we are still left with “why the blip”.

    Let me go further. If you look at NH vs SH and the aerosol effect (qualitatively or with MAGICC) then with a reduced ocean blip we get continuous warming in the SH, and a cooling in the NH — just as one would expect with mainly NH aerosols.

    The other interesting thing is (as Foukal et al. note — from MAGICC) that the 1910-40 warming cannot be solar. The Sun can get at most 10% of this with Wang et al solar, less with Foukal solar. So this may well be NADW, as Sarah and I noted in 1987 (and also Schlesinger later). A reduced SST blip in the 1940s makes the 1910-40 warming larger than the SH (which it currently is not) — but not really enough. So … why was the SH so cold around 1910? Another SST problem? (SH/NH data also attached.)

    This stuff is in a report I am writing for EPRI, so I’d appreciate any comments you (and Ben) might have.

    Tom

  114. Well nick,
    Point taken about US vs global. I would like to debate 2012 vs the 1930's with you, but unfortunately NCDC has been taken offline for the last few days… not sure if it's coming back. One easy way to hide the adjustments is to take the data away from the citizenry. You can get annual info for the globe from climate.gov, but not US raw or even adjusted data right now. I do find it somewhat disturbing that a government server would be offline for this long. Anyone have any insight into this?
    v/r,
    David Riser

    David Riser says: “One easy way to hide the adjustments is to take the data away from the citizenry. You can get annual info for the globe from climate.gov but not US raw or even adjusted data right now.”

    I agree that is a problem that seems to be politically driven. Give USHCN credit for making both raw and adjusted data accessible, as well as maximums and minimums. If NOAA truly wanted the public to be able to examine the data they would provide easy links to max, min, raw, quality controlled but not homogenized, and homogenized. The data is available for those who tenaciously hunt for it. It should be readily available.

  116. jim Steele (May 11, 2014 at 7:38 am) “If NOAA truly wanted the public to be able to examine the data they would provide easy links to max, min, raw, quality controlled but not homogenized, and homogenized.”

    If they truly wanted the public to examine their data *and methods*, they would allow the public to run their code after filtering out obvious crap stations like Norfolk Virginia. But if they really cared about a quality product they would have filtered them out themselves a long time ago. I for one am not interested in developing and running GIGO algorithms.

  117. Does it strike anyone else as ironic that “the science is settled” but the temperature record on which “the science” is based, is constantly adjusted?

    Perhaps ironic is the wrong word…Moronic is much better.

    So to summarize: irrespective of which method we use, Goddard's or Watts', the magnitude of the upward adjustment from 1960 onwards is roughly equal to the “trend” reported?

    And Anthony finds it appropriate to obsess over Goddard's completely insignificant and irrelevant “mistake”, instead of over the ‘upjustments’ equal to the trend…

    The historical temperature database has not been shown to be credible, given the magnitude of data adjustments and the potential for these to include biased or selective assumptions that yield incorrect data for comparative purposes.

    The climate science profession needs to understand and acknowledge that they have major credibility issues on their hands. “Just trust us, we are scientists” is NOT going to work. They have tried this for a decade now, and the public is, in my opinion, understandably either skeptical as to the reliability and trustworthiness of the data upon which the climate models have been based, or apathetic to the entire matter, particularly in light of the 17+ recent years of relatively little change.

    If climate change is as real a threat, and if the magnitude of change is as significant as they project, then they had better reassess their approach, open up the data, invite scientific criticism, and accept whatever short-term blows to credibility might occur for past mistakes and cover-ups. Stop treating their profession as if it were some subset of political social science and restore faith and credibility in its legitimacy as a hard science; something that, at the moment, fewer and fewer believe to be the case.

    Just as an example: yesterday on Reddit, there were two “ask me anything” posts from climate change experts. First of all, many of the questions asked had the appearance of having been prepared in advance by the climate scientists or their associates (they were simply too packaged, well-written and focused on their narrative to have been asked randomly by members of the general public). But more telling, the level of interest by the public (and Reddit is global, btw) was extremely limited. Their posts left the front pages quickly and faded into obscurity in just a few short hours.

    They have been fighting battles, public relations battles, using politically-based strategies, and they are losing. They do not understand that the general public can smell when it's political from miles away, and has demonstrated considerable distaste when it is force-fed to them.

    They need to understand that they are scientists, and as such, they make for really inept politicians. The subject at hand is not one that lends itself to politically-based solutions, except with the progressive left who will always be 100% supportive since they view the entire climate change matter as one of their justifications for a much broader social agenda. But less than 1 in 5 people consider themselves to be progressive liberals.

    The far larger body of moderates and independents need to be convinced if there is to be any hope of taking action on climate change and they are not going to buy off on the “trust us” approaches that have been employed to date.

    They especially are not going to buy off once they understand that the historical record has been significantly manipulated (corrected, adjusted, whatever) because of “faulty thermometers”, TOBS, gradient infills, etc., etc. The cat is out of the bag on this, thanks to Anthony Watts, Steven Goddard et al., and someone in the MSM, most likely beginning with a couple of the British newspapers, will eventually begin to question why this is so.

    Once they have to start explaining in their own defense, they have lost the argument. In my opinion, the climate science community has a major FUBAR on their hands; they have a lot of explaining to do, and they had better lead in this matter. They have dug themselves into a deep hole, yet they keep on digging, and they will likely continue to do so until they are called out, as the alternatives will be seen as just too painful.

    I have personally been trying for years to understand if global warming is going to be as severe and if the ramifications are going to be as disruptive as are being forecast. I honestly want to know the truth. I know no more what the truth is today than I did say 5 years ago and I have less confidence in the science today than I did back then. What you guys are doing isn’t working. You had better work very hard in building credibility because you are not going to succeed politically. Look at the political change in Australia. This is going to happen elsewhere unless you change your approach.

    • Ima, absolutely spot on! I am not a scientist, just a retired “pleb” tax-payer. I have been following this “story” for a number of years now, wanting to know the truth (I know, I know!). All that has happened is that my long-standing respect for the scientific community has taken a very large nosedive. (I would also mention I have been interested in nutrition and dietary science as well, and the similarities are quite remarkable.) I’m afraid if a “climate scientist” told me today is Sunday I’d spend an awful long time checking if that was so, and that really is the sad aspect of all these shenanigans.

  121. Essentially, what Zeke Hausfather is saying is that Prof Richard Lindzen is wrong when he writes:

    Inevitably in climate science, when data conflicts with models, a small coterie of scientists can be counted upon to modify the data. Thus, Santer, et al (2008), argue that stretching uncertainties in observations and models might marginally eliminate the inconsistency. That the data should always need correcting to agree with models is totally implausible and indicative of a certain corruption within the climate science community.

    The entire discussion is preposterous when viewed with a rational perspective. Every wild-eyed prediction by the promoters of the global warming scare has been flat wrong. So now, it has devolved into nitpicking over tenths and hundredths of a degree, with those sounding the false alarm insisting that their bogus methodology must be accepted. As Lindzen says:

    “Future generations will wonder in bemused amazement that the early 21st century’s developed world went into hysterical panic over a globally averaged temperature increase of a few tenths of a degree, and, on the basis of gross exaggerations of highly uncertain computer projections combined into implausible chains of inference, proceeded to contemplate a roll-back of the industrial age”

    A few tenths of a degree fluctuation is well within normal parameters; in the past, global temperatures have changed by tens of degrees, within decades. But the current drum-beating is over a minuscule ≈0.6º change — over a century and a half! Those promoting the carbon scare should be ashamed of their ridiculous, unscientific and emotional scaremongering. It is self-serving nonsense.

    I call on Zeke Hausfather to admit that the climate Null Hypothesis has never been falsified, and thus that nothing either unusual or unprecedented is happening vis-à-vis “global warming”.

    Admit it, Zeke. Because that is a fact. Isn’t it?

  122. At the closed door meeting: “Dammit, we have to show what we WANT and BELIEVE the data should be…”

  123. johnbuk – Thanks for the comments. I agree with your observations. In the past few years I have become increasingly jaded with respect to the lack of honesty and integrity in nearly every institution. I spent 30 years around politics and worked for 2 governors. I noticed attitudes and values beginning to change in the early 2000s, and they seem to have gotten progressively (pun?) worse. Don’t know if it’s because I am getting older, because the internet is exposing us to more of what has happened behind the scenes in the past, or because it is truly getting worse. Sadly, I believe it is the latter.

  124. dbstealey, as long as we’re quoting Richard Lindzen about this issue, it might be worthwhile to hear what he had to say to the British House of Commons Energy and Climate Change Committee in January when they were holding hearings about the IPCC AR5 report. Lindzen confirmed what a lot of us have suspected for a long time (at least since November 2009) about many of those who have gone into the “climate science” field (his comments justify the scare quotes, IMO). The video should start playing at the 2:49:14 mark (if it doesn’t, drag the playhead to that point). Watch for about 3 minutes:

    Watch the whole thing (3 hours 8 minutes) starting from the beginning if you have a strong stomach.

  125. Nick writes “The question is whether a warm afternoon has been counted twice because of TOB.”

    Or indeed a cold morning counted twice for the same reason. The need for TOBs is justifiable, but the fact that it’s a large positive adjustment spread over such a long time has always made me more than a little sceptical.

    Do we make the adjustments based on individual readings? After all, if consecutive min/max readings are different then there is no need for the adjustment, as they can’t be affected.

    Also, I can see that policy changes may change when readings are taken (i.e. 7am vs 9pm, for example), but are the adjustments always based on actual reading-time data? Or on what policy says should have been the reading time, in the absence of reading-time data?

    And human nature being what it is, some readings would be taken at “not the right time” for whatever reason (habit, sorting the mail first, whatever) but not necessarily written down accurately, because there is an expectation that observers should follow procedure, so why give anyone reason to question them? What is the harm if they read at 11am rather than 7am like they were meant to?

    TOBs is always going to be somewhat hit or miss, and errors will definitely be introduced into the adjusted readings as a result.

  126. TimTheToolMan says: May 11, 2014 at 9:04 pm
    “Do we make the adjustments based on individual readings? After all, if consecutive min/max readings are different then there is no need for the adjustment, as they can’t be affected.”

    No, you can’t tell that way. Suppose a warm Monday afternoon, peaks at 4pm. The 5pm reading will say that 4pm is max for Monday, but the 5pm will be max for Tuesday.

    “are the adjustments always based on actual reading time data?”
    “are the adjustments always based on actual reading time data?”
    They are based on the agreed reading time. The observer can request a change; that’s what triggers the adjustment. But there is another check. The observer reports the temperature at the time of reading (the current temperature, not the clock time). That’s usually already good enough to tell am from pm. And with thousands of readings, there’s a very good estimate of compliance.

    “What is the harm if they read at 11am rather than 7am like they were meant to?”
    Suppose it’s a very cold morning, with the min at 6am; then the value at the 7am reset is carried over and can be recorded as the min for the next day. The later he reads, the higher that carried-over min will be, with no balancing effect.
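
    The double count Nick describes can be sketched in a few lines of Python, with made-up hourly numbers purely for illustration:

```python
# Toy illustration of the time-of-observation double count (hypothetical numbers).
# Hourly afternoon temperatures (deg F) around a 5 p.m. observation time.
monday = {14: 78, 15: 81, 16: 83, 17: 82}   # warm day, peak 83 F at 4 p.m.
tuesday = {14: 70, 15: 72, 16: 73, 17: 71}  # cooler day

# "Monday's" max is read at 5 p.m. Monday and captures the 4 p.m. peak.
max_monday = max(monday.values())  # 83

# After the 5 p.m. reset the thermometer starts at the current 82 F, so
# Tuesday's recorded max can never drop below it: Monday's warmth counts twice.
reset_value = monday[17]
max_tuesday = max(reset_value, max(tuesday.values()))  # 82, not Tuesday's true 73

print(max_monday, max_tuesday)
```

    Monday’s 4 p.m. peak is recorded as Monday’s max, and the still-warm 82 F at the 5 p.m. reset becomes Tuesday’s recorded max, even though Tuesday itself never got above 73 F.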

  127. Nick writes “No, you can’t tell that way. Suppose a warm Monday afternoon, peaks at 4pm. The 5pm reading will say that 4pm is max for Monday, but the 5pm will be max for Tuesday.”

    True. A lower temperature at the time of reading can still be greater than the next day’s temperature. So we would have had to record min, max and current to overcome that.

    But Nick writes “They are based on the agreed reading time.”

    I have a big problem with that. Suppose someone took the reading at the right time on weekdays but later (or earlier) on the weekends. Or they were pretty random about it. Then that would throw the adjustment out considerably.

    Nick answers ““What is the harm if they read at 11am rather than 7am like they were meant to?””

    That was probably badly worded, since you misunderstood. I was writing that from the point of view of a person who was meant to read the temperature at a certain time and who, rather than having to explain to their boss why they were late reading the temperature, could easily write down the agreed time as a bit of a “white lie”.

  128. TimTheToolMan says: May 12, 2014 at 12:49 am
    “So we would have had to record min, max and current to overcome that.”

    In fact they did. But it still doesn’t pin it down. If the max next day is a bit higher than that current, it doesn’t mean it didn’t happen a few minutes later.

    “Or they were pretty random about it.”
    They could write down random temperatures too. But these are Coop volunteers. They go to a lot of trouble to contribute. There’s no point in doing that haphazardly.

  129. Bill Illis says:
    May 11, 2014 at 5:29 am
    TOBs – I’m tired of hearing about this. The weather bureau was issuing directives regarding proper recording times for TOBs in 1871 already. You think the observers in 1870 or 1934 or 2014 do not know that the time of day affects the reading.

    And I’m tired of reading nonsense such as this! We know from their records when the coop stations recorded their data! So despite the directives, we know the coop stations didn’t follow them, and that the TOBS changed over time; the effect of this was analyzed, and a correction formula was determined from that analysis so that the changes in practice over time could be corrected for. Whether the observers in 1934 knew that the time of day affected the readings doesn’t matter, because the fact is that they mostly weren’t going out to read the thermometers at midnight!

  130. Nick writes “If the max next day is a bit higher than that current, it doesn’t mean it didn’t happen a few minutes later.”

    Also true, but then it becomes a bias in the other direction, because you throw away the higher temperature on the first day in favour of making the temperature on the second day more “realistically” lower. You can’t win.

    Nick writes “They could write down random temperatures too. But these are Coop volunteers. They go to a lot of trouble to contribute. There’s no point in doing that haphazardly.”

    Human nature is what it is. I’m sure more than a few “readings” were guessed to fill in missed days, for various reasons. A couple of days a week of readings at a convenient time rather than the agreed time would have a significant impact too.

  131. Nick Stokes says:
    May 11, 2014 at 9:55 pm
    “No, you can’t tell that way. Suppose a warm Monday afternoon, peaks at 4pm. The 5pm reading will say that 4pm is max for Monday, but the 5pm will be max for Tuesday.”
    OK I accept that “could” happen.
    So are you going to correct for it, you say we must?
    But it didn’t happen and it didn’t happen 6 days that week and it certainly didn’t happen for months in the winter.
    So you have now “corrected” 6 days that week and months in the winter that didn’t need it.
    The reason for our differences is you believe you know what was happening 80 years ago and we say you don’t know what the actual weather was doing for any given day in any given location.

    The next question is: do you have hourly readings within a kilometre of those Co-Op weather stations that are read at 5pm, to verify that what you are saying is correct and what I am saying can’t be?

  132. Some years ago I was in a marine lab where the instrument recording outside temperature was electronically recording in real time. It was fairly easy to observe max and min daily temp and the mean temp was also automatically deduced. I asked why we still relied on (Tmin + Tmax)/2 as a proxy for Tmean and Steven Mosher on this site put me right.
    At the time I did not realise that the Tmaxes (and Tmins) were still being recorded only at specific times of the day. The thermometer which we had at school recorded Tmax and Tmin for the whole previous 24 hours (requiring it only to be read at approximately the same time each morning). Having read this thread I am more confused than ever. Why is it still necessary to use instruments for the measurement of temperature which require a TOB correction? Jobs for the boys?

  133. The system has changed over to continuous readings now, but the TOBs adjustment is used to “correct” the old once-per-day max/min readings.
    My problem with that is they don’t know whether a correction was necessary or not, but they seem to apply it anyway. I am still trying to get my head around Nick’s posts about it on his forum.

  134. Goddard’s analysis and conclusion are entirely correct – since 1890 temps have declined on average, yet the US gov’t still pushes the hockey stick as ‘science’. Fraud, data tampering, mendacity are not science. You can quibble over esoterics – is the inflation a lot, or just enough to support more government regulation over a carbon economy? This is splitting hairs. Instead of arguing in public, why didn’t WUWT simply communicate with Goddard in private, post the results and say the discrepancy might be due to xyz [I don't think we know]? Why would you try to devalue Goddard’s work, which is correct and relevant?

  135. Ferdinand (@StFerdinandIII) says:
    May 12, 2014 at 8:02 am
    This is splitting hairs. Instead of arguing in public, why didn’t WUWT simply communicate with Goddard in private, post the results and say the discrepancy might be due to xyz [I don't think we know]? Why would you try to devalue Goddard’s work, which is correct and relevant?

    It is neither correct nor relevant! As pointed out above it is an improper averaging of different sets of data, if the analysis is done properly with the same sets of stations the large spike disappears.

  136. take out 2014 and you still have proof that there is bias in the adjustments …
    Also, since nobody knows which stations are late-reporting in 2014, isn’t it just a guess that the spike is caused by late reports?

  137. #1 is the simple, common-sense approach. It answers the question: “what is the actual effect of homogenization and other adjustments”?

    #2-4 all share the same problem: they cannot answer the above question. News flash for Zeke: no one believes you guys are doing any of the adjustments honestly, even the station dropouts. Sorry, but your fellow climate scientists have dug a deep credibility hole.

    Because it is treating the first four months of 2014 as complete data for the entire year, it gives them more weight than other months

    This is true, but so what? Everyone knows the year is incomplete. Yes, the line will probably spike less by the end of the year, but it’s obviously a YTD value.

  138. We know when the coop stations recorded their data from their records! So despite the directives we know the coop stations didn’t follow them, and that the TOBS changed over time, the effect of it was analyzed and a correction formula determined from that analysis so that the changes in practice over time could be corrected for.

    And the correction formula just keeps getting warmer and warmer. If you only look for your keys under the lamppost, there’s a 100% chance you will never find them anywhere else.

  139. The TOBS adjustment really is pure BS. It doesn’t amount to a hill of beans if the person recording the min/max did it at an earlier time than midnight. The only thing that matters is: did they record the temp after the high for the day? If any of you had spent any time doing daily temp records, you would know that for any given area the max and min occur very close to the same time of day. So if my max peaked at 4pm and my min was at 6am, then as long as I reset the thermometer before I go to bed, it’s all good. I am sure that the coop volunteers did exactly that. So before you go blabbing about what-ifs, why don’t you make some daily readings, say every hour or so, and see what you actually get. What you will find is that TOBS is an excuse to freeze the world after the fact.

  140. A C Osborn says: May 12, 2014 at 6:05 am
    “it certainly didn’t happen for months in the winter”

    No, it can easily happen in winter. Here’s one way to think about it. An extreme case. Suppose the max always happens at 4pm, and you read and reset at 4pm every day. Then when you write down the max, it will be the higher of now or 4pm yesterday. Best of two. Summer or winter. Higher values count for two days. Minima only once. You were “warming the past”.

    If you read at 6am, and that’s when minima occurred, you’d get a cool bias. And if you change from 4pm to 6 am, that’s a big change.

    TOBS isn’t applied to daily readings. It’s a correction applied to monthly averages. It would apply a different correction to different seasons, if that’s what the data says.

  141. Nick writes “Then when you write down the max, it will be the higher of now or 4pm yesterday. Best of two. Summer or winter. Higher values count for two days. Minima only once. You were “warming the past”.”

    However being realistic again, if you wrote down the max at 4pm and then it increased some more that evening and the next day was even higher then the way the TOBs adjustment works now, you’ll artificially cool the average of those two days when in fact the average should have been adjusted up.

  142. Nick,
    That’s not how a min/max works… You take your reading late or early, it doesn’t matter as long as you’re consistent. You avoid reading during the min/max; the thermometer does the work. So typically a coop person would visit their thermometer once a day after 5pm, or perhaps later. But the typical highest temp will have been much earlier. As long as you’re consistent throughout the month, there is no bias. The instructions have always been very simple, and if observers do request a TOBS change it’s logged, and it’s a one-time deal that, if done carefully, once again does not introduce any bias. All this BS of “if this time and reset such and such” means nothing! In fact, to assume that the coop observer can’t figure this out, when it’s been discussed since the 1800s, is unbelievably arrogant. To accuse them of malfeasance without evidence is criminal. To adjust their data with statistics, without ever looking at their records, is stupid.
    v/r,
    David Riser

  143. David Riser says: May 12, 2014 at 8:58 pm
    “In fact, to assume that the coop observer can’t figure this out, when it’s been discussed since the 1800s, is unbelievably arrogant. To accuse them of malfeasance without evidence is criminal.”

    No-one is accusing anyone of malfeasance or any kind of failure. It is assumed that the Coop observers are faithfully following the agreed procedure. The TOBS is calculated on what they reported.

    There’s nothing theoretical about the TOBS effect. You can simulate a min/max reading system using hours and hours of modern data, and quantify the bias precisely. I’ve described that here.
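
    Nick’s simulation approach can be sketched along these lines. This is a minimal illustration using a synthetic diurnal cycle rather than real hourly station data, so the printed biases are illustrative only, not the published TOBS corrections:

```python
import math
import random

def hourly_series(days=365, seed=42):
    """Synthetic hourly temps (deg F): seasonal + diurnal cycles + persistent noise."""
    rng = random.Random(seed)
    temps, anom = [], 0.0
    for d in range(days):
        seasonal = 55 + 20 * math.sin(2 * math.pi * (d - 100) / 365)
        anom = 0.7 * anom + rng.gauss(0, 4)  # day-to-day weather persistence
        for h in range(24):
            diurnal = -8 * math.cos(2 * math.pi * (h - 6) / 24)  # min ~6 a.m., max ~6 p.m.
            temps.append(seasonal + anom + diurnal)
    return temps

def minmax_mean(temps, reset_hour):
    """Mean of (Tmin+Tmax)/2 from a min/max thermometer reset daily at reset_hour.

    Each 25-hour window shares its boundary value with the neighbouring window,
    which is exactly how a warm afternoon (or cold morning) gets counted twice."""
    means, start = [], reset_hour
    while start + 25 <= len(temps):
        window = temps[start : start + 25]
        means.append((min(window) + max(window)) / 2)
        start += 24
    return sum(means) / len(means)

temps = hourly_series()
midnight = minmax_mean(temps, reset_hour=0)
evening = minmax_mean(temps, reset_hour=17)  # 5 p.m. observer
morning = minmax_mean(temps, reset_hour=7)   # 7 a.m. observer
print(f"5 p.m. reset bias: {evening - midnight:+.2f} F")
print(f"7 a.m. reset bias: {morning - midnight:+.2f} F")
```

    With a midnight reset as the baseline, the 5 p.m. observer comes out with a warm bias and the 7 a.m. observer with a cool bias, which is the sign pattern the TOBS literature reports; the sizes here depend entirely on the noise level and diurnal amplitude chosen above.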

  144. Nick writes “There’s nothing theoretical about the TOBS effect.”

    Apart from the assumption that the readings are taken at the agreed time you mean?

  145. TimTheToolMan says: May 13, 2014 at 6:08 am
    “Nick writes “There’s nothing theoretical about the TOBS effect.”
    Apart from the assumption that the readings are taken at the agreed time you mean?”

    Actually, not even that. I posted this graph of the change of reading time over the years, from Vose 2003. It shows the metadata time (what people said) and dotted curves which are “method of DeGaetano”. This is a clever analysis based on the temperatures that were reported at the time of reading. With thousands of temperatures reported, and a good handle on diurnal variation from thousands of hourly readings, you can make a very good estimate of whether reporting times are correct. And the plot shows pretty good agreement.
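
    A much cruder cousin of the DeGaetano idea can be sketched as follows (this is not his algorithm, just an illustration of why at-reading temperatures betray the reading time): an afternoon observer’s at-reading temperatures sit systematically above the daily mean, and a morning observer’s sit below it.

```python
import math
import random

def day_temps(seed, days=120):
    """Synthetic hourly temps for one station: diurnal cycle + weather noise."""
    rng = random.Random(seed)
    series, anom = [], 0.0
    for _ in range(days):
        anom = 0.7 * anom + rng.gauss(0, 4)  # day-to-day weather persistence
        series.append([60 + anom - 8 * math.cos(2 * math.pi * (h - 6) / 24)
                       for h in range(24)])
    return series  # one 24-hour list per day

def infer_am_pm(days, reading_hour):
    """Crude observation-time check: compare at-reading temps to daily means."""
    diffs = [day[reading_hour] - sum(day) / 24 for day in days]
    return "pm" if sum(diffs) / len(diffs) > 0 else "am"

days = day_temps(seed=1)
print(infer_am_pm(days, reading_hour=17))  # afternoon observer
print(infer_am_pm(days, reading_hour=7))   # morning observer
```

    DeGaetano’s actual method is far more refined: it uses day-to-day temperature variations and hourly climatologies to estimate the reading hour itself, not merely am vs pm.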

  146. “clever analysis”

    “good handle”

    “very good estimate”

    “pretty good agreement”

    Nick, can I hire you as my latex salesman?

    Andrew

  147. Nick also wrote “And the plot shows pretty good agreement.”

    How does that look when compared to ALL the adjusted data rather than just the 30 years you’ve chosen?

  148. TimTheToolMan says: May 13, 2014 at 7:43 am
    DeGaetano, Arthur T. “A method to infer observation time based on day-to-day temperature variations.” Journal of Climate 12.12 (1999).

  149. Nick,
    If they followed the agreed TOBS there is no discrepancy or bias. I am thinking that y’all need to actually observe the temperature over the course of a day, for an entire year, for each degree of latitude, to understand where I am coming from. Climate scientists are making a mountain out of this because it conveniently fits their idea of what is. Honestly, if the record is such a train wreck then y’all ought to toss it out and start over. Oh yeah, we have satellites now, so no need. Hmm, and the temps are flat, what do you know.
    v/r,
    David Riser

  150. Nick writes “They could write down random temperatures too. But these are Coop volunteers. They go to a lot of trouble to contribute. There’s no point in doing that haphazardly.”

    It seems understanding human nature isn’t your thing, Nick ;-)

    Straight up from DeGaetano we have…

    “Introduction
    Differences in observation time among U.S. climatological stations arise due to the voluntary nature of the Cooperative Observer Network. Daily maximum and minimum temperature observations from the cooperative network are usually taken at an hour that is convenient for the volunteer”

    Thanks for pointing me at the reference, though. That should make for interesting reading.
