Human error in the surface temperature record

Guest essay by John Goetz

As noted in an earlier post, the monthly raw averages for USHCN data are calculated even when up to nine days are missing from the daily records. Those monthly averages are usually not discarded by the USHCN quality control and adjustment models, although the final values are almost always estimated as a result of that process.
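
The rule can be sketched in a few lines of Python. This is purely illustrative, not NCDC's actual processing code; the -9999 sentinel is the one used in the daily files.

```python
# Sketch of the "up to nine missing days" rule for a monthly raw average.
# Illustrative only -- not the actual USHCN processing code.

MISSING = -9999  # sentinel used in the daily files

def monthly_raw_mean(daily_values, max_missing=9):
    """Average the present daily values; give up if too many days are missing."""
    present = [v for v in daily_values if v != MISSING]
    if len(daily_values) - len(present) > max_missing:
        return None  # too sparse; no monthly raw average is computed
    return sum(present) / len(present)
```

With nine missing days the average is still computed from whatever remains; with ten it is not.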

The daily USHCN temperature record collected by NCDC contains daily maximum (TMAX) and minimum (TMIN) temperatures for each station in the network (ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/hcn/). In some cases, measurements for a particular day were not recorded and are shown as -9999 in the TMAX record, the TMIN record, or both. In other cases, a measurement was recorded but failed one of a number of quality-control checks.
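
For anyone who wants to examine those files directly, a minimal reader for one line of the fixed-width ".dly" format might look like this. It is a sketch based on the layout described in the readme that accompanies the archive; the station ID used in the example is made up.

```python
# Sketch of a reader for one line of a GHCN-Daily ".dly" file.
# Fixed-width layout: station ID (cols 1-11), year (12-15), month (16-17),
# element such as TMAX/TMIN (18-21), then 31 day-slots, each 8 chars wide:
# value (5 chars, tenths of a degree C, -9999 = missing) + MFLAG + QFLAG + SFLAG.

def parse_dly_line(line):
    """Split one fixed-width daily record into its fields."""
    station = line[0:11]
    year = int(line[11:15])
    month = int(line[15:17])
    element = line[17:21]
    values, qflags = [], []
    for day in range(31):
        off = 21 + day * 8
        values.append(int(line[off:off + 5]))   # temperature or -9999
        qflags.append(line[off + 6])            # quality flag; ' ' = passed
    return station, year, month, element, values, qflags
```

Counting -9999 entries and non-blank QFLAG characters over a whole file is then only a few more lines.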

Quality-Control Checks

I was curious as to how often different quality-control checks failed, so I wrote a program to cull through the daily files to learn more. I happened to have a very small number of USHCN daily records already downloaded for another purpose, so I used them to debug the software.

I quickly noticed that my code was counting more consistency-check failures in the daily record for Muleshoe, TX than were indicated by the “I” flag in the station’s corresponding USHCN monthly record. The daily record, for example, flagged the minimum value on February 6 and 7, 1929 and the maximum value on February 7 and 8. My code counted that as three failed days, but the monthly raw data for Muleshoe indicated two.

Regardless of how many failures should have been counted, it was clear from the daily record why they were flagged. The minimum temperature for February 6 was higher than the maximum temperature for February 7, which is an impossibility. The same was true for February 7th relative to the 8th.
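
The check itself is simple enough to sketch (illustrative only; this is not the actual NCDC code, and the temperatures in the example are made up):

```python
# Sketch of the internal-consistency check: a day's minimum temperature
# cannot exceed the following day's maximum. Not the actual NCDC code.

MISSING = -9999

def consistency_failures(tmax, tmin):
    """Return 0-based indices of days involved in min/next-day-max conflicts."""
    flagged = set()
    for day in range(len(tmin) - 1):
        if tmin[day] == MISSING or tmax[day + 1] == MISSING:
            continue
        if tmin[day] > tmax[day + 1]:  # physically impossible
            flagged.add(day)           # day whose minimum failed
            flagged.add(day + 1)       # day whose maximum failed
    return sorted(flagged)
```

When the conflicts fall on consecutive days, as at Muleshoe, the two failures overlap on the middle day, so two conflicts touch only three distinct days.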

I noticed there were quite a few errors like this in the Muleshoe daily record, spanning many years. I wondered how the station observer(s) could make such a mistake repeatedly. It was time to turn to the B-91 observation form to see if it could shed any light on the matter.

Transcription Errors

The B-91 form obtained from http://www.ncdc.noaa.gov/IPS/coop/coop.html is linked below. After converting the temperatures to Celsius the problem became apparent. The first temperature (43) appears to have been scratched out. The last temperature in that column (39) has a faint arrow pointing to it from a lower line labelled “1*”. The “*” is a note that states “Enter maximum temperature of first day of following month”.

February 1929 B-91 for Muleshoe, TX

It appeared that whoever transcribed this manual record into electronic form thought that the observer intended to scratch out the first temperature and replace it with the one below, and thus shifted the maximum values up one day for the entire month.

Muleshoe

To determine the observer’s intent, the B-91 for March, 1929 was examined to see if the first maximum temperature was 39, as indicated by the “1*” line on the February form. Not only was the first maximum temperature 39, it appeared to be scratched out with the same marking. Although the scratch marking appeared on the March form, that record was transcribed correctly. A quick check of the January, 1929 B-91 showed the same scratch marks over the first temperature.

March 1929 B-91 for Muleshoe, TX

January 1929 B-91 for Muleshoe, TX

The scratch marks appear in other forms as well. October, 1941 looked interesting because neither of the failed quality checks had an obvious cause. The flagged temperatures were not unusual for that time of year or relative to the temperatures the day before and after. Upon opening the B-91, the same “scratch out” artifact was visible over the first maximum temperature entry! Sure enough, the maximum temperatures were shifted in the same manner as February, 1929. As a result, two colder days were discarded from the average temperature calculation.

October 1941 B-91 for Muleshoe, TX

Because the markings were similar, it appeared they were transferred to multiple forms while the forms lay piled in a stack, probably because they were carbon copies. This likely happened after the forms were submitted, because on the 1941 form the observer did scratch out temperatures, and it was clear where the replacements were written.

Impact of the Errors

In addition to one incorrect maximum temperature, the three days flagged as failing the quality check were not used to calculate the monthly average. The unadjusted average reflected in the electronic record was 0.8C, whereas the paper record gave 0.24C, just over half a degree cooler. The time-of-observation estimate was 1.41C. The homogenization model decided that a monthly value could not be computed from the daily data and discarded it. It infilled the month instead, replacing the value with an estimate of 0.12C computed from values at surrounding stations. While that was not a bad estimate, the question is whether it would have been 0.12C had the transcription been correct. Furthermore, because the month was infilled, GHCN did not include it.
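
A toy calculation shows how discarding a couple of colder flagged days pulls a monthly mean upward (the numbers here are hypothetical, not the Muleshoe values):

```python
# How dropping flagged cold days shifts a monthly mean. Hypothetical numbers,
# not the Muleshoe record.

def monthly_mean(tmax, tmin, flagged=()):
    """Mean of daily (max+min)/2 midpoints, skipping any day index in `flagged`."""
    kept = [(tmax[d] + tmin[d]) / 2
            for d in range(len(tmax)) if d not in flagged]
    return sum(kept) / len(kept)

tmax = [10, 0, -2, 12]
tmin = [0, -10, -12, 2]
all_days = monthly_mean(tmax, tmin)                 # 0.0 with every day kept
qc_days = monthly_mean(tmax, tmin, flagged={1, 2})  # 6.0 with the two cold days dropped
```

Dropping the two cold days warms this toy month by six degrees; the real effect at Muleshoe was smaller but in the same direction.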

In the case of January, 1941, the unadjusted average reflected in the electronic record was 2.56C whereas the paper record was 2.44C. The TOB model estimated the average as 3.05C. Homogenization estimated the temperature at 2.65C. That value was retained by GHCN.

Discussion

Only recently have we had the ability to collect and report climate data automatically, without the intervention of humans. Much of the temperature record we have was collected and reported manually. When humans are involved, errors can and do occur. I was actually impressed with the records I saw from Muleshoe because the observers corrected errors and noted observation times that were outside the norm at the station. My impression was that the observers at that station tried to be as accurate as possible. I have looked through B-91 forms at other stations where no such corrections or notations were made. Some of those stations were located at people’s homes. Is it reasonable to believe that the observers never missed a 7 AM observation for any reason, such as a holiday or vacation, for years on end? Or that they always wrote their observations down correctly the first time?

The observers are just one human component. With respect to Muleshoe, the people who transcribed the record into electronic form clearly misinterpreted what was written, and for good reason. Taken by themselves, the forms appeared to have corrections. The people doing data entry likely did so many years ago with no training as to what common errors might occur in the record or the transcription process.

But the transcribers did make mistakes. In other records I have seen digits transposed. While transposing a 27 to a 72 is likely to be caught by a quality control check, transposing a 23 to 32 probably won’t be caught. Incorrectly entering 20 instead of -20 can get a whole month’s worth of useful data tossed out by the automatic checkers. That data could be salvaged by a thorough re-examination of the paper record.
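
A crude plausibility window shows why one transposition is caught and the other slips through (the bounds below are invented for illustration; real checks compare against station climatology and neighboring stations):

```python
# Why a simple range check catches 27 -> 72 but not 23 -> 32. The window
# below is invented; real QC uses per-station, per-month climatology.

def plausible(temp_c, low=-5.0, high=35.0):
    """A naive plausibility window for a temperate-zone month."""
    return low <= temp_c <= high

# 27 mistyped as 72: 72 C falls outside any plausible window, so it is caught.
# 23 mistyped as 32: 32 C is a perfectly plausible reading, so it is not.
```

The same logic explains the 20-versus--20 case: a sign error can push an entire month outside the window and get otherwise-good data discarded wholesale.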

Now expand that to the rest of the world. I think we have done as good a job as could be expected in this country, but it is not perfect. Can we say the same about the rest of the world? I’ve seen a multitude of justifications for the adjustments made to the US data, but a lack of explanation as to why the rest of the world is adjusted half as frequently.

124 thoughts on “Human error in the surface temperature record”

  1. At least human error goes both ways.
    On average this can be expected to have no impact on the trend.
    The adjustments to the temperature records seem more knowing.

    • M Courtney has a strong point. However, since we do not note the “+” of positives the missing “-” will always raise the observed temperature.
      If the Government can spend $3 trillion each year, we can afford to fix these errors if the temperature really makes a difference for policy. I think we should save the money, since the trends are based almost exclusively on natural phenomena, not human conduct.

      • Column 3, the “Range”, is clearly the next-line’s Maximum minus this line’s Minimum, so the data was used correctly as per the observer’s intent. The problem was the pernicious order of the columns, with “Maximum” in column 1 and “Minimum” in column 2, but the day’s minimum happens before the maximum! So observers who fill out the form sequentially would stagger the results as happened here. It would have been obvious to design this form with “Minimum” in column 1 and “Maximum” in column 2, but I don’t think user friendliness was a consideration back then.

      • This article isn’t well researched, I regret to say. The displayed temperature record staggers the minimum and maximum because of the 8AM time of its observations (see upper right of the form). At the bottom of the form it states “See cover for instructions”, and we can see those instructions at https://archive.org/stream/1924InstructionsCoopObservers/1924%20Instructions%20Coop%20Observers_djvu.txt . There it is explained that these temperature stations had two thermometers, a mercury maximum thermometer and an alcohol minimum thermometer. The day’s observation is taken once, preferably at sunset (as the instructions helpfully suggest) so that the day’s minimum and maximum are both to hand at that time. However, the form shown in this article, being taken at 8AM, shows the overnight low from that morning but the maximum of the day before. This is why the maximum is shifted 1 day from the minimum, not because the readings were interpreted by someone, but because of the 8AM daily reading time. The first maximum was thus struck out because it was the maximum of the day before the 1st of the month.
        So if this article had been researched properly, then I daresay the thrust of it would be quite different.

      • NZ Willy is correct. If they took the measurement at 8AM (not at sunset as the directions suggest) then the Maximum is associated with the previous calendar day. It sounds like that is how it was transcribed into the database. It doesn’t sound like an error. It seems Mr. Goetz misunderstands this?
        “It appeared that whoever transcribed this manual record into electronic form thought that the observer intended to scratch out the first temperature and replace it with the one below, and thus shifted the maximum values up one day for the entire month.”
        It seems from my reading of the instructions (linked by NZ Willy) that this is precisely what they should have done if they read at 8 AM and not at sunset.
        Is it possible they shifted it up two places? As I look at the above form I don’t see any days in the entire month where a minimum is higher than the maximum recorded on the next line.
        For example 26 and 35 are the minimum and maximum for Feb 6 with 35 being recorded on the line for Feb 7. 12 and 24 are the minimum and maximum for Feb 7 with 24 being recorded on the line for Feb 8.
        Does Mr. Goetz mean that they offset it even more than one day in transcribing it. Did they associate a minimum of 26 on the 6th with the 24 degree maximum on the line for the 8th?
        Respectfully request a further explanation from Mr. Goetz. Thanks.

      • Hey Steven, you say, “NET NET NET,, adjustments COOL THE RECORD.. use raw data go team skeptic”
        When raw data gives a more true description of the record than does adjusted data, then absolutely, use raw data. As for the “go team sceptic”, I think you may have a misunderstanding. Cooling the record, warming the record — sceptics do not have a goal of showing no global warming. We have a goal of determining what the truth is. Showing cooling or disproving warming is not our goal — although I will say that it appears current best evidence shows that catastrophic anthropogenic global warming is not happening.

      • Oh dear, more mendacity.
        Yeah, the farther you go into the PAST the more it is COOLED, the RECENT temperatures are WARMED to exaggerate the totally spurious warming trend.
        And then you wonder why we don’t believe a word you post!

      • As I understand it, Mr. Mosher, what you are referring to is the efforts to cool down certain data sets to account for UHI effect. The question is then, of course, are they cooled down enough? Or is it just enough to preserve the warming trend while still allowing strawman statements like this one?

    • Of course humans introduce biases
      I know in the lighthouses, there was a tendency to make up readings on bad nights to avoid the hassle of having to go outside and do the measurements.

    • Someone once did a study of pricing errors at grocery stores, and they discovered that the vast majority of errors were in the store’s favor.
      I read somewhere that the vast majority of temperature adjustments cool earlier years and warm recent years.
      Food for thought.

      • Quite often in grocery stores, the price per unit is higher for the large size than for the smaller size.
        Safeway Stores used to sell ” regular sized ” toilet paper rolls, for a good price and
        ” double ” rolls at the exact same price, for half as many double rolls. The ” double ” rolls actually had only 50% more surface area; not twice as much.
        And the mega rolls which were the same price for one quarter of the number of rolls, that were ” twice the size of the ” double ” rolls, were actually only double the size of the
        ” standard ” rolls.
        So the standard size rolls were the cheapest, and the mega rolls were the most expensive, and all were purportedly the same price per unit.
        Of course they eventually discontinued the ” standard ” size rolls, so the double rolls are now the cheapest, but more expensive than the originals were. And for good measure they switched the package color on the ” best buy ” product from blue to green, so as to further confuse the shopper. So finally my wife knows to now buy the green, instead of the blue which is far more expensive.
        Izzat ” caveat emptor ” in coliseum langwidge ??

    • I see no reason why a “random walk” scenario should result in the center of mass remaining indefinitely at the same point. I seem to recall that statistical analysis shows that the average position actually moves. (somewhere I recall, Pi comes into that somehow). Of course the direction of that average translation is entirely unpredictable.
      So human errors can definitely move the howling dog off of the thorn bush.
      g

      • On the contrary. A graph from Goddard proves nothing as such.
        I’d like to go Mosher here, but I’ll save you from terse insults and such. Just reproduce it, lets see then.

      • Hugs, the silence is deafening from Mosher and others that routinely defend the necessity/robustness of the temperature adjustments being made. The beauty of this particular graph among the many that could be posted is that it’s completely self-evident what’s going on. These two variables should be completely unrelated, yet have near perfect correlation.
        Coincidence?

      • Hugs: “On the contrary. A graph from Goddard proves nothing as such.”
        Unlike some other climate bloggers, Goddard posts all necessary references so that anyone with the necessary ability can verify his findings.
        Clearly you lack that ability.

      • >>Show us CO2 versus temperature for the ice-age cycle.
        >>And then please shut up.
        Here is exhibit one m’lud – Ice Age temperature vs CO2. And as you can see, ladies and gentlemen of the court:
        a. CO2 rises to a maximum. And when it hits MAX CO2, the world cools.
        … Ergo, increasing CO2 concentrations cools the atmosphere.
        b. CO2 reduces to a minimum. And when it hits MIN CO2, the world warms again.
        … Ergo, reducing CO2 concentrations warms the atmosphere.
        Ergo, CO2 must be a powerfully negative-feedback temperature regulator. 😉
        http://www.brighton73.freeserve.co.uk/gw/paleo/400000yearslarge.gif

    • As the adjustments shown are for the US data only, they can’t really be compared against CO2, which is a global index. The correct comparison would be for global land and sea surface temperatures versus global CO2.

  2. “The minimum temperature for February 6 was higher than the maximum temperature for February 7, which is an impossibility. The same was true for February 7th relative to the 8th.”
    Actually this happens more often in winter than you think. The reason is that, for example, 12-hour periods for max and min temperatures don’t overlap.
    An example in UK.
    9.00am to 9.00pm Max 7.8 c
    9.00pm to 9.00am Min 11.8 c
    There is nothing wrong with the weather station recording these temperatures. This scenario occurred because the day started cold with northerly winds, which veered to strong south-westerly winds during the day; very mild, cloudy weather then pushed north from the North Atlantic ocean behind a warm front, with the southerly air originating all the way down near the Azores.
    There are many occasions around the world when significant changes in weather patterns can result in unusual maximum and minimum temperatures on the same day. It seems to me they are doing the best they can at rejecting real data in favor of estimated, modeled values, for further confirmation bias.

    • “9.00am to 9.00pm Max 7.8 c
      “9.00pm to 9.00am Min 11.8 c
      “There is nothing wrong with the weather station recording these temperatures.”

      There are few stations in the U.S. taking temperature readings more than once per day, and those that do record both the MAX and MIN for each period. The scenario you describe would require the temperature to rise from 7.8 to 11.8 almost instantly at 9pm TOBS, and temperature remaining at or above 11.8 for the next 12 hours.

    • nice try … but fail … if at 8:59 pm the temp was 7.8 c are you saying its possible that 2 minutes later it was 11.8 c … because if the first time segment max temp was not at 8:59 then it must have been even lower at 9 pm (the 7.8 being the MAX for the time period) … so yes it is impossible … best case scenario max temp at 8:59 pm of 7.8 c and at 9:01 pm temp at 8 c and went UP for the next 12 hours … so that the min temp for the send time segment was 8 c … but a 4 degree diff in you example … that is impossible …

      • Can’t edit and put a correction in, but the max was 11.6 c, 7.8 c was 12.00 midday temperature. 4 c in seconds is impossible, but temperature rose during the night and by 9.00 am around 13 c.

      • KaiserDerden, a change of 4C in a few minutes is not impossible? On the plains in western Canada and northern part of western USA “chinook” winds in winter can have very large changes in minutes. Ranchers used to talk about riding from very cold winter temperatures into well above freezing in a few minutes.
        “Chinook winds have been observed to raise winter temperature, often from below -20 °C (-4 °F) to as high as 10-20 °C (50-68 °F) for a few hours or days, then temperatures plummet to their base levels. The greatest recorded temperature change in 24 hours was caused by Chinook winds on January 15, 1972, in Loma, Montana; the temperature rose from -48 to 9 °C (-54 to 48 °F).”
        If your station is in the foothills of Alberta or Montana, you can indeed get very rapid temperature changes in a very short time. If climatologists aren’t aware of these kinds of conditions and those referred to by Matt G, then you have novices buggering up the records. The temperature shifts that are seen as ‘discontinuities’ are actually corrected automatically by an algorithm -don’t you love the term!- (this gives the opportunity to shift the most recent temperatures upwards to ‘correct’ them in the case of an apparent ‘drop’ that may, in fact, be real). Yes, human errors are a fact of life, but really, it is a pretty simple thing to read and record a temperature. I think a lot of real data is getting corrected.

      • Garry, winds go up and down.
        The reason is that the Black Hills of South Dakota are home to the world’s fastest recorded rise in temperature, a record that has held for nearly six decades.
        On January 22, 1943, the northern and eastern slopes of the Black Hills were at the western edge of an Arctic airmass and under a temperature inversion. A layer of shallow Arctic air hugged the ground from Spearfish to Rapid City. At about 7:30am MST, the temperature in Spearfish was -4 degrees Fahrenheit. The chinook kicked in, and two minutes later the temperature was 45 degrees above zero. The 49 degree rise in two minutes set a world record that is still on the books. By 9:00am, the temperature had risen to 54 degrees. Suddenly, the chinook died down and the temperature tumbled back to -4 degrees. The 58 degree drop took only 27 minutes.

        http://www.blackhillsweather.com/chinook.html
        107 degrees of change in 1/2 hour.

      • “4 c in seconds is impossible”
        I have watched it happen. My village has a public time/temp digital display – the absolute temp not necessarily accurate. One evening I was sitting having a quiet beer on the waterfront when a major storm moved in. As the rain started, the temp reading plummeted 4C in less than one minute.

      • DD More, I once spoke with a man who was in Spearfish when that temperature change happened. He claimed it was so quick that several shops had their display windows break from thermal shock.

      • “4 c in seconds is impossible”
        I have watched it happen. My village has a public time/temp digital display – the absolute temp not necessarily accurate. One evening I was sitting having a quiet beer on the waterfront when a major storm moved in. As the rain started, the temp reading plummeted 4C in less than one minute.
        ————————————————————–
        Not surprised less than one minute, but literally this would require a 4 c rise in one second.

    • @ Matt, I have seen the same here in western Canada; temps here in the winter can fluctuate very much the way you describe, with very quick cold fronts and warm weather preceding and following them. We try as best we can to take our obs at 7 am and 7 pm. We also do regular “reality checks” with other stations in the area to eliminate errors.

        • Of course, if there are winds, it is possible that the current air mass in a given location will get replaced by a different, and maybe hotter, air mass from somewhere else.
          A 30 MPH wind can move an air mass as much as a mile in just two minutes, and as we all know, some air masses can be a lot smaller than a mile; for example, a tornado funnel.
        So has any place actually seen the Temperature of the LOCAL AIR MASS increase by 49 deg. F in two minutes or cool down 58 deg. F in 27 minutes, with NO winds ??
        I doubt that

    • Mildura Australia Feb 2012. Minimum 27th, 20.9°C. Max 28th, 18.9°C
      The minimum on the 28th was 17.6°C. Not that rare, as there can be a lot of cloud cover keeping minimums high before a cold front moves through. Something that cities like Melbourne experience often. I’m sure it’s had minimums before 9am higher than maximums after 9am.

  3. ” but a lack of explanation as to why the rest of the world is adjusted half as frequently”
    There is a very simple explanation – TOBS. The US had mostly volunteer observers, with their own opinions about when they should check the thermometer. ROW had employees, who observed at prescribed times.
    I can’t see the point of this article. There are millions of B91 forms, and I’m sure you’ll find mistakes. So what to do? Throw out the lot?

    • This form says right at the top that the TOB was 8 AM. Yet a full 0.5C TOB adjustment was still applied to the station by the model. How about we throw out the lot of misapplied TOB adjustments?

      • Yes. All data is TOBS-adjusted to match current practice for that station, which is presumably not 8am. MMTS of course does not have a min/max thermometer, but the daily average is calculated as if TOB were midnight.

      • You looked at morning to midnight TOBS adjustments before and found them to be small.
        http://moyhu.blogspot.com.au/2014/05/the-necessity-of-tobs.html
        This station’s TOBS adjustment of +0.5C far exceeds any of the others in the group of 190 stations you looked at before.
        Have you forgotten this analysis? It seems that this large of a TOBS adjustment should have set off some alarm bells for someone that has studied it before.

      • “It seems that this large of a TOBS adjustment “
        Yes, it is quite large for 7am (the 1941 time) to midnight. So the time adjusted to may not be midnight. Incidentally, didn’t people here say that adjustments were done to cool the past?
        “MMTS stand for Minimum Maximum Temperature System”
        Yes, but it is usually used for the thermistor version.

    • the point is to show that the supposed data that drives billions of dollars of government spending on green schemes is not valid or rigorous … if you mix dog poop with vanilla ice cream … don’t serve that to me and call it dessert … 🙂

    • Two reasons really. TOBs is one; the other is network density. The U.S. has ~7,000 co-op stations to use in pairwise homogenization. The rest of the world (with a few exceptions) has a much less dense network of stations. The fewer nearby neighbors you have beyond a certain point, the less likely you are to detect local breakpoints like station moves or instrument changes, especially if the effect is relatively modest.

      • Surely temperature station moves will, on average, have no impact on the temperature trend, though. We need to focus on adjustments that are likely to have an impact and I’m not sure TOBs has a big one either due to human nature interfering with “standard” reading times.
        Put it this way: if policy is to read at 8am but twice a week it’s read late in the morning or even in the evening (say on the weekends), then the actual TOBs adjustment could be less than half of what is applied by that policy-based assumption.
        The TOBs adjustment is an extraordinarily large adjustment and hence needs an extraordinarily large justification and it frankly just doesn’t have it. If anything it simply increases the error range of our readings.

    • “So what to do? Throw out the lot?” Nick, sigh, for the purpose of climate models yes. They are simply not good enough. They are good for historical reference, only that. Trying to use them leads to all types of corrections and modifications based on what the individual at the time feels it should be.
      michael

    • Fascinating how the warmist assumes that if you are employed by the govt you instantly become more reliable and trustworthy.

      • “Phil jones had an interesting approach.”
        That’ll be Phil “I’ll destroy all my data before I let anyone outside the Hockey Team inspect it” Jones of UEA CRU, will it?
        I’m impressed – not.

    • How about accounting for and fixing the errors that are found through a quick QA/QC check like John just did?
      Are you really this obtuse?

    • That begs the question: were the ROW employees all making observations at the same time that the TOBs adjustment is targeting? If not, you have a problem.
      Elsewhere I read a comment that essentially said the ROW lacks the station density to do pairwise homogenization on the scale done in the US.
      Both of the above are telling me we have an unevenly observed (and adjusted) temperature record.
      Which brings me to the point of this post, the others I posted recently, and the ones forthcoming. The source data has many flaws, and those flaws are injected at multiple points along the chain of custody. The wizards of smart believe they can algorithmically detect those flaws and correct them with estimates produced by so-called state-of-the-art models. I am not seeing evidence of such skill.

      • “That begs the question: were the ROW employees all making observations at the same time that the TOBs adjustment is targeting? “
        They were consistent over time. There is no one right time. TOBS change is like a change of instrument. The instrument before and after may be good, but you may still need to adjust for calibration for a consistent record. Reading at 9AM, 5PM, doesn’t matter as long as you stick to it. If you change, the record needs adjusting for consistency.

      • Nick writes “Reading at 9AM, 5PM, doesn’t matter as long as you stick to it.”
        Actually you’re better off randomly reading at either 9am or 5pm so that the errors cancel out. That way, from a long-term trend perspective, you don’t need to adjust the data at all.

    • “There is a very simple explanation – TOBS. The US had mostly volunteer observers, with their own opinions about when they should check the thermometer. ROW had employees, who observed at prescribed times.”
      Absolutely amazing. The entire world took their thermometer readings almost in unison, and folks in the U.S. just selected the time to observe theirs on their own personal whims.
      Got any more BS stories to share?

  4. All very interesting, I worked 10 months at Tolk station outside Muleshoe (stayed in Clovis), but in the overall big picture why does it matter?
    As I understand it the basic premise of the CAGW crowd is that increasing concentrations of atmospheric CO2 disrupt the “natural” atmospheric heat balance and the only way to restore that “natural” balance is by radiating that unbalanced heat back to space per the S-B relationship, i.e. increasing the surface temperature. BTW, the atmosphere is not, as some postulate, a closed system. That assumption simplifies calculations, but ignores reality.
    One, there is no such thing as the “natural” heat balance. As abundantly evident from both paleo and contemporary records the atmospheric heat balance has always been and continues to be in constant turmoil w/o regard to the pitiful 2 W/m^2 of industrial CO2 added between 1750 and 2011. Fluctuations in incoming and outgoing radiation, changing albedo from clouds and ice, cosmic rays, 10 +/- W/m^2 range of solar insolation from perigee to apogee, etc. refute that notion of a closed system.
    Two, radiation is far from the only source of rebalancing the “natural” heat balance. Water cools the surroundings when it evaporates and warms the surroundings when it condenses. The water vapor cycle, clouds, precipitation, etc., a subject of which IPCC AR5 admits a poor understanding, modulates and moderates the atmospheric heat balance and has done so for millions of years all without the help or hindrance of industrialized man. The atmospheric water cycle is just one huge global atmospheric swamp cooler for the earth. Other planets don’t have that. The popular GHE considers radiation only and excludes water vapor. Large commercial greenhouses typically have a wall full of evaporative cooler pads, water & fans.
    CAGW has zip to do with science and everything to do with a hazy, starry eyed, utopian, anti-fossil fuel (90% anti-coal) agenda bereft of facts & reality.

  5. “The time of observation estimate was 1.41C”
    “replacing the value with an estimate of 0.12C ”
    “Homogenization estimated the temperature at 2.65C”
    You will notice that all data are recorded to the nearest whole degree. Estimates to 0.01 degree are meaningless. You cannot exceed measurement accuracy when data are non-homogeneous.

    • I was thinking that too. If the observation is a whole Fahrenheit number then it’s essentially n +/- 0.5 to account for rounding up or down. So in reality there’s a 0.5degF error bar straight from the thermometer reading, with no way of knowing whether those errors cancel out over time or not.
      Once we factor in siting, equipment defects and TOBS I’d be surprised if we know the actual Tmax/Tmin to within +/- 1degF. Then there’s all that homogenisation, averaging and adjustment.
      If we’re honest, we should just say that we need to include at least a +/-1degC error bar. Our best estimates would then indicate that the temperature today is the same as it’s been for the past 150 years or more. We can then close down the AGW industry!
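      The rounding argument above is easy to check with a quick sketch (all numbers are hypothetical; this is a toy illustration in Python, not anyone’s actual processing):

```python
import random

random.seed(42)

# Hypothetical "true" temperatures (degF) and the whole-degree values a
# volunteer observer would write on a B-91 form.
true_temps = [random.uniform(20.0, 90.0) for _ in range(10_000)]
recorded = [float(round(t)) for t in true_temps]

# Any single reading can be off by up to 0.5 degF, exactly as noted above.
worst = max(abs(r - t) for r, t in zip(recorded, true_temps))
print(f"worst single-reading error: {worst:.3f} degF")

# With unbiased rounding the errors largely cancel in a long-run average;
# the caveat raised above is that real observers may not round without bias.
avg_bias = sum(r - t for r, t in zip(recorded, true_temps)) / len(true_temps)
print(f"mean error over {len(true_temps)} readings: {avg_bias:+.4f} degF")
```

      Whether the errors in the actual record cancel like this depends entirely on observers rounding impartially, which is the open question.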

  6. Anyone who has worked in a large organization knows that despite the competence and best intentions of all involved, errors happen. Over the last few hundred years, covering the entire globe, how can there not be questions about the validity of the temperature records? This is not intended to denigrate any individuals or systems. It is just that s..t happens. The more records, the more chances of s..t.

      • I agree, but even their accuracy should be suspect. This is a monumental job, and human nature leads us to believe we can do more than we can do and know more than we can know. I see the fingerprint of hubris all over climate science.

  7. According to Zeke’s hand-waving about TOBS, most stations collected data late in the afternoon in 1929, so they had to apply a 0.3 C adjustment to the entire record.
    http://www.skepticalscience.com/understanding-tobs-bias.html
    Yet here we have a station that collected temperature at 8 AM, and still had a ~0.5C TOB adjustment applied for the month.
    I hope Zeke will weigh in on why a 0.5C TOB adjustment is needed for this station that collected temps at 8 AM in 1929.

    • Regarding what Zeke says:
      Climate Etc. – Understanding adjustments to temperature data
      by Zeke Hausfather: “All of these changes introduce (non-random) systematic biases into the network. For example, MMTS sensors tend to read maximum daily temperatures about 0.5 C colder than LiG thermometers at the same location.”

      http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/
      What he measured
      Interviewed was meteorologist Klaus Hager. He was active in meteorology for 44 years and has now been a lecturer at the University of Augsburg for almost 10 years. He is considered an expert in weather instrumentation and measurement. One reason for the perceived warming, Hager says, traces back to a change in measurement instrumentation. He says glass thermometers were replaced by much more sensitive electronic instruments in 1995. Hager tells the SZ: “For eight years I conducted parallel measurements at Lechfeld. The result was that compared to the glass thermometers, the electronic thermometers showed on average a temperature that was 0.9°C warmer. Thus we are comparing – even though we are measuring the temperature here – apples and oranges. No one is told that.”
      Hager confirms to the AZ that the higher temperatures are indeed an artifact of the new instruments.

      http://notrickszone.com/2015/01/12/university-of-augsburg-44-year-veteran-meteorologist-calls-climate-protection-ridiculous-a-deception/
      http://wattsupwiththat.com/2015/03/06/can-adjustments-right-a-wrong/
      I could not find anywhere the effect this has on the monthly data you are posting here. Zeke ‘says’:
      At first glance, it would seem that the time of observation wouldn’t matter at all. After all, the instrument is recording the minimum and maximum temperatures for a 24-hour period no matter what time of day you reset it. The reason that it matters, however, is that depending on the time of observation you will end up occasionally double counting either high or low days more than you should. For example, say that today is unusually warm, and that the temperature drops, say, 10 degrees F tomorrow. If you observe the temperature at 5 PM and reset the instrument, the temperature at 5:01 PM might be higher than any readings during the next day, but would still end up being counted as the high of the next day. Similarly, if you observe the temperature in the early morning, you end up occasionally double counting low temperatures. If you keep the time of observation constant over time, this won’t make any difference to the long-term station trends. If you change the observations times from afternoons to mornings, as occurred in the U.S., you change from occasionally double counting highs to occasionally double counting lows, resulting in a measurable bias.
      So Zeke, where is the double counting of high or low temperatures in this month’s data, and did you get the correct sign (+/-) on your MMTS?

    • The adjustment is different for different parts of the US and different seasons.
      If you want proof go to the skeptics site which first audited this.
      John Daly
      The skeptics proved that TOB was real, that it differed by location, and that you needed to correct for it.
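      Zeke’s double-counting mechanism, as quoted above, can be reproduced with a toy simulation. The sinusoidal daily cycle and random day-to-day offsets below are made up for illustration; this is a sketch of the mechanism, not of any actual adjustment code:

```python
import math
import random

random.seed(0)

DAYS = 2000

# Hypothetical hourly temperatures: a sinusoidal daily cycle peaking near
# 3 PM, plus an independent random offset for each day (degF, made up).
offset = [random.gauss(0.0, 5.0) for _ in range(DAYS)]

def temp(day, hour):
    return offset[day] + 10.0 * math.sin(2.0 * math.pi * (hour - 9) / 24.0)

def mean_extreme(reset_hour, agg):
    """Mean daily extreme when the max/min thermometer is reset at reset_hour.

    The extreme over the 24 hours ending at reset_hour on day d is credited
    to day d, as with a real reset-once-a-day instrument."""
    vals = []
    for d in range(1, DAYS):
        window = [temp(d - 1, h) for h in range(reset_hour, 24)] + \
                 [temp(d, h) for h in range(0, reset_hour)]
        vals.append(agg(window))
    return sum(vals) / len(vals)

true_tmax = mean_extreme(0, max)   # midnight reset: no carry-over
pm_tmax   = mean_extreme(17, max)  # 5 PM reset: double-counts some highs
true_tmin = mean_extreme(0, min)
am_tmin   = mean_extreme(8, min)   # 8 AM reset: double-counts some lows

print(f"mean Tmax, midnight reset: {true_tmax:6.2f}")
print(f"mean Tmax, 5 PM reset:     {pm_tmax:6.2f}  (warm-biased)")
print(f"mean Tmin, midnight reset: {true_tmin:6.2f}")
print(f"mean Tmin, 8 AM reset:     {am_tmin:6.2f}  (cold-biased)")
```

      In this toy setup the afternoon reset inflates the mean Tmax and the morning reset deflates the mean Tmin, which is the asymmetry Zeke describes; how large the effect is at any real station depends on its actual weather and reset time.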

  8. Zeke has said elsewhere that because most stations in the US collected data late in the afternoon, a 0.3C TOB adjustment must be applied to the entire temperature record.
    Yet here we have a station in 1929 that collected data at 8 AM, and a ~0.5C TOB adjustment was still applied to it.
    Hopefully Zeke will weigh in on why such a large TOB adjustment was needed for this station collecting data at 8 AM.

      • But you generally don’t get different adjustments year-over-year. The February adjustment at Ice Station Zero, for example, doesn’t change with each passing year. That is not skillful.

  9. and always bearing in mind that an average temperature is as meaningful as an average phone number…
    and always bearing in mind that a bull who chases the cape gets the sword no matter how fiercely he attacks it…
    after it’s all been examined thoroughly, don’t forget to flush.

  10. “I have looked through B-91 forms at other stations where no such corrections or notations were made. Some of those stations were located at people’s homes. Is it reasonable to believe that the observers never missed a 7 AM observation for any reason, such as a holiday or vacation, for years on end? That they always wrote their observation down the first time correctly?”
    As I have explained to folks many times, there is no such thing as “raw” data. There is a purported first report.
    Filled with errors.
    The US is probably the worst in the world when it comes to these things because we relied on volunteers.
    That’s why when you run code to find mistakes and correct them, you find the following: the US record is one of the worst from the standpoint of inhomogeneities. That’s why you learn very little about adjustments by studying it. You’re basically studying the outlier.
    sad but true
    The typical US skeptic assumes the US is best so problems elsewhere MUST be worse. well, not so

    • The errors in this case were not due to the volunteers. They were due to the folks hired by the governmental agency. “Professionals”.

        • In this case I think it is pretty clear the observer did not correct anything. Reread the post please. There are spurious “cross-out” marks on many of the B-91 forms, and they appear to be some form of artifact that occurred after the forms were received by NCDC, possibly due to transfer of the bluing on carbon copies. The observer’s intent seems very clear. The transcriber was confused.

    • Steven Mosher
      The US is probably the worst in the world when it comes to these things because we relied on volunteers.
      Well it’s nice to know your thoughts on volunteers.
      Myself, I’ll take volunteers 24/7. They are doing it out of a sense of responsibility, a higher calling if you will. If you can’t see that and have to malign them, then perhaps you should be in another occupation.
      I’m normally not like this.
      michael

    • Stephen Mosher: “The US is probably the worst in the world when it comes to these things because we relied on volunteers.”
      Ye gods…
      You really are a piece of work!

  11. A daily temperature record is a very recent innovation.
    To understand the past we use various tools like ‘what grew here 1,000 years ago?’ Various clues are examined. The biggest factor remains that Ice Age clues are many and very strong and point to this being an Ice Age era with very short interglacials.
    We have various clues in rocks, fossils, glacier scouring marks on rocks, etc. to philosophize about the probable past. Ditto with warm weather clues in the past.
    The quibbling over very minor temperature changes, when we have an actual thermometer data record to examine, is rather silly when we consider that all of this is over very tiny changes in temperature compared to any Ice Age plunge.

  12. The time of obs shouldn’t matter with maximum and minimum values, in theory. They are taken on thermometers which record the respective parameters and are then reset for the next period, usually 24 hours. I have a couple of issues with the way such readings are handled. In Australia, the reset time is 9am (which in GMT varies in the months when daylight saving time commences and ceases), so the maximum temperature is recorded for the previous day at 0900, no matter that a hot wind has sprung up and it’s 5C hotter than the maximum the previous day. The max is also read at 1500, but in summer the max can be higher at 1800 than it had been at 1500.
    The introduction of electronic probes means that this practice is anachronistic, as a midnight-to-midnight max is easily obtained (and min, obviously), but the practice of recording the “daily max” at 0900 continues, to keep electronic records in line with manual records. One issue with the electronic probes is that they are more sensitive than mercurial max thermometers, often reading 0.5 degrees Celsius higher than the mercurial max in the same screen. To my mind this should point to older maxima being adjusted upwards, if they are adjusted at all. Then again, the electronic probes often give a slightly lower minimum temp, although the minimum thermometers used in Australia could give incorrect minima during windy periods, the marker being subject to “shaking down”. Before the advent of electronic probes, suspect minima could only be noted in the Obs book (especially as professional observers would have been taking at least hourly temp readings for aviation).

    • I think that electronic thermometers can react more quickly to temperature changes and therefore will always give higher highs and lower lows than mercury thermometers, which have built-in averaging due to thermal inertia. In science labs we were always told to wait for the mercury to stabilise before reading to allow for that inertia — glass isn’t a brilliant conductor!

  13. The more I’ve paid attention to all this (yes, I’m just a layman), the more I’m convinced that trying to get a “Global Temperature” from past records and even present surface station records is trying to, what’s the phrase, make silk out of a sow’s ear.
    Computers are powerful tools. But no matter what program they run, sound as it may seem to be, the foundation, the data, of the model produced is faulty.
    The House of caGW is built on sand.
    Satellites are the only truly global measurements we have.
    True, calibrations are involved. But is a “calibration” an “adjustment”?
    Only if the goal is to reach a desired conclusion rather than accuracy.

  14. Is it reasonable to think these observers never missed an observation? Of course not, but it is much worse than that. The coop station data is bad, but the First Order Stations are not far behind. Prior to 1961, mercury and alcohol thermometers were used in Weather Bureau stations, and these thermometers came with correction cards, which were promptly thrown in a drawer and never looked at again. In the early 60s they started using a Rube Goldberg device that transmitted a timed pulse to an indicator that had a 1/2 degree variation, and the needle was 1/2 degree wide. Just because you can mathematically arrive at an accuracy of 0.1 degrees does not make it true.

  15. Recording errors, if they are not immediately corrected, can only be accounted for by increasing the error bar of the data. The big trick is determining just how much more ‘unknowable’ the data become because of recording errors. The +/- variance of the various types and designs of thermometers can be estimated, but determining the additional unknown caused by mistakes is a real but difficult problem.
    Trying to make use of data where 0.1 degree is a vital difference, when the real uncertainty of comparison data collected 300 years ago is plus or minus 2 degrees, is futile and misleading. Everyone agrees that there must have been an average temp 300 yrs ago, but trying to figure out what it was to a tenth of a degree is usually futile.

  16. I think Mosher hit the nail on the head. We should throw them out, though that will never happen. This is the solar data times 10,000. And there is NO money to grant for systematically combing through hard copies for these kinds of mistakes. Instead, there is PLENTY of incentive NOT to.
    Frankly, I don’t even think God can manage to throw these records out. Heck, God couldn’t manage to throw the serpent out of his garden and instead left the damn thing in there till he had to throw the humans out. What chance do we have of these records getting cleaned up or thrown out, let alone the researchers who are benefiting from the errors?

    • Heck, God couldn’t manage to throw the serpent out of his garden and instead left the damn thing in there till he had to throw the humans out.

      😎
      Actually, it was Adam’s job to do that. It was delegated to him.
      15 And the LORD God took the man, and put him into the garden of Eden to dress it and to keep it. (Gen 2:15 KJV)
      The word “keep” is also used in the sense of “guard”. He had been given the authority and the power (Created “in the image of God”, spirit. (see John 4:24)) to keep Lucifer’s influence out. He lost both when he chose to learn about evil.
      We’ve been stuck with the serpent and his snakes running the local show (for now) ever since.
      Freedom of will is a big deal to God. He has the “raw power” to overturn it. But He also has the “raw love” and wisdom and justice to allow people to choose His solution, even though we don’t deserve it. He is true to Himself.
      Maybe you and others here may think I’m just a nut (maybe a likable nut?).
      I got this and more here http://sunriseswansong.wordpress.com/2013/07/11/attention-surplus-disorder-part-two/ .
      Please don’t respond. Caleb has allowed this to remain but he “doesn’t have a dog in this fight”.
      And here, there is the potential for a massive “derail”.
      Just read and consider.

  17. Hi Mr. Goetz, could you please provide a brief bio on yourself – I would like to excerpt some of your posts here with your permission, and I apologize, I don’t know who you are or your employer.
    Thanks

  18. I’ve made a couple of trips to Muleshoe since 2010 to try to track down the station’s history. Typically for such stations, it’s been located all over town. Posted my stuff, including pictures, in the photo gallery. So my first reaction when reading this post was to review what I’d found to see if there was anything relevant to Mr. Goetz’s subject. (Probably not, but wouldn’t hurt to look.) Being out of California at the moment, I don’t have access to my original materials, but I could look online, if the gallery were back in service. Just sayin’….

  19. emsnews September 28, 2015 at 3:49 pm said
    “A daily temperature record is a very recent innovation.
    To understand the past we use various tools like ‘what grew here 1,000 years ago?’ Various clues are examined. The biggest factor remains that Ice Age clues are many and very strong and point to this being an Ice Age era with very short interglacials.
    We have various clues in rocks, fossils, glacier scouring marks on rocks, etc. to philosophize about the probable past. Ditto with warm weather clues in the past.
    The quibbling over very minor temperature changes, when we have an actual thermometer data record to examine, is rather silly when we consider that all of this is over very tiny changes in temperature compared to any Ice Age plunge.”
    There is no problem with the large temperature changes from the Roman and Mediaeval Warms to the cold spells between them. The problem is that small temperature changes recorded by imperfect thermometers, read by imperfect observers, transcribed by imperfect clerks and used in imperfect models by so-called “climate scientists” can be used by those with an axe to grind to promote bad policies. And we suffer from it.
    If what Mike T said above (“One issue with the electronic probes is that they are more sensitive than mercurial max thermometers, often reading 0.5 degrees Celsius higher than the mercurial max in the same screen. To my mind this should point to older maxima being adjusted upwards, if they are adjusted at all.”) is correct, then the alleged 0.8K rise in global mean temp is really not much more than a real 0.3K rise in gmt. So effectively there is a negligible rise in gmt due to the near doubling in atmospheric CO2 over the last 150 years.
    This should be dinned into the brains of all politicians BEFORE the Paris conference, not after.

  20. This sentence from paragraph 5 of the article above, “The minimum temperature for February 6 was higher than the maximum temperature for February 7, which is an impossibility.”, raises a question: is that an erroneous assumption? It’s utterly common for the following day’s max to be lower than the previous day’s min when a cold front moves in. Or is the author referring to a time-of-observation problem, where the high max the observer recorded on the 7th is actually the high max that registered on the thermometer on the 6th? In that case, yes, a max could not be less than the min for the same day. Help! Is the quoted sentence problematic, or not?

  21. In Australia, and apart from questions about the accuracy of homogenisation of the raw dataset, I believe there are still unresolved questions about the impact of metrication in 1972. Almost half the country’s observer-written records before 1972 were rounded to whole degrees Fahrenheit, and the proportion immediately dropped to between 10 and 20% at all stations when the new Celsius thermometers and observer practices were introduced. Many western countries converted from F to C in the 1960s and 1970s, when AGW supposedly became pronounced. Many people claim the rounding was equally up and down, but I disagree, on the basis that it would be human nature for a greater proportion to be truncated down. The BoM detected a 0.1C artificial warming at the time of metrication but decided not to adjust for it. Australia’s rounded temps are dissected at http://www.waclimate.net/round/rounded-australia.html
    As for the accuracy of the homogenised or corrected ACORN dataset, its hottest-ever day in Australia was at Albany on the south-west coast on 8 February 1933, with a max of 51.2C, even though that day is confirmed to have had a top temp of 44.8C in the raw record. The single-day error increases the ACORN reading of Albany for Feb 1933 from a monthly average of 28.58C to a monthly average of 28.81C, and the incorrect adjustment has been recognised but left uncorrected for several years.
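    The truncation-versus-rounding point above can be checked with a quick sketch (hypothetical temperatures; Python). If pre-metrication observers truncated while later practice rounded impartially, the changeover alone would introduce an artificial warming step:

```python
import math
import random

random.seed(1)

# Hypothetical true temperatures in degrees Fahrenheit.
true_f = [random.uniform(40.0, 100.0) for _ in range(100_000)]

# Unbiased rounding to the nearest whole degree vs. truncating the fraction,
# as the comment suggests some observers may have done.
rounded = [round(t) for t in true_f]
truncated = [math.floor(t) for t in true_f]

def mean(xs):
    return sum(xs) / len(xs)

round_bias = mean(rounded) - mean(true_f)    # near zero: errors cancel
trunc_bias = mean(truncated) - mean(true_f)  # near -0.5 degF: cold bias

print(f"rounding bias:   {round_bias:+.3f} degF")
print(f"truncation bias: {trunc_bias:+.3f} degF")
```

    A systematic -0.5 degF (about -0.28 degC) cold bias in the earlier record, removed at metrication, would show up as apparent warming; whether real observers truncated at that rate is the unresolved question the comment raises.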

  22. Each recorded temperature is more properly recorded as, for example, 31° +/- 0.5°. This is a human reading an analog-scale thermometer and recording the number only in whole units. The 31° recorded would be the same for any temperature from 30.5° to just under 31.5° — it is a recording of a range, just under 1° wide.
    There is no way to remove this known original measurement error — all results must be recorded with the same error notation, and all results are also a range, 1° wide. The monthly averages should be noted as 1.41° +/- 0.5° (the range 0.91° to 1.91°). There is no valid method of determining where in the range the actual value should be placed.
    In the surface station I investigated personally, Santo Domingo, the Chief Meteorologist explained how the “shorter than average” Dominican males who recorded the daily temperatures were instructed to stand on the concrete block kindly provided, so that their eyes would be level with the thermometer, at the right angle, to record the temperature correctly. He noted that the shorter men were especially prideful and would not stand on the block, thus always recording temperatures from a low angle. It is uncertain how much this simple cultural factor influenced the readings from this station.
    It is interesting how much the automatic TOB adjustment changes the actual reading –> “The unadjusted average reflected in the electronic record was 0.8C whereas the paper record was 0.24C, just over half a degree cooler. The time of observation estimate was 1.41C. ”
    I would like to see this blanket TOB adjustment researched from scratch, against modern digital hourly data to see if the adjustment used in all data bases is actually valid.

  23. This is an essentially pointless exercise. Clearly the signal-to-noise ratios of historical temperature records are inadequate to discover if temperatures worldwide or even in the USA have changed significantly, much less whether any such change was due to human actions. Error after error at every stage in multiple processes renders the “data,” raw or otherwise, unfit for purpose. Those who report historical temperatures and compare them to present temperatures, who FAIL to report the standard error of historical temperatures, are simply misleading the public, know it, and should stop. BEST practice is somehow not entirely the truth. Indeed silk purses are not made from sow’s ears…

  24. Speaking from the perspective of someone who collects reams of field data in the private sector, this discussion is mind boggling. The way these data are so carelessly treated through adjustment, homogenization, infilling and nonsensical error estimation, you would think that the data serve no real purpose. To think that trillions of dollars worth of ramifications hang on such flimsy quality control procedures makes my mind spin.
    In the private-sector world of real consequences, there would be no estimations, no infilling, no homogenization. Good stations with good data would be selected in various parts of the world, and stations with discontinuities or discrepancies would be dropped. Period. If I collect some data in the field with a 3DCQ greater than the client’s accepted limit, I don’t get to say, like so many apologists here, “What do you want me to do? Throw it out? It’s the best I have!” I don’t have the option to discard the data-quality rules and estimate the data location. Instead my data gets tossed because it isn’t accurate enough for the purpose of the client. Period.
    I’m left concluding that only someone whose work has no consequences could imagine that these data are accurate enough for the purpose they are being used for. Why else is this so obvious to everyone who works in the real world and so hard to understand for academics and bureaucrats?

    • I have made similar comments for years.
      These weather stations were never intended to perform the function to which they are being put. They are not fit for purpose, and their data is being over-extrapolated beyond its capabilities.
      If the climate scientists wish these stations to perform the task to which they are now being put, the starting point would be to audit each and every station for its siting and siting issues, station moves, equipment used, screen changes, maintenance to equipment and screen, changes to equipment, record keeping and the approach to accurate record keeping, the length of uninterrupted records, etc. The good stations could be identified and the poor stations could be thrown out.
      Essentially what should have been looked for is the equivalent of USCRN stations but with the longest continuous data records. It may be that we would be left with only 1000 or perhaps only 500 stations worldwide, but better to work with good quality pristine data that requires no or little adjustment than to work with loads and loads of cr*p quality station data and cr*p data needing endless data manipulation/adjustment/homogenisation.
      We are now no longer examining the data and seeing what the data tells us, but rather we are simply examining the efficacy of the various adjustments/homogenisation undertaken to that data.
      Quite farcical really.

      • Exactly! There are billions of government dollars available for study after study but the most basic data QC is abandoned? The gulf between best practices in the Climate Science community and the real world is staggering beyond comprehension.

  25. The only way we can get a true surface data set is by using the same samples throughout, from start to finish. The stations are changed all the time, so they are always measuring different parts of the planet’s surface. That has never been a technique that should be used to estimate a massive surface area to just tenths of a degree using a tiny percentage of it. It is impossible to suggest there has been any accuracy in it, and the only things that come close to this ideal are the satellite data sets. We would need a million weather stations on the planet’s surface to even come close to what satellites can measure in the troposphere; forty-four thousand weather stations cover roughly one percent of the planet’s surface.

  26. I always thought it would be interesting to try to verify the station data with some kind of proxy. Wouldn’t it be interesting to see a set of proxy data collected for the US since 1880 compared to the temperature record? Yes, proxy data has its own limitations, but if enough data were collected, that should tend to average out the errors.
    I suspect the problem is no one in the government wants to see any attempt to validate the data. Hence, nothing could ever get funded. It would almost take a volunteer group.

  27. This is in reference to the graph that ralfellis presented above in the comments section.
    Wait a minute, are you saying that CO2 is a follower of a temperature trend rather than the cause of a temperature trend?
    I’m just a layman, and I often don’t understand all the scientific jargon, but it seems to me that if CO2 is a “negative-feedback temperature regulator” that a solution calling for a reduction in CO2 to save the world has a big problem.
    This makes me wonder…
    Can anyone (preferably a scientist) answer these four questions:
    • If human beings were producing the same amount of CO2 before the last Ice Age that we are today, would the last Ice Age have been averted?
    • Is the fact that human beings burn fossil fuels today going to avert another Ice Age?
    • If human activity is capable of abnormally warming the Earth, are we capable of abnormally cooling it?
    • What human activity would abnormally cool the Earth?

  28. Double counting lows/highs.
    Maybe someone can explain this to me, but wouldn’t you get a double high/low only once, on the day you switch? As long as you don’t switch again, it shouldn’t be a problem. It seems to me that a single switch of TOBS would be insignificant in the data.

    • As I see it, apart from rare isolated events, this can only potentially be a significant and repeated problem where the station TOB coincides with the warmest part of the day.
      Obviously the temperature profile of every day is slightly different, but in general the warmest time of the day is an hour or so after the sun has reached its peak height that day. Thus the warmest time of the day is usually some time between about 1pm and 3:30pm.
      That being the case, every station that has a TOB coinciding broadly with the warmest time of the day should either be disregarded from the data set, or it should be put in a separate bin, and very detailed and careful consideration should be given to its record, with an adjustment being made if necessary.
      But I consider the better practice would be to disregard any station if it has a TOB coinciding approximately with the warmest period of the day.
      It would be interesting if Zeke, or Mosher would comment on why they do not simply disregard stations that have TOBs around the warmest part of the day.
