Human error in the surface temperature record

Guest essay by John Goetz

As noted in an earlier post, the monthly raw averages for USHCN data are calculated even when up to nine days are missing from the daily records. Those monthly averages are usually not discarded by the USHCN quality control and adjustment models, although the final values are almost always estimated as a result of that process.
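
For readers who want to experiment, here is a minimal sketch (not NCDC's actual code) of a raw monthly average that tolerates missing days. The nine-day threshold follows the text above; the precise USHCN rule may differ in detail.

```python
# Minimal sketch of a raw monthly average that tolerates missing days.
# The nine-day threshold is taken from the text; NCDC's actual rule
# and code may differ.

MISSING = None  # placeholder for a day with no valid reading

def monthly_raw_average(daily_means, max_missing=9):
    """daily_means: one (TMAX+TMIN)/2 value per day of the month, or MISSING."""
    present = [t for t in daily_means if t is not MISSING]
    if len(daily_means) - len(present) > max_missing:
        return None  # too many gaps: no monthly value is computed
    return sum(present) / len(present)
```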

The daily USHCN temperature record collected by NCDC contains daily maximum (TMAX) and minimum (TMIN) temperatures for each station in the network (ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/hcn/). In some cases, measurements for a particular day were not recorded and are shown as -9999 in either or both of the TMAX and TMIN records for that day. In other cases, a measurement was recorded but failed one of a number of quality-control checks.
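
The .dly files are fixed-width: station ID, year, month, and element (e.g. TMAX), followed by 31 groups of value, MFLAG, QFLAG, and SFLAG, with values stored in tenths of a degree Celsius. A minimal parsing sketch (field positions per the GHCN-Daily readme; error handling omitted):

```python
# Sketch: parse one line of a GHCN-Daily .dly file. Each line holds one
# station/month/element; values are tenths of deg C, -9999 = missing.

def parse_dly_line(line):
    record = {
        "id": line[0:11],
        "year": int(line[11:15]),
        "month": int(line[15:17]),
        "element": line[17:21],  # e.g. "TMAX" or "TMIN"
        "days": [],
    }
    for day in range(31):
        offset = 21 + day * 8
        value = int(line[offset:offset + 5])  # -9999 if missing
        qflag = line[offset + 6]              # blank = passed quality checks
        record["days"].append((value, qflag))
    return record
```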

Quality-Control Checks

I was curious as to how often different quality-control checks failed, so I wrote a program to cull through the daily files to learn more. I happened to have a very small number of USHCN daily records already downloaded for another purpose, so I used them to debug the software.

I quickly noticed that my code was calculating a larger number of consistency-check failures from the daily record for Muleshoe, TX than was indicated by the “I” flag in the station’s corresponding USHCN monthly record. The daily record, for example, flagged the minimum value on February 6 and 7, 1929 and the maximum value on February 7 and 8. My code was counting that as three failed days, but the monthly raw data for Muleshoe indicated it was two days.

Regardless of how many failures should have been counted, it was clear from the daily record why they were flagged. The minimum temperature for February 6 was higher than the maximum temperature for February 7, which is an impossibility. The same was true for February 7th relative to the 8th.
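
A sketch of the kind of check involved is below. Exactly how NCDC attributes a violation to particular days is assumed here, and that ambiguity may account for the three-versus-two discrepancy noted above.

```python
# Sketch: find places where a day's minimum exceeds the next day's
# maximum -- the pattern flagged at Muleshoe in February 1929. Which
# day(s) each violation is charged to is this sketch's assumption,
# not NCDC's documented rule.

MISSING = -9999

def min_max_violations(tmin, tmax):
    """tmin, tmax: parallel lists of daily values in tenths of deg C."""
    flagged = []
    for d in range(len(tmin) - 1):
        lo, next_hi = tmin[d], tmax[d + 1]
        if lo != MISSING and next_hi != MISSING and lo > next_hi:
            flagged.append(d + 1)  # 1-based day whose minimum is suspect
    return flagged
```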

I noticed there were quite a few errors like this in the Muleshoe daily record, spanning many years. I wondered how the station observer(s) could make such a mistake repeatedly. It was time to turn to the B-91 observation form to see if it could shed any light on the matter.

Transcription Errors

The B-91 form obtained from http://www.ncdc.noaa.gov/IPS/coop/coop.html is linked below. After I converted the temperatures to Celsius, the problem became apparent. The first temperature (43) appears to have been scratched out. The last temperature in that column (39) has a faint arrow pointing to it from a lower line labelled “1*”. The “*” refers to a note that states “Enter maximum temperature of first day of following month”.

February 1929 B-91 for Muleshoe, TX

It appeared that whoever transcribed this manual record into electronic form thought that the observer intended to scratch out the first temperature and replace it with the one below, and thus shifted the maximum values up one day for the entire month.
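
A toy illustration of the effect (invented numbers, not the actual Muleshoe values): shifting the maxima up one day can turn an internally consistent month into one that fails the minimum-versus-next-day-maximum check wherever a cold front arrives.

```python
# Toy illustration (hypothetical numbers): shifting the TMAX column up one
# day, as the transcriber appears to have done, manufactures violations.

tmax = [43, 50, 35, 24, 30]  # correct daily maxima (cold front on day 3)
tmin = [20, 26, 12, 5, 10]   # correct daily minima

# The transcription error: every max moved up one day; next month's first
# max (the value on the "1*" line) fills the final slot.
shifted_tmax = tmax[1:] + [39]

for d in range(len(tmin) - 1):
    if tmin[d] > shifted_tmax[d + 1]:
        print(f"flag: day {d + 1} min ({tmin[d]}) exceeds "
              f"day {d + 2} max ({shifted_tmax[d + 1]})")
# -> flag: day 2 min (26) exceeds day 3 max (24)
```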

Muleshoe

To determine the observer’s intent, the B-91 for March, 1929 was examined to see if the first maximum temperature was 39, as indicated by the “1*” line on the February form. Not only was the first maximum temperature 39, it appeared to be scratched out with the same marking. Although the scratch marking appeared on the March form, that record was transcribed correctly. A quick check of the January, 1929 B-91 showed the same scratch marks over the first temperature.

March 1929 B-91 for Muleshoe, TX

January 1929 B-91 for Muleshoe, TX

The scratch marks appear in other forms as well. October, 1941 looked interesting because neither of the failed quality checks had an obvious cause. The flagged temperatures were not unusual for that time of year or relative to the temperatures the day before and after. Upon opening the B-91, I saw the same “scratch out” artifact over the first maximum temperature entry! Sure enough, the maximum temperatures were shifted in the same manner as February, 1929. As a result, two colder days were discarded from the average temperature calculation.

October 1941 B-91 for Muleshoe, TX

Because the markings were similar, it appeared they were transferred to multiple forms when they lay piled in a stack, probably because the forms were carbon copies. This likely would have happened after they were submitted, because on the 1941 form the observer did scratch out temperatures, and it was clear where the replacements were written.

Impact of the Errors

In addition to introducing one incorrect maximum temperature, the shift meant that all three days flagged as failing the quality check were not used to calculate the monthly average. The unadjusted average reflected in the electronic record was 0.8C, whereas the paper record was 0.24C, just over half a degree cooler. The time of observation estimate was 1.41C. The homogenization model decided that a monthly value could not be computed from the daily data and discarded it. It infilled the month instead, replacing the value with an estimate of 0.12C computed from values at surrounding stations. While that was not a bad estimate, the question is whether it would have been 0.12C had the transcription been correct. Furthermore, because the month was infilled, GHCN did not include it.

In the case of January, 1941, the unadjusted average reflected in the electronic record was 2.56C whereas the paper record was 2.44C. The TOB model estimated the average as 3.05C. Homogenization estimated the temperature at 2.65C. That value was retained by GHCN.

Discussion

Only recently have we had the ability to collect and report climate data automatically, without the intervention of humans. Much of the temperature record we have was collected and reported manually. When humans are involved, errors can and do occur. I was actually impressed with the records I saw from Muleshoe because the observers corrected errors and noted observation times that were outside the norm at the station. My impression was that the observers at that station tried to be as accurate as possible. I have looked through B-91 forms at other stations where no such corrections or notations were made. Some of those stations were located at people’s homes. Is it reasonable to believe that the observers never missed a 7 AM observation for any reason, such as a holiday or vacation, for years on end? That they always wrote their observation down the first time correctly?

The observers are just one human component. With respect to Muleshoe, the people who transcribed the record into electronic form clearly misinterpreted what was written, and for good reason. Taken by themselves, the forms appeared to have corrections. The people doing data entry likely did so many years ago with no training as to what common errors might occur in the record or the transcription process.

But the transcribers did make mistakes. In other records I have seen digits transposed. While transposing a 27 to a 72 is likely to be caught by a quality-control check, transposing a 23 to a 32 probably won’t be caught. Incorrectly entering 20 instead of -20 can get a whole month’s worth of useful data tossed out by the automatic checkers. That data could be salvaged by a thorough re-examination of the paper record.
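
A sketch of why a simple range check behaves this way (the climatological bounds are invented for illustration):

```python
# Sketch: a crude climatological range check. Bounds are invented here;
# real QC uses station- and month-specific statistics.

def plausible(temp_f, lo=-10, hi=55):
    """Is temp_f within the station's (assumed) February climatology?"""
    return lo <= temp_f <= hi

print(plausible(72))  # False: 27 transposed to 72 falls outside the range -> caught
print(plausible(32))  # True:  23 transposed to 32 looks perfectly plausible -> missed
print(plausible(20))  # True:  20 entered instead of -20 passes this check, but can
                      #        trip min/max consistency checks on surrounding days
```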

Now expand that to the rest of the world. I think we have done as good a job as could be expected in this country, but it is not perfect. Can we say the same about the rest of the world? I’ve seen a multitude of justifications for the adjustments made to the US data, but a lack of explanation as to why the rest of the world is adjusted half as frequently.

124 Comments
M Courtney
September 28, 2015 2:06 pm

At least human error goes both ways.
On average this can be expected to have no impact on the trend.
The adjustments to the temperature records seem more knowing.

John H. Harmon
Reply to  M Courtney
September 28, 2015 2:45 pm

M Courtney has a strong point. However, since we do not note the “+” of positives, the missing “-” will always raise the observed temperature.
If the Government can spend $3 trillion each year we can afford to fix these errors if the temperature really makes a difference for policy. I think we should save the money, since the trends are based almost exclusively on natural phenomena, not human conduct.

NZ Willy
Reply to  John H. Harmon
September 29, 2015 12:32 pm

Column 3, the “Range”, is clearly the next-line’s Maximum minus this line’s Minimum, so the data was used correctly as per the observer’s intent. The problem was the pernicious order of the columns, with “Maximum” in column 1 and “Minimum” in column 2, but the day’s minimum happens before the maximum! So observers who fill out the form sequentially would stagger the results as happened here. It would have been obvious to design this form with “Minimum” in column 1 and “Maximum” in column 2, but I don’t think user friendliness was a consideration back then.

NZ Willy
Reply to  John H. Harmon
September 29, 2015 4:30 pm

This article isn’t well researched, I regret to say. The displayed temperature record staggers the minimum and maximum because of the 8AM time of its observations (see upper right of the form). At the bottom of the form it states “See cover for instructions”, and we can see those instructions at https://archive.org/stream/1924InstructionsCoopObservers/1924%20Instructions%20Coop%20Observers_djvu.txt . There it is explained that these temperature stations had two thermometers, a mercury maximum thermometer and an alcohol minimum thermometer. The day’s observation is taken once, preferably at sunset (as the instructions helpfully suggest) so that the day’s minimum and maximum are both to hand at that time. However, the form shown in this article, being taken at 8AM, shows the overnight low from that morning but the maximum of the day before. This is why the maximum is shifted 1 day from the minimum, not because the readings were interpreted by someone, but because of the 8AM daily reading time. The first maximum was thus struck out because it was the maximum of the day before the 1st of the month.
So if this article had been researched properly, then I daresay the thrust of it would be quite different.

NOLA WX
Reply to  John H. Harmon
September 30, 2015 8:39 am

NZ Willy is correct. If they took the measurement at 8AM (not at sunset as the directions suggest) then the Maximum is associated with the previous calendar day. It sounds like that is how it was transcribed into the database. It doesn’t sound like an error. It seems Mr. Goetz misunderstands this?
“It appeared that whoever transcribed this manual record into electronic form thought that the observer intended to scratch out the first temperature and replace it with the one below, and thus shifted the maximum values up one day for the entire month.”
It seems from my reading of the instructions (linked by NZ Willy) that this is precisely what they should have done if they read at 8 AM and not at sunset.
Is it possible they shifted it up two places? As I look at the above form I don’t see any days in the entire month where a minimum is higher than the maximum recorded on the next line.
For example 26 and 35 are the minimum and maximum for Feb 6 with 35 being recorded on the line for Feb 7. 12 and 24 are the minimum and maximum for Feb 7 with 24 being recorded on the line for Feb 8.
Does Mr. Goetz mean that they offset it even more than one day in transcribing it? Did they associate a minimum of 26 on the 6th with the 24 degree maximum on the line for the 8th?
Respectfully request a further explanation from Mr. Goetz. Thanks.

Steven Mosher
Reply to  M Courtney
September 28, 2015 5:19 pm

The SST records, 70% of the world, are COOLED.
NET NET NET,, adjustments COOL THE RECORD..
use raw data
go team skeptic

Reply to  Steven Mosher
September 28, 2015 7:44 pm

Wait, where is my “splodin’ head” video clip?

Reply to  Steven Mosher
September 28, 2015 9:48 pm

Yes, the past is COOLED in order to fraudulently create a warming trend.

Scarface
Reply to  Steven Mosher
September 28, 2015 11:44 pm

Correct, the PAST is cooled. What do you think the effect will be of that? Inconvenient truth, ain’t it?

Tim Hammond
Reply to  Steven Mosher
September 29, 2015 2:52 am

Yes, because the current temperatures are supposed to be higher than the past. Go Team Alarmist.

Jason Calley
Reply to  Steven Mosher
September 29, 2015 8:14 am

Hey Steven, you say, “NET NET NET,, adjustments COOL THE RECORD.. use raw data go team skeptic”
When raw data gives a more true description of the record than does adjusted data, then absolutely, use raw data. As for the “go team skeptic”, I think you may have a misunderstanding. Cooling the record, warming the record — sceptics do not have a goal of showing no global warming. We have a goal of determining what the truth is. Showing cooling or disproving warming is not our goal — although I will say that it appears current best evidence shows that catastrophic anthropogenic global warming is not happening.

catweazle666
Reply to  Steven Mosher
September 29, 2015 5:27 pm

Oh dear, more mendacity.
Yeah, the farther you go into the PAST the more it is COOLED; the RECENT temperatures are WARMED to exaggerate the totally spurious warming trend.
And then you wonder why we don’t believe a word you post!

Joel Snider
Reply to  Steven Mosher
September 30, 2015 1:00 pm

As I understand it, Mr. Mosher, what you are referring to is the efforts to cool down certain data sets to account for UHI effect. The question is then, of course, are they cooled down enough? Or is it just enough to preserve the warming trend while still allowing strawman statements like this one?

Reply to  M Courtney
September 29, 2015 12:33 am

Of course humans introduce biases
I know in the lighthouses, there was a tendency to make up readings on bad nights to avoid the hassle of having to go outside and do the measurements.

LarryFine
Reply to  M Courtney
September 29, 2015 1:16 am

Someone once did a study of pricing errors at grocery stores, and they discovered that the vast majority of errors were in the store’s favor.
I read somewhere that the vast majority of temperature adjustments cool earlier years and warm recent years.
Food for thought.

George E. Smith
Reply to  LarryFine
September 29, 2015 1:50 pm

Quite often in grocery stores, the price per unit is higher for the large size than for the smaller size.
Safeway Stores used to sell “regular sized” toilet paper rolls for a good price, and “double” rolls at the exact same price for half as many rolls. The “double” rolls actually had only 50% more surface area, not twice as much.
And the mega rolls, at the same price for one quarter of the number of rolls and supposedly twice the size of the “double” rolls, were actually only double the size of the “standard” rolls.
So the standard size rolls were the cheapest, and the mega rolls were the most expensive, and all were purportedly the same price per unit.
Of course they eventually discontinued the “standard” size rolls, so the double rolls are now the cheapest, but more expensive than the originals were. And for good measure they switched the package color on the “best buy” product from blue to green, so as to further confuse the shopper. So finally my wife knows to now buy the green, instead of the blue which is far more expensive.
Izzat ” caveat emptor ” in coliseum langwidge ??

George E. Smith
Reply to  M Courtney
September 29, 2015 10:00 am

I see no reason why a “random walk” scenario should result in the center of mass remaining indefinitely at the same point. I seem to recall that statistical analysis shows that the average position actually moves. (somewhere I recall, Pi comes into that somehow). Of course the direction of that average translation is entirely unpredictable.
So human errors can definitely move the howling dog off of the thorn bush.
g

KTM
September 28, 2015 2:20 pm

Definitely a human fingerprint in the record… [linked image]

Dave G
Reply to  KTM
September 28, 2015 11:18 pm

Yep. Pretty much irrefutable proof of Confirmation Bias.

Hugs
Reply to  Dave G
September 29, 2015 6:34 am

On the contrary. A graph from Goddard proves nothing as such.
I’d like to go Mosher here, but I’ll save you from terse insults and such. Just reproduce it, let’s see then.

Jason Calley
Reply to  Dave G
September 29, 2015 8:16 am

“Confirmation Bias”? Could be “Funding Bias.”

KTM
Reply to  Dave G
September 29, 2015 8:44 am

Hugs, the silence is deafening from Mosher and others that routinely defend the necessity/robustness of the temperature adjustments being made. The beauty of this particular graph among the many that could be posted is that it’s completely self-evident what’s going on. These two variables should be completely unrelated, yet have near perfect correlation.
Coincidence?

catweazle666
Reply to  Dave G
September 29, 2015 5:32 pm

Hugs: “On the contrary. A graph from Goddard proves nothing as such.”
Unlike some other climate bloggers, Goddard posts all necessary references so that anyone with the necessary ability can verify his findings.
Clearly you lack that ability.

Mike Haseler
Reply to  KTM
September 29, 2015 12:35 am

And now show us the graph of CO2 versus temperature for the ice-age cycle.
And then please shut up.

ralfellis
Reply to  Mike Haseler
September 29, 2015 10:47 am

>>Show us CO2 versus temperature for the ice-age cycle.
>>And then please shut up.
Here is exhibit one m’lud – Ice Age temperature vs CO2. And as you can see, ladies and gentlemen of the court:
a. CO2 rises to a maximum. And when it hits MAX CO2, the world cools.
… Ergo, increasing CO2 concentrations cools the atmosphere.
b. CO2 reduces to a minimum. And when it hits MIN CO2, the world warms again.
… Ergo, reducing CO2 concentrations warms the atmosphere.
Ergo, CO2 must be a powerfully negative-feedback temperature regulator. 😉
http://www.brighton73.freeserve.co.uk/gw/paleo/400000yearslarge.gif

DWR54
Reply to  KTM
September 29, 2015 1:15 am

As the adjustments shown are for the US data only, they can’t really be compared against CO2, which is a global index. The correct comparison would be for global land and sea surface temperatures versus global CO2.

RWturner
Reply to  KTM
September 29, 2015 10:49 am

I love that graph! It should be front page news.

Matt G
September 28, 2015 2:35 pm

“The minimum temperature for February 6 was higher than the maximum temperature for February 7, which is an impossibility. The same was true for February 7th relative to the 8th.”
Actually this happens more commonly in winter than you think. The reason is that, in this example, the 12-hour periods for max and min temperatures don’t overlap.
An example in UK.
9.00am to 9.00pm Max 7.8 c
9.00pm to 9.00am Min 11.8 c
There is nothing wrong with the weather station recording these temperatures. This scenario occurs when, for example, the day starts cold with northerly winds that veer to strong south-westerly winds during the day. Very mild and cloudy weather pushes north from the North Atlantic ocean with a warm front, the southerly air originating all the way down near the Azores.
There are many occasions around the world when significant changes in weather patterns can result in unusual maximum and minimum temperatures on the same day. It seems to me they are doing the best they can in rejecting real data in favor of modeled estimates, for further confirmation bias.

Matt G
Reply to  Matt G
September 28, 2015 2:45 pm

Correction – Max 11.6 c; temperatures were quickly rising, so it resulted in a 0.2 c difference.

verdeviewer
Reply to  Matt G
September 28, 2015 3:04 pm

“9.00am to 9.00pm Max 7.8 c
“9.00pm to 9.00am Min 11.8 c
“There is nothing wrong with the weather station recording these temperatures.”

There are few stations in the U.S. taking temperature readings more than once per day, and those that do record both the MAX and MIN for each period. The scenario you describe would require the temperature to rise from 7.8 to 11.8 almost instantly at 9pm TOBS, and temperature remaining at or above 11.8 for the next 12 hours.

Matt G
Reply to  verdeviewer
September 28, 2015 3:09 pm

Correct, that example would mean an error and I have corrected it to 11.6 c Max.

KaiserDerden
Reply to  Matt G
September 28, 2015 3:18 pm

nice try … but fail … if at 8:59 pm the temp was 7.8 c are you saying it’s possible that 2 minutes later it was 11.8 c … because if the first time segment max temp was not at 8:59 then it must have been even lower at 9 pm (the 7.8 being the MAX for the time period) … so yes it is impossible … best case scenario: max temp at 8:59 pm of 7.8 c, and at 9:01 pm temp at 8 c and going UP for the next 12 hours … so that the min temp for the second time segment was 8 c … but a 4 degree diff in your example … that is impossible …

Matt G
Reply to  KaiserDerden
September 28, 2015 3:22 pm

Can’t edit and put a correction in, but the max was 11.6 c; 7.8 c was the 12.00 midday temperature. 4 c in seconds is impossible, but the temperature rose during the night and was around 13 c by 9.00 am.

Gary Pearse
Reply to  KaiserDerden
September 28, 2015 5:18 pm

KaiserDerden, a change of 4C in a few minutes is not impossible. On the plains of western Canada and the northern part of the western USA, “chinook” winds in winter can bring very large changes in minutes. Ranchers used to talk about riding from very cold winter temperatures into well above freezing in a few minutes.
“Chinook winds have been observed to raise winter temperature, often from below -20 °C (-4 °F) to as high as 10-20 °C (50-68 °F) for a few hours or days, then temperatures plummet to their base levels. The greatest recorded temperature change in 24 hours was caused by Chinook winds on January 15, 1972, in Loma, Montana; the temperature rose from -48 to 9 °C (-54 to 48 °F).”
If your station is in the foothills of Alberta or Montana, you can indeed get very rapid temperature changes in a very short time. If climatologists aren’t aware of these kinds of conditions, and those referred to by Matt G, then you have novices buggering up the records. The temperature shifts that are seen as ‘discontinuities’ are actually corrected automatically by an algorithm (don’t you love the term!), which gives the opportunity to shift the most recent temperatures upwards to ‘correct’ them in the case of an apparent ‘drop’ that may, in fact, be real. Yes, human errors are a fact of life, but really, it is a pretty simple thing to read and record a temperature. I think a lot of real data is getting corrected.

DD More
Reply to  KaiserDerden
September 28, 2015 7:03 pm

Gary, winds go up and down.
The reason is that the Black Hills of South Dakota are home to the world’s fastest recorded rise in temperature, a record that has held for nearly six decades.
On January 22, 1943, the northern and eastern slopes of the Black Hills were at the western edge of an Arctic airmass and under a temperature inversion. A layer of shallow Arctic air hugged the ground from Spearfish to Rapid City. At about 7:30am MST, the temperature in Spearfish was -4 degrees Fahrenheit. The chinook kicked in, and two minutes later the temperature was 45 degrees above zero. The 49 degree rise in two minutes set a world record that is still on the books. By 9:00am, the temperature had risen to 54 degrees. Suddenly, the chinook died down and the temperature tumbled back to -4 degrees. The 58 degree drop took only 27 minutes.

http://www.blackhillsweather.com/chinook.html
107 degrees of change in 1/2 hour.

David Chappell
Reply to  KaiserDerden
September 29, 2015 6:47 am

“4 c in seconds is impossible”
I have watched it happen. My village has a public time/temp digital display – the absolute temp not necessarily accurate. One evening I was sitting having a quiet beer on the waterfront when a major storm moved in. As the rain started, the temp reading plummeted 4C in less than one minute.

Jason Calley
Reply to  KaiserDerden
September 29, 2015 8:20 am

DD More, I once spoke with a man who was in Spearfish when that temperature change happened. He claimed it was so quick that several shops had their display windows break from thermal shock.

Matt G
Reply to  KaiserDerden
September 29, 2015 10:40 am

“4 c in seconds is impossible”
I have watched it happen. My village has a public time/temp digital display – the absolute temp not necessarily accurate. One evening I was sitting having a quiet beer on the waterfront when a major storm moved in. As the rain started, the temp reading plummeted 4C in less than one minute.
————————————————————–
Not surprised less than one minute, but literally this would require a 4 c rise in one second.

asybot
Reply to  Matt G
September 28, 2015 3:51 pm

@ Matt, I have seen the same here in western Canada; temps here in the winter can fluctuate very much the same way you describe, with very quick cold fronts and warm weather preceding and then following them. We try as best we can to take our obs at 7 am and 7 pm. We also do regular “reality checks” with other stations in the area to eliminate errors.

George E. Smith
Reply to  asybot
September 29, 2015 2:01 pm

Of course, if there are winds, it is possible that the current air mass in a given location will get replaced by a different, and maybe hotter, air mass from somewhere else.
A 30 MPH wind can move an air mass as much as a mile in just two minutes, and as we all know, some air masses can be a lot smaller than a mile; for example, a tornado funnel.
So has any place actually seen the Temperature of the LOCAL AIR MASS increase by 49 deg. F in two minutes or cool down 58 deg. F in 27 minutes, with NO winds ??
I doubt that

Robert B
Reply to  Matt G
September 28, 2015 6:17 pm

Mildura Australia Feb 2012. Minimum 27th, 20.9°C. Max 28th, 18.9°C
The minimum on the 28th was 17.6°C. Not that rare, as there can be a lot of cloud cover keeping minimums high before a cold front moves through. Something that cities like Melbourne experience often. I’m sure it’s had minimums before 9am higher than maximums after 9am.

Hugs
Reply to  Robert B
September 29, 2015 6:39 am

That is interesting!

Nick Stokes
September 28, 2015 2:39 pm

” but a lack of explanation as to why the rest of the world is adjusted half as frequently”
There is a very simple explanation – TOBS. The US had mostly volunteer observers, with their own opinions about when they should check the thermometer. ROW had employees, who observed at prescribed times.
I can’t see the point of this article. There are millions of B91 forms, and I’m sure you’ll find mistakes. So what to do? Throw out the lot?

KTM
Reply to  Nick Stokes
September 28, 2015 3:19 pm

This form says right at the top that the TOB was 8 AM. Yet a full 0.5C TOB adjustment was still applied to the station by the model. How about we throw out the lot of misapplied TOB adjustments?

Nick Stokes
Reply to  KTM
September 28, 2015 3:41 pm

Yes. All data is TOBS-adjusted to match current practice for that station, which is presumably not 8am. MMTS of course does not have a min/max thermometer, but the daily average is calculated as if TOB were midnight.

KTM
Reply to  KTM
September 28, 2015 4:59 pm

http://www.skepticalscience.com/understanding-tobs-bias.html
According to Zeke, 8-9 AM should have almost zero impact on average temperatures (relative to midnight).
And if there is a small effect, it should give a negative adjustment, not a positive one.

Reply to  KTM
September 28, 2015 7:49 pm

Does not MMTS stand for Minimum Maximum Temperature System?

KTM
Reply to  KTM
September 28, 2015 9:40 pm

You looked at morning to midnight TOBS adjustments before and found them to be small.
http://moyhu.blogspot.com.au/2014/05/the-necessity-of-tobs.html
This station’s TOBS adjustment of +0.5C far exceeds any of the others in the group of 190 stations you looked at before.
Have you forgotten this analysis? It seems that this large of a TOBS adjustment should have set off some alarm bells for someone that has studied it before.

Nick Stokes
Reply to  KTM
September 28, 2015 10:16 pm

“It seems that this large of a TOBS adjustment “
Yes, it is quite large for 7am (the 1941 time) to midnight. So the time adjusted to may not be midnight. Incidentally, didn’t people here say that adjustments were done to cool the past?
“MMTS stand for Minimum Maximum Temperature System”
Yes, but it is usually used for the thermistor version.

KaiserDerden
Reply to  Nick Stokes
September 28, 2015 3:20 pm

the point is to show that the supposed data that drives billions of dollars of government spending on green schemes is not valid or rigorous … you have mixed dog poop with vanilla ice cream … don’t serve that to me and call it dessert … 🙂

Reply to  KaiserDerden
September 28, 2015 4:27 pm

Thank you for ruining ice cream for me for the next umpteen gazillion years.

Reply to  Nick Stokes
September 28, 2015 3:27 pm

Spend some of the $millions being spent on propaganda to do a more careful transcription process?

Zeke Hausfather
Reply to  Nick Stokes
September 28, 2015 3:29 pm

Two reasons really. TOBs is one; the other is network density. The U.S. has ~7,000 co-op stations to use in pairwise homogenization. The rest of the world (with a few exceptions) has a much less dense network of stations. The fewer nearby neighbors you have beyond a certain point, the less likely you are to detect local breakpoints like station moves or instrument changes, especially if the effect is relatively modest.

Gary Pearse
Reply to  Zeke Hausfather
September 28, 2015 5:22 pm

See comment: http://wattsupwiththat.com/2015/09/28/human-error-in-the-surface-temperature-record/#comment-2037131
It is certain that some quick temperature changes that are real have been corrected using an automatic algorithm to find it and change it. (Chinooks, cold and warm fronts in winter/spring/fall.)

Reply to  Zeke Hausfather
September 29, 2015 9:01 pm

Surely temperature station moves will, on average, have no impact on the temperature trend, though. We need to focus on adjustments that are likely to have an impact and I’m not sure TOBs has a big one either due to human nature interfering with “standard” reading times.
Put it this way: if policy is to read at 8am but twice a week it’s read late in the morning or even in the evening (say on the weekends), then the actual TOBs adjustment could be less than half of what is applied by that policy-based assumption.
The TOBs adjustment is an extraordinarily large adjustment and hence needs an extraordinarily large justification and it frankly just doesn’t have it. If anything it simply increases the error range of our readings.

Mike the Morlock
Reply to  Nick Stokes
September 28, 2015 4:21 pm

“So what to do? Throw out the lot?” Nick, sigh, for the purpose of climate models yes. They are simply not good enough. They are good for historical reference, only that. Trying to use them leads to all types of corrections and modifications based on what the individual at the time feels it should be.
michael

MarkW
Reply to  Mike the Morlock
September 28, 2015 4:35 pm

The only thing worse than no data, is bad data.
If the data is corrupted, throw it out.

Nick Stokes
Reply to  Mike the Morlock
September 28, 2015 5:38 pm

“Nick, sigh, for the purpose of climate models yes. “
They aren’t used for climate models.

Mike the Morlock
Reply to  Mike the Morlock
September 28, 2015 8:33 pm

Nick thanks for the correction.
michael

MarkW
Reply to  Nick Stokes
September 28, 2015 4:34 pm

Fascinating how the warmist assumes that if you are employed by the govt you instantly become more reliable and trustworthy.

Steven Mosher
Reply to  Nick Stokes
September 28, 2015 5:17 pm

Phil Jones had an interesting approach. By looking at manual records they calculated an error rate
and then added that to the uncertainty.
It’s tiny tiny tiny.

catweazle666
Reply to  Steven Mosher
September 29, 2015 5:37 pm

“Phil jones had an interesting approach.”
That’ll be Phil “I’ll destroy all my data before I let anyone outside the Hockey Team inspect it” Jones of UEA CRU, will it?
I’m impressed – not.

Michael Jankowski
Reply to  Nick Stokes
September 28, 2015 5:18 pm

How about accounting for and fixing the errors that are found through a quick QA/QC check like John just did?
Are you really this obtuse?

Nick Stokes
Reply to  John Goetz
September 28, 2015 5:47 pm

“That begs the question: were the ROW employees all making observations at the same time that the TOBs adjustment is targeting? “
They were consistent over time. There is no one right time. TOBS change is like a change of instrument. The instrument before and after may be good, but you may still need to adjust for calibration for a consistent record. Reading at 9AM, 5PM, doesn’t matter as long as you stick to it. If you change, the record needs adjusting for consistency.

Reply to  John Goetz
September 30, 2015 1:31 am

Nick writes “Reading at 9AM, 5PM, doesn’t matter as long as you stick to it.”
Actually you’re better off randomly reading at either 9am or 5pm so that the errors cancel out. That way, from a long-term trend perspective, you don’t need to adjust the data at all.

Michael Jankowski
Reply to  Nick Stokes
September 28, 2015 5:36 pm

“There is a very simple explanation – TOBS. The US had mostly volunteer observers, with their own opinions about when they should check the thermometer. ROW had employees, who observed at prescribed times.”
Absolutely amazing. The entire world took their thermometer readings almost in unison, and folks in the U.S. just selected the time to observe theirs on their own personal whims.
Got any more BS stories to share?

Reply to  Nick Stokes
September 30, 2015 1:19 am

Nick writes “So what to do? Throw out the lot?”
No. Increase the error estimates.

September 28, 2015 2:49 pm

Actually looking at the piece of paper: “Adjustments you can believe in!”

September 28, 2015 2:51 pm

All very interesting, I worked 10 months at Tolk station outside Muleshoe (stayed in Clovis), but in the overall big picture why does it matter?
As I understand it the basic premise of the CAGW crowd is that increasing concentrations of atmospheric CO2 disrupt the “natural” atmospheric heat balance and the only way to restore that “natural” balance is by radiating that unbalanced heat back to space per the S-B relationship, i.e. increasing the surface temperature. BTW, the atmosphere is not, as some postulate, a closed system. That assumption simplifies calculations, but ignores reality.
One, there is no such thing as the “natural” heat balance. As abundantly evident from both paleo and contemporary records the atmospheric heat balance has always been and continues to be in constant turmoil w/o regard to the pitiful 2 W/m^2 of industrial CO2 added between 1750 and 2011. Fluctuations in incoming and outgoing radiation, changing albedo from clouds and ice, cosmic rays, 10 +/- W/m^2 range of solar insolation from perigee to apogee, etc. refute that notion of a closed system.
Two, radiation is far from the only source of rebalancing the “natural” heat balance. Water cools the surroundings when it evaporates and warms the surroundings when it condenses. The water vapor cycle, clouds, precipitation, etc., a subject of which IPCC AR5 admits to having a poor understanding, modulates and moderates the atmospheric heat balance and has done so for millions of years, all without the help or hindrance of industrialized man. The atmospheric water cycle is just one huge global atmospheric swamp cooler for the earth. Other planets don’t have that. The popular GHE considers radiation only and excludes water vapor. Large commercial greenhouses typically have a wall full of evaporative cooler pads, water & fans.
CAGW has zip to do with science and everything to do with a hazy, starry eyed, utopian, anti-fossil fuel (90% anti-coal) agenda bereft of facts & reality.

Tony
September 28, 2015 2:52 pm

“The time of observation estimate was 1.41C”
“replacing the value with an estimate of 0.12C ”
“Homogenization estimated the temperature at 2.65C”
You will notice that all data are recorded to the nearest whole degree. Estimates to 0.01 degree are meaningless. You cannot exceed measurement accuracy when data are non-homogeneous.

Reply to  Tony
September 29, 2015 2:34 am

I was thinking that too. If the observation is a whole Fahrenheit number then it’s essentially n +/- 0.5 to account for rounding up or down. So in reality there’s a 0.5degF error bar straight from the thermometer reading, with no way of knowing whether those errors cancel out over time or not.
Once we factor in siting, equipment defects and TOBS I’d be surprised if we know the actual Tmax/Tmin to within +/- 1degF. Then there’s all that homogenisation, averaging and adjustment.
If we’re honest, we should just say that we need to include at least a +/-1degC error bar. Our best estimates would then indicate that the temperature today is the same as it’s been for the past 150 years or more. We can then close down the AGW industry!

cerescokid
September 28, 2015 3:04 pm

Anyone who has worked in a large organization knows that, despite the competence and best intentions of all involved, errors happen. Over the last few hundred years, covering the entire globe, how can there not be questions about the validity of the temperature records? This is not intended to denigrate any individuals or systems. It is just that s..t happens. The more records, the more chances of s..t.

KaiserDerden
Reply to  cerescokid
September 28, 2015 3:22 pm

which is why error bars can be useful …

Reply to  KaiserDerden
September 29, 2015 4:47 am

I agree, but even their accuracy should be suspect. This is a monumental job, and human nature leads us to believe we can do more than we can do and know more than we can know. I see the fingerprint of hubris all over climate science.

September 28, 2015 3:09 pm

“but a lack of explanation as to why the rest of the world is adjusted half as frequently.”
For later. As needed.

KTM
September 28, 2015 3:13 pm

According to Zeke’s hand-waving about TOBS, most stations collected data late in the afternoon in 1929, so they had to apply a 0.3 C adjustment to the entire record.
http://www.skepticalscience.com/understanding-tobs-bias.html
Yet here we have a station that collected temperature at 8 AM, and still had a ~0.5C TOB adjustment applied for the month.
I hope Zeke will weigh in on why a 0.5C TOB adjustment is needed for this station that collected temps at 8 AM in 1929.

DD More
Reply to  KTM
September 28, 2015 3:53 pm

About what Zeke says.
Climate Etc. – Understanding adjustments to temperature data
by Zeke Hausfather: “All of these changes introduce (non-random) systemic biases into the network. For example, MMTS sensors tend to read maximum daily temperatures about 0.5 C colder than LiG thermometers at the same location.”

http://judithcurry.com/2014/07/07/understanding-adjustments-to-temperature-data/
What He measured
Interviewed was meteorologist Klaus Hager. He was active in meteorology for 44 years and has now been a lecturer at the University of Augsburg for almost 10 years. He is considered an expert in weather instrumentation and measurement. One reason for the perceived warming, Hager says, is traced back to a change in measurement instrumentation. He says glass thermometers were replaced by much more sensitive electronic instruments in 1995. Hager tells the SZ: “For eight years I conducted parallel measurements at Lechfeld. The result was that compared to the glass thermometers, the electronic thermometers showed on average a temperature that was 0.9°C warmer. Thus we are comparing – even though we are measuring the temperature here – apples and oranges. No one is told that.”
Hager confirms to the AZ that the higher temperatures are indeed an artifact of the new instruments.

http://notrickszone.com/2015/01/12/university-of-augsburg-44-year-veteran-meteorologist-calls-climate-protection-ridiculous-a-deception/
http://wattsupwiththat.com/2015/03/06/can-adjustments-right-a-wrong/
I could nowhere find the effect on the monthly data you are posting here. Zeke ‘says’:
At first glance, it would seem that the time of observation wouldn’t matter at all. After all, the instrument is recording the minimum and maximum temperatures for a 24-hour period no matter what time of day you reset it. The reason that it matters, however, is that depending on the time of observation you will end up occasionally double counting either high or low days more than you should. For example, say that today is unusually warm, and that the temperature drops, say, 10 degrees F tomorrow. If you observe the temperature at 5 PM and reset the instrument, the temperature at 5:01 PM might be higher than any readings during the next day, but would still end up being counted as the high of the next day. Similarly, if you observe the temperature in the early morning, you end up occasionally double counting low temperatures. If you keep the time of observation constant over time, this won’t make any difference to the long-term station trends. If you change the observations times from afternoons to mornings, as occurred in the U.S., you change from occasionally double counting highs to occasionally double counting lows, resulting in a measurable bias.
So Zeke, where is the double counting of high or low temperatures in this month’s data and did you get the correct sign +/- on your MMTS?
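
For readers who want to see the double-counting mechanism Zeke describes in action, here is a minimal simulation sketch with synthetic hourly temperatures; the numbers are illustrative only, not any station’s record.

```python
# Sketch: simulate min/max readings with different reset (observation) times.
# Synthetic hourly temperatures: a diurnal cycle peaking ~3 PM plus random
# day-to-day weather swings. Illustrative only.
import math
import random

random.seed(1)
DAYS, AMP = 3650, 8.0
base = [random.gauss(10, 5) for _ in range(DAYS)]          # daily weather
hourly = [base[d] + AMP * math.cos(2 * math.pi * (h - 15) / 24)
          for d in range(DAYS) for h in range(24)]

def mean_minmax(series, reset_hour):
    """Mean of (max+min)/2 over 24-hour windows ending at reset_hour."""
    means, start = [], reset_hour
    while start + 24 <= len(series):
        window = series[start:start + 24]
        means.append((max(window) + min(window)) / 2)
        start += 24
    return sum(means) / len(means)

for h, label in [(0, "midnight"), (7, "7 AM"), (17, "5 PM")]:
    print(f"reset at {label}: {mean_minmax(hourly, h):.2f} C")
# Afternoon resets occasionally double-count warm afternoons (warm bias);
# morning resets occasionally double-count cold mornings (cool bias).
```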

Steven Mosher
Reply to  KTM
September 28, 2015 5:15 pm

The adjustment is different for different parts of the US and different seasons.
If you want proof go to the skeptics site which first audited this.
John Daly
The skeptics proved that TOB was real, that it differed by location, and that you needed to correct for it

KTM
Reply to  Steven Mosher
September 28, 2015 9:35 pm

I looked up the John Daly site.
http://www.john-daly.com/tob/TOBSUMC.HTM
This analysis by Nick of the 190 stations used there shows the effect of going from TOBS of afternoon to morning, afternoon to midnight, or morning to midnight.
http://moyhu.blogspot.com.au/2014/05/the-necessity-of-tobs.html
http://www.moyhu.org.s3.amazonaws.com/misc/ushcn/tobs3.png
The middle peak of the adjustment needed for a morning to midnight switch is at only ~+.09. There were no morning to midnight stations among those 190 that required a TOBS adjustment of even +0.4, yet here we have one of +0.5.
Don’t you think it’s at all strange that this random station plucked out of the ether happens to get a TOBS adjustment that far surpasses any other in the study you quoted?

Michael Jankowski
Reply to  Steven Mosher
September 29, 2015 5:16 pm

Wow, it took John Daly to discover this issue and bring awareness to it? Unbelievable.
I guess I should’ve known that. He’s just given sooooo much credit for it (cough, cough).

KTM
September 28, 2015 3:16 pm

Zeke has said elsewhere that because most stations in the US collected data late in the afternoon, a 0.3C TOB adjustment must be applied to the entire temperature record.
Yet here we have a station in 1929 that collected data at 8 AM, and a ~0.5C TOB adjustment was still applied to it.
Hopefully Zeke will weigh in on why such a large TOB adjustment was needed for this station collecting data at 8 AM.

Reply to  KTM
September 28, 2015 5:12 pm

TOB adjustments are GEOGRAPHICALLY SPECIFIC.
depending on the location and season you get different adjustments.

gnomish
September 28, 2015 3:18 pm

and always bearing in mind that an average temperature is as meaningful as an average phone number…
and always bearing in mind that a bull who chases the cape gets the sword no matter how fiercely he attacks it…
after it’s all been examined thoroughly, don’t forget to flush.

Steven Mosher
September 28, 2015 3:36 pm

“I have looked through B-91 forms at other stations where no such corrections or notations were made. Some of those stations were located at people’s homes. Is it reasonable to believe that the observers never missed a 7 AM observation for any reason, such as a holiday or vacation, for years on end? That they always wrote their observation down the first time correctly?”
As I have explained to folks many times, there is no such thing as “raw” data. There is a purported first report.
Filled with errors.
The US is probably the worst in the world when it comes to these things because we relied on volunteers.
That’s why when you run code to find mistakes and correct them you find the following:
the US record is one of the worst from the standpoint of inhomogeneities. That’s why you learn very little
about adjustments by studying it. You’re basically studying the outlier.
Sad but true.
The typical US skeptic assumes the US is best, so problems elsewhere MUST be worse. Well, not so.

Steven Mosher
Reply to  John Goetz
September 28, 2015 5:10 pm

You are assuming that the observer didn’t make a mistake in correcting the record to begin with.

Mike the Morlock
Reply to  Steven Mosher
September 29, 2015 12:20 am

Steven Mosher
The US is probably the worst in the world when it comes to these things because we relied on volunteers.
Well, it’s nice to know your thoughts on volunteers.
Myself, I’ll take volunteers 24/7; they are doing it out of a sense of responsibility, a higher calling if you will. If you can’t see that and have to malign them, then perhaps you should be in another occupation.
I’m normally not like this.
michael

catweazle666
Reply to  Steven Mosher
September 29, 2015 5:42 pm

Steven Mosher: “The US is probably the worst in the world when it comes to these things because we relied on volunteers.”
Ye gods…
You really are a piece of work!

emsnews
September 28, 2015 3:49 pm

A daily temperature record is a very recent innovation.
To understand the past we use various tools like ‘what grew here 1,000 years ago?’ Various clues are examined. The biggest factor remains that Ice Age clues are many and very strong and point to this being an Ice Age era with very short interglacials.
We have various clues in rocks, fossils, glacier scouring marks on rocks, etc. to philosophize about the probable past. Ditto with warm weather clues in the past.
The quibbling over very minor temperature changes when we have an actual thermometer data record to examine is rather silly when we consider that all of this is over very tiny changes in temperature compared to any Ice Age plunge.

Mike T
September 28, 2015 4:05 pm

The time of obs shouldn’t matter with maximum and minimum values in theory. They are taken on thermometers which record the respective parameters, which are then reset for the next period, usually 24 hours. I have a couple of issues with the way such readings are handled. In Australia, the reset time is 9am (which in GMT varies in the months daylight saving time commences and ceases) so the maximum temperature is recorded for the previous day at 0900 — no matter that a hot wind has sprung up and it’s 5C hotter than the maximum the previous day. The max is also read at 1500, but in summer the max can be higher at 1800 than it had been at 1500.
The introduction of electronic probes means that this practice is anachronistic, as a midnight-to-midnight max is easily obtained (and min, obviously), but the practice of recording the “daily max” at 0900 continues to keep electronic records in line with manual records. One issue with the electronic probes is that they are more sensitive than mercurial max thermometers, often reading 0.5 degrees Celsius higher than the mercurial max in the same screen. To my mind this should point to older maxima being adjusted upwards, if they are adjusted at all. Then again, the electronic probes often give a slightly lower minimum temp, although the minimum thermometers used in Australia could give incorrect minima during windy periods — the marker being subject to “shaking down”. Before the advent of electronic probes, suspect minima could only be noted in the Obs book (especially as professional observers would have been taking at least hourly temp readings for aviation).

Reply to  Mike T
September 28, 2015 5:09 pm

“The time of obs shouldn’t matter with maximum and minimum values in theory.”
In reality it does matter. Even for satellites.

Reply to  Mike T
September 29, 2015 2:44 am

I think that electronic thermometers can react more quickly to temperature and therefore will always give higher highs and lower lows than mercury thermometers, since the latter have built-in averaging due to thermal inertia. In science labs we were always told to wait for the mercury to stabilise before reading, to allow for that inertia — glass isn’t a brilliant conductor!

Gunga Din
September 28, 2015 4:15 pm

The more I’ve paid attention to all this (yes, I’m just a layman) the more I’m convinced that trying to get a “Global Temperature” from past records, and even present surface station records, is trying to, what’s the phrase, “make silk out of a sow’s ear”.
Computers are powerful tools. But no matter what program they run, sound as it may seem to be, the foundation, the data, of the model produced is faulty.
The House of caGW is built on sand.
Satellites are the only truly global measurements we have.
True, calibrations are involved. But is a “calibration” an “adjustment”?
Only if the goal is to reach a desired conclusion rather than accuracy.

Steven Mosher
Reply to  Gunga Din
September 28, 2015 4:44 pm

calibrations, 9 different instruments, corrections, approximations, models, and gcm corrections,
accuracy is fine for the policy decisions that Obama’s making

Michael Jankowski
Reply to  Steven Mosher
September 28, 2015 5:24 pm

True. Obama’s policy decisions will accurately reduce global warming by an infinitesimal amount while costing us a fudged and yet still very high dollar amount. Rejoice.

Pamela Gray
Reply to  Steven Mosher
September 28, 2015 6:33 pm

Then take the damn taxes out of your pocket book and leave mine the hell alone.

Mike the Morlock
Reply to  Steven Mosher
September 28, 2015 10:23 pm

Steven Mosher: “accuracy is fine for the policy decisions that Obama’s making”
But will the same hold true for a President Rubio? Or Trump?
michael

ferdberple
Reply to  Steven Mosher
September 29, 2015 12:38 am

accuracy is fine for the policy decisions that Obama’s making
=====================
Goldman Sachs made the policy decision. Obama’s job is to sell it.

Charles samuels
September 28, 2015 4:29 pm

Is it reasonable to think these observers never missed an observation? Of course not, but it is much worse than that. The coop station data is bad, but the First Order Stations are not far behind. Prior to 1961, mercury and alcohol thermometers were used in Weather Bureau stations, and these thermometers came with correction cards, which were promptly thrown in a drawer and never looked at again. In the early 60s they started using a Rube Goldberg device that transmitted a timed pulse to an indicator that had a 1/2 degree variation, and the needle was 1/2 degree wide. Just because you can mathematically arrive at an accuracy of 0.1 degrees does not make it true.

NW sage
September 28, 2015 5:17 pm

Recording errors, if they are not immediately corrected, can only be accounted for by increasing the error bar of the data. The big trick is determining just how much more ‘unknowable’ the data is because of recording errors. The +/- variance of the various types and designs of thermometers can be estimated, but trying to determine the additional unknown caused by mistakes is a real but difficult problem.
Trying to make use of data where 0.1 degree is a vital difference, when the real unknown of comparison data collected 300 years ago is plus or minus 2 degrees, is futile and misleading. Everyone agrees that there must have been an average temp 300 yrs ago, but trying to figure out what it was to a tenth of a degree is usually futile.

Pamela Gray
September 28, 2015 6:29 pm

I think Mosher hit the nail on the head. We should throw them out, though that will never happen. This is the solar data times 10,000. And there is NO money to grant for systematically combing through hard copies for these kinds of mistakes. Instead, there is PLENTY of incentive NOT to.
Frankly, I don’t even think God can manage to throw these records out. Heck, God couldn’t manage to throw the serpent out of his garden and instead left the damn thing in there till he had to throw the humans out. What chance do we have of these records getting cleaned up or thrown out, let alone the researchers who are benefiting from the errors?

Reply to  Pamela Gray
September 29, 2015 3:30 pm

Heck, God couldn’t manage to throw the serpent out of his garden and instead left the damn thing in there till he had to throw the humans out.

😎
Actually, it was Adam’s job to do that. It was delegated to him.
15 And the LORD God took the man, and put him into the garden of Eden to dress it and to keep it. (Gen 2:15 KJV)
The word “keep” is also used in the sense of “guard”. He had been given the authority and the power (Created “in the image of God”, spirit. (see John 4:24)) to keep Lucifer’s influence out. He lost both when he chose to learn about evil.
We’ve been stuck with the serpent and his snakes running the local show (for now) ever since.
Freedom of will is a big deal to God. He has the “raw power” to overturn it. But He also has the “raw love” and wisdom and justice to allow people to choose His solution, even though we don’t deserve it. He is true to Himself.
Maybe you and others here may think I’m just a nut (maybe a likable nut?).
I got this and more here http://sunriseswansong.wordpress.com/2013/07/11/attention-surplus-disorder-part-two/ .
Please don’t respond. Caleb has allowed this to remain but he “doesn’t have a dog in this fight”.
And here, there is the potential for a massive “derail”.
Just read and consider.

Steve in Seattle
September 28, 2015 7:22 pm

Hi Mr. Goetz, could you please provide a brief bio on yourself – I would like to excerpt some of your posts here with your permission, and I apologize, I don’t know who you are or your employer.
Thanks

Juan Slayton
September 28, 2015 9:07 pm

I’ve made a couple of trips to Muleshoe since 2010 to try to track down the station’s history. Typically for such stations, it’s been located all over town. Posted my stuff, including pictures, in the photo gallery. So my first reaction when reading this post was to review what I’d found to see if there was anything relevant to Mr. Goetz’s subject. (Probably not, but wouldn’t hurt to look.) Being out of California at the moment, I don’t have access to my original materials, but I could look online, if the gallery were back in service. Just sayin’….

Dudley Horscroft
September 28, 2015 9:27 pm

emsnews September 28, 2015 at 3:49 pm said
“A daily temperature record is a very recent innovation.
To understand the past we use various tools like ‘what grew here 1,000 years ago?’ Various clues are examined. The biggest factor remains that Ice Age clues are many and very strong and point to this being an Ice Age era with very short interglacials.
We have various clues in rocks, fossils, glacier scouring marks on rocks, etc. to philosophize about the probable past. Ditto with warm weather clues in the past.
The quibbling over very minor temperature changes when we have an actual thermometer data record to examine is rather silly when we consider that all of this is over very tiny changes in temperature compared to any Ice Age plunge.”
There is no problem with the large temperature changes from the Roman and Mediaeval Warms to the cold spells between them. The problem is that small temperature changes recorded by imperfect thermometers, read by imperfect observers, transcribed by imperfect clerks and used in imperfect models by so-called “climate scientists” can be used by those with an axe to grind to promote bad policies. And we suffer from it.
If what Mike T has said: “One issue with the electronic probes is that they are more sensitive than mercurial max thermometers, often reading 0.5 degrees Celcius higher than the mercurial max in the same screen. To my mind this should point to older maxima being adjusted upwards, if they are adjusted at all. ” is correct, then this means that the alleged 0.8K rise in global mean temp is really not much more than a real 0.3K rise in gmt. So effectively there is negligible rise in gmt due to the near doubling in atmospheric CO2 over the last 150 years.
This should be dinned into the brains of all politicians BEFORE the Paris conference, not after.

mairon62
September 28, 2015 10:15 pm

This sentence from paragraph 5 of the article above, “The minimum temperature for February 6 was higher than the maximum temperature for February 7, which is an impossibility.” — is this an erroneous assumption? It’s utterly common for the following day’s max to be lower than the daily min from the previous day when a cold front moves in. Or is the author referring to a time-of-observation problem, where the high max temp observed by the observer on the 7th is actually the high max temp that registered on the thermometer on the 6th? In that case, yes, a max could not be less than the min for the same day. Help! Is the above quoted sentence problematic, or not?

Ray Boorman
Reply to  mairon62
September 29, 2015 12:31 am

Yes, I think that statement is bs, but I am not a climate scientist, so my opinion is also bs in their eyes.

September 28, 2015 10:50 pm

In Australia, and apart from questions about the accuracy of homogenisation of the raw dataset, I believe there are still unresolved questions about the impact of metrication in 1972. Almost half the country’s observer-written records before 1972 were rounded to whole degrees Fahrenheit, and the proportion immediately dropped to between 10% and 20% at all stations when the new Celsius thermometers and observer practices were introduced. Many western countries converted from Fahrenheit to Celsius in the 1960s and 1970s, just when AGW supposedly became pronounced. Many people claim the rounding was equally up and down, but I disagree, on the basis that it would be human nature for a greater proportion to be truncated down. The BoM detected a 0.1 C artificial warming at the time of metrication but decided not to adjust for it. Australia’s rounded temps are dissected at http://www.waclimate.net/round/rounded-australia.html
As for the accuracy of the homogenised or corrected ACORN dataset, its hottest-ever day in Australia was at Albany on the south-west coast on 8 February 1933, with a max of 51.2 C, even though that day is confirmed to have had a top temp of 44.8 C in the raw record. The single-day error increases the ACORN monthly average for Albany in February 1933 from 28.58 C to 28.81 C, and the incorrect adjustment has been recognised but left uncorrected for several years.
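A minimal sketch of the truncation effect described above, assuming, as the commenter suggests, that observers dropped the fractional part of a reading rather than rounding to the nearest degree. The temperatures and their uniform spread are hypothetical:

```python
import random

# Hypothetical illustration (not the waclimate analysis itself):
# compare the mean of exact readings against whole-degree rounding
# and whole-degree truncation.
random.seed(1)
true_f = [60 + 20 * random.random() for _ in range(100_000)]  # made-up temps (F)

actual    = sum(true_f) / len(true_f)
rounded   = sum(round(t) for t in true_f) / len(true_f)   # nearest whole degree
truncated = sum(int(t)   for t in true_f) / len(true_f)   # fraction always dropped

print(f"true mean:      {actual:.3f} F")
print(f"rounded mean:   {rounded:.3f} F")    # essentially unbiased
print(f"truncated mean: {truncated:.3f} F")  # about 0.5 F (~0.28 C) low
```

If truncation were common before 1972, the recorded means would sit roughly half a degree Fahrenheit low, and a switch to finer Celsius readings would then appear as a small artificial warming step.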

Editor
September 29, 2015 7:53 am

Each recorded temperature is more properly recorded as, for example, 31° +/- 0.5°. This is a human reporting an analog scale thermometer, and recording the number only in whole units. The 31° recorded would be the same for any temperature from 30.5° — it is a recording of a range, just under 1° wide.
There is no way to remove this known original measurement error; all results must then be recorded with the same error notation, and all results are also a range, 1° wide. Thus the monthly average should be noted as 1.41° +/- 0.5° (the range 0.91° to 1.91°). There is no valid method of determining where in the range the actual value should be placed.
In the surface station I investigated personally, Santo Domingo, the Chief Meteorologist explained how the “shorter than average” Dominican males who recorded the daily temperatures were instructed to stand on the concrete block kindly provided, so that their eyes would be level with the thermometer, at the right angle, to record the temperature correctly. He noted that the shorter men were especially prideful and would not stand on the block, and thus always recorded temperatures from a low angle. It is uncertain how much this simple cultural factor influenced the readings from this station.
It is interesting how much the automatic TOB adjustment changes the actual reading: “The unadjusted average reflected in the electronic record was 0.8C whereas the paper record was 0.24C, just over half a degree cooler. The time of observation estimate was 1.41C.”
I would like to see this blanket TOB adjustment researched from scratch, against modern digital hourly data, to see whether the adjustment used in all the databases is actually valid.
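A hedged sketch of the kind of from-scratch check suggested here, using simulated rather than real hourly data: generate a year of hourly temperatures with a diurnal cycle and day-to-day weather, then compare the mean of (max+min)/2 daily values for a midnight observer against a 5 pm observer, letting the thermometer's reset-time value carry into each new window. Everything in it (the sine-wave cycle, the AR(1) weather term, the station behaviour) is an assumption for illustration only, not the adjustment actually used in any database:

```python
import math
import random

# Simulate a year of hourly temperatures: a diurnal sine wave that
# peaks mid-afternoon, riding on a slowly varying daily base.
random.seed(0)
temps, base = [], 15.0
for h in range(24 * 365):
    if h % 24 == 0:                                   # new day: AR(1) weather drift
        base = 15 + 0.8 * (base - 15) + random.gauss(0, 3)
    temps.append(base + 10 * math.sin(2 * math.pi * ((h % 24) - 9) / 24))

def mean_daily_mean(series, obs_hour):
    """Mean of (max+min)/2 over 24-hour windows ending at obs_hour,
    including the value standing on the thermometer at reset."""
    days = []
    for start in range(obs_hour + 24, len(series) - 24, 24):
        window = series[start:start + 24]
        carry = series[start - 1]                     # reset-time temperature
        days.append((max(window + [carry]) + min(window + [carry])) / 2)
    return sum(days) / len(days)

print("midnight observer:", round(mean_daily_mean(temps, 0), 2))
print("5 pm observer:    ", round(mean_daily_mean(temps, 17), 2))
# The gap between the two readers is an empirical estimate of the
# bias a TOB adjustment would need to remove, for this toy climate.
```

With real hourly data from modern stations substituted for the simulated series, the same comparison would show directly how large the observation-time effect is at each site.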

Editor
Reply to  Kip Hansen
September 29, 2015 7:55 am

… “would be the same for any temperature from anything over 30.5° to anything under 31.5°” …
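A minimal sketch of the interval arithmetic Kip Hansen describes, using the corrected range above (a recorded 31° stands for anything over 30.5° and under 31.5°); the readings themselves are hypothetical. Averaging the intervals preserves the full +/- 0.5° width:

```python
# Hypothetical whole-degree observations; each one stands for a
# range of width just under 1 degree centered on the recorded value.
readings = [31, 28, 33, 30]

mid = sum(readings) / len(readings)
lo  = sum(r - 0.5 for r in readings) / len(readings)  # all true values at the bottom
hi  = sum(r + 0.5 for r in readings) / len(readings)  # all true values at the top

print(f"monthly mean: {mid:.2f} +/- 0.5 (range {lo:.2f} to {hi:.2f})")
```

The low and high cases land exactly 0.5° either side of the nominal mean, which is the point being made: the average of ranges is itself a range, and nothing in the arithmetic says where inside it the true value sits.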

September 29, 2015 8:43 am

This is an essentially pointless exercise. Clearly the signal-to-noise ratios of historical temperature records are inadequate to discover if temperatures worldwide or even in the USA have changed significantly, much less whether any such change was due to human actions. Error after error at every stage in multiple processes renders the “data,” raw or otherwise, unfit for purpose. Those who report historical temperatures and compare them to present temperatures, who FAIL to report the standard error of historical temperatures, are simply misleading the public, know it, and should stop. BEST practice is somehow not entirely the truth. Indeed silk purses are not made from sow’s ears…

September 29, 2015 9:00 am

Speaking from the perspective of someone who collects reams of field data in the private sector, this discussion is mind-boggling. The way these data are so carelessly treated through adjustment, homogenization, infilling and nonsensical error estimation, you would think that the data serve no real purpose. To think that trillions of dollars’ worth of ramifications hang on such flimsy quality-control procedures makes my mind spin.
In the private-sector world of real consequences, there would be no estimation, no infilling, no homogenization. Good stations with good data would be selected in various parts of the world, and stations with discontinuities or discrepancies would be dropped. Period. If I collect some data in the field with a 3DCQ (3D coordinate quality) greater than the client’s accepted limit, I don’t get to say, like so many apologists here, “What do you want me to do? Throw it out? It’s the best I have!” I don’t have the option to discard the data-quality rules and estimate the data location. Instead my data gets tossed because it isn’t accurate enough for the purpose of the client. Period.
I’m left concluding that only someone whose work has no consequences could imagine that these data are accurate enough for the purpose they are being used for. Why else is this so obvious to everyone who works in the real world and so hard to understand for academics and bureaucrats?

richard verney
Reply to  Dave in Canmore
September 29, 2015 10:39 am

I have made similar comments for years.
These weather stations were never intended to perform the function to which they are being put. They are not fit for purpose, and their data is being extrapolated beyond its capabilities.
If the climate scientists wish the stations to perform the task to which they are now being put, the starting point would be to audit each and every station: its siting and siting issues, station moves, equipment used and equipment changes, screen changes, maintenance of equipment and screen, record keeping and the approach to accurate record keeping, the length of uninterrupted records, etc. The good stations could be identified and the poor stations thrown out.
Essentially what should have been looked for is the equivalent of USCRN stations but with the longest continuous data records. It may be that we would be left with only 1,000 or perhaps only 500 stations worldwide, but better to work with good-quality pristine data that requires little or no adjustment than with loads and loads of cr*p-quality station data needing endless manipulation/adjustment/homogenisation.
We are now no longer examining the data and seeing what the data tells us, but rather we are simply examining the efficacy of the various adjustments/homogenisation undertaken to that data.
Quite farcical really.

Reply to  richard verney
September 29, 2015 12:34 pm

Exactly! There are billions of government dollars available for study after study, yet the most basic data QC is abandoned? The gulf between best practices in the climate science community and the real world is staggering beyond comprehension.

Matt G
September 29, 2015 11:11 am

The only way to get a true surface data set is to use the same samples throughout, from start to finish. The stations are changed all the time, so they are always measuring different parts of the planet’s surface. Estimating a massive surface area to within tenths of a degree from a tiny percentage of it is not a technique that should ever have been used, and it is impossible to claim any real accuracy for it. The only data sets close to this ideal are the satellite data sets. We would need a million weather stations on the planet’s surface to even come close to what satellites can measure in the troposphere; forty-four thousand weather stations cover roughly one percent of the planet’s surface.

Richard M
September 29, 2015 3:24 pm

I always thought it would be interesting to try to verify the station data with some kind of proxy. Wouldn’t it be interesting to see a set of proxy data collected for the US since 1880 compared to the temperature record? Yes, proxy data has its own limitations, but if enough data were collected, that should tend to average out the errors.
I suspect the problem is no one in the government wants to see any attempt to validate the data. Hence, nothing could ever get funded. It would almost take a volunteer group.

Michael Jankowski
Reply to  Richard M
September 29, 2015 5:20 pm

Richard M, you’re quite right. You’d probably be interested in this…
http://climateaudit.org/2005/02/20/bring-the-proxies-up-to-date/
And this “volunteer” effort…
http://climateaudit.org/2007/10/12/a-little-secret/

Michael G. Chesko
September 29, 2015 4:17 pm

This is in reference to the graph that ralfellis presented above in the comments section.
Wait a minute, are you saying that CO2 is a follower of a temperature trend rather than the cause of a temperature trend?
I’m just a layman, and I often don’t understand all the scientific jargon, but it seems to me that if CO2 is a “negative-feedback temperature regulator” that a solution calling for a reduction in CO2 to save the world has a big problem.
This makes me wonder…
Can anyone (preferably a scientist) answer these four questions:
• If human beings were producing the same amount of CO2 before the last Ice Age that we are today, would the last Ice Age have been averted?
• Is the fact that human beings burn fossil fuels today going to avert another Ice Age?
• If human activity is capable of abnormally warming the Earth, are we capable of abnormally cooling it?
• What human activity would abnormally cool the Earth?

Steve M. from TN
September 29, 2015 4:27 pm

Double counting lows/highs.
Maybe someone can explain this to me, but wouldn’t you get a double high/low only once: the day you switch? As long as you don’t switch again, it shouldn’t be a problem. It seems to me that a single switch of TOBS would be insignificant in the data.

richard verney
Reply to  Steve M. from TN
September 30, 2015 2:57 am

As I see it, apart from rare isolated events, this can only be a significant and repeated problem where the station’s TOB coincides with the warmest part of the day.
Obviously the temperature profile of every day is slightly different, but in general the warmest time of the day is an hour or so after the sun has reached its peak height, usually some time between about 1 pm and 3:30 pm.
That being the case, every station whose TOB coincides broadly with the warmest time of the day should either be disregarded from the data set, or be put in a separate bin and have very detailed and careful consideration given to its record, with an adjustment made if necessary.
But I consider the better practice would be to disregard any station if it has a TOB coinciding approximately with the warmest period of the day.
It would be interesting if Zeke or Mosher would comment on why they do not simply disregard stations that have TOBs around the warmest part of the day.
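A minimal sketch (with hypothetical temperatures) of the double-count mechanism richard verney describes: a max/min thermometer read and reset near the warmest part of the day carries its still-hot reset value into the next day’s window:

```python
# Two days of made-up hourly temperatures (F): a hot spell peaking at
# 96 in mid-afternoon of day 1, followed by a uniformly cool day 2.
hourly = [70] * 15 + [95, 96, 94] + [72] * 30

obs = 18  # observer reads and resets at the end of hour 17 (~5 pm)

day1_max = max(hourly[:obs])  # 96: the real afternoon peak
# After the reset the thermometer immediately registers the current
# 94 F, so the next 24-hour window inherits it even though that
# "day" never climbs above 72:
day2_max = max([hourly[obs - 1]] + hourly[obs:obs + 24])

print(day1_max, day2_max)  # 96 and 94: one hot afternoon counted twice
```

A midnight reset avoids this because the reset-time temperature is rarely the warmest reading in the window that follows, which is why an afternoon TOB is the problematic case.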