The scientific method is at work on the USHCN temperature data set

Temperature is such a simple finite thing. It is amazing how complex people can make it.

commenter and friend of WUWT, ossqss at Judith Curry’s blog

Sometimes, you can believe you are entirely right while simultaneously believing that you’ve done due diligence. That’s what confirmation bias is all about. In this case, a whole bunch of people, including me, got a severe case of it.

I’m talking about the claim made by Steve Goddard that 40% of the USHCN data is “fabricated”, which I and a few other people thought was clearly wrong.

Dr. Judith Curry and I have been conversing a lot via email over the past two days, and she has written an illuminating essay that explores the issue raised by Goddard and the sociology going on. See her essay:

http://judithcurry.com/2014/06/28/skeptical-of-skeptics-is-steve-goddard-right/

Steve Goddard aka Tony Heller deserves the credit for the initial finding; Paul Homewood deserves the credit for taking the finding and establishing it in a more comprehensible way that opened closed eyes, including mine, in his post entitled Massive Temperature Adjustments At Luling, Texas. Along with that is his latest followup, showing the problem isn’t limited to Texas but extends to Kansas as well. And there’s more about this below.

Goddard early on (June 2) gave me the source code that made his graph, but I couldn’t get it to compile and run. That’s probably more my fault than his, as I’m not an expert in the C++ programming language. Had I been able to, things might have gone differently. Then there was the fact that the problem Goddard noted doesn’t show up in GHCN data, and I didn’t see the problem in any of the data we had for our USHCN surface stations analysis.

But, the thing that really put up a wall for me was this moment on June 1st, shortly after getting Goddard’s first email with his finding, which I pointed out in On ‘denying’ Hockey Sticks, USHCN data, and all that – part 1.

Goddard initially claimed 40% of the STATIONS were missing, which I said right away was not possible. It raised my hackles, and prompted my “you need to do better” statement. Then he switched the text in his post from stations to data while I was away for a couple of hours at my daughter’s music recital. When I returned, I noticed the change, made with no notation of it on his post, and that is what really put up the wall for me. He probably looked at it like he was just fixing a typo; I looked at it like he was sweeping an important distinction under the rug.

Then there was my personal bias over previous episodes where Goddard had made what I considered grievous errors and refused to admit to them. There was the episode where he claimed CO2 freezes out of the air in Antarctica, later shown by experiment to be impossible, the GISStimating 1998 episode, and the comment where, when the old data is checked, it is clear Goddard/Heller’s claim doesn’t hold up.

And then just over a month ago there was Goddard’s first hockey stick shape in the USHCN data set, which turned out to be nothing but an artifact.

All of that added up to a big heap of confirmation bias. I was so used to Goddard being wrong that I expected it again. But this time Steve Goddard was right, and my confirmation bias prevented me from seeing that there was in fact a real issue in the data and that NCDC has dead stations that are reporting data that isn’t real: mea culpa.

But that’s the same problem many climate scientists have: they are used to some skeptics being wrong on some issues, so they put up a wall. That is why the careful and exacting analyses we see from Steve McIntyre should be a model for us all. We have to “do better” to make sure that the claims we make are credible, documented, phrased in non-inflammatory language, understandable, and most importantly, right.

Otherwise, walls go up, confirmation bias sets in.

Now that the wall is down, NCDC won’t be able to ignore this. Even John Nielsen-Gammon, who was critical of Goddard along with me in the PolitiFact story, now says there is a real problem. So does Zeke, and we have all sent or forwarded email to NCDC advising them of it.

I’ve also been on the phone Friday with the assistant director of NCDC and chief scientist (Tom Peterson), and also with the person in charge of USHCN (Matt Menne). Both were quality, professional conversations, and both thanked me for bringing it to their attention.  There is lots of email flying back and forth too.

They are taking this seriously; they have to, as the final data as currently presented for USHCN is clearly wrong. John Nielsen-Gammon sent me a cursory analysis for Texas USHCN stations, noting he found a number of stations that had “estimated” data in place of actual good data that NCDC has in hand, data which appears in the RAW USHCN data file on their FTP site:

From: John Nielsen-Gammon

Sent: Friday, June 27, 2014 9:27 AM

To: Anthony

Subject: Re: USHCN station at Luling Texas

 Anthony –
   I just did a check of all Texas USHCN stations.  Thirteen had estimates in place of apparently good data.
410174 Estimated May 2008 thru June 2009
410498 Estimated since Oct 2011
410639 Estimated since July 2012 (exc Feb-Mar 2012, Nov 2012, Mar 2013, and May 2013)
410902 Estimated since Aug 2013
411048 Estimated July 2012 thru Feb 2014
412906 Estimated since Jan 2013
413240 Estimated since March 2013
413280 Estimated since Oct 2012
415018 Estimated since April 2010, defunct since Dec 2012
415429 Estimated since May 2013
416276 Estimated since Nov 2012
417945 Estimated since May 2013
418201 Estimated since April 2013 (exc Dec 2013).

What is going on is that while the RAW data file has the actual measurements, for some reason the final data they publish doesn’t get the memo that good data is actually present for these stations, so it “infills” them with estimated data derived from surrounding stations. It’s a bug, and a big one. And when Zeke did a cursory analysis Thursday night, he discovered it was systemic to the entire record: up to 10% of stations have “estimated” data, spanning over a century:

Analysis by Zeke Hausfather

And here is the real kicker: “zombie weather stations” exist in the USHCN final data set that are still generating data, even though they have been closed.

Remember Marysville, CA, the poster child for bad station siting? It was the station that gave me my “light bulb moment” on the issue of station siting. Here is a photo I took in May 2007:

[Photo: the Marysville, CA USHCN station, May 2007]

It was closed just a couple of months after I introduced it to the world as the prime example of “How not to measure temperature”. The MMTS sensor sat in a parking lot, bathed in hot air from the a/c units on the nearby electronics sheds for the cell phone tower:

[Photo: the Marysville MMTS sensor in the parking lot, beside the cell tower electronics sheds]

Guess what? Like Luling, TX, which is still open but getting estimated data in place of the actual data in the final USHCN data file, Marysville is still producing estimated monthly data, marked with an “E” flag, even though it was marked closed in 2007 by NOAA’s own metadata:

USH00045385 2006  1034E    1156h    1036g    1501h    2166i    2601E    2905E    2494E    2314E    1741E    1298E     848i       0
USH00045385 2007   797c    1151E    1575i    1701E    2159E    2418E    2628E    2620E    2197E    1711E    1408E     846E       0
USH00045385 2008   836E    1064E    1386E    1610E    2146E    2508E    2686E    2658E    2383E    1906E    1427E     750E       0
USH00045385 2009   969E    1092E    1316E    1641E    2238E    2354E    2685E    2583E    2519E    1739E    1272E     809E       0
USH00045385 2010   951E    1190E    1302E    1379E    1746E    2401E    2617E    2427E    2340E    1904E    1255E    1073E       0
USH00045385 2011   831E     991E    1228E    1565E    1792E    2223E    2558E    2536E    2511E    1853E    1161E     867E       0
USH00045385 2012   978E    1161E    1229E    1646E    2147E    2387E    2597E    2660E    2454E    1931E    1383E     928E       0
USH00045385 2013   820E    1062E    1494E    1864E    2199E    2480E    2759E    2568E    2286E    1807E    1396E     844E       0
USH00045385 2014  1188E    1247E    1553E    1777E    2245E    2526E   -9999    -9999    -9999    -9999    -9999    -9999

Source:  USHCN Final : ushcn.tavg.latest.FLs.52i.tar.gz

Compare to USHCN Raw : ushcn.tavg.latest.raw.tar.gz

In the USHCN V2.5 folder, the readme file describes the “E” flag as:

E = a monthly value could not be computed from daily data. The value is estimated using values from surrounding stations
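For readers who want to verify this for themselves, below is a minimal sketch in Python (my own illustration, not NCDC’s code) of how one can scan for exactly what John found: months where the FINAL file carries an “E” flag while the RAW file holds an apparently good measurement. The column offsets follow the fixed-width record layout described in the v2.5 readme (an 11-character station ID, the year in columns 13-16, then twelve 9-character value-plus-flag groups); the file paths are hypothetical stand-ins for the contents of the two tarballs named above.

MISSING = -9999

def parse_record(line):
    # One USHCN v2.5 monthly record: station ID, year, then 12 months of
    # (value in hundredths of a degree, first flag character).
    station = line[0:11]
    year = int(line[12:16])
    months = []
    for m in range(12):
        off = 16 + 9 * m
        value = int(line[off:off + 6])      # -9999 means missing
        dmflag = line[off + 6:off + 7]      # "E" marks an estimated value
        months.append((value, dmflag))
    return station, year, months

def load(path):
    records = {}
    with open(path) as f:
        for line in f:
            station, year, months = parse_record(line)
            records[(station, year)] = months
    return records

final = load("ushcn.tavg.FLs.52i.txt")   # hypothetical path: the extracted FINAL file
raw = load("ushcn.tavg.raw.txt")         # hypothetical path: the extracted RAW file

for key, fmonths in sorted(final.items()):
    if key in raw:
        for m, ((fval, fflag), (rval, _)) in enumerate(zip(fmonths, raw[key]), 1):
            if fflag == "E" and rval != MISSING:
                print(f"{key[0]} {key[1]}-{m:02d}: final estimated {fval}, raw has {rval}")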

There are quite a few “zombie weather stations” in the USHCN final dataset, possibly up to 25% of the 1218 stations in the network. In my conversations with NCDC on Friday, I was told these were kept in and “reporting” as a policy decision, to provide “continuity” of data for scientific purposes. While there “might” be some justification for that sort of thinking, few people know about it: there’s no disclaimer or caveat in the USHCN FTP folder at NCDC or in the readme file that describes this. They only “hint” at it, saying:

The composition of the network remains unchanged at 1218 stations

But that really isn’t true, as some USHCN stations out of the 1218 have been closed and are no longer reporting real data; instead, they are reporting estimated data.

NCDC really should make this clear. While it “might” be OK to produce a datafile that contains estimated data, not everyone is going to understand what that means, nor that stations long dead are still producing estimated data. NCDC has failed to notify the public, and even their colleagues, of this. Even the Texas State Climatologist, John Nielsen-Gammon, didn’t know about these “zombie” stations until I showed him. If he had known, his opinion might have been different on the Goddard issue. When even professional people in your sphere of influence don’t know you are infilling data for dead weather stations like this, you can be sure that your primary mission, to provide useful data, is FUBAR.
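To put a rough number on the zombie count, the parse_record and load sketch from above can be extended to flag any station whose most recent year of non-missing FINAL values is entirely “E”. The 12-month threshold is my arbitrary choice, and a proper census would consult NCDC’s station metadata for closure dates, so treat the output as indicative only.

from collections import defaultdict

# Group the FINAL records (from the sketch above) by station.
by_station = defaultdict(list)
for (station, year), months in final.items():
    by_station[station].append((year, months))

zombies = 0
for station, years in by_station.items():
    years.sort()  # chronological order
    # Flatten into one chronological series, dropping missing months.
    series = [(v, f) for _, months in years for (v, f) in months if v != MISSING]
    if len(series) >= 12 and all(f == "E" for _, f in series[-12:]):
        zombies += 1

print(f"{zombies} of {len(by_station)} stations end in a full year of estimated data")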

NCDC needs to step up and fix this along with other problems that have been identified.

And they are. I expect some sort of statement, and possibly a correction, next week. In the meantime, let’s let them do their work and go through their methodology. It will not be helpful to ANYONE if we start beating up the people at NCDC ahead of such a statement and/or correction.

I will be among the first, if not the first, to know what they are doing to fix the issues, and as soon as I know, so will all of you. Patience and restraint are what we need at the moment. I believe they are making a good faith effort, but as you all know, the government moves slowly; they have to get policy wonks to review documents and all that. So, we’ll likely hear something early next week.

These lapses in quality control, and the thinking that infilling estimated data for long dead weather stations is acceptable, are the sort of thing that happens when the only people you interact with are inside your sphere of influence. The “yeah, that seems like a good idea” approval mumble probably resonated in that NCDC meeting, but it was a case of groupthink. Imagine The Wall Street Journal providing “estimated” stock values for long dead companies to provide “continuity” of its stock quotes page. Such a thing would boggle the mind, and the SEC would have a cow, not to mention readers. Scams would erupt trying to sell stock in these long dead companies: “It’s real, see, it’s reporting a value in the WSJ!”

It often takes people outside of climate science to point out the problems they don’t see, and skeptics have been doing it for years. Today, we are doing it again.

For absolute clarity, I should point out that the RAW USHCN monthly datafile is NOT being infilled with estimated data, only the FINAL USHCN monthly datafile. But that is the one that many other metrics use, including NASA GISS, and it goes into the mix for things like the NCDC monthly State of the Climate Report.

While we won’t know until all of the data is corrected and new numbers are run, this may affect some of the absolute temperature claims made in SOTC reports, such as “warmest month ever”, 3rd warmest, etc. The magnitude of such shifts, if any, is unknown at this point. The long-term trend will probably not be affected.

It may also affect the comparisons between raw and final adjusted USHCN data that we have been doing for our paper, such as this figure from the draft:

[Figure 20 from our draft paper (Watts et al. 2012): CONUS compliant vs. non-compliant vs. NOAA final adjusted data]

The exception is BEST, which starts with the raw daily data, but they might be getting tripped up into creating some “zombie stations” of their own by the NCDC metadata and its resolution improvements to lat/lon. The USHCN station at Luling, Texas is listed by BEST as having had 7 station moves (note the red diamonds):

[BEST station record for Luling, TX, with 7 station moves flagged as red diamonds]

But there have really been only two, and the station has been just like this since 1995, when it was converted from a Stevenson Screen to MMTS. Here is our survey image from 2009:

[Photo: the Luling, TX station in 2009, looking north]

Photo by surfacestations volunteer John Warren Slayton.

NCDC’s metadata only lists two station moves:

[NCDC metadata for Luling, TX, listing two station moves]

As you can see below, some improvements in lat/lon accuracy can look like a station move:

[HOMR location history for Luling, TX]

http://www.ncdc.noaa.gov/homr/#ncdcstnid=20024457&tab=LOCATIONS

[HOMR station metadata for Luling, TX]

http://www.ncdc.noaa.gov/homr/#ncdcstnid=20024457&tab=MISC

Thanks to Paul Homewood for the two images and links above. I’m sure Mr. Mosher will let us know if this issue affects BEST or not.
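A back-of-envelope illustration of the lat/lon point (the coordinates below are made up for the example, not Luling’s actual metadata): a position recorded to the nearest arc-minute in old metadata, later refined to four decimal places with GPS, looks like a station “move” of most of a kilometer to any algorithm that simply compares coordinates.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometers.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2.0 * 6371.0 * asin(sqrt(a))

precise = (29.6786, -97.6553)  # illustrative GPS-era coordinates
coarse = (round(precise[0] * 60) / 60, round(precise[1] * 60) / 60)  # same spot, nearest arc-minute

print(f"apparent 'move' from the precision change alone: {haversine_km(*coarse, *precise):.2f} km")
# roughly 0.7 km for these made-up values, with no physical move at all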

And there is yet another issue: the recent switch to using something called “climate divisions” to calculate the national and state temperatures.

Certified Consulting Meteorologist and Fellow of the AMS Joe D’Aleo writes in with this:

I had downloaded the Maine annual temperature plot from NCDC Climate at a Glance in 2013 for a talk. There was no statistically significant trend since 1895. Note the spike in 1913, following super blocking from Novarupta in Alaska (similar to the high-latitude volcanoes in the late 2000s, which helped with the blocking and the maritime influence that spiked 2010, as snow was gone by March with a steady northeast maritime Atlantic flow). 1913 was close to 46F, and the long term mean just over 41F.

[NCDC Climate at a Glance plot: Maine annual temperature, as downloaded in 2013]

Then, seemingly in a panicked change late this frigid winter, big changes occurred at NCDC. I wanted to update the Maine plot for another talk and got this from NCDC CAAG:

[NCDC Climate at a Glance plot: Maine annual temperature, as downloaded in 2014]

Note that 1913 was cooled nearly 5 degrees F and no longer stands out. There is a warming of at least 3 degrees F since 1895 (they list 0.23/decade), and the new mean is close to 40F.

Does anybody know what the REAL temperature of Maine is/was/is supposed to be? I sure as hell don’t. I don’t think NCDC really does either.

In closing…

Besides moving toward a more accurate temperature record, the best thing about all this hoopla over the USHCN data set is the PolitiFact story where we had all these experts lined up (including me as the token skeptic) who stated without a doubt that Goddard was wrong, and rated the claim “pants on fire”.

They’ll all be eating some crow, as will I, but now that I have Gavin for dinner company, I don’t really mind at all.

When the scientific method is at work, eventually, everybody eats crow. The trick is to be able to eat it and tell people that you are honestly enjoying it, because crow is so popular, it is on the science menu daily.

June 28, 2014 5:48 pm

“When the scientific method is at work, eventually, everybody eats crow. ”
I just love it when people smarter and better than I’ll ever be slip on the ice. I know it is not very spiritual, but I can’t help chuckling.
When I myself need a slice of humble pie, all I need to do is attempt a five-day forecast. This promptly puts me in my place, and makes me amazed at how well many meteorologists do.
People who fear to ever be wrong stay at home and hide under their beds.

Truthseeker
June 28, 2014 5:49 pm

REPLY: Who is “Tim”?
I meant “Tony”. Typo. My bad.

Larry Ledwick
June 28, 2014 6:02 pm

It appears that a key issue here that is not being discussed much is that for large computer programs, it is impossible to write bug-free code!
Worse, it is impossible to know how close you are to having bug-free code. There is no test suite that can cover every possible permutation of factors, from simple things like which computer the code was compiled on to some obscure difference in the installed OS or hardware on that system. At a company I worked at, we had two physically identical machines, identical hardware, identical software and OS, yet one always ran slower than the other on certain jobs. I have seen a system punch a punch card from a program that was only able to print to a line printer. It created one single punch card image of the first line of the file, then switched to the printer and finished printing out the remaining 800,000 line items with no errors. The error never repeated; it was a one-time event.
Not all compilers make the same final output from the same source code, for example. Different libraries installed on apparently identical machines can give different results from the same source code compile, due to differences in how they handle rounding or other functions. Just because the code has run without apparent problem for days, weeks, months or even years does not in any way demonstrate it is stable or fit for purpose. It may appear to be good code, but you can be certain that there is some chain of events and conditions that the code has never encountered which could conceal a huge bug in the code.
There are some really bizarre examples out there in the IT world of strange confluences of conditions uncovering a bug that had been silently producing apparently good output for years, when it was in fact broken all along, but broken in a way that fit people’s expectations (another form of confirmation bias).
Unfortunately, starting in the 1960s when large mainframes moved out of research facilities and into industry, the culture sold to the public was “you can depend on it, this was computer generated”!
I have personally seen code bugs uncovered that had produced bad output for very long periods of time, and that was in spite of legitimate efforts to validate the data output over that period of time. Only on investigation of some unusual event like a system crash did the bug get discovered.
Some bugs are absolutely evil, the kind of situation that only occurs on the 3rd Tuesday of the month if the month name starts with a M or is 28 days long and the program is run on Fred’s computer early in the morning before it warms up fully.
This is why I have low confidence in temperature data that always seems to be adjusted in the same direction. That is just not natural; it is either intentional or due to an inherent error in the processing system. Real natural systems vary in all dimensions.
caveat emptor

ossqss
June 28, 2014 6:03 pm

So, is this actually a case of modeling observations?
Think about that for a minute……..

June 28, 2014 6:04 pm

If you don’t like infilling, don’t use it. It doesn’t change the result, almost by definition, since infilling mimics spatial interpolation: http://rankexploits.com/musings/wp-content/uploads/2014/06/USHCN-infilled-noninfilled.png
The interesting issue currently is that some stations that report apparently valid raw data are being replaced with estimated data. The cause seems to be that the NCDC data file is missing the X flag, which indicates that the data was too inhomogeneous at the time (e.g. between two station moves) to figure out what is going on. The folks at NCDC are looking into it, as the number of stations that fall into this category seems to be a bit high, at least in my opinion.
Also, the confusion here was on Anthony’s part rather than mine; I always knew that NCDC used infilling to ensure that there were 1218 reports per month in the homogenized dataset. I personally think infilling is silly, since it’s not really needed (as any sort of reasonable spatial interpolation will produce the same result). But I understand it’s something of a legacy product to ensure consistency for folks who want to calculate average absolute temperatures.

REPLY:
Confusion is the wrong word, I simply didn’t know that NCDC was reanimating dead weather stations for the final dataset. I agree, it is silly.
However, I disagree that it doesn’t make a difference, because the majority of stations, 80%+, are non-compliant siting-wise. A small minority are compliant, and the infilling and homogenization obliterate their signal, and those stations are, by definition, the most free from bias. As we have shown, compliant stations have a lower trend than non-compliant stations, and a far lower trend than the final adjusted data.
Basically what NCDC is doing here is mathematically favoring the signal of the crappiest stations – Anthony
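To make the two positions concrete: Zeke’s “doesn’t change the result” claim is pure arithmetic, and a toy sketch (made-up numbers, mine alone) shows it: when the infilled value equals the spatial estimate you would have used anyway, the regional mean anomaly is unchanged. Anthony’s objection above is to what feeds that estimate, not to the arithmetic: if the neighboring stations supplying it are biased warm, every infilled value inherits that warmth.

# Toy illustration with made-up anomalies: infilling with the same value a
# spatial interpolation would supply leaves the regional mean unchanged.
reporting = [0.42, 0.61, 0.35, 0.58, 0.49]   # anomalies from stations that reported
estimate = sum(reporting) / len(reporting)   # stand-in for the spatial estimate

mean_without = sum(reporting) / len(reporting)
mean_with = (sum(reporting) + 2 * estimate) / (len(reporting) + 2)  # two infilled "zombies"

print(mean_without, mean_with)  # 0.49 0.49 -- identical by construction
# But if the reporting neighbors run warm (poor siting), 'estimate' runs
# warm too, and the infilled stations simply replicate that warmth.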

bit chilly
June 28, 2014 6:11 pm

once again i am reminded of the old adage “the man who never made a mistake never made anything”.
well done anthony, it is good to see personal integrity come to the fore.
i look forward to the ultimate outcome of this discovery. in order to make the claim of global warming, first there has to be an accurate record of temperature. this salient point appears to have evaded the entire climate science community to date.

June 28, 2014 6:13 pm

Zeke, infilling warmed Arizona. And by warmed I mean changed the trend from cooling to warming.
http://sunshinehours.wordpress.com/2014/06/05/ushcn-2-5-estimated-data-is-warming-data-arizona/
In Arizona at least, infilling seemed to be the key adjustment for cooling the past and warming the present.
All they needed was 15% of the data and some special sauce ….

Otter (ClimateOtter on Twitter)
June 28, 2014 6:13 pm

It should be obvious by now that nick just Stokes the fire.

June 28, 2014 6:15 pm

As far as the climate divisions data goes, the climate at a glance website switched earlier this year from using raw data to using TOBs-adjusted and homogenized data. It was covered at this blog, as I recall.
REPLY: No comment about the whole change in character of the data though?

CC Squid
June 28, 2014 6:22 pm

We are all becoming paranoid if we are asking ourselves one or more of the following questions:
1. Did some foreign hacker mark the high temperature sites in the first half of the 20th century as estimated, and the small town sites in the latter half of the century as estimated?
2. Did some “true believer” who works for the government do this?
3. Did some kid taking a bribe to pay off his college education do this?

June 28, 2014 6:24 pm

Sunshinehours/Bruce:
Infilling makes no difference for Arizona: http://i81.photobucket.com/albums/j237/hausfath/USHCNAZinfillednoninfilled_zps0767679b.png
Anthony,
Let’s leave homogenization out for the moment, as that’s a different issue. Infilling shouldn’t have any effect on temperatures, because temperatures for a region are calculated through spatial interpolation. Spatial interpolation should have the same effect as infilling (e.g. assign missing grid cells a spatially-interpolated anomaly from reporting grid cells). That’s why we see identical results when we compare CONUS anomalies with and without infilling. Now, if NCDC were purposefully closing good stations to exacerbate the effects of bad stations, that would be one thing. In a world where stations are somewhat randomly dropping out due to volunteer observers passing away and similar factors, whether or not you choose to infill data is irrelevant for your estimate of region-wide change over time. The only way you will find bias is if you average absolute temperatures, but in this case infilling will give you a -less- biased result, as it will keep the underlying climatology constant.
REPLY: No, you’ve missed the point entirely. Infilling is drawing from a much larger population of poorly sited stations, so the infilling will always tend to be warming the smaller population of well sited stations, obliterating their signal. We should be seeking out and plotting data from those stations, not burying them in a puree of warm soup.
And making up data for the zombie stations like Marysville, that’s just wrong. There is no justification whatsoever, and there is nothing you can say that will change my mind on that issue.
And I repeat what I’ve said before: if you want people to fully understand the process, make a damn flowchart showing how each stage works. I told Menne the same thing yesterday, and his response was “well, maybe in a year or two”. That’s just BS. The public wants to know what is going on; your response to me a couple of days ago was “read the Menne et al paper”. Process matters. A paper and a process aren’t the same thing, and maybe it’s time to bring in an outside auditor and run NCDC’s processes through a quality control review that does not involve pal review meetings where they decide that reanimating dead weather stations is a good idea.
– Anthony

CC Squid
June 28, 2014 6:24 pm

Chilly, another one says even a blind squirrel finds an acorn sometimes.

June 28, 2014 6:27 pm

“It doesn’t change the result”
Zeke,
I keep hearing this from you and Mosher. It sounds like you are more interested in the result than you are in good process or these temperature record problems would have come to light earlier. You have been working with the records for years. Do you think you may have lost some objectivity expecting a certain result?
Andrew

John W. Garrett
June 28, 2014 6:27 pm

0.6°C over the last 120 years? Given the state of the climate/weather data gathering system, the adjustments (cough, cough) made by GISS, the urban heat island effect, Chinese (cough, cough) weather stations/data, Russian (cough, cough) weather stations/data, Sub-Saharan African (cough, cough) weather stations/data, among a multitude of other problems, that’s a rounding error, at best.
When one stops to consider the reliability of the historic temperature records, one is left to wonder if we are kidding ourselves about our ability to gauge the extent to which current temperatures are or are not higher or lower.
Do you really believe that Russian temperature records from, say, 1917-1950 are reliable?
Do you honestly believe that Chinese temperature records from, say, 1913-1980 are reliable?
Do you seriously believe that Sub-Saharan African temperatures from, say, 1850-2012 are accurate?
I don’t.

Kevin K.
June 28, 2014 6:29 pm

I kind of came across this “adjusted data” issue independently in early 2007, then found some of the work already done online by Anthony et al. It’s what turned me from warmist to actualist. The 30-year “normals” understate the actual arithmetic means of most stations by 0.7F to 1.3F, depending on the month.
The excuse given was that the pre-ASOS way had a “cold bias”. Interesting.
The difference remained that high even when 1971-2000 “normals” became 1981-2010. Even though the decade 2001-2010 was merely 0.1F warmer than the raw data mean of 1971-2010, the reset “normals” for 1981-2010 were equally understated, even though they now had 14-16 years of “correct” ASOS data instead of the 4-6 years that were present in 1971-2000. Simple understanding of math tells you the difference should have shrunk.
For anyone who doesn’t believe this is going on, go to any NWS station or pull the data down from NCDC, which is now free. BWI’s is readily available and easy to export to Excel. Go ahead and pull the 30 years of raw data results for 1981-2010 and see how they are 0.7F-1.3F over the stated “30 year normals.”
This is also why I don’t understand the UHI “adjustments” either. BWI has gradually been swallowed by Baltimore City since it opened in 1961. You would think then the UHI would cause the present data to be adjusted downward or the old data upward to draw a comparison. Nope. Just the opposite.
When I cheated with data, I got a zero on a test and detention. These guys get grants.

Latitude
June 28, 2014 6:31 pm

I guess it was worth the long wait to finally see this:
“REPLY: Oh Nick, puhlease. When 80% of your network is compromised by bad siting, what makes you think those neighboring stations have any data that’s worth a damn? You are adjusting bad data with…drum roll….more bad data. And that’s why homogenization fails here. It’s perfectly mathematically legitimate, but it’s useless when the data you are using to adjust with is equally crappy or crappier than the data you want to “fix”.”
…and that’s the whole point
well that, and the little issue of phantom stations reporting

milodonharlani
June 28, 2014 6:31 pm

John W. Garrett says:
June 28, 2014 at 6:27 pm
African data were probably better during the colonial period than since, no matter how odious imperialism may have been.

CC Squid
June 28, 2014 6:43 pm

The explanation of how the temps were decreased is located below. The comments starting at this one say it all. Mosh goes through the process in detail.
http://judithcurry.com/2014/06/28/skeptical-of-skeptics-is-steve-goddard-right/#comment-601719
climatereason | June 28, 2014 at 10:37 am | Reply
Until Mosh turns up it seems appropriate to post this double header which was a comment I originally made to Edim some months ago. I will comment on it and the Goddard article separately

June 28, 2014 6:44 pm

Anthony, I took a piece out of you over at Paul Homewood’s blog … I am pleased that you have made reparations for your error. I hope that you have the humility to unequivocally apologize to ‘Goddard’ for trying to bust his butt.

john robertson
June 28, 2014 6:52 pm

Thanks Anthony, a most interesting trilogy of posts.
I had not visited Steve Goddard’s site before, thanks for the heads up.
This climatology, as run by our governments, is always surprising in an unhappy way.
Why is it always worse than we imagined?

June 28, 2014 6:52 pm

Zeke: “Infilling makes no difference for Arizona:”
Zeke is using anomalies calculated with infilled stations to tell us infilling makes no difference.
Oh Zeke … you did drink the kool aid.

temp
June 28, 2014 6:53 pm

CC Squid says:
June 28, 2014 at 6:43 pm
“By looking at datasets outside USCHN we can see that these adjustments are justified. In fact the adjustments are calibrated by looking at hourly stations close to the USCHN stations.
Next, the GISSTEMP algorithm will change the estimates of the past
as New data for the present comes in. This has to do with the RSM method. This seems bizarre to most folks but once you walk through the math you’ll see how new data about say 1995, changes what you think about 1945. There are also added stations so that plays a role as well.”
Yes, it’s bizarre… Mosher’s argument is that by taking a stick and measuring it against other sticks, you can now label that stick 3 feet long because the other sticks are supposedly 3 feet long…. His whole argument is the very definition of the logical fallacy of appealing to the consensus.
“The fundamental confusion people have is that they think that global indexs are averages.”
“1. These algorithms do not calculate averages. They estimate fields.
2. If you change the data ( add more, adjust it etc )
3. If you improve the algorithm, your estimate of the past will change. It SHOULD change. ”
So Mosher confirms without a doubt that these so-called temperature sets are nothing more than model output, and are not in fact observations under the scientific method.

Scott
June 28, 2014 6:53 pm

Doesn’t surprise me. I used to have my staff change their hours so that each project had the right amount (easier than trying to explain that some projects go smoothly (fewer hours) and some are a disaster (more hours)), and charge as much as “possible” to capital projects (to reduce expenses).
Here, I’m sure the supervisor/manager/director is only approving changes that make it warmer, with employees run through the wringer anytime they dare make a cooler adjustment. After a while, no cooler adjustments will ever be submitted, and bonuses might even be linked to warming adjustments…

Scott
June 28, 2014 6:55 pm

Anthony, you are still my idol and I appreciate your candor. This world needs more people like you!

RoHa
June 28, 2014 7:01 pm

Slightly off topic, but I hope your reference to the stock market is not intended to imply that my shares in the South Sea Company and in British Opium are duds. If they are, I will only have my tulip bulb interests to rely on.
