The scientific method is at work on the USHCN temperature data set

Temperature is such a simple finite thing. It is amazing how complex people can make it.

commenter and friend of WUWT, ossqss at Judith Curry’s blog

Sometimes, you can believe you are entirely right while simultaneously believing that you’ve done due diligence. That’s what confirmation bias is all about. In this case, a whole bunch of people, including me, got a severe case of it.

I’m talking about the claim made by Steve Goddard that 40% of the USHCN data is “fabricated,” which I and a few other people thought was clearly wrong.

Dr. Judith Curry and I have been conversing a lot via email over the past two days, and she has written an illuminating essay that explores the issue raised by Goddard and the sociology going on. See her essay:

http://judithcurry.com/2014/06/28/skeptical-of-skeptics-is-steve-goddard-right/

Steve Goddard aka Tony Heller deserves the credit for the initial finding, and Paul Homewood deserves the credit for taking the finding and establishing it in a more comprehensible way that opened closed eyes, including mine, in this post entitled Massive Temperature Adjustments At Luling, Texas. Along with that is his latest followup, showing the problem isn’t limited to Texas but shows up in Kansas as well. And there’s more about this below.

Goddard early on (June 2) gave me his source code that made his graph, but I couldn’t get it to compile and run. That’s probably more my fault than his, as I’m not an expert in the C++ computer language. Had I been able to, things might have gone differently. Then there was the fact that the problem Goddard noted doesn’t show up in GHCN data, and I didn’t see it in any of the data we had for our USHCN surface stations analysis.

But, the thing that really put up a wall for me was this moment on June 1st, shortly after getting Goddard’s first email with his finding, which I pointed out in On ‘denying’ Hockey Sticks, USHCN data, and all that – part 1.

Goddard initially claimed 40% of the STATIONS were missing, which I said right away was not possible. It raised my hackles, and prompted my “you need to do better” statement. Then he switched the text in his post from stations to data while I was away for a couple of hours at my daughter’s music recital. When I returned, I noticed the change, made with no note of it on his post, and that is what really put up the wall for me. He probably looked at it like he was just fixing a typo; I looked at it like it was sweeping an important distinction under the rug.

Then there was my personal bias over previous episodes where Goddard had made what I considered grievous errors, and refused to admit to them. There was the claim of CO2 freezing out of the air in Antarctica, later shown to be impossible by an experiment; the GISStimating 1998 episode; and the comment where, when the old data is checked, it is clear Goddard/Heller’s claim doesn’t hold up.

And then just over a month ago there was Goddard’s first hockey stick shape in the USHCN data set, which turned out to be nothing but an artifact.

All of that added up to a big heap of confirmation bias. I was so used to Goddard being wrong that I expected it again. But this time Steve Goddard was right, and my confirmation bias prevented me from seeing that there was in fact a real issue in the data and that NCDC has dead stations that are reporting data that isn’t real: mea culpa.

But that’s the same problem many climate scientists have: they are used to some skeptics being wrong on some issues, so they put up a wall. That is why the careful and exacting analyses we see from Steve McIntyre should be a model for us all. We have to “do better” to make sure that the claims we make are credible, documented, phrased in non-inflammatory language, understandable, and most importantly, right.

Otherwise, walls go up, confirmation bias sets in.

Now that the wall is down, NCDC won’t be able to ignore this. Even John Nielsen-Gammon, who was critical of Goddard along with me in the PolitiFact story, now says there is a real problem. So does Zeke, and we have all sent or forwarded email to NCDC advising them of it.

I’ve also been on the phone Friday with the assistant director of NCDC and chief scientist (Tom Peterson), and also with the person in charge of USHCN (Matt Menne). Both were quality, professional conversations, and both thanked me for bringing it to their attention. There is lots of email flying back and forth too.

They are taking this seriously; they have to, as the final data as currently presented for USHCN is clearly wrong. John Nielsen-Gammon sent me a cursory analysis of Texas USHCN stations, noting he found a number of stations that have “estimated” data in place of actual good data that NCDC has in hand, data which appears in the RAW USHCN data file on their FTP site:

From: John Nielsen-Gammon

Sent: Friday, June 27, 2014 9:27 AM

To: Anthony

Subject: Re: USHCN station at Luling Texas

Anthony –
I just did a check of all Texas USHCN stations. Thirteen had estimates in place of apparently good data.
410174 Estimated May 2008 thru June 2009
410498 Estimated since Oct 2011
410639 Estimated since July 2012 (exc Feb-Mar 2012, Nov 2012, Mar 2013, and May 2013)
410902 Estimated since Aug 2013
411048 Estimated July 2012 thru Feb 2014
412906 Estimated since Jan 2013
413240 Estimated since March 2013
413280 Estimated since Oct 2012
415018 Estimated since April 2010, defunct since Dec 2012
415429 Estimated since May 2013
416276 Estimated since Nov 2012
417945 Estimated since May 2013
418201 Estimated since April 2013 (exc Dec 2013).

What is going on is that while the RAW data file has the actual measurements, for some reason the final data they publish doesn’t get the memo that good data is actually present for these stations, so it “infills” with estimated data using data from surrounding stations. It’s a bug, a big one. And when Zeke did a cursory analysis Thursday night, he discovered it was systemic to the entire record: up to 10% of stations have “estimated” data spanning over a century:

[Chart: analysis by Zeke Hausfather]

And here is the real kicker: “Zombie weather stations” exist in the USHCN final data set that are still generating data, even though they have been closed.

Remember Marysville, CA, the poster child for bad station siting? It was the station that gave me my “light bulb moment” on the issue of station siting. Here is a photo I took in May 2007:

[Photo: the Marysville, CA USHCN station and its poor siting, May 2007]

It was closed just a couple of months after I introduced it to the world as the prime example of “How not to measure temperature”. The MMTS sensor was in a parking lot, in the hot air from a/c units on the nearby electronics sheds for the cell phone tower:

[Photo: the Marysville, CA USHCN site]

Guess what? Like Luling, TX, which is still open but getting estimated data in place of the actual data in the final USHCN data file, Marysville, even though it was marked closed in 2007 by NOAA’s own metadata, is still producing estimated monthly data, marked with an “E” flag:

USH00045385 2006  1034E  1156h  1036g  1501h  2166i  2601E  2905E  2494E  2314E  1741E  1298E   848i     0
USH00045385 2007   797c  1151E  1575i  1701E  2159E  2418E  2628E  2620E  2197E  1711E  1408E   846E     0
USH00045385 2008   836E  1064E  1386E  1610E  2146E  2508E  2686E  2658E  2383E  1906E  1427E   750E     0
USH00045385 2009   969E  1092E  1316E  1641E  2238E  2354E  2685E  2583E  2519E  1739E  1272E   809E     0
USH00045385 2010   951E  1190E  1302E  1379E  1746E  2401E  2617E  2427E  2340E  1904E  1255E  1073E     0
USH00045385 2011   831E   991E  1228E  1565E  1792E  2223E  2558E  2536E  2511E  1853E  1161E   867E     0
USH00045385 2012   978E  1161E  1229E  1646E  2147E  2387E  2597E  2660E  2454E  1931E  1383E   928E     0
USH00045385 2013   820E  1062E  1494E  1864E  2199E  2480E  2759E  2568E  2286E  1807E  1396E   844E     0
USH00045385 2014  1188E  1247E  1553E  1777E  2245E  2526E  -9999  -9999  -9999  -9999  -9999  -9999

Source: USHCN Final: ushcn.tavg.latest.FLs.52i.tar.gz

Compare to USHCN Raw: ushcn.tavg.latest.raw.tar.gz

In the USHCN V2.5 folder, the readme file describes the “E” flag as:

E = a monthly value could not be computed from daily data. The value is estimated using values from surrounding stations
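To make that concrete, here is a minimal sketch of how one might scan for exactly this condition: a final value flagged “E” where the raw file holds a real measurement. It assumes the layout visible in the Marysville records above (an 11-character station ID, a year, then twelve value+flag pairs, with -9999 meaning missing) and uses hypothetical unpacked file names, so treat it as an illustration, not a turnkey tool.

```python
# Sketch only: find station-months where the FINAL file carries an "E"
# (estimated) value while the RAW file has a real measurement.
# Layout assumed from the Marysville records above; the unpacked file
# names below are hypothetical.
import re

PAIR = re.compile(r'(-?\d+)([A-Za-z]?)')   # a value plus an optional flag letter

def read_monthly(path):
    """Return {(station, year, month): (value, flag)} from one monthly file."""
    records = {}
    with open(path) as f:
        for line in f:
            station, year = line[:11], int(line[12:16])
            pairs = PAIR.findall(line[16:])[:12]        # first 12 pairs = months
            for month, (value, flag) in enumerate(pairs, start=1):
                records[(station, year, month)] = (int(value), flag)
    return records

raw   = read_monthly('ushcn.tavg.latest.raw.txt')       # hypothetical name
final = read_monthly('ushcn.tavg.latest.FLs.52i.txt')   # hypothetical name

suspect = [key for key, (value, flag) in final.items()
           if flag == 'E'                               # estimated in final...
           and raw.get(key, (-9999, ''))[0] != -9999]   # ...but raw data exists

print(len(suspect), 'station-months are estimated despite good raw data')
```

John Nielsen-Gammon’s Texas list above is essentially this test restricted to one state.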

There are quite a few “zombie weather stations” in the USHCN final dataset, possibly up to 25% of the 1218 stations that make up the network. In my conversations with NCDC on Friday, I was told these were kept in and “reporting” as a policy decision, to provide “continuity” of data for scientific purposes. While there “might” be some justification for that sort of thinking, few people know about it: there’s no disclaimer or caveat in the USHCN FTP folder at NCDC or in the readme file that describes this. They only “hint” at it, saying:

The composition of the network remains unchanged at 1218 stations

But that really isn’t true, as some USHCN stations out of the 1218 have been closed and are no longer reporting real data, but instead are reporting estimated data.
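Reusing read_monthly() and the final dictionary from the sketch above, one rough way to flag zombie candidates is to look for stations whose recent reported months are all estimates; the 24-month window here is an arbitrary illustrative threshold, not an NCDC definition.

```python
# Sketch continued: call a station a "zombie" candidate if its last 24
# reported months in the FINAL file are all E-flagged estimates.
from collections import defaultdict

by_station = defaultdict(list)               # station -> [(year, month, flag)]
for (station, year, month), (value, flag) in final.items():
    if value != -9999:                       # skip months with no value at all
        by_station[station].append((year, month, flag))

zombies = []
for station, months in by_station.items():
    months.sort()                            # chronological order
    recent = [flag for _, _, flag in months[-24:]]
    if recent and all(flag == 'E' for flag in recent):
        zombies.append(station)

print(len(zombies), 'of', len(by_station), 'stations look fully estimated lately')
```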

NCDC really should make this clear. While it “might” be OK to produce a datafile that has estimated data in it, not everyone is going to understand what that means, nor that stations long dead are still producing estimated data. NCDC has failed to notify the public, and even their colleagues, of this. Even the Texas State Climatologist, John Nielsen-Gammon, didn’t know about these “zombie” stations until I showed him. If he had known, his opinion might have been different on the Goddard issue. When even professional people in your sphere of influence don’t know you are doing dead weather station data infills like this, you can be sure that your primary mission to provide useful data is FUBAR.

NCDC needs to step up and fix this along with other problems that have been identified.

And they are. I expect some sort of a statement, and possibly a correction, next week. In the meantime, let’s let them do their work and go through their methodology. It will not be helpful to ANYONE if we start beating up the people at NCDC ahead of such a statement and/or correction.

I will be among the first, if not the first, to know what they are doing to fix the issues, and as soon as I know, so will all of you. Patience and restraint are what we need at the moment. I believe they are making a good faith effort, but as you all know the government moves slowly; they have to get policy wonks to review documents and all that. So, we’ll likely hear something early next week.

These lapses in quality control, and the thinking that infilling estimated data for long dead weather stations is acceptable, are the sort of thing that happens when the only people you interact with are inside your sphere of influence. The “yeah, that seems like a good idea” approval mumble probably resonated in that NCDC meeting, but it was a case of groupthink. Imagine The Wall Street Journal providing “estimated” stock values for long dead companies to provide “continuity” of its stock quotes page. Such a thing would boggle the mind, and the SEC would have a cow, not to mention readers. Scams would erupt trying to sell stock in these long dead companies: “It’s real, see, it’s reporting value in the WSJ!”

It often takes people outside of climate science to point out the problems they don’t see, and skeptics have been doing it for years. Today, we are doing it again.

For absolute clarity, I should point out that the RAW USHCN monthly datafile is NOT being infilled with estimated data, only the FINAL USHCN monthly datafile. But that is the one that many other metrics use, including NASA GISS, and it goes into the mix for things like the NCDC monthly State of the Climate Report.

While we won’t know until all of the data is corrected and new numbers are run, this may affect some of the absolute temperature claims made in SOTC reports, such as “warmest month ever,” “3rd warmest,” etc. The magnitude of such shifts, if any, is unknown at this point. Long term trend will probably not be affected.

It may also affect our comparisons between raw and final adjusted USHCN data we have been doing for our paper, such as this one from our draft paper:

[Figure 20 from our draft Watts et al. 2012 paper: CONUS compliant vs. non-compliant vs. NOAA]

The exception is BEST, which starts with the raw daily data, but they might be getting tripped up into creating some “zombie stations” of their own by the NCDC metadata and resolution improvements to lat/lon. The USHCN station at Luling Texas is listed as having 7 station moves by BEST (note the red diamonds):

[Chart: BEST record for Luling, TX; red diamonds mark station moves]

But there really have been only two, and the station has been just like this since 1995, when it was converted to MMTS from a Stevenson Screen. Here is our survey image from 2009:

[Photo: the Luling, TX station, looking north]

Photo by surfacestations volunteer John Warren Slayton.

NCDC’s metadata only lists two station moves:

[Image: NCDC metadata listing two station moves for Luling, TX]

As you can see below, some improvements in lat/lon accuracy can look like a station move:

[Image: NCDC HOMR location history for Luling, TX]

http://www.ncdc.noaa.gov/homr/#ncdcstnid=20024457&tab=LOCATIONS

[Image: NCDC HOMR miscellaneous metadata for Luling, TX]

http://www.ncdc.noaa.gov/homr/#ncdcstnid=20024457&tab=MISC

Thanks to Paul Homewood for the two images and links above. I’m sure Mr. Mosher will let us know if this issue affects BEST or not.

And there is yet another issue: the recent change to something called “climate divisions” for calculating the national and state temperatures.

Certified Consulting Meteorologist and Fellow of the AMS Joe D’Aleo writes in with this:

I had downloaded the Maine annual temperature plot from NCDC Climate at a Glance in 2013 for a talk. There was no statistically significant trend since 1895. Note the spike in 1913, following super blocking from Novarupta in Alaska (similar to the high latitude volcanoes in the late 2000s, which helped with the blocking and the maritime influence that spiked 2010, as snow was gone by March with a steady northeast maritime Atlantic flow). 1913 was close to 46F, and the long term mean just over 41F.

[Chart: NCDC Climate at a Glance, Maine annual temperature, as downloaded in 2013]

Seemingly in a panicked change late this frigid winter, big changes occurred at NCDC. I wanted to update the Maine plot for another talk and got this from NCDC CAAG.

[Chart: NCDC Climate at a Glance, Maine annual temperature, after the change]

Note that 1913 was cooled nearly 5 degrees F and does not stand out. There is a warming of at least 3 degrees F since 1895 (they list 0.23/decade) and the new mean is close to 40F.

Does anybody know what the REAL temperature of Maine is/was/is supposed to be? I sure as hell don’t. I don’t think NCDC really does either.

In closing…

Besides moving toward a more accurate temperature record, the best thing about all this hoopla over the USHCN data set is the PolitiFact story where we had all these experts lined up (including me as the token skeptic) who stated without a doubt that Goddard was wrong, rating the claim “pants on fire”.

They’ll all be eating some crow, as will I, but now that I have Gavin for dinner company, I don’t really mind at all.

When the scientific method is at work, eventually, everybody eats crow. The trick is to be able to eat it and tell people that you are honestly enjoying it, because crow is so popular, it is on the science menu daily.

323 Comments
MattN
June 28, 2014 1:36 pm

Wow.

June 28, 2014 1:36 pm

Anthony, there are more metadata sources for station moves than the one you point to.
So:
First off, thanks for pointing out that we use RAW data and not adjusted data or zombie data.
On station moves, we use ALL the data on station locations, not just one source.
Here is the kicker: a station move will merely split the station record, NOT adjust it.
If you split a station where there is no actual move, it has no effect.

Chewer
June 28, 2014 1:38 pm

Apparently you missed the internal NCDC memos, otherwise you’d understand why they provide zero statements to the actual data used…

June 28, 2014 1:40 pm

Did I read somewhere that IPCC has abandoned land based temperatures in favor of sea surface temperatures?

kbray in california
June 28, 2014 1:40 pm

Very nice to see this.
Good save.

June 28, 2014 1:42 pm

Why does this bother me?
“Long term trend will probably not be affected.”
Does it translate to:
Yes, we’ve been caught fudging the figures, but we will claim warming, and that it’s AGW.

Stephen Rasey
June 28, 2014 1:46 pm

It may also affect our comparisons between raw and final adjusted USHCN data we have been doing for our paper, such as this one from our draft paper:
Please, do NOT delay the publication of your paper. Just make it version 1 that compares your clean Class 1&2 stations with what NCDC claimed was the truth for the past score of years. It becomes a first and best estimate for how much NCDC has been getting it wrong.
Just add a Post Script to the paper that NCDC now admits a big bug in their process. Your paper strongly indicated something wasn’t right. You now have confirmation.

Stephen Richards
June 28, 2014 1:46 pm

I’m happy to see that you have seen the “light”, Anthony. Your episode with Steven G was wholly unacceptable. As I pointed out, your conversation with Steve Mc, and your subsequent reporting of it, indicated that you had “guided” SteveM to the wrong conclusion, which reinforced your belief in what you were doing. SteveM creating an artifact and the other axes you had to grind with him are not a good excuse for a REAL scientist. I have found SteveG’s work somewhat distasteful, but you have to set aside your feelings and baggage in order to see through the fog.
I have been a skeptic since the ’60s. I read all the works and books I could find on weather and climate and came to the conclusion that they were lying. It wasn’t until I got my degrees and entered a research establishment that I saw exactly what was going on.
I really appreciate your untiring work, your dedication to your blog and your team. Please do not sully your reputation or any other skeptic’s reputation in this way again.
Thanks Anthony
Stephen Richards Engineer, Physicist.

Stephen Richards
June 28, 2014 1:48 pm

Does it translate to:
“Yes, we’ve been caught fudging the figures, but we will claim warming, and that it’s AGW.”
No, it says: don’t worry, we will change our algorithms to ensure AGW remains in the record.
Just look at their UHI adjustments. Absolutely ridiculous, if not incompetent.

Anything is possible
June 28, 2014 1:48 pm

“All of that added up to a big heap of confirmation bias. I was so used to Goddard being wrong that I expected it again. But this time Steve Goddard was right, and my confirmation bias prevented me from seeing that there was in fact a real issue in the data and that NCDC has dead stations that are reporting data that isn’t real: mea culpa.”
==========================================
Kudos to you, Sir.
And the best of British luck to you trying to sort this mess out.

June 28, 2014 1:48 pm

In my humble opinion, the same thing is going on with sea level.

Nick Stokes
June 28, 2014 1:49 pm

“Along with that is his latest followup, showing the problem isn’t limited to Texas”
But what was the problem in Texas? I did a post on Luling here. When you look at the local anomaly plots, there is a very obvious inhomogeneity. The NOAA software detected this and quarantined the data, exactly as it should. It then turned out, via comments of mesoman, who had worked on this very site, that there was a faulty cable causing readings to be transmitted low, and this was fixed in Jan 2014.
So, you might say: good for the computer, it got it right, and Paul H was wrong. A bit of introspection from Paul Homewood and co about how they had been claiming malfeasance, etc.? But no, no analysis at all; instead they are on to the next “problem” in Kansas. And so the story goes: first we had problems in Texas, now in Kansas.
REPLY:
Despite what you think, you can’t “estimate” the characteristics of temperature from the effects of a faulty cable. In Luling’s case, just throw out the data; don’t imagine you are smart enough to be able to predict the resistance changes that occur from rain, heat, humidity, dust, etc. as they affect it, or the next lawnmower that bangs into it. As you’ll note, when “mesoman” ran his test to determine what was wrong, he said the temperatures were fluctuating; he said the data was unstable.
Can you predict what the temperature will be in a thermistor that has a faulty connection at any given moment? Can you predict what low and high temperatures it will produce on any given day, compared to the vagaries of weather it experiences?
It is patently absurd to try to salvage data from a faulty instrument, especially when you have one nearby also recording the temperature.
THROW OUT THE DATA – DON’T TRY TO FIX IT.
Imagine forensic science trying to get away with this stuff. I’m reminded of the famous line from The Green Mile (correction: The Shawshank Redemption): “how can you be so obtuse?”
-Anthony

June 28, 2014 1:50 pm

As if by magic, the corrected, bug-free data will show climate change is worse than we thought.

David Riser
June 28, 2014 1:54 pm

Nicely done Mr. Goddard!
v/r,
David Riser

Keith
June 28, 2014 1:54 pm

Well played by Anthony, admitting that at first he was wrong about the revelations made by “Steven Goddard”. Also, fair play giving Paul Homewood credit. However, we should all be giving Steve Goddard credit. He has been pointing this out for ages on his blog and has taken a huge amount of stick from Anthony, from Zeke Hausfather, Nick Stokes, posters at Judy Curry’s blog, alarmists of various hues, and many others along the way. Yet it appears people only agreed he had a point when a small part of his work was confirmed by Paul Homewood.

June 28, 2014 1:54 pm

Nick Stokes, please find a station with a faulty cable causing readings to be transmitted high.

A C Osborn
June 28, 2014 1:54 pm

Thank you for manning up and telling it how it is.
I know Steve’s overall attitude gets to some people, but he does do some great data mining work.
BEST’s data “summaries” output is also completely biased; I have data for the UK that shows this.

June 28, 2014 1:55 pm

It’s data collection, Anthony, but not as we know it.

June 28, 2014 1:55 pm

What?! Steven Goddard’s post didn’t deal with the issue covered in this post. That there happened to be some problem in the data doesn’t mean Goddard’s post was accurate or correct.

REPLY:
That’s true and false. I said to him there was nothing wrong with the USHCN data, but in fact there is.
He said trends calculated from RAW data using absolutes show a cooling since the 30’s, but I didn’t cover that here. My position is that the raw data is so badly corrupted by siting and other issues that you can’t really use it as a whole for anything of value, and the way he went about it created a false trend.
That’s why I said in part 2 that we still need to do spatial gridding at the very least, but more importantly, we need to get rid of this massive load of dead, dying, and compromised stations, stop trying to fix them with statistical nuances, and just focus on the good ones: use the good data, toss the rest. – Anthony

June 28, 2014 1:57 pm

Let me put a sharper point on this.
The NCDC metadata is not raw metadata. Blindly believing in its accuracy is not something a skeptic will do.
What to do instead?
What we do is consider all sources of metadata. That’s NCDC as well as any other data source that has this station. From those we collect all station moves.
We use all the data to inform the best estimate. We don’t just blindly trust the NCDC data.
After all, look what the post is about.
In short, be skeptical of everything. One cannot say “I’m skeptical of the temperature data, but I trust the metadata.”
Finally, Anthony will recall when NCDC cut off access to the metadata system. When they did that, I FOIA’d them. The mails released indicated that they had little faith in their metadata.
So we look at everything, knowing that mathematically, if we slice a station where there is NO discontinuity in the time series, the answer will be the same as if we didn’t slice it. In other words, the method is insensitive to slicing where there is no discontinuity.
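Mosher’s “a slice with no discontinuity changes nothing” claim is easy to probe on synthetic data. The toy below is my illustration, not BEST’s actual code (which fits station offsets jointly against neighbors): it splits a smooth trending series at an arbitrary point and compares the full-record trend with a weighted average of the segment trends.

```python
# Toy check of "slicing a smooth series is harmless" -- NOT BEST's method.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(600) / 12.0                          # 50 years of monthly steps
series = 0.02 * t + rng.normal(0.0, 0.3, t.size)   # known 0.02 C/yr trend + noise

full_slope = np.polyfit(t, series, 1)[0]           # trend of the intact record

cut = 300                                          # split where NOTHING happened
slopes, weights = [], []
for seg_t, seg_y in ((t[:cut], series[:cut]), (t[cut:], series[cut:])):
    slopes.append(np.polyfit(seg_t, seg_y, 1)[0])
    weights.append(len(seg_t))

spliced_slope = np.average(slopes, weights=weights)
print(f'full: {full_slope:+.4f} C/yr   spliced: {spliced_slope:+.4f} C/yr')
```

In expectation the two agree, which is Mosher’s point; the catch, raised by Stephen Rasey further down the thread, is that the short segments estimate the trend with much larger variance, so low frequency information gets noisier with every unjustified slice.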

A C Osborn
June 28, 2014 1:59 pm

Nick Stokes says: June 28, 2014 at 1:49 pm
Nick, have you read my responses to mesoman? That is not the only station with problems, and the data provided by Zeke proves it.

DC Cowboy
Editor
June 28, 2014 1:59 pm

LOL, C++ == Syrup of Ipecac Syntax.

A C Osborn
June 28, 2014 2:00 pm

Brandon Shollenberger says: June 28, 2014 at 1:55 pm
B***Sh*t, it is exactly what is in his Posts, note plural.

John F. Hultquist
June 28, 2014 2:01 pm

Thanks for this report. I’ve been reading Steven Goddard’s posts on this issue and the comments. A simple acquaintance with the material and data sets hasn’t been enough to follow all of it and a few of the comments have been more caustic than clarifying.
I haven’t read Judith Curry’s latest but will get there later today.
The current post here is quite clear for me but a complete novice would need a lot of background just to decipher the acronyms. If there are new readers I hope they will take some time doing this.
Good for all of you, especially Steven G., for sticking with this.

June 28, 2014 2:04 pm

I have eaten some crow recently in my line of work. I thought there was a problem. There was. But it wasn’t hardware or software. It was a manufacturing defect.

Mike Singleton
June 28, 2014 2:11 pm

Anthony,
Kudos over the public crow mastication. The feathers are usually the hardest to get down.

Nick Stokes
June 28, 2014 2:12 pm

“Despite what you think, you can’t ‘estimate’ the characteristics of temperature from the effects of a faulty cable. In Luling’s case, just throw out the data; don’t imagine you are smart enough to be able to predict the resistance changes that occur from rain, heat, humidity, dust, etc. as they affect it, or the next lawnmower that bangs into it.”
Throw out the data? That’s exactly what they did. They replaced it with an estimate based on neighboring stations, not on trying to repair the Luling data. In the NCDC method, which uses absolute temperatures, you have to have an estimate for each station; otherwise you get into the Goddard spike issues.
I notice that John N-G said there were 13 stations in Texas that have had to replace measured data in recent years, for various periods. I believe Texas has 188 stations in total.
REPLY: Great, you should be a legal adviser in court.
Judge: The blood sample’s tainted! You: OK, THROW IT OUT AND REPLACE IT WITH SOME BLOOD from… THAT GUY, OVER THERE! No, wait, let’s get blood from the nearest five guys that look like him and mix it together. Yeah, that’s a reasonable estimate.
You can’t ever assume your estimates will model reality.
Again, how can you be so obtuse? – Anthony
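For readers following the absolutes-versus-anomalies point, here is a small illustration (mine, with made-up numbers) of the “Goddard spike” mechanism Nick refers to: when you average absolute temperatures, a cool station dropping out yanks the network mean upward, while per-station anomalies are immune.

```python
# Toy illustration of the "Goddard spike": two made-up stations, one dropout.
import numpy as np

years = np.arange(2000, 2015)
warm = 20.0 + 0.02 * (years - 2000)     # valley station, ~20 C climatology
cool = 10.0 + 0.02 * (years - 2000)     # mountain station, ~10 C climatology
cool_reports = years < 2010             # mountain station goes silent in 2010

# Averaging ABSOLUTE temperatures: the mean jumps ~5 C at the dropout.
absolute = [np.mean([w, c] if rep else [w])
            for w, c, rep in zip(warm, cool, cool_reports)]

# Averaging ANOMALIES (each station minus its own 2000-2009 mean): no jump.
warm_anom = warm - warm[:10].mean()
cool_anom = cool - cool[:10].mean()
anomaly = [np.mean([wa, ca] if rep else [wa])
           for wa, ca, rep in zip(warm_anom, cool_anom, cool_reports)]

print('absolute mean, 2009 -> 2010: %.2f -> %.2f' % (absolute[9], absolute[10]))
print('anomaly  mean, 2009 -> 2010: %.2f -> %.2f' % (anomaly[9], anomaly[10]))
```

This is why a method that averages absolutes must substitute something for a missing station; whether that substitute belongs in a published station-level data file is the dispute above.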

June 28, 2014 2:13 pm

One last one, so that people can understand the various data products.
First, some definitions:
Raw data: raw data is data that presents itself as unadjusted. That is, there is no evidence to suggest it has been changed by any processing step. Typically there will be an associated ADJUSTED file.
Adjusted data: adjusted station data, in every case I have looked at, is MONTHLY data. To adjust data, the people in charge of it do a DISCRETE station-by-station adjustment. They may adjust for a station move by applying a lapse rate adjustment. This has error. They then may adjust it for TOBS. This has error. They then may adjust it for instrument changes. This has error. They then may adjust it for station moves in lat/lon. This has errors.
So, what do we do differently?
1. We use all sources of data, BUT we start by using raw daily where that is available. The big sources are GHCN Daily (30K+ stations) and Global Summary of the Day (GSOD). This is the vast, vast majority of all data.
2. Where a station doesn’t have raw daily, we use raw monthly. These are typically older stations, prior to the 1830s.
Next we produce 3 datasets:
A) RAW. This is a compilation of all raw sources for the site.
B) “Expected.” This is our best estimate of what a site WOULD HAVE RECORDED if it did not move, did not have instrument changes, TOBS changes, etc. These “corrections” are not calculated discretely. Rather, a station and all its neighbors are considered. A surface is generated that minimizes the error. Now, that error may be due to a station move, a faulty instrument, an air conditioner, an instrument switch. These are not calculated from the bottom UP; rather, they are estimated from the TOP DOWN.
C) Regional expectation. This is dependent on the gridding one selects.
From the readme. READ carefully.
You want raw data? Go ahead, use it.
You want to know what the EXPECTED values are for any station, given ALL the information? Use that.
You want to know what a regional expectation is? Use that.
Each of these datasets has a different purpose. What do you want to do?
From the readme, which you’ll skip:
“% The “raw” values reflect the observations as originally ingested by
% the Berkeley Earth system from one or more originating archive(s).
% These “raw” values may reflect the merger of more than one temperature
% time series if multiple archives reported values for this location.
% Alongside the raw data we have also provided a flag indicating which
% values failed initial quality control checks. A further column indicates
% dates at which the raw data may be subject to continuity “breaks”
% due to documented station moves (denoted “1”), prolonged measurement
% gaps (denoted “2”), documented time of observation changes (denoted “3”)
% and other empirically determined inhomogeneities (denoted “4”).
%
% In many cases, raw temperature data contains a number of artifacts,
% caused by issues such as typographical errors, instrumentation changes,
% station moves, and urban or agricultural development near the station.
% The Berkeley Earth analysis process attempts to identify and estimate
% the impact of various kinds of data quality problems by comparing each
% time series to neighboring series. At the end of the analysis process,
% the “adjusted” data is created as an estimate of what the weather at
% this location might have looked like after removing apparent biases.
% This “adjusted” data will generally be free from quality control
% issues and be regionally homogeneous. Some users may find this
% “adjusted” data that attempts to remove apparent biases more
% suitable for their needs, while other users may prefer to work
% with raw values.
%
% Lastly, we have provided a “regional expectation” time series, based
% on the Berkeley Earth expected temperatures in the neighborhood of the
% station. This incorporates information from as many weather stations as
% are available for the local region surrounding this location. Note
% that the regional expectation may be systematically a bit warmer or
% colder than the weather stations by a few degrees due to differences
% in mean elevation and other local characteristics.
%
% For each temperature time series, we have also included an “anomaly”
% time series that removes both the seasonality and the long-term mean.
% These anomalies may provide an easier way of seeing changes through
% time.

June 28, 2014 2:17 pm

For those of us who have been reading Steven Goddard’s blog for some time now, we have seen case, after case, after case of blatant data tampering. But the real “tell” is that the government data sets always lower the past temps and warm the present. ALWAYS.
There is no way to honestly explain that fact. No honest way. I see no way to ever trust the government data sets, and I don’t really believe that the past records (the original raw data) are really available anymore.

Stephen Rasey
June 28, 2014 2:17 pm

The exception is BEST, which starts with the raw daily data, but they might be getting tripped up into creating some “zombie stations” of their own by the NCDC metadata and resolution improvements to lat/lon. The USHCN station at Luling Texas is listed as having 7 station moves by BEST (note the red diamonds):
BEST and its supporters suffer from their own confirmation bias in several ways.
Who can look at Luling, TX and not conclude that the scalpel is being wielded by Jack the Ripper? Either the scalpel process is wrong, or the station is so corrupted that it should be eliminated; it cannot be saved by any surgeon.
There is some theoretical justification for using a scalpel to split temperature station records at known moves and replacements of equipment. I accept that. But I and others have argued that instrument drift is a significant part of the measured record. You can only split the record if you measured the drift in the first instrument at the time you took it off line. This is a necessary recalibration event and important information that BEST discards. You may know that an MMTS reads 0.5 degrees warmer than a pristine Stevenson Screen, but in what condition is the Stevenson Screen at the time of replacement? We know that the Stevenson Screens studied have a tendency to warm with age. Unless you attempt to measure the drift of the instrument at the time of replacement, you should not split the station.
While there are theoretical justifications for splitting temperature records, there are more theoretical justifications for NOT splitting. The primary justification, in my opinion, is the loss of low frequency information content by shortening segments and ignoring absolute values in preference for slope. This applies a band pass filter to all temperature records when a LOW pass filter is what should be applied.
Yes, watch the confirmation bias.
Has BEST ever justified their process because it agrees with NCDC results?

June 28, 2014 2:21 pm

Thanks Anthony.
Regards.

Stephen Rasey
June 28, 2014 2:22 pm

Stoval (Stoval) at 2:17 pm

For those of us who have been reading Steven Goddard’s blog for some time now, we have seen case, after case, after case of blatant data tampering.

Data tampering by whom? It reads as though Steve Goddard is the tamperer.
But I think you mean that Goddard has exposed data tampering by others.

Editor
June 28, 2014 2:23 pm

omnologos says:
June 28, 2014 at 1:54 pm
> Nick Stokes please find a station with a faulty cable causing readings to be transmitted high
Why just Nick? There are lots of other people with cables here. And why just high? Low is wrong too!
In fact, the outdoor thermometer in my kitchen has a bad seal and rain water gets into the connection with the thermistor. Rain water is somewhat conductive, and that appears to reduce the resistance of the thermistor, which causes a low reading.
And why just temperature? I could go on and on about audio, video, Ethernet, SCSI, USB and many other cables types that have earned my scorn and repair (or destruction). The hot electrical outlet due to a loose wire was interesting too.

temp
June 28, 2014 2:23 pm

The statement that the NCDC releases had better include a complete list of all the “peer-reviewed” “science” that is not junk.
This event will require the rewriting of a huge number of papers, and they need to demand that it be done… and they need to follow up with all the so-called “science” journals so they all know the papers based on this data are wrong and need to, at the very least, include a huge disclaimer about the issue.
If they don’t, they are all but admitting this was planned and are only backtracking because they got caught and have moved into coverup mode.

temp
June 28, 2014 2:25 pm

PS: someone should FOI the emails for this data change; bet the dog will eat them real quick.

John Slayton
June 28, 2014 2:27 pm

In August of ’09 I attempted to locate the site of USHCN station 353095 in Fremont, Oregon. MMS had reported it closed on 19 April 1996. By dumb luck I happened on COOP station 353029 a few miles away in Fort Rock, closed in 1918, re-opened on 19 April 1996. The B-91s from both stations were signed by the same J. Wagner.
I assumed that Fort Rock was the continuation of Fremont. However, I have never found any indication that Fort Rock was ever included in USHCN. As I noted in an earlier thread this week, the USHCN station list generally shows when a record’s source has been changed from one station to another, as, for example, when Caldwell, ID, is dropped and Nampa Sugar Station replaces it. I never found such a change noted for Fremont / Fort Rock.
So, prodded by the present discussion, I just looked at the station data for Fremont (353095), and I find to my astonishment that it continues to be reported as station 353095, with an E flag, up to the end of 2013.

MattN
June 28, 2014 2:28 pm

So this would be the second SIGNIFICANT issue with the US record found in the last decade. And WE’RE supposed to have the best record and know what we’re doing.

Joseph Bastardi
June 28, 2014 2:31 pm

We have to stop circular firing squads. For instance, Goddard posts so much, so many times, that he may have things in error sometimes. But turf wars among us are like a bunch of theologians arguing over how many angels you can stick on the head of a needle: they really have nothing to do with the search for the truth. In that case the subject is something of a higher authority; in the case of CO2, it is something that I think is, as Bill Gray put it many years ago, a scam (or was it hoax? I will have to ask him). To our credit, we are so obsessed with right and wrong that we do argue over small things, and yes, they do matter. But I have found things I did not believe before that I now do, and vice versa. One thing that keeps getting clearer to me: the amount of time and treasure wasted on 1/100th of the GHG, 0.04% of the atmosphere, which has 1/1000th the heat capacity of the ocean and which, next to the effects of the sun, oceans and stochastic events, probably cannot be measured outside the noise, is a giant red herring, meant to distract from a bigger agenda that has nothing to do with our obsessions.
In the end, Thoreau may sum all this up, if I remember it correctly: the sum of all our fictions adds up to a joint reality.

June 28, 2014 2:31 pm

Anthony, I get that this is relevant to what you said to Steven Goddard, but I’ve read his post, and he didn’t say anything about what you describe here. There isn’t any indication that what you describe in this post caused the difference Goddard highlighted.
REPLY: The trends issue came in a subsequent post, related to the first. But the difference between raw and estimated data in his graph is the issue I’m addressing; not sure why you can’t see that. There are in fact places where no raw data exists, but estimated data does. Marysville, case in point. – Anthony

Stephen Rasey
June 28, 2014 2:31 pm

famous line from The Green Mile: “how can you be so obtuse”?
The Shawshank Redemption
but they are both from Stephen King.

June 28, 2014 2:32 pm

Being a total novice at this, I have to ask: knowing that the figures have been fudged, how do “we” know the raw data has not been tampered with?

A C Osborn
June 28, 2014 2:33 pm

Steven Mosher says: June 28, 2014 at 2:13 pm
You keep boasting about how good BEST is, but their summaries are just as biased as the problem exposed here, and I have the proof, for the UK at least.

JustAnotherPoster
June 28, 2014 2:35 pm

A challenge to Nick Stokes et al.:
Please find a single station that has been adjusted down in the last 10 years of weather history.
The process that bumped one station up should also cool some down.
That’s the challenge.

onlyme
June 28, 2014 2:36 pm

Ric Werme says:
June 28, 2014 at 2:23 pm
In all the instrumentation and controls work I did, cable and sensor errors were designed to fail low or open.
Safety issue.
Failures were not randomly high or low, averaging to 0.
Perhaps that’s what omnologos is pointing to?

harry
June 28, 2014 2:38 pm

“How can you be so obtuse” was from The Shawshank Redemption.

Justthinkin
June 28, 2014 2:38 pm

Now I know for sure why I keep coming to this site. Mr. Watts, you have shown a level of integrity, honesty, and curiosity that is very rare in this day and age. I would be honoured to eat crow with you; however, seeing as I am in Northern Alberta, I’ll gladly share the duck I am having tonight.

Nick Stokes
June 28, 2014 2:39 pm

“Wait, let’s get blood from the nearest five guys that look like him and mix it together. Yeah, that’s a reasonable estimate.”
They are computing a spatial average, based on stations. Infilling with neighboring data doesn’t change anything; it just, in the final sum, changes the weighting. The neighboring stations get a bit more weight to cover the area near Luling.
As I showed in the shaded plots, there is plenty of data in the region. It doesn’t depend on Luling. Using a neighbour-based estimate is just the way of getting the arithmetic to work properly. With anomalies you could just leave Luling out completely. With absolute values, you have to do something extra, so that the climatology of the omitted Luling doesn’t create Goddard-spike-type distortions. Estimating from neighbor values is the simplest way to do it properly.
REPLY: Oh Nick, puhlease. When 80% of your network is compromised by bad siting, what makes you think those neighboring stations have any data that’s worth a damn? You are adjusting bad data with… drum roll… more bad data. And that’s why homogenization fails here. It’s perfectly mathematically legitimate, but it’s useless when the data you are using to adjust with is equally crappy, or crappier, than the data you want to “fix”.
The problem with climate science is they really have no handle on just how bad the surface network is. I do. Evan does. John N-G does. Even Mosher has a bit of a clue.
You can’t make a clean estimated signal out of a bunch of muddied signals, ever.
Now it’s well past your bedtime in Australia. Maybe that is why you aren’t thinking clearly. -Anthony

Jeff D.
June 28, 2014 2:40 pm

Anthony, humble pie helps cover the taste of crow, in my personal experience. Your humility has been displayed for all; well done, sir.
Steve G, thank you for living by my personal motto: “Question Freaking Everything.” 🙂

Editor
June 28, 2014 2:41 pm

Anthony wrote:

Then there was my personal bias over previous episodes where Goddard had made what I considered grievous errors, and refused to admit to them. There was the claim of CO2 freezing out of the air in Antarctica, later shown to be impossible by an experiment and

As one of the principals in the CO2 “frost” brouhaha, that affair still leaves a bad taste in my mouth. I’m glad I have good company.
The ICCC in Las Vegas will be interesting. Maybe I’ll take a vow of silence.

June 28, 2014 2:44 pm

At Nick… I keep asking: if the station read warm, not cold, would any of the code have picked this up?
Personally, I think the code looks only for odd “cold” readings and moves them up, i.e. the opposite of adjusting for UHI.
If you can’t get the maths to work, you can’t get the maths to work. Admit that. Estimating leaves you wide open…

karnost
June 28, 2014 2:46 pm

This is an excellent opportunity to do a meaningful analysis of the efficacy of the estimates. Are they accurate? Are they representative? How do they differ from the “real” temp?

climatereason
Editor
June 28, 2014 2:47 pm

I originally posted this over at CE.
Would you bet your house on the accuracy of a temperature reading prior to the use of properly sited digital stations? No. Whilst many stations are individually good, many more have a string of associated problems. Even the good ones have probably been substantially adjusted, or there is data missing and interpolation has taken place.
I wrote about some of the myriad problems with taking accurate temperatures here.
http://wattsupwiththat.com/2011/05/23/little-ice-age-thermometers-%E2%80%93-history-and-reliability-2/
The further back in time, the more potential for problems there is. Thermometer accuracy, accuracy of readings, calibration, time of day, recording a true max and min, use of appropriate screens: there are many and varied ways of messing up a temperature. If you really want to try to get to the REAL temperature of a historic record, then you need to spend millions of Euros and several years examining 7 historic European temperature records, as Camuffo did.
The result is a 700 page book which I have had to borrow three times in order to read it properly
http://www.isac.cnr.it/~microcl/climatologia/improve.php
Do all temperature readings, especially the historic ones, get such five star analysis? No, of course not. We should treat them all with caution and remember Lamb’s words about them, that ‘we can understand the tendency but not the precision.’ Some will be wildly wrong and misleading; some will be good enough. Do we know which is which? I doubt it.
I have no doubt that temperatures have ranged up and down over the centuries, as there is other evidence to support this. Do we know global temperatures to tenths of a degree back hundreds of years? Of course not. Do we know a few regional examples of land temperatures to an acceptable degree of accuracy? Yes, probably. Do we know the ocean temperature to a few tenths of a degree back to 1860? No, that is absurd.
Have temperatures been amended from the raw data? Yes. Has it been done as part of some deliberate tampering with some of the record, rather than as a scientific adjustment for what are considered valid reasons? I remain open to the possibility but am not a conspiracy theorist.
Someone like Mosh, who I trust, needs to keep explaining to me why the past records are adjusted. With this in mind, it needs clarification as to why the readings from the famous 1987 Hansen hearing differ in part from the ones GISS then produced. I am sure there must be a valid reason, but as yet no one has told me what it was.
It is absurd that a global policy is being decided by our governments on the basis that they think we know to a considerable degree of accuracy the global temperature of land and ocean over the last 150 years.
Sometimes those producing important data really need to use the words ‘ very approximately’ and ‘roughly’ and ‘there are numerous caveats’ or even ‘we don’t really know.’
tonyb

MattN
June 28, 2014 2:49 pm

Nick, just stop. You are in a hole and yet you just keep digging. Just stop.

R. Shearer
June 28, 2014 2:49 pm

One might consider making a contribution to Goddard’s tip jar.

Nick Stokes
June 28, 2014 3:00 pm

“You can’t make a clean estimated signal out of a bunch of muddied signals, ever.”
Then there’s no point in discussing analysis, is there? But it is the job of NOAA and USHCN to interpret the data as best they can, even if you think it is worthless. And I think at Luling they did everything right. They picked up a problem, quarantined the data, and got the best estimate available with the remaining data.
“Now it’s well past your bedtime in Australia. Maybe that is why you aren’t thinking clearly”
When it’s afternoon in California, the sun is over the Pacific somewhere. It’s 8am here.
REPLY: Right you are; I thought you’d been up all night based on your commentary elsewhere. I also thought you lived in Perth. Obviously not.
Estimating data is the issue, and again, when you use, let’s say, the six nearest stations, and statistically at least 80% of them are unacceptably sited, resulting in a warm bias (and that’s not just my opinion; that’s from Leroy 99 and 2010, and NCDC’s use of them to set up USCRN), that means your signal is going to be biased, full of the mud from the other stations.
It renders the idea of a useful estimate pointless.
And if you are too obtuse to see that, then yes, there’s nothing else to discuss. -Anthony

June 28, 2014 3:03 pm

Nice post, Anthony. I am very happy you admitted you were wrong after truly looking at what Steve Goddard claimed.
Now I would hope you work with Steve Goddard, because he was the original whistleblower and the one who dug up the findings.
Going to be interesting.
BTW, the Sunshinehours blog discovered the other day that July 2012 was not the hottest month after all. Data has been changed recently.
http://sunshinehours.wordpress.com/2014/06/25/noaa-usa-july-1936-maximum-temperatures-top-3-are-1936-1934-and-1901/
http://sunshinehours.wordpress.com/2014/06/25/noaa-usa-july-1936-is-back-on-top/
And new ones also; he is uncovering so many.
http://sunshinehours.wordpress.com/2014/06/28/ushcn-2-5-omg-the-old-data-changes-every-day-2/
http://sunshinehours.wordpress.com/2014/06/28/ushcn-2-5-omg-the-old-data-changes-every-day-updated/
A can of worms has been opened.
And this must not just be confined to the US temp records; how far has it spread? Worldwide temp records?

Stephen Rasey
June 28, 2014 3:05 pm

On using absolute temperatures vs anomalies, in the context of low frequency information content.
I accept the problem with using absolute temperatures, such as deg C above zero, or even deg K, when you are trying to compare stations at sea level with those at 3000 m, or those at 30 deg N latitude with those at 45 deg N. And yes, missing data using these absolute temps creates spurious anomalies like those found in Marcott and perhaps the Goddard hockey stick.
From a low frequency preservation standpoint, there is no loss of information if each station uses its own baseline to establish a zero-based absolute temperature anomaly. The key is that the baseline MUST NOT CHANGE over time. Keep that criterion and the low frequency information (climate change) is preserved. This is like a simple tare measurement when weighing samples in a lab. If you do it right, you measure the tare before and AFTER the procedure.
My problem with the BEST process is that they change the baseline for each station, on criteria that result in chopping the station records into segments much too short to preserve the climate signal sought. If they feel the need to chop a station record into segments as short as 10 years, then the station is useless for the purpose of climate monitoring. It is as if they could discern changes in the tare of the beaker just by looking at the string of samples weighed. Madness.
Thinking you are improving the data by manipulating it is the worst form of confirmation bias. At best, tare adjustments add uncertainty to the measurement. But if you don’t measure the tare, don’t assume it changes. Take your thumbs off the scale.

June 28, 2014 3:06 pm

Anthony, I don’t see why you aren’t “sure why [I] can’t see” you’re addressing “the difference between raw and estimated data in his graph.” This post never claims to address that difference. It never claims to explain why he got the graph he got. There isn’t a single word about quantifying the effect of the problem you highlight. This post does nothing to show the problem highlighted in it actually explains the difference Goddard highlighted.
In fact, it seems unlikely this issue does explain the difference. Steven Goddard used a graph from 1999. At that point, GISS only had data for a fraction of its stations for 1990 and on. Data for thousands of stations were missing from their data. It would be no surprise if missing data in the US regions caused the results to be distorted.
When comparing results from what a decade or more apart, things like code/version changes and data availability are far more likely culprits for differences in results than this bug. As such, there’s no reason to posit a causal link between the bug you discuss and the things Goddard said.
REPLY: The missing stations aren’t the issue, that’s been well known for some time. The fact that estimated data from long-dead and missing stations is being produced is news. The idea was that this infilling was designed to fix occasional lost data, not do a wholesale replacement of weather station data for stations like Marysville that have been closed since 2007. -Anthony

June 28, 2014 3:07 pm

Integrity: you can’t buy, borrow or steal it.
I like integrity.

Editor
June 28, 2014 3:07 pm

Steven Mosher says (June 28, 2014 at 1:36 pm): “if you split a station where there is no actual move it has no effect”. That should be “little effect”, not “no effect”. After all, if you put in a station move after each measurement, you end up with nothing, so putting in moves does have some effect.
Please note, this comment is a very minor nitpick, there are much more important issues on this thread. I hope that something seriously good comes of the whole exercise (apart from a seriously good quotable quote about scientists eating crow).

richardscourtney
June 28, 2014 3:08 pm

Nick Stokes:
I write to congratulate you on your fortitude and to commend you for the honour you display by ‘standing your ground’.
As is clear from my comments to you on the other thread, I think you are profoundly mistaken about the validity of the various methods used by you and others to determine GASTA (global average surface temperature anomaly). But that disagreement does not blind me to the courage you are displaying here.
You have risen in my esteem, and I hope others are also observing the respect you deserve for your contributions to this discussion.
Please continue your contributions.
Richard

June 28, 2014 3:14 pm

For those wanting a visual aid, here is a map of Illinois TMAX USHCN for May 2014 showing raw, TOBS and final. There are a bunch of stations without raw.
https://sunshinehours.wordpress.com/2014/06/28/ushcn-2-5-omg-the-old-data-changes-every-day-2/

Bloke down the pub
June 28, 2014 3:20 pm

Might now be a good time to instate Real Science on your blog-roll, even if it goes in the political climate section?

Editor
June 28, 2014 3:23 pm

Just to clarify.
The analysis Anthony refers to for Kansas looked at mean temperatures at every USHCN station in Kansas for January 2013.
The USHCN final dataset has adjusted UPWARDS every single station, bar one, by an average of about 0.5C.
This is in addition to cooling historic temperatures; e.g. temperatures for 1934 have been reduced by about half a degree as well.
http://notalotofpeopleknowthat.wordpress.com/2014/06/28/ushcn-adjustments-in-kansas/
There are also 8 out of 29 stations which have “Estimated” numbers.
Does Nick Stokes really believe these are all due to faulty sensors?

June 28, 2014 3:24 pm

MUCH ADO ABOUT SOMETHING!
At world-famous U of Michigan School of Engineering, which I completed by the way, we learn rules about data. The first rule is that the data always have accuracy only to a certain level, known as “significant digits.” The second rule is that, if you need better data, buy a better instrument. Calibration back to NIST is key as well.
BEST and NCDC and GISS and CRU, and all of the rest of climate record keepers, any attempt to “improve” on raw data by “adjusting” will always be met with contempt by engineers, who in general make our living by getting RESULTS!
Thermometers typically are used to report WEATHER. Attempting to generate CLIMATE signals is futile at best. Gridding adds no information whatsoever, but lets the gridders claim their results are “Global.” Thermometers of course should be located where the air temperature is representative, otherwise the data are spurious. Averaging from nearby stations is ludicrous, merely claiming information you do not have. Extrapolating across hundreds of kilometers is even more ludicrous.
Adjusting historic records on a monthly basis would have gotten me thrown out of school. My hero Professor Brown chides “Climate Science” for the complete and total absence of error bars, and his words are significant.
Average Temperature is itself a very dubious concept, as temperature is defined as the average kinetic energy of the molecules of the mass whose temperature is being recorded. What mass is being discussed, exactly?

crosspatch
June 28, 2014 3:27 pm

The problem I have with NCDC is that it is bad enough that they use “estimated” data, but then they go back into the record and retroactively change it. If you look in the NCDC database at the temperature for some date in the past, and then look again in a few years’ time, you will find it has been changed. It seems that another problem with using estimated data is that those estimates are re-calculated every month. For example, this is how the database has changed for two dates over time:
http://climate4you.com/images/NCDC%20Jan1915%20and%20Jan2000.gif

Stephen Rasey
June 28, 2014 3:27 pm

Steven Mosher says (June 28, 2014 at 1:36 pm): “if you split a station where there is no actual move it has no effect”.

Mike Jonas, this is no nitpick.
It is a statement that makes my jaw drop in disbelief.
If Steven Mosher, and the rest of BEST, believe that, then that explains why scalpel has turned into a Cuisinart, mincing temperature records willy-nilly, because they believe an unjustified slice does no harm!
With an unjustified slice:
* What was one trend has now been made into two sequential trends with an offset.
* An offset whose significance is ignored because, “it’s a station move – therefore a new station.”
* Low Frequency content is lost. Higher frequency (weather) information is weighted more heavily.
* The uncertainty in the result rises greatly (just make the slice at different points and see the differences).
Confirmation Bias is the Blue Plate Special today. Another serving of crow on order.
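
A toy illustration of the low-frequency loss described in the list above. This is a Python sketch with made-up data, not BEST's actual algorithm: it crudely gives each segment of a split record its own free offset by demeaning each segment separately, and the fitted trend collapses toward the within-segment trend.

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1900, 2000)
    true_trend = 0.005  # deg C per year: the low-frequency signal
    series = true_trend * (years - years[0]) + rng.normal(0, 0.3, years.size)

    # Trend of the unsplit record
    whole = np.polyfit(years, series, 1)[0]

    # "Scalpel" with an unjustified slice: split at an arbitrary year and
    # let each segment carry its own offset (demean each one separately)
    cut = 50
    spliced = np.concatenate([series[:cut] - series[:cut].mean(),
                              series[cut:] - series[cut:].mean()])
    split = np.polyfit(years, spliced, 1)[0]

    print(f"trend, unsplit record  : {whole:+.4f} C/yr")
    print(f"trend, split + offsets : {split:+.4f} C/yr")  # biased low

The step between the two segment means carried most of the century-scale trend; treating it as a free offset from a "station move" throws that information away, which is exactly the point about lost low-frequency content.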

JFD
June 28, 2014 3:29 pm

Nick, you are obviously a bright guy, but mon ami, you wear blinders and can't see anything except your own keyboard. In Anthony's post starting 6-26-14, a poster listed a URL for a city with a temperature chart very similar to Luling, Texas. Anthony screwed up on Steve Goddard and admitted it like a man. I suspect that you are young, otherwise you would have learned not to be such a know-it-all and think that only you know something. Just a word to the wise from someone who has been there.

A C Osborn
June 28, 2014 3:30 pm

Nick Stokes says:
June 28, 2014 at 3:00 pm
Nick, they couldn't have used San Antonio, which is one of the nearby stations, as it also has Estimated values from May 2013 to the current month.
I found that 3 of the first 10 stations in the Texas zip file have estimated data for 2012-2014.
Yes, 30% on a small sample.

Stephen Rasey
June 28, 2014 3:32 pm

This post definitely earns a Watts’ Best tag in my library.
Well done and well worth remembering.

June 28, 2014 3:35 pm

Oh, I think I see the problem. This post talks about a Polifact story which criticized Steven Goddard's claim about adjustments to a temperature record. In doing so, it said things like:

the best thing about all this hoopla over the USHCN data set is the Polifact story where we have all these experts lined up (including me as the token skeptic) that stated without a doubt that Goddard was wrong and rated the claim "pants on fire".
They'll all be eating some crow, as will I, but now that I have Gavin for dinner company, I don't really mind at all.

I, perhaps naively, thought that meant this post was about what that story covered. As such, I pointed out this post fails to show Goddard was right, in any way, about what he was quoted as saying in that story. I failed to notice this post was actually saying Goddard was right about an entirely different issue when it said:

this time Steve Goddard was right

Goddard was right to point out this bug existed. That just has nothing to do with the Polifact story this post portrays him as being right in.
So yeah, my bad. Sorry I didn’t realize you were giving Goddard credit in reference to one issue because he was right on a completely different issue. I guess I’m just bad at spotting incoherence.
REPLY: I have some crow I haven't been able to choke down yet, happy to share 😉 See the link to the "40% fabrication" at the top of the post. I agree though, there's a lot of material here, and if you haven't been following the issue, it's easy to get sidetracked. – Anthony

bw
June 28, 2014 3:37 pm

In 2007 I started monitoring a few GISS stations. Amundsen-Scott, Vostok, Halley, Davis in Antarctica. Then a few more over the years. It was just to check some of the claims of data changing. Also to see if I could plot the temps over time. Some small changes at first, but nothing of concern.
In 2008 I added more stations, such as Nuuk, to check more claims. It is easy to save the temp files for each station. Picking a few stations at random points around the GISS globe, I worked up to about 12.
Goddard was right. Past data points were changing. Usually small amounts on a monthly basis.
Some station data did not change at all, but only the new monthly data were added.
In 2009 I was saving monthly data for 15 stations. Every month the most recent data is added to the end of each file. But, scanning past data, it became apparent that some historical station data was being "revised" almost completely. That means a time plot shifted by enough to notice. Usually the "new" plot showed that the past became cooler. Small amounts, maybe one tenth of a degree. When I got to 20 stations, I noticed that one or two sets of data were being extensively revised every month. At the end of 2009 there was a sudden revision of more stations by larger amounts. Almost always the data before 1960 was getting cooler.
By 2010 I was up to 30 stations, plus the Antarctic four.
Akureyri, Bartow, Beaver City, Concordia, Crete NE, Ellsworth, Franklin, Gothenburg, Hanford, Honolulu, Hilo, Jan Mayen, Kodiak, Kwajaalein, La Serena, Loup City, Minden, Nantes, Nome, Norfolk Island, Nuuk, Red Cloud, St. Helena, St. Paul, Steffenville, Thiruvanantha, Truk, Wakeeny, Yakutat, Yamba.
Downloading 30 sets of temperature data to save, then putting all the data into an Excel file to make plots, then comparing time plots to see where individual data points had changed. It became clear that a regular revision of about 10 percent of all GISS stations was taking place on a monthly basis.
Every month in 2010, three of the 30 stations had changes to past data points that were visible by plotting. Looking at every point for every station is not possible, but scanning at random shows that many stations have small changes. 2011 was about the same, but at the end of 2011 there was another substantial change to many stations. In early 2012 my 10-year-old hard drive died. I had a separate backup, but it was found to be virus-infected to the extent that I could not save all the data. I still have the drive and can open some files but can't copy files to another drive.
In 2012 GISS began making larger changes to more past data. Some stations were showing monthly changes that could only be described as "erratic", with some large shifts for a couple of months, followed by data returning to values from months earlier. Sometime in December 2012 there was a very large change in past data for many stations. Some stations showed data that stopped in 2007 or 2008. In March 2013 those stations were suddenly "restored" and resumed showing complete data up to March 2013.
Every month GISS changes some historical data by small amounts. Some stations are being selected for larger revisions at random times. For some stations, portions of data from decades ago vanish completely. The numbers are replaced with "999.9", indicating no data. Months later those same past data points re-appear, just as they were before, sometimes with small adjustments. If what is happening to a sample of 30 stations is any indication of the entire GISS data set, then I'd say that there is an Orwellian plot to manipulate the past. There is no record of these past changes; it would be impossible to verify or reproduce them. Over the years, the past keeps getting colder.
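
For anyone who wants to replicate this kind of bookkeeping, the workflow amounts to keeping dated snapshots of each station file and diffing successive snapshots for changed historical values. A minimal Python sketch; the file layout (a year followed by 12 monthly values) and the file names are assumptions for illustration, not GISS's actual format:

    def load_snapshot(path):
        """Read {year: [12 monthly values]} from a saved station file."""
        data = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                data[int(fields[0])] = [float(v) for v in fields[1:13]]
        return data

    def changed_history(old_path, new_path, tol=0.05):
        """Print historical values that differ between two snapshots."""
        old, new = load_snapshot(old_path), load_snapshot(new_path)
        for year in sorted(set(old) & set(new)):
            for m, (a, b) in enumerate(zip(old[year], new[year]), start=1):
                if abs(a - b) > tol:
                    print(f"{year}-{m:02d}: {a:.1f} -> {b:.1f}")

    # e.g. changed_history("nuuk_2009-06.txt", "nuuk_2009-07.txt")
    # (hypothetical file names for two saved monthly snapshots)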

Gaylon
June 28, 2014 3:41 pm

I know of less than a dozen sites that would post a lead story such as this. It is very refreshing, kudos! Carry on!

Admin
June 28, 2014 3:41 pm

Goddard posts so much… so many times, that he may have things in error sometimes.

The problem, Mr. Bastardi, and I don't want to get into a big back and forth, is that, for many of us, this is much closer to "even a blind squirrel stumbles across a nut sometimes" than an occasional error. Goddard's analysis in this case was shown to be faulty multiple times. It was Homewood's analysis that caused this issue to be taken seriously.

June 28, 2014 3:44 pm

Anthony, you are to be commended for your integrity. Not everyone can admit to mistakes.

Nick Stokes
June 28, 2014 3:48 pm

omnologos says: June 28, 2014 at 1:54 pm
“Nick Stokes please find a station with a faulty cable causing readings to be transmitted high”

Sometimes Nature just has a warmist bias. I’m not on top of the details here, but it seems cables are supposed to have near zero resistance. Positive resistance will reduce the voltage. Negative will increase it. But they don’t do negative.
Same with TOBS. If you go from afternoon to morning reading, the trend will reduce, and adjustment will increase it. If you go from morning to afternoon, conversely. But, as it happens, the NWS originally had people reading in the afternoon. There’s only one way that can go.
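
The time-of-observation effect is easy to demonstrate with synthetic data. A Python sketch (idealized diurnal cycle, invented numbers, not the NWS adjustment itself): a max thermometer reset at 5 pm, just after the diurnal peak, lets one hot afternoon set the maximum for two successive observation windows, so afternoon observers record warmer mean maxima than morning observers at the same site.

    import numpy as np

    rng = np.random.default_rng(1)
    days = 3650
    hours = np.arange(days * 24)

    # Hypothetical hourly temperatures: diurnal cycle peaking ~3 pm,
    # plus day-to-day "weather" swings
    diurnal = 8 * np.cos(2 * np.pi * ((hours % 24) - 15) / 24)
    weather = np.repeat(rng.normal(15, 5, days), 24)
    temp = diurnal + weather

    def mean_daily_max(reset_hour):
        """Mean daily max from a max thermometer reset at reset_hour."""
        windows = temp[reset_hour:reset_hour + (days - 1) * 24]
        return windows.reshape(days - 1, 24).max(axis=1).mean()

    print(f"5 pm reset: {mean_daily_max(17):.2f} C")  # warm-biased
    print(f"7 am reset: {mean_daily_max(7):.2f} C")

Switching a station from the warm-biased 5 pm reading to a 7 am reading lowers its raw trend, which is why the adjustment for that switch goes the other way.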

Matt L.
June 28, 2014 3:49 pm

From a lay perspective, comparing the estimated (E) data in Luling, Tx to the temperatures reported by the local weather channel would be interesting.
Even though the local TV weather station equipment isn’t part of the “official” network or approved hardware, the comparison seems just as reasonable as averaging nearby temps from approved hardware in the official network to estimate missing data.

Nick Stokes
June 28, 2014 3:51 pm

richardscourtney says: June 28, 2014 at 3:08 pm
Thanks.

Eliza
June 28, 2014 3:51 pm

Mosher: It's over, no one believes it anymore, please give up. The "modeling" of AGW is a FANTASY! But go ahead if you still believe in it, that's what science is about LOL (from a person with 4 higher university degrees, BTW ask Freeman Dyson PLEASEEE)

June 28, 2014 3:52 pm

Anthony, what you said in this post has (as far as anyone has shown) absolutely nothing to do with what was said in the Polifact story. Your comments in regard to the Polifact story are not even wrong. They’re just incoherent. You are misleading people by pretending this bug somehow proves what Goddard said in that story is correct.
The only reason I got sidetracked is you presented this bug as proving Goddard right about what he said in the Polifact story when it doesn’t.

REPLY:
If you'll read the Polifact story, you'll note they are combining links and comments from two issues. The quote they used from me was about the data error issue; the link they make to Zeke's post at Lucia's is from Goddard's original 40% fabrication claim. I agree they have muddled it somewhat with the animgraph. But the idea discussed has been about "fabricating" (or as I address it, "infilling and estimating") temperatures. That has in fact increased since the year 2000.
Polifact asked Dr. Curry about the infilling 40% issue, and she referred them to Zeke and me.
There was no intent on my part to confuse people. – Anthony

June 28, 2014 3:55 pm

Goddard's analysis in this case was shown to be faulty multiple times.
I don’t think so. But if that helps you sleep at night, go with it.

June 28, 2014 3:56 pm

Brandon Shollenberger … Estimating does change trends. Stop denying it.
http://sunshinehours.wordpress.com/2014/06/05/ushcn-2-5-estimated-data-is-warming-data-arizona/

geran
June 28, 2014 3:57 pm

Here's the thing, Anthony: jump in to save face after you made a fool of yourself, and then claim victory.
It works every time.
(Oh, it’s okay to not publish this. I just wanted the screen shot for my diary.)
REPLY: No problem, frame it if you like, but you see, here's the big difference between you and me. I put my name to my words, take my lumps when deserved, and even write about it. You taunt from behind the safety of a fake name. Color me unimpressed. (Oh, it's OK not to print out this part for your diary) – Anthony

SIGINT EX
June 28, 2014 3:58 pm

(y) 🙂

milodonharlani
June 28, 2014 3:59 pm

Novarupta 1912: Yet another instance of a major volcanic eruption apparently causing warming rather than cooling.

milodonharlani
June 28, 2014 3:59 pm

Should note, high latitude v. tropical.

kramer
June 28, 2014 4:04 pm

"PS someone FOI the emails for this data change bet the dog will eat them real quick"
Bet their hard drives will crash first…
Anthony, Thank you for manning up. And thank you for this great blog and this great post. The issue of temperature adjustments has always been one of the reasons I don’t trust climate science.
So, does anybody want to wager that once they get this issue straightened out, that the new results will show even worse warming?

June 28, 2014 4:13 pm

Just like the temperature record adjustments, Nick Stokes's comments only go one way. lol
Andrew

June 28, 2014 4:14 pm

If the ultimate idea is to "get it right," then it would seem that many scientists often don't, but through proper scientific methods and persistence they may.
Kudos to all who continue to move toward getting it right.
As someone mentioned earlier in this thread, it is more than a little disconcerting to see how suspect the US’s “world’s best” temperature monitoring system is.

Truthseeker
June 28, 2014 4:17 pm

Anthony, it is good that you have admitted that you had raised straw man arguments that had no bearing on what Steve/Tim was saying. As an outsider with no skin in this game, I always understood what Steve/Tim was saying, as it was quite clear if you had started from the beginning and followed his analysis step by step. You have to admit that, with your previous pieces, you did a lot of initial damage to the raising of what is a massive systematic error (being polite), and that you gave the alarmist community a "get out of jail free card" on this issue. You now have to work twice as hard to get this issue the spotlight that it deserves.
REPLY: Who is “Tim”?

Nick Stokes
June 28, 2014 4:19 pm

“The fact that estimated data from long-dead and missing stations is being produced is news.”
It's the basis of FILNET, which is a long-standing USHCN processing step. You can calculate an average with absolute temperatures, which USHCN does, but it's more complicated than anomalies (and so, IMO, a bad idea). Each data point is the sum of a climatology component and an anomaly component. If you let the station set vary over time, then the climatology component will provide a spurious signal. That was the Goddard spike, for example.
So they keep the station set constant, with a fixed set of climatologies, and interpolate anomalies where needed. When you are doing what is in effect a spatial integration of anomalies, interpolating extra values just affects the weighting. Numerical integration is effectively the integration of an interpolating function.
The existence of zombie stations isn’t new, and isn’t a problem. The result just depends on how many real stations there are. And that is mostly somewhere around 800, which is a lot for the area of CONUS.
REPLY:
Well aware of FILNET for years. John Nielsen-Gammon didn't know that "zombie" stations were still reporting; it was news to him. It was news to me. It is news to a number of people reading here for the first time.
And no matter what you say, Nick, making up data where there is none, especially for long-dead weather stations, using crappy data from surrounding compromised stations, is still wrong. For the record, I don't give a flying F how you rationalize it.
-Anthony
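
The climatology-plus-anomaly point above can be shown in a few lines. A two-station Python toy (invented numbers, illustrating the mechanism, not reproducing NCDC's processing): when the station set changes mid-record, an average of absolute temperatures jumps even though neither station's anomaly did anything unusual; averaging anomalies removes the spurious step.

    import numpy as np

    months = np.arange(120)
    anomaly = 0.002 * months          # shared, slowly rising signal
    warm_clim, cool_clim = 25.0, 5.0  # very different station climatologies

    warm = warm_clim + anomaly
    cool = cool_clim + anomaly
    cool[60:] = np.nan                # cool station stops reporting halfway

    # Average of absolute temperatures over whoever reports: ~10 C step up
    abs_avg = np.nanmean(np.vstack([warm, cool]), axis=0)
    print(abs_avg[58:62])

    # Average of anomalies (climatology removed first): no step
    anom_avg = np.nanmean(np.vstack([warm - warm_clim,
                                     cool - cool_clim]), axis=0)
    print(anom_avg[58:62])

This is the mechanism behind the spike Stokes mentions: the step comes from the climatology component of whichever stations happen to report, not from any real temperature change.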

June 28, 2014 4:22 pm

Reblogged this on Climatism and commented:
Sceptics “eat crow” ~ Science, discovery, truth, integrity and reason the big winners.
Bravo Mr Watts…and Mr Heller.
Will the Global Warming zealotry (NASA, NOAA, USHCN, Gavin, Mosher et al.) come to the table, or will the usual dose of denial, obfuscation and pal-reviewed reasoning win the day?

June 28, 2014 4:22 pm

This episode represents why it is important not to try to placate BEST team members. Anthony, it is really not important that you immediately speak out on these issues. Please stop listening to those who have screwed you over in the past; I cannot repeat this enough. Their only intent is to use your words against you.

June 28, 2014 4:23 pm

Nick, anomalies can't work. Only 51 USHCN stations have a full 30 years of non-Estimated data from 1961-1990 (your preferred baseline).

June 28, 2014 4:23 pm

And Brandon. The Arizona blog post I did was mentioned in one of the threads on the Blackboard. I know you read that blog.
My post was June 5th as was the Blackboard thread.

Rud Istvan
June 28, 2014 4:24 pm

This post, plus the corresponding one at CE, is a magnificent example of how the internet has changed everything. For the better, albeit threatening all ivory tower foundations. The academically robed are running scared.
A flawed but interesting proposition put forth, rapidly scrutinized using all sorts of mathematical and data arguments, resulting in a new synthesis now forming that the original hypothesis was flawed but the conclusion may be ‘robust’.
Much faster than ‘peer review’ and much more brutal. And wide open for anyone who cares to go fact check for themselves. (Mosher, your explanation of the BEST conclusion for station 166900 over at CE still does not ring true, since you completely avoided my key argument about the flaw in your methodology illustrated by that station using only the data and words BEST itself posted.)
We are entering a new era, ushered in by the great IPCC/CAGW meme. It could not have happened to more deserving folks than establishment consensus climatologists. But it will spread to medicine, energy, economy, everywhere it matters.

Bill Illis
June 28, 2014 4:30 pm

I can’t see the NCDC fixing the database in a meaningful way.
If anything, they will come back and say the trend is even higher after we fixed it (or it was 0.003C in total or something).
They have the data, they have 20 analysts working with it. Tom Peterson leans over someone’s desk twice a week tweaking some assumption. This is just the way large databases are managed when senior people have a stake in the results.

June 28, 2014 4:32 pm

… The existence of zombie stations isn't new, and isn't a problem.
Well no. Not if your objective is to cool the past and warm the present. I suppose that you have many ways to fudge the data and are very comfortable with all of them. For outsiders, some of these things come as a real surprise.
I once read about a study of the number of deaths in winter in England attributed to the high cost of heating, which many people could not afford. The number was dramatic. I wonder if the dead would agree that a false warming trend that has caused billions and billions to be wasted and energy bills to rise "isn't a problem".

James Strom
June 28, 2014 4:33 pm

richardscourtney says:
June 28, 2014 at 3:08 pm
Nick Stokes:
I write to congratulate you on your fortitude and to commend you for the honour you display by 'standing your ground'.
____
Let me second that. There is so much bad blood in the climate discussion that one can hardly blame someone for the occasional eruption of anger or cynicism, but some participants have been able to maintain a professional and helpful tone despite the brickbats thrown their way. Stokes is high on that list as are Judith Curry and the Australians. I’m sure there are others I am not familiar with, but I should not omit Richard Courtney. I don’t always agree with everyone mentioned but I take their professionalism as a standard.

Stephen Rasey
June 28, 2014 4:37 pm

Stokes at 4:19 pm
It's the basis of FILNET, which is a long-standing USHCN processing step.
It is my experience that the tool constrains the thinking.
The existence of zombie stations isn't new, and isn't a problem.
No, it isn't new. And had it stayed at 5% of the data, it could be accepted as a simplifying assumption and it wouldn't likely be a problem.
But Goddard made us sit up and notice that the zombie data horde has been growing to alarming percentages of the total data over the past decade. It is no longer at the scale of a simplifying assumption but a serious concern for the integrity of the data, analysis, results, and conclusions.
The tool, under these circumstances, is causing problems.
It is time to Think Different.

Joseph Bastardi
June 28, 2014 4:41 pm

Charles, as someone who has to face the fact that I am wrong in things I do, I was simply trying to say that I think we sometimes have the least compassion for those in our own camp. Perhaps it's a form of self-policing. And I know a man will fall on his weakest point (Lord knows I have, many times).
But I think all this arguing about global temperatures is a red herring. First of all, the quantification of water vapor, the number one GHG, is huge. I can show you linkage between the falling specific humidity over the tropics since the PDO flip, the plummet of the ACE, and the cooling of summer temps in the Arctic. That we think a degree of warming where the mean temp is -20 in the winter has the significance of a drop of 0.5 degrees on average in tropical Pacific temps is folly to me. I think the research in climate needs to be directed at the fluctuation of the water vapor, and specifically as related to the tropical oceans. I tried to argue on O'Reilly with Nye: this is a grand experiment that CAN NOT BE COMPLETED until both the Pacific and Atlantic get to finish their swings back to cold in tandem! But it's not the global temperature I would measure, it's what is actually going on with the water vapor. Now we have the chance to objectively measure this via satellite data since 1978. We started with what was a non-satellite-based temp, but it was after a cold PDO/AMO in tandem. I believe this is the start of a great climatic shift, similar to the late 1970s warm one, but going cold. But how could anyone possibly think you could measure what was going on in previous times with the accuracy you see today? And all this "adjustment" proves that point. Just like Mann switching off from the tree rings, they are simply re-doing what does not fit and using the argument that it should be that way.
As far as Goddard: I have argued with him, but I marvel at all the things he digs up, articles, etc.
Perhaps someone says, well, anyone can do that. Well, as someone who analogs weather patterns and spends hours looking at "threads" of sequences to see where they take me, doing things like that takes an obsession. I couldn't do what he does, go into all those archives, so if someone does what I can't do, and makes something easier for me, then I respect that. So I say, wow, what a source. But as I tell people, don't trust me, go look for yourself. If you find Goddard wrong, you challenge that, but take it an issue at a time. At least that is what I try to do. And this blog, WUWT, is amazing. In fact, all you guys out there, you can't imagine how much I sit in awe of you. I guess I am getting old, but I enjoy looking at light, and there is a lot out there, with guys who have nothing to gain in this fight but knowing they pursued the truth wherever it would take them.
So when I look at everyone, here is my conclusion. The team I am on in this situation, if we were sure our pursuit of the answer made us conclude they were right, we would act accordingly and say it. I do not believe that about the other side. I believe that because they have never understood what it is like to fight a relentless opponent (the weather) every day and take the shots, they have no idea how to get up if beaten. So they simply ignore facts and will not admit they are wrong. That is another big difference. After all, if they are forced to admit what they have gained fame and fortune on is wrong, that hurts worse. If the goal is your God, what if the goal is taken away from you?
I have always loved the weather, and I see the good Lord's majesty in it every day. And I see good men in this situation fighting for the truth. In the end, after it's all over, to me, that may be the value that is taken out of this.

John Bills
June 28, 2014 4:42 pm

what a mess this is becoming

milodonharlani
June 28, 2014 4:54 pm

Joseph Bastardi says:
June 28, 2014 at 4:41 pm
I second you on all points.
Although "adjustments" & bad data sets have been conspiratorially contrived to get rid of inconvenient climate truths, the fact is that the satellite era from 1979 roughly coincides with the PDO/AMO flip in 1977. For about the next 30 years, surface temperature appears to have warmed. The amount it cooled in the previous 30 years has of course been made largely to disappear by sleight of hand, as has how warm it was in the 30 years before that.
As I’ve commented previously, this is a water planet, so if it’s not all about the water in the air & seas & on the land, in all physical states, at least understanding that has to come first. There can be no CACA without the potent positive water vapor feedback from a modest CO2 rise, which to say the least is not in evidence, but has nevertheless been assumed in the GCMs. That’s just one reason why the GIGO models have failed so miserably to predict GASTA since c. 1996, depending upon data set.

u.k.(us)
June 28, 2014 4:55 pm

So, it has been settled that the unsettlement's rhetoric will be taken down a notch?
Onward !!

PMHinSC
June 28, 2014 4:56 pm

Nick Stokes says:
June 28, 2014 at 3:48 pm: "…cables are supposed to have near zero resistance. Positive resistance will reduce the voltage. Negative will increase it. But they don't do negative."
Not sure what you are trying to say, but I don't think you said it. No cable has zero (or near-zero) resistance at ambient temperature. The cable (usually copper) resistance that does exist is compensated for at some standard temperature. The roughly 0.393%/°C temperature coefficient of copper at standard temperature is substantial, and the resistance can go positive or negative relative to the resistance at the compensation temperature. Don't know if this changes your point, but unless I misunderstood the example, it doesn't seem to support it.
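
For reference, the standard linear approximation for a metal conductor is R(T) = R0 · (1 + α(T − T0)), with α ≈ 0.00393 per °C for copper near 20 °C. A small Python sketch (the 10-ohm cable run is a hypothetical value) shows the resistance moving both above and below its compensation-temperature value, which is the point being made above:

    ALPHA_CU = 0.00393   # per deg C for copper near 20 C (about 0.393 %/C)
    R0, T0 = 10.0, 20.0  # hypothetical 10-ohm cable run, referenced to 20 C

    def cable_resistance(temp_c):
        """Linear approximation R(T) = R0 * (1 + alpha * (T - T0))."""
        return R0 * (1 + ALPHA_CU * (temp_c - T0))

    for t in (-20, 0, 20, 40):
        print(f"{t:+4d} C -> {cable_resistance(t):.3f} ohm")
    # A fixed compensation at 20 C leaves a residual error of either sign
    # as ambient temperature swings around the compensation point.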

Lance Wallace
June 28, 2014 4:56 pm

Question for Anthony, Nick Stokes, or others understanding how these NOAA/NASA changes work:
NASA’s Figure D provides annual average 48-state US estimated temperatures from 1880 to the present.
Looking at the shape of the Figure D changes between 2000 and 2014 for the period between 1880 and 2000, it is a very regular upward-facing parabola. Temperatures from 1880 to roughly 1912 are raised, then those from about 1912 to 1968 are lowered, then after 1968 they are raised again.
This is such a regular phenomenon, there must be a reasonable explanation for this behavior. What is it? Squinting, one can almost see a 60-year period.
The curve is shown here:
https://dl.dropboxusercontent.com/u/75831381/NASA%20FIG%20D%20CHANGE%202000-2014.pdf
The Excel file is here:
https://dl.dropboxusercontent.com/u/75831381/NASA%20Fig%20D%201880-1998.xlsx
Data from a comment by Dave Burton on an earlier thread: http://wattsupwiththat.com/2014/06/26/on-denying-hockey-sticks-ushcn-data-and-all-that-part-2/. Burton has been archiving the changes in Figure D since 1999.
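
One quick consistency check on the described shape, in Python (hypothetical, just the geometry of the claim, not NASA's actual adjustments): a parabola whose sign changes fall near 1912 and 1968 has its vertex at their midpoint, mid-century, consistent with a roughly 60-year scale.

    import numpy as np

    # Sign changes as described: raised before ~1912, lowered ~1912-1968,
    # raised after ~1968. A parabola with zeros at those years peaks (or
    # bottoms out) at their midpoint:
    z1, z2 = 1912, 1968
    print((z1 + z2) / 2)  # 1940.0

    # Shape of such a parabola (arbitrary amplitude) over 1880-2000:
    years = np.arange(1880, 2001)
    shape = (years - z1) * (years - z2)
    print(np.sign(shape[[0, 60, 115]]))  # +1 (1880), -1 (1940), +1 (1995)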

Editor
June 28, 2014 4:58 pm

Outstanding, Anthony, science at its finest. I have often said that I have credibility in part because I admit when I'm wrong. To me, that's just part of science: you don't always get it right.
Like my mom used to say, "It's not whether you spill the milk … it's whether you clean it up". We all spill the milk sometimes. You've done an outstanding scientific job on the cleanup of your error. It's one of the reasons I write for WUWT. You've kept it as a beacon of honesty in science since day one.
So strange as it may sound ā€¦ my congratulations on your error. Very well done.
w.

Alan
June 28, 2014 5:11 pm

Anthony,
Look on the web. There was the British news article. Some subsequent take up.
Then there was you.
Politifact Article….
And you were wrong. Anthony, I am the aging son of a corporate pilot from the '50s…
Weathermen were held in low regard… wrong more than right.
I touch into your site daily. You have done yeoman's work… You exposed the stunning lack of any coherence as to the siting of weather stations… You have called the entire land-based measuring system into question.
You have done beautiful work.
I also always touch into Real Science. Because Steve/Tony may not have all the ducks in a row… he is following data… and he is poking at it… and he is more often than not… hitting the points that don't add up.
Just like you… You know weather station siting… He knows computers, and computer analysis…
You got this wrong… but more important than that… You did not go to Goddard and drive the questions to ground. I work in IT. Before you make any statement… you go to the software in play… you go to the people in play and you get to root cause… You understand the actual error/question in play before you touch the wider world. If you don't… facts will make a fool of you.
That is where you stand.
The story of tampered data is no longer in play to the wider field. You have turned an important question that had some resonance into the saddest instance of inside baseball.
I can not express how sad I am at the fact that two of the important voices in the climate debate have been turned into slap fighting school girls. And I don’t mean that as an insult to slap fighting school girls.
I expect better.

June 28, 2014 5:13 pm

An interesting post at sunshine hours.
USHCN 2.5 ā€“ OMG ā€¦ The Old Data Changes Every Day (Mapped)
http://sunshinehours.wordpress.com/2014/06/28/ushcn-2-5-omg-the-old-data-changes-every-day-2/
They have maps where you can see the raw and final (some don’t have raw data of course) over a couple of states. Interesting.

milodonharlani
June 28, 2014 5:16 pm

Lance Wallace says:
June 28, 2014 at 4:56 pm
The 60-year periodicity is even more evident when you use actual observations rather than the bent, folded, spindled, mutilated, stepped all over & generally adjusted out of all recognition so that its mother wouldn't know it "data" sets of government gatekeeping, trough-feeding bureaucrats masquerading as scientists. Such as those from before the time when Jim, Gavin & Phil got their hands on the numbers.

June 28, 2014 5:16 pm

Anthony, all you’re doing at this point is arguing about the way in which you’re wrong. You’ve never denied the basic point at issue, that nothing has been done to show this bug caused the differences in the two graphs Steven Goddard posted. If this post claims the bug does, it’s wrong for offering no support for the claim. If this post doesn’t claim the bug does, then it is wrong in calling the Polifact story wrong.
I’ll make this simple. What did Goddard say that was right? What did Zeke say that was wrong?
REPLY: Your thinking eludes me. I’m not arguing about anything Zeke said.
Goddard initially said that, in comparing the USHCN raw versus final data sets, 40% of the STATIONS were missing, and that is clearly wrong; he later changed that to say DATA. I couldn't replicate it with his code, and I didn't see it in the data. Later, with the help of Paul Homewood, I was able to see the problem.
The Polifact story used my quote related to my objections to Goddard's initial claim; it also linked back to Zeke's post about Goddard's initial claim. They asked Curry about Goddard's initial claim and Curry referred them to Zeke and me. So there are elements of that claim in their story. They also added an element of Goddard's later claim about adjustments of temperatures post-2000. They used my quote to rebut the first claim, not the second. They don't make it clear which person is rebutting which claim, though it seems clear there is a mix of rebuttals. I can't help it if they didn't keep the two stories straight there.
You’ve already admitted to being confused, and I’m just as confused by your continued objections after that. – Anthony

Stevo
June 28, 2014 5:18 pm

I have been watching this disagreement between you and "Steve Goddard" for a while, and it is honestly the most serious "debate" in climate science I have ever seen (sad, I know). This was a great post, and I hope this starts picking up steam.

NikFromNYC
June 28, 2014 5:24 pm

Does this improved debugging by Homewood support Goddard’s strong claim that the result hikes the recent temperature up? A sloppy system that gives the same answer only helps support climate alarm more if it is freely debugged by skeptics. The “Steve was right” narrative suggests that this odd software issue is *indeed* ramping up recent warming unfairly. It would indeed be a blow for alarmism were this so, but otherwise it merely confirms the validity of their sloppy system.

Editor
June 28, 2014 5:25 pm

Stephen Rasey says:
June 28, 2014 at 3:32 pm
> This post definitely earns a Watts' Best tag in my library.
> Well done and well worth remembering.
Not yet. Maybe a followup post when we get some responses from the NCDC, or they come up with a fix for their bug that provides estimates instead of real temperatures. That post will have a link back here.
It’s certainly going to be a post referenced for years….

jimash1
June 28, 2014 5:27 pm

It’s not a bug, it’s a feature.

Bill Illis
June 28, 2014 5:29 pm

Chris Beal (@NJSnowFan) says:
June 28, 2014 at 3:03 pm
——————————————–
July 1936 is back to being the highest month on record in the US again. It took two years for this to be fixed.
Average mean temp July 1936 – 76.80F; July 2012 – 76.77F.

Konrad.
June 28, 2014 5:31 pm

Well this is a fine mess.
Just when some were trying to abandon SSTs as a measure of AGW, the corrupt surface station data gets blown out of the water.
They could run back to ARGO, but that's had 3 warming adjustments already. Another adjustment? "She won't take it captain! She'll break up!"

milodonharlani
June 28, 2014 5:33 pm

Bill Illis says:
June 28, 2014 at 5:29 pm
A step in the right direction. It should not have to be like pulling teeth.

June 28, 2014 5:33 pm

Flat Earth Society – 1
Team Settled Science – 0
Hopefully someone at NCDC will let Obama, Hillary, Kerry and Co. know that it's time to tone down the rhetoric a bit.
And kudos to Anthony for this post – the vast majority of people are incapable of even considering they’re wrong, let alone writing a post about a mistake on a blog with the circulation of WUWT.
And I second the thought that hitting Goddard’s tip jar wouldn’t be a bad idea.

Konrad.
June 28, 2014 5:33 pm

jimash1 says:
June 28, 2014 at 5:27 pm
———————————–
"It's not a bug, it's a feature."
Both.
It's a "feature" until it's discovered. Then it's just a "bug" 😉

Jimmy Finley
June 28, 2014 5:35 pm

Joseph Bastardi says:
June 28, 2014 at 2:31 pm: "…we have to stop circular firing squads…." Amen, Joe. On the "skeptic" side, the upholders of truth and sanctity kick the crap out of anyone who errs or gets a bit out on a limb, while the "warmists" stand right there in their faces and spout lie after bald-faced lie, or simple bullshit. One could have gone after Goddard by simply saying "I dispute your analysis, and here's why," but instead we roar out that he is a wing nut who should be run off the reservation because he "gives the climate scientists ammunition to deride our case", which they would do if the Archangel Michael appeared in Times Square and announced that the IPCC and CO2-based catastrophe were Satan's work.
This temperature stuff is a killer. Why haven't the true believers, supine followers, slugs or whatever at NCDC, entrusted to work with this data (and incidentally paid big bucks and laden with perks no working stiff in the real economy ever gets a whiff of), found and corrected and documented this issue? What do they do all day?
Let’s see what they do with this. They all seem to be fine fellows, really concerned about the issue. Why (gasp) it might even be true (in some very limited way, which we shall find a way to correct and bring it all back to the good!). Let me know when they ACTUALLY do something that is more than putting pasties on the exposed nipples.
Get rid of the system for any purposes other than local use (preferably paid for by local users), airport purposes (paid for by you know who), and so forth. A representation of the Earth’s AVERAGE REGIONAL TEMPERATURE it isn’t – and it’s misuse is far more damaging to skeptical science than Steven Goddard on a caffeine overdose.

CC Squid
June 28, 2014 5:35 pm

“Itā€™s a bug, a big one … it was systemic to the entire record, and up to 10% of stations have ā€œestimatedā€ data spanning over a century:”
After working on and programming systems the thought of a “big” bug is scary. Something like this is usually caused by a person with a hug ego who fails to test a patch. In a commercial environment this person could be dismissed. What I question now is the pre-1940 data. Did this “bug” cause the decrease in temprature? The statement, “10% of the stations … over a century” is pretty scary. What is the reason for the temperature change during that time span? Has a FOIA request been put in for why the data changed for the early part of the 20th century? The IRS and EPA actions are making me paranoid!

RoyFOMR
June 28, 2014 5:40 pm

Dear Dana Nuccitelli,
your recent piece in that flagship of truth and probity, the Guardian newspaper, had a title that included a most beautifully balanced phrase, to wit ‘Global warming conspiracy theorist zombies’
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2014/jun/25/global-warming-zombies-devour-telegraph-fox-news-brains
Little did I, or anyone else, think that you were seeding the ground for this bombshell episode of why the sceptics were right all along and that the science was, by no means, settled.
Bigger than Climategate, bigger even than your ego, welcome to Zombiegate.
Thank you Dana, you really had me fooled.
Yours
An admirer.

Jimmy Finley
June 28, 2014 5:44 pm

Darn it! “it’s” in the last paragraph of my rant above should, of course, be “its”. English is tough.

kim
June 28, 2014 5:45 pm

Dead Souls.
H/t Gogol. No, not google.
=========

June 28, 2014 5:48 pm

“When the scientific method is at work, eventually, everybody eats crow. ”
I just love it when people smarter and better than I’ll ever be slip on the ice. I know it is not very spiritual, but I can’t help chuckling.
When I myself need a slice of humble pie, all I need to do is attempt a five-day forecast. This promptly puts me in my place, and makes me amazed at how well many meteorologists do.
People who fear to ever be wrong stay at home and hide under their beds.

Truthseeker
June 28, 2014 5:49 pm

REPLY: Who is ā€œTimā€?
I meant “Tony”. Typo. My bad.

Larry Ledwick
June 28, 2014 6:02 pm

It appears that a key issue here that is not being discussed much is that, for large computer programs, it is impossible to write bug-free code!
Worse, it is impossible to know how close you are to having bug-free code. There is no test suite that can cover every possible permutation of factors, from simple things like which computer the code was compiled on to some obscure difference in the installed OS on that system or its hardware. At a company I worked at, we had two physically identical machines, identical hardware, identical software and OS, yet one always ran slower than the other on certain jobs. I have seen a system punch a card from a program that was only able to print to a line printer. It created one single punch-card image of the first line of the file, then switched to the printer and finished printing out the remaining 800,000 line items with no errors. The error never repeated; it was a one-time event.
Not all compilers make the same final output from the same source code, for example. Different libraries installed on apparently identical machines can give different results from the same source code compile, due to differences in how they handle rounding or other functions. Just because the code has run without apparent problem for days, weeks, months or even years does not in any way demonstrate it is stable or fit for purpose. It may appear to be good code, but you can be certain that there is some chain of events and conditions that the code has never encountered which could conceal a huge bug.
There are some really bizarre examples out there in the IT world of a strange confluence of conditions uncovering a bug that had been silently producing apparently good output for years, when it was in fact broken all along but broken in a way that fit people's expectations (another form of confirmation bias).
Unfortunately, starting in the 1960s when large mainframes moved out of research facilities and into industry, the culture sold to the public was "you can depend on it, this was computer generated"!
I have personally seen code bugs uncovered that had produced bad output for very long periods of time, and that was in spite of legitimate efforts to validate the data output over that period. Only on investigation of some unusual event, like a system crash, did the bug get discovered.
Some bugs are absolutely evil: the kind that only occurs on the 3rd Tuesday of the month if the month name starts with an M or is 28 days long and the program is run on Fred's computer early in the morning before it warms up fully.
This is why I have low confidence in temperature data that always seems to be adjusted in the same direction. That is just not natural; it is either intentional or due to an inherent error in the processing system. Real natural systems vary in all dimensions.
caveat emptor

ossqss
June 28, 2014 6:03 pm

So, is this actually a case of modeling observations?
Think about that for a minute…

June 28, 2014 6:04 pm

If you don’t like infilling, don’t use it. It doesn’t change the result, almost by definition, since infilling mimics spatial interpolation: http://rankexploits.com/musings/wp-content/uploads/2014/06/USHCN-infilled-noninfilled.png
The interesting issue currently is that some stations that report apparently valid raw data are being replaced with estimated data. The cause seems to be that the NCDC data file is missing the X flag, which indicates that the data was too inhomogeneous at the time (e.g. between two station moves) to figure out what is going on. The folks at NCDC are looking into it, as the number of stations that fall into this category seems to be a bit high, at least in my opinion.
Also, the confusion here was on Anthony's part rather than mine; I always knew that NCDC used infilling to ensure that there were 1218 reports per month in the homogenized dataset. I personally think infilling is silly, since it's not really needed (as any sort of reasonable spatial interpolation will produce the same result). But I understand it's something of a legacy product to ensure consistency for folks who want to calculate average absolute temperatures.

REPLY:
Confusion is the wrong word, I simply didn’t know that NCDC was reanimating dead weather stations for the final dataset. I agree, it is silly.
However, I disagree that it doesn't make a difference, because the majority of stations, 80%+, are non-compliant siting-wise. A small minority are compliant, and the infilling and homogenization obliterate their signal, and those stations are, by definition, the most free from bias. As we have shown, compliant stations have a lower trend than non-compliant stations, and a far lower trend than final adjusted data.
Basically what NCDC is doing here is mathematically favoring the signal of the crappiest stations – Anthony

bit chilly
June 28, 2014 6:11 pm

Once again I am reminded of the old adage "the man who never made a mistake never made anything".
Well done, Anthony; it is good to see personal integrity come to the fore.
I look forward to the ultimate outcome of this discovery. In order to make the claim of global warming, first there has to be an accurate record of temperature. This salient point appears to have evaded the entire climate science community to date.

June 28, 2014 6:13 pm

Zeke, infilling warmed Arizona. And by warmed I mean changed the trend from cooling to warming.
http://sunshinehours.wordpress.com/2014/06/05/ushcn-2-5-estimated-data-is-warming-data-arizona/
In Arizona at least, infilling seemed to be the key adjustment for cooling the past and warming the present.
All they needed was 15% of the data and some special sauce…

Otter (ClimateOtter on Twitter)
June 28, 2014 6:13 pm

It should be obvious by now that nick just Stokes the fire.

June 28, 2014 6:15 pm

As far as the climate divisions data goes, the Climate at a Glance website switched earlier this year from using raw data to using TOBS-adjusted and homogenized data. It was covered at this blog, as I recall.
REPLY: No comment about the whole change in character of the data though?

CC Squid
June 28, 2014 6:22 pm

We are all becoming paranoid if we are asking ourselves one or more of the following questions:
1. Did some foreign hacker mark the high-temperature sites in the first half of the 20th century as estimated, and the small-town sites in the latter half of the century as estimated?
2. Did some "true believer" who works for the government do this?
3. Did some kid taking a bribe to pay off his college education do this?

June 28, 2014 6:24 pm

Sunshinehours/Bruce:
Infilling makes no difference for Arizona: http://i81.photobucket.com/albums/j237/hausfath/USHCNAZinfillednoninfilled_zps0767679b.png
Anthony,
Let's leave homogenization out for the moment, as that's a different issue. Infilling shouldn't have any effect on temperatures, because temperatures for a region are calculated through spatial interpolation. Spatial interpolation should have the same effect as infilling (e.g. assign missing grid cells a spatially-interpolated anomaly of reporting grid cells). That's why we see identical results when we compare CONUS anomalies with and without infilling. Now, if NCDC were purposefully closing good stations to exacerbate the effects of bad stations, that would be one thing. In a world where stations are somewhat randomly dropping out due to volunteer observers passing away and similar factors, whether or not you choose to infill data is irrelevant for your estimate of region-wide change over time. The only way you will find bias is if you average absolute temperatures, but in this case infilling will give you a -less- biased result, as it will keep the underlying climatology constant.
REPLY: No, you've missed the point entirely. Infilling is drawing from a much larger population of poorly sited stations, so the infilling will always tend to warm the smaller population of well-sited stations, obliterating their signal. We should be seeking out and plotting data from those stations, not burying them in a puree of warm soup.
And making up data for the zombie stations like Marysville, that's just wrong. There is no justification whatsoever, and there is nothing you can say that will change my mind on that issue.
And I repeat what I've said before: if you want people to fully understand the process, make a damn flowchart showing how each stage works. I told Menne the same thing yesterday, and his response was "well, maybe in a year or two". That's just BS. The public wants to know what is going on; your response to me a couple of days ago was "read the Menne et al paper". Process matters. A paper and a process aren't the same thing, and maybe it's time to bring in an outside auditor and run NCDC's processes through quality control that does not involve pal-review meetings where they decide that reanimating dead weather stations is a good idea.
– Anthony
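
The disagreement above is testable in miniature. A Python sketch (toy anomaly grid, a crude stand-in for spatial interpolation, not NCDC's FILNET): infilling missing cells from reporting neighbours barely moves the grid average when the reporting cells are unbiased, which is Zeke's point; but if the reporting neighbours run systematically warm relative to the missing sites, as Anthony argues for poorly sited stations, the infilled average inherits that warm bias.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 10
    grid = rng.normal(0.5, 0.4, (n, n))   # hypothetical anomaly field
    mask = rng.random((n, n)) < 0.3       # ~30% of cells not reporting

    def infilled_mean(field, bias=0.0):
        """Grid mean after infilling masked cells from (possibly biased)
        reporting neighbours."""
        filled = field.copy()
        for i, j in zip(*np.where(mask)):
            nb = [field[x, y] + bias
                  for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                  if 0 <= x < n and 0 <= y < n and not mask[x, y]]
            filled[i, j] = np.mean(nb) if nb else field[~mask].mean() + bias
        return filled.mean()

    print(f"reporting cells only      : {grid[~mask].mean():+.4f}")
    print(f"infilled, unbiased donors : {infilled_mean(grid):+.4f}")
    print(f"infilled, donors +0.3 warm: {infilled_mean(grid, 0.3):+.4f}")

With unbiased donors the first two numbers agree closely; with warm-biased donors the grid mean rises by roughly the bias times the fraction of infilled cells, which is the mechanism behind Anthony's objection.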

CC Squid
June 28, 2014 6:24 pm

Chilly, another one says even a blind squirrel finds an acorn sometimes.

June 28, 2014 6:27 pm

“It doesnā€™t change the result”
Zeke,
I keep hearing this from you and Mosher. It sounds like you are more interested in the result than you are in good process, or these temperature record problems would have come to light earlier. You have been working with the records for years. Do you think you may have lost some objectivity by expecting a certain result?
Andrew

John W. Garrett
June 28, 2014 6:27 pm

0.6°C over the last 120 years? Given the state of the climate/weather data gathering system, the adjustments (cough, cough) made by GISS, the urban heat island effect, Chinese (cough, cough) weather stations/data, Russian (cough, cough) weather stations/data, Sub-Saharan African (cough, cough) weather stations/data, among a multitude of other problems, that's a rounding error, at best.
When one stops to consider the reliability of the historic temperature records, one is left to wonder if we are kidding ourselves about our ability to gauge the extent to which current temperatures are or are not higher or lower.
Do you really believe that Russian temperature records from, say, 1917-1950 are reliable?
Do you honestly believe that Chinese temperature records from, say, 1913-1980 are reliable?
Do you seriously believe that Sub-Saharan African temperatures from, say, 1850-2012 are accurate?
I don’t.

Kevin K.
June 28, 2014 6:29 pm

I kind of came across this "adjusted data" issue independently in early 2007, then found some of the work already done online by Anthony et al. It's what turned me from warmist to actualist. The 30-year "normals" understate the actual arithmetic means of most stations by 0.7F to 1.3F, depending on the month.
The excuse given was that the pre-ASOS way had a “cold bias”. Interesting.
The difference remained that high even when 1971-2000 “normals” became 1981-2010. Even though the decade 2001-2010 was merely 0.1F warmer than the raw data mean of 1971-2010, the reset “normals” for 1981-2010 were equally understated, even though they now had 14-16 years of “correct” ASOS data instead of the 4-6 years that were present in 1971-2000. Simple understanding of math tells you the difference should have shrunk.
For anyone who doesn’t believe this is going on, go to any NWS station or pull the data down from NCDC, which is now free. BWI’s is readily available and easy to export to Excel. Go ahead and pull the 30 years of raw data results for 1981-2010 and see how they are 0.7F-1.3F over the stated “30 year normals.”
This is also why I don't understand the UHI "adjustments". BWI has gradually been swallowed by Baltimore City since it opened in 1961. You would think the UHI would cause the present data to be adjusted downward, or the old data upward, to draw a comparison. Nope. Just the opposite.
When I cheated with data, I got a zero on a test and detention. These guys get grants.
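
The arithmetic behind the "should have shrunk" point is worth spelling out: shifting a 30-year normal window forward one decade swaps one decade out and one in, so the mean moves by exactly one third of the difference between the decade added and the decade dropped. A Python sketch with hypothetical round numbers:

    # mean(1981-2010) = mean(1971-2000) + (decade added - decade dropped) / 3
    mean_1971_2000 = 55.0   # deg F, hypothetical station mean
    mean_1971_1980 = 54.8   # decade dropped from the window
    mean_2001_2010 = 55.3   # decade added to the window

    shift = (mean_2001_2010 - mean_1971_1980) / 3
    print(f"1981-2010 mean: {mean_1971_2000 + shift:.2f} F "
          f"(moved {shift:+.2f} F)")

If the published normal still sits 0.7F to 1.3F below the raw mean after the window shift, the gap cannot come from the window arithmetic alone; something else is holding it open, which is the complaint being made above.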

Latitude
June 28, 2014 6:31 pm

I guess it was worth the long wait to finally see this:
“REPLY: Oh Nick, puhlease. When 80% of your network is compromised by bad siting, what makes you think those neighboring stations have any data thats worth a damn? You are adjusting bad data withā€¦drum rollā€¦.more bad data. And thatā€™s why homogenization fails here. Itā€™s perfectly mathematically legitimate, but its useless when the data you are using to adjust with is equally crappy or crappier than the data you want to ā€œfixā€.
…and that’s the whole point
well that, and the little issue of phantom stations reporting

milodonharlani
June 28, 2014 6:31 pm

John W. Garrett says:
June 28, 2014 at 6:27 pm
African data were probably better during the colonial period than since, no matter how odious imperialism may have been.

CC Squid
June 28, 2014 6:43 pm

The explanation of how the temps were decreased is located below. The comments starting at this one say it all. Mosh goes through the process in detail.
http://judithcurry.com/2014/06/28/skeptical-of-skeptics-is-steve-goddard-right/#comment-601719
climatereason | June 28, 2014 at 10:37 am | Reply
Until Mosh turns up it seems appropriate to post this double header which was a comment I originally made to Edim some months ago. I will comment on it and the Goddard article separately

June 28, 2014 6:44 pm

Anthony, I took a piece out of you over at Paul Homewood's blog … I am pleased that you have made reparations for your error. I hope that you have the humility to unequivocally apologize to 'Goddard' for trying to bust his butt.

June 28, 2014 6:52 pm

Thanks Anthony, a most interesting trilogy of posts.
I had not visited Steve Goddard’s site before, thanks for the heads up.
This climatology, as run by our governments, is always surprising in an unhappy way.
Why is it always worse than we imagined?

June 28, 2014 6:52 pm

Zeke: “Infilling makes no difference for Arizona:”
Zeke is using anomalies calculated with infilled stations to tell us infilling makes no difference.
Oh Zeke … you did drink the Kool-Aid.

temp
June 28, 2014 6:53 pm

CC Squid says:
June 28, 2014 at 6:43 pm
“By looking at datasets outside USCHN we can see that these adjustments are justified. In fact the adjustments are calibrated by looking at hourly stations close to the USCHN stations.
Next, the GISSTEMP algorithm will change the estimates of the past
as New data for the present comes in. This has to do with the RSM method. This seems bizarre to most folks but once you walk through the math youā€™ll see how new data about say 1995, changes what you think about 1945. There are also added stations so that plays a role as well.”
Yes, it's bizarre… Mosher's argument is that by taking a stick and measuring it against other sticks, you can now label that stick 3 feet long because the other sticks are supposedly 3 feet long…. His whole argument is the very definition of the logical fallacy of appeal to consensus.
“The fundamental confusion people have is that they think that global indexs are averages.”
“1. These algorithms do not calculate averages. They estimate fields.
2. If you change the data ( add more, adjust it etc )
3. If you improve the algorithm, your estimate of the past will change. It SHOULD change. ”
So Mosher confirms without a doubt that these so-called temperature sets are nothing more than model output and are not in fact observations under the scientific method.

Scott
June 28, 2014 6:53 pm

Doesn't surprise me. I used to have my staff change their hours so that each project had the right amount (easier than trying to explain that some projects go smoothly (fewer hours) and some are a disaster (more hours)), and charge as much as "possible" to capital projects (reduce expenses).
Here, I'm sure the supervisor/manager/director is only approving changes that make it warmer, with employees being run through the wringer any time they dare make a cooler adjustment. After a while no cooler adjustments will ever be submitted, and bonuses might even be linked to warming adjustments…

Scott
June 28, 2014 6:55 pm

Anthony, you are still my idol and I appreciate your candor. This world needs more people like you!

RoHa
June 28, 2014 7:01 pm

Slightly off topic, but I hope your reference to the stock market is not intended to imply that my shares in the South Sea Company and in British Opium are duds. If they are, I will only have my tulip bulb interests to rely on.

ROM
June 28, 2014 7:16 pm

A few weeks ago a poster here on WUWT [ details; WUWT post and blogger ?? ] pointed to a Hawaiian Islands station that for the last year or so was listed in the USHCN [ ? ] using “estimated” temperatures.
As the poster pointed out, the particular station had in fact been reporting as usual for all of that year-long period, and its data was all there in the database, despite being listed as missing, so estimated in-filled temperatures were being used for that supposedly missing station.
It would be a good reality check, albeit a very small one, to go back and see what differences there are in just this one instance between the actual data and the "estimated" in-filled temperature data for that particular station.
In any case, in an island situation surrounded by ocean, which acts as a very good stabilising influence on temperature changes, finding a long-term, well-run and maintained remote island station and verifying its real hard data against the official HCN data might be quite illuminating.
Alternatively, decades-long temperature data sets from the very remote and vast desert-like land masses, such as much of Australia's remote arid interior regions, where human habitation numbers are very low but record keeping is still at the required world standards [ we hope ! ], and comparing those stations' adjustments with the actual hard data record, might likewise be very revealing.
And exactly that has been done very comprehensively for many years by the Rockhampton [Queensland]-based blogger Ken Stewart on his blog "Kenskingdom".
http://kenskingdom.wordpress.com/
As he is quite low-profile in the skeptic blogger world, nobody seems to have had a decent look at Ken's long work on the many quite extraordinary and unexplainable (unexplainable except by the Australian BOM) adjustments and alterations, always towards warming of course, to most Australian station data, which have now been a regular feature of the BOM's Australian temperature data for many years.

V. Uil
June 28, 2014 7:18 pm

Congratulations to Steve Goddard / Tony Heller for his work in uncovering the data modifications going on.
I was also taken aback by Anthony's strident criticism of Goddard, which seemed to border on something more than just criticism of Goddard's findings and methods. I am pleased, and appreciate that Anthony has written the above article to set the record straight.
I donated to Heller’s site and suggest other readers should do the same. We need competent people to respond to the current AGW lunacy.

Louis
June 28, 2014 7:35 pm

So if the powers that be wanted to warm temperatures up a bit, all they would have to do is remove the station that is reporting the lowest temperature trend in an area and then estimate temperatures for that removed station from the warmer stations that remain. The end result would be a bit of warming. Is there any way to verify that such a clever and devious trick is not being used to manipulate temperature data?
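Louis's scenario can at least be made concrete. Here is a toy sketch of the arithmetic he describes, with entirely made-up stations and trends; it illustrates the mechanism only, not how NCDC actually chooses infill donors:

```python
import numpy as np

# Toy illustration -- all stations and trends are hypothetical,
# nothing here is real USHCN data.
years = np.arange(1980, 2014)
trends = {"A": 0.10, "B": 0.12, "C": -0.05}  # degC/decade; C is the cool outlier
data = {k: t * (years - years[0]) / 10.0 for k, t in trends.items()}

def decadal_trend(series):
    # Ordinary least-squares slope, scaled to degrees per decade
    return 10.0 * np.polyfit(years, series, 1)[0]

regional_real = (data["A"] + data["B"] + data["C"]) / 3.0

# Drop station C and "estimate" it from the mean of its warmer neighbours
estimated_c = (data["A"] + data["B"]) / 2.0
regional_estimated = (data["A"] + data["B"] + estimated_c) / 3.0

print(f"regional trend, real station C:      {decadal_trend(regional_real):+.3f} C/decade")
print(f"regional trend, estimated station C: {decadal_trend(regional_estimated):+.3f} C/decade")
```

With these hypothetical numbers the regional trend roughly doubles once the cool station's record is replaced by its neighbours' average, which is why Louis's question deserves a straight answer.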

Ed Barbar
June 28, 2014 7:36 pm

I wonder how much information will be obtained once the fix is in. If the adjustments in general are proper, this should make no difference to temperatures.

Arno Arrak
June 28, 2014 7:41 pm

Data modification is exactly what was wrong with climate science, Michael Crichton told the U.S. Senate in 2005. And rightly so, because now it turns out that fake temperature values are not just tolerated but utilized by the global warming gang. I ran into it in 2010 while doing research for my book "What Warming?" It turned out that HadCRUT3 was showing warming in the eighties and nineties when satellite data showed that global mean temperature did not change for 18 years (Figure 24 in the book). They gave it an upward slope of 0.1 degree Celsius per decade. The same fakery is still going on. I put a warning about this warming into the preface of the book, and two years later they, along with GISTEMP and NCDC, decided not to show it any more and aligned their data with the satellites. This was done secretly and nothing was said about it. But looking at present temperature records it appears to have been a passing thought – they still show warming where none exists.
Further examination of their temperature data revealed that all three of these data sets had been subjected to computer processing that left its traces in their databases, apparently as an unanticipated consequence of some kind of screw-up. These traces consist of sharp upward spikes that look like noise but are found at exactly identical sites in the HadCRUT, GISTEMP, and NCDC temperature datasets. These three supposedly independent data sets come from two different continents. The spikes are prominently visible at the beginnings of the years 1980, 1981, 1983, 1986, 1988, 1990, 1998, 1999, 2002, 2007, and 2010. This you can check yourself simply by comparing their temperature plots with parallel UAH or RSS satellite plots. Clearly all three databases were computer processed using identical software not more than four years ago. We were told nothing about it, but since their data show a greater upward temperature slope than the satellites do during the last 35 years, I associate this procedure with illicit co-operation among the three data sources for the purpose of creating the impression of a greater global temperature rise than is justified by temperature measurements. And this triple alliance has the advantage that they can refer to each other's data to confirm their fake warming.

Beale
June 28, 2014 7:49 pm

I trust you’ve informed Ronald Bailey of this development.

kuhnkat
June 28, 2014 7:57 pm

BEST gets their data from the hardcopy station sheets???
HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

mark
June 28, 2014 8:02 pm

Nick Stokes says: June 28, 2014 at 3:48 pm
"Sometimes Nature just has a warmist bias. I'm not on top of the details here, but it seems cables are supposed to have near zero resistance. Positive resistance will reduce the voltage. Negative will increase it."
Just the opposite: E (voltage) = I (current) × R (resistance). Also, the colder the wire (conductor), the less resistance it has. Sensors report minute changes, so the maximum cable length to the control unit is critical for accuracy. Protecting the cable from temperature variations is also important. Usually low-voltage lines fail only when open (broken).
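For readers following the electrical point, the arithmetic is simply E = I × R. A minimal sketch with hypothetical numbers; no real sensor or cable specification is assumed:

```python
# E = I * R: effect of cable resistance on a current-excited resistive sensor.
# All numbers below are hypothetical, for illustration only.
sensor_ohms = 10_000.0      # resistive sensor at some temperature
current_a = 100e-6          # 100 microamp excitation current
cable_ohms_per_m = 0.08     # per conductor, per metre
cable_length_m = 30.0

cable_ohms = 2 * cable_length_m * cable_ohms_per_m  # out and return conductors
v_ideal = current_a * sensor_ohms
v_seen = current_a * (sensor_ohms + cable_ohms)

print(f"ideal:      {v_ideal:.6f} V")
print(f"with cable: {v_seen:.6f} V")
print(f"bias:       {1e6 * (v_seen - v_ideal):.1f} microvolts")
```

The point is only that any uncompensated cable resistance shows up as a one-sided bias in the reading, which is why cable length and temperature protection matter.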

June 28, 2014 8:07 pm

Anthony, I cannot understand your latest response to me. You say:

Goddard initially said that, in comparing the USHCN raw versus the final data set, 40% of the STATIONS were missing, and that is clearly wrong; he later changed that to say DATA….
The Polifact story used my quote related to my objections to Goddard's initial claim; it also linked back to Zeke's post about Goddard's initial claim.

The Polifact story never said a word about the idea 40% of stations or data being missing. It clearly described the idea it was checking:

A reader wondered if NASA really did cook the books (we love reader suggestions!), so we are checking Doocy's claim about fudging the numbers.

It specifically quoted Steven Goddard and referred to one of his gifs:

“Right after the year 2000, NASA and NOAA dramatically altered U.S. climate history, making the past much colder and the present much warmer,” Goddard wrote.
He provided this animated chart to prove his point (the chart marked “a” is the old version):

The entire piece was about this claim of his. It never talked about missing data. And while you claim Zeke’s post was about Goddard’s claim regarding missing data, his post was actually about what the Polifact post was about:

The blogger Steven Goddard has been on a tear recently, castigating NCDC for making up "97% of warming since 1990" by infilling missing data with "fake data". The reality is much more mundane, and the dramatic findings are nothing other than an artifact of Goddard's flawed methodology.

Zeke's post was not about this "zombie" data. The Polifact piece was not about this "zombie" data. The "zombie" data wasn't an issue for any of Goddard's critics. They were discussing Goddard's methodology, not anything related to the bug you highlight in this post.
As for arguing with Zeke, you claim Goddard was right while linking to a post Zeke wrote criticizing Goddard. If Goddard had been right on the topic they were covering, Zeke’s criticism of him must be wrong.
REPLY: and I don’t understand your reasoning, especially when you are claiming things I’ve not said, so we’ll just have to agree to disagree. -Anthony

Jim Clarke
June 28, 2014 8:10 pm

Can anyone imagine Michael Mann writing a post like this?

jorgekafkazar
June 28, 2014 8:13 pm

“… I was so used to Goddard being wrong, I expected it again…” Great example of why ad hominem arguments are invalid.

Beale
June 28, 2014 8:29 pm

You say:
Now that the wall is down, NCDC won't be able to ignore this; even John Nielsen-Gammon, who was critical of Goddard along with me in the Polifact story, now says there is a real problem.
====================================
I think you will be unpleasantly surprised by what NCDC is able to ignore.

Daniel G.
June 28, 2014 8:31 pm

So bad data is more abundant than good data, thus it will have more weight in infilling procedures and will obliterate the climate signal.
I mean, what is up with all those zombie stations? That is ludicrous.

ROM
June 28, 2014 8:37 pm

Hundreds of climate scientists collecting quite munificent salaries plus generous grants to use the best of scientific technologies and techniques to define and accurately record the global historical temperature records and trends.
Hundreds of millions of dollars spent by governments to fund some of the most powerful computer systems in the world to help determine those global temperatures and their trends.
Close to a trillion dollars [out of an annual global GDP of $70 trillion] already spent over the last half dozen years on climate science, global temperature record keeping, and heavily subsidised wind and solar scams, all to try to mitigate and prevent a claimed catastrophic warming; a claimed warming that is based on the output of those hundreds or thousands of climate scientists and their multi-million-dollar computers.
600,000 Germans, plus British, plus ?? elsewhere, cut off from power each year due to their inability to afford the rapidly escalating costs of power; costs driven by the stringent demands of climate scientists that the dangerous warming seen in the global temperature records must be stopped at all costs by making CO2-producing fossil-fueled power unaffordable, and therefore no longer economical, thereby seeing to the demise of the fossil-fueled power generating industry.
A few tens of thousands of quite avoidable deaths each year of the elderly and the poor from hypothermia and/or infections, brought on by the inability to afford to heat their living quarters; people who, because of the increased cost of power, had to make a choice of whether to heat or eat.
All this from those hundreds, possibly at most a couple of thousand, highly paid, highly rewarded climate scientists.
And now a couple of dozen unpaid, highly proficient, regularly denigrated and abused (and often much worse) skeptical bloggers have proceeded to analyse, without any reward or pay, the same temperature record keeping of those highly paid climate scientists.
And they are possibly on the way to undermining, maybe destroying, the entire claim of a major warming under way, by pointing to the increasing number of major flaws and outright f—-ups in the global temperature data recording systems.
Cock-ups and flaws that have been created and promulgated by the very climate scientists who were so highly paid for so many years to ensure that as accurate a global record of temperatures and temperature trends as was humanly possible was in place, so as to have a totally trustworthy base as the underpinning of the entire discipline of climate science and all its predictions and numerous claims.
Three years ago we were still being told, ad infinitum, that the science was in and could not be challenged.
And here we are with the entire basis of the claims of all of climate science now suspect in the extreme, as its very basis, the global temperature record and its keeper organisations' systems, is being thoroughly dissected and found seriously wanting, if not completely corrupted.
The cost in lives and treasure and societal dissension that the incompetency of the highly paid keepers of the global temperature record has imposed on humanity for nigh on two decades is almost beyond reckoning.

June 28, 2014 8:48 pm

Joe Bastardi;
That we think a degree of warming where the mean temp is -20 in the winter has the significance of a drop of 0.5 degrees on average in tropical Pacific temps is folly to me.
>>>>>>>>>>>>>>>>>>
I’ve been banging that drum until I grew weary. Robert G Brown mentions that issue from time to time also. The sad part is that even if we could normalize by converting to an energy flux metric, we’d still have muddy results because the calculations would be predicated upon muddy temperature and humidity data. But once the temp/humidity data gets sorted out, in my view it is still meaningless until the issue you mention is also addressed (and I expect the wrangling about the “right” way to do it will make this current matter seem almost tame).
Congrats to all who have participated in this discussion. Be they right or wrong or confused, they have brought a major issue to light. Change is on the horizon: not just a correction of the science, but a watershed moment in how science progresses in the internet age.

crosspatch
June 28, 2014 9:07 pm

That's why I said in part 2 that we still need to do spatial gridding at the very least, but more importantly, we need to get rid of this massive load of dead, dying, and compromised stations, and stop trying to fix them with statistical nuances, and just focus on the good ones, use the good data, toss the rest. – Anthony

Or simply stop using USHCN data at the end of this year and go with CRN data from then on. There is much less damage that can be done to CRN data. Let the past be what it is, stop adjusting the living daylights out of it, “freeze” it and use CRN going forward.

Windsong
June 28, 2014 9:29 pm

This has been a very interesting day reading the post, links and then the numerous comments. Make that a very informative day. My thanks to everybody.
I am handling the sale of some property, so I missed out on a family trip to the east coast this month and will miss another one next week to Mexico. But with some of the money that would have been spent on travel this summer, tip jars on various sites will get a donation.

Richard Ilfeld
June 28, 2014 9:31 pm

There is a link between estimating our temperatures, scheduling for the VA, counting the homeless, figuring out the percentage unemployed, determining the GDP, checking the inflation rate, or even measuring income inequality. When one drills down into verifiable data, the reality is a ways from the political view. As has often been noted here, the thermometer at the airport lets me compute density altitude at the runway, and is essential to flight safety. Many of the data points for the other public questions referenced above are equally precise and useful numbers. It is the political repurposing of the data, for needs from bonuses to policy support, that motivates distortion. It appears that this kind of distortion has become a way of life for many, and is viewed as neither wrong nor unscientific.
It is interesting that sometimes precise and accurate data points can lead to awful results when combined inappropriately, while someone like Joe Bastardi, who has done far more OBSERVATION than most folks, may not have "data" to three decimal places but seems to detect trends that are testable, and makes appropriate predictions that are also testable.
Joe makes a living, because, on balance he makes more good calls based on his observations than most folks, and some whose business depends on the weather are willing to pay him for his work.
The folks who terrify kids with dead polar bears, also make a living from their work. QED
Finally, Anthony doesn’t make much if anything for the blog, which may be why he can afford to be honest.

policycritic
June 28, 2014 9:34 pm

Making up data where there is none, especially for years for long dead weather stations, is just wrong. If it were financial data, say companies [or, say, ordinary homeowners] that went bankrupt and closed, and fell off the Dow-Jones Industrial Average, but somebody decided that they could "fill in" that missing company data to keep the "continuity" of the DJIA data set over the years, you can bet that somebody would be hauled off to jail within a day or two by the SEC. [ANTHONY]

(1) They hauled off Bernie Madoff for 150 years. He falsified data for 18 years so that he could…ensure 10% per annum for his clients in their homogenized portfolio returns.
(2) The sub-prime housing crisis was caused by control fraud; i.e. fraud by those in control. The CEOs were paid bonuses (their principal earning engine) based on the number and amount of loans completed. So their brokers (also paid well; many recruited from McDonald's and Burger King the day before) filled in income, and re-estimated appraised house values to raise the value of the loans, and hence all their bonuses, churning loans out at a rate of 10,000/month in some cases. It took two to three years (time for the homeowner's mortgage payment to adjust 2X, 3X) for the CEO to become spectacularly rich before he declared the mortgage bank bankrupt, closed the operation down, and opened another one.
[The CEOs of mortgage banks didn't have to worry about federal bank charter rules or repercussions. US mortgage banks are only regulated by the president of the NY Fed. NY Fed prez Geithner, however, ignored the FBI's warning in open testimony in September 2004 that there was a 90% "epidemic of mortgage fraud" (CNN).]
Good analogy, Anthony.

ed, Mr. Jones
June 28, 2014 9:49 pm

It seems to this dunderhead that the data available is not up to the task for which it is required. Bastardi is right; time to look elsewhere.

rogerknights
June 28, 2014 9:49 pm

RoyFOMR says:
June 28, 2014 at 5:40 pm
Dear Dana Nuccitelli,
your recent piece in that flagship of truth and probity, the Guardian newspaper, had a title that included a most beautifully balanced phrase, to wit 'Global warming conspiracy theorist zombies'
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2014/jun/25/global-warming-zombies-devour-telegraph-fox-news-brains
Little did I, or anyone else, think that you were seeding the ground for this bombshell episode of why the sceptics were right all along and that the science was, by no means, settled.
Bigger than Climategate, bigger even than your ego, welcome to Zombiegate.

ZOMBIEGATE!! YES!!!!

ferdberple
June 28, 2014 9:54 pm

It is a stretch to call this the scientific method. It hasn't been peer reviewed. And the author isn't a climate scientist. According to academia, this automatically disqualifies the finding.
Only recognized academics, who peer review each other, are qualified to participate in the scientific method. In that way errors are prevented.

June 28, 2014 9:55 pm

This entire matter should be the subject of congressional hearings and investigation. I hope that, say, Anthony, Goddard, Bastardi, Lindzen etc. can bring it before the necessary people. I'm sure John Boehner could be persuaded to start proceedings prior to the election.

ferdberple
June 28, 2014 10:05 pm

More important than the zombie data are the adjustments to the past. Any accountant who changed the past would end up in jail, especially if the adjustments didn't balance out to zero.
Errors are random; they should balance out to zero. If you are adjusting errors, your adjustments should also balance out to zero. If they don't, you are likely cooking the books.
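ferdberple's accounting test is easy to state as code. A minimal sketch with simulated numbers, not real station adjustments: corrections of genuinely random errors net out near zero, while a systematic lean shows up immediately in the sum:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # hypothetical number of individual corrections

unbiased = rng.normal(0.0, 0.3, n)  # honest error corrections, mean ~ 0
biased = rng.normal(0.1, 0.3, n)    # corrections with a +0.1 C lean

print(f"net effect of unbiased corrections: {unbiased.mean():+.4f} C")
print(f"net effect of biased corrections:   {biased.mean():+.4f} C")
# The test: legitimate error corrections should sum to roughly zero;
# a persistent nonzero net across thousands of them is the red flag.
```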

ZombieWoof
June 28, 2014 10:25 pm

Good job of shooting the messenger. Not so much on the CYA. Pride goeth before destruction, and an haughty spirit before a fall – Proverbs 16:18.

copernicus34
June 28, 2014 10:33 pm

Mr Watts, as I've stated in the past, you are to be commended for your tireless work in this field. However, many of your readers, and Heller's (aka Goddard's), could see that he was on to something. Maybe some of us don't have the bias that you claim against him. But I have to say, with all sincerity, it is you, sir, that need to do better. You need to open your eyes to the potential malfeasance of some in the climate community (especially those who work at the locations you are talking about here). Having said this, thank you for this post; it was so important to have this all displayed above the fold. It's a credit to you, and to the rest of the skeptics, that all this is open and free for all to follow, instead of what I believe could be an underground element working in the shadows within the climate community. Something is amiss here, and we shouldn't be so quick to just write it off as 'they are doing the best they can'. I know that's not what you said, but it's implied.

ferdberple
June 28, 2014 11:01 pm

http://notalotofpeopleknowthat.wordpress.com/2014/06/28/ushcn-adjustments-in-kansas
Paul Homewood appears to confirm that Goddard was correct on more issues than the zombie stations. Kansas was adjusted about half a degree upwards in 2013. Man-made warming indeed.

June 28, 2014 11:33 pm

Joseph Bastardi says at 2:31 pm
One thing that keeps getting clearer to me is the amount of time, treasure etc. wasted on 1/100th of the GHG, .04% of the atmosphere, which [CO2] has 1/1000th the heat capacity of the ocean and which, next to the effects of the sun, oceans and stochastic events, probably can not be measured outside the noise; it is a giant red herring
— ——— ———— ————– ————— —————— ———
Good points all around, but a huge point about the ocean. Plus, with logarithmic absorption it is postulated that additional CO2 has essentially zero effect beyond perhaps 50 ppm. And the actual evidence on CO2 affecting temperature? Not good: https://www.youtube.com/watch?v=WK_WyvfcJyg&info=GGWarmingSwindle_CO2Lag

richardscourtney
June 29, 2014 12:02 am

Anthony Watts:
Although I agree with much of what you have written here, I write to disagree with a point you make at June 28, 2014 at 4:40 pm.
You say

Fixing a few missing datapoints in a month with FILNET to make the record useable is one thing, wholesale reanimation of dead weather stations for years is something else altogether.

Sorry, but NO!
The matter is one of acceptable scientific procedure, and it is binary; i.e. it is right or it is wrong.
Changing one datum may have negligible effect but replacing 100% of the data certainly would have significant effect. If 'some' use of FILNET is "one thing" then how much use of FILNET is "something else altogether"? At present there is no determined answer to that question but there are an infinite number of possible opinions of "how much" is not significant. Politics is about opinions but science is about determination of the nearest possible approximation to reality.
The underlying problem is that there is no agreed definition of global average surface temperature anomaly (GASTA) and, therefore, any method to determine GASTA is 'correct' (at the very least, it is impossible for it to be wrong). When GASTA is not defined, any use of FILNET – or anything else – to estimate data is 'correct'; i.e. it cannot be shown to be wrong, for the same logical reason that an asserted name of the Pope's wife cannot be shown to be wrong.
Richard

June 29, 2014 12:02 am

Anthony, if by “agree to disagree,” you mean I say you’re misrepresenting the Polifact piece and Zeke’s post, and you simply hand-wave me away, sure. I laid out what those pieces were about, with quotes to back it up. If I’m wrong, it should be easy to show.
I don't know why a simple point about what topics were covered in sources should require us to "agree to disagree." Even people who violently disagree should be able to agree on what topics a source covers.
REPLY: And I don't agree with your assessment. The article mixed topics. It conflated my quote on Goddard's initial issue (his stick graph/missing data) with a later, second issue. My experience with the entire affair differs from yours. What you are missing is the fact that post 2000, the number of missing data points, missing stations, and infills has increased. It all goes back to the first issue, in Goddard's first graph. I'm sorry that you can't see this.
I’ll point out that I don’t agree with your latest headline “Cook et al Lie Their Faces Off” but you don’t see me there trying to tell you how to change your story because I disagree with the title. It’s your blog, you get to make your own editorial choices. Same here, even if you disagree with those choices. -Anthony

Nick Stokes
June 29, 2014 12:40 am

ferdberple says: June 28, 2014 at 11:01 pm
“Kansas was adjusted about 1/2 degree upwards in 2013. man made warming indeed.”

All Paul Homewood has done is give a table of USHCN final minus raw for stations in one state in one month. No news there.

Nick Stokes
June 29, 2014 12:54 am

Paul Homewood says: June 28, 2014 at 3:23 pm
“Does Nick Stokes really believe these are all due to faulty sensors?”

No. All you have done is given the difference between USHCN final (the file ..FLs.52i.avg) and raw (the file ..raw.avg) in the USHCN dataset. They include TOBS, and all the adjustments that have been endlessly described. There is nothing new there.

CEH
June 29, 2014 1:08 am

I found this blogpost about thermometer accuracy quite interesting
http://pugshoes.blogspot.se/2010/10/metrology.html

Greg Goodman
June 29, 2014 1:21 am

Luling TX, Maryland, Joe d'Aleo finding massive adjustments in Maine…
If correcting this “bug” does not affect long term trends, there’s a whole other, bigger, problem to be dealt with.

richard verney
June 29, 2014 1:28 am

It is clear beyond doubt (see this and the other recent article on Steve Goddard's claim regarding missing data and infilling), and from the poor siting issues that the surface station survey highlighted, that the land-based thermometer record is not fit for purpose. Indeed, it never could be, since it has always been strained well beyond its original design purpose. The margins of error far exceed the very small signal that we are seeking to tease out of it.
If climate scientists were 'honest' they would, long ago, have given up on the land-based thermometer record and accepted that the margins of error are so large that it is useless for the purposes to which they are trying to put it. An honest assessment of that record leads one to conclude that we do not know whether it is warmer today than it was in the 1880s or the 1930s, but as far as the US is concerned, it was probably warmer in the 1930s than it is today.
The only reliable instrument temperature record is the satellite record, and even that has a few issues; most notably, the data length is presently way too short to have confidence in what it reveals.
That said, there is no first-order correlation between the atmospheric level of CO2 and temperature. The proper interpretation of the satellite record is that there is no linear temperature trend, merely a one-off step change in temperature around the Super El Nino of 1998.
Since no one suggests that the Super El Nino was caused by the then-present level of CO2 in the atmosphere, and since there is no known or understood mechanism whereby CO2 could cause such an El Nino, the take-home conclusion from the satellite record is that climate sensitivity to CO2 is so small (at current levels, i.e. circa 360 ppm and above) that it cannot be measured using our best and most advanced and sophisticated measuring devices. The signal, if any, from CO2 cannot be separated from the noise of natural variability.
I have always observed that talking about climate sensitivity is futile, at any rate until such time as absolutely everything is known and understood about natural variation: what its constituent forcings are, and what the lower and upper bounds of each and every constituent forcing that goes to make up natural variation are.
Since the only reliable observational evidence suggests that sensitivity to CO2 is so small, it is time to completely re-evaluate some of the cornerstones upon which the AGW hypothesis is built. The hypothesis is at odds with the only reliable observational evidence (albeit that data set is too short to give complete confidence), and that suggests that something fundamental is wrong with the conjecture.

June 29, 2014 1:43 am

Morning All
From reading the comments on this thread, it is clear that bad stations, infilling of data, zombie stations, etc. do not make for good scientific information.
But to hear that the raw data has been "adjusted" to cool older data, even if by a tenth of a degree, shows without a doubt that this is not sheer incompetence but a sustained and deliberate effort to "make the facts fit the theory".
That is blatantly a corrupt practice.
If the US records are supposed to be one of the best, then the shit storm that this is causing is rightly justified.
I respect Anthony for having the balls to admit he was wrong. It takes a man to do that. But now, he should be chasing down the perpetrators of this and blowing SG’s whistle louder than loud.
However, this being the number one “denier” site, you can already bet any information from here will be, at best, marginalised.
The CAGW machine will only treat this as a speed bump, and like the climategate emails, this will be buried, adjusted or deliberately forgotten within months.

Greg Goodman
June 29, 2014 2:25 am

richard verney says: “Since no one suggests that the Super El Nino was caused by the then present level of CO2 in the atmosphere, and since there is no known or understood mechanism whereby CO2 could cause such an El Nino”
Oh yeah? Because you say so?
I'm not saying this is the case, but consider the following: GHGs cause heat to be retained, lessening the natural cooling of the oceans. The natural variability of the Nino/Nina cycles successively absorbs incoming solar and dumps it out through the atmosphere. If natural cooling is slowed and heat builds up, from time to time there is a large El Nino that dumps this excess heat.
What actually causes these intermittent "cycles" is very poorly understood and often misrepresented as an "oscillation", with the undeclared implication that it is symmetrical and will average out over time.
That is spurious. It is not a pendulum swing as Tisdale has pointed out. It is an active player.
So I don't think your bland hand-waving declaration is either accurate or informed.

Editor
June 29, 2014 2:27 am

Zeke Hausfather says:
June 28, 2014 at 6:04 pm

If you don't like infilling, don't use it. It doesn't change the result, almost by definition, since infilling mimics spatial interpolation:

Thanks, Zeke … if there is a perfect infilling method, you'd be correct. Are you claiming that such a method exists?
Zeke Hausfather says:
June 28, 2014 at 6:24 pm

Sunshinehours/Bruce:
Infilling makes no difference for Arizona:

Well, I can see a number of differences by naked eye … so the claim that it makes "no difference" is clearly untrue. The question is, how much difference?
Infilling operates on the ASSUMPTION that the station is highly correlated to its neighbors. And while on average this is true, for any individual station it may be far from true. But wait, it gets worse. Even if the overall correlation is good, the seasonal correlations may vary significantly. But wait, it gets worse. You are often infilling a single month … and even if the overall correlation is good, the correlation for that one particular month may be abysmal.
For example, annual temperatures in Anchorage and Gulkana in Alaska have a correlation of 0.86 … but despite that, a linear estimation of Anchorage temperature based on Gulkana temperature gives an error in the estimated annual data of up to a degree and a half … and that's with quite good correlation.
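That Anchorage/Gulkana point is easy to reproduce with synthetic data. A sketch using simulated standardized anomalies (not the actual Alaska records): even at a correlation near 0.86, individual estimated years can miss badly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # hypothetical years of annual anomalies
r = 0.86

donor = rng.standard_normal(n)
target = r * donor + np.sqrt(1 - r**2) * rng.standard_normal(n)

slope, intercept = np.polyfit(donor, target, 1)
misses = target - (slope * donor + intercept)

print(f"realized correlation:   {np.corrcoef(donor, target)[0, 1]:+.2f}")
print(f"typical miss (std):     {misses.std():.2f}")
print(f"worst single-year miss: {np.abs(misses).max():.2f}")
# Units are standard-deviation units of the annual anomalies. The point
# is the ratio: a "good" correlation still mispredicts individual years.
```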
In the case of Arizona, much of the earlier infilled data is lower than the non-infilled … so my question is, what are the trends for the two results?
Thanks for all your work on these questions,
w.

Greg Goodman
June 29, 2014 2:41 am

The exception is BEST, which starts with the raw daily data, but they might be getting tripped up into creating some "zombie stations" of their own by the NCDC metadata and resolution improvements to lat/lon. The USHCN station at Luling Texas is listed as having 7 station moves by BEST (note the red diamonds):
Luling-TX-BEST
But there have really been only two, and the station has been just like this since 1995, when it was converted to MMTS from a Stevenson Screen. Here is our survey image from 2009:
===
So if I'm reading that correctly, BEST will be "correcting" this station by about +1.5 since 1995, despite it being a well-sited, good-quality station in that period. That implies, conversely, that the whole regional average it is being compared to may have a warm bias of that magnitude.

Eamon Butler
June 29, 2014 2:56 am

Please excuse me, but as a layperson I sometimes struggle with the technical terminology. Is "Crow" a bit like humble pie? 😉
Always a pleasure to see Honesty rise above all else.
Eamon.

michel
June 29, 2014 3:06 am

Well done, you. And Prof Curry. And Paul Homewood and Zeke and the others. And well done Goddard; even though he was confusing and partly wrong, he really was on to something, and it's good to see everyone recognising that.

SandyInLimousin
June 29, 2014 3:30 am

Joseph Bastardi says:
June 28, 2014 at 4:41 pm
Hear Hear on all points

nevket240
June 29, 2014 3:35 am

http://www.channelnewsasia.com/news/lifestyle/hotter-and-larger-tropics/1219108.html
This blows the original hypothesis out of the water. And no one questions it??
regards

Eliza
June 29, 2014 3:59 am

# Red Flags:
1. What Sunshine hours is reporting with recent data.
2. You're throwing pearls at pigs: findings should not have been reported to USHCN, NCDC etc. (as you are assuming they did not know, and may in fact have purposely fabricated it, as SG maintains).
3. They won’t change anything anyway
4. You will continue to defend them (harmless scientists).
5. It's time for lawsuits, not talking
6. Remember BEST saga (mosh is still defending. that should be a REAL RED FLAG).
7. Thanks anyway re Goddard.

J Martin
June 29, 2014 4:06 am

Although the saying goes "never attribute to malice that which can be attributed to incompetence" (or something like that), the differences are so absurdly blatant that one is forced to wonder.
Can someone who carries some weight with NCDC, and hence might get an answer, ask them to justify the adjustments they have made, shown in the two graphs for Maine?
In the absence of a viable explanation, this should be considered fraud, designed to manipulate politicians into providing continued/increased funding programs.

Kasuha
June 29, 2014 4:29 am

The key question here is, were estimated data used to calculate gridded and higher level averages, or to adjust other stations?
Doing so would be an inexcusable error, something that should never happen to a real scientist. An error so huge I almost refuse to believe people in NCDC would make willingly.
But if these estimates are just sitting there quietly and are not used for further processing, then they’re almost irrelevant.

Nick Stokes
June 29, 2014 4:47 am

Kasuha says: June 29, 2014 at 4:29 am
“The key question here is, were estimated data used to calculate gridded and higher level averages, or to adjust other stations?
Doing so would be an inexcusable error…”

They are used to calculate averages. That is the purpose of adjustment. And it isn’t an error.
A spatial average requires an integral, and they are in effect using numerical integration formulae. That is, summing interpolated values over the whole area, implicitly. Whether you interpolate values to include in the sum makes no difference, provided the interpolation is as accurate as that implied in the integration.
They interpolate all stations in each sum because they are using absolute temps, which includes the climatology. You need to keep the same climatology from month to month.
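Stokes's integration argument can be shown in one dimension. A minimal sketch with an idealized transect and hypothetical temperatures: if the infilled value equals the interpolation the spatial average already implies, the average is unchanged; the whole dispute is over the "provided the interpolation is as accurate" caveat:

```python
import numpy as np

# Stations along an idealized 1-D transect (positions and temps hypothetical)
x = np.array([0.0, 1.0, 3.0, 4.0])
t = np.array([15.0, 15.4, 16.0, 16.2])

def area_average(xs, ts):
    # Trapezoidal integration of the piecewise-linear temperature field
    return np.trapz(ts, xs) / (xs[-1] - xs[0])

# "Infill" a missing station at x = 2.0 with the linearly interpolated value
x2 = np.insert(x, 2, 2.0)
t2 = np.insert(t, 2, np.interp(2.0, x, t))

print(area_average(x, t))    # without the infilled station
print(area_average(x2, t2))  # with it -- identical; the infill adds nothing
# The average only moves if the infilled value DIFFERS from the implied
# interpolation, which is exactly what the critics say is happening.
```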

Eliza
June 29, 2014 5:11 am

“They interpolate all stations in each sum because they are using absolute temps, which includes the climatology. You need to keep the same climatology from month to month”
Well that explains everything WTF?
Does this guy have any degree from anywhere? If it's Australian, I understand…

A C Osborn
June 29, 2014 6:05 am

Nick Stokes says: June 29, 2014 at 4:47 am
When are you going to respond to my questions? You justify infilling along with Zeke, but you are using already "Estimated" values to infill with.
Please name all the stations that you used to compare to Luling so that the rest of us can check them out for validity of use and values.
I have already told you that San Antonio is Estimated for that period; did you use it?

Latitude
June 29, 2014 6:10 am

Goddard headlined the Washington Times and Drudge again today….
http://www.washingtontimes.com/news/2014/jun/23/editorial-rigged-science/
He owes all of you guys for making this happen… accusing him of being wrong, and then having to eat it, made it a hundred times bigger.

June 29, 2014 6:14 am

Nick, NOAA grids on a 5km x 5km grid. Infilling is not necessary to grid.
Infilling is necessary to warm.
http://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1980-2014/

June 29, 2014 6:15 am

Regarding the impossibility of CO2 freezing out of the atmosphere:
The refrigerator experiment is not quite fair, as refrigerators have an internal fan which will mix any pockets of CO2. But you can imagine circumstances where CO2 could be allowed to settle; any sublimated gas would remain in this pocket and provide a high-partial-pressure environment.
As an example of some of the complexities involved, see this article on natural isotopic separations in firn: http://www.whoi.edu/cms/files/kcasciotti/2006/9/Craig1988_14129.pdf
I doubt it is a significant effect in atmospheric composition, but it does point to the complexities of what happens in the real world compared to a laboratory.

NikFromNYC
June 29, 2014 6:17 am

ferdberple asserts: "Paul Homewood appears to confirm that Goddard was correct on more issues than the zombie stations. Kansas was adjusted about half a degree upwards in 2013. Man-made warming indeed."
But that well-known adjustment has little to do with the overextended infilling that has now gotten Watts' attention as a stations hound. The bulk of the adjustment is the time-of-day adjustment (TOBS), which Goddard separately claims is actually being applied vastly in excess of the stated amount; a claim that has not been hashed out yet by other skeptics, but mostly ignored. Perhaps Goddard is attributing to TOBS some of his own artifacts? If Goddard didn't willfully isolate himself so badly, by being so conspiratorial and political as he harbors crackpots, nice liberal-minded scientist types might offer him more consistent feedback before he heads into heart-of-darkness episodes with little technical feedback at all for months at a time; that explains how his spurious data-dropout hockey stick lingered on for two years to the delight of his cheerleading squad. When I properly pointed the potential flaw out to him, that team of supporters attacked me for being crazy, and the way Goddard failed to moderate this attack further isolates him. This isolation has now led to a skeptic-bashing news cycle.
Please note the silence from software-savvy skeptics over whether this newly discovered infilling actually adds a false warming trend, as Goddard strongly claims it does, in the face of competent claims that it does not. I make little single-glance infographics to present skeptical arguments on news sites, so I've been pressuring Goddard for years to present the real details of his various procedures, but I am regularly attacked for doing so, and there's no meat on the bones there, as far as I can tell. That I was attacked there for also being a vegetarian was kind of laughable for a guy who makes his own hipster beef jerky.
-=NikFromNYC=-, Ph.D. in carbon chemistry (Columbia/Harvard)

Bill Illis
June 29, 2014 6:22 am

Nick Stokes says:
June 29, 2014 at 4:47 am
You need to keep the same climatology from month to month.
—————————————
With all the adjustments, the climatology itself is changing every month. I wonder how many recalc cycles one must run in order to keep that situation accurate.
Kevin K. says (June 28, 2014 at 6:29 pm, above) that the climatology is not the same as the simple arithmetic mean of the station data in the base period.
There could be some systematic problems with just the climatology that should be looked at.

angech
June 29, 2014 6:27 am

Nick Stokes supports using altered data as real data, and prefers the use of anomalies, which, as he knows full well, hide the temperature adjustments that have been made and continue to be made.
“They are used to calculate averages. That is the purpose of adjustment. And it isnā€™t an error.”
No it definitely is not an error and it definitely is not science as I know it.

June 29, 2014 6:48 am

NikFromNYC: “The bulk of the adjustment is time of day adjustment (TOBS)”
Wrong.
This is Jan 1895 to 2013 TMAX graphed (no gridding … but gridding doesn't change much).
The trend is -0.1C/decade raw.
The trend goes to 0.2C with TOBS
The trend goes to 0.5C with the rest of the adjustments.
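For anyone wanting to check numbers like these, the per-decade figure is just an ordinary least-squares slope. A generic sketch; the synthetic series below merely stands in for whichever raw, TOBS, or final series you load:

```python
import numpy as np

def trend_per_decade(years, temps):
    # OLS slope of temperature on year, scaled to degrees per decade
    return 10.0 * np.polyfit(years, temps, 1)[0]

# Synthetic stand-in: 0.2 C/decade plus weather noise
years = np.arange(1895, 2014)
rng = np.random.default_rng(1)
temps = 0.02 * (years - 1895) + rng.normal(0.0, 0.3, years.size)

print(f"{trend_per_decade(years, temps):+.2f} C/decade")
```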

June 29, 2014 7:03 am

“if you split a station where there is no actual move it has no effect.”
To this statement, Steve Mosher, I have to object!
This trick introduces a warming effect:
The temp sharply drops; there’s a cooling trend forming
But you split it there.
Cooling gone.
Global Warming!
===|==============/ Keith DeHavelle

June 29, 2014 7:04 am

Infilling in USHCN tends to emphasize the trend. If the trend is up, the infilled data makes the trend steeper in the upwards direction. If the trend is down, infilling makes the trend steeper in the downwards direction.
But the data is always higher.
Warming trend:
http://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1980-2014/
Cooling trend:
http://sunshinehours.wordpress.com/2014/06/29/ushcn-2-5-estimated-data-is-warming-data-usa-1980-2014/

Dan in Nevada
June 29, 2014 7:04 am

It’s surprising that anybody is surprised that data needs to be adjusted in order to be useful and conform to modern expectations. There are numerous examples of this. In 1936, under the wise leadership of FDR, the National Congressional Decision Commissar (NCDC) was formed specifically to address issues such as incorrect thinking, principles, ethics, or honesty that were causing Congressional gridlock and preventing needed Progressive reforms.
The problem was that the use of “raw” votes would lead to either incorrect legislation or prevent necessary legislation from being passed. Using finely-honed algorithms, the NCDC replaces obviously wrong “raw” votes and instead infills corrected votes using a spatially- and politically-correct gridding system.
One originally unforeseen circumstance was that the Congressional Record could be used against incumbent Congressmen and Senators during the election cycle. Thus, over time, it became necessary to adjust past (possibly already adjusted) votes to ensure proper results in the elections. Thus there are numerous cases, such as socialized health-care or invading faraway third-world countries for no reason at all, where the legislation clearly happened at one point, but a majority of legislators can claim that they themselves did not vote for it.
This is probably alarming to some people, and a knee-jerk reaction might be to suspect fraud or dishonesty, but this is how a modern society must function. Rest assured that your government has only your best interests at heart and is committed to always being able to show in retrospect that they did the right thing.

Greg Goodman
June 29, 2014 7:15 am

“Infilling in USHCN tends to emphasize the trend. If the trend is up, the infilled data makes the trend steeper in the upwards direction. If the trend is down, infilling makes the trend steep in the downwards direction.”
And since the centennial trend is upwards on average… it increases GW.
Nice one , sunshine !

Dan in Nevada
June 29, 2014 7:18 am

Stuck in moderation? Didn’t mean to offend.

REPLY:
Sometimes people get this erroneous idea that we examine every comment; we don't. The ones that get flagged by the spam filter get held, oftentimes due to a complex formula we have no control over. I can tell you that the longer the comment, the greater the chance the spam filter will flag it. It is up now. – Anthony

Greg Goodman
June 29, 2014 7:24 am

Bill Illis: “With all the adjustments, the climatology itself is changing every month. I wonder how many recalc cycles one must run in order to keep that situation accurate.”
Hey, repeat until it converges, like HadSST does.

Greg Goodman
June 29, 2014 7:34 am

"Infilling in USHCN tends to emphasize the trend. If the trend is up, the infilled data makes the trend steeper in the upwards direction. If the trend is down, infilling makes the trend steeper in the downwards direction."
I think this is probably statistically predictable in a data set where 80% of installations are sub-standard and subject to UHI.
This is a result of what our host referred to as "warm soup".
It will be interesting to see what NOAA et al. come up with "next week". My guess is they will find they 'need more time'.
I suspect this will do for data confidence what Climategate did for trust in climatologists.

Global cooling
June 29, 2014 7:42 am

Conspiracy theories are not plausible in the eyes of media and ordinary people. It is better to tell them that the old temperature record is bad. Data is missing, inconsistent and created for another purpose.
Especially metadata, the data about the data, is missing. We do not know the history of weather stations accurately enough to reconstruct the information from the raw data alone. A temperature reading is a sum (or actually a complex function) of the climate signal and a number of other factors, such as the exact time of the reading, UHI, and changes in the local environment. Without knowing these accurately we can't calculate the climate signal.
This is why sceptics follow the satellite data that we have from 1979.

Bill Illis
June 29, 2014 7:48 am

sunshinehours1 says:
June 29, 2014 at 6:48 am
——————–
Sunshinehours1, can you plot just the adjustments over time: raw minus TOBS, and raw minus final? And then plot both with a baseline ending in 2013 at 0.0C. The curve should reach a maximum negative of around 0.5C in 1930-1940.
It is just easier for most people to understand.
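The plot being requested reduces to differencing two series and re-zeroing at 2013. A hedged sketch; the annual series here are fabricated placeholders for the real raw and final values:

```python
import numpy as np

def baselined_adjustment(raw, adjusted, years, base_year=2013):
    # Adjusted-minus-raw, shifted so the curve reads 0.0 at base_year
    diff = np.asarray(adjusted) - np.asarray(raw)
    return diff - diff[years.index(base_year)]

years = list(range(1930, 2014))  # placeholder span
raw = [15.0 for _ in years]                          # fabricated raw means
final = [15.0 + 0.005 * (y - 1930) for y in years]   # fabricated growing adjustment

adj = baselined_adjustment(raw, final, years)
print(f"1930 adjustment relative to 2013: {adj[0]:+.2f} C")
```

With a genuinely growing adjustment, the baselined curve is negative in the past, which is the shape Bill Illis expects to see.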

PMHinSC
June 29, 2014 7:50 am

Zeke Hausfather says:
June 28, 2014 at 6:04 pm
"If you don't like infilling, don't use it. It doesn't change the result…"
June 28, 2014 at 6:24 pm
"Infilling shouldn't have any effect on temperatures, because …"
It "doesn't change the result" or it "shouldn't" change the result? Anytime you defend something that shouldn't be done, you are losing the argument. Performing unnecessary operations leaves you open to mistakes and criticism. Let the process play itself out and we will know, rather than speculate, whether it "shouldn't" or "doesn't." Rather than stoking a dying fire, wouldn't it be better to do something more constructive?

Dalcio Dacol
June 29, 2014 8:12 am

I predict that when USHCN corrects its “methodology” the past will be even cooler and the present even warmer because, you know, it is worse than we thought!

Dan in Nevada
June 29, 2014 8:26 am

@Anthony 7:18 Thanks for the reply. Right after I complained, I re-read and saw at least one known code-word. My bad.

David S
June 29, 2014 8:46 am

Ok, so here's a question I've been wondering about for some time: prior to the computer era, temperature data was written by hand into monthly log sheets. That handwritten data has to be manually copied into computer databases so that graphs can be constructed and long-term trends determined. Doing even one year's worth of data for a single weather station is a huge task requiring many man-hours. So who did that? And where can someone find the digitized (unadjusted) data?

Richard M
June 29, 2014 8:57 am

Correct me if I'm wrong, but doesn't this infilling mechanism create yet another problem? When NOAA releases one of their monthly reports, it is likely based on incomplete data. Just as we see with elections, the latest-reporting stations are most likely the rural ones. This means they are being infilled from urban data. While this should eventually get corrected, it will make the most recent months appear warmer than they really were. Since these NOAA reports get highlighted in the press, we are constantly being presented a false picture of reality.
This is probably why the July 2012 average is no longer the warmest “evah”. The addition of more and more real data cools the final result. By continuing to make these reports NOAA is presenting propaganda. The politicians then repeat this propaganda. Anyone think this will stop?

Editor
June 29, 2014 9:06 am

David S
And where can someone find the digitized (unadjusted) data?
The original station data can be accessed here.
http://www.ncdc.noaa.gov/IPS/coop/coop.html
This gives the original, handwritten monthly reports of daily temps.

Editor
June 29, 2014 9:11 am

From some data I saved two years ago for Alabama, it seems that the TOBS adjustment has increased by about 0.6F between the 2012 version and now.
For instance, the TOBS adj temp for Nov 2011 has been increased by 0.3F, while Nov 1934 has dropped by a similar amount.
http://notalotofpeopleknowthat.wordpress.com/2014/06/29/more-news-on-ushcn-temperature-adjustments/
And just to complicate matters further! The latest USHCN update for Luling has changed since I printed it off two days ago. Not by much, but even temps for 1998 have been altered.
Apparently, actual temperature data is no longer important!

bruce ryan
June 29, 2014 9:56 am

The nut of it: you really shouldn't be using data from instrumentation not intended for the purpose. IOW, a weather station reflects current weather, such as it is, including the conditions in which it is situated; which is what it was designed for. How do you intend to compare data across a continual evolution of environmental change? And designing software to devolve the conditions just isn't right.

June 29, 2014 10:01 am

Anthony
One word sums up this post.
Respect
I now return to the reason that I previously voted for this site in the Weblog Awards.

June 29, 2014 10:09 am

Very well done Anthony! With many excellent assists/posts.
Cognitive biases = the arch-enemy of the scientific method.
http://en.wikipedia.org/wiki/List_of_cognitive_biases

B.C.
June 29, 2014 10:35 am

Has anyone contacted the poley bear folks and asked for their input on this whole “in-filling” debacle? It seems as though they may have some pointers as to how one can manage to lower the agenda-driven data signal, instead of always getting a higher-than-desired data signal from the places that have actual data. Just a thought.
PS: Anthony, kudos for the mea culpa. I've always had a great deal of respect and admiration for both you and Steve/Tony. The two of you have very different methodologies for getting your point across, but we're all on the same team: seeking and speaking the truth, no matter the consequences. Just remember, one doesn't go into battle with an army of monotone clones; you fight with the team you have, and everyone has different personalities. Some of them/us may not be as rhetorically restrained as you, your staff of moderators, or other big-name skeptics, but they're every bit as important in fighting the tyrant wannabes who are pushing the CAGW agenda. Let's try to keep the "Blue-on-Blue" fire to a minimum, if at all possible.
Again, thank you for your yeoman’s work over the years. History will remember you (and Tony) as members of a movement on the level of Martin Luther. (The one who nailed that obscure little paper to the door.)

Greg Goodman
June 29, 2014 10:51 am

I hope everyone has realised at this stage: whenever you get an update of any data you've looked at, NEVER overwrite the old file. Save the new release under a different name, and then the first thing you do is 'diff' it against the old copy to see what has changed apart from the addition of the more recent data.
Have the goal-posts moved??
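Greg's back-up-then-diff habit can be automated with nothing but the standard library. A sketch; the file names are hypothetical:

```python
import difflib
from pathlib import Path

# Hypothetical saved snapshots of the same dataset, two days apart
old_lines = Path("ushcn_2014-06-27.txt").read_text().splitlines()
new_lines = Path("ushcn_2014-06-29.txt").read_text().splitlines()

# Print only the lines that changed between the two downloads
for line in difflib.unified_diff(old_lines, new_lines,
                                 fromfile="old", tofile="new",
                                 lineterm="", n=0):
    print(line)
```

Any change to values in past years, rather than new rows appended at the end, shows up immediately.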

Editor
June 29, 2014 11:26 am

Question?
When infilling is needed, is it calculated against only other USHCN stations, or are non-USHCN stations used as well?

Editor
June 29, 2014 11:29 am

Greg
NEVER overwrite the old file to update it
That would be far too inconvenient!!!!

rogerknights
June 29, 2014 12:03 pm

A thorough outside audit is in order. The House should authorize it. It should get full explanations for ALL of the procedures and assumptions being used, with several sample cases used as examples. Outside experts, including foreign experts, should be hired and encouraged to cross-examine everything.

Bart
June 29, 2014 12:11 pm

Gracious of Anthony to put things to right. And, though I do not know enough of "Steven Goddard" and his history to make an endorsement, I can offer kudos for pursuing this issue doggedly.
The picture, though, which sprang to mind reading all this was of a certain climate scientist, whom others have noted shares a resemblance to a particular cartoon character, crooning, “Mmmm…. Crow…”

Stephen Fox
June 29, 2014 12:18 pm

Excellent post, Anthony.
I am unqualified to discuss statistics, so would like to start at the other end, so to speak. Can Steven Mosher, or anyone else from BEST, say whether the impression I have is right: that temperatures over 50 years ago have been overwhelmingly corrected downwards, and more recent temperatures increased?
In other words, do you keep a running average of your adjustments (sorry I know you don’t use that word, but I don’t know how else to say it)?
From the procedures you describe, no general tendency should emerge, should it? After all, stations may move downhill, and so need to be 'expected' to be cooler than they report. Surely, releasing the happy news that on average there was no significant change to the overall picture should resolve the matter. It is this persistent impression of constantly increasing warming that is damaging to the image of the datasets. Or perhaps there is a perfectly acceptable accounting for such a consequence of the doubtless valuable work being done.
Regards
Stephen

scf
June 29, 2014 12:25 pm

Goddard has been showing this stuff for years. He has been showing how the temperatures for early years have been cooled and recent years warmed every time the data is adjusted.
I am gobsmacked. Gobsmacked that you have been ignoring Goddard all this time.
He has been showing exactly what you are showing in the Maine temperature record above. Yet you tell us that you were not concerned by what has happened to the temperature record? Curry says "acknowledging that Goddard made some analysis errors, I am still left with some uneasiness about the actual data, and why it keeps changing". Uneasiness, to say the least! Goddard has been doing a great service, and I am gobsmacked that someone like you, who knows what it is like to be in his shoes, would not have had the same uneasiness.
In any case, it’s refreshing to see this post.

Larry Fields
June 29, 2014 12:32 pm

Although I’ve never had the opportunity to eat fresh crow, I do have an amateur interest in cooking.
In order to optimize the taste of this much-maligned bird, use crow in a Sri Lanka chicken curry recipe. If there are any off-tastes, they should be overpowered by the burning sensation of the cayenne pepper. Enjoy.

Gregory
June 29, 2014 12:43 pm

Well done Steve Goddard; and crow to many, but that is what debate and science are frequently about. Those who refuse crow will ultimately have a larger portion later 🙂

scf
June 29, 2014 12:44 pm

Look at Maine again, this is a state bigger than most European countries.
Note that 1913 was cooled nearly 5 degrees F and does not stand out. There is a warming of at least 3 degrees F since 1895 (they list 0.23/decade) and the new mean is close to 40F.
We're talking 5 degrees F! Done by an adjustment in the year 2014! For an entire state! Anyone with a modicum of common sense knows this is absurd! It's getting into the realm of the ridiculous! And it was Goddard who was rated "pants on fire"? Curry says "acknowledging that Goddard made some analysis errors, I am still left with some uneasiness about the actual data, and why it keeps changing". Curry is stating the obvious.

Bryan
June 29, 2014 12:45 pm

I would add my voice to those who suggest that we should not be quick to assume that these issues with the USHCN data set result from unintentional mistakes. Consider:
1) These practices (infilling data and adjusting temperatures) both involve the opportunity to bias the final result intentionally in a way that might not be noticed. Furthermore, these practices provide deniability of intent in the case that someone does notice.
2) It appears that the practices under discussion do in fact bias the final data set in the direction of greater warming. It seems to me that Goddard and others have shown this to be the case concerning the infilling and zombie stations. As for the temperature adjustments, the bias has not been shown as convincingly, but it does seem that if UNBIASED adjustments were being done, the net effect of all adjustments would not always be in the same direction (just sayin’).
3) The top of the organization chart of the executive branch of the U. S. Government STRONGLY desires to push a global warming agenda. In fact, the Obama administration is in a full court press on this issue. You see it in the actions of every relevant federal regulatory body and in the statements and publications of every federal science organization. You see it in speech after speech from top officials, including the president. When I heard Barack Obama (who would not know a CO2 absorption band from a candied apple) essentially insult (by implication, without mentioning names) the intellectual chops of the likes of Freeman Dyson and Richard Lindzen, I realized that he is very driven on this issue. I'm sure that everyone in the chain of command down to and including the NCDC employees realized this too.
4) When EVERYBODY in an organization knows what the bosses want, it does not take a big conspiracy to get the desired results. All it takes is a few people willing to institute “helpful” practices and procedures when they get a chance, and a general tendency of others to go along with the program. If one considers this to be group think, perhaps the group think could rather be thought of as a complicity with practices that will tend to produce results that everyone knows are sought by the supervisors up and down the organizational chart.
I am not saying that this is all intentional rather than unintentional. I am saying that it might be.

rogerknights
June 29, 2014 12:56 pm

PS: Initially, a House committee should hold hearings taking testimony from contrarian experts on temperature-taking methodology and the problems that Goddard has uncovered. This hearing should also bring GISS’s procedures under the microscope. It, too, should be audited. (Or perhaps that can be deferred until the USHCN audit is complete.)

Editor
June 29, 2014 1:09 pm

It is just worth reiterating: the TOBS adjustments at a couple of stations in Alabama, which I noted in 2012, have increased by 0.6F.
By this I mean that the 2012 v 1934 adjustment, as declared in 2012, was 0.6F less than what USHCN now shows for 2012 v 1934.
Unless Nick Stokes, Zeke, or anybody else can explain the reasons for this, I can find no explanation other than sheer fraud.
The details are here.
http://notalotofpeopleknowthat.wordpress.com/2014/06/29/more-news-on-ushcn-temperature-adjustments/

rogerknights
June 29, 2014 1:11 pm

PPS: The GOP is under attack on the climate change issue this election cycle. It is on the defensive, which is a poor place to be. If it has any political savvy (doubtful), it will realize that its salvation lies in seizing the offensive and turning the spotlight on the trustworthiness of those Obama has analogized to deference-due-doctors-with-a-diagnosis.

rogerknights
June 29, 2014 1:16 pm

PPPS: The NCDC's USHCN temperature for the US is about two degrees above its own high-quality network. That means that its adjustments have all been going in the wrong direction. That means the NCDC is untrustworthy. Very few people know that now, but the GOP can enlighten them, repeatedly, and in thunder.
REPLY: You need to back up that claim of 2 degrees, because as far as I can tell, you have no basis for it. – Anthony

Gary Pearse
June 29, 2014 1:17 pm

There will be NO "SURPRISES" when they finally jigger all this stuff to cull out zombie stations. They are NOT going to allow something that shows they have been wrong for a long time. They have too big a stake in the product they have been bashing skeptics with, and in supporting trillions in expenditures on non-carbon energy and shutting down coal. There WOULD BE ENORMOUS LAWSUITS AGAINST THE GOVERNMENT IF THIS WERE to prove significant. There WILL BE a bone thrown to calm us down. Maybe July 2012 will end up being only the 3rd hottest July on record instead of the hottest. There will be a tenth of a degree C, and it will be a tenth of cooling, because they know the outrage will only widen otherwise. But don't expect any real changes.

NikFromNYC
June 29, 2014 1:27 pm

sunshinehours1 demonstrates: "NikFromNYC: 'The bulk of the adjustment is time of day adjustment (TOBS)'
Wrong."
…and…
"Infilling in USHCN tends to emphasize the trend. If the trend is up, the infilled data makes the trend steeper in the upwards direction. If the trend is down, infilling makes the trend steeper in the downwards direction."
I hope you're right about this in a way that affords a highly public downgrade of the average temperature uptrend. Since the trend overall is indeed upwards, this seems rather likely now that attention is being focused on something being goofy with the software. I've been confused by the natural excitement of Watts over technical sloppiness versus whether any actual correction of the final trend is expected. And confused, too, about whether Goddard is claiming that TOBS itself is mishandled in the software. Nick Stokes asserted: "No. All you have done is given the difference between USHCN final and raw in the USHCN dataset. They include TOBS, and all the adjustments that have been endlessly described. There is nothing new there."

Jimbo
June 29, 2014 1:39 pm

It seems to me that with the temperature standstill Warmists will also learn about ‘truth’. If you are right you are right, and if you are wrong you are wrong. Time is the arbiter with climate. If it starts getting too hot, then we will also learn the ‘truth’.

The truth is incontrovertible. Malice may attack it, ignorance may deride it, but in the end, there it is.
Winston Churchill

Stephen Wilde
June 29, 2014 1:44 pm

How can one make any policy decisions relating to climate issues now that, apparently, we have no idea what the global temperature has been doing since the industrial revolution?
Can anyone demonstrate that there has been any warming at all?

richardscourtney
June 29, 2014 1:55 pm

Stephen Wilde:
At June 29, 2014 at 1:44 pm you ask

Can anyone demonstrate that there has been any warming at all?

No, nobody can do that, but several teams say they can.
I yet again draw attention to this.
Richard

milodonharlani
June 29, 2014 2:09 pm

richardscourtney says:
June 29, 2014 at 1:55 pm
Without resort to the defective & highly distorted, at best, instrument record, IMO science can with some confidence conclude through proxy data that earth was cooler than now 320 years ago, with somewhat less confidence 160, but not 80 years ago, when it may well have been (& probably was, IMO) warmer than now. It was also most likely warmer than now 1000 years ago, 2000 years ago, about 3000 years & 5000 & longer years ago, with cooler spells in between.
The same pattern is detectable in previous interglacials.

richardscourtney
June 29, 2014 2:18 pm

milodonharlani:
re your post at June 29, 2014 at 2:09 pm.
Yes, I agree, but the question I answered, my answer, and this thread concern the various determinations of global average surface temperature anomaly (GASTA) and not proxy data.
Richard

Evan Jones
Editor
June 29, 2014 5:24 pm

Personally I think the code looks for just odd 'cold' readings and moves them up.
Oh, it does both. If a station is disproportionately warm, it will be adjusted lower. Even some Class 1/2 stations have downward adjustments.
BUT . . . the code adjusts towards the majority. And (during our study period of 1979-2008) 80% of the stations are carrying, on average, an extra ~0.14C of warming per decade. As a result, the code adjusts relatively few stations downward, but adjusts just about all the lower-trend, well sited stations 'way up.
They are not dishonest, just victims of confirmation bias: they assume the dataset is essentially sound. They think the logic of their code is okay. They think the results are just peachy-keen. So they look no further. They do not consider what homogenization does if the majority of the dataset is compromised.
Any game developer worth half his salt would never have missed a flub like that. I am a game designer/developer. And I did not miss that. #B^)
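
(A toy illustration of that failure mode, not NCDC's actual pairwise algorithm, with all numbers invented for the sketch: give 8 of 10 simulated stations a spurious extra trend, nudge every series toward the network median, and the well sited minority gets pulled up toward the majority's trend.)

import numpy as np

rng = np.random.default_rng(0)
decades = np.arange(0.0, 3.0, 0.1)        # a 30-year study period, in decades
true_warming = 0.10 * decades             # assume 0.10 C/decade of real warming

# stations 0-7 (80%) carry an extra ~0.14 C/decade of spurious warming;
# stations 8-9 are the well sited minority
series = np.array([true_warming
                   + (0.14 * decades if i < 8 else 0.0)
                   + rng.normal(0.0, 0.05, decades.size)
                   for i in range(10)])

# crude stand-in for homogenization: pull every series halfway toward
# the network median, which the badly sited majority dominates
median = np.median(series, axis=0)
homogenized = 0.5 * series + 0.5 * median

trend = lambda y: np.polyfit(decades, y, 1)[0]
print("well sited station, raw:         %+.3f C/decade" % trend(series[9]))
print("well sited station, homogenized: %+.3f C/decade" % trend(homogenized[9]))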

catweazle666
June 29, 2014 5:26 pm

So the United States Historical Climatology Network is effectively making up a very significant slice of its temperature data, some of it for stations that don't exist, and has been doing so for decades, and they were completely unaware of it?
Really?
Any of you gentlemen want to buy a really nice bridge?

Evan Jones
Editor
June 29, 2014 5:35 pm

All it takes is an errant piece of code. Even the regional managers don’t appear to love their stations like the volunteer curators generally do. So they fall through the cracks.

June 29, 2014 6:47 pm

Anthony, while you’re free not to agree with what I say, you’re not free to simply state you “don’t agree” with me and leave it at that. I made my case very clear. If it is wrong, you should show it is wrong. You should not merely assert it is wrong. It would be a trivial matter to provide quotations to back up what you say, if you were correct. The fact you refuse to take such a simple step would seem to imply you aren’t correct.
Similarly, saying you don’t agree with my post’s title is meaningless unless you say why you don’t agree with it. In fact, it’s kind of pathetic. The entire point of skepticism is to argue points of disagreement. You’re doing the exact opposite. You’re refusing to argue anything, choosing to instead wave your hands and say I’m wrong.
This is simple. If I’m wrong, show I’m wrong. Don’t just editorialize and posture.
REPLY: see here's the thing, I just don't care. I've tried to explain but you don't like my explanations. Your points have no value to me, right or wrong, and I'm not going to waste any more time arguing over the value of somebody else's article, nor your points, which I still don't understand why you have your knickers in a twist over. For the purpose of getting you to stop being pedantic and cluttering up this thread with an issue I don't consider important, I'll just say I'm wrong. But nothing is going to change in the article above. This will be the last comment on the subject. – Anthony

Editor
June 29, 2014 6:59 pm

tomcourt says:
June 29, 2014 at 6:15 am

Regarding the impossibility of CO2 freezing out of the atmosphere.
The experiment of the refrigerator is not quite fair as refrigerators have an internal fan which will mix any pockets of CO2. But you can imagine circumstances where CO2 could be allowed to settle, any sublimated gas would remain in this pocket and provide a high partial pressure environment.

Recall that there were really two experiments, one being loose samples of dry ice in a perforated box, and one I suggested, samples in a not quite sealed plastic bag. The results were:

The samples were placed in the freezer at 4:30pm (reading -82C) and removed at 10:00am (reading -83C).
Open container, start weight 36.5g dry ice, end weight 0g, amount sublimated 100%.
Zip-top bag, start weight 27.6g dry ice, end weight 25.3g, amount sublimated 8.3%
Proving, I think, that CO2 will freeze and remain frozen below -78.5C if the partial pressure of CO2 is near 1 ATM, but will rapidly sublimate if the partial pressure of CO2 is near the normal atmospheric level.
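
(For anyone checking the arithmetic, the sublimated fractions follow directly from the start and end weights:)

# sublimated fraction = (start weight - end weight) / start weight
open_box = (36.5 - 0.0) / 36.5     # 1.000 -> 100% sublimated
bagged = (27.6 - 25.3) / 27.6      # 0.083 -> ~8.3% sublimated, ~91.7% left
print(f"open box: {open_box:.1%}, bagged: {bagged:.1%}")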

Given this was a lab grade refrigerator with NIST traceability of the thermostat, I suspect there was a circulation fan. Even if there weren't, the results were exactly what I expected – the amount of dry ice was not enough to displace all the air, so I wasn't concerned that the freezer would become a pure CO2 environment. Indeed, the unbagged dry ice completely sublimated. I thought only some would be left, but complete sublimation was fine. I expected some sublimation of the bagged ice, and perhaps redeposition on the surface, a la freezer burn on frozen food, but I expected more than half would remain, and much more than the unbagged ice. That over 90% survived was at the high end of my expectations.
I thought all that was well described in the WUWT post, I’m surprised at your critique. Please reread the post. The refrigerator result pretty much confirmed that dry ice cannot form at Vostok and provided a good tangible reference to go along with the theoretical claims I and others made.
I never expected we’d still be talking about this on multiple fronts five years later.

June 29, 2014 7:01 pm

Another "lie of the year" from Politifact. They are becoming monotonous with their goofs.

JohnH
June 29, 2014 7:59 pm

If the USHCN dataset is 'what we know', the key question is 'how do we know it?'
And, with regard to this latter question, there are two separate issues, as Anthony has pointed out.
Firstly, what is the 'quality' of the temperature records from each of the stations? E.g. is the siting acceptable, are the sensors and recording mechanism trustworthy, is actual data being provided, etc., etc.?
Secondly, what, exactly, is the 'process' of converting the data from each of the various stations into the USHCN dataset?
There are 1218 stations, I think. So would it be possible to crowd-solve the 'quality' issue by taking a representative random sample of those stations (say 100) and ask for approximately 20 volunteers to each physically examine five of those sample stations and assess them and also their temperature records for their relative quality? Clearly some kind of standard assessment would be needed (a simple questionnaire?) for judging the stations, but that shouldn't be too hard to create.
Once the 'quality' issue is clear it will be easier to understand exactly how the 'process' does work. And as part of that discovery, perhaps those same volunteers could also look for and document revisions to the temperature records of each station over time.
I'm not sure how feasible all this would be, and maybe there's a better way to audit the data. But this, at least, puts action in our hands.
Just a thought.
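
(The sampling and assignment step would be trivial to script. A minimal sketch, with made-up station IDs standing in for the real USHCN list:)

import random

random.seed(42)                                       # reproducible draw
stations = [f"USH{i:04d}" for i in range(1, 1219)]    # hypothetical IDs for the 1218 stations
sample = random.sample(stations, 100)                 # the representative random sample
volunteers = [f"volunteer_{v + 1:02d}" for v in range(20)]

# 20 volunteers x 5 stations each = the 100-station sample
assignments = {v: sample[i * 5:(i + 1) * 5] for i, v in enumerate(volunteers)}
for v, assigned in assignments.items():
    print(v, assigned)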

June 30, 2014 6:25 am

Dear Mr. Watts:
Congratulations on an essay demonstrating true courage – most people, when wrong, hem, haw, and cavil until the problem fades into the past. You faced it head on and did absolutely the right thing. Way to go!
(It's sad that this kind of behavior is rare, but good to see it.)

JohnB
June 30, 2014 7:33 am

Waiting for a permanent link to Steven Goddard's website (I usually have to go to BH for the link). You could at least put it under Rants or Unreliable, but it should be there somewhere. (I know I should use bookmarks but never do.)

June 30, 2014 9:40 am

Glad you’re seeing the light here Tony. As I said before, I was skeptical of Steve’s claim too, until I downloaded and did my own analysis, writing my own code from scratch (I posted the code at Steve’s blog).
Steve’s certainly tendentious, but he was also definitely correct on this point: 43% of the last month’s data was marked with an E. It will be very interesting to compare this data to CRN and SurfaceStations.

RAH
June 30, 2014 9:44 am

Where is the accountability in all of this? I have the greatest respect for you Anthony and Steven, and great appreciation for what you both do. In the couple of years I lurked at both of your sites, I strove to stay out of the spats because, when it comes to what you guys do, I'm an unarmed man. But I do know something about management and leadership, and I can't buy the "group think" excuse for those who are supposed to be managing this system and/or monitoring the quality of data it is producing. It sounds like something a politician would say when caught red handed. It blames a bureaucracy and holds no individual accountable for anything. No matter what the excuse, we have here one or more of three things: gross neglect, gross mismanagement, or blatant fraud. For any of them there has to be accountability; otherwise all the revelations about what has been discovered will lead to a disappointing and most likely only temporary resolution.

June 30, 2014 9:45 am

Brandon Shollenberger says: June 28, 2014 at 8:07 pm
Yes, the Politifact piece is horrible. It conflates the animated GIF with the missing data claims, and purports to “debunk” both of them — even though they are both simple analyses of officially-released data.
Gell-Mann amnesia at its finest. Journalists often seem to have no skills beyond an ability to type.

Gary Pearse
June 30, 2014 10:18 am

Jimbo says:
June 29, 2014 at 1:39 pm
“It seems to me that with the temperature standstill Warmists will also learn about ā€˜truthā€™. If you are right you are right, and if you are wrong you are wrong. Time is the arbiter with climate. If it starts getting too hot, then we will also learn the ā€˜truthā€™.”
That we are essentially talking about +0.7C in a century or more, about warming for ~20yrs and now no warming to cooling for 17yrs. speaks volumes about the science. No matter what happens in the future, the CO2 warming hypothesis is dead. One seventh of this century has elapsed with slight cooling. Every year that goes by shrinks CO2’s potential share in warming toward insignificance.
Adding a growing likelihood that, no matter what CO2’s sensitivity in fact is, the earth – mainly through its oceans countervails the warming (from any cause). Much of the proof for the earth’s thermostatic control (over and above the brilliant work by Eschenbach) has been hanging out there for all to see. An unbroken chain of life for at least 1B years speaks loudly of the stability of earth climate. Despite asteroid impacts, massive volcanic out-pourings in various eras (unbreathable in the Archean), Ice Ages, evidence for Snowball Earth (as a geologist I’m not convinced of this, though) etc., the earth has countered these ‘tipping point’ events and restored temperatures to a moderate level about which the ‘extremes’ oscillate ~2-3%K. Moreover, there is no evidence to suggest we are through with ice age cold and it is likely we are well past the halfway mark going back into it. Let us pray we can find a way to alter climate and make a bit of global warming at some point.

June 30, 2014 10:19 am

I am mad as hell and not going to take it anymore. I just don't understand how scientific papers need to be "juried" and approved by many people who are on the line for the articles published, but people at NOAA and NCDC, or wherever these algorithms and modifications get "codified", can do this willy-nilly without anyone having to justify what they are doing. Is there a paper on the modifications made, the code used, examples of how they apply in practice, and evidence to back up these modifications?
There should be papers, multiple papers on all these factors:
1) time of observation bias
2) station movements, late station reports
3) station reporting out of line data
4) stations decommissioned
For time of observation bias, the adjustments seem crazy high from the data that SG and others have presented over the years. I believe I read that 90%+ of the adjustment of the 1C in the last century can be traced to that. If so, then there should be a paper on this alone that explains how it is done and justifies it in terms of other measurements made at the same location at different times. If the entire GW temperature change depends critically on the amount of this change we add in or subtract out, then there should be a lot more than a footnote saying we did this. If it is being done wrong, then the entire amount of warming could change radically.
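
(To be clear about what TOBS is correcting, the underlying effect is easy to demonstrate with a toy simulation using invented numbers: a max/min thermometer reset in the afternoon lets one hot afternoon set the recorded maximum for two successive observation windows, warming the mean, while a morning reset avoids that double count. That the effect exists is not in doubt; whether the applied adjustment is the right size is exactly what a paper should establish.)

import numpy as np

rng = np.random.default_rng(1)
n_days = 3650
hours = np.arange(24 * n_days)
# "true" hourly temps: a diurnal cycle peaking at 3 pm, plus day-to-day weather
weather = np.repeat(rng.normal(0.0, 4.0, n_days), 24)
temp = 15.0 + 8.0 * np.cos(2 * np.pi * ((hours % 24) - 15) / 24) + weather

def mean_recorded_max(reset_hour):
    # mean daily maximum as a max/min thermometer would record it,
    # read and reset once a day at reset_hour
    resets = np.where(hours % 24 == reset_hour)[0]
    return np.mean([temp[a:b].max() for a, b in zip(resets[:-1], resets[1:])])

print("5 pm reset:", round(mean_recorded_max(17), 2))   # comes out warmer
print("7 am reset:", round(mean_recorded_max(7), 2))    # no double-counted peaks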
Station movements and decommissionings raise the issue of averaging where there are too few stations. There should be a detailed set of papers on how good these kinds of estimates are. For instance, with Antarctica we could put a bunch of automated stations out there and monitor how they change in relation to how we predicted those stations should have reported. I've heard there are only a couple of stations in all of Antarctica, and few in northern Canada, etc. Because of the lack of uniform station density, this could mean that we really have no clue what's happened in these places.
Stations reporting out-of-line data. I understand that the algorithms take into account whether a station shows a big movement relative to its peers close by. The station gets ignored or adjusted to reflect a more reasonable temperature. I personally would like some field work on this. I would like us to understand what is really going on with these stations to justify the modifications being made. Do we know why they get anomalous readings, and can we verify specific instances where these corrections are made and show WHY they were justified?
All I am asking for is what I consider reasonable. If the entire temperature trend of the 20th century depends on these adjustments, then a rigorous study should be made to determine whether the adjustments really make sense. Are there real-world reasons that can be explained to anyone for why the adjustments are being made, with specific examples of how they make sense? As I understand it, the vast majority of the data are adjusted. Therefore the entire proof of climate change depends on justifying these adjustments, yet they seem to have less peer review than ordinary science articles. I just don't get this. Nobody seems accountable.
Even if NOAA or NCDC fix the data as Steve suggests, I still have no faith that the data they now say is "clean" is clean. Without someone explaining in peer-reviewed papers, as I describe above, precisely what they are doing to the data, with examples and real-life stories of why the adjustments actually make sense, I believe, as any rational person must, that experimenter bias is likely responsible for a large part of the temperature trend.

June 30, 2014 10:46 am

[snip – Brandon, I’m sorry but as stated above I’m not discussing this anymore. We’ll simply have to agree to disagree – Anthony]

June 30, 2014 11:02 am

I totally agree with Joe Bastardi: overall, this infighting is ultimately self-defeating and plays right into the AGW thugs' hands. At the very minimum, the passive-aggressive Mannesque posturing and comments do not belong. I really look up to you Anthony, and even your apology was, for lack of a better word, sad.
You guys need to hug this out and keep these sorts of things worked out by a phone call rather than this public display.
I do love you still, Anthony. I hope you do better in the future.
🙁

DHF
June 30, 2014 11:35 am

I wonder: are all these adjustments really necessary? How many well designed and reliable points of measurement would you need to get a sufficiently accurate yearly average?
Or, to be more precise: how many points of measurement would you need to get a measured yearly average with sufficiently low standard uncertainty to be able to detect a positive trend of 0.015 K/year (1.5 K/century)?
I consider the calculated average of a number of temperature readings, performed at a defined number of identified locations, to be a well defined measurand. Hence, the standard uncertainty of the average value can be calculated as the standard deviation of all your measurements divided by the square root of the number of measurements. (See the openly available ISO standard: Guide to the Expression of Uncertainty in Measurement.)
Let us say that you have 1000 temperature measurement stations, which are read 2 times each day, 365 days each year. You will then have 730,000 samples each year.
(Let us disregard potential correlation for a moment.)
If we assume that 2 standard deviations for the 730,000 samples is 20 K
(this means that 95% of the samples are within a temperature range of 40 K),
an estimate for the standard uncertainty of the average value of all samples will then be:
2 standard uncertainties for the average value = 2 standard deviations for all measurements / (square root of the number of measurements)
= 20 K / sqrt(730,000) = 20 K / 854 = 0.02 K.
This means that a year-to-year variation in the average temperature larger than 0.02 K cannot reasonably be attributed to uncertainty in the determination of the average. It can instead reasonably be attributed to the intrinsic variation of the measurand.
If I further assume that 2 standard deviations of the yearly average temperature, measured at a high number of locations, is on the order of 0.1 K (the remaining variation of the feature when trends are removed), then 95% of the calculated yearly average temperatures are within the range +0.1 K to -0.1 K of the average of all yearly averages (if trends are removed).
Since the standard uncertainty of the measured average (0.02 K) is much less than the standard uncertainty of the feature we are studying (0.1 K), I regard the uncertainty as sufficiently low. Hence 1000 locations and 2 daily readings seem to be sufficient for the defined purpose.
However, the variation of the measurand, the yearly average of the temperature measurements, now seems too high to let us see a trend of 0.015 K/year. One approach is then to average over several years. The standard uncertainty of the average temperature for a number of years will be equal to the standard deviation of the yearly average (0.1 K) divided by the square root of the number of years. Let us try an averaging period of 16 years: 2 standard uncertainties for the average temperature over a period of 16 years can then be calculated as 0.1 K / sqrt(16) = 0.1 K / 4 = 0.025 K.
If you choose an averaging period of 16 years, the standard uncertainty of the measured average value can be recalculated, as the number of measurements has increased 16-fold to 16 * 730,000 = 11,680,000. Two standard uncertainties for the average value are now 0.006 K. Hence the number of measurement locations can be reduced: even with as few as 250 measurement points, 2 standard uncertainties will be as low as 0.01 K.
Consequently, it seems that we should need only on the order of 250 good temperature measurement locations to be able to identify a trend in the average temperature. Adding more measurement locations does not seem to add significant value, as the year-to-year variation in temperature seems to be intrinsic to the average temperature and not due to a lack of measurement locations. Hence that variation cannot be reduced by adding more measurements.
So, if the intended use of the data set is to monitor the development of the average temperature, all the operations performed on the data sets seem to be a waste of effort. The effort to calculate temperature fields, compensate for the urban heat effect and estimate measurements for discontinued locations all seems meaningless. What should be done is to throw overboard all the questionable and discontinued measurement locations and keep on the order of 250 good temperature measurement stations randomly spread around the world.
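
(The arithmetic above, transcribed into a few lines so anyone can vary the assumptions, i.e. the number of stations, the spread of readings and the averaging period, and watch the standard uncertainty respond:)

import math

def two_sigma_of_mean(two_sd_of_samples, n_samples):
    # 2 standard uncertainties of the mean of n independent samples
    return two_sd_of_samples / math.sqrt(n_samples)

readings_per_year = 2 * 365
print(two_sigma_of_mean(20.0, 1000 * readings_per_year))       # ~0.023 K (the 0.02 K above)
print(two_sigma_of_mean(0.1, 16))                              # 16 yearly means: 0.025 K
print(two_sigma_of_mean(20.0, 250 * readings_per_year * 16))   # ~0.012 K (the ~0.01 K above)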

Duster
June 30, 2014 12:50 pm

Anyone who has waded through the "harry_read_me.txt" file from the original UEA Climategate data knows just how intractable encoding weather data can be, much less getting reliable summaries and detailed analyses. This problem shows that there very likely are endemic problems in most historic weather data sets and, worse, that adjustments and corrections are quite likely making matters worse, certainly more uncertain. It is also highly likely that each new version release of the data (BEST, CRU, GISS, USHCN) incorporates its own idiosyncratic problems, which are inevitably difficult to identify unless there is either a glaring problem (Harry mentions, for instance, a least-squares value that becomes increasingly negative), or a local looks at a local record and sees that it differs in some important way from their personal experience – an anecdotal process not highly regarded in some scientific circles, which is easily written off by the authorities in the field. It might be a good thing for the NCDC and other repositories to produce public annual or semiannual reports on bug hunting and on audits of data and processing code.

Duster
June 30, 2014 1:04 pm

Bryan says:
June 29, 2014 at 12:45 pm
I would add my voice to those who suggest that we should not be quick to assume that these issues with the USHCN data set result from unintentional mistakes. Consider:…

The problem is that getting the data encoded into a single set of consistent files is a very complex task. When you are dealing with even tens of thousands of records it can be difficult. When you attempt to address hundreds of thousands of records that are spatially distributed time series, the difficulty jumps several orders of magnitude. Simply identifying problems is immensely difficult, and if a problem looks like what you actually expect, then, as Anthony points out, confirmation bias can blind you very thoroughly. Occam's Razor recommends not complicating explanations beyond necessity. Until there is clear evidence of deliberate and intentional biasing of the record, there's no reason to look for them.

Hugh Eaven
June 30, 2014 1:14 pm

It’s surprising to me that Nick Stokes’s earlier very reasonable explanation was ignored and/or dismissed by what looks like possibly more confirmation bias (or hasty reading).
Nick wrote: “sometimes Nature just has a warmist bias. (…) Positive resistance will reduce the voltage. Negative will increase it. But they donā€™t do negative. (…) Same with TOBS. If you go from afternoon to morning reading, the trend will reduce”.
This means there might be good reasons that the adjustment/estimates of those singled out stations will have some “warmist” bias, meaning that the readings will be more often *lower* than *higher* than any “correct” measurement. Correcting for this by “infilling” will, if the above would be true, necessarily create that “fabricated” trend (Goddards red graph). In other words, the raw data on average would be too cold because of stated mechanical and electronic or impedance problems.
So in the end we have here a simple explanation which could very well explain all the stated observations from all sides. It can also be falsified further by sampling more of those stations indicated to be “estimated” and which were then compensated upwards.
This is not the same discussion as “are USHCN stations in general trustworthy?”. As Nick stated, if you’d already dismiss the whole dataset, then why bother struggling with this infilling problem? It doesn’t make sense!

temp
June 30, 2014 1:47 pm

Hugh Eaven says:
June 30, 2014 at 1:14 pm
“This is not the same discussion as ā€œare USHCN stations in general trustworthy?ā€. As Nick stated, if youā€™d already dismiss the whole dataset, then why bother struggling with this infilling problem? It doesnā€™t make sense!”
It makes perfect sense. When you have billions of dollars at your finger tips which the AGW gult does they can simply create another data set and claimed “they fixed the problem”. They’ve already done this a bunch of times already.
The reality is this is a game of whack a mole where the cultist will use any dirty trick in the book until they can’t. When you have near unlimited funds to produce near unlimited propaganda you can keep making minor changes and carry on.
We already see this process with the non-stopped renaming of global warming, whenever its proven wrong you have 30 papers produced in weeks time to say “o the ocean ate my global warming”, etc, etc, etc.
The general hope is that people fighting this mess will, burn out from overwork, miss the next dirty trick, threats to person, job, family take hold, etc, etc, etc.
As the cultists say the science is settle, AGW is wrong however science is meaningless in arguments based on emotion, logical fallacies, propaganda and such….. which is what the AGW is and will be until it is finally killed off as an idea. \
Trying to get ahead of the next production of propaganda is about the only way to more easily force the next edition to come out even faster. Hopefully causing easier to spot errors.

John Slayton
June 30, 2014 2:18 pm

At this point I am stuck on a "who" question. In the example I used above, who made the decision to append the Nampa Sugar Factory record to the earlier Caldwell record by dropping Caldwell from USHCN and adding Nampa? And by contrast, who decided to create the zombie Fremont OR, giving us (so far) an 18-year record of estimated temperatures from who knows what other stations, which cannot be visited for metadata observation?

kim
June 30, 2014 4:39 pm

Gad, what a great excuse for the alarmists. The adjustments ate my homework.
==========

Matthew R Marler
June 30, 2014 5:18 pm

Anthony Watts, thank you for a good post. Nick Stokes, richardscourtney, Steven Mosher, thank you for your comments, rejoinders and elaborations.

June 30, 2014 5:28 pm

As people who have followed this debate know, this is far from the first time that data NOAA or other institutions have produced has been grossly in error. Several incidents occur to me, including one time when "someone" copied the August temperature data to September for the whole of Russia. The climate reporting agencies were quick to jump on what they said was the hottest August on record, smashing previous records. It took very little research to uncover this error, it being so blatant and unbelievable. After they were told, they quickly fixed the result, putting the temperature well back from record territory. This is just one of numerous incidents of this nature. It always seems to take independent people in the real world to find these egregious errors, and the error is ALWAYS that the data is higher than it should have been. ALWAYS.
It is entirely clear that either they are purposefully futzing with the climate record or, having experimenter bias, are simply blind to data that backs their presuppositions. If they get a result that says temperatures are setting a new record, they don't even spend the time to check whether they copied an entire country's dataset wrongly. That one country comes out suspiciously super hot doesn't concern them in the slightest? It's hard to believe that they just weren't hoping the data would slip by everyone's view and nobody would look into it.
How many things like this have happened that we don't know about because they are not so blatant? The latest scandal points out how functioning reporting stations can be ignored for more than a year while substitute data is fabricated from models obviously constructed by people already invested in their theory. How could such a blatant thing happen, with huge numbers of stations carrying estimated values when we know that many of them have real, valid data? This is so politicized that it is affecting the data in amazing ways that skip right by the "brilliant" minds of our scientists. They never question data which backs them, and they look for every way to minimize any appearance that climate change isn't massive! They come up with every possible excuse to explain why temperatures weren't so warm in the 1930s, even to the point of denying the existence of well-known phenomena, but don't spend the smallest effort to determine whether they are making mistakes in the other direction.
I was amazed, talking to a climate modeler, when he said he believed the MWP and LIA were Northern Hemisphere phenomena. So, apparently, for HUNDREDS of years temperatures in Northern Europe were hotter than the rest of the world. The rest of the world apparently compensated for the heat in Northern Europe somehow. I've never heard an explanation of HOW temperatures in Northern Europe could be so hot (or cold) for hundreds of years with the rest of the world unaffected. They have NO curiosity about how such a thing would be possible, nor any explanation for such bizarre weather behavior. Further, they don't have any explanation for WHY the temperatures would do this for hundreds of years, nor any other examples of such multi-hundred-year variations in regional temperatures that apparently don't affect anything else. Yet they were happy to accept the argument that it was a regional effect for many years! Astounding to me.
When I hear arguments like this I can't imagine that a sane person is making such statements. Seriously, if you are going to deny the MWP or LIA, then you should also have at least some plausible explanation for how such a thing would be possible. Such an aberration of temperatures should have interested them, considering their scientific curiosity. However, it appears their scientific curiosity stops 100% at the point where it might lead to some question about the veracity of the global warming religion.
The latest debacle comes as no surprise to those of us who have read the amazing stories of the data and statistical errors of our climate elite.
It seems to me that since the amount of "modification" to the temperature record is around 1F or more, sometimes 2 or 3 degrees F, the level of uncertainty is on the order of the amount of temperature in dispute; i.e., these adjustments are huge. What if I said your speed was 60 mph +/- 60 mph? That's not a very good estimate. I could be standing still.
There is a notion of DATA QUALITY that computer people have been pushing for a while now. First we need a rigorous way of describing the modifications being made to the temperature record, reviewed and analyzed carefully with real case studies of how such modifications work.
Further, every report should include statistics on how frequently these "adjustments" are made, how severe they are, and what regions are experiencing abnormal cold or heat. They should produce numerous measures which can be used to verify the quality of the data we are depending on. When data falls outside the range of acceptable normal movement, a person should be assigned to analyze whether such a variation in fact existed. This is common practice in the financial field, where you might trade billions of dollars based on numbers that were simply typed wrongly into a spreadsheet at some point (or some other error). Data quality is an important discipline for being able to use data for further analysis. It is clear that the level of professionalism of these people is zilch. The number of errors, the frequency of errors and the obvious bias in the errors is so blatant that it calls fraud into question. The agencies not only need to fix the current problem, they must give the rest of us the tools to check up on them, and they need procedures for people to call data into question. They need reviews of anomalous data by real people who do research to discover whether the adjustments reflect reality. If the adjustments are so frequent and significant that this can't be done, then basic improvements to the infrastructure must be contemplated so that the data can be trusted.

June 30, 2014 5:43 pm

Anthony, if I have computed correctly, Luling now shows accurate RAW.
See Paul Homewood's site for an update, or my own blog where I have put the result. It needs verifying.

Ray Boorman
June 30, 2014 9:42 pm

CC Squid says:
June 28, 2014 at 6:43 pm
The explanation of how the temps were decreased is located below. The comments starting at this one say it all. Mosh goes through the process in detail.
http://judithcurry.com/2014/06/28/skeptical-of-skeptics-is-steve-goddard-right/#comment-601719
The explanation from Mosher at the link above is a real eye-opener to me. If I am reading it correctly, when BEST (maybe NCDC too??) generate their graphs of temperature anomalies, no real, actually observed, data makes it into the graph. It seems the observed data is simply a starting point, & all observed data in the record is used to adjust every other record, based on some fancy algorithm created by the gatekeepers. The result then becomes their graph!! When they do a new graph the following month with a little extra data, every record in the database, probably including the new one, is adjusted slightly. Mosh calls these “fields”, whatever that means. When I was a computer programmer, data was data, & you never thought to change the past. I can’t imagine a situation where it is valid to create historical temperature anomaly graphs that change the past when each new month’s data is added.

Ray Boorman
June 30, 2014 9:52 pm

It seems these anomaly graphs truly are very much like the analogy Anthony made above about the WSJ & stock indexes. Except that, instead of keeping a bankrupt stock in the index with fake data, they continually adjust the index to make it look "xxxx". (Insert your own descriptor in the inverted commas to explain their actions.) If it is not good to alter the plot of a stock index, could it possibly be good to do the same to a historical temperature graph?

Bob Dedekind
June 30, 2014 10:00 pm

DHF says: June 30, 2014 at 11:35 am

“What should be done is to throw over board all the questionable and discontinued measurement locations and keep in order of magnitude 250 good temperature measurement stations randomly spread around the world.”

I agree.
Furthermore, I believe, as Anthony mentioned somewhere previously, that we may be able to crowd-source this. Some of us know our local data fairly well, and we should be able to nominate a station or two for inclusion in our region.
Of course, according to Real Climate Scientists™ we may need only 50 or so stations globally.

DHF
July 1, 2014 12:27 am

Maybe, if there is sufficient integrity in some of the temperature data series used to create Berkeley Earth, the creators of Berkeley Earth might be able and willing to provide suitable records?
According to the Wikipedia article about Berkeley Earth:
"Berkeley Earth founder Richard A. Muller told The Guardian "…we are bringing the spirit of science back to a subject that has become too argumentative and too contentious… we are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find. We are doing this because it is the most important project in the world today. Nothing else comes close.""

July 1, 2014 1:54 am

“”” Its over no one believes it anymore please give up. The ā€œmodelingā€ of AGW is a FANTASY “””
Because photos of glaciers over the past 100 years have been consistently tampered with too. The elves sneak in every month and replace 10% of the photos with ones that have been photo-shopped to be just a little bit whiter.

July 1, 2014 2:01 am

“””
the error is ALWAYS that the data is higher than it should have been

I've never heard an explanation of HOW temperatures in Northern Europe could be so hot (or cold) for hundreds of years with the rest of the world unaffected.
“””
Every single time I see a conspiracy theorist make an unsubstantiated claim, it is in support of their conspiracy theory, even though most of the time a simple Google search will offer a counterexample: http://www.skepticalscience.com/medieval-warm-period.htm

July 1, 2014 2:11 am

“no warming to cooling for 17yrs. ”
Yes, cherry picking your data is always a good way to prove your point. Why is 17 years the magic number? Was there perhaps something special about that one year 17 years ago? Like maybe it was an outlier on the warm side? “Hey, let’s pick the hottest year on record and measure temperature trends since that year!” Gosh, you’re so smart to figure out how to manipulate the data like that. Does it help you to confirm your bias when you do that?

July 1, 2014 2:26 am

“Can anyone demonstrate that there has been any warming at all?”
There are a variety of mechanisms via which one can estimate past temperatures, not just one. You can look at historical written records; you can do archaeology to see where people lived and what they ate; you can take ice core samples; you can look at tree rings; you can look at old pictures of glaciers; you can look at the effects that past ice age glaciers had on the underlying terrain.
Of course, you can also come up with increasingly unlikely theories as to why evidence for warming cannot possibly be valid: “The urban heat island effect causes warm winds to blow up to the mountain glaciers and melt them. There is no global warming; there are just these rapidly shifting pockets of low-flying localized hot air, offset by high flying pockets of localized cold air.”

Bob Dedekind
July 1, 2014 2:26 am

cesium62:

“Why is 17 years the magic number?”

Umm, because you start now, and you go back in time while the trend stays “flat”. When it is no longer “flat”, you stop and check the date. Turns out it’s 17 years at the moment.
By “flat” of course I mean the trend is statistically insignificant.
On glaciers: they have been receding for some time now. Think hundreds to thousands of years.
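
(That walk-back procedure is mechanical enough to script. A sketch on synthetic monthly anomalies rather than any real dataset, using a plain OLS slope and ignoring autocorrelation, which a proper analysis would have to handle:)

import numpy as np
from scipy import stats

def flat_span_years(anoms):
    # months counted back from the present while the OLS trend stays
    # statistically insignificant (|slope| <= ~1.96 * SE)
    flat = 0
    for n in range(24, len(anoms) + 1):
        r = stats.linregress(np.arange(n), anoms[-n:])
        if abs(r.slope) > 1.96 * r.stderr:
            break
        flat = n
    return flat / 12

# synthetic series: 23 years of warming, then 17 flat years, plus noise
rng = np.random.default_rng(3)
signal = np.concatenate([np.linspace(0.0, 0.4, 23 * 12), np.full(17 * 12, 0.4)])
anoms = signal + rng.normal(0.0, 0.1, signal.size)
print(flat_span_years(anoms), "years of statistically flat trend")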

Bob Dedekind
July 1, 2014 2:38 am

cesium62:
There has been a slow natural long-term warming since the last ice age, with cycles superimposed on it, such as the RWP, MWP (warmer than now) and the LIA (cooler). There has been no acceleration of this warming, and over the past 17 years there has been none at all.
This is in spite of the GCMs predicting accelerating warming. Right now we should be warming at around 0.2°C/century. We aren't.
In his seminal paper on the subject, the godfather of AGW, James Hansen, predicted in 1988 that by now the warming would be so obvious that it would be three standard deviations above that of the 1950s. It isn’t.

Bob Dedekind
July 1, 2014 2:38 am

Sorry, 0.2°C/decade, not century.

richard verney
July 1, 2014 3:04 am

cesium62 says:
July 1, 2014 at 2:26 am
////////////////
As you say, history is a useful reference.
We know for a fact that dustbowl conditions were seen in the US in the 1930s. There is much contemporaneous print, and of course there is archive film of these conditions.
These conditions were essentially caused by warmth and/or lack of precipitation. Unless there is evidence that in the 1930s there was considerably less rainfall over the region in question, the obvious conclusion is that the 1930s in the States were warmer than today.
Until there is a return to the physical conditions seen in the 1930s, no sentient person will consider the 'adjusted' record, which seeks to claim that it is warmer in the US today than it was in the 1930s, to be correct.
It is silly to make unrealistic adjustments that are confounded by hard evidence. The evidence of the 1930s is a problem, and making adjustments to this period ought to be seen as hitting a brick wall, just as it is not possible to go too far with adjustments to present-day temps in view of the check against the satellite data.

richardscourtney
July 1, 2014 3:17 am

cesium62:
At July 1, 2014 at 1:54 am you quote

Its over no one believes it anymore please give up. The "modeling" of AGW is a FANTASY

And you respond with this non sequitur:

Because photos of glaciers over the past 100 years have been consistently tampered with too. The elves sneak in every month and replace 10% of the photos with ones that have been photo-shopped to be just a little bit whiter.

Here on planet Earth, the climate models have failed to correctly predict changes to temperatures and to total ice.
You follow that nonsense at July 1, 2014 at 2:11 am by quoting

no warming to cooling for 17yrs

which you comment on with so idiotic a rant that I choose not to copy it, as a method to spare you embarrassment. However, the reality of that matter was explained to you by Bob Dedekind at July 1, 2014 at 2:26 am, where he writes

cesium62:

Why is 17 years the magic number?

Umm, because you start now, and you go back in time while the trend stays "flat". When it is no longer "flat", you stop and check the date. Turns out it's 17 years at the moment.
By "flat" of course I mean the trend is statistically insignificant.

I write to add that 17 years is not "magic" but it is important to the issue about climate models.
In 2008 the US National Oceanic and Atmospheric Administration (NOAA) reported that climate models commonly suggest periods of 10 years or less with no temperature rise but they "rule out" (at 95% confidence) periods of no temperature rise for 15 years or longer.
17 years is longer than 15 years. So, according to NOAA, the climate models don't work; i.e. "The 'modeling' of AGW is a FANTASY".
Richard

richardscourtney
July 1, 2014 3:21 am

cesium62:
If you are having difficulty understanding the issue explained to you by richard verney at July 1, 2014 at 3:04 am then look at this.
Richard

richard verney
July 1, 2014 3:38 am

Duster says:
June 30, 2014 at 12:50 pm
/////////////
This is because we are seeking to overstretch the bounds and limitations of the data source. The network was never intended to provide temperature data measured to tenths of a degree.
The temperature measurements (i.e., the observed actual data) come warts and all. We are trying to remove the warts. We are trying to perform cosmetic surgery on the data, to make it more acceptable to our demands. But this is a fail. The better approach would be to simply accept the raw data, warts and all, and ascribe a realistic error boundary to the raw data set (to cover the warts-and-all element).
It would then be an indicator, with error boundaries. It may not be able to tell us much of importance, since we are seeking to discover a signal which is smaller than the error bandwidth, but that is a consequence of the design limitations of the network.
Presently, all we are doing is interpreting the appropriateness of our assumptions and the guesses underpinning the adjustments that we have made to the data set.
There are so many fundamental issues with the data set that I am of the opinion that it is time to ditch it in favor of post-1979 data.

Phil.
July 1, 2014 4:48 am

In 2008 the US National Oceanic and Atmospheric Administration (NOAA) reported that climate models commonly suggest periods of 10 years or less with no temperature rise but they "rule out" (at 95% confidence) periods of no temperature rise for 15 years or longer.
Of course as explicitly stated in that report that statement refers to models which don’t include ENSO, so you’d have to compare with data which has ENSO excluded. If you do that you’ll find that there is no period of greater than 15 years.

Phil.
July 1, 2014 5:04 am

richard verney says:
July 1, 2014 at 3:04 am
cesium62 says:
July 1, 2014 at 2:26 am
////////////////
As you say, history is a useful reference.
We know for a fact that dustbowl conditions were seen in the US in the 1930s. There is much contemporaneous print, and of course there is archive film of these conditions.
These conditions were essentially caused by warmth and/or lack of precipitation. Unless there is evidence that in the 1930s there was considerably less rainfall over the region in question, the obvious conclusion is that the 1930s in the States were warmer than today.

There was less rainfall than usual in that region, and also poor agricultural practices following the ploughing of the natural prairies and the expansion of wheat growing.

richardscourtney
July 1, 2014 5:46 am

Phil:
Your post at July 1, 2014 at 4:48 am continues your usual practice of posting twaddle which demonstrates you do not have a clue what you are talking about.
You quote my accurate statement that said

In 2008 the US National Oceanic and Atmospheric Administration (NOAA) reported that climate models commonly suggest periods of 10 years or less with no temperature rise but they "rule out" (at 95% confidence) periods of no temperature rise for 15 years or longer.

and say

Of course as explicitly stated in that report that statement refers to models which don't include ENSO, so you'd have to compare with data which has ENSO excluded. If you do that you'll find that there is no period of greater than 15 years.

ENSO is an important part of global climate. Any model which cannot emulate it cannot emulate global climate, and no model emulates ENSO adequately.
My full statement from which you quoted said

cesium62:

Why is 17 years the magic number?

Umm, because you start now, and you go back in time while the trend stays "flat". When it is no longer "flat", you stop and check the date. Turns out it's 17 years at the moment.
By "flat" of course I mean the trend is statistically insignificant.

I write to add that 17 years is not ā€œmagicā€ but it is important to the issue about climate models.
In 2008 the US National Oceanic and Atmospheric Administration (NOAA) reported that climate models commonly suggest periods of 10 years or less with no temperature rise but they "rule out" (at 95% confidence) periods of no temperature rise for 15 years or longer.
17 years is longer than 15 years. So, according to NOAA, the climate models don't work; i.e. "The 'modeling' of AGW is a FANTASY".

So,
according to the NOAA determination, the 17-year cessation of warming indicates "The 'modeling' of AGW is a FANTASY"
and
according to your (true) assertion, the models fail to emulate ENSO, which indicates "The 'modeling' of AGW is a FANTASY".
The important point is that you, NOAA and I agree: "The 'modeling' of AGW is a FANTASY".
Richard

DHF
July 1, 2014 6:48 am

Ray Boorman says:
June 30, 2014 at 9:42 pm
////
Good points!!
I would even say that historical temperature anomaly graphs that change the past when each new month's data is added are proof positive that the model and/or the software is fundamentally flawed.

Solomon Green
July 1, 2014 10:54 am

Duster says
“Until there is clear evidence of deliberate and intentional biasing of the record, thereā€™s no reason to look for them.”
If there are biases they should be looked for, whether or not they are deliberate and intentional. If biases exist there cannot be a true record and one cannot hope to make proper use of the data so long as one is unaware of their existence.

mogur2013
July 1, 2014 2:04 pm

Tony Heller (Steven Goddard) is at it again, this time with the global temperatures instead of just the US record. He would like us to assume the GISS global dataset is tampered with by comparing it to the RSS dataset, which he implies is a truer record of global temperatures. He posted this graph at his website:
http://www.woodfortrees.org/graph/gistemp/from:1998/mean:60/plot/rss/from:1998/mean:60/offset:0.25/plot/gistemp/from:1998/trend/plot/rss/from:1998/trend/offset:0.25
http://stevengoddard.wordpress.com/2014/07/01/giss-diverging-from-reality-at-a-phenomenal-rate/
Another cherry pick by Heller. The entire satellite record shows a completely different picture than just the data since 1998. I have extended his graph back to 1979, and included the entire RSS trendline (light blue), as well as the UAH 5 year mean (brown). UAH uses a newer satellite and it agrees more closely with the GISS record in recent years than the RSS record. Both the HADCRUT4 and WoodForTrees global datasets also support the GISS data.
http://www.woodfortrees.org/graph/gistemp/from:1979/mean:60/plot/rss/from:1979/mean:60/offset:0.25/plot/gistemp/from:1998/trend/plot/rss/from:1998/trend/offset:0.25/plot/rss/from:1979/trend/offset:0.25/plot/uah/from:1979/mean:60/offset:0.39
I don’t know if all of the data sets have been tampered with (US, global, station, satellite); however, to clearly show that the GISS data has been manipulated requires more than demonstrating a short term divergence from the RSS data set. Roy Spencer (not exactly an alarmist) on the recent UAH versus RSS satellite temperature data differences-
“[We believe] RSS data is undergoing spurious cooling because RSS is still using the old NOAA-15 satellite which has a decaying orbit, to which they are then applying a diurnal cycle drift correction based upon a climate model, which does not quite match reality. We have not used NOAA-15 for trend information in yearsā€¦we use the NASA Aqua AMSU, since that satellite carries extra fuel to maintain a precise orbit.
Of course, this explanation is just our speculation at this point, and more work would need to be done to determine whether this is the case. The RSS folks are our friends, and we both are interested in building the best possible datasets.
But, until the discrepancy is resolved to everyoneā€™s satisfaction, those of you who REALLY REALLY need the global temperature record to show as little warming as possible might want to consider jumping ship, and switch from the UAH to RSS dataset.”
I would have posted on his site, but my views are considered spam by Heller and I have been banned permanently. As you know, Tony Heller will never concede an inch when challenged, and I certainly appreciate your forthcoming admission of error to him. It remains to be determined how much data tampering is biased (or honest) versus how much is in Heller's cherry-picking imagination.

July 1, 2014 2:27 pm

mogur2013,
Speaking of cherry-picking, that is a very small part of the extensive evidence that GISS ‘adjusts’ the temperature record to make it look scarier. For example:
http://oi42.tinypic.com/vpx303.jpg [blink gif]
http://oi54.tinypic.com/fylq2w.jpg
Those links are just a couple of random selections. There is FAR too much evidence implicating GISS and Hansen for it to be a mistake.
Finally, the basic prediction of the alarmist clique, made for many years, was ‘runaway global warming’. But like all the other scary alarmist predictions, it was a total failure. So, a question:
What would it take for you to admit that the CO2=CAGW conjecture is wrong? Anything? Or is your mind made up, and the science settled?

July 1, 2014 2:57 pm

Anthony, you now need to apply the same open-mindedness to your political views. – Murray

mogur2013
July 1, 2014 3:03 pm


Huh? What makes you think I 'cherry-picked' the entire RSS dataset, as opposed to the only segment of it that shows a decline in global temperatures?
As for your links, I agree with you entirely. I have overlaid every Hansen US temperature graph that he has published since 1987, and absolutely agree that he has manipulated the data to exaggerate global warming in the US temperature record. So how does that justify Heller's idiotic cherry picking? Do you want the truth or simply a confirmation of your beliefs?
Btw, I don't 'believe' in CAGW. I am convinced that global temperatures have risen since industrialization, and I tend to think that man has contributed significantly to that rise. You seem to leap to the conclusion that I must be a liberal 'alarmist' out to ruin the global economy, when in reality (please rein in your imagination) I am not convinced that global warming is 'runaway'. I could label you with all my preconceived notions about 'denialists', but that would be just stupid. I am sure you are informed and just as interested as I am in discovering the actual reality of our climatology.

July 1, 2014 6:12 pm

mogur2013 says:
I am convinced that the global temperatures have risen since industrialization, and I tend to think that man has contributed significantly to that rise.
First, define “significantly”. 50%? More? I don’t think there is any evidence whatever for that assumption.
Next, how would you explain this? Note that the step changes since the 1880s occurred irrespective of human CO2 emissions, which only began to rise seriously in the 1940s. It takes true belief to think that the first warming step was not caused by human CO2, but that the same later rise was.
Next, looking at one-tenth degree fluctuations is well within error bars for the instrumentation used. When we look at whole degrees, there is nothing to be alarmed about.
Next, the current warming is nothing special. It has happened repeatedly during the present Holocene. I count at least twenty “hockey sticks” before industrialization.
Yes, global T is up, slightly. But nothing to worry about. And for many years now, global warming has not happened. It is now global cooling.

mogur2013
July 1, 2014 9:15 pm


Okay, I see that you are passionate. Good for you. When I said that I am not convinced that global warming is disastrous, what part convinced you to attack me with one-tenth-degree fluctuations? Global cooling is a bit out there, isn't it? Come on, db, can we not agree that the record may be corrupt and work together to figure out how, why, and who is responsible? Do we really need to partition everyone into 'believers' and 'heretics'? I neither 'believe' in Mann, nor do I believe in Monckton. I want to find the truth.
As to your question about significance, I don't really know. There are datasets by people that I don't know and can't verify. I only have an undergraduate degree in science. But I believe in the scientific method. I quit science because I know from first-hand experience that there is an unbelievable amount of ego and stratification of prestige involved in academia. As a carpenter, I learned to measure twice and cut once. Some of the pundits are cutting without even one measurement.

Editor
July 2, 2014 1:05 am

At first glance, the most egregious case seems to be
USH00015749 MUSCLE SHOALS AP
Raw data 1940 – 2014
Final data 1893 – 2014 (YES!)
The additional values at the front (1893 to late 1940) are all flagged "E". So this doesn't look like a case of joining the records of two nearby stations. The "E" flags are an admission that the values were estimated, not measured.
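For anyone who wants to replicate this check, a minimal sketch follows (not NCDC's or Goddard's code). It tallies "E"-flagged (estimated rather than measured) monthly values per station in a USHCN v2.5 monthly file; the fixed-width column offsets and the file name are assumptions to be verified against the readme distributed with the data.

from collections import Counter

ID_WIDTH = 11          # assumed width of the station ID, e.g. "USH00015749"
FIRST_VALUE_COL = 16   # assumed column where January's value field starts
VALUE_WIDTH = 5        # assumed width of each monthly value
FIELD_WIDTH = 8        # assumed: value plus DMFLAG, QCFLAG, DSFLAG characters

def count_estimated(path):
    # Returns a Counter mapping station ID to its number of E-flagged months.
    counts = Counter()
    with open(path) as fh:
        for line in fh:
            station = line[:ID_WIDTH]
            for month in range(12):
                start = FIRST_VALUE_COL + month * FIELD_WIDTH
                dmflag = line[start + VALUE_WIDTH : start + VALUE_WIDTH + 1]
                if dmflag == "E":  # "E" marks an estimated (infilled) value
                    counts[station] += 1
    return counts

# e.g. count_estimated("ushcn_final_tavg.txt")["USH00015749"]  # hypothetical file name

Running the same tally over the corresponding raw file and comparing the two counts shows directly how much of a station's final record was never measured at all, which is exactly the Muscle Shoals situation above.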

Matt L.
July 2, 2014 3:25 am

“Next, looking at one-tenth degree fluctuations is well within error bars for the instrumentation used. When we look at whole degrees, there is nothing to be alarmed about.”
Talk about a tempest in a teapot; global warming is exactly that. The chart using the whole-degree scale should be published by every newspaper and news site in the USA. It puts things into perspective for numskulls like me.
Feels like we’re chasing computerized thought experiments down a rabbit hole.
CAGW is like being in a meeting where one outspoken, passionate person brings up a pet project so dumb it makes everyone’s eyes roll back in their heads. But no one stops that person. And soon the whole table has an opinion on the project. Then before you know it, the boss is nodding his head and someone’s drafted next steps and action items, adjusted company strategy and allocated resources to it.
Politicians who shouldn't be in the conversations on climate science are now leading them. Too bad climate science isn't a bit more like theoretical astrophysics, because then most Americans would sit back, relax, and assume Sheldon from The Big Bang Theory is handling it.

Nick
July 2, 2014 6:19 am

I made a post at Goddard's site early on in this brouhaha. I urged him to show source code, although I was unclear, and someone showed me his download code (which I had seen before) and insulted me for being lazy. I meant that he should show the code he uses to create each graph snippet. Doing that would follow McIntyre's example, but he doesn't. I haven't checked back to see if he has changed his ways, but if he has by now, I applaud him. If he hasn't, he should.

July 2, 2014 8:51 pm

So tired of people invoking McIntyre’s name to try and manipulate those they disagree with.

T Montag
July 3, 2014 5:19 am

From http://www.ncdc.noaa.gov/sotc/national/2012/7
The average temperature for the contiguous U.S. during July was 77.6°F, 3.3°F above the 20th century average, marking the warmest July and all-time warmest month on record for the nation in a period of record that dates back to 1895. The previous warmest July for the nation was July 1936, when the average U.S. temperature was 77.4°F.
I thought this information was wrong, and was being corrected. Did I miss something?

Editor
July 3, 2014 9:37 am

Poptech says:
July 2, 2014 at 8:51 pm

So tired of people invoking McIntyre's name to try and manipulate those they disagree with.

Thanks, Poptech. Steve McIntyre’s name gets invoked as an exemplar of people who are perfectly transparent about their scientific work, always publishing code and data for each of his analyses.
Unfortunately, there are not a whole lot of folks on either side of the climate aisle who have followed his example. So we don't have many people whose names might bear mentioning in the same regard, and as a result McIntyre's name gets overused. If you'd like, say, Phil Jones to get the same respect, you might have a quiet word with him about his scientific practices …
As to whether mentioning someone as an exemplar of transparent scientific practice is an attempt to "manipulate" other people, it's usually not that at all. Almost without exception, it is an attempt to get the person to stop hiding their code and data from public view.
Now, I suppose you could describe trying to get someone to grow up, become a transparent and honest scientist, and reveal all of their work as "manipulation" … but if so, I wish every scientist would get manipulated.
Instead, there are always random, generally anonymous internet popups like yourself denigrating the push for scientific transparency in various ways. Your attempt above to cover for those hiding their code and data, by vainly trying to diss a person whose scientific practice is beyond reproach, is just another example of the actions of people who are afraid of what the code and data might show …
Regards,
w.

July 3, 2014 3:19 pm

One reason I have stopped coming to WUWT, and stopped seeing it as a valuable knowledge base for fighting the false Anthropogenic Global Warming theory, is that it seems to have decided it is best to milk the whole Global Warming / Climate Change scam for all it is worth, by prolonging the time in which the AGW scammers are able to retain any actual credibility.
I think you go way too far out of your way to always give the scammers the benefit of every possible and many impossible doubts.

Editor
July 3, 2014 3:36 pm

astonerii says:
July 3, 2014 at 3:19 pm

One reason I have stopped coming to WUWT, and stopped seeing it as a valuable knowledge base for fighting the false Anthropogenic Global Warming theory, is that it seems to have decided it is best to milk the whole Global Warming / Climate Change scam for all it is worth, by prolonging the time in which the AGW scammers are able to retain any actual credibility.
I think you go way too far out of your way to always give the scammers the benefit of every possible and many impossible doubts.

And we’re supposed to just guess which posts provide the evidence for this curious claim of yours?
Details, facts, links, and quotes are your friends, astonerii, because without any of them, your opinion is … well … unanswerable.
w.

DHF
July 3, 2014 4:24 pm

Bob Dedekind says:
June 30, 2014 at 10:00 pm
"Furthermore, I believe, as Anthony mentioned somewhere previously, that we may be able to crowd-source this. Some of us know our local data fairly well, and we should be able to nominate a station or two for inclusion in our region. Of course, according to Real Climate Scientists™ we may need only 50 or so stations globally."
Not very easy to find traceable untouched records. Do you happen to be part of a network of professionals who might provide high-integrity temperature records for such an analysis?

mogur2013
July 3, 2014 7:01 pm

@db
That is how we leave this discussion? Heller didn't cherry-pick the only part of the RSS record that shows a decline? Yet, somehow I 'cherry-picked' the entire record? If you can't simply admit that, using his own device (woodfortrees graphing), I demolished his claim that GISS data is corrupt, a claim built on the only narrow slice of data that supports it, then what have we to discuss? You want him to bolster his views with anecdotal or cherry-picked narrow slices of the data record? Db, come on, tell me that Heller is correct in using any little part of a graph to support his inane views. Please. Anything?

Bob Dedekind
July 4, 2014 4:25 am

DHF says: July 3, 2014 at 4:24 pm

"Not very easy to find traceable untouched records. Do you happen to be part of a network of professionals who might provide high-integrity temperature records for such an analysis?"

I can only help out with our local New Zealand stations, but perhaps if the efforts were co-ordinated, we might find people in each designated region who could do the same with their own local data.
Working manually with 50 to 250 stations is certainly possible, and there would be no need for complex automatic algorithms that aren’t very good.

July 4, 2014 5:46 am

Willis, you are the ultimate hypocrite, as I have been waiting on your "computer climate modeling" code for months, but we all know you are not a computer climate modeler, so it does not exist. I have done more for transparency than most commentators here by exposing bullshit artists posing as scientists, with everything going according to plan.

Editor
July 4, 2014 11:01 am

Poptech says:
July 4, 2014 at 5:46 am

Willis, you are the ultimate hypocrite, …

Hey, being the best at things is just a gift …

… as I have been waiting on your "computer climate modeling" code for months, but we all know you are not a computer climate modeler, so it does not exist. I have done more for transparency than most commentators here by exposing bullshit artists posing as scientists, with everything going according to plan.

Poptech, you’ve made an entire career out of hating on me. You’ve got web pages devoted to calling me all kinds of names, and accusing me of everything from breach of promise to mopery on the skyways.
I find it hilarious. When you first started, I tried to point out all of the ways that you were wrong about me … foolish me. Your bizarre crusade has nothing to do with truth. And as a result, I gave up trying to satisfy your endless need to harass people by demanding documentation that you can dismiss and discard as not being what you wanted.
I know you think you're "exposing bullshit artists posing as scientists", Poptech. What you don't seem to have noticed is that the way scientists like myself expose such people is to point out the flaws in their science.
Since you haven’t done that, and are instead resorting to ad hominem attacks, I have to conclude that you haven’t been able to find any flaws in my science ā€¦ and that in fact your quest has little to do with science, but instead is driven by personal animosity.
Poptech, whether or not some unpleasant anonymous internet popup believes I am a "scientist", whatever that might mean to you, is of zero importance to me.
So you can go and add this interaction to your web page chronicling my sins, because me, I gave up trying to satisfy you a long time ago. I tried that, and quickly found out that nothing I could do would ever be good enough for you … so why do anything?
Here’s the crazy part, and I’m kind of unwilling to tell you this. People are hating on me all over the web, so you’ll have to take a number and get in line. Whenever I put up a new post, three times out of four within a day or so the haters like you are fulminating at me all over the web. Tamino loves to rag on me, the Weasel gets his jollies that way, you sit in your mom’s basement and pleasure yourself while publishing my imaginary sins, it’s a whole cottage industry out there.
What none of you seems to notice is that all that does is drive traffic to my work and to this web site. Someone comes across one of your rants at your site, and thinks "Dang! That Willis must be the ultimate something … I wonder what he said to get Poptech's panties in such a twist?"
So they come here to read the words of the ultimate arch-fiend … and they find good, solid, defensible science, along with discussions of same.
More traffic for the website, more people reading my work … what's not to like?
In any case, Poptech, whatever you want to do, you'll have to do it without my assistance. After all the ugly untrue accusations you've made about me, I'm overjoyed to hear that you have been "waiting for months" for something or other, and I hope you wait forever … so how about you run along and do that, and leave the adults alone?
w.

July 4, 2014 3:33 pm

Willis, it is fascinating to see you fabricate more nonsense. I don't hate you and never have, but I do not believe you are a scientist, and I have supported my argument with facts. I have exactly one webpage showing this, and here are more facts:
1. I did not call you any names.
2. I did not accuse you of anything, but instead have made some well-supported speculations.
3. My only demands have been for your "computer climate model" code, to prove you are a "computer climate modeler" - something we both know you are not.
4. My argument has always been about whether you are a professional or an amateur scientist; I believe you to be an amateur scientist, as does Dr. Spencer.
"Someone comes across one of your rants at your site, and thinks 'Dang! That Willis must be the ultimate something … I wonder what he said to get Poptech's panties in such a twist?' So they come here to read the words of the ultimate arch-fiend … and they find good, solid, defensible science, along with discussions of same."
Nope, this has never happened.
I have not made any untrue accusations against you. All my well-supported speculations are laid out with the facts, and people are more than capable of making up their own minds about them.

JohnH
Reply to  Poptech
July 4, 2014 4:06 pm

"3. My only demands have been for your "computer climate model" code, to prove you are a "computer climate modeler" - something we both know you are not.
4. My argument has always been about whether you are a professional or an amateur scientist; I believe you to be an amateur scientist, as does Dr. Spencer."
Why does it matter if one is a computer modeler or not? It's not as if the credibility of the climate models confers any special status. Actually, given the failure of the models, it's quite the opposite.
Likewise, who cares if someone is a ‘professional’ vs. amateur scientist? What matters here is the logic and rigor of one’s arguments. Do they hold up under scrutiny or not? That’s the essence of real science, not the dubious distinction of some professional designation.
So trying to marginalize someone by calling them an 'amateur' is merely to throw an insult. Grow up.

July 4, 2014 4:17 pm

It matters when they have claimed to be one:
http://web.archive.org/web/20120218062457/http://www.telegraph.co.uk/comment/columnists/christopherbooker/8349545/Unscientific-hype-about-the-flooding-risks-from-climate-change-will-cost-us-all-dear.html
“…by Willis Eschenbach, a very experienced computer modeller.”
http://wattsupwiththat.com/2013/10/09/dr-roy-spencers-ill-considered-comments-on-citizen-science/#comment-1445190
“…I am indeed a computer modeler of some small ability”
Credentials matter, and people have the right to be informed of them when someone is claimed to be a "scientist". Many people care, especially if they feel they have been misled. I am simply putting Mr. Eschenbach's "science" in its appropriate context.

DHF
July 6, 2014 4:35 am

Bob Dedekind says:
July 4, 2014 at 4:25 am
To avoid adjustment routines, effort should be invested in quality control of the measurements.
What will be needed is:
1. Text information about each temperature measurement station:
Location.
Elevation.
Information about the type of equipment, maintenance regime, quality of measurement, and an uncertainty estimate.
Pictures documenting that the immediate environment has not changed significantly over the years.
Identification of the reviewer(s) of the station information and data.
For traceability, the information about each station should be combined into a common document.
2. An original data series and a standard-format data series for each station, containing a plain temperature reading at a constant time of day:
Date and time (ISO 8601 format: YYYY-MM-DDThh:mmZ).
Temperature (K).
Preferably one measurement each day, or two measurements at a 12-hour interval.
No removal, addition or correction of data should be performed without sufficient documentation.
Ideally the station should have an unbroken record with no need for corrections.
I am not overly concerned about the uncertainty of each individual measurement, as random uncertainty will tend to cancel when combined in the averaging process (see the sketch below). However, we should be aware of potential significant systematic drift, or shifts due to change of equipment or adjustment. I think 50 stations is close to the lower limit for obtaining sufficiently low uncertainty. More stations could be used to reduce uncertainty further or to make an independent set. At least the result would be traceable, and not subject to uncertainty or systematic effects from algorithms.
Feasible, reasonable, sufficient?
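As one partial answer, here is a minimal sketch of what the proposed standard-format series might look like in practice (the one-reading-per-line layout and the function names are my assumptions, not an agreed format). It also illustrates the cancellation argument above: for independent random errors, the standard error of the mean falls as s/sqrt(N), so 50 stations cut the random uncertainty by roughly a factor of seven, while systematic drift is untouched.

from datetime import datetime, timezone
from math import sqrt

def read_series(path):
    # Assumed layout: one "YYYY-MM-DDThh:mmZ temperature-in-kelvin" pair per line.
    records = []
    with open(path) as fh:
        for line in fh:
            stamp, kelvin = line.split()
            when = datetime.strptime(stamp, "%Y-%m-%dT%H:%MZ")
            records.append((when.replace(tzinfo=timezone.utc), float(kelvin)))
    return records

def mean_and_stderr(values):
    # Independent random errors cancel: standard error of the mean = s / sqrt(N).
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return mean, sqrt(var / n)

A plain format like this keeps every reading traceable back to its line in the original file, which is the whole point of avoiding adjustment algorithms.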

DHF
July 6, 2014 11:05 am

It seems that the original data, in its original format, will be available. Digitized data will also be available, at first at monthly resolution. No doubt quantity of temperature stations has been prioritized:
http://www.realclimate.org/index.php/archives/2014/07/release-of-the-international-surface-temperature-initiatives-istis-global-land-surface-databank-an-expanded-set-of-fundamental-surface-temperature-records/
I wonder whether information about individual station quality will also be made easily available.