The scientific method is at work on the USHCN temperature data set

Temperature is such a simple finite thing. It is amazing how complex people can make it.

commenter and friend of WUWT, ossqss at Judith Curry’s blog

Sometimes, you can believe you are entirely right while simultaneously believing that you’ve done due diligence. That’s what confirmation bias is all about. In this case, a whole bunch of people, including me, got a severe case of it.

I’m talking about the claim made by Steve Goddard that 40% of the USHCN data is “fabricated”, which I and a few other people thought was clearly wrong.

Dr. Judith Curry and I have been conversing a lot via email over the past two days, and she has written an illuminating essay that explores the issue raised by Goddard and the sociology going on. See her essay:

http://judithcurry.com/2014/06/28/skeptical-of-skeptics-is-steve-goddard-right/

Steve Goddard aka Tony Heller deserves the credit for the initial finding. Paul Homewood deserves the credit for taking that finding and establishing it in a more comprehensible way that opened closed eyes, including mine, in this post entitled Massive Temperature Adjustments At Luling, Texas. Along with that is his latest followup, showing the problem isn’t limited to Texas but appears in Kansas as well. And there’s more about this below.

Goddard early on (June 2) gave me the source code that made his graph, but I couldn’t get it to compile and run. That’s probably more my fault than his, as I’m not an expert in the C++ language. Had I been able to, things might have gone differently. Then there was the fact that the problem Goddard noted doesn’t show up in GHCN data, and I didn’t see it in any of the data we had for our USHCN surface stations analysis.

But, the thing that really put up a wall for me was this moment on June 1st, shortly after getting Goddard’s first email with his finding, which I pointed out in On ‘denying’ Hockey Sticks, USHCN data, and all that – part 1.

Goddard initially claimed 40% of the STATIONS were missing, which I said right away was not possible. It raised my hackles, and prompted my “you need to do better” statement. Then he switched the text in his post from stations to data while I was away for a couple of hours at my daughter’s music recital. When I returned, I noted the change, with no note of the change on his post, and that is what really put up the wall for me. He probably looked at it like he was just fixing a typo, I looked at it like it was sweeping an important distinction under the rug.

Then there was my personal bias from previous episodes where Goddard had made what I considered grievous errors and refused to admit to them: the claim of CO2 freezing out of the air in Antarctica, later shown to be impossible by an experiment; the GISStimating 1998 episode; and the comment thread where, when the old data is checked, it is clear Goddard/Heller’s claim doesn’t hold up.

And then just over a month ago there was Goddard’s first hockey stick shape in the USHCN data set, which turned out to be nothing but an artifact.

All of that added up to a big heap of confirmation bias. I was so used to Goddard being wrong that I expected it again. But this time Steve Goddard was right, and my confirmation bias prevented me from seeing that there was in fact a real issue in the data and that NCDC has dead stations reporting data that isn’t real: mea culpa.

But that’s the same problem many climate scientists have: they are used to some skeptics being wrong on some issues, so they put up a wall. That is why the careful and exacting analyses we see from Steve McIntyre should be a model for us all. We have to “do better” to make sure that the claims we make are credible, documented, phrased in non-inflammatory language, understandable, and most importantly, right.

Otherwise, walls go up, confirmation bias sets in.

Now that the wall is down, NCDC won’t be able to ignore this. Even John Nielsen-Gammon, who was critical of Goddard along with me in the PolitiFact story, now says there is a real problem. So does Zeke, and we have all sent or forwarded email to NCDC advising them of it.

I’ve also been on the phone Friday with the assistant director and chief scientist of NCDC (Tom Peterson), and with the person in charge of USHCN (Matt Menne). Both were quality, professional conversations, and both thanked me for bringing the issue to their attention. There is lots of email flying back and forth too.

They are taking this seriously. They have to, as the final data as currently presented for USHCN is clearly wrong. John Nielsen-Gammon sent me a cursory analysis of Texas USHCN stations, noting he found a number of stations that have “estimated” data in place of actual good data that NCDC has in hand, data that appears in the RAW USHCN data file on their FTP site:

From: John Nielsen-Gammon

Sent: Friday, June 27, 2014 9:27 AM

To: Anthony

Subject: Re: USHCN station at Luling Texas

 Anthony –
   I just did a check of all Texas USHCN stations.  Thirteen had estimates in place of apparently good data.
410174 Estimated May 2008 thru June 2009
410498 Estimated since Oct 2011
410639 Estimated since July 2012 (exc Feb-Mar 2012, Nov 2012, Mar 2013, and May 2013)
410902 Estimated since Aug 2013
411048 Estimated July 2012 thru Feb 2014
412906 Estimated since Jan 2013
413240 Estimated since March 2013
413280 Estimated since Oct 2012
415018 Estimated since April 2010, defunct since Dec 2012
415429 Estimated since May 2013
416276 Estimated since Nov 2012
417945 Estimated since May 2013
418201 Estimated since April 2013 (exc Dec 2013).

What is going on is that while the RAW data file has the actual measurements, the USHCN code that produces the final published data doesn’t get the memo that good data is actually present for these stations, so it “infills” with estimated data computed from surrounding stations. It’s a bug, a big one. And when Zeke did a cursory analysis Thursday night, he discovered it was systemic to the entire record: up to 10% of stations have “estimated” data, spanning over a century:
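To make the failure mode concrete, here is a minimal, hypothetical sketch (in Python, not Goddard’s C++) of the kind of check involved: given one RAW record and one FINAL record in the whitespace-separated form excerpted in this post, it lists the months where the final file carries an “E” (estimated) value even though the raw file holds a real measurement. The parsing is deliberately simplified; the real USHCN v2.5 files are fixed-width, so a production version should follow the record layout in the readme.

```python
# Hypothetical sketch: find months where the USHCN FINAL data is flagged
# "E" (estimated) even though the RAW file has a real measurement.
# Records are treated as whitespace-separated, as in the excerpts shown
# in this post; real USHCN v2.5 files are fixed-width (see the readme).

MISSING = "-9999"

def parse_line(line):
    """Return (station_id, year, 12 x (value, flag)) from one record."""
    parts = line.split()
    station, year = parts[0], int(parts[1])
    months = []
    for tok in parts[2:14]:                  # the 12 monthly tokens
        if tok == MISSING:
            months.append((None, None))      # no value at all
        elif tok[-1].isalpha():              # trailing letter is a flag
            months.append((int(tok[:-1]), tok[-1]))
        else:
            months.append((int(tok), ""))    # value with no flag
    return station, year, months

def estimated_despite_raw(raw_line, final_line):
    """Months (1-12) where FINAL is 'E' but RAW holds a real value."""
    _, _, raw = parse_line(raw_line)
    _, _, final = parse_line(final_line)
    return [m + 1
            for m, ((rv, _rf), (_fv, ff)) in enumerate(zip(raw, final))
            if ff == "E" and rv is not None]
```

Run over every station and year in both files, any non-empty result is a station-month where good data was replaced by an infill; that is essentially the pattern Nielsen-Gammon tabulated for Texas above.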

Analysis by Zeke Hausfather

And here is the real kicker, “Zombie weather stations” exist in the USHCN final data set that are still generating data, even though they have been closed.

Remember Marysville, CA, the poster child for bad station siting? It was the station that gave me my “light bulb moment” on the issue of station siting. Here is a photo I took in May 2007:

[Photo: Marysville, CA USHCN station siting, May 2007]

It was closed just a couple of months after I introduced it to the world as the prime example of “How not to measure temperature”. The MMTS sensor was in a parking lot, with hot air blowing on it from the a/c units of the nearby cell-tower electronics sheds:

[Photo: Marysville, CA USHCN site, wider view]

Guess what? Even though NOAA’s own metadata marks it as closed in 2007, Marysville is still producing estimated monthly data, marked with an “E” flag, just like Luling, TX, which is still open but getting estimated data in place of actual data in the final USHCN file:

USH00045385 2006  1034E  1156h  1036g  1501h  2166i  2601E  2905E  2494E  2314E  1741E  1298E   848i     0
USH00045385 2007   797c  1151E  1575i  1701E  2159E  2418E  2628E  2620E  2197E  1711E  1408E   846E     0
USH00045385 2008   836E  1064E  1386E  1610E  2146E  2508E  2686E  2658E  2383E  1906E  1427E   750E     0
USH00045385 2009   969E  1092E  1316E  1641E  2238E  2354E  2685E  2583E  2519E  1739E  1272E   809E     0
USH00045385 2010   951E  1190E  1302E  1379E  1746E  2401E  2617E  2427E  2340E  1904E  1255E  1073E     0
USH00045385 2011   831E   991E  1228E  1565E  1792E  2223E  2558E  2536E  2511E  1853E  1161E   867E     0
USH00045385 2012   978E  1161E  1229E  1646E  2147E  2387E  2597E  2660E  2454E  1931E  1383E   928E     0
USH00045385 2013   820E  1062E  1494E  1864E  2199E  2480E  2759E  2568E  2286E  1807E  1396E   844E     0
USH00045385 2014  1188E  1247E  1553E  1777E  2245E  2526E  -9999  -9999  -9999  -9999  -9999  -9999

Source:  USHCN Final : ushcn.tavg.latest.FLs.52i.tar.gz

Compare to USHCN Raw : ushcn.tavg.latest.raw.tar.gz

In the USHCN V2.5 folder, the readme file describes the “E” flag as:

E = a monthly value could not be computed from daily data. The value is estimated using values from surrounding stations

There are quite a few “zombie weather stations” in the USHCN final dataset, possibly up to 25% of the 1218 stations in the network. In my conversations with NCDC on Friday, I was told these were kept in and “reporting” as a policy decision, to provide “continuity” of data for scientific purposes. While there “might” be some justification for that sort of thinking, few people know about it; there is no disclaimer or caveat in the USHCN FTP folder at NCDC or in the readme file that describes this. They only “hint” at it, saying:

The composition of the network remains unchanged at 1218 stations

But that really isn’t true, as some of the 1218 USHCN stations have been closed and are no longer reporting real data; instead, they are reporting estimated data.

NCDC really should make this clear. While it “might” be OK to produce a datafile that contains estimated data, not everyone is going to understand what that means, or that long-dead stations are producing estimated data. NCDC has failed to notify the public, and even their colleagues, of this. Even the Texas State Climatologist, John Nielsen-Gammon, didn’t know about these “zombie” stations until I showed him. If he had known, his opinion might have been different on the Goddard issue. When even professional people in your sphere of influence don’t know you are infilling data for dead weather stations like this, you can be sure that your primary mission to provide useful data is FUBAR.

NCDC needs to step up and fix this along with other problems that have been identified.

And they are. I expect some sort of statement, and possibly a correction, next week. In the meantime, let’s let them do their work and go through their methodology. It will not be helpful to ANYONE if we start beating up the people at NCDC ahead of such a statement and/or correction.

I will be among the first, if not the first, to know what they are doing to fix the issues, and as soon as I know, so will all of you. Patience and restraint are what we need at the moment. I believe they are making a good-faith effort, but as you all know the government moves slowly; they have to get policy wonks to review documents and all that. So we’ll likely hear something early next week.

These lapses in quality control, and the thinking that infilling estimated data for long-dead weather stations is acceptable, are the sort of thing that happens when the only people you interact with are inside your sphere of influence. The “yeah, that seems like a good idea” approval mumble probably resonated in that NCDC meeting, but it was a case of groupthink. Imagine The Wall Street Journal providing “estimated” stock values for long-dead companies to provide “continuity” of its stock quotes page. Such a thing would boggle the mind, and the SEC would have a cow, not to mention readers. Scams would erupt trying to sell stock in these long-dead companies: “It’s real, see, it’s listed in the WSJ!”

It often takes people outside of climate science to point out the problems they don’t see, and skeptics have been doing it for years. Today, we are doing it again.

For absolute clarity, I should point out that the RAW USHCN monthly datafile is NOT being infilled with estimated data, only the FINAL USHCN monthly datafile. But that is the one that many other metrics use, including NASA GISS, and it goes into the mix for things like the NCDC monthly State of the Climate Report.

While we won’t know until all of the data is corrected and the new numbers run, this may affect some of the absolute temperature claims made in SOTC reports, such as “warmest month ever”, “3rd warmest”, etc. The magnitude of such shifts, if any, is unknown at this point. The long-term trend will probably not be affected.

It may also affect the comparisons between raw and final adjusted USHCN data that we have been doing for our draft paper, such as this one:

[Figure 20 from our draft paper (Watts et al. 2012): CONUS compliant vs. non-compliant stations vs. NOAA]

The exception is BEST, which starts with the raw daily data, but they might be getting tripped up into creating some “zombie stations” of their own by the NCDC metadata, where resolution improvements to lat/lon can masquerade as station moves. The USHCN station at Luling, Texas is listed by BEST as having seven station moves (note the red diamonds):

Luling-TX-BEST

But there have really been only two, and the station has been just like this since 1995, when it was converted from a Stevenson Screen to MMTS. Here is our survey image from 2009:

Luling_looking_north

Photo by surfacestations volunteer John Warren Slayton.

NCDC’s metadata only lists two station moves:

[Image: NCDC metadata listing two station moves for Luling, TX]

As you can see below, some improvements in lat/lon accuracy can look like a station move:

[Image: NCDC HOMR location history for Luling, TX]

http://www.ncdc.noaa.gov/homr/#ncdcstnid=20024457&tab=LOCATIONS

[Image: NCDC HOMR miscellaneous metadata for Luling, TX]

http://www.ncdc.noaa.gov/homr/#ncdcstnid=20024457&tab=MISC

Thanks to Paul Homewood for the two images and links above. I’m sure Mr. Mosher will let us know if this issue affects BEST or not.
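As a sanity check on how large an apparent “move” a coordinate-precision change can produce, here is a small sketch using the standard haversine formula. The coordinates below are illustrative round-offs near Luling, TX, not NCDC’s actual metadata values:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# The same spot reported at two-decimal vs. four-decimal precision
# (illustrative numbers near Luling, TX, not NCDC's actual metadata):
coarse = (29.68, -97.65)
precise = (29.6838, -97.6472)
print(f"apparent move: {haversine_km(*coarse, *precise):.2f} km")
```

A shift of roughly half a kilometer is nothing physically, but to an algorithm comparing old and new metadata it can look like a genuine station move.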

And there is yet another issue: the recent change to something called “climate divisions” for calculating national and state temperatures.

Certified Consulting Meteorologist and Fellow of the AMS Joe D’Aleo writes in with this:

I had downloaded the Maine annual temperature plot from NCDC Climate at a Glance in 2013 for a talk. There was no statistically significant trend since 1895. Note the spike in 1913, following super blocking from Novarupta in Alaska (similar to the high-latitude volcanoes of the late 2000s, which helped with the blocking and maritime influence that spiked 2010, as snow was gone by March with a steady northeast maritime Atlantic flow). 1913 was close to 46F, and the long-term mean just over 41F.

[Chart: NCDC Climate at a Glance, Maine annual temperature, as downloaded in 2013]

Then, seemingly in a panicked change late this frigid winter, big changes occurred at NCDC. I wanted to update the Maine plot for another talk and got this from NCDC CAAG:

[Chart: NCDC Climate at a Glance, Maine annual temperature, current version]

Note that 1913 was cooled nearly 5 degrees F and no longer stands out. There is now a warming of at least 3 degrees F since 1895 (they list 0.23/decade), and the new mean is close to 40F.

Does anybody know what the REAL temperature of Maine is/was/is supposed to be? I sure as hell don’t. I don’t think NCDC really does either.

In closing…

Besides moving toward a more accurate temperature record, the best thing about all this hoopla over the USHCN data set is the PolitiFact story where we had all these experts lined up (including me as the token skeptic) who stated without a doubt that Goddard was wrong, and rated the claim “Pants on Fire”.

They’ll all be eating some crow, as will I, but now that I have Gavin for dinner company, I don’t really mind at all.

When the scientific method is at work, eventually, everybody eats crow. The trick is to be able to eat it and tell people that you are honestly enjoying it, because crow is so popular, it is on the science menu daily.

June 30, 2014 5:43 pm

Anthony, if I have computed correctly, Luling now shows accurate RAW.
See Paul Homewood’s site for an update or my own blog where I have put the result. Needs verifying.

Ray Boorman
June 30, 2014 9:42 pm

CC Squid says:
June 28, 2014 at 6:43 pm
The explanation of how the temps were decreased is located below. The comments starting at this one say it all. Mosh goes through the process in detail.
http://judithcurry.com/2014/06/28/skeptical-of-skeptics-is-steve-goddard-right/#comment-601719
The explanation from Mosher at the link above is a real eye-opener to me. If I am reading it correctly, when BEST (maybe NCDC too??) generate their graphs of temperature anomalies, no real, actually observed, data makes it into the graph. It seems the observed data is simply a starting point, & all observed data in the record is used to adjust every other record, based on some fancy algorithm created by the gatekeepers. The result then becomes their graph!! When they do a new graph the following month with a little extra data, every record in the database, probably including the new one, is adjusted slightly. Mosh calls these “fields”, whatever that means. When I was a computer programmer, data was data, & you never thought to change the past. I can’t imagine a situation where it is valid to create historical temperature anomaly graphs that change the past when each new month’s data is added.

Ray Boorman
June 30, 2014 9:52 pm

It seems these anomaly graphs truly are very much like the analogy Anthony made above about the WSJ & stock indexes. Except that, instead of keeping a bankrupt stock in the index with fake data, they continually adjust the index to make it look “xxxx”. (Insert your own descriptor in the inverted commas to explain their actions.) If it is not good to alter the plot of a stock index, could it possibly be good to do the same to a historical temperature graph?

Bob Dedekind
June 30, 2014 10:00 pm

DHF says: June 30, 2014 at 11:35 am

“What should be done is to throw over board all the questionable and discontinued measurement locations and keep in order of magnitude 250 good temperature measurement stations randomly spread around the world.”

I agree.
Furthermore, I believe, as Anthony mentioned somewhere previously, that we may be able to crowd-source this. Some of us know our local data fairly well, and we should be able to nominate a station or two for inclusion in our region.
Of course, according to Real Climate Scientists™ we may need only 50 or so stations globally.

DHF
July 1, 2014 12:27 am

Maybe, if there is sufficient integrity in some of the temperature data series used to create Berkeley Earth, its creators might be able and willing to provide suitable records?
According to the Wikipedia article about Berkeley Earth:
“Berkeley Earth founder Richard A. Muller told The Guardian “…we are bringing the spirit of science back to a subject that has become too argumentative and too contentious, ….we are an independent, non-political, non-partisan group. We will gather the data, do the analysis, present the results and make all of it available. There will be no spin, whatever we find. We are doing this because it is the most important project in the world today. Nothing else comes close.”

July 1, 2014 1:54 am

“”” Its over no one believes it anymore please give up. The “modeling” of AGW is a FANTASY “””
Because photos of glaciers over the past 100 years have been consistently tampered with too. The elves sneak in every month and replace 10% of the photos with ones that have been photo-shopped to be just a little bit whiter.

July 1, 2014 2:01 am

“””
the error is ALWAYS that the data is higher than it should have been

I’ve never heard an explanation of HOW could temperatures in Northern Europe be so hot (or cold) for hundreds of years and the rest of the world unaffected.
“””
Every single time I see a conspiracy theorist make an unsubstantiated claim, it is in support of their conspiracy theory. Even when most of the time a simple google search will offer a counter example: http://www.skepticalscience.com/medieval-warm-period.htm

July 1, 2014 2:11 am

“no warming to cooling for 17yrs. ”
Yes, cherry picking your data is always a good way to prove your point. Why is 17 years the magic number? Was there perhaps something special about that one year 17 years ago? Like maybe it was an outlier on the warm side? “Hey, let’s pick the hottest year on record and measure temperature trends since that year!” Gosh, you’re so smart to figure out how to manipulate the data like that. Does it help you to confirm your bias when you do that?

July 1, 2014 2:26 am

“Can anyone demonstrate that there has been any warming at all?”
There are a variety of mechanisms via which one can estimate past temperatures, not just one. You can look at historical written records; you can do archaeology to see where people lived and what they ate; you can take ice core samples; you can look at tree rings; you can look at old pictures of glaciers; you can look at the effects that past ice age glaciers had on the underlying terrain.
Of course, you can also come up with increasingly unlikely theories as to why evidence for warming cannot possibly be valid: “The urban heat island effect causes warm winds to blow up to the mountain glaciers and melt them. There is no global warming; there are just these rapidly shifting pockets of low-flying localized hot air, offset by high flying pockets of localized cold air.”

Bob Dedekind
July 1, 2014 2:26 am

cesium62:

“Why is 17 years the magic number?”

Umm, because you start now, and you go back in time while the trend stays “flat”. When it is no longer “flat”, you stop and check the date. Turns out it’s 17 years at the moment.
By “flat” of course I mean the trend is statistically insignificant.
On glaciers: they have been receding for some time now. Think hundreds to thousands of years.
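For readers who want to see what that procedure looks like mechanically, here is a toy sketch of it. It treats “flat” as an OLS slope within roughly two standard errors of zero and extends the window back from the most recent point; it ignores autocorrelation, which a serious significance test on temperature series must handle, so it illustrates the procedure rather than substituting for one.

```python
import math

def trend_and_2se(y):
    """OLS slope of y against its index, plus ~2 standard errors."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    sxy = sum((x - xbar) * (yi - ybar) for x, yi in enumerate(y))
    slope = sxy / sxx
    ssr = sum((yi - ybar - slope * (x - xbar)) ** 2 for x, yi in enumerate(y))
    se = math.sqrt(ssr / (n - 2) / sxx)   # standard error of the slope
    return slope, 2 * se

def flat_tail(y, min_len=10):
    """Length of the trailing stretch whose trend is indistinguishable
    from zero: widen the window back in time until 'flat' fails."""
    last = 0
    for length in range(min_len, len(y) + 1):
        slope, two_se = trend_and_2se(y[-length:])
        if abs(slope) <= two_se:
            last = length
        else:
            break
    return last
```

On a steadily rising series the tail length is zero; on a trendless, wiggling series the window keeps extending, which is exactly the “go back while it stays flat” recipe described above.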

Bob Dedekind
July 1, 2014 2:38 am

cesium62:
There has been a slow natural long-term warming since the last ice age, with cycles superimposed on it, such as the RWP, MWP (warmer than now) and the LIA (cooler). There has been no acceleration of this warming, and over the past 17 years there has been none at all.
This in spite of the GCMs predicting accelerating warming. Right now we should be warming at around 0.2°C/century. We aren’t.
In his seminal paper on the subject, the godfather of AGW, James Hansen, predicted in 1988 that by now the warming would be so obvious that it would be three standard deviations above that of the 1950s. It isn’t.

Bob Dedekind
July 1, 2014 2:38 am

Sorry 0.2°C/decade, not century.

richard verney
July 1, 2014 3:04 am

cesium62 says:
July 1, 2014 at 2:26 am
////////////////
As you say, history is a useful reference.
We know as fact that dustbowl conditions in the US were seen in the 1930s. There is much contemporaneous print, and of course there is archive film of these conditions.
These conditions were essentially caused by warmth and/or lack of precipitation. Unless there is evidence that in the 1930s there was considerably less rainfall, over the region in question, the obvious conclusion is that the 1930s in the States was warmer than today.
Until there is a return to the physical conditions seen in the 1930s, no sentient person will consider the ‘adjusted’ record, which seeks to claim that it is warmer in the US today than it was in the 1930s, to be correct.
It is silly to make unrealistic adjustments that are confounded by hard evidence. The evidence of the 1930s is a problem, and making adjustments to this period ought to be seen as hitting a brick wall, just as it is not possible to go too far with adjustments to present-day temps in view of the check against the satellite data.

richardscourtney
July 1, 2014 3:17 am

cesium62:
At July 1, 2014 at 1:54 am you quote

Its over no one believes it anymore please give up. The “modeling” of AGW is a FANTASY

And you respond with this non sequitur

Because photos of glaciers over the past 100 years have been consistently tampered with too. The elves sneak in every month and replace 10% of the photos with ones that have been photo-shopped to be just a little bit whiter.

Here on planet Earth, the climate models have failed to correctly predict changes to temperatures and to total ice.
You follow that nonsense at July 1, 2014 at 2:11 am by quoting

no warming to cooling for 17yrs

which you comment with so idiotic a rant that I choose to not copy it as a method to spare your embarrassment. However, the reality of that matter was explained to you by Bob Dedekind at July 1, 2014 at 2:26 am where he writes

cesium62:

Why is 17 years the magic number?

Umm, because you start now, and you go back in time while the trend stays “flat”. When it is no longer “flat”, you stop and check the date. Turns out it’s 17 years at the moment.
By “flat” of course I mean the trend is statistically insignificant.

I write to add that 17 years is not “magic” but it is important to the issue about climate models.
In 2008 the US National Oceanic and Atmospheric Administration (NOAA) reported that climate models commonly suggest periods of 10 years or less with no temperature rise but they “rule out” (at 95% confidence) periods of no temperature rise for 15 years or longer.
17 years is longer than 15 years. So, according to NOAA, the climate models don’t work; i.e. “The “modeling” of AGW is a FANTASY”.
Richard

richardscourtney
July 1, 2014 3:21 am

cesium62:
If you are having difficulty understanding the issue explained to you by richard verney at July 1, 2014 at 3:04 am then look at this.
Richard

richard verney
July 1, 2014 3:38 am

Duster says:
June 30, 2014 at 12:50 pm
/////////////
This is because we are seeking to overstretch the bounds and limitations of the data source. The network was never intended to provide temperature data measured to tenths of a degree.
The temperature measurements (ie., the observed actual data) come warts and all. We are trying to remove the warts, to administer cosmetic surgery and make the data more acceptable to our demands. But this is a fail. The better approach would be to simply accept the raw data, warts and all, and ascribe a realistic error boundary to it (to cover the warts-and-all element).
It would then be an indicator, with error boundaries. It may not be able to tell us much of importance, since we are seeking to discover a signal which is smaller than the error bandwidth, but that is a consequence of the design limitations of the network.
Presently, all we are doing is interpreting the appropriateness of the assumptions and guesses underpinning the adjustments that we have made to the data set.
There are so many fundamental issues with the data set, that I am of the opinion, that it is time to ditch it for data post 1979.

July 1, 2014 4:48 am

In 2008 the US National Oceanic and Atmospheric Administration (NOAA) reported that climate models commonly suggest periods of 10 years or less with no temperature rise but they “rule out” (at 95% confidence) periods of no temperature rise for 15 years or longer.
Of course as explicitly stated in that report that statement refers to models which don’t include ENSO, so you’d have to compare with data which has ENSO excluded. If you do that you’ll find that there is no period of greater than 15 years.

July 1, 2014 5:04 am

richard verney says:
July 1, 2014 at 3:04 am
cesium62 says:
July 1, 2014 at 2:26 am
////////////////
As you say, history is a useful reference.
We know as fact that dustbowl conditions in the US were seen in the 1930s. There is much contemporaneous print, and of course there is archive film of these conditions.
These conditions were essentially caused by warmth and/or lack of precipitation. Unless there is evidence that in the 1930s there was considerably less rainfall, over the region in question, the obvious conclusion is that the 1930s in the States was warmer than today.

There was less rainfall than usual in that region, also poor agricultural practices following the ploughing of the natural prairies and expansion of wheat growing.

richardscourtney
July 1, 2014 5:46 am

Phil:
Your post at July 1, 2014 at 4:48 am continues your usual practice of posting twaddle which demonstrates you do not have a clue what you are talking about.
You quote my accurate statement that said

In 2008 the US National Oceanic and Atmospheric Administration (NOAA) reported that climate models commonly suggest periods of 10 years or less with no temperature rise but they “rule out” (at 95% confidence) periods of no temperature rise for 15 years or longer.

and say

Of course as explicitly stated in that report that statement refers to models which don’t include ENSO, so you’d have to compare with data which has ENSO excluded. If you do that you’ll find that there is no period of greater than 15 years.

ENSO is an important part of global climate. Any model which cannot emulate it cannot emulate global climate, and no model emulates ENSO adequately.
My full statement from which you quoted said

cesium62:

Why is 17 years the magic number?

Umm, because you start now, and you go back in time while the trend stays “flat”. When it is no longer “flat”, you stop and check the date. Turns out it’s 17 years at the moment.
By “flat” of course I mean the trend is statistically insignificant.

I write to add that 17 years is not “magic” but it is important to the issue about climate models.
In 2008 the US National Oceanic and Atmospheric Administration (NOAA) reported that climate models commonly suggest periods of 10 years or less with no temperature rise but they “rule out” (at 95% confidence) periods of no temperature rise for 15 years or longer.
17 years is longer than 15 years. So, according to NOAA, the climate models don’t work; i.e. “The “modeling” of AGW is a FANTASY”.

So,
according to the NOAA determination the 17 years cessation to warming indicates “The “modeling” of AGW is a FANTASY”
and
according to your (true) assertion the models fail to emulate ENSO which indicates “The “modeling” of AGW is a FANTASY”.
The important point is that you, NOAA and I agree “The “modeling” of AGW is a FANTASY”.
Richard

DHF
July 1, 2014 6:48 am

Ray Boorman says:
June 30, 2014 at 9:42 pm
////
Good points!!
I would even say that historical temperature anomaly graphs that change the past when each new month’s data is added, is proof positive that the model and / or the software is fundamentally flawed.

Solomon Green
July 1, 2014 10:54 am

Duster says
“Until there is clear evidence of deliberate and intentional biasing of the record, there’s no reason to look for them.”
If there are biases they should be looked for, whether or not they are deliberate and intentional. If biases exist there cannot be a true record and one cannot hope to make proper use of the data so long as one is unaware of their existence.

mogur2013
July 1, 2014 2:04 pm

Tony Heller (Steven Goddard) is at it again. This time, with the global temperatures instead of just the US record. He would like us to assume the GISS global dataset is tampered with by comparing it to the RSS dataset that he implies is a truer record of global temperatures. He posted this graph at his website:
http://www.woodfortrees.org/graph/gistemp/from:1998/mean:60/plot/rss/from:1998/mean:60/offset:0.25/plot/gistemp/from:1998/trend/plot/rss/from:1998/trend/offset:0.25
http://stevengoddard.wordpress.com/2014/07/01/giss-diverging-from-reality-at-a-phenomenal-rate/
Another cherry pick by Heller. The entire satellite record shows a completely different picture than just the data since 1998. I have extended his graph back to 1979, and included the entire RSS trendline (light blue), as well as the UAH 5 year mean (brown). UAH uses a newer satellite and it agrees more closely with the GISS record in recent years than the RSS record. Both the HADCRUT4 and WoodForTrees global datasets also support the GISS data.
http://www.woodfortrees.org/graph/gistemp/from:1979/mean:60/plot/rss/from:1979/mean:60/offset:0.25/plot/gistemp/from:1998/trend/plot/rss/from:1998/trend/offset:0.25/plot/rss/from:1979/trend/offset:0.25/plot/uah/from:1979/mean:60/offset:0.39
I don’t know if all of the data sets have been tampered with (US, global, station, satellite); however, to clearly show that the GISS data has been manipulated requires more than demonstrating a short term divergence from the RSS data set. Roy Spencer (not exactly an alarmist) on the recent UAH versus RSS satellite temperature data differences-
“[We believe] RSS data is undergoing spurious cooling because RSS is still using the old NOAA-15 satellite which has a decaying orbit, to which they are then applying a diurnal cycle drift correction based upon a climate model, which does not quite match reality. We have not used NOAA-15 for trend information in years…we use the NASA Aqua AMSU, since that satellite carries extra fuel to maintain a precise orbit.
Of course, this explanation is just our speculation at this point, and more work would need to be done to determine whether this is the case. The RSS folks are our friends, and we both are interested in building the best possible datasets.
But, until the discrepancy is resolved to everyone’s satisfaction, those of you who REALLY REALLY need the global temperature record to show as little warming as possible might want to consider jumping ship, and switch from the UAH to RSS dataset.”
I would have posted on his site, but my views are considered spam to Heller and I have been banned permanently. As you know, Tony, Heller will never concede an inch when challenged and I certainly appreciate your forthcoming admission of error to him. It remains to be determined how much data tampering is biased (or honest) versus how much is in Heller’s cherry picking imagination.

July 1, 2014 2:27 pm

mogur2013,
Speaking of cherry-picking, that is a very small part of the extensive evidence that GISS ‘adjusts’ the temperature record to make it look scarier. For example:
http://oi42.tinypic.com/vpx303.jpg [blink gif]
http://oi54.tinypic.com/fylq2w.jpg
Those links are just a couple of random selections. There is FAR too much evidence implicating GISS and Hansen for it to be a mistake.
Finally, the basic prediction of the alarmist clique, made for many years, was ‘runaway global warming’. But like all the other scary alarmist predictions, it was a total failure. So, a question:
What would it take for you to admit that the CO2=CAGW conjecture is wrong? Anything? Or is your mind made up, and the science settled?

July 1, 2014 2:57 pm

Anthony, you now need to apply the same open mindedness to your political views Murray

mogur2013
July 1, 2014 3:03 pm


Huh? What makes you think I ‘cherry-picked’ the entire RSS dataset as opposed to the only segment of it that shows a decline in global temperatures?
As for your links, I agree with you entirely. I have overlaid every Hansen US temperature graph that he has published since 1987, and absolutely agree that he has manipulated the data to exaggerate global warming in the US temperature record. So, how does that justify Heller’s idiotic cherry picking? Do you want the truth or simply a confirmation of your beliefs?
Btw, I don’t ‘believe’ in CAGW. I am convinced that the global temperatures have risen since industrialization, and I tend to think that man has contributed significantly to that rise. You seem to leap to the conclusion that I must be a liberal ‘alarmist’, out to ruin the global economy, when in reality (please rein in your imagination), I am not convinced that global warming is ‘runaway’. I could label you with all my preconceived notions about ‘denialists’, but that would be just stupid. I am sure you are informed and just as interested as I am in discovering the actual reality of our climatology.