The NCDC press office sent an official response to Politifact, which is below.
The NCDC has not responded to me personally; I only got this by asking around.
I’ve provided it without comment.
=====================================================
Are the examples in Texas and Kansas prompting a deeper look at how the algorithms change the raw data?
No – our algorithm is working as designed. NCDC provides estimates for temperature values when:
1) data were originally missing, and
2) when a shift (error) is detected for a period that is too short to reliably correct. These estimates are used in applications that require a complete set of data values.
Watts wrote that NCDC and USHCN are looking into this and will issue some sort of statement. Is that accurate?
Although all estimated values are identified in the USHCN dataset, NCDC’s intent was to use a flagging system that distinguishes between the two types of estimates mentioned above. NCDC intends to fix this issue in the near future.
Did the point Heller raised, and the examples provided for Texas and Kansas, suggest that the problems are larger than government scientists expected?
No, refer to question 1.
==================================================
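For what it's worth, the distinction the NCDC draws between its two kinds of estimates can be sketched in a few lines. This is a hypothetical illustration only: the flag letters and the function are mine, not the actual USHCN flagging scheme.

```python
MISSING = None

def classify_estimates(raw, short_shift_months):
    """For each month, flag why a value would be estimated:
    'O' = observed as reported,
    'M' = estimated because the observation was missing,
    'S' = estimated because a detected shift covered too short a
          period to be corrected reliably."""
    flags = []
    for i, value in enumerate(raw):
        if value is MISSING:
            flags.append('M')
        elif i in short_shift_months:
            flags.append('S')
        else:
            flags.append('O')
    return flags

# Example: two missing months plus one short, uncorrectable shift.
raw = [12.1, None, 13.0, 13.4, None, 12.8]
print(classify_estimates(raw, short_shift_months={3}))
# ['O', 'M', 'O', 'S', 'M', 'O']
```

The NCDC's stated intent, as quoted above, is exactly this kind of distinction: one flag for infilled-missing values and another for short-shift estimates.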
Konrad says:
July 1, 2014 at 8:53 pm
“Using an program that makes TOB adjustments without individual station metadata would be a bad thing. A very bad thing…”
Last I heard, that's exactly what they are/were doing, including applying TOB adjustments to new hourly reporting stations.
To me this response suggests that the issue is so serious that they have flagged it WONTFIX.
Not acceptable. Keep pushing.
Programmers have a specific name for this: it's a WAD bug, "works as designed." In other words, someone screwed the pooch.
“our algorithm is working as designed” = “There was some bone-headed decisions… Not even a smidgeon of corruption.”
So INS, IRS, VA, EPA, NSA, NLRB, Federal Reserve, policies on Iraq, Syria, Russia, Ukraine, etc. — everything is “working as designed.”
temp says:
July 1, 2014 at 9:17 pm
———————————
“Last I heard, that's exactly what they are/were doing, including applying TOB adjustments to new hourly reporting stations.”
Yes, and apparently you can even use it on “zombie” stations as well.
NCDC says “our algorithm is working as designed”. Given that their code is working to produce an artificial warming trend, this does raise some interesting questions. Questions like “Whose design, exactly?”, and “Designed for what purpose?” Or “About the vicious and sustained public floggings, I trust there are no objections?”
Col Mosby says:
July 1, 2014 at 6:12 pm
Apparently they only supply missing data when required by other programs. Might I suggest modifying those other programs, rather than inventing data? You can never increase accuracy by guessing, nor can knowledge be increased simply by multiplying your current information.
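Col Mosby's alternative, changing the downstream programs instead of inventing data, is straightforward to sketch. A minimal example, assuming the consumer only needs an average: skip the gaps rather than fill them.

```python
def mean_skipping_missing(values):
    """Average only the months that actually reported,
    instead of infilling estimates for the gaps (None)."""
    present = [v for v in values if v is not None]
    if not present:
        raise ValueError("no observed values to average")
    return sum(present) / len(present)

# Two months missing; the average uses only the three real readings.
readings = [20.0, None, 22.0, None, 21.0]
print(mean_skipping_missing(readings))  # 21.0
```

Whether skipping gaps is statistically adequate depends on the application, but it illustrates that "requires a complete set of data values" is a property of the consuming program, not a law of nature.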
##################################################
It is actually pretty simple.
1. Some methods for calculating global averages REQUIRE long series
A) GISS
B) CRU
2. There are many methods for creating long series from multiple records ( see CET)
3. One method is to make zombies.
the algorithm does what it was designed to do.
” These estimates are used in applications that require a complete set of data values.”
However, this can all be avoided by using the method suggested by skeptics:
oh wait, that would be the berkeley method.
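For readers unfamiliar with the segment-based idea Mosher is alluding to, here is a rough sketch of a common-anomaly combination of short records. The function and numbers are my illustration, not the actual Berkeley Earth code:

```python
def anomaly_average(records):
    """Combine short, broken station records without stitching
    long 'zombie' series: convert each record to anomalies from
    its own mean, then average whatever anomalies exist each year.
    Each record is a dict mapping year -> temperature."""
    combined = {}
    for rec in records:
        base = sum(rec.values()) / len(rec)  # this record's own mean
        for year, temp in rec.items():
            combined.setdefault(year, []).append(temp - base)
    return {y: sum(a) / len(a) for y, a in sorted(combined.items())}

# Two short, overlapping records; no infilling required.
rec1 = {2000: 10.0, 2001: 11.0}
rec2 = {2001: 15.0, 2002: 16.0}
print(anomaly_average([rec1, rec2]))
# {2000: -0.5, 2001: 0.0, 2002: 0.5}
```

The point of such methods is that no year's average ever depends on an estimated station value; the overlap does the aligning instead.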
Working as designed indeed. It’s been a nice little earner…
Synthetic data?
Once again, each week, we arrive at the same lamentable, PR-disaster-prone stage: technically equipped skeptics with programming skills fail to engage with the highly energetic Steve Goddard as he doubles down on possibly real but often fantastic claims. For two years Goddard made the simple mistake of not accounting for the different numbers of stations in the final vs. raw data, creating an adjustments hockey stick with a massive current-year spike merely due to late station reporting. This was allowed to happen because he is not careful himself and doesn't intend to be. But good science works by having different temperaments engage with each other and quickly double-check each other's work.

Since I am no longer set up with software outside of 3D design, my attempt to entice rigor out of Steve backfired in a fit of cheerleading-squad attacks and a ban, after I had been a regular there for years. Now, once again, a new claim of trend alteration is being made as news cycles come and go, without the needed skeptical-side information on whether Steve's strong claims about the final effect are validated by other skeptics. Steve claims that the zombie-station infilling itself contains a bizarre bias in which the infilled stations show much greater trends.

So what am I to do, week after week, as a news-site activist when these trained Gorebot alarmists keep countering my work with Goddard-bashing? I can't do a thing, since skeptics with the needed setup won't say whether the trend really is mistakenly too high, which is the only final result that matters. Most here just *assume* a scandal, but there's only a scandal *against* skeptics if there's no downward revision asserted by more than a mere single blogger!
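The station-count mistake described in this comment is easy to reproduce in miniature. The function and numbers below are invented for illustration: when the raw and final station sets differ, subtracting one set's average from the other's manufactures a "spike" that vanishes once the comparison is restricted to stations present in both.

```python
def adjustment_signal(raw, final, matched=True):
    """Estimate the average (final - raw) adjustment.
    matched=True restricts the comparison to stations present in
    both sets, so set composition cannot masquerade as an adjustment.
    raw and final are dicts mapping station id -> temperature."""
    if matched:
        common = raw.keys() & final.keys()
        diffs = [final[s] - raw[s] for s in common]
        return sum(diffs) / len(diffs)
    # Naive approach: average each set separately, then subtract.
    return (sum(final.values()) / len(final)
            - sum(raw.values()) / len(raw))

# Current month: a warm station hasn't reported raw data yet,
# but the final dataset already carries an estimate for it.
raw   = {"A": 10.0, "B": 12.0}
final = {"A": 10.0, "B": 12.0, "C": 20.0}  # C reports late
print(adjustment_signal(raw, final, matched=False))  # 3.0 "spike"
print(adjustment_signal(raw, final, matched=True))   # 0.0
```

Here no adjustment was applied at all, yet the naive comparison reports a 3-degree "adjustment" purely because the late-reporting station is in one set and not the other.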
Wow… Did they just admit culpability in intentionally misleading people?
Working as intended? So basically it was designed to be screwed up. Man, I really should have taken the other pill; this rabbit hole just keeps getting deeper and deeper.
I have read many expert reports and cross-examined many experts in many disciplines, but the BS language and appearance don't change. There is so much wrong with this response that I don't even know where to begin. It's incriminating as well as illuminating, in my view. I might expand in another post; let's see how this unfolds for now.
In the meantime, what does the NOAA response have to do with why they keep changing the July 1936 temperature? Did their ‘algorithm’ find a ‘shift’ in 1936 and thus a reason to ‘estimate’ some 1936 (non)station temperatures after all these years? Do they keep finding new ‘shifts’ back in 1936 each month? Or did they find a ‘shift’ in 2014 that their algorithm decides affects temperatures in 1936?
It all seems like a bunch of shift.
…or very shifty at best….
But I digress. One basic question, besides “was your data right then, or is it right now, or was it right the 14th time you changed it?”, is this: At what point can you, NOAA, tell the world’s scientists that you are done ‘calculating’ the 1936 temperature?
Let’s get the facts before we continue declaring world war on the very gas (CO2) that keeps both the earth and mankind alive. That much we do know for a fact. JMO.
I think a big problem is that people expect a product to work when it’s marketed so heavily, like a Mercedes or an iPad. Unfortunately the consumers in this case don’t seem too eager to complain, leaving people on the sidewalk scratching their heads as the tailpipe hits the asphalt.
GM could say the same of its ignition switches.
It appears that for Seattle, WA, the HIGH, LOW, and average temperature requests plot the same data, as suggested above. I am still investigating; stay tuned.
Konrad says: July 1, 2014 at 8:53 pm
‘“does now”? Yes, we have noted the panicked scrabbling…’
No, they have done it for many years. Here is a paper from skeptics Balling and Idso in 2002 complaining in GRL about, wouldn’t you know it, USHCN adjustments creating a trend. And their data? Published USHCN raw and adjusted data. In fact, they say (in 2002):
“Considered one of the best of its type, the United States Historical Climatology Network (USHCN) dataset consists of temperature records from 1,221 stations spanning most of the 20th century [Karl et al., 1990]. An important feature of the USHCN is an extensive metadata file aiding in adjustments to the temperature data associated with station moves, instrument changes, microclimatic changes near the station, urbanization, and/or time of observation biases. As a result, there are many versions of the USHCN ranging from the raw temperature time series to more widely-used datasets that have been extensively adjusted for multiple potential contaminants to the record.”
and
“All scientists agree that the raw records are in need of some adjustment…”
Their conclusion:
“It is noteworthy that while the various time series are highly correlated, the adjustments to the RAW record result in a significant warming signal in the record that approximates the widely-publicized 0.50°C increase in global temperatures over the past century.”
People are still announcing excited discoveries of this.
Well, I used the month of July, start year 1950, and a one-month interval, for Seattle. The plots are all the same, but in the table shown below each plot there is something strange I haven’t figured out: the rankings are the same for the first 7 years in my series (all I could screen-capture), yet the anomaly numbers differ. I was going to blame the “plot” software as being FUBAR; now I’m not so sure the problem doesn’t lie deeper.
It’s late here and I’m tired; to be continued. Perhaps others can pick their cities and help expand the query.
I saved my screen captures as JPG, wish I could share them with you all.
Anth0ny:
I am astonished that any civil servant would provide so incompetent a reply as this.
Perhaps NCDC needs to employ a British ‘Sir Humphrey’ to teach their spokespeople how to provide an answer that says nothing, in so obscure a manner that few can understand it.
Alternatively, if employing a British Senior Civil Servant is too costly then they could hire Terry Oldberg.
Richard
Seems plausible. What a laugh if that was (another) flub.
Steven Mosher says:
July 1, 2014 at 10:38 pm
———————————-
“However, this can all be avoided by using the method suggested by skeptics:
oh wait, that would be the berkeley method.”
No, Mr Mosher. That won’t wash. Sceptics are not suggesting methods to try to torture a trend out of the surface station data. BEST? Don’t make me laugh. More time in the blender will unscramble the egg? Your “scalpel” can’t work. Too many micro-site problems are gradual, not step changes.
Surface stations were never designed for the purpose you and yours are attempting to use the data for. The data is unfit for purpose. Any attempt to use it for climate issues speaks to motive.
This response merits legal action under the data quality legislation.
The NCDC is generating data that is patently deficient in quality.
If I knew how to post an Excel sheet on here, I could show that BEST summaries do exactly the same thing: they introduce non-existent warming trends that bear no relationship to real data.
The data do not even make sense: they show South West Wales (on the Atlantic/Irish Sea coast) as hotter than London, which everyone in Britain knows is always much warmer because of UHI.
The BBC forecasts tell us so every day.
Kent Clizbe says:
July 1, 2014 at 5:41 pm
My first reaction was “Anthony lives in California. Just what do you think his congressman is going to do?” But this being WUWT I decided to do a little research first. Chico is in California’s 1st Congressional District and the current representative is Doug LaMalfa (R). A quick perusal of his House page suggests that Congressman LaMalfa might actually be interested. One of the stories featured on his page:
I don’t recall hearing a peep about this in the MSM.
Kent Clizbe says:
July 1, 2014 at 5:41 pm
Including the shredders?
(sorry, couldn’t resist).