NCDC responds to identified issues in the USHCN

The NCDC press office sent an official response to PolitiFact, which is reproduced below.

The NCDC has not responded to me personally; I only obtained this by asking around.

I’ve provided it without comment. 

=====================================================
Are the examples in Texas and Kansas prompting a deeper look at how the algorithms change the raw data?
No – our algorithm is working as designed.  NCDC provides estimates for temperature values when:
1) data were originally missing, and
2)  when a shift (error) is detected for a period that is too short to reliably correct.  These estimates are used in applications that require a complete set of data values.
Watts wrote that NCDC and USHCN are looking into this and will issue some sort of statement. Is that accurate?
Although all estimated values are identified in the USHCN dataset, NCDC’s intent was to use a flagging system that distinguishes between the two types of estimates mentioned above. NCDC intends to fix this issue in the near future.
Did the point Heller raised, and the examples provided for Texas and Kansas, suggest that the problems are larger than government scientists expected?
No, refer to question 1.

==================================================

 

July 1, 2014 5:03 pm

NUTS…
Back to You…
Worked once, not so sure it works out as well this time.

Latitude
July 1, 2014 5:06 pm

“No – our algorithm is working as designed”
again, no mention of zombie stations and no mention of stations that are reporting data having their data substituted for made up data….
As long as you let them control the conversation…this is the answer they plan on giving

July 1, 2014 5:08 pm

I didn’t realize the similarity between the NCDC and the Emperor Penguins. Namely, they both leave streaks on the ground where they have tread. And the streaks are made of the same material.

David Davidovics
July 1, 2014 5:10 pm

I don’t expect them to admit any flaws.

July 1, 2014 5:12 pm

“Did the point Heller raised… suggest that the problems are larger than government scientists expected?”
If it’s government work, large problems are always expected.

July 1, 2014 5:17 pm

Would like to see the letter you sent to NCDC regarding this issue, which they failed to respond to…

Mike Fayette
July 1, 2014 5:23 pm

So does this mean that their future data tables will distinguish between:
A: Raw Data as originally reported with no adjustments
B: Estimated Data based on surrounding stations since the data is missing
C: Adjusted Data (using a Blackbox Algorithm) because we don’t like the original data
If they do that, wouldn’t that be helpful to all?

Pamela Gray
July 1, 2014 5:23 pm

Not even close to a satisfactory answer. Way too curt and seems filled with hope that a short stern answer will stop the inquiry.

José Tomás
July 1, 2014 5:24 pm

“No – our algorithm is working as designed”
So, case settled.
There was some debate here about whether this was a case of deliberate tampering or a bug.
One commenter said that "it was a feature until discovered, then it would become a 'bug'."
Not even that.
They deny it being a bug.
So, the other option is…

Evan Jones
Editor
July 1, 2014 5:25 pm

No – our algorithm is working as designed.
I know.

July 1, 2014 5:27 pm

They are sweeping it under the rug..
GUILTY
“algorithm is working as designed”
Who signed off and designed the algorithm? I smell James Hansen’s dirty work!!!
When you are Gov and things do not go as you planned, you make it show as you planned. Warmer than reality.

DEEBEE
July 1, 2014 5:29 pm

So it’s a feature not a bug

Niff
July 1, 2014 5:32 pm

They mean…they changed the design to comply with the code…..and please go away now.

Rud Istvan
July 1, 2014 5:35 pm

The answer is in one sense honest: “Our algorithms are working as designed.”
We designed them to maintain zombie stations. We designed them to substitute estimated for actual data. We designed them to cool the past as a ‘reaction’ to UHI.
But in another sense, this is as bad or worse than IRS losing Lerner’s Emails, not following the law to recover from the backups, not reporting the fact to the National Archivist. It is another, “if you like your temperature, you can keep your temperature…”.
Politicization of rigged data. When finally called to account after the next election, they will first say we misunderstood what they meant, and then say they misspoke. And then maybe we will be able to jail a few, since the coverup is usually worse than the original crime.
What strange post modern times.

Finn
July 1, 2014 5:37 pm

I guess this rules out incompetence.

RAH
July 1, 2014 5:37 pm

Reads like STFU to me.

July 1, 2014 5:41 pm

Congressional inquiries are in order.
You can ask your Congressman to inquire into this issue.
In US Government offices, everything stops when a Congressional inquiry is received.
Tell your Congressman that your communications with the temperature office were curtly rebuffed, without a satisfactory answer.
If everyone reading this talked to their Congressman’s office, at least a few would follow up with the temperature scammers.
And you might actually get some answers.

July 1, 2014 5:48 pm

Jesus just turned water into wine. These clowns are trying to turn BS into data. The threads are unraveling.

Paul in Sweden
July 1, 2014 5:48 pm

“No – our algorithm is working as designed”
The obvious has been stated. Do we know when the hearings and prosecutions will begin?

climatebeagle
July 1, 2014 5:49 pm

My usual answer to “working as designed” is to ask to see the design documents.

Quinx
July 1, 2014 5:49 pm

Decode: We’ll only panic if it looks like temps are dropping. Meanwhile, the money keeps rolling in.

July 1, 2014 5:51 pm

“our algorithm is working as designed.”
Exactly.

Theodore
July 1, 2014 5:53 pm

“No – our algorithm is working as designed.”
Unfortunately not surprising. So it doesn’t matter if their data is as accurate as VA wait times, it is the answer they intended to produce.

John Greenfraud
July 1, 2014 5:55 pm

The answers from NCDC are acceptable to Politifact? Forget that it smacks of a coordinated effort between the two, and that they are willing to accept this dissembling as a definitive answer.
“our algorithms are working as designed”
Most of us believe that IS the problem. Your algorithm produces garbage by infilling with spurious data.

Theodore
July 1, 2014 5:57 pm

Quinx says:
July 1, 2014 at 5:49 pm
“Decode: We’ll only panic if it looks like temps are dropping. Meanwhile, the money keeps rolling in.”
Temps are dropping, they just don’t have to admit that as long as their AlGorethm is working as designed.

Eliza
July 1, 2014 5:58 pm

As I mentioned before, they will do NOTHING. That is why the time for talking is past. They have an agenda: AGW. As an aside, this is what we should be worrying about; note the definite, almost circular shape Antarctica is beginning to form with the extraordinary ice expansion (ABOVE ANOMALY):
http://arctic.atmos.uiuc.edu/cryosphere/NEWIMAGES/antarctic.seaice.color.000.png

July 1, 2014 5:59 pm

Was their algorithm deliberately designed to adjust the temperatures according to the level of CO2 in the atmosphere then (this to me is the final straw that should break their credibility entirely, showing them as deliberate fraudsters)?:
“US Temperatures Have Been Falsely Adjusted According to the Level of Carbon Dioxide in the Atmosphere”

Scute
July 1, 2014 6:00 pm

Does that mean:
“No, our algorithm is working as designed.”
or
“No, our algorithm is working as [re] designed [yesterday in a hurry]”

Paul in Sweden
July 1, 2014 6:01 pm

Crap on a Cracker! Hot dog vendors on a street, clowns, magicians & kids’ entertainers are licensed, regulated, held to a higher standard and much more respected than what we are seeing bilging from Climate ‘science’ these days. It is astonishing.

July 1, 2014 6:06 pm

As a practical matter they have no choice but to defend their process. They will surely lose their jobs if they allow a change that damages the political narrative because that data infects many of the analyses the administration is using to push their agenda.

Editor
July 1, 2014 6:06 pm

I am altering the data. Pray I don’t alter it any further.

D.I.
July 1, 2014 6:06 pm

“No – our algorithm is working as designed”
What The F**k? Designing Temperature? Who do they think they are, GOD?

exNOAAman
July 1, 2014 6:07 pm

The IRS gal pleads the fifth, because her answers may incriminate her.
“Our algorithm is working as designed”
Rather incriminating.
You should’ve plead the fifth, son.

resistance
July 1, 2014 6:09 pm

“No – our algorithm is working as designed”
Looks like an outright, on-the-record admission of fraud to me…

Lawrence Todd
July 1, 2014 6:12 pm

NCDC National Cruddy Data Commission

July 1, 2014 6:12 pm

Apparently they only supply missing data when required by other programs. Might I suggest modifying those other programs, rather than inventing data? You can never increase accuracy by guessing, nor can knowledge be increased simply by multiplying your current information.

July 1, 2014 6:13 pm

Trust but verify.
I’m done trusting.

DesertYote
July 1, 2014 6:14 pm

“No – our algorithm is working as designed.”
I am sure it is. I just wonder what the algorithm’s design criteria were!

mjc
July 1, 2014 6:15 pm

And this ship is the best ever built…it’s totally unsinkable!

Doug Badgero
July 1, 2014 6:17 pm

“No – our algorithm is working as designed. NCDC provides estimates for temperature values when:
1) data were originally missing, and
2) when a shift (error) is detected for a period that is too short to reliably correct. These estimates are used in applications that require a complete set of data values.”
I believe the ‘and’ should be an ‘or’.

Eliza
July 1, 2014 6:19 pm

It needs to be brought to the attention of Mainstream media. This is probably the single most important point about all this. Most have not even heard of this. It is quite a story.

Gary
July 1, 2014 6:21 pm

The Titanic worked as designed.
The Hindenburg worked as designed.
The Treaty of Versailles worked as designed.
The attack on Pearl Harbor worked as designed.
Federal funding of climate research works as designed.
Peer review works as designed.
The IPCC works as designed.
Climate models work as designed.
It’s what you didn’t design that you have to watch out for.

July 1, 2014 6:22 pm

“No – our algorithm is working as designed. NCDC provides estimates for temperature values when:
1) data were originally missing, and
2) when a shift (error) is detected for a period that is too short to reliably correct. These estimates are used in applications that require a complete set of data values.”
Ah, notice they don’t say “…in applications that require an accurate complete set of data values.”
Very tricksey, these NCDC hobbits.

Latitude
July 1, 2014 6:22 pm

It’s just their press office…….The NCDC press office

DC Cowboy
Editor
July 1, 2014 6:27 pm

I hear the sound of a broom and a lifted rug somewhere. Beyond belief, even for a Bureaucracy trying to protect itself. It sounds exactly like what I would expect the ‘higher ups’ at an Agency to respond when their ‘technical experts’ show them a MASSIVE problem that would prove embarrassing to the agency. Exactly. I suspect that the ‘experts’, in their heart of hearts, know there is something rotten in Denmark, but, they have kids to feed.
I don’t see how they can make the claim that ‘the algorithm is working as designed’ when there are admitted problems of the scale that have been shown to exist in Texas and Kansas data. Are those States accorded unique treatment in the ‘algorithm design’ such that the issues raised are unique to those two States? What kind of an ‘algorithm design’ does that?
They are hoping that we just all go away and the American public accepts the constant drum of ‘denier, denier, denier, flat earther’ by way of explanation. Lots of ‘LA, LA, LA, I CAN’T HEAR YOU!!’ going on. I for one am growing tired of the expectation that personal insults suffice for intellectual argument.
I think we need to stop being passive and pursue this in whatever manner we can to expose the truth. If it is that the ‘algorithm’ is working ‘as designed’, then so be it, but, somehow, I suspect that this is not the case. We’re going to have to expose it for them.

Nick Stokes
July 1, 2014 6:30 pm

Mike Fayette says: July 1, 2014 at 5:23 pm
“So does this mean that their future data tables will distinguish between:
A: Raw Data as originally reported with no adjustments
B: Estimated Data based on surrounding stations since the data is missing
C: Adjusted Data (using a Blackbox Algorithm) because we don’t like the original data
If they do that, wouldn’t that be helpful to all?”

That’s what USHCN does now. They provide a raw data file, and an adjusted file (F52), and there they mark (with an E) estimated data. They also provide a file with TOBS adjustment only.
REPLY: it is important to note that the issue here has to do with errors in the X and E flags in reporting on data. More on that here:
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/readme.txt
The problem is that they have GOOD DATA IN HAND in the raw data file, but instead of using it in the F52 (final) data file, they are throwing in way too many estimates. Along with “estimated” data for a bunch of closed/zombie weather stations that shouldn’t be reporting at all, and have no data in the raw data file.
Nick and others want to argue like the town crier “All is well!”, but in reality, the USHCN is not only a train wreck from a raw data file standpoint due to all the inhomogeneity, it’s a bigger train wreck after NCDC inserts “estimated” and “zombie” data that should not be there, except that in their world “all is well”.
I had a couple of people call me today who might very well be able to get an independent investigation done; I said let’s wait and then we’ll see how NCDC handles the rest of this. Hopefully they won’t say “All is well!”.
-Anthony
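For anyone who wants to check the flag counts themselves, here is a minimal sketch, not NCDC code, for tallying estimate-flagged monthly values in a USHCN v2.5 file. The fixed-width layout assumed below (11-character station ID, 4-character year, 4-character element, then twelve groups of a 5-character value plus three 1-character flags), the "-9999" missing sentinel, and the meaning of the "E" flag should all be checked against the readme linked above before trusting any numbers; the filename is hypothetical.

```python
# Minimal sketch (not NCDC code): count monthly values carrying an estimate flag
# in a USHCN v2.5 monthly file.  The column layout and flag meanings assumed here
# must be verified against the readme linked above.
def count_estimated(path):
    total = estimated = 0
    with open(path) as f:
        for line in f:
            body = line.rstrip("\n")[19:]          # assumed: data starts after ID/year/element
            for m in range(12):
                field = body[m * 8: (m + 1) * 8]   # assumed: 5-char value + 3 flag chars per month
                value, dmflag = field[:5].strip(), field[5:6]
                if value in ("", "-9999"):         # assumed missing-value sentinel
                    continue
                total += 1
                if dmflag == "E":                  # assumed flag for estimated/infilled values
                    estimated += 1
    return total, estimated

# Hypothetical filename for the adjusted ("final") file:
tot, est = count_estimated("ushcn.tavg.latest.FLs.52i.avg")
print(f"{est} of {tot} monthly values are flagged as estimates ({100.0 * est / max(tot, 1):.1f}%)")
```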

DC Cowboy
Editor
July 1, 2014 6:30 pm

Eliza says:
July 1, 2014 at 6:19 pm
It needs to be brought to the attention of Mainstream media. This is probably the single most important point about all this. Most have not even heard of this. It is quite a story.
======================
IF you think this, you haven’t been paying attention. The ‘mainstream’ media loves nothing better than a ‘we’re all gonna die’ story, which is what they get from the current Admin and the IPCC. That sells newspapers.

July 1, 2014 6:34 pm

Doug Badgero says:
July 1, 2014 at 6:17 pm
“No – our algorithm is working as designed. NCDC provides estimates for temperature values when:
1) data were originally missing, and
2) when a shift (error) is detected for a period that is too short to reliably correct. These estimates are used in applications that require a complete set of data values.”
I believe the ‘and’ should be an ‘or’.

Hey, it is their algorithm, and maybe it was designed to not recognize the difference between “and” and “or”.
But then, notice the “too short to reliably correct” phrase. Since it can’t be reliably corrected, they estimate and/or randomly generate data that is probably, mostly, not reliably correct.
/grin

Rob Dawg
July 1, 2014 6:40 pm

One unfortunate consequence of filling in missing data is that it masks outlying readings. Instead of seeing a spurious result surrounded by dissimilar data, you now have averaging creating less of a clear difference.
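A toy illustration of that masking effect, using invented numbers rather than any USHCN data:

```python
# Toy illustration with invented numbers (not USHCN data): infilling a suspect
# reading with the average of its neighbours makes the outlier invisible.
neighbours = [14.8, 15.1, 15.0, 14.9, 15.2]   # nearby stations, deg C
suspect = 21.3                                # spurious reading at the target station

infilled = sum(neighbours) / len(neighbours)  # what an infilling step would insert
print(f"reported: {suspect:.1f} C   infilled: {infilled:.1f} C")
# The 6 C excursion that should have prompted scrutiny is replaced by a
# perfectly plausible-looking 15.0 C, so the bad reading never stands out.
```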

John M
July 1, 2014 6:41 pm

This sounds Mannian.
The algorithm is robust to the data input.

Jimmy Finley
July 1, 2014 6:42 pm

Oh yes, they all seem like really fine guys. I’m sure they will take action to correct any issues that our research points out.
“…No – our algorithm is working as designed…..”
C’mon people, these slugs, true believers or warming evangelists – whatever they are – are not going to do the “right thing”. Go ahead: launch FOIA or congressional hearings or whatever. At the end of the day, their computers will have burned down and been junked. They are laughing at you. And you keep on talking to them nicely. They need to be fired with prejudice; lose their rich pensions; go to prison. But I doubt we will live long enough to see it happen.

Alec aka Daffy Duck
July 1, 2014 6:48 pm

Hmm, you seem to have hit a nerve… You should poke them again, harder!

July 1, 2014 6:51 pm

Did the point Heller raised, and the examples provided for Texas and Kansas, suggest that the problems are larger than government scientists expected?
No, refer to question 1.

Tony Heller’s answer:
https://stevengoddard.wordpress.com/2014/07/02/government-scientists-expected-the-huge-problems-we-found

Geoff
July 1, 2014 6:55 pm

In reality they have admitted that their output is not fit for purpose. Amending the data to support applications that require a full set should not be part of their output. That is something those applications should be doing themselves.
It’s start again time.

Wyo Skeptic
July 1, 2014 6:57 pm

The Climate at a glance portion of the NCDC website is giving nothing but wonky data right now. Choose a site and it gives you data where the min temp, avg temp and max temp are the same. Change settings to go to a statewide time series and what it does is give you made up data where the average is the same amount above min as max is above avg.
http://www.ncdc.noaa.gov/cag/
Roy Spencer noticed it first in his blog about Las Vegas. I checked it out of curiosity and it is worse than what he seemed to think. It is totally worthless right now.

mark
July 1, 2014 6:59 pm

duh. It’s an al-gore-rhythm.
working as designed…

MattN
July 1, 2014 7:09 pm

Wait. The algorithm is working AS DESIGNED?!? It’s SUPPOSED to make up data?!?
It really is “man made global warming” huh??

Paul in Sweden
July 1, 2014 7:11 pm

….This is Climate ‘science’ and nothing matters. Conclusions have been written and will have already been widely distributed well before evidence & hearings are held.
We walked in, sat down, Obie came in with the twenty seven eight-by-ten
Colour glossy pictures with circles and arrows and a paragraph on the back
Of each one, sat down. Man came in said, “All rise.” We all stood up,
And Obie stood up with the twenty seven eight-by-ten colour glossy
Pictures, and the judge walked in sat down with a seeing eye dog, and he
Sat down, we sat down. Obie looked at the seeing eye dog, and then at the
Twenty seven eight-by-ten colour glossy pictures with circles and arrows
And a paragraph on the back of each one, and looked at the seeing eye dog.
And then at twenty seven eight-by-ten colour glossy pictures with circles
And arrows and a paragraph on the back of each one and began to cry,
’cause Obie came to the realization that it was a typical case of American
Blind justice, and there wasn’t nothing he could do about it, and the
Judge wasn’t going to look at the twenty seven eight-by-ten colour glossy
Pictures with the circles and arrows and a paragraph on the back of each
One explaining what each one was to be used as evidence against us.

http://youtu.be/6IWSMhUINPk

sinewave
July 1, 2014 7:15 pm

“No – our algorithm is working as designed” I want to use that phrase every time someone asks me about something I messed up on. Is it copyrighted? 🙂

José Tomás
July 1, 2014 7:16 pm

dccowboy says:
July 1, 2014 at 6:30 pm
Eliza says:
July 1, 2014 at 6:19 pm
It needs to be brought to the attention of Mainstream media. This is probably the single most important point about all this. Most have not even heard of this. It is quite a story.
======================
IF you think this, you haven’t been paying attention. The ‘mainstream’ media loves nothing better than a ‘we’re all gonna die’ story, which is what they get from the current Admin and the IPCC. That sells newspapers.
——————————————————————-
Maybe not.
Yes, the MSM loves a ‘we’re all gonna die’ story, but I am not sure if this particular story is able to sell newspapers anymore. The public has shown they are fed up with it and not paying attention anymore.
OTOH, a good “Government Scandal” story has the odds of selling many more newspapers.
Unless selling newspapers is not their primary concern…

dp
July 1, 2014 7:17 pm

Quotes from the movie “The Man Who Would Be King”:

Peachy Carnehan: What’s he saying, Billy?
Billy Fish: Danny’s bleeding. They know! He says not god, not devil, but man!
Peachy Carnehan: [approaches Danny] They’ve twigged it, Danny. You’ve had it! The jig’s up!
Daniel Dravot: [grabs arrow and raises hand in proclamation] I, Sikander –
Peachy Carnehan: [cuts off Danny] For God’s sake!
Peachy Carnehan: [grabs Danny and leads him down the temple stairs] We’ve got to brass it out, Danny. Danny, brass it out!

h/t: http://www.imdb.com/title/tt0073341/quotes
They’re brassing it out. Next will be the Hillary Defense: “What difference does it make?”

Tommy E
July 1, 2014 7:19 pm

@JohnWho says: July 1, 2014 at 6:22 pm …
Very tricksey, these NCDC hobbits.
Take it back! No Hobbit ever did anything even remotely that foul or evil. Even Gollum had enough of a conscience left at the end to try to get others to do his dirty work, and even then he argued bitterly with himself.

José Tomás
July 1, 2014 7:20 pm

… And I still think that this is a story that James Taranto from the WSJ will find very very interesting.

mjc
July 1, 2014 7:21 pm

It was issued by the US government…so it should be public domain.

Polly
July 1, 2014 7:27 pm

If you can invent temperature data for locations where a station used to be, then it’s equally valid to place a fake station anywhere else you might want one. Why maintain real stations at all? For that matter, why maintain tide gauges and ocean buoys at all? Just think of the cost savings!

mjc
July 1, 2014 7:34 pm

Polly…cost savings are NOT even in the lexicon.

Shub Niggurath
July 1, 2014 7:47 pm

Algorithm is working as designed?
How *else* would it work?

Abbott
July 1, 2014 7:47 pm

I read the response in a different vein: “We were under orders to do it this way. We didn’t like it, but we were over-ruled. Please keep the pressure on so we can get out from under the palm of this **** and get back to reliable, scientific record keeping and analysis.”

D. B. Cooper
July 1, 2014 7:48 pm

NOAA . . . The Enron of Climate Data.
Some NOAA people need to be ass kicked, some NOAA people need to be fired, some NOAA people need to be in jail. Then NOAA needs to be terminated.
Utterly useless organization squandering precious tax dollars on pseudo scientific crap.

TomR,Worc,MA,USA
July 1, 2014 7:49 pm

philjourdan says:
July 1, 2014 at 5:48 pm
============================

Tom J
July 1, 2014 7:51 pm

The following is copied from Wikipedia as a definition for an algorithm:
‘An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input … the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing “output” and terminating … The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.’
Hmm; “not necessarily deterministic.” So, could that be a description of the NCDC’s algorithm, “working as designed?” There’s really no necessity for a physical reality in this game, is there?

ossqss
July 1, 2014 8:10 pm

So,,,,, how long has this algorithm/process/practice been taking place?
Now, I am interested.

lee
July 1, 2014 8:23 pm

I’m trying to find out whether the algorithm is used outside the USA: BOM, Met UK, etc.

phodges
July 1, 2014 8:49 pm

“Finn on July 1, 2014 at 5:37 pm
“No – our algorithm is working as designed”
I guess this rules out incompetence.”
Exactly…

Konrad
July 1, 2014 8:53 pm

Nick Stokes says:
July 1, 2014 at 6:30 pm
———————————
“That’s what USHCN does now. They provide a raw data file, and an adjusted file (F52), and there they mark (with an E) estimated data. They also provide a file with TOBS adjustment only.”
“does now”? Yes, we have noted the panicked scrabbling…
Oh, and about that TOB thing. Care to clarify whether actual individual station metadata is being used for individual station TOB adjustments?
It wouldn’t be that Tom Karl’s pet rat TOBy is still nibbling on the data, would it? Using a program that makes TOB adjustments without individual station metadata would be a bad thing. A very bad thing…

temp
July 1, 2014 9:17 pm

Konrad says:
July 1, 2014 at 8:53 pm
“Using a program that makes TOB adjustments without individual station metadata would be a bad thing. A very bad thing…”
Last I heard that’s exactly what they are/were doing, including applying TOB to new hourly reporting stations.

Cynical Scientst
July 1, 2014 9:24 pm

To me this response suggests that the issue is so serious that they have flagged it WONTFIX.
Not acceptable. Keep pushing.

gary turner
July 1, 2014 9:39 pm

Programmers have a specific name for this: it’s a WAD bug, works as designed. In other words, someone screwed the pooch.

JN
July 1, 2014 9:53 pm

“our algorithm is working as designed” = “There was some bone-headed decisions… Not even a smidgeon of corruption.”
So INS, IRS, VA, EPA, NSA, NLRB, Federal Reserve, policies on Iraq, Syria, Russia, Ukraine, etc. — everything is “working as designed.”

Konrad
July 1, 2014 9:54 pm

temp says:
July 1, 2014 at 9:17 pm
———————————
“Last I heard that’s exactly what they are/were doing, including applying TOB to new hourly reporting stations.”
Yes, and apparently you can even use it on “zombie” stations as well.
NCDC says “our algorithm is working as designed”. Given that their code is working to produce an artificial warming trend, this does raise some interesting questions. Questions like “Whose design exactly?”, and “Designed for what purpose?” Or “About the vicious and sustained public floggings, I trust there are no objections?”

July 1, 2014 10:38 pm

“Col Mosby says:
July 1, 2014 at 6:12 pm
Apparently they only supply missing data when required by other programs. Might I suggest modifying those other programs, rather than inventing data? You can never increase accuracy by guessing, nor can knowledge be increased simply by multiplying your current information.
##################################################
It is actually pretty simple.
1. Some methods for calculating global averages REQUIRE long series
A) GISS
B) CRU
2. There are many methods for creating long series from multiple records ( see CET)
3. One method is to make zombies.
the algorithm does what it was designed to do.
” These estimates are used in applications that require a complete set of data values.”
However, this can all be avoided by using the method suggested by skeptics:
oh wait, that would be the berkeley method.

July 1, 2014 10:40 pm

Working as designed indeed. It’s been a nice little earner…

Rob
July 1, 2014 10:55 pm

Synthetic data?

NikFromNYC
July 1, 2014 11:01 pm

Once again, each week, we seem to arrive at the same lamentable and PR disaster prone stage in which technically equipped skeptics with computer programming skills fail to engage with highly energetic Steve Goddard as he doubles down on possibly real but often fantastic claims. For two years Goddard made a simple mistake of not accounting for different numbers of stations in the final vs. raw data, creating an adjustments hockey stick with a massive current year spike merely due to late station reporting. This was allowed to happen because he is not careful at all himself and doesn’t ever intend to be. But the way good science works is that different temperaments engage with each other and quickly double check each other’s work. Since I am no longer set up with software outside of 3D design stuff, my attempt to entice rigor out of Steve backfired in a fit of cheerleading squad attacks and a ban, after I was a regular there for years. Now, once again a new claim of trend alteration is being made as news cycles come and go without the needed skeptical side information as to whether Steve’s strong final effect claims are validated or not by other skeptics! Steve claims that the zombie station infilling itself contains a bizarre bias in which the infilled stations show much greater trends. So what am I to do week after week as a news site activist when these trained Gorebot alarmists keep countering my work with Goddard bashing? I can’t do a damn thing since skeptics with the needed set up don’t really let on about the only final result that matters of whether the trend is really mistakenly too high or not. Most here just *assume* a scandal but there’s only a scandal *against* skeptics if there’s no downward revision asserted by more than a mere single blogger!

Alan Grey
July 1, 2014 11:14 pm

Wow… Did they just admit culpability in intentionally misleading people?

Jeff D
July 1, 2014 11:17 pm

Working as intended? So basically it was designed to be screwed up. Man I really should have taken the other pill… this rabbit hole just keeps getting deeper and deeper.

bruce
July 1, 2014 11:22 pm

I have read many expert reports, and cross-examined many experts in many disciplines, but the BS language and appearance doesn’t change. There is so much wrong with this response I don’t even know where to begin. It’s incriminating as well as illuminating in my view. I might expand in another post; let’s see how this unfolds for now.
In the meantime, what does the NOAA response have to do with why they keep changing the July 1936 temperature? Did their ‘algorithm’ find a ‘shift’ in 1936 and thus a reason to ‘estimate’ some 1936 (non)station temperatures after all these years? Do they keep finding new ‘shifts’ back in 1936 each month? Or did they find a ‘shift’ in 2014 that their algorithm decides it affects temperatures in 1936?
It all seems like a bunch of shift.
…or very shifty at best….
But I digress. One basic question, besides ‘was your data right then or is it right now ..or was it right the 14th time you changed it?” is this: At what point can you, NOAA, tell the world’s scientists that you are done ‘calculating’ the 1936 temperature?
Let’s get the facts before we continue declaring world war on the very gas (CO2) that keeps both the earth…and mankind…alive. That we do know is a fact. JMO.

freeHat
July 1, 2014 11:37 pm

Think a big problem is that people expect a product to work when it’s marketed so heavily, like a Mercedes or an iPad. Unfortunately the consumers in this case don’t seem too eager to complain, leaving people on the sidewalk scratching their heads as the tailpipe hits the asphalt.

rogerknights
July 1, 2014 11:38 pm

“Our algorithms are working as designed.”

GM could say the same of its ignition switches.

Steve in Seattle
July 2, 2014 12:00 am

It appears that for Seattle WA, the HI, LOW and avg. temp requests plot the same data, as suggested above. I am still investigating, stay tuned.

Nick Stokes
July 2, 2014 12:10 am

Konrad says: July 1, 2014 at 8:53 pm
‘“does now”? Yes, we have noted the panicked scrabbling…’

No, they have done it for many years. Here is a paper from skeptics Balling and Idso in 2002 complaining in GRL about, wouldn’t you know it, USHCN adjustments creating a trend. And their data? Published USHCN raw and adjusted data. In fact, they say (in 2002):
“Considered one of the best of its type, the United States Historical Climatology Network (USHCN) dataset consists of temperature records from 1,221 stations spanning most of the 20th century [Karl et al., 1990]. An important feature of the USHCN is an extensive metadata file aiding in adjustments to the temperature data associated with station moves, instrument changes, microclimatic changes near the station, urbanization, and/or time of observation biases. As a result, there are many versions of the USHCN ranging from the raw temperature time series to more widely-used datasets that have been extensively adjusted for multiple potential contaminants to the record.”
and
“All scientists agree that the raw records are in need of some adjustment…”
Their conclusion:
“It is noteworthy that while the various time series are highly correlated, the adjustments to the RAW record result in a significant warming signal in the record that approximates the widely-publicized 0.50°C increase in global temperatures over the past century.”
People are still announcing excited discoveries of this.

Steve in Seattle
July 2, 2014 12:19 am

Well, I used the month of July, start year 1950 and interval of one month, for Seattle. The plots are all the same, but in the table shown below each plot there appears to be something strange that I haven’t figured out: the rankings are the same for the first 7 years in my series (all I could screen capture), yet there are different anomaly numbers. I was going to blame the “plot” software as being FUBAR; however, now I’m not so sure; perhaps the problem(s) lie deeper.
It’s late here, I’m tired; to be continued. Perhaps others can pick their cities and help expand the query.

Steve in Seattle
July 2, 2014 12:20 am

I saved my screen captures as JPG, wish I could share them with you all.

richardscourtney
July 2, 2014 12:35 am

Anth0ny:
I am astonished that any civil servant would provide so incompetent a reply as

our algorithm is working as designed

Perhaps NCDC needs to employ a British ‘Sir Humphrey’ to teach their spokespeople how to provide an answer which says nothing in so obscure a manner that few can understand it.
Alternatively, if employing a British Senior Civil Servant is too costly then they could hire Terry Oldberg.
Richard

rogerknights
July 2, 2014 12:47 am

Doug Badgero says:
July 1, 2014 at 6:17 pm
“No – our algorithm is working as designed. NCDC provides estimates for temperature values when:
1) data were originally missing, and
2) when a shift (error) is detected for a period that is too short to reliably correct. These estimates are used in applications that require a complete set of data values.”
I believe the ‘and’ should be an ‘or’.

Seems plausible. What a laugh if that was (another) flub.

Konrad
July 2, 2014 1:02 am

Steven Mosher says:
July 1, 2014 at 10:38 pm
———————————-
“However, this can all be avoided by using the method suggested by skeptics:
oh wait, that would be the berkeley method.”
No, Mr Mosher. That won’t wash. Sceptics are not suggesting methods to try and torture a trend out of the surface station data. BEST? Don’t make me laugh. More time in the blender will unscramble the egg? Your “scalpel” can’t work. Too many micro-site problems are gradual, not step changes.
Surface stations were never designed for the purpose you and yours are attempting to use the data for. The data is unfit for purpose. Any attempt to use it for climate issues speaks to motive.

Frederick Colbourne
July 2, 2014 1:56 am

This response merits legal action under the data quality legislation.
The NCDC is generating data that is patently deficient in quality.

A C Osborn
July 2, 2014 3:53 am

If I knew how to post an Excel sheet on here I could show that BEST Summaries do exactly the same thing; they introduce non-existent warming trends that bear no relationship to real data.
The data does not even make sense, as they show South West Wales (on the Atlantic/Irish Sea coast) hotter than London, which everyone in Britain knows is always much warmer because of UHI.
The BBC forecasts tell us so every day.

July 2, 2014 4:39 am

Kent Clizbe says:
July 1, 2014 at 5:41 pm

Congressional inquiries are in order.

My first reaction was “Anthony lives in California. Just what do you think his congressman is going to do?” But this being WUWT I decided to do a little research first. Chico is in California’s 1st Congressional District and the current representative is Doug LaMalfa (R). A quick perusal of his House page suggests that Congressman LaMalfa might actually be interested. One of the stories featured on his page:

Rep. Doug LaMalfa (R-CA) delivers opening remarks at a Natural Resources Committee hearing on the Environmental Protection Agency’s “Waters of the United States” proposal, in which the EPA claims jurisdiction over land use across virtually all of Northern California without any legal basis to do so.

I don’t recall hearing a peep about this in the MSP.

July 2, 2014 4:50 am

Kent Clizbe says:
July 1, 2014 at 5:41 pm

In US Government offices, everything stops when a Congressional inquiry is received.

Including the shredders?
(sorry, couldn’t resist).

johann wundersamer
July 2, 2014 4:54 am

Wow! Never explain, never excuse.
Business as usual.
Who goes there?

tadchem
July 2, 2014 5:04 am

What we are seeing here is entropy in data. My undergraduate P. Chem professor explained entropy this way. “If you add a milliliter of wine to a liter of sewage, you get 1001 ml of sewage. If you add a milliliter of sewage to a liter of wine, you get 1001 ml of sewage. That is entropy.”
The same concept applies to mixing real data values with estimates or interpolations. If it is not ALL 100% real data, then it is not real data.
For my money, the best computer representations of climate and weather are produced by Bethesda Softworks for the Elder Scrolls series.

DHF
July 2, 2014 5:18 am

The response from NCDC does not reflect any insight into the challenge at hand. My hypothesis is that the average value provided by NCDC is not, and cannot possibly be, suitable for its intended purpose. I would like to elaborate my view on this:
From Wikipedia:
“Metrology is defined by the International Bureau of Weights and Measures (BIPM) as “the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology.”
“A core concept in metrology is metrological traceability, defined by the Joint Committee for Guides in Metrology as “property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty”. Metrological traceability permits comparison of measurements, whether the result is compared to the previous result in the same laboratory, a measurement result a year ago …..”
From Guide to the expression of uncertainty in measurement:
“This Guide is primarily concerned with the expression of uncertainty in the measurement of a well-defined physical quantity — the measurand — that can be characterized by an essentially unique value. If the phenomenon of interest can be represented only as a distribution of values or is dependent on one or more parameters, such as time, then the measurands required for its description are the set of quantities describing that distribution or that dependence.”
How the temperature measurements could have been used in this perspective:
Each of the temperature measurements can be well defined (e.g. in terms of location, time of day and equipment). Each of the measurements can be traceable to international standards through a documented unbroken chain of calibrations.
If each of the measurands is well defined, the average over a set of measurements, each performed at defined times at defined locations, will also be well defined. Further, the average can be calculated again and again and will provide the same result every time. It is also possible to use statistics to calculate standard deviations, and it is possible to calculate the uncertainty of the measurement from the standard deviations and the uncertainty estimates for the individual measurements.
Hence the average value produced in this way can have metrological traceability with a stated level of uncertainty.
How about the average temperature values from NCDC in this respect?
The temperature value produced by NCDC (National Climatic Data Center) seems to fail to meet the requirements to metrological traceability in several ways. Most seriously:
The measurement is not repeatable. It produces a different result for a given period back in time every time the measurement is performed (the model is run). Furthermore, the lack of repeatability seems to be very significant in relation to the intended use of the measurement.
Another fundamental flaw is that the measurement result is not traceable, it cannot be calibrated. The measurement result is not traceable primarily because the measurand “the average temperature” is not well defined, and consequently you cannot find a traceable reference to calibrate it towards. The model, algorithms and software integrated in the measurement system complicates the definition of the measurand to such a degree that it cannot possibly be calibrated in a way that will provide traceability towards a defined reference.
Further, the uncertainty of the measurement result, estimated in accordance with an acceptable standard, is not stated for the measurement result (see: Guide to the Expression of Uncertainty in Measurement).
Hence, there is no doubt that the model that is intended to produce an average temperature of some kind keeps a lot of people enthusiastically engaged. However, the model does not meet the fundamental requirements for it to be an acceptable measurement. That is, if the intended use is to identify a trend or change on the order of magnitude of 0.01 K/year.
All standards and principles within measurement have been made so that people can trust and agree on measurement results. The temperature output from the model fails to meet these standards and fails to produce a useful measurement result, primarily because the model, algorithms and or software implementation does not provide a metrological traceable and repeatable result with stated uncertainty.
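A minimal sketch of the kind of fixed, repeatable average with a stated uncertainty that this argument asks for. The readings and the 0.2 K calibration uncertainty below are invented for illustration; this is not NCDC's, BEST's, or any agency's actual procedure.

```python
# Minimal sketch of a fixed, repeatable average with a stated uncertainty.
# Readings and calibration uncertainty are invented for illustration only.
import math

readings = [14.2, 15.1, 13.8, 14.9, 15.4, 14.0]  # deg C, each traceable to a reference
u_cal = 0.2                                      # assumed common calibration uncertainty (1 sigma)

n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))  # sample standard deviation
u_stat = s / math.sqrt(n)                        # standard uncertainty of the mean
u_total = math.sqrt(u_stat ** 2 + u_cal ** 2)    # calibration error treated as a common bias,
                                                 # so it does not shrink with n
print(f"average = {mean:.2f} deg C, standard uncertainty = {u_total:.2f} K")
# Re-running this on the same declared set of readings always returns the same
# number, and the uncertainty is reported alongside it.
```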

Chuck Nolan
July 2, 2014 5:18 am

JN says:
July 1, 2014 at 9:53 pm
“our algorithm is working as designed” = “There was some bone-headed decisions… Not even a smidgeon of corruption.”
So INS, IRS, VA, EPA, NSA, NLRB, Federal Reserve, policies on Iraq, Syria, Russia, Ukraine, etc. — everything is “working as designed.”
—————————————————–
You left out health care (HHS) and Homeland Security.
This is all kind of scary.
cn

DHF
July 2, 2014 5:37 am

Steven Mosher says:
July 1, 2014 at 10:38 pm
“However, this can all be avoided by using the method suggested by skeptics:
oh wait, that would be the berkeley method.”
Does the Berkeley method provide a traceable, repeatable measurement result with stated uncertainty for a well-defined measurand?

John Peter
July 2, 2014 5:42 am

“Their conclusion:
“It is noteworthy that while the various time series are highly correlated, the adjustments to the RAW record result in a significant warming signal in the record that approximates the widely-publicized 0.50°C increase in global temperatures over the past century.”
At least now Nick Stokes knows this, so why is he so anxious to promote CAGW rather than taking on the Lindzen attitude that a doubling of CO2 will lead to a +/- 1C increase in temperatures? Considering that CO2 has a logarithmic effect, an addition of 120 ppm has caused a 0.5C increase, so a doubling giving 1C is in the “ballpark”. That is discounting any natural variation. What is all the fuss about, other than making money from CAGW by some people?

chris moffatt
July 2, 2014 6:39 am

Isn’t it just a great pity that so many skeptics pooh-poohed Goddard’s claims without reading them and understanding them properly. The toothpaste is now out of the tube and no amount of post-facto blogging is going to put it back. Politifact has denounced the allegations as “pants on fire” and nothing will change that. When will people learn that concerning AGW claims, if they are skeptics, they only get one chance to get it right and maybe not even one?

Non Nomen
July 2, 2014 6:45 am

Latitude says:
July 1, 2014 at 6:22 pm
It’s just their press office…….The NCDC press office
____________________________________________
And that press office works “as designed” with the purpose of bumfuzzling the rest of the world.

Jaye Bass
July 2, 2014 6:48 am

Reads like FYTW.

DHF
July 2, 2014 6:53 am

Non Nomen says:
July 2, 2014 at 6:45 am
Latitude says:
July 1, 2014 at 6:22 pm
“It’s just their press office…….The NCDC press office”
I cannot imagine that a press office will issue anything that has not been internally approved by a director.

Steve Oregon
July 2, 2014 7:08 am

Their response may not work as designed.

bruce
July 2, 2014 7:40 am

“4.2 OBTAINING ARCHIVED VERSIONS
At this time, the National Climatic Data Center does not maintain an
online ftp archive of daily processed USHCN versions. The latest
version always overwrites the previous version and thus represents the latest data, quality control, etc. ”
Does this mean that the data, like emails, is no longer available at all, or just they have it but won’t make it available?
Why would a gov. entity charged with providing accurate data…change the data daily and not keep public records as to the changes?
How can a government agency charged with providing accurate data….keep changing the data and decide the latest is now definitely accurate?
How can a gov. agency that produces the base data forming the foundation for their own belief that the world is in for a climate catastrophe…credibly change the data daily and not make the changes publicly available on an issue so important to mankind?
The fact they create data…and they do create data, whether they admit it or not…would alone be a valid reason to throw out an expert opinion in court based on it. The fact they keep changing the data constantly for past years would make it less likely to survive scrutiny in court. But the admitted fact their data is regularly wrong and changed by nearly a degree, while claiming they have proven the world is warming by a degree, is almost laughable. Would love to cross-examine NOAA on this. DNA science would never be accepted in court as reliable on data that constantly changes and is constantly manipulated like this.
If “all scientists agree the data needs adjustment”…that might work for a one-time adjustment…maybe, if clearly explained, but I’m not even there yet…
But to admit to changing historical data on a daily basis, then toss yesterday’s data as if it is meaningless, and claim that “today we got it right, yesterday’s was wrong”…very troubling.
B

dorsai123
July 2, 2014 7:50 am

Anthony,
You made a big deal of saying that these folks are all professionals and that they would quickly review and fix these sorts of obvious errors …
Do you still stand by that assessment ?
REPLY: That is an excellent question. From the delay so far I can surmise that the problem is real and probably is more of a problem to fix than they may have originally thought. The problem itself might be simple but the effect of the problem might be very very inconvenient. As a result I think they are trying to figure out how to diplomatically deal with it without looking like boneheads.
If the problem wasn’t real, we’d have gotten some sort of govspeak issued almost immediately that basically says “Watts and Goddard are idiots”. But instead, we are getting delays. If they try to stretch it into the holiday weekend, we’ll know they have been circling the wagons, and then the questions of credibility will be valid to ask. – Anthony

bit chilly
July 2, 2014 9:26 am

Amongst other points, cooling the past makes no sense. The past WAS cooler already. Surely with the onset of UHI the later measurements should have been cooled, not the earlier ones?

catweazle666
July 2, 2014 9:36 am

No – our algorithm is working as designed.
Ah, it’s “Hide the Decline II” time, is it?

Chuckarama
July 2, 2014 10:10 am

Their statement is factual. It is working as designed. The question is, what is the value of that design? It certainly has _some_ value, as statistical tools and algorithms always do, but it’s the context that matters. Does the general public understand that at least ~40% of that representation is manipulated by the algorithm? I doubt it.

Duster
July 2, 2014 11:05 am

“…working as designed.”
I have run into this response before. Quite a long time ago now I was conducting a great deal of literature research where the authors frequently included 2X2 contingency tables and then proceeded to “explain” the tabled data however seemed reasonable to them with no quantitative examination at all. I at first would load up a stat program to run a Fisher’s Exact Test, which is a handy first tool for examining categorical data in a contingency table. I hated the wait for the stat package to boot and decided to write my own utility (in TurboPascal). The routine is simple enough that with a small N, the results can be checked by hand.
In doing this I discovered just how stupid computers really are and how careful and really, really paranoid a programmer has to be about results. Since the marginal totals are fixed, the results for a given set of cell values should not depend on a mirrored arrangement of the numbers in the cells. A routine had to be included that double-checked the arrangement to be sure that results were not the inverse (1-P, rather than P), for instance. As a double, double check I checked my results against commercial stat software Fisher’s Exact routines. Three out of three commercial packages I had access to were sensitive to cell arrangement and would yield erroneous results for arrangements with mirrored symmetry: 2,2|2,1 for instance would give a different result than 1,2|2,2. I contacted the individual publishers and two of three responded that the error was “not possible” and their software worked as expected. The third said they would look into it and later sent me a “thank you note.” All three packages yielded correct results in the next release.
The idea that software works “as designed” in any complex system is likely to be illusory. If results look “reasonable” there is a high chance that an error will not be detected. The “zombie stations” problem exemplifies the entire problem with in-filling and estimating data.
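A quick way to check a package for the symmetry described above, using SciPy's fisher_exact and the same two mirrored tables from the comment:

```python
# Check the symmetry Duster describes: a 2x2 table and its mirrored arrangement
# have the same margins, so a correct Fisher's exact test must give the same
# two-sided p-value for both.
from scipy.stats import fisher_exact

table    = [[2, 2],
            [2, 1]]
mirrored = [[1, 2],
            [2, 2]]   # rows and columns both reversed

_, p1 = fisher_exact(table)
_, p2 = fisher_exact(mirrored)
print(p1, p2)
assert abs(p1 - p2) < 1e-12   # a package that fails this check has the bug Duster found
```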

Mike Singleton
July 2, 2014 11:14 am

I wonder how many hours they spent concocting that statement; I can imagine the conference room strewn with discarded pizza boxes, empty coffee containers and redolent with the odor of unwashed bodies.
Defensive “spinning” at its finest.

July 2, 2014 11:16 am

Our algorithms are working as designed
——————
That was a CYA response to the question asked… but still an absolutely true statement.
The same as… all computer programs that contain “bugs” resulting in mistakes, errors or crashes… “are working as coded”.

July 2, 2014 12:23 pm

So the creation of zombie stations is part of the design?
Let’s go back to the original Goddard issue.
Zombie stations might have been part of the design as a simplifying assumption when there was one zombie in 20 stations.
The population of zombie stations is growing grotesquely large, to where the zombie horde is almost half the total population.
Zombiegate indeed.

July 2, 2014 12:53 pm

No, Mr Mosher. That won’t wash.
As I said, there are TWO approaches to constructing an average
1. Create, manufacture, find, LONG STATIONS : GISS and CRU
2. Use the raw data AS IS and estimate the field.
Option 2 was first proposed and implemented by SKEPTICS
http://noconsensus.wordpress.com/2010/03/25/thermal-hammer-part-deux/
The man who invented this is RomanM.
When we started at BerkeleyEarth our head statistician consulted with RomanM to understand what he had done.
RomanM used a simple but effective least squares approach.
We improved that by using a refinement called kriging.
That idea was first proposed at Climate audit.
So there you have it.
two approaches:
A) find or create LONG STATIONS and average them CRU AND GISS
B) use all the raw data to estimate a field, an approach first implemented by skeptics.
Oh ya, when skeptics first did this, they showed a warming trend that was HIGHER
ya’ll forget that
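For readers unfamiliar with the "use all the raw data and estimate the field" idea, here is a toy sketch of an offset-plus-common-signal least-squares fit. It is an illustration only, not RomanM's or Berkeley Earth's actual code, and the readings are invented; NaN marks months a station did not report.

```python
# Toy sketch: model each reading as a shared monthly signal mu[t] plus a
# per-station offset, and solve the whole thing by least squares.
# Illustration only; not RomanM's or Berkeley Earth's actual code.
import numpy as np

obs = np.array([[10.0,   11.0, np.nan, 12.5 ],
                [12.1,   13.0, 13.6,   np.nan],
                [np.nan, 12.2, 12.9,   13.8 ]])
n_st, n_t = obs.shape

rows, y = [], []
for s in range(n_st):
    for t in range(n_t):
        if not np.isnan(obs[s, t]):
            rows.append((s, t))
            y.append(obs[s, t])

# One column per month (the common signal) plus one per station (its offset).
A = np.zeros((len(y), n_t + n_st))
for i, (s, t) in enumerate(rows):
    A[i, t] = 1.0
    A[i, n_t + s] = 1.0

# Offsets are only defined up to a constant, so add a row forcing them to sum to zero.
A = np.vstack([A, np.r_[np.zeros(n_t), np.ones(n_st)]])
y = np.r_[y, 0.0]

sol, *_ = np.linalg.lstsq(A, y, rcond=None)
print("common signal  :", np.round(sol[:n_t], 2))
print("station offsets:", np.round(sol[n_t:], 2))
```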

July 2, 2014 12:57 pm

“Does the Berkeley method provide a traceable, repeatable measurement result with stated uncertainty for a well-defined measurand?”
we are not doing measurments. we are creating a prediction ( an expected value)
given
A) the raw data
B) a geostatistical model of climate.
So the right questions are
A) have you tested your prediction? Yes, it validates
B) do you have uncertainties– yes they are required to do #1

DHF
Reply to  Steven Mosher
July 2, 2014 1:35 pm

At the web site http://www.berkeleyearth.org it is stated:
“The Berkeley Earth averaging process generates a variety of Output data including a set of gridded temperature fields, regional averages, and bias-corrected station data.”
Here is the Wikipedia explanation of the terms prediction and Measurement:
“A prediction or forecast is a statement about the way things will happen in the future, often but not always based on experience or knowledge.”
“Measurement is the assignment of numbers to objects or events. It is a cornerstone of most natural sciences, technology, economics, and quantitative research in other social sciences.” ….. .. “The science of measurement is called metrology.”
“Metrology is defined by the International Bureau of Weights and Measures (BIPM) as “the science of measurement, embracing both experimental and theoretical determinations at any level of uncertainty in any field of science and technology.”
I would say that you are providing a measurement result. Hence, you should adhere to the standards for measurement.

Nix
July 2, 2014 1:03 pm

What? No Hitler hissy fit video yet?

richardscourtney
July 2, 2014 1:15 pm

Steven Mosher:
I write to request a clarification of your post at July 2, 2014 at 12:57 pm which includes this

we are not doing measurments. we are creating a prediction ( an expected value)
given
A) the raw data
B) a geostatistical model of climate.
So the right questions are
A) have you tested your prediction? Yes, it validates
B) do you have uncertainties– yes they are required to do #1

You “not doing measurments (sic)” does not absolve you from a need for an independent calibration standard when you have “tested your prediction” to determine “it validates”. Your “prediction” needs to be compared to something if its validity is to be established.
Please say
(a) What are you predicting?
and
(b) How do you “test” the result of your “prediction”?
and
(c) How does that “test” determine “it validates”?
Thanking you in anticipation of your answers
Richard

Konrad
July 2, 2014 3:10 pm

“..then the questions of credibility will be valid to ask. – Anthony”
Anthony,
you may have noticed Nick Stokes running around the web making this claim –
“TOBS is an adjustment made when a station changes its time of observation. The amount depends on the change.”
– about TOB adjustments in the USHCN “Temperaturette” product. (I can’t believe it’s not temperature!)
When pressed at “Not a Lot of People Know That”, Nick linked to the USHCN V2 methods, which in turn links, guess where? Tom Karl’s pet rat TOBy –
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/karl-etal1986.pdf
From the abstract –
“A self-contained computer program has been developed which allows a user to estimate the time of observation bias anywhere in the contiguous United States without the costly exercise of accessing 24-hourly observations at first-order stations.”
The Feb 1986 paper starts well, dealing with solar and human time zone issues, but then switches to estimating TOB without metadata. The conclusion claims as much as 2C corrections may need to be applied and specifically mentions “climate change”. This paper was submitted in 1985.
Nick quotes on this thread –
“An important feature of the USHCN is an extensive metadata file aiding in adjustments to the temperature data associated with station moves, instrument changes, microclimatic changes near the station, urbanization, and/or time of observation biases.”
“and/or”? If TOB adjustments have been applied to stations that only ever reported hourly or “zombie” stations, that can only be TOBy’s nibbling, not properly referenced metadata.
Tom’s pet rat TOBy may have been nibbling on temperatures and leaving his droppings in Stevenson screens from one side of the US to the other for quite some time. The question now is whether little TOBy has a cardboard box with air holes and a little passport. Is TOBy an international traveller?
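For readers wondering why time of observation matters at all, here is a toy simulation of the bias itself, with entirely synthetic numbers; it is not the Karl et al. (1986) estimation method or any USHCN code. A max/min thermometer reset in the late afternoon lets a hot afternoon spill its maximum into the next observational day, warming the long-run mean of extremes.

```python
# Toy simulation of the time-of-observation bias (synthetic numbers only).
import numpy as np

rng = np.random.default_rng(0)
n_days = 3000
daily_level = 15 + 8 * rng.standard_normal(n_days)        # day-to-day weather swings
hours = np.arange(n_days * 24)
diurnal = 6 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)   # cycle peaking mid-afternoon
temps = np.repeat(daily_level, 24) + diurnal

def mean_of_extremes(reset_hour):
    """Long-run average of (Tmax+Tmin)/2 over 24 h windows ending at reset_hour."""
    means = []
    for d in range(1, n_days):
        window = temps[(d - 1) * 24 + reset_hour : d * 24 + reset_hour]
        means.append((window.max() + window.min()) / 2)
    return float(np.mean(means))

print("midnight reset:", round(mean_of_extremes(0), 2))
print("5 pm reset    :", round(mean_of_extremes(17), 2))
# The afternoon-reset series runs measurably warmer, so changing a station's
# observation time introduces a step that TOB adjustments try to remove.
```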

NikFromNYC
July 2, 2014 3:35 pm

All this hullabaloo is inflamed by ignoring two facts: first, that Goddard’s claim that a downward trend adjustment is necessary remains unclear because of his reliance on objectively bad statistics; and second, that “our algorithms are working as designed” is certainly true in the proper context of the Texas station that was in the news, which turned out to involve a corroded wire whose bias the algorithms kept out of the result, as they were reasonably designed to do.
What remains unanswered by skeptics is whether Goddard’s claims, that infilling adds a warming bias and that the TOBS time-of-day adjustment is much larger in practice than its documented use implies, are confirmed or denied. There are strong suggestions that they are denied, and few suggestions that any trend-bias claim survives more rational analysis. If they do adjust down, skepticism is afforded a huge gold trophy, but if not, our existing claims will be further marginalized due to our own inability to self-regulate quickly enough to avoid embarrassment in the public eye as we merely provide ammo to the alarmist side.
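One way to move Nik’s infilling question from rhetoric to arithmetic is a straight before/after comparison: recompute the network average with the estimated (infilled) values excluded and see whether the trend changes. Below is a minimal sketch, assuming the USHCN monthly data have already been parsed into a flat table; the file name and column names are placeholders rather than anything NCDC publishes, and the "E" flag is used here in the sense of the estimated-value flag carried in the monthly files.

# Sketch of a before/after infilling check. File name and columns are
# placeholders; parsing the actual USHCN fixed-width files is left out.
import numpy as np
import pandas as pd

df = pd.read_csv("ushcn_monthly_parsed.csv")      # columns: station, year, month, tavg, flag
reported = df[df["flag"] != "E"]                  # drop estimated (infilled) values

def trend_per_decade(frame):
    """OLS trend (degrees per decade) of the annual network-average temperature."""
    annual = frame.groupby("year")["tavg"].mean()
    return 10 * np.polyfit(annual.index, annual.values, 1)[0]

print("trend with infilled values included:", trend_per_decade(df))
print("trend with reported values only    :", trend_per_decade(reported))

A plain average over whichever stations happen to report has composition problems of its own, which is the main criticism levelled at Goddard’s method, so a gridded or anomaly-based average would be the obvious next refinement before drawing conclusions either way.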

u.k.(us)
July 2, 2014 4:44 pm

I think we’ve got them flustered.
Keep bluffing, or show their hands?

cba
July 2, 2014 4:58 pm

Makes ya wonder whether all problem responses are now channeled through the White House political machine. Evidently, that is something going on with FOIA requests now – anything that might embarrass the administration.

rogerknights
July 2, 2014 5:27 pm

NikFromNYC says:
July 2, 2014 at 3:35 pm
What remains unanswered by skeptics is whether Goddard’s claims, that infilling adds a warming bias and that the TOBS time-of-day adjustment is much larger in practice than its documented use implies, are confirmed or denied. There are strong suggestions that they are denied, and few suggestions that any trend-bias claim survives more rational analysis.

But there would seem to be a prima facie case that NCDC has been biasing things warm somehow, given that its ordinary collection of stations is running two degrees Fahrenheit or so hotter than its gold-standard reference network.

DanMet'al
July 2, 2014 5:53 pm

Maybe slightly off topic, but I’m wondering whether anyone else thinks that using current land-based temperature sensors and protocols is not appropriate or optimal for gauging climate change.
Land-based “weather stations”, arguably starting with the station atop the Blue Hills in Milton, MA, were designed to characterize weather for the citizenry. As Mr. Watts has demonstrated, the collection of weather stations in the USHCN and other networks has shown uneven and often poor quality in even characterizing weather.
So back to my question at the top of this post. If you were given a clean start with the objective to measure land climatic temperature change, how would you design your “climate station” physically, locationally, sensor-wise, data handling/management/QC, and subsequent data treatment via post-processing corrections (or not). Engineers of all stripes will likely get this; the climate hacks, maybe not.
I have my thoughts on this, but since I’m a newbie, I prefer to offer this as an open question. I can tell you that my “alternative universe” view does not at all represent the views of Hansen, Menne, Mann, etc., or anything present in the land temperature measurement infrastructure.
Thanks
Dan

rogerknights
July 2, 2014 6:05 pm

NikFromNYC says:
July 2, 2014 at 3:35 pm
If they do adjust down, skepticism is afforded a huge gold trophy, but if not, our existing claims will be further marginalized due to our own inability to self-regulate quickly enough to avoid embarrassment in the public eye as we merely provide ammo to the alarmist side.

Hmmm — AW’s idea of an “official” skeptical organization would help in this situation.

rogerknights
July 2, 2014 6:14 pm

DanMet’al said:
If you were given a clean start with the objective to measure land climatic temperature change, how would you design your “climate station” physically, locationally, sensor-wise, data handling/management/QC, and subsequent data treatment via post-processing corrections (or not).

The NCDC has already, as of ten years ago, created an ideal Climate Reference network in the USA with over 100 stations. I don’t know its exact name and details, but hopefully another commenter will give details and/or a link.

DanMet'al
July 2, 2014 6:32 pm

I’m skeptical not just with regard to climate but also relative to my own discipline (materials science and engineering); but even more, I apply skepticism to everything I do. . . technically and otherwise. . . so I guess I’m just a boring guy. . . just ask my wife!
But where I do take exception — relative to a number of skeptical viewpoints expressed on WUWT– is my belief that we skeptics need to do more than simply dig in and strive to uncover issues with the “other side’s” technical argument. We skeptics really need to use our creative capacity to find new ways forward that are more technically sound and contribute to stronger science and engineering. . . Yeah, we need to solve problems, and not just harangue our supposed opposition. This means we (and they) need to grow up!!!!
Enough said.
Dan
Engineering, you say? I contend that the climate system is well beyond the capability of scientists. . . and that’s why it’s so mucked up! Time to bring in engineers who know how to deal with concrete criteria, uncertainty, and model validation.

DanMet'al
July 2, 2014 6:46 pm

rogerknights says:
July 2, 2014 at 6:14 pm
Thanks Roger; I was probably too obtuse; but I was thinking and looking for ideas beyond the inch-bugging tweaks and corrections proposed by the climate community.
For example, if you’re interested in climate (rather than simply weather), why not employ sensor stations with radically larger thermal mass . . . so that the sensor integrates out the weather? Then how do you deploy sensors to achieve a sensible (justifiable) distribution among topographies, biospheric environments, etc.? Needs some thought.
But the real issue is that such a measurement system shouldn’t be relegated to “the history of weather measurements” or the happenstance of what NOAA or NASA can do. Rather, a TRUE CLIMATE measurement system needs to be designed based on objectives, strategies, technical foresight, and standards, with oversight from the entire cognizant technical community. Nobody. . . after the fact. . . should have to ask Menne, Nick Stokes, Steve Mosher, Hansen, or anybody else why they did what they did. . . it should be plain, clear, and agreed upon up front. There should be no future self-proclaimed keepers of the truth that we must all accept on faith.
Sorry. . . end of rant.
Dan

July 2, 2014 7:11 pm

José Tomás says:
July 1, 2014 at 5:24 pm
“No – our algorithm is working as designed”
So, case settled.
There was some debate here about if this was a case of deliberate tampering or a bug.
One commenter said that “it was a feature until discovered, then it would become a “bug”.”
Not even that.
The deny it being a bug.
So, the other option is…
———————————
Samuel C Cogar says:
July 2, 2014 at 11:16 am
That was a CYA response to the question asked,….. but still an absolutely true statement.
The same as …. all computer programs that contain “bugs” resulting in mistakes, error or crashes …… “are working as coded”.
———————————–
Gotta love it! “Working as designed” is both a long-standing joke in the software world and the ultimate cop-out from incompetent development teams, stating “we didn’t make any coding mistakes so we’re in the clear.” No mention, you’ll note, of adhering to the specifications, satisfying the requirements, or solving the problem that needed to be solved.

Konrad
July 2, 2014 7:49 pm

rogerknights says:
July 2, 2014 at 6:14 pm
——————————–
“The NCDC has already, as of ten years ago…”
A slight correction: US installation was from 2009. Issues raised by our host Anthony Watts about the poor condition of existing stations were a significant factor in the decision.

lee
July 2, 2014 8:00 pm

I was interested in the observations from various cities showing no variation in the data.
It seems that homogenisation of the historical (hysterical?) data has now been seamlessly integrated into all records.

Konrad
July 2, 2014 8:02 pm

NikFromNYC says:
July 2, 2014 at 3:35 pm
“… but if not, our existing claims will be further marginalized due to our own inability to self-regulate quickly enough to avoid embarrassment in the public eye as we merely provide ammo to the alarmist side.”
What a great idea! Let’s self regulate to avoid embarrassment!
rogerknights says:
July 2, 2014 at 6:05 pm
“Hmmm — AW’s idea of an “official” skeptical organization would help in this situation.”
Yes, Steve Mosher thought this would be perfect too. Especially if belief in a net radiative GHE was strictly enforced.
Poptech had another great suggestion. Sceptics should stop all this pointless analysis of existing climate papers and organisations and start working on only trying to get papers through pal review. After all if it’s not pal reviewed it couldn’t possibly be scientifically valid. If sceptics wanted to be taken “seriously” they had to pass peer review by established climate scientists…
The Hoff also had a great idea to stop sceptics wasting their time. All we had to do was accept that the ERBE “proved” there was a net radiative GHE and just forget all those pesky empirical experiments…
Any guess where all these suggestions should be filed?

asybot
July 2, 2014 10:48 pm

Everything working as designed,
And I live in a 13-room, 5-bathroom home with a 4-bay garage housing 2 Rolls-Royces and a Hummer, plus a swimming pool, a tennis court and a boat on the lake, and I have a butler and a full-time gardener.

July 3, 2014 4:42 am

John M says:
July 1, 2014 at 6:41 pm
This sounds Mannian.
The algorithm is robust to the data input.
===================================
Haha. Mannian logic as opposed to Boolean logic.

July 3, 2014 4:52 am

Paul Homewood has busted them again
http://notalotofpeopleknowthat.wordpress.com/2014/07/03/another-ncdc-cock-up/
Alabama temperatures have been adjusted down by 50 degrees Fahrenheit. And the algorithm “is working as designed”.
Kids – you just can’t get decent programmers any more it seems…

July 3, 2014 6:32 am

DanMet’al says:
July 2, 2014 at 5:53 pm
If you were given a clean start with the objective to measure land climatic temperature change, how would you design your “climate station” physically, locationally, sensor-wise, data handling/management/QC, and subsequent data treatment via post-processing corrections (or not).
——————–
If it t’were me, ….. I would design a “round” container of one (1) gallon capacity/volume (or whatever volume was deemed optimal) and fill it with -40 degree F anti-freeze …. and then submerge two (2) calibrated thermocouples in the center of the liquid …. and then suspend said “ball” container underneath an open canopy to protect it from direct Sunlight …. but subject to the near-surface air surrounding it ….. and from direct IR radiation from the surface underneath it.
The thermocouples would be connected to auto-recording equipment which would also monitor the output of both thermocouples to ensure their “calibration” remained true and accurate. The auto-recording equipment could be preset to record the temperature of the anti-freeze at whatever hour of the day that one desired and/or transmit said temperature to a processing center.
Said recorded temperature would be the “true n’ accurate” Average Air Temperature for that specific Surface Station because the liquid anti-freeze would “filter out” all spurious, random and/or short duration environmental effects that normally cause increased/decreased temperature “spikes”. Note: the volume of the liquid anti-freeze would determine the “filter time”.
Thus, the data is the data …. and no pre- or post-processing of the data for “error correction” is necessary or required …. except when a thermocouple “error” is detected, which is an SS maintenance problem.
Also note: said “round” container might have to be connected via a small diameter tube to a small “overflow” container to account for expansion and contraction … just like an automobile radiator is. Or an open “stand-pipe” on top of the “ball” might suffice.
Whatta ya think, …….. feasible, doable?
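Whether one gallon is enough buffering is easy to estimate: a well-mixed liquid mass responds to the surrounding air roughly as a first-order low-pass filter, dT_ball/dt = (T_air - T_ball)/tau, with tau = m*c/(h*A). The sketch below uses guessed values for the heat-transfer coefficient and the container geometry; these are illustrative numbers, not a design calculation.

# Back-of-envelope check of the "ball of antifreeze" idea. Property values
# are rough guesses for one US gallon of 50/50 glycol/water in still air.
import numpy as np

m = 4.0          # kg, roughly one US gallon of 50/50 glycol/water
c = 3400.0       # J/(kg K), approximate specific heat of the mix
h = 8.0          # W/(m2 K), guessed natural-convection film coefficient
V = 3.785e-3     # m3, one US gallon
r = (3 * V / (4 * np.pi)) ** (1 / 3)
A = 4 * np.pi * r**2
tau = m * c / (h * A)                     # first-order time constant, seconds
print(f"time constant ~ {tau / 3600:.1f} hours")

# Response to a 20-minute, +5 C spike riding on steady 15 C air
dt = 60.0
t = np.arange(0, 12 * 3600, dt)
T_air = 15 + np.where((t > 3600) & (t < 3600 + 1200), 5.0, 0.0)
T_ball = np.empty_like(t)
T_ball[0] = 15.0
for i in range(1, len(t)):
    T_ball[i] = T_ball[i - 1] + dt * (T_air[i - 1] - T_ball[i - 1]) / tau
print(f"spike seen by the ball: +{T_ball.max() - 15:.2f} C of the +5 C in the air")

With these guesses the time constant comes out at a few hours, so minute-scale spikes are strongly attenuated, but the diurnal cycle is also smeared and lagged; the record would still need an agreed convention for what “the temperature” being reported actually represents.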

July 3, 2014 8:22 am

NikFromNYC says:
July 2, 2014 at 3:35 pm

… to self-regulate quickly enough to avoid embarrassment …

Nik, I’ve been trying to avoid embarrassment since I was a child. It didn’t work.
What are your results?

DanMet'al
July 3, 2014 1:19 pm

Samuel C Cogar says:
July 3, 2014 at 6:32 am
Thanks for reading my question and providing a thoughtful response. As I indicated initially, I had my own thoughts and interestingly there’s a good bit of correspondence between your design and mine. But I must admit that I hadn’t considered using a “liquid” such as anti-freeze (ethylene glycol) as the climate station thermal mass. It does have the issues you raised (expansion) and potential wild-life health risks; but I like that a liquid thermal buffer would eliminate some of the thermal lag (presuming that natural convection were not hindered by internal baffles etc.).
I had been thinking about a fully solid and more massive climate station; and I was worried about snow cover and freezing rain (a given where I live). . . so I was actually thinking about an upward-pointing cone with a sharp enough apex angle to shed most snow. I then was thinking about doing some FEM thermal analysis to test out some design alternatives . . . but decided, what the heck, there’s a lot of smart people on WUWT. . . maybe they might have some good ideas. So I thank you for your thoughtful and useful ideas. . . I just wish others had shown an interest and offered additional feedback.
Just one further comment: I started thinking about this following a conversation with a friend who buys into the global warming perspective. . . in which I challenged him as to whether either of us could determine the average temperature of my 1/2 acre yard. Well, of course nothing happened, so now that I’m retired, I intend to build my climate station and several other T-reading stations. I’ve got a digital data logger, thermistors, calibrated Omega thermocouples, and some microcontrollers ready for the soldering gun.
Anyway Sam, thanks for your input. . . maybe we can talk about this at some future time.
Best Regards
Dan

July 4, 2014 8:05 am

DanMet’al says:
July 3, 2014 at 1:19 pm
And I thank you, Dan, for your response to my proposed design.
The thermal expansion is really not a problem; I only included it to fend off the “naysayers”. With a temperature range between -40F and 110F there would be very little volume change in 1 gallon of ethylene glycol, thus a small “stand-pipe” or even an “air pocket” at the top of the liquid in the “ball” should suffice. And P.S., the primary purpose of the liquid “buffer” is to guarantee a “thermal lag” ….. and not to eliminate one. If a “thermal lag” is incorporated in the design then one doesn’t have to “massage” the recorded temperature data “after-the-fact” to adjust or eliminate potential “outliers”.
My idea for a Surface Station would be four (4) sturdy posts with an attached louvered roof and the “ball” container suspended underneath with unencumbered “air flow” all around it. Those square “boxes” with louvered enclosures will disrupt the air flow depending on the direction from which it is flowing.
And Dan, with your 1/2 acre yard and equipment, you could prove whether or not 400 ppm of CO2 contributes to or causes any measurable “warming” of the near-surface air. And you could do that by performing an “experiment” that I devised 4 or 5 years ago.
So, until we talk again, …. Cheers
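For contrast with the built-in buffering Sam describes, the usual post-hoc alternative is to despike the raw record in software, for example with a rolling median; this is the kind of after-the-fact massaging the analog buffer is meant to make unnecessary. A minimal sketch on synthetic data, with all values invented:

# Software despiking, for comparison with the analog thermal buffer.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(24 * 60)                                   # one day of 1-minute readings
temps = 15 + 6 * np.sin(2 * np.pi * t / 1440) + 0.2 * rng.standard_normal(t.size)
temps[700] += 8.0                                        # one spurious spike

def rolling_median(x, window=11):
    """Centered rolling median; the edges keep their raw values."""
    y = x.copy()
    half = window // 2
    for i in range(half, len(x) - half):
        y[i] = np.median(x[i - half : i + half + 1])
    return y

cleaned = rolling_median(temps)
print(f"raw max: {temps.max():.2f} C   despiked max: {cleaned.max():.2f} C")

Both routes discard the same short-lived information; the difference is whether the filtering is fixed in hardware up front or applied, and therefore arguable, after the fact.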

July 5, 2014 6:52 pm

Konrad says:
Poptech had another great suggestion. Sceptics should stop all this pointless analysis of existing climate papers and organisations and start working on only trying to get papers through pal review. After all if it’s not pal reviewed it couldn’t possibly be scientifically valid. If sceptics wanted to be taken “seriously” they had to pass peer review by established climate scientists…

Konrad, why are you lying about things I never said? My only recent discussions on pal-review relate to PRP. And yes, skeptics get taken seriously when they are legitimately peer-reviewed, and this is not as hard to do as you would think, but I am not about to explain it here.