NOAA's National Climatic Data Center caught cooling the past – modern processed records don't match paper records

We’ve seen examples time and again of the cooling of the past via homogenization that goes on with GISS, HadCRUT, and other temperature data sets. By cooling the data from the past, the trend/slope of the temperature for the last 100 years increases.

This time, the realization comes from an unlikely source, Dr. Jeff Masters of Weather Underground via contributor Christopher C. Burt. An excerpt of the story is below:

Inconsistencies in NCDC Historical Temperature Analysis

Jeff Masters and I recently received an interesting email from Ken Towe who has been researching the NCDC historical temperature database and came across what appeared to be some startling inconsistencies. Namely that the average state temperature records used in the current trends analysis by the NCDC (National Climate Data Center) do not reflect the actual published records of such as they appeared in the Monthly Weather Reviews and Climatological Data Summaries of years past. Here is why.

An Example of the Inconsistency

Here is a typical example of what Ken uncovered. Below is a copy of the national weather data summary for February 1934. If we look at, say, Arizona for the month, we see that the state average temperature for that month was 52.0°F.

The state-by-state climate summary for the U.S. in February 1934. It may be hard to read, but the average temperature for the state of Arizona is listed as 52.0°F. From Monthly Weather Review.

However, if we look at the current NCDC temperature analysis (which runs from 1895-present) we see that for Arizona in February 1934 they have a state average of 48.9°F, not the 52.0°F that was originally published:

Here we see a screen capture of the current NCDC long-term temperature analysis for Arizona during Februaries. Note in the bar at the bottom that for 1934 they use a figure of 48.9°.

Ken looked at entire years of data from the 1920s and 1930s for numerous different states and found that this ‘cooling’ of the old data was fairly consistent across the board. In fact he produced some charts showing such. Here is an example for the entire year of 1934 for Arizona:

The chart above shows how many degrees cooler each monthly average temperature for the entire state of Arizona in 1934 was compared to the current NCDC database (i.e., versus what the actual monthly temperatures were in the original Climatological Data Summaries published in 1934 by the USWB (U.S. Weather Bureau)). Note, for instance, how February is 3.1°F cooler in the current database compared to the historical record. Table created by Ken Towe.

Read the entire story here: Inconsistencies in NCDC Historical Temperature Analysis

================================================================

The explanation given is that they changed from the ‘Traditional Climate Division Data Set’ (TCDD) to a new ‘Gridded Divisional Dataset’ (GrDD) that takes into account inconsistencies in the TCDD.

Yet as we have seen time and time again, with the exception of a -0.05°C cooling applied for UHI (which is woefully under-represented) all “adjustments, improvements, and fiddlings” to data applied by NCDC and other organizations always seem to result in an increased warming trend.

Is this purposeful mendacity, or just another example of confirmation bias at work? Either way, I don’t think private citizen observers of NOAA’s Cooperative Observer Program who gave their time and efforts every day for years really appreciate that their hard work is tossed into a climate data soup then seasoned to create a new reality that is different from the actual observations they made. In the case of Arizona and changing the Climate Divisions, it would be the equivalent of changing state borders and saying fewer people lived in Arizona in 1934 because we changed the borders today. That wouldn’t fly, so why should this?

Sure there are all sorts of “justifications” for these things published by NCDC and others, but the bottom line is that they are not representative of true reality, but of a processed reality.

h/t to Dr. Ryan Maue.

UPDATE: Here’s a graph showing cumulative adjustments to the USHCN subset of the entire US COOP surface temperature network done by Zeke Hausfather and posted recently on Lucia’s Blackboard:

This is calculated by taking USHCN adjusted temperature data and subtracting USHCN raw temperature data on a yearly basis. The TOBS adjustment is the lion’s share.
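For concreteness, here is a minimal sketch of that calculation in Python. The file names and column layout are hypothetical – real USHCN data comes in a fixed-width station format that needs its own parser – so treat this as the shape of the arithmetic, not a drop-in script:

```python
# Minimal sketch of the adjusted-minus-raw calculation described above.
# "ushcn_raw.csv" and "ushcn_adjusted.csv" are hypothetical files with
# columns station, year, temp_f; real USHCN files use a fixed-width format.
import pandas as pd

raw = pd.read_csv("ushcn_raw.csv")
adjusted = pd.read_csv("ushcn_adjusted.csv")

# Collapse each dataset to one national value per year, then difference them.
raw_yearly = raw.groupby("year")["temp_f"].mean()
adj_yearly = adjusted.groupby("year")["temp_f"].mean()
net_adjustment = (adj_yearly - raw_yearly).rename("adjustment_f")

# Positive values mean the adjustments warmed that year relative to the raw data.
print(net_adjustment.tail())
```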

Andrew Greenfield
June 6, 2012 1:43 pm

Time to call the cops!

June 6, 2012 1:49 pm

This is an outrageous foul! Where’s the Umpire when you need him in this game?

John Bills
June 6, 2012 1:51 pm

It is called modelling. And as far as I know, they are not very good at it.

Joachim Seifert
June 6, 2012 1:53 pm

What are “Inconsistencies”? – Modern term for “Cooking the books”?

CodeTech
June 6, 2012 1:54 pm

We need Ed Begley Jr. on this. He can reassure us that it’s all Peer Reviewed and thus the people who measured the temperature back then, not being Climatologists with letters after their names, didn’t know how to read thermometers.

Owen in GA
June 6, 2012 1:54 pm

If 1934 was cooled by an average of three degrees, wouldn’t that mean we are actually cooler now than then?

Taphonomic
June 6, 2012 1:55 pm

“And if all others accepted the lie which the Party imposed -if all records told the same tale — then the lie passed into history and became truth. ‘Who controls the past,’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’ And yet the past, though of its nature alterable, never had been altered. Whatever was true now was true from everlasting to everlasting. It was quite simple. All that was needed was an unending series of victories over your own memory.”
Doubleplusungood!

Auto
June 6, 2012 1:56 pm

Not familiar with the American system. Can you impeach the weatherfolk?
Or is Andrew Greenfield right – and the only option is to call Phoenix PD and lay information about a fraud? That won’t do science, generally, a lot of good.
Are we moving to a post-scientific age, where celebrity – or Teamwork – trumps facts, carefully researched and un-dramatically presented?
[On present trend, DC and much of Maryland will be covered by a kilometer-deep ice cap by 2025, extrapolating the cooling seen last night, using a model I made (up) earlier. /Sarc. /Not real!]
Seriously, a rational human would expect adjustments to be, basically, fairly evenly distributed about the neutral.
As described, it smacks of a very biased die.
Cui Bono?

Chris B
June 6, 2012 1:57 pm

War is peace. Hot is cold. What else is new?

tallbloke
June 6, 2012 2:07 pm

I hope these data vandals are keeping a careful record of their ‘adjustments’ so the records can be un-bent later.

Scottish Sceptic
June 6, 2012 2:08 pm

People have gone to jail for less than this.

Luther Wu
June 6, 2012 2:10 pm

All who still trust government, raise your hands.

Luther Wu
June 6, 2012 2:12 pm

Both of them… this is a stickup.

Latitude
June 6, 2012 2:24 pm

Don’t they have to adjust for the sea floor sinking or something like that…
Steven Goddard has been posting these “inconsistencies” for years…

Andrew
June 6, 2012 2:25 pm

It’s time the big people that matter, who are likely to be running the show soon, are made aware of this and the Hockey Stick scandals (yes, it has now become plural) – persons such as Romney and Abbott, and those with money like Gina Rinehart in Australia.

cui bono
June 6, 2012 2:28 pm

“From the Peoples Central Statistical Office. The current five-year plan continues to advance well ahead of schedule thanks to the glorious foresight of the Great Leader. Tractor production is up 45.3% since last year. Work harder, dedicated Stakhanovites, for the Great Day will soon dawn for us all.”

SteveSadlov
June 6, 2012 2:29 pm

Got to hide those warm 1930s!

June 6, 2012 2:30 pm

Being NOAA, wouldn’t Congress have oversight??? Someone ought to send this info. to the proper oversight Congressional Chairmen!

jitthacker
June 6, 2012 2:31 pm

Well – it should be obvious that if the explanation is true, then as many of the gridcells should have increased temperatures as have decreased temperatures.
Without knowing how thorough the survey and its reporting is, we can’t say this is systematic bias. Hopefully the author will confirm whether they balance out.

Phil C
June 6, 2012 2:34 pm

Is this purposeful mendacity, or just another example of confirmation bias at work? Either way, I don’t think private citizen observers of NOAA’s Cooperative Observer Program who gave their time and efforts every day for years really appreciate that their hard work is tossed into a climate data soup then seasoned to create a new reality that is different from the actual observations they made.
You cut off the Weatherunderground story just as it gets good. In your view, if I happened to be stationed a mere ten yards away from another researcher, and we are supposed to be two people covering an area of hundreds of square miles, I should be concerned that my hard work was “tossed into a climate data soup” rather than properly considered as not accurately representative of the region we were supposed to be covering? If you had reprinted the full quotation provided over at weather underground that they had taken from Transitioning from the traditional divisional dataset to the Global Historical Climatology Network-Daily gridded divisional dataset it would be clear that these adjustments make scientific sense and are not biased. I think the right thing to do at this point would be for you to bring the substance of that reference over here to prove me wrong, but I’ll wager you don’t do that.
REPLY: And you didn’t read what I said about them. They make no sense to me. Explain why virtually every adjustment made to the raw data causes a temperature trend increase. That’s your challenge. See also comment directly above. Moving the data boundaries around should balance out. It apparently doesn’t. This sort of adjustment wouldn’t be tolerated in a stock report if it made trends/performance improve. The SEC would be on that like white on rice. Why should it be any different here? – Anthony

June 6, 2012 2:36 pm

Why is it that the older data that had less influence from man made structures, roads, changes to land, no air conditioning units, fewer parking lots and smaller airports is considered to have inconsistencies and must be “adjusted”?
The newer data is what is truly messed up with “man made” changes that affect the measurements.

kadaka (KD Knoebel)
June 6, 2012 2:36 pm

http://wattsupwiththat.com/2012/04/13/warming-in-the-ushcn-is-mainly-an-artifact-of-adjustments/
As Dr. Roy Spencer said about the NOAA-NCDC USHCN record, 1973-2012 (read original post for full context):


2) Virtually all of the USHCN warming since 1973 appears to be the result of adjustments NOAA has made to the data, mainly in the 1995-97 timeframe.

And I must admit that those adjustments constituting virtually all of the warming signal in the last 40 years is disconcerting. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.

A year ago, I believed global warming was a gentle linear trend, starting around 1850 after the Little Ice Age, easily manageable and not a problem.
Now, I’m wondering if there really is any sort of linear trend from about the start of the 20th century to now, or if there are really just “surges” like the short 1998 Super El Niño and longer ones like a positive PDO – a “charging” of the global temperature systems that wears off in time, a “discharging” – and whether that noticeable long-term trend is only a result of the data mangling.
And with the negative PDO and other indicators, we have about 20 years of global cooling coming which should knock down that slope, provided more adjustments don’t “hide the decline”.
Is it now conceivable that the already-seen “climate change” the CAGW doomsayers insist foretells devastation to come, never even happened?

kramer
June 6, 2012 2:36 pm

What is the point of placing (I assume) calibrated thermometers at various locations and then later on, adjusting what those thermometers recorded in the past?
That’s about the same as measuring and recording the height of, say, trees in a location at a certain date and then at some point in the future, going back and adjusting the recorded height of those trees.
Fraud comes to mind.
I kind of would like to see a comprehensive report on all the temperature adjustments ever made and ‘weather’ (it’s a pun) or not the vast majority of adjustments have made global warming look worse, about the same, or not as bad. If I were to guess, I’d say these adjustments on the whole make AGW look worse.
Reminds me a bit of what James Lovelock said a few years ago about how 80% of the ozone measurements were either faked or incompletely done. (I’m still waiting for the MSM to pick up on this and do an investigation into this claim just like they would do if a scientist from big tobacco or big oil had admitted in a newspaper that 80% of their measurements were faked or incompletely done.)

June 6, 2012 2:37 pm

How extensive are the anomalies? I can’t find where he says that in the article.

just some guy
June 6, 2012 2:37 pm

Well, the problem is that the highly authoritative IPCC has shown that there is an overwhelming consensus that anthropogenic factors have caused warming in the 20th century which is unprecedented. Therefore any data which does not agree with this consensus, such as the Arizona data from February 1934, must clearly represent the fringe viewpoint and must be corrected. The adjustments made to the data are obviously in line with the 98% consensus viewpoint and are therefore scientifically justified. Many, many studies in Science and Nature have confirmed this.
/sarc (did I make you feel nauseous just now?)

LamontT
June 6, 2012 2:40 pm

“Luther Wu says:
June 6, 2012 at 2:10 pm
All who still trust government, raise your hands.”
==================================================
Lost trust? I didn’t have it to begin with.
Typical. We have to fix the data to correct for unspecified errors that led to a much too warm report at the time. ::sigh:: And then they, with a straight face, ask us to believe them.

coalsoffire
June 6, 2012 2:40 pm

Being a bear of very little brain I have a question. Can this sort of trick work forever? Will the artificial wave in the temperature anomaly just keep rolling along? In other words if you constantly adjust the past down and tinker a bit upward with the present to produce a constant upward trend, regardless of what is actually happening in the world, does the wave you have created ever crash on the shore? And if the natural variation is upward a bit too, well… bonus.
If we were dealing with financial fraud or embezzlement of this nature the trick would eventually become untenable. But with Climate Science it seems to be the perfect crime. Gotta love those anomalies. I guess it works because all you really have to do is create the illusion of rising temperatures, and you are chipping back the old temperatures that you raised earlier on, so it never gets that far out of the ordinary. Whereas in financial fraud the object is to actually remove money from the system; it’s not enough to make it look like you are making money. That eventually sinks you.
Also I don’t see why this is news to anyone. I’ve been reading about this adjustment exercise for years now on this blog and elsewhere and no one has actually ever denied it. It’s just a rule of “climate science” that must be taught in CS 101. All past temperature adjustments are downwards and all present temperature adjustments are upwards. This keeps the narrative alive and the funding flowing. It’s a fundamental professional conceit. If you can’t do that you can’t be a true climate scientist.

crosspatch
June 6, 2012 2:44 pm

And if you check monthly you will notice that the discrepancy increases over time. With every passing month NCDC adjusts pre-1950 temperatures down a bit more and post-1950 temperatures up.
Shown here is the amount by which the NCDC database has changed since May 2008 until April 2012.
http://climate4you.com/images/NCDC%20MaturityDiagramSince20080517.gif

Ian W
June 6, 2012 2:47 pm

Now you see why the Team loses input data or cannot show it due to ‘non-disclosure agreements’, or just flat out refuses to release data under FOIA requests. The team will see their fault here as failing to ‘lose’ (or hide the decline in) the State records.
It is a relatively simple choice – mendacity or incompetence – in either case they should not be funded and allowed to continue.

jim
June 6, 2012 2:50 pm

This page has a link to a NOAA chart of the adjustments. It essentially shows that there is NO warming since the 1930s without the adjustments:
http://jennifermarohasy.com/2009/06/how-the-us-temperature-record-is-adjusted/
Thanks
JK

noaaprogrammer
June 6, 2012 2:55 pm

Someone should run Benford’s Law of first and second leading digits on the adjusted data to verify that tampering has been done. This is a statistical test that is run on accounting data to alert auditors of book cooking. The test is independent of any kind of data and units used.
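For what it’s worth, the first-digit version of that test is easy to sketch. The sample adjustments below are invented, and, as David Falkner notes further down, data confined to a narrow range may not satisfy Benford’s preconditions:

```python
# Sketch of a first-digit Benford test on a list of adjustment magnitudes.
# The sample values are made up; Benford's law only applies cleanly to
# data spanning several orders of magnitude.
import math
from collections import Counter

def first_digit(x: float) -> int:
    """Return the leading significant digit of |x|, e.g. 0.031 -> 3."""
    x = abs(x)
    if x == 0:
        raise ValueError("zero has no leading digit")
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

adjustments = [0.31, 1.2, 0.05, 2.4, 0.18, 0.9, 1.1, 0.27, 0.64, 3.1]
observed = Counter(first_digit(a) for a in adjustments)
n = len(adjustments)

for d in range(1, 10):
    expected = math.log10(1 + 1 / d)  # Benford's expected frequency
    print(f"digit {d}: observed {observed[d] / n:.2f}, expected {expected:.2f}")
```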

JFD
June 6, 2012 2:57 pm

quidsapio, you missed a sentence in Anthony’s paste of the original paper. It reads, “Ken looked at entire years of data from the 1920s and 1930s for numerous different states and found that this ‘cooling’ of the old data was fairly consistent across the board.”
JFD

pochas
June 6, 2012 3:00 pm

Lawyers, here’s your chance! Just think up a basis for a legal action based on some injury this fraud has caused. Regulations based on this chicanery have produced financial losses everywhere.

June 6, 2012 3:02 pm

Fraud.
Anybody else anywhere else about anything else, and they’d go to jail.

June 6, 2012 3:08 pm

We the people must stand up to the lies of AGW. This is unbelievable. Changing the truth to fit the lie is deplorable!

June 6, 2012 3:10 pm

We have all been on about this before and will continue for some time yet, I am sure. Homogenization is a fool’s effort, as the results fool even the doer.

DocMartyn
June 6, 2012 3:12 pm

This is something many of us have long suspected. My guess is that the vast majority of paper documents have been or will be lost, waterlogged or shredded.
In 20 years time we will be shocked that glaciers were not covering the 30’s dust bowl states.

just some guy
June 6, 2012 3:18 pm

Holy Hockey-Sticks, Batman! The NOAA instrumental record is nothing but a protracted series of computer models. Take a look at the “Quality Control, Homogeneity Testing, and Adjustment Procedure”.
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html#QUAL
At nearly every step, the data from the previous step is entered into a computer program which makes the adjustments. Examples:
“The TOB debiased data are input into the MMTS program and is the second adjustment.”
“The debiased data from the second adjustment are then entered into the Station History Adjustment Program or SHAP. ”
“Each of the above adjustments is done is a sequential manner. The areal edits are preformed first and then the data are passed through the following programs (TOBS, MMTS, SHAP and FILNET). ”
It’s impossible for an outsider to verify any of these steps since (at least going by what they put on their website) it is impossible to get “under the hood” on their procedures. I wonder if anyone’s looked into getting the “code” from NOAA.

LazyTeenager
June 6, 2012 3:25 pm

Sure there are all sorts of “justifications” for these things published by NCDC and others, but the bottom line is that they are not representative of true reality, but of a processed reality.
———————–
Surely you can’t say this until you understand the differences between the methods by which the 2 averages were calculated. Assuming that the older values are correct and the new values are incorrect just because you like the older values better does not stand up.
Let’s see, speaking hypothetically cos I don’t know, if it turns out that the old temps are measured at places that people prefer to live, and those places are cool in a hot state, that will introduce a bias if you just add the numbers up and divide by the number of places. If those places are at a particular altitude, those places might warm more than other places, not necessarily with some simple linear relationship, so those old averages might be wrong.
So here is another article lined up for you. When the weather service says there were deficiencies in the old method of calculating average temps, what were those deficiencies? It must be documented somewhere.

DocMartyn
June 6, 2012 3:26 pm

“REPLY: Explain why virtually every adjustment made to the raw data causes a temperature trend increase. – Anthony”
I can never understand why a large pay increase has less of an impact than a small tax increase.

Green Sand
June 6, 2012 3:30 pm

DocMartyn says:
June 6, 2012 at 3:12 pm

———————————————————–
Just a few minutes of Fahrenheit 451: the autoignition temperature of paper.
“In an oppressive future, a fireman whose duty is to destroy all books begins to question his task. “
http://www.imdb.com/title/tt0060390/

NeedleFactory
June 6, 2012 3:35 pm

I’m not playing Devil’s advocate, but in trying to find a sympathetic interpretation of the “adjustments” I came up with this:
Imagine the “surface” suggested when the thermometer readings are z-coordinates and the thermometer locations are x- and y-coordinates. With some knowledge of the actual terrain, some readings may be “suspiciously” low (or high), and might be adjusted by some kriging algorithm. (Think of, say, Colorado with five thermometers spaced like the pips on the five-of-hearts playing card, all reading about the same except for the central one.)
Furthermore, faulty thermometers might be more prone to read a bit low rather than a bit high. Were this the case, a non-biased adjustment might actually “cool the past”.
I’m not saying it’s so, just thinking aloud. Whatever adjustments were made, the rationale and the algorithms should be available, as well as the raw data. Too much to ask, I fear.
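His five-of-hearts picture reduces to a simple neighbor check; a toy version with invented values might look like this:

```python
# Toy version of the five-of-hearts thought experiment: flag the central
# station when it disagrees with the mean of its four neighbors by more
# than some threshold. All values and the threshold are invented.
corners = [70.1, 69.8, 70.4, 70.0]  # four corner stations, all similar
center = 63.2                       # central station reads oddly low

neighbor_mean = sum(corners) / len(corners)
THRESHOLD = 3.0  # degrees F; arbitrary for this sketch

if abs(center - neighbor_mean) > THRESHOLD:
    print(f"center reading {center} is suspect vs neighbor mean {neighbor_mean:.2f}")
```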

NeedleFactory
June 6, 2012 3:36 pm

Should have said “a bit high”

David Corcoran
June 6, 2012 3:39 pm

What will the temperature in 1934 have been tomorrow? And the day after that? Is there some kind of time machine involved in these changes?

juanslayton
June 6, 2012 3:40 pm

The chart above shows how many degrees cooler….
Should this read, “…how many degrees cooler…”?

juanslayton
June 6, 2012 3:42 pm

Nuts. Make that, “how many degrees warmer“.

Nick Stokes
June 6, 2012 3:42 pm

As Phil C noted, the Jeff Masters link goes on to explain the reasons for the change. And I think they make a lot of sense. They are going to a more modern gridded system. I think this bullet point is the key:
“1. For the TCDD, each divisional value from 1931-present is simply the arithmetic average of the station data within it, a computational practice that results in a bias when a division is spatially under sampled in a month (e.g., because some stations did not report) or is climatologically inhomogeneous in general (e.g., due to large variations in topography).”
In the original calc, the average for Arizona was just the average of whatever stations reported in each month. That’s pretty much what you have to do if you’re working with pen and paper.
It’s the reason why most climate work is now done with anomalies. Then you don’t get the effect that a month will seem warmer when mountain stations don’t report, etc.
Anyway, if you have to work with absolute temperatures, then you have to look at what kind of stations you have in the mix when calculating an average. And the least you can do is area weighting, which the new system seems to include. If you have an area represented by few stations, you give them more weight in the average.
That’s probably why the average has gone down. It’s likely that mountain regions in Arizona were underrepresented. If you upweight stations there it brings down the average.
The real question is whether this has anything to do with trend.
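Nick’s point about anomalies can be demonstrated with toy numbers (everything below is invented): drop a cold mountain station from a plain average of absolute temperatures and the “state average” jumps by several degrees, while an anomaly average barely moves.

```python
# Toy demonstration: losing a cold mountain station inflates a plain
# average of absolute temperatures but barely moves an anomaly average.
# All numbers are invented.
stations = {  # name: (long-term normal F, this month's reading F)
    "desert_1":   (70.0, 71.0),
    "desert_2":   (68.0, 69.2),
    "mountain_1": (45.0, 45.8),
}

def plain_average(reporting):
    """Average the absolute readings of whichever stations reported."""
    return sum(stations[s][1] for s in reporting) / len(reporting)

def anomaly_average(reporting):
    """Average each station's departure from its own normal instead."""
    return sum(stations[s][1] - stations[s][0] for s in reporting) / len(reporting)

everyone = list(stations)
no_mountain = ["desert_1", "desert_2"]

print(f"{plain_average(everyone):.1f} vs {plain_average(no_mountain):.1f}")      # 62.0 vs 70.1
print(f"{anomaly_average(everyone):.1f} vs {anomaly_average(no_mountain):.1f}")  # 1.0 vs 1.1
```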

DR
June 6, 2012 3:46 pm

Golly gee, I wonder what Mosher will say about this? Will it be the standard “trust the climate scientists, they know what they’re doing”?
Lukewarmers agree with all temperature adjustments. We little people are just too…… little…..to understand the highly technical procedure of sorting and analyzing data, or reading thermometers for that matter.
We can expect Zeke & Co. will replicate the results thereby validating the “adjustments”, then later Muller will have to reevaluate his data set so that it too agrees.
Sorry for the snark, but frankly it is beyond the pale this is called “science”. It certainly wouldn’t pass any industrial standard I’m aware of.

David Falkner
June 6, 2012 3:50 pm

noaaprogrammer:
Not sure Benford’s analysis is appropriate here. I think you’d be likely to skew a little heavy towards 1 as the first digit just because of the way the numbers fall on the orders of magnitude involved.

Neville
June 6, 2012 3:50 pm

What is wrong with you Americans? If you’re sure of the info above why don’t you take them to court?
There are trillions of dollars riding on the CAGW fraud and you’d be doing your taxpayers and all other taxpayers on the planet a favour if you could expose this fraud and con trick.
First move should be a spot on Fox News or whatever to get the ball rolling. But somehow you must get some REAL publicity. If a major network runs with this story others must respond, and then print and pollies have to join in and respond as well.
This is the only way it will work, always has, always will. You’re currently discussing this in house but you need to break out into the neighbourhood.

DR
June 6, 2012 3:57 pm

Nick Stokes says:
June 6, 2012 at 3:42 pm
As Phil C noted, the Jeff Masters link goes on to explain the reasons for the change. And I think they make a lot of sense. They are going to a more modern gridded system. I think this bullet point is the key:
“1. For the TCDD, each divisional value from 1931-present is simply the arithmetic average of the station data within it, a computational practice that results in a bias when a division is spatially under sampled in a month (e.g., because some stations did not report) or is climatologically inhomogeneous in general (e.g., due to large variations in topography).”
In the original calc, the average for Arizona was just the average of whatever stations reported in each month. That’s pretty much what you have to do if you’re working with pen and paper.
It’s the reason why most climate work is now done with anomalies. Then you don’t get the effect that a month will seem warmer when mountain stations don’t report, etc.
Anyway, if you have to work with absolute temperatures, then you have to look at what kind of stations you have in the mix when calculating an average. And the least you can do is area weighting, which the new system seems to include. If you have an area represented by few stations, you give them more weight in the average.
That’s probably why the average has gone down. It’s likely that mountain regions in Arizona were underrepresented. If you upweight stations there it brings down the average.
The real question is whether this has anything to do with trend.

Translation: there are no standards in climate “science”, so make it up as you go along. If it cools the past and/or warms the present, it must be correct.
I’d like to see these people pass an A2LA audit.

Green Sand
June 6, 2012 3:57 pm

Nick Stokes says:
June 6, 2012 at 3:42 pm

The real question is whether this has anything to do with trend.
==========================================================
Yup Nick, that really is the question. If a guy stands still in Arizona and the average daily temperature where he stands does not change for a century what is the trend?

wayne Job
June 6, 2012 3:59 pm

Once can be a mistake, twice can be a coincidence, thrice is on purpose. If this is computer generated, one can only assume that the programming is deliberately biased, or it would have been corrected by now.

Steptoe Fan
June 6, 2012 4:08 pm

http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html#QUAL
The seven (7) references at the bottom of the article show as links, yet none of them are active. How can one look under the hood?
Sigh, why do they want to make it hard ?

Glenn
June 6, 2012 4:08 pm

Phil C says:
June 6, 2012 at 2:34 pm
“In your view, if I happened to be stationed a mere ten yards away from another researcher, and we are supposed to be two people covering an area of hundreds of square miles, I should be concerned that my hard work was “tossed into a climate data soup” rather than properly considered as not accurately representative of the region we were supposed to be covering?”
He should be concerned for his own sanity as well as the other’s, by thinking that temp data for an area of hundreds of square miles could be taken from two measurement locations ten yards apart. I’m concerned for those that think this needs to be considered at all.

June 6, 2012 4:09 pm

I would like (love) to see an audit comparing some of the specific paper records for sites against the reported results for those sites as shown in Anthony’s surface project. Paper to Excel, column to column, single-site comps. Adding the audit of the real records to the surface project as a whole would be interesting. As Reported meets As Adjusted.
I have a funny feeling that the reported 1 degree of warming in the last century is possibly off a bit more than anyone is ready to accept. The irony is if they have flipped the trend, like the upside-down series embedded in other work.

Glenn
June 6, 2012 4:10 pm

Green Sand says:
June 6, 2012 at 3:57 pm
” If a guy stands still in Arizona and the average daily temperature where he stands does not change for a century what is the trend?”
Depends on his boot size.

WE TOLD YOU SO
June 6, 2012 4:13 pm

Anthony, I’ll kindly point to you my friend that, many MANY of us have been telling anyone who wouldn’t listen, that
*IT HASN’T WARMED EVEN UNUSUALLY for practically speaking, in instrument-recorded history.*
***All this warming over and above standard deviations leaving last cold period,***
***IS BEING CONSTRUCTED through FALSIFICATION of R.E.C.O.R.D.S.***
And, there’s another thing: this Magic Gas thing everyone was so scared of?
Where’s the infrared astronomy & optical astronomy fields’ constant chorus for us to all
***LOOK at the PHOTOS of EVER RISING EVIDENCE of HEAT in the ATMOSPHERE accompanying rising levels of THE MAGIC GAS?***
It’s not there kids, because there ISN’T any more infrared in the atmosphere now than usual,
READ MY LIPS: THERE’S L.E.S.S.
This is, simply, i.m.p.o.s.s.i.b.l.e.
unless those people are committing C.R.I.M.E.
Anthony I know I come here and type like I’m a walking billboard or something: but my field, is two way radio communications. Do you know what the generic name for this field is?
The electronic engineering associated with the calibration maintenance and usage of all instrumentation
associated with the transmission, capture, and analysis of electromagnetic energy through the atmosphere, space, and industrial compounds.
This includes the controls associated with nearly everything under the sun that has a button: and I assure you,
there are a dozen ways there is proof there’s no more energy in the atmosphere, so many THEY COULDN’T BE HIDDEN: and people have been telling again, anyone who *wouldn’t listen*
that, that magic gas story is utter, utter, fabrication from nearly syllable one. Utter falsification of everything under the sun I say, and the fact that industries associated with advanced instrumentation aren’t making note in their industry rags about the ‘recent accommodations in calibration/instrumentation technologies to the ever warming environment’.
This has been crime
from
the
Beginning.

WE TOLD YOU SO
June 6, 2012 4:16 pm

There’s a typo at the end there where I should have inserted, “…to the ever warming environment, are proof that
This has been crime
etc.
Sorry

AlexS
June 6, 2012 4:18 pm

Taphonomic says:
June 6, 2012 at 1:55 pm
“And if all others accepted the lie which the Party imposed -if all records told the same tale — then the lie passed into history and became truth. ‘Who controls the past,’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’ And yet the past, though of its nature alterable, never had been altered. Whatever was true now was true from everlasting to everlasting. It was quite simple. All that was needed was an unending series of victories over your own memory.”
Very appropriate.

Phil C
June 6, 2012 4:22 pm

And you didn’t read what I said about them. They make no sense to me.
I did read what you said and what the GrDD authors write makes sense to me. Here’s a simplified description of what they’ve done:
1. draw a tic-tac-toe grid (3 x 3 square)
2. fill in all 9 squares with a temperature reading.
3. fill in the 3 top squares again with an additional temperature reading.
4. You’ve now got 12 readings: average them. That’s the old method (TCDD).
Is that the average for the entire area? Of course it isn’t. You’re taking too many readings in the top row. To correct for the bias in the top row, you should first average the two numbers in each square of the top row, and then use those three readings with the remaining six to find your average over the entire area.
This is exactly how I understand what the GrDD authors are doing when they write this:
For the TCDD, each divisional value from 1931-present is simply the arithmetic average of the station data within it, a computational practice that results in a bias when a division is spatially under sampled in a month (e.g., because some stations did not report) or is climatologically inhomogeneous in general (e.g., due to large variations in topography).
They’ve added additional adjustments for missing data, etc., and of course the computation of the areas is far more complicated, but the impact is the same: an average which treats every reading without introducing bias for the size of the area. Using this method means that there’s no reason why the numbers should “balance out” as you write, because that is purely a function of area size and the readings. (A runnable sketch of this cell-first averaging appears after the reply below.)
REPLY: The failure in your logic is that grids don’t follow state boundaries, thus when giving an area average number from grid data, it cannot accurately represent the state average (as calculated before) because it will also include stations outside of the state boundary. How coarse the grid is determines how many stations outside of the state get included. Thus when NCDC displays a grid-derived state value, it is not truly representative. But there’s more to it than that. The majority of the states show this flaw. WUWT? I’m trying to locate the author to find out more. – Anthony
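Phil C’s tic-tac-toe example can be run directly. This is only a sketch of the cell-first averaging idea he describes, with invented readings – not NCDC’s actual algorithm:

```python
# Phil C's tic-tac-toe example with invented readings: the top row is
# sampled twice per cell. Pooling all readings over-weights that row;
# averaging within each cell first removes the over-weighting.
grid = [
    [[60.0, 62.0], [61.0, 63.0], [59.0, 61.0]],  # top row: two readings per cell
    [[70.0],       [71.0],       [69.0]],
    [[72.0],       [73.0],       [71.0]],
]

all_readings = [r for row in grid for cell in row for r in cell]
pooled = sum(all_readings) / len(all_readings)  # old TCDD-style: one big average

cell_means = [sum(cell) / len(cell) for row in grid for cell in row]
gridded = sum(cell_means) / len(cell_means)     # GrDD-style: cells first

# The over-sampled (cool) top row drags the pooled mean down: 66.00 vs 67.67.
print(f"pooled {pooled:.2f} vs gridded {gridded:.2f}")
```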

KenB
June 6, 2012 4:26 pm

Will the great lawsuit flood of this century begin with a trickle, then spread to encompass professional scientific organisations (cozy clubs) that “should have known” but defended the indefensible, or will they quickly join the accusers/lawyers to try and cover their butts?

just some guy
June 6, 2012 4:26 pm

Glenn says:
June 6, 2012 at 4:08 pm
Phil C says:
June 6, 2012 at 2:34 pm
“In your view, if I happened to be stationed a mere ten yards away from another researcher, and we are supposed to be two people covering an area of hundreds of square miles, I should be concerned that my hard work was “tossed into a climate data soup” rather than properly considered as not accurately representative of the region we were supposed to be covering?”
He should be concerned for his own sanity as well as the other’s, by thinking that temp data for an area of hundreds of square miles could be taken from two measurement locations ten yards apart. I’m concerned for those that think this needs to be considered at all.

Exactly true. And therein lies the problem. The resulting product can only possibly be an estimate, and in this case, not a very accurate one. Estimates require the estimator to make judgements in order to fill in the blanks. Whether intentional or not, those judgements are subject to estimator bias.
The result is that we have output from protracted computer models being called “instrumental data”.

Green Sand
June 6, 2012 4:27 pm

Glenn says:
June 6, 2012 at 4:10 pm

“Depends on his boot size.”
================================
We are sure that the boot size was constant but are considering possible adjustments for variations in sole thickness and changes over time from natural to man-made materials.

just some guy
June 6, 2012 4:31 pm

“1. draw a tic-tac-toe grid (3 x 3 square)
2. fill in all 9 squares with a temperature reading.
3. fill in the 3 top squares again with an additional temperature reading.
4. You’ve now got 12 readings: average them. That’s the old method (TCDD).”
This would be fine, if only it were that simple. Unfortunately, it is not, since weather station data going back over 100 years does not conveniently provide raw data to fill in all those tic-tac-toe squares.

davidmhoffer
June 6, 2012 4:40 pm

Green Sand says:
June 6, 2012 at 3:30 pm
DocMartyn says:
June 6, 2012 at 3:12 pm
———————————————————–
Just a few minutes of Fahrenheit 451: the autoignition temperature of paper.
>>>>>>>>>>>>>>>
A rather apt observation, and one of the great works of science fiction of all time – fitting for this thread, considering that the theme of the book was government destroying all paper records of everything so that they would be the only ones with the “truth”.
Sadly, Ray Bradbury (the author) passed away yesterday.

June 6, 2012 4:45 pm

If they modify the data that is printed and saved in the Library of Congress, imagine what they are doing with the data from satellites!

Tom in Worcester
June 6, 2012 5:00 pm

If this organization is publicly funded, someone should send a link to someone via Senator Inhofe’s website.

June 6, 2012 5:00 pm

It’s difficult to believe that Anthony still makes a living peddling this twaddle. Go to the NCDC here, http://www.ncdc.noaa.gov/cmb-faq/temperature-monitoring.html, and you will see the adjusted temps, the raw data and the complete methodology for ensuring the most accurate records. After which, come back here and explain why the less accurate data is preferable. You might also want to explain why the NCDC adjusted SSTs decreased the trend over the raw data. JP

u.k.(us)
June 6, 2012 5:04 pm

LazyTeenager says:
June 6, 2012 at 3:25 pm
“Let’s see, speaking hypothetically cos I don’t know, ……”
=============
Join the club.
We’ll try to integrate the information offered, while paying taxes to our betters that have shown themselves as most qualified to spend our hard earned money.
The good times are over, one way or another.

Evan Jones
Editor
June 6, 2012 5:05 pm

atarsinc, I approved your post, but I’m here to say you really don’t have any idea what you are talking about.
And, yes, I’ve carefully studied the raw and adjusted USHCN data. The adjustment procedure is shocking and scandalous.

just some guy
June 6, 2012 5:10 pm

REPLY: “……I’m trying to locate the author to find out more. – Anthony”
Anthony, if you are chasing down more information about the adjustments, perhaps the “TOB” adjustment would be worth a look.
“Next, the temperature data are adjusted for the time-of-observation bias (Karl, et al. 1986) which occurs when observing times are changed from midnight to some time earlier in the day. ”
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html#QUAL
From the graph on that site, TOB (the dashed line) appears to be the biggest driver of suppressing earlier 20th century temperatures.
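A toy simulation shows why the observation hour matters. The diurnal cycle below is invented, and this is only an illustration of the effect, not the Karl et al. (1986) adjustment NCDC actually applies:

```python
# Toy simulation of time-of-observation bias with an invented diurnal
# cycle. A max thermometer reset at 5 pm starts the new "day" at the warm
# late-afternoon temperature, so hot afternoons can double count.
import random

random.seed(0)

def hourly_temps(days, base=60.0):
    """Fake hourly series: each day ramps to a random mid-afternoon peak."""
    temps = []
    for _ in range(days):
        peak = base + random.uniform(5.0, 25.0)
        for hour in range(24):
            shape = max(0.0, 1.0 - abs(hour - 15) / 10.0)  # warmest at 3 pm
            temps.append(base + (peak - base) * shape)
    return temps

def mean_daily_max(temps, reset_hour):
    """Mean of recorded maxima when the thermometer is reset at reset_hour."""
    maxima, current = [], float("-inf")
    for i, t in enumerate(temps):
        current = max(current, t)
        if i % 24 == reset_hour:
            maxima.append(current)
            current = t  # after reset the marker sits at the current temp
    return sum(maxima) / len(maxima)

temps = hourly_temps(365)
print(f"midnight reset: {mean_daily_max(temps, 0):.2f}")
print(f"5 pm reset:     {mean_daily_max(temps, 17):.2f}")  # biased warm
```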

June 6, 2012 5:10 pm

Reblogged this on Climate Ponderings and commented:
Caught fingering the cool to make it warm

Rogelio escobar
June 6, 2012 5:12 pm

I would not be surprised if this is removed from the wunderground site soon… make a photocopy

June 6, 2012 5:12 pm

Now I’m really confused. The Medieval Warm Period disappeared because of a tree ring. We’re supposed to trust that tree ring. A tree ring is wood. Paper is made from wood.
Now the recent records recorded on parer are being “disappeared”. We’re not supposed to trust the numbers recorded on that wood. What’s wrong with these tree ring records?

June 6, 2012 5:15 pm

“on parer are being” should be “on paper are being”

davidmhoffer
June 6, 2012 5:15 pm

atarsinc;
You might also want to explain why the NCDC adjusted SSTs decreased the trend over the raw data. JP>>>>
Well I cannot explain it. What I do know is that calculating an anomaly from a moving base period makes about as much sense as counting the tires in a junkyard to figure out how many cars are on the road today. There is no logical explanation for doing so, and it makes the anomaly data as presented useless. As for your other question, it is only your opinion that the records are more accurate. As several commenters have pointed out, the possibility of the “more accurate” methodology having a consistent effect of decreasing average temperatures is unlikely at best, a coincidence on a par with getting struck by lightning 21 times in a row and then winning the lottery with the same number twice in succession.

June 6, 2012 5:20 pm

atarsinc says:
“It’s difficult to believe that Anthony still makes a living peddling this twaddle.”
Actually, it’s difficult to believe ‘atarsinc’ is such a dope. He needs to read the article before making such a stupid comment. One of the article’s links shows that:
“…the average state temperature records used in the current trends analysis by the NCDC (National Climate Data Center) do not reflect the actual published records of such as they appeared in the Monthly Weather Reviews and Climatological Data Summaries of years past.”
NCDC is changing the historical temperature record. It has been pointed out repeatedly on this site that government agencies like NCDC, GISS, NOAA, NSIDC and others routinely fabricate the numbers in order to falsely show a rapidly warming planet.
Fabricating the temperature record is dishonest. But people like ‘atarsinc’ refuse to believe their lying eyes, because their belief is based on religion, and thus is impervious to reason and facts. Amazing but true.

Pamela Gray
June 6, 2012 5:23 pm

Take 100 seed plots of a variety being tested for rust (yes, plants can get rusted). Remove more than half of them in a systematic (not random) way. Take what is left and bury some of the cans further into the ground and raise some of the other cans up a bit. Now take measurements of plant growth and declare the variety hardy against rust. Sounds like snake oil research, doesn’t it? But this seems to be OK to do these days. Why not? Opening the door to shady research and data fudge seems to be the way our current governors and White House administration want it done.
Fool me once, shame on you; fool me twice, shame on me. Do NOT vote for anyone on the watermelon ticket. In Oregon, we DEPEND on agriculture jobs. Without farmers and ranchers we will go the way of nutty California. If the NOAA has been fooling with the data, making today warmer than in the past, that will not sit well with farmers who depend on accurate information, not adjusted information.
Hope all is well down in Central and South Central Oregon. They had a freeze warning early this morning. Temps dipped low enough to kill entire crops. That would be temps in the 20s, folks. But it is a warm freeze.

David Falkner
June 6, 2012 5:27 pm

atarsinc says:
June 6, 2012 at 5:00 pm
It’s difficult to believe that Anthony still makes a living peddling this twaddle. Go to the NCDC here, http://www.ncdc.noaa.gov/cmb-faq/temperature-monitoring.html, and you will see the adjusted temps, the raw data and the complete methodology for ensuring the most accurate records. After which, come back here and explain why the less accurate data is preferable. You might also want to explain why the NCDC adjusted SSTs decreased the trend over the raw data. JP
——————
Explain why the less accurate data is preferable? We could start with the fact that the thermometers are not as accurate as the data. That does present credibility issues for the *ahem* “more” accurate data.

u.k.(us)
June 6, 2012 5:31 pm

atarsinc says:
June 6, 2012 at 5:00 pm
It’s difficult to believe that Anthony still makes a living peddling this twaddle.
=================
Yet, you still read it.
So quit yer bitchin’, and your attempts to muddy the waters.
Go spread your knowledge elsewhere.
The polls say the “faithful” are losing faith.
Good luck.

EJ
June 6, 2012 5:33 pm

Gotta love the tribute to the data and archiving our grandparents set up. That, for me, makes this willy-nilly ‘processing’ going on a huge letdown. It’s gotten to the point where there is no OFFICIAL record. HadCRUT, GISS, NOAA, BEST … Now we are distracted about who did what, when and why to some data set. Revision 1, 2, 3, or however many it takes to produce the correct graph.
This is a joke – a sad, sad result of billions of dollars spent, wonderful talents wasted and the potential demise of the public’s trust in the scientific method.

June 6, 2012 5:37 pm

I demand that Dr. Peter Gleick do an ethics investigation of the people maintaining these records. We need to get to the bottom of the matter regardless of how we prove it!

Rick Bradford
June 6, 2012 5:45 pm

“… Put simply, it’s not about weather, it’s about power. The movement is everything, the goal is nothing. It’s not about curbing CO2 emissions; it’s about creating a mob—a mass cult whose legions empower its shamans, and whose systematic anti-human ideology serves as a basis for reorganizing society along totalitarian lines. ”
— Robert Zubrin, Merchants of Despair

thingadonta
June 6, 2012 5:50 pm

It’s curious that the only people who don’t ever seem to be aware of the problems of researcher bias are the researchers themselves.

kadaka (KD Knoebel)
June 6, 2012 6:18 pm

From atarsinc on June 6, 2012 at 5:00 pm:

It’s difficult to believe that Anthony still makes a living peddling this twaddle.

Could be because the name would be Joe Romm, not Anthony Watts. Anthony makes his living from the full-time job of running his business (at least one, there’s The Weathershop and he’s mentioned building computer systems for clients), plus he earlier reported doing weather reports for radio.
From this site, hosted free on WordPress which receives the ad revenue, Anthony would be lucky to get beer money. Donations are accepted but that’s earmarked for the Surfacestations project.
Sheesh, you can’t even get the name of the blogger right and you expect to be taken seriously on this site? Try harder.

Gary Hladik
June 6, 2012 6:21 pm

atarsinc says (June 6, 2012 at 5:00 pm): “It’s difficult to believe that Anthony still makes a living peddling this twaddle.”
Anthony makes a living from WUWT???? The coal company “incentive” money finally came? Dang, where’s MY check? True, I don’t have an official “Planet Wrecker” certificate
http://wattsupwiththat.com/2012/06/04/quote-of-the-week-i-get-an-endorsment-by-bill-mckibben-plus-a-certificate-in-certified-planet-wrecking/
like Anthony, but I try to do my part. If only I were certifiable…
/sarc (for the really, really humor-impaired)

Bobl
June 6, 2012 6:27 pm

This part of NOAA should be defunded or moved. NOAA should only be responsible for reading the thermometers, and someone else can adjust them. To put the responsibility for the raw data in the same hands as the adjusted data is open to fraud. No adjustments should EVER be made to raw data records; the adjustment process should be applied downstream – in the models – so we can see the adjustment in the process. This avoids tampering and ensures we know the sources of error. In the case of, say, satellite data, as well as the raw data – if instrumentation biases are known to exist then NOAA should publish the details of the discovered bias and the algorithm for adjustment of the RAW data – they should not supply preadjusted data sets. Preadjusting data is in my opinion professional fraud.
Readers in the united states should contact their representatives and insist that the keepers of raw data keep that record pristine, even if there is the BEST reason to adjust it up (or down), this should clearly be done only in an adjusted data set kept entirely separate from the raw data preferably managed by a different organisation.
As I see it the whole scientific debate is being clouded by the availability of pre-adjusted data sets.

theduke
June 6, 2012 6:31 pm

What they are saying is that what scientists thought the temperature was back then doesn’t really matter. It’s the interpretation of scientists now of the temperature then that matters. It’s like re-writing history to suit present day biases. In the field of history, those with a true understanding of their subject call it “historical presentism,” which is essentially judging the behavior of historical actors by present day standards, which are, of course, always shifting.
The data must conform to the narrative!

Peter Miller
June 6, 2012 6:34 pm

Same old story, if the data doesn’t fit the models, then change the data,
Climate Change 101.

Bill H
June 6, 2012 6:50 pm

tallbloke says:
June 6, 2012 at 2:07 pm
I hope these data vandals are keeping a careful record of their ‘adjustments’ so the records can be un-bent later.
…………………………………………………………………
I seem to recall a recent event where some British chaps rewrote the temps and then dumped the original data…. OOOPS….
Climategate continues unabated… and James Hansen seems to be right in the middle of this adjustment. When you lose the argument because Mother Nature proves you a fool, his answer is to change history… déjà vu… here we go again…

Steven Kopits
June 6, 2012 6:50 pm

They adjusted the temperature down 3 deg? On avg? For the whole state? Month after month? Wow. That’s a really large adjustment.

just some guy
June 6, 2012 6:59 pm

This might be an adjustment for “time of observation” bias.

David
June 6, 2012 7:05 pm

Phil C says:
June 6, 2012 at 4:22 pm
And you didn’t read what I said about them. They make no sense to me.
I did read what you said and what the GrDD authors write makes sense to me. Here’s a simplified description of what they’ve done:
1. draw a tic-tac-toe grid (3 x 3 square)
2. fill in all 9 squares with a temperature reading.
3. fill in the 3 top squares again with an additional temperature reading.
4. You’ve now got 12 readings: average them. That’s the old method (TCDD).
Is that the average for the entire area? Of course it isn’t. You’re taking too many readings in the top row. To correct for the bias in the top row, you should first average the two numbers in each square of the top row, and then use those three readings with the remaining six to find your average over the entire area.
=======================
No reason for the past to always go down, it should balance out.

Gail Combs
June 6, 2012 7:12 pm

Pamela Gray says: @ June 6, 2012 at 5:23 pm
….. If the NOAA has been fooling with the data, making today warmer than in the past, that will not sit well with farmers who depend on accurate information, not adjusted information.
Hope all is well down in Central and South Central Oregon. They had a freeze warning early this morning. Temps dipped low enough to kill entire crops. That would be temps in the 20s, folks. But it is a warm freeze.
_________________________________
It is not exactly warm here on the east coast either. It is 64F at 10:00 pm and was 53F this morning. Usually it is in the mid eighties or higher.
At the rate the US temperature data is changing the records we will have a mile of ice sitting on New York City and the data will still be showing a warming trend.
http://jonova.s3.amazonaws.com/graphs/giss/hansen-giss-1940-1980.gif

Brian D
June 6, 2012 7:12 pm

I found a paper by David Head from 1985 on the need for time of observation bias adjustments. A good layman’s read.
http://www.isws.illinois.edu/pubdoc/MP/ISWSMP-81.pdf
Karl et al. paper from 1986
http://journals.ametsoc.org/doi/pdf/10.1175/1520-0450%281986%29025%3C0145%3AAMTETT%3E2.0.CO%3B2
Vose et al. paper from 2003
http://www.docstoc.com/docs/98268909/An-evaluation-of-the-time-of-observation-bias-adjustment-in-the-

Steve
June 6, 2012 7:15 pm

When all else fails, manipulate the data….

jim2
June 6, 2012 7:15 pm

It takes a finely tuned supercomputer to turn out sophisticated results like those!

Brian D
June 6, 2012 7:30 pm

Steven Goddard made an interesting comment back in Aug of last year on his blog concerning this adjustment. As a kid, he used to reset the markers on the max-min thermometers at bedtime because he was fully aware of the problem TOBS is adjusting for. How many other observers did this?
But there seems to be more cooling that has been done to the past even after this was put into the data set. At least it seems that way to me.

Ted
June 6, 2012 7:42 pm

A bunch of crooks cooling the data – they will never stop till the grant money runs out!

Brian D
June 6, 2012 8:06 pm

NCDC has Feb 1934 in MN at 12.7F, while the chart above has 14.1F (1.4F difference). Next door in WI, they have 13.9F, while the chart above has 15.3F (1.4F difference).

Follow the Money
June 6, 2012 8:20 pm

4. Finally, none of the TCDD’s station-based temperature records contain adjustments for historical changes in observation time, station location, or temperature instrumentation, inhomogeneities which further bias temporal trends (Peterson et al., 1998).”
Wow. That’s an elegant solution. So many ways to adjust real data in just one sentence!

June 6, 2012 8:48 pm

Curious that NCDC adjustments LOWERED the SSTs. You might want to actually look at the methodologies before proffering this twaddle. Skeptics, real skeptics, can find that information here: http://www.ncdc.noaa.gov/cmb-faq/temperature-monitoring.html . Then come back and tell us why data that is known to be inaccurate is preferable to the more accurate data.
Especially for Gail: What does a cold snap in Oregon have to do with AGW? Answer: absolutely nothing. Thanks for the weather report. We’re all enthralled. JP

Nick Stokes
June 6, 2012 8:51 pm

There’s a recorded conference presentation here on the transition from TCDD to GRDD, and its effects.

June 6, 2012 9:03 pm

I see that ‘atarsinc’ has attempted to re-frame the discussion away from NOAA’s changing the historical land temperature record, to unrelated sea surface temperatures.
BZ-Z-Z-Z-ZT!!
Nice try, thanx for playing, noob, and Vanna has some lovely parting gifts for you.☺

June 6, 2012 9:16 pm

“Golly gee, I wonder what Mosher will say about this? Will it be the standard “trust the climate scientists, they know what they’re doing”? ”
He will say exactly what Nick Stokes said.
Read the whole article.

davidmhoffer
June 6, 2012 9:23 pm

Smokey says:
June 6, 2012 at 9:03 pm
I see that ‘atarsinc’ has attempted to re-frame the discussion away from NOAA’s
>>>>>>>>>>>
That was his second shot at it too. Now he’s blown off both his feet, what do you think he will aim at?

June 6, 2012 9:29 pm

Follow the Money says:
June 6, 2012 at 8:20 pm (Edit)
4. Finally, none of the TCDD’s station-based temperature records contain adjustments for historical changes in observation time, station location, or temperature instrumentation, inhomogeneities which further bias temporal trends (Peterson et al., 1998).”
Wow. That’s an elegant solution. So many ways to adjust real data in just one sentence!
#############
1. Changing the time of observation from midday to morning, or midnight to noon, WILL create a bias. Depending on the time and place that bias can be positive or negative, large or small.
Don’t believe me? Read skeptic John Daly on the issue. Still don’t believe? Read the TOBS thread at Climate Audit. Still don’t believe? Go get 5-minute data from CRN and test it for yourself. Still don’t believe? Ask Roy Spencer; he ran into this problem with his latest work.
2. Station location. If you move from sea level to 500m you had better account for the station move. There are a couple of ways. Roy Spencer, for example, runs a regression to empirically estimate a correction for the different altitudes between stations. Others use a standard lapse rate.
You surely recall that some people complained that higher-altitude and higher-latitude stations were being dropped (Ross McKitrick) and that this change would cause a bias. So adjusting for changes in a station’s altitude or latitude would seem to make sense, if you believe Ross’s complaint (that it causes a bias) and if you read Roy’s work.
3. Instrumentation. Yes, in the 80s there was a change to MMTS. Side-by-side calibration studies suggested an adjustment.
When you see these changes in the methods of observing you can ignore the bias, adjust for the bias, OR you can just “scalpel” the stations at these changes (a Willis suggestion), like Berkeley does. A sketch of the lapse-rate approach is below.
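To make the lapse-rate option in point 2 concrete, here is a minimal sketch of correcting a documented station move with a standard lapse rate. The 6.5 °C/km figure is the usual mean tropospheric value; the series, elevations, and function are illustrative assumptions, not NCDC’s actual procedure.

```python
# Sketch: splice a temperature series across a documented station move
# using a standard lapse rate. Everything here is illustrative.

STANDARD_LAPSE_C_PER_KM = 6.5  # typical mean tropospheric lapse rate

def adjust_for_station_move(temps_c, move_index, old_elev_m, new_elev_m):
    """Shift post-move readings back to old-elevation equivalents so
    the record is comparable across the move."""
    correction = STANDARD_LAPSE_C_PER_KM * (new_elev_m - old_elev_m) / 1000.0
    return [t + correction if i >= move_index else t
            for i, t in enumerate(temps_c)]

# Station moved from 10 m to 510 m elevation starting at index 3; the
# higher site reads ~3.25 C cooler, so post-move values are raised.
series = [15.2, 15.4, 15.1, 11.9, 12.0, 11.8]
print(adjust_for_station_move(series, 3, 10, 510))
# approximately [15.2, 15.4, 15.1, 15.15, 15.25, 15.05]
```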

David Falkner
June 6, 2012 9:33 pm

Ahh, yes, gridcells. Because the uninhabited areas where no one lived in the past (likely for good reason) may become uninhabitable. Just because there is one station (or three) in a gridcell standing against many others does not mean you can smear them together. Mountain regions are supposed to respond differently to AGW because of the elevation, aren’t they? What sense does it make to average them together in AZ?

Grey Lensman
June 6, 2012 9:33 pm

Why do they use temperature anomaly as a benchmark?
The reason is that it’s simple, and it’s a double whammy.
First, they select a low datum, so the anomaly is always plus/warmer.
Second, the choice of the word “anomaly” itself inculcates the idea of something out of the ordinary.
Thus at a stroke they prime the masses with their “warmer, and out of the ordinary” meme.
Q.E.D.
It’s not science, it’s fraud and deception.

RoHa
June 6, 2012 9:35 pm

And this is why we should always keep hard copies.

mib8
June 6, 2012 9:36 pm

So, ditch the grid (tic-tac-toe squares), ditch the state-wide averages. Link the temperatures to the locations in 3-D (within the available resolution). Let the microclimates take care of themselves as much as possible, since sufficient information is not available to make scientific adjustments (e.g. for albedo change when the weather station pavement was painted white).

Follow the Money
June 6, 2012 9:40 pm

Nick writes: “As Phil C noted, the Jeff Masters link goes on to explain the reasons for the change. And I think they make a lot of sense. They are going to a more modern gridded system. I think this bullet point is the key: ‘1. …’”
Generally, in such matters, the first “point” or reason forwarded is the least significant, although the proponent may or may not want you to think of the “first” reason as primary. It is always the best-sounding; that’s why it’s first. But it is more productive to look at what is revealed last. In newspaper lingo, “under the fold” is where the real nitty-gritty can be found. In a prospectus context, always read the footnotes first; that’s where the proponents hint at their weaknesses and try to establish plausible deniability for later proceedings wherein their lack of disclosure is contested.
The bullet point that is more likely the source of the plurality or majority of adjustments: “4”.

Greg House
June 6, 2012 9:55 pm

Steven Mosher says:
June 6, 2012 at 9:29 pm
1. changing the time of observation from mid day to morning or mid night to noon, WILL create a bias….Roy Spencer, for example, runs a regression to empirically estimate a correction …Others use a standard lapse rate….you can ignore the bias, adjust for the bias…
===============================================================
The problem is that you guys do not have reliable data to conclude anything. All these “methods” including grids, area weighting, reconstructions etc. have no basis in real science. You just do it and CALL it science.

Glenn
June 6, 2012 10:06 pm

Steven Mosher says:
June 6, 2012 at 9:16 pm
“he will say exactly what Nick Stokes said.”
You think it likely that Feb 1934 Arizona average was too high due to underrepresentation of mountainous regions? And it isn’t clear to you what effect the methodology of “adjusting” that monthly temp would have on a trend?

Bill Illis
June 6, 2012 10:07 pm

The Time of Observation Bias was known about long, long ago.
The Weather Bureau was already directing its sites to use regular observation times in the early 1900s so that the effect could be adjusted for.
There is first the “Report on the temperatures and vapor tensions of the United States reduced to a homogeneous system of 24 hourly observations for the 33-year interval, 1873-1905″ Bigelow 1909.
Then “Effect of Observation Time of mean temperature estimation” Rumbaugh 1934.
http://docs.lib.noaa.gov/rescue/mwr/062/mwr-062-10-0375.pdf
Then there is “Effect of changing observation time on mean temperature” Mitchell 1958
Then there is “Temperature adjustments for discrepancies due to time of observation” directive to the director of the NCDC – Weaver, Miller 1970
Then Karl 1987, then Vose, Williams, Peterson, Menne, Quinlin, Easterling, Jones, Thorne every few years.
All these adjustments are a scam they bring out every few years, and we should all know that. NCDC just did it all over again with GHCN-M version 3.1.0, implemented late last year. This added another 0.2C to the US temperature trend; it’s just that these latest changes have not been picked up yet by Mosher and Zeke. Take Zeke’s chart above and add another 0.2C in adjustments to it.
What can we do about it? Nothing. They are brave enough to even adjust the ENSO now. We need some from-the-top house-cleaning or the historic temperature record will just continually be adjusted higher and higher. They’ve got away with it so far and have received more funding for the effort. Why would they stop now?

DirkH
June 6, 2012 10:07 pm

Any reason to rewrite the past is a good reason for NOAA as long as it makes the past cooler. Defund NOAA; you won’t get any worthwhile information from them anyway.

temp
June 6, 2012 10:11 pm

Steven Mosher says:
June 6, 2012 at 9:29 pm
1. I completely agree… so why change it? Oh right, changing it means you can make adjustments that fit the researchers’ biases and lets you paint the picture that you want…
2. Part A: Station location.
If there is no choice but to move the station for whatever reason, then adjusting it based on well-documented metadata is fine. One must also include a very detailed but very simple display of all this information when one accesses that station’s data. Which clearly is not done, and in many cases it is purposely hidden/thrown away.
Part B: Dropping stations. There is no excuse to drop a station unless you have no other choice. The vast majority of stations that have been dropped still exist and produce data. This “dropping” is clearly done to create the “need” for adjustments, allowing biased researchers to adjust in their own personal biases.
3. Whenever you make adjustments, expect to be challenged on them, because often the adjustments may be unneeded, wrong, fake, or any of a host of other things. The fact that multiple adjustments are made on top of each other, often without the next person knowing or understanding past adjustments, results in over- and under-adjustment… normally with the outcome favoring the bias of the researchers.

atarsinc
June 6, 2012 10:17 pm

So, if NOAA is cooking the books, why did they adjust the SST trend lower? If anyone has a reasoned explanation for a flawed methodology, present it. If you simply want to speculate about imagined cabals, you’re at the right site. JP

just some guy
June 6, 2012 10:19 pm

“1. changing the time of observation from mid day to morning or mid night to noon, WILL create a bias. Depending on the time and place that bias can be positive or negative, large or small.”
I’ll agree with this (after a good scouring of the internet.)
However, it bugs me that I can’t find any records or evidence showing the actual time of day used for early 20th century station temperature recordings. NOAA would need this in order to compute TOBS. Did they record the time of day back in 1895, or perhaps it wasn’t important to them? Is NOAA making general assumptions for that? Maybe not, maybe so. I can’t find any proof.

Stephen Rasey
June 6, 2012 10:20 pm

I have a real dumb question concerning “time of observation”.
Are not a large segment of the temperature readings made with min-max thermometers? The recorder reads the record late morning or early evening, and the minimum and maximum temperatures for the previous 24 hrs are then recorded. What does time of observation have to do with anything with a min-max thermometer?
If it isn’t a min-max, then how are a min and max determined at a rural station in 1934? Some thousands of people reading a thermometer every hour on the hour?
If the time of day the data is recorded is necessary, what is the likelihood that metadata is available, much less reliable?
Under what 1934 real-world circumstances is a time-of-observation correction NOT a correction without calibration?
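One way to answer the min-max question with a toy model: the reading taken at reset time stays on the thermometer and carries into the next observation day, so a reset near the daily maximum lets one hot afternoon count twice. The hourly “weather” below is an invented sinusoid plus a warm spell; this is a sketch of the effect, not NOAA’s TOBS method.

```python
import math

# Toy min-max thermometer with a once-a-day read-and-reset. The hourly
# temperatures are invented: a daily sinusoid (peak ~3 pm) plus a
# three-day warm spell.

def hourly_temps(days=30):
    temps = []
    for d in range(days):
        base = 20 + (4 if 10 <= d < 13 else 0)
        for h in range(24):
            temps.append(base + 8 * math.sin(math.pi * (h - 9) / 12))
    return temps

def mean_of_daily_means(temps, reset_hour):
    """Read (max+min)/2 and reset the markers once a day at reset_hour.
    The reading at reset time carries into the next day's window, just
    as the physical marker would."""
    means, window = [], None
    for i, t in enumerate(temps):
        if window is not None:
            window.append(t)
        if i % 24 == reset_hour:
            if window:
                means.append((max(window) + min(window)) / 2)
            window = [t]
    return sum(means) / len(means)

temps = hourly_temps()
print(round(mean_of_daily_means(temps, 17), 3))  # ~5 pm reset
print(round(mean_of_daily_means(temps, 7), 3))   # ~7 am reset
# The 5 pm reset comes out slightly warmer: the last hot afternoon's
# reading also sets the maximum of the following (cooler) observation
# day. Changing the reset hour therefore shifts the long-term mean,
# which is the bias the TOBS adjustment targets.
```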

Bill Tuttle
June 6, 2012 10:22 pm

Ian W says:
June 6, 2012 at 2:47 pm
It is a relatively simple choice – mendacity or incompetence – in either case they should not be funded and allowed to continue.

Mendacity wins. The only incompetence they’ve displayed is in not destroying all the archived data.

davidmhoffer
June 6, 2012 10:31 pm

atarsinc says:
June 6, 2012 at 10:17 pm
So, if NOAA is cooking the books, why did they adjust the SST trend lower? If anyone has a reasoned explanation for a flawed methodology, present it.
>>>>>>>>>>>
Never attribute to malice what can be explained by incompetence. For both cases. As for NOAA’s SST trends in particular, they now use a sliding base period, which totally and completely defeats the purpose of anomaly data in the first place. But if you are stating for the record that you think their method is credible, then I assume also that you conclude, given that about 3/4 of the earth is covered by water, that 3/4 of the earth has been cooling, not warming, and there is no cause for alarm in regard to CAGW?

atarsinc
June 6, 2012 10:33 pm

Just Some Guy, kudos for actually looking for the reasoning behind the adjustments rather than just assuming some nefarious intent by the NOAA scientists. You can find the answers you seek here: http://www.ncdc.noaa.gov/cmb-faq/temperature-monitoring.html. JP

atarsinc
June 6, 2012 10:48 pm

Here’s a chart of the adjusted data: http://www1.ncdc.noaa.gov/pub/data/cmb/temperature-monitoring/image003.jpg
Here’s the same with the “raw” data: http://www1.ncdc.noaa.gov/pub/data/cmb/temperature-monitoring/image004.jpg
This is what has Anthony wrapped around his axle? JP
REPLY: That’s GHCN data, not USHCN – we are discussing the US record in this thread, not the global record – wrong comparison on your part. – Anthony

Bill Tuttle
June 6, 2012 10:50 pm

atarsinc says:
June 6, 2012 at 10:17 pm
So, if NOAA is cooking the books, why did they adjust the SST trend lower?

For the same reason they adjusted the historical surface temperatures.
If anyone has a reasoned explanation for a flawed methodology, present it.
That’s NOAA’s question to answer.

davidmhoffer
June 6, 2012 10:55 pm

I’m not certain why we fret so much about anomaly data in the first place. In my opinion, it is useless. The idea is to try and compare temperature trends in disparate temperature ranges. So, if the arctic warms from -40 to -39, that’s an anomaly of +1, and if the tropics warm from +30 to +31, that is also an anomaly of +1. Makes it simple to compare temperature trends in areas with completely different temperature ranges, doesn’t it?
NOT!
What we are arguing about is the IPCC claim that doubling of CO2 adds 3.7 w/m2 to the picture. So, do we want to measure degrees? Or watts? The formula to convert between the two is:
P(w/m2) = 5.67 * 10^-8 * T^4
With T being in degrees K. So pump the numbers into that formula for
T= 233K (-40C) = 167.1 w/m2
T=234K (-39C) = 170.0 w/m2
Anomaly in w/m2 = 2.9
T=303K (+30C) = 477.9 w/m2
T=304K (+31C) = 484.3 w/m2
Anomaly in w/m2 = 6.4 w/m2
How can a temperature increase of one degree = both 2.9 w/m2 and 6.4 w/m2?
Comparing anomalies from different temperature ranges to try and track changes in earth’s energy balance is just plain silly. Take those same numbers and suppose for a moment that the arctic had warmed by two degrees while the tropics had cooled by one. The average temperature of the two regions would have gone up by half a degree. But from an energy balance perspective, the arctic gains about 5.8 w/m2 while the tropics lose about 6.3 w/m2, a net loss of roughly 0.5 w/m2: we would actually be COOLER.
Adjust and justify the adjustments to the anomaly data all you want. If what you are trying to understand is whether CO2 actually contributes 3.7 w/m2 per doubling, all you have is a bunch of temperature anomaly numbers that mean nothing. (A quick script to check this arithmetic follows.)
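For anyone who wants to verify the numbers, here is a minimal script applying the Stefan-Boltzmann relation quoted above; the temperatures are the comment’s illustrative values, and nothing else is assumed.

```python
# Check of the Stefan-Boltzmann arithmetic in the comment above.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def flux(t_kelvin):
    """Blackbody radiative flux in W/m^2."""
    return SIGMA * t_kelvin ** 4

for t0, t1, label in [(233, 234, "arctic -40C -> -39C"),
                      (303, 304, "tropics +30C -> +31C")]:
    print(f"{label}: {flux(t1) - flux(t0):.1f} W/m^2")
# arctic -40C -> -39C: 2.9 W/m^2
# tropics +30C -> +31C: 6.3 W/m^2  (6.4 if you round the fluxes first)
# The same one-degree anomaly corresponds to more than twice the flux
# change at tropical temperatures, which is the comment's central point.
```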

June 6, 2012 10:58 pm

[snip – don’t insult the blog owner ~mod]

Stephen Rasey
June 6, 2012 11:01 pm

Rasey continued: What has time of observation have to do with anything with a min-max thermometer?
Perhaps I should clarify the question. It is assumed that the recording of the thermometer does not occur near the hottest nor the coldest part of the day. ANYONE with the patience to daily record temperatures over the years would not commit such a blunder.
There will be the occasional cold front where the time of record is the warmest temp of the next day. And this presupposes that the recorder does not reset the marker before bedtime.
My father dutifully recorded min-max temperatures from 1960 to 1995 at his home every day when he returned from work. No, he wasn’t paid. His records aren’t part of any official database. He was just an aeronautical engineer who liked data – good data. He daily plotted the data on red K&E 1×1 mm tracing graph paper, along with estimates of rain and snow fall. He taped them on top of each other, with 20 years in a stack taped to the front closet door. The thought that anyone would think to adjust his data after he died would have been foreign to him — and to me.

atarsinc
June 6, 2012 11:02 pm

Bill, you’re not making sense. The Land Temp adjustments tend to show more warming, while the SST adjustments do the opposite. JP

Christopher C. Burt
June 6, 2012 11:04 pm

Dr. Watts,
Christopher C. Burt here, author of the Weather Underground blog you have quoted herein.
First, just for the record, I would like to correct your assumption that my blog was written by Dr. Jeff Masters. I wrote the blog myself, with no input from others in the Weather Underground organization.
Second, you have quoted only the first half of my blog on the subject of the NCDC changes so far as evaluating the changes in long-term means (LTM’s) of temperature averages in the contiguous U.S. (CONUS). This gives a false impression that I disagree with the new methodology the NCDC is now using. That is not the case, as would be obvious if the 2nd half of my blog had been published in your piece today (June 6) titled “NOAA’s National Data Center caught cooling the past-modern records don’t match paper records”.
There are very good reasons for ‘massaging’ the areal temperature (and precipitation) data for the use in ascertaining trends in climate change.
For instance in the example I used of Arizona in 1934: the USWB (U.S. Weather Bureau, Dept. of Agriculture) based their 52.0° state average on data from 78 sites that reported from around the state that particular month of February 1934. Of these 78 sites 3 were in the city of Phoenix (Airport, USWB site, and Indian School), 3 were in Yuma (Citrus Station, USWB, and Valley site), and 2 were in Tucson (Airport and Univ. of Arizona campus). So 8 (more than 10%) of the 78 sites for the entire state were located in three of the warmest cities in the state. Furthermore, 27 of the 78 sites were in Maricopa and Yuma counties, the two warmest counties in the state that comprise 12.6% of the state’s landmass yet account for 34.6% of the observation sites.
It does not take a genius to see this leads to a problem when trying to ascertain a ‘state average’ temperature. You might argue, well why not just stick to these same sites that reported in 1934 and compare to what they now observe in 2012? That is not possible because many of the sites that reported in 1934 have long since stopped supplying data, so it is therefore impossible to keep that timeline continuous. Plus, even the city or town sites that STILL report data now have (since 1934) relocated within their municipalities and/or effected changes in instrumentation.
This is why, for the sake of determining long-term trends, it is not possible to simply use the same raw data from 1934 as in, say, 2012.
The NCDC has thus necessarily come up with a better way of trying to address these issues. So long as they apply the same parameters to the new GrDD (grid system) for ALL the sections for the whole POR (period of record 1895-present) then the actual raw data for trend purposes is irrelevant. And, in fact, the original raw data from all the INDIVIDUAL sites used HAS NOT been changed, it is only the way their record has been interpreted that has (for the above reasons I outlined) been changed.
Yours,
Christopher C. Burt
Weather Historian
Weather Underground
REPLY: Thank you for responding. You are making an argument I have not. I never suggested that the original station data changed, only the state average results. I know you wrote the article; I think perhaps you misinterpreted “..via contributor Christopher C. Burt.” as I wrote above. You should know that I attempted to use the link on the WU article right sidebar itself to contact you earlier today, as well as the contact link on this page of yours: http://extremeweatherguide.com/contact.asp and both failed. Thus, being unable to contact you, I could only do an excerpt, nor could I get clarifications. However, you do make an excellent point in that the US surface temperature record, as I have pointed out many times, is quite the mess, requiring a multitude of adjustments. The fact that all of these adjustments increase the trend is the issue. Also, I have no claim to the title of Dr., though thank you for thinking of me in this way. – Anthony

atarsinc
June 6, 2012 11:14 pm

Anthony, you want to discuss only the one dataset, because it supports your position. I’m discussing both datasets to show that your position is incorrect. NOAA’s adjustments don’t take sides. They are simply attempting to present the most accurate data. If you believe they’ve erred, show us in what way they have done so, instead of insinuating some nefarious purpose. JP
REPLY: No. The article doesn’t mention GHCN, this is solely a discussion about US data, not global data. Your argument is a straw man by adding a second data set that uses a different set of procedures. -Anthony

Bob Ryan
June 6, 2012 11:19 pm

An accountant was asked at an interview: ‘What is 2 + 2?’ ‘What would you like it to be?’ came the confident reply. In finance this is known as creative accounting – the gentle art of persuading the numbers to say what you want. So this is the new climatology: persuading the data to deliver any message required, fit any theory, agree with the output of any model. But then that very same massaged data can be used to create the parameters that populate those very same models. A circular, self-sustaining intellectual discipline. We have a new specialism: ‘creative climatology’. Awesome!

atarsinc
June 6, 2012 11:28 pm

Anthony, many of your readers seem to be assuming that your article was about NOAA making inappropriate adjustments to datasets that tend to show more warming. I’ve presented an example of NOAA making adjustments to a different dataset that tend to show more cooling. My purpose is to show that adjustments are made for reasoned scientific purposes, regardless of whether they show more or less warming. JP
REPLY: That may be, but we aren’t talking about GHCN here, that’s a whole different can of worms. BTW, you should know that you’ve violated site policy by changing names. You’ve previously commented here as John Parsons and now are commenting as “atarsinc”. Pick one and stick with it please. – Anthony

June 6, 2012 11:56 pm

coalsoffire said (June 6, 2012 at 2:40 pm)
“…Being a bear of very little brain I have a question. Can this sort of trick work forever? Will the artificial wave in the temperature anomaly just keep rolling along? In other words if you constantly adjust the past down and tinker a bit upward with the present to produce a constant upward trend, regardless of what is actually happening in the world, does the wave you have created ever crash on the shore? And if the natural variation is upward a bit too, well… bonus…”
And, if the “climate scientists” aren’t careful, their adjustments of the past will create a new “Little Ice Age”. They may be able to adjust the past, but too many people are observing the present.
If you really want to start an argument, ask a “climate scientist” if they can, with 95% certainty, tell us that the recorded values of their selected data-set (HadC, GISS, NOAA, BEST – pick one) will NEVER go back below the “zero” they’ve selected. Their whole world depends not only on a rising trend, but on their anomalies being as far above “zero” as they can make it.
This was one of the first things that made me skeptical about the whole temperature anomaly business – before you can tell how high or low a value is, you’ve got to have a point of reference (their “zero” point). Anybody that’s worked around electronics knows that your point of reference matters. That’s why most measurements are referenced to something (look up a decibel (dB) to see my point – “…a logarithmic unit that indicates the ratio of a physical quantity (usually power or intensity) relative to a specified or implied reference level…”).
Only “climate scientists” allow a “floating zero” in the temperature anomalies. No other science would allow different references to be used to measure the same quantity.
All of the databases seem to use different reference periods – all the way from GISS’s base period of 1951-1980 (showing the highest anomalies) to NCDC’s base period of 1981-2010 (along with its lower anomaly values). A small illustration follows.
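To illustrate the floating-zero point, here is a tiny sketch computing one year’s anomaly against two different base periods; the temperature values are invented for illustration.

```python
# Same value, different "zero": anomalies depend on the base period.
temps = {1951: 14.0, 1980: 14.2, 1981: 14.4, 2010: 14.6, 2012: 14.7}

def anomaly(series, year, base_start, base_end):
    base = [v for y, v in series.items() if base_start <= y <= base_end]
    return series[year] - sum(base) / len(base)

print(round(anomaly(temps, 2012, 1951, 1980), 2))  # +0.6, GISS-style base
print(round(anomaly(temps, 2012, 1981, 2010), 2))  # +0.2, NCDC-style base
# A reader comparing the two "anomalies" directly would see a
# 0.4-degree difference that is purely an artifact of the reference.
```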

Christopher C. Burt
June 7, 2012 12:15 am

Anthony,
My response was to the original article written (I thought by Dr. Watts) not a reply to your comments. Sorry about missing your attempts to contact me. Please in the future use the wunderground.com email system for such (under ‘blogs’ and then ‘weather historian’). My personal web site: extremeweatherguide.com is going through a server host change at the moment and not a reliable conduit for contacting me for the time being (plus that site is only related to my book and its contents).
Again sorry for the confusion!
Chris

crosspatch
June 7, 2012 12:17 am

Here is the problem I have: So if you do things like gridding and arrive at some adjustment for some year, say 1934, fine. Why change that adjustment next month? And why change it colder? And why continue to change it colder and colder as time goes by? Artificially adjusting pre-1950 temperatures colder and post-1950 temperatures warmer is bad enough … but to increase those adjustments with every passing month seems irresponsible.

Reply to  crosspatch
June 7, 2012 12:40 am

Averages change over time. The NCDC is now using a consistent application to determine the temperature averages since 1895 and they show, in the long-term, a warming trend. That is why you are not seeing any cooling (yet) for the entire POR since 1895. Of course, if in fact, temperatures do begin to cool, then it WILL be reflected in the data.

Christopher C. Burt
June 7, 2012 12:30 am

Correction, I’ve confused ‘atarsinc’ with you ‘Anthony Watts’! So please disregard my last missive (except in so far as contacting me via email)!
Best,
Chris

3x2
June 7, 2012 12:31 am

And one by one the historical ‘inconsistencies’ just keep on disappearing.

just some guy
June 7, 2012 12:42 am

Christopher Burt said: “The NCDC has thus necessarily come up with a better way of trying to address these issues. So long as they apply the same parameters to the new GrDD (grid system) for ALL the sections for the whole POR (period of record 1895-present) then the actual raw data for trend purposes is irrelevant. ”
I am going to have to go ahead and disagree with this. NCDC caused the early 20th century temperatures to go down by adding estimated values to fill in this grid system. Therein lies the problem. By adding more and more estimates in lieu of raw data, they are drifting farther and farther away from the true record.
Any estimate, no matter what methodology is used, will always be, at least in part, a product of the estimator’s own bias and experience. This is unavoidable.
As Anthony put it earlier, the US temperature reconstruction is a mess. Not that anyone is to blame for that; who could have known 100 years ago that anyone would really care about 0.3 degrees or true average temperatures?

Brian H
June 7, 2012 12:45 am

crosspatch says:
June 7, 2012 at 12:17 am

Artificially adjusting pre-1950 temperatures colder and post-1950 temperatures warmer is bad enough … but to increase those adjustments with every passing month seems irresponsible.

Does “irresponsible” mean “deceptive and utterly illegitimate”? Or is it just the strongest term you are willing and able to come up with?

kadaka (KD Knoebel)
June 7, 2012 1:07 am

Found in: atarsinc on June 6, 2012 at 11:28 pm:

REPLY: That may be, but we aren’t talking about GHCN here, that’s a whole different can of worms. BTW, you should know that you’ve violated site policy by changing names. You’ve previously commented here as John Parsons and now are commenting as “atarsinc”. Pick one and stick with it please. – Anthony

Thank you, that was the last piece I needed, saved me a few minutes.
“atarsinc” identifies a user at a buy/sell site here with a location of Kettle Falls, WA.
Location and “John Parsons” leads to this commenter at “MinnPost” (Minneapolis, Minnesota), non-profit news site. In the snippets of his comments he’s self-identified as Dr. John Parsons. (BTW Googling “kettle falls wa climate change jp” brings this up so finding it was inevitable even without Anthony’s mention.)
Seven “recent” comments listed, spread across three (more or less) climate-related articles: Climate B.S. of the Year Awards: And the winners are… (Gleick), Climate skeptic admits he was wrong (Richard Muller), and Texas politicians censor climate-change research.
Here’s a real winner, found at the Texas story:

SUBMITTED BY JOHN PARSONS ON OCTOBER 19, 2011 – 2:17PM.
Mr. Ludvigson–May I suggest that you do a little research on the web site you are using to inform your opinion on AGCC. “Wattsup” is a notoriously unreliable purveyor of misinformation regarding climate science. Instead of relying on a clearly ideological website, let me suggest that you refer to sites that provide the raw scientific information and then make up your own mind, without the filtering effects of a predisposed agenda. John Parsons

Yup, Dr. JP is quite a charmer.
Disclaimer: Information provided for entertainment purposes, not to facilitate harassment, which the blog owner flatly does not condone. So don’t harass Dr. Parsons.

June 7, 2012 1:37 am

Well – it should be obvious that if the explanation is true, then roughly as many gridcells should show increased temperatures as show decreased temperatures.
Without knowing how thorough the survey and its reporting is, we can’t say this is systematic bias. Hopefully the author will confirm whether they balance out.

TFNJ
June 7, 2012 1:51 am

It’s OK. In 50 years TODAY’S temperatures will have been homogenised down.

richard verney
June 7, 2012 2:16 am

Steven Mosher says:
June 6, 2012 at 9:29 pm
//////////////////////////////////////
One should always look at data in its purest and most uncorrupted form.
There should be no adjustments to the raw data, and no attempt to extrapolate the data over a notional grid area. The entire idea of a global average, or a state average, is a fallacy. My garden is about 1700sqm, and if I had an accurate thermometer it would not surprise me if I could find 100 different temperatures in my garden. The idea that one thermometer could provide the ‘average’ temperature of my garden is frankly ludicrous. The ‘average’ temperature of my garden, because of land topography, foliage etc., would not be the same as that of my neighbours. It is even more crazy to consider that the temperature taken at an airport some 40 or so miles away could reflect the ‘average’ temperature of my or my neighbour’s garden. It is certifiably insane to consider that a state average could be obtained from a dozen or so thermometers.
Each station data set should be considered individually and the trend of that data set assessed on an individual basis. If there is a change of instrument, that is the end of one data set and the beginning of another. If the time of observation is altered, that is the end of one data set and the beginning of a new data set. If there is a change of siting, that is the end of one data set and the beginning of another. If there are changes to infrastructure (eg., putting in a tarmac carpark) that is the end of one data set and the beginning of another etc etc.
Of course, that may well mean that there are few if any lengthy continuous data sets, but that is just a consequence of the history of the site. What one can do is examine each individual data set (for as long as it lasts) and see what it actually says. We can look at the true facts. Presently all we are doing is examining the artifact of subjective adjustments and so called harmonisations, we are not reviewing the data and this is inevitably distorting the picture.

NZ Willy
June 7, 2012 2:24 am

OK, methinks Nick Stokes etc are right in principle, but I’m concerned at the quantification of the temperature differential between well-measured and poorly-measured places, through the epochs. I expect UHI is a stronger phenomenon today than 100 years ago, but who can say by how much? It’s an opportunity to build biases into the model. Just because the whole-Earth approach is best for quantifying Earth’s heat today (especially since satellite data is now used), doesn’t mean it’s best for historical comparisons, because the error bars increase so much as look-back time increases.
How’s that old rhyme go: “The way to fame and fortune, when other roads are barred, is take something very easy, and make it very hard”. Don’t project the whole-Earth temperature record into past epochs. Instead, map today’s whole-Earth temperatures to the existing surface temperature records, then trace those backwards, and for each epoch then reverse-map back into whole-Earth temperatures. This keeps errors manageable. But it looks like the tech-boys are determined to do this the hard and bias-able ways, sigh. (sorry if this posting is hard to read)

David Schofield
June 7, 2012 3:11 am

Green Sand says:
June 6, 2012 at 3:30 pm
DocMartyn says:
June 6, 2012 at 3:12 pm
———————————————————–
Just a few minutes of Fahrenheit 451: the autoignition temperature of paper.
“In an oppressive future, a fireman whose duty is to destroy all books begins to question his task. “
Spooky. Ray Bradbury died yesterday aged 91.

Brent Hargreaves
June 7, 2012 3:14 am

In support of the above evidence of data fiddling, here’s a little analysis I did of the doctoring of data at some Arctic stations. For Iceland and N. Russia the scallywags have depressed temperatures by 0.9C in the early 20th century and raised them by 0.9C in the later period.
http://endisnighnot.blogspot.com/2012/03/giss-strange-anomalies.html
We need tenacious journalists – a la Watergate – to give this scandal a proper public airing.

Bill Tuttle
June 7, 2012 3:18 am

atarsinc says:
June 6, 2012 at 11:02 pm
Bill, you’re not making sense. The Land Temp adjustments tend to show more warming, while the SST adjustments do the opposite. JP

I presume that was in reply to my reply at 10:50 pm to your query at 10:17 pm: “So, if NOAA is cooking the books, why did they adjust the SST trend lower?”
For the same reason they adjusted the historical surface temperatures.

Because they can.

Geoff Sherrington
June 7, 2012 3:39 am

davidmhoffer says: June 6, 2012 at 4:40 pm government destroying all paper records of everything …
From PhD thesis, page 49, Simon Torok, who did the first major homogenisation of Australian weather data:
“It should be noted that BoM archive searches are frustrated by the fact that (Regional Offices) hold unique items not held by Head Office and vice-versa. The problem was compounded by a culling of the meta data files at Head Office, carried out in the 1960s.”

Editor
June 7, 2012 3:56 am

The original monthly hand written/typed monthly temperature and rainfall records can all be accessed here for individual stations right back to Year Dot.
http://www7.ncdc.noaa.gov/IPS/coop/coop.html
Also there is a very useful monthly state summary here, which shows the actual stations which actually build up the state average.
http://www7.ncdc.noaa.gov/IPS/cd/cd.html
I will have a closer look when I get a minute, but anyone can have a go themselves, as everything you need is there.

Blade
June 7, 2012 4:24 am

Gail Combs [June 6, 2012 at 7:12 pm] says:
At the rate the US temperature data is changing the records we will have a mile of ice sitting on New York City and the data will still be showing a warming trend.
http://jonova.s3.amazonaws.com/graphs/giss/hansen-giss-1940-1980.gif

LOL funny but true!

beng
June 7, 2012 4:42 am

****
Auto says:
June 6, 2012 at 1:56 pm
[On present trend, DC and much of Maryland will be covered by a kilometer-deep ice cap by 2025, extrapolating the cooling seen last night, using a model I made (up) earlier. /Sarc. /Not real!]
****
It’s a rather system-shocking 40F here in west MD this morning. My tomatoes/okra just can’t get any love…

Caleb
June 7, 2012 5:07 am

For those who are just waking up to the fun and games involved with “adjustments,” I refer you to the Climate Audit post of August 8, 2007. http://climateaudit.org/2007/08/08/a-new-leaderboard-at-the-us-open/
This was my personal wake-up-call.
However, once you get the hang of playing with adjustments, it can be quite helpful in terms of feeling better about your personal budget. Rather than being depressed about being broke, you can make a few adjustments-for-inflation, playing with various “proxies.” For example, the price of gold per ounce has gone up since 1865, while the price of aluminum has fallen. Pick your proxy, extend your “trend-line,” and guess what!!? You’re not broke after all!!!

Phil C
June 7, 2012 5:31 am

BTW, you should know that you’ve violated site policy by changing names. You’ve previously commented here as John Parsons and now are commenting as “atarsinc”.
Why do you care?
[REPLY: Because commenting under multiple names is sock-puppetry. Because even anonymous commenters should be accountable. Why do you feel a need to question our policy? -REP]

RockyRoad
June 7, 2012 5:33 am

So now with these revelations, the NCDC can’t claim (with any validity) to be a “Data Center”. Now they’re just the NC. And with “Data Center” gone, they can’t claim to be “National” or “Climatic” either.
Now they’re nothing. How sad.

Phil C
June 7, 2012 7:04 am

Because commenting under multiple names is sock-puppetry.
A tautology here. I’ll move on …
Because even anonymous commenters should be accountable.
Why? What does it matter? In all the posts I’ve read here for a number of years, I see no harm in someone wishing to remain anonymous. Perhaps you could offer one up. And I’m at a loss as to what specifically these people should be held to account for. It’s the content of the post, the value of the argument and not the person’s name that is relevant to the discussion.
Why do you feel a need to question our policy?
Because I’ve seen valid discussions here cut short because someone violated this policy that has nothing to do with the discussion. And it seems like every time that happens, it happens to someone who argues against the original post. Meanwhile, I see plenty of comments of an ad hominem nature (I have been subject to some of those) coming from people who don’t use their real names, yet I can’t recall an instance where the moderators stepped in to that unless the person making the comment was challenging the original author. In other words, your decision to intervene with your policy regarding names appears to be applied based on the content of the poster’s argument.
[REPLY: “Tautology?” It seems you really do need everything spelled out for you. Sock-puppetry is a form of astro-turfing. Even if you are not concerned about that sort of behavior, we are. Remaining anonymous is one thing, but making contradictory statements or constantly repeating statements that have already been addressed is another. In that sense, even anonymous commenters are accountable for what they have claimed and for their actions on this site. Your last complaint is just plain false. This morning I snipped a comment that could well be described as skeptic for sock-puppetry. You didn’t see the comment because it was snipped. Anthony and his moderators bend over backwards to give every sincere comment (and not-so-sincere, like many of yours) a fair airing. Many other sites do not. This is Anthony’s living room on the internet and he does not have to entertain every insult and innuendo people may care to utter. Don’t like it? Tough. Now, this conversation is over. Moderation policy is NOT open for discussion or debate and any further comments along this line will be deleted. That’s tough, too. -REP]

Mark
June 7, 2012 7:04 am

Myron Mesecke says:
Why is it that the older data that had less influence from man made structures, roads, changes to land, no air conditioning units, fewer parking lots and smaller airports is considered to have inconsistencies and must be “adjusted”?
If the aim were to compensate for UHI then “cooling” historical data makes no sense. You’d either want to “cool” recent data or “warm” historical data.
The “curve” given makes even less sense, it’s simply the wrong shape for any kind of UHI “compensation”.
I wonder how adjusted and unadjusted plots would compare.

Mark
June 7, 2012 7:37 am

Steven Mosher says:
1. changing the time of observation from mid day to morning or mid night to noon, WILL create a bias. Depending on the time and place that bias can be positive or negative, large or small.
Given that there are at least 5 possible ways to work out time of day things get especially tricky when you are trying to compare different sites. In plenty of parts of the world timezones are radically different from local time according to longitude even without DST. “Midnight” according to timezone may equate to “somewhere between 21:30 and 02:30” according to longitude.
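To put a number on that: local mean solar time drifts about four minutes per degree of longitude away from the zone’s reference meridian. The western-China example below is my stock illustration, not the commenter’s.

```python
# Local mean solar time vs zone time: ~4 minutes per degree of
# longitude (ignoring the equation of time).
def solar_offset_minutes(longitude_deg, zone_meridian_deg):
    return 4.0 * (longitude_deg - zone_meridian_deg)

# Example: Kashgar (~76 E) runs on Beijing time (meridian 120 E):
print(solar_offset_minutes(76.0, 120.0))  # -176.0 -> solar noon ~3 h late
```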

Maus
June 7, 2012 7:48 am

Christopher C. Burt: @ 11:04 “This is why, for the sake of determining long-term trends, it is not possible to simply use the same raw data from 1934 as in, say, 2012.”
Assume this claim is true. And yet you claim that moving city hall a few blocks is sufficient to destroy our capability to detect or measure the long-term trend. For both of these to be true it must be that the long-term trend is significantly less than the difference in temperatures over a distance of one-half of one mile. It follows that to even potentially detect such a small long-term trend there would need to be stations positioned at a minimum of 1/4 of one mile from one another. (Nyquist, etc.)
But if that is the case there is no significance to 8 of 78 stations being in cities, and no point speaking of ‘warmest counties or regions’, if the best coverage of those 78 stations is 15.3153 (rounding up) square miles out of a region comprising 113,594.08 square miles. That is a valid sample coverage of 13/1000ths of 1% of the area in interest.
Such that if we assume, to begin with, that the samples are not randomly distributed, then everything is as you say it is. But if we assume they are randomly distributed, then it is certain that the only thing that can be said is that temperature averages and trends, whether gridded or not, are like unicorns. A fantasy for now, but one that will make someone filthy, stinking rich if it can ever actually be implemented.
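The coverage arithmetic checks out under the stated quarter-mile assumption; a quick verification:

```python
import math

# Verify the coverage figures above: 78 stations, each assumed to
# represent a quarter-mile-radius circle (the Nyquist-style assumption
# in the comment), against Arizona's area.
stations = 78
radius_miles = 0.25
arizona_sq_miles = 113594.08

covered = stations * math.pi * radius_miles ** 2
print(round(covered, 4))                           # 15.3153 sq mi
print(round(100 * covered / arizona_sq_miles, 4))  # 0.0135 (% of the state)
# i.e. about 13/1000ths of 1%, as stated.
```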

JR
June 7, 2012 8:00 am

Re: Steve Mosher 9:29pm
Please show one example of a station that was moved 500 meters or more in elevation and retained the same station ID. A move like that should result in a new ID being assigned after the move.

alex the skeptic
June 7, 2012 8:17 am

Excerpt from George Orwell’s Nineteen Eighty-Four (Part 3, Chapter 2):
>>O’Brien was looking down at him speculatively. More than ever he had the air of a teacher taking pains with a wayward but promising child.
‘There is a Party slogan dealing with the control of the past,’ he said. ‘Repeat it, if you please.’
‘”Who controls the past controls the future: who controls the present controls the past,”‘ repeated Winston obediently.
‘”Who controls the present controls the past,”‘ said O’Brien, nodding his head with slow approval. ‘Is it your opinion, Winston, that the past has real existence?’
Again the feeling of helplessness descended upon Winston. His eyes flitted towards the dial. He not only did not know whether ‘yes’ or ‘no’ was the answer that would save him from pain; he did not even know which answer he believed to be the true one.
O’Brien smiled faintly. ‘You are no metaphysician, Winston,’ he said. ‘Until this moment you had never considered what is meant by existence. I will put it more precisely. Does the past exist concretely, in space? Is there somewhere or other a place, a world of solid objects, where the past is still happening?’
‘No.’
‘Then where does the past exist, if at all?’
‘In records. It is written down.’
‘In records. And –?’
‘In the mind. In human memories.’
‘In memory. Very well, then. We, the Party, control all records, and we control all memories. Then we control the past, do we not?’
‘But how can you stop people remembering things?’ cried Winston again momentarily forgetting the dial. ‘It is involuntary. It is outside oneself. How can you control memory? You have not controlled mine!’
O’Brien’s manner grew stern again. He laid his hand on the dial.
‘On the contrary,’ he said, ‘you have not controlled it. That is what has brought you here. You are here because you have failed in humility, in self-discipline. You would not make the act of submission which is the price of sanity. You preferred to be a lunatic, a minority of one. Only the disciplined mind can see reality, Winston. You believe that reality is something objective, external, existing in its own right. You also believe that the nature of reality is self-evident. When you delude yourself into thinking that you see something, you assume that everyone else sees the same thing as you. But I tell you, Winston, that reality is not external. Reality exists in the human mind, and nowhere else. Not in the individual mind, which can make mistakes, and in any case soon perishes: only in the mind of the Party, which is collective and immortal. Whatever the Party holds to be the truth, is truth. It is impossible to see reality except by looking through the eyes of the Party. That is the fact that you have got to relearn, Winston. It needs an act of self-destruction, an effort of the will. You must humble yourself before you can become sane.<<

Doug Proctor
June 7, 2012 8:20 am

noaaprogrammer says:
June 6, 2012 at 2:55 pm
Benford’s Law requires the data to span several orders of magnitude to be valid. The temperature data or its anomalies do not do so, so the fraud-test would not work.
But your comment caused me (and I suppose many others) to learn about Benford’s Law, which is impressive! (A small demonstration of why it doesn’t apply here follows after this comment.)
Years ago, while making thousands of estimations of depths in my job as a petroleum geologist, I wondered if I unconsciously chose certain numbers at the 0.x meter level. I didn’t come to a conclusion – I stopped believing there was any validity to the precision as I thought about it further, so I stopped caring – but I still expect there is. Reading the thermometer would have the same intrinsic situation.
Error bars on pre-mechanical temperature readings strike me as something that should never be less than the error possible on one reading. As I understand it, you can reduce your error by a square root function if the measurement is of the same object by the same means by the same operator (if personal choice is a factor), but in temperatures, each reading is unique either by place, time or operator. There is no “true” value around which a measuring device or operator wanders randomly.
I also wonder about the median value in temperature or other readings, as shown for the value we are supposed to note. Do we not tend to measure higher rather than lower if the day is hot, and lower than higher if the day is cold? Are our instruments not more sensitive to changes up or down? So should the “median” curve not be high or low weighted? We only center it because we think the errors are equal high/low and random. If neither assumption is true, then the “median” should be off-median.
Another: I throw this out one more time (at least until I see a response): look at the older records, wherein the error bars are wide. The median value looks like a damped wave, as if a long-period smoothing function has been applied to the data. As we come towards the present, the error bars shrink as the temperature median departs from the past. However, if we were to apply a similar long-period smoothing function to the present (presuming we had future data to allow this) the last 100 years of temperature changes would exhibit less variation and, for the 20th century, a reduced temperature rise.
More like the past.
So: have we got a statistical problem going on, in which older data is apple-like, and newer data, orange-like? Can we not really say that in the past, were we able to see the true temperature averages, we would see a similar frequency distribution, such that highs and lows could more reflect what we see recently with new, better data?
Is it more reasonable not to say that today’s temperatures are valid to compare to the post-1945 days, perhaps, but prior to then we can only say that the warmer decades were WARMER than the median value, and the cooler decades, COLDER than the median value, and that the top and bottom of the error bars was, at times, actual data, not error, and on a decadal level?
It is said by one and all – today, in the Calgary Herald – that today is warmer than at any time since humans evolved. This is based on the current median values as shown in the temperature reconstruction graphs/data used regularly by amateur and professional alike. I wonder if this is a valid statement to make when the nature of the median value is taken into account.
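On the Benford’s Law point at the top of the comment above: the first-digit law only emerges when data span several orders of magnitude, and a bounded temperature series does not. A toy comparison, with invented temperature values:

```python
import math
from collections import Counter

# Benford's expected first-digit frequencies: log10(1 + 1/d).
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Invented temperatures confined to 40-90 F (no orders of magnitude).
temps = [40 + (i * 7919) % 51 for i in range(1000)]
counts = Counter(int(str(t)[0]) for t in temps)

for d in range(1, 10):
    print(d, round(benford[d], 3), round(counts.get(d, 0) / len(temps), 3))
# The temperature sample piles up on first digits 4-9 and has no 1s,
# 2s or 3s at all -- nothing like Benford -- so the usual fraud test
# simply doesn't apply to data in a narrow range.
```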

June 7, 2012 8:21 am

This is fraud!! No way NOAA is getting away with this.

Howling Winds
June 7, 2012 8:51 am

One thing that seems to be apparent in all of this is the simple notion of the accuracy of the so-called “data”. Although I am an admitted skeptic, I have a difficult time with the idea that thousands upon thousands of temperature readings taken over hundreds of years can have any precision at all, certainly not better than a few degrees in either direction.

rilfeld
June 7, 2012 8:54 am

The citrus band and species suitability-by-temperature charts from the DOA over the years aren’t showing much warming. This points up that the political implications depend on “catastrophic” global warming. Nothing in the record suggests a climate change that we can’t accommodate by moving 100 miles north or south. I don’t think voters would be moved to immolate their economies on the altar of global warming if it were expressed as making North Dakota more like South Dakota, yet that’s the reality of even the “cooked” data.
Those of us of a certain age, who remember that the first “Earth Day” was based on catastrophic anthropogenic global cooling, and are amused that certain prophets of doom such as Paul Ehrlich have made a living arguing both sides of the street, called BS long ago.
I understand the call for lawyering, but the combination of sovereign immunity and the argument that most were following what they thought was legitimate science (‘I was only following orders’) will certainly make that futile.
Anthony, you are on the correct course: sunshine is the best antiseptic.
As “greened” economies fail, and money dries up, we’ll have a new fake crisis that requires us to give up our lives to the statists in due course….the food police seem to be on the rise at the moment.
Age, again, causes these things to be viewed as circular…..
Grocery stores use paper bags – very efficient, and reusable by consumers.
Tree huggers scream, irrationally as the pulp comes from farmed, purpose grown trees.
We switch to plastic.
Anti-carbon folks scream, and discards kill fish and birds and clog drains, and even kill babies in spite of the multilingual printed warnings.
We switch to government-mandated, inconvenient cloth bags. They harbor nasty bacteria and people hate having to manage them.
Some stores offer paper again, still from purpose-grown trees and still fully recyclable, and find, as before, that it is the most reliable, lowest-cost alternative. And disgruntled sports fans once again have something to wear to the games.

Ripper
June 7, 2012 9:34 am

Steven Mosher says:
June 6, 2012 at 9:29 pm
Others use a standard lapse rate.
===========================================================================
That is the theory, but it often is opposite to what happens in the real world.
Tell me which station here is at the higher altitude.
http://members.westnet.com.au/rippersc/hcjanjulycomp.jpg

NotSure
June 7, 2012 9:37 am

“Of these 78 sites 3 were in the city of Phoenix (Airport, USWB site, and Indian School), 3 were in Yuma (Citrus Station, USWB, and Valley site), and 2 were in Tucson (Airport and Univ. of Arizona campus). So 8 (more than 10%) of the 78 sites for the entire state were located in three of the warmest cities in the state. Furthermore, 27 of the 78 sites were in Maricopa and Yuma counties, the two warmest counties in the state that comprise 12.6% of the state’s landmass yet account for 34.6% of the observation sites.”
I don’t understand this. How do you know which cities and areas are “warmest”? You can’t compare against an unknown. Maybe some of the areas for which we have no measurements were actually warmer than the areas you say are warmest. We don’t know and can’t know. Comparing against an unknown is like division by zero: the result is undefined.

June 7, 2012 10:03 am

How funny. Every time the 1930s are adjusted they become cooler for the US. Next thing you know they will be telling us that the 1930s were a mini ice age and that our planet should be inside of an ice age, but due to evil man and his artificial ways, the climate is now not in an ice age. /sarc
Boggles the mind, if I might say so. But in any regard, I don’t see how changing anything data-wise means much, especially in such a state as Arizona, with mountains that create micro-climate zones. You really cannot tell the “average” temperature over an arbitrary area such as the state, because frankly your guess is as good as any given all the micro-climate zones. We could figure it out for today, but for the 1930s… forget it.
And I say we “could” because if we used more thermometers that are not at airports and not in urban locations we “could” get a good representative average, but frankly we won’t, because every adjustment must be the same and tell the same story. Reminds me of religions and how they change history to advance their cause.

Ian W
June 7, 2012 10:15 am

I have yet to see it explained what ‘average temperature’ actually means.
Yes I realize that two numeric values are added together and then divided by two – to give a numeric mean. But behind that are several completely invalid assumptions.
Examples:
* The lowest temperature of the day is just before dawn and the highest temperature of the day is late afternoon. This leads to all these time-of-day corrections – but we all know of days where that is not the case. As someone pointed out – what about maximum and minimum thermometer readings, taken just once at midnight and then reset? But both of these approaches fail to quantify how LONG the temperature was at a certain level. The assumption is of a consistent, smooth temperature curve, which is patently false in most temperate areas of the world.
* The intent of this whole process, and the reason it has moved from a poorly funded arcane exercise to a trillion-dollar industry, is to prove (falsify) the hypothesis that emissions from human activity lead to more heat energy (watts per square meter) being trapped in the Earth’s system. Measuring temperature alone will not provide this information – it is the incorrect metric. Creating a mathematical mean temperature is even less meaningful. What is needed are temperature and humidity observations at regular intervals. The atmospheric enthalpy and heat content in kilojoules per kilogram – at the time of observation – can then be calculated. These observations should then be repeated, perhaps at hourly intervals, and the overall daily integral of heat content in kJ/kg can be quantified. Climate ‘scientists’ really should understand the metrics that they are using. A misty 75degF afternoon in a Louisiana bayou after a thundershower has around twice the heat energy of an almost-zero-humidity 100degF afternoon in Arizona (a rough check of this comparison is sketched below). Averaging these values is nonsensical. It IS heat energy this is all about – so-called greenhouse gases trapping outgoing longwave radiation, and the trapped heat causing ‘climate change’.
Measuring temperature (and fudging the figures) and trying to show cleverness by nitpicking time-of-observation may be a useful argument with non-scientists like politicians and the media, but it is abject nonsense. So one wonders why so much time is spent by supposed experts at NCDC and NOAA fudging incorrect metrics. This would appear to be solid evidence of an intentional hoax – I don’t believe the members of these agencies are so ignorant of physics – but they seem to believe that everyone else is.
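As a rough check of the Louisiana-versus-Arizona comparison above, here is a sketch using the standard moist-air enthalpy approximation h ≈ cp·T + q·(Lv + cpv·T). The 95% and 5% relative humidities are my assumptions for “misty” and “almost zero humidity”; this is back-of-envelope, not a psychrometric library.

```python
import math

# Rough moist-air enthalpy check for the comparison above.
# Assumptions: sea-level pressure, RH 95% for the "misty" case and
# 5% for the "almost zero humidity" case.

def saturation_vapor_pressure_hpa(t_c):
    """Magnus approximation over water."""
    return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

def moist_enthalpy_kj_per_kg(t_c, rh, pressure_hpa=1013.25):
    e = rh * saturation_vapor_pressure_hpa(t_c)    # vapor pressure
    q = 0.622 * e / (pressure_hpa - e)             # specific humidity
    return 1.006 * t_c + q * (2501.0 + 1.86 * t_c)

print(round(moist_enthalpy_kj_per_kg(23.9, 0.95), 1))  # 75 F, misty: ~69 kJ/kg
print(round(moist_enthalpy_kj_per_kg(37.8, 0.05), 1))  # 100 F, dry: ~43 kJ/kg
# The humid 75 F air carries roughly 1.6x the heat content of the hot,
# nearly dry 100 F air (closer to 2x at still lower humidity), which
# is the point of the comparison.
```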

Hugh K
June 7, 2012 10:31 am

atarsinc says:
June 6, 2012 at 8:48 pm
Especially for Gail: What does a cold snap in Oregon have to due with AGW? Answer: absolutely nothing. Thanks for the weather report. We’re all enthralled. JP
I missed the part in Gail’s post where she compared a cold snap w/AGW… could you please point me to that in her comment?
Sadly it has come down to exactly that – to be enthralled with the use of raw data. I found that refreshing. However, alarmists need not worry…this was obviously an oversight on Gail’s part and I’m sure before expressing her concern for farmers in the future, Gail will wait for the adjusted Fed version. Of course the farmers will still suffer the same amount of loss but at least they won’t suffer as much anxiety after the fact thinking it really wasn’t as cold as it really was.

Zeke
June 7, 2012 10:46 am

A few things:
1) As Mosher and others have noted, some adjustments to the record are needed to correct for TOBs changes, station moves, instrumentation changes, etc. It’s actually rather impressive how often stations moved from urban rooftops to newly created airports in the U.S. in the 1930s and 1940s, resulting in some rather large step changes in temperature.
Since some adjustments are needed, you should have an adjustment method that ideally is a) algorithmic, so it can automatically detect and correct for inhomogeneities without manual (and potentially biased) observer corrections, and b) has been extensively tested against data with different types of inhomogeneities to ensure that it does not introduce systemic biases. For those who have not come across it, the Menne, Williams, and Thorne paper last year is a good example of this sort of testing and well worth a read: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/williams-menne-thorne-2012.pdf (a bare-bones illustration of break detection appears after this comment).
It’s also worth pointing out that the Berkeley homogenization method, which differs quite a bit from that employed by USHCN, produces rather similar results (with even more “cooling of the past” than USHCN): http://rankexploits.com/musings/2012/a-surprising-validation-of-ushcn-adjustments/
2) It’s worth pointing out that while the simple majority of adjustments are positive, it’s far from the vast majority. About 40% of the adjustments actually cool the present or warm the past (and thus decrease the trend). You can see a distribution of USHCN adjustments in Fig. 6 of Menne et al 2009 here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2009.pdf
3) You really need to separate out the effects of the adjustments from the effects of different spatial interpolation and the use of anomalies. The latter should be far less controversial, as state average temperatures will be a far more accurate reflection of reality if they use spatially-weighted anomalies rather than a simple average of non-spatially-weighted absolutes.
_________
Also, Bill Illis: the chart shown above uses the latest USHCN data as of about three weeks ago. USHCN adjustment procedures have not changed since 2010 or so, and the updated GHCN 3.1 data has no effect on USHCN, as they are separate (albeit overlapping) networks.

Tom Stone
June 7, 2012 11:13 am

It is like the Stalinist tactic of airbrushing “inconvenient” people from government photographs.
REPLY: No it is not anything like that. The original Monthly Weather Review is still there. If that disappears, then you’ll have a point – Anthony

June 7, 2012 11:26 am

C aka atarsinc

1. Draw a tic-tac-toe grid (a 3 x 3 square).
2. Fill in all 9 squares with a temperature reading.
3. Fill in the 3 top squares again with an additional temperature reading.
4. You’ve now got 12 readings: average them. That’s the old method (TCDD).
Is that the average for the entire area? Of course it isn’t. You’re taking too many readings in the top row. To correct for the bias in the top row, you should first average the two numbers in each square of the top row, and then use those three readings with the remaining six to find your average over the entire area.

Wrong. The arithmetic mean function (the ‘expectation operator’) is idempotent and associative, so you’ll get the same result regardless of the order or number of sub-arrangements made: E[a,b]=E[E[a,b]]=E[E[E[a,b]]], etc.
Also, the mean function (unlike the variance function) is an unbiased estimator, so increasing the weights of randomly selected terms should not change the expected value (i.e. bias = 0). Some will pull up, others will pull down, so the changes cancel out to zero in the limit.
The bias here, if any, seems to be caused by the way the grids were selected, if I understood the article correctly (I skimmed it). It looks like the old ‘selection bias’ problem again.
So if the small grids tend to sit around large urban areas while the larger grids cover relatively unpopulated country, then the rural areas are “unfairly” represented: their ‘coolness’ is diminished by the influence of the warmer (UHI) urban areas whenever grids of larger area get equal weight with smaller ones.
To make it fair (“unbiased”), either the grids should be weighted by area or, equivalently, all the grids should be made the same size. (Or, thirdly, relocate thermometers to eliminate the UHI bias.)
So I have no problem with this attempt to remove bias, provided it is not used to make prejudiced comparisons with the older, biased statistics. In other words, we might need some “fudge factors” to eliminate the bias – like the ones solar scientists use to try to make unbiased estimates from historical and current sunspot data. (Though bias still exists there, according to Leif Svalgaard.)
😐
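A quick numerical check of the two averaging orders debated above, with made-up readings. If the doubled-up top-row readings came from the same distribution as the rest, the two orders would agree in expectation, as the reply argues; if the oversampled row is systematically warmer – the urban-station case – the flat average of all twelve readings is biased warm relative to the cell-first average.

import numpy as np

# Nine cell readings; the top row (first three) gets a second, warmer reading
cells = np.array([70.0, 71.0, 72.0, 65.0, 64.0, 66.0, 60.0, 61.0, 59.0])
extra = cells[:3] + 3.0

flat_mean = np.mean(np.concatenate([cells, extra]))        # 12 readings (TCDD-style)
top_row = (cells[:3] + extra) / 2.0                        # average within each top cell
cell_first = np.mean(np.concatenate([top_row, cells[3:]])) # 9 cell means

print(f"flat mean of 12 readings: {flat_mean:.2f}")   # 67.50
print(f"cell-first mean of 9:     {cell_first:.2f}")  # 65.83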

Werner Brozek
June 7, 2012 11:46 am

Christopher C. Burt says:
June 6, 2012 at 11:04 pm
So 8 (more than 10%) of the 78 sites for the entire state were located in three of the warmest cities in the state.

So let us presume, for argument’s sake, that these 10% of the sites were 3 F warmer than the rest of the state. Then the average would be 0.3 F higher than it ought to be, if we totally neglect the area of the cities. But since the average went down 3.1 F, would that not mean the cities were about 30 F warmer than the countryside? However, I realize the statement below also has to be factored in:
Furthermore, 27 of the 78 sites were in Maricopa and Yuma counties, the two warmest counties in the state that comprise 12.6% of the state’s landmass yet account for 34.6% of the observation sites.
So exactly how much warmer were the extra 22% of sites? Everything could still be legitimate, but I would be happy to see an impartial person audit everything; I have just read too much negative information about adjustments to data.
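Werner’s back-of-envelope arithmetic, written out: if a fraction f of sites runs dT warmer than the rest, an unweighted state mean is inflated by roughly f * dT, so explaining the whole 3.1 F discrepancy from the eight city sites alone would require an implausibly large urban excess.

f = 8 / 78      # the ~10% of sites in the three warm cities
bias = 3.1      # F, the published Feb 1934 discrepancy for Arizona
print(f"implied urban excess: {bias / f:.0f} F")   # ~30 F

Which is exactly why the Maricopa/Yuma over-representation and the spatial re-weighting, not the city sites alone, have to carry the explanation.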

June 7, 2012 1:13 pm

Not to worry. Those bogus reductions in earlier-year temperatures will soon imply that the Little Ice Age actually did not end until 1950!

June 7, 2012 1:15 pm

As these reductions in earlier temperatures continue, the end date for the Little Ice Age will move from 1680 to 1950.

TXRed
June 7, 2012 1:41 pm

And people wonder why I have to add notes to the appendices of my works explaining when I obtained the NCDC data sets. At least the precipitation data do not seem to have been adjusted yet.

H.R.
June 7, 2012 2:47 pm

The past just isn’t what it used to be.

peterhodges
June 7, 2012 5:00 pm

http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html#QUAL
“The cumulative effect of all adjustments is approximately a one-half degree Fahrenheit warming in the annual time series over a 50-year period from the 1940’s until the last decade of the century.”

Ivan
June 7, 2012 6:30 pm

“As Dr. Roy Spencer said about the NOAA-NCDC USHCN record, 1973-2012 (read original post for full context):
2) Virtually all of the USHCN warming since 1973 appears to be the result of adjustments NOAA has made to the data, mainly in the 1995-97 timeframe.

And I must admit that those adjustments constituting virtually all of the warming signal in the last 40 years is disconcerting. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.”
The only problem is that Spencer quickly realized this would put a huge question mark over his own satellite data, which show 0.22 C of warming per decade since 1979 over the US 48. So he quietly dropped the bombastic finding you cited, in a half-sentence in the next post, and suddenly “discovered” 10 times as much real warming as he had in the post you cited. In order to salvage his data, he essentially accepted this adjustment as legitimate!!!

Mark.R
June 7, 2012 7:28 pm

OT, but Christchurch, N.Z. had its coldest day on record on 6 June 2012.
http://www.stuff.co.nz/the-press/news/70….ears-of-records

Rob
June 7, 2012 7:48 pm

The State data has definitely been “inconsistently” changed. I noticed this years ago in the oldest paper records of the various State Weather Services…

kadaka (KD Knoebel)
June 7, 2012 8:54 pm

From Ivan on June 7, 2012 at 6:30 pm:

The only problem is that Spencer quickly realized this would put a huge question mark over his own satellite data, which show 0.22 C of warming per decade since 1979 over the US 48. So he quietly dropped the bombastic finding you cited, in a half-sentence in the next post, and suddenly “discovered” 10 times as much real warming as he had in the post you cited. In order to salvage his data, he essentially accepted this adjustment as legitimate!!!

I checked his blog archive for April 2012, the month the post I referenced appeared in:
http://www.drroyspencer.com/2012/04/
The next post was the last post for April, New Evidence Our Record Warm March was Not from Global Warming. It contains nothing of what you claim.
So I checked the next month and found the next next post, U.S. Temperature Update for April, 2012: +1.28 deg. C. I found the one number you gave, +0.22 deg. C/decade, but satellite data wasn’t mentioned; he was updating his new ISH dataset, which starts in 1973, not 1979. Here’s the chunk:

Note that the linear warming trend I get (+0.13 deg. C/decade) is about 50% of that I get from analyzing the USHCN data (+0.26 deg. C/decade).
It is also a considerable reduction below what I get if I perform no population density adjustment (+0.22 deg. C/decade). Since that population adjustment is so large, here are the data supporting it (click for large version):

He’s saying USHCN is the highest, his ISH dataset without the population density adjustment is next highest, and ISH with that adjustment is lowest, at about 50% of USHCN.
There was no quiet dropping of the “bombastic finding”, nor any discovery of “10 times as much real warming”. Nothing like what you said is there.
So I checked the next next next (next*3) post, UAH Global Temperature Update for April 2012: +0.30°C. Very short. Still nothing, again.
On to the next next next next (next*4) post, Our Response to Recent Criticism of the UAH Satellite Temperatures by Drs. Christy and Spencer. Very long. Still nothing.
Next next next next next (next*5) post, AMSR2 Being Readied for Launch Today. Amazingly enough, still nothing.
Next next next next next next (next*6) post, The AMSR2 Antenna has been Successfully Deployed. Go on, take a guess.
Next next next next next next next (next*7) post, UAH Global Temperature Update for May 2012: +0.29°C. That was on June 4, and is his latest post. You don’t deserve a guess. Still nothing.
Despite your bombastic claim, I can find nothing like what you said Dr. Spencer said, from his next post to his very latest post. Nothing.
Just to show you’re not a complete liar trying to shove words into Dr. Spencer’s mouth that he never said, can you supply a link to whichever “next” Dr. Spencer post this was said in, and tell us where in that post that half-sentence that says so much is actually located?

Bill Tuttle
June 8, 2012 1:32 am

Christopher C. Burt says:
June 6, 2012 at 11:04 pm
For instance in the example I used of Arizona in 1934: the USWB (U.S. Weather Bureau, Dept. of Agriculture) based their 52.0° state average on data from 78 sites that reported from around the state that particular month of February 1934. Of these 78 sites 3 were in the city of Phoenix (Airport, USWB site, and Indian School), 3 were in Yuma (Citrus Station, USWB, and Valley site), and 2 were in Tucson (Airport and Univ. of Arizona campus)…So 8 (more than 10%) of the 78 sites for the entire state were located in three of the warmest cities in the state.

They are three of the warmest cities *today*.
Phoenix grew from 48,000 in 1930 to 1,445,000 in 2010, Yuma’s population grew from a minuscule 4,900 to 93,000, and Tucson’s population was less than a tenth of today’s 520,000. Were UHI reductions used in the recalculations and, if so, were they based on present-day measurements or on towns comparable in size and location to those cities in the ’30s?
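Purely to illustrate Bill Tuttle’s point about city growth, here is a sketch using one commonly cited empirical relation (Oke 1973), which estimates the maximum heat-island intensity of North American cities under calm, clear skies as roughly 2.96 * log10(population) - 6.41 deg C. The relation describes an ideal-conditions maximum, not a mean bias in monthly records, and it is not an NCDC adjustment method, so treat the numbers strictly as an illustration; the 1930 Tucson population is an upper bound taken from the comment (“less than a tenth” of 520,000).

import math

def oke_uhi_max_c(population):
    # Oke (1973) empirical fit for North American settlements (max UHI, deg C)
    return 2.96 * math.log10(population) - 6.41

for city, p1930, p2010 in [("Phoenix", 48_000, 1_445_000),
                           ("Yuma", 4_900, 93_000),
                           ("Tucson", 52_000, 520_000)]:
    print(f"{city}: {oke_uhi_max_c(p1930):.1f} C (1930) -> "
          f"{oke_uhi_max_c(p2010):.1f} C (2010)")

Whatever UHI correction is appropriate for today’s cities would substantially overcorrect the far smaller towns of 1934, which is the question Bill raises about the recalculations.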

Julian Braggins
June 8, 2012 3:42 am

Come on, guys, we are playing their game here. As several others have pointed out, temperature alone is not the right metric for heat gain, which is what CO2 is supposed to produce.
The global warming scam is over, as the climate will soon make apparent. Then there will be the next scare to bring in global governance, as the technocrats have been endeavouring to do since the ’40s and as the Club of Rome suggested in the ’60s: climate change, alien invasion, or threat by asteroid or biological plague. The EU accomplished rule by unelected technocrats, but the problems with the euro may not be the end of it; it may be the beginning of the world currency as nations begin to collapse under overwhelming debt. What do you think the Bilderberg meeting was all about?
Effective world government will follow a world currency; if that happens, we have lost the battle.
What the answer is then, I don’t know. I won’t be around – I was a youngster in the thirties and know it was a darned sight hotter then than now 😉

Werner Brozek
June 8, 2012 7:33 am

Bill Tuttle says:
June 8, 2012 at 1:32 am
They are three of the warmest cities *today*.

Excellent points! I think it is also worth pointing out that UHI is most evident when the difference between the desired indoor temperature and the outside temperature is greatest. If it is -40 outside, heated homes really do add heat to their surroundings, but at 52 F relatively little heat is lost to the outside. As well, how many cars were there to contribute to UHI in 1934?

Rational_Db8
June 8, 2012 9:22 am

I wonder how the BEST research handled these issues? My guess? They didn’t dig into and evaluate data issues such as this. If that’s the case, it would mean that their work was negligent and inaccurate.

SiliconDoc
June 8, 2012 3:42 pm

Well JB, they may believe with every ounce of their souls that the world will soon reach the man-made disaster tipping point and begin to overheat exponentially. All their data manipulations may conveniently prove or disprove their theory for them, but the march to taxing carbon output across the world has long since left the barn.
There is a very quietly kept Chicago exchange; Australia was just rumored to have nearly passed, or did pass, a very punishing nationwide carbon tax; and, as we all know, the media has been on board with the money-takers and the heat apocalypse since day one, a decade-plus ago.
Worse yet, no matter how much money they take from everyone and no matter what they try to impose, human carbon emissions are rising, and rising quickly. They have already admitted their targets cannot stop their world-ending scenario.
It appears their point is to whip up substantial emotional disorder in order to take as much money, power, and self-satisfaction unto themselves as is humanly possible. They have a battle, and they intend to win it even as they lose the larger picture – stopping the tipping point and saving humanity.
So they are, essentially, looting failures. They need new leadership who can get the job done. Their current top tier, it appears, desires directed human depopulation down to 1-2 billion as the final solution. Strange as it is, they have indeed said so at various times.
A worldwide deadly contagious virus or global thermonuclear war are the only two man-made disasters that appear to fit that bill.
I believe they are destined to fail at that, as far into the future as humanity can see.
In the meantime, they are successfully garnishing a good chunk of your earnings, and much of your freedom, bit by bit.

Matt G
June 10, 2012 5:15 am

This data fiddling is a disgrace, and it is about time action was taken. If anything, the more likely source of artificial warming is urban spread: temperatures over recent decades should be adjusted down, not up. The errors that supposedly cause cooling in recent years should not occur with the instruments used over this period; they are reliable enough, at the very least, not to carry a warming bias. No wonder the station data (Arctic and most developed global regions) for recent years look more like the late 1930s and early 1940s than the fiddled global data sets do.

Editor
June 11, 2012 8:47 am

NCDC has a “toolkit” on its site to graph the differences between the old and new datasets.
Try Alabama as an example; roughly a degree of warming has been added since the 1930s.
http://nidis1.ncdc.noaa.gov/GHCNViewer/