We’ve seen examples time and again of the past being cooled via the homogenization applied to GISS, HadCRUT, and other temperature data sets. By cooling the data from the past, the trend/slope of the temperature for the last 100 years increases.
This time, the realization comes from an unlikely source, Dr. Jeff Masters of Weather Underground via contributor Christopher C. Burt. An excerpt of the story is below:
Inconsistencies in NCDC Historical Temperature Analysis
Jeff Masters and I recently received an interesting email from Ken Towe, who has been researching the NCDC historical temperature database and came across what appeared to be some startling inconsistencies. Namely, the average state temperature records used in the current trends analysis by the NCDC (National Climatic Data Center) do not reflect the actual published records as they appeared in the Monthly Weather Reviews and Climatological Data Summaries of years past. Here is why.
An Example of the Inconsistency
Here is a typical example of what Ken uncovered. Below is a copy of the national weather data summary for February 1934. If we look at, say, Arizona for that month, we see that the state average temperature was 52.0°F.
The state-by-state climate summary for the U.S. in February 1934. It may be hard to read, but the average temperature for the state of Arizona is listed as 52.0°F. From Monthly Weather Review.
However, if we look at the current NCDC temperature analysis (which runs from 1895-present) we see that for Arizona in February 1934 they have a state average of 48.9°F, not the 52.0°F that was originally published:
Here we see a screen capture of the current NCDC long-term temperature analysis for Arizona during Februaries. Note in the bar at the bottom that for 1934 they use a figure of 48.9°.
Ken looked at entire years of data from the 1920s and 1930s for numerous different states and found that this ‘cooling’ of the old data was fairly consistent across the board. In fact, he produced charts showing this. Here is an example for the entire year of 1934 for Arizona:
The chart above shows how many degrees cooler each monthly average temperature for the entire state of Arizona in 1934 is in the current NCDC database compared to the actual monthly temperatures originally published in the 1934 Climatological Data Summaries by the USWB (U.S. Weather Bureau). Note, for instance, how February is 3.1°F cooler in the current database compared to the historical record. Table created by Ken Towe.
Read the entire story here: Inconsistencies in NCDC Historical Temperature Analysis
================================================================
The explanation given is that they changed from the ‘Traditional Climate Division Data Set’ (TCDD) to a new ‘Gridded Divisional Dataset’ (GrDD) that takes into account inconsistencies in the TCDD.
Yet as we have seen time and time again, with the exception of a -0.05°C cooling applied for UHI (which is woefully under-represented), all “adjustments, improvements, and fiddlings” to data applied by NCDC and other organizations always seem to result in an increased warming trend.
Is this purposeful mendacity, or just another example of confirmation bias at work? Either way, I don’t think the private citizen observers of NOAA’s Cooperative Observer Program, who gave their time and efforts every day for years, really appreciate that their hard work is tossed into a climate data soup and then seasoned to create a new reality that is different from the actual observations they made. In the case of Arizona and changing the Climate Divisions, it would be the equivalent of changing state borders and saying fewer people lived in Arizona in 1934 because we changed the borders today. That wouldn’t fly, so why should this?
Sure there are all sorts of “justifications” for these things published by NCDC and others, but the bottom line is that they are not representative of true reality, but of a processed reality.
h/t to Dr. Ryan Maue.
UPDATE: Here’s a graph showing cumulative adjustments to the USHCN subset of the entire US COOP surface temperature network done by Zeke Hausfather and posted recently on Lucia’s Blackboard:
This is calculated by taking USHCN adjusted temperature data and subtracting USHCN raw temperature data on a yearly basis. The TOBS adjustment is the lion’s share.
![USHCN-adjustments[1]](http://wattsupwiththat.files.wordpress.com/2012/06/ushcn-adjustments1.png)
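For readers who want to reproduce that kind of comparison themselves, here is a minimal sketch of the calculation described above (subtract the raw USHCN yearly values from the adjusted ones and plot the difference). The file names and column layout are hypothetical stand-ins, not the actual files used for the graph; adapt them to whatever USHCN raw and adjusted exports you download.

```python
# Minimal sketch of "adjusted minus raw" per year.
# File names and columns are hypothetical; adapt to your own USHCN downloads.
import pandas as pd
import matplotlib.pyplot as plt

raw = pd.read_csv("ushcn_raw_yearly.csv")        # assumed columns: year, temp_f
adj = pd.read_csv("ushcn_adjusted_yearly.csv")   # assumed columns: year, temp_f

merged = raw.merge(adj, on="year", suffixes=("_raw", "_adj"))
merged["adjustment"] = merged["temp_f_adj"] - merged["temp_f_raw"]

merged.plot(x="year", y="adjustment", legend=False)
plt.ylabel("Adjusted minus raw (°F)")
plt.title("Cumulative USHCN adjustments by year")
plt.show()
```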
atarsinc says:
June 6, 2012 at 10:17 pm
So, if NOAA is cooking the books, why did they adjust the SST trend lower?
For the same reason they adjusted the historical surface temperatures.
If anyone has a reasoned explanation for a flawed methodology, present it.
That’s NOAA’s question to answer.
I’m not certain why we fret so much about anomaly data in the first place. In my opinion, it is useless. The idea is to try and compare temperature trends in disparate temperature ranges. So, if the arctic warms from -40 to -39, that’s an anomaly of +1, and if the tropics warm from +30 to +31, that is also an anomaly of +1. Makes it simple to compare temperature trends in areas with completely different temperature ranges, doesn’t it?
NOT!
What we are arguing about is the IPCC claim that doubling of CO2 adds 3.7 w/m2 to the picture. So, do we want to measure degrees? Or watts? The formula to convert between the two is:
P(w/m2) = 5.67 * 10^-8 * T^4
With T being in degrees K. So pump the numbers into that formula:
T = 233K (-40C) = 167.1 w/m2
T = 234K (-39C) = 170.0 w/m2
Anomaly in w/m2 = 2.9
T = 303K (+30C) = 477.9 w/m2
T = 304K (+31C) = 484.3 w/m2
Anomaly in w/m2 = 6.4
How can a temperature increase of one degree = both 2.9 w/m2 and 6.4 w/m2?
Comparing anomalies from different temperature ranges to try and track changes in earth’s energy balance is just plain silly. Take those same numbers and suppose for a moment that the arctic had increased by two degrees, but the tropics had cooled by one. According to the average temperature, the earth would have increased by one degree. But in fact, from an energy balance perspective, we would actually be 0.8 w/m2 COOLER.
Adjust and justify the adjustments to the anomaly data all you want. If what you are trying to understand is if CO2 actually contributes 3.7 w/m2 per doubling, all you have is a bunch of temperature anomaly numbers that mean nothing.
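If you want to check the arithmetic in the comment above yourself, here is a minimal sketch (an illustrative addition, not part of the original comment) that applies the Stefan–Boltzmann relation P = 5.67×10⁻⁸·T⁴ to the two one-degree warmings:

```python
# Sketch: the same one-degree warming corresponds to different radiative
# changes at different absolute temperatures (Stefan-Boltzmann relation).
SIGMA = 5.67e-8  # W/m^2/K^4

def radiated_power(t_kelvin):
    """Blackbody flux in W/m^2 for a temperature in kelvin."""
    return SIGMA * t_kelvin ** 4

arctic = radiated_power(234) - radiated_power(233)   # -40C -> -39C
tropics = radiated_power(304) - radiated_power(303)  # +30C -> +31C

print(f"Arctic +1 degree:  {arctic:.1f} W/m^2")   # ~2.9
print(f"Tropics +1 degree: {tropics:.1f} W/m^2")  # ~6.4
```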
[snip – don’t insult the blog owner ~mod]
Rasey continued: What does time of observation have to do with anything with a min-max thermometer?
Perhaps I should clarify the question. It is assumed that the recording of the thermometer does not occur near the hottest or the coldest part of the day. ANYONE with the patience to daily record temperatures over the years would not commit such a blunder.
There will be the occasional cold front where the time of record is the warmest temp of the next day. And this presupposes that the recorder does not reset the marker before bedtime.
My father dutifully recorded min-max temperatures from 1960 to 1995 at his home every day when he returned from work. No, he wasn’t paid. His records aren’t part of any official database. He was just an aeronautical engineer who liked data – good data. He daily plotted the data on red K&E 1×1 mm tracing graph paper, along with estimates of rain and snow fall. He taped them on top of each other with 20 years in a stack taped to the front closet door. The thought that anyone would think to adjust his data after he died would have been foreign to him — and to me.
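For readers wondering why time of observation matters for a min-max thermometer at all, here is a minimal sketch of the double-counting mechanism (a toy illustration with invented numbers, not anyone’s official method): when the instrument is read and reset in the late afternoon, one unusually hot afternoon can be credited to two consecutive observation days, which is the bias the TOBS adjustment is meant to remove.

```python
# Sketch: how an afternoon reset of a max-min thermometer can
# double-count one hot afternoon across two observation days.
# Purely illustrative numbers, not real station data.
import numpy as np

hours = np.arange(48)  # two calendar days, hourly
# Smooth diurnal cycle around 20 C, plus a 10 C spike on day one's afternoon
temps = 20 + 8 * np.sin((hours - 9) * 2 * np.pi / 24)
temps[14:18] += 10  # hot spell, hours 14-17 of day one

def daily_max(reset_hours):
    """Max recorded between consecutive reset (observation) times."""
    return [temps[a:b].max() for a, b in zip(reset_hours[:-1], reset_hours[1:])]

print("reset at midnight:", daily_max([0, 24, 48]))  # spike counted once
print("reset at 17:00   :", daily_max([0, 17, 41]))  # spike shows up in both periods
```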
Bill, you’re not making sense. The Land Temp adjustments tend to show more warming, while the SST adjustments do the opposite. JP
Dr. Watts,
Christopher C. Burt here, author of the Weather Underground blog you have quoted herein.
First, just for the record, I would like to correct your assumption that my blog was written by Dr. Jeff Masters. I wrote the blog entirely myself, with no input from others in the Weather Underground organization.
Second, you have quoted only the first half of my blog on the subject of the NCDC changes, the half evaluating the changes in long-term means (LTMs) of temperature averages in the contiguous U.S. (CONUS). This gives the false impression that I disagree with the new methodology the NCDC is now using. That is not the case, as would be obvious if the 2nd half of my blog had been published in your piece today (June 6) titled “NOAA’s National Data Center caught cooling the past-modern records don’t match paper records”.
There are very good reasons for ‘massaging’ the areal temperature (and precipitation) data for use in ascertaining trends in climate change.
For instance, in the example I used of Arizona in 1934: the USWB (U.S. Weather Bureau, Dept. of Agriculture) based their 52.0° state average on data from 78 sites that reported from around the state that particular month of February 1934. Of these 78 sites, 3 were in the city of Phoenix (Airport, USWB site, and Indian School), 3 were in Yuma (Citrus Station, USWB, and Valley site), and 2 were in Tucson (Airport and Univ. of Arizona campus). So 8 (more than 10%) of the 78 sites for the entire state were located in three of the warmest cities in the state. Furthermore, 27 of the 78 sites were in Maricopa and Yuma counties, the two warmest counties in the state, which comprise 12.6% of the state’s landmass yet account for 34.6% of the observation sites.
It does not take a genius to see this leads to a problem when trying to ascertain a ‘state average’ temperature. You might argue: why not just stick to the same sites that reported in 1934 and compare them to what they observe now in 2012? That is not possible because many of the sites that reported in 1934 have long since stopped supplying data, so it is impossible to keep that timeline continuous. Moreover, even the city or town sites that STILL report data have (since 1934) relocated within their municipalities and/or changed instrumentation.
This is why, for the sake of determining long-term trends, it is not possible to simply use the same raw data from 1934 as in, say, 2012.
The NCDC has thus necessarily come up with a better way of trying to address these issues. So long as they apply the same parameters to the new GrDD (grid system) for ALL the sections for the whole POR (period of record, 1895-present), then for trend purposes the actual raw data is irrelevant. And, in fact, the original raw data from all the INDIVIDUAL sites used HAS NOT been changed; it is only the way their record has been interpreted that has (for the reasons I outlined above) been changed.
Yours,
Christopher C. Burt
Weather Historian
Weather Underground
REPLY: Thank you for responding. You are making an argument I have not made. I never suggested that the original station data changed, only the state average results. I know you wrote the article; I think perhaps you misinterpreted “..via contributor Christopher C. Burt.” as I wrote above. You should know that I attempted to use the link on the WU article right sidebar itself to contact you earlier today, as well as the contact link on this page of yours: http://extremeweatherguide.com/contact.asp and both failed. Thus, being unable to contact you, I could only run an excerpt and could not get clarifications. However, you do make an excellent point in that the US surface temperature record, as I have pointed out many times, is quite the mess, requiring a multitude of adjustments. The fact that all of these adjustments increase the trend is the issue. Also, I have no claim to the title of Dr., though thank you for thinking of me in this way. – Anthony
Anthony, you want to discuss only the one dataset, because it supports your position. I’m discussing both datasets to show that your position is incorrect. NOAA’s adjustments don’t take sides. They are simply attempting to present the most accurate data. If you believe they’ve erred, show us in what way they have done so, instead of insinuating some nefarious purpose. JP
REPLY: No. The article doesn’t mention GHCN, this is solely a discussion about US data, not global data. Your argument is a straw man by adding a second data set that uses a different set of procedures. -Anthony
An accountant was asked at an interview: ‘what is 2 + 2?’. ‘What would you like it to be?’ came the confident reply. In finance this is known as creative accounting – the gentle art of persuading the numbers to say what you want. So this is the new climatology: persuading the data to deliver any message required, fit any theory, agree with the output of any model. But then that very same massaged data can be used to create the parameters that populate those very same models. A circular, self-sustaining intellectual discipline. We have a new specialism: ‘creative climatology’. Awesome!
Anthony, many of your readers seem to be assuming that your article was about NOAA making inappropriate adjustments to datasets that tend to show more warming. I’ve presented an example of NOAA making adjustments to a different dataset that tend to show more cooling. My purpose is to show that adjustments are made for reasoned scientific purposes, regardless of whether they show more or less warming. JP
REPLY: That may be, but we aren’t talking about GHCN here, that’s a whole different can of worms. BTW, you should know that you’ve violated site policy by changing names. You’ve previously commented here as John Parsons and now are commenting as “atarsinc”. Pick one and stick with it please. – Anthony
coalsoffire said (June 6, 2012 at 2:40 pm)
“…Being a bear of very little brain I have a question. Can this sort of trick work forever? Will the artificial wave in the temperature anomaly just keep rolling along? In other words if you constantly adjust the past down and tinker a bit upward with the present to produce a constant upward trend, regardless of what is actually happening in the world, does the wave you have created ever crash on the shore? And if the natural variation is upward a bit too, well… bonus…”
And, if the “climate scientists” aren’t careful, their adjustments of the past will create a new “Little Ice Age”. They may be able to adjust the past, but too many people are observing the present.
If you really want to start an argument, ask a “climate scientist” if they can, with 95% certainty, tell us that the recorded values of their selected data-set (HadC, GISS, NOAA, BEST – pick one) will NEVER go back below the “zero” they’ve selected. Their whole world depends not only on a rising trend, but on their anomalies being as far above “zero” as they can make them.
This was one of the first things that made me skeptical about the whole temperature anomaly business – before you can tell how high or low a value is, you’ve got to have a point of reference (their “zero” point). Anybody who’s worked around electronics knows that your point of reference matters. That’s why most measurements are referenced to something (look up a decibel (dB) to see my point – “…a logarithmic unit that indicates the ratio of a physical quantity (usually power or intensity) relative to a specified or implied reference level…”).
Only “climate scientists” allow a “floating zero” in the temperature anomalies. No other science would allow different references to be used to measure the same quantity.
All of the databases seem to use different reference periods – all the way from GISS’s base period of 1951-1980 (showing the highest anomalies) to NCDC’s base period of 1981-2010 (along with its lower anomaly values).
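As a simple illustration of the base-period point above, here is a minimal sketch (my own toy example with made-up numbers, not any real dataset): re-baselining to a different reference period shifts every anomaly by a constant, so the anomaly values change but the fitted trend does not.

```python
# Sketch: changing the anomaly base period shifts values by a constant
# but leaves the trend untouched. Made-up data, for illustration only.
import numpy as np

years = np.arange(1895, 2013)
temps = 10 + 0.007 * (years - 1895) + np.random.default_rng(0).normal(0, 0.3, years.size)

def anomalies(base_start, base_end):
    base = temps[(years >= base_start) & (years <= base_end)].mean()
    return temps - base

a_5180 = anomalies(1951, 1980)   # GISS-style base period
a_8110 = anomalies(1981, 2010)   # NCDC-style base period

print("constant offset between the two:", round((a_5180 - a_8110).mean(), 3))
print("trend (deg/decade), 1951-1980 base:", round(np.polyfit(years, a_5180, 1)[0] * 10, 4))
print("trend (deg/decade), 1981-2010 base:", round(np.polyfit(years, a_8110, 1)[0] * 10, 4))
```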
Anthony,
My response was to the original article written (I thought by Dr. Watts), not a reply to your comments. Sorry about missing your attempts to contact me. Please in the future use the wunderground.com email system for such (under ‘blogs’ and then ‘weather historian’). My personal web site, extremeweatherguide.com, is going through a server host change at the moment and is not a reliable conduit for contacting me for the time being (plus that site is only related to my book and its contents).
Again sorry for the confusion!
Chris
Here is the problem I have: So if you do things like gridding and arrive at some adjustment for some year, say 1934, fine. Why change that adjustment next month? And why change it colder? And why continue to change it colder and colder as time goes by? Artificially adjusting pre-1950 temperatures colder and post-1950 temperatures warmer is bad enough … but to increase those adjustments with every passing month seems irresponsible.
Averages change over time. The NCDC is now using a consistent application to determine the temperature averages since 1895 and they show, in the long-term, a warming trend. That is why you are not seeing any cooling (yet) for the entire POR since 1895. Of course, if in fact, temperatures do begin to cool, then it WILL be reflected in the data.
Correction, I’ve confused ‘atarsinc’ with you ‘Anthony Watts’! So please disregard my last missive (except in so far as contacting me via email)!
Best,
Chris
And one by one the historical ‘inconsistencies’ just keep on disappearing.
Christopher Burt said: “The NCDC has thus necessarily come up with a better way of trying to address these issues. So long as they apply the same parameters to the new GrDD (grid system) for ALL the sections for the whole POR (period of record 1895-present) then the actual raw data for trend purposes is irrelevant. ”
I am going to have to go ahead and disagree with this. NCDC caused the early 20th century temperatures to go down by adding estimated values to fill in this grid system. Therein lies the problem. By adding more and more estimates in lieu of raw data, they are drifting farther and farther away from the true record.
Any estimate, no matter what methodology is used, will always be, at least in part a product of the estimator’s own bias and experience. This is unavoidable.
As Anthony put it earlier, the US temperature reconstruction is a mess. Not that anyone is to blame for that; who could have known 100 years ago that anyone would really care about 0.3 degrees or true average temperatures.
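To make the point of contention in this exchange concrete, here is a minimal sketch (my own toy example with invented numbers, not the NCDC GrDD algorithm) of why a plain station average differs from an area-weighted, gridded one: when a third of the stations sit in the warmest corner of the state, the raw mean runs warm relative to a grid that counts each area once.

```python
# Toy example: simple station mean vs. area-weighted (gridded) mean.
# Numbers are invented; this is not the NCDC GrDD method.
from statistics import mean

# (grid_cell, monthly_mean_F) -- many stations crowded into the warm "yuma" cell
stations = [
    ("yuma", 66.0), ("yuma", 65.5), ("yuma", 66.5), ("yuma", 65.0),
    ("flagstaff", 35.0),
    ("show_low", 38.0),
    ("prescott", 44.0),
]

simple_mean = mean(t for _, t in stations)

# Gridded: average stations within each cell first, then average the cells,
# so the over-sampled warm cell counts only once.
cells = {}
for cell, t in stations:
    cells.setdefault(cell, []).append(t)
gridded_mean = mean(mean(v) for v in cells.values())

print(f"simple station mean: {simple_mean:.1f} F")  # pulled up by the Yuma cluster
print(f"gridded mean       : {gridded_mean:.1f} F") # each area weighted once
```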
Does “irresponsible” mean “deceptive and utterly illegitimate”? Or is it just the strongest term you are willing and able to come up with?
Found in: atarsinc on June 6, 2012 at 11:28 pm:
Thank you, that was the last piece I needed, saved me a few minutes.
“atarsinc” identifies a user at a buy/sell site here with a location of Kettle Falls, WA.
Location and “John Parsons” leads to this commenter at “MinnPost” (Minneapolis, Minnesota), non-profit news site. In the snippets of his comments he’s self-identified as Dr. John Parsons. (BTW Googling “kettle falls wa climate change jp” brings this up so finding it was inevitable even without Anthony’s mention.)
Seven “recent” comments listed, spread across three (more or less) climate-related articles: Climate B.S. of the Year Awards: And the winners are… (Gleick), Climate skeptic admits he was wrong (Richard Muller), and Texas politicians censor climate-change research.
Here’s a real winner, found at the Texas story:
Yup, Dr. JP is quite a charmer.
Disclaimer: Information provided for entertainment purposes, not to facilitate harassment, which the blog owner flatly does not condone. So don’t harass Dr. Parsons.
Well – it should be obvious that if the explanation is true, then as many of the gridcells should have increased temperatures as have decreased temperatures.
Without knowing how thorough the survey and its reporting is, we can’t say this is systematic bias. Hopefully the author will confirm whether they balance out.
It’s OK. In 50 years TODAY’S temperatures will have been homogenised down.
Steven Mosher says:
June 6, 2012 at 9:29 pm
//////////////////////////////////////
One should always look at data in its purest and most uncorrupted form.
There should be no adjustments to the raw data, and no attempt to extrapolate the data over a notional grid area. The entire idea of a global average, or a state average, is a fallacy. My garden is about 1700 sqm, and if I had an accurate thermometer it would not surprise me if I could find 100 different temperatures in my garden. The idea that one thermometer could provide the ‘average’ temperature of my garden is frankly ludicrous. The ‘average’ temperature of my garden, because of land topography, foliage etc., would not be the same as that of my neighbours. It is even more crazy to consider that the temperature taken at an airport some 40 or so miles away could reflect the ‘average’ temperature of my or my neighbour’s garden. It is certifiably insane to consider that a state average could be obtained from a dozen or so thermometers.
Each station data set should be considered individually and the trend of that data set assessed on an individual basis. If there is a change of instrument, that is the end of one data set and the beginning of another. If the time of observation is altered, that is the end of one data set and the beginning of a new data set. If there is a change of siting, that is the end of one data set and the beginning of another. If there are changes to infrastructure (eg., putting in a tarmac carpark) that is the end of one data set and the beginning of another etc etc.
Of course, that may well mean that there are few if any lengthy continuous data sets, but that is just a consequence of the history of the site. What one can do is examine each individual data set (for as long as it lasts) and see what it actually says. We can look at the true facts. Presently all we are doing is examining the artifact of subjective adjustments and so-called harmonisations; we are not reviewing the data, and this inevitably distorts the picture.
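Here is a minimal sketch of the per-segment approach the commenter describes (an illustration with invented data and hypothetical break dates): split a station’s record at each documented metadata change and fit a trend to each unbroken segment separately, rather than splicing or adjusting across the break.

```python
# Sketch: treat each homogeneous stretch of a station record as its own
# data set, ending a segment at every documented change (instrument,
# siting, time of observation, ...). Invented data and break years.
import numpy as np

years = np.arange(1930, 1990)
temps = 15 + 0.01 * (years - 1930) + np.random.default_rng(1).normal(0, 0.4, years.size)

# Documented metadata changes (hypothetical): new instrument 1951, site move 1972
breaks = [1951, 1972]
edges = [years[0]] + breaks + [years[-1] + 1]

for start, end in zip(edges[:-1], edges[1:]):
    sel = (years >= start) & (years < end)
    slope = np.polyfit(years[sel], temps[sel], 1)[0]
    print(f"{start}-{end - 1}: trend {slope * 10:+.2f} deg/decade over {sel.sum()} years")
```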
OK, methinks Nick Stokes etc are right in principle, but I’m concerned at the quantification of the temperature differential between well-measured and poorly-measured places, through the epochs. I expect UHI is a stronger phenomenon today than 100 years ago, but who can say by how much? It’s an opportunity to build biases into the model. Just because the whole-Earth approach is best for quantifying Earth’s heat today (especially since satellite data is now used), doesn’t mean it’s best for historical comparisons, because the error bars increase so much as look-back time increases.
How’s that old rhyme go: “The way to fame and fortune, when other roads are barred, is take something very easy, and make it very hard”. Don’t project the whole-Earth temperature record into past epochs. Instead, map today’s whole-Earth temperatures to the existing surface temperature records, then trace those backwards, and for each epoch then reverse-map back into whole-Earth temperatures. This keeps errors manageable. But it looks like the tech-boys are determined to do this the hard and bias-able ways, sigh. (sorry if this posting is hard to read)
Green Sand says:
June 6, 2012 at 3:30 pm
DocMartyn says:
June 6, 2012 at 3:12 pm
———————————————————–
Just a few minutes of Fahrenheit 451: the autoignition temperature of paper.
“In an oppressive future, a fireman whose duty is to destroy all books begins to question his task. “
Spooky. Ray Bradbury died yesterday aged 91.
In support of the above evidence of data fiddling, here’s a little analysis I did of the doctoring of data at some Arctic stations. For Iceland and N. Russia the scallywags have depressed temperatures by 0.9C in the early 20th century and raised them by 0.9C in the later period.
http://endisnighnot.blogspot.com/2012/03/giss-strange-anomalies.html
We need tenacious journalists – a la Watergate – to give this scandal a proper public airing.
atarsinc says:
June 6, 2012 at 11:02 pm
Bill, you’re not making sense. The Land Temp adjustments tend to show more warming, while the SST adjustments do the opposite. JP
I presume that was in reply to my reply at 10:50 pm to your query at 10:17 pm: “So, if NOAA is cooking the books, why did they adjust the SST trend lower?”
For the same reason they adjusted the historical surface temperatures.
Because they can.
davidmhoffer says (June 6, 2012 at 4:40 pm): “…government destroying all paper records of everything…”
From PhD thesis, page 49, Simon Torok, who did the first major homogenisation of Australian weather data:
“It should be noted that BoM archive searches are frustrated by the fact that (Regional Offices) hold unique items not held by Head Office and vice-versa. The problem was compounded by a culling of the meta data files at Head Office, carried out in the 1960s.”