By Joseph D’Aleo, CCM
Remember this story from long ago about New York’s Central Park and its multiple, very different data sets, to which Steve McIntyre responded here. McIntyre wrote then:
…has the temperature of New York City increased in the past 50 years? Figure 1 below is excerpted from their note, about which they observed:
Note the adjustment was a significant one (a cooling exceeding 6 degrees from the mid-1950s to the mid-1990s). Then, inexplicably, the adjustment diminished to less than 2 degrees… The result is that what was a flat trend for the past 50 years became one with accelerated warming in the past 20 years. It is not clear what changes in the metropolitan area occurred in the last 20 years to warrant a major adjustment to the adjustment. The park has remained the same, and there has not been a population decline but rather a spurt in the city’s population in the 1990s.
Well, NCDC has a shiny new, very cool tool for plotting data for regions, states, and some city locations by month, season, or year. They describe it this way:
Data for the Contiguous U.S., statewide, climate divisions, climate regions, and agricultural belts come from the U.S. Climate Divisional Database, which has data from 1895 to the present.
Information is also available at the city level for the following 60 cities. The 27 cities highlighted in blue below are Automated Surface Observing System (ASOS) stations which are part of the U.S. Historical Climatology Network (USHCN) (temperature data for the USHCN stations were converted to version 2.5 in October 2012). The other 33 cities use Global Historical Climatology Network (GHCN) data. These cities have data from varying beginning periods of record to the present.

Source: http://www.ncdc.noaa.gov/cag/data-info
New York’s Central Park was one of the blue cities (new USHCN v2.5). So I plotted it for July since that was one of the months in the original comparison.

Source: http://www.ncdc.noaa.gov/cag/
The surprise (when I plotted the source data myself rather than using NCDC’s tool) was how flat it was during the dust bowl heat of the 1930s. I know that the NWS NYC web site has archived raw monthly means back well into the 1800s, so I downloaded those and compared.

The NCDC v2.5 data was dramatically cooler than the original data. This plot shows the differences between the original recorded temperatures at Central Park and the final adjusted data that NCDC presents to the public:

As is clearly evident, the adjustments made the dust bowl period cooler, while no adjustments were applied after 1995. The result is a steeper temperature trend, because the past is made cooler relative to the present. The only problem is that this isn’t what was actually recorded at the time.
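The mechanism is simple enough to demonstrate with a toy series (purely hypothetical numbers, not the Central Park record): cooling only the early part of an otherwise flat record manufactures a positive least-squares trend, even though the recent values are untouched.

```python
import numpy as np

# Toy illustration (invented values, NOT the Central Park record):
# start from a perfectly flat series, then cool everything before 1995.
years = np.arange(1930, 2011)
raw = np.full(years.shape, 25.0)       # flat "raw" July means
adjusted = raw.copy()
adjusted[years < 1995] -= 1.0          # cool the past; leave recent years alone

raw_slope = np.polyfit(years, raw, 1)[0]
adj_slope = np.polyfit(years, adjusted, 1)[0]
print(raw_slope)   # ~0: no trend in the flat series
print(adj_slope)   # positive: cooling the past created a warming trend
```

The adjusted series contains exactly the same post-1995 values as the raw one, yet its fitted trend is now positive; this is the sense in which cooling the past steepens the trend.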
I think maybe we need to coin a new term for NOAA NCDC: ‘dust bowl deniers’. Yes, it appears there is man-made warming underway, but the men are in Asheville, North Carolina, at NOAA’s National Climatic Data Center.
=============================================================
Addendum by Anthony:
Cooling the past increases the trend. We’ve seen this effect several times before, yet there seems to be no justification for it. Probably the most dramatic example is this NOAA GISS plot comparison:
I’ve also written before about this tampering with data from the past. Such tampering, with new adjustments like USHCN v2.5, allows claims of “warmest ever” to be made when the past gets cooled:
Dear NOAA and Seth, which 1930′s were you comparing to when you say July 2012 is the record warmest?
Steven Mosher says:
July 15, 2013 at 10:32 am …
I can even prove to you that there is no such thing as raw data… but that’s no fun…
It’s quite a lot of fun really, not to mention critical methodologically, especially defining “raw.” The discussion reveals methodological biases, personal expectations, unconscious inclinations and all kinds of revelatory bits of behaviour. The lack of that discussion in any readily available form is in large part the reason so many sceptical lay people profoundly distrust the available analyses of climate data. It would also cure insomnia in many.
@Willis Thank you Stephen! I love it when the error estimates are less than the adjustments, that’s always hilarious.
Unfortunately, most people don’t get the joke.
The majority of those that do don’t want the facts to get in the way of a “good story.”
Thanks, Joe. Good article.
“Multigraph – Interactive Data Graphs for the Web” NCDC’s “shiny new very cool tool for plotting data” is at http://multigraph.github.io
Easy to use and free.
I remind you that Steve McIntyre did an excellent review of the UHI adjustments to the GISS dataset in a post at Climate Audit. I wrote a summary here:
http://www.friendsofscience.org/index.php?id=396
which contains a link to Steve’s original post, “Positive and Negative Urban Adjustments,” of March 1, 2008.
NASA applies an urban correction to its GISS temperature index in the wrong direction in 45% of the adjustments. Instead of eliminating the urbanization effects, these “wrong way corrections” make the urban warming trends steeper.
Ken Gregory says:
July 15, 2013 at 12:39 pm
Kind of illustrates the point about ‘adjustments’ that I was trying to make: I have no beef with adjustments if they are properly recorded and documented, with due diligence and logical reasoning, and kept alongside the carefully preserved original data. That way, when Mr X has made his adjustments (and kept careful records/observations), and Mr Y comes along to make further adjustments, he either works solely on the original raw data, or takes into account the previous adjustments to the ‘v2’ of the data, etc.
My point is simple (to me, anyway): I want to know what adjustments have been made, and why they were made. Is anyone out there aware of a temperature dataset so ‘clean’ as to be able to demonstrate its traceability and history through the various changes? I don’t believe so, but would be more than happy to be shown otherwise. (And no, individual or specific station data history does not really count as proof that a major dataset like GISS or HadCRUT is ‘valid’; indeed, looking at the various examples of research done by others on specific stations, such as this, it is not clear that acceptance of these large datasets is warranted. In short, where is the validation?)

This may sound like a good reason to completely distrust the data, which isn’t necessarily true on its own, but a true scientist would not work in this ‘hidden’ fashion, and so the shouts of ‘show us the data’ (meaning the changes/reasons) are not unreasonable, IMHO. The longer they remain ‘hidden’, the less trusting of the data we can be. That’s it in a nutshell. All the hockey stick charades, tree rings, etc.: what do they tell us about the efficacy of the peer review and data processing methods? They strongly suggest that the methods are wrong, incomplete, unsupervised, and potentially deliberately fraudulent. Why is asking for a reasonable demonstration of the dataset’s construction so wrong?
Marc77 says:
July 15, 2013 at 10:27 am
You might find my work on nightly cooling interesting.
I don’t know if this is linked to the low sun activity.
But what has been of real interest lately is the number of ‘cut-off’ lows that have been forming.
It looks like we may have up to four forming over the next 7 days or so. If this starts to become a growing trend, then that would point to an increased risk of cooling.
Sorry posted on the wrong topic.
Q: What do you get when you adjust bad data?
A: Adjusted Bad Data.
Bad data can be useful for correcting experimental methods, but it cannot be used to correct itself. It is rather binary: it is either good or not good.
The next time I experience triple digit temperatures, I will know to adjust my thermometer readings downward. I will feel so much cooler.
Richard G says:
July 15, 2013 at 3:04 pm
That’s not strictly true: a simple data dropout can be infilled using reasonable statistical methods (obviously not so good when a sh*tload of dropouts occurs!). To a scientist, the actual observations (or instrument readings) are always paramount; sure, they can be corrected later if instrumental or systemic errors are found, which doesn’t necessarily make the original data ‘bad’.
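A minimal sketch of the kind of infilling described above, assuming a single missing monthly value (the numbers are made up): linear interpolation across the gap is defensible for one dropout, far less so for many.

```python
import numpy as np

# Hypothetical monthly means with one missing value (NaN).
temps = np.array([22.1, 23.4, np.nan, 24.0, 23.2])
idx = np.arange(len(temps))
good = ~np.isnan(temps)

# Linear interpolation across the gap: reasonable for a single dropout,
# increasingly dubious as the number and length of gaps grow.
filled = temps.copy()
filled[~good] = np.interp(idx[~good], idx[good], temps[good])
print(filled)  # the gap becomes the midpoint of its neighbours
```

Crucially, this kind of infill is only honest when the original NaN is preserved alongside the filled value, which is exactly the record-keeping point being argued in this thread.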
My personal take on all the datasets being bandied about is that they are all a cross-referenced mish-mash based on each other to some degree or other, with subsequent ‘alterations’ also cross-referenced, etc., perhaps applied even more than once via different authors/processes/erroneous computer code! I can’t prove that, but as far as I can see, few can disprove it either, and that is the worrying issue here.
I would love to see Mosh’s reaction to a bank statement (‘checking account’, I think you Americans call them?) with several unexplained corrections (up, down, sideways, etc.!) on it, and a covering letter from the bank saying that ‘we’ve checked and everything is in order, sir!’ That is honestly how I see the adjusted temperature datasets as presented to Joe Public.
Kev-in-Uk says:
July 15, 2013 at 3:18 pm
If instrumental or systemic errors are found, they should be corrected and the experiment rerun. Too bad this cannot be done with climate records.
I refer you to the Harry_read_me files to illuminate the inescapable conclusion that the CRU records are fatally corrupted by uncertainty. The observer error cannot be quantified; too many people doing things slightly differently. The records cannot be sorted out, let alone corrected.
To paraphrase Harry “What do we do when there are missing station records? We make them up because we can.”
These GISS diagrams –
http://www.warwickhughes.com/blog/?p=38
http://www.warwickhughes.com/papers/gissuhi.htm
from a 2001 paper: Hansen, J.E., R. Ruedy, Mki. Sato, M. Imhoff, W. Lawrence, D. Easterling, T. Peterson, and T. Karl, 2001: A closer look at United States and global surface temperature change. J. Geophys. Res. 106, 23947-23963, doi:10.1029/2001JD000354 (a PDF can be downloaded at http://pubs.giss.nasa.gov/abstracts/2001/)
illustrate with crystal clarity how the common “adjustments” to UHI-affected data, made to compensate for steps in the data as instruments are moved outward from urban centres, actually insert more warming into the resulting adjusted trend.
Funny how the highs from the 1930s through the 1950s are adjusted down from the actual data as well. One would think they were trying to make it easier to set records… compare the raw data to what is listed on the Weather Channel for highs; they have dropped Folsom, CA a good 3-4 degrees for record highs.
Unless the methodology behind such adjustments is made freely available to the public and subject to commentary by the scientific community, they cannot, in any reasonable sense, be considered justified.
An example of how “raw data” may not actually be raw data: when an NWS Cooperative weather station exists in a rural area, regardless of its length of record, and is then moved into an urban area due to loss of the previous observer, etc., the entire record is adjusted upward by an NCDC algorithm to match the new urbanized location, to prevent the appearance of hockey sticks. The entire period of record, once moved to the other location, becomes artificial. Of course, the same thing happens in reverse: an urban record is adjusted downward if moved into a rural area with cooler temps, depending on the measured differences.
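This is not NCDC’s actual algorithm, but the step-change matching described above can be sketched in a few lines (hypothetical values throughout): the segment before the move is shifted by the mean offset across the move, so the earlier record no longer matches what was actually observed.

```python
import numpy as np

# Hypothetical annual means with a station move after index 2
# (values invented for illustration only).
record = np.array([14.0, 14.2, 13.9, 16.1, 16.3, 16.0])
move_at = 3

# Shift the pre-move segment by the mean offset across the move,
# making the whole series "consistent" with the new site.
offset = record[move_at:].mean() - record[:move_at].mean()
homogenized = record.copy()
homogenized[:move_at] += offset   # earlier years raised to match the new site

print(offset)        # the step attributed to the move
print(homogenized)   # pre-move values no longer match the observations
```

After the shift, the two segments have the same mean level, which is the point of homogenization, and also the reason the pre-move period of record, as presented, is no longer the observed data.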
And I sit here with a copy of “The World Almanac 1933” pub. by World Telegram, “Single Copies 60 Cents,” at my left elbow and open to p. 91, “Daily Maximum and Minimum Temperatures at New York City, 1931.” (Compiled under the direction of James H. Scarr, United States Meteorologist). Just for grins, want to guess the max temp. for July 15, 1931? It was………………………………………………………. twelve plus seven-seven.
PRESERVE THE HISTORICAL RECORDS!
And, this happy little bit of trivia to end the day: “July 15 [1932] — President Hoover cut his own salary 20 per cent.”
Oh, BOTHER!!!!!
“… twelve plus SEVENTY-seven” = 89
The fraudisticians operate in complete confidence that there will be no calling-to-account. Their malefactions thus grow, and grow.
One way or the other, it will all end in tears.
Why would any temperature ever be adjusted? You have recorded a temperature; later, somehow, someone knows the equipment was out of calibration, and by how much? I would so love to never see an “adjusted data set” again.
FWIW, TOBS is at 24:00 and is constant throughout. No adjustment needed for that.
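For contrast with the constant-midnight case above, here is why an afternoon observation time does bias a max-temperature record: the thermometer is reset near the daily peak, so a hot afternoon can register again as the next day’s “maximum”. A toy simulation with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
true_max = 30 + 3 * rng.standard_normal(60)   # hypothetical daily maxima

# Afternoon reset: yesterday's peak can carry over into today's reading,
# so the recorded maximum is never lower than yesterday's true maximum.
recorded = true_max.copy()
for i in range(1, len(recorded)):
    recorded[i] = max(true_max[i], true_max[i - 1])

print(recorded.mean() - true_max.mean())  # positive: a warm bias
```

A midnight reset has no such carryover, which is the sense in which a constant 24:00 observation time needs no time-of-observation correction.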
And Winston looked at the sheet handed him:
“Adjustments prior to 1972 shall be -0.2 degrees and after 1998 shall be +0.3 degrees.”
Winston wondered at the adjustment to the data. At this point, no one even knows if the data, prior to his adjustments, was raw data or already adjusted one or more times previously.
It didn’t matter. All Winston was sure of was that one of the lead climatologists needed more slope to match his computer model outputs. He punched out the new Fortran cards and then dropped the old cards into the Memory Hole, where they were burned.
“There!” Winston exclaimed to himself. “Now the temperature data record is correct again; all is double-plus good.”
NucEngineer says:
July 16, 2013 at 7:54 am
Haha, that is funny, except that it immediately reminded me of learning Fortran as a student in the ’70s and using punch cards: carefully putting them in order and feeding them in to be read and processed, only to get the inevitable ‘The Fortran operation referenced does not exist’ (or something very similar) back several hours later! Jeez, early computer science was a real drag… LOL
From Climate Audit’s (long ago!) 2007 thread, we find this little gem.
Quoting “Anonymous”:
So, if in 2007 “Anonymous” knew that New York City area meteorologists consider Central Park to have been reading “artificially cool … for at least 10 years now,” then why should GISS (a known NYC climate research source) not have “artificially adjusted” Central Park temperatures between 1990 and 2007 down by 10 degrees?
/sarchasm – that gaping hole between a liberal and the truth and the real world.
Central Park was conceived in 1844; work began in 1853-1857, and construction finished in 1870, with almost no physical changes since then to the Park’s interior, its walking paths/roads/carriage paths around Belvedere Castle, or the ponds and trees around it since the Angel of the Waters statue was dedicated in 1873. So, since 1870, the ONLY change has been not to the location of the station or the “average trees” around it, but to the 40-50 miles of “AVERAGE REGION” around Central Park.
That is, to the UHI effect of the total energy used around NYC and the average effect of the roads, concrete, and buildings around the entire Central Park area. The REGIONAL effect on the daily temperatures of Central Park IS the UHI effect!
Factors That Change the UHI over Time.
Distinct “parts” of the UHI at any given location are in fact related to population, but ONLY indirectly. The biggest UHI effect is the total reflection and absorption area of buildings, streets, parking lots, and roads in the region. These changed radically in the REGION around Central Park between 1810 and 1890, but much, much less so between 1890 and 1930, and almost not at all between the 1920s and 2013. (An urban map of 1920, or even 1890 for that matter, of NJ’s Hudson River shore, Manhattan, Brooklyn, the Bronx, or Queens would not only be recognizable, but would need almost no changes to be used now. In fact, the number and height of buildings in Manhattan, or any of the other boroughs above, has changed little since 1950.)
Regional energy use has gone up very, very much since the Central Park station was sited in 1870, BUT that energy increase is small compared to the solar effect on the regional buildings. It needs to be considered for any UHI study or UHI correction anywhere in the country, BUT the change in energy use is very, very different in different years. There was, for example, a change when electricity was introduced into Manhattan in 1890; use then increased slowly up to 1920-1930, decreased during the Great Depression, then changed again from the late ’30s through WWII’s boom, and again with the introduction of air conditioning. But has energy use in NYC always increased in direct proportion to population alone between any such year and 2013? Absolutely not!
Has it increased (or decreased) according to NASA-GISS’s “favored” nighttime-light analysis proxy? That’s a bad assumption: nighttime lights may be a proxy for average population in an area, but is that assumption valid for a long-established urban region like NYC’s Central Park-Brooklyn-Bronx-NJ-Queens area between 1970 and 2013? Is it equally, better, or less valid for the 5x increase in population, energy, and buildings around Atlanta between 1970 and 2013? Around Chattanooga since the TVA changes of the 1930s-1940s? Fresno since 1920, due to farming? Colorado City since 1990? Las Vegas since 1980? Since 1990? Since 2000?
“Local” changes to the immediate 100 feet around a weather station should NEVER be considered “inside” the region’s far larger time-affected UHI, but these local changes WILL also change the locally recorded temperatures. Any valid “adjustment” to the historical temperature record must specifically and uniquely account for these local changes in location, altitude, exposure, and contamination (air conditioning compressors, parking lots, trees, roads, etc.). But to arbitrarily “assume” Central Park is 10 degrees cooler in 2007 compared to 1990 due to an increase in tree height? Is that what counts as “climate science” at GISS?
By the way: somebody needs to explain to me why NOAA/NASA-GISS “time of observation” corrections need to be averaged over all temperature records across a 15- or 25-year period over an entire region or country, yet change differently for each year during that period, rather than being applied just once (when the recording time “might” have been changed from morning to afternoon only once in that period, for only a few of the region’s thermometer stations).