Central Park in USHCNv2.5 (October 2012) magically becomes cooler in July in the Dust Bowl years

By Joseph D’Aleo, CCM

Remember this story from long ago about New York’s Central Park and its multiple, very different data sets, to which Steve McIntyre responded here. McIntyre wrote then:

…has the temperature of New York City increased in the past 50 years? Figure 1 below is excerpted from their note, about which they observed:

Note the adjustment was a significant one (a cooling exceeding 6 degrees from the mid-1950s to the mid-1990s). Then inexplicably the adjustment diminished to less than 2 degrees… The result is that what was a flat trend for the past 50 years became one with accelerated warming in the past 20 years. It is not clear what changes in the metropolitan area occurred in the last 20 years to warrant a major adjustment to the adjustment. The park has remained the same, and there has not been a population decline but rather a spurt in the city’s population in the 1990s.

Well, NCDC has a shiny new, very cool tool for plotting data for regions, states, and some city locations by month, season, or year. They describe it this way:

Data for the Contiguous U.S., statewide, climate divisions, climate regions, and agricultural belts come from the U.S. Climate Divisional Database, which has data from 1895 to the present.

Information is also available at the city level for the following 60 cities. The 27 cities highlighted in blue below are Automated Surface Observing System (ASOS) stations which are part of the U.S. Historical Climatology Network (USHCN) (temperature data for the USHCN stations were converted to version 2.5 in October 2012). The other 33 cities use Global Historical Climatology Network (GHCN) data. These cities have data from varying beginning periods of record to the present.

image

Source: http://www.ncdc.noaa.gov/cag/data-info

New York’s Central Park was one of the blue cities (new USHCN v2.5). So I plotted it for July since that was one of the months in the original comparison.

image

Source: http://www.ncdc.noaa.gov/cag/

The surprise (when I plotted the source data myself rather than use NCDC’s tool) was how flat it was in the dust bowl heat of the 1930s. I know that the NWS NYC web site has archived raw monthly means going back well into the 1800s, so I downloaded that data and compared.

image

The NCDC v2.5 data were dramatically cooler than the original data. This plot shows the differences between the original recorded temperatures at Central Park and the final adjusted data that NCDC presents to the public:

image

As is clearly evident, the adjustments made the Dust Bowl period cooler, while post-1995 data had no adjustments applied. The result is a steeper temperature trend, because the past is made cooler relative to the present. The only problem is that it isn’t what the data actually recorded at the time.
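The arithmetic of this effect can be sketched in a few lines of Python. The series below are illustrative stand-ins, not the actual Central Park record: even a perfectly flat series acquires a positive trend once its early decades are adjusted a couple of degrees cooler.

```python
# Sketch: how cooling only the past steepens a trend.
# These arrays are synthetic illustrations, NOT real station data.
import numpy as np

years = np.arange(1931, 2011)
raw = np.full(years.size, 77.0)                 # flat "raw" July means (deg F)
adjustment = np.where(years < 1995, -2.0, 0.0)  # cool the early years only
adjusted = raw + adjustment

def trend_per_century(y, t):
    """Least-squares slope, in degrees per 100 years."""
    slope = np.polyfit(t, y, 1)[0]
    return slope * 100.0

print(trend_per_century(raw, years))       # ~0: the raw series is flat
print(trend_per_century(adjusted, years))  # positive: cooling the past adds slope
```

The point is purely mechanical: a step adjustment that lowers only the early part of a record manufactures slope even when the underlying data have none.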

I think maybe we need to coin a new term for NOAA NCDC – ‘dust bowl deniers’.  Yes it appears there is man made warming underway but the men are in Asheville, North Carolina at NOAA’s National Climatic Data Center.

=============================================================

Addendum by Anthony:

Cooling the past increases the trend. We’ve seen this effect several times before, yet there seems to be no justification for it. Probably the most dramatic example is what we see in this NOAA GISS plot comparison:

I’ve also written before about this tampering with data from the past. Such tampering, via new adjustments like USHCN v2.5, allows claims of “warmest ever” to be made when the past gets cooled:

Dear NOAA and Seth, which 1930′s were you comparing to when you say July 2012 is the record warmest?

Does NOAA’s National Climatic Data Center (NCDC) keep two separate sets of climate books for the USA?


56 Comments
John Tillman
July 15, 2013 8:55 am

Shameful anti-scientific behavior from climate “scientists”, ie activist liars.
Congress needs to investigate & put a stop to these outrageous shenanigans by mendacious CACCAdvocates.

GlynnMhor
July 15, 2013 9:11 am

Even with their adjustments they’re having a hard time masking the ongoing slump in warming, not coincidentally as the Sun goes into its deepest funk since the Dalton Minimum at the end of the Little Ice Age.

Hardy Cross
July 15, 2013 9:21 am

Will someone please ask the folks who make these adjustments why they made them?

Stephen Rasey
July 15, 2013 9:42 am

It seems to me that the adjustments are a minimum error band to be applied to the temperatures.
According to the 4th figure above, “Central Park Difference”, the NCDC is admitting that the error bars on average temperature are at least 2 degrees in the early part of the century.
Adjustments do not reduce uncertainty. Adjustments themselves have error. When Adjustments with error are added to a time series with error and noise, the total errors increase.
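The error-propagation point above can be made concrete. The sigmas below are assumed, illustrative numbers, not NCDC’s published error estimates; the only claim is the standard rule that independent errors add in quadrature.

```python
# Sketch: an adjustment that carries its own error never shrinks the
# total uncertainty of the adjusted value. Numbers are illustrative.
import math

sigma_measurement = 0.5  # assumed uncertainty of a raw monthly mean (deg)
sigma_adjustment = 1.0   # assumed uncertainty of the adjustment itself (deg)

# For independent error sources, variances add:
sigma_total = math.sqrt(sigma_measurement**2 + sigma_adjustment**2)

print(round(sigma_total, 3))  # larger than either input alone
```

Whatever the actual magnitudes, the combined sigma is always at least as large as the larger of the two inputs, which is the commenter’s point in one line.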

Kev-in-Uk
July 15, 2013 9:51 am

Hmm – which of our warmista visiting friends is going to appear to defend this?
It beggars belief as far as I am concerned. And yet we will still have the likes of Mosh ‘defending’ the ‘data’ like some kind of irreproachable holy icon. Sure, ‘it’s all we’ve got’ (one of Steve’s past comments, IIRC!), but yet again this illustrates that we cannot rely on any of the data being touted as verifiably ‘good’ data.
BTW, I’m not having a go at Mosh, just using that as an example of how some folk ‘trust’ the data ‘as supplied’ – when clearly, as has been demonstrated many times, it can be quite suspect, to say the least. (And don’t come back at me with the ‘it doesn’t affect the global average’ crapola, either – as I have said before, unless all raw and adjusted data are presented in ALL formats, i.e. raw and subsequent revisions, with descriptions and adjustment values, we will never know!)
As I have also said before, what worries me the most is that the original data is long since lost, or even swamped by subsequent ‘revised’ data sets. I also wonder how many revisions on top of revisions have taken place – meaning that a revised dataset has been ‘further’ revised, perhaps without the previous revisions being properly taken into account.

Jon Jewett
July 15, 2013 9:52 am

(Sigh!!!)
Look, folks, it wasn’t ever about science, not for a moment.
Thank you, Anthony for what you do.
Steamboat Jack (Jon Jewett’s evil twin)

Louis Hooffstetter
July 15, 2013 9:59 am

I say: Scientific Fraud!
Another egregious example of “Cooling the past to warm the present”
Moshpit: What say you?

Editor
July 15, 2013 10:06 am

Stephen Rasey says:
July 15, 2013 at 9:42 am

It seems to me that the adjustments are a minimum error band to be applied to the temperatures.
According to the 4th figure above, “Central Park Difference”, the NCDC is admitting that the error bars on average temperature are at least 2 degrees in the early part of the century.
Adjustments do not reduce uncertainty. Adjustments themselves have error. When Adjustments with error are added to a time series with error and noise, the total errors increase.

Thank you Stephen! I love it when the error estimates are less than the adjustments, that’s always hilarious.
w.

Brad
July 15, 2013 10:10 am

The best showing of this effect on the global thermometer record may be the global temp page at climate4you.com. It shows 0.6 to 0.8 degrees caused by manipulation alone.

July 15, 2013 10:14 am

Because of increased UHI effect over the past century, they should have adjusted the 1930s UP and the 1990s DOWN.

Kev-in-Uk
July 15, 2013 10:22 am

Stephen Rasey says:
July 15, 2013 at 9:42 am
Certainly looks that way! But these folk wouldn’t let error bars get in the way of a good bit of data manipulation – all in the name of producing a ‘definitive’ trend line!

July 15, 2013 10:27 am

This is not only irritating, it is just plain fraudulent. These tamperers are rewriting history in exactly the same manner as Stalin or Goebbels… but in an even more sleazy manner. Every aspect of the temperature ‘record’ has been manipulated, be it by adjustment or statistics, to show a ‘warming’ – itself a meaningless set of figures calculated from manipulated data. It is 0.7 degrees warmer? Who the hell could measure that? And the only way to present it is with a greatly exaggerated y-axis… because otherwise all that sciency-looking squiggle would be a flat line. What a bunch of shifty spoiled children we have in the agencies in question. “Nobody is listening (or even paying attention) so we’ll just show THEM!”
Disgusting.

Marc77
July 15, 2013 10:27 am

I have been looking at detecting the UHI lately. What I have found is that an increase in night temperatures due to UHI has a different signature than a warming due to a change in cloudiness.
The UHI seems to be associated with a high thermal inertia. It seems to warm the third lowest minimum of the month 50% more than the average minimum. This is probably because the city stays warm for several days after it has accumulated heat, or because it constantly produces heat. I found this by looking at pairs of close stations with a different night temperature but a similar day temperature in the same decade.
But if you look at the variations from one year to the next at the same station, an increase in the night temperature does not seem to lead to a greater change in the third coolest night. This is because an increase in low cloud only reduces the number of days on which a coolest night of the month can be set.
I have also looked at the third highest diurnal temperature range; it too is more affected by station siting than the average diurnal temperature range.
I used the third highest or lowest in order to remove the most extreme values.
The third coolest night and the third highest diurnal temperature range occur with very low cloud cover, so a change in low cloud cannot explain their evolution through time. In some extreme cases, like Winnipeg in 2006, the third highest diurnal temperature range jumped by a good 3C due to, I believe, a lack of soil moisture. There is also the possibility that water vapor can have a high thermal inertia by condensing and evaporating within the atmosphere. Outside of that, I guess that most of the decrease in diurnal temperature range over the last few decades was due to urban development.
When I corrected the night temperature based on this idea, I found that the average diurnal temperature range might have increased in the last 60 years.
There are other things that can be found from daily data.
1- If you put all the diurnal temperature ranges of a decade for a particular month in order, the general shape could hint at the number of days with little low cloud cover. It could also show a plateau of high diurnal temperature ranges; that’s because you can’t have less low cloud cover than no low cloud cover. The evolution of this plateau through time is independent of a change in low cloud cover.
2- If you look at the evolution of the third coolest day of the month, it could show the temperature when the sunshine does not hit the station directly. It could help detect stations that are not well maintained.
I have used number 2 with very little data, and it seems to show that old stations were not more affected by the sunshine.
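The “third lowest minimum” statistic described above is easy to sketch: sort a month of daily minima and take the third value, discarding the two most extreme nights. The data here are synthetic draws, not real station records.

```python
# Minimal sketch of a "third lowest minimum" cold-tail statistic.
# daily_tmin is synthetic, standing in for one month of nightly minima.
import numpy as np

rng = np.random.default_rng(0)
daily_tmin = rng.normal(loc=15.0, scale=3.0, size=30)  # 30 nightly minima (deg C)

def third_lowest(values):
    """Third-smallest value: a cold-tail measure robust to the two most extreme nights."""
    return float(np.sort(values)[2])

monthly_mean_min = float(daily_tmin.mean())
cold_tail = third_lowest(daily_tmin)
print(cold_tail, monthly_mean_min)  # the cold tail sits below the monthly mean
```

Comparing how this cold-tail statistic and the plain monthly mean evolve, across station pairs or across years, is the kind of diagnostic the comment describes.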

noaaprogrammer
July 15, 2013 10:30 am

No wonder Mann can believe the globe is always warming, never cooling. One just makes the adjustments in computer code and voila – perpetual warming!

July 15, 2013 10:30 am

Back in 2009 I plotted Central Park from version 2 raw data and was pleased that it in fact carried back to about 1820, as did Washington D.C. and Minneapolis, though I had to extend one of these plots using data overlap with another record, as I vaguely recall:
http://s24.postimg.org/498mmzb6d/2agnous.gif
The Minneapolis data stopped early, though, so there’s a gap at the end. I based my search for long records on a page from an old article I lack a current reference for: http://s23.postimg.org/kluzqs9mj/Oldest_T_Stations_List.gif
ClimateReason.com has another plot of Central Park that far back too:
http://climatereason.com/LittleIceAgeThermometers/NewYork_USA.html

July 15, 2013 10:32 am

http://berkeleyearth.lbl.gov/stations/167589
http://berkeleyearth.lbl.gov/station-list/?phrase=central+park
“BTW, I’m not having a go at Mosh, just using that as an example of how some folk ‘trust’ the data ‘as supplied’ – when clearly, as has been demonstrated many times, it can be quite suspect, to say the least. (And don’t come back at me with the ‘it doesn’t affect the global average’ crapola, either – as I have said before, unless all raw and adjusted data are presented in ALL formats, i.e. raw and subsequent revisions, with descriptions and adjustment values, we will never know!)”
############
I most certainly don’t trust the data as supplied. The procedure is simple.
First, use daily data wherever possible. All stations have daily data back to the 1820s or so; before that they only have monthly records.
So, never use GHCN monthly. Use daily data. I started with daily data back in 2007.
Next, use the unadjusted data. I do.
Finally, use the best method available, a method suggested by skeptics, to create your best estimate given the data. Now, do you trust the underlying raw daily data? No. But if you are asked for your best estimate given the data at hand, you can compute that estimate. That’s all you can do. And yes, you can never know. But science isn’t about knowing. Science is about presenting the best understanding given the data. We can play philosophy and question all data. I can even prove to you that there is no such thing as raw data… but that’s no fun.
If you want all the data, it’s available. It’s been available for some time. And going forward the archive will be improved even more to include level 0 data. What’s that? Photos of the original written records.
Of course, to look into this you’d have to get off your ass. When you do, you will find that…
Guess what? The LIA existed! And the warmth in the 30s and 40s was real and wasn’t due to population growth.

climatereason
Editor
July 15, 2013 10:37 am

Joe
I wrote this article some 4 years ago concerning three temperature stations along the Hudson, including New York. In it I also described Dust Bowl conditions in Central Park.
http://noconsensus.wordpress.com/2009/11/25/triplets-on-the-hudson-river/
The graph for New York is still there in the article. It bears little relationship to the amended versions you show. It was hot, hot, hot during the 1920 to 1940 period.
tonyb

Marc77
July 15, 2013 10:40 am

At some point in the past – I can’t say when or where – I have seen methods to correct for the UHI, and I now know they were wrong. The idea was to detect variations in station siting by finding sudden jumps in temperature. It cannot work, for simple reasons.
Let’s say the effect of an element of urbanization decreases with the square of distance. Then an event that occurs between 10 and 20 meters away is 100 times more powerful than an event between 100 and 200 meters away. But the region between 10 and 20 meters is also 100 times smaller. So you can’t ignore events that occur at a distance where they cannot be detected individually.
Also, if a building is built near a station, it could cool the station by adding shade. But in the long run, the roof of the building might absorb more sunshine, evaporate less water, and hold less snow cover than the grass.
In fact, urbanization might add a constant warming due to all the events at a long distance, and every station in the same region might experience a similar effect.
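The distance bookkeeping in this comment can be checked with a one-line integral. Assuming, as the comment does, that each source contributes as 1/d² and that sources are spread uniformly, the total contribution of the 10–20 m ring exactly equals that of the 100–200 m ring, so distant urbanization cannot be neglected. The numbers are purely illustrative.

```python
# Sketch of the inverse-square bookkeeping: with uniform source density,
# the total 1/d^2 contribution of an annulus depends only on the ratio
# r_outer/r_inner, so the 10-20 m and 100-200 m rings contribute equally.
import math

def annulus_contribution(r_inner, r_outer, density=1.0):
    # Integral over the ring of density * (1/r^2) * 2*pi*r dr
    #   = 2 * pi * density * ln(r_outer / r_inner)
    return 2.0 * math.pi * density * math.log(r_outer / r_inner)

near = annulus_contribution(10.0, 20.0)   # few sources, each "100x stronger"
far = annulus_contribution(100.0, 200.0)  # many sources, each "100x weaker"
print(near, far)  # equal: both are 2*pi*ln(2)
```

This is why a jump-detection scheme that only sees nearby siting changes misses the bulk of the effect: every doubling of distance contributes the same total under these assumptions.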

Resourceguy
July 15, 2013 10:41 am

and at taxpayer expense too

Kev-in-Uk
July 15, 2013 10:55 am

Steven Mosher says:
July 15, 2013 at 10:32 am
I think you missed my point a bit, Steve! – mostly in that we (as in the general public) are ‘presented’ with these datasets, all as ‘prepared’ by others. If we want to inspect, we would have to do individual station analyses as you describe. This is of course no use when trying to check/analyse a ‘global’ dataset, as no one person can check several thousand stations’ data! (You will presumably know BEST’s method of data quality checking better than most here – perhaps you can enlighten us as to whether it was undertaken as a daily, ‘raw’, record-by-record human analysis, as required in order to be thorough – or, as I suspect, a computer-generated ‘inspection’ of inputted data to ‘throw out’ possible data issues? I don’t know, hence the question.)
Now, in the UK, we cannot acquire a raw CET or HadCRUT initial dataset (to my knowledge it does not exist anymore, but I may be wrong!). I have spent many hours trawling the Met Office historical station data – but that is subject to adjustment after quality control, so isn’t actually raw either.
Hence, how can anyone check through the presented data, except perhaps on an individual station basis, where available?
But again, if you know of a UK station (or even a US one?) whose raw data is available, along with each subsequent ‘revision’, with notes/documentation as to the changes and the reasons for them before ‘incorporation’ into a bigger dataset – I’ll be very happy to look at it, because my ultimate question is what changes were made, and why.
regards
Kev
regards
Kev

July 15, 2013 11:00 am

Central Park missed out on an early plan to majorly trick it out:
http://behindthescenes.nyhistory.org/wp-content/uploads/2013/06/81069_CentralPark_RinkPlan.jpg

July 15, 2013 11:04 am

If you have drought, there is no rain, i.e. no weather, i.e. no pressure difference, i.e. a standstill in the change of the speed of warming/cooling, i.e. a constant temperature over a great range of latitudes – within the restraints of the seasons.
Therefore the statement
“The surprise (when I plotted the source data myself rather than use NCDC’s tool) was how flat it was in the dust bowl heat of the 1930s”
is no surprise to me…
The interesting question to me would be:
when does the coming drought start?
Looking at maxima
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
we will approach the complete standstill around 2016, looking at energy-in.
Unfortunately between maxima and means there is a bit of a lag, which could be as much as 3 to 5 years.
So the next drought period could start around 2020.
That makes sense in terms of periodicity: 2020 – 88 = 1932, but it could well be a few years either way;
therefore I have recently considered doing some calculations to try and pinpoint the date further, but I have not yet had the time for this…
I wonder if anyone here has some ideas or comments on the start of the drought?

Marc77
July 15, 2013 11:04 am

I made an error in my first comment; what I meant was:
If you put all the diurnal temperature ranges of a certain month of a decade in order, the length of the plateau of max DTR should depend on the number of days with low cloud cover. The height of this plateau should depend on the climate and the amount of UHI.

Editor
July 15, 2013 11:22 am

NikFromNYC says:
July 15, 2013 at 10:30 am

… I based my search for long records on a page from an old article I lack a current reference for: http://s23.postimg.org/kluzqs9mj/Oldest_T_Stations_List.gif

That’s from Jones and Bradley 1992.
w.

July 15, 2013 11:29 am

I made these two comments earlier today under other post. They seem to apply here.
http://wattsupwiththat.com/2013/07/15/abnormal-weather-just-another-scare-tactic/#comment-1364325
http://wattsupwiththat.com/2013/07/14/fabricating-climate-doom-part-1-parmesans-butterfly-effect/#comment-1364387
(PS This is the first time I’ve tried to post a link to a previous WUWT comment. Perhaps Ric Werme can add it to his “Ric Werme’s Guide to WUWT”?)

Duster
July 15, 2013 11:35 am

Steven Mosher says:
July 15, 2013 at 10:32 aM …
I can even prove to you that there is no such thing as raw data.. but thats no fun….

It’s quite a lot of fun really, not to mention critical methodologically, especially defining “raw.” The discussion reveals methodological biases, personal expectations, unconscious inclinations and all kinds of revelatory bits of behaviour. The lack of that discussion in any readily available form is in large part the reason so many sceptical lay people profoundly distrust the available analyses of climate data. It would also cure insomnia in many.

Stephen Rasey
July 15, 2013 11:45 am

@Willis Thank you Stephen! I love it when the error estimates are less than the adjustments, that’s always hilarious.
Unfortunately, most people don’t get the joke.
The majority of those that do don’t want the facts to get in the way of a “good story.”

July 15, 2013 11:49 am

Thanks, Joe. Good article.
“Multigraph – Interactive Data Graphs for the Web,” NCDC’s “shiny new very cool tool for plotting data,” is at http://multigraph.github.io
Easy to use and free.

July 15, 2013 12:39 pm

I remind you that Steve McIntyre did an excellent review of the UHI adjustments to the GISS dataset in a post at Climate Audit. I wrote a summary here
http://www.friendsofscience.org/index.php?id=396
which contains a link to Steve’s original post “Positive and Negative Urban Adjustments” of March 1, 2008.
NASA applies an urban correction to its GISS temperature index in the wrong direction in 45% of the adjustments. Instead of eliminating urbanization effects, these “wrong way corrections” make the urban warming trends steeper.

Kev-in-Uk
July 15, 2013 1:19 pm

Ken Gregory says:
July 15, 2013 at 12:39 pm
Kind of illustrates the point about ‘adjustments’ that I was trying to make – in that I have no beef with adjustments if they are properly recorded and documented, with due diligence and logical reasoning, and recorded alongside the carefully preserved original data. That way, when Mr X has made his adjustments (and kept careful records/observations), and Mr Y comes along to make further adjustments, he either does it solely on the original raw data, or takes into account the previous adjustments in the ‘v2’ of the data, etc.
My point is simple (to me, anyway) – I want to know what adjustments have been made, and why they were made. Is anyone out there aware of a temperature dataset so ‘clean’ as to be able to demonstrate its traceability and ‘history’ through the various changes? I don’t believe so, but would be more than happy to be shown otherwise. (And no, individual or specific station data history does not really count as proof that a major dataset like GISS or HadCRUT is ‘valid’ – indeed, looking at the various examples of research done by others on specific stations, such as this one, it is not clear that acceptance of these large datasets is warranted. In short, where is the validation?) This may sound like a good reason to completely distrust the data – which isn’t necessarily true on its own – but a true scientist would not work in this ‘hidden’ fashion, and so the shouts of ‘show us the data’ (meaning the changes/reasons) are not unreasonable IMHO; the longer they remain ‘hidden’, the less trusting of the data we can be. That’s it in a nutshell. All the hockey stick charades, tree rings, etc. – what do they tell us about the efficacy of the peer review and ‘data’ processing methods? They strongly suggest that the methods are wrong, incomplete, unsupervised, and potentially deliberately fraudulent. Why is asking for a reasonable demonstration of the dataset’s ‘construction’ so wrong?

July 15, 2013 2:00 pm

@ Marc77 says:
July 15, 2013 at 10:27 am
You might find my work on nightly cooling interesting.

taxed
July 15, 2013 2:01 pm

I don’t know if this is linked to the low sun activity.
But what has been of real interest lately is the number of ‘cut off’ lows that have been forming.
It looks like we may have up to four forming over the next 7 days or so. If this starts to become a growing trend, then that would point to an increased risk of cooling.

taxed
July 15, 2013 2:04 pm

Sorry posted on the wrong topic.

Richard G
July 15, 2013 3:04 pm

Q: What do you get when you adjust bad data?
A: Adjusted Bad Data.
Bad data can be useful for correcting experimental methods; it cannot be used to correct itself. It is rather binary: it is either good or not good.

Richard G
July 15, 2013 3:09 pm

The next time I experience triple digit temperatures, I will know to adjust my thermometer readings downward. I will feel so much cooler.

Kev-in-Uk
July 15, 2013 3:18 pm

Richard G says:
July 15, 2013 at 3:04 pm
That’s not strictly true – a simple data dropout can be infilled using reasonable statistical methods (obviously not so good when a sh*tload of dropouts occurs!). To a scientist, the actual observations (or instrument readings) are always paramount – sure, they can be corrected later if instrumental or systemic errors are found, which doesn’t necessarily make the original data ‘bad’.
My personal take on all the datasets being bandied about is that they are all a cross-referenced mish-mash based on each other to some degree or other, with subsequent ‘alterations’ also cross-referenced, perhaps applied more than once via different authors/processes/erroneous computer code! I can’t prove that, but as far as I can see, few can disprove it either – and that is the worrying issue here.

Kev-in-Uk
July 15, 2013 3:25 pm

I would love to see Mosh’s reaction to a bank statement (‘checking account’ I think you Americans call it?) with several unexplained corrections (up, down, sideways, etc.!) on it – and a covering letter from the bank saying ‘we’ve checked and everything is in order, sir!’ That is honestly how I see the adjusted temperature datasets as presented to Joe Public.

Richard G
July 15, 2013 4:45 pm

Kev-in-Uk says:
July 15, 2013 at 3:18 pm
If instrumental or systemic errors are found, they should be corrected and the experiment rerun. Too bad this cannot be done with climate records.
I refer you to the Harry_read_me files to illuminate the inescapable conclusion that the CRU records are fatally corrupted by uncertainty. The observer error cannot be quantified: too many people doing things slightly differently. The records cannot be sorted out, let alone corrected.
To paraphrase Harry: “What do we do when there are missing station records? We make them up, because we can.”

July 15, 2013 5:04 pm

These GISS diagrams –
http://www.warwickhughes.com/blog/?p=38
http://www.warwickhughes.com/papers/gissuhi.htm
from the 2001 paper Hansen, J.E., R. Ruedy, Mki. Sato, M. Imhoff, W. Lawrence, D. Easterling, T. Peterson, and T. Karl, 2001: A closer look at United States and global surface temperature change. J. Geophys. Res. 106, 23947-23963, doi:10.1029/2001JD000354 (a PDF can be downloaded at http://pubs.giss.nasa.gov/abstracts/2001/)
illustrate with crystal clarity how the common “adjustments” to UHI-affected data – made to compensate for steps in the data as instruments are moved outward from urban centres – actually insert more warming into the resulting adjusted trend.

edwardt
July 15, 2013 5:09 pm

Funny how the highs from the 1930s through the 1950s are adjusted down from the actual data as well. One would think they were trying to make it easier to set records… compare raw data to what is listed on the Weather Channel for highs; they have dropped Folsom, CA a good 3-4 degrees for record highs.

Karl W. Braun
July 15, 2013 5:43 pm

Unless the methodology behind such adjustments is made freely available to the public and subject to commentary by the scientific community, they cannot, in any reasonable sense, be considered justified.

mesoman30
July 15, 2013 7:26 pm

An example of how “raw data” may not actually be raw data: When an NWS Cooperative weather station exists in a rural area, regardless of its length of record, and is then moved into an urban area due to loss of the previous observer, etc., the entire record is adjusted upward by an NCDC algorithm to match the new urbanized location to prevent the appearance of hockey sticks. The entire period of record, once moved to the other location, becomes artificial. Of course, the same thing happens in reverse…an urban record is adjusted downward if moved into a rural area with cooler temps, depending on the measured differences.

Janice Moore
July 15, 2013 10:19 pm

And I sit here with a copy of “The World Almanac 1933” pub. by World Telegram, “Single Copies 60 Cents,” at my left elbow and open to p. 91, “Daily Maximum and Minimum Temperatures at New York City, 1931.” (Compiled under the direction of James H. Scarr, United States Meteorologist). Just for grins, want to guess the max temp. for July 15, 1931? It was………………………………………………………. twelve plus seven-seven.
[PRESERVE THE HISTORICAL RECORDS!]
And, this happy little bit of trivia to end the day: “July 15 [1932] — President Hoover cut his own salary 20 per cent.”

Janice Moore
July 15, 2013 10:20 pm

Oh, BOTHER!!!!!
“… twelve plus SEVENTY-seven” = 89

Brian H
July 15, 2013 10:50 pm

The fraudisticians operate in complete confidence that there will be no calling-to-account. Their malefactions thus grow, and grow.
One way or the other, it will all end in tears.

July 16, 2013 1:53 am

Why would any temperature ever be adjusted? You have recorded a temperature; later, somehow, someone knows the equipment was out of calibration, and by how much? I would so love to never see an “adjusted data set” again.

Evan Jones
Editor
July 16, 2013 3:38 am

FWIW, TOBS is at 24:00 and is constant throughout. No adjustment needed for that.

NucEngineer
July 16, 2013 7:54 am

And Winston looked at the sheet handed him:
“Adjustments prior to 1972 shall be -0.2 degrees and after 1998 shall be +0.3 degrees.”
Winston wondered at the adjustment to the data. At this point, no one even knows if the data, prior to his adjustments, was raw data or already adjusted one or more times previously.
It didn’t matter. All Winston was sure of is that one of the lead climatologists needed more slope to match his computer model outputs. He punched out the new Fortran cards and then dropped the old cards into the Memory Hole where they were burned.
“There!” Winston exclaimed to himself. “Now the temperature data record is correct again; all is double-plus good.”

Kev-in-Uk
July 16, 2013 8:56 am

NucEngineer says:
July 16, 2013 at 7:54 am
Haha – that is funny, except for the fact that it immediately reminded me of learning Fortran as a student in the ’70s and using punchcards: carefully putting them in order and putting them in to be read and processed, only to get the inevitable ‘The Fortran operation referenced does not exist’ (or something very similar) back several hours later! Jeez, early computer science was a real drag… LOL

RACookPE1978
Editor
July 16, 2013 12:18 pm

From Climate Audit’s (long ago!) 2007 thread, we find this little gem, quoting “Anonymous”:

Posted Sep 19, 2007 at 5:02 PM | Permalink | Reply
You guys are missing some things.
The first is that the guidelines for siting a climate station require vegetation and other obstructions to be at least 100 feet distance. Central Park currently fails that test by a LONG shot.
The second is that, yes, those trees are MUCH bigger than they were 50 years ago, not to mention 100 years ago. This has strongly influenced how warm the site gets during summer days. There are in fact many meteorologists in the NYC area who have deferred to other sites in the area during the summer because Central Park is reading artificially cool, and has been for at least 10 years now.

So, if in 2007 “Anonymous” knew that New York City area meteorologists knew Central Park was reading “artificially cool … for at least 10 years now”… then why should GISS (a known NYC climate research source) not have “artificially adjusted” Central Park temperatures between 1990 and 2007 down by 10 degrees?
/sarchasm – that gaping hole between a liberal and the truth and the real world.
Central Park was conceived in 1844; work began in 1853-1857, and construction of the Park finished in 1870. There have been almost no physical changes since then to the Park’s interior, or to the walking paths, roads, and carriage paths around Belvedere Castle, the ponds, and the trees around it since the Angel of the Waters statue was dedicated in 1873. So, since 1870, the ONLY change has been not to the location of the station or the “average trees” around it, but to the 40-50 miles of the “AVERAGE REGION” around Central Park.
That is, to the UHI effect of the total energy used around NYC and the average effect of the roads, concrete, and buildings around the entire Central Park area. The REGIONAL effect on the daily temperatures of Central Park IS the UHI effect!
Factors That Change the UHI over Time.
Distinct “parts” of the UHI at any given location are in fact related to population – but ONLY indirectly. The biggest UHI effect is the total reflection and absorption area of buildings, streets, parking lots, and roads in the region: these changed radically in the REGION around Central Park between 1810 and 1890, but much, much less so between 1890 and 1930, and almost none between the 1920’s and 2013. (An urban map of 1920, or even 1890 for that matter, of NJ’s Hudson River shore, Manhattan, Brooklyn, the Bronx, or Queens would not only be recognizable, but would need almost no changes to be used now. In fact, the number and height of buildings in Manhattan, or any of the other boroughs above, has changed little since 1950.)
Regional energy use has gone up very, very much since the Central Park station was sited in 1870, BUT that energy increase is small compared to the solar effect on the regional buildings. It needs to be considered for any UHI study or UHI correction anywhere in the country, BUT the change in energy use is very, very different in different years. There was, for example, a change when electricity was introduced into Manhattan in 1890; use then increased slowly up to 1920-1930, decreased during the Great Depression, then changed again through the boom of the late 30’s and WWII, and again with the introduction of air conditioning. But has energy use in NYC always increased in direct proportion to population alone between any of those years and 2013? Absolutely not!
Has it increased (or decreased) according to NASA-GISS’s “favored” nighttime-light analysis proxy? That’s a bad assumption: nighttime lights may be a proxy for average population in an area, but is that assumption valid for a long-established urban region like
NYC’s Central Park-Brooklyn-Bronx-NJ-Queens area between 1970 and 2013? Is it equally, better, or less valid for the 5x increase in population, energy use, and buildings around Atlanta between 1970 and 2013? Around Chattanooga since the 1930-1940’s TVA changes? Fresno since 1920, due to farming? Colorado City since 1990? Las Vegas since 1980? Since 1990? Since 2000?
“Local” changes to the immediate 100 feet around a weather station should NEVER be considered “inside” the region’s far larger time-varying UHI – but these local changes WILL also change the locally recorded temperatures. Any valid “adjustment” to the historical temperature record must specifically and uniquely account for these local changes in location, altitude, exposure, and contamination (air conditioning compressors, parking lots, trees, roads, etc.). But to arbitrarily “assume” Central Park is 10 degrees cooler in 2007 compared to 1990 due to an increase in tree height? Is that what counts as “climate science” at GISS?
By the way: somebody needs to explain to me why NOAA/NASA-GISS “time of observation” corrections need to be averaged over all temperature records over a 15 or 25 year period across an entire region or country, yet change differently for each year during that period, rather than just once (when the recording time “might” have been changed from morning to afternoon only once in that period, and for only a few of the region’s thermometer stations).

Editor
July 16, 2013 7:55 pm

Gunga Din says:
July 15, 2013 at 11:29 am

(PS This is the first time I’ve tried to post a link to a previous WUWT comment. Perhaps Ric Werme can add it to his “Ric Werme’s Guide to WUWT”?)

Yeah, I could do that. Check out http://home.comcast.net/~ewerme/wuwt/index.html in the next to last section – Linking to past comments.

John Cunningham
July 17, 2013 7:28 am

I have read a lot of papers by the warmalists, but I have never seen a discussion of how the average downward adjustment of 2 degrees F was arrived at. Why not 1 degree, or 4?

Forrest
July 17, 2013 8:37 am

There was a commenter over on the Powerline blog [http://www.powerlineblog.com/archives/2013/07/he-who-controls-the-present-controls-the-past.php] who stated: “Axiom: the rise in global temperature increase is directly related to the increase in Federal grants purported to study global temperatures.” I thought that this would make an interesting, and telling, chart.

Gunga Din
July 17, 2013 1:53 pm

Ric Werme says:
July 16, 2013 at 7:55 pm

Gunga Din says:
July 15, 2013 at 11:29 am
(PS This is the first time I’ve tried to post a link to a previous WUWT comment. Perhaps Ric Werme can add it to his “Ric Werme’s Guide to WUWT”?)

Yeah, I could do that. Check out http://home.comcast.net/~ewerme/wuwt/index.html in the next to last section – Linking to past comments.

=================================================================
Thanks!
I would add that it might help to give a clue as to what the comment is about.
(PS I think you have a typo. You have “they” instead of “the” in one line in the last section.)

The Count
July 17, 2013 2:21 pm

I don’t usually post here nor am I a frequent visitor, but this caught my attention. A few years ago, I did my own research into temperature readings in Central Park. Being a longtime NYC resident, I’m quite familiar with the area.
Historically, the weather readings were taken at two locations in the park. Up until 1920, the weather station was located at a building called The Arsenal, which is located at 65th Street just off of 5th Avenue. The building is now part of the grounds of the Central Park Zoo and houses the administrative offices for the park. In 1920, the weather station was moved 0.8 miles north to Belvedere Castle, where it remains to this day. Belvedere Castle sits on top of a crag of rock just off the 79th Street transverse and almost exactly on the north-south axis of the park.
I wondered about the possible reasons for moving this station and I think I hit upon the cause: wind. Starting in the 1920’s, some rather tall apartment buildings were constructed right along 5th Avenue, which blocked the free flow of wind. If you’ve ever been in Manhattan on a windy day, you probably experienced the well known “canyon effect” that buildings have on wind. You can walk along a street with the wind blowing at your back, then come to an intersection and have it blowing in your face, cross the street and it will come at you from the side, etc. The new construction probably played havoc with wind measurements at The Arsenal. So, the solution was to move the weather station to Belvedere Castle, which was far away from any obstructions and above the treetops.
The link I used to access GISS data for Central Park doesn’t work anymore. However, one of the things that I found somewhat mystifying was that, for the purpose of calculating yearly averages, GISS starts the year on December 1st. So I recalculated the averages with the year starting in January. The most obvious difference between the two sets is that, in the original GISS data, 2002 is the hottest year on record (13.94 ºC), while in the recalculated version the hottest year is 1998 (13.95 ºC). In fact, in the recalculated version, the value for 2002 is almost half a degree cooler (13.57 ºC). The reason for the discrepancy was an abnormally warm December 2001 (5 ºC above the 1.7 ºC average for December across the entire series). The simple act of moving December to the correct year is enough to account for a 0.77 ºC difference in 1990, and for more than a ±0.5 ºC difference in certain other years (notably 1911, 1918, 1958, and 2001). The fact that moving one data point can skew the results this much makes me wonder how valid an average annual temperature value really is.
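The December-shift recalculation described above can be sketched in a few lines. The monthly values below are hypothetical placeholders chosen only to show the mechanism, not actual Central Park data:

```python
# Compare a Dec-Nov "meteorological year" average (the convention the
# commenter says GISS uses) with a plain Jan-Dec calendar-year average.

def annual_mean_calendar(monthly, year):
    """Mean of Jan..Dec of the given year."""
    vals = [monthly[(year, m)] for m in range(1, 13)]
    return sum(vals) / len(vals)

def annual_mean_dec_start(monthly, year):
    """Mean of Dec of the previous year through Nov of the given year."""
    vals = [monthly[(year - 1, 12)]]
    vals += [monthly[(year, m)] for m in range(1, 12)]
    return sum(vals) / len(vals)

# Hypothetical monthly means (degrees C): a flat series except for an
# unusually warm December 2001, which lands in different "years"
# depending on the convention.
monthly = {(2001, m): 13.0 for m in range(1, 13)}
monthly.update({(2002, m): 13.0 for m in range(1, 13)})
monthly[(2001, 12)] = 6.7   # a December 5 C above a 1.7 C December norm
monthly[(2002, 12)] = 1.7

cal = annual_mean_calendar(monthly, 2002)
dec = annual_mean_dec_start(monthly, 2002)
print(cal, dec)  # the 2002 average differs depending on where December lands
```

A single anomalous month shifted across the year boundary moves both annual averages in opposite directions, which is exactly the kind of discrepancy described for 2002.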
In poking around the NOAA website, I found this page of climate data for New York City: (http://www.erh.noaa.gov/okx/climate_cms.html). Under “Temperature”, there is a link for “Average Monthly and Annual”. Clicking on that brings up this page of monthly and annual temperature readings from Central Park going back to January 1869 (http://www.erh.noaa.gov/okx/climate/records/monthannualtemp.html). According to NOAA, the readings were taken at The Arsenal building until December 1919 and, afterwards from Belvedere Castle. This appears to be “raw” data before it goes through homogenization, pasteurization, whatever. And unlike the “real scientists” at GISS, the “real weathermen” at NOAA start the year in January for the purpose of calculating annual averages.
I got sidetracked before I could go further with this, it being a hobby and all. But I did notice that GISS consistently reported past temperatures as cooler than originally reported. Between 1895 and 1920, GISS data is 1˚C or more cooler than NOAA data. Between 1920 and 1995, GISS is 0.5˚-0.7˚C cooler. After 1995, the discrepancy disappears and the only differences I could see are probably rounding errors in the conversion between Fahrenheit and Celsius.
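The rounding-error remark above is easy to demonstrate: a value recorded to the nearest 0.1 °F loses precision when it is converted to Celsius and re-rounded, so two datasets derived from the same readings can disagree slightly without any real adjustment. The temperature here is an illustrative value, not actual station data:

```python
# Show how rounding before vs. after a Fahrenheit-to-Celsius conversion
# produces small, adjustment-free discrepancies between datasets.

def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

true_f = 54.27                                # hypothetical monthly mean, F
recorded_f = round(true_f, 1)                 # stored to the nearest 0.1 F
converted_c = round(f_to_c(recorded_f), 2)    # C derived from the rounded F
direct_c = round(f_to_c(true_f), 2)           # C derived from the full value

print(recorded_f, converted_c, direct_c)
```

With whole-degree Fahrenheit records the effect is larger: a ±0.5 °F rounding window corresponds to roughly ±0.28 °C, which is on the order of the small post-1995 differences mentioned.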

Editor
July 17, 2013 3:53 pm

Gunga Din says:
July 17, 2013 at 1:53 pm

Thanks!
I would add that it might help to give a clue as to what the comment is about.
(PS I think you have a typo. You have “they” instead of “the” in one line in the last section.)

Nah, I’ll just leave the link as a little surprise explanation and a way to mess with the readers’ heads. 🙂
Typo fixed, will show up tomorrow morning.