I’ve been following this issue for a few days, looking at a number of stations, and had planned to make a detailed post about my findings. But WUWT commenter Steven Douglas recently posted in comments about this curious change in GISS data, and it got picked up by Kate at SDA, which necessitates my commenting on it now. This goes back to the beginning days of surfacestations.org in June 2007 and the second station I surveyed.
Remember Orland? That nicely sited station with a long record?
Note the graph I put in place in June 2007 on that image.
Now look at the graph in a blink comparator showing Orland GISS data plotted in June 2007 and today:
NOTE: on some browsers, the blink may not start automatically – if so, click on the image above to see it
The blink comparator was originally by Steven Douglas; however, he made a mistake in the “after” image, which I have now corrected. What you see above is a graphical fit via bitmap alignment and scaling of the images, which is why the dots and lines appear slightly smaller in the “after” image. I don’t have the 2007 GISS Orland data handy at the moment, but I did have the GISS station plots for Orland from that time and from the present, downloaded from the GISS website today. If I locate the prior Orland data, I’ll redo the blink comparator.
I believe this blink comparator representation accurately reflects the change in the Orland data, even if the dots and lines aren’t exactly the same thickness.
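For readers who want to build their own blink comparators from archived chart images, here is a minimal sketch of the resize-and-animate approach described above. It assumes Python with the Pillow imaging library installed; the function name and paths are my own, not anything from GISS or surfacestations.org.

```python
from PIL import Image  # requires the Pillow library

def make_blink_comparator(before_path, after_path, out_path, ms=800):
    """Fit two chart images together and write a two-frame animated GIF."""
    before = Image.open(before_path).convert("RGB")
    after = Image.open(after_path).convert("RGB")
    # Scale the "after" image to the "before" image's dimensions so the
    # axes overlay as closely as possible. This rescaling is exactly why
    # dots and lines can look slightly thinner in one frame.
    after = after.resize(before.size, Image.LANCZOS)
    # duration is milliseconds per frame; loop=0 means repeat forever.
    before.save(out_path, save_all=True, append_images=[after],
                duration=ms, loop=0)
```

Note that some browsers pause GIF animation until the image is clicked, which is why the blink may not start automatically.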
Douglas writes in his notice to me:
It appears that RAW station plots are no longer available, although NASA GISS (Hansen et al.) does not say it in this way. Here is the notice on their site:
Note to prior users: We no longer include data adjusted by GHCN and have renamed the middle option (old name: prior to homogeneity adjustment).
I don’t know about the “renamed” option, but the RAW data appears to be NO LONGER AVAILABLE.
Here’s a detailed blink comparison of Orland. All their options now give you an “adjusted” plot of some kind. The “AFTER” in this graph shows the “adjustments” to Orland.
Here is what the GISS data selector looks like now, yellow highlight mine, click to enlarge:
Above clip from: http://data.giss.nasa.gov/gistemp/station_data/
Here is the “raw” GISS data plot of Orland I saved back in 2007:

And here is another blink comparator of Orland raw -vs- homogenized data posted by surfacestations.org volunteer Mike McMillan on 12/29/2008:

And here is the “raw” GISS data for Orland today, please note the vertical scale is now different since the pre-1900 data has been removed, the GISS plotting software autoscales to the most appropriate range:

Source:
And it is not just Orland, I’m seeing this issue at other stations too.
For example, Fairmont, CA, another well-sited, well-isolated station with a long record:
Here is Fairmont “raw” from 11/17/2007:

And here is Fairmont from GISS today:

Source:
This raises a number of questions. For example: Why is the data truncated pre-1900? Why did the slope change? The change appears to have been fairly recent, within the last month. I tried to pinpoint it using the “wayback machine”, but apparently because this page:
http://data.giss.nasa.gov/gistemp/station_data/
is forms based, the change in this phrase:
Note to prior users: We no longer include data adjusted by GHCN and have renamed the middle option (old name: prior to homogeneity adjustment).
appears to span the entire “wayback machine” archive, even prior to 2007. If anyone has a screen cap of this page prior to the change, or can help pinpoint the date of the change, please let me know.
It is important to note that the issue may not be with GISS, but upstream in the GHCN data managed by NCDC/NOAA. Further investigation is needed to find out where the main change occurred. It appears this is a system-wide change.
The timing could not be worse for public confidence in climate data.
I’ll have more on this as we learn more about this data change.
UPDATE1 from comments:
GISS also just started using USHCN_V2 last month. See under “What’s New”:
http://data.giss.nasa.gov/gistemp/graphs/
“Nov. 14, 2009: USHCN_V2 is now used rather than the older version 1. The only visible effect is a slight increase of the US trend after year 2000 due to the fact that NOAA extended the TOBS and other adjustment to those years.
Sep. 11, 2009: NOAA NCDC provided an updated file on Sept. 9 of the GHCN data used in our analysis. The new file has increased data quality checks in the tropics. Beginning Sept. 11 the GISS analysis uses the new NOAA data set. ”
GISS and CRU remind me very much of the movie “Fahrenheit 451” (from the Bradbury book, directed by Truffaut, starring Oskar Werner and Julie Christie in two roles). People are trying to preserve banned books by learning them by heart.
It would be a horrible setback for science if all the raw data would be effaced.
The raw data ‘no longer exists’, but the MODIFIED DATA DOES.
So, let me get this straight. The high priests have performed an ‘immaculate conception’ in ‘turning water into wine’, which may indeed be a miracle in turning a cooling trend into a warming one but it is not a justification for global climate change action.
Whosoever took that decision to ditch the raw data should go to jail and ALL climate change work based on those datasets should be put on ‘indefinite hold’ until this scandal of epic proportions is fully exposed, its perpetrators taken to the proverbial guillotine and ALL politicians who trusted them or conspired with them strung up and pilloried. And I’m REALLY, REALLY looking forward to the Royal Society trying to defend primary data destruction……..
Let the exposing begin.
But remember: not every country is as generous as the US in its exposure clauses. Tiger Woods just banned the UK from publishing pictures of him nude. I don’t think I want to see those pictures, but I sure would like to see the complete exposure of climate data destruction…….
Here are some more blink comparators from previous WUWT posts, with one-way data manipulation. I remember more, but could not find them.
http://wattsupwiththat.com/2009/06/28/nasa-giss-adjustments-galore-rewriting-climate-history/
http://wattsupwiththat.com/2008/11/14/the-evolution-of-the-giss-temperature-product/
Richard – it’s simply almost too much to believe – how ‘the team’ could have conspired to do this is gobsmacking.
The data has had more surgery than Michael Jackson. Without starting from scratch, how would we ever know what it looked like before this fiddling started?
I can only presume that the pushing down of older temps was to maintain an upward trend when it actually stopped warming up/satellites made it harder to alter the figures.
Everyday I think it can’t get worse – and then it does.
Squeaky bum time for Big Al and the Hoaxers… We in Ireland have just been dealt the roughest budget in living memory. And whilst we cut unemployment benefits to 20-21 year olds by 50%, it is all the sweeter that we can afford €150,000,000 to ‘combat climate change in Africa’, as announced by our esteemed Minister of the Environment John Gormley of (yes, you guessed it) the GREEN PARTY. God, save me from your followers.
Whilst on the subject of blink comparators, perhaps we in the UK should have our very own reflecting the disparity between the CET as calculated by the Met office in http://hadobs.metoffice.com/hadcet/cet_info_mean.html and that calculated by Philip Eden in http://www.climate-uk.com/index.html.
Intrigued by the Met’s anomaly increasing over the past few days, whilst temperatures were falling across the UK, I took myself to Philip Eden’s site where I found that his anomaly to 10th December was fully 0.6C less than that stated by the Met.
Investigation reveals that the Met are not comparing like with like and that their current recording sites differ substantially from those that were employed to make up the bulk of the early records.
Philip Eden however constructs his charts from sites that are as close as is possible to those originally used. In the country of the blind the one eyed man is king!
This is really depressing – Ben Goldacre who usually speaks with a lot of common sense and debunks bad science on his blog – has posted this
http://www.badscience.net/2009/12/copenhagen-climate-change-blah-blah/
Our views, apparently, are those of dinner party know-nothings who advance zombie arguments.
I listened to him on the Any Questions programme and he was very dismissive.
Last tip I’ll be putting in his jar for a while.
The tricky truncators of Giss!
http://s95.photobucket.com/albums/l155/junkie_99/?action=view&current=maryville.gif
I got the raw and adjusted graphs for Marysville. I made a mistake in not saving the originals. It took quite a bit of adjusting to make the scales match. The raw data graph says “Marysville” at the top, and of course the homogenized version has no label. I feel bad that I didn’t keep the originals. Who would have thought they’d change their website and “lose” the raw data?
Plato says:
You assert to me:
“how ‘the team’ could have conspired to do this is gobsmacking.”
I have made no accusations of conspiracy. Please note that I have only reported demonstrable, documented facts. Indeed, I see no need to invoke ideas concerning conspiracy.
‘Group think’ and self interests are sufficient to explain what has happened, and, therefore, a conspiracy seems unlikely.
Everyone who has conducted work to compile MGT data series, and/or work that depends on or utilises MGT data sets, has a personal interest in preventing publication of work that shows the MGT data series are complete rubbish. They have invested time, money and effort into such work, and it has provided each of them with career status and enhancement. So, all of them could be expected to dismiss, denigrate and oppose publication of anything that shows the MGT data sets are complete rubbish.
Indeed, it is hard to imagine that any one of them could bring him or her self to consider the possibility that the MGT data sets are complete rubbish. So, action to prevent publication of a paper that indicates the MGT data sets are complete rubbish could be seen (by themselves) as a reasonable defence of their work.
But the fact is that the MGT data sets are complete rubbish, and action to prevent publication of this fact is a matter of record.
Richard
Questions:
When making comparisons between the archived surfacestations.org plots and current GISS plots, which version of adjusted GISS data should be used?
Also, what does the time-of-observation adjustment do? As far as I can tell, both the old liquid-in-glass thermometers and the MMTS systems record the minimum and maximum daily temps, so how does the time of observation affect the result?
RE Mark (17:06:08) :
re, jack Kendrick (15:40:48) :
” They can’t just keep adjusting temperatures ever upwards.”
Why not adjust the older data downwards?
Yes, the Ministry of Truth could adjust the older data downwards. But if they want to keep the data on an upwards trajectory, eventually, in the not too distant future, the “older data” is going to include 1998 and 2007, which are kind of like Holy Grails for them. They’ll have to do a lot more than just adjust the datasets. They’ll need to do a “1984”-like adjustment to the peer-reviewed literature and all of the articles written in the MSM. I think they have caught themselves in a trap created by their own deceit, and that is why they are so desperate in Copenhagen to get something done before the deceit becomes obvious.
Roger (04:50:29) :
One of the major reasons why there is a difference in CET between the Met Office version and Philip Eden’s is the reference frame: the Met Office uses the 1961–1990 baseline, whilst Philip Eden uses 1971–2000.
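The effect of the baseline choice is easy to see with a toy sketch. The numbers below are hypothetical, purely for illustration; the point is that switching to a later, warmer reference period lowers every reported anomaly by a constant offset without changing the underlying temperatures at all.

```python
def anomaly(series, baseline_years):
    """series: dict of year -> mean temp (deg C); returns year -> anomaly
    relative to the average over baseline_years."""
    base = [series[y] for y in baseline_years if y in series]
    ref = sum(base) / len(base)
    return {y: t - ref for y, t in series.items()}

# Hypothetical CET-like values with a steady 0.02 C/yr drift.
temps = {y: 9.0 + 0.02 * (y - 1961) for y in range(1961, 2001)}

a_6190 = anomaly(temps, range(1961, 1991))  # Met Office-style baseline
a_7100 = anomaly(temps, range(1971, 2001))  # Eden-style baseline

# Same year, same temperature, different "anomaly":
offset = a_6190[2000] - a_7100[2000]  # 0.2 C for this toy series
```

So a gap of several tenths of a degree between two published anomalies can be pure baseline arithmetic, quite apart from any difference in station selection.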
Huge thanks for this report and the comments.
In a nutshell: raw data were not just adjusted and re-adjusted; it looks to me that these data also got extra little tweaks whenever someone demonstrated that, and how, they were cooked.
The earlier revelations, coming from the READ-ME.txt, were bad enough, but this is far worse.
Who can call themselves ‘scientists’ when adjusting data retrospectively, with no reason given? Data on which others rely, or made to rely?
I am stunned by this, and sickened.
Sadly, I am sure there will be more.
Anthony,
Perhaps the title of this post should be changed, since GISS has been using NCDC adjusted data, not raw data, for USHCN stations for at least 8 years.
Jerry
REPLY: GISS in their previous presentation advertised it as “raw” so that is where the reference comes from. -A
This is probably pointing out the obvious, but since most of us apparent heathens, heretics, and deniers (and the occasional troll) don’t exactly match the general description of rocket scientists, wouldn’t it be a bit easier for people to comprehend if all the logic of following the data (instead of the money) was put into a nice, tidy flow chart? It would no doubt look like a family tree.
As everyone ought to know, all them “thousands of [IPCC] scientists” aren’t crunching temperature data, a whole lot of ’em, probably most, are in fact just crunching the results of others, i.e. the results of others are the base for the context of their work assignment, so to speak.
From what I have read, in articles and comments alike, it often is just like the author offered in a reply: the data is taken at face value, which might be sound in a lot of cases, since it’s not up to the economy guy to question the integrity of the temperature data if his/her job is about calculating the cost.
However, a flow chart would probably make it easier for average people to visualize the problem if the data gets corrupted at one stage or another, since the corruption could be easily pinpointed with one’s own two eyes. And besides, since the climatology department at NASA also seems to be lacking in rocket scientists these days, and what with bureaucrats’ love for anything flow-charty in color, even they might enjoy the wonders of a simple, colorful visualization tool to know whom to blame and sack.
Re: JohnV (15:32:20) :
This morning, I compared all the data in v2.mean from 2007/08/14 (which I had downloaded when I created http://www.unur.com/climate/ghcn-v2/) to the data in v2.mean from 2009/12/12 03:36 am (see ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/).
It was a quick and dirty job so there is some small chance that I made an error somewhere but I am relatively certain that the past did NOT change between the 2007/08/14 and 2009/12/12 data sets.
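For anyone wanting to repeat that kind of check themselves, here is a minimal sketch of one way to diff two v2.mean snapshots. The fixed-width layout assumed here (a 12-character station-plus-duplicate id, a 4-character year, then twelve 5-character monthly values in tenths of a degree) should be verified against the readme in the FTP directory before trusting any results; the function names are my own.

```python
def load_v2_mean(path):
    """Parse a v2.mean-style fixed-width file into {station+year: [12 vals]}.
    Column layout is an assumption; check it against the GHCN v2 readme."""
    records = {}
    with open(path) as f:
        for line in f:
            key = line[:16]  # 12-char station/dup id + 4-char year
            vals = [int(line[16 + 5 * i : 21 + 5 * i]) for i in range(12)]
            records[key] = vals
    return records

def changed_records(old_path, new_path):
    """Return the station-year keys present in both files whose monthly
    values differ between the two snapshots."""
    old, new = load_v2_mean(old_path), load_v2_mean(new_path)
    return {k for k in old.keys() & new.keys() if old[k] != new[k]}
```

An empty result from `changed_records` on two snapshots would support the commenter’s conclusion that the past values did not change between those dates; a non-empty result would list exactly which station-years to inspect.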
I don’t mean to rain on your parade, and I don’t know if others have seen this too (or are particularly interested), but your two sets of links above come up thusly:
BTW, “cross-blog debating” has been going on at least since you were 18 yrs old …
It is imperative that all the old records be preserved and archived in an accessible file. There is a circling of the wagons going on, and a burning of files that could be used in the investigations which all the usual suspects fear are about to occur.
Richard S Courtney (14:07:21) :
“It demonstrates that 6 years ago The Team knew the estimates of average global temperature (mean global temperature, MGT) were worthless and they acted to prevent publication of proof of this.”
Breathtaking, how gatekeeper functions of those people damaged climate science. IMHO all ‘corrections’ and ‘adjustments’ of raw data should be carefully reexamined wherever possible.
Some interesting insight into motivations for adjustments
FOIA\mail\1254147614.txt
” At 06:25 28/09/2009, Tom Wigley wrote:
Phil,
Here are some speculations on correcting SSTs to partly
explain the 1940s warming blip.
If you look at the attached plot you will see that the
land also shows the 1940s blip (as I’m sure you know).
So, if we could reduce the ocean blip by, say, 0.15 degC,
then this would be significant for the global mean — but
we’d still have to explain the land blip.
I’ve chosen 0.15 here deliberately. This still leaves an
ocean blip, and i think one needs to have some form of
ocean blip to explain the land blip (via either some common
forcing, or ocean forcing land, or vice versa, or all of
these). When you look at other blips, the land blips are
1.5 to 2 times (roughly) the ocean blips — higher sensitivity
plus thermal inertia effects. My 0.15 adjustment leaves things
consistent with this, so you can see where I am coming from.
Removing ENSO does not affect this.
It would be good to remove at least part of the 1940s blip,
but we are still left with “why the blip”.
Let me go further. If you look at NH vs SH and the aerosol
effect (qualitatively or with MAGICC) then with a reduced
ocean blip we get continuous warming in the SH, and a cooling
in the NH — just as one would expect with mainly NH aerosols.
The other interesting thing is (as Foukal et al. note — from
MAGICC) that the 1910-40 warming cannot be solar. The Sun can
get at most 10% of this with Wang et al solar, less with Foukal
solar. So this may well be NADW, as Sarah and I noted in 1987
(and also Schlesinger later). A reduced SST blip in the 1940s
makes the 1910-40 warming larger than the SH (which it
currently is not) — but not really enough.
So … why was the SH so cold around 1910? Another SST problem?
(SH/NH data also attached.)
This stuff is in a report I am writing for EPRI, so I’d
appreciate any comments you (and Ben) might have.
Tom.”
Quick link for Michael showing failed DP link (the ‘proof’):
Screen Capture of failed link
I wouldn’t worry too much about any of the data being permanently gone or destroyed. Whoever is posting these new datasets is as naive as the authors of the, now famous, “emails”.
All of these emails and all of this data pass through and are stored on servers which are far removed from the control of the authors. All of these bureaucracies have large data centers with untold numbers of “virtualized” servers (mail, ftp, file and web) which are dutifully backed up daily. It is even likely that many of the participants’ laptops are backed up regularly.
These agencies are so anal with their data they even have armies of serfs scanning paper copies continuously.
If you want to check the really, real temperature for a station in the US the paper copies have been scanned and stored in a conveniently accessible database here: http://www7.ncdc.noaa.gov/IPS/lcd/lcd.html
Can you imagine the monumental task of going into to all the past backups and manipulating that data? Not likely. The past has been preserved.
All that is required is a body of evidence that will justify going back and unearthing the truth.
Anthony,
You wrote:
REPLY: GISS in their previous presentation advertised it as “raw” so that is where the reference comes from. -A
I do not know what you mean by “previous presentation”, but since at least March of 2001, the GISS selection menu referred to “raw GHCN data+USHCN corrections”.
By suggesting, or implying, that in 2007 GISS characterized their USHCN station data as raw, you open yourself to a variety of needless criticisms, and avoidable detractions from your efforts.
Jerry
I’ve completed a blink comparator for Marysville, CA – please have a look:
http://examples.com/giss/marysville_phased.gif (318K .gif animation)
It’s in three phases: beginning with the RAW data plot (archived at surfacestations.org), to USHCN corrected (“value added”), and onward to the final plot with Homogeneity Adjustments (“quality controlled, homogeneous”) applied.
The transformation is ASTOUNDING. If it wasn’t for the graphs stored at surfacestations.org, I wouldn’t have found what I found, but would instead have assumed that I was essentially seeing raw temperature station data from NASA (which they call “RAW DATA+”, kind of a “value added” thing).
Don’t forget that a lot of the raw data plots are still stored at surfacestations.org as part of each site survey – just not in number form. Those are historical records now, and still extremely valuable in establishing a pattern. Too bad ALL of NASA’s data wasn’t archived. (anyone?)
Now, IF I were a warmist in charge of data, one who felt that adjustments were needed to the station data to fit a given hypothesis, my tasks would differ depending on the type of stations considered.
For RURAL stations, especially well-placed ones (no nearby artificial heat sources, etc.), the task would be two-fold:
1) adjust pre-1940 data to appear as trendless as possible — kind of a pre-industrial age climate change denial, and
2) adjust post-1940 data to show a gradual incline that accelerates from about 1960 onward (i.e. make it “agree well” with a hockey stick)
For improperly placed stations (e.g., mostly urban, like those with nearby artificial heat sources), the Urban Heat Island effect signature is unmistakable, with an obvious trend from many of these stations starting out low in the 1900s and continuing almost linearly upward. The tasks would be the same as above, but the FIRST order of business would be to mask (“hide”/“contain”) the “putative” UHI (Urban Heat Island effect).
That’s precisely the impression the Marysville “reconstruction” conveys – and not too subtly either.
Santa Rosa is another one. I’m working on that one now.
Yes, like an embezzlement scam, you can only cook the books so much. Eventually the adjustments you made to make the books balance 5 years ago will make it impossible to balance the books next year without it being patently obvious that something is amiss.
It appears to me we are entering that “chickens coming home to roost” phase, where it will be physically and logically impossible to mask reality with “tricked-up data”, as your adjustments will be confounded by historical records you have no control over, like news accounts of storms 20-30 years ago which state the month was the coldest on record since 1858, while your current graph does not agree with that news account. You can only juggle so many balls for so long.
What we need to do, in addition to all the forensic analysis going on of the code and the data, is to start to data mine large libraries for those historic records that cite official weather service records of the day, and then ask why those accounts do not agree with the “raw data” for the same area.
Larry