GISS "raw" station data – before and after

I’ve been following this issue for a few days, looking at a number of stations, and had planned to make a detailed post about my findings. But WUWT commenter Steven Douglas recently posted in comments about this curious change in GISS data, and it got picked up by Kate at SDA, which necessitated commenting on it now. This goes back to the beginning days of surfacestations.org in June 2007 and the second station I surveyed.

Remember Orland? That nicely sited station with a long record?

Note the graph I put in place in June 2007 on that image.

Now look at the graph in a blink comparator showing Orland GISS data plotted in June 2007 and today:

NOTE: on some browsers, the blink may not start automatically – if so, click on the image above to see it

The blink comparator was originally made by Steven Douglas. However, he made a mistake in the “after” image, which I have now corrected. What you see above is a graphical fit via bitmap alignment and scaling of the images; this is why the dots and lines appear slightly smaller in the “after” image. I don’t have the 2007 GISS Orland data handy at the moment, but I did have the GISS station plots of Orland from that time and from the present, downloaded from the GISS website today. If I locate the prior Orland data, I’ll redo the blink comparator.

I believe this blink comparator accurately reflects the change in the Orland data, even if the dots and lines aren’t exactly the same thickness.

Douglas writes in his notice to me:

It appears that RAW station plots are no longer available, although NASA GISS (Hansen et al.) does not say it in this way. Here is the notice on their site:

Note to prior users: We no longer include data adjusted by GHCN and have renamed the middle option (old name: prior to homogeneity adjustment).

I don’t know about the “renamed” option, but the RAW data appears to be NO LONGER AVAILABLE.

Here’s a detailed blink comparison of Orland. All their options now give you an “adjusted” plot of some kind. The “AFTER” frame in this graph shows the “adjustments” to Orland.

Here is what the GISS data selector looks like now, yellow highlight mine, click to enlarge:

Above clip from: http://data.giss.nasa.gov/gistemp/station_data/

Here is the “raw” GISS data plot of Orland I saved back in 2007:

Click for full size

And here is another blink comparator of Orland raw -vs- homogenized data posted by surfacestations.org volunteer Mike McMillan on 12/29/2008:

click for full size

And here is the “raw” GISS data for Orland today. Please note the vertical scale is now different: since the pre-1900 data has been removed, the GISS plotting software autoscales to the most appropriate range:

click for source image from NASA GISS

Source:

http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425725910040&data_set=0&num_neighbors=1

And it is not just Orland; I’m seeing this issue at other stations too.

For example, Fairmont, CA, another well sited, well isolated station with a long record:

Here is Fairmont “raw” from 11/17/2007:

click for full size

And here is Fairmont from GISS today:

click for source image from NASA GISS

Source:

http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425723830010&data_set=0&num_neighbors=1

This raises a number of questions. For example: Why is data truncated pre-1900? Why did the slope change? The change appears to have been fairly recent, within the last month. I tried to pinpoint it using the “wayback machine,” but apparently because this page:

http://data.giss.nasa.gov/gistemp/station_data/

is forms based, the change in this phrase:

Note to prior users: We no longer include data adjusted by GHCN and have renamed the middle option (old name: prior to homogeneity adjustment).

appears to span the entire “wayback machine” archive, even prior to 2007. If anyone has a screen cap of this page prior to the change, or can help pinpoint the date of the change, please let me know.

It is important to note that the issue may not be with GISS, but upstream in the GHCN data managed by NCDC/NOAA. Further investigation is needed to find out where the main change has occurred. It appears this is a system-wide change.

The timing could not be worse for public confidence in climate data.

I’ll have more on this as we learn more about this data change.

UPDATE1 from comments:

GISS also just started using USHCN_V2 last month. See under “What’s New”:

http://data.giss.nasa.gov/gistemp/graphs/

“Nov. 14, 2009: USHCN_V2 is now used rather than the older version 1. The only visible effect is a slight increase of the US trend after year 2000 due to the fact that NOAA extended the TOBS and other adjustment to those years.

Sep. 11, 2009: NOAA NCDC provided an updated file on Sept. 9 of the GHCN data used in our analysis. The new file has increased data quality checks in the tropics. Beginning Sept. 11 the GISS analysis uses the new NOAA data set. ”

Editor
December 11, 2009 1:50 pm

GISS data is full of this kind of stuff – trying to document it (with others) at present.
I have a previous blogpost from last month on adjustments – a GIStemp ‘Hall of Shame’. I have posted links here previously, but will do so again as it is directly relevant (Climate Fast Food)
Anthony, if it is of interest, you may use it. Us new bloggers don’t get much traffic. Writing it has been a result of learning about climate stuff from WUWT over the last two years and then finding out in greater detail through what E.M. Smith has been doing.
Verity

danbo
December 11, 2009 1:51 pm

You got me thinking. I hadn’t checked the local station (Waveland, MS) recently. Unfortunately I didn’t keep a copy of it, but until recently there was cooling if anything. Now there’s a strong warming trend going back some time. So I checked the charts you had over at surfacestations for another nearby site, Amite, LA. Amite in ’07 was
http://gallery.surfacestations.org/main.php?g2_itemId=19481
Amite now is http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425722330030&data_set=1&num_neighbors=1
It looks like they found some early source of artificial warming and have dropped the old records down a fair bit.
I’m supposed to trust this science?

Ed Scott
December 11, 2009 1:53 pm

After seeing adjusted temperature data for Orland, I realize that is not as cold as I thought. Snow has appeared briefly on the front lawn twice since 1992 and was sorta expected this morning, given the over-night temperatures of recent days. All that happened was rain, beginning around 1:00 am to 1:30 am and a relatively warm temperature due to the “green-house effect” of cloud cover. Since the Terminator signed CO2 limiting legislation, the effectiveness of the CO2 “green-house effect” here in California has been nil, nada, zilch, zero.

Green RD Manager
December 11, 2009 1:55 pm

Jeff,
Please post which stations and summarize if you can. This stepwise adjustment is showing up everywhere with the same characteristics.
In every case so far (10 in CA, 1 in Nevada, 1 in Arizona, 1 for Calgary), the steps are in 10ths or some multiple, and the overall curve is adjusted to moderate the early 20th century, thereby increasing the apparent warming and its rate in the late 20th.
The net result has been to push these toward a common curve model: each station’s adjustments have been very mechanical and unique, but the same curve results if you use Excel and a 6th-order polynomial fit.
I’ll be very interested if you see the same or something different.
Anthony…good work. We may have layers of unexplained adjustments going on.
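[Ed. note: the curve-matching check described in the comment above (a 6th-order polynomial fit, done in Excel there) can be sketched with NumPy. The two station series below are synthetic placeholders, not real GISS data; the point is only the mechanics of comparing smoothed curves rather than raw noise.]

```python
import numpy as np

# Synthetic stand-ins for two stations' anomaly series (deg C).
# A real check would load the adjusted station data instead.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 109)        # 109 "years", scaled for a stable fit
curve = 0.3 * x ** 2                   # shared underlying shape
series_a = curve + rng.normal(0.0, 0.2, x.size)
series_b = curve + rng.normal(0.0, 0.2, x.size)

# Fit a 6th-order polynomial to each series (the Excel step in the
# comment), then compare the smoothed curves instead of the raw values.
fit_a = np.polyval(np.polyfit(x, series_a, 6), x)
fit_b = np.polyval(np.polyfit(x, series_b, 6), x)

rms_curves = float(np.sqrt(np.mean((fit_a - fit_b) ** 2)))
rms_raw = float(np.sqrt(np.mean((series_a - series_b) ** 2)))

# If adjustments push stations toward a common curve model, the fitted
# curves agree far more closely than the unsmoothed series do.
print(rms_curves < rms_raw)  # True: smoothing strips the independent noise
```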

David
December 11, 2009 1:56 pm

When they make these adjustments, do they also adjust the baseline if the adjustments fall in the period the baseline is derived from?

December 11, 2009 1:56 pm

Anthony,
I just did a quick test and the GISS data did not match the NCDC NOAA data for this test. Maybe, I’m doing something wrong, but here is what I did:
Navigated to: http://www7.ncdc.noaa.gov/IPS/coop/coop.html
Selected California and then Lemon Cove and then 2009-09 and downloaded the PDF sheet showing the daily TMax and TMin.
Here is that record transcribed and double-checked:
Day Max Min
1 103 65
2 101 67
3 102 65
4 107 64
5 102 60
6 99 61
7 94 56
8 94 56
9 94 57
10 95 57
11 95 61
12 99 61
13 98 60
14 98 64
15 102 67
16 106 67
17 105 68
18 106 73
19 108 74
20 112 71
21 107 65
22 102 65
23 101 66
24 104 65
25 101 63
26 99 67
27 103 70
28 105 71
29 106 66
30 101 67
31 101 69
I averaged these in Excel and got:
AVG TMax = 101.6
AVG TMin = 64.8
Since those are obviously Fahrenheit, I converted them to Celsius:
AVG TMax (C) = 38.7
AVG TMin (C) = 23.2
And finally I averaged both of these together to get the overall mean for September 2009 for Lemon Cove: 30.9
Now the interesting thing is that GISS reports 28.4 for that month for Lemon Cove. This can be found here:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425723890010&data_set=1&num_neighbors=1
So either this is a different station at Lemon Cove, or I arrived at the mean incorrectly, or they biased the temp downward (I doubt it), or something else is wrong.
I noticed that this COOP database contains many more stations than GISS does for California, so maybe it is a different station with the same name.
But interesting nonetheless…
Daniel
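[Ed. note: the Fahrenheit-to-Celsius step for the TMin average in the comment above looks off (64.8 °F converts to about 18.2 °C, not 23.2 °C), and with that corrected the monthly mean lands on 28.4 °C, the GISS figure. A quick check in Python, using the daily values from the comment:]

```python
# Daily TMax/TMin for Lemon Cove (deg F), copied from the comment above.
tmax = [103, 101, 102, 107, 102, 99, 94, 94, 94, 95, 95, 99, 98, 98, 102, 106,
        105, 106, 108, 112, 107, 102, 101, 104, 101, 99, 103, 105, 106, 101, 101]
tmin = [65, 67, 65, 64, 60, 61, 56, 56, 57, 57, 61, 61, 60, 64, 67, 67,
        68, 73, 74, 71, 65, 65, 66, 65, 63, 67, 70, 71, 66, 67, 69]

def f_to_c(f):
    """Convert Fahrenheit to Celsius."""
    return (f - 32.0) * 5.0 / 9.0

avg_tmax = sum(tmax) / len(tmax)   # ~101.6 F, as in the comment
avg_tmin = sum(tmin) / len(tmin)   # ~64.8 F, as in the comment
print(round(f_to_c(avg_tmin), 1))  # 18.2 -- not the 23.2 quoted above

# Monthly mean as the midpoint of the average max and min, in Celsius.
monthly_mean_c = f_to_c((avg_tmax + avg_tmin) / 2.0)
print(round(monthly_mean_c, 1))    # 28.4 -- matches the GISS value
```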

David L. Hagen
December 11, 2009 1:58 pm

Let loose the lawyers and we will have the Revenge of the Chads. See: Chad — From waste product to headline

December 11, 2009 1:58 pm

Sorry, that was July 2009 at Lemon Cove.

December 11, 2009 2:02 pm

Agree with Kevin K – starting over with the paper records is the path of real science now. Surfacetemps.org is (from memory on the Darwin temps thread comments) registered, now what’s needed is the open-source (and Transactional, please!) rebuild of those records. Let’s see what my trusty back of the ciggy packet calculator says about the size of this here job:
15,000 stations (from aging memory – E M Smith has station counts).
160 years of obs in the worst case
2 obs per day
Why, that’s only – um – about 1.75 billion data points.
Divide that over the number of hits on WUWT, and that’s around 60 data points per hit.
Big job, but someone has to do it. The ‘professionals’ clearly aren’t up to the task…
REPLY: we really don’t need to do all stations – the 1218 in USHCN would work, plus a few thousand in GHCN, but there are no paper records of those. – Anthony
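[Ed. note: the envelope arithmetic in the comment above, taken at face value with 365-day years, comes out near 1.75 billion points; a one-line check:]

```python
stations = 15_000    # rough station count, from the comment
years = 160          # worst-case record length
obs_per_day = 2      # e.g., daily max and min

total_obs = stations * years * 365 * obs_per_day
print(total_obs)  # 1752000000 -- about 1.75 billion data points
```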

edward
December 11, 2009 2:03 pm

Why don’t one of the “skeptics” just bring two placards to the studio the next time they are booked for a primetime interview or debate with a warmer?
My suggestion would be to hold up two Willis Eschenbach graphs as follows:
1) Here is the raw temperature data
2) Here is the same temperature data on drugs!
I can’t think of a better visual to drive home the point that the data has been corrupted.

Richard S Courtney
December 11, 2009 2:07 pm

Friends:
I have previously posted this on WUWT but it seems desirable to post it again here.
It demonstrates that 6 years ago The Team knew the estimates of average global temperature (mean global temperature, MGT) were worthless and they acted to prevent publication of proof of this.
The most important email among those hacked (?) from CRU may turn out to be one that I wrote 6 years ago. I had forgotten it, but Willis Eschenbach found it among the hacked (?) emails and circulated it. I copy it here, then explain its meaning and significance.
The email is this.
From: RichardSCourtney@aol.com
To: t.osborn@uea.ac.uk, m.allen1@physics.ox.ac.uk, Russell.Vose@noaa.gov
Subject: Re: Workshop: Reconciling Vertical Temperature Trends
Date: Sun, 23 Nov 2003 18:42:59 EST
Cc: trenbert@cgd.ucar.edu, timo.hameranta@pp.inet.fi, Thomas.R.Karl@noaa.gov, ceforest@mit.edu, sokolov@mit.edu, phstone@mit.edu, ekalnay@atmos.umd.edu, richard.w.reynolds@noaa.gov, christy@atmos.uah.edu, roy.spencer@msfc.nasa.gov, benjie.norris@nsstc.uah.edu, kostya@atmos.umd.edu, Norman.Grody@noaa.gov, Thomas.C.Peterson@noaa.gov, sfbtett@metoffice.com, penner@umich.edu, dian.seidel@noaa.gov, trenbert@ucar.edu, wigley@ucar.edu, pielke@atmos.colostate.edu, climatesceptics@yahoogroups.com, aarking1@jhu.edu, bjorn@ps.au.dk, cfk @lanl.gov, c.defreitas@auckland.ac.nz, cidso@co2science.org, dwojick@shentel.net, douglass@pas.rochester.edu, dkaroly@ou.edu, mercurio@jafar.hartnell.cc.ca.us, fredev@mobilixnet.dk, seitz@rockvax.rockefeller.edu, Heinz.Hug@t-online.de, hughel@comcast.net, jahlbeck@ab
Dear All:
The excuses seem to be becoming desperate. Unjustified assertion that I fail to understand “Myles’ comments and/or work on trying the detect/attribute climate change” does not stop the attribution study being an error. The problem is that I do understand what is being done, and I am willing to say why it is GIGO.
Tim Allen said;
In a message dated 19/11/03 08:47:16 GMT Standard Time, m.allen1@physics.ox.ac.uk writes:
“I would just like to add that those of us working on climate change detection and attribution are careful to mask model simulations in the same way that the observations have been sampled, so these well-known dependencies of nominal trends on the trend-estimation technique have no bearing on formal detection and attribution results as quoted, for example, in the IPCC TAR.”
I rejected this saying: At 09:31 21/11/2003, RichardSCourtney@aol.com wrote:
“It cannot be known that the ‘masking’ does not generate additional spurious trends. Anyway, why assume the errors in the data sets are geographical and not?. The masking is a ‘fix’ applied to the model simulations to adjust them to fit the surface data known to contain spurious trends. This is simple GIGO.”
Now, Tim Osborn says of my comment;
In a message dated 21/11/03 10:04:56 GMT Standard Time, t.osborn@uea.ac.uk writes:
“Richard’s statement makes it clear, to me at least, that he misunderstands Myles’ comments and/or work on trying the detect/attribute climate change.
As far as I understand it, the masking is applied to the model to remove those locations/times when there are no observations. This is quite different to removing those locations which do not match, in some way, with the observations – that would clearly be the wrong thing to do. To mask those that have no observations, however, is clearly the right thing to do – what is the point of attempting to detect a simulated signal of climate change over some part of (e.g.) the Southern Ocean if there are no observations there in which to detect the expected signal? That would clearly be pointless.”
Yes it would. And I fully understand Myles’ comments. Indeed, my comments clearly and unarguably relate to Myles comments. But, as my response states, Myles’ comments do not alter the fact that the masked data and the unmasked data contain demonstrated false trends. And the masking may introduce other spurious trends. So, the conducted attribution study is pointless because it is GIGO. Ad hominem insults don’t change that.
And nor does the use of peer review to block my publication of the facts of these matters.
Richard
The great importance of the matter in the quoted email may not be apparent to some. Therefore, I provide this brief background explanation.
Climate change ‘attribution studies’ use computer models to assess possible causes of global climate change. Known effects that cause climate change are input to a computer model of the global climate system, and the resulting output of the model is compared to observations of the real world. Anthropogenic (i.e. man-made) global warming (AGW) is assumed to be indicated by any rise in average global temperature (mean global temperature, MGT) that occurred in reality but is not accounted by the known effects in the model.
Clearly, any error in determinations of changes to MGT provides incorrect attribution of AGW.
The various determinations of the changes to MGT differ and, therefore, there is no known accurate amount of MGT change. But the erroneous MGT change was being input to the models (garbage in, GI) so the amount of AGW attributed by the studies was wrong (garbage out, GO) because ‘garbage in’ gives ‘garbage out’ (GIGO). The attribution studies that provide indications of AGW are GIGO.
I and others attempted to publish a discussion paper that attempted to explain the problems with analyses of MGT. We compared the data and trends of the Jones et al., GISS and GHCN data sets. These teams each provide 95% confidence limits for their results. However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends. Since all three data sets are compiled from the same available source data (i.e. the measurements mostly made at weather stations using thermometers), and purport to be the same metric (i.e. MGT anomaly), this is surprising. Clearly, the methods of compilation of MGT time series can generate spurious trends (where ‘spurious’ means different from reality), and such spurious trends must exist in all but at most one of the data sets.
So, we considered MGT according to two interpretations of what it could be; viz.
(i) MGT is a physical parameter that – at least in principle – can be measured;
or
(ii) MGT is a ‘statistic’; i.e. an indicator derived from physical measurements.
These two understandings derive from alternative considerations of the nature of MGT:
If the MGT is assumed to be the mean temperature of the volume of air near the Earth’s surface over a period of time, then MGT is a physical parameter indicated by the thermometers (mostly) at weather stations that is calculated using the method of mixtures (assuming unity volume, specific heat, density etc). We determined that if MGT is considered as a physical parameter that is measured, then the data sets of MGT are functions of their construction. Attributing AGW – or anything else – to a change that is a function of the construction of MGT is inadmissible.
Alternatively:
If the thermometers (mostly) at weather stations are each considered to indicate the air temperature at each measurement site and time, then MGT is a statistic that is computed as being an average of the total number of thermometer indications. But if MGT is considered to be a statistic then it can be computed in several ways to provide a variety of results, each of different use to climatologists. In such a way, the MGT is similar in nature to a Retail Price Index, which is a statistic that can be computed in different ways to provide a variety of results, each of which has proved useful to economists. If MGT is considered to be a statistic of this type, then MGT is a form of average. In which case, the word ‘mean’ in ‘mean global temperature’ is a misnomer, because although there are many types of average, a set of measurements can only have one mean. Importantly, if MGT is considered to be an indicative statistic then the differences between the values and trends of the data sets from different teams indicate that the teams are monitoring different climate effects. But if the teams are each monitoring different climate effects then each should provide a unique title for their data set that is indicative of what is being monitored. Also, each team should state explicitly what its data set of MGT purports to be monitoring.
Thus, we determined that – whichever way MGT is considered – MGT is not an appropriate metric for use in attribution studies.
However, the compilers of the MGT data sets frequently alter their published data of past MGT (sometimes they have altered the data in each of several successive months). Hence, our paper always contained incorrect MGT data because the MGT data kept changing. The MGT data always changed between submission of the paper and completion of the peer review process. Thus, the frequent changes to MGT data sets prevented publication of the paper.
Whatever you call this method of preventing publication of a paper, you cannot call it science.
But this method prevented publication of information that proved the estimates of MGT and AGW are wrong and the amount by which they are wrong cannot be known.
It should also be noted that there is no possible calibration for the estimates of MGT. The data sets keep changing for unknown (and unpublished) reasons although there is no obvious reason to change a datum for MGT that is for decades in the past. It seems that the compilers of the data sets adjust their data in attempts to agree with each other.
Methods to correct these problems could have been considered 6 years ago if publication of my paper had not been blocked.
Additionally, I point out that the AGW attribution studies are wrong in principle for two reasons.
Firstly, they are ‘argument from ignorance’.
Such an argument is not new. For example, in the Middle Ages experts said, “We don’t know what causes crops to fail: it must be witches: we must eliminate them.” Now, experts say, “We don’t know what causes global climate change: it must be emissions from human activity: we must eliminate them.” Of course, they phrase it differently saying they can’t match historical climate change with known climate mechanisms unless an anthropogenic effect is included. But evidence for this “anthropogenic effect” is no more than the evidence for witches.
Secondly, they use an attribution study to ‘prove’ what can only be disproved by attribution.
In an attribution study the system is assumed to be behaving in response to suggested mechanism(s) that is modelled, and the behaviour of the model is compared to the empirical data. If the model cannot emulate the empirical data then there is reason to suppose that the suggested mechanism is not the cause (or at least not the sole cause) of the changes recorded in the empirical data.
It is important to note that attribution studies can only be used to reject the hypothesis that a mechanism is a cause for an observed effect. Ability to attribute a suggested cause to an effect is not evidence that the suggested cause is the real cause in part or in whole. (To understand this, consider the game of Cluedo. At the start of the game it is possible to attribute the ‘murder’ to all the suspects. As each piece of evidence is obtained, one of the suspects can be rejected because he/she can no longer be attributed with the murder.)
But the CRU/IPCC attribution studies claim that the ability to attribute AGW as a cause of climate change is evidence that AGW caused the change (because they only consider one suspect for the cause although there could be many suspects both known and unknown).
Then, in addition to those two pieces of pure pseudo-science – as my paper demonstrates – the attribution studies use estimates of climate changes that are known to be wrong!
This does not give confidence that the MGT data sets provide reliable quantification of change to global temperature.
Richard

Randy
December 11, 2009 2:12 pm

Anthony
My first post here as I’m just an average punter (and voter) but I read this site daily as it is an education of the best sort. My question.
How many climate scientists rely on this ‘adjusted’ data believing it to be a solid foundation upon which they then do their thing?
My natural scepticism is firming up daily and amongst the general public I am not alone. However probably the last remaining barrier to total disbelief is that I can’t believe that so many serious scientists are ‘on the take’. If however the base science they are relying upon is seriously flawed they I would expect to see an increasing number standing up to be counted.
Probably a lame question.
Randy
REPLY: The answer is, almost all of them. There are very few papers questioning the data integrity. Most take the data at face value, never questioning the measurement environment and the data procedures. I didn’t start questioning it myself until Spring of 2007. – Anthony

Ray
December 11, 2009 2:16 pm

“Our world is getting hotter, faster.” – any pro-AGW researcher/politician.
Of course it is getting hotter, faster… especially when they fudge/modify/revise/cook the raw historical data.
Methinks they are trying to fit the historical data to that from the climate models instead of trying to fix their computer models to fit the historical data.

Kevin Kilty
December 11, 2009 2:18 pm

NK (13:46:19) :
To: Kevin Kilty (13:38:58) :
That’s what Anthony and Steve M. have been doing for a few years. Those are huge undertakings and we all owe them, big time.

I realise this, and I intended no slight of Anthony’s and McIntyre’s efforts, but the last time I checked surfacestations.org it seemed that only about one-fourth of the U.S. stations had gotten a thorough vetting, and I have no idea where the UHI project currently stands, and problems with various data sets available on the internet seem to be spreading faster than the vetting process can identify problems. So, where does data credibility stand at this point?

JohnV
December 11, 2009 2:22 pm

Anthony:
I have some tools that can easily parse all station data for hundreds of stations and write the temperatures into Excel-compatible files. Each column of the spreadsheet represents a single station and includes all temperature data for that station.
I have a copy of the raw GHCN archive (v2.mean.zip) from September 12, 2007 and from today. I plan to generate spreadsheets from the current archive and the old archive to see if there are differences in the raw GHCN data.
Does anyone have a copy of v2.mean.zip from GHCN prior to September 2007?
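[Ed. note: for anyone attempting the same before/after diff, here is a minimal sketch of a v2.mean reader. It assumes the usual fixed-width layout (a 12-character station id including the duplicate digit, a 4-character year, then twelve 5-character monthly means in tenths of °C, with -9999 marking missing months); verify against the GHCN readme before relying on it.]

```python
def parse_v2_mean_line(line):
    """Parse one GHCN v2.mean record into (station_id, year, monthly_c).

    Assumed fixed-width layout: chars 0-11 station id (incl. duplicate
    digit), 12-15 year, then twelve 5-char monthly values in 0.1 C,
    with -9999 meaning "missing".
    """
    station_id = line[0:12]
    year = int(line[12:16])
    monthly_c = []
    for i in range(12):
        raw = int(line[16 + 5 * i : 21 + 5 * i])
        monthly_c.append(None if raw == -9999 else raw / 10.0)
    return station_id, year, monthly_c

# Example with a synthetic record (the Orland station id from the post,
# year 2007, January = 8.3 C, February missing, the rest arbitrary):
values = [83, -9999, 121, 145, 190, 231, 258, 251, 222, 168, 112, 79]
line = "425725910040" + "2007" + "".join("%5d" % v for v in values)
station_id, year, monthly_c = parse_v2_mean_line(line)
print(station_id, year, monthly_c[0], monthly_c[1])  # 425725910040 2007 8.3 None
```

Diffing two archives then reduces to parsing both files into {(station_id, year): monthly_c} dicts and comparing values key by key.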

Kevin Kilty
December 11, 2009 2:26 pm

REPLY: we really don’t need to do all stations – the 1218 in USHCN would work, plus a few thousand in GHCN, but there are no paper records of those. – Anthony
I have from time to time gone looking for climate data on-line or on CDRom or some other format, and I am often surprised at the variety of formats available. If one had no access to paper records for the GHCN stations, then what would be the next best alternative? Are all of the various data products derived from the same underlying data, so that with, say, SOD records from GHCN stations, one could be pretty well assured of having the raw GHCN daily maximum, minimum and average temperatures?

December 11, 2009 2:29 pm

I wonder if these new adjustments affect the outcome of Peter’s video about UHI using GISS data – http://wattsupwiththat.com/2009/12/09/picking-out-the-uhi-in-global-temperature-records-so-easy-a-6th-grader-can-do-it/.
Maybe Gavin decided he couldn’t afford to have any more 6th graders undercutting the message so effortlessly.
C3H Editor

windansea
December 11, 2009 2:30 pm

watch as armed UN security guard protects IPCC scientist from question about Climate gate
HT Jeffid

Tyler
December 11, 2009 2:30 pm

That’s sooooo funny!
Coincidentally, I often use the term “homogeneity adjustment” as a colloquialism for the word “trick.”
Fix a leaky faucet: “That’ll do the homogeneity adjustment.”
On stage: “And now for my next homogeneity adjustment…”
Tax incentives: “I believe in homogeneity adjustmentle down economics”
Halloween: “Homogeneity adjustment or Treat!” (ALWAYS gets the treat)
Old saying: “You can’t teach an old dog new homogeneity adjustments.”
At a hockey game: “My son scored a Hat-Homogeneity adjustment”
Hot Rods: “His ride is totally homogeneity-adjustmented OUT!”
TMZ: “Seems some of Tiger’s women may have been turning homogeneity adjustments.”
Confidence man: “A real homogeneity adjustmenster.”
Common practice: “Homogeneity adjustment of the trade.”
Playing a practical joke: “Ah Ahhh! Homogeneity adjustmented you!”
Catching on to a joke: “I’m not falling for that homogeneity adjustment.”

CMT
December 11, 2009 2:37 pm

Re Daniel Ferry 13:56:28–
If you take the average of each day’s Max and Min, then average all those daily averages, you get a monthly average of 28.4.

AdderW
December 11, 2009 2:37 pm

Media attention, this needs more media attention…

Barry Kearns
December 11, 2009 2:42 pm

The purpose of this seems clear.
It’s to hide the decline… in data integrity.

December 11, 2009 2:48 pm

I knew there was a reason I never updated the plots I made in 2007:
http://www.unur.com/comp/ghcn-v2/
http://www.unur.com/climate/ghcn-v2/425/72591.html

December 11, 2009 2:48 pm

[no profanity, none, nada ~ ctm]

drjohn
December 11, 2009 2:50 pm

Evil, not wrong.