
Readers may recall I wrote in this article about my family's Memorial Day excursion to Crater Lake:
Crater Lake happens to have a USHCN weather station, and it is one of the few stations that GISS excludes (they have an exclusion code for it in their software, which Mosher located some time ago).
Steve Mosher commented in detail about that:
Thanks for remembering that, Anthony.
In addition to Crater Lake, there were four other Northern California stations that GISS had removed from the data. In the paper, Hansen merely says this:
"The strong cooling that exists in the unlit station data in the northern California region is not found in either the periurban or urban stations either with or without any of the adjustments. Ocean temperature data for the same period, illustrated below, has strong warming along the entire West Coast of the United States. This suggests the possibility of a flaw in the unlit station data for that small region. After examination of all of the stations in this region, five of the USHCN station records were altered in the GISS analysis because of inhomogeneities with neighboring stations (data prior to 1927 for Lake Spaulding, data prior to 1929 for Orleans, data prior to 1911 for Electra Ph, data prior to 1906 for Willows 6W, and all data for Crater Lake NPS HQ were omitted), so these apparent data flaws would not be transmitted to adjusted periurban and urban stations. If these adjustments were not made, the 100-year temperature change in the United States would be reduced by 0.01°C."
Well, I wanted to see the analysis, the code, that was used to make this determination that these stations were flawed. Gavin basically said the paper documented everything, but those words don't tell me HOW it was done, just THAT it was done. Anyway, that was pretty much why I wanted the code released. When it finally was released, you could see that there is no analysis supporting the removal of these stations. Upon inspection you can see some flakey stuff with the stations, but I was looking for math that quantified the flakiness. In the end, these were excluded by hand.
The argument, of course, is that including them or excluding them amounts to a tiny difference. That argument never held much water for me. The question, in my mind, was how many other flakey stations were there, and was there math that could detect them? I think that's a good question. It doesn't make me doubt the record; I just think it's a good question.
===============================================================
I agree. I don't know that the Crater Lake data is flawed or "flakey"; it may just reflect the snow pack hanging around longer, creating a cool bias into summer. With the snow pack as heavy as it is this year, it will be interesting to see if that has the effect of suppressing the mean temperature.
Stupid is as stupid does.
How many other cooling stations were excluded? They dropped several thousand, as I recall.
It strikes me as strange that the implicit assumption in the GISS method is that the rural (no lights) sites are the ones in error and therefore need adjusting or removing. Surely the basic logic should run the other way: the (truly) rural sites are the ones most likely to provide an accurate record of their particular location, so if their records disagree with those of nearby urban or semi-urban sites, the suspicion should fall on the urban sites (UHI, microsite issues, undocumented station or site moves).
One thing I was told while doing my PhD: there is no such thing as bad data, just incorrect interpretations.
The moral of that was: get the measurements correct, and record and report all the data. Once that is on record, anyone wishing to examine or criticise your interpretation of the results has all the information to fall back on to verify, or otherwise, the conclusions. This is something that seems lost on quite a few climate scientists, despite many of them coming from Earth Science backgrounds (same as me).
The ground squirrels in Morro Rock State Preserve, California appear to have another viewpoint on the coming global warming trend: http://i.telegraph.co.uk/multimedia/archive/01916/fat-squirrel_1916397i.jpg
Guys, again we need to start attacking "mean temperature" as the MEANINGLESS METRIC IT IS!
You really CANNOT AVERAGE TEMPERATURES!
You can look at "averages" for a STATION, and compare its changes over time. That gives some idea of LOCAL CHANGES.
But this "mean temperature" of surface stations is a MEANINGLESS METRIC, because it tells us nothing about what is happening with regard to ATMOSPHERIC ENERGY.
Case in point: it was recently 86 F and 60% RH in Minnesota. At the same time it was 104 F and 13% RH in Phoenix. In the true definition of HEAT (i.e., energy content per lbm of atmosphere), which was "hotter"? Answer: MN, at 38 BTU/lbm versus AZ at 33 BTU/lbm.
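For anyone who wants to check Max's arithmetic, here is a minimal Python sketch of the moist-air enthalpy calculation, using the Magnus approximation for saturation vapor pressure and the standard ASHRAE-style enthalpy formula. These are textbook psychrometric values, nothing from the station record; the sketch gives roughly 38 BTU/lbm for the Minnesota case and about 32 for Phoenix, close to the figures quoted:

```python
import math

P_ATM_PSIA = 14.696  # standard sea-level pressure, psia

def saturation_pressure_psia(temp_f):
    """Saturation vapor pressure via the Magnus approximation (input in deg F)."""
    temp_c = (temp_f - 32.0) * 5.0 / 9.0
    es_hpa = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))
    return es_hpa * 0.0145038  # hPa -> psia

def moist_air_enthalpy(temp_f, rh_percent):
    """Enthalpy of moist air, BTU per lbm of dry air."""
    p_w = (rh_percent / 100.0) * saturation_pressure_psia(temp_f)
    w = 0.622 * p_w / (P_ATM_PSIA - p_w)  # humidity ratio, lbm water per lbm dry air
    return 0.240 * temp_f + w * (1061.0 + 0.444 * temp_f)

print(moist_air_enthalpy(86, 60))   # Minnesota: ~38 BTU/lbm
print(moist_air_enthalpy(104, 13))  # Phoenix:   ~32 BTU/lbm
```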
If we don't know the humidities at the observing stations, we don't know squat. All these yo-yos doing "mean temperature" and using it as a metric should have their PhDs revoked.
Max
Nick Stokes says:
June 9, 2011 at 4:08 am
I'm not sure what you mean about showing the evidence that this was done. The following quote came from Hansen himself: "and all data for Crater Lake NPS HQ were omitted." Do you really need more evidence than that? If so, I doubt there's any evidence that would satisfy you.
By the way, Nick, I realize the quote was from 2001, but if they did it then, what evidence do you have to show that they stopped excluding it?
Re: HungarianFalcon
Apparently CRU throws out outliers (see http://wattsupwiththat.com/2010/09/05/analysis-cru-tosses-valid-5-sigma-climate-data/ ). These outliers apparently occur more often in winter than in summer.
I don't know about GISS, though.
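For anyone unfamiliar with the mechanics being discussed, a 5-sigma screen amounts to something like the following generic Python sketch (an illustration of the technique, not CRU's actual code):

```python
import statistics

def five_sigma_flags(values):
    """Return indices of values more than 5 standard deviations from the sample mean."""
    mean = statistics.fmean(values)
    sigma = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mean) > 5 * sigma]

# Winter temperatures at continental stations swing far more than summer
# ones, so a screen like this will trip on legitimate winter extremes more
# often than summer ones, consistent with the linked post's observation.
```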
So what we are saying is that if 75% of stations show warming and 25% show cooling, then the stations showing cooling will get excluded because they are outvoted by 3 to 1. Interesting way of doing an analysis.
It seems to me that a meteorologist would look at the same data and conclude that perhaps the prevailing weather systems had not changed much, but were tracking in slightly different directions, making some areas cooler and some warmer, with no real net effect.
An engineer might come to the conclusion that the long term temperature trends could not be relied upon to the required accuracy, and that the overall trend was inconclusive.
I don't see any justification at all for excluding some sites simply because they don't fit the expected outcome, and even less justification for other scientists being complicit with the way this has been done. It is something of a scientific scandal.
Has anyone done a study of all the stations that are excluded, to see whether they mostly disprove AGW arguments? If so, please send me a link. If not, why not?
I suspect they are mostly excluded because they disprove the warming myth.
Thanks.
Still rigging the system to match the theory.
A few years back I was hoping to find good station data across an elevation gradient for coastal British Columbia. I was very disappointed to find very serious data quality issues for coastal mountain stations. (This was based on COMMON SENSE manual inspection of the data; definitely NO automated algorithms required.) I contacted Environment Canada with a few questions, and they admitted upfront to the issues without hesitation. I was informed that they had considered shutting down one of the high-profile coastal mountain stations (near Vancouver) due to data quality issues. Although the data quality was too poor for the types of analyses I had in mind to confirm what I knew from first-hand outdoor experience, I'm VERY glad (thank you, Environment Canada) the stations weren't shut down altogether, as imperfect information is dramatically superior to NO information.
By the way, even good coastal mountain data does not strongly correlate with nearby (only a few kilometers away) low-elevation coastal weather stations under all conditions. This is NO surprise. It is, rather, what one sees and experiences first-hand if one is an outdoor enthusiast or coastal mountain worker.
By the way, Anthony, 3 meters of snow (10 feet) can melt pretty fast once the North Pacific High settles in for summer. Once that happens, the snow doesn't stand a chance.
That's an insightful point, Max, thank you. There is a lot of nonsense thrown around. Heat energy stored in things with greater thermal mass is more significant than heat stored in smaller thermal masses. We're fortunate to live on a planet with lots of water and a temperature span that includes two phase changes, which modulate the energy/temperature correlation.
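To put a number on the phase-change point, here is a back-of-envelope sketch using standard handbook values for water (an illustration only, nothing from any climate dataset). Melting a gram of ice absorbs as much energy as warming that same gram of liquid water by roughly 80°C, with no temperature change at all:

```python
C_WATER = 4.186   # specific heat of liquid water, J/(g*K)
L_FUSION = 334.0  # latent heat of fusion of ice, J/g
L_VAPOR = 2257.0  # latent heat of vaporization of water, J/g

# Degrees of sensible warming "hidden" in each phase change:
print(L_FUSION / C_WATER)  # ~80 K equivalent for melting
print(L_VAPOR / C_WATER)   # ~540 K equivalent for evaporation
```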
Ian B says:
June 9, 2011 at 6:24 am
"The moral of that was: get the measurements correct, and record and report all the data. Once that is on record, anyone wishing to examine or criticise your interpretation of the results has all the information to fall back on to verify, or otherwise, the conclusions. This is something that seems lost on quite a few climate scientists, despite many of them coming from Earth Science backgrounds (same as me)."
Excellent points, Ian. In fact, what strikes me about this is that it NEVER occurs to the "scientists" at GISS to actually, you know, VISIT the site(s) in question, check out the equipment, and assess the siting(s)! They would rather sit happily in their air-conditioned offices in NYC and decide arbitrarily (or at least without any real-world verification) whether data should be dropped due to "inhomogeneities". Pathetic…
Anthony,
My apologies, but a small correction may be warranted. I found this old Climate Audit post, not long after the GISS code had been FOIA’d.
http://climateaudit.org/2008/07/19/cedarville-sausage/
Following the dissection in the comments, it appears GISS was completely deleting some stations, and partial periods from other stations, but then USHCN applied some of their famous "corrections" to the records, resulting in GISS no longer having to do their own "corrections."
So as near as I can follow those comments, which were made in the early days of deciphering the garbled GISS mess, GISS is not currently ignoring Crater Lake, but USHCN is, so GISS doesn't have to.
As mentioned, that was early days; subsequent analysis may have found otherwise, and you should go through those comments yourself to see what you can make of them, besides checking with Mosher. I hate to feed the beast, but if it's the truth then it needs to be told, especially here, where we know how important the truth is in authentic science.
So if you can ignore colder trends because they don't fit the surrounding urban warming trends, why not ignore the urban warming trends because they don't fit the surrounding cold trends? It seems the same argument could be made for throwing out the urban trends.
@max Hugosen: Perhaps we should change the metric to entropy. That would have some real meaning.
…”The strong cooling that exists in the unlit station data in the northern California region…”
Where else in the world do we know of that has a lot of "unlit station data"? There wouldn't be a problem with polar twilight readings, would there, Maestro?
0.01°C here, another 0.01°C there, and the next thing you know, if we've done it often enough, a significant portion of the alleged warming is accounted for. Consider the exclusion of that tiny part of the world from the data… so yes, the question is: are there more, and how many? How does one go about finding out?
In re: Metryq, June 9, 2011 at 4:06 am:
Funny, I was thinking the exact same thing. Tycho's observations didn't fit what Galileo was selling, and Kepler figured it out. But for that, it is uncertain how long it would have taken, or whether Newton would have had the information to work out Newtonian physics at all. Of course, Tycho was rather disinterested in what the data fit with; he was just showing off how accurately he could record it. Hmm… something to that whole 'disinterested' thing, isn't there? And there you have it: Hansen and Schmidt modifying and eliminating data that doesn't 'fit' their model. A shame that they are setting science back under the guise of doing real science.
I am sure history will forgive them and hold them in high esteem.
I find these hundredths and thousandths of a degree laughable. Please explain what measurements you have that can distinguish temperatures to 0.01°C, how you measure to that resolution, and how you get averages at that level when you are making measurements with instruments that probably don't do better than tenths. Did I miss some statistical treatment that increases your average resolution by an order of magnitude or two over the measurement?
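There is a standard statistical answer to Bob's question: averaging N independent readings shrinks the random error as 1/sqrt(N), which is the usual justification for quoting a mean far finer than the instrument's resolution. The catch, very much in the spirit of this thread, is that this only works for random error; a systematic bias survives any amount of averaging. A quick illustrative simulation in Python:

```python
import random

TRUE_TEMP = 15.00  # hypothetical "true" temperature, deg C
N = 10_000         # number of independent readings

# Readings from an instrument that resolves only to 0.1 C, each with
# random gaussian error.
readings = [round(TRUE_TEMP + random.gauss(0, 0.3), 1) for _ in range(N)]
print(sum(readings) / N)  # typically within ~0.01 C of 15.00

# Add a constant +0.2 C siting bias: averaging never removes it.
biased = [r + 0.2 for r in readings]
print(sum(biased) / N)    # ~15.20 no matter how large N gets
```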
It would be nice if any of this were an indicator of the energy balance between what arrives each moment from the sun and what escapes from Earth forever to the dark places among the stars. It doesn't provide that information, and it doesn't even verify or deny global warming. We have no global warming. We have regions of warming and regions of cooling. The business of averaging the temperatures of these regions has no non-political purpose. Leaving out non-warming regions recognizes that what I just said is so, and that the purpose is to fulfill non-scientific goals.
Does anyone need more evidence?
Josh Grella
June 9, 2011 at 6:31 am
###
He’s a propagandist and only posts to seed FUD.
Bob: It is all laughable. These weather stations weren’t set up to measure global average temperature. They were set up to measure local weather, which is a totally different thing.
In cardiology there is the concept of a "unidirectional block," a very bad thing that can happen to your heart and trigger cardiac arrest.
This is a simple transposition of the same concept: a "unidirectional block of flaws."