
Climategate: CRU Was But the Tip of the Iceberg
Not surprisingly, the blatant corruption exposed at Britain’s premier climate institute was not contained within the nation’s borders. Just months after the Climategate scandal broke, a new study has uncovered compelling evidence that our government’s principal climate centers have also been manipulating worldwide temperature data in order to fraudulently advance the global warming political agenda.
Not only does the preliminary report [PDF] indict a broader network of conspirators, but it also challenges the very mechanism by which global temperatures are measured, published, and historically ranked.
Last Thursday, Certified Consulting Meteorologist Joseph D’Aleo and computer expert E. Michael Smith appeared together on KUSI TV [Video] to discuss the Climategate — American Style scandal they had discovered. This time out, the alleged perpetrators are the National Oceanic and Atmospheric Administration (NOAA) and the NASA Goddard Institute for Space Studies (GISS).
NOAA stands accused by the two researchers of strategically deleting cherry-picked, cooler-reporting weather observation stations from the temperature data it provides the world through its National Climatic Data Center (NCDC). D’Aleo explained to show host and Weather Channel founder John Coleman that while the Hadley Center in the U.K. has been the subject of recent scrutiny, “[w]e think NOAA is complicit, if not the real ground zero for the issue.”
And their primary accomplices are the scientists at GISS, who put the altered data through an even more biased regimen of alterations, including intentionally replacing the dropped NOAA readings with those of stations located in much warmer locales.
As you’ll soon see, the ultimate effects of these statistical transgressions on the reports which influence climate alarm and subsequently world energy policy are nothing short of staggering.
NOAA – Data In / Garbage Out
Although satellite temperature measurements have been available since 1978, most global temperature analyses still rely on data captured from land-based thermometers, scattered more or less about the planet. It is that data which NOAA receives and disseminates – although not before performing some sleight-of-hand on it.
Smith has done much of the heavy lifting involved in analyzing the NOAA/GISS data and software, and he chronicles his often frustrating experiences at his fascinating website. There, detail-seekers will find plenty to satisfy, divided into easily-navigated sections — some designed specifically for us “geeks,” but most readily approachable to readers of all technical strata.
Perhaps the key point discovered by Smith was that by 1990, NOAA had deleted from its datasets all but 1,500 of the 6,000 thermometers in service around the globe.
Now, 75% represents quite a drop in sampling population, particularly considering that these stations provide the readings used to compile both the Global Historical Climatology Network (GHCN) and United States Historical Climatology Network (USHCN) datasets. These are the same datasets, incidentally, which serve as primary sources of temperature data not only for climate researchers and universities worldwide, but also for the many international agencies using the data to create analytical temperature anomaly maps and charts.
Yet as disturbing as the number of dropped stations was, it is the nature of NOAA’s “selection bias” that Smith found infinitely more troubling.
It seems that stations placed in historically cooler, rural areas of higher latitude and elevation were scrapped from the data series in favor of more urban locales at lower latitudes and elevations. Consequently, post-1990 readings have been biased to the warm side not only by selective geographic location, but also by the anthropogenic heating influence of a phenomenon known as the Urban Heat Island Effect (UHI).
For example, Canada’s reporting stations dropped from 496 in 1989 to 44 in 1991, with the percentage of stations at lower elevations tripling while the number of those at higher elevations dropped to one. That’s right: As Smith wrote in his blog, they left “one thermometer for everything north of LAT 65.” And that one resides in a place called Eureka, which has been described as “The Garden Spot of the Arctic” due to its unusually moderate summers.
Smith also discovered that in California, only four stations remain – one in San Francisco and three in Southern L.A. near the beach – and he rightly observed that
It is certainly impossible to compare it with the past record that had thermometers in the snowy mountains. So we can have no idea if California is warming or cooling by looking at the USHCN data set or the GHCN data set.
That’s because the baseline temperatures to which current readings are compared were a true averaging of both warmer and cooler locations. And comparing these historic true averages to contemporary false averages – which have had the lower end of their numbers intentionally stripped out – will always yield a warming trend, even when temperatures have actually dropped.
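To make that arithmetic concrete, here is a minimal Python sketch with invented numbers. It illustrates the selection-bias argument exactly as stated above; it is not a description of any actual NOAA or GISS computation.

# Selection-bias arithmetic, illustrated with made-up temperatures.
# Note that neither station has warmed at all between the two periods.
baseline_stations = {"mountain": 5.0, "valley": 15.0}   # both reported during the baseline era
current_stations = {"valley": 15.0}                      # the cool mountain station was dropped

baseline_mean = sum(baseline_stations.values()) / len(baseline_stations)   # 10.0
current_mean = sum(current_stations.values()) / len(current_stations)      # 15.0

print(f"apparent warming: {current_mean - baseline_mean:+.1f} C")   # +5.0 C, purely from station loss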
Overall, U.S. online stations have dropped from a peak of 1,850 in 1963 to a low of 136 as of 2007. In his blog, Smith wittily observed that “the Thermometer Langoliers have eaten 9/10 of the thermometers in the USA[,] including all the cold ones in California.” But he was deadly serious after comparing current to previous versions of USHCN data and discovering that this “selection bias” creates a +0.6°C warming in U.S. temperature history.
And no wonder — imagine the accuracy of campaign tracking polls were Gallup to include only the replies of Democrats in their statistics. But it gets worse.
Prior to publication, NOAA effects a number of “adjustments” to the cherry-picked stations’ data, supposedly to eliminate flagrant outliers, adjust for time of day heat variance, and “homogenize” stations with their neighbors in order to compensate for discontinuities. This last one, they state, is accomplished by essentially adjusting each to jibe closely with the mean of its five closest “neighbors.” But given the plummeting number of stations, and the likely disregard for the latitude, elevation, or UHI of such neighbors, it’s no surprise that such “homogenizing” seems to always result in warmer readings.
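For readers who want to see the mechanics, here is a rough Python sketch of the “adjust each station toward the mean of its five closest neighbors” idea described above. The station locations, readings, and the simple great-circle distance are all invented for illustration; this is not NOAA’s actual homogenization code.

import math

stations = {
    # name: (latitude, longitude, reported_temperature_C) -- all invented
    "rural_high":  (47.0, -115.0,  8.0),
    "urban_a":     (46.5, -114.0, 12.5),
    "urban_b":     (46.8, -113.5, 12.0),
    "airport":     (47.2, -114.5, 13.0),
    "suburb":      (46.9, -114.8, 11.5),
    "valley_town": (47.1, -113.9, 11.0),
}

def distance_km(p, q):
    # Crude great-circle (haversine) distance, good enough to rank neighbors.
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def homogenized(name, k=5):
    # Replace a station's reading with the mean of its k nearest neighbors.
    here = stations[name][:2]
    neighbors = sorted((n for n in stations if n != name),
                       key=lambda n: distance_km(here, stations[n][:2]))[:k]
    return sum(stations[n][2] for n in neighbors) / len(neighbors)

print("rural_high raw reading:   ", stations["rural_high"][2])
print("rural_high 'homogenized': ", round(homogenized("rural_high"), 2))
# With mostly warm urban survivors as neighbors, the adjusted value is pulled warm.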
The chart below is from Willis Eschenbach’s WUWT essay, “The smoking gun at Darwin Zero,” and it plots GHCN Raw versus homogeneity-adjusted temperature data at Darwin International Airport in Australia. The “adjustments” actually reversed the 20th-century trend from temperatures falling at 0.7°C per century to temperatures rising at 1.2°C per century. Eschenbach isolated a single station and found that it was adjusted to the positive by 6.0°C per century, and with no apparent reason, as all five stations at the airport more or less aligned for each period. His conclusion was that he had uncovered “indisputable evidence that the ‘homogenized’ data has been changed to fit someone’s preconceptions about whether the earth is warming.”
WUWT’s editor, Anthony Watts, has calculated the overall U.S. homogeneity bias to be 0.5°F to the positive, which alone accounts for almost one half of the 1.2°F warming over the last century. Add Smith’s selection bias to the mix and poof – actual warming completely disappears!
Yet believe it or not, the manipulation does not stop there.
GISS – Garbage In / Globaloney Out
The scientists at NASA’s GISS are widely considered to be the world’s leading researchers into atmospheric and climate changes. And their Surface Temperature (GISTemp) analysis system is undoubtedly the premier source for global surface temperature anomaly reports.
In creating its widely disseminated maps and charts, the program merges station readings collected from the Scientific Committee on Antarctic Research (SCAR) with GHCN and USHCN data from NOAA.
It then puts the merged data through a few “adjustments” of its own.
First, it further “homogenizes” stations, supposedly adjusting for UHI by (according to NASA) changing “the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations.” Of course, the reduced number of stations will have the same effect on GISS’s UHI correction as it did on NOAA’s discontinuity homogenization – the creation of artificial warming.
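As a rough illustration of the trend-matching NASA describes, here is a short Python sketch using synthetic series. It is not the GISTEMP code; it simply forces an urban series to carry its rural neighbors’ long-term slope while keeping its own year-to-year wiggles.

import numpy as np

years = np.arange(1950, 2010)
rng = np.random.default_rng(0)

# Invented series: a mild rural trend and a UHI-inflated urban trend, plus noise.
rural = 0.005 * (years - 1950) + rng.normal(0, 0.2, years.size)   # ~0.5 C/century
urban = 0.020 * (years - 1950) + rng.normal(0, 0.2, years.size)   # ~2.0 C/century

def trend(y):
    return np.polyfit(years, y, 1)[0]          # degrees per year

# Remove the urban slope, keep the wiggles, then impose the rural slope.
urban_detrended = urban - trend(urban) * (years - years.mean())
urban_adjusted = urban_detrended + trend(rural) * (years - years.mean())

print(f"urban trend before: {trend(urban) * 100:.2f} C/century")
print(f"urban trend after:  {trend(urban_adjusted) * 100:.2f} C/century  (now matches rural)")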
Furthermore, in his communications with me, Smith cited boatloads of problems and errors he found in the Fortran code written to accomplish this task, ranging from hot airport stations being mismarked as “rural” to the “correction” having the wrong sign (+/-) and therefore increasing when it meant to decrease or vice-versa.
And according to NASA, “If no such neighbors exist or the overlap of the rural combination and the non-rural record is less than 20 years, the station is completely dropped; if the rural records are shorter, part of the non-rural record is dropped.”
However, Smith points out that a dropped record may be “from a location that has existed for 100 years.” For instance, if an aging piece of equipment gets swapped out, thereby changing its identification number, the time horizon reinitializes to zero years. Even having a large enough temporal gap (e.g., during a world war) might cause the data to “just get tossed out.”
But the real chicanery begins in the next phase, wherein the planet is flattened and stretched onto an 8,000-box grid, and each station’s time series is converted to a series of anomalies (degree variances from the baseline). Now, you might wonder just how one manages to fill 8,000 boxes using 1,500 stations.
Here’s NASA’s solution:
For each grid box, the stations within that grid box and also any station within 1200km of the center of that box are combined using the reference station method.
Even on paper, the design flaws inherent in such a process should be glaringly obvious.
So it’s no surprise that Smith found many examples of problems surfacing in actual practice. He offered me Hawaii for starters. It seems that all of the Aloha State’s surviving stations reside in major airports. Nonetheless, this unrepresentative hot data is what’s used to “infill” the surrounding “empty” Grid Boxes up to 1200 km out to sea. So in effect, you have “jet airport tarmacs ‘standing in’ for temperature over water 1200 km closer to the North Pole.”
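Here is a rough Python sketch of that kind of 1200 km infilling, in the spirit of the NASA description quoted earlier. The linear distance weighting and all station values are my own illustrative assumptions, not the actual GISTEMP reference station method.

import math

RADIUS_KM = 1200.0

def distance_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def box_anomaly(box_center, stations):
    # Distance-weighted mean anomaly of all stations within 1200 km of the box center.
    total, wsum = 0.0, 0.0
    for lat, lon, anomaly in stations:
        d = distance_km(box_center[0], box_center[1], lat, lon)
        if d <= RADIUS_KM:
            w = 1.0 - d / RADIUS_KM        # assumed weighting, for illustration only
            total += w * anomaly
            wsum += w
    return total / wsum if wsum else None   # None = the box stays empty

# An open-ocean box several hundred km from a single surviving airport station:
airport = (21.3, -157.9, +1.4)              # hypothetical warm anomaly at the airport
empty_ocean_box = (28.0, -160.0)
print(round(box_anomaly(empty_ocean_box, [airport]), 2))   # the box simply inherits the airport's anomaly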
An isolated problem? Hardly, reports Smith.
From KUSI’s Global Warming: The Other Side:
“There’s a wonderful baseline for Bolivia — a very high mountainous country — right up until 1990 when the data ends. And if you look on the [GISS] November 2009 anomaly map, you’ll see a very red rosy hot Bolivia [boxed in blue]. But how do you get a hot Bolivia when you haven’t measured the temperature for 20 years?”
[Figure: GISS November 2009 temperature anomaly map, with Bolivia boxed in blue and Canada’s Northwest Territories boxed in green.]
Of course, you already know the answer: GISS simply fills in the missing numbers – originally cool, as Bolivia contains proportionately more land above 10,000 feet than any other country in the world – with hot ones available in neighboring stations on a beach in Peru or somewhere in the Amazon jungle.
Remember that single station north of 65° latitude which they located in a warm section of northern Canada? Joe D’Aleo explained its purpose: “To estimate temperatures in the Northwest Territory [boxed in green above], they either have to rely on that location or look further south.”
Pretty slick, huh?
And those are but a few examples. In fact, throughout the entire grid, cooler station data are dropped and “filled in” by temperatures extrapolated from warmer stations in a manner obviously designed to overestimate warming…
…And convince you that it’s your fault.
Government and Intergovernmental Agencies — Globaloney In / Green Gospel Out
Smith attributes up to 3°F (more in some places) of added “warming trend” between NOAA’s data adjustment and GIStemp processing.
That’s over twice last century’s reported warming.
And yet, not only are NOAA’s bogus data accepted as green gospel, but so are its equally bogus hysterical claims, like this one from the 2006 annual State of the Climate in 2005 [PDF]: “Globally averaged mean annual air temperature in 2005 slightly exceeded the previous record heat of 1998, making 2005 the warmest year on record.”
And as D’Aleo points out in the preliminary report, the recent NOAA proclamation that June 2009 was the second-warmest June in 130 years will go down in the history books, despite multiple satellite assessments ranking it as the 15th-coldest in 31 years.
Even when our own National Weather Service (NWS) makes its frequent announcements that a certain month or year was the hottest ever, or that five of the warmest years on record occurred last decade, they’re basing such hyperbole entirely on NOAA’s warm-biased data.
And how can anyone possibly read GISS chief James Hansen’s Sunday claim that 2009 was tied with 2007 for second-warmest year overall, and the Southern Hemisphere’s absolute warmest in 130 years of global instrumental temperature records, without laughing hysterically? It’s especially laughable when one considers that NOAA had just released a statement claiming that very same year (2009) to be tied with 2006 for the fifth-warmest year on record.
So how do alarmists reconcile one government center reporting 2009 as tied for second while another had it tied for fifth? If you’re WaPo’s Andrew Freedman, you simply chalk it up to “different data analysis methods” before adjudicating both NASA and NOAA innocent of any impropriety based solely on their pointless assertions that they didn’t do it.
Earth to Andrew: “Different data analysis methods”? Try replacing “analysis” with “manipulation,” and ye shall find enlightenment. More importantly, since the drastically divergent results of the two “methods” can’t both be right, both are immediately suspect. Does that simple fact somehow elude you?
But by far the most significant impact of this data fraud is that it ultimately bubbles up to the pages of the climate alarmists’ bible: The United Nations Intergovernmental Panel on Climate Change Assessment Report.
And wrong data begets wrong reports, which – particularly in this case – begets dreadfully wrong policy.
It’s High Time We Investigated the Investigators
The final report will be made public shortly, and it will be available at the websites of both report-supporter Science and Public Policy Institute and Joe D’Aleo’s own ICECAP. As they’ve both been tremendously helpful over the past few days, I’ll trust in the opinions I’ve received from the report’s architects to sum up.
This from the meteorologist:
The biggest gaps and greatest uncertainties are in high latitude areas where the data centers say they ‘find’ the greatest warming (and thus which contribute the most to their global anomalies). Add to that no adjustment for urban growth and land use changes (even as the world’s population increased from 1.5 to 6.7 billion people) [in the NOAA data] and questionable methodology for computing the historical record that very often cools off the early record and you have surface based data sets so seriously flawed, they can no longer be trusted for climate trend or model forecast assessment or decision making by the administration, congress or the EPA.
Roger Pielke Sr. has suggested: “…that we move forward with an inclusive assessment of the surface temperature record of CRU, GISS and NCDC. We need to focus on the science issues. This necessarily should involve all research investigators who are working on this topic, with formal assessments chaired and paneled by mutually agreed to climate scientists who do not have a vested interest in the outcome of the evaluations.” I endorse that suggestion.
Certainly, all rational thinkers agree. Perhaps even the mainstream media, most of whom have hitherto mistakenly dismissed Climategate as a uniquely British problem, will now wake up and demand such an investigation.
And this from the computer expert:
That the bias exists is not denied. That the data are too sparse and with too many holes over time is not denied. Temperature series programs, like NASA GISS GIStemp, try, but fail, to fix the holes and the bias. What is claimed is that “the anomaly will fix it.” But it cannot. Comparison of a cold baseline set to a hot present set must create a biased anomaly. It is simply overwhelmed by the task of taking out that much bias. And yet there is more. A whole zoo of adjustments are made to the data. These might be valid in some cases, but the end result is to put in a warming trend of up to several degrees. We are supposed to panic over a 1/10 degree change of “anomaly” but accept 3 degrees of “adjustment” with no worries at all. To accept that GISTemp is “a perfect filter”. That is, simply, “nuts”. It was a good enough answer at Bastogne, and applies here too.
Smith, who had a family member attached to the 101st Airborne at the time, refers to the famous line from the 101st’s commander, U.S. Army General Anthony Clement McAuliffe, who replied to a German ultimatum to surrender during the December 1944 Battle of Bastogne, Belgium, with a single word: “Nuts.”
And that’s exactly what we’d be were we to surrender our freedoms, our economic growth, and even our simplest comforts to duplicitous zealots before checking and double-checking the work of the prophets predicting our doom should we refuse.
Marc Sheppard is environment editor of American Thinker and editor of the forthcoming Environment Thinker.

Anthony:
Am I reading the numbers below incorrectly (“more than 860” temperature stations versus “a low of 136 as of 2007”)? If I am not, it explains the apparent lack of concern by NOAA about the quality of the stations that are not “online”. However, it raises the question of why NOAA would continue the façade of stations not “online” (and, perhaps, waste resources in the process).
SEE:
Is the U.S. Temperature Record Reliable?
By Anthony Watts – Spring, 2009
“The official record of temperatures in the continental United States comes from a network of 1,221 climate-monitoring stations overseen by the National Weather Service, a department of the National Oceanic and Atmospheric Administration (NOAA)….During the past few years [Anthony Watts] recruited a team of more than 650 volunteers to visually inspect and photographically document more than 860 of these temperature stations.”
January 22, 2010
Climategate: CRU Was But the Tip of the Iceberg
By Marc Sheppard
“Perhaps the key point discovered by Smith was that by 1990, NOAA had deleted from its datasets all but 1,500 of the 6,000 thermometers in service around the globe….
Overall, U.S. online stations have dropped from a peak of 1,850 in 1963 to a low of 136 as of 2007. In his blog, Smith wittily observed that “the Thermometer Langoliers have eaten 9/10 of the thermometers in the USA…”.
Tilo Reber (08:32:47) :
That is simply not true. After 1990 there is definitely a difference.
The blue curve is stations discontinued at some time prior to 1995. In the last few years (1990-1995) numbers are getting very small.
A conspiracy by CRU and GISS to make fraudulent adjustments seems evident to me. This is not an accident. It appears that treason has been committed. I believe that the offenders should be punished accordingly.
Doug in Seattle (18:29:18) :
“I am not sure how this dropping out of cooler stations works. It is my understanding that when NOAA or GISS homogenizes for missing readings or stations they reach out to the nearest stations up to 1200 km distant and do their corrections based on the anomaly of the other stations – not the absolute temperature as seems to be inferred by D’Aleo and Smith.
While this doesn’t deal with UHI (which I think is quite important), it should not necessarily inject a warm bias unless they are cherry picking nearest stations for UHI. But that doesn’t appear to be what D’Aleo and Smith are alleging.”
Brandon Says: I believe this report is saying that there used to be many more temperature recording stations spread throughout the country, but in order to show a warming trend they slowly but steadily removed cooler stations from both the averages and the anomalies. By doing this continuously, you keep getting a warmer average and anomaly. If the process keeps repeating, before long they will be using only a dozen temperature stations in the US.
DirkH (06:02:08) :
German windpower has managed to increase its output from 17% of the nominal performance to about 21%. We have to be prepared for windy days when they suddenly deliver a 100% surge. In order to stabilise our networks, we have to have enough gas-powered plants with fast reaction times. Enough means: Total capacity “running reserve” on standby must be as big as wind+solar together. So for each installed GW renewables we need to install 1 GW “running reserve”.
Keep in mind that wind power output rises with the third power of wind velocity. This makes for violent spikes. It’s a tough job for the transmission lines and the standby gas plants. (…)
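As a small illustration of the cube law mentioned above, here is a toy Python sketch with an invented turbine rating and cut-in speed; real turbine power curves also flatten at rated output and cut out in storms.

RATED_MW = 3.0          # hypothetical turbine rating
RATED_WIND_MS = 13.0    # assumed wind speed at which the turbine reaches its rating

def turbine_output_mw(v):
    if v < 3.0:                      # below cut-in: no output
        return 0.0
    return min(RATED_MW, RATED_MW * (v / RATED_WIND_MS) ** 3)

for v in (4, 6, 8, 10, 12, 14):
    print(f"{v:>2} m/s -> {turbine_output_mw(v):4.2f} MW")
# Doubling the wind speed from 6 to 12 m/s raises output roughly eightfold,
# which is why the feed-in is so spiky and why fast-reacting backup is needed.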
What, we’re not at the technological stage yet where massive banks of super-capacitors will take care of storing such surges and filling in the lulls in generation?
Interesting side-note, how we are becoming increasingly dependent on electronics as a primary element of power distribution. Since it has been discovered that ultra-high voltage direct current is more efficient for power transmission than alternating current, we are seeing more transmission done with DC which then uses ultra-high voltage electronics to convert it back to AC. (BTW UHV DC as far as I know still violates existing theories in physics as to being more efficient than AC. I’m not that well read on the subject so perhaps there is an explanation of it by now.)
The need can be seen for fast-acting and efficient DC storage of electricity rather than fast-acting backup generation. If batteries and super-capacitors can’t handle it, maybe more research into flywheels and such needs to be done. We can build inverters to supply the AC, with some more work on increasing efficiency seemingly warranted. Then whatever backup generation will be considered necessary can be much slower reacting, and fully shut down when not needed.
We are, though, still setting ourselves up for a large solar flare or similar event to wipe out civilization by killing our technology. All-AC with no electronics involved does have a distinct advantage. It will also be embarrassing to find that old vehicles and tractors with mechanical “breaker” ignition systems are the only vehicles left running.
(…) And expensive. We pay about 30 US cents or 20 Eurocents per kWh. I don’t wish that on you. The leftists here say it’s still too cheap.
Ordinary people can still afford it without starving, therefore it is still too cheap. When virtually everyone will require the government’s help to get electricity, then it will be priced just right. This is how the logic of (green) leftists seems to go.
Quoting Tilo Reber (08:45:15) :
“Somebody tell me, does that look like a weather station out at the end of the tarmac. Right side.”
Commenting:
I don’t think it could be anything else, my friend. If I have learned nothing else from Anthony, et al, I know what those structures look like.
Steve Keohane: that graph you linked showing the temperature decline after the addition of more sites: do you have any details about it, such as where the data comes from?
Nick; I don’t understand how you can say that reducing stations can have a “negligible influence on climate changes within the range of anomalies”; for a start this is contradicted by the Steig et al paper on the Antarctic; Steig used mainly WAP station data to ‘calibrate’ satellite data over the bulk of Antarctica; the assumption of “high spatial coherence” so as to justify interpolation by RegEM was unfounded and the Steig conclusions were nonsense.
Micro-effects at particular stations are crucial and profoundly affect trends at particular stations which should not be extrapolated, at least using the methods that GISS appears to use, to other stations so as to produce a GMST. The Runnalls and Oke paper highlights this:
http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2FJCLI3663.1
The Runnalls and Oke method claims to be able to characterise noise; of course UHI is not noise [unless you intend it to be], and neither are variations between stations in the time of data collection and other lapses of standardisation; at the end of the day, a decision to drop non-conforming stations on the basis of standardisation criteria is a value judgement; and when the result is an almost uniform temperature increase, in contradiction to historical records, those criteria have to be challenged;
http://ams.confex.com/ams/13ac10av/techprogram/paper_39363.htm
I am not in weather or climate or science of any kind. Just a marine engineer looking after tankers, and on one occasion one of my ships touched bottom in an unseasonably low channel off Louisiana, spilling about 10,000 tons of oil in the process. Before anyone gets excited, this was about 20 years ago and the oil is all safely gone now with not a single seabird oiled-up. However, we had a hard job defending ourselves in the resulting court case, in which I was involved. A major problem for us was that both sides used NOAA pronouncements on the behaviour of the waters of the Gulf of Mexico, for prosecution and defence.
At no time were any of the NOAA statements ever queried by anyone, for NOAA was the acknowledged fount of natural science knowledge in the USA.
In the subsequent court case, in which I was a witness for the defence, not a single lawyer on either side ever challenged a NOAA statement. It was just entered into the court records, as a fact.
To learn that NOAA has been counterfeiting (climate) science facts will surely undermine the standing of that organisation when called in as expert witness in any future natural disaster scenario.
In similar circumstances now, I would advise any engineer involved to inspect very carefully any information supplied by NOAA.
cohenite (15:33:03) :
Nick; I don’t understand how you can say that reducing stations can have a “negligible influence on climate changes within the range of anomalies”;
No, I said that the T^4 issue has negligible influence. This just relates to the arithmetic I put up earlier. If you vary temperatures by a few degrees, as happens with substituting a station with a nearby one, say, then the radiative change will be proportional. There may be a non-negligible effect, but it won’t be a nonlinear T^4 effect.
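For what it’s worth, here is a quick numerical check of that linearization point, sketched in Python; 288 K is just a representative surface temperature.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def flux(T):
    return SIGMA * T ** 4

T0 = 288.0                              # ~15 C
for dT in (1.0, 3.0, 5.0):
    exact = flux(T0 + dT) - flux(T0)
    linear = 4 * SIGMA * T0 ** 3 * dT   # first-order (linear) approximation
    print(f"dT={dT:.0f} K: exact {exact:6.2f} W/m2, linear {linear:6.2f} W/m2")
# The two agree to within a few percent, i.e. swapping in a station a few degrees
# warmer or cooler does not produce a large nonlinear T^4 effect.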
FWIW
I am sending this to our congress.
They should be able to get it.
Nick Stokes (18:30:46) :
Nick, would you address my point about the Medieval Warm Period?
OT-
Perhaps I missed mention of it but,
Mann got 2.4 M from the stimulus.
Well, thank heavens for that Nick! With all due respect I still think you are not appreciating how much difference there is in calculating the w/m2 between the 2 methods Motl has described; the 9w/m2 difference is 5 times the change of irradiance [1.6w/m2] blamed on GHGs since 1750; I don’t know, maybe this paper can explain it better than me;
http://pielkeclimatesci.files.wordpress.com/2009/10/r-321.pdf
I know Parker, Jones, Peterson and Kennedy from CRU have made a comment on it at AGU; I haven’t read it and don’t feel like buying it so if you have a copy I would appreciate a link to it; I know Eli has also had a shot at the Pielke Sr paper but lucia sorted him out;
http://rankexploits.com/musings/2008/spatial-variations-in-gmst-really-do-matter-iii-when-estimating-climate-sensitivity/
There are a lot of comments here on the Thermometergate thread. I hope that I’m not overlapping too much with those that I have not read.
Sloppy science? Yes, throwing away high-latitude and high-altitude information is sloppy. But fraud? Not necessarily.
NOAA has statistical techniques for extrapolating the missing data from ‘nearby’ temperature stations (within 1200 km) in warmer areas. The National Post article that I read insinuates that NOAA and GISS are plugging temperature data from the nearest slightly-warmer areas that do have thermometers onto the ledger sheet for the colder-climate areas that are thermometer-deficient. I do not know if that is the case. Here’s another theoretical scenario.
Experience has shown that year-to-year temperature CHANGES in high-latitude areas and in high-altitude areas are highly correlated with those of ‘nearby’ warmer areas. So chuck the temperature measurements in cold-weather areas. I don’t know if this scenario is true either.
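That “theoretical scenario” can be sketched in a few lines of Python with invented numbers; it is only an illustration of the idea, not a claim about what NOAA actually does.

# Infilling a dropped cold-climate station by borrowing the year-to-year CHANGE
# (anomaly) from a correlated warmer neighbor, rather than its absolute reading.
neighbor_baseline = 10.0       # warm neighbor's long-term mean (invented)
neighbor_this_year = 10.8      # its reading this year, i.e. a +0.8 anomaly

cold_station_baseline = -15.0  # the dropped high-latitude station's own long-term mean

estimated_cold_value = cold_station_baseline + (neighbor_this_year - neighbor_baseline)
print(round(estimated_cold_value, 1))   # -14.2: only the +0.8 change is transferred, not the +10.8 reading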
Of course, there’s the obvious UHI-amplification issue.
Until more information comes down the pike, I’m not rating this news story as a smoking gun–or even as a freezing gun. 🙂 At the moment, AGW skeptics would be ill-advised to hitch their wagons to Thermometergate. Like all True Believers, the Climatistas will vigorously attack the weakest skeptical arguments and give short shrift to the stronger ones. The Climategate emails are much better conversation-stoppers.
BTW, for folks who assert that because it’s all anomalies there will be no change from station changes, I’ve already run a benchmark showing that is not true:
http://chiefio.wordpress.com/2009/11/12/gistemp-witness-this-fully-armed-and-operational-anomaly-station/
It’s a crude benchmark based on the USHCN to USHCN.v2 transition where they left out data from 5/2007 to 11/2009 then put it back in. This is a ‘weak’ benchmark in that the data change only in 2% of the world surface ( IIRC that’s the number the Warmers toss about for the USA) but the anomaly report is for the Northern Hemisphere. So, in theory, the changes in the report are diluted by about a 2:50 ratio (so multiply the observed variances by 25 to get the actual impact) and yes, I need to do a better fuller benchmark to find the exact “price being asked for this service” (see the link 😉 but we do find changes in the 1/100 C place. On the order of 2/100 to 4/100. So as a ‘first cut’ we can say there is at least a 1/2 C to potentially a 1 C impact that “the anomaly will save us!” failed to catch…
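Spelling out the dilution arithmetic in that benchmark as a tiny Python sketch (the 2:50 ratio and the 0.02 to 0.04 C shifts are the comment’s own figures):

# Regional change diluted into a hemispheric anomaly report, per the comment above.
dilution = 50 / 2                     # the "2:50 ratio": multiply observed variances by 25

for observed_shift_c in (0.02, 0.04):
    implied_regional_c = observed_shift_c * dilution
    print(f"{observed_shift_c:.2f} C seen in the NH report -> ~{implied_regional_c:.1f} C over the changed region")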
So to all the bleating about “But it’s ANOMALIES so it doesn’t matter!” I say:
But it’s A BENCHMARK of the actual code run; and it says theory be damned, in reality the anomaly FAILS to be perfect. “Use the Code Luke!” 😉
BTW, I’m upgrading my hardware and software so I can’t do a lot of ‘new stuff’ right now as I’m doing admin / sysprog work making it all work again on a newer faster box… so I’ll be doing that next finer grained benchmark, but not for a while… Until then, this one will have to do.
Nick Stokes (05:57:40) : GIStemp say they use GHCN v2.mean. And this is already expressed in anomalies for each station, as downloaded from NOAA. In fact, it isn’t clear how you can even recover the temp in Celsius from that file.
Try again. GHCN.v2 is already in C (it was converted from F which is what is in the USHCN set). It is NOT anomalies. It is reported as actual temperatures (for the monthly mean).
http://chiefio.wordpress.com/2009/02/24/ghcn-global-historical-climate-network/
Quoting from that page quoting the NOAA page:
So, to understand GIStemp, we have to take a look at GHCN data. Here is the “README” file from the GIStemp download:
May 1997
This is a very brief description of GHCN version 2 temperature data and
metadata (inventory) files, providing details, such as formats, not
available in http://www.ncdc.noaa.gov/ghcn/ghcn.html.
New monthly data are added to GHCN a few days after the end of
the month. Please note that sometimes these new data are later
replaced with data with different values due to, for example,
occasional corrections to the transmitted data that countries
will send over the Global Telecommunications System.
All files except this one were compressed with a standard UNIX compression.
To uncompress the files, most operating systems will respond to:
“uncompress filename.Z”, after which, the file is larger and the .Z ending is
removed. Because the compressed files are binary, the file transfer
protocol may have to be set to binary prior to downloading (in ftp, type bin).
The three raw data files are:
v2.mean
v2.max
v2.min
The versions of these data sets that have data which we adjusted
to account for various non-climatic inhomogeneities are:
v2.mean.adj
v2.max.adj
v2.min.adj
Each line of the data file has:
station number which has three parts:
country code (3 digits)
nearest WMO station number (5 digits)
modifier (3 digits) (this is usually 000 if it is that WMO station)
Duplicate number:
one digit (0-9). The duplicate order is based on length of data.
Maximum and minimum temperature files have duplicate numbers but only one
time series (because there is only one way to calculate the mean monthly
maximum temperature). The duplicate numbers in max/min refer back to the
mean temperature duplicate time series created by (Max+Min)/2.
Year:
four digit year
Data:
12 monthly values each as a 5 digit integer. To convert to
degrees Celsius they must be divided by 10.
Missing monthly values are given as -9999.
If there are no data available for that station for a year, that year
is not included in the data base.
A short FORTRAN program that can read and subset GHCN v2 data has been
provided (read.data.f).
Station inventory and metadata:
All stations with data in max/min OR mean temperature data files are
listed in the inventory file: v2.inv. The available metadata
are too involved to describe here. To understand them, please refer
to: http://www.ncdc.noaa.gov/ghcn/ghcn.html and to the simple FORTRAN
program read.inv.f. The comments in this program describe the various
metadata fields. There are no flags in the inventory file to indicate
whether the available data are mean only or mean and max/min.
Country codes:
The file v2.country.codes lists the countries of the world and
GHCN’s numerical country code.
Data that have failed Quality Control:
We’ve run a Quality Control system on GHCN data and removed
data points that we determined are probably erroneous. However, there
are some cases where additional knowledge provides adequate justification
for classifying some of these data as valid. For example, if an isolated
station in 1880 was extremely cold in the month of March, we may have to
classify it as suspect. However, a researcher with an 1880 newspaper article
describing the first ever March snowfall in that area may use that special
information to reclassify the extremely cold data point as good. Therefore,
we are providing a file of the data points that our QC flagged as probably
bad. We do not recommend that they be used without special scrutiny. And
we ask that if you have corroborating evidence that any of the “bad” data
points should be reclassified as good, please send us that information
so we can make the appropriate changes in the GHCN data files. The
data points that failed QC are in the files v2.m*.failed.qc. Each line
in these files contains station number, duplicate number, year, month,
and the value (again the value needs to be divided by 10 to get
degrees C). A detailed description of GHCN’s Quality Control can be
found through http://www.ncdc.noaa.gov/ghcn/ghcn.html.
Chiefio, you better ask for an extra 0 on your oil company paycheck. You are doing fantastic. 🙂
@Dave F:
You know, I keep looking for where that paycheck is and never can find it. I think the other Smith down the street must be getting it… or maybe they got me confused with that Smith at NASA who was a shuttle pilot… At any rate, I’m still doing this all on my own between washing the dishes, cooking dinner, and being “servant to cats”… So if anyone knows where I can get one of those “paycheck” things, just let me know. I used to get a nice one for computer work about 5 years ago, but then they “fixed” the economy and I’ve been “a house husband” ever since. I try to remember what a “paycheck” looks like some times… but the memory is starting to go… 😉
/sarcoff>
Or put another way: So, I ought to add another 0? That makes it:
$000.00 or maybe $0,000.00 … somehow I don’t think that’s what you had in mind… 😉
(On a semi-serious note: I’d love to do this full time somewhere and with real equipment and facilities instead of things cobbled together from junk in the garage; but you work with what you have. My budget is $0, but at least that makes my ROI infinite! So I try to make up for the shortage of facilities and time with a focused effort and finely directed instinct about where to look. The largest leverage. It is a bit frustrating in that I know I’m moving forward at about 1/4 speed, but OTOH, I do think the progress has been worthwhile 😎
If nothing else, it’s keeping the technical skills up for the day when California either recovers, or collapses entirely and I move to Costa Rica and get a gig as a PC repair guy in the Expat Retirement Village 😉
And I do find it funny that folks accuse skeptics of being in the pay of Big Oil, when Big Oil is planning to make a killing off of getting coal utilities to provide them with the liquefied CO2 they need for “enhanced oil recovery” from old oil fields. Oil is 100% in the warmers’ camp… Coal, not so much. Electric utilities not at all. “Follow the money” says it’s not skeptics getting the money…
I remember the day, about 15 years(?) ago that I first read about liquid CO2 oil well stripping; and they said it worked really great, but had one giant problem: While it worked really really well, the cost of CO2 killed it. And shortly after that a sudden giant movement to “sequester CO2” popped up from nowhere. Now “big oil” is looking to be paid to take away the “pollutant” CO2. Hmmm….
At any rate, thanks for the vote of confidence! Just doing what I can. While I used to call this kind of thing “Kitchen Science”, I’m thinking maybe I need to relabel it. “Joe Sixpack” science has a nice ring to it: “Will program for beer!” makes a nice motto too 😉
E.M.Smith (20:54:13) :
BTW, for folks who assert that because it’s all anomalies there will be no change from station changes, I’ve already run a benchmark showing that is not true:
I think those who have been asserting that are wishing it was true. They have to continually fudge numbers to make it true.
Nick Stokes (13:27:14) :
Nick,
you have said, in effect, that the anomaly of the data that was used by GISS is the same as the anomaly of the data that they’ve dropped. To prove you are right you would have to have the dropped data.
Are you going to attempt to get the data by sending a FOI to GISS for the dropped data?
Also known as The fiddlers Three.
You’re saying that the data should be analyzed before reaching a conclusion ??
That’s not fair !!!
E.M.Smith (20:54:13) :
Awesome
pH (05:41:20) :
“Are you going to attempt to get the data by sending a FOI to GISS for the dropped data?”
No use doing that – GISS didn’t acquire it. Neither did GHCN. It sits, as original data does, with the world’s weather services.
But there’s something wrong with this whole story of “dropped data”. As I’ve said above, GHCN was a historical data project of the 90’s. A whole lot of past data was collected, under no time pressure, and with special project funding. That’s when the data from 7000 stations were assembled.
Then GHCN became the vehicle for the publication of regular month-to-month data. It couldn’t get, for various reasons, recurrent data from all those sites. It had to choose. For the most part, the “dropped” stations were in fact never regular contributors. They just provided a batch of past info at that time in the ’90’s.
Maybe GHCN needs more funding. Anyway, if GHCN can’t assemble modern data from those missing stations, I certainly can’t, and waving a FOI wand won’t do it.
I can’t usefully respond to your MWP issue – I can’t see the point of it. Yes, if there was local warming, any method, anomaly or other, would just have reported that. And there’s no way just measuring can detect whether it is permanent or not.
And philincali, the reaching of conclusions was done in this article. NOAA – Data In / Garbage Out. No data analysed before reaching that conclusion, which is indeed an odd one, with the actual complaint being that NOAA did not get the data in.
Nick Stokes (10:30:28) :
Well then Nick, get it from wherever it is and make the graph. You’ve been obfuscating about it, so it must be important to compare the two.
Also, NASA makes it a point to talk about the hottest year, hottest decade. So eliminating rural and mountain stations from the data means everything.
Nick Stokes (10:30:28) :
So the MWP was world wide?