
Climategate: CRU Was But the Tip of the Iceberg
Not surprisingly, the blatant corruption exposed at Britain’s premier climate institute was not contained within the nation’s borders. Just months after the Climategate scandal broke, a new study has uncovered compelling evidence that our government’s principal climate centers have also been manipulating worldwide temperature data in order to fraudulently advance the global warming political agenda.
Not only does the preliminary report [PDF] indict a broader network of conspirators, but it also challenges the very mechanism by which global temperatures are measured, published, and historically ranked.
Last Thursday, Certified Consulting Meteorologist Joseph D’Aleo and computer expert E. Michael Smith appeared together on KUSI TV [Video] to discuss the Climategate — American Style scandal they had discovered. This time out, the alleged perpetrators are the National Oceanic and Atmospheric Administration (NOAA) and the NASA Goddard Institute for Space Studies (GISS).
NOAA stands accused by the two researchers of strategically deleting cherry-picked, cooler-reporting weather observation stations from the temperature data it provides the world through its National Climatic Data Center (NCDC). D’Aleo explained to show host and Weather Channel founder John Coleman that while the Hadley Center in the U.K. has been the subject of recent scrutiny, “[w]e think NOAA is complicit, if not the real ground zero for the issue.”
And their primary accomplices are the scientists at GISS, who put the altered data through an even more biased regimen of alterations, including intentionally replacing the dropped NOAA readings with those of stations located in much warmer locales.
As you’ll soon see, the ultimate effects of these statistical transgressions on the reports which influence climate alarm and subsequently world energy policy are nothing short of staggering.
NOAA – Data In / Garbage Out
Although satellite temperature measurements have been available since 1978, most global temperature analyses still rely on data captured from land-based thermometers, scattered more or less about the planet. It is that data which NOAA receives and disseminates – although not before performing some sleight-of-hand on it.
Smith has done much of the heavy lifting involved in analyzing the NOAA/GISS data and software, and he chronicles his often frustrating experiences at his fascinating website. There, detail-seekers will find plenty to satisfy, divided into easily-navigated sections — some designed specifically for us “geeks,” but most readily approachable to readers of all technical strata.
Perhaps the key point discovered by Smith was that by 1990, NOAA had deleted from its datasets all but 1,500 of the 6,000 thermometers in service around the globe.
Now, 75% represents quite a drop in sampling population, particularly considering that these stations provide the readings used to compile both the Global Historical Climatology Network (GHCN) and United States Historical Climatology Network (USHCN) datasets. These are the same datasets, incidentally, which serve as primary sources of temperature data not only for climate researchers and universities worldwide, but also for the many international agencies using the data to create analytical temperature anomaly maps and charts.
Yet as disturbing as the number of dropped stations was, it is the nature of NOAA’s “selection bias” that Smith found infinitely more troubling.
It seems that stations placed in historically cooler, rural areas of higher latitude and elevation were scrapped from the data series in favor of more urban locales at lower latitudes and elevations. Consequently, post-1990 readings have been biased to the warm side not only by selective geographic location, but also by the anthropogenic heating influence of a phenomenon known as the Urban Heat Island Effect (UHI).
For example, Canada’s reporting stations dropped from 496 in 1989 to 44 in 1991, with the percentage of stations at lower elevations tripling while the numbers of those at higher elevations dropped to one. That’s right: As Smith wrote in his blog, they left “one thermometer for everything north of LAT 65.” And that one resides in a place called Eureka, which has been described as “The Garden Spot of the Arctic” due to its unusually moderate summers.
Smith also discovered that in California, only four stations remain – one in San Francisco and three in Southern L.A. near the beach – and he rightly observed that
It is certainly impossible to compare it with the past record that had thermometers in the snowy mountains. So we can have no idea if California is warming or cooling by looking at the USHCN data set or the GHCN data set.
That’s because the baseline temperatures to which current readings are compared were a true averaging of both warmer and cooler locations. And comparing these historic true averages to contemporary false averages – which have had the lower end of their numbers intentionally stripped out – will always yield a warming trend, even when temperatures have actually dropped.
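For readers who want to see that arithmetic laid bare, here is a minimal toy sketch in Python, with invented numbers and a deliberately tiny three-station “network.” It only illustrates the claim as stated above; whether the real processing chains truly compare raw means this way is argued over in the comments further down.

# Toy illustration (invented numbers) of comparing a full-network baseline
# mean against a present-day mean computed after the cooler sites are dropped.

baseline_stations = {"mountain": -2.0, "rural_valley": 8.0, "coastal_city": 15.0}
baseline_mean = sum(baseline_stations.values()) / len(baseline_stations)   # 7.0 C

# Suppose no site warmed at all, but only the warm coastal city still reports.
current_stations = {"coastal_city": 15.0}
current_mean = sum(current_stations.values()) / len(current_stations)      # 15.0 C

print(f"apparent anomaly: {current_mean - baseline_mean:+.1f} C")          # +8.0 C with zero real change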
Overall, U.S. online stations have dropped from a peak of 1,850 in 1963 to a low of 136 as of 2007. In his blog, Smith wittily observed that “the Thermometer Langoliers have eaten 9/10 of the thermometers in the USA[,] including all the cold ones in California.” But he was deadly serious after comparing current to previous versions of USHCN data and discovering that this “selection bias” creates a +0.6°C warming in U.S. temperature history.
And no wonder — imagine the accuracy of campaign tracking polls were Gallup to include only the replies of Democrats in their statistics. But it gets worse.
Prior to publication, NOAA effects a number of “adjustments” to the cherry-picked stations’ data, supposedly to eliminate flagrant outliers, adjust for time of day heat variance, and “homogenize” stations with their neighbors in order to compensate for discontinuities. This last one, they state, is accomplished by essentially adjusting each to jibe closely with the mean of its five closest “neighbors.” But given the plummeting number of stations, and the likely disregard for the latitude, elevation, or UHI of such neighbors, it’s no surprise that such “homogenizing” seems to always result in warmer readings.
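For illustration only, here is a rough Python sketch of the kind of neighbor-mean nudging described above. It is not NOAA’s actual homogenization code, and the blending weight and distance formula are made up for the example; it simply shows how “adjust each station toward the mean of its five closest neighbors” can be expressed.

import math

# Rough sketch of neighbor-mean "homogenization" as described in the text:
# blend each station's reading toward the mean of its five nearest neighbors.
# Illustration only; not NOAA's actual pairwise homogenization algorithm.

def distance_km(a, b):
    # crude equirectangular approximation; adequate for an illustration
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371.0 * math.hypot(x, y)

def homogenize(stations, weight=0.5, k=5):
    """stations: {name: ((lat, lon), temp_c)}. Blend each temperature toward
    the mean of its k nearest neighbors by the given weight."""
    adjusted = {}
    for name, (loc, temp) in stations.items():
        neighbors = sorted(
            (distance_km(loc, loc2), t2)
            for n2, (loc2, t2) in stations.items() if n2 != name
        )[:k]
        if not neighbors:
            adjusted[name] = temp
            continue
        neighbor_mean = sum(t for _, t in neighbors) / len(neighbors)
        adjusted[name] = (1 - weight) * temp + weight * neighbor_mean
    return adjusted

Note that if the surviving “neighbors” skew warm and low-lying, so does every adjusted value, which is the complaint being made here.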
The chart below is from Willis Eschenbach’s WUWT essay, “The smoking gun at Darwin Zero,” and it plots GHCN Raw versus homogeneity-adjusted temperature data at Darwin International Airport in Australia. The “adjustments” actually reversed the 20th-century trend from temperatures falling at 0.7°C per century to temperatures rising at 1.2°C per century. Eschenbach isolated a single station and found that it was adjusted to the positive by 6.0°C per century, and with no apparent reason, as all five stations at the airport more or less aligned for each period. His conclusion was that he had uncovered “indisputable evidence that the ‘homogenized’ data has been changed to fit someone’s preconceptions about whether the earth is warming.”
WUWT’s editor, Anthony Watts, has calculated the overall U.S. homogeneity bias to be 0.5°F to the positive, which alone accounts for almost one half of the 1.2°F warming over the last century. Add Smith’s selection bias to the mix and poof – actual warming completely disappears!
Yet believe it or not, the manipulation does not stop there.
GISS – Garbage In / Globaloney Out
The scientists at NASA’s GISS are widely considered to be the world’s leading researchers into atmospheric and climate changes. And their Surface Temperature (GISTemp) analysis system is undoubtedly the premier source for global surface temperature anomaly reports.
In creating its widely disseminated maps and charts, the program merges station readings collected from the Scientific Committee on Antarctic Research (SCAR) with GHCN and USHCN data from NOAA.
It then puts the merged data through a few “adjustments” of its own.
First, it further “homogenizes” stations, supposedly adjusting for UHI by (according to NASA) changing “the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations.” Of course, the reduced number of stations will have the same effect on GISS’s UHI correction as it did on NOAA’s discontinuity homogenization – the creation of artificial warming.
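As a sketch of what that trend-matching amounts to, here is a simple Python version. It assumes a plain straight-line trend; my understanding is that the real GISTemp step uses a more elaborate two-legged fit, so treat this only as the idea, not the implementation.

import numpy as np

# Replace a non-rural station's long-term (linear) trend with that of its rural
# neighbors while keeping its own short-term wiggles. Simplified illustration of
# the adjustment NASA describes; not the actual GISTemp code.

def replace_long_term_trend(urban, rural_mean, years):
    years = np.asarray(years, dtype=float)
    urban = np.asarray(urban, dtype=float)
    rural_mean = np.asarray(rural_mean, dtype=float)

    urban_slope, _ = np.polyfit(years, urban, 1)
    rural_slope, _ = np.polyfit(years, rural_mean, 1)

    centered = years - years.mean()
    detrended = urban - urban_slope * centered      # keep mean level and short-term variation
    return detrended + rural_slope * centered       # impose the rural long-term trend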
Furthermore, in his communications with me, Smith cited boatloads of problems and errors he found in the Fortran code written to accomplish this task, ranging from hot airport stations being mismarked as “rural” to the “correction” having the wrong sign (+/-) and therefore increasing when it meant to decrease or vice-versa.
And according to NASA, “If no such neighbors exist or the overlap of the rural combination and the non-rural record is less than 20 years, the station is completely dropped; if the rural records are shorter, part of the non-rural record is dropped.”
However, Smith points out that a dropped record may be “from a location that has existed for 100 years.” For instance, if an aging piece of equipment gets swapped out, thereby changing its identification number, the time horizon reinitializes to zero years. Even having a large enough temporal gap (e.g., during a world war) might cause the data to “just get tossed out.”
But the real chicanery begins in the next phase, wherein the planet is flattened and stretched onto an 8,000-box grid, into which the time series are converted to a series of anomalies (degree variances from the baseline). Now, you might wonder just how one manages to fill 8,000 boxes using 1,500 stations.
Here’s NASA’s solution:
For each grid box, the stations within that grid box and also any station within 1200km of the center of that box are combined using the reference station method.
Even on paper, the design flaws inherent in such a process should be glaringly obvious.
So it’s no surprise that Smith found many examples of problems surfacing in actual practice. He offered me Hawaii for starters. It seems that all of the Aloha State’s surviving stations reside in major airports. Nonetheless, this unrepresentative hot data is what’s used to “infill” the surrounding “empty” Grid Boxes up to 1200 km out to sea. So in effect, you have “jet airport tarmacs ‘standing in’ for temperature over water 1200 km closer to the North Pole.”
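To make the 1200 km reach concrete, here is a toy Python illustration using rough, made-up numbers: one Honolulu-style airport station and three hypothetical ocean grid-box centers due north of it.

import math

# Which "empty" grid boxes can a single airport station reach under a 1200 km
# rule? Coordinates and the anomaly value are invented for the illustration.

def distance_km(lat1, lon1, lat2, lon2):
    # haversine great-circle distance
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

stations = [("airport_tarmac", 21.3, -157.9, 1.8)]     # name, lat, lon, anomaly (C)
box_centers = [(26.0, -158.0), (31.0, -158.0), (36.0, -158.0)]

for blat, blon in box_centers:
    reachable = []
    for name, slat, slon, anom in stations:
        d = distance_km(blat, blon, slat, slon)
        if d <= 1200.0:
            reachable.append((name, round(d), anom))
    print((blat, blon), "filled from", reachable if reachable else "no station in range")

With these made-up numbers, the first two ocean boxes inherit the airport’s anomaly and only the third is out of range.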
An isolated problem? Hardly, reports Smith.
From KUSI’s Global Warming: The Other Side:
“There’s a wonderful baseline for Bolivia — a very high mountainous country — right up until 1990 when the data ends. And if you look on the [GISS] November 2009 anomaly map, you’ll see a very red rosy hot Bolivia [boxed in blue]. But how do you get a hot Bolivia when you haven’t measured the temperature for 20 years?”


Of course, you already know the answer: GISS simply fills in the missing numbers – originally cool, as Bolivia contains proportionately more land above 10,000 feet than any other country in the world – with hot ones available in neighboring stations on a beach in Peru or somewhere in the Amazon jungle.
Remember that single station north of 65° latitude which they located in a warm section of northern Canada? Joe D’Aleo explained its purpose: “To estimate temperatures in the Northwest Territory [boxed in green above], they either have to rely on that location or look further south.”
Pretty slick, huh?
And those are but a few examples. In fact, throughout the entire grid, cooler station data are dropped and “filled in” by temperatures extrapolated from warmer stations in a manner obviously designed to overestimate warming…
…And convince you that it’s your fault.
Government and Intergovernmental Agencies — Globaloney In / Green Gospel Out
Smith attributes up to 3°F (more in some places) of added “warming trend” to the combination of NOAA’s data adjustments and GIStemp processing.
That’s over twice last century’s reported warming.
And yet, not only are NOAA’s bogus data accepted as green gospel, but so are its equally bogus hysterical claims, like this one from the 2006 annual State of the Climate in 2005 [PDF]: “Globally averaged mean annual air temperature in 2005 slightly exceeded the previous record heat of 1998, making 2005 the warmest year on record.”
And as D’Aleo points out in the preliminary report, the recent NOAA proclamation that June 2009 was the second-warmest June in 130 years will go down in the history books, despite multiple satellite assessments ranking it as the 15th-coldest in 31 years.
Even when our own National Weather Service (NWS) makes its frequent announcements that a certain month or year was the hottest ever, or that five of the warmest years on record occurred last decade, they’re basing such hyperbole entirely on NOAA’s warm-biased data.
And how can anyone possibly read GISS chief James Hansen’s Sunday claim that 2009 was tied with 2007 for second-warmest year overall, and the Southern Hemisphere’s absolute warmest in 130 years of global instrumental temperature records, without laughing hysterically? It’s especially laughable when one considers that NOAA had just released a statement claiming that very same year (2009) to be tied with 2006 for the fifth-warmest year on record.
So how do alarmists reconcile one government center reporting 2009 as tied for second while another had it tied for fifth? If you’re WaPo’s Andrew Freedman, you simply chalk it up to “different data analysis methods” before adjudicating both NASA and NOAA innocent of any impropriety based solely on their pointless assertions that they didn’t do it.
Earth to Andrew: “Different data analysis methods”? Try replacing “analysis” with “manipulation,” and ye shall find enlightenment. More importantly, does the fact that such drastically divergent results can’t both be right, and that both “methods” are therefore immediately suspect, somehow elude you?
But by far the most significant impact of this data fraud is that it ultimately bubbles up to the pages of the climate alarmists’ bible: The United Nations Intergovernmental Panel on Climate Change Assessment Report.
And wrong data begets wrong reports, which – particularly in this case – beget dreadfully wrong policy.
It’s High Time We Investigated the Investigators
The final report will be made public shortly, and it will be available at the websites of both report-supporter Science and Public Policy Institute and Joe D’Aleo’s own ICECAP. As they’ve both been tremendously helpful over the past few days, I’ll trust in the opinions I’ve received from the report’s architects to sum up.
This from the meteorologist:
The biggest gaps and greatest uncertainties are in high latitude areas where the data centers say they ‘find’ the greatest warming (and thus which contribute the most to their global anomalies). Add to that no adjustment for urban growth and land use changes (even as the world’s population increased from 1.5 to 6.7 billion people) [in the NOAA data] and questionable methodology for computing the historical record that very often cools off the early record and you have surface based data sets so seriously flawed, they can no longer be trusted for climate trend or model forecast assessment or decision making by the administration, congress or the EPA.
Roger Pielke Sr. has suggested: “…that we move forward with an inclusive assessment of the surface temperature record of CRU, GISS and NCDC. We need to focus on the science issues. This necessarily should involve all research investigators who are working on this topic, with formal assessments chaired and paneled by mutually agreed to climate scientists who do not have a vested interest in the outcome of the evaluations.” I endorse that suggestion.
Certainly, all rational thinkers agree. Perhaps even the mainstream media, most of whom have hitherto mistakenly dismissed Climategate as a uniquely British problem, will now wake up and demand such an investigation.
And this from the computer expert:
That the bias exists is not denied. That the data are too sparse and with too many holes over time is not denied. Temperature series programs, like NASA GISS GIStemp, try, but fail, to fix the holes and the bias. What is claimed is that “the anomaly will fix it.” But it cannot. Comparison of a cold baseline set to a hot present set must create a biased anomaly. It is simply overwhelmed by the task of taking out that much bias. And yet there is more. A whole zoo of adjustments are made to the data. These might be valid in some cases, but the end result is to put in a warming trend of up to several degrees. We are supposed to panic over a 1/10 degree change of “anomaly” but accept 3 degrees of “adjustment” with no worries at all. To accept that GISTemp is “a perfect filter”. That is, simply, “nuts”. It was a good enough answer at Bastogne, and applies here too.
Smith, who had a family member attached to the 101st Airborne at the time, refers to the famous line from the 101st’s commander, U.S. Army General Anthony Clement McAuliffe, who replied to a German demand that he surrender during the December 1944 Battle of Bastogne, Belgium, with a single word: “Nuts.”
And that’s exactly what we’d be were we to surrender our freedoms, our economic growth, and even our simplest comforts to duplicitous zealots before checking and double-checking the work of the prophets predicting our doom should we refuse.
Marc Sheppard is environment editor of American Thinker and editor of the forthcoming Environment Thinker.

Baa Humbug (18:31:57) : So let’s get this straight. Is the following hypothetical example correct?
Yes, substantially. Though to be even more correct, you would have your three stations that, in the 1950-1980 baseline period, report:
3 stations measure 11deg 12 deg 13deg averaging 12deg.
And you would “adjust” them such that the older period was 10.5 11.5 12 degrees, average of 11.3 (they do have this habit of always “correcting” the past to be colder…)
Then we drop the first 2 “cooler” ones leaving us with the third 13deg. Which of itself is 1deg above the average.
Though we would do a broken UHI “correction” on it, call it less than Pisa, so… about 13.5 degrees.
So now you get an “anomaly” of 13.5 – 11.3 = 2.2 (Oh NO!! SKY is burning up as it falls!!)
You gotta hand it to them, they found a way to make a station read warming against itself just beautifully. (except they got caught)
Basically, IMHO, yes. For each of the changes I’ve shown in this example, I can point at real cases where a very similar thing is done in the NOAA / NCDC data or in GIStemp. Pisa gets a ‘wrong way’ UHI of 1.4 C in the past. USHCN is “readjusted” to make USHCNv2 and blink charts show added warming trends. Thermometers in real rural areas are removed and those in mountains are substantially extinct; but low altitude airports are multiplying like rabbits.
Clive (22:19:53) : Perhaps someone noticed this confusion and already reported it:
For example, Canada’s reporting stations dropped from 496 in 1989 to 44 in 1991, with the percentage of stations at lower elevations tripling while the numbers of those at higher elevations dropped to one. That’s right: As Smith wrote in his blog, they left “one thermometer for everything north of LAT 65.”
Am I missing something here? As stated, there is a “confusion” of elevation and latitude in this statement. It says the higher elevation stations “dropped to one” and then goes on to say there is “one .. north of 65°” Which is correct?
I think it should say “higher latitudes dropped to one.” Yes?
I think he is mixing two things, both of which happen. The average Altitude drops as the Canadian Rockies are erased AND the northern latitudes are erased leaving ONE above latitude 65 N at Eureka…
The “by altitude” report:
http://chiefio.wordpress.com/2009/11/13/ghcn-oh-canada-rockies-we-dont-need-no-rockies/
The “by latitude”:
http://chiefio.wordpress.com/2009/10/27/ghcn-up-north-blame-canada-comrade/
martyn (03:03:52) : Joseph D’Aleo’s preliminary report says:
* China had 100 stations in 1950, over 400 in 1960 then only 35 by 1990.
I find that hard to believe, that’s about 1 station per province in China, or similar to putting 1 station in the middle of England to cover the whole of the United Kingdom.
Similar for Canada:
* In Canada the number of stations dropped from 600 to 35 in 2009.
Am I understanding this properly?
The stations, in most cases, still exist and are still recording data (for the local country BOMs) but when the data get to NOAA / NCDC to create the GHCN data set, they drop about 90% of it on the floor. Especially the cold ones 😉
see:
http://chiefio.wordpress.com/2009/10/28/ghcn-china-the-dragon-ate-my-thermometers/
for the China numbers. I get 34 in 2007, rising to 73 in 2008, but did not have 2009 numbers when I did this table.
Other countries here:
http://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/
But yes, the number of thermometers in the “composite instrument” we use to find the world temperature wanders all over by about a factor of 10 between the baseline and “now” in most countries.
Oh, and the notion that each station has individual anomalies calculated so it doesn’t matter what stations you use is just broken on the face of it. Look again at the anomaly maps. Notice that “Bolivia Exists”. Since it has had no data since 1990, it can’t very well have an anomaly calculated from that non-existent data… There is a “sort of an anomaly” used to calculate the homogenizing, and another “sort of an anomaly” (really an average of about a dozen stations) used to calculate UHI, but when it comes time to “box and grid”, since many whole boxes contain NO stations at all, and many others have a set of ‘a few’, there are some peculiar shenanigans with ‘zonal means’ calculated using large buckets of stations, and offsets calculated during one phase of the PDO are used to figure what the offset might be during the present (1/2 the time different) phase of the PDO. Not like that would make any difference….
http://chiefio.wordpress.com/2009/03/02/picking-cherries-in-sweden/
shows that the GIStemp “baseline” is set in a nice cold dip in the middle of one phase of a long duration ripple (that looks to me to be PDO or related?) so getting a ‘cold anomaly’ relative to it would require a return to the Little Ice Age bottom…
While we’re making our ‘dream list’, I’d like to see the anomaly maps all made with a baseline of 1930-1960 … since folks claim it makes no difference…
Just one question: why the hell do they include urban stations? The Peter video shows their data is garbage, and there is adequate rural data; all urban data should be thrown out, or two separate data sets should be used, urban and rural.
kadaka (23:46:08) :
“One megawatt of electricity can provide power to about 1,000 homes.”
1 megawatt / 1000 homes = 1 kilowatt per home, 1000 watts.
As you point out the average house uses about 3 to 5 KW on average. With peaks to about 50% of its service rating. Which is 220 volts X 200 amps for a modern house or 44 KW and half that is 22 KW.
But it is worse than you thought. On average the windgen provides only 1/3 of its name plate rating on average. But sometimes you get the full MW. And sometimes (maybe days) you get zero.
4500 inconvenient stations
E.M.Smith (03:31:14) :
“While we’re making our ‘dream list’, I’d like to see the anomaly maps all made with a baseline of 1930-1960 … since folks claim it makes no difference…”
Your dream come true.
“Coho, the fourth power stuff is a distraction. It’s the Kelvin temperature, and the proportional change from temperature anomalies is small. BB radiation at 290K (17C) is 401.03 W/m2. At 291, it increases by 5.56. At 292, by a further 5.62. From 292 to 293 it increases by 5.68. It’s so close to linear it doesn’t matter.”
No, it’s not linear as Lubos Motl has shown; the GMST is 288K [15C]; the 4th power of 288 x the SB constant = 390.08 W/m2 which fits in with the K&T diagram; if you regionalise that by 4 climate zones at 313K [40C], 293K [20C], 283K [10C] and 263K [-10C], even though that still averages 288K, if you take the average of each 4th power you will get 399.26 W/m2, a difference of 9 W/m2. If you use many individual stations the variation is staggering; apart from making the concept of a GMST look ridiculous it also makes any reduction of utilised stations suspect.
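For anyone who wants to check that arithmetic, here is a short Python version. It assumes a blackbody with emissivity 1, uses σ = 5.67e-8, and reads the 20°C zone as 293 K:

# Verify the fourth-power averaging numbers quoted above.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_flux(T):
    return SIGMA * T ** 4

zones = [313.0, 293.0, 283.0, 263.0]                 # 40C, 20C, 10C, -10C
print(sum(zones) / len(zones))                       # 288.0 K, i.e. a 15C mean
print(round(bb_flux(288.0), 2))                      # 390.08 W/m2, flux at the mean temperature
print(round(sum(bb_flux(T) for T in zones) / 4, 2))  # 399.26 W/m2, mean of the zonal fluxes

The roughly 9 W/m2 gap between the last two numbers is the nonlinearity being argued about.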
Another theory: Since 1999, really since Y2K, most thermometers have been made in China. While, in fact, the actual temperatures have been declining for the past ten years, the world’s thermometers say otherwise. Reason: the spec sheet for the calibration and printing equipment was written by none other than – Richard Somerville, a distinguished professor emeritus and research professor at Scripps Institution of Oceanography, UC San Diego.
Well it is a theory and people will invest trillions on a good theory, right? Now we just have to prove it, or do we? Do you think MP’s and MC’s could hide behind this? An unproven theory? They are dupes. They were duped. And they do need a simple, dupy reason to change their stripes, don’t they?
3×2 (01:44:49) :
What you suggest might be true if we were comparing an individual station with its own 1951-80 mean OR if we had exactly the same group of stations (unchanged) as we had in 1951-1980. But neither is the case.
I believe the first is the case, and that is how they are calculated. Do you have information to the contrary?
Alexej Buergin (02:16:09) :
A “cool site” is not one where T is small or even negative (in °C). It is one where dT/dt is very small or even negative, a rural station e.g.
That’s an unusual definition. But what the article says is just:
“It seems that stations placed in historically cooler, rural areas of higher latitude and elevation were scrapped from the data series in favor of more urban locales at lower latitudes and elevations.”
cohenite (04:35:18) :
It’s true that the big changes between tropical and arctic have a significant fourth power nonlinear effect. It doesn’t make a global average temperature ridiculous – it’s the logical measure of heat content. It just means that if you want to calculate global radiation, you have to average T^4, which is easy enough.
But my point is that it has negligible influence on climate changes within the range of anomalies – or changes that you’d get by replacing one station with one from the same climate zone.
Nick Stokes (04:32:21) :
E.M.Smith (03:31:14) :
“While we’re making our ‘dream list’, I’d like to see the anomaly maps all made with a baseline of 1930-1960 … since folks claim it makes no difference…”
Your dream come true.
Not quite. One of a few thousand…
We need to hold our elected officials accountable. This international fraud needs to be investigated and brought into the light of day. Climate change is the biggest hoax and conspiracy in the history of the world.
Here is the link to the part of the GIStemp code that does the final anomaly map creation:
http://chiefio.wordpress.com/2009/03/07/gistemp-step3-the-process/
It’s been a while since I read this chunk last, but as I read it now, it looks like it is doing things not ‘station to station’ but ‘station to averages’ for computing anomalies. These are just comments from the code, but they give you the flavor of it.
from:
http://chiefio.wordpress.com/2009/03/07/gistemp-step345_tosbbxgrid/
C**** The spatial averaging is done as follows:
C**** Stations within RCRIT km of the grid point P contribute
C**** to the mean at P with weight 1.- d/1200, (d = distance
C**** between station and grid point in km). To remove the station
C**** bias, station data are shifted before combining them with the
C**** current mean. The shift is such that the means over the time
C**** period they have in common remains unchanged (individually
C**** for each month). If that common period is less than 20(NCRIT)
C**** years, the station is disregarded. To decrease that chance,
C**** stations are combined successively in order of the length of
C**** their time record. A final shift then reverses the mean shift
C**** OR (to get anomalies) causes the 1951-1980 mean to become
C**** zero for each month.
C****
C**** Regional means are computed similarly except that the weight
C**** of a grid box with valid data is set to its area.
C**** Separate programs were written to combine regional data in
C**** the same way, but using the weights saved on unit 11.
Now that looks to me like a bunch of stations get shifted all over before they are combined and turned into a “grid box”… Not exactly comparing one station to itself in the past.
The way “anomalies” are used in GIStemp is not quite what you would ever expect…
Lots of weighting and shifting and combining and …
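For the code-minded, here is a simplified Python paraphrase of what those STEP3 comments appear to describe: weight 1 - d/1200, shift each new record so the overlap-period mean is preserved, skip records with under 20 years of overlap, longest records first. It is one reading of the comments quoted above, not a port of the Fortran.

import numpy as np

# Simplified paraphrase of the station-combining described in the comments
# above: weights fall off as 1 - d/1200; each new station is shifted so the
# mean over its overlap with the running combination is unchanged; stations
# with under 20 years of overlap are skipped; longest records are taken first.
# Illustration of the described method, not a port of the GIStemp Fortran.

def combine_stations(series_list, distances_km, min_overlap=20):
    """series_list: equal-length 1-D arrays of annual values, NaN = missing."""
    order = np.argsort([-np.sum(~np.isnan(np.asarray(s, float))) for s in series_list])
    combined, weights = None, None
    for i in order:
        s = np.asarray(series_list[i], dtype=float)
        w = max(0.0, 1.0 - distances_km[i] / 1200.0)
        if combined is None:
            combined = s.copy()
            weights = np.where(np.isnan(s), 0.0, w)
            continue
        overlap = ~np.isnan(s) & ~np.isnan(combined)
        if overlap.sum() < min_overlap:
            continue                                             # "the station is disregarded"
        s = s + (combined[overlap].mean() - s[overlap].mean())   # remove the "station bias"
        new_w = weights + np.where(np.isnan(s), 0.0, w)
        total = np.where(np.isnan(combined), 0.0, weights * combined) \
              + np.where(np.isnan(s), 0.0, w * s)
        combined = np.where(new_w > 0, total / np.where(new_w > 0, new_w, 1.0), np.nan)
        weights = new_w
    return combined   # a final shift to zero the 1951-1980 mean would follow, per the comments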
Oh, and the temperatures have had a bit of musical chairs too:
trimSBBX.v2.f
The Header:
C**** This program trims SBBX files by replacing a totally missing
C**** time series by its first element. The number of elements of the
C**** next time series is added at the BEGINNING of the previous record.
Oh, and it stirs in some “zonal means” too:
The FORTRAN zonav.f
C*********************************************************************
C *** PROGRAM READS BXdata
C**** This program combines the given gridded data (anomalies)
C**** to produce AVERAGES over various LATITUDE BELTS.
…
C**** DATA(1–>MONM) is a full time series, starting at January
C**** of year IYRBEG and ending at December of year IYREND.
C**** WT is proportional to the area containing valid data.
C**** AR(1) refers to Jan of year IYRBG0 which may
C**** be less than IYRBEG, MONM0 is the length of an input time
C**** series, and WTR(M) is the area of the part of the region
C**** that contained valid data for month M.
Note that “WT” is a weighting function and that data are made area proportional. There is some magic sauce to fill in missing bits…
C**** JBM zonal means are computed first, combining successively
C**** the appropriate regional data (AR with weight WTR). To remove
C**** the regional bias, the data of a new region are shifted
C**** so that the mean over the common period remains unchanged
C**** after its addition. If that common period is less than
C**** 20(NCRIT) years, the region is disregarded. To avoid that
C**** case as much as possible, regions are combined in order of
C**** the length of their time record. A final shift causes the
C**** 1951-1980 mean to become zero (for each month).
C****
C**** All other means (incl. hemispheric and global means) are
C**** computed from these zonal means using the same technique.
C**** NOTE: the weight of a zone may be smaller than its area
C**** since data-less parts are disregarded; this also causes the
C**** global mean to be different from the mean of the hemispheric
C**** means.
So all those means against which all these anomalies are taken can themselves wander around a lot… Note particularly that “data-less parts are disregarded” in making the zonal means.
Remember that next time someone says that dropping out, oh, high cold mountains will not change the hemispheric or global mean… and thus it’s anomaly.
The devil, literally, is in the details…
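As a concrete reading of that zonal-mean description, here is a minimal Python sketch with area weights and data-less boxes simply dropping out of the average. It is an interpretation of the zonav.f comments quoted above, not the actual code.

import numpy as np

# Area-weighted mean over one latitude belt; boxes with no valid data are
# disregarded, so the belt mean follows whichever boxes happen to have stations.

def zonal_mean(box_anomalies, box_areas):
    a = np.asarray(box_anomalies, dtype=float)   # NaN where a box has no data
    w = np.asarray(box_areas, dtype=float)
    valid = ~np.isnan(a)
    if not valid.any():
        return float("nan")                      # the whole belt is data-less
    return float(np.sum(w[valid] * a[valid]) / np.sum(w[valid]))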
E.M.Smith (05:37:25) :
GIStemp says it uses GHCN v2.mean. And this is already expressed in anomalies for each station, as downloaded from NOAA. In fact, it isn’t clear how you can even recover the temp in Celsius from that file.
“M. Simon (04:25:40) :
kadaka (23:46:08) :
“One megawatt of electricity can provide power to about 1,000 homes.”
1 megawatt / 1000 homes = 1 kilowatt per home, 1000 watts.
As you point out the average house uses about 3 to 5 KW on average. With peaks to about 50% of its service rating. Which is 220 volts X 200 amps for a modern house or 44 KW and half that is 22 KW.
But it is worse than you thought. On average the windgen provides only 1/3 of its name plate rating on average. But sometimes you get the full MW. And sometimes (maybe days) you get zero.”
German windpower has managed to increase its output from 17% of the nominal performance to about 21%. We have to be prepared for windy days when they suddenly deliver a 100% surge. In order to stabilise our networks, we have to have enough gas-powered plants with fast reaction times. Enough means: Total capacity “running reserve” on standby must be as big as wind+solar together. So for each installed GW renewables we need to install 1 GW “running reserve”.
Keep in mind that wind power output rises with the third power of wind velocity. This makes for violent spikes. It’s a tough job for the transmission lines and the standby gas plants. And expensive. We pay about 30 US cents or 20 Eurocents per kWh. I don’t wish that on you. The leftists here say it’s still too cheap.
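A small sketch of that cube law, with generic turbine numbers (an assumed 80 m rotor and a power coefficient of 0.40; both are illustrative, not any particular machine):

# Power available to a turbine scales with the cube of wind speed:
# P = 0.5 * rho * A * Cp * v^3. Generic numbers, for illustration only.

RHO = 1.225                    # air density, kg/m^3
AREA = 3.14159 * 40.0 ** 2     # swept area of an ~80 m rotor, m^2
CP = 0.40                      # assumed power coefficient (Betz limit is ~0.59)

def wind_power_mw(v_mps):
    return 0.5 * RHO * AREA * CP * v_mps ** 3 / 1e6

for v in (4, 6, 8, 10, 12):    # m/s
    print(f"{v:>2} m/s -> {wind_power_mw(v):.2f} MW")

Tripling the wind speed from 4 to 12 m/s multiplies the output by 27, which is why the spikes are so violent.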
Carlos RP (05:26:33) : I understand this thread has concentrated on the effects of the massive filtering of data on published temperatures. The thread has been exhaustive in demonstrating the warming bias it creates.
My question: Does GISS at any point declare a deletion officially? Does it give a reason for the dropping of a thermometer?
Well, no. But the details are interesting…
If you just look at the station inventory, there are some 6000 ish stations. All of them are there, being used, see?
This comes from NOAA / NCDC (not GISS) as the GHCN data set (and as the USHCN for US sites). Now NOAA / NCDC are the ones that drop cold thermometers on the floor, but only in the recent part of the record, since about 1989. Again, not GISS.
So the station is “being used”, just only in the past… so it isn’t really ‘dropped’, just sort of truncated is all…
I suspect a certain amount of “mutt and jeff” going on here, but it is not something that can be proven without email logs and meeting notes.
So Hansen over at NASA / GISS can say he did nothing to drop stations… And Peterson? over at NOAA / NCDC can say he does nothing to make anomaly maps…
Also, the “drop shorter than 20 years” is built into GIStemp, so GISS does drop a load of stations, but does not list them by name. After all, each run will be a different set as some 19 year 11 month station “comes of age”.
One is left to wonder what the impact of massive equipment upgrades and changes of airports has done to chop up the record into disposable bits, but I’ve not gotten to that part of the investigation… yet… But if a move from a Stevenson Screen out by the field to an automated pole on a rope next to the hangar causes a change of ‘minor station number’ as it ought, then that new station will not contribute to the GIStemp work product for 20 years… Some other station will be used for ‘in fill’ instead…
Just a thought…
So AFTER NOAA / NCDC have cut out 4500 stations, THEN NASA / GISS via GIStemp chuck out any station records shorter than 20 years. What’s left? Well, not much. And as noted above, the code has to go to bizarre lengths to fill in grid boxes with whatever it can dream up…
Nick Stokes (20:45:09): Anomalies don’t work that way.
They only cause a full degree step change in measurement.
http://i27.tinypic.com/14b6tqo.jpg
How’s this?
Dear Representative,
The current staff at NOAA has distorted its global temperature report to create the illusion of warming:
1. By dropping the majority of cooler rural data sampling sites until the cooler rural temperatures become outliers.
2. Then, the few remaining cooler outliers were statistically reduced or “smoothed” to create a warming bias.
3. Furthermore, NOAA used the reduced number of cherry-picked warm sites to fill in cooler parts of the planet instead of real data.
Do not pass any legislation based on this flagrant manipulation of data. The people at NOAA responsible for this need to be removed from their posts.
http://wattsupwiththat.com/2010/01/22/american-thinker-on-cru-giss-and-climategate/
How to email your representative:
http://www.yourcongressyourhealth.org/?gclid=CIaA2q3mup8CFag65QodpzQnzg
Baa Humbug (21:52:03) :
I’d rather get my info from the horse’s mouth.
J Hansen said the following in an Oz TV interview with T Jones.
TONY JONES: Okay, can you tell us how the Goddard Institute takes and adjusts these global temperatures because sceptics claim that urban heat centres make a huge difference; that they distort global temperatures and they make it appear hotter that it really is.
So do you adjust, in your figures, for the urban heat zone effects?
JAMES HANSEN: We get data from three different sources and we now, in order to avoid criticisms from contrarians, no longer make an adjustment.
We exclude urban locations, use rural locations to establish a trend.
The USA airport percentage today in GHCN is just shy of 92%.
Once the majority of weather stations have been moved from cooler to warmer locations, that is it; they will all be on the same playing field. When might that have occurred? Around the mid-1990s, perhaps. Is the lack of warming since 1998 due mainly to weather station placement at airports? Perhaps since 1998 there is no UHI effect left in the data sets.
“r (06:24:15) : […]”
My suggestion: Give it structure. Give headlines for paragraphs. Start with
INTRODUCTION
End with
CONCLUSIONS.
Makes for better “quick-reading”.
DirkH (06:02:08) :
“M. Simon (04:25:40) :
kadaka (23:46:08) :
Re the German wind power generation, a study by engineers has suggested that running the backup power stations at less than optimum actually costs as much as wind power saves, due to inefficiencies in the backup stations.
http://www.clepair.net/windsecret.html
Nick Stokes:
“There’s very little difference.”
That is simply not true. After 1990 there is definitely a difference.
“These devilish experts would have to anticipate future warming trends in individual stations. Harder than you think.”
Not so hard. Just pick the ones where population growth is the strongest.
In any case, maybe one of the reasons that we have had 12 years of no warming is because they can’t find any more stations to throw out.