
Climategate: CRU Was But the Tip of the Iceberg
Not surprisingly, the blatant corruption exposed at Britain’s premier climate institute was not contained within that nation’s borders. Just months after the Climategate scandal broke, a new study has uncovered compelling evidence that our own government’s principal climate centers have also been manipulating worldwide temperature data in order to fraudulently advance the global warming political agenda.
Not only does the preliminary report [PDF] indict a broader network of conspirators, but it also challenges the very mechanism by which global temperatures are measured, published, and historically ranked.
Last Thursday, Certified Consulting Meteorologist Joseph D’Aleo and computer expert E. Michael Smith appeared together on KUSI TV [Video] to discuss the Climategate — American Style scandal they had discovered. This time out, the alleged perpetrators are the National Oceanic and Atmospheric Administration (NOAA) and the NASA Goddard Institute for Space Studies (GISS).
NOAA stands accused by the two researchers of strategically deleting cherry-picked, cooler-reporting weather observation stations from the temperature data it provides the world through its National Climatic Data Center (NCDC). D’Aleo explained to show host and Weather Channel founder John Coleman that while the Hadley Center in the U.K. has been the subject of recent scrutiny, “[w]e think NOAA is complicit, if not the real ground zero for the issue.”
And their primary accomplices are the scientists at GISS, who put the altered data through an even more biased regimen of alterations, including intentionally replacing the dropped NOAA readings with those of stations located in much warmer locales.
As you’ll soon see, the ultimate effects of these statistical transgressions on the reports which influence climate alarm and subsequently world energy policy are nothing short of staggering.
NOAA – Data In / Garbage Out
Although satellite temperature measurements have been available since 1978, most global temperature analyses still rely on data captured from land-based thermometers, scattered more or less about the planet. It is that data which NOAA receives and disseminates – although not before performing some sleight-of-hand on it.
Smith has done much of the heavy lifting involved in analyzing the NOAA/GISS data and software, and he chronicles his often frustrating experiences at his fascinating website. There, detail-seekers will find plenty to satisfy, divided into easily-navigated sections — some designed specifically for us “geeks,” but most readily approachable to readers of all technical strata.
Perhaps the key point discovered by Smith was that by 1990, NOAA had deleted from its datasets all but 1,500 of the 6,000 thermometers in service around the globe.
Now, 75% represents quite a drop in sampling population, particularly considering that these stations provide the readings used to compile both the Global Historical Climatology Network (GHCN) and United States Historical Climatology Network (USHCN) datasets. These are the same datasets, incidentally, which serve as primary sources of temperature data not only for climate researchers and universities worldwide, but also for the many international agencies using the data to create analytical temperature anomaly maps and charts.
Yet as disturbing as the number of dropped stations was, it is the nature of NOAA’s “selection bias” that Smith found infinitely more troubling.
It seems that stations placed in historically cooler, rural areas of higher latitude and elevation were scrapped from the data series in favor of more urban locales at lower latitudes and elevations. Consequently, post-1990 readings have been biased to the warm side not only by selective geographic location, but also by the anthropogenic heating influence of a phenomenon known as the Urban Heat Island Effect (UHI).
For example, Canada’s reporting stations dropped from 496 in 1989 to 44 in 1991, with the percentage of stations at lower elevations tripling while the numbers of those at higher elevations dropped to one. That’s right: As Smith wrote in his blog, they left “one thermometer for everything north of LAT 65.” And that one resides in a place called Eureka, which has been described as “The Garden Spot of the Arctic” due to its unusually moderate summers.
Smith also discovered that in California, only four stations remain – one in San Francisco and three in Southern L.A. near the beach – and he rightly observed that
It is certainly impossible to compare it with the past record that had thermometers in the snowy mountains. So we can have no idea if California is warming or cooling by looking at the USHCN data set or the GHCN data set.
That’s because the baseline temperatures to which current readings are compared were a true averaging of both warmer and cooler locations. And comparing these historic true averages to contemporary false averages – which have had the lower end of their numbers intentionally stripped out – will always yield a warming trend, even when temperatures have actually dropped.
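To make the arithmetic concrete, here is a toy sketch of my own (invented numbers; it assumes absolute station temperatures are averaged directly, which is the practice being criticized, and whether the official anomaly method escapes this problem is argued at length in the comments below):

# Toy illustration: a baseline built from a mix of cool and warm stations,
# versus a "current" average computed after the cool station is dropped.
baseline_stations = [2.0, 10.0, 18.0]   # deg C: mountain, rural valley, coastal city
current_stations = [9.5, 17.5]          # only the two warmer sites survive; each is 0.5 C cooler than before

baseline_mean = sum(baseline_stations) / len(baseline_stations)   # 10.0 C
current_mean = sum(current_stations) / len(current_stations)      # 13.5 C

print(current_mean - baseline_mean)   # prints 3.5: a "warming" of 3.5 C although every surviving site cooled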
Overall, U.S. online stations have dropped from a peak of 1,850 in 1963 to a low of 136 as of 2007. In his blog, Smith wittily observed that “the Thermometer Langoliers have eaten 9/10 of the thermometers in the USA[,] including all the cold ones in California.” But he was deadly serious after comparing current to previous versions of USHCN data and discovering that this “selection bias” creates a +0.6°C warming in U.S. temperature history.
And no wonder — imagine the accuracy of campaign tracking polls were Gallup to include only the replies of Democrats in their statistics. But it gets worse.
Prior to publication, NOAA effects a number of “adjustments” to the cherry-picked stations’ data, supposedly to eliminate flagrant outliers, adjust for time-of-day heat variance, and “homogenize” stations with their neighbors in order to compensate for discontinuities. This last one, they state, is accomplished by essentially adjusting each to jibe closely with the mean of its five closest “neighbors.” But given the plummeting number of stations, and the likely disregard for the latitude, elevation, or UHI of such neighbors, it’s no surprise that such “homogenizing” seems to always result in warmer readings.
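A minimal sketch of what “adjust each station toward the mean of its five closest neighbors” could look like (my own illustration only; NOAA’s actual pairwise homogenization algorithm is far more involved, and the weight used here is invented):

def homogenize(readings, neighbors, weight=0.5):
    # readings: {station_id: temperature}
    # neighbors: {station_id: list of its five nearest station_ids}
    adjusted = {}
    for sid, temp in readings.items():
        neighbor_mean = sum(readings[n] for n in neighbors[sid]) / len(neighbors[sid])
        adjusted[sid] = temp + weight * (neighbor_mean - temp)   # nudge toward the neighbor mean
    return adjusted

# If a cool rural station's five nearest surviving "neighbors" are all warmer
# urban or airport sites, this nudge can only pull its record upward.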
The chart below is from Willis Eschenbach’s WUWT essay, “The smoking gun at Darwin Zero,” and it plots GHCN Raw versus homogeneity-adjusted temperature data at Darwin International Airport in Australia. The “adjustments” actually reversed the 20th-century trend from temperatures falling at 0.7°C per century to temperatures rising at 1.2°C per century. Eschenbach isolated a single station and found that it was adjusted to the positive by 6.0°C per century, and with no apparent reason, as all five stations at the airport more or less aligned for each period. His conclusion was that he had uncovered “indisputable evidence that the ‘homogenized’ data has been changed to fit someone’s preconceptions about whether the earth is warming.”
WUWT’s editor, Anthony Watts, has calculated the overall U.S. homogeneity bias to be 0.5°F to the positive, which alone accounts for almost one half of the 1.2°F warming over the last century. Add Smith’s selection bias to the mix and poof – actual warming completely disappears!
Yet believe it or not, the manipulation does not stop there.
GISS – Garbage In / Globaloney Out
The scientists at NASA’s GISS are widely considered to be the world’s leading researchers into atmospheric and climate changes. And their Surface Temperature (GISTemp) analysis system is undoubtedly the premier source for global surface temperature anomaly reports.
In creating its widely disseminated maps and charts, the program merges station readings collected from the Scientific Committee on Antarctic Research (SCAR) with GHCN and USHCN data from NOAA.
It then puts the merged data through a few “adjustments” of its own.
First, it further “homogenizes” stations, supposedly adjusting for UHI by (according to NASA) changing “the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations.” Of course, the reduced number of stations will have the same effect on GISS’s UHI correction as it did on NOAA’s discontinuity homogenization – the creation of artificial warming.
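As a rough sketch of the adjustment NASA describes, matching the long-term trend of a non-rural station to its rural neighbors while keeping the short-term wiggles, consider the following. This is my own simplification using straight-line trends; the actual GISS code fits a two-legged trend to a rural composite.

import numpy as np

def uhi_adjust(years, urban, rural_composite):
    # years, urban, rural_composite: 1-D numpy arrays of equal length.
    urban_slope = np.polyfit(years, urban, 1)[0]
    rural_slope = np.polyfit(years, rural_composite, 1)[0]
    # Remove the excess long-term trend from the urban record,
    # leaving its year-to-year (short-term) variation intact.
    return urban - (urban_slope - rural_slope) * (years - years.mean())

# If the "rural" composite is itself contaminated (e.g., airports flagged as rural,
# or too few neighbors), the excess trend removed here is too small, or even wrong-signed.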
Furthermore, in his communications with me, Smith cited boatloads of problems and errors he found in the Fortran code written to accomplish this task, ranging from hot airport stations being mismarked as “rural” to the “correction” having the wrong sign (+/-) and therefore increasing when it meant to decrease or vice-versa.
And according to NASA, “If no such neighbors exist or the overlap of the rural combination and the non-rural record is less than 20 years, the station is completely dropped; if the rural records are shorter, part of the non-rural record is dropped.”
However, Smith points out that a dropped record may be “from a location that has existed for 100 years.” For instance, if an aging piece of equipment gets swapped out, thereby changing its identification number, the time horizon reinitializes to zero years. Even having a large enough temporal gap (e.g., during a world war) might cause the data to “just get tossed out.”
But the real chicanery begins in the next phase, wherein the planet is flattened and stretched onto an 8,000-box grid, and the station time series are converted to series of anomalies (degree variances from the baseline). Now, you might wonder just how one manages to fill 8,000 boxes using 1,500 stations.
Here’s NASA’s solution:
For each grid box, the stations within that grid box and also any station within 1200km of the center of that box are combined using the reference station method.
Even on paper, the design flaws inherent in such a process should be glaringly obvious.
So it’s no surprise that Smith found many examples of problems surfacing in actual practice. He offered me Hawaii for starters. It seems that all of the Aloha State’s surviving stations reside at major airports. Nonetheless, this unrepresentative hot data is what’s used to “infill” the surrounding “empty” grid boxes up to 1200 km out to sea. So in effect, you have “jet airport tarmacs ‘standing in’ for temperature over water 1200 km closer to the North Pole.”
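For the curious, here is a bare-bones sketch of what infilling a grid box from stations up to 1200 km away amounts to. It is an illustration only: the real reference station method combines overlapping records rather than single anomaly values, and the linear fall-off of weight with distance is my assumption of how the combination is weighted.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def infill_box(box_lat, box_lon, stations, radius_km=1200.0):
    # stations: list of (lat, lon, anomaly). Returns a distance-weighted mean of
    # every station within radius_km of the box centre, or None if there are none.
    total, weights = 0.0, 0.0
    for lat, lon, anom in stations:
        d = haversine_km(box_lat, box_lon, lat, lon)
        if d <= radius_km:
            w = 1.0 - d / radius_km
            total += w * anom
            weights += w
    return total / weights if weights else None

# A handful of airport stations in Hawaii can therefore set the anomaly for
# every otherwise-empty box within 1200 km, ocean included.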
An isolated problem? Hardly, reports Smith.
From KUSI’s Global Warming: The Other Side:
“There’s a wonderful baseline for Bolivia — a very high mountainous country — right up until 1990 when the data ends. And if you look on the [GISS] November 2009 anomaly map, you’ll see a very red rosy hot Bolivia [boxed in blue]. But how do you get a hot Bolivia when you haven’t measured the temperature for 20 years?”


Of course, you already know the answer: GISS simply fills in the missing numbers – originally cool, as Bolivia contains proportionately more land above 10,000 feet than any other country in the world – with hot ones available in neighboring stations on a beach in Peru or somewhere in the Amazon jungle.
Remember that single station north of 65° latitude which they located in a warm section of northern Canada? Joe D’Aleo explained its purpose: “To estimate temperatures in the Northwest Territory [boxed in green above], they either have to rely on that location or look further south.”
Pretty slick, huh?
And those are but a few examples. In fact, throughout the entire grid, cooler station data are dropped and “filled in” by temperatures extrapolated from warmer stations in a manner obviously designed to overestimate warming…
…And convince you that it’s your fault.
Government and Intergovernmental Agencies — Globaloney In / Green Gospel Out
Smith attributes up to 3°F (more in some places) of added “warming trend” between NOAA’s data adjustment and GIStemp processing.
That’s over twice last century’s reported warming.
And yet, not only are NOAA’s bogus data accepted as green gospel, but so are its equally bogus hysterical claims, like this one from the 2006 annual State of the Climate in 2005 [PDF]: “Globally averaged mean annual air temperature in 2005 slightly exceeded the previous record heat of 1998, making 2005 the warmest year on record.”
And as D’Aleo points out in the preliminary report, the recent NOAA proclamation that June 2009 was the second-warmest June in 130 years will go down in the history books, despite multiple satellite assessments ranking it as the 15th coldest in 31 years.
Even when our own National Weather Service (NWS) makes its frequent announcements that a certain month or year was the hottest ever, or that five of the warmest years on record occurred last decade, they’re basing such hyperbole entirely on NOAA’s warm-biased data.
And how can anyone possibly read GISS chief James Hansen’s Sunday claim that 2009 was tied with 2007 for second-warmest year overall, and the Southern Hemisphere’s absolute warmest in 130 years of global instrumental temperature records, without laughing hysterically? It’s especially laughable when one considers that NOAA had just released a statement claiming that very same year (2009) to be tied with 2006 for the fifth-warmest year on record.
So how do alarmists reconcile one government center reporting 2009 as tied for second while another had it tied for fifth? If you’re WaPo’s Andrew Freedman, you simply chalk it up to “different data analysis methods” before adjudicating both NASA and NOAA innocent of any impropriety based solely on their pointless assertions that they didn’t do it.
Earth to Andrew: “Different data analysis methods”? Try replacing “analysis” with “manipulation,” and ye shall find enlightenment. More importantly, does the simple fact that two such drastically divergent results cannot both be right, and that both are therefore immediately suspect, somehow elude you?
But by far the most significant impact of this data fraud is that it ultimately bubbles up to the pages of the climate alarmists’ bible: The United Nations Intergovernmental Panel on Climate Change Assessment Report.
And wrong data begets wrong reports, which – particularly in this case – begets dreadfully wrong policy.
It’s High Time We Investigated the Investigators
The final report will be made public shortly, and it will be available at the websites of both report-supporter Science and Public Policy Institute and Joe D’Aleo’s own ICECAP. As they’ve both been tremendously helpful over the past few days, I’ll trust in the opinions I’ve received from the report’s architects to sum up.
This from the meteorologist:
The biggest gaps and greatest uncertainties are in high latitude areas where the data centers say they ‘find’ the greatest warming (and thus which contribute the most to their global anomalies). Add to that no adjustment for urban growth and land use changes (even as the world’s population increased from 1.5 to 6.7 billion people) [in the NOAA data] and questionable methodology for computing the historical record that very often cools off the early record and you have surface based data sets so seriously flawed, they can no longer be trusted for climate trend or model forecast assessment or decision making by the administration, congress or the EPA.
Roger Pielke Sr. has suggested: “…that we move forward with an inclusive assessment of the surface temperature record of CRU, GISS and NCDC. We need to focus on the science issues. This necessarily should involve all research investigators who are working on this topic, with formal assessments chaired and paneled by mutually agreed to climate scientists who do not have a vested interest in the outcome of the evaluations.” I endorse that suggestion.
Certainly, all rational thinkers agree. Perhaps even the mainstream media, most of whom have hitherto mistakenly dismissed Climategate as a uniquely British problem, will now wake up and demand such an investigation.
And this from the computer expert:
That the bias exists is not denied. That the data are too sparse and with too many holes over time is not denied. Temperature series programs, like NASA GISS GIStemp, try but fail to fix the holes and the bias. What is claimed is that “the anomaly will fix it.” But it cannot. Comparison of a cold baseline set to a hot present set must create a biased anomaly. It is simply overwhelmed by the task of taking out that much bias. And yet there is more. A whole zoo of adjustments are made to the data. These might be valid in some cases, but the end result is to put in a warming trend of up to several degrees. We are supposed to panic over a 1/10 degree change of “anomaly” but accept 3 degrees of “adjustment” with no worries at all. To accept that GISTemp is “a perfect filter”. That is, simply, “nuts”. It was a good enough answer at Bastogne, and applies here too.
Smith, who had a family member attached to the 101st Airborne at the time, refers to the famous line from the 101st’s commander, U.S. Army General Anthony Clement McAuliffe, who replied to a German surrender ultimatum during the December 1944 Battle of Bastogne, Belgium, with a single word: “Nuts.”
And that’s exactly what we’d be were we to surrender our freedoms, our economic growth, and even our simplest comforts to duplicitous zealots before checking and double-checking the work of the prophets predicting our doom should we refuse.
Marc Sheppard is environment editor of American Thinker and editor of the forthcoming Environment Thinker.

Great item. Thanks.
Perhaps someone noticed this confusion and already reported it:
For example, Canada’s reporting stations dropped from 496 in 1989 to 44 in 1991, with the percentage of stations at lower elevations tripling while the numbers of those at higher elevations dropped to one. That’s right: As Smith wrote in his blog, they left “one thermometer for everything north of LAT 65.”
Am I missing something here? As stated, there is a “confusion” of elevation and latitude in this statement. It says the higher elevation stations “dropped to one” and then goes on to say there is “one .. north of 65°” Which is correct?
I think it should say “higher latitudes dropped to one.” Yes?
Thanks,
Clive
Nick Stokes:
“This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.”
Of course you are right Nick. But dropping true rural stations in favor of city stations, or even in favor of rural airport stations, is a big problem. I mean, how the heck are you going to know what is going on in California if all you have is one thermometer in SF and 3 in LA. I don’t care if a thermometer is on a snowy mountain or a sunny beach, because you still only care about the anomaly. But I do care about the dropping percentage of rural stations.
On this subject I have been looking into why GISS has an Arctic area that is so much hotter than HadCRUT. Looking at the GISS web site you can see the devastation of stations in the Arctic. They have a map where you can click any location and it will show you the stations in that area. Click somewhere near the Arctic and you get a long list of stations. You go, “oh boy, now I can see what is happening”. But then you quickly realize that most of the stations that GISS is giving you are no longer reporting. What is left is maybe half a dozen or less that are used to extrapolate the entire Arctic. Then you have the problem that you are extrapolating from land to oceans that are sometimes ice covered and sometimes not. I looked at the SST anomalies for those oceans when they were not completely ice covered and when they were able to report SST values. The SST anomalies were much smaller than the land based extrapolations for those same areas. But GISS will not use SST if an area is ice covered for any part of the year. They would rather extrapolate from land.
Then, when I look at the thermometers that are left in the Siberian part of the GISS record, the numbers look truly radical. For example, one has 4.5C of temperature rise in seven years. And that is being used to adjust temperatures all over the Siberian Arctic. The funny thing is that when you go and look at a thermometer like Barrow, Alaska, and compare it to the Russian Vize thermometer, there is virtually no correlation at all. It’s all very crazy. I have no idea how they can get meaningful data out of what they are doing.
Here is an example: go to the RC website and look at Jim Hansen’s thread called “If It’s That Warm, How Come It’s So Damned Cold?” Now go down to figure three of his presentation. Actually, I’ll give you the link:
http://www.realclimate.org/images/Hansen09_fig3.jpg
Now look at the chart marked HadCRUT 2005 and the one marked GISS 2005. Look at the top row of gridcells in the HadCRUT chart. Notice that there are about 6 of them that actually show cooling. Now look up at the corresponding gridcells on the GISS chart. Notice that every one of those gridcells that is shown as having a cool anomaly in HadCRUT is shown as being maximally hot in GISS. Now tell me, can you trust those data sets?
A little OT, but are there “rules of thumb” for the average temperature drop per, say, every 1000 km as one goes from the equator to the poles over land? Over ocean? Something similar to the average drop of 2.5 degrees F per 1,000 ft in elevation. Also, is there a rule of thumb for temperature change as one travels inland from the ocean, the direction of which would also depend on the season as well as the prevailing winds of the latitudinal zone?
Do such rules play a part in their distant interpolations?
“silly meme”. Reminds of the black knight. “Tis but a scratch”!! 8^]
Take the word “me” and insert appropriately. You know what I mean, …..
Please indulge me. I have been following WUWT since well before Climategate. It’s been a visceral pleasure to read and watch the whole thread of lies called AGW unravel before my eyes. It’s like being in the front seats watching history being made.
What troubles me are the sequelae. What will be the consequences of Climategate in the future? It is very troubling and dispiriting.
Human history has been Germans against Jews, Japanese against Chinese, Turks against Armenians, Americans against the Indians. The list goes on: the Spanish against Central and South American natives, the French against the Vietnamese, the Russians against anyone unfortunate enough to get in their way, the Chinese against themselves (the Great Leap Forward), the Cambodians against themselves (the Killing Fields), the Vikings against Europe, the Romans against the Greeks, Goths, and Vandals, and the Cro-Magnons against the Neanderthals.
This is not a pretty picture of our species, and no race has clean hands; we only know about the bloody ones because they were educated enough to write of their exploits. There are plenty more who never learned to write before embarking on their own exploitative adventures.
Humans are a homicidal, virulently aggressive and bloody species. Viciousness, brutality and an enthusiastic proclivity towards procreation have kept us off of the endangered species list. These traits have made us nature’s most successful species on earth. We have another trait, though, one we don’t share with any other species on earth. It is our self-awareness.
This trait balances everything animal and brutal in us. We are aware of our mortality, we imagine what may come after we die, we have a sense of wonder and a vision of truth and beauty. We have a pure and humble wish to know how we fit in nature’s scheme. We are artistic beings.
Our art is us seeking beauty and truth. Some paint to find it, some write prose and poetry, some sculpt while others write songs. Philosophers tried to understand nature directly and they gave birth to all the sciences. The sciences seek truth and beauty through logic while other artists seek it through our senses.
Most of us aren’t artistic. We depend on artists to see the truth and describe it to us through their work. We especially depend on the philosophical artists we call scientists because their work is logical and has particular power.
We were let down by Climategate in ways we cannot fathom yet. We had faith in scientists to describe what truth and beauty is. We built entire belief systems on their words. We trusted them and they lied to us. That is why so many people still cannot come to terms with anthropogenic global warming being a lie; they trusted and they believed.
Once trust is broken, one starts wondering what else is a lie. Like love gone wrong, one wonders afterward if any of it was true. Second-hand smoke, Alar, nuclear energy dangers, radon, endangered species, DDT, the spotted owl, swine flu, all of it. Scientists said so, but was any of it true? Scientists lied.
Climategate has opened Pandora’s Box.
A question and two comments:
When they drop stations, does that drop go all the way back in the record? I.e., if a station is dropped in 1990, is it removed from all records prior to 1990 and those years recalculated? If not, then dropping a cooler site will clearly raise the warming. If they are recalculated, then every such change will require a complete recalculation of the past. Are there records of what effect the drops have had on past calculated average anomalies?
Nick Stokes
I understand that the GCM models suggest that air temp will increase at different rates depending on the altitude. Thus dropping “cooler” stations may not have an effect on the average anomalies, but dropping high altitude stations would. And I suspect that dropping “cooler” and dropping “higher” are practically equivalent. In addition, dropping inland versus near shore could also have an effect because of the mediating effect of oceans. Which way those would affect the average anomalies I don’t know, but I’m sure someone does.
RE: What’s the motive?
I don’t think any motive is needed here, nor does it help to attribute one. It just creates an us-versus-them dynamic and gets everyone angry. A combination of “Where is the grant money?” and confirmation bias can fully explain the results.
“Where’s the money?” is fully human and we are all subject to that. It’s not evil, it’s reality. And I doubt that many would intentionally falsify data for that (rose colored glasses perhaps). But if you incentivise research into AGW you will get lots of people hunting for it. And that will increase the chances of finding it. And of course those that don’t find it won’t get any press or publication.
Then there is (I believe the real villain here) confirmation bias.
If a climate scientist wants to improve his or her results and makes a modification in the program or adjustment, etc., and the calculated warming goes down, then they obviously goofed and it’s back to the drawing board. If it goes up, then they were probably right in making the correction. It stays. Not because the scientist wants the warming to go up, but because they don’t want to make a mistake, and since they are sure that AGW exists, that is a convenient error check. Likewise, when you are choosing stations or data, you will look for the “that’s weird” stuff to correct or eliminate (whether manually or by program). If you are totally convinced of AGW, the data that gets looked at twice will be data that doesn’t show warming. Data that does will be accepted, as it doesn’t ring any bells. When the data goes through many hands who all hold the same basic beliefs (even if many try to fight the tendency), then there will be progressively more and more bias built in.
So we don’t need to postulate evil intent here. Just scientists with strong beliefs who are doing the best they can. And who sometimes forget that a scientist’s duty, according to Richard Feynman, is to try to disprove their own hypothesis.
Baa Humbug (21:59:00) :
He also stated that the contiguous US was only 2% of the globe’s land mass, so the error had no significance.
I contend that if the gold standard in data was wrong by 0.21 per decade, what hope is there that the rest of the global data (not held to US standards) is remotely close to accurate?
Yes, an excellent point.
Joseph D’Aleo and E. Michael Smith should be presenting this evidence as a written submission to the Science and Technology Committee of the British Parliament, in answer to question 3: “How independent are the other two international data sets?”
Steve Schaper (21:57:43) :
“And now, apparently, Hansen is on record as desiring the destruction of all cities, and the murder of billions of people.”
One way to get rid of UHI for sure. Then we won’t have to argue about it.
Note on the state of Green Science, as they continue the push to reduce carbon emissions based on flawed science.
From a Reuters article on the push to use wind power to supply 20% of the eastern US power grid:
“One megawatt of electricity can provide power to about 1,000 homes.”
1 megawatt / 1000 homes = 1 kilowatt per home, 1000 watts.
I have a coffeemaker listed as 1000 watts, and a microwave oven saying it is 1100 watts right on the front. So by eco-math, if I run both the microwave and the coffeemaker at the same time, will someone else’s house lose power? That isn’t even taking the lights into account. And heaven forbid if I use lights, both appliances, and both the furnace and the well pump turn on. I could cause a brownout for the neighborhood.
Of course that’s just silly, the system is large, and people won’t all be using their appliances at the same time. Like at night, when TV’s and home computers are powered up and supper is made. With the lights on. Nope, you’ll never see the vast majority of homes use power simultaneously like that. And especially not during the Super Bowl.
And to think with mathematical genius like that on display, people wonder why we don’t trust their adjusted temperature numbers. Go figure.
Now I get the Bolivia comments in past posts! One would hope that this will be put to the UK’s investigation as a huge fraud.
What makes me sad is that there are a huge number of honest and expert groups of scientists and engineers at NASA that have always had my respect for what the Space Race has given us, including our ability to use this forum as a mark of our dissent!
These people are being tainted by (and here I am wary of a snip from A., so I will moderate myself on his behalf) “someone” of religious persuasion who is dragging those good people into the mire of “climatology.” (If I am in error, A., please feel free to “snip.”)
A huge T.H. to Meteorologist Joseph D’Aleo and computer expert E. Michael Smith for helping me wade through the disingenuous information that (I am ashamed to say) I once fell for!
Here’s another piece of the jigsaw. Our nearest Reference Climate Station (Mackay) http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=033119&p_nccObsCode=36&p_month=13
has records from 1908 (homogenized by GISS to 1951) but Te Kowai http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=033047&p_nccObsCode=36&p_month=13
is only 10 km away, is a rural station whose land use and crop (sugar cane) has not changed, and has records from January 1908 with only a few gaps. Notice the difference? GISS calls it “Mackay Sugar Mill Station” and the homogenized record http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=501943670010&data_set=2&num_neighbors=1
shows an adjustment of approximately -0.9C decreasing gradually over 80 years to zero, to make the temperature 100 years ago almost 1 degree cooler. This accounts for nearly all the 20th century warming. The hottest year was 1931, the hottest summer 1922-23.
We need to publicise all the “anomalies” we can find. I am going to look at all Australian stations with a long history, especially rural ones, and see what I can find. How do you find out which stations are on or off the dataset?
BTW Michael Larkin (21:58:39) : Thank you for your post, taken and inwardly digested, and followed by David Ball’s “’Tis but a scratch” comment. I would only add…
” BLACK KNIGHT: I move for no man.” (remind us of anyone?) followed by…
BLACK KNIGHT: I’ve had worse.
Followed by (as the whole AGW thing slowly dissolves into Global Cooling) BLACK KNIGHT: Oh, oh, I see, running away then. You yellow
b********! (expletive deleted for “A” sensibilities and quite right sir 😉 ) Come back here and take what’s coming to you.
I’ll bite your legs off!
It’s quite amazing how a little levity from a film fits the whole scenario, and at some point, when the film comes out, our children will be on their knees laughing at how the world made it past the CRU/Water Melons!
I will (here in cold China) fall asleep with a thought to make me smile and science tells us..smiling keeps us warm 🙂
I find this piece and the work behind it interesting, not because it is anything new to those of us who follow the subject but because it highlights something which is often left unsaid.
In every single temperature measurement ever taken there is a margin for error. The thermometer (or whatever the new fancy devices are called) is not guaranteed to be 100% accurate; it might be spot-on, but it might not, and a +/- is inherent in every single measurement since records began.
The margin for error of a single device is what it is. Some will record a bit high, some a bit low, some might be over or under depending on factors that affect that particular device (such as humidity or temperature itself) no one can ever know precisely because by its nature there is no perfect device to act as a control. Nonetheless one can seek to reduce the margin for error by setting up a number of devices in close proximity. Place five thermometers in your garden and they will almost certainly not all record exactly the same temperature, but you can take an average of all five and be reasonably confident that the average is fair (albeit within a margin for error, that margin should be less than for one device alone).
And then there is the problem of taking measurements at different times of day. Added to the measuring device’s inherent inaccuracy one adds the inaccuracy of the adjustment made to try to equate 10am on one day with 4pm the next day. The margin for error increases.
The more different measuring devices used, the more the margin for error is likely to be trimmed. Again, we cannot be certain because they might all contain the same error but the inherent likelihood is that they do not.
6,000 thermometers covering the whole world is an extraordinarily small number – they represent roughly one for every 25,000 km2 of land. In addition to the margin for error in the measuring devices, the extrapolation of the few measurements over the whole land mass makes the concept of calculating “average global temperature” simply absurd. Reducing the number of devices used in the calculations turns the exercise into a farce.
I don’t understand why the surface temperature data are taken seriously at all.
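The “five thermometers in the garden” point above is just the standard error of the mean, and it is easy to demonstrate; a quick sketch (my own numbers, and note that it assumes the instrument errors are independent, which, as the comment itself concedes, they may not be):

import random
import statistics

random.seed(1)
TRUE_TEMP = 15.0

def read(sigma=0.5):
    # One imperfect thermometer with roughly +/- 0.5 C of random error.
    return TRUE_TEMP + random.gauss(0, sigma)

single_errors = [abs(read() - TRUE_TEMP) for _ in range(10000)]
averaged_errors = [abs(statistics.mean(read() for _ in range(5)) - TRUE_TEMP) for _ in range(10000)]

print(statistics.mean(single_errors))    # typical error of one device
print(statistics.mean(averaged_errors))  # typical error of a five-device average, about 1/sqrt(5) as large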
A reminder of one of the more ludicrous examples that has come to light.
Melbourne Regional Office c 1965:
http://1.bp.blogspot.com/_gRuPC7OQdxc/SrRvX7IjuRI/AAAAAAAAAco/LQ9J0ciTBPY/s400/heat+island+1960.jpg
Melbourne Regional Office c 2007:
http://wattsupwiththat.files.wordpress.com/2007/10/melbmetrolookingeast.jpg
The adjacent buildings were built in the late 90s:
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=086071&p_nccObsCode=36&p_month=13
philincalifornia (21:14:37) :
How about they were thrown out because, in contrast to your theory, they did affect the reported warming trends …. downwards ??
Well, that’s a new reason for dropping – do you have anything to support it? The Smith/Sheppard claim is different:
“It seems that stations placed in historically cooler, rural areas of higher latitude and elevation were scrapped from the data series in favor of more urban locales at lower latitudes and elevations.”
But OK, if you want to be conspiratorial and imagine someone is throwing out stations that affect warming trends, how would they do it? The past data from the stations remains in the record, so pruning won’t eliminate known warming. These devilish experts would have to anticipate future warming trends in individual stations. Harder than you think.
Oliver Ramsay (22:08:40) :
“I’ll ask for an explanation of how an opinion that you concede might be true can still be described as a “silly meme”.”
What might be true is that the stations dropped are disproportionately cooler. I don’t know, and this article provides little evidence. What is silly is the argument that cooler sites means cooler global anomalies. It doesn’t; the anomalies reported are relative to each site’s mean. The Arctic, for example, has some of the most rapidly warming anomalies.
As for Bolivia, that’s a classic silliness. The fact that Bolivia is high does not mean that a station there would produce a low anomaly. And this is an anomaly plot. In fact, there were stations in the regions around Bolivia, and that particular plot for November shows that they were consistently high over a wide region. I looked up the closest ones to Sucre, Bolivia. Here’s some info:
Anom    Dist    Alt     Name
1.63C   346km   3410m   La Quiaca
2.02C   561km   88m     Arica
1.61C   564km   385m    Tacna
3.21C   571km   189m    Marisca
2.66C   595km   950m    Jujuy Aero
High, low, in a hot month the anomalies are consistent, and consistently high.
I’ll write more on this on my blog.
Take 100 people at random from Yankee Stadium and measure how tall they are. Produce an average. Then, a year later, ask the 30 tallest people back. Produce an average of their height. Then proclaim that, at the current rate of increase of the average height, humans will be as tall as a 10-story building by the end of the century.
Seems logical to me.
You really can do better, Nick. Apart from what Tilo has said, there is another problem with the GISS pruning: if the discarded stations are in warmer areas but have cooler trends than those remaining, then the overall global warming position is distorted. The reason for this is simple: a cooling or stationary trend in a warmer area has a greater effect on radiated energy than a rising trend in a cooler area, because irradiance is proportional to the 4th power of the absolute temperature. To compound this further, the global mean surface temperature (GMST) is calculated, as you say, by taking the mean of the combined average anomaly from the few stations left standing; on this basis the AGW effect is calculated from the 4th power of the GMST. But this negates the regional Stefan-Boltzmann (SB) effect I referred to; to maintain that regional SB effect, the correct treatment would be to derive the average value of the 4th power of temperature from each site. GISS and their GMST do not do this.
Michael (18:32:21) :
“I made a bumper sticker.”
Michael, I don’t think you are doing any of us any service mixing religion into this…
Nick Stokes (18:29:14) :
This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.
1951-80 three stations – HOT,WARM,COLD to give calculated baseline = WARM
1990 Remove COLD,WARM
1991 HOT – (baseline) WARM = +ve anomaly
2010 HOT – (baseline) WARM = +ve anomaly
2050 HOT – (baseline) WARM = +ve anomaly
Even under exactly the conditions of 1951-80 ..
HOT – (baseline) WARM = +ve anomaly
What you suggest might be true if we were comparing an individual station with its own 1951-80 mean OR if we had exactly the same group of stations (unchanged) as we had in 1951-1980. But neither is the case.
” Nick Stokes (18:29:14) :
This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.”
A “cool site” is not one where T is small or even negative (in °C). It is one where dT/dt is very small or even negative, a rural station e.g.
photon without a Higgs (19:44:41) :
I’d like to see that graph.
Well, you can, or one very like it. Zeke at the Yale Climate Forum has plotted the record of anomaly temps for discontinued stations vs continued stations, from 1890 to about 1995. The discontinued stations were those that had a long record before being discontinued. There’s very little difference.
Doug in Seattle (18:29:18) : I am not sure how this dropping out of cooler stations works. It is my understanding that when NOAA or GISS homogenizes for missing readings or stations they reach out to the nearest stations up to 1200 km distant and do their corrections based on the anomaly of the other stations – not the absolute temperature as seems to be inferred by D’Aleo and Smith.
It’s a bit more complicated than that. For some individual stations, missing data will be filled in or records ‘homogenized’ from nearby (up to 1000 km) stations based on an “offset” (that could be called an anomaly of sorts) to that nearby set. Then the UHI “adjustment” is done, and that again looks up to 1000 km away. (That step also tosses out any record shorter than 20 years…) IFF it finds “enough” of those “nearby” 1000-km-away stations that are “rural” (a category that includes towns big enough to have some UHI, and includes major airports; hey, nobody LIVES at the airport, so population is low…), it will make an average of a few of them, then adjust the history of the “urban” station to match what it thinks ought to be the historic relationship. But it often gets it very wrong, including adjusting in the wrong direction sometimes. Such as Pisa, Italy, where it gets the sign wrong by 1.4 C at some points in the past…
http://chiefio.wordpress.com/2009/08/30/gistemp-a-slice-of-pisa/
But there is also the entire “fill in the grid / box step” that can reach 1200 km away for a reference to fill in the grid / box. Please note: That ‘reference station’ is by this time a partial composite of 1000 km away UHI adjustments that can in part come from another 1000 km away ‘homogenizing’ … These steps are sequential… So while I would guess that it is unlikely to have a cascade, nothing prevents it. Furthermore, as you reduce the number of stations, the code must “reach” further to build a reference set.
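A simplified sketch of the “offset” idea described above, aligning a neighbouring record to the target station over the years they overlap before using it to fill gaps (my own illustration in Python, not the GIStemp Fortran):

def fill_from_neighbor(target, neighbor):
    # target, neighbor: {year: temperature}. Returns a copy of target with its
    # missing years filled from the neighbor, shifted by the mean offset over the overlap.
    overlap = [y for y in target if y in neighbor]
    if not overlap:
        return dict(target)               # no overlap: nothing can be aligned
    offset = sum(target[y] - neighbor[y] for y in overlap) / len(overlap)
    filled = dict(target)
    for year, temp in neighbor.items():
        filled.setdefault(year, temp + offset)
    return filled

# Chain this through a 1000 km homogenizing step and then a 1200 km grid-box fill,
# and a single surviving airport record can propagate a long way.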
Now the baseline is built with a large number of stations, many of which are more rural and more pristine than the survivors. Probably the easiest illustration of that is airports. As of 2009, the percentage of GHCN stations located at airports is 41%
http://chiefio.wordpress.com/2009/08/26/agw-gistemp-measure-jet-age-airport-growth/
somehow I don’t think there were that many airports in 1900 … nor were they running tons of kerosene through their jet turbines in 1950…
And yes, airports ARE used as rural:
http://chiefio.wordpress.com/2009/08/23/gistemp-fixes-uhi-using-airports-as-rural/
In fact, the “heat” correlates better with jet fuel usage than with CO2…
http://chiefio.wordpress.com/2009/12/15/of-jet-exhaust-and-airport-thermometers-feed-the-heat/
Which isn’t all that surprising when you start breaking it down by country:
http://chiefio.wordpress.com/2009/12/08/ncdc-ghcn-airports-by-year-by-latitude/
and find things like percentage of sites at airports:
Quoting from the link
NCDC GHCN – Airports by Year by Latitude
This is a bit hobbled by the primitive data structure of the “station inventory” file. It only stores an “Airstation” flag for the current state. Because of this, any given location that was an open field in 1890 but became an airport in 1970 will show up as an airport in 1890. Basically, any trend to “more airports” is understated. Many of the early “airports” are likely old military army fields that eventually got an airport added in later years.
[…]
The Pacific, New Zealand, and Australia
These are “latitude bands” in degrees. SP is South Pole and NP is “from 10 degrees up to the North Pole,” or “anything above N 10 degrees.” DArPct is “Decade Airport Percent.” So you can see what percentage of stations are at airports in each latitude band, but that far-right number is the most interesting. That is the “total percentage airports” in that decade.
Year           SP   -35  -30  -25  -20  -15  -10   -5    5   10  -NP
DArPct: 1939  2.4   2.4  1.4  2.3  2.4  0.6  0.7  1.1  1.8  1.0  16.1
DArPct: 1949  3.9   3.8  2.6  3.1  3.1  1.0  1.0  1.3  1.1  1.0  21.8
DArPct: 1959  4.6   4.1  3.1  3.9  3.1  1.8  2.8  3.8  4.5  4.5  36.1
DArPct: 1969  4.9   4.5  2.8  4.0  2.9  2.6  5.8  8.3  4.1  4.2  44.2
DArPct: 1979  5.6   5.3  2.9  4.0  3.7  3.3  3.6  5.4  3.6  3.4  40.8
DArPct: 1989  5.9   6.3  3.4  4.6  4.5  3.8  3.6  4.3  3.6  2.6  42.7
DArPct: 1999  7.6   7.2  4.0  6.3  5.1  3.9  3.2  5.3  6.1  3.5  52.2
DArPct: 2009  10.7  8.3  5.1  7.8  5.6  4.8  4.1  11.1 9.4  4.5  71.3
End quote.
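For readers who want to reproduce that kind of count, the tally itself is simple; a hypothetical sketch (the data structure below is invented for illustration, and, as noted above, the real inventory only carries a station's current airport flag):

from collections import Counter

def airport_percent_by_decade(station_years):
    # station_years: iterable of (year, is_airport) pairs, one per station per year it reported.
    reporting = Counter()
    at_airports = Counter()
    for year, is_airport in station_years:
        decade = year - year % 10
        reporting[decade] += 1
        if is_airport:
            at_airports[decade] += 1
    return {d: round(100.0 * at_airports[d] / reporting[d], 1) for d in sorted(reporting)}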
So, how to “pick those places which will have a warming bias to the anomaly”? How about, oh, I don’t know, maybe taking the 16% of places with grassy fields or small tarmac runways and infrequent airplanes (anyone remember that Pan Am had a large Pacific fleet in the ’30s and ’40s … of SEA PLANES, because there were not enough runways?…) and turning them into multiple 10,000-foot concrete runways with hectares of tarmac, tons of Jet-A burned on takeoffs, and a fleet of cars, buses and trucks? Oh, and increasing the percentage to 71% today….
How about South America, where thermometers fled the mountains (as in Bolivia)?
South America
Has the migration of thermometers from the mountains to the beach also put them at airports?
Oh, one ‘defect’ in this report is that the “airstation” flag is only available for present status. So if a place is an airport now, it will ALSO be shown as an airport for all prior time. That means these numbers are actually understating the problem. For example, I’m pretty sure there were not 35% airports in Latin America in the decade ending in 1919…
Year           SP   -50  -40  -35   -30   -25  -20  -15  -10   10   -NP
DArPct: 1919  8.5   6.4  2.1  2.1   6.0   4.3  2.6  0.0   3.6  0.0  35.5
DArPct: 1929  8.1   6.1  2.0  2.0   6.1   4.0  4.0  0.0   4.0  0.0  36.4
DArPct: 1939  6.1   6.3  6.5  7.7   9.1   2.8  3.1  0.9   5.9  0.0  48.5
DArPct: 1949  4.4   4.6  7.4  7.0  10.7   4.4  3.5  0.9   8.5  0.0  51.4
DArPct: 1959  2.9   4.2  5.5  8.6   7.1   4.9  7.4  3.8  13.0  3.9  61.3
DArPct: 1969  2.3   5.1  4.2  8.6   5.7   4.7  6.1  6.1  18.3  3.3  64.4
DArPct: 1979  2.5   5.0  4.9  7.8   6.8   4.7  5.3  6.2  18.8  3.4  65.4
DArPct: 1989  3.0   5.5  5.4  9.6   6.8   4.6  5.7  5.4  18.4  3.5  67.9
DArPct: 1999  2.7   6.6  6.0  11.7  7.1   4.8  3.1  3.0  19.8  4.9  69.8
DArPct: 2009  1.9   6.5  6.1  12.1  7.6   4.9  2.6  2.3  18.4  5.4  67.9
But that 68% in 2009 will be fairly accurate. So just where does GISTemp look to find those “rural” stations for UHI? And just where can that 1950s and 1960s cold Bolivian mountain baseline pick up a comparison “anomaly”? Oh, at the airport in Peru…
FWIW, the USA airport percentage today in GHCN is just shy of 92%. (Please note that while GIStemp will blend in some USHCN.v2 stations that have had loads of “adjustments” made, other temperature series will not. Oh, and from May 2007 until just last November 2009, GIStemp had no USHCN stations for the “then present,” as it used USHCN version 1, which ‘cut off’ in 2007. So these numbers were what was used for all those world-record-hot claims they’ve not retracted.) In the following table, “LATpct” is the percentage of stations in a given latitude band in a SINGLE YEAR, not a decade ending, while AIRpct is the percentage of airports in that latitude band. Once again, the far-right number is the total percentage in that year for all latitudes. Particularly intriguing is how the Decade Airport Percent is still a modest 30.7%, and only by looking at the years in that decade do we see the profound blow-up of airport percentage in the last few years.
Quoting from the link:
But it masks the rather astounding effect of deletions in GHCN without the USHCN set added in:
The United States of America
Year            SP    30    35    40    45   50   55   60   65   70   -NP
LATpct: 2006   3.7  18.3  29.5  33.2  14.4  0.0  0.4  0.3  0.1  0.1  100.0
AIRpct:        1.3   4.0   6.3   6.7   3.2  0.0  0.4  0.3  0.1  0.1   22.4
LATpct: 2007   8.2  17.2  28.4  26.9  11.2  0.0  3.7  3.0  0.7  0.7  100.0
AIRpct:        8.2  15.7  27.6  23.1   9.0  0.0  3.7  3.0  0.7  0.7   91.8
LATpct: 2008   8.8  16.9  28.7  26.5  11.0  0.0  3.7  2.9  0.7  0.7  100.0
AIRpct:        8.8  15.4  27.9  22.8   8.8  0.0  3.7  2.9  0.7  0.7   91.9
LATpct: 2009   8.1  17.8  28.1  26.7  11.1  0.0  3.7  3.0  0.7  0.7  100.0
AIRpct:        8.1  16.3  27.4  23.0   8.9  0.0  3.7  3.0  0.7  0.7   91.9
DLaPct: 2009   4.3  18.4  29.5  32.5  13.6  0.0  0.7  0.9  0.2  0.1  100.0
DArPct:        2.1   5.7   8.8   8.9   3.7  0.0  0.6  0.8  0.2  0.1   30.7
For COUNTRY CODE: 425
Yup, just shy of 92% of all GHCN thermometers in the USA are at airports.
So, want to know why it’s still “record hot” with tons of snow outside your window? Well, go look at the airport where they are busy de-icing airplanes and clearing all the snow away…
While this doesn’t deal with UHI (which I think is quite important), it should not necessarily inject a warm bias unless they are cherry-picking nearest stations for UHI. But that doesn’t appear to be what D’Aleo and Smith are alleging.
Or unless they are picking airports… for example.
cohenite (01:03:06) :
Coho, the fourth power stuff is a distraction. It’s the Kelvin temperature, and the proportional change from temperature anomalies is small. BB radiation at 290K (17C) is 401.03 W/m2. At 291, it increases by 5.56. At 292, by a further 5.62. From 292 to 293 it increases by 5.68. It’s so close to linear it doesn’t matter.
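Those numbers are easy to check; a short sketch (using the usual value of the Stefan-Boltzmann constant):

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
for t in (290, 291, 292, 293):
    print(t, SIGMA * t ** 4)
# 290 K gives roughly 401 W/m^2, and each additional kelvin adds about 5.6 W/m^2,
# which is why the response is effectively linear over a few degrees.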
Christopher Hanley (00:48:19) :
Those photos of the Melbourne site give a false impression. The street between the station and the building is Latrobe street. It’s a very wide street, and the building is further set back. The building is to the south, and does not shade the station. There are legitimate worries about the amount of traffic, but the building isn’t the problem.
Incidentally, soon after your 1965 pic, another large building went up on that site, which was then replaced by the one pictured. There is parkland to the north.