American Thinker on CRU, GISS, and Climategate

Climategate: CRU Was But the Tip of the Iceberg

By Marc Sheppard

Not surprisingly, the blatant corruption exposed at Britain’s premier climate institute was not contained within the nation’s borders. Just months after the Climategate scandal broke, a new study has uncovered compelling evidence that our government’s principal climate centers have also been manipulating worldwide temperature data in order to fraudulently advance the global warming political agenda.

Not only does the preliminary report [PDF] indict a broader network of conspirators, but it also challenges the very mechanism by which global temperatures are measured, published, and historically ranked.

Last Thursday, Certified Consulting Meteorologist Joseph D’Aleo and computer expert E. Michael Smith appeared together on KUSI TV [Video] to discuss the Climategate — American Style scandal they had discovered. This time out, the alleged perpetrators are the National Oceanic and Atmospheric Administration (NOAA) and the NASA Goddard Institute for Space Studies (GISS).

NOAA stands accused by the two researchers of strategically deleting cherry-picked, cooler-reporting weather observation stations from the temperature data it provides the world through its National Climatic Data Center (NCDC). D’Aleo explained to show host and Weather Channel founder John Coleman that while the Hadley Center in the U.K. has been the subject of recent scrutiny, “[w]e think NOAA is complicit, if not the real ground zero for the issue.”

And their primary accomplices are the scientists at GISS, who put the altered data through an even more biased regimen of alterations, including intentionally replacing the dropped NOAA readings with those of stations located in much warmer locales.

As you’ll soon see, the ultimate effects of these statistical transgressions on the reports which influence climate alarm and subsequently world energy policy are nothing short of staggering.

NOAA – Data In / Garbage Out

Although satellite temperature measurements have been available since 1978, most global temperature analyses still rely on data captured from land-based thermometers, scattered more or less about the planet. It is that data which NOAA receives and disseminates – although not before performing some sleight-of-hand on it.

Smith has done much of the heavy lifting involved in analyzing the NOAA/GISS data and software, and he chronicles his often frustrating experiences at his fascinating website. There, detail-seekers will find plenty to satisfy, divided into easily-navigated sections — some designed specifically for us “geeks,” but most readily approachable to readers of all technical strata.

Perhaps the key point discovered by Smith was that by 1990, NOAA had deleted from its datasets all but 1,500 of the 6,000 thermometers in service around the globe.

Now, 75% represents quite a drop in sampling population, particularly considering that these stations provide the readings used to compile both the Global Historical Climatology Network (GHCN) and United States Historical Climatology Network (USHCN) datasets. These are the same datasets, incidentally, which serve as primary sources of temperature data not only for climate researchers and universities worldwide, but also for the many international agencies using the data to create analytical temperature anomaly maps and charts.

Yet as disturbing as the number of dropped stations was, it is the nature of NOAA’s “selection bias” that Smith found infinitely more troubling.

It seems that stations placed in historically cooler, rural areas of higher latitude and elevation were scrapped from the data series in favor of more urban locales at lower latitudes and elevations. Consequently, post-1990 readings have been biased to the warm side not only by selective geographic location, but also by the anthropogenic heating influence of a phenomenon known as the Urban Heat Island Effect (UHI).

For example, Canada’s reporting stations dropped from 496 in 1989 to 44 in 1991, with the percentage of stations at lower elevations tripling while the numbers of those at higher elevations dropped to one. That’s right: As Smith wrote in his blog, they left “one thermometer for everything north of LAT 65.” And that one resides in a place called Eureka, which has been described as “The Garden Spot of the Arctic” due to its unusually moderate summers.

Smith also discovered that in California, only four stations remain – one in San Francisco and three in Southern L.A. near the beach – and he rightly observed that

It is certainly impossible to compare it with the past record that had thermometers in the snowy mountains. So we can have no idea if California is warming or cooling by looking at the USHCN data set or the GHCN data set.

That’s because the baseline temperatures to which current readings are compared were a true averaging of both warmer and cooler locations. And comparing these historic true averages to contemporary false averages – which have had the lower end of their numbers intentionally stripped out – will always yield a warming trend, even when temperatures have actually dropped.
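The arithmetic behind that claim can be sketched with a toy example (the station values below are invented, and real analyses involve thousands of series, but the direction of the bias is the same):

```python
# Toy illustration (invented values): a baseline built from ALL stations is
# compared against a "current" average built only from the warmer survivors.
baseline_stations = [-5.0, 2.0, 9.0, 16.0]  # cool mountain through warm coastal sites
current_stations = [9.0, 16.0]              # the two cooler sites have been dropped

baseline_mean = sum(baseline_stations) / len(baseline_stations)  # 5.5
current_mean = sum(current_stations) / len(current_stations)     # 12.5

# No station changed temperature, yet the comparison reports warming:
apparent_warming = current_mean - baseline_mean
print(f"Apparent warming with zero actual change: +{apparent_warming}")  # +7.0
```

This is the sense in which comparing a historic "true average" to a contemporary "false average" can manufacture a trend out of station selection alone.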

Overall, U.S. online stations have dropped from a peak of 1,850 in 1963 to a low of 136 as of 2007. In his blog, Smith wittily observed that “the Thermometer Langoliers have eaten 9/10 of the thermometers in the USA[,] including all the cold ones in California.” But he was deadly serious after comparing current to previous versions of USHCN data and discovering that this “selection bias” creates a +0.6°C warming in U.S. temperature history.

And no wonder — imagine the accuracy of campaign tracking polls were Gallup to include only the replies of Democrats in their statistics.  But it gets worse.

Prior to publication, NOAA effects a number of “adjustments” to the cherry-picked stations’ data, supposedly to eliminate flagrant outliers, adjust for time-of-day heat variance, and “homogenize” stations with their neighbors in order to compensate for discontinuities. This last one, they state, is accomplished by essentially adjusting each to jibe closely with the mean of its five closest “neighbors.” But given the plummeting number of stations, and the likely disregard for the latitude, elevation, or UHI of such neighbors, it’s no surprise that such “homogenizing” seems to always result in warmer readings.
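As a rough sketch of that neighbor-blending idea (this is not NOAA’s actual algorithm, which involves pairwise change-point detection; the station values are invented):

```python
def homogenize(station_temp, neighbor_temps):
    """Blend a station reading toward the mean of its nearest neighbors
    (a deliberately simplified stand-in for the real adjustment)."""
    neighbor_mean = sum(neighbor_temps) / len(neighbor_temps)
    return (station_temp + neighbor_mean) / 2.0

# A cool rural survivor surrounded by five warmer urban/airport neighbors:
rural = 10.0
neighbors = [12.5, 13.0, 12.8, 13.4, 12.9]
adjusted = homogenize(rural, neighbors)  # pulled warm, to 11.46
```

The article’s complaint is visible even in this toy: if the surviving “neighbors” skew warm, any blend toward them skews the adjusted record warm as well.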

The chart below is from Willis Eschenbach’s WUWT essay, “The smoking gun at Darwin Zero,” and it plots GHCN Raw versus homogeneity-adjusted temperature data at Darwin International Airport in Australia. The “adjustments” actually reversed the 20th-century trend from temperatures falling at 0.7°C per century to temperatures rising at 1.2°C per century. Eschenbach isolated a single station and found that it was adjusted to the positive by 6.0°C per century, and with no apparent reason, as all five stations at the airport more or less aligned for each period. His conclusion was that he had uncovered “indisputable evidence that the ‘homogenized’ data has been changed to fit someone’s preconceptions about whether the earth is warming.”

WUWT’s editor, Anthony Watts, has calculated the overall U.S. homogeneity bias to be 0.5°F to the positive, which alone accounts for almost one half of the 1.2°F warming over the last century. Add Smith’s selection bias to the mix and poof – actual warming completely disappears!

Yet believe it or not, the manipulation does not stop there.

GISS – Garbage In / Globaloney Out

The scientists at NASA’s GISS are widely considered to be the world’s leading researchers into atmospheric and climate changes. And their Surface Temperature (GISTemp) analysis system is undoubtedly the premier source for global surface temperature anomaly reports.

In creating its widely disseminated maps and charts, the program merges station readings collected from the Scientific Committee on Antarctic Research (SCAR) with GHCN and USHCN data from NOAA.

It then puts the merged data through a few “adjustments” of its own.

First, it further “homogenizes” stations, supposedly adjusting for UHI by (according to NASA) changing “the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations.” Of course, the reduced number of stations will have the same effect on GISS’s UHI correction as it did on NOAA’s discontinuity homogenization – the creation of artificial warming.

Furthermore, in his communications with me, Smith cited boatloads of problems and errors he found in the Fortran code written to accomplish this task, ranging from hot airport stations being mismarked as “rural” to the “correction” having the wrong sign (+/-) and therefore increasing when it meant to decrease or vice-versa.

And according to NASA, “If no such neighbors exist or the overlap of the rural combination and the non-rural record is less than 20 years, the station is completely dropped; if the rural records are shorter, part of the non-rural record is dropped.”

However, Smith points out that a dropped record may be “from a location that has existed for 100 years.” For instance, if an aging piece of equipment gets swapped out, thereby changing its identification number, the time horizon reinitializes to zero years. Even having a large enough temporal gap (e.g., during a world war) might cause the data to “just get tossed out.”

But the real chicanery begins in the next phase, wherein the planet is flattened and stretched onto an 8,000-box grid, into which the time series are converted to a series of anomalies (degree variances from the baseline). Now, you might wonder just how one manages to fill 8,000 boxes using 1,500 stations.

Here’s NASA’s solution:

For each grid box, the stations within that grid box and also any station within 1200km of the center of that box are combined using the reference station method.

Even on paper, the design flaws inherent in such a process should be glaringly obvious.

So it’s no surprise that Smith found many examples of problems surfacing in actual practice. He offered me Hawaii for starters. It seems that all of the Aloha State’s surviving stations reside in major airports. Nonetheless, this unrepresentative hot data is what’s used to “infill” the surrounding “empty” Grid Boxes up to 1200 km out to sea. So in effect, you have “jet airport tarmacs ‘standing in’ for temperature over water 1200 km closer to the North Pole.”
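The infill idea can be modeled in miniature. The sketch below is not GISS’s actual reference station method (which combines anomaly series with a linear distance weight); it is a hypothetical distance-weighted fill using invented coordinates and readings, just to show one station “standing in” for an empty ocean box:

```python
import math

def distance_km(p, q):
    """Crude equirectangular distance; good enough for a toy example."""
    lat1, lon1 = map(math.radians, p)
    lat2, lon2 = map(math.radians, q)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return 6371.0 * math.hypot(x, y)

def infill(box_center, stations, radius_km=1200.0):
    """Distance-weighted mean of every station within radius_km of the box,
    weights tapering linearly to zero at the radius; None if no station."""
    weighted = total = 0.0
    for coords, temp in stations:
        d = distance_km(box_center, coords)
        if d < radius_km:
            w = 1.0 - d / radius_km
            weighted += w * temp
            total += w
    return weighted / total if total else None

# One warm airport-like station, roughly 500 km from an empty ocean box:
stations = [((21.3, -157.9), 27.0)]
ocean_box = (25.0, -155.0)
# The box inherits the airport's reading; boxes beyond 1200 km stay empty.
```

With only one station in range, the “filled” box simply takes on that station’s value, no matter how unrepresentative it is of open ocean.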

An isolated problem? Hardly, reports Smith.

From KUSI’s Global Warming: The Other Side:

“There’s a wonderful baseline for Bolivia — a very high mountainous country — right up until 1990 when the data ends.  And if you look on the [GISS] November 2009 anomaly map, you’ll see a very red rosy hot Bolivia [boxed in blue].  But how do you get a hot Bolivia when you haven’t measured the temperature for 20 years?”

Of course, you already know the answer:  GISS simply fills in the missing numbers – originally cool, as Bolivia contains proportionately more land above 10,000 feet than any other country in the world – with hot ones available in neighboring stations on a beach in Peru or somewhere in the Amazon jungle.

Remember that single station north of 65° latitude which they located in a warm section of northern Canada? Joe D’Aleo explained its purpose: “To estimate temperatures in the Northwest Territory [boxed in green above], they either have to rely on that location or look further south.”

Pretty slick, huh?

And those are but a few examples. In fact, throughout the entire grid, cooler station data are dropped and “filled in” by temperatures extrapolated from warmer stations in a manner obviously designed to overestimate warming…

…And convince you that it’s your fault.

Government and Intergovernmental Agencies — Globaloney In / Green Gospel Out

Smith attributes up to 3°F (more in some places) of added “warming trend” between NOAA’s data adjustment and GIStemp processing.

That’s over twice last century’s reported warming.

And yet, not only are NOAA’s bogus data accepted as green gospel, but so are its equally bogus, hysterical claims, like this one from its 2006 annual report, State of the Climate in 2005 [PDF]: “Globally averaged mean annual air temperature in 2005 slightly exceeded the previous record heat of 1998, making 2005 the warmest year on record.”

And as D’Aleo points out in the preliminary report, the recent NOAA proclamation that June 2009 was the second-warmest June in 130 years will go down in the history books, despite multiple satellite assessments ranking it as the 15th-coldest in 31 years.

Even when our own National Weather Service (NWS) makes its frequent announcements that a certain month or year was the hottest ever, or that five of the warmest years on record occurred last decade, they’re basing such hyperbole entirely on NOAA’s warm-biased data.

And how can anyone possibly read GISS chief James Hansen’s Sunday claim that 2009 was tied with 2007 for second-warmest year overall, and the Southern Hemisphere’s absolute warmest in 130 years of global instrumental temperature records, without laughing hysterically? It’s especially rich when one considers that NOAA had just released a statement claiming that very same year (2009) to be tied with 2006 for the fifth-warmest year on record.

So how do alarmists reconcile one government center reporting 2009 as tied for second while another had it tied for fifth? If you’re WaPo’s Andrew Freedman, you simply chalk it up to “different data analysis methods” before adjudicating both NASA and NOAA innocent of any impropriety based solely on their pointless assertions that they didn’t do it.

Earth to Andrew: “Different data analysis methods”? Try replacing “analysis” with “manipulation,” and ye shall find enlightenment. More importantly, since the drastically divergent results of both “methods” can’t both be right, both are immediately suspect. Does that simple fact somehow elude you?

But by far the most significant impact of this data fraud is that it ultimately bubbles up to the pages of the climate alarmists’ bible: The United Nations Intergovernmental Panel on Climate Change Assessment Report.

And wrong data begets wrong reports, which – particularly in this case – begets dreadfully wrong policy.

It’s High Time We Investigated the Investigators

The final report will be made public shortly, and it will be available at the websites of both report-supporter Science and Public Policy Institute and Joe D’Aleo’s own ICECAP. As they’ve both been tremendously helpful over the past few days, I’ll trust in the opinions I’ve received from the report’s architects to sum up.

This from the meteorologist:

The biggest gaps and greatest uncertainties are in high latitude areas where the data centers say they ‘find’ the greatest warming (and thus which contribute the most to their global anomalies). Add to that no adjustment for urban growth and land use changes (even as the world’s population increased from 1.5 to 6.7 billion people) [in the NOAA data] and questionable methodology for computing the historical record that very often cools off the early record and you have surface based data sets so seriously flawed, they can no longer be trusted for climate trend or model forecast assessment or decision making by the administration, congress or the EPA.

Roger Pielke Sr. has suggested: “…that we move forward with an inclusive assessment of the surface temperature record of CRU, GISS and NCDC.  We need to focus on the science issues.  This necessarily should involve all research investigators who are working on this topic, with formal assessments chaired and paneled by mutually agreed to climate scientists who do not have a vested interest in the outcome of the evaluations.” I endorse that suggestion.

Certainly, all rational thinkers agree. Perhaps even the mainstream media, most of whom have hitherto mistakenly dismissed Climategate as a uniquely British problem, will now wake up and demand such an investigation.

And this from the computer expert:

That the bias exists is not denied. That the data are too sparse and with too many holes over time is not denied. Temperature series programs, like NASA GISS GIStemp, try, but fail, to fix the holes and the bias. What is claimed is that “the anomaly will fix it.” But it cannot. Comparison of a cold baseline set to a hot present set must create a biased anomaly. It is simply overwhelmed by the task of taking out that much bias. And yet there is more. A whole zoo of adjustments are made to the data. These might be valid in some cases, but the end result is to put in a warming trend of up to several degrees. We are supposed to panic over a 1/10 degree change of “anomaly” but accept 3 degrees of “adjustment” with no worries at all. To accept that GISTemp is “a perfect filter” is, simply, “nuts.” It was a good enough answer at Bastogne, and it applies here too.

Smith, who had a family member attached to the 101st Airborne at the time, refers to the famous reply of the 101st’s commander, U.S. Army General Anthony Clement McAuliffe, who answered a German surrender ultimatum during the December 1944 Battle of Bastogne, Belgium, with a single word: “Nuts.”

And that’s exactly what we’d be were we to surrender our freedoms, our economic growth, and even our simplest comforts to duplicitous zealots before checking and double-checking the work of the prophets predicting our doom should we refuse.

Marc Sheppard is environment editor of American Thinker and editor of the forthcoming Environment Thinker.


158 thoughts on “American Thinker on CRU, GISS, and Climategate”

  1. You know, there is nothing in this article that hasn’t been published at least twice before. It is an absolute crime that nothing comes of it. And this article, too, will be ignored by those in a position to do something about it.
    Global warming is, indeed, man made … at GISS and CRU.

  2. This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.

  3. I am not sure how this dropping out of cooler stations works. It is my understanding that when NOAA or GISS homogenizes for missing readings or stations they reach out to the nearest stations up to 1200 km distant and do their corrections based on the anomaly of the other stations – not the absolute temperature as seems to be inferred by D’Aleo and Smith.
    While this doesn’t deal with UHI (which I think is quite important), it should not necessarily inject a warm bias unless they are cherry picking nearest stations for UHI. But that doesn’t appear to be what D’Aleo and Smith are alleging.

  4. So let’s get this straight. Is the following hypothetical example correct?
    3 stations measure 11 deg, 12 deg, and 13 deg, averaging 12 deg.
    Then we drop the first 2 “cooler” ones, leaving us with the third at 13 deg, which by itself is 1 deg above the average.
    You gotta hand it to them: they found a way to make a station read warming against itself just beautifully. (Except they got caught.)

  5. This article warrants the widest possible dissemination, in my opinion. The more the climate “gate” opens, the more people will pass through it.
    We haven’t been having unprecedented warming, but we’ve sure had some unprecedented temperature reporting, have we not?

  6. Measuring temperature is easy,
    you just use the Alice in Wonderland homogeneity bias
    One pill makes you larger
    and one pill makes you small

  7. Central Alaska has a good, continuous, UHI free temperature record.
    Conclusions:
    — over the last 30 years, there has been 3 deg F of warming
    — over the last 80 years, 0.5 F.
    — essentially all detectable variation is directly related to the PDO.
    Clearly, adjustment is required. Where data collides with theory, so much the worse for data.

  8. Look at the results of NOAA’s “adjustments” to the raw temperature data: click [takes a few seconds to load]
    This is a chart showing the reduction in the number of temperature stations: click. Most of the reduction was in rural stations – leaving the urban stations located on airport tarmacs, next to air conditioning exhausts, etc. Obviously this will skew the global temperature record warmer than it really is.
    And here we can see a graphic of the radical reduction in the number of temperature stations that compile the global temperature record: click
    With trillions of dollars in higher taxes being demanded to “fight global warming,” an accurate temperature record is required. As we can see, NOAA “adjusts” the record upward using very questionable methodology. An impartial investigation is necessary, with all sides of the debate fully involved.

  9. Great article … too bad it will not get any legs outside of the skeptic community.
    I mean … “The American Thinker” … you might as well have said that Rush read this on his show … thus, it must be rubbish.

  10. originally cool, as Bolivia contains proportionately more land above 10,000 feet than any other country in the world – with hot ones available in neighboring stations on a beach in Peru or somewhere in the Amazon jungle.
    It’s so cold in the Amazon jungle you’d think you’re on a snowy mountain.

  11. Nick Stokes (18:29:14) :
    This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.
    Funny that they didn’t drop the stations most affected by UHI which causes anomaly to be unnaturally higher.

  12. The short point is that if you do not use exactly the same data sets throughout in the calculation of the entire temperature record, you cannot even show a trend. Thus station drop-outs render the temperature record next to meaningless, since any apparent trend is a factor (to some more or less extent) of station drop-out.
    Would not any serious scientist seek to compile the record from the purest data available, i.e., data sets requiring no adjustment for UHI, changing land use, etc.? This would lead one to reject urban station sets and instead use rural data sets.
    In fact, the entire idea of creating an artificial global average temperature record is absurd, given that global warming is not a global problem. Some countries will benefit; for others it will be a problem (to a more or less degree). The melting of Arctic/Antarctic ice and glaciers depends more upon their own microclimatic conditions than some notional global average. With the passing months, it is becoming ever more apparent that manmade global warming is little more than manmade manipulation of data.

  13. Funny how this was started in 1989 right after James Hansen gave his infamous 1988 presentation before congress.
    Maybe you had to help your predictions, hey James?

  14. You really have to ask what is the reason these people would do this and how did they get it to this scale.
    Some say control and power and some say grants and money. Whatever; it is almost unbelievable that it could go this far. The scale is breathtaking when you include NOAA and NASA. Maybe those moon pictures were fake, LOL.

  15. Nick Stokes (18:29:14) :
    “This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.”
    As I read this, they were saying that the anomaly can be affected if warmer stations are extrapolated to an area (up to 1200 km) from an area where readings are no longer used, thus raising the anomaly for that area, especially true if the areas transposed are affected by improper UHI adjustments.
    Also if rural areas are dropped, and UHI is not correctly accounted for, this could raise the anomaly.
    Also at times I hear reports that the global average temp was such and such for a given year, and so even if it is not in the IPCC papers it (the global mean temperature) can be used politically for misinformation.

  16. The inertia of AGW is such that it will take more than a single event, such as the leaking of emails and documents at CRU, to effect significant change upon the direction of climate change theory.
    The ‘independent’ reviews at UEA and PSU, even allowing them the benefit of the doubt and assuming they will vigorously hunt for the truth, will never be enough to turn around the state of climate science.
    The only way to rip the lid off the messy state of climate science is to keep pressing for full disclosure. Let the first tier investigations proceed, knowing that other investigations are set to proceed (investigate the investigators?).
    While this article by Marc Sheppard is nothing ‘new’, it further reveals the nature of the problem and advances what Meteorologist Joseph D’Aleo and computer expert E. Michael Smith have so far uncovered and discussed. This serves to continue to raise the pressure, thereby inviting further investigations.
    It will take many years to truly reconstruct climate science into a viable field of study and rid it of that silly notion that mankind has a major impact on climate. Climategate and its immediate fallout is but the first step of a long process.

  17. Nick Stokes (18:29:14) :
    Well let’s do this, Nick:
    Let’s go back to 1989 when all this started, take all the data that was dropped up until today, and use it as GISTemp product instead of the data that wasn’t dropped. And let’s just drop all the data that was used by GISS. According to you they are interchangeable.
    Want to do it?

    I’d like to see that graph. And I am sure you would have no problem with it too since you call this whole thing a ‘a silly meme’.

  18. Why make it so complicated? Use rural stations (you could probably still have more records than they now use), plot them on overlapping time grids, and get your anomaly estimates for each decade based on the number of available stations for that decade. KISS could really help. Then we could argue about the quality of the rural stations, but not about all these (what appear to be) unnecessary adjustments.
    Produce a second record for the mean of all the long-term rural temps. I think we would then have a better anomaly record, and a mean temp record.
    Then we could get to the point of factoring in mean relative humidity for the same time period (it sounds difficult, and IMV it is more difficult than it sounds).
    Then we could calculate how much of whatever warming we found was at night versus day.
    Also, I understand that exactly how all these adjustments are made is not fully disclosed. Is this correct?

  19. As to GISS being even more influential than its UK counterparts: this, from an article on Hansen’s warmest-decade story, seems to say the UK has few if any stations in the Arctic and Antarctic. That leaves GISS as the main source for disaster scenarios at the poles. I wouldn’t take any single person’s word about such a thing:
    “Other research groups also track global temperature trends but use different analysis techniques. The Met Office Hadley Centre in the United Kingdom uses similar input measurements as GISS, for example, but it omits large areas of the Arctic and Antarctic where monitoring stations are sparse.” (escience, 1/21/10)
    As to why they give all the bad info, it’s money and politics, climate is among biggest businesses in the world now–all based on nothing. Carbon trading is the most profitable division of investment banks in Europe. The US has had 20 years of weak leadership, so we just rolled over for UN thugs. There are 5 climate lobbyists for every congressman in DC.

  20. Thank you, Marc Sheppard, for keeping the spotlight of public attention focused on the global climate warming scandal.
    I have personally witnessed a steady decline in the integrity of federally funded research since 1960.
    Over the past 50 years, scientists have been trained with grant funds the way Pavlov’s dogs were trained with dog biscuits.
    The National Academy of Sciences (NAS) uses its power of review of the budgets for NASA, NOAA, NSF, DOE, etc. to direct funds to researchers who will find evidence of favorite NAS opinions: Anthropogenic Global Warming, Oscillating Solar Neutrinos, the Standard Solar Model, etc., ad infinitum.
    Beneath the Climategate iceberg is a half-century of distortion and data manipulation by those who control the purse strings of NASA, NOAA, NSF, DOE, etc.
    What a sad state of affairs for science,
    Oliver K. Manuel
    Former NASA PI for Apollo

  21. I wonder if a volunteer team of several hundred, like Anthony’s surface station project, could plow their way through each station record like was done for Darwin Airport.
    It might be quite technical and everyone may not have the skills to understand everything, but perhaps there are preliminary analyses that can be done to get some idea of what the state of the records is. And with enough people it may not take too long, and we’d have the final and definitive answer on this entire mess.

  22. What GISS and NOAA have done is to write a program and implement it to make any weather anomaly they want.
    With UHIs as high as 10 deg F, they control the horizontal and the vertical, and that’s how you get whole regions mysteriously showing warmer than ever before when the opposite is true in the real world.
    Yes, they need to have their clocks cleaned.
    There is only 1 way, and 1 way only, to clean up this stinky mess.
    Pull up all the data from the stations they don’t use anymore. Most of them are still recording. Verify with printed material of the times.
    Nothing that GISS or NOAA has been producing is beyond suspicion.

  23. Nick Stokes, (I feel like I’m piling on at this point), the comments against your point are valid. Especially when the “anomaly” is measured from some baseline, and the “current value” is taken from a known, UHI-enhanced, hotter location extrapolated over into an area that is actually much cooler.
    I especially like Photon’s offer – let’s just switch all the dropped with retained stations, and run the numbers again! What say you?

  24. Nick Stokes and others,
    thanks for the apologetics.
    When you toss out stations at higher elevation and higher latitude you are generally tossing out research stations, smaller villages and towns. When you look at what is left and find there is actually a high percentage of airports and larger towns with virtually no rural stations, yes, you ARE gridding with high trend sites!!
    But then, you KNEW that and STILL wanted to apologise for James “coal trains are death trains” Hansen and the rest!! But thanks for dropping by Nick!!! Nice to see you making the rounds and making yourself look ignorant again!!

  25. The Carbon Pollution Reduction Scheme legislation is due to be reintroduced into the Australian Parliament in February. That is if Mr Rudd follows through with his threat to reintroduce the bill and thereby create a ‘trigger’ for a double dissolution of both houses of Parliament and call an early election. I have sent a copy of this article to Senators who voted in favour of the bill last year. At least they cannot say they did not know this fraud/conspiracy was going on.

  26. For sure the New York Times will be all over this story.
    What would we do without the New York Times telling the truth.

  27. David (19:35:41) :
    “thus raising the anomaly for that area, “

    Anomalies don’t work that way. An anomaly is calculated for each individual site. They are then aggregated. The purpose is exactly to avoid the issue spuriously raised here. For climate purposes you only aggregate deviations from the individual site average. Then it doesn’t matter whether the mean temperatures represent the average mean for the area.
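The distinction the commenter is drawing can be put in a few lines (values invented): each station’s anomaly is measured against that station’s own baseline, so a cold-mean station and a warm-mean station that warm by the same amount contribute identical anomalies.

```python
# Two invented stations, one cold and one warm, each warming by exactly +1.0:
cold_site = {"baseline": -10.0, "current": -9.0}
warm_site = {"baseline": 20.0, "current": 21.0}

def anomaly(site):
    # Deviation from the site's OWN baseline, not from any regional mean.
    return site["current"] - site["baseline"]

mean_with_both = (anomaly(cold_site) + anomaly(warm_site)) / 2  # 1.0
mean_warm_only = anomaly(warm_site)                             # 1.0
# Dropping the cold-MEAN station leaves the aggregate anomaly unchanged;
# the live dispute is over whether dropped stations also had different
# TRENDS, which would change the aggregate.
```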

  28. Here’s the data set I would like to see plotted:
    SINCE radiosonde (weather balloon) data have been used to validate MSU data sets, which we have going back over 30 years, THEN find the ground data that corresponds to these release points.
    THIS independently validated surface temperature data set can be plotted. Next, of course, UHI adjustments, corrections for relocations, and other factors may need to be performed.
    Overall, I think this should be the World Temperature – not the phony ones we’ve put up with for so many years.

  29. photon without a Higgs (19:44:41) :
    “I am sure you would have no problem with it too”

    Yes, not much. The data was probably dropped for a reason, but as long as it is reasonably OK, the fact that it came from a cooler area (if that is indeed the trend) won’t matter.

  30. Certified Consulting Meteorologist Joseph D’Aleo and computer expert E. Michael Smith have done an outstanding job. Marc Sheppard’s report is well-written, too. He’s trying to explain it to people who have not been following this work, as many WUWT-ers have been.
    I’m glad to see D’Aleo and Smith (and Watts et al.) getting some credit where it is certainly due. I also anxiously await the full report. It will cause a stir immediately, but it will also resonate for years. It may well mark the tipping point. Climategate-zilla.

  31. Nick Stokes (18:29:14) :
    This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.

    How about they were thrown out because, in contrast to your theory, they did affect the reported warming trends …. downwards ?? In fact, I thought a warmista meme was that, because of the lack of water vapor at the poles (sites with a distinctly cooler mean), anthropogenic CO2-induced warming (as in the temperature anomaly) would be more pronounced ?? Did that one die a death ??
    Science has a nasty habit of biting theories in the ass, so I will keep an open mind on this until you, Nick, show me the data that supports your conclusion.

  32. Nick Stokes (18:29:14) :
    This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.
    Since measuring anomalies from fewer locations is as accurate as measuring anomalies from more locations, why don’t they simply measure from one location?

  33. Using rural stations only has its own share of problems. You can get regional scale changes similar to the UHI effect. For instance:
    – Change of land use by making bigger farms by merging smaller farms, typically with removal of vegetation and fencing to make it more economical to do the farming with big machines.
    – Forest clear-felling.
    – Introduction of irrigation.
    – Change in plant characteristics – e.g. wheat plants these days are significantly shorter than years ago.
    – Use of artificial fertilizers as compared to older style crop rotation.
    – Change in preferred crop – e.g. growing corn for bioethanol.
    Now I don’t know the exact numeric effect of these changes, but I do know they will change the regional micro-climate by mechanisms such as
    – changing the surface albedo (incoming and outgoing radiation balance)
    – changing the surface turbulence ( i.e. mixing of near ground and higher air)
    – changing the amount of particles in the air (cloud formation)
    – change in the local evaporation rate (cloud & humidity)
    Perhaps it is not sensible to correct for any of these or UHI? Perhaps a better solution is to take the figures as they are and recognize the simple truth of “yes it’s hotter in town now than 20 years ago”. That is the ground truth for the residents and what actually affects their lives. It is no comfort to know what caused it, just that it is so.
    The secondary interest is whether the world is heating up or not. The issue is how to take a sparse and spotty set of measurements and fairly apportion them to some grid that can then be compared over time.
    My naive approach would be to take all measurements, uncorrected, calculate the reference temperature and anomaly at each station, and then calculate gridded values of the reference temperature & anomaly using a geometric averaging process from relatively nearby stations, with a weighting factor that discounts multiple stations close together – e.g. the 10 weather stations in a city compared to the 5 nearby country stations. Obviously it would need to be a bit cleverer than that to handle stations starting and stopping, etc.
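The naive approach above, minus the cluster-discount refinement, is essentially inverse-distance weighting. A minimal sketch, with made-up coordinates and anomaly values:

```python
import math

def idw_anomaly(grid_point, stations, power=2):
    """Inverse-distance-weighted anomaly at one grid point.
    stations: list of (x, y, anomaly) tuples; all values here are
    illustrative, not real data."""
    num = den = 0.0
    for x, y, anom in stations:
        d = math.hypot(x - grid_point[0], y - grid_point[1])
        if d < 1e-9:
            return anom  # a station sits exactly on the grid point
        w = 1.0 / d ** power
        num += w * anom
        den += w
    return num / den

# Two clustered "city" stations and one distant "country" station:
stations = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (5.0, 0.0, 0.2)]
midpoint = idw_anomaly((2.5, 0.0), stations)
```

Note that plain IDW treats the two clustered stations as two independent votes; the weighting factor the comment proposes (discounting stations close together) would have to be added on top, which is exactly the "bit cleverer" part.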

  34. I’d rather get my info from the horse’s mouth.
    J Hansen said the following on Oz tv interview with T Jones.
    TONY JONES: Okay, can you tell us how the Goddard Institute takes and adjusts these global temperatures because sceptics claim that urban heat centres make a huge difference; that they distort global temperatures and they make it appear hotter than it really is.
    So do you adjust, in your figures, for the urban heat zone effects?
    JAMES HANSEN: We get data from three different sources and we now, in order to avoid criticisms from contrarians, we no longer make an adjustment. Even if we see there are eight stations in Alaska and seven of them have temperatures in the minus 30s and one of them says plus 35, which pretty obvious what happens, someone didn’t put the minus sign there, we just, we don’t correct that.
    Instead we send an email or letter or a letter to the organisation that produces the data and say, you’d better check the Alaska temperatures, because we don’t want to be blamed for changing anything. But as far as adjusting for urban effects, we have a very simple procedure.
    We exclude urban locations, use rural locations to establish a trend, and that does eliminate – though urban stations do have more warming than the rural stations, and so we eliminate that effect simply by eliminating those stations, but it’s very clear that the warming that we see is not urban, it’s largest in Siberia, and in the Arctic and the Antarctic, and there aren’t any cities there, and there’s warming over the oceans, there are no cities there. So it’s not urban warming that’s just nonsense.
    So he says they drop the UHI stations “to establish a trend”.
    What happens when you drop the “warm” ones? The baseline trend runs cooler, so current temps will more than likely read warmer against that trend.
    Alternatively, if he meant they drop UHI stations altogether, we know this is false; they did the exact opposite.

  35. And now, apparently, Hansen is on record as desiring the destruction of all cities, and the murder of billions of people.
    Aren’t there facilities for people like that?

  36. This kind of analysis is something that everyone can understand, even the layman (maybe even politicians!). It has a strong appeal to commonsense. If one could only get a mainstream newspaper to publish something like it!
    On another point: that word “meme”. I’ve noticed it is very often used in the sense of something (obviously false) that is parroted by a person one happens to disagree with. Though essentially a crude and pejorative insult, it masquerades as sophisticated superiority. “Meme” has become a meme in its own right, and, ironically, parodies itself.
    People who utter it often try to package up that which they disapprove of, colour it black, and toss it in the stupid bin. It doesn’t matter what is correct or incorrect in a post-modern world, does it? Everything is relative; my truth is whatever I choose it to be, and whatever I can claim the largest number of supporters (real or fabricated) for. As if truth bowed its head to mere numerical superiority.
    Observe countless climate-related comments: “meme” usually implicitly declares the utterer as having little interest in intellectual enquiry. The world is divided between what resides in the stupid bin and what does not: but the latter may also contain memes. You know, the “approved” ones nobody wants to admit are memes because they are bad things only someone else subscribes to, right?
    I wonder what it must be like to inhabit this world of memes, one set coloured white, and the other black. The very thought of it makes me shudder. How could I live without my doubts, my need to explore and occasionally be surprised and delighted with what I find, whether or not it gainsays things I previously thought to be true?
    “Meme” folk don’t know what they are missing. Things are so much more uncertain, and therefore interesting, than they realise.

  37. Further to my post above, the US has the most modern and comprehensive temp. data of any region in the world. When S McIntyre found an error, GISS corrected their data set. When questioned about the error, Hansen said it was only from 2000-2006 and only 0.15°C (0.15 over 7 yrs equates to 0.21 over ten years, or 2.1 per century – nearly all of the supposed scary warming).
    He also stated that the contiguous US was only 2% of the globe’s land mass, so the error had no significance.
    I contend that if the gold standard in data was wrong by 0.21 per decade, what hope the rest of the global data (not of US standard) is remotely close to accurate?

  38. Nick Stokes (18:29:14) : said;
    “This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.”
    ——-
    With just a casual observation that “silly meme” is a fatuous tautology, I’ll ask for an explanation of how an opinion that you concede might be true can still be described as a “silly meme”.
    A meme that I find silly is that every objection made to the methods, models or mangled data of the climate orthodoxy can be dismissed by the faithful as a “silly meme”.
    You didn’t choose to attempt to show us in what way this cogently written article erred. Bolivia would be a good place to start.

  39. I’ve just read through the first 26 comments and missed (if it is there) mention of all the “How not to measure temperature” posts that WUWT has had. I think Anthony and the volunteer crew of surface station scouts have shown that these data being collected and manipulated are not acceptable for the use to which they are being put, namely precision of fractions of a degree. They are useful for local weather reporting and classification of regions (see Köppen), as long as one is interested only in general aspects. The “global warming scam” takes these data way beyond their possibilities.

  40. ‘We exclude urban locations, use rural locations to establish a trend, ‘
    We golly be, if you have previously dropped the rural stations, and then you go and drop the urban stations… we didn’t do nuthin’. Must have been somebody else.

  41. Great item. Thanks.
    Perhaps someone noticed this confusion and already reported it:
    For example, Canada’s reporting stations dropped from 496 in 1989 to 44 in 1991, with the percentage of stations at lower elevations tripling while the numbers of those at higher elevations dropped to one. That’s right: As Smith wrote in his blog, they left “one thermometer for everything north of LAT 65.”
    Am I missing something here? As stated, there is a “confusion” of elevation and latitude in this statement. It says the higher elevation stations “dropped to one” and then goes on to say there is “one .. north of 65°” Which is correct?
    I think it should say “higher latitudes dropped to one.” Yes?
    Thanks,
    Clive

  42. Nick Stokes:
    “This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.”
    Of course you are right Nick. But dropping true rural stations in favor of city stations, or even in favor of rural airport stations, is a big problem. I mean, how the heck are you going to know what is going on in California if all you have is one thermometer in SF and 3 in LA. I don’t care if a thermometer is on a snowy mountain or a sunny beach, because you still only care about the anomaly. But I do care about the dropping percentage of rural stations.
    On this subject I have been looking into why GISS has an arctic area that is so much hotter than HadCRUT. Looking at the GISS web site you can see the devastation of stations in the Arctic. They have a map where you can click any location and it will show you the stations in that area. Click somewhere near the Arctic and you get a long list of stations. You go, “oh boy, now I can see what is happening”. But then you quickly realize that most of the stations that GISS is giving you are no longer reporting. What is left is maybe half a dozen or less that are used to extrapolate the entire Arctic. Then you have the problem that you are extrapolating from land to oceans that are sometimes ice covered and sometimes not. I looked at the SST anomalies for those oceans when they were not completely ice covered and when they were able to report SST values. The SST anomalies were much smaller than the land based extrapolations for those same areas. But GISS will not use SST if an area is ice covered for any part of the year. They would rather extrapolate from land.
    Then, when I look at the thermometers that are left in the Siberian part of the GISS record, the numbers look truly radical. For example, one has 4.5C of temperature rise in seven years. And that is being used to adjust temperatures all over the Siberian Arctic. The funny thing is that when you go and look at a thermometer like Barrow Alaska, and compare it to the Russian Vize thermometer, there is virtually no correlation at all. It’s all very crazy. I have no idea how they can get meaningful data out of what they are doing.
    Here is an example: go to the RC website and look at Jim Hansen’s thread called “If It’s That Warm, How Come It’s So Damned Cold?”. Now go down to figure three of his presentation. Actually, I’ll give you the link:
    http://www.realclimate.org/images/Hansen09_fig3.jpg
    Now look at the chart marked HadCRUT 2005 and the one marked GISS 2005. Look at the top row of gridcells in the HadCRUT chart. Notice that there are about 6 of them that actually show cooling. Now look up at the corresponding gridcells on the GISS chart. Notice that every one of those gridcells that is shown as having a cool anomaly in HadCRUT is shown as being maximally hot in GISS. Now tell me, can you trust those data sets?

  43. A little OT, but are there “rules of thumb” for average temperature drop per, say, every 1000 km as one goes from the equator to the poles over land? Over ocean? – similar to the average drop of 2.5 degrees F per 1000 ft in elevation. Also, is there a rule of thumb for temperature change as one travels inland from the ocean – the temperature direction of which would also depend on the season as well as the prevailing winds of the latitudinal zone?
    Do such rules play a part in their distant interpolations?

  44. Please indulge me. I have been following WUWT since well before Climategate. It’s been a visceral pleasure to read and watch the whole thread of lies called AGW unravel before my eyes. It’s like being in the front seats watching history being made.
    What troubles me is the sequelae. What will be the consequences of Climategate in the future? It is very troubling and dispiriting.
    Human history has been Germans against Jews, Japanese against Chinese, Turks against Armenians. Americans against the Indians. The list goes on; the Spanish against Central and Southern American natives, the French against the Vietnamese, the Russians against anyone unfortunate enough to get in their way, the Chinese against themselves (great leap forward), the Cambodians against themselves (the Killing Fields), the Vikings against Europe, the Romans against the Greeks, Goths, Vandals and the Cro-Magnons against the Neanderthals.
    This is not a pretty picture of our species and no race has clean hands; we only know about the bloody ones because they were educated enough to write of their exploits. There are plenty more who never learned to write before embarking on exploits of their own.
    Humans are a homicidal, virulently aggressive and bloody species. Viciousness, brutality and an enthusiastic proclivity towards procreation have kept us off the endangered species list. These traits have made us nature’s most successful species on earth. We have one more trait, though, that we don’t share with any other species on earth. It is our self-awareness.
    This trait balances everything animal and brutal in us. We are aware of our mortality, we imagine what may come after we die, we have a sense of wonder and a vision of truth and beauty. We have a pure and humble wish to know how we fit in nature’s scheme. We are artistic beings.
    Our art is us seeking beauty and truth. Some paint to find it, some write prose and poetry, some sculpt while others write songs. Philosophers tried to understand nature directly and they gave birth to all the sciences. The sciences seek truth and beauty through logic while other artists seek it through our senses.
    Most of us aren’t artistic. We depend on artists to see the truth and describe it to us through their work. We especially depend on the philosophical artists we call scientists because their work is logical and has particular power.
    We were let down by Climategate in ways we cannot fathom yet. We had faith in scientists to describe what truth and beauty is. We built entire belief systems on their words. We trusted them and they lied to us. That is why so many people still cannot come to terms with anthropogenic global warming being a lie; they trusted and they believed.
    Once trust is broken, one starts wondering what else is a lie. Like love gone wrong, one wonders afterward if any of it was true. Second-hand smoke, Alar, nuclear energy dangers, Radon, endangered species, DDT, spotted owl, swine flu, all of it. Scientists said so, but was any of it true? Scientists lied.
    Climategate has opened Pandora’s Box.

  45. A question and two comments:
    When they drop stations, does that drop go all the way back in the record? I.e., if a station is dropped in 1990, is it removed from all records prior to 1990 and those years recalculated? If not, then dropping a cooler site will clearly raise the warming. If they are, then every such change will require a complete recalculation of the past. Are there records of what effect the drops have had on past calculated average anomalies?
    Nick Stokes
    I understand that the GCM models suggest that air temp will increase at different rates depending on the altitude. Thus dropping “cooler” stations may not have an effect on the average anomalies, but dropping high altitude stations would. And I suspect that dropping “cooler” and dropping “higher” are practically equivalent. In addition, dropping inland versus near shore could also have an effect because of the mediating effect of oceans. Which way those would affect the average anomalies I don’t know, but I’m sure someone does.
    RE: What’s the motive?
    I don’t think any motive is needed here, nor does it help to attribute one. It just creates an us-versus-them and gets everyone angry. A combination of “Where is the grant money?” and confirmation bias can fully explain the results.
    “Where’s the money?” is fully human and we are all subject to that. It’s not evil, it’s reality. And I doubt that many would intentionally falsify data for that (rose colored glasses perhaps). But if you incentivise research into AGW you will get lots of people hunting for it. And that will increase the chances of finding it. And of course those that don’t find it won’t get any press or publication.
    Then comes (I believe the real villain here) confirmation bias.
    If a climate scientist wants to improve his/her results and they make a modification in the program/adjustment etc., and the calculated warming goes down, then they obviously goofed and it’s back to the drawing board. If it goes up, then they were probably right in making the correction. It stays. Not because the scientist wants the warming to go up, but because they don’t want to make a mistake, and since they are sure that AGW exists, that is a convenient error check. Likewise, when you are choosing stations or data you will look for the “that’s weird” stuff to correct or eliminate (whether manually or by program). If you are totally convinced of AGW, the data that gets looked at twice will be data that doesn’t show warming. Data that does will be accepted as it doesn’t ring any bells. When the data goes through many hands who all have the same basic beliefs (even if many try to fight the tendency), then there will be progressively more and more bias built in.
    So we don’t need to postulate evil intent here. Just scientists with strong beliefs who are doing the best they can. And who sometimes forget that a scientist’s duty, according to Richard Feynman, is to try to disprove their own hypothesis.

  46. Baa Humbug (21:59:00) :
    He also stated that the contiguous US was only 2% of the globe’s land mass, so the error had no significance.
    I contend that if the gold standard in data was wrong by 0.21 per decade, what hope the rest of the global data (not of US standard) is remotely close to accurate?

    Yes, an excellent point.

  47. Joseph D’Aleo and E. Michael Smith should be presenting this evidence as a written submission to The Science and Technology Committee of the British parliament in answer to question 3 “How independent are the other two international data sets?”

  48. Steve Schaper (21:57:43) :
    “And now, apparently, Hansen is on record as desiring the destruction of all cities, and the murder of billions of people.”
    One way to get rid of UHI for sure. Then we won’t have to argue about it.

  49. Note on the state of Green Science, as they continue the push to reduce carbon emissions based on flawed science.
    From a Reuters article on the push to use wind power to supply 20% of the eastern US power grid:
    “One megawatt of electricity can provide power to about 1,000 homes.”
    1 megawatt / 1000 homes = 1 kilowatt per home, 1000 watts.
    I have a coffeemaker listed as 1000 watts, and a microwave oven saying it is 1100 watts right on the front. So by eco-math, if I run both the microwave and the coffeemaker at the same time, will someone else’s house lose power? That isn’t even taking the lights into account. And heaven forbid if I use lights, both appliances, and both the furnace and the well pump turn on. I could cause a brownout for the neighborhood.
    Of course that’s just silly; the system is large, and people won’t all be using their appliances at the same time. Like at night, when TVs and home computers are powered up and supper is made. With the lights on. Nope, you’ll never see the vast majority of homes use power simultaneously like that. And especially not during the Super Bowl.
    And to think with mathematical genius like that on display, people wonder why we don’t trust their adjusted temperature numbers. Go figure.

  50. Now I get the Bolivia comments in past posts! One would hope that this will be put to the UK’s investigation as a huge fraud.
    What makes me sad is that there are a huge number of honest and expert groups of scientists and engineers at NASA that have always had my respect for what the Space Race has given us, including our ability to use this forum as a mark of our dissent!
    These people are being tainted by (and here I am wary of a snip from A. so I will moderate myself on his behalf) “someone” of religious persuasion who is dragging those good people into the mire of “climatology”. (if I am in error A, please feel free to “snip”)
    A huge T.H. to Meteorologist Joseph D’Aleo and computer expert E. Michael Smith for helping me wade through the disingenuous information that (I am ashamed to say) I once fell for!

  51. Here’s another piece of the jigsaw. Our nearest Reference Climate Station (Mackay) http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=033119&p_nccObsCode=36&p_month=13
    has records from 1908 (homogenized by GISS to 1951) but Te Kowai http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=033047&p_nccObsCode=36&p_month=13
    is only 10 km away, is a rural station whose land use and crop (sugar cane) has not changed, and has records from January 1908 with only a few gaps. Notice the difference? GISS calls it “Mackay Sugar Mill Station” and the homogenized record http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=501943670010&data_set=2&num_neighbors=1
    shows an adjustment of approximately -0.9C decreasing gradually over 80 years to zero, to make the temperature 100 years ago almost 1 degree cooler. This accounts for nearly all the 20th century warming. The hottest year was 1931, the hottest summer 1922-23.
    We need to publicise all the “anomalies” we can find. I am going to look at all Australian stations with a long history, especially rural ones, and see what I can find. How do you find out which stations are on or off the dataset?

  52. BTW Michael Larkin (21:58:39) : Thank you for your post, taken and inwardly digested, and followed by David Ball’s “’Tis but a scratch” comment. I would only add…
    “BLACK KNIGHT: I move for no man.” (remind us of anyone?) followed by…
    BLACK KNIGHT: I’ve had worse.
    Followed by (as the whole AGW thing slowly dissolves into Global Cooling) BLACK KNIGHT: Oh, oh, I see, running away then. You yellow
    b********! (expletive deleted for “A” sensibilities and quite right sir 😉 ) Come back here and take what’s coming to you.
    I’ll bite your legs off!
    It’s quite amazing how a little levity from a film fits the whole scenario, and at some point, when the film comes out, our children will be on their knees laughing at how the world made it past the CRU/Water Melons!
    I will (here in cold China) fall asleep with a thought to make me smile and science tells us..smiling keeps us warm 🙂

  53. I find this piece and the work behind it interesting, not because it is anything new to those of us who follow the subject but because it highlights something which is often left unsaid.
    In every single temperature measurement ever taken there is a margin for error. The thermometer (or whatever new fancy devices are called) is not guaranteed to be 100% accurate, it might be spot-on but it might not, a +/- is inherent in every single measurement since records began.
    The margin for error of a single device is what it is. Some will record a bit high, some a bit low, some might be over or under depending on factors that affect that particular device (such as humidity or temperature itself) no one can ever know precisely because by its nature there is no perfect device to act as a control. Nonetheless one can seek to reduce the margin for error by setting up a number of devices in close proximity. Place five thermometers in your garden and they will almost certainly not all record exactly the same temperature, but you can take an average of all five and be reasonably confident that the average is fair (albeit within a margin for error, that margin should be less than for one device alone).
    And then there is the problem of taking measurements at different times of day. Added to the measuring device’s inherent inaccuracy one adds the inaccuracy of the adjustment made to try to equate 10am on one day with 4pm the next day. The margin for error increases.
    The more different measuring devices used, the more the margin for error is likely to be trimmed. Again, we cannot be certain because they might all contain the same error but the inherent likelihood is that they do not.
    6,000 thermometers covering the whole world is an extraordinarily small number – roughly one for every 25,000 km2 of land. In addition to the margin for error in the measuring devices, the extrapolation of the few measurements over the whole land mass makes the concept of calculating “average global temperature” simply absurd. Reducing the number of devices used in the calculations turns the exercise into a farce.
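The claim that more devices trim the margin for error is the standard error of the mean, which a quick simulation makes concrete. The 0.5-degree per-instrument error below is an assumption for illustration only:

```python
import random
import statistics

random.seed(42)
TRUE_TEMP = 15.0
SIGMA = 0.5  # assumed per-instrument error, purely illustrative

def spread_of_mean(n, trials=2000):
    """Standard deviation of the average of n independent noisy
    thermometers, estimated over many simulated trials."""
    means = [
        statistics.fmean(random.gauss(TRUE_TEMP, SIGMA) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.pstdev(means)

one = spread_of_mean(1)   # close to SIGMA (about 0.5)
five = spread_of_mean(5)  # close to SIGMA / sqrt(5) (about 0.22)
```

The caveat in the comment stands: the averaging only helps if the instrument errors are independent, which is exactly what cannot be guaranteed when devices share a design or siting flaw.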

  54. I don’t understand why the surface temperature data are taken seriously at all.
    A reminder of one of the more ludicrous examples that has come to light.
    Melbourne Regional Office c 1965:
    http://1.bp.blogspot.com/_gRuPC7OQdxc/SrRvX7IjuRI/AAAAAAAAAco/LQ9J0ciTBPY/s400/heat+island+1960.jpg
    Melbourne Regional Office c 2007:
    http://wattsupwiththat.files.wordpress.com/2007/10/melbmetrolookingeast.jpg
    The adjacent buildings were built in the late 90s:
    http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=086071&p_nccObsCode=36&p_month=13

  55. philincalifornia (21:14:37) :
    How about they were thrown out because, in contrast to your theory, they did affect the reported warming trends …. downwards ??

    Well, that’s a new reason for dropping – do you have anything to support it? The Smith/Sheppard claim is different:
    “It seems that stations placed in historically cooler, rural areas of higher latitude and elevation were scrapped from the data series in favor of more urban locales at lower latitudes and elevations.”
    But OK, if you want to be conspiratorial and imagine someone is throwing out stations that affect warming trends, how would they do it? The past data from the stations remains in the record, so pruning won’t eliminate known warming. These devilish experts would have to anticipate future warming trends in individual stations. Harder than you think.
    Oliver Ramsay (22:08:40) :
    “I’ll ask for an explanation of how an opinion that you concede might be true can still be described as a “silly meme”.”

    What might be true is that the stations reduced are disproportionately cooler. I don’t know, and this article provides little evidence. What is silly is the argument that cooler sites means cooler global anomalies. It doesn’t; the anomalies reported are relative to each site’s mean. The Arctic, for example, has some of the most rapidly warming anomalies.
    As for Bolivia, that’s a classic silliness. The fact that Bolivia is high does not mean that a station there would produce a low anomaly. And this is an anomaly plot. In fact, there were stations in the regions around Bolivia, and that particular plot for November shows that they were consistently high over a wide region. I looked up the closest ones to Sucre, Bolivia. Here’s some info:
    Anom…Dist…..Alt…Name
    1.63C..346km..3410m..La Quiaca
    2.02C..561km….88m..Arica
    1.61C..564km…385m..Tacna
    3.21C..571km…189m..Marisca
    2.66C..595km…950m..Jujuy Aero
    High, low, in a hot month the anomalies are consistent, and consistently high.
    I’ll write more on this on my blog.

  56. Take 100 people at random from Yankee Stadium then measure how tall they are. Produce an average. Then a year later ask the 30 tallest people back. Produce an average of their height. Then proclaim that at the current rate of increase of the average height, humans will be as tall as a 10 story building by the end of the century.
    Seems logical to me.
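The stadium analogy is a selection-bias effect that is easy to simulate; the heights below are randomly generated, not real measurements:

```python
import random
import statistics

random.seed(1)
# Year 1: measure 100 random people (heights in cm, invented).
# Year 2: invite back only the 30 tallest and measure again.
heights = [random.gauss(175, 10) for _ in range(100)]
year1_avg = statistics.fmean(heights)
year2_avg = statistics.fmean(sorted(heights)[-30:])
growth = year2_avg - year1_avg
# growth is strongly positive even though nobody grew a millimetre --
# the "trend" comes entirely from how the sample was re-selected.
```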

  57. You really can do better, Nick. Apart from what Tilo has said, there is another problem with the GISS pruning: if the discarded stations are in warmer areas but have cooler trends than those remaining, then the overall global warming position is distorted. The reason is simple: a cooling or stationary trend in a warmer area has a greater effect on radiated energy than a rising trend in a cooler area, because irradiance is proportional to the 4th power of the absolute temperature. To compound this further, the GMST is calculated, as you say, by taking the mean of the combined average anomaly from the few stations left standing; on this basis the AGW effect is calculated from the 4th power of the GMST. But this negates the regional Stefan-Boltzmann effect I referred to; to preserve it, the correct treatment would be to derive the average value of the 4th power of temperature from each site. GISS and their GMST do not do this.
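The 4th-power point can be checked directly: averaging temperatures and then raising to the 4th power is not the same as averaging the 4th powers site by site. The two site temperatures below are illustrative, in kelvin:

```python
SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

temps_k = [250.0, 300.0]  # one cool site, one warm site (illustrative)

# Radiated power computed from the mean temperature...
mean_then_pow = SIGMA_SB * (sum(temps_k) / len(temps_k)) ** 4
# ...versus the mean of each site's radiated power:
pow_then_mean = SIGMA_SB * sum(t ** 4 for t in temps_k) / len(temps_k)
# pow_then_mean comes out larger (Jensen's inequality for a convex
# function), so the two treatments are not interchangeable.
```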

  58. Michael (18:32:21) :
    “I made a bumper sticker.”
    Michael, I don’t think you are doing any of us any service by mixing religion into this……

  59. Nick Stokes (18:29:14) :
    This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.
    1951-80 three stations – HOT,WARM,COLD to give calculated baseline = WARM
    1990 Remove COLD,WARM
    1991 HOT – (baseline) WARM = +ve anomaly
    2010 HOT – (baseline) WARM = +ve anomaly
    2050 HOT – (baseline) WARM = +ve anomaly
    Even under exactly the conditions of 1951-80 ..
    HOT – (baseline) WARM = +ve anomaly
    What you suggest might be true if we were comparing an individual station with its own 1951-80 mean OR if we had exactly the same group of stations (unchanged) as we had in 1951-1980. But neither is the case.
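The schematic in comment 59 can be made concrete with a toy calculation. This is a minimal sketch with invented numbers, only illustrating the commenter’s point: anomalies taken against a baseline built from a different mix of stations show spurious warming when the cold stations are dropped.

```python
# Toy illustration of comment 59: if anomalies are computed against a
# baseline mean built from a different mix of stations, dropping the
# cold stations creates a positive anomaly even when no station has
# actually warmed. All numbers here are invented for illustration.

baseline_stations = {"HOT": 20.0, "WARM": 12.0, "COLD": 4.0}  # 1951-80 station means
baseline = sum(baseline_stations.values()) / len(baseline_stations)  # 12.0 ("WARM")

# After 1990 only the HOT station survives; its climate is unchanged.
surviving_reading = 20.0

anomaly_vs_combined_baseline = surviving_reading - baseline
print(anomaly_vs_combined_baseline)  # +8.0 of apparent "warming", zero real change

# Done station-by-station (each against its OWN baseline), the anomaly vanishes:
anomaly_vs_own_baseline = surviving_reading - baseline_stations["HOT"]
print(anomaly_vs_own_baseline)  # 0.0
```

Whether the real pipeline behaves like the first or the second line is exactly what the two commenters are disputing.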

  60. ” Nick Stokes (18:29:14) :
    This “dropping cooler sites” is a silly meme. Whether true or not, what is used from the sites is anomaly data. Whether the mean is cooler or not does not affect the reported warming trends.”
    A “cool site” is not one where T is small or even negative (in °C). It is one where dT/dt is very small or even negative, a rural station e.g.

  61. photon without a Higgs (19:44:41) :
    I’d like to see that graph.

    Well, you can, or one very like it. Zeke at the Yale Climate Forum has plotted the record of anomaly temps for discontinued stations vs continued stations, from 1890 to about 1995. The discontinued stations were those that had a long record before being discontinued. There’s very little difference.

  62. Doug in Seattle (18:29:18) : I am not sure how this dropping out of cooler stations works. It is my understanding that when NOAA or GISS homogenizes for missing readings or stations they reach out to the nearest stations up to 1200 km distant and do their corrections based on the anomaly of the other stations – not the absolute temperature as seems to be inferred by D’Aleo and Smith.
    It’s a bit more complicated than that. For some individual stations, missing data will be filled in or records ‘homogenized’ from nearby (up to 1000 km) stations based on an “offset” (that could be called an anomaly of sorts) to that nearby set. Then the UHI “adjustment” is done, and that again looks up to 1000 km away. (That step also tosses out any record shorter than 20 years…) IFF it finds “enough” of those “nearby” 1000-km-away stations that are “rural” (but “rural” includes towns big enough to have some UHI and includes major airports; hey, nobody LIVES at the airport, so population is low…), it will make an average of a few of them, then adjust the history of the “urban” station to match what it thinks ought to be the historic relationship. But it often gets it very wrong, including adjusting in the wrong direction sometimes. Such as Pisa, Italy, where it gets the sign wrong by 1.4 C at some points in the past…
    http://chiefio.wordpress.com/2009/08/30/gistemp-a-slice-of-pisa/
    But there is also the entire “fill in the grid / box step” that can reach 1200 km away for a reference to fill in the grid / box. Please note: That ‘reference station’ is by this time a partial composite of 1000 km away UHI adjustments that can in part come from another 1000 km away ‘homogenizing’ … These steps are sequential… So while I would guess that it is unlikely to have a cascade, nothing prevents it. Furthermore, as you reduce the number of stations, the code must “reach” further to build a reference set.
    Now the baseline is built with a large number of stations, many of which are more rural and more pristine than the survivors. Probably the easiest illustration of that is airports. As of 2009, the percentage of GHCN stations located at airports is 41%
    http://chiefio.wordpress.com/2009/08/26/agw-gistemp-measure-jet-age-airport-growth/
    somehow I don’t think there were that many airports in 1900 … nor were they running tons of kerosene through their jet turbines in 1950…
    And yes, airports ARE used as rural:
    http://chiefio.wordpress.com/2009/08/23/gistemp-fixes-uhi-using-airports-as-rural/
    In fact, the “heat” correlates better with jet fuel usage than with CO2…
    http://chiefio.wordpress.com/2009/12/15/of-jet-exhaust-and-airport-thermometers-feed-the-heat/
    Which isn’t all that surprising when you start breaking it down by country:
    http://chiefio.wordpress.com/2009/12/08/ncdc-ghcn-airports-by-year-by-latitude/
    and find things like percentage of sites at airports:
    Quoting from the link
    NCDC GHCN – Airports by Year by Latitude
    This is a bit hobbled by the primitive data structure of the “station inventory” file. It only stores an “Airstation” flag for the current state. Because of this, any given location that was an open field in 1890 but became an airport in 1970 will show up as an airport in 1890. Basically, any trend to “more airports” is understated. Many of the early “airports” are likely old army airfields that eventually got a full airport added in later years.
    […]
    The Pacific, New Zealand, and Australia
    These are “latitude bands” in degrees. SP is South Pole and NP is “from 10 degrees up to the North Pole”, or “anything above N 10 degrees”. DArPct is “Decade Airport Percent”. So you can see what percentage of stations are at airports in each latitude band, but that far-right number is the most interesting: it is the “total percentage airports” in that decade.

          Year SP -35  -30  -25  -20  -15  -10   -5    5   10  -NP
    DArPct: 1939  2.4  2.4  1.4  2.3  2.4  0.6  0.7  1.1  1.8  1.0 16.1
    DArPct: 1949  3.9  3.8  2.6  3.1  3.1  1.0  1.0  1.3  1.1  1.0 21.8
    DArPct: 1959  4.6  4.1  3.1  3.9  3.1  1.8  2.8  3.8  4.5  4.5 36.1
    DArPct: 1969  4.9  4.5  2.8  4.0  2.9  2.6  5.8  8.3  4.1  4.2 44.2
    DArPct: 1979  5.6  5.3  2.9  4.0  3.7  3.3  3.6  5.4  3.6  3.4 40.8
    DArPct: 1989  5.9  6.3  3.4  4.6  4.5  3.8  3.6  4.3  3.6  2.6 42.7
    DArPct: 1999  7.6  7.2  4.0  6.3  5.1  3.9  3.2  5.3  6.1  3.5 52.2
    DArPct: 2009 10.7  8.3  5.1  7.8  5.6  4.8  4.1 11.1  9.4  4.5 71.3
    

    end quote
    So, how to “pick those places which will have a warming bias to the anomaly”? How about, oh, I don’t know, maybe taking the 16% of places with grassy fields or small tarmac runways and infrequent airplanes (anyone remember that Pan Am had a large Pacific fleet in the ’30s and ’40s … of SEA PLANES, because there were not enough runways?…) and turning them into multiple 10,000-foot concrete runways with hectares of tarmac, tons of Jet-A burned on takeoffs, and a fleet of cars, buses, and trucks? Oh, and increasing the percentage to 71% today….
    How about South America where thermometers fled the mountains (like in Bolivia).
    South America
    Has the migration of thermometers from the mountains to the beach also put them at airports?
    Oh, one ‘defect’ in this report is that the “airstation” flag is only available for present status. So if a place is an airport now, it will ALSO be shown as an airport for all prior time. That means these numbers are actually understating the problem. For example, I’m pretty sure there were not 35% airports in Latin America in the decade ending in 1919…

          Year SP -50  -40  -35  -30  -25  -20  -15  -10   10  -NP
    DArPct: 1919  8.5  6.4  2.1  2.1  6.0  4.3  2.6  0.0  3.6  0.0 35.5
    DArPct: 1929  8.1  6.1  2.0  2.0  6.1  4.0  4.0  0.0  4.0  0.0 36.4
    DArPct: 1939  6.1  6.3  6.5  7.7  9.1  2.8  3.1  0.9  5.9  0.0 48.5
    DArPct: 1949  4.4  4.6  7.4  7.0 10.7  4.4  3.5  0.9  8.5  0.0 51.4
    DArPct: 1959  2.9  4.2  5.5  8.6  7.1  4.9  7.4  3.8 13.0  3.9 61.3
    DArPct: 1969  2.3  5.1  4.2  8.6  5.7  4.7  6.1  6.1 18.3  3.3 64.4
    DArPct: 1979  2.5  5.0  4.9  7.8  6.8  4.7  5.3  6.2 18.8  3.4 65.4
    DArPct: 1989  3.0  5.5  5.4  9.6  6.8  4.6  5.7  5.4 18.4  3.5 67.9
    DArPct: 1999  2.7  6.6  6.0 11.7  7.1  4.8  3.1  3.0 19.8  4.9 69.8
    DArPct: 2009  1.9  6.5  6.1 12.1  7.6  4.9  2.6  2.3 18.4  5.4 67.9
    

    But that 68% in 2009 will be fairly accurate. So just where does GISTemp look to find those “rural” stations for UHI? And just where can those cold 1950s and 1960s Bolivian mountain baselines pick up a comparison “anomaly”… Oh, at the airport in Peru…
    FWIW, the USA airport percentage today in GHCN is just shy of 92%. (Please note that while GIStemp will blend in some USHCN.v2 stations that have had loads of “adjustments” made, other temperature series will not. Oh, and from May 2007 until just last November 2009, GIStemp had no USHCN stations for the “then present”, as they used USHCN version 1, which ‘cut off’ in 2007. So these numbers were what was used for all those world-record-hot claims they’ve not retracted.) In the following table, “LATpct” is the percentage of stations in a given latitude band in a SINGLE YEAR, not a decade ending, while “AIRpct” is the percentage of airports in that latitude band. Once again, the far-right number is the total percent in that year for all latitudes. Particularly intriguing is how the Decade Airport Percent is still a modest 30.7%, and only by looking at the years in that decade do we see the profound blow-up of the airport percentage in the last few years.
    Quoting from the link:
    But it masks the rather astounding effect of deletions in GHCN without the USHCN set added in:
    The United States of America

          Year SP  30   35   40   45   50   55   60   65   70  -NP
    LATpct: 2006  3.7 18.3 29.5 33.2 14.4  0.0  0.4  0.3  0.1  0.1 100.0
    AIRpct:       1.3  4.0  6.3  6.7  3.2  0.0  0.4  0.3  0.1  0.1 22.4
    LATpct: 2007  8.2 17.2 28.4 26.9 11.2  0.0  3.7  3.0  0.7  0.7 100.0
    AIRpct:       8.2 15.7 27.6 23.1  9.0  0.0  3.7  3.0  0.7  0.7 91.8
    LATpct: 2008  8.8 16.9 28.7 26.5 11.0  0.0  3.7  2.9  0.7  0.7 100.0
    AIRpct:       8.8 15.4 27.9 22.8  8.8  0.0  3.7  2.9  0.7  0.7 91.9
    LATpct: 2009  8.1 17.8 28.1 26.7 11.1  0.0  3.7  3.0  0.7  0.7 100.0
    AIRpct:       8.1 16.3 27.4 23.0  8.9  0.0  3.7  3.0  0.7  0.7 91.9
    DLaPct: 2009  4.3 18.4 29.5 32.5 13.6  0.0  0.7  0.9  0.2  0.1 100.0
    DArPct:       2.1  5.7  8.8  8.9  3.7  0.0  0.6  0.8  0.2  0.1 30.7
    

    For COUNTRY CODE: 425
    Yup, just shy of 92% of all GHCN thermometers in the USA are at airports.
    So, want to know why it’s still “record hot” with tons of snow outside your window? Well, go look at the airport where they are busy de-icing airplanes and clearing all the snow away…

    While this doesn’t deal with UHI (which I think is quite important), it should not necessarily inject a warm bias unless they are cherry-picking the nearest stations for UHI. But that doesn’t appear to be what D’Aleo and Smith are alleging.

    Or unless they are picking airports… for example.

  63. cohenite (01:03:06) :
    Coho, the fourth power stuff is a distraction. It’s the Kelvin temperature, and the proportional change from temperature anomalies is small. BB radiation at 290K (17C) is 401.03 W/m2. At 291, it increases by 5.56. At 292, by a further 5.62. From 292 to 293 it increases by 5.68. It’s so close to linear it doesn’t matter.
    Christopher Hanley (00:48:19) :
    Those photos of the Melbourne site give a false impression. The street between the station and the building is Latrobe street. It’s a very wide street, and the building is further set back. The building is to the south, and does not shade the station. There are legitimate worries about the amount of traffic, but the building isn’t the problem.
    Incidentally, soon after your 1965 pic, another large building was on that site, which was then replaced by the one pictured. There is parkland to the north.

  64. Baa Humbug (18:31:57) : So let’s get this straight. Is the following hypothetical example correct?
    Yes, substantially. Though to be even more correct, you would have your three stations reporting, in the 1951-80 baseline period:
    3 stations measure 11deg 12 deg 13deg averaging 12deg.
    And you would “adjust” them such that the older period was 10.5 11.5 12 degrees, average of 11.3 (they do have this habit of always “correcting” the past to be colder…)

    Then we drop the first 2 “cooler” ones leaving us with the third 13deg. Which of itself is 1deg above the average.

    Though we would do a broken UHI “correction” on it, call it less than Pisa, so… about 13.5 degrees.
    So now you get an “anomaly” of 13.5 – 11.3 = 2.2 (Oh NO!! SKY is burning up as it falls!!)

    You gotta hand it to them; they found a way to make a station read warming against itself just beautifully (except they got caught).

    Basically, IMHO, yes. For each of the changes I’ve shown in this example, I can point at real cases where a very similar thing is done in the NOAA / NCDC data or in GIStemp. Pisa gets a ‘wrong way’ UHI adjustment of 1.4 C in the past. USHCN is “readjusted” to make USHCNv2, and blink charts show added warming trends. Thermometers in real rural areas are removed and those in mountains are substantially extinct, but low-altitude airports are multiplying like rabbits.
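The hypothetical above can be checked arithmetically. This is a sketch of that same worked example, using only the figures from the exchange itself (no real station data):

```python
# Numeric sketch of the hypothetical in comment 64. All figures come
# from the example in the thread, not from any real station record.

baseline_readings = [10.5, 11.5, 12.0]  # three stations after the past was "cooled"
baseline = round(sum(baseline_readings) / 3, 1)  # 11.3 deg

current = 13.0   # only the warmest station survives the pruning
current += 0.5   # a wrong-way "UHI correction" (cf. the Pisa example), to 13.5

anomaly = round(current - baseline, 1)
print(anomaly)  # 2.2 deg of apparent warming from selection and adjustment alone
```

Every step here mirrors one claimed mechanism: cooling the past, dropping the cooler stations, and a wrong-way UHI adjustment.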
    Clive (22:19:53) : Perhaps someone noticed this confusion and already reported it:
    For example, Canada’s reporting stations dropped from 496 in 1989 to 44 in 1991, with the percentage of stations at lower elevations tripling while the numbers of those at higher elevations dropped to one. That’s right: As Smith wrote in his blog, they left “one thermometer for everything north of LAT 65.”
    Am I missing something here? As stated, there is a “confusion” of elevation and latitude in this statement. It says the higher elevation stations “dropped to one” and then goes on to say there is “one .. north of 65°” Which is correct?
    I think it should say “higher latitudes dropped to one.” Yes?

    I think he is mixing two things, both of which happen. The average Altitude drops as the Canadian Rockies are erased AND the northern latitudes are erased leaving ONE above latitude 65 N at Eureka…
    The “by altitude” report:
    http://chiefio.wordpress.com/2009/11/13/ghcn-oh-canada-rockies-we-dont-need-no-rockies/
    The “by latitude”:
    http://chiefio.wordpress.com/2009/10/27/ghcn-up-north-blame-canada-comrade/

  65. Joseph D’Aleo preliminary report says:-
    * China had 100 stations in 1950, over 400 in 1960 then only 35 by 1990.
    I find that hard to believe, that’s about 1 station per province in China, or similar to putting 1 station in the middle of England to cover the whole of the United Kingdom.
    Similar for Canada:-
    * In Canada the number of stations dropped from 600 to 35 in 2009.
    Am I understanding this properly?

  66. martyn (03:03:52) : Joseph D’Aleo preliminary report says:-
    * China had 100 stations in 1950, over 400 in 1960 then only 35 by 1990.
    I find that hard to believe, that’s about 1 station per province in China, or similar to putting 1 station in the middle of England to cover the whole of the United Kingdom.
    Similar for Canada:-
    * In Canada the number of stations dropped from 600 to 35 in 2009.
    Am I understanding this properly?

    The stations, in most cases, still exist and are still recording data (for the local country BOMs) but when the data get to NOAA / NCDC to create the GHCN data set, they drop about 90% of it on the floor. Especially the cold ones 😉
    see:
    http://chiefio.wordpress.com/2009/10/28/ghcn-china-the-dragon-ate-my-thermometers/
    for the China numbers. I get 34 in 2007, rising to 73 in 2008, but did not have 2009 numbers when I did this table.
    Other countries here:
    http://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/
    But yes, the number of thermometers in the “composite instrument” we use to find the world temperature wanders all over by about a factor of 10 between the baseline and “now” in most countries.
    Oh, and the notion that each station has individual anomalies calculated, so it doesn’t matter what stations you use, is just broken on the face of it. Look again at the anomaly maps. Notice that “Bolivia Exists”. Since it has had no data since 1990, it can’t very well have an anomaly calculated from that non-existent data… There is a “sort of an anomaly” used to calculate the homogenizing, and another “sort of an anomaly” (really an average of about a dozen stations) used to calculate UHI. But when it comes time to “box and grid”, since many whole boxes contain NO stations at all, and many others have a set of ‘a few’, there are some peculiar shenanigans with ‘zonal means’ calculated using large buckets of stations, and offsets calculated during one phase of the PDO are used to figure what the offset might be during the present (half the time different) phase of the PDO. Not like that would make any difference….
    http://chiefio.wordpress.com/2009/03/02/picking-cherries-in-sweden/
    shows that the GIStemp “baseline” is set in a nice cold dip in the middle of one phase of a long duration ripple (that looks to me to be PDO or related?) so getting a ‘cold anomaly’ relative to it would require a return to the Little Ice Age bottom…
    While we’re making our ‘dream list’, I’d like to see the anomaly maps all made with a baseline of 1930-1960 … since folks claim it makes no difference…

  67. Just one question: why the hell do they include urban stations? The Peter video shows their data is garbage. There is adequate rural data; all urban data should be thrown out, or two separate data sets should be used, urban and rural.

  68. kadaka (23:46:08) :
    “One megawatt of electricity can provide power to about 1,000 homes.”
    1 megawatt / 1000 homes = 1 kilowatt per home, 1000 watts.

    As you point out, the average house uses about 3 to 5 kW, with peaks to about 50% of its service rating, which is 220 volts × 200 amps for a modern house, or 44 kW; half that is 22 kW.
    But it is worse than you thought: on average the windgen provides only 1/3 of its nameplate rating. Sometimes you get the full MW. And sometimes (maybe for days) you get zero.
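The arithmetic in the exchange above is easy to verify. A quick check, using only the figures the two commenters cite (the 1/3 capacity factor and 220 V × 200 A service rating are their claims, not mine):

```python
# Checking the power arithmetic from comments 68: nameplate claims vs.
# household service rating and the claimed wind capacity factor.

homes_per_mw = 1_000
kw_per_home = 1_000 / homes_per_mw       # the "1 MW per 1,000 homes" claim = 1 kW each

service_rating_kw = 220 * 200 / 1_000    # 44.0 kW panel rating for a modern house
half_rating_kw = service_rating_kw / 2   # 22.0 kW peak, per the comment

capacity_factor = 1 / 3                  # windgen average vs. nameplate (claimed)
avg_output_kw = 1_000 * capacity_factor  # a 1 MW turbine averages ~333 kW

print(kw_per_home, service_rating_kw, half_rating_kw, round(avg_output_kw))
```

So the “1,000 homes per megawatt” figure assumes 1 kW per home, well under typical average use, before even applying the capacity factor.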

  69. “Coho, the fourth power stuff is a distraction. It’s the Kelvin temperature, and the proportional change from temperature anomalies is small. BB radiation at 290K (17C) is 401.03 W/m2. At 291, it increases by 5.56. At 292, by a further 5.62. From 292 to 293 it increases by 5.68. It’s so close to linear it doesn’t matter.”
    No, it’s not linear, as Lubos Motl has shown; the GMST is 288K [15C]; the 4th power of 288 × the SB constant = 390.08 W/m2, which fits in with the K&T diagram. If you regionalise that into 4 climate zones at 313K [40C], 293K [20C], 283K [10C] and 263K [-10C], even though that still averages 288K, if you take the average of each 4th power you will get 399.26 W/m2, a difference of 9 W/m2. If you use many individual stations the variation is staggering; apart from making the concept of a GMST look ridiculous, it also makes any reduction of utilised stations suspect.
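The four-zone figures above can be reproduced directly. A short sketch, using the standard Stefan-Boltzmann constant and the commenter’s own illustrative zone temperatures:

```python
# Reproducing the fourth-power arithmetic in comment 69: the SB flux of
# the mean temperature vs. the mean of the zonal fluxes.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

zones_K = [313.0, 293.0, 283.0, 263.0]  # 40C, 20C, 10C, -10C
gmst = sum(zones_K) / len(zones_K)       # 288 K, i.e. 15C

flux_of_mean = SIGMA * gmst**4           # sigma * (mean T)^4
mean_of_flux = sum(SIGMA * t**4 for t in zones_K) / len(zones_K)  # mean of sigma*T^4

print(round(flux_of_mean, 2))                 # 390.08 W/m2
print(round(mean_of_flux, 2))                 # 399.26 W/m2
print(round(mean_of_flux - flux_of_mean, 2))  # 9.18 W/m2 gap
```

Both numbers in the comment check out; the disagreement with Nick Stokes is over whether this zonal-scale gap matters for small anomalies, not over the arithmetic.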

  70. Another theory: Since 1999, really since Y2K, most thermometers have been made in China. While, in fact, actual temperatures have been declining for the past ten years, the world’s thermometers say otherwise. Reason: the spec sheet for the calibration and printing equipment was written by none other than Richard Somerville, a distinguished professor emeritus and research professor at Scripps Institution of Oceanography, UC San Diego.
    Well, it is a theory, and people will invest trillions on a good theory, right? Now we just have to prove it, or do we? Do you think MPs and MCs could hide behind this? An unproven theory? They are dupes. They were duped. And they do need a simple, dupy reason to change their stripes, don’t they?

  71. 3×2 (01:44:49) :
    What you suggest might be true if we were comparing an individual station with its own 1951-80 mean OR if we had exactly the same group of stations (unchanged) as we had in 1951-1980. But neither is the case.

    I believe the first is the case, and that is how they are calculated. Do you have information to the contrary?
    Alexej Buergin (02:16:09) :
    A “cool site” is not one where T is small or even negative (in °C). It is one where dT/dt is very small or even negative, a rural station e.g.

    That’s an unusual definition. But what the article says is just:
    “It seems that stations placed in historically cooler, rural areas of higher latitude and elevation were scrapped from the data series in favor of more urban locales at lower latitudes and elevations.”

  72. cohenite (04:35:18) :
    It’s true that the big changes between tropical and arctic have a significant fourth power nonlinear effect. It doesn’t make a global average temperature ridiculous – it’s the logical measure of heat content. It just means that if you want to calculate global radiation, you have to average T^4, which is easy enough.
    But my point is that it has negligible influence on climate changes within the range of anomalies – or changes that you’d get by replacing one station with one from the same climate zone.

  73. Nick Stokes (04:32:21) :
    E.M.Smith (03:31:14) :
    “While we’re making our ‘dream list’, I’d like to see the anomaly maps all made with a baseline of 1930-1960 … since folks claim it makes no difference…”
    Your dream come true.

    Not quite. One of a few thousand…

  74. We need to hold our elected officials accountable. This international fraud needs to be investigated and brought into the light of day. Climate change is the biggest hoax and conspiracy in the history of the world.

  75. I understand this thread has concentrated on the effects on published temperatures from the massive filtering of data. The thread has been exhaustive in demonstrating the warming bias it creates.
    My question: Does GISS at any point declare a deletion officially? Does it give a reason for the dropping of a thermometer?

  76. Here is the link to the part of the GIStemp code that does the final anomaly map creation:
    http://chiefio.wordpress.com/2009/03/07/gistemp-step3-the-process/
    It’s been a while since I read this chunk last, but as I read it now, it looks like it is doing things not ‘station to station’ but ‘station to averages’ for computing anomalies. These are just comments from the code, but they give you the flavor of it.
    from:
    http://chiefio.wordpress.com/2009/03/07/gistemp-step345_tosbbxgrid/
    C**** The spatial averaging is done as follows:
    C**** Stations within RCRIT km of the grid point P contribute
    C**** to the mean at P with weight 1.- d/1200, (d = distance
    C**** between station and grid point in km). To remove the station
    C**** bias, station data are shifted before combining them with the
    C**** current mean. The shift is such that the means over the time
    C**** period they have in common remains unchanged (individually
    C**** for each month). If that common period is less than 20(NCRIT)
    C**** years, the station is disregarded. To decrease that chance,
    C**** stations are combined successively in order of the length of
    C**** their time record. A final shift then reverses the mean shift
    C**** OR (to get anomalies) causes the 1951-1980 mean to become
    C**** zero for each month.
    C****
    C**** Regional means are computed similarly except that the weight
    C**** of a grid box with valid data is set to its area.
    C**** Separate programs were written to combine regional data in
    C**** the same way, but using the weights saved on unit 11.
    Now that looks to me like a bunch of stations get shifted all over before they are combined and turned into a “grid box”… Not exactly comparing one station to itself in the past.
    The way “anomalies” are used in GIStemp is not quite what you would ever expect…
    Lots of weighting and shifting and combining and …
    Oh, and the temperatures have had a bit of musical chairs too:
    trimSBBX.v2.f
    The Header:
    C**** This program trims SBBX files by replacing a totally missing
    C**** time series by its first element. The number of elements of the
    C**** next time series is added at the BEGINNING of the previous record.
    Oh, and it stirs in some “zonal means” too:
    The FORTRAN zonav.f
    C*********************************************************************
    C *** PROGRAM READS BXdata
    C**** This program combines the given gridded data (anomalies)
    C**** to produce AVERAGES over various LATITUDE BELTS.

    C**** DATA(1–>MONM) is a full time series, starting at January
    C**** of year IYRBEG and ending at December of year IYREND.
    C**** WT is proportional to the area containing valid data.
    C**** AR(1) refers to Jan of year IYRBG0 which may
    C**** be less than IYRBEG, MONM0 is the length of an input time
    C**** series, and WTR(M) is the area of the part of the region
    C**** that contained valid data for month M.
    Note that “WT” is a weighting function and that data are made area proportional. There is some magic sauce to fill in missing bits…
    C**** JBM zonal means are computed first, combining successively
    C**** the appropriate regional data (AR with weight WTR). To remove
    C**** the regional bias, the data of a new region are shifted
    C**** so that the mean over the common period remains unchanged
    C**** after its addition. If that common period is less than
    C**** 20(NCRIT) years, the region is disregarded. To avoid that
    C**** case as much as possible, regions are combined in order of
    C**** the length of their time record. A final shift causes the
    C**** 1951-1980 mean to become zero (for each month).
    C****
    C**** All other means (incl. hemispheric and global means) are
    C**** computed from these zonal means using the same technique.
    C**** NOTE: the weight of a zone may be smaller than its area
    C**** since data-less parts are disregarded; this also causes the
    C**** global mean to be different from the mean of the hemispheric
    C**** means.
    So all those means against which all these anomalies are taken can themselves wander around a lot… Note particularly that “data-less parts are disregarded” in making the zonal means.
    Remember that next time someone says that dropping out, oh, high cold mountains will not change the hemispheric or global mean… and thus it’s anomaly.
    The devil, literally, is in the details…
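The distance weighting described in the FORTRAN comments quoted above can be sketched in a few lines. This is a simplified illustration of only the “weight 1 − d/1200” rule, with invented anomaly values; the real code also does the bias-removal shifting and 20-year screening described in the comments:

```python
# Minimal sketch of the spatial weighting described in the GIStemp
# FORTRAN comments: stations within RCRIT km of a grid point contribute
# with weight 1 - d/1200. Anomaly values below are invented.

RCRIT = 1200.0  # km

def station_weight(distance_km: float) -> float:
    """Weight of a station at the given distance from the grid point."""
    return max(0.0, 1.0 - distance_km / RCRIT)

def grid_point_mean(stations):
    """Weighted mean anomaly at a grid point from (anomaly, distance_km) pairs."""
    pairs = [(a * station_weight(d), station_weight(d)) for a, d in stations]
    total_w = sum(w for _, w in pairs)
    return sum(x for x, _ in pairs) / total_w if total_w else float("nan")

# One station 100 km away and one 1,000 km away: the distant station
# still carries weight 1/6, so it pulls the grid-box value noticeably.
stations = [(0.2, 100.0), (1.5, 1000.0)]
print(round(grid_point_mean(stations), 3))  # 0.4
```

This is why the mix of surviving “reference” stations within 1200 km matters: a distant warm-anomaly station is down-weighted, not excluded.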

  77. E.M.Smith (05:37:25) :
    GIStemp says it uses GHCN v2.mean. And this is already expressed as anomalies for each station, as downloaded from NOAA. In fact, it isn’t clear how you can even recover the temperature in Celsius from that file.

  78. “M. Simon (04:25:40) :
    kadaka (23:46:08) :
    “One megawatt of electricity can provide power to about 1,000 homes.”
    1 megawatt / 1000 homes = 1 kilowatt per home, 1000 watts.
    As you point out, the average house uses about 3 to 5 kW, with peaks to about 50% of its service rating, which is 220 volts × 200 amps for a modern house, or 44 kW; half that is 22 kW.
    But it is worse than you thought: on average the windgen provides only 1/3 of its nameplate rating. Sometimes you get the full MW. And sometimes (maybe for days) you get zero.”
    German windpower has managed to increase its output from 17% of the nominal performance to about 21%. We have to be prepared for windy days when they suddenly deliver a 100% surge. In order to stabilise our networks, we have to have enough gas-powered plants with fast reaction times. Enough means: Total capacity “running reserve” on standby must be as big as wind+solar together. So for each installed GW renewables we need to install 1 GW “running reserve”.
    Keep in mind that wind power output rises with the third power of wind velocity. This makes for violent spikes. It’s a tough job for the transmission lines and the standby gas plants. And expensive. We pay about 30 US cents or 20 Eurocents per kWh. I don’t wish that on you. The leftists here say it’s still too cheap.
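The cube law mentioned above explains the spikes. A small sketch, with an assumed rated speed of 12 m/s for a hypothetical 1 MW turbine (the rated speed is my illustration, not a figure from the comment):

```python
# The cube law from comment 78: wind power scales with the third power
# of wind speed, up to the turbine's nameplate rating. The 12 m/s rated
# speed here is an assumed, illustrative figure.

RATED_KW = 1_000.0   # nameplate of a hypothetical 1 MW turbine
RATED_SPEED = 12.0   # m/s at which it reaches nameplate (assumption)

def output_kw(v: float) -> float:
    """Cube-law output in kW, capped at nameplate above rated speed."""
    return min(RATED_KW, RATED_KW * (v / RATED_SPEED) ** 3)

# Halving the wind speed cuts output to one eighth; hence the violent spikes.
for v in (6.0, 9.0, 12.0, 15.0):
    print(v, round(output_kw(v), 1))
```

So a swing between 6 m/s and 12 m/s is a swing between 12.5% and 100% of nameplate, which is the load-balancing problem the comment describes.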

  79. Carlos RP (05:26:33) : I understand this thread has concentrated on the effects on published temperatures from the massive filtering of data. The thread has been exhaustive in demonstrating the warming bias it creates.
    My question: Does GISS at any point declare a deletion officially? Does it give a reason for the dropping of a thermometer?

    Well, no. But the details are interesting…
    If you just look at the station inventory, there are some 6000 ish stations. All of them are there, being used, see?
    This comes from NOAA / NCDC (not GISS) as the GHCN data set (and as the USHCN for US sites). Now NOAA / NCDC are the ones that drop cold thermometers on the floor, but only in the recent part of the record, since about 1989. Again, not GISS.
    So the station is “being used”, just only in the past… so it isn’t really ‘dropped’, just sort of truncated is all…
    I suspect a certain amount of “mutt and jeff” going on here, but it is not something that can be proven without email logs and meeting notes.
    So Hansen over at NASA / GISS can say he did nothing to drop stations… And Peterson? over at NOAA / NCDC can say he does nothing to make anomaly maps…
    Also, the “drop shorter than 20 years” is built into GIStemp, so GISS does drop a load of stations, but does not list them by name. After all, each run will be a different set as some 19 year 11 month station “comes of age”.
    One is left to wonder what the impact of massive equipment upgrades and changes at airports has done to chop up the record into disposable bits, but I’ve not gotten to that part of the investigation… yet… But if a move from a Stevenson Screen out by the field to an automated pole on a rope next to the hangar causes a change of ‘minor station number’, as it ought to, then that new station will not contribute to the GIStemp work product for 20 years… Some other station will be used for ‘in fill’ instead…
    Just a thought…
    So AFTER NOAA / NCDC have cut out 4500 stations, THEN NASA / GISS via GIStemp chuck out any station records shorter than 20 years. What’s left? Well, not much. And as noted above, the code has to go to bizarre lengths to fill in grid boxes with whatever it can dream up…

  80. How’s this?
    Dear Representative,
    The current staff at NOAA has distorted its global temperature report to create the illusion of warming:
    1. By dropping the majority of cooler rural data sampling sites until the cooler rural temperatures become outliers.
    2. Then, the few remaining cooler outliers were statistically reduced or “smoothed” to create a warming bias.
    3. Furthermore, NOAA used the reduced number of cherry-picked warm sites to fill in cooler parts of the planet instead of real data.
    Do not pass any legislation based on this flagrant manipulation of data. The people at NOAA responsible for this need to be removed from their posts.
    http://wattsupwiththat.com/2010/01/22/american-thinker-on-cru-giss-and-climategate/
    “Perhaps the key point discovered by Smith was that by 1990, NOAA had deleted from its datasets all but 1,500 of the 6,000 thermometers in service around the globe.
    Now, 75% represents quite a drop in sampling population, particularly considering that these stations provide the readings used to compile both the Global Historical Climatology Network (GHCN) and United States Historical Climatology Network (USHCN) datasets. These are the same datasets, incidentally, which serve as primary sources of temperature data not only for climate researchers and universities worldwide, but also for the many international agencies using the data to create analytical temperature anomaly maps and charts.
    Yet as disturbing as the number of dropped stations was, it is the nature of NOAA’s “selection bias” that Smith found infinitely more troubling.
    It seems that stations placed in historically cooler, rural areas of higher latitude and elevation were scrapped from the data series in favor of more urban locales at lower latitudes and elevations. Consequently, post-1990 readings have been biased to the warm side not only by selective geographic location, but also by the anthropogenic heating influence of a phenomenon known as the Urban Heat Island Effect (UHI).
    For example, Canada’s reporting stations dropped from 496 in 1989 to 44 in 1991, with the percentage of stations at lower elevations tripling while the numbers of those at higher elevations dropped to one. That’s right: As Smith wrote in his blog, they left “one thermometer for everything north of LAT 65.” And that one resides in a place called Eureka, which has been described as “The Garden Spot of the Arctic” due to its unusually moderate summers.
    Smith also discovered that in California, only four stations remain – one in San Francisco and three in Southern L.A. near the beach – and he rightly observed that
    It is certainly impossible to compare it with the past record that had thermometers in the snowy mountains. So we can have no idea if California is warming or cooling by looking at the USHCN data set or the GHCN data set.
    That’s because the baseline temperatures to which current readings are compared were a true averaging of both warmer and cooler locations. And comparing these historic true averages to contemporary false averages – which have had the lower end of their numbers intentionally stripped out – will always yield a warming trend, even when temperatures have actually dropped.
    Overall, U.S. online stations have dropped from a peak of 1,850 in 1963 to a low of 136 as of 2007. In his blog, Smith wittily observed that “the Thermometer Langoliers have eaten 9/10 of the thermometers in the USA[,] including all the cold ones in California.” But he was deadly serious after comparing current to previous versions of USHCN data and discovering that this “selection bias” creates a +0.6°C warming in U.S. temperature history.”
    And
    “But the real chicanery begins in the next phase, wherein the planet is flattened and stretched onto an 8,000-box grid, into which the time series are converted to a series of anomalies (degree variances from the baseline). Now, you might wonder just how one manages to fill 8,000 boxes using 1,500 stations.
    Here’s NASA’s solution:
    For each grid box, the stations within that grid box and also any station within 1200km of the center of that box are combined using the reference station method.
    Even on paper, the design flaws inherent in such a process should be glaringly obvious.
    So it’s no surprise that Smith found many examples of problems surfacing in actual practice. He offered me Hawaii for starters. It seems that all of the Aloha State’s surviving stations reside in major airports. Nonetheless, this unrepresentative hot data is what’s used to “infill” the surrounding “empty” Grid Boxes up to 1200 km out to sea. So in effect, you have “jet airport tarmacs ‘standing in’ for temperature over water 1200 km closer to the North Pole.”
    An isolated problem? Hardly, reports Smith.
    From KUSI’s Global Warming: The Other Side:
    “There’s a wonderful baseline for Bolivia — a very high mountainous country — right up until 1990 when the data ends. And if you look on the [GISS] November 2009 anomaly map, you’ll see a very red rosy hot Bolivia [boxed in blue]. But how do you get a hot Bolivia when you haven’t measured the temperature for 20 years?”
    Of course, you already know the answer: GISS simply fills in the missing numbers – originally cool, as Bolivia contains proportionately more land above 10,000 feet than any other country in the world – with hot ones available in neighboring stations on a beach in Peru or somewhere in the Amazon jungle.
    Remember that single station north of 65° latitude which they located in a warm section of northern Canada? Joe D’Aleo explained its purpose: “To estimate temperatures in the Northwest Territory [boxed in green above], they either have to rely on that location or look further south.”
    Pretty slick, huh?
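    The composition effect the quoted article describes can be illustrated with synthetic numbers (made-up values, not real station data): if the baseline average includes cool stations that later stop reporting, a simple average of the survivors shows spurious warming even when no station’s temperature changed at all.

    ```python
    import statistics

    # Synthetic annual means for two stations with NO underlying trend.
    cool_station = [-5.0] * 30   # e.g. a high-latitude / high-altitude site
    warm_station = [15.0] * 30   # e.g. a low-elevation urban site

    # Baseline computed while BOTH stations still report:
    baseline = statistics.mean(cool_station + warm_station)   # 5.0 degC

    # After the cool station is dropped, the "current" average
    # is built from the warm site alone:
    current = statistics.mean(warm_station)                   # 15.0 degC

    anomaly = current - baseline
    print(f"Spurious anomaly: +{anomaly:.1f} degC")  # despite zero real change
    ```

    Note that an anomaly method computed per station against each station’s own baseline, as defenders of the datasets argue elsewhere in this thread, largely avoids this particular artifact; the sketch only shows what happens when absolute averages are compared across a changing station mix.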

  81. Baa Humbug (21:52:03) :
    I’d rather get my info from the horse’s mouth.
    J. Hansen said the following in an Oz TV interview with T. Jones.
    TONY JONES: Okay, can you tell us how the Goddard Institute takes and adjusts these global temperatures because sceptics claim that urban heat centres make a huge difference; that they distort global temperatures and they make it appear hotter than it really is.
    So do you adjust, in your figures, for the urban heat zone effects?
    JAMES HANSEN: We get data from three different sources and we now, in order to avoid criticisms from contrarians, no longer make an adjustment.
    We exclude urban locations, use rural locations to establish a trend.

    The U.S. airport percentage in GHCN today is just shy of 92%.
    Once the majority of weather stations have been moved from cooler to warmer sites, that is it; they will all be on the same playing field. When might that have occurred? Around the mid-1990s, perhaps. Is the lack of warming since 1998 due mainly to weather station placement at airports? Perhaps since 1998 there is no UHI effect left in the data sets.

  82. “r (06:24:15) : […]”
    My suggestion: Give it structure. Give headlines for paragraphs. Start with
    INTRODUCTION
    End with
    CONCLUSIONS.
    Makes for better “quick-reading”.

  83. DirkH (06:02:08) :
    “M. Simon (04:25:40) :
    kadaka (23:46:08) :
    Re the German wind power generation, a study by engineers has suggested that running the backup power stations at less than optimum actually costs as much as wind power saves, due to inefficiencies in the backup stations.
    http://www.clepair.net/windsecret.html

  84. Nick Stokes:
    “There’s very little difference.”
    That is simply not true. After 1990 there was definitely a difference.
    “These devilish experts would have to anticipate future warming trends in individual stations. Harder than you think.”
    Not so hard. Just pick the ones where population growth is the strongest.
    In any case, maybe one of the reasons that we have had 12 years of no warming is because they can’t find any more stations to throw out.

    to Baa Humbug,
    who said: I’d rather get my info from the horse’s mouth.
    I think that this report shows that what the horse’s mouth is saying and what the horse’s *** is doing are two different things.

    I was looking at weather stations along the arctic, and I noticed that even there they make changes that affect the temperature. For example, in Svalbard they moved the weather station from a lonely radio station out on a point near the sea to an airport much nearer to town. Here is a picture of the airport.
    http://www.svalbard-images.com/photos/photo-longyearbyen-airport-002-e.php
    Somebody tell me: does that look like a weather station out at the end of the tarmac? Right side.

  87. Self:
    “That is simply not true. After 1990 there was definitely a difference.”
    Correction – I meant that after 1980 there was definitely a difference.
    Thank you self.

  88. Orson (20:49:37) :
    “Here’s the data set I would like to see plotted:
    “SINCE radiosonde (weather balloon) data have been used to validate MSU data sets, which we have going back over 30 years, THEN find the ground data that corresponds to these release points.”
    The surface temperature at launch is coded into every Radiosonde report. And the location as well as the elevation is known via the station ID. Google up the code — you may be surprised.

  89. DirkH,
    Good idea, can you give me an example?
    The idea here is to give people who are busy, who are not good at putting words together, who are not great writers, something easy to cut and paste that is also easy for the Senator’s staff to read and tally.
    Thanks : )

  90. Roger Sowell (20:26:29) :
    I’d like to see a blink graph of the two sets: the dropped and the retained.

  91. Remember,
    Our US government representatives work for us, its citizens. We elect them. We pay them. We are their bosses. We can fire them if they are not doing a good job.
    The employees of government organizations also work for us. If they are not doing a good job, they need to be fired also.
    I’m sure there are many other qualified people waiting to fill those positions.

    To help with the question above: I think the GHCN v2 data is available in Celsius at http://www.realclimate.org/index.php/data-sources/#Climate_data_raw
    This is what I’ve used to attempt to produce some GISS-style data on a constant-station basis. I thought that if the stations observed vary from year to year, it might be interesting to look at each year in terms of ONLY stations that reported in that year and the previous year AS WELL.
    Results on http://crapstats.wordpress.com/

  93. Nick Stokes (20:45:09) :
    Nick Stokes (20:50:40) :
    The purpose is exactly to avoid the issue spuriously raised here.
    but as long as it is reasonably OK
    The data is the property of the taxpayer. So sure it’s ‘OK’.
    Would you send a FOIA request to James Hansen and ask him for all the dropped data?

    Homogenization, as used by NOAA/GISS, incorporates the best features of the data manipulation and data correction techniques, well known by undergraduate physics students, of Finagle, Bougerre and Diddle (also known as Variable Constants or Constant Variables, your choice), techniques which have been refined over years of academic application in achieving pre-determined results.
    It has been suggested that the NOAA/GISS data has also been Pasteurized, which makes the data sterile in relation to the environment.
    I remember a book, How To Lie With Statistics. Perhaps the IPCC will publish a book, How To Lie About Climate.

  95. David Ball (22:28:21) :
    “silly meme”. Reminds of the black knight. “Tis but a scratch”!! 8^]
    Yes David, merely a flesh wound.
    🙂

  96. Tilo said,
    “In any case, maybe one of the reasons that we have had 12 years of no warming is because they can’t find any more stations to throw out.”
    LOL, how true! What will they do now? I’ve heard them resort to statements like “The cooling means that it is warming.”
    War is peace. Freedom is slavery. Ignorance is strength.— George Orwell

    There are many ways to look at temperature data over long periods of time, not just anomaly statistics. This should be done to determine the robustness of any one measure of increasing temperature. For example, if high and low records (high max, low max, low min, and high min records) are not showing trends, then one can say that warming or cooling is not robust across all measurements. The same can be said for proxies. And we already know that certain tree rings are not robust indicators of trends.
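    The robustness check suggested above — comparing the rates of running record highs and record lows — can be sketched with synthetic data (all values here are invented for illustration):

    ```python
    import random

    def count_records(series):
        """Count running record highs and record lows in a time series."""
        highs = lows = 0
        hi = lo = series[0]
        for x in series[1:]:
            if x > hi:
                highs += 1
                hi = x
            if x < lo:
                lows += 1
                lo = x
        return highs, lows

    random.seed(0)
    # A trendless series vs. one with a steady 0.05-per-step warming trend:
    trendless = [random.gauss(10.0, 1.0) for _ in range(100)]
    warming   = [random.gauss(10.0 + 0.05 * t, 1.0) for t in range(100)]

    # In a trendless series, new highs and lows occur at similar (declining)
    # rates; a genuine warming trend shows up as an excess of record highs.
    print("trendless:", count_records(trendless))
    print("warming:  ", count_records(warming))
    ```

    A measure of warming that shows up in the anomaly statistics but not in the record counts would, by this argument, be less robust than one that shows up in both.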

  98. Nick Stokes (20:45:09) :
    Nick Stokes (20:50:40) :
    It’s been said that the Medieval Warm Period wasn’t global, that it was exclusive to Greenland, and part of Europe only.
    If temperature stations, hypothetically speaking, had been located only in those areas at that time, and none anywhere else, would there have been an accurate world temperature anomaly?
    And if none had been located in Greenland and Europe but only everywhere else in the world would we still have gotten an accurate world temperature anomaly of that time?
    And are you saying that today these same type of anomalies can’t happen? Or can they?
    And if they can’t then would you agree that the Medieval Warm Period was global and not local to Greenland and Europe?
    According to you it’s one or the other, isn’t it?

  99. Nick Stokes (02:16:30) :
    photon without a Higgs (19:44:41) :
    I’d like to see that graph.
    Well, you can, or one very like it.

    It’s not like it.
    I want to see the data that was dropped from 1989 until now and compare it to the data that was used.
    That would be ‘like it’.

  100. Tilo Reber (08:32:47) :
    In any case, maybe one of the reasons that we have had 12 years of no warming is because they can’t find any more stations to throw out.
    Very nice point Tilo.

  101. “r (09:00:18) :
    DirkH,
    Good idea, can you give me an example?”
    I googled for “introduction conclusion” and this is a good hit:
    http://ezinearticles.com/?Essay-Writing-Tips—Powerful-Introduction-and-Conclusion&id=1242801
    Also, I like the detailed introduction here:
    http://elc.polyu.edu.hk/CiLL/reports.htm
    Background: the motivation for this writing
    Objectives: what we want to achieve with this report
    Scope: the affected domain
    Gives the reader a good fast start.
    I’ll try to rework your writing a little and post it later. I’m not a great writer, but I have done a lot of technical documentation and read even more, so I know how valuable a good structure is for a time-pressed reader.

  102. It seems to me that it might be useful to plot the data from the stations still operating against population for their locations over the same time period. Correlation is not cause but I would still like to see that graph.

  103. I can only complain about the title of this essay — “CRU Was But the Tip of the Iceberg.” What “Iceberg”? How about “… the Bottom of the Hot Horse-droppings”?
    Bob

  104. DirkH (10:11:15) :
    “r (09:00:18) :
    DirkH,
    Good idea, can you give me an example?”
    Sorry, monsterpost:
    Hi r, please don’t forget I’m not a native speaker of English, so run my stuff through a spell checker, format it nicely, and check whether I’ve written complete nonsense somewhere.
    The best would be to format it with Open Office or Word and send it to your representative as a .PDF, so you know it looks NICE when he looks at it!!!
    And keep it all a little toned-down, you know: never suppose malice when you can suppose incompetence, and all that. Be diplomatic; otherwise you come across as a nut and your letter gets tossed.
    Dear Representative,
    The current staff at NOAA has distorted its global temperature report, creating the appearance of amplified global warming and causing exaggerated alarm about possible future consequences.
    INTRODUCTION
    Background
    In our opinion, the current staff at NOAA has distorted its global temperature report,
    1. By dropping the majority of cooler rural data sampling sites, reducing the number of thermometers by a factor of 4.
    2. Then, the few remaining cooler outliers were statistically reduced or “smoothed” to create a warming bias.
    3. Finally, NOAA used the reduced number of cherry-picked warm sites to fill in cooler parts of the planet instead of real data, producing
    a distorted, warming-biased view of the real situation of the climate.
    Objectives
    We would like to demonstrate here exactly how the distortion of the data works, give examples,
    and urge you to not pass any legislation based on the mistaken findings by NOAA.
    The people at NOAA responsible for this need to be held accountable for their mistakes.
    Scope
    NOAA is considered to be an authoritative institution in the area of climate science.
    They are responsible for the creation of THIS-AND-THAT DATA SET (insert right name here).
    We will show here that they have tainted their authority in this area, either through sloppy work or deliberately.
    REFERENCES
    We refer to this article in The American Thinker:
    http://www.americanthinker.com/2010/01/climategate_cru_was_but_the_ti.html
    (MAYBE WE SHOULD ALSO POINT TO A POSTING FROM chiefio.wordpress.com AND to the KUSI video links with Coleman and E.M.Smith)
    EVIDENCE
    We quote from the linked article:
    “Perhaps the key point discovered by Smith was that by 1990, NOAA had deleted from its datasets all but 1,500 of the 6,000 thermometers in service around the globe.
    Now, 75% represents quite a drop in sampling population, particularly considering that these stations provide the readings used to compile both the
    Global Historical Climatology Network (GHCN) and United States Historical Climatology Network (USHCN) datasets.
    These are the same datasets, incidentally, which serve as primary sources of temperature data not only for climate researchers and universities worldwide,
    but also for the many international agencies using the data to create analytical temperature anomaly maps and charts.
    Yet as disturbing as the number of dropped stations was, it is the nature of NOAA’s “selection bias” that Smith found infinitely more troubling.
    It seems that stations placed in historically cooler, rural areas of higher latitude and elevation were scrapped from the data series in favor of more urban locales
    at lower latitudes and elevations. Consequently, post-1990 readings have been biased to the warm side not only by selective geographic location, but also by the anthropogenic
    heating influence of a phenomenon known as the Urban Heat Island Effect (UHI).
    For example, Canada’s reporting stations dropped from 496 in 1989 to 44 in 1991, with the percentage of stations at lower elevations tripling while the numbers
    of those at higher elevations dropped to one. That’s right: As Smith wrote in his blog, they left “one thermometer for everything north of LAT 65.”
    And that one resides in a place called Eureka, which has been described as “The Garden Spot of the Arctic” due to its unusually moderate summers.
    Smith also discovered that in California, only four stations remain – one in San Francisco and three in Southern L.A. near the beach – and he rightly observed that
    It is certainly impossible to compare it with the past record that had thermometers in the snowy mountains.
    So we can have no idea if California is warming or cooling by looking at the USHCN data set or the GHCN data set.
    That’s because the baseline temperatures to which current readings are compared were a true averaging of both warmer and cooler locations.
    And comparing these historic true averages to contemporary false averages – which have had the lower end of their numbers intentionally stripped out –
    will always yield a warming trend, even when temperatures have actually dropped.
    Overall, U.S. online stations have dropped from a peak of 1,850 in 1963 to a low of 136 as of 2007.
    In his blog, Smith wittily observed that “the Thermometer Langoliers have eaten 9/10 of the thermometers in the USA[,] including all the cold ones in California.”
    But he was deadly serious after comparing current to previous versions of USHCN data and discovering that this
    “selection bias” creates a +0.6°C warming in U.S. temperature history.”
    And
    “But the real chicanery begins in the next phase, wherein the planet is flattened and stretched onto an 8,000-box grid,
    into which the time series are converted to a series of anomalies (degree variances from the baseline).
    Now, you might wonder just how one manages to fill 8,000 boxes using 1,500 stations.
    Here’s NASA’s solution:
    For each grid box, the stations within that grid box and also any station within 1200km of the center of that box are combined using the reference station method.
    Even on paper, the design flaws inherent in such a process should be glaringly obvious.
    So it’s no surprise that Smith found many examples of problems surfacing in actual practice. He offered me Hawaii for starters. It seems that all of the Aloha State’s surviving stations reside in major airports. Nonetheless, this unrepresentative hot data is what’s used to “infill” the surrounding “empty” Grid Boxes up to 1200 km out to sea. So in effect, you have “jet airport tarmacs ‘standing in’ for temperature over water 1200 km closer to the North Pole.”
    An isolated problem? Hardly, reports Smith.
    From KUSI’s Global Warming: The Other Side:
    “There’s a wonderful baseline for Bolivia — a very high mountainous country — right up until 1990 when the data ends.
    And if you look on the [GISS] November 2009 anomaly map, you’ll see a very red rosy hot Bolivia [boxed in blue]. But how do you get a hot Bolivia when you haven’t measured the temperature for 20 years?”
    Of course, you already know the answer: GISS simply fills in the missing numbers – originally cool,
    as Bolivia contains proportionately more land above 10,000 feet than any other country in the world –
    with hot ones available in neighboring stations on a beach in Peru or somewhere in the Amazon jungle.
    Remember that single station north of 65° latitude which they located in a warm section of northern Canada?
    Joe D’Aleo explained its purpose: “To estimate temperatures in the Northwest Territory [boxed in green above], they either have to rely on that location or look further south.”
    CONCLUSION
    NOAA has introduced or exaggerated a warming trend in its global temperature data set through
    - deletion of three quarters of all temperature sampling sites worldwide, around 1990;
    - adjustment of temperature measurements to show warmer temperatures;
    - filling of the gaps around the globe by smearing out the warm-biased data that remained.
    So when comparing near-present data with pre-1990 data, the reader will get the impression of a step change in temperature towards warming.
    Whether this has been done deliberately or through simple incompetence is not evident without further investigation.
    But it can safely be said that the data set provided by NOAA is biased and distorted to a degree that it cannot be the basis of legislation to curb a problem that just might not exist in the severe proportions often purported.
    Do not base your decisions on the data provided by NOAA without further investigation!
    Yours sincerely, …

  105. Ed Scott (09:19:21) :
    Homogenization….It has been suggested that the NOAA/GISS data has also been Pasteurized, which makes the data sterile in relation to the environment.
    Pasteurization makes milk harder to digest.
    Pasteurization of data makes it harder to digest too.
    Unless you’re Nick. Nick must be consuming the pasteurized data with fudge making it go down easier. 😉

  106. Nick Stokes (02:40:29)
    “Those photos of the Melbourne site give a false impression….”etc.
    I know the site well.
    The parkland opposite has been there for 140 years — it’s irrelevant.
    The old Commonwealth building (the ‘Green Latrine’)….
    http://www.slv.vic.gov.au/roseglass/0/0/6/doc/rg006911.shtml
    ….. was positioned hard onto Spring Street so it is clear from the configuration of that section of Victoria Street/Spring Street/Latrobe Street intersection, that it was about 100 meters from the instruments, while according to Google Earth, the apartment building (built late 90s) shown in the latter picture is only 30 meters from the instruments.
    I reckon (as an innocent bystander, mind you) that one of those balconies wouldn’t be a bad place to sit on a cold sunny day in Melbourne.

  107. Anthony:
    Am I misreading the numbers below (“more than 860” temperature stations versus “a low of 136 as of 2007”)? If your answer is “no,” it explains the apparent lack of concern by NOAA about the quality of the stations not “online.” However, it raises the question of why NOAA would continue the façade of stations not “online” (and, perhaps, waste resources in the process).
    SEE:
    Is the U.S. Temperature Record Reliable?
    By Anthony Watts – Spring, 2009
    “The official record of temperatures in the continental United States comes from a network of 1,221 climate-monitoring stations overseen by the National Weather Service, a department of the National Oceanic and Atmospheric Administration (NOAA)….During the past few years [Anthony Watts] recruited a team of more than 650 volunteers to visually inspect and photographically document more than 860 of these temperature stations.”
    January 22, 2010
    Climategate: CRU Was But the Tip of the Iceberg
    By Marc Sheppard
    “Perhaps the key point discovered by Smith was that by 1990, NOAA had deleted from its datasets all but 1,500 of the 6,000 thermometers in service around the globe….
    Overall, U.S. online stations have dropped from a peak of 1,850 in 1963 to a low of 136 as of 2007. In his blog, Smith wittily observed that “the Thermometer Langoliers have eaten 9/10 of the thermometers in the USA…”.

  108. Tilo Reber (08:32:47) :
    That is simply not true. After 1990 there was definitely a difference.

    The blue curve is stations discontinued at some time prior to 1995. In the last few years (1990-1995) numbers are getting very small.

  109. The evidence of a conspiracy by CRU and GISS to make fraudulent adjustments seems evident to me. This is not an accident. It appears that treason has been committed. I believe that the offenders should be punished accordingly.

  110. Doug in Seattle (18:29:18) :
    “”I am not sure how this dropping out of cooler stations works. It is my understanding that when NOAA or GISS homogenizes for missing readings or stations they reach out to the nearest stations up to 1200 km distant and do their corrections based on the anomaly of the other stations – not the absolute temperature as seems to be inferred by D’Aleo and Smith.
    While this doesn’t deal with UHI (which I think is quite important), it should not necessarily inject a warm bias unless they are cherry-picking nearest stations for UHI. But that doesn’t appear to be what D’Aleo and Smith are alleging.””
    Brandon says: I believe this report is saying there used to be many more temperature recording stations spread throughout the country, but in order to show a warming trend they slowly but steadily removed cooler stations from both the averages and the anomalies. By doing this continuously, you continually get a warmer average and anomaly. If this process keeps repeating, before long they will be using only a dozen temperature stations in the U.S.

  111. DirkH (06:02:08) :
    German windpower has managed to increase its output from 17% of the nominal performance to about 21%. We have to be prepared for windy days when they suddenly deliver a 100% surge. In order to stabilise our networks, we have to have enough gas-powered plants with fast reaction times. Enough means: Total capacity “running reserve” on standby must be as big as wind+solar together. So for each installed GW renewables we need to install 1 GW “running reserve”.
    Keep in mind that wind power output rises with the third power of wind velocity. This makes for violent spikes. It’s a tough job for the transmission lines and the standby gas plants. (…)

    What, we’re not at the technological stage yet where massive banks of super-capacitors will take care of storing such surges and filling in the lulls in generation?
    Interesting side-note, how we are becoming increasingly dependent on electronics as a primary element of power distribution. Since it has been discovered that ultra-high voltage direct current is more efficient for power transmission than alternating current, we are seeing more transmission done with DC which then uses ultra-high voltage electronics to convert it back to AC. (BTW UHV DC as far as I know still violates existing theories in physics as to being more efficient than AC. I’m not that well read on the subject so perhaps there is an explanation of it by now.)
    The need can be seen for fast-acting and efficient DC storage of electricity rather than fast-acting backup generation. If batteries and super-capacitors can’t handle it, maybe more research into flywheels and such needs to be done. We can build inverters to supply the AC, with some more work on increasing efficiency seemingly warranted. Then whatever backup generation will be considered necessary can be much slower reacting, and fully shut down when not needed.
    Although we are still continuing with setting ourselves up for a large solar flare or similar event to wipe out civilization by killing our technology. All-AC with no electronics involved does have a distinct advantage. It will also be embarrassing to find old vehicles and tractors with mechanical “breaker” ignition systems are the only vehicles left running.
    (…) And expensive. We pay about 30 US cents or 20 Eurocents per kWh. I don’t wish that on you. The leftists here say it’s still too cheap.
    Ordinary people can still afford it without starving, therefore it is still too cheap. When virtually everyone will require the government’s help to get electricity, then it will be priced just right. This is how the logic of (green) leftists seems to go.
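    The cubic wind-power law DirkH quotes above (output rising with the third power of wind velocity below rated speed) can be sketched as follows; the cut-in and rated speeds here are hypothetical round numbers, not the figures for any real turbine:

    ```python
    RATED_V = 12.0   # m/s, hypothetical rated wind speed
    RATED_P = 1.0    # normalized rated output

    def wind_power(v):
        """Idealized normalized turbine output as a function of wind speed."""
        if v < 3.0:           # below cut-in speed: no output
            return 0.0
        if v >= RATED_V:      # at or above rated speed: output is capped
            return RATED_P
        return RATED_P * (v / RATED_V) ** 3   # cubic region

    # A modest rise from 6 to 9 m/s more than triples output --
    # the "violent spikes" the comment describes:
    print(wind_power(6.0))   # 0.125
    print(wind_power(9.0))   # 0.421875
    ```

    This steep response in the cubic region is why the comment argues that fast-reacting standby capacity (or storage) comparable to the installed wind capacity is needed to keep the grid stable.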

  112. Quoting Tilo Reber (08:45:15) :
    “Somebody tell me: does that look like a weather station out at the end of the tarmac? Right side.”
    Commenting:
    I don’t think it could be anything else, my friend. If I have learned nothing else from Anthony, et al, I know what those structures look like.

  113. Steve Keohane: that graph you linked showing the temperature decline after the addition of more sites; do you have any details about it; such as where the data comes from etc?
    Nick; I don’t understand how you can say that reducing stations can have a “negligible influence on climate changes within the range of anomalies”; for a start this is contradicted by the Steig et al paper on the Antarctic; Steig used mainly WAP station data to ‘calibrate’ satellite data over the bulk of Antarctica; the assumption of “high spatial coherence” so as to justify interpolation by RegEM was unfounded and the Steig conclusions were nonsense.
    Micro-effects at particular stations are crucial and profoundly affect trends at particular stations which should not be extrapolated, at least using the methods that GISS appears to use, to other stations so as to produce a GMST. The Runnalls and Oke paper highlights this:
    http://ams.allenpress.com/perlserv/?request=get-abstract&doi=10.1175%2FJCLI3663.1
    The Runnalls and Oke method claims to be able to characterise noise; of course UHI is not noise [unless you intend it to be] and neither should be variations between stations of the time of data collection and other lack of standardisation; at the end of the day when a decision to drop non-conforming stations on the basis of standardisation criteria is made that is a value judgement; when the result is almost uniform temperature increase in contradiction to historical records then that criteria has to be challenged;
    http://ams.confex.com/ams/13ac10av/techprogram/paper_39363.htm

  114. I am not in weather or climate or science of any kind. Just a marine engineer looking after tankers, and on one occasion one of my ships touched bottom in an unseasonably low channel off Louisiana, spilling about 10,000 tons of oil in the process. Before anyone gets excited, this was about 20 years ago and the oil is all safely gone now with not a single seabird oiled-up. However, we had a hard job defending ourselves in the resulting court case, in which I was involved. A major problem for us was that both sides used NOAA pronouncements on the behaviour of the waters of the Gulf of Mexico, for prosecution and defence.
    At no time were any of the NOAA statements ever queried by anyone, for NOAA was the acknowledged fount of natural science
    knowledge in the USA.
    In the subsequent court case, in which I was a witness for the defence, not a single lawyer on either side ever challenged a NOAA statement. It was just entered into the court records, as a fact.
    To learn that NOAA has been counterfeiting (climate) science facts will surely undermine the standing of that organisation when called in as expert witness in any future natural disaster scenario.
    In similar circumstances now, I would advise any engineer involved to inspect very carefully any information supplied by NOAA.

  115. cohenite (15:33:03) :
    Nick; I don’t understand how you can say that reducing stations can have a “negligible influence on climate changes within the range of anomalies”;

    No, I said that the T^4 issue has negligible influence. This just relates to the arithmetic I put up earlier. If you vary temperatures by a few degrees, as happens with substituting a station with a nearby one, say, then the radiative change will be proportional. There may be a non-negligible effect, but it won’t be a nonlinear T^4 effect.
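    Nick’s linearization point can be checked numerically: for a change of a few degrees near typical surface temperatures, the first-order term 4σT³ΔT approximates the exact Stefan-Boltzmann flux difference to about 1%, so the T⁴ nonlinearity contributes almost nothing.

    ```python
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def flux(T):
        """Blackbody radiative flux at absolute temperature T (kelvin)."""
        return SIGMA * T**4

    T0 = 288.0  # K, roughly the global mean surface temperature
    dT = 2.0    # a few degrees, as when a nearby station substitutes

    exact  = flux(T0 + dT) - flux(T0)      # full T^4 difference
    linear = 4 * SIGMA * T0**3 * dT        # first-order (derivative) estimate

    print(exact, linear)  # the two agree to within about 1%
    ```

    The small residual (the exact difference slightly exceeds the linear estimate, since T⁴ is convex) is the “non-negligible effect” only in principle; it is proportional to (ΔT/T)², which is tiny for station-scale substitutions.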

  116. Nick Stokes (18:30:46) :
    Nick, would you address my point about the Medieval Warm Period?

  117. Well, thank heavens for that, Nick! With all due respect, I still think you are not appreciating how much difference there is in the calculated W/m² between the two methods Motl has described; the 9 W/m² difference is more than five times the change in irradiance [1.6 W/m²] attributed to GHGs since 1750. I don’t know; maybe this paper can explain it better than I can:
    http://pielkeclimatesci.files.wordpress.com/2009/10/r-321.pdf
    I know Parker, Jones, Peterson and Kennedy from CRU have made a comment on it at AGU; I haven’t read it and don’t feel like buying it, so if you have a copy I would appreciate a link to it. I know Eli has also had a shot at the Pielke Sr. paper, but lucia sorted him out:
    http://rankexploits.com/musings/2008/spatial-variations-in-gmst-really-do-matter-iii-when-estimating-climate-sensitivity/
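For what it’s worth, the gap between the two averaging orders is easy to reproduce with made-up numbers. The sketch below (illustrative temperatures of my own choosing, not the data Motl or Pielke actually used) compares σ·⟨T⟩⁴ with σ·⟨T⁴⟩:

```python
# Illustrative only: compute the Stefan-Boltzmann flux two ways --
# from the average temperature, vs. averaging the flux station by station.
# The temperatures below are hypothetical samples spanning tropics to poles.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

temps_K = [303.0, 298.0, 288.0, 278.0, 268.0, 248.0]  # hypothetical station means

mean_T = sum(temps_K) / len(temps_K)
flux_of_mean = SIGMA * mean_T**4                                   # sigma * <T>^4
mean_of_flux = SIGMA * sum(T**4 for T in temps_K) / len(temps_K)   # sigma * <T^4>

print(f"sigma*<T>^4 = {flux_of_mean:.1f} W/m^2")
print(f"sigma*<T^4> = {mean_of_flux:.1f} W/m^2")
print(f"difference  = {mean_of_flux - flux_of_mean:.1f} W/m^2")
# By Jensen's inequality <T^4> >= <T>^4, so the difference is always >= 0,
# and with a realistic pole-to-tropics spread it reaches several W/m^2.
```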

  118. There are a lot of comments here on the Thermometergate thread. I hope that I’m not overlapping too much with those that I have not read.
    Sloppy science? Yes, throwing away high-latitude and high-altitude information is sloppy. But fraud? Not necessarily.
    NOAA has statistical techniques for extrapolating the missing data from ‘nearby’ temperature stations (within 1200 km) in warmer areas. The National Post article that I read insinuates that NOAA and GISS are plugging temperature data from the nearest slightly-warmer areas that do have thermometers onto the ledger sheet for the colder-climate areas that are thermometer-deficient. I do not know if that is the case. Here’s another theoretical scenario.
    Experience has shown that year-to-year temperature CHANGES in high-latitude areas and in high-altitude areas are highly correlated with those of ‘nearby’ warmer areas. So chuck the temperature measurements in cold-weather areas. I don’t know if this scenario is true either.
    Of course, there’s the obvious UHI-amplification issue.
    Until more information comes down the pike, I’m not rating this news story as a smoking gun–or even as a freezing gun. 🙂 At the moment, AGW skeptics would be ill-advised to hitch their wagons to Thermometergate. Like all True Believers, the Climatistas will vigorously attack the weakest skeptical arguments and give short shrift to the stronger ones. The Climategate emails are much better conversation-stoppers.

  119. BTW, for folks who assert that because it’s all anomalies there will be no change from station changes, I’ve already run a benchmark showing that is not true:
    http://chiefio.wordpress.com/2009/11/12/gistemp-witness-this-fully-armed-and-operational-anomaly-station/
    It’s a crude benchmark based on the USHCN-to-USHCN.v2 transition, where they left out data from 5/2007 to 11/2009 and then put it back in. This is a ‘weak’ benchmark in that the data change only in about 2% of the world surface (IIRC that’s the number the Warmers toss about for the USA), but the anomaly report is for the Northern Hemisphere. So, in theory, the changes in the report are diluted by about a 2:50 ratio (multiply the observed variances by 25 to get the actual impact). Yes, I need to do a better, fuller benchmark to find the exact “price being asked for this service” (see the link 😉 ), but we do find changes in the 1/100 C place, on the order of 2/100 to 4/100 C. So as a first cut we can say there is at least a 1/2 C, and potentially a 1 C, impact that “the anomaly will save us!” failed to catch…
    So to all the bleating about “But it’s ANOMALIES so it doesn’t matter!” I say:
    But it’s A BENCHMARK of the actual code run; and it says theory be damned, in reality the anomaly FAILS to be perfect. “Use the Code Luke!” 😉
    BTW, I’m upgrading my hardware and software so I can’t do a lot of ‘new stuff’ right now as I’m doing admin / sysprog work making it all work again on a newer faster box… so I’ll be doing that next finer grained benchmark, but not for a while… Until then, this one will have to do.
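The dilution arithmetic in the comment above can be written out explicitly. The fractions below are the ones the comment cites (changed stations covering ~2% of the globe, a report averaged over the Northern Hemisphere), not independently measured values:

```python
# Back-of-the-envelope version of the benchmark's dilution argument.
# Assumption (following the comment): the changed stations cover ~2% of the
# globe while the report averages over the Northern Hemisphere (~50%), so a
# shift in the hemispheric anomaly understates the local impact by ~50/2 = 25x.

changed_fraction = 0.02   # ~2% of world surface (the USA figure cited)
report_fraction = 0.50    # Northern Hemisphere

dilution = report_fraction / changed_fraction   # 25x
observed_shifts_C = [0.02, 0.04]                # observed anomaly changes, deg C

implied_local_C = [round(s * dilution, 2) for s in observed_shifts_C]
print(f"dilution factor: {dilution:.0f}x")
print(f"implied local impact: {implied_local_C} deg C")  # -> [0.5, 1.0]
```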

  120. Nick Stokes (05:57:40) : GIStemp say they use GHCN v2.mean. And this is already expressed in anomalies for each station, as downloaded from NOAA. In fact, it isn’t clear how you can even recover the temp in Celsius from that file.
    Try again. GHCN.v2 is already in C (it was converted from F which is what is in the USHCN set). It is NOT anomalies. It is reported as actual temperatures (for the monthly mean).
    http://chiefio.wordpress.com/2009/02/24/ghcn-global-historical-climate-network/
    Quoting from that page quoting the NOAA page:
    So, to understand GIStemp, we have to take a look at GHCN data. Here is the “README” file from the GIStemp download:
    May 1997
    This is a very brief description of GHCN version 2 temperature data and
    metadata (inventory) files, providing details, such as formats, not
    available in http://www.ncdc.noaa.gov/ghcn/ghcn.html.
    New monthly data are added to GHCN a few days after the end of
    the month. Please note that sometimes these new data are later
    replaced with data with different values due to, for example,
    occasional corrections to the transmitted data that countries
    will send over the Global Telecommunications System.
    All files except this one were compressed with a standard UNIX compression.
    To uncompress the files, most operating systems will respond to:
    “uncompress filename.Z”, after which, the file is larger and the .Z ending is
    removed. Because the compressed files are binary, the file transfer
    protocol may have to be set to binary prior to downloading (in ftp, type bin).
    The three raw data files are:
    v2.mean
    v2.max
    v2.min
    The versions of these data sets that have data which we adjusted
    to account for various non-climatic inhomogeneities are:
    v2.mean.adj
    v2.max.adj
    v2.min.adj
    Each line of the data file has:
    station number which has three parts:
    country code (3 digits)
    nearest WMO station number (5 digits)
    modifier (3 digits) (this is usually 000 if it is that WMO station)
    Duplicate number:
    one digit (0-9). The duplicate order is based on length of data.
    Maximum and minimum temperature files have duplicate numbers but only one
    time series (because there is only one way to calculate the mean monthly
    maximum temperature). The duplicate numbers in max/min refer back to the
    mean temperature duplicate time series created by (Max+Min)/2.
    Year:
    four digit year
    Data:
    12 monthly values each as a 5 digit integer. To convert to
    degrees Celsius they must be divided by 10.
    Missing monthly values are given as -9999.
    If there are no data available for that station for a year, that year
    is not included in the data base.
    A short FORTRAN program that can read and subset GHCN v2 data has been
    provided (read.data.f).
    Station inventory and metadata:
    All stations with data in max/min OR mean temperature data files are
    listed in the inventory file: v2.inv. The available metadata
    are too involved to describe here. To understand them, please refer
    to: http://www.ncdc.noaa.gov/ghcn/ghcn.html and to the simple FORTRAN
    program read.inv.f. The comments in this program describe the various
    metadata fields. There are no flags in the inventory file to indicate
    whether the available data are mean only or mean and max/min.
    Country codes:
    The file v2.country.codes lists the countries of the world and
    GHCN’s numerical country code.
    Data that have failed Quality Control:
    We’ve run a Quality Control system on GHCN data and removed
    data points that we determined are probably erroneous. However, there
    are some cases where additional knowledge provides adequate justification
    for classifying some of these data as valid. For example, if an isolated
    station in 1880 was extremely cold in the month of March, we may have to
    classify it as suspect. However, a researcher with an 1880 newspaper article
    describing the first ever March snowfall in that area may use that special
    information to reclassify the extremely cold data point as good. Therefore,
    we are providing a file of the data points that our QC flagged as probably
    bad. We do not recommend that they be used without special scrutiny. And
    we ask that if you have corroborating evidence that any of the “bad” data
    points should be reclassified as good, please send us that information
    so we can make the appropriate changes in the GHCN data files. The
    data points that failed QC are in the files v2.m*.failed.qc. Each line
    in these files contains station number, duplicate number, year, month,
    and the value (again the value needs to be divided by 10 to get
    degrees C). A detailed description of GHCN’s Quality Control can be
    found through http://www.ncdc.noaa.gov/ghcn/ghcn.html.
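For anyone wanting to work with v2.mean directly, the record layout quoted in the README above is simple enough to parse by hand. This is a minimal reader based solely on the README’s field description (3-digit country + 5-digit WMO + 3-digit modifier, 1-digit duplicate, 4-digit year, twelve 5-digit monthly values in tenths of a degree C, -9999 = missing); the sample line is fabricated for illustration, not real GHCN data:

```python
# Minimal parser for a GHCN v2.mean record, per the README field layout.

def parse_v2_mean(line):
    country = line[0:3]      # 3-digit country code
    wmo = line[3:8]          # nearest WMO station number (5 digits)
    modifier = line[8:11]    # modifier (usually 000 for the WMO station itself)
    duplicate = line[11]     # duplicate number, one digit 0-9
    year = int(line[12:16])  # four-digit year
    months = []
    for i in range(12):
        raw = int(line[16 + 5 * i : 16 + 5 * (i + 1)])
        months.append(None if raw == -9999 else raw / 10.0)  # tenths -> deg C
    return {"country": country, "wmo": wmo, "modifier": modifier,
            "duplicate": duplicate, "year": year, "months": months}

# Fabricated example: country 425, WMO 72295, modifier 000, duplicate 0,
# year 1990, January 13.1 C, February missing, remaining months 20.0 C.
sample = "425722950000" + "1990" + "  131" + "-9999" + ("  200" * 10)
rec = parse_v2_mean(sample)
print(rec["year"], rec["months"][0], rec["months"][1])  # -> 1990 13.1 None
```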

  121. @Dave F:
    You know, I keep looking for where that paycheck is and never can find it. I think the other Smith down the street must be getting it… or maybe they got me confused with that Smith at NASA who was a shuttle pilot… At any rate, I’m still doing this all on my own between washing the dishes, cooking dinner, and being “servant to cats”… So if anyone knows where I can get one of those “paycheck” things, just let me know. I used to get a nice one for computer work about 5 years ago, but then they “fixed” the economy and I’ve been “a house husband” ever since. I try to remember what a “paycheck” looks like some times… but the memory is starting to go… 😉
    /sarcoff>
    Or put another way: So, I ought to add another 0? That makes it:
    $000.00 or maybe $0,000.00 … somehow I don’t think that’s what you had in mind… 😉
    (On a semi-serious note: I’d love to do this full time somewhere, with real equipment and facilities instead of things cobbled together from junk in the garage; but you work with what you have. My budget is $0, but at least that makes my ROI infinite! So I try to make up for the shortage of facilities and time with a focused effort and a finely directed instinct about where to look. The largest leverage. It is a bit frustrating in that I know I’m moving forward at about 1/4 speed, but OTOH, I do think the progress has been worthwhile 😎)
    If nothing else, it’s keeping the technical skills up for the day when California either recovers, or collapses entirely and I move to Costa Rica and get a gig as a PC repair guy in the Expat Retirement Village 😉
    And I do find it funny that folks accuse skeptics of being in the pay of Big Oil, when Big Oil is planning to make a killing off of getting coal utilities to provide them with the liquefied CO2 they need for “enhanced oil recovery” from old oil fields. Oil is 100% in the warmers’ camp… Coal, not so much. Electric utilities, not at all. “Follow the money” says it’s not the skeptics getting the money…
    I remember the day, about 15 years(?) ago that I first read about liquid CO2 oil well stripping; and they said it worked really great, but had one giant problem: While it worked really really well, the cost of CO2 killed it. And shortly after that a sudden giant movement to “sequester CO2” popped up from nowhere. Now “big oil” is looking to be paid to take away the “pollutant” CO2. Hmmm….
    At any rate, thanks for the vote of confidence! Just doing what I can. While I used to call this kind of thing “Kitchen Science”, I’m thinking maybe I need to relabel it. “Joe Sixpack” science has a nice ring to it: “Will program for beer!” makes a nice motto too 😉

  122. E.M.Smith (20:54:13) :
    BTW, for folks who assert that because it’s all anomalies there will be no change from station changes, I’ve already run a benchmark showing that is not true:
    I think those who have been asserting that are wishing it was true. They have to continually fudge numbers to make it true.

  123. Nick Stokes (13:27:14) :
    Nick,
    you have said, in effect, that the anomaly of the data that was used by GISS is the same as the anomaly of the data that they’ve dropped. To prove you are right you would have to have the dropped data.
    Are you going to attempt to get the data by sending a FOI to GISS for the dropped data?

  124. Ed Scott (09:19:21) :
    Homogenization, as used by NOAA/GISS, incorporates the best features of the data-manipulation and data-correction techniques, well known to undergraduate physics students, of Finagle, Bougerre and Diddle (also known as Variable Constants or Constant Variables, your choice).

    Also known as the Fiddlers Three.

  125. photon without a Higgs (05:41:20) :
    Nick Stokes (13:27:14) :
    Nick,
    you have said, in effect, that the anomaly of the data that was used by GISS is the same as the anomaly of the data that they’ve dropped. To prove you are right you would have to have the dropped data.

    You’re saying that the data should be analyzed before reaching a conclusion ??
    That’s not fair !!!
    E.M.Smith (20:54:13) :
    Awesome

  126. pH (05:41:20) :
    “Are you going to attempt to get the data by sending a FOI to GISS for the dropped data?”

    No use doing that – GISS didn’t acquire it. Neither did GHCN. It sits, as original data does, with the world’s weather services.
    But there’s something wrong with this whole story of “dropped data”. As I’ve said above, GHCN was a historical data project of the ’90s. A whole lot of past data was collected, under no time pressure, and with special project funding. That’s when the data from 7000 stations were assembled.
    Then GHCN became the vehicle for the publication of regular month-to-month data. It couldn’t get, for various reasons, recurrent data from all those sites. It had to choose. For the most part, the “dropped” stations were in fact never regular contributors. They just provided a batch of past info at that time in the ’90’s.
    Maybe GHCN needs more funding. Anyway, if GHCN can’t assemble modern data from those missing stations, I certainly can’t, and waving a FOI wand won’t do it.
    I can’t usefully respond to your MWP issue – I can’t see the point of it. Yes, if there was local warming, any method, anomaly or other, would just have reported that. And there’s no way just measuring can detect whether it is permanent or not.
    And philincali, the reaching of conclusions was done in this article: NOAA – Data In / Garbage Out. No data was analysed before reaching that conclusion, which is indeed an odd one, since the actual complaint is that NOAA did not get the data in.

  127. Nick Stokes (10:30:28) :
    Well then, Nick, get it from wherever it is and make the graph. You’ve been obfuscating about it, so it must be important to compare the two.
    Also, NASA makes it a point to talk about the hottest year, hottest decade. So eliminating rural and mountain stations from the data means everything.

  128. Nick Stokes (10:30:28) :
    I see, so the excuse is pressured for time. Funny how this pressure to meet some deadline created the hottest decade on record.
    Too bad they didn’t drop those urban and airport stations instead of the mountain and rural ones so there could be no suspicions of human influence in the readings, huh Nick.
    Of course you’ll find some convoluted way of explaining that away too.

  129. Nick Stokes (10:30:28) :
    Maybe GHCN needs more funding.
    WELL OF COURSE THEY DO NICK!
    THAT’S WHAT GLOBAL WARMING IS ALL ABOUT!!
    FOLLOW THE MONEY!

  130. Nick Stokes (10:30:28) :
    I can’t usefully respond to your MWP issue – I can’t see the point of it.
    You haven’t usefully responded to anything.
    The point I am making about the MWP is the same point you are making about the number of stations used by GISS since 1989. You can’t see a point in it then you must not see a point in your own argument too.
    No matter though; if you did see a point you would have diverted from it and answered some kind of question I didn’t ask.
    This is my last comment with you in this thread.

  131. Nick Stokes (10:30:28) :
    But there’s something wrong with this whole story of “dropped data”.
    and
    Maybe GHCN needs more funding. Anyway, if GHCN can’t assemble modern data from those missing stations, I certainly can’t, and waving a FOI wand won’t do it.

    I think that’s probably a fair comment and, let’s face it, someone is going to have to figure out this mess for future generations.
    Just as long as the proposal is not titled along the lines of “….. temperature record ….. da dee da .. in support of man-made climate change”, or words to that effect.
    Objective replacements for the Principal Investigators/usual suspects would be in order, and if E.M. Smith were not a PI, he should at least be a Senior Consultant to the project.

  132. Nick Stokes (10:30:28) : But there’s something wrong with this whole story of “dropped data”. As I’ve said above, GHCN was a historical data project of the ’90s. A whole lot of past data was collected, under no time pressure, and with special project funding. That’s when the data from 7000 stations were assembled.
    Then GHCN became the vehicle for the publication of regular month-to-month data. It couldn’t get, for various reasons, recurrent data from all those sites. It had to choose. For the most part, the “dropped” stations were in fact never regular contributors. They just provided a batch of past info at that time in the ’90’s.
    Maybe GHCN needs more funding. Anyway, if GHCN can’t assemble modern data from those missing stations, I certainly can’t, and waving a FOI wand won’t do it.

    Strange, just very strange. Given that NOAA / NCDC has the same USA station data that was dropped from GHCN sitting in USHCN, they would have to move it all the way from their own right pocket into their left pocket. Couldn’t expect that…
    Oh, and Wunderground seems to have no trouble getting the data from those foreign sites that NOAA / NCDC dropped from GHCN. They even have Bolivia.
    Somehow your “story” just sounds more and more made up with each new turn and twist… I have a suggestion: “Stop digging…”

  133. Anyone who knows anything about mountain weather knows that extrapolating 100km, let alone 1200km, is totally ridiculous. So I think the IPCC method for extrapolating is plain USELESS for all mountainous terrain.
    Example: suppose one winter in Europe the jet stream dips south but the Siberian high extends west, as is happening this year (well, the Siberian high’s retreated lately, but it may well come back again soon).
    You’d expect:
    1. Colder than normal in the Eastern Alps (Germany/Austria);
    2. Snowier than normal in the SW Alps (France and Italy)
    3. More Foehns than normal in the northern French- and Swiss Alps.
    Where do you site your ‘normal’ station?
    1. Nice?
    2. Genf?
    3. Zermatt?
    4. Garmisch?
    5. Innsbruck?
    Don’t think you’d find the same results, would you??
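For context, the extrapolation being criticised is (as I understand it) the GISTEMP-style scheme in which a station’s weight falls off linearly to zero at 1200 km from the point being estimated. A minimal sketch, with hypothetical Alpine-flavoured numbers of my own invention, shows how one blended value swallows exactly the valley/summit contrast described above:

```python
# Sketch of GISTEMP-style distance weighting: a station's anomaly contributes
# with weight falling linearly from 1 at 0 km to 0 at 1200 km.
# The station distances and anomalies below are hypothetical.

RADIUS_KM = 1200.0

def weight(distance_km):
    """Linear taper: 1 at the grid point, 0 at 1200 km and beyond."""
    return max(0.0, 1.0 - distance_km / RADIUS_KM)

def gridpoint_anomaly(stations):
    """Weighted mean of station anomalies; stations = [(distance_km, anomaly_C)]."""
    num = sum(weight(d) * a for d, a in stations)
    den = sum(weight(d) for d, a in stations)
    return num / den

# Hypothetical winter month: a foehn-warmed valley runs +2.0 C while a nearby
# high-altitude site runs -3.0 C; a distant lowland city runs +1.0 C.
stations = [(50.0, 2.0), (80.0, -3.0), (900.0, 1.0)]
print(f"{gridpoint_anomaly(stations):+.2f} C")
# A single blended number like this hides the valley/summit contrast entirely.
```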

  134. “Nick Stokes (05:06:16) :
    cohenite (04:35:18) :
    It’s true that the big changes between tropical and arctic have a significant fourth power nonlinear effect. It doesn’t make a global average temperature ridiculous – it’s the logical measure of heat content. It just means that if you want to calculate global radiation, you have to average T^4, which is easy enough.”
    Oh come on. Maybe climatepunks work that way, but it’s still wrong. Temperatures in Germany vary by 50 degrees Celsius between winter and summer, and we don’t even have a continental climate; your assertion that T^4 can easily be approximated by linearity is complete bollocks. In Siberia you’ll get 80 degrees of variation between summer and winter. In the Sahara you’ll get 40 degrees of variation between day and night. Do I have to go on?
