I’m in Sydney at the moment, on tour. My first stop out of the airport was to visit the Sydney Observatory, where the BoM maintains an official weather station. Here it is:
Click thumbnails for larger images – quite a nice heatsink in the form of a brick wall that gets full sun just a few feet away. Note my previous story (Sydney’s historic weather station: 150 meters makes all the difference), where researchers found a bias due to a move to this location. Seeing it firsthand, it is easy to see why. – Anthony
=======================
Meteorologist Art Horn writes in Pajamas Media:
The following remarkable statement now appears on the NOAA (National Oceanic and Atmospheric Administration) site:
For detecting climate change, the concern is not the absolute temperature — whether a station is reading warmer or cooler than a nearby station placed on grass — but how that temperature changes over time.
The root of the problem? NOAA’s network for measuring temperature in the United States has become corrupted by artificial heat sources and other issues. These problems introduce warm biases into the temperature measurements that are then used by the government and others to support manmade global warming. So as a reaction to criticism about these problems … NOAA now claims that the accuracy of the measured temperature no longer matters!
Let’s take a closer look at this amazing statement to see what it actually says:
For detecting climate change, the concern is not the absolute temperature …
Are they truly saying the accuracy of the temperature readings doesn’t matter? Yes, they are! But why?
The NOAA climate measuring network is so broken there is literally no way to fix it. Reacting to criticism of this annoying fact — and to cover up its significance — NOAA now says accuracy simply doesn’t matter, the temperature reading itself is not as important as the trend. This is clearly a deceptive political statement meant to distract the reader from the truth.
Of course the accuracy of the temperature reading matters!
Remember, a warming of only one degree Fahrenheit over the last 160 years is what the warmists claim to be evidence of manmade global warming. (And of course, half of that warming occurred from 1910 to 1945, before they claim the presence of any significant human influence.)
Temperature changes of tenths of a degree are being used to justify dramatic policy directives by the Environmental Protection Agency, dictums that would profoundly alter life as we know it: taxes levied on virtually anything that produces carbon dioxide. But it’s not only the accuracy of the temperature that is in play. If the temperature readings are off by a few tenths of a degree, this could significantly affect the longer term trend as well. If the temperature trend starts from an artificially elevated reading, the end result will be an artificially inflated measure of any global warming. The slope of the trend itself could be exaggerated by inaccurate temperature measurements.
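The claim that a bias can distort the trend itself, not just the level, is easy to check with a toy series. The sketch below uses purely illustrative numbers (a made-up 0.01 °C/yr underlying trend and a 0.3 °C step bias), not real station data; it fits a linear trend to a synthetic record before and after injecting a step-like warm bias partway through, and the fitted slope increases even though the true trend is unchanged:

```python
import numpy as np

years = np.arange(1950, 2011)
rng = np.random.default_rng(0)

true_trend = 0.01 * (years - years[0])          # 0.01 degC/yr underlying trend
noise = rng.normal(0.0, 0.1, years.size)        # weather noise
clean = true_trend + noise

# Inject a step-like warm bias partway through the record,
# e.g. new pavement appearing near the sensor around 1980.
biased = clean.copy()
biased[years >= 1980] += 0.3

slope_clean = np.polyfit(years, clean, 1)[0]
slope_biased = np.polyfit(years, biased, 1)[0]

# The step raises the fitted slope even though the true trend is identical.
print(slope_biased > slope_clean)
```

Because the biased series is just the clean series plus a non-decreasing step, the fitted slope is inflated deterministically, whatever the noise happens to be.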
So what exactly is wrong with NOAA’s temperature readings?
NOAA has five classes of climate measuring sites. The best sites, categories 1 and 2, require the site to be placed over grass or low local vegetation. According to section 2.2 of NOAA’s Climate Reference Network (CRN) Site Information Handbook:
The most desirable local surrounding landscape is a relatively large and flat open area with low local vegetation in order that the sky view is unobstructed in all directions except at the lower angles of altitude above the horizon. …
[For categories 1 and 2 there can be] no artificial heating sources within 100 meters (330 feet). …
[For the lower quality stations 3 to 5 there must be] no artificial heating source within 10 meters (33 feet).
The integrity of the site as a reliable climate measuring station is completely dependent on these criteria. Is the government adhering to its own standards?
In a landmark 2007 research project to determine the quality of the United States climate measuring network, meteorologist Anthony Watts set out to get some answers. He recruited more than 650 volunteers to photograph the climate stations around the country.
What they found is astounding. A full ninety percent of the United States climate measuring sites do not meet the government’s own criteria for accurate temperature measurement!
Again, that number is ninety percent.
Read the rest here at Pajamas Media.




Geoff is as mistaken as he was with his fertilizer post on Niche. The rainfall distribution statistics are self-evident and well studied – surely we’re not going to play the olde entire-Australian-rainfall-statistic ruse? Surely we’re not going to substitute your “opinion” for peer-reviewed multiple diagnostic analyses?
to add to that
I forgot, they know exactly how much artificial warming each and every site produces…
….so they know exactly how much to adjust each site for that
and then use that to estimate warming over thousands of miles, where they do not have access to sites
It’s amazing what a 1/10th of a degree can do, isn’t it?
Nick Stokes says:
June 12, 2010 at 6:28 pm
Try some tap dancing yourself. For detecting climate change, why does it matter whether a thermometer is on a hill or in a valley? Or over grass or over bare ground?
But you well know that it does matter if, over time, the grass or bare ground got paved over, a new building went up near the site, and so forth. It seems to me quite likely that sites which today would receive a poor rating probably once were not as “contaminated” as they are now. And that matters.
Tom_R
The satellite records don’t go back beyond 1979 which is the acknowledged beginning of a 30 year warming trend marked by the onset of the warm half of the Pacific Decadal Oscillation. The PDO appears to have tapered off and is right on schedule entering its cool phase. If the CO2 bogeyman is real there won’t be any cooling which, as far as I can tell, is a good thing as the last thing we need is shorter growing seasons. World hunger is still a big problem. A cooling climate would be catastrophic. A warming climate we can easily deal with and for the life of me I can’t see any net downside to it. The earth is still quite cold by geological standards which in the latest epoch has been dominated by ice ages with brief intermissions where it’s barely warm enough to green it up some.
The average temperature of the world’s oceans is a mere 5 degrees C. This is the average temperature of the surface over a period of time including one glaciation and one interglacial period. The thermal mass of the atmosphere is a fraction of a percent of the thermal mass of the ocean. If it weren’t for the thermocline, which generally separates a warm (average 16 degrees C) top layer of ocean from the ice-cold depths, we’d be in an ice age. A certain amount of mixing of surface and deep water occurs, driven by surface winds and a Coriolis-driven oceanic conveyor belt. We are at the mercy of that mixing to keep us as warm as we are now. The CO2 greenhouse effect, what little there is, is a case of diminishing returns, as the infrared absorption band of CO2 is approaching saturation at current levels. It isn’t enough to put a dent in the vast cold deep of the oceans.
I think the vast majority of people here understand the difference between trends and absolute readings. So, what is the argument? Clearly, in a completely static environment the placement could be ignored. Now, all I need is for those who believe that all stations are located in completely static environments to raise their hands.
Now, I realize there will be no takers even though this is pretty much what folks like Luke, Phil., Gneiss and Nick have been asserting.
All one needs to do is consider a station sited near an air conditioner (this has been posted many times previously). The warmer it gets, the more often the air conditioner will run, and this would enhance any warming signal (or cooling signal). In addition, someone might just prefer cooler/warmer inside temps and adjust the thermostat accordingly. In other words, there is absolutely no way to determine if the TREND is valid with this kind of poorly sited station. Many other examples can be created that demolish the claim that siting is unimportant.
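The air-conditioner feedback described above can be put in a toy model. Everything here is hypothetical – the thermostat set point and the 0.5 coupling factor are invented for illustration – but it shows the mechanism: a contamination term that grows as the true temperature rises exaggerates the fitted trend:

```python
import numpy as np

years = np.arange(1980, 2011)
true_temp = 15.0 + 0.02 * (years - years[0])    # modest real warming, degC

# Hypothetical contamination: the AC exhaust adds heat roughly in
# proportion to how far the true temperature exceeds the set point,
# since the unit runs more often on warmer days.
setpoint = 15.2
contamination = 0.5 * np.clip(true_temp - setpoint, 0.0, None)
measured = true_temp + contamination

true_slope = np.polyfit(years, true_temp, 1)[0]
measured_slope = np.polyfit(years, measured, 1)[0]

# The feedback makes the station's trend steeper than the real one.
print(measured_slope > true_slope)
```

And if the occupant changes the thermostat setting partway through, the contamination shifts in a way no later adjustment can reconstruct, which is the point being made above.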
“Nick Stokes says:
[…]
Try some tap dancing yourself. For detecting climate change, why does it matter whether a thermometer is on a hill or in a valley? Or over grass or over bare ground?”
Notice how he avoids saying “For detecting climate change, why does it matter whether a thermometer is on a hill or downtown, surrounded by high-rise buildings on a parking lot between two shopping malls? Or over grass or on an airport on asphalt next to an engine testing ground?”
Nice try, Nick 😉
The first device on the right (the brown metal canister with the open-cone top) looks like an incinerator. I guess it must not be or Art would have commented on it. But what is it?
Perhaps the proponents of AGW and of the current state of the temperature records would like to explain this away:
http://hidethedecline.eu/pages/posts/scandinavian-temperatures-ipccacutes–scandinavia-gate–127.php
Of course the Scandinavians must be wrong, it is only their dataset about their country’s temperatures.
Plus of course it won’t affect the Global Temperature Trends will it?
Alec Rawls says:
June 13, 2010 at 9:49 am
Likely a rain gauge with enough space inside for things like a narrow tube to measure small rainfalls mounted in a bucket to measure big rainfalls, and stuff to accurately measure what’s in the bucket, and a log book, various tools, snacks, bottles containing ethanol/water solutions, etc.
Ironically, the NOAA statement actually is correct if one considers that the trend over time has been toward increasing corruption of the data through station movement and microsite biases. That is of primary importance because it adds uncertainty to the true trends and minimizes our trust in them.
Not one claim of CAGW evidence of global catastrophe holds up under scrutiny.
All that holds up is that CO2 acts to raise temperature some, all other things being equal.
The public has wasted $billions and $billions of dollars promoting or solving a problem that does not exist.
One cost of this misappropriation of resources is that the EPA, instead of being ready to deal with oil spills, has basically no ability, either in resources or management, to deal with its statutory responsibilities.
In the informal discussion after the Sydney meeting, someone made the point that many people are now so sick of the Climate Change topic that they don’t want to hear any more from either side, or look at any of the figures. I suggested to Anthony that the solution, at least for Australia, was to get the bookmakers to open up for wagers on temperature predictions versus actual readings at the target date. Many would take a fresh interest in the subject if they could bet on it, and the punters who now spend endless hours researching bloodlines and race result records to place their bets on horses might go to it digging into the provenance of data and the veracity of unadjusted or adjusted temperature readings. Since people invest in ten year government bonds on the basis of their estimate of future rates, betting on temperature rise, ice extent or snowfall over a decade is not out of the question, and would average out the weather fluctuations. If betting on it won’t get Australians interested, nothing will. All we have to do is get the bookmakers interested.
“If the temperature trend starts from an artificially elevated reading, the end result will be an artificially inflated measure of any global warming. “
Anthony, isn’t that the other way about? From an artificially depressed starting temp, you’ll get a bigger trend?
Apologies, I see Huub Baker already covered this point.
Huub Bakker says:
June 12, 2010 at 11:58 pm
Art Horn says:
If the temperature trend starts from an artificially elevated reading, the end result will be an artificially inflated measure of any global warming.
I think you either mean that “the temperature trend ends with an artificially-elevated reading” or “starts from an artificially-lowered reading”.
Basil June 13, 2010 at 7:51 am
Basil, what you’re saying doesn’t negate the NOAA statement. What matters for climate change is temperature change. It’s as simple as that. It’s true that when you measure temperature change, it could be partly due to some artefact. And you have to sort that out, as you do with any measurement. But all the NOAA is saying is that you have to focus on the right thing to measure.
“If the temperature trend starts from an artificially elevated reading, the end result will be an artificially inflated measure of any global warming. “
It’s actually more devious than that.
No one is debating that poorly sited stations have become colder.
They can only become warmer.
But where is most of that warmer measured?
Is it warmer in the summer days? summer nights?
Winter days? winter nights?
If I had to bet, I would say it’s measuring warmer winter nights……
Which falls right in line with what is being said………
John Innes, that is a very good suggestion. It’s true that the average person is becoming jaded; but the vested interests and fanatics promulgating the end of the world scenario of AGW are still beavering away [apologies to beavers], implementing regulations and spending money – over $2 billion per annum in Australia – to “solve” AGW.
Having a bet would push AGW out to unsupportable odds very quickly.
So Richard M – on your logic one should construct an analysis based on good quality sites and see what that tells you – BoM have done just that http://www.bom.gov.au/climate/change/hqsites/
and as Menne et al. have done in the US
And you would look at other data sets unaffected by UHI – ocean, satellite and see what they tell you … (as I have posted above)
Need we look any further?
The issue with the anomaly method is:
It obscures bad data and convinces even scientists that they have actual data instead of crud. It allows adjustments that are conceptually fine but lead to aphysical results.
Yes – There isn’t too much wrong with a true anomaly method. The trend -is- the key factor. If you calibrate your equipment in a climatological sense you would know the relationship between the reading from the instrument and the gridcell T – with an explicit errorbar. As it stands, we have weather readings with no known relation to the gridcell temperature. And the error bars are for measuring the absolute temperature local to the actual instrument – not the discrepancy with the gridcell mean.
When you propagate the “instrumental error” instead of the true error in measuring the gridcell mean temperature, you end up with a false sense of a very tight grip on the global temperature average.
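A toy calculation makes that last point concrete. The numbers below are invented for illustration (they are not NOAA’s figures): when the station-to-gridcell scatter dwarfs the instrument’s precision, quoting only the instrumental error badly understates the true uncertainty in the gridcell mean:

```python
import numpy as np

rng = np.random.default_rng(2)

instrument_sigma = 0.1   # degC, the quoted instrumental error (hypothetical)
spatial_sigma = 1.5      # degC, hypothetical station-vs-gridcell-mean scatter

# Simulate many years: the cell's true mean, and the reading at one
# arbitrary station inside it (cell mean + siting scatter + instrument noise).
n = 10_000
cell_mean = rng.normal(15.0, 0.5, n)
station = (cell_mean
           + rng.normal(0.0, spatial_sigma, n)
           + rng.normal(0.0, instrument_sigma, n))

# The error that matters for a gridcell estimate is station minus cell mean.
true_error = np.std(station - cell_mean)
print(true_error > instrument_sigma)  # representativeness error dominates
```

With these assumed numbers the real scatter is roughly fifteen times the instrumental error bar, which is exactly the “false sense of a very tight grip” described above.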
It has been raised above – “why be scared of a small change in temperature” – and fair comment – it is not the change in the temperature mean that is the real issue. And GMST is merely “an index”. An index! Changes in the mean imply changes in extremes – but more importantly a reorganisation of atmospheric circulation patterns and perhaps quasi-periodic modes.
As Australia is subject to episodic droughts – billions of dollars have been spent over the last 3 decades in drought aid; water allocation and growth demand are ever present – offered some temporary respite by recent rains – but as grandaddy used to say – water conservation stops when the tank stops overflowing. The next drought is always around the corner. The question is how soon and how often.
For Cohenite, worried about his taxpayer contribution to Australian climate research – he should be pleased that modern science has uncovered ENSO, Modoki, IOD, SAM and STRi – so we have very good evidence that these mechanisms explain much of our rainfall variation.
And can it be helped if that best science reveals an “anthropogenic” contribution – a contribution to, not the 100% cause of, our drought sequences.
If you’re allocating water you need to know long term odds. And those odds seem to be changing in some areas e.g. SW WA and southern SE Australia
That’s what Australian climate science is about. And if you live in Australia – you should be glad for it. In little over a decade many farmers now factor ENSO into their farming and financial commodity hedging decisions.
Lawyers don’t have to worry – they have air-conditioning.
Maybe Australians can already bet legally online via London bookies like William Hill? If so, all that would be needed would be to publicize the fact, and/or to convince WH to add bets regarding Australian temperatures.
This climate-debate-awareness-promotion is one of the reasons I have been trying to publicize the only site where Americans can make long-term climate bets, https://www.intrade.com
Yeah sure luke, I’ll settle for a well paid government sinecure anytime, with or without the aircon. I have nothing against climate research: it is a good thing; but when that research is funnelled, diverted and corrupted by the constraint of confirming what is essentially an ideological position, AGW, then as a taxpayer paying for those sinecures I become a little irritated.
You say this:
“Changes in mean – imply changes in extremes – but more importantly a reorganisation of atmospheric circulations patterns and perhaps quasi-periodic modes.”
This is not correct; the GMST has been suborned to be the measure of AGW; Anthony has shown how that occurs at the data collection stage; at the ERB stage the inappropriateness of the GMST is shown by this:
http://scienceofdoom.com/2010/04/07/the-dull-case-of-emissivity-and-average-temperatures/
The point is that a GMST will not reflect changes in the ERB and changes in the ERB may not be reflected in the GMST; the nexus between an upward trend in GMST and ERB is worthless; the paradigm of AGW has no meaning.
No. I don’t see how the statement begs the question. They touch on two different matters. ‘Accuracy’ is discussed elsewhere in the NOAA article.
If you had two rural weather stations near each other, one in the shade of a 150-year-old tree and one standing exposed above bare rock (neither contaminated by urban influences), the latter will record higher temperatures. If they have not undergone station moves or other inhomogeneities, then the difference in absolute temperature won’t matter when deriving a trend. This is the point the quoted statement is making. Accuracy in terms of inhomogeneities does matter, and this is briefly discussed in the NOAA publication linked in the top post.
Of course, the very general overview doesn’t answer granular concerns. But the argument in the top post is based on a misunderstanding of context, and strangely omits portions of the article that deal directly with the subject nominated – accuracy of readings. Fortunately, a link to the NOAA article is provided for the reader to investigate for themselves.
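The tree-versus-rock example is easy to demonstrate numerically. In the sketch below (synthetic data; the offsets and trend are arbitrary), two stations share the same regional signal but differ by a constant absolute offset; once each is expressed as an anomaly against its own baseline mean, the two series are identical:

```python
import numpy as np

years = np.arange(1980, 2011)
rng = np.random.default_rng(1)

# A shared regional signal: a small trend plus common weather noise.
regional = 0.02 * (years - years[0]) + rng.normal(0.0, 0.2, years.size)

shaded = 14.0 + regional    # station under the old tree (cooler absolute temps)
exposed = 16.5 + regional   # station over bare rock (warmer absolute temps)

# Anomalies relative to each station's own period mean.
anom_shaded = shaded - shaded.mean()
anom_exposed = exposed - exposed.mean()

# The constant 2.5 degC offset vanishes; the anomaly series coincide.
print(np.allclose(anom_shaded, anom_exposed))
```

Which is also why the argument only holds while the offset stays constant – a station move or creeping urbanisation breaks the assumption, exactly the “inhomogeneities” caveat above.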
So, will we no longer hear of any year being the warmest in (choose a period), as absolute temperatures are no longer important?
Will this be replaced with “the largest anomaly in recorded history” or similar?
That will certainly make it much easier to sell the message as I’m sure more people (us non-climate scientists) will understand anomalies far better than absolute temperatures.
Sounds like another bait-and-switch to me, similar to ‘global warming’ morphing to ‘climate change’.
Gneiss says: June 12, 2010 at 3:03 pm
That “remarkable statement” is just a straightforward description of temperature anomalies. Do you have data showing that in general they don’t work?
Nick Stokes says: June 13, 2010 at 12:55 pm
Basil, what you’re saying doesn’t negate the NOAA statement. What matters for climate change is temperature change. It’s as simple as that. It’s true that when you measure temperature change, it could be partly due to some artefact. And you have to sort that out, as you do with any measurement. But all the NOAA is saying is that you have to focus on the right thing to measure.
Those who think the absolute temperature doesn’t matter, you should rethink the actual ways that the data anomalies are determined. (If E.M. has already commented, this might be redundant).
First, every grid is smoothed (homogeneity adjustment to 250 km) using the stations enclosed in that grid. If there are three poorly rated sites and no good sites, then the anomaly value will mostly be an indicator of the urban areas in that grid. But how much of the earth’s surface area within that grid is urban or rural? WTH knows!
Second, if good rural sites within that grid were included in the homogeneity adjustment in prior years but have recently been dropped, the data is again skewed upward. We know this has occurred with wanton abandon; http://chiefio.wordpress.com/ .
Third, the data from a grid with poorly sited stations, which once contained data from X number of well-rated stations that have since been dropped, is used to infill data for adjacent grids up to 1200 km away that may have had well-rated station dropout and an unknown urban-to-rural proportion, which can then be used to infill another grid, another 1200 km away, and so on. We know this occurs; http://chiefio.wordpress.com/ .
So, considering all the homogeneity adjustments, data infilling, and highly rated station dropout, in essence what we are measuring in the current surface data sets is an anomaly number reflecting the dropout of highly rated, usually rural stations.
It’s as simple as that.
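For what it’s worth, the infilling step can be sketched with a toy inverse-distance scheme. This is illustrative only – it is not GISS’s or NOAA’s actual algorithm, and the station positions and anomalies are invented – but it shows how values from a handful of surviving (here, warm-biased urban) stations get carried into station-free cells within the search radius:

```python
import numpy as np

# Toy network: only two urban, warm-biased stations survive in the region.
# The rural stations that once read ~0.2 degC anomalies have been dropped.
stations = {            # (x, y) position in km -> this year's anomaly, degC
    (0, 0): 0.6,
    (300, 0): 0.7,
}

def infill(cell, max_dist=1200.0):
    """Inverse-distance-weighted anomaly from stations within max_dist km.

    Returns NaN if no station lies within the search radius.
    """
    weights, values = [], []
    for (x, y), anom in stations.items():
        d = np.hypot(cell[0] - x, cell[1] - y)
        if d < max_dist:
            weights.append(1.0 / max(d, 1.0))  # cap weight at 1 km
            values.append(anom)
    return np.average(values, weights=weights) if values else float("nan")

# A rural, station-free cell 800 km away inherits the urban anomaly.
print(infill((800, 0)))
```

Any bias in the surviving stations propagates, undiluted, into every cell they infill; nothing in the weighting scheme can distinguish a real regional anomaly from a siting artifact.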
And, provided that any good rural site that was collecting data pre-1995 was not dropped