Heh.

From Tom Nelson

Email 600, Sept 2007: Watts exposé makes NOAA want to change entire USA method

Email 600

[Tom Karl, Director of the National Climatic Data Center] We are getting blogged all over for a cover-up of poor global station and US stations we use. They claim NCDC is in a scandal by not providing observer’s addresses. In any case Anthony Watts has photographed about 350 stations and finds using our criteria that about 15% are acceptable. I am trying to get some our folks to develop a method to switchover to using the CRN sites, at least in the USA.

Hat tip: AJ

===============================================================

Note this email, because it will be something I reference in the future. – Anthony

152 Comments
DSW
February 4, 2012 12:10 pm

hehe

trbixler
February 4, 2012 12:21 pm

Drop as many as can be dropped then extrapolate. No one will notice.

Merovign
February 4, 2012 12:22 pm

Someone correct me if my recollection is wrong, but wasn’t the actual scandal at the time the fact that NOAA and the “climatology mafia” didn’t check this *themselves*, and that their reaction in the moment was to argue and minimize instead of revisiting the data?
REPLY: Yes, but Mr. Karl obviously misses what is obvious to everyone else. – Anthony

Rob
February 4, 2012 12:26 pm

Wow. Mr. NCDC (former AMS Pres) admitting (privately) that the USHCN is not worth a flip!!!

February 4, 2012 12:28 pm

A lot of hard yakka on your part pays off. Well done, Mr. Watts.

John in NZ
February 4, 2012 12:30 pm

High Five.

ScuzzaMan
February 4, 2012 12:31 pm

bwahahaha….

Evan Jones
Editor
February 4, 2012 12:31 pm

Wow.
Yet he does not seem to have switched over to CRN.
At this date we have surveyed well over a thousand stations (I, myself, have over 200 kills, a dozen f-t-f, the rest “virtual” and/or by direct interview). Most of the remaining USHCN1 stations are long closed, and some sites are known only after recent station relocations.
The recent switchover to USHCN2, substituting ~50 stations, does not, to my recollection, show a switchover to CRN stations — but I will give it a look-see and report back.
Also, USHCN1 showed a +0.6°C/century trend, while USHCN2 shows +0.72. But that’s adjusted data, of course. As NOAA has refused to release its adjustment code, we cannot reproduce the adjusted data, and therefore, of course, any results are Scientifically Insignificant.

David, UK
February 4, 2012 12:36 pm

Brilliant! I bet reading that for the first time felt a bit like pay day!
Off topic, but here in the UK we’re experiencing heavy snowfall up and down the country.
Here from the Press Association:
“Forecaster Paul Mott, of Meteo Group, the weather division of the Press Association, said the deep freeze was likely to continue into next week meaning the snow is likely to settle and much of Britain will remain carpeted in white.”
Until just a few hours ago, the Met (as reported by the BBC) was predicting “light snow” for tonight. Double “Heh!”

geography lady
February 4, 2012 12:43 pm

Is NOAA trying to lessen the number of observation stations? When I worked in the 1970s with air pollution monitoring stations, EPA decided to eliminate the number of sites reporting. Much of this was supposed to cut back the cost to EPA. But the air monitoring sites I worked with were financed by our local county agency, not EPA. I was working for a local county government environmental agency. It would not surprise me if they were to eliminate the number of meteorological stations. One needs more stations/data, not less, for more accurate data.

February 4, 2012 12:53 pm

Congratulations, Anthony!
On a related theme (good vs bad measurement locations), are there no sites that could be considered “pristine” and long-term? If any such sites exist, would it not be better to use the trends from a few good data points rather than attempt to adjust hundreds of not-so-good ones?
I’m thinking that National Parks would be good candidates, as real estate development, or land use changes, are generally not found in them. Perhaps this has been discussed already?

Gary Hladik
February 4, 2012 12:59 pm

Bazinga!

Theo Goodwin
February 4, 2012 1:09 pm

Anthony’s data has been a black eye on the ruling regime for years. Just cannot wait to see the spin on this one from the Warmists. It will be based on magical statistics that can “disappear” any and all offending empirical observations. If they were genuine scientists they would learn.

Viv Evans
February 4, 2012 1:13 pm

Tee-double-hee!

Evan Jones
Editor
February 4, 2012 1:14 pm

1.) The new USHCN2 sites are all COOP, not a CRN site among them. That leads to the question of what the raw CRN data is (gridded and ungridded) and why the suggestion to convert to CRN readings was not implemented.
2.) After the substitution, there are 1218 USHCN2 sites as compared with 1221 USHCN1 sites. By my count, 50 have been added, 53 discontinued. This has had the effect of somewhat increasing the adjusted historical trend, by ~0.12C/century. This increase may be due to the change in stations, a change in adjustment, or both (or perhaps some other factor entirely).

Paul Westhaver
February 4, 2012 1:15 pm

No… that isn’t the half of it…
I speculate that while they were privately wringing their hands about the station data, they were publicly dismissing Andy Watts as that pesky, tedious, obsessed weather station dork…
I just hate NOAA bravado and arrogance.

George Munsch
February 4, 2012 1:22 pm

@ Roger Sowell
Siting is only part of the problem. The type and style of instrumentation changed several times over the period of interest, and corrections to the data are then made because the data produced is discontinuous. In principle this should not be necessary, but it is, and typically the corrections are poorly applied. Anthony has made several postings comparing various generations of instruments, and the differences in them.
George M.

Evan Jones
Editor
February 4, 2012 1:30 pm

Cortland
Cooperstown
Bedford
Belvidere
Mohonk
Maryland
New York
Norwich
‘Lantic City
Stroudsburg
Blue Hill
Morrisville
I been everywhere, man, I been everywhere.

February 4, 2012 1:32 pm

Just goes to show how threatened these people feel. Call me suspicious, but if they had nothing to hide I would have expected more than just this frightened capitulation.
Anthony, this is a BIG win for you and your team. Thanks again for all the hard work put in.

February 4, 2012 1:35 pm

Guys, guys, guys! These stations were clearly taken out of context!

Kev-in-UK
February 4, 2012 1:39 pm

Hmm… but where are the defenders of the sacred ‘data’ – the warmista based trolls? Surely, one of them must be along soon to post some c*ck and bull story about how the data was accidentally fecked up but suddenly became ‘good’ again, once they had found it hiding under their discarded grant funding and pay slips!

Ken Harvey
February 4, 2012 1:39 pm

A round of applause for Mr. Watts please.

DaveG
February 4, 2012 1:48 pm

Thanks to Anthony and Climategate 1 & 2, the intransigent people (warmers) have sat up and noticed; they circled the wagons to no avail, and in military terms they have been fighting a classic rearguard action with a steady but quickening retreat. These emails and the poor ground station sitings are nothing new to readers at WUWT, but they are a sad commentary on the sordid state of so-called climate science and the crass political class. How any disciple of the church of global warming can defend the indefensible is a mystery to me!
Thanks again, Anthony, for your tireless battle to expose these charlatans.

Al Gored
February 4, 2012 1:57 pm

Funny. On the one hand, the Team was insisting that CO2 was the great driver of change while, on the other hand, they were insisting that what was doing Anthony wasn’t. As it turns out…
Meanwhile, yesterday’s employment statistics confirm that it is Green Shoots all the way for the USA now… so vote for Obama!
One can only imagine how those employment stats were created…

Evan Jones
Editor
February 4, 2012 1:57 pm

A round of applause for Mr. Watts please.
Nobody Beats the Rev!

oeman50
February 4, 2012 1:57 pm

I have surveyed (and submitted) one hard-to-reach weather station. I would not have known what to look for without the information in this blog over the past years. The problems I saw that can influence the measurements from just this one site made concrete what has been said here. How can anyone grounded in the scientific method believe that the world’s temperature has increased by 0.1 C (or whatever) due to readings from these places? Thanks Anthony and Evan!

Steve
February 4, 2012 1:58 pm

The feeling has got to be like watching your child take their first steps!
Congrats.

Beth Cooper
February 4, 2012 2:00 pm

Great to see Anthony, Evan and the rest of Anthony’s citizen army vindicated. A step forward for transparency and empiric investigation…. ‘Just the facts, ma’am.’

WLF15Y
February 4, 2012 2:02 pm

Maybe I missed something along the way, but in the email he admits to only 15% that are “unacceptable”. Is this % purely due to the time frame in which the email was sent?

Editor
February 4, 2012 2:05 pm

evanmjones says:
February 4, 2012 at 1:14 pm

1.) The new USHCN2 sites are all COOP, not a CRN site among them. That leads to the question of what the raw CRN data is (gridded and ungridded) and why the suggestion to convert to CRN readings was not implemented.

Perhaps people thought about what a data break it would be in the USHCN data and that there would be no long continuous records in the years after the break.
We’ve heard very little about the CRN site data. Perhaps people are waiting for some sizable fraction of the blessed 30 year climate period before trying to embrace the CRN data.
Or perhaps they’ve found the CRN data isn’t tracking the airport station data very well.
It would be a good amateur project to collect monthly CRN averages and post summaries and graphs à la GISS, UAH, etc., and produce data suitable for inclusion at Wood-for-Trees.
I’d be interested if I didn’t have this pesky job that keeps me busy. And fed. Fed is good.
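As a rough illustration of the amateur project Ric describes, here is a minimal Python sketch that averages monthly station values into a single national series. The file name and column layout are hypothetical placeholders, not the actual USCRN product format, and a straight station mean is used where a real product would grid and area-weight the stations.

```python
# Minimal sketch: average monthly CRN station temperatures into one
# national series. "crn_monthly.csv" and its columns are hypothetical.
import csv
from collections import defaultdict

def national_monthly_means(path):
    """Average all reporting stations for each (year, month)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = float(row["tavg_c"])
            if t <= -99:  # skip missing-value sentinels
                continue
            key = (int(row["year"]), int(row["month"]))
            sums[key] += t
            counts[key] += 1
    return {k: sums[k] / counts[k] for k in sorted(sums)}

if __name__ == "__main__":
    for (year, month), mean in national_monthly_means("crn_monthly.csv").items():
        print(f"{year}-{month:02d}  {mean:6.2f} C")
```

From there, exporting the series as a plain two-column text file would be enough for plotting or for a Wood-for-Trees-style comparison.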

Joseph Thoma
February 4, 2012 2:05 pm

Theo Goodwin
February 4, 2012 at 1:09pm
Theo, if I remember right, Anthony’s project and data came under an attack not only by the alarmists, but also by many luke-warmers, and I could never understand why. What the luke-warmers hoped to gain by downgrading Anthony’s project beats me.
Taras

pat
February 4, 2012 2:07 pm

Of course the instrument sites are unacceptable. I have heard the meteorologists in charge of these stations say as much.

Owen
February 4, 2012 2:18 pm

One man can make a difference. You’re the man Anthony. Thanks for all you do !

SidViscous
February 4, 2012 2:24 pm

WLF15Y
Go back and re-read. He says only 15% are acceptable.

Tom Konerman
February 4, 2012 2:27 pm

evanmjones says:
February 4, 2012 at 1:30 pm
Cortland
Cooperstown
Bedford
Belvidere
Mohonk
Maryland
New York
Norwich
‘Lantic City
Stroudsburg
Blue Hill
Morrisville
I been everywhere, man, I been everywhere.

R de Haan
February 4, 2012 2:39 pm

Not only is this the best climate blog on the planet, it’s also the most effective, considering what came crawling out of the woodwork of the CAGW movement.
This isn’t a small achievement.
Great work from Tom Nelson, who has sunk his teeth into the ClimateGate e-mails and simply can’t let go.
Great job.

February 4, 2012 2:41 pm

About 6 inches of snow at the moment here in Cambridge, England… so I guess our weather stations will be ignored for a while huh? 😉

February 4, 2012 2:45 pm

evanmjones says: February 4, 2012 at 1:30 pm
Cortland, Cooperstown, Bedford, Belvidere, Mohonk, Maryland, New York, Norwich, ‘Lantic City. Stroudsburg, Blue Hill. Morrisville
I been everywhere, man, I been everywhere.

Darn near, but not Belle Plaine, Toledo, Clinton, Galva, or Aledo. 🙂
Nice to see your hard work pay off, Evan.
Mike

Jessie
February 4, 2012 2:45 pm

Bravo for your hard work Anthony and also Tim for reading each and every one of the ClimateGate2 emails
The email600 also includes:
.. IDAG is meeting Jan 28-30 in Boulder. You couldn’t make the
last one at Duke. Have told Ferris about IDAG, as I thought DAARWG
might be meeting in Boulder. Jan 31-Feb1 would be very convenient
for me – one transatlantic flight, I would feel good about my carbon
bootprint and I would save the planet!

Cheers Phil’

(bold inserted)
The acronyms are enough to bamboozle anyone. Or at least keep them on the outer.

Jessie
February 4, 2012 2:48 pm

Apologies, meant Tom in previous post.

R. Shearer
February 4, 2012 2:51 pm

Karl’s behind must be jealous because of all the crap that comes from his keyboard.

Jessie
February 4, 2012 2:58 pm

email600
DAARWG!
Data Archiving & Access Requirements Working Group
e.g. http://www.sab.noaa.gov/Meetings/2011/march/SAB_Mtg_Pres_Mar11_Webster_FINALv3_03-09-11.pdf

Barclay E MacDonald
February 4, 2012 3:11 pm

Anthony your effort and tenacity are nothing short of amazing. This was your project from the very beginning. Beautiful work!

LazyTeenager
February 4, 2012 3:14 pm

So we now have evidence that they acknowledge the existing meteorology is less than ideal for climate change monitoring and are motivated to improve it.
Ooops. Seems to contradict notions of nefarious behavior. If they wanted to produce fake data to support some climate conspiracy, they would not bother to try to improve the network now, would they?

Robin Hewitt
February 4, 2012 3:19 pm

The weatherman said, “minus 5 in the cities tonight, minus 10 in the countryside”.
So they do believe in the urban heat island and station siting must be important.
Who’d have thunk it.

LazyTeenager
February 4, 2012 3:22 pm

evanmjones says
As NOAA has refused to release its adjustment code, we cannot reproduce the adjusted data, and therefore, of course, any results are Scientifically Insignificant.
———-
I was under the impression that it’s relatively easy to write your own adjustment code and that it has been done multiple times. And they all come to much the same conclusions about the temperature trends.
So doesn’t that make access to the NOAA code kind of irrelevant, since the actual principles involved are well known?

LazyTeenager
February 4, 2012 3:29 pm

Paul Westhaver says
No… that isn’t the half of it…
I speculate that while they were privately wringing their hands about the station data, they were publicly dismissing Andy Watts as that pesky, tedious, obsessed weather station dork…
I just hate NOAA bravado and arrogance.
———–
That’s weird. You just make up a story, and then you claim this is a valid justification of your contempt.

February 4, 2012 3:31 pm

Funnily enough this is the concrete foundation upon which my admiration for WUWT is based.

KV
February 4, 2012 3:32 pm

Roger Sowell says: February 4, 2012 at 12:53 pm
“On a related theme (good vs bad measurement locations), are there no sites that could be considered “pristine” and long-term?”
Roger: In his article “What’s Wrong With the Surface Record” John L. Daly listed the following U.S. sites as some in that category:
Ashton, Idaho; Basin, Wyoming; Cedar Lake, WA; Cold Bay, Alaska; Davenport, WA; Eagle Pass, Texas; Lamar, Colorado; Lander, Wyoming; Lampasas, Texas; Nome, Alaska; Spickard, Missouri;
Tombstone, Arizona; Yellowstone National Park; Yosemite National Park HQ, California.
http://www.john-daly.com/ges/surftmp/surftemp.htm
It would be interesting for those with the expertise to do a comparison with the graphs John lists and those now listed at Hansen’s Gistemp.
I do note that, even using Hansen’s data “after removing suspicious records”, in almost all cases the places above not only still showed no “unprecedented warming”, but 1934 was still clearly the hottest year in the USA in the time frame covered.
A big tick to Anthony and all his volunteers.

juanslayton
February 4, 2012 3:33 pm

evanmjones: I been everywhere, man, I been everywhere.
Well, I been to Stehekin. And seen the elephant. (That’s an Oregon Trail joke, boy.)

Jessie
February 4, 2012 3:37 pm

Phil Jones is a member of the Data Archiving and Access Requirements Working Group (DAARWG)
p17 http://www.sab.noaa.gov/Meetings/2011/november/SAB_Mtg_Pres_Nov11_DAARWG-Final.pdf
Though DAARWG agendas/publications are not listed or easy to find, the Joint Office of Science Support ‘JOSS works closely with scientists and research managers to plan, organize and conduct scientific programs in the most productive, efficient and cost-effective ways.
JOSS works on many levels, from consulting with individual investigators to working with research managers and funding agency officials who are planning large-scale geophysical field experiments and monitoring projects….

http://www.joss.ucar.edu/index.html
At the request of the scientific community and in collaboration with the several Federal agencies, JOSS provides staff to manage major, scientific programs, including:
United States Global Change Research Program (USGCRP)
USGCRP International and International Group of Funding Agencies for Global Change Research (IGFA),
Intergovernmental Panel on Climate Change (IPCC) Working Group II Technical Support Unit.
Climate Variability and Predictability (CLIVAR);
Ocean Technology and Interdisciplinary Coordination (OTIC); and
Carbon Cycle Science Program (CCSP).’
http://www.joss.ucar.edu/scientific_collaboration.html#education
Agendas and background information listed on the JOSS website for the dates of DAARWG cf email600 (11 Sept 2007)
■ 2007 Global Carbon Cycle Program P.I. Meeting
10-11 September 2007; Silver Spring, MD
http://www.joss.ucar.edu/joss_psg/meetings/Meetings_2007/GCC_PI/index.html
■ Southern Ocean Gas Exchange Experiment (GasEx) Meeting
12 September 2007; Silver Spring, MD
http://www.joss.ucar.edu/joss_psg/meetings/Meetings_2007/GasEx/index.html
■ Six decades of fishery genetics: A retrospective view and a vision for the future
A Public Symposium
17-18 September 2007; Seattle, WA
http://www.joss.ucar.edu/joss_psg/meetings/Meetings_2007/Six_dedcades_of_fishery_genetics/index.html
■ IPCC Expert Meeting on New Scenarios: Toward New Scenarios for Analysis of Greenhouse Gas Emissions, Climate Change, Impacts, and Response Strategies
19-21 September 2007; Noordwijkerhout, Netherlands
■ Detecting the Atmospheric Response to the Changing Face of the Earth: A Focus on Human-Caused Regional Climate Forcings, Land-Cover/Land-Use Change, and Data Monitoring
27-29 August 2007; Boulder, CO
http://www.joss.ucar.edu/joss_psg/meetings/Meetings_2007/Detecting/Index.html
source: (2007) http://www.joss.ucar.edu/events/past_events_07.html
http://www.joss.ucar.edu/events/past_events.html
Joint Office for Science Support http://www.joss.ucar.edu/daarwg/feb09/index.html
In regard to Prof Jones’ (?sarcastic) comment in email600 on feeling good about his carbon bootprint…. save the planet, JOSS also provides policy advice on travel: http://www.joss.ucar.edu/policies/index.html#related

Jeff Wiita
February 4, 2012 3:38 pm

That’s a good one.

LazyTeenager
February 4, 2012 3:39 pm

WLF15Y says
Maybe I missed something along the way, but in the email he admits to only 15% that are “unacceptable”. Is this % purely due to the time frame in which the email was sent?
——-
You misread the email.

Carl Brannen
February 4, 2012 3:40 pm

The amazing thing is that it took this long for the email to show up on WUWT. This sort of indicates to me how many emails are involved in this release.

February 4, 2012 3:42 pm

The current director of the center is Tom Karl, a lead author on three Intergovernmental Panel on Climate Change science assessments.
He has played a key role in reports developed by the USGCRP and the Intergovernmental Panel on Climate Change (IPCC). He served as Co-Chair of the USGCRP’s US National Assessment and its recent Global Climate Change Impacts in the United States report. Additionally, Karl was the convening lead author for the Observations Chapter for the IPCC’s Third Assessment Report and was the review editor for the chapter on Observations for its Fourth Assessment Report. He has been the convening and lead author and review editor of all the major IPCC assessments since 1990.
Who could have guessed?

LazyTeenager
February 4, 2012 3:47 pm

Joseph Thoma says
Theo, if I remember right, Anthony’s project and data came under an attack not only by the alarmists
———
I don’t think you remember correctly.
There have been some quibbles about the significance of some observations. But the main objection has been about Anthony’s interpretations of those observations.
Actual personal attacks against Anthony have largely been about this web site and very little about his sites project.

Jessie
February 4, 2012 3:51 pm

Tom Konerman says: February 4, 2012 at 2:27 pm
Great clip.
Good driving.
Beautiful bride.
2:05 Hope the corn sold well.

Evan Jones
Editor
February 4, 2012 3:55 pm

Juan has directly observed many more stations than I have (problem is I don’t drive). Others have also done a lot more direct surveys than I have (Anthony’s done around a hundred or so, maybe more). It has been a true team effort.
Most of my “contribution” has been “virtual” observation. That is, locating stations using satellite resources and/or employing hints to run down the curators, call/contact them, and get exact descriptions of the locations so I can place them on a zoomed-in Google Earth map image and make the necessary measurements.
Sometimes I’ve made a direct spot using Google’s “Street Level” view. And sometimes “Birdseye Views” from (what is now) Bing maps would turn up a station.
All in all, I’ve clocked in somewhere between 200 and 250 (a number of which have been confirmed by later direct surveys, which we strongly encourage).
NOAA’s allegations that we would harass the volunteer curators are entirely unfounded. They were all helpful and highly cooperative — and were happy to be recognized and thanked for their steadfast public service and civic-mindedness.
Additionally, I’ve put up hundreds of “measurement views” to supplement surveys others have made. Google Maps’ “ruler” feature is invaluable in this regard and allows us to evaluate the station and assign a rating.
It has used up a lot of elbow grease but has been enormous fun. And, without going into any details, there will be more fun to come.
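For the curious, here is a minimal sketch of the kind of measurement that rating step involves: great-circle distance from a station to the nearest artificial heat source, bucketed into a Leroy-style class. The coordinates are made up, and the distance bands below (100 m / 30 m / 10 m) follow the commonly cited Leroy (1999) thresholds only approximately; treat them as illustrative assumptions, not the project’s exact rating rules.

```python
# Toy siting check: distance from a sensor to a heat sink, then a
# Leroy-style class. Thresholds and coordinates are illustrative only.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    r = 6_371_000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def siting_class(dist_to_heat_source_m):
    """Bucket a station by distance to the nearest artificial heat source."""
    if dist_to_heat_source_m >= 100:
        return 1
    if dist_to_heat_source_m >= 30:
        return 2
    if dist_to_heat_source_m >= 10:
        return 3
    return 4  # a sensor right against the source needs a closer look

# Hypothetical station vs. a parking lot, as read off a map ruler:
d = distance_m(40.71234, -74.00567, 40.71251, -74.00542)
print(f"{d:.1f} m -> class {siting_class(d)}")
```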

LazyTeenager
February 4, 2012 3:57 pm

Jessie says
In regard to Prof Jones’ (?sarcastic) comment in email600 on feeling good about his carbon bootprint…. save the planet
———
I wish you would not just make stuff up. Without the context you can’t interpret this properly so you should not be making up interpretations just to suit your propaganda objectives. It’s fundamentally dishonest.

juanslayton
February 4, 2012 4:00 pm

KV: In his article “What’s Wrong With the Surface Record” John L. Daly listed the following U.S. sites as some in that category:
Ashton, Idaho; Basin, Wyoming; Cedar Lake, WA; Cold Bay, Alaska; Davenport, WA; Eagle Pass, Texas; Lamar, Colorado; Lander, Wyoming; Lampasas, Texas; Nome, Alaska; Spickard, Missouri;
Tombstone, Arizona; Yellowstone National Park; Yosemite National Park HQ, California.

I couldn’t find in Daly’s article where he pulled these stations out for commendation, but I’ll assume it’s in there somewhere. Tombstone can now be pulled from the list; they just moved the station to town. It now sits maybe 20 feet from a paved street.

KnR
February 4, 2012 4:00 pm

Basic science: ensure your measuring devices are calibrated, that any issues are known, and that they’re used correctly. It’s what they teach in any good school, let alone university. But it’s not a standard that ‘climate science’ can achieve, and they wonder why people are skeptical of the ‘grand claims of certainty’.

KnR
February 4, 2012 4:03 pm

LazyTeenager, expect that because you didn’t use the exact same method as they do, they’ll claim your results are worthless: a classic heads-I-win, tails-you-lose trick. But nothing to do with science.

David Ball
February 4, 2012 4:06 pm

Vindication. A lot of us have backed Anthony all along. 5 years to come to light. The data collection is a HUGE problem. Go team Watts !!
Lazy Teenager, you’re a doofus of the highest order.

David Ball
February 4, 2012 4:09 pm

LazyTeenager says:
February 4, 2012 at 3:57 pm
“I wish you would not just make stuff up. Without the context you can’t interpret this properly so you should not be making up interpretations just to suit your propaganda objectives. It’s fundamentally dishonest.”
Can you not see this is EXACTLY what your side has done?

Robert of Ottawa
February 4, 2012 4:18 pm

This was the very basis of your efforts; Tom Karl passes you a compliment.

Robert of Ottawa
February 4, 2012 4:18 pm

Don’t feed the lazy trolls

Curiousgeorge
February 4, 2012 4:21 pm

Well you know the saying: “If you don’t like the climate here in the Goldilocks Zone, just wait a couple centuries.” 😉

Rob Crawford
February 4, 2012 4:28 pm

Lazy Teenager: “I was under the impression that it’s relatively easy to write your own adjustment code and that it has been done multiple times. And they all come to much the same conclusions about the temperature trends.”
Sure, it’s easy enough to throw an offset on a reading. Doesn’t mean it’s correct, or that you’re not cooking the offsets to get the results you want.
Why not focus on getting good data in the first place?

cui bono
February 4, 2012 4:28 pm

Yo, go Anthony! And kudos to all the volunteers and of course the indefatigable Tom!
Meanwhile the Met Office-forecasted ‘light snow’ here in the UK has cut off my satellite TV signal and my young cats won’t go out. Time once again to cite Dr David Viner: “Cats just aren’t going to know what snow is!”

David Ball
February 4, 2012 4:37 pm

The Met Office’s “barbecue summer” means something quite different to those of us who have seen the pictures of most siting issues.

Bigred (Victoria, Australia)
February 4, 2012 4:38 pm

Pinged, I reckon. Onya Anthony (and of course the indefatigable Mr Nelson).

Rhoda Ramirez
February 4, 2012 4:51 pm

cui bono: Cats aren’t dumb.

Green Sand
February 4, 2012 4:55 pm

Heh.
Sometimes three letters and a . is just enough!

Brian H
February 4, 2012 4:57 pm

Edit comments:
geography lady: “eliminate” means reduce to zero. Overkill word!

Al Gored says:
February 4, 2012 at 1:57 pm
Funny. On the one hand, the Team was insisting that CO2 was the great driver of change while, on the other hand, they were insisting that what was doing Anthony wasn’t. As it turns out…

Almost incoherent; try again.
WLF15Y: Nope. That’s 15% ACCEPTABLE. By advanced subtraction, it follows that 85% are unacceptable.
Tom: your video won’t play where I am (EMI copyright), so here’s a classic substitute:

evanmjones says:
February 4, 2012 at 3:55 pm

NOAA’s allegations that we would harass the volunteer curators are entirely unfounded. They were all helpful and highly cooperative — and were happy to be recognized and thanked for their steadfast public service and civic-mindedness.

Indeed; by all indications, NOAA has been ignoring them and taking them and their equipment etc. for granted.

Jessie
February 4, 2012 5:00 pm

In March 2009 Phil Jones responded to an email from S Clegg, University of East Anglia (UEA). Clegg’s email thanked recipients for responding to requests for information for their Annual Report and another document placed in pigeon-holes. The new request (5/3/2009) was for information for a new publication titled ‘Leadership, Innovation and Collaboration’. Clegg provided the email recipients with a few examples of the type and style of information required: e.g. Mike Hulme appointed Editor-in-Chief of “Wiley Interdisciplinary Reviews: Climate Change”, etc.
Jones replies:
1) Recent release of paper in Nature (Geosciences)
’…..observed changes in Arctic and Antarctic
temperatures are not consistent with natural climate variability, but instead are directly attributable to human influence on the climate system as a result of the build-up of greenhouse gases in the atmosphere….The results show that these human activities have already caused significant warming in both polar regions, with likely impacts on polar biology, indigenous communities, ice-sheet mass balance and global sea level.’

2) ’ We would have highlighted the new set of UK Climate Scenarios (UKCP09), but they will not be out for several months. They should be a must for 2 years time!’ [bold added]
3) Jones is now a member of the NOAA Working Group called ‘Data Access and Archiving Working Group’ (DAARWG). ’This reports to the NOAA Scientific Advisory Board (SAB). I’ve attached its latest recommendations. There is a strong likelihood they will get acted upon now NOAA has a National Climate Service. … which reports to the Scientific Advisory Board (SAB) of NOAA. The aims of DAARWG are to provide the SAB with guidance on how best to archive the increasing amounts of observational and climate model data that NOAA is obliged to keep by Federal Laws. The group has advised on the development of guidelines to decide which datasets need to be archived and also addressed issues of access. DAARWG oversees all three of NOAA’s principal Data Centres as well as the 30+ centres of data that NOAA runs’ [bold added]
4) That he [Jones] had just ‘rotated off the Hadley Centre Science Review Committee.’
Source: http://foia2011.org/index.php?id=1714
P14-15 Phil Jones remains a member of DAARWG, his ?three year term due to be completed in 2011.
http://www.sab.noaa.gov/Meetings/2011/november/SAB_Mtg_Pres_Nov11_DAARWG-Final.pdf
Ferris, mentioned in email600 [IDAG ??Information Dissemination Advisory Group http://www.fsa.gov.uk/ ], is also a member of the DAARWG.
email600 states ‘Would be great to get IDAG and DAARWG on the
same timeframe’

Could we have some clarification on the acronym IDAG please?
[Reply: http://www.acronymfinder.com/IDAG.html ~dbs]

RichieP
February 4, 2012 5:04 pm

LazyTeenager says:
February 4, 2012 at 3:57 pm
‘I wish you would not just make stuff up. Without the context you can’t interpret this properly so you should not be making up interpretations just to suit your propaganda objectives. It’s fundamentally dishonest.’
Your usual projection, with that added pinch of cognitive dissonance thrown in for seasoning. Such an inadvertent comedian, and entirely without any self-awareness, as befits your moniker.

kbray in california
February 4, 2012 5:10 pm

After watching JeffId’s Sea Ice Video 1978-2012….

The maximum Arctic ice area seems to vary only by about 10%, that can easily be within a natural fluctuation.
If I had normal 20 foot snow drifts in my driveway, but only got 18 feet for a few years, I’d still be complaining about the shoveling, even though it was 10% less.
Good work Anthony and Jeff too, these fluctuations are no crisis.
Thanks for enlightening the world.

Green Sand
February 4, 2012 5:11 pm

cui bono says:
February 4, 2012 at 4:28 pm #
and

David Ball says:
February 4, 2012 at 4:37 pm

If you want to see their true colours ask the Met Office about the following:-
http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/decadal-prediction
Note that they have chosen not to update it.
And now look at their next offering:-
http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/long-range/decadal-fc
PS, don’t expect the MO to update that one either; but whether or not the MO joins in, it will be updated.

Evan Jones
Editor
February 4, 2012 5:13 pm

I was under the impression that it’s relatively easy to write your own adjustment code and that it has been done multiple times. And they all come to much the same conclusions about the temperature trends.
Your impression is absolutely incorrect. Furthermore, we are not interested in anyone else’s adjustments. We are interested in how NOAA makes the adjustments. In order to find out, we require the – exact – procedures/algorithm, working code (and operating manuals) so that NOAA’s adjustments can be replicated.
Unless the adjustments can be replicated using NOAA’s exact procedures, they cannot be independently reviewed.
You can’t just show up with adjusted data and say, “I got it from the boys.” Not and expect to be treated seriously.
So doesn’t that make access to the NOAA code kind of irrelevant, since the actual principles involved are well known?
This Liberal New Yorker hails from Show Me, Missouri, on that one.
The code, Jack. That’s what we need. A nice cup of tea and the code . . .

John Billings
February 4, 2012 5:37 pm

The new improved HADCRUT4 lowers earlier recorded temps to make current actual temps look higher. It’s been found already, it’s old news. There was an Iceland story here on WUWT a week or two back on exactly that.
Two hundred years ago, people took to the streets. Now we take to the blogs. The military-industrial complex must be pissing their pants.
We’ve got nice graphs though.

Jessie
February 4, 2012 5:37 pm

LazyTeenager says: February 4, 2012 at 3:57 pm
The only boots I am familiar with LT are those worn by people such as Willis E and a few good women.
And what is right is right, but you ain’t been right yet……………

Bill Illis
February 4, 2012 5:50 pm

Tom Karl and the NCDC have been given a pretty big pass so far.
All of the problems and all the unjustified corrections to the temperature record are done through the non-transparent NCDC.
We shouldn’t be going after James Hansen and Phil Jones so much as we should be looking at what Tom Karl (and his NCDC employees) have done.

Alan S. Blue
February 4, 2012 6:03 pm

Dear Lazy Teenager, I’ll dramatically simplify the problem and turn it around:
Pretend you have only two thermometers, their accuracy is ±0.1C. They’re “CRN1”, meaning there aren’t any barbeques, overhanging trees, nor jet exhaust (a criteria that 85% of the existing stations fail). Assume they’re auto-adjusted for humidity and elevation. (Some of the adjustments make perfect sense.) Put them anywhere you like so long as they’re fifty miles apart.
Now: Pretend the two thermometers are two corners of a rectangular area (projected on a sphere) and provide the average temperature of the -entire- box to an accuracy of 0.01C under all weather conditions.
Making one’s own code to do exactly that is non-trivial.
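To make Alan’s setup concrete, here is a toy sketch of the easy part: interpolating the interior of the box from the two corner stations with inverse-distance weighting. It is deliberately naive (flat geometry, no weather, no elevation), which is the point: even the simplest gridding code embeds choices, and none of this machinery addresses the claimed 0.01C accuracy.

```python
# Toy grid-box mean from two corner stations via inverse-distance
# weighting on a flat 50 km box. Illustrative only; real gridding
# schemes (and their error claims) are far more involved.
import math

def idw_box_mean(t1, t2, width_km=50.0, n=20):
    """IDW mean over an n x n sample of a box with stations at
    opposite corners (0, 0) and (width, width)."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = (i + 0.5) / n * width_km  # sample at cell centers
            y = (j + 0.5) / n * width_km
            d1 = math.hypot(x, y)
            d2 = math.hypot(width_km - x, width_km - y)
            w1, w2 = 1.0 / d1, 1.0 / d2
            total += (w1 * t1 + w2 * t2) / (w1 + w2)
    return total / (n * n)

print(idw_box_mean(10.0, 12.0))  # ~11.0, by symmetry
```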

Geoff Sherrington
February 4, 2012 6:49 pm

Roger Sowell February 4, 2012 at 12:53 pm RE: Pristine sites
Australia probably has more intrinsically pristine weather stations than the USA. I selected the best subset I could, for the year 1970 onwards (due to availability of modern data). A summary is at
http://www.geoffstuff.com/030303CONDENSED%20PRISTINE%20SLOPE%20AUSTRALIA.xls
As one can see from the graphs at the bottom of the summary, the linear slope of the movement in Tmax and Tmin varies enormously and does not correlate with any extraneous variable I could identify. (The linear fit is for decoration, not math.) It is therefore somewhat pointless to try to set a baseline temperature when there is no consistency among pristine sites, whose trends here varied from -2.5 to +4.8 degrees per century equivalent, over some 50 sites.
Going back earlier, when data collection often relied on individuals in remote locations, Dr Simon Torok wrote his doctoral thesis in 1996 on the more accurate compilation of weather data. Some of his compilations are on my web site, with thanks & acknowledgements to him, at
http://www.geoffstuff.com/Torok%20thesis%20excusesW2003.doc

philincalifornia
February 4, 2012 6:56 pm

Lazy Teenager’s view of the world is that his honest opinions are honest, but anyone who has honest opinions that differ from his is being dishonest.
Keep fighting that rearguard action LT. Not long to go now ….
… watch out for bootprints.

Jean Parisot
February 4, 2012 6:56 pm

While I can see how a large network of volunteer-run stations could accumulate error over time, I can’t see how the “corrections” aren’t fully documented, if only due to bureaucratic inertia rather than data integrity.

February 4, 2012 7:20 pm

To the best of my knowledge, there has only been one book published that lays out Anthony’s extraordinary doings regarding the recording of temperature:
http://amzn.to/yLN0Zm

February 4, 2012 7:46 pm

Congratulations Anthony, surfacestations.org has done a great job!

Steve from Rockwood
February 4, 2012 8:07 pm

If it were me, I would stop for a minute and have a scotch.

Steve from Rockwood
February 4, 2012 8:09 pm

Is were even a word?

Jessie
February 4, 2012 8:58 pm

philincalifornia says: February 4, 2012 at 6:56 pm
Lazy Teenager’s view of the world is that his honest opinions are honest, but anyone who has honest opinions that differ from his are being dishonest.
Keep fighting that rearguard action LT. Not long to go now ….
… watch out for bootprints.

or hand prints……………………of one artistic form or another………………………..
Parental guidance recommended. Some will need parenteral guidance……………

Editor
February 4, 2012 9:29 pm

Steve from Rockwood says:
February 4, 2012 at 8:07 pm
> If it were me, I would stop for a minute and have a scotch.
I finished off a bottle of Drambuie in Anthoy’s honor. Is that good enough?

Jessie
February 4, 2012 9:36 pm

Notes on science, bootprints, sites, legislation and economics……… perhaps policy and expenditure…………
cf address of Dr T R Karl in email600.
July 1969 Buzz Aldrin, Bootprint on Lunar Soil
http://www.britannica.com/bps/media-view/73230/1/0/0
AND
http://www.smh.com.au/photogallery/world/vintage-nasa-photographs-20111024-1mga7.html
104th Congress, 1st Session HR 2504
To designate the Federal building located on the corner of Patton Avenue and Otis Street ………………. as the ‘Veach-Baley Federal Complex’.
http://www.gpo.gov/fdsys/pkg/BILLS-104hr2504rh/pdf/BILLS-104hr2504rh.pdf
20/7/2009 Army Contracts to Study Its ‘Carbon Bootprint’
WASHINGTON — As the federal government prepares to regulate greenhouse gases, the U.S. Army has contracted a firm to evaluate the military’s “carbon bootprint,” a balance sheet of its emissions.
http://online.wsj.com/article/SB124811157068565825.html
29/9/2009 RECOVERY (ARRA) – Lighting Feasibility Studies
RECOVERY ACT PROCUREMENT. THIS NOTICE IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY.
The project will be funded through the American Recovery and Reinvestment Act of 2009 (ARRA). The proposed procurement is being made under an Architect-Engineer (A-E) Indefinite Delivery Indefinite Quantity (IDIQ) Multiple Award Supplemental Contract for the primary geographic area of GSA Region 4 (GS-04P-06-EX-D-0027).
The scope of this work is for lighting feasibility studies using relighting best practices to the Veach-Bailey Federal Building, 151 Patton Ave., Asheville, NC 28801.
https://www.fbo.gov/index?s=opportunity&mode=form&id=f9632b2f4d73fec3bea240135d2e2e73&tab=core&_cview=1
21 /5/2010 GSA Goes Green
The General Services Administration is Using ARRA Funds for Southeast Green Projects

The U.S. General Services Administration received nearly $5.6 billion in American Recovery and Reinvestment Act funding to modernize federal facilities and convert them into high-performance green buildings. Those dollars are starting to flow into communities in the Southeast as projects ramp up…..“The government spends a lot of money on energy use in buildings, and anything we can do to make that better and reduce our carbon footprint is a good thing…..”
• $4.4 million upgrade to the Veach-Baley Federal Complex in Asheville, N.C.
……. that offered energy conservation and renewable energy generation, could start within 120 days and had limited risk of failure. It also considered the facility’s condition, the project’s ability to improve asset utilization, return on investment, the opportunity to avoid lease costs and historic significance.’
(bold added)
http://southeast.construction.com/features/2010/0401_GreenConstruction.asp
Nov-Dec 2011 The Economic Bootprint of Defense Spending in Indiana
‘Since 2001, the value of defense contracts awarded to Indiana has more than doubled, the annual number of unique contracts awarded has increased nearly five-fold, and the number of Indiana defense contractors has grown significantly (see Table 1). The 2010 value of Indiana’s defense-related contracts ranked 23rd among states……………the estimated average compensation for direct defense-supported jobs was nearly $20,000 greater than Indiana’s average compensation per worker for all jobs.
http://www.incontext.indiana.edu/2011/nov-dec/article1.asp
Also mentioned in email600 was the need for information for the upcoming publication ‘Leadership, Innovation….’. An example given was for epidemiology. Tony McMichael (Australia), who practices and advises on both epidemiology and climate, contributed to IPCC reports and is mentioned in The Delinquent Teenager. Sir Michael Marmot, also an epidemiologist, has an interest in architecture and town planning (http://globetrotter.berkeley.edu/people2/Marmot/marmot-con1.html). Perhaps later in green star ratings of civil buildings? Or was that the longitudinal studies (Whitehall etc) on civil servants? Will need to check.

pat
February 4, 2012 10:06 pm

congrats anthony. your hard work was not in vain.
big thanx to mrs. watts for allowing u to persevere.

Keith
February 4, 2012 10:38 pm

OT but interesting. 101 Tory MPs in the UK parliament have revolted against the Coalition’s position on renewable energy, saying that subsidies to onshore wind and the siting of more wind turbines are wrong. http://www.telegraph.co.uk/news/politics/9061997/100-Tories-revolt-over-wind-farms.html

KV
February 4, 2012 10:42 pm

@ juanslayton February 4, 2012 at 4:00 pm
“I couldn’t find in Daly’s article where he pulled these stations out for commendation, ”
Along with many other stations from round the world that John considered met the requirements to be classed as “greenfields” sites, they are listed in a “clickable” Appendix – Station Records to 1998 or 1999 – at the end of his article.
That list contains what John considered to be one of the finest “greenfield” sites in the world, because of its history and ideal location: Valentia Observatory, Ireland. Read what John said about it, click on the graph, and see why the warmists and the whole CAGW crowd hate it.
And to anticipate any comment from LT about cherry picking: if ever there was a “bell-wether” surface station to monitor any climate change, Valentia would be the one!

Girma
February 4, 2012 11:12 pm

Anthony
With that email, all your efforts have been rewarded.
Congratulations!
Why the switchover if it were okay?

February 4, 2012 11:53 pm

Congratulations, Anthony, well done yet again. It looks like a massive payday from out here.

February 5, 2012 12:07 am

Steve from Rockwood says:
If it were me, I would stop for a minute and have a scotch.

Is were even a word?

Yes. Subjunctive mood. Seldom used these days except by the highly educated, pirates, and in Ebonics.

Dreadnought
February 5, 2012 12:12 am

A much-deserved, good old sock in the eye for the nay-sayers – nice pwnage, Anthony!

Steve C
February 5, 2012 12:29 am

Mentioned In Dispatches. That’s only just short of getting a medal! 🙂

MikeH
February 5, 2012 12:51 am

evanmjones says on February 4, 2012 at 12:31 pm:
“As NOAA has refused to release its adjustment code, we cannot reproduce the adjusted data,…”
Evan, Anthony, etc..
Is there correspondence documenting this? Is there a public statement as to why they refuse to reveal their methodology? Last time I checked, NOAA was a publicly funded operation, U.S. tax dollar$, etc…
Is Senator Jim Inhofe from Oklahoma aware of this? Has he addressed this with NOAA? If he is not aware of this, maybe it should be brought to his attention; he is a strong ally in rationally reviewing the data and the claims of AGW. He is the ranking member of the U.S. Senate Committee on Environment and Public Works. Maybe during one of their public meetings he can request NOAA to answer a few questions? He seems to be a person who can get things done…. Maybe some of his constituents, and others, could write him to see if there is an answer to this?
As to not releasing the data, I remember watching a documentary a few years ago on the mapping of the human genome. There was a government lab crunching away at mapping it out, but it was going to take years.
But a private company started doing the same thing, from the other end of the genetic map. They didn’t want to copy the results the government was producing. Well, since the government lab was publicly funded, its results were public information, and it released its findings regularly. The private company made an early announcement that they had 90% of the human genome mapped, much sooner than expected. They were legally able to include the government data with their own. So even if each of them had only mapped 45%, the government was not privy to the results from the private company, but the private company could legally use the government’s data. Whether someone views this as ‘right’ or ‘wrong’ is up to the individual, but it was legal. I wonder if NOAA is viewing this the same way: if they release all of their data and math, someone else will pick up the ball, leaving NOAA in the dust…

DR
February 5, 2012 1:18 am

@Ric Werme
Considering you spelled Anthony’s name wrong, I’d say that’s more than good enough!

Larry in Texas
February 5, 2012 1:21 am

Good work, Anthony! Once again, we are reminded about what went wrong at NOAA and NCDC and how you pointed that out. You should remember this – and remind your critics of it over, and over, and over again.

neil swallow
February 5, 2012 1:51 am

An intriguing post by Ed Caryl on the BEST temperature data. Surprised it’s not been mentioned here at WUWT: http://notrickszone.com/2012/01/30/best-fails-to-account-for-population-and-cold-winters/

John Marshall
February 5, 2012 2:21 am

Just as big a problem is the reduction of reporting stations from over 6000 to around 2300 in 1990. Most of these removed stations were in colder regions. The global average temperature in 1990 leapt up to be claimed as another doom scenario.

February 5, 2012 2:58 am

Phil Jones says in e-mail #600:
” IDAG is meeting Jan 28-30 in Boulder. You couldn’t make the
last one at Duke. Have told Ferris about IDAG, as I thought DAARWG
might be meeting in Boulder. Jan 31-Feb1 would be very convenient
for me – one transatlantic flight, I would feel good about my carbon
bootprint and I would save the planet!”
That final exclamation mark suggests to me that he knows it is all a load of crap.

Lars P.
February 5, 2012 3:31 am

LazyTeenager says:
February 4, 2012 at 3:57 pm
“I wish you would not just make stuff up. Without the context you can’t interpret this properly so you should not be making up interpretations just to suit your propaganda objectives. It’s fundamentally dishonest.”
LT, from what I see you are not lazy, posting post after post, and not a teen, judging from your posts. This is not a teen speaking.
On the web many people like to talk under a pseudonym, which I respect, even if we all know anonymity is just a fable. But why would you want to give the impression you are just an anonymous teen?

February 5, 2012 5:07 am

Just an amusing thought I had in relation to this…
“one transatlantic flight, I would feel good about my carbon
bootprint and I would save the planet!”

http://en.wikipedia.org/wiki/Nineteen_Eighty-Four
If you want a picture of the future, imagine a boot stamping on a human face—forever.
Not a bad analogy, I think.

Editor
February 5, 2012 5:12 am

DR says:
February 5, 2012 at 1:18 am
@Ric Werme
> Considering you spelled Anthony’s name wrong, I’d say that’s more than good enough!
Yeah, I noticed that right after I posted. In my foggy decision process, I debated posting an Oops or just hope no one would notice. However, nothing escapes WUWT nation these days. At the very least it’s supporting evidence of the state of the Drambuie bottle.

EternalOptimist
February 5, 2012 5:26 am

To paraphrase Travesty T
Do you consult your blogger about your surface stations? In science, as in any area, reputations are based on knowledge and expertise in a field and on published, peer-reviewed work. If you need surgery, you want a highly experienced expert in the field who has done a large number of the proposed operations.
Wrong answer!!

Frank K.
February 5, 2012 5:35 am

Robert of Ottawa says:
February 4, 2012 at 4:18 pm
“Don’t feed the lazy trolls.”
Don’t worry – he’s off coding up his own adjustments, since everyone knows how to do that…even NOAA…heh! /sarc.
BTW – I do wonder why NOAA refuses to release their adjustment code. Even GISS (after a lot of prodding and public embarrassment) released their code. Of course, we realized WHY GISS were so ashamed of it after they released it…

February 5, 2012 5:37 am

Let me get this straight:
Surface stations monitor the roughly 30% of the earth’s surface that is land, but they do not equally represent even that 30%, and they are not sited or functioning properly; their data need to be adjusted or manipulated in some manner in order to provide what can never be considered a “global” temperature, since they do not monitor the other 70% of the planet.
Can anything be built upon this “rock”?

Mike Monce
February 5, 2012 6:32 am

Alan Blue said:
“Pretend you have only two thermometers, their accuracy is ±0.1C. They’re “CRN1″, meaning there aren’t any barbeques, overhanging trees, nor jet exhaust (a criteria that 85% of the existing stations fail). Assume they’re auto-adjusted for humidity and elevation. (Some of the adjustments make perfect sense.) Put them anywhere you like so long as they’re fifty miles apart.
Now: Pretend the two thermometers are two corners of a rectangular area (projected on a sphere) and provide the average temperature of the -entire- box to an accuracy of 0.01C under all weather conditions.
Making one’s own code to do exactly that is non-trivial.”
I hope I’m just missing the sarcasm here. Not only is it non-trivial, it’s impossible! If the instrumental uncertainty is ±0.1C, then it is literally impossible to obtain an average that has an uncertainty a factor of 10 better than what the instruments can actually read.
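One nuance worth adding: for independent random errors, the mean of N readings does tighten as 1/sqrt(N), so the strict impossibility applies to systematic error, which no amount of averaging removes. A quick Monte Carlo sketch shows that even the optimistic independent-error case with two ±0.1C instruments leaves roughly ±0.07C of scatter on the mean, nowhere near 0.01C.

```python
# Monte Carlo check: scatter of the mean of two readings whose errors
# are independent draws from N(0, 0.1 C). Expect 0.1/sqrt(2) = 0.0707 C.
import random
import statistics

true_temp = 15.0
means = []
for _ in range(100_000):
    r1 = true_temp + random.gauss(0.0, 0.1)
    r2 = true_temp + random.gauss(0.0, 0.1)
    means.append((r1 + r2) / 2.0)

print(f"std of two-station mean: {statistics.stdev(means):.4f} C")
# A shared systematic bias would pass through this average untouched.
```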

JPeden
February 5, 2012 6:40 am

LazyTeenager says:
February 4, 2012 at 3:14 pm
So we now have evidence that they acknowledge the existing meteorology is less than ideal for climate change monitoring and are motivated to improve it.
Ooops. Seems to contradict notions of nefarious behavior. If they wanted to produce fake data to support some climate conspiracy, they would not bother to try to improve the network now, would they?

“You lazy teenager, when are you going to get a real job and move out of my basement.”
“Ok, Mom, but I’ve been trying really really hard!”
“Lordy…where’s my blood pressure medicine?”

DirkH
February 5, 2012 6:42 am

LazyTeenager says:
February 4, 2012 at 3:22 pm
“evanmjones says
As NOAA has refused to release its adjustment code, we cannot reproduce the adjusted data, and therefore, of course, any results are Scientifically Insignificant.
———-
I was under the impression that it’s relatively easy to write your own adjustment code and that it has been done multiple times. And they all come to much the same conclusions about the temperature trends.
So doesn’t that make access to the NOAA code kind of irrelevant, since the actual principles involved are well known?”
This is proof that Lazy Teenager is in fact a lazy teenager without any experience. Lazy Teenager; even if it were obvious HOW to write such adjustment code, we would still not know whether NOAA managed to do it without introducing errors. Some time later in your young life, you might get introduced to the craft of computer programming, and you will learn that even the most experienced programmers make mistakes all the time. Obviously, you don’t know this by now, otherwise you wouldn’t have written what you wrote.
Of course, this total lack of experience also goes a long way to excuse your entirely unfounded trust in IPCC consensus climate science.

trbixler
February 5, 2012 7:06 am

Anthony as Morpheus, Karl as the agent.
NOAA wants us all to take their pill.

Editor
February 5, 2012 7:06 am

So we now have evidence that they acknowledge the existing meteorology is less than ideal for climate change monitoring and are motivated to improve it.
Ooops. Seems to contradict notions of nefarious behavior. If they wanted to produce fake data to support some climate conspiracy, they would not bother to try to improve the network now, would they?

Yet they didn’t. They did not convert to CRN. They did not even add CRN stations to the mix. Using NOAA/NWS or Leroy (1999) standards, <10% of stations are acceptable, and using the new and (very much) improved Leroy (2010) standards, ~15% are acceptable.
All they did was to replace 53 stations with 50 other stations that show greater warming than those they replaced. In proportional terms, they replaced 2% of the USHCN1 stations and that resulted in a 20% warmer trend for USHCN2.
Any comment on nefarious behavior?

Rogelio
February 5, 2012 7:12 am

This posting at Real Science is by far the most important and relevant posting concerning the whole AGW scam/science since it started… a must read:
http://www.real-science.com/hadcrut-global-trend-garbage
It accounts for all that “cold area” before the ’80s. It’s THE graph that was used and still is, and it’s complete garbage, as I always suspected. This needs to be broadcast far and wide.

Victor Venema
February 5, 2012 7:26 am

LazyTeenager wrote: “I was under the impression that it’s relatively easy to write your own adjustment code and that it has been done multiple times. And they all come to much the same conclusions about the temperature trends.”
evanmjones answered: “Your impression is absolutely incorrect. Furthermore, we are not interested in anyone else’s adjustments. We are interested in how NOAA makes the adjustments. In order to find out, we require the – exact – procedures/algorithm, working code (and operating manuals) so that NOAA’s adjustments can be replicated.”
You can download the code used to homogenize (compute the adjustments) the NOAA’s USHCN dataset here:
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/#phas
This page also lists all the articles that describe how the software works. Many other homogenization codes are also freely available.
Homogenization adjustments are computed by comparing a candidate station with its neighboring stations. Nearby stations will have about the same climate signal. If there is a clear jump (relocation, change in instrumentation or weather shelter, etc.) or a gradual trend (urban heat island, growing vegetation, etc.) in the difference time series of two nearby stations, this is unphysical and needs to be corrected.
Rather than going through the code line by line, LazyTeenager is of course right that it is much smarter to try to understand the principle and to write your own code and apply it to the data. If you get about the same result, NOAA and you did a good job; if you find differences, you try to understand why: which of the two codes makes an error. That is how you normally do science.
The NOAA homogenization software was just subjected to a blind test with artificial climate data with inserted inhomogeneities. This test showed that the USHCN homogenization software improves the homogeneity of the data and did not introduce any artificial (warming) trends. The test was blind in that only I knew where the inhomogeneities were inserted and the scientists performing the homogenization did not. More information on this blind test:
http://variable-variability.blogspot.com/2012/01/new-article-benchmarking-homogenization.html
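For readers who want to see the neighbor-difference principle in code, here is a minimal sketch. It is not NOAA’s pairwise homogenization algorithm (that is what the linked code and papers document); it only shows the core idea: the candidate-minus-neighbor series should be roughly flat, and a sustained step in it flags a likely inhomogeneity.

```python
# Minimal sketch of neighbor-difference breakpoint detection, not
# NOAA's pairwise homogenization algorithm.
def find_break(candidate, neighbor):
    """Return (index, step) of the largest mean shift in the
    candidate-minus-neighbor difference series."""
    diff = [c - n for c, n in zip(candidate, neighbor)]
    best_i, best_step = None, 0.0
    for i in range(2, len(diff) - 2):  # keep a few points on each side
        left = sum(diff[:i]) / i
        right = sum(diff[i:]) / (len(diff) - i)
        if abs(right - left) > abs(best_step):
            best_i, best_step = i, right - left
    return best_i, best_step

# Toy data: a 0.5-degree jump inserted at month 60 of a 120-month series.
cand = [10.0 + (0.5 if i >= 60 else 0.0) for i in range(120)]
neigh = [10.0] * 120
print(find_break(cand, neigh))  # -> (60, 0.5)
```

A real algorithm adds significance testing, multiple neighbors, multiple breaks, and corrections, but the flat-difference idea is the same.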

kramer
February 5, 2012 7:43 am

Hey Tom Nelson, I just want to say a BIG THANK YOU for all of your work in posting the climategate emails.
I also want to add that I read somewhere (I think on JoNova’s or Laframboise’s site) that Joe Romm is (I’m paraphrasing) upset over the climategate 2 emails. Nice to know that you’re contributing to this…

Victor Venema
February 5, 2012 7:46 am

JohnWho wrote:
“Let me get this straight: Surface stations monitor the roughly 30% of the earth’s surface that is land, but they do not equally represent even that 30%, and they are not sited or functioning properly; their data need to be adjusted or manipulated in some manner in order to provide what can never be considered a “global” temperature, since they do not monitor the other 70% of the planet. Can anything be built upon this “rock”?”
It is only you guys who focus so much on the surface network, and in most cases even only on the surface network in the USA. Which is not very smart: even if you could show that your national weather service is in a big conspiracy, it would hardly change the global warming signal. America is not that large. If you would like to contribute to science and find a reason why the global warming signal is too strong, you’d better think of reasons that apply globally.
The oceans are lately covered by satellites, and far back into the past by ocean weather ships and voluntary observing ships: the International Comprehensive Ocean-Atmosphere Data Set (ICOADS).
http://icoads.noaa.gov/
The vertical dimension is covered by the radiosonde network, many of them on islands to cover the atmosphere above the oceans. Keywords: GUAN: GCOS Upper-Air Network and GCOS Reference Upper-Air Network:
http://www.gruan.org

neill
February 5, 2012 7:49 am

Jessie says:
February 4, 2012 at 8:58 pm
After viewing that video, I now know what causes global warming — Jessica Simpson.

Evan Jones
Editor
February 5, 2012 8:23 am

You can download the code used to homogenize (compute the adjustments for) NOAA’s USHCN dataset here:
But homogenization is just one step in a long process. There is infilling, SHAP, TOBS, UHI, and equipment, to name just a few. (Not to mention the initial tweaking — outliers, etc.)
And, of course, homogenization is not supposed to increase the trend. After all, if all you are doing is, in effect, taking a weighted average of stations within a given radius (or grid box or whatever), the overall average would not change (or at least not much, depending on the weighting procedures).
Yet the adjusted data is considerably warmer than the raw data. Using Steve McIntyre’s 20th Century data: The average USHCN1 station has warmed 0.14C per century using raw data. But it is +0.59 for adjusted data.
The data we used for Fall, et al. (2011), for the 1979 – 2008 (positive PDO) period — using USHCN2 adjustment methods — showed a warming of 0.22 C/decade for raw data and +0.31 C/decade for adjusted data.
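The trend arithmetic itself is trivial for anyone who wants to check figures of this kind: a least-squares slope, scaled per decade. A sketch with made-up numbers chosen only to mimic the magnitudes quoted above, not the actual USHCN series:

    import numpy as np

    def trend_per_decade(years, temps):
        # Ordinary least-squares slope (degrees per year), scaled to per decade.
        return np.polyfit(years, temps, 1)[0] * 10.0

    years = np.arange(1979, 2009)
    rng = np.random.default_rng(1)
    raw = 0.022 * (years - 1979) + rng.normal(0.0, 0.15, years.size)
    adjusted = raw + 0.009 * (years - 1979)   # a hypothetical adjustment effect
    print(round(trend_per_decade(years, raw), 2),
          round(trend_per_decade(years, adjusted), 2))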
This test showed that the USHCN homogenization software improves the homogeneity of the data and did not introduce any artificial (warming) trends.
In that case, homogenization is not going to explain the differences. So homogenization code is not terribly relevant to my objections. We need the full and complete adjustment code. The part that creates the — very large — differences between aggregate raw and adjusted data.
As in the infamous Saturday Night Live “paraquat test”: It’s light! To conduct this test, we need an ounce. A FULL and COMPLETE ounce.

Evan Jones
Editor
February 5, 2012 8:37 am

Rather than going through the code line by line, LazyTeenager is of course right that it is much smarter to try to understand the principle, write your own code, and apply it to the data.
That statement is such an enormity it needs to be addressed separately.
Actually, for Independent Review purposes, not only does one have to understand the underlying principles, but one also has to go through the code line by line and be able to run the code and get the exact same results as NOAA.
For example, are they using the TOBS record from the actual B-91 and B-44 forms, or are they using some sort of mishmash regional guesstimate procedure? “Underlying principles” are not going to answer that one. Only a line-by-line review is going to shed any light on that.
Being “much smarter” would land you in the hoosegow if you tried to pull that in the private sector.

Evan Jones
Editor
February 5, 2012 8:44 am

Finally, homogenization does not change the overall average. What it does do is smear SHAP bias around so it cannot be distinguished. We are going to hear a LOT more about this in the fairly near future. But that is a story for another day . . .

Evan Jones
Editor
February 5, 2012 9:13 am

It is only you guys that focus so much on the surface network, and in most cases even only on the surface network in the USA. Which is not very smart: even if you could show that your national weather service is in a big conspiracy, it would hardly change the global warming signal. America is not that large. If you would like to contribute to science and find a reason why the global warming signal is too strong, you’d better think of reasons that apply globally.
We concentrate on the USA for the following reasons:
It is very difficult to locate even US stations. Having run down over 200 of them, I can speak to this personally. It is exceedingly difficult and time-consuming, given that NOAA has pulled the curators’ names and addresses from the MMS website. Also, the coordinates they provide are often faulty in the extreme, though there has been some improvement of late. It has taken us years and years to run down the bulk of USHCN stations.
Foreign stations are a near-impossible task. We do not have an international network of volunteers. As for locating them by satellite, GHCN provides coordinates to only two or three decimal places. That is entirely useless for our purposes. Some can be identified by airport, WWTP, or some other industrial structure, but satellite resolution outside the US (even inside the US) is generally so poor as to make distinguishing stations impossible. On top of that, there is no conformity of equipment, so unless it is a Stevenson Screen or an ASOS, we wouldn’t recognize them if they showed up on the blurry map images — which they generally don’t.
In any event, the USA is an excellent sample. First, the US shows much the same warming trend for the 20th century as the world overall (c. +0.7C/century for adjusted data, and much less for raw data), though the “1940 bump” is higher. Second, with the possible exception of Australia, the US has the highest-quality historical station network in the world. This assertion appears to be supported by what few foreign stations we have actually managed to locate.
Furthermore, we are not dedicated to proving that NOAA is a “big conspiracy”. What we are after is determining whether their procedures are tight enough to cut the mustard in the private sector and whether the output is correct. There is a lot riding on the answers. Yet we are excoriated for even asking the question. That violates both the Scientific Method (and its mores) and the principles of Liberalism by which I was raised and educated.
And finally, we have a surveyed and rated sample of just a bit over 1000 stations. That will allow us to examine well sited stations vs. poorly sited stations. It does not matter statistically whether only the US is covered or whether the sample is scattered over the world. What matters is the number of stations evaluated and how consistent the equipment is.
The question is: How does site quality affect the readings? I’d prefer to be looking at 6000 stations worldwide, but 1000 within the US will suffice to answer that question.
Previously, we used Leroy (1999) ratings, though that was a poor metric, as it accounted only for distance from a heat sink and took no account of the area within the radius, as Leroy (2010) does.
The oceans are lately covered by satellites, and far back into the past by ocean weather ships and voluntary observing ships: the International Comprehensive Ocean-Atmosphere Data Set (ICOADS).
The methods of measuring ocean temperatures prior to ARGO (2005) are both inconsistent and abominable (cf. the bucket/bag/bilge controversy). UAH and RSS provide reasonably reliable atmospheric readings over the oceans, but not prior to December 1978.
The vertical dimension is covered by the radiosonde network, many of them on islands to cover the atmosphere above the oceans
Radiosonde readings show so little warming (or even cooling) that one must be suspicious of them. If the radiosonde readings are accurate, we have nothing whatever to worry about. I’d go with UAH and RSS for atmospheric readings until more is known.

Victor Venema
February 5, 2012 9:16 am

evanmjones wrote: “But homogenization is just one step in a long process. There is infilling, SHAP, TOBS, UHI, and equipment, to name just a few. (Not to mention the initial tweaking — outliers, etc.)”
What do you mean by SHAP, and how is it different from homogenization? TOBS (Time of Observation bias), UHI (Urban Heat Island), and changes in the equipment (instruments and weather shelters) can be corrected for using parallel measurements. But if not corrected that way, these errors are corrected in the normal statistical homogenization, by comparison with neighboring stations, which was tested in the blind validation study.
evanmjones wrote: “And, of course, homogenization is not supposed to increase the trend. After all, if all you are doing is, in effect, taking a weighted average of stations within a given radius (or grid box or whatever), the overall average would not change (or at least not much, depending on the weighting procedures). Yet the adjusted data is considerably warmer than the raw data.”
Homogenization is supposed to change the trend in the raw data in case this trend is wrong. If a station is moved from the city to the airport, there is typically a drop in temperature. If this drop is sufficiently large, you may find an erroneous cooling temperature trend in the raw data.
The mentioned weighted average of stations is used as a reference time series (for some homogenization methods). You compute a difference time series of this reference with your candidate time series. If there is just one jump, due to the move to the airport, this difference time series looks like a step function with some weather noise. From the size of the step you determine the temperature difference between the city and the airport. This step size is added to the data to correct for the relocation of the station.
The reference time series thus does not replace the data of the candidate station, which some people seem to assume, but is only used to compute the size of the jump. Thus if the other stations all moved to airports as well, the results would still be right (as long as they do not all move to the airport on the same day).
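The correction itself is then very little code. A bare-bones sketch, assuming the break date is already known (in reality it must first be detected):

    import numpy as np

    def correct_single_break(candidate, reference, break_idx):
        # Estimate the step from the difference series, then shift the
        # pre-break segment so the series matches current conditions.
        diff = np.asarray(candidate, dtype=float) - np.asarray(reference, dtype=float)
        step = diff[break_idx:].mean() - diff[:break_idx].mean()
        adjusted = np.array(candidate, dtype=float)
        adjusted[:break_idx] += step
        return adjusted, step

If the move to the airport lowered the readings by half a degree, the estimated step is about -0.5 and the pre-move segment is shifted down accordingly, so the whole series reflects the current (airport) conditions.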
Why is there a difference between the trends in the raw and the homogenized data?
Menne et al. (2009): “The largest biases in the HCN are shown to be associated with changes to the time of observation and with the widespread changeover from liquid-in-glass thermometers to the maximum–minimum temperature system (MMTS). ”
Menne, M. J., Williams, C. N. Jr., and Vose, R. S.: The U.S. historical climatology network monthly temperature data, version 2, B. Am. Meteorol. Soc., 90, 993–1007, doi:10.1175/2008BAMS2613.1, 2009.
I wrote: “This test showed that the USHCN homogenization software improves the homogeneity of the data and did not introduce any artificial (warming) trends.”
evanmjones wrote: “In that case, homogenization is not going to explain the differences. So homogenization code is not terribly relevant to my objections. We need the full and complete adjustment code. The part that creates the — very large — differences between aggregate raw and adjusted data.”
The differences between the aggregate raw and adjusted data are due to homogenization. The blind validation study showed that the adjusted trends are closer to the true trends than the trends in the raw data. Thus there are changes in the trends, but no *artificial* additional warming trends, as many of you guys like to assume. My advice would be to look for weak points in the global warming theory elsewhere.

Darkinbad the Brightdayler
February 5, 2012 9:26 am

“Because he knows, a frightful fiend Doth close behind him tread……..”
Anthony, like it or not, they are having to look over their shoulders and keep an eye on you and WUWT.
You have arrived.

Evan Jones
Editor
February 5, 2012 10:34 am

What do you mean by SHAP, and how is it different from homogenization?
By SHAP I mean the Station History Adjustment Procedure, which deals with the changing microenvironment over time. This is entirely unrelated to homogenization.
TOBS (Time of Observation bias), UHI (Urban Heat Island), and changes in the equipment (instruments and weather shelters) can be corrected for using parallel measurements.
Precisely. And we need to examine and audit NOAA’s procedures for doing so. Of course, it would be better to have automated stations sited Class 2 or better, with no adjustment needed or applied. Last I heard, NOAA no longer applies an adjustment for UHI. But without their algorithm, code, and manuals, we have no way of knowing the details.
But if not corrected that way, these errors are corrected in the normal statistical homogenization, by comparison with neighboring stations, which was tested in the blind validation study.
Incorrect. Homogenization has nothing to do with correcting for those factors. It just smears the error around between x number of stations so the problem shows up less per station, by a factor of x. Sort of like correcting a 5-point grading error by changing the grades of five students by 1 point.
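Spelled out, the arithmetic of that analogy (a toy example only):

    readings = [10.0, 10.0, 10.0, 10.0, 15.0]    # one station carries a +5 error
    mean = sum(readings) / len(readings)          # 11.0, i.e. biased by +1
    smeared = [mean] * len(readings)              # now every station is off by 1
    print(mean, sum(smeared) / len(smeared))      # the network average is unchanged

The per-station error shrinks from 5 to 1, but the bias in the network average is exactly what it was.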
Homogenization is supposed to change the trend in the raw data in case this trend is wrong. If a station is moved from the city to the airport, there is typically a drop in temperature. If this drop is sufficiently large, you may find an erroneous cooling temperature trend in the raw data.
Of course homogenization will alter the trends of every individual station in the network. However, you have stated definitively (with NOAA citation) that homogenization does not alter the overall trend average. Therefore, by definition, homogenization is merely distributing the errors of each individual station among all nearby stations, resulting in a net zero change in the average.
Therefore, the 20th-century raw trend anomaly being increased by adjustment by over 400% (and by over 40% for the past 30 years) needs to be subject to audit. A FULL and COMPLETE audit.
Why is there a difference between the trends in the raw and the homogenized data?
Menne et al. (2009): “The largest biases in the HCN are shown to be associated with changes to the time of observation and with the widespread changeover from liquid-in-glass thermometers to the maximum–minimum temperature system (MMTS). ”

Yes, I mentioned both TOBS and equipment issues earlier. Code, please!
I notice that they adjust MMTS trends UP to match CRS rather than adjusting CRS trends DOWN to match MMTS. Despite the fact that MMTS is probably a better instrument (discounting siting issues, of course).
And nothing, of course, for microsite. Just upward adjustments to stations moved to airports (which show a large warming trend bias, didn’t you know?).
As Al Gore once put it, so far as adjustment procedure is concerned, everything that’s UP is supposed to be DOWN and everything that’s DOWN is supposed to be UP.
And since homogenization, in and of itself, does not affect the overall trend for USHCN, homogenization code is not relevant to the question.

Evan Jones
Editor
February 5, 2012 11:00 am

The NOAA homogenization software was just subjected to a blind test with artificial climate data with inserted inhomogeneities. This test showed that the USHCN homogenization software improves the homogeneity of the data and did not introduce any artificial (warming) trends.
. . .
The differences between the aggregate raw and adjusted data are due to homogenization. The blind validation study showed that the adjusted trends are closer to the true trends than the trends in the raw data. Thus there are changes in the trends, but no *artificial* additional warming trends, as many of you guys like to assume. My advice would be to look for weak points in the global warming theory elsewhere.

Follow the pea.
What is going on, then, is that well sited stations that are running cooler are adjusted so their trends are as warmy as poorly sited stations (which also have been adjusted warmier).
Actually, good stations are adjusted even slightly warmer than bad stations. Quite a bit warmer, if airports are excluded. And, yes, I’ve checked.
Thanks for the advice, but I think we had better look for weak points in global warming right here.

Victor Venema
February 5, 2012 11:55 am

Dear evanmjones, if you are not willing to invest a little time in understanding the main principle behind homogenization and how it is implemented (including how it can improve the aggregate trend), do not expect people to waste their precious time on a complete audit to satisfy your unfounded distrust.
Your last two comments are so full of plainly wrong statements, and so clearly display that you have no idea how homogenization is performed and no willingness to learn, that I do not expect that further clarifications would bring anything.

T. Jefferson
February 5, 2012 12:01 pm

“During the past few years I recruited a team of more than 650 volunteers to visually inspect and photographically document more than 860 of these temperature stations. We were shocked by what we found. We found stations located next to the exhaust fans of air conditioning units, surrounded by asphalt parking lots and roads, on blistering-hot rooftops, and near sidewalks and buildings that absorb and radiate heat. We found 68 stations located at wastewater treatment plants, where the process of waste digestion causes temperatures to be higher than in surrounding areas.
In fact, we found that 89 percent of the stations – nearly 9 of every 10 – fail to meet the National Weather Service’s own siting requirements that stations must be 30 meters (about 100 feet) or more away from an artificial heating or radiating/reflecting heat source. In other words, 9 of every 10 stations are likely reporting higher or rising temperatures because they are badly sited.”
http://wattsupwiththat.files.wordpress.com/2009/05/surfacestationsreport_spring09.pdf

Questing Vole
February 5, 2012 1:12 pm

It was WUWT evidence that station data was skewed by poor siting and selectivity that made me take the step from skeptical questioning of climate change models using this data to absolute skepticism of the theory itself.
In this day and age, with access to refined technology and enhanced communications, the global network of sites reporting raw data should be expanding rather than shrinking. I’m not into conspiracy theories, but as long as the reverse is true, it is hard not to conclude that a caucus is trying to control the data and to manipulate it to meet its own agenda.

RomanM
February 5, 2012 2:00 pm

Victor Venema says:

Homogenization is supposed to change the trend in the raw data in case this trend is wrong. If a station is moved from the city to the airport, there is typically a drop in temperature. If this drop is sufficiently large, you may find an erroneous cooling temperature trend in the raw data.

I don’t understand why you say “If a station is moved…”. What you describe cannot be defined in any way as movement.
Rather, one station is discontinued and a new station is built at a second location. Keeping the same name or identification number does not make it the “same station”. Does it make sense to you to then “adjust” the recorded values at either site in the name of homogenization? Would you do this for two previously “unrelated” sites?
Furthermore, you state:

If there is just one jump, due to the move to the airport, this difference time series looks like a step function with some weather noise. From the size of the step you determine the temperature difference between the city and the airport. This step size is added to the data to correct for the relocation of the station.

You, of course, realize that an adjustment would rarely be a single value added or subtracted. The difference would be unlikely to remain constant over the various months if the geographic characteristics change. This means that the temporal structure of anomalies calculated for the appended series could also be affected, even if such adjustments were done on a monthly basis. Doing multiple adjustments then becomes much more arbitrary.
Nor would any of this explain those adjustments which have a trend already built into them…

John-X
February 5, 2012 2:10 pm

“We are getting blogged all over…”
Blogs = artillery
Free Speech = WMD

HuDuckXing
February 5, 2012 3:32 pm

Great news. Congratulations to all involved! Made my Sunday! But... this post added an exclamation point to my day:
“John Billings says:
February 4, 2012 at 5:37 pm
The new improved HADCRUT4 lowers earlier recorded temps to make current actual temps look higher. It’s been found already, it’s old news. There was an Iceland story here on WUWT a week or two back on exactly that.
Two hundred years ago, people took to the streets. Now we take to the blogs. The military-industrial complex must be pissing their pants.
We’ve got nice graphs though.”
You see, although my nick here is HuDuckXing, my real name is John Billings! So:
Hello to John Billings!
from,
John Billings

Evan Jones
Editor
February 5, 2012 5:04 pm

Your last two comments are so full of plainly wrong statements, and so clearly display that you have no idea how homogenization is performed and no willingness to learn, that I do not expect that further clarifications would bring anything.
Unless homogenization includes SHAP, FILNET, outliers, UHI, TOBS, and microsite effects, it is not full disclosure.
All I see is adjustments that increase good site trends to greater than bad site trends. After the bad site trends themselves have been increased.
That’s a fact. We have the raw and adjusted data trends. We have determined the ratings.
Unless that can be replicated and the code inspected — line by line — there can be no independent review. By definition. I do not see how you can dispute that. Yet you said earlier that LT was correct in saying that there is no need to check out NOAA adjustments, which hike 20th century temperature trends by over 0.4C per century and the last 30-year trends by over twice that amount.
As I say, we have the raw and adjusted data trends.
So we’ll just toodle along, as we have been, and submit my clearly wrong statements for peer review. No need to clarify.

Evan Jones
Editor
February 5, 2012 5:22 pm

Meanwhile, we would like a FULL and COMPLETE adjustment procedure including any/all working code, manuals and methods involved. If independent review demonstrates that NOAA’s procedures are legit (for example, that Time of Observation is taken for each individual station directly from B-91 and B-44 forms), then there is no problem. But until we can do that, there can be no independent review, and the adjustments, by definition, cannot be considered scientifically valid, much less legitimately used as a basis for multi-trillion dollar policy.

Evan Jones
Editor
February 5, 2012 5:42 pm

I don’t understand why you say “If a station is moved…”. What you describe cannot be defined in any way as movement.
That’s how NOAA defines it. If the station does not receive a new COOP number it is not considered to be a new station but, rather, a station move.
Stations move rather frequently. Sometimes they are merely localized equipment moves, particularly if there is a conversion from CRS to MMTS. Sometimes a curator passes away or moves, so they find another volunteer (or go for the old standbys of either an airport or WWTP) and relocate the station accordingly. More often than not, NOAA does not consider this to be a “new” station, only as a station move.

Editor
February 5, 2012 8:11 pm

Victor Venema says:
February 5, 2012 at 11:55 am

Dear evanmjones, if you are not willing to invest a little time in understanding the main principle behind homogenization and how it is implemented (including how it can improve the aggregate trend), do not expect people to waste their precious time on a complete audit to satisfy your unfounded distrust.

Evan has spent plenty of his precious life time working on various WUWT endeavors. Don’t gripe about losing some of yours – you seem to cover the subject fairly well (at least for having no source code) at your blog.
More importantly, thousands of people read this blog every day. You’re losing out on a good chance to explain to a lot more people than you reach on your blog how raw data gets processed into climate data in Germany.
Also, please take some time checking out http://chiefio.wordpress.com/gistemp/ . EM Smith spent a lot of time studying the GISS adjustments, enough time to warrant starting his own blog. You might want to see if some of his criticisms of GISS also apply to German data.

Steve from Rockwood
February 5, 2012 8:42 pm

Ric Werme says:
February 5, 2012 at 8:11 pm

———————————–
Ric, you’re a pretty decent guy.

Steve from Rockwood
February 5, 2012 8:49 pm

Mike McMillan says:
February 5, 2012 at 12:07 am
Steve from Rockwood says:
If it were me, I would stop for a minute and have a scotch.

Is were even a word?
Yes. Subjunctive mood. Seldom used these days except by the highly educated, pirates, and in Ebonics.

Stop the press. I’m a pirate!
REPLY — Would it were. Arrrr. — Evan.

Editor
February 5, 2012 8:51 pm

To: Victor Venema
I note in http://www2.meteo.uni-bonn.de/mitarbeiter/venema/themes/homogenisation/HOME/ you say:

Some people remaining sceptical of climate change claim that adjustments applied to the data by climatologists, to correct for the issues described above, lead to overestimates of global warming. The results clearly show that homogenisation improves the quality of temperature records and makes the estimate of climatic trends more accurate.

I confess that I’ve forgotten how some of the steps Evan mentioned are applied, but one adjustment in particular by GISS is really annoying. It’s the backfilling of missing data in a station’s record, something that I don’t think is covered by homogenization as you understand it.
EM Smith’s blog probably goes into much better detail, but essentially when a new month’s data is out, GISS code looks through the record for missing data for the month, and if it finds any, recomputes an estimate for that month. An effect of that code is that the historical record keeps changing, so anyone wanting to reproduce research that used GISS data has to know the month and year it was released in order to stay in sync. Worse, the adjustments tend to make the old data colder, thereby increasing the rate of temperature increase in the record.
So, put me in the camp that thinks climate change is occurring (well, not very quickly the last decade or so) and that adjustments lead to overestimates of recent global warming.
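I am not claiming this is what the GISS code actually does, but even the simplest infilling scheme shows the reproducibility problem: if a missing month is re-estimated from the station’s other years of that month, each new year of data can silently change the “historical” value. A toy sketch:

    import numpy as np

    def infill(values):
        # Replace missing entries with the mean of the available ones.
        vals = np.array(values, dtype=float)
        vals[np.isnan(vals)] = np.nanmean(vals)
        return vals

    januaries = [1.2, np.nan, 0.8, 1.1]    # one January is missing
    print(infill(januaries))                # the gap is estimated at ~1.03
    januaries.append(2.0)                   # a warm new January arrives
    print(infill(januaries))                # the same old gap now reads ~1.28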

Evan Jones
Editor
February 5, 2012 9:06 pm

Oh, I understand how homogenization can “improve” the trends, all right. It identifies stations that are running cooler and “adjusts” them so they are warmer.
That is pretty much the only way that the few good stations could start out with much lower trends than the bad stations and then somehow wind up with higher trends than the (upwardly) adjusted data of the bad stations.
Yes, you read it correctly: somehow the bad stations wind up with higher trends as well. And, yes, the adjusted trends for the good stations are adjusted even higher than that.

Editor
February 5, 2012 9:08 pm

To: Victor Venema
One more thing.
A lot of people here have been moving away from manual measurements to more automatable measurements with a more even coverage. For example, 10.7 cm microwave emissions instead of sunspot counts, satellite-derived temperature estimates of the lower troposphere instead of the ill sited US weather station network, and ocean energy storage instead of atmospheric temperature estimates. Perhaps you can compare your data with those other sources.

February 5, 2012 9:41 pm

LazyTeenager says:
February 4, 2012 at 3:22 pm
> evanmjones says:
>> As NOAA has refused to release its adjustment code, we cannot reproduce the adjusted data, and therefore, of course, any results are Scientifically Insignificant.
> I was under the impression that it’s relatively easy to code your own adjustment code and that it has been done multiple times. And they all come much the same conclusions about the temperature trends.
> So doesn’t that make access to the NOAA code kind of irrelevant since the actual principles involved are well known.
I have seen it argued before (from the warmist side) that scientists doing such work as gathering raw data (and perhaps also developing adjustment codes?) should not have to give their work away for free, not even if they are paid by governments to do this work.
I tend to think that data and codes gained at taxpayer expense should be free to taxpayers of the taxpaying jurisdiction. Maybe delay free publishing by 1-3 years (depending on the field of study), so that when something big hits, competing scientists have to do their own work.
It appears to me this forces competing work that generates alternative codes and data, and I think that is good. When someone else redoes something already done, science is interested in whether the rework confirms something that could use confirmation by an independent effort.
If others develop adjustment codes of their own, it is interesting to see if they have similar results or significantly different results from the NOAA one. (Of course, this is easier with access to both the raw and adjusted NOAA data.)
I think that taxpayer-funded data-processing codes and compilations of raw data relevant to climate change should be published on the web, free to the taxpayers who paid for them, no later than 1.5 years after they were generated and no later than 9 months after publication of studies using them. Subtract up to 6 months from these figures where necessary to get publication at least 15 days before a major election in which candidates are running at least in part on climate change issues, and at least 10 days before major government or international bodies vote on appointing big players or on treaties concerning global warming or climate change.

juanslayton
February 5, 2012 9:49 pm

Ric Werme to Victor Venema:
EM Smith’s blog probably goes into much better detail, but essentially when a new month’s data is out, GISS code looks through the record for missing data for the month, and if it finds any, recomputes an estimate for that month. An effect of that code is that the historical record keeps changing, so anyone wanting to reproduce research that used GISS data has to know the month and year it was released in order to stay in sync.
Ironically, the metadata presented in MMS has exactly the opposite problem. When station location coordinates are refined or corrected (recently done en masse with GPS), the old, inaccurate coordinates are not changed. Rather, a fictitious location change is entered. So when you try to trace back a station history (necessary if you really want to go take a look on the ground), you never know which locations are to be taken seriously. At least you don’t until you catch on to the game…

Evan Jones
Editor
February 6, 2012 3:52 am

Rather, a fictitious location change is entered. So when you try to trace back a station history (necessary if you really want to go take a look on the ground), you never know which locations are to be taken seriously. At least you don’t until you catch on to the game…
What you have to look for is coordinates ending in .33333, .5, .83, .66667 or whatever.
But even with what look like painstakingly precise coordinates, they can be anywhere from 5 feet to half a mile off. It’s a complete crapshoot.
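A quick screening heuristic for spotting them: a decimal that converts to a whole number of arc-minutes (.5 is 30 minutes, .33333 is 20 minutes, and so on) was probably rounded from degrees-and-minutes rather than taken from GPS. A heuristic only, since a genuine fix can occasionally land on a whole minute:

    def looks_like_rounded_minutes(coord, tol=0.01):
        # True if the coordinate is, within tolerance, a whole number of
        # arc-minutes, the telltale sign of a converted degrees/minutes value.
        minutes = abs(coord) * 60.0
        return abs(minutes - round(minutes)) < tol

    print(looks_like_rounded_minutes(42.33333))   # True: 42 deg 20 min
    print(looks_like_rounded_minutes(42.37184))   # False: plausibly a real GPS fix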
Blue Hill, MA, is a poster child. I looked at several “station moves” over quite a patch of square miles and evaluated them in a rough sort of way. And then when I spoke to the curator, I discovered that there was one localized equipment move of 20 feet or so during the entire 100+ year history of the station.
So not only is it COMPLETELY impossible to judge microsite without an image or the direct testimony of a curator (or other eyewitness), but you can’t even rely completely on the larger picture. So we use the NOAA and GISS determinations of which stations are urban, semi-urban, and rural (we have no choice), but sometimes I wonder how accurate even that is.
And I know from examining Menne (2009) under Leroy (1999) standards that NOAA’s own microsite ratings, such as they even exist, are woefully inaccurate. And, judging by my current studies, Menne, et algore, cannot be even close to accurate by Leroy (2010) standards.

Victor Venema
February 6, 2012 7:06 am

RomanM says:

“What you describe cannot be defined in any way as movement. Rather, one station is discontinued and a new station is built at a second location. Keeping the same name or identification number does not make it the “same station”. Does it make sense to you to then “adjust” the recorded values at either site in the name of homogenization? Would you do this for two previously “unrelated” sites?”

As long as the station moved over a distance much less than the average distance between stations, I see no problem in keeping the station number the same. You can also split up the record, as you suggest; that would also be fine. Every weather service has its own rules for doing so. If you split up the record, you will have to take the jump due to the relocation into account when you compute a regional average over all stations. Thus you cannot avoid the homogenization problem.
RomanM says:

“You, of course, realize that an adjustment would rarely be a single value added or subtracted. The difference would be unlikely to remain constant over the various months if the geographic characteristics change. This means that the temporal structure of anomalies calculated for the appended series could also be affected, even if such adjustments were done on a monthly basis. Doing multiple adjustments then becomes much more arbitrary.”

You are right; there is often also a change in the annual cycle due to an inhomogeneity. For temperature you can typically estimate the adjustments needed for every month quite well. Thus temperature is often homogenized on a monthly scale. Precipitation is more variable, and thus its adjustments are more uncertain. Consequently, precipitation is often homogenized on a yearly scale.
Trends are almost always computed on yearly mean values; then you also just need to compute the annual adjustments, and the annual cycle is irrelevant.
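Estimating the adjustment for every month just means computing the step separately within each calendar month of the difference series. A sketch of the idea, again assuming a known break date and several full years on each side:

    import numpy as np

    def monthly_steps(candidate, reference, break_idx):
        # One step estimate per calendar month; assumes the series starts in
        # January and has several full years on each side of the break.
        diff = np.asarray(candidate, dtype=float) - np.asarray(reference, dtype=float)
        month = np.arange(diff.size) % 12
        return np.array([
            diff[break_idx:][month[break_idx:] == m].mean()
            - diff[:break_idx][month[:break_idx] == m].mean()
            for m in range(12)
        ])

For precipitation the twelve per-month estimates would be too noisy, hence the yearly scale mentioned above.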
RomanM says:

“Nor would any of this explain those adjustments which have a trend already built into them…”

In the blind validation study of homogenization algorithms we also inserted local trend inhomogeneities to model the urban heat island effect or the growth of vegetation, etc. Homogenization algorithms can also handle that situation. In most cases they solve the problem by inserting multiple small breaks in the same direction. Algorithms that use trend-like adjustments were not better than those inserting multiple small breaks.
————-
Ric Werme says:

“Evan has spent plenty of his precious life time working on various WUWT endeavors. Don’t gripe about losing some of yours – you seem to cover the subject fairly well (at least for having no source code) at your blog.”

It is nice to hear that someone puts in a good word for Evan. If he invested a lot of time in the surface temperature project, visiting all those stations, I am very grateful. I wish there were a similar project in Europe, as it has the potential to help our understanding of the quality of the measurements.
However, when it comes to homogenization, how inhomogeneities are removed, I am not able to understand the gibberish Evan is talking. He does not seem to be able or willing to understand how homogenization is performed. I am happy to answer your questions.
Ric Werme says:

“More importantly, thousands of people read this blog every day. You’re losing out on a good chance to explain to a lot more people than you reach on your blog how raw data gets processed into climate data in Germany.”

Actually, Roger Pielke put me in contact with Anthony Watts, who requested permission to repost my post on the blind validation study of homogenization algorithms. I guess he was no longer interested when he read the conclusions. The admission that at least a minimal part of climatology is scientifically sound is apparently too controversial for this blog. Conclusion: if you are interested in the truth, read the blogs of the “opponents”.
—————
Ric Werme says:

“I confess that I’ve forgotten how some of the steps Evan mentioned are applied, but one adjustment in particular by GISS is really annoying. It’s the backfilling of missing data in a station’s record, something that I don’t think is covered by homogenization as you understand it.”

I am not a climatologist. I am a physicist who normally works on the relation between clouds and (solar and heat) radiation. Being an impartial outsider was why they asked me to perform the blind validation. I now understand the homogenization problem somewhat, but I did not study the filling of missing data and cannot comment on this problem.
The International Surface Temperature Initiative is working on a similar blind validation study, but now for a global temperature network. Because it is global, we can validate not only the homogenization algorithms but also the methods used to interpolate and compute regional and global averages. Stay tuned.
http://www.surfacetemperatures.org/benchmarking-and-assessment-working-group
————–
Ric Werme says:

One more thing. A lot of people here have been moving away from manual measurements to more automatable measurements with a more even coverage. For example, 10.7 cm microwave emissions instead of sunspot counts, satellite-derived temperature estimates of the lower troposphere instead of the ill sited US weather station network, and ocean energy storage instead of atmospheric temperature estimates. Perhaps you can compare your data with those other sources.

A good idea; I do think we need to do both. Satellites also have their inhomogeneity problems. Their calibration can only be partially checked in space, and the relation between the measured quantity and the climatological variable of interest also depends on the state of the atmosphere and may thus change in space and time. Furthermore, the time series are relatively short from a climatological perspective, the instrumentation has changed considerably over the decades, and the satellites themselves have short lifespans.
EUMETSAT has a climate Satellite Application Facility (CM-SAF), coordinated by the German weather service. They try to solve these problems and produce a good dataset. Again, this is a very different problem and I do not have the expertise to judge it.

Agile Aspect
February 6, 2012 12:53 pm

Congratulations!
Let’s hope that NOAA isn’t a GREENGOV(TM) client of Richard Muller and associates
http://www.mullerandassociates.com

Editor
February 6, 2012 7:50 pm

However, when it comes to homogenization, how inhomogeneities are removed, I am not able to understand the gibberish Evan is talking. He does not seem to be able or willing to understand how homogenization is performed. I am happy to answer your questions.
The result of homogenization is to take well sited stations and adjust them warmer than poorly sited stations. That is a fact.
The problem arises when the 15% of stations that are properly sited turn out to run significantly lower trends than the remaining 85%. They therefore show up as outliers and are “adjusted” to conform with the surrounding (poorly sited) stations.
You are not fixing bad microsite. You are unfixing good microsite.
(When they homogenize the data, why do they always seem to feel the need to pasteurize it?)
As for UHI, I got a great idea: Take, oh, say, the USHCN. Average urban, semi-urban, and rural station trends. Compare the averages.
As for climate trends, simply classify, grid, and average the grid boxes for each classification. (Comparisons of good vs. bad stations within each grid box are also recommended.)
Dump homogenization entirely.
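Here is a back-of-the-envelope version of that comparison, with an entirely made-up station list, just to pin down the bookkeeping:

    import numpy as np

    def class_trends(stations, grid_deg=5.0):
        # stations: (lat, lon, classification, trend). Average the trends
        # within each (classification, grid box), then average the boxes
        # for each classification.
        boxes = {}
        for lat, lon, cls, trend in stations:
            key = (cls, int(lat // grid_deg), int(lon // grid_deg))
            boxes.setdefault(key, []).append(trend)
        per_class = {}
        for (cls, _, _), trends in boxes.items():
            per_class.setdefault(cls, []).append(np.mean(trends))
        return {cls: round(float(np.mean(v)), 3) for cls, v in per_class.items()}

    demo = [(40.1, -75.2, "urban", 0.31), (40.3, -75.8, "rural", 0.15),
            (33.9, -112.1, "rural", 0.12), (34.2, -111.7, "urban", 0.28)]
    print(class_trends(demo))   # e.g. {'urban': 0.295, 'rural': 0.135}

If the per-class averages diverge, you have your UHI (or microsite) signal, with no homogenization involved.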