Chiefio Smith examines GHCN and finds it “not fit for purpose”

Earlier this month, E.M. Smith, over at the blog Musings from the Chiefio, posted an analysis comparing versions 1 and 3 of the GHCN (Global Historical Climatology Network) data set. WUWT readers may remember a discussion about GHCN version 3 here. He described why the GHCN data set is important:

 There are folks who will assert that there are several sets of data, each independent and each showing the same thing, warming on the order of 1/2 C to 1 C. The Hadley CRUtemp, NASA GIStemp, and NCDC. Yet each of these is, in reality, a ‘variation on a theme’ in the processing done to the single global data set, the GHCN. If that data has an inherent bias in it, by accident or by design, that bias will be reflected in each of the products that do variations on how to adjust that data for various things like population growth ( UHI or Urban Heat Island effect) or for the frequent loss of data in some areas (or loss of whole masses of thermometer records, sometimes the majority all at once).

 He goes on to discuss the relative lack of methodological analysis and discussion of the data set and the socio-economic consequences of relying on it. He then poses an interesting question:

What if “the story” of Global Warming were in fact, just that? A story? Based on a set of data that are not “fit for purpose” and simply, despite the best efforts possible, can not be “cleaned up enough” to remove shifts of trend and “warming” from data set changes, of a size sufficient to account for all “Global Warming”; yet known not to be caused by Carbon Dioxide, but rather by the way in which the data are gathered and tabulated?…

 …Suppose there were a simple way to view a historical change of the data that is of the same scale as the reputed “Global Warming” but was clearly caused simply by changes of processing of that data.

 Suppose this were demonstrable for the GHCN data on which all of NCDC, GISS with GIStemp, and Hadley CRU with HadCRUT depend? Suppose the nature of the change were such that it is highly likely to escape complete removal in the kinds of processing done by those temperature series processing programs?….

 He then discusses how to examine the question:

…we will look at how the data change between Version 1 and Version 3 by using the same method on both sets of data. As the Version 1 data end in 1990, the Version 3 data will also be truncated at that point in time. In this way we will be looking at the same period of time, for the same GHCN data set. Just two different versions with somewhat different thermometer records being in and out, of each. Basically, these are supposedly the same places and the same history, so any changes are a result of the thermometer selection done on the set and the differences in how the data were processed or adjusted. The expectation would be that they ought to show fairly similar trends of warming or cooling for any given place. To the extent the two sets diverge, it argues for data processing being the factor we are measuring, not real changes in the global climate… The method used is a variation on a Peer Reviewed method called “First Differences”…

 …The code I used to make these audit graphs avoid making splice artifacts in the creation of the “anomaly records” for each thermometer history. Any given thermometer is compared only to itself, so there is little opportunity for a splice artifact in making the anomalies. It then averages those anomalies together for variable sized regions….
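
For readers who want a concrete feel for the method before the results, here is a minimal sketch of a first-differences style anomaly calculation in Python. The data layout and names are hypothetical and this is not Smith's published code (his version anchors at the present and works backward through time), but it shows the key point: each station is compared only to itself, month by month, and the per-station anomalies are then averaged for a region.

```python
# Minimal sketch of a first-differences style anomaly series and a regional
# average.  Data layout and names are hypothetical, NOT Smith's published code.
from collections import defaultdict

def station_anomalies(monthly):
    """monthly: dict {(year, month): temp_C} for ONE station.
    Each month is differenced only against the previous available reading of
    the same calendar month at the same station, so gaps are simply spanned
    and no cross-station splice can occur."""
    anomalies = {}
    for month in range(1, 13):
        years = sorted(y for (y, m) in monthly if m == month)
        running, prev = 0.0, None
        for y in years:
            if prev is not None:
                running += monthly[(y, month)] - monthly[(prev, month)]
            anomalies[(y, month)] = running
            prev = y
    return anomalies

def regional_mean(stations):
    """stations: list of per-station monthly dicts.  Per-station annual
    anomalies are built first, then averaged across the region by year."""
    by_year = defaultdict(list)
    for monthly in stations:
        annual = defaultdict(list)
        for (y, _), a in station_anomalies(monthly).items():
            annual[y].append(a)
        for y, vals in annual.items():
            by_year[y].append(sum(vals) / len(vals))
    return {y: sum(v) / len(v) for y, v in sorted(by_year.items())}
```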

 What Is Found

What is found is a degree of “shift” of the input data of roughly the same order of scale as the reputed Global Warming.

 The inevitable conclusion of this is that we are depending on the various climate codes to be nearly 100% perfect in removing this warming shift, of being insensitive to it, for the assertions about global warming to be real.

 Simple changes of composition of the GHCN data set between Version 1 and Version 3 can account for the observed “Global Warming”; and the assertion that those biases in the adjustments are valid, or are adequately removed via the various codes are just that: Assertions….

Smith then walks the reader through a series of comparisons, both global and regional, and comes to the conclusion:

 Looking at the GHCN data set as it stands today, I’d hold it “not fit for purpose” even just for forecasting crop planting weather. I certainly would not play “Bet The Economy” on it. I also would not bet my reputation and my career on the infallibility of a handful of Global Warming researchers whose income depends on finding global warming; and on a similar handful of computer programmers who’s code has not been benchmarked nor subjected to a validation suite. If we can do it for a new aspirin, can’t we do it for the U.S. Economy writ large?  

The article is somewhat technical but well worth the read and can be found here.

h/t to commenters aashfield, Ian W, and rilfeld

barry
June 21, 2012 3:55 pm

There is hardly any difference in the global temperature record between the different GHCN versions and the raw data set. Skeptics (like Jeff Conlon and Roman M at the Air Vent) have come up with global temp records that are actually warmer than those produced by the institutes. But they all rest within margin of error.
Adjustments are more readily noticeable at local scale, but the big picture is not qualitatively affected by adjustments. At a few hundredths of a degree per decade, the fixation on slight differences in global trend between one method or another is overly pedantic.

June 21, 2012 4:23 pm

Barry…if this statement is in dispute…“What is found is a degree of “shift” of the input data of roughly the same order of scale as the reputed Global Warming.” Then please clarify why you think there is “hardly any difference”?

June 21, 2012 4:27 pm

barry says: June 21, 2012 at 3:55 pm
Doesn’t look like you’ve even looked at EMS’ work.
I did, and it’s a tour de force. Thank you EMS. But IMHO it needs re-presentation (a) to grasp in one go (b) with say 20-50 bite-size statements that accumulate the wisdom gained. Say, a rewrite like Bishop Hill did for Steve McIntyre.

Rob R
June 21, 2012 4:27 pm

Barry
This is not the point that chiefio (EM Smith) is addressing.
The analyses you mention (Jeff Condon etc) all take the GHCN version 2 or version 3 thermometer compilation as a “given” and produce global trends from a single version.
The Chiefio is looking primarily at the differences between the different GHCN versions. He is not really focussing on the trend that you can get from a single GHCN version.

Kev-in-Uk
June 21, 2012 4:28 pm

A fair analysis of the REAL state of play, IMHO – but really it’s what we have known for ages, i.e that the manipulated data is essentially worthless!
*drums fingers* – waits for someone like Mosh to come along and remind us that it’s all we have, etc, etc….
I don’t actually mind necessary data manipulations IF they are logged and recorded and explained in sufficient detail that they can be reviewed later. To my knowledge such explanation(s) is/are not available to the general public? What is more worrying is that I doubt the reasoning behind the adjustments are still ‘noted’ anywhere – a la ‘the dog ate my homework’, etc – effectively making any and every subsequent use of the ‘data’ pointless!
We have seen various folk analyse and debunk single stations (or a few at a time) – but does anyone think Jones has been through every stations’ data and ‘checked’ through each and every time series, site change, etc? Somehow I think not – it is likely all computer manipulation at the press of a button and we all know what that means…….(where’s Harry? LOL)

Ian W
June 21, 2012 4:28 pm

barry says:
June 21, 2012 at 3:55 pm
There is hardly any difference in the global temperature record between the different GHCN versions and the raw data set. Skeptics (like Jeff Conlon and Roman M at the Air Vent) have come up with global temp records that are actually warmer than those produced by the institutes. But they all rest within margin of error.

Barry I would really suggest you read Chiefio’s post. Then after that come back here and tell us all the errors that you have found in his approach.

Chris
June 21, 2012 4:50 pm

Way back in the olden days during the brief time I was doing science as a grad student, I was taught that measured results can either be accepted, or with good reason rejected. There was no option to “adjust” numbers. Adjusting was fraud.
Now in the temperature records we no longer have data. We have a bunch of numbers, but the numbers are not data. If adjustments are necessary, they should be presented in painstaking detail, if need be as a separate step. Adjustments are never part of the data: data cannot be adjusted and still remain data. Adjustments are part of the model, not part of the data. They have to be documented and justified as part of the model. Anything else is fraud.

u.k.(us)
June 21, 2012 5:13 pm

Chris says:
June 21, 2012 at 4:50 pm
=========
+1
Though using the term “fraud”, leaves no room to maneuver.
I assume many would like to wriggle out of the trap they have entered.

pouncer
June 21, 2012 5:16 pm

barry says:” the big picture is not qualitatively affected by adjustments.”
Yep, Right. The hairs on the horse’s back may be thicker or thinner but the overall height of the horse isn’t affected.
The thing is that the “simple physics” everybody tells me settles the science of the Earth’s black body radiation budget is based on the average temperature of the Earth in degrees KELVIN. An adjustment of one to two degrees to an average temperature of 20 is already small. Such a variation on an average temperature of about 500 is — well, you tell me.
The effect being measured is smaller than the error inherent in the measuring tool. In order to account for that, very strict statistical protocols must be chosen, documented, tested, applied, and observed for all (not cherry picked samples of) the data.
Note that problems, if any, with the historic instrument record propagate into the pre-historic reconstructions. When calibrating ( “screening” if you like, or “cherry picking”) proxies against “the instrumental record” — does the researcher use the world wide average? The closest local station record? A regional, smoothed, aggregated record? As it turns out the regional extracts of the GHCN record are themselves “proxies” (or as Mosher explains, an “index”) for the actual factor of interest in the “big picture” — the T^4 in degree Kelvin in black body models. If you’re matching your speleo, isotopic or tree-ring record to a debatable instrument proxy — the magical teleconnection screen — why wouldn’t you expect debate about temperatures in the past thousand years?
Chiefio says the record is unfit for use. Barry, what use case do you have in mind for which the record is well fit?
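
For a sense of scale on that point: the conventional global-mean surface temperature is roughly 288 K (the “about 500” above is closer to the Rankine scale), and blackbody flux scales as T⁴, so a 1 K adjustment is a fraction of a percent of the absolute temperature and roughly 1.4% of the emitted flux. A rough back-of-envelope sketch with those nominal values:

```python
# Back-of-envelope scale check with nominal values (not from the comment):
# blackbody flux goes as T^4 in kelvin, so a 1 K adjustment on a ~288 K
# mean surface temperature is a small fractional change either way.
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0           # conventional global-mean surface temperature, K
dT = 1.0            # size of a typical disputed adjustment, K

flux = SIGMA * T**4
dflux = SIGMA * (T + dT)**4 - flux
print(f"Fractional temperature change: {dT / T:.3%}")        # ~0.35%
print(f"Fractional change in T^4 flux: {dflux / flux:.3%}")  # ~1.4%
```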

KR
June 21, 2012 5:23 pm

This does not address the fact that the various surface temp readings match the satellite readings (see http://tinyurl.com/6nl33kz), with the caveat that satellite readings are known to be more sensitive to ENSO variations, and that land variations are higher than global variations. Those surface temperatures have been confirmed by the entirely separate satellite series.
Sorry, but the opening post is simply absurd.

June 21, 2012 5:32 pm

EMS has put a lot of work into this analysis and I wish to congratulate him. He has posted a great comment at Jonova’s site here http://joannenova.com.au/2012/06/nature-and-that-problem-of-defining-homo-sapiens-denier-is-it-english-or-newspeak/#comments at comment 49 worth reading. I wish I had time to put up some more technical information at my pathetic attempt of a blog. I would suggest EMS gets together with Steve (McI) and Ross (McK) to publish some of this.
Keep up the good work. Eventually, truth will out but I am concerned that only legal action will stop the flow of misinformation and power seeking by the alarmists and climate (pseudo)scientists.

pat
June 21, 2012 6:06 pm

Seth is all over the MSM with his simple facts!
20 June: Canadian Press: AP: Seth Borenstein: Since Earth summit 20 years ago, world temperatures, carbon pollution rise as disasters mount
http://ca.news.yahoo.com/since-earth-summit-20-years-ago-world-temperatures-155231432.html

Ian W
June 21, 2012 6:23 pm

KR says:
June 21, 2012 at 5:23 pm
This does not address the fact that the various surface temp readings match the satellite readings (see http://tinyurl.com/6nl33kz), with the caveat that satellite readings are known to be more sensitive to ENSO variations, and that land variations are higher than global variations. Those surface temperatures have been confirmed by the entirely separate satellite series.
Sorry, but the opening post is simply absurd.

Note that EMS was just assessing the differences between GHCN V1 and GHCN V3 from 1880 to 1990. So I am somewhat confused by your comment – can you describe what “satellite surface temp readings” were produced from 1880 – 1990? and how they were assimilated into GHCN V1 and GHCN V3? The issue is the accuracy of anomalies and warming rates over a century based on the GHCN records. These show differing ‘adjustments’ to temperature readings back over a century ago between versions. How do satellite temp readings affect this?

u.k.(us)
June 21, 2012 6:25 pm

Chiefio Smith examines GHCN and finds it “not fit for purpose”
===============
When did we switch from charting the weather’s vagaries, to predicting it ?
How will changing previous weather data, enhance our understanding ?
A battle surely lost.

wayne
June 21, 2012 6:38 pm

Thanks E.M., good article.
How does this affect the GHCN or is it a different matter?
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
All I see is the very rise in temperature that has been created by the adjustments themselves! (but a small ~0.3C difference)

Michael R
June 21, 2012 6:42 pm

This does not address the fact that the various surface temp readings match the satellite readings (see http://tinyurl.com/6nl33kz), with the caveat that satellite readings are known to be more sensitive to ENSO variations, and that land variations are higher than global variations. Those surface temperatures have been confirmed by the entirely separate satellite series.
Sorry, but the opening post is simply absurd.

As Ian pointed out above, I am curious to know how surface temperatures and satellite correlation has anything to do with pre-satellite temperature data? The only way that argument is valid is if we could hindcast temperatures using those satellites – which I would love to know how that is done…
Having good correlation between recent ground thermometers and satellites means squat about previous temperature readings. I also find it curious that almost the entire warming that is supposed to have occurred in the last 150 years occurred BEFORE the satellite era and if that data shows it has been adjusted several times to create higher and higher warming trends, the effect is lots of warming pre-satellite era and a sudden flattening of temp rise post-satellite era…. but hang on…. that’s exactly how the data looks and your link shows it.
Unfortunately your link does not prove your argument, but it does support E.M. Smith’s. Maybe try a different tact..

Mark T
June 21, 2012 7:09 pm

Grammar nazi comment: who’s is a contraction for who is; the possessive is whose.
Mark

mfo
June 21, 2012 7:19 pm

A very interesting and thorough analysis by EM. When considering the accuracy of instruments I like the well known example used for pocket calculators:
0.000,0002 X 0.000,0002 = ?
[Reply: I hate you for making me go get a calculator to do it. ~dbs, mod.]
[PS: kidding about the hating part☺. Interesting!]
[PPS: It also works with .0000002 X .0000002 = ?]
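
For anyone without a calculator handy, the true product is 4 × 10⁻¹⁴, which an 8-digit fixed-point display shows as zero. A quick illustrative sketch of the same point:

```python
# The answer, worked out: the true product is 4e-14, which an 8-digit
# fixed-point display shows as zero -- a toy example of finite precision
# silently swallowing small quantities.
x = 0.0000002
product = x * x
print(product)             # ~4e-14 (with a tiny double-precision rounding error)
print(f"{product:.8f}")    # 0.00000000 -- what an 8-digit fixed display would show
```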

thelastdemocrat
June 21, 2012 7:22 pm

Puh leezze EDIT POSTS!
“Data” is plural!!! “Datum” is singular!!!
“If that data has an inherent bias in it, by accident or by design, that bias will be reflected in each of the products that do variations on how to adjust that data for various things like population growth ( UHI or Urban Heat Island effect) or for the frequent loss of data in some areas (or loss of whole masses of thermometer records, sometimes the majority all at once).”
This datum, these data.
[REPLY: The memo may not have reached you yet, but language evolves. Ursus horribilis, horribilis est. Ursus horribilis est sum. -REP]
[Reply #2: Ever hear of Sisyphus? He would know what it’s like to try and correct spelling & grammar in WUWT posts. ~dbs, mod.]

E.M.Smith
Editor
June 21, 2012 7:33 pm

@Lucy Skywalker:
It’s open and public. The code is published, so folks can decide if there is any hidden ‘bug’ in it as they feel like it. The whole set, including links to the source code, can be easily reached through a short link:
http://chiefio.wordpress.com/v1vsv3/
Anyone who wants to use it as a ‘stepping off point’ for a rewrite as a more approachable non-technical “AGW data have issues” posting is welcome to take a look and leverage off of it.
@Barry:
The individual data items do not need to all be changed for the effect to be a shift of the trend. In particular, “Splice Artifacts”. They are particularly sensitive to ‘end effects’ where the first or last data item (the starting and ending shape of the curve) have excessive effect.
In looking through the GIStemp code (yes, I’ve read it. Ported it to Linux and have it running in my office. Critiqued quite a bit of it.) I found code that claims to fix all sorts of such artifacts, but in looking at what the code does, IMHO, it tries but fails. So if you have a 1 C “splice artifact” effect built into the data, and code like GISTemp is only 50% effective at removal, you are left with a 1/2 C ‘residual’ that isn’t actually “Global Warming” but an artifact of the Splice Artifacts in the data and imperfect removal from codes like GIStemp and HadCRUT / CRUTEMP.
The code I used is particularly designed not to try to remove those splice artifacts. The intent is to “characterize the data”. To find how much “shift” and “artifact” is IN the data, so as to know how much programs like GIStemp must remove, then compare that to what they do. (My attempts to benchmark GIStemp have run into the problem that it is incredibly brittle to station changes, so I still have to figure out a way to feed it test data without having it flat out crash. But in the testing that I have done, it looks like it can remove about 1/2 of the bias in a data set, but perhaps less.)
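The residual arithmetic behind that point is simple enough to sketch, using the illustrative 1 C and 50% figures from the paragraph above (they are illustrative numbers, not measured values):
```python
# The residual arithmetic from the paragraph above, using the illustrative
# figures given there (a 1 C artifact and 50% removal), not measured values.
artifact_bias_C = 1.0        # spurious shift built into the data by splices/changes
removal_efficiency = 0.5     # fraction of that shift the processing code removes
residual_C = artifact_bias_C * (1.0 - removal_efficiency)
print(f"Residual spurious 'warming': {residual_C:.1f} C")   # 0.5 C
```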
So depending on how the particular “examination” of the data set is done, you may find that it “doesn’t change much” or that it has about 1 C of “warming” between the data set versions showing up as splice artifacts and subtle changes of key data items, largely at ends of segments.
In particular, the v1 vs v3 data often show surprising increases in the temperature reported in the deep past, while the ‘belly of the curve’ is cooled. If you just look at “average changes” you would find some go up, and some go down, net about nothing. Just that the ones that go up are in the deep past that is thrown away by programs like GIStemp (that tosses anything before 1880) or Hadley (that tosses before 1850). Only if you look at the shape of the changes will you find that the period of time used as the ‘baseline’ in those codes is cooled and the warming data items are thrown away in the deep past; increasing the ‘warming trend’ found.
Furthermore, as pointed out by Rob R, the point of the exercise was to dump the Version One GHCN and the Version Three GHCN data aligned on exactly the same years, through exactly the same code, and see “what changes”. The individual trends and the particular “purity” of the code don’t really matter. What is of interest is how much what is supposedly the SAME data for the SAME period of time is in fact quite different from version to version.
That the changes are about the same as the Global Warming found from V2 and V3 data, and that the data do produce different trends is what is of interest. It shows the particular data used does matter.
@Kev-in-UK:
It is important to realize that the difference highlighted in this comparison is NOT item by item thermometer by thermometer changes of particular data items. ( i.e. June 18 1960 at Alice Springs) but rather the effect of changes of what total data is “in” vs “out” of the combined data set. Yes, some of what it will find will be influenced by “day by day adjustments”, but the bulk of what is observed is due to wholesale changes of what thermometers are in, vs out, over time.
I know, harvesting nits… but it’s what computer guys do 😉
W:
Good suggestion…
Also note that the v1 data are no longer on line as near as I can tell. I had saved a copy way back when (being prone to archiving data. Old habit.) It may be hard for other folks to find a copy to do a comparison. I’ve sent a copy to at least one other person ‘for safe keeping’ but it is a bit large to post.
Frankly, most folks seem to have forgotten about v1 once v2 was released, and paid little attention to both now that v3 is out.
@Chris:
What I was taught as well. You got an F in my high school Chemistry class if you had an erasure in your lab book. Any change was ONLY allowed via a line-out and note next to it. Then the new data written below.
FWIW, the “adjustments” are now built into GHCN v3. In v2 each station record had a ‘duplicate number’. Those are now “spliced” and QA changes made upstream of the V3 set. This is particularly important as one of the first things I found was that the major shift that is called warming happened with the change of “duplicate number” at about 1987-1990 (depending on station). It is no longer possible to inspect that ‘join’, as it is hidden in the pre-assembly process at NCDC. (But I’ve saved a copy of v2 as well, so I can do it in the future as desired 😉
:
Well said. 😉
@KR:
Reaching for “The Usual Answers” eh?
Did you notice that this is a comparison of v1 and v3 aligned on 1990 when v1 ends? Did you even read the article where that is pointed out? Look at any ONE of the graphs that all start in 1990?
So it has all of about 12 years of overlap with the satellites. Just not relevant. Recent dozen+ years temperatures have shown no warming anyway, so that they match the sats is just fine with me…
:
Thanks for the support. FWIW, I’ve got a version where I cleaned up a typo or two and got the bolding right posted here:
http://chiefio.wordpress.com/2012/06/21/response-to-paul-bain/
with a couple of ‘lumpy’ sentences cleaned up a bit. It’s a bit of a ‘rant’, but I stand by it.
@UK (US):
It’s not predicting weather that bothers me. Folks like Anthony can do that rather well and it doesn’t need the GHCN. It’s the notion of predicting “climate change” and the notion that we can influence at all the climate that is just broken.
When I learned about “climates”, we were taught that they were determined by: Latitude, distance from water, land form (think mountain ranges between you and water), and altitude. So a desert climate is often found behind a mountain range where rain is squeezed out (often as snow) on the mountains (making an Alpine climate).
In the world of Geology I learned, unless you change one of those factors, you are talking weather, not climate…
CO2 does not change latitude, distance from water, land form, nor altitude. So the Mediterranean still has a Mediterranean Climate and the Sahara Desert still has a Desert Climate and The Rocky Mountains still have an Alpine Climate and the Arctic is still an Arctic Tundra Climate. Then again, I’m old fashioned. I like both my science definitions and my historical data to remain fixed…

Mike Jowsey
June 21, 2012 7:37 pm

KR @ 5:23 says: Those surface temperatures have been confirmed by the entirely separate satellite series.
Were those satellite measuring devices ever calibrated to surface temperature? If so, then the satellite series is not “entirely separate”, but in fact joined at the hip. If not, then how does the proxy of an electronic device in orbit translate to accurate surface land temperature?

June 21, 2012 7:43 pm

barry
Thanks for being the first to comment. It was clear in the past that some global warming believers, or more simply put, those that disagree with this web site, seemed to sit in front of their computers waiting for a new post so they could be the first to comment.

Gail Combs
June 21, 2012 7:55 pm

Chris says:
June 21, 2012 at 4:50 pm
Way back in the olden days during the brief time I was doing science as a grad student, I was taught that measured results can either be accepted, or with good reason rejected…..
_________________________________
BINGO!
That is one of the reasons for doubting the entire con-game in the first place. How the Heck does Jones or Hansen KNOW that all the guys who took the readings in 1910 did it wrong and all the results need to be adjusted DOWN by a couple of hundredths or the guys in 1984 screwed up and all the data needs to be raised by 0.4. A few hundredths in 1910??? The data was never measured that precisely in the first place. http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
Even more suspicious is the need to CONSTANTLY change the readings. http://jonova.s3.amazonaws.com/graphs/giss/hansen-giss-1940-1980.gif
The data has been so massaged, manipulated and mangled that no honest scientist in his right mind would trust it now especially since Jones says the “dog ate my homework” and New Zealand’s ‘leading’ climate research unit NIWA says the “goat ate my homework”

…In December, NZCSC issued a formal request for the schedule of adjustments under the Official Information Act 1982, specifically seeking copies of “the original worksheets and/or computer records used for the calculations”. On 29 January, NIWA responded that they no longer held any internal records, and merely referred to the scientific literature.
“The only inference that can be drawn from this is that NIWA has casually altered its temperature series from time to time, without ever taking the trouble to maintain a continuous record. The result is that the official temperature record has been adjusted on unknown dates for unknown reasons, so that its probative value is little above that of guesswork. In such a case, the only appropriate action would be reversion to the raw data record, perhaps accompanied by a statement of any known issues,” said Terry Dunleavy, secretary of NZCSC.
“NIWA’s website carries the raw data collected from representative temperature stations, which disclose no measurable change in average temperature over a period of 150 years. But elsewhere on the same website, NIWA displays a graph of the same 150-year period showing a sharp warming trend. The difference between these two official records is a series of undisclosed NIWA-created ‘adjustments’…. http://briefingroom.typepad.com/the_briefing_room/2010/02/breaking-news-niwa-reveals-nz-original-climate-data-missing.html

And if you did not follow ChiefIO’s various links, this one graph of the GHCN data set, Version 3 minus Version 1, says it all: [graph linked in the original comment]
Nothing like lowering the past data by about a half degree and raising the current data by a couple tenths to get that 0.6 degree per century change in temperature….
What is interesting is that the latter half of the 1700s had times that were warmer than today and overall the temperature was more variable.

KR
June 21, 2012 8:03 pm

E.M.Smith“Reaching for “The Usual Answers” eh?”
You have written a great deal, but included little evidence.
I would point you to http://forums.utsandiego.com/showpost.php?p=4657024&postcount=86 where a reconstruction from _raw_, unadjusted data, from the most rural stations possible (readily available, mind you) has been done. Results? Area weighted temperatures estimated from the most rural 50 GHCN stations closely match the NASA/GISS results. As confirmed by the separately run, separately calibrated satellite data
You have asserted quite a lot – without proving it. You have cast aspersions aplenty – with no evidence. And when uncorrected data is run, the same results as the NASA/GISS adjusted data comes out, with only a small reduction in uncertainties and variances.
I don’t often state things so strongly, but your post is b******t. If you don’t agree with the results, show your own reconstruction, show us that the adjustments are incorrect. Failing that, your claims of malfeasance are as worthy as the (zero) meaningful evidence you have presented.
[Reply: Please feel free to submit your own article for posting here. ~dbs, mod.]

Luther Wu
June 21, 2012 8:06 pm

“What is found is a degree of “shift” of the input data of roughly the same order of scale as the reputed Global Warming.”
That’s the “money shot”.

Bennett
June 21, 2012 8:10 pm

@ cementafriend at 5:32 pm
Wow. Thanks so much for the link to E.M. Smith’s comment essay. I’ll be printing and sending that to all of my State reps to DC. It really can’t be laid out any better than that.
http://chiefio.wordpress.com/2012/06/21/response-to-paul-bain/

KR
June 21, 2012 8:13 pm

Clarification – in my previous post (http://wattsupwiththat.com/2012/06/21/chiefio-smith-exqamines-ghcn-and-finds-it/#comment-1015094) the text regarding variances should have been:
“…only a small reduction in uncertainties and variances in the NASA/GISS data due to corrections and much more data.”

June 21, 2012 8:21 pm

Great job of explaining the issues with the data!

@E.M.Smith June 21, 2012 at 7:33 pm
(My attempts to benchmark GIStemp have run into the problem that it is incredibly brittle to station changes, so I still have to figure out a way to feed it test data without having it flat out crash. But in the testing that I have done, it looks like it can remove about 1/2 of the bias in a data set, but perhaps less.)

If I am understanding you correctly the issue is that it is looking for specific station names/ IDs?
If so could you feed it test code by inserting test data into a data file with all the current station names and other information it is depending on. Create a pink noise data set with the individual entries assigned to the station names it is looking for.
Larry

KenB
June 21, 2012 8:23 pm

Thanks Chiefio
I wonder how long this farce of trying for social change by corrupting science will go on. It seems inevitable there will be a massive law suit. With the poor state of climate science, its myths and beliefs that keep changing to protect the guilty, careless, stupid, or just incompetent, how will that fare when put to the test of providing sworn testimony AND data, and adjustments to raw data, in a civil court of law?
The Harry read-me file should have sparked alarm bells among true scientists and immediately led to a clean-up. As it stands now, the harm and potential damages that have occurred since that event have made it odds-on that the only remedy is either recant the meme now or prepare to be sued and explain “why the deception continued”. The continuance after exposure of a problem/issue is surely where the punitive damage and assessment of costs will rest in this donnybrook!

barry
June 21, 2012 8:24 pm

Rob,

The analyses you mention (Jeff Condon etc) all take the GHCN version 2 or version 3 thermometer compilation as a “given” and produce global trends from a single version.
The Chiefio is looking primarily at the differences between the different GHCN versions. He is not really focussing on the trend that you can get from a single GHCN version.

The Air Vent used the raw data (check the code in the post I linked), not any of the adjusted versions. The point is, the results are hardly different from the adjusted results. And this has been discovered over and over again at Lucia’s place (Zeke Hausfather) and by other blog efforts taking raw and adjusted data and comparing. At the global level there are differences in trend between the institutional and blog efforts, but they are minor.
I can’t speak to the accuracy of EM Smith’s methods, but I note that he points out the African trend has actually decreased in V3 and the same with Europe – this is a shout out to those commentators who think that every adjustment is upwards. Histograms of adjustments show a fairly even split – but you wouldn’t know it because critics tend to focus on the upwards adjustments and usually omit references to downward ones.
I’d be interested to know what EM Smith discerns as the global trend difference between V1 and V3 from 1900. By eyeball it doesn’t appear to be much. Smith appears to have worked out a trend comparison from the 1700s (!) – it would not be a big surprise if the major part of the difference Smith is seeing is due to adjustments made to the early record, which, being sparse, is more prone to bigger adjustments than later in the record when there is much more data averaging out. No one should rely on early data – and indeed trend estimates given by the institutes are usually from about 1900, when the data are firmer. When the data is less sparse, the records match much better. For various reasons, as I mentioned above, adjustments are more obvious at regional and especially local resolution, but globally these adjustments tend to cancel out, which is why so many trend analyses using raw and adjusted global data from 1900 are so close.

barry
June 21, 2012 8:27 pm

EM Smith,
thanks for the reply. What is the difference between V1 and V3 for the global temperature trend from 1900? And in your initial comparison in the post, what was the start year when you discerned a 0.75C difference in trend?

KR
June 21, 2012 8:30 pm

~dbs, mod – I have pointed to a temperature estimate from rural stations and raw data, showing that the NASA/GISS data is simply a refinement of the results from that subset of _raw_ data.
Given that E.M.Smith has not presented an alternative temperature estimate, let alone one that stands up to scrutiny, I feel that I have fully documented the case against this opening post.

E.M.Smith
Editor
June 21, 2012 8:31 pm

@TheLastDemocrat:
It’s a jargon thing. In Data Processing the use of data is as a ‘mass noun’. The data. These data. That data (item). Datum is never heard (unless from a ‘newbie’ who doesn’t do it long).
On maps, you find a Datum…
So you have a choice: Become a Defender Of Proper Latin! And would that be Law Latin? Vulgar Latin? Latin of the Renaissance? Latin of the Early Roman Empire? Or late? They are all different, you know… there is even a government web site in the UK that details the differences between UK Government Latin from the late Empire usage vs early formal Latin. It is an interesting read, BTW. http://www.nationalarchives.gov.uk/latin/beginners/

The tutorial covers Latin as used in England between 1086 and 1733, when it was the official language used in documents. Please note that this type of Latin can be quite different from classical Latin.

So do be sure to identify exactly which Latin in which we are to be schooled…
Or accept that “things change”.
Ita semper fuit … De rerum transmutatione magis stetisse
As I’m a “computer geek” in Silicon Valley, I’m using “Silicon Valley Standard Geek Speak” in which the plural of data is data and the singular of data is data and the specific indicative is “data item”. YMMV. ( Though I tend to the English usage of ‘should’ and ‘ought’ not the American, as Mum was a Brit. So “I ought to eat less” instead of “I should eat less” and “If I should fall” instead of “If I were to fall” (or worse, the Southernism “If I was to fall”… ) so that can be disconcerting to some of my American neighbors… But I often shift dialect with context, so put me talking to a Texan and it will be “If’n I was ta’ fall”… so don’t expect strict adherence to any one dialect, time period, or argot.)
T.:
Unfortunately, I spell more or less phonetically (with a phonemic overlay). I had 6 years of Spanish in grammar school and high school, French to the point of French Literatures IN French at University, and along the way took a class in German and one in Russian (in which I did not do so well – funny characters ;-). Since then I’ve learned a bit, on my own, of Italian (enough to get around Rome), Gaelic (at which I’m still dismal, but trying), some Swedish (enough to read office memos when working at a Swedish company – with enough time and the dictionary) and smatterings of Norwegian. I’ve also looked at, but not learned much of, Greek (where I can puzzle out some words and simple phrases), Icelandic, Portuguese (which I can read as it is close to Spanish but the spoken form is just enough off that I can’t lock on to it) and a few others. (Not to mention a dozen or two computer programming languages).
The upshot of it all is that my use of Standard Grammar can be pretty easily shot as the various systems get a bit blurred… But I try. So I’ll see about changing it to what you like, so it won’t bother you. But I do generally agree with Mark Twain on the whole thing, and if you look back just a couple of hundred years even English had widely variant usage and spelling. It can not be neatly put in a bottle and frozen (though grammarians try…)
http://www.twainquotes.com/Spelling.html
But at least it’s not German…
http://www.cs.utah.edu/~gback/awfgrmlg.html
@Mfo:
In the USA, data were read off the Liquid In Glass thermometers and recorded in Whole Degree F on the reports. These data are compared to current records from ASOS stations at airports that report electronically.
All it takes is a simple tendency for folks to read 67.(whatever) as something to report as 67 for there to be a 1/2 F degree jump in ‘bias’ at the point about 1987 where the change of Duplicate Number happens.
IMHO, the shift from LIG to automated stations accounts for the “jump” at that point and for some of the volatility changes. Proving that belief will take some work.
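A quick illustrative sketch of that reporting-bias idea (made-up readings, not station data): habitually truncating to the whole degree rather than rounding biases the recorded values low by about half a degree.
```python
# Illustration of the reporting-bias idea above (made-up readings): if
# liquid-in-glass observations were habitually truncated to the whole degree F
# rather than rounded, the recorded values average about 0.5 F low.
import random
random.seed(0)

true_vals = [random.uniform(60.0, 70.0) for _ in range(100_000)]
truncated = [float(int(t)) for t in true_vals]    # "read 67.x, write down 67"
rounded = [float(round(t)) for t in true_vals]    # proper rounding for comparison

bias_trunc = sum(a - b for a, b in zip(truncated, true_vals)) / len(true_vals)
bias_round = sum(a - b for a, b in zip(rounded, true_vals)) / len(true_vals)
print(round(bias_trunc, 2), round(bias_round, 2))   # about -0.5 and 0.0
```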
@Mike Jowsey:
As I understand it, the sats are calibrated to a standard reference inside the enclosure. They read an absolute temperature reference for calibration.
It isn’t particularly relevant to my posting though.
It also doesn’t really help much in comparing sats results to land averages as there is the fundamental fact that temperature is an intrinsic property of A material, and an average of intrinsic properties is meaningless. (That is a point most folks ignore. I occasionally raise it as it is a critical point, but nobody really cares… The best example I’ve found is this: Take two pots of water, one at 0 C the other at 20 C. Mix them. What is the final temperature? How does that relate to the average of the temperatures? The answer is “You can not know” and “not at all”. Missing are the relative masses of water in the two pots and the phase information on the 0 C water. Was it frozen or melted? You simply can not average temperatures and preserve meaning; yet all of “climate science” is based on that process…)
http://chiefio.wordpress.com/2011/07/01/intrinsic-extrinsic-intensive-extensive/
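To make the two-pots point concrete, a small illustrative calculation (nominal water properties, simplified to cases where any ice fully melts): the same pair of temperatures, 0 C and 20 C, can mix to very different final temperatures depending on masses and phase, so the temperatures alone tell you nothing.
```python
# Illustrative numbers for the two-pots point above: the same pair of
# temperatures (0 C and 20 C) mixes to very different final temperatures
# depending on the masses and on whether the cold pot is ice.
C_WATER = 4.186    # specific heat of liquid water, kJ/(kg*K)
L_FUSION = 334.0   # latent heat of fusion of ice, kJ/kg

def mix(mass_cold, cold_is_ice, mass_warm, t_warm=20.0, t_cold=0.0):
    """Final temperature, simplified to cases where any ice fully melts."""
    heat_available = mass_warm * C_WATER * (t_warm - t_cold)
    heat_to_melt = mass_cold * L_FUSION if cold_is_ice else 0.0
    return t_cold + (heat_available - heat_to_melt) / ((mass_cold + mass_warm) * C_WATER)

print(mix(1.0, False, 1.0))   # equal masses, both liquid:  10.0 C (the naive average)
print(mix(0.2, False, 1.8))   # same temperatures, unequal masses:  18.0 C
print(mix(1.0, True, 4.5))    # 1 kg of ice at 0 C into 4.5 kg at 20 C:  ~1.9 C
```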
So the sats are calibrated to a standard then look at a field that covers some large area and integrates the incident energy and creates a value we call a temperature. It isn’t, but that’s what we call it. Then the land temperatures are averaged eight ways from Sunday (starting with averaging the min and max each day to get the mean) and we call THAT a temperature, but it isn’t. That the two values end up approximating each other is interesting, but doesn’t mean much. It mostly just shows that both series move more or less in sync with the incident energy level. ( i.e. they very indirectly measure solar output modulated by albedo changes.)
But that’s all a bit geeky and largely ignored other than by a few folks even more uptight about their physics philosophy than a grammar Nazi is about commas, quotes, and apostrophes…

Paul Linsay
June 21, 2012 8:40 pm

KR @ 8:03 pm
You missed the point of the article. Differences in instrumental artefacts between v1 and v3 are the source of the rise in global average temperature. It’s the artefacts that produce the rise you plot, which E.M. Smith would get too if he bothered to do the calculation, as does the BEST collaboration. Look around for the 1990 GISS temperature plot versus their 2000 version. You’ll see the shift where the 1930’s go from being the same temperature as 1990 to being cooler. It’s available on this web site somewhere.

June 21, 2012 8:41 pm

Grammar Nazi 2 reporting for duty…..
A poster wrote (in good faith) “Maybe try a different tact..”. Sorry, it’s “tack”. It’s a sailing term. Here beside the St. John River we are familiar with such terminology; the writer might come from a drier region where sailboats are not so common.
IanM

Trevor
June 21, 2012 8:41 pm

@thelastdemocrat:
First of all, you have a lot of fracking nerve criticizing others’ grammar when you open your post with “Puh leezze”, end TWO sentences with multiple exclamation points, and end your post with an incomplete sentence.
Second, “data” is NOT plural. It is a MASS NOUN, and as such, is treated as singular, like “information”. “Data” is, in fact, DEFINED as “information”, and like “information”, though it may be comprised of many individual pieces, when referred to as a whole, it is singular.
So shut the frack up!
I bet you also think it’s wrong to end a sentence with a preposition, don’t you? God, I can’t stand grammar snobs. Especially when they’re WRONG.
[Well, PASS THE POPCORN!! ~dbs, who is sitting back and immensely enjoying all sides of this fracking conversation!☺]

KR
June 21, 2012 8:46 pm

E.M.Smith – Let me put this in clear terms. I would challenge you to show, using uncorrected, well distributed, rural raw data, a temperature estimate that significantly varies from the GHCN estimates. One that shows a consistent bias from other estimates. If you do, I promise to take a look at it, and give an honest response as to my perspective, which will include agreement if it is well supported.
Failing that, I would consider your essay an attempt to raise unwarranted doubts, and treat it just as noise.
Seriously, folks. It’s one thing to argue about attribution (human or natural), or feedback levels (clouds, aerosols, albedo). It’s another thing to attempt to dismiss the masses of data we’ve accumulated over time regarding instrumental temperature records. The temperature is what it is, we’re ~0.7C over pre-industrial levels. We are where we are – claiming that the temperature record(s) are distorted is as unsupportable as the “Slayers” or Gerlich and Tscheuschner claims of 2nd law issues.

June 21, 2012 9:01 pm

barry
I forgot the “sarc” tag on my comment.
What you consider pedantic some others consider checking things for themselves. Being like Al Gore, who says everything he says he heard from his science advisers (one of whom was James Hansen), isn’t good enough, and shouldn’t be good enough. Would to God everyone was your kind of pedantic. “Manmade global warming” scares would be completely over!

June 21, 2012 9:06 pm

KR
you say: “closely match the NASA/GISS results”
Closely match is not good enough. Global warming hysteria is based on 1/10ths of a degree. The difference between the earth heading to global warming calamity and nothing out of the ordinary is so little that they “closely match”.

June 21, 2012 9:09 pm

KR
Here’s Richard Lindzen pointing out how global warming alarmists make their case for histrionics on things amounting to ant hills:

June 21, 2012 9:19 pm

Oh well.
Chiefio hasn’t kept up with skeptical science. He relies on First Differences. Yes, First Differences is peer reviewed, but smart skeptics don’t trust peer review, we do our own review.
So, let’s look at the review that Jeff Id did of first differences
http://noconsensus.wordpress.com/2010/08/21/comparison-and-critique-of-anomaly-combination-methods/
“This post is a comparison of the first difference method with other anomaly methods and an explanation of why first difference methods don’t work well for temperature series as well as comparison to other methods”
Basically, Chiefio is using a known inferior method, a proven inferior method. That’s worse than anything Mann ever did, because it was skeptics that pointed out that First Differences was inferior.
The other thing he doesn’t realize is that an entire global reconstruction can be done without using a single station from GHCN Monthly. Guess what, the answer is the same.

June 21, 2012 9:35 pm

KR
“a temperature estimate that significantly varies from the GHCN”
You are going to have to define what you think significant is. What are your parameters?

Leonard Lane
June 21, 2012 10:03 pm

We hear the statement “crimes against humanity” all the time in a political context, and I am not belittling real crimes against humanity in any way. They have occurred in the past and are occurring now.
But when taxpayers have funded climatic data collection, storage, and publication for decades because the data are essential to agriculture, infrastructure design and protection, etc., what do you call it when crooks and dishonest scientists destroy these vital public records for their own selfish and dishonest reasons? And when the so-called adjustments to the historical data are deliberately made to corrupt the data and cannot be reversed to restore the historical data, what do you call it? I think fraud is too weak a term for corrupting these invaluable data.

June 21, 2012 10:09 pm

Mosher,
Bad mouthing the “process” and comparing this to “Mann’s work” is not refutation. In science, we show our work. All I am asking is to show your work…. Here is a testable hypothesis with data and code completely free for you to show the process is bad. Do your own analysis and show that the work is wrong and please do not resort to name-calling and other obfuscation and drawing up of straw men. Use logic my friend. We don’t need to argue on political attack dog mode do we?
In other words, next time try to prove the result wrong directly. We are talking about the differences between GHCN V1 and V3 by the way. The theory is that there should be very little difference between the results and that any variance will have no slope and will not change the actual results in any way. I seem to see that it does in this case with this work. But I will always eat my hat like I say all the time if proven wrong and will apologize.
This is not to say that we have not warmed at all or that this work proved that all warming was the results of instrument problems, you have to read more into the work to come up with what it is saying.
This proves that IT IS POSSIBLE that the entire “postulated” warming caused by humans is all the result of instrumentation and measuring issues along with methodology therein. Heck, it’s also “just possible” that humans are responsible for the same amount of warming due to “Emitting CO2”.
Doesn’t that make either theory equally likely until one or the other is proven wrong?

jorgekafkazar
June 21, 2012 10:18 pm

The trolls are here in force, a sure sign that they fear the post. They resolutely go round and round the same wrong interpretation of it, hoping to divert readers from the actual message to a straw man of their own devising.

E.M.Smith
Editor
June 21, 2012 10:20 pm

@Wayne:
GIStemp takes in USHCN and GHCN and then does a bewildering set of manipulations on them. The USHCN is just US data, so theoretically only would have effect in the USA. HOWEVER, GIStemp is a ‘Serial Averager” and a “Serial Homogenizer”, so any given thermometer record may have a missing data item “filled in” via extrapolation from another thermometer up to 1200 km away. That record is then spliced onto other records and “UHI correction” applied (that can go in either direction, often the wrong one) based on, yup, records up to 1200 km away that might themselves have data from 1200 km away as in-fill.
Now comes the fun part…. Only AFTER all that averaging of temperatures, it creates “grid box anomalies”. The last version I looked at had 16,000 grid boxes filled in with ‘temperatures’. The One Small Problem is that the GHCN last I looked had 1280 current thermometers providing data items… and the USHCN only covers 2% of the Earth’s surface. Soo…. by definition, about 14,000 of those “grid boxes” contain a complete fabrication. A fictional temperature. That value can be “made up” by, yes, you guessed it, looking up to 1200 km away and via the Reference Station Method filling in a value where none exists (in fact, where about 14/16 of them do not exist) which reference may itself be adjusted via stations up to 1200 km away based on infilled data from 1200 km away… so any one data item may have “reach” of up to 3600 km… (Though most of them do not, but you don’t know how many or how far…)
At any rate, as v1 changes to v2 changes to v3, the input to that process changes (the exact thermometers that change which other thermometers and exactly what goes into each grid box). In this way, any “splice artifacts” and any “instrument changes” can directly propagate into the “grid box anomalies” calculated in the last step of GIStemp. The major difference between what I do (using a reverse First Differences) and what it does, is that it uses an average of a middling bit of the data to do the ‘splice’ so will be a little less sensitive to end effects from the final data item. (But it has a bunch of other bad habits that make that a small comfort… for more, see:
http://chiefio.wordpress.com/gistemp/ but it’s a couple of years of work, much of it technical, so can be a bit ‘thick’ at times…)
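A rough sketch of the 1200 km idea described above, just to show the general shape of a distance-limited, distance-weighted infill (an illustration only, NOT the actual GIStemp Reference Station Method code):
```python
# Rough sketch of the 1200 km, distance-weighted infill idea described above
# (an illustration of the general shape only, NOT the actual GIStemp
# Reference Station Method code).
import math

MAX_KM = 1200.0

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def infill(target_pos, neighbours):
    """neighbours: list of ((lat, lon), anomaly).  Weights taper linearly from
    1 at zero distance to 0 at 1200 km; stations beyond that are ignored."""
    num = den = 0.0
    for pos, anomaly in neighbours:
        d = distance_km(target_pos, pos)
        if d < MAX_KM:
            w = 1.0 - d / MAX_KM
            num += w * anomaly
            den += w
    return num / den if den else None   # None when nothing is within range
```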
R: Tee Hee 😉
@Amino Acids in Meteorites :
You noticed that too, eh? I’ve sometimes privately speculated that some various college professors might be giving students assignments to sit and comment… Who else would have the time and patience to so regularly be “first out the gate”?
Not particularly a pejorative statement, BTW. I’d often give my students an assignment to “Go shop for a computer” for example. I wanted them to report their interactions in the real world with what they had learned of the jargon. So assigning students to “hang out” somewhere and be present is a common assignment.
It would also explain why the “who’s on first” (note: Proper usage of who’s for the grammar Nazis) tends to change just about every semester / school year rotation.
Or it could just be folks who just have to be “in their face” with “the opposition”. Who knows…
@Gail Combs:
As you said: “Bingo!”
Though I do have to stress that part of the greater volatility in the past is an artifact of reducing instrument numbers. Eventually in the 1700s you are down to single digits (and at the very end a single instrument). It is just a mathematical truth that a single instrument will range far more widely than an average of instruments; and that the more instruments and the more widely they are scattered, the less the average will vary. (So a half dozen on the East Coast will move together with a Tropical Heat Wave; but average in the West Coast where we’re in the cooler and that dampens).
So anything before about 1700 is very subject to individual instrument bias. In fact, the perfectly reasonable reason that GIStemp cuts off the data in 1880 and Hadley in 1850 is that before that time there are just not enough instruments to say anything about the Global Average (though regional averages and country averages and individual instrument trends are still valid).
But more interesting to me is the other end of the graph. As we approach the present and numbers of thermometers are more stable OR they drop out of the record, we get even LESS volatility? That is counter to the math… That is the end where something fishy is showing up. Stable or dropping numbers ought to have stable or increasing volatility. Yet it goes the other way. A direct finger print of data artifacts from instrument and processing changes.
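That volatility argument is easy to demonstrate: the spread of an average of N independent series shrinks roughly as 1/sqrt(N), so fewer instruments should mean more year-to-year wiggle, not less. A small illustrative simulation (synthetic noise, not station data):
```python
# Quick illustration of the point above: the year-to-year spread of an
# average of N independent station series shrinks roughly as 1/sqrt(N),
# so FEWER instruments should mean MORE volatility, not less.
# Synthetic noise only; not real station data.
import random
import statistics

random.seed(1)

def volatility_of_mean(n_stations, n_years=200, sigma=1.0):
    """Standard deviation of the regional mean of n_stations noisy series."""
    means = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n_stations))
             for _ in range(n_years)]
    return statistics.stdev(means)

for n in (1, 6, 100):
    print(n, round(volatility_of_mean(n), 2))   # roughly 1.0, 0.41, 0.10
```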
OK, with that caveat about old temperatures said: The method I used of comparing each thermometer ONLY to itself, ought to give as good a standardizing of ‘trend over time’ as it is possible to get from an average. Better would be to look at those old records and just plot them as an individual instrument. When you do that, results are in conformance with what I found. In particular, when Linnaeus was doing his work in Sweden (and some folks scoff at the warm climate plants he said he grew) it really WAS warm in Sweden. The record from Uppsala from ONE instrument, well tended and vetted, shows that 1720 was rather warm just like now.
(Hope my tendency to state the limits and caveats, then also say “but the result matches” isn’t too disconcerting…. I just think it’s important to say “Watch out for this issue” and yet also say “But this is what is demonstrable”… )
@Mfo and Moderators:
Are you REALLY going to make me find a calculator somewhere and do it? Or can you post the result? Or does it work in Excel too? Sheesh… Put an anomaly in front of an obsessive compulsive and then walk away… with friends like that… 😉
@KR:
No “evidence” eh? The data are published. I’ve published the code that does the anomaly creation. Other than data, process, code, and results, what else would you like?
No, I’m not going to go chasing after Your topic and links. Maybe in a few days. Right now I’m busy… Just realize that in forensics there may be 100 ways the evidence is NOT seen, but what matters is the one that shows the stain… or, in short “Absence of evidence is not evidence of absence.” So about that absence of evidence for issues you posted…
BTW, did you know that Einstein was asserted to be WRONG on General Relativity by one of the early tests of light bending around the sun. They said “we didn’t find anything so he is wrong” in essence. Later that was shown to be in error. They just didn’t have the right tools to see the effect. Asserting from the absence of evidence is a very dicey thing to do.
So I don’t really care if a dozen folks look at things one way or another and see nothing. I care about the one way that shows something. Rather like watching a Magic Show. It is fine that 99%+ of the people will say “I saw him saw the girl in half”; but the interesting view is from the side or stage back where you can see how the trick is done.
BTW, what I have done is to “show my own reconstruction”. I’ve done it including publishing the code, describing the method, and publishing the graphs of what happens when the GHCN data in various “versions” are put through the same process. Take a look at the graphs. Many of them show warming. In particular, North America and Europe show warming trends on the order of 1 C to 3 C (depending on version of data used).
Are you unhappy that I find warming of 1 C to 3 C?
Or is it that the same code finds little to no warming in the whole Southern Hemisphere and finds Africa cooling? That it shows that “instrument selection” in the various versions causes as much shift of results as the supposed “global warming”? That it shows wildly divergent “warming” depending on where in the world you look, and when? All things that are not compatible with the CO2 warming hypothesis…
So it’s easy to toss the “BS Bomb”, but I’m the one who has all cards on the table. In that context, a random “rant” doesn’t look like much.
Look, I use the official GHCN data. I don’t change it. It’s not “my version”. I use the First Differences technique that is an accepted method (with only minor changes – order of time from latest to earliest and not doing a reset on missing data but just waiting for the next valid data item to span dropouts). Not “my” method, really. So using other folks methods and other folks data I look at what the data says. Then publish it all. If that’s “BS” then most of science is “BS” by extension.
@Larry Ledwick (hotrod ):
My first attempts at benchmarking were to remove some station data and see what happened. The code crashed. There’s a place where an errata file is used to update some specific stations and I think that is where it crashes – if one of those stations is missing.
In theory, I think, one could keep all the station entries in place, but re-write them with specific replacement data items and get a benchmark to complete. But that is a heck of a lot of data re-writing.
I was in the middle of thinking through how best to do that when two things happened.
1) I realized that GHCN itself “had issues” and did the GHCN investigation series instead as a more important task.
2) A bit later GIStemp changed to support v3, so I was looking at benchmarking code that was no longer in use. I’d need to do a new port and all, or benchmark obsoleted code as the walnut shell moves to a new pea…
It’s on a ‘someday try that’ list, but frankly, GIStemp was producing results so far out of alignment with the other codes as to not be very reputable anyway.
I THINK that a benchmark could be run by replacing data in each year with specific values that show a gradual increase, then running it. Then do the same thing with “dropouts” and with the ‘end data’ sculpted so that segments each ‘start low and end high’ but the actual overall trend is the same. Then compare results. But it is a lot of work and there is only one of me. So I put myself on higher value tasks. ( Little things, like making some money to buy food… Despite all the assertions to the contrary, being a Skeptic does not pay well and there isn’t a lot of funding from the NSF… )
@Luther Wu:
It’s what the data show when run through a very direct First Differences process.
@Bennet:
Sometimes I have my moments… Just needs a bit of pins under the skin 😉
@KenB:
My speculation would simply be that they have “Top Cover”. The whitewash in the Climategate “Investigations” pretty much serves as evidence of the same. The boldness with which clear evidence of wrong is simply brushed away speaks of folks who have been assured “We can protect you”… So someone with “Privilege” can assert it as protection. It is what seems to fit, though there is little to prove it. (At least in the present batch of Climategate and FOIA 2011 emails… but who knows what they are holding in the encrypted files…)
One of the interesting bits in forensics is what I call “The Negative Space” issue. What ought to be, but is not. The “Negative Space” of the investigations (the complete lack of what ought to be happening) IMHO is evidence of protection from much higher up the food chain. At least, were I leading an investigation, that’s where I’d look.
@Barry:
Two very important points:
1) I am NOT looking at “adjustments” per se in individual records. I’m looking at the aggregate impact of the totality of the data changes. What changes between v1 and v3. That will include some changes of the data and it will include many changes of the data set composition as to which thermometers are in vs out of the data set. The point is not that “adjustments” are the issue, only that ‘data set composition’ has as much impact as the imputed “global warming” and as much as or more than any “adjustments”.
2) The method I use is specifically designed such that the “trend” is found from the present and does not depend on the oldest data. That is why I start my First Differences in the present (where we supposedly have the best coverage and the best data) and then work backwards in time. That accumulates any “strangeness” in the early records only into the very early parts of the graphs. I go ahead and graph / show it, since in some places (like Europe and North America) there is decent data prior to 1850. In essence, any trend in the graph is most valid in the left most 3/4, but is likely decent in the left most 4/5 and only gets ‘suspect’ in the far right at the oldest thermometers with the fewest instruments. Even then, since each instrument is only compared to itself, and even then only within the same month / year over year, the accumulated anomaly will get wider error bars, but ought to remain representative.
So, for example, the oldest record in Europe ought to come from Uppsala IIRC. That record will have “January 2012” compared to “January 2011”. Then January 2011 compared to 2010. etc. back to the start of time in the 1720 to 17teen area. Similarly for February, March, etc. If a given month has a value missing, it is just skipped. So if January 2011 is missing, 2012 gets compared to 2010 and the difference entered at that point. In this way missing data is just ignored and any trend will continue to be recorded. (The method is very robust to data dropouts). At the end of it all, the monthly anomaly series are averaged together to get annual anomaly series and any collection of instruments can get their anomalies averaged together. At least in theory, the accumulated anomaly ought to be valid even over long ranges of time, and as thermometers drop out of the record, the remaining anomaly in any given year ought to have wider range (as any average of fewer things can have more volatility) but ought to still be valid as to absolute value.
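For anyone who wants to see that bookkeeping concretely, here is a minimal Python sketch of the self-only, newest-to-oldest accumulation described above. This is an illustration only, not the actual dP/dt code; the data layout (a dict mapping year to a list of 12 monthly values with None for gaps) and all the names are hypothetical:

def monthly_anomaly_series(station, month):
    # Accumulate first differences for ONE station and ONE calendar month,
    # starting at the most recent year and walking backwards in time.
    # `station` is assumed to be {year: [12 monthly temps, None where missing]}.
    years = sorted(station, reverse=True)      # newest first
    anomalies = {}
    running = 0.0
    last_valid = None                          # most recent valid reading seen so far
    for yr in years:
        val = station[yr][month]
        if val is None:
            continue                           # dropout: just wait for the next valid value
        if last_valid is None:
            anomalies[yr] = 0.0                # the newest valid reading anchors the series at zero
        else:
            running += val - last_valid        # negative if this (older) year was cooler than the newer one
            anomalies[yr] = running
        last_valid = val
    return anomalies

def annual_anomalies(station):
    # Average the 12 monthly anomaly series into one annual series for the station.
    per_year = {}
    for m in range(12):
        for yr, a in monthly_anomaly_series(station, m).items():
            per_year.setdefault(yr, []).append(a)
    return {yr: sum(v) / len(v) for yr, v in per_year.items()}

Regional or country series are then just averages of these per-station series, so no station record is ever spliced onto another.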
Hopefully that isn’t too complicated an explanation of why the ‘trend’ is valid even as the length of time gets long and as instruments leave the pool. If plotting a ‘trend line’ I’d likely cut off the rightmost 10% of the graph just to assure that there isn’t too much ‘end effect’ on calculating the fit, but for ‘eyeball’ purposes, the graphs are fine. Just be aware that big swings at the far right are when you are down to single-digit counts or even just one thermometer, and they can move more than an aggregate.
Per the trend difference from 1900 in aggregate: Frankly, I don’t find it as interesting as just how the trend varies from region to region and how much some parts of the globe have little to no trend. All of that speaks to data artifacts, not a “well distributed gas” as causal.
But, looking at a large version of the graph, it looks like about 1/4 C to 1/2 C of variation between the two data sets depending on exactly which points in time you choose to inspect.
As the “global warming” we are supposed to be worried about is about 1/2 C, I’d say “that matters”…

Editor
June 21, 2012 10:55 pm

KR – you refer to a study using ‘rural’ stations. One of the problems with such studies is they tend to accept the ‘rural’ classification of a station as meaning that it is truly rural – i.e., that it will not be affected by urban development. Unfortunately, that is often incorrect (bear in mind that UHE tends to come from urban growth, not urban size – i.e., small towns and airports can have UHE too). For example, in the BEST study, I did a quick check on their list of ‘rural’ stations for Australia, and it contained over 100 airports and over 100 post offices.
In the study that you cited, I picked one Australian station at random from the map. It was Onslow, in NW WA, and the station appears to be at the airport. Unfortunately I am on holiday and unable to put in more time, but I think it would be a good idea to check that the stations in your cited study really are rural in the non-UHE sense.

barry
June 21, 2012 11:06 pm

EM,
my eyeball sees little trend in the difference (V3 minus V1) from 1900, but we know how eyeballs are. Rather than guestimate, would it be too much trouble to quantify? The period 1900 to 1990 (incl) is what I’m interested in, because then I can check that against the official trend estimates with some confidence. GISS, NCDC and UK Met Office don’t, AFAIK, offer trend estimates from the middle of the 19th century.
(I understood the rest of your reply, thank you)

E.M.Smith
Editor
June 21, 2012 11:38 pm

@Barry:
Look at the graph in the comment just posted and eyeball it for yourself. It looks to me like about 1/2 C of offset with v3 warmer in about 1984-1987.
As for 0.75C, as I’ve never said anything about it, I don’t have any better eyeball of the graph than anyone else. At about 1829 the two are 1 C apart with v3 colder, so about 1.5 C of “offset increase” compared with v1 from 1984 or so…, but there is variation from year to year, so how you find a ‘trend’ to get 0.75 C between values will be influenced by exactly what start and end dates you pick.
Then again, that’s sort of the whole point I’ve been making about “end effects” and “splice artifacts”…
@KR:
“Feel” whatever you like. I have no need to dance to whatever tune you care to call. Everything I’ve done is public and posted. You’ve tossed dirt and a link. Others can judge.
I’d only comment that if you think GISS and GIStemp is a ‘refinement’ then you haven’t looked at how the code works. I have. I ported it and have it running (and fixed a bug in how it did F to C conversion that warmed about 1/10 C through the data…)
Painful details here: http://chiefio.wordpress.com/gistemp/
Basically, if you think GIStemp is an improvement of the data, watch out for folks selling bridges in NYC…
Per your “challenge”: So just where do I get the global data to do that calculation? Hmmm?
What I have done is exactly to show trends in the GHCN data, even breaking it out by region (and I can do similar trends down to country and even any other subset of the WMO numbers including individual station data). The whole point of the posting is to show that a reasonable method, based on a peer reviewed and supposedly valid method of First Differences, gets very different trends out of two different versions of what is the same time period of data from the same physical locations. The differences are 100% due to changes IN THE DATA SET. You, then, want to get all huffy about my not somehow magically making a different trend out of some other data set. Well, I don’t have that other data set. I have the GHCN (in three versions).
Perhaps you can do me the favor of posting a link to some other global historical temperature data set, not based on the GHCN? I’ll be glad to make trends and graphs from it, I’ll be waiting for your link. Remember: Global coverage. From at least 1750 to date. NOT GHCN based or derived. Publicly available. At least 1200 thermometers world wide in the recent decades.
I do find it funny that you think using the GHCN, publishing the code used, and graphing and showing trends from it is all just “noise”. Guess that makes GISS, Hadley, and NOAA/NCDC just noise makers too… 😉
BTW, I’m not “dismissing the data” we’ve accumulated. I’m illustrating a problem in the way it is assembled and presented for use. The way that it is lacking coverage and has dramatic changes in instruments in it, and how those changes show through accepted methods of making “trends” as what I think are “splice artifacts” and related issues. That the particular assemblages of it have more variation from version to version than the signal being sought.
As to your assertion that the world is 0.7 C warmer: Nope. If you bother to look at the graphs presented, you will find that Europe and North America (and a little bit of South America) have much stronger warming. I would assert it is largely a result of putting most of our present thermometers near the tarmac and concrete at airports. Oceania / Pacific Basin isn’t warming much at all, and Africa doesn’t warm either. I’d also suggest you read the link about averaging intensive properties…
@Mosher:
That First Differences is “inferior” for finding warming doesn’t mean it is inferior for doing comparisons of two versions of the same data. In some ways as a forensic tool it is more valuable.
It is more impacted by ‘end effects’, particularly at the end where you start the comparison. Since part of what I’m trying to find is how much bias is in the data, having a method a bit more sensitive to the common causes of bias (like where each trend starts and ends) is a feature.
In essence, your criticism amounts to: “He didn’t sit in the audience and went behind the stage to look at the magic show”.
I freely grant that if you do everything exactly the same way as the Magician you get the same show. So what?
My purpose is not to create some fictional “Global Average Temperature” (which really is a silly thing to do – again, see that ‘intrinsic properties’ issue) so having a ‘better way’ to calculate something that is fundamentally meaningless is really rather pointless.
My purpose is to ‘compare and contrast’ two versions of GHCN and see how they are different from each other. To find out how much cruft shows up and how much one version changes compared to the next. For that purpose, a very clear and very simple and reliably repeatable system is far more important than one that does some very complicated shifting about and hard to explain manipulations of the data. Furthermore, as the purpose is to find how much of such things as “splice artifacts” are changed, having a tool that shows them is rather important. In that context, the Jeff Id critique of First Differences essentially says that I’ve picked a tool that works well for that purpose. (In that FD is sensitive to the splice point).

The average of all the steps over 10,000 series is about zero, but even over 50 series the average trend is dominated by this random microclimate noise.

Notice those values well. There are about 6000 thermometers at peak in GHCN. As I do FD on each month separately, that’s about 72000 FD segments. 7.2 x the point where Jeff finds the error to be “about zero”. Even at the 50 range, that’s about 5 thermometers. I’ve already pointed out that in the single digit range of instruments the graphs will be a bit off. As that’s pretty much prior to 1700 in places like North America and Europe and prior to 1800 in most of the rest of the world, I’m OK with that.
One of the first rules of forensics is to look at things from a different POV and with different lighting than used in “the show”…
And, pray tell, what publicly available non-GHCN data set might I download and test? Eh?
Link please…
And since GIStemp, Hadley, and NCDC are all just variations on GHCN post processing, they are not reasonable alternatives… Same limits as given to the other person saying to use other data: Publicly available, a few thousand thermometers, not GHCN based, etc. etc.

barry
June 21, 2012 11:51 pm

EM,

As for 0.75C, as I’ve never said anything about it, I don’t have any better eyeball of the graph than anyone else

Then I may have misunderstood this comment from your post

Overall, about 0.75 C of “Warming Trend” is in the v3 data that was not in the v1 data.

I thought that you had crunched the numbers for that. I’m still interested in the actual, rather than the guessed trend difference from 1900 to 1990 (because it looks to be about zero to me), but if it’s too much trouble then no worries. Thanks for the replies.

Nick Stokes
June 21, 2012 11:52 pm

GHCN V3 produces two data sets – a qcu file of unadjusted data and a qca file of adjusted data. I couldn’t see stated, either here or at Chiefio’s, which one he is talking about. AFAIK V1 only had unadjusted, and V3 unadjusted has changed very little. It looks to me as if the comparison is between v1 and v3 adjusted.
It’s the unadjusted file that is the common base used by BEST etc, although Gistemp has only recently started using the adjusted file (instead of their own homogenization).

E.M.Smith
Editor
June 21, 2012 11:52 pm

Jonas:
A very long time ago I looked at GHCN airport percentage over time. IIRC it ended up with about 80-90% of current stations being at airports. They are often called “rural”…
So, as you point out, comparing grass fields in 1800 to tarmac today surrounded by cars and airplane engines running is considered “rural” comparison… Yeah, right…
@Barry:
The data series used are also posted, so you can just grab the data and do whatever you want in the way of comparisons. In the article here:
http://chiefio.wordpress.com/2012/06/01/ghcn-v1-vs-v3-1990-to-date-anomalies/
but down in comments I post the same data with a sign inversion that makes the comparison a bit easier. (That is, you don’t need to invert the sign yourself to get cooling in the past to be a negative value).
Like I’ve said: The data, methods, code, etc. are all published.
Even has the individual monthly anomaly values so you can do ‘by month’ trends if you like.
Oh, and the reports have a thermometer count in them too, so you can see when the numbers are in the single digits and the results are less reliable.

Gary Hladik
June 21, 2012 11:54 pm

E.M.Smith says (June 21, 2012 at 7:33 pm): “FWIW, I’ve got a version where I cleaned up a typo or two and got the bolding right posted here:
http://chiefio.wordpress.com/2012/06/21/response-to-paul-bain/
with a couple of ‘lumpy’ sentences cleaned up a bit. It’s a bit of a ‘rant’, but I stand by it.”
Wow. Epic!
I don’t visit that site enough…

June 22, 2012 12:22 am

Reblogged this on luvsiesous and commented:
Anthony Watts and E. M. Smith point out how data is re-imaged, or re-imagined, in order to give it a semblance of integrity.
It should be that the data is original, not that it is re-duplicated.
Wayne

E.M.Smith
Editor
June 22, 2012 12:35 am

@Benfrommo:
That is exactly why I put all of it on line and available. So folks can replicate and find “issues”.
My point being that the changes in the versions introduce bias, and we are depending 100% on “other methods” to remove that bias. That the tool used is prone to responding to that bias is then used as an attack? Rather funny, really. So I’m saying “The other method looks like it is about 1/2 effective” and the complaint is “my approach finds 100% of the issue”… Go figure…
FWIW, I’m happy to soldier on and make yet more methods of comparing v1 to v2 to v3, and fully expect they might find ‘less’ difference. To the extent I’m perfect at it, they would find zero difference. But my whole point is just this: To the extent other methods are in any degree less than perfect, they will have SOME of the difference leak through. As that difference is a whole degree C in places like Europe, even a 50% of perfection method will have 1/2 C of variation from the changes in the data set. (In the case of Europe, making the warming trend LESS by 1/2 C). As the overall variation is 1.5 C between about 1984 and 1888 in aggregate, even a 75% effective method of defect removal will find 1/2 C of “warming” over that time period based only on leakage of data change artifacts.
The purpose of this analysis is NOT to say that the 1.5 C of difference is evidence of malfeasance, nor even to say it can not be mitigated, nor to say that the method that identifies it is the One Valid Way to calculate a Global Average Temperature trend. It is just to say “That bias exists” and then raise the question of “What percentage of it can reasonably be removed?” Near as I can tell, the “Warmers” wish us to just accept by assertion that the answer is 100%. I’ve seen no evidence that the other programs are perfect at removing such data bias. I have seen evidence that GIStemp is about 50% effective. (But even that is not as well proven as I’d like – GIStemp broke on attempts at better testing.)
For some reason this “measuring the data” mind set seems to throw a lot of the folks who are not familiar with it. They seem to jump to the conclusion that the goal is to show an actual value for the increase in the Global Average Temperature. As I’ve said a few times, the whole concept of a GAT is really a bit silly due to the fact that averaging intrinsic properties is fundamentally meaningless.
At any rate, one can show how ‘trend’ can be shifting in the data even if one doesn’t believe that the calculated ‘trend’ has much meaning. So I do.
I ran the software QA group at a compiler company for a few product cycles and I guess it changes how you think about things. Demonstrating repeatable and reliable outcomes is important… when the outcome changes based on the “same” input, it is cause for concern that maybe the “same” input isn’t really the same…
@Leonard Lane:
I do feel compelled to point out again that this does NOT show that individual data items in individual months are changed. It shows that the collection of instrument records has an overall shift that changes the trend. (Though there may well be some individual data items that also change).
As First Differences is particularly sensitive to the “join” point between two records, it will be particularly sensitive to changes in the first few years (near that 1990 start of time point) and will highlight the change of processing that happened about 1987-1990. I think that is mostly the move to electronic thermometers and largely at airports.
So this isn’t so much saying that lots of individual data items have been changed as it is saying that a “shift” in thermometers and processing at about that time causes a ‘splice artifact’ that shows up as a ‘warming trend’ when homogenized and over averaged. At least, that’s what I speculate is the cause of the shift.
It could simply be that the other codes, like CRUTEMP and GISStemp are so complicated that they do a very good job of hiding that splice artifact, and a less than stellar job of removing it, and that the folks who installed the changed thermometers and the folks running those codes truly believe that what they are doing is close enough to perfect that the result is valid.
Basically, it need not be malice. It can simply be pridefulness and error…
:
I especially like the way some posted ‘critiques’ in less time than it takes to actually read the article and look at the graphs 😉
But it is hard to tell a ‘Troll’ from a ‘Zealous True Believer’ sometimes. Still, the distractors of “go do what I tell you to do” and / or “but look at this orthogonal link” can be a bit funny…

Kev-in-UK
June 22, 2012 12:40 am

@ EMSmith
absolutely – as a comparative analysis of datasets, of course you couldn’t have studied every station’s data! But that was kinda my point – who has? – and how/what have they done to give us these different datasets?
@Mosh – come on! – FD analysis is hardly rocket science but it is seen as a sensible first representation of potential flaws in data/adjustments? Are you not annoyed that EMS has shown the data to be suspect? Never mind your personal stance on the issue – as a data user – would you not expect the data to be clean and reliable? (and ideally raw!)

davidmhoffer
June 22, 2012 12:40 am

Chefio’s analysis is, as usual, thorough and absolutely excellent. Beyond learning a lot though, watching as he eviscerated one troll after another was as entertaining a read as I’ve seen in some time. He even goaded Mosher into a comment using multiple paragraphs instead of a one line drive by snark. Wow!
The icing on the cake however were the grammar and spelling snobs, which Chefio crushed as badly as the technical trolls. C’mon Chefio, can’t you leave one or two trolls or snobs standing so that the rest of us can do some of the pummeling?

davidmhoffer
June 22, 2012 12:53 am

Spelling aint my strong point, but you’d think I’d at least get “Chiefio” right?
Best I not try and pummel any trolls tonight, to sleep, to tired….
unlss th spllng snbs shw agn, in whch cse I wnt to knw if spllng is so mprtnt why I cn tke mst of the vwls out and stll be ndrstd?

John Diffenthal
June 22, 2012 12:56 am

The original article is a longish read and I would recommend that most people ignore the early sections and pick it up from ‘The dP or Delta Past method’ which is about 15% of the way through. The remaining material on the quality of the data is fascinating. It’s written as a series of sections which can be digested independently. The meta message is inescapable and is summarised neatly by Smith in his penultimate paragraph.
Chiefio is strongest in this kind of analysis – detailed, documented and relatively easy to replicate. Would that more of the climate debate were based on this kind of material. If you find that his conclusions on the data series are interesting then go back and read what you missed at the beginning.

Brent Hargreaves
June 22, 2012 1:04 am

Marvellous work, Chiefio. You have set out the big picture with great thoroughness, demonstrating that the Global Warming story is based on dodgy data.
In support of your work, this link focusses on some specific Arctic stations, showing where the fraudsters’ fingerprints can be detected: http://endisnighnot.blogspot.com/2012/03/giss-strange-anomalies.html

barry
June 22, 2012 1:10 am

EM,
thanks for the link to data. I did a linear regression on version1 and 3 using dT/yr data column, 1900 to 1990. Surprisingly both are negative! So I mayn’t have understood your labelling. dT/yr is just the average of monthly anomalies, isn’t it? Anyway, here are the results.
V1 trend is -0.0623 for the whole period
V3 trend is -0.0542 for the whole period
If I’m using the right metric, the trend difference is insignificant – but the trends have the wrong sign, and barely slope at that.
What have I done wrong?

Editor
June 22, 2012 1:19 am

Stokes:
I use ghcn v3 unadjusted. Don’t know where all I said it, but at least one place was during a (kind of silly really) rant about data quality in a non-adjusted data set. This was while I was developing the code and it crashed on ‘bad data’, so I complained in essence [that] non-adjusted data was not corrected for bad data. Still, it is a bit “over the top” to have rather “insane” values show up in the ‘unadjusted data’… but on the other hand, folks who want “really raw data” ought to expect it to be, well, raw… pimples, warts, insane values, and all:
http://chiefio.wordpress.com/2012/05/24/ghcn-v3-a-question-of-quality/

Looking at the v3.mean data, there are 3 records for North America with “insane” values. Simply not possible.
Yet they made it through whatever passes of Quality Control at NCDC on the “unadjusted” v3 data set. They each have a “1″ in that first data field. Yes, each of them says that it was more than boiling hot. In one case, about 144 C.
You would think they might have noticed.

E.M.Smith
Editor
June 22, 2012 1:27 am

@Barry:
Oh, I see, you are talking about in the linked article, not about comments here. The estimate is from eyeballing the graph where about 1987 is around + 1/2 C and around 1888 is about -1/2 C and allowing about 1/4 C of “maybe I’m off”. But you can graph the data yourself as it is posted and do whatever you like with it. Here is the ‘difference’ pre-calculated back to 1880 (presuming you are not interested in prior to that time):

v3 - v1 dT
-0.33
-0.03
0.34
0.19
0.23
0.14
0.02
-0.04
0.18
0.13
0.26
0.15
0.18
0.07
0.18
0.05
0.2
0.07
0.11
0.12
0.14
0.12
0.06
0.15
0.08
0.15
0.15
0.14
0.19
0.2
0.2
0.19
0.19
0.12
0.23
0.17
0.23
0.25
0.22
0.19
0.21
0.2
0.16
0.19
0.18
0.22
0.13
0.27
0.23
0.21
0.18
0.25
0.25
0.15
0.16
0.06
0.16
0.23
0.24
0.27
0.17
0.22
0.12
0.12
0.09
0.15
0.2
0.22
0.18
0.18
0.22
0.11
0.14
0.19
0.18
0.14
0.16
0.13
0.07
0.07
0.1
0.05
0.15
0.1
0.05
0.1
0.12
0.13
0.17
0.16
0.23
0.17
0.13
0.17
0.04
0.04
0.04
-0.05
-0.13
-0.08
-0.27
-0.23
-0.13
-0.19
-0.12
-0.18
-0.03
-0.19
-0.06
-0.4
0.09
-0.04
-0.05
-0.16
0.01
0.09
0.26
-0.54
-0.39
0.05
0.24
-0.24
0.03
-0.15
-0.6
0.33
-0.16
-0.09
-0.51
0.24
-0.26
0.1
-0.03
-0.45
-0.1
-0.33
0.05
-0.06
-0.3

You can see that near the start, it’s about +1/4 C and near the end (but not AT the end) there are values of about -1/2 C. As with all such graphs, the exact start and end points you choose to ‘cut off time’ can shift the trend and this set leaves out some of the largest ‘cooling of the past’ back prior to 1880, but I didn’t want too long a set of numbers in a comment…

LazyTeenager
June 22, 2012 1:40 am

Hmm, I guess I have to go and read it.
But the first thing I notice is that he bangs on about unproven assertions, but makes plenty of unproven assertions himself. This makes what EM Smith says hard to take seriously.
It seems anything he doesn’t like is an unproven assertion and anything he just makes up is a proven fact.
So is there any substance behind this fog? Well, using an analysis based on differences has to be done very carefully, since differences exaggerate noise. That is very well known. Numerical Maths 101. Even a dumb LT knows that.
So let’s have a look.

June 22, 2012 2:05 am

Leonard Lane says:
June 21, 2012 at 10:03 pm
We hear the statement “chimes against humanity” all the time in a political context…

A fortuitous typo, since much of the AGW handwringing seems to be based on noise.

E.M.Smith
Editor
June 22, 2012 2:10 am

:
Love the spelling point 😉 FWIW, studies have shown that as long as the first and last letter are correct, all the other letters can be in random order and most folks can read the word. Strange, that. I find it particularly easy to do ECC on things so just read your ‘compressed’ versions quite easily. (Then again, I read ‘mirror writing’ and upside down print and don’t always notice…)

A Plan for the Improvement of English Spelling
For example, in Year 1 that useless letter c would be dropped to be replased either by k or s, and likewise x would no longer be part of the alphabet. The only kase in which c would be retained would be the ch formation, which will be dealt with later.
Year 2 might reform w spelling, so that which and one would take the same konsonant, wile Year 3 might well abolish y replasing it with i and Iear 4 might fiks the g/j anomali wonse and for all.
Jenerally, then, the improvement would kontinue iear bai iear with Iear 5 doing awai with useless double konsonants, and Iears 6-12 or so modifaiing vowlz and the rimeining voist and unvoist konsonants.
Bai Iear 15 or sou, it wud fainali bi posibl tu meik ius ov thi ridandant letez c, y and x — bai now jast a memori in the maindz ov ould doderez — tu riplais ch, sh, and th rispektivli.
Fainali, xen, aafte sam 20 iers ov orxogrefkl riform, wi wud hev a lojikl, kohirnt speling in ius xrewawt xe Ingliy-spiking werld.
Mark Twain

Per Chefio: As I’m fond of cooking, too, I’m Fine With That 😉 Just got a new “smoker” and made a simple brined smoked salmon. Yum! 1 qt water, 1/2 cup each salt and brown sugar. Soak for an hour. Then coat with soy sauce and let sit for another 20 minutes or so as a bit of ‘pellicle’ forms. Put in the smoker on ‘very low’ about 200 to 220 F, for about an hour and a half… Less for thinner chunks. Also did a chicken that was marinaded in about 1/3 lemonade and 2/3 soysauce, then slow smoked for a few hours… Hickory chips in the chip box…
(Can’t say I don’t aim to please 😉
Per dealing with Trolls: While I appreciate the compliment, mostly I’m just trying to answer questions in as open and honest a way as possible. “The truth just is. -E.M.Smith”. So I don’t know if they are Trolls or just folks with questions based on a slightly akimbo (mis?) understanding. All I do in any case is state what is.
At any rate, as it is nearing 2 am where I am, bed will be calling me soon, too, and there will be plenty of time for others to chime in…
Diffenthal:
Well, while I may be prolix, at least I can’t be faulted for lacking thoroughness 😉
And yes, I do prefer things that can be intuitively grasped and don’t have a lot of artifice and “puff” / complexity between the starting point and the result. So the actual dP/dt code is very short and very understandable. The “method” fits in one simple paragraph (see above).
@Brent Hargreaves :
Yes, GIStemp basically invents a completely fictional Arctic temperature. Though I’m loath to call it ‘fraud’ since it can simply be that they are ‘sucking their own exhaust’ and ‘believe their own BS’ (to quote two common aphorisms for the tendency to believe what you want to believe, especially about your own abilities, that are commonly used in Silicon Valley programmer circles…)
“Never attribute to malice that which can be adequately explained by stupidity” can cover a lot of ground as “Intelligence is limited, but stupidity knows no bounds. -E.M.Smith” 😉
Put another way: I’ve looked at code that I’d swear was perfect, run it, and had horrific bugs show up. Things you learn doing software QA… And WHY I stress that until there has been a professionally done and complete independent software audit and benchmark of any “Climate Codes” (such as GIStemp) they can not be trusted for anything more than publishing papers for meeting academic quota…
There is a reason that the FDA requires a “Qualified Installation” for even simple things like a file server and complete submission of ALL software and data used in any drug trial. They must be able to 100% reproduce exactly what you did or you get no drug approval. Even something as simple as a new coating on an aspirin requires that kind of rigor. Yet for “Climate Science” we take code that has never been properly tested, nor benchmarked, and feed it “Data Du Jour” that mutates from month to month, and play “Bet Your Economy” on it… Just crazy, IMHO. But what do I know about software, I’m just a professional in the field with a few decades of experience… I’ve even DONE qualified installs and signed the paperwork for the FDA… ( The “Qualified Installation” documents must be written such that a robot following the statements would get exactly the same result. Say, for example, “push the red power button” and the vendor changes to an orange button, and you will fail. So wording has to be careful with things like “push the power button to the on position”… No way the “Climate Codes” could even think of applying for the process. The data archival requirements alone would cause GHCN to be tossed out as too unstable. But hey, an aspirin is far more important than the global economy… /sarcoff>; )

mfo
June 22, 2012 2:26 am

The World Climate Data and Monitoring Programme of the WMO have been scratching their bonces about metadata and homogenisation:
http://www.wmo.int/pages/prog/wcp/wcdmp/wcdmp_series/index_en.html
Phil Jones was involved in a case study which included:
“…………..variations are related to non-climatic factors, such as the introduction of new instrumentation, relocation of weather stations, changes in exposure of instruments or in observing practices, modification of the environment surrounding the meteorological stations, etc.
At the same time, wrong or aberrant observations are common in most observational systems. All these factors reduce the quality of original data and compromise their homogeneity.”
http://www.wmo.int/pages/prog/wcp/wcdmp/wcdmp_series/documents/WCDMP_Spain_case_study-cor_ver6March.pdf
This pdf gives the WMO guidelines:
http://www.wmo.int/pages/prog/wcp/wcdmp/wcdmp_series/documents/WCDMP-53.pdf
In a cited paper where a certain PD Jones was involved (pg 40) it states:
“Judgement by an experienced climatologist has been an important tool in many adjustment methodologies because it can modify the weight given to various inputs based on a myriad of factors too laborious to program.”
Phil Jones: “I’m not adept enough (totally inept) with excel to do this now as no-one who knows how to is here.”
I expect the CRU have got their copy of Excel for Dummies by now.
Warmist papers should come with a warning: “May contain nuts.”

barry
June 22, 2012 2:32 am

EM,
yes, I was talking about the post on your blog that this one links to.
my last post seems not to have got through. I did a trend comparison between v1 and v3 per the tables in the post you linked (thanks).
v1 trend 1900 to 1990 = -0.623 for the period
v3 trend 1900 to 1990 = -0.542 for the period
(v3 is from memory – I’d shut Excel down and couldn’t be bothered doing it again)
I used the data from the column dT/yr, which is the average of the monthly anomalies for each year. Assuming I’ve used the correct data, three things are apparent.
There is hardly any slope for the period.
The slope is negative.
The difference between V1 and V3 global trend is insignificant – less than a tenth of a degree over 91 years.
I find the negative result surprising. Am I doing something wrong?
Thanks for the difference table upthread, but it was easier to copy’n’paste from the other tables, as they had the years already marked.

davidmhoffer
June 22, 2012 2:35 am

Chiefio;
Per dealing with Trolls: While I appreciate the compliment, mostly I’m just trying to answer questions in as open and honest a way as possible.>>>>
I know! That’s what made it so darn amusing! (Many of the questions were of course legit, they weren’t all troll comments of course)
There’s a pet peeve of mine that I’m wondering if you’d comment on? Trending “average” temperatures has never made much sense to me. We’re trying to understand if increased CO2 results in an energy imbalance at earth surface that raises temperatures. If that is the case, why would we average temperatures and then trend them? The relationship between temperature and w/m2 is not linear.
Taking for example some cooling in Africa that is balanced by some warming in, say, Canada. Do they balance each other out? Very possible that they do when one averages “degrees”, but averaging “w/m2” gives a whole different result. If equatorial Africa cools from +30 to +29, that’s a change of -6.3 w/m2. But if temps in northern Canada in an equal-size area rise from -30 to -29, that’s an increase of only 3.3 w/m2. So, the “average” temperature based on those two data points would show a change of zero, but in terms of energy balance, the earth would be cooler by 3 w/m2.
One thing I would like to see is the temperature data converted by SB Law to w/m2 and THEN trended. By averaging temps alone, we’re over-representing changes in energy balance at high latitudes, and under-representing changes in equatorial regions. From a w/m2 perspective, a degree of cooling in equatorial Africa would wipe out several degrees of warming in Antarctica and may well show a completely different trend than temperature.
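Those two numbers are easy to check against the Stefan–Boltzmann law. A throwaway sketch, with emissivity taken as 1 purely for illustration (an assumption, not part of the comment):

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def flux(t_celsius):
    # Blackbody radiative flux at the given surface temperature (emissivity assumed to be 1).
    return SIGMA * (t_celsius + 273.15) ** 4

print(round(flux(29) - flux(30), 1))    # about -6.3 W/m^2 for the 30 C -> 29 C cooling
print(round(flux(-29) - flux(-30), 1))  # about +3.3 W/m^2 for the -30 C -> -29 C warming
print(round(flux(29) - flux(30) + flux(-29) - flux(-30), 1))  # about -3.0 W/m^2 net, even though the average temperature change is zero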
I’ve always wondered why, if the theory is that doubling of CO2 increases forcing by 3.7 w/m2, we would try to measure it through temperature readings which amount to a proxy that is KNOWN to not have a linear relationship to w/m2! If we want to see if CO2 is increasing forcing by some amount measured in w/m2, would it not make thousands of times more sense to MEASURE in w/m2 in the first place?

E.M.Smith
Editor
June 22, 2012 2:38 am

@Barry:
A trend line through “all data” differences has the following values:
f(x)= -0.0030733007x + 0.2706459054
R^2 = 0.6412526237
per the least squares fit line in Open Office.
The trend line hits the vertical axis at about 0.25 ( I believe that is the 0.27 value in the formula above) and crosses -0.6 at the start of time. It is at about -0.5 around 1740 or so.
As I understand things, that gives a formal LSF line of about 0.75 increase by using the data from the present back to about 1740.
Hopefully that satisfies your need.
(Though again, I do want to stress that it is the variation in the ‘trend’ between different regions that I think is more important. It shows that the trend is very non-uniform around the world, and that it varies dramatically in how it changes from place to place as v1 changes to v3. It is that non-uniformity and that the changes are orthogonal to CO2 changes that make it ‘suspect’ that CO2 has any impact on the data and that changes in the data set are more important to any ‘trend’ found.)
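For anyone who wants to reproduce a fit like that rather than eyeball it, here is a minimal sketch. The file name, and the assumption that the posted v3 minus v1 column has been saved one numeric value per line (header stripped), are illustrative only; this is not the OpenOffice sheet used above:

import numpy as np

diffs = np.loadtxt("v3_minus_v1_dT.txt")     # the posted difference column, one value per line
x = np.arange(len(diffs))                    # row index standing in for year
slope, intercept = np.polyfit(x, diffs, 1)   # least-squares straight line through the differences
fit = slope * x + intercept
r2 = 1.0 - ((diffs - fit) ** 2).sum() / ((diffs - diffs.mean()) ** 2).sum()
print(slope, intercept, r2)
print(abs(fit[-1] - fit[0]))                 # total change of the fitted line across the record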

mogamboguru
June 22, 2012 2:50 am

To quote the article:
What is found is a degree of “shift” of the input data of roughly the same order of scale as the reputed Global Warming.
The inevitable conclusion of this is that we are depending on the various climate codes to be nearly 100% perfect in removing this warming shift, of being insensitive to it, for the assertions about global warming to be real.
——————————————————————————————————————-
Aah – I LOVE the smell of facts in the morning!

Latimer Alder
June 22, 2012 3:04 am

the average temperature of the Earth in degrees KELVIN. An adjustment of one to two degrees to an average temperature of 20 is already small. Such a variation on an average temperature of about 500 is — well, you tell me.

At the risk of being terribly pedantic, if the average temperature gets to 500K we are all already in a lot of trouble 🙂
500K is +227C…warm enough to bake bread. Gas Mark 8 in old money.
You might wish to rephrase it at about 290K (+17C). Cheers.

E.M.Smith
Editor
June 22, 2012 3:24 am

@Barry:
dT/yr is the change for THAT year compared to the next, not the running total that you want to use. It is the volatility, if you will. It is not the field you want for trend.
As I noted earlier, there are two sets of reports posted for the “all data” version. The one in the article itself has the opposite sign to that in the comments. ( That is, the dT/dt version of the code tells you how you must change things to get to the present, while the dP/dt version tells you directly what the past looked like compared to today. Or “it is 1 C warmer now than then” vs “it was 1 C cooler then than now”. This is detailed in the posting.
(The reason for this is pretty simple. The original dT/dt code was for a different purpose and was to show “how much warming was needed to get to the present”. So if it was cooler by 1 C in 1816 the code showed “You need to add 1 C to get to the present warmth”. This was ‘inconvenient’ for making the comparison graphs, not to mention counter intuitive if you are showing trend from the past into the present on a graph, so the dP/dt version was made that basically just inverts the sign. That shows “It was colder by 1 C in 1880” as -1 C.)
So, to get the rising trend, scroll on down to the comments and pick up the dP/dt version of the data (or change which is subtracted from what 😉
To get the comparison of changes between the two data sets, subtract the v1 data for dP from the v3 data for dP. Thus, if a given year shows -1 C in v3 and -0.25 C in v1, the difference is -0.75 C: v3 shows that point as 0.75 C cooler than v1 does. A trend line plotted through those differences shows the trend of the divergence between the two sets of data. As noted above, that’s about 0.75 C of difference between 1740 and the present. I don’t know what it is from 1900 to the present as that isn’t a start date used by any of the climate codes, so I’ve not inspected it. As any given ‘cherry pick’ of a start date on a trend line data set can change the trend, YMMV as you try different start and end dates. (That is also one of the “issues” I have with the whole “warming” assertion, BTW. I can give you any warming or cooling trend you want as long as I can pick the start date. It was far warmer 6000 years ago. Colder by far in 1816 (the “Year without a summer”). About the same in 1934. Colder in the mid-’60s. The Younger Dryas froze things hard. A few hundred years later it was way hot. etc. etc.)
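A minimal sketch of that bookkeeping, assuming each report has been pulled into a simple {year: dT/yr} dict. The names, and the sign convention noted in the comment above, are assumptions for illustration, not the actual report format:

def cumulative_dP(dT_by_year):
    # Turn per-year steps (dT/yr) into a running anomaly relative to the newest year,
    # walking from the newest year backwards and summing the steps.
    years = sorted(dT_by_year, reverse=True)
    dP = {years[0]: 0.0}
    total = 0.0
    for yr in years[1:]:
        total += dT_by_year[yr]     # sign convention assumed to match the dP/dt reports
        dP[yr] = total
    return dP

def version_difference(v3_dT, v1_dT):
    # v3 minus v1 cumulative anomaly, year by year, for the years both versions cover.
    dP3, dP1 = cumulative_dP(v3_dT), cumulative_dP(v1_dT)
    return {yr: dP3[yr] - dP1[yr] for yr in sorted(set(dP3) & set(dP1))}

A trend line through the output of version_difference() is then the trend of the version-to-version difference, rather than a trend through the raw year-to-year steps.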
Hopefully I can be done now on the “100 ways to measure the change of trend in the differences”…

Quinn the Eskimo
June 22, 2012 3:48 am

KR at 8:46 pm
You were looking for “well distributed, rural raw data, a temperature estimate that significantly varies from the GHCN estimate”
This deals with NCDC data for the contiguous US, but otherwise may suffice: Long, E.R, “Contiguous U.S. Temperature Trends Using NCDC Raw and Adjusted Data for One-Per-State Rural and Urban Station Sets,” available at
http://scienceandpublicpolicy.org/originals/temperature_trends.html.
For the rural stations in the study, the raw data showed a linear trend of 0.13º C per century, while for urban stations the raw data showed a trend of 0.79º C per century. Id. at p. 8-9. The long term trends were very similar until about 1965, when the trend in the urban raw data increases faster than in the rural data. Id. at 9- 10.
NCDC’s adjusted data for rural stations show a trend of 0.64 º C per century, compared to 0.13 º C per century for the raw data. In other words, the NCDC adjustment increased the rural trend by nearly five times. Id. at 11. The adjusted data for urban stations show a trend of 0.77º C per century, compared to a raw urban trend of 0.79º C. per century. Id. “Thus, the adjustments to the data have increased the rural rate of increase by a factor of 5 and slightly decreased the urban rate, from that of the raw data.” Id.
E.R. Long is a retired NASA physicist.

LazyTeenager
June 22, 2012 3:52 am

The other curiosity is that a large proportion of the graphs show the 1970s temperature dip.
No explanation why splicing errors produce such a consistent result.

E.M.Smith
Editor
June 22, 2012 4:01 am

:
Averaging temperatures is just silly. Yet “it’s what they do” in “climate science”.
I occasionally point out the silliness in it (that “intrinsic” link above goes through the scientific philosophy bankruptcy of the notion of averaging temperatures in depth) and I occasionally point out that looking for changes in energy content or heat via temperatures alone is something even a freshman in chemistry would laugh at. But it doesn’t seem to ‘click’ with most folks.
http://chiefio.wordpress.com/2011/07/01/intrinsic-extrinsic-intensive-extensive/
FWIW most folks confuse heat and temperature anyway. Only engineers, chemists and the odd physicist seem to ‘get it’ and even they often go ahead and ‘average temperatures’ anyway.
THE classic error in calorimetry is to screw around with the thermometers and change or move them in the middle of the run; yet we are doing a large “calorimetry” measurement on the earth while constantly changing the thermometers by huge percentages. Yet “climate science” doesn’t care…
( I sometimes wonder why all the Chemists are not up in arms about that point… then again, when doing calorimetry in college chem classes, many of the chem students didn’t seem to ‘get it’ either…)
Then pointing out that averaging a bunch of temperatures does NOT get rid of systematic error, only random error, got me lambasted for weeks by folks asserting I was an idiot for not being happy to do such a thing, since clearly any such average was subject to the law of large numbers and would have ever increasing precision. (That the precision is false precision, and that a systematic error cannot be removed even though the precision of the average can be known to an ever greater degree, was just seen as a red flag inviting more insults.) Attempting, then, to point out that most of the errors in the data were not “random error” but more systematic just caused more abuse. Pointing out that measuring one thing a dozen times, or with a dozen different thermometers, can remove random error, but that measuring a dozen things with a dozen thermometers leaves an error in each reading that may well be systematic and that you simply cannot assume averaging removes it; well, talk about red flags and bulls…
(The simplest example of ‘systematic’ error would be something like: If folks reading the Liquid In Glass thermometers and recording data in whole degrees F tended to just report the last whole degree the meniscus crossed, then you get a ‘low bias’ in all the readings. Averaging them together does not restore those missing fractional portions.)
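That particular read-low bias is easy to demonstrate with a toy simulation (purely illustrative made-up numbers, nothing to do with any real station data):

import random

random.seed(42)
true_temps = [random.uniform(10.0, 25.0) for _ in range(100_000)]  # imaginary "true" readings, degrees F
recorded = [float(int(t)) for t in true_temps]                     # observer writes down the last whole degree the meniscus crossed
bias = sum(recorded) / len(recorded) - sum(true_temps) / len(true_temps)
print(round(bias, 2))   # about -0.5 F, and it stays about -0.5 no matter how many readings you average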
All that is even before we get to your points about energy flux not being linear with temperature.
Heck, take one place that cools by 1 C from +0.5 C to -0.5 C AND gets a foot of snow. Average that with some desert area that goes up by 1 C and has no water. What on God’s Earth does that say about heat flux? Nothing. You have TONS of frozen water worth of heat being ignored. Ignoring enthalpy is a Very Bad Thing. (We won’t even talk about rising air and moving heat with non-correlated temperature changes… adiabatic rate anyone? 😉
But the “debate” seems firmly rooted in the bankrupt notion of “average temperature” even though it has no philosophical basis in reality, confounds heat with temperature, does not conform with the physics of heat flux (as you point out) and is frankly just silly. As near as I can tell, it is only because “temperature is what we measure” and folks do not understand that averaging is used to hide information that is “in the way” of seeing something else and is NOT an easy tool to use and is completely meaningless when used to average temperatures as they are an intrinsic property…
But pointing that out just causes the average person to glaze over and causes “climate scientists” and Warming Believers to toss rocks and flames at you. So I only do it on special occasions 😉 Most of the time I behave myself and pretend that ‘an average of temperatures’ is a sane thing to do… though I’ll usually sneak in a small disclaimer about it…
It really is an “Angels and Pins” argument, in terms of physics… (Which is why I mostly show faults in the method and data biases rather than try to find a ‘better way to calculate the Global Average Temperature’…. I’m not fond of trying to show one way of counting Angels and measuring Pinheads is better than some other one… ;-)

John Doe
June 22, 2012 4:30 am

There is no warming trend in the raw data. Two adjustments, SHAP and TOBS are responsible for creating the trend.
You can see the consequence of each individual adjustment in the graph below, direct from the source, so don’t take my word for it. Basically SHAP (station homogeneity) and TOBS (time of observation) each account for about half the trend and neither have much effect until 1950.
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif
Global warming is anthropogenic alrighty but it’s done with a computer not an SUV.

TonyTheGeek
June 22, 2012 4:32 am

For anyone who is not already familiar with Chiefio’s blog, I highly recommend it! It is always entertaining, no matter what the topic of the day may be, and his clarity and attention to detail rivals Misters MacIntyre, Eschenbach, Montford and Watts to name but a few. Although some of the science (especially the statistics) is beyond my comprehension on most of these sites, I can appreciate that ‘loose tooth’ feeling of something not being ‘right’ and the obsessive tendency to worry the tooth until it comes loose and reveals its secrets. To be able to do that and explain it clearly is a gift.
I also greatly appreciate the magnanimity with which he handles comments, having read his blog for some time (I found it through WUWT) I have almost always found him to be polite, light hearted and respectful, rare qualities these days.
Before I sound too much like lead cheerleader, I wish to offer my thanks to all the people who operate/ maintain/ help out with/ comment on these blogs, without them we would all be relying on mainstream media and I think we are all aware how that would go.
Kudos to you all gentlemen and ladies, your dedication to the truth is laudable and fully deserving of recognition.
Tony
p.s. I know I missed out Jo Nova, Jeff Conlon and many others, my apologies. Also, one of my personal favourites (and the one who started me looking down this road), John Brignell at numberwatch who doesn’t post very regularly due, I believe, to ill health but is a fascinating read on the perils of epidemiology.

barry
June 22, 2012 5:11 am

As I understand things, that gives a formal LSF line of about 0.75 increase by using the data from the present back to about 1740.
Hopefully that satisfies your need.

Aha! Back to 1740; I wondered what your start date was.
I want to determine the trends in V1 and V3 from 1900 (to 1990). There are two reasons for this.
1. The long-term trend estimates done by GISS, UK Met Office etc are usually from no earlier than 1900. I want to compare your V1 and V3 with them for the same (or roughly the same) time period.
2. The pre-1900 data gets sparser and sparser. Adjustments are bound to have a more obvious effect the further back in time you go. GISS and UK Met Office don’t offer trend estimates from before 1900 because the data is uncertain.
I bet that there will be little difference between V1 and V3 for the global trend from 1900. This will accord with pretty much everything I’ve read on the subject. I don’t think trend estimates using surface data starting from the 18th century are going to be in any way reliable. Does anyone else do this?

DR
June 22, 2012 5:31 am

if the satellite warming trends since 1979 are correct, then surface warming during the same time should be significantly less, because moist convection amplifies the warming with height. – Roy Spencer
@KR
Why is Roy Spencer wrong?

KR
June 22, 2012 6:45 am

Quinn the Eskimo – The SPPI paper you referenced contains data from 48 sites, chosen one per contiguous US state, meaning Texas and Rhode Island have the _same_ influence on the results. The word “weight” does not appear anywhere in the document, there is no accounting for the size of the regions represented.
It’s therefore unsurprising that their results differ from area-weighted estimates – a comparison of apples and oranges.
In regards to those UHI issues, I would point to the Berkeley work, http://berkeleyearth.org/pdf/berkeley-earth-uhi.pdf:

“Time series of the Earth’s average land temperature are estimated using the Berkeley Earth methodology applied to the full dataset and the rural subset; the difference of these shows a slight negative slope over the period 1950 to 2010, with a slope of -0.19°C ± 0.19 / 100yr (95% confidence), opposite in sign to that expected if the urban heat island effect was adding anomalous warming to the record. The small size, and its negative sign, supports the key conclusion of prior groups that urban warming does not unduly bias estimates of recent global temperature change.”

(emphasis added)

Al in Kansas
June 22, 2012 6:56 am

A note on the SB equation and T^4. Given an approximate earth surface temperature variation of +/- 41 C, in kelvin that is a variation of 273 K +/- 15%. 0.85^4 = 0.522 and 1.15^4 = 1.749. In other words, roughly 3.4 times the heat flux at maximum compared to minimum. Averaging temperature would not seem to be conducive to accurate results.
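A throwaway sketch to check that arithmetic, with emissivity ignored and 273 K taken as the reference value used in the comment above:

t_ref = 273.15
low, high = t_ref - 41.0, t_ref + 41.0   # the +/- 41 C surface range, in kelvin
print(round((high / low) ** 4, 2))       # ~3.35: flux goes as T^4, several times more at the warm end than the cold end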

Bill Illis
June 22, 2012 7:00 am

Just pick the raw data from the 50 sites which have the longest continuous data.
The problem is one doesn’t even know if the NCDC raw data really is the raw data.
If they have decided to adjust the records to help the global warming case, then they have also been mucking around in the raw temperature database as well.
So, nobody is really working with the raw unadjusted records.

KR
June 22, 2012 7:28 am

DR – “…if the satellite warming trends since 1979 are correct, then surface warming during the same time should be significantly less…”
That’s a good point. I will note that satellite data is itself highly adjusted, based upon modelling of microwave emission from various points in the atmosphere, and has itself had numerous corrections (the UAH data initially showed cooling, not warming, until certain errors were noted and corrected). I suspect more corrections for things like diurnal drift will be applied in the future. There’s a good discussion of this in Thorne et al 2010 (http://onlinelibrary.wiley.com/doi/10.1002/wcc.80/abstract). Personally (opinion only here), although analyses such as Fu et al 2004 (www.nature.com/nature/journal/v429/n6987/abs/nature02524.html) have some statistical issues, I suspect the suggestion that tropospheric temperatures are contaminated by an uncorrected stratospheric signal is correct.
The radiosonde data shows trends since 1960 of 0.13C/decade surface, 0.16C/decade mid-tropospheric (http://www.ncdc.noaa.gov/sotc/upper-air/2010/13), incidentally, which is consistent with that tropospheric amplification by moist convection. There is also discussion on that page of uncorrected stratospheric influence in _both_ satellite sets.

June 22, 2012 7:33 am

KR: “urban warming does not unduly bias estimates of recent global temperature change.”
NASA: “Summer land surface temperature of cities in the Northeast were an average of 7 °C to 9 °C (13°F to 16 °F) warmer than surrounding rural areas over a three year period, the new research shows. The complex phenomenon that drives up temperatures is called the urban heat island effect.”
http://www.nasa.gov/topics/earth/features/heat-island-sprawl.html
“In July 2008, the central high building regions of Beijing have a monthly mean Tskin above 308 K (Fig. 1c). This is significantly higher than the surrounding non-urban regions where the cropland-dominated landscape has Tskin values in the 302-304 K range. The forests north of Beijing have Tskin as low as 298–300 K.”
http://www.met.sjsu.edu/~jin/paper/Tskin-UHI-jclimate-finalaccepted.pdf
If BEST can’t find UHI, the methodology or data or corrections or all three are broken.

June 22, 2012 7:42 am

Bill Illis,
If you think GHCN raw data isn’t really raw, try ISH, GSOD, WDSSC, etc. They will give you substantially the same results. Or take the Berkeley dataset and use only non-GHCN data and see what the results are. Here are three useful articles for reference:
http://wattsupwiththat.com/2010/07/13/calculating-global-temperature/
http://rankexploits.com/musings/2011/comparing-land-temperature-reconstructions-revisited/
http://judithcurry.com/2012/02/18/new-version-of-the-berkeley-earth-surface-temperature-data-set/
We’ve been around this so many times now, posted so many different codes by Jeff Id/Roman, Mosher, Chad, Nick Stokes, myself, Tamino, etc. If you take the raw data with no adjustments, you will get global results similar to NOAA/NASA. If you use non-GHCN data, you will get similar results to NOAA/NASA. If you use only rural stations (using any objective urbanity proxy, be it nightlights, impermeable surface area, MODIS, population density, population growth, etc.), you will get global results similar to NOAA/NASA. If you don’t trust me (and, like any good skeptic, even if you do trust me you should go out and verify it yourself), I’d strongly suggest downloading the data sets (available here: http://berkeleyearth.org/source-files/ ) and creating your own analysis. It’s pretty trivially simple to convert the data into anomalies and do a simple gridding.
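For anyone wanting to try that, a rough sketch of what "anomalies plus a simple gridding" might look like. The flat-file layout, column names, grid size, and 1951-1980 baseline are assumptions for illustration, not the actual Berkeley format:

import numpy as np
import pandas as pd

# Assumed columns: station_id, lat, lon, year, month, temp
df = pd.read_csv("stations.csv")

# 1. Anomalies: subtract each station's own monthly climatology over a fixed baseline period.
base = df[(df.year >= 1951) & (df.year <= 1980)]
clim = base.groupby(["station_id", "month"])["temp"].mean().rename("clim")
df = df.join(clim, on=["station_id", "month"])
df["anom"] = df["temp"] - df["clim"]

# 2. Simple gridding: average anomalies within 5x5 degree boxes, then weight each box by the
#    cosine of its latitude so boxes near the equator count for more area than boxes near the poles.
df["lat_box"] = (df["lat"] // 5) * 5
df["lon_box"] = (df["lon"] // 5) * 5
boxes = df.groupby(["year", "lat_box", "lon_box"])["anom"].mean().reset_index()
boxes["w"] = np.cos(np.radians(boxes["lat_box"] + 2.5))
global_series = boxes.groupby("year").apply(lambda g: np.average(g["anom"], weights=g["w"]))
print(global_series.tail())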

Brent Hargreaves
June 22, 2012 7:47 am

Is the v1 data available at GHCN? I can only see v2 and v3.

Pascvaks
June 22, 2012 7:55 am

“The hardest thing to remove from science in order to obtain ‘pure science’ is the human element.”
E.M. Smith is a ‘pure’ scientist. He has given you the ‘pure’ results of his search and efforts. The rest is up to you. (Thanks again EM!)

Luther Wu
June 22, 2012 8:34 am

Zeke Hausfather says:
June 22, 2012 at 7:42 am
We’ve been around this so many times now, posted so many different codes by Jeff Id/Roman, Mosher, Chad, Nick Stokes, myself, Tamino, etc…
____________________
That certainly clears up the whole thing for me.
Why does the actual temp even matter?
No need to nitpick… massaged or not, whatever temps we’re seeing are unprecedented and leave no doubt of anthropogenic origins, plunging beloved Mother Earth into catastrophe.
We must be made to pay.
/s
(just in case)

KR
June 22, 2012 8:58 am

sunshinehours1 – Yes, urban areas are much warmer than rural areas. But what folks don’t seem to recognize is that the trend measurements shown by NASA/GISS, HadCRUT, NOAA, etc., are of temperature anomalies, not absolute temperatures: changes in temperature from the long-term average for each particular area.
So if the city is 8°C warmer than the nearby farmland, but has always been 8°C warmer – then that difference in temperature has exactly zero effect on the anomalies.
The only possible effect would come from the growth of urban areas – and quite frankly, the area of the globe whose urbanization has changed (although large in absolute terms) is a small percentage of the planet’s surface. And hence it has very, very little effect on the temperature anomalies.
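A toy numerical illustration of that point (a sketch of my own, not anything from GISS or BEST): give a synthetic urban series a constant +8 C offset over a rural one and the anomaly trends come out identical; the offset only matters to the trend if it grows over time.

import numpy as np

years = np.arange(1950, 2011)
rng = np.random.default_rng(0)
rural = 14.0 + 0.01 * (years - 1950) + rng.normal(0, 0.2, years.size)

urban_fixed = rural + 8.0                             # city always 8 C warmer
urban_growing = rural + 8.0 + 0.01 * (years - 1950)   # UHI that grows over time

def anomaly_trend(t):
    a = t - t[:30].mean()                    # anomaly vs. the 1950-1979 mean
    return np.polyfit(years, a, 1)[0] * 10   # slope in C/decade

print(anomaly_trend(rural))          # ~0.10 C/decade
print(anomaly_trend(urban_fixed))    # same ~0.10: a constant offset cancels
print(anomaly_trend(urban_growing))  # ~0.20: a growing UHI does not cancel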

David Falkner
June 22, 2012 9:11 am

If you can take the raw data and get (substantially, as Zeke says) the same results, why bother adjusting it?

EthicallyCivil
June 22, 2012 9:25 am

KR says:
” if the city is 8°C warmer than the nearby farmland, but has always been 8°C warmer – then that difference in temperature has exactly zero effect on the anomalies.”
Given the increase in population density, the decrease in green space over time, and the increase in energy density and use (air conditioning specifically), is it reasonable to assume the UHI differential would stay constant?

Luther Wu
June 22, 2012 9:30 am

KR-
Are most reporting thermometers located in urban or rural areas?
Most folks don’t seem to realize….?

Zeke Hausfather
June 22, 2012 9:46 am

David Falkner,
Raw and adjusted global temps are close (~5% difference or so, depending on the timeframe), but adjustments can make a much larger difference on a regional level. They are also needed in many cases for regional climatological analysis. For example, I was working on a project the other week examining Chicago temperature trends. I kept getting some weird results where summer max temperatures were consistently above 40C prior to 1940 or so and almost never that high after, until I realized that the station I was examining had been moved from an urban rooftop to an airport around 1940. There simply aren’t enough “pristine” stations that have not had a move (or many moves), instrument change, time of observation change, etc. over the past century to do long-term analysis without trying to correct for these biases.
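One crude way to flag the kind of discontinuity described above (a sketch only; it is not the pairwise, neighbor-based homogenization NOAA actually uses) is to scan candidate break years and compare the mean of a station's summer maxima before and after each one:

import numpy as np

def largest_step(years, values, min_seg=10):
    # Scan candidate break years and return the one where the difference in
    # means before/after is largest. A real homogenization algorithm would
    # compare against neighboring stations rather than trust the raw means.
    best_year, best_step = None, 0.0
    for i in range(min_seg, len(years) - min_seg):
        step = values[i:].mean() - values[:i].mean()
        if abs(step) > abs(best_step):
            best_year, best_step = years[i], step
    return best_year, best_step

# Toy series: rooftop-style summer maxima through 1939, a cooler airport site after.
yrs = np.arange(1900, 1981)
tmax = np.where(yrs < 1940, 41.0, 38.5) + np.random.default_rng(1).normal(0, 1.0, yrs.size)
print(largest_step(yrs, tmax))   # expect a break near 1940 of roughly -2.5 C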

rilfeld
June 22, 2012 9:56 am

KR and others. Enough with the obfuscation!
What happened was a physical reality — it got warm, it got cold. Contemporary individuals tried to measure this using various instruments. There is one set of raw data. Two different processings of the data differ by an amount similar to the effect supposedly shown by the data. Global warming may exist. We may have caused it.
CO2 may be evil. What EM has said is that it is difficult to reasonably base public policy on an effect supposedly “proven” by a specific manipulation of this data set.
The criticism here ranges from reasonable scientific argument to ad hominem idiocy, but to me most is off point.
Why should we base massive changes in public policy on these data sets?

Frank K.
June 22, 2012 10:07 am

rilfeld says:
June 22, 2012 at 9:56 am
“Why should we base massive changes in public policy on these data sets?”
This is very simple rilfeld.
The left-wing, “green” climate scientists need money (lots of it!) and fame. Scaring people about climate gives them both. That’s all there is to it. They couldn’t care less about the public…

Luther Wu
June 22, 2012 10:45 am

rilfeld says:
June 22, 2012 at 9:56 am
….
_____________
Your points are valid and well taken.
As far as ad hominem idiocy, I am guilty as charged.
When it became apparent long ago that reason and logic and valid scientific method held no sway with “the forces arrayed against us”, nothing was left but to stand on the side of the road and jeer at the emperor with no clothes.
With an eye to recent revelations about the data collection and classification of free citizens undertaken by our US govt. (and doubtless the same for our foreign friends, too), I can assume that my machinations have put me on some sort of list as dangerous and a threat. For those at whom I throw stones, the end has boundless justification of means.

Quinn the Eskimo
June 22, 2012 11:19 am

KR at 6:45 am
The adjusted NCDC data don’t show any meaningful difference between urban and rural either. What is in question is the validity of the upward adjustment to the rural trend that produces that agreement.
As for Long’s selection criteria, as he explains it, the overlay of the 5-degree by 5-degree grid works out to roughly one cell per state, except in the northeast. Long’s choice of one rural and one urban station per state, for simplicity, would therefore overweight the northeast.
The BEST study compares rural stations to all stations, not rural to urban as in Long, so the contrast would be muted in BEST and heightened in Long. Also, BEST has a much, much larger sample size, being global versus Long’s lower 48.
BEST is 1950 to 2010; Long is 1900 to 200x.
It is a curiosity, to say the least, that the UHI, which is an indisputable physical reality, is asserted to have no meaningful effect in temperature time series in which urban areas are substantially overrepresented. If Long is to be believed, in the contiguous US, the curiosity is explained by adjustments which quintuple the rural warming trend and lower the urban trend, bringing them into contrived agreement. In light of this, the multitude of other credibility problems afflicting these time series, the GHCN v1 to v3 differences discussed by Chiefio, and the USHCN v1 to v3 differences, I am a long way from being convinced the instrumental record has sufficient validity and reliability to support the AGW policy agenda.
Regards,

David, UK
June 22, 2012 11:50 am

mfo says:
June 21, 2012 at 7:19 pm
A very interesting and thorough analysis by EM. When considering the accuracy of instruments I like the well known example used for pocket calculators:
0.000,0002 X 0.000,0002 = ?
[Reply: I hate you for making me go get a calculator to do it. ~dbs, mod.]
[PS: kidding about the hating part☺. Interesting!]
[PPS: It also works with .0000002 X .0000002 = ?

Also works for .0002 x .0002 (or any other number to that many decimal places) simply because the answer has more digits than the average calculator can…er…. calculate. .0002 x .002 just fits!
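For the record, the arithmetic behind the calculator gag: 0.0000002 x 0.0000002 = 4 x 10^-14, which needs 14 decimal places, so a typical 8-digit calculator that does not switch to scientific notation simply displays 0. A two-line illustration in Python, faking an 8-digit fixed-point display:

x = 0.0000002 * 0.0000002
print(x)              # ~4e-14: the true product, fine in floating point
print(f"{x:.8f}")     # 0.00000000: what an 8-digit fixed display would show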

Keitho
Editor
June 22, 2012 1:51 pm

I would think it likely, KR, that the increase in the urban temperatures wasn’t instantaneous but happened over time, and may continue to do so. The average they are measured against is unmoving and obviously predates some or all of the warming. Your thoughts are wrong.

KR
June 22, 2012 2:25 pm

(Various UHI comments…)
The IPCC data (2007) indicate that UHI has an influence of ~0.006 °C/decade. BEST calculates it at -0.019 ± 0.019 °C/decade. I’m afraid that anyone who’s looked at the data in depth finds that the range of possible UHI effects is an order of magnitude (or more) smaller than the observed trend.
Also note that (as Zeke pointed out in http://wattsupwiththat.com/2012/06/21/chiefio-smith-exqamines-ghcn-and-finds-it/#comment-1015394) raw data give the same trends, rural stations (by any selection criteria) show the same trends, satellite data show the same trends in and out of cities, and sea surface temperatures (SSTs) show those trends too (slightly smaller, just as expected given their higher thermal mass, but again there are no cities there).
UHI is just not an issue. That horse is dead, it’s perhaps time to stop beating it. And, back on topic for this thread, adjustments in the GHCN data (again, see http://wattsupwiththat.com/2012/06/21/chiefio-smith-exqamines-ghcn-and-finds-it/#comment-1015394) aren’t a significant issue either.
Adieu.

sunshinehours1
June 22, 2012 2:36 pm

KR: “…the trend measurements shown by NASA/GISS, HadCRUT, NOAA, etc., are of temperature anomalies, not absolute temperatures.”
True … but:
1) There were no satellites to measure UHI in the past, so we don’t know whether today’s 8 °C of UHI was already 8 °C in 1920 or 1930. It may have been 1 °C in 1900 and 3 °C in 1930, etc.
2) They aren’t the same thermometers and they aren’t in the same locations.
3) GIStemp etc. are full of stations with no modern data or no early data. How do we know what UHI was? We will never know. The thermometer is gone or ignored.
If we ever want to know what UHI is, then a set of reference thermometers needs to be decided on, the metadata kept up to date, and satellites need to measure UHI for decades to see the rate at which UHI changes. Math or algorithms won’t cut it.

sunshinehours1
June 22, 2012 2:42 pm

Zeke, the BEST “Analysis Charts” are useless. Can’t they grid the data and then supply the data for each grid cell, so we could actually see whether what they are claiming is true?
A 2-D graph is useless.
Something like this would be nice:
http://sunshinehours.wordpress.com/2012/03/18/cooling-weather-stations-by-decade-from-1880-to-2000/
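Producing that kind of per-cell output is straightforward once the data are gridded. A sketch (assuming a table of annual gridded anomalies with columns year, glat, glon, anom, as in the gridding sketch earlier in the thread), giving one least-squares trend per 5x5 cell that could then be mapped or binned by decade:

import numpy as np
import pandas as pd

def cell_trends(boxes):
    # boxes: DataFrame with columns year, glat, glon, anom (one row per cell-year).
    # Returns one least-squares trend (C/decade) per grid cell, suitable for mapping.
    def slope(g):
        if len(g) < 10:              # skip cells with too little data
            return np.nan
        return np.polyfit(g.year, g.anom, 1)[0] * 10
    return boxes.groupby(["glat", "glon"]).apply(slope).rename("trend_c_per_decade")

# e.g. cell_trends(boxes).reset_index().to_csv("cell_trends.csv", index=False)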

clipe
June 22, 2012 3:12 pm

David, UK says:
June 22, 2012 at 11:50 am
mfo says:
June 21, 2012 at 7:19 pm
A very interesting and thorough analysis by EM. When considering the accuracy of instruments I like the well known example used for pocket calculators:
0.000,0002 X 0.000,0002 = ?
[Reply: I hate you for making me go get a calculator to do it. ~dbs, mod.]
[PS: kidding about the hating part☺. Interesting!]
[PPS: It also works with .0000002 X .0000002 = ?
Also works for .0002 x .0002 (or any other number to that many decimal places) simply because the answer has more digits than the average calculator can…er…. calculate. .0002 x .002 just fits!

Can someone explain the joke here? When I was a wee boy my father taught me .0 x .0 = 0

clipe
June 22, 2012 3:24 pm

Is it, the more you multiply, the more zeroes you add?

Paul, Somerset
June 22, 2012 3:26 pm

KR says:
June 22, 2012 at 2:25 pm
(Various UHI comments…)
The IPCC data (2007) indicate that UHI has an influence of ~0.006 °C/decade. BEST calculates it at -0.019 ± 0.019 °C/decade. I’m afraid that anyone who’s looked at the data in depth finds that the range of possible UHI effects is an order of magnitude (or more) smaller than the observed trend.
____________________________________________________________________________
0.019 ± 0.019 °C/decade looks trivial. How about expressing it as 0.19 ± 0.19 °C per century?
Suddenly we can see that UHI effects are not “an order of magnitude (or more) smaller than the observed trend”.
In fact you have just provided a reference for what I’d always suspected. UHI effects account for a substantial portion of the observed trend.
Thank you.

Smokey
June 22, 2012 3:43 pm

clipe,
Maybe this explains it.
And maybe not.

E.M.Smith
Editor
June 22, 2012 3:46 pm

Gee, take a nap and a whole new load of comments show up… many unrelated to v1 vs v3.
I’ll try to address the most relevant ones, but “what some other data do” isn’t all that useful. Almost all of it comes from electronic devices at airports, so it is inherently biased by the growth of aviation. Note that airplanes and airports go from “nothing” to “almost all of the record” in the last 100 years. From grass fields to acres of tarmac and tons of kerosene burnt per day, surrounded by expanding urban jungles. It doesn’t matter who collects the METARS; that same problem persists.
@Battye:
All UHI ‘gradually accumulates’, especially at airports. During the last 40 years, SJC, the local San Jose airport, has gone from a small private-plane field with a terminal for commercial use, where you could leave your car and in less than 100 yards be walking up the roll-around stairs, to one where there are massive “international terminals” and you get to drive a couple-of-miles loop to go between terminals. A huge area is now paved for “long term parking”.
IMHO, much of what we measure in the instrumental record is the growth of aviation from 1940 to 2012. Changing which instruments are in a set (like v1 vs v3) shifts how much of that kind of effect is measured vs more pristine places.
@Brent Hargreaves:
Near as I can tell, they’ve tossed it in the bin. I saved a copy some long time ago. I think others have too. I’d love to find a public copy still available.
@Barry:
My “start date” is not 1740. The “start date” is the most recent data item. 1990 in this analysis. The slope of the trend line just reaches your desired number of 0.75 C (that matches my rough eyeball estimate) at that date.
@Quinn the Eskimo:
Looks interesting… now if I can just find the time to look into it 😉
@Lazy Teenager:
I am not claiming that the ’70s dip was caused by splice artifacts. It was a known cool time. In fact, that the graphs find it (and things like “1800 and froze to death”) acts as a sanity check that the code works reasonably well.
My assertion is one of suspicion: that “splice artifacts” (in quotes, as some of the effects are not formally a splice but an ‘averaging together in a kind of a splice’) are a probable cause of things like the 1987-1990 “shift” seen at the same time that the Duplicate Number changes in v2, indicating a change of instruments or processes. You need to be less lazy and read more completely, please. BTW, I have no need to provide an “explanation” for your misreadings…
@Doe:
Nice… very nice… TOBS has always bothered me… Over the course of a month, one would expect it to “average out”, since the “climate scientists” seem to think all sorts of other errors can just be averaged away… ;-0
@TonyTheGeek:
Blush 🙂
But honestly, isn’t it just that we are both Geeks so we understand Geek Speak? ;-0
@KR:
Anyone who thinks UHI doesn’t introduce a bias has never walked barefoot down a paved road in summer… Or had to stand on the tarmac waiting to board a plane…
@Al in Kansas:
BINGO! Now figure in that there is a giant Polar Vortex with megatons of air spinning down the funnel to the cold pole, and similar megatons of hot air rising over the tropics, with huge heat transfer via evaporation and condensation and loads of heat dumped via IR at the TOP of the atmosphere, with variable height, mass, water content, and velocity; and tell me again how surface temperatures explain anything?…
@Bill Illis:
It’s worse than that. Individual Met Offices can change the data prior to sending it to NCDC. So it might or might not be “raw” as it is originated. Many of the METARS are raw, but they also have giant errors in them (like that 144 C value…) so even what IS raw is full of, er, “issues”…
It’s a Hobson’s choice between “crap data” and “fudged data”…
@sunshinehours1 :
Spot on.
@Pascvaks:
You are most welcome.
@David Falkner:
If it doesn’t matter what data you use, and you always get the same results, then the data do not say anything; it is all in the processing…. Just use a single thermometer and be done with it…
The “negative space” issue with “they are all the same” is that they ought not to all be the same if they are truly independent data sets and different processes. The differences might not be dramatic, but if there are no differences, that is a Red Flag of suspicion that things are not as ‘independent’ as claimed.
I see that is echoing Luther Wu… Yeah, “my thoughts exactly” 😉
@KR:
What you don’t seem to realize is that GIStemp keeps temperatures AS TEMPERATURES, and not anomalies, all the way to the last “grid / box” step. Only then does it create grid / box anomalies by comparing a fictional ‘grid box’ value now to an equally fictional one in the past. (They must be predominantly fictional, as the total number of grid / boxes is vastly higher than the present number of thermometers in use. Roughly 14/16ths of the boxes have no thermometer in them…)
So the argument that “it is all done with anomalies” is just flat out wrong for GIStemp.
Don’t know about the other programs as I’ve not looked inside their code. Have you?
@EthicallyCivil:
Yeah, I love the way ‘urban areas’ never grow in the Warmers World, yet the global population goes from 1 Billion to 9 billion and a massive shift of people move from the rural landscape into the cities over the last 100 years…
@Luther Wu:
I did an analysis of GHCN v2 some years back that shows the percentage of Airports rising over time and by latitude:
http://chiefio.wordpress.com/2009/12/08/ncdc-ghcn-airports-by-year-by-latitude/
While some places are ‘only’ 50% or so Airports, others are a ‘bit more’:

Yup, just shy of 92% of all GHCN thermometers in the USA are at airports.
I’d call that a problem…

Not like airports in the USA got larger between 1950 and 2010… Or the Jet Age began… or runways grew from 2000 ft to 10,000 ft for jets… or…
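The airport share is easy to tabulate from any station inventory that carries an airport indicator. A rough sketch, assuming the inventory has already been parsed into a CSV with station_id, country, name and airport_flag columns (the real GHCN inventory is a fixed-width file, so parse it per the version's README first; the file name here is hypothetical):

import pandas as pd

inv = pd.read_csv("ghcn_inventory.csv")            # hypothetical pre-parsed inventory
us = inv[inv.country == "US"]
share = (us.airport_flag == "A").mean() * 100.0    # fraction of US stations flagged as airports
print(f"{share:.1f}% of US stations are flagged as airports")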
@Zeke:
“without trying to correct for these biases.”
In other words, making up a value that we HOPE is better…
“But Hope is not a strategy. -E.M.Smith”.
And that, in a nutshell, is my core complaint about the data and how they are used. There is, as you illustrated, just so much “instrument change” that the basic data are fundamentally useless for climatology. That, by necessity, means we are depending on Hope and Trust in the fiddling of the data to “adjust it” and “fix it”.
I prefer to just recognize that the basic data are not “fit for purpose”, that the “fiddling” is error-prone, and that “hope” is not a decent foundation for Science.
@rilfeld :
You got it! We have two basic POV.
One is that the variations in the constitution of the data are so large that the result just isn’t very reliable or usable. Zeke ran into that in his example. I demonstrate it in my test suite / comparison.
The other says “But if we adjust, fiddle, and use just the right process, no matter what code is run you get the same results!” (or the analog “No matter what data you feed the code you get the same results!”)
But if the data used don’t change the outcome, what use are the data? If a specific data set fed to different codes gives the same results, what trust can be put in the code?
There ought to be variations based on specific data used and based on specific codes run. Those variations ought to be characterized to measure the error bars on each. THEN the best data and / or best codes can be selected / validated / vetted.
It’s like taking 10 cars to the race track with 10 drivers and they all clock the same lap time. Something isn’t right when that happens…
In a valid test, there ought to be outlier codes and there ought to be measurable effects from changes of the data set shown and calibrated by benchmarks. There are no benchmarks, and we are told the individual data used don’t make any difference. That’s just wrong.
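One concrete form such a benchmark could take: run a single trend estimator over both versions of a data set, truncated to a common period, and report the difference in fitted slope together with its uncertainty. A bare-bones sketch (my own, assuming two pre-computed annual anomaly series on a common set of years):

import numpy as np

def trend_and_se(years, anoms):
    # Ordinary least-squares slope in C/decade, with its standard error.
    x = years - years.mean()
    slope = (x * (anoms - anoms.mean())).sum() / (x ** 2).sum()
    resid = anoms - (anoms.mean() + slope * x)
    se = np.sqrt((resid ** 2).sum() / (len(x) - 2) / (x ** 2).sum())
    return slope * 10, se * 10

def compare_versions(years, v1_anoms, v3_anoms):
    t1, s1 = trend_and_se(years, v1_anoms)
    t3, s3 = trend_and_se(years, v3_anoms)
    # The combined SE treats the two estimates as independent, which they are
    # not strictly, so read it as a rough yardstick only.
    return t1, t3, t3 - t1, np.hypot(s1, s3)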
@Quinn the Eskimo:
Very well said! That is the basic issue. The “negative space”… what ought to be there but isn’t.
There is an undeniable UHI and Airport Heat Island effect. Anyone with bare feet can find it.
There is an undeniable gross increase in urbanization around the globe on all continents and a massive increase in population.
Yet “no UHI effect is found”? Then the test is broken… which means either the data or the codes or both are broken.
My “BS O’meter” will not allow me to do otherwise than doubt the assertion that UHI doesn’t matter. In fact, to the extent the “climate scientists” claim to believe it doesn’t matter, it just makes them look ever less credible, ever more gullible, or both.

E.M.Smith
Editor
June 22, 2012 4:22 pm

@Barry:
In re-reading your comment:

2. The pre-1900 data gets sparser and sparser. Adjustments are bound to have a more obvious effect the further back in time you go. GISS and UK Met Office don’t offer trend estimates from before 1900 because the data is uncertain.
I bet that there will be little difference between V1 and V3 for the global trend from 1900. This will accord with pretty much everything I’ve read on the subject. I don’t think trend estimates using surface data starting from the 18th century are going to be in any way reliable. Does anyone else do this?

I think you have the dates used by GISS and Hadley off. GIStemp uses an 1880 cutoff on data, Hadley, IIRC, is 1850. BOTH use data from prior to 1900.
For your comparisons to those sets, you need to use those ranges.
@Smokey:
Yeah, that explains everything 😉

pouncer
June 22, 2012 4:24 pm

Latimer Alder writes
“At the risk of being terribly pedantic, if the average temperature gets to 500K we are all already in a lot of trouble 🙂 You might wish to rephrase it at about 290K (+17C).”
Kelvin. Rankine. Wunderlich. One of those dead white males. I’m old. I get them confused.
Speaking of Wunderlich, his 19th-century measurements of human body temperature suffered some of the same problems as we now see with global temperatures. The best he could do — and he was a VERY careful guy with a WHOLE LOT of data — was average his results down to 37 degrees. (Uhm, centigrade. Avoiding the dead white male name issue, again.)
37 degrees German is 98.6 American. But 98.6 is synthetically precise, not actually accurate. That is, plus or minus 0.1 degree Fahrenheit, compared to plus or minus half a degree centigrade, is, uhm, almost ten times too precise.
If Chiefio’s work is comparable, and the trend, as adjusted, in the GHCN data is ten times more precise than the accuracy of the underlying figures actually supports, well, it won’t be the first time temperature was badly reported to the public.
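To put numbers on that precision point: 37 C converts to 98.6 F, but a plus-or-minus 0.5 C uncertainty on the Celsius figure converts to about plus-or-minus 0.9 F, so quoting the Fahrenheit value to a tenth of a degree implies more precision than the underlying measurement supports. A trivial check in Python (the 0.5 C is an assumed rounding uncertainty):

c, dc = 37.0, 0.5                    # Celsius value and its assumed uncertainty
f = c * 9 / 5 + 32                   # 98.6 F
df = dc * 9 / 5                      # 0.9 F: the uncertainty converts too
print(f"{f:.1f} F +/- {df:.1f} F")   # the trailing .6 is not really significant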
By the way, speaking of adjusted data, did y’all see where they’d adjusted the time of Secretariat’s races in the Triple Crown?

clipe
June 22, 2012 4:38 pm

Smokey says:
June 22, 2012 at 3:43 pm
clipe,
Maybe this explains it.
And maybe not.

Like dividing .1 by 3 = 0?

E.M.Smith
Editor
June 22, 2012 6:08 pm

@pouncer:
Adjusting Secretariat? Say it isn’t so!!
Since the non-random error in the GHCN older data is in the 1 F range for the US data, and who knows what, up to 1 C, for much of the ROW, IMHO the best precision that can legitimately be claimed is about 1 F, or 1/2 C. Even that is just a bald guess. The averaging that gives a more precise value can remove the random error, but not any systematic errors, and we don’t know how much systematic error there might be… so we must presume it is as large as the width of the recorded precision.
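That distinction between random and systematic error is easy to demonstrate numerically (a toy sketch with made-up numbers, not a claim about the size of any real bias): averaging many readings shrinks independent random noise toward zero, but a shared systematic offset survives the averaging untouched.

import numpy as np

rng = np.random.default_rng(42)
true_value = 15.0
n = 10000

random_err = rng.normal(0.0, 0.5, n)   # independent random reading errors
systematic = 0.3                       # shared bias, e.g. a warm-reading site

readings = true_value + random_err + systematic
print(readings.mean() - true_value)    # ~0.3: the random part averages away, the bias remains
print(random_err.mean())               # ~0.0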
@Barry:
Just for you, custom graphs with trend lines and formulas aligned on the GIStemp and Hadley HadCRUT start of time period:
http://chiefio.wordpress.com/2012/06/23/ghcn-v1-vs-v3-special-alignments/
(Can you tell I grew up in a service business? Family Restaurant and all… Even making custom cut reports for a commenter 😉 )

Editor
June 23, 2012 12:44 am

Zeke Hausfather – I checked the BEST paper (Muller, Curry et al) on UHE for method (it used MODIS to identify rural stations), and then checked the Australian stations used (I asked BEST for, and received, the set of stations). Of the ~800 Australian stations, over 100 had “airport” or equivalent in their name, and over 100 had “post office” or equivalent in their name. The inevitable conclusion, which I posted in a comment on WUWT at the time, is that their method of assessing UHE was useless.
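That kind of name screen is simple to reproduce for any station list. A sketch, assuming a plain-text file with one station name per line (the file name is hypothetical, and the matching terms are just the two mentioned above):

# Count station names containing "airport" or "post office", case-insensitive.
terms = ("airport", "post office")
with open("stations.txt") as f:
    names = [line.strip().lower() for line in f if line.strip()]
hits = {t: sum(t in n for n in names) for t in terms}
print(hits, "of", len(names), "stations")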
Some time ago, I did an analysis of Australian temperatures in an attempt to identify the presence, or absence, of UHE. In that analysis, I took all long-term stations, examined their location using Google Earth to see if they looked truly rural, and then compared the temperature trends of the clearly-rural stations with all the others’. The temperature trends of the clearly-rural stations were substantially lower. All the data and all the Google Earth images were posted with my analysis on WUWT, so that others could check, which some did.
It is very clear to me that UHE has not yet been properly identified and quantified.
[I am currently on holiday – about to go and watch Black Caviar – and a long way from my home computer so sorry, but I can’t supply references for the above and please note that it is all from memory.]

Amino Acids in Meteorites
June 23, 2012 1:47 am

Zeke Hausfather
Again, “similar results” is not good enough. All the hysteria over warming is literally based on 1/10ths of a degree. A 1/10th of a degree difference in the output, from the various ways the data are handled, results in panic costing billions because disaster is imminent. And producing 1/10th of a degree of warming is oh so easy. Fudging can easily add a 1/10th here and there. Extrapolating over 1000 miles can do it too. Whole degrees, let alone 1/10ths, can be fudged in.
Here’s an example that should make anyone wonder if GISTemp, for one, can be trusted with 1/10ths of a degree—and it’s a very easy example to understand:
Does GISTemp change? part 1

Does GISTemp change? part 2

Al in Kansas
June 23, 2012 6:04 am

Preaching to the choir, Rev Smith. 🙂 The earth actually has runaway water vapor feedback heating the planet. It has had it ever since there were oceans to supply an unlimited amount of greenhouse gas. As I have commented previously, I think Miskolczi got it right. The atmosphere adjusts to a fixed optical depth in the far infrared that results in maximum energy transfer to space. Any additional heat input results in more clouds (NEGATIVE feedback) and more weather. Not more extreme, just a few more thunderstorms or a small increase in evaporation and precipitation.

tonyfolleyTonyTheGeek
June 23, 2012 8:17 am

@ E.M. Smith
But honestly, isn’t it just that we are both Geeks so we understand Geek Speak? ;-0
I think it is a coding thing (I have done a little C++ for fun), in that strange effects can come from seemingly innocuous code. Oh, and you have NEVER found all the bugs 🙂
TonyTheGeek

The Pompous Git
June 23, 2012 1:26 pm

E.M.Smith said @ June 22, 2012 at 4:01 am

But the “debate” seems firmly rooted in the bankrupt notion of “average temperature” even though it has no philosophical basis in reality, confounds heat with temperature, does not conform with the physics of heat flux (as you point out) and is frankly just silly. As near as I can tell, it is only because “temperature is what we measure” and folks do not understand that averaging is used to hide information that is “in the way” of seeing something else and is NOT an easy tool to use and is completely meaningless when used to average temperatures as they are an intrinsic property…
But pointing that out just causes the average person to glaze over and causes “climate scientists” and Warming Believers to toss rocks and flames at you. So I only do it on special occasions 😉 Most of the time I behave myself and pretend that ‘an average of temperatures’ is a sane thing to do… though I’ll usually sneak in a small disclaimer about it…
It really is an “Angels and Pins” argument, in terms of physics… (Which is why I mostly show faults in the method and data biases rather than try to find a ‘better way to calculate the Global Average Temperature’…. I’m not fond of trying to show one way of counting Angles and measuring Pinheads is better than some other one…;-)

The earliest mention I know of angels dancing on the head of pins is in Chillingworth’s Religion of Protestants: a Safe Way to Salvation (1638). He accuses unnamed scholastics of debating “Whether a Million of Angels may not fit upon a needle’s point?” H.S. Lang, author of Aristotle’s Physics and its Medieval Varieties (1992), wrote (p 284): “The question of how many angels can dance on the point of a needle, or the head of a pin, is often attributed to ‘late medieval writers’ … In point of fact, the question has never been found in this form”.
WUWT commenters seem to love propagating this sort of story: “In the middle ages it was believed the earth was flat”, “Galileo showed medieval physics was wrong by dropping weights from the Leaning Tower of Pisa”, and so on. They are equivalent in the history of ideas to urban myths like the exploding cat/poodle/hamster [delete whichever is inapplicable] in the microwave.
I think, EM, your most important point is that the notion of “average temperature” is just such a myth.

E.M.Smith
Editor
June 23, 2012 5:06 pm

@Jonas:
See the videos here:
http://wattsupwiththat.com/2012/06/22/comparing-ghcn-v1-and-v3/#comment-1016042
for a nice description of how the Russians have found about 1/2 C of “excess warming” in the data used in GHCN for their country due to Urban Heat Effect…. It’s not just Australia…
Oh, and there’s a paper from the Turkey Met folks as well where they used “all the Turkey” data, not just the ones selected for GHCN, and found Turkey was cooling, not warming.
In a comment from here:
http://chiefio.wordpress.com/2010/03/10/lets-talk-turkey/

vjones says:
12 March 2010 at 3:54 pm (Edit)
EM,
Very interesting post and analysis. I too have been looking at Turkey due to the very many stations for the size of the country. I might do a post on this myself incorporating some of what you have found.
1992 seems to have been very cold in Turkey: http://www3.interscience.wiley.com/journal/114078036/abstract. Perhaps if you can look at the 1990-1995 period more closely this would show up. Looking at many of the individual station records, 1994 seems to have been very warm. Some stations have a 3 degC jump between these years.
Great graphs BTW, but small nitpick – the dates on your X axes are very hard to decipher.

From that paper Abstract:

Abstract
The purpose of this study is to investigate the variations and trends in the long-term annual mean air temperatures by using graphical and statistical time-series methods. The study covers a 63-year period starting from 1930 and uses temperature records from 85 climate stations. First, spatial distributions of the annual mean temperatures and coefficients of variation are studied in order to show normal conditions of the long-term annual mean temperatures. Then variations and trends observed in the annual mean temperatures are investigated using temperature data from 71 climate stations and regional mean series. Various non-parametric tests are used to detect abrupt changes and trends in the long-term mean temperatures of both geographical regions within Turkey and individual stations. The analyses indicate some noticeable variations and significant trends in the long-term annual mean temperatures. Among the geographical regions, only Eastern Anatolia appears to show similar behaviour to the global warming trends, except in the last 5 years. All the coastal regions, however, are characterized by cooling trends in the last two decades. Considering the results of the statistical tests applied to the 71 individual stations data, it could be concluded that annual mean temperatures are generally dominated by a cooling tendency in Turkey. The coldest years of the temperature records of the majority of the stations were 1933 and 1992, respectively.

A common rule of thumb is that once is an accident. Twice is a pattern. Three times is an investigation and charge…
@Amino Acids in Meteorites:
That’s one of those loverly little things that just needs more awareness. We have a ‘product’ that wanders around by more than the sought-after effect, so process is greater than nature; yet some folks want to believe it. Go figure…
@Al in Kansas:
If you look at the increase in precipitation as the sun has gone quiet, I think you can directly see the increased heat flow off planet as rain (the ‘leftovers’ of that evaporate-and-condense cycle). I agree about the feedback and the oceans.
Though I suspect that there is a bi-stable oscillator going on. We’re in the less common hot-end stage that only comes for a dozen thousand years every 100,000 or so. The other stable state is the cold end, when ice / albedo and desertification dominate and we’re in an Ice Age Glacial mode. That the only “tipping point” we risk is “down to cold” is what folks ought to be looking at.
@tonyfolleyTonyTheGeek:
Well, I certainly have found all the bugs in MY code. (From here on out, any “odd” behaviour will be classed as a ‘feature’ by The Marketing Department 😉 We have an ever-growing “feature” list in our products 😉 )
@The Pompous Git :
Note that I never attributed “Angels and pins” to an old source. I just used the modern form of the metaphor… (Duck, dodge, weave 😉
BTW, you’ve warmed my heart that at least one other soul has “Got It” that GAT is a farce…

The Pompous Git
June 23, 2012 8:33 pm

E.M.Smith said @ June 23, 2012 at 5:06 pm

Note that I never attributed “Angels and pins” to an old source. I just used the modern form of the metaphor… (Duck, dodge, weave 😉
BTW, you’ve warmed my heart that at least one other soul has “Got It” that GAT is a farce…

Not for an instant did it cross my mind that you believed the canard and I took it that you were using the expression metaphorically. No ducking, dodging and weaving necessary 🙂
The phrase: “The Git got it that GAT is a farce” has a certain charm doncha think? I won’t repeat what I said to myself when the penny dropped cos I don’t want to distress the mods who then have to remind me that this is a family blog 😉

E.M.Smith
Editor
June 23, 2012 11:13 pm

@The Pompous Git:
Tee Hee ;=)