CRUTEM3 "…code did not adhere to standards one might find in professional software engineering"

Those of us who have looked at GISS and CRU code have been saying this for months. Now John Graham-Cumming has submitted a statement to the UK Parliament about the quality and veracity of the CRU code that has been released, saying “they have not released everything”.

http://popfile.sourceforge.net/jgrahamc.gif

I found this line most interesting:

“I have never been a climate change skeptic and until the release of emails from UEA/CRU I had paid little attention to the science surrounding it.”

Here is his statement as can be seen at:

http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc5502.htm

=================================

Memorandum submitted by John Graham-Cumming (CRU 55)

I am writing at this late juncture regarding this matter because I have now seen that two separate pieces of written evidence to your committee mention me (without using my name) and I feel it is appropriate to provide you with some further information. I am a professional computer programmer who started programming almost 30 years ago. I have a BA in Mathematics and Computation from Oxford University and a DPhil in Computer Security also from Oxford. My entire career has been spent in computer software in the UK, US and France.

I am also a frequent blogger on science topics (my blog was recently named by The Times as one of its top 30 science blogs). Shortly after the release of emails from UEA/CRU I looked at them out of curiosity and found that there was a large amount of software along with the messages. Looking at the software itself I was surprised to see that it was of poor quality. This resulted in my appearance on BBC Newsnight criticizing the quality of the UEA/CRU code in early December 2009 (see http://news.bbc.co.uk/1/hi/programmes/newsnight/8395514.stm).

That appearance and subsequent errors I have found in both the data provided by the Met Office and the code used to process that data are referenced in two submissions. I had not previously planned to submit anything to your committee, as I felt that I had nothing relevant to say, but the two submissions which reference me warrant some clarification directly from me, the source.

I have never been a climate change skeptic and until the release of emails from UEA/CRU I had paid little attention to the science surrounding it.

In the written submission by Professor Hans von Storch and Dr. Myles R. Allen there are three paragraphs that concern me:

“3.1 An allegation aired on BBC’s “Newsnight” that software used in the production of this dataset was unreliable. It emerged on investigation that neither of the two pieces of software produced in support of this allegation was anything to do with the HadCRUT instrumental temperature record. Newsnight have declined to answer the question of whether they were aware of this at the time their allegations were made.

3.2 A problem identified by an amateur computer analyst with estimates of average climate (not climate trends) affecting less than 1% of the HadCRUT data, mostly in Australasia, and some station identifiers being incorrect. These, it appears, were genuine issues with some of the input data (not analysis software) of HadCRUT which have been acknowledged by the Met Office and corrected. They do not affect trends estimated from the data, and hence have no bearing on conclusions regarding the detection and attribution of external influence on climate.

4. It is possible, of course, that further scrutiny will reveal more serious problems, but given the intensity of the scrutiny to date, we do not think this is particularly likely. The close correspondence between the HadCRUT data and the other two internationally recognised surface temperature datasets suggests that key conclusions, such as the unequivocal warming over the past century, are not sensitive to the analysis procedure.”

I am the ‘computer analyst’ mentioned in 3.2 who found the errors mentioned. I am also the person mentioned in 3.1 who looked at the code on Newsnight.

In paragraph 4 the authors write “It is possible, of course, that further scrutiny will reveal more serious problems, but given the intensity of the scrutiny to date, we do not think this is particularly likely.” This has turned out to be incorrect. On February 7, 2010 I emailed the Met Office to tell them that I believed that I had found a wide ranging problem in the data (and by extension the code used to generate the data) concerning error estimates surrounding the global warming trend. On February 24, 2010 the Met Office confirmed via their press office to Newsnight that I had found a genuine problem with the generation of ‘station errors’ (part of the global warming error estimate).

In the written submission by Sir Edward Acton there are two paragraphs that concern the things I have looked at:

“3.4.7 CRU has been accused of the effective, if not deliberate, falsification of findings through deployment of “substandard” computer programs and documentation. But the criticized computer programs were not used to produce CRUTEM3 data, nor were they written for third-party users. They were written for/by researchers who understand their limitations and who inspect intermediate results to identify and solve errors.

3.4.8 The different computer program used to produce the CRUTEM3 dataset has now been released by the MOHC with the support of CRU.”

My points:

1. Although the code I criticized on Newsnight was not the CRUTEM3 code, the fact that the other code written at CRU was of low standard is relevant. My point on Newsnight was that it appeared that the organization writing the code did not adhere to standards one might find in professional software engineering. The code had easily identified bugs, no visible test mechanism, was not apparently under version control and was poorly documented. It would not be surprising to find that other code written at the same organization was of similar quality. That I subsequently found a bug in the actual CRUTEM3 code only reinforces my opinion.

2. I would urge the committee to look into whether statement 3.4.8 is accurate. The Met Office has released code for calculating CRUTEM3 but they have not released everything (for example, they have not released the code for ‘station errors’ in which I identified a wide-ranging bug, or the code for generating the error range based on the station coverage), and when they released the code they did not indicate that it was the program normally used for CRUTEM3 (as implied by 3.4.8) but stated “[the code] takes the station data files and makes gridded fields in the same way as used in CRUTEM3.” Whether 3.4.8 is accurate or not probably rests on the interpretation of “in the same way as”. My reading is that this implies that the released code is not the actual code used for CRUTEM3. It would be worrying to discover that 3.4.8 is inaccurate, but I believe it should be clarified.

I rest at your disposition for further information, or to appear personally if necessary.

John Graham-Cumming

March 2010

Mike
March 4, 2010 2:26 pm

bill (09:51:52) :
“Mike, thats true but by the same token if the standards (of computer coding out of which are born the models) are rather low in this field, should we take their work as good enough to be the basis of policies which could well cause considerable lifestyle changes not to mention rather more taxes?”
Fair question. (1) These low standards did get us to the moon, etc., etc.
(2) Many climate research groups have arrived at similar results.
(3) Most of the code for the data analysis of temps is now available. Small errors that have been found have not substantially changed the results.
The last two points illustrate the robustness of the climate results.
Small note: the programs at CRU are for data analysis, not climate modeling. The hockey stick is a data set of past temps.
Here is another way to look at it. Suppose we went back to the early medical work showing the link between tobacco and cancer. It probably would not meet these new standards. Did they save all the data? I doubt it. Are all the statistics programs that were used available? I can’t imagine it. Yet, it would be foolish to run out and start smoking.
Remember that the tobacco companies put up a fight to keep people smoking. Millions died.
If we enact C&T schemes now with high caps for now, at least we will have a system in place. I figure it will take a few years to get the “bugs” out of C&T. If the temps go down, and I’d venture there is a 5% chance of that,
then we keep the caps high. Not much harm done. If we do nothing, and the temps go up and we keep piling up the CO2, we are going to mess things up big time. No, I don’t think it will be the end of civilization, but major hardships will be imposed.
We do need to weigh the risks of doing nothing. Some people will go to doctor after doctor until they hear what they want. If the first nine doctors tell you to lose weight, eat better and get more exercise but the tenth one says not to worry, it is tempting to go with the tenth doctor, but this is not wise.
If the models are off, it only means the warming will come a few years or decades later. You can’t get around the physics that more CO2 will eventually cause big problems.

D, King
March 4, 2010 2:31 pm

A rare look at the Met Office and CRU code assembled,
and together.
http://tinyurl.com/y9a43bn

rbateman
March 4, 2010 2:37 pm

Milwaukee Bob (14:09:09)
THEN, there is that damn data….. ☺

That damn data status can be improved upon. The data sets (raw) put out were by no means derived from an exhaustive and thorough effort.
I would estimate that the completeness of the records could be improved upon greatly by examining the archives closely.
I am getting this very unnerving feeling that few have bothered to look into the differences between what’s in archives and what’s being put out there as “that’s it, nothing more to see”.
How many out there have looked?

kadaka
March 4, 2010 2:37 pm

Moderators: Re: kadaka (13:58:35)
Ah, that change works as well. Thank you for your prompt attention.
Feel free to delete this and my previous comment at your discretion, I won’t mind. If you want to leave them, perhaps as a sort of change note, that’s fine too.

davidmhoffer
March 4, 2010 2:39 pm

E. M. Smith
With modern journalling file systems, you can even capture moment to moment changes. The Network Appliance has a kind of version control built in to it. This is a 10 minute kind of thing to set up>>
Well now we’re talking storage management, not backup 🙂
All storage arrays can do a snapshot of what is stored on them and keep it as a “point in time” copy. Not all snapshot techniques are the same, so some arrays can only support a few snapshots while others (like Netapp) can support hundreds. This is slightly different than version control. Also, it is valid for flat files, but databases require that they be quiesced before a snapshot, otherwise there is a possibility of the snapshot losing an in-flight transaction and compromising the integrity of the snapshot. Many vendors have tools that can do this automatically for common databases like Oracle, SQL, Exchange, etc.
As for journaling, yes, that can be done at the file system level, and most database and transaction processing systems can do it at the application level. In brief, if you are willing to spend the money, you can capture the state of the whole system down to the second if you want. But as a rule of thumb, the minimum any IT shop would have in place would be weekly fulls and daily incrementals. A typical Netapp shop (or Equallogic, or Sun7000, IBM nSeries, which support “re-direct on write” snapshots) would supplement the tape-based backup system with hourly snapshots.
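To make the “quiesce before snapshot” point concrete, here is a minimal Python sketch of the freeze/snapshot/thaw sequence. The DatabaseHandle and StorageArray classes and their methods are hypothetical stand-ins, not any vendor’s actual API.

import contextlib
import time


class DatabaseHandle:
    """Hypothetical database client exposing freeze/thaw hooks."""

    def freeze(self):
        print("flushing in-flight transactions and pausing writes")

    def thaw(self):
        print("resuming writes")


class StorageArray:
    """Hypothetical array exposing a point-in-time snapshot call."""

    def snapshot(self, volume: str) -> str:
        name = f"{volume}-snap-{int(time.time())}"
        print(f"created snapshot {name}")
        return name


@contextlib.contextmanager
def quiesced(db: DatabaseHandle):
    # Pause writes so the snapshot cannot land in the middle of a transaction.
    db.freeze()
    try:
        yield
    finally:
        db.thaw()  # always resume, even if the snapshot call fails


def take_consistent_snapshot(db: DatabaseHandle, array: StorageArray, volume: str) -> str:
    with quiesced(db):
        return array.snapshot(volume)


if __name__ == "__main__":
    take_consistent_snapshot(DatabaseHandle(), StorageArray(), "oradata01")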

March 4, 2010 2:43 pm

Mike (14:26:24),
*sigh*
Instead of sending you to logic re-education camp, start reading the archives. “What if” scenarios can mean anything.
And by saying: “You can’t get around the physics that more CO2 will eventually cause big problems,” you’re telling us that you are right and planet Earth is wrong. CO2 has been just a mite higher in the past, without causing your imaginary “big problems.” click

max
March 4, 2010 2:47 pm

Anna:
“No. The raw data are not shared with all and sundry in the accelerator experiments. The groups have rights of publication. Once the data is archived, it is open for sharing, after the experiment is closed, and still there are caveats.
Replication is done by having more than one experiment at a time. In the LHC ATLAS and CMS are competing experiments studying the same physics independently.
One reason is proprietary. It takes ten years of preparation by hundreds of people to set up the experiment and take the data. You would not find people willing to do that if the first theorist who came with a FOI request got his/her hands on the data before publication by the group.
The second is the complexity. Each experiment develops its own computer codes ( not well documented) corrections etc that an outsider would have to spend years to do all over again, given the raw data. That is why at least two experiments are necessary.”
I am a little puzzled by this. Just how do people try to reproduce the results of experiments before those results have been published? Yes, to various extents there is exchange of information prior to publication, but prior to publication the results are subject to revision. This is not about pre-publication embargoes of data so that the experimenters can publish, this is about the post-publication blackout on the data so that the results cannot be tested once they become known to the world at large. “Once everyone is done publishing, it is expected that the raw data will become available so that the results can be checked” is a perfectly acceptable formulation, but contextually I didn’t (and still don’t) see the need to add a “post publication” qualifier.
A side issue: the problem with climate science is that there are so many possible data sets and means of manipulating them that you can produce a wide variety of results. These are not all equally valuable in determining global trends; in fact many are almost useless. Without information about what data sets are used and how they are manipulated it is impossible to determine (among other things) to what extent the results are an artifact of the manipulation, to what extent the data sets are representative of a global trend and what degree of significance to attach to the results.

Another Ian
March 4, 2010 2:47 pm

Somewhat o/t
Another view on amateur
I once worked for a man who volunteered for army service very early in WWII. Their medical officer was, in civvies, a VD specialist. His basic message on that subject was
“It’s not the professionals you have to worry about, it’s the bloody enthusiastic amateurs”
And I suspect that he would have allowed that, while there are gifted amateurs, these are vastly exceeded by amateurs who think that they are gifted.
And applicable to most subjects.

Spen
March 4, 2010 3:17 pm

Simple question for the committee.
ISO 9001. Is CRU accredited? If not, stop wasting time and money – close the enquiry.
If it is accredited, then when was the latest audit and what were the results? If negative, stop wasting time and money – close the enquiry.
Then sanitise the organisation.

davidmhoffer
March 4, 2010 3:28 pm

Visceral Rebellion;
Funny you mention Y2K. I’m on a project that found a Y2K bug a couple of weeks ago. Fortunately the buggy code wasn’t invoked in that routine until we tried it, but still… Only I would be hit with Y2K a decade later.>>
Don’t worry, there’s another round coming. A lot of the fixes were temporary. They took a range like 0 to 18 or 0 to 34 and wrote a little routine to convert JUST that date range to 2000+ instead of 1900+ on the assumption that a) there were no computer records prior to 1960 or so to conflict with, and b) the software would be replaced with new software before the new hard coded fix ran out of runway. Then everyone forgot about the new deadline they created for themselves and went back to day to day emergencies.
Grace Hopper would chuckle.
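For readers who did not live through it, here is a minimal Python sketch of the two-digit “windowing” patch described above. The pivot value of 34 is taken from the example range in the comment and is otherwise an arbitrary assumption.

PIVOT = 34  # two-digit years 00..34 map to the 2000s, 35..99 to the 1900s

def expand_two_digit_year(yy: int) -> int:
    # The temporary fix: guess the century from a hard-coded cutoff.
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 2000 + yy if yy <= PIVOT else 1900 + yy

assert expand_two_digit_year(10) == 2010
assert expand_two_digit_year(99) == 1999
# The new self-made deadline: in 2035, records legitimately dated '35' land back in 1935.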

Richard Sharpe
March 4, 2010 3:29 pm

davidmhoffer (14:39:10) said:

A typical Netapp shop (or Equallogic, or Sun7000, IBM nSeries, which support “re-direct on write” snapshots) would supplement the tape-based backup system with hourly snapshots.

Would you be referring to “copy on write?” (The actual implementation is unlikely to copy the old data, rather it would simply allocate a new block for the new data and change some pointers in the metadata [block lookup table/b+tree/whatever].)

davidmhoffer
March 4, 2010 3:43 pm

Richard Sharpe (15:29:14) :
davidmhoffer (14:39:10) said:
Would you be referring to “copy on write?” (The actual implementation is unlikely to copy the old data, rather it would simply allocate a new block for the new data and change some pointers in the metadata [block lookup table/b+tree/whatever].)
Yes but no. Early storage arrays like EMC, HP EVA, LSI, etc etc used a snapshot called “copy on write”. When the file system wants to change a data block, the snapshot tool interrupts the write, copies the original block and writes it to snapshot reserve to be retrieved later if the snapshot needs to be invoked, then allows the original write to change the original block. This works, but uses a lot of I/O, so a limited number of snapshots can be supported before performance of the array is impacted.
Netapp, Equallogic, others use a different technique called “re-direct on write”. In their file systems, when a block needs to be changed, the file system writes a net new block, and leaves the original in place. The file system itself is changed to point to the new block (re-direct) instead of the old one. The snapshot tool in this system copies the file system at a point in time to preserve the pointers (you will hear the term “pointer based snapshot” as well, same thing).
There are pros and cons to both strategies.
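To make the contrast concrete, here is a toy Python sketch of the two styles, copy on write versus re-direct on write. The classes and in-memory block layout are illustrative assumptions only, not any vendor’s actual implementation.

class CopyOnWriteStore:
    """On each overwrite, the original block is copied aside for every snapshot."""

    def __init__(self, blocks):
        self.blocks = list(blocks)   # live data
        self.snapshots = {}          # snapshot name -> {index: preserved old block}

    def snapshot(self, name):
        self.snapshots[name] = {}    # nothing is copied until a write happens

    def write(self, index, data):
        for saved in self.snapshots.values():
            saved.setdefault(index, self.blocks[index])  # extra I/O: copy old block aside
        self.blocks[index] = data    # then overwrite in place

    def read_snapshot(self, name, index):
        return self.snapshots[name].get(index, self.blocks[index])


class RedirectOnWriteStore:
    """New data goes to a fresh block; snapshots just freeze the pointer table."""

    def __init__(self, blocks):
        self.store = dict(enumerate(blocks))       # block_id -> data (never overwritten)
        self.next_id = len(blocks)
        self.pointers = list(range(len(blocks)))   # live file-system pointers
        self.snapshots = {}                        # snapshot name -> frozen pointer table

    def snapshot(self, name):
        self.snapshots[name] = list(self.pointers)  # cheap: copy pointers, not data

    def write(self, index, data):
        self.store[self.next_id] = data      # write a net-new block
        self.pointers[index] = self.next_id  # re-direct the live pointer
        self.next_id += 1                    # the old block stays put for snapshots

    def read_snapshot(self, name, index):
        return self.store[self.snapshots[name][index]]


if __name__ == "__main__":
    for cls in (CopyOnWriteStore, RedirectOnWriteStore):
        s = cls(["a", "b", "c"])
        s.snapshot("hourly")
        s.write(1, "B")
        assert s.read_snapshot("hourly", 1) == "b"  # the snapshot still sees the old data
        assert s.read_snapshot("hourly", 2) == "c"
        print(cls.__name__, "behaves as expected")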

Milwaukee Bob
March 4, 2010 3:55 pm

rbateman (14:37:09) :
Absolutely! Collect data on and of every sub-system we know is involved. Have a thorough search for historical data. I am not disagreeing with you.
But if you take a hard, cold look at what data we have now relating to the entire atmospheric system…. It’s zilch.
And there are sub-systems that we know have effects on the total system that we have virtually no data on, and others that we think have effects but we’re not even collecting data on them, for a number of reasons.
Here’s a sentence I deleted from my previous post before I submitted it – “The real shame in all of this is the billions of $$ wasted by so-called “scientists” on partial, low quality data when the equipment and systems used to get the data are not only inept for that purpose, but are crumbling around their feet! Not to mention they know the “model” they’re plugging the data into is full of holes.”
And what do we have for it? Al Gore…. Cap & Trade….. IPCC…… A Draconian EPA….. Thanks a lot, “scientists”. Next time I’ll skip the dance.

March 4, 2010 4:11 pm

An old term comes to mind that a friend told me: “garbage in, garbage out”. I have just started, 3 weeks ago, to look into all the hype about AGW. I have not believed in AGW since it became an issue 20-some-odd years ago. I will not pretend to know the science behind it, as I am an average person in the US, an auto mechanic by trade. I was taught early on in life that no matter what kind of education you have, COMMON SENSE trumps all, most of the time. Good intentions are all well and good, but if you do not inject good common sense, all you have done, as in this instance, is yell fire in a crowded room. It’s all well and good that they say the world is warming, but please BE VERY SURE!!!!, before you tell the world that something needs to be done, that it in fact needs to be done and you know how it should be done. Anything else is just pi$$ing in the wind. This is my first time posting, I hope I did alright. I have many thoughts on this subject and I hope to put them all down little by little. This site is amazing, Anthony, great job!!!! Hopefully, I would like to actually sit with some of you and talk all about this and other subject matter. I find it easier to talk to people than sit here and type, something like Skype, if we can?

John R. Walker
March 4, 2010 4:30 pm

Much of the data coming in is such unstructured and incomplete crap that it lends itself to manipulation using QAD code just to try and stitch the data together into something that can even be processed…
It’s like trying to build a car using a collection of bits from different manufacturers – the end result may look like a car but it won’t work like a car…
These whole projects attempting to produce global/regional gridded temperature data need taking right back to square-1 and starting over with consistent methodology using only complete quality assured data-sets. Until then it’s just GIGO…
Maybe they should learn to walk before they can run?

Hilary Ostrov (aka hro001)
March 4, 2010 4:48 pm

John Galt (09:06:48)
“Some very good databases are also open-source”
===
Yes, but as we learned via the Climategate emails (confirmed by none other than Phil Jones during his recent testimony), “open-source” is anathema to “climate scientists”!

Visceral Rebellion
March 4, 2010 5:43 pm

davidmhoffer (15:28:04) :
Don’t worry, there’s another round coming. A lot of the fixes were temporary. They took a range like 0 to 18 or 0 to 34 and wrote a little routine to convert JUST that date range to 2000+ instead of 1900+ on the assumption that a) there were no computer records prior to 1960 or so to conflict with, and b) the software would be replaced with new software before the new hard coded fix ran out of runway. Then everyone forgot about the new deadline they created for themselves and went back to day to day emergencies.
Grace Hopper would chuckle.

Nah, we went whole hog with the long way ’round. If there were ANY chance I’d have to deal with THAT project I’d be looking for another job!
Of course, what they’re doing to us right now out of DC is just as bad, and may be worse, than Y2K ever dreamed of. If you think CRU is dumb and evil, check out CMS.

SOYLENT GREEN
March 4, 2010 6:44 pm

As McGoo said after looking at the Harry_Read_Me files, “I’ve seen tighter routines in virus source code.”

Jeff Alberts
March 4, 2010 6:45 pm

Started programming 30 years ago. I can only assume that’s not a current picture.

rbateman
March 4, 2010 6:59 pm

Milwaukee Bob (15:55:08) :
I’d do it for regional purposes. That’s where it’s going to be of most use.

Roger Sowell
March 4, 2010 7:27 pm

In partial defense of not using professional programmers, and not using professional software techniques: I am guilty of having done that, as it was standard practice for many years in my engineering profession, and many others as well.
Disclosure: as a chemical engineer beginning in the 1970s, I wrote reams of amateur computer code. It worked – eventually; was tested to my satisfaction for the purpose at hand, and was sometimes left for the next poor fellow to deal with, perhaps years later. This was the norm in many organizations who built and ran the chemical plants, refineries, and many other manufacturing operations.
The reasons we did this were financial and time constraints, as there were deadlines to get things done and very little staff or budget for it. When we published in our technical journals, we did not publish code, rather we published the mathematics that went into the computer code. This was standard practice for many years. (for examples of publications in the U.S., see Hydrocarbon Processing, Oil & Gas Journal, Chemical Engineering Progress, also Chemical Engineering, all available in most university libraries). As was mentioned in a comment above, it was assumed that those reading the publication would have the skills to do the programming – that was considered a trivial task.
Then, after sufficient computer code was created internally, a choice arose when a new task was at hand: use somebody’s old code (probably undocumented and poorly written), make it work for your own purposes, or, start from scratch and write your own code.
Management wanted engineers to go with choice number one, but that created problems for management, as they could not understand why old code did not work the first time and produce valid results for the current problem. Eventually, in some organizations, we resorted to having professional software engineers and managers deal with the legacy computer code. And thus was born the technical group within the IT department. The IT department had the same issues, how to reprogram old code and standardize it. They also brought in version control and some of the other good programming techniques mentioned in comments above.
We also found it more economically attractive to lease or buy professionally written and maintained software, and an entirely new industry arose: software providers for the chemical engineers. A couple of such companies were Simulations Sciences of Brea, California, also Aspen Technologies, but there were usually a half-dozen or so. Some of our legacy code was run by the commercial software as a plug-in subroutine. That in itself created more than a few problems, though.
We did not have the fate of the world riding on our software results, but we did have multi-billion dollar processes that could be harmed (or explode) if our code was wrong, and some smaller processes in the hundred million dollar range.
It would appear that the climate scientists are today somewhere in the state that the chemical engineers were in a couple of decades ago: they could use an IT department and quit doing the programming themselves. This ClimateGate fiasco could also create a competitive commercial software industry, where the climate scientists shove their data in the front, and professionally written and maintained software crunches the data to produce the output.
Lamentably, this probably will not happen. A major drawback to these types of commercial software is the lack of flexibility, and stifling of creativity in writing one’s own code to produce results.
My preference is, at the very least, to make sure the software is examined by professionals and brought up to some reassuring standards so that the code is robust and bug-free. This is the minimum for making policies that have the implications proposed by the climate science community. We were able to make do in the early days with our engineer-written, use-it-once code, where the worst outcome was that we installed a pump or heat exchanger that did not work. The world’s economies did not suffer much, although the engineer who did this might have been out of a job for sloppy work. The stakes for climate models are, of course, far higher. Those who make policy based on the climate science should demand that the data and the computer codes be as up-to-date and modern as possible. No expense should be spared.

CodeTech
March 4, 2010 8:30 pm

Mike (14:26:24) :

Mike, OUCH, man. You’re way off here.

Fair question. (1) These low standards did get us to the moon, etc., etc.

No, they most CERTAINLY did not. NASA had the highest standards available at the time, and if the original computer programming is crappy or not archived, it’s because a large number of PEOPLE were involved, each checking the others’ work. Failure was not an option, and the computer was used as it was supposed to be: as a tool, not as the answer.

(2) Many climate research groups have arrived at similar results.

… by using the same contaminated data …

(3) Most of the code for the data analysis of temps is now available. Small errors that have been found have not substantially changed the results.

Most, hey? You’re not keeping up.

The last two points illustrate the robustness of the climate results.

Well, they would if they were accurate.

Remember that the tobacco companies put up a fight to keep people smoking. Millions died.

No, that’s wrong. The tobacco companies put up a fight to remain in business while powerful special interests fought to destroy their LAWFUL business activities. Smoking was, and remains, the decision of the smoker. The anti-smoking lobby long ago crossed the line from idealistically attempting to wean us off of a harmful habit, and are now just outright lying. The parallels are striking, as powerful special interests are fighting to destroy LAWFUL businesses and destroy peoples’ livelihoods on the basis of a disproved hypothesis.

If we enact C&T schemes now with high caps for now, at least we will have a system in place. I figure it will take a few years to get the “bugs” out of C&T. If the temps go down, and I’d venture there is a 5% chance of that, then we keep the caps high. Not much harm done. If we do nothing, and the temps go up and we keep piling up the CO2, we are going to mess things up big time. No, I don’t think it will be the end of civilization, but major hardships will be imposed.

Here’s the OUCH. YOU figure about a 5% chance of nothing bad happening? Well, I figure that’s completely baseless, and delusional.
You figure “not much harm done”? Well, the evidence in Europe and even from this recession say otherwise… that we would literally shut down our first world economies. Not just a slowdown, and not some magical, fairyland of “green jobs and energy”. Not even close. Economic suicide is an understatement. Unlike the 1929 depression, a HUGE percentage of the first world now directly have their savings and investments in the markets, markets that will crash, fail, tumble, tank, and end. It won’t just be stock brokers jumping out of windows.

We do need to weigh the risks of doing nothing. Some people will go to doctor after doctor until they hear what they want. If the first nine doctors tell you to lose weight, eat better and get more exercise but the tenth one says not to worry, it is tempting to go with the tenth doctor, but this is not wise.

Um… again, we need the courage to do nothing. This IS a non-issue. There is NOTHING WE CAN DO that will make a lick of difference. I realize you believe otherwise, but your belief is based on fabrications, lies, political spin, and maybe even a bit of hero-worship. Whichever, your belief is wrong.

If the models are off, it only means the warming will come a few years or decades later. You can’t get around the physics that more CO2 will eventually cause big problems

Yes, actually you can. But it’s nice that you used the word “physics” because it made you sound more authoritative.
When it comes down to it, your entire post has no basis in reality, only a belief system. And that is most of the problem we’re fighting against. Well meaning people have been deliberately misled and used as tools by a few ambitious and unscrupulous people. It’s shameful, really.

anna v
March 4, 2010 9:02 pm

Re: Mike (Mar 4 14:26),
If we enact C&T schemes now with high caps for now, at least we will have a system in place. I figure it will take a few years to get the “bugs” out of C&T. If the temps go down, and I’d venture there is a 5% chance of that,
then we keep the caps high. Not much harm done. If we do nothing, and the temps go up and we keep piling up the CO2, we are going to mess things up big time. No, I don’t think it will be the end of civilization, but major hardships will be imposed.
Bold mine.
This is a very naive statement. You are saying: take civilization back to 19th century levels, and not much harm will be done!!, if the pyramid scheme worked, of course, in reducing the alleged culprit, CO2.
Already millions starved to death in the Third World with the ethanol fiasco, because the price of corn went artificially up. In Haiti they were eating mud pies before the quake because of this. And you have the hubris to say not much harm will be done. I guess, as long as you and yours survive the pyramid.
There is the law of unexpected consequences that the naive do not know and the sharks of this world know and expect.

anna v
March 4, 2010 9:28 pm

Re: Roger Sowell (Mar 4 19:27),
It seems that the conflict is between professional programming versus creative programming by researchers.
Researchers use programming as a tool. Research grants are usually limited and most of the job is done by graduate students who are fired by the enthusiasm of the subject they have chosen to research. Creativity is nurtured, and creativity is opposed to regimentation.
From your post I see that it was the same in the first years of using computers in industrial situations. Maybe it was a bleed-through from the academic methods.
In particle physics, professional programming goes into the products that are used by the research programs: Monte Carlo programs, thousands of mathematical functions; there is a CERN program library and it is expected to meet professional programming standards.
The programming done in research situations is the problem solving type: these are the data, I need to analyze them using computer programs and statistics as tools, to display the trends and further to see if a current hypothesis is correct.
This is ad hoc, and it is tested by other graduate students working in other experiments and either agreeing or disputing the conclusions.
No decisions of world wide nature hang on these studies.
The problem with climate “science” is that it follows the research pattern while claiming industrial level outputs for political decision making. And that decision may be one that leads to the destruction of the western world as we know it and the death of billions in the third world.

Roger Knights
March 4, 2010 9:53 pm

Mike:
If the temps go down [in a few years], and I’d venture there is a 5% chance of that, ….

There are bets you can make on how warm future years from 2011 up until 2019 will be (based on GISStemp’s online figures), at the well-known, Dublin-based event prediction site https://www.intrade.com (Click on Markets → Climate & Weather → Global Temperature).
For instance, you can bet on whether 2019 will be warmer than 2009. There’s an offer there currently to “sell” 100 lots at 90, which says that there’s a 10% chance of No Warming. So you can get odds that are twice as attractive as your 5% estimate of No Warming.
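A quick back-of-envelope check of that arithmetic, as a minimal Python sketch (the 90 “sell” price and the 5% figure are the numbers quoted above; nothing else is assumed):

sell_price = 90                 # contract pays 100 if 2019 is warmer than 2009
p_warming = sell_price / 100    # a price on a 0-100 contract reads as a probability
p_no_warming = 1 - p_warming
print(f"implied chance of No Warming: {p_no_warming:.0%}")            # 10%
print(f"versus Mike's 5% estimate: {p_no_warming / 0.05:.1f}x higher")  # 2.0x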