From Gary Strand, software engineer at the National Center for Atmospheric Research (NCAR) commenting on Climate Audit:
As a software engineer, I know that climate model software doesn’t meet the best standards available. We’ve made quite a lot of progress, but we’ve still quite a ways to go.
I’ll say. NASA GISS Model E, written in some of the worst FORTRAN coding ever seen, is a challenge even to get running. NASA GISTEMP is even worse. Yet our government has legislation under consideration based significantly on output from the line of models Jim Hansen started. His 1988 testimony to Congress was based entirely on model scenarios.
Do we really want Congress to make trillion-dollar tax decisions today based on “software [that] doesn’t meet the best standards available”?
There’s more. Steve McIntyre comments:
Re: Gary Strand (#56),
Gary, if this is what you think, then this should have been reported in IPCC AR4 so that politicians could advise themselves accordingly. I do not recall seeing any such comment in AR4 – nor for that matter in any review comments.
Steve McIntyre, July 5th, 2009 at 7:49 pm, Re: Gary Strand (#56):
If we can convince funding agencies to better-fund software development, and continued training, then we’ll be on our way. It’s a little harsh, IMHO, to assign blame to software engineers when they’re underpaid and overworked.
Boo-hoo. Hundreds of millions of dollars, if not billions, are being spent. Perhaps the money should be budgeted differently, but IMO there’s an ample amount of overall funding to have adequate software engineers. Maybe there should be some consolidation in the climate model industry, as in the auto industry. If none of the models have adequate software engineering, then how about voluntarily shutting down one of the models and suggesting that the resources be redeployed so that the better models are enhanced?
I’m not making this QOTW to pick on Gary Strand, though I’m sure he’ll see it that way. It is a frank and honest admission by him. I’m making it QOTW because Gary highlights a real problem that we see when we look at code coming from NASA GISS.
But don’t take my word for it, download it yourself and have a look. Take it to a software engineer at your own company and ask them what they think.
GISS Model E global climate model source here
GISTEMP (surface temperature analysis) source here
Sure, this is one of many climate modeling software programs out there, but it happens to be the most influential, since GISS and GISTEMP are the most widely cited outputs in the popular media.
U.S. industry seems to do a better job of software development than government programs, because in business, if something doesn’t work, or doesn’t work well, contracts get lost and/or people get fired. There are consequences for shoddy work.
In academia, the solution is usually to ask for more grant money.

“…software engineers when they’re underpaid and overworked.”
Why not look for another job?
Reported in New Scientist 4 July
Boeing has encountered some serious structural sicknesses in the new 787.
Stress tests have revealed that the “wingbox” and 18 areas on the fuselage around the wings require strengthening.
“Data from the test did not match our computer model,” says Boeing vice-president Scott Francher. That highlights the difficulty of predicting the behaviour of advanced CFRP materials being used in very large structures for the first time.
Jimmy Haigh (07:45:46) :
“…software engineers when they’re underpaid and overworked.”
Why not look for another job?
I believe the best ones went and worked in the financial modelling sector…..
cheers David
Maybe they can outsource GISSTEMP to India or China?
Could anyone hazard a guess about the Met Office’s input/output data in light of the above? No such admissions are forthcoming from that quarter, or they’d be out on their ears! It does worry me when the apparent “uncertainties” are explained away as “better understood”, which means nothing & is not the same as saying “we know & understand how they affect our data output”, particularly when predicting the future!
I still think we are at the limits of computer power & understanding about climate. If you are making assumptions, whether reasonable or not, they are still assumptions, & if they are wrong, no amount of computing power is going to rectify them or give you the answer. That seems to be what the MO are implying by getting this new Deep Thought computer, as they seem to want one thousands of times more powerful than the one they have in order to get the right results! As Prof John Brignell at Number Watch says, a computer model can be right or wrong, but still irrelevant!
OK, I can see how those in the “climate science” community choose to ignore the holes in the fabric of their theories, but why do our media and our politicians? (Yes, it’s rhetorical, but still…)
No one involved in this mess WANTS functioning, competent software engineers, because they might accidentally do their jobs and reveal the fraud to the world. We Can’t Have That.
When you’re conducting a fraud of this scale, you need to make sure the fraud is perpetuated from top to bottom. Crappy, inscrutable software code that no one understands but that always gives the “right” answers is something to be treasured, guarded, and NEVER EVER replaced.
Good post and useful critique, I think.
But, I first did FORTRAN in 1965 (version II-D) and am quite happy to take your word that these models are not great examples of clarity. I’ve seen all the “do loops” and “if – then” statements I ever want to see.
That being said: Know that I appreciate those of you that are looking at these things and I do trust your judgment.
The last piece of code that worked without any unexpected errors was:
10 Print “Hello”;
20 Goto 10
and it still required half an hour of debugging.
The faith that is put in models is terrifying, and people always ignore the one underlying problem:
For a model to be correct, it has to be close to 100% accurate against the real-world situation. If it isn’t, it is at worst a waste of time and at best a vague indicator of whether your original theory was credible, but it should never be treated as proof of anything.
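The commenter’s point about checking a model “against the real-world situation” can be made concrete. This is only an illustrative sketch: the `rmse` function is a standard skill metric, but the model and observation values below are invented numbers, not GISS output.

```python
# Hypothetical sketch: scoring model output against observations with a
# root-mean-square error. The data values are invented for illustration only.

def rmse(predicted, observed):
    """Root-mean-square error between model output and observations."""
    n = len(predicted)
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n) ** 0.5

# Invented example values (temperature anomalies, degrees C):
model = [0.2, 0.3, 0.5, 0.6]
obs = [0.1, 0.2, 0.2, 0.3]

print(round(rmse(model, obs), 3))  # prints 0.224
```

A published validation would of course use long observational series and several metrics, but the principle is the same: the error against reality has to be computed and reported, not assumed away.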
I browsed through the code a bit. If I take the comments at face value, it looks like you have PhD scientists writing production FORTRAN code – bad idea.
Industry practice is to have PROFESSIONAL code writers maintain the software. Although I have degrees in Chemical Engineering, I worked as a programmer for 4 years in the mid-1980s in FORTRAN, COBOL, SAS, and several other languages. I was the lead programmer for a Fortune 50 company’s in-house ChemE simulation program, about 500,000 lines of code. Programmers maintained and updated the code. Our analysts (usually Masters or PhD Chemical Engineers) provided the input and the mathematical modeling, which we then coded. Analysts weren’t allowed to compile or load ANY source code.
It might be a stretch to call it the worst FORTRAN coding, but what little I saw wasn’t very good. I would reserve judgement until I had run some FORTRAN source analyzers and mapping software to see how the code lays out.
At a minimum they should have a QA/QC and testing program.
Coding errors wouldn’t be particularly difficult to find. Are the GISS models “peer reviewed”???
Sounds like the next project after SurfaceStations wraps up.
I’ve written my own NASA-like climate model software
10 Rem Global Climate Model
20 Print “The End Is Nigh”
30 Goto 20
40 End (of planet)
Hmmm. I wonder. Maybe if GCM programmers spent their time programming instead of arguing on blogs…
http://www.climateaudit.org/?p=6316#comment-346228
http://wattsupwiththat.com/2009/06/27/warmists-deny-copenhagen-access-to-polar-bear-scientist/
As a longtime professional software developer, I’ve seen millions of lines of code, of varying quality. But I can make some generalizations. The people who code something up as an aid to their job, not as a primary focus, in general write lousy code.
So I’m not surprised by the quality of the code. It pretty much looks like what I’d get if I were interfacing with, say, a hardware guy who writes a tool to use on the hardware he’s designing. Unreadable, undocumented, unmaintainable code that basically works. The mathematicians/statisticians I’ve worked with are the same way, so I’d expect the same thing of climate scientists.
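The “unreadable but basically works” pattern the commenter describes is easy to show side by side. Both functions below are invented examples, not code from Model E; they compute the same thing, and only the second is maintainable.

```python
# Illustrative contrast (invented code, not from any GISS source): the same
# calculation written the way a non-specialist often writes it, then the way
# a maintainer would want it.

# "Works but unmaintainable": cryptic name, magic numbers, no documentation.
def f(a):
    return sum(a) / len(a) * 9 / 5 + 32

# Maintainable: named constants, a docstring, explicit intent.
FAHRENHEIT_SCALE = 9 / 5
FAHRENHEIT_OFFSET = 32

def mean_celsius_to_fahrenheit(readings_c):
    """Average a list of Celsius readings and convert to Fahrenheit."""
    mean_c = sum(readings_c) / len(readings_c)
    return mean_c * FAHRENHEIT_SCALE + FAHRENHEIT_OFFSET

print(f([0.0, 100.0]))
print(mean_celsius_to_fahrenheit([0.0, 100.0]))
```

Both print the same number; the difference only matters when someone else has to audit or modify the code years later, which is exactly the situation with the GISS sources.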
If I had to guess, I’d say that the number of actual professional software engineers they have working on the project is zero, that 100% of this has been coded by the scientists themselves.
Grumbler (08:01:18) :
I believe the best ones went and worked in the financial modelling sector…..
Yes I think that they probably did!
For an overworked software engineer, our Gary still finds lots of time to spend on blogs! When I am working, usually on an oil rig offshore, the four hours off that I manage to salvage from most days I usually spend sleeping.
“It’s a little harsh, IMHO, to assign blame to software engineers when they’re underpaid and overworked.”
UCAR/NCAR stopped public access to wage data in 2008. Apparently, how federal funding for this institution is broken down has been hard to find since at least 2006:
http://www.climatesciencewatch.org/index.php/csw/details/greenwire-inhofe/
But if the General Schedule for federal employees is any indication of compensation at NCAR then $80,000+ would be a good starting estimate of “underpaid”.
http://www.bls.gov/oco/cg/cgs041.htm#earnings
Gary, perhaps you should back up your unfounded claims like being “underpaid” and “overworked”, eh?
Or do you just not have the time to post that, being overworked and all, at your office on weekends on your “own time” for instance?
What’s so tough about writing software for climate modeling?
All you got to do is make sure your model spits out warming.
Then the media will take care of the rest.
All the software engineers I’ve met were most definitely underpaid. Riiiight….
I am a software engineer and I have looked at the GISS software. It is right up there with some of the worst code I have ever seen. The question is not whether it contains errors, the question is just how bad the errors are and how much they impact the results.
Keep in mind that the errors could be impacting results both ways, although, given the GISS track record, I doubt it would error on the cool side 😉
As one with a chemical engineering degree, I really despise it when people describe themselves as a “software engineer”. They are no more an engineer than the guys who pick up my garbage, who are also known as sanitation engineers. From Wikipedia, engineering is defined as “The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property.” Last time I looked, “software engineers” do not have to take an engineer-in-training (EIT) exam or a professional engineering exam.
A more appropriate description for our friends who write programs may be software code developer.
In The Black Swan (http://www.amazon.com/Black-Swan-Impact-Highly-Improbable/dp/1400063515/ref=sr_1_1?ie=UTF8&s=books&qid=1246893449&sr=1-1)
Nassim Nicholas Taleb argues it is impossible to successfully model complex systems over periods of time; that the very attempt is foolish. I would be interested to hear from anyone involved in complex modeling who thinks otherwise. Anyone?
Climate modeling software may have problems but it is not the problem. What are the software engineers attempting to code … some model that makes no sense? I heard a congressman ask: did we learn anything from these failed financial models (packaging of mortgages to reduced risk). If you put the slightest confidence in GW models you learned NOTHING. Here is a brief comparison of the two:
1. Mortgage modelers had a financial incentive to be right. Climate modelers have a financial incentive to spread alarm and generate more government grants. They plan to be dead before the absurdity of their 100-year forecasts becomes clear. (They miscalculated, since all 4 IPCC forecasts are not just wrong but fell outside their 90% confidence interval.)
2. Mortgage modelers tested against historical data releasing models only after they completed regression tests using historical data. Climate modelers know the earth cycled through over a dozen ice ages … their models do not, and cannot, predict these cycles. They don’t care. (See #4)
3. Mortgage modelers attempted to incorporate all known relevant information. Climate modelers are concerned only with “human-induced factors” (see #4). Further, they ignore science when it gets in the way of their objectives. For example, every study of CO2’s persistence in the atmosphere (over 30 of them) finds CO2 released into the atmosphere will persist for 4-15 years before being (mostly) consumed by the ocean. I understand GW models assume 100 years, because assuming less than 50 years does not produce the desired warming effect. Is this right? The American Chemical Society considers this a fatal flaw in the GW models.
4. Mortgage modelers understood the system they were attempting to model. Climate modelers don’t understand much about the climate … nor do they care to. According to the IPCC document, PRINCIPLES GOVERNING IPCC WORK: “The role of the IPCC is to assess on a comprehensive, objective, open and transparent basis the … information relevant to understanding the scientific basis of risk of human-induced climate change.” The “human induced” restriction makes it clear why there is relatively little research into the role of the sun, the earth’s orbit, the formation of clouds (which cool the earth?), cosmic rays (which may seed clouds?), ocean currents or the huge climate changes of the past which moved the earth into and out of over a dozen ice ages; humans have no significant impact on any of these.
Ryan needs to get paid more too. IMHO, software developers are often overpaid for the difficulty of their jobs; I’ve learned programming in my spare time, and it ain’t rocket science.
Ryan’s made a big improvement on Steig et al., running over 3000 reconstructions of the Antarctic and resolving several outstanding issues. He also directed several comments at the RC guys. Looks like the makings of a bad week for them.
http://noconsensus.wordpress.com/2009/07/06/tav-to-realclimate-you-can%E2%80%99t-get-there-from-here/
I was unaware that any blame had been directed at GISS software engineers. My understanding was that they were faithfully implementing the bogus sensitivity assumptions and all the rest of that which comprises current climate modeling ideology.
I am inclined not to read much into this fellow’s candid assessment. And I think Steve M. was a little over the top in implying that the programmer should have communicated his misgivings about the models to the IPCC, Congress and the media. The guy who has to translate crappy assumptions from higher up into workable code should not have to field that kind of issue or get caught in the middle.
There seems to be a correlation between government involvement and the lack of quality of developed software. In the UK, the Government underwrote the financing of an enormously complex health care system, supposedly to link all the medical records of every family doctor, every hospital into one central database. Although the development was carried out by the private sector, it has been to date nothing less than a complete disaster, and a waste of something like six billion pounds. Now go to the extreme and have the government not only finance the thing but develop it itself and no rational person can expect anything at all out the other end that in any way resembles its stated purpose.
Just perusing that programming language (it has been more than 30 years since I’ve seen FORTRAN), is it me, or do I see some sort of deliberate ‘loop’, if you will, in the way it is written? I’m no programmer, but I have downloaded a couple of different wildlife studies
in FORTRAN and got caught in a “garbage in, garbage out” (GIGO) scenario that dang near got the whole study shut down, after 3 years of hard field work…
This GISS thing makes me wonder…
It is perplexing how models are called “experiments” but/and also used for “prediction”.
If the model is understood, it is not experimental.
If the model is experimental, it should not be relied on for prediction.
I don’t really care how much money and time and effort and smarts went into developing the models. There comes a point where we have to decide whether to trust the results. Many projects take up a lot of time and effort but nonetheless, in the end they simply fail.
We need to get honest about whether climate models are successes or failures.