Guest Post By Willis Eschenbach
One reason that I’m always hesitant to speculate on other people’s motives is that half the time I have no clue about my own motives.
So … for the usual unknown reasons and with the usual unknown motives, I got to thinking about the GISS climate model known as the “GISS GCM ModelE”, or as I call it, the “MuddleE”.
Like many such climate muddles, it was not designed before construction started. Instead, it has grown over decades by accretion, with new parts added, kluges to fix problems, ad-hoc changes to solve new issues, and the like. Or to quote one of the main GISS programmers, Gavin Schmidt, in a paper describing the ModelE:
The development of a GCM is a continual process of minor additions and corrections combined with the occasional wholesale replacement of particular pieces.
For an additional difficulty factor, as with many such programs, it’s written in the computer language FORTRAN … which was an excellent choice in 1983 when the MuddleE was born but is a horrible language for 2022.
How much has it grown? Well, not counting header files and include files and such, just the FORTRAN code itself, it has 441,668 lines of code … and it can only run on a supercomputer like this one.

So I thought I’d wander through the GISS model and see what I could find. I knew, from a cruise that I’d taken through the MuddleE code two decades ago, that they’d had trouble with “melt pools”. These are the pools of meltwater that form on top of the seasonal sea ice. They are important in calculating the albedo of the sea ice. In my previous cruise, I’d found that they’d put a hard time limit on the days during which melt pools could form.
This leads me to a most important topic—the amazing stability of the climate system. It turns out that modern climate muddles have a hard time staying on course. They are “iterative” models, meaning that the output of one timestep is used as the input for the next timestep. And that means that any error in the output of timestep “J” becomes an error in the input of timestep “J+1”, and so on ad infinitum … which makes it very easy for the muddle to spiral into a snowball earth or go up in flames. Here, for example, are a couple of thousand runs from a climate model …

Figure 1. 2,017 runs from the climateprediction.net climate model.
Notice in the upper panel how many runs fall out the bottom during the control phase of the model runs … and that has never happened with the real earth.
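To see how touchy an iterative model can be, here’s a toy sketch in Python (purely illustrative; the numbers and the “model” are invented, nothing from any real GCM) showing how a tiny per-step error compounds:

```python
# Toy iterative "model": the output of each timestep is the input to the
# next. A feedback factor of exactly 1.0 is perfectly balanced; a factor
# that is off by a mere 0.1% per step compounds into huge drift.

def run_model(steps, feedback):
    state = 1.0                      # arbitrary starting value
    for _ in range(steps):
        state = state * feedback     # output of step J feeds step J+1
    return state

print(run_model(10_000, 1.0))    # balanced: stays at 1.0 forever
print(run_model(10_000, 1.001))  # 0.1% error per step: ~21,900 after 10,000 steps
```

And ten thousand steps is nothing; at a half-hour timestep, a 50-year run is the better part of a million iterations, so far smaller per-step errors can still send a run out the bottom of the chart.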
So I wrote up a computer program that searches the 511 individual files containing the 441,668 lines of computer code for keywords, word combinations, and the like. When it finds a match, it lists the file number and the line number where the keyword appears and prints out the line in question along with the surrounding lines, so I can investigate what it’s found.
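My search program is written in R, but the idea is simple enough that a rough Python equivalent fits in a dozen lines (the directory name and keywords below are placeholders, not the real paths):

```python
# Sketch of a keyword search over a tree of FORTRAN source files: for each
# match, record the file number, file name, line number, and a few lines
# of surrounding context. Directory and keywords are placeholders.
from pathlib import Path

def search_sources(root, keywords, context=3):
    hits = []
    for fnum, path in enumerate(sorted(Path(root).glob("**/*.f")), start=1):
        lines = path.read_text(errors="replace").splitlines()
        for lnum, line in enumerate(lines, start=1):
            if any(kw.lower() in line.lower() for kw in keywords):
                lo = max(0, lnum - 1 - context)
                hits.append((fnum, path.name, lnum, lines[lo:lnum + context]))
    return hits

# e.g.: search_sources("modelE", ["melt pond", "safety valve", "tuning knob"])
```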
To keep the climate model from losing the plot and wandering away from reality, as a programmer you have two choices. You can either fix whatever is wrong with the model that’s making it go off the rails … or you can do what the MuddleE programmers did with the melt ponds: simply put in a hard limit, basically an adjustable guardrail, that prevents the muddle from jumping the shark.
It appears that they’ve improved the melt pond code because I can no longer find the hard limit on the days the melt ponds can form. Instead, they’ve put in the following code:
C**** parameters used for Schramm sea ice albedo scheme (Hansen)
!@var AOImin,AOImax range for seaice albedo
!@var ASNwet,ASNdry wet,dry snow albedo over sea ice
!@var AMPmin mininimal melt pond albedo
      REAL*8 ::
C                VIS     NIR1    NIR2    NIR3     NIR4   NIR5
     * AOImin(6)=(/ .05d0, .05d0, .05d0, .050d0, .05d0, .03d0/),
     * AOImax(6)=(/ .62d0, .42d0, .30d0, .120d0, .05d0, .03d0/),
     * ASNwet(6)=(/ .85d0, .75d0, .50d0, .175d0, .03d0, .01d0/),
     * ASNdry(6)=(/ .90d0, .85d0, .65d0, .450d0, .10d0, .10d0/),
     * AMPmin(6)=(/ .10d0, .05d0, .05d0, .050d0, .05d0, .03d0/)
What this code does is to put up hard limits on the values for the albedo for sea ice and melt ponds, as well as specifying constant values for wet and dry snow on the sea ice. It specifies the limits and values for visible light (VIS), as well as for five bands of the near infrared (NIR1-5).
What this means is that there is code to calculate the albedo of the sea ice … but sometimes that code comes up with unrealistic values. But rather than figure out why the code is coming up with bad values and fixing it, the climate muddle just replaces the bad value with the corresponding maximum or minimum values. Science at its finest.
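In effect, the computed albedo is simply clamped. Here’s a minimal Python sketch of the logic as I read it (the function and its name are mine, not ModelE’s; the limits are the visible-band constants quoted above):

```python
# Illustrative clamp: whatever the sea ice albedo code computes, the result
# is forced back into the prescribed [AOImin, AOImax] band. The limits are
# the quoted ModelE VIS-band values; the function itself is my invention.
AOI_MIN_VIS = 0.05
AOI_MAX_VIS = 0.62

def clamped_albedo(computed):
    """Replace an out-of-range computed albedo with the nearest hard limit."""
    return min(max(computed, AOI_MIN_VIS), AOI_MAX_VIS)

print(clamped_albedo(0.30))   # physically plausible: passes through
print(clamped_albedo(1.70))   # impossible albedo: silently becomes 0.62
print(clamped_albedo(-0.20))  # impossible albedo: silently becomes 0.05
```

Note that nothing is logged and nothing stops; the bad value just vanishes into the guardrail.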
Here’s a comment describing another bit of melt pond fun:
C**** safety valve to ensure that melt ponds eventually disappear (Ti<-10)
      if (Ti1 .lt.-10.) pond_melt(i,j)=0.   ! refreeze
Without this bit of code, some of the melt ponds might never refreeze, no matter how cold it got … gotta love that kind of physics, water that doesn’t freeze.
This is what the climate modelers mean when they say that their model is “physics-based”. They mean it in the same sense as when the producers say a Hollywood movie is “based on a true story” …
Here, for example, is a great comment from the MuddleE code (the “c” or the “!” in a line indicates a comment):
!@sum tcheck checks for reasonable temperatures
!@auth Ye Cheng/G. Hartke
!@ver 1.0
c ----------------------------------------------------------------------
c This routine makes sure that the temperature remains within
c reasonable bounds during the initialization process. (Sometimes the
c the computed temperature iterated out in left field someplace,
c *way* outside any reasonable range.) This routine keeps the temp
c between the maximum and minimum of the boundary temperatures.
c ----------------------------------------------------------------------
In other words, when the temperature goes off the rails … don’t investigate why and fix it. Just set it to a reasonable temperature and keep rolling.
And what is a reasonable temperature? Turns out they just set it to the temperature of the previous timestep and keep on keeping on … physics, you know.
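As I read it, the guard amounts to this (a sketch with my own names and invented numbers, not the actual ModelE routine):

```python
# Sketch of a "tcheck"-style guard: if the computed temperature falls
# outside the bounds set by the boundary temperatures, discard it and
# quietly reuse the previous timestep's value. Names are mine.
def tcheck(computed, t_boundary_a, t_boundary_b, previous):
    t_min = min(t_boundary_a, t_boundary_b)
    t_max = max(t_boundary_a, t_boundary_b)
    if t_min <= computed <= t_max:
        return computed    # looks reasonable: keep it
    return previous        # out in left field: pretend it never happened

print(tcheck(285.0, 280.0, 290.0, 284.0))  # in bounds -> 285.0
print(tcheck(512.0, 280.0, 290.0, 284.0))  # *way* out -> falls back to 284.0
```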
Here’s another:
c ucheck makes sure that the winds remain within reasonable
c bounds during the initialization process. (Sometimes the computed
c wind speed iterated out in left field someplace, *way* outside
c any reasonable range.) Tests and corrects both direction and
c magnitude of the wind rotation with altitude. Tests the total
c wind speed via comparison to similarity theory. Note that it
c works from the top down so that it can assume that at level (i),
c level (i+1) displays reasonable behavior.
… when the climate muddle goes off the rails, and the wind is blowing five hundred miles per hour, don’t look for the reason why. Just prop it up, put it back on the rails, and keep going …
Then we have a different class of non-physics. These are tunable parameters. Here’s a description from Gavin Schmidt’s paper linked to above:
The model is tuned (using the threshold relative humidity U00 for the initiation of ice and water clouds) to be in global radiative balance (i.e., net radiation at TOA within ±0.5 W m−2 of zero) and a reasonable planetary albedo (between 29% and 31%) for the control run simulations.
In other words, the physics simulated in the climate muddle won’t keep the modelworld in balance. So you simply turn the tuning knob and presto! It all works fine! In fact, that U00 tuning knob worked so well that they put in two more tuning knobs … plus another hard limit. From the code:
!@dbparam U00a tuning knob for U00 above 850 mb without moist convection
!@dbparam U00b tuning knob for U00 below 850 mb and in convective regions
!@dbparam MAXCTOP max cloud top pressure
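The tuning process itself is mundane. Here’s a toy Python sketch of what “adjust U00 until the TOA balance is within ±0.5 W/m²” might look like; the linear “model” inside is entirely made up for illustration and has nothing to do with ModelE physics:

```python
# Toy "tuning knob" illustration: bisect a single parameter (a stand-in
# for U00) until the net top-of-atmosphere radiation is within 0.5 W/m^2
# of zero. The relationship below is fabricated for the sketch.

def toy_toa_imbalance(u00):
    # Pretend: higher U00 -> more cloud -> more reflected sunlight.
    return 4.0 - 6.0 * u00   # net W/m^2, invented linear stand-in

def tune(lo, hi, tol=0.5):
    for _ in range(100):
        mid = (lo + hi) / 2
        net = toy_toa_imbalance(mid)
        if abs(net) <= tol:
            return mid, net          # "in global radiative balance"
        if net > 0:
            lo = mid                 # need more reflection: knob up
        else:
            hi = mid                 # too much reflection: knob down
    raise RuntimeError("knob cannot reach balance")

u00, net = tune(0.0, 1.0)
print(round(u00, 3), round(net, 2))  # the knob setting that "balances" things
```

The point being that the balance comes from the knob, not from the simulated physics.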
Finally, all models are subjected to what I call “evolutionary tuning”. That is the process whereby a change is made, and then the model is tested against the only thing we have to test it against—the historical record. If the model is better able to replicate the historical record, then the change is kept. But if the change makes it work worse at hindcasting the past, it’s thrown out.
Unfortunately, as the stock brokers’ ads in the US are required by law to say, “Past performance is no guarantee of future success”. The fact that a climate model can hindcast the past means absolutely nothing about whether it can successfully predict the future. And this is particularly true when the model is propped up and kept from falling over by hard limits and tunable parameters, and then evolutionarily tuned to hindcast the past …
What else is going on? Well, as in many such ad-hoc projects, they’ve ended up with a single variable name representing two different things in different parts of the program … which may or may not be a problem, but is a dangerous programming practice that can lead to unseen bugs. (Note that FORTRAN is not “case sensitive”, so “ss” is the same variable as “SS”.) Here are some of the duplicate variable names.
SUBR identifies after which subroutine WATER was called
SUBR identifies where CHECK was called from
SUBR identifies where CHECK3 was called from
SUBR identifies where CHECK4 was called from
ss = photodissociation coefficient, indicies
SS = SIN(lat)*SIN(dec)
ns = either 1 or 2 from reactn sub
ns = either ns or 2 from guide sub
i2 newfam ifam dummy variables
nn = either nn or ks from reactn sub
nn = either nn or nnn from guide sub
nn = name of species that reacts, as defined in the MOLEC file.
ndr = either ndr or npr from guide sub
ndr = either nds or ndnr from reactn sub
Mo = lower mass bound for first size bin (kg)
Mo = total mass of condensed OA at equilibrium (ug m-3)
ks = local variable to be passed back to jplrts nnr or nn array.
ks = name of species that photolyses, as defined in the MOLEC file.
i,j = dummy loop variables
I,J = GCM grid box horizontal position
Finally, there’s the question of conservation of energy and mass. Here’s one way it’s handled …
C**** This fix adjusts thermal energy to conserve total energy TE=KE+PE
      finalTotalEnergy = getTotalEnergy()
      call addEnergyAsDiffuseHeat(finalTotalEnergy - initialTotalEnergy)
Curiously, the subroutine “addEnergyAsDiffuseHeat” is defined twice in different parts of the program … but I digress. When energy is not conserved, what it does is simply either add or subtract the difference equally all over the globe.
Now, some kind of subroutine like this is necessary because computers are only accurate to a certain number of decimals. So “rounding errors” are inevitable. And their method is not an unreasonable one for dealing with this unavoidable error.
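In outline, as I read it, the fixup works like this (a sketch with invented numbers and my own names, not the actual routine):

```python
# Sketch of the energy fixup: measure how much total energy the timestep
# gained or lost, then sprinkle the difference uniformly over every grid
# cell as diffuse heat. Grid and values are invented for illustration.

def add_energy_as_diffuse_heat(cells, energy_error):
    per_cell = energy_error / len(cells)
    return [c + per_cell for c in cells]

cells = [100.0, 101.0, 99.0, 100.0]    # toy thermal energy per grid cell
initial_total = sum(cells)              # 400.0
cells = [c * 1.001 for c in cells]      # a timestep "creates" 0.4 units
cells = add_energy_as_diffuse_heat(cells, initial_total - sum(cells))
print(round(sum(cells), 6))             # 400.0 -- the books balance again
```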
However, twenty years ago I asked Gavin Schmidt if he had some kind of “Murphy Gauge” on this subroutine to stop the program if the energy imbalance was larger than some threshold. In the real world, a Murphy “Swichgage” is a gauge that gives an alarm if some user-set value is exceeded. Here’s what one looks like:

Without such a gauge, the model could be either gaining or losing a large amount of energy without anyone noticing.
Gavin said no, he didn’t have any alarm to stop the program if the energy imbalance was too large. So I asked him how large the imbalance usually was. He said he didn’t know.
So in this cruise through the code 20 years later, once again I looked for such a “Murphy Gauge” … but I couldn’t find one. I’ve searched the subroutine “addEnergyAsDiffuseHeat” and the surrounds, as well as looking for all kinds of keywords like “energy”, “kinetic”, “potential”, “thermal”, as well as for the FORTRAN instruction “STOP” which stops the run, and “STOP_MODEL” which is their subroutine to stop the model run based on certain conditions and print out a diagnostic error message.
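For the record, such a Murphy Gauge would take only a few lines. Here’s a hedged Python sketch of what I was looking for (the threshold is mine, invented for illustration; ModelE would presumably use its own STOP_MODEL mechanism rather than an exception):

```python
# A "Murphy Gauge" for the energy fixer: if the imbalance about to be
# sprinkled around the globe exceeds a user-set threshold, halt the run
# with a diagnostic instead of silently papering it over.
MAX_IMBALANCE = 0.5   # illustrative alarm setting, made up for the sketch

def murphy_gauge(imbalance):
    if abs(imbalance) > MAX_IMBALANCE:
        raise RuntimeError(
            f"energy imbalance {imbalance:+.3f} exceeds {MAX_IMBALANCE}"
            " -- stopping model"
        )
    return imbalance   # within tolerance: carry on

murphy_gauge(0.2)      # fine, run continues
try:
    murphy_gauge(3.7)  # a silently broken run would be caught here
except RuntimeError as err:
    print(err)
```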
In the ModelE there are 846 calls to “STOP_MODEL” for all kinds of things—lakes without water, problems with files, “mass diagnostic error”, “pressure diagnostic error”, solar zenith angle not in the range [0.0 to 1.0], infinite loops, ocean variables out of bounds, one STOP_MODEL that actually prints out “Please double-check something or another.”, and my personal favorites, “negative cloud cover” and “negative snow depth”. Hate it when those happen …
And this is all a very good thing. These are Murphy Gauges, designed to stop the model when it goes off of the rails. They are an important and necessary part of any such model.
But I couldn’t find any Murphy Gauge for the subroutine that takes excess or insufficient energy and sprinkles it evenly around the planet. Now, to be fair, there are 441,668 lines of code, and it’s very poorly commented … so it might be there, but I sure couldn’t track it down.
So … what is the conclusion from all of this?
Let me start with my bona fides. I wrote my first computer program over a half-century ago, and I’ve written uncountable programs since. On my computer right now, I have over 2,000 programs I wrote in the computer language R, with a total of over 230,000 lines of code. I’ve forgotten more computer languages than I speak, but I am (or at one time was) fluent in C/C++, Hypertalk, Mathematica (3 languages), VectorScript, Basic, Algol, VBA, Pascal, FORTRAN, COBOL, Lisp, LOGO, Datacom, and R. I’ve done all of the computer analysis for the ~1,000 posts that I’ve written for WUWT. I’ve written programs to do everything from testing blackjack systems, to providing the CAD/CAM files for cutting the parts for three 80′ steel fishing boats, to a bidding system for complete house construction, to creating the patterns for cutting and assembling a 15-meter catenary tent, to … well, the program that I wrote today to search for keywords in the code of the GISS ModelE climate model.
So regarding programming, I know whereof I speak.
Next, regarding models. On my planet, I distinguish between two kinds of models. These are single-pass models and iterative models. Single-pass models take a variety of inputs, perform some operations on them, and produce some outputs.
Iterative models, on the other hand, take a variety of inputs, perform some operations on them, and produce some outputs … but unlike single-pass models, those outputs are then used as inputs, which the model performs operations on, and the process is repeated over and over to give a final answer.
There are a couple of very large challenges with iterative models. First, as I discussed above, they’re generally sensitive and touchy as can be. This is because any error in the output becomes an error in the input. This makes them unstable. And as mentioned above, two ways to fix that—correct the code, or include guardrails to keep it from going off the rails. The right way is to correct the code … which leads us to the second challenge.
The second challenge is that iterative models are very opaque. Weather models and climate models are iterative models. Climate models typically run on a half-hour timestep. This means that if a climate model is predicting, say, 50 years into the future, the computer will go through 48 steps per day times 365 days per year times 50 years, or 876,000 iterations. And if it comes out with an answer that makes no sense … how can we find out where it went off the rails?
Please be clear that I’m not picking on the GISS model. These same issues, to a greater or lesser degree, exist within all large complex iterative models. I’m simply pointing out that these are NOT “physics-based”—they are propped up and fenced in to keep them from crashing.
In conclusion, a half-century of programming and decades of studying the climate have taught me a few things:
• All a computer model can do is make visible and glorify the under- and more importantly the misunder-standings of the programmers. Period. If you write a model under the belief that CO2 controls the temperature … guess what you’ll get?
• As Alfred Korzybski famously said, “The map is not the territory”. He used the phrase to poetically express that people often confuse models of reality with reality itself. Climate modelers have this problem in spades, far too often discussing their model results as if they were real-world facts.
• The climate is far and away the most complex system we’ve ever tried to model. It contains at least six subsystems—atmosphere, biosphere, hydrosphere, lithosphere, cryosphere, and electrosphere. All of these have internal reactions, forces, resonances, and cycles, and they all interact with all of the others. The system is subject to variable forces from both within and without the system. Willis’s First Rule of Climate says “In the climate, everything is connected to everything else … which in turn is connected to everything else … except when it isn’t.”
• We’ve only just started to try to model the climate.
• Iterative models are not to be trusted. Ever. Yes, modern airplanes are designed using iterative models … but the designers still use wind tunnels to test the results of the models. Unfortunately, we have nothing that corresponds to a “wind tunnel” for the climate.
• The first rule of buggy computer code is, when you squash one bug, you probably create two others.
• Complexity ≠ Reliability. Often a simpler model will give better answers than a complex model.
Bottom line? The current crop of computer climate models is far from fit to be used to decide public policy. To verify this you only need to look at the endless string of bad, failed, crashed-and-burned predictions that have come from the models. Pay them no attention. They are not “physics-based” except in the Hollywood sense, and they are far from ready for prime time. Their main use is to add false legitimacy to the unrealistic fears of the programmers.
And there you have it, a complete tour of a climate muddle.
Here in the redwood forest, it’s my birthday.

I’m three-quarters of a century old. And I’ll take this opportunity to thank all of my friends both in real life and on the web for two things.
The first is the endless support that I’ve gotten for my life, my writings, and my research. Everything of value in my life I’ve learned from family and friends. I owe profound thanks to the encouragement and support of people like Anthony Watts, Steve McIntyre, Roy Spencer, William Gray, Charles the Moderator, Viscount Ridley, Bishop Hill, Judith Curry, of course my gorgeous ex-fiancee, and many others for all that you have done. No way to mention everyone individually, but you know who you are, and you have my thanks.
The second thing I’m thankful for is the endless checking and public peer-review of my work. I love writing for WUWT because I know that a) whatever I write about, someone out there has done it for a living and knows more than I do about it, and b) whatever mistakes I make won’t last longer than a couple of hours without people pointing them out. This has been immensely useful to me because it has kept me from following false trails and wasting weeks or years going down blind alleys based on mistaken beliefs. Keep up the good work!
So what does a man do at 75? Well, after doing absolutely nothing today, tomorrow I’m going back to climbing 20 feet (6m) up a ladder to pressure-wash shingles with my new pressure washer … hey, do I know how to party or what?
My very best to all,
w.
Again I Ask: When commenting, please quote the exact words you’re responding to. And if you’re commenting on something not on the thread, provide a link. It avoids endless misunderstandings.
Happy Birthday Willis!
Back in November of 2021, flooding in Washington State was being attributed by the governor and the press to climate change. Cliff Mass at the University of Washington disputed those claims in an article on his blog:
‘Were the Sumas Floods Caused by Global Warming? The Evidence Says No.’
https://cliffmass.blogspot.com/2021/11/were-sumas-floods-caused-by-global.html
Having spent part of my career in the nuclear industry as a QA auditor, including software quality assurance auditing, I wrote the following two comments in response to his article:
Part One of a Two-Part Comment, posted November 23 2021:
————————————————————————————————-
Cliff Mass said in an earlier article posted on this blog concerning weather whiplash: “I am involved in regional climate simulations, using an ensemble of high-resolution projections driven by an ensemble of many global climate models. This is the gold standard for such work.”
In the nuclear industry, the software and the data used to directly support a technical decision which will be undergoing a safety analysis review by an independent authority such as the NRC must be developed and managed under strict quality assurance guidelines.
For a process simulation model in the nuclear world, the model’s software code — plus the data the model acquires, creates internally, or produces as outputs — are all considered essential components of a single unitary system. It’s all One Thing.
For any single model run which is being cited in a nuclear safety analysis, a snapshot of the entire modeling system which produced that run must be frozen in time and a copy of it placed into a historical archive as ‘record material’ which can be examined later by an auditor.
If an ensemble of model runs is being cited, every individual model run in the ensemble must be archived in a way which allows it to be individually retrieved for later examination, and which allows the original ensemble to be recreated as a whole at some later time.
The material which must be documented and archived for later reference includes:
— The modeling system’s design at the time the specific model run was produced, including a description of the methods used to develop both the software code base and the data which the system processes.
— Copies of the input data plus an inventory of any physical or technical assumptions which might be implemented in lines of software code as opposed to being read as a data input file or being created internally on the fly as the model run progresses.
— The criteria by which any specific model run is being evaluated as either suitable or unsuitable for the purposes it is intended to serve. In other words, do we accept the results of the model run; and if so, why do we accept those results?
Moreover, the suitable-versus-unsuitable evaluation criteria might change from run to run. So it is important to keep a precise record of which specific evaluation criteria are being applied to which specific model run.
As one can imagine, this is an expensive and time-consuming proposition — because the very nature of a software-driven simulation system is that both the software and the data may change as the driving assumptions change. What is being done in the nuclear industry to document these changes is the exception, not the norm.
Now we get to the point of this comment. For the regional climate simulations being done by Cliff Mass at the University of Washington, I as a nuclear type professional — someone whose work situation is heavily influenced by a variety of strict QA requirements — would ask these questions:
1) Do the State of Washington’s records management requirements apply in some way to the software and the data used in UW’s climate simulations?
2) Is there a quality assurance program of some kind in place for developing, maintaining, and managing UW’s climate simulation software and associated data?
3) Are the criteria for evaluating the output of each UW model run being recorded such that a specific set of criteria can be precisely matched to each run?
4) For any specific UW model run which employs the output from global climate models produced by other climate scientists not associated with UW, is that output being archived along with UW’s own climate simulation data and software?
Part Two of my two-part comment will concern the use of climate simulation model runs as experimental observational evidence — in lieu of direct physical observation — for the Soden-Held water vapor feedback amplification theory of global warming.
Part Two of a Two-Part Comment, posted November 26, 2021:
—————————————————————————————————-
Let’s talk about the Soden-Held water vapor feedback amplification theory of global warming. This theory is central to estimating how sensitive the earth’s atmosphere is to increasing concentrations of greenhouse gases, principally CO2. Soden & Held’s theory offers an explanation as to how CO2 can ‘punch above its weight’, so to speak, and produce warming effects well beyond CO2’s base effect of a 1.2C – 1.5C increase from a doubling of CO2 concentration.
Water vapor is the principal GHG in the earth’s atmosphere. According to the Soden-Held theory, increased warming at the earth’s surface occurring as a consequence of CO2’s base GHG effect enables the atmosphere to hold more water vapor than it otherwise would. This has the ultimate effect of amplifying CO2’s base level warming well beyond what an increasing concentration of CO2 could accomplish by itself.
It isn’t currently possible to directly observe the Soden-Held temperature amplification mechanism operating in real time inside the earth’s atmosphere, in the same way we would observe an amplification mechanism operating inside an electronic circuit on a test bed in a laboratory. The presence and characteristics of such a mechanism, if it actually exists, must be inferred from other kinds of observations.
Because their postulated amplification mechanism cannot be observed directly, Soden & Held use output from the climate models as one source of data among several in estimating the theoretical sensitivity of earth’s climate system to the continuous addition of CO2 and other carbon GHGs to the atmosphere. These model runs are being employed as if they were physical experiments. The results of the model runs are being used as if they were physical observations recorded in the course of conducting a true physical experiment.
I’ve previously discussed the rigorous quality assurance requirements which apply to simulation systems used in the nuclear industry.
Because a model run of any simulation system is being employed as if it was a physical experiment, the documentation used for nuclear industry simulation systems is the equivalent of a highly detailed laboratory notebook which records the purpose of the simulation, how it was conducted, what its results were, and the criteria by which the results of the model run were being evaluated as either suitable or unsuitable for the purposes that run was intended to serve.
Cliff Mass: “I am involved in regional climate simulations, using an ensemble of high-resolution projections driven by an ensemble of many global climate models. This is the gold standard for such work.”
The IPCC has a web site which gives their guidance on the proper use of data, and which also includes a short article concerning the limitations of the General Circulation Models (GCMs):
IPCC guidance on the use of data: https://www.ipcc-data.org/guidelines/index.html
The limitations of GCMs: https://www.ipcc-data.org/guidelines/pages/gcm_guide.html
Referring to the IPCC articles, different GCMs may simulate quite different responses to the same forcing, simply because of the way certain processes and feedbacks are modeled. Moreover, the models are often parameterized; i.e., estimates of the physical effects of atmospheric processes are often being employed, as opposed to precisely descriptive process physics. This results in some level of uncertainty in a GCM’s modeled output.
The modeling of clouds and water vapor is an area of uncertainty where relatively small differences in the parameterization values can result in large differences in the model’s results.
So I would ask this question: For an ensemble of model runs, should there not be documentation as to how the various parameterizations used in each component model run compare with each other? Moreover, are the evaluation criteria for the ensemble as a whole different in some way from those used for each individual model run within the ensemble?
==================
Footnote, February 19th, 2022: Only one response was made to my comments. It came from a blog reader who said, “What?!!! Climate simulations are not safety analysis. The nuclear business is building potentially dangerous equipment for which it has control over the design. Archiving is for legal purposes.”
I appreciate your discussion.
Furthermore, I assert that using the output of these GCMs as the reason to destroy much of the world’s energy infrastructure and replace it with unreliable wind, solar and means not yet in existence, is, indeed, a massive safety problem.
In addition to working software QA, a portion of my long career in nuclear — about fifteen years out of thirty-five years — was spent writing software code for a variety of odd and sundry purposes.
So I am familiar with IBM assembler, JCL, FORTRAN, COBOL, BASIC, Pascal, complex SQL queries against DB2, Oracle, MS SQL Server, Sybase, and Informix databases, and 4GLs such as NOMAD and FOCUS. I play with R occasionally and find it to be a really fun language to work with.
I’ve seen any number of software development methodologies and philosophies come and go. My favorite was RAD for ‘Rapid Application Development’.
Why? Because it seemed like such a great acronym for use in a nuclear industry software development project.
That said, as long as you are using a disciplined approach for designing and writing software code, and as long as you are thoroughly documenting what it is you are doing, it doesn’t make a whole lot of difference which of these philosophies and methodologies you use.
Happy 75 Willis.
I suspect one result of your article will be a rapid edit from the model of comments using words and phrases such as: fix, keeps the, tuning knob, etc.
Great article. That model looks like a giant make work project. I think it would be more correct if they called them mimics instead of models.
one question- is iterative the correct term for this kind of program? I always thought it was a way to solve complex problems with many equations by using guesses and solving by adjusting the guesses (after moving everything to one side so that all equations equaled zero) until all equations equaled zero. As done in the declarative programming tool TKSolver.
The fundamental problem with the GCMs is that they have never been subjected to Verification and Validation. Those who are not familiar with the process can find an excellent description on the Mitre web site by pointing your favorite web browser at Mitre Verification and Validation of Simulation Models.
The last time that I looked, the FAA had over 20 volumes covering the V and V of simulation models used in the design, building, and operation of commercial aircraft. It is insane to set societal goals such as “NetZero” and its family based on simulation models that are kluges that have never been put through the V and V wringer.
Ray, the CESM model has been “scientifically validated”, whatever it means.
Happy Birthday Willis! Thank you for years of enlightening articles – my IQ goes up with every article of yours I read!
You’re a model man of learning; creative, learned and wise, but probably most importantly: humble. Ready to admit mistakes and open to criticism. That’s how science progresses.
Thanks Willis!
Cent’anne!
(My written Italian is horrible, I’m trying to write “100 years!” May you reach 100 years!)
Willis, having some experience of programming as well as hardware design myself, I congratulate you on this fine analysis of the climate model codswallop. I have made similar observations myself, albeit not as detailed, and watched people’s eyes glaze over. All the kludges you mention are defined as forcing a solution in math or logic programming. A forced solution is obviously not an unbiased or reliable function — I know you know this, it’s an observation for the general reader. These models are forced to give the pre-determined result that CO2 is the cause of all evil things.
The state of climate modeling is precisely the case observed by a notable scientist (I forget who) who commented, “give me 4 variable parameters and I can fit an elephant through the eye of a needle.” Climate models have way more than 4 parameters. They are based on WAGs, not even SWAGs. Acronym decode: SWAG = Scientific Wild Assed Guess.
Your analysis is very welcome and appreciated. Your expertise and infectious curiosity are remarkable. I have been curious about the internal workings of the GCMs but not up to the task of looking for myself, at this time.
The next time I am told that they are based upon the physics, there will be this insightful and well-written essay to recommend.
Happy Bday.
This comment from a “dinosaur”: Alcom, Algol, Fortran 4 – circa 1960s.
I have never written and never will write a computer program, if only to spite Biden and his inane comments about learning to code.
But thanks for this article pointing out just how ridiculous these models are.
To respond to clowns like Mann and Dessler maybe you should go on Joe Rogan’s podcast and just spell all these items out onscreen and then invite the Scientologists to point out what you’ve got wrong.
This stuff has to be seen and done in public to make a difference. Rogan has 11 million followers, about 10.5 million more than the CBC here in Canada.
“Here in the redwood forest, it’s my birthday”
Congratulations Willis! And thank you for sharing so many thoughts and fine analyses!
kind regards
SteenR
Willis, do you have a link to the source code?
The ones on the NASA website don’t work for me anymore.
Info here, Tim.
w.
I seem to be saying this to somebody else, so a repeat: Happy Birthday Willis, may you grow to 100, as the wish is in Greece, with all your faculties intact, including the will to find the truth. I have enjoyed your posts through the years.
That’s where I’ve been trying to get it from but I’ve been getting a connection refused message.
“FORTRAN … which was an excellent choice in 1983″
No, by 1983 there were already better alternative languages available. FORTRAN was miserably outdated by then and was not used much by serious programmers. It retained a following in various scientific disciplines mostly due to inertia – it was the language many scientists had been trained in.
Happy Birthday Willis, may you grow to 100, as the wish is in Greece, with all your faculties intact, including the will to find the truth.
Happy Birthday Willis!
The climate models are much worse than I thought. Much worse.
Technically, Fortran was a terrible language because it equates zero and null. I once inherited a client with an accounting system written in Fortran that contained one of these errors.
It came as quite a shock to the client to learn their bottom line figure had been wrong for years. They actually asked that I reinstall the error, because the new value looked so bad.
Every process should halt on error or capture the error in a log for analysis.
Climate Muddle E is not a serious programming effort. It really needs some standards body to take a look at this.
Imagine sending a $10 billion telescope into space, and some programmer hard-coding a reset of negative values to zero because they were impossible.
You missed Jean off your list of languages?