Guest Post By Willis Eschenbach
One reason that I’m always hesitant to speculate on other peoples’ motives is that half the time I have no clue about my own motives.
So … for the usual unknown reasons and with the usual unknown motives, I got to thinking about the GISS climate model known as the “GISS GCM ModelE”, or as I call it, the “MuddleE”.
Like many such climate muddles, it was not designed before construction started. Instead, it has grown over decades by accretion, with new parts added, kluges to fix problems, ad-hoc changes to solve new issues, and the like. Or to quote one of the main GISS programmers, Gavin Schmidt, in a paper describing the ModelE:
The development of a GCM is a continual process of minor additions and corrections combined with the occasional wholesale replacement of particular pieces.
For an additional difficulty factor, as with many such programs, it’s written in the computer language FORTRAN … which was an excellent choice in 1983 when the MuddleE was born but is a horrible language for 2022.
How much has it grown? Well, not counting header files and include files and such, just the FORTRAN code itself, it has 441,668 lines of code … and it can only run on a supercomputer like this one.

So I thought I’d wander through the GISS model and see what I could find. I knew, from a cruise that I’d taken through the MuddleE code two decades ago, that they’d had trouble with “melt pools”. These are the pools of meltwater that form on top of the seasonal sea ice. They are important in calculating the albedo of the sea ice. In my previous cruise, I’d found that they’d put a hard time limit on the days during which melt pools could form.
This leads me to a most important topic—the amazing stability of the climate system. It turns out that modern climate muddles have a hard time staying on course. They are “iterative” models, meaning that the output of one timestep is used as the input for the next timestep. And that means that any error in the output of timestep “J” is carried over as an error in the input into timestep “K”, and on ad infinitum … which makes it very easy for the muddle to spiral into a snowball earth or go up in flames. Here, for example, are a couple of thousand runs from a climate model …

Figure 1. 2,017 runs from the climateprediction.net climate model.
Notice in the upper panel how many runs fall out the bottom during the control phase of the model runs … and that has never happened with the real earth.
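To see how little it takes for an iterative model to wander off like that, here's a toy illustration in R. Nothing in it comes from ModelE and the numbers are invented; the pretend model's only flaw is a 0.1% bias per timestep, but feeding each output back in as the next input compounds that bias into nonsense:

# Toy illustration, made-up numbers -- not ModelE physics.
step <- function(x) x * 1.001          # a "model" whose only flaw is a 0.1% bias per timestep
temperature <- 288                     # starting global temperature, kelvin
for (i in 1:1000) temperature <- step(temperature)
round(temperature)                     # about 782 K after only 1,000 steps ... and a real run takes hundreds of thousands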
So I wrote a computer program that searches through the 511 individual files containing the 441,668 lines of computer code for keywords and word combinations and the like. When it finds a match, it lists the file number and the line number where the keyword appears, and prints out the line in question along with the surrounding lines, so I can investigate what it's found.
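For the curious, the guts of such a search are simple. Here's a minimal sketch in R; the directory name and keyword are placeholders, and my actual program does a good deal more, but this is the idea:

# Minimal sketch of a keyword search with context (directory name is a placeholder).
fortran_files <- list.files("ModelE_source", pattern = "\\.f$", full.names = TRUE)
find_keyword <- function(keyword, context = 3) {
  for (f in seq_along(fortran_files)) {
    txt <- readLines(fortran_files[f], warn = FALSE)
    for (h in grep(keyword, txt, ignore.case = TRUE)) {
      cat("File", f, basename(fortran_files[f]), "line", h, "\n")
      cat(txt[max(1, h - context):min(length(txt), h + context)], sep = "\n")
      cat("\n\n")
    }
  }
}
# find_keyword("melt pond")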
To avoid the climate model losing the plot and wandering away from reality, as a programmer you have two choices. You can either fix what’s wrong with the model that’s making it go off the rails … or you can do what the MuddleE programmers did with the melt ponds. You can simply put a hard limit, basically an adjustable guardrail, that prevents the muddle from jumping the shark.
It appears that they’ve improved the melt pond code because I can no longer find the hard limit on the days the melt ponds can form. Instead, they’ve put in the following code:
C**** parameters used for Schramm sea ice albedo scheme (Hansen)
!@var AOImin,AOImax range for seaice albedo
!@var ASNwet,ASNdry wet,dry snow albedo over sea ice
!@var AMPmin mininimal melt pond albedo
      REAL*8 ::
C                   VIS     NIR1    NIR2    NIR3    NIR4    NIR5
     *  AOImin(6)=(/ .05d0, .05d0, .05d0, .050d0, .05d0, .03d0/),
     *  AOImax(6)=(/ .62d0, .42d0, .30d0, .120d0, .05d0, .03d0/),
     *  ASNwet(6)=(/ .85d0, .75d0, .50d0, .175d0, .03d0, .01d0/),
     *  ASNdry(6)=(/ .90d0, .85d0, .65d0, .450d0, .10d0, .10d0/),
     *  AMPmin(6)=(/ .10d0, .05d0, .05d0, .050d0, .05d0, .03d0/)
What this code does is to put up hard limits on the values for the albedo for sea ice and melt ponds, as well as specifying constant values for wet and dry snow on the sea ice. It specifies the limits and values for visible light (VIS), as well as for five bands of the near infrared (NIR1-5).
What this means is that there is code to calculate the albedo of the sea ice … but sometimes that code comes up with unrealistic values. But rather than figure out why the code is coming up with bad values and fixing it, the climate muddle just replaces the bad value with the corresponding maximum or minimum values. Science at its finest.
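In R terms, the "fix" amounts to nothing more than this. It's a sketch only, using the visible-band numbers from the snippet above, with "albedo" standing in for whatever value the physics routine produced:

# Sketch only: clamp a computed sea-ice albedo into the prescribed visible-band range.
AOImin_vis <- 0.05
AOImax_vis <- 0.62
albedo <- 0.83                                        # pretend the physics came up with this
albedo <- pmin(pmax(albedo, AOImin_vis), AOImax_vis)
albedo                                                # 0.62 ... the bad value is silently replaced, never explained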
Here’s a comment describing another bit of melt pond fun:
C**** safety valve to ensure that melt ponds eventually disappear (Ti<-10)
      if (Ti1 .lt.-10.) pond_melt(i,j)=0.   ! refreeze
Without this bit of code, some of the melt ponds might never refreeze, no matter how cold it got … gotta love that kind of physics, water that doesn’t freeze.
This is what the climate modelers mean when they say that their model is “physics-based“. They mean it in the same sense as when the producers say a Hollywood movie is “based on a true story” …
Here, for example, is a great comment from the MuddleE code (the “c” or the “!” in a line indicates a comment):
!@sum tcheck checks for reasonable temperatures
!@auth Ye Cheng/G. Hartke
!@ver 1.0
c ----------------------------------------------------------------------
c This routine makes sure that the temperature remains within
c reasonable bounds during the initialization process. (Sometimes the
c the computed temperature iterated out in left field someplace,
c *way* outside any reasonable range.) This routine keeps the temp
c between the maximum and minimum of the boundary temperatures.
c ----------------------------------------------------------------------
In other words, when the temperature goes off the rails … don’t investigate why and fix it. Just set it to a reasonable temperature and keep rolling.
And what is a reasonable temperature? Turns out they just set it to the temperature of the previous timestep and keep on keeping on … physics, you know.
Here’s another:
c ucheck makes sure that the winds remain within reasonable
c bounds during the initialization process. (Sometimes the computed
c wind speed iterated out in left field someplace, *way* outside
c any reasonable range.) Tests and corrects both direction and
c magnitude of the wind rotation with altitude. Tests the total
c wind speed via comparison to similarity theory. Note that it
c works from the top down so that it can assume that at level (i),
c level (i+1) displays reasonable behavior.
… when the climate muddle goes off the rails, and the wind is blowing five hundred miles per hour, don’t look for the reason why. Just prop it up, put it back on the rails, and keep going …
Then we have a different class of non-physics. These are tunable parameters. Here’s a description from Gavin Schmidt’s paper linked to above:
The model is tuned (using the threshold relative humidity U00 for the initiation of ice and water clouds) to be in global radiative balance (i.e., net radiation at TOA within ±0.5 W m−2 of zero) and a reasonable planetary albedo (between 29% and 31%) for the control run simulations.
In other words, the physics simulated in the climate muddle won’t keep the modelworld in balance. So you simply turn the tuning knob and presto! It all works fine! In fact, that U00 tuning knob worked so well that they put in two more tuning knobs … plus another hard limit. From the code:
!@dbparam U00a tuning knob for U00 above 850 mb without moist convection
!@dbparam U00b tuning knob for U00 below 850 mb and in convective regions
!@dbparam MAXCTOP max cloud top pressure
Finally, all models are subjected to what I call “evolutionary tuning”. That is the process whereby a change is made, and then the model is tested against the only thing we have to test it against—the historical record. If the model is better able to replicate the historical record, then the change is kept. But if the change makes it work worse at hindcasting the past, it’s thrown out.
Unfortunately, as the stock brokers’ ads in the US are required by law to say, “Past performance is no guarantee of future success”. The fact that a climate model can hindcast the past means absolutely nothing about whether it can successfully predict the future. And this is particularly true when the model is propped up and kept from falling over by hard limits and tunable parameters, and then evolutionarily tuned to hindcast the past …
What else is going on? Well, as in many such ad-hoc projects, they’ve ended up with a single variable name representing two different things in different parts of the program … which may or may not be a problem, but is a dangerous programming practice that can lead to unseen bugs. (Note that FORTRAN is not “case sensitive”, so “ss” is the same variable as “SS”.) Here are some of the duplicate variable names.
SUBR identifies after which subroutine WATER was called
SUBR identifies where CHECK was called from
SUBR identifies where CHECK3 was called from
SUBR identifies where CHECK4 was called from
ss = photodissociation coefficient, indicies
SS = SIN(lat)*SIN(dec)
ns = either 1 or 2 from reactn sub
ns = either ns or 2 from guide sub
i2 newfam ifam dummy variables
nn = either nn or ks from reactn sub
nn = either nn or nnn from guide sub
nn = name of species that reacts, as defined in the MOLEC file.
ndr = either ndr or npr from guide sub
ndr = either nds or ndnr from reactn sub
Mo = lower mass bound for first size bin (kg)
Mo = total mass of condensed OA at equilibrium (ug m-3)
ks = local variable to be passed back to jplrts nnr or nn array.
ks = name of species that photolyses, as defined in the MOLEC file.
i,j = dummy loop variables
I,J = GCM grid box horizontal position
Finally, there’s the question of conservation of energy and mass. Here’s one way it’s handled …
C**** This fix adjusts thermal energy to conserve total energy TE=KE+PE
      finalTotalEnergy = getTotalEnergy()
      call addEnergyAsDiffuseHeat(finalTotalEnergy - initialTotalEnergy)
Curiously, the subroutine “addEnergyAsDiffuseHeat” is defined twice in different parts of the program … but I digress. When energy is not conserved, what it does is simply either add or subtract the difference equally all over the globe.
Now, some kind of subroutine like this is necessary because computers are only accurate to a certain number of decimals. So “rounding errors” are inevitable. And their method is not an unreasonable one for dealing with this unavoidable error.
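In rough R terms, the bookkeeping looks something like the sketch below. The numbers are made up and this is not the actual routine, but it shows the idea: whatever the timestep lost or gained gets smeared evenly over every grid cell.

# Sketch only, made-up numbers: spread the energy imbalance evenly over the grid.
energy_before <- 1.00e24                          # joules, before the timestep
energy_after  <- 0.98e24                          # joules, after ... 2e22 J went missing somewhere
ncells        <- 10000                            # pretend grid
imbalance     <- energy_before - energy_after
cell_heat     <- rep(imbalance / ncells, ncells)  # every cell gets an equal share of "diffuse heat"
sum(cell_heat)                                    # about 2e22 J, conjured up to balance the books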
However, twenty years ago I asked Gavin Schmidt if he had some kind of "Murphy Gauge" on this subroutine to stop the program if the energy imbalance was larger than some threshold. In the real world, a Murphy "Swichgage" is a gauge that gives an alarm if some user-set value is exceeded. Here's what one looks like:

Without such a gauge, the model could be either gaining or losing a large amount of energy without anyone noticing.
Gavin said no, he didn’t have any alarm to stop the program if the energy imbalance was too large. So I asked him how large the imbalance usually was. He said he didn’t know.
So in this cruise through the code 20 years later, once again I looked for such a “Murphy Gauge” … but I couldn’t find one. I’ve searched the subroutine “addEnergyAsDiffuseHeat” and the surrounds, as well as looking for all kinds of keywords like “energy”, “kinetic”, “potential”, “thermal”, as well as for the FORTRAN instruction “STOP” which stops the run, and “STOP_MODEL” which is their subroutine to stop the model run based on certain conditions and print out a diagnostic error message.
In the ModelE there are 846 calls to “STOP_MODEL” for all kinds of things—lakes without water, problems with files, “mass diagnostic error”, “pressure diagnostic error”, solar zenith angle not in the range [0.0 to 1.0], infinite loops, ocean variables out of bounds, one STOP_MODEL that actually prints out “Please double-check something or another.”, and my personal favorites, “negative cloud cover” and “negative snow depth”. Hate it when those happen …
And this is all a very good thing. These are Murphy Gauges, designed to stop the model when it goes off of the rails. They are an important and necessary part of any such model.
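For what it's worth, the kind of Murphy Gauge I was looking for needs only a couple of lines. Here's a sketch in R; the threshold is pure invention, and in the model itself it would presumably be a call to STOP_MODEL rather than R's stop():

# Sketch only: halt the run if the energy imbalance exceeds a (made-up) threshold.
murphy_gauge <- function(imbalance, threshold = 1e20) {
  if (abs(imbalance) > threshold)
    stop("energy imbalance of ", imbalance, " J exceeds ", threshold, " J -- stopping the run")
}
murphy_gauge(2e22)   # throws an error instead of silently smearing the excess around the globe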
But I couldn’t find any Murphy Gauge for the subroutine that takes excess or insufficient energy and sprinkles it evenly around the planet. Now, to be fair, there are 441,668 lines of code, and it’s very poorly commented … so it might be there, but I sure couldn’t track it down.
So … what is the conclusion from all of this?
Let me start with my bona fides. I wrote my first computer program over a half-century ago, and have written uncountable programs since. On my computer right now, I have over 2,000 programs I wrote in the computer language R, with a total of over 230,000 lines of code. I've forgotten more computer languages than I speak, but I am (or at one time was) fluent in C/C++, HyperTalk, Mathematica (3 languages), VectorScript, Basic, Algol, VBA, Pascal, FORTRAN, COBOL, Lisp, LOGO, Datacom, and R. I've done all of the computer analysis for the ~1,000 posts that I've written for WUWT. I've written programs to do everything from testing blackjack systems, to providing the CAD/CAM files for cutting the parts for three 80′ steel fishing boats, to a bidding system for complete house construction, to creating the patterns for cutting and assembling a 15-meter catenary tent, to … well, the program that I wrote today to search for keywords in the code for the GISS ModelE climate model.
So regarding programming, I know whereof I speak.
Next, regarding models. On my planet, I distinguish between two kinds of models. These are single-pass models and iterative models. Single-pass models take a variety of inputs, perform some operations on them, and produce some outputs.
Iterative models, on the other hand, take a variety of inputs, perform some operations on them, and produce some outputs … but unlike single-pass models, those outputs are then used as inputs, which the model performs operations on, and the process is repeated over and over to give a final answer.
There are a couple of very large challenges with iterative models. First, as I discussed above, they’re generally sensitive and touchy as can be. This is because any error in the output becomes an error in the input. This makes them unstable. And as mentioned above, two ways to fix that—correct the code, or include guardrails to keep it from going off the rails. The right way is to correct the code … which leads us to the second challenge.
The second challenge is that iterative models are very opaque. Weather models and climate models are iterative models. Climate models typically run on a half-hour timestep. This means that if a climate model is predicting, say, 50 years into the future, the computer will undergo 48 steps per day times 365 days per year times 50 years, or 876,000 iterations. And if it comes out with an answer that makes no sense … how can we find out where it went off the rails?
Please be clear that I’m not picking on the GISS model. These same issues, to a greater or lesser degree, exist within all large complex iterative models. I’m simply pointing out that these are NOT “physics-based”—they are propped up and fenced in to keep them from crashing.
In conclusion, a half-century of programming and decades of studying the climate have taught me a few things:
• All a computer model can do is make visible and glorify the under- and more importantly the misunder-standings of the programmers. Period. If you write a model under the belief that CO2 controls the temperature … guess what you’ll get?
• As Alfred Korzybski famously said, “The map is not the territory”. He used the phrase to poetically express that people often confuse models of reality with reality itself. Climate modelers have this problem in spades, far too often discussing their model results as if they were real-world facts.
• The climate is far and away the most complex system we’ve ever tried to model. It contains at least six subsystems—atmosphere, biosphere, hydrosphere, lithosphere, cryosphere, and electrosphere. All of these have internal reactions, forces, resonances, and cycles, and they all interact with all of the others. The system is subject to variable forces from both within and without the system. Willis’s First Rule of Climate says “In the climate, everything is connected to everything else … which in turn is connected to everything else … except when it isn’t.”
• We’ve only just started to try to model the climate.
• Iterative models are not to be trusted. Ever. Yes, modern airplanes are designed using iterative models … but the designers still use wind tunnels to test the results of the models. Unfortunately, we have nothing that corresponds to a “wind tunnel” for the climate.
• The first rule of buggy computer code is, when you squash one bug, you probably create two others.
• Complexity ≠ Reliability. Often a simpler model will give better answers than a complex model.
Bottom line? The current crop of computer climate models is far from fit to be used to decide public policy. To verify this, you only need to look at the endless string of bad, failed, crashed-and-burned predictions that have come from the models. Pay them no attention. They are not "physics-based" except in the Hollywood sense, and they are far from ready for prime time. Their main use is to add false legitimacy to the unrealistic fears of the programmers.
And there you have it, a complete tour of a climate muddle.
Here in the redwood forest, it’s my birthday.

I’m three-quarters of a century old. And I’ll take this opportunity to thank all of my friends both in real life and on the web for two things.
The first is the endless support that I’ve gotten for my life, my writings, and my research. Everything of value in my life I’ve learned from family and friends. I owe profound thanks to the encouragement and support of people like Anthony Watts, Steve McIntyre, Roy Spencer, William Gray, Charles the Moderator, Viscount Ridley, Bishop Hill, Judith Curry, of course my gorgeous ex-fiancee, and many others for all that you have done. No way to mention everyone individually, but you know who you are, and you have my thanks.
The second thing I’m thankful for is the endless checking and public peer-review of my work. I love writing for WUWT because I know that a) whatever I write about, someone out there has done it for a living and knows more than I do about it, and b) whatever mistakes I make won’t last longer than a couple of hours without people pointing them out. This has been immensely useful to me because it has kept me from following false trails and wasting weeks or years going down blind alleys based on mistaken beliefs. Keep up the good work!
So what does a man do at 75? Well, after doing absolutely nothing today, tomorrow I’m going back to climbing 20 feet (6m) up a ladder to pressure-wash shingles with my new pressure washer … hey, do I know how to party or what?
My very best to all,
w.
Again I Ask: When commenting, please quote the exact words you’re responding to. And if you’re commenting on something not on the thread, provide a link. It avoids endless misunderstandings.
Somewhere I had gotten the idea that one reason FORTRAN was used was that it had been written to compile into sections that can be parallelized, to take advantage of parallel processing on supercomputers.
How many ‘modern’ languages will compile parallelized code, or have need to?
I run parallelized R on my multi-core home computer (MacBook Pro).
w.
There are many great things about R, but it is an interpreted language, so it would not be efficient enough for the huge computational load of a long-term climate model.
Highly parallelized computing is a relatively small niche – I don’t know if there are really superior alternatives to Fortran (my first programming language, but I haven’t used in 40 years…)
Not entirely true, Ed.
"The byte compiler was first introduced with R 2.13, and starting with R 2.14, all of the standard functions and packages in R were pre-compiled into byte-code. The benefit in speed depends on the specific function, but code's performance can improve by a factor of 2x or more."
What this means is that you’d just need to write packages rather than individual functions if you want the speedup.
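(For what it's worth, a single function can also be byte-compiled by hand with the base "compiler" package; a minimal sketch:)

# Sketch only: byte-compile one function by hand.
library(compiler)
slow_sum <- function(x) { s <- 0; for (v in x) s <- s + v; s }
fast_sum <- cmpfun(slow_sum)   # byte-compiled copy of the same function
fast_sum(1:1e6)                # same answer; any speedup depends on your R version and JIT settings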
w.
Pretty much every language that I am familiar with can be written to run in parallel. What you need is an operating system that will support parallelism.
Pretty much every language that I am familiar with can be written to run in parallel. What you need is an operating system that will support parallelism.
I see what you mean but you forgot the smiley face.
Happy Birthday Willis – great stuff (again)
Yes, it is a cheap fix. De-bugging even simple code can be very time consuming, hence expensive.
I guess what I would do if tasked to find the problem would be to write out a matrix of all the intermediate values and plot them later. I'd look for any variables suddenly stepping, suddenly rising very rapidly, or flattening out, indicating they had been clamped. I'd then explore those areas in the code to see what was going on.
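Something along those lines is easy to automate. A rough R sketch (the matrix of saved intermediate values is hypothetical, one column per variable and one row per timestep):

# Sketch only: flag variables that sit dead flat for a long stretch, which is
# what a clamped (guard-railed) variable looks like in the saved output.
flag_clamped <- function(values, flat_run = 50) {
  apply(values, 2, function(col) {
    runs <- rle(diff(col) == 0)                  # runs of "no change at all"
    any(runs$values & runs$lengths >= flat_run)
  })
}
flag_clamped(cbind(noisy = rnorm(200), pinned = c(rnorm(100), rep(2, 100))))   # FALSE TRUE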
When I was younger and less experienced, I wrote an iterative program in Basic on an 8-bit Atari to estimate the terminal velocity of a falling object. It was well behaved until it got close to about 120 MPH, and then the speed oscillated wildly. I think the major problem was round-off errors, particularly as the denominator approached zero.
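For what it's worth, that behaviour is easy to reproduce, and it isn't only round-off. A quick R sketch of the same sort of calculation (explicit Euler, made-up drag numbers): with a small timestep the speed settles smoothly onto the terminal velocity, while too big a timestep makes it overshoot and bounce around the answer.

# Explicit-Euler fall with quadratic drag; made-up numbers, illustration only.
fall <- function(dt, t_end = 20) {
  g <- 9.81; k <- 0.18                    # terminal velocity = sqrt(g/k), roughly 7.4 m/s
  n <- round(t_end / dt)
  v <- numeric(n); vel <- 0
  for (i in 1:n) {
    vel  <- vel + dt * (g - k * vel^2)    # one Euler step
    v[i] <- vel
  }
  v
}
plot(fall(dt = 0.05), type = "l")   # smooth approach to terminal velocity
plot(fall(dt = 1.0),  type = "l")   # overshoots and bounces around it -- the timestep, not round-off, is the culprit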
From pedants’ corner Willis –
“mistaken conclusions” please.
“belief” should not be used in any references involving proper scientific pursuits.
“Belief” belongs in religious discourse.
(or AGW disciples)
Sorry, far too pedantic for me. Do you believe the sun will rise tomorrow? Do you believe that E=mc^2?
w.
I completely agree! I'd liken the parameterizations and clamping of out-of-range variables to writing the "physics-based" equation E = mc^(2 +/- 20%). It is "physics-based", but probably not too useful.
How much training does the average climastrologer get or have in real software engineering?
All of these stories about FORBASH spaghetti noodle stirring point to the answer—none.
Hi Willis,
Happy 75th. I got there 5 years ago, so can also relate to punched cards. Used to draw colored diagonal lines on the outside of a stack to make it easier to re-assemble it.
My first programming was in machine language, making a perpetual calendar that accounted for leap years. After that, I realized that good programmers, like good golfers, were rare and needed to keep in heavy training, which I did not have the time for. My best calendar effort had twice the lines of my colleague's, so over to him in future.
Re, GCMs. Pat Frank’s uncertainty estimates are interesting. You seem to be quiet about them, Nick Stokes simply unbelieves. After having worked for an income that depended on keeping within uncertainty bounds, I tend to agree with Pat’s findings. Any comment?
Finally, with many applications in science, a what-if can be diagnostic. What if a study was done on GCMs by turning off the main components one after another to see the effect on output. IIRC, that was done for clouds, authors claimed no effect of turning them off. Has anyone turned off CO2 to see the output?
Keep off ladders! My balance is now extremely poor, gave up ladders a decade ago, real threat. Geoff S
w. ==> “They are “iterative” models, meaning that the output of one timestep is used as the input for the next timestep.
Notice in the upper panel how many runs fall out the bottom during the control phase of the model runs … and that has never happened with the real earth. [ Figure 1. 2,017 runs ]
This routine makes sure that the temperature remains within reasonable bounds”
Iterative models of non-linear physics always produce chaotic results (using the definitions of Chaos Theory). That is why they get runs with results that fly off into the stratosphere or dive to the center of the Earth (figuratively).
They must put bounds on all sorts of elements of the output, as Chaos demands that the models output nonsensical values, sometimes lots of them. To solve this, they simply throw away these results and only keep the ones that agree with their biases (expected results). Even then, the spread of results is as shown in Fig 1 Panel a – all over the map.
Averaging such results is nonsensical and un-physical and violates basic principles of science.
They can not eliminate the effects of Chaos — and thus “we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” IPCC Third Assessment Report (TAR) WG1 Section 14.2.2.2
The file RADIATION.F contains much of the code that handles the radiant balance and contains thousands of baked in floating point constants, many of which have no documentation associated with them.
They have recently updated from F66 to the 1990 version of Fortran by replacing spaghetti code control constructs with loop constructs and even case statements…
From the article: “Finally, all models are subjected to what I call “evolutionary tuning”. That is the process whereby a change is made, and then the model is tested against the only thing we have to test it against—the historical record.”
What historical record? The only historical temperature records I know of are the written, regional surface temperature charts. The models don't test against those.
What the models test against is the bogus, bastardized Hockey Stick global temperature science fiction chart. This is not a historical record. This is dreamed up in a computer and looks nothing like the written, historical temperature record.
Testing a computer model against science fiction such as the bogus Hockey Stick chart will tell us nothing about the real world.
“thank all of my friends both in real life and on the web…”
Being on the web does not make us less real!
Happy Birthday Willis and thank you for your posts.
Happy Birthday Willis, you are amazing! My view on models is that there have only ever been two models worth bothering with. One you find on a catwalk and the other can be seen on the freeway, and until the advent of EV’s it was easy to determine which was the most reliable.
Another dose of reality. Happy Birthday W
Years ago I surmised that Climate Science with their GCMs was ‘mired in minutia’. This assessment corroborates that.
To estimate the future average climate temperature, one might just as well guess at what is important, use what has actually been measured, and optimize their combination. Assume the same things will remain important for a while to predict the future. That's what produced this. Click my name and see Sect 17 to see how.
Crap! Clicking my name doesn’t work with Google anymore. Click this: http://globalclimatedrivers2.blogspot.com
Is there any way to construct an equivalent graph using the raw or unprocessed data? Considering all the temperature and heatwave records set in the 30's, the HadCRUT series seems nonsensical.
Certainly there is more concrete and asphalt on the ground now than 80 years ago!
True but emissivity is about the same and 71% of the planet is covered by oceans anyway. Evaporation change doesn’t matter because water vapor is measured and that is what I use. Water vapor has increased about twice as fast as possible from just feedback.
Water vapor increase has been about twice as fast as possible from just feedback.
Years ago I made this comparison of anomalies of all who reported way back in time. The 'wrt' refers to the reference T as an average for some previous period. They were all constructed from the same database of raw data. IMO they are close enough to each other that I decided to use H4 instead of some average. Also IMO, back then they were a bunch of experts trying to get to the truth, while since then many (me included) think some might be 'adjusting' the data to better match their faulty GCMs. I have not been motivated to try to reconstruct a T trajectory using initially reported temperatures because I don't think it would make much difference.
I’ve been reading WUWT for many years and I can recall several articles Anthony posted where ‘Climate Scientists’ had things pointed out or found things that “aren’t in the models.”
I’m guessing that those discoveries of oversights led to a fair amount of the muddling.
Willis, what a tour de force! Any reader who may have discounted you because "you are not a climate scientist" should benefit from a link to this remarkable piece of work.
You thanked so many people, and it is proper to do so, of course. But I have to say that the thanks you are due for sharing your knowledge, clear analyses, and patient explanations, all in straightforward language and with no paywalls or monetary profit to you, is global in scale. Even antagonistic researchers have been enriched (some, like the Nile crocodile being saved by an ecologist, will still try to snap their benefactor's ass off).
Thanks for your kind words, Gary. I fear that discounting my work based on my complete and total lack of credentials will continue to be the refuge of those who can’t attack my science …
As you point out, I’m indeed an “amateur” scientist, a word which has an interesting etymology:
“Amateur—late 18th century: from French, from Italian amatore, from Latin amator ‘lover’, from amare ‘to love’”.
In other words, I’m a scientist, not for money or fame, but because of my love of science, for the joy of discovery, for the surprise of finding new relationships.
In any case, I’m used to being dissed because of my amateur status. I wrote about it over a decade ago in my post called “It’s Not About Me“.
My best to you,
w.
Oh, and Happy Birthday! And remember that the phenomenal Michael Faraday was a self-taught amateur whom Sir Humphry Davy took on as an assistant.
I’ve done many model simulations that were iterative, but much simpler than a GCM. They were always essentially solutions to the diffusion equation. The good thing is that there is theoretical guidance as to how to combine your time and spatial steps so that the thing doesn’t go off the rails. Even then, it’s a tricky business when there are complex non-linear parts to the model, as there’s no solution “in the CRC Handbook” to check it against. So I completely agree with Willis that one should be exceedingly humble about the accuracy to be expected from such models.
One model I looked at from decades ago dealt with the uptake of oxygen diffusively into a long thin bit of muscle tissue which is in turn consuming oxygen. The muscle was approximated by an infinite cylinder (got to cut down on dimensions when you can). To get the “right” answer you run it and have a guardrail to prevent O2 from going to negative infinity. Physics! Biology! To paraphrase the immortal words of Clint Eastwood, a model’s got to know its limitations.
Fortran 77 was my first language. It’s fine for number-crunching but no good for talking to anything besides the hard disk and the screen. Most big, iterative programs (finite element analyses, process simulators, etc.) were written in Fortran. They now have slick front ends but the guts are still Fortran. I asked my boss why they didn’t rewrite the reservoir simulation code in a more modern language. His answer was that the risk of it not running in the new language outweighed any benefit.
I wonder if Gavin Schmidt ever gets on the roof to do a spot of pressure washing?
Happy Birthday, Willis, you youngster.
Willis, this was very interesting. It stimulated someone further down the thread to provide a link to this paper:
https://journals.ametsoc.org/configurable/content/journals$002fclim$002f24$002f9$002f2010jcli3814.1.xml?t:ac=journals%24002fclim%24002f24%24002f9%24002f2010jcli3814.1.xml&tab_body=fulltext-display
which addresses the question how many models there really are. The authors have taken 24 well regarded ones, and determined
They therefore address the question that has always puzzled me, how legitimate it is to simply take a bunch of models and average them.
My own worry has always been that we seem to take the average both of ones with a good match to observational results, and ones that give predictions which are way off, and I have never understood why averaging the known bad with the known good could possibly give you more reliable predictions than just using the known good ones.
Their result bears on a different but maybe related question: if there are about one-third the number of independent models as we had thought, then we are (at least to some considerable extent) just looking at groupthink when we look at the averages. It makes my puzzlement worse. If we take 8 basically identical models, add another 16 to them, and then take the means of all their predictions, the effect will be to weight the results that the 8 deliver, but without any justification.
Why are we not throwing out the failing models totally, and only using one of the identical ones?
The whole process seems completely illegitimate. In any other area of science this would never fly. Do you agree? I would also be grateful for any comments Nick Stokes has on this matter.
“how legitimate it is to simply take a bunch of models and average them”
Not particularly. But who does it? Can you quote? Generally IPCC etc present a spaghetti plot of trajectories. Sometimes someone might draw in a mean or median curve.
On the other hand, having a number of different model results obviously does tell you more than having just one. There has to be some way of making use of that extra information. But I don't think many people just take a simple average of a subset.
“Why are we not throwing out the failing models totally”
Well, you don’t know which they are. People here like to make that judgement based on agreement with surface temperature over a few years of a prediction span. But first there is much more to model output than surface temperature. And second, models do not claim to forecast weather, up to say ENSO duration, so it is not an appropriate basis for ruling models out.
I’m sure “michel” has other examples, but my most recent example for the IPCC doing this is in the latest AR6 (WG1) report.
In section 1.4.1, “Baselines, reference periods and anomalies”, on page 1-54 :
The examples I had in mind were the spaghetti charts with, as Nick says, a mean line superimposed.
But Nick's reply makes me scratch my head even more. If it's really true that we don't pick the good ones and throw the others out because we don't know which are the good ones…? We really don't know enough to be able to reject any of the 24 in the cited paper?
It makes sense; it's an excellent reason for not doing it, if you really do not know. But the implication of that is surely that the range of uncertainty is far bigger than I had thought climate scientists believe it to be. I mean, if the hottest forecasts are as good as the coolest ones, because we don't know enough to tell which model is better, then we are in much more of a mess about policy than any of the alarmed admit.
Is there any other area of science and public policy where we find this situation? Where we have a huge range of outcomes, from the catastrophic to the minor problem, and after 30 years or so of trying to model the situation still cannot reject any of them? Maybe there are, but are there any of these in which the established view is that we should act on the catastrophic forecasts?
I mean, this is the precautionary principle in action. It would be arguing something like: we have these models, we have no idea which if any is right, but a few of them, as far as we know as good as the others, show disaster coming unless we do X, so we have to do it, because the payoff if we are wrong is so huge.
Very hard sell to an increasingly skeptical electorate….
Thanks, Michel, been saying this for a while. Climate models are NOT fit for the purpose of public policy.
w.
Some time ago Dr. Judith Curry asked for some input to a presentation she was preparing for a group of attorneys. I suggested something along the lines of “Climate models are not sufficient justification for fundamentally altering our society, economy nor energy systems.” She used the idea.
Apparently it depends on the precise sub-parameter being considered, and AR6 does highlight the use of “expert judgement / assessment” more than the IPCC did back in 2013 with AR5.
Section 1.5.4 of AR6 (“Modelling techniques, comparisons and performance assessments”, pages 1-89 to 1-97) starts with the admission that : “Numerical models, however complex, cannot be a perfect representation of the real world.”
Sub-section 1.5.4.8, “Weighting techniques for model comparisons”, ends (on page 1-97) with :
– – – – – – – – – –
Box 4.1, “Ensemble Evaluation and Weighting”, can be found on pages 4-21 to 4-24.
The first paragraph “clearly” states :
NB: I'm not personally convinced at just how "clearly" that particular aspect was "communicated" … especially by journalists, invited "opinion piece" writers, and editors in the MSM.
Box 4.1 also includes the following … “interesting” (?) revelations :
Again, you have to read the full AR6 (WG1) report … well, the “Subject to Final Edits” version, at least … to find them, but there are more admissions about the “expert judgements” being made by the IPCC than those so “clearly” stated in AR5.
NB : Not many can be found in the SPM though (entering either “expert” or “judgement” into my PDF file reader application’s “Find … [ Ctrl + F ]” function returned “Not found” error messages for that file).
I have wondered why models don't bother to get the surface temperature right. We have learnt that they have a great bias, even in the model-bunch average. If this is wrong, the initial values for calculations of change must be wrong. Temperatures govern much of the climate processes, such as evaporation, water vapor, cloud dissipation, etc.
Some estimates of measured absolute temperatures and of the model average and spread. Arbitrarily chosen time.
From ncdc/noaa. https://www.ncdc.noaa.gov/sotc/global/201805
«The May 2018 combined average temperature over the global land and ocean surfaces was 0.80°C (1.44°F) above the 20th century average of 14.8°C (58.6°F).»
Global Climate Report – May 2018
This results in an average temperature in May 2018 of 15.6 degC.
Eyeballing the 12 models at Clive Best gives temperatures of approximately 16.6, 16.4, 16.3, 15.7, 15.5, 14.9, 14.5, 13.9, 13.5, 13.2, 12.9, and 12.7. This gives a model average in May 2018 of 14.7 degC. If this is representative, models are at present about 1 degC colder than measured temperatures.
http://clivebest.com/blog/?p=8788&
Most grateful, Willis. With my programming knowledge at zilch, and having at 86 forgotten most of the maths, I rely on my College Notes, which I still have from the days of my training in the RN and steam propulsion. The Navy took great care to ensure their Engineer Officers were well trained before letting them loose on the Fleet.
These notes totally trash any credibility in these climate models, where the Mindset involved appears oblivious to reality, with attached dubious motivations.
The one thing that sticks up like a sore thumb is the fact that the oceans never get much above 30°C in spite of millions of years of relentless solar radiation.
I'm tussling with myself to explain that, and find that two different math programs produce two different graphics. It's knocked me sideways, but I still stick with my hypothesis that the Hydrological Cycle is the prime influence in the moderation and control of the Earth's energy balance. Proving it is a very different matter.
After all, the oceans comprise some 72% of the Earth's area, and if you include the close water/atmosphere interface in the plant world, where the same science applies, you wind up with an area greater than that of the Earth itself.
For info: The engineering toolbox site provides a wealth of information and makes you realise how complex the practicalities are; particularly if one is trying to prove something.
As a programmer myself, I know that a computer program developed this way and being that large, is probably full of undetected bugs.
It would be interesting if someone analysed the code of the Russian climate model, INMCM, the only model that doesn't seem to be a complete failure. But that code maybe isn't available?
Since I actually have to write code for the real world, I cannot tell you how often 'reality' versus expectation has been different.
Truth is, no matter how the data looks, before I decide to act on it, rather than betting the farm that all is well, I TEST IT FIRST.
How do I test? I create a controlled set of potential outcomes based on my model, with a limited range of acceptable results, and then I see if the test comes back within the ascribed parameters.
I would say right now I am batting around 75% – better than a coin flip but still… Not that great.
Can CO2 increase temperature on the Earth via the narrow band of radiation that is being absorbed? Yes. Is it? That is the more difficult and chaotic question. I could argue that increased use of water for agricultural purposes has a more damning effect BUT that would not have the same… control aspects from government.
Also, there are a great number of potential solutions to even the issue of releasing CO2 into the air that would at least make sense. Why is it that only solar, wind, and geothermal are acceptable? Nuclear is a viable potential source of energy that is constantly disregarded as a good power source. Is it problem free? Nope, but neither are windmills and so on.
As a guy in charge of numerically controlled machines told me, “Programming is easy. Signing a production order is difficult.”
Happy Birthday and all the best to you and yours, Willis.
I thank you for all your hard work and voluminous essays. You and the others at WUWT have been an invaluable resource in my, and millions of others, education on all things Climate Related.
My Highest Respects,
Brent
Willis, this is just a short note of appreciation. Oh, and Happy Birthday, by the way! (Have many more please).
Your posts, among the great ones here at WUWT, are among my most cherished. Not only do I value WHAT you think… but also HOW you think.
You often bring us with you as you approach a problem and though I lack many of your mental skills, I certainly learn from them.
All the best to you and yours.
Willis, a few quotes and comments:
Quote: For an additional difficulty factor, as with many such programs, it’s written in the computer language FORTRAN … which was an excellent choice in 1983 when the MuddleE was born but is a horrible language for 2022.
Somewhat of a generalization. What specific aspects of the Reference Specifications for Fortran make it “a horrible language for 2022” ? I think Fortran 2018 (ISO/IEC 1539-1:2018) / 28 November 2018 is the latest release. What specifics in that Standard make it horrible for use in 2022?
Given that FORTRAN was an acronym for FORmula TRANslation, you might think it's a good approach for industrial-strength engineering and scientific computing.
Quote: How much has it grown? Well, not counting header files and include files and such, just the FORTRAN code itself, it has 441,668 lines of code … and it can only run on a supercomputer like this one.
What matters in the choice of a language is determined generally by the application of that language by those who construct the coding. Of course there are higher levels of filtering languages for use; R and Basic, for example, I think are not good candidates for industrial-strength scientific and engineering computing. Some of the important aspects of the code as constructed include maintainability, extensibility, and readability relative to independent verification that the coding agrees with the specifications. Well-structured code, with well-considered data-structures, can be written in almost any language. Unusable/unreadable code can be written in a language that promotes structure.
Lines of Code (LoC) in and of itself is a metric that has no merits. A “Supercomputer” is the machine of choice due to matters not associated with LoC. Things like clock speed, parallel capabilities, turn-around time for completion of an analysis, attached storage, . . .
Quote: • Iterative models are not to be trusted. Ever.
It's not clear what "trusted" means in this context. For solving non-linear equations, or systems of non-linear equations, there is no other option. Even for large systems of linear equations, iterative methods are sometimes employed to handle the matrices involved.
Quote: So regarding programming, I know whereof I speak.
The first four paragraphs filling this statement. I’m not going to make a quote for all those words.
Firstly, not to get bogged down in nomenclature, but 'iterative' I think has a specific meaning in numerical mathematics. The GCMs are formulated as an Initial Value Boundary Value Problem. As such, the solution methods generally march in time from the initial time to the final time of interest, using discrete values of the time step. Iteration, on the other hand, generally implies that a fixed equation, or system of equations, is non-linear and iteration is used to solve the fixed system.
The GCM solution method might be a one-shot through for a time step, or at a time step a systems of non-linear equations might be iteratively solved. I suspect GCMs use the former one-shot through. Even in that case, there might be modeling that requires solution of non-linear equations, and for those an iterative method likely will be used. [ You have already solved all non-linear equations that have analytical solutions while you were in undergraduate school 🙂 ]
In the latter case, there are two iterative processes underway; the method for the general equations and that for a sub-model requiring iteration.
So, whatever we call it, iterative methods and time-stepping methods do indeed use previously calculated information to get the next estimate. However, use of that procedure in and of itself says absolutely nothing about the validity or lack thereof of the calculated results.
The real world being non-linear, it is highly likely that many scientific and engineering computer calculations use iterative methods for all kinds of applications that affect our lives.
There are vast literatures over centuries associated with numerical solutions of equations; I think Cauchy and Euler and Newton and Gauss all dabbled in the stuff.
"Iterative" cannot be simply and categorically dismissed out of hand.
Quote: There are a couple of very large challenges with iterative models. First, as I discussed above, they’re generally sensitive and touchy as can be. This is because any error in the output becomes an error in the input. This makes them unstable.
Sensitive and touchy as can be are again so general and non-specific that this concern holds little meaning. The objective of iterative methods is in fact to reduce the difference between the input from the previous iteration and the output from the current iteration. The difference is generally driven down to machine precision, or very close to that; several digits, at least.
This makes them unstable. The use of time-stepping in and of itself does not in any way imply instability. The errors made at a given time step are truncation errors and round-off errors. The former is by far the most important when the IEEE 754 64-bit arithmetic standard is used: 11-bit exponent, 52-bit fraction. That standard includes also special handling of some of the usual pathology situations that can be encountered when finite-precision arithmetic is used.
If applications indicate that round-off errors are a concern, some Fortran compilers can handle 128-bit representations. Special coding can sometimes also be constructed that minimizes specific identified round-off problems. Round-off error generally does not grow bit-by-bit (in the digital bit sense) as the number of evaluations increase so to get to the point that round-off affects say the 5th or 6th digit generally requires massive numbers of evaluations. Sometimes round-off errors become random in nature so that no fixed growth or decay occurs.
Stability, or instability, is determined by the specific mathematical formulations used for the numerical solution methods; not the use of time-stepping.
Dan Hughes February 19, 2022 8:40 am
First, let me say that your voice is always welcome on my planet.
My main objection to FORTRAN is that you have to use FOR loops and DO loops and WHILE loops (actually all handled as DO loops in FORTRAN). These hugely increase the odds of bugs in your code. To loop through a bunch of 3D gridcells in FORTRAN and add one to each of them, you need to use something like (pseudocode):
do latitude = -89.5, 89.5 , 2
do longitude = -179.5, 179.5 , 2
do altitude = 0,10000, 1000
gridcell(latitude, longitude, altitude) = gridcell(latitude, longitude, altitude) +1
end do
end do
end do
In a modern language like R, you simply say
gridcellarray = gridcellarray +1
Guess which one is a) easier to write b) easier to read c) less likely to contain bugs?
Clearly I wasn’t clear. I gave the number of lines of code simply to indicate the difficulty in understanding and debugging the model.
In my example just above, what takes one line of code in R takes seven lines in FORTRAN … which is easier to understand and debug?
By “trusted” I meant depended on without extensive testing, verification, and validation, none of which have been applied to the climate models.
I think I see the difficulty. You are using “iterative” to mean a model that converges on a final answer by repeated calculations. For example, you can get a square root of a number “Z” iteratively by dividing it by some number “S”, and then averaging “S” and “Z/S”. This gives a new number to divide “Z” by, and it will eventually converge to the square root of Z.
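In code, that converging kind of iteration looks like this (a quick R sketch of the square-root example):

# Converging iteration: repeat the same calculation until the answer stops changing.
iter_sqrt <- function(Z, S = Z / 2) {
  repeat {
    S_new <- (S + Z / S) / 2                        # average S and Z/S
    if (abs(S_new - S) < 1e-12) return(S_new)
    S <- S_new
  }
}
iter_sqrt(2)   # 1.414214..., after only a handful of passes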
I'm using "iterative" to mean a model that calculates a time-dependent output by calculating the answer at small time intervals, and using the output of one step as the input of the next step. So to calculate the global temperature in the year 2100, we calculate the temperature and literally thousands of other variables a half hour from now. Then we use those thousands of variables as the starting point to calculate the temperature an hour from now. Repeat that process a mere 1,365,120 times, and PRESTO! We have the temperature in 2100!
THAT is the kind of iterative model I say do not trust, ever. Perhaps I’m using the wrong term for that kind of model. What term would you use for a time-stepping model like that?
You are speaking of what I call “converging” models.
However, the idea that in the climate models the only “errors made at a given time step are truncation errors and round-off errors” doesn’t pass the laugh test. To those I’d add:
• computer bugs of a hundred varieties
• edge-effect errors
• errors of logic
• theoretical errors
• errors from an incomplete understanding of the physics of the system
• errors from omitted variables
• errors from gridcells larger than important phenomena
• errors from incorrect parameterization and/or guardrails
And finally, there are what I call "Rumsfeld Errors". Donald Rumsfeld said:
"… there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know."
With time-stepping computer models, any of these errors can grow with each time step, ending up with the model going totally off the rails as in Figure 2.
And with all of those, trusting time-stepping climate models is … well … let me call it “excessively optimistic”.
My best to you,
w.
Willis, I focused on numerical solution method errors, not any of the multitude of unrelated ailments that can afflict software.
You might find the following to be fun exercises.
Dan
A simple, direct, fun introductory look into truncation and round-off errors.
Here’s an exercise that can be carried out in R or any other language.
Numerically integrate
dT/dx = 1 – T^4
for various numerical integration methods, values of step size, and finite-precision. The explicit Euler method is straightforward and easily coded, for example, or use any of your favs. Or, roll your own in any language and any numerical integration you love.
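A minimal sketch of that exercise in R (explicit Euler, with step sizes picked arbitrarily):

# Sketch only: explicit Euler for dT/dx = 1 - T^4, starting from T = 0.
euler_T <- function(h, x_end = 1) {
  Tval <- 0
  for (i in seq_len(round(x_end / h))) Tval <- Tval + h * (1 - Tval^4)
  Tval
}
euler_T(h = 0.1)     # estimate of T at x = 1
euler_T(h = 0.01)    # finer step; the small difference between the two is the truncation error showing itself
euler_T(h = 0.001)   # finer still -- the estimates converge as the step is refined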
This equation is interesting because the numerical values of the truncation terms can be calculated. The magnitude of each truncation-error term can be compared with the magnitude of the numerical solution for each x. And also with 1 which represents the constant throughput of the modeled process.
The steady state solution is T_ss = 1. There is an analytical solution that gives T as an implicit function of x: Given x you can get a numerical estimate for T by finding the zero of
x – F(T) = 0
As the step size is refined, the difference between either (1) successive values of T using two different values of the step size, or (2) the difference between the current value of T and the numerical estimate from solution of the above equation will decrease. That’s truncation error.
If R has a facility for setting the finite-precision representation of numbers, check a couple or a few of those as well: 4-bytes, 8-bytes, mo-bytes.
What you’ll see is that:
(1) the greatest differences between the numerical and analytical solutions occur near the region of non-linear changes in T. Truncation-error is bigger when the method does not give you a good estimate of the gradient.
(2) for sufficiently small step size and long final end time you might see the effects of round-off kick in. That is, continue to integrate for long times after T reaches 1.0 and observe the value of T. Display lots o' digits; display all that are consistent with the finite precision. After decreasing as step size is refined, the difference might hit a minimum and begin to increase. How many evaluations are needed to hit the minimum? What does the round-off error look like after it becomes visible? What is the magnitude of the sum of the truncation and round-off errors for many, many, many evaluations? What is the number of digits that are not affected by round-off?
(3) higher precision arithmetic will not change the truncation error, but will delay appearance of the effects of round-off.
(4) calculate the discrete values of x by a) incrementing x like x_now = x_pre + step_size, and b) x = istep * step_size. Do this for different precision; 4-bytes, 8 bytes, extended precision. Note that errors in x will affect errors in the iterative solution of the analytical solution.
(5) what are the effects of the stopping criteria applied to the implicit analytical solution?
(6) do all of the above using variations of the Euler method, or any other method.
This exercise can be easily carried out for a multitude of ODEs that have analytical solutions, both linear and non-linear (more of the former and fewer of the latter, those being forms that are missing the independent variable like the example).
Then check out some simple PDEs.
Thanks, Dan, I’ll take a look.
w.
Increment all elements of an array in Fortran 95/90:
http://astroa.physics.metu.edu.tr/MANUALS/intel_ifc/mergedProjects/optaps_for/fortran/optaps_prg_arrs_f.htm
Thanks, Dan. Clearly I’m behind the loop in my knowledge of FORTRAN.
Unfortunately, so is the GISS ModelE … for example:
[8] “c Loop over quantities in the file”
[9] “c”
[10] ” do varid=vid1,vid2″
[11] ” status = nf_inq_varndims(fid,varid,ndims)”
[12] ” if(ndims.lt.2) cycle”
[13] ” dids = -1″
[14] ” status = nf_inq_vardimid(fid,varid,dids)”
[15] ” if(single_pair_of_dims) then”
[16] ” if(count(dids.eq.did1).ne.1) cycle”
[17] ” if(count(dids.eq.did2).ne.1) cycle”
[18] ” else”
[19] ” did1 = dids(1)”
[20] ” did2 = dids(2)”
[21] ” status = nf_inq_dimlen(fid,dids(1),dsiz1)”
[22] ” status = nf_inq_dimlen(fid,dids(2),dsiz2)”
[23] ” allocate(xout(dsiz1*dsiz2))”
[24] ” endif”
[25] ” do n=1,ndims”
[26] ” if(dids(n).eq.did1) idim=n”
[27] ” if(dids(n).eq.did2) jdim=n”
[28] ” status = nf_inq_dimlen(fid,dids(n),dsizes(n))”
[29] ” status = nf_inq_dimname(fid,dids(n),dnames(n))”
[30] ” enddo”
[31] ” srt = 1″
[32] ” cnt = 1″
[33] ” cnt(idim) = dsizes(idim)”
[34] ” cnt(jdim) = dsizes(jdim)”
[35] ” nslab = product(dsizes(1:ndims))/(dsiz1*dsiz2)”
[36] ” if(ndims.gt.2) then”
[37] ” k = 1″
[38] ” diminfo=””
[39] ” do n=1,ndims”
[40] ” if(n.eq.idim .or. n.eq.jdim) cycle”
[41] ” kmod(n) = k”
[42] ” k = k*dsizes(n)”
[43] ” write(str1,'(i1)’) int(1.+log10(real(dsizes(n))))”
[44] ” ifmt(n)(3:3) = str1″
[45] ” ifmt(n)(5:5) = str1″
[46] ” write(str3,ifmt(n)) srt(n)”
[47] ” if(len_trim(diminfo).eq.0) then”
[48] ” diminfo=trim(dnames(n))//’=’//trim(str3)”
[49] ” else”
[50] ” diminfo=”
[51] ” & trim(diminfo)//’ ‘//trim(dnames(n))//’=’//trim(str3)”
[52] ” endif”
[53] ” p1(n) = len_trim(diminfo)-len_trim(str3)+1″
[54] ” p2(n) = len_trim(diminfo)”
[55] ” enddo”
[56] ” endif”
[57] ” vname = ””
[58] ” status = nf_inq_varname(fid,varid,vname)”
[59] ” lname = ””
[60] ” status = nf_get_att_text(fid,varid,’long_name’,lname)”
[61] ” if(status.eq.nf_noerr) then”
[62] ” do k=1,len_trim(lname) ! remove extra NULL characters”
[63] ” if(iachar(lname(k:k)).eq.0) lname(k:k)=’ ‘”
[64] ” enddo”
[65] ” else”
[66] ” lname = vname”
[67] ” endif”
[68] ” units = ””
[69] ” status = nf_get_att_text(fid,varid,’units’,units)”
[70] ” if(status.eq.nf_noerr) then”
[71] ” do k=1,len_trim(units) ! remove extra NULL characters”
[72] ” if(iachar(units(k:k)).eq.0) units(k:k)=’ ‘”
[73] ” enddo”
[74] ” units = ‘(‘//trim(units)//’)'”
[75] ” endif”
[76] ” shnhgm = undef”
[77] ” if(ndims.eq.2) then ! look for global means”
[78] ” status = nf_inq_varid(fid,trim(vname)//’_hemis’,varid2)”
[79] ” if(status.eq.nf_noerr) then”
[80] ” status = nf_get_var_real(fid,varid2,shnhgm)”
[81] ” endif”
[82] ” endif”
[83] ” do k=1,nslab”
[84] ” status = nf_get_vara_real(fid,varid,srt,cnt,xout)”
[85] ” title = trim(lname)//’ ‘//units”
[86] ” if(ndims.gt.2) title=trim(title)//’ ‘//trim(diminfo)”
[87] ” lrem = 80-(len_trim(title)+1) ! space remaining for run info”
[88] ” if(lrem>0 .and. is_modelE_output) then”
[89] ” l1 = 80-min(lrem,linfo)+1″
[90] ” title(l1:80)=run_info(1:min(lrem,linfo))”
[91] ” endif”
[92] ” if(shnhgm(3).eq.undef) then ! no global mean available”
[93] ” write(lunit) title,xout”
[94] ” else ! write with global mean”
[95] ” write(lunit) title,xout”
[96] ” & ,(undef,n=1,dsiz2) ! have to write means at each lat”
[97] ” & ,shnhgm(3) ! before the global mean”
[98] ” endif”
[99] ” if(ndims.gt.2) then”
[100] ” do n=1,ndims ! increment the start vector”
[101] ” if(n.eq.idim .or. n.eq.jdim) cycle”
[102] ” if(mod(k,kmod(n)).eq.0) then”
[103] ” srt(n) = srt(n) + 1″
[104] ” if(srt(n).gt.dsizes(n)) srt(n)=1″
[105] ” write(str3,ifmt(n)) srt(n)”
[106] ” diminfo(p1(n):p2(n))=trim(str3)”
[107] ” endif”
[108] ” enddo”
[109] ” endif”
[110] ” enddo”
Like I said … invites bugs, hard to avoid errors, hard to debug …
w.
At first glance, the twin assignments status = nf_xxxx in lines 21, 22 and 28, 29 suggest that a side effect of the nf_xxxx call is being utilised. A very unstructured practice.