Another uncertainty for climate models – different results on different computers using the same code

New peer reviewed paper finds the same global forecast model produces different results when run on different computers

Did you ever wonder how spaghetti like this is produced and why there is broad disagreement in the output that increases with time?

[Graph: CMIP5 73 climate models vs. observations, 20N-20S mid-troposphere, 5-year means. Graph above by Dr. Roy Spencer]

Increasing mathematical uncertainty from initial starting conditions is the main reason. But some of it might be due to the fact that while some of the models share common code, they don't produce the same results with that code, owing to differences in the way CPUs, operating systems, and compilers work. Now, with this paper, we can add software uncertainty to the list of uncertainties that are already known unknowns about climate and climate modeling.

I got access to the paper yesterday, and its findings were quite eye opening.

The paper was published July 26, 2013, in Monthly Weather Review, a publication of the American Meteorological Society. It finds that the same global forecast model (using the 500-hPa geopotential height as the test metric), run on different computer hardware and operating systems with no other changes, produces different output.

They say that the differences are…

“primarily due to the treatment of rounding errors by the different software systems”

…and that these errors propagate over time, meaning they accumulate.

According to the authors:

“We address the tolerance question using the 500-hPa geopotential height spread for medium range forecasts and the machine ensemble spread for seasonal climate simulations.”

“The [hardware & software] system dependency, which is the standard deviation of the 500-hPa geopotential height [areas of high & low pressure] averaged over the globe, increases with time.”

The authors find:

“…the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.”

The initial conditions of climate models have already been shown by many papers to produce significantly different projections of climate.

It makes you wonder if some of the catastrophic future projections are simply due to a rounding error.

Here is how they conducted the tests on hardware/software:

Table 1 shows the 20 computing environments including Fortran compilers, parallel communication libraries, and optimization levels of the compilers. The Yonsei University (YSU) Linux cluster is equipped with 12 Intel Xeon CPUs (model name: X5650) per node and supports the PGI and Intel Fortran compilers. The Korea Institute of Science and Technology Information (KISTI; http://www.kisti.re.kr) provides a computing environment with high-performance IBM and SUN platforms. Each platform is equipped with different CPU: Intel Xeon X5570 for KISTI-SUN2 platform, Power5+ processor of Power 595 server for KISTI-IBM1 platform, and Power6 dual-core processor of p5 595 server for KISTI-IBM2 platform. Each machine has a different architecture and approximately five hundred to twenty thousand CPUs.

[Image: Table 1 – the 20 computing environments]

And here are the results:

[Image: Table 2]
Table 2. Globally-averaged standard deviation of the 500-hPa geopotential height eddy (m) from the 10-member ensemble with different initial conditions for a given software system (i.e., initial condition ensemble), and the corresponding standard deviation from the 10-member ensemble with different software systems for a given initial condition (i.e., software system ensemble).

While the differences might appear small to some, bear in mind that these differences in standard deviation are for only 10 days' worth of modeling on a short term global forecast model, not a decades-out global climate model. Since the software effects they observed in this study are cumulative, imagine what the differences might be after years of calculation into the future, as we see in GCMs.
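
To see how quickly a difference in the last digit can grow under repeated updates, here is a toy sketch in Python. It iterates the logistic map, a standard chaotic stand-in; it is purely illustrative and is not the GRIMs code or any GCM:

# Two copies of the same iterated calculation, differing by one part in 10^15,
# roughly the rounding level of double precision arithmetic.
r = 3.9
x, y = 0.5, 0.5 + 1e-15

for step in range(1, 101):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if step % 20 == 0:
        print(step, abs(x - y))

The printed gap starts near 1e-15 and grows by many orders of magnitude over the hundred steps. That, in miniature, is the accumulation the paper is describing.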

Clearly, an evaluation of this effect is needed over the long term for many of the GCMs used to project future climate, to determine whether it affects those models too, and if so, how much of their output is real and how much is simply accumulated rounding error.

Here is the paper:

An Evaluation of the Software System Dependency of a Global Atmospheric Model

Song-You Hong, Myung-Seo Koo, Jihyeon Jang, Jung-Eun Esther Kim, Hoon Park, Min-Su Joh, Ji-Hoon Kang, and Tae-Jin Oh. Monthly Weather Review, 2013, e-View. doi: http://dx.doi.org/10.1175/MWR-D-12-00352.1

Abstract

This study presents the dependency of the simulation results from a global atmospheric numerical model on machines with different hardware and software systems. The global model program (GMP) of the Global/Regional Integrated Model system (GRIMs) is tested on 10 different computer systems having different central processing unit (CPU) architectures or compilers. There exist differences in the results for different compilers, parallel libraries, and optimization levels, primarily due to the treatment of rounding errors by the different software systems. The system dependency, which is the standard deviation of the 500-hPa geopotential height averaged over the globe, increases with time. However, its fractional tendency, which is the change of the standard deviation relative to the value itself, remains nearly zero with time. In a seasonal prediction framework, the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.

h/t to The Hockey Schtick

281 Comments
Chris @NJSnowFan
July 27, 2013 11:06 am

Great report and read Anthony.
Tree Rings in trees also do the same thing in a way. The roots are grounded but soil conditions under each tree can vary from tree to tree giving different results even though the trees are same species and grow right next to each other or in same area.
Like how computer models run on different software.

Rhoda R
July 27, 2013 11:06 am

Didn’t some warmist researcher make a comment about not trusting data when it conflicted with models?

Dave Dardinger
July 27, 2013 11:07 am

I don't know that I thought much about computing errors, but it should be known by everyone who wants to be taken seriously in the climate debate that even the approximations to the actual equations of science being modeled in the climate models cannot be kept from diverging for very long. This is why I cringe whenever I see the modelers trying to claim that their models are using the actual scientific equations underlying earth's climate. They aren't and they can't.

Eric Anderson
July 27, 2013 11:12 am

Rounding errors. Digits at the extreme that are too small to deal with, but that, over time, end up affecting the outcome. Minuscule physical properties that we are unable to measure. The general uncertainty principle involved in trying to forecast how climate will behave (made up of weather, made up of air particles, made up of individual molecules, made up of individual particles, both the precise location and trajectory of which cannot, in principle, even be measured).
Is it just the case that we need better computers, more accurate code, more decimals, more money, more measurements?
The doubt that keeps gnawing at me is whether it is possible — even in principle — to accurately model something like climate over any reasonable length of time.

Nick Stokes
July 27, 2013 11:13 am

[snipped to prevent your usual threadjacking. You are wrong- read it and try again – Anthony]

Richard M
July 27, 2013 11:13 am

Wild ass guess plus or minus rounding error is still just a WAG. Wake me up when they properly model oceans and other natural climate factors.

DirkH
July 27, 2013 11:15 am

Now did I lambast people for years here now with the mathematical definition of chaos as used by chaos theory, which states that a system is chaotic IFF its simulation on a finite resolution iterative model develops an error that grows beyond any constant bound over time? And was this argument ignored ever since by all warmists (to be fair; our resident warmists surely had their eyes glaze over after the word mathematical)?
Yes and yes.

more soylent green
July 27, 2013 11:15 am

Floating point numbers are not precise, and computers use floating point numbers for very large or very small numbers. This is not a secret, and while everybody who ever took a programming course probably learned it, most of us forget about it unless reminded.
Somebody who works as a programmer in the scientific or engineering fields where floating point numbers are routinely used should be aware of this issue. However, it appears that much of the climate model code is written by anybody but professionally trained computer programmers or software engineers.
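
A concrete illustration, in Python (ordinary IEEE double precision, nothing to do with the model code itself):

# Neither 0.1 nor 0.2 has an exact binary representation.
print(0.1 + 0.2)                               # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                        # False

# Floating point addition is not associative, so grouping matters.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False

Change the grouping, which an optimizing compiler or a different parallel decomposition is free to do, and the last bit of the result changes; iterate a model millions of times and the last bit does not stay the last bit.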

Latitude
July 27, 2013 11:17 am

honest questions, I’m no longer sure that I fully understand these things:
1. the beginning of each model run is really hindcasting to tune the models??…if so, they missed that too
2. each model run is really an average of dozens/hundreds/thousands of runs, depending on how involved each model is?
3. if they hindcast/tuned the models to past temps, then they tuned them to past temps that have been jiggled and even if the models work, they will never be right?
4. the models have never been right…not one prediction has come true?
..I’m having an old age moment…and doubting what I thought I knew…mainly because it looks to me like people are still debating garbage

steven
July 27, 2013 11:17 am

Can’t wait to see what RGB has to say about this!

July 27, 2013 11:18 am

Butterfly errors producing storms of chaos in the models.

Mark Bofill
July 27, 2013 11:18 am

Look that’s just sad. I haven’t had to fool with it for 20 years since school, but there are methods for controlling this sort of thing, it’s not like numerical approximation and analysis are unknown frontiers for goodness sakes.

DirkH
July 27, 2013 11:18 am

[snip – I nipped his initial comment, because it was wrong and gave him a chance to correct it, you can comment again too – Anthony]

DirkH
July 27, 2013 11:20 am

more soylent green says:
July 27, 2013 at 11:15 am
“Somebody who works as a programmer in the scientific or engineering fields where floating point numbers are routinely used should be aware of this issue. However, it appears that much of the climate model code is written by anybody but professionally trained computer programmers or software engineers.”
They just don’t care. Anything goes as long as the funding comes in.

Billy Liar
July 27, 2013 11:21 am

Nick Stokes says:
July 27, 2013 at 11:13 am
[snip – I nipped his initial comment, because it was wrong and gave him a chance to correct it, you can comment again too – Anthony]

Nick Stokes
July 27, 2013 11:22 am

[snip – try again Nick, be honest this time
Note this:

While the differences might appear small to some, bear in mind that these differences in standard deviation are for only 10 days' worth of modeling on a short term global forecast model, not a decades-out global climate model.

– Anthony]

Man Bearpig
July 27, 2013 11:26 am

Rounding errors ? How many decimal places are they working to ? 1? 0?

mpaul
July 27, 2013 11:26 am

hmm, I imagine this could become the primary selection criteria when purchasing new supercomputers for running climate models. The supercomputer manufacturers will start to highlight this “capability” in their proposals to climate scientists — “The Cray SPX -14 implements a proprietary rounding algorithm that produces results that are 17% more alarming than our nearest competitor”. Maybe the benchmarking guys can also get a piece of the action — “The Linpack-ALRM is the industry’s most trusted benchmark for measuring total delivered alarminess”.
Sorry, gotta go. I’ve got an idea for a new company I need to start.

Stephen Richards
July 27, 2013 11:27 am

Nick Stokes says:
July 27, 2013 at 11:13 am
Nick, read the UK Met Office's proud announcement that their climate model is also used for the weather forecast and in so doing validates their model.

July 27, 2013 11:27 am

I find it difficult to believe that this is due purely to the handling of rounding errors. Have the authors established what it takes to produce identical results? I would start with fixed hardware, OS, compiler and optimization. Do two separate runs produce almost exactly matching results? If not, then the variation over different hardware/compilers is irrelevant. I am inclined to believe it is more likely something in the parallel compiler, in determining whether communication between nodes is synchronous. A modern CPU has a cycle time of 0.3 ns (the inverse of frequency). Communication time between nodes is on the order of 1 microsecond. So there may be an option in the parallel compiler to accept "safe" or insensitive assumptions in the communication between nodes?
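
That kind of order dependence is easy to mimic on a single machine. This toy Python sketch (no MPI, purely illustrative) sums the same numbers in three different orders, the way different domain decompositions or reduction trees would, and compares against the correctly rounded result:

import math
import random

random.seed(0)
values = [random.uniform(-1.0, 1.0) for _ in range(100000)]

forward = sum(values)                       # single node, left to right
backward = sum(reversed(values))            # same numbers, opposite order
chunked = sum(sum(values[i:i + 1000])       # per-"node" partial sums, then a final reduce
              for i in range(0, len(values), 1000))
reference = math.fsum(values)               # correctly rounded sum, for comparison

print(forward - reference, backward - reference, chunked - reference)

The three totals typically disagree in the last few bits even though every single addition is correctly rounded. A different compiler, optimization level or node count changes the order of the reductions and therefore the result, which is the kind of mechanism I have in mind.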

Nick Stokes
July 27, 2013 11:30 am

Billy Liar says: July 27, 2013 at 11:21 am
“So climate models are immune to rounding errors?”

No. Nothing is. As some have observed above, an atmosphere model is chaotic. It has very sensitive dependence on initial conditions. That’s why forecasts are only good for a few days. All sorts of errors are amplified over time, including, as this paper notes, rounding errors.
That has long been recognised. Climate models, as Latitude notes, have a long runup period. They no longer attempt to predict from the initial conditions. They follow the patterns of synthetically generated weather, and the amplification of deviation from an initial state is no longer an issue.

Curt
July 27, 2013 11:30 am

Nick – Climate models may not have specific initial conditions (e.g. today’s actual weather conditions to predict next week’s weather), but they must have initial conditions — that is, starting numerical values for the states of the system. If different software implementations of the same model diverge for the same initial conditions, that is indeed a potential problem.

Nick Stokes
July 27, 2013 11:32 am

Anthony, you say I’m wrong. What’s your basis for saying these are climate models?

Editor
July 27, 2013 11:33 am

Remarkable!

Editor
July 27, 2013 11:35 am

I expected the server to tell me it was pay walled, but got:

The server is experiencing an unusually high volume of requests and is temporarily unable to process your request.
Please try again in a moment or two.

It worked on the second try – and I was told the paper was paywalled.
This is rather interesting. I’m not a floating point expert, though I got a lesson in all that on one of my first “for the heck of it” programs that simulated orbital motion on Univac 1108.
I would think that IEEE floating point should lead to near identical results, but I bet the issues lie outside of that. The different runtime libraries use very different algorithms to produce transcendental functions, e.g. trig, exponentials, square root, etc. Minor differences will have major changes in the simulation (weather). They might even produce changes in the long term average of the output (climate).
CPUs stopped getting faster several years ago, coincidentally around the time the climate stopped warming. Supercomputers keep getting faster by using more and more CPUs and spreading the compute load across the available CPUs. If one CPU simulates its little corner of the mesh and shares its results with its neighbors, the order of doing that can lead to round off errors. Also, as the number of CPUs increases, that makes it feasible to use a smaller mesh, and a different range of roundoff errors.
The mesh size may not be automatically set by the model, so the latter may not apply. Another sort of problem occurs when using smaller time increments. That can lead to computing small changes in temperature and losing accuracy when adding that to a much larger absolute temperature. (Something like that was part of my problem in the orbital simulation, though IIRC things also got worse when I tried to deal with behavior of tan() near quadrant boundaries. Hey, it was 1968. I haven’t gotten back to it yet.)

“…the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.”

Wow. So we could keep the initial conditions fixed, but vary the decomposition. Edward Lorenz would be impressed.
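
To make the point about adding a small change to a much larger absolute temperature concrete, here is a toy Python sketch (made-up numbers, nothing from any model):

T = 288.0                      # an absolute temperature, in kelvin
dT = 1.0e-14                   # one tiny increment per timestep

print(T + dT == T)             # True: the increment is below half a unit in the last place of 288.0

total = T
for _ in range(1000000):       # accumulate a million tiny increments one at a time
    total += dT
print(total - T)               # 0.0: nothing ever accumulated

print((T + 1000000 * dT) - T)  # ~1e-8: add the increments together first and the change survives

Working in anomalies (small numbers), or accumulating the increments in their own variable, sidesteps the trap.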

July 27, 2013 11:39 am

The climate models will forever be useless because the initial state of the climate can't be put into them properly. In addition, they will never have complete existing data, or all of the data needed that influences the climatic system of the earth, nor can they account for all the data that might influence the climate of the earth, to give any sort of accurate climate forecast.
In other words they are USELESS, and one can see that not only in their temperature forecasts but in their basic atmospheric circulation and temperature profile forecasts, which have been 100% wrong.
As this decade proceeds the temperature trend will be down in response to the prolonged solar minimum and they will be obsolete.

July 27, 2013 11:39 am

For a non-tech-savvy person like me, and others too, this is undoubtedly an eye opener and a vital piece of information.

Laurie Bowen
July 27, 2013 11:40 am

Richard M . . . . “when they properly model oceans and other natural climate factors” . . . . “they” will HAVE TO take into account all the causes of “natural climate variation factors” “weather” (whether) long term, short term and/or temporary term causes . . . to build that model around . . . . at this point “they” only look at the data which are the effects of the causes. Even I can confidently forecast that! Some of the “they” have been doing it bass ackwards for a long time. In my humble observation . . . and opinion.
Google: “methods and procedures for forecasting long-term climate in specific locals” see what you get!

TerryS
July 27, 2013 11:43 am

Let me repeat a comment I made over a year ago:

I’ve often thought it would be interesting to run one of these models with 128 bit floating point numbers and then repeat the exercise with 64 bit and 32 bit numbers and then compare the outputs. Any differences would highlight the futility in attempting to use models to predict a chaotic system.

Editor
July 27, 2013 11:46 am

Man Bearpig says:
July 27, 2013 at 11:26 am
> Rounding errors ? How many decimal places are they working to ? 1? 0?
“Digital” computers don’t use decimal numbers frequently, the most recent systems I know that do are meant for financial calculations.
The IEEE double precision format has 53 bits of significance, about 16 decimal places. Please don’t offer stupid answers.

Mark Bofill
July 27, 2013 11:47 am

The pitfalls may not just be in rounding. When accuracy is important it's useful to understand how computers represent numbers: mantissa and exponent for floating point numbers, for example. The thing is, you have to bear in mind that you don't get good intermediate results when you perform operations in an order that mixes very large numbers with a lot of important significant digits and very small numbers with a lot of important significant digits; if the programmer isn't thinking this through it's easy to lose significant digits along the way. One has to bear in mind that the intermediate answer (with floating point) will always be a limited number of significant digits at one scale, or exponent.
But these are well known, well studied problems in comp sci. There’s no reason for anybody who cares about numerical accuracy to be stung this way.
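
For anyone curious what those well known methods look like, here is one of them, compensated (Kahan) summation, as a short Python sketch with made-up numbers:

import math

def kahan_sum(values):
    # Compensated summation: carry each addition's rounding error forward
    # instead of throwing it away.
    total = 0.0
    compensation = 0.0
    for v in values:
        y = v - compensation
        t = total + y
        compensation = (t - total) - y
        total = t
    return total

values = [1.0e8] + [1.0e-8] * 1000000    # one large term plus a million tiny ones

naive = sum(values)
compensated = kahan_sum(values)
reference = math.fsum(values)            # correctly rounded result

print(naive - reference)                 # a visible drift
print(compensated - reference)           # zero, or within a bit of it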

Tom
July 27, 2013 11:53 am

Terry, It would cost a lot of money to run at 128bit – one would need millions of dollars of coding, then the program would run 20 – 100x slower at 128 bit. At 32 bit you get muddy sludge in minutes with this kind of code.
The authors of the paper have done the right thing – it's much cheaper to change compilers and optimization settings while staying at 64 bit.
Essentially this is the butterfly effect in numerical computing.

Nick Stokes
July 27, 2013 11:55 am

“Nick in his initial comment (that I snipped because he and I had a misunderstanding) said that climate models don’t have initial starting conditions.”
I didn’t say that they don’t have starting conditions – I’m well aware that all time dependent differential equations have to. I said that they don’t forecast from initial conditions, as I expanded on here.

July 27, 2013 11:55 am

So it's really important to split hairs about the difference between 'global forecast models' and 'climate models'? So, until a 'scientist' produces a peer-reviewed paper showing that this divergence also occurs in 'climate models', we can safely assume they are unaffected? Ha!

July 27, 2013 11:57 am

Positive Lyapunov exponents.

July 27, 2013 11:59 am

I have been a programmer since 1968 and I am still working. I have been programming in many different areas, including forecasting. If I have understood this correctly, this type of forecasting is architected so that forecasting day N is built on results obtained for day N – 1. If that is the case I would say that it's meaningless. It's hard enough to predict from a set of external inputs. If you include results from yesterday it will go wrong. Period!

DirkH
July 27, 2013 12:02 pm

Anthony Watts says:
July 27, 2013 at 11:46 am
“Nick in his initial comment (that I snipped because he and I had a misunderstanding) said that climate models don’t have initial starting conditions.
They have to. They have to pick a year to start with, and levels of CO2, forcing, temperature, etc must be in place for that start year. The model can’t project from a series of arbitrary uninitialized variables.”
Initial starting conditions include the initial state of each cell of the models – energy, moisture, pressure (if they do pressure), and so on. The state space is obviously gigantic (number of cells times variables per cell times resolution of the variables in bits – or in other words, if you can hold this state in one megabyte, the state space is 2 ^ (8 * 1024 * 1024), assuming every bit in the megabyte is actually used), and the deviation of the simulated system from the real system is best expressed as the vector distance between the state of the simulation, expressed as a vector of all its state variables, and the corresponding vector representation of the real system. It is this deviation (the length of the vector difference) that grows beyond all bounds when the system being simulated is chaotic.
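
A toy version of that vector-distance measure in Python (a made-up 1000-cell state updated by a simple nonlinear rule; this is not a model of anything physical):

import math
import random

random.seed(0)
r = 3.9

# Two copies of a made-up 1000-cell state whose cells differ by round-off-sized amounts.
a = [random.uniform(0.2, 0.8) for _ in range(1000)]
b = [x + random.uniform(-1e-13, 1e-13) for x in a]

def update(state):
    return [r * x * (1.0 - x) for x in state]   # the same nonlinear rule applied to every cell

for step in range(1, 81):
    a, b = update(a), update(b)
    if step % 20 == 0:
        distance = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        print(step, distance)                   # length of the difference vector between the runs

The printed distance grows by many orders of magnitude and then levels off once the two states are fully decorrelated.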

DirkH
July 27, 2013 12:04 pm

Ingvar Engelbrecht says:
July 27, 2013 at 11:59 am
“I have been a programmer since 1968 and I am still working. I have been programming in many different areas, including forecasting. If I have understood this correctly, this type of forecasting is architected so that forecasting day N is built on results obtained for day N – 1.”
Yes, Ingvar, weather forecasting models as well as climate models are iterative models (basically very large finite state machines; where the program describes the transition table from one step to the next).

July 27, 2013 12:09 pm

Numerical computation is an entire distinct subfield of computer science. There are many traps for the unwary, some language-specific and others deriving from hardware differences. It used to be worse, with different manufacturers using incompatible formats and algorithms, and most had serious accuracy problems on over/underflow events.
A major advance was the 1985 adoption of the IEEE 754 standard for floating point. One principal goal was to avoid abrupt loss of accuracy as quantities neared representation limits in calculation. But even with much better underlying representation and algorithms, there are plenty of opportunities for unseasoned programmers to get results significantly off what they should be. All floating point quantities are approximate, unlike integers which are always exact. It sounds like a simple distinction but it has profound implications for programming.
One assumes all the environments used were validated using a floating-point accuracy benchmark before they ran these models, but there is no benchmark to catch sloppy programming.
Intense numerical programming is not for amateurs.

DirkH
July 27, 2013 12:14 pm

Nick Stokes says:
July 27, 2013 at 11:30 am
“They no longer attempt to predict from the initial conditions. They follow the patterns of synthetically generated weather, and the amplification of deviation from an initial state is no longer an issue.”
That’s great to hear. As every single model run will develop an individual ever-growing discrepancy in its state from the state of the real system ( in the vector space, this can be imagined as every single model run accelerating into a different direction; picture an explosion of particles here), how is the mean of a thousand or a million model runs meaningful?
TIA

Frank K.
July 27, 2013 12:16 pm

Unfortunately, Nick, climate (as formulated in most GCMs) is an initial value problem. You need initial conditions and the solution will depend greatly on them (particularly given the highly coupled, non-linear system of differential equations being solved).
“They follow patterns of synthetic weather”??
REALLY? Could you expand on that?? I have NEVER heard that one before…

Greytide. Middle England sceptic
July 27, 2013 12:16 pm

mpaul says:
July 27, 2013 at 11:26 am
Excellent! When are you selling shares? 1 penny from everyone’s bank account will make me a millionaire in no time, 0.001 of a degree each month will prove the warmists are right!

DirkH
July 27, 2013 12:21 pm

DirkH says:
July 27, 2013 at 11:15 am
Partial restoration of my comment above that got snipped:
Reminder: mathematical definition of chaos as used by chaos theory is that a system is chaotic IFF its simulation on a finite resolution iterative model develops a deviation from the real system that grows beyond any constant bound over time.

Theo Goodwin
July 27, 2013 12:24 pm

Excellent article about an excellent article. If you “looked under the hood” of a high level model and talked to the very high level programmers who manage it, you would learn that they use various heuristics in coding the model and that the effects of those heuristics cannot be separated from the model results. Run a model on different computers, especially supercomputers, and you are undoubtedly using different programmers, heuristics, and code optimization techniques, none of which can be isolated for effects on the final outcome. Problems with rounding errors are peanuts compared to problems with heuristics and code optimization techniques.
But the bottom line is that no VP of Finance, whose minions run models and do time series analyses all the time, believes that his minions are practicing science. He/she knows that these are tools of analysis only.

Nick Stokes
July 27, 2013 12:26 pm

TerryS says: July 27, 2013 at 11:43 am
“128 bit floating point numbers”

Extra precision won’t help. The system is chaotic, and amplifies small differences. It amplifies the uncertainty in the initial state, and amplifies rounding errors. The uncertainty about what the initial numbers should be far exceeds the uncertainty of how the computer represents them. Grid errors too are much greater. The only reason why numerical error attracts attention here is that it can be measured by this sort of machine comparison. Again, this applies to forecasting from initial conditions.
Ric Werme says: July 27, 2013 at 11:35 am
“The mesh size may not be automatically set by the model”

It's constrained, basically by the speed of sound. You have to resolve acoustics, so a horizontal mesh width can't be (much) less than the distance sound travels in a timestep (Courant condition). For 10 day forecasting you can refine, but you have to reduce the timestep in proportion.
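
To put numbers on why extra precision won't help, here is a toy Python sketch: the Lorenz-63 system stepped with a crude forward Euler scheme, standing in for any chaotic forecast, with one copy perturbed at the level of a realistic initial-condition uncertainty and one at round-off level:

def step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude forward-Euler step of the Lorenz-63 equations.
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def separation(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

control = (1.0, 1.0, 20.0)
obs_error = (1.0 + 1e-7, 1.0, 20.0)    # a plausible observational uncertainty
round_off = (1.0 + 1e-13, 1.0, 20.0)   # a round-off-sized difference

for n in range(1, 10001):
    control, obs_error, round_off = step(control), step(obs_error), step(round_off)
    if n % 2500 == 0:
        print(n * 0.005, separation(control, obs_error), separation(control, round_off))

Both gaps grow at the same exponential rate; the smaller one merely starts further back, so it takes longer to swamp the forecast. Going to 128-bit arithmetic shrinks the starting point of the second gap and buys a little more lead time, nothing more.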

Gary Pearse
July 27, 2013 12:26 pm

Out of my depth here, but that’s how one gleans an eclectic education these days (usually someone good at this sort of thing puts it all into a brief “for dummies” essay). Meanwhile, on faith at this point, I’m overwhelmed. A few thoughts come to mind for some kind reader to explain:
1) How can one model any complex phenomenon confidently given such confounding factors? Is there a fix?
2) Aren’t the rounding errors distributed normally? Can’t we simply take the mean path through the spaghetti arising from these errors?
3) Surely since we can't hope for perfection in modelling future climate, i.e. the errors in components (and from missing components) are necessarily sizable, rounding errors would seem to be the smaller of the issues. If we were predicting a temperature increase of 2C by 2100, what would the standard deviation from the rounding errors be, as an example?

Green Sand
July 27, 2013 12:27 pm

[Removed as requested, see Green Sand’s comment below]

Theo Goodwin
July 27, 2013 12:28 pm

Frank K. says:
July 27, 2013 at 12:16 pm
“Unfortunately, Nick, climate (as formulated in most GCMs) is an initial value problem. You need initial conditions and the solution will depend greatly on them (particularly given the highly coupled, non-linear system of differential equations being solved).
“They follow patterns of synthetic weather”??
REALLY? Could you expand on that?? I have NEVER heard that one before…”
Synthetic weather is what we have in Virginia. Nature could not produce the rain that we have “enjoyed” this year. The synthetic “pattern” is endless dreariness.

Green Sand
July 27, 2013 12:29 pm

Mods, sorry for previous OT comment, posted in error, please delete if not too inconvenient.

Theo Goodwin
July 27, 2013 12:34 pm

Tom says:
July 27, 2013 at 11:53 am
Well said. And many a decision is made on the basis of overall cost.

Nick Stokes
July 27, 2013 12:40 pm

Frank K. says: July 27, 2013 at 12:16 pm
“Unfortunately, Nick, climate (as formulated in most GCMs) is an initial value problem. You need initial conditions and the solution will depend greatly on them (particularly given the highly coupled, non-linear system of differential equations being solved).
“They follow patterns of synthetic weather”??
REALLY? Could you expand on that?? I have NEVER heard that one before…”

Here is something that is familiar, and is just a small part of what AOGCM’s do. Time varying ocean currents, shown with SST. You see all the well known effects – Gulf Stream, ENSO, Agulhas. It’s time varying, with eddies.
If you run that on another computer, the time features won’t match. You’ll see eddies, but not synchronous. That’s because of the accumulation of error. There is no prediction of exactly what the temperature will be at any point in time. But the main patterns will be the same. The real physics being shown is unaffected.

george e. smith
July 27, 2013 12:48 pm

Well, whenever I read about these computer “glitches”, a whole flood of thoughts come over me. Dirk H’s point on chaotic systems, being just one of those concerns. I think about these problems often, when driving.
It occurs to me (all the time) that all those traffic lights are programmed by the same sort of people who gave us Micro$oft Windows, the largest of all computer viruses. Well, they keep sending out new viruses every few days too.
So traffic lights are programmed to answer the question: “Which car(s) should I let go (if any) ?”
This results in most traffic lights being mostly red, most of the time, and most cars standing still burning gas very inefficiently.
If they changed the algorithm, to answer the question: “Which car(s) should I stop (if any) ?”
Then most traffic lights, would be mostly green most of the time, and most of the cars would be moving (safely), and conserving gasoline.
So a lot depends not on the programmer, but on the designer of the algorithms.
Let me give you an example from Optical Ray Tracing, to demonstrate the problem.
In lens design, you are dealing with optical surfaces, that most commonly (but not always) are portions of spheres of radius (R). Now sometimes the value of R can be large, even very large.
In a typical camera lens, the difference between a surface with a radius of 100 cm and one with a radius of 1,000 cm is not that much in practice. You even have surfaces of infinite radius, sometimes called “flats” or planes.
Well now we have a problem, because my keyboard doesn’t have an infinity key on it. Well not to worry, we can write a separate routine, to deal with plane surfaces. That doesn’t deal with the 100-1,000 problem.
Well instead, we recognize that it is the CURVATURE of the surface that determines the optical power; not the radius. Moreover, most keyboards, DO have a zero key, to designate the curvature of a plane.
Does it ever occur to anyone that no matter how small a segment of a circle (or sphere) you take, whether a micron of arc length or a light year of arc length, the curvature is still the same? It NEVER becomes zero.
Well the issue is still not solved. Suppose, I have a portion of a spherical surface of radius of curvature (R), and that zonal portion has an aperture radius of (r).
We can calculate the sag of the surface (from flat) with a simple Pythagorean calculation, which will give us:-
s = R – sqrt (R^2 – r^2) How easy is that ? ………….(1)
Well now we have a real computer problem, because if R is 1,000 mm, and r is 10 mm , then we find that sqrt is 999.95 with an error of about 1.25 E-6.
So our sag is the small difference in two large numbers; a dangerous rounding error opportunity.
So we never use equation (1), quite aside from the infinity problem.
I can multiply (and then divide) by (R + sqrt (R^2 – r^2)) to get :
(R^2 – (R^2 – r^2)) / (R + sqrt (R^2 – r^2)) = r^2 / (R + sqrt (R^2 – r^2))
So now I divide through by (R); well, why not multiply by (C) (= 1/R)?
This gives me s = Cr^2 / (1 + sqrt (1- C^2.r^2)) ……………..(2)
So the small difference problem has vanished, replaced by the sum of two nearly equal numbers, and suddenly the plane surface is no different from any other sphere.
Most geometers might not immediately recognize equation (2) as the equation of a sphere; but to a lens designer; well we live and breathe it.
This is just one example of writing computer algorithms, that are mathematically smart, rather than the red light traffic light codes.
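
Those two sag formulas drop straight into a few lines of Python. To make the cancellation obvious I have used an exaggeratedly gentle curvature (R = 10^9 mm rather than the 1,000 mm above, where the damage stays in the trailing digits); the numbers are otherwise made up:

import math

def sag_radius(R, r):
    # equation (1): subtracts two nearly equal numbers when R is large
    return R - math.sqrt(R * R - r * r)

def sag_curvature(C, r):
    # equation (2): curvature form, C = 1/R; C = 0 is simply the flat surface
    return C * r * r / (1.0 + math.sqrt(1.0 - C * C * r * r))

r = 10.0     # aperture radius, mm
R = 1.0e9    # a very gently curved surface, mm; the true sag is close to r*r/(2*R) = 5.0e-8 mm

print(sag_radius(R, r))           # about 1.19e-07 mm: more than a factor of two too large
print(sag_curvature(1.0 / R, r))  # about 5.00e-08 mm: correct
print(sag_curvature(0.0, r))      # 0.0: the plane surface needs no special case

Same machine, same precision, algebraically the same formula; only the arrangement of the arithmetic differs.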

DirkH
July 27, 2013 12:49 pm

Gary Pearse says:
July 27, 2013 at 12:26 pm
“2) Aren’t the rounding errors distributed normally? Can’t we simply take the mean path through the spaghetti arising from these errors?”
In an iterative model any error amplifies over time. Assuming the Law Of Large Numbers held (which is only true for distributions that depend on only one variable; so the assumption is overly generous, but anyway) we could dampen the error by averaging. Average N^2 models and you dampen the error by a factor of N.
As the error amplifies over time, N must become ever larger to keep the error mean at the end of the simulation under the predefined desired bound.
This gives us a forecasting horizon behind which computational power is insufficient to keep the error under the desired bound.
No such examinations or considerations by warmist climate scientists are known to me. All of this has been ignored by climate science. Their motto is
just wing it.
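
The averaging arithmetic is easy to check for the idealized case where each member's error really is independent and unbiased. This Python sketch with made-up numbers shows the spread of the ensemble mean falling by a factor of N when N^2 members are averaged:

import random
import statistics

random.seed(1)

def ensemble_mean_spread(members, trials=2000, member_spread=1.0):
    # Standard deviation of the ensemble-mean error over many synthetic "forecasts",
    # with each member's error drawn independently.
    means = []
    for _ in range(trials):
        errors = [random.gauss(0.0, member_spread) for _ in range(members)]
        means.append(sum(errors) / members)
    return statistics.pstdev(means)

for n in (1, 4, 16, 64):
    print(n, round(ensemble_mean_spread(n), 3))   # roughly 1.0, 0.5, 0.25, 0.125

Whether real model errors are anything like independent and unbiased is, of course, exactly the unexamined assumption.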

Paul Linsay
July 27, 2013 12:50 pm

Gary Pearse@12:26.
“1) How can one model any complex phenomenon confidently given such confounding factors? Is there a fix?”
The short answer is no. In a low dimensional nonlinear system where there are only a few variables one can model the “attractor” and learn a great deal about how the system behaves. This still doesn’t allow you to make long term predictions though short term predictions are possible with errors that grow exponentially with time. The climate is a very high dimensional system with severe uncertainty about the actual dynamical mechanisms, and mediocre data that covers the atmosphere and oceans very sparsely.

July 27, 2013 12:51 pm

I confess to becoming cross-eyed when I try to understand computers, however I like the idea of using a “rounding error” to escape blame for those times I utterly and totally screw up.
I’d like those of you who are wise to explain the concept further, in terms a layman can understand, so that I might employ it in possible future scenarios involving my wife and the IRS.

DirkH
July 27, 2013 12:53 pm

Nick Stokes says:
July 27, 2013 at 12:40 pm
“If you run that on another computer, the time features won’t match. You’ll see eddies, but not synchronous. That’s because of the accumulation of error. There is no prediction of exactly what the temperature will be at any point in time. But the main patterns will be the same. The real physics being shown is unaffected.”
We get whirly patterns that look like real whirly patterns, ergo we're right? Really? That's it? Now in that case I have a surprise for you – see the first graph in the headpost – the model temperatures don't look anything like the real ones. Ergo your "looks about right" argument can be used by skeptics to say – hmm, yeah, ok, according to the arguments of the warmists, the GCMs are junk because it doesn't look right. Well thank you.

Robert of Ottawa
July 27, 2013 12:59 pm

Algorithms based upon integration and random number seeding will produce divergent results.
The random number algorithm will be slightly different on different operating systems and compilers; the seed generation will be different (if it is to be truly random) and the different results each iteration are amplified by the integration algorithms.
Actually, that is why they produce multiple runs in Monte Carlo batches. However, I don't think it should give us much confidence in the "models".

Tsk Tsk
July 27, 2013 1:04 pm

It’s disturbing to hear them blame this on rounding errors. As Ric noted all of the hardware should be conforming to the same IEEE standard which means it will result in the same precision regardless. Now the libraries and compilers are a different beast altogether and I could see them causing divergent results. Different numerical integration methods could also be at play here. Whatever the ultimate cause this is a creative definition of the word, “robust.”

Mark Bofill
July 27, 2013 1:09 pm

Nick,

If you run that on another computer, the time features won’t match. You’ll see eddies, but not synchronous. That’s because of the accumulation of error. There is no prediction of exactly what the temperature will be at any point in time. But the main patterns will be the same. The real physics being shown is unaffected.

Seriously, you’ve just supplied an excellent example of why I don’t trust some warmists. The real physics being shown is unaffected. Did you go study the code really quickly there to determine what the possible sources of the discrepancies being discussed are? Did you check anything?
I doubt it. As Dirk says, the whirly patterns look pretty good, so … must be right.
At my job, in the areas we really care about and absolutely must not be wrong about, we try pretty hard to assume that everything is wrong until it’s verified as being right. Obviously there is only so far you can take this, but I’ll say this – if climate scientists treated the models as seriously and with as much rigor as engineers writing a fire detection / suppression system for a modern airliner treat their code (for example), you wouldn’t see issues like this.

July 27, 2013 1:09 pm

Important question: Is this why Prof Murry Salby needed resources to rebuild his studies when he moved to Macquarie University?
In this post: http://wattsupwiththat.com/2013/07/08/professor-critical-of-agw-theory-being-disenfranchised-exiled-from-academia-in-australia/
Prof Murry Salby wrote,

Included was technical support to convert several hundred thousand lines of computer code, comprising numerical models and analyses (the tools of my research), to enable those computer programs to operate in Australia.

In the following online discussion this was thought peculiar. It seemed strange to me too but not being an IT guy I made no comment… at least no comment worth remembering.
But now it seems to make sense to me.
Am I right?

Jonathan Abbott
July 27, 2013 1:11 pm

Very interesting. I manage a team of software engineers but am new enough to coding to not fully understand some of the discussion here. It appears to be suggested that unless the coding team really know their stuff, differences between models when run on different machines are to be expected. Could someone expand on this a bit or tell me where to start reading?

Joe Crawford
July 27, 2013 1:12 pm

Don't you just love those rounding errors? I remember when a major airplane manufacturer upgraded their mainframes back in the '60s and some parts they designed on the new system no longer fit properly. Turns out both the old system and the new one had errors in the floating point routines that no one had apparently found. The errors between the systems were just different enough. The engineers had apparently learned to work with the old ones, probably without even realizing what or why. When they moved to the new system, their adaptations no longer worked, so the parts no longer fit. Needless to say, that shook things up for a bit.

July 27, 2013 1:13 pm

“primarily due to the treatment of rounding errors by the different software systems”
Why in the bloody hell are they just figuring this out? Those of us who are engineering physicists, engineers, or even straight code programmers are taught this in class and we even learn to write programs to determine the magnitude of these errors. That these people are just now studying this and figuring it out is the height of incompetence!
Geez!

Mike McMillan
July 27, 2013 1:13 pm

Back in my 286/287 ASM days, I was playing with the then-novel Mandelbrot set, which has incredible detail down to a gazillion decimal places. Double precision with the 287 ran only about 17 places, though. As long as I was working in the zero point something region of the set, I could get down to the sub-atomic details, but once I got above one point zero, all that fine stuff went away and I was limited to larger patterns. That's a characteristic of floating point math, but with the models, I don't think it makes a nickel's worth of difference.
The models are not real climates, the initial values are not real weather values, only approximations, so computing to 40 or 50 decimal places just wastes irreplaceable electrons. Sort of a reverse of the ‘measure with a micrometer, cut with a chainsaw’ idea.
Dan Margulis, the Photoshop guru, did some work trying to see if there was an advantage by image processing in 12 bits per primary as opposed to the usual 8 bits, that’s 1024 shades of red, green, or blue, as opposed to 256 shades apiece. There’s a similarity to climate in that you have a great many little pieces adding to an overall picture. His conclusion was that examining at the pixel level, you could see subtle differences, but the overall picture looked the same. Waste of time.
The unreality of the climate models is not due to any imprecision in the calculations or to any failure to translate any particular model into verified code. It’s due to the assumptions behind the models being wrong.

July 27, 2013 1:17 pm

Could someone expand on this a bit or tell me where to start reading?
ALL computer math processing systems use approximations in order to achieve a mathematical result. I just read where in C they are trying to get rid of the term granularity, which has to do with the rounding errors. Here is what they say.
https://www.securecoding.cert.org/confluence/display/seccode/VOID+Take+granularity+into+account+when+comparing+floating+point+values
P.J. Plauger objected to this rule during the WG14 review of the guidelines, and the committee agreed with his argument. He stated that those who know what they are doing in floating point don’t do equality comparisons except against a known exact value, such as 0.0 or 1.0. Performing a fuzzy comparison would break their code. He said that if a fuzzy comparison would be necessary, then it is because someone has chosen the wrong algorithm and they need to go back and rethink it.
ALL Floating point calculating systems have this problem.
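
Plauger's rule, exact comparison only against known exact values, looks like this in practice (Python, illustrative values only):

import math

a = 0.1 + 0.1 + 0.1
print(a == 0.3)              # False: both sides carry binary representation error
print(math.isclose(a, 0.3))  # True: a tolerance-based comparison

b = 0.5 + 0.25 + 0.125       # every value here is exactly representable in binary
print(b == 0.875)            # True: equality against a known exact value is safe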

July 27, 2013 1:20 pm

A few years ago (2008), I tried to get GISS GCM Model-E, one of the climate models used by James Hansen and Gavin Schmidt at NASA, to run under Windows. Since it is written in a combination of Fortran, Perl, and Unix shell scripts, I needed to make a few, fairly simple, modifications. After simulating about 28 days of weather, it would always crash because the radiation from some source was less than zero. Since inspection revealed that the code was full of tests for other parameters being less than zero, I assumed that these were added as necessary.
Besides the fact that it wouldn’t run, there were a number of other issues.
* Years were exactly 365 days long – no leap years
* Some of the physical constants were different than their current values
* The orbital computation to determine the distance between the Earth and the Sun was wrong
It was when I discovered a specific design error in using Kepler’s equation to compute the orbital position that I quit playing with the code. I wrote a short paper explaining the error, but it was rejected for publication because “no one would be interested”.
Ric Werme says:
July 27, 2013 at 11:35 am

I would think that IEEE floating point should lead to near identical results, but I bet the issues lie outside of that.

Yes .. except that single precision floating point numbers may use a different number of bits, depending on the compiler and the compile flags.
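
The single-versus-double point is easy to make visible with NumPy (assuming it is available; toy numbers, nothing from Model-E):

import numpy as np

increment = np.float32(0.1)

total32 = np.float32(0.0)
total64 = np.float64(0.0)
for _ in range(1000000):
    total32 = total32 + increment              # 24-bit significand accumulator
    total64 = total64 + np.float64(increment)  # 53-bit accumulator, same increment value

print(total32)   # comes out near 100958, about one percent too high
print(total64)   # about 100000.0015, essentially the exact sum of a million copies of float32(0.1)

Same loop, same increments; only the width of the accumulator changed. Extended-precision intermediates (the old x87 80-bit registers, different compile flags) move results around for the same reason, only by much smaller amounts.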

Matthew R Marler
July 27, 2013 1:21 pm

Nick Stokes: There is no prediction of exactly what the temperature will be at any point in time. But the main patterns will be the same.
That is what is hoped for: that the distributions of the predicted quantities (e.g. July 2017, 2018, 2019, 2020 means and standard deviations of temp and rainfall, etc.) will be reproduced. Has it been shown to be true? Over a 30 year simulation, there is no reason to expect these small system variations to cancel out. What you'd expect would be a greater and greater divergence of the model from that which is intended to be modeled.

DirkH
July 27, 2013 1:24 pm

Jonathan Abbott says:
July 27, 2013 at 1:11 pm
“Very interesting. I manage a team of software engineers but am new enough to coding to not fully understand some of the discussion here. It appears to be suggested that unless the coding team really know their stuff, differences between models when run on different machines are to be expected. Could someone expand on this a bit or tell me where to start reading?”
As to the precision question, George E Smith put it best with his example. Yes, you must know what you're doing when working with number formats. The first Ariane 5 rocket was lost because they copied an algorithm for the stabilization from a smaller Ariane and didn't have the budget to re-test it. It turned out that the bigger mass of the new rocket led to an overflow. The algorithm was fine for the small rocket but not for the big one. All that would have been necessary was going from, I think, a 16 bit integer to a 32 bit integer or something like that; I think it wasn't even a floating-point number.
Didn’t test; lost the rocket.

DirkH
July 27, 2013 1:26 pm

Jonathan Abbott says:
July 27, 2013 at 1:11 pm
” where to start reading?”
Look up the professor who fathered the IEEE 754 floating point standard (William Kahan). He's written very good articles and books about it.

Matthew R Marler
July 27, 2013 1:30 pm

Dennis Ray Wingo: That these people are just now studying this and figuring it out is the height of incompetence!
It does strike me as rather late in the game for this. Imagine, for the sake of argument, that a patent application or medical device application depended on the validation of this code.

July 27, 2013 1:32 pm

Edward Lorenz pretty much came to almost the exact same conclusion, in regards to almost the exact same computational problem, almost 50 years ago; this is, as the Warmistas would say, “settled science”.

July 27, 2013 1:32 pm

Climate is a stochastic process, driven by the laws of random probability. To emulate this, a GCM will need a random number generator. There must be thousands of random number generator algorithms in use in computer systems, but none of them are anywhere near perfect. To get around this, you set constraints for the upper and lower limits of the number returned by the random number generator, and if the return value doesn’t pass the test, you ask the random number generator to try again. So to debug an unstable GCM, the first thing I’d look at is the constraints.
Some variation between identical programs running on different hardware and software platforms is to be expected. The usual way to seed a random number generator is from the system clock. Small differences in system speed can yield large departures.
Then there's CROE, or cumulative round off error. This was a big problem back in the 1950s but it has since been solved, so that CROE will only occur once in maybe 10 to the 20th calculations. However, a typical GCM will run on a superfast computer for 20, 30, 40 or more days and perform gazillions of calculations. Inevitably CROE is going to happen several times. So the second place I'd look at in an unstable GCM is the waypoints where numbers are examined for reasonableness.
Normally as I said the odds on a computational error are so small it’s not worth wasting time checking the results, but here the law of large numbers comes into play.

Mark Bofill
July 27, 2013 1:34 pm

Jonathan Abbott says:
July 27, 2013 at 1:11 pm
————-
I found this discussion linked from Stack Overflow I think:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

Joe
July 27, 2013 1:40 pm

Paul Linsay says:
July 27, 2013 at 12:50 pm
[…] The climate is a very high dimensional system with severe uncertainty about the actual dynamical mechanisms, and mediocre data that covers the atmosphere and oceans very sparsely.
———————————————————————————————————
Very well stated but, also, very irrelevant.
Whether or not the models have, or are even capable of developing, skill in forecasting in no way impacts their utility as long as 9.7 out of 10 Climate Scientists (who’s owners expressed a preference) accept there output.

Joe
July 27, 2013 1:42 pm

Need a shamefaced smiley and preview / edit function! That should, of course, read “their outputs”

DirkH
July 27, 2013 1:49 pm

Matthew R Marler says:
July 27, 2013 at 1:30 pm
“Dennis Ray Wingo: That these people are just now studying this and figuring it out is the height of incompetence!
It does strike me as rather late in the game for this. Imagine, for the sake of argument, that a patent application or medical device application depended on the validation of this code.”
When you write code for medical or transportation or power plant applications you know from the start the safety integrity level (SIL 0..4) the critical core of the application must fulfill, and the level of validation necessary to get it certified. Including code reviews, test specifications, documentation that proves that the tests have been fulfilled by the software – including signatures of the responsible persons, etc etc etc. (In the case of the Ariane 5 loss I mentioned above – no human life was endangered, so no biggy.)
As climate science does not directly affect human lives it has no such burden and doesn't need code validation; it is all fun and games. The life-affecting decisions only come about through the ensuing policies; and what would happen if we had to validate political decisions for their impacts on human life… probably the state would not be able to get ANY decision through the validation process.

NZ Willy
July 27, 2013 1:53 pm

My reading is that the authors are comparing different software, and that the hardware is incidental. The operating systems may matter though, depending on native precision handling – I presume no “endian” effects. I disagree with Nick that chaotic systems amplify discrepancies in initial conditions, because chaotic systems randomize and so the initial conditions get lost – the point of the “butterfly effect” is that there *isn't* one. Maybe climatological software does have a butterfly effect, in which case, “their bad”.

dp
July 27, 2013 1:54 pm

My recollection from first classes in chaotic systems is that the starting point matters in the extreme and there are no insignificant digits. The underlying precision of the computing hardware, the floating point/integer math libraries, and math co-processors all contribute error in different ways at very small and very large numbers (regardless of sign).

DirkH
July 27, 2013 1:56 pm

Mike Mellor says:
July 27, 2013 at 1:32 pm
“Climate is a stochastic process, driven by the laws of random probability. To emulate this, a GCM will need a random number generator. ”
No. A chaotic system amplifies low order state bits so that they are over time left-shifted in the state word; this leads to small perturbations becoming larger differences over time. Chaos has nothing to do with an external source of randomness.
The left-shifting of the state bits means that any simulation with a limited number of state bits runs out of state bits over time; while the real system has a near infinitely larger resolution.
Meaning, in short, a perfectly deterministic system can be a chaotic system.

ikh
July 27, 2013 1:57 pm

I am absolutely flabbergasted!!! This is a novice programming error. Not only that, but they did not even test their software for this very well known problem.
Software engineers avoid floating point numbers like the plague. Wherever possible we prefer to use scaled integers of the appropriate size. IEEE floating point implementations can differ within limits. Not all numbers can be exactly represented in a floating point implementation, so it is not just rounding errors that can accumulate but representational errors as well.
When you do floating point ops, you always need to scale the precision of the calculation and then round to a lower precision that meets your error bar requirements.
The problem is not the libraries they have used, or the differing compilers, or the hardware. It is the stupid code that makes the system non-portable.
If they can not even manage these basic fundamentals, what the hell are they doing using multi-threading and multi-processing? These are some of the most difficult Comp Sci techniques to master correctly. So I would be less than surprised if they also had race conditions throwing out their results.
Incredible!
/ikh
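
A bare-bones illustration of the scaled integer idea in Python (toy values; it only applies when the data has a natural fixed resolution, here tenths):

total_float = 0.0
total_tenths = 0                 # the same quantity stored as an exact integer count of tenths

for _ in range(1000):
    total_float += 0.1           # binary float: 0.1 is not exactly representable
    total_tenths += 1            # integer arithmetic: exact

print(total_float)               # slightly off from 100 after only a thousand additions
print(total_tenths / 10.0)       # exactly 100.0

Whether that trade-off is workable for a physics code with a huge dynamic range is another matter, which is presumably why the models stay in floating point.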

DirkH
July 27, 2013 1:58 pm

NZ Willy says:
July 27, 2013 at 1:53 pm
“I disagree with Nick that chaotic system amplify discrepancies in initial conditions, because chaotic systems randomize and so the initial conditions get lost”
No.

Gary Pearse
July 27, 2013 2:03 pm

DirkH says:
July 27, 2013 at 12:49 pm
Paul Linsay says:
July 27, 2013 at 12:50 pm
Thank you both for answering my questions and enlightening me on the ineluctable nature of these errors.
On climate forecasting, I’ve wondered out loud on some other threads that we know over the very long term that temperature varies only 8-10C between the depths of an ice age and the peaks of an interglacial, and that life on the planet seems to be an unbroken chain back >1 B yrs, so there weren’t comparatively short hidden periods of such severity in climate that ‘life’ was terminated (individual species like the dinosaurs and countless others, of course, didn’t survive – some climate, some asteroid impacts).
This would mean that the things we argue about here would be small ripples on the much larger trend – warming, ~stationary and cooling squarish waves. I would say this main mega trend is what we should be trying to nail down first. Hopefully it is at least a horizontal trend with oscillations of +/-4 to 5C. Despite the chaotic nature of climate, these mega trends are not so chaotic and some plausible explanations have been explored. The idea that we will drop like fishflies by 2C either way is a total crock and -2C is a heck of a lot more worrying than +2C . Try living in Winnipeg for a few years or Kano, Nigeria (I’ve done both). Cheap energy will fix any problems that arise on this scale, although when I was in Nigeria there was no airconditioning and I got acclimatized just fine.

Man Bearpig
July 27, 2013 2:03 pm

Ric Werme says:
July 27, 2013 at 11:46 am
Man Bearpig says:
July 27, 2013 at 11:26 am
> Rounding errors ? How many decimal places are they working to ? 1? 0?
“Digital” computers don’t use decimal numbers frequently, the most recent systems I know that do are meant for financial calculations.
The IEEE double precision format has 53 bits of significance, about 16 decimal places. Please don’t offer stupid answers.
===========================
Yes, and isn’t that wonderful? However, to what level can we actually measure the values that are entered as a starting point into the models? Calculating them to 16 decimal places is not a representation of the real world.
What is the point of running a model to that precision when the error in real-world data is nowhere near it? Particularly since the models do not represent reality. Most of the model outputs I have seen have shown a predicted rise in temperature, so these models are wrong to 16 decimal places.

July 27, 2013 2:11 pm

Just the tip of a rather large iceberg.
I know how much time and effort goes into identifying and fixing errors in commercial software of similar size to the climate models. The crucial difference is that with commercial software you can always say categorically whether a result is correct or not, which, of course, you can’t with the climate model outputs. Even so, commercial software gets released with hundreds of errors, which come to light over time.
Then the iterative nature of climate models will compound any error, no matter how minor.
As I’ve said before, basing climate models on iterative weather models was a fundamentally wrong decision from the beginning.

July 27, 2013 2:12 pm

Climate is chaotic and nonlinear. As such, its equations are nonlinear and cannot be solved unless they are approximated with piece-wise linear equations. Such results are good for only a short time and must be reinitialized. The whole mess depends on accurate assumptions which are continually changing, making accurate predictions impossible.
The only thing we have is an observable past, to varying degrees of accuracy. The further back we go, the less resolution we have, both in measurements and detectable events. I think the best guess is to say the future will be similar to the past but not exactly like it. The biggest variable is the Sun and how it behaves. We see somewhat consistent solar cycles but never exactly the same. There is so much about the Sun’s internal dynamics we don’t know – in fact, we don’t know what we don’t know. That makes future predictions of climate variability merely a WAG, since they are based on assumptions that are a WAG.

MarkG
July 27, 2013 2:15 pm

“Yes .. except that single precision floating point numbers may use a different number of bits, depending on the compiler and the compile flags.”
I believe it’s even more complex than that?
It’s a long time since I did any detailed floating point work on x86 CPUs, but from what I remember, the x87 FPU would actually perform the math using 80-bit registers, but when you copied the values out of the FPU to RAM, they were shrunk to 64-bit for storage in eight bytes. So, depending on the code, you could end up performing the entire calculation in 80-bit floating point, then getting a 64-bit end result in RAM, or you could be repeatedly reading the value back to RAM and pushing it back into the FPU, with multiple conversions between 64-bit and 80-bit along the way. That could obviously produce very different results depending on the code.
I would presume that a modern compiler would be using SSE instead of the old FPU, but I don’t know for sure.
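As a rough illustration of the mechanism (a sketch only, assuming a build where long double maps to the x87 80-bit extended type, which is not the case everywhere), the same sum accumulated with 64-bit intermediates and with extended-precision intermediates will typically differ in the last digits:

#include <stdio.h>

int main(void) {
    double d = 0.0;           /* intermediates rounded to 64 bits every step    */
    long double x = 0.0L;     /* intermediates kept wider where hardware allows */
    for (int i = 1; i <= 10000000; i++) {
        d += 1.0 / i;
        x += 1.0L / i;
    }
    printf("double      : %.17g\n", d);
    printf("long double : %.17Lg\n", x);
    return 0;
}

Whether the compiler routes this through x87 or SSE, and whether it spills 80-bit temporaries to 64-bit memory, is exactly the kind of build-dependent detail that changes the low-order digits.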

July 27, 2013 2:19 pm

Non-linear complex systems such as climate are by their very nature chaotic, in the narrow mathematical meaning of that word. A tiny change in a constant value, such as numeric precision, leads to different results. But since we don’t have the math to handle such systems, which means they’re programmed in a linear fashion, this one goes into the usual GCM cockup bin.
Pointman

Don K
July 27, 2013 2:25 pm

more soylent green says:
July 27, 2013 at 11:15 am
Floating point numbers are not precise, and computers use floating point numbers for very large or very small numbers. This is not a secret, and while everybody who ever took a programming course probably learned it, most of us forget about it unless reminded.
True enough
Somebody who works as a programmer in the scientific or engineering fields where floating point numbers are routinely used should be aware of this issue. However, it appears that much of the climate model code is written by anybody but professionally trained computer programmers or software engineers.
In my experience, except when doing fixed point arithmetic in an embedded system, even trained programmers, scientists and engineers depend on the guys who designed the FPU and wrote the math libraries to handle the details of managing truncation and rounding. At very best, they might rearrange an operation to avoid subtracting a big number from another big number. And even that doesn’t happen very often.
Usually, that works pretty well. I doubt that anyone who wasn’t doing scientific programming 50 years ago has ever encountered 2.0+2.0=3.9999999….
However, it appears that we might have a situation that requires thought and analysis. Maybe a little: it might just be a library bug in some systems, or a hardware flaw in some CPU/FPU similar to the infamous Pentium FDIV bug. Or it may be something much more fundamental.
How about we wait until we have sufficient facts before we rush to a judgement?

Tom Bakewell
July 27, 2013 2:26 pm

I remember the fine buzzwords ” intermediate product swell” from a review of one of the data fitting software packages on offer in the early 90’s. However a Google search shows nothing. So I guess we’re doomed to repeat the past (again)

Heather Brown (aka Dartmoor resident)
July 27, 2013 2:27 pm

The problems of floating point calculation (mainly rounding, and losing significance by mixing very large and very small numbers, as others have commented) are well known to any competent computer scientist. For models of this complexity it is essential to have someone who specialises in numerical analysis write/check the code to avoid problems. When I worked in a university computer science department we frequently despaired about the “results” published by physicists and other scientists who were good at their subject but thought any fool could write a Fortran program full of floating point operations and get exact results.
After following many of the articles (at Climate Audit and elsewhere) about the lack of statistical knowledge displayed by many climate scientists, I am not surprised that they are apparently displaying an equal lack of knowledge about the pitfalls of numerical calculations.

DirkH
July 27, 2013 2:31 pm

Man Bearpig says:
July 27, 2013 at 2:03 pm
“The IEEE double precision format has 53 bits of significance, about 16 decimal places. Please don’t offer stupid answers.
===========================
Yes, and isn’t that wonderful? However, to what level can we actually measure the values that are entered as a starting point into the models. To calculate them to 16 decimal places is not a representation of the the real world. ”
The recommended way for interfacing to the real world is
-enter data in single float format (32 bit precision)
-During subsequent internal computations use as high a precision as you can – to reduce error propagation
-during output, output with single precision (32 bit floats) again – because of the precision argument you stated.
It is legit to use a higher precision during the internal workings. It is not legit to assign significance to those low order digits when interpreting the output data.
In this regard, the GCM’s cannot be faulted.
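A schematic of that read-single / compute-double / write-single pattern, with made-up numbers and a made-up “model step” (a sketch of the convention, not of any GCM code):

#include <stdio.h>

int main(void) {
    /* Hypothetical observation, only trustworthy to single precision */
    float observed = 287.46f;

    /* Internal arithmetic in double, to keep error growth down */
    double state = observed;
    for (int step = 0; step < 1000; step++)
        state = state * 1.0000001 - 0.0000287;   /* stand-in for one model step */

    /* Report at single precision again; the extra digits carry no real information */
    float reported = (float)state;
    printf("internal %.15f  ->  reported %.6f\n", state, reported);
    return 0;
}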

DirkH
July 27, 2013 2:33 pm

Pointman says:
July 27, 2013 at 2:19 pm
“Non-linear complex systems such as climate are by their very nature chaotic,”
No. Only when they amplify low order state bits. Complexity alone is not necessary and not sufficient. The Mandelbrot equation is not very complex yet chaotic.

July 27, 2013 2:32 pm

I think this is brilliant.
The computer modelers get to blame the disparity between their results and actual observations on their computers being unable to support enough significant digits, and they justify the need for new computers with more significant digits all in one fell swoop.
I’m not active in the CPU wars anymore. Anyone know if one of the semi-conductor companies is close to releasing 128 bit CPU’s? (and did they fund this study /snark)

DirkH
July 27, 2013 2:39 pm

MarkG says:
July 27, 2013 at 2:15 pm
“It’s a long time since I did any detailed floating point work on x86 CPUs, but from what I remember, the x87 FPU would actually perform the math using 80-bit registers, but when you copied the values out of the FPU to RAM, they were shrunk to 64-bit for storage in eight bytes. So, depending on the code, you could end up performing the entire calculation in 80-bit floating point, then getting a 64-bit end result in RAM, or you could be repeatedly reading the value back to RAM and pushing it back into the FPU, with multiple conversions between 64-bit and 80-bit along the way. That could obviously produce very different results depending on the code.”
Yes. OTOH you are allowed to store the full 80 bit in RAM; the “extended” data type supported by various compilers.
“I would presume that a modern compiler would be using SSE instead of the old FPU, but I don’t know for sure.”
SSE is a newer instruction set and requires either hand coding, a suitable library (maybe BOOST numerical does that, I’m not sure), or a very smart compiler that recognizes opportunities for vectorizing computations; when you’re lazy and don’t have top performance needs, your throwaway computations like
double a=…;
double b=…;
b *= a;
will still create instructions for the old FPU core.

michael hart
July 27, 2013 2:42 pm

I’m not surprised. Not surprised at all.

Berényi Péter
July 27, 2013 2:43 pm

There is a general theory of non-equilibrium stationary states in reproducible systems (a system is reproducible if for any pair of macrostates (A;B) A either always evolves to B or never)
Journal of Physics A: Mathematical and General Volume 36 Number 3
Roderick Dewar 2003 J. Phys. A: Math. Gen. 36 631 doi:10.1088/0305-4470/36/3/303
Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states
Now, the climate system is obviously not reproducible. Earth is not black as seen from outer space, although in systems radiatively coupled to their environment maximum entropy production occurs when all incoming short wave radiation is thermalized, none reflected.
Unfortunately we do not have any theory at the moment, rooted in statistical mechanics, about non-reproducible (chaotic) systems.
Therefore the guys are trying to do computational modelling based on no adequate physical theory whatsoever. That’s a sure sign of coming disaster, according to my dream book.

DirkH
July 27, 2013 2:46 pm

Gary Pearse says:
July 27, 2013 at 2:03 pm
“This would mean that the things we argue about here would be small ripples on the much larger trend – warming, ~stationary and cooling squarish waves. I would say this main mega trend is what we should be trying to nail down first. ”
Yes. All my arguments about chaotic systems do not exclude the possibility of a coupling of the chaotic system to an external independent influence; meaning that a low frequency non-chaotic signal could be present; while the chaotic subsystem does its dance on top of that.
In fact I strongly believe in a Solar influence – due to recordings of the Nile level, the periodicity of Rhine freezings, etc. Svensmark will be vindicated.

DirkH
July 27, 2013 2:50 pm

Berényi Péter says:
July 27, 2013 at 2:43 pm
“Unfortunately we do not have any theory at the moment, rooted in statistical mechanics, about non-reproducible (chaotic) systems.”
Again: Reproducibility (or Determinism) and Chaos do describe different aspects.
A real-life chaotic system of course has, for all practical purposes, infinite resolution of its state word; so the exact same starting condition cannot be maintained between two trials, giving the impression of “randomness”. Randomness might be present, but it is not necessary for chaos.

July 27, 2013 2:58 pm

I see a number of questions upthread from people trying to understand what this is all about. To provide a WAY over simplified answer, cpu’s are just fine with basic math. No need to worry about them giving you a wrong answer when you are trying to balance your check book. But for very long numbers that must be very precise, there are conditions where the CPU itself will return a wrong answer. These conditions are called errata, and the semi-conductor companies actually publish the ones they know about. If there is a way to get around the problem, they publish that too. Here’s an example of same for the Intel i7, scroll down to page 17 to see what I mean:
http://www.intel.com/content/www/us/en/processors/core/core-i7-lga-2011-specification-update.html?wapkw=i7+errata
Since the errata for different CPUs are different, even between two iterations of a CPU from the same manufacturer, the programmer must be aware of these things and make certain that the way they’ve written the code takes the errata into account.
In essence, the problem is worse than the paper suggests. If the programmers failed to take the errata into account, there is little wonder that they get different results on different computers. The obvious question, however, was never asked: given a variety of results, which one is the “right one”? The answer to that is this:
If the programmer didn’t take errata into account, the most likely result is that they are ALL wrong.
Start the gravy train anew. They’ll need grants to modify their code, grants for new computers, grants for hiring students to write grant proposals…

Gary Pearse
July 27, 2013 2:58 pm

When we know that in coming eons we will have oscillated through glacial and interglacial periods within + or – 5C limits, I think we should be looking at a well established major determinative physical driver instead of believing we go from ice age to interglacial by “runaways” of a chaotic nature between two attractors. This latter bespeaks too much mathematical mind-clogging on the tiny centennial ripples we preoccupy ourselves with in climate science, the ripples on the inexorable megaennial movements of temperature.

July 27, 2013 2:59 pm

DirkH,
“Non-linear complex systems such as climate are by their very nature chaotic,”
“No. Only when they amplify low order state bits. Complexity alone is not necessary and not sufficient. The Mandelbrot equation is not very complex yet chaotic.”
*************************
When you can show me the math that handles turbulence (or even the physics of clouds), I’ll retract. Beyond the math, the behaviour of climate exhibits all the characteristics of chaotic behaviour. The more iterations, the quicker it spirals out to a completely different result. Simply compare the accuracy of a 1-day forecast to a 30-day one.
The basically ignorant supposition that simply running a GCM enough times will somehow give a credible result with a chaotic system shows a total failure to understand the difference between an average predicted result and the complete unpredictability of chaotic systems.
The precision to which a computer calculates numbers is totally irrelevant if you’re in the telling-the-future business. If you don’t understand the problem, or don’t have adequate physics/math to express it, you’re going to get a wrong result.
Pointman

DirkH
July 27, 2013 3:11 pm

Pointman says:
July 27, 2013 at 2:59 pm
“DirkH,
“Non-linear complex systems such as climate are by their very nature chaotic,”
“No. Only when they amplify low order state bits. Complexity alone is not necessary and not sufficient. The Mandelbrot equation is not very complex yet chaotic.”
*************************
When you can show me the math that handles turbulence (or even the physics of clouds), I’ll retract. Beyond the math, the behaviour of climate exhibits all the characteristics of chaotic behaviour. The more iterations, the quicker it spirals out to a completely different result. Simply compare the accuracy of a 1-day forecast to a 30-day one.”
You are of course completely right for climate; but you said
“Non-linear complex systems such as climate are by their very nature chaotic”
– where “such as climate” mentions an example, so let’s reduce it to the statement
“Non-linear complex systems are by their very nature chaotic”
which is not always correct. I’m picking nits, but definitions are all about the nits. 😉

Editor
July 27, 2013 3:12 pm

1. This seems to be the IT equivalent of science’s ‘confirmation bias’. Only if you notice that rounding errors are causing problems do you fix them.
2. Weather forecasts use similar models and equations, and they start to fail just a few days ahead. That’s why the weather bureaus only make forecasts a few days ahead. Actually, they do make generalised longer term forecasts, but they aren’t worth much. Surely similar rules should apply to climate models.
3. As Willis Eschenbach has pointed out several times in the past, the climate models act as a black box in which the climate forecasts simply follow a single factor, their assumed ECS, and all the surrounding millions of lines of code have no effect. [From memory. Apologies, w, if I have misrepresented you].

son of mulder
July 27, 2013 3:17 pm

Although I am blowing my own trumpet I explained precisely this problem in this post
http://wattsupwiththat.com/2013/03/08/statistical-physics-applied-to-climate-modeling/
and this
http://wattsupwiththat.com/2013/06/01/a-frank-admission-about-the-state-of-climate-modeling-by-dr-gavin-schmidt/
I claim my £5. Nothing wrong with being a sceptic. It just amazes me that the issue only now appears in a paper related to GCM’s.

July 27, 2013 3:18 pm

Pointman
(to DirkH)
When you can show me the math that handles turbulence (or even the physics of clouds), I’ll retract.
>>>>>>>>>>>>>>>
This is the other side of the problem and Pointman has, in my opinion, nailed it. Beyond the inability of cpu’s to handle extremely precise numbers in a consistent fashion is that we forget in this day and age that computers are actually dumber than posts. Seriously, they are capable of only very simple instructions. Their advantage is that for the very simple things they CAN do, they have the capability to do them very, very, fast.
That’s just fine when you can break the problem you are working on into simple pieces. But if the problem itself is too complex for the human mind to understand, then the human mind cannot break it into simple pieces that are known to be correct, and any computer program built upon an incomplete understanding of the physics is going to produce correct results only by some miracle of chance.

Jimbo
July 27, 2013 3:22 pm

While the differences might appear as small to some, bear in mind that these differences in standard deviation are only for 10 days

So we only have about 31,755 days to go until the year 2100. So that means garbage in, garbage out, plus software-handling differences, equals accurate IPCC projections for the year 2100. Oh, our children and grandchildren will have a field day in 2100. Historians won’t be able to type their books because of the huge belly laughs.

July 27, 2013 3:25 pm

Would the models even produce the same result when run on the same computer on different runs?
Are the rounding errors always made to the high side? Following the usual CAGW fashion, of course…

RoyFOMR
July 27, 2013 3:25 pm

What a fascinating discussion and what a privilege it is to be privy to the massive crowdsourcing that is made possible by this site.
The range of expertise available and the freedom to express one’s ideas (within the bounds of decency) made possible by the world’s most viewed science blog is quite breathtaking.
Thank you Anthony and all those who contribute (for better or for worse) to demonstrate the future of learning and enquiry.

DirkH
July 27, 2013 3:37 pm

Jimmy Haigh says:
July 27, 2013 at 3:25 pm
“Would the models even produce the same result when run on the same computer on different runs?”
If the errors are due to differences of the floating point implementation of different computer systems, the result should stay constant on one system (given that the exact same initialization happens, which can usually be accomplished by using the same random seed for the random generator , if they use a random generator to fill the initial state of the model).
(deterministic)
If, on the other hand, errors are introduced by CPU errata or by race conditions between CPU cores, as mentioned by others, we would expect every run to have different results even when the initialization is identical.
(nondeterministic)

DirkH
July 27, 2013 3:42 pm

Correction: Depending on the nature of a CPU erratum, it could be present in the deterministic or in the nondeterministic camp. Many CPU errata are internal race conditions inside the CPU.

Janice Moore
July 27, 2013 3:50 pm

Jonathan Abbott (at 1:11PM)! So, you’re the boss in “Dilbert”!!! At least, you are bright and want to learn, unlike that guy. (and I’m sure you don’t style your hair like he does, either, lol)
Well, all I can say is, if you want some great insight into what it is like for your software engineers to work for someone who is new to coding, read Dilbert (by Scott Adams).
From your conscientiousness, I’m sure, given all the real Dilbert bosses out there, they consider themselves blessed.
Window into the world of being part of the “team”

Dinostratus
July 27, 2013 3:55 pm

“Since the software effects they observed in this study are cumulative”
Not necessarily. Models sometimes diverge, and they may diverge for any number of reasons, floating point precision being the most common. However the models *should* be stable, that is, return to a state (or trend) regardless of any errors in the initial conditions or calculations. If the models are *not* stable then they are glorified curve-fitted, unstable extrapolations. Which they are.

Kasuha
July 27, 2013 3:57 pm

It definitely is a heads-up for climate modellers, but 10 days of simulation tells very little about how stable the simulation is as a whole. If you run a good weather model on different hardware or with different arithmetic settings, it will also produce different results depending on the type of rounding and arithmetic precision, but results from multiple runs will be spread around the same forecast values.
So it certainly deserves attention and further examination, but it’s too early to say that climate models are sensitive to that effect. It would definitely be a great shame if they were, though.

ikh
July 27, 2013 4:06 pm

davidmhoffer says:
If the programmer didn’t take errata into account, the most likely result is that they are ALL wrong.
No. Unless you are programming in Assembler language, it is generally the job of the compiler to deal with CPU bugs. A rare exception to this was the Pentium FDIV bug.
It is the job of the GCM programmer to understand the programming language guarantees and to know how to correctly perform numerical calculations to the desired precision. They also need to have a good understanding of how errors can propagate.
Unless you want to use SIMD hardware (Single Instruction, Multiple Data) such as SSE instructions, or an FPU, you should be using integral data types. I.e. a temp of 15.11C could be stored in an integer data type as e.g. 1511, or as 151100, depending on the precision you need.
They would also need to avoid division as an intermediate operation. And you always need to be careful with division, in order to maintain precision.
Heather Brown’s comments above are spot on. We have climate scientists and physicists coding climate models without sufficient training in Computer Science. No wonder they produce GIGO results.
I tried to look at the open source climate models, including GISS E. All of the models I looked at were coded in Fortran. Most used mixed versions of the language, making it very difficult to understand or reason about the computing model that they are following. Not one of them was even minimally documented and they were almost completely lacking in comments. This makes them almost indecipherable to an outsider. Fortran is an archaic language that is almost never used in the commercial world because we can get the same performance from more modern languages such as C or C++ with much better readability and easier reasoning for correctness.
We cannot even check these models ourselves, because we do not have access to the hardware necessary to run them. Typically a cluster or supercomputer.
Irrespective of what we may think of the mathematical modelling in GCMs, we have a separate and independent criticism: that they are not reproducible across hardware platforms.
/ikh

Londo
July 27, 2013 4:11 pm

But this may indicate that the models exhibit multiple solutions (which is somewhat a trivial statement) and the modellers fail to track the “physical” one (which is sort of surprising). I have wondered for a long time how they knew which solution to follow, but now it seems they do not.
It still seems to me that failing to identify the physical solution is too fundamental an aspect of the simulation to ignore, but maybe the coupled non-linear nature of GCMs makes the problem so intractable that the consensus science agreed just to ignore it.

wsbriggs
July 27, 2013 4:14 pm

For those of sufficient curiosity, get the old Mandelbrot set code, and set it up to use the maximum resolution of your machine. Now take a Julia set and drill down, keep going until you get to the pixels. This is the limit of resolution for your machine, if you’re lucky, your version of the Mandelbrot algorithm lets you select double precision floating point numbers which are subsequently truncated to integers for display, but still give you some billions of possible colors. The point is, every algorithm numerically approximated by a computer has errors. A computer can only compute using the comb of floating point numbers, not the infinite precision that the real world enjoys. Between every floating point number, there are a very large (how large? for the sleepless among us) number of numbers with infinite decimal places.
Solving PDEs using approximate solutions, gets you errors, period. Parameters can make the pictures look pretty, but they can’t make the solutions better. The precision isn’t there. That’s one of the reasons that “Cigar box physics” still rules. If you can’t fit the problem description and solution in a cigar box, you probably don’t understand the physics yet.
To regular readers here, note that Willis’ contributions are generally a perfect example of CBP.

DirkH
July 27, 2013 4:22 pm

wsbriggs says:
July 27, 2013 at 4:14 pm
“For those of sufficient curiosity, get the old Mandelbrot set code, and set it up to use the maximum resolution of your machine. Now take a Julia set and drill down, keep going until you get to the pixels. ”
What he means is, zoom into it until it becomes blocky. The blocks you see are artefacts because your computer has run out of precision. They shouldn’t be there if your computer did “real” maths with real numbers. Floating point numbers are only a subset of the real numbers.
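For anyone who wants to try it without graphics, a bare-bones escape-time sketch in C: once the zoom span falls below roughly 1e-16 of the coordinate, neighbouring “pixels” collapse onto the same double and blockiness is unavoidable (the centre point here is arbitrary):

#include <stdio.h>

/* Standard escape-time iteration for c = cx + i*cy */
static int mandel(double cx, double cy, int maxit) {
    double x = 0.0, y = 0.0;
    for (int i = 0; i < maxit; i++) {
        double xnew = x * x - y * y + cx;
        y = 2.0 * x * y + cy;
        x = xnew;
        if (x * x + y * y > 4.0)
            return i;
    }
    return maxit;
}

int main(void) {
    /* Zoom toward an arbitrary point near the set boundary. */
    double cx = -0.743643887037151, cy = 0.131825904205330;
    for (double span = 1.0; span > 1e-18; span /= 1000.0) {
        int centre    = mandel(cx, cy, 1000);
        int neighbour = mandel(cx + span, cy, 1000);
        printf("span %.0e: centre %4d, neighbour %4d%s\n", span, centre, neighbour,
               (cx + span == cx) ? "  <- span below double resolution" : "");
    }
    return 0;
}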

ikh
July 27, 2013 4:51 pm

DirkH says:
July 27, 2013 at 3:37 pm
Jimmy Haigh says:
July 27, 2013 at 3:25 pm
“Would the models even produce the same result when run on the same computer on different runs?”
I am sorry DirkH but you are wrong. You are assuming far too simplistic a computing model. What you need to remember is that each core can do “Out Of Order” execution, as long as the operations are non-dependent. This means that numerical calculations can be re-ordered.
And that is just on a single-core CPU. Then add multi-threading on multi-cores and multi-processing across the cluster that makes up a supercomputer and you have a completely non-deterministic piece of hardware. It is up to the programmer to impose order. And the climate modelers do not have that skill.
/ikh

Dave Talbot
July 27, 2013 4:52 pm

I recall reading several years ago that modeling is an art, not a science, and that of ~140 ‘best practices’ learned the hard way by those whose living depended on accuracy (i.e. oil exploration, etc) over 120 were violated by climate models.

Agesilaus
July 27, 2013 5:05 pm

When I took a numerical computation class back in the 1970’s (using Fortran 4 and WATFIV) we spent weeks going over error terms and how to try to minimize them. They are generally represented by epsilon. Anyone with even a minimal background in computation would be aware of this. I’m sure there are advanced methods to try to compensate for these errors but these clowns don’t seem to find it necessary to get advice from experts in this field.
And this is even more remarkable when you consider the fact that NASA computer people must be well aware of these problems, since their spacecraft seem to get where they are intended to go, for the most part anyway. You know, NASA, the place where Mr Hansen used to work.

View from the Solent
July 27, 2013 5:06 pm

Paul Jackson says:
July 27, 2013 at 1:32 pm
Edward Lorenz pretty much came to almost the exact same conclusion, in regards to almost the exact same computational problem, almost 50 years ago; this is, as the Warmistas would say, “settled science”.
=============================
Exactly! And begat the study of non-linear dynamical systems, aka chaos theory.

July 27, 2013 5:08 pm

This thread is an excellent discussion of potential math pitfalls while coding; even if the information is in bits and pieces.
I kept copying a comment with the intention of using the comment as an intro point for my comment; only to copy another comment further on.
Before I start a comment, I would like to remind our fellow WUWT denizens about some of the spreadsheets we’ve seen from the CAGW crowd. Lack of data, zeroed data, missing or incorrect sign. Just the basics behind developing models is flawed, let alone building code around them.
Having an idea for a program is good; programming straight from the idea is, well, not intelligent. The more complex a program is intended to be, the more intense and rigorous the design and testing phases must be.
Any program should be required to process known inputs and return verified results. If the program is chaotic, then all information, data and numbers must be output for every operation. Then someone has to sit down and verify that the program is processing correctly.
All too often, both design and testing phases are skipped or assumed good. What makes a good design? Not that the end result is a perfect match to the design, but that the design is properly amended with explanation so that it matches the result.
All three items, design, test inputs and outputs, and code, should be available for review. Protection? That’s what copyright laws are intended for, or patents if unique enough; though very few programs are truly unique.
Computer languages handle numerical information differently. Since IEEE 754-1985, a modern IEEE-compliant computer language will automatically handle floating point calculations. What happens when numbers exceed the bounds of the representation is rounding.
Rounding is a known entity and can be controlled; which is why there are frequent complaints that it is a novice error to let a program follow or process incorrect rounding assumptions.
The intention of rounding is to follow a basically sum-neutral process, where the total of rounding up equals the total of rounding down; e.g. 365 numbers rounded: 182 numbers rounded up for a total of +91, 183 numbers rounded down for a total of -92.
365 divided by two gives 182.5; rounding up to 183 would make for 366 days, so instead one number is rounded down while the other is rounded up. This rounding must be forced by the programmer!
The numbers rounded could be .4, normally rounded down, or .6, normally rounded up. Depending on the rounding approach, .5 is normally rounded up.
This concept follows the even division approach of a 50/50 split in how numbers will be rounded. Where datasets with huge arrays of numbers can really get caught hard is when large numbers of rounded numbers are aggregated or god forbid, the program’s default rounding approach is assumed good enough for climate work.
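To illustrate the half-up versus sum-neutral point, a small C sketch (the thousand half-values are invented for the demonstration) using the standard library’s round(), which rounds halves away from zero, and rint(), which follows the current rounding mode, ties-to-even by default:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* 1000 values that all end in .5: half-up rounding biases the total,
       the IEEE default ties-to-even roughly cancels out. */
    double exact = 0.0, half_up = 0.0, to_even = 0.0;
    for (int i = 0; i < 1000; i++) {
        double v = i + 0.5;
        exact   += v;
        half_up += round(v);   /* halves away from zero: 0.5->1, 1.5->2, 2.5->3 */
        to_even += rint(v);    /* ties to even:          0.5->0, 1.5->2, 2.5->2 */
    }
    printf("exact %.1f   half-up %.1f   ties-to-even %.1f\n", exact, half_up, to_even);
    return 0;
}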
Nick Stokes supplied an example along with a description for what happens when the code is run on two different systems. Nick mentions that time features are wrong. Well that could be because portrayal of time in a program is a function of math computation and the intensive use of the time function with accumulated roundings, (Ingvar Engelbrecht’s N+1 discussion) . Note the ‘could’ as I haven’t dissected the code to follow and flesh out time. What happens internally is that the ‘time function’ call to the system is defined, processed, rounded, tracked and stored differently.

Jonathan Abbott says: July 27, 2013 at 1:11 pm
Very interesting. I manage a team of software engineers but am new enough to coding to not fully understand some of the discussion here. It appears to be suggested that unless the coding team really know their stuff, differences between models when run on different machines are to be expected. Could someone expand on this a bit or tell me where to start reading?

Uhoh!
I wouldn’t necessarily agree that the team must really know their math functions nowadays. It used to be that way, but the days where the programmer had to allocate specific memory, registers to store numbers, explicitly define every number field are pretty much past.
Mark Bofill states it fairly well in several different comments. The programmers must be diligent, with both calculations and output rigorously checked. If you’ve got to ask how the team is handling rounding errors, check and make sure you still have both legs, as your software engineers should know better. If a software engineer tells you he cobbled up the code over the weekend, assign someone else, preferably not a pal, to verify the code and all data handling, including math.

Michael J
July 27, 2013 5:09 pm

Sounds like complete garbage to me.
It has been many years since compilers were responsible for doing floating point computations. Nowadays all the heavy lifting is done in hardware, and pretty much all of it is IEEE 754 compliant. IEEE 754 provides ways to configure how rounding is done, so it should be possible to get nearly identical results on different hardware.
Even if the programmer failed to configure the floating point hardware (a real rookie error), the rounding errors ought to be very small. Algorithms with large numbers of iterations can compound small rounding errors into big errors, but that is usually the sign of a very naive algorithm. Better-designed programs can usually avoid this.
Large errors are most likely due to software bugs.
Numerical software that cannot produce consistent results on different platforms is unreliable and should not be trusted on any platform.
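For reference, the rounding-mode control mentioned above lives in <fenv.h>; a toy sketch of switching modes (run without aggressive optimisation, or with the FENV_ACCESS pragma where the compiler supports it, since compilers may otherwise assume the default mode):

#include <stdio.h>
#include <fenv.h>

/* Sum an inexact quantity a million times under a chosen IEEE 754 rounding mode */
static double sum_thirds(int mode) {
    fesetround(mode);
    double third = 1.0 / 3.0;      /* inexact, so every addition below gets rounded */
    double s = 0.0;
    for (int i = 0; i < 1000000; i++)
        s += third;
    fesetround(FE_TONEAREST);      /* restore the default */
    return s;
}

int main(void) {
    printf("to nearest: %.17g\n", sum_thirds(FE_TONEAREST));
    printf("downward  : %.17g\n", sum_thirds(FE_DOWNWARD));
    printf("upward    : %.17g\n", sum_thirds(FE_UPWARD));
    return 0;
}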

Editor
July 27, 2013 5:16 pm

calvertn says:
July 27, 2013 at 11:55 am

So it’s really important to split hairs about the difference between ‘global forecast models’ and ‘climate models’? So, until a ‘scientist’ produces a peer-reviewed paper showing that this divergence also occurs in ‘climate models’, we can safety assume they are unaffected? Ha!

There’s a major difference between a program looking for a forecast for next week and one looking at climate for the next decade. For the forecast you want to know the weather conditions at various times next week. For the climate forecast you want to know the average conditions then. The actual conditions would be nice, but chaos says they can’t be predicted.
If two climate models are working by simulating the weather, then it really doesn’t matter if the instantaneous weather drifts widely apart – if the average conditions (and this includes tropical storm formation, ENSO/PDO/AMO/NAO/MJO and all the other oscillations) vary within similar limits, then the climate models have produced matching results. (If they’re really good, they’ll even be right.)
This bugs the heck out of me, so let me say it again – forecasting climate does not require accurately forecasting the weather along the way.
Another way of looking at it is to consider Edward Lorenz’s attractor; see http://paulbourke.net/fractals/lorenz/ and http://en.wikipedia.org/wiki/Lorenz_system. While modeling the attractor with slightly different starting points will lead to very different trajectories, you can define a small volume that will enclose nearly all of the trajectory.
The trajectory is analogous to weather – it has data that can be described as discrete points with numerical values. If some of the coefficients that describe the system change, then the overall appearance will change and that’s analogous to climate.
The trick to forecasting weather is to get the data points right. The trick to forecasting climate is to get the changing input, and the response to changing input, right.
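To make the attractor picture concrete, here is a crude Euler-step sketch of the Lorenz ’63 system (standard parameters; the step size and starting points are arbitrary): two starts differing by one part in a billion end up on completely different trajectories, yet both keep wandering around the same bounded region.

#include <stdio.h>
#include <math.h>

/* One crude Euler step of the Lorenz '63 system (sigma=10, rho=28, beta=8/3) */
static void step(double *x, double *y, double *z, double dt) {
    const double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0;
    double dx = sigma * (*y - *x);
    double dy = *x * (rho - *z) - *y;
    double dz = *x * *y - beta * *z;
    *x += dt * dx;
    *y += dt * dy;
    *z += dt * dz;
}

int main(void) {
    double x1 = 1.0, y1 = 1.0, z1 = 1.0;              /* arbitrary start   */
    double x2 = 1.0 + 1e-9, y2 = 1.0, z2 = 1.0;       /* tiny perturbation */
    for (int i = 1; i <= 5000; i++) {
        step(&x1, &y1, &z1, 0.01);
        step(&x2, &y2, &z2, 0.01);
        if (i % 1000 == 0)
            printf("step %5d   separation %.3g\n", i, fabs(x1 - x2));
    }
    return 0;
}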

Paul Linsay
July 27, 2013 5:23 pm

If you are not acquainted with how complex chaotic behavior can be for even the simplest system, look at the section on chaos in http://en.wikipedia.org/wiki/Logistic_map and at http://mathworld.wolfram.com/LogisticMap.html for an introduction. Assuming your mathematical skills exceed Phil Jones’, it’s easy to set up an Excel spreadsheet to play with the logistic map and get a feel for deterministic chaos. The fluid equations used in climate models are guaranteed to be way more complicated than this.
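If a spreadsheet isn’t to hand, the same experiment takes a few lines of C (the seeds are arbitrary; r = 4 is the fully chaotic case):

#include <stdio.h>

int main(void) {
    /* Logistic map x(n+1) = r*x(n)*(1 - x(n)) from two nearly identical seeds */
    double r = 4.0;
    double a = 0.400000000, b = 0.400000001;   /* differ in the 9th decimal */
    for (int n = 1; n <= 60; n++) {
        a = r * a * (1.0 - a);
        b = r * b * (1.0 - b);
        if (n % 10 == 0)
            printf("n=%2d  a=%.6f  b=%.6f  |a-b|=%.6f\n",
                   n, a, b, a > b ? a - b : b - a);
    }
    return 0;
}

Within a few dozen iterations the two runs have nothing to do with each other, which is the whole point.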

Editor
July 27, 2013 5:31 pm

Alan Watt, Climate Denialist Level 7 says:
July 27, 2013 at 12:09 pm

… All floating point quantities are approximate, unlike integers which are always exact. It sounds like a simple distinction but it has profound implications for programming.

“All” is a strong word. Software engineers rarely use absolutes.
1.0 can be represented exactly by an IEEE floating point number, so can 2.0, 3.0, and so on up to the precision of the significand.
0.5 and other negative powers of 2 can be represented (up to the range of the exponent). As can integral multiples – my first example applied for multiples of 2^0, i.e. multiples of 1.
On the other hand, 1/10 cannot be represented exactly, nor can 1/3 or 1/7 – we can’t represent the latter two with a decimal floating point number, the best we can do is create a notation for a repeating decimal.
In practice, this is a moot point – values used in modeling don’t have exact representations in any form.
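A quick check anyone can run (this assumes IEEE 754 doubles, which is what essentially all current hardware provides):

#include <stdio.h>

int main(void) {
    printf("0.5 stored as %.20f\n", 0.5);   /* exact: a negative power of two */
    printf("3.0 stored as %.20f\n", 3.0);   /* exact: a small integer         */
    printf("0.1 stored as %.20f\n", 0.1);   /* not exact: trailing digits appear */
    return 0;
}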

Intense numerical programming is not for amateurs.

I disagree. Amateur mathematicians have made many contributions to number theory. That statement implies that only research scientists can contribute to climate science. I disagree with that too.
I will agree that numerical programming is rife with surprises and pitfalls.

July 27, 2013 5:44 pm

My first COBOL professor had a saying that I’ve kept in mind ever since.
Programs are languages that allow humans, usually, to communicate with hardware. Only hardware doesn’t understand languages. Hardware communicates via on or off bits accumulated into bytes accumulated into words, doublewords and so on.
Systems analysts or system engineers often work in a higher language but must resort to Assembler or even bits and bytes when determining what is happening at hardware level.
Most high level languages are aggregations of lower level language codings that accomplish specific tasks. Compiling a higher level language used to be essential before a computer could actually perform the code’s instructions. Compiling was/is rendering the higher level language into a lower level language that the computer accepts.
Nowadays many higher level languages are compiled as they are run so most programmers no longer need to learn Assembly or hardware protocols. For many years, assembly language was accepted by the kernel operating system and translated into the machine protocol in a similar fashion.
Many modules, function calls, data base calls of the higher level languages are assemblages of coding from a very wide diverse group of individuals. Yes, they were tested, but within the parameters of ‘common usage’. Does anyone consider climate or even atmospheric physics ‘common usage’?
What that translates to, for the uninitiated, is that the higher (or perhaps better phrased, easier) levels of coding are assemblages of pieces of code from many other programmers who have their own ideas of what was/is needed for their contribution.
Which brings us back to my COBOL professor Dr. Gomez. “Never state implicitly what you can state explicitly!”. Meaning unless the code is well documented or until tested and the results are explicitly known, most code modules and function calls are implicit. Make it explicit.
One more comment, huge linear or multilayered databases are treated as dimensional arrays in a program. The more complex the system, the more dimensions added to the overall design and the more arrays to be processed.

cynical_scientist
July 27, 2013 5:49 pm

Unsurprising. For a chaotic system this behavior is completely expected. There is a mathematical theorem which says that for each run of a numerical approximation (with roundoff) there is a set of real-world conditions, close to the original ones, that would yield exactly the same real-world result. So having such sensitivity to roundoff doesn’t add a new type of error to the models. It can be dealt with by looking at the sensitivity to initial conditions.
The real world is itself chaotic. Even if we had an absolutely perfect model of the climate on a godlike computer with no roundoff errors, runs with slightly different initial conditions would still give you a pile of spaghetti. The pile of spaghetti is not a sign that the models are defective. The only way to get rid of the spaghetti is to find that damned butterfly and kill it.

Editor
July 27, 2013 5:58 pm

ikh says:
July 27, 2013 at 4:06 pm

I tried to look at the open source climate models, including GISS E. All of the models I looked at were coded in Fortran. Most used mixed versions of the language, making it very difficult to understand or reason about the computing model that they are following. Not one of them was even minimally documented and they were almost completely lacking in comments. This makes them almost indecipherable to an outsider.

Scientists (even computer scientists) tend to write code that’s hard to follow and is ill-commented. Don’t get me started about David Parnas and the directions he tried to explore to remove descriptive information from subroutines in a module. He hid so much information about a stack implementation that Alan Perlis couldn’t figure out what he had described. Oops, I got started, sorry! “There are two ways to write error-free programs; only the third one works.” – Alan Perlis
Early Unix code and user level utilities were pretty awful. Engineers who don’t comment their code should be (and often are) stuck with supporting it because no one else understands it.

Fortran is an archaic language that is almost never used in the commercial world because we can get the same performance from more modern languages such as C or C++ with much better readability and easier reasoning for correctness.

I was involved in the mini-supercomputer field for a while and was quite surprised at what Fortran had become. Fortran 2003 includes object-oriented constructs. Just because a lot of people are writing code as though they have a Fortran IV compiler should not be cause to denigrate modern Fortran. Note that C does not have OOP elements; you can’t even multiply a pair of matrices together like you can in Fortran with “A = B * C”. While C is one of the main languages I use, C is an archaic language. More so than Fortran.
I bet you’ll find a lot of Fortran code in places like Wall Street, it’s not just for R&D any more!

Gary Pearse
July 27, 2013 5:59 pm

Janice Moore says:
July 27, 2013 at 3:50 pm
“Jonathan Abbott (at 1:11PM)! So, you’re the boss in “Dilbert”!!! At least, you are bright and want to learn, unlike that guy. (and I’m sure you don’t style your hair like he does, either, lol)”
I’ve seen balding men who have their hair styled at the Dairy Queen, but Dilbert’s boss has a double scoop!

July 27, 2013 6:05 pm

“M Courtney says: July 27, 2013 at 1:09 pm
Important question: Is this why Prof Murry Salby needed resources to rebuild his studies when he moved to Macquarie University?…”

Yes, perhaps, and no. Even within the same programming language there are different versions and different modules. Extensive testing is needed to determine which modules need change.
If the hardware is different, which it almost certainly is, there are different system implementations, access procedures, storage procedures, times available and so on ad infinitum.
Just upgrading within a software system to a new version can cause clumps of hair to appear around one’s desk. (Yes, I still have a full head of hair; but can you think of some program operators who are a little shy of hair?)
Your short answer is yes, but this we can only assume from his letter, and there may be more, much more, to Professor Salby’s planned move and rebuilding. One rarely plans a new venture intending only to rebuild one’s old venture.

DirkH
July 27, 2013 6:13 pm

ikh says:
July 27, 2013 at 4:51 pm

“DirkH says:
July 27, 2013 at 3:37 pm
Jimmy Haigh says:
July 27, 2013 at 3:25 pm
“Would the models even produce the same result when run on the same computer on different runs?”
I am sorry DirkH but you are wrong. You are assuming far too simplistic a computing model. What you need to remember is that each core can do “Out Of Order” execution, as long as the operations are non-dependent. This means that numerical calculations can be re-ordered.”

As you say, re-ordering requires non-dependency. Assuming no CPU erratum, this does not affect the outcome. Determinism is maintained.

“And that is just on a single-core CPU. Then add multi-threading on multi-cores and multi-processing across the cluster that makes up a supercomputer and you have a completely non-deterministic piece of hardware. It is up to the programmer to impose order. And the climate modelers do not have that skill.”

That’s why I mentioned race conditions as a possible source of non-deterministic behaviour.
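The underlying reason a race in a parallel sum can change the numbers at all is that floating point addition is not associative; a toy sketch (nothing to do with any model code):

#include <stdio.h>

int main(void) {
    /* The same three summands, two groupings, two different doubles */
    double a = 0.1, b = 0.2, c = 0.3;
    double s1 = (a + b) + c;
    double s2 = a + (b + c);
    printf("(a+b)+c = %.17g\n", s1);
    printf("a+(b+c) = %.17g\n", s2);
    printf("equal? %s\n", (s1 == s2) ? "yes" : "no");   /* prints "no" */
    return 0;
}

A single thread on an out-of-order core still gets the same answer every run, because results are committed in program order; it is reductions that combine partial sums in whatever order the threads happen to finish that can drift in the last bits from run to run.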

Paul Linsay
July 27, 2013 6:24 pm

Ric Werme says:
July 27, 2013 at 5:16 pm
“The trajectory is analogous to weather – it has data that can be described as discrete points with numerical values. If some of the coefficients that describe the system change, then the overall appearance will change and that’s analogous to climate.”
First, it’s not clear that the concept of an attractor has any meaning for a high dimensional system like the climate.
Second, even for chaotic systems with say three or four dimensions it’s common for multiple attractors to coexist, parallel worlds in effect. Each attractor has distinct statistical properties and samples different parts of phase space. Worse, the “basin of attraction” for the attractors is a fractal. This means that tiny variations in initial conditions lead you unpredictably to one of the attractors, and without infinite precision you can’t tell ahead of time where you’ll wind up.
Are we to believe that a complex fluid system like the coupled atmosphere-ocean system has simpler behavior?

Talent-KeyHole Mole
July 27, 2013 6:27 pm

Ah. My mind drift back to 1978 and the “State Of Computing.”
A graduate-level class (I was an undergraduate at that time) in numerical analysis where we learned about “heat”, i.e. heat buildup in the CPU and memory, and the growth of round-off error, I/O misreads and miswrites, and byte misrepresentation in memory from one cycle to the next.
And I thought this was going to be a class on ‘Mathematics.’
And it WAS. 🙂
Then came “Endianness”.
From Wikipedia:
“Endianness is important as a low-level attribute of a particular data format. Failure to account for varying endianness across architectures when writing code for mixed platforms can lead to failures and bugs. The term big-endian originally comes from Jonathan Swift’s satirical novel Gulliver’s Travels by way of Danny Cohen in 1980.”
Danny Cohen (1980-04-01). On Holy Wars and a Plea for Peace. IEN 137. “…which bit should travel first, the bit from the little end of the word, or the bit from the big end of the word? The followers of the former approach are called the Little-Endians, and the followers of the latter are called the Big-Endians.” Also published in IEEE Computer, October 1981.
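A tiny, non-authoritative check of the byte order on whichever machine you happen to be sitting at:

#include <stdio.h>

int main(void) {
    unsigned int word = 0x01020304;
    unsigned char *bytes = (unsigned char *)&word;
    /* Little-endian machines store the least significant byte first */
    printf("first byte in memory: 0x%02x -> %s-endian\n",
           (unsigned)bytes[0], bytes[0] == 0x04 ? "little" : "big");
    return 0;
}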
🙂
Thank you Anthony for reminding me!

July 27, 2013 6:34 pm

“Ric Werme says: July 27, 2013 at 5:16 pm

This bugs the heck out of me, so let me say it again – forecasting climate does not require accurately forecasting the weather along the way.
Another way of looking at it is to consider Edward Lorenz’s attractor”, seehttp://paulbourke.net/fractals/lorenz/ and http://en.wikipedia.org/wiki/Lorenz_system While modeling the attractor with slightly different starting points will lead to very different trajectories, you can define a small volume that will enclose nearly all the trajectory.
The trajectory is analogous to weather – it has data that can be described as discrete points with numerical values. If some of the coefficients that describe the system change, then the overall appearance will change and that’s analogous to climate.
The trick to forecasting weather is to get the data points right. The trick to forecasting climate is to get the the changing input and the response to changing input right.”

Bugs the heck is right. What I want to know is how one can forecast the climate, which is dependent on weather and whose final result is weather. Climate predictions simplified to input and response sound far too stripped down to work in the real world.
Using your own fractal modeling analogy: the fractal model is a fractal model; changing the input is not new code nor a new model. Encompassing a fractal, or even a simplified convection model, is not what I consider a climate-encompassing world.

MarkG
July 27, 2013 6:35 pm

“As you say, re-ordering means non-dependancy. Assuming no CPU erratum this does not affect the outcome. Determinism is maintained.”
Indeed. For out-of-order execution to work effectively, particularly when running old code built for in-order CPUs, it has to produce the same results as though the program ran in-order. We realised twenty years ago that relying on the programmer or compiler to deal with the consequences of internal CPU design was a disaster; e.g. some of the early RISC chips where the results of an instruction weren’t available until a couple of instructions later, but there were no interlocks preventing you from trying to read the result of that instruction early and getting garbage instead. One compiler or programmer screwup, and suddenly your program starts doing ‘impossible’ things.

Talent-KeyHole Mole For 30+ Years
July 27, 2013 6:48 pm

A bad day for the Infinite Gods of the UN IPCC … indeed.
At the bar:
Mix geographers, computing machines, data in a small room with bad ventilation and what do you get ….
Catastrophic Global Anthropogenic Clathrate Gun Bomb Climate Runaway Heating Warming Over Tripping Point exercise in Pong.
The ‘Heating’ only exists in the CPU of the INTEL little-endian computers and the groins of the hapless geographers (like Micky Mick Mick Mann and Jimmery Jim Jim Hansen) and NOWHERE else.
Hardy har har.
My Name IS Loki!
Say My Name! Say My Name! Say My Name!

ombzhch
July 27, 2013 6:56 pm

As a mathematician by training, in the 60s we were all taught Numerical Analysis, the art of avoiding rounding error by hand and with those new-fangled computer things! But MUCH more serious is
CHAOS theory (1996), which showed that the integration of differentiable manifolds is unstable with respect to initial conditions. This is a proven mathematical theory, which means that:
Chaos: when the present determines the future, but the approximate present does not approximately determine the future.
Models RIP, MFG, omb

July 27, 2013 7:04 pm

ikh says:
July 27, 2013 at 4:06 pm
davidmhoffer says:
If the programmer didn’t take errata into account, the most likely result is that they are ALL wrong.
No. Unless you are programming in Assembler language. It is generally the job of the compiller to deal with CPU bugs.
>>>>>>>>>>>>>>>
Yes it is. Only this kind of code isn’t general at all. Plus, compilers have considerable ability to be tuned to the specific instruction set of the specific CPU. That’s why you can get completely different benchmark results using the same code on the same hardware with the same compiler, just by setting various flags in the compiler different ways. If you don’t understand how the compiler deals with instructions or conditions that turn out to have errata associated with them, you are asking for trouble. Your confidence in C as a replacement for Fortran is also misplaced in this type of application. You said it yourself in your own comment: the code is spaghetti code, often in different languages, most likely written by different people at different points in time for some specific purpose at that time, and there’s no documentation to give anyone any clue as to what each section of code is supposed to do. Compile that mess, written over decades, on modern hardware with completely different CPUs and expect a consistent result? Good luck.

Rafa
July 27, 2013 7:08 pm

Someone mentioned compilers and hardware platforms as a dangerous beast for FP calculations. About 1000 years ago I was coding numerical algorithms and, when moving code from one computer to another, I got nasty surprises on FP results due to the ‘endianness’ problem (little endian vs. big endian, etc). Then I left the field, but I guess some of the code embedded in the models is pretty old and has never been reviewed or adapted. I remember a routine on splines I wrote which produced absurd results depending on the hardware platform used. However the astronomers in the team were using it to smooth data, despite my warning not to trust it 100%, as if my routine were the word of the Lord. Sigh! So long ago!

Editor
July 27, 2013 7:18 pm

Subject: Lucky fortune time
From: Ric Werme
Date: Sat, 27 Jul 2013 21:00:01 -0400 (EDT)
To: <werme…>
—— Fortune
One person’s error is another person’s data.

Sceptical lefty
July 27, 2013 7:24 pm

We are trying to model an open system. The nature and effects of the largest single known influence, the sun, are poorly understood. There may be other significant influences, not yet apprehended. The models are basically curve-fitting exercises based on precedent, faith and luck. There is no reason to suppose that any model that depends on our present state of knowledge will have reliable predictive power. It is reasonable to conclude that these horrendously expensive ongoing exercises are simply a waste of resources.
No doubt, the modellers will disagree. However, I seem to recall another serious, hotly contested debate about how many angels could dance on the head of a pin. A century ago respectable intellectuals happily agreed on the natural inferiority of the Negro. How about the briefly fashionable ‘runaway global warming’? (We’ll end up like Venus!) There may be one or two people here who would prefer to forget their involvement in that one.
It seems to be an unfortunate fact that in intellectual discourse (as in most other things) fashion, rather than rigour, is the prime determinant of an idea’s ‘correctness’.

John Blake
July 27, 2013 7:34 pm

As Edward Lorenz showed in 1960, Earth’s climate is sensitively dependent on initial conditions, the so-called “butterfly effect”. This of course entails not merely Chaos Theory with its paradigmatic “strange attractors” but Benoit Mandelbrot’s Fractal Geometry wherein all complex dynamic systems tend to self-similarity on every scale.
If Global Climate Models (GCMs) indeed are initiating their simulations to some arbitrary Nth decimal, their results will devolve to modified Markov Chains, non-random but indeterminate, solely dependent on the decimal-resolution of an initial “seed.” To say that this invalidates over-elaborate computer runs not just in practice but in math/statistical principle severely understates the case: Over time, absolutely no aspect of such series can have any empirical bearing whatsoever.
Kudos to this paper’s authors. Their results seem obvious, once pointed out, but it takes a certain inquiring –might one say “skeptical”– mindset to even ask such questions.

July 27, 2013 7:40 pm

Worrying about the precision of calculations is a waste of time. Sure; the calculations will be more accurate but the results, however deviant, are still dependent upon a description of initial state where each parameter has in the range of 6 to 10 bits of “resolution” – before homogenisation – which tends to “dampen” the information content, reducing effective bits of state information.
Internally, the models tend to be a swill of ill-conceived code, apparently written by undergraduates with little appreciation of software engineering, let alone the physical systems that they're "modelling". Physical "constants" that are e.g. only constant in ideal gases are hard-coded to between 10 and 16 bits of precision. Real constants such as π are not infrequently coded as a decimal (float) value with as few as 5 digits. (I knew that was very wrong when I was an undergraduate.) Other formula constants are often lazily hard-coded as integer values. Multiplication or division of DOUBLE (or better) PRECISION variables can typically give different results, often determined by the order in which the equations are written and subsequently compiled.
Superficially, that doesn’t make much difference. But as the GCM are iterative models, it’s the changes in parameters that determine the behaviour of the simulated system. i.e. Lots of nearly-equal values are subtracted and the result is used to determine what to do next.
Subtraction of “nearly-equal” values is also a numerical technique for generating random numbers.
I’ve previously commented on the credibility of ensembles of models and model runs.
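To make the "subtraction of nearly-equal values" point concrete, here is a minimal C sketch (illustrative only, not code from any model or from the paper). The naive formula below loses its significant digits to cancellation as x shrinks, while the algebraically identical rearrangement keeps them; on a typical IEEE-754 double-precision libm the naive form has collapsed to zero by about x = 1e-8, even though the true value is 0.5.

#include <math.h>
#include <stdio.h>

/* Evaluate (1 - cos(x)) / x^2 two ways for small x.
   The true limit as x -> 0 is 0.5. */
int main(void)
{
    for (int k = 4; k <= 8; ++k) {
        double x = pow(10.0, -k);
        double naive  = (1.0 - cos(x)) / (x * x);   /* cancels catastrophically for small x      */
        double s = sin(0.5 * x);
        double stable = 2.0 * s * s / (x * x);      /* algebraically identical, no cancellation  */
        printf("x = 1e-%d   naive = %.15f   stable = %.15f\n", k, naive, stable);
    }
    return 0;
}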

AndyG55
July 27, 2013 7:51 pm

I repeat what I have said before.
If they hindcast to HadCrud or GISS, the result will ALWAYS be a large overestimate of future temperatures.

July 27, 2013 7:55 pm

None of them are close to the observations, so why should we care if they are close to each other? They are all wrong.

ombzhch
July 27, 2013 7:55 pm

There are a number of additional points that need to be made about modeling and Computation,
1. Fortran is used for two reasons: (a) the numerical libraries, e.g. NAG, are best tested there, and (b) the semantics of the language allow compilers to produce better code with more aggressive optimization. For these reasons it is still extensively used in finance, oil exploration, etc. for new work. Different ABIs make it slow to mix Fortran with C and C++.
2. Interpreted languages that can give you arbitrary precision, e.g. Perl, are 3-5 times too slow.
3. Do not listen to the OO club; once you get into that you bring in huge, unnecessary overhead, which usually doesn't matter (people are more expensive than machines), but simplicity and speed are key to the linear extrapolation that has to be done by many cores to make these computations hum. Hand-coded assembly can often pay off.
4. The limits are set by chaos and continual rounding, which moves the initial conditions for each cycle.
MFG, omb

DougS
July 27, 2013 8:09 pm

The models work just fine. They produce the propaganda they are intended to produce.

ROM
July 27, 2013 8:38 pm

From a computer- and mathematically-challenged old timer, this has been a fascinating and very illuminating post on the fallibilities of software engineering and computer coding as currently expressed and promoted in the climate models.
It is also these same incompetently coded climate models that are being used for an almost endless array, a whole bandwagon, of well-endowed taxpayer-funded alarmist research claims in nearly every scientific field.
The software engineers and computer-savvy folk here have very effectively shredded the whole of climate science's modeling claims, and in doing so have almost destroyed the climate modelers' claims that they are able to predict the future directions of the global climate through the use of dedicated climate models.
And I may be wrong, but through this whole post, and particularly the very illuminating comments section, nary a climate scientist or climate modeler was to be seen or heard from.
Which says a great deal about the coding skills, or relative lack of any real competency and high-level coding capability, of the various climate science modelers and the assorted economists, astronomers, meteorologists, geologists, dendrochronologists and so on who have all been very up front in getting themselves re-badged and their bios nicely padded out as so-called Climate Scientists.
All this incompetency in climate science's modeling has probably cost three quarters of a trillion dollars worth of global wealth and god knows how much suffering and how many lives – along with the still only partly known extent of the destruction of living standards for so many, both in the wealthy western countries and even more so in the undeveloped world, where development and increases in living standards have been held back and even crippled by the near-fanatical belief of the ruling political and watermelon cabals in climate science's incompetently modeled and increasingly disparaged predictions for the future of the global climate.
We are all paying, and will likely continue to pay for some time, a truly horrific price for the incompetency and ever more apparent ignorance of the self-professed climate scientists and their failed and incompetently coded climate models, on which the entire global warming / climate change / climate extreme meme is entirely based.

Mike McMillan
July 27, 2013 9:01 pm

DirkH says: July 27, 2013 at 2:33 pm
… The Mandelbrot equation is not very complex yet chaotic.

The Mandelbrot equation terms are complex numbers, which should qualify it as complex.

Mike McMillan
July 27, 2013 9:49 pm

wsbriggs says: July 27, 2013 at 4:14 pm
For those of sufficient curiosity, get the old Mandelbrot set code, and set it up to use the maximum resolution of your machine. Now take a Julia set and drill down, keep going until you get to the pixels. This is the limit of resolution for your machine, if you’re lucky, your version of the Mandelbrot algorithm lets you select double precision floating point numbers which are subsequently truncated to integers for display, but still give you some billions of possible colors.
DirkH says: July 27, 2013 at 4:22 pm
… What he [wsbriggs] means is, zoom into it until it becomes blocky. The blocks you see are atrefacts because your computer has run out of precision. They shouldn’t be there if your computer did “real” maths with real numbers. floating point numbers are a subset of real numbers.

The colors in a Mandelbrot image have nothing to do with the values computed by the Mandelbrot algorithm.
Here’s a rundown on what’s going on.
The Mandelbrot set is a set of points in the complex plane. A point's x coordinate is a real number, and its y coordinate is an imaginary number (a number with i, the square root of minus one, attached). Points whose distance from the origin is greater than 2 are not in the set.
The algorithm takes the coords (a complex number) of each point, squares them, and adds the original coords back on (z → z² + c). Due to the quirky nature of complex number math, we get another complex number, the coords of another point somewhere else. We check to see if that next point is more than 2 away from the origin. If not, we square it, add the original coords again, and repeat the operation until we get a point 2 or more away, or until we've repeated a selected limit number of times. When that happens, we throw away all the results of our calculations and write down only the number of iterations performed, which becomes the value for the original point.
The color pixel we place on that point is one we've already chosen to represent that number of iterations and placed in a lookup table. We can pick as pretty a bunch of colors as we wish, and as many of them as we decide to limit the algorithm iterations to. Since points actually in the set will never exceed 2 regardless of the number of iterations, they're all one color and not interesting, as are original points already more than 2 from the origin. All the action is in the border regions, which in truth are not in the Mandelbrot set.
PDF file of the original Scientific American article that got the whole thing started.
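For readers who want to experiment with the escape-time algorithm just described, here is a bare-bones C sketch (an illustration only; the iteration cap, the region and the crude ASCII output are arbitrary choices, and a real renderer would map the counts through a colour lookup table as described above).

#include <stdio.h>

/* Escape-time count for one point c = cx + i*cy of the complex plane.
   Iterate z -> z*z + c; return how many steps it takes |z| to exceed 2,
   capped at max_iter. That count is what gets mapped to a colour. */
static int escape_count(double cx, double cy, int max_iter)
{
    double zx = 0.0, zy = 0.0;
    for (int n = 0; n < max_iter; ++n) {
        if (zx * zx + zy * zy > 4.0)          /* |z| > 2: the orbit escapes */
            return n;
        double tmp = zx * zx - zy * zy + cx;  /* real part of z*z + c      */
        zy = 2.0 * zx * zy + cy;              /* imaginary part            */
        zx = tmp;
    }
    return max_iter;                          /* treated as "in the set"   */
}

int main(void)
{
    /* Crude ASCII rendering of the region [-2, 1] x [-1.2, 1.2]. */
    for (int row = 0; row < 24; ++row) {
        for (int col = 0; col < 78; ++col) {
            double cx = -2.0 + 3.0 * col / 77.0;
            double cy = -1.2 + 2.4 * row / 23.0;
            putchar(escape_count(cx, cy, 50) == 50 ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}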

Surfer Dave
July 27, 2013 10:18 pm

One should read Donald Knuth's "The Art of Computer Programming", Section 4.2.2 "Accuracy of Floating Point Numbers" in Volume 2, to get a feel for how bad it can be. It turns out it is worse for addition and subtraction operations on floating point than for multiplication or division, and in fact the normal axioms of arithmetic do not hold; for example, the associative law fails: (a+b)+c is not always the same as a+(b+c), depending on the actual values of a, b and c.
Knuth also writes about random numbers, which are an exceptionally difficult thing to produce.
So, when I did look at some GCM source code a few years ago, I could see that there was no attempt to track errors from the underlying floating point system, and that random numbers are used widely (which by itself is an indication that the "models" are suspect). I found instances where repetitive iterations used floating point additions, and it was clear to me that these models would quickly degenerate into nonsense results.
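A two-line demonstration of the associativity failure described above, assuming IEEE-754 doubles with round-to-nearest (a sketch, nothing model-specific): the values are chosen so that one grouping keeps the 1.0 and the other loses it entirely.

#include <stdio.h>

int main(void)
{
    double a = 1e16, b = -1e16, c = 1.0;
    /* (a + b) + c: the big terms cancel first, then the 1.0 survives.      */
    double left  = (a + b) + c;
    /* a + (b + c): b + c rounds back to -1e16 (double spacing there is 2),
       so the 1.0 is lost entirely.                                         */
    double right = a + (b + c);
    printf("(a+b)+c = %g\n", left);   /* prints 1 */
    printf("a+(b+c) = %g\n", right);  /* prints 0 */
    return 0;
}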

Chris Jesshope
July 27, 2013 11:07 pm

Surfer Dave says: (a+b)+c is not always the same as a+(b+c). This is absolutely correct. Deterministic results from a program rely on applying the operations in the same order. Different compilers may schedule the machine code differently while implementing the same algorithm; this is one source of non-determinism in the results. Unfortunately there is another, which means that the same machine code may not generate the same results twice on the same machine: the hardware is able to reschedule operations that are not dependent on each other, i.e. the machine instructions may not necessarily execute in the same order between two runs on the same computer.
With a stable system this is not a problem, but where you have amplification of results which lose precision (i.e. differencing two almost-equal numbers), the results can be off by orders of magnitude.
I did my PhD modelling semiconductor equations, very similar field equations to those used in weather and climate modelling. Because of this I have always distrusted the certainty expressed in the results. I was able to fit simulated to measured results just by making small adjustments to certain parameters, where those adjustments were not much larger than the rounding error.

jorgekafkazar
July 28, 2013 12:00 am

“Climate model” is an oxymoron. There is not and never will be a valid model of the Earth’s climate. What climatologists create are climate emulators, conglomerations of algorithms that produce an output with short-run climate-like properties. The result is not Earth climate, but meaningless academic exercises whose primary use is to deceive people, including themselves.

DirkH
July 28, 2013 12:38 am

Mike McMillan says:
July 27, 2013 at 9:49 pm

“DirkH says: July 27, 2013 at 4:22 pm

“… What he [wsbriggs] means is, zoom into it until it becomes blocky. The blocks you see are atrefacts because your computer has run out of precision. They shouldn’t be there if your computer did “real” maths with real numbers. floating point numbers are a subset of real numbers.”

The colors in a Mandelbrot image have nothing to do with the values computed by the Mandelbrot algorithm. […]”

If the values computed are not used, then why do you go on to say

“We check to see if that next point [the value computed in the last step – Dirk] is more than 2 away from the origin. If not, […]”

Your description of the algorithm is correct; but your first sentence is an obvious absurdity.

DirkH
July 28, 2013 12:40 am

Mike McMillan says:
July 27, 2013 at 9:01 pm

“DirkH says: July 27, 2013 at 2:33 pm
… The Mandelbrot equation is not very complex yet chaotic.
The Mandelbrot equation terms are complex numbers, which should qualify it as complex.”

….rimshot.

sophocles
July 28, 2013 12:52 am

Woo hoo! So 0.65 Deg. C, or 0.7 Deg. C, or whatever, is just a cumulative rounding error!
Can all those PhDs who created this nonsense called CAGW and "Climate Change" please hand in their diplomas for immediate incineration? C'mon, Mikey, this means YOU too!

Louis
July 28, 2013 2:03 am

No problem. Now they have an excuse to make adjustments to the model output data like they already do to temperature data. If they know how to adjust for things like urban heat effects, time-of-day problems, thermometer upgrades, and various errors, it must be a simple matter to adjust model output for rounding errors, right? After yearly adjustments, model predictions will magically match observations quite nicely. It was those pesky rounding errors that caused the models to be so far off in the first place. /sarc

Carbon500
July 28, 2013 2:31 am

The temperatures on the graph’s vertical axis are presumably anomalies – if so, from what period are the deviations please? I don’t want to go to the expense of buying the paper just for this scrap of information! Thank you in anticipation.

PaulM
July 28, 2013 2:57 am

This error wouldn’t be possible outside of academia.
In the real world it is important that the results are correct, so we write lots of unit tests. These are sets of tests for each of the important subroutines in the program, where we pass in a wide range of parameters and compare the result to the expected result. So if you moved your program to a different computer that produced slightly different results, your unit tests would fail.
Unit tests also ensure that when you make changes to the program you haven't broken anything. A lot of software professionals that I know wouldn't trust the results of any program that didn't have a comprehensive suite of unit tests.

Chris Jesshope
Reply to  PaulM
July 28, 2013 4:00 am

As floating point operations are not associative, if you change the order of operations you will get different results (and you cannot avoid this; you will get different orderings with different compilers/hardware). Unit tests on floating point have to look at value ranges. Anything outside that range is not necessarily an error, but may indicate an unstable computation.
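Putting the two comments above together, a floating-point unit test compares against a tolerance rather than for bit-exact equality. A minimal C sketch follows (the harmonic-series data, the helper name assert_close and the 1e-9 tolerance are illustrative choices only, not anyone's actual test suite):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Fail loudly if two doubles differ by more than a relative tolerance. */
static void assert_close(double got, double want, double rel_tol, const char *name)
{
    if (fabs(got - want) > rel_tol * fmax(fabs(want), 1.0)) {
        fprintf(stderr, "FAIL %s: got %.17g, want %.17g\n", name, got, want);
        exit(1);
    }
    printf("ok   %s (|diff| = %.3g)\n", name, fabs(got - want));
}

int main(void)
{
    enum { N = 1000000 };
    static double x[N];
    for (int i = 0; i < N; ++i) x[i] = 1.0 / (i + 1);

    double fwd = 0.0, bwd = 0.0;
    for (int i = 0; i < N; ++i)      fwd += x[i];   /* front to back */
    for (int i = N - 1; i >= 0; --i) bwd += x[i];   /* back to front */

    /* A bit-exact (==) comparison here would be brittle across compilers
       and hardware; a tolerance-based check is the portable test. */
    assert_close(fwd, bwd, 1e-9, "harmonic sum, two orderings");
    return 0;
}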

johnmarshall
July 28, 2013 3:01 am

It only goes to show that formulating government energy policy on model output is a stupid thing to do.

DirkH
July 28, 2013 3:34 am

Carbon500 says:
July 28, 2013 at 2:31 am
“The temperatures on the graph’s vertical axis are presumably anomalies – if so, from what period are the deviations please? I don’t want to go to the expense of buying the paper just for this scrap of information! Thank you in anticipation.”
The graph says it. All trend lines normalized to zero in 1979. No reference period necessary. 1979 is the reference.
I think the graph comes from Dr. Roy Spencer, you might be able to find out more, if necessary, on his blog.
http://www.drroyspencer.com

DirkH
July 28, 2013 3:36 am

Carbon500, the graph is not from the paper. And Anthony does not say it does. He put it there to explain the spaghetti graph concept only.

Nick Stokes
July 28, 2013 3:42 am

PaulM says: July 28, 2013 at 2:57 am
“This error wouldn’t be possible outside of academia.”

It isn’t an error. It is very well known that atmosphere modelling is chaotic (as is reality). Numerical discrepancies of all kinds grow, so that the forecast is no good beyond about 10 days. Rounding errors grow like everything else.
The alternative is no numerical forecast at all.

Mark
July 28, 2013 4:48 am

Dennis Ray Wingo says:
Why in the bloody hell are they just figuring this out? Those of us who are engineering physicists, engineers, or even straight code programmers are taught this in class and we even learn to write programs to determine the magnitude of these errors. That these people are just now studying this and figuring it out is the height of incompetence!
Maybe it has something to do with the mentality of “only a climate scientist is qualified to even comment about climate science”.
Thus an engineer, physicist, computer scientist, etc. who points out such issues is simply called a “denier” and ignored.

Mark
July 28, 2013 5:02 am

Robert Clemenzi says:
Besides the fact that it wouldn’t run, there were a number of other issues.
* Years were exactly 365 days long – no leap years
* Some of the physical constants were different than their current values
* The orbital computation to determine the distance between the Earth and the Sun was wrong

At which point any “rounding errors” resulting from calculations by the machine don’t really matter, since the basic “physics” of the model is fiction.
It was when I discovered a specific design error in using Kepler’s equation to compute the orbital position that I quit playing with the code. I wrote a short paper explaining the error, but it was rejected for publication because “no one would be interested”.
More likely it would cause too much “loss of face”…

ikh
July 28, 2013 5:06 am

Nick Stokes says
“It isn’t an error.”
No Nick, it's not an error and you are not a Troll. It is a humongous novice error that is easily avoided by using fixed point in integral data types.
Btw, nice to see you admitting that the GCMs are no use beyond 10 days. That means we can throw away all those pesky projections to 2100.
/ikh
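For what it's worth, here is a toy C sketch of the fixed-point idea (an illustration of why integer arithmetic is reproducible, not a claim that a GCM could simply be rewritten this way; fixed point brings its own range and scaling headaches):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Toy fixed-point type: a temperature stored as an integer count of
   micro-kelvins. Integer addition is exact and associative (as long as it
   stays in range), so the order of operations cannot change the sum.     */
typedef int64_t microkelvin;   /* 1 unit = 1e-6 K */

static microkelvin from_kelvin(double k)      { return (microkelvin)llround(k * 1e6); }
static double      to_kelvin(microkelvin m)   { return (double)m / 1e6; }

int main(void)
{
    microkelvin a = from_kelvin( 288.150000);
    microkelvin b = from_kelvin(   0.000001);
    microkelvin c = from_kelvin(-288.150000);

    /* Both groupings give exactly the same integer result. */
    printf("(a+b)+c = %.6f K\n", to_kelvin((a + b) + c));
    printf("a+(b+c) = %.6f K\n", to_kelvin(a + (b + c)));
    return 0;
}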

ikh
July 28, 2013 5:17 am

DirkH says:
July 27, 2013 at 6:13 pm
“As you say, re-ordering means non-dependancy. Assuming no CPU erratum this does not affect the outcome. Determinism is maintained.”
Nope. You are forgetting that re-ordering non-dependent floating point operations can give you different rounding errors. That is not deterministic.
The same is also true in multi-threaded code without race conditions. But if climate scientists don’t even understand the basics of numerical programming, they are not likely to do well in the more complex world of multi-threading.
/ikh

ikh
July 28, 2013 5:26 am

davidmhoffer: There never has been, and never will be, a programming language that stops people from writing rubbish code. The advantage of C over Fortran is that it has block structure and strongly typed function signatures. Also, there are a lot of open source tools to help untangle the spaghetti 🙂
/ikh

RC Saumarez
July 28, 2013 5:35 am

Do models work? Are they verifiable?
A modest suggestion for an experiment: Take a very large building and cause a flux of energy through it. Include a large amount of water, some of which is cooled until it is frozen. Instrument the building for temperature, pressure, flows etc.
Invite, say, 5 groups of climate modellers to simulate the behaviour of this experiment. Will they all get the same results? I doubt it.

July 28, 2013 5:46 am

The point about GCMs is that they're unverifiable, which means they neither prove nor predict anything to a rational person, unless they're into belief rather than science.
http://thepointman.wordpress.com/2011/01/21/the-seductiveness-of-models/
Pointman

DirkH
July 28, 2013 5:55 am

ikh says:
July 28, 2013 at 5:17 am
“DirkH says:
July 27, 2013 at 6:13 pm
“As you say, re-ordering means non-dependancy. Assuming no CPU erratum this does not affect the outcome. Determinism is maintained.”
Nope. you are forgetting that re-ordering non-dependant floating point operations can give you different rounding error. That is not deterministic.”
You’re right! Thanks, I didn’t think of that!

DirkH
July 28, 2013 6:03 am

Robert Clemenzi says:
July 27, 2013 at 1:20 pm
“Besides the fact that it wouldn’t run, there were a number of other issues.
* Years were exactly 365 days long – no leap years
* Some of the physical constants were different than their current values
* The orbital computation to determine the distance between the Earth and the Sun was wrong
It was when I discovered a specific design error in using Kepler’s equation to compute the orbital position that I quit playing with the code. I wrote a short paper explaining the error, but it was rejected for publication because “no one would be interested”.”
It looks like we have entrusted the future of our energy infrastructure and economy to a bunch of amateur enthusiasts who somehow slipped into research institutes where they were mistaken for scientists.
Reminds me of the guy who sold the Eiffel tower… twice. (wikipedia).
Today he would become a climate scientist.

Mark Negovan
July 28, 2013 6:03 am

This is one of the most important posts here at WUWT on the subject of using GCM crystal balls and is one of the first ones that I have seen here that shows the true computational insanity of all of the climate models. Although many other articles have shown GCMs to be inconsistent with each other and in comparison to actual data, the actual problem with using GCMs to compute the future is the underlying fact that from a computer science perspective, any errors or invalid initial conditions are amplified and propagated through any model run. Also, GCMs fail to address error propagation, model uncertainty, or any other chaotic influence.
If you are designing an airplane wing using CFD, you absolutely have to determine the computational uncertainty to properly validate the engineering model. Error uncertainty in this case is bounded and the model can be adjusted to test the error extremes and hence validate that the wing stresses are within design parameters. You cannot do that in a time series extrapolation of the future state of a chaotic system. The problem is that if you just test the extremes of the error uncertainty after a time step in a GCM, you are just computing a different state of the system. Thus errors and uncertainty at any time step is unbounded and the error bound will increase to the extremes of where the climate has been in the past in a very short time.
The arguments that the scientists use in their claims that GCMs are robust (such as the one Nick Stokes uses in comments here) are not scientifically demonstrable. You cannot change the rules of science and claim that error propagation does not matter in GCMs because of some magical property of the model. THIS IS THE ACHILLES HEEL OF GCMs.
There is no scientific basis that can support the prestidigitation of GCM prognostication.

DirkH
July 28, 2013 6:33 am

Mark Negovan says:
July 28, 2013 at 6:03 am
“The problem is that if you just test the extremes of the error uncertainty after a time step in a GCM, you are just computing a different state of the system. Thus errors and uncertainty at any time step is unbounded and the error bound will increase to the extremes of where the climate has been in the past in a very short time.”
Exactly. “Defocussing” the state to a meaningless blur in a few time steps.

beng
July 28, 2013 7:10 am

Rounding errors? We don’t care ’bout no stinkin’ rounding errors! As long as the results are in the ballpark & moving upward. /sarc

Jaye Bass
July 28, 2013 7:16 am

How about something as simple as unit tests? Do they exist for every subroutine?

LdB
July 28, 2013 7:47 am

Stokes
NS says: It isn’t an error. It is very well known that atmosphere modelling is chaotic (as is reality). Numerical discrepancies of all kinds grow, so that the forecast is no good beyond about 10 days. Rounding errors grow like everything else.
NS says: The alternative is no numerical forecast at all.
Nick, that is all blatantly wrong. You may not know how to deal with the problem, but many scientists do, so please don't lump us all in with your limited science skills and abilities.
First, let's sort out some terminology: errors that grow are called compounding errors, and they occur due to a refusal to deal with them. A chaotic system can be forecast if you know what you are doing and know how to deal with the errors; in fact, most weather forecasters do it every day. In quantum mechanics chaos crops up in all sorts of forms, and we deal with it, forecast and model around it, and lift it into theories, usually as probabilities.
We did this dance with Nicola Scafetta and his astrological cycles; I am beginning to think it must be common for the climate science community to be ignorant of compounding errors.
So Nick, without getting into some of the more complex mathematical and QM ways to deal with chaotic errors, the absolutely simple way to do it is to drag the model back down to reality and dump the errors each cycle. Weather forecasts typically do exactly that: they ignore yesterday's mistake and forecast forward.
Similarly, most manual human navigation systems have the concept of waypoints, where you visually locate a point and then fix that point as your new start point, removing all the previous errors.
It's really not that hard, and the most famous example of a chaotic system was found in 1964 (http://mathworld.wolfram.com/Henon-HeilesEquation.html).
You might want to start there before you make further stupid statements like 'chaotic systems can't be forecast or approximated' – or at least talk to a real scientist.
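The "drag it back to reality each cycle" idea can be shown with a toy. Below is a minimal C sketch using the logistic map as a stand-in chaotic system (purely a cartoon of re-initialising from observations; real data assimilation is far more involved, and none of the numbers here come from any model):

#include <math.h>
#include <stdio.h>

/* Toy illustration of "dumping the error each cycle" using the logistic map
   x -> r*x*(1-x), a standard chaotic test case. */
static double step(double x) { return 3.9 * x * (1.0 - x); }

int main(void)
{
    double truth   = 0.4;
    double freerun = 0.4 + 1e-9;   /* free-running model: tiny initial error   */
    double nudged  = 0.4 + 1e-9;   /* model re-initialised from "observations" */

    for (int t = 1; t <= 60; ++t) {
        truth   = step(truth);
        freerun = step(freerun);
        nudged  = step(nudged);
        if (t % 10 == 0)
            printf("t=%2d   free-run error %9.2e   nudged error %9.2e\n",
                   t, fabs(freerun - truth), fabs(nudged - truth));
        /* Every 5 steps, pull the nudged run back to an imperfect observation. */
        if (t % 5 == 0) nudged = truth + 1e-6;
    }
    return 0;
}

The free-running copy should decorrelate from the "truth" within a few dozen steps, while the periodically re-initialised copy never drifts further than its observation error can grow in five steps, which is the waypoint idea in miniature.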

Blarney
July 28, 2013 8:38 am

Seems completely obvious, and not an issue, to me. To say it plainly: we all know that a butterfly flapping its wings can influence the weather in a distant place and time; however, the same cannot (or at least is extremely unlikely to) cause an ice age or global warming in the distant future. Every single model's prediction of a particular level of warming in a particular year is per se meaningless. It is the sum of all predictions that produces a meaningful result. And unless it is demonstrated that slightly different CPUs or, for that matter, initial conditions generate completely different _average trends_, the thing is completely irrelevant to climate models.

Ian W
July 28, 2013 8:39 am

Nick Stokes says:
July 27, 2013 at 12:40 pm
Frank K. says: July 27, 2013 at 12:16 pm
“Unfortunately, Nick, climate (as formulated in most GCMs) is an initial value problem. You need initial conditions and the solution will depend greatly on them (particularly given the higlhy coupled, non-linear system of differential equations being solved).
“They follow patterns of synthetic weather”??
REALLY? Could you expand on that?? I have NEVER heard that one before…”
Here is something that is familiar, and is just a small part of what AOGCM’s do. Time varying ocean currents, shown with SST. You see all the well known effects – Gulf Stream, ENSO, Agulhas. It’s time varying, with eddies.
If you run that on another computer, the time features won’t match. You’ll see eddies, but not synchronous. That’s because of the accumulation of error. There is no prediction of exactly what the temperature will be at any point in time. But the main patterns will be the same. The real physics being shown is unaffected.

Nick, the 'real physics' may be unaffected, but the different values for the various vectors will be, and if these are the starting conditions for a floating-point-constrained program modeling a chaotic system, then the results will be totally different depending on which of the computers the model was run on. This is indeed what we see with GCMs.
From experience, even trying to model mesoscale weather (say out to 20 km) accurately is not feasible more than 20–30 minutes out.
Therefore, there will be hidden assumptions in the models about which physical laws can be discounted as unimportant at a particular scale, which are averaged together, and which are completely retained. These assumptions will also create wild variance in the chaotic models. I presume that some of the 'training' of the models is to bound the wildness of these variances by rule of thumb, to stay inside bounds acceptable to the researcher. Thus the models cease to be scientific – purely based on 'the laws of physics' – and become merely the expression of the researcher's best guess hidden behind complex software.

LdB
July 28, 2013 8:45 am

ZOMG, I take it back: after looking around, someone in climate science does understand chaos.
Very good article:
http://judithcurry.com/2011/03/05/chaos-ergodicity-and-attractors/
AND THIS I AGREE WITH:
=>Nothing tells us that such a finite dimensional attractor exists, and even if it existed, nothing tells us that it would not have some absurdly high dimension that would make it unknown forever. However the surprising stability of the Earth’s climate over 4 billion years, which is obviously not of the kind “anything goes,” suggests strongly that a general attractor exists and its dimension is not too high.
The challenge for a climate science modeler is to keep adjusting the attractor until you get a lock on the chaotic signal, at which point you can perfectly predict.

July 28, 2013 8:46 am

LdB says:
July 28, 2013 at 7:47 am
Stokes
NS says: It isn’t an error. It is very well known that atmosphere modelling is chaotic (as is reality). Numerical discrepancies of all kinds grow, so that the forecast is no good beyond about 10 days. Rounding errors grow like everything else.

So Nick without getting into some of the more complex mathematical and QM ways to deal with chaotic errors the absolutely simple way to do it is to drag the model back down to reality and dump the errors each cycle. All weather reports typically do that they ignore yesterdays mistake and forecast forward.

Exactly — and this approach is used in many optimization systems. Where you can have "way points", this approach will constantly correct the model and can be used to adjust parameters. Anyone who has worked with manufacturing scheduling realizes this. The scheduling algorithms are typically NP problems — with approximate solutions. Correcting further by monitoring progress with real-time data collection reduces errors considerably — an obvious point, one would think.
My first work was in designing math processors that could multiply and divide numbers of any length — decimal or otherwise — with any degree of accuracy/precision desired. The math processing routines in today's compilers do use approximations — as do the math co-processors in the more advanced chips. This is simply to keep computational time reasonable in "small" computation systems.
Many times the algorithm chosen to solve a problem is one that uses a lot of "multiply/divide" (where relative error is specified) when it could just as easily rely on an algorithm that uses mostly "add/subtract" — where you can typically use "absolute error". Sometimes we have a choice of algorithm — sometimes we don't….
The add/subtract error grows much more slowly than the error of the multiplication and division operations. It's one reason the use of determinants to solve an equation set can often be a bad idea, as opposed to LU factorization or one of the other algorithms for equation-set solution that use mostly add/subtract (Gauss–Seidel, if I recall correctly).
On occasion, when asked to evaluate why a computer was providing "bad answers", I have created and run algorithms that gave an answer correct to, say, a hundred decimal places to make a point (whether on a micro-processor or a mainframe). It's all about the time requirements and your appetite (tolerance) for error.
Advances in FPGAs and ASICs could change how we do calculations of popular algorithms, as we could design special-purpose chips or programs that would run only a particular algorithm — or at high accuracy. Again — it is a money vs. time vs. accuracy requirements issue.
Apologies for the simplistic explanation.

cliff mass
July 28, 2013 8:52 am

Anthony and others,
As a numerical modeler at the University of Washington, let me make something clear: this issue is not really relevant to climate prediction–or at least it shouldn't be. The fact that differences in machine architecture, number of processors, etc. will change a deterministic weather forecast over an extended period is well known. It is a reflection of the fact that the atmosphere is a chaotic system and that small differences in initial state will eventually grow. This is highly relevant to an initial value problem, like weather forecasting. Climate modeling is something else…it is a boundary value problem…in this case the radiative effects due to changes in greenhouse gases.
To do climate forecasting right, you need to run an ensemble of climate predictions, and for a long climate run the statistical properties of such climate ensembles should not be sensitive to the initial state. And they should reveal the impacts of changing greenhouse gas concentrations.
…cliff mass, department of atmospheric sciences, university of washington

GaryW
July 28, 2013 8:57 am

This article was an eye-opener for me. I falsely assumed issues of rounding error, math library calculation algorithms, and parallel processing were considered and handled properly in software. I assumed differences between measured values and climate projections were entirely due to climate simulation algorithms. I had always wondered how so many competent people were misled about the accuracy of projections of future warming. Now I see that it is quite possible to assume other folks are operating with the same or better level of knowledge and care as oneself, and so be led to invalid confidence in their claims.

DirkH
July 28, 2013 9:11 am

Blarney says:
July 28, 2013 at 8:38 am
“And unless it is demonstrated that slightly different CPUs or, for that matters, inital conditions, generate completely different _average trends_, the thing is completely irrelevant to climate models.”
You sound as if you believed the average trends computed now by climate models were not already falsified.
It has been demonstrated that the average trend computed by GCM’s does not correspond to reality; see chart in headpost. So the Null hypothesis holds – the climate is doing nothing extraordinary but slowly recovering from the LIA; CO2 concentrations do not affect it in a significant way.
Now the onus is on the programmers of the climate models to come up with a new hypothesis, incorporate it into their programs, make predictions, and wait for possible falsification again.

Blarney
Reply to  DirkH
July 28, 2013 9:18 am

“You sound as if you believed the average trends computed now by climate models were not already falsified.”
This is a completely different matter. The influence of numerical approximations on predictions based on climate models can be (is, in my opinion) completely irrelevant, and still those models may be unable to produce accurate predictions for any other reason.
If you think that climate models are wrong, just don't make the mistake of buying into any proposed explanation of why it is so, because instead of making your argument stronger, it makes it weaker.

DirkH
July 28, 2013 9:17 am

cliff mass says:
July 28, 2013 at 8:52 am
“This is highly relevant to an initial value problem, like weather forecasting. Climate modeling is something else…it is a boundary value problem…in this case the radiative effects due to changes in greenhouse gases.”
You are claiming that there are no tipping points. “Tipping points” have been the number one mainstay of climate science for years, and still are; they continue talking about a “point of no return”.
Thanks for clarifying that official climate science now believes something else; namely a simple one to one relationship between greenhouse gases and temperature.
In that case, why do they run 3 dimensional models at all? A handkerchief should provide sufficient space for extrapolating the temperature in the year 2100.
Are you saying they -cough- are wasting tax payer money by buying themselves unnecessary supercomputers? Oh. Yes that is exactly what you are saying.

cynical_scientist
July 28, 2013 9:31 am

The more important question is how do you validate a model of a chaotic system? To validate the model of a stable system is simple – you just compare prediction to reality (with error bounds) and throw the model out if the two don’t match. But when you are modelling a chaotic system you can have no expectation that prediction will look anything like reality even if the model is perfect. So how could you falsify such a model? An unfalsifiable model isn’t science.
The best method is to make use of the properties of chaotic systems. The solutions are not completely random – there will be relationships between the variables, reflecting the fact that the solution set forms an attractor: a surface of lower dimension than the full space. Seek those relations. If observation does not lie on the predicted attractor, the model is falsified.
I don’t see much of this happening with climate models though. Instead of looking for relationships between the variables that would characterise the nature of the attractor and might allow the model to be tested, everyone just seems fixated on producing spaghetti graphs of temperature.

July 28, 2013 9:44 am

This sort of thing is very difficult to control for. See, for example Floating-Point Determinism:

Some of the additional things that you may need to worry about include:
* Compiler rules for generating floating-point code
* Intermediate precision rules
* Compiler optimization settings and rules
* Compiler floating-point settings
* Differences in all of the above across compilers
* Different floating-point architectures (x87 versus SSE versus VMX and NEON)
* Different floating-point instructions such as fmadd
* Different float-to-decimal routines (printf) that may lead to different printed values
* Buggy decimal-to-float routines (iostreams) that may lead to incorrect values

Reed Coray
July 28, 2013 10:06 am

steven says: July 27, 2013 at 11:17 am
Can’t wait to see what RGB has to say about this!

Just maybe we’ve stumbled across a reason for averaging GCM model results–we’re averaging out random rounding errors. But if the rounding errors are that large, who is willing to trust any of the results?

DirkH
July 28, 2013 10:09 am

Blarney says:
July 28, 2013 at 9:18 am
“If you think that climate models are wrong, just don’t make the mistake of buying in any proposed explanation of why it is so, because this instead of making your argument stronger, it makes it weaker.”
I haven’t even begun to cite the holes in the physics in the models because I didn’t want to derail the debate.

DirkH
July 28, 2013 10:13 am

Reed Coray says:
July 28, 2013 at 10:06 am
“Just maybe we’ve stumbled across a reason for averaging GCM model results–we’re averaging out random rounding errors. But if the rounding errors are that large, who is willing to trust any of the results?”
That argument, if it were made, would fail for two reasons:
-averaging would only help if the errors were equally or normally distributed; now, are they?
(Law of Large Numbers again; it does not hold for Cauchy-type distributions)
-the compounding errors grow over time so the averaging would become more ineffectual with every time step. Why do it if no predefined level of quality can be maintained? A scientist should have a reason for doing things – not just average 20 runs because 20 is a nice number and averaging sounds cool.
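The Law-of-Large-Numbers caveat above is easy to demonstrate: the running mean of Cauchy-distributed draws never settles down, because the distribution has no mean. A quick C sketch (the crude rand()-based generator and the fixed seed are illustrative shortcuts only):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    srand(12345);                         /* fixed seed so the run is repeatable */
    double sum = 0.0;
    for (long n = 1; n <= 10000000; ++n) {
        /* Standard Cauchy draw: tan(pi*(U - 0.5)) for uniform U in (0,1). */
        double u = (rand() + 0.5) / ((double)RAND_MAX + 1.0);
        sum += tan(PI * (u - 0.5));
        if (n % 1000000 == 0)
            printf("n = %8ld   running mean = %12.4f\n", n, sum / n);
    }
    return 0;
}

Do the same with normally distributed draws and the running mean converges; with Cauchy-type (fat-tailed) errors, averaging more ensemble members buys you nothing.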

DirkH
July 28, 2013 10:18 am

cynical_scientist says:
July 28, 2013 at 9:31 am
“The more important question is how do you validate a model of a chaotic system? ”
-get as much data of the real system as possible for a time interval.
-initialize the model with a state that is as close to the state of the real system at the start of your reference time interval as possible.
-run the model.
-compare with what the real system did.
And that’s what climate scientists obviously never did. They do their hindcasting but they initialize with a random state. Now, is that incompetence or malice or both?

Reed Coray
July 28, 2013 10:27 am

Many, many (too many actually) years ago when I took my first class in digital signal processing (DSP), the instructor assigned us the problem of writing software code that represented an infinite impulse response (IIR) digital filter having several “poles” just inside the unit circle. We computed values for the feedback loop coefficients that would locate the “poles” at their proper locations and entered those coefficients into the computer representation of the filter. The instructor had deliberately chosen “pole” locations that required coefficients to an extremely high degree of precision. The computer/software we were using rounded (or truncated, I’m not sure which) our input coefficient values. The result was that the rounded values represented a filter where the “poles” moved from just inside the unit circle to just outside the unit circle. For those familiar with DSP, a digital filter having a “pole” outside the unit circle is unstable–i.e., for a bounded input, the output will grow without bound. When we fed random data into our computer-based IIR filter, sure enough it didn’t take long before “overflow” messages started appearing. The purpose of the whole exercise was to make us aware of potential non-linear effects (rounding) in our construction of digital filters. Apparently the Climate Science(?) community would have benefited from a similar example.
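That classroom exercise can be shrunk to a few lines. The C sketch below (an illustration under stated assumptions, not the original assignment) designs a 2nd-order IIR filter with complex-conjugate poles just inside the unit circle, stores the feedback coefficient in single precision, and reports where the poles of the stored filter actually land; on an IEEE-754 system, for this particular design the quantized coefficient rounds to exactly -1.0, i.e. the poles end up on the unit circle and the filter is no longer usable.

#include <math.h>
#include <stdio.h>

/* A 2nd-order IIR filter y[n] = a1*y[n-1] + a2*y[n-2] + x[n] with
   complex-conjugate poles at radius r and angle theta has
   a1 = 2*r*cos(theta) and a2 = -r*r; the pole radius of whatever
   coefficients you actually store is sqrt(-a2).  Storing the designed
   coefficients in single precision moves the poles.                   */
int main(void)
{
    double r  = 0.99999999;        /* designed pole radius, just inside |z| = 1 */
    double a2 = -r * r;            /* feedback coefficient that sets the radius */

    float a2q = (float)a2;         /* what a 32-bit implementation would store  */

    double r_designed  = sqrt(-a2);
    double r_quantized = sqrt(-(double)a2q);

    printf("designed  pole radius: %.10f\n", r_designed);
    printf("quantized pole radius: %.10f  %s\n", r_quantized,
           r_quantized >= 1.0 ? "(on/outside the unit circle: unusable)"
                              : "(still inside the unit circle)");
    return 0;
}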

MarkG
July 28, 2013 10:46 am

“Nope. you are forgetting that re-ordering non-dependant floating point operations can give you different rounding error. That is not deterministic.”
The compiler may well change the sequence of floating point instructions for better performance (e.g. changing a*b+c*b into (a+c)*b to eliminate a multiply), and thereby produce different results to those you expect. I’m not aware of any CPU that will do the same.
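To see the effect of that rewrite, here is a small C sketch assuming IEEE-754 doubles and round-to-nearest; the values are contrived so that the factored form provably differs from the written form in the last place:

#include <stdio.h>

/* The rewrite of a*b + c*b into (a+c)*b is algebraically harmless but not
   numerically harmless.                                                   */
int main(void)
{
    double a = 9007199254740992.0;   /* 2^53: where double spacing becomes 2 */
    double c = 1.0;
    double b = 3.0;

    double separate = a * b + c * b; /* 2^53*3 is exact; adding 3 rounds up by 4      */
    double factored = (a + c) * b;   /* a + c rounds back down to 2^53 (ties to even) */

    printf("a*b + c*b  = %.1f\n", separate);
    printf("(a+c)*b    = %.1f\n", factored);
    printf("difference = %.1f\n", separate - factored);
    return 0;
}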

July 28, 2013 10:57 am

Mr. Layman here. Awhile ago I made a comment about models projecting out to 100 years. I tried to find it but couldn’t. (It had to do with models going that far would need to have numbers for such things as the price of tea in China for the next 100 years and how that might effect the crops that may be planted instead of tea and their effect on CO2 etc. etc. Lots of variables that need a number to run in a computer program.) Now we’re learning that when the “Coal Trains of Death” is dependent upon what kind of computer and which program is run?
Why bet trillions of dollars, national economies, and countless lives on such uncertainty?

July 28, 2013 10:59 am

Now we’re learning that when the “Coal Trains of Death” is dependent

Should be “Now we’re learning that when the “Coal Trains of Death” kill us all is dependent

July 28, 2013 11:04 am

Tsk Tsk says:
July 27, 2013 at 1:04 pm
It’s disturbing to hear them blame this on rounding errors. As Ric noted all of the hardware should be conforming to the same IEEE standard which means it will result in the same precision regardless. Now the libraries and compilers are a different beast altogether and I could see them causing divergent results. Different numerical integration methods could also be at play here. Whatever the ultimate cause this is a creative definition of the word, “robust.”
Reply:
My own experience has been that IEEE compliance helps accuracy, but the way the CPU implements floating point makes a larger difference. Intel x86 uses a floating point stack. As long as you keep numbers on the stack, they compute in 80-bit precision unless you tell it to round results. Once you take the result off the stack and store it in memory as 64-bit, it gets rounded. Depending on how you write your equations into C++ or some other language, the roundoff results can be very different.
I use neural networks in my work and MS Visual Studio C++ for development. Neural networks involve many calculations of dot products of two vectors that are fed forward through the network. In my case I calculate these in a single statement to keep the summation accuracy in the stack at 80-bits. If I do the calculation in an index loop, which might be the way most people would program it, I get wildly different results because each loop rounds the accumulated result down to 64-bit each time it loops. The MS C++ compiler also doesn’t implement 80-bit IEEE and so isn’t well suited to these calculations. I only get 80-bit because I’m aware of how the stack works. I doubt the MS compiler is used much for these climate simulations, and if it is, the accuracy should be questioned. Doing things at 80-bits in a compiler that supports it really doesn’t slow things down much.
Try this on a Power or ARM CPU and you will most likely get very different results even though they all support IEEE floating point. This means that you can’t just move the neural net (with fixed weight values) to another CPU and expect it to work.
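A sketch of the 64-bit-versus-80-bit accumulation issue described above, in portable C rather than compiler-specific settings (the data and the function names are arbitrary; whether the two results actually differ depends on the compiler, the floating-point unit and the flags, which is exactly the point):

#include <float.h>
#include <stdio.h>

/* FLT_EVAL_METHOD reports how the compiler evaluates intermediates:
   0 = at declared precision (typical SSE2 builds), 2 = in long double
   (typical x87 builds). */

static double dot64(const double *x, const double *y, int n)
{
    double s = 0.0;                 /* rounded to 64 bits each iteration (on SSE2) */
    for (int i = 0; i < n; ++i) s += x[i] * y[i];
    return s;
}

static double dot80(const double *x, const double *y, int n)
{
    long double s = 0.0L;           /* extended precision where the hardware has it */
    for (int i = 0; i < n; ++i) s += (long double)x[i] * y[i];
    return (double)s;               /* one final rounding to 64 bits */
}

int main(void)
{
    enum { N = 100000 };
    static double x[N], y[N];
    for (int i = 0; i < N; ++i) { x[i] = 1.0 / (i + 1); y[i] = (i % 2) ? 1.0 : -1.0; }

    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    printf("64-bit accumulator: %.17g\n", dot64(x, y, N));
    printf("80-bit accumulator: %.17g\n", dot80(x, y, N));
    return 0;
}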

steverichards1984
July 28, 2013 11:19 am

If you are writing heavy-duty code, in any language, to be used for a serious purpose, then there are a few tips that people need to be aware of:
Others have done, and still are doing, this job, so copy the best methods.
Use approved compilers and only use subsets of your language of choice.
A typical example is MISRA C. It is not a compiler but a set of rules/'tests' ensuring that your chosen compiler/CPU will ALWAYS WORK THE SAME WAY, EVERY TIME.
What does this mean in practice? Common programming tricks are banned because the 'system' cannot guarantee they will give the same result every time they run. (Heard that one before?)
MISRA C is now the accepted standard to use if you are coding for automobiles etc.
http://www.misra.org.uk/misra-c/Activities/MISRAC/tabid/160/Default.aspx
If you want to code for aircraft flight control systems, it just gets worse from the programmer's perspective (but safer for the passenger).
DO-178B (changing to version C soon) is much tougher and contains many more rule/procedure-based checking details.
To prove that you have correctly used MISRA C or DO-178 (the programming rules) you use a software QA toolset. http://www.ldra.com/ produces tools that analyse your code before you even run it, getting rid of many bugs at the beginning.
If I were involved in spending 1 million on producing a GCM, I would expect a few professional programmers, a test engineer and a QA engineer, to prove to the customer that what we have produced fits the bill and can be seen to do so.

d$
July 28, 2013 11:22 am

Nick Stokes, you keep saying global climate models are built on 'simple' and real physics.
The fact is we still struggle to model a 'simple' molecule of CO2 in physics. The maths required to do this is frightening.
If there are serious and difficult issues in describing how a single molecule of CO2 can be modelled accurately, using highly complex maths, how on earth can we accurately predict how the climate will react to adding more CO2 to the atmosphere?

steverichards1984
July 28, 2013 11:29 am

For the ins and outs of rounding read this:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
I also note that the toolset provider http://www.ldra.com/index.php/en/products-a-services/availability/source-languages does not support FORTRAN, implying that GCMs contain wholly unverified code.

ikh
July 28, 2013 11:32 am

MarkG:
Intel chips have an instruction pipeline that they try to keep full and not allow to stall. They also support hyper-threading, branch prediction, and out-of-order execution. These are all used to maximise CPU throughput, so the chip can make maximum use of available hardware that is not currently busy. They even have hidden registers to store intermediate results in until they are needed, all the while maintaining the external appearance of doing things sequentially.
IBM's Power chips have similar functionality, and I guess the same would be true of Sun's SPARC chips.
/ikh

July 28, 2013 11:35 am

“cliff mass says: July 28, 2013 at 8:52 am
Anthony and others,
As a numerical modeler at the University of Washington, let me make something clear: this issue is not really relevant to climate prediction–or at least it shouldn't be. The fact that differences in machine architecture, number of processors, etc. will change a deterministic weather forecast over an extended period is well known. It is a reflection of the fact that the atmosphere is a chaotic system and that small differences in initial state will eventually grow. This is highly relevant to an initial value problem, like weather forecasting. Climate modeling is something else…it is a boundary value problem…in this case the radiative effects due to changes in greenhouse gases.
To do climate forecasting right, you need to run an ensemble of climate predictions, and for a long climate run the statistical properties of such climate ensembles should not be sensitive to the initial state. And they should reveal the impacts of changing greenhouse gas concentrations.
…cliff mass, department of atmospheric sciences, university of Washington”

My bolding
Cliff:
Without delving into parts of your post; if I understand your ‘doing climate forecasting right’ phrase:
–Ensemble – an arranged group of models chosen for their particular contribution to the whole. Alright I think I follow that line; I have my doubts, but they’re cool compared to the following.
–It is at the following two lines that I get lost: a long climate run where the statistical properties are not sensitive to the initial state? Perhaps you need to explain this better?
–boundary value problem, …radiative effects due to changes in greenhouse gases?
There is a leap of logic where ensembles play together correctly, followed by a larger leap of logic to a suddenly valid model with statistics immune to initial states; followed by an abyss to where climate models are GHG specific boundary issues…
Now, I am at a loss to understand the details of how your scenario is different from the CAGW CO2 driven GCMs which are the topic of the above research submission. Can you elucidate?

July 28, 2013 11:46 am

Lots o’ URLs. I hope it gets through.
With regard to accurate numerical integration of simple ordinary differential equation (ODE) systems that exhibit complex chaotic response see the following, and references therein.
Shijun Liao, “On the reliability of computed chaotic solutions of non-linear differential equations,” Tellus (2009), 61A, 550–564,
DOI: 10.1111/j.1600-0870.2009.00402.x
Benjamin Kehlet And Anders Logg, “Long-Time Computability Of The Lorenz System”
http://home.simula.no/~logg/pub/papers/KehletLogg2010a.pdf
Benjamin Kehlet And Anders Logg, “Quantifying the computability of the Lorenz system,”
http://arxiv.org/pdf/1306.2782.pdf
These papers focus on the original Lorenz equation system of 1963. The papers are related to discussions that started with the following:
Teixeira, J., Reynolds, C. A. and Judd, K. 2007. Time step sensitivity of nonlinear atmospheric models: numerical convergence, truncation error growth, and ensemble design. J. Atmos. Sci. 64, 175–189.
http://journals.ametsoc.org/doi/pdf/10.1175/JAS3824.1
Yao, L. S. and Hughes, D. 2008b. Comments on ‘Time step sensitivity of nonlinear atmospheric models: numerical convergence, truncation error growth, and ensemble design’. J. Atmos. Sci. 65, 681–682.
http://journals.ametsoc.org/doi/pdf/10.1175/2007JAS2495.1
Teixeira, J., Reynolds, C. A. and Judd, K. 2008. Reply to Yao and Hughes’ comments. J. Atmos. Sci. 65, 683–684.
http://journals.ametsoc.org/doi/pdf/10.1175/2007JAS2523.1
Yao, L. S. and Hughes, D. 2008a. Comment on ‘Computational periodicity as observed in a simple system’ By Edward N. Lorenz (2006). Tellus 60A, 803–805.
http://eaps4.mit.edu/research/Lorenz/Comp_periodicity_06.pdf and http://www.tellusa.net/index.php/tellusa/article/download/15298/17128
E. N. Lorenz, Reply to comment by L.-S. Yao and D. Hughes, Tellus A, 60 (2008), pp. 806–807.
eaps4.mit.edu/research/Lorenz/Reply2008Tellus.pdf
Kehlet and Logg have determined the range of time over which the Lorenz system is correctly integrated as a function of (1) the order of the discrete approximation, and (2) the number of digits that are used to carry out the arithmetic. The former issue has not been noted in the discussions here.
The Lorenz system is a system of three simple ODEs and these show complex behavior in the calculated solutions. The original usage of complex within the context of chaos: the response, not the model. Some of the usual physical realizations of chaotic behavior are based on simple mechanical systems.
Calculations of numerical solutions of ODEs and PDEs that cannot exhibit chaotic response, but do so when the solutions of discrete approximations are not accurately resolved have been discussed in the following.
Cloutman, L. D. 1996. A note on the stability and accuracy of finite difference approximations to differential equations. Report No. UCRL-ID-125549, Lawrence Livermore National Laboratory, Livermore, CA.
Cloutman, L. D. 1998. Chaos and instabilities in finite difference approximations to nonlinear differential equations. Report No. UCRL-ID-131333, Lawrence Livermore National Laboratory, Livermore, CA.
http://www.osti.gov/bridge/servlets/purl/292334-itDGmo/webviewable/292334.pdf
See also: Julio Cesar Bastos de Figueiredo, Luis Diambra and Coraci Pereira Malta, “Convergence Criterium of Numerical Chaotic Solutions Based on Statistical Measures,”
http://www.scirp.org/Journal/PaperDownload.aspx?paperID=4505 DOI: 10.4236/am.2011.24055
Nick Stokes says:
July 27, 2013 at 12:26 pm
” It’s constrained, basically by speed of sound. You have to resolve acoustics, so a horizontal mesh width can’t be (much) less than the time it takes for sound to cross in a timestep (Courant condition). For 10 day forecasting you can refine, but have to reduce timestep in proportion. ”
NWP and GCM applications do not time-accurately resolve pressure propagation. A few simple calculations will show that this requirement would lead to an intractable computation. Not the least problem would be that the largest speed of sound coupled with the smallest grid dimension would dictate a time-step size restriction for the entire grid system.
The pressure-related derivatives are handled in an implicit manner. I seem to recall that about 20 minutes is the order of the time-step size for GCM applications.
All corrections to incorrectos will be appreciated.
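For readers who want the back-of-envelope arithmetic behind the acoustic constraint mentioned above, here is a tiny C sketch (the grid spacings, the sound speed and a Courant number of 1 are illustrative round numbers, not values from any particular model):

#include <stdio.h>

/* Acoustic CFL arithmetic: an explicit scheme that resolves sound waves
   needs roughly dt <= C * dx / c_sound, with Courant number C of order 1. */
int main(void)
{
    const double c_sound  = 340.0;                 /* m/s, near-surface air */
    const double courant  = 1.0;                   /* nominal stability limit */
    const double dx_km[]  = { 100.0, 25.0, 1.0 };  /* sample grid spacings */

    for (int i = 0; i < 3; ++i) {
        double dt = courant * dx_km[i] * 1000.0 / c_sound;
        printf("dx = %6.1f km  ->  acoustic dt limit ~ %7.1f s\n", dx_km[i], dt);
    }
    printf("Implicit treatment of the pressure terms is what lets models use\n"
           "time steps of order 10-20 minutes instead.\n");
    return 0;
}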

Carbon500
July 28, 2013 11:53 am

Dirk H: Thank you for your comments in reply to my questions about the graph. All is now clear.

ikh
July 28, 2013 12:01 pm

steverichards1984
Nice comments and links. I think that you have very elegantly reinforced my point that floating point arithmetic should be avoided like the plague unless absolutely necessary, and should only be used by those who fully understand the pitfalls.
/ikh

Brian H
July 28, 2013 12:22 pm

Mark Negovan says:
July 28, 2013 at 6:03 am
This is one of the most important posts here at WUWT on the subject of using GCM crystal balls and is one of the first ones that I have seen here that shows the true computational insanity of all of the climate models.

errors and uncertainty at any time step is [are] unbounded and the error bound will increase to the extremes of where the climate has been in the past in a very short time.

Concisely! Thanks for this, Mark.
The tendency or practice of the AGW cult to hand-wave away uncertainty as fuzz that can be averaged out of existence and consideration is the fundamental falsehood and sham on which their edifice rests. In fact, the entire range of past variation is equally likely under their assumptions and procedures. Which means they have precisely nothing to say, and must be studiously and assiduously disregarded and excluded from all policy decisions.

Talent-KeyHole Mole For 30+ Years
July 28, 2013 12:40 pm

To be honest, I find it laughable that a paper such as this is considered publishable in any science journal. Really: floating point representation in byte codes as opposed to long integer representations, and the endianness (most significant byte first or least significant byte first, in 8-bit, 16-bit, 32-bit, 64-bit and 128-bit word lengths) of the CPU + I/O + memory + operating system, have been known since the 1970s. Do recall the 1970s CPU wars between Intel (little endian), IBM (big endian) and DEC (middle- and mixed-endian).
But then again, the ‘journal’ is a Meteorology journal and not a scientific-technical journal.

July 28, 2013 12:45 pm

I read all the posts until jorgekafkazar at (July 28, 2013 at 12:00 am) and finally got some insight. My understanding is that the climate emulators are run multiple times to generate a set of results from which a statistical average and standard deviation envelope are generated. As DirkH pointed out, it could also be called simulation.
Years ago I was shocked when I read references suggested on RealClimate wherein climate modelers would effuse over some physical phenomenon like ENSO or monsoonal flow when it emerged spontaneously in the model. Shiny! Such emergent phenomena have been used for countless model "validations" as well as "predictions" of the future of those phenomena. But it is 100% crap.
The climate modelers recognize that their emulations are highly dependent on parameter settings, so they choose (mostly cherry-pick) a few parameters and ranges to demonstrate that the statistics don't change very much. But AFAICS there is never any systematic validation of model parameterizations, and it may be impossible, since the climate measurements used for hindcasting are quite poor in many cases.
I have always believed that the demonstrated non-skill of modern ENSO prediction invalidates the climate models. Need to eliminate the inconvenient global MWP? Just "emulate ENSO" and voila: Greenland and Europe stay mostly warm, but persistent La Nina compensates by cooling the rest of the globe for hundreds of years. But they are not emulating ENSO; they are emulating persistent La Nina. In a narrow sense it can be validated, but only for an artificial skill in hindcasting; thus it can't be used for predictions. ENSO is just one of the factors (albeit an important one) without which global atmospheric temperature cannot be understood.

July 28, 2013 12:47 pm

“Gunga Din says: July 28, 2013 at 10:57 am

Why bet trillions of dollars, national economies, and countless lives on such uncertainty?”

You are asking for a dreary geek history you know.
— When computers first hit, they were fielded by large companies with teams of techs, engineers and programmers. This actually built a huge trust in computing; distrusted, sure, but not reviled. Programming then was a dark, secretive process that took place in card punch rooms. Dark and secretive because almost no one actually knew a real programmer.
— Then personal computers hit the world. When IBM’s PCs and the clones came out, most booted right into BASIC: IBM into IBM BASIC, clones into Microsoft’s version, which was nearly identical. You required a floppy drive with an OS-formatted disk, or a large cable-linked box holding a 10MB hard drive.
— Almost everybody took it upon themselves, when faced with these monstrosities, to learn programming… sort of. Others who had taken college courses in systems, coding and design for mainframes were ‘discovered’ not working in computers, brought in and sat down in front of the monsters. Peter Norton’s DOS book was our friend for finding and working out issues, like HP’s plotter install with its hard-coded config.sys and autoexec.bat happily playing havoc with the computer, or Lotus’s version 1 & 1A installs assuming that the computer’s only use was for Lotus.
— The folks who solved these problems quickly and did ad-hoc coding found themselves tailoring PCs to fit the specific business functions of the user. I’ll also mention D-Base, which introduced thousands to databases, and which I hated extensively and deleted off every computer under my management. Why? D-Base had no procedure for recovering from a hard drive failure or from a database growing too large. The normal procedure of installing the last backup failed because the backups would not restore to different DOS or D-Base versions. After many hours installing different versions of DOS and D-Base, I considered throwing one PC into the Gulf of Mexico, which was a good mile from where the computer sat.
The next step is the final leap, where computing moved to everyone’s desktops: email became common, spreadsheets and word processing programs became de facto business instruments. People became adjusted to computers serving their needs, accurately and almost immediately. By and large, the great majority of users are ignorant of the mystical details internal and relevant to computing.
So it became habitual that, when faced with a computer topic barely understood, nodding agreement and playing along was almost required. The more intense the mathematics or code requirements, the more glazed and accepting people became. The sci-fi shows went far to make this attitude common, as their models could do everything almost immediately.
When I was first sat down in front of a PC, major financial, commodity and productivity reports were produced every accounting period (28 days for us). Flash or rough reports were forced out every week by sheer dint of labor. When I retired, those same reports, depending on the background sources, could be had hourly.
What’s more is that people grew to trust those immediate gratification reports, almost absolutely.
Enter models. In many people’s minds, models are amazing sci-fi things of perfection, so all they care about is the final result, not the details or inner workings.
As a perspective point, these people become the faithful, much like devoted trekkies, Society for Creative Anachronism re-enactors, D&D gamers and so on. The difference is that many of the sci-fi crowd at least understand there is a ‘real’ world separate from the ‘fantasy’ world. Think of people wearing hip boots and super suits and waving hockey sticks, and you wonder about their grasp of what is real.
Over the past decade, with so many online companies delivering goods within days, this immediate-gratification syndrome has separated many people further from reality, especially when their ‘favs’, deeply entrenched in CO2 blood money, refuse to acknowledge their debunked or opaque science, GCM failures, the climate’s failure to adhere, bad math, bad code, and bad research, and keep declaring CAGW alarmism valid in paper after paper.
That is my perspective anyway, from the parts of my career spent watching people’s eyes as I tried to explain their equipment and software and to put their desired programs into perspective.
“I want you to show me how to put this into a spreadsheet” (Lotus 1A, no less), explained the Manager of Safety, waving a large book at me.
“Say what?
Do you mean put pictures in a spreadsheet? That’s a lot of pictures, and they’re memory hogs,” I responded.
“No!” he says. “I want you to show me how to put this entire book into the software so I can pull up all relevant rules and procedures immediately.”
“Um,” I stammered. “Spreadsheets are a bad fit for that purpose. It would be better for you to have a real program designed for you.”
“Great!” he bellowed. “I want you to start on it immediately.”
“Ah, I can’t do that.”
“Jerry has me booked solid for months. You’ll need to talk to him.”
Before the Safety Manager got his breath back, I exited and headed straight to the office of Jerry, my boss, who had found me delivering mail and brought me inside to A/C, water fountains and other pleasures. Some topics needed preparation and quick talking before the Theo subdivision’s mail went undelivered.
Others should and will have perspectives on the social question because of their unique education and experiences. Pitch in! Or should this be under a whole different thread?

peterg
July 28, 2013 1:28 pm

Roundoff error can be modelled as a form of noise. By postulating a large sensitivity to CO2, the resulting model becomes unstable and greatly amplifies this noise. A stable model, one that approximates thermodynamic reality, would tend to damp and minimize this roundoff noise.
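A minimal C sketch (my illustration, not from any model) of peterg’s point: the same tiny per-step ‘rounding noise’ stays bounded in a damped iteration but grows without limit in an amplifying one. The gain values are arbitrary.

#include <stdio.h>

/* Illustration only: x[n+1] = gain * x[n] + noise.
   gain < 1 stands for a damped (stable) system, gain > 1 for an amplifying one. */
static double run(double gain, int steps) {
    double x = 0.0;
    const double noise = 1e-12;   /* stand-in for per-step rounding error */
    for (int i = 0; i < steps; i++)
        x = gain * x + noise;
    return x;
}

int main(void) {
    printf("damped    (gain 0.9): %.3e\n", run(0.9, 1000)); /* settles near noise/(1-gain) */
    printf("amplified (gain 1.1): %.3e\n", run(1.1, 1000)); /* grows roughly as 1.1^n */
    return 0;
}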

DirkH
July 28, 2013 1:35 pm

ATheoK says:
July 28, 2013 at 11:35 am
“–It is the following two lines that I get lost; a long climate run where the statistical properties are not sensitive to the initial state? Perhaps you need to explain this line better?
–boundary value problem, …radiative effects due to changes in greenhouse gases?”
He and the other climate scientist programmers ignore that the very purpose of simulating a chaotic system is to find out about its strange attractors. Otherwise why simulate all the chaos at all? Why not go with Callendar’s simple 1938 model, which outperforms modern GCMs (see Climate Audit)?
And when you have strange attractors – that can flip the system into a totally different state – why in the world do you hope that the outcomes of several runs will be normally or equally distributed so that averaging would make sense?
There is no logic in that.

DirkH
July 28, 2013 1:39 pm

peterg says:
July 28, 2013 at 1:28 pm
“Roundoff error can be modelled as a form of noise. By postulating a large sensitivity to co2, the resulting model is unstable, and greatly amplifies this noise. A stable model, one that approximates thermodynamic reality, would tend to decrease and minimize this roundoff noise.”
We know that amplification happens; otherwise weather could not be chaotic. So they can’t avoid that problem. Oh, and we know that the atmosphere is unstable. Otherwise there would be no weather at all.

July 28, 2013 1:44 pm

DirkH has said a lot on this thread that I am not qualified to comment on.
But I note he has beaten all-comers and is unassailed now so he seems to know his stuff.
And I can comprehend his comment at July 28, 2013 10:18 am.
It is worth everyone noting it; he is spot on right here:

“The more important question is how do you validate a model of a chaotic system? ”
-get as much data of the real system as possible for a time interval.
-initialize the model with a state that is as close to the state of the real system at the start of your reference time interval as possible.
-run the model.
-compare with what the real system did.
And that’s what climate scientists obviously never did. They do their hindcasting but they initialize with a random state. Now, is that incompetence or malice or both?

My answer: It’s incompetence.
They aren’t smart enough to do this maliciously. If they were in a conspiracy it wouldn’t be so obvious they are failing at modelling.

July 28, 2013 1:44 pm

DirkH says:
July 27, 2013 at 11:15 am
a system is chaotic IFF its simulation on a finite resolution iterative model develops an error that grows beyond any constant bound over time?
=========
This divergence is at the heart of chaotic systems, and is routinely ignored by climate modellers that insist the “noise” will converge to zero over time, similar to the way that heads and tails on a coin balance out in the long term.
The problem is that a great deal of mathematical theory assumes that numbers are infinitely precise. So in theory the errors don’t accumulate, or they accumulate so slowly that the effects are not significant.
However computers do not store numbers!! They store binary approximations of numbers. Similar to the fraction 1/3, which results in an infinitely long decimal fraction, there are an infinite number of real numbers that cannot be stored exactly on computers.
When these binary approximations are used in models of chaotic systems, or in just about any non-linear problem, the errors quickly grow larger than the answer. You get ridiculous results. We have numerical techniques to minimize these problems, but it remains a huge problem in computer science.
If the “noise” in computer models of chaotic systems did actually converge to zero over time, then we could make all sorts of reliable long term predictions about both climate, and the stock market. Climate modellers would not need government grants.
They could simply apply climate forecasting techniques to the stock markets and use the profits generated to pay for all the super computers they could want. Heck, there would be no need for carbon taxes. Climate modellers could use the winnings from their stock market forecasts to pay for the entire cost of global warming.
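A minimal C sketch (my illustration, not from the paper) of the binary-approximation point: 0.1 has no exact binary representation, so repeatedly adding it drifts away from the exact answer.

#include <stdio.h>

int main(void) {
    /* 0.1 cannot be stored exactly in binary floating point, so each
       addition carries a tiny rounding error that accumulates. */
    double sum = 0.0;
    for (long i = 0; i < 10000000L; i++)
        sum += 0.1;
    printf("expected 1000000.0, got %.10f\n", sum); /* not exactly one million */
    return 0;
}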

AndyG55
July 28, 2013 1:51 pm

“Oh, and we know that the atmosphere is unstable. Otherwise there would be no weather at all.”
Actually, weather is the atmosphere stabilising itself.
The atmosphere is inherently stable; it’s the Earth that keeps rotating.

Nick Stokes
July 28, 2013 1:52 pm

Dan Hughes says: July 28, 2013 at 11:46 am
“NWP and GCM applications do not time-accurately resolve pressure propagation. A few simple calculations will show that this requirement would lead to an intractable computation. Not the least problem would be that the largest speed of sound coupled with the smallest grid dimension would dictate a time-step size restriction for the entire grid system.”

They have to. It’s how force is propagated. You either have to resolve the stress waves or solve a Poisson equation.
The Courant limit for a 100 km grid at speed of sound 334 m/s is 333 sec, or 5.5 min. They seem to be able to stretch it a bit.
Mark Negovan says: July 28, 2013 at 6:03 am
“If you are designing an airplane wing using CFD, you absolutely have to determine the computational uncertainty to properly validate the engineering model.”

It’s a good idea to determine computational uncertainty, but it doesn’t validate anything. Any CFD program will behave like this numerical weather forecasting program, when used to model a transient flow from initial conditions. Errors will grow. The chaotic nature of flow is inevitably in practice recognised with some form of turbulence modelling, explicitly giving up on the notion that you can get a predictive solution to floating point accuracy.
I bet you can’t get two different computers to keep a vortex shedding sequence in exactly the same phase for 100 cycles. But they will still give a proper vortex street, with frequency and spacing.
LdB says: July 28, 2013 at 7:47 am
“Nick that is all blatantly wrong you may not know how to deal with the problem but many scientists do so please don’t lump us all in with your limited science skill and abilities.”

Unlike most of the armchair experts here, I actually write CFD programs.

Nick Stokes
July 28, 2013 2:06 pm

“The Courant limit for a 100 km grid at speed of sound 334 m/s is 333 sec”
OK, 300 sec.
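For anyone following the arithmetic, a trivial C sketch of the back-of-envelope Courant estimate being discussed (grid spacing and sound speed are the figures Nick quotes; the calculation is just dt = dx / c):

#include <stdio.h>

int main(void) {
    /* Rough CFL limit for an explicit acoustic step: dt <= dx / c. */
    const double dx = 100000.0; /* 100 km grid spacing, in metres */
    const double c  = 334.0;    /* speed of sound used above, m/s */
    const double dt = dx / c;
    printf("CFL-limited timestep: about %.0f s (%.1f min)\n", dt, dt / 60.0);
    return 0;
}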

Blarney
July 28, 2013 2:12 pm

ferd berple says:
July 28, 2013 at 1:44 pm
“If the “noise” in computer models of chaotic systems did actually converge to zero over time, then we could make all sorts of reliable long term predictions about both climate, and the stock market.”
No, you’re confusing simulation with forecast. Try this mental experiment: you have a machine that allows you to create a perfect duplicate, 1:1 size, of the whole Earth in its current state. This duplicate Earth is of course a perfect “simulation” of every process at work on the real Earth, as everything works there exactly the same as it would on the original planet. Now let a few years pass and compare the weather on a specific day and place on the two planets: you’ll find that the weather on the two planets has diverged to the point that you can’t use what’s happening on Earth B to say what’s happening on Earth A. This is so even if the “simulation” you’re using is absolutely perfect, in that it’s not even a simulation at all (it’s a perfect copy of the real thing). Tiny differences in noise coming from space and quantum fluctuations have made the two meteorological systems completely unrelated in a short time.
So it’s not in the computer models the problem, it’s in the nature of chaotic systems. Rounding errors in numerical simulations are just a source of a small background noise: they influence the state of the system, but (if the system is well modeled and the noise is kept low, and depending on the timescales we’re trying to study) they may not influence the general pattern of its evolution.
On the other hand, claiming that the noise introduced into the simulations by rounding errors is driving their average towards a warmer world is equivalent (if the errors are small and random) to saying that a noise of equivalent magnitude and randomness is keeping the climate of the Earth more stable than predicted. Which is clearly nonsense.

July 28, 2013 2:17 pm

The problem with error divergence is not limited to chaotic systems. Even the most basic linear models deliver very inaccurate results on a computer using the techniques taught in high school and university math classes.
Do you remember solving systems of linear equations in high school? The teacher drew a 3×3 matrix on the board, with an equals sign pointing to a column of three numbers on the right. The solution was to add, subtract, multiply and divide the rows until you got all 1’s on the diagonal. The column on the right would then give you your solution for X, Y, Z.
What the teacher usually didn’t tell you was that these problems were almost always contrived so that the answers were integers or well-behaved decimal fractions. What they also didn’t tell you is that when you program a computer to solve the same problem using the same techniques, the round-off errors quickly make the answers useless except for very well-behaved problems.
Even simple linear models on computers deliver huge errors using the standard textbook techniques. To reduce the error on computers, we apply numerical techniques such as back iteration. For example, we fudge the results slightly to see if that reduces or increases the error, and if it reduces the error we continue to fudge the results in the same direction; otherwise we reverse the sign of the fudge and try again. We continue this process until the error is acceptably small.
So now you know. Even on linear models we need to “fudge” the answer on computers to try to minimize the round-off errors. Try applying this technique to a climate model and you end up with a whole lot of fudge and not a lot of answer.
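The textbook name for this kind of “fudging” is iterative refinement: solve in low precision, measure the residual in higher precision, and correct. A rough C sketch under made-up numbers (the 2×2 matrix and the float/double split are mine, purely for illustration):

#include <stdio.h>

/* Sketch of iterative refinement: solve a small system in float precision,
   then correct the answer using residuals computed in double precision.
   The matrix and right-hand side are made up for illustration. */
static void solve2x2f(float a[2][2], float b[2], float x[2]) {
    float det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
    x[0] = ( a[1][1] * b[0] - a[0][1] * b[1]) / det;
    x[1] = (-a[1][0] * b[0] + a[0][0] * b[1]) / det;
}

int main(void) {
    /* Mildly ill-conditioned: the two rows are nearly parallel. Exact solution is (1, 1). */
    const double A[2][2] = {{1.0, 1.0}, {1.0, 1.0001}};
    const double b[2]    = {2.0, 2.0001};

    float Af[2][2] = {{(float)A[0][0], (float)A[0][1]},
                      {(float)A[1][0], (float)A[1][1]}};
    float bf[2] = {(float)b[0], (float)b[1]};
    float xf[2];
    solve2x2f(Af, bf, xf);                     /* low-precision first guess */

    double x[2] = {xf[0], xf[1]};
    printf("float-only answer  : x = (%.7f, %.7f)\n", x[0], x[1]);
    for (int iter = 0; iter < 3; iter++) {
        double r0 = b[0] - (A[0][0] * x[0] + A[0][1] * x[1]);   /* residual in double */
        double r1 = b[1] - (A[1][0] * x[0] + A[1][1] * x[1]);
        float rf[2] = {(float)r0, (float)r1}, dx[2];
        solve2x2f(Af, rf, dx);                 /* correction from the residual */
        x[0] += dx[0];
        x[1] += dx[1];
        printf("after correction %d : x = (%.7f, %.7f)\n", iter + 1, x[0], x[1]);
    }
    return 0;
}

Each pass shrinks the error of the low-precision solve; the limiting accuracy comes from the precision in which the residual is computed.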

Compu Gator
July 28, 2013 2:18 pm

http://www.cs.berkeley.edu/~wkahan/ieee754status/754story.html:

An Interview with the Old Man of Floating-Point:
Reminiscences elicited from William Kahan [U. Cal. (Berkeley)]
by Charles Severance [U. Michigan School of Information] 20 Feb. 1998
This interview underlies an abbreviated version to appear in the March 1998 issue of IEEE Computer.
Introduction
If you were a programmer of floating-point computations on different computers in the 1960’s and 1970’s, you had to cope with a wide variety of floating-point hardware. Each line of computers supported its own range and precision for its floating point numbers, and rounded off arithmetic operations in its own peculiar way. While these differences posed annoying problems, a more challenging problem arose from perplexities that a particular arithmetic could throw up. Some of one fast computer’s numbers behaved as non-zeros during comparison and addition but as zeros during multiplication and division; before a variable could be used safely as a divisor it had to be multiplied by 1.0 and then compared with zero. [….] On another computer, multiplying a number by 1.0 could lop off its last 4 bits.

I suspect that his “non-zeros” example refers to the CDC 6600 & 7600, which used 1’s-complement arithmetic, giving it signed zeros (i.e.: +0 distinct from -0). The “4 bits” example probably refers to the IBM S/360, which has a “hexadecimal exponent” (i.e.: binary E in exponent-part representing 16^E, thus normalizing the mantissa no more granularly than 4-bit nybbles).
This interview is more focused on the history of the IEEE 754 standard (née “P754”: possibly more useful as a search term) than on the foibles of binary floating-point arithmetic. The coäuthors of P754 (in addition to Kahan) were Jerome Coonen and Harold Stone. George Taylor (a key person mentioned in the interview) had seen, with his own eyes, Kahan’s collection of numerous electronic calculators that erred in one repeatable way or another, stashed in a desk drawer.

July 28, 2013 2:23 pm

Blarney says:
July 28, 2013 at 2:12 pm
Try this mental experiment: you have a machine that allows you to create a perfect duplicate, 1:1 size, of the whole Earth in its current state.
==========
I agree with you completely. Even two identical Earths will deliver two different results. I have written extensively on this, and the reason has nothing to do with round-off errors. There are a (near) infinite number of possible futures. We will arrive at one of them; which one is not at all certain. The odds of two identical Earths arriving at the same future are (near) infinitely small.
The question I was addressing was the computational limitation of solving even very trivial models using computers.

TomRude
July 28, 2013 2:41 pm

Bottom line: weather is conditioned by what’s happening in the first 1500 m, well below the 500 hPa level. It is not only a computer problem but also a fundamental understanding problem. Hence the fallacy, in so many climate papers, of using the 500 hPa level to try to explain surface temperature series while disconnecting them from the synoptic reality that created them.
===
““We address the tolerance question using the 500-hPa geopotential height spread for medium range forecasts and the machine ensemble spread for seasonal climate simulations.”

“The [hardware & software] system dependency, which is the standard deviation of the 500-hPa geopotential height [areas of high & low pressure] averaged over the globe, increases with time.”

July 28, 2013 2:58 pm

You are walking down the road and come to a fork. You decide to toss a coin: heads you go right, tails you go left. You toss the coin and go right.
On an identical Earth, your identical copy comes to an identical fork. They toss an identical coin. Will they go left or right?
This is an unsolved question in physics. If your identical self will always go right, then much of our Western culture and belief is nonsense: the future was decided at the moment of creation of the universe, and nothing we can do or say will affect it. Our future actions, and all future actions, are already fully accounted for in the past. In which case, the notion that people are personally responsible for any of their actions is nonsense; your actions were determined at the point of creation and there is nothing you can say or do to alter this.
If, however, your identical self does sometimes go left on the identical Earth, then our actions are not fully determined at the point of creation. Creation may skew the odds that we go right, but we still might go left. This is much closer to how our belief systems operate, but it opens up a complete can of worms for predicting the future.
Suddenly the future is not deterministic, it is probabilistic. The future doesn’t exist as a point in time; it exists as a probability function, with some futures more likely than others, but none written in stone.
In which case, predicting what will happen in the future is physically impossible, computer models and mathematics notwithstanding. The very best you can hope for is to calculate the odds of something happening, and while the odds may favor a turn to the right, they don’t prohibit a turn to the left. The same goes for temperatures.

July 28, 2013 3:03 pm

http://wattsupwiththat.com/2013/07/27/another-uncertainty-for-climate-models-different-results-on-different-computers-using-the-same-code/#comment-1373526
Nick, if the method requires that the Courant sound-speed criterion be met, it can’t be “stretched a bit”. The grid is not 100 x 100 x 100 either. Your basic factor of 4 also isn’t “a bit”. Finally, accuracy generally requires a step size much smaller than the stability limit be used.

Mark T
July 28, 2013 3:13 pm

Parallel processors such as GPUs, e.g. Nvidia’s CUDA devices, also execute operations across threads/warps in a non-deterministic order, so parallel floating-point reductions need not reproduce bit-for-bit.
Mark

cynical_scientist
July 28, 2013 3:43 pm

cynical_scientist says:
“The more important question is how do you validate a model of a chaotic system? ”

DirkH says:
-get as much data of the real system as possible for a time interval.
-initialize the model with a state that is as close to the state of the real system at the start of your reference time interval as possible.
-run the model.
-compare with what the real system did.
And that’s what climate scientists obviously never did. They do their hindcasting but they initialize with a random state. Now, is that incompetence or malice or both?

For a chaotic system even a perfect model would fail your proposed test because there is a degree of inherent variability involved. You simply can’t expect accurate predictions of what will happen in this case.
Varying the initial conditions is reasonable as it allows the model to exhibit its internal variability so that you can find the shape of its attractors. The initial conditions should match measurement though, and the variations should be small – within measurement error.
However climate scientists seldom seek to find the shape of the attractor. They are focussed on temperature to the exclusion of all else and throw away most of their data to generate graphs of global average temperature. They then average over several runs and use the variability to compute error bounds for the prediction. The models can be falsified if reality strays out of those error bounds, but it is a very weak test and requires you to wait many years for data to possibly falsify it.
I suspect their models would fail a strong test, which would look at whether they exhibit the identified variability of the real climate. Do the models have el nino and la nina states? Do they exhibit the various ocean oscillations that have been identified? If they don’t do this they are wrong. You don’t have to wait many years to reach that conclusion.

Blarney
July 28, 2013 4:12 pm

ferd berple says:
July 28, 2013 at 2:58 pm
“Suddenly the future is not deterministic, it is probabilistic. The future doesn’t exist as a point in time, it exists as a probability function, with some futures more likely than others, but none written in stone… The very best you can hope for is to calculate the odds of something happening, and while the odds may favor a turn to the right, they don’t prohibit a turn to the left. the same for temperatures.”
Think of a cloud of gas. Every molecule in it is drifting and bouncing in random directions. Even if you could make an exact copy of the cloud with all its molecules in the exact place and speed at an instant T0, soon the positions of molecules in the two clouds would start to diverge because of quantum fluctuations and the most microscopic noise introduced from the environment.
Does that mean that you can’t simulate the behaviour of that gas cloud? Not at all. Your model loses very quickly the ability to predict where a single molecule will be, but it can simulate the shape of the cloud very well on some much longer time scale because its general behaviour has nothing to do with the particular trajectories of its molecules.
So it’s a matter of spatial and temporal scale: the current weather models can’t tell me if a small breeze will pass through my windows in the next minute, but they can predict the general weather for the city in two days; they will fail at telling you what weather you’ll have in one year, but (as climate models) they can show some skill in predicting whether the average temperature for the whole planet will go up or down in ten years. If the climate is a chaotic system then (leaving aside catastrophic events of all kinds) it is possible that the same models will eventually fail completely on some very long time scale – tens of thousands of years, for example. But the rate of divergence from the modelled system is completely different from that of weather models, as the level of detail at which they have to exhibit skill is completely different.

jorgekafkazar
July 28, 2013 5:13 pm

ATheoK says: “…I’ll also mention D-Base which introduced thousands to databases, which I also hated extensively and deleted off of every computer under my management….”
I’m rather proud of having retained my luddite-like total ignorance of DBase. I never learned it, getting by with less profound (and less expensive) programs. This saved me hundreds of hours of learning curve, which I’ve doubtlessly squandered in other places, such as Lotus Agenda…

Nick Stokes
July 28, 2013 5:30 pm

Dan Hughes says: July 28, 2013 at 3:03 pm
“Nick, if the method requires that the Courant sound-speed criterion be met, it can’t be “stretched a bit”. The grid is not 100 x 100 x 100 either. Your basic factor of 4 also isn’t “a bit”. Finally, accuracy generally requires a step size much smaller than the stability limit be used.”

The Courant “limit” gives a guide for a typical space dimension. It can be extended by higher-order interpolation, for example. It gives an instability mode for the worst case, the shortest resolvable wave (two grid points per wavelength, i.e. the Nyquist limit), so there’s a factor of 2 already. If this mode is damped, by accident or design, you can extend further. And so on.
The vertical dimension does not have a momentum equation; hydrostatic pressure is assumed. No wave propagation required.
The Courant condition is a stability condition, not an accuracy one. It controls the emergence of unstable modes. You need to resolve acoustic waves to prevent them going haywire, but they aren’t otherwise an important part of the solution.

ikh
July 28, 2013 5:43 pm

I have just been looking at the latest snapshot of NASA GISS ModelE source code and it is a real eye opener.
In the main model directory there are over 333,000 lines of code. That is huge.
I did a search using ‘grep’ and found over 6,000 uses of the keyword real, which is what Fortran uses to declare floating point variables. Each line that declares real typically declares multiple variables, usually 3 to 6.
There are a few uses in function declarations and some constants, but at a rough guess they are less than 10 percent. My rough guess is that there are 20 to 30 thousand floating point variables in the GISS ModelE GCM, and some of those are arrays or matrices.
There is no way on earth that anyone can manage this level of complexity with regard to rounding errors. I would suggest that nothing that is output from GISS ModelE has any value.
/ikh

jimmi_the_dalek
July 28, 2013 6:17 pm

ikh: “In the main model directory there are over 333,000 lines of code. That is huge.”
I am not a climate scientist but work in another area of physical science which requires large scale computing. 333000 lines of code is not huge. I know of and have used several codes which contain millions (about 10 million in the largest case) of lines of mixed Fortran and C++. They are full of floating point arithmetic – it cannot be avoided. The interesting thing is that these codes, written by different groups at different times, can, when applied to a given problem, be capable of giving the same results to virtually full machine precision i.e. about 15 significant figures. You do not have to be afraid of FP arithmetic and rounding errors. When these codes do give different answers, it is usually not the code, or the CPU, but the compiler – compilers make far more mistakes than people realise and quite often ‘mis-optimise’ a code in such a way that they change the results. We always check for problems of this nature – I hope the climate people do as well.

LdB
July 28, 2013 7:20 pm

What is funny is that we are getting a three-way split here, and it is based on a lack of understanding.
We have a group who believe that chaos means you can’t track and predict anything, because of extremely sensitive dependence on initial conditions, the so-called butterfly effect.
We have a group who want to plaster over and ignore the problem, and who seem to accept that the models will always be in error; this seems to include the climate scientists, Nick Stokes and the climate modelers.
Then you have a third group, from different backgrounds, in which I include myself, who know that both of those arguments are garbage because we routinely deal with this in science.
If you consider a Patriot missile, when it launches it only approximately knows where the target is. Worse, as it closes on the target, the target will most likely deploy countermeasures, including moving as randomly as it can.
You will note that when loss of prediction occurs in a Patriot missile we call it what it is …. a software bug:
http://sydney.edu.au/engineering/it/~alum/patriot_bug.html
The fact that the climate models and climate model creators are not even sure whether they have a lock on the thing they are trying to model tells you they have not a clue what they are doing.
In effect the climate models are trying to launch a Patriot whose targeting is set only at launch. The military used that approach in earlier world wars against planes, and we call it “flak”; it seems climate science hasn’t progressed far from there.
The problem, to me, seems to stem from the fact that most climate scientists don’t want to put error feedback and error analysis into the models, and simply accept that they continue to deviate from reality; Nick Stokes argues that this is unavoidable.
The stability of the Earth’s climate over 4 billion years tells you that the chaotic element in climate is not unmanageable, in the same way that a missile’s random movements have limits and so a Patriot can track and destroy it. The problem seems to be the climate scientists’ unwillingness to learn from hard science.

MarkG
July 28, 2013 7:20 pm

“Intel chips have an instruction pipeline that they try to keep full and not allow it to stall.”
Out-of-order execution produces the same results as in-order execution, because it has to wait for the previous operation on the data to complete before it can continue. If the compiler tells an Intel CPU to calculate a*b, then c*b, then add them, it does that. It may choose to calculate c*b before it calculates a*b, but that’s irrelevant, because the results will be the same either way.
It won’t convert those instructions into calculating a+c and then multiplying that sum by b, which would change the result. But the compiler may well do so.
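A two-line C illustration of why that re-association matters (my example, with made-up magnitudes): floating-point addition is not associative, so regrouping the same sum changes the result.

#include <stdio.h>

int main(void) {
    /* Floating-point addition is not associative: regrouping changes the bits. */
    double a = 1.0e17, b = -1.0e17, c = 1.0;
    printf("(a + b) + c = %.1f\n", (a + b) + c); /* 1.0 */
    printf("a + (b + c) = %.1f\n", a + (b + c)); /* 0.0: c is lost against the huge b */
    return 0;
}

That is exactly the kind of regrouping an optimising compiler (or a parallel reduction split across cores) is allowed to do under ‘fast math’ settings, and it is one reason the same source code can give different trajectories on different systems.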

July 28, 2013 7:26 pm

My experience with compilers is very limited. The ones I used would only “catch” syntax errors where the code was written incorrectly, but didn’t care about errors within valid syntax. (I.e. if I left the “greater than” sign off a “blockquote” here, a compiler might catch it, but it wouldn’t care if within the blockquote I had said 2+2=5.)
Do the compilers spoken of here basically work the same way?

David Gillies
July 28, 2013 9:13 pm

To a first approximation, numerical modeling *is* control of forward error propagation. Very often (and this is a rough, non-rigorous description) a finite difference model of a differential equation will have two solutions mixed in, due to the finiteness with which variables in the system can be represented on finite-precision hardware. One will be, say, a decaying exponential, A exp(-n t), and the other will be a rising exponential, B exp(n t). No matter how small B is made compared to A, over time it will come to dominate and wreck the convergence of the system. Mitigating this is half the battle.
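A compact C sketch of the effect David describes (my own toy example, not tied to any GCM): the leapfrog scheme applied to y' = -y carries a spurious growing mode, so a tiny starting error eventually swamps the decaying true solution.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Leapfrog (midpoint) scheme for y' = -y. The true solution decays as
       exp(-t); the scheme's parasitic mode grows and eventually dominates. */
    const double h = 0.01;          /* step size */
    double y_prev = 1.0;            /* y(0) */
    double y_curr = 1.0 - h;        /* crude value for y(h); seeds a tiny error */
    double t = h;                   /* y_curr approximates y(t) */
    for (int n = 1; n <= 4000; n++) {
        double y_next = y_prev - 2.0 * h * y_curr;   /* leapfrog step */
        y_prev = y_curr;
        y_curr = y_next;
        t += h;
        if (n % 1000 == 0)
            printf("t = %5.2f   numeric = % .3e   exact = % .3e\n",
                   t, y_curr, exp(-t));
    }
    return 0;
}

Here the growing term is seeded by the crude starting value, but rounding error alone would do the same job eventually.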

RoHa
July 28, 2013 11:47 pm

Lessee now.
My Radio Shack TRS80 says we are probably doomed.
My Sinclair ZX80 says we are only slightly doomed.
But my Oric Atmos says we are very doomed indeed.
Which should I believe?

Duster
July 29, 2013 12:05 am

Paul Jackson says:
July 27, 2013 at 1:32 pm
Edward Lorenz came to almost exactly the same conclusion, about almost exactly the same computational problem, almost 50 years ago; this, as the Warmistas would say, is “settled science”.

Thank you. I was going to point that out myself.

Duster
July 29, 2013 12:18 am

LdB says:
July 28, 2013 at 7:20 pm
….
The stability of the Earth’s climate over 4 billion years tells you that the chaotic element in climate is not unmanageable, in the same way that a missile’s random movements have limits and so a Patriot can track and destroy it. The problem seems to be the climate scientists’ unwillingness to learn from hard science.

I like most of your argument. However, “stability” is a matter of perception. First, over that 4 by period, the atmosphere has evolved from something that would kill an unprotected human in very short order to the air we breathe today. Next, during the last 600 my there have been a number of wildly violent swings in the size of the planetary biomass, with accompanying swings in the number of species of plants, animals and even bacteria. Entire orders have been pruned. At present, atmospheric CO2 is in fact hovering near the lowest it has ever been within that same 600 my span, and if it drops by about half, the entire planet could once again see a massive extinction event due to failed primary green-plant productivity. The only “stability” per se over the last 4 by is the presence of life forms of one form or another throughout.

steverichards1984
July 29, 2013 1:53 am

This thread has been a most illuminating one. I hope non computer programmers begin to understand that ‘heavy duty programming’ is not without problems!
But, in the engineering world, computer programs are built up from modules or units, each module or unit is tested before being incorporated into a larger program.
In engineering, each module/function is tested to make sure it behaves as expected.
We apply inputs, run it, and check the outputs are correct.
In climate science programming, I would expect the same, develop a small program/module/unit/function to process the effect of, say, particulates of a certain size, or specific gases of a specified concentration.
Where are the results of the thousands of experiments needed to prove that each small function of a GCM has been tested and verified before being incorporated into the complete model?
Having just had a quick look at the GISS ModelE code linked to earlier, I could see no obvious unit testing of each and every function.
I could see patches available to the WHOLE MODEL. How can you patch a COMPLETE MODEL without regression testing relevant units?
It beggars belief that these people are paid public money to ‘play with software’.
When programmers in Banks cause a failure and an online bank goes offline for a few hours due to a bad software update, people are sacked, careers ruined.
In climate science you can proudly boast of your untested model and gain awards??????
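For non-programmers wondering what such module-level testing looks like, here is a rough C sketch; the saturation vapour pressure routine, reference values and tolerances are all hypothetical stand-ins, not taken from ModelE.

#include <assert.h>
#include <math.h>
#include <stdio.h>

/* Hypothetical module under test: a Magnus-type saturation vapour pressure
   approximation (hPa), standing in for a small piece of GCM physics. */
static double sat_vapour_pressure_hPa(double temp_c) {
    return 6.112 * exp(17.62 * temp_c / (243.12 + temp_c));
}

static void check_close(double got, double want, double tol, const char *name) {
    printf("%-12s got %8.3f  want %8.3f\n", name, got, want);
    assert(fabs(got - want) <= tol);
}

int main(void) {
    /* Unit checks: known reference points plus a basic physical property. */
    check_close(sat_vapour_pressure_hPa(0.0),  6.112, 0.01, "e_s at 0 C");
    check_close(sat_vapour_pressure_hPa(20.0), 23.4,  0.5,  "e_s at 20 C");
    assert(sat_vapour_pressure_hPa(30.0) > sat_vapour_pressure_hPa(10.0)); /* monotonic */
    printf("all checks passed\n");
    return 0;
}

Hundreds of small tests like this, run automatically after every change, are what regression testing of a patched model would mean in practice.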

July 29, 2013 2:46 am

LdB says:(July 28, 2013 at 7:20 pm) “We have a group who want to try and plaster over and ignore the problem and seem to accept the models will always be in error and this seems to include the climate scientists, Nick Stokes and the climate modelers.”
You need to read Nick above where he says: “I bet you can’t get two different computers to keep a vortex shedding sequence in exactly the same phase for 100 cycles. But they will still give a proper vortex street, with frequency and spacing.”
Nick probably meant to say “statistically valid”, but with climate we do not know what is statistically valid when the climate changes either due to increased CO2 or exogenous (e.g. solar) influences.
steverichards1984 (July 29, 2013 at 1:53 am) “In climate science programming, I would expect the same, develop a small program/module/unit/function to process the effect of, say, particulates of a certain size, or specific gases of a specified concentration.”
They have had that for decades but it doesn’t help. The simplest explanation is that the computations for individual molecules cannot be extended to a model of the planet as a whole because there are too many molecules to simulate. So they use various parameterizations instead, which cannot be validated.

Ian W
July 29, 2013 3:16 am

DirkH says:
July 27, 2013 at 2:31 pm
Man Bearpig says:
July 27, 2013 at 2:03 pm
“The IEEE double precision format has 53 bits of significance, about 16 decimal places. Please don’t offer stupid answers.
===========================
Yes, and isn’t that wonderful? However, to what level can we actually measure the values that are entered as a starting point into the models. To calculate them to 16 decimal places is not a representation of the the real world. ”
The recommended way for interfacing to the real world is
enter data in single float format (32 bit precision)
-During subsequent internal computations use as high a precision as you can – to reduce error propagation
-during output, output with single precision (32 bit floats) again – because of the precision argument you stated.
It is legit to use a higher precision during the internal workings. It is not legit to assign significance to those low order digits when interpreting the output data.
In this regard, the GCM’s cannot be faulted.

Dirk, you appear to have missed Man Bearpig’s point. The real-world value you have just entered ‘in single float format’ was 12.5C. The actual value was 12.3C, but the observer never thought about your problems with initial start parameters. The pressure was similarly slightly off, likewise the dew point, etc. etc. There are huge roundings and errors in the observations that provide the start point, even from automatic sensors like ARGO and GOES. It is impossible to set the start parameters and inputs to the level of precision required to prevent the chaotic dispersion of results. People are living in a dreamworld if they think they can.
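A tiny C sketch putting the two error sources side by side (using Ian W’s example numbers; the point is only that the float representation error is many orders of magnitude smaller than the observation error):

#include <stdio.h>

int main(void) {
    /* Ian W's example: recorded 12.5 C versus an actual 12.3 C. */
    float recorded = 12.5f, actual = 12.3f;
    printf("12.3 stored as a float : %.9f\n", actual);           /* ~12.300000191 */
    printf("representation error   : %.1e C\n", actual - 12.3);  /* ~1.9e-07 C */
    printf("observation error      : %.1f C\n", recorded - actual);
    return 0;
}

Neither error is interesting on its own; the point of the paper is that even these tiny representation-level differences, treated differently by different systems, grow until they are comparable to the initial-condition spread.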

Ian W
July 29, 2013 3:58 am

Ric Werme says:
July 27, 2013 at 5:16 pm
…….If two climate models are working by simulating the weather, then it really doesn’t matter if the instantaneous weather drifts widely apart – if the average conditions (and this includes tropical storm formation, ENSO/PDO/AMO/NAO/MJO and all the other oscillations) vary within similar limits, then the climate models have produced matching results. (If they’re really good, they’ll even be right.)
This bugs the heck out of me, so let me say it again – forecasting climate does not require accurately forecasting the weather along the way.
Another way of looking at it is to consider Edward Lorenz’s attractor, see http://paulbourke.net/fractals/lorenz/ and http://en.wikipedia.org/wiki/Lorenz_system . While modeling the attractor with slightly different starting points will lead to very different trajectories, you can define a small volume that will enclose nearly all the trajectory.
The trajectory is analogous to weather – it has data that can be described as discrete points with numerical values. If some of the coefficients that describe the system change, then the overall appearance will change and that’s analogous to climate.
The trick to forecasting weather is to get the data points right. The trick to forecasting climate is to get the changing input and the response to changing input right.

Ric, I understand what you are saying, but I disagree that it is the way to build a model of the real world. I have bolded my concern. What you appear to be describing is what we see: climate modelers impose their own guess of what the shape and extent of the Poincaré section should be, and bound their software to meet that guesstimate of future reality. These climate models allow tweaking and parameterization (tuning) so that the output is what the writers want, not what will happen in the real world, or what would happen if they took their thumbs off the scales. If these modelers were NOT bounding their programs in this way, then the chaotic systems would show far more chaotic dispersal.

DirkH
July 29, 2013 4:22 am

Blarney says:
July 28, 2013 at 4:12 pm
“So it’s a matter of spatial and temporal scale: the current weather models can’t tell me if a small breeze will pass through my windows in the next minute, but they can predict the general weather on the city in two days; they will fail in telling which weather you’ll have in one year but (as climate models) they can show some skill in predicting if the average temperature for the whole planet will go up or down in ten years. If the climate is a chaotic system (letting aside catastrophic events of all kinds) it is possible that the same models will eventually fail completely on some very long time scale – tens of thousands of years, for example. But the rate of divergence from the modelled system is completely different from that of weather models as the level of detail at which they have to exhibit a skill is completely different.”
You are describing the process of parameterizing a statistical description of a 50 times 50 km cell in a GCM.
Note that this statistical approach only works when a lot of instances of the described process happen in the grid box.
This approach breaks down when one process instance is larger than a grid box, for instance a convective front.
You have now entered the area where the faulty physics of the models must be described; and we leave the area where we discuss rounding errors and the implications of chaos.

July 29, 2013 5:24 am

David Gillies says:
One will be, say, a decaying exponential, A exp(-n t) and the other will be a rising exponential B exp(n t). No matter how small B is made compared to A, over time it will come to dominate and wreck the convergence of the system.
Not if the step size is chosen so as to satisfy the stability requirements for the solution of the discrete approximations to the continuous differential equations. Off-the-shelf ODE solver software will cut through this like a hot knife through warm butter. Roll-your-own finite difference methods can be easily analyzed to determine the stability requirements.
This problem is a classic that is used to illustrate various aspects of finite-precision arithmetic and numerical solutions of ODEs.
Consistent discrete approximations coupled with stable numerical solution methods always lead to convergence of the solutions of the discrete approximations to the solutions of the continuous equations. Always.

Frank K.
July 29, 2013 6:19 am

Clemenzi and @ikh
Thank you for providing your comments on Model E. They echo my analysis of that code (i.e. it’s a real mess, with totally inadequate code comments and documentation). Robert – I’d like to hear more about your experiences with Model E. Perhaps you could write up a short essay that Anthony could publish.
For those who would like to see for themselves, the Model E source code can be downloaded from here:
http://www.giss.nasa.gov/tools/modelE/
For laughs, click on their link for the “general documentation” paper. Count how many equations you see in the entire paper…
@Dan_Hughes
Good to see you commenting here 🙂
Stokes
Thanks for the link to the ocean circulation animation (I’ve seen it before). While that’s not an example of “synthetic weather”, I think your point is that climate models are something like Large Eddy Simulation (LES) in CFD, wherein the Navier-Stokes equations are solved for the scales of turbulence resolvable on a given (usually very fine) mesh. In such simulations, the actual paths of individual eddies don’t matter, only their impact on the time-averaged solution, which is what is of interest. While this is a tempting analogy, the problem is that no one has demonstrated that this behavior should emerge from the differential equations which climate models are solving (which are NOT the Navier-Stokes equations, despite what some ill-informed climate scientists may assert). If you’d like to point to such an analysis, I’d be very interested to read more about this topic.

LdB
July 29, 2013 7:36 am

Duster says
July 29, 2013 at 12:18 am
The only “stability” per se over the last 4 by is the presence of life forms of one form or another throughout.
You forget that the argument is about surface temperature, because that is what the climate models are supposed to be predicting.
Consider the possible range of -157 degrees C to 121 degrees C, which is the temperature range possible both in theory and as recorded on the ISS (http://science.nasa.gov/science-news/science-at-nasa/2001/ast21mar_1/).
We have never seen anything like those numbers within human occupancy, and as best we can tell, ever. So something is constraining the behavior. This is exactly the situation of a plane or a rocket trying to make random turns: even by design, the randomness it can achieve has limits, because it has inertia.
You see the same inertia with temperature: when the sun goes down the surface temperature drops, but it doesn’t go to -157 degrees instantly, does it… why?
Lorenz was talking about a particle, something with essentially no inertia. You can’t get an instant butterfly effect on a plane, a rocket or the temperature of the planet; they simply have too much inertia.
It’s the inertia of the system that makes them solvable, and that is what makes some of the arguments here just very silly.
Technically, what happens with inertia is that it creates integrability over the chaos, which is technical speak for saying that the chaos cannot take just any value; it can only change at some maximal rate. In layman’s terms, a plane or rocket trying to go chaotic can only bank and turn at a maximal rate, which you would already know from common sense.
So butterfly effects and other chaotic behavior, which exist and are unsolvable in mathematics and in some fields, hold little relevance for much real-world physics, where the chaos mixes with inertial components.
So when you people are talking about chaos and effects that appear in mathematics and books, please be very careful, because they don’t tend to be that pure in the real world.
If you look at quantum mechanics as a theory, everything is popping into and out of existence completely randomly; it doesn’t get much more random and chaotic than that. Yet you view the world as solid, for exactly the same reason: the inertia of your observations makes it appear solid.
If you are still not convinced, then I can do nothing but show you an example, as many have tried to explain:
“Hardware Implementation of Lorenz Circuit Systems for Secure Chaotic Communication Applications”
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3649406/
=> The experimental results verify that the methods are correct and practical.
So please do not tell us you cannot get a model to lock onto a chaotic system. You may not be able to, but many of us can.
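For the curious, a minimal numerical sketch of the kind of ‘locking on’ LdB means, in the spirit of Lorenz-system synchronisation (a Pecora-Carroll style drive-response setup; my toy code, not taken from the linked paper): a response subsystem fed only the master’s x signal converges onto the master’s y and z.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Master Lorenz system and a response subsystem driven only by the
       master's x signal (Pecora-Carroll style synchronisation sketch).
       Simple Euler stepping; parameters are the classic Lorenz values. */
    const double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0, dt = 0.001;
    double x = 1.0, y = 1.0, z = 1.0;   /* master state */
    double yr = -5.0, zr = 30.0;        /* response starts far away */

    for (int n = 1; n <= 20000; n++) {
        double dx = sigma * (y - x);
        double dy = x * (rho - z) - y;
        double dz = x * y - beta * z;
        double dyr = x * (rho - zr) - yr;   /* driven by master's x */
        double dzr = x * yr - beta * zr;
        x += dt * dx;  y += dt * dy;  z += dt * dz;
        yr += dt * dyr; zr += dt * dzr;
        if (n % 5000 == 0)
            printf("t = %4.1f   |y - yr| = %.2e   |z - zr| = %.2e\n",
                   n * dt, fabs(y - yr), fabs(z - zr));
    }
    return 0;
}

The differences shrink toward zero even though both systems are chaotic, which is the point: you can lock onto a chaotic signal if you are continuously fed data from it. Whether anything comparable is available to constrain a free-running climate model is exactly the argument in this thread.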

coder
July 29, 2013 8:05 am

http://old.zaynar.co.uk/misc2/fp.html
I think this might be relevant for people who are discussing floating point error.
streflop helps mitigate this, although it is rarely used.

son of mulder
July 29, 2013 8:35 am

LdB says:
July 28, 2013 at 7:20 pm
“The problem, to me, seems to stem from the fact that most climate scientists don’t want to put error feedback and error analysis into the models, and simply accept that they continue to deviate from reality; Nick Stokes argues that this is unavoidable.”
How does this differ from adjusting models to hindcast? Try applying it to tossing a coin, or to predicting the percentage of time the jet stream will pass south of the UK, north of the UK, or over the UK in 100 years’ time, in 50 years’ time and in 10 years’ time, and so define a main driver of the UK’s climate.

July 29, 2013 8:45 am

“Technically, what happens with inertia is that it creates integrability over the chaos, which is technical speak for saying that the chaos cannot take just any value; it can only change at some maximal rate.”
I’m not sure that matters, since the differences between the various predictions or projections are all well within the maximal rates of change of all the values involved. You appear to be talking about trimming an enormous space of possible solutions, which still leaves behind a fairly large space.
The question is whether the “models” (i.e. emulations or simulations) can be validated. To use a simple example, convection from warming easily results in turbulent flow. The turbulence can be ignored using some bounds similar to temperature inertia. Then one could simplify parameters and formulas using average wind speeds.
One could also analyze structure in the chaotic system calculations, like the self-organizing structures described here: http://www.schuelers.com/chaos/chaos1.htm Although weather is a lot simpler than that, it still has effects like atmospheric mixing that are ultimately controlled by a chaotic process. Without accurate calculations of the mixing efficiency, predictions are impossible. This is even more important for the oceans over the long run, to model ocean cycles and even to determine something as simple as thermal inertia, which is well bounded as you state.

ikh
July 29, 2013 9:38 am

MarkG says:
July 28, 2013 at 7:20 pm
“Out-of-order execution produces the same results as in-order execution, because it has to wait for the previous operation on the data to complete before it can continue.”
First you need to choose an appropriate example, i.e. suppose we have four floating point values A, B, C, D and we want to add B, C and D to A. In the C language we would write:
A += B + C + D;
The compiler, with optimisation turned off, would move A into a register (if necessary) and then add B, followed by C, followed by D.
The CPU is free to re-order these instructions, and because of differing intermediate values with rounding, the result is not the same.
The CPU is also free to re-order instructions that follow the above statement, as long as they do not read from A or write to B, C or D. The rule against writing to B, C or D can be relaxed as long as temporary copies of the values of B, C or D are saved.
/ikh

John
July 29, 2013 9:49 am

How embarrassing. The problems of rounding issues and library differences, and the subsequent effects on cluster runs, have been known in the HPC (High Performance Computing) community for a long time. That some modellers chose to ignore this issue suggests a lack of professionalism.

Philip Peake
July 29, 2013 10:36 am

When I did my computer science degree (many years ago) there was a course on numerical analysis. A big part of that was trying to get us to understand just how bad rounding errors can get, in very few iterations. There was an example that was used (which I have forgotten completely) where just dropping a digit or two from each partial result resulted in complete nonsense at the end.
A core concept was that you had to fully understand the mathematical construct that you were coding in order to be able to spot these pitfalls. There was a time when I thought that this was just a maths professor trying to remain relevant in the computer age, but I now realize that he was absolutely right. The same principle applies everywhere in computing — you had darn well better understand what you are attempting to represent in code.
One of the sad but true things about computers is that real arithmetic is something that they actually do quite poorly compared to many other applications.
I spent time after my degree running “surgeries” for undergraduates doing computer science, helping them to debug their coding problems. Quite a few post-grads turned up too. Mainly from the “soft-sciences”, they would have found some equation in a book, or maybe an algorithmic description of some statistical manipulation/test and tried to code it up themselves. I used to look forward to these guys, they provided a much bigger mental challenge than undergrads fighting to find where the missing close bracket was. Most often, once we had it running and spitting out numbers, they would show no interest at all in constructing some tests to validate that it was working correctly for all values, and particularly for the edge cases.
Inappropriate statistical models, incorrectly coded …
I now realize that I may have failed the world in not strangling these guys there and then.
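A classic demonstration from exactly that kind of numerical analysis course (my example; any textbook has an equivalent): two algebraically identical formulas for (1 - cos x)/x^2, one of which throws away nearly all of its significant digits for small x.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Catastrophic cancellation: 1 - cos(x) loses almost every significant
       digit when x is tiny; the rearranged form does not. */
    double x = 1.0e-8;
    double naive  = (1.0 - cos(x)) / (x * x);
    double s      = sin(0.5 * x) / x;
    double stable = 2.0 * s * s;
    printf("naive  : %.15f\n", naive);   /* 0.000000000000000 */
    printf("stable : %.15f\n", stable);  /* 0.500000000000000, the correct limit */
    return 0;
}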

Philip Peake
July 29, 2013 10:56 am

Re: Pipe-lined instructions and out of order execution.
All of the comments are correct. If we are dealing with integers, you can do the operations in any order. With floating point, you may get different answers depending on overflow, rounding etc.
But the big killer is where any of the values you are using may be changed by external influences, such as reading a value from a hardware register that is affected by changes to some other variable.
Even harder to detect and deal with are multi-threaded applications, where variables being used in calculations may be changed asynchronously by another thread of execution. If you are writing the code, you are totally responsible for ensuring that multiple threads don’t tread on each other.
Most of the code I have seen that the climate dweebs play with is written in single-thread of control languages such as Fortran (which quite honestly should have died about 30 years ago).
To get performance out of modern super-computer clusters, this monolithic, single threaded code is compiled with compilers that attempt to break it down into separate components that can be run in parallel on multiple CPUs or even multiple computers. That is placing a LOT of trust in the compilers and support libraries. It would be instructive to see if the compilers used are the latest patch levels, and if not, what bug fixes are listed in each release subsequent to the one being used.
The combination of multi machine, multi cpu with each cpu pipelining and using out of order execution applied to code which was written to be executed linearly with a single thread of control written by someone with a poor grasp of numerical analysis is … well … worrying.
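A bare-bones C/pthreads sketch of the multi-threading hazard described above (deliberately broken, to show the symptom; the numbers are arbitrary, and it needs the usual -pthread flag to build): two threads update a shared total with no locking, and the result varies from run to run.

#include <pthread.h>
#include <stdio.h>

#define N 1000000

static double shared_total = 0.0;      /* deliberately unprotected */

static void *accumulate(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++)
        shared_total += 0.001;         /* unsynchronised read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, accumulate, NULL);
    pthread_create(&t2, NULL, accumulate, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* With a mutex around the update this would always be 2000 (up to rounding);
       without one, updates from one thread overwrite the other's. */
    printf("total = %f\n", shared_total);
    return 0;
}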

Compu Gator
July 29, 2013 11:02 am

jimmi_the_dalek says (July 28, 2013 at 6:17 pm):

I know of and have used several codes which contain millions (about 10 million in the largest case) of lines of mixed Fortran and C++. [….] When these codes do give different answers, it is usually not the code, or the CPU, but the compiler–compilers make far more mistakes than people realise and quite often ‘mis-optimise’ a code in such a way that they change the results.

Having worked on developing 5 compilers for various computer manufacturers, I’m chomping at the bit as I type, because someone needs to point out that “quite often ‘mis-optimise’” paints its generalization with an overly broad brush.
Such compiler failures should certainly not be “quite often”. Certainly not for compilers from any established company, especially one that supplies the hardware the compiler is targeted for.
I confess that I’d be really reluctant to make any bets on compilers from (unidentified) start-up companies, whose engineers might be much stronger in enthusiasm than experience, and who are self-motivated much more by the fun of invention than the drudgery of compiler testing. And testing. And retesting.
I’m writing on the crucial assumption that the compilers we’re discussing are serious commercial products. So you and your colleagues work closely with your compiler vendors, providing them promptly with minimal-size test cases for each bug you’ve discovered, right? Each compiler vendor should have some way to force their code generator (or optimizer) to replicate its behavior in your computing environment, e.g. your combination of multiple processors and vector sizes. But simply shipping out the latest version of the 10-million lines-o’-code lump is now even less productive for all concerned than it was back when high-performance computers were typically single high-speed processors.

DirkH
July 29, 2013 12:13 pm

LdB says:
July 29, 2013 at 7:36 am
“Technically what happens with inertia is it creates integratibility over the chaos which is technical speak for saying that the chaos can not take only any value it can only change at some maximal rate. In layman speak a plane or rocket when trying to go chaotic can only bank and turn at a maximal rate which you would already know from common sense.
So butterfly effects and other chaotic behavior which exists and is unsolvable in mathematics and some fields holds little relevance to many real world physics where the chaos mixes with inertia components.”
All nice and well, but radiative energy exchange in the atmosphere is a fast process. Kevin Trenberth’s infamous energy balance diagram shows that 90% of the energy that leaves the Earth comes not from the surface but from the atmosphere. (Yeah, I know that’s a fantasy. But it’s probably what the models do.) So one would think positive water vapor feedback can accelerate about as quickly as a thunderstorm in the tropics develops (yeah, I know they don’t do thunderstorms. Just a comparison). Thermal runaway within an afternoon. Assuming their positive water vapor feedback existed.

DirkH
July 29, 2013 12:14 pm

Or shorter: Chaos in CO2AGW models is just as fast as weather.

July 29, 2013 12:18 pm

I re-engineered an existing custom Integrated Circuit. I had to create a 32-bit adder; for the last stage I used two 4-bit overflow adders, one tied to the other. I think I created test vectors that tested each adder, and both together, but for the case of (I think) both set to 16, and getting them to add correctly, I don’t think I had a test case. In use the chip acted oddly and triggered more often than it was supposed to. Fortunately, while my vectors didn’t detect the problem, the problem turned out to be a bad 4-bit adder macro that I used, which had apparently never been tested. They re-spun the chip, we added some extra vectors, it didn’t cost us anything, and the new chip worked great.
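A hedged illustration of the testing point — not the original chip or its test bench, just a behavioural C model of two chained 4-bit adders checked exhaustively, so that the carry-out-of-the-low-nibble cases described above cannot slip through. The function names are made up for this sketch.

/* Hedged illustration only -- not the original chip or its test bench.
 * Two 4-bit adders chained to form an 8-bit adder, tested exhaustively.
 * The interesting vectors are exactly the ones the comment describes:
 * inputs whose low nibbles generate a carry into the high adder.       */
#include <stdio.h>

/* Behavioural model of one 4-bit adder "macro": returns the 4-bit sum,
 * writes the carry-out through *cout.                                   */
static unsigned add4(unsigned a, unsigned b, unsigned cin, unsigned *cout)
{
    unsigned s = (a & 0xF) + (b & 0xF) + (cin & 1);
    *cout = (s >> 4) & 1;
    return s & 0xF;
}

/* 8-bit adder built from two chained 4-bit adders. */
static unsigned add8(unsigned a, unsigned b, unsigned *cout)
{
    unsigned c0, c1;
    unsigned lo = add4(a & 0xF, b & 0xF, 0, &c0);
    unsigned hi = add4((a >> 4) & 0xF, (b >> 4) & 0xF, c0, &c1);
    *cout = c1;
    return (hi << 4) | lo;
}

int main(void)
{
    unsigned a, b, cout, failures = 0;

    /* Exhaustive test: only 65536 cases, so no excuse for missing the
     * carry-propagation vectors.                                        */
    for (a = 0; a < 256; a++)
        for (b = 0; b < 256; b++) {
            unsigned got   = add8(a, b, &cout);
            unsigned want  = (a + b) & 0xFF;
            unsigned wantc = ((a + b) >> 8) & 1;
            if (got != want || cout != wantc) {
                printf("FAIL: %u + %u -> %u carry %u\n", a, b, got, cout);
                failures++;
            }
        }
    printf("%u failures out of 65536 vectors\n", failures);
    return failures != 0;
}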

jimmi_the_dalek
July 29, 2013 1:11 pm

Compu Gator,
The compilers I am referring to are by major companies – Intel, Sun (now Oracle), Portland Group. They all make mistakes on large-scale codes. We report errors where practicable; otherwise we go through lengthy test suites, identify the problem routines, and reduce the optimisation level on those. It’s a fact of life, I’m afraid, and has been on every machine/operating system/compiler combination I have seen in 20 years. The operating system itself is rarely a problem, and neither are the maths libraries, fortunately.
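For readers wondering what “reduce the optimisation level on those” looks like in practice, here is one GCC-specific way to do it per routine. The routine name is made up for the sketch; Intel, PGI and Sun/Oracle compilers have their own per-file flags and directives (and the more common approach is simply per-file flags in the build system), so treat this as an illustration of the idea rather than a portable recipe.

/* Sketch: lower the optimisation level for a single troublesome routine
 * without touching the rest of the build.  Uses a GCC-specific function
 * attribute; other compilers have their own mechanisms.                */
#include <stdio.h>

/* Compile just this routine at -O1 even if the file is built at -O3.   */
__attribute__((optimize("O1")))
static double suspect_kernel(const double *x, int n)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < n; i++)
        sum += x[i] * x[i];
    return sum;
}

int main(void)
{
    double v[4] = { 1.0, 2.0, 3.0, 4.0 };
    printf("sum of squares = %g\n", suspect_kernel(v, 4));
    return 0;
}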

July 29, 2013 1:19 pm

Frank K. says:
July 29, 2013 at 6:19 am
Robert – I’d like to hear more about your experiences with Model E. Perhaps you could write up a short essay that Anthony could publish.
Some of the details of my experience are at
http://mc-computing.com/qs/Global_Warming/Model_E/index.html
I never posted the proposed paper and some other stuff because journals refuse to consider anything previously published.
Based on that research, I have developed a number of related pages, such as
http://mc-computing.com/Science_Facts/Water_Vapor/index.html
For instance, at 0.01°C, the saturation vapor pressure of water is 611.657 ± 0.01 Pa. Model-E uses 610.8 Pa. As a result, the model will use the wrong value for water vapor absorption. However, it is not clear if their algorithm gives too much or too little absorption. In my programs, there are 28 different formulas for computing saturation vapor pressure – Model-E uses 2 of them, one over water and another over ice. While the difference in the amount of vapor is very small, the difference in the amount of IR radiation absorbed and emitted is quite large when compared to what they claim changes in CO2 are causing.
One of the reasons I tried to run Model-E was so that I could change parameters, like the saturation algorithm, and see what effect, if any, it had. I also wanted to change the number of days from 365 to 366. Basically, I wanted to determine which constants and algorithms were important and which weren’t. Since “they” claim that “water vapor feedback” is what causes the real problem, I think that this type of analysis is important.
For additional information, contact me via the provided links at the bottom of my pages.
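As an illustration of how much the choice of saturation-vapour-pressure formula matters, here is a small C comparison of two widely published approximations (Tetens and a Magnus-type fit). These are not the Model-E routines, and the coefficients shown are the commonly cited ones rather than anything taken from the model; the point is only that the reference constant and fit coefficients shift the computed vapour pressure, and everything downstream of it.

/* Hedged illustration -- two widely published approximations, NOT the
 * formulas used inside Model-E.  The reference constants alone differ
 * (610.78 vs 610.94 vs the 611.657 Pa triple-point value quoted above). */
#include <stdio.h>
#include <math.h>

/* Tetens (over water), T in deg C, result in Pa. */
static double es_tetens(double t)
{
    return 610.78 * exp(17.27 * t / (t + 237.3));
}

/* Magnus-type fit (Alduchov & Eskridge coefficients), T in deg C, Pa. */
static double es_magnus(double t)
{
    return 610.94 * exp(17.625 * t / (t + 243.04));
}

int main(void)
{
    double t;
    for (t = 0.0; t <= 40.0; t += 10.0)
        printf("T = %4.1f C  Tetens = %8.2f Pa  Magnus = %8.2f Pa  "
               "diff = %6.2f Pa\n",
               t, es_tetens(t), es_magnus(t),
               es_magnus(t) - es_tetens(t));
    return 0;
}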

Janice Moore
July 29, 2013 2:40 pm

Wow. I understood about 1/4 of what was said above, but it was well worth the effort to understand that much. From their failure to hindcast or forecast the real data (oxymoron, I know, but so often needed that I use it for clarity), I already knew GCMs were piles of junk; now I understand a bit more (maybe a whole byte more!) why.
2 (of the many) fine quotes from above:
***************************************************************
“… the entire range of past variation is equally likely under their assumptions and procedures… .”
[Brian H 12:22 PM 7/29/13 – edited emphasis]
That says it all.
******************************************
“Weather is the atmosphere stabilising itself.”
[AndyG 1:51PM 7/28/13]
Nicely put.

July 29, 2013 4:03 pm

ummm…
It used to be much worse – but what’s going on here is a problem arising from the interaction between the job management software (e.g. openmpi) and the model. The models run to a level of detail set by the number of available processors (or run time maxima), the job management software sets that at the start based on what’s available then. Run the same openmpi job on a heavily used machine first and then on the same machine running idle and this type of application will get different results.
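A sketch of the mechanism being described, using plain MPI rather than any model’s actual decomposition: each rank sums its own slice of the same data, so changing the number of ranks regroups the floating-point additions and can change the total, even though neither the data nor the source code changed. Building with mpicc and comparing, say, mpirun -np 1 against mpirun -np 8 shows whatever difference the regrouping produces.

/* Sketch (not from any climate model) of how the number of MPI ranks
 * changes the grouping of floating-point additions.  Each rank sums its
 * own slice of the same global series, then partial sums are combined. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv)
{
    int rank, size, i;
    float local = 0.0f, total = 0.0f;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Every rank sums only its own stride of the same series, so the
     * data never change -- only the grouping of the additions does.    */
    for (i = rank; i < N; i += size)
        local += 1.0f / (float)(i + 1);

    /* Combine the partial sums; the reduction order depends on the
     * number of ranks and the MPI implementation's reduction tree.     */
    MPI_Reduce(&local, &total, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("np = %d  harmonic sum = %.7f\n", size, total);

    MPI_Finalize();
    return 0;
}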

ikh
July 29, 2013 4:17 pm

jimmi_the_dalek says:
July 29, 2013 at 1:11 pm
We report errors where practicable, otherwise go through lengthy test suites, identify the problem routines, and reduce the optimisation level on those.
Every compiler I have ever used has documentation describing what you can and cannot do at a particular optimisation level. If your programmers ignore the rules then you deserve everything you get. And that is a much higher cost of maintenance than is necessary.
Before you can blame the compiler optimiser you have to eliminate programmer errors from using constructs that break the rules for that optimisation level.
Compilers from hardware vendors are “flagship” products. They do not make profits for the vendor, and I suspect they rarely cover costs, but they do help to sell the hardware, which is where the big money is. As such, the vendors put huge effort into making sure the compilers are correct; otherwise it reflects on the computers and chips that they sell.
That is not to say that there is never a bug in the compiler, but it will be a lot less common than a commercial or in-house piece of software breaking the optimisation rules.
/ikh
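A concrete example of code whose correctness depends on the documented rules of an optimisation level, rather than on the compiler being bug-free: Kahan compensated summation is only valid if the compiler does not re-associate floating-point expressions, which is precisely the licence that flags like GCC’s -ffast-math / -fassociative-math grant. Whether a given compiler version actually performs the simplification varies; the sketch below only illustrates the kind of construct at stake.

/* Code that is only correct under the documented rules of the
 * optimisation level it is compiled at.  Kahan compensated summation
 * relies on floating-point operations NOT being re-associated; flags
 * such as -ffast-math / -fassociative-math permit the compiler to
 * simplify the compensation term ((t - sum) - y) to zero, silently
 * turning this back into naive summation.                              */
#include <stdio.h>

static float kahan_sum(const float *x, int n)
{
    float sum = 0.0f, c = 0.0f;   /* c accumulates the lost low bits    */
    int i;
    for (i = 0; i < n; i++) {
        float y = x[i] - c;
        float t = sum + y;
        c = (t - sum) - y;        /* algebraically zero; numerically not */
        sum = t;
    }
    return sum;
}

int main(void)
{
    enum { N = 1000000 };
    static float x[N];
    int i;
    for (i = 0; i < N; i++)
        x[i] = 0.1f;              /* 0.1 is inexact in binary            */
    printf("kahan sum of %d * 0.1 = %.3f (exact: %.1f)\n",
           N, kahan_sum(x, N), N * 0.1);
    return 0;
}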

ikh
July 29, 2013 5:15 pm

Janice Moore says:
July 29, 2013 at 2:40 pm
“Wow. I understood about 1/4 of what was said above, but, it was well worth the effort to understand that much.”
It is not surprising that someone reading a climate blog cannot follow some serious Computer Science. It is a valid and separate discipline in its own right, and one that is not generally taught in school unless you study it at University.
And yet, the first quote you chose:
“… the entire range of past variation is equally likely under their assumptions and procedures….”
Is probably the most important phrase in the post. This means that the same magnitude of variation that comes from running a single model as an ensemble by changing some of the parameters (initial conditions) can just as easily be achieved by changing the compiler optimisation level (without changing the parameters) or by changing the hardware.
Does this automatically invalidate a model? No. We would need multiple runs of a model with identical starting conditions to show rounding errors.
/ikh
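One way to see how pure rounding differences grow into a spread is a toy chaotic system, which has nothing to do with a real GCM: the Lorenz-63 equations integrated from identical initial conditions in single and in double precision. The only difference between the two runs is rounding, yet the trajectories part company after a few thousand steps — the same mechanism by which compiler and hardware rounding differences grow into a visible ensemble spread.

/* Toy illustration (Lorenz-63, nothing like a real GCM): the same
 * equations and initial conditions integrated in single and double
 * precision with a simple Euler step.  Only rounding differs.          */
#include <stdio.h>

#define SIGMA 10.0
#define RHO   28.0
#define BETA  (8.0 / 3.0)
#define DT    0.001

static void step_d(double *x, double *y, double *z)
{
    double dx = SIGMA * (*y - *x);
    double dy = *x * (RHO - *z) - *y;
    double dz = *x * *y - BETA * *z;
    *x += DT * dx;  *y += DT * dy;  *z += DT * dz;
}

static void step_f(float *x, float *y, float *z)
{
    float dx = (float)SIGMA * (*y - *x);
    float dy = *x * ((float)RHO - *z) - *y;
    float dz = *x * *y - (float)BETA * *z;
    *x += (float)DT * dx;  *y += (float)DT * dy;  *z += (float)DT * dz;
}

int main(void)
{
    double xd = 1.0, yd = 1.0, zd = 1.0;
    float  xf = 1.0f, yf = 1.0f, zf = 1.0f;
    int i;

    for (i = 1; i <= 40000; i++) {
        step_d(&xd, &yd, &zd);
        step_f(&xf, &yf, &zf);
        if (i % 5000 == 0)
            printf("step %6d  double x = %9.4f  float x = %9.4f  "
                   "diff = %9.4f\n", i, xd, (double)xf, xd - (double)xf);
    }
    return 0;
}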

jimmi_the_dalek
July 29, 2013 5:45 pm

Ikh,
You are over optimistic.
Compilers frequently make mistakes. If that were not the case then successive releases would not contain, in their release notes, long lists of problems which have been fixed. For example, Intel releases bug fixes to their compilers every couple of months – see http://software.intel.com/en-us/articles/intel-compiler-and-composer-update-version-numbers-to-compiler-version-number-mapping – and the detailed list of bugs runs to several pages each time – see for example http://software.intel.com/en-us/articles/intel-composer-xe-2011-compilers-fixes-list
I assure you there are problems with all compilers when applied to large-scale codes: not just the loss of a few digits of precision that might come from rearranging the arithmetic operations (and which could be due to the programmer writing unstable code), but outright blatant failures with totally wrong answers.

LdB
July 29, 2013 6:54 pm

son of mulder says:
July 29, 2013 at 8:35 am
How does this differ from adjusting models to hindcast? Try applying it to tossing a coin or predicting percentage of time the jetstream will pass south of the UK or north of the UK, or over the UK, in 100 years time, in 50 years time, in 10 years time and so define a main driver of the UK’s climate.
They are very different techniques. In your full hindcast model you assume that you have faithfully described the system correctly at the beginning; the second method is called “sliding mode control” or sometimes “sliding window control”, because you essentially realize at the start that you don’t know the exact values, so the system adjusts them as it goes.
http://en.wikipedia.org/wiki/Sliding_mode_control
The obvious difference is that sliding mode control has a “lock” condition, which is to move the sliding window forward; failure of the prediction to fall within that window means you have lost lock on the target. Look, for example, at how the bug with the Patriot missile was reported:
http://sydney.edu.au/engineering/it/~alum/patriot_bug.html
=> Discovery of the Bug
Ironically, Israeli forces had noticed the anomaly in the Patriot’s range gate’s predictions in early February 1991, and informed the U.S. Army of the problem. They told the Army that the Patriots suffered a 20% targeting inaccuracy after continuous operation for 8 hours.
Obviously in the patriot missile case the “lock” signal is reported to the operator.
Sometimes when you acquire “lock” it is interesting to then hold the current parameters, turn the model around, and run it backwards through the hindcast to see if the lock can hold. Again, in a military sense, this is how you back-track a missile launch site to greater accuracy than the initial detection.
If your lock parameters can’t reverse through the hindcast then you don’t have a complete description of the terms involved in the process, and it is possible you may lose lock in the future.
In your case your jetstream should be within the range of the sliding window from whatever time it takes to build up. Essentially a jetstream doesn’t just appear instantly or disappear overnight; there are processes required for it to form, and those processes take time because they involve a lot of energy. That is the same restriction placed on the ability of a plane or rocket to turn or bank, even when trying to avoid the Patriot that is hunting it down.
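For reference, the widely reported root cause of the Patriot range-gate drift was a truncated binary representation of the 0.1-second clock tick accumulating over uptime. The sketch below uses single-precision floating point instead of the original 24-bit fixed point, so the numbers differ from the real incident, but the flavour is the same: the longer the system runs, the further the stored clock drifts from real time.

/* Simplified sketch of the widely reported Patriot timing bug: time was
 * counted in 0.1 s ticks, but 0.1 has no exact binary representation,
 * so the stored clock drifted from real time the longer the system ran.
 * The real system used 24-bit fixed point; single-precision float here
 * gives the same flavour, not the same magnitude.                      */
#include <stdio.h>

int main(void)
{
    float clock_s = 0.0f;          /* accumulated "0.1 s" ticks          */
    const long per_hour = 36000;   /* 10 ticks per second                */
    long ticks;
    int hours;
    double true_s;

    for (hours = 1; hours <= 100; hours++) {
        for (ticks = 0; ticks < per_hour; ticks++)
            clock_s += 0.1f;       /* each add injects a rounding error  */

        true_s = hours * 3600.0;
        if (hours == 8 || hours == 100)
            printf("after %3d h: stored clock = %12.3f s, "
                   "error = %.3f s\n", hours, clock_s, clock_s - true_s);
    }
    return 0;
}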

LdB
July 29, 2013 7:07 pm

eric1skeptic says:
July 29, 2013 at 8:45 am
I’m not sure that matters since the differences between various predictions or projections are all well within the maximal rates of change of all the values involved. You appear to be talking about trimming an enormous space of possible solutions which still leaves behind a fairly large space.
You will always have a window when trying to “lock” to a chaotic signal that is true, and in the same sense you also always have jitter on phase locked loop. These processes lock by continual adaption of the sliding window.
So the forward prediction is more strict locally and much wider as you go out longer.
Take our humble Patriot missile: once it has “lock” on a missile, it has a tight window on the target and will put a square to look at in front of the target, centred at the position where it is most likely to be. It also provides a longer, much bigger window right out to an impact point if required, but one cannot say with absolute certainty what the end value will be; you have a probability range.
Really the result is no different from the IPCC models: they have a central value they predict and a range either side. They look like this:
http://www.realclimate.org/images/model122.jpg

Editor
July 29, 2013 7:26 pm

Philip Peake says:
July 29, 2013 at 10:56 am

Most of the code I have seen that the climate dweebs play with is written in single-thread of control languages such as Fortran (which quite honestly should have died about 30 years ago).

It did – see Fortran 90, Fortran 2003 and others. Now you have to teach the old user community how to write good Fortran.

Janice Moore
July 29, 2013 7:46 pm

Dear I. K. H. (I just cannot bear to write and, thus, THINK, “ick” for you, wonderful scientist!),
Thank you, so much, for taking time out from your excellent, high-level discussion to encourage me by affirming my reading of the information presented. That YOU said so makes me highly value the compliment.
I heartily agree. Computer Science is a rigorous, high-level area of study. The logic and analytical thinking skills of computer scientists are second to none. Also, almost without exception, as I know from talking with my software engineer brother and IT geeks (even really young ones), they are, and take great pride in being, very patient with non-computer-savvy people and extremely good at “translating” technical talk and computer science concepts into layperson’s language.
Your grateful student,
Janice
P.S. I think what MOST makes me realize how smart you C.S. people are is how challenging I found my computer science courses (for a B.S. in Computer Science-Business). LOL, and they were just starting to offer “C” (a NEW language) my senior year. No, I did not take it! Pascal is as much as I learned. Anyway, knowing a little of the area makes me doubly aware of how amazingly bright software engineers are. BTW, I was surprised to read above that FORTRAN was still used; it was considered obsolete (or so I thought!) by my freshman year of college (1982-83). We did learn COBOL (just in case we came upon it — (smile)).

July 29, 2013 10:22 pm

This is one of a number of current job postings for FORTRAN developers that do not require a degree in computer science. Because this is in Greenbelt, I assume that this is related to NASA (GSFC – Goddard Space Flight Center). (There are lots of hits when you search for “Goddard Space Flight Center” and FORTRAN.)
Scientist
Contribute to the development of two-moment cloud microphysical scheme in GEOS-5 model and its successors. Run GEOS-5 model and produce higher level diagnostics related to clouds, aerosol, and precipitation to compare with observations. Link GEOS-5 cloud microphysical scheme with aerosol microphysical scheme.
Job Experience: PhD in atmospheric science, oceanography or geoscience. Experience in working on and running large-scale Earth-System models. Experience in working in supercomputer computational environments. Proficient in Fortran, Unix shell script, GrADS.
Job Location – Greenbelt, Maryland, United States

son of mulder
July 30, 2013 2:03 am

” LdB says:
July 29, 2013 at 6:54 pm
In your case your jetstream should be within the range the sliding window from whatever time it takes to build up. Essentially a jetstream doesn’t just appear instantly or disappear overnight there are process required for it to form and those processes take time because they involve alot of energy. ”
But that is only useful in very short-term prediction, like that required in weather forecasting. One doesn’t have that feedback mechanism when looking, say, 100 or 50 years into the future and trying to assess what cost-effective, justifiable precautionary actions should be taken to mitigate the dangerous model-predicted changes in climate that may or may not occur in a region. This is the practical purpose of climate models that most on this site are sceptical about.

Mike N
July 30, 2013 8:29 am

They misuse computers the same way they misuse statistics.

Janice Moore
July 30, 2013 12:06 pm

Nice detective work, Robert Clemenzi (7/29, 10:22PM). Veehhlly eentehrehsteeeng.

Compu Gator
July 31, 2013 1:07 pm

ikh says (July 27, 2013 at 4:06 pm):

[….] Fortran is an archiac language that is almost never used in the commercial world

FORTRAN was originally designed, in effect, as a medium-level language for the IBM 704 mainframe. Its half-century (semicentennial?) could’ve been celebrated as early as 2004 (based on its preliminary report being completed in 1954; the FORTRAN I compiler was first released in 1957). The language was a hard sell at the outset, because its potential customers were skeptical that adequate speed could possibly be obtained in programs not coded in assembler. So the optimizations by that compiler were crucial to its acceptance, even when it was a product confined to IBM computers. As a result, expectations of compiler optimizations and their benefits became an early & permanent part of the FORTRAN culture. A paper published in 1964 counted 43 FORTRAN compilers in existence across the computer industry; FORTRAN I for UNIVAC’s Solid State 80 seems to’ve been the first running outside IBM (Jan. 1961).[*1] The famously effective optimizing compiler known as IBM FORTRAN H (‘H’ encoding its compilation main-memory requirement) had appeared by 1969.
The primary historical reason for not using FORTRAN in “the commercial world” was that its alphabetic character-manipulation was so awkward in what was fundamentally a computer-word-oriented language (a word being 30-something or 40 bits long on many early mainframes), plus its complete neglect of the Binary-Coded Decimal (BCD) data type that was typically unimplemented in the “scientific” lines of the mainframes of the era. FORTRAN’s lack of pointers largely disqualified it for “systems” programming; altho’ they could be emulated via excursions into assembly code, and sneakiness with customarily unchecked subroutine parameters, such subterfuges were not portable across any architectures that weren’t plug-compatible.
In the early years of FORTRAN, “the commercial world” used separate lines of “commercial” mainframes, without binary floating-point. They used other languages specialized for their distinct categories of programming, featuring BCD. I’m assuming that you wouldn’t want binary-floating-point approximation or round-off errors to affect your bank accounts.
So to be fair, the arguably venerable FORTRAN language was not designed to be suitable for those categories of programming (even tho’ its wide-spread availability has allowed really determined programmers to use it successfully for some “system” or character-manipulation categories, despite the programming pain that was undoubtedly involved).
Ric Werme says (July 27, 2013 at 5:58 pm):

Fortran 2003 includes object oriented constructs. Just because a lot of people are writing code as though they have a Fortran IV compiler [that] should not be cause to denigrate modern Fortran.

FORTRAN has proven to be an unusually long-lived programming language. Not because it’s a language of elegant beauty (APL has the edge on elegance for writing code), nor because it provides the best of all possible worlds (PL/I, Ada, or Algol 68 are probably among the closest to ideals of universality), but because it’s remained very effective at coding the kinds of software that it was designed for.
Notes:
*1: Programming Research Group of IBM Applied Science Div.: “Preliminary Report: Specifications for the IBM Mathematical FORmula TRANslating System, FORTRAN” (Nov. 10, 1954). The software itself was apparently not available (or at least not outside its mostly-IBM “working committee”) until 1957. (Jean E. Sammet 1969: Programming Languages: History and Fundamentals, § IV.3.1, pp. 143–150.) An impressive on-line source that I’ve recently discovered is the “History of FORTRAN and FORTRAN II”, a project of the Software Preservation Group of the Computer History Museum.

Compu Gator
August 4, 2013 10:17 am

Seeing another reader give undue praise, in this topic, to a programming language that’s not renowned for its use in numerically-intensive code, I hope Mr. Watts will be willing to just stand by and see if anything interesting emerges after I kick a particular ant-hill.
ikh says (July 27, 2013 at 4:06 pm):

[….] because we can get the same performance from more modern languages such as C or C++ with much better readability and easier reasoning for correctness.

C !? A language in which every array is formally and practically equivalent to a pointer? And in which programmers blithely perform arithmetic on pointers, not only for sequential access to dynamically-allocated data based on pointer syntax, but also to variables declared using array syntax, and even for the mundane purpose of retrieving main-program and function parameters? Maybe there’s been a concerted effort to tighten up the language as compiled, but back when I last used it for something largish, any combination of C’s inside-out pointer syntax was valid, and may the Good Lord help you if it wasn’t the syntax that you really needed to code to access your data-structure as you intended. Because you could be sure that the generated code would heedlessly access some memory somewhere.
Even the language’s principal designer Dennis Ritchie admitted [*2]:

C’s treatment of arrays in general (not just strings) has unfortunate implications both for optimization and for future extensions. The prevalence of pointers in C programs, whether those declared explicitly or arising from arrays, means that optimizers must be cautious, and must use careful dataflow techniques to achieve good results. Sophisticated compilers can understand what most pointers can possibly change, but some important usages remain difficult to analyze.

The phrase “pointer aliasing”, when submitted to search engines, provides a good start in learning about this compiler-optimization issue that’s especially important to C.
The only silver lining is that early C zealots disparaged compiler optimization, boasting of their ability to produce highly efficient code simply by careful crafting of their C source code. So it might still be that the much larger modern C culture continues to follow the lead of their earlier evangelists, thus remaining somewhat indifferent to having compilers optimize their code.
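A small sketch of the aliasing problem Ritchie describes, using the C99 restrict qualifier. In the first routine the compiler cannot assume that out[] and *factor occupy different memory, so it must reload *factor on every iteration; the restrict-qualified version promises no aliasing and lets the optimiser hoist the load out of the loop. The function names are invented, and what the generated code actually looks like depends on the compiler.

/* Sketch of the aliasing issue: without restrict, the compiler must
 * assume a store through out[] might change *factor.                   */
#include <stddef.h>
#include <stdio.h>

void scale(float *out, const float *in, const float *factor, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        out[i] = in[i] * *factor;   /* *factor may alias out[]           */
}

void scale_restrict(float *restrict out, const float *restrict in,
                    const float *restrict factor, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        out[i] = in[i] * *factor;   /* *factor provably loop-invariant   */
}

int main(void)
{
    float in[4] = { 1.0f, 2.0f, 3.0f, 4.0f }, out[4], k = 2.0f;
    scale_restrict(out, in, &k, 4);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}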
Ric Werme says (July 27, 2013 at 5:58 pm):

[….] While C is one of the main languages I use, C is an archaic language. More so than Fortran.

C is not much more than a medium-level language for the DEC pdp-11 minicomputer. Its ANSI standard (1989) even kept its goto statement. At least C++ prohibits goto into a compound statement.
Ric Werme says (as linked above):

Note that C does not have OOP elements [….]

True, but wouldn’t you say that was a major rationale for the invention of C++?
Notes: *2: Dennis M. Ritchie 1993: “The Development of the C Language”. http://www.cs.bell-labs.com/who/dmr/chist.html.

Pamela Gray
August 4, 2013 10:21 am

ding ding ding ding. Brain just bumped up against limit***warning warning***shut down is imminent***warning warning***blip…………………………..

steverichards1984
August 4, 2013 11:05 am

As @Compu Gator was alluding to, it seems to be a badge of pride amongst ‘clever’ C programmers to ‘optimise’ each line of code.
So much so, that it becomes almost impossible for someone who needs to maintain that code to understand it 2 years later.
This is why we use subsets of C and C++.
It goes so much against the grain for ‘clever’ programmers to accept programming constraints (i.e. do not do this, do that instead) that a whole set of tools and standards has grown up to ‘control’ the worst excesses of clever programmers. See MISRA etc. for a subset standard ‘proven’ to work in safety-critical applications.
By use of ‘proven’ subsets one can be assured that the same code will produce the same results on any hardware that obeys the standard.
Which is what we want isn’t it?
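To illustrate the subset idea (and not to quote any actual MISRA rule), here is the same string copy written the terse way a ‘clever’ C programmer might, and the constrained way a MISRA-style standard pushes you towards: one side effect per statement, an explicit test, and an explicit bound. Both routines are invented for this sketch.

/* Illustrative only -- not a quotation of any MISRA rule.  The same
 * string copy written tersely and in a constrained, maintainable style. */
#include <stdio.h>

/* Terse idiom: assignment, pointer arithmetic, and the loop test all
 * folded into one expression.                                           */
static void copy_clever(char *dst, const char *src)
{
    while ((*dst++ = *src++))
        ;
}

/* Constrained style: one side effect per statement, explicit test,
 * explicit bound so the copy can never overrun the destination.         */
static void copy_plain(char *dst, const char *src, unsigned long dst_size)
{
    unsigned long i = 0;
    while ((i + 1u) < dst_size && src[i] != '\0') {
        dst[i] = src[i];
        i++;
    }
    dst[i] = '\0';
}

int main(void)
{
    char a[16];
    char b[16];
    copy_clever(a, "same result");
    copy_plain(b, "same result", sizeof b);
    printf("%s / %s\n", a, b);
    return 0;
}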