Another uncertainty for climate models – different results on different computers using the same code

New peer-reviewed paper finds that the same global forecast model produces different results when run on different computers

Did you ever wonder how spaghetti like this is produced and why there is broad disagreement in the output that increases with time?

[Figure: 73 CMIP5 models vs. observations, 20°N to 20°S mid-troposphere, 5-year means. Graph by Dr. Roy Spencer]

Increasing mathematical uncertainty from the initial starting conditions is the main reason. But some of it might also be due to the fact that, while some of the models share common code, they don't produce the same results with that code, owing to differences in how CPUs, operating systems, and compilers handle the arithmetic. Now, with this paper, we can add software uncertainty to the list of known unknowns about climate and climate modeling.
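To see the mechanism in miniature, here is a hedged sketch of my own (in Python, not the model's Fortran, and not from the paper): summing the same numbers in a different order, which is exactly the kind of reassociation different compilers and optimization levels are free to do, already changes the rounded result.

```python
# Toy sketch (not from the paper): the same sum accumulated in different
# orders rounds differently in single precision -- the kind of freedom
# compilers and optimization levels have when they reorder arithmetic.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)

def naive_sum(values):
    """Plain left-to-right accumulation in single precision."""
    acc = np.float32(0.0)
    for v in values:
        acc = np.float32(acc + v)
    return acc

print(naive_sum(x))              # one rounding history
print(naive_sum(x[::-1]))        # same numbers, opposite order
print(x.sum(dtype=np.float32))   # NumPy's pairwise summation, a third answer
```

The three printed values typically differ in their trailing digits; none is "wrong", they simply carry different rounding histories.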

I got access to the paper yesterday, and its findings were quite eye-opening.

The paper was published on July 26, 2013 in Monthly Weather Review, a publication of the American Meteorological Society. It finds that the same global forecast model, run on different computer hardware and operating systems with no other changes, produces different results, here measured by the 500-hPa geopotential height.

They say that the differences are…

“primarily due to the treatment of rounding errors by the different software systems”

…and that these errors propagate over time, meaning they accumulate.

According to the authors:

“We address the tolerance question using the 500-hPa geopotential height spread for medium range forecasts and the machine ensemble spread for seasonal climate simulations.”

“The [hardware & software] system dependency, which is the standard deviation of the 500-hPa geopotential height [areas of high & low pressure] averaged over the globe, increases with time.”

The authors find:

“…the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.”

The initial conditions of climate models have already been shown by many papers to produce significantly different projections of climate.

It makes you wonder if some of the catastrophic future projections are simply due to a rounding error.

Here is how they conducted the tests on hardware/software:

Table 1 shows the 20 computing environments, including Fortran compilers, parallel communication libraries, and optimization levels of the compilers. The Yonsei University (YSU) Linux cluster is equipped with 12 Intel Xeon CPUs (model name: X5650) per node and supports the PGI and Intel Fortran compilers. The Korea Institute of Science and Technology Information (KISTI; http://www.kisti.re.kr) provides a computing environment with high-performance IBM and SUN platforms. Each platform is equipped with a different CPU: the Intel Xeon X5570 for the KISTI-SUN2 platform, the Power5+ processor of the Power 595 server for the KISTI-IBM1 platform, and the Power6 dual-core processor of the p5 595 server for the KISTI-IBM2 platform. Each machine has a different architecture and approximately five hundred to twenty thousand CPUs.

[Table 1: the 20 computing environments (Fortran compilers, parallel communication libraries, and compiler optimization levels)]

And here are the results:

Table 2. Globally averaged standard deviation of the 500-hPa geopotential height eddy (m) from the 10-member ensemble with different initial conditions for a given software system (i.e., the initial-condition ensemble), and the corresponding standard deviation from the 10-member ensemble with different software systems for a given initial condition (i.e., the software-system ensemble).

While the differences might appear small to some, bear in mind that these standard deviations come from only 10 days' worth of integration with a short-term global forecast model, not a decades-out global climate model. Since the software effects observed in this study are cumulative, imagine what the differences might be after years of simulated time, as in GCMs.
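To make "cumulative" concrete, here is a toy illustration (a one-line chaotic map, emphatically not a GCM): start two runs that differ by a single unit in the last place, roughly the size of a rounding discrepancy, and watch the gap grow.

```python
# Toy illustration (a chaotic one-variable map, not a GCM): two runs that
# differ by one unit in the last place -- rounding-error sized -- separate
# until they are completely decorrelated.
import math

def logistic(x, r=4.0):
    return r * x * (1.0 - x)   # the classic chaotic logistic map

a = 0.2
b = math.nextafter(0.2, 1.0)   # 0.2 plus one ulp (~2.8e-17)

for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.3e}")
# The gap grows roughly exponentially and is of order one by about step 55.
```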

Clearly, a long-term evaluation of this effect is needed for the GCMs used to project future climate, to determine whether it affects those models as well and, if so, how much of their output is real and how much is simply accumulated rounding error.

Here is the paper:

An Evaluation of the Software System Dependency of a Global Atmospheric Model

Song-You Hong, Myung-Seo Koo, Jihyeon Jang, Jung-Eun Esther Kim, Hoon Park, Min-Su Joh, Ji-Hoon Kang, and Tae-Jin Oh. Monthly Weather Review, 2013, early online release (e-View). doi: http://dx.doi.org/10.1175/MWR-D-12-00352.1

Abstract

This study presents the dependency of the simulation results from a global atmospheric numerical model on machines with different hardware and software systems. The global model program (GMP) of the Global/Regional Integrated Model system (GRIMs) is tested on 10 different computer systems having different central processing unit (CPU) architectures or compilers. There exist differences in the results for different compilers, parallel libraries, and optimization levels, primarily due to the treatment of rounding errors by the different software systems. The system dependency, which is the standard deviation of the 500-hPa geopotential height averaged over the globe, increases with time. However, its fractional tendency, which is the change of the standard deviation relative to the value itself, remains nearly zero with time. In a seasonal prediction framework, the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.

h/t to The Hockey Schtick

281 Comments
steverichards1984
July 28, 2013 11:29 am

For the ins and outs of rounding read this:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
I also note that toolset provider http://www.ldra.com/index.php/en/products-a-services/availability/source-languages does not support FORTRAN, implying that GCMs contain wholly unverified code.

ikh
July 28, 2013 11:32 am

MarkG:
Intel chips have an instruction pipeline that they try to keep full and not allow to stall. They also support hyper-threading, branch prediction, and out-of-order execution. These are all used to maximise CPU throughput, so the chip can make maximum use of available hardware that is not currently busy. They even have hidden registers to store intermediate results in until they are needed, all the while maintaining the external appearance of doing things sequentially.
IBM's Power chips have similar functionality, and I guess the same would be true of Sun's SPARC chips.
/ikh

July 28, 2013 11:35 am

“cliff mass says: July 28, 2013 at 8:52 am
Anthony and others,
As a numerical modeler at the University of Washington, let me make something clear: this issue is not really relevant to climate prediction–or at least it shouldn't be. The fact that differences in machine architecture, number of processors, etc. will change a deterministic weather forecast over an extended period is well known. It is a reflection of the fact that the atmosphere is a chaotic system and that small differences in initial state will eventually grow. This is highly relevant to an initial value problem, like weather forecasting. Climate modeling is something else…it is a boundary value problem…in this case the radiative effects due to changes in greenhouse gases.
To do climate forecasting right, you need to run an ensemble of climate predictions, and for a long climate run the statistical properties of such climate ensembles should not be sensitive to the initial state. And they should reveal the impacts of changing greenhouse gas concentrations.
…cliff mass, department of atmospheric sciences, university of Washington”

My bolding
Cliff:
Without delving into other parts of your post, if I understand your 'doing climate forecasting right' phrase:
–Ensemble – an arranged group of models chosen for their particular contribution to the whole. Alright, I think I follow that line; I have my doubts, but they're cool compared to the following.
–It is the following two lines where I get lost: a long climate run where the statistical properties are not sensitive to the initial state? Perhaps you need to explain this better?
–boundary value problem, …radiative effects due to changes in greenhouse gases?
There is a leap of logic where ensembles play together correctly, followed by a larger leap of logic to a suddenly valid model with statistics immune to initial states; followed by an abyss to where climate models are GHG specific boundary issues…
Now, I am at a loss to understand the details of how your scenario is different from the CAGW CO2 driven GCMs which are the topic of the above research submission. Can you elucidate?

Dan Hughes
July 28, 2013 11:46 am

Lots o’ URLs. I hope it gets through.
With regard to accurate numerical integration of simple ordinary differential equation (ODE) systems that exhibit complex chaotic response see the following, and references therein.
Shijun Liao, “On the reliability of computed chaotic solutions of non-linear differential equations,” Tellus (2009), 61A, 550–564,
DOI: 10.1111/j.1600-0870.2009.00402.x
Benjamin Kehlet And Anders Logg, “Long-Time Computability Of The Lorenz System”
http://home.simula.no/~logg/pub/papers/KehletLogg2010a.pdf
Benjamin Kehlet And Anders Logg, “Quantifying the computability of the Lorenz system,”
http://arxiv.org/pdf/1306.2782.pdf
These papers focus on the original Lorenz equation system of 1963. The papers are related to discussions that started with the following:
Teixeira, J., Reynolds, C. A. and Judd, K. 2007. Time step sensitivity of nonlinear atmospheric models: numerical convergence, truncation error growth, and ensemble design. J. Atmos. Sci. 64, 175–189.
http://journals.ametsoc.org/doi/pdf/10.1175/JAS3824.1
Yao, L. S. and Hughes, D. 2008b. Comments on ‘Time step sensitivity of nonlinear atmospheric models: numerical convergence, truncation error growth, and ensemble design’. J. Atmos. Sci. 65, 681–682.
http://journals.ametsoc.org/doi/pdf/10.1175/2007JAS2495.1
Teixeira, J., Reynolds, C. A. and Judd, K. 2008. Reply to Yao and Hughes’ comments. J. Atmos. Sci. 65, 683–684.
http://journals.ametsoc.org/doi/pdf/10.1175/2007JAS2523.1
Yao, L. S. and Hughes, D. 2008a. Comment on ‘Computational periodicity as observed in a simple system’ By Edward N. Lorenz (2006). Tellus 60A, 803–805.
http://eaps4.mit.edu/research/Lorenz/Comp_periodicity_06.pdf and http://www.tellusa.net/index.php/tellusa/article/download/15298/17128
E. N. Lorenz, Reply to comment by L.-S. Yao and D. Hughes, Tellus A, 60 (2008), pp. 806–807.
eaps4.mit.edu/research/Lorenz/Reply2008Tellus.pdf
Kehlet and Logg have determined the range of time over which the Lorenz system is correctly integrated as a function of (1) the order of the discrete approximation, and (2) the number of digits that are used to carry out the arithmetic. The former issue has not been noted in the discussions here.
The Lorenz system is a system of three simple ODEs and these show complex behavior in the calculated solutions. The original usage of complex within the context of chaos: the response, not the model. Some of the usual physical realizations of chaotic behavior are based on simple mechanical systems.
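A rough sketch of the precision half of that result (my own construction in Python, not the Kehlet and Logg code): integrate the Lorenz 1963 system with the same fixed-step RK4 scheme in single and in double precision and watch when the two solutions part ways.

```python
# Rough sketch (mine, not Kehlet and Logg's code): the Lorenz 1963 system
# integrated with the same fixed-step RK4 in single vs. double precision.
# Only the digits carried differ, yet the trajectories eventually separate.
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z],
                    dtype=v.dtype)

def rk4_step(v, dt):
    k1 = lorenz(v)
    k2 = lorenz(v + 0.5 * dt * k1)
    k3 = lorenz(v + 0.5 * dt * k2)
    k4 = lorenz(v + dt * k3)
    return v + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt = 0.01
v32 = np.array([1.0, 1.0, 1.0], dtype=np.float32)
v64 = np.array([1.0, 1.0, 1.0], dtype=np.float64)

for n in range(1, 3001):                       # 30 model time units
    v32 = rk4_step(v32, np.float32(dt))
    v64 = rk4_step(v64, dt)
    if n % 500 == 0:
        print(f"t = {n * dt:5.2f}   |x32 - x64| = {abs(float(v32[0]) - v64[0]):.3e}")
```

Shrinking the step or raising the order of the scheme pushes the separation time out, but for a fixed word length it cannot be pushed arbitrarily far, which is the computability point.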
Calculations for ODEs and PDEs that cannot exhibit chaotic response, but whose discrete approximations do so when the solutions are not accurately resolved, have been discussed in the following.
Cloutman, L. D. 1996. A note on the stability and accuracy of finite difference approximations to differential equations. Report No. UCRL-ID-125549, Lawrence Livermore National Laboratory, Livermore, CA.
Cloutman, L. D. 1998. Chaos and instabilities in finite difference approximations to nonlinear differential equations. Report No. UCRL-ID-131333, Lawrence Livermore National Laboratory, Livermore, CA.
http://www.osti.gov/bridge/servlets/purl/292334-itDGmo/webviewable/292334.pdf
See also: Julio Cesar Bastos de Figueiredo, Luis Diambra and Coraci Pereira Malta, “Convergence Criterium of Numerical Chaotic Solutions Based on Statistical Measures,”
http://www.scirp.org/Journal/PaperDownload.aspx?paperID=4505 DOI: 10.4236/am.2011.24055
Nick Stokes says:
July 27, 2013 at 12:26 pm
” It’s constrained, basically by speed of sound. You have to resolve acoustics, so a horizontal mesh width can’t be (much) less than the time it takes for sound to cross in a timestep (Courant condition). For 10 day forecasting you can refine, but have to reduce timestep in proportion. ”
NWP and GCM applications do not time-accurately resolve pressure propagation. A few simple calculations will show that this requirement would lead to an intractable computation. Not the least problem would be that the largest speed of sound coupled with the smallest grid dimension would dictate a time-step size restriction for the entire grid system.
The pressure-related derivatives are handled in an implicit manner. I seem to recall that about 20 minutes is the order of the time-step size for GCM applications.
All corrections to incorrectos will be appreciated.

Carbon500
July 28, 2013 11:53 am

Dirk H: Thank you for your comments in reply to my questions about the graph. All is now clear.

ikh
July 28, 2013 12:01 pm

steverichards1984
Nice comments and links. I think that you have very elegantly reinforced my point that floating-point arithmetic should be avoided like the plague unless absolutely necessary, and should only be done by those who fully understand the pitfalls.
/ikh

Brian H
July 28, 2013 12:22 pm

Mark Negovan says:
July 28, 2013 at 6:03 am
This is one of the most important posts here at WUWT on the subject of using GCM crystal balls and is one of the first ones that I have seen here that shows the true computational insanity of all of the climate models.

errors and uncertainty at any time step is [are] unbounded and the error bound will increase to the extremes of where the climate has been in the past in a very short time.

Concisely! Thanks for this, Mark.
The tendency or practice of the AGW cult to hand-wave away uncertainty as fuzz that can be averaged out of existence and consideration is the fundamental falsehood and sham on which their edifice rests. In fact, the entire range of past variation is equally likely under their assumptions and procedures. Which means they have precisely nothing to say, and must be studiously and assiduously disregarded and excluded from all policy decisions.

Talent-KeyHole Mole For 30+ Years
July 28, 2013 12:40 pm

To be honest, I find it laughable that a paper such as this is considered publishable in any science journal. Really, floating-point representation in byte codes, as opposed to long-integer representations, and the endianness (most significant bit first or least significant bit first, in 8-bit, 16-bit, 32-bit, 64-bit and 128-bit word lengths) of the CPU + I/O + memory + operating system have been known since the 1970s. Do recall the 1970s CPU wars between Intel (little-endian), IBM (big-endian) and DEC (middle- and mixed-endian).
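For what it is worth, the byte-order point is easy to show (a side sketch of mine in Python, not anything from the paper):

```python
# Side sketch (mine): the same 64-bit float, laid out little-endian (x86 style)
# and big-endian (classic IBM POWER style). Same bits, opposite byte order.
import struct

value = 0.1
print(struct.pack('<d', value).hex())   # little-endian: 9a9999999999b93f
print(struct.pack('>d', value).hex())   # big-endian:    3fb999999999999a
# Endianness affects how numbers are stored and exchanged, not the arithmetic
# itself; binary files written on one machine and read naively on the other
# come back scrambled unless the format pins the byte order.
```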
But then again, the ‘journal’ is a Meteorology journal and not a scientific-technical journal.

July 28, 2013 12:45 pm

I read all the posts up to jorgekafkazar's (July 28, 2013 at 12:00 am) and finally got some insight. My understanding is that the climate emulators are run multiple times to generate a set of results from which a statistical average and standard deviation envelope are generated. As DirkH pointed out, it could also be called simulation.
Years ago I was shocked when I read references suggested on RealClimate wherein climate modelers would effuse over some physical phenomenon like ENSO or monsoonal flow when it emerged spontaneously in the model. Shiny! Such phenomena have been used for countless model "validations" as well as "predictions" of their future behavior. But it is 100% crap.
The climate modelers recognize that their emulations are highly dependent on parameter settings, so they choose (mostly cherry pick) a few parameters and ranges to demonstrate that the statistics don’t change very much. But AFAICS there is never any systematic validation of model parameterizations, and it may be impossible since the climate measurements used for hindcasting are quite poor in many cases.
I always believed that the demonstrated non-skill of modern ENSO prediction invalidates the climate models. Need to eliminate the inconvenient global MWP? Just “emulate ENSO” and voila, Greenland and Europe stay mostly warm but persistent La Nina compensates by cooling the rest of the globe for 100’s of years. But they are not emulating ENSO, but emulating persistent La Nina. In a narrow sense it can be validated, but only for an artificial skill in hindcasting; thus it can’t be used for predictions. ENSO is just one of the factors (albeit an important one) without which global atmospheric temperature cannot be understood.

July 28, 2013 12:47 pm

“Gunga Din says: July 28, 2013 at 10:57 am

Why bet trillions of dollars, national economies, and countless lives on such uncertainty?”

You are asking for a dreary geek history you know.
— When the computers first hit, they were fielded by large companies, teams of techs, engineers and programmers. This actually built a huge trust in computing; distrusted sure, but not reviled. Programming then was a dark secretive process that occurred with card punch rooms. Dark and secretive because almost no-one actually knew a real programmer.
— Personal Computers hit the world. When IBM’s PCs and clones came out, most booted right into BASIC, IBM into IBM BASIC, clones into Microsoft’s version which was nearly identical. You required a formatted for OS floppy drive or a large cable linked box to hold a 10MB hard drive.
— Almost everybody took it upon themselves, when faced with these monstrosities, to learn programming… sort of. Others who had taken college courses in systems, coding and design for mainframes were ‘discovered’ not working in computers, brought in and sat down in front of the monsters. Peter Norton’s DOS book was our friend for finding and working out issues, like HP’s plotter install with hard coded config.sys and auto.bat happily playing havoc with the computer. Or Lotus’s version 1 & 1A installs assuming that the computer’s only use is for Lotus.
— The folks who solved these problems quickly and did ad-hoc coding found themselves tailoring PCs to fit the specific business functions of the user. I’ll also mention dBase, which introduced thousands to databases, and which I hated extensively and deleted off of every computer under my management. Why? dBase did not have procedures for recovering from a hard drive failure or from making a database too large, nor normal procedures for installing the last backup, because the backups would not install under different DOS or dBase versions. After many hours installing different versions of DOS and dBase, I considered trying to throw one PC into the Gulf of Mexico, which was a good mile from where the computer was.
The next step is the final leap, where computing moved to everyone’s desktops: email became common, and spreadsheets and word processing programs became de facto business instruments. People became accustomed to computers serving their needs accurately and almost immediately. By and large, the great majority of users are ignorant of the mystical details internal and relevant to computing.
So it became habitual that, when faced with a barely understood computer topic, nodding agreement and playing along was almost required. The more intense the mathematics or code requirements, the more glazed and accepting people became. The sci-fi shows went far toward making this approach common, as their computer models could do everything almost immediately.
When I was sat down in front of a PC, major financial, commodity and productivity reports were produced every accounting period (28 days for us). Flash, or rough, reports were forced out every week by sheer dint of labor. When I retired, those same reports, depending on the background sources, could be had hourly.
What’s more is that people grew to trust those immediate gratification reports, almost absolutely.
Enter models. In many people’s mind, models are amazing Sci-fi things of perfection; so all they care about is the final result, not the details or inner workings.
As a perspective point, these people become the faithful, much as devoted trekkies, Society for Creative Re-enactment re-enactors, D&D gamers and so on. Only with many of the Sci-fi crowd, they at least understand there is a ‘real’ world separate from the ‘fantasy’ world’. Think of people wearing hip boots, super suits and waving hockey sticks and you wonder about their grasp of what is real.
The past decade, with so many online companies delivering goods within days, this immediate-gratification syndrome has separated many people from reality, especially when their ‘favs’, deeply entrenched in CO2 blood money, refuse to acknowledge their debunked or opaque science, GCM failures, the climate’s failure to adhere, bad math, bad code, and bad research and papers, and keep declaring CAGW alarmism valid.
That’s my perspective anyway, from the parts of my career spent watching people’s eyes as I tried to explain their equipment and software and put their programming requests into perspective.
“I want you to show me how to put this into a spreadsheet.” (Lotus 1A no less) Explained the Manager of safety waving a large book at me.
“Say what?
Do you mean put pictures in a spreadsheet? That’s a lot of pictures and they’re memory hogs” I responded.
“No!” he says. “I want you to show me how to put this entire book into the software so I can immediately pull up all relevant rules and procedures.”
“Um.” I stammered. “Spreadsheets are a bad fit for that purpose. It would be better for you to have a real program designed for you.”
“Great!” He bellowed. “I want you to start on it immediately.”
“Ah, I can’t do that.”
“Jerry, has me booked solid for months. You’ll need to talk to him.”
Before the Safety Manager got his breath back, I exited and headed straight to the office of Jerry, my boss, who had found me delivering mail and brought me inside to A/C, water fountains and other pleasures. Some topics needed preparation and quick talking before Theo subdivision goes undeliverable sub atomic time units.
Others should and will have perspectives on the social question because of their unique education and experiences. Pitch in! Or should this be under a whole different thread?

peterg
July 28, 2013 1:28 pm

Roundoff error can be modelled as a form of noise. By postulating a large sensitivity to CO2, the resulting model is unstable and greatly amplifies this noise. A stable model, one that approximates thermodynamic reality, would tend to decrease and minimize this roundoff noise.
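A minimal sketch of that idea (an analogy only, not a climate model): feed the same roundoff-sized kick into a damped recursion and into an amplifying one.

```python
# Minimal analogy (not a climate model): a roundoff-sized kick injected each
# step stays bounded under stable feedback (|a| < 1) and is blown up by
# unstable feedback (|a| > 1).
def run(a, steps=500, noise=1e-16):
    x = 0.0
    for _ in range(steps):
        x = a * x + noise          # feedback a, plus a tiny kick every step
    return x

print("stable,   a = 0.9:", run(0.9))   # settles near noise / (1 - a), ~1e-15
print("unstable, a = 1.1:", run(1.1))   # grows like a**steps, macroscopic here
```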

DirkH
July 28, 2013 1:35 pm

ATheoK says:
July 28, 2013 at 11:35 am
“–It is the following two lines that I get lost; a long climate run where the statistical properties are not sensitive to the initial state? Perhaps you need to explain this line better?
–boundary value problem, …radiative effects due to changes in greenhouse gases?”
He and the other climate-scientist programmers ignore that the very purpose of simulating a chaotic system is to find out about the strange attractors. Otherwise, why simulate all the chaos at all? Why not go with Callendar’s simple 1938 model, which outperforms modern GCMs (see Climate Audit)?
And when you have strange attractors – that can flip the system into a totally different state – why in the world do you hope that the outcomes of several runs will be normally or equally distributed so that averaging would make sense?
There is no logic in that.

DirkH
July 28, 2013 1:39 pm

peterg says:
July 28, 2013 at 1:28 pm
“Roundoff error can be modelled as a form of noise. By postulating a large sensitivity to co2, the resulting model is unstable, and greatly amplifies this noise. A stable model, one that approximates thermodynamic reality, would tend to decrease and minimize this roundoff noise.”
We know that amplification happens; otherwise weather could not be chaotic. So they can’t avoid that problem. Oh, and we know that the atmosphere is unstable. Otherwise there would be no weather at all.

July 28, 2013 1:44 pm

DirkH has said a lot on this thread that I am not qualified to comment on.
But I note he has beaten all-comers and is unassailed now so he seems to know his stuff.
And I can comprehend his comment at July 28, 2013 10:18 am.
It is worth everyone noting it; he is spot on right here:

“The more important question is how do you validate a model of a chaotic system? ”
-get as much data of the real system as possible for a time interval.
-initialize the model with a state that is as close to the state of the real system at the start of your reference time interval as possible.
-run the model.
-compare with what the real system did.
And that’s what climate scientists obviously never did. They do their hindcasting but they initialize with a random state. Now, is that incompetence or malice or both?

My answer: It’s incompetence.
They aren’t smart enough to do this maliciously. If they were in a conspiracy it wouldn’t be so obvious they are failing at modelling.

July 28, 2013 1:44 pm

DirkH says:
July 27, 2013 at 11:15 am
a system is chaotic IFF its simulation on a finite resolution iterative model develops an error that grows beyond any constant bound over time?
=========
This divergence is at the heart of chaotic systems, and is routinely ignored by climate modellers that insist the “noise” will converge to zero over time, similar to the way that heads and tails on a coin balance out in the long term.
The problem is that a great deal of mathematical theory assumes that numbers are infinitely precise. So in theory the errors don’t accumulate, or they accumulate so slowly that the effects are not significant.
However computers do not store numbers!! They store binary approximations of numbers. Similar to the fraction 1/3, which results in an infinitely long decimal fraction, there are an infinite number of real numbers that cannot be stored exactly on computers.
When these binary approximations are used in models of chaotic systems, or in just about any non-linear problem, the errors quickly grow larger than the answer. You get ridiculous results. We have numerical techniques to minimize these problems, but it remains a huge problem in computer science.
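A quick way to see the "computers store binary approximations" point (standard Python, nothing specific to the paper):

```python
# Quick illustration: the literal 0.1 is stored as the nearest binary fraction,
# not as one tenth, and the discrepancies do not always cancel.
from decimal import Decimal
from fractions import Fraction

print(Decimal(0.1))      # 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))     # 3602879701896397/36028797018963968 (denominator is 2**55)
print(0.1 + 0.2 == 0.3)  # False: three separately rounded values don't line up
```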
If the “noise” in computer models of chaotic systems did actually converge to zero over time, then we could make all sorts of reliable long term predictions about both climate, and the stock market. Climate modellers would not need government grants.
They could simply apply climate forecasting techniques to the stock markets and use the profits generated to pay for all the super computers they could want. Heck, there would be no need for carbon taxes. Climate modellers could use the winnings from their stock market forecasts to pay for the entire cost of global warming.

AndyG55
July 28, 2013 1:51 pm

“Oh, and we know that the atmosphere is unstable. Otherwise there would be no weather at all.”
Actually, weather is the atmosphere stabilising itself.
The atmosphere is inherently stable; it’s the Earth that keeps rotating.

July 28, 2013 1:52 pm

Dan Hughes says: July 28, 2013 at 11:46 am
“NWP and GCM applications do not time-accurately resolve pressure propagation. A few simple calculations will show that this requirement would lead to an intractable computation. Not the least problem would be that the largest speed of sound coupled with the smallest grid dimension would dictate a time-step size restriction for the entire grid system.”

They have to. It’s how force is propagated. You either have to resolve the stress waves or solve a Poisson equation.
The Courant limit for a 100 km grid at speed of sound 334 m/s is 333 sec, or 5.5 min. They seem to be able to stretch it a bit.
Mark Negovan says: July 28, 2013 at 6:03 am
“If you are designing an airplane wing using CFD, you absolutely have to determine the computational uncertainty to properly validate the engineering model.”

It’s a good idea to determine computational uncertainty, but it doesn’t validate anything. Any CFD program will behave like this numerical weather forecasting program, when used to model a transient flow from initial conditions. Errors will grow. The chaotic nature of flow is inevitably in practice recognised with some form of turbulence modelling, explicitly giving up on the notion that you can get a predictive solution to floating point accuracy.
I bet you can’t get two different computers to keep a vortex shedding sequence in exactly the same phase for 100 cycles. But they will still give a proper vortex street, with frequency and spacing.
LdB says: July 28, 2013 at 7:47 am
“Nick that is all blatantly wrong you may not know how to deal with the problem but many scientists do so please don’t lump us all in with your limited science skill and abilities.”

Unlike most of the armchair experts here, I actually write CFD programs.

Nick Stokes
July 28, 2013 2:06 pm

“The Courant limit for a 100 km grid at speed of sound 334 m/s is 333 sec”
OK, 300 sec.
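Written out, the corrected estimate is simply (taking a Courant number of one; the admissible constant depends on the scheme):

\[
\Delta t \;\le\; \frac{\Delta x}{c} \;=\; \frac{100\,000\ \mathrm{m}}{334\ \mathrm{m\,s^{-1}}} \;\approx\; 299\ \mathrm{s} \;\approx\; 5\ \mathrm{min}.
\]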

Blarney
July 28, 2013 2:12 pm

ferd berple says:
July 28, 2013 at 1:44 pm
“If the “noise” in computer models of chaotic systems did actually converge to zero over time, then we could make all sorts of reliable long term predictions about both climate, and the stock market.”
No, you’re confusing simulation with forecast. Try this mental experiment: you have a machine that allows you to create a perfect duplicate, 1:1 size, of the whole Earth in its current state. This duplicate Earth is of course a perfect “simulation” of every process at work on the real Earth, as everything works there just exactly the same as it would on the original planet. Now let a few years pass and compare the weather in a specific day and place on the two planets- you’ll find that the meteorological weather on the two planets has diverged to the point that you can’t use what’s happening on Earth B to say what’s happening on Earth A. This even if the “simulation” you’re using is absolutely perfect, in that it’s not even a simulation at all (it’s a perfect copy of the real thing). Tiny differences in noise coming from space and quantum fluctuations have made the two meteorological systems completely unrelated in a short time.
So the problem is not in the computer models; it’s in the nature of chaotic systems. Rounding errors in numerical simulations are just a source of small background noise: they influence the state of the system, but (if the system is well modeled and the noise is kept low, and depending on the timescales we’re trying to study) they may not influence the general pattern of its evolution.
On the other hand, claiming that the noise introduced into the simulations by rounding errors is driving their average towards a warmer world is equivalent (if the errors are small and random) to saying that a noise of equivalent magnitude and randomness is keeping the climate of the Earth more stable than predicted. Which is clearly nonsense.

July 28, 2013 2:17 pm

The problem with error divergence is not limited to chaotic systems. Even the most basic of linear programming models deliver very inaccurate results using the techniques taught in high school and university math classes.
Do you remember linear programming in high school? The teacher drew a 3×3 matrix on the board, with an equals sign pointing to a column of three numbers on the right. The solution was to add, subtract, multiply and divide the rows so that you got all 1’s on the diagonal. The column on the right would then give you your solution for X, Y, Z.
What the teacher usually didn’t tell you was that these problems were almost always contrived so that the answers were integers or well behaved decimal fractions. What they didn’t tell you is that when you program a computer to solve this problem using the same techniques, the round off errors quickly make the answers useless except for very well behaved problems.
Even simple linear programming models on computers deliver huge errors using standard mathematical techniques. To reduce the error on computers, we apply numerical techniques such as back iteration. For example, we fudge the results slightly to see whether it reduces or increases the error; if it reduces the error, we continue to fudge the results in the same direction. Otherwise we reverse the sign of the fudge and try again. We continue this process until the error is acceptably small.
So now you know. Even on linear models we need to “fudge” the answer on computers to try and minimize the round off errors. Try and apply this technique to a climate model. You end up with a whole lot of fudge and not a lot of answer.
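What ferd describes, row-reducing a small linear system and then nudging the answer against its error, reads like classical iterative refinement. Here is a hedged sketch of that technique (my reading of the comment, not necessarily the exact procedure he has in mind): solve in low precision, then correct using residuals computed in higher precision.

```python
# Hedged sketch of iterative refinement (my reading of the "back iteration"
# described above): solve A x = b cheaply in single precision, then repeatedly
# correct x using the residual computed in double precision.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
x_true = rng.standard_normal(50)
b = A @ x_true

A32 = A.astype(np.float32)                     # the "cheap" low-precision solve
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
print("error after single-precision solve:", np.max(np.abs(x - x_true)))

for _ in range(3):                             # a few refinement sweeps
    r = b - A @ x                              # residual in double precision
    dx = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x = x + dx                                 # the "fudge" in the right direction

print("error after refinement:           ", np.max(np.abs(x - x_true)))
```

In a production solver one would factor A once and reuse the factorization for each correction; the point here is only that the corrections pull the low-precision answer back toward the true solution, provided the problem is not too ill-conditioned.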

Compu Gator
July 28, 2013 2:18 pm

http://www.cs.berkeley.edu/~wkahan/ieee754status/754story.html:

An Interview with the Old Man of Floating-Point:
Reminiscences elicited from William Kahan [U. Cal. (Berkeley)]
by Charles Severance [U. Michigan School of Information] 20 Feb. 1998
This interview underlies an abbreviated version to appear in the March 1998 issue of IEEE Computer.
Introduction
If you were a programmer of floating-point computations on different computers in the 1960’s and 1970’s, you had to cope with a wide variety of floating-point hardware. Each line of computers supported its own range and precision for its floating point numbers, and rounded off arithmetic operations in its own peculiar way. While these differences posed annoying problems, a more challenging problem arose from perplexities that a particular arithmetic could throw up. Some of one fast computer’s numbers behaved as non-zeros during comparison and addition but as zeros during multiplication and division; before a variable could be used safely as a divisor it had to be multiplied by 1.0 and then compared with zero. [….] On another computer, multiplying a number by 1.0 could lop off its last 4 bits.

I suspect that his “non-zeros” example refers to the CDC 6600 & 7600, which used 1’s-complement arithmetic, giving it signed zeros (i.e.: +0 distinct from -0). The “4 bits” example probably refers to the IBM S/360, which has a “hexadecimal exponent” (i.e.: binary E in exponent-part representing 16^E, thus normalizing the mantissa no more granularly than 4-bit nybbles).
This interview is more focused on the history of the IEEE 754 standard (neé “P754”: possibly more useful as a search term) than on the foibles of binary floating-point arithmetic. The coäuthors of P754 (in addition to Kahan) were Jerome Coonen and Harold Stone. George Taylor (a key person mentioned in the interview) had seen, with his own eyes, Kahan’s collection of numerous electronic calculators that erred in 1 repeatable way or another, as stashed in a desk drawer.

July 28, 2013 2:23 pm

Blarney says:
July 28, 2013 at 2:12 pm
Try this mental experiment: you have a machine that allows you to create a perfect duplicate, 1:1 size, of the whole Earth in its current state.
==========
I agree with you completely. Even two identical Earths will deliver two different results. I have written extensively on this, and the reason has nothing to do with round-off errors. There are a (near) infinite number of futures. We will arrive at one of these; which one is not at all certain. The odds of two identical Earths arriving at the same future are (near) infinitely small.
The question I was addressing was the computational limitation of solving even very trivial models using computers.

TomRude
July 28, 2013 2:41 pm

Bottom line: weather is conditioned by what’s happening in the first 1500 m, well below the 500 hPa level. Not only is it a computer problem, it is also a fundamental understanding problem. Hence the fallacy in so many climate papers of using the 500 hPa level to explain surface temperature series while disconnecting them from the synoptic reality that created them.
===
““We address the tolerance question using the 500-hPa geopotential height spread for medium range forecasts and the machine ensemble spread for seasonal climate simulations.”

“The [hardware & software] system dependency, which is the standard deviation of the 500-hPa geopotential height [areas of high & low pressure] averaged over the globe, increases with time.”

July 28, 2013 2:58 pm

You are walking down the road and come to a fork. You decide to toss a coin: heads you go right, tails you go left. You toss the coin and go right.
On an identical Earth, your identical copy comes to an identical fork. They toss an identical coin. Will they go left or right?
This is an unsolved question in physics. If your identical self will always go right, then much of our Western culture and belief is nonsense: the future was decided at the moment of creation of the universe, and nothing we can do or say will affect this. Our future actions, and all future actions, are already fully accounted for in the past. In which case, the notion that people are personally responsible for any of their actions is nonsense; your actions were determined at the point of creation and there is nothing you can say or do to alter this.
If however, your identical self does sometimes go left in the identical earth, then our actions are not fully determined at the point of creation. Creation may skew the odds that we go right, but we still might go left. This is much closer to how our belief systems operate, but it opens up a complete can of worms for predicting the future.
Suddenly the future is not deterministic, it is probabilistic. The future doesn’t exist as a point in time, it exists as a probability function, with some futures more likely than others, but none written in stone.
In which case, predicting what will happen in the future is physically impossible, computer models and mathematics notwithstanding. The very best you can hope for is to calculate the odds of something happening, and while the odds may favor a turn to the right, they don’t prohibit a turn to the left. The same goes for temperatures.

Dan Hughes
July 28, 2013 3:03 pm

http://wattsupwiththat.com/2013/07/27/another-uncertainty-for-climate-models-different-results-on-different-computers-using-the-same-code/#comment-1373526
Nick, if the method requires that the Courant sound-speed criterion be met, it can’t be “stretched a bit”. The grid is not 100 x 100 x 100 either. Your basic factor of 4 also isn’t “a bit”. Finally, accuracy generally requires that a step size much smaller than the stability limit be used.
