Guest essay by Anthony R. E.
Recently, I read a posting by Kip Hansen on Chaos and Climate (Part 1 and Part 2). I thought it would be easier for the layman to understand the behavior of computer models under chaotic conditions if there were a simple example that he could play with. I used the attached file in a course where we have lots of “black box” computer models with advanced cinematic features that laymen assume are reality.
Consider a thought experiment: a simple system in a vacuum consisting of a constant energy source with output per unit area q/A and a fixed receptor/emitter with area A and initial absolute temperature T0. The emitter/receptor has mass m and specific heat C, and σ is the Stefan–Boltzmann constant. The enclosure of the system is far enough away that heat emitted by the enclosure has no effect on the behaviour of the heat source and the emitter/receptor.
The energy balance in the fixed receptor/emitter at any time n is:
Energy in {(q/A)·A = q} + energy out {−2AσTn^4} + stored/released energy {−mC(Tn+1 − Tn)} = 0   eq. (1)
If Tn+1 > Tn the fixed body is a heat receptor, that is, it receives more energy than it emits and if Tn > Tn+1 it is an emitter, that is, it emits more energy than it receives. If Tn = Tn+1 the fixed body temperature is at equilibrium.
Eq. (1) can be rearranged as:
Tn+1 = Tn − (2Aσ/mC)Tn^4 + q/mC   eq. (2)
Since 2Aσ/mC is a constant, we can call it α, and since q/mC is also a constant, we can call it β, to simplify the calculations. Eq. (2) can then be written as:
Tn+1 = Tn − αTn^4 + β   eq. (3)
The reader will note that this equation exhibits chaotic properties as described by Kip Hansen in his previous post at WUWT on November 23, 2015, titled “Chaos & Climate – Part 2: Chaos = Stability”. At equilibrium, Tn+1 = Tn, and if the equilibrium temperature is T∞ then from equation (3):
T∞^4 = β/α, or α = β/T∞^4   eq. (4)
and eq. (3) can be written as
Tn+1 = Tn − βTn^4/T∞^4 + β, or Tn+1 = Tn + β(1 − Tn^4/T∞^4)   eq. (5)
Eq. (5) can easily be programmed in Excel. However, there are several ways of writing T^4: one programmer might write it as T*T*T*T, another as T^2 * T^2, another as T * T^3, and another as T^4. From what we learned in basic algebra, it should not matter, as all those expressions are the same. The reader can try all the variations of writing T^4. For purposes of illustration, let us look at β = 100 and T∞ = 243 (I use this out of habit, since without greenhouse gases the earth would be about −30 °C, or 243 K, but you could try other temperatures) with an initial temperature of 300 K. After the 17th iteration the temperature has reached its steady state, and the difference between coding T^4 as T^4 and as T*T*T*T is zero. This is the non-chaotic case. An extract from the Excel spreadsheet is shown below:
β = 100, T∞ = 243

| Iteration | w T^4 | w T*T*T*T | % diff. |
|---|---|---|---|
| 0 | 300.00 | 300.00 | 0.00 |
| 1 | 167.69 | 167.69 | 0.00 |
| 2 | 245.01 | 245.01 | 0.00 |
| 3 | 241.66 | 241.66 | 0.00 |
| 4 | 243.85 | 243.85 | 0.00 |
| 5 | 242.44 | 242.44 | 0.00 |
| 6 | 243.36 | 243.36 | 0.00 |
| 7 | 242.77 | 242.77 | 0.00 |
| 8 | 243.15 | 243.15 | 0.00 |
| 9 | 242.90 | 242.90 | 0.00 |
| 10 | 243.06 | 243.06 | 0.00 |
| 11 | 242.96 | 242.96 | 0.00 |
| 12 | 243.03 | 243.03 | 0.00 |
| 13 | 242.98 | 242.98 | 0.00 |
| 14 | 243.01 | 243.01 | 0.00 |
| 15 | 242.99 | 242.99 | 0.00 |
| 16 | 243.00 | 243.00 | 0.00 |
| 17 | 243.00 | 243.00 | 0.00 |
| 18 | 243.00 | 243.00 | 0.00 |
| 19 | 243.00 | 243.00 | 0.00 |
| 20 | 243.00 | 243.00 | 0.00 |
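For readers without Excel, the same column can be generated with a few lines of Python. This is only a sketch of eq. (5); the function and variable names are mine, not from the original spreadsheet.

```python
def iterate(T0, beta, T_inf, n):
    """Iterate eq. (5): T_{n+1} = T_n + beta * (1 - T_n**4 / T_inf**4)."""
    T = T0
    history = [T]
    for _ in range(n):
        T = T + beta * (1.0 - T**4 / T_inf**4)
        history.append(T)
    return history

# The non-chaotic case from the table above: beta = 100, T_inf = 243, T0 = 300 K.
temps = iterate(300.0, 100.0, 243.0, 20)
print(round(temps[1], 2))   # 167.69, matching the spreadsheet's first iteration
print(round(temps[17], 2))  # 243.0: settled at the equilibrium temperature
```

Each value of the `history` list corresponds to one row of the table.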
If β is changed to 170 with the same initial T and T∞, T does not gradually approach T∞ as in the non-chaotic case, but fluctuates as shown below. While the difference between coding T^4 as T^4 and as T*T*T*T is zero to the fourth decimal place, differences are quietly building up, as shown in the third table.
β = 170, T∞ = 243

| Iteration | w T^4 | w T*T*T*T | % diff. |
|---|---|---|---|
| 0 | 300.0000 | 300.0000 | 0.0000 |
| 1 | 75.0803 | 75.0803 | 0.0000 |
| 2 | 243.5310 | 243.5310 | 0.0000 |
| 3 | 242.0402 | 242.0402 | 0.0000 |
| 4 | 244.7102 | 244.7102 | 0.0000 |
| 5 | 239.8738 | 239.8738 | 0.0000 |
| 6 | 248.4547 | 248.4547 | 0.0000 |
| 7 | 232.6689 | 232.6689 | 0.0000 |
| 8 | 259.7871 | 259.7871 | 0.0000 |
| 9 | 207.7150 | 207.7150 | 0.0000 |
| 10 | 286.9548 | 286.9548 | 0.0000 |
| 11 | 126.3738 | 126.3738 | 0.0000 |
| 12 | 283.9386 | 283.9386 | 0.0000 |
By the 69th iteration, the difference between coding T^4 as T^4 and as T*T*T*T becomes apparent at the fourth decimal place, as shown below:
| Iteration | w T^4 | w T*T*T*T | % diff. |
|---|---|---|---|
| 69 | 88.6160 | 88.6153 | 0.0008 |
| 70 | 255.6095 | 255.6088 | 0.0003 |
| 71 | 217.4810 | 217.4824 | 0.0007 |
| 72 | 278.4101 | 278.4086 | 0.0005 |
| 73 | 155.4803 | 155.4850 | 0.0030 |
| 74 | 296.9881 | 296.9894 | 0.0004 |
The difference between the two codings builds up so rapidly that by the 95th iteration the difference is 4.5 per cent, and by the 109th iteration it is a huge 179 per cent, as shown below.
| Iteration | w T^4 | w T*T*T*T | % diff. |
|---|---|---|---|
| 95 | 126.5672 | 132.2459 | 4.4866 |
| 96 | 284.0558 | 287.3333 | 1.1538 |
| 97 | 136.6329 | 125.0047 | 8.5105 |
| 98 | 289.6409 | 283.0997 | 2.2584 |
| 99 | 116.5073 | 139.9287 | 20.1029 |
| 100 | 277.5240 | 291.2369 | 4.9412 |
| 101 | 158.3056 | 110.4775 | 30.2125 |
| 102 | 297.6853 | 273.2144 | 8.2204 |
| 103 | 84.8133 | 171.5465 | 102.2637 |
| 104 | 252.2905 | 299.3233 | 18.6423 |
| 105 | 224.7630 | 77.9548 | 65.3169 |
| 106 | 270.3336 | 246.1543 | 8.9442 |
| 107 | 179.9439 | 237.1541 | 31.7934 |
| 108 | 298.8261 | 252.9321 | 15.3581 |
| 109 | 80.0515 | 223.3877 | 179.0549 |
However, the divergence is not monotonically increasing. There are instances, such as the 104th iteration, where the divergence drops from 102 per cent to 18 per cent. One is tempted to conclude that, to the computer, T^4 ≠ T*T*T*T.
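Whether T^4 and T*T*T*T actually differ at the bit level depends on the language, compiler, and math library, so a portable way to reproduce the same amplification is to perturb the starting temperature by a whisker and watch the two runs decorrelate. A hedged Python sketch (parameter names and the perturbation size are mine):

```python
def trajectory(T0, beta=170.0, T_inf=243.0, n=300):
    """Iterate eq. (5) n times from T0 and return the whole path."""
    T, path = T0, []
    for _ in range(n):
        T = T + beta * (1.0 - (T / T_inf) ** 4)
        path.append(T)
    return path

a = trajectory(300.0)
b = trajectory(300.0 + 1e-9)  # perturb the start by a billionth of a kelvin
diffs = [abs(x - y) for x, y in zip(a, b)]

# The perturbation is amplified until the two runs bear no step-by-step
# relation to each other, yet both stay on the same bounded attractor.
print(max(diffs) > 1.0)  # True: the runs have fully decorrelated
```

This is the same mechanism as in the tables above: an initially invisible difference grows until the two series are unrelated, while both remain confined near T∞.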
Conclusion:
Under chaotic conditions, the same one-line equation with the same initial conditions and constants, but coded differently, will give vastly differing results. Under chaotic conditions, predictions made by computer models are unreliable.
The calculations are made for the purpose of illustrating the effect of instability in a simple non-linear dynamic system and may not have any physical relevance to more complex non-linear systems such as the earth’s climate.
Note:
For the above discussion, a LENOVO G50 64-bit computer was used. If a 32-bit computer is used, the differences become noticeable at much earlier iterations. A different computer processor with the same number of bits will also give different results.
![climate-model-1[1]](https://wattsupwiththat.files.wordpress.com/2013/09/climate-model-11.jpg?resize=576%2C576&quality=83)
This was a big problem 15 years ago. However, computer scientists at Lawrence Berkeley found the problem and addressed it using arbitrary-precision arithmetic. The reference to climate models was buried in a paper that they presented at a conference. At one point it was available for download. Needless to say, it is no longer available!
” -30o C or 2430 K ”
Since this is a technical discussion: the unit is the kelvin (K), not degrees Kelvin.
I think you meant “pedantic discussion”…
🙂
If they are going to tell us to “define your terms” over and over again — then we should use the definition. Pedantic or not. Eh?
And to think, I just thought climate computer programs were always pre-programmed to produce imminent Thermageddon.
I never realised that such large errors were almost inevitable, especially when you start doing hundreds of iterations.
My last comment is that the quality of comments here completely puts the lie to alarmists who claim they are the sole providers of climate science.
It speaks to the infinite amount of hubris in the climate establishment. If they really knew anything, they would know how little they know.
As a very old mechanical engineer (having taken an electrical engineering course in my freshman year at MIT in 1954), I wonder if an analog computer could be employed to solve these problems. That is, if any more exist.
There is my favorite one here:
http://www.antiquark.com/sliderule/sim/n909es/virtual-n909-es.html
It is a working slide rule for your browser, based on moving images of rules… it works, too…
Mr. Brand,
An analog computer could model the problems, but is not likely to solve them. The problems of climate are like those of financial forecasting, “Past performance is no guarantee of future results.”
But I like your suggestion because, after all, the real world is analog.
Dan Marks
I have quickly written a C# (C-Sharp) program that is not complete for the demo of this article, but it might be a good start for others.
I will be writing some code (some day) that uses a class that allows a number to be as large as one’s computer memory will allow (bypassing the 32-bit or 64-bit limit, but at the cost of much greater computing time). The code I will be referring to is in a book about C++ algorithms, and I would either have to type it out from the book or alternatively re-write it in C#.
Here is the code, and a debug out of the code shown below that. You can copy this into a file yourself and use Visual Studio Community Edition free from Microsoft.
—————
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;
using System.Collections.ObjectModel;
using System.Diagnostics;
namespace Kip
{
[TestClass]
public class MyTestClass
{
[TestMethod]
public void MyTestMethod2()
{
Kipper kipper = new Kipper()
{
Tee = 300,
Bee = 100,
Tee_infinity = 243
};
var kips = new ObservableCollection<float>();
for (int i = 0; i < 100; i++)
{
kips.Add(kipper.Iterate());
Debug.WriteLine($"Iteration # : {i + 1}, calculated value : {kipper.Tee}");
}
foreach (var item in kips)
{
Debug.WriteLine($"value is : {item}");
}
Debugger.Break();
}
}
internal class Kipper
{
public Kipper()
{
}
public int Bee { get; set; }
public float Tee { get; set; }
public double Tee_infinity { get; set; }
internal float Iterate()
{
Tee = Tee + Beta();
return Tee;
}
private float Beta()
{
double result = Bee * (1 - Math.Pow(Tee, 4) / Math.Pow(Tee_infinity, 4));
return (float)result; //cast double type to float type
}
}
}
———— and some debug output
Test Name: MyTestMethod2
Test Outcome: Passed
Result StandardOutput:
Debug Trace:
Iteration # : 1, calculated value : 167.6943
Iteration # : 2, calculated value : 245.014
Iteration # : 3, calculated value : 241.6573
Iteration # : 4, calculated value : 243.8492
Iteration # : 5, calculated value : 242.444
Iteration # : 6, calculated value : 243.3561
Iteration # : 7, calculated value : 242.7686
Iteration # : 8, calculated value : 243.1489
Iteration # : 9, calculated value : 242.9035
Iteration # : 10, calculated value : 243.0622
Iteration # : 11, calculated value : 242.9598
Iteration # : 12, calculated value : 243.026
Iteration # : 13, calculated value : 242.9832
Iteration # : 14, calculated value : 243.0108
Iteration # : 15, calculated value : 242.993
Iteration # : 16, calculated value : 243.0045
Iteration # : 17, calculated value : 242.9971
Iteration # : 18, calculated value : 243.0019
Iteration # : 19, calculated value : 242.9988
Iteration # : 20, calculated value : 243.0008
Iteration # : 21, calculated value : 242.9995
Iteration # : 22, calculated value : 243.0003
Iteration # : 23, calculated value : 242.9998
Iteration # : 24, calculated value : 243.0001
Iteration # : 25, calculated value : 242.9999
Iteration # : 26, calculated value : 243.0001
Iteration # : 27, calculated value : 243
Iteration # : 28, calculated value : 243
Iteration # : 29, calculated value : 243
The ObservableCollection declaration lost a “left angle bracket” “float” “right angle bracket” during the reformatting by WordPress upon posting.
var kips = new ObservableCollection<float>();
I think the point of the article was that it converges with beta = 100 but is chaotic for beta = 170.
When the beta multiplier starts to become comparable to the size of the settled value, each iteration jumps chaotically to either side, yet is always attracted to it, so it remains confined to that region of the variable space.
In the chaotic mode, the small differences due to rounding errors in the two ways of doing the iteration become significant, and the two paths become unrelated to each other from step to step, while both remain attracted to the same attractor: Tinf.
Well, I can write code that iterates through various values and see what the various outcomes are, but I do have a lot of other projects that I am working on…
I found an add in for my code editor that can copy as html, so here is that copy :
(The commenter reposted the same program via a copy-as-HTML add-in; it is identical to the listing above, except that the ObservableCollection<float> type parameter and the ASCII minus sign survived intact.)

And you said this is for the layman, right? LOL
It’s relative to your level of understanding of the discussion. This is a pretty high level of mathematics (so to speak). Hang in there. 😉
Like any language, if you can speak and write it, it is easy. If you don’t have a word in your vocab you don’t know it.
The term T^4 brought back the memory of a nifty formula from an advanced astronomy class in high school. The total energy output of a star is 4 * pi * r^2 * (sigma) * T^4. (It’s cooler looking when written properly rather than in text.) σT^4 is the output per unit of surface area, where T is the star’s surface temperature in kelvins (and you know that from the star’s class and color). Sigma is just a scaling constant, and 4 * pi * r^2 is the surface area of a sphere. But I never learned where that T^4 comes from in the first place.
I found this helpful, as a start…
https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law
Short answer: the fourth power is from the integral over all frequencies of a third-power law. The third power comes from the three dimensions of space.
The fourth power can be explained as a combination of Johnson noise, characteristic impedance of free space coupled with antenna theory and the frequency spectrum of a black body as constrained by quantum mechanics. A power of one comes from Johnson noise, P=kTB. A power of one comes from the bandwidth of the black body spectrum increasing with T, the bandwidth being where the black body spectrum drops by 3dB from the peak. A power of two comes from considering thermal equilibrium as a function of frequency, the characteristic emission and capture areas are proportional to the square of the wavelength or inversely proportional to the square of the frequency. For a black body radiator at temperatures significantly higher than a few degrees K, the thermal power density at 2GHz will be four times the thermal power density at 1GHz.
An interesting corollary to the power density versus frequency is that below 100 to 200 MHz, the effective sky temperature is above 300 K, gets to be on the order of 3000 K at 30 MHz, and is even higher at lower frequencies. Due to the long wavelengths, the power density is too low to have any effect on surface temperatures.
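As a quick numerical check of the T^4 law discussed above, plugging textbook solar values (my own figures, not numbers from this thread) into L = 4·π·r^2·σ·T^4 recovers the Sun’s luminosity:

```python
import math

# Textbook solar values (assumptions of mine, not figures from this thread):
sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
r_sun = 6.957e8         # solar radius, m
T_sun = 5772.0          # solar effective (surface) temperature, K

flux = sigma * T_sun**4               # output per square metre of surface
L = 4.0 * math.pi * r_sun**2 * flux   # total output of the star
print(f"{L:.2e} W")  # about 3.8e26 W, the accepted solar luminosity
```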
As Stephen Skinner points out (above), climate models are structured as boxes in a grid. That’s the part that baffles me: how do you represent the relationships between the boxes? Doesn’t the number of 2nd-, 3rd-, … order relationships quickly become unmanageable? And if Terre Haute is in one box and South Bend is in another and Kalamazoo is in a third, then what does the interrelationship between all three look like when you extend the interrelationships to Yuma, AZ? It seems like a trivially small error in any parameter would quickly spin out of control.
Until someone can explain all this in plain English I won’t believe any of it. FYI: I have a Master’s in Stats and a PhD in Information Theory, but I also believe that if you can’t break a problem down so that you can explain the basic notion to your mother, then you really don’t know what you are talking about.
I agree Mark. And this is what Tesla says:
“Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality. ”
And this one:
“If anybody ever tells you anything about an aeroplane which is so bloody complicated you can’t understand it, take it from me: it’s all balls.”
— R. J. Mitchell, advice given about his engineering staff to test pilot Jeffrey Quill during prototype trials.
I had a stats professor in graduate school who was famous for commanding us to provide answers by “Saying it in American!!!” Lol, that’s about all I remember from that class!
As a headhunter for many years, I recruited a fair number of people for technology positions. One of my tests was to ask them to explain some technical issue they were involved with. They would pass my test if I could understand what the heck they were talking about.
From the article, the way the author writes the temperatures implies a measuring accuracy of no more than a hundredth of a degree. It is therefore impractical to maintain any precision beyond this. Thus, if the author just wrapped a rounding function around the terms T*T*T*T and T^4, such as round(T*T*T*T, 2) and round(T^4, 2), then both columns would produce the same values regardless of the number of iterations. It is common practice in science and engineering that the precision of calculations is bounded by the accuracy of the measuring device. That should carry over to the computer models they create. These problems were solved by banking systems long ago. Science should learn from bankers.
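That suggestion is easy to try. Here is a hedged Python sketch (my own loop, rounding the whole iterate to two decimals rather than just the T^4 term) in which the two codings stay locked together:

```python
def run(pow4, T0=300.0, beta=170.0, T_inf=243.0, n=100):
    """Iterate eq. (5), rounding each new temperature to two decimals."""
    T, path = T0, []
    for _ in range(n):
        T = round(T + beta * (1.0 - pow4(T) / pow4(T_inf)), 2)
        path.append(T)
    return path

a = run(lambda t: t ** 4)
b = run(lambda t: t * t * t * t)
print(a == b)  # stays True here: rounding swamps the ulp-level coding difference
```

The orbit is still chaotic, but both columns now follow the identical rounded path.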
Yea, the banks solved it by simply keeping all the ” EXTRA ” numbers in their own accounts !!!!
Yes, I have wondered about the idea of making an analogue GCM with a high accuracy instrumentation amplifier at each grid box position. A few thousand in spherical formation all interconnected to their neighbours.
The circuits would have to be accurate and stable enough to represent the various physical equations being modelled to within a similar accuracy as numerical methods. It’s a tall order !
It’s pretty hard to get analog circuitry to 20-bit accuracy (~1 part in a million), so any analog computation would be orders of magnitude worse than digital (64-bit) precision.
http://www.linear.com/docs/4177
In point of fact, for years banks used Binary Coded Decimal machines. All arithmetic was exact in that format: you could either represent a number or you couldn’t, because it was either too large for the number of places to be represented or too small for the places in the fractional part of the number. Burroughs and IBM both had such systems; UNISYS still sells them.
A good practical illustration of chaos, a branch of math developed in response to problems with computer models. The problems stem from:
Inexact initial or boundary conditions.
Finite precision arithmetic.
It isn’t just arithmetic. According to Lorenz’s original paper the chaotic behavior was intrinsic to trying to get solutions to partial differential equations. With certain starting values the system of equations generates solutions that cluster in certain areas but never actually repeat. So something like predicting when the climate switches from an ice age to an interglacial isn’t possible. The best that can be done is predict that the change is highly likely to be within a certain range of dates. Or predict that the temperatures are highly likely to be within certain limits in one phase or the other.
“…it emits more energy than it receives.”
Violates 2nd law.
What version of the 2nd law are you using?!
I suggest you refresh your memory.
The fundamental characteristics of chaos include:
1. Sensitive dependence on initial conditions. Two solutions started with ICs that are different at the precision of the machine representation of the numbers, for example, will exponentially diverge. The difference between the two series will grow to be of the same order of magnitude as the variables themselves.
2. Bounded aperiodic oscillations with increasing values of the independent variable, which is usually taken to be time. Growth of the dependent variable to blowup is not chaos, limit-cycle oscillatory response is not chaos, periodic oscillatory response is not chaos.
The example in this post does not meet either of these properties. Instead, two solutions started at nearby ICs remain close; blowup, limit-cycle, and periodic oscillatory solutions are obtained for some conditions.
None of the solutions have been tested for, and shown to be consistent with, chaotic properties and characteristics.
The example is nothing more than a simple, and severely incomplete, demonstration of numerical instabilities. And it heads off on a perpendicular into a less-than-complete discussion of machine representations of numbers, with the focus additionally on the wrong part of the equation, which after all contains the term (T^4 / T_inf^4), which can be written in a multitude of different ways in any of several coding languages.
It is critically important to note that the example can in no way whatsoever be associated with mathematical models and numerical solution methods in general, or GCMs in particular.
Several investigations into accurate numerical integration of the Lorenz ODE system appeared in the literature about 5 or 6 years ago, with an earlier paper from 1998 also re-discovered. A summary of these papers is given here.
The series of papers started with this publication:
J. Teixeira, C.A. Reynolds, and K. Judd (2007), Time step sensitivity of nonlinear atmospheric models: Numerical convergence, truncation error growth, and ensemble design. Journal of the Atmospheric Sciences, Vol. 64 No.1, pp. 175–189. http://journals.ametsoc.org/doi/pdf/10.1175/JAS3824.1
A Comment on the paper was published:
L. S. Yao and D. Hughes (2008), Comment on ‘Time step sensitivity of nonlinear atmospheric models: numerical convergence, truncation error growth, and ensemble design,” Journal of the Atmospheric Sciences, Vol. 65, No. 2, pp. 681-682. http://journals.ametsoc.org/doi/pdf/10.1175/2007JAS2495.1
Subsequently, additional comments and responses about another publication appeared. The paper:
Edward N. Lorenz (2006), Computational periodicity as observed in a simple system, Tellus A, Vol. 58A, pp. 549-557. http://eaps4.mit.edu/research/Lorenz/Comp_periodicity_06.pdf
The comment:
L. S. Yao and D. Hughes (2008), Comment on ‘Computational periodicity as observed in a simple system’, by E. N. Lorenz, Tellus A, Vol. 60, No. 4, pp. 803-805. http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0870.2008.00301.x/abstract
Response to the comment:
E. N. Lorenz (2008), Reply to comment by L.-S. Yao and D. Hughes, Tellus A, Vol. 60, pp. 806–807. http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0870.2008.00302.x/abstract
Following these publications, several appeared that directly addressed the problem of accurate numerical integration of ODE systems that exhibit chaotic response. Yao and Hughes, and others, missed an earlier publication by Estep and Johnson (1998).
D. Estep and Claes Johnson (1998), The pointwise computability of the Lorenz system, Mathematical Models and Methods in Applied Sciences, Vol. 8, pp. 1277-1305. http://www.worldscientific.com/doi/abs/10.1142/S0218202598000597
The papers listed below are representative of recent, the 2000s, publications. There are very likely other publications that address the issue.
S. Liao (2009), On the reliability of computed chaotic solutions of non-linear differential equations. Tellus A, Vol. 61, No. 4, pp. 550–564. http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0870.2009.00402.x/abstract
Kehlet (2010) addressed the problem in a MSc thesis.
B. Kehlet, Analysis and implementation of high-precision finite element methods for ordinary differential equations with application to the Lorenz system, MSc thesis, Department of Informatics, University of Oslo, 2010.
And Kehlet and Logg (2010, 2013)
B. Kehlet and A. Logg (2010), A reference solution for the Lorenz system on [0, 1000], American Institute of Physics, ICNAAM, Numerical Analysis and Applied Mathematics, International Conference 2010, Vol. III, Edited by T. E. Simos, G. Psihoyios and Ch. Tsitouras. http://adsabs.harvard.edu/abs/2010AIPC.1281.1635K
B. Kehlet and A. Logg (2013), Quantifying the computability of the Lorenz System, arXiv: 1306.2782. See also Proceedings of the VI International Conference on Adaptive Modeling and Simulation (ADMOS 2013), Edited by J. P. Moitinh de Almeida, P. D`ïez, C. Tiago and N. Parés. International Center for Numerical Methods in Engineering (CIMNE), 2013. http://arxiv.org/abs/1306.2782 http://www.math.chalmers.se/~logg/pub/papers/KehletLogg2010a.pdf
Sarra and Meador (2011) also addressed the problem.
Scott A. Sarra and Clyde Meador (2011), On the numerical solution of chaotic dynamical systems using extend precision floating point arithmetic and very high order numerical methods, Nonlinear Analysis: Modelling and Control, Vol. 16, No. 3, pp. 340–352. http://www.lana.lt/journal/42/NA16306.pdf
Prior to the papers listed above that address the Lorenz system, Cloutman investigated systems that exhibit chaotic response even though the response theoretically cannot be chaotic.
L. D. Cloutman (1996), A note on the stability and accuracy of finite difference approximations to differential equations, Lawrence Livermore National Laboratory Report, UCRL-ID-125549. http://www.osti.gov/scitech/servlets/purl/420369/
L. D. Cloutman (1998), Chaos and instabilities in finite difference approximations to nonlinear differential equations, Lawrence Livermore National Laboratory Report, UCRL-ID-131333. http://www.osti.gov/scitech/servlets/purl/292334
As I noted above, this problem has nothing to do with chaos, climate, or any such. Nor is it an issue with floating point or ways of coding T^4. It is an elementary case of a stiff differential equation. And it is very well known that you can’t use Euler’s method, as the author does, except with very small time steps. The remedies have been well known for at least a hundred years.
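The step-size point above can be illustrated directly: the article’s map is forward Euler on dT/dt = (q/mC)(1 − T^4/T∞^4), with the time step absorbed into β, so shrinking the effective step removes the overshoot entirely. A hedged sketch (the function names and the undershoot threshold are my own choices):

```python
def euler(beta, steps, T0=300.0, T_inf=243.0):
    """Forward Euler on the cooling law; beta plays the role of the time step."""
    T = T0
    undershoot = False
    for _ in range(steps):
        T = T + beta * (1.0 - (T / T_inf) ** 4)
        if T < T_inf - 1.0:  # dipped well below equilibrium on the way down
            undershoot = True
    return T, undershoot

T_big, osc_big = euler(beta=100.0, steps=20)      # big effective step: oscillatory
T_small, osc_small = euler(beta=1.0, steps=2000)  # small effective step: smooth decay
print(osc_big, osc_small)  # True False
```

Both runs reach the same equilibrium of 243 K, but only the large-step run oscillates around it on the way, which is the instability (not chaos) in the non-chaotic β = 100 table.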
Although not a century ago, I used Gear’s method, specifically designed for ‘stiff’ ODEs, in my thesis work in the early 70’s.
Thank you for that.
It’s the year 2015, and we’re still using 64 bits for floating point calculations?
https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format
The register sizes of the latest CPUs are as large as 512 bits and support e.g. integer crypto operations at those sizes, but full HW support still not there on x86 for quadruple precision from what I can tell.
If I’m doing my math right, it should take at least 53 times as many iterations to show the same problem as above when using quadruple precision. I might be off by log(10)/log(2), can’t remember…
Peter
Anthony R. E.,
I get the same results on the 109th iteration; the particulars of my software/system are:
Software: Gnumeric Spreadsheet 1.12.9
Processor: 8x Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz
Operating System: Ubuntu 14.04.3 LTS
Some basic descriptive stats might be of interest (β=170, T∞=243, n=65,535):
Well sure, if one’s goal is to predict the nth future state of a chaotic physical system to within a few hundredths of a percent, ALL methods will fail miserably at some point even if all methods produce exactly the same answer for any given n along the way.
However, if one’s goal is to predict the statistics of the values a chaotic system is likely to experience under specified conditions (β=170, T∞=243, n=65,535) …
… -0.04 K (-0.02%) difference in the means and 0.0663 K and 0.05 K (0.07%) difference in the standard deviations suggest that the most dire problems with the model are probably not due to how we’ve coded the 4th power of T.
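That statistical point can be checked in Python. Here a tiny perturbation of the initial temperature stands in for the ulp-level difference between the two codings of T^4 (a hedged sketch; the burn-in and perturbation are my own choices):

```python
def orbit(T0, beta=170.0, T_inf=243.0, n=65535, burn=100):
    """Collect n iterates of eq. (5) after a burn-in period."""
    T, vals = T0, []
    for i in range(n + burn):
        T = T + beta * (1.0 - (T / T_inf) ** 4)
        if i >= burn:
            vals.append(T)
    return vals

def mean(v):
    return sum(v) / len(v)

a = orbit(300.0)
b = orbit(300.0 + 1e-9)  # stand-in for the ulp-level coding difference

# The two runs decorrelate iterate-by-iterate, but they sample the same
# attractor, so their long-run means agree far better than individual values do.
print(abs(mean(a) - mean(b)))
```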
Bah … mods, I did not properly close the [code][/code] tags in each of the tables of figures in the above post, please fix. Thanks.
[Fix OK? – mod]
mod, that fixed it, thanks.
Some (most?) languages have library support for arbitrary-precision decimals:
Java – java.math.BigDecimal
Python – decimal.Decimal
C++ – a quick Google search shows a couple of options
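For example, Python’s decimal module lets you rerun the β = 170 case at 50 significant digits. Higher precision delays, but does not eliminate, the divergence (a sketch under my own parameter choices):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work to 50 significant digits

def run_decimal(n, T0="300", beta="170", T_inf="243"):
    T, b, ti4 = Decimal(T0), Decimal(beta), Decimal(T_inf) ** 4
    path = []
    for _ in range(n):
        T = T + b * (1 - T ** 4 / ti4)
        path.append(T)
    return path

def run_float(n, T0=300.0, beta=170.0, T_inf=243.0):
    T, path = T0, []
    for _ in range(n):
        T = T + beta * (1.0 - T ** 4 / T_inf ** 4)
        path.append(T)
    return path

d, f = run_decimal(120), run_float(120)
early = abs(float(d[10]) - f[10])
late = max(abs(float(x) - y) for x, y in zip(d[100:], f[100:]))
print(early)  # tiny: the two runs agree at first
print(late)   # large: the double-precision run has wandered off, as the columns did
```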
Dear Anthony R. E.,
You say, ” … a simple example”.
Hmmm ( ? )
It appears to be more complex than a woman deciding upon the right shoes for her new outfit.
Pass me the George bottle !
Regards.
WL
Fun: numerical computation in binary. Back in the 70’s it was done in decimal, but now binary floating-point hardware units (embedded in the CPU) do the job. Speed and precision have always been trade-offs, and as compilers have got ‘smarter’, most programmers have no idea how their code will be executed. 10×10 can be interpreted as 10^2, and a compiler may choose to execute the code for that by any of several different methods chosen by the compiler writer for ‘efficient execution’, or even personal preference. The same code compiled differently will yield a different result at a significant number of places. That also goes for the executing processor environment. The CPU only does addition and subtraction in its main arithmetic registers, so a 32-bit CPU will be able to process a 32-bit integer, and the data pathways to the CPU will appear as 32 bits although they may be delivered differently. Scale it up to x64 similarly. Floating-point hardware then allows an extension to 128 bits in an output register. FPUs have externally defined operations, but it is only the data format and the class of the operation which are defined. The execution microcode will be a trade secret, and that too will be based on additive registers. At that level, shift-and-add rules, along with programmer ingenuity for efficiency and speed.
Once, even business programmers had to know their compilers so that any arithmetic gave sensible results, even with 2 places of decimals. I have seen Fortran output with 9’s for zero. Never presume equality, and at every iteration the result would need to be rounded to the significant number of places before proceeding. Bring back BCD with hardware multiply and divide. The Trachtenberg method can be implemented with a table for very speedy computations.
It should be obvious to anyone with high school maths that climate model computations are probably impossible.
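The “never presume equality” advice above is easy to demonstrate: floating-point addition is not even associative, so the mere grouping of a sum changes the resulting bits.

```python
# Floating-point addition is not associative: the same three terms,
# grouped differently, give two different doubles.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)  # False
print(left, right)    # 0.6000000000000001 0.6
```

The same trap applies to powers, which is why T^4 and T*T*T*T need not match to the last bit.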
Ah! BCD! The beep, beep, crunch sounds of a 1960’s ICL LEO III!
As a programmer, I would like to duplicate computer models and tests using the actual software or code as it is being used, for further analysis. I’ve never been able to get the raw data or code being used. I want to perform testing on my own. I wonder if you can point me in the right direction? Even the physical location would be very useful.
Hey, you want the see the code, no problem. Directions? go to hell !
“physical location” ? Up my a$$ and round the bend. 😉 Does that answer your questions?
Welcome to the “science” of the anti-christ : climatology. Fully peer reviewed but you are not my peer, so butt out.
An interesting post, but I’m not convinced that it is relevant. The errors in every single step of a climate model are so much larger than the difference between T^4 and T*T*T*T that any computation accuracy problem just pales into insignificance.
On second thoughts, it isn’t that irrelevant, because it could help people to understand just how useless the climate models are : if you can get errors like that from the difference between T^4 and T*T*T*T, just think what errors you will get from the massive quantity of highly inaccurate weather data that the climate models use in their internal calculations.
If you really want to make this simple for the layman to understand, find a way to do it without all of the math notation in the explanation. For many, the formulae get in the way of comprehension. It’s a foreign language they have a hard time grasping. Maybe that bothers the math-intuitive folks, but they need to have some patience for people without their abilities who still want to understand.
Algebraic-to-floating-point non-equivalence of operations is only part of the problem. To me, the real elephant in the room is the lack of accounting for source precision and error in the models. For instance, a temperature measurement of 10.5 is not exactly 10.5; it could be anything from 10.40001 up to 10.59999 (depending on your measurement methods). That’s just under a 2% error range on the one measure. Also, the error range is not constant (in this case it gets worse the lower the temperature!).
Such errors cannot be removed by calculation or averaging; they are with you always and tend to grow rather quickly, i.e., a*b=c also adds the relative errors, so two numbers with 2% error margins give a result with a 4% error margin. Now do that a few times, reusing the results (say 10 times), and you are soon processing pure noise; the results become meaningless. Hence all the tuning parameters and fudge factors in the climate models: they want to go chaotic if run too long, so you need the knobs to ‘dampen’ the eventual pure noise toward the result you want.
This reminds me of the old accountancy joke:
A company is looking to hire an accountant, they have a simple test, what is 2+2. The first accountant says ‘4’ – he isn’t hired. The second accountant looks around and asks ‘what do you want it to be?’ – he is hired.
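The error-growth arithmetic described above (relative errors roughly adding under multiplication) can be sketched with a toy interval calculation; the 2% figure is the commenter’s own example.

```python
def mul_interval(a, b):
    """Multiply two (lo, hi) intervals; all-positive case for simplicity."""
    return (a[0] * b[0], a[1] * b[1])

def rel_halfwidth(iv):
    """Half-width of the interval relative to its midpoint."""
    lo, hi = iv
    return (hi - lo) / (hi + lo)

x = (10.40001, 10.59999)   # a "10.5" reading with roughly a 1% half-width
prod = mul_interval(x, x)  # e.g. squaring the measured value
print(rel_halfwidth(x))     # ~0.0095
print(rel_halfwidth(prod))  # ~0.0190: the relative errors have added
```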
Not nitpicking, but a suggestion: you have mC as the mass times the specific heat. By convention, heat capacity gets the symbol C, while specific heat (per unit mass) gets a lower-case c.
Here are the graphs for A.R.E.’s Excel files above:

Well, that didn’t quite work out — should have been just one of those, and this one:
This second graph shows the principle from my recent essay. The system tends towards its stable point at 243. It overshoots at first downward, then overshoots coming back up, but finally settles down and is rock solid.
Thanks, a visualisation is always helpful.
The analytical solution is a decaying exponential with no overshoot, so the iterative calculation, with an overshoot of about 100%, is still very unstable and qualitatively incorrect, though not yet chaotic.
Now perhaps you could plot Nick’s trapezoidal solution.
That’s the trajectory one gets if the solution of the characteristic equation gives a complex root with a negative real part, termed a ‘stable focus’, which exhibits an oscillatory approach to the stationary point.
The first graph shows that in the first 12 iterations both formulas produce a single line; they are so close in value that they overwrite one another here. (Then follows a break in the data set, which looks visually like a long down-slanting line.) The same is true in iterations 69-75: the values overwrite one another, they are so close, and are still oscillating. (Then follows another break in the data set.) In the third segment, we see that the values of the two formulas have diverged, both in value and in nature; the orange trace seems to stabilize around a value of 240 or so, while the blue trace still slams wildly up and down.