A simple demonstration of chaos and unreliability of computer models

Guest essay by  Anthony R. E.


Recently, I read a posting by Kip Hansen on Chaos and Climate (Part 1 and Part 2). I thought it would be easier for the layman to understand the behavior of computer models under chaotic conditions if there were a simple example he could play with. I used the attached file in a course where we have many “black box” computer models with advanced cinematic features that laymen assume are reality.

Consider a thought experiment: a simple system in a vacuum consisting of a constant energy source per unit area q/A and a fixed receptor/emitter with area A and initial absolute temperature T0. The emitter/receptor has mass m and specific heat C; σ is the Stefan–Boltzmann constant. The enclosure of the system is far enough away that heat emitted by the enclosure has no effect on the behaviour of the heat source and the emitter/receptor.

The energy balance in the fixed receptor/emitter at any time n is:

Energy in {(q/A)·A = q} + energy out {–2AσTn^4} + stored/released energy {–mC(Tn+1 – Tn)} = 0 . . . eq. (1)

If Tn+1 > Tn, the fixed body is a heat receptor; that is, it receives more energy than it emits. If Tn > Tn+1, it is an emitter; that is, it emits more energy than it receives. If Tn = Tn+1, the fixed body’s temperature is at equilibrium.

Eq. (1) can be rearranged as:

Tn+1 = Tn – 2AσTn^4/mC + q/mC . . . eq. (2)

Since 2Aσ/mC is a constant, we can call it α, and since q/mC is also a constant, we can call it β, to facilitate calculations. Eq. (2) can then be written as:

Tn+1 = Tn – αTn^4 + β . . . eq. (3)

The reader will note that this equation exhibits chaotic properties as described by Kip Hansen in his previous WUWT post of November 23, 2015, titled “Chaos & Climate – Part 2: Chaos = Stability”. At equilibrium Tn+1 = Tn, and if the equilibrium temperature is T∞, then from equation (3):

T∞^4 = β/α, or α = β/T∞^4 . . . eq. (4)

And eq. (3) can be written as:

Tn+1 = Tn – βTn^4/T∞^4 + β, or Tn+1 = Tn + β(1 – Tn^4/T∞^4) . . . eq. (5)

Eq. (5) can easily be programmed in Excel. However, there are several ways of writing T^4: one programmer could write it as T*T*T*T, another as T^2*T^2, another as T*T^3, and another as T^4. From what we learned in basic algebra it should not matter, as all those expressions are equal. The reader could try all the variations of writing T^4. For purposes of illustration, let us take β = 100, T∞ = 243 (I use this out of habit, since if it were not for greenhouse gases the earth would be at about –30 °C, or 243 K, but you could try other temperatures) and an initial temperature of 300 K. After the 17th iteration the temperature has reached its steady state, and the difference between coding T^4 as T^4 and as T*T*T*T is zero. This is the non-chaotic case. An extract from the Excel spreadsheet is shown below:

beta = 100
Iteration   with T^4   with T*T*T*T   % diff.   (T∞ = 243)
0 300.00 300.00 0.00
1 167.69 167.69 0.00
2 245.01 245.01 0.00
3 241.66 241.66 0.00
4 243.85 243.85 0.00
5 242.44 242.44 0.00
6 243.36 243.36 0.00
7 242.77 242.77 0.00
8 243.15 243.15 0.00
9 242.90 242.90 0.00
10 243.06 243.06 0.00
11 242.96 242.96 0.00
12 243.03 243.03 0.00
13 242.98 242.98 0.00
14 243.01 243.01 0.00
15 242.99 242.99 0.00
16 243.00 243.00 0.00
17 243.00 243.00 0.00
18 243.00 243.00 0.00
19 243.00 243.00 0.00
20 243.00 243.00 0.00
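For readers without Excel, the iteration above can be sketched in a few lines of Python (a sketch, not the author’s spreadsheet; the constants are the non-chaotic ones used above):

```python
# A sketch of eq. (5): T(n+1) = T(n) + beta*(1 - T(n)^4 / Tinf^4)
# Non-chaotic case from the article: beta = 100, Tinf = 243, T0 = 300 K.

def step(T, beta=100.0, Tinf=243.0):
    """One iteration of eq. (5)."""
    return T + beta * (1.0 - T**4 / Tinf**4)

T = 300.0
for n in range(21):
    print(f"{n:2d}  {T:8.2f}")
    T = step(T)
# The printed column reproduces the first table: 300.00, 167.69, 245.01, ...
# settling on 243.00 by around the 17th iteration.
```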

If β is changed to 170 with the same initial T and T∞, T does not gradually approach T∞ as in the non-chaotic case, but fluctuates as shown below. While the difference in coding T^4 as T^4 versus T*T*T*T is zero to the fourth decimal place, differences are quietly building up, as the third table shows.

beta = 170
Iteration   with T^4   with T*T*T*T   % diff.   (T∞ = 243)
0 300.0000 300.0000 0.0000
1 75.0803 75.0803 0.0000
2 243.5310 243.5310 0.0000
3 242.0402 242.0402 0.0000
4 244.7102 244.7102 0.0000
5 239.8738 239.8738 0.0000
6 248.4547 248.4547 0.0000
7 232.6689 232.6689 0.0000
8 259.7871 259.7871 0.0000
9 207.7150 207.7150 0.0000
10 286.9548 286.9548 0.0000
11 126.3738 126.3738 0.0000
12 283.9386 283.9386 0.0000

By the 69th iteration, the difference between coding T^4 as T^4 and as T*T*T*T becomes apparent in the fourth decimal place, as shown below:

69 88.6160 88.6153 0.0008
70 255.6095 255.6088 0.0003
71 217.4810 217.4824 0.0007
72 278.4101 278.4086 0.0005
73 155.4803 155.4850 0.0030
74 296.9881 296.9894 0.0004

The difference between the two codings builds up so rapidly that by the 95th iteration it is 4.5 per cent, and by the 109th iteration it is a huge 179 per cent, as shown below.

95 126.5672 132.2459 4.4866
96 284.0558 287.3333 1.1538
97 136.6329 125.0047 8.5105
98 289.6409 283.0997 2.2584
99 116.5073 139.9287 20.1029
100 277.5240 291.2369 4.9412
101 158.3056 110.4775 30.2125
102 297.6853 273.2144 8.2204
103 84.8133 171.5465 102.2637
104 252.2905 299.3233 18.6423
105 224.7630 77.9548 65.3169
106 270.3336 246.1543 8.9442
107 179.9439 237.1541 31.7934
108 298.8261 252.9321 15.3581
109 80.0515 223.3877 179.0549

However, the divergence is not monotonically increasing. There are instances, such as the 104th iteration, where the divergence drops from 102 per cent to 18 per cent. One is tempted to conclude that, on a computer, T^4 ≠ T*T*T*T.
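The same kind of divergence can be seen without comparing two codings at all: in the chaotic case (β = 170), the map of eq. (5) amplifies any tiny difference, whether it comes from rounding or from the starting value. A Python sketch (assuming the same constants as above) that perturbs the initial temperature by one billionth of a degree:

```python
# Sensitive dependence in the chaotic case of eq. (5): beta = 170, Tinf = 243.
# Two starting temperatures differing by 1e-9 K end up on different trajectories.

def step(T, beta=170.0, Tinf=243.0):
    """One iteration of eq. (5)."""
    return T + beta * (1.0 - T**4 / Tinf**4)

a, b = 300.0, 300.0 + 1e-9
max_gap = 0.0
for n in range(100):
    a, b = step(a), step(b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # the 1e-9 K difference has grown by many orders of magnitude
```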

Conclusion:

Under chaotic conditions, the same one-line equation with the same initial conditions and constants, but coded differently, will give vastly different results. Under chaotic conditions, predictions made by computer models are unreliable.

The calculations are made for the purpose of illustrating the effect of instability in a simple non-linear dynamic system, and may not have any physical relevance to more complex non-linear systems such as the earth’s climate.

Note:

For the above discussion, a Lenovo G50 64-bit computer was used. If a 32-bit computer were used, the differences would become noticeable at much earlier iterations. A different processor with the same word size will also give different results.


228 thoughts on “A simple demonstration of chaos and unreliability of computer models”

    • Not fp!
      =====
      Depends how the floating point is implemented. Regardless, there are loss-of-precision errors in FP calculations on digital machines. Similar to 1/3 in decimal notation, which you cannot represent in a finite number of decimal places, these small truncation errors can quickly build up through the repeated iterations found in climate models, such that the error overwhelms the result.

      For example, due to round-off/truncation errors it is nearly impossible to create a climate model that doesn’t gain or lose energy without any energy actually being added to or removed from the system! So at each iteration it is often necessary to calculate the phantom gain/loss in energy and then average it back into the model over all the grid cells. In effect you smear the error over the globe, in an attempt to hide it, rather than reporting it as model error.
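The phantom gain/loss described above can be shown in miniature (a Python sketch, not taken from any climate model): adding 0.1, which has no exact binary representation, a million times does not give exactly 100000, while a compensated summation does.

```python
import math

# Adding 0.1 a million times: the exact answer is 100000, but 0.1 has no exact
# binary representation, so naive summation accumulates a small phantom "gain".
values = [0.1] * 1_000_000

naive = sum(values)              # plain left-to-right addition
compensated = math.fsum(values)  # error-compensated summation

print(naive - 100000.0)          # small but nonzero drift
print(compensated - 100000.0)    # essentially zero
```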

      • Floating point usually follows IEEE 754 single and double.

        And the excess precision of extended registers is a big issue, a disgusting “feature” of the Intel FPU. Excess precision leads to semantic issues, cases where the axiom of equality A = A for any expression A doesn’t hold. It’s very, very bad. It’s a disease. Many people in the field deny that, even a programming language designer.

        If I had to recruit a computer scientist, I would ask him how bad it is.

      • @Simple Touriste:

        In my FORTRAN class of decades back, we were cautioned not to test floats for equality, but to test for “smaller than an error band”: ABS(A - B) .LT. 0.000001.

        I’ve noticed that is not done in GIStemp and would speculate it is missing in the GCMs.

        Oh, and per your question: “It is bad.. very Very Bad.. Evil, even…”

      • @E.M.Smith

        Exactly. One of the first things I “learned by doing” in numerical analysis is never to test floats for equality. And at the time (also decades ago) I was only trying to solve a simple system of linear equations!
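The rule both comments describe can be demonstrated in a couple of lines of Python (math.isclose plays the role of the FORTRAN error-band test):

```python
import math

# Exact equality fails for values that are "obviously" equal in decimal:
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004

# The error-band test instead:
print(math.isclose(0.1 + 0.2, 0.3, abs_tol=1e-9))  # True
```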

      • Great article. The money quote:

        “[Floating-point] math is hard. You just won’t believe how vastly, hugely, mind-bogglingly hard it is. I mean, you may think it’s difficult to calculate when trains from Chicago and Los Angeles will collide, but that’s just peanuts to floating-point math.”

        The author brings up a whole series of problems that occur with floating point math. For example, whenever the results get close to zero you get problems such as N/0 and 0/0. Because of precision problems, these values are rarely exactly zero in FP, so the computer goes ahead and blindly calculates results for these problems, often in intermediate temporary values that are hidden from the programmer. The results “blow up”, but the programmer and computer are both unaware of what has gone on. They report the resulting nonsense as though it was a valid answer.

        The bottom line is that floating point values and iterative solutions do not mix well together. This problem is well known to Numerical Analysts, but largely unknown by everyone else. Yet this is exactly how the climate models are implemented. As a result, even if the theory is 100% correct, the results in practice are unlikely to be correct.
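The near-zero problem described above, in a Python sketch: a quantity that is algebraically zero comes out as a tiny residue, and dividing by it silently produces an enormous, meaningless number instead of an error.

```python
# (0.1 + 0.2) - 0.3 is algebraically zero, but not in binary floating point.
residue = (0.1 + 0.2) - 0.3
print(residue)        # about 5.55e-17, not 0.0

# No division-by-zero error is raised; the result is just a huge number,
# silently reported as if it were a valid answer.
print(1.0 / residue)  # about 1.8e16
```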

  1. Only a few of the alarmists have enough knowledge to understand the math… that’s sad.
    None of the so-called “experts” who presented computer models has the ability and knowledge to write a proper algorithm taking ALL NEEDED parameters/variables into consideration. When I analysed water levels from the peak of the Stone Age up to the year 1000 AD, I used 43… Has anyone seen any of the so-called experts take more than 12 into consideration?

  2. Reblogged this on Norah4you's Weblog and commented:
    Some types of erosion:

    * Wind erosion
    * Water erosion
    * Temperature erosion
    * Gravitational erosion
    (but there are more types)

    When erosion of sand and/or coral reef occurs along coasts, the “land” will always lose area to the sea. If there are houses or settlements close to the water, sooner or later they will “disappear”. That is due to natural forces, mostly the force we call erosion.

    Example:
    Two phenomena are common: wind, waves and temperature allow the sand/coral to become undermined, and the land above falls piece by piece into the water, sometimes in the form of landslides.

    Depending on the area where this occurs, and on the tidal power and the strength of ebb and flow, it can mean anything from sand being spread out over a larger area of nearby land, which may look as if the water has risen, to sand being spread by wind and waves over much larger surfaces.

    Another beachfront effect of erosion is that sludge, gravel, pollutants and more are carried out by meandering rivers. The water cycle makes water seek the lowest point possible. On the way from land to sea, the flow in a river is slower along the edges than in the middle of the stream, so eroded land/sand falls into the water and sludge is deposited along the sides; close to where the river enters the sea, a river delta may arise. Normal school knowledge that has been forgotten can be repeated.

  3. Actually, it’s called numerical instability: multiply a small number (for example 0.99) by itself a thousand times and what do you get? A very, very small number, i.e., nearly zero. Multiply a slightly larger number (for example 1.01) by itself a thousand times and you get large numbers, trending towards infinity!
    It’s called iteration, and in any computation where iteration is part of the game, just as in all GCMs and climate models, this is the root of chaos.

    • Numerical instability? Iteration? Your example is just a geometric progression where you choose the common ratio just above and below 1. This is basic maths:

      In mathematics, a geometric progression, also known as a geometric sequence, is a sequence of numbers where each term after the first is found by multiplying the previous one by a fixed, non-zero number called the common ratio.

      The behaviour of a geometric sequence depends on the value of the common ratio.
      If the common ratio is:

      – Greater than 1, there will be exponential growth towards positive or negative infinity (depending on the sign of the initial term).
      – 1, the progression is a constant sequence.
      – Between −1 and 1 but not zero, there will be exponential decay towards zero.

      • This is basic maths:
        ==========
        to a mathematician. To a climate scientist, it looks like a ‘tipping point’. You start with 1 and iterate to the Nth power, you still have 1: climate is stable. Now add in CO2. Now you start with 1.00001. Iterate this to the Nth power and it blows up: a tipping point. Climate is unstable, thus any change to the atmosphere, any change at all, will lead to runaway warming or runaway cooling.

        Which is nonsense, because it would have already happened, given the past changes to earth’s atmosphere. So the result must be a mathematical illusion: the result of iterating (1 ± small value) to the Nth power.
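The (1 ± ε)^N behaviour discussed in this sub-thread can be checked directly in Python:

```python
# A common ratio just below 1 decays toward zero; just above 1 it explodes.
print(0.99 ** 1000)  # about 4.3e-05
print(1.00 ** 1000)  # exactly 1.0
print(1.01 ** 1000)  # about 2.1e+04
```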

    • @asybot:

      A simple example would just do something like C=(A x B ) / 100
      and C=(A/100.0) x (B / 100.0)

      and show they were both different… then show how using one vs the other changes results, especially in repeated use in a loop.

      Not quite the same chaotic thing, but similar idea of math divergence over time.

      • typo: C = (A x B)/10000. The accuracy of A and B will be individually truncated, while A x B may be massively truncated before the division. A better example may be

        (A/128) x (B/128) vs (A x B)/16384

        Suppose you had an 8-bit fixed-point representation and A = 128 and B = 128. The answer is C = 1, but A x B overflows the 8-bit intermediate, so (A x B)/16384 gives you 0.
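The overflow point can be sketched in Python using a hypothetical signed 8-bit format with 7 fraction bits (the wrap8 helper below is illustrative, not any particular hardware): keeping the intermediate product wide gives the right answer; wrapping it to 8 bits first destroys it.

```python
def wrap8(x):
    """Wrap an integer into the signed 8-bit range [-128, 127],
    as a narrow hardware register would."""
    return ((x + 128) % 256) - 128

# Hypothetical 8-bit fixed point with 7 fraction bits: integer 100 encodes
# 100/128 = 0.78125. The true product 0.78125 * 0.78125 ~= 0.61 encodes as 78.
A = B = 100

wide = (A * B) >> 7         # product kept in a wide intermediate: correct
narrow = wrap8(A * B) >> 7  # product wrapped to 8 bits first: garbage

print(wide, narrow)         # 78 0
```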

    • Nor me. But what I took away from this article is:

      1. The coding methods used and the equipment on which the models are run can introduce a large error, especially the further out the projections are run, and

      2. the modelers do not recognize or account for this hidden source of error.

      I hope I have this right…if not, could someone please correct me?

      • Correct, but more subtly: you cannot compensate for this error. It’s chaotic, and unless you know what the answer should be, there’s no way to know what the error is.

  4. To me, it has more to do with the style of algorithm. Taking a number (data point) with an accuracy of ±2% and then raising it to the 4th power will lead to great inaccuracies. Taking the 4th root of a number with an accuracy of ±2% would give a better outcome.

  5. A computer is like a food mixer; it just churns the ingredients in the order you put them in.
    For a perfect confection you need the best ingredients, the skill of knowing the correct order to add them, and how long to churn them.
    The ‘climate meddlers’ have used flour, salt, water and chalk, thrown it in, mixed for a few minutes, added a few picked cherries, and are trying to pass it off as a gateau.
    GIGO – garbage in, garbage out.

    • “1saveenergy

      December 6, 2015 at 12:07 am

      A computer is like a food mixer, it just churns the ingredients in the order you put them in.”

      Ahhh, no. A mixer does exactly that: mixes whatever you put into it. A computer does exactly what it was programmed to do.

    • it just churns the ingredients in the order you put them in.

      Er no, these days it churns them out in the order the compiler put them in..

      And compilers are making a lot of decisions the programmer didn’t even think about.

      • Exactly right, and that is the very reason all of my “programming” was done in assembler code (machine language). My software/firmware performed the way I wanted it to perform, and not the way the writer of a compiler assumed I wanted it to perform.

      • Technically, modern CPUs perform out-of-order execution, reordering instructions for parallel execution where the hardware deems them independent of one another.

      • And even compilers aren’t perfect. Yea, many moons ago we got a *brand* *new* FORTRAN 77 compiler. All went OK until I switched off the debugging mode. It took weeks of going through the assembler output to find out that the compiler itself had bugs!

      • Back in the 80’s we had a Mark Williams compiler that generated weird bugs whenever we tried to crank optimization above level 0.

      • There is also chip-level microcode between the assembler emitted by the compiler and the silicon. Further, some compilers do not emit assembler; they emit C, which is further compiled by a C compiler that emits assembler. You also need to account for the procedure libraries used by the compilers.

  6. This was eye-opening. How do the ‘models’ avoid this iterative chaos ruining the projections?

    • They can’t. That is sort of the definition of the effects of a chaotic system…

      Part of why any model must be validated. Frequently. And each new release all over again.

    • I have been wondering the same. Could someone please explain why the CMIP5 climate models diverge so little from each other? I have my doubts that would be the case in real life.

      • The simple answer is: because they all use the same “canned” program sub-routines.

      • Fudge factors. They know the results are bad. So they introduce “corrective” instructions that are designed to “correct” the results, to make them acceptable. Looking at it from the other side, they make conclusions about what the results should be, then program it in.

        As they don’t understand the atmosphere to the degree required to model it, they can’t model it. Complaints that their computers just aren’t big enough are lame. “If we only had more granularity.” I think the entire approach, GCM Nudge Models, is wrong. They are trying to substitute brute-force computing power for brain power.

      • It’s due to the tuning that they apply to the system. That is, they adjust the user adjustable variables until they get the answer they were looking for.

  7. It is even worse than that.

    Code can be compiler release senstive, language sensitive, order of operation sensitive, and more.

    In GIStemp, I looked at just ONE line of code in one program out of a dozen or so. A minor coding fault caused an average 1/1000-degree warming of the overall data, and 1/10 C warming of individual readings.
    https://chiefio.wordpress.com/2009/07/30/gistemp-f-to-c-convert-issues/

    covers it, and some compiler dependencies and more.

    I then varied some of the data-type choices, and by making different “reasonable” choices could get up to a 1/2 C difference…

    It is essential to have an independent, trained programming team audit every single line of code before you can even begin to trust it, and even then, test repeatedly.

    The skill needed is beyond that of the climate researchers. It takes a good engineer to even start on the task, with lots of code audit experience.
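As a hypothetical illustration of how one small coding choice in an F-to-C conversion can bias an entire data set (this is not GIStemp’s actual code): truncating the converted value to tenths of a degree, instead of rounding it, shifts every record systematically downward.

```python
import math

# Convert F -> C for 20.0 F .. 99.9 F in 0.1 F steps, then keep tenths of a
# degree either exactly or by truncating (flooring) at the tenths digit.
temps_f = [t / 10.0 for t in range(200, 1000)]
exact = [(f - 32.0) * 5.0 / 9.0 for f in temps_f]
truncated = [math.floor(c * 10.0) / 10.0 for c in exact]

# Truncation always drops toward minus infinity, so the converted record is
# biased cold by a few hundredths of a degree on average.
bias = sum(e - t for e, t in zip(exact, truncated)) / len(exact)
print(bias)  # roughly 0.05
```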

    • What an interesting article, why am I not at all surprised?

      I would recommend all to read your article, but just in case people do not have the time, and in view of the importance of your findings, I set out your conclusions drawn from your analysis:

      In Conclusion

      So here we have a specific line of the GIStemp code that warms about 1% of the data records by 1/10C in the conversion from F to C.

      We also have a NOAA supplied data set with fractional degrees of F when we know that the fractional part is a lie (or, to be charitable, an error… of False Precision). NOAA calculate a monthly average by a method that may, or may not, be valid and give us data that are not properly vetted for precision. It has a 1/100F precision when no such precision is possible.

      And I’ve done a test on the GHCN data (already in degrees C, so we’re not going through the F-to-C conversion “issue”) to see just how much sensitivity there is in the “Global Average Temperature” to the kinds of decisions that programmers had to make at NOAA and GIS in writing their code. What I found was that about 0.5 C is “easy” to change, and that about 0.1 C to 0.2 C is very hard to avoid. So it is almost certain that the 1/10 C place owes its values as much to programmers and compilers as it does to any actual physical changes in the world.

      Any statement about “Global Average Temperature” changing in the tenths or hundredths place is “dancing in the error band” of the calculations. There is no “truth” in any digits other than the full degrees. The whole body of code must be “vetted” and tested in a manner similar to what I did above before there can be any trust given to the “bits to the right of the decimal point”.

      • Any statement about a complex computer model in the tenths-to-hundredths-of-a-degree range is just silly. Back when I first got involved in computer simulations in the 1970s, we got a short and very sensible lecture on the difference between accuracy and precision. A greater number of decimal places in such numbers is no indication of accuracy; indeed, it is more often than not an indication of false precision.

        Any study that does not have an appendix that examines and details the error bars implicit in the analysis is not worth the paper it’s printed on. From what I saw in the Climategate releases, the code written at the CRU of the UEA was appalling in this regard. Not surprising, really, in that most was written by undergraduates with little or no formal computer-science training.

        In one release we see the following comment added in by the last programmer who tried to figure out what the code was doing.

        “I am seriously close to giving up, again. The history of this is so complex that I can’t get far enough into it before by head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions that I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more”

        This is the basis on which we are supposed to change our entire economic system.

      • I knew some physics students; once, one of them told me that his program was giving satisfying results, but then his team discovered they had implemented equations that were not the ones they wanted to implement, and the output was meaningless – but plausible.

  8. Scientific programming is subtle and tricky.

    Each of these options of GCC can impact program semantics in subtle ways:

    The following options control compiler behavior regarding floating-point arithmetic. These options trade off between speed and correctness. All must be specifically enabled.

    -ffloat-store
    Do not store floating-point variables in registers, and inhibit other options that might change whether a floating-point value is taken from a register or memory.
    This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a double is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables.

    -fexcess-precision=style
    This option allows further control over excess precision on machines where floating-point registers have more precision than the IEEE float and double types and the processor does not support operations rounding to those types. By default, -fexcess-precision=fast is in effect; this means that operations are carried out in the precision of the registers and that it is unpredictable when rounding to the types specified in the source code takes place. When compiling C, if -fexcess-precision=standard is specified then excess precision follows the rules specified in ISO C99; in particular, both casts and assignments cause values to be rounded to their semantic types (whereas -ffloat-store only affects assignments). This option is enabled by default for C if a strict conformance option such as -std=c99 is used.
    -fexcess-precision=standard is not implemented for languages other than C, and has no effect if -funsafe-math-optimizations or -ffast-math is specified. On the x86, it also has no effect if -mfpmath=sse or -mfpmath=sse+387 is specified; in the former case, IEEE semantics apply without excess precision, and in the latter, rounding is unpredictable.

    -ffast-math
    Sets the options -fno-math-errno, -funsafe-math-optimizations, -ffinite-math-only, -fno-rounding-math, -fno-signaling-nans and -fcx-limited-range.
    This option causes the preprocessor macro __FAST_MATH__ to be defined.

    This option is not turned on by any -O option besides -Ofast since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.

    -fno-math-errno
    Do not set errno after calling math functions that are executed with a single instruction, e.g., sqrt. A program that relies on IEEE exceptions for math error handling may want to use this flag for speed while maintaining IEEE arithmetic compatibility.
    This option is not turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.

    The default is -fmath-errno.

    On Darwin systems, the math library never sets errno. There is therefore no reason for the compiler to consider the possibility that it might, and -fno-math-errno is the default.

    -funsafe-math-optimizations
    Allow optimizations for floating-point arithmetic that (a) assume that arguments and results are valid and (b) may violate IEEE or ANSI standards. When used at link-time, it may include libraries or startup files that change the default FPU control word or other similar optimizations.
    This option is not turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications. Enables -fno-signed-zeros, -fno-trapping-math, -fassociative-math and -freciprocal-math.

    The default is -fno-unsafe-math-optimizations.

    -fassociative-math
    Allow re-association of operands in series of floating-point operations. This violates the ISO C and C++ language standard by possibly changing computation result. NOTE: re-ordering may change the sign of zero as well as ignore NaNs and inhibit or create underflow or overflow (and thus cannot be used on code that relies on rounding behavior like (x + 2**52) – 2**52. May also reorder floating-point comparisons and thus may not be used when ordered comparisons are required. This option requires that both -fno-signed-zeros and -fno-trapping-math be in effect. Moreover, it doesn’t make much sense with -frounding-math. For Fortran the option is automatically enabled when both -fno-signed-zeros and -fno-trapping-math are in effect.
    The default is -fno-associative-math.

    -freciprocal-math
    Allow the reciprocal of a value to be used instead of dividing by the value if this enables optimizations. For example x / y can be replaced with x * (1/y), which is useful if (1/y) is subject to common subexpression elimination. Note that this loses precision and increases the number of flops operating on the value.
    The default is -fno-reciprocal-math.

    -ffinite-math-only
    Allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs.
    This option is not turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.

    The default is -fno-finite-math-only.

    -fno-signed-zeros
    Allow optimizations for floating-point arithmetic that ignore the signedness of zero. IEEE arithmetic specifies the behavior of distinct +0.0 and −0.0 values, which then prohibits simplification of expressions such as x+0.0 or 0.0*x (even with -ffinite-math-only). This option implies that the sign of a zero result isn’t significant.
    The default is -fsigned-zeros.

    -fno-trapping-math
    Compile code assuming that floating-point operations cannot generate user-visible traps. These traps include division by zero, overflow, underflow, inexact result and invalid operation. This option requires that -fno-signaling-nans be in effect. Setting this option may allow faster code if one relies on “non-stop” IEEE arithmetic, for example.
    This option should never be turned on by any -O option since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions.

    The default is -ftrapping-math.

    -frounding-math
    Disable transformations and optimizations that assume default floating-point rounding behavior. This is round-to-zero for all floating point to integer conversions, and round-to-nearest for all other arithmetic truncations. This option should be specified for programs that change the FP rounding mode dynamically, or that may be executed with a non-default rounding mode. This option disables constant folding of floating-point expressions at compile time (which may be affected by rounding mode) and arithmetic transformations that are unsafe in the presence of sign-dependent rounding modes.
    The default is -fno-rounding-math.

    This option is experimental and does not currently guarantee to disable all GCC optimizations that are affected by rounding mode. Future versions of GCC may provide finer control of this setting using C99’s FENV_ACCESS pragma. This command-line option will be used to specify the default state for FENV_ACCESS.

    -fsignaling-nans
    Compile code assuming that IEEE signaling NaNs may generate user-visible traps during floating-point operations. Setting this option disables optimizations that may change the number of exceptions visible with signaling NaNs. This option implies -ftrapping-math.
    This option causes the preprocessor macro __SUPPORT_SNAN__ to be defined.

    The default is -fno-signaling-nans.

    This option is experimental and does not currently guarantee to disable all GCC optimizations that affect signaling NaN behavior.

    -fsingle-precision-constant
    Treat floating-point constants as single precision instead of implicitly converting them to double-precision constants.

    -fcx-limited-range
    When enabled, this option states that a range reduction step is not needed when performing complex division. Also, there is no checking whether the result of a complex multiplication or division is NaN + I*NaN, with an attempt to rescue the situation in that case. The default is -fno-cx-limited-range, but is enabled by -ffast-math.
    This option controls the default setting of the ISO C99 CX_LIMITED_RANGE pragma. Nevertheless, the option applies to all languages.

    -fcx-fortran-rules
    Complex multiplication and division follow Fortran rules. Range reduction is done as part of complex division, but there is no checking whether the result of a complex multiplication or division is NaN + I*NaN, with an attempt to rescue the situation in that case.
    The default is -fno-cx-fortran-rules.

    https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
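The hazard `-frounding-math` guards against — a result that silently depends on whichever rounding mode is in force — can be imitated in Python's decimal module, which lets the rounding direction be switched at run time. This is an analogy of mine, not GCC behaviour:

```python
from decimal import Decimal, getcontext, ROUND_FLOOR, ROUND_CEILING

getcontext().prec = 10                 # work with 10 significant digits

one, three = Decimal(1), Decimal(3)

getcontext().rounding = ROUND_FLOOR
down = one / three                     # 0.3333333333

getcontext().rounding = ROUND_CEILING
up = one / three                       # 0.3333333334

print(down, up)                        # same expression, two different answers
```

A compiler that constant-folded `1/3` at compile time would bake in whichever rounding mode it assumed — which is exactly the folding the option disables.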

  9. “Under chaotic conditions predictions made by computer models are unreliable.”
    AND, the atmosphere is not made of boxes in a grid, nor does it behave as a load of boxes in a grid and we cannot hope to understand it if we think of it as boxes in a grid.

    • AND.. with every so-called ‘box’ there will be interfaces to the neighbouring boxes, which will need heavy-duty maths to describe how these boxes interact. These interfaces do not exist!

      • Of course they exist. They are coupled nonlinear differential equations. “Doesn’t exist” is very different from “fiendishly difficult to solve, if at all”.

    • Do not forget that to make the mathematics and algorithms easier, changes in CO2 level are considered instantaneous over all ‘boxes’. The errors due to that assumption, which exceed those due to iterated float imprecision, are then compounded by the float imprecision.

    • All science is about simplified models.

      I like to think of chaos type edge effects like this.

      A sniper in an ideal world has a perfect rifle, the air is dead calm, the cartridge is reliably loaded with the same charge of explosive manufactured to the same formula, and his sights are set perfectly, the air is at standard pressure and humidity and we ignore any effects of sunshine (yes, light exerts a force too). And the prediction is that at a mile range he kills the president.

      In reality, at that range so many random second order effects come into play that any rifle marksmen will tell you that identical shots will be out by an inch or three.

      And here’s the rub. The result of a single shot could be anything from a breeze past his cheek, to a nasty flesh wound, permanent brain damage or instantaneous death.

      No laws of physics are violated to create this uncertainty, merely the nature of the system we have hypothetically constructed.

      And yet human history being what it is, this might be the difference between a global war and a nasty incident best forgotten.

      The point is that the uncertainty from mathematical precision entirely mimics real-world uncertainty. This isn’t necessarily an issue with the models, or the computation; it’s an issue with using a particular sort of model outside the limits of its valid applicability.

      We are asking the question ‘will a rifle aimed this way, kill the president or not?’ and the answer is ‘we can’t say: A random gust of wind, or a slightly out of true manufactured round, could be the difference between life and death’.

      I have said it before and I will say it again, as an engineer, this is a bad solution. If you want to ensure the outcome, get closer.

      I think people are still confused by all this chaos.

      This article is really about the butterfly effect. ‘Sensitivity to initial conditions’ – or in this case floating point precision.

      If you have a model that is that sensitive, it simply isn’t usable as a predictive device. Period.

      I am more interested in models that do actually represent reality and yet still cannot be used to predict outcomes. That is the class of equations that produce (via negative feedback) bounded solutions, yet cannot be analysed sufficiently well to give even an approximate solution, due to the sorts of effects mentioned above.

      What we need to do here, is model climate using these sorts of delayed feedback models, in order to get a picture of the boundaries of climate – that is, a range of temperatures beyond which it cannot go…

      I suspect once the heat (sic!) goes out of the climate argument as countries promise much, do nothing, and nothing actually happens, we will develop the climate models to incorporate the sort of multi-decadal negative feedback systems that cause the quasi periodic oscillation of temperature and other climate and weather related effects.

      I have been shying away from it, but I suspect I may need to write some sample code to demonstrate how such periodic delayed feedback systems actually create a system that fluctuates naturally within certain bounds, but relatively unpredictably.
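Until then, here is a minimal Python sketch of the kind of thing I mean — the Mackey–Glass delayed-feedback recurrence with its textbook parameters, discretized with a crude unit-step Euler update (nothing here is tuned to climate; it is purely illustrative). Delayed negative feedback plus plain decay keeps the trajectory inside fixed bounds, yet for these parameters the underlying system does not settle to its equilibrium:

```python
def mackey_glass(n_steps, delay=17, beta=0.2, gamma=0.1, x0=1.2):
    """x[t+1] = x[t] + beta*x[t-delay]/(1 + x[t-delay]**10) - gamma*x[t]

    Production depends on the state `delay` steps ago (a lagged,
    saturating feedback); decay acts on the current state.
    """
    x = [x0] * (delay + 1)                     # constant initial history
    for t in range(delay, delay + n_steps):
        lagged = x[t - delay]
        x.append(x[t] + beta * lagged / (1.0 + lagged ** 10) - gamma * x[t])
    return x[delay:]

traj = mackey_glass(2000)
print("min", min(traj), "max", max(traj))      # bounded, but not settled
```

The saturation in the production term is what supplies the bound: no matter how far the lagged state wanders, the feedback it injects is capped, so the trajectory fluctuates within a fixed band instead of escaping.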

    • Just as importantly, the real world has essentially infinite precision, and eventually a difference between the model and reality in the millionth digit will make a difference.

  10. I don’t understand this. There is an equilibrium temperature determined by insolation, that doesn’t change. Any departure causes a feedback to arise which tends to return the system to equilibrium, no matter whether the “error” is from natural variability or from processor “rounding” errors. Any such errors cannot be cumulative or the model is simply wrong.

    • Yes; the post was misleading to start off with “The reader will note this equation exhibits chaotic properties.” There’s nothing about the system defined by that equation that is inherently chaotic. If you run that system in your favorite differential-equations solver, you won’t observe chaotic behavior.

      All he’s showing is that variations in computer operations can cause minuscule errors that pile up, independently of whether the system being modeled is chaotic.
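That is easy to check. A short Python sketch of my own, using the head post’s β = 100 and T∞ = 243 but shrinking the update by a factor h so that it approximates the underlying ODE rather than taking the coarse one-step leap:

```python
beta, Tinf = 100.0, 243.0      # constants from the head post's eq. (5)
h = 0.01                       # small sub-step: approximate the ODE, not the big leap
T = 300.0
history = [T]
for _ in range(2000):
    T += h * beta * (1.0 - (T / Tinf) ** 4)   # same feedback, gentler steps
    history.append(T)

print(history[0], "->", history[-1])          # smooth relaxation toward 243
```

The oscillatory hopping of the full-size update (167 → 245 → 241 → …) disappears entirely: the solution glides down monotonically to equilibrium, confirming there is nothing chaotic in the system itself.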

      • Exactly,

        And this can add up to be something of significance. As E. M. Smith ( December 6, 2015 at 12:29 am ) observes:

        It is even worse than that. Code can be compiler release sensitive, language sensitive, order of operation sensitive, and more.
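Order-of-operation sensitivity in particular takes only three numbers to demonstrate (Python shown, but the same holds in any IEEE-754 double arithmetic):

```python
import math

xs = [1e16, 1.0, -1e16]

left_to_right = (xs[0] + xs[1]) + xs[2]   # the 1.0 is absorbed: float spacing at 1e16 is 2.0
regrouped     = (xs[0] + xs[2]) + xs[1]   # exact cancellation first, then + 1.0

print(left_to_right, regrouped)           # 0.0 vs 1.0
print(sum(xs), math.fsum(xs))             # plain sum loses it; fsum compensates
```

A compiler or optimizer that re-associates the sum silently changes the answer — which is exactly why fast-math-style reordering has to be opt-in.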

    • pochas said “There is an equilibrium temperature determined by insolation . . . ” Peripheral to the topic, but please note that at each point in the atmosphere one can calculate (at least) two temperatures both of which change most of the time and which cannot be in equilibrium because of transparency: the gas kinetic, and the radiation field. They are seldom closer than several degrees F. At best the two will be in balance twice a day as the diurnal curves cross each other.

      • It’s true that the earth is never in thermal equilibrium, as you say. Because I like to avoid long-winded comments I did skate around some important things that can cause the equilibrium surface temperature to change. They must all be understood before any attempt can be made to model the climate. To name a few, the day/night and seasonal cycles, solar activity (TSI, solar wind, terrestrial and interplanetary magnetic fields), solar spectral composition, tidal effects on oceans and atmosphere, volcanism (aerosols), and yes, humans.

  11. Using integer vectors, as in cryptographic mathematics, the numbers can be as big and as accurate as you want, but calculation time will increase.
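That trade-off can be sketched with Python's fractions.Fraction standing in for big-integer vectors (a rational is just a pair of arbitrary-size integers). Running the head post's recurrence T ← T + β(1 − T⁴/T∞⁴) this way incurs no rounding error at all, but the fourth power roughly quadruples the number of digits every step:

```python
from fractions import Fraction

beta, Tinf = Fraction(100), Fraction(243)
T = Fraction(300)
digit_counts = []
for n in range(1, 7):
    T += beta * (1 - (T / Tinf) ** 4)      # exact rational arithmetic: no rounding, ever
    if n == 1:
        first = T                          # ~167.694268745812, cf. the interval run below
    digit_counts.append(len(str(T.denominator)))
    print(n, float(T), "denominator digits:", digit_counts[-1])
```

Six steps already costs tens of thousands of digits; a 120-step run like the Fortran one below would need on the order of 4^120 digits. Exact arithmetic is a diagnostic tool here, not a practical fix.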

  12. Here it is as Fortran, using interval arithmetic.
    real, parameter :: Tinf = 243
    real, parameter :: beta = 100
    interval :: T
    integer :: i

    T = 300
    do i = 1, 120
      T = T + beta*(1 - (T/Tinf)**4)
      print *, i, T
    end do
    end
    The results start out looking good, but suddenly…
    1 [167.69426874581208,167.6942687458124]
    2 [245.01402143715617,245.01402143715672]
    3 [241.65731550475314,241.65731550475474]

    36 [163.12325262212141,308.84946610130112]
    37 [2.1702338224654909,388.54281085174552]
    38 [-551.45741730850421,488.54281021553476]
    39 [-3103.7585801101132,588.54281021553482]
    40 [-2664504.7860223172,688.54281021553482]
    41 [-1.4455759832608285E+18,788.54281021553482]
    42 [-1.2523871492099317E+65,888.54281021553482]
    43 [-7.0555246943510491E+252,988.54281021553482]
    44 [-Inf,1088.542810215535]

    This is using double precision arithmetic. So even the first version has its problems. Interval arithmetic is often pessimistic, but it can give you a clue that you’ve got trouble. In this particular case, what it’s telling us is that this is a lousy way to integrate this particular equation, which isn’t really surprising. In more complicated situations, it can be hard to see. (Kahan has an instructive paper on useful but not foolproof ways to tell if a floating point computation should not be trusted.)
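The widening doesn't even need a real interval package. Here is a naive Python imitation of mine (ordinary doubles, no outward rounding, worst-case pairing of the bounds, seeded with a rounding-sized initial width), and it tells the same qualitative story: tight bounds for dozens of steps, then the lower bound runs away while the upper bound creeps up by β per step:

```python
BETA, TINF = 100.0, 243.0

def step(lo, hi):
    """One interval image of T <- T + BETA - BETA*T**4/TINF**4.

    The new lower bound pairs with the largest possible T**4 and vice
    versa; once the interval straddles zero, min(T**4) is 0.
    """
    p4 = (lo ** 4, hi ** 4)
    p4_lo = 0.0 if lo < 0.0 < hi else min(p4)
    p4_hi = max(p4)
    return (lo + BETA - BETA * p4_hi / TINF ** 4,
            hi + BETA - BETA * p4_lo / TINF ** 4)

lo, hi = 300.0 - 1e-13, 300.0 + 1e-13   # seed: one rounding error's worth of doubt
widths = []
for i in range(1, 60):
    lo, hi = step(lo, hi)
    widths.append(hi - lo)
    if i <= 3 or hi - lo > 1.0:
        print(i, lo, hi)
    if lo < -1e50:                       # hopeless; stop before float overflow
        break
```

Because interval subtraction never cancels, the width multiplies by roughly 1 + 4βT³/T∞⁴ (about 2.6 near equilibrium) every step, even though the underlying iteration is converging — which is the pessimism the comment above mentions.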

    • I agree with your comment. The method shown isn’t a formal numerical method of solution. The article has nothing to do with chaos but is a naive exposition of numerical rounding errors.

      • The example used does not exhibit mathematical chaotic behavior (a particular jargon meaning), but the computer processing done on it does exhibit the common meaning of chaos. The result becomes rapidly and unpredictably disordered.

        I’m not sure which the author set out to illustrate, so I refrain from tossing rocks about it. YMMV.

  13. GCMs ought to be regarded as a source of entertainment and amusement for computer buffs, with the hope that some good will spin-off from their efforts to model such a system in such a way. But sad to say it has been no laughing matter as the owners and operators of these models have extracted, with some determination, just the kind of illustrations they seek to show the world what they insist must happen to that system thanks to our CO2 emissions. This is would-be visionary stuff. Not science.

  14. You have lost the plot in terms of the definition of “simple”.
    Try something like the change in the investment rate for savings over time, even that is complex for most people. If I recall someone did this some years ago on this site. It illustrates the point without the complexity.

    • You know, there’s a saying in French that goes like “Envoyez un âne à paris, il n’en reviendra pas plus appris” (‘send a donkey to Paris, it won’t come back any smarter’ – my translation).

      Or in the alleged words of Einstein: “Everything should be made as simple as possible, but not simpler.”

      I can appreciate that not everyone will find this article enlightening, but that’s not a reason not to write it.

  15. In another way, it is “even worse” – that is psychology. I’ve done modeling work in a context other than climate, but the psychology is the same in all modeling. At the start, there is uncertainty in some data or “parameters” (assumptions). When doing good modeling work, these are always well understood at the start. But after days (or months or years) of looking at numbers with 8 (or more) significant digits, it is VERY EASY to start thinking that differences in that 5th significant digit (or 4th or 3rd etc.) really tell you something while forgetting that important inputs were +/- 10% (for example). In my experience, it takes extreme discipline to avoid this trap. Maybe this is one reason one well known climate modeler (who did not accept the global warming party line) once said something like, “climate modelers need to go outside and look at the weather now and then.” In “impact” models, ones which assume a certain warming and then try to predict the impact of that warming on some ecosystem, the input assumptions have huge uncertainties, making this phenomenon probably even more pronounced.

  16. This post has nothing to do with climate. It is one of an endless series of posts which seems to claim that ordinary everyday engineering maths is impossible.

    But it isn’t. Engineering maths works. Computational fluid dynamics works. Planes fly, bridges stay up.

    It illustrates that if you don’t know what you are doing, solution algorithms for differential equations can be unstable. You learn that quickly in CFD. Your program blows up within a few thousand iterations. But GCMs don’t do that. There is a whole body of knowledge about stable solution methods. You have to get it right. In a situation like this, you first have to think about timestep. But better, use a degree of implicitness, which relates the change in T to the average T^4 during the time step, not the initial value.

    • Nick- “Engineering maths works. Computational fluid dynamics works. Planes fly, bridges stay up.” Engineering math works because engineers are responsible when the bridge falls down or the plane crashes. They know how to check for instabilities in solutions and how to prevent them, and then do a reality check and make sure the results make sense. They also always build generous safety factors into every critical design. Even so, planes still crash and bridges do fall down, but only rarely. There are still some 70-80 year old airplanes still being flown in commercial use because 1) they were well-designed to begin with, 2) they were substantially overbuilt, 3) they can be economically repaired and upgraded and have been over the years.

      The large scale GCM’s cannot be checked for the effect of calculation instabilities at every iteration. Even so, the GCM’s can only be correct if they DO have the actual, important variables as part of the models. In that case, only ONE model would be needed. The fact that 32 or 97 or 123 different versions of the same process are used is because averaging them gives an illusion that the process is being correctly modeled and the results mean something useful. It’s only now, 30 years on, that the models are showing that they do not correctly model the climate. It’s time for engineering to take over and re-do the design with the “correct” parameters and equations so the result can be tested for another 30-50 years to see if it works better. We’re still in the era before stage 1) above, making a well-designed machine. Give it another 1000 years and we may have a well-designed, well-tested climate model.

      • “The large scale GCM’s cannot be checked for the effect of calculation instabilities at every iteration.”
        You don’t have to do that. The basic requirement of any numerical method is that the solution, when substituted, satisfies the equations. You can check that.

        GCM developers wouldn’t be flummoxed by a simple stiff DE like this. But the basic check that is done that would show it up is grid and timestep independence. If you have an instability like this, and it hasn’t already caused some overflow, then it will surely show when you vary the time step.

      • “The basic requirement of any numerical method is that the solution, when substituted, satisfies the equations. You can check that.”

        And what would that be for the climate models? Tn+1 = Tn − αTn^4 + β or Tn+1 = Tn − αTn^4 − 2αTn^3(Tn+1 − Tn) + β

        I kind of thought that it was just an example of how easy it is to cock up something more complex. I’m sure that you could do a better job of calculating something so simple but that’s not the point.

    • Nick Stokes, I never saw a plane (excluding RCM planes) that didn’t get plenty of time in a wind tunnel, whether in part or as a (model) whole, starting with the Wright Flyer.

      • Boeing says:
        “The application of CFD today has revolutionized the process of aerodynamic design, and CFD has joined the wind tunnel and flight test as a critical tool of the trade.”

        Sounds like they think it works.

      • ” bridges and buildings were run through wind tunnels”
        You can’t put a bridge in a wind tunnel. You can try with a scale model, but getting the scaling right is a big problem even for fluid flow alone, and for fluid-structure interaction with large fluctuations, well, scarcely possible.

      • Billy Liar December 6, 2015 at 2:36 pm

        Nick Stokes, I never saw a plane (excluding RCM planes) that didn’t get plenty of time in a wind tunnel, whether in part or as a (model) whole, starting with the Wright Flyer.

        Nick Stokes December 6, 2015 at 3:51 pm

        Boeing says:

        “The application of CFD today has revolutionized the process of aerodynamic design, and CFD has joined the wind tunnel and flight test as a critical tool of the trade.”

        Sounds like they think it works.

        Nick, he didn’t say that CFD doesn’t work. He said CFD is tested in the wind tunnel.

        w.

      • Willis,
        “Nick, he didn’t say that CFD doesn’t work. He said CFD is tested in the wind tunnel.”
        I said originally that it works. This seemed to be controversial, so I pointed out that Boeing was affirming it. Yes, it is tested, but it passes. CFD works.

        But in fact, CFD can tell you more than can be tested in wind tunnel. The Boeing link says:
        “However, the wind tunnel typically does not produce data at flight Reynolds number, is subject to significant wall and mounting system corrections, and is not well suited to provide flow details. The strength of CFD is its ability to inexpensively produce a small number of simulations leading to understanding necessary for design.”

        This relates to what I said about bridges. Just scaling even a pure fluid flow is hard, and the paper says that they can’t achieve flight Reynolds number – ie can’t scale. CFD can.

        That’s just external. Engine makers use CFD (and linked structural) in engine design. Flow over the compressor blades is critical, as is stress in them. Very hot air, reactions, huge forces, all kinds of acoustics and vibration, all whirring at high speed. A typical environment where CFD is all that you have. And people say the atmosphere is too hard.

      • Nick Stokes December 7, 2015 at 12:49 am

        That’s just external. Engine makers use CFD (and linked structural) in engine design. Flow over the compressor blades is critical, as is stress in them. Very hot air, reactions, huge forces, all kinds of acoustics and vibration, all whirring at high speed. A typical environment where CFD is all that you have. And people say the atmosphere is too hard.

        And why do they say the atmosphere is too hard? Because the CFD models of the atmosphere have performed so abysmally at longer range forecasting, being totally unable to predict even next month’s weather.

        Yes, CFD is extremely valuable. But as near as anyone can tell to date, it is totally inadequate to the task of simulating the climate. If it were adequate, it would have either predicted or hindcast the current plateau in warming. It did not.

        Hardly a ringing endorsement of CFD …

        w.

      • Willis,

        You make it sound like a black-and-white sort of thing: Either the CFD models can predict climate or they cannot. However, the actual reality of the situation is that we know what sort of things the models are better at and what they are worse at. They are good at predicting the weather short-term, before the chaotic effects (i.e., extreme sensitivity to initial conditions) significantly degrades their performance. They are also presumably pretty good at predicting things that are not very sensitive to initial conditions, like how much colder the climate will typically be here in Rochester in January as compared to July. They are not very good (or, maybe so-so) at predicting issues of climate that are sensitive to initial conditions such as whether this winter will be colder or warmer than average in Rochester. The prediction of the ups-and-downs that have led people to talk about a plateau in warming is something that is sensitive to initial conditions; the climate 100 years from now in response to greenhouse gases is something that is not.

      • joeldshore December 7, 2015 at 5:35 pm

        Joel, always good to hear from you. You say:

        Willis,

        You make it sound like a black-and-white sort of thing: Either the CFD models can predict climate or they cannot. However, the actual reality of the situation is that we know what sort of things the models are better at and what they are worse at. They are good at predicting the weather short-term, before the chaotic effects (i.e., extreme sensitivity to initial conditions) significantly degrades their performance. They are also presumably pretty good at predicting things that are not very sensitive to initial conditions, like how much colder the climate will typically be here in Rochester in January as compared to July.

        OK so far.

        They are not very good (or, maybe so-so) at predicting issues of climate that are sensitive to initial conditions such as whether this winter will be colder or warmer than average in Rochester.

        True dat.

        The prediction of the ups-and-downs that have led people to talk about a plateau in warming is something that is sensitive to initial conditions; the climate 100 years from now in response to greenhouse gases is something that is not.

        I’m sorry, but although this is put forwards as though it were an obvious fact by the climate modelers, just as you have done so here, to date we have absolutely no evidence that it is true. If you have evidence that the climate 100 years from now is NOT sensitive to initial conditions, this would be the time to present it …

        Next, the “plateau in warming” is up to two decades now … so your claim is that time spans up to two decades are dependent on initial conditions.

        Another statement of your claims that I’ve seen is that “weather is an initial-conditions problem, and climate is a boundary problem”. However, when I’ve asked the proponents of this theory at what point in time does one change to the other, they just wave their hands … is it no longer an initial-value problem after five years? Ten years? Fifty years? You’ve said that it IS an initial-value problem out for no less than twenty years … but suddenly it will stop being one after what? … 30 years? 50 years?

        My next question is, what are the nature and value of the boundary conditions TODAY that define the claimed “boundary problem”? I mean yes, there is a boundary in that space radiates at about 3 K, and there is a boundary where the sea meets the air, but just what are the boundaries of climate today?

        Next, if we know the current boundary values, what are the future values of those boundary conditions in say 2050? Will they be different?

        So … let me pose those questions to you. Your claim is that the CFD models are right in the short term of days, passable in the longer term of a couple weeks, much worse in the longer term of a couple months, wrong in the longer term of a couple years, and totally off the rails by the time we’re out twenty years …

        But by gosh, you assure us they will come right again a hundred years from now when we’re all dead … and you base this assurance on what?

        You are a scientist, Joel, and from all reports a good one. What actual evidence do you have for your claim that someday the models will come right? Because their performance to date gives no evidence of that. For example, what other computer models of complex systems can you point to that have that quality (right short-term, wrong middle-term, right long-term)?

        So the modelers make an iterative model of the most complex and most turbulent system we’ve ever tried to model. The climate contains the following subsystems: the atmosphere, hydrosphere, biosphere, cryosphere, lithosphere, and the electromagnetosphere. None of them are fully understood. In addition to phenomena occurring internal to one subsystem, there are innumerable mutual two-way or multiple-way interactions between these subsystems. For example: in response to heat from the downwelling solar and longwave radiation (electromagnetosphere), the plankton (biosphere) in the oceans (hydrosphere) put chemicals into the air (atmosphere) that function as nuclei for clouds that affect both the incoming sunlight and the downwelling longwave (electromagnetosphere).

        Typical iteration times for climate models are half an hour. Anyhow, they make up some kind of tinkertoy model of this insanely complex system, with many of the important emergent phenomena like thunderstorms and plankton and dust devils and Rayleigh-Benard circulation either simply “parameterized” or ignored entirely. Then they run their model for 1,753,200 iteration cycles, all the way out to the year 2100 …

        Now, we know that after running for a modeled year of time, about 18,000 cycles, the model results are unreliable. And we know that after 350,000 cycles, or twenty years, they are still unreliable.

        And despite all of that, some folks, including otherwise reasonable scientists, believe without evidence that if we just run them a leetle bit longer, the results from their simple iterative models will somehow represent the climate a hundred years from now …

        In addition to the lack of evidence that the models will eventually come right, I’ve been writing computer programs for a half century now, including a variety of iterative models, and my experience says no way. Anyone who thinks that the climate models can predict the future climate out 100 years is fooling themselves about the capability of the programmers, the adequacy of the models, and the state of climate science. We’re nowhere near to understanding the climate well enough to do that.

        My best to you and yours,

        w.

      • Thanks for your reply, Willis.

        I’m sorry, but although this is put forwards as though it were an obvious fact by the climate modelers, just as you have done so here, to date we have absolutely no evidence that it is true. If you have evidence that the climate 100 years from now is NOT sensitive to initial conditions, this would be the time to present it …

        It is not a hard thing to test that this is true in the models and is done all the time. They perturb the initial conditions on the model and run it again.

        Now, you might claim that is a deficiency of the models, but the fact is that the whole field of chaos theory arose from studying models. It is not a case where they saw something in nature but could not reproduce it. In fact, Lorenz wrote down some ridiculously simple model and discovered the chaotic behavior. Indeed, most of what we know about the chaotic nature of the weather is from models…It is not like you can do these “perturb the initial condition” experiments on the real earth and see what happens.

        My next question is, what are the nature and value of the boundary conditions TODAY that define the claimed “boundary problem”? I mean yes, there is a boundary in that space radiates at about 3 W/m2, and there is a boundary where the sea meets the ocean, but just what are the boundaries of climate today?

        As I understand it, the sense in which they mean it is a boundary value problem is that you are perturbing the energy balance of the Earth and the question is what happens in response to that. Just as when you go from winter to summer, you perturb the energy balance (on a more local scale).

        In the rest of your post, you make it sound like there are no ways to test the models. But, in fact there are many ways, including looking at the fidelity of reproducing current climate, modeling past climatic changes, and looking at more detailed pieces (like is the upper troposphere moistening as the water vapor feedback claims that it should: http://www.sciencemag.org/content/310/5749/841.abstract )

        No doubt, it is a difficult problem and there are considerable uncertainties. But, let’s face it, the easy problems in science are completely solved already and it is not as if we are doomed to not be able to make progress on these more difficult problems. If we demand ridiculous certainty and precision of the science before we do anything, then we may well be doomed, but most people (other than those who have an extreme ideological predisposition to dislike the sort of solutions proposed) don’t demand that kind of precision before they take action.

      • joelshore says:

        …most people (other than those who have an extreme ideological predisposition to dislike the sort of solutions proposed) don’t demand that kind of precision before they take action.

        The kind of “precision” that models produce is so wrong it’s ridiculous:

        And as usual, joelshore sees everything through the prism of ‘ideology’, confirming once again that the ‘dangerous AGW’ scare is nothing more than a self-serving hoax based on politics and pseudo-science.

      • Joel, in response to your claim that:

        The prediction of the ups-and-downs that have led people to talk about a plateau in warming is something that is sensitive to initial conditions; the climate 100 years from now in response to greenhouse gases is something that is not.

        I had said:

        I’m sorry, but although this is put forwards as though it were an obvious fact by the climate modelers, just as you have done so here, to date we have absolutely no evidence that it is true. If you have evidence that the climate 100 years from now is NOT sensitive to initial conditions, this would be the time to present it …

        In response to my request for evidence, you have given me nothing but a paean to the climate models … it appears that despite a solid scientific background, you are mistaking the output of untested climate models for evidence.

        So I’ll ask again—do you have any EVIDENCE that the models will someday come right? Because there is nothing in the model outputs to suggest that. To date, they’ve grown worse and worse over time … so just what magic is supposed to make them suddenly be able to predict the far, far distant future when they have failed so miserably at the short term (years) and the medium term (decades)?

        You go on to say:

        In the rest of your post, you make it sound like there are no ways to test the models. But, in fact there are many ways, including looking at the fidelity of reproducing current climate, modeling past climatic changes …

        You misunderstand me. I did not say that there were no ways to test the models. I’m saying that they’ve failed the tests.

        They are unable to reproduce the current climate, and are getting worse over time, not better. As to “modeling past climatic changes”, they can do well only on what they have been trained on, which is the recent temperature history. The models do a horrible job, for example, on hindcasting recent precipitation, a vital component of the climate. But it’s worse in the past. The models can’t tell us why the earth warmed up in Medieval times. They’ve been unable to say why those warm times ended and we went into the Little Ice Age. And they are similarly clueless about why the LIA ended and the globe has been warming for ~ 300 years.

        So if you consider “the fidelity of reproducing current climate, [and] modeling past climatic changes” to be valid tests of the models, there’s bad news … they’ve failed.

        Finally, I ask again. We know that the models do very poorly on short and medium term predictions. But you claim that at some future unknown date they will all come right and start suddenly giving us the right answers … so:

        a) What is your evidence that the climate models’ currently terrible performance at forecasting the future will improve someday, and

        b) When? I mean, when will the CMIP5 model projections become correct? In 3020? 2050? Tomorrow?, and

        c) As a reality check, can you point to even one iterative computer model of any kind in any field which is right in the short term (weeks), wrong in the longer term (years to decades), but somehow it comes right again in the longest term (centuries)? I know of no iterative models with those characteristics, but I was born yesterday, what do I know …

        I asked these three questions before, and you totally ignored all three of them … how about you answer them this time so I don’t have to ask again? Or at least let me know if you are refusing to answer them, or just ignoring them and hoping they go away …

        Best regards,

        w.

    • Nick writes “But is isn’t. Engineering maths works. Computational fluid dynamics works. Planes fly, bridges stay up.”

      These are instantaneous effects, Nick, and the complexity is orders of magnitude less than a GCM. But to illustrate: how would CFD go if you tried to predict the fuel consumption of a theoretical aircraft circling the earth a bunch of times?

    • In the engineering world, model validation is required and performed. Model parametrizations for two-phase heat transfer, correlations for system bifurcation behavior, etc., are validated in the laboratory. Then the model is built to treat these parameters conservatively, based on the real-world limitation being protected. Even then, no one who deals with models of this type would take any single model run as indicative of the real-world result. You either treat all uncertainties conservatively to ensure that limits are not exceeded, or you sample the uncertainties to develop a probability and confidence based on multiple model runs.

      • And even if you do multiple model runs, with a system as complex as climate you still have no idea if you have fully (or even significantly) explored the parameter space … nor do you know how well your parameter space approximates reality …

        w.

    • That was precisely my thought too. Anyone who has had to spend a long time explaining concepts to laymen knows that the quickest way to glaze their eyes over is to put even the simplest maths in front of them. The converse is often that if you cannot simply explain a concept in words (natural language) then you probably do not understand it yourself.

    • As a layman with no mean education, including plenty of science, I have to point out that

      Tn+1 = Tn – βTn⁴/T∞⁴ + β, or Tn+1 = Tn + β(1 – Tn⁴/T∞⁴) … eq. (5)

      does, indeed, make my eyes and mind glaze over.

      • I pretty much educated myself in mathematics. At around the age of 45, I started studying calculus once again, with the determination this time to actually learn it. I studied every single day, carried my calculus book with me to the coffee shop every single day, and did this for several years. I wore out my calculus book; it is no longer fit to leave my desk.
        I gained great insight into mathematics by this daily exposure. More about my experience some other day…

      • Stephen, numbers higher than ten were mine.

        (But I do understand how Gödel’s theorems killed Hilbert’s programme. That’s logic.)

    • The term “layman” is relative. Even the best of us are laymen to some degree. I appreciate this lead post, as it helped this layman (me) to a great degree.

    • I learned to read equations by substituting the symbols with their written English meanings. It works well for me but can sometimes be painful. Debye’s specific-heat triple integration is the one that plagues my memory.

  17. This isn’t surprising.

    In programming, T^4 generally does not equal T*T*T*T. The former is almost always calculated with an approximation (typically by way of exp and log). The two should be close (whatever that means) but will rarely be bit-for-bit equal.

    A second issue arises from loss of precision, chiefly underflow and overflow. The IEEE standard, for example (which most machines use nowadays), has what’s referred to as a double: 64 bits total, with an 11-bit exponent and a 53-bit mantissa. The physical representation of the mantissa has only 52 bits; if the number can be normalized (that is, it falls within a limited range), an implicit leading 1 gives you the 53rd bit, but this disappears when the value is outside the range that can be normalized. Also, not all binary configurations represent ordinary numbers: some are set aside to represent things like infinity and Not-a-Number.

    There is also a representation with 80 bits which is used for intermediate calculations. It’s unlikely that Excel uses this, and certainly not when the value is stored in memory.

    The possibility of underflow and overflow requires careful thought, especially when doing a long calculation with many iterations. It is sometimes more prudent to give up some precision to extend the range. When multiplying many terms together, it’s often better to add the logarithms of those terms and convert back with an approximation of the exp function. A single underflow or overflow can render a calculation completely useless; using logarithms lessens the chance of under/overflow.

    BTW: this only looks chaotic. It’s quite possible to predict the result if you understand how it was computed and have the values used.
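To make the log-summation point concrete, here is a minimal sketch (in Python rather than anything above; the 200 factors of 1e10 are an arbitrary choice of mine to force overflow):

```python
import math

# 200 factors of 1e10: the true product is 1e2000, far beyond the
# ~1.8e308 ceiling of an IEEE double.
factors = [1e10] * 200

direct = 1.0
for f in factors:
    direct *= f              # silently saturates to infinity

# Summing logarithms instead keeps every intermediate value small.
log10_product = sum(math.log10(f) for f in factors)

print(direct)                # inf
print(log10_product)         # ~2000.0, i.e. the product is 10**2000
```

The direct product is useless after a single overflow, while the log-sum recovers the answer exactly, which is the trade-off described above.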

  18. Let me expand for those people who aren’t computer experts.

    Let’s take an extreme example. Integer arithmetic. And let’s pick a really simple example.

    You buy an object for 8 bucks and sell it for 10; what is the selling price as a percentage of the original amount? We know of course the answer is 10/8 × 100 = 125%. Simple, huh?

    So let’s do that with limited precision (integer) arithmetic

    So that’s 10/8 x 100. But 10/8 = 1.25 = 1 (remember, this is integer arithmetic, so the result is truncated by the limited precision of integers), times 100 = 100%.

    But let’s re-express the calculation as

    10 x 100 / 8: in the computer you’d then get 1000/8 = 125%.

    Or I could express the same equation as

    100/8 × 10
    100/8 = 12.5 = 12, then × 10 = 120%.

    So depending on how this calculation is written you get vastly different results: 100, 120 or 125%. Because compilers often reorder these expressions, if I write this simple expression in integer arithmetic I don’t know which outcome I’ll get, so in fact I would have to code it as

    (10*100)/8 in order to force the compiler to get the right answer. But note that this is different from how we ordinarily think about the equation: on paper we would write it as 10/8 x 100, yet if we transcribe the obvious equation directly into an integer computer we get the wrong answer entirely.

    The same thing happens in floating point: whenever a number can’t be expressed exactly, the representation truncates the value to the nearest representable value. Something is lost, and the smaller the value, the higher the percentage of it the lost amount becomes. So 100/8 loses less in percentage terms than 10/8. Even in floating point, how the result is calculated can make enormous differences in the result. Iterating only makes it worse.

    (10/8) × (10/8) × 100 = 100 in integer arithmetic,

    while the true answer is 156.25 (156 truncated to an integer).

    This is how my university showed the effects of limited precision; I’ve never forgotten.
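The three orderings above can be checked in any language with truncating integer division; a quick sketch in Python, whose `//` operator performs the truncating division described (for positive operands):

```python
# One formula, three orderings, three different "answers" in
# integer arithmetic. The exact result is 125.
a = (10 // 8) * 100    # 10//8 truncates to 1   -> 100
b = (10 * 100) // 8    # 1000//8 is exact       -> 125
c = (100 // 8) * 10    # 100//8 truncates to 12 -> 120

print(a, b, c)         # 100 125 120
```

Only the ordering that defers the division survives the truncation, just as the comment says.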

  19. Great thread! It is nice to see we have so many here who understand the limitations of digital computers.

    I suppose that many here also understand the limitations of models even when coded as well as we can. I would hope that many here understand that the model you code comes from your own understanding and biases about the physical situation at hand (and the laws of physics that govern it).

    In the post the author writes “(I am using this out of habit that if it were not for greenhouse gases the earth would be -30°C or 243°K, but you could try other temperatures)”. What if this “greenhouse gases warm the surface by 33 degrees” (or whatever) is completely wrong? It is well known that many of the skeptics (as versus lukewarmers) believe that atmospheric pressure, solar irradiance, albedo, water in all its forms, and other factors control the climate of earth. I also think the climate has been remarkably stable over the eons, but that is a matter of perception. (What is “remarkably stable”, after all?)

    So, the takeaway here is that we have uncertainty in the models due to the calculation difficulties outlined in the original post. We have difficulties in the construction of the models themselves, before even considering the numerical calculation difficulties. And then we have the problem that the underlying physics may well be wrong.

    The current climate models are worthless. They are an exercise in futility as far as science goes, but are wonderful at making the IPCC look “scientific” to the layman.

    • It’s more than just limitations in calculation imposed by tools. All models (including supposed physical laws) are approximations. One should never confuse the model with reality. The climate modelers seem to have fallen into this trap. When the atmosphere runs off on its own and departs from the output of the models, the modelers seem to think the data are wrong or the heat went and hid somewhere. The latter perhaps is what should be part of the investigation that leads to correcting the models but, strangely, the assumption seems to be that the atmosphere has been subjected to some bizarre aberration and somehow the models prove that.

      • +100
        The inability to represent reality accurately was what Lorenz identified. Minute differences in initial conditions in chaotic systems make prediction of past or future outcomes impossible.

        This was almost recognized by the IPCC in their statement on non-linear coupled chaotic systems. But note: it is impossible to predict past or future outcomes. That is, it is not ‘difficult’, nor a matter of knowing the correct compiler switches or IEEE standards; it is impossible.

        This is not a ‘matter of opinion’. So none of these climate models can ever be correct.

  20. One programmer could write it as T*T*T*T, another programmer could write …

    … and whatever the programmer writes, the compiler will try to optimise how it is actually done. If you code x^2 the compiler will very likely implement x*x ;)

    • Compiler optimization is overrated. I rarely enable it. For one, it makes debugging difficult: after the code is debugged, you then have to run yet another validation, because the compiler output is no longer what was debugged.

      Besides, with modern processors and large fast RAM stores, any compiler optimization is tiny compared to what might be obtained with a different algorithm. Hence we see programs written in scripting languages, where the slowdown caused by interpreting intermediate code is often imperceptible. It’s better to profile subroutine execution times and locally optimize by hand.

      • A half adder is a hardware circuit for adding one bit to another with carry output. A full adder is one that not only inputs the two bits to be added but also adds the carry from the previous lower level. It’s usually the basic building block of an N-bit adder. A full adder can be built from two half-adders.

        Not sure what it has to do with the current topic.

    • If you code x^2 the compiler will very likely implement x*x

      Depends on whether the processor architecture has square functionality built in. Agreed, Intel doesn’t seem to. In fact, using the C standard math libraries, there is no ‘square’ function.

      So you would have to specify how to perform the square anyway.

      There is multiply and there is raise to a power.

      Lord knows what a POS like Excel does.

      The general case for raising to integer powers has two methodologies I am aware of: take a logarithm, multiply, and anti-log the result; or do the successive multiplication.

      The logarithm method is almost certainly slower but is the only method that can be used for raising to fractional powers.
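For what it’s worth, the “successive multiplication” route is normally organised as binary square-and-multiply rather than n−1 separate multiplies; here is a sketch of both methodologies (my own toy code, not any particular libm’s implementation):

```python
import math

def ipow(x, n):
    """x to a non-negative integer power n, by square-and-multiply."""
    result = 1.0
    while n > 0:
        if n & 1:        # low bit of the exponent set: fold in
            result *= x
        x *= x           # square for the next bit
        n >>= 1
    return result

def logpow(x, p):
    """The log/anti-log route; the only one that handles fractional p."""
    return math.exp(p * math.log(x))

print(ipow(2.0, 10))     # 1024.0
print(logpow(2.0, 0.5))  # ~1.4142135, which ipow cannot do
```

Square-and-multiply needs only about log2(n) multiplications, which is one reason compilers prefer it for small fixed integer powers.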

      • My point is that even if you code x squared, you will probably get x times x in the object code; I’m pretty sure that’s the default for gcc on Intel platforms. Thus attempting to optimise your code by writing x+x or x << 1 instead of 2*x is pointless, since the compiler will choose what it considers best with the currently active options.

        If you want a specific implementation you may need to use inline assembler.

      • Lord knows what a POS like Excel does.

        LOL,

        Phil Jones often gets slagged off for saying he did not know how to fit a straight line in said POS. Sadly many here miss the point of that comment.

  21. Great article; I have been banging on about rounding errors for a while;

    https://www.ma.utexas.edu/users/arbogast/misc/disasters.html

    … the Vancouver stock-exchange disaster is an eye-popper!

    And on this basis, surely the Scripps CO2 series computations, on which the whole AGW edifice is built, should have been investigated and made public. And as these computations rely on data from a hardware analog-to-digital converter, an analysis of the known sources of error in A-D conversion, and of their potential for propagation through the series over many, many years, surely ought to be a public document.

    Could anyone advise?

    • TonyN said:

      and the potential for propagation through the series over many, many years

      A long, long time ago (60’s) I heard the rumor, or maybe it was fact, that the City of Philadelphia and the County government were battling it out in Court to determine “who the money belonged to” that had accumulated over many, many years as a result of a “5 decimal point round-off” on the distribution of the taxes collected on Stock Market transactions.

      The amount of money “setting in limbo” was around $10 million dollars.

  22. This was a big problem 15 years ago. However, computer scientists at Lawrence Berkeley found the problem and fixed it using arbitrary-precision arithmetic. The reference to climate models was buried in a paper that they presented at a conference. At one point it was available for download; needless to say, it no longer is!

  23. ” -30°C or 243°K ”

    Since this is a technical discussion: the unit is kelvin (K), not degrees kelvin.

  24. And to think, I just thought climate computer programs were always pre-programmed to produce imminent Thermageddon.

    I never realised that such large errors were almost inevitable, especially when you start doing hundreds of iterations.

    My last comment: the quality of the comments here completely puts the lie to alarmists who claim they are the sole providers of climate science.

  25. As a very old mechanical engineer (I took an electrical engineering course in my freshman year at MIT in 1954), I wonder whether an analog computer could be employed to solve these problems. That is, if any still exist.

  26. I have quickly written a C# (C-Sharp) program that is not complete for the demo of this article, but it might be a good start for others.
    I will be writing some code (some day) that uses a class that allows a number to be as large as one’s computer memory will allow (bypassing the 32-bit or 64-bit limit, but at the cost of much greater computing time). The code I will be referring to is in a book about C++ algorithms, and I would either have to type it out from the book or re-write it in C#.
    Here is the code, and below that some debug output. You can copy this into a file yourself and use Visual Studio Community Edition, free from Microsoft.
    —————
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using System;
    using System.Collections.ObjectModel;
    using System.Diagnostics;

    namespace Kip
    {
    [TestClass]
    public class MyTestClass
    {
    [TestMethod]
    public void MyTestMethod2()
    {
    Kipper kipper = new Kipper()
    {
    Tee = 300,
    Bee = 100,
    Tee_infinity = 243
    };

    var kips = new ObservableCollection<float>();
    for (int i = 0; i < 100; i++)
    {
    kips.Add(kipper.Iterate());
    Debug.WriteLine($"Iteration # : {i + 1}, calculated value : {kipper.Tee}");
    }

    foreach (var item in kips)
    {
    Debug.WriteLine($"value is : {item}");
    }
    Debugger.Break();
    }
    }

    internal class Kipper
    {
    public Kipper()
    {
    }

    public int Bee { get; set; }
    public float Tee { get; set; }
    public double Tee_infinity { get; set; }

    internal float Iterate()
    {
    Tee = Tee + Beta();
    return Tee;
    }

    private float Beta()
    {
    double result = Bee * (1 - Math.Pow(Tee, 4) / Math.Pow(Tee_infinity, 4));
    return (float)result; //cast double type to float type
    }
    }
    }
    ———— and some debug output
    Test Name: MyTestMethod2
    Test Outcome: Passed
    Result StandardOutput:
    Debug Trace:

    Iteration # : 1, calculated value : 167.6943
    Iteration # : 2, calculated value : 245.014
    Iteration # : 3, calculated value : 241.6573
    Iteration # : 4, calculated value : 243.8492
    Iteration # : 5, calculated value : 242.444
    Iteration # : 6, calculated value : 243.3561
    Iteration # : 7, calculated value : 242.7686
    Iteration # : 8, calculated value : 243.1489
    Iteration # : 9, calculated value : 242.9035
    Iteration # : 10, calculated value : 243.0622
    Iteration # : 11, calculated value : 242.9598
    Iteration # : 12, calculated value : 243.026
    Iteration # : 13, calculated value : 242.9832
    Iteration # : 14, calculated value : 243.0108
    Iteration # : 15, calculated value : 242.993
    Iteration # : 16, calculated value : 243.0045
    Iteration # : 17, calculated value : 242.9971
    Iteration # : 18, calculated value : 243.0019
    Iteration # : 19, calculated value : 242.9988
    Iteration # : 20, calculated value : 243.0008
    Iteration # : 21, calculated value : 242.9995
    Iteration # : 22, calculated value : 243.0003
    Iteration # : 23, calculated value : 242.9998
    Iteration # : 24, calculated value : 243.0001
    Iteration # : 25, calculated value : 242.9999
    Iteration # : 26, calculated value : 243.0001
    Iteration # : 27, calculated value : 243
    Iteration # : 28, calculated value : 243
    Iteration # : 29, calculated value : 243

    • The ObservableCollection declaration lost an “left angle bracket” “float” “right angle bracket” during the reformatting by wordpress upon posting.
      var kips = new ObservableCollection<float>();

    • I think the point of the article was that it converges with beta = 100 but is chaotic for beta = 170.

      When the beta multiplier becomes comparable to the size of the settled value, each iteration jumps chaotically to either side of it, yet is always attracted to it, so it remains confined to that region of the variable space.

      In the chaotic mode, the small differences due to rounding errors in the two ways of doing the iteration become significant, and the two paths become unrelated to each other from step to step, while both remain attracted to the same attractor: T∞.

      • Well, I could write code that iterates through various values to see what the outcomes are, but I do have a lot of other projects that I am working on…
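For anyone who wants to try it without firing up Visual Studio, here is a rough Python sketch of eq. (5), with the same T∞ = 243 and T0 = 300 as the C# code above; the 1e-12 perturbation is my own addition, just to show the sensitive dependence in the beta = 170 case:

```python
def iterate(beta, t0=300.0, t_inf=243.0, n=300):
    """Run eq. (5): T(n+1) = T(n) + beta*(1 - T(n)^4 / T_inf^4)."""
    t = t0
    for _ in range(n):
        t = t + beta * (1.0 - t**4 / t_inf**4)
    return t

# beta = 100: damped oscillation onto the equilibrium, as in the
# debug output above.
print(iterate(100.0))               # ~243.0

# beta = 170: still bounded, but a 1e-12 nudge to T0 grows into an
# order-one difference -- sensitive dependence on initial conditions.
a = iterate(170.0, t0=300.0)
b = iterate(170.0, t0=300.0 + 1e-12)
print(abs(a - b))                   # no longer small
```

The same two regimes the comment describes: convergence at beta = 100, bounded wandering around T∞ at beta = 170.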

    • I found an add in for my code editor that can copy as html, so here is that copy :

      using Microsoft.VisualStudio.TestTools.UnitTesting;
      using System;
      using System.Collections.ObjectModel;
      using System.Diagnostics;
       
      namespace Kip
      {
          [TestClass]
          public class MyTestClass
          {
              [TestMethod]
              public void MyTestMethod2()
              {
                  Kipper kipper = new Kipper()
                  {
                      Tee = 300,
                      Bee = 100,
                      Tee_infinity = 243
                  };
       
                  var kips = new ObservableCollection<float>();
                  for (int i = 0; i < 100; i++)
                  {
                      kips.Add(kipper.Iterate());
                      Debug.WriteLine($"Iteration # : {i + 1}, calculated value : {kipper.Tee}");
                  }
       
                  foreach (var item in kips)
                  {
                      Debug.WriteLine($"value is : {item}");
                  }
                  Debugger.Break();
              }
          }
       
          internal class Kipper
          {
              public Kipper()
              {
              }
       
              public int Bee { get; set; }
              public float Tee { get; set; }
              public double Tee_infinity { get; set; }
       
              internal float Iterate()
              {
                  Tee = Tee + Beta();
                  return Tee;
              }
       
              private float Beta()
              {
                  double result = Bee * (1 - Math.Pow(Tee, 4) / Math.Pow(Tee_infinity, 4));
                  return (float)result; //cast double type to float type
              }
          }
      }
    • It’s relative to your level of understanding of the discussion. This is a pretty high level of mathematics (so to speak). Hang in there. ;-)

    • Like any language, if you can speak and write it, it is easy. If you don’t have a word in your vocab you don’t know it.

  27. The term T^4 brought back the memory of a nifty formula from an advanced astronomy class in high school. The total energy output of a star is 4 * pi * r^2 * (sigma) * T^4. (It’s cooler looking when written properly rather than in text.) Sigma * T^4 is the output per square unit of surface, where T is the star’s surface temperature in degrees Kelvin (and you know that from the star’s class and color). Sigma is just a scaling constant, and 4 * pi * r^2 is the surface area of a sphere. But I never learned where that T^4 comes from in the first place.

      • Short answer: the fourth power comes from the integral over all frequencies of a third-power law. The third power comes from the three dimensions of space.

      • The fourth power can be explained as a combination of Johnson noise, characteristic impedance of free space coupled with antenna theory and the frequency spectrum of a black body as constrained by quantum mechanics. A power of one comes from Johnson noise, P=kTB. A power of one comes from the bandwidth of the black body spectrum increasing with T, the bandwidth being where the black body spectrum drops by 3dB from the peak. A power of two comes from considering thermal equilibrium as a function of frequency, the characteristic emission and capture areas are proportional to the square of the wavelength or inversely proportional to the square of the frequency. For a black body radiator at temperatures significantly higher than a few degrees K, the thermal power density at 2GHz will be four times the thermal power density at 1GHz.

        An interesting corollary to the power density versus frequency is that below 100 to 200 MHz, the effective sky temperature is above 300K, gets to be on the order of 3000K at 30 MHz and even higher at lower frequencies. Due to the long wavelengths, the power density is too low to have any effect on surface temperatures.
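The 4 * pi * r^2 * sigma * T^4 formula above is easy to sanity-check numerically; this sketch plugs in nominal solar values (the radius and effective temperature are my own textbook inputs, not from the comment):

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.957e8          # nominal solar radius, m
T_SUN = 5772.0           # nominal solar effective temperature, K

flux = SIGMA * T_SUN**4               # emission per square metre of surface
L = 4.0 * math.pi * R_SUN**2 * flux   # total output in watts

print(L)   # ~3.8e26 W, close to the accepted solar luminosity
```

Landing within a couple of percent of the accepted value is a nice check that the fourth-power law, and the constants, hang together.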

  28. As Stephen Skinner points out (above), climate models are structured as boxes in a grid. That’s the part that baffles me: how do you represent the relationships between the boxes? Don’t the number of 2nd-, 3rd-, … order relationships quickly become unmanageable? And if Terre Haute is in one box and South Bend is in another and Kalamazoo is in a third, then what does the interrelationship among all three look like when you extend it to Yuma, AZ? It seems like a trivially small error in any parameter would quickly spin out of control.

    Until someone can explain all this in plain English I won’t believe any of it. FYI: I have a Masters in Stats and a PhD in Information Theory, but I also believe that if you can’t break a problem down so that you can explain the basic notion to your mother, then you really don’t know what you are talking about.

    • I agree Mark. And this is what Tesla says:
      “Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality. ”

      And this one:
      “If anybody ever tells you anything about an aeroplane which is so bloody complicated you can’t understand it, take it from me: it’s all balls.”
      — R. J. Mitchell, advice given about his engineering staff to test pilot Jeffrey Quill during prototype trials.

    • I had a stats professor in graduate school who was famous for commanding us to provide answers by “Saying it in American!!!” Lol, that’s about all I remember from that class!

      As a headhunter for many years, I recruited a fair number of people for technology positions. One of my tests was to ask them to explain some technical issue they were involved with. They would pass my test if I could understand what the heck they were talking about.

  29. From the way the author wrote the temperatures in the article, the implied measuring accuracy is no better than a hundredth of a degree. It is therefore impractical to maintain any precision beyond this. Thus, if the author just added a rounding function to the terms (T*T*T*T) and T^4, such as round(T*T*T*T, 2) and round(T^4, 2), then both columns would produce the same values regardless of the number of iterations. It is common practice in science and engineering that the precision of a calculation is bounded by the accuracy of the measuring device; the same should follow for the computer models they create. These problems were solved by banking systems long ago. Science should learn from bankers.
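The suggestion is easy to test; here is a Python sketch of the chaotic beta = 170 case (this is my variant of the proposal: rounding each new temperature, rather than the T^4 terms, to the hundredth of a degree before it feeds the next iteration):

```python
def step_pow(t, beta=170.0, t_inf=243.0):
    return t + beta * (1.0 - t**4 / t_inf**4)

def step_mul(t, beta=170.0, t_inf=243.0):
    return t + beta * (1.0 - (t*t*t*t) / (t_inf*t_inf*t_inf*t_inf))

ta = tb = 300.0
for _ in range(500):
    # Round both columns to the implied measurement accuracy each
    # iteration, so ULP-level differences never get a chance to grow.
    ta = round(step_pow(ta), 2)
    tb = round(step_mul(tb), 2)

print(ta == tb)   # the two codings stay locked together
```

The rounding acts as a common quantizer, so the tiny discrepancy between the two ways of forming the fourth power is wiped out at every step instead of being amplified.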

  30. I wonder if an analog computer could be employed to solve these problems. That is if any more exist.

    Yes, I have wondered about the idea of making an analogue GCM, with a high-accuracy instrumentation amplifier at each grid-box position: a few thousand of them in spherical formation, all interconnected to their neighbours.

    The circuits would have to be accurate and stable enough to represent the various physical equations being modelled to within a similar accuracy as the numerical methods. It’s a tall order!

  31. In point of fact, for years banks used Binary Coded Decimal machines. All arithmetic was exact in that format: you could either represent a number or you couldn’t, because it was either too large for the number of places available or too small for the places in the fractional part. Burroughs and IBM both had such systems; UNISYS still sells them.
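Python’s stdlib `decimal` module behaves much like those machines: decimal digits are carried exactly up to a configured precision (a minimal illustration, not a claim about any specific Burroughs or IBM hardware):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28   # 28 significant decimal digits

# Binary floating point cannot hold 0.1 exactly, so ten of them
# do not quite sum to one...
print(sum([0.1] * 10))              # 0.9999999999999999

# ...while a decimal representation carries 0.1 exactly.
print(sum([Decimal("0.1")] * 10))   # 1.0
```

This is exactly why money, where every cent must balance, was handled in decimal rather than binary floating point.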

  32. A good practical illustration of chaos, a branch of math developed in response to problems with computer models. The problems stem from:

    Inexact initial or boundary conditions.
    Finite precision arithmetic.

    • It isn’t just arithmetic. According to Lorenz’s original paper, the chaotic behavior is intrinsic to trying to get solutions to the partial differential equations. With certain starting values, the system of equations generates solutions that cluster in certain areas but never actually repeat. So something like predicting when the climate switches from an ice age to an interglacial isn’t possible. The best that can be done is to predict that the change is highly likely to fall within a certain range of dates, or that the temperatures are highly likely to stay within certain limits in one phase or the other.

  33. The fundamental characteristics of chaos include:

    1. Sensitive dependence on initial conditions. Two solutions started with ICs that are different at the precision of the machine representation of the numbers, for example, will exponentially diverge. The difference between the two series will grow to be of the same order of magnitude as the variables themselves.

    2. Bounded aperiodic oscillations with increasing values of the independent variable, which is usually taken to be time. Growth of the dependent variable to blowup is not chaos, limit-cycle oscillatory response is not chaos, periodic oscillatory response is not chaos.

    The example in this post does not meet either of these properties. Instead, two solutions started at nearby ICs remain close, and blowup, limit-cycle, or periodic oscillatory solutions are obtained for some conditions.

    None of the solutions have been tested for, and shown to be consistent with, chaotic properties and characteristics.

    The example is nothing more than a simple, and severely incomplete, demonstration of numerical instabilities. And it heads off on a perpendicular into a less-than-complete discussion of machine representations of numbers, with the focus, additionally, on the wrong part of the equation, which after all contains the term (T^4 / T_inf^4), a term that can be written in a multitude of different ways in any of several coding languages.

    It is critically important to note that the example can in no ways whatsoever be associated with mathematical models and numerical solution methods in general, and GCMs in particular.

    Several investigations into accurate numerical integration of the Lorenz ODE system appeared in the literature about 5 or 6 years ago, with an earlier paper from 1998 also re-discovered. A summary of these papers is given here.

    The series of papers started with this publication:

    J. Teixeira, C.A. Reynolds, and K. Judd (2007), Time step sensitivity of nonlinear atmospheric models: Numerical convergence, truncation error growth, and ensemble design. Journal of the Atmospheric Sciences, Vol. 64 No.1, pp. 175–189. http://journals.ametsoc.org/doi/pdf/10.1175/JAS3824.1

    A Comment on the paper was published:

    L. S. Yao and D. Hughes (2008), Comment on ‘Time step sensitivity of nonlinear atmospheric models: numerical convergence, truncation error growth, and ensemble design,” Journal of the Atmospheric Sciences, Vol. 65, No. 2, pp. 681-682. http://journals.ametsoc.org/doi/pdf/10.1175/2007JAS2495.1

    Subsequently, additional comments and responses about another publication appeared. The paper:

    Edward N. Lorenz (2006), Computational periodicity as observed in a simple system, Tellus A, Vol. 58A, pp. 549-557. http://eaps4.mit.edu/research/Lorenz/Comp_periodicity_06.pdf

    The comment:

    L. S. Yao and D. Hughes (2008), Comment on ‘Computational periodicity as observed in a simple system’, by E. N. Lorenz, Tellus A, Vol. 60, No. 4, pp. 803-805. http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0870.2008.00301.x/abstract

    Response to the comment:

    E. N. Lorenz (2008), Reply to comment by L.-S. Yao and D. Hughes, Tellus A, Vol. 60, pp. 806–807. http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0870.2008.00302.x/abstract

    Following these publications, several appeared that directly addressed the problem of accurate numerical integration of ODE systems that exhibit chaotic response. Yao and Hughes, and others, missed an earlier publication by Estep and Johnson (1998).

    D. Estep and Claes Johnson (1998), The pointwise computability of the Lorenz system, Mathematical Models and Methods in Applied Sciences, Vol. 8, pp. 1277-1305. http://www.worldscientific.com/doi/abs/10.1142/S0218202598000597

    The papers listed below are representative of recent, the 2000s, publications. There are very likely other publications that address the issue.

    S. Liao (2009), On the reliability of computed chaotic solutions of non-linear differential equations. Tellus A, Vol. 61, No. 4, pp. 550–564. http://onlinelibrary.wiley.com/doi/10.1111/j.1600-0870.2009.00402.x/abstract

    Kehlet (2010) addressed the problem in a MSc thesis.

    B. Kehlet, Analysis and implementation of high-precision finite element methods for ordinary differential equations with application to the Lorenz system, MSc thesis, Department of Informatics, University of Oslo, 2010.

    And Kehlet and Logg (2010, 2013)

    B. Kehlet and A. Logg (2010), A reference solution for the Lorenz system on [0, 1000], American Institute of Physics, ICNAAM, Numerical Analysis and Applied Mathematics, International Conference 2010, Vol. III, Edited by T. E. Simos, G. Psihoyios and Ch. Tsitouras. http://adsabs.harvard.edu/abs/2010AIPC.1281.1635K

    B. Kehlet and A. Logg (2013), Quantifying the computability of the Lorenz System, arXiv:1306.2782. See also Proceedings of the VI International Conference on Adaptive Modeling and Simulation (ADMOS 2013), Edited by J. P. Moitinho de Almeida, P. Díez, C. Tiago and N. Parés, International Center for Numerical Methods in Engineering (CIMNE), 2013. http://arxiv.org/abs/1306.2782 http://www.math.chalmers.se/~logg/pub/papers/KehletLogg2010a.pdf

    Sarra and Meador (2011) also addressed the problem.

    Scott A. Sarra and Clyde Meador (2011), On the numerical solution of chaotic dynamical systems using extended precision floating point arithmetic and very high order numerical methods, Nonlinear Analysis: Modelling and Control, Vol. 16, No. 3, pp. 340–352. http://www.lana.lt/journal/42/NA16306.pdf

    Prior to the papers listed above that address the Lorenz system, Cloutman investigated computed solutions that exhibit chaotic response even though the true response theoretically cannot be chaotic.

    L. D. Cloutman (1996), A note on the stability and accuracy of finite difference approximations to differential equations, Lawrence Livermore National Laboratory Report, UCRL-ID-125549. http://www.osti.gov/scitech/servlets/purl/420369/

    L. D. Cloutman (1998), Chaos and instabilities in finite difference approximations to nonlinear differential equations, Lawrence Livermore National Laboratory Report, UCRL-ID-131333. http://www.osti.gov/scitech/servlets/purl/292334

    • As I noted above, this problem has nothing to do with chaos, climate or any such thing. Nor is it an issue with floating point or with ways of computing T^4. It is an elementary case of a stiff differential equation. And it is very well known that you can’t use Euler’s method, as the author seeks to do, except with very small time steps. The remedies have been well known for at least a hundred years.

  34. It’s the year 2015, and we’re still using 64 bits for floating point calculations?

    https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format

    The register sizes of the latest CPUs are as large as 512 bits and support e.g. integer crypto operations at those sizes, but from what I can tell, full hardware support for quadruple precision is still not there on x86.

    If I’m doing my math right, it should take at least 53 times as many iterations to show the same problem as above when using quadruple precision. I might be off by a factor of log(10)/log(2); can’t remember.
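    A quick way to test this intuition is to rerun the essay’s iteration (β=170, T∞=243, T0=300, values taken from the essay above) at different working precisions, using Python’s decimal module as a software stand-in for wider hardware floats: 16 significant digits roughly stands in for double precision and 34 for quadruple. The step count and 1 K divergence threshold are illustrative choices; exact iteration counts will vary, but the higher-precision run should track a high-precision reference for considerably longer.

```python
from decimal import Decimal, localcontext

BETA = Decimal(170)
TEQ4 = Decimal(243) ** 4   # 243**4 = 3486784401, exactly representable

def run(prec, steps, t0=Decimal(300)):
    """Iterate the essay's map T[n+1] = T[n] + beta*(1 - T[n]**4/Teq**4)
    carrying a chosen number of significant decimal digits."""
    with localcontext() as ctx:
        ctx.prec = prec
        t = +t0                      # unary plus rounds t0 into this precision
        traj = [t]
        for _ in range(steps):
            t = t + BETA * (1 - t ** 4 / TEQ4)
            traj.append(t)
        return traj

def first_divergence(traj, ref, tol=Decimal(1)):
    """First iteration at which two trajectories differ by more than tol kelvin."""
    for n, (a, b) in enumerate(zip(traj, ref)):
        if abs(a - b) > tol:
            return n
    return None

STEPS = 1500
reference = run(60, STEPS)                                # high-precision reference
n_double = first_divergence(run(16, STEPS), reference)    # ~double precision
n_quad = first_divergence(run(34, STEPS), reference)      # ~quadruple precision
print(n_double, n_quad)
```

    Both runs eventually lose the reference trajectory; the point is only that the 34-digit run does so markedly later than the 16-digit one.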

    Peter

  35. Anthony R. E.,

    The difference between the two codes builds up rapidly that by the 95th iteration, the difference is 4.5 per cent and by the 109th iteration is a huge 179 per cent as shown below. […] For the above discussion, a LENOVO G50 64 bit computer is used.

    I get the same results on the 109th iteration; the particulars of my software/system are:

    Software: Gnumeric Spreadsheet 1.12.9
    Processor : 8x Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz
    Operating System: Ubuntu 14.04.3 LTS

    However, the divergence is not monotonically increasing. There are instances, such as the 104th iteration, where the divergence drops from 102 per cent to 18 per cent.

    Some basic descriptive stats might be of interest (β=170, T∞=243, n=65,535):

    	%Diff
    min	-67.1467
    max	204.3551
    range	271.5018
    mean	8.7428
    median	0.0000
    std dev	48.8920

    Under chaotic conditions, the same one line equation with the same initial conditions and constant but coded differently will have vastly differing results. Under chaotic conditions predictions made by computer models are unreliable.

    Well sure, if one’s goal is to predict the nth future state of a chaotic physical system to within a few hundredths of a percent, ALL methods will fail miserably at some point even if all methods produce exactly the same answer for any given n along the way.

    However, if one’s goal is to predict the statistics of the values a chaotic system is likely to experience under specified conditions (β=170, T∞=243, n=65,535) …

    	wT^4	wT*T*T*T*T
    min	75.0803	75.0803
    max	300.0000	300.0000
    range	224.9197	224.9197
    mean	218.2835	218.3249
    median	243.5310	243.5310
    std dev	69.7764	69.7301

    … -0.04 K (-0.02%) difference in the means and 0.05 K (0.07%) difference in the standard deviations suggest that the most dire problems with the model are probably not due to how we’ve coded the 4th power of T.

  36. Some (most?) languages have library support for arbitrary precision decimals:

    Java – java.math.BigDecimal
    Python – Decimal
    C++ – Quick google search showed a couple of options
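    A minimal sketch using Python’s decimal module (the standard-library analogue of java.math.BigDecimal). The last two lines apply one step of the essay’s iteration at 50-digit precision, with the essay’s β=170, T∞=243 and T0=300 assumed:

```python
from decimal import Decimal, getcontext

# The classic binary-float surprise: 0.1 has no exact binary representation.
print(0.1 + 0.2 == 0.3)                                   # False
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

getcontext().prec = 50        # work to 50 significant decimal digits

# One step of the essay's iteration: T1 = T0 + beta*(1 - T0**4/Teq**4)
t = Decimal(300)
t = t + Decimal(170) * (1 - t ** 4 / Decimal(243) ** 4)
print(t)                      # about 75.08, the first value after T0 = 300
```

    Arbitrary precision postpones the divergence but, for a chaotic map, cannot eliminate it.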

  37. Dear Anthony R. E.,

    You say, ” … a simple example”.

    Hmmm ( ? )

    It appears to involve more complexity than a woman deciding upon the right shoes for her new outfit.

    Pass me the George bottle !

    Regards.
    WL

    Fun: numerical computation in binary. Back in the 70’s it was done in decimal, but now binary floating point hardware units (embedded in the CPU) do the job. Speed and precision have always been tradeoffs, and as compilers have got ‘smarter’, most programmers have no idea how their code will be executed. 10×10 can be interpreted as 10^2, and a compiler may choose to execute the code for that by any of several different methods, chosen by the compiler writer for ‘efficient execution’ or even personal preference. The same code compiled differently will yield a different result at a significant number of places. That also goes for the executing processor environment.

    The CPU only does addition and subtraction in its main arithmetic registers, so a 32 bit CPU will be able to process a 32 bit integer, and the data pathways to the CPU will appear as 32 bits although they may be delivered differently. Scale it up to x64 similarly. Floating point hardware then allows an extension to 128 bits in an output register. FPUs have externally defined operations, but it is only the data format and the class of the operation which is defined. The execution microcode will be a trade secret, and that too will be based on additive registers with shift-and-add rules, along with programmer ingenuity for efficiency and speed.

    Once, even business programmers had to know their compilers so that any arithmetic gave sensible results, even with 2 places of decimals. I have seen Fortran output with 9’s for zero. Never presume equality, and at every iteration the result would need to be rounded to the significant number of places before proceeding. Bring back BCD with hardware multiply and divide. The Trachtenberg method can be implemented with a table for very speedy computations.
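    The “never presume equality” rule is easy to demonstrate; a small Python sketch with illustrative values:

```python
import math

a = 0.1 + 0.2
print(a)                                   # 0.30000000000000004, not 0.3

# Naive equality silently fails:
print(a == 0.3)                            # False

# Compare to a tolerance instead...
print(math.isclose(a, 0.3, rel_tol=1e-9))  # True

# ...or round to the significant number of places before proceeding,
# as the old business-programming discipline required:
print(round(a, 2) == round(0.3, 2))        # True
```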

    It should be obvious to anyone with high school maths that climate model computations are probably impossible.

  39. As a programmer I would like to duplicate computer models and tests using the actual software or code as it is being used for further analysis. I’ve never been able to get the raw data or code being used. I want to perform testing on my own. I wonder if you can point me in the right direction? Even the physical location would be very useful.

    • Hey, you want the see the code, no problem. Directions? go to hell !

      “physical location” ? Up my a$$ and round the bend. ;) Does that answer your questions?

      Why should I let you see the code ? All you want to do is find something wrong with it.

      Welcome to the “science” of the anti-christ : climatology. Fully peer reviewed but you are not my peer, so butt out.

  40. An interesting post, but I’m not convinced that it is relevant. The errors in every single step of a climate model are so much larger than the difference between T^4 and T*T*T*T that any computation accuracy problem just pales into insignificance.

    On second thoughts, it isn’t that irrelevant, because it could help people to understand just how useless the climate models are : if you can get errors like that from the difference between T^4 and T*T*T*T, just think what errors you will get from the massive quantity of highly inaccurate weather data that the climate models use in their internal calculations.

  41. If you really want to make this simple for the layman to understand, find a way to do it without all of the math notation in the explanation. For many, the formulae get in the way of comprehension. It’s a foreign language they have a hard time grasping. Maybe that bothers the math-intuitive folks, but they need to have some patience for people without their abilities who still want to understand.

    Algebraic to floating point operation non-equivalence is only part of the problem. To me, the real elephant in the room is the lack of accounting for source precision and error in the models. For instance, a temperature measurement of 10.5 is not exactly 10.5; it could be anything from 10.40001 up to 10.59999 (depending on your measurement methods). That’s just under a 2% error range on the one measure. Also, the error range is not constant (in this case it gets worse the lower the temperature!).

    Such errors cannot be removed by calculation or averaging – they are with you always and tend to grow rather quickly. For instance, in a*b=c the relative errors add, so two numbers each with a 2% error margin produce a result with a 4% error margin… Now do that a few times reusing the results (say 10 times) and you are soon processing pure noise – the results become meaningless. Hence all the tuning parameters and fudge factors on the climate models: they want to go chaotic if run too long, so you need the knobs to ‘dampen’ the eventual pure noise to the result you want.
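    The growth of measurement error under multiplication can be sketched with worst-case interval bounds; the readings and tolerances below are illustrative, assuming two measurements each carrying the ±1% half-width described above:

```python
def mul_interval(lo1, hi1, lo2, hi2):
    # Worst-case product bounds for positive measurements: the relative
    # half-widths (approximately) add, so two 2% full ranges combine
    # into roughly a 4% range on the product.
    return lo1 * lo2, hi1 * hi2

nominal = 10.5
a_lo, a_hi = nominal * 0.99, nominal * 1.01    # a 2% full range
b_lo, b_hi = nominal * 0.99, nominal * 1.01

c_lo, c_hi = mul_interval(a_lo, a_hi, b_lo, b_hi)
rel_range = (c_hi - c_lo) / (nominal * nominal)
print(rel_range)        # about 0.04: the product's range has grown to ~4%
```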

    This reminds me of the old accountancy joke:

    A company is looking to hire an accountant, they have a simple test, what is 2+2. The first accountant says ‘4’ – he isn’t hired. The second accountant looks around and asks ‘what do you want it to be?’ – he is hired.

  43. Not nitpicking, but a suggestion: you have mC as the mass times the specific heat capacity, which is just the heat capacity, whose symbol is C; by convention, specific heat (per unit mass) takes a lower-case c.

      • This second graph shows the principle from my recent essay. The system tends towards its stable point at 243. It overshoots at first downward, then overshoots coming back up, but finally settles down and is rock solid.

      • Thanks, a visualisation is always helpful.

        The analytical solution is a decaying exponential with no overshoot, so the iterative calculation, with an overshoot of about 100%, is still very unstable and qualitatively incorrect, though not yet chaotic.

        Now perhaps you could plot Nick’s trapezoidal solution.

      • That’s the trajectory one gets if the solution of the characteristic equation gives a complex root with a negative real part, termed a ‘stable focus’, which exhibits an oscillatory approach to the stationary point.

    • The first graph shows that in the first 12 iterations both formulas produce a single line; they are so close in final value that they overwrite one another here. (Then follows a break in the data set, which looks visually like a long down-slanting line.) The same is true in iterations 69-75: the values are so close they overwrite one another, and are still oscillating. (Then follows a break in the data set.) In the third segment, we see that the values of the two formulas have diverged, both in value and then in nature — the orange trace seems to stabilize around a value of 240 or so while the blue trace still wildly slams up and down.

  44. What is the essential difference between the two treatments?

    The second graph represents the non-chaotic region of the formula (the first data set in the essay above) — the ‘forcing’ factor being the ß, which coincides with the “r” term in the examples in my Chaos and Stability essay.

    At a ß of 100, the system tends to stability.

    In the top graph, Anthony R.E. has increased the ß to 170, at which value the system becomes chaotic.

    The two traces used show what happens when one performs the mathematical rendering of a non-linear chaotic dynamic system on modern computers — even if one does not change the values at any point in the way that Edward Lorenz did (a serendipitous event, as it led to the popularization of Chaos as a topic of scientific interest) — because non-linear chaotic systems are extremely sensitive to initial conditions, any change in any one value, at any point, no matter how slight, will grow to change the outcome in unpredictable ways. Now, Anthony R.E. did not change values mid-stream — but the differences in the programming, the coding of the two formulas, though mathematically the same, introduce differences in the output values — which then set off the chaotic sensitivity to initial conditions, and we get not only wildly different values for specific iterations, but behavior of the two systems that has diverged and is dissimilar.

    Why has Anthony R.E. mixed these two different lessons together? It is not to confuse you — it is to show that even the slightest differences in these non-linear calculations lead to wildly different results — and these slight differences can be introduced by the computers and their internal workings themselves.

    To even approach comparable results between GCMs, a line-by-line evaluation by specialists in the underlying code and machine languages would be required.

    The above is exactly why GCMs produce spaghetti graphs of one thousand runs, initiated with almost exactly the same initial conditions, that predict everything from Fire Ball Earth to Ice Ball Earth.

    The idea that these chaotic results can then be averaged to produce a valid rational projection is beyond absurd.

    • The above is exactly why GCMs produce spaghetti graphs of one thousand runs, initiated with almost exactly the same initial conditions, that predict everything from Fire Ball Earth to Ice Ball Earth.

      This is simply not true. A particular model run with slightly different initial conditions but exactly the same forcings, run over, say, 100 years or more of rising greenhouse gas levels, produces “spaghetti graphs” that all show different details (e.g., of ENSO and such) but show about the same overall rise in temperature.

      Over shorter time scales of, say, 10 or 15 years, then yes, you will see different trends…which is (one of the reasons) how we know that such trends are not reliable or robust.

      • Reply to joeldshore ==> Sir, you are either being disingenuous or you are unaware of how the IPCC uses GCMs to make projections for its various scenarios. I suggest reading, say, Dr. Robert Brown, from Duke, quoted in this essay (not mine) : https://wattsupwiththat.com/2013/06/18/the-ensemble-of-models-is-completely-meaningless-statistically/

        Research how GCM ensemble results are used in arriving at a projection. Actually look at graphed results from an ensemble of runs…read the IPCC’s explanation of why they use ensembles instead of simply running the program once.

        The outlying runs, those at the top and the bottom out a hundred years, literally predict/project almost any climate, bounded only by the parameters fed in to the system by its administrators.

      • Kip:

        The above is exactly why GCMs produce spaghetti graphs of one thousand runs, initiated with almost exactly the same initial conditions, that predict everything from Fire Ball Earth to Ice Ball Earth.

        Bull

      • Kip Hansen: The link you gave me provides no support for the claim that Robert Brown makes. He makes some vague reference to some unnamed thread by Spencer. He also talks about an “ensemble of models”, i.e., he is not talking about how one particular model behaves but rather how the whole range of models behaves. (I believe that range has model sensitivities varying between about 1.5 and 4.5 C per CO2 doubling, so there is no great surprise to find a range of projections.) Even then, Robert Brown made no claim that they “predict everything from Fire Ball Earth to Ice Ball Earth”.

      • Reply to Mike ==> Maybe I haven’t been clear.

        Surely you are aware of the individual ensemble runs? And that they are produced by using slightly different initial conditions? And that they then discard “outliers” (those implausible results at the edges), and then use averaging to pick a mid-line, and parameters to set the upper and lower limits for each emissions scenario. None of this is controversial or in question — the IPCC explains it very clearly as the process.

        It is the discarded outliers that seem to “predict” Fire Ball Earth and Ice Ball Earth — and because they easily acknowledge that such results are ridiculous, they throw them out. I would too!

        It is the simple, IPCC-stated fact that Earth’s climate is a coupled non-linear chaotic dynamic system that causes the individual ensemble members to vary so much, despite careful parameterization (which tends to set bounds).

      • “None of this is controversial or in question — the IPCC explains it very clearly as the process”

        Is there any other field with such not “controversial or in question” procedures?

    • None of this is controversial or in question — the IPCC explains it very clearly as the process.

      Great, so then you can presumably supply us with a link to the IPCC discussion?

      The only time I have heard about such outliers was in some early climateprediction.net set of runs, where they had an occasional issue due to the simple ocean model that they were using…and these outliers were clearly identifiable. [In that modeling experiment, they were not (just) varying initial conditions but actually varying parameters to see what sort of range of climate sensitivities they could produce by varying the parameters within physically-plausible ranges.] Doing this was still not ideal, but it was pretty straightforward to justify which ones were completely unrealistic due to a known problem.

      • Thanks for the link to the 100+ page chapter in the IPCC report, but what I was hoping that you would provide was the specific text or graphics that justifies your claim ,”The above is exactly why GCMs produce spaghetti graphs of one thousand runs, initiated with almost exactly the same initial conditions, that predict everything from Fire Ball Earth to Ice Ball Earth.”

      • dbstealey: Since you claim to be a skeptic, you must have looked skeptically at the graph you have pasted. Given that, why don’t you explain for us exactly what the graph shows, i.e.,

        * What temperature data is plotted and what are the known limitations of that temperature data set?

        * How does that compare to the temperatures that the models are simulating?

        * How did the authors choose to align the graphs for the different temperature series?

        It is not hard to post lots of graphs that have never passed peer review and never could because they are created to push a certain point-of-view rather than to inform.

      • dbstealey: So, in other words, you know nothing about the details of that graph, but since it shows what you want to believe, you uncritically accept it. (And, without a bit of irony, you call yourself a “skeptic”.)

        I will give you a little hint: The temperature data shown there is what is called “TMT” or mid-tropospheric temperature data. Some of the problems with that data include the fact that the weighting function for what that satellite channel samples includes a tail extending into the stratosphere, and hence it is contaminated by stratospheric cooling. Attempts to deal with that and other issues in the analysis are why the trend for the mid-tropospheric data differs by a factor of 3 between UAH and RSS, a fact that has been covered up in that graph by having one of the two plots that say “Reality” be an average of the UAH and RSS. (The other plot is an average of, I believe, 4 different analyses of balloon data. That data has similar differences between different analyses and even between different versions of the same analysis. I am not sure what versions of each analysis were used in producing the average here, but for a while some skeptics refused to use the latest version of one of the analyses because it showed a significantly larger trend than the earlier versions.)

        So, in other words, the fact that the data labeled “Reality” really shows considerable variation from one analysis to the next has been hidden in these plots by doing some sneaky averaging, so that you are fooled into believing that this data is reliable. So, a discrepancy between models and data that serious scientists are trying to understand is instead presented in such a way that the problems with the data, and with the analysis that went into producing it, are covered up.

        This one graph is a nice illustration of the sort of differences one sees between real science presented in scientific journals and pseudoscience presented for the consumption of gullible “skeptics”.

  45. Weather is what’s happening at a point in time. Climate is a statistical profile of weather. If the global circulation models were any good they’d at least get the statistical profile close to right even as chaotic feedback caused them to diverge from actual weather.

    But anyone following this site regularly knows how badly the models diverge from even the statistical profile.

  46. The two traces used show what happens when one performs the mathematical rendering of a non-linear chaotic dynamic system on modern computers

    No, it is not the system that is unstable; it is the iterative method that is unstable.

    Both you and Anthony R.E are making the same mistake. Nick Stokes seems to be the only one who gets it right.

    The system described by the initial ODE is perfectly stable and non-chaotic. The unstable result comes from incorrect maths. The problem starts with the very first equation, where we see:

    energy out {-2AσTn^4}

    Firstly, Stefan-Boltzmann does not give “energy”; it gives the power emitted. The energy is the integral of the power over the time-step interval. Using a step of one this is not seen, but what that term is really saying is:

    energy out {-2AσTn^4 · dt}

    So now we see the error: the area under the T^4 curve is being approximated as a rectangle with height calculated at Tn, ignoring the fact that T changes over the interval. This assumption is not stated, and clearly the author has not realised he is making it; otherwise he would not suggest using it in a situation where beta makes the iterative steps very large and such an approximation becomes nonsensical.

    This will work for small enough dt, but at large time intervals it becomes first crudely wrong, leading to oscillations around the correct solution; as the interval gets even greater it becomes totally chaotic. Increasing beta has the same effect as increasing dt, for similar reasons.

    Since T^4 is relatively straight locally (around the 300 K region being discussed), Nick Stokes’ suggestion of a trapezoidal approximation will be far better and produces stable results.
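    For the curious, here is a minimal sketch of what the trapezoidal (implicit) update might look like for the essay’s equation, with the essay’s β=170 and T∞=243 and a unit step. The Newton solver and tolerances are illustrative choices, not Nick Stokes’ actual code:

```python
BETA, TEQ = 170.0, 243.0

def f(t):
    # Net heating per unit step, as in the essay: beta*(1 - (T/Teq)**4)
    return BETA * (1.0 - (t / TEQ) ** 4)

def dfdt(t):
    return -4.0 * BETA * t ** 3 / TEQ ** 4

def trapezoid_step(t0):
    # Implicit trapezoidal rule: T1 = T0 + (f(T0) + f(T1))/2,
    # solved for T1 by Newton iteration.
    t1 = t0
    for _ in range(50):
        g = t1 - t0 - 0.5 * (f(t0) + f(t1))
        step = g / (1.0 - 0.5 * dfdt(t1))
        t1 -= step
        if abs(step) < 1e-12:
            break
    return t1

t = 300.0
for _ in range(100):
    t = trapezoid_step(t)
print(t)    # settles on the equilibrium 243, with no chaotic behaviour
```

    The explicit scheme at β=170 amplifies each deviation from equilibrium by a factor of about 1.8 in magnitude; the trapezoidal update damps it instead, which is why the iteration settles.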

    The whole of the rest of this article is a false lemma, caused by talking of energy instead of power, mis-specifying the original iterative equation and not stating the assumptions being made.

    The ensuing discussion, however, was very interesting, especially the comments by Nick and Leo.

    • No, it is not the system that is unstable; it is the iterative method that is unstable.

      “Chaotic” is just another word for random or kinda-random. It only exists in the mind and on paper; not reality. It is caused by round-off and truncation and lack of truly knowing the inputs to required precision. Nothing more.

  47. If this were correctly framed as an issue related to the need to keep the steps small enough, it might be relevant to GCMs. But the instability here is due to improperly specifying the iteration, not to rounding errors.

    • Of course, you and Nick Stokes are right about this not being an example of chaos. Even the simple linear equation dy/dt = -(beta)*y will exhibit the sort of numerical instability shown here if you use the forward Euler method (which is what is being used) and you make (beta*timestep) too large.
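      This is easy to reproduce. A sketch of forward Euler applied to dy/dt = -beta*y, with illustrative values: each step multiplies y by (1 - beta*h), so the method is stable only when |1 - beta*h| < 1, i.e. h < 2/beta.

```python
def euler(beta, h, steps, y0=1.0):
    # Forward Euler for dy/dt = -beta*y: y_{n+1} = y_n * (1 - beta*h)
    y = y0
    for _ in range(steps):
        y += h * (-beta * y)
    return y

print(euler(beta=1.0, h=0.5, steps=50))   # decays toward zero (stable)
print(euler(beta=1.0, h=3.0, steps=50))   # blows up, alternating sign (unstable)
```

      The true solution decays smoothly in both cases; only the discretisation blows up.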

  56. Anthony R. E. and Kip Hansen are right.

    Their critics (of whom I showed myself above to be one) insist on talking about what I will call “inherently” chaotic systems. In such a system, there is a time t for which one can ensure that any states x(t) and x’(t) resulting from different initial conditions x(0) and x’(0) will differ by less than some value delta (perhaps small, but often nearly as large as the entire state range) only by making x’(0) differ from x(0) by less than some epsilon that is smaller than the resolution to which we can physically measure the initial condition. That large difference in subsequent state results even if there is no limit on the time and state-value resolutions of our calculations; it is caused by measurement limits. It’s true that the system that Anthony R. E. calculated values for is not such a system.

    In practice, though, there are indeed limits on our calculations’ time and state-value resolutions. And that means that a different kind of chaos can arise from the calculations themselves, even if the physical system being modeled isn’t itself inherently chaotic. This is what Anthony R. E. demonstrated, by way of an extreme example: he quantized time very coarsely, so that the finite state-value resolution would make the computational process chaotic, even though the modeled system is not inherently so.

    Now, in his example we could dispel the chaos by increasing the calculation resolution. But in climate-system calculations the extent to which that is possible is limited.

    • Reply to Joe Born ==> The reason I write about Chaos here at WUWT is the same reason that Edward Lorenz was stunned by his first attempt to use computers to predict weather, even with his very simplistic toy model — he realized that one can not “dispel the chaos by increasing the calculation resolution”. One can improve short range weather prediction and they have succeeded at this since Lorenz’s time. What they can’t do is predict weather or climate long range. They can’t because you can’t “dispel the chaos” — it is, as you say, inherent in the system.

      It is the inherent Chaos that matters. We will eventually overcome the problems of rounding errors etc. in computing, I believe; that part, the slight alterations to calculated values introduced by the computing process itself, can be solved.

      Resolution improvements stave off the advent of Chaos in calculation, but not in Nature. Nature’s resolution extends down to the atomic scale, and yet Chaos ensues in non-linear dynamical systems such as the atmosphere and the ocean circulations.

      But the problem of Chaos, which is built in to the natural system, cannot be reduced or obviated. It is there to stay.

      I have no opinion about all the hoopla here about the formula for energy balance to equilibrium given here.

      • he realized that one can not “dispel the chaos by increasing the calculation resolution”

        You realize this implies effect without cause or at the very least that not all causes are known. Once all are known, the only thing preventing prediction would be lack of knowledge of precise values.

        It’s possible that randomness is inherent in physical processes, but that is devilishly hard to prove. In the meantime, avoid confusing mathematics, mathematical models, and our lack of complete understanding with reality.

      • Reply to DAV ==> Alas, the existence of Chaos (in the Chaos Theory sense) has long since been proven by experiment in many fields — not only to exist in the mathematics of the field but to exist in the results of real world physical experiments with real material dynamical systems.

        The cause of Chaos is known — that is what the study of non-linear dynamics consists of.

        If you haven’t yet, please read my two essays on Chaos here and here.

        If you still are not convinced, start with the Wiki page on Chaos.

        My two essays include a reading list if you wish to dig deeper.

      • Kip Hansen:

        [T]he problem of Chaos, which is built-in to the natural system, can not be reduced nor obviated. It is there to stay.

        Exactly.

        As you say, although it was quantization errors in the calculations that put us (or, more accurately, Lorenz) wise to the problem, the problem that we thereby recognized arises not from calculation limitations but rather from measurement limitations.

        That said, I think it’s more than plausible that as a practical matter calculational-resolution problems could prevent us from computing the behavior even of systems that are not inherently chaotic. E.g., even if it were true that we could know the climate’s initial state accurately enough in principle to narrow the range of the end of the century’s possible states adequately, arguments that Robert G. Brown has often made at this site incline me to believe that running the calculation in a reasonable amount of time still would not be possible even with the computing power whose availability a few decades into the future one might infer from Moore’s Law.

        But I could be wrong.

      • DAV:

        the only thing preventing prediction would be lack of knowledge of precise values

        Sure: the system is deterministic. But the whole concept of chaos is that although the system is deterministic those initial conditions can’t be known well enough: chaos is sensitivity to initial conditions so high as to beggar our ability to measure those conditions with enough precision to constrain the future state.
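        That sensitivity can be demonstrated directly with the head post’s eq (5), run in an unstable regime: two starting temperatures differing by a millionth of a kelvin soon bear no resemblance to each other. (A sketch; the values of β and T∞ are illustrative choices, not from the post.)

```python
# Sensitivity to initial conditions in the head post's eq (5), iterated in an
# unstable regime. beta and Tinf are illustrative choices, not from the post.

def orbit(T0, beta=210.0, Tinf=300.0, steps=300):
    """Return the temperature sequence produced by eq (5)."""
    Ts = [T0]
    for _ in range(steps):
        Ts.append(Ts[-1] + beta * (1.0 - (Ts[-1] / Tinf) ** 4))
    return Ts

a = orbit(150.0)
b = orbit(150.000001)  # a millionth of a kelvin warmer at the start
gaps = [abs(x - y) for x, y in zip(a, b)]
# Early on the two runs are indistinguishable; the tiny initial difference is
# then amplified step by step until the runs bear no resemblance to each other.
```

The system is fully deterministic, yet an initial-condition error far below any plausible measurement resolution eventually dominates the state.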

        Now, that inability may be “merely” a practical question. But I’ve heard (although I haven’t verified this for myself) that people who know this stuff have shown that there are no “hidden variables” behind quantum mechanics: that, e.g., even in principle it is impossible to know the initial conditions well enough to compute the climate system years into the future. So the initial conditions may be unknowable not only in practice but also in principle. Or maybe not. Either way, what you have is still chaos.

      • Joe Born: Sure: the system is deterministic. But the whole concept of chaos is that although the system is deterministic those initial conditions can’t be known well enough: chaos is sensitivity to initial conditions so high as to beggar our ability to measure those conditions with enough precision to constrain the future state.

        Well, it’s either deterministic or not. Chaos is an epistemological artifact. It doesn’t exist anywhere except in our minds and is evidence of our limitations. To claim otherwise is to say the physical system can change without cause. Now, this may be possible but it would be impossible to prove it did so. If a system is deterministic it means that it has defined outputs for given inputs. That doesn’t mean those outputs can be practically determined because of any number of reasons.

        Kip Hansen: Alas, the existence of Chaos (in the Chaos Theory sense) has long since been proven by experiment in many fields

        Er, no it hasn’t. If you think it has then how? It’s a mathematical concept and not a physical one. If anything “chaos” is just a word that means a certain kind of “random” and “random” merely means “unknown”. It is not a physical property. Again, if physical reality were “random” and “chaotic” then you are accepting effect without cause. Any model that exhibits chaotic behavior (that is, it doesn’t predict well) is incomplete.

      • Reply to DAV ==> I do not have the time to attempt to school you individually in a topic which is the subject of an entire library of books.

        If you wish to know more:

        If you haven’t yet, please read my two essays on Chaos here and here. (Don’t skip the comments — they have a lot of good questions, good answers, and good examples of wrong thinking on the subject getting straightened out.)

        If you still are not convinced, start with the Wiki page on Chaos.

        My two essays include a reading list if you wish to dig deeper.

        A google search for “online courses on Chaos Theory” returns lots of opportunities.

      • Kip, really?

        You actually believe chaos has been empirically found to be an inherent physical attribute? That it is ontological rather than epistemological, i.e., not merely an artifact arising from our knowledge limitations?

        Do you know the difference between the two? If not, I would not presume to educate you.

        When you say things like The problem of Chaos, which is built-in to the natural system it seems you don’t. If the natural system were really chaotic, then when a moment comes at which two or more outcomes are possible, what makes the selection? If the answer is “nothing, it just happens” then you are espousing effect without cause. Is that a fair summary of your thinking?

        If, OTOH, you are saying that models are incomplete, that some exhibit chaotic results, and that they likely always will, then I agree with you. But in no way would a chaotic model imply chaos in a natural system. Doing that would be engaging in reification, where the model is also the reality.

      • Reply to DAV ==> Check back with me when you’ve at least read the Wiki article.

        You seem to still think that “chaos” means entirely random activity or results….which it does not in Chaos Theory, which is what the rest of us are talking about here. In Chaos Theory, chaotic behavior is entirely deterministic, but unpredictable.

        In that sense, Chaos is a natural feature, an inherent attribute, of real world physical non-linear dynamical systems in certain regions of their range. It has been found to be so by physical experiment in many and varied fields of endeavor. They have been studying it for fifty years.

        Read Ian Stewart’s book, Does God Play Dice? for a load of real-world physical demonstrations of period doubling leading to chaotic results and other Chaos Theory behaviors.

        This is science, not philosophy class. You can rail and rant all you want in philosophy, but in science, you are required to read the studies and see the results before deciding that they are not so. Take a look at the evidence, even the Wiki has quite a list, if very incomplete, and get back to me again.

      • You seem to still think that “chaos” means entirely random activity or results

        No. I thought I made it clear that “random” means “unknown” and so does “chaotic”. There is no random behavior in reality unless you want to believe in effect without cause. Nor could you ever show reality is chaotic regardless of your sense of the word. See below.

        I agree that increasing the precision may not work. It would do so only if the model were faithful to reality. In fact, any claim that a chaotic model demonstrates chaotic tendencies in a natural system is claiming the model IS an accurate representation of reality. You apparently want to claim you effectively know all the causes, so any aberration from your predictions must be because the underlying system is chaotic. How do you know you have listed all the causes? Merely because you can’t think of any others?

        Think about it. You can NEVER show that a mathematically chaotic model represents anything. How would you do so? Not by comparing predictions. They may match for a time but will eventually depart. When would you admit the model is not quite right and needs correction? What would it take? Besides, if the model’s predictions don’t match reality, what purpose do they serve?

        The only way you could show that your chaotic model might truly represent reality is for it to make accurate predictions while being chaotic. Something you admit can’t be done. That’s how models are validated. You are claiming your chaotic model is an accurate description of the underlying system despite not being able to validate it.

        This is science, not philosophy class. You can rail and rant all you want in philosophy, but in science you are required to read the studies and see the results before deciding that they are not so.

        You don’t seem to know the difference between these two as well. Science does not actually prove anything. It only provides probable explanations which are useful. These may or may not accurately represent reality. If the explanations are useful, they may be doing so, but in the long run it’s the usefulness which matters. Whether they are showing the true underlying structure of reality (What Truly Is) or not is a matter for philosophy. Science can’t provide the answer.

        When you start making statements about what really is and how we know what we know — which is exactly what you are doing when you claim chaotic systems exist in reality — then you are treading into a subject area you demonstrably know little about. After which, while still in the subject, you wave your hand in dismissal. How silly. Follow your own advice. It wouldn’t hurt if you tried to learn more about the subject you are attempting to employ and simultaneously dismiss because you don’t understand it.

        As far as reading your links, the Wiki one links to a branch point and it’s entirely unclear which of those you think I should follow. Even if they ended up where you thought they might, reading what someone else thinks will not tell me why you think the way you do. Only you can do this and you want to run away instead of giving an answer.

      • Last reply to DAV ==> Now you’re just trolling.

        If you want to learn, I have given you the references.

        If you just want to rail and rant — take it elsewhere

      I won’t be spending any more time feeding your needs.

      • How sad. I was asking why you think chaotic response is inherent in a natural system. You don’t want to reveal your reasoning or even narrow down a web search for possible sources of it. You make a pile of references and hope I find within them the kernels of your reasoning. IOW: a needle in a haystack. Maybe it’s all a muddle to you and you aren’t confident you could state why in a couple of sentences. I have said why I believe that your claim is incorrect, and somehow that’s ranting and trollish. At least you didn’t play the Hitler card.

        But, OK, bye.

  57. Energy in { q/A*A = q } + energy out { -2AσTn^4 } + stored/released energy { -mC(Tn+1 - Tn) } = 0 eq. (1)

    and

    Tn+1 = Tn - βTn^4/T∞^4 + β, or Tn+1 = Tn + β(1 - Tn^4/T∞^4) eq. (5)

    “I thought it will be easier for the layman to understand the behavior of computer models under chaotic conditions if there is a simple example that he could play”.

    You’re ‘aving a larf, aintcha?

    https://thepointman.wordpress.com/2011/01/21/the-seductiveness-of-models/

    Pointman

    • DAV,
      I really admire your stamina. With patience like that you could become a good scientist. The first thing you could study would be a river’s motions. You could, for example, choose the river ‘Sjoa’ in Norway, a popular rafting site. Find a good place to stand on one of its shores, observe the motions of the water, and try to predict them at a couple of scales (for example metric and decimetric) over a period of, say, one year. Try to make weekly and monthly predictions. I think you would find that there is chaotic (non-predictable) behaviour in the river. But take care: the river is infamous, as it takes a few lives per year…

  58. What many fail to understand is that chaos is only associated with the path from one equilibrium state to another and has no bearing on what the next equilibrium state will be consequent to some change. We see this clearly in the data: the longer we average over the apparent chaos, the more closely the measured behavior of the planet converges to the requirements of the Stefan-Boltzmann LAW and the inferred sensitivity.

  59. “The calculations are made for purposes of illustrating the effect of instability of simple non-linear dynamic system and may not have any physical relevance to more complex non-linear system such as the earth’s climate.”

    An excellent self-effacing comment. The arguments that climate is chaotic are by analogy. They are not mathematical proof.

  60. The joke is that the “simple non-linear dynamic system” was …. a linear system.

    The non-linear ‘chaotic’ part was an unstable algo based on mathematics and physics errors.

    The large values of beta correspond to a very fast-settling exponential solution of the ODE. On the scale of the numerical method it is almost a step function. The numerical error in using Euler’s method to solve the equation effectively introduces a significant lag in the system response, which means that it does not model the original linear system as intended. The strong feedback (large beta) plus the erroneous lag is what leads to the instability.

    This almost certainly is non-linear but has little to do with the original equation.

    The main lesson to be drawn from this article and discussion is how one can be led to incorrect conclusions and false attributions by computer models and numerical methods if one is not competent in using such methods.

    There are certainly implications for the many home-spun, ad hoc methods that get used in climatology. This applies to many of our hubristic “Nobel prize winners”.
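    The lag argument above can be checked against the standard stability criterion for explicit Euler. Linearizing the head post’s eq (5) about T∞ gives a per-step error multiplier of 1 − 4β/T∞, so the scheme is stable only for β < T∞/2; subdividing each step into k substeps of β/k restores stability for any β. (A sketch; the numbers below are illustrative, not from the post.)

```python
# Explicit-Euler stability check for the head post's eq (5).
# Linearizing T' = T + beta*(1 - (T/Tinf)**4) about the equilibrium T = Tinf
# gives d(T')/dT = 1 - 4*beta/Tinf: the factor by which one step multiplies a
# small error. The numbers below are illustrative, not from the post.

def multiplier(beta, Tinf=300.0, substeps=1):
    """Error growth factor near Tinf over one full step taken as
    `substeps` Euler substeps of size beta/substeps."""
    return (1.0 - 4.0 * (beta / substeps) / Tinf) ** substeps

stable = multiplier(60.0)                  # 1 - 0.8  =  0.2: errors shrink
unstable = multiplier(210.0)               # 1 - 2.8  = -1.8: errors grow and flip sign
refined = multiplier(210.0, substeps=10)   # 0.72**10 ~ 0.037: stable again
```

    For beta = 210 each coarse step overshoots the equilibrium by 1.8 times the previous error, which is exactly the oscillatory blow-up attributed above to the lag of the too-coarse Euler step; refining the step removes it.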

  61. Anthony R.E.,

    ‘computer models with advance/d/ cinematic features that
    ‘laymen assume is reality.’ :
    ____

    computer models with advanced cinematic features

    only impress obamas, popes, swartzeneggers, merkels …

    -laymen LIVE IN reality, differing from Hollywood and tabloid pulp.
    ____

    talking ‘assume’ pls. 1st refer 2Urself + convinient peers.

    Regards – Hans

  62. You really assume laymen

    clinton, ban ki moon, obama, trump, sahra palin, trudeau, J.C.Junker, Putin, La Guarde
    ____

    mismatch reality with

    ‘Consider a thought experiment of a simple system in a vacuum consisting of a constant energy source per unit area of q/A and a fixed receptor/emitter with an area A and initial absolute temperature, T0. The emitter/receptor has mass m, specific heat C, and Boltzmann constant σ.’

    Hans
