Guest essay by Anthony R. E.
Recently, I read a posting by Kip Hansen on Chaos and Climate (Part 1 and Part 2). I thought it would be easier for the layman to understand the behavior of computer models under chaotic conditions if there were a simple example he could play with. I used the attached file in a course where we have lots of “black box” computer models with advanced cinematic features that laymen assume are reality.
Consider a thought experiment of a simple system in a vacuum consisting of a constant energy source per unit area q/A and a fixed receptor/emitter with an area A and initial absolute temperature T0. The emitter/receptor has mass m and specific heat C; σ is the Stefan-Boltzmann constant. The enclosure of the system is far enough away that heat emitted by the enclosure has no effect on the behaviour of the heat source or the emitter/receptor.
The energy balance in the fixed receptor/emitter at any time n is:
Energy in {q/A × A = q} + energy out {−2AσT_n^4} + stored/released energy {−mC(T_(n+1) − T_n)} = 0   eq. (1)
If T_(n+1) > T_n the fixed body is a heat receptor, that is, it receives more energy than it emits, and if T_n > T_(n+1) it is an emitter, that is, it emits more energy than it receives. If T_n = T_(n+1) the fixed body temperature is at equilibrium.
Eq. (1) can be rearranged as:
T_(n+1) = T_n − 2AσT_n^4/(mC) + q/(mC)   eq. (2)
Since 2Aσ/(mC) is a constant, we can call it α, and since q/(mC) is also a constant, we can call it β, to simplify the calculations. Eq. (2) can then be written as:
T_(n+1) = T_n − αT_n^4 + β   eq. (3)
The reader will note that this equation exhibits chaotic properties as described by Kip Hansen in his previous post at WUWT on November 23, 2015, titled “Chaos & Climate – Part 2: Chaos = Stability”. At equilibrium, T_(n+1) = T_n, and if the equilibrium temperature is T∞ then from equation (3):
T∞^4 = β/α, or α = β/T∞^4   eq. (4)
And eq. (3) can be written as:
T_(n+1) = T_n − βT_n^4/T∞^4 + β, or T_(n+1) = T_n + β(1 − T_n^4/T∞^4)   eq. (5)
Eq. (5) can easily be programmed in Excel. However, there are several ways of writing T^4: one programmer could write it as T*T*T*T, another as T^2 * T^2, another as T*T^3, and another as T^4. From what we learned in basic algebra it should not matter, as all those expressions are the same. The reader can try all the variations of writing T^4. For purposes of illustration, let us look at β = 100 and T∞ = 243 (I use this value out of habit, since if it were not for greenhouse gases the earth would be about −30 °C, or 243 K, but you could try other temperatures), with an initial temperature of 300 K. After the 17th iteration the temperature has reached its steady state, and the difference between coding T^4 as T^4 and as T*T*T*T is zero. This is the non-chaotic case. An extract from the Excel spreadsheet is shown below:
| beta = | 100 | | | | |
| Iteration | T with T^4 | T with T*T*T*T | % diff. | T∞ = | 243 |
| 0 | 300.00 | 300.00 | 0.00 | ||
| 1 | 167.69 | 167.69 | 0.00 | ||
| 2 | 245.01 | 245.01 | 0.00 | ||
| 3 | 241.66 | 241.66 | 0.00 | ||
| 4 | 243.85 | 243.85 | 0.00 | ||
| 5 | 242.44 | 242.44 | 0.00 | ||
| 6 | 243.36 | 243.36 | 0.00 | ||
| 7 | 242.77 | 242.77 | 0.00 | ||
| 8 | 243.15 | 243.15 | 0.00 | ||
| 9 | 242.90 | 242.90 | 0.00 | ||
| 10 | 243.06 | 243.06 | 0.00 | ||
| 11 | 242.96 | 242.96 | 0.00 | ||
| 12 | 243.03 | 243.03 | 0.00 | ||
| 13 | 242.98 | 242.98 | 0.00 | ||
| 14 | 243.01 | 243.01 | 0.00 | ||
| 15 | 242.99 | 242.99 | 0.00 | ||
| 16 | 243.00 | 243.00 | 0.00 | ||
| 17 | 243.00 | 243.00 | 0.00 | ||
| 18 | 243.00 | 243.00 | 0.00 | ||
| 19 | 243.00 | 243.00 | 0.00 | ||
| 20 | 243.00 | 243.00 | 0.00 |
If β is changed to 170 with the same initial T and T∞, T does not gradually approach T∞ as in the non-chaotic case but fluctuates, as shown below. While the difference between coding T^4 as T^4 and as T*T*T*T is zero to the fourth decimal place, differences are really building up, as shown in the third table.
| beta = | 170 | | | | |
| Iteration | T with T^4 | T with T*T*T*T | % diff. | T∞ = | 243 |
| 0 | 300.0000 | 300.0000 | 0.0000 | ||
| 1 | 75.0803 | 75.0803 | 0.0000 | ||
| 2 | 243.5310 | 243.5310 | 0.0000 | ||
| 3 | 242.0402 | 242.0402 | 0.0000 | ||
| 4 | 244.7102 | 244.7102 | 0.0000 | ||
| 5 | 239.8738 | 239.8738 | 0.0000 | ||
| 6 | 248.4547 | 248.4547 | 0.0000 | ||
| 7 | 232.6689 | 232.6689 | 0.0000 | ||
| 8 | 259.7871 | 259.7871 | 0.0000 | ||
| 9 | 207.7150 | 207.7150 | 0.0000 | ||
| 10 | 286.9548 | 286.9548 | 0.0000 | ||
| 11 | 126.3738 | 126.3738 | 0.0000 | ||
| 12 | 283.9386 | 283.9386 | 0.0000 |
By the 69th iteration the difference between coding T^4 as T^4 and as T*T*T*T is apparent at the fourth decimal place, as shown below:
| 69 | 88.6160 | 88.6153 | 0.0008 |
| 70 | 255.6095 | 255.6088 | 0.0003 |
| 71 | 217.4810 | 217.4824 | 0.0007 |
| 72 | 278.4101 | 278.4086 | 0.0005 |
| 73 | 155.4803 | 155.4850 | 0.0030 |
| 74 | 296.9881 | 296.9894 | 0.0004 |
The difference between the two codings builds up so rapidly that by the 95th iteration it is 4.5 per cent, and by the 109th iteration it is a huge 179 per cent, as shown below.
| 95 | 126.5672 | 132.2459 | 4.4866 |
| 96 | 284.0558 | 287.3333 | 1.1538 |
| 97 | 136.6329 | 125.0047 | 8.5105 |
| 98 | 289.6409 | 283.0997 | 2.2584 |
| 99 | 116.5073 | 139.9287 | 20.1029 |
| 100 | 277.5240 | 291.2369 | 4.9412 |
| 101 | 158.3056 | 110.4775 | 30.2125 |
| 102 | 297.6853 | 273.2144 | 8.2204 |
| 103 | 84.8133 | 171.5465 | 102.2637 |
| 104 | 252.2905 | 299.3233 | 18.6423 |
| 105 | 224.7630 | 77.9548 | 65.3169 |
| 106 | 270.3336 | 246.1543 | 8.9442 |
| 107 | 179.9439 | 237.1541 | 31.7934 |
| 108 | 298.8261 | 252.9321 | 15.3581 |
| 109 | 80.0515 | 223.3877 | 179.0549 |
However, the divergence is not monotonically increasing. There are instances, such as the 104th iteration, where the divergence drops from 102 per cent to 18 per cent. One is tempted to conclude that T^4 ≠ T*T*T*T.
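For readers who prefer code to a spreadsheet, the experiment can be reproduced with a few lines of Python. This is only a sketch of the same iteration, not the original Excel file; whether and when the two columns diverge depends on the floating-point hardware and on how the language implements the power operator, so the exact iteration counts quoted above may not be reproduced.

```python
# Iterate eq. (5) with T^4 coded two ways and print the percentage difference.
Tinf = 243.0
beta = 170.0               # set beta = 100.0 for the well-behaved case
T_pow = T_mul = 300.0      # same initial temperature for both codings

for n in range(1, 111):
    T_pow = T_pow + beta * (1.0 - T_pow ** 4 / Tinf ** 4)
    T_mul = T_mul + beta * (1.0 - (T_mul * T_mul * T_mul * T_mul) / Tinf ** 4)
    pct_diff = abs(T_pow - T_mul) / abs(T_mul) * 100.0
    print(n, round(T_pow, 4), round(T_mul, 4), round(pct_diff, 4))
```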
Conclusion:
Under chaotic conditions, the same one-line equation with the same initial conditions and constants, but coded differently, will give vastly different results. Under chaotic conditions, predictions made by computer models are unreliable.
The calculations are made purely to illustrate the effect of instability in a simple non-linear dynamic system and may not have any physical relevance to more complex non-linear systems such as the earth’s climate.
Note:
For the above discussion, a Lenovo G50 64-bit computer was used. If a 32-bit computer is used, the differences become noticeable at much earlier iterations. A different processor with the same number of bits will also give different results.
![climate-model-1[1]](https://wattsupwiththat.files.wordpress.com/2013/09/climate-model-11.jpg?resize=576%2C576&quality=83)
“x bits CPU” applies to integer registers, or address registers, or both.
Not fp!
*sigh*
Not fp!
=====
depends how the floating point is implemented. regardless, there are loss of precision errors in fp calculations on digital machines. similar to 1/3 in decimal notation, you cannot represent this as 0.33333 in a finite number of decimals. these small truncation errors can quickly build up through repeated iterations as are found in climate models, such that the error overwhelms the result.
for example, due to round off/truncation errors it is nearly impossible to create a climate model that doesn’t gain or lose energy, without any energy actually being added or lost from the system! So, at each iteration it is often necessary to calculate the phantom gain/loss in energy, and then average this back into the model over all the grids. In effect you smear the error over the globe, in an attempt to hide it, rather than reporting it as model error.
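A two-line illustration of the build-up described above (a generic sketch, not taken from any climate model): repeatedly adding the double closest to 0.1, which is not exactly representable in binary, drifts away from the expected total, while a compensated summation such as Python’s math.fsum stays essentially on it.

```python
import math

n = 10_000_000
total = 0.0
for _ in range(n):
    total += 0.1                            # 0.1 cannot be represented exactly in binary

print(total)                                # drifts slightly away from 1,000,000
print(math.fsum(0.1 for _ in range(n)))     # compensated summation keeps the error tiny
```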
Floating point usually follows IEEE 754 single and double.
And the excess precision of extended registers is a big issue, a disgusting “feature” of the Intel CPU. Excess precision leads to semantic issues, cases where the axiom of equality A = A for any expression A doesn’t hold. It’s very, very bad. It’s a disease. Many people in the field deny that, even a programming language designer.
If I had to recruit a computer scientist, I would ask him how bad it is.
@Simple Touriste:
In my FORTRAN class of decades back, we were cautioned not to test for equality in floats, but to test for “smaller than an error band”. So ABS(A-B) .LT. 0.000001
I’ve noticed that is not done in GIStemp and would speculate it is missing in the GCMs.
Oh, and per your question: “It is bad.. very Very Bad.. Evil, even…”
@E.M.Smith
Exactly. One of the first things I “learned by doing” in numerical analysis is to never test floats for equality. And at the time (also decades ago) I was only trying to solve a simple system of linear equations!
How do you test fp numbers?
@Simple Touriste
There is no easy answer. I refer to this long and rather technical article by Bruce Dawson, a Google programmer who focuses on optimization and reliability.
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
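For anyone who wants the short version without reading the whole article: the usual approach is to combine an absolute tolerance (for values near zero) with a relative tolerance (for everything else). A rough Python sketch follows, with tolerances chosen arbitrarily for illustration; Python’s math.isclose packages the same idea.

```python
import math

def nearly_equal(a, b, rel_tol=1e-9, abs_tol=1e-12):
    # true if a and b agree to within a relative OR an absolute tolerance
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

print(0.1 + 0.2 == 0.3)              # False: exact equality fails
print(nearly_equal(0.1 + 0.2, 0.3))  # True
print(math.isclose(0.1 + 0.2, 0.3))  # True: standard-library equivalent
```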
Great article. the money quote:
“[Floating-point] math is hard. You just won’t believe how vastly, hugely, mind-bogglingly hard it is. I mean, you may think it’s difficult to calculate when trains from Chicago and Los Angeles will collide, but that’s just peanuts to floating-point math.”
The author brings up a whole series of problems that occur with floating point math. For example, whenever the results get close to zero you get problems such as N/0 and 0/0. Because of precision problems, these values are rarely exactly zero in FP, so the computer goes ahead and blindly calculates results for these problems, often in intermediate temporary values that are hidden from the programmer. The results “blow up”, but the programmer and computer are both unaware of what has gone on. They report the resulting nonsense as though it was a valid answer.
The bottom line is that floating point values and iterative solutions do not mix well together. This problem is well known to Numerical Analysts, but largely unknown by everyone else. Yet this is exactly how the climate models are implemented. As a result, even if the theory is 100% correct, the results in practice are unlikely to be correct.
“Floating points”?
According to the Climate Models, won’t that be all that is left?
Only a few of the alarmists have knowledge enough to understand math…. that’s sad.
None of the so-called “experts” who presented computer models has the ability and knowledge to write a proper algorithm taking ALL NEEDED parameters/variables into consideration. When I myself analysed water levels from the peak of the Stone Age up to the year 1000 AD, I used 43… Has anyone seen any of the so-called experts take more than 12 into consideration?
Reblogged this on Norah4you's Weblog and commented:
some types of Erosion
* Wind-erosion
* Water-erosion
* Temperature-erosion
* Gravitational-erosion
(but there are more types)
When erosion of sand and/or coral reef occurs along coasts, the “land” will always lose area to the sea. If there are houses or settlements close to the water, sooner or later they will “disappear”. That is due to natural forces, mostly the force we call erosion.
Example:
Two phenomena are common: wind, waves and temperature allow the sand/coral to become undermined, and the land above falls into the water piece by piece, sometimes in the form of landslides.
Depending on the area where they occur, and on the tidal power and the strength of the ebb and flow, this can mean anything from sand being spread out over a larger area of nearby land, which can look as if the water level has risen, to sand being spread by wind and waves over much larger surfaces.
Another beachfront effect of erosion comes in the form of sludge, gravel and pollutants carried out by meandering rivers. The water cycle makes water try to reach the lowest point possible. While travelling from land to sea, the flow speed of rivers becomes lower in the middle reaches of the stream. This causes eroded land/sand to fall into the water, depositing sludge along the sides, and close to where the river enters the sea a delta might arise. Normal school knowledge that has been forgotten can be repeated.
Actually it’s called numerical instability: multiply a small number (for example 0.99) by itself a thousand times, and what do you get? A very, very small number, i.e., nearly zero. Multiply a slightly larger number (for example 1.01) by itself a thousand times, and you get large numbers, trending towards infinity!
It’s called iteration, and in any computation where iteration is part of the game, as in all GCMs and climate models, this is the root of chaos.
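The two cases above, checked directly (a trivial sketch, but it shows how sharply the behaviour splits around 1):

```python
print(0.99 ** 1000)   # about 4.3e-05: decays toward zero
print(1.01 ** 1000)   # about 2.1e+04: runs away
```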
Numerical instability, iteration? Your example is just a geometric progression where you choose the common ratio just above and just below 1. This is basic maths:
In mathematics, a geometric progression, also known as a geometric sequence, is a sequence of numbers where each term after the first is found by multiplying the previous one by a fixed, non-zero number called the common ratio.
The behaviour of a geometric sequence depends on the value of the common ratio.
If the common ratio is:
– Greater than 1, there will be exponential growth towards positive or negative infinity (depending on the sign of the initial term).
– 1, the progression is a constant sequence.
– Between −1 and 1 but not zero, there will be exponential decay towards zero.
This is basic maths:
==========
to a mathematician. to a climate scientist, it looks like a ‘tipping point’. you start with 1, and iterate to the Nth power, you still have 1. climate is stable. Now add in CO2. Now you start with 1.00001. Iterate this to the Nth power and it blows up. a tipping point. climate is unstable, thus any change to the atmosphere, any change at all, will lead to run-away warming or run-away cooling.
which is nonsense, because it would have already happened, given the past changes to earth’s atmosphere. so the result must be a mathematical illusion. the result of iterating (1 +/- small value) to the Nth power.
Pick up a text on Numerical Methods to cure your ignorance.
Simple? Not for me, sorry.
@asybot:
A simple example would just do something like C=(A x B ) / 100
and C=(A/100.0) x (B / 100.0)
and show they were both different… then show how using one vs the other changes results, especially in repeated use in a loop.
Not quite the same chaotic thing, but similar idea of math divergence over time.
typo: C = (A x B)/10000. A and B accuracy will be individually truncated, while A x B may be massively truncated before the division. A better example may be
(A/128) x (B/128) vs (A x B)/16384
Suppose you had a fixed point signed representation of 8 bits, and A = 128 and B = 128. The answer is C = 1, but A x B overflows, so (A x B)/16384 gives you 0.
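A rough sketch of that scenario in Python, which has unbounded integers, so the 8-bit wrap-around is simulated with a mask; the scale factor of 128 and the register width come from the comment above.

```python
A, B = 128, 128                  # both represent 1.0 at a scale factor of 128

print((A // 128) * (B // 128))   # 1: scaling each factor first stays in range

product = (A * B) & 0xFF         # A*B = 16384 wraps to 0 in an 8-bit register
print(product // 16384)          # 0: the reordered formula returns garbage
```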
Nor me. But what I took away from this article is:
1. The coding methods used and the equipment on which the models are run can introduce a large error, especially the further out the projections are run, and
2. the modelers do not recognize or account for this hidden source of error.
I hope I have this right…if not, could someone please correct me?
Correct, but more subtly you cannot compensate for this error. It’s chaotic and unless you know what the answer should be, there’s no way to know what the error is.
Thank you for the clarification.
To me, it has more to do with the style of algorithm. Taking a number (data point) with an accuracy of +/-2% and then raising it to the 4th power would lead to great inaccuracies. Taking the 4th root of a number with an accuracy of +/-2% would give a better outcome.
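For instance (rough numbers): a value that is 2% high becomes (1.02)^4 ≈ 1.082, roughly 8% high, after raising to the 4th power, whereas its 4th root, (1.02)^(1/4) ≈ 1.005, is only about 0.5% high. To first order, the relative error gets multiplied by the exponent.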
A computer is like a food mixer, it just churns the ingredients in the order you put them in.
For a perfect confection you need the best ingredients, the skill of knowing the correct order to add them & how long to churn them for.
The ‘climate meddlers’ have used flour, salt, water & chalk, thrown it in, mixed for a few mins, added a few picked cherries & are trying to pass it off as a gateau.
GIGO – garbage in garbage out.
“1saveenergy
December 6, 2015 at 12:07 am
A computer is like a food mixer, it just churns the ingredients in the order you put them in.”
Ahhh, no. A mixer does exactly that, mixes (whatever you put into it). A computer does exactly what it was programmed to do.
it just churns the ingredients in the order you put them in.
Er no, these days it churns them out in the order the compiler put them in..
And compilers are making a lot of decisions the programmer didn’t even think about.
Exactly right, …. and that is the very reason all of my “programming” was done in Assembler Code (machine language). And my software/firmware performed the way I wanted it to perform …. and not the way the programmer of a “compiler” per se assumed I wanted it to perform.
That is why one needs to use a validated compiler designed to be deterministic.
Technically modern CPUs perform out of order execution where it will reorder instructions for parallel execution where it deems they are independent of one another.
-And even compilers aren’t perfect. Yea, many moons ago, we got a *brand* *new* FORTRAN 77 compiler. All went OK, until I switched off the debugging mode. It took weeks of going through the assembler output to find out that the compiler itself had bugs!
Back in the 80’s we had a Mark Williams compiler that generated weird bugs whenever we tried to crank optimization above level 0.
There are also chip level micro codes between the assembler emitted by the compiler and the silicon. Further, some compilers do not emit assembler, they emit C, which is further compiled by a C compiler that emits assembler. You also need to account for the procedure libraries used by the compilers.
This was eye-opening. How do the ‘models’ avoid this iterative chaos ruining the projections?
They can’t. That is sort of the definition of the effects of a chaotic system…
Part of why any model must be validated. Frequently. And each new release all over again.
Further, Björn, according to Christopher Essex, because of calculation error, machine epsilon, computer parameterization, and several other climate- and computer-related issues, earth climate models can never be correct. It’s about 1 hour 13 minutes long, but in the video found here, Mr. Essex explains why (it is quite worthwhile): http://wattsupwiththat.com/2015/02/20/believing-in-six-impossible-things-before-breakfast-and-climate-models/
The Essex presentation is both excellent and very worthwhile. It is also simple enough for a non-expert to follow.
I have been wondering the same. Could someone please explain why the CMIP5 climate models diverge so little from each other? I have my doubts that is the case in real life.
The simple answer is… because they all use the same “canned” program “sub-routines”.
Fudge factors. They know the results are bad. So they introduce “corrective” instructions that are designed to “correct” the results, to make them acceptable. Looking at it from the other side, they make conclusions about what the results should be, then program it in.
As they don’t understand the atmosphere to the degree required to model it, they can’t model it. Complaints that their computers just aren’t big enough are lame. “If we only had more granularity.” I think the entire approach, GCM Nudge Models, is wrong. They are trying to substitute brute-force computing power for brain power.
It’s due to the tuning that they apply to the system. That is, they adjust the user adjustable variables until they get the answer they were looking for.
It is even worse than that.
Code can be compiler-release sensitive, language sensitive, order-of-operation sensitive, and more.
In GIStemp, I looked at just ONE line of code in one program out of a dozen or so. A minor coding fault caused an average 1/1000 degree warming of the overall data, and 1/10 C warming of individual readings.
https://chiefio.wordpress.com/2009/07/30/gistemp-f-to-c-convert-issues/
covers it, and some compiler dependencies and more.
I then varied some of the data type choices and by making different “reasonable” choices could get up to 1/2 C difference…
It is essential to have an independent trained programming team audit every single line of code before you can even begin to trust it, and even then, test repeatedly.
The skill needed is beyond that of the climate researchers. It takes a good engineer to even start on the task, with lots of code audit experience.
What an interesting article, why am I not at all surprised?
I would recommend all to read your article, but just in case people do not have the time, and in view of the importance of your findings, I set out your conclusions drawn from your analysis:
I can’t believe they are still using Fortran.
Any statement about a complex computer model in the tenths-to-hundredths-of-a-degree range is just silly. Back when I first got involved in computer simulations in the 1970’s we got a short and very sensible lecture on the difference between accuracy and precision. A great number of decimal places in such numbers is no indication of accuracy; indeed, it is more often than not an indication of imprecision.
Any study that does not have an appendix that examines and details the error bars implicit in the analysis is not worth the paper it’s printed on. From what I saw in the climategate releases, the code written at the CRU of the UEA was appalling in this regard. Not surprising really, in that most was written by undergraduates with little or no formal computer science training.
In one release we see the following comment added in by the last programmer who tried to figure out what the code was doing.
“I am seriously close to giving up, again. The history of this is so complex that I can’t get far enough into it before my head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions that I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more”
This is the basis on which we are supposed to change our entire economic system.
I knew some physics student; once one of them told me that his program was giving satisfying results but then his team discovered that they implemented equations that were not the one they wanted to implement and the output was meaningless – but plausible.
Scientific programming is subtle and tricky.
Each of these options of GCC can impact program semantics in subtle ways:
https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
Thanks a lot. Almost none of that had ever occurred to me before.
“Under chaotic conditions predictions made by computer models are unreliable.”
AND, the atmosphere is not made of boxes in a grid, nor does it behave as a load of boxes in a grid and we cannot hope to understand it if we think of it as boxes in a grid.
+1
AND.. with every so-called ‘box’ there will be interfaces to the next boxes which will need heavy-duty maths to describe how these boxes interact. These interfaces do not exist!
Of course they exist. They are coupled nonlinear differential equations. “Doesn’t exist” is very different from “fiendishly difficult to solve, if at all”.
Do not forget that to make the mathematics and algorithms easier, changes in CO2 level are considered instantaneous over all ‘boxes’. The errors due to that, which exceed those due to iterated float imprecision, are then multiplied by the float imprecision.
All science is about simplified models.
I like to think of chaos type edge effects like this.
A sniper in an ideal world has a perfect rifle, the air is dead calm, the cartridge is reliably loaded with the same charge of explosive manufactured to the same formula, his sights are set perfectly, the air is at standard pressure and humidity, and we ignore any effects of sunshine (yes, light exerts a force too). And the prediction is that at a mile range he kills the president.
In reality, at that range so many random second-order effects come into play that any rifle marksman will tell you that identical shots will be out by an inch or three.
And here’s the rub. The result of a single shot could be anything from a breeze past his cheek, to a nasty flesh wound, permanent brain damage or instantaneous death.
No laws of physics are violated to create this uncertainty, merely the nature of the system we have hypothetically constructed.
And yet human history being what it is, this might be the difference between a global war and a nasty incident best forgotten.
The point is that the uncertainty from mathematical precision entirely mimics real-world uncertainty. This isn’t necessarily an issue with the models, or the computation; it’s an issue with using a particular sort of model outside the limits of its valid applicability.
We are asking the question ‘will a rifle aimed this way, kill the president or not?’ and the answer is ‘we can’t say: A random gust of wind, or a slightly out of true manufactured round, could be the difference between life and death’.
I have said it before and I will say it again, as an engineer, this is a bad solution. If you want to ensure the outcome, get closer.
I think people are still confused by all this chaos.
This article is really about the butterfly effect. ‘Sensitivity to initial conditions’ – or in this case floating point precision.
If you have a model that is that sensitive, it simply isn’t usable as a predictive device. Period.
I am more interested in models that do actually represent reality and yet still cannot be used to predict outcomes. That is the class of equations that produce (via negative feedback) bounded solutions, yet cannot be analysed sufficiently well to give even an approximate solution, due to the sorts of effects mentioned above.
What we need to do here, is model climate using these sorts of delayed feedback models, in order to get a picture of the boundaries of climate – that is, a range of temperatures beyond which it cannot go…
I suspect once the heat (sic!) goes out of the climate argument as countries promise much, do nothing, and nothing actually happens, we will develop the climate models to incorporate the sort of multi-decadal negative feedback systems that cause the quasi periodic oscillation of temperature and other climate and weather related effects.
I have been shying away from it, but I suspect I may need to write some sample code to demonstrate how such periodic delayed feedback systems actually create a system that fluctuates naturally within certain bounds, but relatively unpredictably.
Just as importantly, the real world has essentially infinite precision, and eventually a difference between the model and reality in the millionth digit will make a difference.
NOW you’re talking!
I don’t understand this. There is an equilibrium temperature determined by insolation, that doesn’t change. Any departure causes a feedback to arise which tends to return the system to equilibrium, no matter whether the “error” is from natural variability or from processor “rounding” errors. Any such errors cannot be cumulative or the model is simply wrong.
Yes; the post was misleading to start off with “The reader will note this equation exhibits chaotic properties.” There’s nothing about the system defined by that equation that is inherently chaotic. If you run that system in you favorite differential-equations solver, you won’t observe chaotic behavior.
All he’s showing is that variations in computer operations can cause minuscule errors that pile up, independently of whether the system being modeled is chaotic.
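To make that concrete, here is a sketch of the continuous version of the article’s system handed to a standard ODE solver (SciPy’s solve_ivp; the rate constant and time span are arbitrary, chosen only to show the shape of the solution):

```python
from scipy.integrate import solve_ivp

Tinf = 243.0

def dTdt(t, T):
    # continuous analogue of eq. (5): plain relaxation toward Tinf
    return [100.0 * (1.0 - (T[0] / Tinf) ** 4)]

sol = solve_ivp(dTdt, (0.0, 10.0), [300.0], max_step=0.01)
print(sol.y[0][-1])   # settles smoothly onto ~243, no oscillation, no chaos
```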
Exactly,
And this can add up to be something of significance. As E. M. Smith ( December 6, 2015 at 12:29 am ) observes:
The actual system may not be chaotic, but the model essentially is.
pochas said “There is an equilibrium temperature determined by insolation . . . ” Peripheral to the topic, but please note that at each point in the atmosphere one can calculate (at least) two temperatures both of which change most of the time and which cannot be in equilibrium because of transparency: the gas kinetic, and the radiation field. They are seldom closer than several degrees F. At best the two will be in balance twice a day as the diurnal curves cross each other.
It’s true that the earth is never in thermal equilibrium, as you say. Because I like to avoid long-winded comments I did skate around some important things that can cause the equilibrium surface temperature to change. They must all be understood before any attempt can be made to model the climate. To name a few, the day/night and seasonal cycles, solar activity (TSI, solar wind, terrestrial and interplanetary magnetic fields), solar spectral composition, tidal effects on oceans and atmosphere, volcanism (aerosols), and yes, humans.
Using integer vectors, as in cryptographic mathematics, the numbers can be as big and as accurate as you want, but calculation time will increase.
Here it is as Fortran, using interval arithmetic.
real, parameter :: Tinf = 243
real, parameter :: beta = 100
interval :: T    ! INTERVAL type: a compiler extension (e.g. Sun/Oracle Fortran), not standard Fortran
integer :: i
T = 300
do i = 1, 120
  T = T + beta*(1 - (T/Tinf)**4)
  print *, i, T
end do
end
The results start out looking good, but suddenly…
1 [167.69426874581208,167.6942687458124]
2 [245.01402143715617,245.01402143715672]
3 [241.65731550475314,241.65731550475474]
…
36 [163.12325262212141,308.84946610130112]
37 [2.1702338224654909,388.54281085174552]
38 [-551.45741730850421,488.54281021553476]
39 [-3103.7585801101132,588.54281021553482]
40 [-2664504.7860223172,688.54281021553482]
41 [-1.4455759832608285E+18,788.54281021553482]
42 [-1.2523871492099317E+65,888.54281021553482]
43 [-7.0555246943510491E+252,988.54281021553482]
44 [-Inf,1088.542810215535]
This is using double precision arithmetic. So even the first version has its problems. Interval arithmetic is often pessimistic, but it can give you a clue that you’ve got trouble. In this particular case, what it’s telling us is that this is a lousy way to integrate this particular equation, which isn’t really surprising. In more complicated situations, it can be hard to see. (Kahan has an instructive paper on useful but not foolproof ways to tell if a floating point computation should not be trusted.)
REAL FORTRAN is ALL CAPS!
; sarc/>
Then again, I’m old too 😉
… and statements starting in column 6!
And remember not to drop the card deck!
I agree with your comment. The method shown isn’t a formal numerical method of solution. The article has nothing to do with chaos but is a naive exposition of numerical rounding errors.
The example used does not exhibit mathematical chaotic behavior (a particular jargon meaning), but the computer processing done on it does exhibit the common meaning of chaos. The result becomes rapidly and unpredictably disordered.
I’m not sure which the author set out to illustrate, so I refrain from tossing rocks about it. YMMV.
GCMs ought to be regarded as a source of entertainment and amusement for computer buffs, with the hope that some good will spin-off from their efforts to model such a system in such a way. But sad to say it has been no laughing matter as the owners and operators of these models have extracted, with some determination, just the kind of illustrations they seek to show the world what they insist must happen to that system thanks to our CO2 emissions. This is would-be visionary stuff. Not science.
You have lost the plot in terms of the definition of “simple”.
Try something like the change in the investment rate for savings over time, even that is complex for most people. If I recall someone did this some years ago on this site. It illustrates the point without the complexity.
You know, there’s a saying in French that goes like “Envoyez un âne à paris, il n’en reviendra pas plus appris” (‘send a donkey to Paris, it won’t come back any smarter’ – my translation).
Or in the alleged words of Einstein: “Everything should be made as simple as possible, but not simpler.”
I can appreciate that not everyone will find this article enlightening, but that’s not a reason not to write it.
In another way, it is “even worse” – that is psychology. I’ve done modeling work in a context other than climate, but the psychology is the same in all modeling. At the start, there is uncertainty in some data or “parameters” (assumptions). When doing good modeling work, these are always well understood at the start. But after days (or months or years) of looking at numbers with 8 (or more) significant digits, it is VERY EASY to start thinking that differences in the 5th significant digit (or 4th or 3rd etc.) really tell you something, while forgetting that important inputs were +/- 10% (for example). In my experience, it takes extreme discipline to avoid this trap. Maybe this is one reason one well-known climate modeler (who did not accept the global warming party line) once said something like, “climate modelers need to go outside and look at the weather now and then.” In “impact” models, ones which assume a certain warming and then try to predict the impact of that warming on some ecosystem, the input assumptions have huge uncertainties, making this phenomenon probably even more pronounced.
This post has nothing to do with climate. It is one of an endless series of posts which seems to claim that ordinary everyday engineering maths is impossible.
But it isn’t. Engineering maths works. Computational fluid dynamics works. Planes fly, bridges stay up.
It illustrates that if you don’t know what you are doing, solution algorithms for differential equations can be unstable. You learn that quickly in CFD. Your program blows up within a few thousand iterations. But GCMs don’t do that. There is a whole body of knowledge about stable solution. You have to get it right. In a situation like this, you first have to think about timestep. But better, use a degree of implicitness, which relates the change in T to the average T^4 during the time step, not the initial value.
To expand on that point, you’ll do better replacing
T_(n+1) = T_n − αT_n^4 + β
by
T_(n+1) = T_n − αT_n^4 − 2αT_n^3(T_(n+1) − T_n) + β
You’re getting into stiff differential equations. My recommendation here is the trapezoidal method (Eq 3 in the link).
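For what it is worth, here is a sketch of what that buys you, with the implicit equation above solved algebraically for T_(n+1) (my rearrangement, not code from any GCM): T_(n+1) = (T_n + αT_n^4 + β)/(1 + 2αT_n^3).

```python
Tinf, beta = 243.0, 170.0      # the beta that sends the explicit scheme chaotic
alpha = beta / Tinf ** 4

T_exp = T_imp = 300.0
for n in range(1, 21):
    T_exp = T_exp - alpha * T_exp ** 4 + beta        # explicit update, eq. (3)
    T_imp = (T_imp + alpha * T_imp ** 4 + beta) / (1.0 + 2.0 * alpha * T_imp ** 3)
    print(n, round(T_exp, 4), round(T_imp, 4))
# the explicit column keeps bouncing; the implicit column settles quickly onto 243
```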
Nick- “Engineering maths works. Computational fluid dynamics works. Planes fly, bridges stay up.” Engineering math works because engineers are responsible when the bridge falls down or the plane crashes. They know how to check for instabilities in solutions and how to prevent them, and then do a reality check and make sure the results make sense. They also always build generous safety factors into every critical design. Even so, planes still crash and bridges do fall down, but only rarely. There are still some 70-80 year old airplanes being flown in commercial use because 1) they were well designed to begin with, 2) they were substantially overbuilt, 3) they can be economically repaired and upgraded and have been over the years.
The large-scale GCMs cannot be checked for the effect of calculation instabilities at every iteration. Even so, the GCMs can only be correct if they DO have the actual, important variables as part of the models. In that case, only ONE model would be needed. The fact that 32 or 97 or 123 different versions of the same process are used is because averaging them gives an illusion that the process is being correctly modeled and the results mean something useful. It is only now, 30 years on, that the models are showing that they do not correctly model the climate. It’s time for engineering to take over and re-do the design with the “correct” parameters and equations so the result can be tested for another 30-50 years to see if it works better. We’re still in the era before stage 1) above, trying to make a well-designed machine. Give it another 1000 years and we may have a well-designed, well-tested climate model.
Nice ! The post as well. Well done
“The large-scale GCMs cannot be checked for the effect of calculation instabilities at every iteration.”
You don’t have to do that. The basic requirement of any numerical method is that the solution, when substituted, satisfies the equations. You can check that.
GCM developers wouldn’t be flummoxed by a simple stiff de like this. But the basic check that is done that would show it up is grid and timestep independence. If you have an instability like this, and it hasn’t already caused some overflow, then it will surely show when you vary the time step.
“The basic requirement of any numerical method is that the solution, when substituted, satisfies the equations. You can check that.”
And what would that be for the climate models? T_(n+1) = T_n − αT_n^4 + β, or T_(n+1) = T_n − αT_n^4 − 2αT_n^3(T_(n+1) − T_n) + β?
I kind of thought that it was just an example of how easy it is to cock up something more complex. I’m sure that you could do a better job of calculating something so simple but that’s not the point.
Nick Stokes, I never saw a plane (excluding RCM planes) that didn’t get plenty of time in a wind tunnel, whether in part or as a (model) whole, starting with the Wright Flyer.
Boeing says:
“The application of CFD today has revolutionized the process of aerodynamic design, and CFD has joined the wind tunnel and flight test as a critical tool of the trade.”
Sounds like they think it works.
After the Tacoma Narrows, bridges and buildings were run through wind tunnels, but I suppose today they can use CFD … but we always get surprises even when we think we have thought of everything:
http://www.engineering.com/Library/ArticlesPage/tabid/85/ArticleID/171/Tacoma-Bridge.aspx
Murphy is always hanging about waiting for us to think we have things figured out.
” bridges and buildings were run through wind tunnels”
You can’t put a bridge in a wind tunnel. You can try with a scale model, but getting the scaling right is a big problem even for fluid flow alone, and for fluid-structure interaction with large fluctuations, well, scarcely possible.
Billy Liar December 6, 2015 at 2:36 pm
Nick Stokes December 6, 2015 at 3:51 pm
Nick, he didn’t say that CFD doesn’t work. He said CFD is tested in the wind tunnel.
w.
Willis,
“Nick, he didn’t say that CFD doesn’t work. He said CFD is tested in the wind tunnel.”
I said originally that it works. This seemed to be controversial, so I pointed out that Boeing was affirming it. Yes, it is tested, but it passes. CFD works.
But in fact, CFD can tell you more than can be tested in wind tunnel. The Boeing link says:
“However, the wind tunnel typically does not produce data at flight Reynolds number, is subject to significant wall and mounting system corrections, and is not well suited to provide flow details. The strength of CFD is its ability to inexpensively produce a small number of simulations leading to understanding necessary for design.”
This relates to what I said about bridges. Just scaling even a pure fluid flow is hard, and the paper says that they can’t achieve flight Reynolds number – ie can’t scale. CFD can.
That’s just external. Engine makers use CFD (and linked structural) in engine design. Flow over the compressor blades is critical, as is stress in them. Very hot air, reactions, huge forces, all kinds of acoustics and vibration, all whirring at high speed. A typical environment where CFD is all that you have. And people say the atmosphere is too hard.
Nick Stokes December 7, 2015 at 12:49 am
And why do they say the atmosphere is too hard? Because the CFD models of the atmosphere have performed so abysmally at longer range forecasting, being totally unable to predict even next month’s weather.
Yes, CFD is extremely valuable. But as near as anyone can tell to date, it is totally inadequate to the task of simulating the climate. If it were adequate, it would have either predicted or hindcast the current plateau in warming. It did not.
Hardly a ringing endorsement of CFD …
w.
Willis,
You make it sound like a black-and-white sort of thing: either the CFD models can predict climate or they cannot. However, the actual reality of the situation is that we know what sort of things the models are better at and what they are worse at. They are good at predicting the weather short-term, before the chaotic effects (i.e., extreme sensitivity to initial conditions) significantly degrade their performance. They are also presumably pretty good at predicting things that are not very sensitive to initial conditions, like how much colder the climate will typically be here in Rochester in January as compared to July. They are not very good (or, maybe so-so) at predicting issues of climate that are sensitive to initial conditions, such as whether this winter will be colder or warmer than average in Rochester. The prediction of the ups-and-downs that have led people to talk about a plateau in warming is something that is sensitive to initial conditions; the climate 100 years from now in response to greenhouse gases is something that is not.
joeldshore December 7, 2015 at 5:35 pm
Joel, always good to hear from you. You say:
OK so far.
True dat.
I’m sorry, but although this is put forward as though it were an obvious fact by the climate modelers, just as you have done here, to date we have absolutely no evidence that it is true. If you have evidence that the climate 100 years from now is NOT sensitive to initial conditions, this would be the time to present it …
Next, the “plateau in warming” is up to two decades now … so your claim is that time spans up to two decades are dependent on initial conditions.
Another statement of your claims that I’ve seen is that “weather is an initial-conditions problem, and climate is a boundary problem”. However, when I’ve asked the proponents of this theory at what point in time does one change to the other, they just wave their hands … is it no longer an initial-value problem after five years? Ten years? Fifty years? You’ve said that it IS an initial-value problem out for no less than twenty years … but suddenly it will stop being one after what? … 30 years? 50 years?
My next question is, what are the nature and value of the boundary conditions TODAY that define the claimed “boundary problem”? I mean yes, there is a boundary in that space radiates at about 3 W/m2, and there is a boundary where the sea meets the ocean, but just what are the boundaries of climate today?
Next, if we know the current boundary values, what are the future values of those boundary conditions in say 2050? Will they be different?
So … let me pose those questions to you. Your claim is that the CFD models are right in the short term of days, passable in the longer term of a couple weeks, much worse in the longer term of a couple months, wrong in the longer term of a couple years, and totally off the rails by the time we’re out twenty years …
But by gosh, you assure us they will come right again a hundred years from now when we’re all dead … and you base this assurance on what?
You are a scientist, Joel, and from all reports a good one. What actual evidence do you have for your claim that someday the models will come right? Because their performance to date gives no evidence of that. For example, what other computer models of complex systems can you point to that have that quality (right short-term, wrong middle-term, right long-term)?
So the modelers make an iterative model of the most complex and most turbulent system we’ve ever tried to model. The climate contains the following subsystems: the atmosphere, hydrosphere, biosphere, cryosphere, lithosphere, and the electromagnetosphere. None of them are fully understood. In addition to phenomena occurring internal to one subsystem, there are innumerable mutual two-way or multiple-way interactions between these subsystems. For example: in response to heat from the downwelling solar and longwave radiation (electromagnetosphere), the plankton (biosphere) in the oceans (hydrosphere) put chemicals into the air (atmosphere) that function as nuclei for clouds that affect both the incoming sunlight and the downwelling longwave (electromagnetosphere).
Typical iteration times for climate models are half an hour. Anyhow, they make up some kind of tinkertoy model of this insanely complex system, with many of the important emergent phenomena like thunderstorms and plankton and dust devils and Rayleigh-Benard circulation either simply “parameterized” or ignored entirely. Then they run their model for 1,753,200 iteration cycles, all the way out to the year 2100 …
Now, we know that after running for a modeled year of time, about 18,000 cycles, the model results are unreliable. And we know that after 350,000 cycles, or twenty years, they are still unreliable.
And despite all of that, some folks, including otherwise reasonable scientists, believe without evidence that if we just run them a leetle bit longer, the results from their simple iterative models will somehow represent the climate a hundred years from now …
In addition to the lack of evidence that the models will eventually come right, I’ve been writing computer programs for a half century now, including a variety of iterative models, and my experience says no way. Anyone who thinks that the climate models can predict the future climate out 100 years is fooling themselves about the capability of the programmers, the adequacy of the models, and the state of climate science. We’re nowhere near to understanding the climate well enough to do that.
My best to you and yours,
w.
Thanks for your reply, Willis.
It is not a hard thing to test that this is true in the models and is done all the time. They perturb the initial conditions on the model and run it again.
Now, you might claim that is a deficiency of the models, but the fact is that the whole field of chaos theory arose from studying models. It is not a case where they saw something in nature but could not reproduce it. In fact, Lorenz wrote down some ridiculously simple model and discovered the chaotic behavior. Indeed, most of what we know about the chaotic nature of the weather is from models…It is not like you can do these “perturb the initial condition” experiments on the real earth and see what happens.
As I understand it, the sense in which they mean it is a boundary value problem is that you are perturbing the energy balance of the Earth and the question is what happens in response to that. Just as when you go from winter to summer, you perturb the energy balance (on a more local scale).
In the rest of your post, you make it sound like there are no ways to test the models. But, in fact there are many ways, including looking at the fidelity of reproducing current climate, modeling past climatic changes, and looking at more detailed pieces (like is the upper troposphere moistening as the water vapor feedback claims that it should: http://www.sciencemag.org/content/310/5749/841.abstract )
No doubt, it is a difficult problem and there are considerable uncertainties. But, let’s face it, the easy problems in science are completely solved already and it is not as if we are doomed to be unable to make progress on these more difficult problems. If we demand ridiculous certainty and precision of the science before we do anything, then we may well be doomed, but most people (other than those who have an extreme ideological predisposition to dislike the sort of solutions proposed) don’t demand that kind of precision before they take action.
joelshore says:
…most people (other than those who have an extreme ideological predisposition to dislike the sort of solutions proposed) don’t demand that kind of precision before they take action.
The kind of “precision” that models produce is so wrong it’s ridiculous:
http://www.drroyspencer.com/wp-content/uploads/CMIP5-90-models-global-Tsfc-vs-obs-thru-2013.png
And as usual, joelshore sees everything through the prism of ‘ideology’, confirming once again that the ‘dangerous AGW’ scare is nothing more than a self-serving hoax based on politics and pseudo-science.
Joel, in response to your claim that:
I had said:
In response to my request for evidence, you have given me nothing but a paean to the climate models … it appears that despite a solid scientific background, you are mistaking the output of untested climate models for evidence.
So I’ll ask again—do you have any EVIDENCE that the models will someday come right? Because there is nothing in the model outputs to suggest that. To date, they’ve grown worse and worse over time … so just what magic is supposed to make them suddenly be able to predict the far, far distant future when they have failed so miserably at the short term (years) and the medium term (decades)?
You go on to say:
You misunderstand me. I did not say that there were no ways to test the models. I’m saying that they’ve failed the tests.
They are unable to reproduce the current climate, and are getting worse over time, not better. As to “modeling past climatic changes”, they can do well only on what they have been trained on, which is the recent temperature history. The models do a horrible job, for example, on hindcasting recent precipitation, a vital component of the climate. But it’s worse in the past. The models can’t tell us why the earth warmed up in Medieval times. They’ve been unable to say why those warm times ended and we went into the Little Ice Age. And they are similarly clueless about why the LIA ended and the globe has been warming for ~ 300 years.
So if you consider “the fidelity of reproducing current climate, [and] modeling past climatic changes” to be valid tests of the models, there’s bad news … they’ve failed.
Finally, I ask again. We know that the models do very poorly on short and medium term predictions. But you claim that at some future unknown date they will all come right and start suddenly giving us the right answers … so:
a) What is your evidence that the climate models’ currently terrible performance at forecasting the future will improve someday, and
b) When? I mean, when will the CMIP5 model projections become correct? In 3020? 2050? Tomorrow?, and
c) As a reality check, can you point to even one iterative computer model of any kind in any field which is right in the short term (weeks), wrong in the longer term (years to decades), but somehow it comes right again in the longest term (centuries)? I know of no iterative models with those characteristics, but I was born yesterday, what do I know …
I asked these three questions before, and you totally ignored all three of them … how about you answer them this time so I don’t have to ask again? Or at least let me know if you are refusing to answer them, or just ignoring them and hoping they go away …
Best regards,
w.
Nick writes “But it isn’t. Engineering maths works. Computational fluid dynamics works. Planes fly, bridges stay up.”
These are instantaneous effects, Nick, and the complexity is orders of magnitude less than a GCM. But to illustrate…how does CFD go if you tried to predict the fuel consumption of a theoretical aircraft circling the earth a bunch of times?
In the engineering world, model validation is required and performed. Model parametrizations for two-phase heat transfer, correlations for system bifurcation behavior, etc. are validated in the laboratory. Then the model is built to conservatively treat these parameters based on the real-world limitation being protected. Even then, no one who deals with models of this type would take any model run as indicative of the real-world result. You either treat all uncertainties conservatively to ensure that limits are not exceeded, or you sample uncertainties to develop a probability and confidence based on multiple model runs.
And even if you do multiple model runs, with a system as complex as climate you still have no idea if you have fully (or even significantly) explored the parameter space … nor do you know how well your parameter space approximates reality …
w.
The layman does not read formulae.
That was precisely my thought too. Anyone who has had to spend a long time explaining concepts to laymen knows that the quickest way to glaze their eyes over is to put even the simplest maths in front of them. The converse is often that if you cannot simply explain a concept in words (natural language) then you probably do not understand it yourself.
As a layman with no mean education, including plenty of science, I have to point out that
T_(n+1) = T_n − βT_n^4/T∞^4 + β, or T_(n+1) = T_n + β(1 − T_n^4/T∞^4)   eq. (5)
does, indeed, make my eyes and mind glaze over.
I pretty much educated myself in mathematics. At around the age of 45, I started studying Calculus once again with the determination this time to actually learn it. I studied every single day, carried my calculus book with me every single day to the coffee shop, and did this for several years. I wore out my calculus book, it is no longer fit to leave my desk.
I gained great insight into mathematics by this daily exposure. More about my experience some other day…
garymount
December 6, 2015 at 5:55 am
Fourier was my nemesis
Stephen, numbers higher than ten were mine.
(But I do understand how Gödel’s theorems killed Hilbert’s programme. That’s logic.)
The term “layman” is relative. Even the best of us are laymen to some degree. I appreciate this lead post as it helped this layman (me) to a great degree.
I learned to read equations by substituting the symbols with their written English meaning. It works well for me but can sometimes be painful. Debye’s specific heat triple integration is the one that plagues my memory.
This isn’t surprising.
In programming, T^4 generally does not equal T*T*T*T. The former is almost always calculated with an approximation. The two should be close (whatever that means) but will rarely be equal.
A second issue arises which is caused by loss of precision, chiefly underflow and overflow. The IEEE standard, for example (which most machines use nowadays), has what’s referred to as a double: 64 bits total, with 11 bits of exponent and 53 bits of mantissa. The physical representation of the mantissa has only 52 bits. If the number can be represented with normalization (that is, within a limited range) you get 53 bits, but this disappears when the value is outside the range which can be normalized. Also, not all possible binary configurations represent ordinary numbers; some are set aside to represent things like infinity and Not-a-Number.
There is a representation that has 80 bits which is used for intermediate calculations. It’s unlikely that Excel uses this, certainly not when the value is stored in memory.
The possibility of underflow and overflow requires careful thought, especially when doing a long calculation with many iterations. It is sometimes more prudent to give up some precision to extend the range. When multiplying many terms together, it’s often better to add the logarithms of those terms and convert back with an approximation of the exp function. A single underflow or overflow can render a calculation completely useless. Using logarithms lessens the chance of under/overflow.
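A sketch of the logarithm trick, with illustrative magnitudes only:

```python
import math

factors = [1e250, 1e200, 1e-300, 1e-100]   # arbitrary values; true product is 1e50

direct = 1.0
for f in factors:
    direct *= f                 # overflows to inf partway through

log_sum = sum(math.log(f) for f in factors)
print(direct)                   # inf
print(log_sum)                  # about 115.13 (natural-log units)
print(math.exp(log_sum))        # about 1e50, the intended product
```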
BTW: this only looks chaotic. It’s quite possible to predict the result if you understand how it was computed and have the values used.
Let me expand for those people who aren’t computer experts.
Let’s take an extreme example. Integer arithmetic. And let’s pick a really simple example.
You buy a object for 8 bucks and sell it for 10 what is the selling price as a percentage of the original amount. We know of course the answer is 10/8 × 100 = 125 %, simple huh?
So let’s do that with limited precision (integer) arithmetic
So that’s 10/8 x 100. But 10/8 = 1.25 = 1 (remember this is integer arithmetic, so the result is truncated by the limited precision of integers), times 100 = 100%
But let’s re-express the calculation
10 x 100 / 8 in the computer you’d then get 1000/8 = 125%
Or I could express the same equation as
100/8 × 10
100/8 = 12.5 = 12 (truncated), times 10 = 120%
So depending on how this calculation is written you get vastly different results: 100, 120 or 125%. Because compilers often reorder these expressions, if I write this simple expression in integer arithmetic I don’t know which outcome I’ll get, so in fact I would have to code this as
(10*100)/8 in order to force the compiler to get the right answer, but note that this is different from how we ordinarily think about that equation; on paper we would write it as 10/8 x 100, but if we transcribe the obvious equation directly into an integer computer we get the wrong answer entirely.
The same thing happens on floating point, whenever the number can’t be expressed exactly the representation truncates the value to the nearest representable value. Something is lost. As the value is smaller the amount lost becomes a higher percentage of the value. So 100/8 loses less in percentage terms than 10/8. Even in floating point, how the result is calculated can make enormous differences in the result. Iterating only makes it worse.
(10/8) × (10/8) ×100 = 100
While the true answer is 156.25 (156 truncated to an integer)
This is how my university showed the effects of limited precision, I’ve never forgotten.
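The same three orderings in Python, whose // operator behaves like the truncating integer division described above (for positive values):

```python
print(10 // 8 * 100)   # 100: 10/8 truncates to 1 before the multiply
print(10 * 100 // 8)   # 125: exact, because the product comes first
print(100 // 8 * 10)   # 120: 100/8 truncates to 12 before the multiply
```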
Great thread! It is nice to see we have so many here who understand the limitations of digital computers.
I suppose that many here also understand the limitations of models even when coded as well as we could do. I would hope that many here understand that the model what you code comes from your own understanding and biases of the physical situation at hand. (and the laws of physics that govern)
In the post the author writes “(I use this value out of habit, since if it were not for greenhouse gases the earth would be about −30 °C, or 243 K, but you could try other temperatures)”. What if this “greenhouse gases warm the surface by 33 degrees” (or whatever) is completely wrong? It is well known that many of the skeptics (as versus lukewarmers) believe that atmospheric pressure, solar irradiance, albedo, water in all its forms, and other factors control the climate of earth. I also think the climate has been remarkably stable over the eons, but that is a matter of perception. (What is “remarkably stable”, after all?)
So, the takeaway here is that we have uncertainty in the models due to the calculation difficulties outlined in the original post. We have difficulties in the construction of the models themselves before considering the numerical calculation difficulties. Then we have the problem that the underlying physics may well be wrong.
The current climate models are worthless. They are an exercise in futility as far as science goes, but are wonderful at making the IPCC look “scientific” to the layman.
It’s more than just limitations in calculation imposed by tools. All models (including supposed physical laws) are approximations. One should never confuse the model with reality. The climate modelers seem to have fallen into this trap. When the atmosphere runs off on its own and departs from the output of the models, the modelers seem to think the data are wrong or the heat went and hid somewhere. The latter perhaps is what should be part of the investigation that leads to correcting the models but, strangely, the assumption seems to be that the atmosphere has been subjected to some bizarre aberration and somehow the models prove that.
+100
The inability to represent reality accurately was what Lorenz identified. Minute differences in initial conditions in chaotic systems make prediction of past or future outcomes impossible.
This was almost recognized by the IPCC in their statement on non-linear coupled chaotic systems. But note: it is impossible to predict past or future outcomes. That is, it is not ‘difficult’, nor a matter of knowing the correct compiler switches or IEEE standards; it is impossible.
This is not a ‘matter of opinion’. So none of these climate models can ever be correct.
… and whatever the programmer writes, the compiler will try to optimise how it is actually done. If you code x^2 the compiler will very likely implement x*x 😉
Compiler optimization is overrated. I rarely enable it. For one, it makes debugging difficult. After the code is debugged, you then have to run yet another validation, because the compiler output now isn’t what was debugged.
Besides, with modern processors and large fast RAM store, any compiler optimization is tiny compared to what might be obtained with a different algorithm. Hence, we see programs written in scripting languages and the resulting slowdown caused by interpreting intermediate code is often imperceptible. It’s better to profile subroutine execution times and locally optimize by hand.
Half adder? That used to be the internal method.
A half adder is a hardware circuit for adding one bit to another with a carry output. A full adder is one that not only inputs the two bits to be added but also adds in the carry from the previous lower level. It’s usually the basic building block of an N-bit adder. A full adder can be built from two half-adders.
Not sure what it has to do with the current topic.
Thanks DAV – And there was me thinking a ‘half adder’ was a dead snake
@1saveenergy:
LOL! No, it’s one of the pieces after it has been chopped approximately in the middle.
If you code x^2 the compiler will very likely implement x*x
Depends if the processor architecture has square functionality built in. Agreed Intel doesn’t seem to. In fact using C standard math libraries there is no ‘square’ function.
So you would have to specify how to perform the square anyway.
There is multiply and there is raise to a power.
Lord knows what a POS like Excel does.
The general case for raising to integer powers has two methodologies I am aware of: take a logarithm, multiply, and anti-log the result, or do the successive multiplication.
The logarithm method is almost certainly slower but is the only method that can be used for raising to fractional powers.
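For illustration, here are sketches of both routes, plus the exponentiation-by-squaring approach many libraries use for integer exponents (none of this is taken from any particular math library’s source):

```python
import math

def pow_by_multiplication(x, n):
    result = 1.0
    for _ in range(n):
        result *= x                    # repeated multiplication: n-1 roundings
    return result

def pow_by_logarithm(x, n):
    return math.exp(n * math.log(x))   # also works for fractional n, x > 0

def pow_by_squaring(x, n):
    result = 1.0
    while n:                           # square-and-multiply on the bits of n
        if n & 1:
            result *= x
        x *= x
        n >>= 1
    return result

x = 1.0000001
print(pow_by_multiplication(x, 4), pow_by_logarithm(x, 4), pow_by_squaring(x, 4))
# the three results typically differ in the last bits, which is the whole point
```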
My point is that even if you code x squared, you will probably get x times x in the object code; I’m pretty sure that’s the default for gcc on Intel platforms. Thus attempting to optimise your code by writing x+x or x << 1 instead of 2*x is pointless, since the compiler will choose what it considers best with the currently active options.
If you want a specific implementation you may need to use inline assembler.
LOL,
Phil Jones often gets slagged off for saying he did not know how to fit a straight line in said POS. Sadly many here miss the point of that comment.
What! Version 15 and it’s still got bugs?
Great article; I have been banging on about rounding errors for a while;
https://www.ma.utexas.edu/users/arbogast/misc/disasters.html
… the Vancouver stock-exchange disaster is an eye-popper!
And on this basis, surely the Scripps CO2 series computations, on which the whole AGW edifice is built, have been investigated and made public. And as these computations rely on data from a hardware analog-to-digital converter, an analysis of the known sources of error in A-D conversion and the potential for propagation through the series over many, many years surely ought to be a public document.
Could anyone advise?
TonyN said:
“and the potential for propagation through the series over many, many years ”
A long, long time ago (60’s) I heard the rumor, or maybe it was fact, that the City of Philadelphia and the County government were battling it out in Court to determine “who the money belonged to” that had accumulated over many, many years as a result of a “5 decimal point round-off” on the distribution of the taxes collected on Stock Market transactions.
The amount of money “sitting in limbo” was around $10 million.