Guest Essay by Kip Hansen
“…we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
– IPCC Third Assessment Report (TAR), WG1
Introduction:
The IPCC has long recognized that the climate system is 1) nonlinear and therefore, 2) chaotic. Unfortunately, few of those dealing in climate science – professional and citizen scientists alike – seem to grasp what this really means. I intend to write a short series of essays to clarify the situation regarding the relationship between Climate and Chaos. This will not be a highly technical discussion, but an even-handed basic introduction to the subject to shed some light on just what the IPCC means when it says “we are dealing with a coupled nonlinear chaotic system” and how that should change our understanding of the climate and climate science.
My only qualification for this task is that, as a long-term science enthusiast, I have followed the development of Chaos Theory since the late 1960s, and during the early 1980s I often waited for hours, late into the night, as my Commodore 64 laboriously drew images of strange attractors on the screen or printed them out on my old Star 9-pin printer.
PART 1: Linearity
In order to discuss nonlinearity, it is best to start with linearity. We are talking about systems, so let’s look at a definition and a few examples.
Edward Lorenz, the father of Chaos Theory and a meteorologist, in his book “The Essence of Chaos” gives this:
Linear system: A system in which alterations of an initial state will result in proportional alterations in any subsequent state.
In mathematics there are lots of linear systems. The multiplication tables are a good example: x times 2 = y. 2 times 2 = 4. If we double the “x”, we get 4 times 2 = 8. 8 is the double of 4, an exactly proportional result.
When graphing a linear system as we have above, we are marking the whole infinity of results across the entire graphed range. Pick any point on the x-axis – it need not be a whole number – draw a vertical line up until it intersects the graphed line; the y-axis value at that exact point is the solution to the formula for that x-axis value. We know, and can see, that 2 * 2 = 4 by this method. If we want to know the answer for 2 * 10, we only need to draw a vertical line up from 10 on the x-axis and see that it intersects the line at y-axis value 20. 2 * 20? Up from 20 we see the intersection at 40, voila!
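[For readers who like to tinker: a minimal Python sketch of that proportionality, using just the numbers above.]

def f(x):
    # the linear formula graphed above: y = 2 * x
    return 2 * x

for x in (2, 10, 20):
    print(x, "->", f(x))       # 2 -> 4, 10 -> 20, 20 -> 40

# Doubling the input exactly doubles the output, the hallmark of linearity:
print(f(4) == 2 * f(2))        # True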
[Aside: It is this feature of linearity that is taught in the modern schools. School children are made to repeat this process of making a graph of a linear formula many times, over and over, and using it to find other values. This is a feature of linear systems, but becomes a bug in our thinking when we attempt to apply it to real world situations, primarily by encouraging this false idea: that linear trend lines predict future values. When we see a straight line, a “trend” line, drawn on a graph, our minds, remembering our school-days drilling with linear graphs, want to extend those lines beyond the data points and believe that they will tell us future, uncalculated, values. This idea is not true in general application, as you shall learn. ]
Not all linear systems are proportional in that same ratio: the relationship between the radius of a circle and its circumference is linear. C = 2πR; as we increase the radius, R, we get a proportional increase in circumference, in a different ratio, due to the presence of the constants in the equation: 2 and π.
In the kitchen, one can have a recipe intended to serve four, and safely double it to create a recipe for 8. Recipes are [mostly] linear. [My wife, who has been a professional cook for a family of 6 and directed an institutional kitchen serving 4 meals a day to 350 people, tells me that a recipe for 4 multiplied by 100 simply creates a mess, not a meal. So recipes are not perfectly linear.]
An automobile accelerator pedal is linear (in theory) – the more you push down, the faster the car goes. It has limits and the proportions change as you change gears.
Because linear equations and relationships are proportional, they make a line when graphed.
A linear spring is one with a linear relationship between force and displacement, meaning the force and displacement are directly proportional to each other. A graph showing force vs. displacement for a linear spring will always be a straight line, with a constant slope.
In electronics, one can change voltage using a potentiometer – turning the knob – in a circuit like this:
In this example, we change the resistance by turning the knob of the potentiometer (an adjustable resistor). As we turn the knob, the voltage increases or decreases in a direct and predictable proportion, following Ohm’s Law, V = IR, where V is the voltage, R the resistance, and I the current flow.
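[A minimal sketch of that proportionality, treating the current I as fixed; the component values are invented for illustration and are not taken from the circuit in the figure.]

I = 0.01   # amperes, an assumed constant current for illustration

def voltage(R):
    # Ohm's Law: V = I * R
    return I * R

for R in (100, 200, 400):      # resistance in ohms
    print(R, "ohms ->", voltage(R), "volts")
# Doubling the resistance doubles the voltage: a linear relationship.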
Geometry is full of lovely linear equations – simple relationships that are proportional. Knowing enough side-lengths and angles, one can calculate the lengths of the remaining sides and angles. Because the formulas are linear, if we know the radius of a circle or a sphere, we can find the diameter (by definition), the area or surface area and the circumference.
Aren’t these linear graphs boring? They all have these nice straight lines on them.
Richard Gaughan, the author of Accidental Genius: The World’s Greatest By-Chance Discoveries, quips: “One of the paradoxes is that just about every linear system is also a nonlinear system. Thinking you can make one giant cake by quadrupling a recipe will probably not work. …. So most linear systems have a ‘linear regime’ –- a region over which the linear rules apply–- and a ‘nonlinear regime’ –- where they don’t. As long as you’re in the linear regime, the linear equations hold true”.
Linear behavior, in real dynamic systems, is almost always only valid over a small operational range and some models, some dynamic systems, cannot be linearized at all.
How’s that? Well, many of the formulas we use for the processes, dynamical systems, that make civilization possible are ‘almost’ linear, or more accurately, we use the linear versions of them, because the nonlinear versions are not easily solvable. For example, Ian Stewart, author of Does God Play Dice?, states:
“…linear equations are usually much easier to solve than nonlinear ones. Find one or two solutions, and you’ve got lots more for free. The equation for the simple harmonic oscillator is linear; the true equation for a pendulum is not. The classic procedure is to linearize the nonlinear by throwing away all the awkward terms in the equation.
….
In classical times, lacking techniques to face up to nonlinearities, the process of linearization was carried out to such extremes that it often occurred while the equations were being set up. Heat flow is a good example: the classical heat equation is linear, even before you try to solve it. But real heat flow isn’t, and according to one expert, Clifford Truesdell, whatever good the classical heat equation has done for mathematics, it did nothing but harm to the physics of heat.”
One homework help site explains this way: “The main idea is to approximate the nonlinear system by using a linear one, hoping that the results of the one will be the same as the other one. This is called linearization of nonlinear systems.” In reality, this is a false hope.
The really important thing to remember is that these linearized formulas of dynamical systems –that are in reality nonlinear – are analogies and, like all analogies, in which one might say “Life is like a game of baseball”, they are not perfect, they are approximations, useful in some cases, maybe helpful for teaching and back-of-an-envelope calculations – but – if your parameters wander out of the system’s ‘linear regime’ your results will not just be a little off, they risk being entirely wrong — entirely wrong because the nature and behavior of nonlinear systems is strikingly different than that of linear systems.
This point bears repeating: The linearized versions of the formulas for dynamic systems used in everyday science, climate science included, are simplified versions of the true phenomena they are meant to describe – simplified to remove the nonlinearities. In the real world, these phenomena, these dynamic systems, behave nonlinearly. Why then do we use these formulas if they do not accurately reflect the real world? Simply because the formulas that do accurately describe the real world are nonlinear and far too difficult to solve – and even when solvable, produce results that are, under many common circumstances, in a word, unpredictable.
Stewart goes on to say:
“Really the whole language in which the discussion is conducted is topsy-turvy. To call a general differential equation ‘nonlinear’ is rather like calling zoology ‘nonpachydermology’.”
Or, as James Gleick reports in CHAOS, Making of a New Science:
“The mathematician Stanislaw Ulam remarked that to call the study of chaos ‘nonlinear science’ was like calling zoology ‘the study of non-elephant animals.’”
Amongst the dynamical systems of nature, nonlinearity is the general rule, and linearity is the rare exception.
Nonlinear system: A system in which alterations of an initial state need not produce proportional alterations in any subsequent states, one that is not linear.
When using linear systems, we expect that the result will be proportional to the input. We turn up the gas on the stove (altering the initial state) and we expect the water to boil faster (increased heating in proportion to the increased heat). Wouldn’t we be surprised though, if one day we turned up the gas and instead of heating, the water froze solid! That’s nonlinearity! (Fortunately, my wife, the once-professional cook, could count on her stoves behaving linearly, and so can you.)
What kinds of real world dynamical systems are nonlinear? Nearly all of them!
Social systems, like economics and the stock market are highly nonlinear, often reacting non-intuitively, non-proportionally, to changes in input – such as news or economic indicators.
Population dynamics; the predator-prey model; power dissipated in a resistor as a function of voltage: P = V²/R; the radiant energy emission of a hot object depending on its temperature: R = kT⁴; the intensity of light transmitted through a thickness of a translucent material; common electronic distortion (think electric guitar solos); amplitude modulation (think AM radios); the list is endless. Even the heating of water on a stove, as far as the water is concerned, has a linear regime and a nonlinear regime, which begins when the water boils instead of heating further. [The temperature at which the system goes nonlinear allowed Sir Richard Burton to determine altitude with a thermometer when searching for the source of the Nile River.] Name a dynamic system and the possibility of it being truly linear is vanishingly small. Nonlinearity is the rule.
What does the graph of a nonlinear system look like? Like this:
Here, a simple little formula for population dynamics, where the resources limit the population to a certain carrying capacity such as the number of squirrels on an idealized May Island (named for Robert May, who originated this work): x_next = r·x·(1 – x). Some will recognize this equation as the “logistic equation”. Here we have set the carrying capacity of the island as 1 (100%) and express the population – x – as a decimal fraction of that carrying capacity. Each new year we start with the ending population of the previous year as the input for the next. r is the growth rate. So next year’s population is the growth rate times the current population times (1 – x), the fraction of the carrying capacity left unused. The graph shows the results over 30 years using several different growth rates.
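[The iteration is easy to reproduce. A minimal Python sketch, using the growth rates from the graph; the starting population of 0.1 is my assumption for illustration, since the exact starting value is not critical to the patterns described below.]

def logistic_series(r, x0=0.1, years=30):
    # x_next = r * x * (1 - x), iterated year by year
    xs = [x0]
    for _ in range(years):
        x0 = r * x0 * (1 - x0)
        xs.append(x0)
    return xs

for r in (2.7, 3.0, 3.5, 4.0):
    tail = ", ".join(f"{x:.3f}" for x in logistic_series(r)[-4:])
    print(f"r = {r}: final years -> {tail}")
# r = 2.7 settles near 0.63; r = 3.0 shows the period-2 saw-tooth;
# r = 3.5 a period-4 cycle; r = 4.0 never settles down at all.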
We can see many real life population patterns here:
1) With the relatively low growth rate of 2.7 (blue) the population rises sharply to about 0.6 of the carrying capacity of the island and after a few years, settles down to a steady state at that level.
2) Increasing the growth rate to 3 (orange) creates a situation similar to the above, except the population settles into a saw-tooth pattern which is cyclical with a period of two.
3) At 3.5 (red) we see a more pronounced saw-tooth, with a period of 4.
4) However, at growth rate 4 (green), all bets are off and chaos ensues. The population slams up and down, finally hitting a [near] extinction in year 14 – if the vanishingly small population survived that at all, it would rapidly increase and start all over again.
5) I have thrown in the purple line, which graphs a linear formula of simply adding a little each year to the previous year’s population – x_next = x·(1 + 0.0005·year) – slow, steady growth of a population maturing in its environment – to contrast the difference between a formula which represents the realities of population dynamics and a simplified linear version of it. (Not all linear formulas produce straight lines – some, like this one, are curved, and more difficult to solve.) None of the nonlinear results look anything like the linear one.
Anyone who deals with populations in the wild will be familiar with Robert May’s work on this, it is the classic formula, along with the predator/prey formula, of population dynamics. Dr. May eventually became Princeton University’s Dean for Research. In the next essay, we will get back to looking at this same equation in a different way.
In this example, we changed the growth element of the equation gradually upwards, from 2.7 to 4 and found chaos resulting. Let’s look at one more aspect before we move on.
This image shows the results of x_next = 4x(1 – x), the green line in the original, extended out to 200 years. Suppose you were an ecologist who had come to May Island to investigate the squirrel population, and spent a decade there in the period circled in red, say year 65 to 75. You’d measure and record a fairly steady population of around 0.75 of the carrying capacity of the island, with one boom year and one bust year, but otherwise fairly stable. The paper you published based on your data would fly through peer review and be a triumph of ecological science. It would also be entirely wrong. Within ten years the squirrel population would begin to wildly boom-and-bust and possibly go functionally extinct in the 81st or 82nd year. Any “cause” assigned would be a priori wrong. The true cause is the existence of chaos in the real dynamic system of populations under high growth rates.
You may think this a trick of mathematics but I assure you it is not. Ask salmon fishermen in the American Northwest and the sardine fishermen of Steinbeck’s Cannery Row. Natural populations can be steady, they can ebb and flow, and they can be truly chaotic, with wild swings, booms and busts. The chaos is built-in and no external forces are needed. In our May Island example, chaos begins to set in when the squirrels become successful, their growth factor increases above a value of three and their population begins to fluctuate, up and down. When they become too successful, too many surviving squirrel pups each year, a growth factor of 4, disaster follows on the heels of success. For real world scientific confirmation, see this paper: Nonlinear Population Dynamics: Models, Experiments and Data by Cushing et al. (1998)
Let’s see one more example of nonlinearity. In this one, instead of doing something as obvious as changing a multiplier, we’ll simply change the starting point of a very simple little equation:
At the left of the graph, the orange line overwrites the blue, as they are close to identical. The only thing changed between the blue and orange is that the initial value 0.543215 has been rounded to five decimal places – up to 0.54322 or down to 0.54321, depending on the rounding rule – a change of just 0.000005 in the fifth decimal place, much as your computer, if set to carry only 5 decimal places, would do automatically, without your knowledge. In dynamical sciences, a lot of numbers are rounded up or down. All computers have a limited number of digits that they will carry in any calculation, and have their own built-in rounding rules. In our example, the values begin to diverge at day 14, if these are daily results, and by day 19, even the sign of the result is different. Over the period of a month and a half, whole weeks of results are entirely different in numeric values, sign and behavior.
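[The essay does not spell out the equation behind this particular graph, so purely as a stand-in, here is the same rounding experiment run on the chaotic logistic map from the May Island example, x_next = 4x(1 – x), starting from the two values quoted above.]

def run(x0, steps=45):
    xs = [x0]
    for _ in range(steps):
        x0 = 4 * x0 * (1 - x0)    # the chaotic case, r = 4
        xs.append(x0)
    return xs

full    = run(0.543215)   # the "full precision" starting value
rounded = run(0.54322)    # the same value rounded to five decimal places

for step, (a, b) in enumerate(zip(full, rounded)):
    if abs(a - b) > 0.1:
        print(f"Visible divergence by step {step}: {a:.4f} vs {b:.4f}")
        break
# A difference of 0.000005 in the starting value grows until the two runs
# bear no resemblance to one another in value, sign of change, or behavior.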
This is the phenomenon that Edward Lorenz found in the 1960s when he programmed the first computational models of the weather, and it shocked him to the core.
This is what I will discuss in the next essay in this series: the attributes and peculiarities of nonlinear systems.
Take Home Messages:
1. Linear systems are tame and predictable – changes in input produce proportional changes in results.
2. Nonlinear systems are not tame – changes in input do not necessarily produce proportional changes in results.
3. Nearly all real world dynamical systems are nonlinear, exceptions are vanishingly rare.
4. Linearized equations for systems that are, in fact, nonlinear, are only approximations and have limited usefulness. The results produced by these linearized equations may not even resemble the real world system results in many common circumstances.
5. Nonlinear systems can shift from orderly, predictable regimes to chaotic regimes under changing conditions.
6. In nonlinear systems, even infinitesimal changes in input can produce unexpectedly large changes in the results – in numeric values, sign and behavior.
# # # # #
Author’s Comment Reply Policy:
This is a fascinating subject, with a lot of ground to cover. Let’s try to have comments about just the narrow part of the topic that is presented here in this one essay which tries to introduce readers to linearity and nonlinearity. (What this means to Climate and Climate Science will come in further essays in the series.)
I will try to answer your questions and make clarifications. If I have to repeat the same things too many times, I will post a reading list or give more precise references.
# # # # #
Hi Kip and thank you for your article,
Sorry I did not have time to read all your work and the comments.
I do have a question concerning your Take Home point 3:
“3. Nearly all real world dynamical systems are nonlinear, exceptions are vanishingly rare.”
Background:
In 2008 I demonstrated a close relationship between global average atmospheric CO2 and global average temperature:
“The rate of change with time (t) dCO2/dt varies ~contemporaneously with temperature T.”
The paper and spreadsheet are located at
http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/
For detrended data, the relationship is (I think – I’ve been up for about 20 hours)
dCO2/dt (in ppm/year) ~= 4T (where T is the global average temperature anomaly in degrees C)
You can check this yourself by viewing Figure 2 in the spreadsheet of my icecap paper.
So my question is:
Is this one of those vanishingly rare exceptions to point 3 – is this a linear dynamical system in the real world?
I suggest it is a pretty big system, since natural CO2 flux dwarfs human CO2 emissions.
[Because of this relationship, CO2 lags temperature by about 9 months in the modern data record, which has serious implications to current global warming dogma.]
[Detrending the data raises some questions, but I suggest these can be set aside, since I am not claiming that temperature is the only driver of atmospheric CO2, etc.]
Comments welcomed.
Regards, Allan
Reply to Allan ==> First, refer to the comment above by Willis E and my response.
I will read your 2008 paper when I get a chance, but shooting from the hip I would say that it is highly unlikely that there exists a linear relationship between a single environmental change (one input of very many) and the resulting surface temperature of the Earth due to the combined effects of the climate system being both “chaotic” (in the Chaos Theory sense) and “complex” (in the Complexity Theory sense).
As an aside, I find the CO2 concentration graph interesting in its almost perfect long-term linearity. Weird is how I would put it.
Hi Kip,
To be clear, I am saying that Temperature (among other factors) drives CO2 much more than CO2 drives Temperature.
The annual rate of change dCO2/dt (detrended, in ppm/year) ~= 4T (Temperature anomaly, in degrees C).
Thus CO2 lags surface and tropospheric temperatures by about 9 months.
Natural CO2 flux dwarfs humanmade CO2 emissions. Some parties say that the observed increase in atmospheric CO2 concentrations is primarily natural – I suggest the jury is still out on this question, and fossil fuel combustion, clearing and burning of rainforests and other land use changes do contribute – how much is the question.
See also this January 2013 paper from Norwegian researchers:
The Phase Relation between Atmospheric Carbon Dioxide and Global Temperature
Global and Planetary Change
Volume 100, January 2013, Pages 51–69
by Ole Humlum, Kjell Stordahl, Jan-Erik Solheim
http://www.sciencedirect.com/science/article/pii/S0921818112001658
Highlights
– Changes in global atmospheric CO2 are lagging 11–12 months behind changes in global sea surface temperature.
– Changes in global atmospheric CO2 are lagging 9.5–10 months behind changes in global air surface temperature.
– Changes in global atmospheric CO2 are lagging about 9 months behind changes in global lower troposphere temperature.
– Changes in ocean temperatures explain a substantial part of the observed changes in atmospheric CO2 since January 1980.
– Changes in atmospheric CO2 are not tracking changes in human emissions.
See also Murry Salby’s address in Hamburg 2013:
Anyone here who thinks they can calculate exactly where a (real) pendulum bob will be in space at any specific point in time following its release to swing needs to study up on the fundamentals of chaos theory.
I do believe this is possible. I am not sure if it is still there, but in the Science Museum in London, there is a “pendulum” clock, and it is quite accurate in fact. I don’t recall the specifics; I’ve not been there since the late ’80s/early ’90s.
Study the subject. You will find it is only possible to calculate an envelope outside of which the pendulum bob CANNOT be at any particular moment in time. Within the envelope, the pendulum paints a random scatter of position versus time. The shape of that envelope is determined by “the point and time of release” of the pendulum. It has nothing to do with how accurate a pendulum clock may be over an extended period of time. The pendulum demonstration is BASIC to chaos theory!
I say, if it is still there, go see it at the London Science Museum.
Good grief man! The random jitter of pendulums within their chaotic envelopes is repeatedly demonstrated in lab experiments for “chaos 101”. It’s the usual starting point for understanding the random behavior of non-linear systems and the mathematics of chaos!
I have a problem with the definition of linearity you are using: “Linear system: A system in which alterations of an initial state will result in proportional alterations in any subsequent state.”
Is a radioactive decay a linear system? If you start with a twice the amount of stuff, you will have twice the amount of stuff after an hour, a year, a half-life, or seven half-lives.
Curious George
No. Not true.
Isotopes are either stable, or radioactive.
If it is a stable isotope, the atomic nuclei will not change until modified by an external energy or particle reaction.
If it is radioactive, the nuclei will decay.
If you have enough radioactive nuclei, you can use the immense number of individually unpredictable decay probabilities to predict very reliably exactly how much of a sample will have decayed at any moment. (That DOES ASSUME you have enough radioactive nuclei, and at over 10^20 radioactive nuclei per gram of a sample, you can actually use the “theory of big numbers” to get reliable results.)
That half-life may be milliseconds. Might be seconds, minutes, or hours. Might be millions of years. But the predictable relationship is always maintained if you have “enough” radioactive nuclei. Now, by the time you get down to 12 or 16 individual nuclei?
No. You don’t have enough to use statistical predictions. But it takes a while to get from 10^23 isotopes down to 10 isotopes.
You are making my point exactly. It is always proportional, 2x. By Edward Lorenz’s definition it is a linear system.
It is a linear system. It is a linear system because it is the solution to a first order, linear, ordinary differential equation. In fact, it is one of the archetypal solutions to a first order, linear, ordinary differential equation. It also satisfies the rule that if A(t) satisfies the ODE and B(t) satisfies the ODE, a A(t) + b B(t) satisfy the ODE for any a and b. Because it is a linear system.
I’ll repeat this, since this is something that was not made at all clear at the beginning. A linear system in anybody’s terms is a dynamical system with a time evolution described by a linear ordinary differential equation. We just don’t like to keep saying a linear (second order homogeneous ordinary differential) system (of differential equations).
I think Kip linked this, but I’ll link it again since clearly nobody read it if he did:
http://en.wikipedia.org/wiki/Nonlinear_system
Note well that the definitions in physics and engineering and mathematics are all the same. I will therefore cite in detail the mathematical definition of nonlinear system in contrast to linear:
I gave explicit examples of this above. Seriously folks, if you want to learn you need to learn that wikipedia is your best friend in math, physics, and most of the sciences (as well as a lot of other non-political stuff). Only when the populace is highly polarized and doesn’t reference matters of simple fact does its content get sketchy. This article in fact has many of the examples discussed above, authoritatively. Linear decay is linear because it solves a linear ODE and its solutions therefore superpose. Harmonic oscillation is linear because it solves a linear ODE and its solutions superpose. Neither of those solutions is a straight line, and in neither case is the “forcing” (to the extent that such a thing can be defined in the case of decay) a linear function of the independent variable.
I don’t quite get the bit of outputs proportional to inputs — that makes no sense at all in a ninth order linear homogeneous ODE — what exactly is an “input” or “output”? There are just solutions. But linearity itself in systems of equations is easy enough to understand, hence courses in “linear algebra” and “linear ODEs” and “linear PDEs” and “linear dynamical systems” and so on, available at any university including one near you…;-)
rgb
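[A quick numerical check of rgb’s superposition point, as a minimal sketch with an assumed decay constant: exponential decay solves the linear ODE dN/dt = -λN, so any weighted sum of solutions is again a solution, even though none of the curves is a straight line.]

import math

lam = 0.3    # assumed decay constant, for illustration only

def N(N0, t):
    # solution of dN/dt = -lam * N with initial amount N0
    return N0 * math.exp(-lam * t)

a, b, t = 2.0, 5.0, 7.0
combined = a * N(1.0, t) + b * N(3.0, t)     # superpose two solutions
direct   = N(a * 1.0 + b * 3.0, t)           # or start with the combined amount
print(math.isclose(combined, direct))        # True: superposition holds
# Doubling the starting amount doubles the amount at every later time --
# Lorenz's "proportional alterations" -- even though N(t) is a curve, not a line.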
rgb, your definition differs from Lorenz’s, and I prefer it. But Kip’s preference should matter most.
Is “chaos” a metaphysical term or an epistemological term? If it’s metaphysical, the idea contradicts the determinism of the physical world and is nonsense. As an epistemological term, it’s merely another word used to indicate that we are not omniscient. As for “non-linearity,” what does this word mean that is so daunting? We can create non-linear equations that describe many natural phenomena.
The fact is that the climate is “chaotic” only because its processes are far too complicated for us to understand completely and predict.
Isn’t this what it all comes down to?
No, that’s not it. Not exactly. Chaos is in this context a mathematical term of extreme precision, and it is used to describe very simple systems whose output cannot be predicted, not because the equations don’t describe them exactly, nor because the equations are hard to write down, nor because we don’t understand them completely, but because the equations themselves may be said to contain inherent feedbacks large enough to create extreme instability, or even catastrophic behaviour.
The physics and mathematics of rolling dice are not complicated, and rolling dice is completely deterministic. There is nothing we don’t understand about rolling a die. But we can’t predict the outcome beyond saying that there is an overwhelming likelihood that it will end up with one of six possible faces on display. But which one is essentially impossible to predict.
Deterministic chaos is epistemological. It doesn’t just reflect a lack of omniscience. It reflects a particularly nasty kind of a lack of omniscience. Often in dynamical systems one can observe a kind of coherence where if one knows the initial conditions only approximately, one loses that knowledge gradually or not at all. The solution is “insensitive” to initial conditions, and solutions started out at nearby initial conditions remain close indefinitely into the future. Chaotic systems exhibit the opposite. No matter how close you start two solutions to the same system, the solutions diverge over time to end up as unknown relative to one another as it is possible to be given the e.g. energy constraints of the system. Knowing one of the solutions then tells you basically nothing about the other, within a very broad range of accessible possibilities.
CAGW is a joke because the hypothetical assumptions are a complete joke.
CAGW’s hypothetical efficacy depends ENTIRELY on a “positive runaway feedback loop” involving an exponential increase in water vapor in response to increased CO2 concentrations, which is NOT being observed in any way shape or form….
Nature HATES runaway feedback loops because once the sum of the feedbacks exceeds 1, the output eventually goes to infinity; aka the Gore Effect where, “..two miles under Earth’s surface, it’s several million degrees.”…..
Conversely, what actually seems to be occurring is that any increase in CO2 forcing is partially offset by an increase in cloud cover, which has a cooling effect, thereby reducing any net CO2 warming effect.
This explains why CAGW model-mean projections are already over 2 standard deviations off from reality, and why there hasn’t been a global warming trend for almost 19 years, despite 30% of all CO2 emissions since 1750 being emitted over just the last 19 years…
In just 5~7 years, the discrepancies between CAGW projections vs. reality could well be over 3 standard deviations, with almost a quarter of a century without a global warming trend, at which point, CAGW will have to be tossed in the trash bin.
Once non-linear reality vs. projections start to exceed 3 standard deviations for a statistically significant duration, the hypothesis can be considered a bust….
We’re getting agonizingly close to meeting those parameters for CAGW disconfirmation.
Even the Met Office have suggested that a return to warming may not be seen before 2030.
I have repeatedly been making the point that we can today already contemplate what that would say about cAGW and model projections. If the Met Office are correct that there will not be a resumption in warming before 2030 (“In just 5~7 years, the discrepancies between CAGW projections vs. reality could well be over 3 standard deviations, with almost a quarter of a century without a global warming trend”), it shows that cAGW and the models upon which it relies are failed conjecture.
“An automobile accelerator pedal is linear” This is not the best example, at least not these days, and particularly so with the advent of non-mechanical pedal-throttle linkages.
At low openings, the pedal is designed to travel further to open the throttle a fixed amount. This makes low-speed driving in lower gears smoother.
At high openings, the same pedal travel would open the throttle more. This is because at higher speed, principally due to much increased wind resistance, more power is required to accelerate the vehicle a given amount.
The type of car and the engine power characteristics will also have an impact on the pedal/throttle response curve.
One other problem is that the pedal-throttle behavior is in the simplest case just a static function at the input to the non-linear dynamical system called a car. The behavior of the system between pedal and vehicle speed is not a linear system for several reasons, but maybe most easily seen by considering that the air resistance depends on the speed squared.
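[A small numerical illustration of that last point, with assumed (made-up) vehicle numbers: aerodynamic drag grows with the square of speed, so the pedal-to-speed system cannot be linear over its whole range.]

rho, Cd, A = 1.2, 0.30, 2.2    # assumed air density (kg/m^3), drag coefficient, frontal area (m^2)

def drag_force(v):
    # standard drag equation: F = 1/2 * rho * Cd * A * v^2
    return 0.5 * rho * Cd * A * v ** 2

for v in (10, 20, 40):          # speed in m/s
    print(f"{v:2d} m/s -> drag {drag_force(v):7.1f} N")
# Doubling the speed quadruples the drag, not doubles it.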
Yes, but also is engine power a simple linear function of fuel/air mixture?
Doesn’t the fact that peak power and peak torque do not coincide establish that engine performance and efficiency are not simple linear functions of throttle response/deployment?
To rickard, yes probably but I don’t really know anything about the modeling of engines.
Reply to Robin Matyjasek and others ==> My use of the automobile accelerator was very pragmatic and given as “An automobile accelerator pedal is linear (in theory) – the more you push down, the faster the car goes. It has limits and the proportions change as you change gears.” As an example for average Joes and homemakers, they can understand that “the more you push down, the faster the car goes”.
[ And of course, “It has limits and the proportions change as you change gears. ” ]
Really? You take a non linear system and talk about it as linear with what seems to be only steady state behavior? Ignoring the dynamics and the obvious non linearities in your dynamical systems in an essay about non linear dynamical systems? That really doesn’t make much sense…
Look chaps and chapesses: simmer down!
And review what Kip has said.
1/. Models are used to represent the real world. They are in fact all we have unless you are into transcendental meditation and Gnosis.
2/. There is a class of models – mathematical models – that are deterministic, and whose output displays extreme pseudorandom behaviour to the point where it is essentially unpredictable within enormous limits.
3/. If we are in the game of using models to predict the future – and boy that is what climate science has allegedly had billions poured into it to do – then we had better know if the model that most closely resembles the model we use to represent reality-the-data-set is mathematically chaotic or not.
4/. If we are using the rational materialists model of Reality, we are implicitly subdividing Reality into Objects and Actions connected by an Immutable Causality, and bounded by Natural law. As such all relationships are mathematically describable as time differential equations, and one moment in the aeon is presumed to be the starting conditions for the next moment in that aeon. This is inescapable if you start from rational materialism and want to do normal science.
5/. Ergo introducing metaphysical notions a la Korzybski is a straw man in the context of the discussions of science. In that context we have to assume rational materialism. That the world exists as objects connected in space time by cause and effect. That this is in itself only a working hypothesis – a model if you like – is not denied, but in the context of science that is where we begin.
I am sorry to beat the philosophical drum, but it is hypocritical to use the fact that all models are flawed to dismiss one model that is less flawed in favour of one that is more flawed. Chaos theory is a good usable description of real world phenomena that appear in the rational materialistic worldview, and it’s better than linear for most phenomena.
What it tells us is that of all the possible ways things can happen, some are so unstable that if, by chance, they did occur, it wouldn’t be for very long. One does not generally see serried ranks of pencils marching down the street balanced on their points, but rolling on their sides into the gutter is not unheard of.
And there we have it. There are classes of systems that represent partial solutions to the theory of All There Is, that are approximately linear, and we have made full and good use of them to define and construct a surprisingly stable and ‘unnatural’ world in which to bring up Greens and Climate Scientists, who wouldn’t after all exist without our efforts to make the world sufficiently ‘unnatural’ that their chances of survival were greater than zero. There are classes of systems that represent partial solutions to the theory of All There Is, that are approximately non-linear, but bounded, and we live within the sheer unpredictability of them relying on faith, or simple pragmatism, to get us through. Lightning may strike me down tomorrow. I can’t say, but what can I do? And there are classes of systems that represent partial solutions to the theory of All There Is, that are approximately non-linear, and unbounded, that once they have happened, like Humpty Dumpty, result in worlds that can never be put back together again. We can never go back to the Big Bang and start over any more than anything in the Universe is accurately described by the term Renewable Energy: Nothing is renewable. We surf the wave of entropy from the big bang and once it’s gone, that’s the end of everything.
Please, when operating in the framework of one metaphysic do not call on the premise of another to make your point. It’s bad logic and smacks of sophistry.
I think you need to look up the difference between affect and effect.
Then you can examine the case for saying that climate is in fact the long term average of weather.
I agree, Leo. A lazy mind can’t spell.
First of all, what Kip is saying – and it echoes what I have said here in the past, and it echoes what Robert Brown has said many times much better – is that the proposition that climate is a complex non-linear dynamic system means but one thing above all others.
Namely that we may not need anything beyond climate itself, to account for climate change.
So CO2 is not needed to explain late 20th century climate change.
Obviously in a real world system CO2 must have some effect on the world. We know it has a small radiative impact. We know it affects ocean acidity very slightly and we know it affects plant growth mightily.
The question is whether or not it can cause runaway global warming or deep and otherwise unachievable climate change – here the evidence of the last billion years or so is almost certainly not. CO2 appears to lag temperature, not lead it. It is an effect of climate change, not a cause, and if it were a cause, then the climate of the past simply wouldn’t have been as stable as it was. In short, the climate sensitivity is almost certainly less than unity with respect to analysis of the purely radiative forcing of CO2; that is, CO2 probably only makes a fraction of a degree difference for every doubling, which in political and economic and social terms means it is totally and utterly irrelevant, as the climate is dominated by other far more powerful causes and effects.
The skeptical position is challenged on two points by the warmists –
(i) “If it isn’t CO2, what caused late 20th century warming?” Here the answer is ‘largely nothing: it’s purely natural feedback in the climate system that gave rise to those fluctuations’
(ii) “CO2 must have some effect, don’t deny it (not that anyone ever has), so how much effect will it have?” And the answer doesn’t come from chaos theory directly, but from history, and the short answer is ‘not enough to give a damn about, frankly’
Does it follow from this statement (if true) “…we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible.” that it is impossible for us to know what the climate today should truly and properly be like, and therefore we are unable to answer whether there is anything at all unusual about the climate as we experience it today?
Anyone, any views on that?
Reply to richard verney ==> Personally, I don’t think the second thought can be derived from the IPCC statement.
I don’t think there is a “what the climate today should truly and properly be like”. The real world does not deal in “should be likes” — it is the way it is. Environmentalists in general make this mistake over and over. They assert that certain parts of the real world “should be like” some idealized version which is entirely subjective (but likely never to have really existed).
that it is impossible for us to know what the climate today should truly and properly be like, and therefore we are unable to answer whether there is anything at all unusual about the climate as we experience it today due to the lies and misinformation published to enable the rulers to tax and spend at the rate they please…
Kip, Excellent essay. One thing I find troubling is the terminology which seems to equate linearity to predictability and chaos to unpredictability. I don’t think you’ve gotten into trouble yet by using that terminology, but I think it might be hard not to at times. In actuality, many quite predictable phenomena like exponential growth, radioactive decay, gravitational attraction, etc. cannot be described by linear equations. In some cases, such as exponentials, they can be easily translated into a framework (working with logarithms of values) where linear tools work. In some cases, they can’t. Also, I’m not so sure about phenomena such as the time and location of the slippages on the San Andreas and associated faults. Are California earthquakes chaotic in the sense that chaos theory can tell us useful things about them? Or are they just not very predictable?
I’d also quibble a bit about your usage of linearization. It’s true that simplifying assumptions are often needed in order to arrive at a solution to a complex problem like whether a proposed bridge is going to remain standing after the first windstorm. And the simplifications may lead to a linear solution. But linearization can also mean simply solving a problem in a domain where things that are curves in one domain can be represented by arithmetically convenient straight lines.
And a super quibble. Many potentiometers such as those designed for volume control applications do not have a linear taper. Maybe you should slip the modifier “linear” in before the word potentiometer?
This is the basic difference between a prediction and probability.
There is a 100% probability that the San Andreas will have a major earthquake event. The nearness of such an event rises higher and higher the longer we anxiously wait for it to happen. The longer it is between these events, the stronger the quake, too.
But predicting the TIME of a quake is impossible. All we know is, it will happen.
Same with climate: history tells us crystal clear that a cycle of Ice Ages/Interglacials is happening and these are seeing greater and greater extremes when they happen and the probability that our present Interglacial will end is virtually 100%.
When this will happen cannot be predicted. We can only see rising probability it will happen. Since we still can’t understand what causes Ice Ages to END, we don’t know the level of contribution from the sun causing glacial melting but I would assume it is nearly 100% probable the sun suddenly is much more energetic and causes these mysterious and sudden melts.
Chaos comes from trying to predict exactly when natural forces will cause events to happen and this will remain so forever because there is no way of computing exactly when an event happens in complex systems with input of energy from various sources.
Reply to Don K ==> See Leo’s comment. Engineers make very sure that all their systems fall well within the stable, predictable regimes of their dynamical systems. Once the system transitions to chaos, predictability is lost.
As for linearization, it is when the mathematical equations are set up to be linear by ignoring the nonlinear elements that leads to trouble. Pretending that a nonlinear dynamical system is linear works only as long as the actual forces remain within the tame linear regime for that system — once chaos is allowed out, all bets are off.
(And yes, there are nonlinear pots — my pragmatic example is “As we turn the knob, the voltage increases or decreases in a direct and predictable proportion” — the word linear is in the image.)
Kip, my point was that while linear systems are predictable, not all predictable systems are linear. That distinction isn’t important when dealing with relatively simple situations. I’m not so sure that it can’t become an issue when dealing with real world problems which are often quite hazy.
Likewise chaotic systems inherently have limited or no predictability, but it’s not clear to me that all poorly predictable or unpredictable systems are chaotic. I’m inclined to think that weather/climate for example is indeed inherently unpredictable. OTOH, I’m far from convinced that reasonably accurate earthquake prediction is impossible even though we currently have not the slightest idea how to make such predictions in any very useful fashion.
This limitation can be overcome by writing code, for example a ‘large number’ class in languages that support classes, where the only limitation is how much memory the computer has.
Here is an article, for example, in which the author uses a “BigNum” class to calculate pi to 10,000 digits of accuracy:
http://www.drdobbs.com/a-class-for-representing-large-integers/184403247
This does increase the time to do the calculations and consumes more of the computing system’s resources, so there may be a tradeoff between having more accurate calculations and having fewer parameters available to make a climate model more accurate (more assumptions), as well as losing resolution (grid size).
I am very curious if there are papers or studies on the limitations of climate models with regards to computer resources available today. My thinking is either a computer system powerful enough to do a reasonable job of modeling the climate is perhaps 20 to 30 years away, or computer systems are adequate today and a good computer model exists but doesn’t show future warming, so is …?
Or Chaos makes modeling the climate impossible. Nevertheless, there is clear evidence that the earth maintains a temperature range despite the level of carbon dioxide, as the paleo proxies have witnessed.
I remain somewhat skeptical that chaos will forever make climate models useless but I will be keeping it in mind while I continue my studies and research, and wait for part II and III.
– – –
By the way, can’t one use linearity for highly accurate estimates if one uses a small enough ‘grid or element’ size? I have seen a demonstration of estimating Pi using linear triangles and a somewhat high resolution or precision:
Reply to garymount ==> It is possible to compute big numbers — and there are many schemes and programming work-arounds. In pragmatic terms, all computers end up rounding somewhere — the digital version of 1/3 is 0.333333… to some defined length. Your example of computing a single number to great length is one of the things that computers do well.
In Climate Science, consider that before the 1980s, ALL temperatures were rounded to the nearest whole degree. With electronic sensors, I believe they round to the nearest 100th (? anyone?). Taking monthly averages, also rounded, as if they were actual data points compounds this. We’ll get to this in a later part of the series.
The resolution of the registers is not the limiting feature for temperature, it is the resolution/accuracy of the thermometer. With a typical analog e.g. mercury thermometer, you’ll be doing well to have a resolution of 0.1 C — fever thermometers sometimes have this resolution as a result of having a very thin tube for the mercury you view relative to the volume of the bulb and then using the lensing of the cylindrical glass to make it visible, sort of, at the right angle. More common household thermometers are likely accurate to 1 degree F or around 1/2 C. My beermaking bimetallic thermometers are accurate to two to four degrees F, maybe a couple of degrees C, and probably distort nonlinearly as well. The digital thermometer outside of the house returns temperatures to the nearest degree F — but probably has another digit internally (my previous one did and displayed it). The chips used to measure temperature can probably produce at least a floating point number (roughly six digits in whatever scale) if not a double precision float which is so many digits of precision that we just don’t care any more. However, Anthony has carefully documented that most of the digital thermometers coming here from e.g. China don’t have anything like that sort of accuracy. They can be as precise as you like but still be no more accurate than my beermaking thermometer. The only way to even detect this is to hand-test and calibrate each and every such device you plan to use and then hope that the calibration is not itself a function of temperature or environment or time as the semiconductors used to do the measurement anneal. Most digital instrumentation of this sort exhibits some sort of drift over long enough time — but so do regular analog thermometers for different reasons.
As I pointed out in one or another of my many replies above, this is what makes the “anomaly vs absolute” problem so problematic. The idea is that if your thermometer is miscalibrated, your knowledge of the absolute temperature may be incorrect and hence averages based on this are uncertain, but your miscalibrated thermometer will still measure almost the right degree size, so if it shows warming over a decade, that warming is real. The problems with this assumption are manifold. For one thing, yes thermometers may be miscalibrated, but there is little reason to think that all thermometers are miscalibrated systematically. On average, in fact, since all thermometers are tested/calibrated at least once, one rather expects miscalibration to be normally distributed around true calibration with little skew. What that means is that if you average the output of many thermometers, the more you use the more accurate your average, as the output miscalibration (and roundoff, and reading errors etc.) are all likely to be distributed symmetrically around zero, and in any event the error there is likely to be bounded and stationary even if there is some source of systematic/biased error as well.
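[A minimal simulation of that averaging argument, with assumed numbers: thermometers whose calibration errors are random with zero mean. The error of the mean shrinks roughly like 1/sqrt(N); a shared systematic bias (rgb’s caveat in the next paragraphs) would not shrink at all.]

import random
random.seed(1)

TRUE_TEMP = 15.0   # the "real" temperature, for this toy example only

def station_reading(bias_sd=0.5):
    # each thermometer carries its own random miscalibration, zero mean on average
    return TRUE_TEMP + random.gauss(0.0, bias_sd)

for n in (10, 100, 10_000):
    readings = [station_reading() for _ in range(n)]
    error = abs(sum(readings) / n - TRUE_TEMP)
    print(f"N = {n:6d}: error of the mean ~ {error:.3f} C")
# Many independent, zero-mean errors average away; a systematic error would not.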
If this is not true, then your anomaly is no more likely to be accurate than the average. The miscalibration could be an incorrect degree size! You could be dead on a reference temperature, but exaggerate all warming above that temperature. With glass thermometers this isn’t even all that unlikely, as the degree scale used assumes that the fluid inside resides in a perfect cylinder that doesn’t change its diameter when the glass heats or cools! The thermometer could also be changed every decade, so that “anomalies” are just the result of changing calibration. The person reading the thermometer is definitely changing over the course of 165 years. The physical location of the thermometer may have changed. Nearby trees may have grown, been cut, grown again, and be cut, all without documentation. The physical environment of the thermometer may have changed. Many “official” thermometers and weather sites are located at airports. Airports change! In some ways they are terrible locations for thermometers used to monitor climate, however useful they are to pilots and control tower staff who don’t care what the regional average temperature is, but very much want to know the temperature right next to the asphalt runway.
The one single thing that really is constant over 165 years is that every person was trying to accurately read whatever instrument they were using, which had been expressly constructed for the purpose of making accurate measurements. Everybody believed in good faith that when they read off 83.5F on their thermometer, that was the temperature of the thermometer (to within a few tenths of a degree F) at the time they read it, and that the thermometer in question was in reasonable equilibrium with its immediate surroundings.
Again, if we wanted to compute the change in height of eleven year old children from 1850 to today, we would not do this using anomalies. We would take a collection of 1850 data and do our best to estimate the mean height of eleven year old children in 1850, quite possibly concluding that we don’t know it then very accurately at all because the data simply isn’t there or is completely absent from important parts of the world representing a substantial fraction of the world’s population of eleven year olds (our sample is not iid drawn from a hat, it is therefore likely to be biased). We would then find the mean height of eleven year old children in 2015 far more systematically, using random sampling and taking care to sample in a proportional way from the many genetic pools that very likely have differential mean height as coherent groups (personally I prefer purely random sampling but a good statistician can do as well by polling a smaller but carefully selected subpopulation, still using random sampling but not using e.g Monte Carlo selection of individuals to be included and letting the Monte Carlo process correct for and converge to racial biases). We would then take the difference, adding the standard errors and MAYBE renormalizing the result (better not).
We would absolutely not restrict ourselves to sampling only in one particular town, or using one particular yardstick, or not use all of the data we have in the present because it all wasn’t present in the past. We would just plain properly acknowledge our computed lack of statistical certainty in the final result.
Here’s the rub. Suppose I have just two samples from two locations. The temperature at 3 pm in My Back Yard today is 22 C. Yesterday it was 20 C. The temperature at 3 pm in Your Back Yard today is 23 C. Yesterday it was 20 C. If we compute the “mean temperature in back yards” from these samples, we get 20 C yesterday, 22.5 C today, with an average change of 2.5 C. If I instead recorded only the anomaly relative to yesterday used as a “reference temperature”, I get ((20 - 20) + (20 - 20))/2 = 0 C for the anomaly yesterday, and ((22 - 20) + (23 - 20))/2 = 2.5 C for the anomaly today, and conclude that the average change is — wait for it — 2.5 C.
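[The arithmetic above in a few lines of Python, using exactly the numbers from the two back yards:]

yesterday = [20.0, 20.0]    # my yard, your yard (deg C)
today     = [22.0, 23.0]

mean = lambda xs: sum(xs) / len(xs)

diff_of_means = mean(today) - mean(yesterday)                       # 22.5 - 20.0
mean_of_anomalies = mean([t - y for t, y in zip(today, yesterday)])
print(diff_of_means, mean_of_anomalies)    # 2.5 2.5 -- identical, as the algebra says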
Under what circumstances will the anomaly give a more accurate result for the change than the difference in the means? A tiny bit of algebra suggests under no circumstances will this occur! Under what circumstances will the accuracy of the mean of the anomalies be better than the accuracy of the difference in the mean temperatures? Only when the thermometers are, on average, systematically miscalibrated so that the degree size is reliable but the absolute degree reading is not, and further that error is not a random variation with zero mean; it has to be a systematic error that survives the averaging process instead of shrinking as we add more data and reduce the error.
And that’s where I part company with the folks who massage the temperature data. They seem to believe that the average surface temperature is not very accurately known today for the very good reason that even today we only sample the surface enormously sparsely at some highly and often systematically biased locations, with instrumentation that may or may not be well-calibrated and as accurate as it is precise. It is hard for me to measure the average temperature in my own yard as different sides of the house would have completely different answers, and those answers would be different from the house across the street which has trees and doesn’t sit on a southwest facing slope above the pavement. Fine. We know the temperature in 1850 enormously less accurately than we know it today. All this means is that we cannot know the change in temperature very accurately at all. We can get a number, but the number may well be smaller than the uncertainty no matter how you obtain it from the same data, because one cannot algebraically transform lead into gold. The information content of the data does not change.
rgb
My reference to, for example, ‘BigInteger’, or code to contain numbers with a large number of decimal places, is to compensate for loss of precision as large numbers of calculations take place and rounding has to occur. For example, if the 2 quadrillion calculations per second supercomputer is used and runs a climate model for a few months, that is a lot of calculations taking place. As an example, perhaps after every 100 calculations you lose a decimal place of precision; you will want to start your calculations, your solving of equations, with very high precision numbers so you have some precision left over by the time you finish computing.
But like I said, moving outside of the ‘native’ size of numbers can for example double, or quadruple the time it takes to do all your calculations because of all the extra book-keeping code required to run. You might even push the computer system doing the calculations beyond its resources and thereby grinding the calculation to a comparative halt as memory now has to be swapped to slower virtual memory that exists in the form of hard-drives instead of RAM.
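[A minimal sketch of that precision point, using Python’s built-in decimal module as a stand-in for the “BigNum” class mentioned above: the same chaotic iteration run at two working precisions parts ways long before the run is over, and the higher precision costs correspondingly more time and memory.]

from decimal import Decimal, getcontext

def iterate(precision, steps=60):
    getcontext().prec = precision          # number of significant digits carried
    x, four, one = Decimal("0.543215"), Decimal(4), Decimal(1)
    for _ in range(steps):
        x = four * x * (one - x)           # the chaotic logistic map again
    return x

print(iterate(8))     # 8 significant digits
print(iterate(50))    # 50 significant digits -- a completely different answer by step 60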
Leo wrote,
“No, that’s not it. Not exactly. Chaos is in this context a mathematical term of extreme precision, and it is used to describe very simple systems whose output cannot be predicted, not because the equations don’t describe them exactly, nor because the equations are hard to write down, nor because we don’t understand them completely, but because the equations themselves may be said to contain enough inherent feedback, sufficiently large, to create extreme instability, or even catastrophic behaviour.”
Now that we are done with mental chaos, time to talk about cats inside of boxes! 🙂
Leo wrote,
“No, that’s not it. Not exactly. Chaos is in this context a mathematical term of extreme precision, and it is used to describe very simple systems whose output cannot be predicted, not because the equations don’t describe them exactly, nor because the equations are hard to write down, nor because we don’t understand them completely, but because the equations themselves may be said to contain enough inherent feedback, sufficiently large, to create extreme instability, or even catastrophic behavior.”
Respectfully, I believe this is a contradiction. If the equations describe them “exactly,” there could be no such bizarre, “unstable” behavior that it would be a problem. Either the equations describe the phenomena or they don’t, in which case they are right or wrong. But not both.
Leo is correct. What he says is more or less the mathematical meaning of chaos. A chaotic system is a system that, among other things, is so sensitive to initial values (and rounding errors) that you in some sense can’t predict the system state in the future. A chaotic system can be (or maybe always is) quite well behaved about where in the state space the state can be, so it is not completely random where the system state ends up.
One thing that might be confusing is that a non-linear dynamic system is given by either a differential equation or a difference equation, and those can in general not be solved exactly, so it is necessary to solve the equations numerically. So it is not that we have a function of time with strange behavior.
It is that we have a function of time (or rather, a family of functions of time) with strange behavior. Where the strange behavior is precisely that a bundle of the functions that come together arbitrarily close to one another at some point in their trajectory spend almost all of the rest of their time distributed over the full allowable range of those functions. The phase space volume occupied by the solutions starts off in a very small volume that then diverges exponentially with time to fill the accessible volume.
This has nothing to do with numerical errors. It is a property of the functions themselves. The actual solutions to the differential equations, started from nearby initial conditions, diverge rapidly. It is neither stochastic nor “error”. Hence the phrase “deterministic chaos”, hence the entire concept of the Lyapunov exponent:
http://en.wikipedia.org/wiki/Lyapunov_exponent
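For anyone who would rather see this divergence than take it on faith, here is a minimal sketch in Python: the standard Lorenz 1963 system with its classic parameters, integrated crudely with forward-Euler steps (accuracy is not the point), started twice from initial conditions that differ by one part in a billion.

import math

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)      # identical except for one part in a billion

for step in range(1, 40001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 8000 == 0:
        print(f"t = {step * 0.001:4.0f}   separation = {math.dist(a, b):.3e}")
# The separation grows by many orders of magnitude while both trajectories stay on
# the same bounded attractor: deterministic equations, no noise, no rounding tricks.

The roughly constant slope of log-separation against time is, in essence, the largest Lyapunov exponent the linked article describes.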
The climate very likely possesses an entire spectrum of Lyapunov exponents, as I’m guessing it runs in a densely multicritical regime, as evidenced by the nucleation and growth of self-organized phenomena like hurricanes and thunderstorms, which in turn make year-to-year differences in climate rapidly spin out at different rates for different parts of different phenomena to fill the “bundle” of possible futures from any trivially perturbed initial state. This is largely what one sees in the climate models directly if one looks at the enormous span of results from a single model with perturbed initial conditions.
That doesn’t even mean that the actual trajectory will be in this diverging envelope. That would likely be the case only if we were actually solving the equations of motion for the system.
The problem in climate science is that we aren’t doing that. We are solving the equations that describe a different system, an “earth-like” system in many ways, at a completely different resolution than the actual dynamics. We routinely replace entire chunks of the internal dynamics with heuristic approximations that completely erase the distribution of actual dynamics and make huge chunks of “North Carolina” into homogeneous climate entities, hoping that the neglected dynamical errors aren’t being amplified by chaos itself until the trajectories cease to have any meaning at all, or have the wrong large-scale quasiparticle structure altogether.
rgb
Reply to Butke ==> If chaos and its attendant phenomena were easy to explain, there would be no need for an entire library section of books on the topic.
Read just the Wiki article on Chaos Theory and see if that doesn’t clear up your concerns.
(I don’t recommend the Wiki article as the best or most complete, but only as the most quickly and easily accessible).
Kip Hansen, 3/15/15 at 3:14 pm:
Reply to Jeff Glassman ==> It has been some time since I heard someone claim that their own sense of logic trumped the Real World.
I made no such claims, which are truly nonsensical deductions. I have no proprietary logic, and no contest exists in these discussions between logic and the real world. The problem is where the definitions apply, and whether you might have some proprietary definitions.
There are so many real world examples of these chaotic behaviors in natural systems that I find your continuing assertions to the contrary difficult to understand.
Prove the existence of just one. To do so, you must first define your terms, specifically linearity/nonlinearity and chaos, and then apply them to the real world without relying on any model of the real world. By apply, I mean use ordinary logic as is found embedded in human language. I am quite open to any proposed definitions you might have which are model-free.
Did you read the linked study Nonlinear Population Dynamics: Models, Experiments and Data by Cushing et al. (1998)? … I can only suggest reading any of the four books listed in the Introduction to Chaos Theory Reading List.
Reading any one of them should manage to bring you around…I hope!
Of course not, and this is no way to participate in a dialog. You seem to have a pattern of throwing out these same roadblocks. To John West at 5:22 pm; to all at 12:14 pm. Do not expect your readers to search through a library of books in the hope of finding something that supports your thesis. If you found anything of value in your bibliography, your duty is to quote it thoroughly, providing a precise reference to the volume and page number. From that point, we, your readers, can verify that you’ve applied the information correctly or show you your error.
Kip Hansen, 3/15/2015 at 3:06 pm:
Mr. Glassman seems to be railing against mathematical models in general and incorrectly believes that the chaos is a product of the math — which it is not.
Not at all. My point is that chaos, like linearity/nonlinearity, is only defined on what you call the math.
You continue in a most promising fashion:
The chaotic behaviors are natural phenomena, only recently being discovered to also exist in the very mathematics of the systems described.
The very mathematics of the systems described are merely man’s attempt to model the systems. Those mathematics are not inherent in real world systems, and the adjective very doesn’t make it so. The chaos you describe exists only in the mathematics. If you think it exists in the real world too, then support your position with reasoning. Don’t just claim it to be so. The same challenge applies: define chaos so that it is model-free, then apply facts from the real world to show it fits your definition.
Kip Hansen, 3/15/2015 at 2:48 pm.
“A dynamical system is a concept in mathematics where a fixed rule describes how a point in a geometrical space depends on time.
In plain English a dynamical system is a real world process, … .”
This is a contradiction. If you think the real world contains mathematics, please tell us about its discovery.
Reply to Jeff Glassman ==> It is not my intention to duplicate the function of the whole library of literature on Chaos Theory — the Wiki article supplies a very complete reading list of references at the bottom, under the heading Scientific Literature.
I can’t help you any more than this — giving an introductory essay on the subject and pointing you to the literature. The learning, the understanding, part is up to you.
I am curious as to when you became the authority on chaos theory. You write an essay, and when people find errors in it or want to discuss its content, you either ignore the comments or say that people should read more about chaos theory. The latter advice is of course a very good one, and much better than reading your essay or your comments if you want to learn something.
I just find it strange how someone seems to believe that they are correct and everyone else is wrong, when it seems that the only thing they have done is read some popular science books about a subject.
Kip Hansen, 3/16/2015 at 7:16 am
Reply to Jeff Glassman ==> It is not my intention to duplicate the function of the whole library of literature on Chaos Theory — the Wiki article supplies a very complete reading list of references at the bottom, under the heading Scientific Literature. [¶] I can’t help you any more than this — giving an introductory essay on the subject and pointing you to the literature. The learning, the understanding, part is up to you.
Supplying a library of literature, or even a bibliography, is strictly your idea of a defense. This is at least your fourth reference to this library, where what is required is your definition of chaos, the one on which you rely for your claim that the real world exhibits chaos. Since you have abandoned the topic (“I can’t help you …”), apparently you are unable to supply the definition. You claim only to be a science enthusiast, but you are writing a scientific paper without following elementary standards for science writing (standards often ignored in today’s professional journals, but standards nonetheless).
Considering that your article above is only Part 1: Linearity, you wouldn’t have been expected to have yet defined chaos. Nonetheless, you did rely on the term, incorrectly and ambiguously, in your very first sentence (already criticized on other grounds, 3/15 at 10:37 am). The ambiguity is whether the therefore is yours or the IPCC’s. Either way, the problem is that you ignored the plain scientific error:
The IPCC has long recognized that the climate system is 1) nonlinear and therefore, 2) chaotic.
All chaotic systems are nonlinear, but not all nonlinear systems are chaotic. If the IPCC said it, you should have criticized it; otherwise, you should not have inserted therefore.
Here’s an example that should be included in a study on climate, but which IPCC ignores. The absorption of CO2 in water is linear in pCO2, but outgassing is nonlinear, being inversely proportional to pCO2. The physics is the same; the model depends on the direction. What is linear or nonlinear can be a simple matter of the modeler’s choice. And being nonlinear does not make the process chaotic.
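The point that nonlinearity by itself does not produce chaos is easy to demonstrate with the textbook logistic map (a sketch with arbitrary parameter values, not anything drawn from the CO2 example above): the same nonlinear rule is perfectly tame for some parameter settings and chaotic for others.

def orbit(r, x0=0.2, skip=500, keep=5):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient, return the tail."""
    x = x0
    for _ in range(skip):
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        tail.append(round(x, 4))
    return tail

print(orbit(2.8))   # settles on a single fixed point: nonlinear, yet completely predictable
print(orbit(3.2))   # a steady two-value cycle: still regular
print(orbit(3.9))   # never settles down: the chaotic regime of the very same equation

Nonlinearity is necessary for chaos, but plainly not sufficient.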
Unless one’s objective is to produce pretty butterfly patterns or fascinating Mandelbrots, in other words, if one is trying to do science, chaos is the failure of the model.
The objective here is not to convert Kip Hansen into a science writer, but to inform and remind the public that IPCC cannot rely on the falsehood that the climate is in chaos as an excuse for the failure of its models, or anything else for that matter.
Jeff Glassman wrote:
“Supplying a library of literature, or even a bibliography, is strictly your idea on defense”
That’s the problem with Kip, and I believe it is a quite common problem. He has read some books and thinks he knows the subject. When people point out errors, or possible errors, in his thinking, he believes that they are wrong and he just points them to the literature that he believes agrees with his view. The problem is that the literature doesn’t agree with his view.
It is of course impossible to have a constructive discussion with someone who thinks like this. Should we read thousands of pages so that we can claim that it is not written in any of the books that, for example, all nonlinear systems are chaotic? If we read, for example, the wiki page and try to argue that it doesn’t agree with Kip’s understanding, we can’t do any more than argue for that, with the result of more links to the same literature (if he doesn’t just ignore you).
The whole idea of “chaos” bothers me, because, looking at its dictionary definition, it is really just indeterminacy.
“1 obsolete : chasm, abyss
2 a often capitalized : a state of things in which chance is supreme; especially : the confused unorganized state of primordial matter before the creation of distinct forms — compare cosmos
b : the inherent unpredictability in the behavior of a complex natural system (as the atmosphere, boiling water, or the beating heart)”
Merriam-Webster
Nothing happens by chance. Every physical action obeys the deterministic laws of nature. If it’s too complicated for us to understand, we regard it as chance only because we don’t understand it. “Chance” is epistemological, not metaphysical, not a feature of anything real. But, I’ll break down and read the Wikipedia article, at least.
Reply to Burke ==> Thank you. Yes, we are talking about scientific “chaos” as in “Chaos Theory” (which, btw, is not really a theory, but an area of study that crosses the lines of many scientific disciplines).
As you read, you’ll find that to be chaotic in this sense, a function or process MUST be entirely deterministic, however, the future outcomes will be unpredictable.
Phil Cartier, 3/15/2015 at 6:30 pm:
I think you and Mr. Glassman are making different arguments, his philosophical and yours physical. In the real world the initial conditions occurred so long ago there is really no way to mentally get from there to the present.
Consider the problem that brings us here: Earth’s climate, which IPCC asserts is chaotic (to excuse the chaotic behavior of its models and to keep the regulations and money flowing). Climate began no earlier than the accretion of Earth (a model). It began no earlier than the accumulation of water that formed the oceans (another model, … ). It began no earlier than the formation of the continents, and the development of the ocean to store thermal energy and CO2 and the currents that distribute both to the air. For today’s concepts, the climate began no sooner than the accumulation of the last of the various gases in the atmosphere. This all seems (a) applicable and (b) physical, not philosophical.
Some point in that time line might be considered an initial condition for climate but for the fact that it is a sequence of processes. If indeed we could stipulate to an initial condition, where is the evidence that the climate then proceeded along an unpredictable trajectory, a condition of chaos?
The problem of chaos in climate is the choice of radiative forcing for its model. George Simpson, Met Office director, predicted to Guy Callendar in 1938 (the Callendar Effect, aka the Greenhouse Effect) that it would be unsuccessful, and IPCC has managed to validate Simpson’s analysis. A promising alternative would be a lumped parameter heat transfer (a colloquial misnomer) model. Bearing in mind that all measurements (hence all facts) have an error, and all predictions consequently inherit uncertainty, the climate can always be modeled, can always be modeled linearly, and can always be modeled non-chaotically. One cannot just model an arbitrary parameter to an arbitrary accuracy.
For example and for starters, the Global Average Surface Temperature varies periodically between 5ºC and 17ºC, with a period of about 100,000 years, and it is currently a degree or two below the maximum averaged over the past 5 periods. Much more can be said, leading to the fact that GAST follows the Sun by a simple transfer function, and humans are not involved. Not much chaos in that for either the real climate or its model.
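For readers unfamiliar with the term, a “lumped parameter” model in this context usually means something like the toy zero-dimensional energy balance sketched below. The values are illustrative assumptions only (round-number albedo, effective emissivity, and mixed-layer heat capacity, not fitted quantities): one equation, nonlinear in temperature through the T^4 term, and entirely non-chaotic.

S = 1361.0          # solar constant, W/m^2
alpha = 0.30        # planetary albedo (assumed round number)
eps = 0.61          # effective emissivity (assumed; stands in for the greenhouse effect)
sigma = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4
C = 2.0e8           # heat capacity of a ~50 m ocean mixed layer, J/m^2/K (rough)

def step(T, dt=86400.0):
    """One daily step of C dT/dt = S(1 - alpha)/4 - eps*sigma*T^4."""
    absorbed = S * (1.0 - alpha) / 4.0
    emitted = eps * sigma * T ** 4
    return T + dt * (absorbed - emitted) / C

T = 255.0                        # start well below equilibrium, in kelvin
for _ in range(50 * 365):        # run for fifty model years of daily steps
    T = step(T)
print(round(T, 1))               # relaxes smoothly toward roughly 288 K and stays there

However crude, it shows that a nonlinear relation can relax smoothly to equilibrium, which is the kind of non-chaotic behaviour being claimed for such models here.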
To validate their worse than worthless, GIGO models, the modelers (not scientists) should start with the initial state of the climate 2.588 Ma, when the continents first assumed their present configuration, ie with the Americas connected by the Isthmus of Panama, or about 1.2 Ma, when eccentricity replaced obliquity as the major Milankovitch cycle (although of course the other cycles still have an effect).
The Atlantic has widened by about 40 miles on average since the end of the Pliocene, ie not enough to have a major effect on oceanic circulation or other climatic phenomena.
At that time atmospheric CO2 was probably about the same as now, thus the onset of Northern Hemisphere glaciation wasn’t caused by CO2; however, the concentration of the gas naturally fell as the general climate and the oceans cooled.
Adding one CO2 molecule per 10,000 dry air molecules over the past ~300 years, since the depths of the LIA, has had negligible climatic effect. Adding two more during perhaps the next 100 years, for a doubling of “pre-industrial” Holocene “normality”, will likewise produce little effect, but whatever that might be, would be beneficial.
The climate is chaotic in that a given forcing will not always give the same climatic result unless it is evaluated against the entire spectrum of items exerting forcing upon the climate and the initial state of the climate (such as land/ocean arrangements, and how far the climate is from the glacial/inter-glacial threshold). In addition, noise will always be present to an extent due to random terrestrial events, or for that matter extraterrestrial events. Sometimes, however, these random events could send the climate in a particular trend if significant enough.
My point of view is that the climate is random and chaotic, but if evaluated comprehensively one can forecast a general trend in the climate.
Along those lines I wrote a paper on how the climate may change, which I will send following this post.
Here is what I have concluded. My explanation as to how the climate may change conforms to the historical climatic data record which has led me to this type of an explanation. It does not try to make the historical climatic record conform to my explanation. It is in two parts.
PART ONE
HOW THE CLIMATE MAY CHANGE
Below are my thoughts about how the climatic system may work. It starts with interesting observations made by Don Easterbrook. I then reply and ask some intriguing questions at the end which I hope might generate some feedback responses. I then conclude with my own thoughts to the questions I pose.
From Don Easterbrook – Aside from the statistical analyses, there are very serious problems with the Milankovitch theory. For example, (1) as John Mercer pointed out decades ago, the synchronicity of glaciations in both hemispheres is ‘a fly in the Milankovitch soup,’ (2) glaciations typically end very abruptly, not slowly, (3) the Dansgaard-Oeschger events are so abrupt that they could not possibly be caused by Milankovitch changes (this is why the YD is so significant), and (4) since the magnitude of the Younger Dryas changes was from full non-glacial to full glacial temperatures for 1000+ years and back to full non-glacial temperatures (20+ degrees in a century), it is clear that something other than Milankovitch cycles can cause full Pleistocene glaciations. Until we more clearly understand abrupt climate changes that are simultaneous in both hemispheres, we will not understand the cause of glaciations and climate changes.
My explanation:
I agree that the data does give rise to the questions/thoughts Don Easterbrook presents above. That data, along with the questions I pose at the end of this article, leads me to believe that a climatic variable force which changes often, and which is superimposed upon the climate trend, has to be at play in the changing climatic scheme of things. The most likely candidate for that climatic variable force is solar variability (because I can think of no other force that can change or reverse trend often enough, and quickly enough, to account for the historical climatic record), together with its primary and secondary effects. I feel these are a significant player in glacial/inter-glacial cycles and in counter climatic trends when taken into consideration with these factors: land/ocean arrangements; mean land elevation; mean magnetic field strength of the earth (magnetic excursions); the mean state of the climate (average global temperature gradient, equator to pole); the initial state of the earth’s climate (how close it is to the interglacial/glacial threshold condition, i.e. the average global temperature); random terrestrial events (a violent volcanic eruption, or a random atmospheric circulation/oceanic pattern that possibly feeds upon itself); extraterrestrial events (a supernova in the vicinity of earth, or a random impact); along with Milankovitch Cycles.
What I think happens is that land/ocean arrangements, mean land elevation, mean magnetic field strength of the earth, the mean state of the climate, the initial state of the climate, and Milankovitch Cycles keep the climate of the earth moving in a general trend toward either cooling or warming on a very loose cyclic or semi-cyclic beat, but that trend gets consistently interrupted by solar variability and its associated primary and secondary effects, and on occasion by random terrestrial or extraterrestrial events, which at times bring about counter trends within the overall trend. At other times, the factors I have mentioned as setting the gradual background for the climate trend (land/ocean arrangements, mean land elevation, mean state of the climate, initial state of the climate, Milankovitch Cycles) drive the climate of the earth gradually into a cooler or warmer trend (unless interrupted by a random terrestrial or extraterrestrial event, in which case the climate would be driven to a different state much more rapidly, even if it was initially far from the glacial/inter-glacial threshold or from whatever general trend it may have been in) UNTIL it is near that inter-glacial/glacial threshold, or climate intersection, at which point any solar variability and its associated secondary effects, no matter how SLIGHT, are enough not only to promote a counter trend in the climate but to cascade the climate into an abrupt climatic change. The background for the abrupt climatic change has been in the making all along, until the threshold glacial/inter-glacial intersection for the climate is reached, which then gives rise to the abrupt climatic changes that occur and possibly feed upon themselves while the climate is around that glacial/inter-glacial threshold, resulting in dramatic, semi-cyclic, constant swings in the climate from glacial to inter-glacial while factors allow such an occurrence to take place.
The climatic background factors (those previously mentioned) drive the climate gradually toward or away from the climate intersection, or threshold, of glacial versus interglacial; when the climate is at the intersection it gets wild and abrupt, while once away from that intersection the climate is more stable. Random terrestrial and extraterrestrial events could sometimes be involved in some of the dramatic swings in the climatic history of the earth (perhaps to the tune of 10%) at any time, while solar variability and its associated secondary effects are superimposed upon the otherwise gradual climatic trend, resulting in counter climatic trends no matter what the initial state of the climate is, although the further the climate is from the glacial/inter-glacial threshold, the less dramatic the overall climatic change should be, all other items being equal.
The climate is chaotic, random, and non-linear, but in addition it is never in the same mean state or initial state, which means a given forcing applied to the climatic system always results in a different climatic outcome, although the semi-cyclic nature of the climate can still be discerned to a degree amongst all the noise and counter trends within the main trend.
QUESTIONS:
Why is it that whenever the climate changes, it does not stray indefinitely from its mean in either a positive or negative direction? Why, or rather what, ALWAYS brings the climate back toward its mean value? Why does the climate never keep going in the same direction once it heads in that direction?
Along those lines, why is it that when the ice sheets expand, the higher albedo / lower temperature / more ice expansion positive feedback cycle does not keep going once it is set into motion? What causes it not only to stop but to reverse?
Vice versa, why is it that the Paleocene-Eocene Thermal Maximum, once set into motion (that being an increase in CO2 / higher temperature positive feedback cycle), did not feed upon itself? Again, it not only stopped but reversed.
My conclusion is that the climate system is always in a general, gradual trend toward a warmer or cooler climate in a semi-cyclic fashion, which at times brings the climate system toward thresholds that make it subject to dramatic change with the slightest change of force superimposed upon, and applied to, the general trend. At other times the climate is subject to randomness brought about by terrestrial/extraterrestrial events, which can set up a rapid counter trend within the general, slow-moving climatic trend.
Despite this, if enough time goes by (much time), the same factors that drive the climate toward a general, gradual warming or cooling trend will prevail, bringing the climate away from the glacial/inter-glacial threshold conditions it had once brought the climate toward, eventually ending abrupt climatic change periods, or reversing over time the dramatic climate changes brought about by randomness.
NOTE 1 – Thermohaline Circulation changes are more likely, in my opinion, when the climate is near the glacial/inter-glacial threshold, probably due to greater sources of fresh water input into the North Atlantic.
There is no mystery to D-O events like the Dryases. They occur with varying amplitudes continuously during both glacial and interglacial periods, based primarily on cycles in solar activity which overlie the effects of the orbital and rotational mechanical Milankovitch cycles on insolation.
The changes these cycles produce are about an order of magnitude greater during glacial (D-O and Heinrich events) than interglacial periods (Bond Cycles). But perhaps most pronounced are those during deglaciations, such as the Dryas events, amplified by the effect of pulses of cold fresh water from melting ice sheets on NH ocean circulation.
It took about 12,000 years for the northern continental ice sheets to melt, from the Last Glacial Maximum, c. 18,000 years ago, to the time when the Laurentide ice sheet totally disappeared, during the Holocene Optimum. The 8.2 Ka cold snap was the last Dryas-like event, which requires ice sheets. But the cycles which cause them operate all the time.
Catherine, we are on the same page essentially.
To a large extent.
That Milankovitch Cycles rule at the scale of tens and hundreds of thousands of years is IMO well established, at some highly statistically significant level. But the causes of multidecadal to centennial to millennial scale fluctuations, perhaps less so.
“6. In nonlinear systems, even infinitesimal changes in input can have unexpectedly large changes in the results – in numeric values, sign and behavior.”
That could lead to notions of runaway irreversible global warming and serious weather weirding.
Reply to Ulric Lyons ==> And, indeed, that is one of the common concerns about climate — whether it is justified or not is another matter.
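Point 6 is easy to illustrate with the same logistic map sketched earlier in the thread (arbitrary seed values, nothing taken from a climate dataset): two starting values agreeing to eight decimal places part company completely within a few dozen iterations.

def iterate(x, r=3.9, n=40):
    """Apply the chaotic logistic map x -> r*x*(1-x) to a starting value n times."""
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

print(iterate(0.20000000))
print(iterate(0.20000001))   # differs from the first seed by one part in twenty million
# After only forty iterations the two outputs bear no resemblance to one another,
# even though the rule applied to each was identical and perfectly deterministic.

Whether the real climate amplifies small differences in this way is, of course, the question the rest of the thread is arguing about.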
My position is that Edward Lorenz has convinced most parties that the natural variability of atmospheric teleconnections and oceanic modes is internal, merely by offering a clever model of it, but never addressing whether it is internal, let alone providing proof. I have never believed a word of it. And I would suggest that until it is recognised exactly how the Sun drives it all, none will be the wiser about weather or climate, past or future.
“It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about Nature.”
– Bohr
After reading all of these comments, it struck me that no one mentioned that the answer is 42, and that the computer that we all live in was designed to figure out the question that was originally asked…