The chaos-theoretic argument that undermines climate-change modelling

Just to be clear ahead of time, chaos in weather is NOT the same as climate disruption listed below – Anthony

Guest submission by Dr. Andy Edmonds

This is not intended to be a scientific paper, but a discussion, for non-specialist readers, of the disruptive light chaos theory can cast on climate change. It focuses on the critical assumptions that global-warming supporters have made that involve chaos, and on their shortcomings. While much of the global-warming case in temperature records and other areas has been chipped away, its proponents can, and do, still point to their computer models as proof of their assertions. This has been hard to fight, as the warmists can choose their own ground and move it as they see fit. This discussion looks at the constraints on those models, and shows from first principles in both chaos theory and the theory of modelling that no reliance can be placed on them.

First of all, what is chaos? I use the term here in its mathematical sense. Just as in recent years scientists have discovered extra states of matter (not just solid, liquid and gas, but also plasma), so science has discovered new states that systems can have.

Systems of forces, equations, photons or financial trading can exist in effectively two states: one that is amenable to mathematics, where the future states of the system can be easily predicted, and another where seemingly random behaviour occurs.

This second state is what we will call chaos. It can happen occasionally in many systems.

For instance, if you are unfortunate enough to suffer a heart attack, the normally predictable firing of heart muscles goes into a chaotic state where the muscles fire seemingly randomly, from which only a shock will bring them back. If you've ever braked hard on a motorbike on an icy road you may have experienced a "tank slapper", a chaotic oscillation of the handlebars that almost always results in you falling off. There are circumstances at sea where wave patterns behave chaotically, resulting in unexplained huge waves.

Chaos theory is the study of chaos, and a variety of analytical methods, measures and insights have been gathered together over the past 30 years.

Generally, chaos is an unusual occurrence, and where engineers have the tools they will attempt to “design it out”, i.e. to make it impossible.

There are, however, systems where chaos is not rare but the norm. One of these, you will have guessed, is the weather, but there are others, such as the financial markets and, surprisingly, nature. Investigations of predator and prey populations show that these often behave chaotically over time. The author has been involved in work showing that even single-celled organisms can display population chaos at high densities.

So, what does it mean to say that a system can behave seemingly randomly? Surely if a system starts to behave randomly the laws of cause and effect are broken?

A little over a hundred years ago scientists were confident that everything in the world would be amenable to analysis, and that everything would therefore be predictable, given the tools and enough time. This cosy certainty was destroyed first by Heisenberg's uncertainty principle, then by the work of Kurt Gödel, and finally by the work of Edward Lorenz, who first discovered chaos in, of course, weather simulations!

Chaotic systems are not entirely unpredictable, as something truly random would be. Their predictability diminishes as they move forward in time, because the computational requirements for calculating the next set of predictions grow greater and greater. Computing requirements for predicting chaotic systems grow exponentially, so in practice, with finite resources, prediction accuracy drops off rapidly the further into the future you try to predict. Chaos doesn't murder cause and effect; it just wounds it!

Now would be a good place for an example. Everyone owns a spreadsheet program, and the following is very easy to try for yourself.

The simplest man-made equation known to produce chaos is called the logistic map.

Its simplest form is: Xn+1 = 4Xn(1 − Xn)

Meaning that the next value of the sequence is equal to 4 times the previous value times (1 minus the previous value). If we open a spreadsheet we can create two columns of values:

Columns A and B are created by writing =A1*4*(1-A1) into cell A2 and copying it down for as many cells as you like, and likewise =B1*4*(1-B1) into B2. A1 and B1 contain the initial conditions: A1 contains just 0.3, and B1 a very slightly different number, 0.30000001.

The graph to the right shows the two copies of the series. Initially they are perfectly in sync; they start to diverge at around step 22, and by step 28 they are behaving entirely differently.

This effect occurs for a wide range of initial conditions. It is fun to get out your spreadsheet program and experiment. The bigger the difference between the initial conditions, the faster the sequences diverge.
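If you'd rather not use a spreadsheet, the same experiment runs in a few lines of Python (a sketch, not part of the original article; the initial values 0.3 and 0.30000001 are the ones used above):

```python
# Logistic map: x_{n+1} = 4 * x_n * (1 - x_n)
def logistic_series(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_series(0.3, 40)         # column A
b = logistic_series(0.30000001, 40)  # column B

for n in (0, 10, 20, 25, 30, 35):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}"
          f"  (difference {abs(a[n] - b[n]):.1e})")
```

The printed differences stay tiny for the first twenty or so steps and then blow up to order one, exactly as in the spreadsheet.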

The difference between the initial conditions is minute, but the two series diverge for all that. This illustrates one of the key properties of chaos: acute sensitivity to initial conditions.

If we look at this the other way round, suppose you only had the series and, to make it easy, that you knew the form of the equation but not the initial conditions. If you try to make predictions from your model, any minute inaccuracy in your guess of the initial conditions will make your prediction and the real outcome diverge dramatically. This divergence grows exponentially, and one way of measuring it is the Lyapunov exponent. This measures, in bits per time step, how rapidly the values diverge, averaged over a large set of samples. A positive Lyapunov exponent is considered to be proof of chaos. It also gives us a bound on the quality of the predictions we can get if we try to model a chaotic system.
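For the logistic map the Lyapunov exponent is known analytically to be ln 2 per step, i.e. exactly one bit per step, so it makes a good sanity check. Here is a rough numerical estimate (an illustrative sketch: it averages the log of the local stretching factor, which for x -> 4x(1-x) is |4 - 8x|):

```python
import math

def logistic_lyapunov(x0=0.3, steps=100000):
    """Estimate the Lyapunov exponent of x -> 4x(1-x) in bits per step."""
    x, total = x0, 0.0
    for _ in range(steps):
        total += math.log2(abs(4 - 8 * x))  # local stretching, in bits
        x = 4 * x * (1 - x)
    return total / steps

print(f"estimated exponent: {logistic_lyapunov():.3f} bits/step")
```

The estimate comes out close to 1 bit per step, matching the known value for this map.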

These basic characteristics apply to all chaotic systems.

Here's something else to stimulate thought. The values from our simple chaos generator in the spreadsheet vary between 0 and 1. If we subtract 0.5 from each, so we have positive- and negative-going values, and accumulate them, we get this graph, stretched now to a thousand points.

If, ignoring the scale, I told you this was last year's share price for some FTSE or NASDAQ stock, or a yearly sea temperature, you'd probably believe me. The point I'm trying to make is that chaos is entirely capable of driving a system by itself, creating behaviour that looks as if it's driven by some external force. When a system drifts as in this example, it might be because of an external force, or just because of chaos.
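This "drift with no driver" is easy to reproduce: centre the logistic-map output on zero and accumulate it (a sketch; any chaotic series would do):

```python
# Accumulate the centred logistic-map series: pure chaos, no external forcing
x, total, path = 0.3, 0.0, []
for _ in range(1000):
    x = 4 * x * (1 - x)
    total += x - 0.5          # centre on zero, then accumulate
    path.append(total)

# The running sum wanders up and down like a share price or a
# temperature record, even though nothing external is driving it.
print(f"range of the walk: {min(path):.2f} to {max(path):.2f}")
```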

So, how about the weather?

Edward Lorenz (1917–2008) was the father of the study of chaos, and also a weather researcher. He created an early weather simulation using three coupled equations and was amazed to find that, as he advanced the simulation in time, its values behaved unpredictably.

He then looked for evidence that real world weather behaved in this same unpredictable fashion, and found it, before working on discovering more about the nature of Chaos.

No climate researchers dispute his analysis that the weather is chaotic.

Edward Lorenz estimated that the global weather exhibits a Lyapunov exponent equivalent to losing one bit of information every 4 days. This is an average over time and over the world's surface; there are times and places where the weather is much more chaotic, as anyone who lives in England can testify. What it means, though, is that if you can predict tomorrow's weather with an accuracy of 1 degree C, then your best prediction for 5 days hence will on average be +/- 2 degrees, for 9 days hence +/- 4 degrees and for 13 days hence +/- 8 degrees, so to all intents and purposes after 9-10 days your predictions will be useless. Of course, if you can predict tomorrow's weather to +/- 0.1 degree the growth in errors is slowed, but since they grow exponentially it won't be many days until they become useless again.
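The arithmetic above can be checked directly: losing one bit every 4 days means the uncertainty doubles every 4 days (a sketch using Lorenz's figure):

```python
bits_per_day = 1 / 4  # Lorenz's estimate: one bit of predictability lost every 4 days
initial_error = 1.0   # +/- 1 degree C for tomorrow's forecast

for day in (1, 5, 9, 13, 17):
    error = initial_error * 2 ** ((day - 1) * bits_per_day)
    print(f"day {day:2d}: +/- {error:g} deg C")
```

Starting ten times more accurately (+/- 0.1 degree) only delays the same blow-up by about 13 days, since 2^(13/4) is roughly 10.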

Interestingly, the performance of weather predictions made by organisations like the UK Met Office drops off in exactly this fashion. This is proof of a positive Lyapunov exponent, and thus of the existence of chaos in weather, if any were still needed.

So that’s weather prediction, how about long term modelling?

Let's look first at the scientific method. The principal idea is that science develops by someone forming a hypothesis, testing that hypothesis by constructing an experiment, and then proving, disproving or modifying the hypothesis by examining the results of the experiment.

A model, whether an equation or a computer model, is just a big hypothesis. Where you can't experimentally modify the thing you are hypothesising about, you have to make predictions using your model and wait for the system to confirm or deny them.

A classic example is the development of our knowledge of the solar system. The first models had the Earth at the centre, then the sun at the centre, then came the discovery of elliptical orbits, and then enough observations to work out the exact nature of those orbits. Obviously, we could never hope to affect the movement of the planets, so experiments weren't possible, but if our models were right, key things would happen at key times: eclipses, the transit of Venus, and so on. Once models were sophisticated enough, discrepancies between model and reality could be used to predict new features. This is how Neptune was discovered, and how the search that eventually found Pluto began. If you want to know where the planets will be in ten years' time, to the second, there is software available online that will tell you exactly.

Climate scientists would love to be able to follow this way of working. The one problem is that, because the weather is chaotic, there is never any hope that they can match up their models and the real world.

They can never match up the model even to shorter-term events, say six months away, because, as we've seen, the weather six months away is completely and utterly unpredictable except in very general terms.

This has terrible implications for their ability to model.

I want to throw another concept into this mix, drawn from my other speciality, the world of computer modelling through self-learning systems.

This is the field of artificial intelligence, in which scientists attempt to create, mostly in computer programs, systems that behave intelligently and are capable of learning. Like any area of study, this tends to throw up bits of general theory, and one of these concerns the nature of incremental learning.

Incremental learning is where a learning process tries to model something by starting out simple and adding complexity, testing the quality of the model as it goes.

Examples of this are neural networks, where the strengths of the connections between simulated brain cells are adapted as learning goes on, and genetic programming, where bits of computer programs are modified and elaborated to improve the fit of the model.

From my example above of theories of the solar system, you can see that the scientific method itself is a form of incremental learning.

There is a graph that is universal in incremental learning. It shows the performance of an incremental learning algorithm (it doesn't matter which) on two sets of data.

The idea is that the two sets of data are drawn from the same source but split randomly in two: a training set, used to train the model, and a test set, used to test it every now and then. Usually the training set is bigger than the test set, but if there is plenty of data this doesn't matter much. As learning progresses, the learning system uses the training data to modify itself, but not the test data, which is used to test the system and then immediately forgotten.

As can be seen, performance on the training set gets better and better as more complexity is added to the model, but performance on the test set gets better and then starts to get worse!

Just to make this clear: the test set is the only thing that matters. If we are to use the model to make predictions, we will be presenting it with new data, just like the test-set data. Performance on the training set is irrelevant.
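The training/test curve described above is easy to demonstrate with a toy incremental learner (a sketch using only the standard library; the "model" is simply the mean of the training points in each of an increasing number of bins, so the bin count plays the role of model complexity):

```python
import random

random.seed(1)

# Noisy samples of a smooth underlying function on [0, 1]
xs = [random.random() for _ in range(400)]
data = [(x, 4 * x * (1 - x) + random.gauss(0, 0.3)) for x in xs]
train, test = data[:300], data[300:]

def fit_and_score(bins):
    """Fit per-bin means on the training set; return (train MSE, test MSE)."""
    sums, counts = [0.0] * bins, [0] * bins
    for x, y in train:
        i = min(int(x * bins), bins - 1)
        sums[i] += y
        counts[i] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]

    def mse(points):
        return sum((y - means[min(int(x * bins), bins - 1)]) ** 2
                   for x, y in points) / len(points)

    return mse(train), mse(test)

for bins in (1, 2, 5, 10, 50, 150):
    tr, te = fit_and_score(bins)
    print(f"{bins:3d} bins: train MSE {tr:.3f}  test MSE {te:.3f}")
```

Training error falls steadily as bins are added, while test error falls and then climbs again once the model starts memorising the noise in the training set.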

This is an example of a principle that has been talked about since William of Ockham first wrote "Entia non sunt multiplicanda praeter necessitatem", known as Ockham's razor and translatable as "entities should not be multiplied beyond necessity", entities being, in his case, embellishments to a theory. The corollary is that the simplest theory that fits the facts is the most likely to be correct.

There are proofs of the generality of this idea from Bayesian statistics and information theory.

So this means that our intrepid weather modellers are in trouble from both ends: if their theories are insufficiently complex to explain the weather, their models will be worthless; if too complex, they will also be worthless. Who'd be a weather modeller?

Given that they can’t calibrate their models to the real world, how do weather modellers develop and evaluate their models?

As you would expect, weather models behave chaotically too. They exhibit the same sensitivity to initial conditions. The solution chosen for evaluation (developed by Lorenz) is to run thousands of examples each with slightly different initial conditions. These sets are called ensembles.

Each example explores a possible path for the weather, and by collecting the set, they generate a distribution of possible outcomes. For weather predictions they give you the biggest peak as their prediction. Interestingly, with this kind of model evaluation there is likely to be more than one answer, i.e. more than one peak, but they choose never to tell us the other possibilities. In statistics this methodology is called the Monte Carlo method.
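The ensemble idea can be sketched with the logistic map standing in for a weather model (an illustration only, not a real climate model): run many copies from slightly perturbed initial conditions and report the distribution of outcomes rather than a single number.

```python
import random

random.seed(0)

def run_model(x0, steps=30):
    """One 'forecast': iterate the logistic map from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

# Ensemble: 2000 runs, initial conditions perturbed by up to one part in a million
ensemble = sorted(run_model(0.3 + random.uniform(-1e-6, 1e-6))
                  for _ in range(2000))

print(f"median outcome:      {ensemble[1000]:.3f}")
print(f"5th-95th percentile: {ensemble[100]:.3f} to {ensemble[1900]:.3f}")
```

After 30 steps the tiny initial spread has grown to cover almost the whole range of possible outcomes; a single run is meaningless, but the distribution is a well-defined object, which is exactly why ensembles are used.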

For climate change they modify the model so as to simulate more CO2, more solar radiation or some other parameter of interest and then run another ensemble. Once again the results will be a series of distributions over time, not a single value, though the information that the modellers give us seems to leave out alternate solutions in favour of the peak value.

Models are generated by observing the earth, modelling land masses and air currents, tree cover, ice cover and so on. It’s a great intellectual achievement, but it’s still full of assumptions. As you’d expect the modellers are always looking to refine the model and add new pet features. In practice there is only one real model, as any changes in one are rapidly incorporated into the others.

The key areas of debate are the interactions of one feature with another. For instance, the hypothesis that increased CO2 will result in runaway temperature rises is based on the idea that the melting of the Siberian permafrost due to increased temperatures will release more CO2, and thus positive feedback will bake us all. The permafrost may well melt, or not, but the rate of melting and the CO2 released are not hard scientific facts but estimates. There are thousands of similar "best guesses" in the models.

As we've seen from looking at incremental learning systems, too much complexity is as fatal as too little. No one has any idea where the current models lie on the graph above, because the models cannot be tested directly.

However, dwarfing all this arguing about parameters is the fact that weather is chaotic.

We know, of course, that chaos is not the whole story. It's warmer on average outside the equatorial regions during the summer than the winter; monsoons and the freezing of ice occur regularly every year. So it's tempting to see chaos as a bit like noise in other systems.

The argument used by climate change believers runs that we can treat chaos like noise, so chaos can be “averaged out”.

To digress a little, this idea of averaging out errors/noise has a long history. Take the measurement of the height of Mount Everest before the days of GPS and radar satellites. The method was to start at sea level with a theodolite and estimate the heights of local landmarks from their distance and their angle above the horizon, then to move on to those sites and do the same with further landmarks, moving slowly inland. By the time the surveyors reached the foothills of the Himalayas they were relying on many thousands of previous measurements, each with its own measurement error. In the event, the surveyors' estimate of the height of Everest was only a few hundred feet out!

This is because all those measurement errors tended to average out. If, however, there had been a systematic error, such as all the theodolites reading 5 degrees high, the resulting error would have been enormous. The key thing is that the errors were unrelated to the thing being measured.

There are lots of other examples of this in Electronics, Radio Astronomy and other fields.

You can understand why climate modellers would hope the same to be true of chaos. In fact, they claim it is. Note, however, that the theodolite errors had nothing to do with the actual height of Everest, just as noise in radio-telescope amplifiers has nothing to do with the signals from distant stars. Chaos, by contrast, is intrinsic to the weather, so there is no reason why it should average out. It's not part of the measurement; it's part of the system being measured.

So, can chaos be averaged out? If it can, then we would expect long-term averaged weather measurements to exhibit no chaos. When a team of Italian researchers asked last year to use my chaos-analysis software to look at a time series of 500 years of averaged southern Italian winter temperatures, the opportunity arose to test this. The picture below shows this time series displayed in my chaos-analysis program, ChaosKit.

The result? Buckets of chaos. The Lyapunov exponent was measured at 2.28 bits per year.

To put that in English: the predictability of the temperature roughly quarters for every further year ahead you try to predict, or, put the other way round, the errors more than quadruple each year.
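The arithmetic behind that statement: a Lyapunov exponent of 2.28 bits per year corresponds to an error growth factor of 2^2.28, which is about 4.9 per year, hence "more than quadruple".

```python
bits_per_year = 2.28          # measured from the 500-year series
growth = 2 ** bits_per_year   # error growth factor per year of lead time

print(f"growth factor per year: {growth:.2f}")
for years in (1, 2, 3, 5):
    print(f"errors after {years} year(s): x {growth ** years:,.0f}")
```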

What does this mean? Chaos doesn’t average out. Weather is still chaotic at this scale over hundreds of years.

If we were, as climate modellers do, to run a moving average over the data to hide the inconvenient spikes, we might find a slight bump at the right, as well as many bumps to the left. Would we be justified in saying that this bump at the right was proof of global warming? Absolutely not: it would be impossible to say whether the bump was the result of chaos, and the drifts we've seen it can create, or of some fundamental change, like increasing CO2.

So, to summarize, climate researchers have constructed models based on their understanding of the climate, current theories and a series of assumptions. They cannot test their models over the short term, as they acknowledge, because of the chaotic nature of the weather.

They hoped, though, to be able to calibrate, confirm or fix up their models by looking at very long-term data, but we now know that's chaotic too. They don't, and cannot, know whether their models are too simple, too complex or just right, because even if the models were perfect, if weather is chaotic at this scale they cannot hope to match them up to the real world: the slightest errors in initial conditions would create entirely different outcomes.

All they can honestly say is this: “we’ve created models that we’ve done our best to match up to the real world, but we cannot prove to be correct. We appreciate that small errors in our models would create dramatically different predictions, and we cannot say if we have errors or not. In our models the relationships that we have publicized seem to hold.”

It is my view that governmental policymakers should not act on the basis of these models. The likelihood seems to be that they bear as much similarity to the real world as The Sims, or Half-Life.

On a final note, there is another school of weather prediction that holds that long term weather is largely determined by variations in solar output. Nothing here either confirms or denies that hypothesis, as long term sunspot records have shown that solar activity is chaotic too.

Andy Edmonds

Short Bio

Dr Andrew Edmonds is an author of computer software and an academic. He designed various early artificial-intelligence software packages and was arguably the author of the first commercial data-mining system. He has been the CEO of an American public company and involved in several successful start-up businesses. His PhD thesis concerned time-series prediction of chaotic series and resulted in his product ChaosKit, the only standalone commercial product for analysing chaos in time series. He has published papers on neural networks, genetic programming of fuzzy-logic systems and AI for financial trading, and has contributed to papers in biotech, marketing and climate.


On his Website: http://scientio.blogspot.com/2011/06/chaos-theoretic-argument-that.html


212 Comments
Latitude
June 13, 2011 8:00 am

chaos…….
When the computer says one thing, reality does another….
…and you still have to make your house payment

DirkH
June 13, 2011 8:19 am

Very clearly put, thanks. The test set performance retardation of models with increasing complexity and increasing performance in the training set is an overspecialization; the models have learned to exploit every wiggle of the data in the training set but “unlearned” more general rules that would have helped them survive with a different data set.

Patrick Keane
June 13, 2011 8:26 am

Hi,
Very, very interesting. Dr Edmond’s explanation is simple and elegant, and for an old duffer like me, very enlightening.
I am not qualified to comment on the mathematical details, my engineering maths courses back in the ’60’s did not stretch to Chaos theory but my gut feeling is that it makes sense.
Perhaps this paper should be required reading for all the alarmist journalists, especially that English lit. graduate, Richard Black who purports to write scientific articles for the BBC!
Keep up the good work!
Patrick

Kev-in-Uk
June 13, 2011 8:27 am

Excellent article!

DonS
June 13, 2011 8:30 am

Epiphany at last. Now I can find other interests.

June 13, 2011 8:30 am

I’ll throw this out again to ponder…
You have to go back to the 1930’s to find as many volcanoes honking as we’ve had since 2008.
http://www.volcano.si.edu/world/find_eruptions.cfm
What was going on back then? Chaos! The great depression and…
http://en.wikipedia.org/wiki/Dust_Bowl
Cold waves, heat waves, droughts and floods. Earthquakes and volcanic eruptions. This is what is going on now; it's trying to déjà vu the 1930's.

John Marshall
June 13, 2011 8:31 am

Good post.
Another problem with climate models is that they rely on a theory, GHG effect, that does not exist. There are other explanations for Earth’s average surface temperature being ‘too high’ without need of a theory that violates the laws of thermodynamics.

Mingy
June 13, 2011 8:38 am

I took a graduate-level course in computer modeling in the late 1980s. The professor was a renowned mathematician with a field of study named after him. Most of the course was actually about how you can't really model large complex systems, and a large part of that had to do with a mathematical understanding of chaos and how chaotic systems can appear appealingly predictable for a time but are not. So the course was basically about how you can develop models but why you can't trust them. The summary is: models tell you nothing about nature. That course was completed in 1987, so presumably they don't hold to old-fashioned ideas like that anymore – even if they are grounded in theoretical mathematics.
This is as good a summary of chaos and the influence of chaos on times series computer models as I have seen.

June 13, 2011 8:40 am

A major problem
with scientific method
is there’s more than one.
Wild hypotheses
and unsubstantiated
conjectures abound,
for the “scientists”
of the climate-change bubble
just want to have fun
and invent figures;
but bubbles will always burst,
or so I have found.

June 13, 2011 8:43 am

That last chart looks remarkably like a tree-ring series IMHO (without the “blade” of course!). Many thanks to Dr. Edwards for this “chaos theory for dummies”. He mentions modellers averaging output from different runs, which is dubious enough, but we know they also average output from different models to get a kind of “consensus”. If models produce differing results, then no more than one can be accurate. If we assume that one of them is correct, but we don’t know which one, as they’re making predictions about the future, then averaging their output is meaningless. The averaged output would only be something close to reality if the “accurate” model was somewhere in the middle of the range.
Add to this the fact that models don’t include factors which are poorly understood, or indeed unknown (there are many), and “fudge factors” for some others which are poorly quantified, predictions made for time-scales of decades or longer must be highly suspect, if not worthless (as many here have long suspected).

PM
June 13, 2011 8:43 am

This is one of the best articles this site has seen. If the weather is inherently chaotic at long timescales and can't be averaged out, climate modelling is much more demanding and inaccurate than generally believed. However, there must be some physical stabilising mechanisms (since the earth's temperature has varied only a little on geological timescales) that are not chaotic.
In my opinion the physical constraints set the envelope, or conditions, within which the weather can random-walk. Mainstream climate science has just assumed that a minor change in CO2 undoubtedly creates a definite rise in temperature. It may just be that the warming effect is drowned by the random walk of the climate. Anyway, the climate models rely on a mechanistic view of the climate where each action has a certain reaction; based on this article, that seems to be an outdated approach.

Dr. Lurtz
June 13, 2011 8:47 am

Chaos is everywhere except the Sun’s output: it is a constant and can’t be a climate driver!!
Blind, and deaf climate modelers… The Sun’s output varies as per the Dalton minimum in 1810 and the Maunder minimum in 1650 and the 11 year cycle from minimum to maximum. But not today!!
Of course, the effects of money, power, politics, and, women, do tend to affect scientific results.

Venter
June 13, 2011 8:50 am

The biggest problem with the climate models is the modellers themselves. They have a biblical faith in the models and treat them as inviolate. They simply can't accept the fact that they are trying to model a chaotic system and that it is a loser's game. Instead of humility in accepting reality you see only arrogance from these modellers. They demand wholesale changes to the world based on these models, which have not proved effective at predicting anything to date.

Frank K.
June 13, 2011 8:50 am

Excellent article! I hope it stimulates some good discussions.
I also hope it helps put to rest the nonsense that “climate prediction is a boundary value problem” … the GCMs don’t even solve it that way!

John B
June 13, 2011 8:53 am

So, the GHG effect violates the laws of thermodynamics? Pray tell how…

June 13, 2011 9:14 am

Evidently, Dr. Edwards is unaware of the three-decade-old work of Christensen, Eilbert, Lingren and Rans that made it possible to predict surface temperature and precipitation related variables in the western states of the U.S. as much as 3 years in advance. I provide an introduction to the methodology that made this advance possible and compare it to the methodology of modern climatology at http://judithcurry.com/2011/02/15/the-principles-of-reasoning-part-iii-logic-and-climatology/ . In brief, the idea was to construct an information-theoretically optimal decoder of a "message" from the future that conveyed the outcomes of weather-related events.
While chaos precludes the success of an approach to long-range weather forecasting based entirely upon the principles of modern physics, it does not preclude long-range weather forecasting. The same might be found to be true of long-range climate forecasting if construction of a climate-forecasting decoder were to be attempted.

Dr T G Watkins
June 13, 2011 9:25 am

Thanks for an excellent article, Andy.
As others have said, it is a shame politicians and so called science journalists do not read the science on this site.

Strick
June 13, 2011 9:41 am

Exactly the kind of discussion I’ve been looking for for the last couple of years. With exactly the common sense results I expected from a lay reading of the field (but with more authority, of course).
Thank you.

Shevva
June 13, 2011 9:41 am

It amazes me that climate scientists can teach so many fields of science, mathematics and software engineering how not to do it.
An excellent article, top marks to Dr Andy.

KR
June 13, 2011 9:45 am

Weather is inherently chaotic, but climate (long term averages) is not. The IPCC is quite aware of that, see http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-1-2.html for a discussion. Climate is driven, controlled, and limited by energy, and a departure in one direction from balance (up or down) drives the climate back to that balance point.
Your continuing chaos with time is simply the same kind of data repeated, with no averaging – the Lyapunov exponent of a chaotic series shouldn't change – it's going to vary around the climate mean. You have to look at the data averaged over a sufficient period to see past the effects of fluid turbulence and non-periodic variations such as the ENSO.
“Once again the results will be a series of distributions over time, not a single value, though the information that the modellers give us seems to leave out alternate solutions in favour of the peak value.”
Really? That’s an amazingly incorrect statement, Mr. Edmonds. See http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-5-4-6.html, also http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-5-3.html, http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-5-4-5.html for examples.
The claim is made that weather is chaotic – yes, it is. But the averages are not. Arguing that we cannot predict the climate is simply a plea to do nothing, to continue with business as usual. You’ve spent a lot of effort on a very deceptive posting.

KR
June 13, 2011 9:46 am

My apologies, I should have referred to “Dr. Edmonds” in my last post, not “Mr.” – no insult intended, just bad typing.

RickA
June 13, 2011 9:57 am

Dr. Andy Edwards:
Thank you – very interesting.
I am wondering about the difference though between weather and climate.
Clearly, I cannot predict the temperature on a particular day in the winter 6 months hence, as you pointed out.
However, I do know (I think to a very high confidence) that it is going to be cooler on average in the winter than in the summer.
So does predicting the climate (of a season, for example) differ, with respect to chaos, from predicting the weather?
Climate models aren’t trying to predict the weather years in the future, but the climate, and I was wondering if you think that makes any difference?

Ian W
June 13, 2011 10:00 am


PM says:
June 13, 2011 at 8:43 am
In my opinion the physical constraints set the envelope or conditions within which the weather can random-walk. Mainstream climate science has simply assumed that a minor change in CO2 undoubtedly creates a definite rise in temperature. It may be that the warming effect is drowned out by the random walk of the climate. In any case, the climate models rely on a mechanistic view of the climate where each action has a certain reaction; based on this article, that seems to be an outdated approach.

It’s not so much a random walk as a Lévy flight, since some of the random steps are far larger than others. ‘Weather’ is actually a term for a complex multiplicity of interacting chaotic systems, each in Lévy flight. The bounding conditions are those of the current interglacial ‘attractor’, and the bounds are defined by the current limits of the variations of the interacting systems. If some of these chaotic systems happen to act at the right time or periodicity with the right values, they can move the entire system to a different attractor and we can rapidly enter another ice age.
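The distinction drawn here between a random walk and a Lévy flight can be sketched numerically (an illustrative Python snippet, not from the comment itself; the Pareto distribution stands in for a heavy-tailed step law, and the tail index and sample size are arbitrary):

```python
import random

random.seed(42)

def gaussian_walk(n):
    # Ordinary random walk: i.i.d. Gaussian steps, all of similar magnitude.
    pos, path = 0.0, []
    for _ in range(n):
        pos += random.gauss(0.0, 1.0)
        path.append(pos)
    return path

def levy_flight(n, alpha=1.5):
    # Lévy flight: heavy-tailed (Pareto) step lengths with random sign,
    # so most steps are small but occasional steps are enormous.
    pos, path = 0.0, []
    for _ in range(n):
        step = random.paretovariate(alpha)  # tail ~ x^(-alpha - 1)
        pos += step if random.random() < 0.5 else -step
        path.append(pos)
    return path

n = 10_000
g = gaussian_walk(n)
l = levy_flight(n)
g_max_step = max(abs(b - a) for a, b in zip([0.0] + g, g))
l_max_step = max(abs(b - a) for a, b in zip([0.0] + l, l))
print(g_max_step, l_max_step)
```

Out of 10,000 steps, the largest Gaussian step stays around a few standard deviations, while the largest Lévy step is typically orders of magnitude bigger, which is the “some motions are far more than others” behaviour being described.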

Theo Goodwin
June 13, 2011 10:07 am

I am so sorry to rain on such a happy parade, but the author advertises his cluelessness about scientific method, not to mention Kepler’s work, as follows:
“A model, whether an equation or a computer model, is just a big hypothesis. Where you can’t modify the thing you are hypothesising over with an experiment, then you have to make predictions using your model and wait for the system to confirm or deny them.”
Hypotheses are used for prediction and explanation. If they do not predict phenomenon X then they cannot explain phenomenon X. If they do not explain phenomenon X then they cannot predict phenomenon X. We will see this in Kepler’s Laws below.
“A classic example is the development of our knowledge of the solar system. The first models had us at the centre, then the sun at the centre, then the discovery of elliptical orbits, and then enough observations to work out the exact nature of these orbits.”
Isn’t there a bit of hand-waving about this “discovery of elliptical orbits?” Did Kepler stumble over them on his way to the refrigerator? No, Kepler’s genius created his first hypothesis, that all planetary orbits are elliptical with the sun at one focus, just as Zeus gave birth to Athena from his forehead. Then Kepler applied his hypothesis to the data that had been collected mostly by Tycho Brahe and found that all the data fit the elliptical orbits.
Let’s review the steps. There is some data that needs organization. Existing hypotheses do not cover them. Kepler freely creates a hypothesis, now known as Kepler’s First Law, that exactly covers an important regularity in the data. However, there is more than prediction. The elliptical path will serve in powerful explanations of how the planets actually move and why they are observed to do so. Kepler’s three Laws will explain the observed retrograde motion of Mars, something that had been a puzzle for thousands of years. That comes below.
“Obviously, we could never hope to affect the movement of the planets, so experiments weren’t possible, but if our models were right, key things would happen at key times: eclipses, the transit of Venus, etc.”
No. We can conduct experiments. It’s just that they are passive, not active. Once Galileo has a telescope, he will conduct experiments by predicting the behavior of planetary phenomena and then testing the observations with his telescope.
“Once models were sophisticated enough, errors between the model and reality could be used to predict new features. This is how the outer planets, Neptune and Pluto were discovered. If you want to know where the planets will be in ten years’ time to the second, there is software available online that will tell you exactly.”
There was no need for more sophistication in Kepler’s hypotheses, though there was a need for Newton’s calculus. After inventing the calculus, Newton was able to deduce Kepler’s Laws from his Law of Universal Gravitation.
Our author drops the topic here.
Kepler’s Second Law is that a planet’s speed varies in direct proportion to the area swept by a line from the sun to the planet as the planet travels in its orbit. His Third Law specifies the size of orbits and is a predecessor of Newton’s Law of Gravitation. Given these three laws, Kepler was able to predict the observed path of Mars, the one true anomaly in the astronomy of that time, and to explain it. He explained that Mars is observed to stop, back up, and move forward again over a period of months because Mars and Earth are on intersecting elliptical paths and are changing speeds. In other words, he could explain to his fellows that Mars’ observed path is analogous to the observed path of a sailboat that temporarily overtakes our own, then seems to slow because our sailboat enjoys a burst of speed, and then enjoys its own burst of speed and moves past us.
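Kepler’s Second Law, that the line from the sun to a planet sweeps out equal areas in equal times, lends itself to a direct numerical check. The sketch below (illustrative only; it assumes Newtonian gravity in normalized units with GM = 1 and arbitrary elliptical initial conditions) integrates a two-body orbit and confirms that the areal velocity ½|r × v| stays constant along it:

```python
# Two-body problem in units where GM = 1; tangential speed below the
# circular value (1.0) at r = 1 gives an elliptical orbit.
GM = 1.0
x, y = 1.0, 0.0
vx, vy = 0.0, 0.8
dt = 1e-4

def accel(x, y):
    # Inverse-square gravitational acceleration toward the origin.
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

areal = []  # samples of the areal velocity 0.5 * (x*vy - y*vx)
ax, ay = accel(x, y)
for step in range(200_000):  # roughly five full orbits at this dt
    # Velocity Verlet (leapfrog): symplectic, so orbital invariants are
    # preserved to high accuracy over long integrations.
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    if step % 10_000 == 0:
        areal.append(0.5 * (x * vy - y * vx))

spread = max(areal) - min(areal)
print(areal[0], spread)  # areal velocity is constant to numerical precision
```

The constancy of the areal velocity is just conservation of angular momentum in disguise, which is why the Second Law fell out so naturally once Newton deduced Kepler’s Laws from universal gravitation.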
The big point to notice is that Kepler’s hypotheses are used both for prediction and explanation. In all of science, hypotheses are useful for both explanation and prediction. The two always go together. In addition, each hypothesis is “fleshed out” in that it contributes content to explanations of planetary motions and predictions about those motions. In other words, each hypothesis has its own integrity. Of course, it follows from the preceding that each hypothesis is testable and falsifiable on its own.
By contrast to scientific hypotheses and sets of them, a computer model is analogous to a system of deduction. As anyone who has studied a system of deduction knows, such systems are not unique. I can create a deductive system that implies all and only the true statements that I specify. For example, Hans Reichenbach wrote a book called “Axiomatization of the Theory of Relativity” in which he identified a set of most basic principles and deduced all of the other consequences of Einstein’s theory from them. Such deductive systems are not unique. Reichenbach, or anyone, could have chosen an alternative set of deductive principles for his axiomatization. It would have yielded all and only the same truths, but the proofs of those truths would have differed. Difference in proof does not matter. If the two systems yield all and only the same truths then they are identical. The same reasoning applies to computer models.
There are no hypotheses in computer models. You cannot remove one proof, one set of proofs, or even a heuristic for making equations solve and present it as a hypothesis. It has no more content than a set of deductive inferences in formal logic. For that reason, computer models cannot answer the scientific question Why? Hypotheses can answer scientific questions, as shown in the case of Kepler. Genuine hypotheses always give explanation along with prediction. There is never any scientific reason for coding physical hypotheses into a computer model. If you have the hypotheses, you do not need the model and you have explanations. You might axiomatize a physical theory for the purpose of searching for consequences that you have overlooked but that is a matter of accounting not science.
What can a computer model do? You can program it to produce a particular set of results. It can be used to retrodict (predict the past). Any two models that retrodict the same past are identical regardless of their internal construction. Can it predict? That is a matter of programming it to produce a set of results that you expect to happen in the future. You have to program those in. The computer will never be able to do more than produce results that you program. The model does not contain physical hypotheses and cannot be made to offer explanations for the results you program it to produce.

June 13, 2011 10:07 am

KR says:
“Arguing that we cannot predict the climate is simply a plea to do nothing, to continue with business as usual. You’ve spent a lot of effort on a very deceptive posting.”
Actually, the IPCC’s position is this:

“In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

Not one GCM [computer climate model] predicted the flat to declining temperatures over the past decade, proving that they cannot accurately predict the climate. Readers can decide for themselves who is posting deceptively.
And the most reasonable course of action at this point is to do nothing. There is no evidence whatever of global harm from increased CO2, and much evidence of global benefit. Thus, CO2 is harmless and beneficial. Under the circumstances, doing nothing is the only rational response.
