The Chaos theoretic argument that undermines Climate Change modelling

Just to be clear ahead of time, chaos in weather is NOT the same as climate disruption listed below – Anthony

Guest submission by Dr. Andy Edmonds

This is not intended to be a scientific paper, but a discussion, for non-specialist readers, of the disruptive light Chaos Theory can cast on climate change. It focuses on the critical assumptions global warming supporters have made that involve chaos, and on their shortcomings. While much of the global warming case in temperature records and other areas has been chipped away, proponents can, and do, still point to their computer models as proof of their assertions. This has been hard to fight, as the warmists can choose their own ground and move it as they see fit. This discussion looks at the constraints on those models and shows, from first principles in both chaos theory and the theory of modelling, that no reliance can be placed on them.

First of all, what is Chaos? I use the term here in its mathematical sense. Just as in recent years scientists have discovered extra states of matter (not just solid, liquid and gas, but also plasma), so science has discovered new states that systems can have.

Systems of forces, equations, photons, or financial trading, can exist effectively in two states: one that is amenable to mathematics, where the future states of the systems can be easily predicted, and another where seemingly random behaviour occurs.

This second state is what we will call chaos. It can happen occasionally in many systems.

For instance, if you are unfortunate enough to suffer a heart attack, the normally predictable firing of heart muscles goes into a chaotic state where the muscles fire seemingly randomly, from which only a shock will bring them back. If you’ve ever braked hard on a motorbike on an icy road you may have experienced a “tank slapper”, a chaotic motion of the handlebars that almost always results in you falling off. There are circumstances at sea where wave patterns behave chaotically, resulting in unexplained huge waves.

Chaos theory is the study of chaos, and a variety of analytical methods, measures and insights have been gathered together over the past 30 years.

Generally, chaos is an unusual occurrence, and where engineers have the tools they will attempt to “design it out”, i.e. to make it impossible.

There are, however, systems where chaos is not rare but the norm. One of these, you will have guessed, is the weather, but there are others: the financial markets, for instance, and, surprisingly, nature. Investigations of the populations of predators and prey show that these often behave chaotically over time. The author has been involved in work showing that even single-celled organisms can display population chaos at high densities.

So, what does it mean to say that a system can behave seemingly randomly? Surely if a system starts to behave randomly the laws of cause and effect are broken?

A little over a hundred years ago scientists were confident that everything in the world would be amenable to analysis, that everything would be therefore predictable, given the tools and enough time. This cosy certainty was destroyed first by Heisenberg’s uncertainty principle, then by the work of Kurt Gödel, and finally by the work of Edward Lorenz, who first discovered Chaos, in, of course, weather simulations!

Chaotic systems are not entirely unpredictable, as something truly random would be. They exhibit diminishing predictability as they move forward in time, and this diminishment is caused by greater and greater computational requirements to calculate the next set of predictions. Computing requirements to make predictions of chaotic systems grow exponentially, and so in practice, with finite resources, prediction accuracy will drop off rapidly the further you try to predict into the future. Chaos doesn’t murder cause and effect; it just wounds it!

Now would be a good place for an example. Everyone owns a spreadsheet program, and the following is very easy to try for yourself.

The simplest man-made equation known that produces chaos is called the logistic map.

Its simplest form is: Xn+1 = 4Xn(1-Xn)

Meaning that the next step of the sequence is equal to 4 times the previous step, times (1 minus the previous step). If we open a spreadsheet we can create two columns of values:

Column A is created by writing =A1*4*(1-A1) into cell A2 and then copying it down for as many cells as you like; column B likewise, by writing =B1*4*(1-B1) into B2. A1 and B1 contain the initial conditions: A1 contains just 0.3, and B1 a very slightly different number, 0.30000001.
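If you’d rather not use a spreadsheet, the same experiment can be sketched in a few lines of Python (the function name and the 40-step run length are just illustrative choices):

```python
# The logistic map x_{n+1} = 4x_n(1 - x_n), run twice with the same
# tiny difference in starting value as spreadsheet columns A and B.
def logistic_series(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_series(0.3, 40)          # column A
b = logistic_series(0.30000001, 40)   # column B

for n in (0, 10, 22, 30):
    print(n, round(a[n], 6), round(b[n], 6), round(abs(a[n] - b[n]), 6))
```

Typically by the late twenties the two columns bear no resemblance to each other, just as in the spreadsheet.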

The graph to the right shows the two copies of the series. Initially they are perfectly in sync; they start to diverge at around step 22, and by step 28 they are behaving entirely differently.

This effect occurs for a wide range of initial conditions. It is fun to get out your spreadsheet program and experiment. The bigger the difference between the initial conditions, the faster the sequences diverge.

The difference between the initial conditions is minute, but the two series diverge for all that. This illustrates one of the key properties of chaos: acute sensitivity to initial conditions.

If we look at this the other way round, suppose that you only had the series, and let’s assume, to make it easy, that you know the form of the equation but not the initial conditions. If you try to make predictions from your model, any minute inaccuracy in your guess of the initial conditions will make your prediction and the reality diverge dramatically. This divergence grows exponentially, and one way of measuring it is the Lyapunov exponent, which measures, in bits per time step, how rapidly the values diverge, averaged over a large set of samples. A positive Lyapunov exponent is considered to be proof of chaos. It also gives us a bound on the quality of predictions we can hope for if we try to model a chaotic system.
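As a sketch of how such a measurement works (not a description of any particular commercial tool), the Lyapunov exponent of the logistic map above can be estimated by averaging the base-2 logarithm of the local stretching factor |f'(x)| = |4 - 8x| along a long orbit; for this map, theory says the answer is exactly one bit per step:

```python
import math

# Estimate the Lyapunov exponent of x -> 4x(1 - x) in bits per step
# by averaging log2 of the local stretching factor |f'(x)| = |4 - 8x|.
x = 0.3
steps = 100_000
total = 0.0
for _ in range(steps):
    stretch = abs(4 - 8 * x)
    if stretch > 0:  # guard against the (measure-zero) case x = 0.5
        total += math.log2(stretch)
    x = 4 * x * (1 - x)

print(total / steps)  # theory says exactly 1 bit per step
```

A positive value here is the numerical signature of chaos: on average, each step of the map stretches small uncertainties by a factor of two.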

These basic characteristics apply to all chaotic systems.

Here’s something else to stimulate thought. The values of our simple chaos generator in the spreadsheet vary between 0 and 1. If we subtract 0.5 from each, so we have positive- and negative-going values, and accumulate them, we get this graph, stretched now to a thousand points.

If, ignoring the scale, I told you this was the share price last year for some FTSE or NASDAQ stock, or yearly sea temperature you’d probably believe me. The point I’m trying to make is that chaos is entirely capable of driving a system itself and creating behaviour that looks like it’s driven by some external force. When a system drifts as in this example, it might be because of an external force, or just because of chaos.
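The accumulated series is easy to reproduce yourself (again a sketch, with an arbitrary run length):

```python
# Accumulate (x - 0.5) over 1000 steps of the logistic map, as the
# article describes: a purely deterministic, internally driven series
# that nevertheless drifts and wanders like a share price.
x = 0.3
total = 0.0
path = []
for _ in range(1000):
    x = 4 * x * (1 - x)
    total += x - 0.5
    path.append(total)

print(round(min(path), 2), round(max(path), 2))
```

Nothing external is driving this series, yet plotted it shows long drifts up and down that look for all the world like responses to outside “news”.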

So, how about the weather?

Edward Lorenz (1917-2008) was the father of the study of chaos, and also a weather researcher. He created an early weather simulation using three coupled equations and was amazed to find that as he progressed the simulation in time, the values in the simulation behaved unpredictably.

He then looked for evidence that real world weather behaved in this same unpredictable fashion, and found it, before working on discovering more about the nature of Chaos.

No climate researchers dispute his analysis that the weather is chaotic.

Edward Lorenz estimated that the global weather exhibits a Lyapunov exponent equivalent to one bit of information lost every 4 days. This is an average over time and over the world’s surface; there are times and places where weather is much more chaotic, as anyone who lives in England can testify. What it means, though, is that if you can predict tomorrow’s weather with an accuracy of 1 degree C, then your best prediction for 5 days hence will on average be +/- 2 degrees, for 9 days hence +/- 4 degrees, and for 13 days hence +/- 8 degrees, so to all intents and purposes after 9-10 days your predictions will be useless. Of course, if you can predict tomorrow’s weather to +/- 0.1 degree, then the growth in errors starts lower, but since they grow exponentially, it won’t be many days until they become useless again.
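The arithmetic here is easy to check: one bit lost per 4 days means the error band doubles every 4 days. A minimal sketch, assuming the 1-degree starting accuracy used in the text:

```python
# Error growth under Lorenz's estimate of one bit of predictability
# lost every 4 days: the uncertainty band doubles each 4-day interval.
initial_error = 1.0       # +/- degrees C for tomorrow's forecast
doubling_period = 4       # days per lost bit

for day in (1, 5, 9, 13):
    error = initial_error * 2 ** ((day - 1) / doubling_period)
    print(f"day {day}: +/- {error:g} C")
```

Starting from +/- 0.1 degree instead merely shifts the whole curve down; the doubling, and hence the eventual uselessness, is unchanged.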

Interestingly, the performance of weather predictions made by organisations like the UK Met Office drops off in exactly this fashion. This is proof of a positive Lyapunov exponent, and thus of the existence of chaos in weather, if any were still needed.

So that’s weather prediction, how about long term modelling?

Let’s look first at the scientific method. The principal idea is that science develops by someone forming a hypothesis, testing that hypothesis by constructing an experiment, and then modifying the hypothesis, confirming or refuting it, in the light of the experiment’s results.

A model, whether an equation or a computer model, is just a big hypothesis. Where you can’t modify the thing you are hypothesising over with an experiment, then you have to make predictions using your model and wait for the system to confirm or deny them.

A classic example is the development of our knowledge of the solar system. The first models had us at the centre, then the sun at the centre, then came the discovery of elliptical orbits, and then enough observations to work out the exact nature of those orbits. Obviously we could never hope to affect the movement of the planets, so experiments weren’t possible, but if our models were right, key things would happen at key times: eclipses, the transit of Venus, and so on. Once models were sophisticated enough, errors between the model and reality could be used to predict new features. This is how Neptune was discovered (the search that turned up Pluto was motivated in the same way, though Pluto proved far too small to account for the supposed orbital discrepancies). If you want to know where the planets will be in ten years’ time, to the second, there is software available online that will tell you exactly.

Climate scientists would love to be able to follow this way of working. The one problem is that, because the weather is chaotic, there is never any hope that they can match up their models and the real world.

They can never match up the model to shorter term events, like say six months away, because as we’ve seen, the weather six months away is completely and utterly unpredictable, except in very general terms.

This has terrible implications for their ability to model.

I want to throw another concept into this mix, drawn from my other speciality, the world of computer modelling through self-learning systems.

This is the field of artificial intelligence, in which scientists attempt to create (mostly) computer programs that behave intelligently and are capable of learning. Like any area of study, it tends to throw up bits of general theory, and one of these concerns the nature of incremental learning.

Incremental learning is where a learning process tries to model something by starting out simple and adding complexity, testing the quality of the model as it goes.

Examples of this are neural networks, where the strengths of connections between simulated brain cells are adapted as learning goes on, and genetic programming, where bits of computer programs are modified and elaborated to improve the fit of the model.

From my example above of theories of the solar system, you can see that the scientific method itself is a form of incremental learning.

There is a graph that is universal in incremental learning. It shows the performance of an incremental learning algorithm (it doesn’t matter which) on two sets of data.

The idea is that these two sets of data must be drawn from the same source, but are split randomly in two: the training set, used to train the model, and a test set, used to test it every now and then. Usually the training set is bigger than the test set, but if there is plenty of data the exact split doesn’t much matter. As learning progresses the learning system uses the training data to modify itself, but not the test data, which is used to test the system and then immediately forgotten by it.

As can be seen, the performance on the training set gets better and better as more complexity is added to the model, but the performance on the test set gets better, and then starts to get worse!

Just to make this clear, the test set is the only thing that matters. If we are to use the model to make predictions we are going to present new data to it, just like our test set data. The performance on the training set is irrelevant.
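To make the picture concrete, here is a toy version of the experiment (not any real climate model): a k-nearest-neighbour learner, where small k means a complex, memorizing model and large k a simple one. The data source, noise level and split below are all arbitrary choices for illustration:

```python
import random
random.seed(0)

def f(x):
    """The true signal the learner is trying to model."""
    return x * (1 - x)

def knn_predict(train, x, k):
    # Predict by averaging the y-values of the k nearest training points.
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def mse(train, data, k):
    return sum((knn_predict(train, x, k) - y) ** 2 for x, y in data) / len(data)

# One source of noisy data, split randomly into training and test sets.
points = []
for _ in range(80):
    x = random.random()
    points.append((x, f(x) + random.gauss(0, 0.05)))
train, test = points[:60], points[60:]

# Sweep complexity: k=30 is a very simple model, k=1 memorizes the data.
for k in (30, 10, 3, 1):
    print(k, round(mse(train, train, k), 4), round(mse(train, test, k), 4))
```

At k = 1 the training error is exactly zero (every training point is its own nearest neighbour), while the test error is not: the memorizing model looks perfect on the data it learned from and worse on fresh data. That is the right-hand side of the universal graph.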

This is an example of a principle that has been talked about since William of Ockham first wrote “Entia non sunt multiplicanda praeter necessitatem”, known as Ockham’s razor and translatable as “entities should not be multiplied without necessity”, entities being in his case embellishments to a theory. The corollary is that the simplest theory that fits the facts is the most likely to be correct.

There are proofs for the generality of this idea from Bayesian Statistics and Information Theory.

So, this means that our intrepid weather modellers are in trouble from both ends: if their theories are insufficiently complex to explain the weather their model will be worthless, if too complex then they will also be worthless. Who’d be a weather modeller?

Given that they can’t calibrate their models to the real world, how do weather modellers develop and evaluate their models?

As you would expect, weather models behave chaotically too. They exhibit the same sensitivity to initial conditions. The solution chosen for evaluation (developed by Lorenz) is to run thousands of examples, each with slightly different initial conditions. These sets are called ensembles.

Each example explores a possible path for the weather, and by collecting the set, they generate a distribution of possible outcomes. For weather predictions they give you the biggest peak as their prediction. Interestingly, with this kind of model evaluation there is likely to be more than one answer, i.e. more than one peak, but they choose never to tell us the other possibilities. In statistics this methodology is called the Monte Carlo method.
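A miniature version of the ensemble idea, using our logistic map rather than a real weather model, looks like this (the perturbation size, ensemble size and step count are arbitrary illustrative choices):

```python
import random
random.seed(1)

# A toy ensemble in the Lorenz style: run the chaotic logistic map
# from many slightly perturbed initial conditions and report the
# distribution of outcomes instead of a single prediction.
def run(x0, steps=50):
    x = x0
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

ensemble = [run(0.3 + random.uniform(-1e-6, 1e-6)) for _ in range(2000)]

# Histogram of outcomes in ten bins across [0, 1].
bins = [0] * 10
for x in ensemble:
    bins[min(int(x * 10), 9)] += 1
print(bins)
```

For this map the histogram is typically strongly double-peaked, piling up near 0 and near 1: a small reminder that an ensemble can quite naturally produce more than one “answer”, not just the single peak that gets reported.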

For climate change they modify the model so as to simulate more CO2, more solar radiation or some other parameter of interest and then run another ensemble. Once again the results will be a series of distributions over time, not a single value, though the information that the modellers give us seems to leave out alternate solutions in favour of the peak value.

Models are generated by observing the earth, modelling land masses and air currents, tree cover, ice cover and so on. It’s a great intellectual achievement, but it’s still full of assumptions. As you’d expect the modellers are always looking to refine the model and add new pet features. In practice there is only one real model, as any changes in one are rapidly incorporated into the others.

The key areas of debate are the interactions of one feature with another. For instance, the hypothesis that increased CO2 will result in runaway temperature rises rests on the idea that the melting of the Siberian permafrost, due to increased temperatures, will release more CO2, and thus positive feedback will bake us all. Permafrost may well melt, or not, but the rate of melting and the CO2 released are not hard scientific facts; they are estimates. There are thousands of similar “best guesses” in the models.

As we’ve seen from looking at incremental learning systems too much complexity is as fatal as too little. No one has any idea where the current models lie on the graph above, because they can’t directly test the models.

However, dwarfing all this arguing about parameters is the fact that weather is chaotic.

We know of course that chaos is not the whole story. It’s warmer on average away from the equatorial regions during the summer than the winter. Monsoons and freezing of ice occur regularly every year, and so it’s tempting to see chaos as a bit like noise in other systems.

The argument used by climate change believers runs that we can treat chaos like noise, so chaos can be “averaged out”.

To digress a little, this idea of averaging out errors and noise has a long history. Take the measurement of the height of Mount Everest before the days of GPS and radar satellites: the method was to start at sea level with a theodolite and take measurements of local landmarks, using their distance and their angle above the horizon to estimate their height, then to move on to those sites and do the same thing with other landmarks, moving slowly inland. By the time surveyors got to the foothills of the Himalayas they were relying on many thousands of previous measurements, each with measurement error included. In the event, the surveyors’ estimate of the height of Everest was only a few tens of feet out!

This is because all those measurement errors tended to average out. If, however, there had been a systematic error, like all the theodolites measuring 5 degrees high, the errors would have been enormous. The key thing is that the errors were unrelated to the thing being measured.

There are lots of other examples of this in Electronics, Radio Astronomy and other fields.

You can understand why climate modellers would hope the same to be true of chaos; in fact, they claim it is. Note, however, that the theodolite errors had nothing to do with the actual height of Everest, just as noise in radio telescope amplifiers has nothing to do with the signals from distant stars. Chaos, by contrast, is implicit in weather, so there is no reason why it should average out. It’s not part of the measurement; it’s part of the system being measured.

So can chaos be averaged out? If it can, then we would expect long term measurements of weather to exhibit no chaos. When a team of Italian researchers asked to use my Chaos analysis software last year to look at a time series of 500 years of averaged South Italian winter temperatures, the opportunity arose to test this. The picture below is this time series displayed in my Chaos Analysis program, ChaosKit.

The result? Buckets of chaos. The Lyapunov exponent was measured at 2.28 bits per year.

To put that in English, predictability drops by a factor of more than four (2^2.28 ≈ 4.9) for each additional year ahead you try to predict; put the other way round, the prediction errors more than quadruple each year.

What does this mean? Chaos doesn’t average out. Weather is still chaotic at this scale over hundreds of years.

If we were, as climate modellers try to do, to run a moving average over the data, to hide the inconvenient spikes, we might find a slight bump to the right, as well as many bumps to the left. Would we be justified in saying that this bump to the right was proof of global warming? Absolutely not: it would be impossible to say whether the bump was the result of chaos, and the drifts we’ve seen it can create, or of some fundamental change, like increasing CO2.
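This is easy to demonstrate on the toy accumulated series from earlier; the 50-point window is an arbitrary choice:

```python
# Smooth the accumulated chaotic series with a moving average, the way
# long climate series are often smoothed: any "trends" that emerge in
# the smoothed curve are driven by nothing but the chaos itself.
x = 0.3
total = 0.0
series = []
for _ in range(1000):
    x = 4 * x * (1 - x)
    total += x - 0.5
    series.append(total)

window = 50
smoothed = [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

print(round(min(smoothed), 2), round(max(smoothed), 2))
```

Plot the smoothed curve and you will see gentle rises and falls that look exactly like forced trends, although no forcing of any kind is present in the generator.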

So, to summarize, climate researchers have constructed models based on their understanding of the climate, current theories and a series of assumptions. They cannot test their models over the short term, as they acknowledge, because of the chaotic nature of the weather.

They hoped, though, to be able to calibrate, confirm or fix up their models by looking at very long-term data, but we now know that’s chaotic too. They don’t, and cannot, know whether their models are too simple, too complex, or just right, because even if they were perfect, if weather is chaotic at this scale they cannot hope to match up their models to the real world: the slightest errors in initial conditions would create entirely different outcomes.

All they can honestly say is this: “we’ve created models that we’ve done our best to match up to the real world, but we cannot prove to be correct. We appreciate that small errors in our models would create dramatically different predictions, and we cannot say if we have errors or not. In our models the relationships that we have publicized seem to hold.”

It is my view that governmental policymakers should not act on the basis of these models. The likelihood seems to be that they have as much similarity to the real world as The Sims or Half-Life.

On a final note, there is another school of weather prediction that holds that long term weather is largely determined by variations in solar output. Nothing here either confirms or denies that hypothesis, as long term sunspot records have shown that solar activity is chaotic too.

Andy Edmonds

Short Bio

Dr Andrew Edmonds is an author of computer software and an academic. He designed various early artificial intelligence computer software packages and was arguably the author of the first commercial data mining system. He has been the CEO of an American public company and involved in several successful start-up businesses. His PhD thesis was concerned with time series prediction of chaotic series, and resulted in his product ChaosKit, the only standalone commercial product for analysing chaos in time series. He has published papers on Neural Networks, genetic programming of fuzzy logic systems, AI for financial trading, and contributed to papers in Biotech, Marketing and Climate.


On his Website: http://scientio.blogspot.com/2011/06/chaos-theoretic-argument-that.html

212 Comments
June 14, 2011 7:14 am

I don’t understand why the two sides here, chaos vs determinism simply cannot agree that Chaos == Incomplete Information, and move on. It seems like semantics. I’ll stick my neck out and say: we’ll never know everything. But that does not mean we cannot try.

The appearance that the problem being addressed is a matter of mere semantics is created by the device of equating chaos to incomplete information. It would be more accurate to state that Chaos “implies” Incomplete Information. That it implies incomplete information has implications for climatology.
It follows from the non-linearity of the equations which describe a chaotic system that the associated system cannot be fully described as an interaction among the parts of this system. Thus, when an attempt is made at fully describing a chaotic system as an interaction among its parts, this attempt fails from the incompleteness of the information about the whole of this system. Through competently performed scientific research, the incompleteness may sometimes be reduced to a level that is sufficient for policy making.
Climatologists of the “consensus” argue, in effect, that the whole of the global climate system can be described as an interaction among the parts of this system. If the global climate system can be described in this way, the evolution of the climate is susceptible to being computed (e.g. by an atmosphere-ocean general circulation model). By his essay, Dr. Edmonds is making the important point that this argument is false.
As the climate models provide policy makers with incomplete information about climate outcomes, the question arises of the degree of the error. An answer to this question is unavailable in the literature of climatology. In effect, climatologists have based their claim to anthropogenic global warming upon a false argument.

Paul Vaughan
June 14, 2011 7:14 am

Amazing how many ignore the spatial dimensions. No wonder they buy the notion of temporal climate chaos wholesale. A string of spatiotemporally BIASED samples (a time series sampled from an eddy field) may appear chaotic, but oscillations run up against each other in space, which WRAPS on itself spherically. There are GLOBAL CONSTRAINTS on that sphere, as FIRMLY INDICATED by the Earth Orientation Parameter (EOP) record.
Is there regional interannual turbulence? Of course:
1) http://upload.wikimedia.org/wikipedia/commons/6/67/Ocean_currents_1943_%28borderless%293.png
2) http://wattsupwiththat.com/2011/05/15/interannual-terrestrial-oscillations/
But adjust the microscope across scale and look what aggregates into focus:
1. http://wattsupwiththat.files.wordpress.com/2011/04/vaughn_lod2_fig7.png
2. http://wattsupwiththat.files.wordpress.com/2010/09/scl_northpacificsst.png
3. http://wattsupwiththat.files.wordpress.com/2010/12/vaughn_lod_fig1b.png
4. http://wattsupwiththat.files.wordpress.com/2010/08/vaughn_lod_amo_sc.png
SCL’ = solar cycle deceleration
What one sees is a function of the location & scale of one’s samples. Turbulence at one scale. Strict periodicity at others. (And some are still refusing to adjust their scope location & zoom…)
Boris Gimbarzevsky, thanks for the stimulating comments.

June 14, 2011 8:04 am

Yet (NH) summer is always warmer than winter

J. D. Lindskog
June 14, 2011 9:04 am

It would appear at first look that Dr. Andy Edmonds has defined the anatomy and functional operation of any government program, and particularly those programs conceived by committee.

KR
June 14, 2011 10:31 am

Tamino discusses this very post right here: [snip. When Tamino lists WUWT on his blogroll you can provide free advertising. ~dbs, mod.]
As I and others have noted, a chaotic system can certainly possess stable averages and deviations – and those are climate. Changing the boundary conditions (changing the constants in those chaotic attractors) changes the climate.
Edmonds’s second illustration, incidentally, is just a random walk, which is not related to either climate or weather.
This particular error seems to recur every once in a while on various skeptic blogs – I’ll have to see if the distribution of recurrence fits a chaotic attractor 🙂

DCC
June 14, 2011 11:07 am

@Theo Goodwin. Let’s just say that your long dissertation yielded nothing. It’s a confusing mess.
“There are no hypotheses in computer models.” Would you care to elaborate on that? Explicit assumptions are hypotheses. That means that fudge factors are hypotheses. The entire model is built on assumptions, especially the one about the amplification of CO2 effects. There is an assumption that solar variation doesn’t count. The whole point of the models is the hypothesis that they can predict future climate!
You posted a lot of words but frankly, they made no sense as a coherent argument.

DCC
June 14, 2011 11:25 am

Terry Oldberg said: ” [It is] possible to predict surface temperature and precipitation related variables in the western states of the U.S. as much as 3 years in advance. I provide an introduction to the methodology that made this advance possible and compare it to the methodology of modern climatology at http://judithcurry.com/2011/02/15/the-principles-of-reasoning-part-iii-logic-and-climatology/ .”
The referenced article is the most unreadable bunch of claptrap I’ve seen in some time. And it purports to tell us about reasoning! Modèle vs model? Give us a break!

Tenuc
June 14, 2011 11:30 am

KR says:
June 14, 2011 at 10:31 am
“…As I and others have noted, a chaotic system can certainly possess stable averages and deviations – and those are climate. Changing the boundary conditions (changing the constants in those chaotic attractors) changes the climate…”
No, no, NO – your statement is complete and utter rubbish! Should you wish to understand the real story about how chaos (not randomness) affects our climate you need to spend some time learning about it, not listening to ignorant people on Tamino’s alarmist blog. A good place to start is here…
http://judithcurry.com/2011/02/10/spatio-temporal-chaos/
Perhaps when you understand a bit more you will be able to make a useful contribution to this insightful post.

Roger Knights
June 14, 2011 11:32 am

izen says:
June 13, 2011 at 11:02 pm
one of the most common chaotic systems people encounter is the dripping tap or spigot.
…………..
Like many natural systems there is a driving energy, the head or pressure or water in this case, and a pathway for that energy to be expended, the spout. While the way in which the energy is dissipated may be chaotic, the rate at which it is expended is constant over many drips even though each drip is chaotic. Changing the energy input, the pressure or changing the spout size, the energy dissipation process, will change the drip behavior, it will still be chaotic but the average rate of flow will change to reflect the new conditions.
The implications for climate, a similar system constrained by its energy input and energy dissipation processes, are obvious.

But there are no negative feedbacks in a dripping-tap scenario. Output can be computed from input factors when averaged over a long enough time scale. This is what is really objectionable about Warmism. We scorcher-scoffers claim there are such feedbacks in a warming climate, such as increased cloud formation and increased thunderstorm activity.
Here’s a WUWT article by Willis decrying “simple physics” thinking:
http://wattsupwiththat.com/2009/12/27/the-unbearable-complexity-of-climate-2/
And we claim that the warmists’ posited positive feedbacks, which are required to make CO2 alarming, are speculative and unlikely. (Else climate would have exhibited runaway warming in the past.)

R. Gates
June 14, 2011 11:59 am

Chris Colose says:
June 14, 2011 at 8:04 am
Yet (NH) summer is always warmer than winter
______
As obvious as this is, it is also the perfect way to see the difference in spatio-temporal scales when speaking about chaos in the weather and chaos in the climate. You can in fact be pretty certain that 25, 250, or 2500 years from now, NH summers will be warmer than winters. Yet you can’t tell me what the temperature will be on Jan 1st next year versus July 1st for any given city. Climate, in this regard, is easier to forecast than weather, as the scales and forcings are long-term, large events that are reliable (once you’ve got them all identified, which we may not yet).

R. Gates
June 14, 2011 12:02 pm

Richard M says:
June 14, 2011 at 6:22 am
Since our CAGW believers are once again focusing on CO2, I will once again challenge them to explain why the cooling effects of GHGs like CO2 are never mentioned in any discussions. For some reason they always run away and avoid the topic.
_____
Maybe because the net effect of GHG’s (taken over the whole atmosphere) is one of warming, not cooling…i.e. take away the GHG’s and the world would be so much colder.

John B
June 14, 2011 1:32 pm

Smokey says:
June 13, 2011 at 8:43 pm
John B says:
“1. It contradicts the ice core data”
No, it doesn’t, because the ice core data ends more than a century ago.

Well, how come the very picture you link to shows “CO2 ice core Antarctica” going right up to 1960? Check your sources!
“2. It suggests wild swings of CO2 that accepted theory says can’t happen”
There is no “accepted theory.” There is only conjecture at this point.”

Just because you don’t accept it, doesn’t mean it is not “accepted theory”
“3. If concentrations had been that high, there should be residual C13, which there isn’t”
You probably get that misinformation from Skeptical Pseudo-Science. Try reading the WUWT archives, you will get the straight skinny here.”

Now we are getting somewhere. You, a “skeptic”, are telling me I should get the “straight skinny” from… a skeptic blog. Not a paper, not research, but a skeptic blog. THAT is confirmation bias! And no, I didn’t get it from skepticalscience, rebuttals of Beck are widespread.
“4. The chemical measurements on which the paper is based are known to be problematic”
Really? “Problematic”? How so, exactly? The test apparatus has been replicated from the original drawings, and shown to be accurate to within ±3%. Looks like you’re just winging it with your ‘problematic’ comment.

This is from Keeling’s (of Mauna Loa fame) autobiography, talking about the wet chemical method: “This Scandinavian program, started by Rossby in 1954, had been a major factor in triggering interest in measuring CO2 during the IGY. Nevertheless it was quietly abandoned after the meeting, when the reported range in concentrations, 150–450 ppm, was seen to reflect large errors.” Again, this kind of criticism is widespread.
“5. It is only one paper, and no one else has published work that supports it or can reproduce its results”
As Albert Einstein said, it doesn’t require a hundred scientists signing a letter to falsify Relativity, it only requires one fact. CAGW has been repeatedly debunked. Only CAGW true believers still demonize “carbon”. Wise up, and stop being a true believer. CAGW is pseudo-science. It is no more credible than Scientology at this point.

And now to my main point: Yes, indeed, one fact would debunk CAGW. But it has to (a) be a genuine “fact” and (b) be a relevant “fact”. If I say I have measured temperatures on my front porch for the last 30 years and seen no rise in temperatures, does that mean I have a “fact” which will debunk AGW? Probably not. I probably have a broken thermometer. How would you check? You would look at other people’s thermometers. Get it? Replication! Just the thing you skeptics are always calling for – except when it suits you. If one set of measurement is anomalous, first you should be skeptical of the measurement. If it holds up, then go ahead and be skeptical of the theory it questions.
The real problem here is that Beck is one paper, probably a flawed paper, but you accept it because it seems to say what you want to hear. And that is not skepticism, it is, well, “skepticism”. A true skeptic will look at all the data, and be skeptical of all the data.
But it’s not really about skepticism, is it? I see that now.

Rocky H
June 14, 2011 3:00 pm

I see JohnB keeps dodging the problem of the missing evidence of AGW. Without evidence it’s all just opinion, and you know what they say about opinions.

phlogiston
June 14, 2011 3:17 pm

It’s warmer and brighter in the day and darker and colder at night.
Therefore chaos and nonlinear / nonequilibrium pattern dynamics play no role in climate (and it can be only CO2) 🙂 Therefore atmosphere and ocean are always at or close to equilibrium, and the earth is not an open system receiving energy from outside (only CO2 warms us, we get not a single joule of heat energy from the sun). Therefore there are no positive or negative feedbacks, the interplay between which might otherwise cause nonlinear dynamics. No – it’s all simple, all linear, all equilibrium and all CO2! Hooray!
ummm … FAIL

June 14, 2011 3:48 pm

The referenced article is the most unreadable bunch of claptrap I’ve seen in some time. And it purports to tell us about reasoning! Modèle vs model? Give us a break!

DCC:
While I’d be pleased to debate the quality of the article entitled “The Principles of Reasoning: Part III” with you, the most appropriate place to do so is not in the comments section for Dr. Edmonds’ article but rather in the comments section for “The Principles of Reasoning: Part III.” You could start the debate going by sharing the basis in facts and logic, if any, for your characterization of “The Principles of Reasoning: Part III” as “a bunch of claptrap.”

June 14, 2011 3:53 pm

“No climate researchers dispute his analysis that the weather is chaotic.”
Except long range forecasters using solar factors.

John B
June 14, 2011 4:15 pm

RockyH said “I see JohnB keeps dodging the problem of the missing evidence of AGW. Without evidence its all just opinion and you know what they say about opinions.”
I can provide lots of evidence, but I have come to realise that it means nothing to most people here, whereas a picture of a Christmas tree or an out of context graph is treated as the “fact” that single handedly debunks AGW.
Here’s a whole bunch of studies for example:
http://ams.confex.com/ams/Annual2006/techprogram/session_19150.htm
The conclusion from paper 1.7 states:
“In comparison, an ensemble summary of our measurements indicates that an energy flux imbalance of 3.5 W/m2 has been created by anthropogenic emissions of greenhouse gases since 1850. This experimental data should effectively end the argument by skeptics that no experimental evidence exists for the connection between greenhouse gas increases in the atmosphere and global warming.” (emphasis mine)
You don’t like that one? There are literally hundreds more where that came from (i.e. mainstream science). It’s all there if you open your eyes. But Smokey will tell you that CO2 is plant food, and he has a picture to prove it!

June 14, 2011 5:40 pm

I can provide lots of evidence, but I have come to realise that it means nothing to most people here, whereas a picture of a Christmas tree or an out of context graph is treated as the “fact” that single handedly debunks AGW.
Here’s a whole bunch of studies for example:
http://ams.confex.com/ams/Annual2006/techprogram/session_19150.htm
The conclusion from paper 1.7 states:
“In comparison, an ensemble summary of our measurements indicates that an energy flux imbalance of 3.5 W/m2 has been created by anthropogenic emissions of greenhouse gases since 1850. This experimental data should effectively end the argument by skeptics that no experimental evidence exists for the connection between greenhouse gas increases in the atmosphere and global warming.” (emphasis mine)

While paper 1.7 certainly provides evidence of the downward-pointing radiative flux at selected points in space and time, it fails to provide evidence of the causal relationship between an increase in the atmospheric CO2 concentration at the Mauna Loa observatory and an increase in the equilibrium temperature at Earth’s surface that is assumed by the theory of AGW. In fact, this evidence is unobtainable, for the equilibrium temperature is not an observable. As this evidence is unobtainable, the theory of AGW is non-falsifiable, thus lying outside science.

Starwatcher
June 14, 2011 5:48 pm

Weather arises from the random cycling of sticky variables (mean temperature/precipitation/etc. values on some date) around their attractors. The mentioned acute sensitivity to initial conditions exhibited by Earth’s atmosphere is what makes forecasting individual weather events far into the future computationally intractable. However, predictions regarding climate, as Tamino states in his post, are not concerned with the exact values of the aforementioned variables but instead with the distributions of these variables. The distributions depend on the location of the attractors, which shouldn’t change absent an imbalance in the incoming and outgoing fluxes of energy to the Earth. There is a measured global heat imbalance of 0.75 W/m2 which is causing various attractors to change location. Accurate predictions of what effects these shifts will have are in principle doable.
http://tamino.wordpress.com/2011/06/14/chaos/#comment-51525
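Starwatcher’s distinction — trajectories unpredictable, distributions stable — can be demonstrated with the simplest chaotic system around. A minimal sketch (the logistic map is only a stand-in for chaotic dynamics, and its parameter r plays the role of a “forcing”; none of this is a climate model):

```python
def orbit_mean(r, x0, n=200000, burn=1000):
    """Time average of the logistic map x -> r*x*(1-x).
    For chaotic r the trajectory depends sensitively on x0, but the
    time average converges to the mean of the invariant distribution."""
    x = x0
    for _ in range(burn):          # discard the initial transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += x
    return total / n
```

At r = 4 the invariant distribution is known exactly (its mean is 0.5), and wildly different initial conditions give the same long-run average even though the orbits themselves are uncorrelated. Change r — the “forcing” — and the attractor moves, so the mean moves with it: the analogue of Starwatcher’s shifting attractors under an energy imbalance.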

June 14, 2011 6:08 pm

John B says:
“And now to my main point: Yes, indeed, one fact would debunk CAGW.”
I keep posting those inconvenient, pesky facts, and JB keeps ignoring them. Temperature moves independently of CO2, another pesky fact. And there is zero evidence of any global harm resulting from CO2 — another pesky fact ignored by the cognitive dissonance-afflicted alarmist crowd.
The reason WUWT is a much better place to learn the facts, rather than reading grant-trolling papers, is because “studies” cannot be cross examined. Faulty pal reviewed papers are routinely hand-waved through peer review, while skeptical papers have to jump through flaming hoops before they’re disallowed by heavily biased referees and editors. Here, we can have a conversation — if the alarmist contingent stops ignoring the scientific method, and concedes that there is no evidence of global damage from CO2. But so far, they have simply avoided the topic, rather than honestly admitting that they have no such evidence.
Another reason to discount peer reviewed papers is because they are usually wrong. Particularly climate-related papers, which are routinely debunked here, at Climate Audit, and at other scientifically skeptical sites. Keep in mind that scientific skeptics are the only honest kind of scientists, which leaves out the alarmist crowd entirely. Michael Mann has ignored the scientific method’s transparency requirement for 13 years and counting, with no sign yet of willing cooperation. That’s the kind of scientific charlatanism that indicates the alarmists have plenty to hide.
Finally, you can prove anything with contrived assumptions, which form the basis of the alarmist arguments. But that doesn’t make for correct conclusions. It tortures the alarmist crowd that the planet itself is proving them wrong. They’ve believed in Mann’s debunked Hokey Stick chart for so long that they own that falsified belief. Sad for them, the more we learn the more ridiculous the demonization of harmless, beneficial CO2 is.
I’m willing to discuss facts, but first John B needs to either put up testable, measurable evidence per the scientific method, showing global damage caused specifically by CO2, or admit that he has no evidence.

phlogiston
June 14, 2011 6:24 pm

JohnB
In comparison, an ensemble summary of our measurements ..
“Ensemble”? Ensemble refers to multiple model runs. No-one talks about ensembles of actual data. I think you might have shot yourself in the foot.
Is your AGW indoctrination so deep that, in seeking a data example, all you can come up with are model runs?
It looks like, for many AGW devotees, what it will take to persuade them to recognise the existence of real-world data, as opposed to computer models, is a process similar to that which Leonardo DiCaprio had to undergo in the film “Shutter Island”.

sky
June 14, 2011 7:55 pm

Skeptical as I am of the results of GCMs run over climatic time-scales, there seems to be an overdependence upon chaos theory in explaining the workings of the climate system on a planetary scale. Mathematical possibility is not the same as physical reality. I believe there is much to be learned from adopting the same viewpoint as in ocean wave studies. While the actual time-history of wave motion at any point in time and space is unpredictable, the spectral characteristics can be fairly well estimated from knowledge of relatively few parameters: wind speed, duration, and fetch dimensions. Unfortunately, climate science’s obsession with GHGs has led to fundamental confusion concerning the factors that set surface temperatures, and no comparably skilled forecasting/hindcasting methodology has been developed.
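sky’s ocean-wave analogy is quite concrete: for a fully developed sea, the classical Pierson-Moskowitz (1964) spectrum predicts the wave-height statistics from wind speed alone (fetch- and duration-limited forms such as JONSWAP add the other parameters sky mentions). A minimal sketch:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def pm_spectrum(omega, wind_speed):
    """Pierson-Moskowitz (1964) spectrum for a fully developed sea.
    wind_speed is U_19.5, the wind speed 19.5 m above the surface (m/s).
    Returns the spectral density S(omega) in m^2 s."""
    alpha, beta = 8.1e-3, 0.74
    omega0 = G / wind_speed
    return alpha * G**2 / omega**5 * math.exp(-beta * (omega0 / omega)**4)

def significant_wave_height(wind_speed, w_max=6.0, n=20000):
    """Hs = 4 * sqrt(m0), where m0 is the integral of S(omega)
    (midpoint rule over 0..w_max rad/s; the tail beyond is negligible)."""
    dw = w_max / n
    m0 = sum(pm_spectrum((i + 0.5) * dw, wind_speed) for i in range(n)) * dw
    return 4.0 * math.sqrt(m0)
```

Integrating the spectrum gives the sea-surface variance m0, and Hs = 4√m0 reproduces the familiar closed-form result Hs ≈ 0.21 U²/g: the individual waves are unpredictable, but their statistics follow from one parameter — exactly the point sky is making about climate.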

Richard M
June 14, 2011 8:03 pm

R. Gates says:
June 14, 2011 at 12:02 pm
Richard M says:
June 14, 2011 at 6:22 am
Since our CAGW believers are once again focusing on CO2, I will once again challenge them to explain why the cooling effects of GHGs like CO2 are never mentioned in any discussions. For some reason they always run away and avoid the topic.
_____
Maybe because the net effect of GHG’s (taken over the whole atmosphere) is one of warming, not cooling…i.e. take away the GHG’s and the world would be so much colder.

OK, show me the computations. The truth is I’ve never seen anyone look at the “net effect”. Since you are so sure you understand it all, show me how it’s computed.

KR
June 14, 2011 8:54 pm

Richard M: “OK, show me the computations. The truth is I’ve never seen anyone look at the “net effect”. Since you are so sure you understand it all, show me how it’s computed.”
I would suggest starting at the beginning, with Arrhenius 1896 (http://www.globalwarmingart.com/images/1/18/Arrhenius.pdf), in particular page 265, where he discusses nights warming faster than days, polar amplification, increased IR at ground level, land warming faster than ocean, etc. (GHG signatures), combined with numeric computations of the level of the CO2 greenhouse effect and H2O feedback on temperature. In other words, the “net effect”, fully presented as the initial discussion of the topic.
Stratospheric cooling (a signature of only greenhouse gas effects), is not discussed, as the stratosphere had not been discovered yet.
Of course, we’ve now obtained more accurate numbers on various aspects, but Arrhenius did a great job given the measurement limitations of the time.
Follow that with http://www.aip.org/history/climate/co2.htm for a history of the theory, with branches to whatever aspects of the theory you wish to look at.

Beth Cooper
June 14, 2011 10:06 pm

Theo Goodwin says “Hypotheses are used for prediction and explanation… If they do not explain phenomenon x then they cannot predict phenomenon x… In all of science, hypotheses are useful for both prediction and explanation (e.g. Kepler’s 3 Laws)… it follows from the preceding that each hypothesis is testable and falsifiable…” “A computer model is analogous to a system of deduction… The computer will never be able to do more than produce results that you program. The model does not contain physical hypotheses and cannot be made to offer explanations for the results you program it to produce.”
Thank you for this lucid critique of climate science based on modelling, Theo.