Just to be clear ahead of time, chaos in weather is NOT the same as climate disruption listed below – Anthony
Guest submission by Dr. Andy Edmonds
This is not intended to be a scientific paper, but a discussion, for non-specialist readers, of the disruptive light chaos theory can cast on climate change. It focuses on the critical assumptions global warming supporters have made that involve chaos, and their shortcomings. While much of the global warming case in temperature records and other areas has been chipped away, its proponents can, and do, still point to their computer models as proof of their assertions. This has been hard to fight, as the warmists can choose their own ground and move it as they see fit. This discussion looks at the constraints on those models, and argues from first principles in both chaos theory and the theory of modelling that no reliance can be placed on them.
First of all, what is chaos? I use the term here in its mathematical sense. Just as in recent years scientists have discovered extra states of matter (not just solid, liquid and gas, but also plasma), so science has also discovered new states that systems can have.
Systems of forces, equations, photons or financial trading can exist in effectively two states: one that is amenable to mathematics, where the future states of the system can be easily predicted, and another where seemingly random behaviour occurs.
This second state is what we will call chaos. It can happen occasionally in many systems.
For instance, if you are unfortunate enough to suffer a heart attack, the normally predictable firing of heart muscles goes into a chaotic state in which the muscles fire seemingly at random, from which only a shock will bring them back. If you've ever braked hard on a motorbike on an icy road you may have experienced a "tank slapper", a chaotic motion of the handlebars that almost always results in you falling off. There are circumstances at sea where wave patterns behave chaotically, resulting in unexplained huge waves.
Chaos theory is the study of chaos, and a variety of analytical methods, measures and insights have been gathered together over the past 30 years.
Generally, chaos is an unusual occurrence, and where engineers have the tools they will attempt to “design it out”, i.e. to make it impossible.
There are, however, systems where chaos is not rare but is the norm. One of these, you will have guessed, is the weather, but there are others: the financial markets, for instance, and, surprisingly, nature. Investigations of the populations of predators and prey, for example, show that these often behave chaotically over time. The author has been involved in work showing that even single-celled organisms can display population chaos at high densities.
So, what does it mean to say that a system can behave seemingly randomly? Surely if a system starts to behave randomly the laws of cause and effect are broken?
A little over a hundred years ago scientists were confident that everything in the world would be amenable to analysis, and that everything would therefore be predictable, given the tools and enough time. This cosy certainty was destroyed first by Heisenberg's uncertainty principle, then by the work of Kurt Gödel, and finally by the work of Edward Lorenz, who first discovered chaos in, of course, weather simulations!
Chaotic systems are not entirely unpredictable, as something truly random would be. They exhibit diminishing predictability as they move forward in time, because the computational requirements for calculating the next set of predictions grow greater and greater. Computing requirements for predicting chaotic systems grow exponentially, so in practice, with finite resources, prediction accuracy drops off rapidly the further into the future you try to predict. Chaos doesn't murder cause and effect; it just wounds it!
Now would be a good place for an example. Everyone owns a spreadsheet program, and the following is very easy to try for yourself.
The simplest man-made equation known that produces chaos is called the logistic map.
Its simplest form is: $x_{n+1} = 4x_n(1 - x_n)$
That is, the next step of the sequence equals four times the previous step, times one minus the previous step. If we open a spreadsheet we can create two columns of values:
Columns A and B are each created by writing =A1*4*(1-A1) into cell A2 and copying it down for as many cells as you like, and likewise writing =B1*4*(1-B1) into cell B2. A1 and B1 hold the initial conditions: A1 contains 0.3, and B1 contains a very slightly different number, 0.30000001.
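If you prefer code to a spreadsheet, a minimal Python sketch of the same experiment might look like this (the function name and the 40-step run length are just illustrative choices):

```python
# Two logistic-map series started from almost identical initial conditions.
def logistic_series(x0, steps):
    """Iterate x_{n+1} = 4 * x_n * (1 - x_n) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_series(0.3, 40)         # column A
b = logistic_series(0.30000001, 40)  # column B

for n, (x, y) in enumerate(zip(a, b)):
    print(f"step {n:2d}  A={x:.6f}  B={y:.6f}  |A-B|={abs(x - y):.2e}")
```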
Plotting the two copies of the series shows that initially they are perfectly in sync; they begin to diverge at around step 22, and by step 28 they are behaving entirely differently.
This effect occurs for a wide range of initial conditions. It is fun to get out your spreadsheet program and experiment: the bigger the difference between the initial conditions, the faster the sequences diverge.
The difference between the initial conditions is minute, but the two series diverge for all that. This illustrates one of the key features of chaos: acute sensitivity to initial conditions.
If we look at this the other way round, suppose that you only had the series, and let's assume, to make it easy, that you know the form of the equation but not the initial conditions. If you try to make predictions from your model, any minute inaccuracy in your guess of the initial conditions will make your prediction and the actual result diverge dramatically. This divergence grows exponentially, and one way of measuring it is the Lyapunov exponent, which measures, in bits per time step, how rapidly these values diverge, averaged over a large set of samples. A positive Lyapunov exponent is considered to be proof of chaos. It also gives us a bound on the quality of the predictions we can get if we try to model a chaotic system.
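For the logistic map above, the Lyapunov exponent can be estimated numerically by averaging the log of the map's local stretching factor along a trajectory. A rough sketch (the step counts are arbitrary; the exact answer for this map is known to be one bit per step):

```python
import math

# Estimate the Lyapunov exponent of x -> 4x(1-x) by averaging
# log2|f'(x)| = log2|4 - 8x| along a long trajectory.
def lyapunov_logistic(x0=0.3, steps=100_000, burn_in=1_000):
    x = x0
    for _ in range(burn_in):                 # discard the transient
        x = 4 * x * (1 - x)
    total = 0.0
    for _ in range(steps):
        total += math.log2(abs(4 - 8 * x))   # log base 2 gives bits per step
        x = 4 * x * (1 - x)
    return total / steps

print(lyapunov_logistic())  # roughly 1.0: one bit of predictability lost per step
```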
These basic characteristics apply to all chaotic systems.
Here's something else to stimulate thought. The values of our simple chaos generator in the spreadsheet vary between 0 and 1. If we subtract 0.5 from each, so we have both positive and negative values, and accumulate them, we get a wandering series, stretched now to a thousand points.
If, ignoring the scale, I told you this was the share price last year for some FTSE or NASDAQ stock, or a yearly sea temperature, you'd probably believe me. The point I'm trying to make is that chaos is entirely capable of driving a system by itself and creating behaviour that looks as if it's driven by some external force. When a system drifts, as in this example, it might be because of an external force, or just because of chaos.
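The accumulation experiment is just as easy to reproduce in code; a minimal sketch, reusing the logistic map from earlier:

```python
# Centre the logistic-map values on zero and accumulate them: the running
# sum wanders like a share price, despite being fully deterministic.
x, total, walk = 0.3, 0.0, []
for _ in range(1000):
    x = 4 * x * (1 - x)
    total += x - 0.5          # subtract 0.5, then accumulate
    walk.append(total)

print(walk[::100])  # sample every 100th point to see the drift
```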
So, how about the weather?
Edward Lorenz (1917–2008) was the father of the study of chaos, and also a weather researcher. He created an early weather simulation using three coupled equations and was amazed to find that as he progressed the simulation in time, the values in it behaved unpredictably.
He then looked for evidence that real world weather behaved in this same unpredictable fashion, and found it, before working on discovering more about the nature of Chaos.
No climate researchers dispute his analysis that the weather is chaotic.
Edward Lorenz estimated that the global weather exhibits a Lyapunov exponent equivalent to one bit of information every 4 days. This is an average over time and over the world's surface; there are times and places where weather is much more chaotic, as anyone who lives in England can testify. What it means, though, is that if you can predict tomorrow's weather to an accuracy of 1 degree C, then your best prediction for 5 days hence will on average be +/- 2 degrees, for 9 days hence +/- 4 degrees, and for 13 days hence +/- 8 degrees, so to all intents and purposes after 9-10 days your predictions will be useless. Of course, if you can predict tomorrow's weather to +/- 0.1 degree, the growth in errors is slowed, but since they grow exponentially, it won't be many days until they become useless again.
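The doubling arithmetic is easy to check; a tiny sketch that simply takes the one-bit-per-4-days figure at face value:

```python
# Error growth when forecast uncertainty doubles every 4 days
# (a Lyapunov exponent of one bit per 4 days).
initial_error = 1.0  # +/- degrees C for tomorrow's forecast
for days_ahead in (1, 5, 9, 13, 17):
    error = initial_error * 2 ** ((days_ahead - 1) / 4)
    print(f"day {days_ahead:2d}: +/- {error:.0f} deg C")
```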
Interestingly, the performance of weather predictions made by organisations like the UK Met Office drops off in exactly this fashion. This is proof of a positive Lyapunov exponent, and thus of the existence of chaos in weather, if any were still needed.
So that's weather prediction; how about long-term modelling?
Let's look first at the scientific method. The principal idea is that science develops by someone forming a hypothesis, testing that hypothesis by constructing an experiment, and then modifying, proving or disproving the hypothesis by examining the results of the experiment.
A model, whether an equation or a computer model, is just a big hypothesis. Where you can't experimentally manipulate the thing you are hypothesising about, you have to make predictions using your model and wait for the system to confirm or deny them.
A classic example is the development of our knowledge of the solar system. The first models had us at the centre, then the sun at the centre, then came the discovery of elliptical orbits, and then enough observations to work out the exact nature of those orbits. Obviously we could never hope to affect the movement of the planets, so experiments weren't possible, but if our models were right, key things would happen at key times: eclipses, the transit of Venus, and so on. Once the models were sophisticated enough, errors between model and reality could be used to predict new features; this is how the outer planets Neptune and Pluto were discovered. If you want to know where the planets will be in ten years' time to the second, there is software available online that will tell you exactly.
Climate scientists would love to be able to follow this way of working. The one problem is that, because the weather is chaotic, there is never any hope that they can match their models to the real world.
They can never match the model to shorter-term events, say six months away, because, as we've seen, the weather six months away is completely and utterly unpredictable, except in very general terms.
This has terrible implications for their ability to model.
I want to throw another concept into this mix, drawn from my other speciality, the world of computer modelling through self-learning systems.
This is the field of artificial intelligence, in which scientists attempt to create computer programs that behave intelligently and are capable of learning. Like any area of study, it tends to throw up bits of general theory, and one of these concerns the nature of incremental learning.
Incremental learning is where a learning process tries to model something by starting out simple and adding complexity, testing the quality of the model as it goes.
Examples of this are neural networks, where the strengths of connections between simulated brain cells are adapted as learning goes on, and genetic programming, where bits of computer programs are modified and elaborated to improve the fit of the model.
From my example above of theories of the solar system, you can see that the scientific method itself is a form of incremental learning.
There is a curve that is universal in incremental learning: the performance of an incremental learning algorithm (it doesn't matter which) on two sets of data.
The idea is that the two sets of data must be drawn from the same source but are split randomly in two: a training set, used to train the model, and a test set, used to test it every now and then. Usually the training set is bigger than the test set, but if there is plenty of data this doesn't matter much. As learning progresses, the learning system uses the training data to modify itself, but not the test data, which is used only to test the system and is immediately forgotten by it.
Performance on the training set gets better and better as more complexity is added to the model, but performance on the test set gets better and then starts to get worse!
Just to make this clear, the test set is the only thing that matters. If we are to use the model to make predictions we are going to present new data to it, just like our test set data. The performance on the training set is irrelevant.
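That rising-then-falling test curve is straightforward to reproduce. A small sketch, using polynomial degree as a stand-in for model complexity (the sine-plus-noise data and the 40/20 split are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from an underlying smooth curve, split into train and test.
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(0, 0.2, x.size)
x_train, y_train = x[:40], y[:40]
x_test,  y_test  = x[40:], y[40:]

# Increase model complexity (polynomial degree) and watch the two errors.
for degree in range(1, 15):
    coeffs = np.polyfit(x_train, y_train, degree)          # "training"
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err  = np.mean((np.polyval(coeffs, x_test)  - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```

Training error falls steadily as the degree rises, while test error bottoms out and then climbs as the model starts fitting the noise.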
This is an example of a principle that has been talked about since William of Ockham first wrote "Entia non sunt multiplicanda praeter necessitatem", known as Ockham's razor and translatable as "entities should not be multiplied without necessity", entities being, in his case, embellishments to a theory. The corollary is that the simplest theory that fits the facts is the most likely to be correct.
There are proofs of the generality of this idea from Bayesian statistics and information theory.
So our intrepid weather modellers are in trouble from both ends: if their theories are insufficiently complex to explain the weather, their models will be worthless; if too complex, they will also be worthless. Who'd be a weather modeller?
Given that they can't calibrate their models against the real world, how do weather modellers develop and evaluate their models?
As you would expect, weather models behave chaotically too; they exhibit the same sensitivity to initial conditions. The solution chosen for evaluation (developed by Lorenz) is to run thousands of simulations, each with slightly different initial conditions. These sets of runs are called ensembles.
Each run explores a possible path for the weather, and by collecting the whole set the modellers generate a distribution of possible outcomes. For weather predictions they give you the biggest peak as their prediction. Interestingly, with this kind of evaluation there is likely to be more than one answer, i.e. more than one peak, but they choose never to tell us the other possibilities. In statistics this methodology is called the Monte Carlo method.
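The flavour of the technique can be conveyed with a toy ensemble built from our logistic map; this is a stand-in for a real weather model, not a claim about how any particular climate code is written:

```python
import numpy as np

# Toy "ensemble": many logistic-map runs from slightly perturbed initial
# conditions; the spread of final values plays the role of a forecast
# distribution.
rng = np.random.default_rng(1)
n_members, steps = 10_000, 50

x = 0.3 + rng.normal(0.0, 1e-8, n_members)  # perturbed initial conditions
for _ in range(steps):
    x = 4 * x * (1 - x)

counts, edges = np.histogram(x, bins=10, range=(0.0, 1.0))
for count, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"{lo:.1f}-{hi:.1f}: {count}")
```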
For climate change they modify the model to simulate more CO2, more solar radiation or some other parameter of interest, and then run another ensemble. Once again the result will be a series of distributions over time, not a single value, though the information the modellers give us seems to leave out alternative solutions in favour of the peak value.
Models are generated by observing the earth: modelling land masses, air currents, tree cover, ice cover and so on. It's a great intellectual achievement, but it's still full of assumptions. As you'd expect, the modellers are always looking to refine the model and add new pet features. In practice there is only one real model, as any changes in one are rapidly incorporated into the others.
The key areas of debate are the interactions of one feature with another. For instance, the hypothesis that increased CO2 will result in runaway temperature rises is based on the idea that the melting of the permafrost in Siberia, due to increased temperatures, will release more CO2, and thus positive feedback will bake us all. The permafrost may well melt, or not, but the rate of melting and the CO2 released are not hard scientific facts but estimates. There are thousands of similar "best guesses" in the models.
As we've seen from looking at incremental learning systems, too much complexity is as fatal as too little. No one has any idea where the current models lie on that training/test curve, because they can't directly test the models.
However, dwarfing all this arguing about parameters is the fact that weather is chaotic.
We know of course that chaos is not the whole story. It’s warmer on average away from the equatorial regions during the summer than the winter. Monsoons and freezing of ice occur regularly every year, and so it’s tempting to see chaos as a bit like noise in other systems.
The argument used by climate change believers runs that we can treat chaos like noise, and so chaos can be "averaged out".
To digress a little, this idea of averaging out errors/noise has a long history. Take the example of measuring the height of Mount Everest before the days of GPS and radar satellites. The method was to start at sea level with a theodolite and take measurements of local landmarks, using their distance and their angle above the horizon to estimate their height, then to move on to those sites and do the same thing with other landmarks, moving slowly inland. By the time the surveyors got to the foothills of the Himalayas they were relying on many thousands of previous measurements, all with measurement error included. In the event, the surveyors' estimate of the height of Everest was only a few hundred feet out!
This is because all those measurement errors tended to average out. If, however, there had been a systematic error, like the theodolites all measuring 5 degrees high, the resulting error would have been enormous. The key thing is that the errors were unrelated to the thing being measured.
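A few lines of simulation make the distinction vivid (the error sizes and the 0.1-unit bias are arbitrary illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Independent, zero-mean errors largely cancel when accumulated...
random_errors = rng.normal(0.0, 1.0, n)
print("sum of random errors:  ", random_errors.sum())        # small relative to n

# ...but a systematic bias does not: it accumulates linearly.
print("sum with 0.1-unit bias:", (random_errors + 0.1).sum())  # ~ 0.1 * n
```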
There are lots of other examples of this in electronics, radio astronomy and other fields.
You can understand why climate modellers would hope the same to be true of chaos; in fact, they claim it is. Note, however, that the errors with the theodolites had nothing to do with the actual height of Everest, just as noise in radio telescope amplifiers has nothing to do with the signals from distant stars. Chaos, by contrast, is intrinsic to weather, so there is no reason why it should average out. It's not part of the measurement; it's part of the system being measured.
So can chaos be averaged out? If it can, then we would expect long-term measurements of weather to exhibit no chaos. When a team of Italian researchers asked last year to use my chaos analysis software to look at a time series of 500 years of averaged southern Italian winter temperatures, the opportunity arose to test this. I analysed the series in my chaos analysis program, ChaosKit.
The result? Buckets of chaos. The Lyapunov exponent was measured at 2.28 bits per year.
To put that in English, the predictability of the temperature roughly quarters for every further year ahead you try to predict or, the other way round, the errors more than quadruple each year (the growth factor is $2^{2.28} \approx 4.9$).
What does this mean? Chaos doesn’t average out. Weather is still chaotic at this scale over hundreds of years.
If we were, as climate modellers do, to run a moving average over the data to hide the inconvenient spikes, we might find a slight bump to the right, as well as many bumps to the left. Would we be justified in saying that this bump to the right was proof of global warming? Absolutely not: it would be impossible to say whether the bump was the result of chaos, and the drifts we've seen it can create, or of some fundamental change, like increasing CO2.
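This too is easy to demonstrate on the synthetic series from earlier: smooth the accumulated logistic-map walk with a moving average and trend-like bumps appear, with no external forcing whatsoever (the 50-point window is an arbitrary choice):

```python
import numpy as np

# Rebuild the accumulated (centred) logistic-map series...
x, series = 0.3, []
for _ in range(1000):
    x = 4 * x * (1 - x)
    series.append(x - 0.5)
walk = np.cumsum(series)

# ...then smooth it with a simple moving average.
window = 50
smoothed = np.convolve(walk, np.ones(window) / window, mode="valid")
print(smoothed[::100])  # slow "trends" emerge from pure chaos
```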
So, to summarize, climate researchers have constructed models based on their understanding of the climate, current theories and a series of assumptions. They cannot test their models over the short term, as they acknowledge, because of the chaotic nature of the weather.
They hoped, though, to be able to calibrate, confirm or fix up their models by looking at very long-term data, but we now know that's chaotic too. They don't, and cannot, know whether their models are too simple, too complex or just right, because even if the models were perfect, if weather is chaotic at this scale they could not hope to match them to the real world: the slightest errors in initial conditions would create entirely different outcomes.
All they can honestly say is this: “we’ve created models that we’ve done our best to match up to the real world, but we cannot prove to be correct. We appreciate that small errors in our models would create dramatically different predictions, and we cannot say if we have errors or not. In our models the relationships that we have publicized seem to hold.”
It is my view that governmental policymakers should not act on the basis of these models. The likelihood seems to be that they have as much similarity to the real world as The Sims or Half-Life.
On a final note, there is another school of weather prediction that holds that long-term weather is largely determined by variations in solar output. Nothing here either confirms or denies that hypothesis, as long-term sunspot records show that solar activity is chaotic too.
Andy Edmonds
Short Bio
Dr Andrew Edmonds is an author of computer software and an academic. He designed various early artificial-intelligence software packages and was arguably the author of the first commercial data mining system. He has been the CEO of an American public company and involved in several successful start-up businesses. His PhD thesis concerned time series prediction of chaotic series, and resulted in his product ChaosKit, the only standalone commercial product for analysing chaos in time series. He has published papers on neural networks, genetic programming of fuzzy logic systems, and AI for financial trading, and has contributed to papers in biotech, marketing and climate.
On his Website: http://scientio.blogspot.com/2011/06/chaos-theoretic-argument-that.html
@terry, The paper says “ensemble summary of measurements”. It compares them to models, but it is summarising measurements, not model runs. The measurements are of downwelling radiation at frequencies attributable to GHGs. Conservation of energy says that that energy must go somewhere. Get it? Next you will be telling me that we haven’t measured temperatures, all we have seen is the level of mercury in a glass tube, and correlation does not equal causation. But if you are more swayed by a picture of a Christmas tree, I can’t help you.
@Smokey, your “facts” are just out of context (in this case cherry picked, short timescale) graphs. But you think they are better than any study ever done, unless that study agrees with you. And all the old drivel about “pal-review”. Gimme a break! Write a paper that makes sense, it will get published. Show a graph of the last 8 years and claim that “warming has stopped, FACT”, you will not, except maybe in one “journal” (E&E).
Look, Lindzen, for example, questions sensitivity and asserts that negative feedback from clouds will damp down warming. He has a point, though many disagree with him. You are just silly. Your pictures appeal to folk who lack either the ability or the desire to look deeper. Sadly, there are many such people.
OK, we are off Anthony’s front page now, see you all on another thread.
PS. If you don’t like Mann’s hockey stick (for which the data IS available), there are plenty of others. Replication, get it? Even Wyner and McShane produced a hockey stick, though, bizarrely, most here seemed not to see it.
@KR, I suspect few here will follow your links, so I have transcribed part of Arrhenius, p265, as it is so mind-bogglingly prescient for 1896!!!
“The influence [of increased CO2] has a minimum near the equator, and increases from this to a flat maximum near the poles. … The influence is in general greater in the winter than in the summer … also greater for land than for ocean … [in the] Southern hemisphere, the effect will be less there than in the Northern hemisphere. … [the effect of increased CO2] will of course diminish the difference in temperature between day and night. ”
@everyone else. KR’s other link is good, too. Follow it and be enlightened!
John B fawns over KR’s link and discounts every one of the numerous links I’ve posted in this thread. That’s cognitive dissonance in action, folks. Harold Camping has nothing on the blinkered JB, who says:
“Show a graph of the last 8 years and claim that “warming has stopped, FACT”, you will not, except maybe in one “journal” (E&E).”
Wrong again, and glad to oblige: clickA, clickB, clickC clickD clickE clickF clickG clickH clickI clickJ clickK clickL clickM clickN clickO clickP
Prediction: John B will never accept any of those charts, because to admit that even one is valid will undermine the basis of his belief in catastrophic AGW. JB’s cognitive dissonance will not allow him to admit that the planet itself is falsifying his belief in the evil “carbon” demon, which is putatively causing runaway global warming — even though the real world evidence clearly debunks that nonsense. Unlike skeptics, who have nothing to prove, the believers in runaway global warming are forced to cherry-pick only what supports their improbable belief system. Too bad about that pesky reality, eh?☺
As we see in many of the charts above, the rise in temperature since the LIA is simply coincidental with the rise in CO2. It's a coincidence, see? The planet is warming naturally from the LIA, and that warming trend has not accelerated despite the 40% increase in CO2. If rising CO2 caused rising temperatures, the trend line would be accelerating. But it's not. In fact, temperatures are declining, which debunks the CO2 nonsense. And of course, none of the wild-eyed climate catastrophe predictions have come to pass.

But when someone is afflicted with cognitive dissonance, they respond just like true believer Harold Camping: the end of the world didn't happen as predicted, but that can't possibly mean that Harold was wrong, it only means that the end of the world has been postponed until October. Cognitive dissonance in action; Orwell's "doublethink". And since the rise in harmless, beneficial CO2 has not caused the predicted climate catastrophe, it only means that doom is somewhere down the road. It couldn't possibly mean that they were wrong about their CO2=CAGW conjecture. See how it works?

True believers can never admit that they were wrong about CO2=CAGW, even when Planet Earth is decisively falsifying their belief system. That's cognitive dissonance in action, and it is rarely curable. The earth could descend into the next great Ice Age, and John B would still believe that runaway global warming is coming.
John B:
One of several barriers to generalizing to AGW from measurements of the downwelling radiation that were made at discrete points in space and time is that it is the divergence of a heat flux vector, and not the divergence of a radiant energy flux vector, that warms Earth's surface. Heat is transferred by conduction and convection as well as by electromagnetic radiation. Climatologists have not yet gotten around to predicting the consequences of the various fluxes for temperatures at Earth's surface and comparing the predicted to the observed temperatures. In AR4, the comparisons are of projected to observed temperatures. While climatologists often confuse projections with predictions, unlike predictions, projections lack the property of falsifiability. Thus, while the climatology of AR4 can appear to confused climatologists to be a science, it is in fact a pseudoscience, from its lack of falsifiability.
"…As I and others have noted, a chaotic system can certainly possess stable averages and deviations…"
—
Noting something and proving something are not the same thing.
Beyond that, even if you have proven the previous point, you haven’t proven that climate is one of those chaotic systems with this property.
(oh, I see we are still on the front page, so let’s continue)
Smokey said: “Prediction: John B will never accept any of those charts, because to admit that even one is valid will undermine the basis of his belief in catastrophic AGW. “
I accept that they are charts! I accept that they are valid in the sense that they do not actively lie (though some are so badly labelled that it is hard to tell). But my overriding view is, “it’s the trend that matters”. Yes, temperatures have been flattish for the last few years, but not long enough to conclude that the trend on late 20C warming has stopped.
Now, you tell me what you think of Arrhenius’ work, over 100 years ago, where he predicted the effects of increasing CO2 levels in such detail and every one of those predictions has now been observed. Lucky guess? Coincidence? Both of those things are possible (hence words in AR4 such as “highly unlikely” rather than “fact”), but do you really think so?
AGW rests on the following:
* The characteristics of CO2-driven warming were predicted by theory (Arrhenius and others) and have been observed.
* No other plausible mechanism explains those observations.
"Internal variability" and all this chaos stuff are just obfuscation, amounting to saying "scientists don't know everything, so they don't know anything".
Have you read KR’s link on the history of climate science? It’s a good read.
Here are quite a few charts of the *mild* upward trend from the LIA. This particular set of charts ends in 2000, but since then temps have been declining.
That debunks the claim that CO2 causes any measurable warming. CO2 has risen rapidly since WWII, but global temperatures haven't deviated from the trend that began in the mid-1600s. Rational folks will understand that CO2 has not caused the predicted accelerated warming. Just the opposite has happened, in fact. See the comment in red at the bottom of the chart.
I suspect the alarmist crowd will fall back on Trenberth’s ‘hidden heat in the pipeline’. But of course, that is simply baseless conjecture.
We need to look at the entire hemisphere, not just a few cities. So the chart I will fall back on is something like this:
http://www.ncdc.noaa.gov/paleo/ei/ei_image/highlat.gif
Interestingly, it comes from the same source, NCDC/NOAA, as some of yours. It also shows that warming has been greater at higher latitudes.
Stop the name calling for a moment and tell me, honestly, what you make of that chart.
This argument misrepresents the semantics of the word “predict.” The subject of a prediction is the outcome of a statistical event and not (as alleged by John B) the characteristics of this event. For example, in the event of a coin flip the subject of a prediction is whether the outcome is heads or tails and not whether the coin is bent.
Terry Oldberg – “* The characteristics of CO2-driven warming were predicted by theory (Arrhenius and others) and have been observed
* No other plausible mechanism explains those observations”
Terry – This argument misrepresents the semantics of the word “predict.”
So – it quacks like a duck, has white feathers like a duck, has webbed feet like a duck. Obviously, by your logic, it's an elephant…
KR:
It sounds as though you missed my point. My point is that the entity predicted by a prediction is not a set of characteristics but rather is the outcome of a statistical event. To speak of predicting characteristics is to misuse the word “predict.” In the search for truth, it is crucial for all parties to use words properly for when they are used improperly a possible result is for a falsehood to seem true.
Terry:
Predicting a change with a particular set of characteristics is a prediction, especially when those characteristics can be used to distinguish between various causes of change. If I predict that dropping a vase will cause it to shatter, that’s a prediction of the effects of some change.
In this case, the characteristics of nights warming faster than days, polar amplification, increased IR at the surface, stratospheric cooling, etc., are all “fingerprints” of warming driven by greenhouse gas increases. And not the characteristics of solar increases, albedo changes, cosmic rays, or any of the alternative hypotheses for the warming.
We see the changes (global temperatures), the changes match the characteristics predicted for the greenhouse gas increases we have also measured – cause matches effect in all particulars.
KR, a night-time slowdown in cooling over land must be explained via the mechanics. These would be increased water vapor and cloud cover preventing nocturnal radiative cooling (such as you experience in a dry desert at night). Are you saying that no driver other than an anthropogenic increase in CO2 can cause this night-time slowdown in cooling?
Terry Oldberg says:
June 15, 2011 at 9:37 pm
“For example, in the event of a coin flip the subject of a prediction is whether the outcome is heads or tails…”
Not so! When flipping coins, a reasonable prediction would be “1 out of 2 tosses will land heads”, as an individual toss cannot be predicted. You could also predict “10 heads in a row will be seen in 1 out of 1024 sequences of 10 tosses”. These are statistical predictions, based on what is known about coins. If they are not met, you can reasonably suspect something fishy is going on. But notice how we can predict that short term runs against the prevailing trend will happen, but that eventually the trend will prevail. Statisticians would not be surprised that in a long enough run of tosses, 10 heads turn up occasionally, in fact they would expect it. Sound familiar?
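A quick simulation bears out the 1-in-1024 arithmetic; a minimal sketch using Python's standard library:

```python
import random

random.seed(0)
n_sequences = 1_000_000

# Count how often a sequence of 10 fair tosses comes up all heads.
# Theory predicts 1 in 2**10 = 1024 sequences.
all_heads = sum(
    all(random.random() < 0.5 for _ in range(10))
    for _ in range(n_sequences)
)
print(f"{all_heads} of {n_sequences} sequences: about 1 in {n_sequences // all_heads}")
```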
That is analogous to my use of the word predict, and to the way it is used in climate science. BTW, Arrhenius did provide numbers.
Smokey, I really would like to know what you think of my chart vs. yours. I think a discussion of that kind of evidence, what we each accept as evidence, and why, might be enlightening for both sides. But please stick to those charts to keep the discussion focussed. Anyone else like to weigh in?
Pamela Gray
A night-time reduction in cooling means less IR leaving to space. Clouds (of the appropriate type, namely high clouds entrapping IR) would do the job in this case, although satellite measures seem to indicate cloud cover decreasing with rising temperatures over the last 25 years. Low cloud cover increases should cool the earth by increases in albedo. Low cloud decreases alone would cause daytime warming but nighttime cooling.
So no – aside from stratospheric cooling, which is a GHG effect only, any one or two of the characteristics could be caused by something else. It's the group of characteristics that indicates AGW, not a single one or two. Nights warming faster than days, winters faster than summers, polar amplification, increased IR at the surface and decreased at the top of the atmosphere, a cooling stratosphere (as IR is blocked lower in the atmosphere), faster warming over land than water: increased GHGs are the only known cause that matches all of these characteristics. And since we know we're increasing GHGs, that shouldn't be a big surprise.
The Arrhenius link I posted earlier to his 1896 paper on “atmospheric carbonic acid” calls these characteristics out – except for the stratospheric cooling, which isn’t surprising since they hadn’t discovered it in 1896.
John B:
This thread has drifted off the track on which I tried to place it. I’ll try to put it back on this track.
You said (June 15, 2011): "The characteristics of CO2-driven warming were predicted by theory (Arrhenius and others) and have been observed." My objection was to your usage in this sentence of the word "predicted." This usage makes "characteristics" the entity that is predicted by a scientific theory, when it is the outcomes of independent statistical events that are predicted. I used "heads" and "tails" as an example because the set {heads, tails} contains a complete set of outcomes of a statistical event that is a coin flip.
If we agree that it is the outcomes of statistical events that are predicted by a theory, and not a set of characteristics, then the two of us are in a position to examine the scientific basis for AGW, for it is a set of independent, observed statistical events that provides the basis for testing the claims made by a theory. In a theory that is "scientific," these claims are testable.
AR4 fails to identify the set of independent observed statistical events by which AGW can be tested, or to reveal the results of such testing. Having searched without success for a description of these events, I believe they do not exist. If my belief is correct, AGW is not testable and thus lies outside science. The 2007 study by Green and Armstrong establishes that, at about the time AR4 was written, climatologists confused "projections" with "predictions," thus, perhaps, reaching the erroneous conclusion that AGW was testable and tested.
Terry Oldberg
I’m afraid your definition of testable does not match any I have run across in science, as it is far too limited.
Does the precession of Mercury's orbit, and the deflection of light near the sun, match the predictions of Einstein's theory of relativity? Yes or no? (Yes.) Did the Michelson–Morley experiment detect the difference in light speed expected due to the ever-present ether? Yes or no? (No.) And do the characteristics of the warming match greenhouse gas increases and not other possible causes? Yes or no? (Yes.)
You can do the same thing in the old game of Clue – was it the butler in the parlor with the candlestick? If so, you would predict certain answers to various tests, and finding out the answers to those predictions lets you identify the culprit.
Predicted consequences of an action are entirely testable, Terry, your (re)definition notwithstanding.
Hey Mr Lynn,
Perhaps you could explain why Smokey’s chart of specific cities is “ample evidence” while mine, of an entire hemisphere, is merely “hypotheses stemming from said conjecture”. I really don’t understand what constitutes “evidence” around here.
John
Oops, wrong thread, but the request still stands.
This pseudo-mathematical post is not only misleading, but fundamentally wrong. See the ACTUAL mathematics here:
http://tamino.wordpress.com/2011/06/14/chaos/#more-3883
KR:
By "testable" I mean "falsifiable" or "refutable." Your analogy to Einsteinian relativity fails from the fact that relativity makes predictions while AGW makes projections, and while predictions are testable, projections are not.
@terry Oldberg
You are correct in that the IPCC reports contain projections, but you are not correct in saying that climate science does not make predictions. "GHG-induced warming will cause stratospheric cooling, warmer nights, etc." are predictions. They are not exactly like the predictions of physics, but they are just like the predictions of, say, geology or evolution, in the sense that they predict things that we should find in the world if the theory is correct and not find if it is incorrect. E.g. evolutionary theory predicts that Australian mammals will be weird, and at the genetic level how weird. A prediction does not have to be to 10 decimal places to be valid; it just has to distinguish between hypothesis and null hypothesis. "AGW will cause nights to warm more than days" is just such a prediction.
John B:
Thanks for taking the time to respond.
I am unable to reconcile your claim that “AGW will cause nights to warm more than days” is an example of a prediction with the elementary ideas of statistical reasoning. As I understand these ideas in relation to global climatology they are as follows:
* As only a single object (the Earth) is under observation, a study of the climate is an example of a “longitudinal study.”
* A longitudinal study divides time into non-overlapping segments.
* Each segment is associated with an independent statistical event.
* A statistical “sample” is a subset of these events, each of which is observed.
* A “theory” is a procedure for making inferences.
* Inferences are prone to being incorrect.
* Predictions are associated with the kind of inference that is called a “predictive inference.”
* A predictive inference is a conditional prediction.
* In making a predictive inference, an extrapolation is made from the state of nature that I’ll call the “condition” to the state of nature that I’ll call the “outcome.”
* A “condition” is a condition on the associated theory’s independent variables such as “the CO2 concentration exceeds 400 ppm.”
* An “outcome” is a condition on the associated theory’s dependent variables such as “the global temperature anomaly exceeds nil.”
* A pairing of a condition with an outcome such as “the CO2 concentration exceeds 400 ppm and the global temperature anomaly exceeds nil” provides a partial description of an event.
* A "condition" is defined at the starting time of an event.
* An "outcome" is defined at the ending time of an event.
* A “prediction” is a proposition that assigns a numerical value to the probability of each of the several possible outcomes of a particular event at the time the condition of this event is observed but before the outcome is observed. For example, given the condition that “the CO2 concentration exceeds 400 ppm” a prediction assigns a numerical value to the probability of the outcome “the global temperature anomaly exceeds nil” and a numerical value to the probability of the outcome “the global temperature anomaly does not exceed nil.”
* The probabilities of the conditional outcomes are called “state transition probabilities.”
* A theory is tested by comparison of the numerical values that are assigned to the state transition probabilities by the theory to the relative frequencies of the same conditional outcomes in a sample that is reserved for testing.
Perhaps you could enlighten me on the relationship of these ideas to the entity that you describe as a “prediction.” Also, if the purpose of regulation of CO2 emissions is to control the global temperature anomaly don’t we need the outcomes to be defined on this anomaly?
Now apply your logic to the predictions of geology or evolution. Like I said, climate science is not quite like physics because, as you said, we only have one world.
It just occurred to me that there are people who doubt the scientific nature of evolution and geology, too. Creationists! Hmmm.
Terry Oldberg
Let's take this down to basics: a testable assertion that informs our knowledge.
If A then B
If Not A then Not B
B is testable
The state of B tells us whether A is true or not.
The prediction is a consequence (or a set of them, in the case of greenhouse gas fingerprints) that can be tested, and if those consequences turn out to be true, we've tested the prediction. That was the point of my earlier examples of Einstein (the orbit of Mercury, light deflection near the edge of the sun) and Michelson–Morley (the light-speed difference due to passage through the ether).
As John B said earlier, the IPCC report certainly contains some projections. But climate science as a whole contains a great number of predictions: the characteristics of CO2 absorption, temporal and spatial distributions of temperature change, temperature change versus greenhouse gas concentrations, etc.
You have overdefined yourself into a corner, I believe, with a definition that simply does not include a great deal of how science is done.