By Christopher Monckton of Brenchley
Dr Gavin Cawley, a computer modeler at the University of East Anglia, who posts as “dikranmarsupial”, is uncomfortable with my regular feature articles here at WUWT demonstrating the growing discrepancy between the rapid global warming predicted by the models and the far less exciting changes that actually happen in the real world.
He brings forward the following indictments, which I shall summarize and answer as I go:
1. The RSS satellite global temperature trend since 1996 is cherry-picked to show no statistically-discernible warming [+0.04 K]. One could also have picked some other period [say, 1979-1994: +0.05 K]. The trend on the full RSS dataset since 1979 is a lot higher if one takes the entire dataset [+0.44 K]. He says: “Cherry picking the interval to maximise the strength of the evidence in favour of your argument is bad statistics.”
The question I ask when compiling the monthly graph is this: “What is the earliest month from which the least-squares linear-regression temperature trend to the present does not exceed zero?” The answer, therefore, is not cherry-picked but calculated. It is currently September 1996 – a period of 17 years 6 months. Dr Pachauri, the IPCC’s climate-science chairman, admitted the 17-year Pause in Melbourne in February 2013 (though he has more recently got with the Party Line and has become a Pause Denier).
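For those who wish to replicate the calculation, it is a one-line regression inside a loop. Here is a minimal sketch in Python, run on synthetic numbers standing in for the RSS series; the function name, the 24-month minimum, and the data are illustrative, not the production script:

```python
import numpy as np

def earliest_flat_start(anomalies):
    """Index of the earliest month from which the least-squares linear
    trend to the end of the series does not exceed zero."""
    t = np.arange(len(anomalies))
    for start in range(len(anomalies) - 24):      # require at least two years of data
        slope = np.polyfit(t[start:], anomalies[start:], 1)[0]
        if slope <= 0:
            return start                          # first month with a flat/negative trend
    return None                                   # trend is positive from every start month

# Synthetic stand-in: 200 months of warming, then a flat 'pause'
rng = np.random.default_rng(0)
series = np.concatenate([np.linspace(0.0, 0.4, 200), np.full(212, 0.4)])
series += rng.normal(0.0, 0.1, series.size)
print(earliest_flat_start(series))
```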
2. “In the case of the ‘Pause’, the statistical test is straightforward. You just need to show that the observed trend is statistically inconsistent with a continuation of the trend in the preceding decades.”
No, I don’t. The significance of the long Pauses from 1979-1994 and again from 1996 to date is that they tend to depress the long-run trend, which, on the entire dataset from 1979 to date, is equivalent to a little over 1.2 K/century. In 1990 the IPCC predicted warming at 3 K/century. That was two and a half times the real-world rate observed since 1979. The IPCC has itself explicitly accepted the statistical implications of the Pause by cutting its mid-range near-term warming projection from 2.3 to 1.7 K/century between the pre-final and final drafts of AR5.
3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
One does not need anything as complex as a general-circulation model to explain observed temperature change. Dr Cawley may like to experiment with the time-integral of total solar irradiance across all relevant timescales. He will get a surprise. Besides, observed temperature change since 1950, when we might have begun to influence the warming trend, is well within natural variability. No explanation beyond natural variability is needed.
4. The evidence for an inconsistency between models and data is stronger than that for the existence of a pause, but neither is yet statistically significant.
Dr Hansen used to say one would need five years without warming to falsify his model. Five years without warming came and went. He said one would really need ten years. Ten years without warming came and went. NOAA, in its State of the Climate report for 2008, said one would need 15 years. Fifteen years came and went. Ben Santer said, “Make that 17 years.” Seventeen years came and went. Now we’re told that even though the Pause has pushed the observed trend below the 95% confidence interval of very nearly all the models’ near-term projections, it is “not statistically significant”. Sorry – not buying.
5. If the models underestimate the magnitude of the ‘weather’ (e.g. by not predicting the Pause), the significance of the difference between the model mean and the observations is falsely inflated.
In words often attributed to Mark Twain, “Climate is what you expect. Weather is what you get.” Strictly speaking, one needs 60 years’ data to cancel the naturally-occurring influence of the cycles of the Pacific Decadal Oscillation. Let us take East Anglia’s own dataset, HadCRUT4. In the 60 years March 1954-February 2014 the warming trend was 0.7 K, equivalent to just 1.1 K/century. CO2 has been rising at the business-as-usual rate.
The IPCC’s mid-range business-as-usual projection, on its RCP 8.5 scenario, is for warming at 3.7 K/century from 2000-2100. The Pause means we won’t get 3.7 K warming this century unless the warming rate is 4.3 K/century from now to 2100. That is almost four times the observed trend of the past 60 years. One might well expect some growth in the so-far lacklustre warming rate as CO2 emissions continue to increase. But one needs a fanciful imagination (or a GCM) to pretend that we’re likely to see a near-quadrupling of the past 60 years’ warming rate over the next 86 years.
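The arithmetic can be verified in a few lines, using the round figures above; the 86-year remainder assumes a 2014 vantage point:

```python
# Check of the required warming rate if the Pause has held temperature
# roughly flat from 2000 to the time of writing (2014).
projected_total = 3.7              # K of warming projected for 2000-2100 (RCP 8.5)
years_remaining = 2100 - 2014      # ~86 years left to fit it in
required_rate = projected_total / years_remaining * 100
print(f"required rate: {required_rate:.1f} K/century")              # ~4.3
print(f"vs the observed 1.1 K/century: {required_rate / 1.1:.1f}x") # ~3.9x
```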
6. It is better to understand the science than to reject the models, which are “the best method we currently have for reasoning about the effects of our (in)actions on future climate”.
No one is “rejecting” the models. However, they have accorded a substantially greater weighting to our warming influence than seems at all justifiable on the evidence to date. And Dr Cawley’s argument at this point is a common variant of the logical fallacy of arguing from ignorance. The correct question is not whether the models are the best method we have but whether, given their inherent limitations, they are – or can ever be – an adequate method of making predictions (and, so far, extravagantly excessive ones at that) on the basis of which the West is squandering $1 billion a day to no useful effect.
The answer to that question is No. Our knowledge of key processes – notably the behavior of clouds and aerosols – remains entirely insufficient. For example, a naturally-recurring (and unpredicted) reduction in cloud cover in just 18 years from 1983-2001 caused 2.9 W/m² of radiative forcing. That natural forcing exceeded by more than a quarter the entire 2.3 W/m² of anthropogenic forcing in the 262 years from 1750-2011 as published in the IPCC’s Fifth Assessment Report. Yet the models cannot correctly represent cloud forcings.
Then there are temperature feedbacks, which the models use to multiply the direct warming from greenhouse gases by 3. By this artifice they contrive a problem out of a non-problem: for without strongly net-positive feedbacks the direct warming even from a quadrupling of today’s CO2 concentration would be a harmless 2.3 K.
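The 2.3 K figure follows from two standard relations, neither taken from this article itself: the CO2 forcing formula of Myhre et al. (1998) and the conventional zero-feedback Planck response. A quick check:

```python
import math

# Quadrupled CO2 is two doublings: forcing = 5.35 * ln(C/C0) W/m^2 (Myhre et al., 1998)
forcing = 5.35 * math.log(4.0)     # ~7.4 W/m^2
planck = 0.31                      # zero-feedback Planck response, K per W/m^2
print(f"direct warming ~ {forcing * planck:.1f} K")   # ~2.3 K
```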
But no feedback’s value can be directly measured, or theoretically inferred, or distinguished from that of any other feedback, or even distinguished from the forcing that triggered it. Yet the models pretend otherwise. They assume, for instance, that because the Clausius-Clapeyron relation establishes that the atmosphere can carry near-exponentially more water vapor as it warms, it must do so. Yet some records, such as the ISCCP measurements, show water vapor declining. The models are also underestimating the cooling effect of evaporation threefold. And they are unable to account sufficiently for the heteroskedasticity evident even in the noise that overlies the signal.
But the key reason why the models will never be able to make policy-relevant predictions of future global temperature trends is that, mathematically speaking, the climate behaves as a chaotic object. A chaotic object has the following characteristics:
1. It is not random but deterministic. Every change in the climate happens for a reason.
2. It is aperiodic. Appearances of periodicity will occur in various elements of the climate, but closer inspection reveals that often the periods are not of equal length (Fig. 1).
3. It exhibits self-similarity at different scales. One can see this scalar self-similarity in the global temperature record (Fig. 1).
4. It is extremely sensitive to the most minuscule of perturbations in its initial conditions. This is the “butterfly effect”: a butterfly flaps its wings in the Amazon and rain falls on London (again).
5. Its evolution is inherently unpredictable, even by the most sophisticated of models, unless perfect knowledge of the initial conditions is available. With the climate, it’s not available.
Figure 1. Quasi-periodicity at 100,000,000-year, 100,000-year, 1000-year, and 100-year timescales, all showing cycles of lengths and magnitudes that vary unpredictably.
Not every variable in a chaotic object will behave chaotically: nor will the object as a whole behave chaotically under all conditions. I had great difficulty explaining this to the vice-chancellor of East Anglia and his head of research when I visited them a couple of years ago. When I mentioned the aperiodicity that is a characteristic of a chaotic object, the head of research sneered that it was possible to predict reliably that summer would be warmer than winter. So it is: but that fact does not render the climate object predictable.
By the same token, it would not be right to pray in aid the manifest chaoticity with which the climate object behaves as a pretext for denying that we can expect or predict that any warming will occur if we add greenhouse gases to the atmosphere. Some warming is to be expected. However, it is by now self-evident that trying to determine how much warming we can expect on the basis of outputs from general-circulation models is futile. They have gotten it too wrong for too long, and at unacceptable cost.
The simplest way to determine climate sensitivity is to run the experiment. We have been doing that since 1950. The answer to date is a warming trend so far below what the models have predicted that the probability of major warming diminishes by the month. The real world exists, and we who live in it will not indefinitely throw money at modelers to model what the models have failed to model: for models cannot predict future warming trends to anything like the resolution or accuracy needed to justify shutting down the West.
With reference to the question whether the climate can successfully be predicted over more than a few days, it is pertinent that an existing model successfully predicts whether yearly rainfall will be wetter or drier than the median in the watershed east of Sacramento, CA, over a forecasting horizon of one to three years. What makes this possible is that the model is the algorithm of an optimal decoder tuned to a message from the future. HDTV works on similar principles, but the decoder is tuned to a message from the past.
The particular problem that makes very long-run reliable climate prediction unavailable is the enormous and irremediable information deficit as to the initial conditions at any chosen starting moment. I have modeled many chaotic objects myself, from the Verhulst population model to the Mandelbrot fractal to the oscillation of a pendulum (which, under certain conditions, behaves as a chaotic object). It is naive in the extreme to imagine that we can gather enough information to render the climate sufficiently predictable; as catweazle666 rightly reminds us, even the IPCC has accepted this. It can’t be done. Chaos, therefore, is one of the reasons why the models are doing so badly, and why they will continue to do badly.
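Point 4 in the list above is easy to demonstrate with the Verhulst (logistic) map just mentioned. A minimal sketch, with an illustrative parameter value in the chaotic regime: two runs differing by one part in ten billion in their starting value part company within a few dozen steps.

```python
r = 3.9                        # logistic-map parameter in the chaotic regime
x, y = 0.3, 0.3 + 1e-10        # two nearly identical initial conditions

for step in range(1, 61):
    x = r * x * (1 - x)        # the Verhulst/logistic map
    y = r * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.1e}")
```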
Monckton of Brenchley:
Well said. Thirty-year forecasts are presently impossible because the available information is insufficient. Six-month to one-year forecasts are a possibility because the need for information is much less.
Monckton of Brenchley: It can’t be done. Chaos, therefore, is one of the reasons why the models are doing so badly, and why they will continue to do badly.
That’s where I disagree with you, and why I cited examples of its being done in physiological systems.
catweazle666: “In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
IPCC Working Group I: The Scientific Basis, Third Assessment Report (TAR), Chapter 14 (final para., 14.2.2.2), p774.
With all due respect for their efforts, I do not believe projections by the IPCC. A good example of a coupled non-linear chaotic system whose future states are reasonably predictable is given by Leloup and Goldbeter, “Modeling the molecular regulatory mechanism of circadian rhythms in Drosophila”, BioEssays, vol. 22, pp. 84-93, 2000, copyright John Wiley and Sons. The authors have a follow-up with a more complicated model applicable to mouse circadian rhythms. This work does not have the potentially civilization-shattering importance of modeling climate dynamics, but it does show that the quoted statement from the IPCC is not correct.
In fact, a chaotic quasi-periodic model of circadian rhythm with a capacity limited output transformation can produce an extremely stable circadian rhythm, with waking and sleeping falling within narrow time spans day after day. The principle is similar to the principle by which gradual input to the climate system can produce step-changes in temperature means, a topic of a previous interchange between Lord Monckton and me.
My modeling and Lord Monckton’s modeling (cited in his post just before mine) show that chaotic systems sometimes can and sometimes cannot produce accurate predictions of the future. The range of possibilities in nonlinear dynamic modeling is astonishing.
Matthew R Marler says:
April 3, 2014 at 4:40 pm
“but it does show that the quoted statement from the IPCC is not correct.”
I believe an alternative explanation is more likely. Circadian rhythm relies upon an outside clock, which makes the underlying chaotic system predictable. A similar mechanism is at work in predicting the earth’s ocean tides.
To the extent that orbital mechanics determines climate, climate is “predictable”. However, trying to forecast climate accurately from first principles, using forcings and feedbacks, is hopeless. It doesn’t work for the tides, and it will not work for the significantly more complex problem of climate.
Further to ferdberple’s remark, a stable oscillator such as an orbiting planet supplies a carrier frequency to which a decoder of a message from the future can be tuned. A complete mechanistic understanding of the associated phenomenon is not required in order to tune into this message.
DirkH says:
April 3, 2014 at 3:24 am
“But even Callendar 1938 outperforms today’s unverified, unvalidated computer models.”
This is true, but correlation is not causation; rather, Callendar’s work appears to do better than later work, though not necessarily because he understood the role of radiative gases in our atmosphere.
Reading the exchange between Callendar and Simpson makes it sadly clear that you are correct in saying “Climate Science has regressed over the last 70 years; and we would have achieved MORE by not doing any ‘research’ at all.”
One of the most interesting points about Callendar’s work is that he was well aware of the bias of urban heat islands, a point rarely raised by those seeking to use his work to defend AGW.
Melord: I repeat my congratulations on this excellent article.
“The usual approach in modeling processes over time – such as climate – is to start at the beginning, known as t0, and include as much information on all scales as the model can handle.”
I know, I know. And if the figures have actual meaning, say, as in building a bridge, then yeah, I agree. As would any engineer. But Climate (and War) are a whole different bag of beans and require a more constrained approach.
“This information is called the ‘initial conditions’. And one should not dismiss the potential influence of apparently minor events. It is in the nature of chaotic objects that even the smallest perturbation in the initial conditions can cause drastic bifurcations in the evolution of the object.”
I think that with war, as well as with climate, there are too many “initial conditions” to consider from the bottom up. So it has to start out very simply and from the top down. With climate models, I’d start with “PDO flux + Everything Else” and take it from there. Keep it down to half a dozen factors at first, and add as you go.
But don’t try to feed it all into one end by complex formula and expect anything but meaningless insanity to come out the other end. And that’s what CMIP did, as you accurately describe — and accurately diagnose. But I think the whole approach is wrong (as in futile), though you don’t seem to.
It is also worth recalling historical events. The Hyksos had the horse-drawn war chariot and the Egyptians didn’t. Guess who won the war.
Ah, but the Hyksos were blooded veterans. Fully made. Hungrier for the win. The Egyptians at the time were soft, by comparison. My money says the Hickies would have hit the Egyptians for six if the lot of them had been on foot — and the operational boyz would be chalking it all up to anglefire.
Look, melord, I sympathize. Really, I do. I cut my teeth on operational and tactical wargames. My current jaundiced prejudice is the result of having walked many paths.
My philosophical basis here is inductive, not deductive — by necessity, not design. Call it a kind of reluctant intellectual “advance to the rear”.
We see a bit of T-34 syndrome in the Eastern Front buffs, too. Now, don’t get me wrong. The T-34/76 was the right tank in the right place at the right time. Best tank in history, in terms of effect (and let’s not forget the later /85). It gave the Russians a big jump over the PzIII, even the H and J (50 mm) models, and it wasn’t until spring of ’42 that the Germans had upgunned the PzIV to the long 75 (the L/43), which brought them back to parity. (And KV 76-85 and so on, and Tiger I / PzIVH / Panther and so forth. We could go on all night. Things shifted back and forth.) It was all very important. But I wouldn’t dream of ascribing the Soviet victory to tank design, as important as the T-34 was.
No, wait! We forgot the Optics and Radio Rules! And, OMG, what about the turret rings? It never ends. And before you can say “Advanced Squad Leader”, you wind up with a mishmash of seductive tactical porn at the unaffordable price of complete strategic disconnect.
M Simon says:
April 3, 2014 at 12:05 pm
“Patch: Change the value of the variable ‘Earth Rotation frequency’ from 0.0000000 to 1.1574074E-5 Hertz.
Your number is not only in error, it is a variable and not a constant.”
Yes I know, I truncated it; the 407 triplet repeats indefinitely.
And I prefer MY number to Your “correct number.”
One possible “model” of the climate (whatever the blazes the climate is) would be a complete list of all the raw data values of whatever constitutes the climate that have been measured over the period for which the model is purported to be valid.
Such a model must be correct by definition, since it exactly reproduces the observational data against which any model is compared.
The aim of climate modeling then becomes a matter of simply reducing the number of data entries in the model while still being able to replicate all the observed values from the reduced-element model.
A really good model of some process would be a closed-form equation, with some small number of derived parameters, from which the outcome of any experimental observation of the system can be predicted with some degree of certainty.
It would appear that existing climate models require more parameters than the total number of experimentally observed values of the “climate.” That is a really lousy bargain, and it explains why you need a terrafloppy computer.
george e. smith:
You are describing the method for construction of a model that features a number of adjustable parameters whose numerical values are extracted from the observational data. The method of extraction generally employs an intuitive rule of thumb known as a “heuristic”; minimization of the squared error is an example. In any circumstance in which one heuristic selects a particular value, a different heuristic extracts a different value. In this way the method of heuristics violates non-contradiction, a principle of logic.
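Whatever one makes of the logical claim, the underlying observation is easily demonstrated: two fitting heuristics applied to the same data extract different parameter values. A minimal sketch with made-up data (the single outlier is what separates the two answers):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + rng.normal(0.0, 1.0, 50)   # underlying slope is 2.0
y[-1] += 25.0                            # one outlier

# Heuristic 1: minimize the squared error (least squares, closed form)
ols = np.sum(x * y) / np.sum(x * x)

# Heuristic 2: minimize the absolute error (grid search, for transparency)
grid = np.linspace(1.0, 3.0, 2001)
lad = grid[np.argmin([np.sum(np.abs(y - b * x)) for b in grid])]

print(f"least-squares slope:  {ols:.3f}")   # dragged upward by the outlier
print(f"least-absolute slope: {lad:.3f}")   # stays near the underlying 2.0
```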
I have for many years used computer modeling to predict the performance of various optical systems that I have designed and actually had manufactured, in numbers counted in billions for some systems.
You could be using some of them right now.
If your computer mouse glows red underneath, the chances (today) are about 50:50 that it has one of my patented macro camera lenses in it (1:1 close-up, of about 1.5 mm focal length), and it also has a sophisticated non-imaging optical illumination system of my design. At one time the chances were about 90:10, but now it is about 75% likely that you have my design, or some Hong Kong phooey knockoff from my patents. If your mouse glows blue underneath, forget it; I had nothing to do with that, but I know who did. The blue is strictly for sex appeal; no virtue whatsoever; but very cool.
Optical design has two major branches: imaging and non-imaging.
Imaging is self-explanatory: light from a point on an object is directed to a corresponding point on an image, no matter at what angle (within reason) it leaves the object.
In non-imaging optics, no point-to-point correspondence is required. One seeks to direct as much light as possible from a given non-point source area onto a non-point sink area, and source and sink could be shapes other than flat surfaces. Where the rays end up is not important; we call it photon herding: get all the horses into the corral. The reverse would be photon stampeding: get the cows out of the barn onto the pasture. That’s the aim of LED lighting.
Actually NIO is every bit as difficult as IO, but it has its own set of “you can’t get there from here” rules. You can see an example every day in the tail lights of the car in front: an array of bright spots in a mostly dark field.
The second law of thermodynamics won’t allow you to uniformly illuminate the whole area over the range of viewing angles required.
Modelling these things on a computer is a very hierarchical process.
You can start with something as simple as Newton’s thin-lens formula (for imaging): xy = f^2, where x and y are the object and image distances measured along the axis from the two focal points. All kinds of pestilence are hidden by that simple model, and you can complicate it some by changing to the Gauss equations for a thick lens, possibly with three different optical media for object space, lens, and image space.
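For concreteness, Newton’s relation in a few lines of Python (the numbers are illustrative):

```python
# Newton's thin-lens relation: x * y = f**2, with x and y the object and
# image distances measured from the front and rear focal points.
f = 50.0          # focal length, mm (illustrative)
x = 200.0         # object sits 200 mm beyond the front focal point
y = f**2 / x      # image forms beyond the rear focal point
print(f"image lies {y} mm beyond the rear focal point")   # 12.5 mm
```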
Sequential ray-tracing programs can trace millions of geometrical-optics rays through any number of optical surfaces: refractive, reflective, or even diffractive. The geometrical-optics approximation assumes that rays have a wavelength of zero, so that there are no diffraction effects. Non-imaging (NIO) ray tracing can send millions of rays from hundreds of sources to any number of objects, hitting some but not others, and just carrying on till they get stopped somewhere.
Then you can do real physical optics, where the programs can resort to Maxwell’s equations, or even propagate coherent Gaussian beam modes.
The point is that at any stage in the design you have to ask: just how big a gun do I need for this fight?
If you launch into a physical-optics battle right at the outset, you are likely to have no insight whatsoever as to what optical structure has a chance of working at all.
So all of the models in the hierarchy have their place, and you had better start off with bare fists before resorting to any firearms.
I’m sure this isn’t much different from designing a carbon-fiber aeroplane on a computer.
But the designer has to know at what point in the design to crank up the modeling horsepower a notch, to get closer to what the natural laws of the universe will allow you to accomplish.
I’m happy to be able to say that at least 97% of my finished designs (of this and that) actually went into production. As far as I know, not one of them (designs that is) failed to perform within the expected range of behavior, with acceptable manufacturing yields.
Modelling real systems is a standard part of engineering, and our customers expect our stuff to do what we claim it will do.
We’d all be unemployed, if our idea of par, was what passes for climate modeling.
3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
Okay, the next time I have a few hundred million dollars going spare I’ll give it a go.
Another excellent piece by Christopher Monckton.
Of course, it was computer models that gave us the BBQ summer, and computer models which predicted that the recent British winter would have slightly below average rainfall.
I would think that the IPCC’s claims that computer models can predict the climate in 50 or 100 years time are fraudulent.
I would think that, to successfully predict the future, at least three conditions must be fulfilled:
1. The physical laws that drive the system are perfectly understood.
2. The system is non-chaotic.
3. The initial conditions are known with perfect precision.
None of these conditions are met, even remotely.
On the other hand, computer models that predict the Earth’s position in 100 years are probably right. That’s because planetary-dynamics models meet all three conditions. For example, the Newtonian and relativistic physical laws are very well understood and can provide predictions to many decimal places of precision. Compared with this, climate models are laughably deficient.
But how to get this over to the Camerons and Obamas of this world….
Chris
Chris Wright:
It is not logically proper to reach a conclusion on the issue of whether computer models can predict the climate in 50 or 100 years’ time, as the word “predict” is polysemic. Details on why this is so are available at http://wmbriggs.com/blog/?p=7923 .
3. “No skeptic has made a GCM that can explain the observed climate using only natural forcings.”
Once again: it is not the job of skeptics to explain anything.
Rather, it is the job of skeptics to falsify or deconstruct all conjectures and hypotheses that claim observed variables are not the result of natural variability.
But so far, all such conjectures have failed. Therefore, we are left with Occam’s Razor: the most likely explanation for current observations is natural climate variability.
The onus is on the alarmist crowd to show that currently observed conditions are un-natural.
In this, the alarmist crowd has failed miserably. But they should keep trying, in their futile effort to flog the carbon scare. While the rest of us will continue laughing at them…
george e. smith says:
April 4, 2014 at 12:04 am
…
I heartily agree with everything you said, but I want to highlight this part emphatically. I don’t ‘model’ in any of the usual direct ways we think of modeling. But something both academic scientists and non-technical folk seem to fail to grasp is that in engineering the damn thing has to do what we claim it will. That’s what we do. I don’t have to be a master of the theoretical science to apply the principles, but the science had darn well better be correct, or the application will fail. This is why I harp endlessly on the failed projections. If you want me to accept your science and add it to my engineering toolbox, you had better be able to demonstrate that your theory accurately and usefully describes something in reality that we care about. I got no use for wrenches that can’t be used to tighten bolts, and I’m not going to pretend I can tighten bolts if my wrenches don’t work.
“It would appear that existing climate models require more parameters than the total number of experimentally observed values of the ‘climate.’ That is a really lousy bargain, and it explains why you need a terrafloppy computer.”
And my premise is that a terrafloppy won’t do. All you get is a bunchload of entirely false precision — as the train flies off the track and plunges down the ravine.
It’s as bad as trying to reproduce the Eastern Front using “Advanced Squad Leader”. Or “Doom”.
You’ll do better with pencil and paper (using one side of the page), as crude as that may be.
“Your number is not only in error, it is a variable and not a constant”
Earth rotation rate is not constant. Furthermore, the rate of rotation you gave corresponds to a solar day and not a sidereal day; the latter gives a number much closer to the Earth’s current angular velocity of 1.160576…x10^-5 revolutions per second.
We could calculate that number more precisely, but there aren’t exactly 375.25 days per year. And that inexactness changes over time due to gravitational fluid torques on the Earth. These things, I think, account for the irregularity with which so-called leap seconds are applied.
With that, I have exceeded what I know. What I do know for certain is that the Earth’s inertial angular velocity is not well-characterized by calculations that use 1 rev/24 hours.
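For reference, the arithmetic connecting the two numbers in this exchange: the Earth makes one extra rotation per year relative to the stars, so the sidereal day is shorter than the 86,400-second mean solar day. A quick check:

```python
solar_day = 86400.0                      # seconds, by definition of the mean sun
days_per_year = 365.25                   # solar days in a year
rotations_per_year = days_per_year + 1   # sidereal rotations: 366.25
sidereal_day = solar_day * days_per_year / rotations_per_year

print(f"sidereal day  ~ {sidereal_day:.1f} s")      # ~86164.1 s
print(f"solar rate      {1/solar_day:.7e} rev/s")   # 1.1574074e-05 (M Simon's number)
print(f"sidereal rate   {1/sidereal_day:.7e} rev/s")# ~1.1605763e-05 (Slartibartfast's)
```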
The correct description of Chaos is “Stochastic behavior in a Deterministic System” or lawlessness in a system governed entirely by laws.
ferdberple: Circadian rhythm relies upon an outside clock, which makes the underlying chaotic system predictable.
This is only semi-true. Animals, including humans, maintain a circadian rhythm in the absence of external cues. The “circa” in circadian means “approximately”, and a human or hamster without external cues will maintain a rhythm with a period of about 24.5 or 24.0 hours, respectively. The rhythm depends on a feedback in the expression and transcription of genes, and the system has been well studied in animals and plants whose genes and transcription factors can be directly manipulated. In humans the stability of the rhythm has been studied in at least one cave dweller, in “forced desynchrony” routines where, for example, the light sequence is 90 min on and 30 min off for multiple weeks, and in submarine crews (4 hrs on, 4 hrs off, and other unnatural schedules). What the external cues, mainly sunrise and sunset, do is re-synchronize the natural oscillator to maintain the activity cycle appropriate to the animal in its niche: e.g., hamsters go to sleep near sunrise, and mosquito females forage for blood predominantly near sunrise and sunset.
For more information, start here: http://ccb.ucsd.edu/; check out the videos of the circadian rhythm machinery (SCN, etc.). For an example of a forced-desynchrony study with measures of circadian rhythms in melatonin, core body temperature, cortisol, and visual acuity, try:
D. F. Kripke, S. D. Youngstedt, J. A. Elliott, A. Tuunainen, K. M. Rex, R. L. Hauger, and M. R. Marler, “Circadian phase in adults of contrasting ages”, Chronobiology International, 22: 695-710, 2005 (and papers that they cite).
For examples of circadian rhythms in hamsters absent external cues read some of the papers by J. A. Elliott of that author list.
Starbuck: The correct description of Chaos is “Stochastic behavior in a Deterministic System” or lawlessness in a system governed entirely by laws.
There is no single short description of “chaos”. Characterizations include: large Lyapunov exponents of opposite signs; functions of two or more periods where the ratio of the periods is irrational; strange attractors; functions that go through a region of phase space with a perfect periodicity but no point with a perfect periodicity; and others.
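The first of those characterizations can be made concrete in a few lines: a sketch estimating the largest Lyapunov exponent of the logistic map by averaging the log of the map’s derivative along an orbit (parameter value illustrative; a positive result signals exponential divergence, i.e. chaos).

```python
import math

r, x = 3.9, 0.4
for _ in range(100):                    # discard transients
    x = r * x * (1 - x)

total, n = 0.0, 100_000
for _ in range(n):
    total += math.log(abs(r * (1 - 2 * x)))   # log |f'(x)| along the orbit
    x = r * x * (1 - x)
print(f"largest Lyapunov exponent ~ {total / n:.3f}")   # ~0.5 > 0 for r = 3.9
```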
I picked up both definitions from Ian Stewart in his book “Does God Play Dice?” (1989), page 17. His book (along with Mandelbrot’s and others’) became the basis for my studies of chaos. Indeed, the word “chaos” itself has different connotations here than in the everyday world. The definition was proposed at an international conference held by the Royal Society in London in 1986 to address the distinction.
Stewart supplied the interpretation, “lawlessness in a system governed entirely by laws”; the Royal Society proposed “stochastic behavior in a deterministic system”.
I hope this clears it up a bit, ca 1989!
Slartibartfast says:
April 4, 2014 at 8:08 am
“‘Your number is not only in error, it is a variable and not a constant’
Earth rotation rate is not constant. Furthermore, the rate of rotation you gave corresponds to a solar day and not a sidereal day; the latter gives a number much closer to the Earth’s current angular velocity of 1.160576…x10^-5 revolutions per second.
We could calculate that number more precisely, but there aren’t exactly 375.25 days per year…”
Well, I could just take a wild-ass guess and get closer than your number.
My first stab at it would most likely be 365.25 days.
And I don’t have a clue about the angular rotation rate of our galaxy, or of any larger universal entity, so I cited the value I get in a rotating frame of reference, with the mean-sun direction vector as my zero angular reference.
So I’ll stick with my 86,400-second day, since the starting erroneous assumption was an infinite day length, which is definitely incorrect.
But when you do your more accurate model, don’t forget to reference it as a sidereal model, so we can correctly assign any discrepancies.
Why would you want a model where the earth does not circumnavigate the sun? That’s as erroneous as an infinite day length.
Mr Oldberg continues to write complete nonsense, misdefining and misapplying “heuristic” and wittering on in his usual futile way about his notion that people should not use the word “predict” in the sense of “predict” because, he feels, it can mean something other than “predict”. Well, the IPCC uses “predict” when it means “predict”, and so do I, and so does everyone except Mr Oldberg. The fact is that the IPCC’s predictions of global temperature change have failed and failed and failed again, and trolling to the effect that their predictions were not predictions, or that one cannot make predictions because of Mr Oldberg’s barmy interpretation of the word “predictions” will not conceal that fact.
The RSS data for March 2014 are now available, and there has been no global warming for 17 years 8 months. The trend is flat. The IPCC did not predict that. The models did not predict that. They were wrong. Get over it.
starbuck: “Stochastic behavior in a Deterministic System”.
Nevertheless, the descriptions that I wrote are those that appear in the mathematics.
Back to climate for a moment: because some of the drivers are themselves random (volcanoes, variations in the particles that provide nucleation centers for condensation, etc.), the climate really should be thought of as a random dynamical system, and these have characteristics a little different from their deterministic counterparts.
For an encyclopedic overview: Ludwig Arnold, “Random Dynamical Systems”, Springer, 1998.
A particular example: “The Stochastic Brusselator: Parametric Noise Destroys the Hopf Bifurcation” by Ludwig Arnold et al., in “Stochastic Dynamics”, edited by H. Crauel and Matthias Gundlach.
For neurophysiological examples: “Noisy oscillators” by Bard Ermentrout
and
“Statistical models of spike trains” by Liam Paninski, Emery N. Brown, Satish Iyengar and Robert E. Kass
both of those articles in “Stochastic Methods in Neuroscience”, edited by Carlo Laing and Gabriel Lord.
Monckton of Brenchley: Mr Oldberg continues to write complete nonsense, misdefining and misapplying “heuristic” and wittering on in his usual futile way about his notion that people should not use the word “predict” in the sense of “predict” because, he feels, it can mean something other than “predict”.
I am with you on that.
To extend the discussion a little, without reference to anything in particular said by anyone on this thread: a problem with statements like “GCMs predict” or “a chaotic system is not predictable” is that we fail to specify the appropriate error bounds that would make a prediction usable, and the conditions in which such an error bound is achievable. Clearly the GCMs are running too hot now. But all predictions are made, and all applications of mathematical modeling are carried out, with an error of approximation. Even linear and polynomial models, as well as trigonometric polynomials, have errors of approximation, and these may be severe outside the conditions in which the models have been fitted and tested. Chaotic models differ from those only in that the error of approximation grows faster, not that the others are error-free.
How accurate would a climate model have to be in order to be useful? I would propose, to start the discussion, that an rms error of 0.25 C over 120 years, for the mean, the two quartiles, and the 5% and 95% points of the distribution (with some degree of temporal and regional specificity, say Central Ohio in June, noon and midnight), with much lower average bias (i.e. not a growing upward or downward bias, as exhibited by current GCMs), would be useful. A model capable of that is not going to be developed any time soon, and certainly could not be tested any time soon. Nothing in the knowledge base, mathematical analysis or empirical science, implies that this is intrinsically impossible, only that it hasn’t been done yet, and that it will be hard to do.
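That proposed criterion can at least be written down precisely. A minimal sketch, with placeholder arrays standing in for real model output and observations (the helper name and toy data are hypothetical):

```python
import numpy as np

def looks_useful(model, obs, tol=0.25):
    """rms error within tol, and no bias trend that accumulates past tol."""
    resid = np.asarray(model) - np.asarray(obs)
    rmse = np.sqrt(np.mean(resid ** 2))
    drift = np.polyfit(np.arange(resid.size), resid, 1)[0]  # bias trend per step
    return rmse <= tol and abs(drift * resid.size) <= tol

years = np.arange(120)
obs = 0.01 * years                                    # placeholder 'observations'
model = obs + np.random.default_rng(3).normal(0, 0.1, 120)
print(looks_useful(model, obs))                       # True for this toy case
```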
Matthew Marler:
Thanks for offering your views. I offer the following counter-examples in refutation of your claim that “…all predictions are made, and all applications of mathematical modeling are carried out, with an error of approximation.”
The claim that “the global temperature will be 15 Celsius at midnight GMT on January 1, 2030” is sure to be falsified by the evidence unless the position is adopted that what is meant is that the temperature will be “about 15 Celsius.” That it will be “about 15 Celsius” is an example of an equivocation, for the word “about” is polysemic. That the claim is an equivocation renders it non-falsifiable and thus non-scientific. In particular, there is no temperature value, however distant from 15 Celsius, that renders the claim false.
The approach to stating a claim that is illustrated above may be contrasted to the approach resulting in the claim that “the global temperature will lie between 15 and 16 Celsius at midnight GMT on January 1, 2030.” In this case, the claim is stated unequivocally. If the observed temperature is less than 15 or greater than 16, this claim is false. Otherwise it is true.
The entities that Monckton of Brenchley would evidently like to be free to call “predictions” are equivocations similar to the one stated above. It would be well if the terms of arguments about global warming were to be disambiguated such that decision makers were not misled through applications of the equivocation fallacy. To do so is my recommendation. For the person who favors a logical approach to scientific research this recommendation has no downside. It is, however, not currently being practiced.
starbuck: “Stochastic behavior in a Deterministic System”
That’s true for long-term predictions but not short-term predictions. Take the heart-rate example, and assume for now that you know your heart rate is 70 bpm. Taking your most recent beat as time 0 (and assuming a reasonably accurate timekeeper), you can predict reasonably well when the next 3-10 heartbeats will occur, but not the 70th. The same is true of breathing, and of a spiking neuron: for the latter, if you know the dynamics and the time of the most recent spike, you can reasonably predict the times of the next 3-10 spikes, but not the 30th.
That’s to amplify my point earlier: when speaking of “predicting”, one should specify the time over which the prediction is intended to be accurate, and the accuracy needed for the prediction to be useful.
I’m with you on that. I studied my own heartbeats at the time I was following Chaos theory and saw that, as a synchronized free-running oscillator, it was not going to be a good timekeeper! I especially relied on that observation and the departures from regularity to illustrate to my physician that a drug he was prescribing did indeed interfere with that regularity.
Your comment to Monckton is quite sensible. The most important takeaway from this thread is the critique of modelling as a predictor. My gut response for a number of years has been to be cautious of the conclusions being drawn, not because I have the skills to produce or even analyze models, but because of the falsification of scientific determinism by Karl Popper in his book “The Open Universe”. Determinism seems to be driving predictions from models.
What I never see mentioned is concern over what actions contemplated might actually do to climate. It seems most people feel that shutting down use of carbon fuels will restore us to earlier, comfortable times. I am not so sure, not at all.
Maybe I am missing this as I don’t read each and every thread on this subject, let alone all the papers!
Thanks for your comments, Matthew.
Personally, Mr Oldberg, and with respect to your opinion, anyone who tries to predict future weather, or makes statements that can be challenged, should take it on the chin. There is one thing in science that no one can predict: what the future holds, especially considering the theory of chaos. If someone is silly enough to swim in a lake full of man-eating sharks and gets eaten, that is not chaos; it is a likely outcome. If the earth is suddenly hit by an asteroid coming from the direction of the sun, where it can’t be spotted, that causes chaos: it was completely unexpected. If a volcano erupts suddenly, with no warning, it creates chaos. But the hypothesis that doom for humans and living organisms will follow unless we cut CO2 to acceptable levels (what are they, exactly?), and that cutting it will prevent drastic climate change, is a dicey model on which to base predictions. We don’t know. Personally I think some North Americans would like their winter to end right now and would welcome an increase of 2 C, whereas a drop of 5 C would have more effect on precipitation and crop growing. Predicting future temperatures and weather is much like telling fortunes to gullible people. We don’t know, and the unpredictable is not science.
bushbunny:
It sounds as though you misunderstand my position.