Study: 'Chaos Seeding' impairs the interpretation of Numerical Weather Models

This paper was published in late 2017, and we didn’t notice it then. Today, thanks to a tip from Dr. Willie Soon via Willis Eschenbach, we notice it now. The paper is open access; see the PDF link below.

Seeding Chaos: The Dire Consequences of Numerical Noise in NWP Perturbation Experiments

Abstract

Studying changes made to initial conditions or other model aspects can yield valuable insights into dynamics and predictability, but is associated with an unrealistic phenomenon called chaos seeding that can cause misinterpretations of results.

Perturbation experiments are a common technique used to study how differences between model simulations evolve within chaotic systems. Such perturbation experiments include modifications to initial conditions (including those involved with data assimilation), boundary conditions, and model parameterizations. We have discovered, however, that any difference between model simulations produces a rapid propagation of very small changes throughout all prognostic model variables at a rate many times the speed of sound. The rapid propagation seems to be due to the model’s higher-order spatial discretization schemes, allowing the communication of numerical error across many grid points with each time step. This phenomenon is found to be unavoidable within the Weather Research and Forecasting model even when using techniques such as digital filtering or numerical diffusion.

These small differences quickly spread across the entire model domain. While these errors initially are on the order of a millionth of a degree with respect to temperature, for example, they can grow rapidly through nonlinear chaotic processes where moist processes are occurring. Subsequent evolution can produce within a day significant changes comparable in magnitude to high-impact weather events such as regions of heavy rainfall or the existence of rotating supercells. Most importantly, these unrealistic perturbations can contaminate experimental results, giving the false impression that realistic physical processes play a role. This study characterizes the propagation and growth of this type of noise through chaos, shows examples for various perturbation strategies, and discusses the important implications for past and future studies that are likely affected by this phenomenon.
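The abstract’s claim that perturbations propagate “many times the speed of sound” through high-order stencils is easy to see in miniature. The sketch below is not WRF’s actual numerics, just generic 1D linear advection with a 4th-order central difference: the 5-point stencil has half-width 2, so a one-point perturbation of 1e-6 touches two extra grid cells per side every time step, regardless of how slowly the physical signal itself moves.

```python
import numpy as np

# Minimal sketch (not WRF): u_t + c u_x = 0, forward Euler in time,
# 4th-order central differences in space (5-point stencil, half-width 2).
nx, dx, dt, c = 200, 1.0, 0.1, 1.0      # CFL = 0.1: physics moves 0.1 cells/step
u = np.zeros(nx)                         # control run
v = u.copy()
v[100] = 1e-6                            # seed one tiny perturbation

def step(f):
    # 4th-order central first derivative, periodic boundaries
    dfdx = (-np.roll(f, -2) + 8*np.roll(f, -1)
            - 8*np.roll(f, 1) + np.roll(f, 2)) / (12*dx)
    return f - c*dt*dfdx

for _ in range(10):
    u, v = step(u), step(v)

footprint = np.flatnonzero(np.abs(v - u) > 0)
# After 10 steps the physical signal has moved only ~1 cell, but the
# perturbation footprint spans cells 80..120: 2 cells per side per step.
print(footprint.min(), footprint.max())
```

The amplitudes at the footprint’s edge are astronomically small, but as the paper argues, in regions of moist convection “astronomically small” is enough to seed divergence.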

From the conclusion:

Here the phenomenon of chaos seeding was discovered within the WRF grid point model, although other studies with different grid point modeling systems suggest the issue is more generalized since common numerical schemes are the cause. Spectral models likely suffer the same issue of chaos seeding given their inherent, instantaneous communication of perturbations across the entire modeling domain. Thus, chaos seeding within perturbation experiments appears to be a universal modeling problem. In turn, our hope with this study is to bring an awareness to this relatively unknown issue to the field of atmospheric sciences, and other fields where chaos seeding may plague perturbation experiments, such that attempts can be made by researchers to remove potential misinterpretations from their work. From a predictability perspective, chaos seeding presents an intrinsic limit on the predictability of certain features since even if nearly all sources of error can be removed in a numerical weather forecast, any tiny error in any limited part of the domain will rapidly seed the entire model grid with other tiny errors, which will subsequently evolve wherever the atmosphere supports rapid perturbation growth. Ensemble sensitivity and EOF analysis were two techniques presented here that have the potential to mitigate chaos seeding in perturbation experiments toward distinguishing realistic processes, and we hope that these and other new techniques can be used to ensure that chaos seeding does not harm the integrity of modeling experiments in a variety of scientific disciplines.

This paper shows there is an issue in short-term numerical weather models that span hours to days. Since we know chaos amplifies with time, one wonders how much of long-term climate modeling is affected by this.
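That amplification is easy to reproduce in miniature. The sketch below uses Lorenz-63, the classic toy chaotic system, as a stand-in (it is not the paper’s WRF setup): two runs differ only by a millionth in one variable, the same order as the paper’s seeded temperature noise, yet end up far apart.

```python
import numpy as np

# Lorenz-63 with simple forward-Euler stepping: a toy stand-in for a
# weather model, used only to illustrate growth of a 1e-6 perturbation.
def lorenz_run(state, steps=5000, dt=0.005,
               sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = state
    for _ in range(steps):
        x, y, z = (x + dt*sigma*(y - x),
                   y + dt*(x*(rho - z) - y),
                   z + dt*(x*y - beta*z))
    return np.array([x, y, z])

a = lorenz_run([1.0, 1.0, 20.0])
b = lorenz_run([1.0 + 1e-6, 1.0, 20.0])   # the "millionth of a degree" seed
# The separation has grown by many orders of magnitude; it saturates at
# the attractor's size rather than growing forever.
print(np.linalg.norm(a - b))
```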

The PDF of the paper: https://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-17-0129.1

 

Phantor48
February 27, 2018 12:35 pm

Chaos seeding probably has no effect whatsoever on climate predictions. After all, “the science is settled.” /sarc

Bryan A
Reply to  Phantor48
February 27, 2018 2:22 pm

Doesn’t everyone realize yet that the Climate System (as indicated by every model) is a numerically linear, non-chaotic system? Introducing Chaos into it just messes with the math that has been flawlessly perfected in the modeled system. Plus, everyone knows that Chaos is only a mathematical construct that doesn’t exist in real life.

Latitude
Reply to  Bryan A
February 27, 2018 2:35 pm

/snark LOL

Robert W Turner
Reply to  Bryan A
February 27, 2018 2:51 pm

You have an example of chaos outside a model?

Phil.
Reply to  Bryan A
February 27, 2018 2:55 pm

Robert W Turner February 27, 2018 at 2:51 pm
You have an example of chaos outside a model?

‘Knock’ in an internal combustion engine.

Reply to  Bryan A
February 27, 2018 3:11 pm

RT, rework in North America’s largest truck assembly plant. See my peer reviewed microeconomics paper, A New Productivity Paradigm, for details.

paqyfelyc
Reply to  Bryan A
February 27, 2018 5:19 pm

W Turner
“You have an example of chaos outside a model?”
Your heartbeats. If it turns perfectly regular, hurry up to the hospital.

Philo
Reply to  Bryan A
February 28, 2018 8:38 am

Robert Turner: Chaos outside of climate models?? Turbulent flow in almost any fluid (water, air, oil, etc.). A multi-segment pendulum: a single-arm pendulum is perfectly predictable with classical physics, but a two-segment pendulum can suddenly go into chaotic behavior when you try to accelerate it, and the more segments it has, the worse it is. Cell growth in cancerous tumors, a key indicator of cancer vs. non-cancer.
Need more?

Crispin in Waterloo
Reply to  Phantor48
February 27, 2018 9:18 pm

Things that are settled are called ‘the dregs’.

February 27, 2018 12:45 pm

How much of a problem is this with the GFS and ECMWF models? They use ensembles to show a range of uncertainties. It seems to me that this problem is in the precision of numerical calculations, and increasing the precision can stave off significant effects of this problem by at least a few days.

Sparky
Reply to  Donald L. Klipstein
February 27, 2018 12:52 pm

Wouldn’t ensembles propagate the small errors in each model and multiply the errors? Inquiring minds want to know.

Sparky
Reply to  Sparky
February 27, 2018 4:20 pm

hmmmn, this website ain’t big enough for the both of us.
I’m going to have to change my name.

Nick Stokes
Reply to  Donald L. Klipstein
February 27, 2018 1:17 pm

Yes, ensembles do provide a control. The error due to “chaos seeding” is fully expressed in the variability already observed between ensemble members, so it isn’t something new that has gone unaccounted for in the past. It’s basically a mechanism with fast propagation due to the spectral or other methods used to gain speed in the dynamical core.

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 2:04 pm

Yes. I gave it, if you thought to listen.

Robert W Turner
Reply to  Nick Stokes
February 27, 2018 2:52 pm

Wasn’t there a post about circular reasoning published on WUWT today?

Alan Tomalty
Reply to  Nick Stokes
February 27, 2018 3:02 pm

“Nick, have you got anything to back up your assertions?”
No, he is just spouting off the top of his head as usual. Avoiding errors in perturbations only works if you understand what is going on in the basic physics and also have the computing resources to globalize local phenomena. Dr. Pat Frank has proved that all the climate models have an enormous error factor because they don’t understand clouds and they can’t model them even if they did understand them. Clouds work on a local basis, but collectively they are a massive determinant of both weather and climate. Even if we understood clouds completely (and we don’t), no amount of computer resources would be able to globalize their effect; the models are therefore forced to divide the atmosphere into zones that are way too big to catch the local phenomena that clouds represent. It isn’t enough to say that some clouds on average rain and some don’t, or that some clouds reflect radiation better than others. If we could predict the weather a year in advance I might have confidence in climate models. Looked at another way, a climate model is simply a long, long range predictor of weather.

Reply to  Nick Stokes
February 27, 2018 3:31 pm

Nick, while I appreciate dissent in any form (and I truly do), something struck me about the quality of your input. I get the impression that you do a cursory examination of a given topic through a medium such as Wikipedia or some similarly unscientific amalgamation of surface-level factoids, then stumble onto this site presenting yourself as an intellectual. Now, I could be wrong, and if so I’ll gladly apologize, but it really appears that you feign intellect and knowledge rather than having a reasonable conversation with reasonable words.

Curious George
Reply to  Nick Stokes
February 27, 2018 4:26 pm

Nick, are all runs of a model included in an ensemble, or does someone select the runs to include?

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 5:02 pm

George,
“are all runs of a model included in an ensemble”
Yes. That is what ensemble means. I linked elsewhere the WMO guidelines. There wouldn’t be any point in calculating ensemble members and not using them.

menicholas
Reply to  Nick Stokes
February 27, 2018 5:09 pm

No tossing out the outliers?

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 5:16 pm

“No tossing out the outliers?”
No, not without evidence that the computing has gone wrong. One point of having a large ensemble is that outliers don’t much affect the mean.

paqyfelyc
Reply to  Nick Stokes
February 27, 2018 5:42 pm

Stokes
“There wouldn’t be any point in calculating ensemble members and not using them.”
I beg to differ. I DID quite a few simulations, and the likes of me ALWAYS discard results of runs that are obviously nonsensical. There is good rationale for that: we know that the simulation is just that, a simulation, not a perfect map of reality. So it is quite normal that unrealistic results appear, and just as normal to discard them, while retaining those runs that aren’t obviously out of touch with reality (notice that they may still be out of touch; but that’s not obvious).
Pretty much as a statistician would get rid of points he thinks tainted by some error, before any operation on the set.
The trouble is,
1) by doing this, you pretend that the model never produced those nonsensical runs, and don’t cope with the reason why it actually did;
2) you let your prejudice run the show, complete with false negatives (some kept results that were actually unrealistic) and false positives (some discarded results that were actually perfectly realistic).

Curious George
Reply to  Nick Stokes
February 27, 2018 5:50 pm

Nick, your guidelines are for weather models, not for climate models. No IPCC.

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 6:02 pm

“the like of me ALWAYS discard results of runs that are obviously nonsensical”
The CFD simulation is dynamic. Because timestep is held back by tendency to instability, they go as close as they can. The mode of “nonsensical” is that instability. But there is a saving grace for unstable recurrence relations. Nonsense blows up. If they haven’t blown up, there is a good chance that they are reasonable. I doubt if they get many outliers.

Reply to  Nick Stokes
February 27, 2018 6:15 pm

I spent a number of years in the chaotic world of fractals, strange attractors and non-linear systems designing a stock market application. Nick’s ability to have errors ‘fully expressed’ comes as a grand revelation to this old man. Why oh why didn’t I come to that obvious understanding back then? Darn.
https://notonmywatch.com/?p=679

Phil.
Reply to  Nick Stokes
February 27, 2018 6:19 pm

paqyfelyc February 27, 2018 at 5:42 pm
Stokes
“There wouldn’t be any point in calculating ensemble members and not using them.”
I beg to differ. I DID quite a few simulation, and the like of me ALWAYS discard results of runs that are obviously nonsensical. There is good rationale for that: we know that the simulation is just that: a simulation. Not a perfect map of reality. So it is quite normal that unrealistic results appear, and just as normal to discard them, while retaining those runs that aren’t obviously out of touch with reality (notice that they may still be out of touch; but that’s not obvious)
Pretty much as a statistician would get rid of points he thinks tainted by some error, before any operation on the set.
The trouble is,
1) by doing this, you pretend that the model never produced those nonsensical runs, and don’t cope with the reason why it did, actually
2) you let your prejudice run the show, complete with false negative (some kept results that were actually unrealistic) and false positive (some discarded results ware actually perfectly realistic).

Personally, I used Chauvenet’s criterion to reject outliers in a systematic way; that was with experimental data.
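For readers who haven’t met it, here is a minimal sketch of Chauvenet’s criterion: fit a normal distribution to the sample, and reject any point whose expected count of equally extreme values in a sample of this size falls below one half. The data values below are invented for illustration.

```python
import math

def chauvenet(data):
    """Keep only points that pass Chauvenet's criterion: a point is
    rejected if n * P(|Z| >= z) < 0.5 under a fitted normal."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    keep = []
    for x in data:
        z = abs(x - mean) / std
        tail = math.erfc(z / math.sqrt(2.0))   # two-sided tail probability
        if n * tail >= 0.5:
            keep.append(x)
    return keep

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 15.0]  # one suspect value
print(chauvenet(readings))   # the 15.0 is rejected, the rest survive
```

Unlike eyeballing “nonsensical runs,” the rejection threshold here is fixed in advance, which is the systematic quality the comment is pointing at.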

Curious George
Reply to  Nick Stokes
February 27, 2018 6:20 pm

You link to “Guidelines on Ensemble Prediction Systems and Forecasting”. IPCC does not do predictions nor forecasting. They engage in projections.

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 6:31 pm

“IPCC does not do predictions “
The topic of this thread is Numerical Weather Prediction. That is what I was writing about.

Crispin in Waterloo
Reply to  Nick Stokes
February 27, 2018 9:32 pm

Nick
“The error due to “chaos seeding” is fully expressed in the variability already observed between ensemble members.”
That is an unsupportable assertion. It is also unprovable, as it implies many variables are controlled.
There is no relationship between what various models produce and the problem the paper discusses. Biasing ten models in favour of a certain result is not better or more informative than biasing one model towards a certain result.
The paper describes a problem with climate models that literally cannot be resolved by building more of them or ‘averaging their outputs’ as in an ‘ensemble mean’. The average of ten wrong answers is less likely to be accidentally correct than one wrong answer.

Reply to  Nick Stokes
February 27, 2018 9:33 pm

Alan Tomalty February 27, 2018 at 3:02 pm:
Regarding climate models, with things such as clouds and lack of a 1-year weather forecast: I see two different things here.
The first thing is where climate models are going wrong. The main thing I see going wrong in climate models, especially the CMIP5 ones, is groupthink leading to lack of consideration of multidecadal oscillations. These climate models were tuned and/or selected for success at hindcasting the past, especially the 30 years ending with 2005 or a similar period such as 1970-2005. During that time period, multidecadal oscillations, including AMO, the longer-term component of PDO, and ENSO smoothed by several years, were on an upswing, and global temperature was rising fast as part of a periodic signal that is easily visible in global surface temperature datasets, especially HadCRUT2 and HadCRUT3, and still visible in the latest version of HadCRUT4. I see this multidecadal upswing causing about 0.2 degree C of the warming from 1970 or 1975 to 2005, which the climate models are attributing to increase of greenhouse gases because they do not consider multidecadal oscillations. I see this resulting in climate models finding above-reality climate sensitivity and incorrect modeling of the cloud albedo and water vapor feedbacks, including the tropical upper troposphere “hotspot” of especially rapid warming that Dr. Christy points out as being much closer to nonexistent than to showing up as predicted.
So, I see the problem with climate models as groupthink looking for model support of rapid global warming and high climate sensitivity. The groupthinking group selected and/or tuned the models to show the results that the groupthinkers were looking for. If the climate models were in the hands of scientists who are scientist-first, I would expect the average of the climate models to be impressively close to accurate for having predicted global warming and warming of the tropical upper troposphere. The problem I see is not so much problems with the climate models, more who is hired to develop, tune and select them.
The second thing I see with climate change forecast models is comparison to weather forecasting. I have an analogue: Modeling of a Class D audio amplifier. The climate model analogue here is forecasting of output voltage on a millisecond to multiple milliseconds scale, or how characteristics of this amplifier (or output thereof) would change if parts of this amplifier were changed. The weather model analogue is predicting a microsecond-by-microsecond schedule of the states of the output switching transistors.

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 9:55 pm

“These climate models were tuned and/or selected for success at hindcasting the past, especially the 30 years ending with 2005 or a similar period such as 1970-2005.”
Do you really know that? I don’t think it is true.

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 10:06 pm

Crispin
“The paper describes a problem with climate models”
It doesn’t. It makes no mention of climate models. It describes a problem with numerical weather forecasting models. It is a problem of very rapid (hypersonic) propagation of very small variations as the program starts up. They say this then “seeds chaos”.
But chaos doesn’t need seeding. Soon enough there will be plenty of it. I don’t know why they think this initial period is so important for NWP. But it certainly isn’t for climate. As people say endlessly, chaos grows (over some days), and once you have it, you have it. GCMs look for the attractor.
“That is an unsupportable assertion.”
It is a very simple and obvious proposition. Currently, in the medium term, NWP use ensembles. These are independent runs. The difference between them includes everything that chaos can do, and also “chaos seeding”. If it isn’t showing up there, then it doesn’t matter. If it is, we have a measure of the effect in the ensemble variation. That is an empirical observation; it isn’t affected by theoretical discoveries about chaos.
Climate models use ensembles too, and results are expressed in terms of averages of many runs. Chaos (and seeding) are already built into that. We are already seeing the results.

MarkW
Reply to  Nick Stokes
February 28, 2018 6:30 am

If your models are producing outliers, that’s just evidence that you need better models.

Reply to  Nick Stokes
February 28, 2018 9:32 am

Any electronic computer model has limits intrinsic to the machine used, often called the machine limit: the smallest number the machine’s binary arithmetic can represent that is different from zero. When you try to work with numbers below that limit you get a pattern of possible zeros, including the real zero: tiny difference errors that can affect the results wildly.
Numerical computer methods can minimize the error and work fine for structural analysis such as bridge building, where the safety factors are 3, 5, or even 10. The airliner manufacturers have developed that to a fine art: any modern airliner is both safe and extremely efficient (as shown by operational accident records) when operated within its limits. Pushed beyond those limits it tends to disintegrate into a ball of parts.
If the numerical methods are limited to manipulating fractions, and rational and irrational numbers are treated as strings of digits that are many orders of magnitude long, the effects of errors can be arbitrarily limited in the final calculations: 2/4 * π = 2 * 3.14159… (carried to 2–300 digits) / 4. It only works if you have lots of time and lots of computational resources.
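The machine-limit point is easy to demonstrate directly. The sketch below shows double precision’s roughly 16 significant decimal digits, a difference that vanishes because it falls below that limit, and a representable tiny difference amplified by a chaotic recurrence (the logistic map here is only a stand-in for a weather model):

```python
import sys

# Double precision carries about 16 significant decimal digits; a
# relative difference below that "machine limit" simply vanishes.
eps = sys.float_info.epsilon
print(eps)                      # about 2.22e-16

a = 1.0
b = 1.0 + 1e-20                 # below the limit: unrepresentable
print(a == b)                   # True

# A representable tiny difference, fed through a chaotic recurrence
# (the logistic map at r=4), is amplified step after step:
x, y = 0.3, 0.3 + 1e-15
for _ in range(60):
    x, y = 4*x*(1 - x), 4*y*(1 - y)
print(x == y, abs(x - y))       # the two runs have visibly diverged
```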

Curious George
Reply to  Nick Stokes
March 1, 2018 4:23 pm

Nick, you are 100% right. This is not about IPCC spaghetti graphs. My apologies.

D. J. Hawkins
Reply to  Donald L. Klipstein
February 27, 2018 4:04 pm

Film-to-video uses what’s called a “three-two frame pull down” technique to account for the mismatch between film frame rates (24 fps) and video frame rates (29.97 fps). Models also incorporate techniques to “pull down” the result to something approaching normalcy when the numerical calculations threaten to blow up. They won’t allow “unrealistic” results. Kind of like the Australian BOM cutting off low temperature readings at certain stations because they “know” it can’t get that low.

Nick Stokes
Reply to  D. J. Hawkins
February 27, 2018 5:07 pm

“Models also incorporate techniques to “pull down” the result to something approaching normalcy when the numerical calculations threaten to blow up”
They don’t, and it wouldn’t work. “Blow up” means there are solutions growing despite laws like conservation of energy. If this happens, it means the discretisation isn’t representing the differential equations properly. There will be not just one spurious solution, but many of them. You can’t damp them all.

GoatGuy
February 27, 2018 12:52 pm

There is an unspoken induction about this, though, isn’t there? The implication is that remarkable sensitivity to initial values and to input chaos-randomization levels, together with the propagation of these influences at “many times the speed of sound” (kind of an interesting topic in its own right, no?), means that whatever a model predicts ought to be treated with deep suspicion, due to the hypersensitive reaction of the model’s active components to otherwise seemingly subtle input jigging dynamics.
Seriously!
GoatGuy

menicholas
Reply to  GoatGuy
February 27, 2018 5:12 pm

Seriously.
To me, this all just restates and adds to what we already know.
The atmosphere is far too large and far too complex to model accurately.
And we see that in the results, despite all the tuning and fiddling they can think of.

ccscientist
February 27, 2018 12:55 pm

see also
Loehle, C. 2017. Epistemological Status of General Circulation Models. Climate Dynamics DOI 10.1007/s00382-017-3737-7

Scottish Sceptic
February 27, 2018 12:55 pm

The other end of this scale of problems is that initial seeding creates variability in the model which then tends to be extinguished, leaving less and less actual variability in the system. To illustrate this effect with a simple example, imagine people going to a football match by train. You might set them to all leave their houses at a set of totally random times, but because they only travel on a set of discrete trains, they will all arrive at the train stations near the football ground at the same time. And, unless you reintroduce randomness (as happens in the real world), because many many paths have converged, they will now follow precisely the same path through the model almost irrespective of their starting conditions. They will therefore all get to the turnstiles simultaneously, etc.
This is a huge problem for weather models, because in a similar way a large number of starting conditions tend to coalesce on a limited number of states. So whilst they might start off with perhaps 100 unique starting conditions, that scale of variability reduces the longer the model is run. The likely result is that a very limited set of possible future scenarios is explored, almost irrespective of the degree of initial variability. Only by constantly re-adding variability can you maintain variability in the system.
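The train analogy can be put in a few lines of code (all numbers invented for illustration): a thousand continuously distributed departure times collapse onto at most four discrete arrival states, exactly the loss of variability described above.

```python
import random

# Fans leave home at random times over two hours, but can only ride one
# of four discrete trains (departing at minutes 30, 60, 90, 120), so a
# thousand distinct starting conditions collapse onto four arrival states.
random.seed(1)
departures = [random.uniform(0, 120) for _ in range(1000)]   # minutes
arrivals = [min(t for t in (30, 60, 90, 120) if t >= d) for d in departures]

print(len(set(departures)), len(set(arrivals)))   # ~1000 states in, 4 out
```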

Severian
February 27, 2018 12:59 pm

But just remember what the modelers say, the integral of uncertainty and error over time equals accuracy. Derp.

John Haddock
Reply to  Severian
February 28, 2018 3:12 pm

Brilliant!

Editor
February 27, 2018 1:03 pm

Of course Chaos plays havoc in the GCM and Weather models …… whatever happened to RG Brown?

Hugs
Reply to  Kip Hansen
February 27, 2018 1:22 pm

*wears a tinfoil hat*
Yes, what happened to him? Was the duke telling him to shut up?

Reply to  Kip Hansen
February 27, 2018 3:17 pm

Miss him also. But from personal experience, at least three things ‘catch up’. 1.Age and energy. 2. Other interests (my main case now that I ‘understand’ CAGW). 3. End of life or equivalents. Is a factor for me in re my significant other, as she is ‘failing to thrive’. So guest post output here and at Judith’s is way down cause other stuff is much more important.

D. J. Hawkins
Reply to  ristvan
February 27, 2018 4:09 pm

Sorry to hear about your SO. Demographics suggest that my wife will be dealing with this situation someday, not I, but life is full of surprises. Some of them unpleasant.

Reply to  ristvan
February 27, 2018 5:24 pm

DJH, yup. I am there. So was my father, but for different reasons (Mom had MS plus metastatic breast cancer). Repeat is ironic. Life is what it becomes. Be strong and have faith.

Kristi Silber
Reply to  Kip Hansen
February 27, 2018 11:02 pm

Kip – I’m wondering just what you meant by “havoc,” and if it is different between the GCM and weather models. I hear the models are quite different, and I suspect the way chaos is treated is one of the ways in which they differ. True?

Editor
Reply to  Kristi Silber
February 28, 2018 7:04 am

Kristi ==> By “havoc” I mean that the results of the numerical weather or climate models rapidly diverge from accurate predictions depending on initial conditions — any tiny tiny tiny change in starting values causes differences in output which grow over time. Weather models are usually run only for days (on a detailed basis) and thus escape the worst effects — but weather models are basically guesses after several days.
For a fuller explanation, see Chaos and Models.
Models don’t “treat” chaos at all — modelers ignore the chaotic nature of their models or, like Stokes, say they take the average of chaotic results (which is a really nutty idea).

February 27, 2018 1:07 pm

yahyahyhayhahyah

Reply to  Ben Bitner
February 28, 2018 11:54 pm

Chaos is intrinsic in the climate and weather of dear old planet Earth. The flapping of a butterfly’s wings can cause chaotic weather in distant climes.

Edwin
February 27, 2018 1:14 pm

Several decades ago a resource economist I was working with at the time gave me a paper. The authors had set up two analogue computers running identical data sets for an extremely complex system. One of the computers was accidentally turned off for no more than a second, then immediately turned back on. They checked the run. They did not notice at that point that the two systems were any different, but did take note of the event. When the experiment had run its time they got significantly different outputs. When they went back to the time of the brief power outage they determined just how minor the difference was. They ran the experiment a second time, but this time made very minor, almost minuscule changes in the initial conditions. Again they got significantly different results. That piqued my interest in chaos theory, though I make no claim at being an expert. I do know that weather forecasting for a given area is little better than it was 40+ years ago when I was going to sea regularly. I believe it is because the forecasters are too dependent on models.

John Harmsworth
Reply to  Edwin
February 27, 2018 3:06 pm

That’s what makes climate modelling so superior! The results are determined before the computer is even programmed! Perfect results every time!

Extreme Hiatus
Reply to  John Harmsworth
February 27, 2018 11:43 pm

Not just the models. The whole thing is faked.
https://realclimatescience.com/visualizing-noaanasa-us-data-tampering/

Kristi Silber
Reply to  Edwin
February 27, 2018 11:15 pm

It’s the different data. The weather system is too chaotic to model past about 10 days; that is simply a quality of the scale and parameters considered. Climate modeling uses a different scale and parameters and can have the chaos accounted for without the model becoming destabilized and giving crazy results that would never happen in reality. Because the chaos does bring in some error, the model is run many times and an “average” of those is the result, similar to the fact that the “average” of the different models is a better overall predictor than any individual one (although the new generation seems to be better in some respects).
It may be helpful to consider that the development of climate models has always been limited by computing ability and time, but this is always changing.

MarkW
Reply to  Kristi Silber
February 28, 2018 6:32 am

The biggest limitation on climate models is all the stuff we don’t know yet.

paqyfelyc
Reply to  Kristi Silber
February 28, 2018 11:36 am

It may be helpful to consider that you need exponentially growing computing power just to have incrementally better models.
For instance, a BILLION-fold increase was necessary for accurate weather forecasts to go from 3 days to 7, and you’ll need another billion-fold increase to reach 10 days, and another to reach two weeks (at this point, you need a billion of billions of billions more computing power than you needed for the 3-day forecast).
And there you run into the “wheat and chessboard problem”.
That’s why it is said that chaotic systems (which include climate) are unpredictable.
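Here is the chessboard arithmetic, plus a hedged version of the horizon-extension argument; the 1.5-day error-doubling time below is an illustrative figure for the calculation, not a number from the paper or the comment.

```python
# The "wheat and chessboard" growth: one grain on the first square,
# doubling on each of the 64 squares.
grains = sum(2 ** k for k in range(64))
print(grains)                    # 18446744073709551615

# If forecast errors double every 1.5 days (illustrative), then keeping
# the same final error while extending the horizon from 3 to 7 days
# requires shrinking the initial error by a factor of 2**((7-3)/1.5).
doubling_time_days = 1.5
shrink = 2 ** ((7 - 3) / doubling_time_days)
print(round(shrink, 2))          # about 6.35x smaller initial error
```

The exponential lives in the initial-error requirement: each further fixed extension of the horizon multiplies, rather than adds to, the accuracy you must buy.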

Alasdair
Reply to  Edwin
March 2, 2018 7:03 pm

Edwin:
Yes: A decade ago I attended a Climate Course and we were told of a computer simulation experiment which modelled an area of land. Every time they ran it, it produced a different answer, no matter how accurately the start conditions were defined.
“In a chaotic system butterfly wings rule” seemed to be the conclusion. Since then I have surmised that the phenomenon might best be explained by quantum mechanics at the nano level where lengthy iteration calculations reveal the inherent uncertainties in competing physical processes.
Just a thought to ponder upon. All way beyond me.
However; what is apparent is the danger of reaching conclusions from model outputs which are then translated into draconian and coercive policies to the detriment of humanity.

Alasdair
February 27, 2018 1:22 pm

This is brilliant! I haven’t read it yet, but look forward to it.
For years now I have despaired at the absence of any references to Chaos and its implications in the modelling scene.
My own knowledge of Chaos Theory is minimal and only based on James Gleick’s book: “Chaos” 1989. — Well worth a read.
Perhaps this may knock a bit of sense into the debate; but very much doubt the Warmist Proponents will even bother to read it. Not their scene.
This propagation of tiny errors (butterfly wings?) across grid areas seems to me to be an intrinsic property of Chaos; but I merely surmise. Perhaps a factor of quantum mechanics??🤔🤔🤔

Germinio
Reply to  Alasdair
February 27, 2018 3:24 pm

Actually it is the presence of chaos that makes climate modelling possible. Chaos is due to the presence of a strange attractor, which means that all trajectories converge to the attractor no matter what the initial conditions are. Thus you do not need to know the initial conditions exactly to forecast the climate, whereas the weather is different, which is what this paper is about.
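The convergence claim can be illustrated with Lorenz-63 as a stand-in chaotic system (a toy sketch, not a climate model): wildly different initial conditions both settle into the attractor’s bounded region, yet end up at unrelated points on it. The statistics converge; the trajectories do not.

```python
import numpy as np

# Lorenz-63 via forward Euler: two runs from very different starting
# points, integrated long enough for transients to die out.
def run(state, steps=10000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = state
    for _ in range(steps):
        x, y, z = (x + dt*sigma*(y - x),
                   y + dt*(x*(rho - z) - y),
                   z + dt*(x*y - beta*z))
    return np.array([x, y, z])

a = run([1.0, 1.0, 1.0])
b = run([-20.0, 30.0, 5.0])
print(a, b)   # both bounded on the attractor, but at unrelated points
```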

MarkW
Reply to  Germinio
February 27, 2018 3:37 pm

According to Germino, no matter what we do to the climate, the climate will always return to the “strange attractor”. In other words, there is no need to panic.
Either that or Germino has no idea what he is talking about. Again.

Germinio
Reply to  Germinio
February 27, 2018 4:03 pm

Mark,
Climate change is then a change in the shape of the strange attractor. Ice ages are a good example where the stable climate (i.e. the strange attractor) can change dramatically over a short period.

MarkW
Reply to  Germinio
February 27, 2018 4:27 pm

If it can change shape that easily, then it was never an attractor in the first place.
BTW, the climate has never been stable. Not even close.

Nick Stokes
Reply to  Germinio
February 27, 2018 4:54 pm

“Chaos is due to the presence of a strange attractor which means that all trajectories converge to the attractor no matter what the initial conditions are.”
Exactly so.

paqyfelyc
Reply to  Germinio
February 27, 2018 6:28 pm

@Germinio Stokes
facepalm and LOL
You don’t know what a strange attractor is, do you? So why do you talk about this subject?
Read the Wikipedia article Attractor#Strange_attractor again; that will do no harm. Not sure it will prevent you from writing such nonsense as “the presence of chaos that makes climate modelling possible”, but, who knows, you may understand that you definitively didn’t understand what you were talking about, be curious to learn, and meanwhile just STFU.
http://www.stsci.edu/~lbradley/seminar/attractors.html
Plotting the attractor takes much more than running simulations, and even if you could do that for climate (which you cannot, actually), you still wouldn’t know where the heck the system would be at any given time. Not even as a probability (an attractor is NOT a probability density function).

Nick Stokes
Reply to  Germinio
February 27, 2018 7:12 pm

“You don’t know what a strange attractor is, do you?”
Yes, I do. I posted an extensive study of the Lorenz attractor in 2016. In this post, I made an interactive gadget that lets you put in arbitrary starting conditions and visualise (in 3D) the resulting solution. In this post, I did a closer study of the singular points, and said a bit more about the climate implications.
I spent most of my working life in a research institution dealing with the numerics and analysis of non-linear systems of differential equations; the last half in CFD.

Germinio
Reply to  Germinio
February 27, 2018 8:22 pm

paqyfelyc,
I do know what a strange attractor is and have studied them extensively in the context of
nonlinear optical systems and lasers.

Graemethecat
Reply to  Germinio
February 28, 2018 12:32 am

Of course, all trajectories will return to the strange attractor. The only problem is that it is impossible to determine WHEN this will occur.

paqyfelyc
Reply to  Germinio
February 28, 2018 5:00 am


QED. Being able to make a widget obviously doesn’t prevent you from writing nonsense, like “The initial information is lost, and does not recognisably affect the outcome. But that is a plus, because we usually weren’t able to measure an initial state anyway”.
Seriously? You wrote that? Just read it again. What did you mean? That a loss of information allows better prediction? That, since we know we’ll never have some information, it’s better not to know what its effect would have been, because… you can pretend it doesn’t matter?
Or “climate is the attractor”.
No, climate is not the attractor.
The attractor is just a collection of possible climates, which the system will experience and switch between deterministically and yet with every appearance of randomness. And that includes all the climates already experienced, from tropical in Antarctica to a snowball Earth, and some more.
I think the explanation for why you write “climate is the attractor” is that you understand that weather is chaotic, and you think climate is the attractor of weather. Well, it’s not. Climate is just some average of weather, not its attractor, and climate is chaotic in its own right, with its own attractor.
@germinio
the last paragraph also applies to you. Climate change is just some movement along the attractor, not a change of attractor.

Nick Stokes
Reply to  Germinio
February 28, 2018 12:10 pm

“Seriously? you wrote that? “
Yes. And it’s true. It’s why GCM runs start “wound back”, starting a few decades earlier than they want to, even though that uses a lot of computer time. They are then using a start point about which they have less information. But what is important about the starting point is not that it is accurate for a point in time, but that it is physically consistent. For example, flows should be largely divergence-free. From measurements you don’t get that, and so you start with a whole lot of oscillations while that settles down. You just have to wait. It is better to start with a weakly initialized established (settled) solution than with the best current measured information.
I’ll say it again: solutions, in CFD or GCMs, do not depend on initial conditions. You need to think through the implications of that. A time-stepping solution needs a starting point, but starting points that differ, even slightly, will give different solutions. So whatever you learn from the solution will have to be independent of the starting point.
A simple, and very important CFD solution is flow over a wing. It includes eddies etc, so has to be transient. But what is the starting point, and should it affect what you need from the solution (lift, drag, vibration etc)? No, it would be useless if it did. You never know a starting point in actual flight (or in a wind tunnel).

paqyfelyc
Reply to  Germinio
March 1, 2018 2:41 am


“Solutions do not depend on initial conditions” is true only in linear systems. You just plot equilibrium points and eigenvalues, and that’s it. You don’t need to know precisely how you blow the flute; the sound is bound by the flute’s characteristics. At most your blowing may determine which harmonic, rather than the fundamental frequency, is produced, but again you don’t need to know precisely how you blow. And you don’t even have to know about the chaotic eddies that exist in the flow, just as over a wing. These matter no more than the chaotic movement of air molecules, which exist too. They just don’t belong in the system analysis, as they belong to the initial conditions that do not matter for such a system.
In chaotic systems, this is just NOT true. This is the very definition, actually.
But you apply it anyway, writing “solutions, in CFD or GCMs, do not depend on initial conditions.”
QED. You don’t understand sh!t about chaos.
The worst thing is, you behave as if you did. You are a man looking for the key under the street light, despite the key knowingly being in the nearby dark alley.
BTW, if solutions in GCMs didn’t depend on initial conditions, a single run would be enough. Actually, a GCM wouldn’t even be needed, as a numerical solution for the behavior of the system could be found (a supercomputer still needed, of course, but a very different one), just as you can calculate the sound of the flute without simulating a single blowing (nor hundreds, of course).
“A simple, and very important CFD solution is flow over a wing. It includes eddies etc, so has to be transient. But what is the starting point, and should it affect what you need from the solution (lift, drag, vibration etc)? No, it would be useless if it did. You never know a starting point in actual flight (or in a wind tunnel).”
I highlighted the main point. As you stated yourself, betraying your own argument, if the wing were in a chaotic regime, affected by initial conditions, it would be useless. Wing designers obviously don’t want that, and they try to figure out the narrow band of conditions where this doesn’t happen, and to match the conditions the plane will be in. They cannot get rid of all eddies, but they don’t need to, either. All they need is for the wing to keep having a nice smooth solution for lift.
Circular reasoning again, Nick.

LdB
Reply to  Germinio
March 1, 2018 9:16 pm

Nick you need to stop and think carefully
1.) I have told you climate’s main drivers and feedbacks are quantum; they don’t obey classical laws, and quantum chaos does not have sensitivity to start conditions. One of the interesting areas of study is how it gives rise to the start-condition sensitivity of classical chaos, and some recent out-of-time-order experiments suggest it is because of entanglement. Nothing you are doing with classical chaos theory has anything to do with Climate.
2.) What paqyfelyc has tried to make you aware of: you do not have it clear in your head what a classical attractor is, and in fact in many systems you can’t identify the attractor. The reason you can’t identify the attractor goes back to point 1: the world is quantum, and your mathematics won’t work in the quantum world. Now, on some simple classical chaos systems you can identify the attractor, but don’t believe for one minute that is always the case.
Since you seem to like your programming start here
https://en.wikipedia.org/wiki/Langton%27s_ant

All finite initial configurations tested eventually converge to the same repetitive pattern, suggesting that the “highway” is an attractor of Langton’s ant, but no one has been able to prove that this is true for all such initial configurations. It is only known that the ant’s trajectory is always unbounded regardless of the initial configuration[4] – this is known as the Cohen–Kung theorem.[5]

It’s the classic example of what we are talking about: you can’t always identify attractors.
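
The Langton’s ant behaviour quoted above is easy to reproduce. A minimal sketch (my own implementation; the turn convention follows the usual description of the rule): from an empty grid the ant wanders irregularly for roughly 10,000 steps and then enters the famous 104-step “highway” cycle, whose displacement repeats exactly, observed but, as the quote says, unproven for arbitrary starting grids.

```python
def langtons_ant_path(steps):
    """Run Langton's ant on an initially all-white grid and record the
    ant's position after every step."""
    black = set()              # coordinates of the black cells
    x = y = 0
    dx, dy = 0, 1              # start facing "north"
    path = []
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = -dy, dx          # black cell: turn 90 deg left...
            black.discard((x, y))     # ...and flip it to white
        else:
            dx, dy = dy, -dx          # white cell: turn 90 deg right...
            black.add((x, y))         # ...and flip it to black
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = langtons_ant_path(11_300)
# Well past step ~10,000 the ant is on the highway: its displacement
# over any 104 consecutive steps is the same fixed vector.
p0, p1, p2 = path[10_999], path[11_103], path[11_207]
step_a = (p1[0] - p0[0], p1[1] - p0[1])
step_b = (p2[0] - p1[0], p2[1] - p1[1])
print(step_a, step_b)
```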

Nick Stokes
February 27, 2018 1:26 pm

“Since we know chaos amplifies with time, one wonders how much of long-term climate modeling is affected by this.”
Very little, I would expect. Their concern is a numerical artefact which propagates faster than the speed of sound. That isn’t physically possible, so they know it is an artefact, probably of the accelerated (spectral) methods used. It could be significant for NWP, because time intervals are calculated relative to stability criteria based on the speed-of-sound limit. However, you generally learn very rapidly if you have gone beyond a stability limit.
Climate modelling is all about the statistics of a flow in which chaos has been fully expressed. It is about aspects that are independent of initial conditions, so a departure from initial state that is more rapid than expected won’t cause a problem there.
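
The “stability criteria based on the speed of sound limit” mentioned here is the CFL condition: an explicit scheme must not let the fastest signal (sound plus the strongest wind) cross more than about one grid cell per timestep. A back-of-envelope sketch, with illustrative grid spacings and wind speed of my own choosing (real WRF instead sub-steps the acoustic modes so the main timestep can be longer):

```python
def max_stable_dt(dx_m, c_sound=340.0, u_max=100.0, courant=1.0):
    """Largest stable explicit timestep (s) for grid spacing dx_m (m),
    from the CFL condition dt <= C * dx / (fastest signal speed)."""
    return courant * dx_m / (c_sound + u_max)

# A typical nested-grid ladder (illustrative values, in metres):
for dx in (27_000.0, 9_000.0, 3_000.0):
    print(f"dx = {dx / 1000:>4.0f} km  ->  dt <= {max_stable_dt(dx):5.1f} s")
```

Exceed the limit and the scheme blows up within a few steps, which is why, as noted above, you learn very rapidly that you have crossed it.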

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 2:02 pm

“you know something for certain”
So, Forrest, what do you know about it?

Jim Masterson
Reply to  Nick Stokes
February 27, 2018 2:26 pm

>>
Climate modelling is all about the statistics of a flow in which chaos has been fully expressed.
<<
Wishful thinking, I’m afraid.
Jim

Alan Tomalty
Reply to  Jim Masterson
February 27, 2018 3:09 pm

Nick said
“Climate modelling is all about the statistics of a flow in which chaos has been fully expressed.”
One of the most stupid of statements ever made.

menicholas
Reply to  Jim Masterson
February 27, 2018 5:24 pm

There is no need for Mr. Stokes to try to wax eloquent.
It would all sound and amount to the same thing if it was just “Blah blah blah blahblahblah blah, blahblah blah blah blahblahblah…wrong”

jim
Reply to  Jim Masterson
February 27, 2018 6:52 pm

Mr Stokes thinks that Kent and Perthshire have the same temperatures, same cloud cover, same windspeed, same rainfall etc etc. Mr Stokes claims he knows what he is talking about. Mr Stokes is a charlatan.

Don K
Reply to  Nick Stokes
February 27, 2018 3:01 pm

“Climate modelling is all about the statistics of a flow in which chaos has been fully expressed. It is about aspects that are independent of initial conditions, so a departure from initial state that is more rapid than expected won’t cause a problem there.”
Nick. Have you given any thought to a career writing nebulous publicity releases for dysfunctional software products? Good hours. Pays well. Only drawback is the need to associate with some quite odd people. You seem to have a certain knack.
Really, your exposition may convey some deep and important truth. The words are clearly English. The grammar appears to be flawless. But it makes no sense whatsoever.
Are you suggesting that the “climate” on Pluto is the same as the climate on “Mercury”?

Nick Stokes
Reply to  Don K
February 27, 2018 4:52 pm

“Are you suggesting that the “climate” on Pluto is the same as the climate on “Mercury”?”
No. Forcings, and in particular insolation, are input on a per timestep basis. The climate modelling basically determines how the energy is distributed. Chaos affects the short term, but averages out in the long term, just as turbulence (also chaos) does in CFD. Not to zero, but to patterns that you can deal with.

menicholas
Reply to  Don K
February 27, 2018 5:28 pm

And this is why there is no natural variability we need to concern ourselves with, not recently anyway. And why we know exactly why there was a RWP, a MWP, a LIA (not to mention our complete understanding of all those BIG ice ages, their timing, and the interglacial intervals)… and how we know what adjustments to make to the historical records to make it all pan out nicely with our gloomy and doomy prognostications.

don k
Reply to  Don K
February 27, 2018 11:58 pm

“Chaos affects the short term, but averages out in the long term”
So, chaos is (like the 2007 financial crisis) “contained”? I didn’t believe Ben Bernanke in 2007 (and I was all too right about that). In your case, it sounds like wishful thinking. I’d love to discuss it, but the power supply in my usual PC has just died after a mere 12 years of constant use, and this backup is weeks of configuration work away from being very usable.
My instincts tell me that getting the old PC running again probably is not going to be a simple plug-and-play replacement. Won’t know for sure until I can conjure up a replacement PS, which is possible, but not all that easy, when one lives 200 km from a major metropolitan area. Come morning (it’s 0300 here) I’ll see if my son has a spare PS hidden somewhere.
Some other time perhaps …

LdB
Reply to  Don K
March 1, 2018 10:45 pm

You seem to be intimating it in previous posts and I warned you but now you have stated it.
You really are applying classical rubbish physics to a quantum radiative balance problem … NO NICK not ever it doesn’t work.

Nick Stokes
Reply to  Don K
March 1, 2018 11:07 pm

LdB,
“You really are applying classical rubbish physics to a quantum radiative balance problem”
This crankery just excludes you from all discussions. Classical physics is not rubbish. It is the only basis used at WUWT and in all applied science and engineering. All CFD, of which GCMs are a subset, is done with classical physics. The Lorenz system, as advanced on this thread, is a simple 3D system of DE’s. I explained how to solve it and what determined the “chaos” behaviour. There is no role for quantum theory there.
The claims of chaos are based on the properties of the Navier-Stokes equations – classical physics. You never give any indication as to how a discussion in terms of quantum chaos should proceed.

LdB
Reply to  Don K
March 3, 2018 10:48 pm

I saw by the comic the classic Nick Stokes defense. Try answering these questions:
Do you agree radiative transfer obeys and acts only according to QM rules … Yes/No?
Is radiative transfer the major driver and feedback in climate change … Yes/No?
Classical physics is wrong, we have known it for 100 years, and it has been replaced by QM … Yes/No?
If you were actually a scientist you would have answered yes to all 3.
Next you jump to Navier–Stokes as your defence, which yes is used to model some things; again, most using it would be well aware of the mathematical problems and breakdown issues. The obvious issue is that the formulation is entirely classical and it is trying to capture something that is mostly quantum, and your chances are running closer to 0% than you would like to think.
Next you say this: “You never give any indication as to how a discussion in terms of quantum chaos should proceed”. Well, you would proceed exactly the same way you would with classical physics: work up the equations and build a model. That is what you do for a living, isn’t it?

LdB
Reply to  Don K
March 3, 2018 10:50 pm

Oh, I forgot to say: try your Navier–Stokes on the turmites program example and tell me how you go.

John Harmsworth
Reply to  Nick Stokes
February 27, 2018 3:12 pm

Chaos is fully expressed but reality (clouds?) is disregarded. What laughable bullshit!

Reply to  Nick Stokes
February 27, 2018 5:07 pm

“Chaos is due to the presence of a strange attractor which means that all trajectories converge to the attractor no matter what the initial conditions are.”
Exactly so.

Germinio, I think you just caught one. LOL

menicholas
Reply to  Nick Stokes
February 27, 2018 5:19 pm

I knew that too, despite starting with significantly different conditions than Mr. Gardner.

menicholas
Reply to  Nick Stokes
February 27, 2018 5:20 pm

Gardener, sorry.

paqyfelyc
Reply to  Nick Stokes
February 27, 2018 6:31 pm

“climate scientist”? stop the name-calling, this is gross

Reply to  Nick Stokes
February 27, 2018 7:48 pm

Exactly when is chaos fully expressed? Never, particularly if its seeds are blowing in the hyperacoustic wind.
You appear to be conflating stability (and model propagation less than the speed of sound?) with the end of chaos.

Nick Stokes
Reply to  gymnosperm
February 27, 2018 8:02 pm

It is fully expressed because each of the ensemble members is a separate trajectory. So the variation between, say, 50 members gives a good idea of the worst chaos (+seeds) can do.
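
The ensemble logic can be sketched with any cheap chaotic iteration; the logistic map below is my stand-in for a weather model, with invented perturbation sizes. Fifty members start a hundred-millionth apart, and the spread across members grows until it saturates at the width of the attractor, which is the “worst chaos can do” that the ensemble is meant to reveal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members = 50
x = 0.4 + 1e-8 * rng.standard_normal(n_members)  # tiny initial perturbations

spreads = []
for _ in range(60):
    x = 4.0 * x * (1.0 - x)      # logistic map: fully chaotic at r = 4
    spreads.append(float(x.std()))

# Spread grows roughly exponentially, then saturates near the width of
# the invariant distribution: beyond that point the ensemble has told
# you everything it can about forecast uncertainty.
print(f"spread after 1 step: {spreads[0]:.1e}, after 60: {spreads[-1]:.2f}")
```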

Kristi Silber
Reply to  Nick Stokes
February 27, 2018 11:32 pm

This is a nice little discussion of chaos in climate models: [image]

Kristi Silber
Reply to  Kristi Silber
February 27, 2018 11:33 pm

paqyfelyc
Reply to  Kristi Silber
February 28, 2018 11:54 am

Well, instead of this talk about Lorenz, why don’t you just read Lorenz himself?
http://eaps4.mit.edu/research/Lorenz/Chaos_spontaneous_greenhouse_1991.pdf
Obviously these people don’t understand that we can never know both the exact location and velocity of tiny particles, essentially because the particle itself doesn’t know them; location and velocity actually don’t exist as “exact”, but only as a range of possibilities.
They don’t understand physics, and try to explain it. It would be funny if it came from a simple teacher. It comes from an official agency…

Bill Yarber
February 27, 2018 1:34 pm

Translation: The butterflies in Japan really can’t cause rain in NYC!
Small perturbations tend to die out in dynamic systems but the models take them global in just one cycle. The science is as unsettled as the models.

Toneb
February 27, 2018 1:34 pm

” I do know that weather forecasting for a given area is little better than it was 40+ years ago when I was going to sea regularly. I believe it is because the forecaster are too dependent on models.”
They are … and that’s the reason why forecasts are now so much MORE accurate than “40+ years ago”, especially over the last 20 years when senior forecasters could no longer gainsay the performance of NWP models and they could be used as an ensemble.
https://www.nature.com/articles/nature14956 [chart]

Bob boder
Reply to  Toneb
February 27, 2018 2:03 pm

What a bunch of BS. If anything, weather prediction is worse because they rely on models and don’t do the hard work of looking back in time and learning from the past. I will take Bastardi’s predictions over the models every time.

Reply to  Bob boder
February 27, 2018 3:18 pm

Bob B wrote:
“I will take Bastardi’s predictions over the models every time.”
I agree with you, Bob. The use of historical weather analogues in forecasting, as done by Joe B and Joe A, provides much more reliable forecasts. From my experience, the computer models, especially the “long-range” seasonal models, too often produce worthless nonsense. Here is one example:
https://wattsupwiththat.com/2017/01/13/new-butt-covering-end-of-snow-prediction/comment-page-1/#comment-2397292
A little recent history about Winter weather forecasts:
The National Weather Service (NWS) of the USA forecast a warm winter for 2014-15, and my friend Joe d’Aleo told me in October 2014 that the NWS forecast was seriously incorrect, and that the next winter would be particularly cold and snowy, especially in the populous Northeast. This was the second consecutive year that the NWS had made a very poor (excessively warm) Winter forecast, in Joe’s opinion, and he and his colleagues at WeatherBell have a great track record of accurate forecasts.
Joe and I had been working together on a paper on Excess Winter Mortality, and I suggested to Joe that this false “warm winter” NWS forecast was dangerous, especially if the country and its people were unprepared. Joe agreed, but did not know how to tackle the problem.
I proposed an approach, and we prepared a presentation for my friend at the US Energy Information Administration (EIA). Joe then prepared his own monthly Winter Forecast by region for the EIA, who re-ran their winter energy demand calculations. Using Joe’s forecast, the EIA projected 11% more winter energy required for the USA than the “warm” NWS forecast had projected.
After that brutally cold and snowy winter, a back-analysis showed that the actual energy used was 10% more than the NWS forecast projection, and just 1% less than Joe’s forecast projection.
(Note: all numbers are from memory.)
So I think we did a good deed.

Regards to all, Allan

John Harmsworth
Reply to  Toneb
February 27, 2018 3:30 pm

Years ago I worked for a small company that was taken over by a very large one. The big company had a customer-satisfaction evaluation system which surveyed customers at random. Managers’ bonuses were put on the line. When our first survey was done we scored over 95% satisfaction, while our parent operation scored under 85%.
We later discovered that whereas we had submitted our entire client list our sister organization had carefully removed their unhappy clients from their list.
The weather forecasters are the guys with the bonuses on the line in this scenario. They are doing their own evaluation. They are Not more accurate!

Reply to  John Harmsworth
February 28, 2018 5:07 am

Toneb – below is the most recent of ~16 times that I have caught you bullsh!tting.
Keep it up – you help prove my case that global warming alarmism is not only false – it is fraudulent.
Christy and McNider (2017) proved that even if (for the sake of argument) ALL warming in the satellite era (1979 to mid-2017) is due to increasing atmospheric CO2, then TCS is only about 1C/(2xCO2). This TCS is so low that there is NO dangerous global warming and no real global warming crisis.
There is also NO credible evidence that weather is becoming wilder, that ocean acidification is a problem, that polar bears are endangered, or any of the other-very-scary lies spread by the scoundrels and imbeciles of the global warming camp. You warmists are classic bullsh!tters who have misappropriated trillions of dollars for your own benefit.
You don’t even use your own name when you post, and so there is no way to verify your claimed credentials.
You are just another warmist bullsh!tter, hiding in the weeds and spouting lies.
________________________________
https://wattsupwiththat.com/2018/01/30/what-are-in-fact-the-grounds-for-concern-about-global-warming/comment-page-1/#comment-2731571
More falsehoods from ToneB.
Your claims re industrial aerosols causing the global cooling from ~1940 to ~1977 are refuted below. You already know this and are just repeating your false nonsense. You are not addressing my points; you are merely writing false propaganda for the imbeciles who might believe you.
Email correspondence with Dr. Douglas Hoyt (2009) and follow-up, refuting false aerosol claims:
https://wattsupwiththat.com/2018/01/17/proof-that-the-recent-slowdown-is-statistically-significant-correcting-for-autocorrelation/comment-page-1/#comment-2720683
and
https://wattsupwiththat.com/2012/03/29/canada-yanks-some-climate-change-programs-from-budget/#comment-940093

bitchilly
Reply to  Toneb
February 27, 2018 4:00 pm

toneb, i believe you are uk based. as someone that spends and has spent an inordinate amount of time on the uk coastline over the last 25 years i can tell you with great certainty that your statement regarding forecasting accuracy is nonsense. wind speed ,direction and precipitation on a local and regional scale is woeful at times. certainly nowhere near the claimed accuracy in your above chart.

Reply to  bitchilly
February 27, 2018 10:13 pm

Why would Toneb start telling the truth now? He has been a chronic bullsh!tter since he started posting his nonsense.

Toneb
Reply to  bitchilly
February 28, 2018 1:34 am

“toneb, i believe you are uk based. as someone that spends and has spent an inordinate amount of time on the uk coastline over the last 25 years i can tell you with great certainty that your statement regarding forecasting accuracy is nonsense. ”
Too funny.
I forecast for the RAF at bases in Lincolnshire, and my parents live in Cleethorpes, where I grew up.
“I can tell you with great certainty” that forecasts now are orders of magnitude better than 40+ years ago.
How about you obtain your forecast from the most informed sources in order to assess that forecast.
The fault usually lies with that, and with the person being a “glass half-empty” type who remembers all the “wrong” ones and instantly forgets the many right ones.
Met them all the time and still do.
Here for example.

Toneb
Reply to  bitchilly
February 28, 2018 1:38 am

“Why would Toneb start telling the truth now? He has been a chronic bullsh!tter since he started posting his nonsense.”
How about you quit the ad homs.
That people with knowledge of climate/weather are despised here is a given (if they back the science).
You must have the contrarian mindset that invokes incompetence and/or conspiracy to arrive at that statement, my friend.
Here in abundance.
Coz you “get” your climate science from here is it?
Or are you the genius that your D-K delusion tells you you are?

menicholas
Reply to  Toneb
February 27, 2018 5:31 pm

You know, until you convinced me with your use of the word “gainsay”, I thought you were just spouting off.

Toneb
Reply to  menicholas
February 28, 2018 1:40 am

Not my fault that your vocabulary is lacking, my friend.
But still if you want to use that deficiency as an insult to me, then feel free.
Like many on here you only have that.

menicholas
Reply to  menicholas
March 1, 2018 6:04 pm

Golly!
Nothing wrong with my vocabulary young man.
And if you are insulted, you might consider growing a thicker hide, or something.

Curious George
February 27, 2018 1:49 pm

Even with good physics, minor differences in initial conditions are amplified. I suspect that most models do not represent good physics:
https://judithcurry.com/2013/06/28/open-thread-weekend-23/#comment-338257
A cavalier attitude of modelers to 2% errors is beautifully illustrated by Gavin’s comment in
https://judithcurry.com/2012/08/30/activate-your-science/#comment-234131:
“If the specific heats of condensate and vapour is assumed to be zero (which is a pretty good assumption given the small ratio of water to air, and one often made in atmospheric models) then the appropriate L is constant (=L0). (Note that all models correctly track the latent heat of condensate).”
They don’t care that their pretty good assumption leads to a 2% error and causes models to completely leave reality after several simulated weeks at best. They happily project to year 2100, worrying about penguins and polar bears.
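
The size of that error is easy to sanity-check with Kirchhoff’s relation for the temperature dependence of the latent heat of vaporization, using standard textbook values (my numbers, not Gavin’s):

```python
# Kirchhoff's relation: dL/dT = c_pv - c_l, so L(T) ~ L0 + (c_pv - c_l) * T.
# Setting the condensate/vapour specific heats to zero freezes L at L0.
L0 = 2.501e6     # J/kg, latent heat of vaporization at 0 C
c_pv = 1870.0    # J/(kg K), specific heat of water vapour at const. pressure
c_l = 4186.0     # J/(kg K), specific heat of liquid water

def latent_heat(t_celsius):
    """Linearized latent heat of vaporization at temperature t_celsius."""
    return L0 + (c_pv - c_l) * t_celsius

t = 30.0  # a warm boundary layer, where moist processes matter most
rel_err = abs(latent_heat(t) - L0) / latent_heat(t)
print(f"relative error of the L = L0 approximation at {t:.0f} C: {rel_err:.1%}")
```

At 30 C the constant-L approximation is off by a little under 3%, consistent with the ~2% figure above.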

Titanicsfate
February 27, 2018 1:50 pm

This is not new. Edward Lorenz at MIT first observed it in 1961. He stopped the model he was running at the end of the day and restarted it the next morning with initial values that were close, but not identical, to his earlier run. He was astonished at how radically the model runs differed from what he thought were minor changes. He wrote a couple of seminal papers on it, concluding it would never be possible to fully and accurately model atmospheric behavior.
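
That accident is easy to re-enact with any chaotic iteration; the logistic map below stands in for the weather model, and rounding to three decimals plays the role of re-typing initial values from a printout. The runs agree exactly until the restart, then part company completely.

```python
x_full = x_restart = 0.506127     # arbitrary starting value
diffs = []
for step in range(1, 61):
    x_full = 4.0 * x_full * (1.0 - x_full)          # logistic map, chaotic
    x_restart = 4.0 * x_restart * (1.0 - x_restart)
    if step == 10:
        x_restart = round(x_full, 3)   # "next morning": restart from printout
    diffs.append(abs(x_full - x_restart))

# The rounding error (at most 5e-4) roughly doubles each step until the
# two runs are no more alike than two random states of the system.
print(f"just after restart: {diffs[9]:.1e}, at the end: {diffs[-1]:.2f}")
```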
Today, NOAA is well aware of the problem. NOAA-NESDIS works very hard to properly initialize NCEP’s supercomputer model runs for daily forecasting. It has an entire organization — STAR — dedicated to the problem. Where satellites are concerned, the effort is called data assimilation. It involves entering satellite data into the model before it starts to run. It is very difficult science and is not taught in atmospheric science programs. Get the data assimilation wrong and the model does not give an accurate forecast.
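
For readers unfamiliar with the term, the core idea of data assimilation can be shown in a few lines. Real operational schemes (3D-Var, 4D-Var, ensemble Kalman filters) weight everything by flow-dependent error covariances; the sketch below is only Newtonian relaxation (“nudging”), with invented noise levels and coverage, pulling a noisy first-guess field part-way toward sparse, accurate observations.

```python
import numpy as np

def nudge(model_state, observations, obs_mask, strength=0.7):
    """Newtonian relaxation: pull observed grid points part-way toward
    the observations. This is only the cartoon version of assimilation."""
    analysis = model_state.copy()
    innov = observations[obs_mask] - model_state[obs_mask]  # "innovations"
    analysis[obs_mask] += strength * innov
    return analysis

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))     # the "real atmosphere"
background = truth + 0.5 * rng.standard_normal(50)    # noisy first guess
obs = truth + 0.1 * rng.standard_normal(50)           # accurate but sparse obs
mask = rng.random(50) < 0.6                           # ~60% satellite coverage

analysis = nudge(background, obs, mask)
print(np.abs(background - truth).mean(), np.abs(analysis - truth).mean())
```

The analysis error comes out smaller than the first-guess error; getting that step wrong degrades everything downstream, which is the point made above.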

menicholas
Reply to  Titanicsfate
February 27, 2018 5:35 pm

“Get the data assimilation wrong and the model does not give an accurate forecast.”
Except when it does though, right?
And get the data assimilation spot on, and it does give an accurate forecast…oh, except when it does not.
If only we could tell ahead of time when it was going to be wrong and when it was going to be right, huh?

Titanicsfate
Reply to  menicholas
February 28, 2018 3:32 pm

menicholas, you miss my point. My bad; sorry for not being clear. The model must be properly initialized via data assimilation. Then it must accurately model what the atmosphere will do. I am not talking about silly university models that mean nothing but their designers expect us to believe they can forecast out a century or more. I’m talking about National Centers for Environmental Prediction (NCEP) supercomputer models the National Weather Service forecasters use, among other inputs, to generate operational forecasts they disseminate to the public every 6 hours. The people who design the NCEP models work very hard to get the model to make accurate forecasts, but their forecasts are only valid for a few days in the future. And the daily forecasters know that.

menicholas
Reply to  menicholas
March 1, 2018 6:12 pm

I agree that weather predictions are well known to become very unreliable after more than a few days. I also agree with those who have the opinion that forecasts have, in general, become more reliable over the past few decades.
But sometimes even the forecast for tomorrow is completely wrong.
Here in Fort Myers, they have predicted a near-certain chance of rain several times over the past few months.
All but once, no rain fell on most areas.
Some places are more prone to errant forecasts, as are certain types of weather.
Fair weather…easy to get right.
Rain in the dry season in Florida…not as easy to get right.
This happens all the time in this neck of the woods.
BTW…I was not criticizing you…more like lamenting that we never know in advance which forecasts are going to be wrong.
But in general, things that occur less often in a given location seem to be more likely to be incorrectly forecast.

Titanicsfate
Reply to  menicholas
March 2, 2018 1:40 pm

I wasn’t feeling criticized, and I got the joke about knowing when the model will fail! The NWS uses several models. If there’s a primary one it’s the GFS, which has an 18 mile grid size. That’s the resolution. So its rain prediction might not accurately predict what will happen in a specific area. Most of the models update every 6 hours. They use a LOT of processing power. After Sandy NOAA got supplemental funding to improve their models, among other things. The NWS forecasters use GFS and other models, along with direct observations and knowledge of the area, to come up with the daily forecasts. They don’t just regurgitate model output. The commercial forecasters tend to do that. Your forecasting office is Tampa Bay. You will see that Fort Myers is at the southern end of its area of responsibility.
https://www.weather.gov/tbw/

goldminor
Reply to  Titanicsfate
February 27, 2018 11:12 pm

@ T…that is interesting. +10

Titanicsfate
Reply to  goldminor
February 28, 2018 3:33 pm

Thank you.

Gunga Din
February 27, 2018 1:54 pm

Sometimes I wonder if “Chaos Theory” is just a sciency way of admitting “we just don’t know”…But we have a theory that explains why!
That is, there are too many interacting variables changing the data (Man’s changes aside) for any formula or program or supercomputer in the world to make sense of it … unless it’s “Climate Science”. That “science” is settled. (A tree ring told them so.)

Editor
Reply to  Gunga Din
February 27, 2018 2:11 pm

Gunga Din ==> Chaos Theory is not just some excuse — it is a very serious study of a mathematical topic that has been found to manifest itself in nearly all fields of scientific inquiry.
It is why long-term climate prediction is not really possible — despite the weak brush off and protests of Mr Stokes and others.
It has long been a recommendation of mathematicians that climate modelling groups get Chaos Theory experts on board their teams to try to get around the Chaos Problem.
Chaos and its effects on climate models has been a blind spot for the modellers from the beginning — well, except for Ed Lorenz, who “discovered” Chaos when he built his first simple weather/climate models.

Jim Masterson
Reply to  Kip Hansen
February 27, 2018 2:31 pm

>>
. . . well, except for Ed Lorenz, who “discovered” Chaos when he built his first simple weather/climate models.
<<
And coined the term “butterfly effect.”
Jim

Reply to  Kip Hansen
February 27, 2018 2:38 pm

KH, completely agree. Was deep into this theory and its underlying math with respect to microeconomic consequences back in the mid to late 1990s. Even published a seminal peer reviewed paper, A new productivity paradigm, IIRC 2000 in Journal of Strategy. Have not kept up with the math theory since. But suspect that climate models will always converge to their strange attractors in n-1 Poincare space, and that attribution, confounded by unavoidable parameterizations (see previous guest posts here ‘The trouble with models’ and ‘Why models run hot’), will flip between the ‘cooling’ and ‘warming’ attractors.

Gunga Din
Reply to  Kip Hansen
February 27, 2018 3:23 pm

Kip, I didn’t use the word “excuse”, though I understand what I said could be taken that way. I said “admit”.
There are aspects of the physical realm where its complexity is way beyond Man’s ability, with all his tools, to understand at present. Maybe we never will.
To involve those who realize that in any field of science is a good thing. Admitting, recognizing, that some projections are “a bridge too far” is good. Recognizing the limits of what is known, yet stretching to expand those limits beyond what now might be called “Chaos”, is not a bad thing.
The problem with “The Settled Climate Political Science” today is that they will not admit that they don’t know.
No possibility of “Chaos” in the solutions to their projections.
Perhaps I don’t understand “Chaos Theory”.
But I’m just a Layman. What do I know?

Editor
Reply to  Gunga Din
February 27, 2018 3:40 pm

Gunga ==> Chaos Theory is not a study for the timid….I tried my hand at a layman’s approach in my series here some time ago.
It is fascinating — really.

Germinio
Reply to  Kip Hansen
February 27, 2018 3:28 pm

Kip,
Again, chaos makes weather forecasting impossible but makes climate modelling possible. Chaos implies an attractor in the system, and hence all trajectories will converge onto the attractor. The climate is essentially the average of the conditions as you go round the attractor, and that average is the same no matter where on the attractor you start.

Editor
Reply to  Kip Hansen
February 27, 2018 3:35 pm

ristvan ==> Not so sure about the attractors in models — but pretty sure in the real world.
Ice ages and Interglacials look like attractors of the long-term climate system — as do the littler Warm Periods and Little Ice Ages — in a fractal sense.

Editor
Reply to  Kip Hansen
February 27, 2018 3:43 pm

Germinio ==> As long as you will settle for a prediction of

“We will either have a new Ice Age or an Interglacial, or something in between!”

you are absolutely right.
Most of us, though, don’t believe that is what we really hoped for from a climate model.

dougbadgero
Reply to  Gunga Din
February 27, 2018 3:22 pm

Given a choice between research on the unified field theory and non-linear dynamical systems, including chaos, I would take the latter all day long. There are real differences that can be made in this area IMO. Virtually nothing in the real world is linear. Many systems may be profitably studied as linear systems but that always fails at the edges.

Bruce of Newcastle
February 27, 2018 2:15 pm

Maybe they’d get better results if they fed the model with red noise or lists of telephone numbers.

jim
Reply to  Bruce of Newcastle
February 27, 2018 7:01 pm

Bruce, it’s been done with London bus timetables, and it does not make any difference, because the result is baked into the inputs: the CO2 forcing. GIGO.

Bruce of Newcastle
Reply to  jim
February 28, 2018 12:49 pm

Alas my joke fell flat. I was alluding to a certain notorious paleotemperature algorithm. I think it was Ross McKitrick who found you could put in red noise or telephone numbers and out would pop a hockey stick!

Jim Masterson
February 27, 2018 2:22 pm

>>
Since we know chaos amplifies with time, one onders how much of lomg term climate modeling is affected by this.
<<
I think you need a “w” for “wonders” and an “n” for “long.”
Jim

February 27, 2018 2:30 pm

the only problem is the predictions made by gcms are pretty good.

michael hart
Reply to  Steven Mosher
February 27, 2018 3:30 pm

did the modellers tell you that?

MarkW
Reply to  Steven Mosher
February 27, 2018 3:39 pm

As long as you don’t compare them to the real world.

John Harmsworth
Reply to  Steven Mosher
February 27, 2018 3:42 pm

Right! Somewhere between 1.5 and 4.5C warming. Give us another 40 years and we’ll see if we can narrow it down a little!

menicholas
Reply to  Steven Mosher
February 27, 2018 5:45 pm

Hardly surprising that you think so, Mosh.
Otherwise, how could you possibly have the gall to defend them, in spite of every evidence to the contrary?

paqyfelyc
Reply to  Steven Mosher
February 27, 2018 6:59 pm

We don’t need just “pretty good” prediction (if any… still never saw a single one…). We need pretty good NON TRIVIAL predictions. A +1.5°C global warming prediction is just trivial, as it was observed in the past.
We need precise explanations of past known events: RWP, MWP, LIA, the 1910–1940 warming, etc.
Good luck delivering.

Loren Wilson
Reply to  Steven Mosher
February 27, 2018 7:16 pm

As I understand it, the absolute temperature is not correct. In my line of business, if I can’t get the value right, how can I trust the derivative? Second beef: no error bars.

Jeff Cagle
February 27, 2018 2:32 pm

Nick: The error due to “chaos seeding” is fully expressed in the variability already observed between ensemble members.
I think he wants a link. You’re making a mathematical assertion. It seems plausible (to me), but I would prefer a proof.

Reply to  Jeff Cagle
February 27, 2018 2:56 pm

You won’t get one. AR3 said (I paraphrase from memory; it’s a footnote in the Climate chapter of ebook The Arts of Truth) climate is a nonlinear dynamic (chaotic) system, so accurate long term predictions are not possible. This correct statement devolved by AR4 into an argument between skeptical ‘starting conditions’ (the chaotic Lorenz argument for unpredictability based on sensitive dependence on initial conditions) and warmunist ‘boundary conditions’—the more predictable Poincare strange attractor limits to climate. Problem is, neither climate scientists nor skeptics actually deeply understand this math stuff. Boundary conditions are more correct mathematically in terms of what climate models are attempting, but still grossly FAIL because of the attribution problem inherent in model parameterization tuning. See previous guest posts ‘The trouble with models’ and ‘Why models run hot’ for a next level of detail under this high level comment.

dougbadgero
Reply to  ristvan
February 27, 2018 3:28 pm

AR3 also stated in that same paragraph that new statistical methods are needed. The statistical methods chosen are an abomination: averaging outputs of different models with different physics, etc.

Nick Stokes
Reply to  Jeff Cagle
February 27, 2018 3:32 pm

Jeff,
“I would prefer a proof”
There is nothing mysterious. I was replying to DLK, who said much the same, and no-one seemed to have any trouble with that. Numerical weather forecasts are achieved by running the models many times, varying inputs (ensemble), and looking at the statistics of the results. The forecast is basically the ensemble mean, and the variance is often quoted as the uncertainty. Each of those runs is an independent realisation of a chaotic solution, and it also incorporates independent realisations of this “chaos seeding” effect. So it is not a new element in what you actually observe in the ensemble. It is not “worse than we thought”.
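The ensemble procedure described here can be sketched in a few lines (my own toy illustration, using the logistic map as a stand-in for a weather model, not any actual NWP code): perturb the initial state by tiny amounts, run each member forward, and report the ensemble mean and spread.

```python
import random
import statistics

def logistic_step(x, r=3.9):
    """One step of the logistic map, a standard toy chaotic system."""
    return r * x * (1.0 - x)

def run_member(x0, steps=30):
    """Integrate one ensemble member forward in time."""
    x = x0
    for _ in range(steps):
        x = logistic_step(x)
    return x

random.seed(0)
base = 0.4
# Each member gets a tiny (~1e-6) perturbation of the initial state,
# mimicking a perturbed-initial-condition forecast ensemble.
members = [run_member(base + random.uniform(-1e-6, 1e-6)) for _ in range(50)]

mean = statistics.mean(members)      # the "forecast"
spread = statistics.pstdev(members)  # often quoted as the uncertainty
print(f"ensemble mean = {mean:.3f}, spread = {spread:.3f}")
```

After a few dozen chaotic steps the microscopic perturbations have grown to a macroscopic spread, which is why the spread across members, rather than any single run, is treated as the uncertainty estimate.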

bitchilly
Reply to  Nick Stokes
February 27, 2018 4:08 pm

How many model runs do you need to realise a meaningful mean? How do you know what the min/max parameters are in the global climate since time began, to the accuracy required, to give any credence to the model output?

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 4:31 pm

“how do you know what the min max parameters are in the global climate”
The topic here is numerical weather forecasting. The WMO guidelines are here. On ensemble numbers, the general answer is, as many as you can, consistent with forecasting in real time. In the short term, you have less time to make the forecast, but the variability is less. In the long term, you can use bigger ensembles (ECMWF uses 50), and they are needed. The basic test is the observed variability.

LdB
Reply to  Nick Stokes
February 27, 2018 4:51 pm

I love the idea of a quoted uncertainty on that process, like it has some meaning 🙂
Only a statistician would ever consider doing that, pretending you have actually encompassed the entire problem.

zazove
February 27, 2018 3:02 pm

The only thing being spread here are seeds of doubt. Unfortunately in one direction only.

Reply to  zazove
February 27, 2018 3:37 pm

zazove, care to elaborate? Christy’s 29 March 2017 congressional testimony proved AR5 CMIP5 models run hot—the INM-CM4 exception being noteworthy for higher ocean thermal inertia, lower water vapor feedback, and lower ECS. Fact. Sea level rise not accelerating except by data manipulation. Fact. The pause, except by Karlization or Mearsation. Fact. Polar bears thriving thanks to Dr. Crockford. Fact. Earth greening thanks to NASA. Fact. And so on. At some point in this game, we have to call your hand. So show us your warmunist climate cards. All in.

MarkW
Reply to  zazove
February 27, 2018 3:40 pm

And people wonder why we consider AGW to be a new religion.
The greatest sin is spreading doubt.
Begone you heathens.

zazove
Reply to  zazove
February 27, 2018 8:06 pm

Doubt that it exists, or
Doubt that it is influenced by humans, or
Doubt about those doing the research or their equipment (e.g. this post), or
Doubt that it could be a bad thing anyway, or
Doubt it will be anytime soon, or
Doubt that anything can be done about it, or
Doubt there can be anything other than BAU.
But no doubt whatsoever about the motives for fabricating it because that explains the other doubts.

MarkW
Reply to  zazove
February 28, 2018 6:38 am

There you go with the old lies again.
Nobody has ever denied that the climate changes.
Nobody has ever denied that man can influence the climate.
Beyond that, why shouldn’t we cast doubt on bad research? Read the Climategate e-mails; even the insiders cast doubt on their own research.
There isn’t a shred of evidence to support the claim that climate change will be a bad thing. Even the IPCC has given up trying to claim that it will warm the earth by more than 5C, which would get us back to the range of the Holocene Climate Optimum, which wasn’t bad.
Even the IPCC is pushing its scare dates back to 2100 and later; that’s not soon.
All of the proposed solutions are orders of magnitude more expensive than the worst case projections.
As to fabricating motives, I’ll leave that to you conspiracy types.

Bob boder
Reply to  zazove
February 28, 2018 8:52 am

MarkW
I think you are wrong in one respect: the AGW crowd clearly thinks that climate doesn’t change, or at least doesn’t change without man’s influence.

tadchem
February 27, 2018 3:19 pm

Those not properly educated in Numerical Analysis often assume that mathematical models are deterministic.

D. J. Hawkins
Reply to  tadchem
February 27, 2018 4:29 pm

Aren’t they? If you put exactly the same inputs into exactly the same program on exactly the same machine with exactly the same compiler, you have to get the same answer, every time. All the equations are deterministic. Unless you have a (somewhat) random number generator plugged in somewhere, that’s the way it has to be.
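One caveat worth adding (my own illustration, not from the paper): a program is indeed deterministic for a fixed binary and inputs, but the *order* of floating-point operations is effectively part of the input. Because floating-point addition is not associative, a different compiler, optimization level, or parallel reduction can change the answer at the last bit, which is exactly the kind of tiny perturbation the paper says then seeds the whole grid.

```python
# Floating-point addition is not associative, so a mathematically
# identical computation can give different answers if the operation
# order changes (different compiler, optimization, parallel reduction).
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # (1e16 - 1e16) + 1.0 = 1.0
right = a + (b + c)  # -1e16 + 1.0 rounds back to -1e16, so this is 0.0

print(left, right)   # 1.0 0.0
```

So "exactly the same inputs" has to include the exact sequence of machine operations, not just the data, for bit reproducibility to hold.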

commieBob
February 27, 2018 3:37 pm

These folks cite Lorenz twice in the first sentence of their introduction but they clearly miss his point because they go on to say:

Collectively, the large body of work regarding the chaotic evolution of initial condition errors has provided valuable guidance to the development of operational forecasting/data assimilation systems that aim to decrease detrimental weather-related impacts on safety and the economy.

The only way computer models can decrease the detrimental weather-related impacts on safety and the economy is to successfully forecast the weather. Lorenz, and others, have demonstrated that accurate long range predictions are impossible. Chaos is baked into the system. It’s not just a mathematical artifact resulting from insufficient computer power.
In the Summary and Discussion they go on to say:

From a predictability perspective, chaos seeding presents an intrinsic limit on the predictability of certain features since even if nearly all sources of error can be removed in a numerical weather forecast, any tiny error in any limited part of the domain will rapidly seed the entire model grid with other tiny errors, which will subsequently evolve wherever the atmosphere supports rapid perturbation growth.

There’s that word. We’ve been throwing bigger and bigger computers at the weather. How much has forecasting improved in the last fifty years since Lorenz described the problem? It seems to me that they’re barking up the wrong tree.

Nick Stokes
Reply to  commieBob
February 27, 2018 4:37 pm

“How much has forecasting improved in the last fifty years since Lorenz described the problem? “
Lots. Toneb gave the graph above – here it is again: [graph image]

commieBob
Reply to  Nick Stokes
February 27, 2018 5:26 pm

Lorenz, and others, have demonstrated that accurate long range predictions are impossible.

Eyeballing the graphs, it looks like the ten day forecast is asymptotically approaching 50%.
Given that computer power has increased by orders of magnitude in the last fifty years, and forecast skill looks like it’s levelling off, Eroom’s Law seems to be in effect for weather forecasting as well as drugs.

jim
Reply to  Nick Stokes
February 27, 2018 7:09 pm

Are you seriously arguing that 3 day out weather forecasts are correct over 97% of the time?
Seriously, in the real world? Not just in ‘Nick’s world’?

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 7:25 pm

The paper is here.

paqyfelyc
Reply to  Nick Stokes
February 27, 2018 7:50 pm

Well, “Chaos is unpredictable” is a shortcut. It would be more accurate to say that it takes exponentially growing computing power to arithmetically increase accuracy of numerical calculation of the outcome.
You have to multiply computing power by 10 every time you want to advance a few hours in the range of prediction.
Which is just what your graph shows, Nick.
Then you get into the “wheat and chessboard problem”, and you (hopefully) understand how foolish it is to claim some predictability of climate.
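The exponential cost described here is easy to demonstrate on a toy chaotic system (my own sketch, using the logistic map rather than a weather model): shrinking the initial error by a factor of 10^4 each time buys only a roughly constant number of extra usable prediction steps.

```python
def logistic(x, r=3.9):
    """Logistic map: a one-line stand-in for a chaotic forecast model."""
    return r * x * (1.0 - x)

def horizon(eps, x0=0.3, tol=0.1, max_steps=2000):
    """Steps until two runs that differ by eps at t=0 diverge by tol."""
    x, y = x0, x0 + eps
    for n in range(max_steps):
        if abs(x - y) > tol:
            return n
        x, y = logistic(x), logistic(y)
    return max_steps

# Exponentially smaller initial error -> only linearly longer horizon,
# because the error grows roughly like eps * exp(lambda * n).
for eps in (1e-4, 1e-8, 1e-12):
    print(eps, horizon(eps))
```

Each 10,000-fold improvement in the initial condition adds only a similar-sized increment of steps before divergence, which is the "multiply computing power to gain a few hours" behaviour in miniature.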

Paul Penrose
Reply to  Nick Stokes
February 27, 2018 8:32 pm

Big deal. The 7 day forecast has improved by about 20% since 1981. How much has computing power increased in that same time period?

Bob boder
Reply to  Nick Stokes
February 28, 2018 9:28 am

Nick
Nice chart. Can we get that level of prediction in my neighborhood? I seriously can’t tell you how many times this winter we couldn’t make plans for our kids and schooling because the prediction for each and every day was so far off that our districts couldn’t make decisions until they had a chance to physically check the conditions in the morning. So either your chart is accurate and I just happen to live in the one place in the world where five different prediction services can’t make an accurate prediction, or your chart is a bunch of BS.

LdB
Reply to  Nick Stokes
March 1, 2018 11:03 pm

The trick is to massage the word “accurate”. Remember you have an ensemble; read the section on “Forecast reliability” – it’s a hoot.

Reliability = correspondence between forecast probability and observed frequency of an event given the forecast

Think about this with a sine wave phase-offset by 90 degrees, i.e. a cosine versus a sine wave.
Your observed frequency conforms to it being totally accurate under that definition, but your actual prediction is correct 0% of the time … nice definition.

Michael Carter
February 27, 2018 3:41 pm

To be fair: weather forecasting up to 5 days ahead in New Zealand has been very dependable over the last few years. I notice because I am a farmer, among other things. This is not easy, as, though NZ is a small country, it has quite a number of very different climate/weather zones throughout its 1600 km length.
Just a week back we were hit by an ex-tropical cyclone. NZ MetService started to model its future path shortly after it struck Tonga, 2000 km to the north. Their model predicted the path trend over a period of several days to be SW into the Tasman Sea, then to change to SE, striking NZ on the NW coast of the South Island. They got it spot on within 100 km. That’s pretty impressive IMO.
Not to be confused with climate models. They are proving themselves to be an expensive folly.
Regards
M

jim
Reply to  Michael Carter
February 27, 2018 7:12 pm

Clearly NZ is a lot better than the US then. Five days out from landfall, the forecasts for Irma had it nowhere near Marco Island.

MarkW
Reply to  jim
February 28, 2018 6:40 am

Weather forecasting is not the same thing as hurricane forecasting.

Bob boder
Reply to  jim
February 28, 2018 9:30 am

Irma didn’t even track where it was predicted 8 hours before.

TDBraun
February 27, 2018 4:05 pm

The models have a problem with mathematical chaos, but the Earth system itself has anti-chaotic features, does it not?… negative feedbacks that tend to rebalance features that get out of equilibrium, like the Gaia theory. Perhaps the climate models essentially try to predict where the equilibrium is going to move?

Nick Stokes
Reply to  TDBraun
February 27, 2018 4:39 pm

“Perhaps the climate models essentially try to predict where the equilibrium is going to move?”
Exactly.

Bob boder
Reply to  Nick Stokes
February 28, 2018 9:32 am

Said this a million times, the climate never ever ever reaches equilibrium, never has, never will, there is no such thing.

LdB
Reply to  TDBraun
February 27, 2018 5:22 pm

The problem is that the longer your time sequence, the more accuracy you lose. Ask Nick, but what you then do is run lots and lots of simulations and then average the simulations, which becomes your prediction. Works really well until Earth happens to be on one of the outlier simulations. This is the problem with this sort of process, and it’s the same for hurricane and cyclone tracking.
The problem with the process is that you quote a silly confidence (the value falls within 95% of the simulations, so my uncertainty is 5%). Actually the certainty is nothing like that if you apply proper scientific principles such as the look-elsewhere effect.
The kicker is of course you could be wrong from day one and there is no way to know :-).

paqyfelyc
Reply to  TDBraun
February 27, 2018 8:01 pm

NOPE.
you need negative feedback to have chaos in the first place (otherwise, you just get an unstable, diverging system, which is utterly different). So don’t count on it for “rebalancing” the system.
Gaia hypothesis is all about the Earth acting as a living being, that is, being out of equilibrium. When a living system returns to equilibrium, we call it DEAD.
“Perhaps the climate models essentially try to predict where the equilibrium is going to move?” yes they do… and that’s why they are wrong! Chaotic systems don’t have equilibrium points, they have strange attractors, each including all possible results the system can experience (in the Earth climate case: anything from iceball Earth to tropical climate at the pole), results that are not endpoints but just waypoints.

Nick Stokes
Reply to  paqyfelyc
February 27, 2018 8:14 pm

“in the Earth climate case: anything from iceball Earth to tropical climate at the pole”
No. Those are variations induced by different drivers (Milankovitch etc), equivalent to changing the parameters in the Lorenz equations. They change the attractor. Not knowing where you are on the attractor, without change in driver (forcing) is equivalent to weather variation, up to ENSO variations and pauses, but not a GHG driven trend. That is moving to a modified attractor, and that modification is what GCMs calculate.
You do need something like negative feedback. I did that analysis of the Lorenz system here. Linearised, you get a 3×3 coefficient matrix, and in most regions at least, one eigenvalue has a large negative real part. That is what glues trajectories to the attractor. In the Lorenz case (with his parameters) the other two have moderate real part, which allows trajectories to wander over the surface. If another had been large negative, the attractor would shrink to 1D.

paqyfelyc
Reply to  paqyfelyc
February 28, 2018 5:29 am


“Those are variations induced by different drivers (Milankovitch etc), equivalent to changing the parameters in the Lorenz equations. ”
For these to qualify as “parameters”, they have to be stable during the study. They don’t: they change all the time. So you have to include them in the variable set, even though they don’t depend on other variables. You just cannot get rid of Milankovitch, volcanoes, tides, sunspots, … by putting them in some sort of background.
Alternatively, you can study the climate of an alternate Earth with fixed such parameters. Could be interesting, but that wouldn’t prevent wild variation of climate. That would mean that climate is not chaotic, after all.
Looks like you just don’t get that climate is chaotic, actually. You think as if only weather was, and as if climate was weather’s attractor. Neither is true.
“You do need something like negative feedback.” Of course you do. Just don’t count on them to somehow get rid of annoying feature of chaos.

LdB
Reply to  paqyfelyc
February 28, 2018 5:05 pm

He is correct that is a big no, Nick.
So let’s actually show you, Nick: go to the chaos theory page on wikipedia (https://en.wikipedia.org/wiki/Chaos_theory) and look at the animation on the right hand side.
It’s a double rod pendulum. It has exactly your argument: gravity as a constant down force and the two rods as the chaotic system. All the gravity does is constrain the chaos to the bottom half of a circle. In no way does gravity stop the chaos.
Your classical feedbacks may constrain the level of chaos, but they do nothing to the values within the chaos. So you are barking up the wrong tree.

Nick Stokes
Reply to  paqyfelyc
February 28, 2018 8:29 pm

“He is correct that is a big no”
What is a big no? Negative feedback? That was paq’s suggestion. I agreed that you need something like that. And I explained why.
The attractor is a submanifold of the solution space. That means that at points off the manifold, the trajectory is forced toward it. How that works is that the locally linearised system looks like y’=A*y+b. Some of the eigenvalues of A have large negative real part, so on their subspace, the solution rapidly converges to zero. That leaves the remaining space which is the local tangent to the attractor. I traced all this for the 3D Lorenz system here and earlier.
That exponential convergence to the manifold could often be seen as a negative feedback, which is why I said you need something like that. Many control problems look like this. The negative feedback does not, of course, suppress the chaos. In fact the separation of solutions mainly arises from the eigenvalues of larger real part which spread the solution on the manifold. One of the reasons I gave a hi-res 3D visualisation was to show the areas where trajectories diverged within the attractor, which is the mechanism for giving the “chaos” effect, where solutions starting close together end up in different places.

LdB
Reply to  paqyfelyc
February 28, 2018 10:54 pm

Nick, again that lack of understanding thing. Let’s start with the most basic: your analogy is stupid when talking about the Earth. You are playing around with a Lorenz system with nice classical equations; the Earth is controlled by radiative transfer, and it plays by quantum rules which look nothing like the example you cite.
You have at least got the idea that you can’t stop the chaos, you can only dampen it, which TDBraun didn’t.
In fact you live in a QM universe in which every molecule and everything is seething with chaotic movement; it’s just dampened down to a level you don’t notice. If you can suppress it to zero then you have identified a region at absolute zero temperature.
The problem comes you can’t really extend your idea to deal with the real problem, so why don’t you start your reading here …. https://en.wikipedia.org/wiki/Quantum_chaos
One of the really interesting things about quantum chaos is it can actually result in stability because you can get quantized distributions. That is why our Quantum World actually looks classical.
There is no way into the problem from the classical world or science and you are wasting your time and any argument you make using it will be wrong.

LdB
Reply to  paqyfelyc
February 28, 2018 11:38 pm

I did some hunting around for a student presentation for you, Nick; this is the best I could find:
https://www2.physik.uni-bielefeld.de/fileadmin/user_upload/workshops/random/Talks/sebastian_mueller_bielefeld_01.pdf
It starts by giving you similarities and differences between classical and quantum chaos
I should add that the actual equation for quantum chaos of an atom was solved last year, which, if it interests you, I am sure you can search for on the net.

Gary Pearse
February 27, 2018 4:37 pm

Here we have small changes becoming big changes with iteration. What does cooking the books on temperature – the main factor in climate science – do to the forecasts? We have already begun to see incongruities in data sets that, because of linear thinking, weren’t anticipated. For example, when you cook the books, you have to cook all the stuff (impossible to do) that feeds into it. We have the ridiculous situation that masses of territory – North America, Europe, north Asia and even South Africa, Paraguay, Ecuador, Chile – have the mid-1930s to early 40s holding all the records for hot days and periods of weeks, droughts, etc. Try taking the real temperature conditions of the mid 30s–40s, use these as the initial conditions, and see what happens.
Temperature records should be sacred if you are serious about studying whither climate. What does it do to predictions if you hindcast with the 1930s “pushed down” the best part of a degree? It doesn’t alter the long term “trend” so much, but it means that the Pause was likely from 1937 to 2016. The temperature increase from 1880 to now is about a degree, but in actual fact the warming before the GISS hatchet-job was about a degree from 1880 to 1940 – 60 yrs – a heck of a fast warming, followed by 70 yrs of not much warming at all. The period of 40 yrs when scientists were worrying about a galloping ice age coming on then gave way to a simple recovery that caused all the hoopla by the climateers.
Conclusion: with the insight of this paper and the quality of the world temperature record, there isn’t a snowball’s chance in Hell of a meaningful forecast. Worse, the algorithm used by NOAA et al. continually changes past temperatures (downward on average). WUWT? Mark Steyn’s observation in the Senate science and tech committee hearings on quality of data will be a quote for the historians of this period. It was to the effect that: how can we know within the IPCC’s statistical likelihood what the temperature will be in 2100 when we don’t know what 1950’s temperature WILL BE!!

Nick Stokes
Reply to  Gary Pearse
February 27, 2018 4:43 pm

“What does cooking the books on temperature – the main factor in climate science- do to the forecasts.”
Nothing at all. Numerical Weather Forecasting does not use historical temperature data (from years ago). It assimilates data from the last few days to get an initial state, and then solves for the next few days.

menicholas
Reply to  Nick Stokes
February 27, 2018 5:53 pm

Do you really think he was talking about short term weather forecasts?

jim
Reply to  Nick Stokes
February 27, 2018 7:18 pm

A typical example of Mr Stokes deliberately misinterpreting the comment to obfuscate. He does this all the time.

Nick Stokes
February 27, 2018 6:07 pm

It’s the topic of this thread “the interpretation of Numerical Weather Models”. Climate models don’t use surface temperature indices either.

Nick Stokes
Reply to  Nick Stokes
February 27, 2018 6:08 pm

This is response to menicholas “Do you really think..”

menicholas
Reply to  Nick Stokes
February 27, 2018 7:57 pm

*SMH*

Alan Tomalty
Reply to  Nick Stokes
February 28, 2018 7:08 am

Nick Stokes said
“Climate models don’t use surface temperature indices either.”
What in the hell do they use then?

Nick Stokes
Reply to  Alan Tomalty
February 28, 2018 11:51 am

You could learn something about GCMs and find out.

February 27, 2018 6:15 pm

I dunno why, but reading the paper, I get the strong impression that although there is something to be talked about, the three authors haven’t a clue what it is, which is why the whole paper is littered with a sort of pompous verbosity designed to disguise the ineptitude of its perpetrators. Or they wrote it that way as a subtle joke, to see whether anyone could spot the BS.
What seems to be the case is that they have been playing with climate models without understanding how those models work, and observing outputs that are obviously inconsistent with reality.
That is unsurprising: models are by definition limited, and if you leave out, or parameterize, a variable – e.g. the speed of sound – in a model, then you always run the risk of producing unrealistic output if that parameter turns out to have a significant relevance in the RealWorld TM…
However it has produced some amusing posts with what appears to be arrant nonsense floated past the nose of Nick Stokes who has taken the bait and made a bigger fool of himself than usual…
BBB
BullSh1t Baffles Brains.
All it says in the end is that stepwise iteration of discrete cell models, coupled with parametrizations to simplify the whole thing to the point where simulations can be run on a supercomputer, is simply not adequate to describe the processes in play in the Real World. Which Robert Brown said some years back, here and elsewhere.
This is the sort of analysis that needs a Turing class brain to unravel to the point where you can make a definitive statement like ‘there is no way to prove that a model output using this class of mathematics will not be in the end completely wrong’
I am perhaps one of the few people here who has actually built a model to evaluate nonlinear differential equations over time. I never meant to, but a friend, whose interest was astronomy and astrophysics, and who was affluent enough to have one of the first personal computers, and a very well stocked drinks cupboard, challenged me = or we all challenged each other, to build a model of arbitrary masses at random positions and velocities and evaluate their orbital paths over time, on a graphical monitor.
ISTR it took about three hours, and the next three we just sat watching the pretty patterns. As run after run from various initial conditions produced runs with attractors, where quasi stable orbits happened, or runs where the while thing exploded instantly, to runs where for minutes stability seemed to reign and THEN the whole thing exploded,,,
which is why I can see statements like Nick Stokes’s as based on a profound ignorance. Attractors do not cause chaos; they are the result of some types of chaotic equations coupled with some types of initial conditions. A given equation may have one or more attractors, and regions of what you might call repulsion, where the whole output flies off to infinity – or to another attractor. Nor are these guaranteed results from the equations of certain models. It takes a special sort of relationship to end up with a bounded chaotic model such as climate seems to be, but even then we suspect that in addition to the attractor of the interglacials, there may be a strong and strange attractor in terms of millennial ice ages.
Chaos is where the rule ‘tomorrow will be broadly similar to yesterday’ breaks down. A random chunk of rock passing another random chunk of rock in deep interstellar space might just end up being diverted towards our solar system, where it could end all life on Earth.
Despite the fact that (Velikovsky aside) we appear to have had quasi stable planetary orbits for a few billion years.
Playing with models whose construction you do not fully understand will always raise some issues.
That chaotic models achieve stability for a period is no guarantee they will retain it forever. Worse – and I went looking and asked some mathematicians – no one knows how to tell whether a given chaotic model is ‘stable’ and will never ‘explode’, except by running it. We have no known ‘stability criteria’, if you like.
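That run-it-and-see situation is easy to demonstrate with the classic Lorenz-63 equations – a standard textbook demo, not taken from the paper. Perturb one coordinate by a millionth and the two trajectories end up on completely different parts of the attractor; nothing short of integrating tells you when.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz-63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)  # identical except for one part in a million
max_sep = 0.0
for _ in range(2000):       # 20 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, max(abs(p - q) for p, q in zip(a, b)))
# max_sep grows from 1e-6 to the size of the attractor itself
```

The separation saturates at roughly the diameter of the attractor, and the step at which it blows up shifts unpredictably with the size of the initial perturbation – the same qualitative behaviour the paper reports for millionth-of-a-degree temperature differences.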
No. This paper tells us nothing that anyone who understands how models work can’t tell you: models are not reality; where chaos is concerned, they are not even very useful models of reality; and where climate is concerned, not only are the necessary models chaotic in the first place, but they can’t be constructed adequately anyway.
That is to say that the output of a climate model is in all likelihood necessarily less meaningful than disembowelling a goat and examining its entrails. And it takes a huge amount more computing power, which is of course what impresses the profane and gives it spurious credibility.
That three students have noticed this and dressed it up in obfuscation and called it ‘chaos seeding’ is no great surprise. That’s what students do.
All it shows is that models are probably not fit for purpose.
No sh1t Sherlock.

jorgekafkazar
Reply to  Leo Smith
February 28, 2018 3:45 pm

“Velikovsky aside…”
Far aside.

Kaiser Derden
February 27, 2018 6:40 pm

because multi-variable models of chaotic systems aren’t science. They are mathematical circle jerks for computer programmers … Wall Street has been trying to “model” the market for as long as there have been pen and paper, and then computers, because of the potential money to be made predicting market direction … and they have been doing it with people a lot smarter and more highly motivated than the likes of Mann et al … and they have never succeeded … it’s like actually curing cancer; can’t be done … (surgery “removes” cancer, chemo/radiation kills cancer cells … doesn’t cure a thing …)

menicholas
Reply to  Kaiser Derden
February 27, 2018 8:00 pm

Well, strictly speaking, no one is interested in curing cancer. The focus is on curing patients.
If a cancer is removed from a person’s body, and it does not come back, the patient is considered cured.
Of cancer.

jorgekafkazar
Reply to  menicholas
February 28, 2018 4:28 pm

Only to eventually catch something else and expire, go to meet one’s maker, rest in peace, push up the daisies, ring down the curtain and join the choir invisible.

MarkW
Reply to  Kaiser Derden
February 28, 2018 6:42 am

If all the influenza cells in my body are killed, am I not cured of the flu?

Reply to  MarkW
February 28, 2018 6:51 am

There is no such thing as an “influenza cell.” Influenza is a virus.

paqyfelyc
Reply to  MarkW
March 1, 2018 2:56 am

well, I guess you can name cells infected with influenza virus “influenza cells”, can’t you?

Tom Dayton
February 27, 2018 8:34 pm

A brief overview of chaos’s role in climate is available at Skeptical Science; read the Basic tabbed pane and then the Intermediate one–written by someone with a PhD in Complexity Studies: https://skepticalscience.com/chaos-theory-global-warming-can-climate-be-predicted-basic.htm.

Toneb
February 28, 2018 1:47 am

Improvements in NCEP NWP ….
http://www.iweathernet.com/wxnetcms/wp-content/uploads/2015/01/ncep-model-forecast-skill.png
(still behind UKMO and ECM for skill)

Ethan Brand
Reply to  Toneb
February 28, 2018 5:59 am

Toneb: Note that this is skill at 500 mb. Model “skill” at ground level (i.e. where we live) is not nearly as good.

Even so: we increase computing power by orders of magnitude, data input accuracy probably the same, software sophistication by leaps and bounds… and we can barely forecast the weather (locally) a couple of days into the future? Yet, somehow, this is supposed to be related to the predictive ability of climate models decades into the future? Regional forecasts (let’s say the midwest USA) are, as far as I know, pretty worthless more than 10 days or so out. My observation is that long-range forecasting (say 30 days plus) for even continental-size areas is pretty worthless.

I find it astonishing that one can make the leap from barely being able to predict the (local) weather with any usable accuracy a few days ahead to broad statements that the “science is settled” and that we can have excellent confidence in macro climate trends decades or centuries in the future. I understand that what climate models are supposed to be telling us is far different from a local temperature forecast. However, it seems that these global climate predictions are somehow supposed to give me usable information about regional weather trends decades in the future. For example, a GCM predicts (somehow with a high degree of confidence) that the midwest USA will be x degrees warmer (in general) in, say, 2050. That supposedly lets me generate a reasonably accurate prediction of what local weather conditions would do. But since, right now, we can’t provide reasonable forecasts on the same scale past 10 days with any degree of usable accuracy, how does one make the leap that our forecast accuracy somehow increases if we start with an overall temperature a little bit higher 20 years from now? In other words: we have a pretty good handle on what the regional weather conditions are right now, we throw massive CPU time at it… and come up with relatively worthless forecasts more than a few days into the future. Now we fast-forward 20 years… and somehow conclude that our forecasting skill is accurate (for any regional size) for decades?

My problem with this whole AGW issue is one of credibility. There is really no doubt that the macro weather forecasting models for large regions past 10 days or so generate crap (most of the time). Yet, somehow, the fact that the NWS and other forecast agencies regularly publish them is supposed to be an indicator of success. What is the skill level of NH/global long-range forecasts in the 30–365 day range? I am pretty confident it is in the worthless category.

John Haddock
Reply to  Toneb
February 28, 2018 5:28 pm

Pretty chart.
But it tells me nothing about what dimensions of the forecast are being measured. Nor how accuracy is assessed. Please excuse my skepticism, but I’ve been in too many businesses that fiddled measurements when it was in their interest to do so.

Ed Zuiderwijk
February 28, 2018 2:10 am

Or, to put it succinctly: such models, however ‘sophisticated’, have limited predictive ability.

Alan Tomalty
Reply to  Ed Zuiderwijk
February 28, 2018 7:13 am

Please substitute the word “limited” with “NO”

nn
Reply to  Alan Tomalty
February 28, 2018 10:45 am

They work in a frame of reference that is limited in time and space, forward and backward. They may work beyond, but their accuracy is logically and practically inversely proportional to the product of time and space offsets from an established frame of reference. Evolution or chaos is not a progressive (i.e. monotonic) function. Case in point: human life.

February 28, 2018 6:12 am

To be practical, let’s discuss what works and what does not. As an example, let’s use Winter weather forecasts, which predict about 6 to 9 months into the future.
The USA National Weather Service (NWS), with all its computing and modeling power, routinely gets its Winter forecasts wrong. Weatherbell, which uses historic weather analogues, has a much stronger track record of forecasting success. One example is included below, but there are many.
The fact that Weatherbell can predict Winter weather much better than the NWS suggests that there is indeed a significant degree of predictability in weather within this 6-to-9-month time frame. What is apparently absent from the computer weather models is the “experience factor” provided by the historic weather analogues. There may also be mathematical frailties in the computer models that are either solvable or fatal – for example, even the “seasonal” models used by the NWS for the Winter forecast have a history of “running hot”.
The longer-term multi-decadal climate models also run too hot, probably because of the assumption that climate sensitivity to CO2 (TCS) is about an order of magnitude higher than reality. There is also the problem of model instability, exemplified by reported wide divergence of results when minor changes are made to initial conditions.
I am not saying that these weather and climate model problems are not solvable, but I am saying that these models are not currently fit-for-purpose, and probably should be shelved and greatly improved before they are put back into service.
In the meantime, the NWS and other forecasters should focus on what does work. There is no excuse for continuing to produce weather and climate forecasting nonsense.
Regards, Allan
https://wattsupwiththat.com/2018/02/27/study-chaos-seeding-impairs-the-interpretation-of-numerical-weather-models/#comment-2753714
Bob B wrote:
“I will take Bastardi’s predictions over the models every time.”
I agree with you, Bob. The use of historical weather analogues in forecasting, as done by Joe B and Joe A, provides much more reliable forecasts. From my experience, the computer models, especially the “long-range” seasonal models, too often produce worthless nonsense. Here is one example:
https://wattsupwiththat.com/2017/01/13/new-butt-covering-end-of-snow-prediction/comment-page-1/#comment-2397292

Anonymoose
February 28, 2018 2:17 pm

“any difference between model simulations”
What does that refer to? Someone is loading differences between models into something?

Tomas Milanovic
February 28, 2018 4:12 pm

Well, the problem with chaos in weather forecasting (and with chaos in general) is that it generally does NOT allow averaging multiple runs.
Or, more accurately, an average of multiple runs is often garbage and, worse, you never know in advance when it is garbage and when it is not.
The reason?
It should be obvious! Every run describes a possible future state, but these future states do NOT have the same probability, and there is no reason why they should.
Example of 3 runs:
Run 1 has a 50% probability of occurring (but you do not know that)
Run 2 has a 30% probability of occurring (but you do not know that)
Run 3 has a 5% probability of occurring (but you do not know that)
Note that the sum is not 100% because you obviously cannot make an infinity of runs to exhaust all possible future states.
By making an “ensemble average” – which is totally stupid for a chaotic system – you compute (Run1 + Run2 + Run3) / 3, and you can readily see that this average has nothing to do with the reality, which is mainly Run 1. On the contrary, averaging will move your forecast farther from Run 1, which is the most realistic, so it makes things worse, not better.
Of course, in some cases (like the ’99 storm in Europe) the reality is in the 15% where you did no run, so it comes as quite unexpected.
As many have said before, forecasting chaotic systems is only possible for short periods of time (a few days at maximum for the weather, and in some cases with steep gradients even much less), and for longer periods it is better not to even try, because it is impossible both in principle and in practice.
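A toy version of that averaging failure (invented numbers, purely for illustration): cluster most ensemble members near one outcome, add one outlier, and the equal-weight ensemble mean lands where no member – and quite possibly no physical state – actually is.

```python
# Hypothetical 24 h rainfall (mm) from three ensemble members.
# Runs 1 and 2 cluster near a "light rain" outcome; run 3 is a "deluge" outlier.
runs = [2.0, 4.0, 60.0]

mean = sum(runs) / len(runs)                 # the naive ensemble average: 22 mm
nearest = min(abs(r - mean) for r in runs)   # distance to the closest member
# The mean is at least 18 mm away from every single member:
# it describes a forecast that none of the runs actually produced.
```

With the (unknown) probabilities from the comment above, a probability-weighted average would at least lean towards Run 1; the equal-weight mean cannot, because the weights are precisely what the ensemble does not tell you.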

paqyfelyc
Reply to  Tomas Milanovic
March 1, 2018 3:10 am

As a matter of fact, you don’t even need chaos for averaging to be worse than garbage. Every time the population has several modes – and this is the general situation – the average has dubious meaning.
In fact, your example is fine, except that it is not related to chaos. In a chaotic system, even assessing the probabilities is hard.

Jaap Titulaer
March 1, 2018 2:05 pm

A few words of wisdom about stochastics & ‘chaos’ from Ed & Henk.
https://judithcurry.com/2013/10/13/words-of-wisdom-from-ed-lorenz/
E.N. Lorenz (1991) Chaos, spontaneous climatic variations and detection of the greenhouse effect. Greenhouse-Gas-Induced Climatic Change: A Critical Appraisal of Simulations and Observations, M. E. Schlesinger, Ed. Elsevier Science Publishers B. V., Amsterdam, pp. 445-453.

In this talk I wish to examine the basis for speculating that the greenhouse effect is not the main cause of what we have been experiencing and, particularly, that the suggested warming is due to processes purely internal to the atmosphere and its immediate surroundings.

Unfortunately, recognizing a system as chaotic will not tell us all that we might like to know. It will not provide us with a means of predicting the future course of the system. It will tell us that there is a limit to how far ahead we can predict, but it may not tell us what this limit is. Perhaps the best advice that chaos “theory” can give us is not to jump at conclusions; unexpected occurrences may constitute perfectly normal behavior.

Henk Tennekes, former Director of Research of the KNMI (Royal Netherlands Meteorological Institute)
http://scienceandpublicpolicy.org/images/stories/papers/commentaries/tennekes_essays_climate_models.pdf

In response to a paper by Tim Palmer of ECMWF, I wrote:
“Palmer et al. seem to forget that, though weather forecasting is focused on the rapid succession of atmospheric events, climate forecasting has to focus on the slow evolution of the circulation in the world ocean and slow changes in land use and natural vegetation. In the evolution of the Slow Manifold (to borrow a term coined by Ed Lorenz) the atmosphere acts primarily as stochastic high frequency noise. If I were still young, I would attempt to build a conceptual climate model based on a deterministic representation of the world ocean and a stochastic representation of synoptic activity in the atmosphere.
From my perspective it is not a little bit alarming that the current generation of climate models cannot simulate such fundamental phenomena as the Pacific Decadal Oscillation.
I will not trust any climate model until and unless it can accurately represent the PDO and other slow features of the world ocean circulation.
Even then, I would remain skeptical about the potential predictive skill of such a model many tens of years into the future.”

Let me take an example in which I have been involved for thirty years, the problem of a finite prediction horizon for complex deterministic systems.
This, the very problem first defined by Edward Lorenz, still is not properly accounted for by the majority of climate scientists. In a meeting at ECMWF in 1986, I gave a speech entitled “No Forecast Is Complete Without A Forecast of Forecast Skill.” This slogan gave impetus to the now common procedure of Ensemble Forecasting, which in fact is a poor man’s version of producing a guess at the probability density function of a deterministic forecast. The ever expanding powers of supercomputers permit such simplistic research strategies.
Since then, ensemble forecasting and multi-model forecasting have become common in climate research, too. But fundamental questions concerning the prediction horizon are being avoided like the plague. There exists no sound theoretical framework for climate predictability studies. As a turbulence specialist, I am aware that such a framework would require the development of a statistical-dynamic theory of the general circulation, a theory that deals with eddy fluxes and the like. But the very thought is anathema to the mainstream of dynamical meteorology.
