This paper was published in late 2017, and we didn’t notice it then. Today thanks to a tip from Dr. Willie Soon, via Willis Eschenbach, we notice it now. The paper is open access. See PDF link below.
Seeding Chaos: The Dire Consequences of Numerical Noise in NWP Perturbation Experiments
Abstract
Studying changes made to initial conditions or other model aspects can yield valuable insights into dynamics and predictability, but is associated with an unrealistic phenomenon called chaos seeding that can cause misinterpretations of results.
Perturbation experiments are a common technique used to study how differences between model simulations evolve within chaotic systems. Such perturbation experiments include modifications to initial conditions (including those involved with data assimilation), boundary conditions, and model parameterizations. We have discovered, however, that any difference between model simulations produces a rapid propagation of very small changes throughout all prognostic model variables at a rate many times the speed of sound. The rapid propagation seems to be due to the model’s higher-order spatial discretization schemes, allowing the communication of numerical error across many grid points with each time step. This phenomenon is found to be unavoidable within the Weather Research and Forecasting model even when using techniques such as digital filtering or numerical diffusion.
These small differences quickly spread across the entire model domain. While these errors initially are on the order of a millionth of a degree with respect to temperature, for example, they can grow rapidly through nonlinear chaotic processes where moist processes are occurring. Subsequent evolution can produce within a day significant changes comparable in magnitude to high-impact weather events such as regions of heavy rainfall or the existence of rotating supercells. Most importantly, these unrealistic perturbations can contaminate experimental results, giving the false impression that realistic physical processes play a role. This study characterizes the propagation and growth of this type of noise through chaos, shows examples for various perturbation strategies, and discusses the important implications for past and future studies that are likely affected by this phenomenon.
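The mechanism the abstract describes – a higher-order spatial stencil communicating a tiny error across several grid points every time step – is easy to see in a toy 1-D advection scheme. This is an illustrative sketch, not the actual WRF discretization; the grid size, CFL number and seed amplitude are arbitrary choices:

```python
import numpy as np

def d4(u, dx):
    """Fourth-order centered first derivative on a periodic grid (5-point stencil)."""
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

n, dx, dt, c = 200, 1.0, 0.1, 1.0   # CFL number c*dt/dx = 0.1
u = np.zeros(n)
u[n // 2] = 1e-6                    # one "millionth of a degree" at a single point

for step in range(1, 6):
    u = u - c * dt * d4(u, dx)      # simple forward-Euler advection step
    nz = np.nonzero(np.abs(u) > 0.0)[0]
    reach = int(max(abs(nz - n // 2)))
    # The 5-point stencil pushes the perturbation outward 2 grid points per
    # step, 20x faster than the advecting flow itself (0.1 points per step).
    print(step, reach)   # reach = 2, 4, 6, 8, 10
```

The outer fringe of the perturbation is tiny (each step multiplies it by roughly c·dt/12dx), but it is nonzero, and in a real model those nonzero seeds land wherever the atmosphere supports rapid growth.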
From the conclusion:
Here the phenomenon of chaos seeding was discovered within the WRF grid point model, although other studies with different grid point modeling systems suggest the issue is more generalized since common numerical schemes are the cause. Spectral models likely suffer the same issue of chaos seeding given their inherent, instantaneous communication of perturbations across the entire modeling domain. Thus, chaos seeding within perturbation experiments appears to be a universal modeling problem. In turn, our hope with this study is to bring an awareness to this relatively unknown issue to the field of atmospheric sciences, and other fields where chaos seeding may plague perturbation experiments, such that attempts can be made by researchers to remove potential misinterpretations from their work. From a predictability perspective, chaos seeding presents an intrinsic limit on the predictability of certain features since even if nearly all sources of error can be removed in a numerical weather forecast, any tiny error in any limited part of the domain will rapidly seed the entire model grid with other tiny errors, which will subsequently evolve wherever the atmosphere supports rapid perturbation growth. Ensemble sensitivity and EOF analysis were two techniques presented here that have the potential to mitigate chaos seeding in perturbation experiments toward distinguishing realistic processes, and we hope that these and other new techniques can be used to ensure that chaos seeding does not harm the integrity of modeling experiments in a variety of scientific disciplines.
This paper shows there is an issue in short-term numerical weather models that span hours to days. Since we know chaos amplifies with time, one wonders how much of long-term climate modeling is affected by this.
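That amplification with time is easy to demonstrate with the classic Lorenz 1963 system – the textbook example of chaos, not a weather model. Here a one-billionth nudge to one variable grows to the full size of the attractor; the time step and run length are arbitrary choices:

```python
import numpy as np

def lorenz_step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 equations."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # the "seed": a billionth of a unit

sep = []
for _ in range(40000):               # 40 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    sep.append(float(np.linalg.norm(a - b)))

print(sep[0])     # still ~1e-9: the seed starts invisibly small
print(max(sep))   # order of the attractor itself: the seed has saturated
```

The two trajectories are indistinguishable at first, then diverge exponentially until the difference is as big as the attractor allows – the same qualitative behavior the paper documents for its millionth-of-a-degree seeds.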
The PDF of the paper: https://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-17-0129.1
Here we have small changes becoming big changes with iteration. What does cooking the books on temperature – the main factor in climate science – do to the forecasts? We have already begun to see incongruities in data sets that, because of linear thinking, weren’t anticipated. For example, when you cook the books, you have to cook all the stuff (impossible to do) that feeds into it. We have the ridiculous situation that masses of territory – North America, Europe, north Asia and even South Africa, Paraguay, Ecuador, Chile – have the mid-1930s to early 40s holding all the records for hot days and periods of weeks, droughts, etc. Try taking the real temperature conditions of the mid-30s to 40s, use these as the initial conditions, and see what happens.
Temperature records should be sacred if you are serious about studying whither climate. What does it do to predictions if you hindcast with the 1930s “pushed down” the best part of a degree? It doesn’t alter the long-term “trend” so much, but it means that the Pause was likely from 1937 to 2016. The temperature increase from 1880 to now is about a degree, but in actual fact the warming before the GISS hatchet job was about a degree from 1880 to 1940 – 60 years – a heck of a fast warming, followed by 70 years of not much warming at all. The period of 40 years when scientists were worrying about a galloping ice age coming on then gave way to a simple recovery that caused all the hoopla by the climateers.
Conclusion: With the insight of this paper and the quality of the world temperature record, there isn’t a snowball’s chance in Hell of a meaningful forecast. Worse, the algorithm used by NOAA et al. continually changes past temperatures (downward on average). WUWT? Mark Steyn’s observation in the Senate science and tech committee hearings on quality of data will be a quote for the historians of this period. It was to the effect that: how can we know, within the IPCC’s statistical likelihood, what the temperature will be in 2100 when we don’t know what 1950’s temperature WILL BE!!
“What does cooking the books on temperature – the main factor in climate science- do to the forecasts.”
Nothing at all. Numerical Weather Forecasting does not use historical temperature data (from years ago). It assimilates data from the last few days to get an initial state, and then solves for the next few days.
Do you really think he was talking about short term weather forecasts?
A typical example of Mr Stokes deliberately misinterpreting the comment to obfuscate. He does this all the time.
It’s the topic of this thread “the interpretation of Numerical Weather Models”. Climate models don’t use surface temperature indices either.
This is response to menicholas “Do you really think..”
*SMH*
Nick Stokes said
“Climate models don’t use surface temperature indices either.”
What in the hell do they use then?
You could learn something about GCMs and find out.
I dunno why, but reading the paper, I get the strong impression that although there is something to be talked about, the three authors haven’t a clue what it is, which is why the whole paper is littered with a sort of pompous verbosity designed to disguise the ineptitude of its perpetrators. Or they wrote it that way as a subtle joke, to see whether anyone could spot the BS.
What seems to be the case is that they have been playing with climate models without understanding how those models work, and observing outputs that are obviously inconsistent with reality.
That is unsurprising: models are by definition limited, and if you leave out, or parameterize, a variable – e.g. the speed of sound – in a model, then you always run the risk of producing unrealistic output if that parameter turns out to have a significant relevance in the RealWorld TM…
However it has produced some amusing posts with what appears to be arrant nonsense floated past the nose of Nick Stokes who has taken the bait and made a bigger fool of himself than usual…
BBB
BullSh1t Baffles Brains.
All it says in the end is that stepwise iteration of discrete cell models coupled with parametrizations to simplify the whole thing to the point where simulations can be run on a supercomputer, is simply not adequate to describe the processes in play in the Real World. Which Robert Brown said some years back, here and elsewhere.
This is the sort of analysis that needs a Turing class brain to unravel to the point where you can make a definitive statement like ‘there is no way to prove that a model output using this class of mathematics will not be in the end completely wrong’
I am perhaps one of the few people here who has actually built a model to evaluate nonlinear differential equations over time. I never meant to, but a friend, whose interest was astronomy and astrophysics, and who was affluent enough to have one of the first personal computers, and a very well stocked drinks cupboard, challenged me – or we all challenged each other – to build a model of arbitrary masses at random positions and velocities and evaluate their orbital paths over time, on a graphical monitor.
ISTR it took about three hours, and the next three we just sat watching the pretty patterns. Run after run from various initial conditions produced runs with attractors, where quasi-stable orbits happened, runs where the whole thing exploded instantly, and runs where for minutes stability seemed to reign and THEN the whole thing exploded…
Which is why I can see statements like Nick Stokes’ as based on a profound ignorance. Attractors do not cause chaos; they are the result of some types of chaotic equations coupled with some types of initial conditions. A given equation may have one or more attractors, and regions of what you might call repulsion, where the whole output flies off to infinity. Or another attractor. These are not guaranteed results either, from the equations of certain models. It takes a special sort of relationship to end up with a bounded chaotic model such as climate seems to be, but even then we suspect that in addition to the attractor of the interglacials, there may be a strong and strange attractor in terms of millennial ice ages.
Chaos is where the rule ‘tomorrow will be broadly similar to yesterday’ breaks down. A random chunk of rock passing another random chunk of rock in deep interstellar space might just end up being diverted towards our solar system, where it could end all life on earth.
Despite the fact that (Velikovsky aside) we appear to have had quasi stable planetary orbits for a few billion years.
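A toy version of the kind of gravitational model described above takes only a few lines today. This is a sketch with made-up masses, positions and step sizes – softened Newtonian gravity and a crude Euler integrator – and the point is only that a 1e-9 nudge to a single coordinate produces a measurably different outcome:

```python
import numpy as np

def nbody_run(jitter, steps=20000, dt=1e-3, eps=5e-2):
    """Planar 3-body problem, equal unit masses, G = 1, softened gravity."""
    pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
    vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])
    pos[0, 0] += jitter                                # the tiny nudge
    for _ in range(steps):
        d = pos[:, None, :] - pos[None, :, :]          # pairwise separations
        r2 = (d ** 2).sum(-1) + eps ** 2               # softened distance^2
        acc = -(d / r2[..., None] ** 1.5).sum(axis=1)  # attraction toward others
        vel = vel + dt * acc
        pos = pos + dt * vel
    return pos

drift = np.abs(nbody_run(0.0) - nbody_run(1e-9)).max()
print(drift)   # typically many orders of magnitude larger than the 1e-9 nudge
```

Run it with different starting positions and you get exactly the menagerie described above: quasi-stable orbits for a while, then sudden reorganization or ejection.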
Playing with models whose construction you do not fully understand will always raise some issues.
That chaotic models achieve stability for a period is no guarantee they will retain it forever. Worse – and I went looking and asked some mathematicians – no one knows how to tell whether a given chaotic model is ‘stable’ and will never ‘explode’, except by running it. We have no known ‘stability criteria’, if you like.
No. This paper tells us nothing that anyone who understands how models work can’t tell you – models are not reality; where chaos is concerned, not even very useful models of reality; and where climate is concerned, not only are the necessary models chaotic in the first place, but they can’t be constructed adequately anyway.
That is to say that the output of a climate model is in all likelihood necessarily less meaningful than disembowelling a goat and examining its entrails. And takes a huge amount more computing power, which is of course what impresses the profane and gives it spurious credibility.
That three students have noticed this and dressed it up in obfuscation and called it ‘chaos seeding’ is no great surprise. That’s what students do.
All it shows is that models are probably not fit for purpose.
No sh1t Sherlock.
“Velikovsky aside…”
Far aside.
because multi-variable models of chaotic systems aren’t science. They are mathematical circle jerks for computer programmers … Wall Street has been trying to “model” the market for as long as there have been pen and paper, and then computers, because of the potential money to be made predicting market direction … and they have been doing it with people a lot smarter and more highly motivated than the likes of Mann et al … and they have never succeeded … it’s like actually curing cancer, can’t be done … (surgery “removes” cancer, chemo/radiation kills cancer cells … doesn’t cure a thing …)
Well, strictly speaking, no one is interested in curing cancer. The focus is on curing patients.
If a cancer is removed from a person’s body, and it does not come back, the patient is considered cured.
Of cancer.
Only to eventually catch something else and expire, go to meet ones maker, rest in peace, push up the daisies, ring down the curtain and join the choir invisible.
If all the influenza cells in my body are killed, am I not cured of the flu?
There is no such thing as an “influenza cell.” Influenza is a virus.
Well, I guess you can name cells infected with influenza virus “influenza cells”, can’t you?
A brief overview of chaos’s role in climate is available at Skeptical Science; read the Basic tabbed pane and then the Intermediate one–written by someone with a PhD in Complexity Studies: https://skepticalscience.com/chaos-theory-global-warming-can-climate-be-predicted-basic.htm.
Improvements in NCEP NWP ….
http://www.iweathernet.com/wxnetcms/wp-content/uploads/2015/01/ncep-model-forecast-skill.png
(still behind UKMO and ECM for skill)
Toneb: Note that this is skill at 500mb. Model “skill” at ground level (ie where we live) is not nearly as good.
Even so, we increase computing power by orders of magnitude, data input accuracy probably the same, software sophistication by leaps and bounds… and we can barely forecast the weather (locally) a couple of days into the future? Yet, somehow, this is supposed to be related to the predictive ability of climate models decades in the future?

Regional forecasts (let’s say the midwest USA) are, as far as I know, pretty worthless more than 10 days or so into the future. My observation is that long-range forecasting (say 30 days plus) for even continental-size areas is pretty worthless. I find it astonishing that one can make the leap from barely being able to predict the (local) weather with any usable accuracy a few days out to broad statements that the “science is settled” and we can have excellent confidence in macro climate trends decades or centuries in the future.

I understand that what climate models are supposed to be telling us is far different from a local temperature forecast. However, it seems that these global climate predictions are somehow supposed to give me usable information about regional weather trends decades in the future. For example, a GCM predicts (somehow with a high degree of confidence) that the midwest USA will be x degrees warmer (in general) in, say, 2050. OK, then that supposedly lets me generate a reasonably accurate prediction of what local weather conditions would do. But since right now we can’t, with any degree of usable accuracy, provide reasonable forecasts on the same scale past 10 days, how the heck does one make the leap that our forecast accuracy somehow increases if we start with an overall temperature a little bit higher 20 years from now?

In other words, we have a pretty good handle on what the regional weather conditions are right now. We throw massive CPU time at it… and come up with relatively worthless forecasts more than a few days into the future.
Now we fast forward 20 years…and now we somehow conclude that our forecasting skill is accurate (for any regional size) for decades?
My problem with this whole AGW issue is one of credibility. There is really no doubt that the macro weather forecasting models for large regions past 10 days or so generate crap (most of the time). Yet, somehow, the fact that the NWS and other forecast agencies regularly publish them is supposed to be an indicator of success. What is the skill level of NH/Global long range forecasts in the 30-365 day range? I am pretty confident it is in the worthless category.
Pretty chart.
But it tells me nothing about what dimensions of the forecast are being measured. Nor how accuracy is assessed. Please excuse my skepticism, but I’ve been in too many businesses that fiddled measurements when it was in their interest to do so.
Or, to put it succinctly: such models, however ‘sophisticated’, have limited predictive ability.
Please substitute the word “limited” with “NO”
They work in a frame of reference that is limited in time and space, forward and backward. They may work beyond, but their accuracy is logically and practically inversely proportional to the product of time and space offsets from an established frame of reference. Evolution or chaos is not a progressive (i.e. monotonic) function. Case in point: human life.
To be practical, let’s discuss what works and what does not work. As an example, let’s use Winter weather forecasts, which predict about 6 to 9 months into the future.
The USA National Weather Service (NWS), with all its computing and modeling power, routinely gets its Winter forecasts wrong. Weatherbell, which uses historic weather analogues, has a much stronger track record of forecasting success. One example is included below, but there are many.
The fact that Weatherbell can predict Winter weather much better than the NWS suggests that there is indeed a significant degree of predictability in weather within this 6-to-9-month time frame. What is apparently absent from the computer weather models is the “experience factor” provided by the historic weather analogues. There may also be mathematical frailties in the computer models that are either solvable or fatal – for example, even the “seasonal” models used by the NWS for the Winter forecast have a history of “running hot”.
The longer-term multi-decadal climate models also run too hot, probably because of the assumption that climate sensitivity to CO2 (TCS) is about an order of magnitude higher than reality. There is also the problem of model instability, exemplified by reported wide divergence of results when minor changes are made to initial conditions.
I am not saying that these weather and climate model problems are not solvable, but I am saying that these models are not currently fit-for-purpose, and probably should be shelved and greatly improved before they are put back into service.
In the meantime, the NWS and other forecasters should focus on what does work. There is no excuse for continuing to produce weather and climate forecasting nonsense.
Regards, Allan
https://wattsupwiththat.com/2018/02/27/study-chaos-seeding-impairs-the-interpretation-of-numerical-weather-models/#comment-2753714
Bob B wrote:
“I will take Bastardi’s predictions over the models every time.”
I agree with you, Bob. The use of historical weather analogues in forecasting as done by Joe B and Joe A provides much more reliable forecasts. From my experience, the computer models, especially the “long-range” seasonal models, too often produce worthless nonsense. Here is one example:
https://wattsupwiththat.com/2017/01/13/new-butt-covering-end-of-snow-prediction/comment-page-1/#comment-2397292
“any difference between model simulations”
What does that refer to? Someone is loading differences between models into something?
Well, the problem with chaos in weather forecasting (and with chaos in general) is that it generally does NOT allow averaging multiple runs.
Or, more accurately, an average of multiple runs is often garbage, and worse, you never know in advance when it is garbage and when it is not.
The reason?
It should be obvious! Every run describes a possible future state, but these future states do NOT have the same probability, and there is no reason why they should.
Example of three runs:
Run 1 has a 50% probability of occurring (but you do not know that)
Run 2 has a 30% probability of occurring (but you do not know that)
Run 3 has a 5% probability of occurring (but you do not know that)
Note that the sum is not 100%, because you obviously cannot make an infinite number of runs to exhaust all possible future states.
By making an “ensemble average” – which is totally stupid for a chaotic system – you compute (Run 1 + Run 2 + Run 3) / 3, and you can readily see that this average has nothing to do with the reality, which is mainly Run 1. On the contrary, averaging will move your forecast farther from Run 1, which is the most realistic, so it makes things worse, not better.
Of course, in some cases (like the ’99 storm in Europe) the reality is in the 15% for which you made no run, so it comes quite unexpectedly.
As many have said before, forecasting chaotic systems is only possible for short periods of time (a few days at maximum for the weather, and in some cases with steep gradients even much less); for longer periods it is better not to even try, because it is impossible both in principle and in practice.
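Made concrete with the three runs above – the rainfall amounts are invented for illustration, and the probabilities are the “true but unknown to the forecaster” values:

```python
# Hypothetical rainfall outcomes (mm) for the three ensemble members above.
runs  = [5.0, 0.0, 150.0]      # Run 1, Run 2, Run 3 forecasts
probs = [0.50, 0.30, 0.05]     # chance each state actually occurs (unknown)

ensemble_mean = sum(runs) / len(runs)          # the naive ensemble average
most_likely   = runs[probs.index(max(probs))]  # the dominant state, Run 1

print(ensemble_mean)   # ~51.7 mm: suggests substantial rain
print(most_likely)     # 5.0 mm: yet 80% of the probability sits at 5 mm or less
```

The average lands in a region of outcome space that no member predicted and that has almost no probability of occurring – exactly the failure mode described above.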
As a matter of fact, you don’t even need chaos for averaging to be worse than garbage. Every time the population has several modes – and this is the general situation – the average has dubious meaning.
In fact, your example is fine, except that it is not related to chaos. In a chaotic system, even assessing probability is hard.
A few words of wisdom about stochastics & ‘chaos’ from Ed & Henk.
https://judithcurry.com/2013/10/13/words-of-wisdom-from-ed-lorenz/
E.N. Lorenz (1991) Chaos, spontaneous climatic variations and detection of the greenhouse effect. Greenhouse-Gas-Induced Climatic Change: A Critical Appraisal of Simulations and Observations, M. E. Schlesinger, Ed. Elsevier Science Publishers B. V., Amsterdam, pp. 445-453.
Henk Tennekes, former Director of Research of the KNMI (Royal Netherlands Meteorological Institute)
http://scienceandpublicpolicy.org/images/stories/papers/commentaries/tennekes_essays_climate_models.pdf
…