guest post by Nick Stokes
There has been a lot of discussion lately of error propagation in climate models, e.g. here and here. I have spent much of my professional life in computational fluid dynamics, dealing with exactly that problem. GCMs are a special kind of CFD, and both are applications of the numerical solution of differential equations (DEs). Propagation of error in DEs is a central concern. It is usually described under the heading of instability, which is what happens when errors grow rapidly, usually due to a design fault in the program.
So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number. It doesn’t matter for DE solution why you think it is wrong; all that matters is what the iterative calculation then does with the difference. That is the propagation of error.
A general linear equation in time can be formulated as
y’ = A(t)*y + f(t) ……….(1)
y(t) could be just one variable or a large vector (as in GCMs); A(t) will be a corresponding matrix, and f(t) could be some external driver, or a set of perturbations (error). The y’ means time derivative. With a non-linear system such as Navier-Stokes, A could be a function of y, but this dependence is small locally (in space and time) for a region; the basics of error propagation follow from the linearised version.
I’ll start with some bits of DE theory that you can skip (I’ll get more specific soon). If you have another solution z which is the solution following an error, then the difference satisfies
(y-z)’=A*(y-z)
The dependence on f(t) has gone. Error propagation is determined by the homogeneous part y’=A*y.
You can write down the solutions of this equation explicitly:
y(t) = W(t)*a, W(t) = exp(∫ A(u) du )
where the exp() is in general a matrix exponential, and the integral is from starting time 0 to t. Then a is a vector representing the initial state, where the error will appear, and the exponential determines how it is propagated.
You can get a long way by just analysing a single error, because the system is linear and instances can be added (superposed). But what if there is a string of sequential errors? That corresponds to the original inhomogeneous equation, where f(t) is some kind of random variable. So then we would like a solution of the inhomogeneous equation. This is
y(t) = W(t) ∫ W⁻¹(u) f(u) du, where W(t) = exp(∫ A(v) dv ), and integrals are from 0 to t
To get the general solution, you can add any solution of the homogeneous equation.
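As a concrete check, here is a minimal scalar sketch of that formula (with illustrative choices of my own: c = −0.5 and f(t) = sin t), comparing the closed-form variation-of-constants solution against direct numerical integration of the ODE:

```python
import math

# Scalar version of the inhomogeneous solution above:
#   y'(t) = c*y(t) + f(t),  y(0) = 0
#   y(t)  = exp(c*t) * integral_0^t exp(-c*u) f(u) du
# The constant c and forcing f are illustrative, not from the post.
c = -0.5

def f(t):
    return math.sin(t)

def y_formula(t, n=20000):
    # Trapezoidal evaluation of the integral in the closed-form solution.
    h = t / n
    s = 0.5 * (f(0.0) + math.exp(-c * t) * f(t))
    for k in range(1, n):
        u = k * h
        s += math.exp(-c * u) * f(u)
    return math.exp(c * t) * s * h

def y_euler(t, n=20000):
    # Direct forward-Euler integration of y' = c*y + f(t), y(0) = 0.
    h = t / n
    y = 0.0
    for k in range(n):
        y += h * (c * y + f(k * h))
    return y

print(y_formula(2.0), y_euler(2.0))  # the two agree closely
```

The agreement of the two numbers is the point: the integral formula really is the solution the iteration converges to.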
For the particular case where A=0, W is the identity, and the solution is a random walk. But only in that particular case. Generally, it is something very different. I’ll describe some special cases, in one or a few variables. In each case I show a plot with a solution in black, a perturbed solution in red, and a few random solutions in pale grey for context.
Special case 1: y’=0

This is the simplest differential equation you can have. It says no change; everything stays constant. Every error you make continues in the solution, but doesn’t grow or shrink. It is of interest, though, in that if you keep making errors, the result is a random walk.
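A quick numerical sketch of this case (step counts and error sizes are illustrative choices of mine): feed in a fresh random error at every step, and the accumulated result spreads like √n rather than n, the signature of a random walk:

```python
import random

random.seed(1)

def walk(n):
    # y' = 0 with a fresh unit error at each step: errors persist unchanged,
    # so they simply accumulate.
    y = 0.0
    for _ in range(n):
        y += random.gauss(0.0, 1.0)
    return y

# Mean squared endpoint over many walks grows like n (so rms spread ~ sqrt(n)).
n = 100
msd = sum(walk(n) ** 2 for _ in range(500)) / 500
print(msd)  # close to n, i.e. rms displacement ~ sqrt(n), not n
```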
Special case 2: y”=0

The case of no acceleration. Now if there is an error in the velocity, the error in location will keep growing. Already different, and already the simple random walk solution for successive errors doesn’t work. The steps of the walk would expand with time.
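A minimal sketch of the growing step (the velocity error of 0.01 is a made-up illustrative value):

```python
# Special case 2 (y'' = 0): a one-off error dv in the velocity at t = 0
# produces a position error dv * t, growing linearly with time.
def position(y0, v0, t):
    # exact solution of y'' = 0
    return y0 + v0 * t

dv = 0.01  # illustrative velocity error at t = 0
errors = [position(0.0, 1.0 + dv, t) - position(0.0, 1.0, t)
          for t in (1.0, 10.0, 100.0)]
print(errors)  # grows as dv * t
```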
Special case 3: y’=c*y
where c is a constant. If c>0, the solutions are growing exponentials. The errors are also solutions, so they grow exponentially. This is a case very important to DE practice, because it is the mode of instability. For truly linear equations the errors increase in proportion to the solution, and so maybe don’t matter much. But for CFD it is usually a blow-up.

But there are simplifications, too. For the case of continuous errors, the earlier ones have grown a lot by the time the later ones get started, and really are the only ones that count. So it loses the character of random walk, because of the skewed weighting.
If c<0, the situation is reversed (in fact, it corresponds to above with time reversed). Both the solutions and the errors diminish. For continuously created errors, this has a kind of reverse simplifying effect. Only the most recent errors count. But if they do not reduce in magnitude while the solutions do, then they will overwhelm the solutions, not because of growing, but just starting big. That is why you couldn’t calculate a diminishing solution in fixed point arithmetic, for example.
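The skewed weighting for c > 0 is easy to see numerically. A rough sketch using forward Euler, with illustrative values of my own (c = 1, a one-off error of 10⁻³ injected either early or late):

```python
# Special case 3 (y' = c*y): a perturbation made at time t0 is multiplied
# by exp(c*(t_end - t0)) thereafter, so early errors dominate when c > 0.
def integrate(c, y0, t_end, perturb_at=None, eps=0.0, n=100000):
    # Forward Euler; inject a one-off error eps near time perturb_at.
    h = t_end / n
    y = y0
    for k in range(n):
        t = k * h
        if perturb_at is not None and abs(t - perturb_at) < h / 2:
            y += eps
        y += h * c * y
    return y

c, t_end = 1.0, 5.0
clean = integrate(c, 1.0, t_end)
early = integrate(c, 1.0, t_end, perturb_at=1.0, eps=1e-3)
late  = integrate(c, 1.0, t_end, perturb_at=4.0, eps=1e-3)
# The early error has been multiplied by ~exp(4), the late one by ~exp(1).
print(early - clean, late - clean)
```

The ratio of the two differences is about e³ ≈ 20: the earlier error counts for far more, which is why the random-walk picture fails here.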
This special case is important, because it corresponds to the behaviour of eigenvalues in the general solution matrix W. A single positive eigenvalue of A can produce growing solutions which, started from any error, will grow and become dominant. Conversely the many solutions that correspond to negative eigenvalues will diminish and have no continuing effect.
Special case 4: Non-linear y’ = 1 − y²
Just looking at linear equations gives an oversimplified view where errors and solutions change in proportion. The solutions of this equation are the functions tanh(t+a) and coth(t+a), for arbitrary a. They tend to 1 as t→∞ and to -1 as t→-∞. Convergence is exponential. So an error made near t=-1 will grow rapidly for a while, then plateau, then diminish, eventually rapidly and to zero.
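This grow-then-die behaviour can be sketched with forward Euler (the starting value −0.99 and the 10⁻⁴ perturbation are illustrative choices of mine):

```python
# Special case 4 (y' = 1 - y^2): solutions converge to y = 1, so a
# perturbation made near y = -1 first grows, then is squeezed to zero.
def solve(y0, t0, t1, n=200000):
    # forward Euler on y' = 1 - y^2
    h = (t1 - t0) / n
    y = y0
    for _ in range(n):
        y += h * (1.0 - y * y)
    return y

base = solve(-0.99, 0.0, 10.0)
pert = solve(-0.99 + 1e-4, 0.0, 10.0)
print(base, pert - base)       # both near 1; the difference has shrunk

# Part-way along, the same perturbation has transiently *grown*:
mid_b = solve(-0.99, 0.0, 2.65)
mid_p = solve(-0.99 + 1e-4, 0.0, 2.65)
print(mid_p - mid_b)           # much larger than the initial 1e-4
```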

Special case 5: the Lorenz butterfly
This is the poster child for vigorous error propagation. It leads to chaos, which I’ll say more about. But there is a lot to be learnt from analysis. I have written about the Lorenz attractor here and in posts linked there. At that link you can see a gadget that will allow you to generate trajectories from arbitrary start points and finish times, and to see the results in 3D using webGL. A typical view is like this

Lorenz derived his equations to represent a very simple climate model. They are:

x’ = σ*(y − x)
y’ = x*(ρ − z) − y
z’ = x*y − β*z
The parameters are conventionally σ=10, β=8/3, ρ=28. My view above is in the x-z plane and emphasises symmetry. There are three stationary points of the equations: one at (0,0,0), and two at (a, a, 27) and (−a, −a, 27), where a = sqrt(72). The last two are the centres of the wings. Near the centres, the equations linearise to give a solution which is a logarithmic spiral. You can think of it as a version of y’=a*y, where a is complex with small positive real part. So trajectories spiral outward, and at this stage errors will propagate with exponential increase. I have shown the trajectories on the plot with rainbow colors, so you can see where the bands repeat, and how the colors gradually separate from each other. Paths near the wing but not on it are drawn rapidly toward the wing.
As the paths move away from the centres, the linear relation erodes, but really fails approaching z=0. Then the paths pass around that axis, also dipping towards z=0. This brings them into the region of attraction of the other wing, and they drop onto it. This is where much mixing occurs, because paths that were only moderately far apart fall onto very different bands of the log spiral of that wing. If one falls closer to the centre than the other, it will be several laps behind, and worse, velocities drop to zero toward the centre. Once on the other wing, paths gradually spiral outward toward z=0, and repeat.
Is chaos bad?
Is the Pope Catholic? you might ask. But chaos is not bad, and we live with it all the time. There is a lot of structure to the Lorenz attractor, and if you saw a whole lot of random points and paths sorting themselves out into this shape, I think you would marvel not at the chaos but the order.
In fact we deal with information in the absence of solution paths all the time. A shop functions perfectly well even though it can’t trace which coins came from which customer. More scientifically, think of a cylinder of gas molecules. Computationally, it is impossible to follow their paths. But we know a lot about gas behaviour, and can design efficient internal combustion engines, for example, without tracking molecules. In fact, we can infer almost everything we want to know from the statistical mechanics that started with Maxwell and Boltzmann.
CFD embodies chaos, and it is part of the way it works. People normally think of turbulence there, but it would be chaotic even without it. CFD solutions quickly lose detailed memory of initial conditions, but that is a positive, because in practical flow we never knew them anyway. Real flow has the same feature as its computational analogue, as one would wish. If it did depend on initial conditions that we could never know, that would be a problem.
So you might do wind tunnel tests to determine lift and drag of a wing design. You never know initial conditions in tunnel or in flight but it doesn’t matter. In CFD you’d start with initial conditions, but they soon get forgotten. Just as well.
GCMs and chaos
GCMs are CFD and also cannot track paths. The same loss of initial information occurs on another scale. GCMs, operating as weather forecasts, can track the scale of things we call weather for a few days, but not further, for essentially the same reasons. But, like CFD, they can generate longer term solutions that represent the response to the balance of mass, momentum and energy over the same longer term. These are the climate solutions. Just as we can have a gas law which gives bulk properties of molecules that move in ways we can’t predict, so GCMs give information about climate with weather we can’t predict.
What is done in practice? Ensembles!
Analysis of error in CFD and GCMs is normally done to design for stability. It gets too complicated for quantitative tracing of error, and so a more rigorous and comprehensive solution is used, which is … just do it. If you want to know how a system responds to error, make one and see. In CFD, where a major source of error is the spatial discretisation, a common technique is to search for grid invariance. That is, solve with finer grids until refinement makes no difference.
With weather forecasting, a standard method is use of ensembles. If you are unsure of input values, try a range and see what range of output you get. And this is done with GCMs. Of course there the runs are costlier, and so they can’t do a full range of variations with each run. On the other hand, GCM’s are generally surveying the same climate future with just different scenarios. So any moderate degree of ensemble use will accumulate the necessary information.
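A toy version of the ensemble idea, using the Lorenz system as a stand-in for a model (start values, run lengths and spin-up are all illustrative choices of mine): perturb the initial conditions, rerun, and look at the spread of a long-term statistic rather than of the unpredictable paths themselves:

```python
import random

# "Model": the Lorenz system. "Climate output": the long-run mean of z.
def lorenz_step(s, h, sigma=10.0, beta=8.0/3.0, rho=28.0):
    x, y, z = s
    return (x + h * sigma * (y - x),
            y + h * (x * (rho - z) - y),
            z + h * (x * y - beta * z))

def mean_z(s0, t_end=200.0, h=1e-3, spinup=20.0):
    s = s0
    total, count = 0.0, 0
    for k in range(int(t_end / h)):
        s = lorenz_step(s, h)
        if k * h > spinup:      # discard the initial transient
            total += s[2]
            count += 1
    return total / count

random.seed(2)
# Five ensemble members with randomly perturbed starting points.
ensemble = [mean_z((1.0 + random.uniform(-0.5, 0.5), 1.0, 1.0))
            for _ in range(5)]
spread = max(ensemble) - min(ensemble)
print(ensemble, spread)  # paths differ wildly; the statistic barely moves
```

The individual trajectories are completely different "weather", but the ensemble spread of the long-term mean is small: the statistic is robust to initial error.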
Another thing to remember about ensemble use in GCM’s is this. You don’t have to worry about testing a million different possible errors. The reason is related to the loss of initial information. Very quickly one error starts to look pretty much like another. This is the filtering that results from the very large eigenspace of modes that are damped by viscosity and other diffusion. It is only the effect of error on a quite small space of possible solutions that matters.
If you look at the KNMI CMIP 5 table of GCM results, you’ll see a whole lot of models, scenarios and result types. But if you look at the small number beside each radio button, it is the ensemble range. Sometimes it is only one – you don’t have to do an ensemble in every case. But very often it is 5, 6 or even 10, just for one program. CMIP has a special notation for recording whether the ensembles are varying just initial conditions or some parameter.
Conclusion
Error propagation is very important in differential equations, and is very much a property of the equation. You can’t analyse without taking that into account. Fast growing errors are the main cause of instability, and must be attended to. The best way to test error propagation, if computing resources are adequate, is by an ensemble method, where a range of perturbations are made. This is done with earth models, both forecasting and climate.
Appendix – emulating GCMs
One criticised feature of Pat Frank’s paper was the use of a simplified equation (1) which was subjected to error analysis in place of the more complex GCMs. The justification given was that it emulated GCM solutions (actually an average). Is this OK?
Given a solution f(t) of a GCM, you can actually emulate it perfectly with a huge variety of DEs. For any coefficient matrix A(t), the equation
y’ = A*y + f’ – A*f
has y=f as a solution. A perfect emulator. But as I showed above, the error propagation is given by the homogeneous part y’ = A*y. And that could be anything at all, depending on choice of A. Sharing a common solution does not mean that two equations share error propagation. So it’s not OK.
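This is easy to demonstrate numerically. A sketch with f(t) = sin t (my choice) and two scalar values of A: both equations track f perfectly when started from exact initial data, but a small initial error decays for A = −1 and explodes for A = +1:

```python
import math

# Two "perfect emulators" of f(t) = sin(t):
#   y' = a*y + f'(t) - a*f(t),  for a = -1 and a = +1.
# Both have y = f as a solution, but propagate an initial error as e^{a t}.
def run(a, y0, t_end, n=50000):
    # forward Euler
    h = t_end / n
    y = y0
    for k in range(n):
        t = k * h
        y += h * (a * y + math.cos(t) - a * math.sin(t))
    return y

for a in (-1.0, 1.0):
    on_track  = run(a, 0.0, 5.0)    # y(0) = f(0) = 0: tracks sin(t)
    perturbed = run(a, 1e-3, 5.0)   # same equation, 1e-3 initial error
    print(a, on_track, perturbed - on_track)
# a = -1: error shrunk to ~1e-3 * e^-5;  a = +1: error grown to ~1e-3 * e^5
```

Same shared solution, opposite error propagation: which is exactly why matching outputs says nothing about matching error behaviour.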
So Nick seems to be telling us that changing the net input energy by 8 watts per square meter (+/-4) will have no effect on the output air temperature after an elapsed time of years (20?) in the GCMs. Thank goodness. That .035 CO2 forcing is clearly meaningless then.
The (+/-)4 W/m^2 is not changing the net inputs, John C. It’s an expression of ignorance concerning the state of the clouds.
It means that GCMs are entirely unable to resolve the response of clouds to the forcing from CO2 emissions.
It means GCM air temperature projections are physically meaningless.
Nick Stokes doesn’t understand resolution. Nor does Steve Mosher.
It would be useful if each major contributor to this discussion — Pat Frank, Roy Spencer, and Nick Stokes — could agree upon a glossary of scientific and technical terms common to the arguments being made by each contributor so that the WUWT readership can reach a conclusion whether or not the three major participants are even talking the same language.
Beta Blocker
Yes.
And it would be even a bit more useful if one of the three would stop claiming that the others don’t understand this and that.
This is simply zero-level, and it is really a pity that he doesn’t understand such basic things.
But they DON’T UNDERSTAND! That’s what so amazing about this. Perhaps I missed it, but I haven’t seen Nick Stokes address Pat Frank’s central point.
Pat Frank is talking about uncertainty and that the amount of uncertainty keeps increasing – NOT ABOUT ERROR. The uncertainty that emerges from the models with their results – as an intrinsic property of those results – is so huge it renders the results essentially meaningless. It’s like saying “It will be 60.25 degrees today, plus or minus 27 degrees.”
Also, is your last sentence horribly ironic or is it supposed to be a quote being made by “one of the three”? (If it is a quote it ought to be inside quotation symbols)
” is so huge it renders the results essentially meaningless”
No, it renders uncertainty meaningless.
What does it mean, anyway? Do you know? An uncertainty that is not related to expected error?
(In reply to Nick Stokes…)
Uncertainty is a way of quantifying the statement “I’m a little fuzzy on this, so I can’t give you a more precise answer.”
So, when the question is asked, “How many people will show up for tonight’s event?”, the CORRECT answer might be:
“Well, I’ve reviewed past events that were similar to yours. I conducted a poll to see how ‘hot’ your topic is. Finally, I checked to see what else is going on in town. Based on all of that, I believe you will have between 500 and 600 people.”
“But I need a number.”
“Okay, 550 people, give or take 50.”
“So, you’re saying 550 people will come out tonight?”
“Yes, give or take 50.”
I gave the best answer I could within unavoidable constraints. So, if 535 people show up my answer isn’t IN ERROR. I gave a correct answer, within my stated bounds. But if only 417 people show up, then my answer would be INCORRECT and IN ERROR.
BUT, what if the query had been about a much bigger event, and I had said, “55,000 people will show up, give or take 35,000 people”? Well, the event organizer would have every reason to fire me and laugh me out of her office… BEFORE the event even takes place. Because, who cares if my answer proves to be “correct”? It is virtually useless. 20,450 people showing up, or 79,823 people attending, would both make my answer technically correct. Therefore, my answer was trite and not worthy of serious consideration.
Pat Frank says the extent of the uncertainty that emerges from climate models shows the results are the polar opposite of earthshaking. They are trite and not worthy of serious consideration – whether they prove to be “correct”, or not.
(extending my reply to Nick Stokes)
Notice in my scenario that “give or take 50 people” is an integral part of my answer – as important as the “550 people” part. Let’s say my event organizer went to a follow-up meeting, and said, “Schilling did some analysis and says 550 people are coming.” She would not have given them the answer I gave her. If I found out about it later, I wouldn’t think she was purposely lying. Rather, I would figure she simply didn’t grasp the meaning or implications of “give or take 50 people.”
I honestly think that’s what’s happening here. You’re a bright, articulate guy who can be commended for regularly making the effort to be civil. But I think you just haven’t appropriated the concept of uncertainty, or the ramifications of it – particularly for climate models.
Pat Frank is stating the uncertainty that arises from the calculations and algorithms that make up climate models is so great it drains their outputs of all import. Climate model outputs are studiously carved into the sand and announced with great fanfare… just before a wave of uncertainty – THEIR wave of uncertainty – washes over them.
“I gave the best answer I could within unavoidable constraints. So, if 535 people show up my answer isn’t IN ERROR. I gave a correct answer, within my stated bounds. But if only 417 people show up, then my answer would be INCORRECT and IN ERROR.”
Well, different from Pat Frank’s insistence (but there is no consistency here). He insists that error is just the difference between measurement and reality. Not being outside CI limits. But then again, his paper is titled “Propagation of error…”. I don’t think your version corresponds to what anyone else is saying.
Nick,
“Well, different from Pat Frank’s insistence (but there is no consistency here). He insists that error is just the difference between measurement and reality.”
That *is* the definition of error. But it is *not* the definition of uncertainty. You still don’t seem to grasp the difference between the two. Reading the distance between the mounting holes in a single girder can suffer from error in measurement. Multiple measurements made on that same girder with the same measuring device can help decrease the size of the error in measurement.
Calculating the span length of ten girders tied together with fish plates suffers from *uncertainty*. You don’t know which girders have distances between mounting holes that are short, which girders have distances between mounting holes that are long, and which girders are dead on accurate. No amount of statistical averaging will help you determine the span length of those connected girders. It will always be uncertain till you actually put them together and see what the result is. That’s not “error”, it is “uncertainty”!
“That’s not “error”, it is “uncertainty”!”
Well, on that basis you’d say that measuring 17″ with a 2 ft ruler might have error, but if you measure it out with a 1 ft ruler, it is uncertainty.
But it’s clear that your definition of error is different to what Matthew Schilling calls ERROR! There isn’t much consistency here. Perhaps you could explain the usage of “propagation of error” in Pat Frank’s title.
“…What does it mean, anyway?…”
I’m uncertain.
[mic drop]
Nick Stokes: Well, on that basis you’d say that measuring 17″ with a 2 ft ruler might have error, but if you measure it out with a 1 ft ruler, it is uncertainty.
No. Uncertainty results from measuring with a ruler whose length is not known. Imagine if you will 1,000 rulers off an assembly line, all manufactured “within tolerances”. You can be pretty sure that at some resolution, no two of them have the same deviation from perfect, and that no one of them is perfect. So you choose one of them. If you use it to measure something, you can be pretty sure that the error of measurement is bounded by the limit set by the tolerances. But you can’t be sure that the error is positive, or that it is negative. The same random deviation is added to the estimated length each time you lift the ruler and place the trailing edge where the leading edge was; if something is measured to be 8 ruler lengths, then the uncertainty is 8 times the manufacturing tolerance limits. (as I described it, there may be a different independent random error added each time I move the ruler along, but that isn’t the analogy to Pat Frank’s procedure.)
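The two cases in this comment are easy to compare with a toy Monte Carlo (the tolerance and counts are illustrative numbers of mine, not from the comment): a single ruler with one fixed manufacturing deviation, reused 8 times, gives an error proportional to 8, while independent per-placement errors of the same size would grow only like √8:

```python
import random

random.seed(3)
tol = 0.01          # manufacturing tolerance (illustrative)
n_lengths = 8       # ruler placed end-to-end 8 times
trials = 20000

def fixed_bias_error():
    # One ruler, one unknown deviation b: the same bias adds every placement.
    b = random.uniform(-tol, tol)
    return n_lengths * b

def independent_error():
    # Hypothetical alternative: a fresh independent error at each placement.
    return sum(random.uniform(-tol, tol) for _ in range(n_lengths))

rms_fixed = (sum(fixed_bias_error() ** 2 for _ in range(trials)) / trials) ** 0.5
rms_indep = (sum(independent_error() ** 2 for _ in range(trials)) / trials) ** 0.5
print(rms_fixed, rms_indep)  # ratio close to 8 / sqrt(8) = sqrt(8)
```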
Matthew Schilling, “is so huge it renders the results essentially meaningless”
Nick, “No, it renders uncertainty meaningless.”
No, it renders the results meaningless.
Nick, “What does it mean, anyway? Do you know? An uncertainty that is not related to expected error?”
It means the prediction is meaningless, Nick.
It means the model cannot resolve the effect.
One can always construct some analytical model, calculate something, and get a number. Uncertainty tells one whether that number has any predictive reliability.
Uncertainty stems from a calibration test, using the model to calculate some known quantity. Poor calibration results = poor model.
Huge predictive uncertainty = no predictive reliability.
Standard reasoning in science. Matthew is correct.
Dr. Frank,
Glad to see that you are still contributing to this discussion. Does the 4 W/m^2 estimated cloud error apply to the entire GCM-modeled greenhouse effect or just to the cloud-related impact of the incremental CO2 forcing? I’m assuming it’s the former, but if it turns out that some portion of this error could be a) directly attributed to incremental CO2 forcing and b) that the resulting propagation of the smaller error still exceeded the GCMs’ forecasts, this would be very strong evidence that the models have no skill. Maybe not physically correct, but certainly an iron-clad result. Thank you.
It’s your latter condition, Frank, “cloud-related impact of the incremental CO2 forcing.”
GCMs can’t resolve the response of clouds to CO2 forcing. It’s opaque to them. That means the air temperatures they calculate are physically ungrounded.
I’ll have a post about that, probably by tomorrow.
Well, I may be over simplifying the issue. Thanks for even noticing my comment. Although I suspect that you can’t even drag this race horse to the water, it may be that your paper will encourage a few readers to drink.
Nick and the GCMs seem to treat the cloud effect as an ‘n’ W/m^2 “forcing.” I understood your references to say that forcing’s magnitude is not known more precisely than +/- 4 W/m^2 (although it may well be far less constrained). If I assume for the sake of argument that the rest of the GCM is flawless and all other initial and continuing conditions are correct, then I can run the model with the cloud effect set to ‘n’-4 and ‘n’+4, with the difference in result being the uncertainty range that a +/-4 cloud effect creates (in the model). I do not actually think that the GCMs are flawless, or that the other parameters are correct, so the difference in output will likely not be the uncertainty that should propagate through the model. But, let’s pretend.
Nick seems to contend that a change in the magnitude of the cloud forcing parameter equal to the (minimum) uncertainty has no effect on the results of a GCM run. And yet he also seems to believe that a (much smaller) change in the CO2 forcing parameter makes significant changes in a GCM run. Apparently, some Watts are less equal than others.
I just reread Nick’s post. I still see him pretending that an uncertainty is an error, and alleging that the error will be erased by the other model constraints. So I guess he’s alleging that inputs and coefficients don’t matter, the model will make it right because SCIENCE. (Sort of like modeling the hair length of Marine Recruits. No matter the input length, after processing the length at the exit is always the same.)
“Nick seems to contend that a change in the magnitude of the cloud forcing parameter equal to the (minimum) uncertainty has no effect on the results of a GCM run. ”
In essence Nick is saying that initial conditions are irrelevant. They can be anything and you’ll still get the same answer out of the model. That’s just proof that the models are set up to give a specific answer. That’s why Pat Frank could get the same results with a much simpler emulation!
“…In essence Nick is saying that initial conditions are irrelevant. They can be anything and you’ll still get the same answer out of the model. That’s just proof that the models are set up to give a specific answer…”
Not exactly. Lots of mathematical models need to have some lead-time before they start converging and making sense. It doesn’t mean they are set up to give a specific answer.
Example: if you are modeling levels of a lake in response to the annual water cycle (pretty simple water balance), does it matter if you start it 100% full or completely dry in 1900 if you are interested in results from 2000-2019? Obviously the results will be much different around 1900, but the levels should get closer together over time and at some point align.
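A toy version of this lake example (all numbers invented for illustration): the level relaxes toward a balance between inflow and a level-dependent outflow, so full and empty starts in “1900” become indistinguishable long before “2000”:

```python
# Crude annual water balance: inflow follows a simple wet/dry cycle,
# outflow rises with the lake level, so the level forgets its start.
def simulate(level0, years):
    level = level0
    for y in range(years):
        inflow = 10.0 + ((y % 7) - 3)   # made-up 7-year wet/dry cycle
        outflow = 0.2 * level           # higher lake -> more outflow
        level = max(0.0, level + inflow - outflow)
    return level

full  = simulate(100.0, 100)   # started 100% full in "1900"
empty = simulate(0.0, 100)     # started completely dry in "1900"
print(full, empty)             # nearly identical by "2000"
```

The initial-condition difference decays by a factor of 0.8 per year here, so after a century it is utterly negligible: lead-time, not tuning.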
Michael,
“Obviously the results will be much different around 1900, but the levels should get closer together over time and at some point align.”
Such a model should show not just a single “average” but also trends in the average. Those trends will depend heavily on what the initial conditions are in the time frame you are studying. E.g. if you are in a wet trend in 1900 but in a drought trend in 2000. If you merely want to add all annual levels together and then calculate an average you lose a *lot* of data that will inform you. If all climate models converge to a single average over a long period of time then of what use are they? They will tell you nothing about trends and would certainly be useless telling you what is happening to maximum temperatures and minimum temperatures in the biosphere. It is those maximums and minimums which determine the climate, not a single average.
….after reading this thread…..I take it you are all in agreement with all of the horrendous adjustments they have done to past temperatures
because that’s what the models are tuned to
Latitude
“I take it you are all in agreement with all of the horrendous adjustments they have done to past temperatures…”
Well, apart from your usual, prehistoric flip-flop pictures (probably originating from the Goddard blog): do you have some thing really trustworthy to offer?
Or do you prefer to stay in the good old times where the 1930’s were so pretty warm in comparison to today, due to
– incompetent restriction to TMAX records, though today everybody knows that TMIN increases much faster than TMAX everywhere
– tens of thousands fewer weather stations than today
and, last but not least,
– completely deprecated processing algorithms no one would still keep in use today?
https://drive.google.com/file/d/1ESDd0LROc53jvSm1rZFhjkaQqif7tZ5R/view
Yeah.
Bindidon,
So we have accurate global temperatures from 1895…are you kidding?
Good point. The models are tuned to the fraudulent Hockey Stick. GIGO.
Climate Model Ruse
Climate modelers must make the following assumptions:
1) The correct continuum dynamical equations are being used
This is false because the primitive equations are not the correct reduced system.
If they were, they would be well posed for both the initial and initial-boundary value problems. Oliger and Sundstrom proved that the initial-boundary value problem is not well posed for the primitive (hydrostatic) equations.
2) The numerics are an accurate approximation of the continuum partial differential equations.
This is false as shown in Browning, Hack, and Swarztrauber (1989). The numerics are not accurately describing the correct partial differential equations (the reduced system), and not even accurately approximating the hydrostatic equations, because Richardson’s equation is mimicking the insertion of discontinuities into the continuum solution, destroying the numerical accuracy.
3) The physical parameterizations are accurately describing the true physics.
This is false. In Sylvie Gravel’s manuscript it is shown that the hydrostatic model is inaccurate within 1-2 days, i.e., it starts to deviate from the observations in an unrealistic manner through growth of the velocity at the surface. For forecasting this problem is circumvented by injecting new observational data into the hydrostatic model every few hours. If this were not done the forecast model would go off the rails within several days. This injection of data is not possible in a climate model.
Thus, IPCC uncertainty-language terms are very likely physically meaningless.
Robert,
I do not know if Global Warming is real or not. Certainly there are some physical signs that are disturbing.
But I do know that climate models are hogwash. They are based on the wrong system of equations and they have violated every principle of numerical analysis. The “scientists” are more interested in their funding than scientific integrity. I love computers, but they must be used correctly. Unfortunately this is not the case in many areas of computational fluid dynamics.
Latitude: “I take it you are all in agreement with all of the horrendous adjustments they have done to past temperatures”
I think a fairer summary would be that even after all of the adjustments made to match recorded temperatures, the uncertainty in the parameter values implies that the uncertainty in the forecasts of the GCMs is too great for them to be relied upon.
My two cents here. Nick did hit the nail on the head that the weakness in Pat Frank’s paper is the GCM approximation equations. Nothing in the remainder of Pat’s paper is incorrect that I can see. Nick’s error propagation examples are correct.
However.
Nick is glossing over the power of ensembles (and models in general) – not that he is denying this, but it’s certainly understated. Since it would be virtually impossible to explicitly propagate an assumed form of error through the GCM’s, the simplest fallback becomes ensembles. Unfortunately very little research has been done on evaluating the stability level of initial boundary conditions in GCM’s. Most of the perturbed inputs are driven by researcher “wisdom”. There are (probably) infinite combinations of initial conditions that create unstable or semi-stable GCM outputs in terms of error. It’s not been studied closely, and even a cursory look at the performance of models tuned to a sufficiently early base date shows that model error bounds underestimate real-world conditions many years later.
I rarely have enough time nowadays to do more than drive by these types of problems but I do know friends that work on GCM’s and propagation of errors/model stability is a real problem.
The original dynamical equations are essentially hyperbolic (automatically well posed) and much is known about them mathematically. There are mathematical estimates for the growth of perturbations in the initial conditions for a finite period of time, and the Lax Equivalence Theorem states that a numerical method will converge to the continuum solution in that time if the numerical method is accurate and stable.
However, all of this goes out the window when a dissipation term is added that leads to a larger continuum error than the numerical errors. At that point the numerical solution is converging to the wrong system of equations, i.e., an atmosphere more like molasses.
Nick,
The Lorenz equations were derived by an extremely crude numerical approximation of the Euler equations, so that they cannot be proved to be close to the solution of those equations. Multiple time scale hyperbolic equations have very reasonable determinate solutions for a fixed period of time. I also mention that Kreiss has shown that all derivatives of the incompressible Navier-Stokes equations exist, and that if the numerical accuracy is as required by the mathematical estimates, the numerical method will converge to the continuum solution.
Jerry
John,
Ensembles do not help unless the model is accurately describing the correct system of equations
and that is not the case for weather or climate models. The behavior of air versus molasses illustrates this point.
Jerry
John, “the weakness in Pat Frank’s paper is the GCM approximation equations.”
One equation — that does a bang-up job duplicating GCM outputs.
Maybe we need a different guest blogger. Not a climate scientist and not a misogynistic male denier, either, lol.
https://www.researchgate.net/publication/321213778_Initial_conditions_dependence_and_initial_conditions_uncertainty_in_climate_science
Abstract
This article examines initial-condition dependence and initial-condition uncertainty for climate projections and predictions. The first contribution is to provide a clear conceptual characterization of predictions and projections. Concerning initial-condition dependence, projections are often described as experiments that do not depend on initial conditions. Although prominent, this claim has not been scrutinized much and can be interpreted differently. If interpreted as the claim that projections are not based on estimates of the actual initial conditions of the world or that what makes projections true are conditions in the world, this claim is true. However, it can also be interpreted as the claim that simulations used to obtain projections are independent of initial-condition ensembles. This article argues that evidence does not support this claim. Concerning initial-condition uncertainty, three kinds of initial-condition uncertainty are identified (two have received little attention from philosophers so far). The first (the one usually discussed) is the uncertainty associated with the spread of the ensemble simulations. The second arises because the theoretical initial ensemble cannot be used in calculations and has to be approximated by finitely many initial states. The third uncertainty arises because it is unclear how long the model should be run to obtain potential initial conditions at pre-industrial times. Overall, the discussion shows that initial-condition dependence and uncertainty in climate science are more complex and important issues than usually acknowledged.
I belong to the "Keep it simple" school. I admit that beyond simple algebra, such as is sufficient for electronics, I do not understand any of Nick's and others' comments.
But as we are talking about Global Warming come CC, does it really matter?
After all, this whole sham is apparently based on the alleged heating abilities of a trace gas, CO2.
So let's stick to that one factor. Does CO2 actually heat the atmosphere as a result of it accepting energy from the Sun?
And what about the logarithmic effect of this gas, that it has now reached the point where it cannot affect the heating of the earth, if indeed it ever did?
Going on percentages, the increase of CO2 does not appear to match the increase in temperature from, say, 1880, and that 0.8 C which we are told signals the end of the world as we know it was probably the result of the change from the low temperatures of the Little Ice Age back to what it was in the MWP.
So please, less complicated maths, and just get back to the basics of this whole farce of CC.
MJE VK5ELL
Michael
September 17, 2019 at 6:30 pm
Yes exactly.
The Little Ice Age is over…get used to it! We should be celebrating, not moaning.
Because of the unrealistically large dissipation (necessary to overcome the insertion of large amounts of energy into the smallest scales of climate and weather models, caused by discontinuous parameterizations), one can consider the atmospheric fluid that these models are approximating to be closer to molasses than to air. Obviously, using such a model to predict anything about the earth's atmosphere is dubious at best.
The use of a numerical approximation of the continuum derivative of a differential equation requires that the solution of the differential equation be differentiable, and the higher the order of accuracy of the numerical method, the smoother the continuous solution must be to provide a better result (see the tutorial on Climate Audit). As mentioned above, the parameterizations used for the heating/cooling in these models mimic discontinuities in the forcing (and thus in the solution) of the continuum equations.
As stated above, in Browning, Hack, and Swarztrauber, when the continuum solution was analytic (all derivatives existed), higher-order numerical methods provided more accurate results with less computational burden. However, when the artificial dissipation used in climate and weather models was added, the accuracy of the best numerical method was reduced by several orders of magnitude. The use of this unnatural dissipation is clearly necessitated by the parameterizations' impact on the solution, not by the numerical method.
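The molasses point can be illustrated with a toy calculation (my own sketch, with made-up viscosity values): bolting a large artificial viscosity onto a simple advection scheme makes the computation converge to an advection-diffusion equation rather than to the original hyperbolic one, and the traveling wave is largely wiped out.

```python
import numpy as np

def advect_diffuse(nu, n=100, c=1.0, t_end=2.0):
    """Upwind advection of a sine wave plus explicit diffusion nu;
    return the peak amplitude at time t_end (initial amplitude 1)."""
    dx = 1.0 / n
    dt = 0.2 * dx / c                      # small step, stable for both terms
    x = np.arange(n) * dx
    u = np.sin(2 * np.pi * x)              # smooth initial wave
    for _ in range(int(round(t_end / dt))):
        um, up = np.roll(u, 1), np.roll(u, -1)
        u = (u - c * dt / dx * (u - um)
             + nu * dt / dx**2 * (up - 2 * u + um))
    return np.abs(u).max()

# With no added viscosity the wave largely survives; with a large
# "molasses" viscosity the same scheme has solved a different equation.
print(advect_diffuse(nu=0.0), advect_diffuse(nu=0.02))
```

The numerical method is unchanged between the two runs; only the continuum equation being approximated differs, which is the distinction Jerry draws between continuum error and numerical error.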
Nick,
While it is true that the forcing drops out of your error equation, that assumes no error in the forcing. You need to also show how an error in the forcing terms is propagated. Browning and Kreiss have shown how discontinuous forcing causes all kinds of havoc on the solution of systems with multiple time scales.
And that is exactly what is happening in the climate and weather models.
Jerry
The problem with Nick Stokes is his last name. Navier-Stokes, that Stokes was his grand-father. His frantic attempts to defend Global Circulation Models are because they use Computerized Fluid Dynamics, his gram-pa’s legacy, and he studied it in school.
He is a slick dude, mis-leads at every opportunity. His income apparently depends on this
So, there is this: increasing CO2 does raise the altitude at which the atmosphere is freely able to radiate to space, which lowers the temperature at which the atmosphere radiates to space, which does lower the flux to space. If you do not know what flux is, look it up; it has nothing to do with soldering.
The magnitude of this effect has never been calculated from First Principles; it cannot be, I tried. It does trap energy, also known, to those who have not studied it, as Heat.
"But the effect of CO2 is logarithmic." Is it? The 280 ppm so-called Pre-Industrial atmospheric concentration was already saturated at about 10 m altitude, absorbing and thermalizing all the 15-micron radiation from the surface of the Earth. Raising the CO2 to 400 ppm or so may have lowered that altitude maybe a few cm, causing no change in the temp of the atmosphere. Another word you should look up: "thermalizing."
Every word is true, ask a professor at a good ME school, but do not ask Nick Stokes.
I schooled Mosher on this, ask him, I did.
The General Circulation Models run on these expensive super-computers look at wind, the constant radiation from the Sun, the albedo which changes every second and is very difficult to quantify, water vapor also ever-changing and difficult to quantify, and, the boss, CO2. Volcanoes, sure, whatever.
They all seem to be programmed to increase the Global Average Surface Temperature of our atmosphere by some fractional number of degrees per each extra ppm of CO2, and then amplify this effect by so-called Positive Feedback, but no one can show any such increases from First Principles. It is all What Might Happen, But We Do Not Really Know.
I do not know why anyone would set out to mislead the uneducated public in this way, except that, these guys hate Mining. Mining tears up the Earth until the miners restore it to the way it was before, which most of them do now. Our modern prosperity is all based on Mining. They hate it. Mining: Oil, gas, coal, metals, minerals, imagine life without it.
Someone tell me another reason these gentlemen and ladies would do this. The Biggest Lie since the Big Three: The Check is in the Mail, My Wife Doesn't Understand Me, and, the biggest before this one, We're from the Government and We're Here to Help You…
Seems if someone actually tells you, you cannot handle it.
How about Sir D. Attenborough, or Dr. Schellnhuber, CBE just for starters. The goal is at most 1 billion human beings. With clean hands 5+ billion human beings erased.
This kind of stuff got a bad rap in WWII, so it was renamed “conservation”, now Green New Deal.
Exactly as Abba Lerner publicly stated in 1971 at NY Queens College: "if Germany had accepted austerity, Hit*ler would not have been necessary".
Today if the west had accepted green austerity, with all kinds of sciency flimflam, Greta would not have been necessary.
Resorting to the abuse of kids with “die-ins” should tell you something of their desperation.
Using kids like this shows they even do not believe the models.
There’s more – Dr. Happer just left his post at Pres. Trump’s NSC. Happer’s climate review, delayed 1 year, was opposed by Presidential Science Advisor Kelvin Droegemeier, a climate modeller, who was endorsed by none other than Obama’s Holdren, a notorious population reduction advocate with close allies “Population Bomb” Ehrlich and Dr. Schellnhuber.
General circulation models can be useful, albeit for short time scales. Global climate models are blunt tools that pretend to be sophisticated but are no better than drastic simplifications. Some folks like to say GCM and not specify which one they are referring to, and some people will argue that global climate models are general circulation models with bells and whistles (actually with many bells and whistles taken away).
Global climate models are CFD (computational, not computerized) models in the sense that they apply some CFD concepts and include simplistic numerical solutions to Navier-Stokes. They are CFD on a technicality. The resolution and mesh are so simplistic that the models are portrayed as Corvettes and are really just Chevettes. It is the same way that simple algebraic equations used to generate numerical approximations to solutions of differential equations in climate models are presented as governing differential equations.
Michael Moon said:
“The problem with Nick Stokes is his last name. Navier-Stokes, that Stokes was his grand-father. His frantic attempts to defend Global Circulation Models are because they use Computerized Fluid Dynamics, his gram-pa’s legacy, and he studied it in school.”
Studied it in school? Dr Stokes did a lot more than that. He and his team were awarded the CSIRO research medal in 1995 for their development of the Fastflow CFD software. He spent 30 years at the CSIRO working on applied math and statistics.
“He is a slick dude, mis-leads at every opportunity. His income apparently depends on this”
He’s a retired grandfather. No one is paying him to do this.
This is a cheap and unnecessary attack Michael.
My my Toto,
here we are in Oz :
the great and mighty Wizard Nick has decreed that initial conditions are irrelevant to output from the black box that is behind his curtain; he tells us ensembles are the key! “If you want to know how a system responds to error, make one and see” and “Very quickly one error starts to look pretty much like another. This is the filtering that results from the very large eigenspace of modes that are damped by viscosity and other diffusion. It is only the effect of error on a quite small space of possible solutions that matters.”
What’s that Toto? “Woof, woof”
You want to know what the actual uncertainty is? Well my love, the great and powerful NickOz doesn’t trouble himself with an answer to that: he just says that silly old duffer Frank is filled with straw and needs a brain.
OK, yes, and strawman Frank has said there actually are uncertainties and they make the GCMs’ temperature signal invisible?
Oh we’re back in munchkin land, don’t you see there are a whole variety of differential equations behind the curtain of GCMs and some make pretty butterflies?
Oh silly Toto I was blown to Oz by a weather event and because it’s pre-1980 it’s not a climate event as the CO2 is too low for hurricanes in Kansas yet, and Aunt Em will be pleased when I blow back.
I suspect that there are a fair number of practicing climate scientists who are cowardly lions — they have been displaced to Oz, and they are afraid to confront the Wizard.
Nick,
I particularly liked this.
“Analysis of error in CFD and GCMs is normally done to design for stability. It gets too complicated for quantitative tracing of error, and so a more rigorous and comprehensive solution is used, which is … just do it. If you want to know how a system responds to error, make one and see. In CFD, where a major source of error is the spatial discretisation, a common technique is to search for grid invariance. That is, solve with finer grids until refinement makes no difference.”
Can you cite a single example of a successful L2 convergence test on any AOGCM? Ignoring “white mice” experiments on the primitive equations, the only attempts I have seen (and there are not many published) have acknowledged that unpredictable (arbitrary) adjustments need to be made to the tuning parameters to align any grid refinement with coarser grid solutions.
A further problem is that with the mixture of explicit and implicit terms involved in the linked equation set, the solution is heavily dependent on the order in which the equations are updated. See the Donahue and Caldwell paper presented here:- http://www.miroc-gcm.jp/cfmip2017/presentation.html
An a priori requirement for the application of analytic methods for assessing error propagation is that the numerical formulation is actually solving the governing equations you think it is. In the case of the AOGCMs this is simply not the case. They do not conserve water mass. They cannot match atmospheric angular momentum. They cannot match the atmospheric temperature field, nor regional estimates of temperature and precipitation, nor SST, nor SW progression – even in hindcast. The only thing that they do consistently is to preserve the aggregate, internal, model relationship: ocean heat uptake varies with net flux = F – restorative flux. They do this, however, for radically different estimates of the forcing F, ocean heat uptake, and emergent temperature-dependent feedback on which the restorative flux term is calculated. It makes more sense to me, then, to take this latter equation on its own and test the observational constraints on the credible parameter space rather than put any credibility into numerical models which are not just error prone, but error prone in a highly unpredictable way.
I say this as someone who has (often) run error propagation tests on numerical models, as part of an uncertainty analysis. The difference is that those models were credible.
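For readers unfamiliar with the term, an L2 convergence test in its simplest form looks like the sketch below (a toy ODE with a known exact solution; kribaez's point is that nothing remotely like this has been shown to succeed on a full AOGCM).

```python
import numpy as np

def l2_error(nsteps):
    """Solve y' = -y, y(0) = 1 on [0, 1] with forward Euler; return
    the L2 norm of the error against the exact solution exp(-t)."""
    dt = 1.0 / nsteps
    t = np.linspace(0.0, 1.0, nsteps + 1)
    y = (1.0 - dt) ** np.arange(nsteps + 1)   # closed form of the Euler iteration
    return np.sqrt(np.mean((y - np.exp(-t)) ** 2))

# For a first-order scheme, halving the step should roughly halve the
# error; the observed order is log2 of successive error ratios.
errs = [l2_error(n) for n in (100, 200, 400)]
orders = [float(np.log2(errs[i] / errs[i + 1])) for i in range(2)]
print(errs, orders)
```

The observed order matching the theoretical order is the evidence that the scheme is converging to the continuum solution; when refinement demands retuning, as kribaez describes, that evidence is absent.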
kribaez
Thanks for the interesting critique, way way away from all these ‘opinion-based’ comments upthread having nothing to do with science, let alone engineering.
Rgds
J.-P. D.
You point out some other good parameters that must have uncertainty associated with them too. Climate as defined by the models just doesn’t mean much. Being able to say the Earth will be 4 degrees hotter in 100 years basically doesn’t give a clue about climate change. Will some areas be wetter or drier, cooler or warmer, more wind or less wind? This is where relying on the models just isn’t satisfying.
“Can you cite a single example of a successful L2 convergence test on any AOGCM? “
No. GCMs are limited by a Courant condition based on gravity waves (the limit would be the speed of sound in a closed space). It just isn’t practical to get complete grid invariance.
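For scale, a back-of-envelope version of that constraint (assumed equivalent depth, not any model's actual configuration): the external gravity-wave speed is sqrt(g*H), and an explicit scheme needs dt < dx/c.

```python
import math

g, H = 9.81, 10000.0              # gravity; ~10 km equivalent depth, assumed
c = math.sqrt(g * H)              # external gravity-wave speed, ~313 m/s
for dx_km in (200.0, 100.0, 25.0):
    dt_max = dx_km * 1000.0 / c   # largest stable explicit timestep, seconds
    print(dx_km, round(dt_max))   # finer grids force ever-smaller timesteps
```

Halving dx halves the allowable dt as well, so a proper grid-invariance sweep multiplies cost across all four dimensions, which is the practicality problem.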
“See the Donahue and Caldwell paper presented here”
Thanks for the very useful link. He’s not talking about the core PDE algorithm, but the various associated processes. As he says: “There is wisdom to certain orders, but no rules that I know of”. He’s not saying that people are getting it wrong; he’s saying that if you made other choices, it could make a substantial difference. He thinks it could explain some of the difference between models that is observed.
Well, it could. The differences are there and noted. That isn’t anything new.
” is that the numerical formulation is actually solving the governing equations you think it is. In the case of the AOGCMs this is simply not the case. They do not conserve water mass. They cannot match atmospheric angular momentum.”
I’m not sure what your basis is for saying that. They certainly have provision for conserving water mass; phase change is a complication, but they do try to deal with it, and I don’t know of evidence that they fail. Likewise with angular momentum, where surface boundary layers are the problem, but again I think you’d need to provide evidence that they fail.
But the thing about “the numerical formulation is actually solving the governing equations you think it is” is that you can test whether the governing equations actually have been satisfied.
Nick,
“Likewise with angular momentum, where surface boundary layers are the problem, but again I think you’d need to provide evidence that they fail.”
There are hundreds of papers on this subject. You might start here (while noting that these comparisons allow for forced “nudging” of the QBO and prescribed SST boundary conditions):-
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2011JD016555
For water mass conservation, see the precipitation bias data in the SI from Randall, D. A., and Coauthors, 2007: Climate models and their evaluation. Climate Change 2007: The Physical Science Basis.
“But the thing about “the numerical formulation is actually solving the governing equations you think it is” is that you can test whether the governing equations actually have been satisfied.”
Not normally, you can’t. If you have an analytic solution available for a white-mouse problem, you can test against that. Here you do not. Hence, for a given numerical formulation, you can at best confirm that the system converges to A solution for that formulation. There is no guarantee that it is a CORRECT solution of the governing equations, since such a numerical solution is subject to, inter alia, time and space truncation error. In the case of climate models, it is also subject to non-physical adjustment to force convergence – hyperviscosity, or “atmosphere like molasses” to quote Gerald Browning.
If you can run successful L2 convergence tests that might give you some optimism that it is a credible solution within the epistemology of the numerical scheme. If you can change the solution order of coupled or linked equations and get the same solution, that might also give you some comfort with respect to truncation errors. If you fail at both, that is sufficient to know that resolution at grid level scale is based on worthless self-delusion. The only thing left is to test whether in aggregate behaviour, a model is conserving what it is supposed to be conserving.
Even if someone does not understand the intricacies of numerical analysis very well, most intelligent individuals have an intuitive grasp that large errors in comparisons between observed and modeled critical variables in history matching (“HINDCASTING”) do not bode well for any future projections.
AOGCMs might still have some value for gaining insight into large-scale mechanisms or missing physics, but IMO the reliance on projections from such models to inform decision-making is delusional.
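The order-of-update sensitivity mentioned above is easy to demonstrate on a toy split system (my illustration, with arbitrary coefficients): when the two split operators do not commute, applying them in the opposite order gives a measurably different answer at any practical step size.

```python
import numpy as np

def run(order, nsteps=100, omega=3.0, lam=2.0, t_end=1.0):
    """Integrate a 2-component toy system by operator splitting:
    process A rotates the state, process B damps its first component.
    'order' chooses which process is applied first within each step."""
    dt = t_end / nsteps
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    rot = np.array([[c, -s], [s, c]])           # process A: rotation
    damp = np.diag([np.exp(-lam * dt), 1.0])    # process B: one-sided damping
    step = rot @ damp if order == "AB" else damp @ rot
    y = np.array([1.0, 0.0])
    for _ in range(nsteps):
        y = step @ y
    return y

# Rotation and one-sided damping do not commute, so the two update
# orders disagree by a non-negligible amount after a fixed time.
print(run("AB"), run("BA"), np.linalg.norm(run("AB") - run("BA")))
```

In a GCM the "processes" are dynamics, radiation, convection, and so on, and the Donahue and Caldwell presentation linked above is about exactly this choice.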
kribaez
“Not normally you can’t [test].”
Yes, you can, by the old-fashioned method of substituting the solution into the difference (or equivalent) equations representing the equation, to see if they are satisfied.
I don’t think this would normally be done on a GCM in production mode. But it will be done extensively during development.
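For concreteness, that substitution check might look like this on a toy scheme (my sketch, not a claim about GCM development practice): the scheme's own solution drives the difference-equation residual down to rounding error, while even the exact continuum solution leaves the O(dt) truncation error behind.

```python
import numpy as np

def residual(y, dt):
    """Residual of the forward-Euler difference equation for y' = -y:
    r_k = (y[k+1] - y[k])/dt + y[k]; zero iff y satisfies the scheme."""
    return (y[1:] - y[:-1]) / dt + y[:-1]

n, dt = 50, 0.02
k = np.arange(n + 1)
y = (1.0 - dt) ** k          # the forward-Euler solution itself
z = np.exp(-dt * k)          # the exact continuum solution

# y satisfies the difference equation to rounding error; z leaves an
# O(dt) residual, which is precisely the local truncation error.
print(np.abs(residual(y, dt)).max(), np.abs(residual(z, dt)).max())
```

Note that this confirms the code solved its own difference equations, which is Nick's claim; kribaez's objection is that it says nothing about whether those difference equations approximate the right continuum system.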
Nick,
To see that the boundary layer fails miserably, see Sylvie Gravel’s manuscript (use Google): artificial drag and diffusion at the boundary are used to slow down the unrealistic growth of the velocity at the surface. But the difference between observations and the model patch causes the model to deviate from reality in a few days.
Jerry
When I read about using “ensembles” to arrive at average GCM values, I always wonder how many GCM runs are quietly round-filed or sent to the bit bucket when their results don’t reflect the “narrative.” IOW only positive results are acceptable for publication.
Anyone have any examples of that happening?
Thank you for the enlightening posts and conversations.
Sadly, it seems to remain that the projections don’t match reality. Even if reality was to hurry and catch up, the projections still would not have matched.
Are the early data sets being adjusted and the original collected data being destroyed (or hidden so they can’t be used)? If so, it seems to me that this is all noise, not science.
A bit cryptic. b² or not b²? That is the question.
I am trying to post a LaTeX file on numerical approximations of derivatives. It worked successfully on Climate Audit, but not here, thus the test. I see that you did not use LaTeX in your post, but winged it.
Nick,
It is very clear you are not up to date on the literature. In particular I suggest you read the literature on the Bounded Derivative Theory (BDT) for ODE’s and PDE’s by Kreiss and on the atmospheric equations by Browning and Kreiss. Your example is way too simple to explain what happens with hyperbolic PDE’s with multiple time scales.
Jerry
“Your example is way too simple to explain what happens with hyperbolic PDE’s with multiple time scales.”
I’m not trying to explain that. I’m trying to explain why you can’t ignore the PDE totally, as Pat Frank did.
What happened to my last post? Is it being censored? I guess hard mathematics is not allowed on this site?
Nick, “I’m not trying to explain that. I’m trying to explain why you can’t ignore the PDE totally, as Pat Frank did.”
Your PDEs produce linear extrapolations of GHG forcing, Nick. Showing that, is what I did. Once linearity of their output is shown, linear propagation of their error is correct.
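A stripped-down sketch of that argument, with illustrative numbers of my own rather than Frank's actual calibration: if the emulated projection is linear in cumulative forcing, then a constant per-step uncertainty accumulates in quadrature and grows like sqrt(n), independently of the emulated mean.

```python
import numpy as np

years = np.arange(1, 101)
sensitivity = 0.5            # K per (W/m^2) -- assumed, for illustration only
forcing_per_year = 0.04      # W/m^2 per year -- assumed
u_step = 0.1                 # per-step uncertainty in K -- assumed

# Linear emulation of the projection, and root-sum-square accumulation
# of the per-step uncertainty.
mean_projection = sensitivity * forcing_per_year * years
uncertainty = u_step * np.sqrt(years)

print(mean_projection[-1], uncertainty[-1])
```

With these made-up numbers the century-end mean is 2 K while the accumulated envelope is ±1 K; the dispute between Frank and Stokes is over whether the quadrature propagation law applies, not over the arithmetic.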
Nick,
But that is exactly what climate and weather modelers are doing. By adding unrealistically large dissipation, they have essentially altered the accepted continuum dynamical equations to be closer to a heat equation than a hyperbolic system with small dissipation. And heat equations have very different error characteristics than hyperbolic ones with small dissipation. This is a continuum error, and it overwhelms the numerical truncation error (Browning, Hack and Swarztrauber).
The large dissipation is necessitated because the discontinuous forcing is mimicking a discontinuous continuum solution and injecting large amounts of energy into the smallest scales of the model. As shown in the tutorial I posted on Climate Audit, all numerical approximations require that the continuum solution of the PDE be differentiable, so this is a basic violation of numerical analysis. Kreiss and I published the manuscript “The impact of rough forcing on systems with multiple time scales” that you need to read.
Finally, I have submitted a manuscript that shows that the hydrostatic equations are not the correct continuum equations (under review). I expect problems in the review process, but the mathematics cannot be refuted. Given that, the wrong dynamical equations have been tuned to produce the answers the modelers want, and Pat’s error analysis becomes a bit more believable.
In any case, I find it totally amusing that a simple linear model can reproduce the results of thousands of hours of wasted computer time.
Jerry
Nick, I have a generic question on fluid dynamics.
Assume a complex system with convection, evaporation, and condensation. My real physical model was a large tube with an ice-water bath at the top and a halogen projector bulb shining down on a little island surrounded by water, with a thermocouple on the island shaded by a tiny umbrella from the local bar. I let this run until a stable equilibrium was established. I then changed the atmosphere from air to about 50% carbon dioxide. Drum roll please… Results: the temperature went up on the island… for a couple of minutes. Five minutes after the gas exchange the temperature was back to the original equilibrium, with no net change.
My little toy model had a robust equilibrium state and carbon dioxide had no net change on the island.
Now my question: if I were to add additional nonlinear methods of heat transport, e.g. a sealed heat pipe within the tube, a highland pond around the island, and a little stream down to the lake fed with condensate from the ice bath at the top (total water mass the same), what would happen to the robustness of the equilibrium state?
I predict that the equilibrium state will become more stable as more dissimilar non-linear heat paths are added. I suggest that this generic question could be addressed by computational models.
It would be fun to repeat my experiment in a silo big enough for a cloud to form.
Basically, the real model is that about 1361 W/m2 of sunlight approaches the Earth. A lot reaches the surface. Stuff happens, and the heat leaves. Temperatures are determined by the thermal resistance that that heat flux encounters. The global models of Manabe and Wetherald got all that pretty right (including varying GHG), but they had to estimate the resistances. GCMs add in synthetic weather to improve the estimate, and so can give a more reliable estimate of the effects on temperature.
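The first step of that balance can be written down in a few lines (textbook values, nothing model-specific): absorbed sunlight must equal emitted longwave, which fixes the effective radiating temperature; the surface sits above it by however much greenhouse "thermal resistance" intervenes.

```python
S = 1361.0          # solar constant, W/m^2
albedo = 0.30       # planetary albedo (textbook value)
sigma = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4

absorbed = S * (1.0 - albedo) / 4.0    # global-mean absorbed flux, ~238 W/m^2
T_eff = (absorbed / sigma) ** 0.25     # effective radiating temperature, ~255 K
print(absorbed, T_eff)
```

Everything the GCMs add, in this view, is a better estimate of the gap between that ~255 K radiating temperature and the surface temperature.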
I thought I would contribute two quotes from John Tukey
1. Anything worth doing is worth doing badly. In context, he did not literally mean “badly”, but it is an important counterpoise to Emerson’s “anything worth doing is worth doing well”. In the common case of time pressure, it is important not to stall and extend and postpone forever getting the best answer; you have to get at least a workable first approximation to be of any use at all.
2. It is better to have an approximate answer to the right question, …, than to have an exact answer to the wrong question, … . Not that propagation of the uncertainty of the initial values in the DE solution, addressed in detail by Nick Stokes here, is exactly a “wrong” question; but the propagation of the error/uncertainty of parameter estimates, addressed by Pat Frank, is definitely a right question. His approach may not be as good as the hypothetical bootstrapping and other resampling procedures that cannot meaningfully be carried out until there are much faster computers, but his approach is eminently defensible, and not likely to be improved upon any time soon. I am hoping critics like Roy Spencer and Nick Stokes are able to provide their ideas of improved computations, before say 2119.
Pat Frank did a very good calculation of a very good approximation to the uncertainty in the GCM “forecasts” that results from uncertainty in one parameter estimate.
Nick Stokes showed how to do an estimate of the uncertainty propagated from uncertainty in the initial values (estimates of the present state).
Other sources of uncertainty:
All the other parameters whose point estimates have been written into the code, hence considered “known” by Nick Stokes; but not considered “known about the processes” by anybody.
Choice of grid size and numerical integration algorithm expressed in the GCM computer code.
By any reasonable estimate, Pat Frank has underestimated the uncertainty in the GCM model output. He has written a really fine and admirable paper.
And, kudos to WUWT for making Nick Stokes’ essay public for reading and commentary. And kudos to Pat Frank and Nick Stokes for engaging with the commenters and critics.
Readers will not have missed that the same sense of urgency motivates the modelers to make simplifying assumptions about parameters and processes that will be better known 100 years hence. The models are admirable achievements of human effort and intelligence. They just have not yet been shown to be adequate to evaluate any expensive public policy other than further work on the models.
well said
Nick, thank you for a careful statement and analysis from someone who knows what he is talking about. I stopped reading Pat Frank’s analysis when he interpreted his discovery (made earlier by Willis Eschenbach) that the temperature predictions of GCM models obey Taylor’s Theorem (1712) as some deep insight into the models rather than as a mathematical constraint that every true or false model must obey.
Your error analysis assesses an important problem in computer science – the computational stability of the algorithms used – but not the problem that is of most practical or policy importance. For assessing computational stability, the concept of “error” and its propagation is the one that you discuss, but for policy purposes a different concept is needed. Several commentators have intuited that your concept of error is not the same as theirs, but it is worth setting out the difference.
To do that, we need the solution to “God’s differential equation”, which governs the movement of every atom in the universe. Using your simplified symbolic form, this is
z’ = G(t)*z + g(t)
contrasted with the climate science differential equation, your equation (1):
y’ = A(t)*y + f(t)
G(t) and A(t), and g(t) and f(t), differ because we probably do not know all of the processes at work, because we must parameterize some processes for tractability and we do not know all of the right parameter values, and because we cannot know (and do not want to know) the details of every atom in the universe. God solves his differential equation at a space-time resolution of one Planck length or so, and we can find the solution z by observation of the world as it evolves.
The error that matters for policy is not the computational stability of our algorithm, but the difference between our solution and God’s, which is given by
(y-z)’ = G(t)*(y-z) + (A(t)-G(t))*y + (f(t)-g(t))
Analysis of this difference requires knowledge of G(t) and g(t), for the reasons that you set out. We cannot assess it from knowledge of A(t) and f(t) alone.
The CMIP project provides a clever piece of misdirection. Instead of comparing y to z, it compares y1, y2, y3, … derived from A1(t), A2(t), A3(t), …. However, none of these tells us anything about the “policy-relevant” error y-z. The only place where I see regular comparisons between model solutions and observation is at sites like WUWT, and the comparisons do not seem flattering. The IPCC reports tend to focus on the latest models and their predictions, and by definition it is too early to assess their accuracy against data that was unknown at the time the models were formulated.
But, to reassure Pat and Willis, God’s solution z and the error y-z will both obey Taylor’s Theorem.
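Paul's decomposition can be sanity-checked numerically in the scalar case (arbitrary illustrative coefficients, nothing climate-specific): integrate y' = A*y + f and z' = G*z + g, then separately integrate e' = G*e + (A-G)*y + (f-g), and e reproduces y - z.

```python
A, G = -1.0, -1.2      # "our" dynamics vs "God's" (assumed numbers)
f, g = 0.5, 0.4        # "our" forcing vs "God's" (assumed numbers)
dt, n = 1e-4, 20000    # forward Euler out to t = 2

y, z, e = 1.0, 1.0, 0.0    # identical initial states, so the error starts at 0
for _ in range(n):
    e += dt * (G * e + (A - G) * y + (f - g))   # uses y at the current step
    y += dt * (A * y + f)
    z += dt * (G * z + g)

# e tracks y - z, confirming that analyzing the policy-relevant error
# requires knowing G and g, exactly as argued above.
print(y - z, e)
```

The check is trivial here only because G and g are written down; Paul's point is that for the real climate they are not, so the policy-relevant error cannot be computed from A and f alone.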
Paul,
See my analysis above and my statement of the difference between modeling molasses and air.
Nick does not understand the difference between continuum error and model error.
Jerry
“So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number.”
is as meaningful as saying “life is just the discrepancy that arises between the world with life and that without”.
And then Nick is effectively arguing that economics, art, politics, etc. do not exist because they are “just” life, which disappears in the equations.
The point is that “error” is not “just” anything. “Error”, or to use a more accurate term which gives respect to the subject, “natural variation”, is, like life, a highly complex issue with many, many facets, and at least in part NON-LINEAR!! Yes, like Nick, you can in some circumstances use a model for “error” that means it can be largely ignored. But just because that model CAN be used does not mean it is generally applicable, unless you PROVE the variation behaves as required by EXPERIMENTATION.
To put it in the simplest form “natural variation” is a theoretical way to model what we don’t know. And what Nick shows is that he has not the slightest clue about what he doesn’t know.