guest post by Nick Stokes

There has been a lot of discussion lately of error propagation in climate models, e.g. here and here. I have spent much of my professional life in computational fluid dynamics, dealing with exactly that problem. GCMs are a special kind of CFD, and both are applications of the numerical solution of differential equations (DEs). Propagation of error in DEs is a central concern. It is usually described under the heading of instability, which is what happens when errors grow rapidly, usually due to a design fault in the program.

So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number. It doesn’t matter for DE solution why you think it is wrong; all that matters is what the iterative calculation then does with the difference. That is the propagation of error.

A general linear equation in time can be formulated as

y’ = A(t)*y+f(t) ……….(1)

y(t) could be just one variable or a large vector (as in GCMs); A(t) will be a corresponding matrix, and f(t) could be some external driver, or a set of perturbations (error). The y’ means time derivative. With a non-linear system such as Navier-Stokes, A could be a function of y, but this dependence is small locally (in space and time) for a region; the basics of error propagation follow from the linearised version.

I’ll start with some bits of DE theory that you can skip (I’ll get more specific soon). If you have another solution z which is the solution following an error, then the difference satisfies

(y-z)’=A*(y-z)

The dependence on f(t) has gone. Error propagation is determined by the homogeneous part y’=A*y.

You can write down the solutions of this equation explicitly:

y(t) = W(t)*a, W(t) = exp(∫ A(u) du )

where the exp() is in general a matrix exponential, and the integral is from starting time 0 to t. Then a is a vector representing the initial state, where the error will appear, and the exponential determines how it is propagated.

You can get a long way by just analysing a single error, because the system is linear and instances can be added (superposed). But what if there is a string of sequential errors? That corresponds to the original inhomogeneous equation, where f(t) is some kind of random variable. So then we would like a solution of the inhomogeneous equation. This is

y(t) = W(t) ∫ W^{-1}(u) f(u) du, where W(t)=exp(∫ A(v) dv ), and integrals are from 0 to t

To get the general solution, you can add any solution of the homogeneous equation.
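As a sanity check, this solution formula can be verified numerically in a scalar case. Here is a minimal Python sketch (my illustration, with A = −2 and f = 1 chosen for convenience); the formula then reduces to y(t) = (1 − e^(−2t))/2, and a crude forward-Euler integration converges to it:

```python
import math

# Scalar instance of equation (1): y' = A*y + f(t) with A = -2, f = 1, y(0) = 0.
# The solution formula y(t) = W(t) * integral_0^t W^{-1}(u) f(u) du reduces here
# to y(t) = (1 - exp(-2 t)) / 2.
A = -2.0

def closed_form(t):
    return (1.0 - math.exp(-2.0 * t)) / 2.0

def euler(t_end, n_steps):
    """Forward-Euler integration of y' = A*y + 1, y(0) = 0."""
    dt = t_end / n_steps
    y, t = 0.0, 0.0
    for _ in range(n_steps):
        y += dt * (A * y + 1.0)   # f(t) = 1
        t += dt
    return y

# The numerical solution should approach the formula as the step shrinks.
err = abs(euler(2.0, 100_000) - closed_form(2.0))
```

The same check works for any constant A and any driver f for which the integral can be done in closed form.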

For the particular case where A=0, W is the identity, and the solution is a random walk. But only in that particular case. Generally, it is something very different. I’ll describe some special cases, in one or a few variables. In each case I show a plot with a solution in black, a perturbed solution in red, and a few random solutions in pale grey for context.

#### Special case 1: y’=0

This is the simplest differential equation you can have. It says no change; everything stays constant. Every error you make continues in the solution, but doesn’t grow or shrink. It is of interest, though, in that if you keep making errors, the result is a random walk.
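To see this concretely, here is a tiny Python sketch (mine, not from the post): with y’ = 0, each step’s fresh error is simply kept, and the path is the running sum of the errors.

```python
import random

# Special case 1: y' = 0 with a fresh error each step. The state is just the
# running sum of the errors -- a random walk, with no growth and no damping.
random.seed(0)

def walk(n_steps, sigma=1.0):
    y, path = 0.0, []
    for _ in range(n_steps):
        y += random.gauss(0.0, sigma)   # the step's error is kept, unamplified
        path.append(y)
    return path

p = walk(10_000)
```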

#### Special case 2: y”=0

The case of no acceleration. Now if there is an error in the velocity, the error in location will keep growing. Already different, and already the simple random walk solution for successive errors doesn’t work. The steps of the walk would expand with time.

#### Special case 3: y’=c*y

where c is a constant. If c>0, the solutions are growing exponentials. The errors are also solutions, so they grow exponentially. This is a case very important to DE practice, because it is the mode of instability. For truly linear equations the errors increase in proportion to the solution, and so maybe don’t matter much. But for CFD it is usually a blow-up.

But there are simplifications, too. For the case of continuous errors, the earlier ones have grown a lot by the time the later ones get started, and really are the only ones that count. So it loses the character of random walk, because of the skewed weighting.

If c<0, the situation is reversed (in fact, it corresponds to above with time reversed). Both the solutions and the errors diminish. For continuously created errors, this has a kind of reverse simplifying effect. Only the most recent errors count. But if they do not reduce in magnitude while the solutions do, then they will overwhelm the solutions, not because of growing, but just starting big. That is why you couldn’t calculate a diminishing solution in fixed point arithmetic, for example.
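Both signs can be illustrated in two lines (my sketch; c = ±1 and a 1e-6 error are arbitrary choices):

```python
import math

# Special case 3: y' = c*y. An error of size e0 made at some time has size
# e0 * exp(c * elapsed) a time `elapsed` later.
def error_at(e0, c, elapsed):
    return e0 * math.exp(c * elapsed)

grown  = error_at(1e-6, +1.0, 20.0)   # c > 0: a microscopic error becomes huge
damped = error_at(1e-6, -1.0, 20.0)   # c < 0: the same error is wiped out
```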

This special case is important, because it corresponds to the behaviour of eigenvalues in the general solution matrix W. A single positive eigenvalue of A can produce growing solutions which, started from any error, will grow and become dominant. Conversely the many solutions that correspond to negative eigenvalues will diminish and have no continuing effect.

#### Special case 4: Non-linear y’=1-y^{2}

Just looking at linear equations gives an oversimplified view where errors and solutions change in proportion. The solutions of this equation are the functions tanh(t+a) and coth(t+a), for arbitrary a. They tend to 1 as t→∞ and to -1 as t→-∞. Convergence is exponential. So an error made near t=-1 will grow rapidly for a while, then plateau, then diminish, eventually rapidly and to zero.
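This grow-then-die behaviour is easy to reproduce numerically. A Python sketch (my toy setup: classical RK4 with a 1e-3 perturbation applied at t = −1):

```python
import math

# Special case 4: y' = 1 - y^2. Integrate tanh(t) from t = -1 and a copy perturbed
# by 1e-3; the gap first grows (while y < 0), then collapses as both paths lock
# onto the attracting state y = 1.

def rhs(y):
    return 1.0 - y * y

def rk4_path(y0, t_span=7.0, n=7000):
    """Classical RK4 for the autonomous equation y' = 1 - y^2."""
    dt = t_span / n
    y, out = y0, [y0]
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        out.append(y)
    return out

a = rk4_path(math.tanh(-1.0))          # the solution tanh(t), t from -1 to 6
b = rk4_path(math.tanh(-1.0) + 1e-3)   # same start plus a small error
gap = [abs(u - v) for u, v in zip(a, b)]
```

The gap peaks near t = 0 and ends up far smaller than the error that was made.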

#### Special case 5: the Lorenz butterfly

This is the poster child for vigorous error propagation. It leads to chaos, which I’ll say more about. But there is a lot to be learnt from analysis. I have written about the Lorenz attractor here and in posts linked there. At that link you can see a gadget that will allow you to generate trajectories from arbitrary start points and finish times, and to see the results in 3D using webGL. A typical view is like this

Lorenz derived his equations to represent a very simple climate model. They are:

x’ = σ*(y − x)
y’ = x*(ρ − z) − y
z’ = x*y − β*z

The parameters are conventionally σ=10, β=8/3, ρ=28. My view above is in the x-z plane and emphasises symmetry. There are three stationary points of the equations: one at (0,0,0), and two at (a, a, 27) and (−a, −a, 27), where a = sqrt(72). The last two are the centres of the wings. Near the centres, the equations linearise to give a solution which is a logarithmic spiral. You can think of it as a version of y’=c*y, where c is complex with a small positive real part. So trajectories spiral outward, and at this stage errors will propagate with exponential increase. I have shown the trajectories on the plot with rainbow colors, so you can see where the bands repeat, and how the colors gradually separate from each other. Paths near the wing but not on it are drawn rapidly toward the wing.

As the paths move away from the centres, the linear relation erodes, but really fails approaching z=0. Then the paths pass around that axis, also dipping towards z=0. This brings them into the region of attraction of the other wing, and they drop onto it. This is where much mixing occurs, because paths that were only moderately far apart fall onto very different bands of the log spiral of that wing. If one falls closer to the centre than the other, it will be several laps behind, and worse, velocities drop to zero toward the centre. Once on the other wing, paths gradually spiral outward toward z=0, and repeat.
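The combination of vigorous error growth and bounded overall behaviour can be sketched in a few lines (my illustration; plain forward Euler is crude but adequate here):

```python
# The Lorenz system x' = s*(y - x), y' = x*(r - z) - y, z' = x*y - b*z with the
# conventional s = 10, b = 8/3, r = 28. Two starts differing by 1e-9 end up far
# apart, yet both stay on the bounded attractor: the error grows, but only
# within fixed limits.
S, B, R = 10.0, 8.0 / 3.0, 28.0

def step(p, dt):
    x, y, z = p
    return (x + dt * S * (y - x),
            y + dt * (x * (R - z) - y),
            z + dt * (x * y - B * z))

def run(p, t_end=25.0, dt=0.001):
    for _ in range(round(t_end / dt)):
        p = step(p, dt)
    return p

a = run((1.0, 1.0, 1.0))
b = run((1.0, 1.0, 1.0 + 1e-9))          # a one-part-in-a-billion perturbation
separation = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
```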

#### Is chaos bad?

Is the Pope Catholic? you might ask. But chaos is not bad, and we live with it all the time. There is a lot of structure to the Lorenz attractor, and if you saw a whole lot of random points and paths sorting themselves out into this shape, I think you would marvel not at the chaos but the order.

In fact we deal with information in the absence of solution paths all the time. A shop functions perfectly well even though it can’t trace which coins came from which customer. More scientifically, think of a cylinder of gas molecules. Computationally, it is impossible to follow their paths. But we know a lot about gas behaviour, and can design efficient internal combustion engines, for example, without tracking molecules. In fact, we can infer almost everything we want to know from the statistical mechanics that started with Maxwell and Boltzmann.

CFD embodies chaos, and it is part of the way it works. People normally think of turbulence there, but it would be chaotic even without it. CFD solutions quickly lose detailed memory of initial conditions, but that is a positive, because in practical flow we never knew them anyway. Real flow has the same feature as its computational analogue, as one would wish. If it did depend on initial conditions that we could never know, that would be a problem.

So you might do wind tunnel tests to determine lift and drag of a wing design. You never know initial conditions in tunnel or in flight but it doesn’t matter. In CFD you’d start with initial conditions, but they soon get forgotten. Just as well.

#### GCMs and chaos

GCMs are CFD and also cannot track paths. The same loss of initial information occurs on another scale. GCMs, operating as weather forecasts, can track the scale of things we call weather for a few days, but not further, for essentially the same reasons. But, like CFD, they can generate longer term solutions that represent the response to the balance of mass, momentum and energy over the same longer term. These are the climate solutions. Just as we can have a gas law which gives bulk properties of molecules that move in ways we can’t predict, so GCMs give information about climate with weather we can’t predict.

#### What is done in practice? Ensembles!

Analysis of error in CFD and GCMs is normally done to design for stability. It gets too complicated for quantitative tracing of error, and so a more rigorous and comprehensive solution is used, which is … just do it. If you want to know how a system responds to error, make one and see. In CFD, where a major source of error is the spatial discretisation, a common technique is to search for grid invariance. That is, solve with finer grids until refinement makes no difference.
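A toy version of the grid-invariance check (my example, on y’ = cos(t), where the exact answer sin(1) is known, so the effect of refinement is visible directly):

```python
import math

def solve(n_steps, t_end=1.0):
    """Forward-Euler solution of y' = cos(t), y(0) = 0, on an n_steps grid."""
    dt = t_end / n_steps
    y, t = 0.0, 0.0
    for _ in range(n_steps):
        y += dt * math.cos(t)
        t += dt
    return y

# Refine the grid until refinement makes no practical difference. For this
# first-order method, halving the step roughly halves the error.
errs = [abs(solve(n) - math.sin(1.0)) for n in (100, 200, 400)]
```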

With weather forecasting, a standard method is the use of ensembles. If you are unsure of input values, try a range and see what range of output you get. And this is done with GCMs. Of course there the runs are costlier, and so they can’t do a full range of variations with each run. On the other hand, GCMs are generally surveying the same climate future with just different scenarios. So any moderate degree of ensemble use will accumulate the necessary information.
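A toy ensemble along these lines (my illustration, on a damped scalar model rather than a GCM): run the same forced model from a spread of starting values and look at the spread of the outputs.

```python
import math
import random

# Ensemble sketch: y' = -y + sin(t) from twenty perturbed starting values.
# The homogeneous part is damped, so the members collapse onto a single forced
# response -- the initial error is forgotten.
random.seed(1)

def run(y0, t_end=20.0, dt=0.001):
    y, t = y0, 0.0
    for _ in range(round(t_end / dt)):
        y += dt * (-y + math.sin(t))
        t += dt
    return y

members = [run(random.uniform(-5.0, 5.0)) for _ in range(20)]
spread = max(members) - min(members)   # tiny, despite starts spread over 10 units
```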

Another thing to remember about ensemble use in GCMs is this. You don’t have to worry about testing a million different possible errors. The reason is related to the loss of initial information. Very quickly one error starts to look pretty much like another. This is the filtering that results from the very large eigenspace of modes that are damped by viscosity and other diffusion. It is only the effect of error on a quite small space of possible solutions that matters.

If you look at the KNMI CMIP5 table of GCM results, you’ll see a whole lot of models, scenarios and result types. But if you look at the small number beside each radio button, it is the ensemble range. Sometimes it is only one – you don’t have to do an ensemble in every case. But very often it is 5, 6 or even 10, just for one program. CMIP has a special notation for recording whether the ensembles are varying just initial conditions or some parameter.

#### Conclusion

Error propagation is very important in differential equations, and is very much a property of the equation. You can’t analyse without taking that into account. Fast growing errors are the main cause of instability, and must be attended to. The best way to test error propagation, if computing resources are adequate, is by an ensemble method, where a range of perturbations are made. This is done with earth models, both forecasting and climate.

#### Appendix – emulating GCMs

One criticised feature of Pat Frank’s paper was the use of a simplified equation (1) which was subjected to error analysis in place of the more complex GCMs. The justification given was that it emulated GCM solutions (actually an average). Is this OK?

Given a solution f(t) of a GCM, you can actually emulate it perfectly with a huge variety of DEs. For any coefficient matrix A(t), the equation

y’ = A*y + f’ – A*f

has y=f as a solution. A perfect emulator. But as I showed above, the error propagation is given by the homogeneous part y’ = A*y. And that could be anything at all, depending on choice of A. Sharing a common solution does not mean that two equations share error propagation. So it’s not OK.
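This is easy to demonstrate numerically. A Python sketch (my illustration, with f(t) = sin(t) and the arbitrary choices A = −1 and A = +1): both equations have y = f as an exact solution, yet a small starting error dies in one and explodes in the other.

```python
import math

# Two "perfect emulators" of the same trajectory f(t) = sin(t). For ANY constant A,
#     y' = A*y + f' - A*f
# has y = f as a solution. Take a damped emulator (A = -1) and an unstable one
# (A = +1), start each from f(0) plus a small error, and compare at t = 10.

def f(t):  return math.sin(t)
def fp(t): return math.cos(t)

def run(A, y0, t_end=10.0, dt=0.0001):
    """Forward-Euler integration of y' = A*y + f'(t) - A*f(t)."""
    y, t = y0, 0.0
    for _ in range(round(t_end / dt)):
        y += dt * (A * y + fp(t) - A * f(t))
        t += dt
    return y

eps = 1e-3
err_damped   = abs(run(-1.0, f(0.0) + eps) - f(10.0))  # error shrinks away
err_unstable = abs(run(+1.0, f(0.0) + eps) - f(10.0))  # error grows like eps*exp(t)
```

Same emulated solution, utterly different error propagation, exactly as the homogeneous-part argument predicts.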

Nick Stokes

Thanks Nick for the good exposé, but I miss here quite a lot of what you explained so well on your own blog:

https://moyhu.blogspot.com/2019/09/how-errors-really-propagate-in.html

That was really amazing stuff.

Best regards

J.-P.

Thanks, Bindi

The problem is it is wrong from the first equation and gets more wrong with every equation after that. This is the problem with laymen pretending to be scientists, like Nick and Mosher, and especially those of the old science variety.

Radiative transfer has its own version of systems of linear equations because of its quantum nature.

https://en.wikipedia.org/wiki/Quantum_algorithm_for_linear_systems_of_equations

There are any number of good primers on Quantum linear systems algorithms on university sites on the web.

Nick’s basic claim is that his equations somehow cover the problem generally… when any actual physicist knows for a fact that is NOT EVEN WRONG.

I should also add Daniel Vaughan put up a good 3 part series on programming for quantum circuits on codeproject

https://www.codeproject.com/Articles/5155638/Quantum-Computation-Primer-Part-1

If you follow the basic mathematics you will understand how something linear in the Quantum domain gets very messy in the classical domain.

LdB is right. There was a study some time ago (https://news.ucar.edu/132629/big-data-project-explores-predictability-climate-conditions-years-advance reported on WUWT) in which climate models were re-run over just one decade with less than a trillionth(!) of a degree difference in initial temperatures. The results differed hugely from the original run, with some regions’ temperatures changing by several degrees. NCAR/UCAR portrayed it as a demonstration of natural climate variability. It wasn’t, of course, it was a demonstration of the instability of climate models.

As LdB says: The problem is it is wrong from the first equation and gets more wrong every equation after that.

“It wasn’t, of course, it was a demonstration of the instability of climate models.”

They aren’t unstable. They are just not determined by initial conditions. Same with CFD.

The initial conditions determined the wide variability in output. They may not be unstable but that doesn’t mean they don’t have huge uncertainty associated with their outputs.

You can determine all the start conditions you like, but it tells you nothing about the behaviour, because it isn’t a classical system. Take a piece of meta-material engineered to create the “greenhouse effect”: you can know every condition your classical little measurements can muster, and you still won’t be able to predict what will happen using any classical analysis. The bottom of the problem is actually easy to understand: temperature is not a fundamental statistic, it is a made up classical concept with all the problems that go with that.

They are unstable. They have code that purposely flattens out the temperature projections because they too often blew up. Modelers have admitted this. Also, Nick, you said:

“It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number.”

THAT IS WRONG. What you believe is not important. What is important is the true number that is backed up by observations that you obtain by running real world experiments.

There is no essential difference between an error of 0.000000001 deg C in the initial conditions and a 0.000000001 deg C error in the first few iterations (typically 20 minutes each) of calculations. (a) can anyone state credibly that a model’s calculations cannot very quickly be out by 0.000000001 deg C?, and (b) can anyone state credibly that initial conditions are known to something like 0.000000001 deg C in the first place?

“can anyone state credibly that initial conditions are known to something like 0.000000001 deg C in the first place?”

No, and that is the point. In reality the initial conditions don’t matter. No one worries about the initial state of a wind tunnel. Let alone of the air encountered by an airplane. It happens that CFD (and GCMs) are structured as time-stepping, and so have to start somewhere. But where you start is disconnected from the ongoing solution, just as with the wind tunnel. It’s like the old saw about a butterfly in Brazil causing a storm in China. Well, it might be true, but we can tell a lot about storms in China without monitoring butterflies in Brazil, and just as well.

The mechanics of GCMs require starting somewhere, but they deliberately use a spin-up of a century or so, despite the fact that our knowledge of that time is worse. It’s often described as the difference between an initial value problem and a boundary value problem. The spin-up allows the BVP to prevail over the IVP.

Nick,

“No one worries about the initial state of a wind tunnel.”

Of course they don’t. Because the initial conditions don’t determine the final wind speed; the guy controlling the field current to the drive motors does. Your ceiling fan doesn’t start out at final speed either. It ramps up to a final value determined by all sorts of variables, including where you set the controls for the fan speed.

The Earth’s thermodynamic system is pretty much the same. We have a good idea of what the lower and upper bounds are, based on conditions for as far back as you want to look. CO2 has been higher and lower than what it is today. Temperatures have been higher and lower than what they are today. Humans have survived all of these.

The climate alarmists persist in saying the models support their view that we are going to turn the Earth into a cinder, i.e. no boundary on maximum temperatures. If that is actually what the models say then it should be obvious to anyone who can think rationally that something is wrong with the models. Of course in such a case the initial conditions won’t matter, the temperature trend is just going to keep going up till we reach perdition!

LdB

Commenter LdB, you behave quite arrogantly here. Who are you?

I write behind a nickname for the sake of self-protection against people who disturbed my life years ago. Maybe one day I will give that up.

But I don’t discredit people behind a fake name, unless I can prove with real data that they wrote here or elsewhere absolute nonsense.

*

Where are your own publications allowing you to discredit Mr Stokes and Mr Mosher, and to denigrate them as laymen?

Why do you, LdB, comfortably discredit Nick Stokes behind a nickname, with nothing else than some obscure, non-explained references to the Quantum domain which you yourself probably would not be able to discuss here?

J.-P D.

Do you have anything but this cynical nonsense to say to LdB? Take issue with the content of LdB’s post rather than impugning LdB’s intention.

@ Bindidon: Mr Stokes in this instance is trying to play in the physics field. He has no qualification in that field, and any physics student knows his answer is junk … where do you want me to go from there?

Would you care to argue two basic points? They are drop dead stupid, and even a layman should be able to search the answer.

1.) Is the “greenhouse effect” a Quantum effect?

2.) Does temperature exist in quantum mechanics?

So do your homework: search, read, do whatever. Are those two statements correct?

Now I am going to take a leap of faith and guess you will find those statements are correct. To even a layman there must be warnings going off that you are trying to connect two things that aren’t directly related, and you might want to think about what problems that creates.

Again a leap of faith that you can read: the best you ever get in these situations is an approximation over a very distinct range, and any range needs validation. Nowhere in Nick’s discussion does he talk about the issue; he argues he covers all possible errors (well, technically he tried to exclude some localized results), but the general thrust was that it covers the error …. SORRY, no it doesn’t.

Transfer of energy through the quantum domain cannot be covered by any classical law; that is why we need special Quantum Laws.

Nick’s only choice of argument is that Global Warming isn’t a Quantum Effect but somehow a classical effect, and that he is entitled to use his classical formula.

Bindidon

You complained about me not calling Nick “Dr. Stokes,” yet you call him “Mr Stokes!” Where are your manners?

1. A simple tutorial on numerical approximations of derivatives

In calculus the derivative is defined as

f’(x) = lim_{h→0} [f(x+h) − f(x)] / h

However, in the discrete case (as in a numerical model), when h is not 0 but small, the numerator on the right hand side can be expanded in a Taylor series with remainder as

f(x+h) − f(x) = h f’(x) + (h²/2) f”(ξ)

where ξ lies between x and x+h. Dividing by h,

[f(x+h) − f(x)] / h = f’(x) + (h/2) f”(ξ)

This formula provides an error term for an approximation of the derivative when h is not zero. There are several important things to note about this formula. The first is that the Taylor series cannot be used if the function that is being approximated does not have at least two derivatives, i.e., it cannot be discontinuous. The second is that in numerical analysis it is the power of h in the error term that is important. In this case, because the power is 1, the accuracy of the method is called first order.

Higher order accurate methods have higher order powers of h. In the example above only two points were used, i.e., x and x+h. A three point discrete approximation to a derivative is

[f(x+h) − f(x−h)] / (2h)

Expanding both terms in the numerator in Taylor series with remainder, subtracting the two series and then dividing by 2h produces

[f(x+h) − f(x−h)] / (2h) = f’(x) + (h²/6) f”’(ξ)

Because of the power of 2 in the remainder term, this is called a second order method and, assuming the derivatives in both examples are of similar size, this method will produce a more accurate approximation as the mesh size decreases. However, the second method requires that the function be even smoother, i.e., have more derivatives.
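These two convergence orders can be measured directly. A Python sketch (my illustration, using f(x) = eˣ at x = 0, so the true derivative is 1): halving h should halve the one-sided error but quarter the centred one.

```python
import math

def one_sided(f, x, h):
    # (f(x+h) - f(x)) / h  -- error O(h)
    return (f(x + h) - f(x)) / h

def centred(f, x, h):
    # (f(x+h) - f(x-h)) / (2h)  -- error O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

h, half = 1e-2, 5e-3
e1  = abs(one_sided(math.exp, 0.0, h)    - 1.0)
e1h = abs(one_sided(math.exp, 0.0, half) - 1.0)
e2  = abs(centred(math.exp, 0.0, h)      - 1.0)
e2h = abs(centred(math.exp, 0.0, half)   - 1.0)
```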

The highest order numerical methods are called spectral methods and require that all derivatives of the function exist. Because Richardson’s equation in a model based on the hydrostatic equations causes discontinuities in the numerical solution, even though a spectral method is used, spectral accuracy is not achieved. The discontinuities require large dissipation to prevent the model from blowing up and this destroys the numerical accuracy (Browning, Hack, and Swarztrauber 1989).

Currently modelers are switching to different numerical methods (less accurate than spectral methods but more efficient on parallel computers) that numerically conserve certain quantities. Unfortunately this only hides the dissipation in the numerical method, and is called implicit dissipation (as opposed to the current explicit dissipation).

Gerald,

I don’t see the point here. Your criticism applies equally to CFD. And yes, both do tend to err on the side of being overly dissipative. But does that lead to error in climate predictions? All it really means is more boring weather.

Nick,

You have not done a correct analysis on the difference between two solutions, one with the control forcing and the other with a perturbed (GHG) forcing. See my correct analysis on Climate Audit.

And yes, if the dissipation is like molasses, then the continuum error is so large as to invalidate the model. The sensitivity of molasses to perturbations is quite different from that of air.

This post is entirely misleading. For the correct estimate see my latest posts on Climate Audit (I cannot post the analysis here because latex is not working here according to Ric). That estimate clearly shows that a linear growth in time as in Pat Frank’s manuscript is to be expected.

Jerry

Analysis of Perturbed Climate Model Forcing Growth Rate

G L Browning

Nick Stokes (on WUWT) has attacked Pat Frank’s article for using a linear growth in time of the change in temperature due to increased Green House Gas (GHG) forcing in the ensemble GCM runs. Here we use Stokes’ method of analysis and show that a linear increase in time is exactly what is to be expected.

1. Analysis

The original time dependent pde (climate model) for the (atmospheric) solution y(t) with normal forcing f(t) can be represented as

y’ + A*y = f

where y and f can be scalars or vectors and A correspondingly a scalar or matrix. Now suppose we instead solve the equation

z’ + A*z = f + δf

where δf is the Green House Gas (GHG) perturbation of f. Then the equation for the difference E = z − y (growth due to GHG) is

E’ + A*E = δf

Multiply both sides by exp(A*t):

(exp(A*t) * E)’ = exp(A*t) * δf

Integrate both sides from 0 to t:

exp(A*t)*E(t) − E(0) = ∫ exp(A*u) δf(u) du

Assume the initial states are the same, i.e., E(0) is 0. Then multiplying by exp(−A*t) yields

E(t) = exp(−A*t) ∫ exp(A*u) δf(u) du

Taking norms of both sides, the estimate for the growth of the perturbation is

||E(t)|| ≤ t * max ||δf||

where we have assumed the norm of exp(−A*(t−u)) is bounded by 1, as in the hyperbolic or diffusion case. Note that the difference is a linear growth rate in time of the climate model with extra CO2, just as indicated by Pat Frank.

Jerry

“That estimate clearly shows that a linear growth in time as in Pat Frank’s manuscript is to be expected.”

Well, firstly Pat’s growth is not linear, but goes as sqrt(t). But secondly, your analysis is just wrong. You say that

exp(-A*t) ∫ exp(A*u) f(u) du integration 0:t

has linear growth, proportional to max(|f|). Just wrong. Suppose f(u)=1. Then the integral is

exp(-A*t) * (exp(A*t) - 1)/A = (1 - exp(-A*t))/A

which does not grow as t, but is bounded above by 1/A.

In fact, the expression is just an exponential smooth of f(u), minus a bit of tail in the convolver.
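[The bound is easy to confirm numerically — a sketch of mine, with the arbitrary choice A = 0.5, evaluating the convolution by simple quadrature:]

```python
import math

# For f = 1 and A > 0,
#     E(t) = exp(-A*t) * integral_0^t exp(A*u) du = (1 - exp(-A*t)) / A,
# which saturates at 1/A instead of growing linearly in t.
A = 0.5

def E(t, n=100_000):
    # midpoint-rule quadrature for the integral, then the damping prefactor
    dt = t / n
    s = sum(math.exp(A * (k + 0.5) * dt) for k in range(n)) * dt
    return math.exp(-A * t) * s

vals = [E(t) for t in (1.0, 10.0, 100.0)]   # rises toward, but never passes, 1/A = 2
```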

Good try Nick. But f is not equal to 1, but to a vector of perturbations. And your formula does not work if A is singular, as it is when there is no wind, or in your example when you used the scalar A = 0. And what does 1/A mean – is it supposed to be the inverse? The estimate is the standard one for the integral of the solution operator times the forcing.

Now let me provide an additional lesson. Suppose the real equation is

y’ + A*y + ε_s*D*y = f

but the model is solving

z’ + A*z + ε_l*D*z = f

where D is the dissipation operator and ε_l >> ε_s.

Subtracting equations as before, the difference E = z − y satisfies the equation

E’ + A*E = −ε_l*D*z

to a very good approximation.

This is a continuum error, and one is essentially solving an equation for molasses and not air.

You need to read Browning and Kreiss (Math Comp) to understand that using the wrong type or amount of dissipation produces the wrong answer.

Jerry

All,

Note that Stokes’ scalar f = 1 is not time dependent, as the vector of time dependent perturbations is. Nick chose not to mention that fact, or to use a scalar function that is a function of time, so he could mislead the reader. In fact if the scalar A is 0 (the singular case), the growth is proportional to t. The point of my Analysis is to show that Stokes’ analysis stated that the forcing drops out, and that is also misleading to say the least.

In the case of excessively large dissipation in a model, I will rewrite the z equation as

z’ + A*z + ε_l*D*z = f + δf

Then the difference E = z − y satisfies

E’ + A*E = δf − ε_l*D*z

to a very good approximation. Now one can see that the equation for the difference E has a large added forcing term that does not disappear, i.e., a continuum error that means that one is not solving the correct equation.

Jerry

Jerry,

“Nick chose not to mention that fact or to use a scalar function that is a function of time so he could mislead the reader.”

I really have trouble believing that you were once a mathematician. If your formula fails when f=1, that is a counter-example. It is wrong. And you can’t save it by waving hands and saying – what if things were more complicated in some way? If you want to establish something there, you have to make the appropriate provisions and prove it.

Variable f doesn’t help – the upper bound is just max(||f||)/A

In fact my analysis covered three cases: case 1 (A=0) and case 3 (A>0 and A<0). As I said, with A>0 it is bounded above, as I showed. Indeed, as I also said, it is just the exponential smooth of f, minus a diminishing tail.

Matrix A won’t save your analysis either. The standard thing is to decompose:

AP=PΛ where Λ is the diagonal matrix of eigenvalues

Then set E=PG

PG’ + APG = PG’ + PΛG = f

or G’ + ΛG = P⁻¹f

That separates it into a set of 1-variable equations, which you can analyse in the same way.
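[The decomposition can be checked on a small worked example — my illustration, not from the thread; the 2×2 matrix and its eigenvectors below are hand-picked. Integrating E’ + A*E = f directly and via the decoupled scalar equations must agree:]

```python
# A = [[3,1],[0,2]] has eigenvalues 3 and 2, with eigenvector matrix
# P = [[1,-1],[0,1]], so A*P = P*Lambda. Then G = P^{-1}*E obeys the
# independent scalar equations g_i' + lambda_i*g_i = (P^{-1}f)_i.
A     = [[3.0, 1.0], [0.0, 2.0]]
P     = [[1.0, -1.0], [0.0, 1.0]]
P_INV = [[1.0,  1.0], [0.0, 1.0]]
LAM   = [3.0, 2.0]
f     = [1.0, 1.0]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def full_system(e0, t_end=5.0, dt=0.0001):
    # forward Euler on the coupled system E' = f - A*E
    e = list(e0)
    for _ in range(round(t_end / dt)):
        ae = mat_vec(A, e)
        e = [e[i] + dt * (f[i] - ae[i]) for i in range(2)]
    return e

def decoupled(e0, t_end=5.0, dt=0.0001):
    # same dynamics, one eigenmode at a time: g_i' = (P^{-1}f)_i - lambda_i*g_i
    g = mat_vec(P_INV, e0)
    pf = mat_vec(P_INV, f)
    for _ in range(round(t_end / dt)):
        g = [g[i] + dt * (pf[i] - LAM[i] * g[i]) for i in range(2)]
    return mat_vec(P, g)   # map back: E = P*G

direct  = full_system([1.0, -1.0])
via_eig = decoupled([1.0, -1.0])
```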

Well Nick,

I will let my mathematics speak for themselves. I notice that you stated in reference to my Tutorial on Numerical Approximation:

“I don’t see the point here. Your criticism applies equally to CFD. And yes, both do tend to err on the side of being overly dissipative. But does that lead to error in climate predictions? All it really means is more boring weather.”

Yes, the criticism applies to any CFD models that mimic a discontinuous continuum solution, i.e., then the numerics are not accurate.

At least you admitted that the CFD models tend to be “overly dissipative”. But not what that does to the continuum solution with the real dissipation. My analysis shows that excessive (unrealistic) dissipation causes the model to converge to the wrong equation. I guess you are saying there is no difference between the behavior of molasses and air. This is no surprise to me from someone that made a living using numerical models with excessive dissipation. And as I have said before, you need to keep up with the literature. I mentioned two references that you have clearly not read, or you would not continue to pooh-pooh this common fudging technique in CFD and climate models.

Next you said

“I really have trouble believing that you were once a mathematician. If your formula fails when f=1, that is a counter-example. It is wrong. And you can’t save it by waving hands and saying – what if things were more complicated in some way? If you want to establish something there, you have to make the appropriate provisions and prove it.”

It is not a counterexample, because the symbol A for a symmetric hyperbolic PDE can be singular, and then you cannot divide by A (or, more correctly, multiply by A⁻¹). All cases of A must be taken into account for a theory to be robust. Note that the only thing I assumed about the solution operator exp(A t) is that it is bounded. I assumed nothing about the inverse of A. The solution operator is bounded for all symmetric hyperbolic equations even if A is singular. Thus your use of the inverse is not a robust theory.

If my analysis is wrong, why not try a nonconstant function of t so you cannot divide by A? I thought so.

As far as your use of eigenvalue/eigenvector decomposition, I am fully aware of that math.

For a symmetric hyperbolic system, A leads to eigenvalues that can be imaginary or 0 (so the solution operator is automatically bounded).

In the latter case A is singular, so any robust theory must take that into account, i.e. you cannot assume that the inverse exists. Also note that f is a function of t, not a constant. You continue to avoid that issue because for an arbitrary nonconstant function of t the integral in general cannot be solved, but can be estimated as I have done.

It appears you just don’t want to accept the linear growth in time estimate. Good luck with that.

Jerry

“All cases of A must be taken into account for a theory to be robust.”

It is your theory that perturbations increase linearly with t. And it fails for the very basic case when A=1, f=1. And indeed for any positive A and any bounded f.

For matrix A, the problem partitions. And then, as indicated in my posts, there are three possibilities for the various eigenvalues and their spaces:

1. Re(λ)>0 – perturbation follows an exponentially smoothed version of the driver; bounded if the driver is bounded

2. Re(λ)=0 – perturbation changes as the integral of the driver

3. Re(λ)<0 – perturbation grows exponentially
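The three cases can be sketched numerically. A minimal illustration of my own (not from the thread), assuming the scalar perturbation equation e' = -λ·e + f(t), which matches the sign convention of the list above:

```python
import math

def euler(lam, f, dt=1e-3, T=10.0):
    """Forward-Euler integration of the perturbation equation e' = -lam*e + f(t), e(0)=0."""
    e, t = 0.0, 0.0
    while t < T:
        e += dt * (-lam * e + f(t))
        t += dt
    return e

f = lambda t: 1.0  # a bounded (constant) driver

print(euler(2.0, f))   # Re(lam) > 0: settles near f/lam = 0.5, bounded
print(euler(0.0, f))   # Re(lam) = 0: the integral of the driver, ~10 after t = 10
print(euler(-1.0, f))  # Re(lam) < 0: exponential growth, enormous by t = 10
```

For a bounded driver the first case stays bounded, the second grows linearly with t, and the third blows up exponentially — which is the dispute below in miniature.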

Ok Nick,

Let us make clear what solution you are suggesting, in contrast to the one I am using. Greenhouse gases (GHGs) in the atmosphere are increasing. So the climate modelers are injecting increasingly larger amounts of GHGs (CO2) into the climate models over a period of time, i.e., the amount of CO2 in the model is changing. The forcing in the models depends on the amount of CO2, so the forcing in the models also changes over time. You are assuming the additional forcing is constant in time, which is clearly not the case. I, on the other hand, am allowing the forcing to change in time, as is the case in reality. All you have to do to disprove my estimate is to make the physically correct assumption that the increase in forcing is a function of time. Then your example will not work, because the change in forcing is no longer constant. Quit making physically incorrect assumptions and stating that my correct one is wrong.

Also, in my estimate of E, assuming the solution operator is bounded by a constant instead of unity leads to the fact that changing that constant, by changing the amount of dissipation (or other tuning parameters), allows the modelers to change the amount of growth as they wish, even though it might not be physically realistic.

I also see that you must now agree with my tutorial on the misuse of numerical approximations in models that mimic discontinuities in the continuum solution, or otherwise you would have made some asinine counter to that fact without proof.

And you have yet to counter with proof the fact that using excessive dissipation alters the solution of the PDE with the correct amount of dissipation. This has been shown with mathematical estimates for the full nonlinear compressible Navier-Stokes equations (the equations used in turbulence modeling) and demonstrated with convergent numerical solutions (Henshaw, Kreiss and Reyna). You need to keep up with the literature.

Jerry

“All you have to do to disprove my estimate is to make the physically correct assumption that the increase in forcing is a function of time.”

I dealt with that case: “Variable f doesn’t help – the upper bound is just max(||f||)/A”. It makes no difference at all.

exp(-A*t) ∫ exp(A*u) f(u) du < exp(-A*t) ∫ exp(A*u) Max(f(u)) du

= Max(f(u))exp(-A*t) ∫ exp(A*u) du < Max(f(u))/A

and to complete the bounds:

-exp(-A*t) ∫ exp(A*u) f(u) du < Max(-f(u))/A
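Nick’s bound can be checked numerically. A sketch of my own construction, for the scalar case y' = -A·y + f(t) with a bounded, time-varying f:

```python
import math

def max_abs_solution(A, f, dt=1e-3, T=20.0):
    """Forward-Euler solve of y' = -A*y + f(t), y(0)=0; return the largest |y| observed."""
    y, t, ymax = 0.0, 0.0, 0.0
    while t < T:
        y += dt * (-A * y + f(t))
        t += dt
        ymax = max(ymax, abs(y))
    return ymax

A = 2.0
f = lambda t: math.sin(3 * t) + 0.5 * math.cos(t)  # varies in time, |f| <= 1.5
fmax = 1.5

print(max_abs_solution(A, f), fmax / A)  # observed maximum sits below max|f|/A
```

Making f nonconstant does not break the bound, which is the point at issue: for positive A, only the size of f matters, not whether it varies.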

“Variable f doesn’t help – the upper bound is just max(||f||)/A”

How do you know the maximum of “f” if it is a function of time? What generates the upper bound in that case?

“How do you know the maximum”

It is here the maximum in the range. But it could be the overall maximum.

What if f itself increases without limit? Well, then of course perturbations could have similar behaviour.

Remember, Gerald has claimed a specific proof here. Perturbations increase linearly. You can’t keep saying, well, it didn’t work here, but it might if we make it a bit more complicated. Maths doesn’t work like that. If a proof has counterexamples, it is wrong. Worthless. It failed. You have to fix it.

“It is here the maximum in the range. But it could be the overall maximum.

What if f itself increases without limit? Well, then of course perturbations could have similar behaviour.”

You *still* didn’t answer how you know the maximum so you can use it as a bound. What is the range? How is it determined? Is it purely subjective?

What if f *does* increase without limit? Isn’t that what an ever growing CO2 level would cause? At least according to the models that is what would happen.

If there *is* a limit then why don’t the models show that in the temperature increases over the next 100 years?

OK Nick,

It is getting easier and easier to rebut your nonsense

Consider your favorite scalar equation

$y_{t} + a y = f(t)$

with a a nonzero constant and f an arbitrary function of time, as is physically correct (not a constant, which is physically inappropriate, as made clear above).

As before, multiply by $\exp (at)$:

$( \exp (at) y )_{t} = \exp (at) f(t)$

Integrate from 0 to t:

$\exp (at) y (t) = y(0) + \int_{0}^{t} \exp (a \tilde{t} ) f( \tilde{t} ) \, d \tilde{t}$

Assuming the model with normal forcing and the model with added GHG forcing start from the same initial conditions, so that the initial difference $y(0) = 0$, this becomes

$y (t) = \int_{0}^{t} \exp (a (\tilde{t} - t)) f( \tilde{t} ) \, d \tilde{t}$

Now, for a positive, 0, or negative, the exponential is bounded by a constant C for t finite. Taking absolute values of both sides,

$| y (t) | \le C \max_{[0,t]} | f | \; t$

and the estimate is exactly as in the full system, i.e., a bounded linear growth in time.
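Both sides of this argument can be checked at once: the linear-in-t estimate is a correct inequality, yet for dissipative a the solution it bounds stays bounded. A numerical sketch of my own (with an arbitrary bounded forcing) of y(t) = ∫ exp(a(t̃ - t)) f(t̃) dt̃ against the bound max|f|·t, for a > 0 where C = 1:

```python
import math

def y_of_T(a, T, n=50000):
    """Midpoint-rule evaluation of y(T) = integral_0^T exp(a*(s - T)) * f(s) ds."""
    f = lambda s: math.sin(2 * s)  # arbitrary bounded forcing, max|f| = 1
    h = T / n
    return sum(math.exp(a * ((k + 0.5) * h - T)) * f((k + 0.5) * h) * h
               for k in range(n))

a, fmax = 1.0, 1.0  # a > 0 is the dissipative case: |exp(a*(s - T))| <= 1, so C = 1
for T in (5.0, 10.0, 20.0):
    print(T, abs(y_of_T(a, T)), fmax * T)  # |y| stays under 1 while the bound grows as T
```

The inequality holds at every T, but the actual solution never exceeds max|f|/a = 1, so linear growth of the bound does not by itself imply linear growth of the solution.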

I have heard that when you are wrong you either obfuscate or bend the truth. I now fully believe that based on your misleading responses to my comments.

I am also waiting for your admission that discontinuous forcing causes the numerical solution of a model to mimic a discontinuous continuum solution, invalidating the numerical analysis accuracy requirements of differentiability.

And I am also waiting for your admission that using excessively large dissipation means that you are not solving the correct system of equations.

Jerry

Gerald,

“and the estimate is exactly as in the full system”

Well, it’s just a very bad estimate, and ignores the behaviour of the exponential. It’s actually a correct inequality. But it’s also true that E is bounded, as I said. So it is quite misleading to say that E increases linearly with t. I have given several basic examples where that just isn’t true.

It is true also that E ≤ C·exp(t²) for some C. That doesn’t mean that E increases as C·exp(t²).

“You *still* didn’t answer how you know the maximum so you can use it as a bound.”

Δf() is a prescribed function. If you prescribe it, you know if it has a maximum, and what it is.

But the whole discussion has been muddled by Gerald – I’m just pointing to the errors in his maths. In fact Pat Frank was talking about how uncertainty propagates in a GCM from uncertainty in cloud cover. He says it grows fast. Gerald has switched to a perturbation in temperature due to GHGs. Not uncertainty in GHGs, but just GHGs. And he claims they grow indefinitely.

Well, they might. It’s not usually a proposition promoted at WUWT. In fact, if GHGs keep growing indefinitely, temperatures will. This is totally unrelated to what Pat Frank is writing about.

OK Nick,

It is getting easier and easier to rebut your nonsense

Let us consider your favorite scalar equation.

Estimate of Growth for a Scalar Equation with Time Dependent Forcing
GL Browning

Nick Stokes has claimed that the estimate for a scalar version of my difference equation E is different from the matrix version. Here we show that is not the case for any finite value of Re(a), and that by tweaking the amount of dissipation, the linear growth rate can be changed arbitrarily.

1. Analysis

Consider the equation

$y_{t} + a y = f(t)$

with a being a constant and f an arbitrary function of time, as is physically correct (not a constant, which is physically inappropriate, as made clear above). As before, multiply by $\exp (at)$:

$( \exp (at) y )_{t} = \exp (at) f(t)$

Integrating from 0 to t,

$\exp (at) y (t) = y(0) + \int_{0}^{t} \exp (a \tilde{t} ) f( \tilde{t} ) \, d \tilde{t}$

Assuming the model with normal forcing and the model with added GHG forcing start from the same initial conditions, this becomes

$y (t) = \int_{0}^{t} \exp (a (\tilde{t} - t)) f( \tilde{t} ) \, d \tilde{t}$

Taking absolute values of both sides,

$| y (t) | \le \int_{0}^{t} | \exp (a (\tilde{t} - t)) | \, | f( \tilde{t} ) | \, d \tilde{t}$

Note that the quantity $\tilde{t} - t \le 0$, so it changes the sign of a or is 0. Thus for Re(a) positive (dissipative case), 0, or negative (growth case), the exponential is bounded by a constant C for t finite. So

$| y (t) | \le C \max_{[0,t]} | f | \; t$

from elementary calculus. This is the same estimate as before, a bounded linear growth in time. Note that by messing with the dissipation, C can be made to be whatever one wants.

I am waiting for your admission that discontinuous forcing causes the numerical solution of a model to mimic a discontinuous continuum solution invalidating the numerical analysis accuracy requirements of differentiability.

And I am also waiting for your admission that using excessively large dissipation means that you are not solving the correct system of equations.

All,

That didn’t come out very well as there is no preview. I will try again because this is important.

Some of you might not be familiar with complex variables. I will add a bit of info that hopefully helps.

The derivative of an exponential with a real or complex exponent is the same, so nothing changes in that part of the proof, i.e., the same formula holds whether a is a real or complex number. However, the absolute value of an exponential with a complex exponent is just the exponential of the real part of the exponent. That is because an exponential with an imaginary exponent like $\exp (i \, Im (a) t)$ is defined as $\cos( Im(a) t) + i \sin ( Im(a) t)$, whose absolute value is 1.

Thus

$| \exp ( ( Re(a) + i \, Im(a) ) t) | = | \exp ( Re(a) t ) | \, | \exp ( i \, Im(a) t ) | = \exp ( Re(a) t )$

Jerry
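The modulus identity above is easy to confirm numerically; a quick check of my own with Python’s complex exponential:

```python
import cmath
import math

a = complex(-0.3, 2.0)  # Re(a) = -0.3, Im(a) = 2.0 (arbitrary example values)
for t in (0.5, 1.0, 3.0):
    lhs = abs(cmath.exp(a * t))  # |exp((Re(a) + i*Im(a)) * t)|
    rhs = math.exp(a.real * t)   # exp(Re(a) * t)
    print(lhs, rhs)              # agree to rounding error
```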

All,

I gave up trying to do this inside a comment and went outside where I could test everything.

Jerry

All,

Now let us discuss the case of exponentially growing in time solutions.

In the theory of partial differential equations there are only two classes of equations: well posed systems and ill posed systems. In the latter case there is no hope of computing a numerical solution, because the continuum solution grows exponentially unbounded in an infinitesimal amount of time. It is surprising how many times one comes across such equations in fluid dynamics, because seemingly reasonable physical assumptions lead to mathematical problems of this type.

The class of well posed problems is computable because the exponential growth rate (if any) is bounded for a finite time. As is well known in numerical analysis, if there are exponentially growing solutions of this type, they can only be numerically computed for a short period of time, because the error also grows exponentially in time. So we must assume that climate models that run for multiple decades either do not have any exponentially growing components, or that such components have been suppressed by artificial, excessively large dissipation. Assuming, then, that the climate models have only dissipative types of components, we have seen that the linear growth rate of added CO2 can be controlled to be what a climate modeler wants by changing the amount of dissipation.

I find Pat Frank’s manuscript on the linear growth rate of the perturbations of temperature with added CO2 in the ensemble climate model runs eminently reasonable.

Jerry

Nick, “Well, firstly Pat’s growth is not linear, but as sqrt(t)”

Jerry Browning is talking about the growth in projected air temperature, not growth in uncertainty.

The growth in uncertainty grows as rss(±t_i), not as sqrt(t).

That’s two mistakes in one sentence. But at least they’re separated by a comma.

“Jerry Browning is talking about the growth in projected air temperature”

His result, eq 7, says “the estimate for the growth of the perturbation is…”

Nick,

If you look at the system, it is the growth due to the change (perturbation) in the forcing by adding GHGs to the control forcing f (no GHGs). Don’t try to play games. The math is very clear as to what I was estimating.

Jerry


Gerald

“The math is very clear”

So where is a cause due to GHG entered into the math? How would the math be different if the cause were asteroids? Or butterflies?

Nick,

You are clearly getting desperate. The magnitude of the change in forcing would change if the perturbation were from a butterfly or asteroid. Thus both are taken into account.

I await your use of the correct physical assumption that the change in forcing is a function of time.

Jerry

“The magnitude of the change in forcing would change if the perturbation were from a butterfly or asteroid.”

Your proof uses algebra, not arithmetic.

“Your proof uses algebra, not arithmetic.”

So what? The result is the same.

Also, I should have noted, in response to Nick’s “Pat’s growth is not linear, but as sqrt(t)”, that growth in T goes as [(F_0+∆F_i)/F_0], which is as linear as linear ever gets.

“growth in T goes as [(F_0+∆F_i)/F_0], which is as linear as linear ever gets”

Well, in Eq 1 it was (F_0+Σ∆F_i)/F_0. But yes, by Eq 5.1 it has become (F_0+∆F_i)/F_0. None of this has anything to do with whether it grows linearly with time.

Ya lost me at “what you believe to be the correct number.”

You can be agnostic about that if you like. I’m showing how a discrepancy between two possible initial states is propagated in time by a differential equation. It could be a difference between what you think is right and its error, or just a measure of an error distribution. The key thing is what the calculation does to the discrepancy. Does it grow or shrink?

What would cause someone to BELIEVE a number to be correct, rather than to know with confidence that the number IS correct?

An unconfirmed BELIEF would seem to be a theoretical foundation of the model, and this belief itself could be subject to uncertainty — a theory error? … with accompanying uncertainty above and beyond the uncertainty of the performance of the calculation that incorporates this uncertain theoretical foundation?

This discussion is so far beyond me that I have to fumble with it in general terms. Nick’s presentation seems to be further effort to sink Pat Frank’s assessment, and so I’m caught between two competing experts light years of understanding ahead of me.

I still get the feeling that Pat is talking about something that is captured by Nick’s use of the word, “belief”, and so I’m not ready to let Pat’s ship sink yet.

‘and so I’m not ready to let Pat’s ship sink yet.”

Pat’s boat sank when he forgot that Watts are already a rate

One mistake renders an entire paper useless.

Unless you are a climate scientist.

Clarify. Thanks.

Steve is just parroting Nick Stokes’ mistake, Robert.

He actually doesn’t know what he’s talking about and so cannot clarify.

“Discrepancy” is not uncertainty.

“Discrepancy” is not uncertainty.

+1

Surely the correct number is what actually happens in the system being modelled using GCMs. Not much use if you’re trying to make predictions for chaotic systems.

Nick, could you explain how negative feedback affects propagating error, please?

Well, negative feedback is a global descriptor, rather than an active participant in the grid-level solution of GCMs. But in other systems it would act somewhat like special case 3 above, with negative c, leading to decreasing error. And in electronic amplifiers, that is just what it is used for.

So you agree with the previous Monckton paper? That used an electrical feedback circuit to show GCMs were wrong.

“That used an electrical feedback circuit to show GCMs were wrong”

Well, he never said how, and was pretty cagey about what the circuit actually did. But no, you can’t show GCMs are wrong with a feedback circuit. All you can show is that the circuit is working according to specifications.

Well you can’t use ICE knowledge and design to defend climate models, either, but you tried.

Same goes for wind-tunnel tests.

Negative feedback is *NOT* to reduce error. In fact, in an op amp the negative feedback *always* introduces an error voltage between the input and the output. It is that error that provides something for the op amp to actually amplify. The difference may be small but it is always there.

This is basically true for any physical dynamic system, whether it is the temperature control mechanism in the body or the blood sugar level control system.

“For the case of continuous errors, the earlier ones have grown a lot by the time the later ones get started, and really are the only ones that count. So it loses the character of random walk, because of the skewed weighting.”

So the GCMs are then worse at error propagation than we thought. The similarity in TCF error hindcast residuals between GISS-ER and GISS-EH (Frank’s Figure 4 and discussion therein) tells us that the errors are systematic, not random. And those TCF systematic errors start at t=0 in the GCM simulations.

“So the GCMs are then worse at error propagation than we thought.”

Well, not worse than was thought. Remember the IPCC phrase that people like to quote:

“In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles.”

It comes back to the chaos question; there are things you can’t resolve, but did you really want to? If a solution after a period of time has become disconnected from its initial values, it has also become disconnected from errors in those values. What it can’t disconnect from are the requirements of conservation of momentum, mass and energy, which still limit and determine its evolution. This comes back to Roy Spencer’s point, that GCMs can’t get too far out of kilter because TOA imbalance would bring them back. Actually there are many restoring mechanisms; TOA balance is a prominent one.

On your specific point – yes, DEs will generally have regions where they expand error, but also regions of contraction. In terms of eigenvalues and corresponding exponential behaviour, the vast majority correspond to exponential decay, corresponding to dissipative effects like viscosity. For example, if you perturb a flow locally while preserving angular momentum, you’ll probably create two eddies with contrary rotation. Viscosity will diffuse momentum between them (and others) with cancellation in quite a short time.

So the values at any point in time are going to be bounded by the conservation laws? Temperatures may go up, but regardless of any error propagation, they won’t reach the melting point of lead.

“This comes back to Roy Spencer’s point, that GCM’s can’t get too far out of kilter because TOA imbalance would bring them back. Actually there are many restoring mechanisms; TOA balance is a prominent one.”

Each additional W/m2 leads to 0.5 to 1°C of warming. About the same effect as a third of a percent change of albedo. Seriously, how far out of kilter does it have to get to come up with 8 °C climate sensitivity?

Robert B

There is no way the sensitivity can be that high. Have a look at the insolation difference between summer and winter in the northern and southern hemispheres. The two are isolated enough for a difference of dozens of Watts/m^2 to produce a temperature difference. If the sensitivity was 0.5 to 1.0 C per Watt, the summers in the South would be dozens of degrees warmer than summers in the North. They are not.

Crispin,

That is a good point.

It applies not just to a temperature comparison of the northern and southern hemispheres, but also to what could happen within each hemisphere, i.e., no drastic excursions could happen because other factors would offset them, i.e., the weather comes and goes, then settles to some average climate state.

Any climate changes due to mankind’s efforts are so puny they could affect the hemisphere climate only very slowly, if at all.

Not my estimate.

My point was that 298^4/290^4 is about 1.1 so 10% more insolation for a real blackbody or dropping albedo by a third. According to the estimate, at most an extra 16W/m2 needed for 8 degree increase which is the high end of modelling. That is about 1/6 drop in albedo. The 8°C is out of kilter and needed to be chucked in the bin, and 4 is dodgy.
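The arithmetic in that ratio is easy to verify; a one-line check (the 298 K and 290 K figures are the comment’s, and the fourth-power scaling is the Stefan-Boltzmann law):

```python
# Blackbody emission scales as T^4, so the quoted temperature ratio gives:
ratio = (298.0 / 290.0) ** 4
print(ratio)  # about 1.11, i.e. roughly 10% more radiated power, as the comment says
```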

So why do the GCM models only show warming, when cooling happens? Where is that bias forced? When terms are fudged, parameterized, or ignored, is the error distribution reset?

The IPCC quote is very much to the topic.

“This reduces climate change to the discernment of significant differences in the statistics of such ensembles”.

The ongoing problem is that there is only going to be one result over future time. The need is for one model that produces the future climate down to the acre.

Having 40-50 models that can produce projection graphs 100 years into the future is useless if none of the graphs can be shown to be predictive. Planning for the future when the prediction is that at any given point in time the temperature will be within a range that grows exponentially over 80 years from +/-.1deg to +/- 3deg is not useful or effective.

What Mr. Frank was trying to demonstrate was that the models have such a wide, exponentially growing error range, as shown in the many versions of AR5 graphs, that the results aren’t predictive in any way, shape, or form after just a few years.

“If a solution after a period of time has become disconnected from its initial values, it has also become disconnected from errors in those values. ”

The problem isn’t the error in the initial values, the problem is the uncertainty of the outputs based on those inputs. The uncertainty does *not* become disconnected.

If your statement here were true then it would mean that the values of the initial value could be *anything* and you would still get the same output. The very situation that causes most critics to have no faith in the climate models.

Tim Gorman

Yes, it would seem that Stokes is arguing that GCMs violate the GIGO principle.

The GCM’s are unstable. They have code that purposely flattens out the temperature projections because they too often blew up. Modelers have admitted this. Also Nick you said :

“It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number.”

THAT IS WRONG. What you believe is not important. What is important is the true number that is backed up by observations that you obtain by running real-world experiments.

“The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.”

That statement by the IPCC is so mathematically wrong and illogical that it defies belief. You CANNOT IMPROVE A PROBABILITY DISTRIBUTION BY RUNNING MORE SIMULATIONS OR USING MORE MODELS THAT HAVE THE SAME SYSTEMATIC ERROR.

As such, the title of Nick’s essay should read, “How random error propagation works with differential equations (and GCMs)”.

And let’s be clear here: Lorenz’s uncertainties, which Nick spent a great effort explaining with nice diagrams, were due to random error sets in initialization conditions.

The visual message of Frank’s Figure 4 (the general similarity of the TCF errors) should be the big clue to what the modellers are doing to get their ECS so far from observation.

The TOA energy balance boiling pot argument… schamargument,

the random error propagation… schamargation…

You don’t get systematic errors that look so similar (Fig 4 again) without the “common” need for a tropospheric positive feedback in the atmosphere that obviously isn’t there.

What a tangled web we weave, when first we endeavor to deceive.

(not you Nick, but the modellers needing (expecting?) a high ECS.)

“As such the title to Nick’s essay should read”

Actually, no. I’m showing what happens to a difference between initial states, however caused.

“Lorenz’s uncertainties … were due to random error sets in initialization conditions”

Again, it doesn’t matter what kind of errors you have in the initial sets. Two neighboring paths end up a long way apart quite quickly. I don’t think Lorenz invoked any kind of randomness.

Nick Stokes:

“Actually, no. I’m showing what happens to a difference between initial states, however caused.”

That much is true. That is not what Pat Frank was doing. He was deriving an approximation to the uncertainty in the model output that followed from uncertainty in one of the model parameters. The uncertainty in the model parameter estimate was due in part to random variation in the measurement errors and other random influences on the phenomenon measured; the uncertainty in the propagation was modeled as random variation consequent on random variation in the measurements and the parameter estimate. That variation was summarized as a confidence interval.

You have still, as far as I have read, not addressed the difference between propagating an error and propagating an interval of uncertainty.

BINGO!

IOW, still trying to change the subject in a manner that supposedly undermines Pat Frank’s analysis and conclusions.

“I am not sure how many ways to say this, but the case analysed by Pat Frank is the case where A is not known exactly, but is known approximately”

He analysed the propagation of error in GCMs. In GCMs A is known; it is the linearisation of what the code implements. And propagation of error in the code is simply a function of what the code does. Uncertainty about parameters in the GCM is treated as an error propagated within the GCM. To do this you absolutely need to take account of what the DE solution process is doing.

As to

“the difference between propagating an error and propagating an interval of uncertainty”

There has been some obscurantism about uncertainty; it is basically a distribution function of errors. Over what range of outputs might the solution process take you if this input, or this parameter, varied over such a range? The distribution is established by sampling. A one-pair sample might be thought small, except that differential equations, being integrated, generally give smooth dependence – a near linear stretching of the solution space. So the distribution scales with the separation of paths.

Nick Stokes:

“In GCMs A is known;”

That is clearly false. Not a single parameter is known exactly.

That is one of the ways that you are missing Pat Frank’s main point: you are regarding as known a parameter that he regards as approximately known at best, within a probability range.

“There has been some obscurantism about uncertainty; it is basically a distribution function of errors. Over what range of outputs might the solution process take you if this input, or this parameter, varied over such a range. The distribution is established by sampling. A one pair sample might be thought small, except that differential equations, being integrated, generally give smooth dependence – a near linear stretching of the solution space. So the distribution scales with the separation of paths.”

You start well, then disintegrate. What “scales”, if you start with the notion of a distribution of possible errors (uncertainty) in a parameter, is the variance in the uncertainty of the outcome. That is what Pat Frank computed and you do not.

I showed how in my comment on your meter stick of uncertain length. If you are uncertain of its length then your uncertainty in the resultant measure grows with the distance measured. I addressed two cases: the easy case where the true value is known exactly within fixed bounds; the harder case where the uncertainty is represented by a confidence interval. You wrote as though the error could become known, and adjusted for — that would be bias correction, not an uncertainty propagation.
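The meter-stick case can be made concrete. A toy sketch with numbers of my own choosing, for the easy case where the stick’s length error is only known to lie within fixed bounds:

```python
# Suppose a nominally 1 m stick is really L = 1*(1 + eps) with eps unknown,
# but guaranteed |eps| <= tol. Laying it end to end n times and reporting
# n meters, the true distance lies in [n*(1 - tol), n*(1 + tol)]: the
# half-width of the uncertainty interval grows linearly with the distance.
tol = 0.001  # 0.1% length uncertainty (assumed for illustration)
for n in (1, 10, 100):
    half_width = n * 1.0 * tol
    print(n, "m reported, true value within +/-", half_width, "m")
```

The error itself never becomes known; only the interval that must contain it does, and that interval widens with every lay of the stick.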

Not everyone accepts, or is prepared to accept, that the unknowableness of the parameter estimate implies the unknowableness of the resultant model calculation; or that the “unknowableness” can be reasonably well quantified as the distribution of the probable errors, summarised by a confidence interval. That “reasonable quantification of the uncertainty” is the point that I think you miss again and again. Part of it you get:

“There has been some obscurantism about uncertainty; it is basically a distribution function of errors.”

But you seem only to concern yourself with the errors in the starting values of the iteration, not the distribution of the errors of the parameter values. Thus a lot of what you have written, when not actually false (a few times, as I have claimed), has been irrelevant to Pat Frank’s calculation.

“But you seem only to concern yourself with the errors in the starting values of the iteration, not the distribution of the errors of the parameter values.”

Pat made no useful distinction either – he just added a bunch of variances, improperly accumulated. But the point is that wherever the error enters, it propagates by being carried along with the flow of the DE solutions, and you can’t usefully say anything about it without looking at how those solutions behave.

On the obscurantism, the fact is that quantified uncertainty is just a measure of how much your result might be different if different but legitimate choices had been made along the way. And the only way you can really quantify that is by observing the effect of different choices (errors), or analysing the evolution of hypothetical errors.

“Nick Stokes: In GCMs A is known; That is clearly false.”

No, it is clearly true. A GCM is a piece of code that provides a defined result. The components are known.

It is true that you might think a parameter could in reality have different values. That would lead to a perturbation of A, which could be treated as an error within the known GCM. But the point is that the error would be propagated by the performance of the known GCM, with its A. And that is what needs to be evaluated. You can’t just ignore what the GCM is doing to numbers if you want to estimate its error propagation.

Nick Stokes:

No, it is clearly true. A GCM is a piece of code that provides a defined result. The components are known.

It is true that you might think a parameter could in reality have different values. That would lead to a perturbation of A, which could be treated as an error within the known GCM. But the point is that the error would be propagated by the performance of the known GCM, with its A. And that is what needs to be evaluated. You can’t just ignore what the GCM is doing to numbers if you want to estimate its error propagation.

That is an interesting argument: the parameter is known, what isn’t known is what it ought to be.

We agree that bootstrapping from the error distributions of the parameter estimates is the best approach for the future: running the program again and again with different choices for A (well, you don’t use the word bootstrapping, but you come close to describing it.) Til then, we have Pat Frank’s article, which is the best effort to date to quantify the effects in a GCM of the uncertainty in a parameter estimate. I eagerly await the publication of improved versions. Like Steven Mosher’s experiences with people trying to improve on BEST, I expect “improvements” on Pat Frank’s procedure to produce highly compatible results.

Your essay focuses on propagating the uncertainty in the initial values of the DE solution. You have omitted entirely the problem of propagating the uncertainty in the parameter values. You agree, I hope, that propagating the uncertainty of the parameter values is a worthy and potentially large problem to address. You have not said so explicitly, nor how a computable approximation might be arrived at in a reasonable length of time with today’s computing power.

“You have omitted entirely the problem of propagating the uncertainty in the parameter values.”

No, I haven’t. I talked quite a lot about the effect of sequential errors (extended here), which in Pat Frank’s simple equation leads to random walk. Uncertainty in parameter values would enter that way. If the parameter p is just added in to the equation, that is how it would propagate. If it is multiplied by a component of the solution, then you can regard it as actually forming part of a new equation, or say that it adds a component Δp*y. To first order it is the same thing. To put it generally, to first order (which is how error propagation is theoretically analysed):

y0’+Δy’=(A+ΔA)*(y0+Δy)

is, to first order

y0’+Δy’=A*(y0+Δy)+ΔA*y0

which is an additive perturbation to the original equation.
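To make that concrete, here is a toy numerical check (my own scalar example with arbitrary values; nothing here comes from any GCM) that the parameter perturbation ΔA does behave, to first order, like an additive forcing ΔA*y0:

```python
import numpy as np

# Toy check that a parameter perturbation acts, to first order, like an
# additive forcing. Scalar equation y' = (a + da)*y, integrated with
# Euler steps, compared against y0' = a*y0 plus the first-order
# perturbation equation dy' = a*dy + da*y0.
a, da = -0.5, 1e-3          # base parameter and its (small) perturbation
y_init, dt, n = 1.0, 0.01, 1000

y_pert = y_init             # solution of the perturbed equation
y_base = y_init             # unperturbed solution y0
dy = 0.0                    # first-order correction, driven by da*y0
for _ in range(n):
    y_pert += dt * (a + da) * y_pert
    dy += dt * (a * dy + da * y_base)   # dy' = A*dy + dA*y0
    y_base += dt * a * y_base

# y0 + dy matches the perturbed run up to terms of order da**2
print(abs(y_pert - (y_base + dy)), abs(dy))
```

The leftover discrepancy is of order da², negligible relative to Δy itself, which is the sense in which the linearised analysis is valid.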

Nick Stokes:

“Uncertainty in parameter values would enter that way.”

So how exactly do you propagate the uncertainty in the values of the elements of A? You have alternated in consecutive posts between claiming that the elements of A are known (because they are written in the code), and claiming that the parameter values used in calculating the elements of A are uncertain.

Pat Frank’s procedure is not a random walk; it does not add a random variable from a distribution at each step, it shows that the variance of the uncertainty of the result of each step is the sum of the variances of the steps up to that point (the correlations of the deviations at the steps are handled in his equations 3 and 4.) How exactly have you arrived at the idea that his procedure generates a random walk? Conditional on the parameter selected at random from its distribution, the rest of the computation is deterministic (except for round-off error and such); the uncertainty in the value of the outcome depends entirely on the uncertainty in the value of the parameter, not on randomness in the computation of updates.

I think your idea that a parameter whose value is approximated with a range of uncertainty becomes “known” when it is written into the code is bizarre. If there are 1,000 values of the parameter inserted into the code via a loop over the uncertainty range (either over a fixed grid or sampled pseudo-randomly as in bootstrapping), you would treat the parameter value (the A matrix) as “known” to have 1,000 different values. That is (close to) the method you advocate for estimating the effect of uncertainty in the parameter on uncertainty in the model output.

“it does not add a random variable from a distribution at each step, it shows that the variance of the uncertainty of the result of each step is the sum of the variances of the steps up to that point”

No, it says that (Eq 6) the sd σ after n steps (not of the nth step) is the sum in quadrature of the uncertainties (sd’s) of the first n steps. How is that different from a random walk?
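As a side note, the numerical identity between the two is easy to check with a toy simulation (per-step sd and step count invented purely for illustration): the spread of a random walk after n steps is exactly the root-sum-square of the per-step sd’s.

```python
import numpy as np

# A random walk whose steps have sd 0.5 each: after 100 steps the spread
# of outcomes equals the root-sum-square (quadrature sum) of the
# per-step sd's, i.e. sqrt(100 * 0.5**2) = 5.
rng = np.random.default_rng(0)
sigmas = np.full(100, 0.5)                       # per-step sd
steps = rng.normal(0.0, sigmas, size=(50_000, 100))
finals = steps.sum(axis=1)                       # positions after 100 steps

empirical_sd = finals.std()
quadrature_sd = np.sqrt((sigmas**2).sum())       # the Eq 6 form
print(empirical_sd, quadrature_sd)               # both close to 5.0
```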

You keep coming back to Eq 3 and 4, even though you can’t say where any correlation information could come from. Those equations were taken from a textbook; there is no connection made with the calculation, which seems to rely entirely on Eq 5 and Eq 6. There isn’t information provided to do anything else.

“I think your idea that a parameter whose value is approximated with a range of uncertainty becomes “known” when it is written into the code is bizarre.”

No, it is literally true. There is an associated issue of whether you choose to regard it as a new equation, and solve accordingly, or regard it as perturbing the solutions of the original one. The first would be done in an ensemble; the second lends itself better to analysis. But they are the same to first order in perturbation size.

Nick,

“the sum in quadrature of the uncertainties”

I believe you are trying to say that the uncertainties cancel. Uncertainties don’t cancel like errors do. Uncertainties aren’t random in each step.

“I believe you are trying to say that the uncertainties cancel.”

I’m saying exactly what eq 6 does. It is there on the page.

Nick,

What eq 6 are you talking about? Dr Frank’s? The one where he writes:

“Equation 6 shows that projection uncertainty must increase with every simulation step, as is expected from the impact of a systematic error in the deployed theory.”

That shows nothing about uncertainties canceling.

Or are you talking about one of your equations? Specifically where you say “where f(t) is some kind of random variable.”?

The problem here is that uncertainty is not a random variable. You keep trying to say that it is so you can depend on the central limit theory to argue that it cancels out sooner or later. But uncertainty never cancels, it isn’t random. If a model gives the same output no matter what the input is then the model has an intrinsic problem, it can just be represented by a constant. If the input is uncertain then a proper model will give an uncertain output, again if it doesn’t then it just represents a constant. What use is a model that only outputs a constant?

“That shows nothing about uncertainties canceling.”

You made that up. I said nothing about uncertainties cancelling. I said they added in quadrature, which is exactly what Eq 6 shows. He even spells it out:

“Thus, the uncertainty in a final projected air temperature is the root-sum-square of the uncertainties in the summed intermediate air temperatures.”

“You made that up. I said nothing about uncertainties cancelling. I said they added in quadrature, which is exactly what Eq 6 shows. He even spells it out: ‘Thus, the uncertainty in a final projected air temperature is the root-sum-square of the uncertainties in the summed intermediate air temperatures.’”

So you admit the uncertainties do not cancel, correct?

Nick, many thanks for this, and many thanks to WUWT for hosting it. This is what makes this site different.

As much as I dislike Nick’s stubbornness and refusal to admit when he’s wrong (a trait he has in common with Mann) he does occasionally provide useful insight.

In addition to models being reflections of their creators and tuning, there’s the lack of knowing the climate sensitivity number for the last 40 years.

They should probably name a beer after climate modeling, called “Fat Tail”.

https://wattsupwiththat.com/2011/11/09/climate-sensitivity-lowering-the-ipcc-fat-tail/

Anthony Watts

September 16, 2019 at 7:53 pm

Well, I personally would like to thank Anthony for running such a great site that actually allows Nick and his colleagues to have an input. Otherwise we are talking to ourselves, and what’s the point of that?

I often don’t agree with what Nick posts but I always learn something from the comments that inevitably ensue. It’s such a fun way to learn and you never know just what tangent you are going to be thrown off into.

The main thing I have learnt here is that the science is NOT settled!

The other great thing about WUWT is that (mostly) comments remain polite and civil. Everyone here appears to be an adult, unlike most other sites. So, thanks also to the moderators.

I’m with Alastair. I appreciate WUWT hosting someone like Nick, who we may disagree with, but is rational, polite and adds to the discourse. I often find that skeptical positions are improved and refined in the responses to the objections that Nick raises.

“Well, I personally would like to thank Anthony for running such a great site that actually allows Nick and his colleagues to have an input. Otherwise we are talking to ourselves, and what’s the point of that?”

I couldn’t agree more. We want to hear from all sides. We are not afraid of the truth.

Except for Griff, free the Griff! 😉

Yes, thanks to Anthony.

I’d like to add my thanks to WUWT for hosting it. I hope it adds to the discussion.

Nick Stokes Thanks for highlighting the consequences of modeling chaotic climate.

Now how can we quantify model uncertainty? Especially Type B vs Type A errors per BIPM’s international GUM standard?

Guide for the Expression of Uncertainty in Measurement (GUM).

See McKitrick & Christy 2018 who show the distribution and mean trends of 102 IPCC climate model runs. The chaotic ensemble of 102 IPCC models shows a wide distribution within the error bars. (Fig 3, 4)

However, the mean of the IPCC climate models is running some 285% of the independent radiosonde and satellite data since 1979, assuming a break in the data.

That appears to indicate that IPCC models have major Type B (systematic) errors. e.g., the assumed high input climate sensitivity causing the high global warming predictions for the anthropogenic signature (Fig 1) of the Tropical Tropospheric Temperatures. These were not identified by the IPCC until McKitrick & Christy tested the IPCC “anthropogenic signature” predictions from surface temperature tuned models, against independent Tropical Tropospheric Temperature (T3) data of radiosonde, satellite, and reanalyses.

McKitrick, R. and Christy, J., 2018. A Test of the Tropical 200‐to 300‐hPa Warming Rate in Climate Models. Earth and Space Science, 5(9), pp.529-536.

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018EA000401

Evaluation of measurement data — Guide to the expression of uncertainty in measurement

Look forward to your comments.

So do I . . .

David,

“Now how can we quantify model uncertainty?”

Not easily. Apart from anything else, there are a huge number of output variables, with varying uncertainty. You have mentioned here tropical tropospheric temperature. I have shown above a couple of non-linear equations, where solution paths can stretch out to wide limits. That happens on a grand scale with CFD and GCMs. The practical way ahead is by use of ensembles. Ideally you’d have thousands of input/output combinations, which would clearly enable a Type A analysis in your terms. But it would be expensive, and doesn’t really tell you what you want. It is better to use ensembles to explore for weak spots (like T3), and hopefully, help with remedying.
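For readers who want a picture of what an ensemble does, here is a minimal perturbed-parameter ensemble on a toy relaxation equation (model, parameter distribution, and all numbers invented for illustration; nothing here is from an actual GCM):

```python
import numpy as np

# Toy "model": Euler integration of y' = -p*y + f to near-steady state.
# The parameter p is uncertain, so we run an ensemble with p drawn from
# an assumed distribution and read off the spread of the outputs.
rng = np.random.default_rng(1)

def run_model(p, f=1.0, y0=0.0, dt=0.1, n=200):
    y = y0
    for _ in range(n):
        y += dt * (-p * y + f)
    return y

p_samples = rng.normal(0.5, 0.05, size=500)   # assumed uncertainty in p
outputs = np.array([run_model(p) for p in p_samples])

# Steady state is f/p, so the ensemble spread reflects the spread in p
print(outputs.mean(), outputs.std())
```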

Nick, thanks for the discussion. Rational dialogue beats the diatribe we are constantly subjected to in the climate debate.

It seems to me that we can argue ad nauseam about error propagation in models. But there is still one and only one test of models that is relevant. I’ll let Richard Feynman do the talking. The following is from a video at http://www.richardfeynman.com/.

Now I’m going to discuss how we would look for a new law. In general we look for a new law by the following process. First we guess it. [Laughter.] Then we… well don’t laugh, that’s really true. Then we compute the consequences of the guess to see what… if this is right… if this law that we guessed is right. We see what it would imply. And then we compare the computation results to nature, or we say compare to experiment or experience… compare it directly with observation to see if it… if it works. If it disagrees with experiment, it’s wrong. In that simple statement is the key to science. It doesn’t make a difference how beautiful your guess is, it doesn’t make a difference how smart you are, who made the guess, or what his name is [laughter], if it disagrees with experiment, it’s wrong [laughter]. That’s all there is to it.

I am not an expert on GCMs, but I often see graphs comparing GCM forecasts/projections/guesses with reality, and unless the creators of the graphs are intentionally distorting the results, the GCMs consistently over-estimate warming. It’s one thing to know how error propagates when we know the form of the equations (which is an implicit assumption in your discussion), but quite something else when we don’t.

“It’s one thing to know how error propagates when we know the form of the equations (which is an implicit assumption in your discussion), but quite something else when we don’t.”

repeated for effect.

“quite something else when we don’t”

With GCMs we do. They may not be perfect models of reality. But this article is discussing the error propagation of the equations we have and use.

“With GCMs we do. They may not be perfect models of reality.”

So we do….but we don’t. LOL.

We know the equations in the models, David. They may be numerical estimates for the solutions to differential equations, they may leave out or treat certain processes in a bulk manner, etc., and be a poor representation of reality…but we know what the equations are.

“We know the equations in the models, David.”

Of course we do. That wasn’t Randy’s point.

Stokes

You said, “I have shown above a couple of non-linear equations, where solution paths can stretch out to wide limits.” This sounds very much like you are qualitatively supporting Frank’s claim, even if you disagree at the quantitative level.

You had also said above, “… yes, DEs will generally have regions where they expand error, but also regions of contraction.” Do the contractions only occur with negative feedback? This sounds like an intractable problem where you never really know whether the uncertainty is within acceptable bounds or not. How would you recommend dealing with this? I’m not comfortable with the ensemble approach because what little I know about it suggests that the results can vary even when the same inputs are used in repetitive runs. If the outputs vary (with or without changing input variables) do the uncertainty errors vary as well?

Clyde,

“This sounds very much like you are qualitatively supporting Frank’s claim”

Firstly I’m saying that no such claim makes sense without looking at the underlying DE, which he does not do. But second, in the discussion on chaos, I’m pointing out that in terms of climate model performance, it’s looking at the wrong thing. We know that GCMs lose the information in the initial conditions (as does CFD). So there actually isn’t any point in trying to analyse what happened to error in the ICs. What counts is how the model handles the flux balances that it makes along the way.

Nick, “If a solution after a period of time has become disconnected from its initial values, it has also become disconnected from errors in those values.”

That does not mean the solution is correct.

“DEs will generally have regions where they expand error, but also regions of contraction.”

The analytical issue is predictive uncertainty, not error. Big mistake.

“This comes back to Roy Spencer’s point, that GCM’s can’t get too far out of kilter because TOA imbalance would bring them back.”

Roy Spencer’s argument turns upon his fatal misapprehension that a calibration error statistic is an energy flux.

Nick, “If a solution after a period of time has become disconnected from its initial values, it has also become disconnected from errors in those values.”

That does not mean the solution is correct.

—————————————–

If a solution becomes disconnected from its initial value then that means the initial values are meaningless. They could be anything. That’s just confirmation that the models are designed to produce a specific output regardless of inputs!

Spencer, learn some manners.

Greg

Whom are you presuming to lecture, me or Roy? I didn’t realize that well-mannered people lectured others. Or do you consider yourself above your own advice?

Spencer

” I didn’t realize that well-mannered people lectured others.”

No they usually don’t, Spencer.

I’m near to 70 years old, and would NEVER AND NEVER be of such subcutaneous arrogance to name a commenter like you do.

“Stokes”

By intentionally, repeatedly naming Dr Stokes that way, you show where you are…

Perfect.

Rgds

J.-P. Dehottay

Bindidon

It has already been established that I’m lacking manners. What is your beef? You and Greg referred to me by my surname! I reserve the familiarity of first names to those who I consider to be friends or at least not antagonistic to me. Not many people refer to Dr. Einstein by his formal title. It is, after all, a little redundant. If you or Stokes accomplish as much as Einstein, I’ll consider officially recognizing that you have a sheepskin. However, I consider it pretentious to insist that those who have completed the hoop jumping be addressed with an honorific title.

BTW, for what it is worth, I’m older than you by several years. How about showing a little respect for age?

Bindidon;

I’m reminded of a joke:

Dr. Smith is driving to his office and not paying close attention to things around him; he is busy checking his smartphone for his golfing appointment times. He is in a fatal accident and the next thing he is aware of is that he is at the end of a very long line outside of what is obviously the Pearly Gates. After getting his bearings, he walks up to the head of the line to speak with Saint Peter.

When he gets to Saint Peter he says, “Hello, I’m Doctor Smith and apparently I’ve just died. I’d like to get into Heaven.” Saint Peter looks down at him and says, “Yes, while you are earlier than expected, I know who you are. Please return to your position in line and I’ll process you when your turn comes.”

Dr. Smith blusters, “But, but, I’m a doctor. I have spent my entire career in the service of mankind healing the sick and living a good life. I have made generous contributions to various benevolent societies and my golf club. I’m not used to waiting. People wait for me!” Saint Peter replies, “Up here you are just like everyone else. Only the good people get this far, but I have some perfunctory tests to perform before I can allow you in. Now, end of the line!”

With head down and heavy heart, Dr. Smith trudges back to the end of the line, resigned to the fact that he will not get special treatment in heaven just because of his title. The line moves very slowly. It is even worse than the Department of Motor Vehicles! After a few years, a grizzled old man, stooped with age and carrying a little black bag with MD on the side, walks past the line, talks briefly with Saint Peter, and enters Heaven.

Dr. Smith is incensed! He runs back up to the front of the line and impertinently yells at Saint Peter, “Why are you making me stand in line while you let that other doctor just stroll right in?” Saint Peter adjusts his glasses so that he can look over the top of them, and sternly says, “You are mistaken! That was not a doctor. That was God. He only thinks he is a doctor.”

———

It isn’t just MDs that are afflicted with the problem of inflated self-importance. Even many lowly PhDs (and those who worship authority) suffer from a lack of perspective and think that somehow they should be accorded more respect because they have demonstrated the 1% inspiration and 99% perspiration that is required for an advanced degree. What they do with that degree is rarely taken into consideration. After all, they have achieved the penultimate accomplishment in life!

The spirit of the Title of Nobility Clause of the US Constitution is to discourage special privileges of citizens based on class or the seductive award of titles. That is, “One man, one vote.” People may receive respect from others for their accomplishments, but they shouldn’t expect that it is owed to them based on education alone. People have to earn my respect.

https://en.wikipedia.org/wiki/Title_of_Nobility_Clause

P.S. You said, “… would NEVER AND NEVER be of such subcutaneous arrogance to name a commenter like you do.” However, despite your protest, you demonstrated the falsity of your claim by addressing me the same way! I guess “never” means something different for you than it does me.

By all means reserve first name terms for friends; that does not prevent you from being polite to others. Calling someone by their surname when addressing them directly is obviously and intentionally being offensive, for no better reason than a disagreement about some aspect of climate.

I do that in addressing you above to reflect back your own offensive attitude. I tell you to learn some manners as one would correct an insolent child, since that is the level of maturity you are displaying.

Now learn some manners, and stop answering back !

Nick Stokes

What good are 1000s of runs to further quantify Type A uncertainty analysis – when ALL the models fail by 285% from Type B uncertainty when tested against independent climate data of Tropical Tropospheric Temperatures?

1) *** Why not fix this obvious Type B error first? ***

2) *** Are there other independent tests of climate sensitivity to confirm/disprove

this massive Type B error? ***

I am not a statistical expert (but I have stayed at Best Western many times). I am thinking type A is what Nick is doing. Basically evaluating the errors generated within the GCM itself. While type B is what Pat is doing, basically using external information to generate the uncertainties. In Pat’s case the external data itself is experimental, actual, real data.

The common shape of TCF error shown in Pat Frank’s Figure 4 is a damning piece of evidence of a common systematic bias programmed into many (most?) of the GCMs.

Computer models simply quantify a theory or hypothesis. Ultimately they are either able to predict what happens in the real climate or not. How their errors arise and propagate is important, but less so than their failure to predict real world results they are said to “model”

The various 100+ global climate models starting back in the late 1980’s have consistently missed the mark….The Earth has heated far less than they predicted. Continuous ad hoc adjustments in the data and recalibration of the models fails because the modellers refuse, as yet it seems, to create models strong on the natural negative feedbacks against temperature rise that our climate possesses. Also they do not seem to model for possible saturation effects producing diminishing forcing from CO2 as its concentration rises.

The current religion-like conviction that climate catastrophe is imminent is preventing these presumably intelligent scientists from doing unbiased work. They defend a belief rather than search for truth.

+1

+1

+1

“I am thinking type A is what Nick is doing.”

Pretty much. Ensemble is really a poor man’s Monte Carlo, limited by compute power.

And limited by the fact that we are really only guessing at the biases in the wheel. It may be a perfectly balanced system with lots of randomness about observed results, or it could be worn and wobbly. We’re paying attention to hurricanes in my part of the world, and ensembles get a lot of play in the press. There are some serious outliers early, and often late. The differences between the real storm and the forecasts may be trivial to those thousands of miles away, but they make a heck of a difference to those 20 miles from the eye wall.

It doesn’t give one confidence that the modeling is up to the task of predicting real world behavior or validating a theory when considering weather systems…. That discussion of errors in the models can be valid and useful, but it is not a discussion about whether the models are predicting a climate causation theory that is or is not being validated by live data which is or is not being diddled with.

The prediction doesn’t tell us where a hurricane will go. The hurricane validates, or does not, the utility of the forecast model(s).

If you are a long way away from the storm the models are pretty good. But it seems like more often than not, I’ve put up the shutters when I don’t need to.

I think the model = real life equation needs a lot of work on the climate side before I’ll take action, and it would sure be nice if we tracked ‘climate’ with the public accuracy and detail devoted to hurricanes.

Wind tunnels still exist because people cannot yet accept CFD as the final say on aircraft design.

It is almost impossible to model uncertainty in chaotic systems..

The analogy I like to use is a racing car crash. Perhaps some of you follow Formula 1 or Indy car or Nascar.

The difference of a few mm can result in a crash, or none.

Other issues of mm can decide whether or not the crash involves other cars and no one can predict at the start of the race that there will be a crash or where the pieces of car will end up. You would be a fool to try.

Predicting a global climate is somewhat akin to that.

All one can really do is for example say that if the cars are closely matched it increases the chance of a crash. Or if they are very disparate so they get lapped a lot, that also increases the chance. Maybe.

Nick is as bad as Brenchley at blinding with irrelevant BS, at making complicated things impenetrable.

At using one word to describe something different.

The fact is that GCMs are really not fit to forecast more than a few days ahead. That we don’t even have the right partial differential equations to integrate, have no idea of the starting values and have to fudge things so they work at all. We don’t even really understand the feedbacks inherent in the climate, and we have proved that we don’t.

All we can say is that long term there appears to be enough negative feedback to not freeze ALL of the oceans and not let the earth get much warmer than it already is, no matter how much historical CO2 it has had.

If people didn’t live in man made cities and suburbs they would never for an instant conceive of the ludicrous possibility that humans could control the climate.

Country slickers and city bumpkins these days.

…and if you can model certainties in a chaotic system that system is no longer chaotic.

Yes, absolutely; plus none of his article was cast into the standard language of the GUM, including probability distributions, or applying equation (10) of the GUM to calculate combined uncertainties from the measurement equation (1), Y = f(X1, X2, X3, … Xn).

It would also be quite interesting to see Monte Carlo methods applied to a GCM as per the GUM, varying all the adjustable parameters over their statistical ranges and distributions (GUM Supplement 1, “Propagation of distributions using a Monte Carlo method”, JCGM 101:2008).
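For a trivially simple measurement equation the two GUM routes can be compared in a few lines (a sketch with invented numbers, just to show the mechanics; Y = X1·X2 stands in for a real measurement equation):

```python
import numpy as np

# Compare GUM eq (10) (first-order combined uncertainty) with the
# JCGM 101 Monte Carlo propagation of distributions, for Y = X1*X2.
rng = np.random.default_rng(2)
x1, u1 = 10.0, 0.2        # estimate and standard uncertainty of X1
x2, u2 = 5.0, 0.1         # estimate and standard uncertainty of X2

# GUM eq (10): u_c^2 = (dY/dX1)^2*u1^2 + (dY/dX2)^2*u2^2
u_gum = np.hypot(x2 * u1, x1 * u2)

# Supplement 1: sample the input distributions, look at the output spread
y = rng.normal(x1, u1, 1_000_000) * rng.normal(x2, u2, 1_000_000)
u_mc = y.std()
print(u_gum, u_mc)        # nearly identical for this mildly non-linear case
```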

A post elsewhere that highlights the problem Nick is trying to address

“how is it that we can reasonably accurate calculate GMST with only about 60 gauges? I know that ATTP has had at least one blog post in that regard. Now, I think that error improves as the (inverse) square root of the number of gauges. The average is twice as accurate for N = 3,600, not proportional to the square root of N but proportional to the inverse square root of N.”

–

GMST is such a fraught concept.

Problem one is the definition of the surface on a mixed changing atmospheric world (variable water vapour) plus a mixed solid/liquid surface of variable height and depth on top of an uneven shape with long term variability in the spin and torque and inclination of the world plus the variation in distance from the heating element plus variation in the shade from the satellite at times and albedo variation from clouds and volcanic emissions and ice and dust storms and heating from volcanic eruptions and CO2 emission and human CO2 emissions.

Phew.

–

We could get around this partly by measuring solar output, albedo change and earth output from space by satellites and just using a planetary emission temperature as a substitute for GMST.

You could actually compute what the temperature should be at any location on earth purely by its elevation, time of year and orientation in space to the sun without using a thermometer.

–

In a model world, barring inbuilt bias, one only ever needs one model thermometer. There can be no error. Using 3600 does not improve the accuracy.

In a model world allowing a standard deviation for error will lead to a possible Pat Frank scenario. The dice can randomly throw +4 W/m² forever. Having thrown one head is no guarantee that the next throw or the next billion throws will not be a head.

Using 3600 instead of 60 does not improve the accuracy at all. It improves the expectation of where the accuracy should be is all. While they look identical accuracy and expectation of accuracy are two completely different things. Your statement on probability is correct.

–

Finally this presupposes a model world and temperature and reasonable behaviour. Thermometers break, or degrade over time; people enter results wrongly, or make them up, or take them at the wrong time of day, or average them when missing (historical). The accuracy changes over time. They only cover where people can get to easily, like looking for your keys under the streetlight: spatial gaps at height, at sea, polar, desert, Antarctica etc. Collating the information in a timely manner, not 3 months later when it all comes in. Are 3600 thermometers in the USA better than 60 scattered around the world?

–

60 is a good number adequately sited for an estimation. 3600 is a lot better. As Paul said any improvement helps modelling tremendously.

Not having a go at you, just pointing out the fraughtness
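On the 60-versus-3600 question specifically: for idealized, independent, unbiased stations the standard error of the simple average shrinks like 1/√N, so 3600 stations beat 60 by a factor of about √60 ≈ 7.7. A quick simulation (idealized, ignoring all the correlation, siting, and bias problems listed above; the station noise level is invented):

```python
import numpy as np

# Standard error of an N-station average for idealized independent
# stations, each reading the true mean plus N(0, 2) noise.
rng = np.random.default_rng(3)
true_mean, station_sd = 15.0, 2.0

def mean_error_sd(n_stations, trials=2000):
    readings = rng.normal(true_mean, station_sd, size=(trials, n_stations))
    return readings.mean(axis=1).std()   # spread of the trial averages

e60, e3600 = mean_error_sd(60), mean_error_sd(3600)
print(e60, e3600, e60 / e3600)   # ratio near sqrt(3600/60) ≈ 7.75
```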

Long time no see angech!

I assume this is the article to which you are referring?

https://andthentheresphysics.wordpress.com/2019/09/08/propagation-of-nonsense/

“I’ll briefly summarise what I think is the key problem with the paper. Pat Frank argues that there is an uncertainty in the cloud forcing that should be propagated through the calculation and which then leads to a very large, and continually growing, uncertainty in future temperature projections. The problem, though, is that this is essentially a base state error, not a response error. This error essentially means that we can’t accurately determine the base state; there is a range of base states that would be consistent with our knowledge of the conditions that lead to this state. However, this range doesn’t grow with time because of these base state errors.”

Hi Jim, good to see you too.

Glad to see you commentating.

Hope all going well.

I have been hibernating.

Jim, ATTP hasn’t grasped the fact that the LWCF error derives from GCM theory-error. It enters every single step of a simulation of future climate.

I explained this point in detail in the paper. ATTP either didn’t see it, or else doesn’t understand it.

In either case, his argument is wrong.

ATTP apparently also does not understand that propagating an incorrect base-state using a model with deficient theory produces subsequent states with a different error distribution. Error is never constant, never subtracts away, and in a futures projection is of unknown sign and magnitude.

There’s a long discussion of this point in the SI, as well.

All that’s left to determine projection reliability is an uncertainty calculation. And that shows the projected states have no physical meaning.

Not pertinent to the point you are making, but despite Lorenz being an atmospheric physicist, the set of equations you have shown weren’t meant as a simple climate model. They actually are a highly truncated solution of the equations of motion (Navier-Stokes) plus energy equation (heat transport) designed as a model of finite amplitude thermal convection.

Yes, “climate model” is too loose. He described the equations as representing elements of cellular convection.
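For anyone who wants to see the behaviour being discussed, here is a bare-bones integration of Lorenz’s 1963 system with the standard parameters (the step size, run length, and 1e-9 initial offset are my own arbitrary choices):

```python
import numpy as np

# Lorenz's 1963 convection equations, integrated twice from initial
# states differing by 1e-9. Forward Euler with a small step is crude,
# but it shows the point: nearby trajectories separate exponentially
# until the difference saturates at the size of the attractor.
def step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-9, 0.0, 0.0])
sep = []
for _ in range(8000):                 # about 40 time units
    a, b = step(a), step(b)
    sep.append(float(np.linalg.norm(a - b)))

print(sep[0], max(sep))   # ~1e-9 at the start; grows to the attractor scale
```

This loss of initial-condition information is the same reason path-tracking over long times is hopeless in CFD and GCMs, as discussed above.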

“…GCMs are CFD and also cannot track paths…”

Huh? Of course you can track paths in CFD. It’s one of the main uses of CFD modeling.

“…GCMs, operating as weather forecasts, can track the scale of things we call weather for a few days, but not further, for essentially the same reasons…”

I noticed you led off with “GCM” in the article and didn’t distinguish between general circulation models and global climate models. For weather forecasts, clearly you are referring to the former. That’s a much higher degree of resolution and doesn’t use gross approximations for much of the physics involved the way global climate models do.

“That’s a much higher degree of resolution and doesn’t use gross approximations for much of the physics involved the way global climate models do.”

And they’re still amazingly wrong a good portion of the time.

In my opinion the problem with an ensembles approach is that, when there is a spread in the model output, you never know which model is more correct, so using the ensemble average can lead to more error than using a known better model, at least over time. For example, in weather forecasting, if you know one particular model is more correct more of the time, you can lean or hedge toward that to reduce your overall error, perhaps not all of the time, but for more of the time and certainly for individual forecasts. And, fortunately, that known better model is identifiable in weather forecasting, whereas with climate forecasting, this would seem to be an intractable problem in identifying a superior performer, assuming the same paradigm can be used in both cases.

4caster

+1

As I said elsewhere, Nick is talking about ensembles of runs of the SAME model, not IPCC’s averaging of an “ensemble” of garbage.

So even the IPCC doesn’t trust its models, so it takes “different” ones for an ensemble and averages them.

The root mean square of trust should tell you something!

Great post Nick … but you comment that Frank used a simple model instead of the complex GCMs. In Frank’s defense, he demonstrated that the GCM was nothing more than a really fancy, complex computer code that was created and tuned for a purpose: to create a linear relationship between GHG forcing and temperature. So while it may give the “desired” result for a temperature, it failed miserably at predicting cloud fraction. By default, cloud fraction has an impact on LWCF, and if the model cannot accurately predict the cloud fraction, it can’t accurately predict the total GHG forcing. As such, it is really not computing the impact from the raw inputs; it is some bastardized system to make it look as if it is, but in reality it is just tracking a made-up GHG forcing that was created to fit the agenda.

But his paper took it one step further. The flaw in the coding didn’t just randomly miss the cloud fraction; it was a systemic error in coding that was apparently adopted by all GCMs, and as such they ALL wrongly predicted cloud fraction in the same fashion. Thus, they ALL are nothing more than complex models that depend on only one factor: the prescribed “estimated climate sensitivity” as deemed appropriate by the creator. That is, all the factors that go into the complex GCM model you talk about cancel out except one – ECS. What this means is that all the various contributing factors to ECS are rolled up into one number, and it doesn’t matter what the various factors have to say about it. If you get cloud fraction wrong, it doesn’t matter; the model will still predict the desired temperature, because the model only takes into account ECS. If water vapor doesn’t work out, if the hot spot isn’t there, positive feedback, negative feedback – none of it matters, because the GCM really only takes into account ECS. And they ALL do it. This is why they all failed miserably at predicting temperature as a function of CO2.

AND … we know where they got their made-up ECS: from the correlation of temperature with CO2 in the warming period of the latter 20th century.

Essentially, GCMs give the result that the creators designed them to give: catastrophic global warming, and it is all Man’s fault.

“he demonstrated that the GCM was nothing more than a really fancy, complex computer code, that was created, and tuned for a purpose, …. to create a linear relationship between GHG forcing and temperature”

Well, he said all that. But I didn’t see a demonstration, and it isn’t true. The fact that there is an approximately linear relation between forcing and temperature isn’t surprising – Arrhenius figured it out 123 years ago without computers. And the fact that GCMs confirm that it is so is not a negative for GCMs. In fact GCMs were created to predict weather, which they do very well. It is a side benefit that if you leave them running beyond the prediction period, they lose sync with the weather, but still get the climate right.

“that being the prescribed ‘estimated climate sensitivity’ as deemed appropriate by the creator”

ECS is not prescribed. A GCM could not satisfy such a prescription anyway.

Shouldn’t the simplest equation that accurately describes a system always be used over an arbitrarily more complex one?

Why do you assume GCMs and CFDs behave the same and use the same formulas? Aren’t the CFD models only good for closed systems? The atmosphere is definitely not a closed system.

Fair enough Nick, …. he didn’t go through a lot of mathematics to show the similarity between GCMs and a linear model, but you can’t deny that the two predict the same thing. You also can’t deny that the GCMs have not been accurate in their predictions.

Also, you note that the GCMs are just CFD equations and predict weather. While true, there is a big difference. Your article got me to reading about CFD, and a big difference is that CFDs use known parameters to predict the unknown. GCMs in climate science use a lot of unknowns to predict a desired outcome. For that matter, even in the articles about CFDs and their use in GCMs, they note they are only good out to a few days. I interpret this as being consistent with your original statement that when error happens in a CFD equation, it blows up. It’s a case where CFD is as good an approach as you can get for modeling weather, but it really is not the appropriate for long term calculations, because the knowns normally used in CFDs become unknowns the farther you go out in a GCM.

Essentially, your explanation could be read as supporting evidence for Frank. Error in a CFD results in disaster. Likewise, since GCMs do not use known parameters, they are destined to blow up in the long run. And that is exactly what they do.

The Errors of Arrhenius

That is just an opinion by a blogger. And it is ill-informed. There was a counter-argument put by Ångström, but it was that CO2 absorption bands were saturated, not that water was not significant. Neither was a consensus position until better spectroscopy resolved in favor of Arrhenius in the 1950s.

“The fact that there is an approximately linear relation between forcing and temperature isn’t surprising – Arrhenius figured it out 123 years ago without computers.”

Once again ignoring the fact that the “forcing” of which you speak contains a gigantic caveat – that whole “all other things held equal” thing. In other words, the effect on temperature is entirely hypothetical. Since in the real world, not only are “all other things” most certainly NOT “held equal,” but the feedbacks are net negative, which means the actual “temperature result” of increasing atmospheric CO2 is essentially zero. Hence the reason that no empirical evidence shows a CO2 effect on temperature – only the fantasy-world computer models do.

Oh, and of course, you ignore the most pertinent conclusion of Arrhenius – that any warming resulting from increasing atmospheric CO2 would be both mild and beneficial.

+1

The whole climate scare industry is incredibly obnoxious. It is such an insult to intelligence, all the while being so strident and domineering.

@Nick

What about the logarithmic relation CO2 → temperature rise?

That is a log relation between CO2 and forcing.

Nick, despite your immense knowledge of a variety of advanced topics like chemistry, physics, mathematical analysis, etc., it is statements like the following that reveal that ultimately you have no idea what GCMs do. You said “The fact that there is an approximately linear relation between forcing and temperature isn’t surprising ……………..And the fact that GCMs confirm that it is so is not a negative for GCM’s.” THEY DO NOT CONFIRM ANY SUCH THING. THERE IS A CORE CODE KERNEL THAT IS PASSED ON WITH EACH GENERATION THAT IS GIVEN TO EACH GCM TEAM. THAT IS WHY THE GENERATIONS ARE NUMBERED. IT IS NOT BECAUSE OF NEW HARDWARE. THAT IS ALSO THE REASON WHY INITIALLY ALL THE CMIP 6TH GENERATION MODELS RAN HOTTER THAN THE CMIP 5TH. THE CORE CODE FORCES THE MODELS TO ADOPT A LINEAR RELATIONSHIP BETWEEN FORCING AND TEMPERATURE. GCMs CONFIRM NOTHING BECAUSE THEY KNOW NOTHING.

“THERE IS A CORE CODE KERNEL THAT IS PASSED ON WITH EACH GENERATION”

OK, if you know all about GCMs, could you point to that code kernel?

I think you are missing an “n” in your final summary/conclusion as to the cause of catastrophic global warming. 🙂

great comment. thank you for that incisive summary. although I had my doubts about the paper at first, I have come to the conclusion that it shows exactly what you said.

Nick, “Well, he said all that. But I didn’t see a demonstration, ….”

That can only be for one of two reasons. Either you didn’t look, or you didn’t believe your lying eyes.

“…and it isn’t true.”

It is true. The paper has 27 demonstrations, with another 43 or so in the SI.

Plus a number of emulation minus projection plots that show near-zero residuals.

“The paper has 27 demonstrations”

OK, let’s look at the quote again:

“he demonstrated that the GCM was nothing more than a really fancy, complex computer code, that was created, and tuned for a purpose, …. to create a linear relationship between GHG forcing and temperature”

“he demonstrated that the GCM was nothing more than a really fancy, complex computer code”

Well, it’s a computer code. I don’t know how the journal allowed the adornment of “fancy, complex”. That just means you haven’t got a clue what is in it.

“that was created, and tuned for a purpose”

You demonstrated nothing about why it was created. You gave no information at all. Nor did you give any information about tuning. That is just assertion.

“to create a linear relationship between GHG forcing and temperature”

No, it didn’t create such a relationship. That is basically a consequence of Fourier’s Law. Arrhenius knew about that. People like Manabe and Wetherald developed it with far more insight and sophistication, 50+ years ago, than you have. And it is not surprising that a GCM should find such a relation.

And to say it was created for that purpose is absurd. GCMs calculate far more variables than just surface temperature. And the original reason for the creation of such programs was in fact weather forecasting.

You have given nothing to support the claim that they were created for this purpose.

OK, this maths is way beyond me, but the takeaway I get from this is that if we run multiple iterations, we can predict the weather more than a week in advance?

There’s math up there?? I thought maybe Nick was using the Wingdings font. 🙂

I think he is saying all GCMs are always correct, because if you use the most complex equation, TOA forces it to always be right no matter how much or how many errors you have or what input conditions there are. I want to be a climate scientist; then my models mathemagically are always correct.

But seriously, what error, if any, put into these models can make them wrong using your logic?

Ironargonaut, good question. Is there any level of uncertainty that makes the models blow up?

These models start out with a key variable error larger than the signal, but due to the “rules of constraint” (including thermal equilibrium which is god awfully unlikely if integrated within time scales less than a few centuries) everything stays within bounds.

Your question needs to be answered.

To add to that uncertainty, the variable in question (cloud effects), is a variable variable over spatial resolutions 2 orders of magnitude finer than the resolution of the models. We may never have the computing power to accurately simulate global climate reality.

You can model what you don’t know everything about and beneficially learn a lot about “it”…but with certainty inversely proportional to the ignorance. We are not there yet.

Spot on. We can’t even guess what the sun is up to next. And we all know it’s the biggest player in the room. Look at what it was doing in the late 90’s versus these days. I made a movie clip of May 1998 of the explosions going on, and it was incredible. Now look at it.

Agree 100% We aren’t there yet. And common sense tells you that there’s no way an increase from 300 PPM to 400 PPM can make that much difference. It’s practically nothing.

And then you look at how the IPCC has had to dial back their predictions over and over again. There’s no way these guys have figured this out yet. Not even close. If there was a model that worked, we’d all be looking at it and referencing it, and trying to make it better. But there isn’t one that comes close yet.

You merely need to compare the “ensemble of wrong answers” from the models to reality (overstated as it is due to “adjustments” of the “actual” temperature “records”, which are heavily polluted by confirmation bias) to know that there is systemic error in the models – NONE are below the “actual” temperatures, because they all contain some “version” of the same mistake. They all assume atmospheric CO2 “drives” the Earth’s temperature, when it clearly does not.

Ensembles of runs are already used in modelling weather, since small uncertainties in meteo data fed in as initial conditions can lead to a storm forming or not. That is where they get the probabilities of various outcomes for the days ahead, storm warnings etc.
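The initial-condition sensitivity behind ensemble weather forecasting is easy to demonstrate with the Lorenz-63 system mentioned earlier in the thread. This is a minimal sketch, not an operational system: the 20-member ensemble, the 1e-6 perturbation size, and the forward-Euler integrator are all illustrative choices.

```python
import numpy as np

# Lorenz-63 parameters (the classic sigma, rho, beta values)
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(s, dt=0.005):
    """One forward-Euler step for an array of states, shape (n, 3)."""
    x, y, z = s[:, 0], s[:, 1], s[:, 2]
    d = np.stack([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z], axis=1)
    return s + dt * d

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])
# 20-member ensemble: the same state plus tiny (1e-6) random perturbations
ensemble = base + 1e-6 * rng.standard_normal((20, 3))

for _ in range(4000):  # ~20 model time units
    ensemble = lorenz_step(ensemble)

# The members have diverged to the scale of the attractor itself
spread = ensemble.std(axis=0)
print(spread)
```

The spread of the members at the end is what an operational center would turn into outcome probabilities; here it simply shows that near-identical starts end up in very different states.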

The tricks are in the parsing for instance.

“For any coefficient matrix A(t), the equation. y’ = A*y + f’ – A*f

has y=f as a solution. A perfect emulator. But as I showed above, the error propagation is given by the homogeneous part y’ = A*y. And that could be anything at all, depending on choice of A. Sharing a common solution does not mean that two equations share error propagation. So it’s not OK.”

–

First up, sharing a common solution is impressive. It is a hint that there is connectivity. So really it is OK.

Second, if the part Pat Frank is talking about is the one where error propagation is given – and it is, because you have effectively stated that your first equation does not have any error propagation in it, i.e. it has a (one and only) solution – then you have subtly swapped the important one with errors in for the unimportant straight-number solution.

Then you claim that your substituted example is useless because it has no error in it.

Well done.

Particularly all the spiel while swapping the peas.
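Nick’s point in the quoted passage – that two equations can share a solution yet propagate error completely differently, because propagation is governed by the homogeneous part – can be checked numerically. A minimal sketch, with f(t) = sin(t) and scalar coefficients A = ±1 chosen purely for illustration:

```python
import numpy as np

def integrate(A, y0, t_end=5.0, dt=0.001):
    """Forward-Euler solve of y' = A*(y - f(t)) + f'(t), with f(t) = sin(t).

    For every A this equation has y = sin(t) as its exact solution;
    only the homogeneous part (the coefficient A) governs what happens
    to an error in the initial condition."""
    t, y = 0.0, y0
    for _ in range(int(t_end / dt)):
        y += dt * (A * (y - np.sin(t)) + np.cos(t))
        t += dt
    return y

eps = 1e-3  # a small initial error; the true solution starts at sin(0) = 0
err_grow = abs(integrate(+1.0, eps) - np.sin(5.0))   # error amplified ~ e^5
err_decay = abs(integrate(-1.0, eps) - np.sin(5.0))  # error damped ~ e^-5
print(err_grow, err_decay)
```

Same unperturbed solution in both runs, yet the initial error is amplified by orders of magnitude for A = +1 and damped away for A = −1.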

Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows.

A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth’s atmosphere or oceans. Atmospheric and oceanic GCMs (AGCM and OGCM) are key components along with sea ice and land-surface components.

GCMs and global climate models are used for weather forecasting, understanding the climate and forecasting climate change.

https://youtu.be/ivGNV_lXvSo

Nick. Now I see the point you were making last week. However, this doesn’t have to involve differential equations – it can involve only a measurement equation. Let’s return to the equation in your first example,

y’ = v + a*t

where a is a constant acceleration. The solution is

y = v*t + a*t²/2

This is a measurement equation for y. In this case it comes from a differential equation, but it does not have to. Let our uncertainty in the acceleration (a), and in the initial velocity (v), be σ_a and σ_v.

As we usually do propagation of error in general cases, the variance in y from the propagated uncertainties in a and v is

σ_y² = (∂y/∂v)σ_v² + (∂y/∂a)σ_a²

The partial derivatives are sensitivity coefficients. In this particular case

∂y/∂v = t, ∂y/∂a = t²/2

The uncertainty in initial velocity produces an uncertainty in y growing linearly in time, and that in acceleration produces an uncertainty in y that grows as the second power in time. This is a fairly general view of the matter which can apply to calibration uncertainties, errors in the parameters of a measurement equation and so forth – i.e. laboratory work, measurement of real things.

In the state space system partials of the matrix A with respect to state variables are sensitivity terms.

Darnit. I didn’t get the sensitivity coefficients squared (not having a test page in real time is a sort of nuisance). That last propagation of error should look like

σ_y² = (∂y/∂v)²σ_v² + (∂y/∂a)²σ_a² = t²σ_v² + (t²/2)²σ_a²

which is the same as Nick’s result when a=0. The uncertainty in y then grows linearly in time. One other point I failed to mention is that the GCMs are not just differential equations. They involve other rules like parameterization. This means they would have to do something like what I have outlined above. Or Nick’s ensemble method.

The ensemble approach takes care of a lot of this messiness, but then one has to make an honest ensemble. Are people really any good at such? Fischhoff and Henrion showed that scientists are not especially good at evaluating bias.
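The propagation-of-error calculation described above (variance built from squared sensitivity coefficients in quadrature) can be sanity-checked by Monte Carlo. All numbers here (t, v, a and the sigmas) are made-up illustrative values:

```python
import numpy as np

def sigma_y(t, sigma_v, sigma_a):
    """Propagated uncertainty in y = v*t + a*t**2/2, from the quadrature
    sum with squared sensitivity coefficients dy/dv = t, dy/da = t**2/2."""
    return np.sqrt((t * sigma_v) ** 2 + (0.5 * t ** 2 * sigma_a) ** 2)

# Monte Carlo cross-check with assumed values for t, v, a and the sigmas
rng = np.random.default_rng(1)
t, v0, a0, sv, sa = 10.0, 3.0, 0.5, 0.1, 0.02
v = v0 + sv * rng.standard_normal(200_000)
a = a0 + sa * rng.standard_normal(200_000)
y = v * t + 0.5 * a * t ** 2
print(sigma_y(t, sv, sa), y.std())  # the two should agree closely
```

The sampled spread of y matches the analytic quadrature formula, and the two terms scale as t and t² respectively, as the comment states.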

Kevin,

“However, this doesn’t have to involve differential equations”

But it can. I thought of trying to say more on what DEs do to random walk, but the article was getting long. The key is the inhomogeneous solution:

W(t) ∫ W⁻¹(u) f(u) du

where W = exp(∫ A(u) du )

If f is an iid random, then that has to be integrated in quadrature, weighted with W⁻¹ . With constant acceleration, it will have power weightings, giving the case you describe. The exponential cases are interesting. With W=exp(c*t) and f unit normal, the answer is

exp(c*t) * sqrt(∫ exp(-2*c*u) du) ≈ 1/sqrt(2*c) * exp(c*t) for large t

So just a scaled exponential rise. And if c is negative, it is the recent end of the integral that takes over, and so tends to a constant

1/sqrt(-2*c)

Neither is like a random walk; that is really dependent on the simplicity of y’=0. In your case too, σ_y has t² behaviour, same as the solution. Except for y’=0, the cumulative error just tends either to a multiple of a single error, or a multiple of constant σ.
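Nick’s claim that, for negative c, noise accumulated through the equation saturates at the constant 1/sqrt(-2c) rather than random-walking can be checked with a stochastic Euler sketch (c, the step size, and the ensemble size are arbitrary illustrative choices):

```python
import numpy as np

# Euler-Maruyama integration of y' = c*y + f, with f white noise of
# unit intensity.  For c < 0 the accumulated perturbations saturate
# near the predicted constant 1/sqrt(-2c) instead of random-walking.
rng = np.random.default_rng(2)
c, dt, n_steps, n_paths = -0.5, 0.01, 20_000, 2_000
y = np.zeros(n_paths)
for _ in range(n_steps):
    y += dt * c * y + np.sqrt(dt) * rng.standard_normal(n_paths)

print(y.std(), 1.0 / np.sqrt(-2.0 * c))  # sampled spread vs prediction
```

After many relaxation times the spread across realizations sits at the predicted constant, however long you keep integrating.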

I’m no expert, but I am reasonably literate in the use of CFD and other modeling sciences so while I can’t speak to this error propagation issue directly I am a skeptic of GCMs.

For instance, let’s examine computational aerodynamics. This field is over 50 years old, has been funded massively by NASA and industry, and has been tested side-by-side with physical wind tunnel models to validate and calibrate the codes.

Splendid! We can now design aircraft that we know with high confidence will take off, fly safely, and land on the first go. But only if they adhere to highly constrained well behaved flight. Assume some severe maneuvers that create a high angle of attack triggering turbulent flow and all bets are off. Yes, you can refine the grid but for a practical matter not enough to be computationally feasible. For instance, it is IMPOSSIBLE with current technology to model a parachute opening which causes the planetary lander community no end of grief. But thank God for wind tunnels and the ability to physically test prototypes.

Another notable example is nuclear weapons research, where they need 6 months or so on the world’s greatest supercomputers (like a 50,000-node cluster of FP-intense (high-precision floating point) processors) to run 1 simulation of a few microseconds of real time. Now admittedly that is for a grid-intensive model no doubt engineered to grid-invariant levels, but again it’s only for a few microseconds. Molecular modeling is similarly challenging, trying to predict protein folding and the like for drug design. Here the model is limited by practicality to some 10s of thousands of atoms (call it 50K), and here they run Maxwell’s equations across the model in femtosecond steps, again for just a few microseconds. And this takes a big cluster (at least 100s of nodes) 3 or more months.

All of this makes me extremely skeptical of climate models. There are simply too many variables, too many unknowns, and way too much potential for confirmation bias in their crafting, since they are completely untestable in the real world (well, at least until 100 years pass).

I work at a large semiconductor company where modeling device physics is absolutely critical to advancing the state of the art. But models are simply guidelines; they ARE NOT data. Data comes when you actually build devices and characterize their performance physically. And it is ALWAYS different than the models, and often by a lot.

Earth is a reasonably large structure and atmospheric and oceanic behavior involves a lot of turbulent behavior. And our understanding of the feedback mechanisms, harmonics, and countless other factors are marginal at best. And while ensembles seem a useful tool to help deal with uncertainty, I still question the legitimacy of any climate model to make multi-trillion dollar policy decisions. If these modelers were really as good as they think they are they’d all be hedge fund billionaires instead of coding earth scale science fiction (not totally meant as a disparagement, at least somewhat to reiterate the point that models are not data).

“But thank God for wind tunnels”

He isn’t all that generous. I think you’ll find that in the cases you mention (high angle of attack etc.) wind tunnels don’t do so well either. It’s just a very hard problem, and one for planes to avoid. And remember, wind tunnels aren’t reality either. They are a scaled version, which has to be related to reality via some model (e.g. Reynolds number).

“they are completely untestable in the real world”

No, they are being tested all the time. They are basically the same as weather forecasting models; some, like GFDL, really are the same programs, run at different resolution. Weather forecasting certainly has a similar number of variables etc.

And the reliability of weather forecasts is still massively disappointing, even though they have fabulously precise historic data, a massive global sensor network, and satellite imagery to support their creation, programming, calibration, etc. Which is why I always carry an umbrella no matter where I’m traveling around the U.S. and east Asia. Don’t leave home without it.

And look at the recent hurricane Dorian tracks published by NOAA. They were quite literally all over the map looking out more than 1 day. And none that I am aware of predicted the virtual stall over the Bahamas for a day and a half, putting their time-scale accuracy in pretty severe question.

I’m not a flat-earther who thinks increasing CO2 concentrations don’t have any effect. Physics is physics. But we can’t model nature as precisely as many contend, and weather forecasting is the PERFECT example. I have never heard a rain forecast stated in finer than 10% increments, for example.

I simply don’t think we are anywhere near a “tipping point” response (what with life on earth thriving during the Cambrian explosion etc., with concentrations something like 10x higher than today). Net net, Pat Frank is right: there is no certain doom in our current carbon energy dependency. The certain doom lies in decarbonizing as recklessly as the AGW alarmist camp advocates; that is the most certain path to diminishing the quality of life for billions that I can think of.

“even though they have fabulously precise historic data”

No they don’t. What limited data exists may be precise, but most of the data are fake. Not altered – FAKE.

Exactly the comment I was waiting to see.

Some well-known auto firms closed their test strips with assurance the CFD was good enough (and cheaper). The result was highly embarrassing, and costly.

Even the bomb test moratorium is just smoke and mirrors – what is the NIF?

No one would put national defense at the mercy of any kind of MHD code alone.

Now why are we supposed to put the entire physical economy at the mercy of an “ensemble” of dancing models parading as reality? It makes the can-can look serious!

It’s impossible to accurately model something that is not well understood. That’s all a model is: encoded intelligence of understanding. Climate dynamics are not well understood, and there simply aren’t sufficient historical temperature data. Anyone who says otherwise is lying. So really this whole controversy boils down to faith: faith in empirical, measurable science or faith in model world.

+1

Repeated for emphasis:

It’s impossible to accurately model something that is not well understood. That’s all a model is: encoded intelligence of understanding. Climate dynamics are not well understood, and there simply aren’t sufficient historical temperature data.

I thought a few of Nick’s comments elsewhere might help us understand his rationale and point of view.

“Nick Stokes September 11, 2019 at 8:48 am

I’ve put up a post here on error propagation in differential equations, expanding on my comment above. Error propagation via de’s that are constrained by conservation laws bears no relation to propagation by a simple model which comes down to random walk, not subject to conservation of mass momentum and energy.

…….I am talking about error propagation in the Navier-Stokes equations as implemented in GCMs. I have decades of experience in dealing with them. It is supposed to also be the topic of Pat Frank’s paper.

……No, the actual nature of the units are not the issue. If the quantity really was increasing by x units/year, then the /year would be appropriate, whether units were watts or whatever. But they aren’t. They are just annual averages, as you might average temperature annually. That doesn’t make it °C/year.

……Yes, that is the problem with these random walk things. They pay no attention to conservation principles, and so give unphysical results. But propagation by random walk has nothing to do with what happens in differential equations. I gave a description of how error actually propagates in de’s here. The key thing is that you don’t get any simple kind of accumulation; error just shifts from one possible de solution to another, and it then depends on the later trajectories of those two paths. Since the GCM solution does observe conservation of energy at each step, the paths do converge. If the clouds created excess heat at one stage, it would increase TOA losses, bring the new path back toward where it would have been without the excess.”

–

Hm. Could he be saying “If the CO2 created excess heat at one stage, it would increase TOA losses, bringing the new path back toward where it would have been without the excess”? Surely not.

“GCM’s are a special kind of CFD, and both are applications of the numerical solution of differential equations (DEs). Propagation of error in DE’s is a central concern. It is usually described under the heading of instability, which is what happens when errors grow rapidly, usually due to a design fault in the program.

So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number. It doesn’t matter for DE solution why you think it is wrong; all that matters is what the iterative calculation then does with the difference. That is the propagation of error.”

–

“So first I should say what error means here.”

“It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number.”

–

This is not the definition of an error, here or in any place.

An error is a proven and provable mistake.

Not what you believe is the true answer.

You have no right to ask “why it is wrong”, as it is not wrong; it is a range of uncertainty about a mean.

–

In fact, in differential equations, when one includes what is called an error range, this is erroneously mislabeled. An uncertainty range is the correct term. All values in the uncertainty range have different probabilities of being right, not of being wrong. All of them, whether approximated by a random walk or a differential equation, are able to occur in that time frame.

The better term then is uncertainty or discrepancy defined as the difference between a number that arises in the calculation, and what you have calculated to be the true mean.

Propagation of error is apparently the term everyone wishes to use but we should all remember it is the propagation of uncertainty, not of belief.

Isn’t this the point missed by those who reject Pat Frank’s thesis? He’s talking about uncertainty – which can be calculated. He put in the effort to make a good calculation of that uncertainty. His conclusion? There’s so much uncertainty in the Climate Scare Industry’s models their outputs simply do not provide useful information.

I check on today’s weather and it states it will be 60 degrees F, so I walk toward my closet to get something to wear. But then I stop… I realize the weather prediction actually said “60 degrees F, plus or minus 27 degrees F”. I’m left standing there, with NO IDEA what to wear.

Which was (sort of) my dilemma in High School. Solution: cotton socks & underwear, jeans, short sleeve shirt. If Air Temp at 6 am < 42F, jacket. (We were in the desert, if it was 40 at 6am, it would be 80+ by 2pm and I had a 400' hill climb after class)

LOL. Layers are the answer! As the old saying goes, it’s easier to take off a layer than to knit one!

Alcohol and calculus don’t mix. Never drink and derive.

AAARGH!

Deriving Under the Influence should be a felony, and on persistence, Integrated.

Special case 1: y’=0

“This is the simplest differential equation you can have. It says no change; everything stays constant. Every error you make continues in the solution, but doesn’t grow or shrink. It is of interest, though, in that if you keep making errors, the result is a random walk.”

Everything does not stay constant, or it would be a single point. As a function it still has a time component; what you mean is that there is no change in y with time.

Consequently this equation does not have “errors”. It is not allowed to have perturbations or deviations by definition. You cannot even set it to the wrong amount as in your graph example in red.


If you now choose to add perturbations in and get a random walk you really only have the equation as an approximation of the mean, a totally different thing.

It also means you are wrong in your conclusion when you pointed out that

“Sharing a common solution does not mean that two equations share error propagation.”

Here you have just proved that Pat Frank’s use of a simplified equation, subjected to error analysis in place of the more complex GCMs and giving a random walk, is indeed equivalent to a DE with perturbations.

Well done!

Well deflected.

All I can do is to point out your inconsistencies.

The issue is not uncertainty range, but error propagation forced by hypotheses (“models”), that are constructed on incomplete characterization and insufficient resolution, which is why they have demonstrated no skill to hindcast, let alone forecast, and certainly not prediction, without perpetual tuning to reach a consensus with reality.

Some observations. If there were true error propagation of cloud forcing in running GCMs, they would never get results for climate sensitivity (CS), because cloud forcing error would make the models totally unstable. The conclusion is that cloud forcing is not a real variable in GCMs. Nobody in these conversations has established that GCMs can calculate cloud forcing in the past, today, or in the future. If they could, you would find it as an RF forcing factor in the SPM of AR5.

Quotation from Anthony Watts: ….”there’s the lack of knowing the climate sensitivity number for the last 40 years.” I fully agree if this means that we have no observational evidence. This statement could be understood also in a way that the CS should be able to explain the temperature variations during the last 40 years. Firstly, a CS number depends only on the CO2 concentration and there are a lot of other variables like other GH gases, which vary independently. Therefore a CS number can never explain temperature variations even though it would be 100 % correct.

The science of the IPCC has been constructed on anthropogenic factors like GH gases. The Sun’s role in AR5 was about 2 % in the warming since 1750. The contrarian researchers have other factors like the Sun which has a major role in climate change.

Many people think that GCMs approved by the IPCC have been forced to explain the temperature increase by anthropogenic factors and mainly by CO2 concentration increase. This worked well till 2000 and thereafter the error became so great that the IPCC did not show the model-calculated temperature for 2011. The observed temperature can be found in AR5 and it was 0.85 C. The total RF value of 2011 was 2.39 W/m2 and using the climate sensitivity parameter of the IPCC, the model-calculated temperature would be 0.5*2.34 = 1.17 C; an error of 38%.

Nick,

Your diagram of the Lorenz attractor shows only 2 loci of quasi-stability; how many does our climate have? And what evidence do you have to show this?

Well, that gets into tipping points and all that. And the answer is, nobody knows. Points of stability are not a big feature of the scenario runs performed to date.

Nick’s discussion goes off the rails very early on, right here: “

So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number. It doesn’t matter for DE solution why you think it is wrong; all that matters is what the iterative calculation then does with the difference. That is the propagation of error.”

At least three big mistakes here. First, Nick ignores measurement error and parameter uncertainty. That’s the difference between a physically true value and a measured or observed value.

Such errors go into calculations, where they put uncertainty into a calculated result. Uncertainty is not error.

It defines an interval where the true value may lay. But where the true value resides in the interval is generally unknown. That’s uncertainty.

Second, when one is projecting a future state, the error in the calculation is unknown. One can’t know what Nick’s DE is doing when there’s no way to know the error in the expectation value.

Third, propagation of error isn’t error. It’s not the checkable result of an iterated error. It is not a measure of correct minus incorrect.

It is a measure of uncertainty in the result that arises because factors used in the calculation are poorly constrained.

In Nick’s DE, uncertainty in result would arise from a poor value constraint of the differentiated factors. The uncertainty bounds in the values would have to be propagated through Nick’s DE, yielding an interval spread of results. That would be estimation of uncertainty, not Nick’s iteration of differences.

Actual propagation of error is root-sum-square of all the errors and uncertainties going into a calculation. It maps the growth of uncertainty following a series of calculations using poorly constrained values.

One suspects that Nick’s numerical methods definition of error propagation is at serious variance with the meaning and method of error propagation in the sciences.

The definition of error propagation within science would include the method to be used for GCMs, purporting them to fall under the purview of science and to be physical models.

Nick’s understanding of science is illustrated in his claim that measurement instruments have perfect accuracy and infinite precision. See here and comments following.

He apparently knows nothing of resolution (a concept applicable to both measurement instruments and to physical models), or of sources of physical error, or of error propagation as carried out in the sciences.

Nick started out his discussion with very fundamental mistakes. He cannot conclude correctly.

For example, towards the end he wrote, “

The best way to test error propagation, if computing resources are adequate, is by an ensemble method, where a range of perturbations are made.”

Except that the ensemble method does not employ the root-sum-square of error that is the unequivocal definition of error propagation in the sciences.

Except that the ensemble method is perfectly fine for testing parameter-calibrated engineering models (Nick’s day job).

Except that the ensemble method is useless for testing predictively oriented physical models.

Nick’s parameter-calibrated models are predictively useless outside their calibration bounds.

The ensemble method just shows run variability about an ensemble mean. Great for engineering. That method is all about model precision. It merely reveals how well alternative model runs resemble one another.

The ensemble method is not about accuracy or the predictive reliability of unknown states. It’s about precision, a metric completely opaque to predictive accuracy.

Predictive reliability is what error propagation is all about. Nick has totally missed the boat.

Nick wrote concerning the GCM emulation equation in my paper, “

The justification given was that it emulated GCM solutions (actually an average). Is this OK?”

The emulation equation was successful with individual GCM air temperature projections. Not just an average.

Nick knows that because he read the paper. Is it possible his misrepresentation here is a mistake?

Nick wrote, “

Given a solution f(t) of a GCM, you can actually emulate it perfectly with a huge variety of DEs.”

You can also emulate a GCM solution perfectly with an arbitrary polynomial. Or with a cubic spline. And that would tell you about as much as Nick’s example. Namely, nothing.

The emulation equation in the paper reproduced GCM air temperature projections as a linear extrapolation of the very same GHG forcings that the models themselves use to project air temperature.

It shows that GCM simulations of air temperature using GHG forcings are indistinguishable from linear extrapolations of those GHG forcings.

An emulation of GCMs in terms of their own operational forcing factors is not arbitrary.

The emulation equation has an air temperature response to GHG forcing that is virtually identical to the response of any GCM to that same GHG forcing. The emulation of GCM uncertainty follows from that identity.

Nick has yet again achieved complex nonsense.

“First, Nick ignores…”

I emphasise that I am talking about a numerical process, which is what GCMs are. Numbers are not tagged with their status as measurement error, parameter uncertainty or whatever. They are simply modified by the calculation process and returned. I am describing the modification process. And it depends critically on the differential equation. You can’t analyse without looking at it.

“the error in the calculation is unknown”

Hopefully, there are no significant errors in the calculation. The question is, what does the calculation do with uncertainties in the inputs, expressed as ranges of some kind.

“Actual propagation of error is root-sum-square of all the errors and uncertainties going into a calculation.”

Root sum square with uniform terms (as here) implies iid random variables. That is, independent, identically distributed. Independence is a big issue. In the PF mark ii paper, there was talk of autocorrelation etc in Eq 4, but it all faded by Eq 6. And necessarily so, because the datum was a single value, 4 W/m2 from Lauer. But identically distributed is the issue with DEs. Successive errors are not identically distributed. They are modified by the progress of the DE.
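The point that the DE rescales each error before it is summed can be made concrete with a toy linear equation. This sketch is mine, not from either paper, and the coefficients are arbitrary:

```python
import math

# Euler-step y' = a*y with a perturbation of size eps injected at each of
# n steps.  A perturbation entering at step i is multiplied by (1 + a*h)
# at every subsequent step, so its contribution at step n is
# eps*(1 + a*h)**(n - i): the terms summed in quadrature are NOT
# identically distributed unless a = 0.

def propagated_rss(a, h, eps, n):
    """RSS of per-step perturbations as the DE actually delivers them."""
    return math.sqrt(sum((eps * (1 + a*h)**(n - i))**2
                         for i in range(1, n + 1)))

def naive_rss(eps, n):
    """RSS assuming identically distributed errors (no DE involved)."""
    return eps * math.sqrt(n)

eps, h, n = 1.0, 0.1, 50
print(naive_rss(eps, n))                # ~7.07, whatever the DE does
print(propagated_rss(0.0, h, eps, n))   # a = 0: matches the naive RSS
print(propagated_rss(-0.5, h, eps, n))  # damped DE: much smaller
print(propagated_rss(0.1, h, eps, n))   # growing DE: larger
```

Only in the special case a = 0 (the random walk) does the naive root-sum-square agree with what the DE actually propagates.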

The formulae taken from the metrology documents are for stationary processes. DEs are not stationary.

“in his claim that measurement instruments have perfect accuracy and infinite precision”

I of course make no such claim. But it is irrelevant to the performance of a DE solver.

“Except that ensemble method does not employ the root-sum-square of error that is the unequivocal definition”

No, of course not, for the reasons above. It simply and directly answers the question – if you varied x, how much does the output change. Then you can quantify the effect of varying x because of measurement uncertainty or whatever. The cause won’t change the variation factor. The DE will.

“Or with a cubic spline. And that would tell you about as much as Nick’s example.”

No, it tells you about Pat’s emulation logic. There are many schemes that could produce the emulation. They will not have the same error propagation performance. In fact, my appendix showed that you can design a perfect emulation to give any error propagation that an arbitrary DE can achieve.

Nick, “

But identically distributed is the issue with DEs. Successive errors are not identically distributed. They are modified by the progress of the DE.”

The paper deals with propagated uncertainty, Nick, not error. You keep making that mistake, and it’s fatal to your case.

Nick, “

I of course make no such claim [that measurement instruments have perfect accuracy and infinite precision.]”

Yes, you did.

Nick, “

No, it tells you about Pat’s emulation logic. There are many schemes that could produce the emulation.”

Which is you admitting the polynomial and cubic spline examples tell us about your emulation logic. You’re stuck in empty numerology, Nick. You observably lack any capacity for physical reasoning.

The emulation equation in the paper invariably reproduces GCM air temperature projections using the same quantity inputs as the GCMs themselves. Evidently, that point is lost on you. A fatal vacuity, Nick.

“This yields the uncertainty in tropospheric thermal energy flux, i.e., ±(cloud-cover-unit) × [Wm–2/(cloud-cover-unit)] = ± Wm–2 year–1.”

This is funny

hey Pat what is a Watt?

Hey Steve, what’s dimensional analysis?

Pat, you might like to post a careful dimensional analysis of your equation 6, and report on the units of the result.

Nick:

Perhaps you should first apologize in big red type for screwing up and declaring, in an update to your Sunday, September 8, 2019 article, that Pat Frank had made an egregious error.

https://moyhu.blogspot.com/2019/09/another-round-of-pat-franks-propagation.html

At the top of Nick’s page in red:

See update below for a clear and important error.

Towards the end of Nick’s page in red:

Update – I thought I might just highlight this clear error resulting from the nuttiness of the /year attached to averaging. It’s from p 12 of the paper:

This is followed by an extract from Frank’s paper showing +/- 4 W/m2/year boldly underlined in red by Nick. This is the annual AVERAGE CMIP5 LWCF calibration uncertainty established from prior samples. It is used in calculations for uncertainty propagation. The focus is solely on equation 6.

Nick opines further nonsense and then concludes:

Still makes no sense; the error for a fixed 20 year period should be Wm-2.

I guess Nick has discovered his own egregious error, in that Equation 6 does not contain any W/m2 dimensions, /year or otherwise. Perhaps that’s why he is not making a song and dance about it here. But even if it did, each year would need to be multiplied by “1 year” to account for its weighting in the summation, and hence eliminate the so-called dimensional error.

So I guess Pat can expect a nice apology in big red type soon. 🙂

“I guess Nick has discovered his own egregious error in that Equation 6 does not contain any W/m2 dimensions /year or otherwise.”

Bizarre claim. Here is an image of the section of text. Just 4 lines above the eq, it says:

“The annual average CMIP5 LWCF calibration uncertainty, ±4 Wm-2 year-1, has the appropriate dimension to condition a projected air temperature emulated in annual time-steps.”

But whatever. All I’d like to see is a clear explanation of what are the dimensions of what goes into Eq 6, what comes out, and how it got there. I think it makes no sense.

Nick,

I admire your intellect but, heavens, read the whole paragraph and its reference to previous equations. It clearly states equation 1. Follow Frank’s paper: by equation 5 he has converted the W/m2/year to T for that year.

But as I said, even if you had been right there is a weighting for each “i” th uncertainty of “1 year” which multiplies the Ui per year and eliminates the “/year” dimension. The Ui is a Temperature so it does not have the units you claim.

“The Ui is a Temperature so it does not have the units you claim.”

Then why is it added in quadrature over timesteps? What are the units after that integration?

An interesting question is why is that statement about ±4 Wm-2 year-1 there at all, emphasising the appropriateness of the dimension, and yet in eq 5.2, it goes in as 4 Wm-2.

Please stop going around in circles. Even in your selected paragraph Frank leads that average into equation 5:

For the uncertainty analysis below, the emulated air temperature projections were calculated in annual time steps using equation 1, with the conditions of year 1900 as the reference state (see above). The annual average CMIP5 LWCF calibration uncertainty, ±4 Wm–2 year–1, has the appropriate dimension to condition a projected air temperature emulated in annual time-steps. Following from equations 5, the uncertainty in projected air temperature “T” after “n” projection steps is (Vasquez and Whiting, 2006),

±σ_T(n) = sqrt( Σ_{i=1}^{n} [±u_i(T)]² ) ……….(6)

Equation 6 shows that projection uncertainty must increase with every simulation step, as is expected from the impact of a systematic error in the deployed theory.

There is no need to even go to his paper as it is all there.

I can’t produce equations in proper format but it is clear.

Nonetheless your issue was with the dangling “/year” dimension. As I said, the weighting will fix that. Additionally, that ±4 W/m2/year is an average. Each year’s effect is not the average/year but the quantum for the year (i.e. its weighted component for the sum, which eliminates the “/year” dimension).

“As I said the weighting will fix that.”I’d like to see someone spell out how. I don’t think it does.

Nick, I put it to you that you are being deliberately obstinate. What you “think” now is quite different to your tone in lambasting Frank.

My weight gain as a boy averaged 2 kg/year over my last 5 years. Assuming I gain at the same per-annum rate, what is my weight gain over the next 4 years?

Year 1: 1 year x 2 kg/year = 2kg

Year 2: 1 year x 2 kg/year = 2kg

etc

Sum by the end of 4th year = 8kg

The “/year” has gone due to the weighting factor dimension.

Likewise in equation 6 there would be no “/year” component of the quantum to square. I’m sure you can do these sort of calcs in your sleep.

But if you wish to continue to go around in circles then so be it. You could have attempted to apply it to his formula except his formula does not have the units you claimed.

Nick, my post went to heaven so won’t be spending more time beyond this attempt.

You could try and apply it to equation 6 except it does not have the units you claim. But imagine that Ui is a quantum per year and just multiply by its weighting of 1 year just like the example above. It is self evident that the “year” dimension is eliminated.

At least we have made some progress from lambasting Pat Frank to “I’d like to see someone spell out how. I don’t think it does. ” Think some more as I don’t intend putting more time into this.

TonyM,

“The “/year” has gone due to the weighting factor dimension.”

Yes, and I think that is what was supposed to happen here, although the /year has no justification. But the problem is it is added in quadrature. The things being added are /year, but squared become /year^2. Then when you add over time they become /year. Then when you take the sqrt, they become /sqrt(year). Not gone at all, but become very strange.
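That unit bookkeeping can be tallied mechanically. As a sketch (mine, not from either paper), track just the exponent of the year unit through the quadrature sum:

```python
from fractions import Fraction

# Each quantity is represented only by the exponent e of its "year"
# unit, i.e. year**e, following the sequence of operations described
# in the comment above.

e = Fraction(-1)         # each term carries /year, i.e. year**-1
e = e * 2                # squaring the terms: year**-2
e = e + 1                # summing over years multiplies by a year: year**-1
e = e * Fraction(1, 2)   # taking the square root: year**(-1/2)
print(e)  # -1/2, i.e. the result carries units of 1/sqrt(year)
```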

Equation 6 gives the root-sum square of the step-wise uncertainty in temperature, Nick.

It’s sqrt[sum over(uncertainty in Temp)^2] = ±C.

The analysis time step is per year. The ±4 W/m^2 is an annual (per year) calibration average error.

This is the deep mystery that exercises Nick Stokes.

If you want to do a really careful dimensional analysis, Nick, you take into explicit account the temperature time step of 1 year. Note the subscript “i.” Eqn. 5.1, 5.2 yield (±C/year)*(1 year) = ±u_i = ±C.

“The ±4 W/m^2 is an annual (per year) calibration average error.”

So why does it keep changing its units? Those are the units going into Eq 5.2. But then, by the next eq 6 we have

“The annual average CMIP5 LWCF calibration uncertainty, ±4 Wm⁻² year⁻¹, has the appropriate dimension to condition a projected air temperature emulated in annual time-steps.”

And earlier on, it is

“the global LWCF calibration RMSE becomes ±Wm⁻² year⁻¹ model⁻¹”

It is just one quantity, given by Lauer as 4 Wm⁻².

“sqrt[sum over(uncertainty in Temp)^2]”

Summed over time in years. So why doesn’t it acquire a *year unit in the summation?

You’re clearly the expert on measurement error and parameter uncertainty.

Glad to see you’re still defending your work.

Agreed. As soon as you start using eigenvectors you have lost the original variables. It’s a very complicated way of ‘curve fitting’, using the eigenvectors to reduce statistical error. It’s useless for forecasting the original variables.

If I understand this correctly, Nick’s informative post is making a very simple logical point in criticism of the original paper.

He is basically arguing that the simple emulation of GCM models that Mr Frank has used in his paper does not behave in the same way, with regard to error propagation, as the originals. He gives reasons and a detailed analysis of why this is so, which I am not competent to evaluate.

But this is the logic of his argument, and it’s quite straightforward; and if he is correct (and it seems plausible) then it’s terminal to Mr Frank’s argument.

It is restricted in scope. It does not show that the models are valid or useful for policy purposes or accurately reflect climate futures. It just shows that one particular criticism of them is incorrect.

The thing that has always puzzled me about the models and the spaghetti graphs one sees all the time is a different and equally simple logical point. We have numerous different models. Some of them track evolving temperatures well, others badly.

Why does anyone think it’s legitimate to average them all to get a prediction going forward? Why are we not simply rejecting the non-performing ones and using only those with a track record of reasonable accuracy?

Surely in no other field, imagine safety testing or vaccine effectiveness, would we construct multiple models, and then average the results to get a policy prediction, when more than half of them have been shown by observation not to be fit for purpose.

Well, michel, if I have any inkling of the gist of the reality, the original models cannot ever account for information to such an extent that they have any predictive value. So, how does a person show this using the original models, when the original equations upon which they are based are unsolvable? It seems that you model the models, which, yes, might not be the original models, but, remember, the models are not the original climate either — they are simulations based on limited input.

In this respect, tools that have inherent limitations in representing reality might themselves be analyzed by tools that have limitations of their own.

The paper demonstrates that the models are linear air temperature projection machines, michel.

Nick’s post is a complicated diversion, is all. A smoke screen.

I doubt stability analysis is the same as propagation of uncertainty. A stable numerical solution to a differential equation still propagates uncertainty. They are related because stability is required, otherwise any further analysis is impossible.

I’ve used (and written) many simulation programs for technical applications (flight simulation) that involve solving differential equations. I’m familiar with propagation of uncertainty in this kind of program. It has a distinctive mathematical form. I’ll try to illustrate below.

Wikipedia shows propagation of uncertainty involves the Jacobian matrix (and its transpose) of the function under analysis, see:

https://en.wikipedia.org/wiki/Propagation_of_uncertainty#Non-linear_combinations

This shows the distinctive pattern: J*cov(x)*trn(J)

where J is the Jacobian, cov(x) is the covariance of x, trn() means transpose()

You can see how uncertainty propagation works in the prediction and update steps of a Kalman filter (linear case btw), see:

https://en.wikipedia.org/wiki/Kalman_filter#Predict

Sure enough we see the pattern J*var(x)*trn(J).
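The J*cov(x)*trn(J) pattern cited from those pages can be shown in a few lines of NumPy. The mapping and numbers here are illustrative only:

```python
import numpy as np

def propagate_cov(J, cov_x):
    """Linearised uncertainty propagation: the J*cov(x)*trn(J) pattern."""
    return J @ cov_x @ J.T

# Example mapping g(x1, x2) = (x1 + x2, x1*x2), linearised at x = (2, 3).
x = np.array([2.0, 3.0])
J = np.array([[1.0, 1.0],           # gradient of x1 + x2
              [x[1], x[0]]])        # gradient of x1*x2
cov_x = np.diag([0.1**2, 0.2**2])   # independent input uncertainties

cov_y = propagate_cov(J, cov_x)
print(cov_y)  # the outputs are now correlated (off-diagonal terms)
```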

Since we are dealing with differential equations y’=f(x) I expected to see the Jacobian of the derivative function f(x) and its transpose to emerge in this article. But I don’t see the Jacobian anywhere. What’s up? 😉

(Sorry if I messed up the links. I’m unfamiliar with this forum system.)

“I doubt stability analysis is the same as propagation of uncertainty.”

No, it’s in effect a subset. If there is a component that propagates with exponential increase, corresponding to a positive eigenvalue of A, then it is unstable. As you say, that is the first thing you have to establish about propagation.

“involves the Jacobian matrix (and its transpose) of the function under analysis”

That is for a mapping with a prescribed function. Here we have a function indirectly prescribed by a differential equation. The equivalent of the Jacobian is the matrix W(t) = exp(∫ A(u) du ) that I defined.

“But I don’t see the Jacobian anywhere.”

If the DE is non-linear, y’=g(y,t), then A is the Jacobian of g.

I left the equation mostly at the deterministic stage, but gave the mapping of an added term f as W(t) ∫ W⁻¹(u) f(u) du. If f(u) is a random variable, then the integral is stochastic, and should be evaluated as sqrt( ∫ w*u*cov(f)*u*w du) with appropriate transposes, covariance including autocovariance, w standing for W⁻¹. Same pattern but you have to build in the integration. That is the generalisation of random-walk style integration in quadrature for general DE’s. W(t) is also the fundamental solution matrix, which you can take to be the set of solutions with initial conditions the identity I.

“Since we are dealing with differential equations y’=f(x)”

Did you mean y’=f(x)*y? That is what I had, with A for f.

I’m just a simple engineer. I’m used to the recurrence equations used to find the state evolution of systems described by X’ = f(t, X) with X as system state vector:

X_n+1 = X_n + h*A(t, X_n)

Where h is the time step and A(t, X_n) is an approximation of the slope between X_n and X_n+1. Usually A(t, X_n) is a Runge-Kutta scheme that evaluates f(t, X) at intermediate points. If one uses the Euler integration scheme then A(t, X_n) is equal to f(t, X_n).

Analysing the propagation of uncertainty by this recurrence equation produces another recurrence equation that describes the evolution of uncertainty in the system state (same as Kalman filter does):

cov(X_n+1) = J*cov(X_n)*trn(J)

With Jacobian J = d(X_n+1)/d(X_n) = I + h*d(A(t, X_n))/d(X_n)

For complex systems, finding the Jacobian of the function f(t, X) can be difficult. But in principle the propagation of uncertainty is straightforward and can be combined with the evolution of the system state itself. If the uncertainty exceeds acceptable limits: stop the simulation.
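The two recurrences can be run side by side in a minimal sketch, using the Euler scheme (so A(t, X_n) = f(t, X_n)) on an invented damped linear system; the matrix and numbers are my own illustration:

```python
import numpy as np

# State recurrence  X_{n+1} = X_n + h*f(X_n)  together with the
# covariance recurrence  cov(X_{n+1}) = J cov(X_n) J^T,
# with J = I + h*df/dX.  For the linear system X' = A_sys X the
# Jacobian of f is just A_sys, so J is constant.

A_sys = np.array([[0.0, 1.0],
                  [-1.0, -0.3]])    # a lightly damped oscillator

def step(X, cov, h):
    J = np.eye(2) + h * A_sys       # Jacobian of the Euler update
    return X + h * (A_sys @ X), J @ cov @ J.T

X = np.array([1.0, 0.0])
cov = np.diag([0.01, 0.01])         # initial state uncertainty
h = 0.01
for _ in range(1000):
    X, cov = step(X, cov, h)

# For a damped system the propagated uncertainty shrinks rather than
# growing without bound.
print(np.trace(cov))
```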

“Did you mean y’=f(x)*y? That is what I had, with A for f.”

Yeah, I messed up. I meant to write y’=f(y, t).

“For complex systems, finding the Jacobian of the function f(t, X) can be difficult. But in principle the propagation of uncertainty is straightforward and can be combined with the evolution of the system state itself. If the uncertainty exceeds acceptable limits: stop the simulation.”

Yes, I agree with all that. And your Jacobian approach will tell you whether in one step solutions are diverging or converging. Again it comes down to whether your dA/dX has a positive eigenvalue.

I’m tending to look at multi-steps where you say you have a basis set of independent solutions W, and say that any solution is a linear combination of that basis. You could get that from your Runge-Kutta recurrence. Another way of seeing it is as if the multistep is the product matrix of your single steps I+h*dA/dX.

Thanks Nick, an interesting overview of just one flawed feature of climate models.

In my earlier days, I was involved in a project to computerise the fluid dynamics of molten aluminium as it solidified in a rapid chill casting process. This was an attempt to predict where trapped air would congregate and create areas of potential failure under extreme stress.

The variables are not so great in this set up, as they are in global climate modelling. Finite element analysis was deployed and some of the best mathematical minds were engaged to help write the code and verify the model’s potential.

I won’t go into detail, but it’s safe to say my confidence in academia and computer modelling was crystallised during that exercise, if only the castings had experienced such predictable crystallisation….

The difficulties with trying to capture all the variables that impact a chaotic system are where the challenge actually is. The known flaws in the computer algorithms, and even in the maths deployed in the code, are not where the challenge is. Missing any variable that impacts the model renders the model useless.

The ability of climate models to predict the future is zero.

The evidence of this is there for all to see. The models are all running hot when compared to real observation. That is telling us something.

It is telling us the models are missing a feedback or are based on a flawed hypothesis, possibly both!

When weather predictions can only be confident/meaningfully accurate to three days out, and as weather patterns most definitely play a part in our experience of climate, who out there, is going to bet on the same weather/climate people getting it right 100 years out?

“When observation disagrees with the hypothesis, the hypothesis is wrong” Feynman

https://www.presentationzen.com/presentationzen/2014/04/richard-feynman-on-the-scientific-method-in-1-minute.html

“In my earlier days, I was involved in a project to computerise the fluid dynamics of molten aluminium as it solidified in a rapid chill casting process. This was an attempt to predict where trapped air would congregate and create areas of potential failure under extreme stress.”

Well, well. My group did high pressure die casting with aluminium, but using smoothed particle hydrodynamics. It worked pretty well. The problem with FEM is the fast moving boundary; hard to do with a mesh. GCMs don’t have anything like that.

“The models are all running hot when compared to real observation.”

But they are all running, and they produce a huge amount more information than just surface temperature. And they aren’t describing weather, which covers much of the comparison time. It is quite possible that the Earth has been running cool, and will catch up.

Stokes

You said, “It is quite possible that the Earth has been running cool, and will catch up.” Almost anything is possible! What is the probability? On what would you base the estimation of such a probability? Your remark is not unlike all the scare stories based on words such as “may, could, conceivably, etc.”

+1

“So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number. It doesn’t matter for DE solution why you think it is wrong; all that matters is what the iterative calculation then does with the difference. That is the propagation of error.”

So what if, instead of knowing an error, you know only the range or confidence interval of an important parameter? How do you propagate the range or confidence interval of model values reasonably concordant with the range or CI of the parameter? That is the problem addressed by Pat Frank that you have never addressed yet.

You wrote of the “scale” problem of using a meter stick that was 0.96 m in length. What if all you know is that the stick is between 0.94 and 0.98 m? The distance measured to be equal to 1 stick length is between 0.94 and 0.98 m; two lengths would be between 1.88 and 1.96; …; N lengths would be between N*0.94 and N*0.98, and the uncertainty would be N*0.04. That’s for absolute limits. With confidence intervals instead, the propagation of the uncertainty is more complex.
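That accumulation can be checked in a couple of lines; a trivial sketch of my own, using the interval bounds from the comment:

```python
# The stick's true length is only known to lie in [0.94, 0.98] m, so a
# distance of N stick-lengths lies in [N*0.94, N*0.98], and the width of
# the uncertainty interval grows linearly as N*0.04.

def interval(n_lengths, lo=0.94, hi=0.98):
    return n_lengths * lo, n_lengths * hi

for n in (1, 2, 10):
    lo, hi = interval(n)
    print(n, round(lo, 2), round(hi, 2), round(hi - lo, 2))
```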

Given the CI of the cloud feedback parameter addressed by Pat Frank, what is your best calculation of the uncertainty of the GCM forecasts? Less than his calculated value? More than his calculated value?

As I wrote, you have not yet come to grips with the difference between propagating an error and propagating an interval or range of uncertainty.

It would be good of you, in the spirit of scientific disputation, to submit your article for publication.

“How do you propagate the range or confidence interval of model values reasonably concordant with the range or CI of the parameter?”

In the same way as for point pairs or groups. A DE determines the stretching of the solution space; the range or CI’s stretch in accordance with the separation of two points, or however many are needed to provide an effective basis to the dimension of the space.

“That’s for absolute limits.”

No, it’s just for scaling. The ruler doesn’t change between measurings. You may not know what the number is, but no stats can help you here. If you think it is wrong relative to the standard metre, you have to consult the standard metre. Calibrate.

“Given the CI of the cloud feedback parameter addressed by Pat Frank, what is your best calculation of the uncertainty of the GCM forecasts?”

Can’t say – we have just one number 4 W/m2. There is no basis for attaching a scale for how if at all it might change in time. There is also the issue raised by Roy, which I would put a little differently. If it is just a bias, an uncertainty about a fixed offset, then in terms of temperature that would be taken out by the anomaly calculation. It is already well known that GCM’s have considerable uncertainty about the absolute temperature, but make good predictions about anomaly, which is what we really want to know. If it does have a fairly high frequency variation, that will just appear as weather, which for climate purposes is filtered out. The only way it might impact on climate is if it has secular variation on a climate time scale.

“It would be good of you, in the spirit of scientific disputation, to submit your article for publication.”

I doubt if it would qualify for originality. A lot of it is what I learnt during my PhD, and sad to say, is therefore not new.

Nick Stokes:

“You may not know what the number is, but no stats can help you here. If you think it is wrong relative to the standard metre, you have to consult the standard metre. Calibrate.”

Boy are you ever dedicated to missing the point. When you recalibrate you are given the range (or CI) of likely true lengths, not the true length. Do you believe that in actual practice you know either the true length or any specific error? The error does not have to change randomly from placement to placement of the “meter” stick in order for the uncertainty induced by the calibration uncertainty to accumulate.

“Can’t say – we have just one number 4 W/m2.”

That is not true. Besides that best estimate, you have a CI of values that probably contains the true value within it (but might not). From that, what is reasonably knowable is the range of GCM outputs reasonably compatible with the CI of reasonably likely parameter values.

The “point” you and Roy emphasize is exactly wrong. What we have here is that the parameter value is not known, only a reasonable estimate and a reasonable estimate of its range of likely values; therefore what the output of the model would be with the correct parameter value is not known. What is a reasonable range of expectations of model output given the reasonable estimates of the parameter value and its likely range? A GCM run gives you one output that follows from the best estimate; what is a reasonable expectation of the possible range of model outputs compatible with the range of possible parameter values? That is what Pat Frank focused on calculating, and what you and Roy are systematically avoiding.

“I doubt if it would qualify for originality.”

I meant as a point-by-point critique of Pat Frank’s article, not an introduction to what you have learned. It really isn’t such a point-by-point refutation.

Mr. Stokes –> You said “Given the CI of the cloud feedback parameter addressed by Pat Frank, what is your best calculation of the uncertainty of the GCM forecasts?” – “Can’t say”.

Therein lies the problem. You can’t say what it is, but you insist that Dr. Frank can’t either.

You first need to answer for yourself and others, whether there is uncertainty or there is not. If you agree there is, then you need to tell the world what your values are and provide numbers and calculations and how you derived them. If you continue to claim there is no uncertainty, then we have an immediate answer.

Your math is above my pay grade to resolve. It’s been too long since I delved into this. However, as an old electrical engineer, I can tell you that all the fancy equations and models of even simple circuits never survived the real world. Kind of like they say: a battle plan never survives contact with the enemy. What is the moral of this? There is always uncertainty. Those of us who have done things, built things, and answered the boss’s question about how certain we are about our designs know this first hand.

KISS: keep it simple, stupid! Dr. Frank has done this. All your protestations aside, you have not directly disproved either his hypothesis or his conclusions. You’re basically trying to say that Dr. Frank is wrong because your mathematics is correct, which is basically saying that you believe there is no uncertainty.

Until you man up and can derive your estimation of the amount of uncertainty in GCM projections you are not in the game. Dr. Frank has put his projections out for the whole world to see, let us see what you come up with.

“You can’t say what it is, but you insist that Dr. Frank can’t either.”

That’s a common situation. It in no way follows that if I can’t get an estimate, any other guess must be right.

In fact, what I say is that the way to find out is to do an ensemble and look at the spread of outcomes. I don’t have the facilities to do that, but others do.

Nick Stokes:

“In fact, what I say is that the way to find out is to do an ensemble and look at the spread of outcomes.”

I advocated that, as have many others, with the proviso that the “ensemble” runs include (random) samples from the CI of the parameter values (in a response to Roy I mentioned “bootstrapping”).
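That proposal can be sketched with a toy stand-in for a model run; `toy_model` and the numbers here are hypothetical, purely to show the mechanics of sampling a parameter's distribution and looking at the spread of outputs:

```python
import random
import statistics

# Hypothetical stand-in for a GCM run: output depends on an uncertain
# feedback parameter p (this is NOT real climate physics).
def toy_model(p, years=80):
    return 0.02 * years * (1.0 + p)

# Sample p around an assumed best estimate 0.1 with standard deviation 1.0,
# run the "model" for each sample, and summarise the spread of outcomes.
random.seed(0)
runs = [toy_model(random.gauss(0.1, 1.0)) for _ in range(10_000)]
mean = statistics.mean(runs)
spread = statistics.stdev(runs)
print(round(mean, 2), round(spread, 2))
```

The spread of `runs` is the kind of ensemble-derived uncertainty being proposed; a real exercise would do this with actual GCM runs, not a one-line surrogate.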

Until then, an improvement on Pat Frank’s analysis is unlikely to appear any time soon. By addressing only one of many uncertain parameters, he has likely produced an underestimate of the uncertainty of GCM model output.

“That’s a common situation. It in no way follows that if I can’t get an estimate, any other guess must be right.”

ROFL! No, the issue is that you can’t say that other estimates are wrong! You are trying to say that since you don’t know then no one else can know either!

“In fact, what I say is that the way to find out is to do an ensemble and look at the spread of outcomes. I don’t have the facilities to do that, but others do.”

Ensembles simply can’t tell you about uncertainty. All you wind up with is a collection of different output values. You still don’t know what the uncertainty is for each of those different output values.

Nick deserves this post as head post for a few days in keeping with the status afforded to Pat.

Irrespective of the time Pat Frank’s post was pegged, I would say this is one of the most informative posts in a while here and deserves visibility above the typical low-level opinion posts and “guest rants” of which we see several per day. Posts of this quality do not come along often.

As the author said just above, the lead has no originality.

So it actually brings nothing new to the mad climate tea party, except more cups.

He said it was nothing ground-breaking that would merit a paper. That does not mean it is not a valuable contribution to the ongoing discussion of error propagation started by the prominent coverage given to Pat Frank’s “discovery”.

If you want any more plastic throw-away cups, why do you still read WUWT?

To quote above : “I doubt if it would qualify for originality. A lot of it is what I learnt during my PhD, and sad to say, is therefore not new.”

Now Pat Frank’s thorough and devastating uncertainty analysis is only new for climate actors.

Say for the sake of argument that the parameter estimate was 0.10, and the 95% CI was (-1.9, 2.1). One way to propagate the CI would be to calculate the GCM model output by running it repeatedly with this parameter sequentially assigned values -1.9, -1.8, …, -0.1, …, 2.1, keeping all other parameter estimates at their best estimated values. If model output were monotonic in this parameter, the CI on outputs would be the set of lower and higher outputs from this run; without the monotonicity, the calculation of a CI would be more complicated.
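A sketch of that sweep, with a hypothetical `toy_model` standing in for the GCM (the model and its coefficients are invented for illustration):

```python
# Step the uncertain parameter through its 95% CI in increments of 0.1,
# holding everything else fixed, and record the envelope of outputs.
def toy_model(p):
    return 1.5 + 0.8 * p   # monotonic in p, so the envelope is the endpoints

values = [round(-1.9 + 0.1 * i, 1) for i in range(41)]   # -1.9, -1.8, ..., 2.1
outputs = [toy_model(p) for p in values]
ci_low, ci_high = min(outputs), max(outputs)
print(round(ci_low, 2), round(ci_high, 2))   # envelope of model outputs
```

With a non-monotonic model the min/max over the sweep still brackets the outputs, but relating that envelope to a proper 95% CI takes more care, as the comment notes.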

Once again, the point is that propagation of uncertainty requires propagation of an interval of uncertainty. Nick Stokes has illustrated the propagation of an error.

“Besides that best estimate, you have a CI of values that probably contains the true value within it (but might not).”

I think it would be useful if someone would actually write down, with references, what we do know about this datum. I don’t know where you get that CI from, or what the number might be; AFAIK, Lauer just gives the value 4 W/m2 as the 20 year annual average rmse. No further CIs, no spectral information. Pat insists that it can be compounded in quadrature – ie variance/year, but the factual basis is very weak, and I think wrong. It all hangs on Lauer including the word annual, but it seems to me that this is just saying that it is a non-seasonal average, just as you might say annual average temperature. Anyway, people are trying to build so much on it that the bare facts should be clearly stated.
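The two readings in dispute give very different growth over time, which a few lines make concrete (this is only the arithmetic of the two interpretations, not a claim about which one is right):

```python
import math

sigma = 4.0               # W/m^2, the annual rmse figure in question
years = range(1, 101)

# Reading 1 (per Pat Frank): independent per-year errors compounded in
# quadrature, so the uncertainty grows like sqrt(N).
quadrature = [sigma * math.sqrt(n) for n in years]

# Reading 2 (a fixed calibration bias): the band never grows, and an
# anomaly calculation would remove it entirely.
bias = [sigma for _ in years]

print(quadrature[-1], bias[-1])   # 40.0 vs 4.0 after a century
```

Which column applies hangs entirely on the statistical character of the 4 W/m2 figure, which is exactly the factual question the comment says should be written down with references.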

Nick Stokes:

“I think it would be useful if someone would actually write down, with references, what we do know about this datum.”

That’s potentially a good idea, but would you care? Your argument has been that propagating the uncertainty in the parameter value is something that you have never studied (the implication being that it is intrinsically worthless); besides that, you have never responded when your other questions were answered, such as: what use is made of the correlation of the errors at successive time intervals?

Wherever any GCM modeler got any parameter estimates, those estimates were reported with “probable errors”, and the implications of the probable errors have been ignored until now. You advocate continuing to ignore them.

Incidentally, I am reading a book titled “Statistical Orbit Determination”. Surprisingly, some of the parameter estimates (called “constants”) are reported to 12+ significant figures of accuracy. Nothing in climate science can claim to be that well known. N.B. “accuracy” is the intended word, since accuracy of the orbit of a satellite is of great importance. My reading of the GCM modelers is that they treat their parameter estimates as though such accuracy has been attained for them.

So, perhaps someone has already tabulated the parameter estimates, their sources, and their confidence intervals. Surely the GCM modelers have done this?

Firstly, many thanks to Nick Stokes for a very informed and enlightening introduction to the subject in a clear and accessible way.

There is an unjustified jump here. We know that gas laws work because of extensive observational verification. We have zero verification for GCM output, thus the claimed “so GCMs give information about” does not follow. It is not sufficient that both are working in similar ways to infer that GCMs “give information” which is as sure and tested as the gas laws. This should read more like “GCMs have the potential, after observational validation, of giving information about climate”. At the moment they give us what they have been tuned to give us.

It should be clarified that this means ensembles of individual runs of the same model. Not the IPCC’s “ensembles” of random garbage from a pot-pourri of unvalidated models from all comers. Just taking the average of an unqualified mess does not get us nearer to a scientific result.


“We know that gas laws work because of extensive observational verification.”

Well, yes, Boyle and Charles got in first. But Maxwell and Boltzmann could have told us if B&C hadn’t.

“At the moment they give us what they have been tuned to give us.”

That’s actually not true, and one piece of evidence is Hansen’s predictions. There was virtually no tuning then; Lebedev’s log of the runs is around somewhere, and there were very few extensive runs before publication. 1980’s computers wouldn’t do it.

“It should be clarified that this means ensembles of individual runs of the same model”

That is certainly the simplest case, and they are the ones I was referring to. However I wouldn’t discount the Grand ensembles that CMIP put together quite carefully.

Nick,

Your comment on Hansen’s predictions is a logical fallacy. Back in 1976, Lambeck and Cazenave looked at the strong quasi-60 year cycle in about 15 separate climate indices, which they related to (lagged) changes in LOD. This was at the time when the consensus fear was still of continued global cooling. They wrote:-

“…but if the hypothesis is accepted then the continuing deceleration of m for the last 10 yr suggests that the present period of decreasing average global temperature will continue for at least another 5-10 yr. Perhaps a slight comfort in this gloomy trend is that in 1972 the LOD showed a sharp positive acceleration that has persisted until the present, although it is impossible to say if this trend will continue as it did at the turn of the century or whether it is only a small perturbation in the more general decelerating trend.”

If Hansen’s predictions are “evidence”, then by the same token we can conclude from Lambeck’s prediction that the post-1979 upturn in temperature is natural variation following a change in LOD.

“Your comment on Hansen’s predictions is a logical fallacy”

No, I can’t see the logic of your version. I said that Hansen’s successful predictions were evidence of the lack of need for tuning in a GCM, because he didn’t do any. Lambeck and Cazenave didn’t have a GCM at all. They made an unsuccessful prediction based on over-reliance on a supposed periodicity.

Nick,

I think you meant to say that they made a successful prediction. They did successfully predict the upturn in temperature. The “gloomy trend” was the continuing cooling of average temperature.

Hi Nick,

you have avoided my main point: that you try to infer GCMs tell us something by comparing them to gas laws, yet this is a non sequitur fallacy. We can rely on gas laws because they have been validated by observation. This is NOT true of GCMs. ALL climate models are tuned and always have been, because a lot of processes are not modelled at all and are represented by ill-constrained parameters. Hindcasts do not work first time because we have such a thorough and accurate model of the climate that it can be modelled from “basic physics”, as some pretend in order to con the unwashed; they work because parameters are juggled until the hindcast of the post-1960 record fits as well as possible.

It is true that at least Hansen’s group have thrown out physics-based modelling of volcanic forcing for arbitrarily adjusted scaling more recently, so this is a situation which is getting worse, not better.

Furthermore, Hansen more recently introduced the concept of “effective forcing”, where 1 W/m^2 of one forcing is not necessarily the same as 1 W/m^2 of another: each gains an “effective” scaling.

Whether this is legitimate physically or not, it has introduced a whole raft of arbitrary unconstrained parameters which add more degrees of freedom to the tuning process. Von Neumann’s elephant no longer just wiggles its tail; it is now able to dance around the room.

The problem with an ensemble of dancing elephants is that the stage they are pounding just happens to be our physical economy, and fissures are already appearing!

One very specific case in point is the scaling of AOD to W/m^2.

Lacis et al did this by basic physics. The result under-estimated the effect on TLS ( and one may suggest this indicates it under-estimated the effect on lower tropo climate ).

Ten years later they dropped any attempt at calculating the forcing directly and just introduced a fiddle factor, making an arbitrary scaling to “reconcile” their model’s output with the late 20th c. surface record. This resulted in an effect twice as large as observed on TLS (and presumably twice as strong at the surface).

If you make the model change twice as much as observed, you have doubled the climate sensitivity to this forcing. In order to get your model to match the lower climate record you will need a counter-acting forcing (e.g. CO2) twice as strong to balance it. You then hope no one spends much time worrying about your failure to match TLS and pretend your model has “skill”.

Since there have been no major eruptions since Mt. P, they are left with a naked CO2 forcing which is twice what it should be, with no counter-balance. Hence the warming is about twice what is observed.

This is not the only problem, the whole tuning process is rigged. But I suspect this is one of the major causes of models’ exaggerated warming since 1995.

Greg,

“We can rely on gas laws because they have been validated by observation. This NOT true of GCMs.”

Two aspects to that. One is that a great deal of GCM behaviour has been validated by observation. That is the part used in numerical weather forecasting. Some still scoff at that, forgetting how vague forecasts used to be (when did you ever get a rainfall forecast in mm?). But even without that, the fact is that they do render a reasonable, tested, emulation of the whole globe weather system over that time period.

The other is that they provide an immense amount of detail which corresponds with observation. I often post this u-tube

https://www.youtube.com/watch?v=KoiChXtYxOY

Remember, GCMs are just solving flow equations, in this case ocean, subject to energy inputs and bottom topography. And they get not only the main ocean current systems right, but also the mechanics of ENSO.

I think that GCM are based on observations, not validated by them.

“Some still scoff at that, forgetting how vague forecasts used to be “

Yes, 10% chance of rain at 5.00 o’clock is so informative.

Remind me again how many millimeters that is Nick?

And will it actually rain at 5.00?

Incredible accuracy.

No “error” in that figure.

+1 Greg Goodman.

What Nick Stokes is arguing is frequently not what he appears to want you to think he is arguing.

Climate models tell you nothing about what the climate will be like in 2100

They can’t tell you anything, because the uncertainty by 2100 is enormous.

So whatever actions are taken to mitigate alleged CO2-driven climate change cannot be measured for effectiveness.

What do we know? We know the models have not been reliable to date, and the general discussion around this topic largely does not address the huge number of problems with the models, let alone the tuning.

There is little value at all in these models, as observations have shown.

It amounts to crystal-ball gazing dressed up as some sort of “scientific exercise”; the uncertainty in the real world is so huge that any action based on this nonsense which harms people now, for a completely unknown future (and unknown results of mitigation), is ludicrous.

Climate model output is directly responsible for developing nations and 3rd-world countries not getting investment and World Bank loans for energy sources that would dramatically improve their lives.

Maybe send climate modelers to live in the same conditions so they get some perspective on how damaging this crystal ball gazing is to real people.

Lastly, Mann’s latest twitter meltdown is hilarious, Heller merely pointed out Mann was starting his doom graph from the coolest point in the data to claim doom, and Mann had a complete meltdown, talking about Russians!! 😀

I guess Mann’s problem with the Russians is that they abandoned the communism he and his colleagues are trying to impose on the world by rigging climate science and hijacking the peer review process.

Hansen 2002 states that climate models can be tuned to produce whatever climate sensitivity is desired. There are so many poorly constrained variables that you can easily (intentionally or otherwise) get the ‘right answer for the wrong reasons’ by tweaking parameters to approximately reproduce your calibration period. This does not mean you are correctly calibrated or that even the shortest extrapolation will be informative. Anyone clamouring to redesign the world economy based on non-validated models which are already significantly wrong is NOT basing his ideas on climate models or science. That is simply a pretense.

“Anyone clamouring to redesign the world economy based on non validated models which are already significantly wrong, is NOT basing his ideas on climate models or science. That is simply a pretense.”

Still makes no sense – the ensemble of models validates the other dancers, right? Mann’s well known hockey stick is in fact the conductor’s baton.

From this self-validating ensemble, or Troupe, dances out the one true real climate?

Shades of von Hayek’s spontaneous unknowable economics, the Fable of the Bees.

“varylarge eigenspace of modes”

Simple typo or something deep and profound that is beyond my pay grade?

As for the rest. It requires days/months/years of thinking. My gut feeling: ensembles are, as Pat Frank says, great for testing. And I can believe they are good for handling slightly divergent modelling. However, I’m inclined to doubt they have the magic property of extracting truth from a mélange of faulty analyses.

“Sharing a common solution does not mean that two equations share error propagation.” Correct. But neither does it mean that the equations don’t have similar/identical error propagation. If the two equations always have the same solution (as Pat would maintain?), isn’t it quite possible that they DO have similar error propagation?

“isn’t it quite possible that they DO have similar error propagation?”

Not really, because the homogeneous part is just that very simplest of equations y’=0. That has only one error propagation mechanism, the random walk cited. GCM’s have millions, and they are subject to conservation requirements. That’s one reason why random walk should always be regarded with great suspicion in physical processes. They are free to breach conservation laws, and do.

A randomly varying radiative ‘forcing’ would simplistically lead to a random walk in temperature but the larger excursions would be constrained by neg. feedbacks. It would still look and behave pretty much like a random walk.

Part of the neg. f/b may be losing more energy to space, so be careful with the boundaries of your conservation laws; Earth is not a closed system.
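A one-dimensional sketch of that point: the same random "forcing" fed into a pure random walk and into a walk with a weak restoring (negative) feedback. The numbers are arbitrary; the illustration is only that the feedback bounds the excursions while the pure walk wanders without limit:

```python
import random

random.seed(1)
lam = 0.02                 # strength of the restoring feedback (arbitrary)
walk, damped = [0.0], [0.0]
for _ in range(10_000):
    eps = random.gauss(0.0, 1.0)                  # the same "forcing" drives both
    walk.append(walk[-1] + eps)                   # pure random walk: variance grows with n
    damped.append((1 - lam) * damped[-1] + eps)   # AR(1): excursions stay bounded

# The damped series hovers within a few multiples of its stationary
# standard deviation (about 5 here); the pure walk drifts far beyond that.
print(round(max(abs(x) for x in damped), 1), round(max(abs(x) for x in walk), 1))
```

This is why an unconstrained random walk is a suspect model for a system with feedbacks, while still looking walk-like over short stretches.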

Thread message, IMO, no matter how much Stokes twists and turns? Models are rubbish, not fit for purpose (To change the world as we know it that is) no matter how hard you “polish” them. Even a turd can be polished, but it is still a turd!

I’ve always heard that you can’t polish a turd. Maybe you have more experience than I do. I guess there’s some Teflon polymer spray-on finish available in aerosol format from Walmarts now.

Myth Busters tried it: busted – you can polish excrement. Why anyone would is another question.

“So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number.”

So it’s the difference between the cargo cultist pseudo scientific model result and the cargo cultist religious blind belief.

LMFAO – That’s IT!!

How is the postulated Soden-Held water vapor feedback mechanism implemented in the GCM’s? And is the implementation handled differently among different GCM’s?

While interesting, does it really matter whether the math is right or not, since the GCM’s are not modelling this planet’s climate? Don’t they admit to not including all known forcings because they don’t know how much effect each has, while they include CO2 as if they know its effect?

The GCMs results are not useful because they aren’t “of this world”.

Stokes ==> The REAL problem with GCMs is Lorenz, as clearly demonstrated in the paper:

“Forced and Internal Components of Winter Air Temperature Trends over North America during the past 50 Years: Mechanisms and Implications*”

“[T]he scientists modified the model’s starting conditions ever so slightly by adjusting the global atmospheric temperature by less than one-trillionth of one degree”.

Run for 50 years, they got 30 entirely different projections for North American winter.

see the image here: https://curryja.files.wordpress.com/2016/10/slide11.png

Read my essay about it here: Lorentz Validated at Judith Curry’s.

GCMs are extremely sensitive to “initial conditions” as demonstrated by NCAR’s Large Ensemble experiments. They are also sensitive to “processing order”.

GCMs can not and do not offer reliable projections of future climate states.
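The sensitivity described above is easy to reproduce with Lorenz's own 1963 system; this sketch (a crude forward-Euler integration, parameters from Lorenz 1963, not anyone's GCM) perturbs one coordinate by one part in 10^12 and watches the two runs decorrelate:

```python
# Lorenz 1963 system, integrated with forward Euler (crude but adequate
# for illustrating sensitive dependence on initial conditions).
def step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

p = (1.0, 1.0, 20.0)
q = (1.0 + 1e-12, 1.0, 20.0)     # perturbed by one part in 10^12
for _ in range(40_000):          # 40 model time units
    p, q = step(p), step(q)

separation = sum(abs(a - b) for a, b in zip(p, q))
print(separation)                # grows to the size of the attractor itself
```

The separation saturates at the scale of the attractor no matter how small the initial perturbation, which is the NCAR result in miniature.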

Kip,

“Lorentz Validated”

Another title might be “Ensembles validated”. Your case illustrates first the disconnect between initial conditions and later path. And it shows the variability that can result. In the case of N American winter temperatures, that is a lot. It is well acknowledged that GCMs do not reliably predict regional temperatures. The reason is that there are more degrees of freedom to vary and still comply with overall conservation laws.

Exactly, there are too many degrees of freedom to solve the set of equations. It is ill-conditioned.

This means that there are any number of solutions which will roughly fit the hindcast but we have no idea whether this is because we have the correct weighting ( parameterisation ) of various forcings or just a convenient tweaking to get the result.

As I pointed out above , if you double AOD ( volcanic ) sensitivity and double CO2 sensitivity you will get about the right hindcast while both are present. When one is no longer there ( post 1995 ) your model will run hot.

This is exactly what we see.

The problem is far more messy than just two uncertain parameters so there may be another similar reason models run hot. The current state of the GCM art is useless for even modelling known, recent climate variation.

“This means that there are any number of solutions which will roughly fit the hindcast”

You’d think this would be obvious after the 100th or so model that could hindcast yet had different results than all the others.

Greg,

“This means that there are any number of solutions which will roughly fit the hindcast”

It means it has a nullspace. That is where, for example, the indifference to initial conditions fits in. It means that you can’t pin down all the variables.

But often with ill-conditioned equations, you can determine a subset. They are rank-deficient, but not rank zero. And that is what CFD and GCM programs do. They look at variables that are based on conservation, and are properly constrained by the equations. Energy balance does not tightly constrain winters in N America. But it does constrain global temperature.

Stokes ==> Well, we disagree about that — you correctly state their CLAIM but that is not what the study actually shows. It shows that if one makes the tiniest, itty-bitty changes to initial conditions (starting point), the resulting projections are entirely different. This illustrates what Lorenz discovered in his very first toy models: that because of the inherent non-linearity in the mathematics of these models, they will be extremely sensitive to initial conditions, and therefore long-term prediction/projection of climate states IS NOT POSSIBLE.

The NCAR experiment shows this categorically.

It is nonsensical to state that the models’ mathematical chaos — sensitivity to initial conditions — illustrates “climate variability”.


Hey Nick,

Nice general intro! But what are you actually trying to say with respect to the work of Mr Frank? All I can see is that for some cases of differential equations error propagation/accumulation is not linear. But how does that translate to the earlier discussions? Are you trying to say that the linear growth of uncertainty documented in the article of Dr Frank is not how the error in climate modelling behaves?

Secondly, it was mentioned here a couple of times that error propagation is not the same as uncertainty propagation (or, more precisely, propagation of calibration uncertainty). Do you buy that, and if not, why not?

“CFD solutions quickly lose detailed memory of initial conditions, but that is a positive, because in practical flow we never knew them anyway.”

Really? So there is no such thing as an initial, boundary value problem in computational fluid dynamics? And what do we mean by “quickly”?

Isn’t the 4 W/m2 uncertainty actually an indicator of our lack of understanding of how clouds behave? So it doesn’t matter what models are doing, because this uncertainty surrounds any calculations. The uncertainty increases as time passes, because time is a factor in the physical processes and the future state is dependent upon what actually happened previously. The models might happen to seem reasonable, but we don’t know exactly how the clouds can be expected to behave, so the models can’t be proven correct.

“So there is no such thing as an initial, boundary value problem”

There is, of course, and CFD programs are structured thus. But as I said about wind tunnels, the data for correct initial conditions is usually just not there. So you start with a best guess and as with GCMs allow a spin-up period at the start to let the flow settle down. There are some invariants you have to get right, like mass flux. The fact that it usually does settle down gives an indicator of the status of initial conditions. Most features don’t influence the eventual state that you want to get results from, and that is true for error in them too.

Nick, maybe you’d like to take another stab at addressing the non sequitur fallacy .

https://wattsupwiththat.com/2019/09/16/how-error-propagation-works-with-differential-equations-and-gcms/#comment-2797025

OK. Imagine we have a 1 m cube of hot steel at 200 F (the initial condition). We place this in a room at ambient conditions (70 F, another initial condition). The steel cube cools via natural convection, and we are interested in the time history of the steel and air temperatures as well as the flow field. If we calculate this using conjugate heat transfer CFD, how long will it take before the fluid flow “forgets” its initial condition? Will the initial condition matter?

In a more general context, are there fluid dynamics problems for which the transient behavior depends entirely on the initial condition? Can there be multiple solutions to fluid dynamics problems for which the particular solution you obtain depends on the initial condition? What if the system has multiple phases (e.g. liquid water, air, dust particles)?

What you are describing is one cell of a GCM with totally constant surrounding cells and the rest of the system in total equilibrium. In a word: irrelevant to the current discussion.

The properties of initial boundary value problems in computational physics is irrelevant to the current discussion? OK….

“are there fluid dynamics problems for which the transient behavior depends entirely on the initial condition”

I once spent far too long studying a problem of solute dispersion in Poiseuille flow. It made some sense because there was an experimental technique of instantaneously heating a layer with some radiation, and then tracking. That is for an existing steady flow, though. Generally I think that dependence is very rare. There are some quite difficult problems like just creating a splash, like this:

https://www.youtube.com/watch?v=Xl0RGPa57rI

The steel cube you describe is dominated by cooling, which has the continuity of heat conservation. Conserved quantities are of course remembered; the rest fades.
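The "conserved is remembered, the rest fades" point shows up even in the zero-dimensional limit of the cube problem, lumped Newton cooling (the rate constant and temperatures here are illustrative, not a real conjugate-heat-transfer calculation):

```python
import math

# Lumped Newton cooling: T(t) = T_amb + (T0 - T_amb) * exp(-k t).
# The ambient condition (the "forcing") is remembered forever; the
# initial temperature is forgotten exponentially.
def temp(t, T0, T_amb=70.0, k=0.05):
    return T_amb + (T0 - T_amb) * math.exp(-k * t)

for t in (0.0, 30.0, 120.0):
    print(round(temp(t, 200.0), 1), round(temp(t, 150.0), 1))
# The gap between a 200 F and a 150 F start shrinks by exp(-k t):
# 50 degrees, then about 11, then about 0.1.
```

Two cubes starting at different temperatures end up at the same ambient state: the initial condition decays away while the conserved boundary condition determines the eventual answer.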

Random walks are based on the possibility of variations that occur with physical processes.

There are conservation-of-energy laws for particles, and an entity called entropy that inherently works against conservation of state.

There are no conservation laws in climate as you state it, but I believe you can put one in as part of a computer programme, i.e. TOA balance has to be conserved at a stated value.

The fact that this is totally unrealistic, due to among other things cloud cover, is why the computer programmes cannot predict accurately a future climate state.

Apparently everyone wants to talk about GCMs and not about whether we know enough about clouds to program the GCMs.

We don’t; that is well known, and it is one of the main reasons why GCM output is whatever it has been tweaked to produce, and not something with any objective validity.

The “basic physics ” meme is a lie.

Some time ago I downloaded a free training module from UCAR/NCAR entitled “Introduction to Climate Models”. Here is a link to a few screenshots concerning how they work. Some modeled behavior is “resolved”, i.e. each time-step results from solving a set of equations for motion, temperature, etc. for each grid-box. The rest is parameterized. Take a look. The parameterized outputs dominate those aspects of GCM’s which relate to clouds and longwave radiation, which of course is the supposed purpose of using GCM’s to simulate the climate impact of increases in greenhouse gases.

The GCM’s make sausage. Would you like it spicy hot, or milder? Take your pick, tweak the parameterizations.

So Nick Stokes may have some great points in this post, but I don’t see how it matters much, nor does it invalidate Pat Frank’s approach to determining the reliability of air temperature projections.

https://www.dropbox.com/sh/9trnmu9vepf1e2b/AAA7EZKmSmAnGVT9unkHeleYa?dl=0

Wow, it’s worse than we thought:

“when we simulate the climate system, we want no intrinsic climate drift in the model. In a process akin to calibrating laboratory instruments, modellers tune the model to achieve a steady state”

This implies two assumptions.

1. There was some point in the measurable record when climate was in an equilibrium state, where we can initiate the “control run”.

2. Without human emissions and land use changes, climate will remain in a “steady state” and show zero trend in all major metrics over a test period.

Who are the climate change deniers in this story ?!

I think he just means they tune it against actual temperature and other measurements.

In fact it used to be argued that climate model predictions must be true because models can hindcast. The fact that lots of models with different results can also hindcast is more obvious now.

Whoever argued that knew nothing about modelling or fitting and degrees of freedom, i.e. they know nothing. Maybe it was Nobel laureate Mickey Mann.

TallDave

Tuning the models to hindcast is essentially curve fitting. Anyone familiar with fitting high-order polynomials should be accustomed to the fact that such curve fitting does a good job on the data used for fitting, but produces unreliable results for extrapolations beyond the domain of the data used for fitting.
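That failure mode is easy to demonstrate. A short Python sketch (degree, noise level, and extrapolation point all arbitrary): fit a high-order polynomial to noisy sine data, then evaluate it just past the fitting domain.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(20)

# High-order polynomial fit: near-perfect inside the fitting domain...
coeffs = np.polyfit(x, y, 15)
inside = np.abs(np.polyval(coeffs, x) - y).max()

# ...but unreliable just beyond it (the true value sin(3*pi) is 0)
outside = abs(np.polyval(coeffs, 1.5) - np.sin(2 * np.pi * 1.5))
print(inside, outside)
```

The in-sample residual is tiny while the extrapolation error is orders of magnitude larger, which is exactly the hindcast-versus-forecast distinction being made above.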

“Some time ago I downloaded a free training module from UCAR/NCAR entitled “Introduction to Climate Models”. Here is a link to a few screenshots concerning how they work. Some modeled behavior is “resolved”, i.e. each time-step results from solving a set of equations for motion, temperature, etc. for each grid-box. The rest is parameterized. Take a look.”

Interesting. I’ve had a brief look into SimMod, a simplified climate model developed at Berkeley. The authors claim that even such a simplified model closely follows more advanced ones and is suitable for research. In this model, forcings are front-loaded from the different RCP scenarios. There is no specific forcing due to clouds, so I suppose that forcing is lumped into the non-greenhouse-gas forcing. The default simulation run spans over 300 years (1765–2100). Non-GHG forcing is simply the difference between the total forcing front-loaded from the RCP scenario and the total forcing due to greenhouse gases. Default climate sensitivity is 1.25. Until 1850 the non-GHG forcing is set to zero. Over the next 50 years there is a steady decrease of non-GHG forcing from ~0.3 W/m^2 to 0; the downward trend then continues from 1900 till 2000, when the non-GHG forcing drops to -0.4 W/m^2; and from 2000 there is a steady increase from -0.4 to 0.2 W/m^2. Have a look at the chart.

According to the model, forcing due to CO2 alone dwarfs non-GHG forcing, especially towards the later years – again the graph may be helpful.

y’ = A(t)*y + f(t) ……….(1)

y(t) could be just one variable or a large vector (as in GCMs); A(t) will be a corresponding matrix, and f(t) could be some external driver, or a set of perturbations (error).

I am not sure how many ways to say this, but the case analysed by Pat Frank is the case where A is not known exactly, but is known approximately; and where the goal is to analyze the effects on knowledge of output given the approximate knowledge of A. He is not focusing, as you are, on the divergence of solutions resulting from known A and different starting values.

I am sensing that the idea that elements of A are only known to be in intervals, but not known exactly, is an idea that you have never addressed. You write as though A is either known or not known; not that its values are uncertain but likely within limits.

Lots of practical information is available with limits on accuracy: resistances and capacitances of electrical components; concentrations of active ingredients of cough medicine; milliamp hours of output from rechargeable batteries; calories in a cup of cooked rice; driving speed displayed on a speedometer. Imprecise knowledge of component characteristics leads to imprecise knowledge of the effects of using them.
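That kind of tolerance propagation can be sketched with a quick Monte Carlo. The RC time constant below is purely illustrative (nominal values and tolerance percentages invented for the example): each sampled component sits somewhere unknown inside its tolerance band, and the spread of the output reflects the imprecise knowledge of the inputs.

```python
import random

random.seed(1)

def rc_time_constant(R_nom=1000.0, C_nom=1e-6, tol_R=0.05, tol_C=0.10, n=100_000):
    """Monte Carlo propagation of component tolerances into tau = R*C."""
    taus = []
    for _ in range(n):
        R = R_nom * (1 + random.uniform(-tol_R, tol_R))  # a 5% resistor
        C = C_nom * (1 + random.uniform(-tol_C, tol_C))  # a 10% capacitor
        taus.append(R * C)
    mean = sum(taus) / n
    half_range = (max(taus) - min(taus)) / 2
    return mean, half_range

mean, half_range = rc_time_constant()
# Nominal tau is 1 ms; the worst-case half-range approaches ~15% of nominal.
print(mean, half_range)
```

No amount of knowing the nominal values removes the output spread; only tighter knowledge of the components does.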

Which brings us back to a question addressed by Pat Frank: what is the “probable error” of GCM forecasts? Much greater than you expect if you ignore the probable error of the parameter inputs.

Mathew RM wrote:

Untested assumptions of component characteristics leads to even greater imprecise knowledge.

For example, my understanding is that models operate on the assumption that the atmosphere is only in local thermodynamic equilibrium. I have been led to believe that real-world data shows this to be wrong.

Then according to your GCMs, Anthony’s CO2 jar experiment should have shown an increase in temperature with an increase in CO2, but the temperature did not increase. It should be easy with only one variable, the CO2 ppm.

“So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number. ”

Well, there’s the problem. What about the discrepancy between what you believe is the true number, and the physically true number? That would be a function of the physical uncertainty in your measurements underlying the parameters.

Consider the difference between Step 0 (measurement) and Step 1 (simulated). If you could measure all the parameters again in Step 1, their measurement error wouldn’t change. But since you can’t, it has to increase. It doesn’t make sense that simulated values would carry the same physical error at every step the way repeated actual measurements would.

You’ve proven the models are fit for making guesses about the numbers you believe are correct, but proved nothing about whether they are fit to make reliable predictions about the future states of the physical properties you’re actually measuring.

This is why there are so many abandoned models, and so many current models with different ECS — all of them can’t be right, but all of them can be wrong.

I’m comfortable with Nick Stokes explanation that cumulative error won’t send the GCMs to extremes. Now with cumulative error out of the way the only remaining reason for GCMs to be so very wrong is because the models themselves are wrong and not repairable via refinement to reduce error.

Nick doesn’t get that uncertainty is not error, and that uncertainty doesn’t affect model outputs at all.

His entire analysis is one long non-sequitur.

This was an excellent post, but it might have failed to address another source of error.

Nick mentions that a major source of error is grid discretization, and suggests using finer grids until no further changes in error are observed. This can possibly be done with weather forecasting models, where the time span being forecast is on the order of five or 10 days. If the model can be run on the computer within a few hours, a weather forecast can be generated before the forecast weather actually happens. If later observations are different than those predicted by the model “one day out”, those observations can be incorporated as initial conditions into a new model run, and the model can be corrected.

But short-term weather forecasts do tend to diverge from reality (later observed weather) within 5 to 10 days, even using relatively small grids. If a model is to be used to predict the climate 50 or 100 years from now, the number of time steps simulated needs to be drastically increased. In order to keep calculation times reasonable, that may mean increasing the grid size, since calculation time is proportional to (number of grids) * (number of time steps).

If the grid size is increased, the spatial errors per time step are likely to increase (if the temperature, pressure, humidity, wind speed, etc. at the center of a grid cell do not correspond to the calculated linearized “averages” from the edges of the grid). Since the time steps for a climate model will probably be longer than those for a weather-forecasting model, there would be greater temporal errors per time step, also propagated out over many more time steps.

This qualitative analysis has not been subjected to any differential equations, but it would be expected that random errors due to imperfect knowledge of conditions within a grid cell and those between time steps would tend to increase much faster for a Global Climate Model than for a short-term weather-forecasting model. It does not seem that Nick Stokes’ analysis has addressed this problem.
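For what it’s worth, the way discretization error grows with grid size can be illustrated on the simplest possible example: a centred finite difference for a known derivative. The function and spacings here are arbitrary, chosen only to show the scaling.

```python
import math

def fd_error(h):
    """Worst error of a centred difference for d/dx sin(x) on (0, 2*pi)."""
    n = int(2 * math.pi / h)
    worst = 0.0
    for i in range(1, n):
        x = i * h
        approx = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
        worst = max(worst, abs(approx - math.cos(x)))
    return worst

coarse = fd_error(0.2)   # coarse "grid"
fine = fd_error(0.05)    # same grid refined 4x
# Centred differences are second order: refining h by 4x cuts error ~16x.
print(coarse / fine)
```

Run the logic in reverse and it makes Steve Z’s point: coarsening the grid by 4x to save compute multiplies the per-step spatial error by roughly 16x.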

Steve Z

“In order to keep calculation times reasonable, that may mean increasing the grid size, since calculation time is proportional to (number of grids) * (number of time steps).”

I fully agree with you.

But aren’t we living in a perverted world, where the American military will soon get a computer to simulate ultramodern nuclear warheads, a machine that climate research should in fact benefit from as well?

https://insidehpc.com/2019/08/cray-to-build-el-capitan-exascale-supercomputer-at-llnl/

“El Capitan is projected to run national nuclear security applications at more than 50 times the speed of LLNL’s Sequoia system.”

I have been living in Germany for a very long time, but I can tell you that in France the computing power dedicated to climate research is, to say the least, incredibly low. The people there are proud to obtain, in some near future, a ‘supercomputer’ doing no more than 11 teraflops!

El Capitan will come around in a few years with 1.5 petaflops, i.e. over 100 times more.

Rgds

J.-P. D.

“It does not seem that Nick Stokes’ analysis has addressed this problem.”

Well, I did mention grid invariance in CFD. It’s a big issue in all PDE solution, including GCMs.

You’re right about the trade-off. GCMs have to run many time steps, so have to use lower spatial resolution. Grid sizes are large, usually 100 km or so horizontally. They can’t really do hurricanes, for example. But they are not trying to emulate weather.

Here are two questions for Nick Stokes and Steven Mosher concerning the scientific credibility of GCM model runs which produce exceptionally high predictions of future warming; for example, 6C of warming, a figure which Steven Mosher regards as credible.

Question #1: What kinds of evaluation criteria should be applied when assessing the scientific credibility of a climate model run which produces 6C of future warming?

Question #2: Will the basic list of evaluation criteria be different for a model run which produces 2C of future warming, as opposed to a run which produces 6C of warming?

I’ve always been concerned about whether GCMs that approximate climate can be trusted very far into the future. I think that everybody would agree that the GCMs only approximate how climate will change?

If we agree on that much, then for each step of a GCM run we should be able to agree that the result of that step is also an approximation? I’m not asking how big a bound of error on all the parameters might be, just for agreement that the result won’t match reality by some amount.

If we agree on that, then consider how that approximation is affected by the next step of the run?

A trivial example came to mind, based on my experience laying a large circular wall, where I was using a spirit level for each wall block. I was making sure that each wall block was level, and level with each of the three blocks before it. A spirit level is an approximation to true level, and is off by +/- 0.1%. Using the eye for each step of laying the wall blocks is also an approximation. My eyesight isn’t that great and I use progressive lenses, so my accuracy in assessing true level with the spirit level is +/- 1%. With 100 wall blocks in the circle, and using the method described, how close did my last wall block come to being level with the first block laid? Was it above or below the first one?

I think that is the sort of uncertainty that Pat Frank is trying to get at. GCMs are a tool used to project future climate in the same way that the spirit level and my eye were tools I used to level my circular wall blocks. To answer my question about the wall blocks you’d have to know how far off each assessment of level was, and in which direction. Without that detail, you can’t really say how far off I would be. You can put a bound on how far off I could be, though, based on the uncertainty numbers I provided. The same could be said for GCMs.
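The wall-block question lends itself to a quick Monte Carlo sketch, treating the +/-1% eyesight figure from above as the per-block tolerance. The simulation is a plain random walk: a specific wall has a specific final offset, but without knowing each per-block error you can only bound it.

```python
import math
import random

random.seed(0)

def final_offset(n_blocks=100, tol=0.01):
    """Each block is levelled only to within +/-tol; the offsets accumulate."""
    off = 0.0
    for _ in range(n_blocks):
        off += random.uniform(-tol, tol)
    return off

runs = [final_offset() for _ in range(20_000)]
rms = math.sqrt(sum(o * o for o in runs) / len(runs))
worst_bound = 100 * 0.01

# Typical (RMS) drift grows like sqrt(n_blocks) (~0.06 here);
# the guaranteed worst-case bound grows like n_blocks (1.0 here).
print(rms, worst_bound)
```

The distinction in the comment shows up directly: any single wall has one definite (unknown) offset, while the tolerance figures only give you the statistical envelope around it.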

That is what I think Pat Frank is trying to characterize in his paper. From what I could see in Nick Stokes’ post, this aspect of the problem with GCMs is not addressed. In the comments Nick did appear to acknowledge that this type of thing is very hard to calculate.

From Nick’s response to David:

“David,

“Now how can we quantify model uncertainty?”

Not easily. Apart from anything else, there are a huge number of output variables, with varying uncertainty. You have mentioned here tropical tropospheric temperature. I have shown above a couple of non-linear equations, where solution paths can stretch out to wide limits. That happens on a grand scale with CFD and GCMs. The practical way ahead is by use of ensembles. Ideally you’d have thousands of input/output combinations, which would clearly enable a Type A analysis in your terms. But it would be expensive, and doesn’t really tell you what you want. It is better to use ensembles to explore for weak spots (like T3), and hopefully, help with remedying.”
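To illustrate the ensemble idea in miniature: a toy zero-dimensional response model in Python, where only one parameter is uncertain. All the numbers here (a 3.7 forcing, a 3.2 restoring term, a +/-0.8 feedback interval, nominally in W/m^2 and W/m^2/K) are invented for the sketch and are not taken from any GCM.

```python
import random
import statistics

random.seed(7)

def toy_response(forcing, feedback):
    """Toy equilibrium warming: dT = F / (lambda0 - feedback)."""
    lambda0 = 3.2  # restoring strength -- invented for the sketch
    return forcing / (lambda0 - feedback)

# Ensemble over a feedback parameter "known" only to within +/-0.8
ensemble = [toy_response(3.7, random.uniform(-0.8, 0.8)) for _ in range(10_000)]
spread = statistics.pstdev(ensemble)

# The ensemble spread is a Type A estimate of parameter-driven uncertainty.
print(statistics.mean(ensemble), spread)
```

This is the "thousands of input/output combinations" idea Nick mentions, affordable here only because the model is a one-liner rather than a GCM.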

I believe that Pat is willing to stipulate that GCMs are all internally consistent, have a balanced energy budget, and converge on solutions. But for all that, they are still only approximations of reality. Pat is saying that the results the GCMs end up with are only approximations to what the climate will actually be and is trying to put an upper bound on how far off those final states will be. We can argue on whether the bound he is using is too large or not and can discuss how to improve the assessment of uncertainty bounding the final results.

So Nick seems to be telling us that changing the net input energy by 8 watts per square meter (+/-4) will have no effect on the output air temperature after an elapsed time of years (20?) in the GCMs. Thank goodness. That .035 CO2 forcing is clearly meaningless then.

The (+/-)4 W/m^2 is not changing the net inputs, John C. It’s an expression of ignorance concerning the state of the clouds.

It means that GCMs are entirely unable to resolve the response of clouds to the forcing from CO2 emissions.

It means GCM air temperature projections are physically meaningless.

Nick Stokes doesn’t understand resolution. Nor does Steve Mosher.

It would be useful if each major contributor to this discussion — Pat Frank, Roy Spencer, and Nick Stokes — could agree upon a glossary of scientific and technical terms common to the arguments being made by each contributor so that the WUWT readership can reach a conclusion whether or not the three major participants are even talking the same language.

Beta Blocker

Yes.

And it would be even a bit more useful if one of the three would stop claiming that the others don’t understand this and that.

This is simply zero-level, and it is really a pity that he doesn’t understand such basic things.

But they DON’T UNDERSTAND! That’s what so amazing about this. Perhaps I missed it, but I haven’t seen Nick Stokes address Pat Frank’s central point.

Pat Frank is talking about uncertainty and that the amount of uncertainty keeps increasing – NOT ABOUT ERROR. The uncertainty that emerges from the models with their results – as an intrinsic property of those results – is so huge it renders the results essentially meaningless. It’s like saying “It will be 60.25 degrees today, plus or minus 27 degrees.”

Also, is your last sentence horribly ironic or is it supposed to be a quote being made by “one of the three”? (If it is a quote it ought to be inside quotation symbols)

“… is so huge it renders the results essentially meaningless”

No, it renders uncertainty meaningless.

What does it mean, anyway? Do you know? An uncertainty that is not related to expected error?

(In reply to Nick Stokes…)

Uncertainty is a way of quantifying the statement “I’m a little fuzzy on this, so I can’t give you a more precise answer.”

So, when the question is asked, “How many people will show up for tonight’s event?”, the CORRECT answer might be:

“Well, I’ve reviewed past events that were similar to yours. I conducted a poll to see how ‘hot’ your topic is. Finally, I checked to see what else is going on in town. Based on all of that, I believe you will have between 500 and 600 people.”

“But I need a number.”

“Okay, 550 people, give or take 50.”

“So, you’re saying 550 people will come out tonight?”

“Yes, give or take 50.”

I gave the best answer I could within unavoidable constraints. So, if 535 people show up my answer isn’t IN ERROR. I gave a correct answer, within my stated bounds. But if only 417 people show up, then my answer would be INCORRECT and IN ERROR.

BUT, what if the query had been about a much bigger event, and I had said, “55,000 people will show up, give or take 35,000 people”? Well, the event organizer would have every reason to fire me and laugh me out of her office… BEFORE the event even takes place. Because, who cares if my answer proves to be “correct”? It is virtually useless. 20,450 people showing up, or 79,823 people attending, would both make my answer technically correct. Therefore, my answer was trite and not worthy of serious consideration.

Pat Frank says the extent of the uncertainty that emerges from climate models shows the results are the polar opposite of earthshaking. They are trite and not worthy of serious consideration – whether they prove to be “correct”, or not.

(extending my reply to Nick Stokes)

Notice in my scenario that “give or take 50 people” is an integral part of my answer – as important as the “550 people” part. Let’s say my event organizer went to a follow-up meeting, and said, “Schilling did some analysis and says 550 people are coming.” She would not have given them the answer I gave her. If I found out about it later, I wouldn’t think she was purposely lying. Rather, I would figure she simply didn’t grasp the meaning or implications of “give or take 50 people.”

I honestly think that’s what happening here. You’re a bright, articulate guy who can be commended for regularly making the effort to be civil. But I think you just haven’t appropriated the concept of uncertainty, or the ramifications of it – particularly for climate models.

Pat Frank is stating the uncertainty that arises from the calculations and algorithms that make up climate models is so great it drains their outputs of all import. Climate model outputs are studiously carved into the sand and announced with great fanfare… just before a wave of uncertainty – THEIR wave of uncertainty – washes over them.

“I gave the best answer I could within unavoidable constraints. So, if 535 people show up my answer isn’t IN ERROR. I gave a correct answer, within my stated bounds. But if only 417 people show up, then my answer would be INCORRECT and IN ERROR.”

Well, that is different from Pat Frank’s insistence (but there is no consistency here). He insists that error is just the difference between measurement and reality, not being outside CI limits. But then again, his paper is titled “Propagation of error…”. I don’t think your version corresponds to what anyone else is saying.

Nick,

“Well, different from Pat Frank’s insistence (but there is no consistency here). He insists that error is just the difference between measurement and reality.”

That *is* the definition of error. But it is *not* the definition of uncertainty. You still don’t seem to grasp the difference between the two. Reading the distance between the mounting holes in a single girder can suffer from error in measurement. Multiple measurements made on that same girder with the same measuring device can help decrease the size of the error in measurement.

Calculating the span length of ten girders tied together with fish plates suffers from *uncertainty*. You don’t know which girders have distances between mounting holes that are short, which girders have distances between mounting holes that are long, and which girders are dead on accurate. No amount of statistical averaging will help you determine the span length of those connected girders. It will always be uncertain till you actually put them together and see what the result is. That’s not “error”, it is “uncertainty”!

“That’s not “error”, it is “uncertainty”!”

Well, on that basis you’d say that measuring 17″ with a 2 ft ruler might have error, but if you measure it out with a 1 ft ruler, it is uncertainty.

But it’s clear that your definition of error is different to what Matthew Schilling calls ERROR! There isn’t much consistency here. Perhaps you could explain the usage of “propagation of error” in Pat Frank’s title.

“…What does it mean, anyway?…”

I’m uncertain.

[mic drop]

Nick Stokes:

“Well, on that basis you’d say that measuring 17″ with a 2 ft ruler might have error, but if you measure it out with a 1 ft ruler, it is uncertainty.”

No. Uncertainty results from measuring with a ruler whose length is not known. Imagine, if you will, 1,000 rulers off an assembly line, all manufactured “within tolerances”. You can be pretty sure that at some resolution no two of them have the same deviation from perfect, and that no one of them is perfect. So you choose one of them. If you use it to measure something, you can be pretty sure that the error of measurement is bounded by the limit set by the tolerances. But you can’t be sure whether the error is positive or negative. The same random deviation is added to the estimated length each time you lift the ruler and place the trailing edge where the leading edge was; if something is measured to be 8 ruler lengths, then the uncertainty is 8 times the manufacturing tolerance limits. (As I described it, there could instead be a different independent random error added each time I move the ruler along, but that isn’t the analogy to Pat Frank’s procedure.)
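The ruler analogy above can be checked numerically. A short Python sketch (tolerance figure invented for the example): draw one biased ruler, lay it end to end 8 times, and the unknown bias adds linearly, so the uncertainty of the total is 8 times the tolerance, not sqrt(8) times.

```python
import random

random.seed(42)

def measure(n_lengths, bias):
    """Lay the same ruler end to end: its unknown bias adds every placement."""
    return n_lengths * (1.0 + bias)

# 50,000 rulers off the line, each with an unknown bias within +/-0.2%
biases = [random.uniform(-0.002, 0.002) for _ in range(50_000)]
errors = [abs(measure(8, b) - 8.0) for b in biases]

# A systematic bias propagates linearly: the worst case is 8 * tolerance.
print(max(errors))
```

Swapping in a fresh random error at each placement instead would make the errors partially cancel; the whole point of the analogy is that the bias is fixed per ruler, so it does not cancel.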

Matthew Schilling, “is so huge it renders the results essentially meaningless”

Nick, “No, it renders uncertainty meaningless.”

No, it renders the results meaningless.

Nick, “What does it mean, anyway? Do you know? An uncertainty that is not related to expected error?”

It means the prediction is meaningless, Nick.

It means the model cannot resolve the effect.

One can always construct some analytical model, calculate something, and get a number. Uncertainty tells one whether that number has any predictive reliability.

Uncertainty stems from a calibration test, using the model to calculate some known quantity. Poor calibration results = poor model.

Huge predictive uncertainty = no predictive reliability.

Standard reasoning in science. Matthew is correct.

Dr. Frank,

Glad to see that you are still contributing to this discussion. Does the 4 W/m^2 estimated cloud error apply to the entire GCM-modeled greenhouse effect, or just to the cloud-related impact of the incremental CO2 forcing? I’m assuming it’s the former, but if it turns out that some portion of this error could be a) directly attributed to incremental CO2 forcing and b) that the resulting propagation of the smaller error still exceeded the GCMs’ forecasts, this would be very strong evidence that the models have no skill. Maybe not physically correct, but certainly an iron-clad result. Thank you.

It’s your latter condition, Frank: “cloud-related impact of the incremental CO2 forcing.”

GCMs can’t resolve the response of clouds to CO2 forcing. It’s opaque to them. That means the air temperatures they calculate are physically ungrounded.

I’ll have a post about that, probably by tomorrow.

Well, I may be over simplifying the issue. Thanks for even noticing my comment. Although I suspect that you can’t even drag this race horse to the water, it may be that your paper will encourage a few readers to drink.

Nick and the GCMs seem to treat the cloud effect as an ‘n’ W/m^2 “forcing.” I understood your references to say that forcing’s magnitude is not known more precisely than +/-4 W/m^2 (although it may well be far less constrained). If I assume for the sake of argument that the rest of the GCM is flawless and all other initial and continuing conditions are correct, then I can run the model with the cloud effect set to ‘n’-4 and ‘n’+4, with the difference in result being the uncertainty range that a +/-4 cloud effect creates (in the model). I do not actually think that the GCMs are flawless, or that the other parameters are correct, so the difference in output will likely not be the full uncertainty that should propagate through the model. But let’s pretend.

Nick seems to contend that a change in the magnitude of the cloud forcing parameter equal to the (minimum) uncertainty has no effect on the results of a GCM run. And yet he also seems to believe that a (much smaller) change in the CO2 forcing parameter makes significant changes in a GCM run. Apparently, some Watts are less equal than others.

I just reread Nick’s post. I still see him pretending that an uncertainty is an error, and alleging that the error will be erased by the other model constraints. So I guess he’s alleging that inputs and coefficients don’t matter, the model will make it right because SCIENCE. (Sort of like modeling the hair length of Marine Recruits. No matter the input length, after processing the length at the exit is always the same.)

“Nick seems to contend that a change in the magnitude of the cloud forcing parameter equal to the (minimum) uncertainty has no effect on the results of a GCM run. ”

In essence Nick is saying that initial conditions are irrelevant. They can be anything and you’ll still get the same answer out of the model. That’s just proof that the models are set up to give a specific answer. That’s why Pat Frank could get the same results with a much simpler emulation!

“…In essence Nick is saying that initial conditions are irrelevant. They can be anything and you’ll still get the same answer out of the model. That’s just proof that the models are set up to give a specific answer…”

Not exactly. Lots of mathematical models need some lead-time before they start converging and making sense. It doesn’t mean they are set up to give a specific answer.

Example: if you are modeling levels of a lake in response to the annual water cycle (pretty simple water balance), does it matter if you start it 100% full or completely dry in 1900 if you are interested in results from 2000-2019? Obviously the results will be much different around 1900, but the levels should get closer together over time and at some point align.
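That lake example can be put in code in a few lines. This toy annual water balance (coefficients arbitrary) shows two extreme starting levels converging onto the same trajectory, the homogeneous-decay behaviour from the head post.

```python
def lake_level(start, years=120):
    """Toy annual water balance: fixed inflow, outflow proportional to level."""
    level = start
    history = []
    for _ in range(years):
        inflow = 10.0            # arbitrary units per year
        outflow = 0.2 * level    # outflow rises with head
        level = max(0.0, level + inflow - outflow)
        history.append(level)
    return history

full = lake_level(100.0)  # started brim-full in "1900"
dry = lake_level(0.0)     # started bone-dry in "1900"

# A century on, the two runs are indistinguishable: both sit at the
# equilibrium level inflow/0.2 = 50, whatever the starting level was.
print(full[-1], dry[-1])
```

The difference between the two runs shrinks by a factor of 0.8 per year, so after 120 years it is negligible; only the balance terms (the conserved accounting) survive.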

Michael,

“Obviously the results will be much different around 1900, but the levels should get closer together over time and at some point align.”

Such a model should show not just a single “average” but also trends in the average. Those trends will depend heavily on what the initial conditions are in the time frame you are studying, e.g. if you are in a wet trend in 1900 but in a drought trend in 2000. If you merely want to add all annual levels together and then calculate an average, you lose a *lot* of data that would inform you. If all climate models converge to a single average over a long period of time, then of what use are they? They will tell you nothing about trends, and would certainly be useless for telling you what is happening to maximum and minimum temperatures in the biosphere. It is those maximums and minimums which determine the climate, not a single average.

….after reading this thread…..I take it you are all in agreement with all of the horrendous adjustments they have done to past temperatures

because that’s what the models are tuned to

Latitude

“I take it you are all in agreement with all of the horrendous adjustments they have done to past temperatures…”

Well, apart from your usual, prehistoric flip-flop pictures (probably originating from the Goddard blog): do you have something really trustworthy to offer?

Or do you prefer to stay in the good old times where the 1930’s were so pretty warm in comparison to today, due to

– incompetent restriction on TMAX records, though today everybody knows that TMIN increases much faster than TMAX everywhere

– tens of thousands fewer weather stations than today

and, last but not least,

– completely deprecated processing algorithms no one would still keep in use today?

https://drive.google.com/file/d/1ESDd0LROc53jvSm1rZFhjkaQqif7tZ5R/view

Yeah.

Bindidon,

So we have accurate global temperatures from 1895…are you kidding?

Good point. The models are tuned to the fraudulent Hockey Stick. GIGO.

Climate Model Ruse

Climate modelers must make the following assumptions:

1) The correct continuum dynamical equations are being used

This is false because the primitive equations are not the correct reduced system.

If they were, they would be well posed for both the initial and initial-boundary value problems. Oliger and Sundstrom proved that the initial-boundary value problem is not well posed for the primitive (hydrostatic) equations.

2) The numerics are an accurate approximation of the continuum partial differential equations.

This is false, as shown in Browning, Hack, and Swarztrauber (1989). The numerics are not accurately describing the correct partial differential equations (the reduced system), and are not even accurately approximating the hydrostatic equations, because Richardson’s equation is mimicking the insertion of discontinuities into the continuum solution, destroying the numerical accuracy.

3) The physical parameterizations are accurately describing the true physics.

This is false. In Sylvie Gravel’s manuscript it is shown that the hydrostatic model is inaccurate within 1-2 days, i.e., it starts to deviate from the observations in an unrealistic manner through growth of the velocity at the surface. For forecasting, this problem is circumvented by injecting new observational data into the hydrostatic model every few hours. If this were not done, the forecast model would go off the rails within several days. This injection of data is not possible in a climate model.

Thus, IPCC uncertainty-language terms are very likely physically meaningless.

Robert,

I do not know if Global Warming is real or not. Certainly there are some physical signs that are disturbing.

But I do know that climate models are hogwash. They are based on the wrong system of equations, and they have violated every principle of numerical analysis. The “scientists” are more interested in their funding than scientific integrity. I love computers, but they must be used correctly. Unfortunately this is not the case in many areas of computational fluid dynamics.

Latitude:

"I take it you are all in agreement with all of the horrendous adjustments they have done to past temperatures"

I think a fairer summary would be that even after all of the adjustments made to match recorded temperatures, the uncertainty in the parameter values implies that the uncertainty in the forecasts of the GCMs is too great for them to be relied upon.

My two cents here. Nick did hit the nail on the head that the weakness in Pat Frank’s paper is the GCM approximation equations. Nothing in the remainder of Pat’s paper is incorrect that I can see. Nick’s error propagation examples are correct.

However.

Nick is glossing over the power of ensembles (and models in general); he is not denying this, but it is certainly understated. Since it would be virtually impossible to explicitly propagate an assumed form of error through the GCMs, the simplest fallback becomes ensembles. Unfortunately, very little research has been done on evaluating how sensitive GCM outputs are to their initial and boundary conditions. Most of the perturbed inputs are driven by researcher "wisdom". There are (probably) infinitely many combinations of initial conditions that produce unstable or semi-stable GCM outputs in terms of error. This has not been studied closely, and even a cursory look at the performance of models tuned against a sufficiently early back date shows that their error bounds underestimate real-world conditions many years later.

I rarely have enough time nowadays to do more than drive by these types of problems but I do know friends that work on GCM’s and propagation of errors/model stability is a real problem.

The original dynamical equations are essentially hyperbolic (automatically well posed) and much is known about them mathematically. There are mathematical estimates for the growth of perturbations in the initial conditions for a finite period of time, and the Lax Equivalence Theorem states that a numerical method will converge to the continuum solution in that time if the numerical method is accurate and stable.
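As a toy illustration of that convergence statement, the sketch below applies a first-order upwind scheme to the simplest hyperbolic equation, linear advection, and checks that the error shrinks with the grid. Nothing here resembles a full GCM; it only demonstrates the accurate-plus-stable-implies-convergent behaviour the Lax theorem guarantees.

```python
import math

def advect_upwind(n):
    """Solve u_t + u_x = 0 on [0,1) with periodic BCs using first-order
    upwind, smooth data u(x,0) = sin(2*pi*x), integrated up to t = 0.5."""
    dx = 1.0 / n
    dt = 0.5 * dx                     # Courant number 0.5: stable for upwind
    u = [math.sin(2 * math.pi * i * dx) for i in range(n)]
    nsteps = int(round(0.5 / dt))
    for _ in range(nsteps):
        u = [u[i] - (dt / dx) * (u[i] - u[i - 1]) for i in range(n)]
    t = nsteps * dt                   # exact solution: initial profile shifted by t
    return max(abs(u[i] - math.sin(2 * math.pi * (i * dx - t))) for i in range(n))

e1, e2 = advect_upwind(100), advect_upwind(200)
print(e1, e2, e1 / e2)   # error roughly halves when dx halves: first order
```

Halving dx roughly halves the error, the signature of a first-order accurate, stable scheme converging as the theorem predicts.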

However, all of this goes out the window when a dissipation term is added that leads to a larger continuum error than the numerical errors. At that point the numerical solution is converging to the wrong system of equations, i.e., an atmosphere more like molasses.

Nick,

The Lorenz equations were derived by an extremely crude numerical approximation of the Euler equations, so they cannot be proved to be close to the solution of those equations. Multiple-time-scale hyperbolic equations have very reasonable determinate solutions for a fixed period of time. I also mention that Kreiss has shown that all derivatives of the incompressible Navier-Stokes equations exist, and that if the numerical accuracy is as required by the mathematical estimates, the numerical method will converge to the continuum solution.

Jerry

John,

Ensembles do not help unless the model is accurately describing the correct system of equations, and that is not the case for weather or climate models. The behavior of air versus molasses illustrates this point.

Jerry

John, "the weakness in Pat Frank's paper is the GCM approximation equations."

One equation, and it does a bang-up job duplicating GCM outputs.

Maybe we need a different guest blogger. Not a climate scientist and not a misogynistic male denier, either, lol.

https://www.researchgate.net/publication/321213778_Initial_conditions_dependence_and_initial_conditions_uncertainty_in_climate_science

Abstract

This article examines initial-condition dependence and initial-condition uncertainty for climate projections and predictions. The first contribution is to provide a clear conceptual characterization of predictions and projections. Concerning initial-condition dependence, projections are often described as experiments that do not depend on initial conditions. Although prominent, this claim has not been scrutinized much and can be interpreted differently. If interpreted as the claim that projections are not based on estimates of the actual initial conditions of the world or that what makes projections true are conditions in the world, this claim is true. However, it can also be interpreted as the claim that simulations used to obtain projections are independent of initial-condition ensembles. This article argues that evidence does not support this claim. Concerning initial-condition uncertainty, three kinds of initial-condition uncertainty are identified (two have received little attention from philosophers so far). The first (the one usually discussed) is the uncertainty associated with the spread of the ensemble simulations. The second arises because the theoretical initial ensemble cannot be used in calculations and has to be approximated by finitely many initial states. The third uncertainty arises because it is unclear how long the model should be run to obtain potential initial conditions at pre-industrial times. Overall, the discussion shows that initial-condition dependence and uncertainty in climate science are more complex and important issues than usually acknowledged.
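The first kind of initial-condition uncertainty in the abstract, the spread of an ensemble of simulations, is easy to demonstrate on a toy chaotic system. The sketch below uses the Lorenz-63 equations purely as an illustrative stand-in for a climate model; the perturbation size, step count, and ensemble size are arbitrary choices.

```python
import random

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system, a toy chaotic flow."""
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def ensemble_spread(members=20, steps=1000, eps=1e-4, seed=0):
    """Perturb one initial state by ~eps, run every member forward, and
    report the standard deviation of x across the ensemble."""
    rng = random.Random(seed)
    states = [tuple(c + rng.uniform(-eps, eps) for c in (1.0, 1.0, 1.0))
              for _ in range(members)]
    for _ in range(steps):
        states = [lorenz_step(s) for s in states]
    xs = [s[0] for s in states]
    mean = sum(xs) / len(xs)
    return (sum((v - mean) ** 2 for v in xs) / len(xs)) ** 0.5

spread = ensemble_spread()
print(spread)   # orders of magnitude larger than the 1e-4 perturbation
```

Tiny initial differences grow into a macroscopic spread, which is exactly why the ensemble itself becomes an object of study rather than any single trajectory.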

I belong to the "keep it simple" school. I admit that beyond simple algebra, such as is sufficient for electronics, I do not understand Nick's and the others' comments.

But as we are talking about global warming come CC, does it really matter?

After all, this whole sham is apparently based on the alleged heating abilities of a trace gas, CO2.

So let's stick to that one factor. Does CO2 actually heat the atmosphere as a result of accepting energy from the Sun?

And what about the logarithmic effect of this gas? Has it now reached the point where it cannot affect the heating of the earth, if indeed it ever did?

Going on percentages, the increase in CO2 does not appear to match the increase in temperature from, say, 1880, and the 0.8 C which we are told signals the end of the world as we know it was probably the result of the change from the low temperatures of the Little Ice Age back to what it was in the MWP.

So please, less complicated maths; just get back to the basics of this whole farce of CC.

MJE VK5ELL

Michael

September 17, 2019 at 6:30 pm

Yes exactly.

The Little Ice Age is over…get used to it! We should be celebrating, not moaning.

Because of the unrealistically large dissipation (necessary to overcome the insertion of large amounts of energy into the smallest scales of climate and weather models caused by discontinuous parameterizations), one can consider the atmospheric fluid that these models are approximating to be closer to molasses than air. Obviously, using such a model to predict anything about the earth's atmosphere is dubious at best.

The use of a numerical approximation of the continuum derivative of a differential equation requires that the solution of the differential equation be differentiable, and the higher the order of accuracy of the numerical method, the smoother the continuum solution must be in order to provide a better result (see the tutorial on Climate Audit). As mentioned above, the parameterizations used for heating/cooling in these models mimic discontinuities in the forcing (and thus in the solution) of the continuum equations.
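The dependence of difference-quotient accuracy on smoothness can be checked in a few lines; the sample points and step sizes below are arbitrary. On an analytic function the centered difference converges at second order, while near a kink (a stand-in for the discontinuous forcing being discussed) refinement buys nothing.

```python
import math

def cd_error(f, dfdx, x, h):
    """Error of the second-order centered difference (f(x+h) - f(x-h)) / (2h)."""
    return abs((f(x + h) - f(x - h)) / (2 * h) - dfdx(x))

# Smooth function: halving h cuts the error by about four (O(h**2)).
e1 = cd_error(math.sin, math.cos, 1.0, 1e-2)
e2 = cd_error(math.sin, math.cos, 1.0, 5e-3)

# |x| near its kink: the same formula leaves an O(1) error that refinement
# does not remove, because the continuum function is not differentiable there.
sign = lambda x: 1.0 if x > 0 else -1.0
k1 = cd_error(abs, sign, 1e-3, 1e-2)
k2 = cd_error(abs, sign, 1e-3, 5e-3)
print(e1 / e2, k1, k2)   # ratio near 4; kink errors stay large
```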

As stated above, in Browning, Hack, and Swarztrauber, when the continuum solution was analytic (all derivatives existed), higher-order numerical methods provided more accurate results with less computational burden. However, when the artificial dissipation used in climate and weather models was added, the accuracy of the best numerical method was reduced by several orders of magnitude. The use of this unnatural dissipation is clearly necessitated by the parameterizations' impact on the solution, not by the numerical method.

Nick,

While it is true that the forcing drops out of your error equation, that assumes no error in the forcing. You also need to show how an error in the forcing terms is propagated. Browning and Kreiss have shown how discontinuous forcing wreaks havoc on the solution of systems with multiple time scales.

And that is exactly what is happening in the climate and weather models.

Jerry


The problem with Nick Stokes is his last name. Navier-Stokes, that Stokes was his grand-father. His frantic attempts to defend Global Circulation Models are because they use Computerized Fluid Dynamics, his gram-pa’s legacy, and he studied it in school.

He is a slick dude, mis-leads at every opportunity. His income apparently depends on this

So, there is this: increasing CO2 does raise the altitude at which the atmosphere is freely able to radiate to space, which lowers the temperature at which the atmosphere is freely able to radiate to space, which does lower the flux to space. If you do not know what flux is, look it up; it has nothing to do with soldering.

The magnitude of this effect has never been calculated from first principles; it cannot be, I tried. It does trap energy, also known, to those who have not studied it, as Heat.

"But the effect of CO2 is logarithmic." Is it? The 280 ppm so-called pre-industrial atmospheric concentration was already saturated at about 10 m altitude, absorbing and thermalizing all the 15-micron radiation from the surface of the Earth. Raising the CO2 to 400 ppm or so may have lowered that altitude by maybe a few cm, causing no change in the temperature of the atmosphere. Another word you should look up: "thermalizing."

Every word is true, ask a professor at a good ME school, but do not ask Nick Stokes.

I schooled Mosher on this, ask him, I did.

The General Circulation Models run on these expensive super-computers look at wind, the constant radiation from the Sun, the albedo which changes every second and is very difficult to quantify, water vapor also ever-changing and difficult to quantify, and, the boss, CO2. Volcanoes, sure, whatever.

They all seem to be programmed to increase the Global Average Surface Temperature of our atmosphere by some fractional number of degrees per each extra ppm of CO2, and then amplify this effect by so-called Positive Feedback, but no one can show any such increases from First Principles. It is all What Might Happen, But We Do Not Really Know.

I do not know why anyone would set out to mislead the uneducated public in this way, except that, these guys hate Mining. Mining tears up the Earth until the miners restore it to the way it was before, which most of them do now. Our modern prosperity is all based on Mining. They hate it. Mining: Oil, gas, coal, metals, minerals, imagine life without it.

Someone tell me another reason these gentlemen and ladies would do this. The Biggest Lie since the Big Three: The Check is in the Mail; My Wife Doesn't Understand Me; and, the biggest before this one, We're from the Government and We're Here to Help You…

It seems that if someone actually tells you, you cannot handle it.

How about Sir D. Attenborough, or Dr. Schellnhuber, CBE just for starters. The goal is at most 1 billion human beings. With clean hands 5+ billion human beings erased.

This kind of stuff got a bad rap in WWII, so it was renamed “conservation”, now Green New Deal.

Exactly as Abba Lerner publicly stated in 1971 at NY Queens College: "if Germany had accepted austerity, Hit*ler would not have been necessary".

Today if the west had accepted green austerity, with all kinds of sciency flimflam, Greta would not have been necessary.

Resorting to the abuse of kids with “die-ins” should tell you something of their desperation.

Using kids like this shows they even do not believe the models.

There's more: Dr. Happer just left his post at Pres. Trump's NSC. Happer's climate review, delayed a year, was opposed by Presidential Science Advisor Kelvin Droegemeier, a climate modeller, who was endorsed by none other than Obama's Holdren, a notorious population-reduction advocate and close ally of "Population Bomb" Ehrlich and Dr. Schellnhuber.

General circulation models can be useful, albeit for short time scales. Global climate models are blunt tools that pretend to be sophisticated but are no better than drastic simplifications. Some folks like to say GCM and not specify which one they are referring to, and some people will argue that global climate models are general circulation models with bells and whistles (actually with many bells and whistles taken away).

Global climate models are CFD (computational, not computerized) models in the sense that they apply some CFD concepts and include simplistic numerical solutions to Navier-Stokes. They are CFD on a technicality. The resolution and mesh are so simplistic that the models are portrayed as Corvettes and are really just Chevettes. It is the same way that simple algebraic equations used to generate numerical approximations to solutions of differential equations in climate models are presented as governing differential equations.

Michael Moon said:

“The problem with Nick Stokes is his last name. Navier-Stokes, that Stokes was his grand-father. His frantic attempts to defend Global Circulation Models are because they use Computerized Fluid Dynamics, his gram-pa’s legacy, and he studied it in school.”

Studied it in school? Dr Stokes did a lot more than that. He and his team were awarded the CSIRO research medal in 1995 for their development of the Fastflow CFD software. He spent 30 years at the CSIRO working on applied math and statistics.

“He is a slick dude, mis-leads at every opportunity. His income apparently depends on this”

He’s a retired grandfather. No one is paying him to do this.

This is a cheap and unnecessary attack Michael.

My my Toto,

here we are in Oz :

the great and mighty Wizard Nick has decreed that initial conditions are irrelevant to output from the black box that is behind his curtain; he tells us ensembles are the key! "If you want to know how a system responds to error, make one and see" and "Very quickly one error starts to look pretty much like another. This is the filtering that results from the very large eigenspace of modes that are damped by viscosity and other diffusion. It is only the effect of error on a quite small space of possible solutions that matters."

What’s that Toto? “Woof, woof”

You want to know what the actual uncertainty is? Well my love, the great and powerful NickOz doesn't trouble himself with an answer to that: he just says that silly old duffer Frank is a strawman, filled with straw and in need of a brain.

OK, yes and strawman Frank has said there actually are uncertainties and they make the GCMs temperature signal invisible?

Oh we’re back in munchkin land, don’t you see there are a whole variety of differential equations behind the curtain of GCMs and some make pretty butterflies?

Oh silly Toto I was blown to Oz by a weather event and because it’s pre-1980 it’s not a climate event as the CO2 is too low for hurricanes in Kansas yet, and Aunt Em will be pleased when I blow back.

I suspect that there are a fair number of practicing climate scientists who are cowardly lions — they have been displaced to Oz, and they are afraid to confront the Wizard.

Nick,

I particularly liked this.

“Analysis of error in CFD and GCMs is normally done to design for stability. It gets too complicated for quantitative tracing of error, and so a more rigorous and comprehensive solution is used, which is … just do it. If you want to know how a system responds to error, make one and see. In CFD, where a major source of error is the spatial discretisation, a common technique is to search for grid invariance. That is, solve with finer grids until refinement makes no difference.”

Can you cite a single example of a successful L2 convergence test on any AOGCM? Ignoring “white mice” experiments on the primitive equations, the only attempts I have seen (and there are not many published) have acknowledged that unpredictable (arbitrary) adjustments need to be made to the tuning parameters to align any grid refinement with coarser grid solutions.

A further problem is that with the mixture of explicit and implicit terms involved in the linked equation set, the solution is heavily dependent on the order in which the equations are updated. See the Donahue and Caldwell paper presented here:- http://www.miroc-gcm.jp/cfmip2017/presentation.html

An a priori requirement for the application of analytic methods for assessing error propagation is that the numerical formulation is actually solving the governing equations you think it is. In the case of the AOGCMs this is simply not the case. They do not conserve water mass. They cannot match atmospheric angular momentum. They cannot match the atmospheric temperature field, nor regional estimates of temperature and precipitation, nor SST, nor SW progression, even in hindcast. The only thing that they do consistently is to preserve the aggregate, internal model relationship: ocean heat uptake varies with net flux = F - restorative flux. They do this, however, for radically different estimates of the forcing F, the ocean heat uptake, and the emergent temperature-dependent feedback on which the restorative-flux term is calculated. It makes more sense to me, then, to take this latter equation on its own and test the observational constraints on the credible parameter space, rather than put any credibility into numerical models which are not just error prone, but error prone in a highly unpredictable way.
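The parameter-space test suggested here can be sketched with the aggregate balance N = F - lam*dT. Every number below is an illustrative placeholder, not an observational estimate; the point is only how a parameter sweep of that one equation maps input ranges into an implied-sensitivity range.

```python
def implied_ecs(f_hist, imbalance, warming, f_2x=3.7):
    """Aggregate balance N = F - lam*dT gives lam = (F - N)/dT and an
    equilibrium sensitivity ECS = F_2x / lam (all inputs are placeholders)."""
    lam = (f_hist - imbalance) / warming
    return f_2x / lam

# Sweep a made-up parameter box to see how wide the implied ECS range is.
ecs_values = [implied_ecs(f, n, t)
              for f in (1.8, 2.4, 3.0)    # assumed historical forcing, W/m^2
              for n in (0.4, 0.7, 1.0)    # assumed net imbalance, W/m^2
              for t in (0.8, 1.0, 1.2)]   # assumed observed warming, K
print(min(ecs_values), max(ecs_values))   # several-fold spread
```

Even this crude sweep shows a several-fold spread in the implied sensitivity, which is the kind of observationally constrained uncertainty statement the comment argues for.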

I say this as someone who has (often) run error propagation tests on numerical models, as part of an uncertainty analysis. The difference is that those models were credible.

kribaez

Thanks for the interesting critique, way way away from all these ‘opinion-based’ comments upthread having nothing to do with science, let alone engineering.

Rgds

J.-P. D.

You point out some other good parameters that must have uncertainty associated with them too. Climate as defined by the models just doesn't mean much. Being able to say the Earth will be 4 degrees hotter in 100 years basically doesn't give a clue about climate change. Will some areas be wetter or drier, cooler or warmer, have more wind or less wind? This is where relying on the models just isn't satisfying.

"Can you cite a single example of a successful L2 convergence test on any AOGCM?"

No. GCMs are limited by a Courant condition based on gravity waves (the limit that would be the speed of sound in a closed space). It just isn't practical to get complete grid invariance.

"See the Donahue and Caldwell paper presented here"

Thanks for the very useful link. He's not talking about the core PDE algorithm, but the various associated processes. As he says, "There is wisdom to certain orders, but no rules that I know of". He's not saying that people are getting it wrong; he's saying that if you made other choices, it could make a substantial difference. He thinks it could explain some of the difference between models that is observed. Well, it could. The differences are there and noted. That isn't anything new.

"… is that the numerical formulation is actually solving the governing equations you think it is. In the case of the AOGCMs this is simply not the case. They do not conserve water mass. They cannot match atmospheric angular momentum."

I'm not sure what your basis is for saying that. They certainly have provision for conserving water mass; phase change is a complication, but they do try to deal with it, and I don't know of evidence that they fail. Likewise with angular momentum, where surface boundary layers are the problem, but again I think you'd need to provide evidence that they fail.

But the thing about "the numerical formulation is actually solving the governing equations you think it is" is that you can test whether the governing equations actually have been satisfied.

Nick,

“Likewise with angular momentum, where surface boundary layers are the problem, but again I think you’d need to provide evidence that they fail.”

There are hundreds of papers on this subject. You might start here (while noting that these comparisons allow for forced "nudging" of the QBO and prescribed SST boundary conditions):-

https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2011JD016555

For water mass conservation, see precipitation bias data the SI data from Randall, D. A., and Coauthors, 2007: Climate models and their evaluation. Climate Change 2007: The Physical Science Basis.

“But the thing about “the numerical formulation is actually solving the governing equations you think it is” is that you can test whether the governing equations actually have been satisfied.”

Not normally you can’t. If you have an analytic solution available for a white mouse problem, you can test against that. Here you do not. Hence, for a given numerical formulation you can at best confirm that the system converges to A solution for that formulation. There is no guarantee that it is a CORRECT solution of the governing equations, since such numerical solution is subject to inter alia time and space truncation error. In the case of climate models, it is also subject to non-physical adjustment to force convergence – hyperviscosity or “atmosphere like molasses” to quote Gerald Browning.

If you can run successful L2 convergence tests that might give you some optimism that it is a credible solution within the epistemology of the numerical scheme. If you can change the solution order of coupled or linked equations and get the same solution, that might also give you some comfort with respect to truncation errors. If you fail at both, that is sufficient to know that resolution at grid level scale is based on worthless self-delusion. The only thing left is to test whether in aggregate behaviour, a model is conserving what it is supposed to be conserving.

Even if someone does not understand the intricacies of numerical analysis very well, most intelligent individuals have an intuitive grasp that large errors in comparisons between observed and modeled critical variables in history matching ("HINDCASTING") do not bode well for any future projections.

AOGCMs might still have some value for gaining insight into large-scale mechanisms or missing physics, but IMO the reliance on projections from such models to inform decision-making is delusional.

kribaez

"Not normally you can't [test]."

Yes, you can, by the old-fashioned method of substituting the solution into the difference (or equivalent) equations representing the equation to see if they are satisfied.

I don't think this would normally be done on a GCM in production mode. But it will be done extensively during development.
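That substitution test is easy to show on the simplest possible case. The sketch below uses the scalar decay equation y' = -y with a forward-Euler scheme, a toy stand-in rather than anything from a real GCM: the scheme's own trajectory leaves a round-off-sized residual, while a plausible-looking imposter does not.

```python
import math

def residual(y, dt):
    """Residual of the forward-Euler scheme for y' = -y:
    r_k = (y[k+1] - y[k]) / dt + y[k]; zero iff the scheme is satisfied."""
    return [(y[k + 1] - y[k]) / dt + y[k] for k in range(len(y) - 1)]

dt = 0.1
y = [1.0]                      # generate a forward-Euler trajectory...
for _ in range(50):
    y.append(y[-1] * (1 - dt))
good = max(abs(r) for r in residual(y, dt))   # ...substituted back: ~round-off

# The exact continuum solution exp(-t) does NOT satisfy the difference
# equation; the residual exposes the truncation-sized mismatch.
z = [math.exp(-dt * k) for k in range(51)]
bad = max(abs(r) for r in residual(z, dt))
print(good, bad)
```

The test tells you the difference equations were satisfied; it says nothing about whether the difference equations were the right ones, which is the point of contention in this exchange.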

Nick,

To see that the boundary layer fails miserably, see Sylvie Gravel's manuscript (use Google): artificial drag and diffusion at the boundary are used to slow down the unrealistic growth of the velocity at the surface. But the difference between the observations and the model patch causes the model to deviate from reality in a few days.

Jerry

When I read about using “ensembles” to arrive at average GCM values, I always wonder how many GCM runs are quietly round-filed or sent to the bit bucket when their results don’t reflect the “narrative.” IOW only positive results are acceptable for publication.

Anyone have any examples of that happening?

Thank you for the enlightening posts and conversations.

Sadly, it seems to remain that the projections don’t match reality. Even if reality was to hurry and catch up, the projections still would not have matched.

Are the early data sets being adjusted and the original collected data being destroyed (or hidden so they can’t be used)? If so, it seems to me that this is all noise, not science.

A bit cryptic. b² or not b²? That is the question.

I am trying to post a LaTeX file on numerical approximations of derivatives. It worked successfully on Climate Audit, but not here; hence the test. I see that you did not use LaTeX in your post, but winged it.

Nick,

It is very clear you are not up to date on the literature. In particular I suggest you read the literature on the Bounded Derivative Theory (BDT) for ODE’s and PDE’s by Kreiss and on the atmospheric equations by Browning and Kreiss. Your example is way too simple to explain what happens with hyperbolic PDE’s with multiple time scales.

Jerry

"Your example is way too simple to explain what happens with hyperbolic PDE's with multiple time scales."

I'm not trying to explain that. I'm trying to explain why you can't ignore the PDE totally, as Pat Frank did.

What happened to my last post? Is it being censored? I guess hard mathematics is not allowed on this site?

Nick, "I'm not trying to explain that. I'm trying to explain why you can't ignore the PDE totally, as Pat Frank did."

Your PDEs produce linear extrapolations of GHG forcing, Nick. Showing that is what I did. Once linearity of their output is shown, linear propagation of their error is correct.

Nick,

But that is exactly what the modelers are doing. By adding unrealistically large dissipation, they have altered the accepted continuum dynamical equations to be closer to a heat equation than to a hyperbolic equation with small dissipation. And heat equations have very different error characteristics than hyperbolic ones with small dissipation.
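The contrast being drawn (hyperbolic versus heat-equation error behaviour) shows up already in the exact amplification factor of a single Fourier error mode. The wavenumber and viscosity below are assumed illustrative values, not model parameters.

```python
import math

def mode_amplitude(k, t, nu, c=1.0):
    """Magnitude after time t of an error mode exp(i*k*x) evolving under
    u_t + c*u_x = nu*u_xx: pure advection only shifts the phase
    (magnitude 1), while diffusion damps it by exp(-nu * k**2 * t)."""
    return math.exp(-nu * k ** 2 * t)

k, t = 2 * math.pi * 10, 1.0                 # an assumed small-scale mode
neutral = mode_amplitude(k, t, nu=0.0)       # hyperbolic: error persists
damped = mode_amplitude(k, t, nu=1e-3)       # added dissipation: "molasses"
print(neutral, damped)
```

With the dissipation switched on, small-scale error, and the small-scale physics with it, is wiped out; that is the sense in which the modified system behaves like a heat equation rather than the original hyperbolic one.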

I think it is quite humorous that a simple linear model can reproduce the results of thousands of hours of wasted computer time.

I have posted a simple tutorial on the numerical approximation of a derivative on Climate Audit. All such approximations require that the continuum solution of the differential equations be differentiable. But using discontinuous forcing, as in climate or weather models, means they are mimicking a discontinuous continuum solution and injecting large amounts of energy into the smallest scales of the model; thus the need for the large dissipation. Kreiss and I have published a manuscript on "The impact of rough forcing on systems with multiple time scales". Also, Browning, Hack and Swarztrauber have shown that the large dissipation used in climate and weather models destroys the numerical accuracy. As I stated, you need to keep up with the literature.

And finally, I have submitted a manuscript showing that the hydrostatic equations are not the correct equations (under review). I also expect problems in the review process, but the math cannot be refuted. Given that is the case, it shows that the wrong dynamical equations have been tuned to produce the answers wanted by the modelers. Then Pat's error analysis becomes a bit more believable.

Nick,

But that is exactly what climate and weather modelers are doing. By adding unrealistically large dissipation, they have essentially modified the accepted dynamical equations to be more like a heat equation than a hyperbolic one with small dissipation. This is a continuum error and overwhelms the numerical truncation error (Browning, Hack and Swarztrauber). The large dissipation is necessitated because the forcing is mimicking a discontinuous continuum solution. As shown in the tutorial I posted on Climate Audit, all numerical approximations require that the continuum solution of the PDE be differentiable, so this is a basic violation of numerical analysis. Kreiss and I wrote the manuscript "The impact of rough forcing on systems with multiple time scales", which you need to read.

Finally, I have submitted a manuscript that shows that the wrong continuum equations are being approximated. I expect problems in the review process, but the mathematics cannot be refuted. I think this makes Pat's error analysis more believable, i.e., the wrong equations have been tuned to provide the answer the modelers want to continue their funding, but the actual model errors are huge. In any case, I find it totally amusing that a simple linear model can reproduce the thousands of hours of wasted computer time.

Jerry

Nick, I have a generic question on fluid dynamics.

Assume a complex system with convection, evaporation, and condensation. My real physical model was a large tube with an ice-water bath at the top and a halogen projector bulb shining down on a little island surrounded by water, with a thermocouple on the island shaded by a tiny umbrella from the local bar. I let this run until a stable equilibrium was established. I then changed the atmosphere from air to about 50% carbon dioxide. Drum roll please… Results: the temperature went up on the island, for a couple of minutes. Five minutes after the gas exchange, the temperature was back to the original equilibrium, with no net change.

My little toy model had a robust equilibrium state and carbon dioxide had no net change on the island.

Now my question: if I were to add additional nonlinear methods of heat transport, e.g. a sealed heat pipe within the tube, a highland pond around the island, and a little stream down to the lake fed with condensate from the ice bath at the top (total water mass the same), what would happen to the robustness of the equilibrium state?

I predict that the equilibrium state will become more stable as more dis-similar non-linear heat paths are added. I suggest that this generic question could be addressed by computational models.

It would be fun to repeat my experiment in a silo big enough for a cloud to form.

Basically, the real model is that about 1361 W/m2 of sunlight approaches the Earth. A lot reaches the surface. Stuff happens, and the heat leaves. Temperatures are determined by the thermal resistance that that heat flux encounters. The global models of Manabe and Wetherald got all that pretty right (including varying GHG), but they had to estimate the resistances. GCMs add in synthetic weather to improve the estimate, and so can give a more reliable estimate of the effects on temperature.

I thought I would contribute two quotes from John Tukey

1. Anything worth doing is worth doing badly. In context, he did not literally mean "badly", but it is an important counterpoise to Emerson's "anything worth doing is worth doing well". In the common case of time pressure, it is important not to stall and postpone forever in search of the best answer; you have to get at least a workable first approximation to be of any use at all.

2. It is better to have an approximate answer to the right question, …, than to have an exact answer to the wrong question, … . Not that propagation of the uncertainty of the initial values in the DE solution, addressed in detail by Nick Stokes here, is exactly a "wrong" question; but the propagation of the error/uncertainty of parameter estimates, addressed by Pat Frank, is definitely a right question. His approach may not be as good as the hypothetical bootstrapping and other resampling procedures that cannot meaningfully be carried out until there are much faster computers, but his approach is eminently defensible, and not likely to be improved upon any time soon. I am hoping critics like Roy Spencer and Nick Stokes are able to provide their ideas of improved computations before, say, 2119.

Pat Frank did a very good calculation of a very good approximation to the uncertainty in the GCM "forecasts" that results from uncertainty in one parameter estimate.
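The resampling procedures alluded to above can at least be illustrated in miniature. The toy "model" and the parameter distribution below are entirely invented; the sketch only shows how resampling one uncertain parameter turns into a spread on the projection.

```python
import random

def toy_projection(sensitivity, forcing_per_year=0.04, years=80):
    """Hypothetical stand-in for a model run: warming = sensitivity times
    cumulative forcing. Both the linear form and the numbers are invented."""
    return sensitivity * forcing_per_year * years

rng = random.Random(1)
# Resample the one uncertain parameter and rerun the "model" each time.
runs = [toy_projection(rng.gauss(0.8, 0.2)) for _ in range(10_000)]
mean = sum(runs) / len(runs)
spread = (sum((r - mean) ** 2 for r in runs) / len(runs)) ** 0.5
print(mean, spread)   # the parameter spread maps into a projection spread
```

A real GCM makes each of those ten thousand evaluations prohibitively expensive, which is exactly why the comment calls the full resampling exercise hypothetical for now.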

Nick Stokes showed how to do an estimate of the uncertainty propagated from uncertainty in the initial values (estimates of the present state).

Other sources of uncertainty:

All the other parameters whose point estimates have been written into the code, and hence treated as "known" by Nick Stokes, but not considered known about the underlying processes by anybody.

Choice of grid size and numerical integration algorithm expressed in the GCM computer code.

By any reasonable estimate, Pat Frank has underestimated the uncertainty in the GCM model output. He has written a really fine and admirable paper.

And, kudos to WUWT for making Nick Stokes’ essay public for reading and commentary. And kudos to Pat Frank and Nick Stokes for engaging with the commenters and critics.

Readers will not have missed that the same sense of urgency motivates the modelers to make simplifying assumptions about parameters and processes that will be better known 100 years hence. The models are admirable achievements of human effort and intelligence. They just have not been shown yet to be adequate to evaluate any expensive public policy other than further work on the models.

well said

Nick, thank you for a careful statement and analysis from someone who knows what he is talking about. I stopped reading Pat Frank’s analysis when he interpreted his discovery (made earlier by Willis Eschenbach) that the temperature predictions of GCM models obey Taylor’s Theorem (1712) as some deep insight into the models rather than as a mathematical constraint that every true or false model must obey.

Your error analysis assesses an important problem in computer science – the computational stability of the algorithms used – but not the problem that is of most practical or policy importance. For assessing computational stability, the concept of “error” and its propagation is the one that you discuss, but for policy purposes a different concept is needed. Several commentators have intuited that your concept of error is not the same as theirs, but it is worth setting out the difference.

To do that, we need the solution to “God’s differential equation”, which governs the movement of every atom in the universe. Using your simplified symbolic form, this is

z’ = G(t)*z + g(t)

contrasted with the climate science differential equation, your equation (1):

y’ = A(t)*y + f(t)

G(t) and A(t), and g(t) and f(t), differ because we probably do not know all of the processes at work, because we must parameterize some processes for tractability and we do not know all of the right parameter values, and because we cannot know (and do not want to know) the details of every atom in the universe. God solves his differential equation at a space-time resolution of one Planck length or so, and we can find the solution z by observation of the world as it evolves.

The error that matters for policy is not the computational stability of our algorithm, but the difference between our solution and God’s, which is given by

(y-z)’ = G(t)*(y-z) + (A(t)-G(t))*y + (f(t)-g(t))

Analysis of this difference requires knowledge of G(t) and g(t), for the reasons that you set out. We cannot assess it from knowledge of A(t) and f(t) alone.
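The distinction can be made concrete with a toy scalar example (my own illustration with invented coefficients, not anything taken from the models): integrate both equations and watch y−z settle at a nonzero offset even though each integration is perfectly stable.

```python
# Toy scalar versions of the two equations above: z' = G*z + g ("God's
# equation") and y' = A*y + f (the model), both integrated with forward
# Euler. All coefficients are made up for illustration.
def euler(rate, y0, dt, steps):
    y = y0
    for _ in range(steps):
        y = y + dt * rate(y)
    return y

G, g = -0.5, 1.0   # hypothetical "true" dynamics
A, f = -0.4, 1.1   # hypothetical model with parameter error
z = euler(lambda z: G * z + g, 1.0, 0.01, 1000)
y = euler(lambda y: A * y + f, 1.0, 0.01, 1000)

# Both runs are numerically stable, but the policy-relevant error y - z
# tends to the difference of the steady states, -f/A - (-g/G) = 0.75,
# not to zero.
print(y - z)
```

The point of the sketch: no amount of computational stability in the y integration reveals anything about y−z, because the offset comes from the (A−G) and (f−g) terms, which require knowledge of G and g.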

The CMIP project provides a clever piece of misdirection. Instead of comparing y to z, it compares y1, y2, y3, … derived from A1(t), A2(t), A3(t), …. However, none of these tells us anything about the “policy-relevant” error y-z. The only place where I see regular comparisons between model solutions and observation is at sites like WUWT, and the comparisons do not seem flattering. The IPCC reports tend to focus on the latest models and their predictions, and by definition it is too early to assess their accuracy against data that was unknown at the time the models were formulated.

But, to reassure Pat and Willis, God’s solution z and the error y-z will both obey Taylor’s Theorem.

Paul,

See my analysis above and my statement of the difference between modeling molasses and air.

Nick does not understand the difference between continuum error and model error.

Jerry

“So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number.”

That is as meaningful as saying “life is just the discrepancy between the world with life and the world without”.

And then Nick is effectively arguing that economics, art, politics, etc. do not exist because they are “just” life, which disappears in the equations.

The point is that “error” is not “just” anything. “Error” – or, to use a more accurate term that gives respect to the subject, “natural variation” – is, like life, a highly complex issue with many facets, and at least in part NON-LINEAR!! Yes, like Nick, you can in some circumstances use a model for “error” under which it can be largely ignored. But just because that model CAN be used does not mean it is generally applicable, unless you PROVE by EXPERIMENTATION that the variation behaves as required.

To put it in the simplest form “natural variation” is a theoretical way to model what we don’t know. And what Nick shows is that he has not the slightest clue about what he doesn’t know.

You can model A climate but you cannot model THE climate. There is a computational solution to THE climate and it is currently being run on a platform we call the Galaxy using software we call the laws of physics.

Cephus0

Great comment!

Jerry

This was a nice opening discussion of one aspect of the problems with solving systems of differential equations (error propagation), and gives some perspective on the subject. I appreciate Nick Stokes’ laying it out, and recommend reading his own blog post, which is more thorough.

However, this discussion is flawed (a term I’ve come to dislike, but it’s the best one applicable) from the beginning, where Mr. Stokes loosely defines “error”: “It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number.”

I’m at a loss to even guess what is meant by “what you believe is the true number.” The error term in a numerical integration scheme is given by the order of magnitude of the sum of all of the terms following the last in a truncated series extrapolation; it’s the difference between the series extrapolation one has performed and the series extrapolation of an infinite series that one has not performed. It has no defined algebraic sign, let alone magnitude. There isn’t a number representing “what you believe is the true number.” There is just an undefined term.

In ordinary differential equation integration, error “control” is often applied in the form of a variable time step, a “mop-up step”, or both. In each case, the integration is performed using a certain order of approximation most of the time, but occasionally a higher-order approximation is performed. The difference is taken, and if it exceeds a certain percentage of the low-order time step values, then this difference is added to the last integration step. This is a “mop-up step,” and its utility has been the subject of debate for decades.

The variable time step check is somewhat better. Two integrations between t0 and t1 are performed, first with the normal time step t1-t0 = h, and then with two time steps of size h/2. If the second differs from the first by some prescribed amount, the time step going forward is reduced (it isn’t necessarily halved – there are a number of algorithms having more sophisticated time step control).
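The step-doubling check just described can be sketched in a few lines (my own illustration, using forward Euler on y’ = -y; the tolerance and step sizes are invented): one step of size h is compared against two steps of size h/2, and the step is shrunk when they disagree too much.

```python
# Minimal variable-step check: compare one Euler step of size h with
# two steps of size h/2, and halve h when the difference exceeds a
# prescribed tolerance. Test problem: y' = -y, exact solution exp(-t).
def euler_step(f, t, y, h):
    return y + h * f(t, y)

def try_step(f, t, y, h):
    y_full = euler_step(f, t, y, h)               # one step of size h
    y_half = euler_step(f, t, y, h / 2)           # two steps of size h/2
    y_half = euler_step(f, t + h / 2, y_half, h / 2)
    return y_half, abs(y_half - y_full)           # local error estimate

f = lambda t, y: -y
t, y, h, tol = 0.0, 1.0, 0.1, 1e-4
while t < 1.0:
    y_new, err = try_step(f, t, y, h)
    if err > tol:
        h /= 2            # disagreement too large: shrink the step
        continue
    t, y = t + h, y_new

print(y)   # close to the exact value exp(-1) ≈ 0.3679
```

Real controllers are more sophisticated, as the comment notes: they also grow the step again when the estimate is small, and usually use embedded higher-order pairs rather than plain step-doubling with Euler.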

In short, the treatment of “error” herein is based on a faulty definition. But there are other problems with the discussion.

Probably the biggest argument I have with it is in the assertion that “loss of memory” of initial conditions in the integration of the equations of motion in CFD is a feature. What that implies is that questions of existence and uniqueness of solutions of the Navier-Stokes equations are superfluous. We may not be able to find a closed-form solution to the general Navier-Stokes equations, but we can in restricted cases. And in those (or any closed-form) solutions, there can be no “loss of memory” of conditions at any point.

One of my career fields of interest is astrodynamics, in which we deal with large systems of ordinary non-linear differential equations. The number of scales of forces, masses, and times involved is very large, and dealing with them computationally is extremely challenging. Nevertheless, we are able to start and stop a simulation of the entire solar system (not just planets, but all of the known moons and asteroids) at any point over a span of millions of years, and achieve the same results no matter how we do it. The solar system is no less chaotic than a fluid system. But “chaos” is not randomness: it’s deterministic. And a numerical solution that does not exhibit deterministic behavior by being able to run in reverse does not fit the definition of a “solution.” It’s simply garbage.
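The run-in-reverse criterion can be demonstrated on a small scale (my own illustration, not from the comment): a velocity-Verlet integration of a harmonic oscillator, x’’ = -x, run forward and then with the time step negated recovers the initial state to roundoff, because the scheme is deterministic and time-reversible.

```python
# Velocity-Verlet integration of x'' = -x, run forward 10000 steps and
# then backward 10000 steps by negating dt. A deterministic, reversible
# scheme returns to its starting state up to floating-point roundoff.
def verlet(x, v, dt, steps):
    for _ in range(steps):
        a = -x                          # acceleration at the old position
        x = x + v * dt + 0.5 * a * dt * dt
        v = v + 0.5 * (a - x) * dt      # average of old and new acceleration
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, 0.01, 10000)    # forward in time
x2, v2 = verlet(x1, v1, -0.01, 10000)   # backward in time
print(abs(x2 - x0), abs(v2 - v0))       # both at roundoff level
```

Note this tests determinism and reversibility of the scheme, not accuracy: the forward trajectory itself still carries truncation error relative to the exact cosine solution.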

There is much, much more to the bad behavior of CFD. For example, the treatment of turbulence Mr. Stokes gives in his (quite good) blog post hints at something that, unfortunately, he doesn’t develop. He cites the k-epsilon model, where k represents the turbulent kinetic energy of the flow and epsilon its dissipation rate. They are introduced into the Reynolds-averaged Navier-Stokes equations to make up for the fact that there are more unknowns than equations – that is, the problem is mathematically underdetermined just on the basis of physics. Though “kinetic energy” and “dissipation” sound physical, they are governed by modeled equations with adjustable coefficients having no direct basis in physics, and are tuned to make numerical results agree with experimental measurements. No climatic experiments have been or can be performed to define these values for GCMs.

I applaud Nick Stokes for opening this part of the discussion, and hope that more people familiar with computational physics will join in.

“where Mr. Stokes loosely defines “error””

It was deliberate, and I made that clear. Differential equations don’t care about causes of perturbation. They just take points and map them onto new domains. And my analysis was all about how two neighboring points are either moved apart from each other, or brought together. If you have a separation that you attribute to error, then the first amplifies it, and the second attenuates it. That is all.

The same amplification or not will apply to distributions that you might wish to call uncertainty.

“If the second differs from the first by some prescribed amount”

I think that sums up the “error” situation. You don’t have one that is right and one wrong. You have one that you believe is probably better, but the key metric is the difference. You take action based on that.

“We may not be able to find a closed form solution to the general Navier Stokes equations, but we can in restricted cases. And in those (or any closed form) solutions, there can be no “loss of memory” of conditions at any point.”

I would invite you to nominate any non-trivial closed-form solution which is usefully specified as an initial value problem. They are mostly steady state – Poiseuille pipe flow is a classic. In its applications, at least, you just take it as a parabolic velocity profile, and accept that that will be the converged result, however it started. It’s true that, subject to a lot of assumptions, you can find one convergent path. But it is far from unique.

The reason I described it as a “feature” is this. We are trying to model real world flows, like that pipe flow. Those flows are often steady or periodic. If not, we probably want statistics of a transient flow, eg lift/drag. The real flows do not have known or knowable initial conditions. CFD is structured as time-stepping, and so does have to start somewhere. If the start state mattered, the CFD would not be emulating the real world. Fortunately, it usually doesn’t.
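This convergence can be shown with a minimal sketch (my own illustration, not from the comment): a 1-D stand-in for pressure-driven pipe flow, u_t = u_xx + 1 with u = 0 at both walls, relaxes to the same parabolic profile from two very different initial states.

```python
# Explicit time-stepping of u_t = u_xx + 1 on [0,1] with no-slip walls.
# Whatever the starting state, the run converges to u(x) = x(1-x)/2,
# i.e. the converged result "forgets" the initial condition.
n = 21
dx = 1.0 / (n - 1)
dt = 0.25 * dx * dx          # well inside the explicit stability limit

def relax(u, steps):
    for _ in range(steps):
        interior = [u[i] + dt * ((u[i-1] - 2.0*u[i] + u[i+1]) / (dx*dx) + 1.0)
                    for i in range(1, n - 1)]
        u = [0.0] + interior + [0.0]   # enforce no-slip walls
    return u

u_rest = relax([0.0] * n, 20000)                           # start from rest
u_noisy = relax([5.0 * (i % 2) for i in range(n)], 20000)  # noisy start

# Both runs land on the parabolic profile (max 0.125 at midchannel),
# so the difference between them is essentially zero.
print(max(abs(a - b) for a, b in zip(u_rest, u_noisy)))
```

The decay of the difference between the two runs is exactly the homogeneous error propagation of the opening post: here A has only negative eigenvalues, so any separation between states is attenuated.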

It appears to me that a good analogy here would be the ensemble modeling of hurricanes.

The result is that all the solutions are pretty close together in the first day. After several days, a few solutions will have trajectories that are fairly close together, with some that are obvious outliers.

With hurricanes, we have a system of models that can be verified rather quickly.

With GCM’s, not so much – although within 20 years, the models have already been shown to be somewhat faulty.

A question though:

Do the hurricane forecast models work by tweaking the initial conditions and running the program?

If that is the case, then the math behind the model is inaccurate to some degree. The results then will always be wrong. Some results, however, are just less wrong than others and may be useful.
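A toy version of what I understand an initial-condition ensemble to be (illustrative only, not an actual forecast system): integrate the Lorenz-63 equations from slightly perturbed starting points and watch the spread grow with lead time, just as hurricane track ensembles do.

```python
# Forward-Euler integration of the Lorenz-63 system. Five ensemble
# members start within 4e-5 of each other; the spread in x stays tiny
# at short lead times and grows large at long ones.
def step(s, dt=0.002, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(s, steps):
    for _ in range(steps):
        s = step(s)
    return s

base = run((1.0, 1.0, 1.0), 5000)            # spin up onto the attractor
members = [(base[0] + 1e-5 * k, base[1], base[2]) for k in range(5)]

for steps in (500, 5000, 15000):             # short, medium, long lead time
    xs = [run(m, steps)[0] for m in members]
    print(steps, max(xs) - min(xs))          # spread grows with lead time
```

This is the other side of the opening post’s analysis: where the linearised A has positive eigenvalues, small separations between ensemble members are amplified, which is why the members are tight on day one and scattered after several days.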