Posted on June 20, 2020

by Gerald Browning

Climate model sensitivity to CO2 is heavily dependent on artificial parameterizations (e.g., clouds, convection) implemented in global climate models that use the wrong atmospheric dynamical system and excessive dissipation.

The peer-reviewed manuscript entitled “The Unique, Well Posed Reduced System for Atmospheric Flows: Robustness In The Presence Of Small Scale Surface Irregularities” is in press at the journal Dynamics of Atmospheres and Oceans (DAO) [link], and the submitted version of the manuscript, which differs slightly from the final published version, is available on this site. The link to the paper is here: Manuscript

**Abstract:** It is well known that the primitive equations (the atmospheric equations of motion under the additional assumption of hydrostatic equilibrium for large-scale motions) are ill posed when used in a limited area on the globe. Yet the equations of motion for large-scale atmospheric motions are essentially a hyperbolic system that, with appropriate boundary conditions, should lead to a well-posed system in a limited area. This apparent paradox was resolved by Kreiss through the introduction of the mathematical Bounded Derivative Theory (BDT) for any symmetric hyperbolic system with multiple time scales (as is the case for the atmospheric equations of motion). The BDT uses norm estimation techniques from the mathematical theory of symmetric hyperbolic systems to prove that if the norms of the spatial and temporal derivatives of the ensuing solution are independent of the fast time scales (thus the concept of bounded derivatives), then the subsequent solution will only evolve on the advective space and time scales (slowly evolving in time, in BDT parlance) for a period of time. The requirement that the norms of the time derivatives of the ensuing solution be independent of the fast time scales leads to a number of elliptic equations that must be satisfied by the initial conditions and the ensuing solution. In the atmospheric case this results in a 2D elliptic equation for the pressure and a 3D elliptic equation for the vertical component of the velocity.

Utilizing those constraints, together with an equation for the slowly evolving in time vertical component of vorticity, leads to a single-time-scale (reduced) system that accurately describes the slowly evolving in time solution of the atmospheric equations and is automatically well posed for a limited-area domain. The 3D elliptic equation for the vertical component of velocity is not sensitive to small-scale perturbations at the lower boundary, so the equation can be used all the way to the surface in the reduced system, eliminating the discontinuity between the equations for the boundary layer and the troposphere, as well as the problem of unrealistic growth of the horizontal velocity near the surface in the hydrostatic system.

The mathematical arguments are based on the Bounded Derivative Theory (BDT) for symmetric hyperbolic systems introduced by Professor Heinz-Otto Kreiss over four decades ago and on the theory of numerical approximations of partial differential equations.
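Schematically, the multiscale setting behind the BDT can be written as follows. This is a generic sketch in the spirit of Kreiss's formulation, not the specific atmospheric system from the manuscript; the operators P₁, P₀ and the exponent p are placeholders:

```latex
% Generic two-time-scale symmetric hyperbolic system (schematic form,
% not the specific atmospheric equations):
\[
  \frac{\partial u}{\partial t}
    \;=\; \frac{1}{\varepsilon}\,P_{1}(\partial_{x})\,u
    \;+\; P_{0}(\partial_{x})\,u \;+\; F,
  \qquad 0 < \varepsilon \ll 1 .
\]
% Bounded-derivative initialization: choose the initial data so that
\[
  \Bigl\| \frac{\partial^{\nu} u}{\partial t^{\nu}}(\cdot,0) \Bigr\|
  \;=\; O(1), \qquad \nu = 0,1,\dots,p,
\]
% uniformly in \varepsilon; each such condition yields an elliptic
% constraint on the data (cf. the 2D pressure equation and the 3D
% equation for the vertical velocity mentioned in the abstract).
```

The term 1/ε plays the role of the fast (e.g., gravity-wave) frequencies; requiring the time derivatives to remain O(1) as ε shrinks is what forces the elliptic constraint equations.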

What is the relevance of this research for climate modeling? At a minimum, climate modelers must make the following assumptions:

1. The numerical climate model must accurately approximate the correct dynamical system of equations.

Currently, all global climate (and weather) numerical models numerically approximate the primitive equations, i.e., the atmospheric equations of motion modified by the hydrostatic assumption. However, this is not the system of equations that satisfies the mathematical estimates required by the BDT for the initial data and subsequent solution in order to evolve as the large-scale motions in the atmosphere do. The correct dynamical system is introduced in the new manuscript, which goes into detail as to why the primitive equations are not the correct system.

Because the primitive equations use discontinuous columnar forcing (parameterizations), excessive energy is injected into the smallest scales of the model. This necessitates the use of unrealistically large dissipation to keep the model from blowing up, which means the fluid behaves more like molasses than air. References are included in the new manuscript showing that this substantially reduces the accuracy of the numerical approximation.
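The effect of excessive dissipation on the smallest scales can be illustrated with a back-of-the-envelope sketch. For periodic advection-diffusion, a Fourier mode of wavenumber k decays by the factor exp(-ν k² t), so an artificially large viscosity wipes out the small scales while barely touching the large ones. The values below are illustrative only, not taken from any model:

```python
import numpy as np

# Periodic advection-diffusion, u_t + c*u_x = nu*u_xx: advection only
# shifts the phase of a Fourier mode, while diffusion damps wavenumber k
# by the factor exp(-nu * k**2 * t).  Compare a modest viscosity with an
# artificially large ("molasses") one.
t = 1.0
k_large_scale, k_small_scale = 1, 50   # wavenumbers: large vs small spatial scale

for nu in (1e-4, 1e-1):                # modest vs excessive dissipation
    retain_large = np.exp(-nu * k_large_scale**2 * t)
    retain_small = np.exp(-nu * k_small_scale**2 * t)
    print(f"nu={nu:g}: large scale retains {retain_large:.4f} "
          f"of its amplitude, small scale retains {retain_small:.2e}")
```

With the excessive value the k = 50 mode is essentially annihilated while the k = 1 mode is nearly untouched, which is the "molasses" behavior described above.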

2. The numerical climate model correctly approximates the transfer of energy between scales as in the actual atmosphere.

Because the dissipation in climate models is so large, the parameterizations must be tuned in an attempt to artificially replicate the atmospheric spectrum. Mathematical theory based on the turbulence equations has shown that using the wrong amount or type of dissipation leads to the wrong solution. In the climate model case, this implies that no conclusions can be drawn about climate sensitivity, because the numerical solution is not behaving as the real atmosphere does.

3. The forcing terms (parameterizations) accurately approximate the corresponding processes in the atmosphere, and there is no accumulation of error over hundreds of years of simulation.

It is well known that there are serious errors in the parameterizations, especially with respect to clouds and moisture, which are crucial to the simulation of the real atmosphere. Pat Frank has addressed the accumulation of error in the climate models. In the new manuscript, even a small error in the system is shown to impact the accuracy of the solution in a short period of time.

One might ask how climate models can apparently reproduce the large-scale motions of the atmosphere in the past, given these issues. I have posted a simple example on Climate Audit (reproducible on request) showing that for any time-dependent system (even one that is not the correct system for the fluid being studied), if one is allowed to choose the forcing, one can reproduce any solution one wants. This is essentially what the climate modelers have done in order to match the previous climate, given the wrong dynamical system and excessive dissipation.
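The Climate Audit example is not reproduced here, but the underlying point is easy to sketch: given any model du/dt = f(u) and any target trajectory u*(t), the forcing F(t) = du*/dt - f(u*(t)) makes the forced model reproduce u* exactly, no matter how wrong f is. The decaying model and sinusoidal target below are my own illustrative choices:

```python
import numpy as np

# Target "observed" trajectory we want the model to reproduce.
def u_star(t):
    return np.sin(t)

def du_star(t):
    return np.cos(t)

# A deliberately wrong model: pure exponential damping.
def f(u):
    return -2.0 * u

# Choose the forcing to cancel the model error exactly:
#   F(t) = du*/dt - f(u*(t))
def forcing(t):
    return du_star(t) - f(u_star(t))

# Integrate du/dt = f(u) + F(t) with classical RK4.
def rk4(u0, t0, t1, n):
    h = (t1 - t0) / n
    t, u = t0, u0
    for _ in range(n):
        k1 = f(u) + forcing(t)
        k2 = f(u + 0.5*h*k1) + forcing(t + 0.5*h)
        k3 = f(u + 0.5*h*k2) + forcing(t + 0.5*h)
        k4 = f(u + h*k3) + forcing(t + h)
        u += (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return u

u_end = rk4(u_star(0.0), 0.0, 10.0, 2000)
print(abs(u_end - u_star(10.0)))  # tiny: wrong model + tuned forcing reproduces u*
```

The "model" damps everything toward zero, yet the tuned forcing makes it track the sine wave to numerical precision, which is exactly why matching past behavior with tunable forcing validates nothing.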

I reference a study of the accuracy of a primitive equation global forecast model by Sylvie Gravel et al. [link]. She showed that the largest source of error in the initial stages of a forecast is the excessive growth of the horizontal velocity near the lower boundary. Modelers have added boundary layer drag/dissipation in an attempt to prevent this from happening. I note in the new manuscript that this problem does not occur with the correct dynamical system, and that in fact the correct system is not sensitive to small-scale perturbations at the lower boundary.

**Biosketch:** I am an independent applied mathematician, trained in partial differential equations and numerical analysis, concerned about the loss of integrity in science through the abuse of rigorous mathematical theory by numerical modelers. I am not funded by any outside organization. My previous publications can be found on Google Scholar by searching for Browning and Kreiss.

Please do not reply to this comment. Replies are reserved for comments by Gerald Browning that should be of general interest to readers of this blog.

Just the first page from a Google Scholar search for Kreiss:

[BOOK] Time Dependent Problems and Difference Methods. B. Gustafsson, H.-O. Kreiss, J. Oliger, 1995. Cited by 1328.

[CITATION] Initial boundary value problems for hyperbolic systems. H.-O. Kreiss, Communications on Pure and Applied Mathematics, 1970. Cited by 932.

[BOOK] Initial-Boundary Value Problems and the Navier-Stokes Equations. H.-O. Kreiss, J. Lorenz, SIAM, 2004. Cited by 1005.

Stability theory of difference approximations for mixed initial boundary value problems. II. B. Gustafsson, H.-O. Kreiss, A. Sundström, Mathematics of Computation, 1972. Cited by 609.

Comparison of accurate methods for the integration of hyperbolic equations. H.-O. Kreiss, J. Oliger, Tellus, 1972. Cited by 695.

[CITATION] Methods for the Approximate Solution of Time Dependent Problems. H. Kreiss, J. Oliger, 1973. Cited by 425.

Stability theory for difference approximations of mixed initial boundary value problems. I. H.-O. Kreiss, Mathematics of Computation, 1968. Cited by 213.

[CITATION] Problems with different time scales for partial differential equations. H.-O. Kreiss, Communications on Pure and Applied Mathematics, 1980. Cited by 174.

Stability of the Fourier method. H.-O. Kreiss, J. Oliger, SIAM Journal on Numerical Analysis, 1979. Cited by 168.

Difference approximations for boundary and eigenvalue problems for ordinary differential equations. H.-O. Kreiss, Mathematics of Computation, 1972. Cited by 136.


Charles, are you deleting some of my replies?

No. But it helps if you stick with a single username G. And the use of certain words will cause an auto deletion, but I see no evidence of that.

Charles,

You can remove the google scholar listing for Heinz. I have dealt with the problem later on.

Thanks

Jerry

All

There is a new thread on this site about the IPCC climate models.

Here is one quote:

As scientists work to determine why some of the latest climate models suggest the future could be warmer than previously thought, a new study indicates the reason is likely related to challenges simulating the formation and evolution of clouds.

Another result is a wider range in possible temperatures.

Sensitivity to forcing because of different formulations (accuracy).

Jerry

All,

Nick Stokes has criticized the Bounded Derivative Theory (BDT) of Professor Heinz Kreiss as not being mainstream.

I now ask him to provide the following references as to his qualifications for making such a criticism:

A reference to his contributions to the continuum mathematical theory of time dependent initial and initial boundary value problems, especially in relation to systems with multiple time scales.

A reference to his contributions to the continuum mathematical theory of numerical analysis.

A reference to his work in the atmospheric sciences, especially with regard to the dynamical equations and physical approximations.

A reference to his work with numerical models that does not contain explicit or implicit dissipation that exceeds the amount in the real fluid.

I also note that he could not find anything about the Bounded Derivative Theory, although the two manuscripts in which the theory was introduced by Kreiss were cited in my manuscript.

Jerry

All,

Nick Stokes has stated he has no knowledge of the BDT, and thus that it must not be mainstream.

John Marshall (MIT) met with Heinz and me and graciously asked if he could use the BDT oceanographic system of equations introduced in our oceanographic manuscript (referenced in my manuscript). That system used our concept of a multiscale reduced system for oceanographic flows, one similar to the incompressible Navier-Stokes equations rather than the hydrostatic system then in common use. John's manuscript has now been cited over 2000 times:

A finite‐volume, incompressible Navier Stokes model for studies of the ocean on parallel computers

I worked with Tom Holzer at NCAR using concepts from the BDT for the plasma physics equations (referenced in my manuscript). It turned out that those equations were ill posed in a certain spatial region of the atmosphere because two large terms were assumed to be in exact balance (similar to the hydrostatic assumption), and that assumption was the cause of the ill posedness. We applied the concepts of the BDT multiscale system to remove the cause of the ill posedness.

A number of new numerical techniques based on the concept of multiscale systems introduced by Heinz have been developed by famous numerical analysts.

I think it is clear that the BDT is well established in many areas of mathematics and the physical sciences.

Jerry

Computer models predicted millions dead from SARS-CoV-2. Yep, millions. Even with all the lies, it has taken 7 months to get to 470 thousand worldwide, and yes, the flu started in about Dec last year; that is why the designation 19. Climate change is rife with BS and bafflegab and trillions of words describing that which is both cyclical and natural.

Please keep comments relevant to the topic on this thread.

Thank you.

“…if one is allowed to choose the forcing, one can reproduce any solution one wants.” Bingo! The crux of the climate model validity.

He shoots….he scores!

The forcing, and the parameters, too. A traveling frame of reference that exceeds scientific methods and domains.

“At a minimum, climate modelers must make the following assumptions:”

I would call those 3 points criteria, not assumptions, for climate modelers to adopt if they are interested in exiting the Cargo Cult Science arena they currently inhabit and returning to actual science.

Why would they be interested in that, Joel? That’s not where the money is 🙂

Steve,

And what agency is so naive as to fund that questionable work?

Jerry

The “mistakes” in climate models start way earlier than this and directly affect the “GHE”. Neither is the assumption of a perfect surface emissivity even close to true, nor do clouds reduce emissions only insignificantly. So a hypothetical Earth without GHGs would be a lot warmer than just 255K.

A closer look, however, reveals that the whole postulated negative CRE (cloud radiative effect) of some -20 W/m2 is based on little more than obscure models, while real data show the CRE is indeed positive.

https://de.scribd.com/document/466536029/The-Strange-NASA-Map1

Leitwolf,

That document is no longer available?

Jerry

The link works for me..

I get a page that says at the top “This document has been removed from Scribd.”

Then a section of “Reading Suggestions”.

Perhaps you are drawing from a cache somewhere?

Ok, they pulled it indeed and I can imagine why..

In the meanwhile here is an alternative link

https://docdro.id/phJh2cU

I understand that providing a dumbed-down version for us laymen is difficult, because it will necessarily lack the precision and nuance of the original. But it would be appreciated nonetheless.

I’ll be interested in what Nick Stokes has to say. His comments are always interesting, even when (especially when?) I disagree with him. One of the reasons I love this site so much.

David,

I will make an attempt. But keep asking until it is clear.

The atmospheric equations of motion describe a number of different types of motion. Probably the most familiar are sound waves. These waves cannot be accurately observed by the current observational system, and thus cannot be computed accurately by a numerical model based on those equations. Thus it is desirable to mathematically derive an approximate system that provides an accurate solution of the full system without those waves involved. Mathematically this is called the reduced system, and there is only one such system.

Let me know if this helps.

Jerry

That helps a little.

Graham,

Please let me know where it is confusing. I am happy to “dumb” it down as far as need be, because the more people who understand the manuscript and its conclusions, the less they will be hoodwinked by numerical models just because they are run on supercomputers. Note that all of my numerical runs in the manuscript were done on a PC.

Jerry

Congratulations on getting through peer-review to publication, Jerry!

It looks like you’re able to dispose of the typical hyperviscous (molasses) atmosphere used in climate models to suppress turbulent gyres.

I’d guess your paper may be the largest advance in climate modeling since at least Ramanathan’s 1976 paper on atmospheric radiative transfer.

Thanks, by the way, for mentioning my uncertainty analysis. Small potatoes compared to your accomplishment.

+1 for Nick Stokes. An echo chamber will never uncover the truth. And Kudos to WUWT to encouraging diverse viewpoints.

Pat,

The climate modelers now have a problem, because they have to explain how using the wrong dynamical system provides reliable results on the increase of CO2.

Jerry


Pat, I still remember you and Gerald at Real Climate forcing Gavin to argue as an engineer rather than an atmospheric physicist. It is good to see that your mathematical reasoning about error agrees with this paper.

I always hated the “molasses” trick in atmospheric mass and energy balance systems. Love this, from the article: “Because the primitive equations use discontinuous columnar forcing (parameterizations), excessive energy is injected into the smallest scales of the model.”

Is this different from:

I’m guessing that most climate modellers can recite the wisdom of both von Neumann and Lorenz with regard to models. They then turn around and ignore those words of wisdom. They should have to explain why those words of wisdom don’t apply to them.

Hi commie,

The Lorenz equations are mathematically amusing, but they are misleading as far as symmetric hyperbolic equations are concerned. They were derived using only a few spectral modes as an approximation of the atmospheric equations. Thus they cannot be proved to be anywhere close to the full system.

It can be proved that a stable and accurate numerical method will converge to the solution of a symmetric hyperbolic system (the Lax equivalence theorem). Note that in my manuscript I used two different numerical methods, one on the multiscale system and one on the reduced system (both mathematically proved to be close to any large-scale solution of the full system), and both produced the same solution, as expected. That is the power of good mathematics.

Jerry

The specific wisdom I was referring to was this:

I interpret that to mean that, if you input the physics and math, and the output matches the system behavior, your approach is valid. If you have to tune the system using the system behavior, you’re just curve matching.

You said:

As far as I can tell, you and von Neumann and Lorenz are in agreement on that point.

The author’s comment about the atmosphere being a ‘symmetric hyperbolic’ system reminds me of Lord Monckton’s efforts to follow through on describing the most general hyperbolic implications of assuming a simple feedback model for the atmosphere. Is there truly a meaningful connection here, I wonder, between Monckton’s ideas and the analysis referred to in the current head posting?

No.

As I understand it, for results that require many iterative calculations, machine epsilon can also become problematic, causing drift and strange results that can’t be trusted.

Sorry, I just don’t get the basis of the above-discussed paper.

The above article makes this statement: “At a minimum, climate modelers must make the following assumptions: . . . 2. The numerical climate model correctly approximates the transfer of energy between scales as in the actual atmosphere.”

One of the largest semi-regular transfers of energy in the atmosphere occurs as warm fronts and cold fronts move around the globe.

Since such fronts can span up to approximately half the N-S distance of the North American continent, do they meet the definition of “Small Scale Surface Irregularities”?

What global climate models, if any, model such weather fronts . . . let alone model them accurately?

Beyond this, what global climate models, if any, accurately model the PDO, AMO, and AO? With what accuracy have any such models, including those incorporating Bounded Derivative Theory, hindcast such global weather/climate patterns?

The above are, of course, largely rhetorical questions.

Gordon,

You forgot hurricanes. 🙂

Jerry

Jerry, thank you for the cute reply, but, no, I did not forget hurricanes. Nor did I forget typhoons, tornadoes, derechos, heat waves, dust storms, and long-term droughts.

The questions remain, vis-à-vis BDT: are these, like cold and warm fronts, to be considered as small scale or large scale “surface irregularities”, and what is the modeling “robustness” for these phenomena?

Gordon,

Small scale irregularities in my manuscript refer to individual rocks or trees or small-scale heating features at the surface. Large-scale features like lake-effect snowstorms are seen by the elliptic equation for w. The modeling robustness means that you only need to include the larger-scale features of importance at the surface.

Jerry

Gordon,

As I indicated in my manuscript, I believe that the reduced system allows the formation of fronts (see also Hoskins’ manuscript on this subject). However, the obs network cannot resolve the details of a front, and a model must allow for the turbulence at the frontal boundary, which might be no easy task.

When we used to watch fronts come through Boulder we would see a massive dust cloud at the interface of the front.

Jerry

The gust front and storm line is almost always several hundred kilometers in front of the actual frontal boundary for systems rolling off the Rockies onto the plains. Even weather models have had a hard time with that until the recent decade. Wx forecasters pretty much just had to know that was going to happen from experience.

Joel,

Agreed. When I was at NOAA, the staff would attempt to forecast the weather. They would look at the model output and then modify the result based on their experience.

I would like to run an experiment where the forecasters only used model output or only used satellite imagery and obs data and compare the difference.

Jerry

Clay,

That is correct. You might want to read Pat Frank’s manuscript about error propagation.

Jerry

“The mathematical arguments are based on the Bounded Derivative Theory (BDT) for symmetric hyperbolic systems introduced by Professor Heinz-Otto Kreiss over four decades ago and on the theory of numerical approximations of partial differential equations.”

Yes. But the Bounded Derivative Theory (BDT) is not mainstream maths. If you google the phrase, you’ll find just 24 references, almost all authored by Gerald Browning.

On the assumptions

1.

The refutation is

“However this is not the system of equations that satisfies the mathematical estimates required by the BDT for the initial data”

So it comes back to saying that GCMs are not compatible with BDT. Well, it’s his theory, so I guess that’s true. Maybe BDT is wrong.

2.

The refutation is

“Mathematical theory based on the turbulence equations has shown that the use of the wrong amount or type of dissipation leads to the wrong solution.”

This is just hand-waving. Of course you can’t get turbulence exactly right, and to the extent that it is wrong, the solution reflects that. True in CFD, and true of almost all propositions in applied math. The question is, to what extent (if any) is the amount wrong? No answer here.

3.

Again, just a hand-waving assertion that because parameter estimates might not be exactly right, GCMs fail. But then GB claims as an essential assumption:

“there is no accumulation of error over hundreds of years of simulation”

That is just nonsense. Differential equations numerically solved do give varying trajectories when there is error. What then happens depends on the DE, as I described here. GCMs work. Computational Fluid Dynamics (CFD) works. We know how to do this stuff.

Nick’s basic assertion is that BDT is not mainstream consensus, thus it can be dismissed.

That’s PNS in action, not science.

Joel,

He never heard of the Kreiss matrix theorem. Heinz was a renowned applied mathematician, an expert in partial differential equations and numerical analysis.

Jerry

I should have clarified, “… not mainstream consensus in climate modeling.”

The climate science modeling community pretty much ignores the rigors of science in general: everything from initialization measurement uncertainty and model error propagation to alternative explanations.

Do you use a different Google than I do?

Did you search the way Oreskes searched in climate “science” for the so-called “consensus”?

So what do you find? I searched for the name “Bounded Derivative Theory”.

Some articles, papers and books from Russian and Chinese authors, also Kreiss himself, this and that from Mr. Browning, including Judith Curry’s blog, books about maths…

Plenty of articles will contain those words, somewhere. But the intact phrase “Bounded Derivative Theory”?

PS:

Kasahara, Mironenko , Fry, Hovakimyan, Jones, Krantz, Pesin, Lavrentʹev, Kudin, Temlyakov, Kozine, Maliszewski, Turner, Wu, Celentano, George, Friz, Granath, Yang, Colombini, Kostov, Daley, Henshaw, Marsch, Yström, Motamed, LeVeque, Schochet, Bolotnikov, Hunana etc ppp

Enough?

Specific example? Link? I don’t see those.

Nick,

That is because you don’t keep up with the literature.

If you look at my first post on this site you will see just how prolific Professor Heinz Kreiss was in the mathematical literature (only the first page). He had the same status as Peter Lax, whom you also probably don’t know (the Lax equivalence theorem).

[CITATION] Problems with different time scales for partial differential equations. HO Kreiss, Communications on Pure and Applied Mathematics, 1980.

“That is because you don’t keep up with the literature.”

I googled today. The only person I see writing about BDT is you.

“how prolific Professor Heinz Kreiss was”

Yes, he was. But I can’t see him writing about BDT, except for a couple of papers with you as lead author.

Here is a direct link to the Heinz‐Otto Kreiss paper.

Google Scholar says it’s been cited 112 times.

Here’s how it starts out (no abstract):

“1. Introduction

In a previous paper [6] we have developed a theory for problems with different time scales for systems of ordinary differential equations

du/dt = A(t)u + F(t)

We assumed that the eigenvalues of A were essentially purely imaginary and could be divided into two groups M₁, M₂; M₁ contained the large eigenvalues of order O(ε⁻¹), 0 < ε << 1, and M₂ the eigenvalues of order one. Thus there were two time scales present: the fast one of order O(ε⁻¹) and the slow one of order O(1).

In many applications one is not interested in the fast time scale but only in the slow. In [6] we have shown that one can prepare the initial data, using a very simple principle, such that the corresponding solution only varies on the slow time scale. This principle is:

Choose the initial data such that a couple of time derivatives d^νu/dt^ν |t=0, ν = 0, 1,…, p, are of order O(1) at t = 0.”

And so it goes, into the wild blue yonder. 🙂

Nick,

Whatever you do, don’t look at the references in my manuscript. You might learn something.

Jerry

“Google Scholar says it’s been cited 112 times.”

Yes. But what does it have to do with the current post?

Nick, it’s just a form of applied mathematics; if you suspect a problem, get a mathematician to look at it.

Gerald, you are playing with a quantum process (EM radiation), so you will need to do some checks that your theory works in the QM domain. It has quantization and a non-real domain of field spins through 3D space. Most of the problem with the current models is that they are almost exclusively classical, and none of them remotely covers all the energy properly. I am not doubting your intent, but if you want to make better predictions you first need to check that your proposed mathematics covers the thing being modeled.

“none of them remotely even cover all the energy properly”

So do you think Jerry is covering all the energy properly?

No idea; I asked him to check that. The process is fairly straightforward but very mathematically complex; basically you do something like solve it against the GKSL equation. There are a number of other ways to do it, but you basically need to cover an open quantum system and convince yourself the maths holds. I don’t accept his theory without proof, but the fact that it is not mainstream is not a valid argument. People can solve things that the mainstream overlooks; take Grigori Perelman as an example (if you don’t know who he is, google it).

LdB,

The equations of motion for large scale motions in the atmosphere are essentially a symmetric hyperbolic system of equations and thus have well known mathematical properties. Kreiss theoretically treated such systems with multiple time scales and his theory has applications in many areas of science. To accurately compute the component of such systems that evolve on the advective time scale, one must choose the initial data

according to his Bounded Derivative Theory (BDT). If that data is not chosen in that manner the other components, e.g. inertial gravity waves, will be excited from the beginning and cause all kinds of havoc for the artificial parameterizations (that is why forecast modelers have also attempted to remove those waves).

In the BDT, to ensure that the model continues to evolve on the large scale, the mathematics requires that both the space and time derivatives stay large scale, or else the necessary L_2 estimates of the ensuing solution (which must be independent of the fast time scales) will be violated. This precludes discontinuities in the forcing such as occur in the primitive equations. Note that numerical analysis also requires that the solution of the continuum system be differentiable to ensure that the truncation error is small, yet modelers ignore this requirement and merrily apply the difference equations to situations where the solution is discontinuous, thus violating the basic requirements of the theory.

Note that any compactly supported discontinuous function can be accurately approximated by a compactly supported function that is infinitely differentiable. But the problem is that the approximation must still be resolved by the numerical model, and that is certainly not the case in climate models.

Jerry
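The resolution point above can be illustrated with a toy numerical experiment (my own sketch, not taken from the manuscript). A unit step is smoothed with a tanh profile of assumed width `eps`; a finite-difference derivative is only accurate once the grid spacing actually resolves that width, so the smooth replacement buys nothing unless the model resolves it:

```python
import numpy as np

def smooth_step(x, eps):
    # Infinitely differentiable approximation to a unit step at x = 0;
    # the transition width is O(eps).
    return 0.5 * (1.0 + np.tanh(x / eps))

eps = 0.01  # assumed width of the smoothed transition

def max_derivative_error(n):
    # Centered finite differences on an n-point grid over [-1, 1],
    # compared against the exact derivative of the smoothed step.
    x = np.linspace(-1.0, 1.0, n)
    h = x[1] - x[0]
    f = smooth_step(x, eps)
    exact = 0.5 / eps / np.cosh(x / eps) ** 2
    approx = np.gradient(f, h)
    return np.max(np.abs(approx - exact))

coarse = max_derivative_error(51)    # h = 0.04 >> eps: transition unresolved
fine = max_derivative_error(4001)    # h = 0.0005 << eps: transition resolved
print(coarse, fine)
```

On the coarse grid the derivative error is of the same order as the derivative itself, while the fine grid recovers it accurately; smoothing the discontinuity does not help a model whose mesh cannot see the smoothed region.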

“…Of course you can’t get turbulence exactly right, and to the extent that it is wrong, the solution reflects that…The question is, to what extent (if any) is the amount wrong?…”

So you admit “of course” that it is wrong, but then question if it is wrong.

You also claim the solution reflects “the extent that it is wrong,” then find yourself asking “to what extent…is the amount wrong.”

Sounds like you need to settle the discussion with yourself before you engage others, Nick.

Virtually all applied maths, and indeed scientific analysis, processes uncertain input to produce uncertain output. That is why folks here are keen on error limits. And that’s all GB is saying about assumption 2. You could say it about anything.

When you apply classical physics to a quantum problem you can’t even put a number on the error, so good luck with that 🙂

“…GCMs work. Computational Fluid Dynamics (CFD) works. We know how to do this stuff…”

You also know how to put lipstick on a pig.

GCMs work for a very limited timeframe. CFD works provided it isn’t GIGO and has the proper initial boundary conditions, grid resolution, parameterization, etc.

Get real.

GB says he has a theory (BDT) and GCMs are inconsistent with it.

OK

Why do you think BDT is right?

Nick,

I am not going to respond to you because both Pat and I have dealt with you before and you clearly do not understand the meaning of a mathematical proof.

Jerry

Yes, Jerry, I too have pigeonholed your maths with Pat Frank’s.

In the pigeon hole marked Nick knows zip about physical error and its analysis.

So stop posting you are contributing nothing other than Nick says/Nick thinks.

Stop posting if you’ve nothing but a complaint from ignorance.

My comment about Nick is from experience. Perhaps you weren’t here to see it.

And that link is far from exhaustive.

None of the assertions about climate models made by Dr. Browning appear to be justified by the manuscript he mentions. Consider the claim that

“the primitive equations use discontinuous columnar forcing (parameterizations), excessive energy is injected into the smallest scales of the model.”

This is nowhere discussed or mentioned in the recently published manuscript. I have no idea whether the statement is true or not, but given that the source code for climate models is online, Dr. Browning should be able to provide the exact lines of code that demonstrate the truth of this. Also there is nothing to stop parameterisations of clouds etc. from being continuous, so this does not seem to be an obvious error.

All in all Dr. Browning seems to have published a paper that is a minor advance on what he had been publishing for many years and then uses that to make major unsupported claims about climate models when the two are only tangentially related.

Izaak,

Ask the modelers to state clearly how the amount of dissipation in their models differs from the ECMWF model and reality. Then have them remove that dissipation and see how long their models run with the discontinuous parameterizations.

Jerry

Hi Jerry,

In the abstract of your 2002 paper you state:

“Concepts from the theory also have been used to prove the existence of a simple reduced system that accurately describes the dominant component of a midlatitude mesoscale storm forced by cooling and heating. Recently, it has been shown how the latter results can be extended to tropospheric flows near the equator. In all of these cases, only a single type of flow was assumed to exist in the domain of interest in order to better examine the characteristics of that flow. Here it is shown how BDT concepts can be used to understand the dependence of developing mesoscale features on a balanced large-scale background flow. That understanding is then used to develop multiscale initialization constraints for the three-dimensional diabatic equations in any domain on the globe.”

which suggests that BDT can be used to improve the accuracy of climate models and improve our understanding of them. And in that paper you discuss how to model the climate using reduced equations. You also state in that paper that

“if one uses a numerical method that is sufficient to accurately approximate the continuum equations of motion for the scales in the domain of interest, a difference in resolution between the models is not a problem.”

again suggesting that at least in 2002 you did not see a serious issue with global climate models, so I am wondering what the difference is between your 2002 paper and your current conclusions?

Izaak,

There are four sources of error in modeling the atmosphere on a short time scale.

1. The correct dynamical system must be approximated.

2. The numerical error must be sufficiently small (and the method accurate and stable)

3. The initial data (obs) must be sufficiently accurate to describe the slowly evolving component of the solution

4. The forcing (physics) must be accurate.

It is now well known what the correct reduced systems are for all scales of motion in all areas on the globe (and it is not the primitive equations).

Accurate and stable numerical methods for the sphere are known. However, if a continuum error is made in the continuum equations through the addition of an incorrect term (such as a dissipative term much larger than the one in reality), then that error can overwhelm the truncation error and lead to an incorrect solution.

In our 2002 manuscript, I did not add random obs error to the boundary data as would be the case in practice. I did that so as to show that the mathematics of limited area modeling is now well understood. However, had I added that random error, the characteristics would have been incorrect and all hell would have broken loose.

The forcing terms are parameterizations that are minimally accurate at best, but crucial to a correct forecast of the weather.

Thus the largest sources of error in a short term forecast are now the obs and the forcing.

Jerry

The forcing (point 3, which should be numbered 4) is impossible in the classical domain.

Izaak,

This quote was taken out of context:

“if one uses a numerical method that is sufficient to accurately approximate the continuum equations of motion for the scales in the domain of interest, a difference in resolution between the models is not a problem.”

It referred to the fact that the channel (global) model could have a coarse mesh if it only needed to resolve the large scale solution but the mesoscale (limited area model) would have to have a fine mesh in order to resolve both the large and mesoscale solutions. The comment had nothing to do with climate models.


Izaak,

There are references cited in the manuscript that show that is exactly the case. Read the manuscript by Browning, Hack and Swarztrauber, where using the amount of dissipation in climate models destroyed the numerical accuracy of the spectral method.

Jerry

Izaak,

Climate models will always have to use a model dissipation term larger than reality and inaccurate physics to extend their simulations so there will always be large continuum errors that will overwhelm the time and space truncation errors. I have never said that I support climate models even though at one time I built one for NCAR when I was a programmer and had to do so. So I am fully aware of all of the skeletons in their closets.

Jerry

Izaak,

You clearly did not read the manuscript before making derogatory remarks:

“This requirement precludes the primitive equation solution from evolving correctly because the columnar equation for the vertical component of the velocity can change discontinuously from horizontal point to point because of switches in the heating parameterizations. Those discontinuities violate the spatial derivative estimates required by the BDT. They also require unrealistically large dissipation because they inject energy into the smallest scales in a numerical model.”

Izaak,

This statement needs some facts:

“All in all Dr. Browning seems to have published a paper that is a minor advance on what he had been publishing for many years and then uses that to make major unsupported claims about climate models when the two are only tangentially related.”

How a manuscript that proves that all climate and weather models are based on the wrong dynamical system of equations can be called a minor advance is clearly the type of statement made by someone who is embarrassed by the truth being shown by rigorous mathematics. I must assume that you are somehow involved with the IPCC, which blindly supports its dubious arguments based on faulty climate models.

Jerry

Izaak,

Please provide your real name and qualifications. Then everyone will know where you are coming from.

Jerry

Gerald, I found that the CESM 1.0 model used a latent heat of water vaporization which was independent of temperature – that causes a 3% error in energy transfer by evaporation from tropical seas. Can we estimate how this affects the model’s predictive power? Here is my post :

https://judithcurry.com/2013/06/28/open-thread-weekend-23/#comment-338257

and here is a probable cause for using a constant latent heat:

https://judithcurry.com/2012/08/30/activate-your-science/#comment-234131

Please note that Gavin’s error analysis is a handwaving: “[it] is a pretty good assumption given the small ratio of water to air, and one often made in atmospheric models”.

CESM 1 was a part of CMIP5. I suspect that CESM 2 is still built on that assumption. CESM2 is a part of CMIP6.
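The ~3% figure quoted above can be sanity-checked with a standard linear fit for the latent heat of vaporization, L(T) ≈ 2.501e6 − 2361·T J/kg (T in °C) — a common empirical approximation, and an assumption here rather than anything taken from the CESM source:

```python
# Check the ~3% claim: hold the latent heat of vaporization fixed at
# its 0 deg C value and compare against the value appropriate for
# tropical sea surface temperatures (~30 deg C), using the common
# linear fit L(T) ≈ 2.501e6 - 2361*T J/kg (an assumption here).
def latent_heat(t_celsius):
    return 2.501e6 - 2361.0 * t_celsius

L_const = latent_heat(0.0)      # constant value a model might hard-code
L_tropical = latent_heat(30.0)  # value near tropical seas

rel_error = (L_const - L_tropical) / L_tropical
print(rel_error)  # roughly 0.03, i.e. about a 3% overestimate
```

This is consistent with the ~3% evaporation energy-transfer error claimed for tropical seas.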

Best put down of GCMs I have seen – “primitive equations”.

Also explains why there are numerous climate models that deliver different results despite the claim that “the science is settled”.

“Best put down of GCMs I have seen – “primitive equations”.”

Standard terminology.

“The primitive equations are a set of nonlinear differential equations that are used to approximate global atmospheric flow and are used in most atmospheric models.”

Curious,

An error in forcing can be estimated in the same way as a truncation error in the numerical approximation for a short period of time for a symmetric hyperbolic system. Note that in my manuscript I left off a term of the size of 10% in the reduced system and obtained an error of that size in 4 days.

Jerry
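The behavior Jerry describes — an omitted forcing term of a given relative size producing an error of roughly that size over the integration window — can be mimicked with a toy scalar ODE. This sketch is purely illustrative (the manuscript's reduced system is far richer); the forcing function and time scale are invented for the demonstration:

```python
import numpy as np

def integrate(forcing_scale, t_end=4.0, n=4000):
    # Forward Euler for u' = forcing_scale * F(t) with a smooth,
    # slowly varying forcing F (chosen arbitrarily for illustration).
    dt = t_end / n
    u = 1.0
    for k in range(n):
        t = k * dt
        F = np.cos(0.5 * t)
        u += dt * forcing_scale * F
    return u

full = integrate(1.0)
reduced = integrate(0.9)  # 10% of the forcing left out
rel_error = abs(full - reduced) / abs(full)
print(rel_error)
```

The relative error after the integration is of the same order as the 10% of the forcing that was dropped, which is the qualitative point.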

The main problem with climate models, especially CMIP3, CMIP5 AND CMIP6 ones, is that they are selected/tuned to hindcast the past – especially the 30 years before their hindcast to forecast transition dates. And to do so without consideration for multidecadal oscillations. Multidecadal oscillations were favoring rising temperature and contributed to a rapid temperature upswing from the mid 1970s to a few years after 2000, I figure by about .2 degree C (by using Fourier on HadCRUT3). This is most of the last 30 years of the hindcast periods of the CMIP3, CMIP5 and CMIP6 models. This means those models (generally) have feedbacks more strongly positive than they actually are, and some of the temperature rise after the mid 1970s was hindcasted as being caused by these feedbacks being more strongly positive than they actually are. So these models are generally overpredicting warming for the first century of their forecast periods, I figure mostly by about 1 degree C, more for higher CO2 scenarios/RCPs and about 1 degree C for RCP 4.5.

From the article: “One might ask how can climate models apparently predict the large-scale motions of the atmosphere in the past given these issues. I have posted a simple example on Climate Audit (reproducible on request) that shows that given any time dependent system (even if it is not the correct one for the fluid being studied), if one is allowed to choose the forcing, one can reproduce any solution one wants. This is essentially what the climate modelers have done in order to match the previous climate”

I would say the climate models do not predict the Earth’s atmosphere in the past. What they do predict is the bogus, bastardized surface temperature record.

The climate computer models backcast science fiction. The Climategate Charlatans and their spawn bastardized the surface temperature record and then they manipulate their climate computer models to reproduce the bastardization.

They ought to try manipulating their models to reproduce the U.S. surface temperature chart, the *real* temperature profile of the globe. The one that shows the 1930’s were just as warm as today. Instead, they reproduce the bastardized Hockey Stick chart. One lie correlating with another lie.

Climate models fail on a very fundamental physical limit; specifically it is impossible to have negative atmospheric water. The linked chart is the average precipitation minus evaporation for the CMIP5 ensemble of climate models:

http://climexp.knmi.nl/data/icmip5_pme_Amon_modmean_rcp85_0-360E_-90-90N_n_+++_2010:2020.png

By inspection, it is quite clear that the models produce a lot more precipitation than evaporation – EVERY YEAR. If you take the current level of atmospheric water, the climate models would deplete that water within 3 years. The only way that this result can happen beyond three years is for the atmosphere to be manufacturing water. The only known process where the atmosphere can manufacture vast quantity of water is in climate models – they are unphysical bunkum.

The models are so simplistic that they cannot even get the the mass balance of the atmosphere correct year-to-year. What hope have they got of getting the formation of clouds correct when the mass balance clouds depend on is just pick a number.
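The depletion argument above is simple arithmetic. The numbers below are illustrative assumptions on my part, not CMIP5 output: global-mean precipitable water of roughly 25 mm liquid equivalent, and a supposed persistent model P − E surplus of about 8 mm/yr:

```python
# Back-of-envelope check of the atmospheric water mass balance.
# Both numbers are assumed for illustration, not read from CMIP5.
precipitable_water_mm = 25.0  # assumed global-mean column water (liquid equiv.)
p_minus_e_mm_per_yr = 8.0     # assumed persistent precipitation-minus-evaporation surplus

years_to_deplete = precipitable_water_mm / p_minus_e_mm_per_yr
print(years_to_deplete)  # ~3 years
```

Under those assumptions the atmosphere's water would be gone in about three years unless it is being manufactured somewhere, which is the commenter's point.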

Nice point. One I have never heard before. Based on this we should have all drowned long ago!

40 days and 40 nights!

“that given any time dependent system (even if it is not the correct one for the fluid being studied), if one is allowed to choose the forcing, one can reproduce any solution one wants. ”

Absolutely! TRUTH. Tune all you want, but if you have an incorrect model it will NEVER be predictive.

Robert

And, if one is using a model that is either missing one or more non-trivial parameters, or if one or more parameters are wrong because of inaccurate measurements (or subsequent ‘corrections’) and the model is ‘tuned’ to get historical agreement, then the arbitrary adjustment provides no assurance that the adjustment will work beyond the interval of time used for tuning. That is, it becomes a process of fitting a complicated function to observational data that is only valid for a limited time interval; not unlike fitting a high-order polynomial and naively expecting predictions to be useful.

The claim is often made that GCMs are based on first-principle physics. That ignores the parameterizations and tuning. If the model was truly based on physics, there would be no need for tuning — it would work properly ‘right out of the box.’
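The high-order-polynomial analogy above is easy to demonstrate. In this sketch (my own toy example; the degree, noise level, and target function are arbitrary choices), a heavily over-fitted polynomial matches the tuning window closely and fails badly just outside it:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Tune" a degree-15 polynomial to 20 noisy samples of sin(2*pi*x)
# on [0, 1], then evaluate it just beyond the tuning interval.
# (numpy may warn that the fit is poorly conditioned - that is
# part of the point.)
x_train = np.linspace(0.0, 1.0, 20)
truth = np.sin(2 * np.pi * x_train)
y_train = truth + 0.05 * rng.standard_normal(x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=15)

# Agreement inside the tuning window...
train_err = np.max(np.abs(np.polyval(coeffs, x_train) - truth))
# ...says nothing about skill outside it.
x_out = 1.3
extrap_err = abs(np.polyval(coeffs, x_out) - np.sin(2 * np.pi * x_out))
print(train_err, extrap_err)
```

The fit error inside the window stays near the noise level, while the error a short distance outside it is orders of magnitude larger — the same failure mode as tuning a mis-specified model to a historical interval.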

Clyde,

Well said.

Jerry

Clyde,

Your comment is succinct – thank you.

There are so many sources of error in just one parameter, the typical surface air temperature, that it alone is capable of distortion of model runs. The more the temperature data is criticised, the more it is subjected to arbitrary adjustments, worsening the problem when improvement should be the aim. It is like improving the navigation control system on an unguided missile.

“There are so many sources of error in just one parameter, the typical surface air temperature”

Measured surface air temperature is not a source of error. It is not used.

Nick,

To use a logic method like yours, precisely where did I say it was used? I merely said it was capable of distortion. Gotcha! Geoff S

Geoff,

You said, inter alia,

” it alone is capable of distortion of model runs.”

How can it distort model runs if it isn’t used? How can it be a source of error if it isn’t used?

The whole argument is moot, because surface temperature isn’t a parameter, it’s a variable.

And it [surface temperature] damn well better be used to initialize a run that purports to start at some declared point in time and project into the future, otherwise the entire run is b***shit.

Not that there are enough surface temperature measurements to *properly* initialize a GCM run, but what there are need to be included…along with all of the interpolated (as Steve Mosher would accurately say, “predicted”) temperatures derived from them at the same point in time. That’s the only way a one for one comparison could possibly be made between a historical initial state and one at some point in the future.

That said, the idea that surface temperature errors propagate is valid. They just aren’t “parameters.” Those are chosen, and are, in many cases, arbitrary.

Michael,

Good post. If surface temperatures are not used to initialize the climate models then how can they possibly work? If you are trying to predict temperatures then you better be using temperature as an input!

“And it [surface temperature] damn well better be used to initialize a run”

Yes, an initial state is used, which includes temperature (everywhere). They deliberately wind back 100 years or more, so it is a state from long ago. And the point of doing that is precisely that the period that you want is not very much dependent on the initial state – it depends on the energy balance that is created in the years since.

Nick,

“the period that you want is not very much dependent on the initial state”

A numerical model truth. Not a physical truth.

Nick,

Surface air temperature is one of the most-discussed of the parameters in general climate work. It is a corner stone of global warming hypotheses.

It was used here as an example of a parameter familiar to many people, in the sense of “If the rest of climate work is as inaccurate and fiddled as much as surface air temperature, then we have a problem.”

Nick, sometimes I wonder if you really understand the importance of missing, interpolated, cherry-picked, weighted fundamental data. Slack approaches to measurement and uncertainty seem part of the modern mindset of at least some researchers, as shown by their papers. There is a probability that sloppy treatment of original input data, alone, can invalidate the outcome of a GCM. SAT as I used it for an example is but a visible symptom of a widespread ailment of the mind. Geoff S

Stokes

You asked, “How can it [surface air temperature, SAT] be a source of error if it isn’t used?” You acknowledge that SAT is used for initialization, but claim that it doesn’t matter. Then why bother?

Is it not used to tune the models? Is it not used as a reference for determining the skill in hindcasting? If the SATs tune the models improperly then they are a source of error! If the models can’t provide reliable hindcasting, then they are clearly unreliable for forecasting.

“Measured surface air temperature is not a source of error. It is not used.”

“Yes, an initial state is used, which includes temperature (everywhere).”

LMAO!

Nothing like two opposing assertions within a few messages of each other. “No, temperature isn’t used but yes, temperature is used!”

Unfreakingbelievable!

Tim

Stokes has an uncanny ability to come up with statements that appear, on face value, to support his belief system. Unfortunately, when it serves his purpose, he will contradict himself, or summon up some little known and seldom used hypothesis that has no real application to the topic of discussion.

Gerald, you open with the statement “Climate model sensitivity to CO2 is heavily dependent on artificial parameterizations …….”

There is no climate sensitivity to CO2 apparent in the empirical climate data. Application of a First Order Autoregression Model determines that atmospheric temperature is independent of CO2 concentration but that the annual rate of change of CO2 concentration is dependent on the climate. This applies to a number of CO2 recording stations, Mauna Loa and more, from around the World as detailed on my web site at https://www.climateauditor.com

Any use of CO2 sensitivity in a climate model must therefore be in error.

Further, the CO2 records show an obvious response to the major Mount Pinatubo volcanic eruption on 12 June 1991. If models are adjusted to fit past records then, again, they will be irrelevant because no computer model is going to know when, where and how severe future eruptions will take place.

An example of the futility of current modelling may be seen in the CO2 record for Point Barrow, Alaska. The seasonal variation is of the order of 20 ppm presumably from biogenesis. How are the models taking this into account? Do they know how fast the trees are growing or what the next phytoplankton bloom will generate?

Good reply. The Saharan dust cloud headed our way is a primary example. It will lower temperatures both of the ocean and the land that the dust storm traverses. This effect could last quite a long time before the dust finally settles out of the atmosphere. Since the climate models are an iterative process that means that the temperatures of this year are an input for the next year. If the temp impacts of this year are not included in the input for next year then how can the next year projection be correct? The only conclusion would be that the models don’t depend at all on local and regional climate impacts and if that is the case then exactly what are they projecting?

Tim

It seems to me that the kind of unpredictable meteorological events, which can affect temperature, are almost exclusively of the type to lower the temperature. Even if GCMs were reasonably skillful, leaving temperature lowering events out would produce predictions that were upper-bounds because the real temperature will always be lower, thanks to things like volcanic eruptions and Saharan dust storms.

Bevan,

We no longer need to argue about the type or accuracy of climate model parameterizations. The more fundamental error that the models are using the wrong dynamical system is the more serious error.

And that now cannot be refuted because it is based on rigorous mathematics.

Jerry

Gerald, I have great respect for anyone even trying to mathematically model the huge complexity of the atmosphere.

I share your concern ‘about the loss of integrity in science through the abuse of rigorous mathematical theory by numerical modelers’.

In response to Izaac you wrote on 24 June: ‘There are four sources of error in modeling the atmosphere on a short time scale.’ [For convenience of future reference I have renumbered these as shown below, as intended.]

1. The correct dynamical system must be approximated.

2. The numerical error must be sufficiently small (and the method accurate and stable)

3. The initial data (obs) must be sufficiently accurate to describe the slowly evolving component of the solution

4. The forcing (physics) must be accurate.

As Incoming Energy – Outgoing Energy = Radiative Forcing, how can such errors be ameliorated without duly taking into account that Earth is an integral part of the solar system?

FYI, the 4 outer planets (JSUN), with a combined mass of 446 Earths, cause the sun’s barycentric motion and vary its energy output as is known from the sunspot cycles. At the same time they make Earth’s orbital motion look like a helium filled balloon held by a small child hopping around in circles, to clearly affect mostly the illumination of the poles as is evident from the rapid and very prominent changes in the Arctic and Antarctic temperatures that dwarf the variations in temperatures at lower latitudes and the global mean temperatures.

It would seem that while climate modellers ignore short period solar system cycles that are clearly evident in climate related observations, trying to make climate models accurate enough to be relied on for predictions amounts to an exercise in futility.

John,

To make it clear, I am not a supporter of climate modeling as I am well aware of all of their shortcomings.

In mathematics, forcing would include solar forcing, and as I worked with Tom Holzer on the plasma physics equation for the upper atmosphere and the sun, I am well aware of solar cycles and especially the Maunder minimum and the LIA. There can be arguments about the accuracy of the forcing terms (and I know how rough the parameterizations for clouds, precipitation, etc. are), but it has now been shown through rigorous mathematics that the atmospheric dynamical equations (hydrostatic equations) approximated by all current global climate models are wrong, i.e., there can be no arguments about this serious flaw in the models.

Once this result becomes more widely known, the IPCC can no longer justify any conclusions that are based on their models and should be ridiculed for doing so.

BTW thanks for renumbering the points!

Jerry

John,

Read the manuscript by Sylvie Gravel (link at start of thread). The forecast models go bad in a matter of a day or so just because they are also based on the hydrostatic system. The ad hoc fix of including a boundary layer dissipation to artificially slow down the unrealistic growth of the horizontal velocity at the surface is completely arbitrary. The only way they keep a forecast model from going completely off the rails is by tuning the parameterizations and starting a new forecast with new obs each day.

Note that the reduced system introduced in my manuscript does not require that arbitrary dissipation at the surface and is in fact stable to small scale perturbations at the lower boundary.