Emulation, ±4 W/m² Long Wave Cloud Forcing Error, and Meaning

Guest post by Pat Frank

My September 7 post describing the recent paper published in Frontiers in Earth Science on GCM physical error analysis attracted a great deal of attention, both supportive and critical.

Among other things, the paper showed that the air temperature projections of advanced GCMs are just linear extrapolations of fractional greenhouse gas (GHG) forcing.

Emulation

The paper presented a GCM emulation equation expressing this linear relationship, along with extensive demonstrations of its unvarying success.

In the paper, GCMs are treated as a black box. GHG forcing goes in, air temperature projections come out. These observables are the points at issue. What happens inside the black box is irrelevant.

In the emulation equation of the paper, GHG forcing goes in and successfully emulated GCM air temperature projections come out. Just as they do in GCMs. In every case, GCM and emulation, air temperature is a linear extrapolation of GHG forcing.
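
To make concrete what "linear emulation" means here, the sketch below shows a generic linear emulator of the sort described: cumulative GHG forcing goes in, an air temperature anomaly comes out. The scale factor and the 80-year forcing series are illustrative placeholders, not the fitted values from the paper.

```python
# Minimal sketch of a linear GCM emulator (illustrative coefficients only,
# not the fitted equation from the paper): cumulative GHG forcing in,
# air-temperature anomaly out.
def emulate_anomaly(annual_forcing_increments, scale=0.4):
    """Return emulated temperature anomalies (K), one per year, each a
    linear function of the forcing accumulated up to that year."""
    anomalies, cumulative_forcing = [], 0.0
    for dF in annual_forcing_increments:          # dF in W/m^2
        cumulative_forcing += dF
        anomalies.append(scale * cumulative_forcing)
    return anomalies

# e.g. 80 years of a constant ~0.035 W/m^2 annual GHG forcing increment
print(round(emulate_anomaly([0.035] * 80)[-1], 2))   # ~1.12 K after 80 years
```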

Nick Stokes’ recent post proposed that, “Given a solution f(t) of a GCM, you can actually emulate it perfectly with a huge variety of DEs [differential equations].” This, he supposed, is a criticism of the linear emulation equation in the paper.

However, in every single one of those DEs, GHG forcing would have to go in, and a linear extrapolation of fractional GHG forcing would have to come out. If the DE did not behave linearly, the air temperature emulation would fail.

It would not matter what differential loop-de-loops occurred in Nick’s DEs between the inputs and the outputs. The DE outputs must necessarily be a linear extrapolation of the inputs. Were they not, the emulations would fail.

That necessary linearity means that Nick Stokes’ entire huge variety of DEs would merely be a set of unnecessarily complex examples validating the linear emulation equation in my paper.

Nick’s DEs would just be linear emulators with extraneous differential gargoyles; inessential decorations stuck on for artistic, or in his case polemical, reasons.

Nick Stokes’ DEs are just more complicated ways of demonstrating the same insight as is in the paper: that GCM air temperature projections are merely linear extrapolations of fractional GHG forcing.

His DEs add nothing to our understanding. Nor would they disprove the power of the original linear emulation equation.

The emulator equation takes the same physical variables as GCMs, engages them in the same physically relevant way, and produces the same expectation values. Its behavior duplicates all the important observable qualities of any given GCM.

The emulation equation displays the same sensitivity to forcing inputs as the GCMs. It therefore displays the same sensitivity to the physical uncertainty associated with those very same forcings.

Emulator and GCM identity of sensitivity to inputs means that the emulator will necessarily reveal the reliability of GCM outputs, when using the emulator to propagate input uncertainty.

In short, the successful emulator can be used to predict how the GCM behaves; something directly indicated by the identity of sensitivity to inputs. They are both, emulator and GCM, linear extrapolation machines.

Again, the emulation equation outputs display the same sensitivity to forcing inputs as the GCMs. It therefore has the same sensitivity as the GCMs to the uncertainty associated with those very same forcings.
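
A sketch of what that shared sensitivity implies numerically follows. The sensitivity value below is a hypothetical placeholder chosen only for illustration; the 0.035 W/m^2 and ±4 W/m^2 figures are the ones discussed later in this post.

```python
# Sketch: a linear emulator's sensitivity maps forcing to temperature, so the
# same sensitivity maps an input forcing uncertainty to a temperature
# uncertainty. dT_dF is a hypothetical value, not the paper's constant.
dT_dF   = 0.4     # K per (W/m^2), illustrative sensitivity
delta_F = 0.035   # W/m^2, annual average CO2 forcing increment (see below)
u_F     = 4.0     # W/m^2, LWCF calibration uncertainty (see below)

delta_T = dT_dF * delta_F   # ~0.014 K emulated annual temperature step
u_T     = dT_dF * u_F       # ~1.6 K uncertainty attached to that single step
print(round(delta_T, 3), round(u_T, 1))
```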

Propagation of Non-normal Systematic Error

I posted a long extract from relevant literature on the meaning and method of error propagation, here. Most of the papers are from engineering journals.

This is not unexpected given the extremely critical attention engineers must pay to accuracy. Their work products have to perform effectively under the constraints of safety and economic survival.

In particular, special notice is given to the paper of Vasquez and Whiting, who examine error analysis for complex non-linear models.

An extended quote is worthwhile:

… systematic errors are associated with calibration bias in [methods] and equipment… Experimentalists have paid significant attention to the effect of random errors on uncertainty propagation in chemical and physical property estimation. However, even though the concept of systematic error is clear, there is a surprising paucity of methodologies to deal with the propagation analysis of systematic errors. The effect of the latter can be more significant than usually expected.

“Usually, it is assumed that the scientist has reduced the systematic error to a minimum, but there are always irreducible residual systematic errors. On the other hand, there is a psychological perception that reporting estimates of systematic errors decreases the quality and credibility of the experimental measurements, which explains why bias error estimates are hardly ever found in literature data sources.”

“Of particular interest are the effects of possible calibration errors in experimental measurements. The results are analyzed through the use of cumulative probability distributions (cdf) for the output variables of the model.

“As noted by Vasquez and Whiting (1998) in the analysis of thermodynamic data, the systematic errors detected are not constant and tend to be a function of the magnitude of the variables measured.

“When several sources of systematic errors are identified, [uncertainty due to systematic error] β is suggested to be calculated as a mean of bias limits or additive correction factors as follows:

“β = √[ Σᵢ (θ_S,i)² ],

“where i defines the sources of bias errors and θ_S,i is the bias range within error source i.” (my bold)

That is, in non-linear models the uncertainty due to systematic error is propagated as the root-sum-square.

This is the correct calculation of total uncertainty in a final result, and is the approach taken in my paper.
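
As a concrete illustration of the root-sum-square rule quoted above, here is a minimal sketch. The three bias ranges are hypothetical placeholders, not values taken from the paper or from any calibration study.

```python
# Root-sum-square combination of independent systematic-error sources,
# per the Vasquez and Whiting expression quoted above.
# The bias ranges below are hypothetical, for illustration only.
import math

def rss(bias_ranges):
    """beta = sqrt(sum_i theta_S,i^2) over the individual bias ranges."""
    return math.sqrt(sum(b ** 2 for b in bias_ranges))

theta = [2.5, 3.0, 1.0]                     # hypothetical bias ranges, W/m^2
print(f"beta = +/-{rss(theta):.2f} W/m^2")  # -> beta = +/-4.03 W/m^2
```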

The meaning of ±4 W/m² Long Wave Cloud Forcing Error

This illustration might clarify the meaning of ±4 W/m^2 of uncertainty in annual average LWCF.

The question to be addressed is: what accuracy in simulated cloud fraction is necessary to resolve the annual impact of CO2 forcing?

We know from Lauer and Hamilton (2013) that the annual average ±12.1% error in CMIP5 simulated cloud fraction (CF) produces an annual average ±4 W/m^2 error in long wave cloud forcing (LWCF).

We also know that the annual average increase in CO₂ forcing is about 0.035 W/m^2.

Assuming a linear relationship between cloud fraction error and LWCF error, the GCM annual ±12.1% CF error is proportionately responsible for ±4 W/m^2 annual average LWCF error.

Then one can estimate the level of GCM resolution necessary to reveal the annual average cloud fraction response to CO₂ forcing as,

(0.035 W/m^2 ÷ ±4 W/m^2) × ±12.1% cloud fraction = ±0.11% cloud fraction

That is, a GCM must be able to resolve a 0.11% change in cloud fraction to be able to detect the cloud response to the annual average 0.035 W/m^2 increase in CO₂ forcing.

Put another way, a climate model must simulate the cloud feedback response to within 0.11% in CF to resolve the annual impact of CO₂ emissions on the climate. Unless the cloud response can be simulated to that resolution, how clouds respond to the annual 0.035 W/m^2 of CO2 forcing cannot be known.

Here’s an alternative approach. We know that the total tropospheric cloud feedback effect of the global average 67% cloud cover is about -25 W/m^2.

The annual tropospheric CO₂ forcing is, again, about 0.035 W/m^2. The CF equivalent that produces this feedback energy flux is again linearly estimated as,

(0.035 W/m^2 ÷ |−25 W/m^2|) × 67% cloud fraction = 0.094% cloud fraction.

That is, the second result is that cloud fraction must be simulated to a resolution of 0.094%, to reveal the feedback response of clouds to the CO₂ annual 0.035 W/m^2 forcing.

Assuming the linear estimates are reasonable, both methods indicate that about 0.1% in CF model resolution is needed to accurately simulate the annual cloud feedback response of the climate to an annual 0.035 W/m^2 of CO₂ forcing.
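
Both estimates are simple enough to check directly; the sketch below reproduces them using only the numbers quoted above.

```python
# Reproducing the two linear estimates above, using the figures quoted in
# the text (no other assumptions).
annual_co2_forcing = 0.035   # W/m^2, average annual increase in CO2 forcing
lwcf_error         = 4.0     # W/m^2, CMIP5 annual average LWCF calibration error
cf_error           = 12.1    # %, CMIP5 annual average cloud-fraction error
cloud_feedback     = 25.0    # |W/m^2|, total tropospheric cloud feedback magnitude
global_cf          = 67.0    # %, global average cloud fraction

# Method 1: scale the CF error by the ratio of CO2 forcing to LWCF error.
method1 = annual_co2_forcing / lwcf_error * cf_error        # ~0.11 % CF
# Method 2: scale global cloud cover by the ratio of CO2 forcing to feedback.
method2 = annual_co2_forcing / cloud_feedback * global_cf   # ~0.094 % CF

print(f"method 1: {method1:.2f} % CF   method 2: {method2:.3f} % CF")
print(f"shortfall vs the +/-12.1 % CF error: ~{cf_error / method1:.0f}-fold")
```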

This is why the uncertainty in projected air temperature is so great. The needed resolution is roughly 100 times finer than the resolution available.

To achieve the needed level of resolution, the model must accurately simulate cloud type, cloud distribution and cloud height, as well as precipitation and tropical thunderstorms, all to 0.1% accuracy. This requirement is an impossibility.

The CMIP5 GCM annual average 12.1% error in simulated CF is the resolution lower limit. This lower limit is 121 times larger than the 0.1% resolution limit needed to model the cloud feedback due to the annual 0.035 W/m^2 of CO₂ forcing.

This analysis illustrates the meaning of the ±4 W/m^2 LWCF error in the tropospheric feedback effect of cloud cover.

The calibration uncertainty in LWCF reflects the inability of climate models to simulate CF, and in so doing indicates the overall level of ignorance concerning cloud response and feedback.

The CF ignorance means that tropospheric thermal energy flux is never known to better than ±4 W/m^2, whether forcing from CO₂ emissions is present or not.

When forcing from CO₂ emissions is present, its effects cannot be detected in a simulation that cannot model cloud feedback response to better than ±4 W/m^2.

GCMs cannot simulate cloud response to 0.1% accuracy. They cannot simulate cloud response to 1% accuracy. Or to 10% accuracy.

Does cloud cover increase with CO₂ forcing? Does it decrease? Do cloud types change? Do they remain the same?

What happens to tropical thunderstorms? Do they become more intense, less intense, or what? Does precipitation increase, or decrease?

None of this can be simulated. None of it can presently be known. The effect of CO₂ emissions on the climate is invisible to current GCMs.

The answer to any and all these questions is very far below the resolution limits of every single advanced GCM in the world today.

The answers are not even empirically available because satellite observations are not better than about ±10% in CF.

Meaning

Present advanced GCMs cannot simulate how clouds will respond to CO₂ forcing. Given the tiny perturbation annual CO₂ forcing represents, it seems unlikely that GCMs will be able to simulate a cloud response in the lifetime of most people alive today.

The GCM CF error stems from deficient physical theory. It is therefore not possible for any GCM to resolve or simulate the effect of CO₂ emissions, if any, on air temperature.

Theory-error enters into every step of a simulation. Theory-error means that an equilibrated base-state climate is an erroneous representation of the correct climate energy-state.

Subsequent climate states in a step-wise simulation are further distorted by application of a deficient theory.

Simulations start out wrong, and get worse.

As a GCM steps through a climate simulation in an air temperature projection, knowledge of the global CF consequent to the increase in CO₂ diminishes to zero pretty much in the first simulation step.

GCMs cannot simulate the global cloud response to CO₂ forcing, and thus cloud feedback, at all for any step.

This remains true in every step of a simulation. And the step-wise uncertainty means that the air temperature projection uncertainty compounds, as Vasquez and Whiting note.

In a futures projection, neither the sign nor the magnitude of the true error can be known, because there are no observables. For this reason, an uncertainty is calculated instead, using model calibration error.

Total ignorance concerning the simulated air temperature is a necessary consequence of a needed cloud-fraction resolution roughly 120-fold finer than the resolution GCMs can actually deliver when simulating the cloud response to annual CO₂ forcing.

On an annual average basis, the ±4 W/m^2 uncertainty in the LWCF produced by cloud fraction is about 114 times larger than the 0.035 W/m^2 perturbation to be resolved.

The CF response is so poorly known that even the first simulation step enters terra incognita.

The uncertainty in projected air temperature increases so dramatically because the model is step-by-step walking away from an initial knowledge of air temperature at projection time t = 0, further and further into deep ignorance.

The GCM step-by-step journey into deeper ignorance provides the physical rationale for the step-by-step root-sum-square propagation of LWCF error.
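
A sketch of that step-wise propagation: with a constant per-step uncertainty, the root-sum-square grows as the square root of the number of steps. The per-step value below is a hypothetical figure standing in for the LWCF uncertainty carried through an emulator, not the computed value from the paper.

```python
# Sketch: root-sum-square compounding of a constant per-step uncertainty
# over an n-step (n-year) projection. u_step is hypothetical.
import math

def projection_uncertainty(u_step, n_steps):
    """RSS of n identical per-step uncertainties, i.e. u_step * sqrt(n)."""
    return math.sqrt(sum(u_step ** 2 for _ in range(n_steps)))

u_step = 1.6   # K per step, illustrative only
for n in (1, 20, 50, 100):
    print(f"{n:3d} steps -> +/-{projection_uncertainty(u_step, n):.1f} K")
# The envelope grows as sqrt(n), while the projected per-step warming signal
# stays on the order of a few hundredths of a degree.
```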

The propagation of the GCM LWCF calibration error statistic and the large resultant uncertainty in projected air temperature is a direct manifestation of this total ignorance.

Current GCM air temperature projections have no physical meaning.

September 19, 2019 8:53 am

A well written piece Dr Frank.

It seems so difficult to argue against your logic, having codified what many engineers have thought and said over the years. But I suspect the usual candidates will have a go at you.

Be brave, the logic is nearly complete.

September 19, 2019 9:06 am

“That necessary linearity means that Nick Stokes’ entire huge variety of DEs would merely be a set of unnecessarily complex examples validating the linear emulation equation in my paper.”
What is the basis of “unnecessarily”? You are using your very simple model which produces near linear dependence of global average surface temperature, to replace a GCM, which is certainly very complex, and saying that the simple model can be taken to emulate the error propagation of the GCM, even though it has none of the physics of conservation of mass, momentum and energy which actually guides and limits the propagation of error.

As a scientific claim, you actually have the obligation to demonstrate that the error behaviour is the same, if you are going to analyse it in place of the GCM. Not just wave hands about how close the emulation of a single variable is.

Forgotten as always in this is that GCMs are not just devices to predict global average surface temperature. They cover a huge number of variables, including atmospheric temperature at all levels. Matching just surface temperature over a period in no way establishes that the models are equivalent. This is obvious when Dr Spencer points out that this silly error growth claim would be limited by the requirements of TOA balance. Well, the Earth has such a balance, and so do the GCM’s, but there is nothing in Pat Frank’s toy model about it.

The point of my proof that you can match a prescribed solution to any kind of error behaviour just reinforces the point that you have in no way established the requirements for analysing the toy in place of the real.

Editor
Reply to  Nick Stokes
September 19, 2019 9:39 am

Nice try Nick. But, the point is that the toy does just as good a job estimating global mean surface temperature as the GCM’s. The fact that the GCMs attempt (and fail) to produce a matrix of temperatures throughout the Troposphere is irrelevant; all anyone talks about is surface temperature, which is sad, I agree. Besides, John Christy has shown that the models are inept at estimating the vertical temperature gradient.

Creating yet another red herring does not hide the problems with model validation, or lack thereof.

Reply to  Andy May
September 19, 2019 9:55 am

“But, the point is that the toy does just as good a job estimating global mean surface temperature as the GCM’s.”
So would a curve fit, as the toy effectively is. But it tells you nothing about error propagation in the GCM.

And you can’t isolate single variables – in the GCM they all interact. There are all sorts of effects in the GCM which would ensure that the temperature can’t just rise 18°C, as Pat’s toy model (with no physics) can. Dr Spencer’s TOA balance is just one.

Gator
Reply to  Nick Stokes
September 19, 2019 10:03 am

And you can’t isolate single variables – in the GCM they all interact.

Which is why all GCM’s are pure fantasy, and why they are not proof of anything, except bias.

Editor
Reply to  Nick Stokes
September 19, 2019 10:11 am

Nick,
I agree with you (and Spencer) to a point. Dr. Frank’s work does not invalidate the GCM’s, nor does it explain the propagated errors in the models. The climate data we have does that quite well.
What his work does show is that what the models were designed to do, compute man’s influence on climate, cannot be accomplished with them, because they cannot resolve the low-level cloud cover accurately enough. The error due to changing cloud cover swamps what they are trying to measure and is unknown. This has been known for a long time. Spencer, Lindzen, and others have written about it before. I think that Frank’s work complements the others and is very helpful.
I realize you (and perhaps Spencer as well) are trying to throw irrelevant stuff to mask his main conclusion, similar to others’ efforts to trivialize the work Spencer and Christy did with satellite temperature measurements or the work that Lindzen did on tropical cloud cover ECS estimates, but it won’t work. The underlying problem with the climate models is they are not accurate enough to estimate man’s contribution to climate change, and they may never be.

Tommy
Reply to  Nick Stokes
September 19, 2019 11:49 am

“And you can’t isolate single variables – in the GCM they all interact. There are all sorts of effects in the GCM which would ensure that the temperature can’t just rise 18°C, as Pat’s toy model (with no physics) can. Dr Spencer’s TOA balance is just one”

Are you saying that if you don’t understand how one variable works, you should add many many more variables that you also don’t understand and…Magic ?

It sounds to me like you’re admitting the GCMs have an a priori conclusion (reasonable looking predictions that show an impact of co2 forcing). Of course you can curve fit enough variables to get what you want. Does it model the real world, though?

Matthew R Marler
Reply to  Nick Stokes
September 19, 2019 2:05 pm

Nick Stokes: And you can’t isolate single variables – in the GCM they all interact. There are all sorts of effects in the GCM which would ensure that the temperature can’t just rise 18°C, as Pat’s toy model (with no physics) can. Dr Spencer’s TOA balance is just one.

Actually, Pat Frank’s analysis shows that you can isolate a single variable. Your point about there being many variables whose uncertainties ought to be estimated concurrently implies that Pat Frank has achieved an approximate lower bound on the estimation uncertainty.

The sum and substance of your commentaries is just that: the actual model uncertainty resulting from uncertainties in the parameter values, is greater than his estimate.

Reply to  Matthew R Marler
September 19, 2019 8:24 pm

“Actually, Pat Frank’s analysis shows that you can isolate a single variable.”
Well, it shows that he did it. But not that it makes any sense. Dr Spencer’s point was, in a way, that there is a Le Chatelier principle at work. If something changes, something else varies to counter the change. The reason is the overall effect of the conservation principles at work. Roy cited TOA balance as one.

But Pat Frank’s toy does not have any other variables that could change, or any conservation principles that would require them to.

Reply to  Nick Stokes
September 19, 2019 8:53 pm

Nick,

“If something changes, something else varies to counter the change. The reason is the overall effect of the conservation principles at work. Roy cited TOA balance as one.

But Pat Frank’s toy does not have any other variables that could change, or any conservation principles that would require them to.”

Why would Pat Frank’s emulation *need* any other variables if his output matches the output of the models? This just sounds like jealousy rearing its ugly head.

Conservation principles do not cancel out uncertainty. Trying to say that it does is really nothing more than an excuse being used to justify a position.

Reply to  Matthew R Marler
September 19, 2019 10:35 pm

Roy Spencer’s analysis confuses a calibration error statistic with an energy flux.

His argument has no critical impact, or import for that matter.

Your comment, Nick, shows you don’t understand that really obvious distinction, either.

Either that, or you’re just opportunistically exploiting Roy’s mistake for polemical advantage.

Matthew Schilling
Reply to  Matthew R Marler
September 20, 2019 7:55 am

A question for Pat Frank: Would it be incorrect to think of an uncertainty value as a type of metadata, attached to and describing the result? The result addresses the question posed, while the uncertainty value speaks to the (quality of the) result.

Clyde Spencer
Reply to  Matthew R Marler
September 20, 2019 11:11 am

Matthew Schilling
Since Pat hasn’t responded, I’ll presume to weigh in. I think that metadata is an apt description for uncertainty.

Reply to  Andy May
September 19, 2019 10:24 pm

Nick, “There are all sorts of effects in the GCM which would ensure that the temperature can’t just rise 18°C, as Pat’s toy model (with no physics) can. (my bold)”

Here we go again. Now even Nick Stokes thinks that an uncertainty in temperature is a physical temperature.

That’s right up there with thinking a calibration statistic is an energy flux.

You qualify to be a climate modeler, Nick. Your level of incompetence has raised you up into that select group.

nick
Reply to  Pat Frank
September 20, 2019 8:27 am

Pat, for what it’s worth, I as a physicist am shocked by the sheer incompetence that seems to be present in the climate community. The method you are using is absolutely standard, every physics undergraduate is supposed to understand it – and usually does without any effort. The mistaken beliefs about error propagation that the climate guys show in all their comments are downright ridiculous. Kudos to you for your patience explaining again and again the difference between error and uncertainty. Really hard concepts to grasp for certain people.
(I’m usually much more modest when commenting, but when people trumpet BS with such a conviction, then I can’t hold myself back)

Barbara Hamrick
Reply to  nick
September 20, 2019 5:41 pm

+1

Reply to  nick
September 28, 2019 7:35 pm

Thank-you so much for the breath of fresh air, nick.

It means a huge lot to me to get support in public from a bona fide physicist.

As you can imagine, climate modelers are up in arms.

Even climate scientists skeptical of AGW are going far afield. You may have seen Roy Spencer’s critique, equating an uncertainty statistic with an energy flux, here and here, as well as on his own site.

Others have equated an uncertainty in temperature with a physical temperature.

If you wouldn’t mind, it would be of huge help if you might support my analysis elsewhere to others as the occasion permits.

Thanks again for stepping out. 🙂

Reply to  Nick Stokes
September 19, 2019 10:22 am

Nick,

Is TOA balance a constraint on the GCMs? Wouldn’t there be any number of non-unique solutions to the models if it wasn’t?

Reply to  Frank from NoVA
September 19, 2019 3:14 pm

Sorry, a few too many negatives there for me to process. But yes, it is an important constraint.

Reply to  Nick Stokes
September 19, 2019 4:58 pm

Thank you Nick. It sounded like a constraint to me. For this reason, I was puzzled by Dr. Spencer’s initial objection to Dr. Frank’s paper on the basis that GCMs achieve TOA balance. PS – Given your modeling expertise with DEs, did you ever do any work in quantitative finance?

Reply to  Frank from NoVA
September 20, 2019 12:06 am

Frank,
Oddly yes. I wrote a program Fastflo, an FEM PDE solver, intended for Fluid mechanics. We adapted it for modelling options pricing, and it became the basis of a plug-in for GFI FENICS. You can read about it here.

Beta Blocker
Reply to  Nick Stokes
September 19, 2019 10:55 am

Soden and Held’s postulated water vapor feedback mechanism is central to the theory that additional warming at the earth’s surface caused by adding CO2 to the atmosphere can be amplified, over time, from the +1 to +1.5C direct effect of CO2 into as much as +6C of total warming. (Citing Steven Mosher’s opinion that +6C is credible as an upper bound for feedback driven amplified warming.)

However, it is impossible at the current state of science to directly observe this postulated mechanism operating in real time inside of the earth’s climate system. The mechanism’s presence must be inferred from other kinds of observations. One important source of the ‘observations’ used to characterize and quantify the Soden-Held feedback mechanism is the output generated from the IPCC’s climate models, a.k.a. the GCM’s.

See this comment from Nick Stokes above:

https://wattsupwiththat.com/2019/09/19/emulation-4-w-m-long-wave-cloud-forcing-error-and-meaning/#comment-2799230

In the above comment, Nick Stokes says, “The point of my proof that you can match a prescribed solution to any kind of error behaviour just reinforces the point that you have in no way established the requirements for analysing the toy in place of the real.”

In his comment, Nick Stokes labels Pat Frank’s GCM output emulation equation as ‘the toy’ and the GCM’s the equation emulates as ‘the real.’

Referring to Soden and Held’s use of output from the GCM’s as observational data which supports their theory, it is perfectly appropriate to extend Nick Stokes’ line of argument by labeling the GCM’s as ‘the toys’ and the earth’s climate system itself as ‘the real.’

With this as background, I make this request of Nick Stokes and Roy Spencer:

Please post a list of requirements for producing and analyzing the outputs of GCM’s being used as substitutes for observations made directly within the earth’s real climate system. In addition, please include a glossary of scientific and technical terms which defines and clarifies the exact meaning and application of those terms, as these are being employed in your list of requirements.

Thanks in advance.

HAS
Reply to  Beta Blocker
September 19, 2019 1:52 pm

Right at the beginning of this debate I reminded the protagonists, particularly Nick, to be rigorous about the various constructs they were discussing. As you say, there is the real world, the current set of GCMs, and the linear emulator of that set of models. Add to that some future set(s) of potentially improved GCMs and their potential emulators.

The questions being discussed relate to the way each performs on temperature projection in and out of sample, and conclusions can only be drawn within the limitations of that framework.

I’ve decided in the end that rigour is to be avoided in favour of the rhetoric that the lack of it allows.

Reply to  Beta Blocker
September 19, 2019 3:12 pm

“Please post a list of requirements for producing and analyzing the outputs of GCM’s being used as substitutes for observations made directly within the earth’s real climate system. “
They aren’t a substitute for observing future states. We just don’t know how to do that yet.

But in fact they are used to enhance observation of the present. This is the reanalysis of data from numerical weather forecasting, which is really just re-running the processes of the forecasts themselves. It does build on the knowledge of the earth that we acquire from observation. And it is using programs from the GCM family.

Beta Blocker
Reply to  Nick Stokes
September 19, 2019 8:19 pm

Within the context of your ongoing debate with Pat Frank, your response indicates you have no intention of addressing what is a perfectly reasonable request.

Clyde Spencer
Reply to  Nick Stokes
September 19, 2019 11:17 am

Stokes,
You said, “They cover a huge number of variables, including atmospheric temperature at all levels.” And they also cover precipitation. They are notorious for doing a poor job of predicting precipitation at the regional level, with different models getting opposite results. This is further evidence that the models are unfit for purpose. So, what if they include “atmospheric temperature at all levels?” They may be “reasonable” in the sense that they are physically possible, but are they “reliable?”

Reply to  Nick Stokes
September 19, 2019 12:21 pm

Nick Stokes –> You’re missing the trees for the forest. If I drove 100 miles and used 10 gallons of gas I could calculate my average miles/gallon a couple of ways. I could assume a simple (KISS) linear model and simply divide 100 by 10. Or, I could go off and develop all kinds of equations that simulate aerodynamics, tire friction, ICE performance (as you mentioned), etc., and end up writing Global Mile per Gallon Model (GMGM). Which one do you think would give me the better answer with the least uncertainty?

Matthew R Marler
Reply to  Nick Stokes
September 19, 2019 5:04 pm

Nick Stokes: You are using your very simple model which produces near linear dependence of global average surface temperature,

Pat Frank does not have a simple model of global average surface temperatures, he has a simple model of GCM-forecast global average surface temperatures. He also does not have a model of any other GCM forecast, only global mean temperature. He does not claim to show that GCM forecasts are unreliable in all things, such as global annual rainfall, only that they are unreliable on their most cited forecasts, global mean temperature. If they are truly reliable for their other forecasts, that will be remarkable. He has provided a template for how the unreliability of other forecasts might be estimated: start by regressing rainfall forecasts against forcing inputs, and go from there; if it’s another monotonic function with a tight fit, we’re golden.

In the meantime, know that the GCM forecasts of mean global temp are unreliable.

Reply to  Matthew R Marler
September 19, 2019 10:28 pm

A very salient comment, indeed, Matthew.

You’ve exactly got what I did. It’s a real puzzle to me how something that straight-forward is such a mystery to so many.

Thanks. 🙂

Kenneth Denison
Reply to  Matthew R Marler
September 20, 2019 6:41 am

+1

Matthew Schilling
Reply to  Matthew R Marler
September 20, 2019 8:35 am

+1

Reply to  Nick Stokes
September 19, 2019 10:19 pm

Irrelevant, Nick. My analysis is about air temperature projections, and nothing else.

GCMs are just linear extrapolation machines. The uncertainty analysis follows directly from that.

The emulation equation shows the same sensitivity to GHG forcing as any GCM. Embarrassing, isn’t it.

All the rest of your caviling about the internal complexity of GCMs is just so much raising of the dust. Their output is simple. And that’s the rub.

Kenneth Denison
Reply to  Pat Frank
September 20, 2019 6:45 am

“The emulation equation shows the same sensitivity to GHG forcing as any GCM.”

This is a very important point and one that I would expect GCM modelers to want to dig into. This result certainly opens the possibility that the GCMs are all subject to significant modeler’s bias that should be analyzed and run to ground.

It is almost unbelievable that these complex models would yield a linear relationship between GHG forcing and temperature. Clearly the climate does not behave that way, indicating that the GCMs are not representing reality.

That so many cannot see this point is astounding to me.

Great job Dr. Frank

Clyde Spencer
Reply to  Kenneth Denison
September 20, 2019 11:16 am

Kenneth Denison
+1

I would not expect the output to be linear. I have programmed System Dynamics models with linear inputs, and the outputs were invariably non-linear.

Barbara
Reply to  Kenneth Denison
September 20, 2019 7:04 pm

+1 Kenneth

September 19, 2019 9:11 am

When CERN ran the CLOUD test with their particle beam and aerosols, they actually modelled the formation of Cloud Condensation Nuclei and got it wrong! Svensmark pointed that out in early 2018.
Since the GCMs cannot handle hurricanes (the joker card being that they lack resolution), they have no hope of handling Forbush decreases and CMEs.
So the reason in this case is not resolution, just lack of physics.

It is very refreshing to see resolution, uncertainty, error all clearly expressed.

Just a side note – Boeing engineering was forced to change the engines because of CO2, and used (outsourced) software to compensate, which failed. Someone decided physics and engineering could be sidelined. I just wonder if that software was ever run through such an uncertainty and error analysis?

J Mac
September 19, 2019 9:13 am

Dr. Frank,
An excellent ‘plain English’ explanation of your published paper and a succinct rebuttal of Nick Stokes’ differential-equations dissembling. You logically constrained Stokes to a black box ‘time out’.
Thank You!

September 19, 2019 9:28 am

There is no sensitivity to CO2. If there were, the specific heat tables would have to include a forcing term for air or CO2 whenever infrared is involved. But they don’t.

MDBill
September 19, 2019 9:32 am

Thank you, Dr. Frank. Your analyses are logical, and further our understanding of the credibility of “Models”, which have become the underpinnings of planned political movements. If we can only get the policy makers, the general public, and especially the youth to understand that the huge investments being contemplated are merely building castles in the sand. I’ve seen too many of these half-baked good intentions in my lifetime. (“The Great Society”, Vietnam, Iraq, etc.) Again, thank you for the breath of fresh air you are providing!

Reply to  MDBill
September 19, 2019 10:31 pm

Happy it worked out, MDBill.

One really beneficial outcome is to remove the despair that is being pounded into young people.

There’s every reason to think the future will be better, not doom-laden. That word should get out.

slow to follow
Reply to  Pat Frank
September 20, 2019 1:32 am

+10000000

The destructive impact of poor science promoted as certain fact is something which poses a bigger threat than inevitable and manageable changes in our environment.

slow to follow
Reply to  Pat Frank
September 20, 2019 5:06 am

Not sure if this live feed is visible from outside the UK, but if so it shows the level of conviction with which views based on GCM output are held:

Millions of people are joining a global climate strike

https://www.bbc.co.uk/news/live/world-49753710

Roy
September 19, 2019 9:35 am

Simulations start out wrong and get worse.

Is that the best one line summary of climate models ever written?

September 19, 2019 9:48 am

And I Nick Stokes’ Tavern did frequent
But came out not one whit
Wiser, than where in I went…

Reply to  Leo Smith
September 19, 2019 9:56 am

That’s the problem with random walk.

Matthew R Marler
Reply to  Nick Stokes
September 19, 2019 2:10 pm

Nick Stokes: That’s the problem with random walk.

Pat Frank’s procedure is not a random walk.

Reply to  Matthew R Marler
September 19, 2019 7:15 pm

I think Nick still believes that uncertainty is the same as a random error, and since random errors tend to cancel per the central limit theorem, that uncertainty does the same. Uncertainty, however, is not random!

Matthew R Marler
Reply to  Nick Stokes
September 19, 2019 7:23 pm

Nick Stokes: That’s the problem with random walk.

You have not, as far as I have read, explained why you think Pat Frank’s procedure is a random walk. Perhaps because the uncertainty is represented as the standard deviation of a probability distribution you think the parameter has a different randomly sampled value each year. That would produce a random walk. That is not what he did.

Reply to  Matthew R Marler
September 19, 2019 8:13 pm

“explained why you think Pat Frank’s procedure is a random walk”
Well, here is what he says on p 10:
“The final change in projected air temperature is just a linear sum of the linear projections of intermediate temperature changes. Following from equation 4, the uncertainty “u” in a sum is just the root-sum-square of the uncertainties in the variables summed together, i.e., for c = a + b + d + … + z, then the uncertainty in c is ±u_c =√(u²_a+u²_b+…+u²_z) (Bevington and Robinson, 2003). The linearity that completely describes air temperature projections justifies the linear propagation of error. Thus, the uncertainty in a final projected air temperature is the root-sum-square of the uncertainties in the summed intermediate air temperatures.”

Or look at Eq 6. The final uncertainty is the sqrt sum of variances (which are all the same). The expected value of the sum. How is that not a random walk?

And note that despite the generality of Eq 3 and 4, he is assuming independence here, though doesn’t say so. No correlation matrix is used.

Reply to  Nick Stokes
September 19, 2019 10:37 pm

It’s not a random walk because it’s not about error, Nick.

It’s about uncertainty.

Matthew R Marler
Reply to  Nick Stokes
September 20, 2019 1:04 am

Nick Stokes: And note that despite the generality of Eq 3 and 4, he is assuming independence here, though doesn’t say so. No correlation matrix is used.

That part was explained already: the correlation is used in computing the covariance.

Reply to  Nick Stokes
September 20, 2019 1:06 am

“Uncertainty, however, is not random!”
“It’s about uncertainty.”

Uncertainty has a variance (Eq 3). And its variance compounds by addition through n steps (eq 4). That is exactly how a random walk works. Just below Eq 4:

“Thus, the uncertainty in a final projected air temperature is the root-sum-square of the uncertainties in the summed intermediate air temperatures.”

That is exactly a random walk.

Reply to  Nick Stokes
September 20, 2019 5:06 am

Nick,

“Thus, the uncertainty in a final projected air temperature is the root-sum-square of the uncertainties in the summed intermediate air temperatures.”

That is exactly a random walk.”

No, it is not a random walk. The uncertainties are not random in nature, therefore their sum cannot be a random walk.

Reply to  Nick Stokes
September 20, 2019 2:38 am

“the correlation is used in computing the covariance.”
Where? Where did the data come from? What numbers were used?

As far as I can see, the arithmetic of Eqs 5 and 6 is fully spelt out, with a hazy fog of units. The numbers are given. None relates to correlation. No term for correlation or covariance appears.

Clyde Spencer
September 19, 2019 10:30 am

“In the paper, GCMs are treated as a black box.”
As they should be because programs of that size are not easily examined and understood by those who are not paid to do so, and can expend the necessary time to step through the Fortran code. There is an old saying that “All non-trivial computer programs have bugs.” Parallel processing programs of the size of GCMs, rife with numerically-approximated partial differential equations, certainly qualify as being “non-trivial.”

Reply to  Clyde Spencer
September 19, 2019 10:37 am

“As they should be because programs of that size are not easily examined and understood”
So you write a paper about how you don’t understand GCM’s, so you’ll analyse something else?

Clyde Spencer
September 19, 2019 10:32 am

Pat,
I like the imagery of your “extraneous differential gargoyles.”

Reply to  Clyde Spencer
September 19, 2019 10:38 pm

I had William of Ockham in mind when I wrote that, Clyde. 🙂

September 19, 2019 10:43 am

Pat, your work and the responses in critical articles and thoughtful comments here has been the best example of science at work at WUWT in a long time. Today’s response from you has clarified a complex issue. Many thanks. You have also inspired an (old) idea and a way forward in development of a more robust theory in your comments below:

“Does cloud cover increase with CO₂ forcing? Does it decrease? Do cloud types change? Do they remain the same?

What happens to tropical thunderstorms? Do they become more intense, less intense, or what? Does precipitation increase, or decrease?”

I think the answer to these questions is calling out loud and clear. Our fixation on satellite and computer tech has blinded us to the importance of old fashioned detailed fieldwork for getting at the answers. I am a geologist who has sweated out mapping geology on foot, canoe, Landrover, helicopter etc. on geological survey and mining exploration work in Canada, Africa, US and Europe.

We know the delta CO2 well enough. We need to make millions of observations in the field along with help from our tech and record local (high resolution) changes in temperatures, pressures, humidity, wind speeds and directions, details on development and physiology of thunderstorms. A new generation of buoys that can see the sky and record all this would also be useful.

Doubting that such a task could be accomplished? Here is a Geological Map of Canada that is a compilation of millions of observations, records and interpretations (a modest number of pixels of this is my work, plus ~35,000 km^2 of Nigeria, etc.)

https://geoscan.nrcan.gc.ca/starweb/geoscan/servlet.starweb?path=geoscan/fulle.web&search1=R=208175

Scroll down a page, tap the thumbnail image and expand with your fingers.

Reply to  Gary Pearse
September 19, 2019 10:47 pm

Your comment, Gary, that, “I think the answer to these questions is calling out loud and clear. Our fixation on satellite and computer tech has blinded us to the importance of old fashioned detailed fieldwork for getting at the answers. I am a geologist who has sweated out mapping geology on foot, canoe, Landrover, helicopter etc. on geological survey and mining exploration work in Canada, Africa, US and Europe.” …

expresses something I’ve also thought for a long time.

Climate modeling has abandoned the reductionist approach to science. They try to leap to a general theory, without having done all the gritty detail work of finding out how all the parts work.

Their enterprise is doomed to failure, exactly for that reason.

It won’t matter how well they parse their differential equations, how finely they grid their models, or how many and powerful are their parallel processors. They have skipped all the hard-scrabble work of finding out how the parts of the climate system operate and how they couple.

Each bit of that is the work of a lifetime and brings little glory. It certainly disallows grand schemes and pronouncements. Perhaps that explains their avoidance.

Kenneth Denison
Reply to  Pat Frank
September 20, 2019 6:48 am

+100

Reply to  Pat Frank
September 20, 2019 7:53 am

Oh God, you have hit the nail on the head for all of post-modern science. It’s all about me and how much fame and fortune I can gather. Doing gritty work, that’s for peons, not for educated scientists.

+100

Matthew Schilling
Reply to  Pat Frank
September 20, 2019 8:39 am

Mic drop!

David Longinotti
September 19, 2019 10:45 am

I attempt a first-order analogy of the earth’s temperature with the water-level of a hypothetical lake. This causes me to question both the GCMs and Dr. Frank’s method of estimating their error bounds:

Suppose it is observed that the water level of some lake varies up and down slightly from year to year, but over numerous years has a long-term trend of rising. We want to determine the cause. Assume there are both natural and human contributors to the water entering the lake. The natural “forcing” of the lake’s level consists of streams that carry rainwater to the lake, and the human forcing is from nearby houses that empty some of their waste-water into the lake. Some claim that it is the waste-water from the houses that is causing most of the long-term rise in the lake. This hypothesis is based on a model that is thought to accurately estimate the increasing amount of water contributed yearly by the houses, as more developments are built in the vicinity. However, the measurement of the other contributor, the water that flows naturally into the lake, is not very good; the uncertainty in that water flow is 100 times greater than the modeled amount of water from the houses. Presumably, in such a case, one could not conclude with any confidence that it is the human ‘forcing’ that is causing the bulk of the rise in the lake.

Similarly, given the uncertainty in the contribution of natural forcings like clouds on earth’s temperature, the GCMs give us little or no confidence that the source of the warming is mainly human CO2 forcing.

We could remove the effects of clouds in the GCMs if we knew that their influence on world temperature was constant from one year to the next, just as in the analogy we could remove the effects of natural sources of water on the level of the lake if we knew that the streams contribute the same amount each year. But, presumably, we don’t have good knowledge of the variability of cloud forcings from one year to the next, and I think this is the problem with Dr. Frank’s error calculation. To calculate the error in the GCM predictions, what is needed is the error in the variability of the cloud effects from one year to the next, not the error in their absolute measurement. Perhaps this is what Dr. Spencer was getting at in his critique of Dr. Frank’s method. To analogize once again, if my height can only be measured to the nearest meter as I grow, there is an uncertainty of one meter in my actual, absolute height at the time of measurement. But this provides no reasonable basis for treating the error in my predicted height as cumulatively increasing by many meters as years go by.

Reply to  David Longinotti
September 19, 2019 1:03 pm

Good God, David L,

That analogy so clouds my understanding.

I have never understood the approach of creating a mind-boggling analogy to help clarify an already mind-boggling argument. It’s as if you substitute one complexity for another and ask us to dissect the flaws or attributes of an entirely separate thing, in addition to trying to understand what is already hard enough to understand.

GENERAL REQUEST: Stop with the convoluted analogies that only confuse the issue more.

Antero Ollila
September 19, 2019 11:08 am

I figured it out, maybe. If the annual CO2 increase is 2.6 ppm, then the annual RF value for CO2 is 0.035 W/m^2. It cannot be called an average value, but it is at the high end of the present CO2 annual growth variations. The long-term annual CO2 growth rate has been something like 2.2 ppm.

Reply to  Antero Ollila
September 21, 2019 2:20 pm

Why is it such a big mystery, Antero? I described the method as the average since 1979.

The forcings I used were for 1979-2013, calculated using the equations of Myhre (1998):

In 1979, excess CO2 forcing was 0.653 W/m^2; CO2 + N2O + CH4 forcing was 1.133 W/m^2.

In 2013 they were 1.60 and 2.44 W/m^2.

CO2 = (1.60-0.653)/34 = 0.028 W/m^2.

Major GHG = (2.44 – 1.13)/34 = 0.038 W/m^2

The numbers to 2015 at the EPA page give 0.025 W/m^2 and 0.030 W/m^2, respectively.
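
A sketch of the arithmetic in this comment, for readers who want to check it. The Myhre et al. (1998) simplified CO2 expression is shown only for context, with an assumed, commonly used pre-industrial baseline; the per-year averages use nothing but the forcing values quoted above.

```python
# Checking the per-year forcing averages quoted above. The co2_forcing()
# helper shows the Myhre et al. (1998) simplified CO2 expression for context;
# the 278 ppm baseline is an assumed, commonly used pre-industrial value.
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Simplified CO2 forcing, W/m^2: 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

years     = 2013 - 1979                    # 34 years
co2_only  = (1.60 - 0.653) / years         # -> ~0.028 W/m^2 per year
major_ghg = (2.44 - 1.133) / years         # -> ~0.038 W/m^2 per year
print(round(co2_only, 3), round(major_ghg, 3))
```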

Robert Stewart
September 19, 2019 11:15 am

It’s not coincidental that much of the most persuasive criticism of the climate scam has come from professionals who deal in engineering and economic analyses, such as McIntyre and McKitrick, or from scientists like Dr. Frank who seek to use experimental data to prove or disprove theoretical calculations. There’s nothing like reality, whether measured in dollars or in the failure of devices, to focus one’s mind.

Consider the manufacture of any large structure, say an airplane or a ship. Mass production methods require that the components of the final product be assembled in an efficient process. The tolerances of each part must be sufficiently tight that when a large number of them are put together, the resultant subassembly can still satisfy similarly tight tolerances, so that the final assembly stays within tolerance. Boeing’s attempt to farm out subassemblies of the 787 was only partially successful because manufacturing practices in some countries simply weren’t at the level needed.

This traces back to WWII and aircraft production facilities like that at Willow Run, where Ford produced B-24s at a rate of about one aircraft per hour. B-24s were assembled at Willow Run using about 20,000 manhours, whereas the previous methods used by Consolidated in San Diego took about 200,000 manhours. Much of those 200,000 hours were spent by craftsmen working to get all the disparate, relatively low-tolerance pieces to fit together. Construction of huge tankers and bulk carriers faces the same problem as very large subassemblies are brought together. Failure to control the tolerances of the parts and pieces means that the final assembly cannot be completed without costly reworking of the parts.

So reality lends a hand in focusing the engineering effort. Ten percent uncertainties in the widths of pieces that were to be assembled into the engine room of a tanker would be highly visible and painfully obvious, even to a climate modeler.

Stevek
September 19, 2019 11:18 am

Dr Frank,

If we have some idea of the probability distribution of the cloud forcing uncertainty, can we get a probability distribution for the temperature at the end of 100 years that the model gives? Can another formula, instead of the square root of the sum of errors squared, be used if we know more about the error distribution at each step?

Reply to  Stevek
September 19, 2019 10:55 pm

Stevek, how does one know the error probability distribution of a simulation of a future state? There are no observables.

Stevek
Reply to  Pat Frank
September 20, 2019 1:56 pm

Thank you! That makes sense to me now and clears up my thinking. The uncertainty itself at the end of 100 years must have a distribution that can be calculated if we know the distribution of all variables that go into the initial state? You are not saying all points within the ignorance are equally likely?

Reply to  Stevek
September 21, 2019 1:45 pm

Stevek, I’m saying that no one knows where the point should be within the uncertainty envelope.

To be completely clear: suppose the uncertainty bound is (+/-)20 C. Suppose, also, that the physical bounds of the system require that the solution be somewhere within (+/-)5 C.

Then the huge uncertainty means that the model cannot say where the correct value should be, within that (+/-)5 C.

The uncertainty is larger than the physical bounds. This means the prediction, whatever value it takes, has no physical meaning.

September 19, 2019 11:28 am

“I want you to unite behind the science. And then I want you to take real action.”

Swedish climate activist Greta Thunberg appeared before Congress to urge lawmakers to “listen to the scientists” and embrace global efforts to reduce carbon emissions. https://twitter.com/ABC/status/1174417222892232705

Dr. Frank, have you received your invitation to present actual Science to the Commi….huh?….no “contrary views allowed”….”only CONSENSUS ‘Science’ is acceptable?”…..oh, well, sorry to have bothered you.

Reply to  TEWS_Pilot
September 19, 2019 10:56 pm

No invitations, TEWS. You’re right. 🙂

September 19, 2019 11:59 am

I quote from Chapter 20, “Basic equations of general circulation models,” of “Atmospheric Circulation Dynamics and General Circulation Models” by Masaki Satoh:

One of the most uncertain factors in the reliability of currently used general circulation models is the use of cumulus parameterization. Since the horizontal extent of cumulus convection is about 1 km, the effects of cumulus convection must be statistically treated in general circulation models with horizontal resolutions of about 100 km. However, it is very difficult to appropriately parameterize all the statistical effects of cumulus convection, though many kinds of cumulus parameterizations are being used in current models. As the horizontal resolution of numerical models approaches 1 km, individual clouds can be directly resolved in the models, so that it is expected that we will no longer need to use such cumulus parameterization based on statistical hypothesis. Thus, the likely horizontal resolution of next generation general circulation models is a few kilometers. We expect the use of models with 10-km resolution or less will come within the range of our computer facilities. With such finer resolution models, the assumption of hydrostatic balance is no longer acceptable. We must switch governing equation of the general circulation models from hydrostatic primitive equations to non-hydrostatic equations. As for vertical resolution, we do not have a suitable measure of its appropriateness.

And Dr. Frank makes the following comment:
>>
The CMIP5 GCM annual average 12.1% error in simulated CF is the resolution lower limit. This lower limit is 121 times larger than the 0.1% resolution limit needed to model the cloud feedback due to the annual 0.035 W/m^2 of CO₂ forcing.
<<

So let’s talk about the grid resolutions of CMIP5 GCMs. Here is a link to a list of resolutions: https://portal.enes.org/data/enes-model-data/cmip5/resolution.

The finest resolution is down to about 0.1875 degrees (which is probably questionable). Most of the resolutions are around 1 degree or more. A degree on a great circle is 60 nautical miles. That is more than 111 km. Even the 0.1875 degree resolution is more than 20 km. Obviously they are using parameterization to deal with cumulus convection. In other words, cumulus convection is one of the more important physics of the atmosphere, and they are making it up.

Jim
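
A quick check of the grid-spacing arithmetic in the comment above (only the conversion factors are assumed; the 0.1875° figure comes from the linked CMIP5 table):

```python
# Convert angular grid spacing to an approximate great-circle distance.
KM_PER_DEGREE = 60 * 1.852   # 60 nautical miles per degree * 1.852 km/nmi ~ 111 km

for res_deg in (1.0, 0.5, 0.1875):
    print(f"{res_deg:6.4f} deg  ->  ~{res_deg * KM_PER_DEGREE:5.1f} km")
# Even the finest ~0.1875 deg grid spacing is ~21 km, far coarser than the
# ~1 km scale of individual cumulus convection noted in the Satoh excerpt.
```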

Clyde Spencer
September 19, 2019 11:59 am

Pat,

Stokes has previously stated, “… yes, DEs will generally have regions where they expand error, but also regions of contraction.” As I read this, it isn’t obvious or easily determined just where the expansions or contractions occur, or how to characterize them other than by running a large number of ensembles to estimate the gross impact.

I think that an important contribution you have made is the insight of being able to emulate the more complex formulations of GCMs with a linear model. You are then able to demonstrate in a straight forward way, and certainly more economically than running a large number of ensembles, the behavior of uncertainty in the emulation. It would seem reasonable to me that if the emulation does a good job of reproducing the output of GCMs, then it should also be properly emulating the uncertainty.

Reply to  Clyde Spencer
September 19, 2019 11:01 pm

It seems reasonable to me, too, Clyde. It also seemed reasonable to my very qualified reviewers at Frontiers.

Mark Broderick
September 19, 2019 12:13 pm

How about getting these same “models” to explain deep “Ice Ages” while CO2 was much higher than today? If they can’t do that then they are worthless for predicting the future….

Mark Broderick
September 19, 2019 12:28 pm

Stokes… “… yes, DEs will generally have regions where they expand error, but also regions of contraction.” ?
How could any intelligent person assume that the positive and negative errors would cancel each other out?

Tommy
Reply to  Mark Broderick
September 19, 2019 2:30 pm

Completely agree. It’s mind boggling to me, but as far as I can tell that was Dr. Spencer’s argument.

Reply to  Mark Broderick
September 19, 2019 2:31 pm

“cancel each other out”
Who said that? Firstly, it isn’t positive or negative, but expanding and contracting. But more importantly I’m saying that there is a whole story out there that you just can’t leave out of error propagation. It’s what DEs do.

In fact, I think the story is much more complicated and interesting than Pat Frank has. The majority of error components diminish because of diffusion (viscosity). Nothing of that in Pat’s model. But some grow. The end result is chaos, as is well recognised, and is true in all fluid flow. But it is a limited, manageable problem. We live in a world of chaos (molecules etc) and we manage quite well.

Tommy
Reply to  Nick Stokes
September 19, 2019 4:51 pm

Who said errors cancel? Dr Spencer wrote this:

“The reason is that the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance”

Reply to  Tommy
September 19, 2019 5:14 pm

“Dr Spencer wrote this”
Well, it wasn’t me. But it is a different context. He is saying, not that the biases are assumed to balance at TOA, but that they are required to balance. This is an application of conservation of energy in the model, and would prevent the sort of accumulation of error that Pat is claiming. Not that it even arises; he seems to have abandoned the claim that the units of the RMSE involved are 4 Wm⁻² year⁻¹.

Clyde Spencer
Reply to  Nick Stokes
September 19, 2019 5:40 pm

Stokes
You really don’t understand! Only the nominal (calculated) values output at each time step can be tested for “TOA balance” or any test of reasonableness. Unless the calculations are performed in tandem with the maximum and minimum probable values, the “accumulation error” (as you call it) isn’t going to show up. That is, the way it is done, with a single value being output, the uncertainties have to be calculated separately.

Pat is NOT claiming that the nominal value drifts with time, but rather, that the uncertainty envelope around the calculated nominal value rises more rapidly than the predicted temperature increase.

Reply to  Nick Stokes
September 19, 2019 6:45 pm

Clyde,
“Unless the calculations are performed in tandem with the maximum and minimum probable values, the “accumulation error” (as you call it) isn’t going to show up. “
I commented earlier about the tribal gullibility of sceptics, which you seem to exhibit handsomely. I noted, for example, the falling in line behind the bizarre proposition that the 4 Wm⁻² added a year⁻¹ to the units because it was averaged over a year (if it was). Folks nodded sagely, of course it must be so. Now the year⁻¹ has faded away. So, I suppose, they nod, yes was it ever different? Certainly no-one seems interested in these curious unit changes.

And so it is here. Pat creates some weird notion of an uncertainty that goes on growing, and can’t be tested, because it would be wrong to expect to see errors in that range. “You really don’t understand!”, they say. “The ‘accumulation error’ (as you call it) isn’t going to show up.”

So what is this uncertainty that is never going to show up? How can we ever be affected by it? Doesn’t sound very scientific.

Tommy
Reply to  Nick Stokes
September 19, 2019 6:59 pm

I know you didn’t say it (though you referenced and supported Dr. Spencer’s overall take elsewhere). And, it seems like you don’t disagree with the statement and think it’s relevant to Dr. Frank’s error accumulation argument (correct?).

Honestly, it seems to me folks are talking past each other.

The issue isn’t that “errors” accumulate in the sense that the variance of expected model outcomes would increase. They’re engineered not to.

The “error” of interest, and the one that does accumulate, is our confidence (or lack of it) that the model is accurately modeling the Real World.

Do you disagree that the value proposition of the models is that they are predictive and that they are predictive because they (purportedly) simulate reality?

Reply to  Nick Stokes
September 19, 2019 7:51 pm

Tommy,
“that they are predictive because they (purportedly) simulate reality”
They simulate the part of reality that they claim to simulate, namely climate. It is well acknowledged that they don’t predict weather, up to and including ENSO. That is another thing missing from Pat Frank’s analysis. He includes all uncertainty about weather in his inflated totals.

That comes back to my point about chaos. It means you can’t predict certain fine scale features of the solution. But in that, it is usually reflecting reality, where those are generally unknown, because they don’t affect anything we care about. For example, CFD won’t predict the timing of shedding of vortices from a wing. It does a reasonable job of getting the frequency right, which might be important for its interaction with structural vibrations. And it does a good job of calculating the average shed kinetic energy in the vortices, which shows up in the drag.

Those are the things that you might want to do an uncertainty analysis on. No use lumping in the uncertainty of things you never wanted to know about.

Tommy
Reply to  Nick Stokes
September 19, 2019 8:30 pm

Nick, I appreciate your thoughtful reply, but I’m confused by this statement:

“Those are the things that you might want to do an uncertainty analysis on. No use lumping in the uncertainty of things you never wanted to know about.”

Isn’t the parameter Dr. Frank is isolating an input to the model at each iteration? Isn’t an input necessarily something you want to know about?

And, given that:

“That comes back to my point about chaos. It means you can’t predict certain fine scale features of the solution”

But, if you can’t predict the small things that are iterative inputs to your model, how can you hope to predict the larger things (climate) that depend on them?

It seems to me that in order to remove the accumulation of uncertainty, you either have to remove Dr. Frank’s parameter of focus from the model (with justification) or improve the modeling accuracy of it. You can’t whitewash the fact that you can’t model that small bit of the puzzle by claiming you get the bigger picture correct, when the bigger picture is a composite of the littler things.

Reply to  Tommy
September 19, 2019 9:09 pm

+1

Reply to  Nick Stokes
September 19, 2019 11:12 pm

Nick, “This is an application of conservation of energy in the model, and would prevent the sort of accumulation of error that Pat is claiming.

I claim no accumulation of error, Nick. I claim growth of uncertainty. You’re continually making this mistake, which is fatal to your case.

This may help:

Kline SJ. The Purposes of Uncertainty Analysis. Journal of Fluids Engineering. 1985;107(2):153-60. https://doi.org/10.1115/1.3242449

The Concept of Uncertainty

Since no measurement is perfectly accurate, means for describing inaccuracies are needed. It is now generally agreed that the appropriate concept for expressing inaccuracies is an “uncertainty” and that the value should be provided by an “uncertainty analysis.”

An uncertainty is not the same as an error. An error in measurement is the difference between the true value and the recorded value; an error is a fixed number and cannot be a statistical variable. An uncertainty is a possible value that the error might take on in a given measurement. Since the uncertainty can take on various values over a range, it is inherently a statistical variable.

The term “calibration experiment” is used in this paper to denote an experiment which: (i) calibrates an instrument or a thermophysical property against established standards; (ii) measures the desired output directly as a measurand so that propagation of uncertainty is unnecessary.

The information transmitted from calibration experiments into a complete engineering experiment on engineering systems or a record experiment on engineering research needs to be in a form that can be used in appropriate propagation processes (my bold). … Uncertainty analysis is the sine qua non for record experiments and for systematic reduction of errors in experimental work.

Uncertainty analysis is … an additional powerful cross-check and procedure for ensuring that requisite accuracy is actually obtained with minimum cost and time.

Propagation of Uncertainties Into Results

In calibration experiments, one measures the desired result directly. No problem of propagation of uncertainty then arises; we have the desired results in hand once we complete measurements. In nearly all other experiments, it is necessary to compute the uncertainty in the results from the estimates of uncertainty in the measurands. This computation process is called “propagation of uncertainty.”

Let R be a result computed from n measurands x_1, …, x_n, and let W denote an uncertainty, with the subscript indicating the variable. Then, in dimensional form, we obtain: W_R = sqrt[sum over i of ((∂R/∂x_i)·W_i)^2].”

https://doi.org/10.1115/1.3242449
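
For anyone who wants to see that root-sum-square rule in action, here is a minimal numerical sketch (plain Python; the sensitivities and measurand uncertainties are made up purely for illustration, and the variable names are mine, not Kline’s):

import math

# Hypothetical example: a result R computed from three measurands x_1, x_2, x_3.
# The sensitivities dR/dx_i and measurand uncertainties W_i below are assumed values,
# chosen only to illustrate the root-sum-square propagation rule.
sensitivities = [2.0, -0.5, 1.3]    # dR/dx_i (assumed)
uncertainties = [0.1, 0.4, 0.05]    # W_i (assumed)

# Dimensional root-sum-square: W_R = sqrt( sum_i ((dR/dx_i) * W_i)^2 )
W_R = math.sqrt(sum((s * w) ** 2 for s, w in zip(sensitivities, uncertainties)))
print(W_R)   # about 0.29 for these assumed numbers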

Reply to  Nick Stokes
September 19, 2019 11:15 pm

Nick, “Now the year⁻¹ has faded away.

Wrong again, Nick. It’s indexed away. I’ve answered your querulousness several times.

Reply to  Nick Stokes
September 19, 2019 11:53 pm

Tommy,
“Isn’t the parameter Dr. Frank is isolating an input to the model at each iteration? Isn’t an input necessarily something you want to know about?”
No, it isn’t. There is a parametrisation, to which Pat wants to attach this uncertainty. It isn’t a new uncertainty at each iteration; that would give a result that would make even Pat blanch. He says, with no real basis, every year. I don’t believe that it is even an uncertainty of the global average over time.

“But, if you can’t predict the small things that are iterative inputs to your model”
Because many have only transient effect. Think of a pond as an analogue solver of a CFD problem. Suppose you throw a stone in to create an error. What happens?

The stone starts up a lot of eddies. There is no net angular momentum, because that is conserved. The angular momentum quickly diffuses, and the eddies subside.
There is a net displacement of the pond. Its level rises by a micron or so. That is the permanent effect.
And there are ripples. These are the longest lasting transient effect. But they get damped when reflected from the shore, or if not, then by viscosity.
And that is it, typical of what happens to initial errors. The one thing that lasts is given by an invariant, conservation of mass, which comes out as volume, since density is constant.

Reply to  Nick Stokes
September 20, 2019 12:26 am

Pat
“Nick, “Now the year⁻¹ has faded away.”
Wrong again, Nick. It’s indexed away. “

You’ve excelled yourself in gibberish. Units are units. They mean something. You can’t “index them away”.

Reply to  Nick Stokes
September 21, 2019 1:18 pm

Nick, “You can’t “index them away”.

Let me clarify it for you, Nick. Notice the yearly index “i” is not included in the right side of eqn. 5.2. That’s for a reason.

But let’s put it all back in for you, including the year^-1 on the (+/-)4 W/m^2.

Right side: (+/-)[0.42 * 33 K * 4 W/m^2 year^-1/F_0]_year_1, where “i” is now the year-1 index.

Cancelling through: (+/-)[0.42 * 33 K * 4 W/m^2/F_0]_1.

That is, we now have the contribution of uncertainty to the first-year projection temperature, indexed “1.”

For year two: (+/-)[0.42 * 33 K * 4 W/m^2 year^-1/F_0]_year_2, and

(+/-)[0.42 * 33 K * 4 W/m^2/F_0]_2; index “2.”

And in general, for year n: (+/-)[0.42 * 33 K * 4 W/m^2 year^-1/F_0]_year_n, which cancels to (+/-)[0.42 * 33 K * 4 W/m^2/F_0]_n; index “n.”

= (+/-)u_i

And those are what go on into eqn. 6, with units K.
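
To put numbers on it, here is a minimal sketch of that arithmetic (plain Python; the F_0 value of 33.3 W/m^2 and the 80-year horizon are used here purely for illustration, and the variable names are mine):

import math

F0 = 33.3                           # W/m^2; value assumed here purely for illustration
u_year = 0.42 * 33.0 * 4.0 / F0     # K; the per-year uncertainty term (+/-)u_i above

n_years = 80                        # an example projection length
# Eqn. 6 as described: root-sum-square of the annual (+/-)u_i terms, in K
u_total = math.sqrt(n_years * u_year ** 2)

print(round(u_year, 2), round(u_total, 1))   # about 1.66 K per year; about 14.9 K after 80 years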

You may not get it, Nick, but every scientist and engineer here does.

Reply to  Nick Stokes
September 21, 2019 1:22 pm

Nick, “It isn’t a new uncertainty at each iteration;

Yes it is. It derives from deficient theory.

Reply to  Nick Stokes
September 21, 2019 6:06 pm

Pat,
“but every scientist and engineer here does”
Every scientist and engineer understands the importance of being clear and consistent about units. Not just making it up as you go along.

This story on indexing is just nuttier. To bring it back to Eq 5.1 (5.2 is just algebraically separated), you have a term
F₀ + ΔFᵢ ±4Wm⁻²
and now you want to say that it should be
F₀ + ΔFᵢ ±4Wm⁻²year⁻¹*yearᵢ
“i” is an index for year. But year isn’t indexed. year₁ isn’t different to year₂; years are all the same (as time dimension). Just year.

So now you want to say that the units of 4Wm⁻² are actually 4Wm⁻²year⁻¹, but whenever you want to use it, you have to multiply by year. Well, totally weird, but can probably be made to work. But then what came of the statement just before Eq 6:
“The annual average CMIP5 LWCF calibration uncertainty, ±4Wm⁻²year⁻¹, has the appropriate dimension to condition a projected air temperature emulated in annual time-steps.”
What is the point of modifying Lauer’s dimension for the quantity, saying that that is the “appropriate dimension”, and then saying you have to convert back to Lauer’s dimension before using it?

Reply to  Nick Stokes
September 21, 2019 10:54 pm

Nick, “… but can probably be made to work.

Good. You’ve finally conceded that you’ve been wrong all along. You didn’t pay attention to the indexing, did you, so intent were you on finding a way to kick up some dust.

What is the point of modifying Lauer’s dimension for the quantity, saying that that is the “appropriate dimension”,

Right. There you go again claiming that an annual average is not a per year value. Really great math, Nick. And you call me nutty. Quite a projection.

… and then saying you have to convert back to Lauer’s dimension before using it?

The rmse of Lauer and Hamilton is an annual average, making it per year, no matter how many times you deny it. Per year, Nick.

I changed nothing. I followed the units throughout.

You just missed it in your fog of denial. And now you’re, ‘Look! A squirrel!’ hoping that no one notices.

Clyde Spencer
Reply to  Nick Stokes
September 20, 2019 10:47 am

Stokes,

It has been a long time since someone has referred to me as “handsome.” So, thank you. 🙂

Now, pleasantries aside, on to your usual sophistry. The graph on the right side of the original illustration (panel b) has a red line that is not far from horizontal. That is the nominal prediction of temperatures, based on iterative calculations. It does not provide any information on the uncertainty of those nominal values. Overlaying that is a lime-green ‘paraboloid’ opening to the right. That is the envelope of uncertainty, and is calculated separately from the nominal values, again by an iterative process. The justification for the propagation of error in the forcing is stated clearly (Frank, 2019): “That is, a measure of the predictive reliability of the final state obtained by a sequentially calculated progression of precursor states is found by serially propagating known physical errors through the individual steps into the predicted final state.”
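
A toy illustration of the distinction (plain Python; the per-step numbers are made up and are not from the paper): the nominal trajectory is one calculation, and the uncertainty envelope is a separate, parallel calculation that widens at every step.

import math

trend = 0.02      # made-up nominal temperature increment per step
u_step = 1.0      # made-up per-step uncertainty

for n in range(1, 6):
    nominal = trend * n                  # the single value the model reports each step
    envelope = u_step * math.sqrt(n)     # separately propagated uncertainty, +/- sqrt(n)*u
    print(n, round(nominal, 2), round(envelope, 2))

# The nominal value barely moves, while the +/- envelope around it grows with sqrt(n),
# which is exactly why the envelope never “shows up” in the nominal output alone.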

I know that you are bright and well-educated, so other than becoming senile, the only other explanation for your obtuseness that seems to make sense is that you don’t want to understand it, perhaps because of your personal “tribalness.”

You might find it worth your while to peruse this:
https://www.isobudgets.com/pdf/uncertainty-guides/bipm-jcgm-100-2008-e-gum-evaluation-of-measurement-data-guide-to-the-expression-of-uncertainty-in-measurement.pdf

bit chilly
Reply to  Mark Broderick
September 20, 2019 12:07 am

You don’t have to assume; that’s where the fudge factors come in. When the model run starts to deviate beyond what would be termed reasonable, a fudge factor is applied to bring it back into line. Given the known unknowns and the unknown unknowns involved in the climate, it is the ONLY way the models can run in the territory of reasonableness for so long.

If any of these models were ever to be evaluated line by line, I would be willing to bet my house, in a legally binding document, that the above is the case.

Gator
September 19, 2019 12:38 pm

Thanks so much for all your perseverance Pat, I have been sharing your work with anyone who would listen, since I ran across it this past July. Once the public understands that all of the hype surrounding the “climate crisis” is based upon models, and that those same models are simply fictions created by those that “believe”, virtually all of this nonsense will end. And that is why your very important work is being misrepresented by those who stand to lose everything.

Reply to  Gator
September 19, 2019 11:18 pm

Thanks, Gator. And you’re right: literally, the transfer of trillions of dollars away from the middle class into the pockets of the rich, and many careers, are under threat from that paper.

Ethan Brand
September 19, 2019 12:55 pm

Pat is making another critical explanation here that is not being addressed by most of the posters:
How GCMs actually produce their output is completely irrelevant (black box). Whether a table lookup or the most advanced AI available, it is NOT relevant to his analysis. This point is being completely missed by most, and hence most of the criticism is irrelevant. Relevant criticism needs to address this key concept. To defend the skill of GCM output, defenders need to specifically address the CF uncertainty relative to CO2 effects.

September 19, 2019 1:07 pm

Here:

https://wattsupwiththat.com/2019/09/19/emulation-4-w-m-long-wave-cloud-forcing-error-and-meaning/#comment-2799242

… Nick S wrote (among other things):

That is actually the point of GCM’s. They go beyond the time when weather can be predicted, but they don’t blow up. They keep calculating perfectly reasonable weather. It isn’t a reliable forecast any more, but it has the same statistical characteristics, which is what determines climate.

His particular wording there caught my attention, because, at first glance, it seems nonsensical.

How can climate models that produce outcomes following reasonable weather calculations determine a reasonably REAL climate forecast? What reliable prediction of climate could be fashioned from reasonable-but-unreliable forecasts? — that makes no sense to me. The unreliability of the concomitant weather forecasts would seem to propagate into the unreliability of the climate forecasts that the models produce.

Apples might be rotten, but they still determine the pie? A rotten apple is still a reasonable apple? A pie of rotten apples is still a reasonable pie? An unreliable forecast is a reasonable forecast?

A sum of reasonable-but-unreliable weather calculations would seem to constitute a reasonable-but-unreliable climate forecast. I think that what we primarily seek in climate models is reliable, NOT merely “reasonable”. Reasonable alone can be a mere artifact of internal consistency. Reasonableness in climate models seems to be built in — that’s what models are — reasonable representations of something, based on the reasoning in their own structure.

What is UNreasonable about this is that the model does not represent reality — it represents a reasonable model of reality that is unreliable — unfit to dictate real-world decisions. Using an unreliable model to dictate real-world decisions is unreasonable. It is the USE of climate models, then, that is UNREASONABLE.

Models probably have great use for studying the complexity of climate. They might be great educational tools. But they do not seem to be great practical tools to guide the development of civilization. They are UNREASONABLE tools for helping to shape civilization.

Tommy
Reply to  Robert Kernodle
September 19, 2019 3:10 pm

Exactly!

They’re a model of something and that something has temperature outcomes that are plausible for our reality (in addition to being politically convenient), but as far as I can tell some very smart people aren’t making the connection that this says NOTHING about what is actually going to happen in the real world.

Reply to  Robert Kernodle
September 20, 2019 8:52 am

It sounds to me that he is saying the results they get are what they expected, so they do not think they could be wrong. In fact they do not believe they CAN be wrong.
After all, it was just like they expected it would be.

DABidwell
September 19, 2019 1:10 pm

If a model can emulate a model, and we believe the first model is correct, wouldn’t it follow that the second model is also correct? And does it matter how it does it, as long as the emulation is correct (limited to mean surface temperature)? After reading through all the comments to date, I’m still not convinced as to why one model (GCM or spreadsheet) is better than the other. Sure, one is fancier, but I can get to the ball in either vehicle.

Paramenter
September 19, 2019 1:30 pm

Prof. Frank,

Decent ‘version for dummies’ – explains and clarifies many questions that sprouted from the original article. By the way, your article is doing fairly well: “This article has more views than 99% of all Frontiers articles”. Your paper is also in the top 5% of all research outputs scored by Altmetric. Well done!

With respect to your current post:

“When several sources of systematic errors are identified, [uncertainty due to systematic error] beta is suggested to be calculated as a mean of bias limits or additive correction factors as follows:

“beta = sqrt[sum over(theta_S_i)^2]

Does this operation apply to each iterative step in the calculation (i.e. in the model), or do we calculate beta once and then use it as a constant in each iteration? An advocatus diaboli may argue that this quotation from Vasquez and Whiting means we square and add the uncertainties, and that they propagate to subsequent iterations but remain constant and do not add up.
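
A little sketch of the two readings I have in mind (plain Python; the theta values and step count are made up for illustration):

import math

thetas = [0.3, 0.4]                              # made-up systematic bias limits within one step
beta = math.sqrt(sum(t ** 2 for t in thetas))    # per-step beta from the quoted rule: 0.5

n_steps = 100

# Reading 1: beta is evaluated within each step but never compounded across steps
constant_reading = beta                          # stays 0.5 regardless of n_steps

# Reading 2: the per-step beta is itself propagated through the iterations, root-sum-square
accumulating_reading = beta * math.sqrt(n_steps) # grows to 5.0 after 100 steps

print(constant_reading, accumulating_reading)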

Reply to  Paramenter
September 19, 2019 11:24 pm

Paramenter, I’m just scientific staff, thanks. Not a professor.

If one takes the model of Vasquez and Whiting and runs it through a series of sequential step-wise calculations to determine how a system changes across time, would the uncertainty in the final result be the root-sum-square of the uncertainties in all the intermediate states?

Paramenter
Reply to  Pat Frank
September 20, 2019 2:19 pm

If one takes the model of Vasquez and Whiting and runs it through a series of sequential step-wise calculations to determine how a system changes across time, would the uncertainty in the final result be the root-sum-square of the uncertainties in all the intermediate states?

I would say so. Still, how do simulations work, for instance, in the field of CFD, where they are used for advanced aerodynamic analysis? The results obtained are used in the design of wings and other components, so it simply works. Such simulations run through millions of steps each time. A small uncertainty must be associated with each step, and because of the sheer number of steps, adding and squaring the uncertainty associated with each step should make the uncertainty grow quickly and render the results useless. But that does not happen.

The quotation from the Journal of Fluids Engineering is decent:

An uncertainty is not the same as an error. An error in measurement is the difference between the true value and the recorded value; an error is a fixed number and cannot be a statistical variable. An uncertainty is a possible value that the error might take on in a given measurement. Since the uncertainty can take on various values over a range, it is inherently a statistical variable.

It clearly distinguishes error from uncertainty, a concept that even people well-versed in stats struggle with. I reckon we need another post just with clear definitions!

Reply to  Paramenter
September 20, 2019 3:27 pm

Paramenter,

“Small uncertainty must be associated with each step but because sheer number of steps, adding and squaring uncertainty associated with each steps causes that uncertainty quickly grows and renders results useless. But that does not happen.”

If the uncertainty is very small then lots of steps still won’t overwhelm the result. Just how many variables in an aircraft simulation have a significant uncertainty? And these simulations *do* blow up under some conditions where the uncertainty becomes large.
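
A quick back-of-the-envelope sketch of that point (plain Python; both per-step uncertainties are made-up illustrative values, not figures from any particular simulation):

import math

# Root-sum-square growth of a constant per-step uncertainty over n steps
def rss_growth(u_step, n_steps):
    return u_step * math.sqrt(n_steps)

print(rss_growth(1e-6, 1_000_000))   # tiny per-step uncertainty, a million steps -> 0.001
print(rss_growth(4.0, 80))           # large per-step uncertainty, only 80 steps -> about 35.8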

Paramenter
Reply to  Tim Gorman
September 21, 2019 3:55 am

Hey Tim,

If the uncertainty is very small then lots of steps still won’t overwhelm the result. Just how many variables in an aircraft simulation have a significant uncertainty?

True, many, if not all, parameters are well defined thanks to extensive experimental research, e.g. wind tunnels. Still, if we have millions of cells (each with its own small uncertainty) and millions of steps, I cannot imagine how such uncertainty does not propagate and accumulate. But I’m not an expert in this area, so it’s a question rather than any solid claim.

And these simulations *do* blow up under some conditions where the uncertainty becomes large.

Indeed. From what I’ve heard, some conditions are also very hard to simulate, e.g. higher angles of attack, where simulations may yield wildly different results.

Reply to  Paramenter
September 21, 2019 5:13 am

Paramenter,

“Still, if we have millions of cells (each with its own small uncertainty) and millions of steps, I cannot imagine how such uncertainty does not propagate and accumulate.”

Uncertainty does accumulate. The difference is that it doesn’t overwhelm the results. However, the simulations don’t give perfect answers. It’s why test pilots earn big bucks pushing the envelope of aircraft to confirm operational characteristics. Simulations can only go so far.

Reply to  Paramenter
September 21, 2019 1:29 pm

Paramenter, when people parameterize engineering models, they use the models only within the parameter calibration bounds.

Inside those calibration bounds, observables are accurately simulated.

Climate model projections proceed very far beyond their parameter calibration bounds. The predictive uncertainties are necessarily huge.