12 October 2019

Pat Frank

A bit over a month ago, I posted an essay on WUWT here about my paper assessing the reliability of GCM global air temperature projections in light of error propagation and uncertainty analysis, freely available here.

Four days later, Roy Spencer posted a critique of my analysis at WUWT, here as well as at his own blog, here. The next day, he posted a follow-up critique at WUWT here. He also posted two more critiques on his own blog, here and here.

Curiously, three days before he posted his criticisms of my work, Roy posted an essay titled “The Faith Component of Global Warming Predictions,” here. He concluded that *[climate modelers] have only demonstrated what they assumed from the outset*. They are guilty of “*circular reasoning*” and have expressed a “*tautology*.”

Roy concluded, “*I’m not saying that increasing CO₂ doesn’t cause warming. I’m saying we have no idea how much warming it causes because we have no idea what natural energy imbalances exist in the climate system over, say, the last 50 years. … Thus, global warming projections have a large element of faith programmed into them.*”

Roy’s conclusion is pretty much a re-statement of the conclusion of my paper, which he then went on to criticize.

In this post, I’ll go through Roy’s criticisms of my work and show why and how every single one of them is wrong.

So, what are Roy’s points of criticism?

He says that:

1) My error propagation predicts huge excursions of temperature.

2) Climate models do NOT have substantial errors in their TOA net energy flux.

3) The error propagation model is not appropriate for climate models.

I’ll take these in turn.

This is a long post. For those wishing just the executive summary, all of Roy’s criticisms are badly misconceived.

1) __Error propagation predicts huge excursions of temperature.__

Roy wrote, “*Frank’s paper takes an example known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) and assumes that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation, propagating the error forward in time during his emulation model’s integration. The result is a huge (as much as 20 deg. C or more) of resulting spurious model warming (or cooling) in future global average surface air temperature (GASAT). (my bold)*”

For the attention of Mr. And then There’s Physics, and others, Roy went on to write this: “*The modelers are well aware of these biases [in cloud fraction], which can be positive or negative depending upon the model. The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles, otherwise all models would get very nearly the same cloud amounts.*” No more dismissals of root-mean-square error, please.

Here is Roy’s Figure 1, demonstrating his first major mistake. I’ve bolded the evidential wording.

Roy’s blue lines are **not** air temperatures emulated using equation 1 from the paper. They do not come from eqn. 1, and do not represent physical air temperatures at all.

They come from eqns. 5 and 6, and are the growing uncertainty bounds in projected air temperatures. Uncertainty statistics are not physical temperatures.

Roy misconceived his ±2 Wm^{-2} as a radiative imbalance. In the proper context of my analysis, it should be seen as a ±2 Wm^{-2} uncertainty in long wave cloud forcing (LWCF). It is a statistic, not an energy flux.

Even worse, were we to take Roy’s ±2 Wm^{-2} to be a radiative imbalance in a model simulation, one that results in an excursion in simulated air temperature (which is Roy’s meaning), we then have to suppose the imbalance is both positive and negative at the same time, i.e., a ±radiative forcing.

A ±radiative forcing does not alternate between +radiative forcing and -radiative forcing. Rather it is both signs together at once.

So, Roy’s interpretation of LWCF ±error as an imbalance in radiative forcing requires simultaneous positive and negative temperatures.

Look at Roy’s Figure. He represents the emulated air temperature to be a hot house and an ice house simultaneously; both +20 C and -20 C coexist after 100 years. That is the nonsensical message of Roy’s blue lines, if we are to assign his meaning that the ±2 Wm^{-2} is radiative imbalance.

That physically impossible meaning should have been a give-away that the basic supposition was wrong.

The ± is not, after all, one or the other, plus or minus. It is coincident plus and minus, because it is part of a root-mean-square-error (rmse) uncertainty statistic. It is **not** attached to a physical energy flux.
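The distinction can be illustrated with a toy numerical example. The residual values below are invented for illustration only; the point is the arithmetic difference between a signed mean bias and an unsigned rmse spread:

```python
# Sketch: why an rmse ± statistic is not a signed flux. The residuals are
# hypothetical model-minus-observation LWCF errors (W/m^2), invented here.
import math

residuals = [3.1, -4.6, 2.2, -5.0, 4.8, -3.5]

mean_bias = sum(residuals) / len(residuals)                       # signed offset
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))  # unsigned spread

print(round(mean_bias, 2))  # -0.5 : small, because + and - errors offset
print(round(rmse, 2))       # 4.0  : large, because the ± spread does not cancel
# The ±rmse is a width. It is written "±" because it is simultaneously both
# signs; it is not a flux that is either + or - at any given moment.
```

A model ensemble can thus have a near-zero mean bias while carrying a large ±rmse uncertainty, which is exactly the situation described above.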

It’s truly curious. More than one of my reviewers made the same very naive mistake, that a ±C uncertainty equals a physically real +C or -C. This one, for example, which is quoted in the Supporting Information: “*The author’s error propagation is [not] physically justifiable. (For instance, even after forcings have stabilized, [the author’s] analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states. Which, it should be obvious, does not actually happen.)*”

Any understanding of uncertainty analysis is clearly missing.

Likewise, this first part of Roy’s point 1 is completely misconceived.

Next mistake in the first criticism: Roy says that the emulation equation does not yield the flat GCM control run line in his Figure 1.

However, emulation equation 1 would indeed give the same flat line as the GCM control runs under zero external forcing. As proof, here’s equation 1:

ΔT_{i}(K) = f_{CO₂} × 33K × [(F_{0} + ΣΔF_{i})/F_{0}] + a

In a control run there is no change in forcing, so ΔF_{i} = 0. The fraction in the brackets then becomes F_{0}/F_{0} = 1. The originating f_{CO₂} = 0.42, so that equation 1 becomes

ΔT_{i}(K) = 0.42 × 33K × 1 + a = 13.9 C + a = constant (a = 273.1 K, or 0 C).

When an anomaly is taken, the emulated temperature change is constant zero, just as in Roy’s GCM control runs in Figure 1.
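As a numerical check on the control-run arithmetic, here is a minimal sketch of the emulation under zero forcing, assuming equation 1 takes the form described above (ΔT_{i} = f_{CO₂} × 33K × [(F_{0} + ΣΔF_{i})/F_{0}] + a):

```python
# Minimal sketch of emulation equation 1 under a zero-forcing control run.
# Assumed form (from the text above): dT = f_co2 * 33 * (F0 + sum(dF)) / F0 + a
def emulated_temperature(f_co2, F0, dF_steps, a=273.1):
    """Emulated global air temperature (K) after cumulative forcing changes."""
    return f_co2 * 33.0 * (F0 + sum(dF_steps)) / F0 + a

F0 = 33.30    # W/m^2, total greenhouse forcing used in the paper
f_co2 = 0.42  # the originating CO2 fraction

# Control run: every annual forcing change is zero.
temps = [emulated_temperature(f_co2, F0, [0.0] * n) for n in range(100)]

# The temperature is constant, so the anomaly is flat zero, as in GCM control runs.
anomalies = [t - temps[0] for t in temps]
print(round(temps[0] - 273.1, 1))      # 13.9  (i.e., 0.42 * 33 K)
print(max(abs(x) for x in anomalies))  # 0.0
```

The flat zero anomaly reproduces the flat GCM control-run line in Roy’s Figure 1.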

So, Roy’s first objection demonstrates three mistakes.

1) Roy mistakes a rms statistical uncertainty in simulated LWCF as a physical radiative imbalance.

2) He then mistakes a ±uncertainty in air temperature as a physical temperature.

3) His analysis of emulation equation 1 was careless.

Next, Roy’s 2): __Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux__

Roy wrote, “*If any climate model has as large as a 4 W/m^{2} bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.*”

I will now show why this objection is irrelevant.

Here, now, is Roy’s second figure, again showing the perfect TOA radiative balance of CMIP5 climate models. On the right, next to Roy’s figure, is Figure 4 from the paper showing the total cloud fraction (TCF) annual error of 12 CMIP5 climate models, averaging ±12.1%. [1]

Every single one of the CMIP5 models that produced the average ±12.1% simulated total cloud fraction error also featured Roy’s perfect TOA radiative balance.

Therefore, every single CMIP5 model that averaged ±4 Wm^{-2} in LWCF error also featured Roy’s perfect TOA radiative balance.

How is that possible? How can models maintain perfect simulated TOA balance while at the same time producing errors in long wave cloud forcing?

Off-setting errors, that’s how. GCMs are required to have TOA balance. So, parameters are adjusted within their uncertainty bounds so as to obtain that result.

Roy says so himself: “*If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, …”*

Are the chosen GCM parameter values physically correct? No one knows.

Are the parameter sets identical model-to-model? No. We know that because different models produce different profiles and integrated intensities of TCF error.

This removes all force from Roy’s TOA objection. Models show TOA balance and LWCF error simultaneously.

In any case, this goes to the point raised earlier, and in the paper, that a simulated climate can be perfectly in TOA balance **while the simulated climate internal energy state is incorrect**.

That means that the physics describing the simulated climate state is incorrect. This in turn means that the physics describing the simulated air temperature is incorrect.

The simulated air temperature is not grounded in physical knowledge. And that means there is a large uncertainty in projected air temperature because we have no good physically causal explanation for it.

The physics can’t describe it; the model can’t resolve it. The apparent certainty in projected air temperature is a chimerical result of tuning.

This is the crux idea of an uncertainty analysis. One can get the observables right. But if the wrong physics gives the right answer, one has learned nothing and one understands nothing. The uncertainty in the result is consequently large.

This wrong physics is present in every single step of a climate simulation. The calculated air temperatures are not grounded in a physically correct theory.

Roy says the LWCF error is unimportant because all the errors cancel out. I’ll get to that point below. But notice what he’s saying: the wrong physics allows the right answer. And invariably so in every step all the way across a 100-year projection.

In his September 12 criticism, Roy gives his reason for disbelief in uncertainty analysis: “*All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!*

*“Why?*

*“If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior.*”

There it is: wrong physics that is invariably correct in every step all the way across a 100-year projection, because large-scale errors cancel to reveal the effects of tiny perturbations. I don’t believe any other branch of physical science would countenance such a claim.

Roy then again presented the TOA radiative simulations on the left of the second set of figures above.

Roy wrote that models are forced into TOA balance. That means the physical errors that might have appeared as TOA imbalances are force-distributed into the simulated climate sub-states.

Forcing models to be in TOA balance may even make simulated climate subsystems more in error than they would otherwise be.

After observing that the “*forced-balancing of the global energy budget*” is done only once for the “*multi-century pre-industrial control runs*,” Roy observed that models world-wide behave similarly despite a “**WIDE variety of errors in the component energy fluxes**…”

Roy’s is an interesting statement, given there is nearly a factor of three difference among models in their sensitivity to doubled CO₂. [2, 3]

According to Stephens [3], “*This discrepancy is widely believed to be due to uncertainties in cloud feedbacks. … Fig. 1 [shows] the changes in low clouds predicted by two versions of models that lie at either end of the range of warming responses. The reduced warming predicted by one model is a consequence of increased low cloudiness in that model whereas the enhanced warming of the other model can be traced to decreased low cloudiness. (original emphasis)*”

So, two CMIP5 models show opposite trends in simulated cloud fraction in response to CO₂ forcing. Nevertheless, they both reproduce the historical trend in air temperature.

Not only that, but they’re supposedly invariably correct in every step all the way across a 100-year projection, because their large-scale errors cancel to reveal the effects of tiny perturbations.

In Stephens’ object example we can see the hidden simulation uncertainty made manifest. Models reproduce calibration observables by hook or by crook, and then on those grounds are touted as able to accurately predict future climate states.

The Stephens example provides clear evidence that GCMs plain cannot resolve the cloud response to CO₂ emissions. Therefore, GCMs cannot resolve the change in air temperature, if any, from CO₂ emissions. Their projected air temperatures are not known to be physically correct. They are not known to have physical meaning.

This is the reason for the large and increasing step-wise simulation uncertainty in projected air temperature.

This obviates Roy’s point about cancelling errors. The models cannot resolve the cloud response to CO₂ forcing. Cancellation of radiative forcing errors does not repair this problem. Such cancellation (from by-hand tuning) just speciously hides the simulation uncertainty.

Roy concluded that, “*Thus, the models themselves demonstrate that their global warming forecasts do not depend upon those bias errors in the components of the energy fluxes (such as global cloud cover) as claimed by Dr. Frank (above).*”

Everyone should now know why Roy’s view is wrong. Off-setting errors make models similar to one another. They do not make the models accurate. Nor do they improve the physical description.

Roy’s conclusion implicitly reveals his mistaken thinking.

1) The inability of GCMs to resolve cloud response means the temperature projection consistency among models is a chimerical artifact of their tuning. The uncertainty remains in the projection; it’s just hidden from view.

2) The LWCF ±4 Wm^{-2} rmse is not a constant offset bias error. The ‘±’ alone should be enough to tell anyone that it does not represent an energy flux.

The LWCF ±4 Wm^{-2} rmse represents an uncertainty in simulated energy flux. It’s not a physical error at all.

One can tune the model to produce (simulation minus observation = 0) no observable error at all in their calibration period. But the physics underlying the simulation is wrong. The causality is not revealed. The simulation conveys no information. The result is not any indicator of physical accuracy. The uncertainty is not dismissed.

3) All the models making those errors are forced to be in TOA balance. Those TOA-balanced CMIP5 models make errors averaging ±12.1% in global TCF.[1] This means the GCMs cannot model cloud cover to better resolution than ±12.1%.

To minimally resolve the effect of annual CO₂ emissions, they need to be at about 0.1% cloud resolution (see Appendix 1 below).

4) The average GCM error in simulated TCF over the calibration hindcast time reveals the average calibration error in simulated long wave cloud forcing. Even though TOA balance is maintained throughout, the correct magnitude of simulated tropospheric thermal energy flux is lost within an uncertainty interval of ±4 Wm^{-2}.

Roy’s 3) __Propagation of error is inappropriate__.

On his blog, Roy wrote that modeling the climate is like modeling pots of boiling water. Thus, “*[If our model] can get a constant water temperature, [we know] that those rates of energy gain and energy loss are equal, even though we don’t know their values. And that, if we run [the model] with a little more coverage of the pot by the lid, we know the modeled water temperature will increase. That part of the physics is still in the model.*”

Roy continued, “*the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system.*”

Roy there implied that the only way air temperature can change is by way of an increase or decrease of the total energy in the climate system. However, that is not correct.

Climate subsystems can exchange energy. Air temperature can change by redistribution of internal energy flux without any change in the total energy entering or leaving the climate system.

For example, in his testimony before the Senate Environment and Public Works Committee on 2 May 2001, Richard Lindzen noted that, “*claims that man has contributed any of the observed warming (ie attribution) are based on the assumption that models correctly predict natural variability. [However,] natural variability does not require any external forcing – natural or anthropogenic*. (my bold)” [4]

Richard Lindzen noted exactly the same thing in his “**Some Coolness Concerning Global Warming**.” [5]

“*The precise origin of natural variability is still uncertain, but it is not that surprising. Although the solar energy received by the earth-ocean-atmosphere system is relatively constant, the degree to which this energy is stored and released by the oceans is not. As a result, the energy available to the atmosphere alone is also not constant. … Indeed, our climate has been both warmer and colder than at present, due solely to the natural variability of the system. External influences are hardly required for such variability to occur*.(my bold)”

In his review of Stephen Schneider’s “Laboratory Earth,” [6] Richard Lindzen wrote this directly relevant observation,

“*A doubling CO₂ in the atmosphere results in a two percent perturbation to the atmosphere’s energy balance. But the models used to predict the atmosphere’s response to this perturbation have errors on the order of ten percent in their representation of the energy balance, and these errors involve, among other things, the feedbacks which are crucial to the resulting calculations. Thus the models are of little use in assessing the climatic response to such delicate disturbances. Further, the large responses (corresponding to high sensitivity) of models to the small perturbation that would result from a doubling of carbon dioxide crucially depend on positive (or amplifying) feedbacks from processes demonstrably misrepresented by models.* (my bold)”

These observations alone are sufficient to refute Roy’s description of modeling air temperature in analogy to the heat entering and leaving a pot of boiling water with varying amounts of lid-cover.

Richard Lindzen’s last point, especially, contradicts Roy’s claim that cancelling simulation errors permit a reliably modeled response to forcing or accurately projected air temperatures.

Also, the situation is much more complex than Roy described in his boiling pot analogy. For example, rather than Roy’s single lid moving about, clouds are more like multiple layers of sieve-like lids of varying mesh size and thickness, all in constant motion, and none of them covering the entire pot.

The pot-modeling then proceeds with only a poor notion of where the various lids are at any given time, and without fully understanding their depth or porosity.

__Propagation of error__: Given an annual average +0.035 Wm^{-2} increase in CO₂ forcing, the increase plus uncertainty in the simulated tropospheric thermal energy flux is (0.035±4) Wm^{-2}. All the while simulated TOA balance is maintained.

So, if one wanted to calculate the uncertainty interval for the air temperature for any specific annual step, the top of the temperature uncertainty interval would be calculated from +4.035 Wm^{-2}, while the bottom of the interval would be calculated from -3.965 Wm^{-2}.

Putting that into the right side of paper eqn. 5.2 and setting F_{0} = 33.30 Wm^{-2}, the single-step projection uncertainty interval in simulated air temperature is +1.68 C/-1.65 C.

The air temperature anomaly projected from the average CMIP5 GCM would, however, be 0.015 C; not +1.68 C or -1.63 C.
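The single-step arithmetic can be checked in a few lines, assuming the right side of eqn. 5.2 reduces to ΔT = 0.42 × 33K × ΔF/F_{0} (my reading of the equation as it is used above):

```python
# Check of the single-step uncertainty arithmetic. Assumed form of the right
# side of the paper's eqn. 5.2: dT = 0.42 * 33 * dF / F0.
F0 = 33.30                # W/m^2
coeff = 0.42 * 33.0 / F0  # K per (W/m^2)

mean_forcing = 0.035      # annual average CO2 forcing increment, W/m^2
uncertainty = 4.0         # annual LWCF calibration uncertainty, W/m^2

dT_mean = coeff * mean_forcing                   # projected anomaly step
dT_upper = coeff * (mean_forcing + uncertainty)  # from +4.035 W/m^2
dT_lower = coeff * (mean_forcing - uncertainty)  # from -3.965 W/m^2

print(round(dT_mean, 3))   # 0.015
print(round(dT_upper, 2))  # 1.68
print(round(dT_lower, 2))  # -1.65
```

The projected step (0.015 C) is two orders of magnitude smaller than the uncertainty interval that brackets it.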

In the whole modeling exercise, the simulated TOA balance is maintained. Simulated TOA balance is maintained mainly because simulation error in long wave cloud forcing is offset by simulation error in short wave cloud forcing.

This means the underlying physics is wrong and the simulated climate energy state is wrong. Over the calibration hindcast region, the observed air temperature is correctly reproduced only because of curve fitting following from the by-hand adjustment of model parameters.[2, 7]

Forced correspondence with a known value does not remove uncertainty in a result, because causal ignorance is unresolved.

When error in an intermediate result is imposed on every single step of a sequential series of calculations — which describes an air temperature projection — that error gets transmitted into the next step. The next step adds its own error onto the top of the prior level. The only way to gauge the effect of step-wise imposed error is step-wise propagation of the appropriate rmse uncertainty.

Figure 3 below shows the problem in a graphical way. GCMs project temperature in a step-wise sequence of calculations. [8] Incorrect physics means each step is in error. The climate energy-state is wrong (this diagnosis also applies to the equilibrated base state climate).

The wrong climate state gets calculationally stepped forward. Its error constitutes the initial conditions of the next step. Incorrect physics means the next step produces its own errors. Those new errors add onto the entering initial condition errors. And so it goes, step-by-step. The errors add with every step.

When one is calculating a future state, one does not know the sign or magnitude of any of the errors in the result. This ignorance follows from the obvious difficulty that there are no observations available from a future climate.

The reliability of the projection then must be judged from an uncertainty analysis. One calibrates the model against known observables (e.g., total cloud fraction). By this means, one obtains a relevant estimate of model accuracy; an appropriate average root-mean-square calibration error statistic.

The calibration error statistic informs us of the accuracy of each calculational step of a simulation. When inaccuracy is present in each step, propagation of the calibration error metric is carried out through each step. Doing so reveals the uncertainty in the result — how much confidence we should put in the number.

When the calculation involves multiple sequential steps each of which transmits its own error, then the step-wise uncertainty statistic is propagated through the sequence of steps. The uncertainty of the result must grow. This circumstance is illustrated in Figure 3.

Figure 3: Growth of uncertainty in an air temperature projection. The projection begins at a base state climate that has an initial forcing, F_{0}, which may be zero, and an initial temperature, T_{0}. The final temperature T_{n} is conditioned by the final uncertainty ±e_{t}, as T_{n} ± e_{t}.

Step one projects a first-step forcing F_{1}, which produces a temperature T_{1}. Incorrect physics introduces a physical error in temperature, e_{1}, which may be positive or negative. In a projection of future climate, we do not know the sign or magnitude of e_{1}.

However, hindcast calibration experiments tell us that single projection steps have an average uncertainty of ±e.

T_{1} therefore has an uncertainty of ±e.

The step one temperature plus its physical error, T_{1}+e_{1}, enters step 2 as its initial condition. But T_{1} had an error, e_{1}. That e_{1} is an error offset of unknown sign in T_{1}. Therefore, the incorrect physics of step 2 receives a T_{1} that is offset by e_{1}. But in a futures-projection, one does not know the value of T_{1}+e_{1}.

In step 2, incorrect physics starts with the incorrect T_{1} and imposes new unknown physical error e_{2} on T_{2}. The error in T_{2} is now e_{1}+e_{2}. However, in a futures-projection the sign and magnitude of e_{1}, e_{2} and their sum remain unknown.

And so it goes; step 3, …, n add in their errors e_{3} +, …, + e_{n}. But in the absence of knowledge concerning the sign or magnitude of the imposed errors, we do not know the total error in the final state. All we do know is that the trajectory of the simulated climate has wandered away from the trajectory of the physically correct climate.

However, the calibration error statistic provides an estimate of the uncertainty in the results of any single calculational step, which is ±e.

When there are multiple calculational steps, ±e attaches independently to every step. The predictive uncertainty increases with every step because the ±e uncertainty gets propagated through those steps to reflect the continuous but unknown impact of error. Propagation of calibration uncertainty goes as the root-sum-square (rss). For ‘n’ steps that’s ±e_{t} = sqrt(e_{1}^{2} + e_{2}^{2} + … + e_{n}^{2}), which for a constant per-step ±e is ±e×sqrt(n). [9-11]
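A minimal sketch of root-sum-square propagation, using an arbitrary per-step uncertainty of ±1 (the units are immaterial to the point):

```python
import math

# Root-sum-square propagation: n steps, each carrying the same ±e uncertainty,
# combine as sqrt(e_1^2 + ... + e_n^2) = e * sqrt(n).
def propagated_uncertainty(e_per_step, n_steps):
    return math.sqrt(sum(e_per_step ** 2 for _ in range(n_steps)))

e = 1.0  # arbitrary per-step uncertainty; units immaterial
for n in (1, 4, 100):
    print(n, propagated_uncertainty(e, n))
# 1 1.0
# 4 2.0
# 100 10.0
# The uncertainty statistic grows as sqrt(n); it never shrinks with added steps.
```

Note that the propagated value is a width, not a temperature: nothing in the calculation is a physical magnitude.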

It should be very clear to everyone that the rss equation does not produce physical temperatures, or the physical magnitudes of anything else. It is a statistic of predictive uncertainty that necessarily increases with the number of calculational steps in the prediction. A summary of the uncertainty literature was commented into my original post, here.

The growth of uncertainty does not mean the projected air temperature becomes huge. Projected temperature is always within some physical bound. But the reliability of that temperature — our confidence that it is physically correct — diminishes with each step. The level of confidence is the meaning of uncertainty. As confidence diminishes, uncertainty grows.

Supporting Information Section 10.2 discusses uncertainty and its meaning. C. Roy and J. Oberkampf (2011) describe it this way, “*[predictive] uncertainty [is] due to lack of knowledge by the modelers, analysts conducting the analysis, or experimentalists involved in validation. The lack of knowledge can pertain to, for example, modeling of the system of interest or its surroundings, simulation aspects such as numerical solution error and computer roundoff error, and lack of experimental data.*” [12]

The growth of uncertainty means that with each step we have less and less knowledge of where the simulated future climate is, relative to the physically correct future climate. Figure 3 shows the widening scope of uncertainty with the number of steps.

Wide uncertainty bounds mean the projected temperature reflects a future climate state that is some completely unknown distance from the physically real future climate state. One’s confidence is minimal that the simulated future temperature is the ‘true’ future temperature.

This is why propagation of uncertainty through an air temperature projection is entirely appropriate. It is our only estimate of the reliability of a predictive result.

Appendix 1 below shows that the models need to simulate clouds to about ±0.1% accuracy, about 100 times better than the ±12.1% they now achieve, in order to resolve any possible effect of CO₂ forcing.

Appendix 2 quotes Richard Lindzen on the utter corruption and dishonesty that pervades AGW consensus climatology.

Before proceeding, here’s NASA on clouds and resolution: “*A doubling in atmospheric carbon dioxide (CO2), predicted to take place in the next 50 to 100 years, is expected to change the radiation balance at the surface by only about 2 percent. … If a 2 percent change is that important, then a climate model to be useful must be accurate to something like 0.25%. Thus today’s models must be improved by about a hundredfold in accuracy, a very challenging task.*”

That hundred-fold is exactly the message of my paper.

If climate models cannot resolve the response of clouds to CO₂ emissions, they cannot possibly project the impact of CO₂ emissions on air temperature with any accuracy.

The ±4 Wm^{-2} uncertainty in LWCF is a direct reflection of the profound ignorance surrounding cloud response.

The CMIP5 LWCF calibration uncertainty reflects ignorance concerning the magnitude of the thermal flux in the simulated troposphere that is a direct consequence of the poor ability of CMIP5 models to simulate cloud fraction.

From page 9 in the paper, “*This climate model error represents a range of atmospheric energy flux uncertainty within which smaller energetic effects cannot be resolved within any CMIP5 simulation.*”

The 0.035 Wm^{-2} annual average CO₂ forcing is exactly such a smaller energetic effect.

It is impossible to resolve the effect on air temperature of a 0.035 Wm^{-2} change in forcing, when the model cannot resolve overall tropospheric forcing to better than ±4 Wm^{-2}.

The perturbation is some 114 times smaller than the ±4 Wm^{-2} lower limit of resolution of a CMIP5 GCM.

The uncertainty interval can be appropriately analogized as the smallest simulation pixel size. It is the blur level. It is the ignorance width within which nothing is known.

Uncertainty is not a physical error. It does not subtract away. It is a measure of ignorance.

The model can produce a number. When the physical uncertainty is large, that number is physically meaningless.

All of this is discussed in the paper, and in exhaustive detail in Section 10 of the Supporting Information. It’s not as though that analysis is missing or cryptic. It is pretty much invariably un-consulted by my critics, however.

Smaller strange and mistaken ideas:

Roy wrote, “*If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time.*”

But the LWCF error statistic is ±4 Wm^{-2}, not (+)4 Wm^{-2} imbalance in radiative flux. Here, Roy has not only misconceived a calibration error statistic as an energy flux, but has facilitated the mistaken idea by converting the ± into (+).

This mistake is also common among my prior reviewers. It allowed them to assume a constant offset error. That in turn allowed them to assert that all error subtracts away.

This assumption of perfection after subtraction is a folk-belief among consensus climatologists. It is refuted right in front of their eyes by their own results, (Figure 1 in [13]) but that never seems to matter.

Another example is Figure 1 in the paper, which shows simulated temperature anomalies. They are all produced by subtracting away a simulated climate base-state temperature. If the simulation errors subtracted away, all the anomaly trends would be superimposed. But they are far from that ideal.

Figure 4 shows a CMIP5 example of the same refutation.

Figure 4: RCP8.5 projections from four CMIP5 models.

Model tuning has made all four projection anomaly trends close to agreement from 1850 through 2000. However, after that the models career off on separate temperature paths. By projection year 2300, they range across 8 C. The anomaly trends are not superimposable; the simulation errors have not subtracted away.

The idea that errors subtract away in anomalies is objectively wrong. The uncertainties that are hidden in the projections after year 2000, by the way, are also in the projections from 1850-2000 as well.

This is because the projections of the historical temperatures rest on the same wrong physics as the futures projection. Even though the observables are reproduced, the physical causality underlying the temperature trend is only poorly described in the model. Total cloud fraction is just as wrongly simulated for 1950 as it is for 2050.

LWCF error is present throughout the simulations. The average annual ±4 Wm^{-2} simulation uncertainty in tropospheric thermal energy flux is present throughout, putting uncertainty into every simulation step of air temperature. Tuning the model to reproduce the observables merely hides the uncertainty.

Roy wrote, “*Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step.*”

But, of course, eqn. 6 does not produce wildly different results, because the per-step calibration uncertainty scales with the length of the GCM time step.

For example, we can estimate the average per-day uncertainty from the ±4 Wm^{-2} annual average calibration of Lauer and Hamilton.

So, for the entire year, (±4 Wm^{-2})^{2} = 365 × e_{i}^{2}, where e_{i} is the per-day uncertainty. This equation yields e_{i} = ±0.21 Wm^{-2} for the estimated LWCF uncertainty per average projection day. If we put the daily estimate into the right side of equation 5.2 in the paper and set F_{0} = 33.30 Wm^{-2}, then the one-day per-step uncertainty in projected air temperature is ±0.087 C. The total uncertainty after 100 years is sqrt[(0.087)^{2} × 365 × 100] = ±16.6 C.

The same approach yields an estimated 25-year mean model calibration uncertainty of sqrt[(±4 Wm^{-2})^{2} × 25] = ±20 Wm^{-2}. Following from eqn. 5.2, the 25-year per-step uncertainty is ±8.3 C. After 100 years the uncertainty in projected air temperature is sqrt[(±8.3)^{2} × 4] = ±16.6 C.
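The time-step invariance can be verified numerically. The sketch below rescales the annual ±4 Wm^{-2} calibration uncertainty to daily, annual, and 25-year steps, then propagates each through 100 years, assuming (as above) that eqn. 5.2 converts flux to temperature as ΔT = 0.42 × 33K × ΔF/F_{0}:

```python
import math

# Sketch: the propagated 100-year uncertainty is invariant to step length,
# because the per-step uncertainty is rescaled from the same annual ±4 W/m^2
# calibration statistic before being propagated.
ANNUAL_LWCF = 4.0         # W/m^2, annual average LWCF calibration uncertainty
F0 = 33.30                # W/m^2
COEFF = 0.42 * 33.0 / F0  # converts W/m^2 to K (assumed form of eqn. 5.2)
YEARS = 100

def total_uncertainty(steps_per_year):
    # Rescale the annual flux uncertainty to one step, convert to temperature,
    # then propagate in quadrature over all steps in 100 years.
    e_flux = ANNUAL_LWCF / math.sqrt(steps_per_year)  # per-step flux uncertainty
    e_temp = COEFF * e_flux                           # per-step temperature uncertainty
    n = steps_per_year * YEARS
    return math.sqrt(n * e_temp ** 2)

print(round(total_uncertainty(365), 1))     # daily steps:   16.6
print(round(total_uncertainty(1), 1))       # annual steps:  16.6
print(round(total_uncertainty(1 / 25), 1))  # 25-year steps: 16.6
```

All three step lengths yield the same ±16.6 C, which is the point of the rebuttal: the propagated uncertainty does not depend on the assumed time step.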

Roy finished with, “*I’d be glad to be proved wrong.*”

Be glad, Roy.

Appendix 1: Why CMIP5 error in TCF is important.

We know from Lauer and Hamilton that the average CMIP5 ±12.1% annual total cloud fraction (TCF) error produces an annual average ±4 Wm^{-2} calibration error in long wave cloud forcing. [14]

We also know that the annual average increase in CO₂ forcing since 1979 is about 0.035 Wm^{-2} (my calculation).

Assuming a linear relationship between cloud fraction error and LWCF error, the ±12.1% CF error is proportionately responsible for ±4 Wm^{-2} annual average LWCF error.

Then one can estimate the level of resolution necessary to reveal the annual average cloud fraction response to CO₂ forcing as:

[(0.035 Wm^{-2}/±4 Wm^{-2})]*±12.1% total cloud fraction = 0.11% change in cloud fraction.

This indicates that a climate model needs to be able to accurately simulate a 0.11% feedback response in cloud fraction to barely resolve the annual impact of CO₂ emissions on the climate. If one wants accurate simulation, the model resolution should be ten times smaller than the effect to be resolved. That means 0.011% accuracy in simulating the annual average TCF.

That is, the cloud feedback to a 0.035 Wm^{-2} annual CO₂ forcing needs to be known, and able to be simulated, to a resolution of 0.11% in TCF in order to minimally know how clouds respond to annual CO₂ forcing.

Here’s an alternative way to get at the same information. We know the total tropospheric cloud feedback effect is about -25 Wm^{-2}. [15] This is the cumulative influence of 67% global cloud fraction.

The annual tropospheric CO₂ forcing is, again, about 0.035 Wm^{-2}. The CF equivalent that produces this feedback energy flux is again linearly estimated as (0.035 Wm^{-2}/25 Wm^{-2})*67% = 0.094%. That’s again bare-bones simulation. Accurate simulation requires ten times finer resolution, which is 0.0094% of average annual TCF.

Assuming the linear relations are reasonable, both methods indicate that the minimal model resolution needed to accurately simulate the annual cloud feedback response of the climate, to an annual 0.035 Wm^{-2} of CO₂ forcing, is about 0.1% CF.
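Both estimates above are one-line proportions; a minimal sketch, using only the numbers quoted in this appendix, makes the arithmetic explicit:

```python
co2_forcing = 0.035     # W/m^2, annual average CO2 forcing increase (from the text)

# Method 1: scale the +/-12.1% TCF error by the forcing / LWCF-error ratio
lwcf_error = 4.0        # +/-4 W/m^2 annual average LWCF calibration error
tcf_error = 12.1        # +/-12.1% annual total cloud fraction error
res_1 = co2_forcing / lwcf_error * tcf_error            # % cloud fraction

# Method 2: scale the 67% global cloud fraction by the forcing / feedback ratio
cloud_feedback = 25.0   # W/m^2, magnitude of total tropospheric cloud feedback
cloud_fraction = 67.0   # % global cloud fraction
res_2 = co2_forcing / cloud_feedback * cloud_fraction   # % cloud fraction

print(round(res_1, 3), round(res_2, 3))   # 0.106 0.094 -> both about 0.1% CF
```

Accurate simulation at ten times finer resolution then means roughly 0.011% and 0.0094% of annual average TCF, respectively.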

To achieve that level of resolution, the model must accurately simulate cloud type, cloud distribution and cloud height, as well as precipitation and tropical thunderstorms.

This analysis illustrates the meaning of the annual average ±4 Wm^{-2} LWCF error. That error indicates the overall level of ignorance concerning cloud response and feedback.

The TCF ignorance is such that the annual average tropospheric thermal energy flux is never known to better than ±4 Wm^{-2}. This is true whether forcing from CO₂ emissions is present or not.

This is true in an equilibrated base-state climate as well. Running a model for 500 projection years does not repair broken physics.

GCMs cannot simulate cloud response to 0.1% annual accuracy. It is not possible to simulate how clouds will respond to CO₂ forcing.

It is therefore not possible to simulate the effect of CO₂ emissions, if any, on air temperature.

As the model steps through the projection, our knowledge of the consequent global air temperature steadily diminishes because a GCM cannot accurately simulate the global cloud response to CO₂ forcing, and thus cloud feedback, at all for any step.

It is true in every step of a simulation. And it means that projection uncertainty compounds because every erroneous intermediate climate state is subjected to further simulation error.

This is why the uncertainty in projected air temperature increases so dramatically. The model is step-by-step walking away from initial value knowledge, further and further into ignorance.

On an annual average basis, the uncertainty in CF feedback is ±114 times larger than the perturbation to be resolved.

The CF response is so poorly known, that even the first simulation step enters terra incognita.

Appendix 2: On the Corruption and Dishonesty in Consensus Climatology

It is worth quoting Lindzen on the effects of a politicized science. [16] “A second aspect of politicization of discourse specifically involves scientific literature. Articles challenging the claim of alarming response to anthropogenic greenhouse gases are met with unusually quick rebuttals. These rebuttals are usually published as independent papers rather than as correspondence concerning the original articles, the latter being the usual practice. When the usual practice is used, then the response of the original author(s) is published side by side with the critique. However, in the present situation, such responses are delayed by as much as a year. In my experience, criticisms do not reflect a good understanding of the original work. When the original authors’ responses finally appear, they are accompanied by another rebuttal that generally ignores the responses but repeats the criticism. This is clearly not a process conducive to scientific progress, but it is not clear that progress is what is desired. Rather, the mere existence of criticism entitles the environmental press to refer to the original result as ‘discredited,’ while the long delay of the response by the original authors permits these responses to be totally ignored.

“A final aspect of politicization is the explicit intimidation of scientists. Intimidation has mostly, but not exclusively, been used against those questioning alarmism. Victims of such intimidation generally remain silent. Congressional hearings have been used to pressure scientists who question the ‘consensus’. Scientists whose views question alarm are pitted against carefully selected opponents. The clear intent is to discredit the ‘skeptical’ scientist from whom a ‘recantation’ is sought.” [7]

Richard Lindzen’s extraordinary account of the jungle of dishonesty that is consensus climatology is required reading. None of the academics he names as participants in chicanery deserve continued employment as scientists. [16]

If one tracks his comments from the earliest days to near the present, his growing disenchantment becomes painfully obvious. [4-7, 16, 17] His “*Climate Science: Is it Currently Designed to Answer Questions?*” is worth reading in its entirety.

References:

[1] Jiang, J.H., et al., Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations. J. Geophys. Res., 2012. 117(D14): p. D14105.

[2] Kiehl, J.T., Twentieth century climate model response and climate sensitivity. Geophys. Res. Lett., 2007. 34(22): p. L22710.

[3] Stephens, G.L., Cloud Feedbacks in the Climate System: A Critical Review. J. Climate, 2005. 18(2): p. 237-273.

[4] Lindzen, R.S. (2001) Testimony of Richard S. Lindzen before the Senate Environment and Public Works Committee on 2 May 2001. URL: http://www-eaps.mit.edu/faculty/lindzen/Testimony/Senate2001.pdf

[5] Lindzen, R., Some Coolness Concerning Warming. BAMS, 1990. 71(3): p. 288-299.

[6] Lindzen, R.S. (1998) Review of Laboratory Earth: The Planetary Gamble We Can’t Afford to Lose by Stephen H. Schneider (New York: Basic Books, 1997) 174 pages. Regulation, 5 URL: https://www.cato.org/sites/cato.org/files/serials/files/regulation/1998/4/read2-98.pdf Date Accessed: 12 October 2019.

[7] Lindzen, R.S., Is there a basis for global warming alarm?, in Global Warming: Looking Beyond Kyoto, E. Zedillo ed., 2006, in press, Yale University: New Haven. Full text available at: https://ycsg.yale.edu/assets/downloads/kyoto/LindzenYaleMtg.pdf Last accessed: 12 October 2019.

[8] Saitoh, T.S. and S. Wakashima, An efficient time-space numerical solver for global warming, in Energy Conversion Engineering Conference and Exhibit (IECEC) 35th Intersociety, 2000, IECEC: Las Vegas, pp. 1026-1031.

[9] Bevington, P.R. and D.K. Robinson, Data Reduction and Error Analysis for the Physical Sciences. 3rd ed. 2003, Boston: McGraw-Hill.

[10] Brown, K.K., et al., Evaluation of correlated bias approximations in experimental uncertainty analysis. AIAA Journal, 1996. 34(5): p. 1013-1018.

[11] Perrin, C.L., Mathematics for chemists. 1970, New York, NY: Wiley-Interscience. 453.

[12] Roy, C.J. and W.L. Oberkampf, A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Engineer., 2011. 200(25-28): p. 2131-2144.

[13] Rowlands, D.J., et al., Broad range of 2050 warming from an observationally constrained large climate model ensemble. Nature Geosci, 2012. 5(4): p. 256-260.

[14] Lauer, A. and K. Hamilton, Simulating Clouds with Global Climate Models: A Comparison of CMIP5 Results with CMIP3 and Satellite Data. J. Climate, 2013. 26(11): p. 3823-3845.

[15] Hartmann, D.L., M.E. Ockert-Bell, and M.L. Michelsen, The Effect of Cloud Type on Earth’s Energy Balance: Global Analysis. J. Climate, 1992. 5(11): p. 1281-1304.

[16] Lindzen, R.S., Climate Science: Is it Currently Designed to Answer Questions?, in Program in Atmospheres, Oceans and Climate. Massachusetts Institute of Technology (MIT) and Global Research, 2009, Global Research Centre for Research on Globalization: Boston, MA.

[17] Lindzen, R.S., Can increasing carbon dioxide cause climate change? Proc. Nat. Acad. Sci., USA, 1997. 94: p. 8335-8342.

“They [climate modelers] have only demonstrated what they assumed from the outset.”

This is an instance of the logical fallacy called “begging the question”. Unfortunately, lately people who should know better confuse the name of this fallacy with “raising the question”. I wish they’d stop doing this.

The problem is climate science is modeling a planet without an ocean.

Or Roy is “working with the premise”, while Pat is addressing why the premise is wrong.

So start by modeling Earth as a world completely covered by an ocean, and add land to it afterwards.

Or, as I say, we are in an Ice Age because we have a cold ocean.

And we would be in a hothouse climate if we had a warm ocean.

The atmosphere follows the ocean; the atmosphere doesn’t lead the ocean.

So, since the average temperature of the entire ocean is about 3.5 C, you can’t get a hot climate until the ocean warms to higher than 5 C. And that requires at least 1000 years.

Or, for a forecast of a mere 100 years, the cold ocean prevents much warming.

Or you could have global clouds of any type, or zero clouds of any type, and not much warming or cooling would occur within a hundred years. The same applies if you think greenhouse gases are a big factor: add any amount of greenhouse gases and it will not make much difference within 100 years.

There does seem to be the basic problem in Climate Modeling that they want the models to “prove” their hypothesis, but to do so, the models have to assume the hypothesis is correct. If the models are seen as assuming the hypothesis and thus if they fail to make accurate predictions, the hypothesis fails, that’s fine. But Climate Science refuses to accept that.

The money, or the funding, is about answering the question: what is the warming effect of higher levels of CO2?

And the money spent has given some results. And the results indicate that doubling global CO2 levels does not cause much warming. That is, 280 ppm + 280 ppm, equaling 560 ppm of global CO2, will not cause much warming.

Now, one can argue whether we will ever reach 560 ppm, and/or one could argue the global CO2 level may exceed 1120 ppm.

I would note that ideas about disaster from triggering massive greenhouse gas releases are related to a significant warming of the ocean. That is, if ocean waters at 700 meters depth warmed by a significant amount, this could cause methane release (methane hydrate deposits are sensitive to ocean temperature). Said differently, I know of no doomsday fear connected to an increase in ocean surface temperatures. And I think that over the last 100 years the average ocean surface temperature has risen by about 0.5 C, and the water under the surface has obviously warmed far less than that. There are probably small regions of deeper ocean, over the last 100 years (and over thousands of years), with fluctuations greater than 0.5 C, since a number of factors can affect small regions. One could suppose the deeper ocean doesn’t have the temperature fluctuations that small regions of land have (which bounce up and down by more than 1 C), but fluctuations of less than 1 C must occur in some deeper-water regions. Anyway, one might imagine there is some very temperature-fragile methane deposit somewhere, but it seems one would need a significant change in average deep-water temperature to cause widespread destabilization, and earthquakes as well as temperature change might be needed. We have not observed it happen, though perhaps we should monitor it, or better still mine methane hydrates, starting with those deposits which might be “lost” to larger earthquake events.

I also think the monetary loss of large and valuable deposits of natural gas disappearing is a bigger problem than the relatively small (in global terms) amount of methane released. The flaring of natural gas has been a loss because of the failure to recover it for a useful purpose, rather than for any effect upon global air temperature. Anyhow, the deeper ocean has not warmed, and will not anytime soon warm, enough; but we should be focusing on near-term efforts to mine ocean hydrates, mainly because natural gas is a good energy source.

But back to the point: the effort and money spent determining whether rising CO2 levels will have a large negative effect indicates it will not have such an effect within 50 or 100 years.

Thank you Pat!

So IPCC models produce very big statistics, but presumably this is not a problem because it is not an energy flux and thus cannot produce any warming. It seems that you have just defeated your own argument.

I gave up at this point since it seems like all this is now a matter of pride for you, and the rather tetchy tone of your comments does not seem to be in the spirit of resolving a technical issue but of saving face.

Thank you Pat.

You’ve completely mangled the idea , Greg. Perhaps you should remain on the sidelines.

“I gave up at this point since it seems like all this is now a matter of pride you and the rather tetchy tone of you comments does not seem to in the spirit of resolving a technical issue but of saving face.”

Tetchy is justified when people continue to spew the same wrong thoughts over and over and over and over again. At this point, a rational person should give up even being tetchy, because there seems to be little hope of reaching zealots with rational thought. What happens after that is an escalation of mangling ideas to the point of pathological inability to see truth.

Anything further by Pat to explain his position might well be an unachievable mission to cure a form of mental illness. He should walk away with confidence that some people actually get it.

Roy contradicts himself, by stating in plain language the unavoidable point that Pat is getting at. I get the sense that he doesn’t understand Pat’s language enough to even realize this. What is it that he fears losing by agreeing in Pat’s terms? I don’t get his disagreement — it seems suspiciously tenacious.

Even if Roy agrees with the idea expressed in Pat’s paper, he can still disagree with the logic Pat used to arrive at that conclusion.

Roy can do as he likes, JGrizz. But his argument is analytically wrong.

Greg,

Context is critical in any discussion, and you’ve mixed two very different contexts together, resulting in a completely invalid argument.

These discussions show things are far from settled as the political side would like it to be. There are too many complexities and external forces that play in this to make predictions. My climatology professors in the 90’s didn’t buy the media and political rhetoric and were not on the grant gravy train to push the narrative. I am glad to see there is competing discussions. It’s not settled and it won’t be anytime soon.

Considering the very long article, and lots of comments, I wonder if the author would be willing to post a very simple summary of his main points, with no numbers.

I’ve attempted to do that by cherry-picking four of its sentences as a summary of the whole article (but I’m just a reader, not the author), although I would add two words (“happen to”) to the first sentence:

(1)

“But if the wrong physics (happens to ) gives the right answer, one has learned nothing, and one understands nothing.”

(2)

“This wrong physics is present in every single step of a climate simulation.”

(3)

“The calculated air temperatures are not grounded in a physically correct theory.”

(4)

“Tuning the model to reproduce the observables merely hides the uncertainty.”

Good summary Richard.

Add (5) Climate models cannot resolve the cloud response to CO2 emissions.

And (6) Climate models cannot resolve the effect of CO2 emissions (if any) on air temperature.

To quote Inigo Montoya, “Let me e’splain. No, there is too much. Let me sum up. Future projections from climate models are as accurate as examining sheep entrails.”

My Shaman has an 82% accuracy record against the spread for the NCAA basketball season. But he does use goats, not sheep.

Pat Frank,

That is one heck of a technical rebuttal! I read the first half in detail, but time constraints forced me to skim the second half. I’ll have to return later for a more thorough second reading. Your logic and clarity are exemplary. I found much here that I agree with.

There is one detail that was not covered in the explanation of how errors propagate. It is not a mistake but it is missing and the average reader might not see a gap.

“Propagation of calibration uncertainty goes as the root-sum-square (rss).”

This is quite true, and the formula shown for adding in quadrature is correct. However, it is not explained that the error might be expressed as a value above or below a relevant quantity (the absolute error) or as a percentage of that quantity (the relative error). When propagating “addition” errors (because the sum involved an addition), the error to be squared is the absolute value, not the percentage of the quantity that has the uncertainty. When the formula has a multiplication or division, the relative (%) error is used.

Error propagation involves using the appropriate absolute or relative error in each single step of the formulas from beginning to end. In the case above Pat is adding the uncertainties. If the projected temperature is ±0.1, then the uncertainty at step 2 is

SQRT(0.1^2+0.1^2) = ±0.141

At step three it is SQRT(0.1^2+0.1^2+0.1^2) = ±0.173

At step 50 it is Projected Temperature ±7.07

It is also important to recognize that this isn’t an “error” per se. It cannot be reduced by making multiple runs. As Dr. Frank tries to point out, it is an uncertainty; that is, you simply don’t know where the value is within the interval. It is not a description of an error measurement where you can find a “true” value by taking many outputs and averaging them.

In essence, (value + 3.14) is just as likely as (value – 5.0). You just don’t know, which is the definition of uncertainty.
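The two quadrature rules described above can be sketched in a few lines. This is a minimal illustration, not code from the paper, and the helper names are ours:

```python
import math

def u_sum(*abs_uncertainties):
    """Absolute uncertainty of a sum or difference: quadrature of absolute errors."""
    return math.sqrt(sum(u**2 for u in abs_uncertainties))

def u_product(value, *rel_uncertainties):
    """Absolute uncertainty of a product or quotient: quadrature of relative errors."""
    return abs(value) * math.sqrt(sum(r**2 for r in rel_uncertainties))

# adding two quantities, each +/-0.1 (absolute errors combine):
print(round(u_sum(0.1, 0.1), 3))              # 0.141
# multiplying two quantities each known to +/-5%, product value 40 (relative errors combine):
print(round(u_product(40.0, 0.05, 0.05), 2))  # 2.83
```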

Wow, two comments I actually understood. 🙂 Thank you Jim and Crispin AND Dr. Frank.

Your math is just a little off. Factor out the 0.1 from all the elements and you get sqrt [(0.1)^2 * n], n being the number of terms. The sqrt of (0.1)^2 is just 0.1. So you get (0.1) sqrt (n). At step 50 you’ll wind up with +/- 0.707 not +/- 7.07.

But your methodology is correct!
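The factoring described above is easy to verify numerically, using the same per-step uncertainty and step count as in the comment:

```python
import math

u, n = 0.1, 50   # per-step uncertainty and number of steps, from the comment above
long_form = math.sqrt(sum(u**2 for _ in range(n)))   # explicit quadrature sum
factored = u * math.sqrt(n)                          # factored form: u * sqrt(n)
print(round(long_form, 3), round(factored, 3))       # 0.707 0.707
```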

Dr. Frank,

Very impressive. An academic who can explain himself well to those outside his area of expertise. Well done.

I see what you are saying. However, the problem is that as a practical matter, there are realistic max and min temperatures that can be achieved, and they are much smaller than the confidence interval in your results. In other words, the confidence interval you calculate is outside the realm of the physically possible, and I think that is where Spencer is going. I agree that the published confidence intervals for the models are simply ridiculous, they seem to assume that averaging a bunch of temperatures taken at different locations will reduce the measurement error, when in fact it does not. But at the same time, the confidence interval you calculate suggests possible temperatures outside the realm of the physically possible. How to reduce that I do not know.

You still do not understand- they are bands of ignorance. In the New series of Cosmos by Tyson (Episode 11), Tyson stated that “dark energy is a placeholder for our ignorance”. In the same sense, Pat’s uncertainty bands show how little confidence we can have with these models after 100 years- in other words, “none”.

BTW- Look at Figure 4. Sure, after 100 years we do not see those uncertainties, but look at 150 years, or 200 years or more. That figure shows the results with no uncertainties! Is ±16.6 C really that surprising after 100 years, with some uncertainty analysis, in that context?

First, deep thanks to Anthony and Charles for their strength of mind, their openness to debate, and for being here for all of us.

Andrew, over and yet over again, I have pointed out that uncertainty is not error. Uncertainty bounds do not represent possible outcomes. They represent an ignorance interval, within which, somewhere, the true answer lies. When the uncertainty interval is larger than any possible value, the answer given by the model is meaningless.

From the essay: “It should be very clear to everyone that the rss equation does not produce physical temperatures, or the physical magnitudes of anything else. It is a statistic of predictive uncertainty that necessarily increases with the number of calculational steps in the prediction. A summary of the uncertainty literature was commented into my original post, here.” (bold added)

The growth of uncertainty does not mean the projected air temperature becomes huge. Projected temperature is always within some physical bound. But the reliability of that temperature — our confidence that it is physically correct — diminishes with each step. The level of confidence is the meaning of uncertainty. As confidence diminishes, uncertainty grows.

Please. Let’s not have this again. Uncertainty intervals in air temperature projections do not represent possible physical temperatures. Enough already.

“When the uncertainty interval is larger than any possible value, the answer given by the model is meaningless.”

No. When the uncertainty interval is (much) larger than any value that the GCM could have possibly produced, then the uncertainty interval is meaningless. That was Roy’s point.

Nick,

Are you telling us that, when a value is completely uncertain then it must be correct?

Is there a limit to uncertainty?

No, I’m telling you that if a model couldn’t possibly produce a result, because it violates the conservation laws (built into the model, but not Pat’s curve fitting), then you are not uncertain about whether it might have produced that result. Such numbers cannot be part of the uncertainty range.

Uncertainty isn’t error, Nick.

You continue to make the same mistake over, and over, and yet over again.

Pat’s uncertainty calculation is a way of revealing the uncertainty which has been hidden by model tuning. If the models were tuned to correctly replicate cloud cover, it’s the temperatures which would be way off.

What did I say about error?

It’s not an error, Nick; it’s a measure of the uncertainty which has been buried as a result of tuning the model to hindcast temperature while ignoring the effect of gross cloud cover errors.

If the models were tuned to get cloud cover right it would be the temperature which would go wild.

“Its not an error Nick, its a measure of the uncertainty which has been buried as a result of tuning the model to hindcast temperature while ignoring the effect of gross cloud cover errors.”

It is nothing like that. Nothing in Pat’s paper (or Lauer’s) quantifies the effect of tuning. No data for that is cited at all. The ±4 W/m2 statistic in fact represents the sd of discrepancies between local computed and observed cloud cover at individual points on a grid. It is not a global average. No-one seems to have the slightest interest in what it actually is, or how it is (mis)used in the calculation.

But in any case, I didn’t mention error at all. The range of uncertainty of the output of a calculation cannot extend beyond the range of numbers that the calculation can produce. That is elementary.

The problem with Nick, and others like him, is that they assume the models are basically correct and complete representations of the actual atmosphere, and that the only errors are of precision (noise) that cancel out over all the time steps. So they simply can’t understand Pat’s argument. And even when they admit that there are things missing in the theory (and therefore in the models), they insist on treating those accuracy errors as precision errors and pretend they cancel out, as if ignorance can cancel out somehow and reveal truth.

Nick, “What did I say about error?”

Prior Nick, “if a model couldn’t possibly produce a result, …”

A result a model cannot produce is error.

Uncertainty is not at all the result a model may produce. It’s the reliability of that result.

Nick, “It is not a global average.”

Yes it is. It expresses total cloud variability.

No. It’s not meaningless. It shows how pure crap the models are.

It shows nothing about the models. There is nothing in the paper about how GCMs actually work.

Spot on.

Nick- wrong again. There is a linear emulator that was validated against many model runs.

And, ultimately what the paper shows is the models DO NOT work.

“There is a linear emulator that was validated against many model runs.”

It is curve fitting. The paper says: “For all the emulations, the values of fCO2 and the coefficient a varied with the climate model. The individual coefficients were again determined for each individual projection from fits to plots of standard forcing versus projection temperature.” And the standard forcing comes from model output as well.

Granted, it is curve fitting based on the form of CO2 forcing using the parameter set of the model, but it did emulate the behavior of the models over a significant range of model outputs. So emulating the model outputs does say something about the models.

Except that we know GCMs invariably project air temperatures as linear extrapolations of fractional GHG forcing.

“GCMs invariably project air temperatures as linear extrapolations of fractional GHG forcing”

That is nothing like what they do. What you have shown is that you can, by adjusting parameters, fit a curve to their temperature output.

Nick, “It is curve fitting.”

Like model tuning.

You’ve just repudiated a standard practice in climate modeling, Nick. Good job.

The climatologically standard curve-fitting shows in Figure 4 above, where the historical record is reproduced because of model tuning, and then they all go zooming off, each into their own bizarro-Earth after that.

All I do is find an effective sensitivity for a given model. After that, the linearity of the projection is a matter of invariable demonstration.

You wrote, “And the standard forcing comes from model output as well.”

Wrong again, Nick.

I’m very clear about the origin of the forcings. They’re all IPCC standard. The SRES forcings and the RCP forcings. No forcing is derived from any model.

How did you miss that? Or did you not.

Nick, “What you have shown is that you can, by adjusting parameters, fit a curve to their temperature output.”

No, Nick.

I’ve shown that with a single parameter, a linear equation with standard forcings will reproduce any GCM air temperature projection.

It’s all right there in my paper. Studied obtuseness won’t save you.

“It’s all right there in my paper.”

Yes, I quoted it. Two fitting parameters. Plus the forcings used actually come from model output.

Where do you think forcings come from?

Nick, “Where do you think forcings come from?”

I know where the forcings come from. Not from the models.

They are the standard SRES and RCP forcings. They are externally given and independent of the model.

Standard IPCC forcings and one parameter are enough to reproduce the air temperature projections of any advanced GCM.

Nick –> Why the problem with curve fitting? If I want to find out the response of a black box to an input, what does it matter? If the output follows a linear progression from an input, it really doesn’t matter what integrators, summers, adders, or feedback loops are present in the box. I’ll be able to tell you what the output will be for a given input.

If you want to prove your point, you need to be arguing about why Dr. Frank’s emulator doesn’t follow the output of the GCMs, not why GCMs are too complicated to allow a linear emulator! He gives you the equations; check it out for yourself. Show where the emulator is wrong.
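For readers who want to try the check suggested above, here is a minimal sketch of a linear emulator of the form under discussion: air temperature change as a linear extrapolation of fractional GHG forcing. The coefficient 0.42 and F_0 = 33.30 Wm^{-2} are the values quoted earlier in the post; comparing against a real GCM projection would require the published SRES/RCP forcing series.

```python
F0 = 33.30     # W/m^2, baseline total greenhouse forcing (value quoted in the post)
f_co2 = 0.42   # fitted model-specific coefficient (one value per emulated GCM)

def emulated_dT(delta_F):
    """Air temperature change as a linear extrapolation of fractional forcing.

    33 K is the net greenhouse temperature effect, per the post's discussion.
    """
    return f_co2 * 33.0 * delta_F / F0

print(round(emulated_dT(4.0), 2))   # 1.66 C for a +4 W/m^2 forcing change
```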

“There is nothing in the paper about how GCMs actually work.”

Exactly as there is absolutely nothing in GCMs about how the climate actually works.

But there is a difference. In fact, Frank’s model simulates quite well the bullshit climastrological models, while the climastrological models are pure crap when they have to simulate the actual climate.

“Except that we know GCMs invariably project air temperatures as linear extrapolations of fractional GHG forcing.”

Nope, they don’t.

Look at the code.

Now, you CAN fit a linear model to the OUTPUT. We did that years ago with Lucia’s “lumpy model”.

But it is not what the models do internally.

“Except that we know GCMs invariably project air temperatures as linear extrapolations of fractional GHG forcing.”

Steve Mosher, “Nope they dont’. Look at the code.”

Yes, they do. Look at the demonstration.

Wrong- uncertainty can increase very rapidly and can be greater than the “conservation laws of the models” precisely because the models do not contain the relevant actual physics that the uncertainty parameter was derived from! If my model is T=20.000 Deg. C +.000001 t (t is years) and MEASURED uncertainty is +/- 2 Deg. C annually, you dang well better believe the uncertainty can outstrip the model output and its conservation laws!

Sorry, do not collect $200, Do not pass GO.

“If my model is T=20.000 Deg. C +.000001 t”

Yes. That is your model, and Pat’s is Pat’s. And they don’t embody conservation laws, and can expand indefinitely. But GCMs do embody conservation laws, and your model (and Pat’s) says nothing about GCMs.

My model is just a simple example of a model.

Nick, “… and can expand indefinitely.”

Wrong yet again, Nick. The emulator gives entirely presentable air temperature numbers. Just like GCMs.

The uncertainty can expand indefinitely, until it shows the discrete numbers are physically meaningless. Uncertainty is not a physical magnitude. It’s a statistic. It is not bound by physics or by conservation laws.

“But GCM’s do embody conservation laws, and you model (and Pat’s) say nothing about GCMs”

It says that GCMs project air temperature as a linear extrapolation of fractional GHG forcing.

You insistently deny what is right there in front of everyone’s eyes, Nick.

IMO Pat fits the modelled GMST with his equation. However, the GCMs show more warming over land than over the oceans, which is what we observe too. GCMs also show some Arctic amplification, also observable. None of these features is a result of Pat’s equation. Therefore the GCMs must produce them in ways other than fitting.

“Therefore the GCM must do it in other ways then fitting.”

Or their fitting is more granular, and occurring in more places in code, than you realize.

Stokes

In the more usual situation, where the uncertainty bounds are a small fraction of the nominal measurement, it provides an estimate of the probable precision. As the bounds increase, and approach and exceed the nominal measurement, it tells us that there is little to no confidence in the nominal measurement because, for a +/- 2 sigma uncertainty, there is 95% probability that the true value falls within the uncertainty range that is far greater than the nominal measurement. If the uncertainty range exceeds physically meaningful values, then it is further evidence that there are serious problems with either the measurement or calculations.

The uncertainty is a function of the measurement (or calculation) and not the other way around. That is, you have it backwards.

You’re fixated on the criteria of statistical/epidemiological modeling, Nick, where there is no physically correct answer. There is only a probability distribution of possible outcomes.

None of them may prove correct, in the event.

In physical science, there’s a correct answer. One number. The question in science is accuracy and the reliability of the number the model provides.

Uncertainty tracks through a calculation and ends up in the result.

Take a look, Nick. Where do any of the published works restrict the width of the uncertainty interval?

Your criteria are wrong.

Neither you, nor any of the folks who insist as you do, understand the first thing about physical error analysis.

“Uncertainty tracks through a calculation.” And that is exactly what you didn’t do. You nowhere looked at the GCM calculation at all.

Nick, “You nowhere looked at the GCM calculation at all.”

I looked at GCM output, Nick. All that is necessary to judge their accuracy. By comparison with known observables.

How they calculate their inaccurate output is pretty much irrelevant.

“I looked at GCM output, Nick. All that is necessary to judge their accuracy. By comparison with known observables.”

In fact I would add that is exactly what Lauer did. He took observational satellite data and compared it to the output of GCMs and concluded (in the case of cloud forcing) that the CMIP5 GCMs did a pretty lousy job of correlating to the observations.

“When the uncertainty interval is (much) larger than any value that the GCM could have possibly produced, then the uncertainty interval is meaningless.” You’ve got that exactly backwards, Nick. When the uncertainty is much larger than any value the GCM could have possibly produced, then it’s the GCM that is meaningless.

Uncertainty and precision are two entirely different things. By this point it’s clear you are willfully not understanding that.

I am not a mathematician but the conception of calculated error bars reflecting uncertainty in the models as opposed to possible temperatures is obvious to me. I cannot in fact fathom how people fail to understand this issue. There seems to be a fundamental misconception at play, by which the outputs of mathematical models are confused with reality.

That’s why “willfully obtuse” is a phrase people must learn. Willfully obtuse people are actually gifted, in a way. They have an ability to perform microsurgery on their own minds, cauterizing themselves against obvious facts. Some people can wiggle their ears; willfully obtuse people can induce a fact-blindness within themselves that lets them not see what they’d rather not see – even if it is paraded before them.

Many bright, willfully obtuse people are employed by climastrology. Born with a natural talent for it, they have honed their innate talent into a marketable skill – and have been financially rewarded for it.

Imagine I step forward and say I was an eye witness to a crime and then state I am quite certain the perp stood between 2 feet and 12 feet tall. Yet, when pressed, I can’t commit to a height estimate range any narrower than that. How likely do you think it is I will be called to testify at trial? Climate models bring the same amount of useful information to their domain, yet are considered star witnesses!

We’re all stuck in a Kurt Vonnegut novel and can’t get out.

When one ventures down the Rabbit Hole that is climate modelling, one then finds oneself stuck in Wonder Land, watching all the madness around. In Climate WonderLand the climate scientists are the Hatters turned “mad” at their vain, Sisyphean attempts to make nature conform to a contrived fiction.

+1

Well, the climate models look to be suggesting a height somewhere between -10 and 22 feet tall.

This stuff is largely above my pay grade, but in relation to how the “confidence interval” (or lack of confidence, in this case…) impacts actual temperature: it does not. What it says, however, is that if the bounds are so wide as have been calculated here, that the projection made by the modeler is meaningless. The projected temperature has no predictive value: it is a meaningless number, one that might just as well have been pulled from a hat.

The models, therefore, are not fit for public policy purposes, since they have no demonstrable skill in prediction. Absent such skill, you have only an illusion of what the outcome will be, one that is entirely undercut by the gross uncertainty that attends the particular projection.

Taleb deals with this issue as well in his “Black Swan” book, where he ridicules financial modeling -among other pastimes – and where he usefully quotes Yogi Berra: “It’s tough to make predictions, especially about the future.” Chapter 10 of that book is entitled “The Scandal of Prediction”, and covers the types of issues that plague all modelers of chaotic systems.

You missed his whole point. They do not predict possible temperatures outside the realm of possibility. The confidence intervals tell you that the underlying models are worthless.

Agreed.

His point is that the possible futures predicted by the models are so uncertain that all possible futures known by simple physical contraints lie within their bounds. Therefore the models tell us nothing useful.

After the models are presented we know nothing more than before we see the models—-except that their presenters are scientific charlatans.

You’re missing the whole point and unfortunately falling into the same thinking as Spencer.

He’s not calculating temperatures, he’s calculating uncertainty. None of this has anything to do with temperature projections, only what I will call margins of error for simplicity.

The fact that the model’s physics allow for unrealistic temperatures within their error margins proves that they aren’t reliable.

Andrew, I don’t know if Pat is right or not. But what he is saying, which makes your argument wrong if he is correct, is that what you are taking to be ‘realistic max and min temperatures’ are no such thing.

His argument is that they are bounds of uncertainty.

To give a very simple example, if I understand him correctly, suppose there is a cloudburst, one of many this autumn. That cloudburst delivers a certain number of inches of rain.

I say, the problem with this measuring instrument is that it’s very uncertain. The real rainfall today could be anywhere between 0.5 and 3 inches; that is how bad it is.

You would then be replying, this cannot be. Rainfall in these cloudbursts never varies by that much; they always have a pretty narrow band of inches of rain. They never fall below 1 inch and never are more than 1.5. So you must be wrong; your claim is refuted by experience.

True on the usual size of the bursts, but irrelevant. Because the level of uncertainty about how much rain fell is what is being asserted, which has nothing to do with the amount of variability of rainfall from these bursts.

Variability is a physical measure of quantity of rain. Uncertainty is to do with the confidence with which we measure it. The limits of accuracy of our measuring equipment.

I hope I understood Pat correctly…..

Andrew K. – Suppose you go to the gym and step on their scale to weigh yourself. The scale shows 200 lb which you find strange since you weighed 180 this morning on your own scale. You step off, see the scale says zero, step on again and it now shows 160. Several more checks show weights that vary all over the place. You conclude that the scale is broken or – in measurement speak – it has a very large uncertainty which makes it unfit for purpose.

All Pat Frank has done is use the well established process of measurement uncertainty analysis to show that GCM projections of GMT 80 or 100 years out have an uncertainty far too large to have any confidence in the results. The fact that the uncertainty in 80 years is something like +/- 20 C is unimportant. The game is over as soon as the uncertainty reaches the magnitude of the projected result. If my model projects a 1 year temperature increase of 0.1 C with an uncertainty of +/- 0.4 C my model is not very useful. And anyone who thinks that uncertainty in future predictions (projections, forecasts, SWAGs) does not increase the further out in time they go, needs to find a way to reconnect to reality.
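Rick C’s description can be sketched in a few lines. This is an illustrative sketch only, with hypothetical numbers: if each annual step contributes an independent uncertainty, the standard propagation rule combines them in quadrature, so the total grows as the square root of the number of steps.

```python
import math

def propagated_uncertainty(u_step, n_steps):
    """Root-sum-square of n independent, equal per-step uncertainties."""
    return math.sqrt(sum(u_step ** 2 for _ in range(n_steps)))

# Hypothetical +/-2.0 C uncertainty contributed per annual step:
print(propagated_uncertainty(2.0, 1))    # 2.0  after one year
print(propagated_uncertainty(2.0, 100))  # 20.0 after a century
```

The point is not the particular numbers but the shape: the uncertainty never shrinks with more steps, so once it reaches the magnitude of the projected change, further iteration only makes the projection less informative.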

You got it exactly right, Rick C. Thanks for the very succinct explanation.

What you have described so readily continually escapes the grasp of nearly all climate scientists of my experience.

I’m really lucky that somehow Frontiers found three scientists that understand physical error analysis, and its propagation.

Early in the Pat Frank uncertainty story I touched on using an analogy with bathroom scales for body weights. They provide neat ways to explain certain types of errors. But soon, there were too many bloggers presenting other analogies and the field got too bunched.

The scales story is useful as well to explain aspects of signal and noise. When your mind has grasped the weighing principles for the naked human body, you can change the object to one of a tenth of a kilogram instead of 100 kg or so. Uncertainty acceptable for body weights becomes impossible for a tenth of a kg. For that you need to build a new design of scales, like letter scales for envelope postage, just as a new design of GCM seems to be required for some purposes related to uncertainty.

Pat, years ago you commented that you had not met anyone from the global warming community who knew that errors could be classed into accuracy and precision types. I thought you were exaggerating. From what has been aired since then, I see what you meant. It is like a different sub-species of human scientists has been uncovered by “climate change, the emergency” and its precursors.

Stick with it, Pat, you are winning. Geoff S

Thanks, a far better explanation than my attempt, and I am a lot clearer for it.

The dichotomy between those who understand exactly what is actually shown by Pat’s paper, and those who simply cannot, or will not, grasp what is being said, is startling.

Which all by itself is very illuminating.

There is an inflexibility of thinking being demonstrated, when we see one example after another of what is NOT being said and why, and one example after another of the correct description using varied verbiage and analogies, and why, and yet despite this, there is very little evidence of a learning curve for those who have it wrong.

The cloud of uncertainty is larger than the range in possible values of the correct answer.

This says nothing about the correct answer, it means that no light has been shed on what it is.

It means that our lack of knowledge that we had to start with regarding where in the range of possible correct answers, has not been narrowed down by these models.

They tell us nothing.

They cannot tell us anything.

They cannot shrink the range of possible correct answers that exists prior to constructing the model.

Throwing darts at a dart board through a funnel does not mean you have good aim.

A funnel is not the same as aiming.

Imagine the dartboard is drawn in invisible ink…you do not know where the bullseye is.

All you know is the funnel has constrained where the dart can go.

But is that analogy true for climate models? If they are wrongly modelling clouds then they will do this consistently, rather than randomly as in your analogy. I’d be more inclined toward the following analogy.

At home I weigh 180, at the gym it says 160, and at my friend’s place 210. But if I put on 10 lb, will the three new weights be 190, 170, and 220, or at least will the measured change be close to 10? That’s what is important.

As I see it, Pat Frank has looked at the uncertainty due to cloud cover in a single run. But we want the uncertainty for the difference in two runs (one calibrated). The earlier models have predicted actual current temperatures quite well. And yes, there is uncertainty in the difference between calibrated and scenario runs – that’s why the IPCC has a 1.5-4.5 range. Of course, if you compare runs from different models using different RCPs (and then compare to actual CP) you will have a huge variation in results (but then you’d be nuts to think that was anything but meaningless).

How will you know if you put on ten pounds, and not 4 or 5, or if you lost weight?

Each scale gives a different answer but is consistent. If I weigh myself at 1 minute intervals it will give the same answer (within measurement error, but that’s a different issue not relevant here). If I’m heavier, all scales will give me a heavier weight (in the same way all climate models give higher temperatures with more CO2). And how close to the true 10 lb will be important (and the uncertainty of the change is unlikely to bear much – if any – relationship to the uncertainty of the initial weight). In the case of climate models, they’ve been close enough since first developed to give useful answers.

“… they’ve been close enough since first developed to give useful answers.”

Could not disagree more.

Twenty years ago, did they give an accurate idea of the next twenty years?

These modeled results will be completely discredited in less than the 12 years that climate experts AOC and Greta Thunberg have assured us remain before the beginning of the end of the world.

I will bet anyone a pile of money on it.

Two years running with an average temp of TLT given by satellite data below the 1980-2010 average, between now and then.

If that does not occur, I lose.

The scales in your analogy are all off because you didn’t really weigh 180 pounds. Prove me wrong!

The ONLY consistency displayed by climate models is a consistency FORCED upon them BY DESIGN. They are NOT getting it generally correct. At least, there is NO reason to believe they are, since their innate uncertainty makes their outputs useless. This is the part that’s so sad – millions don’t realize they are being duped by brazen grifters.

What you are missing is that in your example each measurement has an uncertainty. Your example is mainly dealing with measurement error, not uncertainty. Uncertainty tells you that you would have measurements like 180 +/- 5 lbs, 160 +/- 4 lbs, 210 +/- 8 lbs. The next set would be 190 +/- 5 lbs, 170 +/- 5 lbs, and 220 +/- 8 lbs.

It means you are uncertain about what each value is every time you weigh so that you couldn’t really calculate that you had a change of 10 lbs.

Exactly.

Same issue as many commenters.

Inability to conceptualize the difference, is that the problem?

Brigitte,

You claim that “if they are wrongly modelling clouds then they will do this consistently rather than randomly like your analogy”.

You are not justified in making this assumption. Clouds change all of the time. A set of errors that applies to heavy, low cloud might not apply for high, light cloud. There is no way that you can assume that the errors cancel, especially when there is expert consensus that the clouds have either positive or negative net feedback, overall. Geoff S

How do you know the scales will all give you a heavier weight? You are making the same mistake all the supporters of the GCMs are making – that all the scales give accurate readings plus or minus a fixed bias amount. But you have not accounted for the uncertainty associated with each scale. That’s what the climate modelers do – assume all their models are accurate with nothing more than a fixed bias in the output. They ignore that there is an uncertainty associated with each of their models, and that uncertainty factor is *not* a fixed bias.

If all the scales have an uncertainty of +/- 5lbs then you won’t know whether you gained 10lbs or not.
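A short sketch of why the +/- 5 lb scales can’t resolve a 10 lb gain (numbers are the hypothetical ones from the example above): the uncertainty of a difference between two independent readings combines in quadrature, per the usual propagation rule.

```python
import math

def difference_uncertainty(u_before, u_after):
    """Uncertainty of (after - before) for two independent readings."""
    return math.sqrt(u_before ** 2 + u_after ** 2)

# Two readings, each uncertain by +/-5 lb:
u = difference_uncertainty(5.0, 5.0)
print(round(u, 2))  # 7.07
```

A nominal 10 lb change with a roughly +/- 7 lb uncertainty is barely distinguishable from no change at all.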

Rick C PE

+1

To add another element to your excellent illustration, suppose I know that my weight historically varies between 170 and 190 and almost never goes out of that range, and suppose further I could use settings on the scale to arbitrarily set minimum and maximum readings to be within that range based on, let’s say, past readings from a separate set of reliable weight measurements (model “tuning”). The measuring mechanism of the scale itself is still varying wildly outside that range but you don’t see that because of the arbitrary minimums and maximums that are baked in. Now you have a scale that is outputting readings that appear to be meaningful but that is an illusion. The reading you see on any given day is completely meaningless as a measure of reality.

“…suppose further I could use settings on the scale to arbitrarily set minimum and maximum readings to be within that range based on…”

IOW…a funnel.

Constraining the output is exactly what I was referring to in my “throwing darts at a board through a funnel” analogy.

Every shot hits within some predetermined boundary, but it has nothing to do with my aim.

And no one can see the bullseye, on top of that.

All that is known is that the darts hit the board.

But the purpose of weighing yourself or running a climate model is to find out what you do not know.

In the example used by Brigitte, the parameters have been changed… she is asserting a set of scales that read differently from each other, but each always gives the same result, and if ten pounds is gained, each will increase by ten pounds.

Several things about this remove this analogy from being…analogous to the original issue.

One is that assumptions are being made about the true values, and ability to know it.

If you know you gained or will gain, exactly ten pounds, there is no need for a scale!

And a scale which reads differently from another, but both read the same each time, is not uncertainty; it is zeroing error.

*Market earnings being discussed, so am now distracted, lost train of thought.*

Andrew,

The situation is ridiculous, but it can’t be helped. We simply don’t know enough about cloud behavior to reduce the uncertainty in the simulations. The simulations are trying to create numbers, but the uncertainty is inherently huge no matter what the computers do. The uncertainty is part of the science, which shouldn’t be separated from the model calculations. Also, the uncertainty about clouds is separate from temperature measurement issues.

No, the confidence intervals measure confidence, not temperature. That’s the whole point. The fact that that means the models could produce temperatures far from possible shows how wrong the models are, not the other way around.

+10

You might consider the sign…

” ….confidence intervals measure confidence, not temperature. ”

You mean the +/- 1 standard deviation intervals about the mean, as given in the graphs of Pat and Roy. (A confidence interval is defined differently, but based on the sd, and would be much narrower.)

You can see the spread of the interval as a measure of uncertainty; the wider, the more uncertain; the narrower, the more confidence you can give to the estimated mean. But the values of sd have to be given in temperature since they are dependent on the mean, otherwise the +/- sign would be invalid. They, and what they encompass, must be regarded as potential temperatures, given your model and procedures are correct.

” that means the models could produce temperatures far from possible shows how wrong the models are”

With “the models” you mean the GCMs, but in fact it is Pat’s error propagation model which produces the unbelievable spread. The GCMs don’t do this, as Roy showed.


>>With “the models” you mean the GCMs, but in fact it is Pat’s error propagation model which produces the unbelieveable spread. The GCMs don’t do this, as Roy showed.<<

All Roy showed is that GCMs do not produce an unbelievable spread in predicted temperatures. Pat Frank's emulator did not produce an unbelievable spread in predicted temperatures either. Pat Frank's analysis of the propagation of uncertainty produced an unbelievable spread in uncertainty. The GCMs have not performed any appropriate analysis of the propagation of uncertainty. Not sure that they know where to begin…

“Pat Frank’s emulator did not produce an unbelievable spread in predicted temperatures either. Pat Frank’s analysis of the propagation of uncertainty produced an unbelievable spread in uncertainty.”

An uncertainty in temperature has to be derived as, and be given in, terms of temperature – how else? Large spread in projected temperatures means large uncertainty in possible temperatures. When the spread ranges into impossible temps, the process should be stopped and revised.

“The GCMs have not performed any appropriate analysis of the propagation of uncertainty. Not sure that they know where to begin…”

Sensitivity analyses have been done, as I conclude from the titles of cited papers. That seems to me the way to do it, repeated model evaluations with variability on selected parameters. The between-model comparisons (within ensembles) are better than nothing, but not the optimum.

Ulises, “When the spread ranges into impossible temps, the process should be stopped and revised.” When the uncertainty spread reaches impossible temperatures, it means the projection is physically meaningless. That is the standard interpretation of an uncertainty interval.

Ulises, “That seems to me the way to do it, repeated model evaluations with variability on selected parameters.” That tests only precision, not accuracy. Such tests reveal nothing of the reliability of the projection.

Uncertainty analyses are about accuracy. A wide uncertainty interval means the model is not accurate.

“With “the models” you mean the GCMs, but in fact it is Pat’s error propagation model which produces the unbelieveable spread. The GCMs don’t do this, as Roy showed.”

What Pat’s analysis shows is that the iterative process of the GCMs results in uncertainty intervals that get wider and wider with each iteration. The iterations should stop when the uncertainty overwhelms the output, e.g. trying to measure a 0.1degC difference when the uncertainty is more than +/- 0.1degC. As Pat has shown, once this tipping point is reached you no longer know if the GCMs produce a believable result.

Uncertainty is not error. It is just uncertainty.

“What Pat’s analysis shows is that the iterative process of the GCMs results in uncertainty intervals that get wider and wider with each iteration.” It doesn’t show anything about GCMs at all. It includes no information about the operation of GCMs. In fact the iteration period of a GCM is about 30 minutes. Pat makes up a nonsense process in which the iteration is, quite arbitrarily, a year.

Nick, “It doesn’t show anything about GCMs at all.” It shows that GCMs project air temperature as a linear extrapolation of fractional GHG forcing. The paper plus SI provide 75 demonstrations of that fact.

Nick, “It includes no information about the operation of GCMs.” It shows that linear forcing goes in, and linear temperature comes out. Whatever loop-de-loops happen inside are irrelevant.

Nick, “In fact the iteration period of a GCM is about 30 minutes.” Irrelevant to an uncertainty analysis.

Nick, “Pat makes up a nonsense process in which the iteration is, quite arbitrarily, a year.” Well, arbitrarily a year, except that air temperature projections are typically published in annual steps. Arbitrarily a year except for that.

Well, except that L&H published an annual LWCF calibration error statistic.

Let’s see: that makes yearly time steps and yearly calibration error.

So, apart from the published annual time step and the published annual average of error,

~~what have the Romans ever …~~ oops, it’s arbitrarily a year.

“Uncertainty is not error. It is just uncertainty.”

Yeah, you know it when you feel it. Why bother with definitions.

Uncertainty isn’t error, ulises.

The uncertainty interval doesn’t give a predicted range of model output. As Tim Gorman pointed out, the uncertainty interval gives a range within which the correct result may be found.

However, one has no idea where, within that interval, the correct value lies.

When the uncertainty interval is larger than any possible physical limit, it means that the discrete model output has no physical meaning.

“Uncertainty isn’t error, ulises. ”

Well, I’ve heard this a number of times, yet never accompanied by arguments, nor by an explanation of what must go wrong if this lemma is not observed.

Seems rather to me like an amulet which makes your reasonings critic-proof.

“Uncertainty” is, if good for anything, a superior term to “error”. The “JCGM Guide to the Expression of Uncertainty in Measurement” – you cited it – subsumes everything which was formerly known as “error analysis” under the umbrella of “uncertainty analysis”. Nothing new, no changes but for a unified notation incl. renaming. What used to be a “standard deviation” is now to be named the “standard uncertainty Type A”. (Short: “u”, variance “u^2”.) No change in how to derive it, nor in interpretation, nor in subsequent usage (e.g. propagation).

Your study deals with “error” propagation. Can you explain why it is in place there ?

Your Eqn. 6 has u^2 terms on the rhs, and sigma to the left, where one would expect uc, the combined uncertainty, which would be in line with the unnumbered previous eqn in the text, as well as with the notation in the Guide.

Why is the u-notation dropped in mid-equation? A standard deviation (or uncertainty) is expected on the lhs. Sigma is neither of it; it is the value of the sampled underlying population and *unknown*, in practice replaced by the sd as the sample estimate, but *not* equal to it. All this is explained in the Guide.
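For readers following the notation dispute, the GUM’s combined standard uncertainty for independent inputs is just the root-sum-square of the component standard uncertainties. A minimal sketch, with illustrative component values:

```python
import math

def combined_standard_uncertainty(components):
    """GUM-style u_c for independent inputs with unit sensitivity
    coefficients: sqrt of the sum of squared standard uncertainties."""
    return math.sqrt(sum(u ** 2 for u in components))

# Two hypothetical independent components, u1 = 3.0 and u2 = 4.0:
print(combined_standard_uncertainty([3.0, 4.0]))  # 5.0
```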

“The uncertainty interval doesn’t give a predicted range of model output.”

In the best case it should. That’s how it is defined. If your sample is from a Normally distributed population, for example, the mean +/- 1sd comprises the well-known 68% of the potential outcomes.

“As Tim Gorman pointed out, the uncertainty intervals gives a range within which the correct result may be found.

However, one has no idea where, within that interval, the correct values lays.”

If you’re talking about measurements, under controlled conditions you may assume there is a “correct value” that can be approached.

For a new measurement, you can’t predict its outcome. But you *can* predict that it is more likely to fall closer to the mean of the distribution than farther off. The interval is centered about the mean.

“When the uncertainty interval is larger than any possible physical limit, it means that the discrete model output has no physical meaning.”

Don’t know what “discrete” means in that context. Ignoring that, I’d say that first of all, you have to attribute physical meaning to the model output, otherwise you can’t compare it to other physical units. “Larger than”, or not, can only be assessed on the same scale. Then you may conclude that the model output is impossible in the given context – e.g., 50 C can well be accepted for a cup of tea, not so for the open ocean surface. (Asteroid impacts excluded.) If the error propagation model runs into impossible values, it is time to stop and work on it.

I am working to understand this and in that effort I’ll quote Andrew’s objection and ask a question; “the confidence interval you calculate suggests possible temperatures outside the realm of the physically possible.” If I’m understanding anything here, that statement is precisely the criticism this whole essay was designed to correct. When Frank writes, “They come from eqns. 5 and 6, and are the growing uncertainty bounds in projected air temperatures. Uncertainty statistics are not physical temperatures.” is he not directly claiming this objection is unfounded? His error boundaries are NOT confidence intervals. Am I even on the right track here?

The confidence interval gives boundaries within which the actual temperature might lie but it doesn’t specify that the actual temperature has to be at either edge of the boundaries. The uncertainty interval is not a probability function that predicts anything, it is just a value.

Even thinking that the uncertainty level is associated with what the actual temperatures could be is probably misleading. I like to think of it in this way: NOAA says this past month is the hottest on record by 0.1degC. If the uncertainty interval for the calculation is +/- 0.5degC then how do you really know if the past month is the hottest on record? The uncertainty interval would have to be smaller than +/- 0.1degC in order to actually have any certainty about the claim.

It’s the same for the models. If they predict a 2degC rise in temperature over the next 100 years but the uncertainty associated with the model is +/- 5degC then is the output of the model useful in predicting a 2degC rise? It could actually be anything between -3degC and +7degC.

And I think this is what Pat is really saying – if the uncertainty interval is larger than the change they are predicting then the prediction is pretty much useless. It doesn’t really matter how large the uncertainty interval is as long as it is larger than the change the model is trying to predict – the model prediction is just useless in such a case.
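Tim’s point above can be expressed as a tiny check, using the hypothetical numbers from his example (a projected 2degC rise with a +/-5degC uncertainty):

```python
def projection_bounds(projected_change, uncertainty):
    """Interval consistent with the projection, plus a flag for whether
    the projected change exceeds its own uncertainty."""
    low = projected_change - uncertainty
    high = projected_change + uncertainty
    return (low, high), abs(projected_change) > uncertainty

bounds, informative = projection_bounds(2.0, 5.0)
print(bounds, informative)  # (-3.0, 7.0) False
```

The interval spans cooling of 3degC to warming of 7degC, so the 2degC figure by itself tells us nothing the uncertainty hasn’t already swallowed.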

“Uncertainty *is* a strictly independent *value*. It is not a variable. The uncertainty at step “n” is not a variable or a probability function. Therefore it can have no correlation to any of the other variables.” Seems just so much common sense. For any given proposition (mathematical or otherwise) there is an uncertainty value intrinsic to the proposition itself. The intrinsic uncertainty value of the proposition is entirely independent of the proposition’s calculated truth value.

Take, for example, the proposition, “God exists.” Since this proposition is wholly untestable, we could say the intrinsic uncertainty value is 100%. The uncertainty value, however, has no relation to the proposition’s actual truth value. The proposition is either objectively true or false regardless of intrinsic uncertainty.

syscompuing:

A very good analogy. But it will just go over the heads of the warmists.

Yes, you are right, see my post above. The error boundaries in question are meant to be +/- 1sd intervals, while “confidence intervals” have another definition in Statistics.

But try to teach that to the crowd here.

“….working to understand…”—– Keep on !

But don’t rely on what is presented here. It is like observing some wrestling in mud through a smoke screen.

Always try to refer to some basic text to better understand the issue.

Good Luck !

Excellent, Pat.

In Your original paper you used the term “error” as in propagating error,- etc.

Maybe – just maybe, Roy and others had understood Your reasoning better, had you instead used the term “uncertainty”.

Just a thought.

But brilliant – Thank you.

Hans K

Hi Hans — you have a point. Propagated error is what uncertainty is all about. The terms are connected, especially through calibration error.

Physical scientists and engineers would not be confused by the terms. That leaves everyone else. But one can’t abandon proper usage.

In medieval studies, my work frequently included physics (experimental archaeology) and stuff that was understood by insiders but understood *differently* by everybody else. That’s the point where my editors (who understood the field at large but not my specific subset of it) would say “explain in footnote,” and once done they could follow the evidentiary and logical chains. So I think in this case Hans K’s observation is worth considering.

Brilliant work. Thanks.

>>

Physical scientists and engineers would not be confused by the terms.

<<

This engineer is confused. You don’t compute an average by dividing by a time unit. You create an average by dividing by a unit-less value or count–the number of items averaged. When you divide by a time unit, you change the value to a rate. It pains me, but I’m going to have to agree with Mr. Stokes here.

Jim

in response to Jim Masterson

>>

Physical scientists and engineers would not be confused by the terms.

<<

This engineer is confused. You don’t compute an average by dividing by a time unit. You create an average by dividing by a unit-less value or count–the number of items averaged. When you divide by a time unit, you change the value to a rate. It pains me, but I’m going to have to agree with Mr. Stokes here.

<<

This engineer is not confused. Sum 20 instances of yearly uncertainty and divide by 20 instances. Average yearly uncertainty.

>>

Average yearly uncertainty.

<<

Yes, but it’s not an average uncertainty PER year. You divided by an instance and not a time unit.

Jim

>>Yes, but it’s not an average uncertainty PER year. You divided by an instance and not a time unit.<<

And my understanding is that is exactly what Pat Frank et al. have been trying to tell Nick Stokes. They took an average annual uncertainty, based upon 20 years of discrete instances of annual uncertainty, and propagated that as a discrete instance value of uncertainty in the emulator for, say, 100 years of predicted temperature response.
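As an aside, the propagation described in this comment can be sketched in a few lines. This is a toy illustration, assuming a constant, independent ±4 W/m² uncertainty applied at each annual step and compounded in quadrature; it shows the structure of root-sum-square growth, not the paper's actual emulation equation:

```python
import math

def rss_uncertainty(u_per_step: float, n_steps: int) -> float:
    """Root-sum-square of n identical, independent per-step uncertainties."""
    return math.sqrt(n_steps * u_per_step ** 2)

u_annual = 4.0  # +/- W/m^2, the average annual LWCF calibration error

print(rss_uncertainty(u_annual, 1))    # 4.0 after one step
print(rss_uncertainty(u_annual, 100))  # 40.0 after 100 propagated years
```

The point of the sketch is only that an uncertainty applied once per iterated step grows as the square root of the number of steps, which is why the envelope widens through a projection.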

Then it’s non-standard terminology. You don’t add a ‘/year’ term, which implies a rate. Averages are computed using an integer count. It doesn’t change the unit you are averaging. You can plot an average on the same graph as the items you are averaging.

Jim

>>Then it’s non-standard terminology. You don’t add a ‘/year’ term, which implies a rate. Averages are computed using an integer count.<<

Who added a '/year' term?

The term that Pat Frank used, many times, is "annual average ±4 Wm-2 LWCF error".

>>

Who added a ‘/year’ term?

<<

I guess you haven’t been following the argument.

>>

Tim Gorman

October 20, 2019 at 5:36 am

Total miles driven in 10 years divided by 10 years is the annual average of miles driven, i.e. miles/year.

>>

Miles per year is a speed. is it not?

Jim

>>I guess you haven’t been following the argument.

>>

Tim Gorman

October 20, 2019 at 5:36 am

Total miles driven in 10 years divided by 10 years is the annual average of miles driven, i.e. miles/year.

>>

Miles per year is a speed. is it not?

<<

I have been following the argument and note that many suggest Nick Stokes creates a distracting argument. While Tim Gorman may have been distracted into the diatribes on average price of Apple stock and driving miles per year, the original argument centered around +- 4 Wm-2 average annual error LWCF in Pat Frank’s paper.

To further indulge in the distraction, miles per year may be considered a speed (however in the example given it would be silly to interpret it that way). It is more reasonably interpreted as an annual rate of vehicle utilization.

miles/per year is not a rate. Velocity is a vector quantity. In physics it is displacement divided by time. If I take one step forward and one step backward my velocity is ZERO, i.e. no displacement. The number of miles driven does not equal displacement. If I leave home and drive 100,000 miles around in a circle over a year just to return home then my velocity is zero.

Velocity is a rate of displacement. Miles driven in a year does not specify a displacement and therefore should not be considered as a velocity.

>>

. . . an annual rate of vehicle utilization.

<<

In other words, an average speed–not an average distance. I agree that Mr. Stokes is diverting the argument. However, Mr. Gorman is using wrong terminology. Thanks for agreeing with me without agreeing with me.

Jim

>>

. . . an annual rate of vehicle utilization.

<<

In other words, an average speed–not an average distance. I agree that Mr. Stokes is diverting the argument. However, Mr. Gorman is using wrong terminology. Thanks for agreeing with me without agreeing with me.

<<

Whoa, slow down (no pun intended). Allow me to be the one to determine whether or not I agree with you.

Annual rate of vehicle utilization, in my way of thinking, is an average distance, not an average speed. The average speed at which that distance was traveled is another measurement altogether.

If I were looking for an engineer to study factors that impact the maintenance cost of a vehicle, such as utilization in miles per year, or utilization in operating hours per year, or average speed of operation in miles per hour, and someone came along telling me that everyone knows the average speed is determined by the total miles driven divided by the number years in which those miles were driven, I would look for another engineer.

>>

Whoa, slow down (no pun intended). Allow me to be the one to determine whether or not I agree with you.

<<

Yeah, I knew it was too good to be true.

>>

Annual rate of vehicle utilization, in my way of thinking, is an average distance, not an average speed.

<<

If you were really an engineer, I wouldn’t have to say this. The term dx/dt is the RATE of change of x with respect to time. If you stick “rate” in there, then it’s a change WRT time.

>>

. . . someone came along telling me that everyone knows the average speed is determined by the total miles driven divided by the number years in which those miles were driven, I would look for another engineer.

<<

Yet, that’s exactly how you determine average speed over a period of time. A rate of utilization may mean when the vehicle is actually being used; i.e., when someone is driving it. Still, it’s an average speed and not an average distance.

You didn’t really “define” your terms, so I’m free to assign my own meanings to them. If you would like to define your terms, then I’ll decide accordingly. However, dividing by a time creates a value that changes with time and is usually a rate. Any other meaning is non-standard.

You may go find another engineer–you won’t hurt my feelings. And this is a stupid argument. I remember arguing with another EE about the true meaning of pulsating DC. That was a stupid argument too.

Jim

An average with a ‘per year’ denominator in a (+/-) uncertainty doesn’t imply a rate, Jim. There’s no physical velocity.

Furlongs per fortnight is a rate. (+/-)furlongs per fortnight is an uncertainty in that rate. It is not itself a rate.

Jim, “The term dx/dt is the RATE of change of x with respect to time.”

And (+/-)dx/dt. Is that a rate, too?

It’s the (+/-)dx/dt that is at issue, not dx/dt.

>>

Tim Gorman

October 22, 2019 at 5:04 pm

miles/per year is not a rate.

<<

It’s a speed and that makes it a rate.

>>

Velocity is a vector quantity.

<<

True, so? You can always take the magnitude of a vector–which gives you a scalar. The magnitude of velocity is speed.

>>

In physics it is displacement divided by time.

<<

In physics, it’s the rate-of-change of a distance with respect to time. I don’t know what you mean by displacement (I know what a displacement is, but your use of it is non-standard).

>>

If I take one step forward and one step backward my velocity is ZERO, i.e. no displacement.

<<

If you take one step forward, you accelerate forward and then decelerate. That means your instantaneous velocity increases above zero and then goes back to zero. Taking one step backward does the reverse–you accelerate backward and then decelerate. Your instantaneous velocity increases and then goes back to zero. Your average speed is two steps divided by the time it takes you to make those steps. If you paused between steps, then that just reduces your average speed. Average velocity may be zero, but your average speed isn’t.
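The forward-and-back example can be made concrete with a short sketch (illustrative step lengths and times, not numbers from the thread):

```python
# Toy example: one step forward, one step back, each 0.8 m taking 0.5 s.
steps = [+0.8, -0.8]          # signed displacements in metres
dt_per_step = 0.5             # seconds per step

displacement = sum(steps)                 # net change of position
path_length = sum(abs(s) for s in steps)  # total distance covered
total_time = dt_per_step * len(steps)

avg_velocity = displacement / total_time  # signed, vector-like quantity
avg_speed = path_length / total_time      # scalar, never negative

print(avg_velocity)  # 0.0 m/s: zero net displacement
print(avg_speed)     # 1.6 m/s: the distance covered was not zero
```

This is the whole disagreement in two variables: average velocity uses net displacement, average speed uses path length, and the two agree only when the motion never reverses.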

>>

The number of miles driven does not equal displacement.

<<

Again, so what? I don’t know what you mean by your non-standard use of displacement.

>>

If I leave home and drive 100,000 miles around in a circle over a year just to return home then my velocity is zero.

<<

If you traveled a 100,000 miles, then your speed cannot be zero–it’s physically impossible.

>>

Velocity is a rate of displacement. Miles driven in a year does not specify a displacement and therefore should not be considered as a velocity.

<<

Again, I don’t know what you mean by your non-standard use of displacement. Velocity is an instantaneous rate-of-change of distance with respect to time. A velocity also has a direction component. Since you’re not specifying the direction, you must be talking about speed.

Jim

“If you take one step forward, you accelerate forward and then decelerate. That means your instantaneous velocity increases above zero and then goes back to zero. Taking one step backward does the reverse–you accelerate backward and then decelerate. Your instantaneous velocity increases and then goes back to zero. Your average speed is two steps divided by the time it takes you to make those steps. If you paused between steps, then that just reduces your average speed. Average velocity may be zero, but your average speed isn’t.”

I’m sorry to tell you this but velocity *is* a vector and is defined as displacement/time. Zero displacement means zero velocity. You are trying to avoid this definition by speaking about “instantaneous” velocity but 100,000 miles/year is *not* a measure of “instantaneous” velocity. It is a measure of miles driven, not a measure of either speed (the scalar value of the velocity vector) or velocity. Three cups of flour used in a cake is not a rate either, it is an *amount*. Yet you can use the measurement 3 cups of flour/cake to determine how much flour you will need if you are going to bake multiple cakes.

And as Pat has pointed out, +/- 4 W/m^2 is an interval, not a rate. Thanks, Pat!

Dr. Frank,

You needn’t waste your time on me. I don’t understand your point about uncertainty. Maybe someday, I’ll be smart enough to figure it out.

Jim

>>

Zero displacement means zero velocity.

<<

Okay, I put you in a rocket sled and accelerate you for one minute at 40g’s in one direction. Then I turn you around and accelerate you in the opposite direction at 40g’s for another minute. Your displacement is zero. Acceleration is dv/dt or the rate-of-change of velocity WRT time. According to you, your velocity is zero, then your acceleration is also zero. What crushed you then? (And this is another stupid argument.)

Jim

” According to you, your velocity is zero, then your acceleration is also zero. What crushed you then? (And this is another stupid argument.)”

𝐯= ∆𝐱/∆t

Velocity is a vector. Congruent positive and negative vectors cancel. Zero velocity.

If you put your car’s drive wheels up on jackstands, start the car, put it in gear, and let it go till the odometer reads 100,000 miles, then what was the peak velocity reached by the car? What was the average velocity reached by the car? If this is all done in the same year doesn’t it work out to be 100,000 miles/year with exactly zero velocity (i.e. the car never moves)?

This isn’t a stupid argument. It’s basic physics. You keep wanting to use instantaneous scalar quantities when you should be using vectors.

𝐯= ∆𝐱/∆t = (𝐱𝐟 – 𝐱𝟎) / (tf – t0)

And, again, as Pat pointed out – a +/- uncertainty interval isn’t a velocity to begin with.

This raises an interesting way to get out of speeding tickets. All I need is a picture of my car in my garage with a time stamp occurring before the speeding ticket and another picture of my car in my garage with a time stamp occurring after my speeding ticket. Zero displacement means zero velocity and zero speed. I’m sure the judge will dismiss my speeding ticket without delay.

Jim

Go for it!

Tim Gorman’s time average is an instance, not a rate.

A time average becomes a rate when it is extrapolated through time.

Physical context determines meaning.

If I say that I commute an average of 15,000 miles/year, that’s not a rate. That’s an instance.

If one wants to know how many miles I’ve driven in 10 years, then an extended time enters the physical meaning. The average becomes a rate in that context.

But rate requires an extended interval. A single instance of time average has no extended time and cannot be a velocity.

>>

If you put your car’s drive wheels up on jackstands . . . .

<<

Now I know that I’ve entered Looking-Glass world:

>>

𝐯 = ∆𝐱/∆t

<<

Your definition is not correct, actually (I couldn’t copy yours, so I think I duplicated it correctly–I prefer using arrows over vector quantities rather than just making them bold). The correct definition comes from Calculus:

𝐯 = d𝐬/dt = 𝐬̇ = lim(∆t→0) ∆𝐬/∆t

And velocity is an instantaneous vector quantity–notice where ∆t goes to zero in the limit. The dot notation 𝐬̇ comes to us from Newton, apparently. I don’t think this old engineer has looked at the definition of velocity for over fifty years. You made me look, and it is displacement. The usual letter for displacement is 𝐬. 𝐬 represents x, y, and z more succinctly.

Acceleration has a similar definition:

𝐚 = d𝐯/dt = lim(∆t→0) ∆𝐯/∆t

Since we’re talking about scalar vs. vector quantities, here’s a formula for circular motion:

a = v²/r

It’s acceleration equals velocity squared divided by the radial distance. All three variables are vector quantities, but this isn’t a vector equation–they are all scalars. You can’t divide by a vector anyway. Circular motion includes circular orbits, which are hard to do in practice. It also describes tying a string to a mass and spinning it over your head.

But let’s talk about a circular orbit. The acceleration is a vector that points to the center of the circle–called centripetal acceleration. The velocity vector is always tangent to the circle and points in the direction of motion. The radial or position vector extends from the center of the circle and points to the object in motion. As an object moves around the circle, the vectors track with it–keeping their respective directions.

If we take one complete circuit around, all the vectors cancel. The displacement is zero. Why don’t objects in circular orbit fall out of the sky after one orbit? Stupid question, isn’t it.

>>

This isn’t a stupid argument. It’s basic physics. You keep wanting to use instantaneous scalar quantities when you should be using vectors.

<<

Yes, it is stupid. I specifically didn’t mention velocity–I said speed. For some silly reason, you brought up velocity. Let’s stop trying to divert the argument from what I originally said–bad units. You guys have been arguing too much with Mr. Stokes and Mr. Mosher. You’re changing the subject like they do.

>>

Tim Gorman’s time average is an instance, not a rate.

A time average becomes a rate when it is extrapolated through time.

Physical context determines meaning.

If I say that I commute an average of 15,000 miles/year, that’s not a rate. That’s an instance.

If one wants to know how many miles I’ve driven in 10 years, then an extended time enters the physical meaning. The average becomes a rate in that context.

But rate requires an extended interval. A single instance of time average has no extended time and cannot be a velocity.

<<

You’re talking to an engineer. Converting units is what we do. 15,000 miles/year is (15,000 miles/year)*(1 year/365 days) = 41.10 miles/day (assuming a 365 day year). And 41.10 miles/day is (41.10 miles/day)*(1 day/24 hours) = 1.71 miles/hour. Miles per hour is a speed (a very, very slow speed), or do you think your speedometer is lying, and your car is up on jackstands?
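The conversion chain above is plain dimensional bookkeeping; a quick sketch to check the arithmetic (365-day year, as in the comment):

```python
# Chain of unit conversions: miles/year -> miles/day -> miles/hour.
miles_per_year = 15_000.0

miles_per_day = miles_per_year / 365    # divide out days per year
miles_per_hour = miles_per_day / 24     # divide out hours per day

print(round(miles_per_day, 2))   # 41.1
print(round(miles_per_hour, 2))  # 1.71
```

Each division carries the units along, which is exactly the dimensional-analysis point being argued: whatever one calls the quantity, dividing a distance by a time yields distance-per-time units.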

An average does not change the units. You divide by the number of items–a dimensionless quantity. The correct term for an average (in this case) is the annual average distance is 15,000 miles, not 15,000 miles/year.

The rest of Dr. Frank’s statements aren’t exactly correct either. Originally, I didn’t use velocity–that’s Mr. Gorman’s attempt to divert the argument.

Jim

Jim,

Speed is a scalar value. It tells you nothing about velocity as a vector.

You keep jumping to the definition of “instantaneous” velocity. 100,000 miles/year is *NOT* an instantaneous velocity.

“Yes, it is stupid. I specifically didn’t mention velocity–I said speed. For some silly reason, you brought up velocity.”

The value of 100,000/year is neither speed or velocity. is is the distance traveled in a year. It specifies neither the speed *or* velocity associated with that distance of travel.

“Let’s stop trying to divert the argument from what I originally said–bad units. ”

The only one diverting here is you. You are trying to make the distance traveled in a year equal to a scalar speed or a vector velocity. As I tried to point out with my examples, which you used an argumentative fallacy of Argument by Dismissal to avoid actually addressing, distance traveled in a year gives you no information about speed or velocity (i.e. a car up on jackstands).

And *my* 1963 engineering introductory physics book, University Physics (Sears and Zemansky) 3rd Edition, defines velocity *exactly* as I wrote, right down to the bolding. And they make a distinction between velocity as a vector displacement divided by the time it takes to travel that displacement and INSTANTANEOUS velocity which is the tangent of the position curve at a specific point in time which does *not* describe the velocity between two points, e.g. P and Q. And I will repeat, if x2-x1 = 0 then there is no displacement and thus no velocity vector.

“An average does not change the units. You divide by the number of items–a dimensionless quantity. The correct term for an average (in this case) is the annual average distance is 15,000 miles, not 15,000 miles/year.”

Let me repeat again for emphasis: DISTANCE TRAVELED IN A YEAR IS NEITHER A SPEED NOR A VELOCITY. And it *does* have the units of miles/year; you used the term ANNUAL yourself. Your “item” is *not* dimensionless. If you just say 15,000 miles you don’t know if that was covered in one month, one year, a decade, or a century. It is, therefore, an inaccurate statement of the number of miles traveled over a period of time. That period of time is an essential piece of information.

AND I REPEAT: DISTANCE TRAVELED IN A YEAR IS NEITHER A SPEED NOR A VELOCITY. It is just the distance traveled. That whole distance could have been covered in a second, a minute, an hour, a day, a week, a month, or a year. Only if you know that the distance was traveled in a continuous path over a distinct period of time would you be able to evaluate the speed at which it happened. Since you do *not* know whether the miles were covered in 100,000-mile increments, 10,000-mile increments, or even increments of a foot, you can make no evaluation of the speed.

It’s why you had to say “annual”. Annual implies year and that is the denominator.

The exact same logic applies to Pat’s uncertainty. His uncertainty interval *has* to be associated with the same time increment the GCMs operate with – i.e. annual estimates of the global temperature. He did so by analyzing the given record over a 20-year period. The only way to change that to a time increment matching that of the GCMs was to divide by 20 years to get an annual figure.

This is *not* hard to understand. Stop trying to invalidate Pat’s thesis through some hokey “dimensional analysis”. His dimensions are fine. As I tried to point out it is no different than saying you need 3 cups of flour per cake. That does not specify any “rate” at which the flour must be added to the dough, i.e. no speed or velocity. But it *does* tell you how much flour you expended for that cake. Just like miles traveled/year doesn’t tell you speed or velocity but does tell you how far you traveled in a year!

Jim Masterson October 22, 2019 at 11:58 am

>>

Who added a ‘/year’ term?

<<

>>

Tim Gorman

October 20, 2019 at 5:36 am

Total miles driven in 10 years divided by 10 years is the annual average of miles driven, i.e. miles/year.

>>

Miles per year is a speed. is it not?

Jim

________________________________

Jim, Miles per year is the task of a salesman.

I guess the comments on this post are about to close, so this will probably be my last comment.

>>

Tim Gorman

October 24, 2019 at 5:01 pm

Speed is a scalar value. It tells you nothing about velocity as a vector.

<<

Well, not exactly. A vector has a magnitude and a direction. The magnitude of a velocity vector is speed–same units in fact.

>>

You keep jumping to the definition of “instantaneous” velocity.

<<

Because that’s the definition of a velocity. It’s been the definition since Newton’s time when he invented Calculus.

>>

100,000 miles/year is *NOT* an instantaneous velocity.

<<

Again, it depends on how it was derived.

>>

The value of 100,000/year is neither speed or velocity.

<<

You’re correct. Your typo (I assume it’s a typo) changes it to a frequency. And it’s possible to convert it to the SI unit for frequency as follows:

100,000/year × (1 year/365 days) × (1 day/86,400 s) ≈ 3.2 × 10⁻³ Hz

The units all cancel except for hertz. Silly, isn’t it? That’s what making mistakes with units leads to–nonsense. In my engineering classes, if we messed up the units (as you just did), we were marked down.

>>

is is the distance traveled in a year. It specifies neither the speed *or* velocity associated with that distance of travel.

<<

It’s only a distance if you use a distance unit. It’s a speed if you divide a distance by a time unit.

>>

The only one diverting here is you. You are trying to make the distance traveled in a year equal to a scalar speed or a vector velocity.

<<

Actually, I’m not trying to make a distance a speed. I’m saying that when you divide a distance by a time unit, you’re making a distance into a speed.

>>

As I tried to point out with my examples, which you used an argumentative fallacy of Argument by Dismissal to avoid actually addressing, distance traveled in a year gives you no information about speed or velocity (i.e. a car up on jackstands).

<<

It also gives no information about distance traveled. Your car isn’t going anywhere while it’s up on jack stands.

>>

And *my* 1963 engineering introductory physics book . . . .

<<

I’d get rid of that book if I were you. I looked up velocity on my dad’s old mechanical engineering handbook (Third edition) and it uses distance in the definition. My old high school physics book says the same thing. I wish I kept my college dynamics book, because I’d like to see what it said about velocity too.

>>

It’s why you had to say “annual”. Annual implies year and that is the denominator.

<<

No, I used a label for the computed average distance. Your division of a distance by a time unit turns a distance into a speed.

>>

Stop trying to invalidate Pat’s thesis through some hokey “dimensional analysis”. His dimensions are fine.

<<

It’s not hokey. If my attempt to correct Dr. Frank’s non-standard usage invalidates his thesis, then his thesis must not stand on very firm ground. His dimensions are not “fine.”

>>

Just like miles traveled/year doesn’t tell you speed or velocity but does tell you how far you traveled in a year!

<<

A distance is a distance. The units miles/year is a speed.

I was going to demonstrate the fallacy of dividing averages by time with a temperature example. However, since temperatures are intensive properties, I don’t want to go on record as supporting averaging temperatures.

Jim

“Because that’s the definition of a velocity. It’s been the definition since Newton’s time when he invented Calculus.”

“Actually, I’m not trying to make a distance a speed. I’m saying that when you divide a distance by a time unit, you’re making a distance into a speed.”

Once again, go read your father’s textbook. Velocity is a vector. Zero distance traveled means zero velocity. Zero velocity means zero speed. It doesn’t matter what the derivative along the path is. It’s no different than a conservative force applied over a closed path. The net work done is zero.

“It also gives no information about distance traveled. Your car isn’t going anywhere while it’s up on jack stands.”

The car’s odometer shows 100,000 miles traveled. Think about it.

“No, I used a label for the computed average distance. Your division of a distance by a time unit turns a distance into a speed.”

Computed average distance ANNUALLY! Annually means PER YEAR! You can run but you can’t hide!

“A distance is a distance. The units miles/year is a speed.”

Miles traveled annually *IS* miles/year. You said it yourself. Live with it.

“I was going to demonstrate the fallacy of dividing averages by time with a temperature example.”

Nothing wrong with averaging temperatures; they are just measurements. But here it is shown with Apple closing share prices.

>>

Nick Stokes

October 28, 2019 at 8:25 pm

Nothing wrong with averaging temperatures; they are just measurements.

<<

Except if you measure temperature with a thermometer or something like a thermometer, it makes the temperature an intensive property. Averaging intensive properties is nonsense and has no meaning.

We argued about this before with beakers of water (https://wattsupwiththat.com/2018/09/04/almost-earth-like-were-certain/#comment-2448213). The only way to solve the problem is to do what you did–convert the intensive temperatures into extensive temperatures. Alas, that is not done with SAT.

Jim

>>

Tim Gorman

October 28, 2019 at 3:30 pm

Once again, go read your fathers textbook. Velocity is a vector. Zero distance traveled means zero velocity. Zero velocity means zero speed. It doesn’t matter what the derivative along the path is. It’s no different than a conservative force applied over a closed path. The net work done is zero.

>>

Zero velocity? Zero speed? Zero distance? What on Earth are you talking about?

>>

The car’s odometer shows 100,000 miles traveled. Think about it.

<<

I have. You think about it. How do you get 100,000 miles traveled when the car’s on jack stands? Why would you try to trick the odometer?

>>

Computed average distance ANNUALLY! Annually means PER YEAR! You can run but you can’t hide!

<<

Annual or yearly average distance is not distance per year. One’s a label, the other is a change relative to time–in this case a speed.

>>

Miles traveled annually *IS* miles/year. You said it yourself. Live with it.

<<

No, I said annual distance traveled, not distance per year. One’s a label describing the average, the other is a speed. You live with it (but won’t apparently).

Jim

Jim M

“We argued about this before with beakers of water”

No, that was about how to deduce heat content from temperature measurements. But that doesn’t mean that temperature measurements can’t be averaged. They constitute a distribution, and you can form sample averages to estimate the population mean. Just as with heights, opinions, stock prices or whatever.

And it is done all the time, and isn’t controversial. In one location, you get a monthly average max by averaging the daily max for the month. Likewise annual. No issue of intensiveness.

>>

Nick Stokes

October 29, 2019 at 2:15 am

But that doesn’t mean that temperature measurements can’t be averaged.

>>

Yes, you “can” average any set of numbers. You can average phone numbers. What does an average phone number mean?

>>

And it is done all the time, and isn’t controversial.

<<

It is controversial. It’s just that alarmists ignore the controversy and do it without regard to the physics.

>>

In one location, you get a monthly average max by averaging the daily max for the month. Likewise annual. No issue of intensiveness.

<<

They also violate the rules of significant figures. They take a list of temperatures during a month and average them. Those temperatures have precision down to a degree. The monthly average has precision down to a tenth of a degree. That’s not allowed in most engineering and physics disciplines. It’s false precision. They then average those monthly averages (mathematically invalid too) and obtain precision down to hundredths of a degree. Those two precision digits are bogus. But without them you can’t perform magic scare manipulations of the temperature record.

Jim
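The false-precision complaint above can be illustrated with a toy sketch (invented whole-degree readings; whether significant-figure rules or the standard error of the mean should govern such averages is exactly what the thread disputes):

```python
# Thermometer readings recorded only to the nearest whole degree (C).
readings = [21, 23, 22, 24, 22, 23, 21, 22]

mean = sum(readings) / len(readings)
print(mean)  # 22.25, two decimal places of apparent precision

# The instrument resolution is 1 degree, so under significant-figure
# rules the digits past the units place are not supported by the data.
print(round(mean))  # 22, the precision the readings themselves carry
```

The arithmetic mean mechanically produces extra decimal digits; the dispute is over whether those digits convey real information about the measurand or only about the averaging operation.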

Brilliant work. No answer to the salesman problem.

https://www.google.com/search?q=mathematics+the+salesman+problem&oq=mathematics+the+salesman+&aqs=chrome.

‘had you instead used the term “uncertainty”.’

What would that change? A “standard deviation” and a “standard uncertainty” are exactly the same in definition and numerical value (for the same case). The particular concept is the same, just the notation is different.

Ulises,

“What would that change? A “standard deviation” and a “standard uncertainty” are exactly the same in definition and numerical value (for the same case). The particular concept is the same, just the notation is different.”

Look at the title of the JCGM – Guide to the expression of uncertainty in measurement

It is in this document that “standard uncertainty” is defined as a standard deviation. However, this document has to do with *MEASUREMENT*, not with uncertainty of calculated model outputs which is what the subject of discussion is.

The JCGM defines uncertainty as:

uncertainty (of measurement)

parameter, associated with the result of a measurement, that characterizes the dispersion of the values that

could reasonably be attributed to the measurand

And the definition of a measurand is:

A quantity intended to be measured.

(engineering) An object being measured.

A physical quantity or property which is measured.

Again, none of these has to do with the uncertainty of a calculated result based on uncertain inputs.

Pat determined the uncertainty in the input of the GCMs using a Type A determination. The definition of a Type A determination is: method of evaluation of uncertainty by the statistical analysis of series of observations

What Pat has offered is actually defined in Section 6 of the JCGM as “expanded uncertainty”. From the document: “Although uc(y) can be universally used to express the uncertainty of a measurement result, in some commercial, industrial, and regulatory applications, and when health and safety are concerned, it is often necessary to give a measure of uncertainty that defines an interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand. The existence of this requirement was recognized by the Working Group and led to paragraph 5 of Recommendation INC-1 (1980). It is also reflected in Recommendation 1 (CI-1986) of the CIPM. ”

From the document: “The result of a measurement is then conveniently expressed as Y = y ± U, which is interpreted to mean that the best estimate of the value attributable to the measurand Y is y, and that y − U to y + U is an interval that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to Y. Such an interval is also expressed as y − U ≤ Y ≤ y + U. ”

This is exactly what Pat has done.

Now, to the GCMs. Pat has shown that the GCMs are basically a linear prediction of future temperatures. With the uncertainty interval Pat has calculated for the input to the GCM this can be expressed as:

f(x +/- u) = kx +/- u, where “k” is a constant for the linear relationship.

For an iterative process like a GCM, the value of u compounds exactly as Pat has laid out, i.e. root-sum-square. “u” is an interval; it is not a probability distribution, so there is no “mean” or standard deviation for the uncertainty. It cannot be minimized by trying to invoke the central limit theorem.

Pat’s thesis appears to be quite rigorous and mathematically correct. It simply cannot be easily dismissed.
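Tim’s two claims, a linear emulator and root-sum-square compounding of the ±u interval, can be sketched together (illustrative constants only, not Pat Frank’s actual coefficients or emulation equation):

```python
import math

def emulate(years: int, k: float = 0.02, u_annual: float = 4.0):
    """Toy linear emulator: the projection grows by k per year while the
    +/- uncertainty interval compounds in quadrature at each step."""
    anomaly = 0.0
    u_sq = 0.0
    for _ in range(years):
        anomaly += k              # linear projection, f(x) = k*x
        u_sq += u_annual ** 2     # root-sum-square accumulation
    return anomaly, math.sqrt(u_sq)

a, u = emulate(100)
print(f"projection {a:.1f} +/- {u:.1f} (illustrative units)")
```

The sketch makes the central point visible: the projected value grows linearly with the number of steps, while the uncertainty envelope grows as the square root of the number of steps, so after enough iterations the envelope dwarfs the projection.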

Tim,

I missed this while I was hooked on your dialogue with Rich. You may find that some comments there are also relevant to your thoughts expressed here. But let’s go on :

“Look at the title of the JCGM – Guide to the expression of uncertainty in measurement

It is in this document that “standard uncertainty” is defined as a standard deviation. However, this document has to do with *MEASUREMENT*, not with uncertainty of calculated model outputs which is what the subject of discussion is.”

Tim, ALL statistics deals with measurements or counts. The approaches are portable among problems of the same type, while these may widely differ in verbal description or mental representation. The standard deviation tells you the same thing in whichever approach where the use of the Normal Distribution is justified. Is there any alternative definition of “standard uncertainty” than the sd?

You may however question whether a model (GCM) output can be regarded and treated as a random variable. (I’d say Yes if in an approach like sensitivity analysis, otherwise not).

But Pat does not deal with GCM output (it’s Roy who does), but with his own GCM emulation model. He refers to the practices collated in the JCGM as error propagation: variances in, variance out.

“Pat determined the uncertainty in the input of the GCMs using a Type A determination. The definition of a Type A determination is: method of evaluation of uncertainty by the statistical analysis of series of observations”

OK, Type A is classical analysis. But Pat determined nothing substantial; he picked from an analysis given in the literature, which deals with GCM output, not input. He determined he could use it in his approach and built it in.

>>What Pat has offered is actually defined in Section 6 of the JCGM as “expanded uncertainty”. From the document: “Although uc(y) can be universally used to express the uncertainty of a measurement result, in some commercial, industrial, and regulatory applications, and when health and safety are concerned, it is often necessary to give a measure of uncertainty that defines an interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand. The existence of this requirement was recognized by the Working Group and led to paragraph 5 of Recommendation INC-1 (1980). It is also reflected in Recommendation 1 (CI-1986) of the CIPM. ”

From the document: “The result of a measurement is then conveniently expressed as Y = y ± U, which is interpreted to mean that the best estimate of the value attributable to the measurand Y is y, and that y − U to y + U is an interval that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to Y. Such an interval is also expressed as y − U u Y u y + U. ”

This is exactly what Pat has done: <<

No, it is not. At least, he does not state it (he should then use U, not u). The basic value is the ±4 W/m^2 sd = u in cloud forcing; the output after multiple steps of combining is also in terms of sd. With multiples of sd, the largely "unphysical" intervals in his Figs. 6A, 7A would be proportionally wider.

[ see also my comments in the other post]

"Now, to the GCMs. Pat has shown that the GCMs are basically a linear prediction of future temperatures. With the uncertainty interval Pat has calculated for the input to the GCM, this can be expressed as:

f(x ± u) = kx ± u, where “k” is a constant for the linear relationship."

The fit of his emulation model is indeed excellent. But his error treatment is not based on the fitting process, it is a separate process, based on a value he picked from literature and embedded in his simulated forcing regime.

Your equation confuses me. It is unconventional to have a +/- term on the lhs.

I don't understand it. Full stop.

"For an iterative process like a CGM, the value of u compounds exactly as Pat has laid out, i.e. root-sum-square."

Iterative or not, for any error combination the same rules should apply.

"Root-sum-square" is highly misleading. What is summed are variances, i.e. mean squares. So the correct version in that terminology would be root-sum-mean-squares.

' “u” is an interval, it is not a probability function thus there is no “mean” or standard deviation for the uncertainty. It cannot be minimized by trying to use the central limit theorem.'

It is not an interval, but +/-u would be. It is not a probability function, but it is an estimate of sigma, the 2nd moment of the well-known Normal Distribution. And yes, u = sd, as a sample estimate, has its own u = sd.

"Pat’s thesis appears to be quite rigorous and mathematically correct. It simply cannot be easily dismissed."

Borrowing your words, my opinion may be " expressed as y − U u Y u y + U. ”

[howling monkey’s lament: borrowing without your kind permission. Sorry, I couldn’t resist]

“But Pat does not deal with GCM output (it’s Roy who does), but with his own GCM emulation model.”

Of course he does. If y=kx and z=lx and k=l, then you get the same answer for both. And that is what Pat found. He could emulate the GCMs’ output using a linear equation. Again, it doesn’t matter what is inside the black box known as a GCM if its output is a linear equation.

“You may however question whether a model (GCM) output can be regarded and treated as a random variable. (I’d say Yes if in an approach like sensitivity analysis, otherwise not).”

Again, a sensitivity analysis won’t help if there is uncertainty in the inputs and outputs. It’s like we found with the Monte Carlo analyses of capital projects. A sensitivity analysis done by varying one input only tells you the sensitivity of the model to that one input. It doesn’t tell you anything about the uncertainty in the input or the output. If the model of one capital project shows a high sensitivity to interest rates and the model for another capital project does not, then the second project is judged much less risky and gets ranked higher as a possible project. That sensitivity analysis tells you nothing about the actual uncertainty for either project because future interest rates are very uncertain. Please note that interest rates are not a probability function with a mean and standard deviation. You can guess at what future interest rates will be, but the fact that you have to “guess” is just proof of the uncertainty associated with them.
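The capital-project point can be sketched with a toy discounted-cashflow model. The `project_value` function and every number here are hypothetical: the sketch only illustrates that a one-at-a-time sensitivity wiggle says nothing about the spread the output has when the input is genuinely uncertain.

```python
import random

# Toy NPV-style model of a capital project: a fixed cashflow discounted
# at an uncertain interest rate. All numbers are made up for illustration.
def project_value(rate, cashflow=100.0, years=10):
    return sum(cashflow / (1.0 + rate) ** t for t in range(1, years + 1))

base_rate = 0.05

# One-at-a-time sensitivity: wiggle the single input around its base value.
sensitivity = project_value(base_rate + 0.01) - project_value(base_rate - 0.01)

# Monte Carlo: sample the uncertain input and look at the output spread.
random.seed(0)
samples = [project_value(random.uniform(0.02, 0.08)) for _ in range(10_000)]
spread = max(samples) - min(samples)

print(f"sensitivity to a 2-point rate wiggle: {sensitivity:.1f}")
print(f"output spread under rate uncertainty: {spread:.1f}")
```

The spread under input uncertainty is several times larger than the local sensitivity, which is the distinction the comment is drawing.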

“Basic value is the +/-4W sd=u in cloud forcing, output after multiple steps of combining is also in terms of sd.”

If you are saying that standard deviations combine as root-sum-square instead of root-mean-square then you are trying to make a distinction where there is no difference.

“But his error treatment is not based on the fitting process, it is a separate process, based on a value he picked from literature and embedded in his simulated forcing regime. Your equation confuses me. It is unconventional to have a +/- term on the lhs. I don’t understand it. Full stop.”

His treatment of the uncertainty does not need to be part of the fitting process. It is merely enough to show that the CGMs provide a linear output. That output is either perfectly accurate or it isn’t. If it isn’t then an uncertainty interval applies. If there is uncertainty in the input then there *has* to be uncertainty in the output of the mathematical process, i.e. the lhs. The climate alarmists like Nick claim that the math model can somehow negate that uncertainty in the input so that the output is accurate to any number of significant digits. What Pat has shown is how that uncertainty compounds over an iterative process. It simply isn’t sufficient to say the magic words “central limit theorem” and wave your hands over a computer terminal in order to claim no uncertainty in the output.

“Iterative or not, for any error combination the same rules should apply. ‘Root-sum-square’ is highly misleading. What is summed are variances, i.e. mean squares. So the correct version in that terminology would be root-sum-mean-squares.”

Again, you are assuming that the uncertainty interval is described by a normal probability distribution. If that were true then, as the climate alarmists claim, the outputs could be made as accurate as wanted using the central limit theorem. It’s the difference between error and uncertainty. Averaging measurements can make the measurement more accurate based on the central limit theorem. That just isn’t the case with uncertainty. If your uncertainty is +/-4 W/m^2, then exactly what probability distribution is associated with that? If it is a normal probability distribution then you would assume the mean would be 0 W/m^2, i.e. no inaccuracy at all, so why even bother trying to determine the uncertainty?

It truly is that simple. If the uncertainty interval is a probability distribution then the uncertainty interval can be made as small as you want using the methods in the JCGM. If it isn’t a probability distribution then you can’t make the uncertainty interval smaller with nothing more than calculations.
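The error-versus-uncertainty distinction being argued here can be illustrated with a toy simulation (all numbers hypothetical): averaging beats down random scatter, but a fixed systematic offset survives no matter how many readings are averaged.

```python
import random
random.seed(42)

true_value = 10.0
systematic_offset = 0.5   # hypothetical fixed calibration bias
random_sigma = 2.0        # hypothetical random per-reading noise

# Each reading carries both a random error and the same fixed offset.
readings = [true_value + systematic_offset + random.gauss(0, random_sigma)
            for _ in range(100_000)]

mean = sum(readings) / len(readings)
print(f"residual error of the mean: {mean - true_value:.3f}")
# The random part shrinks by ~1/sqrt(N), but the mean still sits
# roughly 0.5 away from the true value: the systematic component
# does not average out.
```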

“It is not an interval, but +/-u would be. It is not a probability function, but it is an estimate of sigma, the 2nd moment of the well-known Normal Distribution. And yes, u=sd , as a sample estimate, has its own u=sd.”

If it is not a probability function then how can anything associated with it be? The 2nd moment, i.e. variance, requires a mean be determined. That requires a probability function to be defined, i.e. defining which values are more probable than others. Same for variance and therefore for the sd as well. The mere definition of uncertainty means you simply don’t know which values are more probable than others. It’s like trying to guess at what the third digit is on a digital meter that has only two digits. The uncertainty is a minimum of +/- .0025 and you simply don’t know where in the +/- interval the actual value lies. And no amount of statistics can lessen that uncertainty.

Reference #7’s link to Dr Lindzen’s pdf on the Yale site returns: “Page not found. The requested page could not be found.”

As can be seen on this page:

https://ycsg.yale.edu/climate-change-0

All the other 18 links return the requested pdf presentation….except the link to the Dr Lindzen presentation pdf. So it is not a typo of the URL by Pat Frank.

Looks like Yale pulled the access to the Lindzen’s PDF presentation from their web host server to hide counter-evidence/views.

Just another day at the Climate Disinformation Campaign by academia.

This link appears to provide the Lindzen paper:

https://www.independent.org/publications/article.asp?id=1714

From the article: “This is a long post. For those wishing just the executive summary, all of Roy’s criticisms are badly misconceived.”

Now *that* is a summary! Short and sweet. 🙂

Everyone, it isn’t this complicated, and the general public will never understand these arguments. Keep it simple; present arguments in a manner that an 8th grader could understand. Einstein was able to define the universe in 3 letters: E=mc^2. That is an elegant way to explain science in a manner that everyone can understand.

NASA GISS has a website where you can view raw temperature data from all the weather stations in their network.

https://data.giss.nasa.gov/gistemp/station_data_v4_globe/

Weather stations are impacted by the Urban Heat Island Effect, so NASA produces a BI, or Brightness, value for each site. Stations with BIs of 10 or less are considered rural. If you go there and look up Central Park, New York, you will see a gradual temperature increase over the past 100 years. If you go a little north to West Point you will find no warming. CO2 increased from 300 ppm to 400 ppm at both West Point and NYC, yet only NYC shows any warming. A 33% increase in CO2 had no impact on West Point temperatures, which is what one would expect for a radiative molecule that shows a logarithmic decay in its W/m^2 absorption.

Now, the Hockey Stick on which all this climate hysteria is based shows a 1.25°C increase since 1902. If you simply limit the NASA GISS stations to the stations that existed before 1902 and narrow them down to stations with a BI of 10 or less, you will see that there are very few, if any, that show an uptrend in temperatures. Almost all will show that recent temperatures are at or below the levels reached in the early 1900s.

The question that needs to be answered is: how can a 33% increase in CO2 not result in any measurable increase in temperatures? There are plenty of examples right on the NASA website. Until someone can explain how CO2 can result in stable temperatures at almost all stations controlled for the UHIE, there is no need to try to explain how it causes warming, because the thermometers of NASA say it doesn’t. What Michael Mann’s Hockey Stick is measuring is the UHIE, if it is measuring anything at all. His increase matches that of New York City, not West Point.
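The station-filtering procedure described above can be sketched in code. The station records below are entirely hypothetical (the real data lives at the GISS v4 station page linked earlier); only the filter criteria, BI ≤ 10 and a record starting before 1902, come from the comment.

```python
# Hypothetical station records standing in for GISS v4 station data.
# Names are real places but every value here is made up for illustration.
stations = [
    {"name": "Central Park", "bi": 200, "first_year": 1880},
    {"name": "West Point",   "bi": 7,   "first_year": 1890},
    {"name": "Suburb X",     "bi": 45,  "first_year": 1950},
]

# The filter described in the comment: rural stations (BI <= 10)
# whose records predate 1902.
rural_long = [s for s in stations
              if s["bi"] <= 10 and s["first_year"] < 1902]

print([s["name"] for s in rural_long])
```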

Can I (try) to summarise, for us of lesser knowledge than your good selves.

Pat, are you saying that your analysis shows the degree of failure of the GCMs is greater or less than Roy’s analysis shows – i.e. it’s worse than we thought (couldn’t resist that)?

Or that Roy’s analysis is, either in part or wholly inappropriate, or inadequately describes how the GCM models are failing?

I can see the concerns of those of us not wanting to hand sticks to the CAGW mob. An analogy would be an inaccuracy in a detail of Charles Darwin’s theory of evolution being used by religious theologians to claim its falsehood in entirety, when all Darwin had done was fail to notice or correct an error in one part, which had no impact on its overall validity.

CO2 asks “The question that needs to be answered is how can a 33% increase in CO2 not result in any measurable increase in temperatures?”

The answer is coming from Ronan and Michael Connolly. See https://www.youtube.com/watch?v=XfRBr7PEawY for radiosonde evidence that the atmosphere obeys the ideal gas law and no greenhouse effect is present.

This answer underscores Pat’s uncertainty analysis that the physics in the models is not right. The greenhouse gas warming in every GCM is wrong.

Thanks for your persistence Pat.

Yep. The effect of CO2 conc. is minimal to nil.

Thank you.

”Einstein was able to define the universe in 3 letters: E=mc^2. That is an elegant way to explain science in a manner that everyone can understand.”

Very easy…

CM = BS + infin ERR^3

”Einstein was able to define the universe in 3 letters: E=mc^2. That is an elegant way to explain science in a manner that everyone can understand.”

GCM = infin BS x ERR^3

OMG, I think you’ve nailed it!

I’d only suggest using the mathematical symbol for infinity to make it look more technical and sciency 🙂

@ CO2isLife

Ok, I’ll keep it simple.

Historical AND modern weather readings are nowhere near accurate enough to produce the results that the warmists claim, and will NEVER be accurate enough to resolve a margin of error of less than about ±2.3°F per-day per-cycle per-equation. Homogenizing the data doesn’t resolve anything, because it always produces a + error in the result, since 0 is 0 K, not some “floating average” you get to assign. Even though you hide the error in K by using some average, it doesn’t go away in the physics.

The absolute rule of statistics is that you must precisely calculate your error and deal with it or your results are wrong no matter what they are. There is no justifiable time in mathematics in which you can claim error doesn’t happen or doesn’t matter and the moment you start pulling numbers out of your ass *by any method* your error goes nuclear huge. Applying a custom waveguide to your output is FRAUD from the start.

Agreed, if you have to funnel your model outputs to keep them reasonable then your model, by definition, is unreasonable (and unrealistic).

Lindzen : “Rather, the mere existence of criticism entitles the environmental press to refer to the original result as ‘discredited,’ ”

Exactly, Kant’s Critique of Pure Reason.

These are hamfisted Kantian wannabes. Kant set up a straw dog of “pure reason” and assaulted it, the Robespierre of the human mind. When in fact “pure reason” does not exist; rather, creative reason does.

Pointing out that the climate physics at every iteration is still wrong, implies much creative reason, science , is needed to further advance. Yet exactly that is the target of the climate gang, creative reason itself.

Kant, who can’t do it anyway (wrote Edgar Poe), recanted with pity for poor butler Lampe, and brought back smashed roadkill instead, the Critique of Practical Reason – “the errors in practice cancel”.

It took a poet to notice this, Heinrich Heine, and it escapes most today. Meanwhile the Robespierre of the mind is producing something mindless called Extinction Rebellion, easily seen how with Heine’s razor sharp insight.

Please stop modelling anything as long as you have not understood the very basics. I suggest you start learning about the “GHE”.

https://de.scribd.com/document/414175992/CO21

I don’t post much on here but I saw Roy’s comments and thought “Signal to noise?”

Pat has expressed in climate model terms the second part of the Scientific Method after you come up with an idea – namely what precision does your idea require for measurement?

Or in simpler terms – use the right tools, don’t be the tool.

I saw the same guff with the temperature measurement averages. It breaks the Central Limit Theorem at any rate except in one case and one case only: HypotheticalLand

Here you are free to hypothesise and speculatise, while riding unicorns down showers of rainbows. Or in reality get a very sore head with hard equations.

But do not apply this to the real world.

How hard is this to understand?

The best thing is to have Skin in the Game. So if we apply climate science methods to drinking water, you could be drinking turgid sludge that would still have low levels (ppms) of contaminants by climate science measurement methods – +/-1000% is fine.

Drink away!!!!

Mkcky … “Turbid.” Sludge has turbidity, other things have turgidity. Although, at my age, not as often as it used to.

Thomas,

you may try myxomycetes, they are mobile sludge (protoplasma), but can erect turgid constructs.

[Prophylactic health warning ! …… No experience of my own.]

Also, I’ve noticed that all the in-text Greek deltas have become Latin “D,” as in DT instead of delta T.

Please take this into account, especially in various equations.

Ptolemaic equations for calculating the movements of the planets and stars (sans Jupiter’s moons) returned pretty good results, too, for hundreds of years. Good enough that, until digital computers came along, 20th Century mechanical planetarium projectors used Ptolemaic equations to recreate the motion of the planets and stars for planetarium-goers.

The fundamental underlying physics (model) was of course very wrong. But Ptolemaic equations return useful results for planetary positions over short periods of time. They look rather convincing.

So too AOGCMs. But GCMs have substantially larger uncertainty errors (%-wise in the underlying physical measures that are the fudged/tuned parameters) so those GCM results quickly become useless (Pat shows they are useless at ~ one year’s time step).

Both the modern GCMs and Ptolemaic mathematical models of the heavens are explicit examples of Richard Feynman’s Cargo Cult Science analogy. Everything appears to work in them at the level of abstraction to the casual viewer and they then assume the underlying physics are correct. But we know better today about the proper modeling of the motions of stars and planets we see in the sky.

So you’d never use Ptolemaic planetary motion models to target a multi-billion dollar planetary probe to Jupiter or Mars and expect it to actually arrive there. A stupendous waste of money and resources would result.

However, the Climate Cargo Cultists expect multi-trillion dollar rearrangements of the world’s energy economy based on their junk model outputs claiming high CO2 sensitivity when observations suggest otherwise. Cargo cultism at its finest. And yet the Leftists/Climate Cultists label Climate Skeptics as “anti-science” and “science deniers.”

Mere projection on their part.

Correct, I think, John Q. There is a conceptual problem here that seems beyond resolution for some people. Dr. Frank has made a beautiful job of this. Perhaps it would help to repeat his summary –

“….The growth of uncertainty does not mean the projected air temperature becomes huge. Projected temperature is always within some physical bound. But the reliability of that temperature — our confidence that it is physically correct — diminishes with each step. The level of confidence is the meaning of uncertainty. As confidence diminishes, uncertainty grows…..”

Thank goodness Pat Frank wasn’t my math teacher. If he had been, I might have understood some of the concepts that caused me such difficulty as to lead me to abandon the study of the physical sciences, and I wouldn’t have gone back to my real love in the biological sciences!

“Thank goodness Pat Frank wasn’t my math teacher.”

+1

Stokes

Snark does not become you!

+1

Oh my, lookie here at ‘ole doc . . . got his Nickers all Stoked up into an ad homineering wad again. Chalk it up to apotheosize shrinkage I guess.

The fact Nick Stokes feels the need to make a personal attack, as opposed to sticking to the math & science, says a great deal.

Just a +1. Not even a ±1.

More importantly, he completely missed the point, intentionally or otherwise (typically), which was that Pat being the commenter’s maths teacher would have allowed him to understand the maths they had had difficulty understanding.

“would have allowed him to understand the maths they had had difficulty understanding”

In fact, as I showed here, the paper is riddled with errors in elementary math. No-one seems to have the slightest curiosity about that.

Nick,

Let’s take your comment about the math and look at it.

“1. To estimate uncertainty you need to study the process actually producing the numbers – the GCM. Not the result of a curve fitting exercise to the results.”

Sorry, I can write a transfer equation for a black box by merely knowing the input and output. I don’t need to study the process.

“You need to clearly establish what the starting data means. The 4 W/m2 is not an uncertainty in a global average; it is a variability of grid values, which would be much diminished in taking a global average.”

Certainly the +/- 4W/m^2 is a global average. You didn’t bother to read Pat’s paper at all. You can’t even use the +/- in front of it!

“Eq 2 is just a definition of a mean.”

So what? What’s actually wrong with it?

“Eqs 3 and 4 are generic formulae, similar to those in say Vasquez, for mapping uncertainty intervals. They involve correlation terms; no basis for assigning values to those is ever provided.”

Again, so what? Eq 3 and 4 explain the propagation of uncertainty. They don’t actually involve correlation terms. As Pat’s document states: “When states x0,., xn represent a time-evolving system, then the model expectation value XN is a prediction of a future state and σ2XN is a measure of the confidence to be invested in that prediction, i.e., its reliability. ”

“but where Eq 1 took an initial F₀ and added the sum of changes: F₀+ΣΔFᵢ, Eq 5 takes that initial F₀ and adds the ith change without the previous ones: F₀+ΔFᵢ.”

The only one with a math problem here seems to be you. Eq 5.1 and 5.2 describe the ith step. Why would you need to involve the previous steps?

“It forms the sum, but instead of dividing by n, the number of values, it divides by 20 years, the period of observation.”

Because the value being used is a 20 year average.

“If n increased, the “mean” would rise, not because of bigger values, but just because there were more in the sample.”

Huh? The mean doesn’t rise because of the number of samples, it would rise because the sum of the samples went up. It could also go down if the additional samples were of lesser value than the mean!

“The unit of the results is K sqrt(year). If you use ±4 Wm⁻²/year, as Pat intermittently does, the units are K/sqrt(year)”

Pat states: “Following from equations 5, the uncertainty in projected air temperature “T” after “n” projection steps is (Vasquez and Whiting, 2006)”

The units are actually K/step (each step just happens to be a year). And when summed from 1…n steps you get temperature as the final result.

None of your math objections make any sense and you certainly didn’t prove anything other than the fact that you have a hard time reading what is written.

Nick, “In fact, as I showed here, the paper is riddled with errors in elementary math.”

Completely refuted, point-by-point here.

You showed nothing except wrong.

Great and detailed response, Tim Gorman.

You’re putting in a lot of work. I admire that (and am grateful).

Tim Gorman

“Let’s take your comment about the math and look at it”

“Eq 3 and 4 explain the propagation of uncertainty. They don’t actually involve correlation terms.”

Look at the last term in Eq 3. The σ_{u,v}. What do you think that is, if not a correlation? In Eq 4 it is σ_{i,i+1} etc.

“‘It forms the sum, but instead of dividing by n, the number of values, it divides by 20 years, the period of observation.’ Because the value being used is a 20 year average.”

OK, let’s just focus on that one. It is S6.2. You might like to note the complete hash of the subscripts in the statement. But anyway, the upshot is that to get the average, n sets of numbers are added. Then the total is divided, not by n, but by 20 years, units emphasised. That is not an average. Junior high school kids who put that in their tests would fail.

And as I said, a simple test is, what if all the numbers were the same value c? Then the average should be c. But this botch would give c*n/(20 years). Not just a different number, but different units too. And not constant, but proportional to n, the number in the sample.
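The constant-value test in this comment can be written out as a toy check. The numbers are arbitrary, and units are carried as bare labels rather than a real unit type; the code simply computes both quantities side by side.

```python
c = 3.0          # every sample has the same value
period = 20.0    # the 20-year period, as a bare number of years

for n in (5, 20, 40):
    samples = [c] * n
    mean = sum(samples) / n             # ordinary mean: always c
    per_period = sum(samples) / period  # sum divided by the fixed period
    print(n, mean, per_period)
```

The ordinary mean stays at c for any n, while the sum-over-period quantity changes with the sample count, which is the behaviour the two sides are disputing the meaning of.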

Nick,

“Look at the last term in Eq 3. The σ_{u,v}. What do you think that is, if not a correlation? In Eq 4 it is σ_{i,i+1} etc”

“For example, in a single calculation of x = f(u,v,…), where u, v, etc., are measured magnitudes with uncertainties in accuracy of ±(σu,σv,…), then the uncertainty variance propagated into x is”

They are uncertainties! They add in quadrature because they are independent. Correlation is neither required nor specified here.

“That is, a measure of the predictive reliability of the final state obtained by a sequentially calculated progression of precursor states is found by serially propagating known physical errors through the individual steps into the predicted final state. When states x0,., xn represent a time-evolving system, then the model expectation value XN is a prediction of a future state and σ2XN is a measure of the confidence to be invested in that prediction, i.e., its reliability.”

“Junior high school kids who put that in their tests would fail.”

Which you continue to do. Again, if I drive my car 100,000 miles over a 20 year period then I *can* divide that 100,000 miles by 20 to get an average of how many miles I drove per year.
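The car-mileage arithmetic in this comment, spelled out with its own numbers:

```python
total_miles = 100_000   # miles driven over the whole period
period_years = 20       # length of the period in years

miles_per_year = total_miles / period_years
print(miles_per_year)   # 5000.0, an average in miles per year
```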

“But this botch would give c*n/(20 years).”

Which would be correct if c is an annual average and n is 20 years.

I’ll let Pat speak for himself:

“and ei,g is of magnitude Δ(cloud-cover-unit) and of dimension cloud-cover-unit. For model “i,” the ANNUAL (capitalization mine, tpg) mean simulation error at grid-point g, calculated over 20 years of observation and simulation, is”

“where “n” is the number of simulation-observation pairs evaluated at grid-point “g” ACROSS THE 20-YEAR CALIBRATION PERIOD (capitalization mine, tpg). Individual grid-point error ei,g is of dimension cloud-cover-unit year^-1, and can be of positive or negative sign; see Figure 5. The model mean calibration uncertainty in simulated cloud cover at grid-point “g” for “N” models, EN,g, is the average of all the 20-year annual mean model grid-point errors,”

“This error represents the 20-year annual mean cloud cover calibration error statistic for “N” models at grid-point “g.” The 20-year annual mean grid-point calibration error for “N” models is of dimension cloud-cover-unit year^-1. The 20-year annual mean calibration error at any grid-point “g” can be of positive or negative sign; see Figure 5.”

I don’t know what “across the 20-year calibration period” means to you but to me it means a 20 year total. When divided by 20 gives an annual mean!

Pat has pointed this out to you at least twice that I know of. Take your fingers out of your ears and listen!

“Again, if I drive my car 100,000 miles over a 20 year period then I *can* divide that 100,000 miles by 20 to get an average of how many miles I drove per year.”

That is not an average of n numbers. That is a rate.

“Which would be correct if c is an annual average and n is 20 years.”

n is not 20 years. It is a number, the number of things being averaged. In this case n is the number of simulation-observation pairs evaluated at grid-point g.

“That is not an average of n numbers. That is a rate.”

So is year^-1!!! Anything over time is a rate! SO WHAT?

“Which would be correct if c is an annual average and n is 20 years.”

n is not 20 years. It is a number, the number of things being averaged. In this case n is the number of simulation-observation pairs evaluated at grid-point g.”

Again, I’ll let Pat speak for himself: “where “n” is the number of simulation-observation pairs evaluated at grid-point “g” ACROSS THE 20-YEAR CALIBRATION PERIOD (capitalization mine, tpg). Individual grid-point error ei,g is of dimension cloud-cover-unit year^-1, and can be of positive or negative sign; see Figure 5.”

Why do you *always* manage to leave off the “ACROSS THE 20-YEAR CALIBRATION PERIOD”? You are really getting to be freaking ridiculous. When you have to quote out of context to support your assertions it’s bloody ridiculous.

I’ve said it before and I’ll say it again, you are nothing more than an internet troll. Your goal is to see your name on the internet, it’s not to actually contribute anything.

“Anything over time is a rate! SO WHAT?”

I think it is cute that folks who tell us that only they understand error and uncertainty, and so GCMs have it all wrong, can’t cope with a simple bit of maths like an average. Suppose you want to know the average sale price of a stock OVER A ONE DAY PERIOD. You might sum the prices over sales and divide by the number of sales. You might sum the total paid and divide by the number of stocks traded. Both of those are averages. But you would not divide either of those numerator totals by one day and call it an average price. And that is what is happening here.

Nick, “But you would not divide either of those numerator totals by one day and call it an average price. And that is what is happening here.”

Why not? I see monthly averages of stock prices all the time! How do you suppose they come up with those? BTW, the dimension becomes price/month – a *rate* associated with time.

go here: http://stocks.tradingcharts.com/stocks/charts/AAPL/m

for a chart of monthly stock prices for Apple, Inc from 2008 to 10/2019.

Jeez, you’ve gotten so far afield in your denials now that you are about to fall off the edge of the earth!

Nick,

“And that is what is happening here.”

No, it’s not.

Tim Gorman,

“Why not? I see monthly averages of stock prices all the time! How do you suppose they come up with those? BTW, the dimension becomes price/month”

More stuff that is just weird, although it is in line with the Pat Frank claim. I looked at your link. It said Apple stock price is currently about $240. No mention of $240/month. Can you find anyone saying the price is $240/month? Would that be an annual average of $2880/year?

“More stuff that is just weird, although it is in line with the Pat Frank claim. I looked at your link. It said Apple stock price is currently about $240. No mention of $240/month. Can you find anyone saying the price is $240/month? Would that be an annual average of $2880/year?”

OMG! Are you *TRULY* that obtuse?

Nick: ““But you would not divide either of those numerator totals by one day and call it an average price. And that is what is happening here.”

You said you can’t divide by a time step to get an average over time! I just gave you an example. The graph is not a “rate”, the graph is the monthly average of the stock price. It’s determined by finding the average price in each month. Got that? A month is a unit of time! It’s the AVERAGE STOCK PRICE PER MONTH, not the rate of growth of the stock price per month. One is done by adding all the daily average stock prices (i.e. average price per day) and dividing by the number of days in the month! Note the word “dividing”. That indicates a denominator that is a time step. How do you suppose the average *daily* price is determined? The *rate* of growth would be determined by subtracting the average price on the last day of the month from the average price on the first day of the month! A totally different numerator! But it would still have a “per month” denominator!

You’ve just totally lost it Nick. Take a vacation! Average price per day is not a rate. Average price per month is not a rate. Each has a denominator based on time.

Tim

“I see monthly averages of stock prices all the time! How do you suppose they come up with those?”

Just to answer that question, suppose June has 20 trading days. They would add the closing prices for those days, which comes to about $4800. Then they would divide by 20, getting average price $240.

By Pat’s method of S6.2, they would divide the $4800 by one month, to get 4800 $/month for the monthly price average. Not only weird units, but the number makes no sense.
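For concreteness, the two computations being argued over can be put side by side (a sketch with invented closing prices, not anyone’s actual data):

```python
# A sketch of the two computations in dispute, using invented prices:
# 20 trading days in June, each closing at $240 (so the sum is $4800).
closing_prices = [240.0] * 20
total = sum(closing_prices)  # 4800.0 dollars

# Conventional average: divide by the NUMBER of prices (a pure count).
# The result keeps the units of the data: dollars.
avg_price = total / len(closing_prices)  # 240.0

# The disputed sum/interval reading: divide by the LENGTH of the interval
# (1 month), which attaches a time unit: dollars per month.
interval_months = 1
sum_per_interval = total / interval_months  # 4800.0, read as $/month

print(avg_price, sum_per_interval)
```

The two numbers coincide only when the divisor happens to equal the count; the disagreement in this thread is over which divisor, and hence which units, an “average” carries.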

You *have* to be kidding me!

“Just to answer that question, suppose June has 20 trading days. They would add the closing prices for those days, which comes to about $4800. Then they would divide by 20, getting average price $240.”

That’s an AVERAGE PRICE PER DAY! It has a time step in the denominator! If you don’t define the interval then you can’t define the average either. Look at the words you used: “those days”. In other words a time step!

“By Pat’s method of S6.2, they would divide the $4800 by one month, to get 4800 $/month for the monthly price average. Not only weird units, but the number makes no sense.”

Huh? Talk about mixing up dimensions! You said divide by 20 days! That’s a time interval. 20 days equals one month in your example. (x/20)(20/month) equals (x/month).

What is so god damn hard about understanding that?

Tim,

“One is done by adding all the daily average stock prices (i.e. average price per day) and dividing by the number of days in the month! Note the word “dividing”. That indicates a denominator that is a time step.”

Note the word number. In fact you don’t divide by the number of days in the month. You divide by the number of days for which you have prices – i.e. trading days. And Pat’s formula says you should divide, not by some number of days, but by 1 month, to get month in the denominator.

But you said

“the dimension becomes price/month”. You mean, presumably, price/time. Once you have worked out dimensions, then you can assign units, with the appropriate conversions. $240/month is $2880/year is €214/month. If the dimension is meaningful, the conversions apply. 1 pound/cu ft is 16 kg/m^3. You don’t need to ask what it represents.

“In fact you don’t divide by the number of days in the month. You divide by the number of days for which you have prices – ie trading days. And Pat’s formula says you should divide, not by some number of days, but by 1 month, to get month in the denominator.”

OMG! Again!

Where does Pat say that? He develops an annual mean and says it is an annual mean! I.e. per year!

It’s no different than developing an annual average of miles driven and saying it has the dimensions of miles/year!

You remind me of my kids when they were six years old. They couldn’t let things go even when shown over and over again that they were wrong!

“$240/month is $2880/year is €214/month”

You *still* haven’t got it! Price per month is *not* the same thing as price-growth per month. You can’t get your dimensions correct at all!

Price per month is an average price in that month. It’s sum/interval. Price-growth per month is a delta. It’s subtraction/interval.

Average miles driven per year is miles/year. Growth in the miles driven per year is (miles/year1) – (miles/year0)

Average daily price is a sum/interval. Average daily price growth is subtraction/interval.

I simply don’t know how to make it any more plain.

“Price per month is an average price in that month. It’s sum/interval.”

OK, let’s do that arithmetic. It is the arithmetic of Pat’s S6.2. Again, suppose 20 trading days in June, so the sum of 20 closing prices for Apple is $4800. You want to say the interval is 1 month. So sum/interval is 4800 $/month.

Or maybe you want to say that the month is 30 days. OK, sum/interval = 160 $/day.

Or maybe you do just want to count trading days. Then the result is 240 $/(trading day). That is the conventional arithmetic, but without making the units change.

Or a month is 1/12 year, so the month average is sum/interval= 4800/(1/12)= 57600 $/year.

You see that you can’t escape the conversion of units. That is built in. 160 $/day = 4800 $/month = 57600 $/year. It comes from the number you put in the denominator.

Nick,

“You see that you can’t escape the conversion of units. That is built in. 160 $/day = 4800 $/month = 57600 $/year. It comes from the number you put in the denominator.”

You are *still* confusing growth rate with average price. Did it not hit you at all that in order to calculate growth rate you have to subtract? Growth is a delta and an average is not.

miles/year2 – miles/year1 gives you growth in the number of miles/year. (miles/year1 + miles/year2) /2 gives you an average miles/year.

Can you truly not tell the difference between the addition and subtraction operators?

“You are *still* confusing growth rate with average price.”

And you are still dodging the arithmetic consequences of your claim. You are big on blustery words, very small on numbers. I followed through on your assertion about the arithmetic you specified – how do you think it actually works? Monthly average Apple share price, according to you, is $4800/month. Do you want to defend that as the right answer? If not, how would you calculate it? Numbers, please.

“Monthly average Apple share price, according to you, is $4800/month.”

Sorry. I pointed out to you that growth requires a delta, i.e. a subtraction. You keep on using multiplication. You simply refuse to accept the fact that a monthly average is not a growth rate. And the average monthly price has a dimension of price/month. If the time interval is not specified then you can’t tell if it is a daily average price, a monthly average price, or an annual average price. It truly is that simple – except for you I guess.

Nick thinks you can measure something the size of a nanometer with a tape measure from Home Depot. That’s what this comes down to. He’s pretending the stability in output of a sensitivity analysis of a black box model overrides the futility of trying to estimate the size of a human cell with a tape measure.

Probably because he’s a pure math guy with no exposure to the real world.

Nick, “It is S6.2. You might like to note the complete hash of the subscripts in the statement.”

Right. Subscript “g” for grid-point and subscript “i” for climate model (i = 1→N). Very confusing.

Eqn. S6.2 just shows calculation of an annual average. Apparently another very difficult concept.

Nick, “But anyway, the upshot is that to get the average, n sets of numbers are added. Then the total is divided, not by n, but by 20 years, units emphasised. That is not an average. Junior high school kids who put that in their tests would fail.”

An annual average is a sum divided by number of years.

You know that, Nick. You’re just being misleading. As you were when you falsely called a straightforward set of subscripts “hash.” Pretty shameless.

Nick, “That is not an average of n numbers. That is a rate.”

A (+/-) root-mean-square error is not a rate. (+/-)4 Wm^-2 year^-1 is not a rate.

Rate is velocity. (+/-)rmse is not a velocity. A (+/-) uncertainty statistic is not a physical magnitude. Plus/minus uncertainty statistics do not describe motion.

Your objection on those grounds is wrong and misleading, Nick.

We all know you’ll stop it, now that you realize your misconception.

Pat,

“Right. Subscript “g” for grid-point and subscript “i” for climate model (i = 1→N). Very confusing”

“you falsely called a straightforward set of subscripts, “hash.””

It is a hash. Again, the equation and nearby text is here. I can’t do a WYSIWYG version in comments, but here it is in TeX-like notation:

ε_{i,g} = (20 years)⁻¹ \sum_{g=1}^{n} e_{i,g}

The first, obvious hash, is that you are summing over g, yet g appears on the left hand side. It is a dummy suffix; that can’t happen.

The next obvious hash is that S6.2 says it is summed over g from 1 to n. Yet the accompanying text says, rightly,

“where “n” is the number of simulation-observation pairs evaluated at grid-point “g””. And, defining, it says “For model “i,” the annual mean simulation error at grid-point g”. You are summing over pairs at a fixed grid-point, not over grids. Actually, “g” should appear on the LHS, but not “i”.

But of course all this pales beside the fact that what is formed in S6.2 just isn’t an average.

“An annual average is a sum divided by number of years.”

Again a schoolboy error. Do you have anything to offer on the Apple share arithmetic above? Is the monthly average Apple price $4800/month? Would it be $57600/year, based on adding 240 closing prices and dividing by 1 year?

Tim, one of Nick’s standard tactics is to make a subtly misleading argument, and delude people into arguing within the wrong context. He does that to his advantage.

For example and another and yet another.

Nick, “The first, obvious hash, is that you are summing over g, yet g appears on the left hand side. It is a dummy suffix; that can’t happen.”

Eqn. S6.2 is summing over 20 years of error at grid-point ‘g’ for model ‘i.’ There are ‘n’ values of grid-point ‘g’ error. The total error epsilon is for model ‘i’ at grid-point ‘g.’ Why would it not be eps_i,g?

You’re just objecting over usage, not meaning.

Next, “The next obvious hash is that S6.2 says it is summed over g from 1 to n.”

There are ‘n’ values of error at grid-point ‘g’ for model ‘i.’

Next, “You are summing over pairs at a fixed grid-point, not over grids. Actually, “g” should appear on the LHS, but not “i”.”

The grid-point errors are for model ‘i.’ The error is for model ‘i,’ and thus e_i. So, your sore point is that my usage is not the usage you’d have used. Too bad. We know you could figure it out. If you tried.

What next, Nick? Will you object to my sentence construction?

Next, “But of course all this pales beside the fact that what is formed in S6.2 just isn’t an average.”

The 20-year sum of grid-point errors divided by 20 is not an annual average. Got it.

Nick, “Again a schoolboy error. Do you have anything to offer on the Apple share arithmetic above?”

Misdirection à la Nick Stokes. The point is eqn. S6.2, and not anything else.

Eqn. 6.2 adds 20 years of grid-point error and divides by 20 to produce an annual average of error. Not a daily average. Not a monthly average. An annual average.
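The computation Pat describes can be written out in numbers (my reading of the description in this thread, with invented error values, not the paper’s data):

```python
# My reading of the S6.2-style average described above: 20 annual
# grid-point errors for one model, summed and divided by 20 to give
# an annual mean error. The error values here are invented.
annual_errors = [1.0, -2.0, 3.0, 0.5] * 5  # 20 yearly errors (illustrative)

annual_mean_error = sum(annual_errors) / 20
print(annual_mean_error)  # 0.625
```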

You claim this average is not an average, and then go on to accuse me of making a schoolboy error. What a laugh.

“one of Nick’s standard tactics is to make a subtly misleading argument, and delude people into arguing within the wrong context.”

There is nothing misleading here. You have asserted your principle just above:

“An annual average is a sum divided by number of years.”

Tim has asserted it above:

“Price per month is an average price in that month. It’s sum/interval.”

Your Eq S6.2 asserts it.

I chose Tim’s example of Apple monthly average share price. I showed what happens if you calculate it according to that principle. It makes no sense. No-one does it. I have invited you to cite any example of people averaging items that way.

I cited the familiar case of calculating an annual average temperature for a location. °C/year?

I’ll ask again – can you give any example, in numbers, of a calculation of some familiar annual (or monthly, or daily) averaged quantity using the method of your paper?

Nick,

“I chose Tim’s example of Apple monthly average share price. I showed what happens if you calculate it according to that principle. It makes no sense. No-one does it. I have invited you to cite any example of people averaging items that way.”

What doesn’t make any sense is your assertion that an average per unit time is a GROWTH rate. 100,000 miles driven in ten years divided by 10 years is *NOT A GROWTH RATE* in miles driven per year.

I’ll repeat it for at least the fifth time – a growth rate is a delta. It is *not* an average!

“I’ll ask again – can you give any example, in numbers, of a calculation of some familiar annual (or monthly, or daily) averaged quantity using the method of your paper?”

Yes, and it has been given to you over and over again. The sum of miles driven in 10 years divided by 10 years is an annual average in miles/year. Just like Pat did in his paper.

What is so hard about this that you can’t understand it? You are just exhibiting the behaviour of a six year old in denying this simple truth!

“Eqn. 6.2 is summing over 20 years of error at grid-point ‘g’ for model ‘i.’ There are ‘n’ values of grid-point ‘g’ error. The total error epsilon is for model ‘i’ at grid-point ‘g.’ Why would it not be eps_i,g?”

Hash upon hash! No, you are not summing over 20 years. You say you are summing over grid-points. But the text says you are summing over, well, something, up to “n”: “where “n” is the number of simulation-observation pairs evaluated at grid-point “g””. Clearly it is “simulation-observation pairs”, and that makes sense.

The total error ε is not for model i at grid-point g. You say you have summed over g. So which gridpoint does the g on the left refer to? In fact you have summed over i, but the same thing applies over that index.

“The point is eqn. S6.2, and not anything else.”

No, the point is the meaning of an average. S6.2 asserts one. You have repeatedly asserted it in words, as has Tim. I show what stupidity it amounts to if you actually try to put numbers to it. I keep challenging you or Tim to actually give an example, using that arithmetic, which makes sense. You can’t.

“You’re just objecting over usage, not meaning.”

No, I’m objecting to getting maths grossly wrong. You seem to think it is OK to write down anything, and then redefine it as you wish. You can do no wrong. But it is wrong and gives wrong results.

“I keep challenging you or Tim to actually give an example, using that arithmetic, which makes sense. You can’t.”

Total miles driven in 10 years divided by 10 years is the annual average of miles driven, i.e. miles/year.

The observations summed over 20 years divided by 20 years IS EXACTLY THE SAME!

When I was in business I used to keep track of the total miles on my vehicle so I could calculate miles/year to estimate expenses for the IRS. Have you been sheltered from the real world for all of your life?
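Tim’s average-versus-growth distinction can be put in numbers (a sketch with hypothetical mileage figures):

```python
# Hypothetical mileage figures: miles driven in each of three years.
miles_per_year = [10_000, 12_000, 14_000]

# An AVERAGE adds the yearly totals and divides by the number of years:
avg_miles = sum(miles_per_year) / len(miles_per_year)  # 12000.0 miles/year

# A GROWTH RATE subtracts one year's total from another's (a delta):
growth = miles_per_year[2] - miles_per_year[1]  # 2000 miles/year of growth

print(avg_miles, growth)
```

The average uses addition; the growth rate uses subtraction. Both can legitimately be quoted “per year,” which is part of why the two keep being conflated in this exchange.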

CC,

“because he’s a pure math guy with no exposure to the real world”

Actually, mostly doing industrial CFD. But if you think you can guide me to the real world, perhaps you can explain what I’m getting wrong in the calculation of the Apple monthly share price average using the sum/(time interval) method that is in S6.2 of Pat’s paper? Or give another real world example of how it works?

“what I’m getting wrong in the calculation of the Apple monthly share price average using the sum/(time interval) method that is in S6.2 of Pat’s paper? Or give another real world example of how it works?”

What you are getting wrong is using the average price per unit time (i.e. month) as a GROWTH RATE! Trying to say an average stock price/month of $240 will give you $2880 per year.

It’s pretty obvious from your statement above that you’ve finally tired of having your nose rubbed in such idiocy. Now you are trying to make it look like you never made that mistake at all. Pat did the same thing you have in your statement here, only he did it over a 20 year span and not a monthly span.

Nick, “Hash upon hash! No, you are not summing over 20 years. You say you are summing over grid-points.”

Twenty years of simulation error over each grid-point. L&H say so. It says so in my paper. Try to get it right, Nick. The hash is all yours.

“So which gridpoint does the g on the left refer to?”

It refers to grid-point ‘g.’

“I show what stupidity it amounts to if you actually try to put numbers to it.”

Your tendentiously concocted numbers, Nick. (Sum of errors over 20 years)/20 = annual average error.

You call that stupid. After a lifetime in math, you deny that (sum/term) = average. Your objection is indistinguishable from intentional nonsense. And I’m being polite.

“No, I’m objecting to getting maths grossly wrong.”

No you’re not. You’re being Nick Stokes.

Nick, “I’ll ask again – can you give any example, in numbers, of a calculation of some familiar annual (or monthly, or daily) averaged quantity using the method of your paper?”

[(Twenty-year sum of days)/20] = annual average = 365.24 days/year.

Give it up, Nick.
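The days-per-year example can be run for a concrete span (a sketch; a run of calendar years gives 365.25, slightly different from the tropical-year figure of 365.24 quoted above):

```python
import calendar

# A concrete 20-year span of calendar years.
years = range(2000, 2020)
total_days = sum(366 if calendar.isleap(y) else 365 for y in years)  # 7305

# Twenty-year sum of days divided by 20 years: an annual average, days/year.
annual_avg = total_days / 20
print(annual_avg)  # 365.25 for these calendar years
```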

“It refers to grid-point ‘g.’”

You don’t seem to have any idea how calculations work. If you have a g index on the left, you need something on the right which tells you what g value is being picked out. You don’t have that at all.

“[(Twenty-year sum of days)/20] = 365.24 days/year”

So what is that the average of? You aren’t averaging daily data for each day. You are just summing days.

“So what is that the average of? You aren’t averaging daily data for each day. You are just summing days.”

Does it matter how many miles I drive per day if I am calculating an annual average? If I sum the miles driven every day over a period of time (say 20 years) and then divide by 20 years to get the annual average, exactly what is wrong with that mathematically?

Nick, “You don’t seem to have any idea how calculations work. If you have a g index on the left, you need something on the right which tells you what g value is being picked out. You don’t have that at all.”

From the SI, “let observed cloud cover at grid-point “g” be (x^obs)_g. For each model “i,” let the simulated cloud cover of grid-point “g” be (x^mod)_g,i.”

Nick, “So what is that the average of? You aren’t averaging daily data for each day. You are just summing days.”

Summing 20 years of days and dividing by 20 gives the annual average days.

Sum 20 years of error and divide by 20 to get the average annual error.

That will make everything clear to everyone, with one likely exception.

Read the next sentence.

What he said was he was glad he did not have a great math teacher.

Did you get that part, Nick?

As I mention above, Nick, as he typically does, either intentionally or mistakenly, misunderstood the point.

Quite telling, really.

Thank God Nick wasn’t either; his inflexibility and super-assuredness is the sign of someone with a very narrow focus and an inability to question his own results.

“Thank goodness Pat Frank wasn’t my math teacher.”

+1

I double the score. BTW, some math and statistics are really helpful in the biological sciences, in my experience.

[/back to snark] And: Pat Frank’s Uncertain Math is far easier to grasp than standard math, being not fraught with that nasty rigour which scared so many students off.

Pat’s math is rigorous and correct, even according to the JCGM.

As usual, people, including you, are mixing up error and uncertainty as well as mixing up input variables having a random distribution with a calculated, determinative result from a model.

The climate warmists don’t even want to admit to any uncertainty in their inputs let alone any uncertainty in their outputs. Doing so would mean having to admit that their models don’t handle the physics very well.

When they are trying to forecast annual temperature increases to the hundredth of a degC when the historical temperature record has an uncertainty of somewhere greater than +/- 1degC they are violating the rules of significant digits right from the very beginning.

“So you’d never use Ptolemaic planetary motion models to target a multi-billion dollar planetary probe to Jupiter or Mars and expect it to actually arrive there.”

100% correct. But…

The scare is imho used to further the goals of certain groups of people that have a rather sinister agenda. They do not believe in AGW themselves but are only interested in the results of the proposed and now implemented ‘climate actions’: depopulation and deindustrialization. The happy few will live the high life; the rest can struggle and perish like in medieval times.

I hope they fail quickly though. Unfortunately, snake oil salesman is the second-oldest profession in the world. People just want to get duped, and this is a very well orchestrated plot, but the truth will come to light. The day of reckoning will be gruesome for the perpetrators.

>>

The day of reckoning will be gruesome for the perpetrators.

<<

You reminded me of the six phases of a project:

1. Enthusiasm

2. Disillusionment

3. Panic

4. Search for the guilty

5. Punishment of the innocent

6. Praise and honors for the non-participants

Jim

Great! Are we in phase 2 yet? ☺

Good point on Ptolemy.

Kepler showed why Ptolemy, Copernicus, Brahe, and his own vicarious hypothesis were all wrong, even when all getting equal orbital fits.

Why?

They were all based on a fallacy – that the cosmos was just geometry.

Universal gravity, Kepler uniquely showed, is a force outside any geometry. Kepler is the first modern physicist. When Einstein later showed that geometry (spacetime) is a stress-energy effect, it confirmed Kepler.

If one takes Newton’s version of Kepler, the famous 3-body problem turns up. I would interpret this with Pat Frank’s uncertainty propagation exactly the same way. Where then is Newton’s error? It is the insistence on pair-wise action-at-a-distance. Kepler never used that meme. That very reductionist procedure builds in an uncertainty propagation.

I would suggest this uncertainty propagation is related to Entropy, maybe the Second Law of Thermodynamics.

A question for Pat Frank : Is it possible the Second Law is just an uncertainty measure?

A bit far afield for me, bonbon.

In the sense that a system has a huge number of iso-energetic states and we have no idea which one it occupies at any time, then perhaps yes. 🙂

I’m trying to parse one of the references about the effect of clouds:

According to Stephens [3], “The reduced warming predicted by one model is a consequence of increased low cloudiness in that model whereas the enhanced warming of the other model can be traced to decreased low cloudiness. (original emphasis)”

So: less warming as a result of increased clouds, and more warming as the result of decreased clouds. I don’t see how the models are in conflict. Can someone explain? Thanks!

Both models predict warming, David. But one predicts warming by increasing clouds in response to CO2 emissions. The other produces warming by reducing clouds in response to CO2 emissions.

So, they both produce warming, but by opposite responses to the same CO2 emissions.

It’s just that one of the models (increased clouds) produces less warming than the other (decreased clouds).

These are the models that are asserted to accurately predict air temperature 100 years from now.

“In summary, not only are all future predictions from climate models wrong, they are really, really wrong in such a way which prevents them from ever being right.” Got it.

Thanks! A thorough mathematical smack-down of Brobdingnagian proportions, to be sure. Lights out, the party’s over.

Gödel’s Incompleteness Theorem comes to mind.

Please remember that, statistically, we do not even have the correct number of daily temperature readings to determine what the average daily temperature is to a tenth of a degree, worldwide. Again, the uncertainty is being omitted in published climate scientist findings, time and again. Without that little tidbit, trying to calculate what net effect a trace gas, of which humans only contribute a fractional amount, has on overall temperature is, at best, a fool’s errand.

If we cannot farm Greenland with a horse-drawn plow, we haven’t even globally warmed up to the level the Vikings enjoyed. And THAT is an inconvenient fact lost in the noise in the signal!

“Please remember that, statistically, we do not even have the correct number of daily temperature readings to determine what the average daily temperature is to a tenth of a degree, worldwide.”

Thank you, the treatment of spatial uncertainty in climatology needs a detailed look.

“Please remember that, statistically, we do not even have the correct number of daily temperature readings to determine what the average daily temperature is to a tenth of a degree, worldwide.”

It wouldn’t matter how many you had. Temperature is an intensive property of the point in space and time measured. Averaging it with other temperature measurements from different locations is physically meaningless.

And that’s assuming those measurements were all taken at exactly the same time. Adding in the variations in the times at which the various temperature sensor readings at the various locations are taken only makes it more meaningless. Just changing the time of the readings of several sensors by as little as a half-hour (or even less), and/or the location of those sensors by as little as a few feet, could easily change that particular “daily average” by a few tenths of a degree at least.

I’ll say it again, Pat: you’re unquestionably the expert on the subject at hand.

Climate science is well off the rails because the wrong people are in charge.

Fancy that . . . unfunded Frank and Monckton having to step in and correct the establishment.

Anyone with a solid understanding of physics can plainly see the dogma has a false base.

Or even a small understanding of physics. 😇. Like me.

Mathematics was my second language, but most of what these two present to us is way too complex for me to grasp without many hours or even days of intense study. I have great respect for both Chris’ and Pat’s dedication and patience in the face of such extraordinary attacks.

Personally, I went into the whole software thing, mainly because it’s much easier than working for a living 🙂

My immediate take on the models was simple and obvious. If you create a program that assumes CO2 causes warming, it will produce results that show that CO2 causes warming. The fact that all other parameters have been adjusted to fit past (questionable) data does not invalidate the original circular reasoning of the models.

That’s how software works.

Oh yeah,

And Harry Read Me showed me that my assumption on this was spot on. That should have shut down the scaremongering immediately, even if you ignored the rest. In fact, anyone quoting model results as ‘evidence’ should be countered with “Harry Read Me” and nothing else.

If they don’t get the message they are totally uninformed.

It does seem like there is some uncertainty that needs accounting at every time step…and that should ‘grow’ with time. There was some argument over the use of +/- 4 watts/m^2 as being an ‘annual’ value (as I recall, the original paper said it was an annual value based on a 20 year period…or some such.) Regardless though, it does seem intuitive that there is some value of uncertainty based on our lack of knowledge of the inputs, and that should accumulate over time. I’m less persuaded by arguments that there can’t be growing uncertainty in various components of the balance by virtue of the fact that an overall balance is maintained perforce. Still open to be convinced of this. As I recall, I didn’t think the heated pot of water was a good analogy (for similar reasons Dr. Frank noted, I think). There needs to be some internal mechanisms of energy transfer (within the pot) that are important in the outcome we are trying to predict. Alas I need to go back and review Dr. Roy’s description of that. I really do applaud his attempt to make a simplified analogy like the water pot, but that one I think might still be lacking.

Agreed. I was “uncertain” of the 4 W/sqm initially also, but it appears to be the best uncertainty statistic we have and is solidly based in actual measurements. It is the rmse of 20 years of CMIP5 runs during periods where we have actual data.

The fact that the balance is maintained in the model or in real life is not a factor. Look at my simple model, T = 20.000 Deg. C + .000001 t (t in years); balance is maintained perforce. Let’s say MEASUREMENT indicates an uncertainty of +/-2 Deg. C annually. Are you still “less persuaded by arguments that there can’t be growing uncertainty in various components of the balance by virtue of the fact that an overall balance is maintained perforce”? I.e., my model sucks. Unfortunately, it appears that CMIP5 may also, shall we say, not be of the highest accuracy.

If the uncertainty exceeds the model extensively, then the model is not capable relative to the known MEASURED cloud forcing uncertainty. During the 20-year period there appears to be some evidence that heating could have occurred, BUT if you read the Lauer report, there was little or no correlation showing that CMIP5 models captured anything actually real about cloud forcing. In other words CMIP5 cannot and does not really even try to model clouds, so yes, uncertainty can increase. Again, that is different than saying that temperature is increasing, only uncertainty in the uncertainty of the model outputs.

“only increase in the uncertainty of the model outputs.” sorry
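The growth of the uncertainty envelope in the toy model above can be sketched as root-sum-square propagation (my reading of the argument, using the assumed +/-2 Deg. C per annual step; this is an illustration, not the paper’s actual code):

```python
import math

# Assumed per-step uncertainty from the toy model above: +/-2 C each year.
SIGMA_ANNUAL = 2.0

def propagated_uncertainty(n_years, sigma=SIGMA_ANNUAL):
    """Root-sum-square of n independent, identical per-step uncertainties."""
    return math.sqrt(n_years * sigma**2)

print(propagated_uncertainty(1))    # 2.0
print(propagated_uncertainty(100))  # 20.0
```

The envelope grows as the square root of the number of steps, which is why a modest per-step uncertainty can dwarf a small projected trend over a century, without implying the temperature itself swings that far.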

Dr. Frank,

Dr. Spencer included a graphical comparison of the month-over-month radiative resistances of the GCMs and actual satellite data on his blog article dated September 28th, 2019. The discrepancies are large, and I would argue, entirely consistent with the uncertainty that you have been pointing to in the models.

Good effort.

It boils down to the accounting problem known as multiple compensating errors.

Where such errors are present the outcome tells you nothing.

Climate modellers have built their models on a plethora of compensating errors so that the models are wholly worthless.

I’d be interested to see Pat apply his technique to the modelling exercise presented previously here at WUWT by me and Philip Mulholland.

No compensating errors there and it works for multiple planets and moons.

SUMMARY:

WHY DO WE NEED COMPUTER GAME “MODELS” THAT MAKE WRONG CLIMATE PREDICTIONS ?

And why do we need to refute those so called “models” when they refute themselves by grossly over-predicting global warming ?

What are called “General Circulation Models” are only elaborate versions of the opinions of the people who program them.

GCMs are personal opinions — they are not a model of any climate change process on this planet.

The physics of climate change is not known with enough accuracy to construct a real climate model … however just enough is known to play computer games, and make consistently wrong predictions.

DETAILS:

The modelers (computer gamers, actually) merely assume the CO2 level is most important for determining the future global average temperature, and natural causes of climate change are just “noise”.

Then they try to get the public’s attention by inventing a water vapor positive feedback theory that allegedly triples the alleged warming effect of CO2 by itself.

The near global measurements of warming using weather satellites (UAH compilation) would have to be doubled to tripled to match the average climate model prediction of global warming (excluding the Russian model, that seems to predict the past rate of warming will continue).

Anyone with common sense would reconsider the water vapor positive feedback theory, because without that alleged “global warming tripler”, the average climate model would seem to provide a reasonable guess.

The leftists in charge of the “coming global warming crisis fantasy” DO NOT like to change their predictions.

They just love the old 1970’s-era CO2 – temperature relationship highlighted in the 1979 Charney Report (+1.5 to +4.5 degrees C. warming per 100% CO2 increase).

So they’re sticking with it — FOR 40 YEARS SO FAR — and never mind the always wrong climate model predictions — the mainstream media never reports them, so few people know.

Every year we hear a “new” prediction of a coming climate change crisis, and every year we wonder why it never seems to arrive.

The climate computer games have to be wrong because not enough is known about climate change physics to construct a real climate model.

Whatever personal climate change theories drive the computer games are obviously wrong — the computer games make wrong predictions — in real science wrong predictions falsify the climate theories of the modelers.

But in climate change alarmist junk science, nothing can be falsified!

WHY DO WE NEED MODELS WHEN WE HAVE EXPERIENCE?

We have over 300 years of actual experience with intermittent global warming (since the late 1600s).

We have over 100 years of actual experience with adding CO2 to the atmosphere.

We have decent near-global temperature averages since 1979.

We have decent global average CO2 measurements since 1958.

Just assume past mild global warming will continue and move on to solving real problems in the world.

I know that such a prediction is boring, but it is not as boring as the RIGHT PREDICTION = NO ONE KNOWS whether the global average temperature will be warmer or colder in 100 years … and we don’t even know if the Holocene inter-glacial will still be in progress!

What we do know is the global warming so far has been harmless, at worst, and beneficial at best (greening the planet and supporting better growth of C3 plants used for food by people and animals).

Why would anyone in their right mind want the 300+ years of mild global warming to stop?

The climate history of our planet strongly suggests that if global warming stops, then global cooling will begin. Most people would not like that, with the exception of ski bums.

“Anyone with common sense would reconsider the water vapor positive feedback theory, because without that alleged “global warming tripler”, the average climate model would seem to provide a reasonable guess.”

Further to your point Richard:

The water vapor positive feedback (the computer-simulated “tripler” for temperatures) in the CMIP3/5 GCM outputs is physically manifested as a predicted tropical mid-tropospheric hotspot in the GCM outputs, due to the release of copious latent heat to sensible heat from convective transport/precipitation. This hotspot is a critical prediction of the CMIP3/5 ensemble, as openly recognized by the modelling community.

In the original 2009 US government NCA, the authors (Tom Karl and his ilk) tried to hand-wave away the lack of hotspot observation (by both satellite AMSU data and balloon radiosonde data) by calling the issue “largely resolved” and blaming “uncertainties” in the observational data sets for the missing model-predicted hotspot. A drill-down through the NCA reference for that statement (#71, CCSP – 2006) and the reference’s references of course did not in any way “resolve” the lack of observation of a critical fingerprint prediction of the “multiplier effect” in the CMIP3/5 ensemble temperature projections for climate sensitivity to CO2. It was merely an obfuscation to make the reader think the issue was resolved by claiming the observations are not good enough. And now in 2019, the lack of the tropical mid-tropospheric hotspot in the observational datasets is even more glaring than it was in 2009. Although Tom Karl is now retired, others have taken up the government-run climate disinformation campaign cause at DOE, NOAA, and NASA.

The climate modelling community is a huge jobs program for government “scientists”, computer engineers, software programmers, and modelling team management and leadership positions. Even just paying for the installation and operation of the supercomputer centers needed for these cargo cult climate model ventures is enormous. And it is all done on the taxpayer’s dime.

So there are huge financial incentives in-play to keep the climate GCM scam running for thousands of engineers, managers, and so-called “climate scientists” across many US congressional districts and states. Everyone involved simply has to ignore the huge, unresolved problems that exist within their “rent-seeking” industry that would have caused a private venture to have collapsed long ago as investors would have walked away from such obvious failures.

So the Climate Disinformation Campaign trundles on-wards in a massive bilking of taxpayers to fund the jobs programs for climate modelling community members. Essentially high paid welfare for engineers and “scientists” and the universities that support them and depend on the grants/appropriated funds/money flows that produce overhead revenues.

Thank you Mr. O’Bryan, for understanding the missing science, left-wing politics, and financial incentives that led to a fake “coming climate change crisis” — always coming, but never arriving (even as the actual climate of our planet has been getting better for over 300+ years!)

Ya know, this is the type of scientific “discussion” that is supposed to be taking place in “The Journals”. Why isn’t it?

Because WUWT reaches millions, not hundreds. Besides, the back and forth is educational.

Today has been a real humdinger , very good reading!

They are always good but some are where you read every comment avidly.

Gate keeping.

Consensus enforcement.

Pal reviews.

Politicized science.

Political Power and Money are the powerful forces that keep the Climate Scam going, and they fuel the Climate Disinformation Campaign being waged for public opinion.

For 30 years of the climate scam (1988-2017), the many public opinion/attitudes surveys told the climate scammers their efforts on climate alarmism were not even registering on average folk’s top list of 20 to 30 concerns.

But in the past 2 years, driven by the Trump victory and his wrecking-crew approach to the climate hustle, the climate scammers and their deep-pocketed GreenSlime billionaires have thrown all scientific uncertainty and caution to the wind and gone all-in on funding a disinformation campaign the likes of which the democracies of the First World have never seen. And it is going to get even more shrill and voluminous in the next 13 months.

So the Green Slime and their flying monkeys (think Marcia McNutt, the US NAS President) have been putting the thumb screws to journal editors and staffs, with clear threats to their jobs, should they allow critical debates on climate science issues in their pages. This began in earnest after the Hockey Stick Team’s fraud was exposed ~15 years ago, and their ever-growing web of lies weaves an ever more “tangled web” of reality problems for the climate deceivers.

Bottom Line: They simply cannot allow their Cargo Cult Climate Science to be debunked in the peer-reviewed literature and thus have their gas-lighting attempts on the public opinion to fail.

It’s all about massive Political Power and vast amounts of Money to be transferred from an increasingly impoverished and fleeced middle-class (both in taxes and higher energy costs) to the financial benefit of an elitist class controlling the climate propo efforts.

Joel O’Bryan

You said, “And it is going to get even more shrill and voluminous in the next 13 months.” Yes, because, assuming that Trump is the GOP nominee, the Democrats will be desperate to defeat him. Anything and everything will be used because “The end justifies the means.” If the public can be convinced that AGW is real, they will be less inclined to vote for someone who doesn’t believe it. That is, the propaganda about climate change becomes a tool to help the Democratic nominee to win.

“is supposed to be taking place in “The Journals”. Why isn’t it?”

Because the paper is full of elementary errors, and journals don’t want to waste bandwidth on that sort of stuff, as the 30 or so reviews that Pat catalogued set out convincingly. To summarise, there are at least two basic structural errors:

1. To estimate uncertainty you need to study the process actually producing the numbers – the GCM. Not the result of a curve fitting exercise to the results.

2. You need to clearly establish what the starting data means. The 4 W/m2 is not an uncertainty in a global average; it is a variability of grid values, which would be much diminished in taking a global average.

But there are just glaring errors in the maths. It starts with the curve fitting model, Eq 1. Eq 2 is just a definition of a mean. Eqs 3 and 4 are generic formulae, similar to those in say Vasquez, for mapping uncertainty intervals. They involve correlation terms; no basis for assigning values to those is ever provided. The first equation to which values are assigned is Eq 5.1. It makes no sense. It resembles Eq 1, and is presumably to be derived from it, but where Eq 1 took an initial F₀ and added the sum of changes, F₀+ΣΔFᵢ, Eq 5 takes that initial F₀ and adds only the ith change without the previous ones, F₀+ΔFᵢ. That makes no sense whether the equation is for an increment, as it seems to say, or for a cumulative stage.

2. There is a section 6 in the SI which is supposed to be the justification, especially for the eccentric units used. Eq 6.2 is here. It described the mean simulation error at a grid point from a set of n discrepancies. It forms the sum, but instead of dividing by n, the number of values, it divides by 20 years, the period of observation. Well, that gives the stated units, but it isn’t any kind of mean. If n increased, the “mean” would rise, not because of bigger values, but just because there were more in the sample.

3. When you eventually sort out Eq 5.2 and 6, the result, which gives the uncertainty curves shown in the figures, is, after n years

0.42 (dimensionless) × 33 K × ±4 W m⁻² / (33.3 W m⁻²) × √(n years)

I have marked the units of each quantity. The unit of the results is K sqrt(year). If you use ±4 Wm⁻²/year, as Pat intermittently does, the units are K/sqrt(year). Neither makes much sense, but anyway, they are placed on the plot as if the units were K.
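Setting the units dispute aside, the arithmetic of that expression is easy to check. A minimal numeric sketch, using only the values quoted in this thread (0.42, 33 K, ±4 W m⁻², 33.3 W m⁻²; the function name is mine):

```python
import math

def envelope_k(n_steps, f_co2=0.42, t_k=33.0, err_wm2=4.0, f0_wm2=33.3):
    """Per-step magnitude 0.42 x 33 K x (4 / 33.3), accumulated in
    quadrature over n identical steps, as the disputed formula does."""
    per_step = f_co2 * t_k * err_wm2 / f0_wm2   # about 1.66 per step
    return per_step * math.sqrt(n_steps)

print(round(envelope_k(1), 2))    # 1.66
print(round(envelope_k(100), 2))  # 16.65
```

The √n growth is what produces the roughly ±17 K envelope at 100 steps that the head post describes as "huge excursions".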

“To estimate uncertainty you need to study the process actually producing the numbers – the GCM. ”

Nick, when one actually does open the GCM “black box” to see what goes on inside, one finds a dozen or more fudge factors for quantizing critical water/wv physics processes, governing where and how much of the energy flow gets partitioned in a complex, many-compartmented system.

The modellers seriously do not want that kind of outside scrutiny of their black-box innards and goings on. The clear evidence of that is shown by the decades of close-hold treatment of their secret-sauce tuning formulas, which they discuss only within their close circles. If anything of what they did was actually science, there would be only 3 (maybe 4 at most) super-computer climate modelling programs world-wide. That there are dozens of climate models and teams world-wide, and they meet every few years and mash up their outputs into a combined “ensemble”, should tell any actual scientist that what the climate modellers do is indeed junk science.

Furthermore, they claim they keep their internal tuning strategies close-hold because releasing them “invites skepticism” of their results. As it should, if what they practiced was indeed a scientific endeavour of natural truth discovery. The only conclusion, then, is that they secretly understand what they do is not science, but merely the production of outputs to justify climate policy advocacy.

All that is multiple lines of evidence that climate modelling in today’s GCMs is complete and utter junk science. The entire community is merely a jobs program for computer hardware/software engineers and climate scientists.

“Nick, when one actually does open the GCM “black box” to see what goes on”

My point here is that unless you do look at what a GCM does, you can’t possibly determine the uncertainty of its output. Your criticisms of wv modelling etc are misplaced, but anyway irrelevant. They don’t change the fact that Pat’s uncertainties include large ranges that GCMs could not possibly produce because they conserve energy.

“The modellers seriously do not want that kind of outside scrutiny of their black box innards and goings on.”

They document and publish their code.

Stokes

You yourself have commented on the difficulty of calculating the uncertainty of PDEs. That is compounded when there are non-physical adjustments and parameterizations of unknown validity.

It is a common practice, when confronted with an intractable problem, to simplify it. (e.g. For very small angles, sin θ ≈ θ, so substituting θ for sin θ allows the antiderivative of the function to be calculated!) That is an important insight that Pat has had: all the Black Box machinations can be accurately simulated with a simple function, sans the PDEs! One might say that it is an example of observing Occam’s Razor.
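The small-angle simplification mentioned above is easy to check numerically; a throwaway sketch (the loop values are mine, chosen only to show the error shrinking roughly as θ²/6):

```python
import math

# relative error of the small-angle approximation sin(t) ~ t,
# which shrinks roughly as t^2/6 as t -> 0
for t in (0.1, 0.01, 0.001):
    rel_err = abs(math.sin(t) - t) / math.sin(t)
    print(f"theta={t}: relative error {rel_err:.2e}")
```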

Nick,

You wrote, “They document and publish their code.”

Nice misdirection. Code is not the parameter tuning runs they undertook in the many submodules, and then again in the full-up model. They do not discuss their parameter tuning strategies with outsiders. Nor do they discuss all the “calibration runs” they performed to find sets of parameters (degeneracy) that work to give “expected” CO2 sensitivity.

No climate model output can be replicated by another modeling team. That is for both the reason that Lorenz described (chaotic, nonlinear equations sensitive to initial values) and also because their tuning strategies in the various sub-modules are not published.

They have said so much in the few times in the open literature where they have discussed it. I can’t imagine how they can think what they do is science – i.e. an objective pursuit of natural truth about a physical system.

Seems to me a very basic estimate of the uncertainty in the output of a model is to compare its predictions to observations. For example, if a model predicts an anomaly to be +0.7C, and the observed anomaly is +0.2, you could say for a start that the uncertainty was ±0.5C. Make enough comparisons and you can start closing in on the uncertainty a little better.
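That comparison can be sketched in a few lines; the anomaly values below are invented for illustration, not real model output:

```python
import math

# hypothetical model-predicted vs observed anomalies (deg C)
predicted = [0.7, 0.5, 0.9, 0.3, 0.6]
observed  = [0.2, 0.4, 0.6, 0.4, 0.3]

# root-mean-square of the prediction-minus-observation residuals
residuals = [p - o for p, o in zip(predicted, observed)]
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
print(f"rough uncertainty estimate: +/-{rmse:.2f} C")  # +/-0.30 C
```

More comparisons tighten the estimate, exactly as the comment suggests.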

Nick, I’ve been looking over your 2017 post on the SST mesh on the globe, and the distance between some of those nodes across the Arctic is nearly 2400 km. That’s twice as far as NASA GISS goes. Are you also using ERSST at the pole? NASA GISS says that data is invalid where sea ice is present, and I’m pretty certain that the pole is still covered year-round.

James,

I suggest you try this page, which lets you see the full set of live nodes for each month, along with a triangle mesh which clearly shows the interpolation distances. It shows the SST points too. I don’t use SST nodes where there is sea ice.

“For example, if a model predicts an anomaly to be +0.7C, and the observed anomaly is +0.2, you could say for a start that the uncertainty was ±0.5C.”

GCMs don’t aim to predict individual locations decades ahead, so that would be an irrelevant measure. GCMs predict climate, which is an average over space and time.

“…My point here is that unless you do look at what a GCM does, you can’t possibly determine the uncertainty of its output.”

Utter nonsense. One can look at the range of predictions of the various GCMs and quickly see there is a large uncertainty. They can’t all be right.

Or how about setting them to past starting conditions (say 1900) and letting them run with NO tuning, just initial conditions. I don’t need to see the Bull in order to smell the Bullshi&.

Nick, “They don’t change the fact that Pat’s uncertainties include large ranges that GCMs could not possibly produce because they conserve energy.”

Uncertainty is not error. Nor is an uncertainty range a strict measure of model output.

Uncertainty statistics are not bound by physics.

Another irrelevant argument.

Frenchie77

“Utter nonsense. One can look at the range of predictions of the various GCMs and quickly see there is a large uncertainty.”

That is in fact what they do, with ensemble calculations. That looks at what real GCM calculations do. But if you want to try to do it analytically, you have to analyse what the GCM does. Not what a toy curve fitting model made up by Pat Frank does.

Nick, “Not what a toy curve fitting model made up by Pat Frank does.”

A toy curve fitting model with one degree of freedom that successfully emulates the air temperature projections of advanced climate models. All those PDEs combine out to linearity. Oh, well.

“A toy curve fitting model with one degree of freedom that successfully emulates the air temperature projections of advanced climate models.”

Two dof. From your paper: “The fitted values of fCO2 and a were then entered into equation 1 and the emulation of the air temperature projection for the given GCM was calculated using the standard SRES or RCP forcings” Different parameter values fitted for each GCM.

Nick,

I checked out the link to your WebGL map, and it is an impressive graphic. Where I see a problem with it is that there are many, many places where the distances between the nodes are well over the 1200 km range specified by NASA. Here are a few I looked up using NOAA’s calculator for getting the distance between points of latitude and longitude.

CA002400305 to Sea 80 166: 1777 km

CA002400305 to Sea 82 140: 1690 km

CA002400305 to Sea 82 84: 1648 km

CD000004756 to KE000063612: 2012 km

The first three are from the Arctic and the last is from Africa. Only a few stations across the ice cap in the Antarctic are within 1200 km of another, yet you use them. Do you feel you have a better method than NASA’s, to let you use stations at such distances?
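For anyone who wants to reproduce station-to-station figures like those above without NOAA’s web calculator, a spherical-Earth great-circle formula gets within a fraction of a percent. A generic haversine sketch (the station coordinates themselves are not reproduced here):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two lat/lon points, spherical Earth."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# sanity check: two points 90 degrees apart on the equator
print(round(haversine_km(0.0, 0.0, 0.0, 90.0)))  # 10008
```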

James S,

“Do you feel you have a better method than NASA’s, to let you use stations at such distances?”

Yes. But firstly, it doesn’t matter if stations are more than 1200 km apart. They are 0 km from the nearest reading. The point is that you are calculating all the intermediate points by interpolation. So the question is whether there are points that do not have any data within 1200 km. The longest line you have noted is 2012 km, so the midpoint is 1006 km from the nearest datum. There will be a centroid of the neighboring triangle which will be a little further.

But anyway, 1200 km isn’t a sudden cutoff. In Hansen’s model and others, correlation dies away exponentially, and I think 1200 km is a “half-life”. The main point is that you are calculating a global average, and should always use the best estimate for each point, ie the nearest points, even if they were more than 1200 km. It’s an average over the whole Earth, so a weakness in Africa is regrettable, but not a disaster.

A lot of people have looked at this issue and calculated the uncertainty due to coverage (or lack of it). The figure often quoted of ±0.05°C for an annual average is mostly due to that. I did my own test here. I removed stations in a randomised way, and calculated using a reduced mesh. I did it many times. I could get down to about 500 nodes (almost a tenth) before the error range (variation) got up to ±0.05°C.
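If the correlation fall-off really were exponential with 1200 km as the half-distance (the functional form is the one floated in this sub-thread, and it is disputed; the code is an illustration only), it would look like:

```python
def correlation_at(distance_km, half_km=1200.0):
    """Correlation decaying exponentially, reaching 0.5 at half_km."""
    return 0.5 ** (distance_km / half_km)

print(correlation_at(0))      # 1.0
print(correlation_at(1200))   # 0.5
print(correlation_at(2400))   # 0.25
```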

Nick,

This is from NASA’s GISS sources page:

The 1200 km figure is from the center of a grid cell, not the center point between two stations. It’s also not a “half-life”, and it doesn’t fall off exponentially, it’s linear.

To be fair, it’s obvious you’re not using NASA’s method, but one of your own design. I don’t like the business of interpolating “data” between stations; don’t see why you need it if you have stations within your now-2400 km radius.

The sad truth of this exercise is that the average global anomaly is the sine qua non of the alarmist universe. Without a number that can be pointed to as a signpost, the entire movement falters. That explains the sketchy statistical gymnastics that produce average global anomalies which are very consistent with one another. But the Lord of the Rings epic also had a very consistent internal structure, and no one ever mistook it for reality.

James

“The 1200 km figure is from the center of a grid cell, not the center point between two stations. It’s also not a “half-life”, and it doesn’t fall off exponentially, it’s linear.”

The grid cell is appropriate if you use regular grids, but my mesh connects nodes. They have used a linear drop-off in weighting, but that doesn’t mean the correlation behaves that way. In fact 1200 is the half point, from Hansen and Lebedeff 1987: “the correlations fall below 0.5 at a station separation of about 1200 km, on the average”. However, they don’t explicitly say it is exponential.

“I don’t like the business of interpolating “data” between stations”

Everything is interpolated. You have just a few thousand thermometers; everything between has to be inferred. That is how it is in most of science. We only know from samples.

I’m just about to post another integration method – best yet, I think.

Nick, “Two dof. … Different parameter values fitted for each GCM.”

“a” takes a value only when the projection is not an anomaly. Then it’s just an offset.

One could just as easily subtract the starting temperature and produce the anomaly series before finding f_CO2.

The curvature of an emulation comes directly from the standard forcing. The fitted f_CO2 supplies the intensity. One degree of freedom.
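The “one degree of freedom” claim is easy to illustrate: with the anomaly formed first, only a single scale factor remains to fit. A hedged sketch (the emulator form linear in fractional forcing is paraphrased from this thread; the forcing fractions and “GCM” anomalies below are invented):

```python
# Fit one scale factor f to an emulator linear in fractional forcing,
# dT_i = f * 33 K * (dF_i / F0). Data are hypothetical, for illustration.
def fit_f(forcing_fracs, gcm_anomalies, t33=33.0):
    # least-squares slope through the origin: f = sum(x*y) / sum(x*x)
    xs = [t33 * frac for frac in forcing_fracs]
    return sum(x * y for x, y in zip(xs, gcm_anomalies)) / sum(x * x for x in xs)

fracs = [0.01, 0.02, 0.03, 0.04]   # hypothetical cumulative dF/F0
anoms = [0.14, 0.28, 0.41, 0.55]   # hypothetical GCM anomalies, K
print(round(fit_f(fracs, anoms), 2))  # 0.42
```

Forcing the fit through the origin is what makes the offset “a” disappear when the projection is already an anomaly.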

Nick,

Where did you get that idea? Interpolated points are not data, they are predictions, dependent upon your function. You can’t take two measurements 2000 km apart, run a weighting function against them, and then use the result as though it were the same as actual measurements from those locations. All you’re doing is manufacturing a larger N to get the uncertainty in the mean to the “correct” value, which apparently is 0.05.

“Science” uses interpolations to make predictions, which are then compared to real measurements to determine the veracity of the function. You can’t perform that vital step. You can’t possibly know if that temperature 400 km away matches your function, so it’s useless.

It doesn’t matter how consistent your results are, because you have no way to determine the accuracy of the result.

Nick, I’m a lay person trying to follow this argument, and had a few ELI5 questions that maybe you can shed light on:

Is the original uncertainty range of +/- 4 W/m² referring to estimates of the forcing effects of clouds? Or a change in cloud cover? My concept of “forcing” is a new condition in the system that causes an energy imbalance until temperature changes to bring the system back into balance. How could clouds change enough in a short period of time to generate changes in forcing around the magnitude of 4 W/m²?

And are you suggesting that, since these are uncertain estimates for many grid points all over the globe, there will be a tendency for the errors to cancel out as the number of grid cells increases?

It seems like one confusion happening is that this original value, the uncertainty range, is saying that, here’s our estimate, and within some probability the actual value will be in the uncertainty range of +/-4…meaning that there’s a very good chance that real values are in the uncertainty range.

It’s unclear to me what the concept is that Frank is trying to communicate by saying the uncertainty range is outside possible real values.

There are three concepts that seem related, but I can’t follow the semantic distinctions as they’ve been argued in the various discussions I’ve read of this paper: error, certainty, and precision.

“Is the original uncertainty range of +/- 4w/meter squared referring to estimates of the forcing effects of clouds? Or a change in cloud cover?”

What Lauer did was to compare observed values of cloud cover to those predicted by GCMs at various grid points around the world, at various points in time over a 20 year period. He found a correlation of 0.93. That is, of all pairings over all years. He then translated that, via a Taylor diagram, to an rmse of 4 W m⁻². It isn’t a change, annual or otherwise; it is a re-expression of a failure of correlation. There is nothing about it that makes it sensible to talk of per year or per day.
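The Taylor-diagram translation described here follows the standard “law of cosines” relation between standard deviations, correlation, and centered RMS error. A generic sketch (the σ values below are invented for illustration, not Lauer’s):

```python
import math

def centered_rmse(sigma_model, sigma_obs, corr):
    """Taylor diagram relation: E'^2 = s_m^2 + s_o^2 - 2*s_m*s_o*R."""
    return math.sqrt(sigma_model ** 2 + sigma_obs ** 2
                     - 2.0 * sigma_model * sigma_obs * corr)

# with hypothetical spatial standard deviations of 10 W/m^2 each,
# a correlation of 0.93 maps to an rmse of about 3.7 W/m^2
print(round(centered_rmse(10.0, 10.0, 0.93), 2))  # 3.74
```

The point of the sketch is only that a correlation and two spreads jointly determine an rmse, which is why the rmse is a re-expression of the correlation rather than a per-year quantity.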

…perhaps other than the fact that is was calculated using 20 multimodel ANNUAL means…

Nick, “It isn’t a change, annual or otherwise; it is a re-expression of a failure of correlation.”

No it isn’t.

Let’s let Lauer and Hamilton tell us what they did: “A measure of the performance of the CMIP model ensemble in reproducing observed mean cloud properties is obtained by calculating the differences in modeled (x_mod) and observed (x_obs) 20-yr means. These differences are then averaged over all N models in the CMIP3 or CMIP5 ensemble to calculate the multimodel ensemble mean bias… The overall comparisons of the annual mean cloud properties with observations are summarized for individual models and for the ensemble means by the Taylor diagrams for CA, LWP, SCF, and LCF shown in Fig. 3. These give the standard deviation and linear correlation with satellite observations of the total spatial variability calculated from 20-yr annual means. … In this polar coordinate system, the linear distance between the observations and each model is proportional to the root-mean-square error (rmse)… The linear correlation coefficients for total CA among the individual CMIP3 models range from 0.12 to 0.87 …”

Extracting: the standard deviation of the total spatial variability calculated from 20-yr annual means.

L&H Figure 2 shows the 20-year mean annual error for the CMIP3 and CMIP5 models for the various cloud properties. The total spatial variability was calculated as the rmse of those errors.

The correlations are how well the 20-year mean total spatial variability of the simulated cloud cover tracks against the observed cloud cover, for each model.

RMS error and linear correlations are calculated separately and differently.

The calibration error is the rmse of the 20-year mean of total spatial differences between simulation and observations.

Note that L&H write, “standard deviation and linear correlation”; not ‘standard deviation of linear correlation.’ John Q Public is exactly right.

Pat

“L&H Figure 2 shows the 20-year mean annual error for the CMIP3 and CMIP5 models for the various cloud properties.”

They show a single figure for the 20 years. Not 20 averages. And they show it per grid point. Not as a global average, which is how you use it. You have bolded his descriptor – total spatial variability. But you have used it as a global average time variability.

If I drive 100,000 miles over a twenty year period it is reasonable to say I have driven an average of 5,000 miles per year. That *is* converting a spatial value into a time related value.

And you are saying that doing so is not reasonable?

ROFL!!

Nick, “You have bolded his descriptor – total spatial variability.”

Which is a global average.

Nick, “But you have used it as a global average time variability.”

Very nice misdirection. I have used it for what it is, a global simulation error average from 20 years of simulation.

Not what you suppose at all.

“Which is a global average”

It is a global average of spatial variability.

Not time variability of a global average.

Nick, “Not time variability of a global average.”

No. It’s an annual average of simulation error. You’re couching your argument in a straw man.

I appreciate the opportunity to confront Nick Stokes’ central thesis.

Thanks for the comprehensive essay, Nick. You’ve provided a real opportunity to set aside your objections for good.

Point-by-point.

Nick, “Because the paper is full of elementary errors.”

Unsubstantiated. Vacantly asserted.

Nick, “… and journals don’t want to waste bandwidth on that sort of stuff, as the 30 or so reviews that Pat catalogued set out convincingly.”

Reviewers who:

1) did not know to distinguish accuracy from precision;

2) did not understand the difference between an energy flux and a statistic;

3) asserted that a (+/-) uncertainty implies model oscillation;

4) asserted that a (+/-) temperature uncertainty is a physical temperature;

5) knew nothing of error propagation;

6) knew nothing of physical error analysis.

Like you, Nick. You evidence no understanding of any of that, either.

I have documented the extraordinary incompetence of my prior reviewers here and again here.

I’m not surprised you’d suppose their errors are convincing. You’d likely assert that even if you knew they were wrong. Which you probably do not.

Nick, “To summarise, there are at least two basic structural errors: 1. To estimate uncertainty you need to study the process actually producing the numbers – the GCM. Not the result of a curve fitting exercise to the results.”

An equation linear in fractional forcing, with one degree of freedom, is able to reproduce the air temperature projections of advanced climate models running on supercomputers.

That really frosts you, doesn’t it, Nick.

But that demonstration is definitive with respect to the behavior of GCMs.

GCMs are linear in output. Linear propagation of output uncertainty follows.

The explanation is really simple, Nick. GCMs can’t resolve the cloud response to GHG forcing. Simulated tropospheric thermal energy flux is consequently wrong. Their projected air temperature response to GHG forcing is necessarily meaningless.

Nick, “2. You need to clearly establish what the starting data means. The 4 W/m2 is not an uncertainty in a global average; it is a variability of grid values, which would be much diminished in taking a global average.”

Here’s what Lauer and Hamilton say they did (page 3833):

“The overall comparisons of the annual mean cloud properties with observations are summarized for individual models and for the ensemble means by the Taylor diagrams for CA, LWP, SCF, and LCF shown in Fig. 3. These give the standard deviation and linear correlation with satellite observations of the total spatial variability calculated from 20-yr annual means.” (my bold)

Lauer and Hamilton comment,

“In both CMIP3 and CMIP5, the large intermodel spread and biases in CA and LWP contrast strikingly with a much smaller spread and better agreement of global average SCF and LCF with observations. The SCF and LCF directly affect the global mean radiative balance of the earth, so it is reasonable to suppose that modelers have focused on ‘‘tuning’’ their results to reproduce aspects of SCF and LCF as the global energy balance is of crucial importance for long climate integrations.” (my bold)

Not grid-point variability. Total spatial variability.

I also confirmed the Lauer and Hamilton LWCF calibration error with estimates from other sources.

Nick, “But there are just glaring errors in the maths. It starts with the curve fitting model, Eq 1.”

There’s no error in eqn. 1.

“Eq 2 is just a definition of a mean.”

Eqn. 2 is from Lauer and Hamilton. It’s how they calculated their annual mean calibration error.

Nick, “Eqs 3 and 4 are generic formulae, similar to those in say Vasquez, for mapping uncertainty intervals.”

True. Where’s the error?

And very nice of you to concede uncertainty, Nick. Everyone will notice that eqns. 5.1, 5.2 and 6 all follow from eqns. 3 and 4, which you have conceded represent uncertainty.

Please continue, and explain to us how the values derived from eqns. 3 and 4 are bounded by physics and conservation laws.

Nick, “They involve correlation terms; no basis for assigning values to those is ever provided.”

The correlation terms are zero. There’s no correlation with an empirical calibration error statistic. There’s no need to point that out to anyone who understands calibration error analysis.

You already admitted that in an earlier comment here. Very nice that you’ve conveniently revived it.
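For readers following the eqns. 3 and 4 dispute: the generic propagation formulae reduce to a plain quadrature sum when the correlation terms are zero. A generic sketch of that reduction (function and values mine, not from the paper):

```python
import math

def propagate(sigmas, corr=0.0):
    """Root-sum-square of per-step uncertainties plus pairwise
    correlation terms; corr=0 reduces to plain quadrature."""
    var = sum(s * s for s in sigmas)
    var += 2.0 * corr * sum(sigmas[i] * sigmas[j]
                            for i in range(len(sigmas))
                            for j in range(i + 1, len(sigmas)))
    return math.sqrt(var)

print(propagate([3.0, 4.0]))            # 5.0  (uncorrelated)
print(propagate([3.0, 4.0], corr=1.0))  # 7.0  (fully correlated: 3 + 4)
print(propagate([4.0] * 100))           # 40.0 (sqrt(n) growth)
```

With zero correlation, n identical per-step uncertainties accumulate as √n, which is the behavior the two sides of this thread are arguing about.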

Nick, “The first equation to which values are assigned is Eq 5.1.”

Values are assigned to eqn. 1. The 33 K is assigned on page 2 and the 0.42 value for f_CO2 on page 3, column 1. On page 3, column 2, the values for F₀ are described. The SRES and RCP forcings for ΔFᵢ are given where they first appear on pp. 3 and 5. How did you miss that? Or did you.

Nick, “It makes no sense.”

It makes sense to anyone who reads fairly, Nick.

Nick, “It resembles Eq 1, and is presumably to be derived from it, …”

Presumably, indeed. Here’s the paper: “The CMIP5 average annual LWCF (+/-)4.0 Wm⁻² year⁻¹ calibration thermal flux error is now combined with the thermal flux due to GHG emissions in emulation equation 1, to produce equation 5.”

Clear as a bell to any fair reader, Nick. There’s no presumably about it.

Nick, “… but where Eq 1 took an initial F₀ and added the sum of changes: F₀+ΣΔFᵢ, Eq 5 takes that initial F₀ and adds the ith change without the previous ones F₀+ΔFᵢ. That makes no sense whether the equation is for an increment, as it seems to say, or for a cumulative stage.”

Wrong yet again, Nick. The paper, one sentence on: “In equation 5 the step-wise GHG forcing term, ΔFᵢ, is conditioned by the uncertainty in thermal flux in every step due to the continual imposition of LWCF thermal flux calibration error.”

Below eqn. 5.2, is, “In equations 5, F₀ + ΔFᵢ represents the tropospheric GHG thermal forcing at simulation step ‘i.’ The thermal impact of F₀ + ΔFᵢ is conditioned by the uncertainty in atmospheric thermal energy flux.”

Eqn. 1 is the general emulation equation. Eqns. 5.1 and 5.2 show how this is used in a step-wise uncertainty analysis.

I know you understand all that, Nick. But you just can’t help manufacturing false confusion, can you.

Nick, “2. There is a section 6 in the SI which is supposed to be the justification, especially for the eccentric units used.”

For those new readers, this is Nick’s introduction to not understanding that averages have denominators.

Nick, “Eq 6.2 is here. It described the mean simulation error at a grid point from a set of n discrepancies. It forms the sum, but instead of dividing by n, the number of values, it divides by 20 years, the period of observation. Well, that gives the stated units, but it isn’t any kind of mean. If n increased, the ‘mean’ would rise, not because of bigger values, but just because there were more in the sample.”

Except you don’t know how many are positive, how many are negative, or their magnitudes, Nick. Adding more to “n” may make the average smaller. Most likely, given the regional correlation of the data, adding more points to an already good sampling would not change the average much.

Nick, “3. When you eventually sort out Eq 5.2 and 6, the result, which gives the uncertainty curves shown in the figures, is, after n years 0.42 (dimensionless)*33K *±4 Wm⁻² /(33.3 Wm⁻²) * sqrt(n years)”

You actually gave up on this objection in another comment, Nick. And I quote, “So now you want to say that the units of 4Wm⁻² are actually 4Wm⁻²year⁻¹, but whenever you want to use it, you have to multiply by year. Well, totally weird, but can probably be made to work.”

But apparently you can’t leave a convenient misdirection unexploited. That’s the second objection you’ve resurrected from the dead.

For everyone reading, here’s what Nick pretends he doesn’t get. Gird yourself, because it’s really complicated.

Right side of eqn. 5.2, for each year time step:

First annual step: (+/-)[(0.42 x 33 K x 4 W m⁻² year⁻¹)/F₀] x year_1 = (+/-)[(0.42 x 33 K x 4 W m⁻²)/F₀]_1 = (+/-)u_1

Second annual step: (+/-)[(0.42 x 33 K x 4 W m⁻² year⁻¹)/F₀] x year_2 = (+/-)[(0.42 x 33 K x 4 W m⁻²)/F₀]_2 = (+/-)u_2

…

ith annual step: (+/-)[(0.42 x 33 K x 4 W m⁻² year⁻¹)/F₀] x year_i = (+/-)[(0.42 x 33 K x 4 W m⁻²)/F₀]_i = (+/-)u_i

…

nth annual step: (+/-)[(0.42 x 33 K x 4 W m⁻² year⁻¹)/F₀] x year_n = (+/-)[(0.42 x 33 K x 4 W m⁻²)/F₀]_n = (+/-)u_n

where “(+/-)u” is uncertainty.

That’s what Nick says he does not understand. After a lifetime of doing math.

Eqn. 6 then propagates the u_i as the root-sum-square (rss): (+/-)sqrt[sum over i =(1->n) of (u_i)^2]

Eqn. 6 is the rss expression of eqns 3 and 4 that Nick already agreed provide the uncertainty interval.
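As a minimal sketch (mine, not from the paper; variable names are illustrative), the per-step uncertainty above and its eqn. 6 root-sum-square propagation can be written as:

```python
import math

# Values as used in the worked steps above (F0 = 33.3 W/m^2)
f_co2 = 0.42   # dimensionless greenhouse fraction
dT_gh = 33.0   # K, net greenhouse temperature effect
lwcf = 4.0     # +/- W/m^2 year^-1, LWCF calibration error (Lauer & Hamilton)
F0 = 33.3      # W/m^2

# Each annual step i contributes the same uncertainty magnitude, in K
u_step = f_co2 * dT_gh * lwcf / F0   # about +/-1.66 K per step

def propagated(n_years: int) -> float:
    """Eqn. 6: root-sum-square of the per-step uncertainties u_i."""
    return math.sqrt(sum(u_step ** 2 for _ in range(n_years)))

# Because all u_i are equal here, this collapses to u_step * sqrt(n)
```

With these inputs, a century of annual steps gives an envelope of roughly ±16.6 K, which is the order of the uncertainty bounds discussed in the debate.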

Nick, “I have marked the units of each quantity. The unit of the results is K sqrt(year).”

Wrong again, Nick.

Nick, “If you use ±4 Wm⁻²/year, as Pat intermittently does, the units are K/sqrt(year).”

No they clearly are not. At least, not to anyone but you.

F₀ is in W/m^2. “K” is the only unit that doesn’t cancel out.

Not one of my reviewers, Nick, not even the ones you like so much because they misunderstand physical error analysis in the same way you misunderstand it, raised so nonsensical an objection as yours.

Nick, “Neither makes much sense, but anyway, they are placed on the plot as if the units were K.”

It makes no sense only if one is not sensible. The units of uncertainty follow perfectly and are (+/-)K.

Point-by-point, Nick. Not one of your objections survives examination.

Pat, “The correlation terms are zero. There’s no correlation with an empirical calibration error statistic.”

So you assert. But you give no rational argument. The 4 Wm⁻² is not in any case a calibration statistic. It is derived from a correlation between grid point observations and calculated values. It is actually an overall average, over 20 years. But you combine them as if they represented uncorrelated random variables from year to year.

“… is now combined with the thermal flux due to GHG emissions in emulation equation 1, to produce equation 5. Clear as a bell to any fair reader, Nick.”

An interested reader would want to see that mathematics set out, step by step. I don’t believe that you can derive Eq 5.1 in that way from Eq 1. But if you can, let’s see it.

“F₀ + ΔFᵢ represents the tropospheric GHG thermal forcing at simulation step ‘i.’”

But it doesn’t. F₀ is the forcing at step 0. ΔFᵢ is the change in going from step i-1 to step i. The forcing at step i is actually F₀ + ΣΔFᵢ, as it says in Eq 1. Initial plus the sum of changes. Not the initial plus the most recent step.

“Nick’s introduction to not understanding that averages have denominators”

Of course averages have denominators. You add the things, and then divide by the number of things. Not by something else that you might have thought of.

Your formula S6.2 for average is

Σeₖ/(20 years)

I’ve changed the suffix g to k. The sum is from 1 to n, where n is the number of simulation-observation pairs. S6.2 has the indices mixed up, saying that it is summing over gridpoints g. But whatever – it has to be summed over those pairs. But you don’t divide by n, you divide by 20 years, because observations were taken over that period.

To see how wrong this is, suppose the eₖ were all the same, equal to c cloud-cover units, say. Then any sensible maths would say the average is c. But this formula says the average is c*n/(20 years).
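The two averaging conventions being argued over can be made concrete with a toy example (hypothetical numbers, mine, not from the paper): with n identical discrepancies of c units observed over a 20-year period, dividing by n gives c, while dividing by 20 years gives n*c/20.

```python
# n identical simulation-observation discrepancies of c cloud-cover units,
# taken from a 20-year observation period
n = 100
c = 2.0
errors = [c] * n

per_pair_mean = sum(errors) / n     # divide by the number of values: gives c
annual_rate = sum(errors) / 20.0    # divide by the 20-year period: gives n*c/20
```

Nick’s position is that only the first quantity is a mean of the sample; Pat’s position is that the second is the annual average rate. The toy numbers show the two differ by a factor of n/20.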

“Except you don’t know how many are positive, how many are negative, or their magnitudes, Nick. Adding more to ‘n’ may make the average smaller.”

Well, it might. Or not. That is a pathetic excuse for a wrong formula. If you average samples from a population, the result isn’t supposed to depend on the number in the sample. That is why you divide by n. Gosh, this stuff is elementary.

“That’s what Nick says he does not understand.”

No, I understand it very well. You say “Gird yourself because it’s really complicated.”, but in fact, those terms are all the same. And so what your analysis boils down to is, as I said: 0.42 (dimensionless)*33K *±4 Wm⁻² /(33.3 Wm⁻²) * sqrt(n years)

and people here can sort out dimensions. They come to K*sqrt(year).

“Not one of them raised so nonsensical an objection as yours.”

It’s actually one of Roy’s objections (“it will produce wildly different results depending upon the length of the assumed time step”). The units are K*sqrt(time). You have taken unit of time as year. If you take it as month, you get numbers sqrt(12) larger. But in any case they are not units of K, and can’t be treated as uncertainty of something in K.
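The time-step objection can be sketched numerically (illustrative numbers, mine). Assuming, and this is exactly the contested assumption, that the same per-step uncertainty magnitude applies at either step size, root-sum-square accumulation over the same span grows by sqrt(12) when annual steps are replaced by monthly ones:

```python
import math

u = 1.66      # per-step uncertainty in K (illustrative value)
years = 100

annual_total = u * math.sqrt(years)        # 100 annual steps
monthly_total = u * math.sqrt(years * 12)  # 1200 monthly steps

ratio = monthly_total / annual_total       # sqrt(12), about 3.46
```

Pat’s counter, made elsewhere in the thread, is that the per-step calibration uncertainty itself scales with the length of the time step, so u would not be the same at both step sizes.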

Nick, “So you assert. But you give no rational argument.”

You already agreed to the rational argument, Nick (https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/#comment-2790356). The calibration error is an empirically derived constant. It doesn’t vary. It cannot covary. Correlation is necessarily zero. I’ve pointed that out here and here.

Nick, “The 4 Wm⁻² is not in any case a calibration statistic. It is derived from a correlation between grid point observations and calculated values.”

It’s calculated separately from an average (simulation minus observation) error.

Nick, “But you combine them as if they represented uncorrelated random variables from year to year.”

I’ve combined nothing. The calibration error metric is taken directly from Lauer and Hamilton.

Nick, “An interested reader would want to see that mathematics [of eqn. 5] set out, step by step. I don’t believe that you can derive Eq 5.1 in that way from Eq 1. But if you can, let’s see it.”

Eqn. 5 is a single step of the eqn. 1 stepwise summation of steps. There’s no mystery.

Nick, “But it doesn’t. F₀ is the forcing at step 0. ΔFᵢ is the change in going from step i-1 to step i. The forcing at step i is actually F₀ + ΣΔFᵢ, as it says in Eq 1.”

You’ve misrepresented eqn. 5. It is a generalized single step. The forcing at any step ‘i’ is ΔFᵢ.

Nick, “To see how wrong this [eqn. S6.2] is, suppose the e_k were all the same, equal to c cloud-cover units, say. Then any sensible maths would say the average is c.”

No, it would not. Your “c” is not an annual average. It’s the per-subtraction average. The calculation is the annual average error, not the per-subtraction error. In 20 years of error, the annual average is n*c/20.

Nick, “But this formula says the average is c*n/(20 years).”

Which is correct.

Nick, “That is a pathetic excuse for a wrong formula.”

Rather, yours is a pathetic excuse for an objection. You assert a specific result, where no result is known.

Nick, “If you average samples from a population, the result isn’t supposed to depend on the number in the sample.”

No. It depends on the magnitudes of the samples. Add in samples of lesser magnitude than the average, and the extended average is reduced.

Nick, “That is why you divide by n. Gosh, this stuff is elementary.”

And you don’t get it. After a lifetime doing math.

Nick, “No, I understand it very well. You say ‘Gird yourself because it’s really complicated.’, but in fact, those terms are all the same. And so what your analysis boils down to is, as I said:”

No, it doesn’t.

Nick, “0.42 (dimensionless)*33K *±4 Wm⁻² /(33.3 Wm⁻²) * sqrt(n years)”

Wrong again, Nick.

Here again is how it works, for the ith term: [(0.42 * 33 K * (+/-)4 W/m^2 year^-1)/(33.3 W/m^2)] * year_i.

= [(0.42 * 33 K * (+/-)4 W/m^2)/33.3 W/m^2]_i = (+/-)K_i

The (+/-)K_i = the (+/-)u_i that goes into eqn. 6.

Nick, “and people here can sort out dimensions. They come to K*sqrt(year).”

Everyone here can sort out dimensions except, evidently, you. You seem to have forgotten that you once knew how.

And as you previously agreed that the method is correct, we can be charitable and surmise you’re fatally forgetful.

Nick, “It’s actually one of Roy’s objections (‘it will produce wildly different results depending upon the length of the assumed time step’).”

Roy wasn’t one of my reviewers. And his objection wasn’t about dimensions. You’re being misleading. Again.

Nick, “The units are K*sqrt(time).”

Wrong. As demonstrated above. And, even worse, you’re contradicting yourself.

Nick, “You have taken unit of time as year.”

No. L&H provided the calibration error time interval. I agree that it was convenient.

Nick, “If you take it as month, you get numbers sqrt(12) larger.”

No. The size of the calibration uncertainty varies with the length of the simulation time-step.

Nick, “But in any case they are not units of K, …”

Yes, they are, as demonstrated above.

Nick, “… and can’t be treated as uncertainty of something in K.”

Yes, they can be and they are, as demonstrated above.

Your arguments are wrong throughout, Nick.

And you’re reduced to repeating failed arguments and contradicting yourself.

“That is in fact what they do, with ensemble calculations. That looks at what real GCM calculations do. But if you want to try to do it analytically, you have to analyse what the GCM does. Not what a toy curve fitting model made up by Pat Frank does.”

I see that the calculation of uncertainty is complex or perhaps intractable. Having criticised Pat Frank’s attempt to estimate uncertainty of GCMs it would be very helpful if Nick Stokes could provide even a ‘back of envelope’ estimation of (any) GCM uncertainty.

I assume this would clarify the uncertainty in GCMs, having Pat Frank and Nick Stokes estimates to compare.

KNMI gives a collection of runs, including ensemble runs, here. There is a lot of data. Ensembles are tighter than collections of different models, but the spaghetti plots of different models give a reasonable idea.

Ensembles of runs, each with an uncertainty interval, do not cancel out the uncertainty.

A conglomeration of wrongs doesn’t all of a sudden become right just because it is part of a collection.

Dorothy, please see my comment below which uses your question as a starting point for thinking about how a concept for rigorously defining the uncertainty of the climate models (a.k.a. the GCMs) might be developed and systematized.

https://wattsupwiththat.com/2019/10/15/why-roy-spencers-criticism-is-wrong/#comment-2824001

So no math model can ever be correct. Got it.

Nope, but if you have a lot of uncertainty…well…then yes 😉

My take is that uncertainty is ok, but it propagates. Therefore it’s ok, but if you then use those results in the next calculation, and have the same uncertainty, and keep doing that hundreds of times, the results are pretty useless. This is true even if the original uncertainty is not huge.

“So no math model can ever be correct. Got it.”

You’ve got the same problem Nick has. Uncertainty is not a measure of correctness. You are confusing uncertainty with error. They are not the same.

A model can certainly produce the correct answer. The question is how do you know if it is correct? You have to eliminate as much uncertainty as possible in any model. If you build in a small uncertainty that adds iteration after iteration even an initial small uncertainty can get quite large.

A math model can be correct; the question is how can you be certain that it is correct? The only way is to compare prediction to observation (which, to date, the models have spectacularly failed to match), but how can you be certain *before* the comparative observations are made? You can’t; you can only be uncertain. Uncertain doesn’t necessarily mean wrong.

Ever see a “storm track” on the news? Notice how the track’s path starts at a single point, and as time moves forward the cone of the possible path (aka the cone of uncertainty) gets wider the further away from that initial point you get. That’s uncertainty in action. It doesn’t mean the storm track is “wrong”, just that you can’t be certain of the path within the cone of uncertainty.

Think of it this way: whatever you are modeling, the more sets of choices there are, the more uncertain you can be about the model picking the correct choices, because each subsequent set of choices has the uncertainty of all the previous sets of choices baked in as you move forward in time. If you are 10% uncertain at the first set of choices your model has to make, your second set of choices can’t be any less than 10% uncertain, because it relies on the first set of choices having been correct.
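That compounding can be sketched with made-up numbers (mine, purely illustrative): if each stage is 90% certain and depends on every earlier stage having been right, confidence in the whole chain shrinks multiplicatively:

```python
p_stage = 0.9   # 90% certain (10% uncertain) at each set of choices

def chain_confidence(k: int) -> float:
    """Probability that every one of k dependent stages is correct."""
    return p_stage ** k

# After ten dependent stages, confidence in the chain has fallen below 35%,
# even though each individual stage was 90% certain.
```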

John Endicott

“If you are 10% uncertain at the first set of choices your model has to make, Your second set of choices can’t be any less than 10% uncertain because it relies on the first set of choices having been correct.”

+1

Let me sum up this debate in two words.

Models Suck!

” TOA balance”

Earth is not a system at equilibrium. Earth is a dynamical system, and as such the TOA balance assumption is nonsense. It would require instantaneous (as in faster-than-light) thermodynamics and some strange physical law to maintain that inexistent balance. Or some sentient beings with incredible computing power at each point, talking instantaneously to each other to adjust in order to achieve that inexistent balance.

Or some climastrological religious crap like that.

Ex falso, quodlibet.

Pat Frank, thank you for the essay.

I hope you like it, Matthew.

Has Patrick Frank written on the supposed (natural) atmospheric greenhouse effect too?

e.g. on a supposed surface earth of -18°C radiating into a colder(!) atmosphere and this atmosphere radiating back an amount equal to 33°C (or newer versions 32°C up to 14°C or a version -19°C +33°C= 14°C) thus leading the warmer earth surface to warm up even more to 15°C?

Folks, do you know what all this is to me?

It is to me that the general public has been convinced of 2+2=5.

“For example, we can estimate the average per-day uncertainty from the ±4 Wm⁻² annual average calibration of Lauer and Hamilton.”

More nutty units. The 4 Wm⁻² is just an annual average because it is averaged over a full year (being seasonal). London has an annual average temperature of 15°C. That doesn’t mean an average per-day of 15/365 = 0.04°C.

But I see here that it doesn’t even work like that. The ±4 Wm⁻² was described in the paper (intermittently but rather emphatically) as ±4 Wm⁻²/year. That would give a basis for conversion to 4/365 = 0.011 Wm⁻²/day. But the arithmetic here converts as 4/sqrt(365) = 0.21 Wm⁻²/day. How does that make sense?

Nick, this is the rudest and most deliberately obtuse comment that I have ever seen you make.

Why do you do it?

Lighten up.

Get a life.

Do you have any answers?

Nick Stokes: “Do you have any answers?”

Not on this point Nick, because I don’t understand your example:

“London has an annual average temperature of 15°C. That doesn’t mean an average per-day of 15/365 = 0.04°C.”

But I do have a question. How was your 15 degrees Centigrade calculated? Please carefully describe the input data and the method of calculation. Thank you.

“How was your 15 degrees Centigrade calculated?”

I looked it up in Wikipedia. Sorry, it is the average max. The number is the average of the twelve months. Each month is the average of the daily maxes for the month. You can just average the days of the year, if you like; same result (with a minor correction for the varying days in month).

Places have average temperatures. It is not an exotic concept. And dividing an annual average by 365 (or anything else) to get an average per day makes no sense at all.

“…I looked it up in Wikipedia. Sorry, it is the average max…”

No need for apologies. That was just such a challenging task.

Well it’s not an exotic concept Nick, but even so you managed to mistakenly describe the annual average of maximum daily temperatures in London as an annual average temperature. Following that, you then introduce an arbitrary calculation by dividing this number by 365.

Please can you carefully explain your exact point and its relevance to the paper under discussion? At the moment I’m afraid you appear to be intentionally constructing an unrelated and incoherent arguing point with the intention to confuse rather than clarify.

Thank you.

“Please can you carefully explain your exact point…”

Pat insists that because Lauer described his grid RMS obs-GCM discrepancy as an annual average, therefore he can accumulate it year by year. Roy correctly noted that it is just an average, and accumulating by year is arbitrary; you could equally accumulate it by month, say, getting a much larger result. Day even more so. Pat says no, the daily average would be reduced by a factor sqrt(365) (why sqrt?).

I point out that an annual average temperature is a familiar idea, and while a daily average might vary seasonally, you don’t get it by dividing by 365, or even sqrt(365). And it is equally wrong to do it with Lauer’s 4 W/m2 measure of cloud discrepancy.

He went to Wikipedia and grabbed the average MAX.

I’m sure that was purely coincidental.

Wow, after reading all this I thought that Nick was at least reasonable (while wrong) to a degree to this point… but average temperature based on daily average max temperature?

Apparently, in addition to not understanding uncertainty, Nick doesn’t understand resolution. Nick has really run off the rails here into some ridiculous, unreasonable nonsense.

I wonder if Nick would think a stock market analyst was reasonable in calculating the “average” price of a stock based on its average daily high price. If so… I’ve got a lot of stocks to sell him.

Nick, after seeing your most recent response post, I understand what you think your point is and that you aren’t actually claiming to have calculated the average temperature.

However, we understand the physics of seasonal flux (to a degree) and the variations in temperatures of a specific location due to energy flux from the sun over the seasons; so the analogy is a bit misleading. No one would claim any kind of lower resolution prediction based on yearly average because we would all agree the physics of a yearly average do not allow that kind of prediction (nor subsequent calculation within that year based on a mid year figure).

Which brings us back to the uncertainty issues addressed in the original post. So I guess I’m missing your point to a degree. The reliability of physics (both certainty and resolution) within the model remains the primary issue you seem to want to talk around. Overall model certainty relies on resolution propagation.

Shaun,

“So I guess I’m missing your point to a degree.”

You’re certainly missing this point. It has nothing to do with the particularities of daily max temp. Same for averaging anything – annual average house price, stock price, dam level, whatever. The point is that you don’t turn an annual average price, say, into $/year, and then divide by 365 to convert it to $/day.

Re.

Nick, this is the ………………… comment that I have ever seen you make.

Why do you do it? Lighten up. Get a life.

Reply Nick Stokes Do you have any answers?

Sure.

Stop saying demeaning, ridiculing and ridiculous things like

“More nutty units. The 4 W/m2 is just an annual average because it is averaged over a full year (being seasonal).”

Your only point seems to be that Nick Stokes mathematics would do an annual average over a different time frame than a year?

How ” -” is that comment?

–

No one mathematical would deliberately confuse an annual average rate of uncertainty of TOA 4 W/m2 with the daily TOA itself 240 W/m2 would they?

And then divide that daily figure by 365 days to say that the earth’s TOA gets 2/3 W/m² per day?

–

Oh wait

He does

” London has an annual average temperature of 15°C. That doesn’t mean an average per-day of 15/365 = 0.04°C.”

Nick. …..

London having an annual average temperature of 15°C means an average of 15 C daily taken over a year.

There would be an uncertainty figure around that, a lot smaller, of perhaps +/- 0.5°C for the yearly uncertainty, unless near an airport or exhaust fan.

Stop this deliberate misquoting of units and their applicability.

It should be beneath you.

“No one mathematical would deliberately confuse an annual average rate of uncertainty”

“An annual average rate of uncertainty”? That is very confused.

Nick Stokes October 16, 2019 at 1:03 pm

“No one mathematical would deliberately confuse an annual average rate of uncertainty”

“An annual average rate of uncertainty”?

“That is very confused.”

–

I was quoting your comment on uncertainty

“Nick Stokes October 15, 2019 at 3:06 pm

The 4 W/m2 is just an annual average because it is averaged over a full year (being seasonal).”

–

That was very confused.

Over a 20-year period the average annual temperature change is 0.3°C. So the average temperature change is 0.3°C/year.

“NASA: We Can’t Model Clouds”

The rest is junk.

If that was all we had, it would be enough.

Hi Pat, What you make clear by quoting Roy

“This discrepancy is widely believed to be due to uncertainties in cloud feedbacks. … Fig. 1 [shows] the changes in low clouds predicted by two versions of models that lie at either end of the range of warming responses. The reduced warming predicted by one model is a consequence of increased low cloudiness in that model whereas the enhanced warming of the other model can be traced to decreased low cloudiness. (original emphasis)”

Is admitting that GCM’s with their many fudge factors and reliance on adjusting those to fit the past, have no more predictive ability than the many curve-fit climate models that have debuted on WUWT… And that’s what the climate scientists don’t want to admit. And Roy is missing what Roy’s figure-1 chart indicates; that GCM’s can wander way off in any direction, not that they’re going to oscillate within those bounds.

Which is why one doesn’t predict forward with a “curve fit” very far (having explored that in my grad school past).

But it sounds like climate scientist of any bent are unwilling to admit they don’t know the physics.

But, if they did know the physics, they could build a useful model? I think so, given about 500 years. It’s an extremely complex problem.

There are three streams evident in the past and ongoing discussion of this paper, all of which go past each other on substantial issues.

Frank’s paper.

Roy’s discussion of the paper.

Various defenders of a very faulty GCM product.

Frank rightly points out that all the models contain large uncertainty issues that could grow in time leading to a completely unreliable projection.

But the models do have an inbuilt regulator, as Roy points out, that yearly forces them back onto an even keel by wrongly adjusting the excesses to a semi-fixed TOA.

This equates to his image of the pot boiling on the stove, one keeps going back to the 100C needed for boiling water to the TOA in balance needing to radiate out what the sun puts in.

Both of you have validity in your claims.

You have rightly pointed out and he could admit that the large uncertainty yearly, perpetuated recurrently each year, makes any future prediction relying on individual components meaningless.

–

1) My error propagation predicts huge excursions of temperature.

This is what it looks like, unfortunately, to the non-statistician.

You are trying to show the degree of unreliability that should occur if propagated as a normal program would do.

–

2) Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux.

They should have and do have as you show by the wide range of error in just one statistic, cloud cover.

However TOA is basically sun energy in and out and moves around a fairly fixed range. If you have internal variability it will tend to even out in time, hotter radiates more, cooler radiates less. If you program your computer to cheat and readjust each year to keep to a set TOA by adjusting the cloud cover up or down ( another 2 pixels of cloud this year please to cover that heat, it is only an algorithm) you can keep running your uncertainty algorithms and not deviate off from the programmed warming.

–

3) The Error Propagation Model is Not Appropriate for Climate Models

Of course not.

Each scenario, otherwise known as a projection or prediction is based on only two things. The inbuilt ECS algorithm and the CO2 level.

They do not do projections based on the observations and any other inputs they use, because they remove these effects yearly.

–

Congratulations on showing how a GCM should work if properly tuned and why it cannot work ( uncertainty too large).

I would back off on Roy; he should not have approached your paper in the way that he did, but he is defending how he does his modelling, properly. I believe he is on the same page as you, for the same reasons, on the terrible bias shown by all the models to date.

Thank you for taking the time to offer your assessment(s) on Dr. Frank’s work both here and in the past articles. I hope you choose to continue to engage this and other debates here.

Whatever their accuracy or error range, the models have proven themselves to have no scientific use at all. They are only used as a political propaganda tool to terrify people into paying the shake-down Global Warming/Climate Change tax.

They are useful as educational tools to test dynamics in the model system.

They may be useful in the real world, but they need to relax their certainty, acknowledge their assumptions/assertions and the regular “tuning”, and expand the probable range of outcomes.

n.n, climate modelers have tried to leap directly to a complete model of the climate without doing any of the intermediate hard work to figure out how the climate actually functions.

It would probably take about 100 years of inglorious but beautiful observational physics, such as Richard Lindzen and his students do, to figure out how the climate sub-systems work and couple.

It’s only after knowing all of that, all of that, that a complete model of the climate can be chanced.

The people modeling climate are primarily mathematicians. They have no idea how to go about doing science.

Instead, they’ve made a sort of Platonic ideal of an engineering model and are too untrained in science to know that it absolutely cannot predict observables beyond its parameter calibration bounds.

Why the physics establishment let them get away with it, is anyone’s guess.

>>

chimerical

<<

Ahh, yes. A new word for my vocabulary.

Jim

“Roy misconceived his ±2 Wm-2 as a radiative imbalance. In the proper context of my analysis, it should be seen as a ±2 Wm-2 uncertainty in long wave cloud forcing (LWCF). It is a statistic, not an energy flux.”

And what did I say? I said you guys are not even on the same page. Roy was talking about inputs and outputs and Pat was talking about uncertainty. That’s chalk and cheese. Both can be round (within a rounding error) but that’s about it.

The exposure of the CMIP model as circularly “self correcting” is hilarious. We will discuss it at this year’s conference later in the week. Wow. If the output is constrained and the internals of the model are forced to bring it into balance, the stability of the output is fabricated, literally. The entire model output is without meaning. I thought they had some vague usefulness. Not so. They are meaningless if that is how the constraints are applied.

Pat did you know that? Your critique was spot on of course, and the modeled temperature projections are highly uncertain, but did you know beforehand that the modelers were fixing the TOA values and forcing the model to fiddle internally until it met the requirement? Then to have them say the model is validated because it “balances” is a bad, sad joke! Wow. Just, wow.

Let’s look at the analogy. I have a bridge and it held 6 tons without failing. I model it and change the thickness of the deck, making it thinner in the model, and force the output to sustain 6 tons, then have the computer fiddle things like the foundation mass and so forth. Then I say afterwards that the model is validated because the calculated sustainable weight is always 6 tons. This is total garbage. It was an input. The calculation could include several non-physical quantities like steel bars that are 50 times stronger than normal. Kinda like feedbacks….

Crispin, “Pat did you know that? Your critique was spot on of course, and the modeled temperature projections are highly uncertain, but did you know beforehand that the modelers were fixing the TOA values and forcing the model to fiddle internally until it met the requirement?”

Yes, and thank-you, Crispin. 🙂

Knowing that and figuring out a way to test the reliability of the air temperature projections are two separate issues, though.

All I have to say is this is how real scientific debate should occur in the modern world. Kudos to WUWT & Team for posting these articles!

Cheers!

Correct!

Pat has opened the door to actual debate by using a simple engineering methodology that cannot be easily dismissed by hand waving. Bravo Pat!

Thanks, John. 🙂

Testing models’ fitness through propagation of modeling error(s) makes sense, given that the models demonstrate no skill to hindcast, let alone forecast, and certainly not predict, and require regular injections of black… brown matter to remain compliant with reality.

I love the lid on the pot analogy.

The models cannot predict whether the lid will stay on the pot, therefore the models are useless at predicting how the pot will boil.

Which climate model is doing the best job of forecasting actual changes?

They are a bowl of spaghetti.

Completely agree. I just wanted to see if someone would actually ID a particular model. So far, they are all worthless.

Excellent article. I was convinced before but now I am convinced and perplexed at how obtuse some of your critics are.

The models’ uncertainty and error are irrelevant, because the models themselves are irrelevant.

You cannot build an accurate model of a complex system by guessing at how it *might* work. You can tweak the various knobs to produce accurate-seeming reproductions (within some range of error) of historically measured data, but without knowing which processes are “tuned” correctly and which are not, it is basically a useless guess when used for prediction. One can assume they got lucky and guessed the tuning knobs’ correct values, but that is an act of faith, not science.

We can’t even trust our historic data… no one really knows just how much error is contained within it, or how much bias is being added in. That is the place to start – get the historic data cleaned up. But then a lot of the supposed warming disappears, because it’s caused by poor sites that are poorly maintained, with lots of heat pollution nearby.

So, without good data, or first principle climate models…we have a bunch of rather useless but expensive computer garbage running on our super-computers. The future will bear this out, unfortunately it will take another 20 to 30 years to prove it unless we get lucky and there is a demonstrable cooling in the next 10 years. Since we DO NOT KNOW what causes natural warming, we cannot guess how it will behave with any certainty.

Dr. Frank,

I want to know where and when you decide to insert your error into your Eq. 1. You claim that in the control run there is no change in forcing and therefore there is no uncertainty. Yet somehow when Delta F is non-zero the +/- 4 W/m^2 of uncertainty appears. If there is a fixed uncertainty in the forcing then it would appear even if the additional forcing is zero. After all, it makes no sense to claim that climate scientists know the long wave forcing exactly if CO2 is constant but not if CO2 changes by a tiny fraction.

The uncertainty is in the simulated long wave thermal energy flux, Izaak. Thermal energy flux that CO2 forcing enters and is subsumed within.

The simulated tropospheric thermal energy flux has an average annual uncertainty of (+/-)4 W/m^2. That is the lower limit of resolution of the model. Annual CO2 forcing increase is about 0.035 W/m^2.

It is brought in independently as a model resolution limit.

You’re asking a model to resolve the impact of a 0.035 W/m^2 perturbation, when it cannot resolve anything smaller than (+/-)4 W/m^2. You’re asking a blind man to see.

That resolution limit remains whether CO2 forcing is present or not, or is a constant or is changing. The (+/-)4 W/m^2 is a property of the models.

That should explain it for you. If it does not, then try researching the idea of instrumental resolution in the context of science and engineering. Models are merely instruments made of software.
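To make the resolution argument above concrete, here is a minimal numerical sketch. The (+/-)4 W/m^2 annual uncertainty and the ~0.035 W/m^2 annual CO2 forcing increment are the figures quoted in the comments; the root-sum-square growth of a per-step uncertainty is the standard propagation rule, applied here purely for illustration:

```python
import math

# Figures quoted in the comments above (for illustration only):
# per-year LWCF calibration uncertainty and annual CO2 forcing increment.
lwcf_uncertainty = 4.0      # +/- W/m^2 per simulated year
annual_co2_forcing = 0.035  # W/m^2 per year

# Signal-to-resolution ratio for a single simulated year.
ratio = annual_co2_forcing / lwcf_uncertainty
print(f"one-year forcing / resolution limit = {ratio:.5f}")

# Root-sum-square growth of the step uncertainty over an n-year projection,
# compared with the linearly accumulating forcing signal.
for n in (1, 10, 50, 100):
    u_n = lwcf_uncertainty * math.sqrt(n)   # propagated uncertainty
    signal = annual_co2_forcing * n         # accumulated forcing
    print(f"{n:3d} yr: signal {signal:6.2f}  uncertainty +/-{u_n:6.2f} W/m^2")
```

The point of the sketch is only the ratio of the two columns: the accumulated signal never comes close to the propagated resolution limit.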

Let me go out on a limb a little to look at this in a different way: a little bit of Dr. Spencer and a little bit of Dr. Frank.

First, I want to use the concept of leverage – a term I am making up. For example, a model that forecasts one year ahead using five years of data would have a leverage of 20%. One thing I don’t like about tree ring reconstructions is that only about 5% of the range is calibrated and the other 95% is assumed to be accurate. In that case, the leverage would be 20 to 1 or 2,000%.

Second, let’s look at an instrument that is calibrated to say one tenth of a unit in a range of 1 to 100. If that instrument is used to measure something at say 110, the measurement would clearly be outside the calibration range, so the accuracy would be unknown. However, a gut feeling would be that the measurement accuracy would probably be closer to a tenth than to one, because the leverage would only be 10%. If that instrument were used to take a measurement of say 200, then it would be reasonable to guess that the accuracy would probably not be close to a tenth of a unit.

I don’t see anything inherently wrong with tuned models. They can be useful. They don’t have to model the physics correctly to be useful. Such models are really not physical models but heuristic ones. I think that Dr. Frank’s uncertainty (if I may presume to call it that) is more applicable to physical models than to heuristic ones. Let me explain.

Like a calibrated instrument, a tuned heuristic model is only accurate within the calibration range. Unlike a calibrated instrument, however, tuned heuristic models will always be used outside their calibration range. Like an instrument used outside its calibration range, it doesn’t seem that a measurement (instrument) or prediction (model) with a leverage of 10% would have an uncertainty much greater than the uncertainty or accuracy within the calibration range. However, as the leverage increases, one is further and further away from the calibration range and the uncertainty or accuracy will no longer be close to what it was within the calibration range.

Let’s look at Figure 4: RCP8.5 projections from four CMIP5 models in the main post. Assuming that the calibration range is from 1850 to 1950, the dispersion of the models stays fairly tight from 1950 to 2000 (a leverage of 50%) then starts increasing a bit but not too much from 2000 to 2050 (a leverage of 100%) and blowing up after that.

Therefore, may I suggest that once the leverage of forecasting gets high (say above 100%), Dr. Frank’s uncertainty will show up relentlessly.

I have read that weather forecasts are pretty good up to about 72 hours, with diminished accuracy thereafter, but I don’t know on how many hours of data such a forecast is based. I would be curious to see what the leverage would be for weather forecasts.
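The “leverage” notion above is the commenter’s own ad hoc term, but it is easy to make concrete. A minimal sketch (the function name and the two worked cases are merely illustrations of the definition given above; note that 95% reconstructed from a 5% calibration window is strictly 19 to 1, which the comment rounds to 20 to 1):

```python
def leverage(forecast_span, calibration_span):
    """Ratio of the span being forecast (or reconstructed) to the span
    that was actually calibrated against data (commenter's ad hoc term)."""
    return forecast_span / calibration_span

# Forecasting one year ahead on five years of data:
print(f"{leverage(1, 5):.0%}")   # 20%

# Tree-ring case: 95% of the range reconstructed from a 5% calibration window:
print(f"{leverage(95, 5):.0%}")  # 1900%
```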

Again, just a thought.

Someone is trying to throw semantic sand into our faces, and it’s not Spencer.

Easy to say, perfecto.

I’m a layperson and have been following the paper, the response, and the rebuttal. After a few weeks of brushing up on error propagation, and remembering my lab uncertainties from organic chemistry and how successive steps make the error propagate and reduce what you actually know, I can say Pat Frank’s rebuttal was immensely clear and satisfying. I completely get it.

People like Stokes seem tripped up by the idea that the linear emulation isn’t the GCM. It doesn’t matter. Correcting errors that force some BS conservation doesn’t make the initial calibration error go away.

You can’t measure nanometers with a ruler. Sorry!

I’ve learned in the process how f-ing stubborn scientists who actually agree on end results can be. I hope Roy will admit Frank is right.

This is all very interesting.

Let me propose the following experiment. Take two different climate models and tune the parameters such that they both accurately reproduce some version of the historical record. If they give wildly different projections of the future, we have shown that this type of modeling is inherently unreliable and should not be used for decision making in the real world.

See Figure 4 above, Frank.

Pat,

In case you missed it, here is an email about measurement and uncertainty from the Australian Bureau of Meteorology, BOM. I asked a couple of questions that are repeated in the BOM response. I get the impression that they are going through a mental period of discovery before some simpler clarification emerges in the group.

However, I find their answer interesting because they have put some figures on uncertainty that can be analysed in much the same way as you have used the cloud numbers and their uncertainty. Clouds, thermometers, and about 50 more effects with uncertainty for the GCMs – they compound with each other and I dread to imagine the final result. Geoff S. BOM email follows:

Dear Mr Sherrington,

Thank you for your correspondence dated 1 April 2019 and apologies for delays in responding.

Dr Rea has asked me to respond to your query on his behalf, as he is away from the office at this time.

The answer to your question regarding uncertainty is not trivial. As such, our response needs to consider the context of “values of X dissected into components like adjustment uncertainty, representative error, or values used in area-averaged mapping” to address your question.

Measurement uncertainty is the outcome of the application of a measurement model to a specific problem or process. The mathematical model then defines the expected range within which the measured quantity is expected to fall, at a defined level of confidence. The value derived from this process is dependent on the information being sought from the measurement data. The Bureau is drafting a report that describes the models for temperature measurement, the scope of application and the contributing sources and magnitudes to the estimates of uncertainty. This report will be available in due course.

While the report is in development, the most relevant figure we can supply to meet your request for a “T +/- X degrees C” is our specified inspection threshold. This is not an estimate of the uncertainty of the “full uncertainty numbers for historic temperature measurements for all stations in the ACORN_SAT group”. The inspection threshold is the value used during verification of sensor performance in the field to determine if there is an issue with the measurement chain, be it the sensor or the measurement electronics. The inspection involves comparison of the fielded sensor against a transfer standard, in the screen and in thermal contact with the fielded sensor. If the difference in the temperature measured by the two instruments is greater than +/- 0.3°C, then the sensor is replaced. The test is conducted both as an “on arrival” and “on departure/replacement” test.

In 2016, an analysis of these records was presented at the WMO TECO16 meeting in Madrid. This presentation demonstrated that for comparisons from 1990 to 2013 at all sites, the bias was 0.02 +/- 0.01°C and that 5.6% of the before tests and 3.7% of the after tests registered inspection differences greater than +/- 0.3°C. The same analysis on only the ACORN-SAT sites demonstrated that only 2.1% of the inspection differences were greater than +/- 0.3°C. The results provide confidence that the temperatures measured at ACORN-SAT sites in the field are conservatively within +/- 0.3°C. However, it needs to be stressed that this value is not the uncertainty of the ACORN-SAT network’s temperature measurements in the field.

Pending further analysis, it is expected that the uncertainty of a single observation at a single location will be less than the inspection threshold provided in this letter. It is important to note that the inspection threshold and the pending (single instrument, single measurement) field uncertainty are not the same as the uncertainty for temperature products created from network averages of measurements spread out over a wide area and covering a long-time series. Such statistical measurement products fall under the science of homogenisation.

Regarding historical temperature measurements, you might be aware that in 1992 the International Organization for Standardization (ISO) released their Guide to the Expression of Uncertainty in Measurement (GUM). This document provided a rigorous, uniform and internationally consistent approach to the assessment of uncertainty in any measurement. After its release, the Bureau adopted the approach recommended in the GUM for calibration uncertainty of its surface measurements. Alignment of uncertainty estimates before the 1990s with the GUM requires the evaluation of primary source material. It will, therefore, take time to provide you with compatible “T +/- X degrees C” for older records.

Finally, as mentioned in Dr Rea’s earlier correspondence to you, dated 28 November 2018, we are continuing to prepare a number of publications relevant to this topic, all of which will be released in due course.

Yours sincerely,

This report, when made available, ought to warrant a guest blog. I think it will be very interesting.

I agree with Kevin, Geoff.

It would be great if you could write it up and submit as a story with figures here at WUWT.

I just want to thank Pat Frank for his marvellous patience and tenacity in educating everyone in basic engineering. For me it further illuminated the problems with the models.

Thanks krm.

Why do we spend so much time and confused effort on this? Perhaps a constellation of 3 to 6 fairly simple satellites with multi-spectral IR sensors could keep track of the energy balance on the Earth in 50km to 100km grid pixels, and forget the details of atmospheric depth vs surface measurement. You’d know the temperature of each pixel pretty accurately, be able to do some energy balance measurements, and notice any warming.

I’ve come to the conclusion that experts on both sides of the AGW debate are invested in the debate continuing. Even if the bulk conclusion is that there is hardly any warming, and those who point this out are acting in good faith while the AGW mob isn’t, the good people interested in this subject will find some highly technical minutia to continue the debate over. While I’ll continue to cheer the good people here fighting against trillions of malinvestment, this article was the straw that broke the camel’s back for me; the debate is over, and I’ve lost interest.

Tom Schaefer, as of October, 2019, you have no incentive to follow these endless debates. Nor does the average citizen on Main Street USA.

Only after climate activists get serious about reducing America’s carbon emissions will the scientific and public policy debates reach a critical mass. If serious restrictions are ever placed on your access to gasoline and diesel, then you will be back.

How do those IR sensors see through the clouds? How do they see through the smoke from wildfires or pasture burning? The temperature difference on the surface and in the atmosphere over a 50km to 100km grid can vary wildly due to things like evapotranspiration, road surface density, urban heat island effect, etc.

Satellites are not a complete answer, at least not at the level of technology we have today. They are just one more input among others.

I recently found an old, compact, computational slide rule and brought it to my office. While sitting in on not too interesting teleconferences, I have been entertaining myself by doing multiplications and divisions with the slide rule and comparing the results with my electronic calculator. My inability to discern the subdivisions in the scale of the slide rule led to errors in my slide rule results that are easily as much as 5% different than the result from my electronic calculator. Dr. Frank’s analysis shows, if I were to use the result from one slide rule calculation in a subsequent slide rule calculation, how the second calculation is less reliable than the first, and so on, and so on. So while electronic calculators have reduced the uncertainty from my inability to discern the scale subdivisions of a slide rule, they have not reduced the uncertainty from our inability to discern the state of cloud cover forcing.
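A quick Monte Carlo makes the slide-rule picture concrete: chain several multiplications, perturb each reading by a few percent, and watch the relative spread of the result grow with the number of steps. The 5% per-reading figure comes from the comment above; the chain of factors and the trial count are arbitrary assumptions:

```python
import random

random.seed(42)

def chained_product(true_factors, rel_err=0.05, trials=20000):
    """Multiply a list of factors, perturbing each slide-rule reading by a
    uniform +/- rel_err relative error, and report the spread of the result."""
    results = []
    for _ in range(trials):
        value = 1.0
        for f in true_factors:
            value *= f * (1 + random.uniform(-rel_err, rel_err))
        results.append(value)
    mean = sum(results) / trials
    spread = (sum((r - mean) ** 2 for r in results) / trials) ** 0.5
    return mean, spread

true = [2.0, 3.0, 1.5, 4.0]  # hypothetical chain of multiplications
for steps in range(1, len(true) + 1):
    mean, spread = chained_product(true[:steps])
    print(f"{steps} step(s): mean {mean:8.3f}  relative spread {spread/mean:.3f}")
```

Each reading is individually good to a few percent, yet the relative spread of the chained result grows with every step, which is the propagation point being made.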

An interesting example.

Agreed, the example is interesting, but is there not a conceptual problem equating “reliability” with “uncertainty”?

“Dr. Frank’s analysis shows, if I were to use the result from one slide rule calculation in a subsequent slide rule calculation, how the second calculation is less reliable than the first, and so on, and so on.”

I’m not sure how the second calculation is “less reliable” than the first when the first produces an erroneous result. If my method calculates 2 + 2 = 5, then = 6, then = 10, and so on, the method isn’t “less reliable” upon each calculation, it’s just as unreliable from the first to the last.

I don’t think this is the same thing as Frank’s uncertainty calculation where uncertainty isn’t error.

You missed the fact that the unreliable result from the first slide rule calculation is used as an input to the second. So the uncertainty of the first result gets propagated into the second slide rule calculation. So you have more and more uncertainty about the final result.

“. . . the unreliable result from the first slide rule calculation is used as an input to the second.”

I most certainly did! Many thanks!

Oh man, have you got it. I went thru college using only a slide rule. When I was a senior, my uncle bought one of the first HP calculators for $400, a whole year’s tuition! I would have given an eye tooth for that.

Dr Frank,

Decent post. Clear explanation, pleasure to read, even I can get the picture.

“The growth of uncertainty does not mean the projected air temperature becomes huge. Projected temperature is always within some physical bound. But the reliability of that temperature — our confidence that it is physically correct — diminishes with each step. The level of confidence is the meaning of uncertainty. As confidence diminishes, uncertainty grows.”

I reckon here lies the crux of the problem, namely the assumption that large uncertainty bounds mean the projected temperature will vary within those bounds with roughly equal probability. This disconnection between uncertainty and actual error creates plenty of confusion – even fellas well-versed in science and stats cannot get it.

Besides, it is astonishing that such a huge scientific(?) effort as climate science modelling so easily falls into errors like those described by you or Lord Monckton (I’d like to believe his paper also will be published in a scientific journal). Misunderstanding of uncertainty propagation, misunderstanding of feedback mechanisms – that means climate modelling is really FUBAR. The fellows behind this science should really start to behave morally and intellectually.

At risk of being over-simplistic and perhaps naive, it seems to me that Dr Frank is proving what we always knew. CO2 induced warming is so tiny that we do not notice it and it may take decades to notice the cumulative effect, if any. Cloud cover can and does make a substantial difference to incoming solar radiation and at night can reduce outgoing IR radiation. These effects are so large that anyone can simply feel the magnitude for themselves.

But when it comes to the models, the potentially massive positive or negative cloud effect cannot be calculated for various reasons. Taken together, the cloud effect swamps any CO2 induced warming, but since we cannot put reliable numbers on the former and its feedbacks, any projection of the resulting temperature is meaningless.

It may be possible to constrain the models or fit values obtained by hindcasting or whatever, but while that might make the model output look sensible rather than obviously wrong, it does not remove the uncertainty and the meaningless nature of the result.

Well, done, Dr Frank, excellent work.

Here’s another analogy that could help. Consider a man walking from point A to point B at a distance of 100 steps. Every step the man takes to get to point B shows some degree of variance. Think of this variance as basic “uncertainty” and is the equivalent of the cloud error range in climate models.

What Dr. Frank has done is essentially assume the man is blindfolded and this uncertainty applies to each step. The result is some range of values after 100 steps which would be huge.

What Roy and Nick are saying is we know a person who is not blindfolded will do a lot better on each step. That is, for climate, conservation of energy limits the actual error. This could be done in our analogy by placing walls on either side of the man’s route to point B. These walls limit how far off the man could get. Hence, even when blindfolded the walls keep the man within a narrow range. As a result of these walls the real error is less than the error predicted by error propagation.

The problem here is that these walls are no more than another guess. It also ignores there are other unknowns. For example, in this analogy there could be cross winds or uneven ground that is completely ignored in the measurement of the single step uncertainty.

To me the big question in model credibility is to what degree are these walls based on valid physics which takes into account ALL possible situations. I will acknowledge that such walls may exist but I need those who would use them to defend the model results to tell us exactly how they were built.
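The walker analogy above is also easy to simulate. A sketch (the step noise, wall position, step count, and trial count are arbitrary assumptions, chosen only to show the qualitative effect: without walls the spread of final positions grows like the square root of the number of steps; with walls it saturates near the walls, which is the modelers’ implicit claim):

```python
import random

random.seed(1)

def walk(steps=100, step_noise=1.0, wall=None, trials=5000):
    """Final lateral offsets of a walker whose every step drifts sideways by a
    random amount; optional walls at +/- wall stop him drifting further."""
    finals = []
    for _ in range(trials):
        x = 0.0
        for _ in range(steps):
            x += random.gauss(0, step_noise)
            if wall is not None:
                x = max(-wall, min(wall, x))  # the walls clamp the drift
        finals.append(x)
    return finals

def spread(xs):
    """Standard deviation of the final positions."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print("no walls  :", round(spread(walk()), 1))          # grows like sqrt(steps)
print("walls +/-5:", round(spread(walk(wall=5.0)), 1))  # bounded by the walls
```

The open question raised above is then exactly: what physics, if any, justifies the position of the walls?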

If walls are needed to keep the model in check, then the model must be wrong; for a “correct” model would not have need of any walls at all.

Depending upon how it is being done, is the process of quantifying process uncertainty subject itself to some level of process uncertainty?

Hey Nick,

“It shows nothing about the models. There is nothing in the paper about how GCMs actually work.”

If you can get the same results as a complex GCM running simple calculations in Excel, so what’s the problem? You cannot do that for CFD. You cannot do that for FEM. But it looks like you can do this for a GCM. If so, maybe those complex models are simply overgrown? If the relationship between forcings and air temperature is relatively easy to capture, simple calculations will work equally well as costly and complex ones.

Parameter, as long as the thirty-year running average of global mean temperature stays at or above + 0.1C per decade, climate scientists will claim that observations are consistent with model predictions.

“There is nothing in the paper about how GCMs actually work.”

GCMs don’t work.

They do not produce any useful and verifiable scientific results.

They produce projections of imagination.

The only value of their results is to demonstrate their lack of skill.

One hopes that efforts dedicated to improve them might be equal to the efforts made to defend them.

https://www.wcrp-climate.org/images/documents/grand_challenges/GC4_cloudsStevensBony_S2013.pdf

https://science.sciencemag.org/content/340/6136/1053

“If you can get the same results as complex GCM running simple calculations in Excel so what’s the problem? You cannot do that for CFD.”

Of course you can do it for CFD. If you model laminar pipe flow with CFD, you’ll get a uniform pressure gradient and a parabolic velocity profile. You could have got that with Excel. Undergraduates did it even before computers. But CFD will do transition to turbulence. Excel won’t.

But here it isn’t even an independent calculation. To get the “same results” you have to use two fitting parameters derived from looking at the GCM calculations you are trying to emulate.

Hey Nick,

“Of course you can do it for CFD. If you model laminar pipe flow with CFD, you’ll get a uniform pressure gradient and a parabolic velocity profile. You could have got that with Excel.”

OK. Can you model in Excel the pressure coefficients for a simulation of an aircraft in near-stall condition with a few million volume cells? And that’s the level we’re talking about with respect to GCMs, not a simple laminar flow approximation. If you can easily replicate the results of a GCM in Excel, even without using any solvers, that means the complex models are not so complex, unlike complex CFD or FEA. If you can do something in a simple way, why do the same thing in a complicated and costly way?

“But here it isn’t even an independent calculation. To get the ‘same results’ you have to use two fitting parameters derived from looking at the GCM calculations you are trying to emulate.”

Don’t quite get it – can you elaborate?

“If you can easily replicate results of GCM in Excel, even without using any solvers, that means complex models are not so complex, unlike complex CFD or FEA.”

In fact GCMs are CFD. My point with pipe flow is that if the flow can be modelled simply, CFD will produce the simple result. What else should it do? It doesn’t mean CFD is an Excel macro.

“simulation of an aircraft in near-stall condition”

CFD doesn’t do so well there either (neither do aircraft). But CFD can do flow over an aerofoil in fairly normal conditions, as can a wind tunnel. So can a pen and paper Joukowski calculation. That doesn’t trivialise CFD.

“can you elaborate?”

Yes. Pat claims that his Eq 1 emulates the GCM output (for just one variable, surface temperature), and so he can analyse it for error propagation instead. But the Eq 1 emulation requires peeking at the GCM output to get the emulation (curve fitting with parameters). So you can’t say that uncertainty in an input would produce such and such an uncertainty in the output of the calculation. You would have to first see how the uncertainty affected the GCMs from which you derive the fitting parameters.

“simulation of an aircraft in near-stall condition”

CFD doesn’t do so well there either (neither do aircraft). But CFD can do flow over an aerofoil in fairly normal conditions, as can a wind tunnel. So can a pen and paper Joukowski calculation. That doesn’t trivialise CFD.

————————————————-

It is difficult, but if you follow AIAA, they are having some success by going to unsteady CFD (DES, LES, DNS, etc.). These methods can be high fidelity, but are totally out of the realm of what is practical (based on computational power and time availability) for climate modeling.

Hey Nick,

“In fact GCMs are CFD. My point with pipe flow is that if the flow can be modelled simply, CFD will produce the simple result.”

Some flows you can model that way and others you cannot. If you could model everything in Excel you wouldn’t need more advanced tools. The fact that you can emulate GCM air temperature output using a simple equation may be embarrassing to some, but it does not have to be a weakness. If the relationship between the different forcings and the temperature output is simple enough, what’s wrong with that? As you would say: “Of course you can do it for CFD.”

“In fact GCMs are CFD.”

Is it not multiphysics? Interesting.

“simulation of an aircraft in near-stall condition”

“CFD doesn’t do so well there either (neither do aircraft).”

As far as I’m aware simulations of higher angles of attack and near stall are not uncommon. It’s not an easy problem to simulate (surely beyond Excel) but can be done, some claim with reasonable accuracy, compared with experimental data.

“can you elaborate?”

“Yes. Pat claims that his Eq 1 emulates the GCM output (for just one variable, surface temperature), and so he can analyse it for error propagation instead. But Eq 1 emulation requires peeking at the GCM output to get the emulation (curve fitting with parameters). So you can’t say that uncertainty in an input would produce such and such an uncertainty in the output of the calculation. You would have to first see how the uncertainty affected the GCMs from which you derive the fitting parameters.”

So, are you saying that Pat employs here some kind of circular reasoning? In order to emulate GCM output we need to look at this output first to figure out emulation values? That’s bizarre – in this case we wouldn’t need any emulator – just copy the output from GCM. Let’s have a closer look at that: Which term in Pat’ equation represents this ‘peeking’ into the GCM model?

Nick, “You would have to first see how the uncertainty affected the GCMs from which you derive the fitting parameters.”

Uncertainty doesn’t affect GCMs.

It’s funny, really. It’s GCMs that affect uncertainty.

You’re always making that same mistake, Nick.

If all you want is delta P and the velocity profile in pipe flow, I would suggest using an Excel spreadsheet. You will get the exact [analytical] answer without having to deal with the meshing, mesh refinement, etc. The CFD will actually give you an approximation of the analytical answer. Laminar to turbulent transition in CFD is actually difficult.

What Pat showed was that if you want to know the annual global temperature output of a CMIP5 GCM, don’t bother with running the GCM; just use his simple expression and get a good enough answer.

Now if you want, as a scientist to examine the interplay of various energy exchange mechanisms in the climate, and have models or hypothesis to test a GCM may be a good platform – just do not mistake the temperature outputs as accurate. The interplay between mechanisms may shed new insight into the physics.
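To illustrate what “fitting an emulator to GCM output” amounts to in practice, here is a minimal sketch. The forcing and temperature numbers are invented for the example (they are not from Pat Frank’s paper), and a one-parameter linear fit stands in for his Eq. 1:

```python
# Hypothetical cumulative forcing (W/m^2) and a pretend "GCM" dT series (K);
# both series are invented for illustration only.
forcing = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
gcm_dT = [0.27, 0.52, 0.81, 1.05, 1.33, 1.58]

# Closed-form least squares through the origin: a = sum(F*T) / sum(F*F).
a = sum(f * t for f, t in zip(forcing, gcm_dT)) / sum(f * f for f in forcing)
print(f"fitted sensitivity a = {a:.3f} K per W/m^2")

# The emulator tracks the "GCM" closely, which is exactly the point of
# contention: fit quality by itself says nothing about physical accuracy.
for f, t in zip(forcing, gcm_dT):
    print(f"dF={f:.1f}: GCM {t:.2f}  emulator {a * f:.2f}")
```

The dispute in the thread is whether such a fitted coefficient can carry an uncertainty analysis; the fitting step itself is uncontroversial.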

“just use his simple expression and get a good enough answer”

You would get an answer about how his curve fitting model behaves, once you sort out the mathematical errors. It doesn’t tell you anything about how a GCM would respond. In fact, since the curve fitter depends on the GCMs for fitted coefficients, you can’t even consistently analyse the simple model, since you don’t know how those coefficients might change.

“You would get an answer about how his curve fitting model behaves, once you sort out the mathematical errors. It doesn’t tell you anything about how a GCM would respond.”

Actually the model does, because he validated it over the space of many GCM runs.

“since the curve fitter depends on the GCMs for fitted coefficients, you can’t even consistently analyse the simple model, since you don’t know how those coefficients might change”

He used model parameters, not fitting coefficients, the same parameters and the same basic form of GHG forcing as the GCM does. Not sure the parameters change in GCMs. If they do, the emulator is still validated against many runs.

Pat,

If I may, I’d like to add a couple of thoughts to your opening list…

First, propagation of error is not a prediction, but rather a calculation of what variation is consistent with a model plus uncertainties in the model parameters and inputs. When you published your original post I really was convinced that all disagreement was simply a misunderstanding. I now am convinced that your critics have a fundamental misconception about models, measurements, and resolution. Among other things there seems little appreciation that even with a stable system, one still has uncertainty of inputs that are not like initial conditions, and which drive the model without end. These translate into interminable uncertainty in model output. I tried to show this in my post of a little over a week ago — without much effect.

Second, beyond the idea that climate models do not have errors of this sort, there is the insistence of Nick Stokes that a Monte Carlo simulation would actually be more appropriate to determining the value of climate models than would be the error propagation you introduced. In principle he is correct, but my understanding is that the climate models may have a hundred adjustable parameters, and perhaps additional adjustable inputs (drivers). The idea of doing a credible Monte Carlo simulation in such a high dimensional space is preposterous. Perhaps your approach of a representative model of models is the only reasonable approach. However, Mototaka Nakamura, in his English language version of parts of his recent book on Amazon, related the story of modifying a climate code to use a more representative parameterization of a factor I cannot recall at this moment, without it having much effect. Perhaps the climate models could be trimmed down to a much smaller kernel on which a credible Monte Carlo effort is possible.

Third, error propagation may not be used in simulations at present, but I can’t understand the stance that it is a priori not pertinent. I teach a number of design and laboratory courses in mechanical engineering. I present error propagation (which I call uncertainty propagation) as a way to evaluate designs and experiments, and to guide modifications required to meet objectives. No precision work of any sort is possible without it.

A couple of closing thoughts: Nick said in the thread above

I don’t know if this was Roy’s point, but if a credible uncertainty analysis results in bounds beyond what a physical phenomenon is capable of producing, then the person making the claim must have some credible competing and independent estimate of the bound, which no critic ever seems to offer. Absent some omniscience about the physical process in question, one would think that estimated bounds beyond physical possibility indicate something wrong or incomplete with the model.

In my case I was at first put off by the stunning size of your bounds, and by the propagation of error through iterated steps. I found the step size you considered to be sort of ad hoc, and I wasn’t certain that it was a reasonable model of how error stacks up. I wonder if instead you might consider an alternative of a secular trend with an uncertain slope?

Finally, one factor appearing to produce the same confusion, over and over, is that estimates of the uncertainties have to come from what we know of the underlying physical process, or calibration of instruments and so forth, which all involve physical units just like the units of actual quantities. So, an uncertainty in solar insolation (per Mototaka Nakamura) looks like a true energy, but in fact represents our level of ignorance about an input.
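Kevin's first point, that inputs which drive a model without end translate into interminable uncertainty in its output, can be sketched numerically. This is a minimal illustration, not anything from Pat's paper: the root-sum-square combination rule and the (+/-)4 W/m^2 per-step value are taken from the thread, everything else is assumed.

```python
import math

def propagated_uncertainty(u_step, n_steps):
    """Root-sum-square growth of a constant per-step uncertainty u_step
    over n_steps iterations (step errors assumed uncorrelated)."""
    return math.sqrt(n_steps) * u_step

# Illustrative: a +/-4 W/m^2 per-step calibration uncertainty
for n in (1, 25, 100):
    print(n, propagated_uncertainty(4.0, n))  # grows as sqrt(n), without bound
```

However stable the model's mean output is, this envelope keeps widening with each step; that is the sense in which the uncertainty is "interminable."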

“if a credible uncertainty analysis results in bounds beyond what a physical phenomenon is capable of producing, then the person making the claim must have some credible competing and independent estimate of the bound”

That doesn’t make much sense. But of course they have an estimate, and Roy said so. Conservation, particularly of energy. If the IR opacity of air remains about the same and the temperature increases 10°C, then the Earth will lose heat faster than the Sun can supply it. So you are not uncertain about whether that situation could happen. The calculation in a GCM conserves energy, so it cannot yield such a situation either. Pat’s calculation can. That is why it is meaningless.

“The idea of doing a credible Monte Carlo simulation in such a high dimensional space is preposterous.”

No, it isn’t. Chaos limits dimensionality; it reduces to that of the attractor. Varying those parameters does not produce independent effects. The perturbations reduce to a space of much smaller dimension, and it is propagation of that which you test.

You’re confusing precision of an output with uncertainty around that output.

You keep doing it, and it’s getting embarrassing.

You really should read up.

It should not be hard to understand that uncertainty propagates. It should also not be hard to understand that uncertainty in an initial state that compounds will result in greater uncertainty at a later state, irrespective of the alleged precision of the model.

“You’re confusing precision of an output with uncertainty around that output”

Could you explain the difference?

Sure. Precision as defined in GCMs has been how much the individual models have moment around the mean. Another definition: given general stochastic influences on a model, how much its output in different runs varies from the mean.

Uncertainty is about what is knowable given the crudeness of the underlying measurements going into a calculation.

If your model claims to resolve a 2-4 W/m^2 forcing and it has an input with an uncertainty of (+/-)4 W/m^2, it’s dogsh*t.

You can’t measure a nanometer with a millimeter ruler. You can’t measure a 2 W/m^2 resolution event with a model that has inputs that vary by (+/-)4 W/m^2.

The more operations, the more the uncertainty propagates. At the end of 100 years, it doesn’t matter what garbage overrides in a model have made it converge to an expected output with a high precision, because the underlying measurements are not known to a high degree.

Your model can give an expected value that is acceptable down to the nanometer, with standard error to the nanometer. But that doesn’t mean it has information value. Your measurements don’t support the precision expected.

If your uncertainty is greater than your range of outcomes, your model is dogsh*t.

“Precision as defined in GCMs has been how much the individual models have moment around the mean.”

Who defines it so? It seems strained to me. Terms like variability are more appropriate.

“If your uncertainty is greater than your range of outcomes, your model is dogsh*t.”

No, the uncertainty is wrong. Uncertainty is the range of outcomes you could get if the inputs ranged over their uncertainty distribution. And that is the problem here. If there is a range of outputs the model just couldn’t produce, then you aren’t uncertain about that range. And if someone’s analysis tells you that you are, the analysis is wrong.

“Uncertainty is the range of outcomes you could get if the inputs ranged over their uncertainty distribution.”

Sounds like a Numerical Variational Study. Determines the sensitivity of outputs to variations in inputs, not so much the uncertainty.

It would sure be nice if we could at least agree on terms. “Precision”, to me, is a measurement of how close together the results are. It says nothing about those close-together results being close to the true value.

One could calculate several outputs that fall within ±0.1W/m^2 of one another, but each having an uncertainty of ±4W/m^2. They would be very precise, have large uncertainties, and we would still not know how close to the true value any of them are.

Is there a different definition of “precision” used in GCMs than the rest of science?
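Tim's example of outputs agreeing to within ±0.1 W/m^2 while each carries a ±4 W/m^2 uncertainty can be made concrete. The run values below are invented purely for illustration:

```python
import statistics

# Hypothetical outputs from repeated runs (W/m^2): clustered within ~0.1
runs = [2.03, 2.08, 1.97, 2.05, 2.01]

precision = statistics.stdev(runs)   # how close together the results are
calibration_uncertainty = 4.0        # +/- W/m^2, from the calibration error

print(statistics.mean(runs))         # a precise-looking central value
print(precision)                     # small: the runs agree with one another
# The runs agree to a few hundredths of a W/m^2, yet the true value could
# still lie anywhere within roughly +/-4 W/m^2: precise, but uncertain.
```

Nothing about the tightness of the cluster tells you how close the cluster is to the true value; that is exactly the precision/accuracy distinction being argued here.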

“It would sure be nice if we could at least agree on terms. “Precision”, to me, is a measurement of how close together the results are. It says nothing about those close-together results being close to the true value.”

+1

Nick does not understand what uncertainty is. Nick thinks you can measure something in nanometers with underlying measurements to the millimeter.

Way too much evil CO2 is expended trying to teach this troll.


Take a bucket.

1. Take a 1000 mL graduated cylinder with 2 mL gradations, fill it with 500 mL of water, and add the water to the bucket.

2. Pipette 250 mL out of the bucket into the 1000 mL graduated cylinder and throw that water away. Add 250 mL of new water to the bucket from the same graduated cylinder.

3. Repeat this process 10,000 times.

How much water is in the bucket, Nick? Easy. 500 mL.

V = 500 – 250 + 250 – … – 250 + 250 = 500 mL

Each addition and subtraction gets you back to the initial volume, in math.

What’s your uncertainty, Nick?

It doesn’t matter that your math says the bucket hasn’t filled up or emptied. Math doesn’t care that your graduated cylinder can only measure to (+/-)1 mL and that over 10,000 repetitions the uncertainty propagates.
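The bucket example can be simulated. This sketch assumes each 250 mL reading is off by up to ±1 mL, modeled as a uniform random draw (an assumption for illustration only):

```python
import math
import random

def simulate_bucket(cycles, read_err=1.0):
    """Each cycle: remove a nominal 250 mL, then add a nominal 250 mL,
    with each measurement off by up to +/-read_err mL (uniform draw)."""
    volume = 500.0
    for _ in range(cycles):
        volume -= 250.0 + random.uniform(-read_err, read_err)
        volume += 250.0 + random.uniform(-read_err, read_err)
    return volume

random.seed(0)
n = 10_000
print(simulate_bucket(n))      # the arithmetic said 500 mL; the actual
                               # volume has drifted away from it
print(math.sqrt(2 * n) * 1.0)  # root-sum-square of the +/-1 mL readings:
                               # about +/-141 mL after 10,000 cycles
```

The nominal bookkeeping returns 500 mL every time; the growing interval describes how far the actual volume may have wandered from that number.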

Nick,

“No, the uncertainty is wrong. Uncertainty is the range of outcomes you could get if the inputs ranged over their uncertainty distribution. And that is the problem here. If there is a range of outputs the model just couldn’t produce, then you aren’t uncertain about that range. And if someone’s analysis tells you that you are, the analysis is wrong.”

No, we went over this in a different thread. How quickly you forget. The GCM models are deterministic. Put in an input and you get out an output. You can put in the same input time after time and you will get the same output. If this isn’t true then the models are even more useless than I expected. What you are trying to say is that a Monte Carlo analysis using a large number of runs with different inputs can define the uncertainty in the output. And that is just plain wrong.

Many, many years ago when I was doing long range planning for a large telephone company we did what you describe in order to rank capital expenditure projects. We would take all kinds of unknowns, e.g. ad valorem taxes, interest rates, rates of return on investment, labor costs, etc, and vary each of them one at a time over a range of values to see what happened to the outputs. That’s called “sensitivity analysis”, not uncertainty analysis. It tells you how sensitive the model is to changes in input but tells you absolutely nothing about the uncertainty in the model output. Run 1 with a set of input values would have some uncertainty in the output. Run 2 with one input changed would *still* have some uncertainty in the output. Same for Run 3 to Run 100. That uncertainty was based on the fact that not all inputs could be made 100% accurate. You could never tell exactly what the corporation commission was going to do with rates of return three years, ten years, or twenty years in the future. You could never tell what the FED was going to do with interest rates at any point in the future. All you could do is pick the capital projects which showed the least sensitivity to all the inputs while still providing acceptable earnings on investment. (all kinds of other judgments also had to be made, such as picking highly sensitive projects with short payback periods – it’s called risk analysis).
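The one-at-a-time procedure Tim describes can be sketched generically. The toy "earnings" model and every number in it are invented; only the method itself, varying each input over its range while holding the others fixed, comes from the comment:

```python
def one_at_a_time_sensitivity(model, baseline, ranges):
    """Vary each input over its (lo, hi) range, others held at baseline;
    report the swing each input alone induces in the output."""
    swings = {}
    for name, (lo, hi) in ranges.items():
        outs = [model(**dict(baseline, **{name: v})) for v in (lo, hi)]
        swings[name] = max(outs) - min(outs)
    return model(**baseline), swings

# Invented toy model of project earnings
def earnings(interest_rate, tax_rate, labor_cost):
    return 100.0 - 200.0 * interest_rate - 50.0 * tax_rate - labor_cost

base = {"interest_rate": 0.05, "tax_rate": 0.30, "labor_cost": 20.0}
rng = {"interest_rate": (0.03, 0.08),
       "tax_rate": (0.25, 0.35),
       "labor_cost": (15.0, 25.0)}
print(one_at_a_time_sensitivity(earnings, base, rng))
```

As the comment says, this ranks how sensitive the output is to each input; it says nothing about the uncertainty that every single run still carries.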

The very fact that your inputs have uncertainty is a 100% guarantee that your output will have uncertainty. The only way to have no uncertainty in your output is to have no uncertainty in your input and no uncertainty in the model equations.

Please try to tell us that your model inputs are all 100% accurate!

Tim Gorman

You remarked, “How quickly you forget.” It is impossible to know whether cognitive bias is making Stokes’ memory be selective, or whether he is being disingenuous to try to win the argument. The fact that I’ve never known him to admit to a mistake, and that he always finds something to object to from everyone who disagrees with him, suggests to me that he is not being honest.

https://en.wikipedia.org/wiki/Sensitivity_analysis

Wikipedia makes a distinction between sensitivity analysis and uncertainty analysis. I think that reading the above link would be in everyone’s best interest, especially Stokes.

Disclaimer: While I understand that some people don’t think highly of Wikipedia, it has been my experience that, in areas of science and mathematics, it is generally trustworthy. It is in areas of politics and ideologically-driven topics that it presents biased opinions.

Nick, “No, the uncertainty is wrong.”

Uncertainty is the root-sum-square. It grows without bound across sequential calculations. In principle, uncertainty can grow to infinity and not be wrong.

“Uncertainty is the range of outcomes you could get if the inputs ranged over their uncertainty distribution.”

In epidemiological models. Not in physical models.

“Uncertainty is the range of outcomes you could get if the inputs ranged over their uncertainty distribution. And that is the problem here. If there is a range of outputs the model just couldn’t produce, then you aren’t uncertain about that range. And if someone’s analysis tells you that you are, the analysis is wrong.”

Not sure about that. Maybe you could do (with the CMIP5 GCMs) what I proposed here:

https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/#comment-2790375

Nick, you say

But sir, the Earth has been nearly that much warmer at times in the past so there must be some combination of parameters that can and did produce a large excursion. I want to know how it is you are so certain that some similar displacement is not possible now. Rather than me not making much sense, this is exactly what I mean by having some independent and credible figure of what the climate is capable of — where do you get yours?

I don’t know how chaos entered the discussion. I didn’t bring it up, but it seems to me that it doesn’t have much bearing on the subject of how uncertain we might be about the result of a calculation or measurement from defensible estimates of how uncertain we are about the factors that go into the measurement or calculation.

Finally you ask Capt. Climate nearby about what is the difference between precision and uncertainty. As you may know there have been competing measurements of fundamental constants, each of which indicated great precision based on repeated measurements, but which differed from each other by, sometimes, scores of standard errors. Two independent measurements of the same thing differing by so much is extremely improbable. Yet it happened. It’s the difference between precision and uncertainty.

Kevin,

“But sir, the Earth has been nearly that much warmer at times in the past so there must be some combination of parameters that can and did produce a large excursion.”

At times very long ago, and with a very different atmosphere, not to mention configuration of continents etc. The Earth is not going to get into that state in the next century or so.

Much nonsense has been spoken here about uncertainty, elevating it to some near spiritual state disconnected from what a GCM might actually produce. I don’t believe that notion has any place in regular science, but the natural response is, if GCMs aren’t actually going to show it, why would we want to think about it? It is just an uncertainty in Pat’s model, not GCMs. What GCMs might do, and the way the physics constrains them, was the main criticism made by Roy. A 10°C rise in surface temperature with no great change in atmospheric impedance would lead to unsustainable IR outflux at TOA. The physics built into GCMs would prevent them entering such a state. Uncertainty for a GCM is simply the range of outputs that it would produce if the various inputs varied through their range of uncertainty. There is no other way of quantifying it.

Chaos relates to your proposition that GCMs have far too many dimensions to test by ensemble. My point is that chaos reduces dimensionality. You already see this in the Lorenz demo, where the 3D space of possible states is reduced to the 2D space of the butterfly. Chaos ensures that there is vanishing dependence on initial state. That means that all the possible dimensions associated with initial wrinkles merge. You can imagine it with a river. You could do a Monte Carlo by throwing in stones, dipping in paddles, whatever. The only things that would make a real difference downstream is a substantial mass influx, or maybe heat. That is a very few dimensions. Most fluid flow is like this. It is why turbulence modelling works.

“It’s the difference between precision and uncertainty.”

No, it is just an inadequate estimate of precision. Both describe the variability that might ensue if the measurement were done in other ways. The gap just illustrates that whoever estimated precision did not think of all the possible ways measurement methods could vary.

Nick,

“Much nonsense has been spoken here about uncertainty, elevating it to some near spiritual state disconnected from what a GCM might actually produce. I don’t believe that notion has any place in regular science, but the natural response is, if GCMs aren’t actually going to show it, why would we want to think about it.”

OMG! You just described the attitude of mathematicians and computer programmers to a T. “My program has no uncertainty in its output!”

Uncertainty is why test pilots still die when testing planes and cars at speed. And *you* don’t believe that uncertainty has any place in regular science.

The mission of science is to describe reality. To think that *your* description of reality is perfect is the ultimate in hubris, it puts you on the same plane as God – you are omniscient. It’s no wonder you can’t accept that there is uncertainty in the GCMs’ description of reality.

“You just described the attitude of mathematicians and computer programmers to a T. ‘My program has no uncertainty in its output!’”

No, I’m saying that the output of the program reflects the uncertainty of the inputs, modified by whatever the processing does. And so you need to know what the processing does.

But uncertainty has to be connected to what the output can actually produce. In terms of your black boxes, Pat’s curves are like saying the output of the box is ±40 V, when the power supply is 15 V.

“No, I’m saying that the output of the program reflects the uncertainty of the inputs, modified by whatever the processing does. And so you need to know what the processing does.”

You have been fighting against Pat’s thesis which *is* uncertainty of input. Internal processing simply cannot decrease uncertainty caused by uncertain inputs. Internal processing can only *add* more uncertainty which Pat did not address.

“But uncertainty has to be connected to what the output can actually produce.”

Actually it does *not* have to do so in an iterative process. The iterative process should *stop* when the output becomes so uncertain that the iterative process is overwhelmed by the uncertainty. The uncertainty only grows past what the output can actually produce because the process is carried past the point where the output is overwhelmed.

“Pat’s curves are like saying the output of the box is ±40 V, when the power supply is 15 V.”

The curves should stop when the uncertainty goes past +/-15 V. It would appear that what you are actually trying to say is that the uncertainty level of the GCMs doesn’t matter – it is valid to continue the iterative process past the point where the uncertainty overwhelms the output. The *only* reason to continue further is because you don’t care about the uncertainty interval. If you stop the iterative process at the point where the uncertainty overwhelms the output then you will never see the uncertainty interval growing past what the model output can reach.

Think about it. If the uncertainty is large enough you won’t even be able to get past the first step! You won’t know if your model resembles reality or not! If after the first step your model shows a temp increase of 1 but the uncertainty is +/- 2 you won’t even know for sure if the sign of your output is correct!

+1 big time, Tim Gorman.

“Second, beyond the idea that climate models do not have errors of this sort, there is the insistence of Nick Stokes that a Monte Carlo simulation would actually be more appropriate to determining the value of climate models than would be the error propagation you introduced. In principle he is correct,”

I don’t agree. All such a Monte Carlo analysis would show is the sensitivity of the model to input variations. It won’t help define the uncertainty in a deterministic model in any way, shape, or form. Each and every run would still have an uncertainty associated with its output. The very fact that inputs have an uncertainty which allows varying the inputs is a 100% guarantee that the outputs will have an uncertainty based on the uncertainty of the inputs.

I agree with you about Monte Carlo analysis in these cases, Tim.

The efficacy of a Monte Carlo analysis in the context of physical science would require its use to evaluate the output distribution of a physical model already and independently known to be physically complete and correct.

An example might be Thermodynamics. If someone were calculating some complex gas-phase system in which the PVT phase diagram of the gas mixture is not well known, then a Monte Carlo evaluation of the dependence of the calculations on the uncertainty widths of the incompletely known PVT values would reveal the uncertainty in the predicted behavior.

But that’s only because Thermodynamics is independently known to be a physically complete theory and to yield correct and accurate answers when the state variables of gases are well known.

Climate models incorporate incomplete or wrong physics. They are not independently known to give accurate answers. The mean of a Monte Carlo distribution of climate model results may be well-displaced from the physically correct answer, but no one can know that. Nor how far the displacement.

The physically correct answer about climate is not known. Nor even is a tight range of physically likely answers. No one knows where the answer lies, concerning future air temperatures.

So, a Monte Carlo interval based upon parameter uncertainties (a sensitivity analysis) tells no one anything about an interval around the correct answer — accuracy.

It only tells us about the spread of the model — precision.

At some future day, when physical meteorologists have figured out how the climate actually works, and produced a viable and falsifiable physical theory then, and only then, might a Monte Carlo analysis of climate model outputs become useful.

A long way to agree with you. 🙂
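Pat's point, that a Monte Carlo interval measures the spread of a model rather than its distance from the correct answer, can be shown with a deliberately biased toy model. Every number here is invented; the Monte Carlo over the parameter's uncertainty recovers the model's precision but is blind to the structural bias:

```python
import random
import statistics

TRUE_VALUE = 10.0
BIAS = 3.0  # structural error from wrong physics; the Monte Carlo never sees it

def biased_model(parameter):
    return TRUE_VALUE + BIAS + parameter

random.seed(42)
# Monte Carlo over the parameter's uncertainty distribution (sigma = 0.5)
samples = [biased_model(random.gauss(0.0, 0.5)) for _ in range(10_000)]

print(statistics.mean(samples))   # ~13: a tight, confident-looking answer
print(statistics.stdev(samples))  # ~0.5: the spread the ensemble reports
# The ensemble spread says +/-0.5, yet every sample sits ~3 units from the
# true value of 10. The spread measures precision, not accuracy.
```

No amount of resampling the parameter can reveal the 3-unit displacement, because the displacement is not in the sampled distribution.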

Pat,

The very fact that some think it is necessary to vary the inputs to the climate models in order to evaluate uncertainty is tacit admission that there *is* uncertainty in the model inputs and outputs. Once that is admitted then the next step is to actually evaluate that uncertainty – something the warmists refuse to do and will fight to the death to kill any suggestion they need to do so. The very fact that some think you can determine uncertainty by using uncertain inputs is a prime example of the total lack of understanding about uncertainty. It’s a snake chasing its own tail. Sooner or later the snake eats itself!

You’ve been a hero of this conversation, Tim.

It’s clear that whatever training climate scientists get, almost none of them ever get exposed to physical error analysis. The whole idea seems foreign to almost all of that group.

My qualifiers refer to three of my four Frontiers reviewers. I am extraordinarily lucky those people were picked. Otherwise I’d still be in the outer darkness of angry reviewers who have no idea what they’re going on about.

Pat,

“It’s clear that whatever training climate scientists get, almost none of them ever get exposed to physical error analysis. The whole idea seems foreign to almost all of that group.”

It’s not just climate scientists. My son received his PhD in Immunology in the recent past. He has always been a perfectionist (a real pain in the butt sometimes :-)) and meticulous in everything he does. He is now involved in HIV research. He has told me many times that part of the reason so many experiments are not reproducible today in his field is because few researchers bother to do any uncertainty analysis in their experiment design, execution methodology, and analysis methods, e.g. like your post about titration. It just seems to be so endemic in so much of the academic hierarchy today. My son didn’t listen when his undergraduate advisor told him not to worry about taking statistics courses – that you could always find a math major to do that! Unfreakingbelievable!

Kevin, you wrote a very interesting post, and I regret not having the time to comment there.

I completely agree with your take on propagation. Misunderstanding that one point is the source of about 99.9% of all the critical objections I’ve received.

In Chemistry, we call uncertainty ‘propagation of error,’ because typically the initial uncertainty metric derives from some calibration experiment that shows the error in the measurement or the model.

That’s pretty much what Lauer and Hamilton provided with their annual average (simulation minus observation) rmse of (+/-)4 W/m^2 in LWCF; i.e., a calibration error statistic derived from model error.

The reason I chose a year is because that’s typically how air temperature projections are presented, and also the LWCF rmse was an annual average. So the two annual metrics pretty much dovetailed.

I, too, was surprised by the size of the uncertainty envelope, but that was how it came out, and one has to go with the result, whatever it is.

Your point that if “a credible uncertainty analysis results in bounds beyond what a physical phenomenon is capable of producing, … [that] indicate[s] something wrong or incomplete with the model” is dead on.

I have been making that point in as many ways as I could think of. So have Tim Gorman, Clyde Spencer and many others here.

But not one climate modeler has ever agreed to it. Nick Stokes and Mr. ATTP have dismissed your very point endlessly. They see no distinction between precision and accuracy.

It may be that Mototaka Nakamura is talking about the uncertainty in TOA flux. As I recall, Graeme Stephens published an analysis showing the uncertainty in various fluxes. He reported that the TOA flux wasn’t known to better than (+/-)3.9 W/m^2.

Another stunner he reported was that the surface flux wasn’t known to better than (+/-)17 W/m^2. And then modelers talk about a 0.6 W/m^2 surface imbalance.

About your, “I wonder if instead you might consider an alternative of a secular trend with an uncertain slope?”

I did a pretty standard uncertainty analysis. Call it a first attempt. If you or anyone would like to essay something more complete, I’d be all for it. 🙂

Thanks Kevin.

What the heck is the first term on the right hand side of equation 1? Please show. The equation does not make sense. If it is a forcing it can not be dimensionless.

https://www.cawcr.gov.au/technical-reports/CTR_042.pdf

This says forcing are in Watts per meter sqrd.

F = 5.35 ln(C/C0). This is the forcing equation; it is in watts per square meter.
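The forcing expression quoted above is easy to evaluate; for a doubling of CO2 it yields the familiar ~3.7 W/m^2:

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Simplified CO2 radiative forcing, F = 5.35 ln(C/C0), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(560.0, 280.0))  # doubling CO2: ~3.71 W/m^2
print(co2_forcing(280.0, 280.0))  # no change in concentration: 0.0
```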

https://www.friendsofscience.org/assets/documents/GlobalWarmingScam_Gray.pdf

If equation 1 is valid, or if the forcing equation is valid, they should be able to tell the change in temperature of a jar of air going from 0% CO2 to 100% CO2.

Anthony’s jar experiment proved higher concentration of CO2 does not lead to higher temperature.

mkelly, here’s the description of the first term of eqn. 1 from page 3 of the paper:

“The f_CO2 = 0.42 is derived from the published work of Manabe and Wetherald (1967), and represents the simulated fraction of global greenhouse surface warming provided by water-vapor-enhanced atmospheric CO2, taking into account the average of clear and cloud-covered sky. (my bold) … The full derivation is provided in Section 2 of the Supporting Information …”

Please consult Supporting Information Section 2 for details.

The following statement by Nick S also caused some dissonance for me:

My question is, “How could you know that any future value produced by the GCM was below any reasonable value?” — the data does not exist yet — it is future data yet to be observed — it is unreal. You could only know, when the time arrives when the data was recorded, and then you would compare the actual data to previous-forecast data about what it might be to see how the latest piece of real data measured up.

Something has to give you reason to have confidence 50 or 100 years out that a given forecast has some dependability. If the same unknowns are being used over and over again for decades on end, how can the uncertainty about these unknowns not balloon up to ridiculous sizes that make the forecasts useless?

Thus, as I see it, it is the forecast that is meaningless, not the uncertainty interval that gives a basis to trust the forecast.

I did not see any terms for greenhouse gas concentrations in Pat Frank’s equations. This means that according to Pat Frank, the uncertainty would be the same in RCP2.6 models or if unchanging CO2 is modeled as it is in RCP8.5 models.

“fCO2 is a dimensionless fraction expressing the magnitude of the water-vapor enhanced (wve) CO2 GHG forcing relevant to transient climate sensitivity but only as expressed within GCMs”

from description of equation 1 in Pat’s paper

The uncertainty is in the inability of models to simulate cloud fraction, Donald. The uncertainty doesn’t depend on CO2 emissions or concentrations.

I think the big disconnect is in the term “propagation of error”. There are basically two types of errors: calibration, and precision/noise. Pat is talking about calibration (accuracy) errors, and others like Nick are thinking of precision errors, while others are confusing the two. Here’s my stab at explaining the difference:

Precision/noise error:

If you take a digital picture of a scene, you will sometimes see some speckling in the darker areas. This is due to the light intensity in those areas being too close to the lowest sensitivity level of the pixels in the image sensor. Transistor switching noise at this level will cause random differences between adjacent pixels. Most cameras have filter software that can average out this kind of noise to reduce this speckling considerably. This improves the visual image because noise was reduced. Details that were obscured before may now be visible.

Accuracy/Calibration error:

Now take that same image and look at a very small detail (perhaps in the background); one that is too small to identify. Zoom up that detail with software. It is now a bit “grainier” or “blockier”, but not any clearer. Zoom up again. It gets bigger and blockier, but you still can’t make out any more detail. In fact, it probably gets worse the more you zoom up. This is because the camera only has so many pixels per inch in the image sensor. The detail you are trying to resolve simply was not captured with enough pixels to tell what it is. You are just missing too much information, and nothing you can do in post processing can produce that missing information. You can interpolate to estimate it, but that is still just a guess. It may be a good guess, or a bad guess, but you can’t *know* either way. If the stakes are small, then maybe a guess is good enough, but if the stakes are big, then you want to know, not guess.

Maybe a few more people will get it now. Alas, some never will.

What about the 3rd most important kind of error. Incomplete or completely wrong foundation data.

That is…Crap goes in, crap comes out.

Most of this thread focuses on the finer points in error propagation in models, and the discussion has been instructive.

In my almost 40 years in modeling and simulation working for a major defense technology company partnering with Los Alamos NL, our modeling for the USAF in various classified domains resulted in over 10M lines of code to represent large numbers of stochastic, interacting entity and aggregate level systems. It would not have occurred to us to worry about propagation of error if we did not fully understand the behavior we were modeling, and validate and thoroughly verify the model or simulation representation. Why is this not true of the GCMs? It seems to me that climate dynamics has large areas that remain poorly understood. Or did I miss something?

Because in order to verify GCMs you have to wait 80-100 years to see the result (clue = 42).

The GCMs are “tuned” using currently available data, so testing it is not meaningful.

The other part is that they continually “update” the GCM’s with the effective note, “please disregard previous projections of inaccurate models”. Consequently, projections from 20 years ago are to be ignored. I have yet to see any reference to any study or paper that analyzes the errors and uncertainty of previous models when compared to two decades of actual data. I think more time, money and effort is expended on learning what and by how much to change parameters than that spent on learning specifically what scientists don’t know.

Up above, I showed an email relating to uncertainty in measurements of conventional historic air temperatures managed by the BOM. Some points arising:

1. The official estimate that Australia warmed by 0.9 deg C in the century starting 1900 has to be viewed in context of individual observations being one sigma +/- 0.3 deg C. Or larger.

2. That 0.3 degrees was calculated for electronic thermometers introduced in the mid 1990s. The errors with liquid in glass thermometers and their screens are highly likely to make the figure larger, as is the transition from LIG to electronic.

3. But the BOM notes its work on errors is still in progress. It is relevant to ask why official figures for warming are being calculated and used for policy formulation when their accuracy and precision is still unknown. To me, this is simply horribly poor science.

4. Because we do not know the accurate figure for warming here, we need to keep an eye on the destinations of these data, such as estimates of global warming. These BOM temperatures are exported to places like Berkeley, GISS, Hadley. They are some of the core inputs to GCM studies like the CMIP series.

5. It follows that at least some of the CMIP inputs have unknown or unstated uncertainties.

6. The GCM process needs a formal error study to add to this work by Pat Frank.

7. There is no way that GCMs, with unknown uncertainties, should be used for international or national policy formulation.

8. There ought to be a law preventing people from using work in progress as if it was the fully-studied, error-known full Monty. Geoff S.

As someone who has performed billions of measurements with advanced instruments (mostly automated, of course), I’m not sure that I agree that uncertainty does not result in errors. Perhaps it’s an issue of semantics and this is not what Pat is saying.

I think Pat wins the day simply with the fact that models are “tuned” by hand (“fudge-factored”) to match known results over an entire period, and various models use different tunings. This is pure bull$#!&, and arguing otherwise just throws your credibility out the window. Either you have an accurate model that covers all major factors, one that can be developed using, say, 1900 to 1960 data only and then run for 1961 to 2020 with zero additional tweaking while still matching the global temperature for 1961 to 2020, or you have to fake the match from 1900 up to now. And then you claim it’s good for predicting the temperature in 2100.
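The held-out test described above (calibrate on an early period only, then verify against a later period with no further tweaking) can be sketched in a few lines. Everything below is synthetic and hypothetical, a toy linear trend standing in for a model, not a real GCM or real temperature records:

```python
import numpy as np

# Toy illustration of out-of-sample verification with synthetic "observations".
rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
true_temp = 0.008 * (years - 1900) + 0.1 * np.sin(2 * np.pi * (years - 1900) / 60.0)
obs = true_temp + rng.normal(0, 0.05, size=years.size)  # invented data, K anomaly

train = years <= 1960   # calibration period: 1900-1960 only
test = ~train           # held-out verification period: 1961-2020

# Fit a simple linear trend on the calibration period alone.
coef = np.polyfit(years[train], obs[train], 1)
pred = np.polyval(coef, years)

# Skill is judged only on the untouched verification period.
rmse_test = np.sqrt(np.mean((pred[test] - obs[test]) ** 2))
print(f"held-out RMSE: {rmse_test:.3f} K")
```

The point of the design is that the verification-period error is computed from data the fit never saw; a tuned match over the full record provides no such test.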

“I think Pat wins the day simply with the fact that models are “tuned” by hand (“fudge-factored”) to match known results over an entire period”

That is not a fact.

If the GCM’s can’t handle clouds properly through computation of equations of physical properties, then just how are clouds handled if not by programming “tuned or fudge-factored” parameters?

Nick Stokes, October 16, 2019 at 10:53 pm

“I think Pat wins the day simply with the fact that models are “tuned” by hand (“fudge-factored”) to match known results over an entire period”

That is not a fact.

Why so?

“All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!

If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior:

Figure 1. Yearly- and global-average longwave infrared energy flux variations at top-of-atmosphere from 10 CMIP5 climate models in the first 100 years of their pre-industrial “control runs”.

Importantly, this forced-balancing of the global energy budget is not done at every model time step, or every year, or every 10 years, it is done once, for the average behavior of the model over multi-century pre-industrial control runs.

The ~20 different models from around the world cover a WIDE variety of errors in the component energy fluxes, as Dr. Frank shows in his paper, yet they all basically behave the same in their temperature projections for the same (1) climate sensitivity and (2) rate of ocean heat uptake in response to anthropogenic greenhouse gas emissions.

“Thus, the models themselves demonstrate that their global warming forecasts do not depend upon those bias errors in the components of the energy fluxes (such as global cloud cover) but are …”

…“tuned” by hand (“fudge-factored”) to match known results over an entire period.

That is a fact; see Roy Spencer. These are his words on the fudging that occurs.

Model tuning to match target observables is discussed here; a far from exhaustive list:

Kiehl JT. Twentieth century climate model response and climate sensitivity. Geophys Res Lett. 2007;34(22):L22710.

Bender FA-M. A note on the effect of GCM tuning on climate sensitivity. Environmental Research Letters. 2008;3(1):014001.

Hourdin F, et al. The Art and Science of Climate Model Tuning. Bulletin of the American Meteorological Society. 2017;98(3):589-602.

Mauritsen T, et al. Tuning the climate of a global model. Journal of Advances in Modeling Earth Systems. 2012;4(3).

Lauer and Hamilton also mention model tuning:

“The SCF [shortwave cloud forcing] and LCF [longwave cloud forcing] directly affect the global mean radiative balance of the earth, so it is reasonable to suppose that modelers have focused on ‘‘tuning’’ their results to reproduce aspects of SCF and LCF as the global energy balance is of crucial importance for long climate integrations. …

The better performance of the models in reproducing observed annual mean SCF and LCF therefore suggests that this good agreement is mainly a result of careful model tuning rather than an accurate fundamental representation of cloud processes in the models.”

I am confused. It cannot be possible that this was not part of the curriculum of climate science! This is elementary science (experimental physics). We had it in high school and then later during the bachelor studies for engineering and physics!

And if prominent climate scientists don’t grasp the difference between statistics and energy state differential equations of physical properties, then something is truly wrong in the climate sciences.

You got it, Max.

“something is truly wrong in the climate sciences”

Hey James,

Is there a different definition of “precision” used in GCMs than in the rest of science? *The Future of the World’s Climate* (Henderson-Sellers and McGuffie, 2012, Elsevier) lists the following four uncertainties associated with GCM models:

1. “Uncertainty due to the imperfectly known initial conditions. In climate modelling, a simulation only provides one possible realization among many that may be equally likely, and it requires only small perturbations in the initial conditions to produce different realizations […]”

2. Uncertainty due to parameterization of processes that occur on scales smaller than the grid scale of the model (that covers cloud physics).

3. Uncertainty due to numerical approximation of the non-linear differential equations.

4. Uncertainty due to precision of the hardware on which the model is run.

Authors freely admit that it is difficult to determine precisely how much uncertainty is associated with each of these sources. However, and that’s the interesting bit in my view: “their overall impact can be estimated by the spread in the climate simulations from many GCMs”.

So it looks like modelers equate the spread of runs from multiple simulations with the total uncertainty.

The authors refer to several different GCM runs that generated a spread of outputs of about 0.5°C. “This is an indication of the level of uncertainty associated with the simulation of global temperature from GCMs”.
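The book’s procedure, reading total uncertainty off the spread of an ensemble of runs, is easy to sketch with made-up numbers, and the sketch also exposes its blind spot: a systematic bias common to every run leaves the spread unchanged. All values below are hypothetical:

```python
import numpy as np

# Hypothetical final-state temperatures from an ensemble of 20 model runs (K).
rng = np.random.default_rng(1)
ensemble = 14.0 + rng.normal(0, 0.25, size=20)

spread = ensemble.max() - ensemble.min()   # range across the ensemble
sigma = ensemble.std(ddof=1)               # 1-sigma inter-model spread
print(f"ensemble range: {spread:.2f} K, 1-sigma spread: {sigma:.2f} K")

# A systematic bias shared by every run shifts them all equally
# and is completely invisible to the spread statistic:
biased = ensemble + 2.0
assert abs(biased.std(ddof=1) - sigma) < 1e-9
```

This is the precision-versus-accuracy distinction argued throughout this thread: inter-model spread measures scatter among the models, not their distance from the true climate.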

The main problem I see with most climate models is None Of The Above. It is something that I remember Dr. Roy Spencer discussing to some extent or another in his blog. The main problem I see with climate models is groupthink, with some enforcement of a party line, and with multidecadal oscillations being ignored. I see the ignoring of multidecadal oscillations when calibrating / “tuning” models (especially CMIP5 ones) as causing most of these models to be tuned so that feedbacks from GHG-driven warming produce about 0.065 degree/decade (C/K degrees) more warming than actually occurred during the most recent 30 years of their hindcasts.

I expect that if these climate models were retuned to hindcast 1970-2000 or 1975-2005 as having 0.2 degree C/K less warming than actually happened, because ~0.2 degree C/K of warming during that period came from multidecadal oscillations which these models don’t consider, their forecasting would improve so greatly that most of the people employed because of their alarmism, or because of their being incorrectly alarmist, would become unemployed.

There’s also the predictive uncertainty stemming from an incomplete and/or wrong physical theory.

Nick Stokes, October 16, 2019 at 12:15 pm

“If the IR opacity of air remains about the same and the temperature increases 10°C, then the Earth will lose heat faster than the Sun can supply it. So you are not uncertain about whether that situation could happen”

–

Nick Stokes: So you assert. But you give no rational argument. You would be better to state that you are certain that this situation could not happen. The 10 C increase is physically possible if the sun were to produce more heat in the first place, in which case the earth still has to lose heat at the same rate the sun is putting it in. Or another source of heat for the 10 C increase is magically present.

An interested reader would want to see that magic set out, step by step. I don’t believe that you can do that but if you can, let’s see it.

“The earth will lose heat faster than the sun can supply it.”

To see how wrong this is, suppose the temperature was 30 degrees higher. By your logic the planet would lose heat so much faster than the sun could put it in that it would turn into a snowball earth in weeks. Any sensible maths would say the poor sun would not have a chance to warm things up.

“Except you don’t know. Well, it might. Or not.”

That is a pathetic excuse for a wrong thermodynamic equation. Gosh, this stuff is elementary.

–

some of my phrases are undoubtedly a bit harsh.

“That’s what Nick says he does not understand.”

No, I understand it very well. You say “Gird yourself because it’s really complicated.”, but in fact, those terms are all the same. And so what your analysis boils down to is, as I said:

0.42 (dimensionless) × 33 K × (±4 W m⁻²)/(33.3 W m⁻²) × sqrt(n years)

and people here can sort out dimensions. They come out to K·sqrt(year).

“Not one of them raised so nonsensical an objection as yours.”

It’s actually one of Roy’s objections (“it will produce wildly different results depending upon the length of the assumed time step”). The units are K*sqrt(time). You have taken unit of time as year. If you take it as month, you get numbers sqrt(12) larger. But in any case they are not units of K, and can’t be treated as uncertainty of something in K.
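The sqrt(12) point is pure arithmetic and easy to check. This sketch takes the objection at face value, propagating the same per-step value (0.42 × 33 × 4/33.3, from the emulation formula quoted above) regardless of step length; whether a monthly step should in fact carry the same ±4 W/m² annual calibration statistic is exactly what is in dispute:

```python
import math

# Per-step uncertainty from the quoted emulation formula, in K per step.
per_step = 0.42 * 33.0 * 4.0 / 33.3   # ~1.66 K

years = 80
u_yearly = per_step * math.sqrt(years)        # 80 yearly steps
u_monthly = per_step * math.sqrt(years * 12)  # 960 monthly steps, same span

print(f"yearly steps:  +/-{u_yearly:.1f} K")
print(f"monthly steps: +/-{u_monthly:.1f} K")
print(f"ratio: {u_monthly / u_yearly:.3f} (sqrt(12) = {math.sqrt(12):.3f})")
```

The ratio of the two results is exactly sqrt(12), which is the step-length dependence the objection points to.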

Dorothy says:

“… I see that the calculation of uncertainty is complex or perhaps intractable. Having criticized Pat Frank’s attempt to estimate uncertainty of GCMs, it would be very helpful if Nick Stokes could provide even a ‘back of envelope’ estimation of (any) GCM uncertainty. I assume this would clarify the uncertainty in GCMs, having Pat Frank and Nick Stokes estimates to compare.”

No such comparison is useful unless a common, rigorous definition for the ‘uncertainty of a climate model’ has been adopted, and a common process for quantifying and estimating that uncertainty has been used in producing the comparison.

One would suspect that a tailored definition of uncertainty is needed for assessing the predictive abilities of a climate model. That definition can then be used as the starting point for systematically evaluating the level of uncertainty associated with any specific climate model, and also with any specific run of that climate model.

A common definition of ‘uncertainty’ as applied to the climate models might include:

1) — A summary description of the term ‘uncertainty’ as it is applied to the climate models, a.k.a the GCMs.

2) — The component and sub-component elements of climate model uncertainty; for example, cloud forcings, water vapor feedback, computational constraints, etc.

3) — The units of uncertainty measurement, their meanings, their dimensions, and their proper application to each component and sub-component element.

4) — The total uncertainty of a specific climate model run versus its component and sub-component uncertainties.

5) — Guidelines for the use of common scientific terms and measurement units in defining the units of uncertainty measurement.

6) — Guidelines for the use of uncertainty measurement units in quantifying sub-component, component, and total uncertainty.

7) — Guidelines for describing and integrating the sub-component, component and total uncertainty for a specific run of a GCM.

8) — Guidelines for comparing the sub-component, component and total uncertainties among multiple runs of a specific GCM.

9) — Guidelines for comparing the component and total uncertainties among different GCMs.

The question naturally arises, would the climate science community ever consider adopting a rigorous, systematized approach for quantifying and estimating the uncertainty of their climate models?

I leave that question to the WUWT readership to comment upon. However, it is 100% certain many of you will have definite opinions concerning the question.

Thanks Beta Blocker,

One can only imagine why such a rigid, science-based exercise has not been done.

Based only on my reading of others and not from hands-on experience with modelling GCMs, I have to wonder how to handle what others describe as a procedure. That procedure is said to be the subjective adoption or rejection of ‘runs’ that do or do not meet subjective criteria. One cannot calculate overall uncertainty for GCMs without including all runs, including those subjectively rejected, if such rejection is indeed done. Geoff S

Geoff, several people familiar with the application of computational fluid dynamics (CFD) to other areas of engineering and science have offered their commentaries on the general topic of uncertainty in computational modeling.

But none has yet described a rigorous definition for the concept of uncertainty as it is being applied to the CFD-based models used in their own scientific or engineering disciplines.

For these various CFD-driven models, is there a standard process for defining, quantifying, and estimating uncertainty for their particular science/engineering applications?

In what ways is the concept of uncertainty, as applied to the outputs of these CFD-driven models, used to inform technical and engineering decision making?

Pat still does not address the counter argument made a few times: that uncertainty might NOT track that way through these particular GCM calculations. Or better yet, it is precisely an intended property of the Navier–Stokes equations to deal with exactly that aspect. Many CFD simulations would not even be possible if they could not converge to a solution which actually retains some significance for the topic at hand.

This can be easily verified by anyone familiar with CFD inner workings. If Pat’s conclusion were to be followed, we should stop using the outcome of many other CFD solutions and thermodynamic testing of fluids, gases, fuel combustion, aerodynamics and so on. They often have similar uncertainties in the initial model.

Nick Stokes has provided some introduction earlier on this site https://wattsupwiththat.com/2019/09/16/how-error-propagation-works-with-differential-equations-and-gcms/

I still fail to see how Nick’s earlier article relates to Pat’s paper. Nick addresses error propagation in results and not uncertainty. His article addressed the following:

“So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number.”

Pat never claimed the simulations run off the rails in their results, only in certainty.

It’s not that we think the modeled results are “wrong”; only that they are too uncertain to have much value.

“Pat never claimed the simulations run off the rails in their results, only in certainty.”

Methinks that’s confusing for many. If a simulation (not necessarily a GCM) gives consistently good results, confirmed by observations, then who cares if the actual uncertainty is potentially massive? Or in other words: if the uncertainty does not manifest itself in the actual error, who cares whether it is large or small?

Except, in the case of GCMs the results are not good and are not confirmed by observations (the models run hot, with a gap between results and observations that gets wider every year as time marches forward), so yes, uncertainty is manifesting itself by dint of how bad the models are at modeling reality. The fact that the uncertainty is so big merely illustrates what is already widely known: the models are unfit for purpose.

You don’t understand Error Propagation either.

Nick added nothing. He was talking about how chaos can lead to explosive situations which would give nonsensical results in a model. But that’s not the issue.

That is completely different from the epistemological considerations of what you know based on uncertainty of measurements, and how that propagates through an equation with each step.

Anyone can force a model to converge on a falsely precise figure. That doesn’t mean it’s useful.

John Dowser, “Pat still does not address the counter argument made a few times: that uncertainty might NOT track that way through these particular GCM calculations.”

I have answered that question a zillion times, John, including in the thread comment here.

The GCMs have a linear output. Their output can be compared with observations. This comparison is sufficient to estimate the accuracy of their simulations.

Linearity of output justifies linear propagation of model calibration error through that output. This justification appears in both the abstract and the body of my paper.

Pat,

You write:-

“Linearity of output justifies linear propagation of model calibration error through that output.”

I offered four examples of uncertainty propagation in 4 linear systems here:-https://wattsupwiththat.com/2019/10/04/models-feedbacks-and-propagation-of-error/#comment-2816455

One of those examples highlights the conceptual error you made in trying to apply your uncertainty formula to correlated variables, despite the fact that your formula does not recognise covariance. As I have stated before, your equation S10.1 is a mis-statement of the reference from which you draw it; it has limited validity only for the sum of strictly independent variables and no validity when applied to the sum of correlated variables.

There is no one-size-fits-all recipe for calculation of uncertainty propagation even in a linear system.

“One of those examples highlights the conceptual error you made in trying to apply your uncertainty formula to correlated variables, despite the fact that your formula does not recognise covariance.”

What are you talking about? What correlated variables? How do you have covariance with an uncertainty interval that does not even have a probability function?

“As I have stated before, your equation S10.1 is a mis-statement of the reference from which you draw it; it has limited validity only for the sum of strictly independent variables and no validity when applied to the sum of correlated variables.”

Uncertainty *is* a strictly independent *value*. It is not a variable. The uncertainty at step “n” is not a variable or a probability function. Therefore it can have no correlation to any of the other variables. Correlation implies that if you know X then you can deduce Y through a linear relationship. There is no linear relationship between uncertainty and any other variable. You can’t determine total flux by knowing only the uncertainty in the value of the total flux. Nor can you determine the uncertainty by knowing the total value of the flux. There is no linear relationship between the two so they simply cannot be correlated.

There is no correlation between the uncertainty and any random variable thus uncertainty is independent and the uncertainty at each step is independent of the uncertainty of the previous step thus adding them using root-sum-square is legitimate. The fact that the uncertainty in each step is equal to all other steps in Pat’s analysis does not determine independence, it just makes the calculation simpler. Even if some steps were to be different for some reason they would still remain independent values not correlated to any random variable and would contribute to the root-sum-square.

kribaez, “One of those examples highlights the conceptual error you made in trying to apply your uncertainty formula to correlated variables, despite the fact that your formula does not recognise covariance.”

What correlated variables? Tim Gorman has it right.

The uncertainty statistic is a constant. It is not correlated to anything. It does not covary. There are no correlated variables.

Apart from the GUM, see Sections 4 and 5, and Appendix A in Taylor & Kuyatt (1994), *Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results*, NIST Technical Note 1297, Washington, DC: National Institute of Standards and Technology, here (pdf).

I don’t see any strength to your objection.

In your linked comment, you wrote that, “for adequacy, any emulator here requires at the very least the ability to distinguish between an error in a flux component, an error in net flux and an error in forcing.”

No, it does not. Eqn. 1 is an output emulator, not an error emulator.

You wrote, “You cannot assess the uncertainty arising from a variable Z if your equation or mapping function contains no reference to variable Z.”

The reference to variable Z derives independently from Lauer and Hamilton. It conditions the emulation in eqns. 5.

You wrote, “any of these models present a demonstrably more credible representation of AOGCM aggregate response than does Pat’s model.”

But eqn. 1 successfully emulates the air temperature projection of any advanced GCM. How is that not credible? And what could be more credible than that?

Maybe by “credible” you mean attached to physics. In that case, you’d be right, but irrelevant.

But if by “credible” you mean comports with AOGCM projections, then your dismissal of eqn. 1 is cavalier.

You wrote, “Importantly, they all lead to a substantial calculation of uncertainty in temperature projection, but one which is different in form and substance from Pat’s.”

Right. Their uncertainty would be a matter of precision. Mine is a matter of accuracy. In the physical sciences, mine is the more important.

You wrote, “I gave an example in the previous thread of a simple system given by Y = bX. First problem: X carries an uncertainty of (+/-)2 …”

But it does not. Both b and X are givens and are known exactly. The conditioning uncertainty is external and comes in independently. As a calibration error, it adds in the resolution of the calculation.

A simple example. I have used microliter syringes in titrations. They read to 0.1 microliter. Suppose I calibrate the syringe before use, by weighing syringed water using a microbalance.

I find that the calibration error averages (+/-)0.1 microliter, even though I was really visually careful to put the tip of the syringe plunger right on a barrel measurement line.

If I use that syringe to make multiple additions, then each one is of unknown volume to (+/-)0.1 microliter. After ten additions, my uncertainty in total volume added is (+/-)0.3 microliters.

When I calculate up the results, into that calculation comes an uncertainty in reagent quantity represented by that (+/-)0.3 microliters.

That uncertainty is external and is brought into, and conditions the result of, whatever thermodynamic calculation I may be making. The (+/-)0.3 microliters is a measure of (part of) the resolution of the experiment.

Look at eqns. 5. They operate the same way.
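The syringe arithmetic above is plain root-sum-square over independent additions; a short numeric check:

```python
import math

# Ten additions, each with an independent calibration uncertainty of
# +/-0.1 microliter, combined in quadrature (root-sum-square).
u_single = 0.1   # microliters per addition
n = 10
u_total = math.sqrt(n * u_single**2)   # = u_single * sqrt(n)
print(f"total-volume uncertainty after {n} additions: +/-{u_total:.3f} uL")
```

sqrt(10) × 0.1 ≈ 0.316, which rounds to the (+/-)0.3 microliters quoted above.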

Using your linear model, when the calibration uncertainty is added in, it becomes Y(+/-)y = bX(+/-)bz, where the dimensions of uncertainty (+/-)z are converted to that of y.

The independent (+/-)z uncertainty does not change in time or space. It is not dependent on the value of ‘b’ or ‘X.’ It is a measure of the resolution of the calculation.

Your enumerated third and fourth problems there stem from your first mistake of supposing the error is in X or b, and changes with their measurement. None of that is appropriate to the emulation or to the uncertainty analysis.

You wrote, “Pat’s response here is tellingly incorrect; indeed he has set up a paradox whereby the 1-sigma uncertainty in X is (+/-2) and also (+/-3.5) simultaneously.”

I was just following through the logic of your example, kribaez. Nothing more. Note my opening phrase: “The way you wrote it…”

Here’s an interesting thing. You wrote, there, “Mechanistically, each Xi value is calculated here as:- Xi = Xi-1 + ΔXi = Xi-1 + (Xi – Xi-1) so the actual realised error in Xi-1 is eliminated leaving only the uncertainty in the Xi measurement.”

But you also wrote that, “… [X] is always measured to an accuracy of (+/-)2.”

So, your (Xi – Xi-1) should really be (Xi(+/-)2 – Xi-1(+/-)2). In a difference, the uncertainties add in quadrature. So, the uncertainty in your ΔXi is sqrt(2^2 + 2^2) = (+/-)2.8. (+/-)uncertainties do not subtract away.

Your method of introducing ΔXi as a difference has caused the uncertainty to increase, not to disappear. Remove the use of a difference and the uncertainty does not increase, and your lag-1 autocorrelation disappears.
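The quadrature rule for a difference can also be checked by brute force. A small Monte Carlo sketch, assuming the two measurement errors are independent and roughly normal, as the quadrature rule requires:

```python
import numpy as np

# Difference of two independently measured values, each with 1-sigma
# uncertainty +/-2: the spread of the difference is sqrt(2^2 + 2^2) ~ 2.83.
rng = np.random.default_rng(42)
n = 200_000
x_prev = 10.0 + rng.normal(0, 2, n)   # X_{i-1} measured to +/-2
x_curr = 13.0 + rng.normal(0, 2, n)   # X_i measured to +/-2, independently
delta = x_curr - x_prev

print(f"empirical sigma of the difference: {delta.std(ddof=1):.2f}")
print(f"quadrature prediction:            {np.hypot(2, 2):.2f}")
```

If the two errors were instead perfectly correlated (the same realized error in both readings), the spread of the difference would collapse toward zero; that is the covariance question pressed elsewhere in this exchange.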

The ‘bX’ in the emulation equation 1 is just 0.42 x 33K and ΔFi. The first is a constant and the second is the standard forcings. Both amount to givens.

The rest of your analysis there is founded upon your basic misconception that the uncertainty resides in X or in b. It does not.

The uncertainty is independent of either and both. It is introduced from an external origin, it is a measure of model resolution, it conditions the result, and is a constant (+/-)value. It does not covary. It is not correlated with anything.

In fact, there is no necessary uncertainty in the uncertainty, either. It is taken as a given constant.

“It is not correlated to anything. It does not covary. There are no correlated variables”

“National Institute of Standards and Technology, here (pdf).”

From your NIST link, Eq A-3, which is the source of your Eq 3. The Guidelines say it is “conveniently referred to as the law of propagation of uncertainty”. And like your Eq (3), it has covariance terms. They say: “u(xᵢ) is the standard uncertainty associated with the input estimate xᵢ; and u(xᵢ, xₖ) is the estimated covariance associated with xᵢ and xₖ.” Not only do you give no reason for assuming the covariances are zero, you deny that they could even exist. But there it is.
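For the simplest case f = x₁ + x₂ with unit sensitivity coefficients, the NIST TN 1297 Eq. A-3 law reduces to u_c² = u₁² + u₂² + 2u(x₁, x₂). A numeric sketch with hypothetical values, showing how the covariance term changes the combined uncertainty:

```python
import math

# Combined standard uncertainty of f = x1 + x2 for three assumed
# correlation coefficients between the two input uncertainties.
u1, u2 = 2.0, 2.0   # hypothetical standard uncertainties of x1, x2
results = {}
for r in (0.0, 0.5, 1.0):
    cov = r * u1 * u2                               # u(x1, x2) = r*u1*u2
    results[r] = math.sqrt(u1**2 + u2**2 + 2.0 * cov)
    print(f"r = {r:3.1f}: combined standard uncertainty = {results[r]:.2f}")
```

With r = 0 this reduces to plain quadrature, which is the assumption defended above; the objection is that the covariance terms have not been shown to vanish.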

““u(xᵢ) is the standard uncertainty associated with the input estimate xᵢ ; and u(xᵢ , xₖ) is the estimated covariance associated with xᵢ and xₖ .””

Read this sentence again, this time for meaning.

“covariance is a measure of the joint variability of two random variables.”

“estimated covariance associated with xᵢ and xₖ”

Do you see anything about u being a random variable that has covariance with xᵢ or xₖ?

Uncertainty is a VALUE, it is not a random variable. How do you have covariance between a VALUE and a random variable?

Nick, “Not only do you give no reason for assuming the covariances are zero, ….”

I gave the definitive reason here.

And here.

And you had agreed, here.

Nick, “…you deny that they could even exist. But there it is.”

A very thin fabrication. The real Nick, for all to see.

You missed your calling. You’ve a real Vyschinskyesque talent.

John,

The counterargument to Pat is not that the uncertainty might not track that way through the GCMs, but that it most definitely DOES NOT track that way. It is ironic that Pat quotes Richard Lindzen, who IMO gets it exactly right.

“But the models used to predict the atmosphere’s response to this perturbation have errors on the order of ten percent in their representation of the energy balance, and these errors involve, among other things, the feedbacks which are crucial to the resulting calculations. ”

I have tried several times to make this point to Pat in previous threads, but he rejects the argument. For example, I noted here https://wattsupwiththat.com/2019/10/04/models-feedbacks-and-propagation-of-error/#comment-2819759

“… However, since it is impractical to run full MC experiments on the AOGCMs, there really is little choice other than to make use of high-level emulators to test uncertainty propagation arising from uncertainty in data and parameter inputs. Such tests support the existence of large uncertainty arising from cloud parameterisation, but do not support the shape of uncertainty propagation suggested by Pat. An error in a flux component (like for example LWCF) translates into a negligible effect on the net flux after the 500 year spin-up and a bounded error on the absolute temperature. It leaves the model with an incorrect internal climate state, no question. During subsequent forced temperature projections, the incorrect climate state translates into an error in the flux feedback from clouds. Multiple sampling of the initial error in cloud definition allows the uncertainty in temperature projection to then be mapped. The error propagation is then via a term R'(t) x ΔT(t), where R'(t) is the rate of change of flux due to cloud changes with respect to TEMPERATURE. Although temperature may be changing with time, this uncertainty propagation mechanism is not at all the same as your uncertainty mechanism above, or Pat’s, which are both propagated with TIME independent of temperature change.”

““But the models used to predict the atmosphere’s response to this perturbation have errors on the order of ten percent in their representation of the energy balance”

Again, error is not uncertainty. How many times must it be repeated for this to be accepted?

“An error in a flux component (like for example LWCF) translates into a negligible effect on the net flux after the 500 year spin-up and a bounded error on the absolute temperature.”

And, once again, you are trying to equate error, in this case “bounded error” with uncertainty. An uncertainty in the flux cannot simply be cancelled out. No amount of “spin-up” can cancel the uncertainty.

“Multiple sampling of the initial error in cloud definition allows the uncertainty in temperature projection to then be mapped. The error propagation is then via a term R'(t) x ΔT(t), where R'(t) is the rate of change of flux due to cloud changes with respect to TEMPERATURE. ”

And, once again, you conflate error with uncertainty. You are still arguing that you can reduce the error but that doesn’t reduce the uncertainty in that error. In essence you are saying you can make the output more accurate by tuning the model using an R'(t) factor. If the model were correct to begin with you wouldn’t need to tune it. The fact that you have to is just proof there is uncertainty in the model! And since you can’t compare the model’s future outputs with future reality you simply can’t *know* that the R'(t) you select will match what happens in the future. You can “assume” your R'(t) tuning will match but here comes that “uncertainty factor” again!

“Again, error is not uncertainty. How many times must it be repeated for this to be accepted?”

Again, sampled error from the input space yields the uncertainty spread in the output space. How many times must it be repeated for this to be accepted? There is no magic uncertainty which is not rendered visible by MC sampling. This is the recommended method in almost every reference quoted by Pat, except where the problem is simple enough to allow analytic quadrature.

Sampling of cloud error across a wide range yields no uncertainty in net flux over a 500 year period. The mathematics (not tuning) say that the net flux must go to zero, and it always will. Pat says that the net flux is equal to zero but with a massive uncertainty which is invisible even to full sampling of the cloud error distribution. This is nonsense.

“Again, sampled error from the input space yields the uncertainty spread in the output space.”

Uncertainty is not a random variable. It does not specify a probability function. Varying the input only tells you how the output responds, it doesn’t tell you anything about the uncertainty of the input or the output. If the input has a +/- uncertainty interval for an input value of A then it will still have an uncertainty interval for an input value of B. You have to act on the uncertainty of the input variable in order to lower it.

“There is no magic uncertainty which is not rendered visible by MC sampling.”

Again, the MC analysis can only tell you the sensitivity of the model to changes in the input. If I put in A and get out B and then put in C and get out D, then each of the outputs, B and D, *still* has an uncertainty associated with it. In a determinative system, where an input A always gives an output B, there simply isn’t any way to determine the uncertainty by varying the input. Inputs A and C will still have an uncertainty interval associated with them. You can’t avoid it. That also means that any output will have an uncertainty. You can’t avoid it by doing multiple runs.

Now, if on each run of the model with an input A you get different answers, then you can estimate the uncertainty interval with a large number of runs. But if the climate models give different answers each time they are run then just how good are they as models?

“This is the recommended method in almost every reference quoted by Pat, except where the problem is simple enough to allow analytic quadrature.”

Actually it isn’t. If I input 2 +/- 1 to the model and then input 3 +/- 1, just exactly what do you think the two runs tell you, i.e., an MC of 2? The input of 2 can vary within the uncertainty interval between 1 and 3. The input of 3 can vary from 2 to 4. Tell me exactly how this small MC run can tell you anything about the uncertainty of the output? The problem with your assertion is that you think you can have inputs with +/- 0.0 uncertainty and thus get outputs with +/- 0.0 uncertainty. Just how are you going to accomplish that?
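To make the arithmetic concrete, here is a toy interval sketch using a hypothetical doubling model (the inputs 2 +/- 1 and 3 +/- 1 are from the comment above; the model itself is invented for illustration):

```python
# Hypothetical stand-in model for illustration only.
def model(x):
    return 2.0 * x

# Two point runs (an "MC of 2") give just two numbers ...
point_runs = [model(2.0), model(3.0)]  # [4.0, 6.0]

# ... while interval arithmetic shows each output still carries
# the interval inherited from its input.
def interval_out(center, half_width):
    lo, hi = model(center - half_width), model(center + half_width)
    return (lo + hi) / 2.0, (hi - lo) / 2.0

out_a = interval_out(2.0, 1.0)  # output 4.0 +/- 2.0
out_b = interval_out(3.0, 1.0)  # output 6.0 +/- 2.0
```

The two point outputs say nothing about the +/- 2.0 interval each one carries.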

“Pat says that the net flux is equal to zero but with a massive uncertainty which is invisible even to full sampling of the cloud error distribution. This is nonsense.”

Nope. Pat is correct. I have two different cars moving on the highway. Their net acceleration is zero, i.e., just like the net flux. Do you know the velocity of each for certain? Net flux can be zero but still have an uncertain value for the flux itself. And since it is the total flux input that determines temperature, not the net flux, what does the net flux actually tell you? If the sun gets hotter we will have more total flux coming in; we might still have a net flux of zero, but that total flux in *is* going to have an impact on the actual temperature. If it didn’t then the sun could go dark and the earth would still maintain the same temperature.

You can’t wish away uncertainty. It just isn’t possible.

kribaez, “Again, sampled error from the input space yields the uncertainty spread in the output space. How many times must it be repeated for this to be accepted?”

Precision, kribaez. That’s all you’re offering. It’s scientifically meaningless.

For your metric to be a measure of physical accuracy, you’d have to know independently that your model is physically complete and correct, and capable of physically accurate predictions without any by-hand tuning.

Then a parameter uncertainty spread of predictive results implies predictive accuracy.

But your climate models are not known to be complete, or correct, or capable of accurate predictions. And they need by-hand tuning to reproduce target observations.

Your metric gives us no more than the output coherence of models forced into calibration similarity. That has nothing whatever to do with model predictive accuracy.

Your models are unable to resolve the physical response of the climate to the perturbation of forcing from CO2 emissions.

It matters not that they show a certain result or that they all show a restricted range of results. That behavior is a forced outcome. It’s not an indication of knowledge.

Those outcomes are not predictions. They are mere model indicators. Like Figure 4 above.

kribaez, you wrote, “… However, since it is impractical to run full MC experiments on the AOGCMs, there really is little choice other than to make use of high-level emulators to test uncertainty propagation arising from uncertainty in data and parameter inputs.”

The method you’re describing is not uncertainty propagation. It is merely variation about an ensemble mean. It’s not even error, because no one has any idea where the correct value lies.

“Such tests support the existence of large uncertainty arising from cloud parameterisation, but do not support the shape of uncertainty propagation suggested by Pat.”

That’s no surprise, though, is it. After all, variation about an ensemble mean isn’t an accuracy metric.

“An error in a flux component (like for example LWCF) translates into a negligible effect on the net flux after the 500 year spin-up and a bounded error on the absolute temperature.”

Irrelevant. The net flux and the absolute temperature have large implicit uncertainties because their physical derivation is wrong.

“It leaves the model with an incorrect internal climate state, no question.”

With this, kribaez, you admit the air temperature is wrong, the cloud fraction is wrong, etc., and that they get wrongly propagated through subsequent simulation steps.

How can you possibly not see that process of building error upon error produces an increasing uncertainty in a result?

“Multiple sampling of the initial error in cloud definition allows the uncertainty in temperature projection to then be mapped.”

You have no idea of the subsequent cloud error in a futures projection. You have no idea how the cloud fraction responds to the forcing from CO2 emissions. You have no idea of the correct temperature response.

Projection uncertainty can only increase with every projection step, because you have no idea how the simulated trajectory maps against the physically correct trajectory of the future climate.

“… where R'(t) is the rate of change of flux due to cloud changes with respect to TEMPERATURE.”

But kribaez, you don’t know the rate of change of flux due to cloud changes with respect to temperature. That metric cannot be measured and cannot be modeled.

Clouds can be measured to about (+/-)10% cloud fraction. Models can simulate cloud fraction to an average (+/-)12% uncertainty. The changes you’re talking about cannot be resolved.

Models can give some numbers, but your metrics, as determined from models, are merely false precision. They’re physically meaningless.

The LWCF uncertainty metric is derived from error in simulated cloud fraction across 20 calibration years. Those years included seasonal temperature changes. The simulation error thus includes temperature cloud response error. It includes the whole gemisch of cloud error sources. The initial calibration metric — annual average error in simulated cloud fraction — produced the annual average (+/-)4W/m^2 LWCF calibration error.

That error likewise represents the whole gemisch of simulated cloud response errors. It shows that your models plainly cannot resolve the impact of CO2 forcing on clouds. The cloud response is far below the lowest level of model resolution.

In real science, every single model run would have to be conditioned by the uncertainty stemming from that error. That means every run of an ensemble would have very large uncertainty bounds around the projection.

The ensemble average would have the rms of all those uncertainties. And if one calculates the (run minus mean) variability, it’s the rss of the uncertainties in the run and the mean. The uncertainty in the difference is necessarily larger than that of either the run or the mean alone.
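The rms/rss arithmetic described here can be sketched directly; the per-run uncertainty values below are illustrative only, not numbers taken from the paper:

```python
import math

# Hypothetical per-run uncertainties, for illustration.
run_uncertainties = [4.0, 4.0, 4.0, 4.0]

# The ensemble mean carries the rms of the individual run uncertainties.
u_mean = math.sqrt(sum(u * u for u in run_uncertainties) / len(run_uncertainties))

# (run minus mean) carries the rss of the run and mean uncertainties,
# which is necessarily larger than either one alone.
u_run = run_uncertainties[0]
u_diff = math.sqrt(u_run ** 2 + u_mean ** 2)

assert u_diff > u_run and u_diff > u_mean
```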

Your methods are not the practice of science at all. They’re the practice of epidemiology, where predictive pdfs have no accuracy meaning. They are merely anticipatory.

The accuracy of such a model pdf is only known after the arrival of the physical event. One may find that the model pdf did not capture the position of the real event at all. It was inaccurate, no matter that it was precise.

Dr. Frank,

The text of the post you’re responding to here refers back to a post on an earlier thread, the remaining part of which is enclosed in brackets below:

[“…but shouldn’t the uncertainty of clouds, as you indicate, at least be inherent in how we view the model’s results?”

Yes, it should. I am already on record as saying that I agree with Pat that the GCM results are unreliable, and unfit for informing decision-making, and I also agree with him that cloud uncertainty alone presents a sufficiently large uncertainty in temperature projections to discard estimates of climate sensitivities derived from the GCMs. Unfortunately, I profoundly disagree with the methodology he is proposing to arrive at that conclusion.]

I know that for the sake of science, agreement on methodology / process is important, but for the time being, am grateful that your work, which I agree with fully, is exposing the weakness of the GCM results. Thank you. – F.

Frank, “Unfortunately, I profoundly disagree with the methodology he is proposing to arrive at that conclusion.”

I don’t see why, Frank. It’s a straight-forward calibration-error/predictive-uncertainty analysis.

John Dowser, “Pat still does not address the counter argument made a few times: that uncertainty might NOT track that way through these particular GCM calculations.”

I have addressed that problem, John.

Air temperatures emerge from models as linearly dependent on fractional forcing input. What sort of loop-de-loops happen inside of models is irrelevant, once the linearity of output is demonstrated.

That linear relation between input forcing and output air temperature necessitates a linear relation between input uncertainty in forcing and output uncertainty in air temperature.

I pointed that out in the post you linked, here

Tim Gorman also pointed that out in the comments under the post you linked. Also here and here.

Nick tries to make it look complicated. It’s not.

Hey John,

“This can be easily verified by anyone familiar with CFD inner workings. If Pat’s conclusion was to be followed, we should stop using the outcome of many other CFD solutions and thermodynamic testing of fluids, gasses, fuel combustion, aerodynamics and so on. They have often similar uncertainties on the initial model.”

I asked a similar question under one of the previous posts: if CFD simulations run over millions of volume cells and millions of steps, even a small initial uncertainty associated with each step/cell will quickly render such a simulation useless (because it accumulates with each step). The answer I received from people more familiar with the subject was something like:

1. CFD algorithms undergo robust experimental verification and validation procedures

2. Initial conditions are often very well defined.

3. Uncertainties are far smaller than in climate models

4. Still, it is not uncommon that such CFD simulations go haywire and produce absurd results.

A few years ago I had contact with a chap who was managing a large aerodynamics CFD simulation project (aircraft in the landing configuration), with a very precise model and mesh, using a validated industry-standard CFD package. Still, it produced a detectable alpha shift compared with experimental data; such a shift had to be corrected for in the model. The guy was told that such anomalies between the results of CFD and wind tunnels are not uncommon.

I know of 1 gcm that went haywire once and disagreed with observations.

input file error

“I know of 1 gcm that went haywire once and disagreed with observations.”

“input file error”

And what was the observation in this context? According to Dr Frank, models are constantly tuned to match past observations. But that does not guarantee at all that future predictions are accurate. A good test, I reckon, would be running a GCM for a local region with a good quality record of air temperatures for the last 50 or 80 years and then, without adjusting the model to the readings, checking how the model behaves compared with actual records.

Steven Mosher, in a comment I’ve posted on this thread this morning, I ask you once again — as I have done several times before — to defend your assertion that credible scientific evidence exists for claiming that +6C of warming is possible.

https://wattsupwiththat.com/2019/10/15/why-roy-spencers-criticism-is-wrong/#comment-2825172

The uncertainty of that +6C prediction is central to its credibility. As are the uncertainties of predictions of +2C, +3C, and +4C.

And yet as far as I am aware, no systematic evaluation of the uncertainties of GCM model outputs is being done. That topic is discussed in the comment link I posted above.

Dr. Roy provides a devil’s advocate position that usefully elicits a counter-argument list like this post.

But from a decision-theoretic POV, he fails. As John McCain (RIP) said: the worst case is that we leave a cleaner planet for our children.

IMO we are way past the quibbles raised by the 1%ers like Dr. Roy. I, for one, am not willing to fight to avoid a few hundred dollars per year in transfer payments just to roll the increasingly-loaded dice so that my grandchildren will have a livable planet in 30 years, when it is increasingly obvious (p>0.9) that we are On The Eve Of Destruction.

so, from now on, please let’s have no one over the age of 50 advocating for a very low-probability future just so that fuel companies can make a profit.

“so that my grandchildren will have a livable planet in 30 years, when it is increasingly obvious (p>0.9) that we are On The Eve Of Destruction.”

Tell your grandchildren to move to Kansas/Nebraska/Iowa, the epicenter of a global warming hole. No Eve of Destruction going on here.

We didn’t have a single day of >100degF here this summer!

They changed me from Zone 4 to Zone 5 partly because of global warming.

Now all the garden centers sell Zone 5 plants and they die almost every year. I keep telling people to stop buying Zone 5 plants.

Record cold top-killed all my Zone 4 grapevines this year (isn’t supposed to happen in Zone 4).

Weather isn’t climate… until it’s hot.

It’s difficult to argue that we are on the eve of destruction where atmospheric modeled warming is lower than surface modeled warming (observed). This observation alone pretty much defeats CO2 forcing.

The irony of course being that failure to correctly control for surface urban heat effect on modeled observation likely drives the divergence defeating their own argument.

Based on the squirrel activity this fall we are going to have a cold winter. They are hauling every single nut they can find and burying them out in the back fence row. I haven’t seen them this active for literally a decade. I’ve already stocked up on suet and corn to feed all the critters this winter. Hope I won’t need it!

Going to be a hoot to see all the claims that this December through March will be the warmest on record!

chris

You stated your personal opinion: “… when it is increasingly obvious (p>0.9) that we are On The Eve Of Destruction.” However, you provided no evidence to support your opinion.

You also remarked, “… just so that fuel companies can make a profit.” You are demonstrating a narrow, biased view of technology and economics, again without any evidence to support your opinion. That is the crux of the problem. People like you feel certain that you have special insights on reality, and feel no obligation to provide proof. Yet, you would have everyone pay increased taxes, and want to silence those who see things differently. I find that to be very arrogant.

Chris, “when it is increasingly obvious (p>0.9) that we are On The Eve Of Destruction.”

That’s not obvious at all. Your thought just exemplifies modern millennialist madness. It’s groundless.

Steven Mosher has said in comments made on other climate science blogs that +6C of warming is possible.

He has also said in a comment made on WUWT that if we are to objectively determine a GCM model’s uncertainty, what the model does internally must be directly examined; i.e., just using emulations of that model’s outputs isn’t good enough for evaluating a model’s uncertainty.

—————————————————-

Steven Mosher, October 18, 2019 at 3:11 am

“Except that we know GCMs invariably project air temperatures as linear extrapolations of fractional GHG forcing.”

Nope they dont’

Look at the code.

now you CAN fit a linear model to the OUTPUT. we did that years ago

with Lucia “lumpy model”

But it is not what the models do internally

—————————————————-

It is easy for many of us to believe that a prediction of +6C of warming is more uncertain than is a prediction of say, +3C of warming. But how can that opinion be offered objectively?

If one is intent on evaluating the total uncertainty of a specific GCM model run, or of a collection of GCM model runs — doing so objectively in quantitative terms according to Steven Mosher’s requirement — then one must look closely at all the parameter assumptions and at all the computational internals. Each and every model component and sub-component. All of them. Without exception.

Quantifying and estimating the uncertainties of each component and sub-component of a GCM using a systematic approach, and then integrating those uncertainties into an overall total evaluation, would be an exceedingly difficult task.

In the comment referenced below, I offer a framework for how a common conceptual approach for evaluating these uncertainties might be defined, developed, and systematized.

https://wattsupwiththat.com/2019/10/15/why-roy-spencers-criticism-is-wrong/#comment-2824001

As far as I am aware — please correct me if I am wrong — none of the GCM modelers formally quantify and estimate the uncertainties of each component and sub-component of their models using a systematic approach.

Nor do the GCM modelers make any attempt at formally integrating and documenting those sub-component uncertainties into an overall evaluation of a model run’s total uncertainty, one that is supported by a disciplined and documented analysis.

Steven Mosher, I ask you once again, as I have done several times before, to defend your assertion that credible scientific evidence exists for claiming that +6C of warming is possible.

The uncertainties associated with that +6C claim, stated in a detailed and objective evaluation, one supported by a systematic analysis, would be central to its credibility as a scientific prediction.

As would be the case for predictions of + 2C, + 3C, and + 4C. It’s all the same thing.

“He has also said in a comment made on WUWT that if we are to objectively determine a GCM model’s uncertainty, what the model does internally must be directly examined; i.e., just using emulations of that model’s outputs isn’t good enough for evaluating a model’s uncertainty”

Steven Mosher has obviously never been handed a black box by an engineering professor and told to figure out the transfer function between the input and the output. What happens inside the black box is totally irrelevant. You simply don’t know; it’s all hidden inside the box.

That does not mean that you can’t determine the transfer function between the input and the output. And if it turns out that the transfer function is a simple linear one then, again, who cares what is inside the box?

Where Mosher gets it really wrong is that what is inside the box isn’t necessarily what determines the uncertainty. The uncertainty in the input gets included in the output. If the uncertainty of the input frequency is +/- 1 hz for example, there is no way the output of the black box can have an uncertainty less than +/- 1 hz. The uncertainty can be higher, of course, but it can never be less. If what is input to that black box is a cloud function and that cloud function is uncertain then that uncertainty gets reflected into the uncertainty of the output. And not one single thing needs to be known about what is inside the black box.

This is why it is so important for the climate modelers to at least identify the uncertainties in the inputs they use to their models. And then to recognize that uncertainties never cancel, they are not random variables that can be hand-waved away using the central limit theorem.

I think it’s a bit more complicated.

It seems like what they are really saying is that the input frequency oscillates with a +/- 1 hz uncertainty over a 20 year period; meanwhile the black box can calculate monthly feedback forcing based on the 20 year oscillation while maintaining a +/- 1 hz uncertainty over the 20 year period. Maybe I misunderstand Roy, but it seems like that is what he and Nick are saying.

I know that’s a horrible over-simplification of an analogy, but am I way off base?

You are describing a steady-state situation where the input and output never change, i.e. a +/- 1 hz uncertainty over a 20-year period. Kind of like the earth being covered in the same cloud cover over the same geographical area for the entire 20 years.

Think about the situation if the output of the first black box gets fed into another black box. The output of the first black box has an uncertain output because of the uncertainty in the input of the first black box. Thus the second black box compounds the uncertainty when it performs its transfer function.

Let’s add it up. The input to the first black box can be 10 volts at 10hz with an uncertainty of +/- 1hz. The transfer function for the box appears to be a x2 frequency multiplier. So the output becomes 10volts at 20hz +/- 2hz. (9hz x 2 = 18. 11hz x 2 = 22. a span of +/- 2hz). This then feeds into a second black box that has the same transfer function. Its output will be 10volts at 40hz +/- 4hz. (44hz – 36hz). The uncertainty compounds with each iteration. This is the case where the uncertainty is not independent so it is a straight additive.

Now, if it is the black box itself that is causing the uncertainty, perhaps due to power fluctuations or temperature drift in the components, the uncertainty compounds over each iteration as independent values thus adding in quadrature.

This actually is a good example of error vs uncertainty. I can write a transfer function to describe what I measure between the input and the output. But that doesn’t address the uncertainties associated with the inputs and internal operation. It’s exactly the same with the GCMs. They don’t address the uncertainties associated with their inputs, e.g. clouds, or with their internal operation. Thus their uncertainties compound over each iteration, just like they do with the black boxes.
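The doubler-cascade arithmetic described above can be sketched in a few lines (a toy model of the black boxes, not climate-model code):

```python
# Toy sketch of the frequency-doubler cascade described above.
# With a correlated (non-independent) input uncertainty, each x2 stage
# scales the +/- interval by the same factor.
def doubler(freq_hz, u_hz):
    return 2.0 * freq_hz, 2.0 * u_hz

f, u = 10.0, 1.0       # 10 Hz +/- 1 Hz into the first box
f, u = doubler(f, u)   # 20 Hz +/- 2 Hz
f, u = doubler(f, u)   # 40 Hz +/- 4 Hz out of the second box
```

Each pass widens the interval; the uncertainty never shrinks on its own.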

Hey Tim,

“Let’s add it up. The input to the first black box can be 10 volts at 10hz with an uncertainty of +/- 1hz. The transfer function for the box appears to be a x2 frequency multiplier. So the output becomes 10volts at 20hz +/- 2hz. (9hz x 2 = 18. 11hz x 2 = 22. a span of +/- 2hz). This then feeds into a second black box that has the same transfer function. Its output will be 10volts at 40hz +/- 4hz. (44hz – 36hz). The uncertainty compounds with each iteration. This is the case where the uncertainty is not independent so it is a straight additive.”

That’s a good analogy! So, how would you expect this uncertainty to manifest itself in this context? If we’ve got a connected set of black boxes, as you described, would you expect the output from the last converter to be 10 volts and something between 36-44 Hz? If, after several repeated runs, you have consistent output, say, 39.7-40.1 Hz, does it change anything? My understanding is that fellows such as Nick and Roy argue that because the output of the GCMs is clustered around +/- 0.5 C or a bit more, that is the actual uncertainty range.

What I meant to say was that it oscillates randomly (both temporally and in frequency) within +/-1 hz over a 20 year period.

You are describing error, not uncertainty. The uncertainty interval says nothing about the actual value the output will take, it only describes an interval in which the actual value will be found.

Tim Gorman, let’s all recognize that Steven Mosher is playing a fine game of gotcha with Pat Frank and with the other knowledgeable critics of the climate models. Nick Stokes is doing the same thing, but he is doing it in a way that does have the benefit of offering useful insight into the theoretical and computational uncertainties of the GCMs.

Mosher and Stokes have to know just how time consuming and expensive it would be to examine each and every component and sub-component of a GCM to determine what kinds of uncertainties are present and then to quantify and document all of those uncertainties.

They know too that quantifying and documenting all those uncertainties must be done by following every step the processing engine passes through, starting with reading the initial inputs on through to performing the internal computational process operations on through to producing the final outputs.

In other words, no black box. The whole enchilada, end to end. The inputs, the parameterizations, the assumed physics, the software code, and the computational correctness of the outputs according to accepted software quality assurance principles.

Mosher and Stokes also know what everyone who has faced a similar requirement in other engineering disciplines knows — that climate scientists would strongly oppose any serious demand to systematically quantify and document the uncertainties of the GCMs which now support climate change policy decision making.

Beta,

I don’t disagree with you on a theoretical basis. I would only add that on a practical basis not all components and sub-components and uncertainties would need to be evaluated in order to invalidate the models. It would be much simpler to identify, on a subjective basis, a few, maybe even only one or two, of those components and sub-components which would have the biggest impact from their uncertainties. Evaluate those and see what happens to the models.

This is basically what Pat has done. He picked one identifiable component and evaluated what that uncertainty causes. It’s not good for the reliability quotient of the models!

I agree that the mathematicians and computer programmers will never attempt even the smallest evaluation of their models. I suspect most of them don’t know how and those that do know how also know what the result would be!

That’s why Pat’s contribution is so important. It needs the widest distribution and support possible. Perhaps someone can get it to the new Energy Secretary and explain to him how the math works.

Thanks to Beta Blocker and Tim Gorman for painstakingly pointing out what is going on behind the math, and the motivations for the obfuscation, which is dressed up as dialogue:

Denial isn’t just a river in Egypt.

It is a primary defence mechanism after all.

Pat Frank has also contributed a very important result that calls to account the use of GCM forecasts as a basis for policy. He has meticulously, repeatedly, and with good grace refuted attempts to obscure the result, which, if it were more clearly understood by modelers, would suggest a significant rethink of this approach.

“Mosher and Stokes have to know just how time consuming and expensive it would be to examine each and every component and sub-component of a GCM to determine what kinds of uncertainties are present and then to quantify and document all of those uncertainties.”

So Pat took a short cut and determined the uncertainty caused by the largest known component of uncertainty, which also happens not to be addressed by the model. And once it is determined that the outputs of GCMs are meaningless, there is no need to spend years adding additional smaller uncertainties. I mean, meaningless is meaningless. It is like being ahead in the soccer game 35-2, then glorying that in the last 5 minutes you managed to score 2 additional goals to make the final score 37-2.

An approach which assesses all of the component and sub-component uncertainties would provide an objective means for determining and highlighting where inside a specific GCM the most important uncertainties influencing its predictive outputs lie, if it were to be done systematically as a tool in evaluating the credibility of a GCM’s output.

This raises a further question. How would the absence of a GCM component or sub-component, one which is thought to be necessary for making useful predictions, be quantitatively assessed for its contribution to a model’s overall uncertainty?

IOW, I believe the adage goes, “it is good enough for government work”!

“Where Mosher gets it really wrong is that what is inside the box isn’t necessarily what determines the uncertainty”

We pretty much know that GCMs cannot calculate cloud forcing, so in fact, it is what is NOT in the models (cloud physics) that is in fact causing the uncertainty in the models output.

If I model a car driving with the engine at various RPMs and the transmission in various gear ratios, but neglect to model the effect of braking, and I can show through experiment (analogous to Lauer) the amount of braking that is going on, I can calculate an uncertainty of the car model due to the lack of modeling the braking!

I’m in awe. Boy, did you folks get it down! 🙂

I’ve copied out that whole thread. It’s a keeper.

Dan Kahan needs to see these comments. The spectacle of posters, many who are obviously technically proficient, defending Pat Frank’s irrelevant paper, rife with math errors, because it supports their prejudgments, is a shining example of his System 2 findings….

“defending Pat Frank’s irrelevant paper, rife with math errors”

Addicted to using emotional arguments instead of facts, are we? You offer nothing to support your claim that the paper is irrelevant and has math errors.

“But it very much consistent with CCT, which predicts that individuals will use their System 2 reasoning capacities strategically and opportunistically to reinforce beliefs that the their cultural group’s positions on such issues reflect the best available evidence and that opposing groups’ positions do not.”

Facts are funny things. They either show reality or they don’t. Belief has nothing to do with it. I suspect the groups involved in Kahan’s studies didn’t have all the facts, couldn’t understand the math, and weren’t versed in experimental science. Thus their “belief” that their cultural group’s positions on issues reflect the best available evidence is based more on “group think” than on actual fact and analysis. That is true on *both* sides of the argument. It is a proven fact, however, that most on the climate alarmists side of the argument are mathematicians and computer programmers and are not familiar with scientific studies based in reality. See Pat’s writings for proof. See the unsupported and debunked claims that his writings are full of math errors. Since his thesis is *not* full of math errors it means it is *very* relevant. His thesis *is* falsifiable but has yet to be shown to be false. Until his thesis can be falsified his thesis stands as true and an accurate description of reality. That’s not true of the climate alarmists CGM studies which Pat’s thesis has falsified.

Looks like desperation time for Pomo State. Did coach send you in from the end of the bench to commit the intentional?

“rife with math errors”

There are no math errors.

Dan Kahan, no matter his accomplishments, shows no perceptible ability to evaluate the science himself.

An irony, given his work, is that Dan Kahan would have to rely on an argument from authority for a view of my paper.

Any guess which authority he’d pick? Might it be someone who has an egalitarian, communitarian world-view, rather than one of those awful hierarchical individualists?

+1

Pat Frank, what you are doing is important work and is greatly appreciated from the perspective of an engineer. Your responses to questions and objections make good sense as this topic continues to attract attention and you patiently keep re-stating the points from the paper and the posts. Please keep on.

Thanks, David. I appreciate your knowledge-based support.

Yours and Tim Gorman’s and Beta Blocker’s and John Q. Public’s, and angech’s, and Kevin Kelty’s.

The knowledgeable people who weigh in are a company worth having.

I am absolutely not knowledgeable but I greatly appreciate your patience and your striving for clarity.

Hi Pat, it’s me again, thought you’d be pleased 🙂

Before writing anything substantive, I’ll ask a question about your statement:

“Off-setting errors, that’s how. GCMs are required to have TOA balance. So, parameters are adjusted within their uncertainty bounds so as to obtain that result.”

I think this may be close to the crux of the problem, so I want to understand your take on it. Let us assume, for the sake of argument, that there is one parameter ‘p’, whose adjustment gives the required TOA balance. Now, the GCM simulates across many years. Did p get adjusted at the start of the run, or on every year of the run? If the former, then how can its balancing act be effective at the end of the run, and if the latter then what mechanism in the GCM actually does that?

Rich.

I can’t say how they do it, Rich, and it doesn’t matter to the uncertainty analysis.

This comment will have limited interest to most readers, and is addressed to kribaez.

kribaez, when I finally got round to studying properly your Oct8 5:42am comment on the previous thread, I found a quicker way than yours to prove non-zero covariance in your Problem 4 (you stated, without proof, that ΔXi’s have lag-1 autocorrelation -0.5). It is:

Var(X_i) = 4

Var(ΔX_i) = Var(X_i-X_{i-1}) = Var(X_i) + Var(X_{i-1}) = 8 (using the independence of X_i and X_{i-1})

4 = Var(X_i) = Var(X_{i-1}+ΔX_i) = Var(X_{i-1}) + Var(ΔX_i) + 2Cov(X_{i-1},ΔX_i) = 12 + 2Cov(X_{i-1},ΔX_i)

Cov(X_{i-1},ΔX_i) = -4

Cor(X_{i-1},ΔX_i) = -4/sqrt(4*8) = -1/sqrt(2) = -0.7071
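A quick Monte Carlo check of this derivation (the code, numbers, and seed are mine; it assumes, as in Problem 4, independent X_i with Var(X_i) = 4):

```python
import numpy as np

# Monte Carlo check of the covariance derivation above, assuming
# independent X_i with Var(X_i) = 4 (i.e. sd = 2), as in Problem 4.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 2.0, size=1_000_000)
dX = X[1:] - X[:-1]                      # ΔX_i = X_i - X_{i-1}

c1 = np.corrcoef(X[:-1], dX)[0, 1]       # Cor(X_{i-1}, ΔX_i): expect -1/sqrt(2)
c2 = np.corrcoef(dX[:-1], dX[1:])[0, 1]  # lag-1 autocorr of ΔX: expect -0.5

print(round(c1, 3), round(c2, 3))
```

With a million samples both estimates should land within a few thousandths of the closed-form values -0.7071 and -0.5.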

So the proper error propagation all comes down to how the models, in approximating reality, derive T_i from T_{i-1}. I previously wrote an equation

M(t) = a M(t-1) + B(t;z,s)

Pat has a = 1, and your Problem 4 has a = 0, which Pat can reasonably argue is hugely different. In a future comment I hope to explore intermediate values of a.

Rich, “So the proper error propagation all comes down to how the models, in approximating reality, derive T_i from T_{i-1}. I previously wrote an equation …”

Models do not derive T_i from T_{i-1}. They derive all T’s from the forcing inputs. Any final T_n is a linear sum of T_0 + sum over[(i=1->n)ΔT_i], where each ΔT_i is derived from its ΔF_i.

However models do it, T_0 -> T_n comes out as a linear extrapolation of fractional change in inputted forcing. Whatever models do inside is irrelevant. The outputs are linear with inputs. Proper uncertainty analysis does not require knowing more than that.

“I am absolutely not knowledgeable but I greatly appreciate your patience and your striving for clarity.”

David,

how, with your stated level of background, can you then assess clarity?

It might be a bit like art: “I know what I like when I see it”.

Am I allowed to believe that his appreciation is directed towards me? He could be referring to Pat, as his 7:23pm comment falls between mine and David’s.

I have something very important (I believe) to say about model emulators and their error propagation, but first I want to agree with kribaez where he writes “sampled error from the input space yields the uncertainty spread in the output space…there is no magic uncertainty which is not rendered visible by M[onte]C[arlo] sampling”. Tim Gorman keeps banging on about uncertainty being a number, which can therefore have no covariance with anything. But as I wrote on the previous thread, with various examples to justify it, uncertainty is properly a distribution of a random variable which sometimes, but not always, describes an “error”; and the spread, or variance, or standard deviation, of the uncertainty is sometimes taken, as in the +/-u notation, to be the “uncertainty” itself. But this is to throw away information, because having done that, different u’s cannot be properly compounded except by magic.

However, in this comment, I am not going to dwell on covariance, and for now I am going to give Pat Frank a free pass on that. Instead I am going to dwell on the nature, value, and fidelity, of any climate model emulator, and introduce a new one to you for comparison. Pat has written “Models do not derive T_i from T_{i-1}. They derive all T’s from the forcing inputs.”, and while that is true for GCMs, Pat’s own emulator effectively does derive T_i from T_{i-1}.

His emulator can fairly be written, I believe, as

(1): T(t) = T(t-1) + b(f(t)-f(t-1)) + U(t)

where T(t) is a good emulator of the mean of an ensemble of GCMs, b is a constant, f(t) is the total and known anomaly in GHG forcing at time t, and U(t) is an error term. Though setting U(t) = 0 gives a good fit overall, T(t) does not then exactly match the ensemble mean, so U(t) is a necessary correcting error term. Pat conflates uncertainty distribution with uncertainty value and therefore writes +/-u_t in place of U(t), but this notational difference is not problematic. I shall assume that U(t) has zero mean and a variance of s^2 independent of t (and Pat has derived credible estimates of s from the +/-4 W/m^2 cloud forcing errors). (1) then implies that:

T(t) = sum_0^{t-1} U(t-i) + T(0) + b(f(t)-f(0))

We can choose our anomaly baselines so that T(0)=0 and f(0)=0, so

(2): T(t) = sum_0^{t-1} U(t-i) + b f(t)

Now, under the assumption of no covariance between different U(j)’s we can derive

(3): E[T(t)] = b f(t), Var[T(t)] = sum_0^{t-1} Var[U(t-i)] = ts^2

Later on I’ll use the simplification f(t) = td for some constant d, with the implication E[T(t)] = bdt. I don’t believe that Pat should have a problem with the above, as it verifies his results under the given assumptions.
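Under the no-covariance assumption, (3) can be checked by direct simulation; a minimal sketch (the constants b, d, s and the seed are arbitrary choices of mine):

```python
import numpy as np

# Simulate emulator (2), T(t) = sum of U's + b*f(t) with f(t) = t*d,
# and check (3): E[T(t)] = b*d*t and Var[T(t)] = t*s^2.
# The constants here are arbitrary illustrative choices.
rng = np.random.default_rng(1)
runs, t_max, b, d, s = 50_000, 50, 0.4, 0.3, 0.2

U = rng.normal(0.0, s, size=(runs, t_max))        # iid U(t), zero mean, var s^2
T = np.cumsum(U, axis=1) + b * d * np.arange(1, t_max + 1)

print(T[:, -1].mean())   # expect ~ b*d*t_max = 6.0
print(T[:, -1].var())    # expect ~ t_max*s^2 = 2.0
```

The simulated mean tracks the forcing term while the variance grows linearly in t, i.e. the uncertainty grows like s*sqrt(t).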

Now I present to you a new emulator:

(4): X(t) = (1-a)X(t-1) + c f(t) + G(t)

What is the purpose of this? Why can’t I just let Pat’s emulator be? Well, suppose in some sense it is a better emulator: wouldn’t we want to use it instead?

Comparing (4) with (1), my X, c, G replace T, b, U respectively, merely so that we know which model we are referring to. ‘a’ is a new number between 0 and 1, and it reflects the saying “today’s warmth is due to today’s sunshine, not yesterday’s”. (OK, I just made that one up.) The point is that temperatures don’t really add together directly. Radiative forcing influences temperature, temperature influences storage of heat, and storage of heat influences radiative forcing. So some of yesterday’s sunshine gets retained in the earth or the sea, and some of that may be returned to augment sensible temperature today. But if the sun ain’t out today, yesterday’s heat only helps a little.

My G will generalize U by allowing a non-zero mean z as well as variance s^2. Now, assuming X(0) = 0,

(5): X(t) = sum_0^{t-1} (1-a)^i (cf(t-i) + G(t-i))

Then with the simplifying f(t) = dt and some algebra it can be shown that:

(6): E[X(t)] = cd(at + a – 1 + (1-a)^(t+1))/a^2 + z(1 – (1-a)^t)/a,

Var[X(t)] = s^2(1-(1-a)^(2t))/(2a-a^2)

Assuming that a > 0, the following asymptotics hold as t increases:

(7): E[X(t)] = cdt/a + O(1), Var[X(t)] = s^2/(2a-a^2)

If we choose c = ab then we get:

(8): E[X(t)] = bdt + O(1)

and that is identical to E[T(t)] = bdt derived below (3), apart from the O(1) term.

How can we tell these two fine emulators apart, since they both match the GCM ensemble mean very well? The answer is the variance, which when square rooted gives the standard deviation alias “uncertainty bound”. T(t) has s.d. s sqrt(t), X(t) has s.d. tending upwards to the limit s/sqrt(2a-a^2).

An emulator is of no value unless, as well as fitting GCM runs for the past, it reasonably predicts the spread from running them into the future – that, after all, is surely what an uncertainty spread means? Running GCMs into the future, or consulting past archived results of such runs, would surely settle the question of whether T(t) or X(t), or neither, is a faithful emulator of GCMs. If climate science thinks this is an important question, then I’m sure that amongst the billions spent on it some computer time could be devoted to answering this.

To recap, the emulator (by taking c = ab in (4)) which is

(9): X(t) = (1-a)X(t-1) + ab f(t) + G(t)

has a better physical justification than Pat’s (1), emulates model temperatures almost identically, and has a much smaller “uncertainty bound”.
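The contrast between the two variance behaviours is easy to see numerically. A sketch under the assumptions above, with the forcing terms dropped since they shift the mean but not the variance (numbers and seed are mine):

```python
import numpy as np

# Compare uncertainty growth in the two emulators, forcing terms omitted
# (they shift the mean but do not affect the variance).
rng = np.random.default_rng(2)
runs, t_max, s, a = 20_000, 200, 1.0, 0.2

G = rng.normal(0.0, s, size=(runs, t_max))

# Emulator (1): T(t) = T(t-1) + U(t)  ->  sd grows like s*sqrt(t)
T = np.cumsum(G, axis=1)

# Emulator (9): X(t) = (1-a)X(t-1) + G(t)  ->  sd tends to s/sqrt(2a - a^2)
X = np.zeros(runs)
for t in range(t_max):
    X = (1 - a) * X + G[:, t]

print(T[:, -1].std())                   # ~ s*sqrt(200), about 14.1
print(X.std(), s / (2*a - a**2)**0.5)   # both about 1.67
```

The random-walk emulator’s spread keeps widening without bound, while the a > 0 emulator’s spread saturates at the asymptotic value in (7).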

Rich,

“uncertainty is properly a distribution of a random variable which sometimes but not always describes an “error”,”

Sorry, this is just plain wrong. You still have not bothered to read the JCGM all the way through, have you?

From the JCGM:

“6.1.2 Although uc(y) can be universally used to express the uncertainty of a measurement result, in some commercial, industrial, and regulatory applications, and when health and safety are concerned, it is often necessary to give a measure of uncertainty that defines an interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand. The existence of this requirement was recognized by the Working Group and led to paragraph 5 of Recommendation INC‑1 (1980). It is also reflected in Recommendation 1 (CI‑1986) of the CIPM.”

“6.2.1 The additional measure of uncertainty that meets the requirement of providing an interval of the kind indicated in 6.1.2 is termed expanded uncertainty and is denoted by U. The expanded uncertainty U is obtained by multiplying the combined standard uncertainty uc(y) by a coverage factor k:

U = ku_c(y) (18)

The result of a measurement is then conveniently expressed as Y = y ± U, which is interpreted to mean that the best estimate of the value attributable to the measurand Y is y, and that y − U to y + U is an interval that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to Y. Such an interval is also expressed as y − U ≤ Y ≤ y + U.”

Please note carefully that the guide is talking about an INTERVAL and not a random variable with a mean and a standard deviation.

“the spread, or variance, or standard deviation, of the uncertainty is sometimes taken, as in the +/-u notation, to be the “uncertainty” itself. But this is to throw away information, because having done that different u’s cannot be properly compounded except by magic.”

Nothing is being thrown away. See the JCGM. And of course uncertainty can be compounded in an iterative process. You claim they can’t but you give no proof.

From the University of North Carolina physics department “Introduction to Measurements and Error Analysis”: “Note that the relative uncertainty in f, as shown in (b) and (c) above, has the same form for multiplication and division: the relative uncertainty in a product or quotient is the square root of the sum of the squares of the relative uncertainty of each individual term, as long as the terms are not correlated.”

Pat also referenced Bevington and Robinson, 2003 in his statement “The final change in projected air temperature is just a linear sum of the linear projections of intermediate temperature changes. Following from equation 4, the uncertainty “u” in a sum is just the root-sum-square of the uncertainties in the variables summed together, i.e., for c = a + b + d + … + z, then the uncertainty in c is ±uc=sqrt(ua^2+ub^2+ud^2+…+uz^2) (Bevington and Robinson, 2003). The linearity that completely describes air temperature projections justifies the linear propagation of error. Thus, the uncertainty in a final projected air temperature is the root-sum-square of the uncertainties in the summed intermediate air temperatures.”
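For readers following along, the root-sum-square rule quoted above is a one-liner in code; the example values below are invented for illustration:

```python
import math

# Root-sum-square combination of uncorrelated uncertainties, per the
# Bevington/UNC rule quoted above. Example values are made up.
def rss(*us):
    return math.sqrt(sum(u * u for u in us))

print(rss(0.1, 0.1, 0.1))   # 0.1*sqrt(3): larger than 0.1, smaller than 0.3
print(rss(*[0.1] * 100))    # n identical terms compound as u*sqrt(n)
```

Note the second case: n identical per-step uncertainties compound to u*sqrt(n), which is the growth law at issue in this whole thread.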

“My G will generalize U by allowing a non-zero mean z as well as variance s^2. Now, assuming X(0) = 0,”

You are still falling into the trap of trying to make an uncertainty interval into a random variable with a probability function, a mean, and a standard deviation. The uncertainty interval is *NOT* a random variable, it has no probability function, nor does it have a mean and a standard deviation. If it *did* have these then based on the central limit theorem you could predict the most accurate result from the model.

Would it help if we started talking about a confidence interval instead of an uncertainty interval?

Tim, (October 25, 2019 at 8:19 am )

first of all, thanks for referring to the JCGM document, so that we have something palpable to discuss.

Tim:”Please note carefully that the guide is talking about an INTERVAL and not a random variable with a mean and a standard deviation.”

Please read carefully: “6.1.2 Although uc(y) can be universally used to express the uncertainty of a measurement result,……..”

So there IS a distribution of the measurand “y”, combined (“c”) in the conventional way from parent distributions, and given as the standard deviation “u”. This is necessary and sufficient to describe a Normal Distribution.

Then it goes on there: “….in SOME commercial, industrial, and regulatory applications, and when health and safety are concerned, it is OFTEN necessary to give a measure of uncertainty that defines an interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand. ” [uppercase mine U.]

In selected contexts, “some”, “often”: you see, this is not a universal recipe, as you would like to have it.

It is, in my view, a recommendation HOW TO PRESENT the analysis to the customer/public/decision takers, given in the precautionary sense that the thin ends of the tails of the distribution are not to be carelessly ignored, while extreme events, as low as their probability of occurrence may be, can have severe consequences.

The following cited Section 6.2.1 in the JCGM recommends calculating the range as multiples of sd=u (sic! distribution!), calling it “expanded uncertainty”, and giving the range of it.

Nothing substantial, just presentation.

Rich (cited by Tim): “the spread, or variance, or standard deviation, of the uncertainty is sometimes taken, as in the +/-u notation, to be the “uncertainty” itself. But this is to throw away information, because having done that different u’s cannot be properly compounded except by magic.”

Tim :” Nothing is being thrown away.”

Right. Even if the recommendation is followed (not relevant in our context), the respective distribution is there, just deliberately not presented to the public, and if not lost by accident, can be referenced or processed subsequently.

Tim : “And of course uncertainty can be compounded in an iterative process.”

Yes, compounded as summing the variances of the means being summed (distributions!). Note that it is the variance of the MEAN which is processed, i.e. u(mean)^2 = u^2/n, u^2 being the variance of the sample or experiment . The root of the sum then delivers a sd=u of the mean. No iteration needed.

(The above is the simplest case. Generally, covariances and weighting factors may have to be considered.)
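The two operations being contrasted in this exchange behave very differently, and that is easy to demonstrate; a sketch (my example and numbers, not from the JCGM):

```python
import numpy as np

# Averaging n noisy measurements shrinks the sd of the MEAN by sqrt(n);
# summing n uncertain terms grows the sd of the SUM by sqrt(n).
# Example numbers are mine.
rng = np.random.default_rng(3)
runs, n, s = 50_000, 100, 0.5

e = rng.normal(0.0, s, size=(runs, n))
sd_of_mean = e.mean(axis=1).std()   # ~ s/sqrt(n) = 0.05
sd_of_sum = e.sum(axis=1).std()     # ~ s*sqrt(n) = 5.0

print(round(sd_of_mean, 3), round(sd_of_sum, 2))
```

Which of these two operations a climate projection corresponds to is, in effect, what Tim and U. are arguing about.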

Tim : “You are still falling into the trap of trying to make an uncertainty interval into a random variable with a probability function, a mean, and a standard deviation. The uncertainty interval is *NOT* a random variable, it has no probability function, nor does it have a mean and a standard deviation. ”

I hope I have shown that the opposite is true. There is nothing particular about the u-notation but for the naming; all uncertainty evaluations are identical with those you would do in the error-deviation world. Do you think it is a wise concept to sum up variances, just to obtain an empty range for the final result? It could be more economically done with just ranges as input.

Tim : “If it *did* have these [mean and a standard deviation] then based on the central limit theorem you could predict the most accurate result from the model. ”

Yes, in principle, but call it precise!!! (Pat, as you know, has no respect for those who don’t get this distinction right! Whereas he himself is rather relaxed in the usage.)

And no, in practice it would be ridiculous to drive precision toward zero sd, apart from the costs, when there is lots of bias involved. I think they are still fairly busy now with the hiatus issue.

Tim : “Would it help if we started talking about a confidence interval instead of an uncertainty interval?”

The question goes to Rich, but my suggestion is that you try to get rid of this burnt-in reflex of “uncertainty is not error!” (I really wonder where you got it from). Take another pass over the JCGM, and you’ll see that it’s all about means and variances: distributions. And do we agree that a +/- k*sd is *not* a confidence interval according to the literature? The CI is based on the sd of the mean, the one that shrinks with n.

Looking forward to your progress !

U.

“So there IS a distribution of the measurand “y”, combined (“c”) in the conventional way from parent distributions, and given as the standard deviation “u”. This is necessary and sufficient to describe a Normal Distribution.”

You keep missing that this is talking about a MEASUREMENT and not the input to or the output of a mathematical model!

When the inputs to a mathematical model contain uncertainty, as Pat Frank has shown, the output then has uncertainty. The JCGM is useful in talking about some aspects of uncertainty in a mathematical model but it is not definitive on that subject; it is aimed at something else. Look at the examples given in the document: not a single one speaks to determining the uncertainty of a mathematical model, but instead to how to measure a temperature or voltage.

“It is, in my view, a recommendation HOW TO PRESENT the analysis to the customer/public/decision takers, given in the precautionary sense that the thin ends of the tails of the distribution are not to be ignored in a careless attempt, while extreme events, as low as their probability of occurrence may be, can have severe consequences .”

Once again we see you conflating the concepts of a random variable with a probability distribution with the concepts of uncertainty. When I tell you the model inputs have an uncertainty of +/- 4 W/m^2, exactly what does that tell you about any supposed probability distribution of the output, e.g. the “tails of the distribution”?

The uncertainty interval gives you a range in which the output might exist; it does not tell you the probability of each point in the interval, which is what a probability distribution would do.

“u^2/n”

Uncertainty is not summed and then divided by n. It is a root-sum-square. No denominator. It is a vector addition of independent, orthogonal values, not some kind of convolution of probability distributions.

“Yes, in principle, but call it precise !!!”

This can be done with MEASUREMENTS, i.e. using a micrometer to measure the thickness of a sheet of media. It simply cannot be done with uncertainty because uncertainty is not a probability distribution.

“And no, in practice it would be ridiculous to drive precision toward zero sd, apart from the costs, when there is lots of bias involved. I think they are still fairly busy now with the hiatus issue.”

It is impractical because uncertainty isn’t subject to the central limit theorem since it isn’t a probability distribution. Please, *please* keep in mind the difference between taking the measurement of a voltage with a digital meter and determining the uncertainty interval for the output of a mathematical model. They are *not* the same.

“The question goes to Rich, but my suggestion is that you try to get rid of this burnt-in reflex of “uncertainty is not error !” (I really wonder where you got it from). ”

It is a burnt-in reflex because it is based on reality. I got it from reality. I got it from designing fish plates to connect steel girders. I can measure the length of the girders down to a gnat’s behind using the techniques in the JCGM. But when mixing different runs of girders, each of which I can measure down to the gnat’s behind, in an iterative span of any specific number of girders, then the connecting fishplates had better be designed in such a manner as to be able to handle the uncertainty of length that mix of girders will provide. There *is* a difference between measurement precision and outcome uncertainty. In the physical world this becomes quite obvious very quickly.

“Take another pass over the JCGM, and you’ll see that it’s all about means and variances: distributions.”

MEANS AND VARIANCES OF MEASUREMENTS! Not of the uncertainty of the output of iterative runs of a mathematical model.

“And do we agree that a +/- k*sd is *not* a confidence interval according to the literature ? The CI is based on the sd of the mean, the one that shrinks with n.”

No, I do *not* agree. When I see the output of a mathematical model expressed as X degC +/- u degC I see a confidence interval which tells me where the true value might lie. That interval has no probability distribution, no mean, no standard deviation and therefore no n. The output of the model is *not* a measurement whose error can be driven to zero using the central limit theorem. If the central limit theorem doesn’t apply then it is not a probability distribution.

What that uncertainty interval tells me is that when they speak of the model being able to resolve differences over a number of iterations where the differences are smaller than the uncertainty interval then someone is blowing smoke up your butt! A model trying to resolve 0.1 degC differences with an input of +/- 1 degC uncertainty and an uncertainty in the output that is greater than the input uncertainty is a joke.

Rich,

I think my proof in the appendix here is relevant. You can construct a differential equation of arbitrary uncertainty propagation characteristic which includes any prescribed solution, which could be your past values.

An appendix that angech showed is misleading here, wherein he finished with this congratulatory statement, “Well done. Particularly all the spiel while swapping the peas.”

Also here, where he concluded, regarding your effort, Nick, “Well deflected. All I can do is to point out your inconsistencies.”

Rich, “I want to agree with kribaez where he writes “sampled error from the input space yields the uncertainty spread in the output space…there is no magic uncertainty which is not rendered visible by M[onte]C[arlo] sampling”.”

He’s wrong, and so are you, Rich. Tim Gorman refuted kribaez here, where he wrote, “Again, the MC analysis can only tell you the sensitivity of the model to changes in the input. If I put in A and get out B and then put in C and get out D then each of the outputs, B and D, *still* have an uncertainty associated with each.”

The uncertainty derived from a calibration experiment is entirely independent from the model variation due to variations in inputs.

I also refuted the idea here, writing, “[The input space] metric gives us no more than the output coherence of models forced into calibration similarity. That has nothing whatever to do with model predictive accuracy.”

Also here, “The method you’re describing is not uncertainty propagation. It is merely variation about an ensemble mean. It’s not even error, because no one has any idea where the correct value lies.”

Honestly, I think it’s a bit disingenuous of you to proceed as though kribaez’ view had been unexamined.

You wrote, “uncertainty is properly a distribution of a random variable.”

Not when it’s derived from an empirical calibration error. You continue to treat error in science with the closed-form ideas of statistics. They are useful only as a guide, not as a bound.

I’m going to paraphrase Einstein to try to get the point across. “Statistics without contact with science becomes an empty scheme. Science without statistics is—insofar as it is thinkable at all—primitive and muddled. However, no sooner has the statistician, who is seeking a clear system, fought his way through to such a system, than he is inclined to interpret the thought-content of science in the sense of his system and to reject whatever does not fit into his system. The scientist, however, cannot afford to carry his striving for statistical systematic that far. He accepts gratefully the statistician’s conceptual analysis; but the external conditions, which are set for him by the facts of experience, do not permit him to let himself be too much restricted in the construction of his conceptual world by the adherence to a statistical system. He therefore must appear to the systematic statistician as a type of unscrupulous opportunist…”

Your continued strict recourse to statistical ideas is inappropriate, Rich. They limit your thinking.

Physical science deals with a messy physical world; a world that is much more messy than statistics allows. Approximations and estimates are central to success in science.

Calibration uncertainty is not a random variable. Statistical methods are used to determine calibration error, but the structure of the error itself violates statistical axioms.

Physical scientists don’t care about that violation because the uncertainty estimate is useful, indeed central, to an appraisal of predictive reliability.

You wrote, “I am going to give Pat Frank a free pass on [covariance].”

You’re not giving me a free pass on anything. There is no covariance in a constant calibration uncertainty. It doesn’t vary.

You wrote, “His emulator can fairly be written, I believe, as (1): T(t) = T(t-1) + b(f(t)-f(t-1)) + U(t)”

But my emulator is eqn. 1, and eqn. 1 has no error term, Rich.

Let’s compare your equation with eqn. 1: ΔT = f_CO2 x 33K x {[F_0+(sum over ΔF_i)]/F_0}.

That’s nothing like your equation. No uncertainty term. No T on the right side at all, except the greenhouse 33 K.

Let’s look at an individual emulation step, added to an intermediate term of step “i”:

T(i) = T(i-1) + {f_CO2 x 33K x [F_0+ΔF_(1->(i-1))+ΔF_i]/F_0}.

That’s nothing like what you wrote. Your

b(f(t)-f(t-1))

is nothing like

{f_CO2 x 33K x [F_0+ΔF_(1->(i-1))+ΔF_i]/F_0}

Why you think yours is “fairly written” is anyone’s guess, when mere inspection shows that it is not.

Now, let’s compare your equation with eqn. 5, which actually does include an uncertainty term (positionally equivalent to, but not identical with, your U(t)).

ΔT_i ±u_i = [f_CO2 x 33K x (F_0+ΔF_i)/F_0] ±[(f_CO2 x 33K x 4 W/m^2)/F_0].

Eqn. 5 looks like eqn. 1, doesn’t it, except for the addition of the uncertainty due to model calibration uncertainty.

So, eqn. 5 doesn’t look like your equation, either.
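For concreteness, eqn. 1’s anomaly form and eqn. 5’s per-step uncertainty can be sketched in code. The coefficient values below (f_CO2, F_0) are illustrative placeholders, not the paper’s fitted numbers; only the ±4 W/m^2 comes from the discussion above:

```python
import math

# Sketch of the emulation equation and its per-step uncertainty, as
# written in this thread. f_co2 and F_0 are placeholder values, NOT
# the paper's fitted numbers.
f_co2 = 0.42       # assumed emulation coefficient (illustrative)
F_0 = 34.0         # assumed baseline greenhouse forcing, W/m^2 (illustrative)
u_lwcf = 4.0       # LWCF calibration uncertainty, W/m^2 (from the thread)

def delta_T(dF_sum):
    """Anomaly form of eqn. 1: f_CO2 x 33K x (sum of ΔF_i)/F_0."""
    return f_co2 * 33.0 * dF_sum / F_0

# Eqn. 5's per-step uncertainty term:
u_step = f_co2 * 33.0 * u_lwcf / F_0

# Root-sum-square propagation over n steps gives u_step * sqrt(n):
n = 100
u_n = u_step * math.sqrt(n)
print(delta_T(4.0), u_step, u_n)
```

The point of the sketch is structural: the uncertainty term rides alongside each projected increment and compounds as sqrt(n), independently of whatever the projected anomaly itself does.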

Once again, you’re imposing a false model onto my work, yet again arguing from a straw man position.

You wrote, “Though setting U(t) = 0 gives a good fit overall, T(t) does not then exactly match the ensemble mean, so U(t) is a necessary correcting error term.”

Wrong analogy, then, Rich.

If your (1) does not fit an ensemble mean without your U(t), then it not only does not do as well as paper eqn. 1, but it also requires a term that has no counterpart in paper eqn. 1.

Eqn. 1 will nicely match any ensemble mean because f_CO2 varies with the individual projection. It will be different for an ensemble mean relative to its value for a single projection run.

Take a look at paper Figure 7: very nice emulations of the CMIP5 ensemble mean. With no uncertainty term to correct the emulation.

If one does a “perturbed physics” series using a single model, eqn. 1 can fit every single one of them.

And uncertainty, your U(t), makes no contribution to any of the emulations of paper eqn. 1.

You wrote, “Pat conflates uncertainty distribution with uncertainty value …”

A calibration uncertainty interval is not a distribution. It’s an empirical value. Yet another example of you continuing to impose your incorrect meanings onto my work.

You wrote, “I shall assume that U(t) has zero mean …”

An assumption without any relevance to an uncertainty from empirical calibration error. You’re just imposing assumptions that necessarily lead to your desired conclusion, Rich. That’s called a tendentious argument.

You wrote, “it reasonably predicts the spread from running them into the future – that, after all, is surely what an uncertainty spread means?”

No, it surely doesn’t mean that. The uncertainty interval is a spread within which the correct value somewhere lies (though we do not know where, and the interval mean is not the most probable value).

Once again, and like so many others, you’ve mistakenly supposed the uncertainty interval is equal to model projection spread — the spread of predicted outcomes.

You wrote, “[Your emulator] has a better physical justification than Pat’s (1), emulates model temperatures almost identically, and has a much smaller “uncertainty bound””

Your “abf(t)” employs the same forcing terms, which provides no better physical justification.

Your “G(t)” is an assumption — your invention, really — and has no physical justification at all. The ±4 W/m^2 is a known calibration uncertainty directly derived from GCM simulations.

That means the uncertainty bound calculated by propagating that calibration uncertainty is a physically true conditional of GCM air temperature projections.

Your argument is wrong throughout.

Pat, you have given a pretty full reply which will take me a little time to digest (I was busy today so far). But there is one thing I can ask you about immediately, to do with your “Once again, and like so many others, you’ve mistakenly supposed the uncertainty interval is equal to model projection spread — the spread of predicted outcomes.”

Now, if the model projection spread can include randomization of inputs within the confines of limits on errors in its parameters (calibration errors I suppose), are you still saying that at the far end your uncertainty interval does not relate to the model projection spread? If so, I am struggling to understand any useful meaning of your paper. So if you could clarify this point that would certainly help. If you could write some maths by way of example, that would help even more, because as you can see, I am trying to understand how the maths fits together to support your conclusions.

Rich, “are you still saying that at the far end your uncertainty interval does not relate to the model projection spread?”

Yes.

In fact, not one of the uncertainty intervals along the projection represents the model air temperature projection spread at that point.

Instead, each interval represents the width of ignorance about the value (the position within the interval) of the physically correct temperature. The correct value is lost within that interval.

“If you could write some maths by way of example, that would help even more, because as you can see, I am trying to understand how the maths fits together to support your conclusions.”

Look at the papers extracted here.

They will give you the analytical approach to uncertainty, and its meaning.

My conclusion cannot be understood by strict reference to mathematics, Rich.

Physics is not about math. It’s about causality. It’s about objective knowledge about what we have observed. That means physical sciences must have a way to represent residual ignorance, so as to condition their conclusions.

That’s what an uncertainty analysis does. It provides an ignorance interval. One does not know, within that interval, where the physically correct answer lies.

Instrumental resolution is an example of such uncertainty. A claimed measurement magnitude that is smaller than the given instrument can resolve has no physical meaning.

If a classical liquid-in-glass (LiG) thermometer resolution is (+/-)0.25 C, it can produce no data more accurate than that. That interval is the pixel size; everything inside it is a uniform blur. A temperature reading taken from that thermometer and written as, e.g., 25.1 C, is meaningless past the decimal.

In terms of practicalities, that (+/-)0.25 C acknowledges that the thermometer capillary is not of uniform width; that the inner surface of the glass is not perfectly smooth and uniform; that the liquid inside is not of constant purity; that the entire thermometer body is not at constant temperature.

All these things are uncontrolled variables and add up to produce errors of unknown size in a measurement. They vary with the thermometer, and with the age of a thermometer. Which is why thermometers must be periodically re-calibrated.

Even if one can visually estimate the distance between the inscribed lines to (+/-)0.1 C, the reading has no meaning because the position of the liquid is not at the correct spot for the external temperature.
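The resolution point above can be illustrated with a few lines of Python. This is a hypothetical sketch only: the uniform draw stands in for the unknown error distribution inside the (+/-)0.25 C interval, and the numbers are the ones from the thermometer example.

```python
import random

RES = 0.25  # LiG thermometer resolution, +/- C (hypothetical example value)

def read_thermometer(true_temp):
    # The instrument cannot resolve structure finer than RES: any reading
    # lands somewhere inside a 2*RES-wide "pixel" around the true value.
    return true_temp + random.uniform(-RES, RES)

random.seed(1)
readings = [read_thermometer(25.1) for _ in range(5)]
# The first decimal place varies from reading to reading and carries
# no information about the true temperature.
print([round(r, 2) for r in readings])
```

Every reading is “25-ish”; writing 25.1 C from such an instrument claims a digit the hardware cannot supply.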

Lin and Hubbard (2004) have discussed analogous sources of error and resolution limits in modern electronic thermometers: Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks, J. Atmos. Ocean. Technol. 21, 1025-1032.

All physical scientists deal with this — resolution limits, errors, and uncertainty — as a matter of course in their work. Uncertainty can be expressed using statistics, but is outside the realm where statistical mathematics applies exactly.

One can’t deal with uncertainty from a purely statistics perspective. One must approach uncertainty by way of empirical calibration experiments. And then the uncertainty interval violates all the rules of statistical inference.

Uncertainty does not represent random error, does not include iid values, ideas of stationarity do not apply. Uncertainty has no distribution and its mean is not the most probable physical value.

If you like, take a look at Rukhin (2009), Weighted means statistics in interlaboratory studies, Metrologia 46, 323-331; doi: 10.1088/0026-1394/46/3/021. Section 6 discusses Type B (systematic) errors. Along the way, Rukhin observes that if the type B error does not have a mean of zero, “then all weighted means statistics become biased, and [the measurement mean] itself cannot be estimated.” That is exactly the case with a calibration uncertainty interval. Even worse, the interval mean has no discrete physical significance at all.

Rukhin’s interlaboratory analysis has real-world significance. Geoff Sherrington will tell you all about the terrifying outcome from tests of interlaboratory coherence of analytical results, when the moon rocks were being analyzed.

Your approach to the problem from inside statistics is inappropriate, Rich. It won’t lead you anywhere useful.

Pat, I have just read this and will make a very quick reply and then think harder. I see now that you are worrying about the accuracy of the models rather than (or as well as) their precision, and to be fair I believe you made a comment along those lines to kribaez, but I hadn’t noticed this strongly in your paper so I had in fact ignored that. Nevertheless, when I was formulating my model Equation (4) above, I had considered including a “reality” term, but rejected that; I can now reconsider.

But I should like to return to my original question, and ask you to answer how you would compare my emulator based on (4) with your emulator based on (1) – you have been discarding the error term (U(t) for (1)) for the purpose of your emulator but then effectively reintroducing it later when considering uncertainty (e.g. equation 6 in your paper). This may possibly be justifiable. But in any case, I believe that if I treat my emulator in the same way as you have yours, I come up with a lower uncertainty interval. Do you accept that, and how would you distinguish between your emulator and mine?

I am going to continue to press mathematical models and statistics to the limit, and if in the end I have to give up I can always fall back, perhaps unfairly, on your intriguing quote from Einstein: “He therefore must appear to the systematic statistician as a type of unscrupulous opportunist…”.

Further reply to Pat Oct26 12:39pm

Pat, here is a more considered reply, prepending your comments with P: and mine with R:.

P: Instead, each interval represents the width of ignorance about the value (the position within the interval) of the physically correct temperature. The correct value is lost within that interval.

R: I am happy with that, apart from a detail which will become apparent further down.

P(R): “If you could write some maths by way of example, that would help even more, because as you can see, I am trying to understand how the maths fits together to support your conclusions.”

Look at the papers extracted here.

They will give you the analytical approach to uncertainty, and its meaning.

R: I’m quite comfortable with that approach, because for example “X=X_i(measured) (+/-)dX_i (20:1)” clearly shows they are using statistical theory under the bonnet, where the 20:1 is defined as a probability that the “true” value will be in the interval, and there is reference to that being “2-sigma” which shows that the underlying distribution is normal (approximately).

P: My conclusion cannot be understood by strict reference to mathematics, Rich.

R: Then that’s sad – I don’t understand how you can expect any credibility in that case. I don’t care what Einstein may have said about statistics, because when it came to the crunch he was always very careful with his mathematics, to the point of learning new stuff like manifold theory.

P: Physics is not about math. It’s about causality. It’s about objective knowledge about what we have observed. That means physical sciences must have a way to represent residual ignorance, so as to condition their conclusions.

R: Yes, but that representation is through mathematics, which includes the possibilities that “uncertainties” are correlated. The null hypothesis is that they are not, and it may be that in your case you may have demonstrated it in your paper, or it may be true, or both. In any case I am not concerned about that right now.

P: That’s what an uncertainty analysis does. It provides an ignorance interval. One does not know, within that interval, where the physically correct answer lies.

Instrumental resolution is an example of such uncertainty. A claimed measurement magnitude that is smaller than the given instrument can resolve has no physical meaning.

If a classical liquid-in-glass (LiG) thermometer resolution is (+/-)0.25 C, it can produce no data more accurate than that. That interval is the pixel size; everything inside it is a uniform blur. A temperature reading taken from that thermometer and written as, e.g., 25.1 C, is meaningless past the decimal.

In terms of practicalities, that (+/-)0.25 C acknowledges that the thermometer capillary is not of uniform width; that the inner surface of the glass is not perfectly smooth and uniform; that the liquid inside is not of constant purity; that the entire thermometer body is not at constant temperature.

All these things are uncontrolled variables and add up to produce errors of unknown size in a measurement. They vary with the thermometer, and with the age of a thermometer. Which is why thermometers must be periodically re-calibrated.

R: The fact that many errors add together to make up the total “uncertainty” is precisely why your statement “everything inside is a uniform blur” is false, because the sum of the errors is well approximated by a normal distribution. And that is why a +/-u (20:1) uncertainty is quoted. Not only is the true value not uniform inside the interval, it is not even guaranteed to be inside that interval (there’s a 5% chance it’s outside). This got discussed on the previous thread with Tim Gorman and his 12+/-1” rulers. To measure 10 feet I proposed using 10 independent rulers, and the uncertainty was then not +/-10” but a much smaller value.
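The ten-rulers claim is easy to check by simulation. This is my own sketch, with each ruler’s error taken uniform in +/-1 inch purely as a stand-in; the conclusion needs only independence and zero mean.

```python
import random
import statistics

random.seed(42)

def measure_10_feet():
    # Lay ten independent 12-inch rulers end to end; each contributes
    # its own error somewhere in [-1, +1] inch.
    return sum(12.0 + random.uniform(-1.0, 1.0) for _ in range(10))

trials = [measure_10_feet() for _ in range(100_000)]
# Worst case is +/-10 in, but the RSS prediction for the spread is
# sqrt(10) * (1/sqrt(3)) ~ 1.83 in, and the simulation agrees.
print(round(statistics.stdev(trials), 2))
```

The standard deviation of uniform(-1, 1) is 1/sqrt(3) ≈ 0.577 in, so ten independent rulers give sqrt(10)·0.577 ≈ 1.83 in, far below the all-errors-aligned bound of 10 in.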

P: Even if one can visually estimate the distance between the inscribed lines to (+/-)0.1 C, the reading has no meaning because the position of the liquid is not at the correct spot for the external temperature.

R: This (“no meaning”) is not true because the uncertainty from the visual estimation has to get added to the instrumental uncertainty, and sqrt(0.25^2+0.1^2) is smaller than sqrt(0.25^2+0.25^2). Nevertheless it shows that it is futile to attempt too great a visual resolution, because the advantage rapidly diminishes in the +/-sqrt(0.25^2+e^2) as e is reduced.
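The diminishing-returns arithmetic in that reply can be tabulated directly (straight RSS, using the numbers already quoted):

```python
import math

INSTRUMENT = 0.25  # instrumental uncertainty, +/- C

# Combined uncertainty sqrt(0.25^2 + e^2) for ever-finer visual estimates e:
for e in (0.25, 0.10, 0.05, 0.01):
    combined = math.sqrt(INSTRUMENT**2 + e**2)
    print(f"visual +/-{e:.2f} C  ->  combined +/-{combined:.3f} C")
```

Below e ≈ 0.1 C the combined value is pinned near 0.25 C: sharper eyesight cannot beat the instrument.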

P: Lin and Hubbard have discussed analogous sources of error and resolution limits in modern electronic thermometers: (2004) Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks J. Atmos Ocean Technol. 21, 1025-1032.

All physical scientists deal with this — resolution limits, errors, and uncertainty — as a matter of course in their work. Uncertainty can be expressed using statistics, but is outside the realm where statistical mathematics applies exactly.

R: I agree that the mathematics cannot be applied exactly, because there is uncertainty in the uncertainties (e.g. how close to normal is the actual distribution of relevance). But the theory is generally GEFGU (Good Enough For Government Use). It saves people money.

P: One can’t deal with uncertainty from a purely statistics perspective. One must approach uncertainty by way of empirical calibration experiments. And then the uncertainty interval violates all the rules of statistical inference.

R: Please explain which rules of statistical inference it violates. The root-sum-of-squares rule fits in very nicely with statistical theory, provided that your next paragraph is contradicted.

P: Uncertainty does not represent random error, does not include iid values, ideas of stationarity do not apply. Uncertainty has no distribution and its mean is not the most probable physical value.

R: Tim Gorman gave some examples trying to support that thesis, but in every case I was able to demonstrate an underlying statistical model. For example I devised a scheme wherein I could, with sufficient purchasing power, test a manufacturer’s claim that their rulers were 12+/-x” (where x was 1 but could have been a different fixed number).

P: If you like, take a look at Rukhin (2009) Weighted means statistics in interlaboratory studies Metrologia 46, 323-331; doi: 10.1088/0026-1394/46/3/021.

Section 6 discusses Type B (systematic) errors. Along the way, Rukhin observes that if the type B error does not have a mean of zero, “then all weighted means statistics become biased, and [the measurement mean] itself cannot be estimated.” That is exactly the case with a calibration uncertainty interval. Even worse, the interval mean has no discrete physical significance at all.

R: That conclusion depends on whether the mean was incorrectly assumed to be zero and whether any attempts were made to detect or estimate the bias through, as you say, interlaboratory analysis. A ruler manufacturer can (in theory) go to the NPL in Teddington, Middlesex (where my sister happens to live) to get a good estimate on the bias as well as mean error in his rulers. But certainly bad assumptions can lead to invalid results.

P: Rukhin’s interlaboratory analysis has real-world significance. Geoff Sherrington will tell you all about the terrifying outcome from tests of interlaboratory coherence of analytical results, when the moon rocks were being analyzed.

Your approach to the problem from inside statistics is inappropriate, Rich. It won’t lead you anywhere useful.

R: Well, we’ll see. I still think it already illuminated some features in the rulers problem, and my next posting will be on progress on emulator models. God willing – one must bear in mind the uncertainty of living to complete the work!

“Tim Gorman gave some examples trying to support that thesis, but in every case I was able to demonstrate an underlying statistical model.”

Actually you didn’t, Rich. If you take a number of girders from different manufacturers you can measure the length between their connecting holes down to a gnat’s behind using statistical methods as described in the JCGM. And each of those girders will have small differences in their length, perhaps a small difference even in girders from the same manufacturer, but differences nonetheless.

When you design the fishplates to connect those girders together in an iterative process you better include an uncertainty factor to allow for the various lengths you know so precisely and for the mixing of those precisely measured girders of different lengths. No amount of statistics, calculation of means, and of standard deviations will help you in such a process. You can calculate what the uncertainty interval is and that is about all. That uncertainty interval will run from all short girders to all long girders and there is no amount of statistics that will help you design those fishplates any better than that uncertainty interval. They better be designed to handle anywhere in that uncertainty interval.

You simply never showed how statistics could help in such a case. It just went ignored.

Pat is correct. In physical engineering there are uncertainties that are not subject to statistics. They just *are* and you need to recognize what they are or you run into big trouble.

Rich, I’m not going to concern myself with your emulator.

The LWCF uncertainty interval I use derives directly from the calibration of climate models against measured observables.

That calibration uncertainty defines a resolution lower limit of climate models.

The uncertainty does not enter the emulation at all.

Your definition of an uncertainty bound as something that, “reasonably predicts the spread from running [GCMs] into the future” is just an epidemiological variation in model output. A predictive pdf.

Your U, G is not a predictive uncertainty bound as understood in the physical sciences, unless one has a perfectly correct and complete physical theory. Which climate modelers do not. By far.

Any number of times, it’s been pointed out that yours is not the meaning of a predictive uncertainty bound derived from propagated calibration error. And yet you continue to go back to it.

I don’t know how to say this except baldly, Rich, but every single one of your statistics based analyses has been thoroughly malapropos.

I wish you would stop trying to force your ideas into an arena where they most thoroughly do not belong. Your push is pure square-peg-round-hole-ism.

Rich, “clearly shows they are using statistical theory under the bonnet,”

Statistical methods, Rich. Not necessarily statistical theory. When error is not known to be normal, we still calculate an SD and report it as an uncertainty. Even though it violates the underlying statistical assumptions.

That’s what Einstein’s “unscrupulous opportunist” means.

“which includes the possibilities that “uncertainties” are correlated.”

LWCF uncertainty is an unvarying constant, Rich. It doesn’t correlate with anything.

“I don’t understand how you can expect any credibility in that case.”

I used “strict reference”, didn’t I? I’m not worried about credibility among statisticians who must worry about closed-form niceties. I’m worried about knowing whether a result is reliable or not. That’s what “strict reference” means. It means I use the method because it tells me something I need to know — reliability — even though the use violates statistical assumptions. Unscrupulous opportunist that I am.

“The fact that many errors add together to make up the total “uncertainty” is precisely why your statement “everything inside is a uniform blur” is false, because the sum of the errors is well approximated by a normal distribution.”

You don’t know that is true. Tim Gorman has pointed out to you almost ad nauseam that empirical uncertainty intervals have no known distribution. And here you ignore that.

It’s clear you do not understand the meaning of resolution. Resolution is the limit of measurable data or calculational accuracy. Everything smaller than that limit is a blur.

Models that have a resolution limit of (+/-)4 W/m^2 cannot resolve a smaller perturbation. They are blind to it. If I wanted to quote that resolution limit as a 20:1 statistic, and say the limit is (+/-)8 W/m^2, that would not imply a normal distribution. It would imply only that I am applying a stricter standard.

“And that is why a +/-u (20:1) uncertainty is quoted”

That is not why a (+/-)20:1 uncertainty is quoted. A (+/-)20:1 uncertainty is quoted because it is a useful measure of reliability, even when the error distribution is unknown or skewed.

“Not only is the true value not uniform inside the interval, it is not even guaranteed to be inside that interval (there’s a 5% chance it’s outside).”

When the resolution limit is an empirical uncertainty interval, the statistical probability does not apply. Given an empirical calibration SD, one cannot say there is a 5% chance the correct answer is outside 2-sigma. Such a statement is meaningless — because the error distribution is not known to be normal.

The SD is a calculation of convenience. It is not statistically rigorous.

“This (“no meaning”) is not true”

It is true, because the 0.1 C is physically meaningless, not merely uncertain. The thermometer is literally incapable of producing a reading to that accuracy.

Your criteria of judgement continue to be malapropos, Rich.

“I agree that the mathematics cannot be applied exactly, because there is uncertainty in the uncertainties (e.g. how close to normal is the actual distribution of relevance).”

The mathematics cannot be applied exactly because the instrument produces erroneous readings for reasons rooted in uncontrolled and unknown variables. It’s not just unknown distributions, though that is always an issue. It’s unknown sources of error.

Why do you think high-precision, high-accuracy instruments are made, but not deployed in numbers, Rich? It’s because such instruments are extremely expensive. They are used to calibrate field instruments.

Field instruments are subject to field environments that are not predictable. Field calibrations show all sorts of strange error profiles that can vary in time and space.

“Please explain which rules of statistical inference it violates. The root-sum-of-squares rule fits in very nicely with statistical theory,”

RSS is invariably used. Including when the uncertainty interval has no knowable distribution. Scientists’ unscrupulous opportunism again.

We are interested in useful indications of reliability. RSS of empirical calibration SDs are used without any care whether the SD meets the criteria of statistical purity, or not.

“test a manufacturer’s claim that their rulers were 12+/-x””

And how would you know a priori that “x” is normally distributed? And, if so in your instance, that it stays normally distributed?

“[Rukhin’s] conclusion depends on whether the mean was incorrectly assumed to be zero.”

No, it depends on when the error is not stationary.

It’s around and around the same circle, Rich.

“RSS of empirical calibration SDs are used without any care whether the SD meets the criteria of statistical purity, or not.”

RSS is an easily understood way to combine independent, orthogonal values. Since they are orthogonal they form a right triangle and their sum is the hypotenuse: sqrt(a^2 + b^2).

It doesn’t require any statistical purity at all!

Reply to Tim Gorman Oct27 1:38pm

Tim, I have just seen this. As you can see, I have been busy further downthread. So I may start to address your comment here, but this thread is moribund, so I shall wait to see if there is a new relevant thread in the near future. I think there may be some other comments of yours which I will also have to defer. In the meantime, best wishes.

Tim, I don’t want to spend too much more time on this aspect, as I am more interested in people’s thoughts on how to choose between competing emulator models.

Still, thanks for offering to use the term “confidence interval”, for then you have “fallen into the trap” of using statistical terminology. A confidence interval is part of the range of the distribution of a random variable, such that if some parameter lies outside it, that could only have happened with some small given probability. So distributions and r.v.’s do come into play. As for what that JCGM is saying, I think they are paraphrasing for scientists and engineers in order to simplify matters which arise from deeper mathematical/statistical theory, and they do talk about correlation (or lack of it), which is a feature of joint probability distributions. I’d appreciate 3rd party insights on that.

Anyway, probably best to leave it at that, and thanks for your stimulating points, especially on rulers and your wife’s car in Topeka (previous thread for passers-by here).

Rich.

“A confidence interval is part of the range of the distribution of a random variable such that if some parameter lies outside it could only have happened with some small given probability”

But it is *still* an interval and not a random variable nor is it a mean or standard deviation.

“So distributions and r.v.’s do come into play. ”

But not with the interval itself. The interval specifies no probability for any specific value. From the JCGM:

“The result of a measurement is then conveniently expressed as Y = y ± U, which is interpreted to mean that the best estimate of the value attributable to the measurand Y is y, and that y − U to y + U is an interval that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to Y. Such an interval is also expressed as y − U ≤ Y ≤ y + U.”

“As for what that JCGM is saying,”

Most of what the JCGM talks about is MEASUREMENTS. Look at the title – “Guide to the expression of uncertainty in measurement”. The examples in the document are about how to *measure* things and about the errors and uncertainty in those measurements. That is *not* what Pat’s thesis is about, and it is not what the output of the GCMs is about. What the climate alarmists need to begin paying attention to are the uncertainties associated with the inputs they use in their calculations – which is what Pat is addressing. If you read Pat’s thesis: “Propagation of error is a standard method used to estimate the uncertainty of a prediction, i.e., its reliability, when the physically true value of the predictand is unknown (Bevington and Robinson, 2003).”

Pat’s thesis is about the propagation of error and not about uncertainties in measurement. He uses the very statistical methods you speak of in order to determine the +/-4 W/m^2 uncertainty. Once that is done, the propagation of that error comes into play in the iterative process of the GCM.

The point about the uncertainty interval not being a random variable itself, and not having a mean or standard deviation, comes into play when you propagate the uncertainty through multiple iterations. Independent, orthogonal uncertainty *intervals* combine as root-sum-square, not root-mean-square. They don’t combine by convolving probability functions or anything else, since they don’t have a probability function, nor does the central limit theorem work to eliminate the uncertainty. The uncertainty grows with each iteration.

This is also why Monte Carlo runs don’t work to generate the uncertainty interval for the final output. If the inputs are uncertain then the output *has* to be uncertain – meaning an independent run in a MC analysis can’t define the uncertainty interval. If Input A gives output B +/- u then it will *always* output the same relationship. An Input A + offset1 will always give B + offset2 +/- u. The MC simply cannot define uncertainty.

“they do talk about correlation (or lack of it), which is a feature of joint probability distributions.”

Again, uncertainty intervals don’t have a probability distribution – and neither does a standard deviation. Both are *values* and not probability distributions. You can’t convolve two standard deviation values any more than you can convolve uncertainties. The correlation the JCGM talks about is associated with how to deal with the measurement when two different probability distributions are involved.

Pat’s math is correct.

Rich, “A confidence interval is part of the range of the distribution of a random variable…”

Not when it is an empirical calibration error statistic.

Experimental physical science is not statistics, Rich. Somehow that realization invariably evades you.


Moderator: I intend to post something here tomorrow. May I assume that this thread will be open for comments until sometime on October 29th?

Thanks,

Rich.

Threads stay open 2 weeks, Rich. It’s a WordPress thing. CtM and Anthony have no control over it.

Your purely statistical approach is never, ever, going to cover the bases of an empirically based predictive uncertainty analysis, Rich.

We do have control, but that’s the time period that has been chosen as our policy.

Below I have distilled the essentials of Pat Frank’s long and erudite paper, and my alternative emulator, into a dozen numbered paragraphs. I hope readers will find it useful to have the basic arguments summarized.

1. There exists a GASAT (Global Average Surface Air Temperature) which we wish to model, using values at past times to fit/calibrate the model, and future values which we wish to estimate and to know a probable value of the error of our estimate. A range of probable values may be called an “uncertainty interval”.

2. GCMs (General Circulation Models) are a type of climate model favoured by the IPCC, and within those the CMIP5 models are an important subset.

3. The anomaly in radiative forcing due to GHGs (GreenHouse Gases) is assumed to be known to high accuracy in the past, and for the future a particular “scenario” is chosen to predict their forcing. At time t, f(t) denotes this value in W/m^2.

4. It is observed that graphs of GCM values of GASAT into the future, whilst having some wiggles, are well approximated by a constant times f(t).

5. In Pat Frank’s paper this relationship is described by his Equation (1), which can be rewritten in slightly different notation as:

(1) T(t) = b f(t) + A

where T(t) is the emulated value at time (year) t. The value of constant A, an offset, is not especially important, but the value of constant b is, and Pat supplies a value for this.

6. Though T(t) in (1) here approximates the GCM values well, it does not tell us about errors in those GCMs, which might lead them to be wildly inaccurate in the future. That is, the uncertainty over how far off the real GASAT it might be at the year 2100 might be great, either because the spread of probable GCM values might be great (the precision problem), or because the GCM exhibits a nonzero bias (the mean of its envelope minus the true GASAT) which is amplified over a period of 80 years (the accuracy problem), or both.

7. In addition to the GHG forcing f(t), the GCMs use other much larger forcings, say F(t), to model temperature. Pat quotes other papers to show that for the LWCF (Long Wave Cloud Forcing) component of this, the RMSE (Root Mean Squared Error) is +/-4W/m^2 when averaged over a year (relevant if T(t) is advanced with the increment of t being 1 year).

8. Therefore in any one year there is, as well as f(t), an uncertainty of +/-4W/m^2 to be added in. When considering the change from one year to the next, the change in GASAT over that period is to be considered, so the model is for T(t)-T(t-1) and the uncertainty is applied to f(t)-f(t-1). This is the import of Pat’s Equation (5.1), which I rewrite here as

(2) T(t)-T(t-1) = b(f(t)-f(t-1)) +/- u

where u = 4W/m^2.

9. By the RSS (Root Sum of Squares) method of combining independent uncertainties, and adding the telescoping terms in (2) for successive values of t, we get

(3) T(t)-T(0) = b(f(t)-f(0)) +/- u sqrt(t)

So, for example, the uncertainty after 81 years is +/-9u = +/-36W/m^2, a large value indeed. Ergo, useless GCMs!
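The growth in Equation (3) is trivially checkable in code. This is a sketch of the RSS bookkeeping only; the 1-year step and u = 4 W/m^2 are the values already quoted above.

```python
import math

u = 4.0  # annual LWCF calibration RMSE, W/m^2

def propagated_uncertainty(years):
    # RSS of one identical +/-u term per annual step reduces to u*sqrt(t).
    return math.sqrt(sum(u**2 for _ in range(years)))

for t in (1, 25, 81):
    print(f"after {t:2d} years: +/-{propagated_uncertainty(t):.1f} W/m^2")
```

After 81 years the explicit sum gives +/-36 W/m^2, i.e. +/-9u, matching paragraph 9.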

10. The head posting by Pat argues against Roy Spencer’s criticisms. From (3) it looks as if GCMs should wander by +/-36W/m^2, but they don’t. Pat writes “Models show TOA (Top Of Atmosphere) balance and LWCF error simultaneously”. This is certainly disturbing for the GCMs, as one wonders how they magically achieve balance in these circumstances, but it is also disturbing for Equation (3) above because it suggests that between different times the u’s might have correlation structure, contradicting the independence assumption required for (3). Again, as in my comment of Oct25 7:12am, I am not going to pursue this line for now.

11. (Whether Pat likes it or not, “uncertainty intervals” +/-u_i can be written as random variables U(i) and give the same results in the usual cases (normal, zero mean), and as I am more familiar with that notation I am going to use it here.) Consider the model

(4) T_k(t) = T_k(t-1) + b(f(t)-f(t-1)) + kU(t)

where k is 0 or 1 and T_0(0) = T_1(0) is an initial condition. If k=0 then (4) can be summed to give an emulator equation like (1) here and Pat’s (1). If k=1 then (4) is effectively the same as the “uncertainty” equation (2) or Pat’s equation (5). Using this recursion we can derive

(5) T_k(t) = T_k(0) + b(f(t)-f(0)) + k sum_1^t U(i)

It follows that T_1(t) = T_0(t) + sum U(i) and this links Pat’s equations (1) and (5) together. Now T_0(t) can be declared to be an emulator for anything, but a justification needs to be made. Pat declares his emulator to be for the ensemble mean of some CMIP5 models, and justifies this through Figure 1 of the paper. For T_1(t), the uncertainty equation, with sum_1^t U(i) replaced by +/-u sqrt(t) in his notation, he justifies it through analysis of TCF errors.

12. Now let’s turn to my new emulator again, as introduced in my Oct25 7:12am comment.

(6) X_k(t) = (1-a)X_k(t-1) + c f(t) + k G(t)

where G(t) has some distribution with mean z and variance s^2. Then

(7) X_k(t) = sum_0^{t-1} (1-a)^i(c f(t-i) + k G(t-i)) + (1-a)^t X_k(0)

Now assume that f(t) = dt for some constant d. Then

(8) E[X_k(t)] = cd(at+a-1)/a^2 + (1-a)^(t+1) cd/a^2 + (1-a)^t X_k(0) + kz(1-(1-a)^t)/a

(9) Var[X_1(t)] = (1-(1-a)^(2t)) s^2/(2a-a^2)

For 0<a<1, choosing c = ab makes X_0(t) follow a path very close to T_0(t), so it is an emulator just as good as T_0(t). But the variance of the uncertainty version X_1(t) does not grow without limit, as it tends to s^2/(2a-a^2), which is very different behaviour from T_1(t).
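The contrast between the two uncertainty recursions can be simulated directly. The parameter choices below are my own for illustration, and the deterministic b·f forcing term is dropped because it does not affect the spread.

```python
import random
import statistics

random.seed(0)
a, s, t = 0.2, 1.0, 100  # damping a, step s.d. s, number of annual steps

def final_anomaly(damped):
    # damped=False: T_1-style accumulation, x(t) = x(t-1) + G(t)
    # damped=True:  X_1-style recursion,  x(t) = (1-a)x(t-1) + G(t)
    x = 0.0
    for _ in range(t):
        g = random.gauss(0.0, s)
        x = (1.0 - a) * x + g if damped else x + g
    return x

walk = [final_anomaly(False) for _ in range(20_000)]
ar1 = [final_anomaly(True) for _ in range(20_000)]
print(round(statistics.stdev(walk), 2))  # near s*sqrt(t) = 10.0
print(round(statistics.stdev(ar1), 2))   # near s/sqrt(2a - a^2), about 1.67
```

The undamped recursion’s spread grows like sqrt(t); the damped version saturates at s/sqrt(2a-a^2), the limit quoted in paragraph 12.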

I’ll write a separate comment to use this exposition to respond to some of Pat’s comments above.

“11. (Whether Pat likes it or not, “uncertainty intervals” +/-u_i can be written as random variables U(i) and give the same results in the usual cases (normal, zero mean),”

If uncertainty is a random variable then it is subject to being made more accurate using the central limit theorem. This runs into the problem of – how does a model made up of differential equations use the central limit theorem to cancel out errors in its output?

Assuming that uncertainty is a random variable with a normal distribution means that its mean is also the highest probability value – i.e. the most accurate. Thus the claim made by the climate alarmists that the GCMs are highly accurate because of the cancellation of errors should be considered to be true. But then this runs into the paradox that their outputs don’t match reality – so how can their outputs be the most accurate?

Replies to some of Pat’s comments upstream.

P: Rich, I’m not going to concern myself with your emulator.

R: Pat, I’m not surprised, because you show absolutely no interest in addressing the falsifiability of your results. And that’s understandable, given how many years you have invested in this. But other readers will see that the existence of an equally good emulator (my para 12 above), which, combined with a method for calculating uncertainty that cannot be proven distinct from yours (i.e. RSS), gives much lower uncertainty bounds, means that your conclusion is not, as they say in the climate science trade, “robust”.

P: The LWCF uncertainty interval I use derives directly from the calibration of climate models against measured observables.

R: I agree; it’s my para. 7 above, and it’s a strong part of your paper.

P: That calibration uncertainty defines a resolution lower limit of climate models.

R: I agree; it’s not the initial uncertainty that bothers me, but the propagation.

P: The uncertainty does not enter the emulation at all.

R: Strictly speaking that is true, as it is T_0(t) in my para 11 above. But it enters into T_1(t), which differs from T_0(t) only in the inclusion of “error” or “uncertainty” terms, and it is from T_1(t) that the error/uncertainty propagation is derived. Therefore the general form of the emulation equation is important for the subsequent derivation of uncertainty.

P: Your definition of an uncertainty bound as something that “reasonably predicts the spread from running [GCMs] into the future” is just an epistemological variation in model output. A predictive pdf.

R: Yes, for a few days now I have been happy to take back that narrow view. This is for two reasons: first, that it only addresses “precision” and not “accuracy”; and second, that there seems to be some weird internal correction in the GCMs (deeply disturbing to me) which ensures that rough radiative balance is achieved at the top of the atmosphere.

P: LWCF uncertainty is an unvarying constant, Rich. It doesn’t correlate with anything.

R: It can’t be a constant because, as you have said yourself, it has a +/-, i.e. +/-4 W/m^2. That indicates a range, whether it be a standard-deviation portion of a distribution or a uniform inviolable interval. In this case it’s the difference between what the GCMs say LWCF should be and what it was actually observed to be, which varies from year to year. Sane mathematicians and most scientists are going to call that an error distribution. And then, of course, correlation is perfectly possible. I can see that this is a fundamental difference between us which I have just about given up on resolving.

P: I’m worried about knowing whether a result is reliable or not. That’s what “strict reference” means. It means I use the method because it tells me something I need to know — reliability — even though the use violates statistical assumptions.

R: I’m also worried about that too. I’m worried about the reliability of your very wide estimates of GCM uncertainty in the year 2100! And I don’t mind a certain amount of pragmatism (ignoring the difference between the sum of 5 uniform intervals and a normal distribution, say), but one has to be very careful not to go too far.

P: When the resolution limit is an empirical uncertainty interval, the statistical probability does not apply. Given an empirical calibration SD, one cannot say there is a 5% chance the correct answer is outside 2-sigma. Such a statement is meaningless — because the error distribution is not known to be normal.

R: Well, I was merely regurgitating the sense of what you wrote in your “helpful screed”:

“The odds are 20 to 1 against the uncertainty of X_i being larger than (+/-)dX_i.

The value of dX_i represents 2-sigma for a single-sample analysis, where sigma is the standard deviation of the population of possible measurements from which the single sample X_i was taken.

The uncertainty (+/-)dX_i Moffat described exactly represents the (+/-)4 W/m^2 LWCF calibration error statistic derived from the combined individual model errors in the test simulations of 27 CMIP5 climate models.”

But of course I agree that, depending on how far from normal the distribution is, the 5% will be subject to some error. But I’m pragmatic about that; the 5% still gives a general flavour.

Rich,

” I’m also worried about that too. I’m worried about the reliability of your very wide estimates of GCM uncertainty in the year 2100!”

You shouldn’t be worried. Once a GCM’s iterative runs identify a temperature differential greater than the uncertainty interval, the runs should be stopped. Their outputs are not reliable past that point. That’s true for your calculation of uncertainty as well.

Tim, no, my worry is not in that direction. It is in the direction that Pat has overestimated the uncertainty interval width, and the runs will never get close to those bounds. In fact, Pat has admitted that the GCMs have smaller spread than his uncertainty, but this appears to imply that they are therefore inaccurate, i.e. have a bias which may be unknowable until observations down the line.

“In fact, Pat has admitted that the GCMs have smaller spread than his uncertainty,”

What Pat has pointed out is that the models all produce about the same results. That really has nothing to do with the uncertainty of the results. The GCMs are deterministic, meaning if you put in A you *always* get out B. A single model doesn’t vary over a different number of runs as long as no changes are made in the input data, the fudge parameters, or the order of evaluating the differential equations.

“this appears to imply that they are therefore inaccurate, i.e. have a bias which may be unknowable until observations down the line.”

It tells you that they are data-matching programs being used to extrapolate into the future. Each data-matching program is slightly different, thus giving different extrapolations. They are *not* comprehensive models of the overall physics associated with the Earth and its sub-components. If they were, they wouldn’t have different outputs. Take Gauss’ Law on electric charges. It *is* a comprehensive model of the physics. It gives exact, accurate answers every single time – as long as the inputs are exact and accurate. The problem is in measuring the input, i.e. the electric charge, exactly and accurately. Because that measurement has inexactness and inaccuracy, the output has an uncertainty interval. It’s impossible to eliminate it. Tell me *exactly* and accurately what the charge on an electron is and I’ll ask you how you measured it at that resolution.
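The Gauss’ Law point above can be put in one short calculation (my own sketch; the 1% input uncertainty is an assumed figure for illustration): even an exact law passes its input uncertainty straight through to its output.

```python
import math

# Field of a point charge, E = q / (4*pi*eps0*r^2). E is linear in q,
# so a 1% uncertainty in q gives a 1% uncertainty in E, however exact the law.
eps0 = 8.8541878128e-12        # vacuum permittivity, F/m
q, dq = 1.602e-19, 1.602e-21   # charge (C) and an assumed 1% measurement uncertainty
r = 1e-10                      # distance, m

E = q / (4 * math.pi * eps0 * r**2)
dE = E * (dq / q)              # first-order propagation for a linear dependence

print(E, dE)   # the output inherits the input's 1% relative uncertainty
```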

Final thoughts on Pat Frank’s paper.

Take the year 2100. A GCM in 2019 might, by some fluke, choose a “pathway” for GHGs and solar variations which closely matches what actually happens. It will predict a GASAT value M (for model) in 2100, and there will be a measured GASAT value A (for actual) then. (If there isn’t, it will either be because humans think it too boring or irrelevant by then, or we have been wiped out by a cataclysm which in my view certainly won’t be from CAGW.)

So there will be an error M-A. If Pat’s paper doesn’t say anything about that, then I have been wasting my time studying it these past several weeks. If it does, then I think it says that |M-A| will be of the order of 16K (say; the exact figure is not important here). Now M-A derives from two sources: variance (intra-model variation, or inter-model as well if an ensemble is used) and bias. We can see the variance of the models from their outputs, and it’s much smaller than a radiative error of +/-4 W/m^2 implies. Therefore, if the models are not biased, |M-A| will turn out to be much smaller than 16K. But Pat may be right and |M-A| around 16K will actually occur. In that case the models must be biased, so the parameter z in my para 12 above (which could equally apply to U for Pat’s emulator) is non-zero. Hopefully we could detect z != 0 much earlier than 2100, and indeed there are already claims of bias in the models over the last 30 years. 16K divided by 80 years is “only” 0.2K per year, but after 10 years that becomes 2K, which is a significant departure.

Returning to T_1(t), Pat’s emulator plus error, we can change it so that it tracks GASAT rather than the GCMs. If U(t) = Z + V(t), then Z can match the unknown bias and V(t), with mean 0 and variance s^2, can match the intra-model spread. The total standard deviation sqrt(Var(Z) + s^2) can match the scaled LWCF error +/-4 W/m^2, and so Var(Z) can be deduced. Over a period of some years the single realized value Z = z can be estimated, and the intra-model spread prediction +/-s*sqrt(t) tested. In this way Pat’s emulator can be improved so as to become falsifiable (and validated, in the happy event that Pat’s theory is correct).

The same procedure can be done for my emulator X(t), with some particular value of ‘a’ specified, and results compared with T(t). Eventually it should be possible to discriminate between the two.

Pat will no doubt object, as usual, but the above is proper mathematical modelling which has some prospect of being validated.
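The estimation procedure just described (estimate the realized bias z from early-year residuals, then test the +/-s*sqrt(t) spread prediction) can be sketched numerically. This is my own illustration; the values of z, s and the number of years are assumptions, and the residuals are simulated as bias times t plus a random walk V:

```python
import numpy as np

# Simulate model-minus-observation residuals M(t) - A(t) = z*t + cumsum(V),
# then recover z by least squares and compare the spread with s*sqrt(t).
rng = np.random.default_rng(2)
z_true, s = 0.15, 0.1           # K/yr bias and per-year spread (assumed)
years = np.arange(1, 31)        # 30 years of comparison, 500 model runs
residuals = z_true * years + np.cumsum(
    rng.normal(0.0, s, size=(500, 30)), axis=1)

# Estimate the per-year bias from the ensemble-mean residual (least squares)
mean_resid = residuals.mean(axis=0)
z_hat = np.sum(mean_resid * years) / np.sum(years**2)

# Test the sqrt(t) spread prediction at the final year
spread_pred = s * np.sqrt(years[-1])
spread_obs = residuals[:, -1].std()

print(z_hat)                    # recovers the assumed bias
print(spread_obs, spread_pred)  # spread grows like s*sqrt(t)
```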

“But Pat may be right and |M-A| around 16K will actually occur. ”

Pat’s thesis doesn’t predict anything! *Any* value within the uncertainty interval is possible. Your M-A could be large or it could be small based on the uncertainty interval.

What his thesis *does* say is that once the uncertainty interval is larger than the anomaly the GCMs are trying to calculate, the GCMs become totally unreliable. There is no use in extending their iteration interval past that point. It’s no different from trying to read millivolts on a digital meter with only two significant digits. Your uncertainty is greater than what you are trying to read! And no amount of statistical analysis can change that fact. As Pat said, the fuzziness of the pixel precludes knowing anything.
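The meter analogy in one line (my own sketch; the signal and resolution values are assumptions): when the instrument’s quantization step dwarfs the signal, the reading carries no information about it.

```python
# A two-digit voltmeter (0.1 V resolution) reading a millivolt-scale signal.
true_mv = 3.7                  # signal, in millivolts (assumed)
resolution_v = 0.1             # meter's smallest step, in volts (assumed)
reading_v = round(true_mv / 1000.0 / resolution_v) * resolution_v

print(reading_v)   # 0.0 -- the signal is entirely below the meter's resolution
```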

Rich,

I’m pretty sure that no one who follows Pat’s analysis believes that |M-A| will be on the order of 16K by 2100. The argument, as I understand it, is that M-A ca. 2100 will be meaningless because M for 2100 (as of 2019) is meaningless. And this follows because M for 2099 (as of 2019) is meaningless, and so on. They are all meaningless because, for any forecast period, the uncertainty of the cloud physics, as amply evidenced by the results of the GCMs themselves, greatly exceeds the magnitude of the forecasts. The fact that the GCMs are somehow constrained to prevent realizations commensurate with the magnitude of the missing and/or misspecified physics is of no import, since Pat’s emulations indicate that the forecasted changes in GASAT are linear with the projected forcings. This means that the uncertainty of the forecasts should also accumulate accordingly. No heavy math needed, just logic.

Frank from NoVA, “just logic”, eh? Very fuzzy logic to my mind, which is why I set out my paragraphs 1 to 12 to follow it, and succeeded, provided I could use kosher statistical modelling rather than the concept of “uncertainty intervals”, over which there has been so much disagreement in this thread.

Your logic includes “this means that the uncertainty of the forecasts should also accumulate accordingly”, and that is where the logic goes wrong. Mathematics shows that the accumulation depends on the nature of the emulation, and I produced an emulation (in fact infinitely many in the spectrum of 0<a<1) which agrees very closely with Pat's and yet has much smaller growth in uncertainty over time. I don't think at present we can distinguish between these two emulators, though I'd like to.