12 October 2019

Pat Frank

A bit over a month ago, I posted an essay on WUWT here about my paper assessing the reliability of GCM global air temperature projections in light of error propagation and uncertainty analysis, freely available here.

Four days later, Roy Spencer posted a critique of my analysis at WUWT, here as well as at his own blog, here. The next day, he posted a follow-up critique at WUWT here. He also posted two more critiques on his own blog, here and here.

Curiously, three days before he posted his criticisms of my work, Roy posted an essay, titled, “The Faith Component of Global Warming Predictions,” here. He concluded that *[climate modelers] have only demonstrated what they assumed from the outset*. They are guilty of “*circular reasoning*” and have expressed a “*tautology*.”

Roy concluded, “*I’m not saying that increasing CO₂ doesn’t cause warming. I’m saying we have no idea how much warming it causes because we have no idea what natural energy imbalances exist in the climate system over, say, the last 50 years. … Thus, global warming projections have a large element of faith programmed into them.*”

Roy’s conclusion is pretty much a re-statement of the conclusion of my paper, which he then went on to criticize.

In this post, I’ll go through Roy’s criticisms of my work and show why and how every single one of them is wrong.

So, what are Roy’s points of criticism?

He says that:

1) My error propagation predicts huge excursions of temperature.

2) Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux.

3) The Error Propagation Model is Not Appropriate for Climate Models.

I’ll take these in turn.

This is a long post. For those wishing just the executive summary, all of Roy’s criticisms are badly misconceived.

1) __Error propagation predicts huge excursions of temperature.__

Roy wrote, “*Frank’s paper takes an example known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) and assumes that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation, propagating the error forward in time during his emulation model’s integration. The result is a huge (as much as 20 deg. C or more) of resulting spurious model warming (or cooling) in future global average surface air temperature (GASAT). (my bold)*”

For the attention of Mr. And then There’s Physics, and others, Roy went on to write this: “*The modelers are well aware of these biases [in cloud fraction], which can be positive or negative depending upon the model. The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles, otherwise all models would get very nearly the same cloud amounts.*” No more dismissals of root-mean-square error, please.

Here is Roy’s Figure 1, demonstrating his first major mistake. I’ve bolded the evidential wording.

Roy’s blue lines are **not** air temperatures emulated using equation 1 from the paper. They do not come from eqn. 1, and do not represent physical air temperatures at all.

They come from eqns. 5 and 6, and are the growing uncertainty bounds in projected air temperatures. Uncertainty statistics are not physical temperatures.

Roy misconceived his ±2 Wm^{-2} as a radiative imbalance. In the proper context of my analysis, it should be seen as a ±2 Wm^{-2} uncertainty in long wave cloud forcing (LWCF). It is a statistic, not an energy flux.

Even worse, were we to take Roy’s ±2 Wm^{-2} to be a radiative imbalance in a model simulation, one that results in an excursion in simulated air temperature (which is Roy’s meaning), we would then have to suppose the imbalance is both positive and negative at the same time, i.e., a ±radiative forcing.

A ±radiative forcing does not alternate between +radiative forcing and -radiative forcing. Rather it is both signs together at once.

So, Roy’s interpretation of LWCF ±error as an imbalance in radiative forcing requires simultaneous positive and negative temperatures.

Look at Roy’s Figure. He represents the emulated air temperature to be a hot house and an ice house simultaneously; both +20 C and -20 C coexist after 100 years. That is the nonsensical message of Roy’s blue lines, if we are to assign his meaning that the ±2 Wm^{-2} is radiative imbalance.

That physically impossible meaning should have been a give-away that the basic supposition was wrong.

The ± is not, after all, one or the other, plus or minus. It is coincidental plus and minus, because it is part of a root-mean-square-error (rmse) uncertainty statistic. It is **not** attached to a physical energy flux.

It’s truly curious. More than one of my reviewers made the same very naive mistake, that ±C = physically real +C or -C. This one, for example, which is quoted in the Supporting Information: “*[The author’s error propagation is not] physically justifiable. (For instance, even after forcings have stabilized, [the author’s] analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states. Which, it should be obvious, does not actually happen).*”

Any understanding of uncertainty analysis is clearly missing.

Likewise, this first part of Roy’s point 1 is completely misconceived.

Next mistake in the first criticism: Roy says that the emulation equation does not yield the flat GCM control run line in his Figure 1.

However, emulation equation 1 would indeed give the same flat line as the GCM control runs under zero external forcing. As proof, here’s equation 1:

ΔT_{i}(K) = f_{CO₂} × 33K × [(F_{0} + ΣΔF_{i})/F_{0}] + a

In a control run there is no change in forcing, so ΔF_{i} = 0. The fraction in the brackets then becomes F_{0}/F_{0} = 1.

The originating f_{CO₂} = 0.42, so that equation 1 becomes,

ΔT_{i}(K) = 0.42 × 33K × 1 + a = 13.9 C + a = constant (a = 273.1 K or 0 C).

When an anomaly is taken, the emulated temperature change is constant zero, just as in Roy’s GCM control runs in Figure 1.
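The control-run arithmetic can be checked in a few lines of Python. This is a minimal sketch assuming equation 1 has the form used in the derivation above; `emulated_T` is an illustrative name, not code from the paper:

```python
# Emulation equation 1 (assumed form, per the derivation above):
# dT_i(K) = f_CO2 x 33K x [(F0 + sum dF_i)/F0] + a
F0 = 33.30     # W/m^2: unperturbed greenhouse forcing
f_CO2 = 0.42   # CO2 fraction of the greenhouse effect
a = 273.1      # K (0 C) offset

def emulated_T(dF_total):
    """Emulated air temperature for a cumulative forcing change dF_total (W/m^2)."""
    return f_CO2 * 33.0 * ((F0 + dF_total) / F0) + a

# Control run: no change in forcing, so dF_total = 0 in every year.
control = [emulated_T(0.0) for year in range(100)]
anomalies = [T - control[0] for T in control]  # constant zero: a flat line
```

With zero forcing change the emulated temperature is the constant 13.9 C + a in every step, so the anomaly trace is exactly the flat line of the GCM control runs.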

So, Roy’s first objection demonstrates three mistakes.

1) Roy mistakes a rms statistical uncertainty in simulated LWCF as a physical radiative imbalance.

2) He then mistakes a ±uncertainty in air temperature as a physical temperature.

3) His analysis of emulation equation 1 was careless.

Next, Roy’s 2): __Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux__

Roy wrote, “*If any climate model has as large as a 4 W/m^{2} bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.*”

I will now show why this objection is irrelevant.

Here, now, is Roy’s second figure, again showing the perfect TOA radiative balance of CMIP5 climate models. On the right, next to Roy’s figure, is Figure 4 from the paper showing the total cloud fraction (TCF) annual error of 12 CMIP5 climate models, averaging ±12.1%. [1]

Every single one of the CMIP5 models that produced the average ±12.1% simulated total cloud fraction error also featured Roy’s perfect TOA radiative balance.

Therefore, every single CMIP5 model that averaged ±4 Wm^{-2} in LWCF error also featured Roy’s perfect TOA radiative balance.

How is that possible? How can models maintain perfect simulated TOA balance while at the same time producing errors in long wave cloud forcing?

Off-setting errors, that’s how. GCMs are required to have TOA balance. So, parameters are adjusted within their uncertainty bounds so as to obtain that result.

Roy says so himself: “*If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, …”*

Are the chosen GCM parameter values physically correct? No one knows.

Are the parameter sets identical model-to-model? No. We know that because different models produce different profiles and integrated intensities of TCF error.

This removes all force from Roy’s TOA objection. Models show TOA balance and LWCF error simultaneously.

In any case, this goes to the point raised earlier, and in the paper, that a simulated climate can be perfectly in TOA balance **while the simulated climate internal energy state is incorrect**.

That means that the physics describing the simulated climate state is incorrect. This in turn means that the physics describing the simulated air temperature is incorrect.

The simulated air temperature is not grounded in physical knowledge. And that means there is a large uncertainty in projected air temperature because we have no good physically causal explanation for it.

The physics can’t describe it; the model can’t resolve it. The apparent certainty in projected air temperature is a chimerical result of tuning.

This is the crux idea of an uncertainty analysis. One can get the observables right. But if the wrong physics gives the right answer, one has learned nothing and one understands nothing. The uncertainty in the result is consequently large.

This wrong physics is present in every single step of a climate simulation. The calculated air temperatures are not grounded in a physically correct theory.

Roy says the LWCF error is unimportant because all the errors cancel out. I’ll get to that point below. But notice what he’s saying: the wrong physics allows the right answer. And invariably so in every step all the way across a 100-year projection.

In his September 12 criticism, Roy gives his reason for disbelief in uncertainty analysis: “*All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)! *

*“Why?*

*“If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior.*”

There it is: wrong physics that is invariably correct in every step all the way across a 100-year projection, because large-scale errors cancel to reveal the effects of tiny perturbations. I don’t believe any other branch of physical science would countenance such a claim.

Roy then again presented the TOA radiative simulations on the left of the second set of figures above.

Roy wrote that models are forced into TOA balance. That means the physical errors that might have appeared as TOA imbalances are force-distributed into the simulated climate sub-states.

Forcing models to be in TOA balance may even make simulated climate subsystems more in error than they would otherwise be.

After observing that the “*forced-balancing of the global energy budget*” is done only once for the “*multi-century pre-industrial control runs*,” Roy noted that models world-wide behave similarly despite a “*WIDE variety of errors in the component energy fluxes*…”

Roy’s is an interesting statement, given there is nearly a factor of three difference among models in their sensitivity to doubled CO₂. [2, 3]

According to Stephens [3], “*This discrepancy is widely believed to be due to uncertainties in cloud feedbacks. … Fig. 1 [shows] the changes in low clouds predicted by two versions of models that lie at either end of the range of warming responses. The reduced warming predicted by one model is a consequence of increased low cloudiness in that model whereas the enhanced warming of the other model can be traced to decreased low cloudiness. (original emphasis)*”

So, two CMIP5 models show opposite trends in simulated cloud fraction in response to CO₂ forcing. Nevertheless, they both reproduce the historical trend in air temperature.

Not only that, but they’re supposedly invariably correct in every step all the way across a 100-year projection, because their large-scale errors cancel to reveal the effects of tiny perturbations.

In Stephens’ object example we can see the hidden simulation uncertainty made manifest. Models reproduce calibration observables by hook or by crook, and then on those grounds are touted as able to accurately predict future climate states.

The Stephens example provides clear evidence that GCMs plain cannot resolve the cloud response to CO₂ emissions. Therefore, GCMs cannot resolve the change in air temperature, if any, from CO₂ emissions. Their projected air temperatures are not known to be physically correct. They are not known to have physical meaning.

This is the reason for the large and increasing step-wise simulation uncertainty in projected air temperature.

This obviates Roy’s point about cancelling errors. The models cannot resolve the cloud response to CO₂ forcing. Cancellation of radiative forcing errors does not repair this problem. Such cancellation (from by-hand tuning) just speciously hides the simulation uncertainty.

Roy concluded that, “*Thus, the models themselves demonstrate that their global warming forecasts do not depend upon those bias errors in the components of the energy fluxes (such as global cloud cover) as claimed by Dr. Frank (above).*”

Everyone should now know why Roy’s view is wrong. Off-setting errors make models similar to one another. They do not make the models accurate. Nor do they improve the physical description.

Roy’s conclusion implicitly reveals his mistaken thinking.

1) The inability of GCMs to resolve cloud response means the temperature projection consistency among models is a chimerical artifact of their tuning. The uncertainty remains in the projection; it’s just hidden from view.

2) The LWCF ±4 Wm^{-2} rmse is not a constant offset bias error. The ‘±’ alone should be enough to tell anyone that it does not represent an energy flux.

The LWCF ±4 Wm^{-2} rmse represents an uncertainty in simulated energy flux. It’s not a physical error at all.

One can tune the model to produce no observable error at all (simulation minus observation = 0) in its calibration period. But the physics underlying the simulation is wrong. The causality is not revealed. The simulation conveys no information. The result is not any indicator of physical accuracy. The uncertainty is not dismissed.

3) All the models making those errors are forced to be in TOA balance. Those TOA-balanced CMIP5 models make errors averaging ±12.1% in global TCF.[1] This means the GCMs cannot model cloud cover to better resolution than ±12.1%.

To minimally resolve the effect of annual CO₂ emissions, they need to be at about 0.1% cloud resolution (see Appendix 1 below).

4) The average GCM error in simulated TCF over the calibration hindcast time reveals the average calibration error in simulated long wave cloud forcing. Even though TOA balance is maintained throughout, the correct magnitude of simulated tropospheric thermal energy flux is lost within an uncertainty interval of ±4 Wm^{-2}.

Roy’s 3) __Propagation of error is inappropriate__.

On his blog, Roy wrote that modeling the climate is like modeling pots of boiling water. Thus, “*[If our model] can get a constant water temperature, [we know] that those rates of energy gain and energy loss are equal, even though we don’t know their values. And that, if we run [the model] with a little more coverage of the pot by the lid, we know the modeled water temperature will increase. That part of the physics is still in the model.*”

Roy continued, “*the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system.*”

Roy there implied that the only way air temperature can change is by way of an increase or decrease of the total energy in the climate system. However, that is not correct.

Climate subsystems can exchange energy. Air temperature can change by redistribution of internal energy flux without any change in the total energy entering or leaving the climate system.

For example, in his 2001 testimony before the Senate Environment and Public Works Committee on 2 May, Richard Lindzen noted that, “*claims that man has contributed any of the observed warming (ie attribution) are based on the assumption that models correctly predict natural variability. [However,] natural variability does not require any external forcing – natural or anthropogenic*. (my bold)” [4]

Richard Lindzen noted exactly the same thing in his “**Some Coolness Concerning Global Warming**.” [5]

“*The precise origin of natural variability is still uncertain, but it is not that surprising. Although the solar energy received by the earth-ocean-atmosphere system is relatively constant, the degree to which this energy is stored and released by the oceans is not. As a result, the energy available to the atmosphere alone is also not constant. … Indeed, our climate has been both warmer and colder than at present, due solely to the natural variability of the system. External influences are hardly required for such variability to occur*.(my bold)”

In his review of Stephen Schneider’s “Laboratory Earth,” [6] Richard Lindzen wrote this directly relevant observation,

“*A doubling CO₂ in the atmosphere results in a two percent perturbation to the atmosphere’s energy balance. But the models used to predict the atmosphere’s response to this perturbation have errors on the order of ten percent in their representation of the energy balance, and these errors involve, among other things, the feedbacks which are crucial to the resulting calculations. Thus the models are of little use in assessing the climatic response to such delicate disturbances. Further, the large responses (corresponding to high sensitivity) of models to the small perturbation that would result from a doubling of carbon dioxide crucially depend on positive (or amplifying) feedbacks from processes demonstrably misrepresented by models.* (my bold)”

These observations alone are sufficient to refute Roy’s description of modeling air temperature in analogy to the heat entering and leaving a pot of boiling water with varying amounts of lid-cover.

Richard Lindzen’s last point, especially, contradicts Roy’s claim that cancelling simulation errors permit a reliably modeled response to forcing or accurately projected air temperatures.

Also, the situation is much more complex than Roy described in his boiling pot analogy. For example, rather than Roy’s single lid moving about, clouds are more like multiple layers of sieve-like lids of varying mesh size and thickness, all in constant motion, and none of them covering the entire pot.

The pot-modeling then proceeds with only a poor notion of where the various lids are at any given time, and without fully understanding their depth or porosity.

__Propagation of error__: Given an annual average +0.035 Wm^{-2} increase in CO₂ forcing, the increase plus uncertainty in the simulated tropospheric thermal energy flux is (0.035±4) Wm^{-2}. All the while simulated TOA balance is maintained.

So, if one wanted to calculate the uncertainty interval for the air temperature for any specific annual step, the top of the temperature uncertainty interval would be calculated from +4.035 Wm^{-2}, while the bottom of the interval would be calculated from -3.965 Wm^{-2}.

Putting those into the right side of paper eqn. 5.2 and setting F_{0} = 33.30 Wm^{-2}, the single-step projection uncertainty interval in simulated air temperature is +1.68 C/-1.65 C.

The air temperature anomaly projected from the average CMIP5 GCM would, however, be 0.015 C; not +1.68 C or -1.65 C.
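That single-step arithmetic can be sketched as follows, assuming eqn. 5.2 converts a flux into a temperature uncertainty as 0.42 × 33 K × (flux)/F₀; the variable names are mine, not the paper’s:

```python
F0 = 33.30                 # W/m^2, from the paper
scale = 0.42 * 33.0 / F0   # ~0.416 K per W/m^2: the assumed eqn 5.2 conversion
dF = 0.035                 # W/m^2, annual average CO2 forcing increment
u_lwcf = 4.0               # W/m^2, LWCF calibration uncertainty

anomaly = scale * dF             # ~0.015 C: the projected annual anomaly
top = scale * (dF + u_lwcf)      # ~ +1.68 C: top of the uncertainty interval
bottom = scale * (dF - u_lwcf)   # ~ -1.65 C: bottom of the interval
```

The projected anomaly (0.015 C) is two orders of magnitude smaller than the uncertainty interval that brackets it.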

In the whole modeling exercise, the simulated TOA balance is maintained. Simulated TOA balance is maintained mainly because simulation error in long wave cloud forcing is offset by simulation error in short wave cloud forcing.

This means the underlying physics is wrong and the simulated climate energy state is wrong. Over the calibration hindcast region, the observed air temperature is correctly reproduced only because of curve fitting following from the by-hand adjustment of model parameters.[2, 7]

Forced correspondence with a known value does not remove uncertainty in a result, because causal ignorance is unresolved.

When error in an intermediate result is imposed on every single step of a sequential series of calculations — which describes an air temperature projection — that error gets transmitted into the next step. The next step adds its own error onto the top of the prior level. The only way to gauge the effect of step-wise imposed error is step-wise propagation of the appropriate rmse uncertainty.

Figure 3 below shows the problem in a graphical way. GCMs project temperature in a step-wise sequence of calculations. [8] Incorrect physics means each step is in error. The climate energy-state is wrong (this diagnosis also applies to the equilibrated base state climate).

The wrong climate state gets calculationally stepped forward. Its error constitutes the initial conditions of the next step. Incorrect physics means the next step produces its own errors. Those new errors add onto the entering initial condition errors. And so it goes, step-by-step. The errors add with every step.

When one is calculating a future state, one does not know the sign or magnitude of any of the errors in the result. This ignorance follows from the obvious difficulty that there are no observations available from a future climate.

The reliability of the projection then must be judged from an uncertainty analysis. One calibrates the model against known observables (e.g., total cloud fraction). By this means, one obtains a relevant estimate of model accuracy; an appropriate average root-mean-square calibration error statistic.

The calibration error statistic informs us of the accuracy of each calculational step of a simulation. When inaccuracy is present in each step, propagation of the calibration error metric is carried out through each step. Doing so reveals the uncertainty in the result — how much confidence we should put in the number.

When the calculation involves multiple sequential steps each of which transmits its own error, then the step-wise uncertainty statistic is propagated through the sequence of steps. The uncertainty of the result must grow. This circumstance is illustrated in Figure 3.

The projection starts from an initial forcing F_{0}, which may be zero, and an initial temperature, T_{0}. The final temperature T_{n} is conditioned by the final uncertainty ±e_{t}, as T_{n} ± e_{t}.

Step one projects a first-step forcing F_{1}, which produces a temperature T_{1}. Incorrect physics introduces a physical error in temperature, e_{1}, which may be positive or negative. In a projection of future climate, we do not know the sign or magnitude of e_{1}.

However, hindcast calibration experiments tell us that single projection steps have an average uncertainty of ±e.

T_{1} therefore has an uncertainty of ±e.

The step one temperature plus its physical error, T_{1}+e_{1}, enters step 2 as its initial condition. But T_{1} had an error, e_{1}. That e_{1} is an error offset of unknown sign in T_{1}. Therefore, the incorrect physics of step 2 receives a T_{1} that is offset by e_{1}. But in a futures-projection, one does not know the value of T_{1}+e_{1}.

In step 2, incorrect physics starts with the incorrect T_{1} and imposes new unknown physical error e_{2} on T_{2}. The error in T_{2} is now e_{1}+e_{2}. However, in a futures-projection the sign and magnitude of e_{1}, e_{2} and their sum remain unknown.

And so it goes; step 3, …, n add in their errors e_{3} +, …, + e_{n}. But in the absence of knowledge concerning the sign or magnitude of the imposed errors, we do not know the total error in the final state. All we do know is that the trajectory of the simulated climate has wandered away from the trajectory of the physically correct climate.

However, the calibration error statistic provides an estimate of the uncertainty in the results of any single calculational step, which is ±e.

When there are multiple calculational steps, ±e attaches independently to every step. The predictive uncertainty increases with every step because the ±e uncertainty gets propagated through those steps to reflect the continuous but unknown impact of error. Propagation of calibration uncertainty goes as the root-sum-square (rss). For ‘n’ steps that’s ±e_{t} = sqrt[(±e_{1})^{2} + (±e_{2})^{2} + … + (±e_{n})^{2}].

It should be very clear to everyone that the rss equation does not produce physical temperatures, or the physical magnitudes of anything else. It is a statistic of predictive uncertainty that necessarily increases with the number of calculational steps in the prediction. A summary of the uncertainty literature was commented into my original post, here.

The growth of uncertainty does not mean the projected air temperature becomes huge. Projected temperature is always within some physical bound. But the reliability of that temperature — our confidence that it is physically correct — diminishes with each step. The level of confidence is the meaning of uncertainty. As confidence diminishes, uncertainty grows.
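The rss rule is simple enough to sketch directly. For a constant per-step uncertainty ±e it reduces to ±e√n; the ~1.66 C per-step value below is my assumed eqn. 5.2 conversion of the ±4 Wm^{-2} calibration error, used purely for illustration:

```python
import math

def rss(e_step, n_steps):
    """Root-sum-square propagation of a constant per-step uncertainty.

    Returns a statistic of predictive uncertainty, not a physical temperature.
    """
    return math.sqrt(sum(e_step ** 2 for _ in range(n_steps)))  # = e_step*sqrt(n)

e = 1.66              # C, assumed per-annual-step temperature uncertainty
u_1 = rss(e, 1)       # 1.66 C after one step
u_100 = rss(e, 100)   # 16.6 C after a 100-year annual-step projection
```

Note that the growing number is a confidence bound, not a predicted excursion: the projected temperature itself stays within physical limits while the rss statistic widens.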

Supporting Information Section 10.2 discusses uncertainty and its meaning. C. Roy and J. Oberkampf (2011) describe it this way, “*[predictive] uncertainty [is] due to lack of knowledge by the modelers, analysts conducting the analysis, or experimentalists involved in validation. The lack of knowledge can pertain to, for example, modeling of the system of interest or its surroundings, simulation aspects such as numerical solution error and computer roundoff error, and lack of experimental data.*” [12]

The growth of uncertainty means that with each step we have less and less knowledge of where the simulated future climate is, relative to the physically correct future climate. Figure 3 shows the widening scope of uncertainty with the number of steps.

Wide uncertainty bounds mean the projected temperature reflects a future climate state that is some completely unknown distance from the physically real future climate state. One’s confidence is minimal that the simulated future temperature is the ‘true’ future temperature.

This is why propagation of uncertainty through an air temperature projection is entirely appropriate. It is our only estimate of the reliability of a predictive result.

Appendix 1 below shows that the models need to simulate clouds to about ±0.1% accuracy, about 100 times better than the ±12.1% they achieve now, in order to resolve any possible effect of CO₂ forcing.

Appendix 2 quotes Richard Lindzen on the utter corruption and dishonesty that pervades AGW consensus climatology.

Before proceeding, here’s NASA on clouds and resolution: “*A doubling in atmospheric carbon dioxide (CO2), predicted to take place in the next 50 to 100 years, is expected to change the radiation balance at the surface by only about 2 percent. … If a 2 percent change is that important, then a climate model to be useful must be accurate to something like 0.25%. Thus today’s models must be improved by about a hundredfold in accuracy, a very challenging task.*”

That hundred-fold is exactly the message of my paper.

If climate models cannot resolve the response of clouds to CO₂ emissions, they cannot possibly accurately project the impact of CO₂ emissions on air temperature.

The ±4 Wm^{-2} uncertainty in LWCF is a direct reflection of the profound ignorance surrounding cloud response.

The CMIP5 LWCF calibration uncertainty reflects ignorance concerning the magnitude of the thermal flux in the simulated troposphere that is a direct consequence of the poor ability of CMIP5 models to simulate cloud fraction.

From page 9 in the paper, “*This climate model error represents a range of atmospheric energy flux uncertainty within which smaller energetic effects cannot be resolved within any CMIP5 simulation.*”

The 0.035 Wm^{-2} annual average CO₂ forcing is exactly such a smaller energetic effect.

It is impossible to resolve the effect on air temperature of a 0.035 Wm^{-2} change in forcing, when the model cannot resolve overall tropospheric forcing to better than ±4 Wm^{-2}.

The perturbation is about 114 times smaller than the ±4 Wm^{-2} lower limit of resolution of a CMIP5 GCM.

The uncertainty interval can be appropriately analogized as the smallest simulation pixel size. It is the blur level. It is the ignorance width within which nothing is known.

Uncertainty is not a physical error. It does not subtract away. It is a measure of ignorance.

The model can produce a number. When the physical uncertainty is large, that number is physically meaningless.

All of this is discussed in the paper, and in exhaustive detail in Section 10 of the Supporting Information. It’s not as though that analysis is missing or cryptic. It is pretty much invariably un-consulted by my critics, however.

Smaller strange and mistaken ideas:

Roy wrote, “*If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time.*”

But the LWCF error statistic is ±4 Wm^{-2}, not (+)4 Wm^{-2} imbalance in radiative flux. Here, Roy has not only misconceived a calibration error statistic as an energy flux, but has facilitated the mistaken idea by converting the ± into (+).

This mistake is also common among my prior reviewers. It allowed them to assume a constant offset error. That in turn allowed them to assert that all error subtracts away.

This assumption of perfection after subtraction is a folk-belief among consensus climatologists. It is refuted right in front of their eyes by their own results, (Figure 1 in [13]) but that never seems to matter.

Another example is Figure 1 in the paper, which shows simulated temperature anomalies. They are all produced by subtracting away a simulated climate base-state temperature. If the simulation errors subtracted away, all the anomaly trends would be superimposed. But they’re far from that ideal.

Figure 4 shows a CMIP5 example of the same refutation.

Figure 4: RCP8.5 projections from four CMIP5 models.

Model tuning has made all four projection anomaly trends close to agreement from 1850 through 2000. However, after that the models career off on separate temperature paths. By projection year 2300, they range across 8 C. The anomaly trends are not superimposable; the simulation errors have not subtracted away.

The idea that errors subtract away in anomalies is objectively wrong. The uncertainties that are hidden in the projections after year 2000, by the way, are also in the projections from 1850-2000 as well.

This is because the projections of the historical temperatures rest on the same wrong physics as the futures projection. Even though the observables are reproduced, the physical causality underlying the temperature trend is only poorly described in the model. Total cloud fraction is just as wrongly simulated for 1950 as it is for 2050.

LWCF error is present throughout the simulations. The average annual ±4 Wm^{-2} simulation uncertainty in tropospheric thermal energy flux is present throughout, putting uncertainty into every simulation step of air temperature. Tuning the model to reproduce the observables merely hides the uncertainty.

Roy wrote, “*Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step.*”

But, of course, eqn. 6 does not produce wildly different results, because the calibration uncertainty itself scales with the length of the simulation time step.

For example, we can estimate the average per-day uncertainty from the ±4 Wm^{-2} annual average calibration of Lauer and Hamilton.

So, for the entire year, (±4 Wm^{-2})^{2} = 365×(e_{i})^{2}, where e_{i} is the per-day uncertainty. This equation yields e_{i} = ±0.21 Wm^{-2} for the estimated LWCF uncertainty per average projection day. If we put the daily estimate into the right side of equation 5.2 in the paper and set F_{0} = 33.30 Wm^{-2}, then the one-day per-step uncertainty in projected air temperature is ±0.087 C. The total uncertainty after 100 years is sqrt[(0.087)^{2}×365×100] = ±16.6 C.

The same approach yields an estimated 25-year mean model calibration uncertainty of sqrt[(±4 Wm^{-2})^{2}×25] = ±20 Wm^{-2}. Following from eqn. 5.2, the 25-year per-step uncertainty is ±8.3 C. After 100 years the uncertainty in projected air temperature is sqrt[(±8.3)^{2}×4] = ±16.6 C.
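The step-length invariance is easy to verify numerically. This sketch uses the same assumed eqn. 5.2 flux-to-temperature conversion as above; the function name is mine:

```python
import math

F0 = 33.30
scale = 0.42 * 33.0 / F0   # K per W/m^2, assumed eqn 5.2 conversion
annual_e = 4.0             # W/m^2, annual LWCF calibration uncertainty
years = 100.0

def propagated_uncertainty(step_years):
    """Project 'years' ahead in steps of 'step_years'; return the rss uncertainty (C)."""
    n_steps = years / step_years
    e_flux = annual_e * math.sqrt(step_years)   # flux uncertainty rescaled per step
    e_temp = scale * e_flux                     # temperature uncertainty per step
    return e_temp * math.sqrt(n_steps)          # root-sum-square over all steps

daily = propagated_uncertainty(1.0 / 365.0)   # ~ +/-16.6 C
annual = propagated_uncertainty(1.0)          # ~ +/-16.6 C
quarter_century = propagated_uncertainty(25.0)  # ~ +/-16.6 C
```

Because the per-step uncertainty rescales as the square root of the step length while the step count scales inversely, the two square roots cancel and the 100-year total is the same for daily, annual, or 25-year steps.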

Roy finished with, “*I’d be glad to be proved wrong.*”

Be glad, Roy.

Appendix 1: Why CMIP5 error in TCF is important.

We know from Lauer and Hamilton that the average CMIP5 ±12.1% annual total cloud fraction (TCF) error produces an annual average ±4 Wm^{-2} calibration error in long wave cloud forcing. [14]

We also know that the annual average increase in CO₂ forcing since 1979 is about 0.035 Wm^{-2} (my calculation).

Assuming a linear relationship between cloud fraction error and LWCF error, the ±12.1% CF error is proportionately responsible for ±4 Wm^{-2} annual average LWCF error.

Then one can estimate the level of resolution necessary to reveal the annual average cloud fraction response to CO₂ forcing as:

[(0.035 Wm^{-2}/±4 Wm^{-2})]*±12.1% total cloud fraction = 0.11% change in cloud fraction.

This indicates that a climate model needs to be able to accurately simulate a 0.11% feedback response in cloud fraction to barely resolve the annual impact of CO₂ emissions on the climate. If one wants accurate simulation, the model resolution should be ten times smaller than the effect to be resolved. That means 0.011% accuracy in simulating annual average TCF.

That is, the cloud feedback to a 0.035 Wm^{-2} annual CO₂ forcing needs to be known, and able to be simulated, to a resolution of 0.11% in TCF in order to minimally know how clouds respond to annual CO₂ forcing.

Here’s an alternative way to get at the same information. We know the total tropospheric cloud feedback effect is about -25 Wm^{-2}. [15] This is the cumulative influence of 67% global cloud fraction.

The annual tropospheric CO₂ forcing is, again, about 0.035 Wm^{-2}. The CF equivalent that produces this feedback energy flux is again linearly estimated as (0.035 Wm^{-2}/25 Wm^{-2})*67% = 0.094%. That’s again bare-bones simulation. Accurate simulation requires ten times finer resolution, which is 0.0094% of average annual TCF.

Assuming the linear relations are reasonable, both methods indicate that the minimal model resolution needed to accurately simulate the annual cloud feedback response of the climate, to an annual 0.035 Wm^{-2} of CO₂ forcing, is about 0.1% CF.
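Both estimates can be reproduced directly from the numbers above. A quick sketch in Python, using the essay's own figures; the linear proportionality is the stated assumption:

```python
# Two linear estimates of the cloud-fraction resolution needed to detect
# the annual CO2 forcing, using the figures given in the essay.
co2_annual = 0.035        # W m^-2, annual average increase in CO2 forcing
lwcf_error = 4.0          # W m^-2, annual average LWCF calibration error
tcf_error = 12.1          # %, CMIP5 annual total cloud fraction error
cloud_feedback = 25.0     # W m^-2, magnitude of net tropospheric cloud feedback
global_tcf = 67.0         # %, average global total cloud fraction

# Method 1: scale the TCF error by the forcing-to-error ratio.
method_1 = (co2_annual / lwcf_error) * tcf_error       # ~0.11 % CF
# Method 2: scale global cloud fraction by the forcing-to-feedback ratio.
method_2 = (co2_annual / cloud_feedback) * global_tcf  # ~0.094 % CF

print(round(method_1, 2), round(method_2, 3))
```

Both methods land near 0.1% cloud fraction, which is the resolution claim made in the text.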

To achieve that level of resolution, the model must accurately simulate cloud type, cloud distribution and cloud height, as well as precipitation and tropical thunderstorms.

This analysis illustrates the meaning of the annual average ±4 Wm^{-2} LWCF error. That error indicates the overall level of ignorance concerning cloud response and feedback.

The TCF ignorance is such that the annual average tropospheric thermal energy flux is never known to better than ±4 Wm^{-2}. This is true whether forcing from CO₂ emissions is present or not.

This is true in an equilibrated base-state climate as well. Running a model for 500 projection years does not repair broken physics.

GCMs cannot simulate cloud response to 0.1% annual accuracy. It is not possible to simulate how clouds will respond to CO₂ forcing.

It is therefore not possible to simulate the effect of CO₂ emissions, if any, on air temperature.

As the model steps through the projection, our knowledge of the consequent global air temperature steadily diminishes because a GCM cannot accurately simulate the global cloud response to CO₂ forcing, and thus cloud feedback, at all for any step.

It is true in every step of a simulation. And it means that projection uncertainty compounds because every erroneous intermediate climate state is subjected to further simulation error.

This is why the uncertainty in projected air temperature increases so dramatically. The model is step-by-step walking away from initial value knowledge, further and further into ignorance.

On an annual average basis, the uncertainty in CF feedback (±4 Wm^{-2}/0.035 Wm^{-2}) is ±114 times larger than the perturbation to be resolved.

The CF response is so poorly known, that even the first simulation step enters terra incognita.

Appendix 2: On the Corruption and Dishonesty in Consensus Climatology

It is worth quoting Lindzen on the effects of a politicized science. [16] “*A second aspect of politicization of discourse specifically involves scientific literature. Articles challenging the claim of alarming response to anthropogenic greenhouse gases are met with unusually quick rebuttals. These rebuttals are usually published as independent papers rather than as correspondence concerning the original articles, the latter being the usual practice. When the usual practice is used, then the response of the original author(s) is published side by side with the critique. However, in the present situation, such responses are delayed by as much as a year. In my experience, criticisms do not reflect a good understanding of the original work. When the original authors’ responses finally appear, they are accompanied by another rebuttal that generally ignores the responses but repeats the criticism. This is clearly not a process conducive to scientific progress, but it is not clear that progress is what is desired. Rather, the mere existence of criticism entitles the environmental press to refer to the original result as ‘discredited,’ while the long delay of the response by the original authors permits these responses to be totally ignored.*

“*A final aspect of politicization is the explicit intimidation of scientists. Intimidation has mostly, but not exclusively, been used against those questioning alarmism. Victims of such intimidation generally remain silent. Congressional hearings have been used to pressure scientists who question the ‘consensus’. Scientists whose views question alarm are pitted against carefully selected opponents. The clear intent is to discredit the ‘skeptical’ scientist from whom a ‘recantation’ is sought.*” [7]

Richard Lindzen’s extraordinary account of the jungle of dishonesty that is consensus climatology is required reading. None of the academics he names as participants in chicanery deserve continued employment as scientists. [16]

If one tracks his comments from the earliest days to near the present, his growing disenchantment becomes painfully obvious. [4-7, 16, 17] His “*Climate Science: Is it Currently Designed to Answer Questions?*” is worth reading in its entirety.

References:

[1] Jiang, J.H., et al., Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations. J. Geophys. Res., 2012. 117(D14): p. D14105.

[2] Kiehl, J.T., Twentieth century climate model response and climate sensitivity. Geophys. Res. Lett., 2007. 34(22): p. L22710.

[3] Stephens, G.L., Cloud Feedbacks in the Climate System: A Critical Review. J. Climate, 2005. 18(2): p. 237-273.

[4] Lindzen, R.S. (2001) Testimony of Richard S. Lindzen before the Senate Environment and Public Works Committee on 2 May 2001. URL: http://www-eaps.mit.edu/faculty/lindzen/Testimony/Senate2001.pdf Date Accessed:

[5] Lindzen, R., Some Coolness Concerning Warming. BAMS, 1990. 71(3): p. 288-299.

[6] Lindzen, R.S. (1998) Review of Laboratory Earth: The Planetary Gamble We Can’t Afford to Lose by Stephen H. Schneider (New York: Basic Books, 1997) 174 pages. Regulation, 5 URL: https://www.cato.org/sites/cato.org/files/serials/files/regulation/1998/4/read2-98.pdf Date Accessed: 12 October 2019.

[7] Lindzen, R.S., Is there a basis for global warming alarm?, in Global Warming: Looking Beyond Kyoto, E. Zedillo ed, 2006 *in Press* The full text is available at: https://ycsg.yale.edu/assets/downloads/kyoto/LindzenYaleMtg.pdf Last accessed: 12 October 2019, Yale University: New Haven.

[8] Saitoh, T.S. and S. Wakashima, An efficient time-space numerical solver for global warming, in Energy Conversion Engineering Conference and Exhibit (IECEC) 35th Intersociety, 2000, IECEC: Las Vegas, pp. 1026-1031.

[9] Bevington, P.R. and D.K. Robinson, Data Reduction and Error Analysis for the Physical Sciences. 3rd ed. 2003, Boston: McGraw-Hill.

[10] Brown, K.K., et al., Evaluation of correlated bias approximations in experimental uncertainty analysis. AIAA Journal, 1996. 34(5): p. 1013-1018.

[11] Perrin, C.L., Mathematics for chemists. 1970, New York, NY: Wiley-Interscience. 453.

[12] Roy, C.J. and W.L. Oberkampf, A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Engineer., 2011. 200(25-28): p. 2131-2144.

[13] Rowlands, D.J., et al., Broad range of 2050 warming from an observationally constrained large climate model ensemble. Nature Geosci, 2012. 5(4): p. 256-260.

[14] Lauer, A. and K. Hamilton, Simulating Clouds with Global Climate Models: A Comparison of CMIP5 Results with CMIP3 and Satellite Data. J. Climate, 2013. 26(11): p. 3823-3845.

[15] Hartmann, D.L., M.E. Ockert-Bell, and M.L. Michelsen, The Effect of Cloud Type on Earth’s Energy Balance: Global Analysis. J. Climate, 1992. 5(11): p. 1281-1304.

[16] Lindzen, R.S., Climate Science: Is it Currently Designed to Answer Questions?, in Program in Atmospheres, Oceans and Climate. Massachusetts Institute of Technology (MIT) and Global Research, 2009, Global Research Centre for Research on Globalization: Boston, MA.

[17] Lindzen, R.S., Can increasing carbon dioxide cause climate change? Proc. Nat. Acad. Sci., USA, 1997. 94: p. 8335-8342.

“They [climate modelers] have only demonstrated what they assumed from the outset.”

This is an instance of the logical fallacy called “begging the question”. Unfortunately, lately people who should know better confuse the name of this fallacy with “raising the question”. I wish they’d stop doing this.

The problem is climate science is modeling a planet without an ocean.

Or Roy is “working with the premise”. Pat is talking about why the premise is wrong.

So model Earth as a world covered by an ocean. One can add land to it after one can model a planet completely covered with an ocean.

Or, as they say, we are in an Ice Age because we have a cold ocean.

And we would be in hothouse climate, if we had a warm ocean.

The atmosphere follows the ocean; the atmosphere doesn’t lead the ocean.

So, since the average temperature of the entire ocean is about 3.5 C, you can’t get a hot climate until the ocean warms to higher than 5 C. And that requires at least 1000 years.

Or, if one wants to forecast a mere 100 years, the colder ocean prevents much warming.

Or one could have global clouds of any type, or zero clouds of any type, and not much warming or cooling would occur within a hundred years. Or if you think greenhouse gases are a big factor, the same applies: add any amount of greenhouse gases, and it will not make much difference within 100 years.

There does seem to be the basic problem in Climate Modeling that they want the models to “prove” their hypothesis, but to do so, the models have to assume the hypothesis is correct. If the models are seen as assuming the hypothesis and thus if they fail to make accurate predictions, the hypothesis fails, that’s fine. But Climate Science refuses to accept that.

The money or the funding is about answering the question what is the warming effect from higher levels of CO2.

And the money spent has given some results. And the results indicate that doubling global CO2 levels does not cause much warming. Or: 280 ppm + 280 ppm, equaling 560 ppm of global CO2, will not cause much warming.

Now, one could argue whether we will ever reach 560 ppm, and/or one could argue the global CO2 level may exceed 1120 ppm.

I would note that ideas about disaster related to triggering massive greenhouse gas releases involve a significant warming of the ocean. I.e., if ocean waters at 700 meters deep warm by a significant amount, this could cause methane release {methane hydrate deposits are sensitive to ocean temperature}. Or, said differently, I know of no doomsday fear connected to an increase of ocean surface temperatures. And btw, I think over the last 100 years the average ocean surface temperature has risen by about 0.5 C, and the waters under the surface have obviously warmed far less than that. Though there are probably small regions of deeper ocean, over the last 100 years {and over thousands of years}, with fluctuations that might be greater than 0.5 C, or there are a number of factors that could affect small regions. One could suppose the deeper ocean doesn’t have such fluctuations in temperature as a small region of land has {which can bounce up and down by more than 1 C}, but fluctuations of less than 1 C must occur in some deeper water regions. Anyways, one might imagine there is some very fragile methane deposit somewhere, sensitive to a small temperature change, but it seems one would need a significant change of average deep-water temperature to have a common destabilization, and earthquakes as well as temperature change might be needed. Or we have not observed it happen, though perhaps we should actually monitor it, or better still mine methane hydrates. And we would want to mine first those deposits which might be “lost” due to larger earthquake events.

Or I think the monetary loss of large and valuable deposits of natural gas disappearing is a bigger problem than whatever relatively small {as a global amount} quantity of methane is released. Or the flaring of natural gas has been a loss due to failing to recover it for a useful purpose, rather than any effect upon global air temperature. Anyhow, the deeper ocean has not warmed enough, and will not anytime soon, but we should be focusing on near-term efforts to mine ocean hydrates, mainly because natural gas is a good energy source.

But back to the point: the effort and money spent determining whether rising CO2 levels will have a large negative effect indicates it will not have such an effect within 50 or 100 years.

Thank you Pat!

So IPCC models produce very big statistics, but presumably this is not a problem because it is not an energy flux and thus cannot produce any warming. It seems that you have just defeated your own argument.

I gave up at this point, since it seems like all this is now a matter of pride for you, and the rather tetchy tone of your comments does not seem to be in the spirit of resolving a technical issue but of saving face.

Thank you Pat.

You’ve completely mangled the idea , Greg. Perhaps you should remain on the sidelines.

“I gave up at this point since it seems like all this is now a matter of pride you and the rather tetchy tone of you comments does not seem to in the spirit of resolving a technical issue but of saving face.”

Tetchy is justified when people continue to spew the same wrong thoughts over and over and over and over again. At this point, a rational person should give up even being tetchy, because there seems to be little hope of reaching zealots with rational thought. What happens after that is an escalation of mangling ideas to the point of pathological inability to see truth.

Anything further by Pat to explain his position might well be an unachievable mission to cure a form of mental illness. He should walk away with confidence that some people actually get it.

Roy contradicts himself, by stating in plain language the unavoidable point that Pat is getting at. I get the sense that he doesn’t understand Pat’s language enough to even realize this. What is it that he fears losing by agreeing in Pat’s terms? I don’t get his disagreement — it seems suspiciously tenacious.

Even if Roy agrees with the idea expressed in Pat’s paper, he can still disagree with the logic Pat used to arrive at that conclusion.

Roy can do as he likes, JGrizz. But his argument is analytically wrong.

Greg,

Context is critical in any discussion, and you’ve mixed two very different contexts together, resulting in a completely invalid argument.

These discussions show things are far from settled as the political side would like it to be. There are too many complexities and external forces that play in this to make predictions. My climatology professors in the 90’s didn’t buy the media and political rhetoric and were not on the grant gravy train to push the narrative. I am glad to see there is competing discussions. It’s not settled and it won’t be anytime soon.

Considering the very long article, and lots of comments, I wonder if the author would be willing to post a very simple summary of his main points, with no numbers.

I’ve attempted to do that by cherry-picking four sentences from the article as a summary of the whole (but I’m just a reader, not the author), although I would add two words (“happen to”) to the first sentence:

(1)

“But if the wrong physics (happens to) gives the right answer, one has learned nothing, and one understands nothing.”

(2)

“This wrong physics is present in every single step of a climate simulation.”

(3)

“The calculated air temperatures are not grounded in a physically correct theory.”

(4)

“Tuning the model to reproduce the observables merely hides the uncertainty.”

Good summary Richard.

Add (5) Climate models cannot resolve the cloud response to CO2 emissions.

And (6) Climate models cannot resolve the effect of CO2 emissions (if any) on air temperature.

To quote Inigo Montoya, “Let me e’splain. No, there is too much. Let me sum up. Future projections from climate models are as accurate as examining sheep entrails.”

My Shaman has an 82% accuracy record against the spread for the NCAA basketball season. But he does use goats, not sheep.

Pat Frank,

That is one heck of a technical rebuttal! I read the first half in detail but time constraints forced me to skim the second half. I’ll have to return later for a more thorough 2nd reading. Your logic and clarity are exemplary. I found much here that I agree with.

There is one detail that was not covered in the explanation of how errors propagate. It is not a mistake but it is missing and the average reader might not see a gap.

“Propagation of calibration uncertainty goes as the root-sum-square (rss).”

This is quite true, and the formula shown for adding in quadrature is correct; however, it is not explained that the error might be expressed as a value above or below a relevant quantity (the absolute error) or a % of that quantity (the relative error). When propagating “addition” errors (because the sum involved an addition), the error to be squared is the value, not the % of the quantity that has the uncertainty. When the formula has a multiplication or division, the % of the quantity is used.

Error propagation involves using the appropriate absolute or relative error in each single step of the formulas from beginning to end. In the case above Pat is adding the uncertainties. If the projected temperature uncertainty is ±0.1, then the uncertainty at step 2 is

SQRT(0.1^2+0.1^2) = ±0.141

At step three it is SQRT(0.1^2+0.1^2+0.1^2) = ±0.173

At step 50 it is Projected Temperature ±7.07

It is also important to recognize that this isn’t an “error” per se. It cannot be reduced by making multiple runs. As Dr. Frank tries to point out, it is an uncertainty; that is, you simply don’t know where the value is within the interval. It is not a description of an error measurement where you can find a “true” value by taking many outputs and averaging them.

In essence, (value + 3.14) is just as likely as (value – 5.0). You just don’t know which; that is the definition of uncertainty.

Wow, two comments I actually understood. 🙂 Thank you Jim and Crispin AND Dr. Frank.

Your math is just a little off. Factor out the 0.1 from all the elements and you get sqrt [(0.1)^2 * n], n being the number of terms. The sqrt of (0.1)^2 is just 0.1. So you get (0.1) sqrt (n). At step 50 you’ll wind up with +/- 0.707 not +/- 7.07.

But your methodology is correct!
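For anyone who wants to check the arithmetic, the quadrature sum is a few lines of Python. This sketch uses the illustrative ±0.1 per-step value from the comment above, and it reproduces the corrected ±0.707 at step 50 rather than ±7.07:

```python
import math

u = 0.1  # illustrative per-step uncertainty

def rss(n):
    """Root-sum-square of n equal per-step uncertainties."""
    return math.sqrt(sum(u ** 2 for _ in range(n)))

# Factoring out u gives rss(n) == u * sqrt(n):
print(round(rss(2), 3))   # 0.141
print(round(rss(3), 3))   # 0.173
print(round(rss(50), 3))  # 0.707 (the corrected 50-step value)
```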

Dr. Frank,

Very impressive. An academic who can explain himself well to those outside his area of expertise. Well done.

I see what you are saying. However, the problem is that as a practical matter, there are realistic max and min temperatures that can be achieved, and they are much smaller than the confidence interval in your results. In other words, the confidence interval you calculate is outside the realm of the physically possible, and I think that is where Spencer is going. I agree that the published confidence intervals for the models are simply ridiculous, they seem to assume that averaging a bunch of temperatures taken at different locations will reduce the measurement error, when in fact it does not. But at the same time, the confidence interval you calculate suggests possible temperatures outside the realm of the physically possible. How to reduce that I do not know.

You still do not understand- they are bands of ignorance. In the New series of Cosmos by Tyson (Episode 11), Tyson stated that “dark energy is a placeholder for our ignorance”. In the same sense, Pat’s uncertainty bands show how little confidence we can have with these models after 100 years- in other words, “none”.

BTW- Look at Figure 4. Sure, after 100 years we do not see those uncertainties, but look at 150 years, or 200 years or more. That figure shows the results with no uncertainties! Is +/- 16.6 C really that surprising after 100 years with some uncertainty analysis in that context?

First, deep thanks to Anthony and Charles for their strength of mind, their openness to debate, and for being here for all of us.

Andrew, over and yet over again, I have pointed out that uncertainty is not error. Uncertainty bounds do not represent possible outcomes. They represent an ignorance interval, within which, somewhere, the true answer lies. When the uncertainty interval is larger than any possible value, the answer given by the model is meaningless.

From the essay, “*It should be very clear to everyone that the rss equation does not produce physical temperatures, or the physical magnitudes of anything else. It is a statistic of predictive uncertainty that necessarily increases with the number of calculational steps in the prediction. A summary of the uncertainty literature was commented into my original post, here.*” (bold added)

The growth of uncertainty does not mean the projected air temperature becomes huge. Projected temperature is always within some physical bound. But the reliability of that temperature — our confidence that it is physically correct — diminishes with each step. The level of confidence is the meaning of uncertainty. As confidence diminishes, uncertainty grows.

Please. Let’s not have this again. Uncertainty intervals in air temperature projections do not represent possible physical temperatures. Enough already.

“When the uncertainty interval is larger than any possible value, the answer given by the model is meaningless.”

No. When the uncertainty interval is (much) larger than any value that the GCM could have possibly produced, then the uncertainty interval is meaningless. That was Roy’s point.

Nick,

Are you telling us that, when a value is completely uncertain then it must be correct?

Is there a limit to uncertainty?

No, I’m telling you that if a model couldn’t possibly produce a result, because it violates the conservation laws (built into the model, but not into Pat’s curve fitting), then you are not uncertain about whether it might have produced that result. Such numbers cannot be part of the uncertainty range.

Uncertainty isn’t error, Nick.

You continue to make the same mistake over, and over, and yet over again.

Pat’s uncertainty calculation is a way of revealing the uncertainty which has been hidden by model tuning. If the models were tuned to correctly replicate cloud cover, it’s the temperatures which would be way off.

What did I say about error?

It’s not an error, Nick; it’s a measure of the uncertainty which has been buried as a result of tuning the model to hindcast temperature while ignoring the effect of gross cloud cover errors.

If the models were tuned to get cloud cover right it would be the temperature which would go wild.

“Its not an error Nick, its a measure of the uncertainty which has been buried as a result of tuning the model to hindcast temperature while ignoring the effect of gross cloud cover errors.”

It is nothing like that. Nothing in Pat’s paper (or Lauer’s) quantifies the effect of tuning. No data for that is cited at all. The ±4 W/m2 statistic in fact represents the sd of discrepancies between local computed and observed cloud cover at individual points on a grid. It is not a global average. No-one seems to have the slightest interest in what it actually is, or how it is (mis)used in the calculation.

But in any case, I didn’t mention error at all. The range of uncertainty of the output of a calculation cannot extend beyond the range of numbers that the calculation can produce. That is elementary.

The problem with Nick, and others like him, is that they assume the models are basically correct and complete representations of the actual atmosphere, and that the only errors are of precision (noise) that cancel out over all the time steps. So they simply can’t understand Pat’s argument. And even when they admit that there are things missing in the theory (and therefore in the models), they insist on treating those accuracy errors as precision errors and pretend they cancel out, as if ignorance can cancel out somehow and reveal truth.

Nick, “What did I say about error?”

Prior Nick, “if a model couldn’t possibly produce a result, …”

A result a model cannot produce is error.

Uncertainty is not at all the result a model may produce. It’s the reliability of that result.

Nick, “It is not a global average.”

Yes, it is. It expresses total cloud variability.

No. It’s not meaningless. It shows how pure crap the models are.

It shows nothing about the models. There is nothing in the paper about how GCMs actually work.

Spot on.

Nick- wrong again. There is a linear emulator that was validated against many model runs.

And, ultimately what the paper shows is the models DO NOT work.

“There is a linear emulator that was validated against many model runs.”

It is curve fitting. The paper says: “For all the emulations, the values of fCO2 and the coefficient a varied with the climate model. The individual coefficients were again determined for each individual projection from fits to plots of standard forcing versus projection temperature.”

And the standard forcing comes from model output as well.

Granted, it is curve fitting based on the form of CO2 forcing using the parameter set of the model, but it did emulate the behavior of the models over a significant range of model outputs. So emulating the model outputs does say something about the models.

Except that we know GCMs invariably project air temperatures as linear extrapolations of fractional GHG forcing.

“GCMs invariably project air temperatures as linear extrapolations of fractional GHG forcing.”

That is nothing like what they do. What you have shown is that you can, by adjusting parameters, fit a curve to their temperature output.

Nick, “It is curve fitting.”

Like model tuning.

You’ve just repudiated a standard practice in climate modeling, Nick. Good job.

The climatologically standard curve-fitting shows in Figure 4 above, where the historical record is reproduced because of model tuning, and then they all go zooming off, each into their own bizarro-Earth after that.

All I do is find an effective sensitivity for a given model. After that, the linearity of the projection is a matter of invariable demonstration.

You wrote, “And the standard forcing comes from model output as well.”

Wrong again, Nick.

I’m very clear about the origin of the forcings. They’re all IPCC standard. The SRES forcings and the RCP forcings. No forcing is derived from any model.

How did you miss that? Or did you not?

Nick, “What you have shown is that you can, by adjusting parameters, fit a curve to their temperature output.”

No, Nick.

I’ve shown that with a single parameter, a linear equation with standard forcings will reproduce any GCM air temperature projection.

It’s all right there in my paper. Studied obtuseness won’t save you.

“It’s all right there in my paper.”

Yes, I quoted it. Two fitting parameters. Plus the forcings used actually come from model output.

Where do you think forcings come from?

Nick, “Where do you think forcings come from?”

I know where the forcings come from. Not from the models.

They are the standard SRES and RCP forcings. They are externally given and independent of the model.

Standard IPCC forcings and one parameter are enough to reproduce the air temperature projections of any advanced GCM.
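As a rough illustration of what such a one-parameter linear emulator looks like, here is a sketch in Python. The form and the values (fCO2 ≈ 0.42 and F₀ = 33.30 Wm⁻², taken from the eqn. 5.2 discussion above) are assumptions for illustration, not the paper's equation verbatim:

```python
F0 = 33.30  # W m^-2, total greenhouse forcing, as used in the essay

def emulated_anomaly(fco2, delta_forcing):
    """Projected air-temperature anomaly (K) as a linear extrapolation of the
    fractional change in GHG forcing. fco2 is the single per-model fitted
    coefficient; delta_forcing (W m^-2) is a standard SRES/RCP forcing change."""
    return fco2 * 33.30 * delta_forcing / F0

# e.g. a 4 W m^-2 forcing change with fco2 = 0.42:
print(round(emulated_anomaly(0.42, 4.0), 2))  # 1.68
```

The point of the sketch is only that one fitted coefficient plus externally given forcings suffices to generate a projection-shaped temperature series.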

Nick –> Why the problem with curve fitting? If I want to find out the response of a black box to an input, what does it matter? If the output follows a linear progression from an input, it really doesn’t matter what integrators, summers, adders, or feedback loops are present in the box. I’ll be able to tell you what the output will be for a given input.

If you want to prove your point, you need to be arguing about why Dr. Frank’s emulator doesn’t follow the output of the GCMs, not why GCMs are too complicated to allow a linear emulator! He gives you the equations; check it out for yourself. Show where the emulator is wrong.

“There is nothing in the paper about how GCMs actually work.”

Exactly as there is absolutely nothing in GCMs about how the climate actually works.

But there is a difference. In fact, Frank’s model simulates quite well the bullshit climastrological models, while the climastrological models are pure crap when they have to simulate the actual climate.

“Except that we know GCMs invariably project air temperatures as linear extrapolations of fractional GHG forcing.”

Nope they dont’

Look at the code.

now you CAN fit a linear model to the OUTPUT. we did that years ago

with Lucia “lumpy model”

But it is not what the models do internally

“Except that we know GCMs invariably project air temperatures as linear extrapolations of fractional GHG forcing.”

Steve Mosher, “Nope they dont’. Look at the code.”

Yes, they do. Look at the demonstration.

Wrong- uncertainty can increase very rapidly and can be greater than the “conservation laws of the models” precisely because the models do not contain the relevant actual physics that the uncertainty parameter was derived from! If my model is T=20.000 Deg. C +.000001 t (t is years) and MEASURED uncertainty is +/- 2 Deg. C annually, you dang well better believe the uncertainty can outstrip the model output and its conservation laws!

Sorry, do not collect $200, Do not pass GO.

“If my model is T=20.000 Deg. C +.000001 t”

Yes. That is your model, and Pat’s is Pat’s. And they don’t embody conservation laws, and can expand indefinitely. But GCMs do embody conservation laws, and your model (and Pat’s) say nothing about GCMs.

My model is just a simple model.

Nick, “… and can expand indefinitely.”

Wrong yet again, Nick. The emulator gives entirely presentable air temperature numbers. Just like GCMs.

The *uncertainty* can expand indefinitely, until it shows the discrete numbers are physically meaningless. Uncertainty is not a physical magnitude. It’s a statistic. It is not bound by physics or by conservation laws.

“But GCMs do embody conservation laws, and your model (and Pat’s) say nothing about GCMs.”

It says that GCMs project air temperature as a linear extrapolation of fractional GHG forcing.

You insistently deny what is right there in front of everyone’s eyes, Nick.

IMO Pat fits the modelled GMST with his equation. However, the GCMs show more warming over land than over oceans, which is what we observe too. GCMs also show some Arctic amplification, also observable. All these issues are not a result of Pat’s equation. Therefore the GCMs must do it in other ways than fitting.

“Therefore the GCM must do it in other ways than fitting.”

Or their fitting is more granular, and occurring in more places in code, than you realize.

Stokes

In the more usual situation, where the uncertainty bounds are a small fraction of the nominal measurement, it provides an estimate of the probable precision. As the bounds increase, and approach and exceed the nominal measurement, it tells us that there is little to no confidence in the nominal measurement because, for a +/- 2 sigma uncertainty, there is 95% probability that the true value falls within the uncertainty range that is far greater than the nominal measurement. If the uncertainty range exceeds physically meaningful values, then it is further evidence that there are serious problems with either the measurement or calculations.

The uncertainty is a function of the measurement (or calculation) and not the other way around. That is, you have it backwards.
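Clyde’s criterion above, that there is no confidence in a nominal result once the uncertainty bounds approach or exceed it, can be sketched in a few lines of Python. This is a toy check; the `assess` helper and its numbers are mine, purely illustrative:

```python
def assess(nominal, u, k=2):
    """Flag a nominal result whose expanded (k-sigma) uncertainty
    interval approaches or exceeds the nominal value itself."""
    expanded = k * u
    if expanded >= abs(nominal):
        return "no confidence: interval exceeds the nominal value"
    return f"relative uncertainty {expanded / abs(nominal):.0%}"

print(assess(25.0, 0.5))  # small fraction of the nominal: informative
print(assess(0.1, 0.4))   # interval dwarfs the nominal: uninformative
```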

You’re fixated on the criteria of statistical/epidemiological modeling, Nick, where there is no physically correct answer. There is only a probability distribution of possible outcomes.

None of them may prove correct, in the event.

In physical science, there’s a correct answer. One number. The question in science is accuracy and the reliability of the number the model provides.

Uncertainty tracks through a calculation and ends up in the result.

Take a look, Nick. Where do any of the published works restrict the width of the uncertainty interval?

Your criteria are wrong.

Neither you, nor any of the folks who insist as you do, understand the first thing about physical error analysis.

“Uncertainty tracks through a calculation”And that is exactly what you didn’t do. You nowhere looked at the GCM calculation at all.

Nick, “You nowhere looked at the GCM calculation at all.”

I looked at GCM output, Nick. All that is necessary to judge their accuracy. By comparison with known observables.

How they calculate their inaccurate output is pretty much irrelevant.

“I looked at GCM output, Nick. All that is necessary to judge their accuracy. By comparison with known observables.”

In fact I would add that is exactly what Lauer did. He took observational satellite data and compared it to the output of GCMs and concluded (in the case of cloud forcing) that the CMIP5 GCMs did a pretty lousy job of correlating to the observations.

“When the uncertainty interval is (much) larger than any value that the GCM could have possibly produced, then the uncertainty interval is meaningless”

You’ve got that exactly backwards, Nick: when the uncertainty is much larger than any value the GCM could have possibly produced, then it’s the GCM that is meaningless.

Uncertainty and precision are two entirely different things. By this point it’s clear you are willfully not understanding that.

I am not a mathematician but the conception of calculated error bars reflecting uncertainty in the models as opposed to possible temperatures is obvious to me. I cannot in fact fathom how people fail to understand this issue. There seems to be a fundamental misconception at play, by which the outputs of mathematical models are confused with reality.

That’s why “willfully obtuse” is a phrase people must learn. Willfully obtuse people are actually gifted, in a way. They have an ability to perform microsurgery on their own minds, cauterizing themselves against obvious facts. Some people can wiggle their ears; willfully obtuse people can induce a fact-blindness within themselves that lets them not see what they’d rather not see – even if it is paraded before them.

Many bright, willfully obtuse people are employed by climastrology. Born with a natural talent for it, they have honed their innate talent into a marketable skill – and have been financially rewarded for it.

Imagine I step forward and say I was an eye witness to a crime and then state I am quite certain the perp stood between 2 feet and 12 feet tall. Yet, when pressed, I can’t commit to a height estimate range any narrower than that. How likely do you think it is I will be called to testify at trial? Climate models bring the same amount of useful information to their domain, yet are considered star witnesses!

We’re all stuck in a Kurt Vonnegut novel and can’t get out.

When one ventures down the Rabbit Hole that is climate modelling, one then finds oneself stuck in Wonder Land, watching all the madness around. In Climate WonderLand the climate scientists are the Hatters turned “mad” at their vain, Sisyphean attempts to make nature conform to a contrived fiction.

+1

Well, the climate models look to be suggesting a height somewhere between -10 and 22 feet tall.

This stuff is largely above my pay grade, but in relation to how the “confidence interval” (or lack of confidence, in this case…) impacts actual temperature: it does not. What it says, however, is that if the bounds are so wide as have been calculated here, that the projection made by the modeler is meaningless. The projected temperature has no predictive value: it is a meaningless number, one that might just as well have been pulled from a hat.

The models, therefore, are not fit for public policy purposes, since they have no demonstrable skill in prediction. Absent such skill, you have only an illusion of what the outcome will be, one that is entirely undercut by the gross uncertainty that attends the particular projection.

Taleb deals with this issue as well in his “Black Swan” book, where he ridicules financial modeling -among other pastimes – and where he usefully quotes Yogi Berra: “It’s tough to make predictions, especially about the future.” Chapter 10 of that book is entitled “The Scandal of Prediction”, and covers the types of issues that plague all modelers of chaotic systems.

You missed his whole point. They do not predict possible temperatures outside the realm of possibility. The confidence intervals tell you that the underlying models are worthless.

Agreed.

His point is that the possible futures predicted by the models are so uncertain that all possible futures known by simple physical contraints lie within their bounds. Therefore the models tell us nothing useful.

After the models are presented we know nothing more than before we see the models—-except that their presenters are scientific charlatans.

You’re missing the whole point and unfortunately falling into the same thinking as Spencer.

He’s not calculating temperatures, he’s calculating uncertainty. None of this has anything to do with temperature projections, only what I will call margins of error for simplicity.

The fact that the model’s physics allow for unrealistic temperatures within their error margins proves that they aren’t reliable.

Andrew, I don’t know if Pat is right or not. But what he is saying, which makes your argument wrong if he is correct, is that what you are taking to be ‘realistic max and min temperatures’ are no such thing.

His argument is that they are bounds of uncertainty.

To give a very simple example, if I understand him correctly, suppose there is a cloudburst, one of many this autumn. That cloudburst delivers a certain number of inches of rain.

I say, the problem with this measuring instrument is that it’s very uncertain. The real rainfall today could be anywhere between 0.5 and 3 inches, that is how bad it is.

You would then be replying, this cannot be. Rainfall in these cloudbursts never varies by that much; they always fall in a pretty narrow band of inches of rain. They never fall below 1 inch and never deliver more than 1.5. So you must be wrong, your claim is refuted by experience.

True on the usual size of the bursts, but irrelevant. Because the level of uncertainty about how much rain fell is what is being asserted, which has nothing to do with the amount of variability of rainfall from these bursts.

Variability is a physical measure of quantity of rain. Uncertainty is to do with the confidence with which we measure it. The limits of accuracy of our measuring equipment.

I hope I understood Pat correctly…..

Andrew K. – Suppose you go to the gym and step on their scale to weigh yourself. The scale shows 200 lb which you find strange since you weighed 180 this morning on your own scale. You step off, see the scale says zero, step on again and it now shows 160. Several more checks show weights that vary all over the place. You conclude that the scale is broken or – in measurement speak – it has a very large uncertainty which makes it unfit for purpose.

All Pat Frank has done is use the well established process of measurement uncertainty analysis to show that GCM projections of GMT 80 or 100 years out have an uncertainty far too large to have any confidence in the results. The fact that the uncertainty in 80 years is something like +/- 20 C is unimportant. The game is over as soon as the uncertainty reaches the magnitude of the projected result. If my model projects a 1 year temperature increase of 0.1 C with an uncertainty of +/- 0.4 C my model is not very useful. And anyone who thinks that uncertainty in future predictions (projections, forecasts, SWAGs) does not increase the further out in time they go, needs to find a way to reconnect to reality.
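Rick C’s point about uncertainty growing with each projection step can be run through the standard quadrature rule for compounding independent per-step uncertainties. A toy sketch, assuming independence; the 0.1 C per step and ±0.4 C are his illustrative figures, not values from the paper:

```python
import math

trend = 0.1   # projected warming per step (deg C), illustrative
u_step = 0.4  # per-step uncertainty (deg C), illustrative

# Independent per-step uncertainties combine in quadrature, so the
# combined uncertainty grows as sqrt(n) while the signal grows as n.
for n in (1, 10, 100):
    print(f"after {n:3d} steps: projection {n * trend:5.1f} C, "
          f"uncertainty +/- {math.sqrt(n) * u_step:.2f} C")
```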

You got it exactly right, Rick C. Thanks for the very succinct explanation.

What you have described so readily continually escapes the grasp of nearly all climate scientists of my experience.

I’m really lucky that somehow Frontiers found three scientists that understand physical error analysis, and its propagation.

Early in the Pat Frank uncertainty story I touched on using an analogy with bathroom scales for body weights. They provide neat ways to explain certain types of errors. But soon, there were too many bloggers presenting other analogies and the field got too bunched.

The scales story is useful as well to explain aspects of signal and noise. When your mind has grasped the weighing principles for the naked human body, you can change the object to one of a tenth of a kilogram instead of 100 kg or so. Uncertainty acceptable for body weights becomes impossible for a tenth of a kg. For that you need to build a new design of scales, like letter scales for envelope postage, just as a new design of GCM seems to be required for some purposes related to uncertainty.

Pat, years ago you commented that you had not met anyone from the global warming community who knew that errors could be classed into accuracy and precision types. I thought you were exaggerating. From what has been aired since then, I see what you meant. It is like a different sub-species of human scientists has been uncovered by “climate change, the emergency” and its precursors.

Stick with it, Pat, you are winning. Geoff S

Thanks, a far better explanation than my attempt, and I am a lot clearer for it.

The dichotomy between those who understand exactly what is actually shown by Pat’s paper, and those who simply cannot, or will not, grasp what is being said, is startling.

Which all by itself is very illuminating.

There is an inflexibility of thinking being demonstrated, when we see one example after another of what is NOT being said and why, and one example after another of the correct description using varied verbiage and analogies, and why, and yet despite this, there is very little evidence of a learning curve for those who have it wrong.

The cloud of uncertainty is larger than the range in possible values of the correct answer.

This says nothing about the correct answer, it means that no light has been shed on what it is.

It means that our lack of knowledge that we had to start with regarding where in the range of possible correct answers, has not been narrowed down by these models.

They tell us nothing.

They cannot tell us anything.

They cannot shrink the range of possible correct answers that exists prior to constructing the model.

Throwing darts at a dart board through a funnel does not mean you have good aim.

A funnel is not the same as aiming.

Imagine the dartboard is drawn in invisible ink…you do not know where the bullseye is.

All you know is the funnel has constrained where the dart can go.

But is that analogy true for climate models – if they are wrongly modelling clouds then they will do this consistently rather than randomly, unlike your analogy. I’d be more inclined toward the following analogy.

At home I weigh 180, at the gym it says 160, and at my friend’s place 210. But if I put on 10 lb, will the three new weights be 190, 170, and 220, or at least will the measured change be close to 10? That’s what is important.

As I see it, Pat Frank has looked at the uncertainty due to cloud cover in a single run. But we want the uncertainty for the difference in two runs (one calibrated). The earlier models have predicted actual current temperatures quite well. And yes, there is uncertainty in the difference between calibrated and scenario runs – that’s why the IPCC has a 1.5-4.5 range. Of course, if you compare runs from different models using different RCPs (and then compare to actual CP) you will have a huge variation in results (but then you’d be nuts to think that was anything but meaningless).

How will you know if you put on ten pounds, and not 4 or 5, or if you lost weight?

Each scale gives a different answer but is consistent. If I weigh myself at 1 minute intervals it will give the same answer (within measurement error, but that’s a different issue not relevant here). If I’m heavier, all scales will give me a heavier weight (in the same way all climate models give higher temperatures with more CO2). And how close to the true 10 lb will be important (and the uncertainty of the change is unlikely to bear much – if any – relationship to the uncertainty of the initial weight). In the case of climate models, they’ve been close enough since first developed to give useful answers.

“… they’ve been close enough since first developed to give useful answers.”

Could not disagree more.

Twenty years ago, did they give an accurate idea of the next twenty years?

These modeled results will be completely discredited in less than the 12 years we have been assured by climate experts AOC and Greta Thunberg is the beginning of the end of the world.

I will bet anyone a pile of money on it.

Two years running with an average temp of TLT given by satellite data below the 1980-2010 average, between now and then.

If that does not occur, I lose.

The scales in your analogy are all off because you didn’t really weigh 180 pounds. Prove me wrong!

The ONLY consistency displayed by climate models is a consistency FORCED upon them BY DESIGN. They are NOT getting it generally correct. At least, there is NO reason to believe they are, since their innate uncertainty makes their outputs useless. This is the part that’s so sad – millions don’t realize they are being duped by brazen grifters.

What you are missing is that in your example each measurement has an uncertainty. Your example is mainly dealing with measurement error, not uncertainty. Uncertainty tells you that you would have measurements like 180 +/- 5 lbs, 160 +/- 4 lbs, 210 +/- 8 lbs. The next set would be 190 +/- 5 lbs, 170 +/- 5 lbs, and 220 +/- 8 lbs.

It means you are uncertain about what each value is every time you weigh so that you couldn’t really calculate that you had a change of 10 lbs.

Exactly.

Same issue as many commenters.

Inability to conceptualize the difference, is that the problem?

Brigitte,

You claim that “if they are wrongly modelling clouds then they will do this consistently rather than randomly like your analogy”.

You are not justified in making this assumption. Clouds change all of the time. A set of errors that applies to heavy, low cloud might not apply to high, light cloud. There is no way that you can assume that the errors cancel, especially when the experts cannot even agree whether the net cloud feedback is positive or negative overall. Geoff S

How do you know the scales will all give you a heavier weight? You are making the same mistake all the supporters of the GCMs are making – that all the scales give accurate readings plus or minus a fixed bias amount. But you have not accounted for the uncertainty associated with each scale. That’s what the climate modelers do – assume all their models are accurate with nothing more than a fixed bias in the output. They ignore that there is an uncertainty associated with each of their models, and that uncertainty factor is *not* a fixed bias.

If all the scales have an uncertainty of +/- 5lbs then you won’t know whether you gained 10lbs or not.
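A minimal sketch of why, assuming the two weighings are independent so their uncertainties combine in quadrature (the GUM rule for a difference); the ±5 lb figure is the one from the comment above:

```python
import math

def u_diff(u1, u2):
    """Combined standard uncertainty of the difference of two
    independent measurements: quadrature sum of the two u's."""
    return math.sqrt(u1**2 + u2**2)

u = u_diff(5.0, 5.0)   # ~7.1 lb standard uncertainty on the change
gain = 190.0 - 180.0   # nominal 10 lb change
print(f"nominal gain {gain} lb, uncertainty +/- {u:.1f} lb")
# At 2 sigma the interval is ~ +/- 14 lb, so a nominal 10 lb gain
# cannot be confidently distinguished from no change at all.
```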

Rick C PE

+1

To add another element to your excellent illustration, suppose I know that my weight historically varies between 170 and 190 and almost never goes out of that range, and suppose further I could use settings on the scale to arbitrarily set minimum and maximum readings to be within that range based on, let’s say, past readings from a separate set of reliable weight measurements (model “tuning”). The measuring mechanism of the scale itself is still varying wildly outside that range, but you don’t see that because of the arbitrary minimums and maximums that are baked in. Now you have a scale that is outputting readings that appear to be meaningful, but that is an illusion. The reading you see on any given day is completely meaningless as a measure of reality.

“…suppose further I could use settings on the scale to arbitrarily set minimum and maximum readings to be within that range based on…”

IOW…a funnel.

Constraining the output is exactly what I was referring to in my “throwing darts at a board through a funnel” analogy.

Every shot hits within some predetermined boundary, but it has nothing to do with my aim.

And no one can see the bullseye, on top of that.

All that is known is that the darts hit the board.

But the purpose of weighing yourself or running a climate model is to find out what you do not know.

In the example used by Brigitte, the parameters have been changed…she is asserting a set of scales that read different from each other, but each always gives the same result, and if ten pounds is gained, each will increase by ten pounds.

Several things about this remove this analogy from being…analogous to the original issue.

One is that assumptions are being made about the true values, and ability to know it.

If you know you gained or will gain, exactly ten pounds, there is no need for a scale!

And a scale which reads different from another, but both read the same each time, is not uncertainty, it is zeroing error.

*Market earnings being discussed, so am now distracted, lost train of thought.*

Andrew,

The situation is ridiculous, but it can’t be helped. We simply don’t know enough about cloud behavior to reduce the uncertainty in the simulations. The simulations are trying to create numbers, but the uncertainty is inherently huge no matter what the computers do. The uncertainty is part of the science, which shouldn’t be separated from the model calculations. Also, the uncertainty about clouds is separate from temperature measurement issues.

No, the confidence intervals measure confidence, not temperature. That’s the whole point. The fact that that means the models could produce temperatures far from possible shows how wrong the models are, not the other way around.

+10

You might consider the sign…..

” ….confidence intervals measure confidence, not temperature. ”

You mean the +/- 1 standard deviation intervals about the mean, as given in the graphs of Pat and Roy. (A confidence interval is defined differently, but based on the sd, and would be much narrower.)

You can see the spread of the interval is a measure of uncertainty; the wider, the more uncertain; the narrower, the more confidence you can give to the estimated mean. But the values of sd have to be given in temperature since they are dependent on the mean, otherwise the +/- sign would be invalid. They, and what they encompass, must be regarded as potential temperatures, given your model and procedures are correct.

” that means the models could produce temperatures far from possible shows how wrong the models are”

With “the models” you mean the GCMs, but in fact it is Pat’s error propagation model which produces the unbelievable spread. The GCMs don’t do this, as Roy showed.

sorry…… is a measure -> as a measure

>>With “the models” you mean the GCMs, but in fact it is Pat’s error propagation model which produces the unbelievable spread. The GCMs don’t do this, as Roy showed.<<

All Roy showed is that GCMs do not produce an unbelievable spread in predicted temperatures. Pat Frank's emulator did not produce an unbelievable spread in predicted temperatures either. Pat Frank's analysis of the propagation of uncertainty produced an unbelievable spread in uncertainty. The GCMs have not performed any appropriate analysis of the propagation of uncertainty. Not sure that they know where to begin…

“Pat Frank’s emulator did not produce an unbelievable spread in predicted temperatures either. Pat Frank’s analysis of the propagation of uncertainty produced an unbelievable spread in uncertainty.”

An uncertainty in temperature has to be derived as, and be given in terms of, temperature – how else? Large spread in projected temperatures means large uncertainty in possible temperatures. When the spread ranges into impossible temps, the process should be stopped and revised.

“The GCMs have not performed any appropriate analysis of the propagation of uncertainty. Not sure that they know where to begin…”

Sensitivity analyses have been done, as I conclude from the titles of cited papers. That seems to me the way to do it, repeated model evaluations with variability on selected parameters. The between-model comparisons (within ensembles) are better than nothing, but not the optimum.

Ulises, “When the spread ranges into impossible temps, the process should be stopped and revised.”

When the uncertainty spread reaches impossible temperatures, it means the projection is physically meaningless. That is the standard interpretation of an uncertainty interval.

Ulises, “That seems to me the way to do it, repeated model evaluations with variability on selected parameters.”

That tests only precision, not accuracy. Such tests reveal nothing of the reliability of the projection.

Uncertainty analyses are about accuracy. A wide uncertainty interval means the model is not accurate.
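The precision/accuracy distinction is easy to demonstrate numerically. In this toy simulation (my own invented numbers, not from any climate model), every perturbed run shares one systematic bias, so the run-to-run spread stays small (high precision) while the answer is consistently wrong (low accuracy):

```python
import random
import statistics

random.seed(0)
true_value = 10.0
bias = 4.0  # shared systematic (calibration) error, unknown to the modeler

# "Perturbed parameter" replicates: each run scatters only slightly
# around the biased value, so run-to-run spread measures precision only.
runs = [true_value + bias + random.gauss(0, 0.2) for _ in range(50)]

spread = statistics.stdev(runs)                      # small: looks confident
accuracy_error = statistics.mean(runs) - true_value  # ~bias: actually large
print(f"run-to-run spread {spread:.2f}, error vs truth {accuracy_error:.2f}")
```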

“With “the models” you mean the GCMs, but in fact it is Pat’s error propagation model which produces the unbelieveable spread. The GCMs don’t do this, as Roy showed.”

What Pat’s analysis shows is that the iterative process of the GCMs results in uncertainty intervals that get wider and wider with each iteration. The iterations should stop when the uncertainty overwhelms the output, e.g. trying to measure a 0.1degC difference when the uncertainty is more than +/- 0.1degC. As Pat has shown, once this tipping point is reached you no longer know whether the GCMs produce a believable result.

Uncertainty is not error. It is just uncertainty.

“What Pat’s analysis shows is that the iterative process of the GCMs results in uncertainty intervals that get wider and wider with each iteration.”

It doesn’t show anything about GCMs at all. It includes no information about the operation of GCMs. In fact the iteration period of a GCM is about 30 minutes. Pat makes up a nonsense process in which the iteration is, quite arbitrarily, a year.

Nick, “It doesn’t show anything about GCMs at all.”

It shows that GCMs project air temperature as a linear extrapolation of fractional GHG forcing. The paper plus SI provide 75 demonstrations of that fact.

Nick, “It includes no information about the operation of GCMs.”

It shows that linear forcing goes in, and linear temperature comes out. Whatever loop-de-loops happen inside are irrelevant.

Nick, “In fact the iteration period of a GCM is about 30 minutes.”

Irrelevant to an uncertainty analysis.

Nick, “Pat makes up a nonsense process in which the iteration is, quite arbitrarily, a year.”

Well, arbitrarily a year, except that air temperature projections are typically published in annual steps. Arbitrarily a year except for that.

Well, except that L&H published an annual LWCF calibration error statistic.

Let’s see: that makes yearly time steps and yearly calibration error.

So, apart from the published annual time step and the published annual average of error,

~~what have the Romans ever …~~ oops, it’s arbitrarily a year.

“Uncertainty is not error. It is just uncertainty.”

Yeah, you know it when you feel it. Why bother with definitions?

Uncertainty isn’t error, ulises.

The uncertainty interval doesn’t give a predicted range of model output. As Tim Gorman pointed out, the uncertainty interval gives a range within which the correct result may be found.

However, one has no idea where, within that interval, the correct value lies.

When the uncertainty interval is larger than any possible physical limit, it means that the discrete model output has no physical meaning.

“Uncertainty isn’t error, ulises. ”

Well, I’ve heard this a number of times, yet never accompanied by arguments, nor by an explanation of what must go wrong if this lemma is not observed.

Seems rather to me like an amulet which makes your reasoning critic-proof.

“Uncertainty” is, if good for anything, a superior term to “error”. The “JCGM Guide to the Expression of Uncertainty in Measurement” – you cited it – subsumes everything which was formerly known as “error analysis” under the umbrella of “uncertainty analysis”. Nothing new, no changes but for a unified notation incl. renaming. What used to be a “standard deviation” is now to be named the “standard uncertainty Type A”. (Short: “u”, variance “u^2”). No change in how to derive it, nor in interpretation, nor in subsequent usage (e.g. propagation).

Your study deals with “error” propagation. Can you explain why it is in place there ?

Your Eqn. 6 has u^2 terms on the rhs, and sigma on the left, where one would expect u_c, the combined uncertainty, which would be in line with the unnumbered previous eqn in the text, as well as with the notation in the Guide.

Why is the u-notation dropped in mid-equation? A standard deviation (or uncertainty) is expected on the lhs. Sigma is neither: it is the parameter of the sampled underlying population and *unknown*, in practice replaced by the sd as the sample estimate, but *not* equal to it. All this is explained in the Guide.
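For reference, the GUM rule under discussion, the law of propagation of uncertainty for uncorrelated inputs to y = f(x_1, …, x_N), can be written as:

```latex
% JCGM 100 (GUM), law of propagation of uncertainty,
% uncorrelated inputs x_1, ..., x_N:
u_c^2(y) = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i)
```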

“The uncertainty interval doesn’t give a predicted range of model output.”

In the best case it should. That’s how it is defined. If your sample is from a Normally distributed population, e.g. the mean +/- 1sd comprises the well-known 68% of the potential outcomes.

“As Tim Gorman pointed out, the uncertainty intervals gives a range within which the correct result may be found.

However, one has no idea where, within that interval, the correct values lays.”

If you’re talking about measurements, under controlled conditions you may assume there is a “correct value” that can be approached.

For a new measurement, you can’t predict its outcome. But you *can* predict that it is more likely to fall closer to the mean of the distribution than farther off. The interval is centered about the mean.

“When the uncertainty interval is larger than any possible physical limit, it means that the discrete model output has no physical meaning.”

Don’t know what “discrete” means in that context. Ignoring that, I’d say that first of all, you have to attribute physical meaning to the model output, otherwise you can’t compare it to other physical units. “Larger than”, or not, can only be assessed on the same scale. Then you may conclude that the model output is impossible in the given context – e.g., 50 C can well be accepted for a cup of tea, not so for the open ocean surface. (Asteroid impacts excluded.) If the error propagation model runs into impossible values, it is time to stop and work on it.

I am working to understand this and in that effort I’ll quote Andrew’s objection and ask a question; “the confidence interval you calculate suggests possible temperatures outside the realm of the physically possible.” If I’m understanding anything here, that statement is precisely the criticism this whole essay was designed to correct. When Frank writes, “They come from eqns. 5 and 6, and are the growing uncertainty bounds in projected air temperatures. Uncertainty statistics are not physical temperatures.” is he not directly claiming this objection is unfounded? His error boundaries are NOT confidence intervals. Am I even on the right track here?

The confidence interval gives boundaries within which the actual temperature might lie but it doesn’t specify that the actual temperature has to be at either edge of the boundaries. The uncertainty interval is not a probability function that predicts anything, it is just a value.

Even thinking that the uncertainty level is associated with what the actual temperatures could be is probably misleading. I like to think of it in this way: NOAA says this past month is the hottest on record by 0.1degC. If the uncertainty interval for the calculation is +/- 0.5degC then how do you really know if the past month is the hottest on record? The uncertainty interval would have to be smaller than +/- 0.1degC in order to actually have any certainty about the claim.

It’s the same for the models. If they predict a 2degC rise in temperature over the next 100 years but the uncertainty associated with the model is +/- 5degC then is the output of the model useful in predicting a 2degC rise? It could actually be anything between -3degC and +7degC.

And I think this is what Pat is really saying – if the uncertainty interval is larger than the change they are predicting then the prediction is pretty much useless. It doesn’t really matter how large the uncertainty interval is as long as it is larger than the change the model is trying to predict – the model prediction is just useless in such a case.

“Uncertainty *is* a strictly independent *value*. It is not a variable. The uncertainty at step “n” is not a variable or a probability function. Therefore it can have no correlation to any of the other variables.”

Seems just so much common sense. For any given proposition (mathematical or otherwise) there is an uncertainty value intrinsic to the proposition itself. The intrinsic uncertainty value of the proposition is entirely independent of the proposition’s calculated truth value.

Take, for example, the proposition, “God exists.” Since this proposition is wholly untestable, we could say the intrinsic uncertainty value is 100%. The uncertainty value, however, has no relation to the proposition’s actual truth value. The proposition is either objectively true or false regardless of intrinsic uncertainty.

syscompuing:

A very good analogy. But it will just go over the heads of the warmists.

Yes, you are right, see my post above. The error boundaries in question are meant to be +/- 1sd intervals, while “confidence intervals” have another definition in Statistics.

But try to teach that to the crowd here.

“….working to understand…”—– Keep on !

But don’t rely on what is presented here. It is like observing some wrestling in mud through a smoke screen.

Always try to refer to some basic text to better understand the issue.

Good Luck !

Excellent, Pat.

In your original paper you used the term “error”, as in propagating error, etc.

Maybe – just maybe – Roy and others would have understood your reasoning better had you instead used the term “uncertainty”.

Just a thought.

But brilliant – Thank you.

Hans K

Hi Hans — you have a point. Propagated error is what uncertainty is all about. The terms are connected, especially through calibration error.

Physical scientists and engineers would not be confused by the terms. That leaves everyone else. But one can’t abandon proper usage.

In medieval studies, my work frequently included physics (experimental archaeology) and stuff that was understood by insiders but understood *differently* by everybody else. That’s the point where my editors (who understood the field at large but not my specific subset of it) would say “explain in footnote,” and once done they could follow the evidentiary and logical chains. So I think in this case Hans K’s observation is worth considering.

Brilliant work. Thanks.

>>

Physical scientists and engineers would not be confused by the terms.

<<

This engineer is confused. You don’t compute an average by dividing by a time unit. You create an average by dividing by a unit-less value or count–the number of items averaged. When you divide by a time unit, you change the value to a rate. It pains me, but I’m going to have to agree with Mr. Stokes here.

Jim

in response to Jim Masterson

>>>>

Physical scientists and engineers would not be confused by the terms.

<<

This engineer is confused. You don’t compute an average by dividing by a time unit. You create an average by dividing by a unit-less value or count–the number of items averaged. When you divide by a time unit, you change the value to a rate. It pains me, but I’m going to have to agree with Mr. Stokes here.<<

This engineer is not confused. Sum 20 instances of yearly uncertainty and divide by 20 instances. Average yearly uncertainty.

>>

Average yearly uncertainty.

<<

Yes, but it’s not an average uncertainty PER year. You divided by an instance and not a time unit.

Jim

>>Yes, but it’s not an average uncertainty PER year. You divided by an instance and not a time unit.<<

And my understanding is that is exactly what Pat Frank et. al. have been trying to tell Nick Stokes. They took an average annual uncertainty, based upon 20 years of discrete instances of annual uncertainty, and propagated that as a discrete instance value of uncertainty in the emulator for, say, 100 years of predicted temperature response.

Then it’s non-standard terminology. You don’t add a ‘/year’ term, which implies a rate. Averages are computed using an integer count. It doesn’t change the unit you are averaging. You can plot an average on the same graph as the items you are averaging.

Jim

>>Then it’s non-standard terminology. You don’t add a ‘/year’ term, which implies a rate. Averages are computed using an integer count.<<

Who added a '/year' term?

The term that Pat Frank used, many times, is "annual average ±4 Wm-2 LWCF error".

>>

Who added a ‘/year’ term?

<<

I guess you haven’t been following the argument.

>>

Tim Gorman

October 20, 2019 at 5:36 am

Total miles driven in 10 years divided by 10 years is the annual average of miles driven, i.e. miles/year.

<<

Miles per year is a speed, is it not?

Jim

>>I guess you haven’t been following the argument.

>>

Tim Gorman

October 20, 2019 at 5:36 am

Total miles driven in 10 years divided by 10 years is the annual average of miles driven, i.e. miles/year.

>>

Miles per year is a speed. is it not?

<<

I have been following the argument and note that many suggest Nick Stokes creates a distracting argument. While Tim Gorman may have been distracted into the diatribes on the average price of Apple stock and driving miles per year, the original argument centered on the +/- 4 Wm-2 average annual LWCF error in Pat Frank’s paper.

To further indulge in the distraction, miles per year may be considered a speed (however in the example given it would be silly to interpret it that way). It is more reasonably interpreted as an annual rate of vehicle utilization.

Miles per year is not a rate. Velocity is a vector quantity. In physics it is displacement divided by time. If I take one step forward and one step backward my velocity is ZERO, i.e. no displacement. The number of miles driven does not equal displacement. If I leave home and drive 100,000 miles around in a circle over a year just to return home then my velocity is zero.

Velocity is a rate of displacement. Miles driven in a year does not specify a displacement and therefore should not be considered as a velocity.

>>

. . . an annual rate of vehicle utilization.

<<

In other words, an average speed–not an average distance. I agree that Mr. Stokes is diverting the argument. However, Mr. Gorman is using wrong terminology. Thanks for agreeing with me without agreeing with me.

Jim

>>>>

. . . an annual rate of vehicle utilization.

<<

In other words, an average speed–not an average distance. I agree that Mr. Stokes is diverting the argument. However, Mr. Gorman is using wrong terminology. Thanks for agreeing with me without agreeing with me.

<<

Whoa, slow down (no pun intended). Allow me to be the one to determine whether or not I agree with you.

Annual rate of vehicle utilization, in my way of thinking, is an average distance, not an average speed. The average speed at which that distance was traveled is another measurement altogether.

If I were looking for an engineer to study factors that impact the maintenance cost of a vehicle, such as utilization in miles per year, or utilization in operating hours per year, or average speed of operation in miles per hour, and someone came along telling me that everyone knows the average speed is determined by the total miles driven divided by the number years in which those miles were driven, I would look for another engineer.

>>

Whoa, slow down (no pun intended). Allow me to be the one to determine whether or not I agree with you.

<<

Yeah, I knew it was too good to be true.

>>

Annual rate of vehicle utilization, in my way of thinking, is an average distance, not an average speed.

<<

If you were really an engineer, I wouldn’t have to say this. The term dx/dt is the RATE of change of x with respect to time. If you stick “rate” in there, then it’s a change WRT time.

>>

. . . someone came along telling me that everyone knows the average speed is determined by the total miles driven divided by the number years in which those miles were driven, I would look for another engineer.

<<

Yet, that’s exactly how you determine average speed over a period of time. A rate of utilization, may mean when the vehicle is actually being used; i.e., when someone is driving it. Still, it’s an average speed and not an average distance.

You didn’t really “define” your terms, so I’m free to assign my own meanings to them. If you would like to define your terms, then I’ll decide accordingly. However, dividing by a time creates a value that changes with time and is usually a rate. Any other meaning is non-standard.

You may go find another engineer–you won’t hurt my feelings. And this is a stupid argument. I remember arguing with another EE about the true meaning of pulsating DC. That was a stupid argument too.

Jim

An average with a ‘per year’ denominator in a (+/-) uncertainty doesn’t imply a rate, Jim. There’s no physical velocity.

Furlongs per fortnight is a rate. (+/-)furlongs per fortnight is an uncertainty in that rate. It is not itself a rate.

Jim, “The term dx/dt is the RATE of change of x with respect to time.”

And (+/-)dx/dt. Is that a rate, too?

It’s the (+/-)dx/dt that is at issue, not dx/dt.

>>

Tim Gorman

October 22, 2019 at 5:04 pm

miles/per year is not a rate.

<<

It’s a speed and that makes it a rate.

>>

Velocity is a vector quantity.

<<

True, so? You can always take the magnitude of a vector–which gives you a scalar. The magnitude of velocity is speed.

>>

In physics it is displacement divided by time.

<<

In physics, it’s the rate-of-change of a distance with respect to time. I don’t know what you mean by displacement (I know what a displacement is, but your use of it is non-standard).

>>

If I take one step forward and one step backward my velocity is ZERO, i.e. no displacement.

<<

If you take one step forward, you accelerate forward and then decelerate. That means your instantaneous velocity increases above zero and then goes back to zero. Taking one step backward does the reverse–you accelerate backward and then decelerate. Your instantaneous velocity increases and then goes back to zero. Your average speed is two steps divided by the time it takes you to make those steps. If you paused between steps, then that just reduces your average speed. Average velocity may be zero, but your average speed isn’t.

>>

The number of miles driven does not equal displacement.

<<

Again, so what? I don’t know what you mean by your non-standard use of displacement.

>>

If I leave home and drive 100,000 miles around in a circle over a year just to return home then my velocity is zero.

<<

If you traveled 100,000 miles, then your speed cannot be zero–it’s physically impossible.

>>

Velocity is a rate of displacement. Miles driven in a year does not specify a displacement and therefore should not be considered as a velocity.

<<

Again, I don’t know what you mean by your non-standard use of displacement. Velocity is an instantaneous rate-of-change of distance with respect to time. A velocity also has a direction component. Since you’re not specifying the direction, you must be talking about speed.

Jim

“If you take one step forward, you accelerate forward and then decelerate. That means your instantaneous velocity increases above zero and then goes back to zero. Taking one step backward does the reverse–you accelerate backward and then decelerate. Your instantaneous velocity increases and then goes back to zero. Your average speed is two steps divided by the time it takes you to make those steps. If you paused between steps, then that just reduces your average speed. Average velocity may be zero, but your average speed isn’t.”

I’m sorry to tell you this but velocity *is* a vector and is defined as displacement/time. Zero displacement means zero velocity. You are trying to avoid this definition by speaking about “instantaneous” velocity but 100,000 miles/year is *not* a measure of “instantaneous” velocity. It is a measure of miles driven, not a measure of either speed (the scalar value of the velocity vector) or velocity. Three cups of flour used in a cake is not a rate either, it is an *amount*. Yet you can use the measurement 3 cups of flour/cake to determine how much flour you will need if you are going to bake multiple cakes.

And as Pat has pointed out, +/- 4 W/m^2 is an interval, not a rate. Thanks, Pat!

Dr. Frank,

You needn’t waste your time on me. I don’t understand your point about uncertainty. Maybe someday, I’ll be smart enough to figure it out.

Jim

>>

Zero displacement means zero velocity.

<<

Okay, I put you in a rocket sled and accelerate you for one minute at 40g’s in one direction. Then I turn you around and accelerate you in the opposite direction at 40g’s for another minute. Your displacement is zero. Acceleration is dv/dt or the rate-of-change of velocity WRT time. According to you, your velocity is zero, then your acceleration is also zero. What crushed you then? (And this is another stupid argument.)

Jim

” According to you, your velocity is zero, then your acceleration is also zero. What crushed you then? (And this is another stupid argument.)”

𝐯= ∆𝐱/∆t

Velocity is a vector. Congruent positive and negative vectors cancel. Zero velocity.

If you put your car’s drive wheels up on jackstands, start the car, put it in gear, and let it go till the odometer reads 100,000 miles, then what was the peak velocity reached by the car? What was the average velocity reached by the car? If this is all done in the same year, doesn’t it work out to be 100,000 miles/year with exactly zero velocity (i.e. the car never moves)?

This isn’t a stupid argument. It’s basic physics. You keep wanting to use instantaneous scalar quantities when you should be using vectors.

𝐯= ∆𝐱/∆t = (𝐱𝐟 – 𝐱𝟎) / (tf – t0)

And, again, as Pat pointed out – a +/- uncertainty interval isn’t a velocity to begin with.
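The distinction the two sides keep circling – path length versus displacement – can be made concrete with a short Python sketch (a toy example, not anyone’s actual figures):

```python
import math

# Toy contrast between path length (what an odometer accumulates) and
# displacement (what the velocity vector integrates to), for a drive
# around a circle that returns to its starting point.
n = 1000          # number of sample points along the loop
radius = 100.0    # hypothetical circle radius

points = [(radius * math.cos(2 * math.pi * k / n),
           radius * math.sin(2 * math.pi * k / n)) for k in range(n + 1)]

# Sum of chord lengths approximates the driven path (the odometer reading).
path_length = sum(math.dist(points[k], points[k + 1]) for k in range(n))

# Net displacement: straight-line distance from start to finish.
displacement = math.dist(points[0], points[-1])

print(round(path_length, 1))   # ~628.3, i.e. about 2*pi*r
print(round(displacement, 6))  # 0.0 -- start equals finish
```

A large odometer reading is perfectly compatible with zero net displacement; which one “miles per year” refers to is exactly what is being disputed above.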

This raises an interesting way to get out of speeding tickets. All I need is a picture of my car in my garage with a time stamp occurring before the speeding ticket and another picture of my car in my garage with a time stamp occurring after my speeding ticket. Zero displacement means zero velocity and zero speed. I’m sure the judge will dismiss my speeding ticket without delay.

Jim

Go for it!

Tim Gorman’s time average is an instance, not a rate.

A time average becomes a rate when it is extrapolated through time.

Physical context determines meaning.

If I say that I commute an average of 15,000 miles/year, that’s not a rate. That’s an instance.

If one wants to know how many miles I’ve driven in 10 years, then an extended time enters the physical meaning. The average becomes a rate in that context.

But rate requires an extended interval. A single instance of time average has no extended time and cannot be a velocity.

>>

If you put your car’s drive wheels up on jackstands . . . .

<<

Now I know that I’ve entered Looking-Glass world.

Your definition is not correct, actually (I couldn’t copy yours, so I think I duplicated it correctly–I prefer using arrows over vector quantities rather than just making them bold). The correct definition comes from Calculus:

𝐯 = lim(∆t→0) ∆𝐬/∆t = d𝐬/dt

And velocity is an instantaneous vector quantity–notice where ∆t goes to zero in the limit. The dot notation comes to us from Newton, apparently. I don’t think this old engineer has looked at the definition of velocity for over fifty years. You made me look, and it is displacement. The usual letter for displacement is 𝐬; 𝐬 represents x, y, and z more succinctly.

Acceleration has a similar definition:

𝐚 = lim(∆t→0) ∆𝐯/∆t = d𝐯/dt

Since we’re talking about scalar vs. vector quantities, here’s a formula for circular motion:

a = v²/r

It’s acceleration equals velocity squared divided by the radial distance. All three variables are vector quantities, but this isn’t a vector equation–they are all scalars. You can’t divide by a vector anyway. Circular motion includes circular orbits, which are hard to do in practice. It also describes tying a string to a mass and spinning it over your head.

But let’s talk about a circular orbit. The acceleration is a vector that points to the center of the circle–called centripetal acceleration. The velocity vector is always tangent to the circle and points in the direction of motion. The radial or position vector extends from the center of the circle and points to the object in motion. As an object moves around the circle, the vectors track with it–keeping their respective directions.

If we take one complete circuit around, all the vectors cancel. The displacement is zero. Why don’t objects in circular orbit fall out of the sky after one orbit? Stupid question, isn’t it.

>>

This isn’t a stupid argument. It’s basic physics. You keep wanting to use instantaneous scalar quantities when you should be using vectors.

<<

Yes, it is stupid. I specifically didn’t mention velocity–I said speed. For some silly reason, you brought up velocity. Let’s stop trying to divert the argument from what I originally said–bad units. You guys have been arguing too much with Mr. Stokes and Mr. Mosher. You’re changing the subject like they do.

>>

Tim Gorman’s time average is an instance, not a rate.

A time average becomes a rate when it is extrapolated through time.

Physical context determines meaning.

If I say that I commute an average of 15,000 miles/year, that’s not a rate. That’s an instance.

If one wants to know how many miles I’ve driven in 10 years, then an extended time enters the physical meaning. The average becomes a rate in that context.

But rate requires an extended interval. A single instance of time average has no extended time and cannot be a velocity.

<<

You’re talking to an engineer. Converting units is what we do. 15,000 miles/year is (15,000 miles/year)*(1 year/365 days) = 41.10 miles/day (assuming a 365-day year). And 41.10 miles/day is (41.10 miles/day)*(1 day/24 hours) = 1.71 miles/hour. Miles per hour is a speed (a very, very slow speed), or do you think your speedometer is lying, and your car is up on jackstands?

An average does not change the units. You divide by the number of items–a dimensionless quantity. The correct way to state it (in this case) is that the annual average distance is 15,000 miles, not 15,000 miles/year.

The rest of Dr. Frank’s statements aren’t exactly correct either. Originally, I didn’t use velocity–that’s Mr. Gorman’s attempt to divert the argument.

Jim
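For what it’s worth, the conversion chain above can be written out mechanically (a trivial Python sketch of the same arithmetic):

```python
# The unit-conversion chain from the comment above: each step multiplies by
# a ratio equal to one, so only the units change, not the quantity.
miles_per_year = 15_000.0
miles_per_day = miles_per_year / 365    # assuming a 365-day year, as above
miles_per_hour = miles_per_day / 24

print(f"{miles_per_day:.2f}")   # 41.10
print(f"{miles_per_hour:.2f}")  # 1.71
```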

Jim,

Speed is a scalar value. It tells you nothing about velocity as a vector.

You keep jumping to the definition of “instantaneous” velocity. 100,000 miles/year is *NOT* an instantaneous velocity.

“Yes, it is stupid. I specifically didn’t mention velocity–I said speed. For some silly reason, you brought up velocity.”

The value of 100,000/year is neither speed or velocity. is is the distance traveled in a year. It specifies neither the speed *or* velocity associated with that distance of travel.

“Let’s stop trying to divert the argument from what I originally said–bad units. ”

The only one diverting here is you. You are trying to make the distance traveled in a year equal to a scalar speed or a vector velocity. As I tried to point out with my examples, which you used an argumentative fallacy of Argument by Dismissal to avoid actually addressing, distance traveled in a year gives you no information about speed or velocity (i.e. a car up on jackstands).

And *my* 1963 engineering introductory physics book, University Physics (Sears and Zemansky) 3rd Edition, defines velocity *exactly* as I wrote, right down to the bolding. And they make a distinction between velocity as a vector displacement divided by the time it takes to travel that displacement and INSTANTANEOUS velocity which is the tangent of the position curve at a specific point in time which does *not* describe the velocity between two points, e.g. P and Q. And I will repeat, if x2-x1 = 0 then there is no displacement and thus no velocity vector.

“An average does not change the units. You divide by the number of items–a dimensionless quantity. The correct term for an average (in this case) is the annual average distance is 15,000 miles, not 15,000 miles/year.”

Let me repeat again for emphasis: DISTANCE TRAVELED IN A YEAR IS NEITHER A SPEED NOR A VELOCITY. And it *does* have the units of miles/year; you used the term ANNUAL yourself. Your “item” is *not* dimensionless. If you just say 15,000 miles you don’t know if that was covered in one month, one year, a decade, or a century. It is, therefore, an inaccurate statement of the number of miles traveled over a period of time. That period of time is an essential piece of information. AND I REPEAT: DISTANCE TRAVELED IN A YEAR IS NEITHER A SPEED NOR A VELOCITY. It is just the distance traveled. That whole distance could have been covered in a second, a minute, an hour, a day, a week, a month, or a year. Only if you know that the distance was traveled in a continuous path over a distinct period of time would you be able to evaluate the speed at which it happened. Since you do *not* know whether the miles covered were done in 100,000-mile increments, 10,000-mile increments, or even increments of a foot, you can make no evaluation of the speed.

It’s why you had to say “annual”. Annual implies year and that is the denominator.

The exact same logic applies to Pat’s uncertainty. His uncertainty interval *has* to be associated with the same time increment the GCMs operate with – i.e. annual estimates of the global temperature. He did so by analyzing the given record over a 20-year period. The only way to change that to a time increment matching that of the GCMs was to divide by 20 years to get an annual figure.

This is *not* hard to understand. Stop trying to invalidate Pat’s thesis through some hokey “dimensional analysis”. His dimensions are fine. As I tried to point out, it is no different than saying you need 3 cups of flour per cake. That does not specify any “rate” at which the flour must be added to the dough, i.e. no speed or velocity. But it *does* tell you how much flour you expended for that cake. Just like miles traveled/year doesn’t tell you speed or velocity but does tell you how far you traveled in a year!

Jim Masterson October 22, 2019 at 11:58 am

>>

Who added a ‘/year’ term?

<<

>>

Tim Gorman

October 20, 2019 at 5:36 am

Total miles driven in 10 years divided by 10 years is the annual average of miles driven, i.e. miles/year.

<<

Miles per year is a speed. is it not?

Jim

________________________________

Jim, Miles per year is the task of a salesman.

inter alia.

I guess the comments on this post are about to close, so this will probably be my last comment.

>>

Tim Gorman

October 24, 2019 at 5:01 pm

Speed is a scalar value. It tells you nothing about velocity as a vector.

<<

Well, not exactly. A vector has a magnitude and a direction. The magnitude of a velocity vector is speed–same units in fact.

>>

You keep jumping to the definition of “instantaneous” velocity.

<<

Because that’s the definition of a velocity. It’s been the definition since Newton’s time when he invented Calculus.

>>

100,000 miles/year is *NOT* an instantaneous velocity.

<<

Again, it depends on how it was derived.

>>

The value of 100,000/year is neither speed or velocity.

<<

You’re correct. Your typo (I assume it’s a typo) changes it to a frequency. And it’s possible to convert it to the SI unit for frequency as follows:

(100,000/year)*(1 year/365 days)*(1 day/24 hours)*(1 hour/3,600 seconds) ≈ 0.00317 Hz

The units all cancel except for hertz. Silly, isn’t it? That’s what making mistakes with units leads to–nonsense. In my engineering classes, if we messed up the units (as you just did), we were marked down.

>>

is is the distance traveled in a year. It specifies neither the speed *or* velocity associated with that distance of travel.

<<

It’s only a distance if you use a distance unit. It’s a speed if you divide a distance by a time unit.

>>

The only one diverting here is you. You are trying to make the distance traveled in a year equal to a scalar speed or a vector velocity.

<<

Actually, I’m not trying to make a distance a speed. I’m saying that when you divide a distance by a time unit, you’re making a distance into a speed.

>>

As I tried to point out with my examples, which you used an argumentative fallacy of Argument by Dismissal to avoid actually addressing, distance traveled in a year gives you no information about speed or velocity (i.e. a car up on jackstands).

<<

It also gives no information about distance traveled. Your car isn’t going anywhere while it’s up on jack stands.

>>

And *my* 1963 engineering introductory physics book . . . .

<<

I’d get rid of that book if I were you. I looked up velocity in my dad’s old mechanical engineering handbook (Third Edition) and it uses distance in the definition. My old high school physics book says the same thing. I wish I had kept my college dynamics book, because I’d like to see what it said about velocity too.

>>

It’s why you had to say “annual”. Annual implies year and that is the denominator.

<<

No, I used a label for the computed average distance. Your division of a distance by a time unit turns a distance into a speed.

>>

Stop trying to invalidate Pat’s thesis through some hokey “dimensional analysis”. His dimensions are fine.

<<

It’s not hokey. If my attempt to correct Dr. Frank’s non-standard usage invalidates his thesis, then his thesis must not stand on very firm ground. His dimensions are not “fine.”

>>

Just like miles traveled/year doesn’t tell you speed or velocity but does tell you how far you traveled in a year!

<<

A distance is a distance. The units miles/year is a speed.

I was going to demonstrate the fallacy of dividing averages by time with a temperature example. However, since temperatures are intensive properties, I don’t want to go on record as supporting averaging temperatures.

Jim

“Because that’s the definition of a velocity. It’s been the definition since Newton’s time when he invented Calculus.”

“Actually, I’m not trying to make a distance a speed. I’m saying that when you divide a distance by a time unit, you’re making a distance into a speed.”

Once again, go read your father’s textbook. Velocity is a vector. Zero displacement means zero velocity. Zero velocity means zero speed. It doesn’t matter what the derivative along the path is. It’s no different than a conservative force applied over a closed path. The net work done is zero.

“It also gives no information about distance traveled. Your car isn’t going anywhere while it’s up on jack stands.”

The car’s odometer shows 100,000 miles traveled. Think about it.

“No, I used a label for the computed average distance. Your division of a distance by a time unit turns a distance into a speed.”

Computed average distance ANNUALLY! Annually means PER YEAR! You can run but you can’t hide!

“A distance is a distance. The units miles/year is a speed.”

Miles traveled annually *IS* miles/year. You said it yourself. Live with it.

“I was going to demonstrate the fallacy of dividing averages by time with a temperature example.”

Nothing wrong with averaging temperatures; they are just measurements. But here it is shown with Apple closing share prices.

>>

Nick Stokes

October 28, 2019 at 8:25 pm

Nothing wrong with averaging temperatures; they are just measurements.

<<

Except if you measure temperature with a thermometer or something like a thermometer, it makes the temperature an intensive property. Averaging intensive properties is nonsense and has no meaning.

We argued about this before with beakers of water (https://wattsupwiththat.com/2018/09/04/almost-earth-like-were-certain/#comment-2448213). The only way to solve the problem is to do what you did–convert the intensive temperatures into extensive temperatures. Alas, that is not done with SAT.

Jim

>>

Tim Gorman

October 28, 2019 at 3:30 pm

Once again, go read your fathers textbook. Velocity is a vector. Zero distance traveled means zero velocity. Zero velocity means zero speed. It doesn’t matter what the derivative along the path is. It’s no different than a conservative force applied over a closed path. The net work done is zero.

>>

Zero velocity? Zero speed? Zero distance? What on Earth are you talking about?

>>

The car’s odometer shows 100,000 miles traveled. Think about it.

<<

I have. You think about it. How do you get 100,000 miles traveled when the car’s on jack stands? Why would you try to trick the odometer?

>>

Computed average distance ANNUALLY! Annually means PER YEAR! You can run but you can’t hide!

<<

Annual or yearly average distance is not distance per year. One’s a label, the other is a change relative to time–in this case a speed.

>>

Miles traveled annually *IS* miles/year. You said it yourself. Live with it.

<<

No, I said annual distance traveled, not distance per year. One’s a label describing the average, the other is a speed. You live with it (but won’t apparently).

Jim

Jim M,

“We argued about this before with beakers of water”

No, that was about how to deduce heat content from temperature measurements. But that doesn’t mean that temperature measurements can’t be averaged. They constitute a distribution, and you can form sample averages to estimate the population mean. Just as with heights, opinions, stock prices or whatever.

And it is done all the time, and isn’t controversial. In one location, you get a monthly average max by averaging the daily max for the month. Likewise annual. No issue of intensiveness.

>>

Nick Stokes

October 29, 2019 at 2:15 am

But that doesn’t mean that temperature measurements can’t be averaged.

<<

Yes, you “can” average any set of numbers. You can average phone numbers. What does an average phone number mean?

>>

And it is done all the time, and isn’t controversial.

<<

It is controversial. It’s just that alarmists ignore the controversy and do it without regard to the physics.

>>

In one location, you get a monthly average max by averaging the daily max for the month. Likewise annual. No issue of intensiveness.

<<

They also violate the rules of significant figures. They take a list of temperatures during a month and average them. Those temperatures have precision down to a degree. The monthly average has precision down to a tenth of a degree. That’s not allowed in most engineering and physics disciplines. It’s false precision. They then average those monthly averages (mathematically invalid too) and obtain precision down to hundredths of a degree. Those two precision digits are bogus. But without them you can’t perform magic scare manipulations of the temperature record.

Jim
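The significant-figures complaint can be shown in two lines (hypothetical whole-degree readings, not actual station data):

```python
import statistics

# Readings recorded to the nearest degree; the computed mean carries
# digits the instrument never resolved.
daily_max = [21, 23, 22, 24, 21, 23, 22]  # hypothetical, +/-0.5 degree resolution
mean_val = statistics.mean(daily_max)
print(mean_val)  # far more digits than the one-degree data support
```

Whether those extra digits are meaningful is exactly the significant-figures dispute; the arithmetic itself produces them regardless.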

Brilliant work. No answer to the salesman problem.

https://www.google.com/search?q=mathematics+the+salesman+problem&oq=mathematics+the+salesman+&aqs=chrome.

‘had you instead used the term “uncertainty”.’

What would that change? A “standard deviation” and a “standard uncertainty” are exactly the same in definition and numerical value (for the same case). The particular concept is the same; just the notation is different.

Ulises,

“What would that change ? A ” standard deviation” and a “standard uncertanty” are exactly the same in definition and numerical value (for the same case). The particular concept is the same, just the notation is dfferent.”

Look at the title of the JCGM – Guide to the expression of uncertainty in measurement

It is in this document that “standard uncertainty” is defined as a standard deviation. However, this document has to do with *MEASUREMENT*, not with uncertainty of calculated model outputs which is what the subject of discussion is.

The JCGM defines uncertainty as:

uncertainty (of measurement)

parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand

And the definition of a measurand is:

A quantity intended to be measured.

(engineering) An object being measured.

A physical quantity or property which is measured.

Again, none of these has to do with the uncertainty of a calculated result based on uncertain inputs.

Pat determined the uncertainty in the input of the GCMs using a Type A determination. The definition of a Type A determination is: method of evaluation of uncertainty by the statistical analysis of series of observations.

What Pat has offered is actually defined in Section 6 of the JCGM as “expanded uncertainty”. From the document: “Although uc(y) can be universally used to express the uncertainty of a measurement result, in some commercial, industrial, and regulatory applications, and when health and safety are concerned, it is often necessary to give a measure of uncertainty that defines an interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand. The existence of this requirement was recognized by the Working Group and led to paragraph 5 of Recommendation INC-1 (1980). It is also reflected in Recommendation 1 (CI-1986) of the CIPM. ”

From the document: “The result of a measurement is then conveniently expressed as Y = y ± U, which is interpreted to mean that the best estimate of the value attributable to the measurand Y is y, and that y − U to y + U is an interval that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to Y. Such an interval is also expressed as y − U ≤ Y ≤ y + U.”

This is exactly what Pat has done.

Now, to the GCMs. Pat has shown that the GCMs are basically a linear prediction of future temperatures. With the uncertainty interval Pat has calculated for the input to the GCMs, this can be expressed as:

f(x +/- u) = kx +/- u, where “k” is a constant for the linear relationship.

For an iterative process like a GCM, the value of u compounds exactly as Pat has laid out, i.e. root-sum-square. “u” is an interval; it is not a probability function, so there is no “mean” or standard deviation for the uncertainty. It cannot be minimized by trying to use the central limit theorem.

Pat’s thesis appears to be quite rigorous and mathematically correct. It simply cannot be easily dismissed.
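A minimal sketch of the emulator-plus-uncertainty structure described above (the constants are placeholders for illustration, not values from Frank’s paper):

```python
import math

# f(x +/- u) = kx +/- u, iterated: the projection grows linearly with the
# number of steps, while the +/- u interval compounds in root-sum-square.
k = 0.25   # hypothetical linear sensitivity per step
u = 0.1    # hypothetical per-step uncertainty half-width

def emulate(n_steps: int, x: float = 1.0) -> tuple[float, float]:
    projection = k * x * n_steps            # linear response
    uncertainty = math.sqrt(n_steps) * u    # RSS of n equal per-step u's
    return projection, uncertainty

proj, unc = emulate(100)
print(proj)  # 25.0 -- the projection itself
print(unc)   # 1.0  -- an interval about it, not a physical offset
```

The key design point is that the uncertainty term never feeds back into the projection; it only brackets it, which is why a growing interval does not imply a wildly oscillating temperature.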

Tim,

I missed this while I was hooked on your dialogue with Rich. You may find that some comments there are also relevant to your thoughts expressed here. But let’s go on :

“Look at the title of the JCGM – Guide to the expression of uncertainty in measurement

It is in this document that “standard uncertainty” is defined as a standard deviation. However, this document has to do with *MEASUREMENT*, not with uncertainty of calculated model outputs which is what the subject of discussion is.”

Tim, ALL statistics deals with measurements or counts. The approaches are portable among problems of the same type, even though these may differ widely in verbal description or mental representation. The standard deviation tells you the same thing in whichever approach where the use of the Normal Distribution is justified. Is there any alternative definition of “standard uncertainty” than sd?

You may however question whether a model (GCM) output can be regarded and treated as a random variable. (I’d say Yes if in an approach like sensitivity analysis, otherwise not).

But Pat does not deal with GCM output (it’s Roy who does), but with his own GCM emulation model. He refers to the practices collated in the JCGM as error propagation: variances in, variances out.

“Pat determined the uncertainty in the input of the CGM’s using a Type A determination. The definition of a Type A determination is: method of evaluation of uncertainty by the statistical analysis of series of observations”

OK, Type A is classical analysis. But Pat determined nothing substantial; he picked from an analysis given in the literature, which deals with GCM output, not input. He determined he could use it in his approach and built it in.

>>What Pat has offered is actually defined in Section 6 of the JCGM as “expanded uncertainty”. From the document: “Although uc(y) can be universally used to express the uncertainty of a measurement result, in some commercial, industrial, and regulatory applications, and when health and safety are concerned, it is often necessary to give a measure of uncertainty that defines an interval about the measurement result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand. The existence of this requirement was recognized by the Working Group and led to paragraph 5 of Recommendation INC-1 (1980). It is also reflected in Recommendation 1 (CI-1986) of the CIPM. ”

From the document: “The result of a measurement is then conveniently expressed as Y = y ± U, which is interpreted to mean that the best estimate of the value attributable to the measurand Y is y, and that y − U to y + U is an interval that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to Y. Such an interval is also expressed as y − U ≤ Y ≤ y + U.”

This is exactly what Pat has done: <<

No, it is not. At least, he does not state it (he should then use U, not u). The basic value is the ±4 W/m² sd = u in cloud forcing; the output after multiple steps of combining is also in terms of sd. With multiples of sd, the largely "unphysical" intervals in his Figs. 6A, 7A would be proportionally wider.

[ see also my comments in the other post]

"Now, to the GCMs. Pat has shown that the GCMs basically produce a linear prediction of future temperatures. With the uncertainty interval Pat has calculated for the input to the GCM, this can be expressed as:

f(x ± u) = kx ± u, where “k” is a constant for the linear relationship."

The fit of his emulation model is indeed excellent. But his error treatment is not based on the fitting process, it is a separate process, based on a value he picked from literature and embedded in his simulated forcing regime.

Your equation confuses me. It is unconventional to have a +/- term on the lhs.

I don't understand it. Full stop.

"For an iterative process like a GCM, the value of u compounds exactly as Pat has laid out, i.e. root-sum-square."

Iterative or not, for any error combination the same rules should apply.

"Root-sum-square" is highly misleading. What is summed are variances, i.e. mean squares. So the correct version in that terminology would be root-sum-mean-squares.

' “u” is an interval, it is not a probability function thus there is no “mean” or standard deviation for the uncertainty. It cannot be minimized by trying to use the central limit theorem.'

It is not an interval, but ±u would be. It is not a probability function, but it is an estimate of sigma, the 2nd moment of the well-known Normal Distribution. And yes, u = sd, as a sample estimate, has its own u = sd.

"Pat’s thesis appears to be quite rigorous and mathematically correct. It simply cannot be easily dismissed."

Borrowing your words, my opinion may be "expressed as y − U ≤ Y ≤ y + U."

[howling monkey's lament, borrowing without your kind permission. Sorry, I couldn't resist]

“But Pat does not deal with GCM output (it’s Roy who does), but with his own GCM emulation model.”

Of course he does. If y = kx and z = lx and k = l, then you get the same answer for both. And that is what Pat found. He could emulate the GCMs’ output using a linear equation. Again, it doesn’t matter what is inside the black box known as a GCM if its output is a linear equation.

“You may however question whether a model (GCM) output can be regarded and treated as a random variable. (I’d say Yes if in an approach like sensitivity analysis, otherwise not).”

Again, a sensitivity analysis won’t help if there is uncertainty in the inputs and outputs. It’s like we found with the Monte Carlo analyses of capital projects. A sensitivity analysis done by varying one input only tells you the sensitivity of the model to that one input. It doesn’t tell you anything about the uncertainty in the input or the output. If the model of one capital project shows a high sensitivity to interest rates and the model for another capital project does not, then the second project is much less risky and gets ranked higher as a possible project. That sensitivity analysis tells you nothing about the actual uncertainty for either project, because future interest rates are very uncertain. Please note that interest rates are not a probability function with a mean and standard deviation. You can guess at what future interest rates will be, but the fact that you have to “guess” is just proof of the uncertainty associated with them.

“Basic value is the +/-4W sd=u in cloud forcing, output after multiple steps of combining is also in terms of sd.”

If you are saying that standard deviations combine as root-sum-square instead of root-mean-square then you are trying to make a distinction where there is no difference.

“But his error treatment is not based on the fitting process, it is a separate process, based on a value he picked from literature and embedded in his simulated forcing regime. Your equation confuses me. It is unconventional to have a +/- term on the lhs. I don’t understand it. Full stop.”

His treatment of the uncertainty does not need to be part of the fitting process. It is merely enough to show that the GCMs provide a linear output. That output is either perfectly accurate or it isn’t. If it isn’t, then an uncertainty interval applies. If there is uncertainty in the input then there *has* to be uncertainty in the output of the mathematical process, i.e. the lhs. The climate alarmists like Nick claim that the math model can somehow negate that uncertainty in the input so that the output is accurate to any number of significant digits. What Pat has shown is how that uncertainty compounds over an iterative process. It simply isn’t sufficient to say the magic words “central limit theorem” and wave your hands over a computer terminal in order to claim no uncertainty in the output.

“Iterative or not, for any error combination the same rules should apply. ‘Root-sum-square’ is highly misleading. What is summed are variances, i.e. mean squares. So the correct version in that terminology would be root-sum-mean-squares.”

Again, you are assuming that the uncertainty interval is described by a normal probability distribution. If that were true, then the climate alarmists could claim that the outputs can be made as accurate as wanted using the central limit theorem. It’s the difference between error and uncertainty. Averaging measurements can make the measurement more accurate, per the central limit theorem. That just isn’t the case with uncertainty. If your uncertainty is ±4 W/m^2, then exactly what probability distribution is associated with that? If it is a normal probability distribution, then you would assume the mean would be 0 W/m^2, i.e. no inaccuracy at all, so why even bother trying to determine the uncertainty?
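A minimal Monte Carlo sketch (my own illustration, not from the JCGM or Pat's paper) of the error-versus-uncertainty point being argued: averaging suppresses random error per the central limit theorem, but a fixed systematic offset survives averaging untouched:

```python
import random

random.seed(0)

def measure(true_value, random_sd, systematic_offset):
    """One reading: truth + random noise + a fixed, unknown bias."""
    return true_value + random.gauss(0.0, random_sd) + systematic_offset

true_value, sd, bias = 10.0, 0.5, 0.3
n = 100_000
mean = sum(measure(true_value, sd, bias) for _ in range(n)) / n

# The random part averages down (standard error ~ sd/sqrt(n) ~ 0.0016),
# but the mean converges to true_value + bias = 10.3, not to 10.0.
print(round(mean, 3))
```

No amount of further averaging moves the result back toward 10.0; only knowing and removing the bias would.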

It truly is that simple. If the uncertainty interval is a probability distribution then the uncertainty interval can be made as small as you want using the methods in the JCGM. If it isn’t a probability distribution then you can’t make the uncertainty interval smaller with nothing more than calculations.

“It is not an interval, but +/-u would be. It is not a probability function, but it is an estimate of sigma, the 2nd moment of the well-known Normal Distribution. And yes, u=sd , as a sample estimate, has its own u=sd.”

If it is not a probability function then how can anything associated with it be? The 2nd moment, i.e. variance, requires a mean be determined. That requires a probability function to be defined, i.e. defining which values are more probable than others. Same for variance and therefore for the sd as well. The mere definition of uncertainty means you simply don’t know which values are more probable than others. It’s like trying to guess at what the third digit is on a digital meter that has only two digits. The uncertainty is a minimum of +/- .0025 and you simply don’t know where in the +/- interval the actual value lies. And no amount of statistics can lessen that uncertainty.

The Reference #7 link to Dr Lindzen’s pdf on the Yale site returns: “Page not found. The requested page could not be found.”

As can be seen on this page:

https://ycsg.yale.edu/climate-change-0

All the other 18 links return the requested pdf presentation….except the link to the Dr Lindzen presentation pdf. So it is not a typo of the URL by Pat Frank.

Looks like Yale pulled the access to the Lindzen’s PDF presentation from their web host server to hide counter-evidence/views.

Just another day at the Climate Disinformation Campaign by academia.

This link appears to provide the Lindzen paper:

https://www.independent.org/publications/article.asp?id=1714

From the article: “This is a long post. For those wishing just the executive summary, all of Roy’s criticisms are badly misconceived.”

Now *that* is a summary! Short and sweet. 🙂

Everyone, it isn’t this complicated, and the general public will never understand these arguments. Keep it simple; present arguments in a manner that an 8th grader could understand. Einstein was able to define the universe in three letters: E=MC^2. That is an elegant way to explain science in a manner that everyone can understand.

NASA GISS has a website there you can view raw temperature data from all the weather stations in their network.

https://data.giss.nasa.gov/gistemp/station_data_v4_globe/

Weather stations are impacted by the Urban Heat Island Effect, so NASA produces a BI, or Brightness Value, for each site. Stations with BIs of 10 or less are considered rural. If you go there and look up Central Park, New York, you will see a gradual temperature increase over the past 100 years. If you go a little north to West Point you will find no warming. CO2 increased from 300 to 400 ppm at both West Point and NYC, yet only NYC shows any warming. A 33% increase in CO2 had no impact on West Point temperatures, which is what one would expect for a radiative molecule that shows a logarithmic decay in its W/m^2 absorption.
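For reference (my addition, using the widely cited simplified forcing expression of Myhre et al. 1998, ΔF = 5.35·ln(C/C₀) W/m², which is not taken from the comment above), the logarithmic shape means each added increment of CO2 buys less forcing than the last:

```python
import math

def co2_forcing(c_new_ppm, c_ref_ppm, alpha=5.35):
    """Simplified CO2 radiative forcing (Myhre et al. 1998):
    dF = alpha * ln(C/C0), in W/m^2."""
    return alpha * math.log(c_new_ppm / c_ref_ppm)

# The 300 -> 400 ppm rise discussed above:
print(round(co2_forcing(400, 300), 2))   # ~1.54 W/m^2
# The same 100 ppm added again buys less, because the response is logarithmic:
print(round(co2_forcing(500, 400), 2))   # ~1.19 W/m^2
```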

Now, the Hockeystick on which all this climate hysteria is based shows a 1.25°C increase since 1902. If you simply limit the NASA GISS stations to those that existed before 1902 and narrow them down to stations with a BI of 10 or less, you will see that very few if any show an uptrend in temperatures. Almost all will show that recent temperatures are at or below the levels reached in the early 1900s.

The question that needs to be answered is: how can a 33% increase in CO2 not result in any measurable increase in temperatures? There are plenty of examples right on the NASA website. Until someone can explain how CO2 can result in stable temperatures at almost all stations controlled for the UHIE, there is no need to try to explain how it causes warming, because the thermometers of NASA say it doesn’t. What Michael Mann’s Hockeystick is measuring is the UHIE, if it is measuring anything at all. His increase matches that of New York City, not West Point.

Can I (try) to summarise, for us of lesser knowledge than your good selves.

Pat, are you saying that your analysis shows the degree of failure of the GCMs is greater or less than Roy’s analysis shows, i.e. it’s worse than we thought (couldn’t resist that)?

Or that Roy’s analysis is, either in part or wholly inappropriate, or inadequately describes how the GCM models are failing?

I can see the concerns of those of us not wanting to hand sticks to the CAGW mob. An analogy would be, say, an inaccuracy in a detail of Charles Darwin’s theory of evolution being used by religious theologians to claim its falsehood in its entirety, when all Darwin had done was fail to correct or notice an error in one part, which had no impact on the theory’s overall validity.

CO2 asks “The question that needs to be answered is how can a 33% increase in CO2 not result in any measurable increase in temperatures?”

The answer is coming from Ronan and Michael Connolly. See https://www.youtube.com/watch?v=XfRBr7PEawY for radiosonde evidence that the atmosphere obeys the ideal gas law and no greenhouse effect is present.

This answer underscores Pat’s uncertainty analysis that the physics in the models is not right. The greenhouse gas warming in every GCM is wrong.

Thanks for your persistence Pat.

Yep. The effect of CO2 conc. is minimal to nil.

Thank you.

”Einstein was able to define the universe in 3 letters E=MC^2. That is an elegant way to explain science in a manner that everyone can explain.”

Very easy…

CM = BS + infin ERR^3

”Einstein was able to define the universe in 3 letters E=MC^2. That is an elegant way to explain science in a manner that everyone can explain.”

GCM = infin BS x ERR^3

OMG, I think you’ve nailed it!

I’d only suggest using the mathematical symbol for infinity to make it look more technical and sciency 🙂

@ CO2isLife

Ok, I’ll keep it simple.

Historical AND modern weather readings are nowhere near accurate enough to produce the results that the warmists claim and will NEVER be accurate enough to resolve a margin of error of less than about ±2.3°F per-day per-cycle per-equation, and homogenizing the data doesn’t resolve anything because it always produces a + error in the result, since 0 is 0 K, not some “floating average” you get to assign. Even though you hide the error in K by using some average, it doesn’t go away in the physics.

The absolute rule of statistics is that you must precisely calculate your error and deal with it or your results are wrong no matter what they are. There is no justifiable time in mathematics in which you can claim error doesn’t happen or doesn’t matter and the moment you start pulling numbers out of your ass *by any method* your error goes nuclear huge. Applying a custom waveguide to your output is FRAUD from the start.

Agreed, if you have to funnel your model outputs to keep them reasonable then your model, by definition, is unreasonable (and unrealistic).

Lindzen : “Rather, the mere existence of criticism entitles the environmental press to refer to the original result as ‘discredited,’ ”

Exactly, Kant’s Critique of Pure Reason.

These are hamfisted Kantian wannabes. Kant set up a straw dog of “pure reason” and assaulted it, the Robespierre of the human mind. When in fact “pure reason” does not exist; rather, creative reason does.

Pointing out that the climate physics at every iteration is still wrong implies much creative reason, i.e. science, is needed to advance further. Yet exactly that is the target of the climate gang: creative reason itself.

Kant, who can’t do it anyway (wrote Edgar Poe), recanted with pity for poor butler Lampe, and brought back smashed roadkill instead, the Critique of Practical Reason – “the errors in practice cancel”.

It took a poet to notice this, Heinrich Heine, and it escapes most today. Meanwhile the Robespierre of the mind is producing something mindless called Extinction Rebellion, easily seen how with Heine’s razor sharp insight.

Please stop modelling anything as long as you have not even understood the very basics. I suggest you start learning about the “GHE”.

https://de.scribd.com/document/414175992/CO21

I don’t post much on here but I saw Roy’s comments and thought “Signal to noise?”

Pat has expressed in climate model terms the second part of the Scientific Method after you come up with an idea – namely what precision does your idea require for measurement?

Or in simpler terms – use the right tools, don’t be the tool.

I saw the same guff with the temperature measurement averages. It breaks the Central Limit Theorem at any rate except in one case and one case only: HypotheticalLand

Here you are free to hypothesise and spectulatise, while riding unicorns down showers of rainbows. Or in reality get a very sore head with hard equations.

But do not apply this to the real world.

How hard is this to understand?

The best thing is to have Skin in the Game. So if we apply climate science methods to drinking water, you could be drinking turgid sludge that would still have low levels (ppms) of contaminants by climate science measurement methods – ±1000% is fine.

Drink away!!!!

Mkcky … “Turbid.” Sludge has turbidity; other things have turgidity. Although, at my age, not as often as it used to.

Thomas,

you may try myxomycetes; they are mobile sludge (protoplasm), but can erect turgid constructs.

[Prophylactic health warning ! …… No experience of my own.]

Also, I’ve noticed that all the in-text Greek deltas have become Latin “D,” as in DT instead of ΔT.

Please take this into account, especially in various equations.

The Ptolemaic equations for calculating the movements of the planets and stars (sans Jupiter’s moons) returned pretty good results too, for hundreds of years. Good enough that, until digital computers came along, 20th Century mechanical planetarium projectors used Ptolemaic equations to recreate the planets’ and stars’ motion for planetarium goers.

The fundamental underlying physics (model) was of course very wrong. But Ptolemaic equations return useful results for planetary positions over short periods of time. They look rather convincing.

So too the AOGCMs. But GCMs carry substantially larger uncertainties (%-wise, in the underlying physical measures that are the fudged/tuned parameters), so GCM results quickly become useless (Pat shows they are useless at ~one year’s time step).

Both the modern GCMs and the Ptolemaic mathematical models of the heavens are explicit examples of Richard Feynman’s Cargo Cult Science analogy. Everything appears to work at the level of abstraction presented to the casual viewer, who then assumes the underlying physics is correct. But we know better today about the proper modeling of the motions of the stars and planets we see in the sky.

So you’d never use Ptolemaic planetary motion models to target a multi-billion dollar planetary probe to Jupiter or Mars and expect it to actually arrive there. A stupendous waste of money and resources would result.

However, the Climate Cargo Cultists expect Multi-Trillion dollar rearrangements of the world’s energy economy based on their junk model outputs claiming high CO2 sensitivity when observations suggest otherwise. Cargo cultism at its finest. And yet the Leftists/Climate Cultists label Climate Skeptics as “anti-science” and “science deniers.”

Mere projection on their part.

Correct, I think, John Q. There is a conceptual problem here that seems beyond resolution for some people. Dr. Frank has made a beautiful job of this. Perhaps it would help to repeat his summary –

“….The growth of uncertainty does not mean the projected air temperature becomes huge. Projected temperature is always within some physical bound. But the reliability of that temperature — our confidence that it is physically correct — diminishes with each step. The level of confidence is the meaning of uncertainty. As confidence diminishes, uncertainty grows…..”

Thank goodness Pat Frank wasn’t my math teacher. If he had been, I might have understood some of the concepts that caused me such difficulty as to lead me to abandon the study of the physical sciences, and I wouldn’t have gone back to my real love in the biological sciences!

“Thank goodness Pat Frank wasn’t my math teacher.”

+1

Stokes

Snark does not become you!

+1

Oh my, lookie here at ‘ole doc . . . got his Nickers all Stoked up into an ad homineering wad again. Chalk it up to apotheosize shrinkage, I guess.

The fact Nick Stokes feels the need to make a personal attack, as opposed to sticking to the math & science, says a great deal.

Just a +1. Not even a ±1.

More importantly, he completely missed the point, intentionally or otherwise (typically), which was that Pat being the commenter’s maths teacher would have allowed him to understand the maths they had had difficulty understanding.

“would have allowed him to understand the maths they had had difficulty understanding”

In fact, as I showed here, the paper is riddled with errors in elementary math. No-one seems to have the slightest curiosity about that.

Nick,

Let’s take your comment about the math and look at it.

“1. To estimate uncertainty you need to study the process actually producing the numbers – the GCM. Not the result of a curve fitting exercise to the results.”

Sorry, I can write a transfer equation for a black box by merely knowing the input and output. I don’t need to study the process.

“You need to clearly establish what the starting data means. The 4 W/m2 is not an uncertainty in a global average; it is a variability of grid values, which would be much diminished in taking a global average.”

Certainly the ±4 W/m^2 is a global average. You didn’t bother to read Pat’s paper at all. You can’t even use the ± in front of it!

“Eq 2 is just a definition of a mean.”

So what? What’s actually wrong with it?

“Eqs 3 and 4 are generic formulae, similar to those in say Vasquez, for mapping uncertainty intervals. They involve correlation terms; no basis for assigning values to those is ever provided.”

Again, so what? Eq 3 and 4 explain the propagation of uncertainty. They don’t actually involve correlation terms. As Pat’s document states: “When states x₀, …, xₙ represent a time-evolving system, then the model expectation value X_N is a prediction of a future state and σ²(X_N) is a measure of the confidence to be invested in that prediction, i.e., its reliability.”

“…but where Eq 1 took an initial F₀ and added the sum of changes: F₀+ΣΔFᵢ, Eq 5 takes that initial F₀ and adds the ith change without the previous ones: F₀+ΔFᵢ.”

The only one with a math problem here seems to be you. Eq 5.1 and 5.2 describe the ith step. Why would you need to involve the previous steps?

“It forms the sum, but instead of dividing by n, the number of values, it divides by 20 years, the period of observation.”

Because the value being used is a 20 year average.

“If n increased, the “mean” would rise, not because of bigger values, but just because there were more in the sample.”

Huh? The mean doesn’t rise because of the number of samples, it would rise because the sum of the samples went up. It could also go down if the additional samples were of lesser value than the mean!

“The unit of the results is K sqrt(year). If you use ±4 Wm⁻²/year, as Pat intermittently does, the units are K/sqrt(year)”

Pat states: “Following from equations 5, the uncertainty in projected air temperature “T” after “n” projection steps is (Vasquez and Whiting, 2006)”

The units are actually K/step (each step just happens to be a year). And when summed from 1…n steps you get temperature as the final result.

None of your math objections make any sense and you certainly didn’t prove anything other than the fact that you have a hard time reading what is written.

Nick, “In fact, as I showed here, the paper is riddled with errors in elementary math.”

Completely refuted, point-by-point here.

You showed nothing except wrong.

Great and detailed response, Tim Gorman.

You’re putting in a lot of work. I admire that (and am grateful).

Tim Gorman

“Let’s take your comment about the math and look at it.”

“Eq 3 and 4 explain the propagation of uncertainty. They don’t actually involve correlation terms.”

Look at the last term in Eq 3, the σ_{u,v}. What do you think that is, if not a correlation? In Eq 4 it is σ_{i,i+1}, etc.

“‘It forms the sum, but instead of dividing by n, the number of values, it divides by 20 years, the period of observation.’ Because the value being used is a 20 year average.”

OK, let’s just focus on that one. It is S6.2. You might like to note the complete hash of the subscripts in the statement. But anyway, the upshot is that to get the average, n sets of numbers are added. Then the total is divided, not by n, but by 20 years, units emphasised. That is not an average. Junior high school kids who put that in their tests would fail.

And as I said, a simple test is, what if all the numbers were the same value c? Then the average should be c. But this botch would give c*n/(20 years). Not just a different number, but different units too. And not constant, but proportional to n, the number in the sample.

Nick,

“Look at the last term in Eq 3. The σ_{u,v}. What do you think that is, if not a correlation? In Eq 4 it is σ_{i,i+1} etc”

“For example, in a single calculation of x = f(u,v,…), where u, v, etc., are measured magnitudes with uncertainties in accuracy of ±(σ_u, σ_v, …), then the uncertainty variance propagated into x is”

They are uncertainties! They add in quadrature because they are independent. Correlation is not required nor specified here.
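For what it's worth, here is a sketch (mine, reduced to the simplest case of a sum x = u + v rather than the general partial-derivative form of Eq 3) of the combination rule both sides are citing: the correlation (covariance) term exists in the formula, but it vanishes when the inputs are independent, leaving addition in quadrature:

```python
import math

def combined_sd(sd_u, sd_v, cov_uv=0.0):
    """Uncertainty in x = u + v:
    sigma_x^2 = sigma_u^2 + sigma_v^2 + 2*cov(u,v).
    With cov_uv = 0 (independent inputs) this reduces to
    adding in quadrature: sqrt(sd_u^2 + sd_v^2)."""
    return math.sqrt(sd_u ** 2 + sd_v ** 2 + 2.0 * cov_uv)

print(combined_sd(3.0, 4.0))        # independent: sqrt(9 + 16) = 5.0
print(combined_sd(3.0, 4.0, 12.0))  # fully correlated (cov = 3*4): sqrt(49) = 7.0
```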

“That is, a measure of the predictive reliability of the final state obtained by a sequentially calculated progression of precursor states is found by serially propagating known physical errors through the individual steps into the predicted final state. When states x₀, …, xₙ represent a time-evolving system, then the model expectation value X_N is a prediction of a future state and σ²(X_N) is a measure of the confidence to be invested in that prediction, i.e., its reliability.”

“Junior high school kids who put that in their tests would fail.”

Which you continue to do. Again, if I drive my car 100,000 miles over a 20 year period then I *can* divide that 100,000 miles by 20 to get an average of how many miles I drove per year.

“But this botch would give c*n/(20 years).”

Which would be correct if c is an annual average and n is 20 years.

I’ll let Pat speak for himself:

“and e_i,g is of magnitude Δ(cloud-cover-unit) and of dimension cloud-cover-unit. For model “i,” the ANNUAL (capitalization mine, tpg) mean simulation error at grid-point g, calculated over 20 years of observation and simulation, is”

“where “n” is the number of simulation-observation pairs evaluated at grid-point “g” ACROSS THE 20-YEAR CALIBRATION PERIOD (capitalization mine, tpg). Individual grid-point error e_i,g is of dimension cloud-cover-unit year⁻¹, and can be of positive or negative sign; see Figure 5. The model mean calibration uncertainty in simulated cloud cover at grid-point “g” for “N” models, E_N,g, is the average of all the 20-year annual mean model grid-point errors,”

“This error represents the 20-year annual mean cloud cover calibration error statistic for “N” models at grid-point “g.” The 20-year annual mean grid-point calibration error for “N” models is of dimension cloud-cover-unit year⁻¹. The 20-year annual mean calibration error at any grid-point “g” can be of positive or negative sign; see Figure 5.”

I don’t know what “across the 20-year calibration period” means to you but to me it means a 20 year total. When divided by 20 gives an annual mean!

Pat has pointed this out to you at least twice that I know of. Take your fingers out of your ears and listen!

“Again, if I drive my car 100,000 miles over a 20 year period then I *can* divide that 100,000 miles by 20 to get an average of how many miles I drove per year.”

That is not an average of n numbers. That is a rate.

“Which would be correct if c is an annual average and n is 20 years.”

n is not 20 years. It is a number, the number of things being averaged. In this case n is the number of simulation-observation pairs evaluated at grid-point g.

“That is not an average of n numbers. That is a rate.”

So is year^-1!!! Anything over time is a rate! SO WHAT?

““Which would be correct if c is an annual average and n is 20 years.”

n is not 20 years. It is a number, the number of things being averaged. In this case n is the number of simulation-observation pairs evaluated at grid-point g.”

Again, I’ll let Pat speak for himself: “”where “n” is the number of simulation-observation pairs evaluated at grid-point “g” ACROSS THE 20-YEAR CALIBRATION PERIOD (capitalization mine, tpg). Individual grid-point error ei,g is of dimension cloud-cover-unit year -1, and can be of positive or negative sign; see Figure 5.”

Why do you *always* manage to leave off the “ACROSS THE 20-YEAR CALIBRATION PERIOD”? You are really getting to be freaking ridiculous. When you have to quote out of context to support your assertions it’s bloody ridiculous.

I’ve said it before and I’ll say it again, you are nothing more than an internet troll. Your goal is to see your name on the internet, it’s not to actually contribute anything.

“Anything over time is a rate! SO WHAT?”

I think it is cute that folks who tell us that only they understand error and uncertainty, and so GCMs have it all wrong, can’t cope with a simple bit of maths like an average. Suppose you want to know the average sale price of a stock OVER A ONE DAY PERIOD. You might sum the prices over sales, and divide by the number of sales. You might sum the stocks traded and divide by the total paid. Both of those are averages. But you would not divide either of those numerator totals by one day and call it an average price. And that is what is happening here.

Nick, “But you would not divide either of those numerator totals by one day and call it an average price. And that is what is happening here.”

Why not? I see monthly averages of stock prices all the time! How do you suppose they come up with those? BTW, the dimension becomes price/month – a *rate* associated with time.

go here: http://stocks.tradingcharts.com/stocks/charts/AAPL/m

for a chart of monthly stock prices for Apple, Inc from 2008 to 10/2019.

Jeez, you’ve gotten so far afield in your denials now that you are about to fall off the edge of the earth!

Nick,

“And that is what is happening here.”

No, it’s not.

Tim Gorman,

“Why not? I see monthly averages of stock prices all the time! How do you suppose they come up with those? BTW, the dimension becomes price/month”

More stuff that is just weird, although it is in line with the Pat Frank claim. I looked at your link. It said Apple stock price is currently about $240. No mention of $240/month. Can you find anyone saying the price is $240/month? Would that be an annual average of $2880/year?

“More stuff that is just weird, although it is in line with the Pat Frank claim.. I looked at your link. It said Apple stock price is currently about $240. No mention of $240/month. Can you find anyone saying the price is $240/month? Would that be an annual average of $2880/year?”

OMG! Are you *TRULY* that obtuse?

Nick: ““But you would not divide either of those numerator totals by one day and call it an average price. And that is what is happening here.”

You said you can’t divide by a time step to get an average over time! I just gave you an example. The graph is not a “rate”, the graph is the monthly average of the stock price. It’s determined by finding the average price in each month. Got that? A month is a unit of time! It’s the AVERAGE STOCK PRICE PER MONTH, not the rate of growth of the stock price per month. One is done by adding all the daily average stock prices (i.e. average price per day) and dividing by the number of days in the month! Note the word “dividing”. That indicates a denominator that is a time step. How do you suppose the average *daily* price is determined? The *rate* of growth would be determined by subtracting the average price on the last day of the month from the average price on the first day of the month! A totally different numerator! But it would still have a “per month” denominator!

You’ve just totally lost it Nick. Take a vacation! Average price per day is not a rate. Average price per month is not a rate. Each has a denominator based on time.

Tim

“I see monthly averages of stock prices all the time! How do you suppose they come up with those?”

Just to answer that question, suppose June has 20 trading days. They would add the closing prices for those days, which comes to about $4800. Then they would divide by 20, getting average price $240.

By Pat’s method of S6.2, they would divide the $4800 by one month, to get 4800 $/month for the monthly price average. Not only weird units, but the number makes no sense.

You *have* to be kidding me!

“Just to answer that question, suppose June has 20 trading days. They would add the closing prices for those days, which comes to about $4800. Then they would divide by 20, getting average price $240.”

That’s an AVERAGE PRICE PER DAY! It has a time step in the denominator! If you don’t define the interval then you can’t define the average either. Look at the words you used: “those days”. In other words a time step!

“By Pat’s method of S6.2, they would divide the $4800 by one month, to get 4800 $/month for the monthly price average. Not only weird units, but the number makes no sense.”

Huh? Talk about mixing up dimensions! You said divide by 20 days! That’s a time interval. 20 days equals one month in your example. (x/20)(20/month) equals (x/month).

What is so god damn hard about understanding that?

Tim,

“One is done by adding all the daily average stock prices (i.e. average price per day) and dividing by the number of days in the month! Note the word “dividing”. That indicates a denominator that is a time step.”

Note the word “number”. In fact you don’t divide by the number of days in the month. You divide by the number of days for which you have prices – i.e. trading days. And Pat’s formula says you should divide, not by some number of days, but by 1 month, to get month in the denominator.

But you said

“the dimension becomes price/month”

You mean, presumably, price/time. Once you have worked out dimensions, then you can assign units, with the appropriate conversions. $240/month is $2880/year is €214/month. If the dimension is meaningful, the conversions apply. 1 pound/cu ft is 16 kg/m^3. You don’t need to ask what it represents.

“In fact you don’t divide by the number of days in the month. You divide by the number of days for which you have prices – i.e. trading days. And Pat’s formula says you should divide, not by some number of days, but by 1 month, to get month in the denominator.”

OMG! Again!

Where does Pat say that? He develops an annual mean and says it is an annual mean! I.e. per year!

It’s no different from developing an annual average of miles driven and saying it has the dimensions of miles/year!

You remind me of my kids when they were six years old. They couldn’t let things go even when shown over and over again that they were wrong!

“$240/month is $2880/year is €214/month”

You *still* haven’t got it! Price per month is *not* the same thing as price-growth per month. You can’t get your dimensions correct at all!

Price per month is an average price in that month. It’s sum/interval. Price-growth per month is a delta. It’s subtraction/interval.

Average miles driven per year is miles/year. Growth in the miles driven per year is (miles/year1) – (miles/year0)

Average daily price is a sum/interval. Average daily price growth is subtraction/interval.

I simply don’t know how to make it any more plain.

“Price per month is an average price in that month. It’s sum/interval.”

OK, let’s do that arithmetic. It is the arithmetic of Pat’s S6.2. Again, suppose 20 trading days in June, so the sum of 20 closing prices for Apple is $4800. You want to say the interval is 1 month. So sum/interval is 4800 $/month.

Or maybe you want to say that the month is 30 days. OK, sum/interval = 160 $/day.

Or maybe you do just want to count trading days. Then the result is 240 $/(trading day). That is the conventional arithmetic, but without making the units change.

Or a month is 1/12 year, so the month average is sum/interval= 4800/(1/12)= 57600 $/year.

You see that you can’t escape the conversion of units. That is built in. 160 $/day = 4800 $/month = 57600 $/year. It comes from the number you put in the denominator.
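For concreteness, the two arithmetics being argued over can be laid side by side in a few lines of Python. This is a minimal sketch using the hypothetical numbers from the thread (20 June trading days at $240 each); it illustrates both calculations without endorsing either reading:

```python
# Hypothetical thread numbers: 20 June trading days, each closing at $240.
closing_prices = [240.0] * 20
total = sum(closing_prices)  # $4800

# Conventional average: divide the sum by the COUNT of prices (a pure number).
conventional_avg = total / len(closing_prices)  # 240.0 (dollars)

# "Sum divided by the time interval", with the interval expressed three ways:
per_month = total / 1        # 4800.0, read as $/month (interval = 1 month)
per_day = total / 30         # 160.0,  read as $/day   (interval = 30 days)
per_year = total / (1 / 12)  # ~57600,  read as $/year  (interval = 1/12 year)
```

The count-based division always yields $240 regardless of how the month is described, while the interval-based division changes its number with the choice of unit; that difference is exactly what the two sides are disputing.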

Nick,

“You see that you can’t escape the conversion of units. That is built in. 160 $/day = 4800 $/month = 57600 $/year. It comes from the number you put in the denominator.”

You are *still* confusing growth rate with average price. Did it not hit you at all that in order to calculate growth rate you have to subtract? Growth is a delta and an average is not.

miles/year2 – miles/year1 gives you growth in the number of miles/year. (miles/year1 + miles/year2) /2 gives you an average miles/year.

Can you truly not tell the difference between the addition and subtraction operators?
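The distinction being insisted on here, average versus growth delta, can be sketched directly. The mileage numbers below are made up purely for illustration:

```python
# Made-up annual mileage figures for two consecutive years.
miles_year1 = 12000  # miles driven in year 1
miles_year2 = 14000  # miles driven in year 2

# Average miles per year: built from ADDITION, then division by the interval.
average_miles_per_year = (miles_year1 + miles_year2) / 2  # 13000.0

# Growth in miles driven per year: built from SUBTRACTION (a delta).
growth_miles_per_year = miles_year2 - miles_year1         # 2000
```

Both results carry "per year", but they answer different questions: the first summarizes a level, the second a change.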

“You are *still* confusing growth rate with average price.”

And you are still dodging the arithmetic consequences of your claim. You are big on blustery words, very small on numbers. I followed through on your assertion about the arithmetic you specified – how do you think it actually works? Monthly average Apple share price, according to you, is $4800/month. Do you want to defend that as the right answer? If not, how would you calculate it? Numbers, please.

“Monthly average Apple share price, according to you, is $4800/month.”

Sorry. I pointed out to you that growth requires a delta, i.e. a subtraction. You keep on using multiplication. You simply refuse to accept the fact that a monthly average is not a growth rate. And the average monthly price has a dimension of price/month. If the time interval is not specified then you can’t tell if it is a daily average price, a monthly average price, or an annual average price. It truly is that simple – except for you, I guess.

Nick thinks you can measure something the size of a nanometer with a tape measure from Home Depot. That’s what this comes down to. He’s pretending the stability in output of a sensitivity analysis of a black box model overrides the futility of trying to estimate the size of a human cell with a tape measure.

Probably because he’s a pure math guy with no exposure to the real world.

Nick, “It is S6.2. You might like to note the complete hash of the subscripts in the statement.”

Right. Subscript “g” for grid-point and subscript “i” for climate model (i = 1->N). Very confusing.

Eqn. S6.2 just shows calculation of an annual average. Apparently another very difficult concept.

Nick, “But anyway, the upshot is that to get the average, n sets of numbers are added. Then the total is divided, not by n, but by 20 years, units emphasised. That is not an average. Junior high school kids who put that in their tests would fail.”

An annual average is a sum divided by the number of years.

You know that, Nick. You’re just being misleading. As you were when you falsely called a straightforward set of subscripts “hash.” Pretty shameless.

Nick, “That is not an average of n numbers. That is a rate.”

A (+/-) root-mean-square error is not a rate. (+/-)4 Wm^-2 year^-1 is not a rate.

Rate is velocity. (+/-)rmse is not a velocity. A (+/-) uncertainty statistic is not a physical magnitude. Plus/minus uncertainty statistics do not describe motion.

Your objection on those grounds is wrong and misleading, Nick.

We all know you’ll stop it, now that you realize your misconception.

Pat,

“Right. Subscript “g” for grid-point and subscript “i” for climate model (i = 1->N). Very confusing”

“you falsely called a straightforward set of subscripts, “hash.””

It is a hash. Again, the equation and nearby text are here. I can’t do a WYSIWYG version in comments, but here it is in TeX-like notation:

ε_{i,g} = (20 years)^{-1} \sum_{g=1}^{n} e_{i,g}

The first, obvious hash, is that you are summing over g, yet g appears on the left hand side. It is a dummy suffix; that can’t happen.

The next obvious hash is that S6.2 says it is summed over g from 1 to n. Yet the accompanying text says, rightly,

“where “n” is the number of simulation-observation pairs evaluated at grid-point “g””.

And, defining it, it says, “For model “i,” the annual mean simulation error at grid-point g”.

You are summing over pairs at a fixed grid-point, not over grids. Actually, “g” should appear on the LHS, but not “i”.

But of course all this pales beside the fact that what is formed in S6.2 just isn’t an average.

“An annual average is a sum divided by number of years.”

Again a schoolboy error. Do you have anything to offer on the Apple share arithmetic above? Is the monthly average Apple price $4800/month? Would it be $57600/year, based on adding 240 closing prices and dividing by 1 year?

Tim, one of Nick’s standard tactics is to make a subtly misleading argument, and delude people into arguing within the wrong context. He does that to his advantage.

For example and another and yet another.

Nick, “The first, obvious hash, is that you are summing over g, yet g appears on the left hand side. It is a dummy suffix; that can’t happen.”

Eqn. 6.2 is summing over 20 years of error at grid-point ‘g’ for model ‘i.’ There are ‘n’ values of grid-point ‘g’ error. The total error, epsilon, is for model ‘i’ at grid-point ‘g.’ Why would it not be eps_i,g?

You’re just objecting over usage, not meaning.

Next, “The next obvious hash is that S6.2 says it is summed over g from 1 to n.”

There are ‘n’ values of error at grid-point ‘g’ for model ‘i.’

Next, “You are summing over pairs at a fixed grid-point, not over grids. Actually, “g” should appear on the LHS, but not “i”.”

The grid-point errors are for model ‘i.’ The error is for model ‘i,’ and thus e_i. So, your sore point is that my usage is not the usage you’d have used. Too bad. We know you could figure it out. If you tried.

What next, Nick? Will you object to my sentence construction?

Next, “But of course all this pales beside the fact that what is formed in S6.2 just isn’t an average.”

The 20 year sum of grid-point errors divided by 20 is not an annual average. Got it.

Nick, “Again a schoolboy error. Do you have anything to offer on the Apple share arithmetic above?”

Misdirection à la Nick Stokes. The point is eqn. S6.2, and not anything else.

Eqn. 6.2 adds 20 years of grid-point error and divides by 20 to produce an annual average of error. Not a daily average. Not a monthly average. An annual average.

You claim this average is not an average, and then go on to accuse me of making a schoolboy error. What a laugh.

“one of Nick’s standard tactics is to make a subtly misleading argument, and delude people into arguing within the wrong context.”

There is nothing misleading here. You have asserted your principle just above:

“An annual average is a sum divided by number of years.”

Tim has asserted it above:

“Price per month is an average price in that month. It’s sum/interval.”

Your Eq S6.2 asserts it.

I chose Tim’s example of Apple monthly average share price. I showed what happens if you calculate it according to that principle. It makes no sense. No-one does it. I have invited you to cite any example of people averaging items that way.

I cited the familiar case of calculating an annual average temperature for a location. °C/year?

I’ll ask again – can you give any example, in numbers, of a calculation of some familiar annual (or monthly, or daily) averaged quantity using the method of your paper?

Nick,

“I chose Tim’s example of Apple monthly average share price. I showed what happens if you calculate it according to that principle. It makes no sense. No-one does it. I have invited you to cite any example of people averaging items that way.”

What doesn’t make any sense is your assertion that an average per unit time is a GROWTH rate. 100,000 miles driven in ten years divided by 10 years is *NOT A GROWTH RATE* in miles driven per year.

I’ll repeat it for at least the fifth time – a growth rate is a delta. It is *not* an average!

“I’ll ask again – can you give any example, in numbers, of a calculation of some familiar annual (or monthly, or daily) averaged quantity using the method of your paper?”

Yes, and it has been given to you over and over again. The sum of miles driven in 10 years divided by 10 years is an annual average in miles/year. Just like Pat did in his paper.

What is so hard about this that you can’t understand it? You are just exhibiting the behaviour of a six year old in denying this simple truth!

“Eqn. 6.2 is summing over 20 years of error at grid-point ‘g’ for model ‘i.’ There are ‘n’ values of grid-point ‘g’ error. The total error epsilon, is for model ‘i’ at grid-point ‘g.’ Why would it not be eps_i,g.”

Hash upon hash! No, you are not summing over 20 years. You say you are summing over grid-points. But the text says you are summing over, well, something, up to “n”: “where “n” is the number of simulation-observation pairs evaluated at grid-point “g””. Clearly it is “simulation-observation pairs”, and that makes sense.

The total error ε is not for model i at grid-point g. You say you have summed over g. So which gridpoint does the g on the left refer to? In fact you have summed over i, but the same thing applies over that index.

“The point is eqn. S6.2, and not anything else.”

No, the point is the meaning of an average. S6.2 asserts one. You have repeatedly asserted it in words, as has Tim. I show what stupidity it amounts to if you actually try to put numbers to it. I keep challenging you or Tim to actually give an example, using that arithmetic, which makes sense. You can’t.

“You’re just objecting over usage, not meaning.”

No, I’m objecting to getting maths grossly wrong. You seem to think it is OK to write down anything, and then redefine it as you wish. You can do no wrong. But it is wrong and gives wrong results.

“I keep challenging you or Tim to actually give an example, using that arithmetic, which makes sense. You can’t.”

Total miles driven in 10 years divided by 10 years is the annual average of miles driven, i.e. miles/year.

The observations summed over 20 years divided by 20 years IS EXACTLY THE SAME!

When I was in business I used to keep track of the total miles on my vehicle so I could calculate miles/year to estimate expenses for the IRS. Have you been sheltered from the real world for all of your life?

CC,

“because he’s a pure math guy with no exposure to the real world”

Actually, mostly doing industrial CFD. But if you think you can guide me to the real world, perhaps you can explain what I’m getting wrong in the calculation of the Apple monthly share price average using the (sum/(time interval)) method that is in S6.2 of Pat’s paper? Or give another real world example of how it works?

“what I’m getting wrong in the calculation of Apple monthly share price average using the (sum/(time interval)) method that is in S6.2 of Pat’s paper? Or give another real world example of how it works?”

What you are getting wrong is using the average price per unit time (i.e. month) as a GROWTH RATE! Trying to say an average stock price/month of $240 will give you $2880 per year.

It’s pretty obvious from your statement above that you’ve finally tired of having your nose rubbed in such idiocy. Now you are trying to make it look like you never made that mistake at all. Pat did the same thing you have in your statement here, only he did it over a 20 year span and not a monthly span.

Nick, “Hash upon hash! No, you are not summing over 20 years. You say you are summing over grid-points.”

Twenty years of simulation error over each grid-point. L&H say so. It says so in my paper. Try to get it right, Nick. The hash is all yours.

“So which gridpoint does the g on the left refer to?”

It refers to grid-point ‘g.’

“I show what stupidity it amounts to if you actually try to put numbers to it.”

Your tendentiously concocted numbers, Nick. (Sum of errors over 20 years)/20 = annual average error.

You call that stupid. After a lifetime in math, you deny that (sum/term) = average. Your objection is indistinguishable from intentional nonsense. And I’m being polite.

“No, I’m objecting to getting maths grossly wrong.”

No, you’re not. You’re being Nick Stokes.

Nick, “I’ll ask again – can you give any example, in numbers, of a calculation of some familiar annual (or monthly, or daily) averaged quantity using the method of your paper?”

[(Twenty-year sum of days)/20] = annual average = 365.24 days/year.

Give it up, Nick.
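Pat’s 365.24 figure is the long-run Gregorian mean (365.2425); any particular 20-year window containing five leap years gives 365.25, which a short stdlib sketch confirms. The 2000–2019 window below is an arbitrary choice for illustration:

```python
import calendar

# Sum the calendar days in each of 20 consecutive years (2000-2019, chosen
# arbitrarily), then divide by 20 years to get an average in days/year.
years = range(2000, 2020)
total_days = sum(366 if calendar.isleap(y) else 365 for y in years)  # 7305

annual_average_days = total_days / 20  # 365.25 days/year
```

Whatever one makes of the units dispute, the arithmetic itself is just a sum over the interval divided by the number of years in it.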

“It refers to grid-point ‘g.’”

You don’t seem to have any idea how calculations work. If you have a g index on the left, you need something on the right which tells you what g value is being picked out. You don’t have that at all.

“[(Twenty-year sum of days)/20] = 365.24 days/year”

So what is that the average of? You aren’t averaging daily data for each day. You are just summing days.

“So what is that the average of? You aren’t averaging daily data for each day. You are just summing days.”

Does it matter how many miles I drive per day if I am calculating an annual average? If I sum the miles driven every day for a period of time (say 20 years) and then divide by 20 years to get the annual average, exactly what is wrong with that mathematically?

Nick, “You don’t seem to have any idea how calculations work. If you have a g index on the left, you need something on the right which tells you what g value is being picked out. You don’t have that at all.”

From the SI: “let observed cloud cover at grid-point “g” be (x^obs)_g. For each model “i,” let the simulated cloud cover of grid-point “g” be (x^mod)_g,i.”

Nick, “So what is that the average of? You aren’t averaging daily data for each day. You are just summing days.”

Summing 20 years of days and dividing by 20 to get the annual average days.

Sum 20 years of error and divide by 20 to get the average annual error.

That will make everything clear to everyone, with one likely exception.

Read the next sentence.

What he said was he was glad he did not have a great math teacher.

Did you get that part, Nick?

As I mention above, Nick, as he typically does, either intentionally or mistakenly, misunderstood the point.

Quite telling, really.

Thank God Nick wasn’t either; his inflexibility and super-assuredness are the sign of someone with a very narrow focus and an inability to question their own results.

“Thank goodness Pat Frank wasn’t my math teacher.”

+1

I double the score. Btw, some math and statistics are really helpful in the biological sciences, in my experience.

[/back to snark] And: Pat Frank’s Uncertain Math is far easier to grasp than standard math, being not fraught with that nasty rigour which scared so many students off.

Pat’s math is rigorous and correct even according to the JCGM.

As usual, people, including you, are mixing up error and uncertainty as well as mixing up input variables having a random distribution with a calculated, determinative result from a model.

The climate warmists don’t even want to admit to any uncertainty in their inputs let alone any uncertainty in their outputs. Doing so would mean having to admit that their models don’t handle the physics very well.

When they are trying to forecast annual temperature increases to the hundredth of a deg C while the historical temperature record has an uncertainty somewhere greater than +/- 1 deg C, they are violating the rules of significant digits right from the very beginning.

“So you’d never use Ptolemaic planetary motion models to target a multi-billion dollar planetary probe to Jupiter or Mars and expect it to actually arrive there.”

100% correct. But…

The scare is imho used to further the goals of certain groups of people, that have a rather sinister agenda. They do not believe in AGW themselves but are only interested in th