12 October 2019
Pat Frank
A bit over a month ago, I posted an essay on WUWT here about my paper assessing the reliability of GCM global air temperature projections in light of error propagation and uncertainty analysis, freely available here.
Four days later, Roy Spencer posted a critique of my analysis at WUWT, here as well as at his own blog, here. The next day, he posted a follow-up critique at WUWT here. He also posted two more critiques on his own blog, here and here.
Curiously, three days before he posted his criticisms of my work, Roy posted an essay titled “The Faith Component of Global Warming Predictions,” here. He concluded that “[climate modelers] have only demonstrated what they assumed from the outset. They are guilty of ‘circular reasoning’ and have expressed a ‘tautology.’”
Roy concluded, “I’m not saying that increasing CO₂ doesn’t cause warming. I’m saying we have no idea how much warming it causes because we have no idea what natural energy imbalances exist in the climate system over, say, the last 50 years. … Thus, global warming projections have a large element of faith programmed into them.”
Roy’s conclusion is pretty much a re-statement of the conclusion of my paper, which he then went on to criticize.
In this post, I’ll go through Roy’s criticisms of my work and show why and how every single one of them is wrong.
So, what are Roy’s points of criticism?
He says that:
1) My error propagation predicts huge excursions of temperature.
2) Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux
3) The Error Propagation Model is Not Appropriate for Climate Models
I’ll take these in turn.
This is a long post. For those wishing just the executive summary, all of Roy’s criticisms are badly misconceived.
1) Error propagation predicts huge excursions of temperature.
Roy wrote, “Frank’s paper takes an example known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) and assumes that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation, propagating the error forward in time during his emulation model’s integration. The result is a huge [amount] (as much as 20 deg. C or more) of resulting spurious model warming (or cooling) in future global average surface air temperature (GASAT). (my bold)”
For the attention of Mr. And then There’s Physics, and others, Roy went on to write this: “The modelers are well aware of these biases [in cloud fraction], which can be positive or negative depending upon the model. The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles, otherwise all models would get very nearly the same cloud amounts.” No more dismissals of root-mean-square error, please.
Here is Roy’s Figure 1, demonstrating his first major mistake. I’ve bolded the evidential wording.
[Roy’s Figure 1: his blue lines diverging to about ±20 C after 100 years, together with the flat GCM control-run lines.]
Roy’s blue lines are not air temperatures emulated using equation 1 from the paper. They do not come from eqn. 1, and do not represent physical air temperatures at all.
They come from eqns. 5 and 6, and are the growing uncertainty bounds in projected air temperatures. Uncertainty statistics are not physical temperatures.
Roy misconceived his ±2 Wm-2 as a radiative imbalance. In the proper context of my analysis, it should be seen as a ±2 Wm-2 uncertainty in long wave cloud forcing (LWCF). It is a statistic, not an energy flux.
Even worse, were we to take Roy’s ±2 Wm-2 to be a radiative imbalance in a model simulation, one that results in an excursion in simulated air temperature (which is Roy’s meaning), we would then have to suppose the imbalance is both positive and negative at the same time, i.e., a ±radiative forcing.
A ±radiative forcing does not alternate between +radiative forcing and -radiative forcing. Rather it is both signs together at once.
So, Roy’s interpretation of LWCF ±error as an imbalance in radiative forcing requires simultaneous positive and negative temperatures.
Look at Roy’s Figure. He represents the emulated air temperature to be a hot house and an ice house simultaneously; both +20 C and -20 C coexist after 100 years. That is the nonsensical message of Roy’s blue lines, if we are to assign his meaning that the ±2 Wm-2 is radiative imbalance.
That physically impossible meaning should have been a give-away that the basic supposition was wrong.
The ± is not, after all, one or the other, plus or minus. It is coincidental plus and minus, because it is part of a root-mean-square-error (rmse) uncertainty statistic. It is not attached to a physical energy flux.
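To make the distinction concrete, here is a minimal Python sketch (the numbers are illustrative only, not taken from any GCM or from Lauer and Hamilton) showing how a root-mean-square error is computed from signed simulation-minus-observation differences. The squaring discards the sign, so the resulting ±rmse is an unsigned width of ignorance, not a flux that could be inserted into a model's energy budget.

```python
import numpy as np

# Hypothetical signed LWCF errors (simulation minus observation), W/m^2.
# Illustrative values only -- not CMIP5 output.
signed_errors = np.array([3.1, -4.8, 0.6, -2.2, 5.0, -3.7])

mean_bias = signed_errors.mean()            # signed offset, ~ -0.33 W/m^2
rmse = np.sqrt(np.mean(signed_errors**2))   # ~ 3.6 W/m^2, always positive

print(f"mean bias = {mean_bias:+.2f} W/m^2 (a signed flux offset)")
print(f"rmse      = +/-{rmse:.2f} W/m^2 (an uncertainty width; the sign is gone)")
```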
It’s truly curious. More than one of my reviewers made the same very naive mistake, that a ±C uncertainty equals a physically real +C or -C. This one, for example, which is quoted in the Supporting Information: “[The author’s error propagation is not] physically justifiable. (For instance, even after forcings have stabilized, [the author’s] analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states. Which, it should be obvious, does not actually happen).”
Any understanding of uncertainty analysis is clearly missing.
Likewise, this first part of Roy’s point 1 is completely misconceived.
Next mistake in the first criticism: Roy says that the emulation equation does not yield the flat GCM control run line in his Figure 1.
However, emulation equation 1 would indeed give the same flat line as the GCM control runs under zero external forcing. As proof, here’s equation 1:
ΔTi(K) = fCO₂ × 33 K × [(F0 + ΔFi)/F0] + a
In a control run there is no change in forcing, so ΔFi = 0. The fraction in the brackets then becomes F0/F0 = 1.
The originating fCO₂ = 0.42, so that equation 1 becomes ΔTi(K) = 0.42 × 33 K × 1 + a = 13.9 C + a = constant (a = 273.1 K, or 0 C).
When an anomaly is taken, the emulated temperature change is constant zero, just as in Roy’s GCM control runs in Figure 1.
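For anyone who wants to check that algebra, here is a short Python sketch of equation 1 as reconstructed above (a sketch only, using fCO₂ = 0.42, F0 = 33.30 Wm-2 and the offset a). With ΔF = 0 at every step, the emulated anomaly is identically zero, which is Roy's flat control-run line.

```python
F0 = 33.30     # W/m^2, baseline greenhouse forcing used in the emulation
f_co2 = 0.42   # emulation coefficient from eqn. 1
a = 273.1      # K offset (0 C)

def emulated_T(dF):
    """Emulation eqn. 1 as reconstructed above (a sketch, not GCM code)."""
    return f_co2 * 33.0 * (F0 + dF) / F0 + a

T_base = emulated_T(0.0)                                    # 13.9 C above the offset
anomalies = [emulated_T(0.0) - T_base for _ in range(100)]  # 100 zero-forcing steps
print(all(abs(x) < 1e-12 for x in anomalies))               # True: a flat zero line
```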
So, Roy’s first objection demonstrates three mistakes.
1) Roy mistakes a rms statistical uncertainty in simulated LWCF as a physical radiative imbalance.
2) He then mistakes a ±uncertainty in air temperature as a physical temperature.
3) His analysis of emulation equation 1 was careless.
Next, Roy’s 2): Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux
Roy wrote, “If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.”
I will now show why this objection is irrelevant.
Here, now, is Roy’s second figure, again showing the perfect TOA radiative balance of CMIP5 climate models. On the right, next to Roy’s figure, is Figure 4 from the paper showing the total cloud fraction (TCF) annual error of 12 CMIP5 climate models, averaging ±12.1%. [1]
[Left: Roy’s figure showing the near-zero TOA net flux imbalance of CMIP5 models. Right: Figure 4 from the paper, the annual total cloud fraction error of 12 CMIP5 models.]
Every single one of the CMIP5 models that produced the average ±12.1% error in simulated total cloud fraction also featured Roy’s perfect TOA radiative balance.
Therefore, every single CMIP5 model that averaged ±4 Wm-2 in LWCF error also featured Roy’s perfect TOA radiative balance.
How is that possible? How can models maintain perfect simulated TOA balance while at the same time producing errors in long wave cloud forcing?
Off-setting errors, that’s how. GCMs are required to have TOA balance. So, parameters are adjusted within their uncertainty bounds so as to obtain that result.
Roy says so himself: “If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, …”
Are the chosen GCM parameter values physically correct? No one knows.
Are the parameter sets identical model-to-model? No. We know that because different models produce different profiles and integrated intensities of TCF error.
This removes all force from Roy’s TOA objection. Models show TOA balance and LWCF error simultaneously.
In any case, this goes to the point raised earlier, and in the paper, that a simulated climate can be perfectly in TOA balance while the simulated climate internal energy state is incorrect.
That means that the physics describing the simulated climate state is incorrect. This in turn means that the physics describing the simulated air temperature is incorrect.
The simulated air temperature is not grounded in physical knowledge. And that means there is a large uncertainty in projected air temperature because we have no good physically causal explanation for it.
The physics can’t describe it; the model can’t resolve it. The apparent certainty in projected air temperature is a chimerical result of tuning.
This is the crux idea of an uncertainty analysis. One can get the observables right. But if the wrong physics gives the right answer, one has learned nothing and one understands nothing. The uncertainty in the result is consequently large.
This wrong physics is present in every single step of a climate simulation. The calculated air temperatures are not grounded in a physically correct theory.
Roy says the LWCF error is unimportant because all the errors cancel out. I’ll get to that point below. But notice what he’s saying: the wrong physics allows the right answer. And invariably so in every step all the way across a 100-year projection.
In his September 12 criticism, Roy gives his reason for disbelief in uncertainty analysis: “All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!
“Why?
“If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior.”
There it is: wrong physics that is invariably correct in every step all the way across a 100-year projection, because large-scale errors cancel to reveal the effects of tiny perturbations. I don’t believe any other branch of physical science would countenance such a claim.
Roy then again presented the TOA radiative simulations on the left of the second set of figures above.
Roy wrote that models are forced into TOA balance. That means the physical errors that might have appeared as TOA imbalances are force-distributed into the simulated climate sub-states.
Forcing models to be in TOA balance may even make simulated climate subsystems more in error than they would otherwise be.
After observing that the “forced-balancing of the global energy budget“ is done only once for the “multi-century pre-industrial control runs,” Roy observed that models world-wide behave similarly despite a “WIDE variety of errors in the component energy fluxes…”
Roy’s is an interesting statement, given there is nearly a factor of three difference among models in their sensitivity to doubled CO₂. [2, 3]
According to Stephens [3], “This discrepancy is widely believed to be due to uncertainties in cloud feedbacks. … Fig. 1 [shows] the changes in low clouds predicted by two versions of models that lie at either end of the range of warming responses. The reduced warming predicted by one model is a consequence of increased low cloudiness in that model whereas the enhanced warming of the other model can be traced to decreased low cloudiness. (original emphasis)”
So, two CMIP5 models show opposite trends in simulated cloud fraction in response to CO₂ forcing. Nevertheless, they both reproduce the historical trend in air temperature.
Not only that, but they’re supposedly invariably correct in every step all the way across a 100-year projection, because their large-scale errors cancel to reveal the effects of tiny perturbations.
In Stephens’ object example we can see the hidden simulation uncertainty made manifest. Models reproduce calibration observables by hook or by crook, and then on those grounds are touted as able to accurately predict future climate states.
The Stephens example provides clear evidence that GCMs plain cannot resolve the cloud response to CO₂ emissions. Therefore, GCMs cannot resolve the change in air temperature, if any, from CO₂ emissions. Their projected air temperatures are not known to be physically correct. They are not known to have physical meaning.
This is the reason for the large and increasing step-wise simulation uncertainty in projected air temperature.
This obviates Roy’s point about cancelling errors. The models cannot resolve the cloud response to CO₂ forcing. Cancellation of radiative forcing errors does not repair this problem. Such cancellation (from by-hand tuning) just speciously hides the simulation uncertainty.
Roy concluded that, “Thus, the models themselves demonstrate that their global warming forecasts do not depend upon those bias errors in the components of the energy fluxes (such as global cloud cover) as claimed by Dr. Frank (above).”
Everyone should now know why Roy’s view is wrong. Off-setting errors make models similar to one another. They do not make the models accurate. Nor do they improve the physical description.
Roy’s conclusion implicitly reveals his mistaken thinking.
1) The inability of GCMs to resolve cloud response means the temperature projection consistency among models is a chimerical artifact of their tuning. The uncertainty remains in the projection; it’s just hidden from view.
2) The LWCF ±4 Wm-2 rmse is not a constant offset bias error. The ‘±’ alone should be enough to tell anyone that it does not represent an energy flux.
The LWCF ±4 Wm-2 rmse represents an uncertainty in simulated energy flux. It’s not a physical error at all.
One can tune the model to produce (simulation minus observation = 0) no observable error at all in their calibration period. But the physics underlying the simulation is wrong. The causality is not revealed. The simulation conveys no information. The result is not any indicator of physical accuracy. The uncertainty is not dismissed.
3) All the models making those errors are forced to be in TOA balance. Those TOA-balanced CMIP5 models make errors averaging ±12.1% in global TCF.[1] This means the GCMs cannot model cloud cover to better resolution than ±12.1%.
To minimally resolve the effect of annual CO₂ emissions, they need to be at about 0.1% cloud resolution (see Appendix 1 below).
4) The average GCM error in simulated TCF over the calibration hindcast time reveals the average calibration error in simulated long wave cloud forcing. Even though TOA balance is maintained throughout, the correct magnitude of simulated tropospheric thermal energy flux is lost within an uncertainty interval of ±4 Wm-2.
Roy’s 3) Propagation of error is inappropriate.
On his blog, Roy wrote that modeling the climate is like modeling pots of boiling water. Thus, “[If our model] can get a constant water temperature, [we know] that those rates of energy gain and energy loss are equal, even though we don’t know their values. And that, if we run [the model] with a little more coverage of the pot by the lid, we know the modeled water temperature will increase. That part of the physics is still in the model.”
Roy continued, “the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system.”
Roy there implied that the only way air temperature can change is by way of an increase or decrease of the total energy in the climate system. However, that is not correct.
Climate subsystems can exchange energy. Air temperature can change by redistribution of internal energy flux without any change in the total energy entering or leaving the climate system.
For example, in his 2001 testimony before the Senate Environment and Public Works Committee on 2 May, Richard Lindzen noted that, “claims that man has contributed any of the observed warming (ie attribution) are based on the assumption that models correctly predict natural variability. [However,] natural variability does not require any external forcing – natural or anthropogenic. (my bold)” [4]
Richard Lindzen noted exactly the same thing in his “Some Coolness Concerning Global Warming.” [5]
“The precise origin of natural variability is still uncertain, but it is not that surprising. Although the solar energy received by the earth-ocean-atmosphere system is relatively constant, the degree to which this energy is stored and released by the oceans is not. As a result, the energy available to the atmosphere alone is also not constant. … Indeed, our climate has been both warmer and colder than at present, due solely to the natural variability of the system. External influences are hardly required for such variability to occur.(my bold)”
In his review of Stephen Schneider’s “Laboratory Earth,” [6] Richard Lindzen wrote this directly relevant observation,
“A doubling CO₂ in the atmosphere results in a two percent perturbation to the atmosphere’s energy balance. But the models used to predict the atmosphere’s response to this perturbation have errors on the order of ten percent in their representation of the energy balance, and these errors involve, among other things, the feedbacks which are crucial to the resulting calculations. Thus the models are of little use in assessing the climatic response to such delicate disturbances. Further, the large responses (corresponding to high sensitivity) of models to the small perturbation that would result from a doubling of carbon dioxide crucially depend on positive (or amplifying) feedbacks from processes demonstrably misrepresented by models. (my bold)”
These observations alone are sufficient to refute Roy’s description of modeling air temperature in analogy to the heat entering and leaving a pot of boiling water with varying amounts of lid-cover.
Richard Lindzen’s last point, especially, contradicts Roy’s claim that cancelling simulation errors permit a reliably modeled response to forcing or accurately projected air temperatures.
Also, the situation is much more complex than Roy described in his boiling pot analogy. For example, rather than Roy’s single lid moving about, clouds are more like multiple layers of sieve-like lids of varying mesh size and thickness, all in constant motion, and none of them covering the entire pot.
The pot-modeling then proceeds with only a poor notion of where the various lids are at any given time, and without fully understanding their depth or porosity.
Propagation of error: Given an annual average +0.035 Wm-2 increase in CO₂ forcing, the increase plus uncertainty in the simulated tropospheric thermal energy flux is (0.035±4) Wm-2. All the while simulated TOA balance is maintained.
So, if one wanted to calculate the uncertainty interval for the air temperature for any specific annual step, the top of the temperature uncertainty interval would be calculated from +4.035 Wm-2, while the bottom of the interval would be calculated from -3.965 Wm-2.
Putting those into the right side of paper eqn. 5.2 and setting F0=33.30 Wm-2, the single-step projection uncertainty interval in simulated air temperature is +1.68 C/-1.65 C.
The air temperature anomaly projected from the average CMIP5 GCM would, however, be 0.015 C; not +1.68 C or -1.65 C.
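The single-step arithmetic is easy to verify; here is a sketch, assuming the right side of eqn. 5.2 reduces to 0.42 × 33 K × ΔF/F0, which reproduces the figures quoted above:

```python
F0 = 33.30        # W/m^2
f_co2 = 0.42
dF_co2 = 0.035    # W/m^2, annual average CO2 forcing increase
u_lwcf = 4.0      # W/m^2, annual LWCF calibration uncertainty

def dT(dF):       # linearized temperature response used in the emulation
    return f_co2 * 33.0 * dF / F0

print(f"projected anomaly step : {dT(dF_co2):+.3f} C")           # ~ +0.015 C
print(f"upper uncertainty bound: {dT(dF_co2 + u_lwcf):+.2f} C")  # ~ +1.68 C
print(f"lower uncertainty bound: {dT(dF_co2 - u_lwcf):+.2f} C")  # ~ -1.65 C
```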
In the whole modeling exercise, the simulated TOA balance is maintained. Simulated TOA balance is maintained mainly because simulation error in long wave cloud forcing is offset by simulation error in short wave cloud forcing.
This means the underlying physics is wrong and the simulated climate energy state is wrong. Over the calibration hindcast region, the observed air temperature is correctly reproduced only because of curve fitting following from the by-hand adjustment of model parameters.[2, 7]
Forced correspondence with a known value does not remove uncertainty in a result, because causal ignorance is unresolved.
When error in an intermediate result is imposed on every single step of a sequential series of calculations — which describes an air temperature projection — that error gets transmitted into the next step. The next step adds its own error onto the top of the prior level. The only way to gauge the effect of step-wise imposed error is step-wise propagation of the appropriate rmse uncertainty.
Figure 3 below shows the problem in a graphical way. GCMs project temperature in a step-wise sequence of calculations. [8] Incorrect physics means each step is in error. The climate energy-state is wrong (this diagnosis also applies to the equilibrated base state climate).
The wrong climate state gets calculationally stepped forward. Its error constitutes the initial conditions of the next step. Incorrect physics means the next step produces its own errors. Those new errors add onto the entering initial condition errors. And so it goes, step-by-step. The errors add with every step.
When one is calculating a future state, one does not know the sign or magnitude of any of the errors in the result. This ignorance follows from the obvious difficulty that there are no observations available from a future climate.
The reliability of the projection then must be judged from an uncertainty analysis. One calibrates the model against known observables (e.g., total cloud fraction). By this means, one obtains a relevant estimate of model accuracy; an appropriate average root-mean-square calibration error statistic.
The calibration error statistic informs us of the accuracy of each calculational step of a simulation. When inaccuracy is present in each step, propagation of the calibration error metric is carried out through each step. Doing so reveals the uncertainty in the result — how much confidence we should put in the number.
When the calculation involves multiple sequential steps each of which transmits its own error, then the step-wise uncertainty statistic is propagated through the sequence of steps. The uncertainty of the result must grow. This circumstance is illustrated in Figure 3.
Figure 3: Growth of uncertainty in an air temperature projection.
The starting point is the base state climate, which has an initial forcing, F0 (which may be zero), and an initial temperature, T0. The final temperature Tn is conditioned by the final uncertainty ±et, as Tn±et.
Step one projects a first-step forcing F1, which produces a temperature T1. Incorrect physics introduces a physical error in temperature, e1, which may be positive or negative. In a projection of future climate, we do not know the sign or magnitude of e1.
However, hindcast calibration experiments tell us that single projection steps have an average uncertainty of ±e.
T1 therefore has an uncertainty of ±e, i.e., T1±e.
The step one temperature plus its physical error, T1+e1, enters step 2 as its initial condition. But T1 had an error, e1. That e1 is an error offset of unknown sign in T1. Therefore, the incorrect physics of step 2 receives a T1 that is offset by e1. But in a futures-projection, one does not know the value of T1+e1.
In step 2, incorrect physics starts with the incorrect T1 and imposes new unknown physical error e2 on T2. The error in T2 is now e1+e2. However, in a futures-projection the sign and magnitude of e1, e2 and their sum remain unknown.
And so it goes; step 3, …, n add in their errors e3 +, …, + en. But in the absence of knowledge concerning the sign or magnitude of the imposed errors, we do not know the total error in the final state. All we do know is that the trajectory of the simulated climate has wandered away from the trajectory of the physically correct climate.
However, the calibration error statistic provides an estimate of the uncertainty in the results of any single calculational step, which is ±e.
When there are multiple calculational steps, ±e attaches independently to every step. The predictive uncertainty increases with every step because the ±e uncertainty gets propagated through those steps to reflect the continuous but unknown impact of error. Propagation of calibration uncertainty goes as the root-sum-square (rss). For ‘n’ steps that is ±u = sqrt[(e1)² + (e2)² + … + (en)²] = ±e×sqrt(n). [9-11]
It should be very clear to everyone that the rss equation does not produce physical temperatures, or the physical magnitudes of anything else. It is a statistic of predictive uncertainty that necessarily increases with the number of calculational steps in the prediction. A summary of the uncertainty literature was commented into my original post, here.
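Here is a minimal sketch of that root-sum-square propagation, assuming one annual step per year and converting the ±4 Wm-2 calibration uncertainty to a per-step temperature uncertainty with the same linear term used for eqn. 5.2 above; the output is an uncertainty envelope, not a predicted temperature:

```python
import math

F0, f_co2 = 33.30, 0.42
u_step_flux = 4.0                           # W/m^2, annual LWCF calibration rmse
u_step_T = f_co2 * 33.0 * u_step_flux / F0  # ~ +/-1.66 C per annual step

def propagated_uncertainty(n_steps, u_step):
    # root-sum-square of n identical per-step uncertainties: +/- u*sqrt(n)
    return math.sqrt(n_steps * u_step**2)

for years in (1, 10, 50, 100):
    print(years, "yr:", round(propagated_uncertainty(years, u_step_T), 1), "C")
# Grows as sqrt(n): roughly +/-1.7, 5.3, 11.8, 16.6 C.
```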
The growth of uncertainty does not mean the projected air temperature becomes huge. Projected temperature is always within some physical bound. But the reliability of that temperature — our confidence that it is physically correct — diminishes with each step. The level of confidence is the meaning of uncertainty. As confidence diminishes, uncertainty grows.
Supporting Information Section 10.2 discusses uncertainty and its meaning. C. Roy and J. Oberkampf (2011) describe it this way, “[predictive] uncertainty [is] due to lack of knowledge by the modelers, analysts conducting the analysis, or experimentalists involved in validation. The lack of knowledge can pertain to, for example, modeling of the system of interest or its surroundings, simulation aspects such as numerical solution error and computer roundoff error, and lack of experimental data.” [12]
The growth of uncertainty means that with each step we have less and less knowledge of where the simulated future climate is, relative to the physically correct future climate. Figure 3 shows the widening scope of uncertainty with the number of steps.
Wide uncertainty bounds mean the projected temperature reflects a future climate state that is some completely unknown distance from the physically real future climate state. One’s confidence is minimal that the simulated future temperature is the ‘true’ future temperature.
This is why propagation of uncertainty through an air temperature projection is entirely appropriate. It is our only estimate of the reliability of a predictive result.
Appendix 1 below shows that the models need to simulate clouds to about ±0.1% accuracy, about 100 times better than the ±12.1% they now achieve, in order to resolve any possible effect of CO₂ forcing.
Appendix 2 quotes Richard Lindzen on the utter corruption and dishonesty that pervades AGW consensus climatology.
Before proceeding, here’s NASA on clouds and resolution: “A doubling in atmospheric carbon dioxide (CO2), predicted to take place in the next 50 to 100 years, is expected to change the radiation balance at the surface by only about 2 percent. … If a 2 percent change is that important, then a climate model to be useful must be accurate to something like 0.25%. Thus today’s models must be improved by about a hundredfold in accuracy, a very challenging task.”
That hundred-fold is exactly the message of my paper.
If climate models cannot resolve the response of clouds to CO₂ emissions, they cannot possibly accurately project the impact of CO₂ emissions on air temperature.
The ±4 Wm-2 uncertainty in LWCF is a direct reflection of the profound ignorance surrounding cloud response.
The CMIP5 LWCF calibration uncertainty reflects ignorance concerning the magnitude of the thermal flux in the simulated troposphere that is a direct consequence of the poor ability of CMIP5 models to simulate cloud fraction.
From page 9 in the paper, “This climate model error represents a range of atmospheric energy flux uncertainty within which smaller energetic effects cannot be resolved within any CMIP5 simulation.”
The 0.035 Wm-2 annual average CO₂ forcing is exactly such a smaller energetic effect.
It is impossible to resolve the effect on air temperature of a 0.035 Wm-2 change in forcing, when the model cannot resolve overall tropospheric forcing to better than ±4 Wm-2.
The perturbation is about 114 times smaller than the lower limit of resolution of a CMIP5 GCM.
The uncertainty interval can be appropriately analogized as the smallest simulation pixel size. It is the blur level. It is the ignorance width within which nothing is known.
Uncertainty is not a physical error. It does not subtract away. It is a measure of ignorance.
The model can produce a number. When the physical uncertainty is large, that number is physically meaningless.
All of this is discussed in the paper, and in exhaustive detail in Section 10 of the Supporting Information. It’s not as though that analysis is missing or cryptic. It is pretty much invariably un-consulted by my critics, however.
Smaller strange and mistaken ideas:
Roy wrote, “If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time.”
But the LWCF error statistic is ±4 Wm-2, not (+)4 Wm-2 imbalance in radiative flux. Here, Roy has not only misconceived a calibration error statistic as an energy flux, but has facilitated the mistaken idea by converting the ± into (+).
This mistake is also common among my prior reviewers. It allowed them to assume a constant offset error. That in turn allowed them to assert that all error subtracts away.
This assumption of perfection after subtraction is a folk-belief among consensus climatologists. It is refuted right in front of their eyes by their own results (Figure 1 in [13]), but that never seems to matter.
Another example includes Figure 1 in the paper, which shows simulated temperature anomalies. They are all produced by subtracting away a simulated climate base-state temperature. If the simulation errors subtracted away, all the anomaly trends would be superimposed. But they’re far from that ideal.
Figure 4 shows a CMIP5 example of the same refutation.

Figure 4: RCP8.5 projections from four CMIP5 models.
Model tuning has made all four projection anomaly trends close to agreement from 1850 through 2000. However, after that the models career off on separate temperature paths. By projection year 2300, they range across 8 C. The anomaly trends are not superimposable; the simulation errors have not subtracted away.
The idea that errors subtract away in anomalies is objectively wrong. The uncertainties that are hidden in the projections after year 2000, by the way, are also in the projections from 1850-2000 as well.
This is because the projections of the historical temperatures rest on the same wrong physics as the futures projection. Even though the observables are reproduced, the physical causality underlying the temperature trend is only poorly described in the model. Total cloud fraction is just as wrongly simulated for 1950 as it is for 2050.
LWCF error is present throughout the simulations. The average annual ±4 Wm-2 simulation uncertainty in tropospheric thermal energy flux is present throughout, putting uncertainty into every simulation step of air temperature. Tuning the model to reproduce the observables merely hides the uncertainty.
Roy wrote, “Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step.”
But, of course, eqn. 6 does not produce wildly different results, because the per-step calibration uncertainty scales with the length of the GCM time step.
For example, we can estimate the average per-day uncertainty from the ±4 Wm-2 annual average calibration of Lauer and Hamilton.
So, for the entire year, (±4 Wm-2)² = 365 × (ei)², where ei is the per-day uncertainty. This equation yields ei = ±0.21 Wm-2 for the estimated LWCF uncertainty per average projection day. If we put the daily estimate into the right side of equation 5.2 in the paper and set F0=33.30 Wm-2, then the one-day per-step uncertainty in projected air temperature is ±0.087 C. The total uncertainty after 100 years is sqrt[(0.087)² × 365 × 100] = ±16.6 C.
The same approach yields an estimated 25-year mean model calibration uncertainty of sqrt[(±4 Wm-2)² × 25] = ±20 Wm-2. Following from eqn. 5.2, the 25-year per-step uncertainty is ±8.3 C. After 100 years the uncertainty in projected air temperature is sqrt[(±8.3)² × 4] = ±16.6 C.
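The time-step invariance Roy questions can be checked directly. Under the rss rule, rescaling the annual calibration uncertainty to the chosen step length (daily, annual, or 25-year) leaves the propagated 100-year uncertainty unchanged; here is a sketch, assuming the flux uncertainty scales as the square root of the step length, as in the two worked cases above:

```python
import math

F0, f_co2 = 33.30, 0.42
u_annual = 4.0                                     # W/m^2, annual LWCF calibration rmse

def u_T_after_100yr(step_years):
    steps = 100.0 / step_years                     # number of projection steps
    u_flux_step = u_annual * math.sqrt(step_years) # rescaled per-step flux rmse
    u_T_step = f_co2 * 33.0 * u_flux_step / F0     # per-step temperature rmse
    return math.sqrt(steps) * u_T_step             # root-sum-square over the steps

for step in (1 / 365.0, 1.0, 25.0):                # daily, annual, 25-year steps
    print(f"step = {step:7.4f} yr -> +/-{u_T_after_100yr(step):.1f} C")
# All three give about +/-16.6 C after 100 years.
```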
Roy finished with, “I’d be glad to be proved wrong.”
Be glad, Roy.
Appendix 1: Why CMIP5 error in TCF is important.
We know from Lauer and Hamilton that the average CMIP5 ±12.1% annual total cloud fraction (TCF) error produces an annual average ±4 Wm-2 calibration error in long wave cloud forcing. [14]
We also know that the annual average increase in CO₂ forcing since 1979 is about 0.035 Wm-2 (my calculation).
Assuming a linear relationship between cloud fraction error and LWCF error, the ±12.1% CF error is proportionately responsible for ±4 Wm-2 annual average LWCF error.
Then one can estimate the level of resolution necessary to reveal the annual average cloud fraction response to CO₂ forcing as:
[(0.035 Wm-2/±4 Wm-2)]*±12.1% total cloud fraction = 0.11% change in cloud fraction.
This indicates that a climate model needs to be able to accurately simulate a 0.11% feedback response in cloud fraction to barely resolve the annual impact of CO₂ emissions on the climate. If one wants accurate simulation, the model resolution should be ten times smaller than the effect to be resolved. That means 0.011% accuracy in simulating annual average TCF.
That is, the cloud feedback to a 0.035 Wm-2 annual CO₂ forcing needs to be known, and able to be simulated, to a resolution of 0.11% in TCF in order to minimally know how clouds respond to annual CO₂ forcing.
Here’s an alternative way to get at the same information. We know the total tropospheric cloud feedback effect is about -25 Wm-2. [15] This is the cumulative influence of 67% global cloud fraction.
The annual tropospheric CO₂ forcing is, again, about 0.035 Wm-2. The CF equivalent that produces this feedback energy flux is again linearly estimated as (0.035 Wm-2/25 Wm-2)*67% = 0.094%. That’s again bare-bones simulation. Accurate simulation requires ten times finer resolution, which is 0.0094% of average annual TCF.
Assuming the linear relations are reasonable, both methods indicate that the minimal model resolution needed to accurately simulate the annual cloud feedback response of the climate, to an annual 0.035 Wm-2 of CO₂ forcing, is about 0.1% CF.
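Both linear estimates reduce to a one-line proportionality; here is a sketch using the inputs stated above:

```python
dF_co2 = 0.035                       # W/m^2, annual average CO2 forcing increase

# Method 1: scale the +/-12.1% TCF error by the ratio of CO2 forcing to LWCF error
u_lwcf, u_tcf = 4.0, 12.1            # W/m^2 and % cloud fraction
cf_resolution_1 = (dF_co2 / u_lwcf) * u_tcf              # ~ 0.11 % cloud fraction

# Method 2: scale the 67% global cloud fraction by CO2 forcing / total cloud feedback
cloud_feedback, global_cf = 25.0, 67.0                   # W/m^2 and %
cf_resolution_2 = (dF_co2 / cloud_feedback) * global_cf  # ~ 0.094 % cloud fraction

print(round(cf_resolution_1, 3), round(cf_resolution_2, 3))   # 0.106 0.094
```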
To achieve that level of resolution, the model must accurately simulate cloud type, cloud distribution and cloud height, as well as precipitation and tropical thunderstorms.
This analysis illustrates the meaning of the annual average ±4 Wm-2 LWCF error. That error indicates the overall level of ignorance concerning cloud response and feedback.
The TCF ignorance is such that the annual average tropospheric thermal energy flux is never known to better than ±4 Wm-2. This is true whether forcing from CO₂ emissions is present or not.
This is true in an equilibrated base-state climate as well. Running a model for 500 projection years does not repair broken physics.
GCMs cannot simulate cloud response to 0.1% annual accuracy. It is not possible to simulate how clouds will respond to CO₂ forcing.
It is therefore not possible to simulate the effect of CO₂ emissions, if any, on air temperature.
As the model steps through the projection, our knowledge of the consequent global air temperature steadily diminishes because a GCM cannot accurately simulate the global cloud response to CO₂ forcing, and thus cloud feedback, at all for any step.
It is true in every step of a simulation. And it means that projection uncertainty compounds because every erroneous intermediate climate state is subjected to further simulation error.
This is why the uncertainty in projected air temperature increases so dramatically. The model is step-by-step walking away from initial value knowledge, further and further into ignorance.
On an annual average basis, the uncertainty in CF feedback is ±144 times larger than the perturbation to be resolved.
The CF response is so poorly known, that even the first simulation step enters terra incognita.
Appendix 2: On the Corruption and Dishonesty in Consensus Climatology
It is worth quoting Lindzen on the effects of a politicized science. [16]”A second aspect of politicization of discourse specifically involves scientific literature. Articles challenging the claim of alarming response to anthropogenic greenhouse gases are met with unusually quick rebuttals. These rebuttals are usually published as independent papers rather than as correspondence concerning the original articles, the latter being the usual practice. When the usual practice is used, then the response of the original author(s) is published side by side with the critique. However, in the present situation, such responses are delayed by as much as a year. In my experience, criticisms do not reflect a good understanding of the original work. When the original authors’ responses finally appear, they are accompanied by another rebuttal that generally ignores the responses but repeats the criticism. This is clearly not a process conducive to scientific progress, but it is not clear that progress is what is desired. Rather, the mere existence of criticism entitles the environmental press to refer to the original result as ‘discredited,’ while the long delay of the response by the original authors permits these responses to be totally ignored.
“A final aspect of politicization is the explicit intimidation of scientists. Intimidation has mostly, but not exclusively, been used against those questioning alarmism. Victims of such intimidation generally remain silent. Congressional hearings have been used to pressure scientists who question the ‘consensus’. Scientists whose views question alarm are pitted against carefully selected opponents. The clear intent is to discredit the ‘skeptical’ scientist from whom a ‘recantation’ is sought.“[7]
Richard Lindzen’s extraordinary account of the jungle of dishonesty that is consensus climatology is required reading. None of the academics he names as participants in chicanery deserve continued employment as scientists. [16]
If one tracks his comments from the earliest days to near the present, his growing disenchantment becomes painfully obvious.[4-7, 16, 17] His “Climate Science: Is it Currently Designed to Answer Questions?” is worth reading in its entirety.
References:
[1] Jiang, J.H., et al., Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations. J. Geophys. Res., 2012. 117(D14): p. D14105.
[2] Kiehl, J.T., Twentieth century climate model response and climate sensitivity. Geophys. Res. Lett., 2007. 34(22): p. L22710.
[3] Stephens, G.L., Cloud Feedbacks in the Climate System: A Critical Review. J. Climate, 2005. 18(2): p. 237-273.
[4] Lindzen, R.S. (2001) Testimony of Richard S. Lindzen before the Senate Environment and Public Works Committee on 2 May 2001. URL: http://www-eaps.mit.edu/faculty/lindzen/Testimony/Senate2001.pdf
[5] Lindzen, R., Some Coolness Concerning Global Warming. BAMS, 1990. 71(3): p. 288-299.
[6] Lindzen, R.S. (1998) Review of Laboratory Earth: The Planetary Gamble We Can’t Afford to Lose by Stephen H. Schneider (New York: Basic Books, 1997) 174 pages. Regulation, 5 URL: https://www.cato.org/sites/cato.org/files/serials/files/regulation/1998/4/read2-98.pdf Date Accessed: 12 October 2019.
[7] Lindzen, R.S., Is there a basis for global warming alarm?, in Global Warming: Looking Beyond Kyoto, E. Zedillo ed., 2006, in press, Yale University: New Haven. Full text available at: https://ycsg.yale.edu/assets/downloads/kyoto/LindzenYaleMtg.pdf Last accessed: 12 October 2019.
[8] Saitoh, T.S. and S. Wakashima, An efficient time-space numerical solver for global warming, in Energy Conversion Engineering Conference and Exhibit (IECEC) 35th Intersociety, 2000, IECEC: Las Vegas, pp. 1026-1031.
[9] Bevington, P.R. and D.K. Robinson, Data Reduction and Error Analysis for the Physical Sciences. 3rd ed. 2003, Boston: McGraw-Hill.
[10] Brown, K.K., et al., Evaluation of correlated bias approximations in experimental uncertainty analysis. AIAA Journal, 1996. 34(5): p. 1013-1018.
[11] Perrin, C.L., Mathematics for chemists. 1970, New York, NY: Wiley-Interscience. 453.
[12] Roy, C.J. and W.L. Oberkampf, A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Engineer., 2011. 200(25-28): p. 2131-2144.
[13] Rowlands, D.J., et al., Broad range of 2050 warming from an observationally constrained large climate model ensemble. Nature Geosci, 2012. 5(4): p. 256-260.
[14] Lauer, A. and K. Hamilton, Simulating Clouds with Global Climate Models: A Comparison of CMIP5 Results with CMIP3 and Satellite Data. J. Climate, 2013. 26(11): p. 3823-3845.
[15] Hartmann, D.L., M.E. Ockert-Bell, and M.L. Michelsen, The Effect of Cloud Type on Earth’s Energy Balance: Global Analysis. J. Climate, 1992. 5(11): p. 1281-1304.
[16] Lindzen, R.S., Climate Science: Is it Currently Designed to Answer Questions?, in Program in Atmospheres, Oceans and Climate. Massachusetts Institute of Technology (MIT) and Global Research, 2009, Global Research Centre for Research on Globalization: Boston, MA.
[17] Lindzen, R.S., Can increasing carbon dioxide cause climate change? Proc. Nat. Acad. Sci., USA, 1997. 94: p. 8335-8342.
Why do we spend so much time and confused effort on this? Perhaps a constellation of 3 to 6 fairly simple satellites with multi-spectral IR sensors could keep track of the energy balance on the Earth in 50 km to 100 km grid pixels, forgetting the details of atmospheric depth vs. surface measurement. You’d know the temperature of each pixel pretty accurately, be able to do some energy balance measurements, and notice any warming.
I’ve come to the conclusion that experts on both sides of the AGW debate are invested in the debate continuing. Even if the bulk conclusion is that there is hardly any warming, and those who point this out are acting in good faith while the AGW mob isn’t, the good people interested in this subject will find some highly technical minutia to continue the debate over. While I’ll continue to cheer the good people here fighting against trillions in malinvestment, this article was the straw that broke the camel’s back for me: the debate is over, and I’ve lost interest.
Tom Schaefer, as of October, 2019, you have no incentive to follow these endless debates. Nor does the average citizen on Main Street USA.
Only after climate activists get serious about reducing America’s carbon emissions will the scientific and public policy debates reach a critical mass. If serious restrictions are ever placed on your access to gasoline and diesel, then you will be back.
How do those IR sensors see through the clouds? How do they see through the smoke from wildfires or pasture burning? The temperature difference on the surface and in the atmosphere over a 50km to 100km grid can vary wildly due to things like evapotranspiration, road surface density, urban heat island effect, etc.
Satellites are not a complete answer, at least not at the level of technology we have today. They are just one more input among others.
I recently found an old, compact, computational slide rule and brought it to my office. While sitting in on not too interesting teleconferences, I have been entertaining myself by doing multiplications and divisions with the slide rule and comparing the results with my electronic calculator. My inability to discern the subdivisions in the scale of the slide rule leads to errors in my slide rule results that easily are as much as 5% different from the result from my electronic calculator. Dr. Frank’s analysis shows, if I were to use the result from one slide rule calculation in a subsequent slide rule calculation, how the second calculation is less reliable than the first, and so on, and so on. So while electronic calculators have reduced the uncertainty from my inability to discern the scale subdivisions of a slide rule, they have not reduced the uncertainty from our inability to discern the state of cloud cover forcing.
An interesting example.
Agreed, the example is interesting, but is there not a conceptual problem equating “reliability” with “uncertainty”?
“Dr. Frank’s analysis shows, if I were to use the result from one slide rule calculation in a subsequent slide rule calculation, how the second calculation is less reliable than the first, and so on, and so on.”
I’m not sure how the second calculation is “less reliable” than the first when the first produces an erroneous result. If my method calculates 2 + 2 = 5, then = 6, then = 10, and so on, the method isn’t “less reliable” upon each calculation, it’s just as unreliable from the first to the last.
I don’t think this is the same thing as Frank’s uncertainty calculation where uncertainty isn’t error.
You missed the fact that the unreliable result from the first slide rule calculation is used as an input to the second. So the uncertainty of the first result gets propagated into the second slide rule calculation. So you have more and more uncertainty about the final result.
“. . . the unreliable result from the first slide rule calculation is used as an input to the second.”
I most certainly did! Many thanks!
Oh man, have you got it. I went through college using only a slide rule. When I was a senior, my uncle bought one of the first HP calculators for $400, a whole year’s tuition! I would have given an eye tooth for that.
Dr Frank,
Decent post. Clear explanation, pleasure to read, even I can get the picture.
The growth of uncertainty does not mean the projected air temperature becomes huge. Projected temperature is always within some physical bound. But the reliability of that temperature — our confidence that it is physically correct — diminishes with each step. The level of confidence is the meaning of uncertainty. As confidence diminishes, uncertainty grows.
I reckon here lies the crux of the problem, namely the assumption that large uncertainty bounds mean the projected temperature will vary within those bounds with roughly equal probability. This disconnection between uncertainty and actual error creates plenty of confusion; even fellas well-versed in science and stats cannot get it.
Besides, it is astonishing that such a huge scientific(?) effort as climate modelling so easily falls into errors such as those described by you or Lord Monckton (I’d like to believe his paper also will be published in a scientific journal). Misunderstanding of uncertainty propagation, misunderstanding of feedback mechanisms: that means climate modelling is really FUBAR. Fellows behind this science should really start to behave morally and intellectually.
At risk of being over-simplistic and perhaps naive, it seems to me that Dr Frank is proving what we always knew. CO2 induced warming is so tiny that we do not notice it and it may take decades to notice the cumulative effect, if any. Cloud cover can and does make a substantial difference to incoming solar radiation and at night can reduce outgoing IR radiation. These effects are so large that anyone can simply feel the magnitude for themselves.
But when it comes to the models, the potentially massive positive or negative cloud effect cannot be calculated for various reasons. Taken together, the cloud effect swamps any CO2 induced warming, but since we cannot put reliable numbers on the former and its feedbacks, any projection of the resulting temperature is meaningless.
It may be possible to constrain the models or fit values obtained by hindcasting or whatever, but while that might make the model output look sensible rather than obviously wrong, it does not remove the uncertainty and the meaningless nature of the result.
Well, done, Dr Frank, excellent work.
Here’s another analogy that could help. Consider a man walking from point A to point B at a distance of 100 steps. Every step the man takes to get to point B shows some degree of variance. Think of this variance as basic “uncertainty” and is the equivalent of the cloud error range in climate models.
What Dr. Frank has done is essentially assume the man is blindfolded and this uncertainty applies to each step. The result is some range of values after 100 steps which would be huge.
What Roy and Nick are saying is that we know a person who is not blindfolded will do a lot better on each step. That is, for climate, conservation of energy limits the actual error. This could be done in our analogy by placing walls on either side of the man’s route to point B. These walls limit how far off the man could get. Hence, even when blindfolded, the walls keep the man within a narrow range. As a result of these walls the real error is less than the error predicted by error propagation.
The problem here is that these walls are no more than another guess. It also ignores there are other unknowns. For example, in this analogy there could be cross winds or uneven ground that is completely ignored in the measurement of the single step uncertainty.
To me the big question in model credibility is to what degree these walls are based on valid physics that takes into account ALL possible situations. I will acknowledge that such walls may exist, but I need those who would use them to defend the model results to tell us exactly how they were built.
If walls are needed to keep the model in check, then the model must be wrong; for a “correct” model would not have need of any walls at all.
Depending upon how it is being done, is the process of quantifying process uncertainty subject itself to some level of process uncertainty?
Hey Nick,
It shows nothing about the models. There is nothing in the paper about how GCMs actually work.
If you can get the same results as a complex GCM by running simple calculations in Excel, what’s the problem? You cannot do that for CFD. You cannot do that for FEM. But it looks like you can do it for a GCM. If so, maybe those complex models are simply overgrown? If the relationship between forcings and air temperature is relatively easy to capture, simple calculations will work as well as costly and complex ones.
Parameter, as long as the thirty-year running average of global mean temperature stays at or above + 0.1C per decade, climate scientists will claim that observations are consistent with model predictions.
“There is nothing in the paper about how GCMs actually work.”
GCMs don’t work.
They do not produce any useful and verifiable scientific results.
They produce projections of imagination.
The only value of their results is to demonstrate their lack of skill.
One hopes that efforts dedicated to improve them might be equal to the efforts made to defend them.
https://www.wcrp-climate.org/images/documents/grand_challenges/GC4_cloudsStevensBony_S2013.pdf
https://science.sciencemag.org/content/340/6136/1053
“If you can get the same results as complex GCM running simple calculations in Excel so what’s the problem? You cannot do that for CFD.”
Of course you can do it for CFD. If you model laminar pipe flow with CFD, you’ll get a uniform pressure gradient and a parabolic velocity profile. You could have got that with Excel. Undergraduates did it even before computers. But CFD will do transition to turbulence. Excel won’t.
But here it isn’t even an independent calculation. To get the “same results” you have to use two fitting parameters derived from looking at the GCM calculations you are trying to emulate.
Hey Nick,
Of course you can do it for CFD. If you model laminar pipe flow with CFD, you’ll get a uniform pressure gradient and a parabolic velocity profile. You could have got that with Excel.
OK. Can you model in Excel the pressure coefficients for a simulation of an aircraft in near-stall condition with a few million volume cells? And that’s the level we’re talking about with respect to GCMs, not a simple laminar flow approximation. If you can easily replicate the results of a GCM in Excel, even without using any solvers, that means the complex models are not so complex, unlike complex CFD or FEA. If you can do something in a simple way, why do the same thing in a complicated and costly way?
But here it isn’t even an independent calculation. To get the “same results” you have to use two fitting parameters derived from looking at the GCM calculations you are trying to emulate.
Don’t quite get it – can you elaborate?
“If you can easily replicate results of GCM in Excel, even without using any solvers, that means complex models are not so complex, unlike complex CFD or FEA.”
In fact GCMs are CFD. My point with pipe flow is that if the flow can be modelled simply, CFD will produce the simple result. What else should it do? It doesn’t mean CFD is an Excel macro.
“simulation of an aircraft in near-stall condition”
CFD doesn’t do so well there either (neither do aircraft). But CFD can do flow over an aerofoil in fairly normal conditions, as can a wind tunnel. So can a pen and paper Joukowski calculation. That doesn’t trivialise CFD.
“can you elaborate?”
Yes. Pat claims that his Eq 1 emulates the GCM output (for just one variable, surface temperature), and so he can analyse it for error propagation instead. But Eq 1 emulation requires peeking at the GCM output to get the emulation (curve fitting with parameters). So you can’t say that uncertainty in an input would produce such and such and uncertainty in the output of the calculation. You would have to first see how the uncertainty affected the GCMs from which you derive the fitting parameters.
“simulation of an aircraft in near-stall condition”
CFD doesn’t do so well there either (neither do aircraft). But CFD can do flow over an aerofoil in fairly normal conditions, as can a wind tunnel. So can a pen and paper Joukowski calculation. That doesn’t trivialise CFD.
————————————————-
It is difficult, but if you follow AIAA, they are having some success by going to unsteady CFD (DES, LES, DNS, etc.). These methods can be high fidelity, but are totally out of the realm of what is practical (based on computational power and time availibility) for climate modeling.
Hey Nick,
In fact GCMs are CFD. My point with pipe flow is that if the flow can be modelled simply, CFD will produce the simple result.
You can model some flows simply and you cannot model others. If you could model everything in Excel you wouldn’t need more advanced tools. The fact that you can emulate GCM air temperature output using a simple equation may be embarrassing to some, but it does not have to be a weakness. If the relationship between the different forcings and the temperature output is simple enough, what’s wrong with that? As you would say: “Of course you can do it for CFD.”
In fact GCMs are CFD.
Is it not multiphysics? Interesting.
“simulation of an aircraft in near-stall condition”
CFD doesn’t do so well there either (neither do aircraft).
As far as I’m aware, simulations of higher angles of attack and near stall are not uncommon. It’s not an easy problem to simulate (surely beyond Excel) but it can be done, some claim with reasonable accuracy, compared with experimental data.
“can you elaborate?”
Yes. Pat claims that his Eq 1 emulates the GCM output (for just one variable, surface temperature), and so he can analyse it for error propagation instead. But Eq 1 emulation requires peeking at the GCM output to get the emulation (curve fitting with parameters). So you can’t say that uncertainty in an input would produce such and such and uncertainty in the output of the calculation. You would have to first see how the uncertainty affected the GCMs from which you derive the fitting parameters.
So, are you saying that Pat employs some kind of circular reasoning here? In order to emulate GCM output we need to look at this output first to figure out the emulation values? That’s bizarre – in this case we wouldn’t need any emulator – just copy the output from the GCM. Let’s have a closer look at that: which term in Pat’s equation represents this ‘peeking’ into the GCM model?
Nick, “You would have to first see how the uncertainty affected the GCMs from which you derive the fitting parameters.”
Uncertainty doesn’t affect GCMs.
It’s funny, really. It’s GCMs that affect uncertainty.
You’re always making that same mistake, Nick.
If all you want is delta P and the velocity profile in pipe flow, I would suggest using an Excel spreadsheet. You will get the exact [analytical] answer without having to deal with the meshing, mesh refinement, etc. The CFD will actually give you an approximation of the analytical answer. Laminar-to-turbulent transition in CFD is actually difficult.
What Pat showed was that if you want to know the annual global temperature output of a CIMP5 GCM, don’t bother with running the GCM, just use his simple expression and get a good enough answer.
Now if you want, as a scientist to examine the interplay of various energy exchange mechanisms in the climate, and have models or hypothesis to test a GCM may be a good platform – just do not mistake the temperature outputs as accurate. The interplay between mechanisms may shed new insight into the physics.
“just use his simple expression and get a good enough answer”
You would get an answer about how his curve fitting model behaves, once you sort out the mathematical errors. It doesn’t tell you anything about how a GCM would respond. In fact, since the curve fitter depends on the GCMs for fitted coefficients, you can’t even consistently analyse the simple model, since you don’t know how those coefficients might change.
“You would get an answer about how his curve fitting model behaves, once you sort out the mathematical errors. It doesn’t tell you anything about how a GCM would respond.”
Actually the model does, because he validated it over the space of many GCM runs.
“since the curve fitter depends on the GCMs for fitted coefficients, you can’t even consistently analyse the simple model, since you don’t know how those coefficients might change”
He used model parameters, not fitting coefficients, the same parameters and the same basic form of GHG forcing as the GCM does. Not sure the parameters change in GCMs. If they do, the emulator was still validated against many runs.
Pat,
If I may, I’d like to add a couple of thoughts to your opening list…
First, propagation of error is not a prediction, but rather a calculation of what variation is consistent with a model plus uncertainties in the model parameters and inputs. When you published your original post I really was convinced that all disagreement was simply a misunderstanding. I now am convinced that your critics have a fundamental misconception about models, measurements, and resolution. Among other things there seems little appreciation that even with a stable system, one still has uncertainty of inputs that are not like initial conditions, and which drive the model without end. These translate into interminable uncertainty in model output. I tried to show this in my post of a little over a week ago — without much effect.
Second, beyond the idea that climate models do not have errors of this sort, there is the insistence of Nick Stokes that a Monte Carlo simulation would actually be more appropriate to determining the value of climate models than would be the error propagation you introduced. In principle he is correct, but my understanding is that the climate models may have a hundred adjustable parameters, and perhaps additional adjustable inputs (drivers). The idea of doing a credible Monte Carlo simulation in such a high dimensional space is preposterous. Perhaps your approach of a representative model of models is the only reasonable approach. However, Mototaka Nakamura, in his English language version of parts of his recent book on Amazon, related the story of modifying a climate code to use a more representative parameterization of a factor I cannot recall at this moment, without it having much effect. Perhaps the climate models could be trimmed down to a much smaller kernel on which a credible Monte Carlo effort is possible.
Third, error propagation may not be used in simulations at present, but I can’t understand the stance that it is a priori not pertinent. I teach a number of design and laboratory courses in mechanical engineering. I present error propagation (which I call uncertainty propagation) as a way to evaluate designs and experiments, and to guide modifications required to meet objectives. No precision work of any sort is possible without it.
A couple of closing thoughts: Nick said in the thread above
I don’t know if this was Roy’s point, but if a credible uncertainty analysis results in bounds beyond what a physical phenomenon is capable of producing, then the person making the claim must have some credible competing and independent estimate of the bound, which no critic ever seems to offer. Absent some omniscience about the physical process in question, one would think that estimated bounds beyond physical possibility indicate something wrong or incomplete with the model.
In my case I was at first put off by the stunning size of your bounds, and by the propagation of error through iterated steps. I found the step size you considered to be sort of ad hoc and I wasn’t certain that it was a reasonable model of how error stacks up. I wonder if instead you might consider an alternative of a secular trend with an uncertain slope?
Finally, one factor appearing to produce the same confusion, over and over, is that estimates of the uncertainties have to come from what we know of the underlying physical process, or from calibration of instruments and so forth, which all involve physical units just like the units of actual quantities. So, an uncertainty in solar insolation (per Mototaka Nakamura) is stated in W/m^2, which looks like a true energy, but in fact represents our level of ignorance about an input.
“if a credible uncertainty analysis results in bounds beyond what a physical phenomenon is capable of producing, then the person making the claim must have some credible competing and independent estimate of the bound”
That doesn’t make much sense. But of course they have an estimate, and Roy said so. Conservation, particularly of energy. If the IR opacity of air remains about the same and the temperature increases 10°C, then the Earth will lose heat faster than the Sun can supply it. So you are not uncertain about whether that situation could happen. The calculation in a GCM conserves energy, so it cannot yield such a situation either. Pat’s calculation can. That is why it is meaningless.
“The idea of doing a credible Monte Carlo simulation in such a high dimensional space is preposterous.”
No, it isn’t. Chaos limits dimensionality; it reduces to that of the attractor. Varying those parameters does not produce independent effects. The perturbations reduce to a space of much smaller dimension, and it is propagation of that which you test.
You’re confusing precision of an output with uncertainty around that output.
You keep doing it, and it’s getting embarrassing.
You really should read up.
It should not be hard to understand that uncertainty propagates. It should also not be hard to understand that uncertainty in an initial state that compounds will result in greater uncertainty at a later state, irrespective of the alleged precision of the model.
“You’re confusing precision of an output with uncertainty around that output”
Could you explain the difference?
Sure. Precision, as it has been applied to GCMs, is how much the individual models scatter around the mean. An alternate definition: given general stochastic influences on a model, how much its output in different runs varies from the mean.
Uncertainty is about what is knowable given the crudeness of the underlying measurements going into a calculation.
If your model claims to resolve a 2-4 watts per square meter forcing and it has an input with an uncertainty of (+/-)4 watts per square meter, it’s dogsh*t.
You can’t measure a nanometer with a millimeter ruler. You can’t measure a 2 w/sq meter resolution event with a model that has inputs that vary by (+/-)4 watts per square meter.
The more operations, the more the uncertainty propagates. At the end of 100 years, it doesn’t matter what garbage overrides in a model have made it converge to an expected output with high precision, because the underlying measurements are not known to a high degree.
Your model can give an expected value that is acceptable down to the nanometer, with standard error to the nanometer. But that doesn’t mean it has information value. Your measurements don’t support the precision expected.
If your uncertainty is greater than your range of outcomes, your model is dogsh*t.
“Precision as defined in GCMs has been how much the individual models have moment around the mean.”
Who defines it so? It seems strained to me. Terms like variability are more appropriate.
“If your uncertainty is greater than your range of outcomes, your model is dogsh*t.”
No, the uncertainty is wrong. Uncertainty is the range of outcomes you could get if the inputs ranged over their uncertainty distribution. And that is the problem here. If there is a range of outputs the model just couldn’t produce, then you aren’t uncertain about that range. And if someone’s analysis tells you that you are, the analysis is wrong.
“Uncertainty is the range of outcomes you could get if the inputs ranged over their uncertainty distribution.”
Sounds like a Numerical Variational Study. Determines the sensitivity of outputs to variations in inputs, not so much the uncertainty.
It would sure be nice if we could at least agree on terms. “Precision”, to me, is a measurement of how close together the results are. It says nothing about those close-together results being close to the true value.
One could calculate several outputs that fall within ±0.1W/m^2 of one another, but each having an uncertainty of ±4W/m^2. They would be very precise, have large uncertainties, and we would still not know how close to the true value any of them are.
Is there a different definition of “precision” used in GCMs than the rest of science?
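As a toy illustration of that example (the numbers below are invented purely for illustration; they are not from any GCM), a minimal Python sketch of precise-but-uncertain outputs:

# Illustrative only: "model outputs" clustered within about +/-0.1 W/m^2 of one
# another (high precision), each carrying an assumed +/-4 W/m^2 calibration
# uncertainty, and offset from an assumed "true" value (unknown accuracy).
import statistics

true_value = 240.0                                   # hypothetical true flux, W/m^2
outputs = [243.94, 244.02, 243.97, 244.05, 243.99]   # tightly clustered results
calibration_uncertainty = 4.0                        # +/- W/m^2, assumed calibration error

precision = statistics.stdev(outputs)                # spread of results, ~0.04 W/m^2
offset = statistics.mean(outputs) - true_value       # ~4 W/m^2 away from the assumed truth

print(f"precision (spread of results): +/-{precision:.2f} W/m^2")
print(f"offset from the assumed true value: {offset:.2f} W/m^2")
print(f"stated calibration uncertainty: +/-{calibration_uncertainty} W/m^2")
# The tight spread says nothing about how far the cluster sits from the true value.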
“It would sure be nice if we could at least agree on terms. “Precision”, to me, is a measurement of how close together the results are. It says nothing about those close-together results being close to the true value.”
+1
Nick does not understand what uncertainty is. Nick thinks you can measure something in nanometers with underlying measurements to the millimeter.
Way too much evil CO2 is expended trying to teach this troll.
Take a bucket.
1. Take a 1000 mL graduated cylinder with 2 mL gradations, fill it with 500 mL of water, and add it to the bucket.
2. Pipet out 250 mL of water from the bucket into the 1000 mL graduated cylinder and throw the water away. Add 250 mL of new water to the bucket from the same graduated cylinder.
3. Repeat this process 10000 times.
How much water is in the bucket Nick? Easy. 500ml.
V = 500 – 250 + 250 – … – 250 + 250 = 500 mL
Each addition and subtraction gets you back to the initial volume, in math.
What’s your uncertainty, Nick?
It doesn’t matter that your math says the bucket hasn’t filled up or emptied. Math doesn’t care that your graduated cylinder can only measure to (+/-)1 mL, and that over 10,000 repetitions the uncertainty propagates.
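A minimal sketch of the bookkeeping the bucket example implies, assuming each cylinder reading carries an independent (+/-)1 mL uncertainty (an assumption of the sketch, not a measured figure):

# Toy illustration of the bucket example above: the nominal volume never
# changes, but the root-sum-square uncertainty of the measured transfers grows.
import math

u_reading = 1.0        # mL, assumed independent uncertainty per measured transfer
n_cycles = 10_000      # each cycle = one 250 mL removal + one 250 mL addition

nominal = 500.0        # mL, unchanged by the arithmetic: 500 - 250 + 250 ... = 500
n_transfers = 2 * n_cycles
u_total = math.sqrt(n_transfers) * u_reading   # root-sum-square of equal, independent terms

print(f"nominal volume: {nominal} mL")
print(f"uncertainty after {n_cycles} cycles: +/-{u_total:.0f} mL")
# ~ +/-141 mL: the math says 500 mL, but you no longer know that volume to
# anything like the resolution of a single reading.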
Nick,
“No, the uncertainty is wrong. Uncertainty is the range of outcomes you could get if the inputs ranged over their uncertainty distribution. And that is the problem here. If there is a range of outputs the model just couldn’t produce, then you aren’t uncertain about that range. And if someone’s analysis tells you that you are, the analysis is wrong.”
No, we went over this in a different thread. How quickly you forget. The GCM models are deterministic. Put in an input and you get out an output. You can put in the same input time after time and you will get the same output. If this isn’t true then the models are even more useless than I expected. What you are trying to say is that a Monte Carlo analysis using a large number of runs with different inputs can define the uncertainty in the output. And that is just plain wrong.
Many, many years ago when I was doing long range planning for a large telephone company we did what you describe in order to rank capital expenditure projects. We would take all kinds of unknowns, e.g. ad valorem taxes, interest rates, rates of return on investment, labor costs, etc, and vary each of them one at a time over a range of values to see what happened to the outputs. That’s called “sensitivity analysis”, not uncertainty analysis. It tells you how sensitive the model is to changes in input but tells you absolutely nothing about the uncertainty in the model output. Run 1 with a set of input values would have some uncertainty in the output. Run 2 with one input changed would *still* have some uncertainty in the output. Same for Run 3 to Run 100. That uncertainty was based on the fact that not all inputs could be made 100% accurate. You could never tell exactly what the corporation commission was going to do with rates of return three years, ten years, or twenty years in the future. You could never tell what the FED was going to do with interest rates at any point in the future. All you could do is pick the capital projects which showed the least sensitivity to all the inputs while still providing acceptable earnings on investment. (all kinds of other judgments also had to be made, such as picking highly sensitive projects with short payback periods – it’s called risk analysis).
The very fact that your inputs have uncertainty is a 100% guarantee that your output will have uncertainty. The only way to have no uncertainty in your output is to have no uncertainty in your input and no uncertainty in the model equations.
Please try to tell us that your model inputs are all 100% accurate!
Tim Gorman
You remarked, “How quickly you forget.” It is impossible to know whether cognitive bias is making Stokes’ memory be selective, or whether he is being disingenuous to try to win the argument. The fact that I’ve never known him to admit to a mistake, and that he always finds something to object to from everyone who disagrees with him, suggests to me that he is not being honest.
https://en.wikipedia.org/wiki/Sensitivity_analysis
Wikipedia makes a distinction between sensitivity analysis and uncertainty analysis. I think that reading the above link would be in everyone’s best interest, especially Stokes.
Disclaimer: While I understand that some people don’t think highly of Wikipedia, it has been my experience that, in areas of science and mathematics, it is generally trustworthy. It is in areas of politics and ideologically-driven topics that it presents biased opinions.
Nick, “No, the uncertainty is wrong.”
Uncertainty is the root-sum-square. It grows without bound across sequential calculations. In principle, uncertainty can grow to infinity and not be wrong.
“Uncertainty is the range of outcomes you could get if the inputs ranged over their uncertainty distribution.”
In epidemiological models. Not in physical models.
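In symbols, the root-sum-square rule being invoked here, assuming n sequential steps each carrying the same independent uncertainty u:

\[ u_{\mathrm{total}} = \sqrt{\sum_{i=1}^{n} u_i^{2}} = u\sqrt{n} \]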
“Uncertainty is the range of outcomes you could get if the inputs ranged over their uncertainty distribution. And that is the problem here. If there is a range of outputs the model just couldn’t produce, then you aren’t uncertain about that range. And if someone’s analysis tells you that you are, the analysis is wrong.”
Not sure about that. Maybe you could do (with the CMIP5 GCMs) what I proposed here:
https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/#comment-2790375
Nick, you say
But sir, the Earth has been nearly that much warmer at times in the past so there must be some combination of parameters that can and did produce a large excursion. I want to know how it is you are so certain that some similar displacement is not possible now. Rather than me not making much sense, this is exactly what I mean by having some independent and credible figure of what the climate is capable of — where do you get yours?
I don’t know how chaos entered the discussion. I didn’t bring it up, but it seems to me that it doesn’t have much bearing on the subject of how uncertain we might be about the result of a calculation or measurement from defensible estimates of how uncertain we are about the factors that go into the measurement or calculation.
Finally you ask Capt. Climate nearby about what is the difference between precision and uncertainty. As you may know there have been competing measurements of fundamental constants, each of which indicated great precision based on repeated measurements, but which differed from each other by, sometimes, scores of standard errors. Two independent measurements of the same thing differing by so much is extremely improbable. Yet it happened. It’s the difference between precision and uncertainty.
Kevin,
“But sir, the Earth has been nearly that much warmer at times in the past so there must be some combination of parameters that can and did produce a large excursion. “
At times very long ago, and with a very different atmosphere, not to mention configuration of continents etc. The Earth is not going to get into that state in the next century or so.
Much nonsense has been spoken here about uncertainty, elevating it to some near spiritual state disconnected from what a GCM might actually produce. I don’t believe that notion has any place in regular science, but the natural response is, if GCMs aren’t actually going to show it, why would we want to think about it. It is just an uncertainty in Pat’s model, not GCMs. What GCMs might do, and the way the physics constrains them, was the main criticism made by Roy. A 10°C rise in surface temperature with no great change in atmospheric impedance would lead to unsustainable IR outflux at TOA. The physics built into GCMs would prevent them entering such a state. Uncertainty for a GCM is simply the range of outputs that it would produce if the various inputs varied through their range of uncertainty. There is no other way of quantifying it.
Chaos relates to your proposition that GCMs have far too many dimensions to test by ensemble. My point is that chaos reduces dimensionality. You already see this in the Lorenz demo, where the 3D space of possible states is reduced to the 2D space of the butterfly. Chaos ensures that there is vanishing dependence on initial state. That means that all the possible dimensions associated with initial wrinkles merge. You can imagine it with a river. You could do a Monte Carlo by throwing in stones, dipping in paddles, whatever. The only things that would make a real difference downstream is a substantial mass influx, or maybe heat. That is a very few dimensions. Most fluid flow is like this. It is why turbulence modelling works.
“It’s the difference between precision and uncertainty.”
No, it is just an inadequate estimate of precision. Both describe the variability that might ensue if the measurement were done in other ways. The gap just illustrates that whoever estimated precision did not think of all the possible ways measurement methods could vary.
Nick,
“Much nonsense has been spoken here about uncertainty, elevating it to some near spiritual state disconnected from what a GCM might actually produce. I don’t believe that notion has any place in regular science, but the natural response is, if GCMs aren’t actually going to show it, why would we want to think about it.”
OMG! You just described the attitude of mathematicians and computer programmers to a T. “My program has no uncertainty in its output!”
Uncertainty is why test pilots still die when testing planes and cars at speed. And *you* don’t believe that uncertainty has any place in regular science.
The mission of science is to describe reality. To think that *your* description of reality is perfect is the ultimate in hubris; it puts you on the same plane as God – you are omniscient. It’s no wonder you can’t accept that there is uncertainty in the GCMs’ description of reality.
” You just described the attitude of mathematicians and computer programmers to a T. “My program has no uncertainty in its output!””
No, I’m saying that the output of the program reflects the uncertainty of the inputs, modified by whatever the processing does. And so you need to know what the processing does.
But uncertainty has to be connected to what the output can actually produce. In terms of your black boxes, Pat’s curves are like saying the output of the box is ±40 V, when the power supply is 15 V.
“No, I’m saying that the output of the program reflects the uncertainty of the inputs, modified by whatever the processing does. And so you need to know what the processing does.”
You have been fighting against Pat’s thesis which *is* uncertainty of input. Internal processing simply cannot decrease uncertainty caused by uncertain inputs. Internal processing can only *add* more uncertainty which Pat did not address.
“But uncertainty has to be connected to what the output can actually produce.”
Actually it does *not* have to do so in an iterative process. The iterative process should *stop* when the output becomes so uncertain that the iterative process is overwhelmed by the uncertainty. The uncertainty only grows past what the output can actually produce because the process is carried past the point where the output is overwhelmed.
“Pat’s curves are like saying the output of the box is ±40 V, when the power supply is 15 V.”
The curves should stop when the uncertainty goes past (+/-)15 V. It would appear that what you are actually trying to say is that the uncertainty level of the GCMs doesn’t matter – it is valid to continue the iterative process past the point where the uncertainty overwhelms the output. The *only* reason to continue further is because you don’t care about the uncertainty interval. If you stop the iterative process at the point where the uncertainty overwhelms the output then you will never see the uncertainty interval growing past what the model output can reach.
Think about it. If the uncertainty is large enough you won’t even be able to get past the first step! You won’t know if your model resembles reality or not! If after the first step your model shows a temp increase of 1 but the uncertainty is +/- 2 you won’t even know for sure if the sign of your output is correct!
+1 big time, Tim Gorman.
“Second, beyond the idea that climate models do not have errors of this sort, there is the insistence of Nick Stokes that a Monte Carlo simulation would actually be more appropriate to determining the value of climate models than would be the error propagation you introduced. In principle he is correct,”
I don’t agree. All such a Monte Carlo analysis would show is the sensitivity of the model to input variations. It won’t help define the uncertainty in a deterministic model in any way, shape, or form. Each and every run would still have an uncertainty associated with its output. The very fact that inputs have an uncertainty which allows varying the inputs is a 100% guarantee that the outputs will have an uncertainty based on the uncertainty of the inputs.
I agree with you about Monte Carlo analysis in these cases, Tim.
The efficacy of a Monte Carlo analysis in the context of physical science would require its use to evaluate the output distribution of a physical model already and independently known to be physically complete and correct.
An example might be Thermodynamics. If someone were calculating some complex gas-phase system in which the PVT phase diagram of the gas mixture is not well known, then a Monte Carlo evaluation of the dependence of the calculations on the uncertainty widths of the incompletely known PVT values would reveal the uncertainty in the predicted behavior.
But that’s only because Thermodynamics is independently known to be a physically complete theory and to yield correct and accurate answers when the state variables of gases are well known.
Climate models incorporate incomplete or wrong physics. They are not independently known to give accurate answers. The mean of a Monte Carlo distribution of climate model results may be well-displaced from the physically correct answer, but no one can know that. Nor how far the displacement.
The physically correct answer about climate is not known. Nor even is a tight range of physically likely answers. No one knows where the answer lies, concerning future air temperatures.
So, a Monte Carlo interval based upon parameter uncertainties (a sensitivity analysis) tells no one anything about an interval around the correct answer — accuracy.
It only tells us about the spread of the model — precision.
At some future day, when physical meteorologists have figured out how the climate actually works, and produced a viable and falsifiable physical theory then, and only then, might a Monte Carlo analysis of climate model outputs become useful.
A long way to agree with you. 🙂
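A toy sketch of the distinction being drawn here, with an invented linear accumulator standing in for a model (the toy sensitivity, input spread, and calibration error are all assumptions of the sketch, not properties of any GCM): a Monte Carlo spread of outputs from varied inputs answers a precision question, while propagating a fixed calibration uncertainty answers an accuracy question.

# Contrast (a) Monte Carlo spread from varied inputs with (b) propagated
# calibration uncertainty. The "model" here is a made-up linear accumulator.
import math
import random
import statistics

random.seed(1)

def toy_model(forcing_per_step, n_steps=100):
    # accumulate a response; 0.5 is an arbitrary toy sensitivity
    return sum(0.5 * forcing_per_step for _ in range(n_steps))

# (a) Monte Carlo: vary the input over an assumed +/-0.1 spread and look at the
# spread of outputs. This measures the sensitivity/precision of the toy model.
runs = [toy_model(random.gauss(1.0, 0.1)) for _ in range(1000)]
mc_spread = statistics.stdev(runs)

# (b) Propagation: an assumed fixed +/-0.2 calibration uncertainty in the
# per-step forcing propagates in quadrature through the 100 accumulation steps,
# regardless of what the model outputs happen to be.
u_cal_per_step = 0.5 * 0.2
u_propagated = u_cal_per_step * math.sqrt(100)

print(f"Monte Carlo spread of outputs: +/-{mc_spread:.1f}")
print(f"propagated calibration uncertainty: +/-{u_propagated:.1f}")
# The two numbers answer different questions: the first describes the scatter of
# the toy model's outputs, the second how well any single output is known.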
Pat,
The very fact that some think it is necessary to vary the inputs to the climate models in order to evaluate uncertainty is tacit admission that there *is* uncertainty in the model inputs and outputs. Once that is admitted then the next step is to actually evaluate that uncertainty – something the warmists refuse to do and will fight to the death to kill any suggestion they need to do so. The very fact that some think you can determine uncertainty by using uncertain inputs is a prime example of the total lack of understanding about uncertainty. It’s a snake chasing its own tail. Sooner or later the snake eats itself!
You’ve been a hero of this conversation, Tim.
It’s clear that whatever training climate scientists get, almost none of them ever get exposed to physical error analysis. The whole idea seems foreign to almost all of that group.
My qualifiers refer to three of my four Frontiers reviewers. I am extraordinarily lucky those people were picked. Otherwise I’d still be in the outer darkness of angry reviewers who have no idea what they’re going on about.
Pat,
“It’s clear that whatever training climate scientists get, almost none of them ever get exposed to physical error analysis. The whole idea seems foreign to almost all of that group.”
It’s not just climate scientists. My son received his PhD in Immunology in the recent past. He has always been a perfectionist (a real pain in the butt sometimes :-)) and meticulous in everything he does. He is now involved in HIV research. He has told me many times that part of the reason so many experiments are not reproducible today in his field is because few researchers bother to do any uncertainty analysis in their experiment design, execution methodology and analysis methods, e.g. like your post about titration. It just seems to be so endemic in so much of the academic hierarchy today. My son didn’t listen when his undergraduate advisor told him not to worry about taking statistics courses – that you could always find a math major to do that! Unfreakingbelievable!
Kevin, you wrote a very interesting post, and I regret not having the time to comment there.
I completely agree with your take on propagation. Misunderstanding that one point is the source of about 99.9% of all the critical objections I’ve received.
In Chemistry, we call uncertainty ‘propagation of error,’ because typically the initial uncertainty metric derives from some calibration experiment that shows the error in the measurement or the model.
That’s pretty much what Lauer and Hamilton provided with their annual average (simulation minus observation) rmse of (+/-)4 W/m^2 in LWCF; i.e., a calibration error statistic derived from model error.
The reason I chose a year is because that’s typically how air temperature projections are presented, and also the LWCF rmse was an annual average. So the two annual metrics pretty much dovetailed.
I, too, was surprised by the size of the uncertainty envelope, but that was how it came out, and one has to go with the result, whatever it is.
Your point that “if a credible uncertainty analysis results in bounds beyond what a physical phenomenon is capable of producing,… indicate[s] something wrong or incomplete with the model” is dead on.
I have been making that point in as many ways as I could think of. So have Tim Gorman, Clyde Spencer, and many others here.
But not one climate modeler has ever agreed to it. Nick Stokes and Mr. ATTP have dismissed your very point endlessly. They see no distinction between precision and accuracy.
It may be that Mototaka Nakamura is talking about the uncertainty in TOA flux. As I recall, Graeme Stephens published an analysis showing the uncertainty in various fluxes. He reported that the TOA flux wasn’t known to better than (+/-)3.9 W/m^2.
Another stunner he reported was that the surface flux wasn’t known to better than (+/-)17 W/m^2. And then modelers talk about a 0.6 W/m^2 surface imbalance.
About your “I wonder if instead you might consider an alternative of a secular trend with an uncertain slope?”: I did a pretty standard uncertainty analysis. Call it a first attempt. If you or anyone would like to essay something more complete, I’d be all for it. 🙂
Thanks Kevin.
What the heck is the first term on the right hand side of equation 1? Please show. The equation does not make sense. If it is a forcing it can not be dimensionless.
https://www.cawcr.gov.au/technical-reports/CTR_042.pdf
This says forcings are in watts per square meter.
F = 5.35 ln(C/C0). This is the forcing equation; it is in watts per square meter.
https://www.friendsofscience.org/assets/documents/GlobalWarmingScam_Gray.pdf
If equation 1 is valid, or if the forcing equation is valid, they should be able to tell the change in temperature of a jar of air going from 0% CO2 to 100% CO2.
Anthony’s jar experiment proved higher concentration of CO2 does not lead to higher temperature.
mkelly, here’s the description of the first term of eqn. 1 from page 3 of the paper:
“The f_CO2 = 0.42 is derived from the published work of Manabe and Wetherald (1967), and represents the simulated fraction of global greenhouse surface warming provided by water-vapor-enhanced atmospheric CO2, taking into account the average of clear and cloud-covered sky. The full derivation is provided in Section 2 of the Supporting Information,… (my bold)”
Please consult Supporting Information Section 2 for details.
The following statement by Nick S also caused some dissonance for me:
My question is, “How could you know that any future value produced by the GCM was below any reasonable value?” — the data does not exist yet — it is future data yet to be observed — it is unreal. You could only know, when the time arrives when the data was recorded, and then you would compare the actual data to previous-forecast data about what it might be to see how the latest piece of real data measured up.
Something has to give you reason to have confidence 50 or 100 years out that a given forecast has some dependability. If the same unknowns are being used over and over again for decades on end, how can the uncertainty about these unknowns not balloon up to ridiculous sizes that make the forecasts useless?
Thus, as I see it, it is the forecast that is meaningless, not the uncertainty interval that gives a basis to trust the forecast.
I did not see any terms for greenhouse gas concentrations in Pat Frank’s equations. This means that according to Pat Frank, the uncertainty would be the same in RCP2.6 models or if unchanging CO2 is modeled as it is in RCP8.5 models.
“fCO2 is a dimensionless fraction expressing the magnitude of the water-vapor enhanced (wve) CO2 GHG forcing relevant to transient climate sensitivity but only as expressed within GCMs”
from description of equation 1 in Pat’s paper
The uncertainty is in the inability of models to simulate cloud fraction, Donald. The uncertainty doesn’t depend on CO2 emissions or concentrations.
I think the big disconnect is in the term “propagation of error”. There are basically two types of errors: calibration, and precision/noise. Pat is talking about calibration (accuracy) errors, and others like Nick are thinking of precision errors, while others are confusing the two. Here’s my stab at explaining the difference:
Precision/noise error:
If you take a digital picture of a scene, you will sometimes see some speckling in the darker areas. This is due to the light intensity in those areas being too close to the lowest sensitivity level of the pixels in the image sensor. Transistor switching noise at this level will cause random differences between adjacent pixels. Most cameras have filter software that can average out this kind of noise to reduce this speckling considerably. This improves the visual image because noise was reduced. Details that were obscured before may now be visible.
Accuracy/Calibration error:
Now take that same image and look at a very small detail (perhaps in the background); one that is too small to identify. Zoom up that detail with software. It is now a bit “grainier” or “blockier”, but not any clearer. Zoom up again. It gets bigger and blockier, but you still can’t make out any more detail. In fact, it probably gets worse the more the zoom up. This is because the camera only has so many pixels per inch in the image sensor. The detail you are trying to resolve simply was not captured with enough pixels to tell what it is. You are just missing too much information, and nothing you can do in post processing can produce that missing information. You can interpolate to estimate it, but that is still just a guess. It may be a good guess, or a bad guess, but you can’t *know* either way. If the stakes are small, then maybe a guess is good enough, but if the stakes are big, then you want to know, not guess.
Maybe a few more people will get it now. Alas, some never will.
What about the 3rd most important kind of error: incomplete or completely wrong foundation data?
That is…Crap goes in, crap comes out.
Most of this thread focuses on the finer points in error propagation in models, and the discussion has been instructive.
In my almost 40 years in modeling and simulation working for a major defense technology company partnering with Los Alamos NL, our modeling for the USAF in various classified domains resulted in over 10M lines of code to represent large numbers of stochastic, interacting entity and aggregate level systems. It would not have occurred to us to worry about propagation of error if we did not fully understand the behavior we were modeling, and validate and thoroughly verify the model or simulation representation. Why is this not true of the GCMs? It seems to me that climate dynamics has large areas that remain poorly understood. Or did I miss something?
Because in order to verify GCMs you have to wait 80-100 years to see the result (clue = 42).
The GCMs are “tuned” using currently available data, so testing it is not meaningful.
The other part is that they continually “update” the GCM’s with the effective note, “please disregard previous projections of inaccurate models”. Consequently, projections from 20 years ago are to be ignored. I have yet to see any reference to any study or paper that analyzes the errors and uncertainty of previous models when compared to two decades of actual data. I think more time, money and effort is expended on learning what and by how much to change parameters than that spent on learning specifically what scientists don’t know.
Up above, I showed an email relating to uncertainty in measurements of conventional historic air temperatures managed by the BOM. Some points arising:
1. The official estimate that Australia warmed by 0.9 deg C in the century starting 1900 has to be viewed in context of individual observations being one sigma +/- 0.3 deg C. Or larger.
2. That 0.3 degrees was calculated for electronic thermometers introduced in the mid 1990s. The errors with liquid in glass thermometers and their screens are highly likely to make the figure larger, as is the transition from LIG to electronic.
3. But the BOM notes its work on errors is still in progress. It is relevant to ask why official figures for warming are being calculated and used for policy formulation when their accuracy and precision are still unknown. To me, this is simply horribly poor science.
4. Because we do not know the accurate figure for warming here, we need to keep an eye on destinations like estimates of global warming. These BOM temperatures are exported to places like Berkeley, Giss, Hadley. They are some of the core inputs to GCM studies like the CMIP series.
5. It follows that at least some of the CMIP inputs have unknown or unstated uncertainties.
6. The GCM process needs a formal error study to add to this work by Pat Frank.
7. There is no way that GCMs, with unknown uncertainties, should be used for international or national policy formulation.
8. There ought to be a law preventing people from using work in progress as if it was the fully-studied, error-known full Monty. Geoff S.
As someone who has performed billions of measurements with advanced instruments (mostly automated, of course), I’m not sure that I agree that uncertainty does not result in errors. Perhaps it’s an issue of semantics and this is not what Pat is saying.
I think Pat wins the day simply with the fact that models are “tuned” by hand (“fudge-factored”) to match known results over an entire period, and various models use different tuning. This is pure bull$#!&, and arguing otherwise just throws your credibility out the window. You either have an accurate model that covers all major factors such that it can be developed using say, 1900 to 1960 data only, and then you run it for 1961 to 2020 with zero additional tweaking and you match the global temperature for 1961 to 2020, or you have to fake it from 1900 up to now for a match. And then you claim it’s good for predicting the temp in 2100.
“I think Pat wins the day simply with the fact that models are “tuned” by hand (“fudge-factored”) to match known results over an entire period”
That is not a fact.
If the GCM’s can’t handle clouds properly through computation of equations of physical properties, then just how are clouds handled if not by programming “tuned or fudge-factored” parameters?
Nick Stokes, October 16, 2019 at 10:53 pm
“I think Pat wins the day simply with the fact that models are “tuned” by hand (“fudge-factored”) to match known results over an entire period”
That is not a fact.
Why so?
“All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!
If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior:
Figure 1. Yearly- and global-average longwave infrared energy flux variations at top-of-atmosphere from 10 CMIP5 climate models in the first 100 years of their pre-industrial “control runs”.
Importantly, this forced-balancing of the global energy budget is not done at every model time step, or every year, or every 10 years, it is done once, for the average behavior of the model over multi-century pre-industrial control runs.
The ~20 different models from around the world cover a WIDE variety of errors in the component energy fluxes, as Dr. Frank shows in his paper, yet they all basically behave the same in their temperature projections for the same (1) climate sensitivity and (2) rate of ocean heat uptake in response to anthropogenic greenhouse gas emissions.
Thus, the models themselves demonstrate that their global warming forecasts do not depend upon those bias errors in the components of the energy fluxes (such as global cloud cover) but are
“tuned” by hand (“fudge-factored”) to match known results over an entire period”
That is a fact; see Roy Spencer, these are his words on the fudging that occurs.
Model tuning to match target observables is discussed here; a far from exhaustive list:
Kiehl JT. Twentieth century climate model response and climate sensitivity. Geophys Res Lett. 2007;34(22):L22710.
Bender FA-M. A note on the effect of GCM tuning on climate sensitivity. Environmental Research Letters. 2008;3(1):014001.
Hourdin F, et al. The Art and Science of Climate Model Tuning. Bulletin of the American Meteorological Society. 2017;98(3):589-602.
Mauritsen T, et al. Tuning the climate of a global model. Journal of Advances in Modeling Earth Systems. 2012;4(3).
Lauer and Hamilton also mention model tuning: “The SCF [shortwave cloud forcing] and LCF [longwave cloud forcing] directly affect the global mean radiative balance of the earth, so it is reasonable to suppose that modelers have focused on ‘‘tuning’’ their results to reproduce aspects of SCF and LCF as the global energy balance is of crucial importance for long climate integrations.
…
“The better performance of the models in reproducing observed annual mean SCF and LCF therefore suggests that this good agreement is mainly a result of careful model tuning rather than an accurate fundamental representation of cloud processes in the models.“
I am confused. It cannot be possible that this was not part of the curriculum of climate science! This is elementary science (experimental physics). We had it in high school and then later during the bachelor studies for engineering and physics!
And if prominent climate scientists don’t grasp the difference between statistics and energy state differential equations of physical properties, then something is truly wrong in the climate sciences.
You got it, Max. Something is truly wrong in the climate sciences.
Hey James,
Is there a different definition of “precision” used in GCMs than the rest of science?
The Future of the World’s Climate (Henderson-Sellers and McGuffie, 2012, Elsevier) lists the following four uncertainties associated with GCMs:
1. “Uncertainty due to the imperfectly known initial conditions. In climate modelling, a simulation only provides one possible realization among many that may be equally likely, and it requires only small perturbations in the initial conditions to produce different realizations […]”
2. Uncertainty due to parameterization of processes that occur on scales smaller than the grid scale of the model (that covers cloud physics).
3. Uncertainty due to numerical approximation of the non-linear differential equations.
4. Uncertainty due to precision of the hardware on which the model is run.
The authors freely admit that it is difficult to determine precisely how much uncertainty is associated with each of these sources. However, and that’s the interesting bit in my view: “their overall impact can be estimated by the spread in the climate simulations from many GCMs”.
So it looks like the modelers equate the spread of runs from multiple simulations with the total uncertainty.
The authors refer to several different GCM runs that generated a spread of outputs of about 0.5°C. “This is an indication of the level of uncertainty associated with the simulation of global temperature from GCMs”.
The main problem I see with most climate models is None Of The Above. It is something that I remember Dr. Roy Spencer discussing to some extent or another in his blog. The main problem I see with climate models is groupthink, with some enforcement of a party line, and with multidecadal oscillations being ignored. I see the ignoring of multidecadal oscillations when calibrating / “tuning” models (especially CMIP5 ones) as causing most of these models to be tuned so that the feedbacks from warming caused by the increase of GHGs produce about .065 degree/decade (C/K degrees) more warming than they actually did during the most recent 30 years of their hindcasts.
I expect that if these climate models are retuned to hindcast 1970-2000 or 1975-2005 as having .2 degree C/K less warming than actually happened, because ~.2 degree C/K of warming during that period was from multidecadal oscillations which these models don’t consider, their forecasting would improve so greatly that most people who are employed to work from their alarmism, or from their being incorrectly alarmist, would become unemployed.
There’s also the predictive uncertainty stemming from an incomplete and/or wrong physical theory.
Nick Stokes October 16, 2019 at 12:15 pm
“If the IR opacity of air remains about the same and the temperature increases 10°C, then the Earth will lose heat faster than the Sun can supply it. So you are not uncertain about whether that situation could happen”
–
Nick Stokes, so you assert. But you give no rational argument. You would be better to state that you are certain that this situation could not happen. The 10 C increase is physically possible if the sun were to produce more heat in the first place, in which case the earth still has to lose heat at the same rate the sun is putting it in. Or another source of heat for the 10 C increase is magically present.
An interested reader would want to see that magic set out, step by step. I don’t believe that you can do that but if you can, let’s see it.
“The earth will lose heat faster than the sun can supply it.”
To see how wrong this is, suppose the temperature was 30 degrees higher. By your logic the planet would lose heat so much faster than the sun could put it in that it would turn into a snowball earth in weeks. Any sensible maths would say the poor sun would not have a chance to warm things up.
“Except you don’t know. Well, it might. Or not. That is a pathetic excuse for a wrong thermodynamic equation. Gosh, this stuff is elementary.
–
some of my phrases are undoubtedly a bit harsh.
“That’s what Nick says he does not understand.”
No, I understand it very well. You say “Gird yourself because it’s really complicated.”, but in fact, those terms are all the same. And so what your analysis boils down to is, as I said:
0.42 (dimensionless)*33K *±4 Wm⁻² /(33.3 Wm⁻²) * sqrt(n years)
and people here can sort out dimensions. They come to K*sqrt(year).
“Not one of them raised so nonsensical an objection as yours.”
It’s actually one of Roy’s objections (“it will produce wildly different results depending upon the length of the assumed time step”). The units are K*sqrt(time). You have taken unit of time as year. If you take it as month, you get numbers sqrt(12) larger. But in any case they are not units of K, and can’t be treated as uncertainty of something in K.
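Taking the expression quoted just above at face value (this is only the arithmetic of that expression, leaving aside the dispute over its units):

\[ u(n) \;=\; 0.42 \times 33\,\mathrm{K} \times \frac{4\ \mathrm{W\,m^{-2}}}{33.3\ \mathrm{W\,m^{-2}}} \times \sqrt{n} \;\approx\; 1.7\,\mathrm{K}\,\sqrt{n}, \]

which gives roughly (+/-)17 K at n = 100 if n is counted in years, and a sqrt(12)-larger number if counted in months.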
Dorothy says: “… I see that the calculation of uncertainty is complex or perhaps intractable. Having criticized Pat Frank’s attempt to estimate uncertainty of GCMs, it would be very helpful if Nick Stokes could provide even a ‘back of envelope’ estimation of (any) GCM uncertainty. I assume this would clarify the uncertainty in GCMs, having Pat Frank and Nick Stokes estimates to compare.”
No such comparison is useful unless a common, rigorous definition for the ‘uncertainty of a climate model’ has been adopted; and a common process for quantifying and estimating that uncertainty has been used in producing the comparison.
One would suspect that a tailored definition of uncertainty is needed for assessing the predictive abilities of a climate model. That definition can then be used as the starting point for systematically evaluating the level of uncertainty associated with any specific climate model, and also with any specific run of that climate model.
A common definition of ‘uncertainty’ as applied to the climate models might include:
1) — A summary description of the term ‘uncertainty’ as it is applied to the climate models, a.k.a the GCMs.
2) — The component and sub-component elements of climate model uncertainty; for example, cloud forcings, water vapor feedback, computational constraints, etc.
3) — The units of uncertainty measurement, their meanings, their dimensions, and their proper application to each component and sub-component element.
4) — The total uncertainty of a specific climate model run versus its component and sub-component uncertainties.
5) — Guidelines for the use of common scientific terms and measurement units in defining the units of uncertainty measurement.
6) — Guidelines for the use of uncertainty measurement units in quantifying sub-component, component, and total uncertainty.
7) — Guidelines for describing and integrating the sub-component, component and total uncertainty for a specific run of a GCM.
8) — Guidelines for comparing the sub-component, component and total uncertainties among multiple runs of a specific GCM.
9) — Guidelines for comparing the component and total uncertainties among different GCMs.
The question naturally arises, would the climate science community ever consider adopting a rigorous, systematized approach for quantifying and estimating the uncertainty of their climate models?
I leave that question to the WUWT readership to comment upon. However, it is 100% certain many of you will have definite opinions concerning the question.
Thanks Beta Blocker,
One can only imagine why such a rigid, science-based exercise has not been done.
Based only on my reading of others and not on hands-on experience with modelling GCMs, I have to wonder how to handle what others describe as a procedure. That procedure is said to be the subjective adoption or rejection of ‘runs’ that do or do not meet subjective criteria. One cannot calculate overall uncertainty for GCMs without including all runs, including those subjectively rejected. Is that indeed done? Geoff S
Geoff, several people familiar with the application of computational fluid dynamics (CFD) to other areas of engineering and science have offered their commentaries on the general topic of uncertainty in computational modeling.
But none has yet described a rigorous definition for the concept of uncertainty as it is being applied to the CFD-based models used in their own scientific or engineering disciplines.
For these various CFD-driven models, is there a standard process for defining, quantifying, and estimating uncertainty for their particular science/engineering applications?
In what ways is the concept of uncertainty, as applied to the outputs of these CFD-driven models, used to inform technical and engineering decision making?
Pat still does not address the counter argument made a few times: that uncertainty might NOT track that way through these particular GCM calculations. Or, better yet, it’s more like an intended property of the Navier–Stokes equations to deal with exactly that aspect. Many CFD simulations would not even be possible if they couldn’t converge to a solution which actually retains some significance to the topic at hand.
This can easily be verified by anyone familiar with CFD inner workings. If Pat’s conclusion were to be followed, we should stop using the outcome of many other CFD solutions and thermodynamic testing of fluids, gases, fuel combustion, aerodynamics and so on. They often have similar uncertainties in the initial model.
Nick Stokes has provided some introduction earlier on this site https://wattsupwiththat.com/2019/09/16/how-error-propagation-works-with-differential-equations-and-gcms/
I still fail to see how Nick’s earlier article relates to Pat’s paper. Nick addresses error propagation in results and not uncertainty. His article addressed the following:
“So first I should say what error means here. It is just a discrepancy between a number that arises in the calculation, and what you believe is the true number.”
Pat never claimed the simulations run off the rails in their results, only in certainty.
It’s not that we think the modeled results are “wrong”; only that they are too uncertain to have much value.
Pat never claimed the simulations run off the rails in their results, only in certainty.
Methinks that’s confusing for many. If a simulation (not necessarily a GCM) gives consistently good results, confirmed by observations, then who cares if the actual uncertainty is potentially massive? Or in other words: if the uncertainty does not manifest itself in the actual error, who cares whether it is large or small?
Except, in the case of GCMs, the results are not good and are not confirmed by observations (the models run hot, with a gap between results and observations that gets wider every year as time marches forward), so yes, uncertainty is manifesting itself by dint of how bad the models are at modeling reality. The fact that the uncertainty is so big merely illustrates what is already widely known: the models are unfit for purpose.
You don’t understand Error Propagation either.
Nick added nothing. He was talking about how chaos can lead to explosive situations which would give nonsensical results in a model. But that’s not the issue.
That is completely different from the epistemological considerations of what you know based on uncertainty of measurements, and how that propagates through an equation with each step.
Anyone can force a model to converge on a falsely precise figure. That doesn’t mean it’s useful.
John Dowser, “Pat still does not address the counter argument made a few times: that uncertainty might NOT track that way through these particular GCM calculations.”
I have answered that question a zillion times, John, including in the thread comment here.
The GCMs have a linear output. Their output can be compared with observations. This comparison is sufficient to estimate the accuracy of their simulations.
Linearity of output justifies linear propagation of model calibration error through that output. This justification appears in both the abstract and the body of my paper.
Pat,
You write:-
“Linearity of output justifies linear propagation of model calibration error through that output.”
I offered four examples of uncertainty propagation in 4 linear systems here: https://wattsupwiththat.com/2019/10/04/models-feedbacks-and-propagation-of-error/#comment-2816455
One of those examples highlights the conceptual error you made in trying to apply your uncertainty formula to correlated variables, despite the fact that your formula does not recognise covariance. As I have stated before, your equation S10.1 is a mis-statement of the reference from which you draw it; it has limited validity only for the sum of strictly independent variables and no validity when applied to the sum of correlated variables.
There is no one-suit-fits-all recipe for calculation of uncertainty propagation even in a linear system.
“One of those examples highlights the conceptual error you made in trying to apply your uncertainty formula to correlated variables, despite the fact that your formula does not recognise covariance.”
What are you talking about? What correlated variables? How do you have covariance with an uncertainty interval that does not even have a probability function?
“As I have stated before, your equation S10.1 is a mis-statement of the reference from which you draw it; it has limited validity only for the sum of strictly independent variables and no validity when applied to the sum of correlated variables.”
Uncertainty *is* a strictly independent *value*. It is not a variable. The uncertainty at step “n” is not a variable or a probability function. Therefore it can have no correlation to any of the other variables. Correlation implies that if you know X then you can deduce Y through a linear relationship. There is no linear relationship between uncertainty and any other variable. You can’t determine total flux by knowing only the uncertainty in the value of the total flux. Nor can you determine the uncertainty by knowing the total value of the flux. There is no linear relationship between the two so they simply cannot be correlated.
There is no correlation between the uncertainty and any random variable thus uncertainty is independent and the uncertainty at each step is independent of the uncertainty of the previous step thus adding them using root-sum-square is legitimate. The fact that the uncertainty in each step is equal to all other steps in Pat’s analysis does not determine independence, it just makes the calculation simpler. Even if some steps were to be different for some reason they would still remain independent values not correlated to any random variable and would contribute to the root-sum-square.
kribaez, “One of those examples highlights the conceptual error you made in trying to apply your uncertainty formula to correlated variables, despite the fact that your formula does not recognise covariance.”
What correlated variables? Tim Gorman has it right.
The uncertainty statistic is a constant. It is not correlated to anything. It does not covary. There are no correlated variables.
Apart from the GUM, see Sections 4 and 5, and Appendix A, in Taylor & Kuyatt (1994), Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, NIST Technical Note 1297. Washington, DC: National Institute of Standards and Technology, here (pdf).
I don’t see any strength to your objection.
In your linked comment, you wrote, that, “for adequacy, any emulator here requires at the very least the ability to distinguish between an error in a flux component, an error in net flux and an error in forcing.”
No, it does not. Eqn. 1 is an output emulator, not an error emulator.
You wrote, “You cannot assess the uncertainty arising from a variable Z if your equation or mapping function contains no reference to variable Z.”
The reference to variable Z derives independently from Lauer and Hamilton. It conditions the emulation in eqns. 5.
You wrote, “any of these models present a demonstrably more credible representation of AOGCM aggregate response than does Pat’s model.”
But eqn. 1 successfully emulates the air temperature projection of any advanced GCM. How is that not credible? And what could be more credible than that?
Maybe by “credible” you mean attached to physics. In that case, you’d be right, but irrelevant.
But if by “credible” you mean comports with AOGCM projections, then your dismissal of eqn. 1 is cavalier.
You wrote, “Importantly, they all lead to a substantial calculation of uncertainty in temperature projection, but one which is different in form and substance from Pat’s.”
Right. Their uncertainty would be a matter of precision. Mine is a matter of accuracy. In the physical sciences, mine is the more important.
You wrote, “I gave an example in the previous thread of a simple system given by
Y = bX
First problem: X carries an uncertainty of (+/-)2 …”
But it does not. Both b and X are givens and are known exactly. The conditioning uncertainty is external and comes in independently. As a calibration error, it enters as a measure of the resolution of the calculation.
A simple example. I have used microliter syringes in titrations. They read to 0.1 microliter. Suppose I calibrate the syringe before use, by weighing syringed water using a microbalance.
I find that the calibration error averages (+/-)0.1 microliter, even though I was really visually careful to put the tip of the syringe plunger right on a barrel measurement line.
If I use that syringe to make multiple additions, then each one is of unknown volume to (+/-)0.1 microliter. After ten additions, my uncertainty in total volume added is (+/-)0.3 microliters.
When I calculate up the results, into that calculation comes an uncertainty in reagent quantity represented by that (+/-)0.3 microliters.
That uncertainty is external and is brought into, and conditions the result of, whatever thermodynamic calculation I may be making. The (+/-)0.3 microliters is a measure of (part of) the resolution of the experiment.
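Spelled out, the ten-addition figure is just the quadrature sum of ten equal calibration uncertainties:

$$u_{\mathrm{total}} = \sqrt{\sum_{i=1}^{10} (0.1\,\mu\mathrm{L})^2} = 0.1\sqrt{10}\,\mu\mathrm{L} \approx 0.32\,\mu\mathrm{L} \approx \pm 0.3\,\mu\mathrm{L}$$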
Look at eqns. 5. They operate the same way.
Using your linear model, when the calibration uncertainty is added in, it becomes Y(+/-)y = bX(+/-)bz, where the dimensions of uncertainty (+/-)z are converted to that of y.
The independent (+/-)z uncertainty does not change in time or space. It is not dependent on the value of ‘b’ or ‘X.’ It is a measure of the resolution of the calculation.
Your enumerated third and fourth problems there stem from your first mistake of supposing the error is in X or b, and changes with their measurement. None of that is appropriate to the emulation or to the uncertainty analysis.
You wrote, “Pat’s response here is tellingly incorrect; indeed he has set up a paradox whereby the 1-sigma uncertainty in X is (+/-2) and also (+/-3.5) simultaneously.”
I was just following through the logic of your example, kribaez. Nothing more. Note my opening phrase: “The way you wrote it…”
Here’s an interesting thing. You wrote, there, “Mechanistically, each Xi value is calculated here as:- Xi = Xi-1 + ΔXi = Xi-1 + (Xi – Xi-1) so the actual realised error in Xi-1 is eliminated leaving only the uncertainty in the Xi measurement.”
But you also wrote that, “… [X] is always measured to an accuracy of (+/-)2.”
So, your (Xi – Xi-1) should really be (Xi(+/-)2 – Xi-1(+/-)2). In a difference, the uncertainties add in quadrature. So, the uncertainty in your ΔXi is sqrt(2^2 + 2^2) = (+/-)2.8. (+/-)uncertainties do not subtract away.
Your method of introducing ΔXi as a difference has caused the uncertainty to increase, not to disappear. Remove the use of a difference and the uncertainty does not increase, and your lag-1 autocorrelation disappears.
The ‘bX’ in the emulation equation 1 is just 0.42 x 33K and ΔFi. The first is a constant and the second is the standard forcings. Both amount to givens.
The rest of your analysis there is founded upon your basic misconception that the uncertainty resides in X or in b. It does not.
The uncertainty is independent of either and both. It is introduced from an external origin, it is a measure of model resolution, it conditions the result, and is a constant (+/-)value. It does not covary. It is not correlated with anything.
In fact, there is no necessary uncertainty in the uncertainty, either. It is taken as a given constant.
” It is not correlated to anything. It does not covary. There are no correlated variables”
“National Institute of Standards and Technology, here (pdf).”
From your NIST link, Eq A-3, which is the source of your Eq 3. The Guidelines say it is “conveniently referred to as the law of propagation of uncertainty”. And like your Eq (3), it has covariance terms. They say:
“u(xᵢ) is the standard uncertainty associated with the input estimate xᵢ ; and u(xᵢ , xₖ) is the estimated covariance associated with xᵢ and xₖ .”
Not only do you give no reason for assuming the covariances are zero, you deny that they could even exist. But there it is.
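For readers without the pdf to hand, Eq. A-3 of TN 1297 (the law of propagation of uncertainty) has the standard GUM form, restated here for reference (check the source for the exact notation):

$$u_c^2(y) \;=\; \sum_{i=1}^{N} \left(\frac{\partial f}{\partial x_i}\right)^{\!2} u^2(x_i) \;+\; 2\sum_{i=1}^{N-1}\sum_{k=i+1}^{N} \frac{\partial f}{\partial x_i}\,\frac{\partial f}{\partial x_k}\, u(x_i, x_k)$$

The covariance terms are the second sum; they vanish when the input estimates are uncorrelated.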
““u(xᵢ) is the standard uncertainty associated with the input estimate xᵢ ; and u(xᵢ , xₖ) is the estimated covariance associated with xᵢ and xₖ .””
Read this sentence again, this time for meaning.
“covariance is a measure of the joint variability of two random variables.”
“estimated covariance associated with xᵢ and xₖ”
Do you see anything about u being a random variable that has covariance with xᵢ or xₖ?
Covariance is a VALUE, it is not a random variable. How do you have covariance between a VALUE and a random variable?
Nick, “Not only do you give no reason for assuming the covariances are zero, ….”
I gave the definitive reason here.
And here.
And you had agreed, here.
Nick, “…you deny that they could even exist. But there it is.”
A very thin fabrication. The real Nick, for all to see.
You missed your calling. You’ve a real Vyschinskyesque talent.
John,
The counterargument to Pat is not that the uncertainty might not track that way through the GCMs, but that it most definitely DOES NOT track that way. It is ironic that Pat quotes Richard Lindzen, who IMO gets it exactly right.
“But the models used to predict the atmosphere’s response to this perturbation have errors on the order of ten percent in their representation of the energy balance, and these errors involve, among other things, the feedbacks which are crucial to the resulting calculations. ”
I have tried several times to make this point to Pat in previous threads, but he rejects the argument. For example, I noted here https://wattsupwiththat.com/2019/10/04/models-feedbacks-and-propagation-of-error/#comment-2819759
“… However, since it is impractical to run full MC experiments on the AOGCMs, there really is little choice other than to make use of high-level emulators to test uncertainty propagation arising from uncertainty in data and parameter inputs. Such tests support the existence of large uncertainty arising from cloud parameterisation, but do not support the shape of uncertainty propagation suggested by Pat. An error in a flux component (like for example LWCF) translates into a negligible effect on the net flux after the 500 year spin-up and a bounded error on the absolute temperature. It leaves the model with an incorrect internal climate state, no question. During subsequent forced temperature projections, the incorrect climate state translates into an error in the flux feedback from clouds. Multiple sampling of the initial error in cloud definition allows the uncertainty in temperature projection to then be mapped. The error propagation is then via a term R'(t) x ΔT(t), where R'(t) is the rate of change of flux due to cloud changes with respect to TEMPERATURE. Although temperature may be changing with time, this uncertainty propagation mechanism is not at all the same as your uncertainty mechanism above, or Pat’s, which are both propagated with TIME independent of temperature change.”
““But the models used to predict the atmosphere’s response to this perturbation have errors on the order of ten percent in their representation of the energy balance”
Again, error is not uncertainty. How many times must it be repeated for this to be accepted?
“An error in a flux component (like for example LWCF) translates into a negligible effect on the net flux after the 500 year spin-up and a bounded error on the absolute temperature.”
And, once again, you are trying to equate error, in this case “bounded error” with uncertainty. An uncertainty in the flux cannot simply be cancelled out. No amount of “spin-up” can cancel the uncertainty.
“Multiple sampling of the initial error in cloud definition allows the uncertainty in temperature projection to then be mapped. The error propagation is then via a term R'(t) x ΔT(t), where R'(t) is the rate of change of flux due to cloud changes with respect to TEMPERATURE. ”
And, once again, you conflate error with uncertainty. You are still arguing that you can reduce the error but that doesn’t reduce the uncertainty in that error. In essence you are saying you can make the output more accurate by tuning the model using an R'(t) factor. If the model were correct to begin with you wouldn’t need to tune it. The fact that you have to is just proof there is uncertainty in the model! And since you can’t compare the model’s future outputs with future reality you simply can’t *know* that the R'(t) you select will match what happens in the future. You can “assume” your R'(t) tuning will match but here comes that “uncertainty factor” again!
“Again, error is not uncertainty. How many times must it be repeated for this to be accepted?”
Again, sampled error from the input space yields the uncertainty spread in the output space. How many times must it be repeated for this to be accepted? There is no magic uncertainty which is not rendered visible by MC sampling. This is the recommended method in almost every reference quoted by Pat, except where the problem is simple enough to allow analytic quadrature.
Sampling of cloud error across a wide range yields no uncertainty in net flux over a 500 year period. The mathematics (not tuning) say that the net flux must go to zero, and it always will. Pat says that the net flux is equal to zero but with a massive uncertainty which is invisible even to full sampling of the cloud error distribution. This is nonsense.
“Again, sampled error from the input space yields the uncertainty spread in the output space.”
Uncertainty is not a random variable. It does not specify a probability function. Varying the input only tells you how the output responds, it doesn’t tell you anything about the uncertainty of the input or the output. If the input has a +/- uncertainty interval for an input value of A then it will still have an uncertainty interval for an input value of B. You have to act on the uncertainty of the input variable in order to lower it.
“There is no magic uncertainty which is not rendered visible by MC sampling.”
Again, the MC analysis can only tell you the sensitivity of the model to changes in the input. If I put in A and get out B, and then put in C and get out D, then each of the outputs, B and D, *still* has an uncertainty associated with it. In a deterministic system, where an input A always gives an output B, there simply isn’t any way to determine the uncertainty by varying the input. Inputs A and C will still have an uncertainty interval associated with them. You can’t avoid it. That also means that any output will have an uncertainty. You can’t avoid it by doing multiple runs.
Now, if on each run of the model with an input A you get different answers, then you can estimate the uncertainty interval with a large number of runs. But if the climate models give different answers each time they are run, then just how good are they as models?
“This is the recommended method in almost every reference quoted by Pat, except where the problem is simple enough to allow analytic quadrature.”
Actually it isn’t. If I input 2 +/- 1 to the model and then input 3 +/- 1, just exactly what do you think the two runs tell you, i.e., an MC sample of 2? The input of 2 can vary within the uncertainty interval between 1 and 3. The input of 3 can vary from 2 to 4. Tell me exactly how this small MC run can tell you anything about the uncertainty of the output? The problem with your assertion is that you think you can have inputs with +/- 0.0 uncertainty and thus get outputs with +/- 0.0 uncertainty. Just how are you going to accomplish that?
“Pat says that the net flux is equal to zero but with a massive uncertainty which is invisible even to full sampling of the cloud error distribution. This is nonsense.”
Nope. Pat is correct. I have two different cars moving on the highway. Their net acceleration is zero, i.e., just like the net flux. Do you know the velocity of each for certain? Net flux can be zero but still have an uncertain value for the flux itself. And since it is the total flux input that determines temperature, not the net flux, what does the net flux actually tell you? If the sun gets hotter we will have more total flux coming in; we might still have a net flux of zero, but that total flux in *is* going to have an impact on the actual temperature. If it didn’t, then the sun could go dark and the earth would still maintain the same temperature.
You can’t wish away uncertainty. It just isn’t possible.
kribaez, “Again, sampled error from the input space yields the uncertainty spread in the output space. How many times must it be repeated for this to be accepted?”
Precision, kribaez. That’s all you’re offering. It’s scientifically meaningless.
For your metric to be a measure of physical accuracy, you’d have to know independently that your model is physically complete and correct, and capable of physically accurate predictions without any by-hand tuning.
Only then would a parameter uncertainty spread of predictive results imply predictive accuracy.
But your climate models are not known to be complete, or correct, or capable of accurate predictions. And they need by-hand tuning to reproduce target observations.
Your metric gives us no more than the output coherence of models forced into calibration similarity. That has nothing whatever to do with model predictive accuracy.
Your models are unable to resolve the physical response of the climate to the perturbation of forcing from CO2 emissions.
It matters not that they show a certain result or that they all show a restricted range of results. That behavior is a forced outcome. It’s not an indication of knowledge.
Those outcomes are not predictions. They are mere model indicators. Like Figure 4 above.
kribaez, you wrote, “… However, since it is impractical to run full MC experiments on the AOGCMs, there really is little choice other than to make use of high-level emulators to test uncertainty propagation arising from uncertainty in data and parameter inputs.”
The method you’re describing is not uncertainty propagation. It is merely variation about an ensemble mean. It’s not even error, because no one has any idea where the correct value lies.
“Such tests support the existence of large uncertainty arising from cloud parameterisation, but do not support the shape of uncertainty propagation suggested by Pat.”
That’s no surprise, though, is it? After all, variation about an ensemble mean isn’t an accuracy metric.
“An error in a flux component (like for example LWCF) translates into a negligible effect on the net flux after the 500 year spin-up and a bounded error on the absolute temperature.”
Irrelevant. The net flux and the absolute temperature have large implicit uncertainties because their physical derivation is wrong.
“ It leaves the model with an incorrect internal climate state, no question. ”
With this, kribaez, you admit the air temperature is wrong, the cloud fraction is wrong, etc., and that they get wrongly propagated through subsequent simulation steps.
How can you possibly not see that this process of building error upon error produces an increasing uncertainty in the result?
“Multiple sampling of the initial error in cloud definition allows the uncertainty in temperature projection to then be mapped.”
You have no idea of the subsequent cloud error in a futures projection. You have no idea how the cloud fraction responds to the forcing from CO2 emissions. You have no idea of the correct temperature response.
Projection uncertainty can only increase with every projection step, because you have no idea how the simulated trajectory maps against the physically correct trajectory of the future climate.
“ … where R'(t) is the rate of change of flux due to cloud changes with respect to TEMPERATURE.”
But kribaez, you don’t know the rate of change of flux due to cloud changes with respect to temperature. That metric cannot be measured and cannot be modeled.
Clouds can be measured to about (+/-)10% cloud fraction. Models can simulate cloud fraction to an average (+/-)12% uncertainty. The changes you’re talking about cannot be resolved.
Models can give some numbers. But your metrics, as determined from models, are merely false precision. They’re physically meaningless.
The LWCF uncertainty metric is derived from error in simulated cloud fraction across 20 calibration years. Those years included seasonal temperature changes. The simulation error thus includes temperature cloud response error. It includes the whole gemisch of cloud error sources. The initial calibration metric — annual average error in simulated cloud fraction — produced the annual average (+/-)4W/m^2 LWCF calibration error.
That error likewise represents the whole gemisch of simulated cloud response errors. It shows that your models plainly cannot resolve the impact of CO2 forcing on clouds. The cloud response is far below the lowest level of model resolution.
In real science, every single model run would have to be conditioned by the uncertainty stemming from that error. That means every run of an ensemble would have very large uncertainty bounds around the projection.
The ensemble average would have the rms of all those uncertainties. And if one calculates the (run minus mean) variability, it’s the rss of the uncertainties in the run and the mean. The uncertainty in the difference is necessarily larger than that of either the run or the mean.
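In symbols, the (run minus mean) point is just the quadrature rule for a difference:

$$u(\mathrm{run} - \mathrm{mean}) \;=\; \sqrt{u_{\mathrm{run}}^2 + u_{\mathrm{mean}}^2} \;\ge\; \max\!\left(u_{\mathrm{run}},\, u_{\mathrm{mean}}\right)$$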
Your methods are not the practice of science at all. They’re the practice of epidemiology, where predictive pdfs have no accuracy meaning. They are merely anticipatory.
The accuracy of such a model pdf is only known after the arrival of the physical event. One may find that the model pdf did not capture the position of the real event at all. It was inaccurate, no matter that it was precise.
Dr. Frank,
The text of the post you’re responding to here refers back to a post on an earlier thread, the remaining part of which is enclosed in brackets below:
[“…but shouldn’t the uncertainty of clouds, as you indicate, at least be inherent in how we view the model’s results?”
Yes, it should. I am already on record as saying that I agree with Pat that the GCM results are unreliable, and unfit for informing decision-making, and I also agree with him that cloud uncertainty alone presents a sufficiently large uncertainty in temperature projections to discard estimates of climate sensitivities derived from the GCMs. Unfortunately, I profoundly disagree with the methodology he is proposing to arrive at that conclusion.]
I know that for the sake of science, agreement on methodology / process is important, but for the time being, am grateful that your work, which I agree with fully, is exposing the weakness of the GCM results. Thank you. – F.
Frank, “Unfortunately, I profoundly disagree with the methodology he is proposing to arrive at that conclusion.”
I don’t see why, Frank. It’s a straightforward calibration-error/predictive-uncertainty analysis.
John Dowser, “Pat still does not address the counter argument made a few times: that uncertainty might NOT track that way through these particular GCM calculations.”
I have addressed that problem, John.
Air temperatures emerge from models as linearly dependent on fractional forcing input. What sort of loop-de-loops happen inside of models is irrelevant, once the linearity of output is demonstrated.
That linear relation between input forcing and output air temperature necessitates a linear relation between input uncertainty in forcing and output uncertainty in air temperature.
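As a minimal sketch of that scaling (not the paper’s code; the 0.42 x 33 K factor is the one quoted above, while F0, the forcing increments, and the per-step +/-4 W/m^2 calibration uncertainty are illustrative placeholders here):

```python
# Sketch only: a linear emulator of projected temperature change, plus the
# corresponding step-wise uncertainty growth. F0, the forcing increments and
# the per-step uncertainty are placeholders, not the paper's calibrated values.
import math

F0     = 33.3           # assumed total GHG forcing, W/m^2 (placeholder)
u_lwcf = 4.0            # per-step LWCF calibration uncertainty, +/- W/m^2
dF     = [0.04] * 100   # illustrative annual forcing increments, W/m^2

dT     = 0.42 * 33.0 * sum(dF) / F0    # emulated temperature change, K
u_step = 0.42 * 33.0 * u_lwcf / F0     # per-step uncertainty, K
u_T    = u_step * math.sqrt(len(dF))   # root-sum-square over the steps

print(f"emulated dT ~ {dT:.2f} K; uncertainty ~ +/-{u_T:.1f} K after {len(dF)} steps")
```

The linearity is the whole point: whatever the model does internally, a linear map from forcing to temperature carries a linear map from forcing uncertainty to temperature uncertainty.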
I pointed that out in the post you linked, here
Tim Gorman also pointed that out in the comments under the post you linked. Also here and here.
Nick tries to make it look complicated. It’s not.
Hey John,
This can be easily verified by anyone familiar with the inner workings of CFD. If Pat’s conclusion were to be followed, we should stop using the outcomes of many other CFD solutions and of thermodynamic testing of fluids, gases, fuel combustion, aerodynamics and so on. They often have similar uncertainties in the initial model.
I asked a similar question under one of the previous posts: if CFD simulations run over millions of volume cells and millions of steps, even a small initial uncertainty associated with each step/cell will quickly render such a simulation useless (because it accumulates with each step). The answer I received from people more familiar with the subject was something like:
1. CFD algorithms undergo robust experimental verification and validation procedures
2. Initial conditions are often very well defined.
3. Uncertainties are far smaller than in climate models
4. Still, it is not uncommon that such CFD simulations go haywire and produce absurd results.
A few years ago I was in contact with a chap who was managing a large aerodynamics CFD simulation project (an aircraft in the landing configuration). Even with a very precise model and mesh, and using a validated industry-standard CFD package, it still produced a detectable alpha shift compared with experimental data; that shift had to be corrected in the model. He was told that such anomalies between the results of CFD and wind tunnels are not uncommon.
I know of 1 gcm that went haywire once and disagreed with observations.
input file error
And what was the observation in this context? According to Dr Frank, models are constantly tuned to match past observations. But that does not guarantee at all that future predictions are accurate. A good test, I reckon, would be running a GCM for a local region with a good-quality record of air temperatures for the last 50 or 80 years and then – without adjusting the model to the readings – checking how the model behaves compared with the actual records.
Steven Mosher, in a comment I’ve posted on this thread this morning, I ask you once again — as I have done several times before — to defend your assertion that credible scientific evidence exists for claiming that +6C of warming is possible.
https://wattsupwiththat.com/2019/10/15/why-roy-spencers-criticism-is-wrong/#comment-2825172
The uncertainty of that +6C prediction is central to its credibility. As are the uncertainties of predictions of +2C, +3C, and +4C.
And yet as far as I am aware, no systematic evaluation of the uncertainties of GCM model outputs is being done. That topic is discussed in the comment link I posted above.
Dr. Roy provides a useful devil’s advocate position, one that elicits the counter-argument list in this post.
But from a decision-theoretic POV, he fails. As John McCain (RIP) said: the worst case is that we leave a cleaner planet for our children.
IMO we are way past the quibbles raised by the 1%ers like Dr. Roy. I, for one, am not willing to fight to avoid a few hundred dollars per year in transfer payments just to roll the increasingly-loaded dice so that my grandchildren will have a livable planet in 30 years, when it is increasingly obvious (p>0.9) that we are On The Eve Of Destruction.
So, from now on, please let’s have no one over the age of 50 advocating for a very low-probability future just so that fuel companies can make a profit.
“so that my grandchildren will have a livable planet in 30 years, when it is increasingly obvious (p>0.9) that we are On The Eve Of Destruction.”
Tell your grandchildren to move to Kansas/Nebraska/Iowa, the epicenter of a global warming hole. No Eve of Destruction going on here.
We didn’t have a single day of >100degF here this summer!
They changed me from Zone 4 to Zone 5 partly because of global warming.
Now all the garden centers sell Zone 5 plants and they die almost every year. I keep telling people to stop buying Zone 5 plants.
Record cold top killed all my Zone 4 grapevines this year (isn’t suppose to happen in Zone 4).
Weather isn’t climate… until it’s hot.
It’s difficult to argue that we are on the eve of destruction when atmospheric modeled warming is lower than surface modeled warming (observed). This observation alone pretty much defeats CO2 forcing.
The irony, of course, is that the failure to correctly control for the surface urban heat effect in the observations likely drives the divergence, defeating their own argument.
Based on the squirrel activity this fall we are going to have a cold winter. They are hauling every single nut they can find and burying them out in the back fence row. I haven’t seen them this active for literally a decade. I’ve already stocked up on suet and corn to feed all the critters this winter. Hope I won’t need it!
Going to be a hoot to see all the claims that this December through March will be the warmest on record!
chris
You stated your personal opinion: “… when it is increasingly obvious (p>0.9) that we are On The Eve Of Destruction.” However, you provided no evidence to support your opinion.
You also remarked, “… just so that fuel companies can make a profit.” You are demonstrating a narrow, biased view of technology and economics, again without any evidence to support your opinion. That is the crux of the problem. People like you feel certain that you have special insights on reality, and feel no obligation to provide proof. Yet, you would have everyone pay increased taxes, and want to silence those who see things differently. I find that to be very arrogant.
Chris, “when it is increasingly obvious (p>0.9) that we are On The Eve Of Destruction.”
That’s not obvious at all. Your thought just exemplifies modern millennialist madness. It’s groundless.
Steven Mosher has said in comments made on other climate science blogs that +6C of warming is possible.
He has also said in a comment made on WUWT that if we are to objectively determine a GCM model’s uncertainty, what the model does internally must be directly examined; i.e., just using emulations of that model’s outputs isn’t good enough for evaluating a model’s uncertainty.
—————————————————-
Steven Mosher, October 18, 2019 at 3:11 am
“Except that we know GCMs invariably project air temperatures as linear extrapolations of fractional GHG forcing.”
Nope they dont’
Look at the code.
now you CAN fit a linear model to the OUTPUT. we did that years ago
with Lucia “lumpy model”
But it is not what the models do internally
—————————————————-
It is easy for many of us to believe that a prediction of +6C of warming is more uncertain than is a prediction of say, +3C of warming. But how can that opinion be offered objectively?
If one is intent on evaluating the total uncertainty of a specific GCM model run, or of a collection of GCM model runs — doing so objectively in quantitative terms according to Steven Mosher’s requirement — then one must look closely at all the parameter assumptions and at all the computational internals. Each and every model component and sub-component. All of them. Without exception.
Quantifying and estimating the uncertainties of each component and sub-component of a GCM using a systematic approach, and then integrating those uncertainties into an overall total evaluation, would be an exceedingly difficult task.
In the comment referenced below, I offer a framework for how a common conceptual approach for evaluating these uncertainties might be defined, developed, and systematized.
https://wattsupwiththat.com/2019/10/15/why-roy-spencers-criticism-is-wrong/#comment-2824001
As far as I am aware — please correct me if I am wrong — none of the GCM modelers formally quantify and estimate the uncertainties of each component and sub-component of their models using a systematic approach.
Nor do the GCM modelers make any attempt at formally integrating and documenting those sub-component uncertainties into an overall evaluation of a model run’s total uncertainty, one that is supported by a disciplined and documented analysis.
Steven Mosher, I ask you once again, as I have done several times before, to defend your assertion that credible scientific evidence exists for claiming that +6C of warming is possible.
The uncertainties associated with that +6C claim, stated in a detailed and objective evaluation, one supported by a systematic analysis, would be central to its credibility as a scientific prediction.
As would be the case for predictions of + 2C, + 3C, and + 4C. It’s all the same thing.
“He has also said in a comment made on WUWT that if we are to objectively determine a GCM model’s uncertainty, what the model does internally must be directly examined; i.e., just using emulations of that model’s outputs isn’t good enough for evaluating a model’s uncertainty”
Steven Mosher has obviously never been handed a black box by an engineering professor and told to figure out the transfer function between the input and the output. What happens inside the black box is totally irrelevant. You simply don’t know; it’s all hidden inside the box.
That does not mean that you can’t determine the transfer function between the input and the output. And if it turns out that the transfer function is a simple linear one then, again, who cares what is inside the box?
Where Mosher gets it really wrong is that what is inside the box isn’t necessarily what determines the uncertainty. The uncertainty in the input gets included in the output. If the uncertainty of the input frequency is +/- 1 hz for example, there is no way the output of the black box can have an uncertainty less than +/- 1 hz. The uncertainty can be higher, of course, but it can never be less. If what is input to that black box is a cloud function and that cloud function is uncertain then that uncertainty gets reflected into the uncertainty of the output. And not one single thing needs to be known about what is inside the black box.
This is why it is so important for the climate modelers to at least identify the uncertainties in the inputs they use to their models. And then to recognize that uncertainties never cancel, they are not random variables that can be hand-waved away using the central limit theorem.
I think it’s a bit more complicated.
It seems like what they are really saying is that the input frequency oscillates at +/- 1 hz uncertainty over a 20 year period; meanwhile the black box can calculate monthly feedback forcing based on the 20 year oscillation while maintaining a +/- 1 hz uncertainty over the 20 year period. Maybe I misunderstand Roy, but it seems like that is what he and Nick are saying.
I know that’s a horrible over-simplification of an analogy, but am I way off base?
You are describing a steady-state situation where the input and output never change, i.e. a +/- 1 hz uncertainty over a 20-year period. Kind of like the earth being covered in the same cloud cover over the same geographical area for the entire 20 years.
Think about the situation if the output of the first black box gets fed into another black box. The output of the first black box has an uncertain output because of the uncertainty in the input of the first black box. Thus the second black box compounds the uncertainty when it performs its transfer function.
Let’s add it up. The input to the first black box can be 10 volts at 10 hz with an uncertainty of +/- 1 hz. The transfer function for the box appears to be a x2 frequency multiplier. So the output becomes 10 volts at 20 hz +/- 2 hz (9 hz x 2 = 18, 11 hz x 2 = 22, a span of +/- 2 hz). This then feeds into a 3rd black box that has the same transfer function. Its output will be 10 volts at 40 hz +/- 4 hz (44 hz – 36 hz). The uncertainty compounds with each iteration. This is the case where the uncertainty is not independent, so it is a straight additive.
Now, if it is the black box itself that is causing the uncertainty, perhaps due to power fluctuations or temperature drift in the components, the uncertainty compounds over each iteration as independent values thus adding in quadrature.
This actually is a good example of error vs uncertainty. I can write a transfer function to describe what I measure between the input and the output. But that doesn’t address the uncertainties associated with inputs and internal operation. It’s exactly the same with the GCMs. They don’t address the uncertainties associated with their inputs, e.g. clouds, or with their internal operation. Thus their uncertainties compound over each iteration, just like they do with the black boxes.
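A minimal sketch of that cascade (the numbers are just the ones in the example above):

```python
# Two frequency-doubling stages in series: each stage doubles the signal
# frequency and therefore doubles the frequency uncertainty riding on its input.
freq, u = 10.0, 1.0              # input: 10 Hz with +/- 1 Hz uncertainty
for stage in (1, 2):
    gain = 2.0                   # x2 frequency multiplier
    freq, u = gain * freq, gain * u
    print(f"after stage {stage}: {freq:.0f} Hz +/- {u:.0f} Hz")
# prints 20 +/- 2 Hz, then 40 +/- 4 Hz, i.e. spans of 18-22 Hz and 36-44 Hz
```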
Hey Tim,
Let’s add it up. The input to the first black box can be 10 volts at 10 hz with an uncertainty of +/- 1 hz. The transfer function for the box appears to be a x2 frequency multiplier. So the output becomes 10 volts at 20 hz +/- 2 hz (9 hz x 2 = 18, 11 hz x 2 = 22, a span of +/- 2 hz). This then feeds into a 3rd black box that has the same transfer function. Its output will be 10 volts at 40 hz +/- 4 hz (44 hz – 36 hz). The uncertainty compounds with each iteration. This is the case where the uncertainty is not independent, so it is a straight additive.
That’s a good analogy! So, how would you expect this uncertainty to manifest itself in this context? If we’ve got a connected set of 3 black boxes, as you described, would you expect the output from the 3rd converter to be 10 volts and something between 36-44 Hz? If, after several repeated runs, you have consistent output of, say, 39.7-40.1 Hz, does that change anything? My understanding is that fellas like Nick and Roy argue that because the output of the GCMs is clustered around +/- 0.5 C or a bit more, that is the actual uncertainty range.
What I meant to say was that it oscillates randomly (both temporally and in frequency) within +/-1 hz over a 20 year period.
You are describing error, not uncertainty. The uncertainty interval says nothing about the actual value the output will take, it only describes an interval in which the actual value will be found.
Tim Gorman, let’s all recognize that Steven Mosher is playing a fine game of gotcha with Pat Frank and with the other knowledgeable critics of the climate models. Nick Stokes is doing the same thing, but he is doing it in a way that does have the benefit of offering useful insight into the theoretical and computational uncertainties of the GCMs.
Mosher and Stokes have to know just how time consuming and expensive it would be to examine each and every component and sub-component of a GCM to determine what kinds of uncertainties are present and then to quantify and document all of those uncertainties.
They know too that quantifying and documenting all those uncertainties must be done by following every step the processing engine passes through, starting with reading the initial inputs on through to performing the internal computational process operations on through to producing the final outputs.
In other words, no black box. The whole enchilada, end to end. The inputs, the parameterizations, the assumed physics, the software code, and the computational correctness of the outputs according to accepted software quality assurance principles.
Mosher and Stokes also know what everyone who has faced a similar requirement in other engineering disciplines knows — that climate scientists would strongly oppose any serious demand to systematically quantify and document the uncertainties of the GCMs which now support climate change policy decision making.
Beta,
I don’t disagree with you on a theoretical basis. I would only add that on a practical basis not all components and sub-components and uncertainties would need to be evaluated in order to invalidate the models. It would be much simpler to identify, on a subjective basis, a few, maybe even only one or two, of those components and sub-components which would have the biggest impact from their uncertainties. Evaluate those and see what happens to the models.
This is basically what Pat has done. He picked one identifiable component and evaluated what that uncertainty causes. It’s not good for the reliability quotient of the models!
I agree that the mathematicians and computer programmers will never attempt even the smallest evaluation of their models. I suspect most of them don’t know how and those that do know how also know what the result would be!
That’s why Pat’s contribution is so important. It needs the widest distribution and support possible. Perhaps someone can get it to the new Energy Secretary and explain to him how the math works.
Thanks to Beta Blocker and Tim Gorman for painstakingly pointing out what is going on behind the math, and the motivations for the obfuscation – which is dressed up as dialogue:
Denial isn’t just a river in Egypt.
It is a primary defence mechanism after all.
Pat Frank has also contributed a very important result that calls to account the use of GCM forecasts as a basis for policy. He has refuted, meticulously, repeatedly, and with good grace, attempts to obscure the result, which, if it were more clearly understood by modelers, would suggest a significant rethink of this approach.
“Mosher and Stokes have to know just how time consuming and expensive it would be to examine each and every component and sub-component of a GCM to determine what kinds of uncertainties are present and then to quantify and document all of those uncertainties.”
So Pat took a short cut and determined the uncertainty caused by the largest known component of uncertainty, which also happens not to be addressed by the model. And once it is determined that the output of GCMs is meaningless, there is no need to spend years adding additional smaller uncertainties. I mean, meaningless is meaningless. It is like being ahead in a soccer game 35-2, then glorying that in the last 5 minutes you managed to score 2 additional goals to make the final score 37-2.
An approach which assesses all of the component and sub-component uncertainties would provide an objective means for determining and highlighting where inside a specific GCM the most important uncertainties influencing its predictive outputs lie —
if it were to be done systematically as a tool in evaluating the credibility of a GCM’s output.
This raises a further question. How would the absence of a GCM component or sub-component, one which is thought to be necessary for making useful predictions, be quantitatively assessed for its contribution to a model’s overall uncertainty?
IOW, I believe the adage goes, “it is good enough for government work”!
“Where Mosher gets it really wrong is that what is inside the box isn’t necessarily what determines the uncertainty”
We pretty much know that GCMs cannot calculate cloud forcing, so in fact, it is what is NOT in the models (cloud physics) that is in fact causing the uncertainty in the models output.
If I model a car driving with the engine at various RPMs and the transmission in various gear ratios, but neglect to model the effect of braking, and I can show through experiment (analogous to Lauer) the amount of braking that is going on, I can calculate an uncertainty of the car model due to the lack of modeling the braking!
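A hypothetical sketch of that calibration idea (every number here is invented for illustration): compare the no-braking model to observed speeds and take the rms of the residuals as the model’s calibration uncertainty:

```python
# Hypothetical illustration of the braking analogy; all numbers are invented.
# The rms of (model - observation) plays the role of a Lauer-type calibration
# error for the car model that omits braking.
import math

modeled_speed  = [62.0, 64.0, 66.0, 63.0, 65.0]   # no-braking model, km/h
observed_speed = [60.0, 61.0, 65.0, 58.0, 63.0]   # measured speeds, km/h

residuals = [m - o for m, o in zip(modeled_speed, observed_speed)]
u_cal = math.sqrt(sum(r**2 for r in residuals) / len(residuals))  # rms error

print(f"calibration uncertainty ~ +/-{u_cal:.1f} km/h")
```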
I’m in awe. Boy, did you folks get it down! 🙂
I’ve copied out that whole thread. It’s a keeper.
Dan Kahan needs to see these comments. The spectacle of posters, many who are obviously technically proficient, defending Pat Frank’s irrelevant paper, rife with math errors, because it supports their prejudgments, is a shining example of his System 2 findings….
“defending Pat Frank’s irrelevant paper, rife with math errors”
Addicted to using emotional arguments instead of facts, are we? You offer nothing to support your claim that the paper is irrelevant and has math errors.
“But it is very much consistent with CCT, which predicts that individuals will use their System 2 reasoning capacities strategically and opportunistically to reinforce beliefs that their cultural group’s positions on such issues reflect the best available evidence and that opposing groups’ positions do not.”
Facts are funny things. They either show reality or they don’t. Belief has nothing to do with it. I suspect the groups involved in Kahan’s studies didn’t have all the facts, couldn’t understand the math, and weren’t versed in experimental science. Thus their “belief” that their cultural group’s positions on issues reflect the best available evidence is based more on “group think” than on actual fact and analysis. That is true on *both* sides of the argument. It is a proven fact, however, that most on the climate alarmist side of the argument are mathematicians and computer programmers and are not familiar with scientific studies based in reality. See Pat’s writings for proof. See the unsupported and debunked claims that his writings are full of math errors. Since his thesis is *not* full of math errors, it means it is *very* relevant. His thesis *is* falsifiable but has yet to be shown to be false. Until his thesis can be falsified, it stands as true and an accurate description of reality. That’s not true of the climate alarmists’ GCM studies, which Pat’s thesis has falsified.
Looks like desperation time for Pomo State. Did coach send you in from the end of the bench to commit the intentional?
“rife with math errors”
There are no math errors.
Dan Kahan, no matter his accomplishments, shows no perceptible ability to evaluate the science himself.
An irony, given his work, is that Dan Kahan would have to rely on an argument from authority for a view of my paper.
Any guess which authority he’d pick? Might it be someone who has an egalitarian, communitarian world-view, rather than one of those awful hierarchical individualists?
+1
Pat Frank, what you are doing is important work and is greatly appreciated from the perspective of an engineer. Your responses to questions and objections make good sense as this topic continues to attract attention and you patiently keep re-stating the points from the paper and the posts. Please keep on.
Thanks, David. I appreciate your knowledge-based support.
Yours and Tim Gorman’s and Beta Blocker’s and John Q. Public’s, and angech’s, and Kevin Kelty’s.
The knowledgeable people who weigh in are a company worth having.
I am absolutely not knowledgeable but I greatly appreciate your patience and your striving for clarity.