September 11th, 2019 by Roy W. Spencer, Ph. D.

I’ve been asked for my opinion by several people about this new published paper by Stanford researcher Dr. Patrick Frank.

I’ve spent a couple of days reading the paper, programming his Eq. 1 (a simple “emulation model” of climate model output), and including his error propagation term (Eq. 6) to make sure I understand his calculations.
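For readers who want to reproduce the flavor of that exercise, here is a minimal sketch of a linear forcing-accumulation emulator with root-sum-square error propagation. The sensitivity coefficient and forcing increments below are illustrative placeholders, not Dr. Frank’s fitted values; the sketch captures only the general structure of the two equations as I describe them in this post.

```python
import math

# Illustrative linear emulator: projected temperature change taken as
# proportional to accumulated radiative forcing (cf. Frank's Eq. 1).
SENSITIVITY = 0.42  # deg C per (W/m^2) of accumulated forcing -- assumed value

def emulate(delta_forcings):
    """Yearly temperature anomalies from a list of yearly forcing increments."""
    temps, total_forcing = [], 0.0
    for df in delta_forcings:
        total_forcing += df
        temps.append(SENSITIVITY * total_forcing)
    return temps

def propagate_uncertainty(n_steps, step_error_wm2, sensitivity=SENSITIVITY):
    """Root-sum-square growth of per-step uncertainty (cf. Frank's Eq. 6):
    each yearly step contributes +/-(sensitivity * step_error) deg C,
    and the step contributions are summed in quadrature."""
    per_step_temp_error = sensitivity * step_error_wm2
    return math.sqrt(n_steps * per_step_temp_error ** 2)

# 100 years of a constant 0.04 W/m^2/yr forcing increment,
# with an assumed +/-4 W/m^2 per-step error statistic
temps = emulate([0.04] * 100)
u100 = propagate_uncertainty(100, 4.0)
print(f"emulated warming after 100 yr: {temps[-1]:.2f} C")
print(f"propagated uncertainty: +/-{u100:.1f} C")
```

Note how the propagated uncertainty dwarfs the emulated warming itself; that behavior is what is at issue in the rest of this post.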

Frank has provided the numerous peer reviewers’ comments online, which I have purposely not read in order to provide an independent review. But I mostly agree with his criticism of the peer review process in his recent WUWT post where he describes the paper in simple terms. In my experience, “climate consensus” reviewers sometimes give the most inane and irrelevant objections to a paper if they see that the paper’s conclusion in any way might diminish the Climate Crisis™.

Some reviewers don’t even read the paper, they just look at the conclusions, see who the authors are, and make a decision based upon their preconceptions.

Readers here know I am critical of climate models in the sense they are being used to produce biased results for energy policy and financial reasons, and their fundamental uncertainties have been swept under the rug. What follows is not meant to defend current climate model projections of future global warming; it is meant to show that — as far as I can tell — Dr. Frank’s methodology cannot be used to demonstrate what he thinks he has demonstrated about the errors inherent in climate model projection of future global temperatures.

**A Very Brief Summary of What Causes a Global-Average Temperature Change**

Before we go any further, you must understand one of the most basic concepts underpinning temperature calculations: With few exceptions, *the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system.* This is basic 1st Law of Thermodynamics stuff.

So, if energy loss is less than energy gain, warming will occur. In the case of the climate system, the warming in turn results in an increased loss of infrared radiation to outer space. The warming stops once the temperature has risen to the point that the increased loss of infrared (IR) radiation to outer space (quantified through the Stefan-Boltzmann [S-B] equation) once again achieves global energy balance with absorbed solar energy.

While the specific mechanisms might differ, these energy gain and loss concepts apply similarly to the temperature of a pot of water warming on a stove. Under a constant low flame, the water temperature stabilizes once the rate of energy loss from the water and pot equals the rate of energy gain from the stove.

The climate stabilizing effect from the S-B equation (the so-called “Planck effect”) applies to Earth’s climate system, Mars, Venus, and computerized climate models’ simulations. Just for reference, the average flows of energy into and out of the Earth’s climate system are estimated to be around 235-245 W/m2, but we don’t really know for sure.
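As a quick illustration of that stabilizing effect, the S-B relation can be evaluated directly. Treating the Earth as an ideal blackbody emitter (a simplification; no emissivity or greenhouse terms are included), the equilibrium emitting temperature for that range of fluxes comes out near the familiar 255 K:

```python
# Effective emitting temperature from the Stefan-Boltzmann law:
# outgoing IR (sigma * T^4) must balance absorbed solar energy.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(absorbed_wm2):
    """Temperature at which blackbody emission balances the absorbed flux."""
    return (absorbed_wm2 / SIGMA) ** 0.25

for flux in (235.0, 240.0, 245.0):
    print(f"{flux:.0f} W/m^2 -> {equilibrium_temp(flux):.1f} K")
```

A warmer surface emits more, a cooler one less, which is exactly the negative (stabilizing) Planck response described above.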

**What Frank’s Paper Claims**

Frank’s paper takes an example of a known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) and assumes that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation, propagating the error forward in time during his emulation model’s integration. The result is a huge amount (as much as 20 deg. C or more) of spurious model warming (or cooling) in future global average surface air temperature (GASAT).

He claims (I am paraphrasing) that this is evidence that the models are essentially worthless for projecting future temperatures, as long as such large model errors exist. This sounds reasonable to many people. But, as I will explain below, the methodology of using known climate model errors in this fashion is not valid.

First, though, a few comments. On the positive side, the paper is well-written, with extensive examples, and is well-referenced. I wish all “skeptics” papers submitted for publication were as professionally prepared.

He has provided more than enough evidence that the output of the average climate model for GASAT at any given time can be approximated as just an empirical constant times a measure of the accumulated radiative forcing at that time (his Eq. 1). He calls this his “emulation model”, and his result is unsurprising, and even expected. Since global warming in response to increasing CO2 is the result of an imposed energy imbalance (radiative forcing), it makes sense you could approximate the amount of warming a climate model produces as just being proportional to the total radiative forcing over time.

Frank then goes through many published examples of the known bias errors climate models have, particularly for clouds, when compared to satellite measurements. The modelers are well aware of these biases, which can be positive or negative depending upon the model. The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles, otherwise all models would get very nearly the same cloud amounts.

But there are two fundamental problems with Dr. Frank’s methodology.

**Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux**

If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.

Why?

*Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.*

For example, the following figure shows 100 year runs of 10 CMIP5 climate models in their pre-industrial control runs. These control runs are made by modelers to make sure that there are no long-term biases in the TOA energy balance that would cause spurious warming or cooling.

Figure 1. Output of Dr. Frank’s emulation model of global average surface air temperature change (his Eq. 1) with a +/- 2 W/m2 global radiative imbalance propagated forward in time (using his Eq. 6) (blue lines), versus the yearly temperature variations in the first 100 years of integration of the first 10 models archived at

https://climexp.knmi.nl/selectfield_cmip5.cgi?id=someone@somewhere .

If what Dr. Frank is claiming were true, the 10 climate model runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling. But they don’t. You can barely see the yearly temperature deviations, which average about +/-0.11 deg. C across the ten models.

Why don’t the climate models show such behavior?

The reason is that *the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance*. It doesn’t matter how correlated or uncorrelated those various errors are with each other: they still sum to zero, which is why the climate model trends in Fig. 1 are only +/- 0.10 C/Century… not +/- 20 deg. C/Century. That’s a factor of 200 difference.

This (first) problem with the paper’s methodology is, by itself, enough to conclude the paper’s methodology and resulting conclusions are not valid.

**The Error Propagation Model is Not Appropriate for Climate Models**

The new (and generally unfamiliar) part of his emulation model is the inclusion of an “error propagation” term (his Eq. 6). After introducing Eq. 6 he states,

“*Equation 6 shows that projection uncertainty must increase in every simulation* (time) *step, as is expected from the impact of a systematic error in the deployed theory*“.

While this error propagation model might apply to some issues, there is no way that it applies to a climate model integration over time. If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. It doesn’t somehow accumulate (as the blue curves indicate in Fig. 1) as the square root of the summed squares of the error over time (his Eq. 6).

Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step. Dr. Frank has chosen 1 year as the time step (with a +/-4 W/m2 assumed energy flux error), which will cause a certain amount of error accumulation over 100 years. But if he had chosen a 1-month time step, there would be 12x as many error accumulations and a much larger deduced model error in projected temperature. This should not happen, as the final error should be largely independent of the model time step chosen. Furthermore, the assumed error with a 1-month time step would be even larger than +/-4 W/m2, which would have magnified the final error after a 100-year integration even more. This makes no physical sense.
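My time-step objection is easy to verify numerically. Under straight root-sum-square accumulation, keeping the same assumed +/-4 W/m2 per-step statistic while relabeling the step from a year to a month inflates the century-scale accumulation by a factor of sqrt(12), roughly 3.5x (the numbers below are illustrative, not taken from the paper):

```python
import math

def total_uncertainty(per_step_error, n_steps):
    """Root-sum-square accumulation over n_steps equal, independent errors."""
    return per_step_error * math.sqrt(n_steps)

# Same 100-year integration, same assumed +/-4 W/m^2 per-step statistic
annual = total_uncertainty(4.0, 100)     # 100 one-year steps
monthly = total_uncertainty(4.0, 1200)   # 1200 one-month steps
print(f"annual steps:  +/-{annual:.1f}")
print(f"monthly steps: +/-{monthly:.1f} ({monthly / annual:.2f}x larger)")
```

A physically meaningful final uncertainty should not depend on how finely we slice the integration.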

I’m sure Dr. Frank is much more expert in the error propagation model than I am. But I am quite sure that Eq. 6 does not represent how a specific bias in a climate model’s energy flux component would change over time. It is one thing to invoke an equation that might well be accurate and appropriate for certain purposes, but that equation is the result of a variety of assumptions, and I am quite sure one or more of those assumptions are not valid in the case of climate model integrations. I hope that a statistician such as Dr. Ross McKitrick will examine this paper, too.

**Concluding Comments**

There are other, minor, issues I have with the paper. Here I have outlined the two most glaring ones.

Again, I am not defending the current CMIP5 climate model projections of future global temperatures. I believe they produce about twice as much global warming of the atmosphere-ocean system as they should. Furthermore, I don’t believe that they can yet simulate known low-frequency oscillations in the climate system (natural climate change).

But in the context of global warming theory, I believe the largest model errors are the result of a lack of knowledge of the temperature dependent changes in clouds and precipitation efficiency (thus free-tropospheric vapor, thus water vapor “feedback”) that actually occur in response to a long-term forcing of the system from increasing carbon dioxide. I do not believe it is because the fundamental climate modeling framework is not applicable to the climate change issue. The existence of multiple modeling centers from around the world, and then performing multiple experiments with each climate model while making different assumptions, is still the best strategy to get a handle on how much future climate change there *could* be.

My main complaint is that modelers are either deceptive about, or unaware of, the uncertainties in the myriad assumptions — both explicit and implicit — that have gone into those models.

There are many ways that climate models can be faulted. I don’t believe that the current paper represents one of them.

I’d be glad to be proved wrong.

Roy, Pat accepts the models have been tuned to avoid radical departures, his point is the magnitude of the errors which are being swept under the carpet is evidence that the models are unphysical. Tuning doesn’t make the errors disappear, it just hides them. Agreement with past temperatures is not evidence the models are right, if the models get other things very wrong.

Eric:

A 12% variation across the model “errors” in LWCF is probably not much bigger than our uncertainty in LWCF. Taking satellite measurements of cloud properties as “truth” is itself uncertain: spatial resolution matters, as do detection thresholds and the very definition of “cloud”.

So, there is uncertainty in things like LWCF, right? Well, despite that fact, 20 different models from around the world, which have differing error-prone and uncertain values for all of the components impacting Earth’s radiative energy budget, give about the same results, varying in their warming rate only depending on (1) climate sensitivity (“feedbacks”), and (2) rate of ocean heat storage. This suggests that their results don’t really depend upon model biases in specific processes.

That’s why they run different models with different groups in charge. To find out what a range of assumptions produce.

This is NOT where model projection uncertainty lies.

Is that not what the author was referring to, by the distinction between Precision and Accuracy; that statistical uncertainties may offset, but margins of physical error can only be cumulative?

What I got from the paper, is that regardless of what the uncertainties are, and no matter how precisely tuned the models might be, the margin of physical error is so broad, that scientifically, the models tell us… Absolutely Nothing.

when they hindcast these models…which adjusted data set do they use?…

..and are they able to run the models fast enough…before they adjust past temperatures again? 🙂

Models do not even get hindcasts right. I am unaware of any model which can reproduce the early 20th c. warming. They start too high and end too low, i.e. they are tuned to the average but do not represent the actual rate of warming, which is comparable to the late 20th c. warming. Lacis et al 1992 used “basic physics” modelling to calculate volcanic forcing. The same group in Hansen et al 2005 had abandoned all attempts at physically realistic modelling and arbitrarily tuned the scaling of measured AOD to radiative forcing, the sole criterion being to tweak climate model output to fit the climate record.

They jettisoned a valid parameter to gain a fudge factor. Once you take this approach, not only have you lost the claim to be using known, basic physics but you basically have an ill-conditioned problem. You have a large number of unconstrained variables with which to fit your model. You will get a reasonable fit but there is no reason that it will be physically real and meaningful. Take any number of red noise series with arbitrary scaling and you can model any climate data you wish to obtain a vague fit which looks OK, if you close one eye.

This also affords you the possibility to fix one parameter (e.g. GHG forcing) at an elevated level and regress the model to fit other parameters around it. This, in truth, is what modellers are doing with the so-called “control runs”. This is a deceptive misnomer, since it does not control anything in the usual way this term is used in scientific studies, since we don’t have any data for a climate without the GHG forcing. They are simply running the model, which has already been tuned, with one variable missing. The difference compared to the standard run is precisely the GHG effect they have hard-coded into the model. It is not a result, it is an input.

This is well known to those doing the modelling, and to present this as a “control run” and show it as a “proof” of the extent of CO2 warming is, to be honest, a scientific fraud.

When they have to adjust things…they don’t even understand….to get the results they want

If they understood…they wouldn’t need to adjust

Dr. Spencer, if you subtract observed cloud cover from predicted cloud cover over say a 20 year period:

cloud error = predicted – observed

Then flip the sign of the error:

cloud cover (test) = observed – cloud error

How much impact does using cloud cover (test) in the model have on predicted temperature, compared to the temperature predicted by the original projection?
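For clarity, here is the proposed test written out directly; the cloud-fraction numbers are invented purely for illustration:

```python
# Proposed sign-flip test; all values below are made up for illustration.
predicted = [0.62, 0.55, 0.70]   # modeled cloud fraction over three periods
observed  = [0.58, 0.60, 0.66]   # observed cloud fraction, same periods

# cloud error = predicted - observed
cloud_error = [p - o for p, o in zip(predicted, observed)]

# cloud cover (test) = observed - cloud error  (the error with sign flipped)
cloud_test = [o - e for o, e in zip(observed, cloud_error)]
print(cloud_test)
```

Feeding the sign-flipped field back into the model would then show how sensitive the projected temperature is to the cloud error.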

Roy – Thanks for your critical analysis. If the climate alarmist community were as open to critical analysis then there would be no alarm, no global warming scare.

But coming back to your analysis – I’m not sure that your argument re the “20 different models” is correct. All the models are tuned to the same very recent observation history, so their results are very unlikely to differ by much over quite a significant future period. In other words, the models are not as independent of each other as some would like to claim. In particular, they all seem to have very similar climate sensitivity – and that’s a remarkable absence of independence.

Eric

People are confusing uncertainty with a coefficient of variation in the results. A CoV is not an uncertainty per se. It is based on the outputs and their variability. A model that has been validated can be used to predict results, and a confidence given based on an analysis of the output data set. However…

The uncertainty about the result is an inherent property of the measurement inputs or system. Uncertainties get propagated through the formulas. They can be expressed as an absolute error (±n) or a relative error (±n%). This is not the same thing as a 1-sigma coefficient of variation, at all.

Roy is claiming that having a low CoV means the uncertainty is low. The mean output value, the CoV and the uncertainty are different things.

Pat’s point is the cloud error has to be propagated because the error becomes part of the input state of the next iteration of the model.

I asked Pat whether the period of the model iteration makes a difference, from memory he said he tried different periods and it made very little difference.

Dr. Spencer is right that the models don’t exhibit wild departures and provide a reasonable hindcast of energy balance, but I see this as curve fitting. It won’t tell you if, say, another pause is about to occur, because the physics of the model with respect to important features such as cloud cover is wrong.

“A model that has been validated, can be used to predict results and a confidence given based on an analysis of the output data set.”

Claiming an atmospheric model is validated is spurious or intellectually dishonest at best and probably just plain ole malfeasance.

A blank 16 foot (roughly 4.9 meters) wall, a blindfolded dart thrower and a dart would yield better results.

Eric, I think another way of expressing your point is that independent errors will add in quadrature, and tend to cancel out. While the span of possible outcomes will increase greatly, the probability distribution remains centered on zero, and vast numbers of runs would be required to demonstrate the tails of the distribution. As a control system designer, I typically impose sanity checks to constrain multivariate systems to account for mathematically possible, but physically unreasonable outcomes. On a verifiable system, this is an interesting side effect of the imperfect mathematical models of physical processes. I suspect strongly that climate models are pushed beyond what is mathematically acceptable for any real world application.
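That quadrature behavior is simple to demonstrate with a Monte Carlo sketch (the step count and error size here are arbitrary): the sum of N independent zero-mean errors stays centered on zero, while its spread grows only as sqrt(N) times the per-step sigma.

```python
import math
import random

random.seed(0)  # reproducible illustration

def run_sum(n_steps, sigma):
    """Sum of n_steps independent zero-mean Gaussian errors (one 'run')."""
    return sum(random.gauss(0.0, sigma) for _ in range(n_steps))

# 10,000 runs of 100 steps with unit per-step error
sums = [run_sum(100, 1.0) for _ in range(10_000)]
mean = sum(sums) / len(sums)
spread = math.sqrt(sum((s - mean) ** 2 for s in sums) / len(sums))
print(f"mean of run sums: {mean:.2f} (near 0)")
print(f"spread of run sums: {spread:.2f} (near sqrt(100) = 10)")
```

The distribution widens, but only vast numbers of runs would populate its tails, consistent with the sanity-check point above.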

The errors don’t tend to zero: they’re not random, they’re systematic. If I understand Pat’s point properly, he is suggesting the models do go wild, but they have been tuned to constrain the temperature at the expense of wild variations in cloud cover.

I’m going to post my reply to Roy here under Eric’s top comment.

But I’ll summarize Roy’s criticism in one line: Roy thinks a calibration error statistic is an energy.

Many of my climate modeler reviewers made the same mistake. It’s an incredible level of ignorance in a trained scientist.

Here’s the reply I posted on Roy’s blog, but without the introductory sentences.

Thanks for posting your thoughts here. I’m glad of the opportunity to dispel a few misconceptions. I’ll move right down to “What Frank’s Paper Claims.”

You start by stating that I take “an example known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) …”

If your “bias” means offset, it is misleading. The LWCF error is a theory error, not a bias offset. That is demonstrated by its pair-wise correlation among all the models. The (+/-)4 W/m^2 is a model calibration error statistic.

I do not assume “that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation.” That’s not an assumption. It is justified several times in the paper on the grounds that it is an uncertainty in simulated tropospheric thermal energy flux. As such it conditions the simulated impact of CO2 forcing, which is also part of the very same tropospheric thermal energy flux.

Entry of the (+/-)4 W/m^2 into the emulation of projected global average air temperature is fully justified on those grounds.

You go on to write that I propagate “the error forward in time during his emulation model’s integration.” You’re implying here that the (+/-)4 W/m^2 is propagated forward. It’s not. It’s the uncertainty in air temperature, consequent to the uncertainty in simulated forcing, that is propagated forward.

Then you write, “The result is a huge (as much as 20 deg. C or more) of resulting spurious model warming (or cooling) in future global average surface air temperature (GASAT).”

I must say I was really sorry to read that. It’s such a basic mistake. The (+/-)20 C (your number) is not a temperature. It’s an uncertainty statistic. Propagated error does not impact model expectation values. It is evaluated separately from the simulation.

And consider this: the (+/-)20 C uncertainty bars are vertical, not offset. Your understanding of their meaning as temperature would require the model to imply the simultaneous coexistence of an ice house and a greenhouse state.

One of my reviewers incredibly saw the (+/-)20 C as implying the model to be wildly oscillating between hothouse and ice-house states. He also did not realize that the vertical bars mean his interpretation of the (+/-)20 C as a temperature would require both states to be occupied simultaneously.

In any case, Roy, your first paragraph alone has enough mistakes in it to invalidate your entire critique.

The meaning of uncertainty is discussed in Sections 7.3 and 10 of the Supporting Information.

You wrote that, “The modelers are well aware of these biases, which can be positive or negative depending upon the model.”

The errors are both positive and negative across the globe for each model. This is clearly shown in my Figure 4, and in Figures throughout Lauer and Hamilton, 2013. The errors are not bias offsets, as you have them here.

You wrote, “If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.”

You’re mistaking the calibration error statistic for an energy. It is not one. And you’ve assigned an implied positive sign to the error statistic, representing it as an energy flux. It isn’t one. It’s (+/-)4 W/m^2. Recognizing the (+/-) is critical to understanding.

And let me ask you: what impact would a simultaneously positive and negative energy flux have at the TOA? After all, it’s (+/-)4 W/m^2. If that was a TOA energy flux, as you have it, it would be self-cancelling.

You wrote, “each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.”

And each of those models simulates cloud fraction incorrectly, producing an average calibration error of (+/-)4 W/m^2 in LWCF, even though they are overall energy-balanced. I point out in my paper that the internal climate energy-state can be wrong, even though the overall energy balance is correct.

That’s what the cloud fraction simulation error represents: an internally incorrect climate energy-state.

You wrote, “If what Dr. Frank is claiming was true, the 10 climate models runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling.”

No, they would not.

I’m sorry to say that your comment shows a complete lack of understanding of the meaning of uncertainty.

Calibration error statistics do not impact model expectation values. They are calculated after the fact from model calibration runs.

You wrote, “+/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance.”

I don’t assume that (+/-)4 W/m^2. It is the reported LWCF calibration error statistic in Lauer and Hamilton, 2013.

Second, offsetting errors do not make the underlying physics correct. The correct uncertainty attending offsetting errors is their combination in quadrature and their report as a (+/-) uncertainty in the reported result.
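The distinction is easy to state numerically (a two-term toy example): errors that happen to offset sum to zero, but the uncertainty they carry combines in quadrature and does not vanish.

```python
import math

# Toy example: two calibration errors that happen to offset exactly
e1, e2 = +4.0, -4.0

print(e1 + e2)  # the tuned sum is 0.0, so the model looks balanced

# The uncertainty attending those offsetting errors combines in quadrature:
u = math.sqrt(e1 ** 2 + e2 ** 2)
print(f"+/-{u:.1f}")  # the uncertainty does not cancel
```

A zeroed sum says nothing about whether the underlying physics is right; the (+/-) uncertainty must still be reported.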

There is no reason to suppose that errors that happen to offset during a calibration period will continue to offset in a prediction of future states. No other field of physical science makes such awful mistakes in thinking.

You are using an incomplete or incorrect physical theory, Roy, adjusting parameters to get spuriously offsetting errors, and then assuming they correct the underlying physics.

All you’re doing is hiding the uncertainty by tuning your models.

Under “Error Propagation …” you wrote, “If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time.”

Once again, you imposed a positive sign on a (+/-) uncertainty error statistic. The error statistic is not an energy flux. It does not perturb the model. It does not show up at the TOA.

Your imposition of that positive sign facilitates your incorrect usage. It’s an enabling mistake.

I have run into this mistaken thinking repeatedly among my reviewers. It’s incredible. It’s as though no one in climate science is ever taught anything about error analysis in undergraduate school.

You wrote, “Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step.”

No, it does not. Several reviewers, including Prof. Zanchettin, raised this question. I answered it to his satisfaction.

The size of the simulation calibration uncertainty statistic will vary with the time over which it is appraised. When this is taken into account, the centennial uncertainty comes out the same every time.

And the time step is not assumed, as you have it. Lauer and Hamilton provided an annual mean error statistic. That annual average calibration error was applied to annual temperature time steps. None of that was assumed. You should have looked at eqns. 5, and the surrounding text. Here’s the critical point, from the paper: “In equation 5 the step-wise GHG forcing term, ΔF_i, is conditioned by the uncertainty in thermal flux in every step due to the continual imposition of LWCF thermal flux calibration error.”

Eqn. 6 is a generalization of eqns. 5.

I’m sorry Roy. You’ve made one very fundamental mistake after another. Your criticism has no force.

Dr. Spencer:

I admire your work but I perceive a logical error in your analysis.

You say: “It doesn’t matter how correlated or uncorrelated those various errors are with each other: they still sum to zero, which is why the climate model trends in Fig 1 are only +/- 0.10 C/Century… not +/- 20 deg. C/Century. That’s a factor of 200 difference.”

But your own description of climate models includes parameterization of clouds, for example. And we know how badly that often fails.

And yes, other parameters are adjusted to compensate for that, still giving the same net TOA output.

But that is not evidence that the models are correct; rather, it is evidence that they are not.

Lonny,

I thought I made it clear, at the beginning and end, that I don’t believe the climate models are correct. I’m only faulting the analysis. Read my response to Eric, above. Parameterizations are fine if they reproduce the average behavior of clouds and the clouds’ dependence on a wide variety of variables. Model forecast errors in warming rates don’t seem to depend upon model biases in various processes. They depend upon (1) feedbacks, and (2) the rate of deep-ocean heat storage.

I made it very clear I’m not saying models are right. I just don’t think they are wrong for the reason Pat gives.

Many thanks to Dr Spencer for his analysis. It was immediately obvious to me that the paper was spurious but I did not have the time to go into it in the detail that he did to come up with a direct refutation and solid reasons why.

Being a skeptic means being equally skeptical and critical of everything, not just the results you don’t like.

many thanks for the objective scientific approach.

+1

The models do not give good answers because they assume feedback when there is no feedback. Please read the 5 postulates of Thermodynamics plus the zeroth law. I further suggest you read engineering texts, because thermodynamics and heat transfer are engineering subjects. Maybe you should also read something about another engineering subject: dimensional analysis.

As a Research Professor on the Scientific Staff at SLAC National Accelerator Laboratory for going on 34 years, Dr. Frank (Chemistry PhD., Stanford) is highly skilled at data analysis.

Objections to his paper raised by Dr. Spencer and Nick Stokes were also mentioned by reviewers, but the journal found Dr. Frank’s responses valid.

John, let’s be careful about appealing to the authority of journals. Pat had a hard time getting that paper published, and when he did, it was in the one ranked 48th out of 49 Earth science journals by ResearchGate (I keep track of such things in a spreadsheet). The work must stand on its own merits, whether published or not.

There may be good reasons for such a low ranking; however, that’s simply a side-swipe.

The quality of the editor and reviewers is what we should concentrate on.

Jing-Jia Luo formerly of Australian Bureau of Meteorology is highly regarded and has published in the top journals including on model biases.

Dr. Spencer,

Of course not all journals are created equal, nor their editors. However the six years IMO owe more to resistance by the Team than to any inherent errors of analysis.

I didn’t mean to appeal to authority, but rather to the persuasiveness of Pat’s work, after subjected to rigorous criticism, then evaluated in that light by competent editors.

He and they might be wrong, but my point is that unbiased reviewers and editors considered objections such as yours, yet decided to go ahead and publish.

“the six years IMO owe more to resistance by the Team”

It is due to reviewers seeing what Roy Spencer saw.

Nick,

Do you know who all the reviewers were who read the paper in its submissions over six years?

I don’t. Pat probably doesn’t. How then can you justify this conclusion?

OTOH, we know that the Team colludes to keep skeptical papers from being published.

I agree with you, Nick, that this is probably one factor contributing to the delay.

“Do you know who all the reviewers were who read the paper in its submissions over six years?”

I know what they said. Pat posted a huge file of reviews. It’s linked in his “Mark ii” post and earlier ones.

Or it could just be they share a common misconception. This happens a lot when there are many competing beliefs for which set of partial explanations explains something controversial that has only one possible comprehensive explanation.

I see this all the time in the reflexive rejection of the idea that the S-B Law quantifies the bulk relationship between the surface temperature and emissions at TOA as a gray body, because there’s a common belief among many on both sides that the climate system must be more complicated than that. This appears to be a very powerful belief that unambiguous data has trouble overcoming, and even the lack of other physical laws capable of quantifying the behavior in any other way is insufficient to quell that belief.

I put the blame on the IPCC who has framed the science in a nonsensical way since the first AR and the 3 decades that this garbage has been stewing ever since.

You, Dr. Spencer, rightly criticized appealing to authority. Then you used a similar argument, equally illogical, denigrating the opponent’s authority (the publication).

To be able to both criticize illogic and use the very same illogic in a brief statement is likely an indication of a blind spot. The capacity to do this may be affecting your discourse.

Thank you for your clarification, Dr. Spencer. I was a little suspicious of such a large propagating error in such a short time. Even if wrong, climate models are quite consistent in their response to increasing CO2 levels. That’s why they are easy to emulate with simple models.

As was I. If such a large propagating error existed, then surely it would have already manifested itself as a bigger deviation in the model runs from observations over the forecast period (starts 2006).

Like Dr Spencer I am not arguing that the models are “right”, or above questioning; and anyone can see that observations are currently on the low side of the CMIP5 model runs overall. However, as things stand observations remain within the relatively narrow margins of the multi-model range.

If Pat’s hypothesis were right, and the error in the models was as big as he suggests, then after nearly 13 years we would already expect to see a much bigger deviation between model outputs and observations than we currently do.

Kudos to Roy Spencer and WUWT for demonstrating true skepticism here.

Tuning a dozen parameters on water physics keeps them running (outputs) to expectation. Which to me is the clearest reason the models are junk.

Tuning parameters are just fudge factors: because the values are so poorly constrained, a degeneracy exists across widely different parameter sets. With multiple sets of parameters that “work”, no one knows which parameters in their models are correct.

Even Dr Spencer’s comment that all models close the energy budget at the TOA is, I think, incorrect. Some do, but many do not. If they try to close the energy budget in/out, then the model runs far too “hot,” which is also why the hindcasts had to use excessive levels of aerosols to cool the runs to match the record, and then call that calibration.

“Tuning a dozen parameters on water physics keeps them running (outputs) to expectation. Which to me is the clearest reason the models are junk.”

To that I have to add one more thing that initially pegged my BS meter. Several years ago (it would now probably take me days or even weeks to find a reference) I remember several comments either here or on Steve M’s site that not only did the models include many adjustable parameters but in order to keep them within somewhat acceptable ranges on longer runs they had to include limit checks on various calculations in order to keep them from going totally off the rails.

I haven’t seen where the modelers ever got their code to the point where these limit checks have been removed. Unless/until then, there is no way I would be able to accept any generated output as anything other than SWAGs.

“If such a large propagating error existed, then surely it would have already manifested itself as a bigger deviation in the model runs from observations over the forecast period (starts 2006)…”

Wrong. That is the whole point of Pat Frank’s analysis. The reason you don’t see the deviations in the model is because the models don’t contain the physics necessary to model the reality that was used to determine the uncertainty. The uncertainty was derived from satellite measurements, and satellites do not need sophisticated models to determine cloud behavior. They just measure it.

“then after nearly 13 years we would already expect to see a much bigger deviation between model outputs and observations than we currently do.”

Why?

Models are dependent on what numbers you feed them…

..and past temps have been so jiggered no model will ever be right..present temps not excluded either

Even the numbers they produce…an average….when they claim the Arctic has increased 3.6F…yet, the global average temp has only increased 1.4F….somewhere got colder

The temps have been jiggered to specifically match the CO2 concentration, which is why this graph by Tony Heller has a straight line with an R squared of 0.99

See here:

https://twitter.com/NickMcGinley1/status/1150523905293148160?s=20

Is this the data they feed into the models?

Is this the past conditions they are tuned against?

They have gamed all of the data at this point.

That is because they ARE simple models … all the complexity is a red scarf trick to add a pretense of deep understanding and complexity, but the basic aim is to produce some noise around the monotonic rise in CO2 and use things like volcanic forcing to provide a couple of dips in about the right places to make it look more realistic.

And when the first 10y of projections fails abysmally, rather than attempt to improve the models ( which would require reducing the CO2 forcing ) they simply change the data to fit the models : see Karlisation and the ‘pause buster’ paper.

Great response, Roy, very informative. I have one question though: what is the basis for the assumed “energy balance” in their modeled system? Is it previous, hard and reliable temperature and other weather-type data that they can calculate an assumed “energy balance” from? If not, then how is it calculated? What assumptions go into determining an “energy balance” starting point? Is it possible a regnant bias could exist in how they calculate a reliable equilibrium from which to go forward?

Like you, I hope Ross McKittrick offers his thoughts on the subject.

Great questions, and I have harped on this for years. There is little basis for the assumption. We don’t know whether the climate system was in energy balance in the 1700s and 1800s. Probably not, since the evidence suggests it was warming during that time. But all the modelers assume the climate system wouldn’t change on multi-decadal time scales without human interference. That’s one of the main points I make: Recent warming of the global oceans to 2,000 m depth represents an energy imbalance of 1 part in 250. We don’t know any of the individual energy fluxes in nature to even 1 part in 100. That’s why I say human-caused warming has a large dose of faith. Not necessarily bad, but at least be honest about it.

Another way of stating your point, I believe, is that there is no evidence that the climate system has any equilibrium states. Thus the adjustment of models to start with an equilibrium state (energy I/O balanced at the top of the atmosphere) is wrong to begin with. Calculating any departure from an incorrect equilibrium state due to human CO2 release would then not have any meaning, as would calculating an equilibrium sensitivity to CO2 content. Please correct me if I’m wrong.

I do see Dr. Frank’s point about uncertainty in the initial TOA energy balance causing problems, but climate models have way more problems with initial conditions than that. For example, just setting up all of the temperatures, pressures, and velocities at the grid points is fraught with uncertainty, and the actual codes contain numerical damping schemes to keep them from blowing up due to an inconsistent set of initial conditions. Those schemes become part of the model equations, and often have unforeseen effects throughout the integration period. That’s just the tip of the iceberg.

As someone very familiar with engineering CFD, I look at the problem of making a global circulation model and wonder what honest practitioner of the art would ever tell anyone that it was possible. But then, the people writing the checks are so far from being able to understand the futility of the task that it’s easy to swindle them.
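The blow-up behavior mentioned above is easy to demonstrate with a toy example. This is my own sketch of a generic explicit scheme for the 1-D heat equation, not code from any GCM: the scheme is stable only when r = αΔt/Δx² ≤ 1/2 and diverges otherwise, which is one reason production codes carry damping schemes and limit checks.

```python
# Toy illustration (not a GCM): explicit Euler for du/dt = alpha * d2u/dx2.
# Stable only when r = alpha*dt/dx^2 <= 0.5; beyond that it "blows up".
def step(u, alpha, dx, dt):
    """One explicit time step of the 1-D heat equation with fixed ends."""
    r = alpha * dt / dx ** 2
    return [u[0]] + [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

u0 = [0.0] * 10 + [1.0] + [0.0] * 10   # a spike of heat in the middle
stable, unstable = u0, u0
for _ in range(50):
    stable = step(stable, alpha=1.0, dx=1.0, dt=0.4)      # r = 0.4: decays
    unstable = step(unstable, alpha=1.0, dx=1.0, dt=0.6)  # r = 0.6: diverges
print(max(abs(v) for v in stable) < 1.0)     # True: bounded
print(max(abs(v) for v in unstable) > 1e3)   # True: blown up
```
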

Given the energy balance point is 0°K I doubt the models are intrinsically accurate OR precise based on the input they receive.

Original author is correct: the model intentionally removes error and smooths input and thus is invalid from a mathematical standpoint. There are no exceptions when dealing with statistical analysis and statistical input… if you have to “insanetize” your inputs, your output, no matter how cloyingly close to your desired output, is just as wrong as if the computer you ran it on had exploded into flames and teleported to the next office.

You run the math and if it doesn’t give you the correct answer then your inputs were wrong.

On the positive side, Dr Spencer yet again shows he is his own man. On the negative side, just because errors cancel out does not mean that the errors do not exist. On the contrary, it means that the total sum of RMS errors is even larger than if they did not cancel out.

However, as Dr Spencer has done us the courtesy of putting in the time to understand the paper, perhaps I should comment further only when I have done the same.

This was more-or-less the substance of my own reply.

In accountancy one has the interesting phenomenon of multiple ‘compensating’ errors self cancelling so that one thinks the accounts are correct when they are not.

This is similar.

Many aspects of the climate are currently unquantifiable so multiple potentially inaccurate parameters are inserted into the starting scenario.

That starting scenario is then tuned to match real world observations but it contains all those multiple compensating errors.

Each one of those errors then compounds with the passage of time and the degree of compensating between the various errors may well vary.

The fact is that over time the inaccurate net effect of the errors accumulates faster and faster with rapidly reducing prospects of unravelling the truth.

Climate models are like a set of accounts stuffed to the brim with errors that sometimes offset and sometimes compound each other such that with the passing of time the prospect of unravelling the mess reduces exponentially.

Pat Frank is explaining that in mathematical terms but given the confusion on the previous thread maybe it is best to simply rely on verbal conceptual imagery to get the point across.

Climate models are currently worthless and dangerous.

Roy appears to have missed the point.

The hockey stick graph is the perfect illustration of Pat’s point.

The errors accumulate more rapidly with time so that the model output diverges exponentially with time.

A hockey stick profile is to be expected from Pat’s analysis of the flaws in climate models as they diverge from reality more and more rapidly over time.

Good analogy about forced balancing of financial accounts.

You never know what degrees of shenanigans might have been committed

The biggest shenanigan is the assumption that natural emissions (which are an order of magnitude greater than fossil fuel burning emissions) are balanced out by natural sinks over time, leaving anthropogenic emissions to “accumulate” in the atmosphere. The cold polar water sinks don’t know the difference. Both will be absorbed at the same rate. Atmospheric concentrations of CO2 have been rising because the rates of natural emissions have been rising faster than the polar water sink rates. On top of that, man’s burning emissions (excepting jets) are not likely to ever get to the polar regions, as absorption in clouds and rain will return them to the surface to become a small part of natural emissions.

“The biggest shenanigan is that assuming that natural emissions (which are an order of magnitude greater than fossil fuel burning emissions) are balanced out by natural sinks over time leaving anthropogenic emissions to “accumulate” in the atmosphere.”

They must balance. The first indicator is that they always did, at least during the Holocene, before we started burning and they went up 40%. But the mechanics of the “natural emissions” are necessarily cyclic. There are:

1. Photosynthesis and oxidation. In a growing season, plants reduce about 10% of the CO2 that is in the air. But that reduced material cannot last in an oxidising environment. A fraction is oxidised during the season (leaves, grasses etc); woody materials may last a bit longer. But there is no large long-term storage, at least not one that varies. The oxidation flux, including respiration and wildfire, must match the reduction over a time scale of a few years.

2. Seasonal outgassing. This is the other big “natural emission”. As water warms in the spring, CO2 is evolved, because it is less soluble. But the same amount of CO2 was absorbed the previous autumn as the water cooled. It is an annual cycle.

There is a longer term cycle involving absorption in cold polar water that sinks. It is still a finite sink, and balances over about a thousand years. And it is small.

Why choose the Holocene? Because it remained low due to the temperature being 5 degrees C lower, perhaps?

If it “must balance,” as you claim, how do you think the Earth arrived at 200 ppm from a historical high of 7000 ppm?

How did the earth get back to 2000ppm from 200ppm in the Permian?

It has never balanced.

“Why choose the Holocene”

Because it is a period of thousands of years when the climate was reasonably similar to present, and for which we have high-resolution CO2 measures. Before we started burning, CO2 sat between about 260 and 280 ppm. These “natural emissions” were in fact just moving the same carbon around. Once we started burning fossil carbon and adding to the stock in circulation, CO2 shot up to now about 410 ppm, in proportion to what we burnt.

Yes, over those longer cycles the cyclical exchange between water and air must balance because the total amount of carbon is not changed. But as long as the earth rotates there will be a natural daily cycle in the net rate of emissions/”rain return” that changes from day to day as a function of cloud cover. This net exchange rate changes from year to year as a function of ocean currents which affect the surface temperature of the water.

I’m sorry Nick, but that is rubbish. Are you saying cloudiness over the oceans occurs at identical levels each year, and the cloud-covered/cloud-free parts of the oceans are at the same temperature every year? That is a huge assumption and not one I can find any supporting evidence for.

There is no way on earth the annual co2 flux is a known known.

Agreed, 100%. The very notion that we “know” CO2 “sources” and “sinks,”

none of which are being measured, were in “balance,” particularly when supported by the scientific incompetence of comparing proxy records, which via resolution limitations or other issues don’t show the complete extent of atmospheric CO2 variability, with modern atmospheric measurements, which show every hiccup ppm change, and ignoring the “inconvenient” parts of the Earth’s climate history which have shown both much higher CO2 levels and much more range of variability, is absolute nonsense.

Stephen, when I taught accountancy I often reminded students that an apparent imbalance of $0.01 might disguise two errors, one of $100.00 and another of $100.01

Errors never compensate. In the simplest case, the square of the total error is equal to the sum of the squares of all errors. That is also why you can never get rid of the instrument measurement error, which Dr. Roy Spencer probably did in his UAH dataset. The total error is always bigger than the instrument error. If a satellite can only measure sea level down to 4 cm, the total error will always be larger than 4 cm. And the result will be 0±4 cm (or worse) – ergo meaningless.
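The quadrature rule stated above can be sketched in a few lines. This follows the standard root-sum-square treatment found in texts like Taylor’s, with illustrative numbers of my own choosing: the combined uncertainty can never fall below the largest single component (e.g. a 4 cm instrument limit).

```python
# Minimal sketch of combining independent uncertainties in quadrature:
# total^2 = sum of squares, so the total never falls below any one component.
import math

def combine_in_quadrature(*uncertainties):
    """Root-sum-square combination of independent uncertainties."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))

# Illustrative: a 4 cm instrument error plus two smaller independent errors.
total = combine_in_quadrature(4.0, 2.0, 1.0)  # cm
print(round(total, 2))  # 4.58 -- larger than the 4 cm instrument error alone
```
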

Really, everyone should at the very least read “An Introduction to Error Analysis” by John R. Taylor or “Measurements and their Uncertainties” by Hughes and Hase before taking part in this discussion.

WUWT should also publish the comment at Dr Spencer’s site by Dr Frank addressing this critique.

Yes!

********************************************************************************

As of ~3:05 PM, 9/11/19, Dr. Frank’s response to Dr. Spencer has not been published on WUWT.

Simultaneous publication of that response would have been best practice journalism.

Publication of Frank’s response at all would be basic fairness.

WUWT republishes some articles from Climate Etc, and from drroyspencer.com, but doesn’t reproduce the comments. If you want a comment cross-published nothing can stop you from copy-pasting it. I can assure you it is not difficult. Don’t forget to credit the author of the comment and that’s it.

Yes, this is entirely accurate. This is an interesting debate and a fine example of social media-based scientific debate.

Anyone, (including Dr. Frank) is free to repost his rejoinder(s) from another site.

Let us hope things remain civil between all parties.

Having me do it will get very messy and some of the +/- notation doesn’t work on Dr Spencer’s site. Feel free to delete this if Dr Frank posts it. Thanks for the explanation as to why you didn’t post it.

—————————————————————–

Pat Frank says:

September 11, 2019 at 11:59 AM

Hi Roy,

Let me start by saying that I’ve admired your work, and John’s, for a long time. You and he have been forthright and honest in presenting your work in the face of relentless criticism. Not to mention the occasional bullet. 🙂

Thanks for posting your thoughts here. I’m glad of the opportunity to dispel a few misconceptions. I’ll move right down to “What Frank’s Paper Claims.”

You start by stating that I take “an example known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) …”

If your “bias” means offset, it is misleading. The LWCF error is a theory error, not a bias offset. That is demonstrated by its pair-wise correlation among all the models. The (+/-)4 W/m^2 is a model calibration error statistic.

I do not assume “that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation.” That’s not an assumption.

It is justified several times in the paper on the grounds that it is an uncertainty in simulated tropospheric thermal energy flux. As such it conditions the simulated impact of CO2 forcing, which is also part of the very same tropospheric thermal energy flux.

Entry of the 4 W/m^2 into the emulation of projected global average air temperature is fully justified on those grounds.

You go on to write that I propagate “the error forward in time during his emulation model’s integration.” You’re implying here that the (+/-)4 W/m^2 is propagated forward. It’s not.

It’s the uncertainty in air temperature, consequent to the uncertainty in simulated forcing, that is propagated forward.

Then you write, “The result is a huge amount (as much as 20 deg. C or more) of resulting spurious model warming (or cooling) in future global average surface air temperature (GASAT).”

I must say I was really sorry to read that. It’s such a basic mistake. The 20 C (your number) is not a temperature. It’s an uncertainty statistic. Propagated error does not impact model expectation values. It is evaluated separately from the simulation.

And consider this: the 20 C uncertainty bars are vertical, not offset. Your understanding of their meaning as temperature would require the model to imply the simultaneous coexistence of an ice house and a greenhouse state.

One of my reviewers incredibly saw the 20 C as implying the model to be wildly oscillating between hothouse and ice-house states. He did not realize that the vertical bars mean his interpretation of 20 C as temperature would necessitate both states being occupied simultaneously.

In any case, Roy, your first paragraph alone has enough mistakes in it to invalidate your entire critique.

The meaning of uncertainty is discussed in Sections 7.3 and 10 of the Supporting Information.

You wrote that, “ The modelers are well aware of these biases, which can be positive or negative depending upon the model.”

The errors are both positive and negative across the globe for each model. This is clearly shown in my Figure 4, and in Figures throughout Lauer and Hamilton, 2013. The errors are not bias offsets, as you have them here.

You wrote, “If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.”

You’ve mistaken the calibration error statistic for an energy. It is not one. And you’ve assigned an implied positive sign to the error statistic, representing it as an energy flux. It isn’t one. It’s (+/-)4 W/m^2. Recognizing this is critical to understanding.

And let me ask you: what impact would a simultaneously positive and negative energy flux have at the TOA? After all, it’s (+/-)4 W/m^2. If that were a TOA energy flux, as you have it, it would be self-cancelling.

You wrote, “each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.”

And each of those models simulates cloud fraction incorrectly, producing an average calibration error of (+/-)4 W/m^2 in LWCF, even though they are overall energy-balanced. I point out in my paper that the internal climate energy-state can be wrong, even though the overall energy balance is correct.

That’s what the cloud fraction simulation error represents: an internally incorrect climate energy-state.

You wrote, “If what Dr. Frank is claiming was true, the 10 climate models runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling.”

No, they would not.

I’m sorry to say that your comment shows a complete lack of understanding of the meaning of uncertainty.

Calibration error statistics do not impact model expectation values. They are calculated after the fact from model calibration runs.

You wrote, “+/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance.”

I don’t assume that (+/-)4 W/m^2. It is the reported LWCF calibration error statistic in Lauer and Hamilton, 2013.

Second, offsetting errors do not make the underlying physics correct. The correct uncertainty attending offsetting errors is their combination in quadrature and their report as an uncertainty in the reported result.

There is no reason to suppose that errors that happen to offset during a calibration period will continue to offset in a prediction of future states. No other field of physical science makes such awful mistakes in thinking.

You are using an incomplete or incorrect physical theory, Roy, adjusting parameters to get spuriously offsetting errors, and then assuming they correct the underlying physics.

All you’re doing is hiding the uncertainty by tuning your models.

Under “Error Propagation …” you wrote, “If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. ”

Once again, you imposed a positive sign on an uncertainty error statistic. The error statistic is not an energy flux. It does not perturb the model. It does not show up at the TOA.

Your imposition of that positive sign facilitates your incorrect usage. It’s an enabling mistake.

I have run into this mistaken thinking repeatedly among my reviewers. It’s incredible. It’s as though no one in climate science is ever taught anything about error analysis in undergraduate school.

You wrote, “Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step.”

No, it does not. Several reviewers, including Prof. Zanchettin, raised this question. I answered it to his satisfaction.

The size of the simulation calibration uncertainty statistic will vary with the time over which it is appraised. When this is taken into account, the centennial uncertainty comes out the same every time.

And the time step is not assumed, as you have it. Lauer and Hamilton provided an annual mean error statistic. That annual average calibration error was applied to annual temperature time steps. None of that was assumed.

You should have looked at eqns. 5, and the surrounding text. Here’s the critical point, from the paper: “In equation 5 the step-wise GHG forcing term, ΔF_i, is conditioned by the uncertainty in thermal flux in every step due to the continual imposition of LWCF thermal flux calibration error.”

Eqn. 6 is a generalization of eqns 5.
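For readers following the arithmetic, the root-sum-square growth and the time-step invariance point above can be sketched numerically. This is a schematic illustration only, not Dr. Frank’s actual calculation; the 0.4 C per-annual-step uncertainty is an arbitrary illustrative number, not the paper’s value.

```python
# Schematic sketch of step-wise uncertainty propagation (cf. Eq. 6):
# independent per-step uncertainties combine in quadrature, and rescaling
# the calibration statistic to the step length leaves the centennial
# result unchanged.
import math

def propagated_uncertainty(u_step, n_steps):
    """Root-sum-square growth of an identical per-step uncertainty."""
    return math.sqrt(n_steps) * u_step

u_annual = 0.4  # deg C per annual step (illustrative, not from the paper)
century_annual = propagated_uncertainty(u_annual, 100)

# Re-appraise the same statistic on monthly steps: u_month = u_annual/sqrt(12)
u_monthly = u_annual / math.sqrt(12)
century_monthly = propagated_uncertainty(u_monthly, 1200)

print(round(century_annual, 3), round(century_monthly, 3))  # both 4.0
```
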

I’m sorry Roy. You’ve made one very fundamental mistake after another. Your criticism has no force.

Pat Frank says:

September 11, 2019 at 12:03 PM

Hmm… it seems that none of the corrective plus/minus signs I included have come through.

Everyone should please read, wherever a 4W/m^2 statistic occurs in my comments, it should be (+/-)4W/m^2.

It is always a joy to read something so powerfully correct. Pat Frank’s paper takes a fundamental insight and applies it thoroughly and relentlessly. It is a paradigm slayer. It doesn’t really matter that so many are, sadly, too obtuse to grasp that, yet. It will start dawning on more and more people. The honest and humble ones first, then others, till finally even the shameless grifters realize it’s time to fold up the tent and slink away.

CAGW is doomed to be remembered as a tale told by idiots, full of sound and fury, signifying nothing.

One must realize that even if absolute or relative errors cancel themselves out (via tuning or by chance) at the result level, the error bars (uncertainties) never cancel each other out but get compounded and thus increase the uncertainty. The LWCF error alone generates uncertainty high enough (two orders of magnitude higher) to make any interpretation due to CO2 forcing useless. Any other (even compensating) errors at the result level will only increase the uncertainty of the result, making the model even worse.

I assume there was no PDO or AMO, El Nino/La Nina in operation in the pre-industrial era, going by the results of the ten archived model runs?

BC:

Most of the models produce ENSO, so I would have to look at monthly time resolution of each. So I assume they are in there. I don’t know whether models produce AMO, I’ve never looked into it.

But, models can’t tell you how intense an El Niño will be or the timing, can they?

As far as I know, there are no models which can predict ENSO event intensity or timing. Or can somebody tell what the ENSO status will be after 10 years? Certainly not.

I see a glaring problem with the pre-industrial control runs. The anomaly is too constant, even if nothing else is changing. The 12 month average temperature should be bouncing around within at least a 1C range which is about 10x larger than the models report. For one month averages, the seasonal signature of the N hemisphere is very visible since the seasonal variability in the N is significantly larger than that in the S. The lack of enough natural chaotic variability around the mean is one of the problems with modeling. Another is the assumption that the seasonal behaviors of the N and S hemispheres cancel.

There are many more. For example, no model can reproduce the bizarre behavior of cloud coverage vs. temperature and latitude as shown in this scatter plot of monthly averages per 2.5 degree slice of latitude.

http://www.palisad.com/sens/st_ca.png

Notice how the first reversal occurs at 0C, where ice and snow melt away and clouds have a larger albedo effect, and how the second reversal, at about 300K, occurs at the point where the latent heat of evaporation is enough to offset incremental solar input. Since balance can be achieved for any amount of clouds, something else is driving how the clouds behave.

Interestingly enough, this bizarre relationship is exactly what the system needs to drive the average ratio between the SB emissions of the surface and the emissions at TOA to a constant 1.62 corresponding to an equivalent gray body with an effective emissivity of 0.62.

This raises the question of which makes more sense: a climate system with the goal of a constant emissivity between the surface and space that drives what the clouds must be, or a climate system whose strangely bizarre per-hemisphere relationship between cloud coverage, temperature and latitude just coincidentally results in a mostly constant effective emissivity from pole to pole?

The data that demonstrates that the planet exhibits a constant equivalent emissivity from pole to pole is here:

http://www.palisad.com/co2/tp/fig1.png

The thin green line is the prediction of a constant emissivity of 0.62 and each little red dot is the monthly average temperature (Y) vs. monthly average emissions at TOA (X), for each 2.5 degree slice of latitude from pole to pole. The larger dots are the averages over about 3 decades of data. Note that the relationship between temperature and clouds is significantly different per hemisphere, while the relationship between surface temperature and emissions at TOA is identical for both indicating a constant effective emissivity.

The cloud amount and temperature come directly from the ISCCP data set. The emissions at TOA are a complicated function of several reported variables and radiant transfer models for columns of atmosphere representing various levels of clear skies, cloudy skies and GHG concentrations. The most influential factor in the equation is the per slice cloud amount representing the fraction of the surface covered by clouds and which modulates the effective emissivity of each slice.

The calculated emissions at TOA are cross-checked as being within a fraction of 1 percent of the energy arriving at the planet (the albedo and solar input are directly available) when integrated over the entire surface and across a whole number of years, so if anything is off, it’s not off by much. Nonetheless, the constant emissivity still emerges, and any errors would likely push it away from being as constant as it is.

I think it’s a question of fairness to repost Pat’s answer to Roy’s review.

Error propagation analysis does not predict actual error in some process. It gives bounds to the reliability of whatever result the process delivers. In conventional surveying, systematic errors cannot be eliminated, and in any traverse the position of the next point relies on the accepted position of the last plus the error in set-up, angle measurement, and distance measurement. This “error ellipse” is a mathematical calculation that gives an answer in the linear dimensions of the survey; it is not the expected error but the positional area in which the found position can be expected to fall if no blunders were made. The ellipses define the areal extent of the uncertainty and grow with each set-up. As that area grows, the true relation of the just-measured point to the initial point cannot be reported as the simple differences of northings and eastings computed using the simple trigonometry of angles and distances, but must contain the plus-or-minus dimensions of the ellipse. At that point the simple difference is only a hypothesis with large uncertainty, until it is also measured to close the traverse and determine the true error, allowing distribution of that error throughout the traverse. The climate model cannot be “closed” like a traverse, so the accumulated uncertainty remains, like the error ellipse on the last traverse point, whose position in relation to the beginning is very uncertain. At no place in the traverse is the position of one point relative to the next outside normal bounds.

This seems very analogous to Dr. Frank’s analysis and conclusions.
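The traverse analogy can be put in numbers. This is a hypothetical sketch with illustrative values, not data from any real survey: each set-up contributes an independent positional uncertainty, so the ellipse radius of an unclosed traverse grows as the root-sum-square across legs even though every individual leg looks fine.

```python
# Illustrative traverse sketch: positional uncertainty of the final point
# grows as the root-sum-square of independent per-set-up uncertainties.
import math

def traverse_uncertainty(per_leg_sigma_m, n_legs):
    """Accumulated positional uncertainty (m) over an unclosed traverse."""
    return math.sqrt(sum(per_leg_sigma_m ** 2 for _ in range(n_legs)))

# With a 1 cm uncertainty per set-up, the radius doubles as legs quadruple:
for n in (1, 4, 16, 64):
    print(n, round(traverse_uncertainty(0.01, n), 3))  # 0.01, 0.02, 0.04, 0.08 m
```
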

“This ‘error ellipse’ is a mathematical calculation that gives an answer in the linear dimensions of the survey”

GCMs are not surveying. They are solving differential equations for fluid flow and heat transfer. Differential equations have many solutions, depending on initial conditions. Error shifts from one solution to another, and propagates according to the difference between the new and old paths. The solution and its error-induced variant are subject to the laws of conservation of mass, momentum and energy that underlie the differential equations, and the physics they express. Errors propagate, but subject to those requirements. None of that is present in Pat Frank’s simplistic analysis.

NS

I believe that is why Dr. Frank computed the emulations of the model outputs. These are linear equations and react to uncertainty much like my surveys do.

“react to uncertainty much like my surveys do”

Yes, they do. But neither has anything to do with GCMs and their physics.

Wow, speaking of “simplistic.”

Somehow you left-out parameter values and boundary conditions. And conservation of moisture.

The “laws” in models are represented by finite difference approximations to differential equations. They don’t have Mother Nature to force them into reality if they are in an unrealistic state. They can even be made to “blow up.”

“Somehow you left-out parameter values and boundary conditions. And conservation of moisture.”

Parameter values are part of the equation system. Boundary conditions – basically the air-surface interface – are part of the spatial system that you integrate over time. And conservation of mass means conserving the various components, including water.

“They don’t have Mother Nature to force them into reality if they are in an unrealistic state.”

No, that is a core function of the program.

“They can even be made to “blow up.””

They will blow up if you get the numerics of conservation wrong; it is a very handy indicator.

This can’t be right:

“The reason is that the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance. It doesn’t matter how correlated or uncorrelated those various errors are with each other: they still sum to zero, which is why the climate model trends in Fig 1 are only +/- 0.10 C/Century”

Why do the errors sum to zero? That’s a nice trick, if it’s possible. I mean, if you have one error (which obviously doesn’t sum to zero), all you need to do is make some more so that they cancel each other out!

I should think, rather, that the reason they appear to sum to zero in the models is because the models are tuned to produce a reasonable looking signal.

What Dr. Frank has demonstrated is that the error in a single component of the purported model is enough to make the entire thing meaningless. Yes, the models produce reasonable looking results. The point, however, is that they’re not arriving at that conclusion because they’re accurately modeling the real world.

DMA – I agree. I think this is an excellent analogy.

+42 +++ :<)

OK, I have now read the paper and unfortunately, Dr Spencer is wrong. The fact that the models are made to have zero “error” does not in any way change the fact that errors exist … only that the errors are made to zero out at an arbitrary point which is the present time period of the model. That is a temporary state of affairs which quickly disappears (but see below).

The only doubt I have is how to treat the ±4 W/m2 per year in projecting forward. The problem here is that I saw no analysis of the form of this variation. If that variation has frequency components with periods much longer than 100 years, it should be treated very differently than if the periods are all shorter than 100 years.

Indeed, if all the frequency components had periods longer than 100 years, Dr. Spencer would be (largely) right, but for the wrong reasons, because the calibration done up to the present would still have a significant nulling effect in 100 years.

The greatest sin in science is to be right for the wrong reason because it means that one does not really understand the phenomenon and one will almost certainly be wrong the next time — and it will be a surprise!

Let me start off by saying that I am not an experienced scientist, but a humble comp sci engineer – relative to probably everyone on this forum I don’t know squat about thermodynamics, atmospheric physics, etc, etc.

One thing that I am curious about, though, is what the effect on global temperature is from the combustion of fuels and not the emissions. Since the combustion of oil, coal, natural gas, (and uranium fission…) results in a large amount of heat generation, could it possibly be that a sizable portion of any temperature rise is not so much the result of CO2 emissions but actual waste heat?

This has been something that I’ve wondered about for quite some time…

UHI effect

We generate about 15 TW by combustion. That is about 0.03 W/m2. GHG forcing relative to preindustrial is estimated at about 2 W/m2.
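Those two figures are easy to sanity-check; a quick sketch of the arithmetic, with Earth's radius and the ~15 TW combustion figure as the only inputs:

```python
# Back-of-envelope check of the waste-heat estimate above: ~15 TW of
# combustion heat spread over Earth's surface, compared with the ~2 W/m^2
# of GHG forcing cited in the same comment.
import math

EARTH_RADIUS_M = 6.371e6
surface_area = 4 * math.pi * EARTH_RADIUS_M ** 2   # ~5.1e14 m^2

combustion_w = 15e12                               # ~15 TW global heat release
waste_heat_flux = combustion_w / surface_area      # W/m^2, comes out ~0.03

ghg_forcing = 2.0                                  # W/m^2, rough figure cited
ratio = ghg_forcing / waste_heat_flux              # forcing is ~70x larger
```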

Thanks for the clarification, Nick. Greatly appreciated!

ASSUMING “all other things held equal.” Which is NOT the case, and never will be.

Indeed. I say they mean when everything necessary and sufficient are equal. That said, how do we know that we know what’s necessary and sufficient, especially when conditions change “unexpectedly”, as they so often do.

@Moderator

The link at the top: “From Dr. Roy Spencer’s Blog” links to WUWT

Thanks, fixed.

That is absolutely what theory says. On the other hand, when you look at what happens after a strong El Nino, you usually see something that looks like ringing (example). That implies a system modeled by a second-order differential equation, i.e. the simple thermodynamic theory may not be accounting for all the processes involved.

My electronics-centered brain processes it thusly. If there are no energy storage components like capacitors or inductors, the response is algebraic and no differential equation is needed. If there is one capacitor or inductor (but not both), the response to a step input is the familiar capacitor charge/discharge curve, modeled by a first-order differential equation. If you have both a capacitor and an inductor, you can have ringing. That, you model with a second-order differential equation (link).

If you’re modeling a thermodynamic system, energy can be stored as thermal inertia. Most people will tell you that that’s the only energy storage mechanism you have to worry about. So, a first order differential equation and no ringing. The temperature stops increasing when it reaches whatever temperature is exciting the system.

Given the complexity of the Earth’s energy balance, I suspect there may be something like ringing because the heat transport is not just by conduction.
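The first-order versus second-order distinction above can be sketched numerically; this toy integration (arbitrary time constants, not a climate model) shows the monotonic RC-type response versus the overshoot-and-ring of an underdamped RLC-type system:

```python
# A minimal numeric sketch of the electronics analogy: a first-order
# (RC-like) system approaches a step input monotonically, while an
# underdamped second-order (RLC-like) system overshoots and "rings".
def first_order_step(tau=1.0, dt=0.001, t_end=10.0):
    y, ys = 0.0, []
    for _ in range(int(t_end / dt)):
        y += dt * (1.0 - y) / tau            # y' = (u - y)/tau, step input u = 1
        ys.append(y)
    return ys

def second_order_step(omega=2.0, zeta=0.2, dt=0.001, t_end=10.0):
    y, v, ys = 0.0, 0.0, []
    for _ in range(int(t_end / dt)):
        a = omega**2 * (1.0 - y) - 2 * zeta * omega * v   # step-driven y''
        v += dt * a                          # semi-implicit Euler: kick...
        y += dt * v                          # ...then drift
        ys.append(y)
    return ys

rc = first_order_step()
rlc = second_order_step()
# The first-order response never exceeds the step; the underdamped second-
# order response overshoots it before settling back toward 1.
```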

On whether calibration gets rid of errors.

Imagine a situation where you have a perfect (builder’s) level. You lay it down on a surface with a -4 mm per m unevenness, such that the level now doesn’t show level. You then “calibrate” the level so that it shows level (but on the section with the -4 mm/m error). Does this reduce the error? Obviously not! It merely masks the unevenness of the surface by introducing an extra error.

Now, instead of being improved by calibration, if the level is laid on a perfectly flat section it now shows +4 mm/m, and it can show up to an +8 mm/m error on a section with a +4 mm/m slope. So rather than reducing the error, this “calibration” actually increases the average error.
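In numbers, the level analogy might look like this (the slopes are the ones assumed above):

```python
# The builder's-level analogy in numbers: "calibrating" the level on an
# uneven surface hides one error by introducing another; it does not
# reduce the error at all.
true_slopes = [-4.0, 0.0, 4.0]      # mm per m, the surfaces being measured
calibration_offset = 4.0            # added so the -4 mm/m surface reads "level"

readings = [s + calibration_offset for s in true_slopes]
errors = [r - s for r, s in zip(readings, true_slopes)]  # reading minus truth

# Every reading is now biased by +4 mm/m: the -4 surface reads "level",
# a truly flat surface reads +4, and the +4 surface reads +8.
```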

“With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.”

Please give examples of when the temperature change is not due to an imbalance between energy gain and energy loss in a system. I have never seen the climate change industry ever admit to that fact.

I will give you one example – that undermines the entire case for CAGW (Catastrophic Anthropogenic Global Warming):

A source of new energy only increases the temperature of an object if the temperature of the emitting object is higher than the temperature of the absorbing body. If the temperature of the emitting object is lower than the temperature of the absorbing body then it does not matter how much energy is being emitted, the temperature of the absorbing body will not increase. The proof of this is that you can surround an object at room temperature with as many ice cubes as you like and the temperature of the body at room temperature will not go up.

This basic fact is ignored in the energy budget calculations behind all the climate models. They all assume that all sources increase temperatures. That is incorrect.

Since the temperature of the atmosphere is lower than the temperature of the earth’s surface, CO2 emissions from the atmosphere cannot increase the temperature of the surface.

Don’t respond with ‘It slows the cooling’. CAGW is based on the fear of maximum temperatures actually increasing, not minimum temperatures declining less.

Bernard, the only ones I can think of for the climate system are (1) phase change, such as heat going into melting ice, and (2) changes in the rate of energy transfer between the ocean and the atmosphere, which have very different specific heats (which is why a 3-D global average temperature of the whole land-ocean-atmosphere system has little thermodynamic meaning, as Chris Essex likes to point out). The temperatures can all change with no net energy gain or loss by the Earth system.

I would say that those two processes are not properly described by the phrase “few exceptions”. Phase changes are far more energy intensive than temperature changes.

Potential energy changes may also be significant.

The Sun’s radiation temperature is 5800 K.

Dr Spencer writes: ‘Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step. Dr. Frank has chosen 1 year as the time step (with a +/-4 W/m2 assumed energy flux error), which will cause a certain amount of error accumulation over 100 years. But if he had chosen a 1 month time step, there would be 12x as many error accumulations and a much larger deduced model error in projected temperature. ‘

This criticism is valid if the ±4 W/m2 is not given as applying to a specific time period, i.e. as W/m2 per year (or divided by 12 if propagated over months). I assumed Dr Frank meant this to be the case, but did not find it explained in the paper. But then, I cannot do the math.

I also do not understand why running a model on a supercomputer with many iterations supposedly gives a more accurate picture of climate projections than a few iterations (1/year, say) if you have all the parameters correct. What are the many iterations doing?
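On the time-step point quoted above: if a fixed per-step uncertainty is accumulated in root-sum-square fashion, the result does depend on the number of steps unless the ±4 W/m2 is explicitly a per-year figure. A minimal sketch, assuming simple quadrature accumulation of the kind described:

```python
# A sketch of the time-step issue Dr. Spencer raises: propagating an
# identical per-step uncertainty in quadrature (root-sum-square) gives a
# result that depends on how many steps span the same 100 years, unless
# the uncertainty is explicitly scaled to the step length.
import math

def propagated(u_per_step, n_steps):
    """Root-sum-square accumulation of an identical per-step uncertainty."""
    return math.sqrt(n_steps * u_per_step ** 2)  # = u_per_step * sqrt(n_steps)

u_step = 4.0                         # the +/-4 W/m^2 taken as a per-step value
yearly = propagated(u_step, 100)     # 100 annual steps over a century
monthly = propagated(u_step, 1200)   # the same century in monthly steps

# Same century, same +/-4 figure, but sqrt(12) ~ 3.46x more accumulated
# uncertainty when the step is a month: the substance of the criticism.
```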

Regardless of who has the best take on the topic, there is a more important point. This is the type of scientific discussion, back and forth if you will, that should be part of every topic related to the “climate change” debate. Something that has been sorely lacking for a long time. The discussion is actually refreshing in its lack of dogmatic bullshit and its reliance on actual scientific concepts. Please proceed.

” The discussion is actually refreshing in its lack of dogmatic bullshit and reliance on actual scientific concepts. Please proceed.”

Two thumbs up, Greg!

I’ve always wondered how the models explain the little ice age, the medieval warm period, the dark ages, etc. Changes in CO2 don’t work. I also wonder what temperature records they use to calibrate the models. Given that adjustments are highly correlated with CO2, adjusted data is suspect.

C’mon man, those are just anecdotal. Can’t believe those old things, y’know.

Just say the little ice age was a European event, not worldwide, as IPCC did.

Then there were comments from Japan and other places they had a little ice age as well.

Quick meeting by PR people at IPCC headquarters.

Stop denying the obvious.

Act dumb.

Ignore questions.

Divert attention to another issue.

I wish I had the time to discuss the implementation of spatial error in the climate data sampling and gcm models at this level.

I was just taking another look at the paper to check for any assessment of long term frequency components and I spotted what appears to be a false assumption. The paper says that: “If the model annual TCF errors were random, then cloud error would disappear in multi-year averages. Likewise, the lag-1 autocorrelation of error would be small or absent in a 25-year mean. However, the uniformly strong lag-1 autocorrelations and the similarity of the error profiles (Figure 4 and Table 1) demonstrate that CMIP5 GCM TCF errors are deterministic, not random”

This surely is not correct, because if the errors have frequency components with periods longer than 25 years (perhaps due to the solar sunspot cycle, the AMO, or similar long term perturbations), then there would be lag-1 autocorrelation due to errors with long-period components. So the assertion that they demonstrate deterministic, not random, errors seems incorrect.

Likewise, when the author says: “For a population of white noise random-value series with normally distributed pair-wise correlations, the most probable pair-wise correlation is zero.” … this again does not hold for pink noise and so the assertion “but instead imply a common systematic cause” is incorrect.

However, this false inference of “errors in theory” does not change the fact that the roughly ±4 W/m2 per year error still exists; it only means that this may not be an error of theory but what would be called “natural variation”, which is not accounted for in the model.
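The claim that lag-1 autocorrelation need not imply a deterministic error can be illustrated with synthetic data; here a hypothetical 60-year cycle (standing in for an AMO-like oscillation) is added to white noise:

```python
# A quick check of the point above: white noise has near-zero lag-1
# autocorrelation, but adding a slow multi-decadal component produces a
# strong lag-1 autocorrelation without any "deterministic" model error.
import random
import math

random.seed(7)

def lag1_autocorr(xs):
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

years = range(100)
white = [random.gauss(0, 1) for _ in years]
slow = [math.sin(2 * math.pi * t / 60) for t in years]    # 60-year cycle
combined = [0.3 * w + s for w, s in zip(white, slow)]     # noise + slow cycle

r_white = lag1_autocorr(white)
r_combined = lag1_autocorr(combined)
# r_combined is large because the slow cycle dominates; r_white is near zero.
```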

This analysis strikes me as strange. Without disrespect to the author, I feel the analysis above is made on the basis of a category error and does not address the main point of the original paper.

The category error is to attribute a propagated error as being an attribute of the output of the calculation made.

These are fundamentally different, conceptually. The analysis above attributes the property of “uncertainty” to the output value, and claims that if a calculated result is uncertain, then the calculation will yield similarly variable output values. It is a synecdoche: the model output alone is being claimed to represent the whole of the result, which in fact consists of two attributes: the value and a separate property, its uncertainty. It is normal to present the result of a calculation as some numerical value followed by a ±n value. The ±n has no relationship to the calculated value, which could be zero or a million. The model output cannot “contain” the attribute “uncertainty” because the ±n value is an inherent property of the experimental apparatus, in this case a climate model, not of the numerical output value.

The fact that the outputs of a series of model runs are similar, or highly similar, has no bearing on the calculated uncertainty about the result. The claim that because the model results do not diverge to the limits of the uncertainty envelope they are therefore “more certain” is incorrect. The uncertainty is an attribute of the system, not its output. The output is part of the whole, not the whole answer.

It appears that the calculations are deliberately constrained by certain factors, are very repetitive using values with little or no variability, or represent a system with a low level of chaos. There are other possible explanations, such as that the same inputs give the same outputs because the workings are “mechanical”.

Analogies are best.

Suppose we multiply two values obtained from an experiment: 3 × 2. We are quite certain about the value 3 but uncertain about the value 2. The result is 6, every time. Repeating the calculation 100 times does not make the value “6” more certain, because the value of the 2 remains uncertain.

If I tell you that the 3 represent a number of people and the 2 represents the height of those people rounded to the nearest metre, selected for this experiment because they are in the category of “people whose height is 2 metres, rounded to the nearest metre”, you can visualise the problem. Most people’s height is 2 m, if rounded to the nearest metre. That does not mean that 3 times their rounded height value equals their actual combined height. The answer is highly uncertain, even if it is highly repeatable, given the rules.

It is inherent in the experimental calculations that the uncertainty about any individuals height is half a metre, or 25% of value. The result is actually a sum of three values, each with an uncertainty of 0.5m. The uncertainty about the final answer is:

The square root of the sum of the squares of the uncertainty values:

0.5^2 = 0.25

0.25 × 3 = 0.75

sqrt(0.75) ≈ 0.866 metres

Just because the calculated value is always 6 does not mean the uncertainty about the result is less than ±0.866 m.
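The worked example above, as a root-sum-square sketch:

```python
# Crispin's worked example in code: three heights, each "2 m" only to the
# nearest metre (so each carries a +/-0.5 m uncertainty). The computed
# total is always 6, but the root-sum-square uncertainty is unchanged by
# however many times the calculation is repeated.
import math

n_people = 3
rounded_height = 2.0     # metres, rounded to the nearest metre
u_each = 0.5             # metres, half the rounding interval

total = n_people * rounded_height                 # always 6.0, every run
u_total = math.sqrt(n_people * u_each ** 2)       # sqrt(0.75) ~ 0.866 m

# Repeating the multiplication doesn't shrink u_total: the uncertainty is
# a property of the rounding, not of the (perfectly repeatable) result.
```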

One climate model might give a series of results that cluster around a value of 4 deg C warming per doubling of CO2. If the uncertainty about that value is ±30 C, it is right to claim that model is useless for informing policy.

Another model might give values clustered about 8 deg C and have an uncertainty of ±40 C. That is no more useful than the first.

The uncertainty of the model output is calculated independently of the values produced. Even if ALL climate models produced exactly the same result, the uncertainty is unaffected. Even if the measured values matched the model over time (which they do not) it would not reduce the uncertainty about projections because it is not a property of the output.

Climate Science is riddled with similar misconceptions (and worse). CAGW is for people who are not very good at math. Dr Freeman Dyson said that some years ago. Unlike climate models, his prediction has been validated.

Crispin

You said, “The fact that the outputs of a series of model runs are similar, or highly similar, has no bearing on the calculated uncertainty about the result.” I believe I have read that the models test the output of the steps for physicality, and apply corrections (fudge factors) to keep things within bounds. If that is the case, it isn’t too surprising that the runs are similar! With an ‘auto-correct’ built into the models that is independent of the ‘first principles,’ it would explain why the model results are similar and why they don’t blow up and show the potential divergence that Frank claims.

Crispin:

100 , +/- 0.01

I think you [Crispin] have captured the issue well. “…the ±n value is an inherent property of the experimental apparatus, in this case a climate model [I would revise to a “set of climate models”], not the numerical output value.” I believe this is precisely the point Dr. Frank has tried repeatedly to make, but which seems to keep being ignored.

And, you go on, “The uncertainty of the model output is calculated independently of the values produced. Even if ALL climate models produced exactly the same result, the uncertainty is unaffected. Even if the measured values matched the model over time (which they do not) it would not reduce the uncertainty about projections because it is not a property of the output.”

This is also how I read Dr. Frank’s paper (and his responses). The repeated criticism that if the measure of uncertainty were so large, it would be seen in model outputs demonstrates a fundamental misunderstanding of what the propagated error represents.

+(plug in whatever value you need to convey you are greatly impressed)

LOL, and + [As many more]

And … now I realise even I’ve made a mistake!

Dr Frank has asserted that a strong lag-1 autocorrelation demonstrates a deficiency of theory. I then said that long term variation could explain this without a deficiency of theory. However, I’d forgotten the important thing that natural variation isn’t distinct from what Dr Frank describes as theory.

To take a simple example, the long term Atlantic Multi-decadal Oscillation can be both described as “natural variation”, but also “theory”.

This is important, because the dividing line between “natural variation” and “theory” isn’t one enshrined in physics, but is instead one that is defined by the system boundaries we imply for the system. So, e.g. AMO could be described as “natural variation” if the boundary were set such that it did not fit our theoretical model. But if we can include it within our theory, then it is no longer natural variation.

Apologies for making this rather rudimentary error in climatic thermodynamics.

Dr. Spencer writes: With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.

Does not this statement depend on the averaging time? For example, the earth has a very large capacitor called the ocean, which may take a thousand years to mix completely. Let’s suppose we have a true energy imbalance that persists for a few hundred years. It still might not show up in a change in temperature, due to the time taken for the ocean to mix.

A more complete version of the earth’s energy balance would be:

Input-Output = Accumulation + Generation

The more simplistic Input = Output is the case only when Accumulation and Generation equal (or nearly enough equal) zero. Postulating heat gain/loss by the not-insubstantial water mass on the planet’s surface might cast some doubt on such an assumption.

The Generation term might be attributable to the heat transferring out from the core. However, that heat flux might be insubstantial compared to the solar gain/loss and might properly be assumed to be near zero.
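The accumulation term can be illustrated with a one-box model; all the coefficients below are illustrative assumptions, not estimates for the real Earth:

```python
# A one-box sketch of "Input - Output = Accumulation": with a large heat
# capacity (an ocean-like reservoir), a sustained imbalance shows up only
# slowly as temperature, so Input != Output for a very long time.
C = 1.0e10        # J/m^2/K, ocean-like column heat capacity (assumed)
LAMBDA = 1.2      # W/m^2/K, radiative restoring strength (assumed)
FORCING = 2.0     # W/m^2, step imbalance applied at t = 0 (assumed)

dt = 86400.0 * 30                      # one-month time step, seconds
T = 0.0                                # temperature anomaly, K
history = []
for _ in range(12 * 200):              # integrate 200 years
    imbalance = FORCING - LAMBDA * T   # Input - Output, W/m^2
    T += dt * imbalance / C            # the accumulation warms the box
    history.append(T)

T_equilibrium = FORCING / LAMBDA       # ~1.67 K once Input = Output again
# After 200 years the box is still well short of equilibrium: much of the
# imbalance has gone into accumulation, not yet into temperature.
```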

I note the claim of temperature rise after CO2 increase. Is it not the other way around? Please explain.

It’s a two-way process, Ray. Warming causes an increase in atmospheric CO2 concentration – less being absorbed by a warmer ocean – but a rise in CO2 concentration also causes warming. The former is more pronounced, but that does not mean the latter doesn’t exist.

In a nutshell, you have just described the most telling flaw in the CAGW narrative. If this were true, then any warming for any reason would result in runaway warming! It doesn’t, never has, and never will. That’s the pin this whole debacle is hung on. Realising that this is not true, and cannot be true, gives you the understanding that the CAGW hypothesis is also not valid.

Anyone who cannot see this, is not fit to practice science in any way.

Plus [insert really big number here]

This is “IT” in a nutshell. The ultimate falsification of the whole “climate catastrophe by CO2 emission” meme. When the FACT that it was temperature that increases or decreases FIRST, and atmospheric CO2 level that FOLLOWS, in the ice core reconstructions was revealed, and begrudgingly acknowledged, they trotted out this CO2 “contribution” canard, arguing that once the (give or take) 800-year time lag had elapsed, and BOTH temperature AND atmospheric CO2 were rising, CO2 was “contributing to” the amount of warming.

HOWEVER, if one can read a graph it can easily be seen that this “argument” is pure nonsense. FIRST, no “acceleration” in the RATE of warming occurs at the point where the time lag has elapsed and CO2 begins to rise. SECOND, even if one could argue that the resolution of the graph wasn’t sufficient to show the minuscule CO2 “contribution,” there is one place where the supposed “contribution,” or more correctly, the complete lack thereof, CANNOT hide: when the (excuse me) REAL cause of the temperature rise stops, what we SHOULD see is that, as long as atmospheric CO2 continues to rise, the temperature continues to rise at a reduced rate (that reduced rate being the, you know, CO2-related “contribution” to the warming). Instead, what we see is this: temperatures start falling while atmospheric CO2 levels continue to rise, and then, after the same time lag, CO2 levels begin to fall, once again FOLLOWING temperatures. And THIRD, temperatures always START rising when atmospheric CO2 is LOW, and START falling when atmospheric CO2 is high, which tells you that atmospheric CO2 is absolutely NOT a temperature or “climate” driver at all. TEMPERATURE drives atmospheric CO2. Atmospheric CO2 DOES NOT “drive” temperature.

Having read comments here and at Dr Spencer’s site, I come away with the thought that it is sort of like the arrow on FedEx trucks ….. some are able to see it, some cannot ….

😉

The concept of not knowing is a very difficult one for many people. How do you formally deal with something you know you don’t know? It is particularly difficult for academia, because academia prides itself on knowing everything (even though the evidence shows they fail spectacularly).

For this reason, many academics tend to have a mental model which divides the world into things they can measure and things they can’t, which they call “error”. That division works well when the unknown fits a model of random white noise, but when it includes long term unknown trends, this simple conceptual model breaks down and is more a hindrance than a help.

In contrast, real-world engineers & scientists are steeped in a real world full of things that cannot be known, not for some academic reason, but because of things like the production department is deliberately fiddling the figures, or it just isn’t worth the time or effort to fully understand a system which would be cheaper to scrap and buy anew. As such, real world engineers and scientists, have no problem with the concept that they don’t know everything and have usually found ways to adapt theory to make it workable in real-world situations with large amounts of unknown. As such real world engineers tend to be able to cope with “unknowables” which include not just white noise, but long term trends and even include deliberate manipulations.

That I think is why some can “see” … and others can’t. It’s also why I think that most ivory-tower academics are a real hindrance to understanding the subject of climate.

Mike, that is a great post. It articulates my thoughts on the situation far better than I ever could.

There is, I think, a simple analogy to explain why most climate scientists think that Frank’s paper is wrong. Suppose I try to predict the trajectory of an object moving with zero friction and constant velocity. Newton’s laws say that it will obey the equation:

x(t) = x0 + v*t

where x0 is the initial position, v is the velocity and t the time. Now if I measure the initial position to an accuracy of 1 cm, then my final answer will be out by 1 cm independent of the time. If on the other hand there is a 10% error in the velocity, then the error in the position will grow linearly with time. There is thus a crucial distinction between errors in the initial position and the initial velocity in terms of the accuracy of my predictions.

Dr. Spencer and most other climate scientists think that the errors in the forcings that Frank mentions are similar to an error in the initial position, i.e. they will result in a fixed error independent of time. In contrast, Frank claims that the error in the forcing is similar to an error in the velocity, and so the predictions will become increasingly inaccurate with time.

Now the reason that Dr. Spencer and others think what they do is that if you run the global climate models with different values of the forcings due to cloud cover, the models converge to different temperatures after some time and are stable after that. None of the global climate models experiences continuously increasing temperatures. Hence errors in the cloud forcings produce a constant error irrespective of time. And hence the errors do not grow in the fashion claimed by Frank.
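Izaak's x(t) = x0 + v*t analogy is easy to tabulate; a small sketch comparing a 0.01 initial-position error with a 10% velocity error:

```python
# Izaak's analogy in numbers: for x(t) = x0 + v*t, an error in x0 gives a
# constant offset, while an error in v gives an error growing with time.
def position(x0, v, t):
    return x0 + v * t

x0, v = 0.0, 10.0
times = [1.0, 10.0, 100.0]

# Case 1: 0.01 error in the initial position, velocity exact.
x0_errors = [abs(position(x0 + 0.01, v, t) - position(x0, v, t)) for t in times]

# Case 2: 10% error in the velocity, initial position exact.
v_errors = [abs(position(x0, 1.10 * v, t) - position(x0, v, t)) for t in times]

# x0_errors stays at ~0.01 regardless of t; v_errors grows linearly in t.
```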

Izaak, I like your analogy to explain how the propagation of errors will (possibly) vary as a function of starting assumptions.

However, you stated “None of the global climate models experience continuously increasing temperatures.” Really? That’s not what I see when I look at plots of the IPCC CMIP5 global model forecasts: all 90-plus models (and they do have different cloud cover forcings) project continuously increasing global temperatures out to 2050 and beyond. See, for example, https://www.climate-lab-book.ac.uk/comparing-cmip5-observations/

Gordon,

I should have been more precise. In the absence of increased forcing the global climate models reach a steady state. The examples you present are what happens for increasing CO2 levels and thus increasing forcings.

Gordon,

The temperatures used for calibration are all adjusted. The adjustments are highly correlated with CO2, which makes little sense to me. It’s not surprising that the results all trend higher in time with CO2. It will be interesting to watch what happens when the AMO/PDO turn to negative phases and the sun stays relatively quiet compared to the modern solar maximum.

Izaak: You may be right about what climate scientists are saying, but I think you missed Pat Frank’s point. He’s saying (per your analogy) that the velocity might have an uncertainty of say +/- 10% of the value used in the calculation. Therefore, any estimate of future position is subject to an uncertainty derived from the uncertainty of the velocity. Now once the time has passed and if you can measure the final position, you can then calculate what the actual velocity was. But you can’t ignore the uncertainty that exists at the time you started in making your prediction of the future.

Rick,

I would agree that Pat Frank, in my analogy, is claiming that there is an error in the velocity. However, Roy Spencer and others claim that the error he has highlighted corresponds to an error in the starting point. The reason for that is that if you took a Global Climate Model and ran it three times, once with a central estimate for cloud forcing, once with the lowest estimate and once with the highest estimate, then after 100 years, and assuming no additional forcing (i.e. constant greenhouse gases), the three runs would converge to three different equilibrium temperatures. They would not continue to diverge as Pat Frank’s model predicts. Thus a change in forcing corresponds to a change in x0 in my analogy and not a change in the velocity.

“I would agree that Pat Frank is in my analogy claiming that there is an error in the velocity.”

The fallacy here, as with Pat, is again just oversimplified physics – in this case motion with no other forces. Suppose you glimpse a planet, then try to work out its orbit. If you get the initial position wrong, you’ll make a finite error. If you get the velocity wrong, likewise.

Or starting a kid on a swing. You’d like to release it so as to set up a desired amplitude. If you get the starting position wrong, you’ll make a limited error which won’t compound. Likewise with velocity. These are two simple cases where the analogy fails. It certainly fails for GCMs.

If either your initial position or velocity are incorrect enough for said planet, you may very well calculate it dies a fiery death in its parent star or escapes it and wanders off in space. Those wouldn’t be “finite errors”.

Nick: “If you get the initial position wrong, you’ll make a finite error. If you get the velocity wrong, likewise.”

I think I don’t quite understand what you’re trying to say Nick. But …

I’m under the misapprehension that I actually somewhat understand how numerical integration of Newton’s Laws of motion works. Basically, you start with a position estimate and a velocity estimate. You step forward a short time and compute a new position. And you compute an acceleration vector based on a model of forces acting on the object (gravity, drag, radiation pressure) using that new position and the velocity as inputs. You have to do the acceleration step. Without it, you’ll have an object traveling in a straight line rather than in an “orbit”. Then you use the acceleration vector to adjust the velocity and use the adjusted velocity to compute the next position.

There’s usually some additional logic to manage step sizes, but that’s not relevant to this discussion.

One might think that position errors would be constant (e.g. 10km in-track) and that velocity errors would increase linearly over time. But accelerations are position dependent, so the position error will usually increase over time because position discrepancies affect accelerations and acceleration affect velocity. That means that given a bad initial position, velocities will be (increasingly) wrong even if they started off perfect. How wrong? Depends on the initial position estimate, And the initial velocity estimate. And the acceleration model.

And yes, it’s actually more complex than that. For example, it’s probably possible for errors to decrease with time over some time intervals. I’m not sure that a simple general error analysis is even possible.
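Don's point, that position-dependent accelerations make an initial-position error grow, can be sketched with a toy leapfrog integration of a two-body orbit (unit GM, circular reference orbit; the perturbation size is arbitrary):

```python
# A leapfrog sketch of the point above: in an inverse-square orbit the
# acceleration depends on position, so a small initial-position error
# feeds into the velocities and the two trajectories drift apart in phase
# over many orbits, rather than staying a constant distance apart.
import math

def integrate(x, y, vx, vy, dt=0.002, n_orbits=50):
    """Leapfrog integration of a unit-GM orbit; returns the final (x, y)."""
    steps = int(n_orbits * 2 * math.pi / dt)
    r3 = (x * x + y * y) ** 1.5
    vx -= 0.5 * dt * x / r3          # initial half-step kick
    vy -= 0.5 * dt * y / r3
    for _ in range(steps):
        x += dt * vx                 # drift
        y += dt * vy
        r3 = (x * x + y * y) ** 1.5
        vx -= dt * x / r3            # kick (inverse-square gravity)
        vy -= dt * y / r3
    return x, y

ref = integrate(1.0, 0.0, 0.0, 1.0)     # circular orbit of radius 1
pert = integrate(1.001, 0.0, 0.0, 1.0)  # same velocity, 0.001 position error

separation = math.hypot(pert[0] - ref[0], pert[1] - ref[1])
# The 0.001 initial offset grows into an order-unity separation after 50
# orbits: the slightly different orbit has a slightly different period, so
# the phase error accumulates.
```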

I have no idea if analogous situations prevail in climate modelling. My guess would be that they do.

In any case, I’m unclear on what you’re trying to say and I suspect you may have picked a poor example.

Don,

What I’ve been saying more generally, eg here, is that how errors really propagate is via the solution space of the differential equation. An error at a point in time means you shift to a different solution, and the propagation of error depends on where that solution takes you.

The case of uniform motion is actually a solution of the equation acceleration = y” = 0 (well, a 3D equivalent). And the solution space is all possible uniform velocity motions. An error sets you on a different such path, and you will diverge from where you would have gone if the error was in velocity (but not position).

For orbit, the solutions are just the possible orbits, and they depend on energy levels, with a bit about eccentricity etc. So an error just takes you to a different orbit. Now it’s true that the equivalent velocity error will involve an increasing phase shift, but that can only take you finitely far from where you were. Compared to the uniform motion case, it’s a 3D version of harmonic motion, y”=-y.

It isn’t a particularly good example, because the solutions are limited in space, rather than having some restoring force that actually brings them closer after error, as could happen. I’m really just trying to point out that there is a range of ways error can propagate, and they are very dependent on the differential equation and its solution space.

Nick: finite error vs infinite error?

Only with a pendulum, maybe. If the motion is parabolic, then the velocity error compounds. At least that’s the way it works in artillery. The strike of the round is off by a constant if the initial position is off. The strike is off by an increasing amount with time for a velocity error. The only things that make velocity error constant are tube elevation, initial velocity and gravity.
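The artillery point can be checked with a drag-free toy trajectory (simple Euler stepping; the 300 m/s launch values and error sizes are purely illustrative): an initial position error shifts the strike by a constant, while a velocity error shifts it by an amount growing with flight time.

```python
def impact_range(x0, vx, vy, g=9.8, dt=0.01):
    # Drag-free shell, stepped with simple Euler until it lands
    x, y = x0, 0.0
    while True:
        x += vx * dt
        y += vy * dt
        vy -= g * dt
        if y <= 0.0 and vy < 0.0:
            return x

base = impact_range(0.0, 300.0, 300.0)             # nominal round
pos_err = impact_range(10.0, 300.0, 300.0) - base  # tube 10 m out of position
vel_err = impact_range(0.0, 306.0, 300.0) - base   # muzzle velocity 2% high

# Position error -> constant 10 m miss; velocity error -> miss grows
# with flight time (~61 s of flight turns 6 m/s into hundreds of meters).
print(pos_err, vel_err)
```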

Nick: “I’m really just trying to point out that there is a range of ways error can propagate, and they are very dependent on the differential equation and its solution space.”

Yes, that’s fine. I haven’t read every post in these threads in detail. And some I have read, I don’t really understand all that well. But I doubt many folks would argue error propagation is always trivially simple.

“There is I think a simple analogy to explain why most climate scientists think that

Frank’s paper is wrong.”

Have you communicated with most climate scientists, Izaak?

Climate models serve to do one thing – justify the existence of the keyboard jockeys. That’s it and that’s all. In reality they lead to academic discussion which leads to precisely nothing due to the inevitable and insurmountable accuracy problem indicated elsewhere. Their value does not come remotely close to that of historical observation.

A quick observation:

The dueling analyses here can be attributed to whether the climate temperature model is (thermodynamically) a state function or a path function. Many of the models that have been proposed have assumed (or implicitly assume) that the global temperature equals some historic steady-state temperature (I’d be leery of using the exalted thermodynamic term equilibrium in reference to climatic behavior) plus some additive term for GHG forcing. Thus, this year’s climate “temperature” is a determinative outcome of the current CO2 (the state variable). This year’s temperature is independent of last year’s temperature (and GHG forcing) and has no bearing on next year’s temperature/forcing. If the model has such a thermodynamic construct, then there can be no accumulation of error.

However, if the thermodynamics have elements of a path function (which is often the case for systems with heat flows (Q) under non-isentropic conditions), then the current conditions are dependent on the pathway of the variables used to get here. All the nice heat cycles featured in various thermodynamic textbooks are prime examples. Consequently, this year’s temperature is dependent upon last year’s conditions and will affect next year’s outcome, and error could accumulate.

Which is right? Beats me. As both authors have pointed out, the pertinent details have been either lightly explored or ruthlessly ignored.

The fact that they are using a transient calculation indicates that it is a path function.
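The state-function vs. path-function distinction above can be illustrated with two toy models (the coefficients and noise levels are made up purely for illustration): in the first, each year’s output depends only on the current forcing; in the second, each year builds on the last, so per-year errors accumulate.

```python
import random
random.seed(0)

N_YEARS = 100
FORCINGS = [1.0] * N_YEARS  # constant toy forcing

def state_model(err):
    # "State function": each year's temperature depends only on the
    # current forcing, so each year's error stands alone.
    return [0.5 * f + random.uniform(-err, err) for f in FORCINGS]

def path_model(err):
    # "Path function": each year builds on the previous year, so each
    # year's error is carried forward and can accumulate.
    temps, t = [], 0.0
    for f in FORCINGS:
        t += 0.05 * f + random.uniform(-err, err)
        temps.append(t)
    return temps

def final_year_spread(model, runs=300, err=0.1):
    # Standard deviation of the final-year value over many runs
    finals = [model(err)[-1] for _ in range(runs)]
    mean = sum(finals) / runs
    return (sum((x - mean) ** 2 for x in finals) / runs) ** 0.5

s_state = final_year_spread(state_model)
s_path = final_year_spread(path_model)
print(f"state-function spread: {s_state:.3f}, path-function spread: {s_path:.3f}")
```

With the same per-year error, the path-function spread at year 100 comes out roughly ten times the state-function spread (growing as the square root of the number of years), which is the accumulation the comment describes.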

Are we witnessing “the beginning of the end” of (pal) peer review?

This pair of threads may be historic, and noted for bringing intelligent climate discussions to a general audience (I’m a BSME with an MBA) who can listen to (and partially understand) the back and forth between true scientists (who clearly respect each other despite disagreement).

Who knows what we would now be experiencing if Steve McIntyre and Dr. Mann had had the same back and forth dialogue in a similar venue over several months, with respected advocates from both sides chipping in (Dr. Lindzen, Chris Monckton, Dr. Curry, Nick Stokes, Kerry Emanuel, our own Mosh and Willis, et al.)

The ignoranti (speaking for myself) can even ask questions and both Drs. Spencer and Frank have obliged.

What would be the best way to leverage (and extend) such dialogue (vs an interview with Greta) for the citizens of the US and other nations to have the requisite information to come to an objective consensus on policy decisions? (It was noteworthy to me that very few of the comments in both threads were political.)

In another post, I claimed the 3 most underused words in the English language are “I DON’T KNOW”.

If we can re-establish “Nullius in Verba” there is a chance that those of us skeptical of a catastrophic future can make a good case to the residents of the developed world that they don’t have to throw away their children’s and grandchildren’s future on alarming but unproven threats of future catastrophe.

Dr. Frank has calculated extremely wide future temperature uncertainty bounds for the models. He points out repeatedly in the discussions on the net that these are ‘uncertainty bounds’, not possible ‘real’ future temperatures that might actually happen, but rather the uncertainty of the model’s predictions when a certain kind of error is propagated forward in a random fashion.

He also said that the only way to validate whether those wide uncertainty bounds were reasonable is, as is the case with any measurement uncertainty, to compare the actual performance of the models to the reality of the measured values that the models are predicting.

If a predicted uncertainty range is statistically correct, multiple models incorporating that potential error should spread themselves rather widely inside the envelope of the predicted error. One in 20 should, on average, go outside the error bounds.

If the models over time do not scatter widely around those error bounds, and if one in 20 doesn’t go outside them, the calculated uncertainty bounds are not predicting the true uncertainty of the models.
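The “1 in 20” logic above is just the coverage property of a 95% uncertainty band. A quick Monte Carlo sketch (assuming, purely for illustration, that the per-step error behaves as independent Gaussian noise, which is the random-walk case the propagation argument describes):

```python
import random
random.seed(1)

n_steps, sigma, runs = 100, 0.1, 10000
# For a random walk of n independent Gaussian steps, the 95% bound on
# the final displacement is 1.96 * sigma * sqrt(n).
band = 1.96 * sigma * n_steps ** 0.5

outside = 0
for _ in range(runs):
    x = 0.0
    for _ in range(n_steps):
        x += random.gauss(0.0, sigma)
    if abs(x) > band:
        outside += 1

frac = outside / runs
print(f"fraction of runs ending outside the 95% band: {frac:.3f}")  # ~0.05
```

If the band is a correct 95% uncertainty, about one run in twenty ends outside it; if real model runs cluster far inside a claimed band, that is evidence the claimed band overstates their spread (though not necessarily their accuracy).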

I think we already know that none of these models has come close to hitting such wide error bounds over the time that they have already been run. It’s true that the models are running higher than reality, but some are close. They are deviating somewhat from one another. But none seem to be deviating at the kind of rates consistent with the magnitude of uncertainty demonstrated by Dr. Frank. Nor is the actual temperature of the planet.

Consequently the error magnitude between model and actual temperatures seems, to date, to be far less than the uncertainty predicted by Dr. Frank.

If, as time passes, we find that the real temperature diverges sufficiently massively from the models, or the models diverge sufficiently from each other such that 1 in 20 of the models finds itself outside Dr. Frank’s predicted huge potential uncertainty range, and the rest scatter themselves widely within those huge bounds, then Dr. Frank’s proposed error extent will be proven correct.

However already we have had some considerable time with the models and they do not, over a sufficient time frame, show anything like that amount of deviation from each other or from the actual temperatures.

Now it could be that they’ve all been tuned to look good, and won’t stay good going forward. We will only know for sure in 50 years or so, I guess. However at present the data suggests that the true uncertainty measurement is not as wide as Dr. Frank suggests it is.

Dr. Frank is also, I think, perhaps trying to have his cake and eat it too. When people pointed out that the world simply cannot warm or cool 15 degrees C in the time frame proposed, Dr. Frank said that uncertainty measurements are not actual temperature possibilities, they are just a statistical measurement of uncertainty. This is not true. An accurate uncertainty measurement statistically encompasses the full range of actual outcomes that might arise. For the uncertainty itself to be accurate, on average 1 in 20 of the models, or the earth itself, or both, simply must go outside those error bounds. If they don’t, and if they actually all sit quite close together and don’t scatter as widely as predicted, then the uncertainty assessment is simply incorrect.

That’s why valid measurement uncertainties must reflect ‘real potential errors’ if they are correct. Uncertainties are not theoretical values. If a machine makes holes and the drill has a certain radius uncertainty, the accuracy of that assessment can be validated by measuring the outcome of drilling lots of holes. If the machine does better than the predicted uncertainty, then the predicted uncertainty was wrong.

Already we have enough data to know that Dr. Frank’s possible range of uncertainty hasn’t shown up in multiple model runs, or in the actual temperature of the earth. This suggests that his estimate of potential uncertainty is based on a false assumption. It does seem that the false assumption relates to whether or not the cloud forcing error propagates. The data to date invalidate his predicted uncertainty range, and therefore most likely the cloud forcing error does not propagate.

Chris Thompson,

The models are not relying on modeled cloud. They can’t because cloud is too complex. They are relying on parameterizations of cloud. So the error doesn’t propagate, but the uncertainty does.

The programming doesn’t change the physical reality of the uncertainties.

Where do you get ” One in 20 should, on average, go outside the error bounds.”?

Why?, How?

“If a predicted uncertainty range is statistically correct, multiple models incorporating that potential error should spread themselves rather widely inside the envelope of the predicted error. One in 20 should, on average, go outside the error bounds. If the models over time do not scatter widely around those error bounds, and if one in 20 doesn’t go outside them, the calculated uncertainty bounds are not predicting the true uncertainty of the models.”

Better re-read much of this; that is NOT Dr. Frank’s argument. He is not indicating that the results of the models will vary by that amount; he is indicating that the uncertainty in their results is that large, thereby making their “output” meaningless, regardless of how they are “constrained” by fudge factors.

“I think we already know that none of these models has come close to hitting such wide error bounds in the time that they have already been run over. Its true that the models are running higher than reality, but some are close. They are deviating somewhat one from the other. But none seem to be deviating at the kind of rates consistent with the magnitude of uncertainty demonstrated by Dr. Frank. Nor is the actual temperature of the planet.”

What we “know” is that the uncertainty in the models and the deficiencies in the models make their “output” worse than useless. And as noted above, their “outputs” do not need to “deviate” consistent with the magnitude of uncertainty demonstrated by Dr. Frank, because that is not what Dr. Frank claimed; he merely showed how large the uncertainties were, and how meaningless THAT makes the “output” of those models.

And while it’s true that “the models are running higher than reality,” you’ll notice a conspicuous absence of any model which runs “cooler” than reality – which is because they all contain the same INCORRECT assumption: that atmospheric CO2 level “drives” the temperature.

In summary, the models are not only so uncertain as to be meaningless, but are built on assumptions that are used to provide pseudo-support for harmful, purely hypothetical bullshit, which is the notion that atmospheric CO2 “drives” the climate, which has never been empirically shown to occur in the Earth’s climate history.

Modelling the climate to determine how much GAST will increase due to increasing CO2 requires a guess as to how much each new ppm of CO2 will increase GAST. Since this quantity is not known and cannot be calculated without wildly unscientific assumptions, the entire concept is rather dubious at best.

“CO2’s effect is logarithmic.” But is it, really? Right now CO2’s capacity to absorb and thermalize 15-micron radiation from the surface is saturated at around 10 meters altitude. So, increasing CO2 only raises the altitude at which the atmosphere is free to radiate to space, thus lowering the temperature at which the atmosphere radiates to space, thus lowering the amount of energy lost, thus increasing the energy in the atmosphere, thus increasing GAST. No one can calculate the magnitude of this effect.
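For reference, the commonly cited simplified fit for CO2 forcing (Myhre et al. 1998) is logarithmic, so each doubling adds the same increment; whether that simplified fit adequately captures the emission-height physics described above is exactly the commenter’s question. A minimal sketch of the fit itself:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0, alpha=5.35):
    # Simplified logarithmic fit for CO2 radiative forcing, in W/m2
    # relative to the reference concentration c0_ppm (Myhre et al. 1998).
    return alpha * math.log(c_ppm / c0_ppm)

first_doubling = co2_forcing(560.0)                         # 280 -> 560 ppm
second_doubling = co2_forcing(1120.0) - co2_forcing(560.0)  # 560 -> 1120 ppm
print(first_doubling, second_doubling)  # each doubling adds ~3.7 W/m2
```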

This is a debate about statistics, rather than physics.

So, CO2’s effects are known?

No they are not. If the effect on GAST since 1880 was all because of CO2, we could estimate it.

One, we do not know that.

Two, temperature records, not so good, and altered.

Three, Dr. Roy Spencer, have you not abandoned First Principles and adopted the assumptions of the Warmists? You do not need to beat them at their own game, you need to point out that their own game has no basis in physics.

Gentleman…

I would be very interested to see Dr. Spencer, or any critic,

ACKNOWLEDGE that Pat Frank has posited that the +/-4 W/m2 is in fact an uncertainty, and that the propagation calculation using this uncertainty is independent of the models, and is not intended to calculate a temperature at the end, but rather to make a statement about the validity of the models.

Then starting from that point, explain what their objections are.

+1

http://www.drroyspencer.com/2019/09/critique-of-propagation-of-error-and-the-reliability-of-global-air-temperature-predictions/#comment-386837

It may be my ignorance, but why is Dr Frank’s original thread no longer available?

It is. You just have to go down to when he posted it.

It is. It’s just no longer “sticky” at the top of all posts.

Pat’s original post at WUWT:

https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/

Dr. Spencer’s critique:

https://wattsupwiththat.com/2019/09/11/critique-of-propagation-of-error-and-the-reliability-of-global-air-temperature-predictions/

Pat’s first response to Dr. Spencer’s critique at Dr. Spencer’s website:

http://www.drroyspencer.com/2019/09/critique-of-propagation-of-error-and-the-reliability-of-global-air-temperature-predictions/#comment-386837

It’s still available to me because I left it open in a tab. Try this:

https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/

It’s up to 842 comments.

“First water detected on potentially ‘habitable’ planet”

https://www.ucl.ac.uk/news/2019/sep/first-water-detected-potentially-habitable-planet

“K2-18b, which is eight times the mass of Earth, is now the only planet orbiting a star outside the Solar System, or ‘exoplanet’, known to have both water and temperatures that could support life.”

WOW !

Although expected sooner or later, this is indeed big news.

However K2-18 is a red dwarf (M-type), the smallest and coolest kind of star on the main sequence. Red dwarfs are by far the most common type of star in the Milky Way, at least in the neighborhood of the Sun. But their frequent strong flaring might reduce the habitability of planets in their, necessarily close-in, habitable zones.

Dunno if planet K2-18b is tidally locked or not, but if so, this could also present problems for the development of life there.

This comment is interesting:

**The reason is that the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance.**

So are the “other biases” deliberate, scientific, or accidental? Is there “tuning to get a desired temperature forecast?

What would happen to the temperature error if the model was run without the additional elements?

Would there be runaway error as Dr. Frank indicated?

I don’t think these claimed other biases reduce the uncertainty. In fact, if uncertainty is literally “what we don’t know”, suggesting other (unidentified) biases is an argument that uncertainty is even greater.

Pat is being too generous when his uncertainty range starts from zero.

And let’s not forget that Pat has only positioned his analysis using an estimate of low-end uncertainty. The reality is probably a good bit worse than this.

The error is in finite representation. The model error is due to incomplete… insufficient characterization and an unwieldy space.

Dr. Spencer,

Isn’t the uncertainty in the unknown behavior of physical clouds? It doesn’t matter what the models are doing. The basic science doesn’t know how clouds should behave. So the uncertainty of +/-4 W/m2 per year exists around the calculations which are done, not within the calculations.

Would I be correct to say that the uncertainty lies in the calculation process, NOT in the calculation outcome? — in how the calculations are done and not in what the calculations produce?

I believe this is correct, and none of the critics seem to want to address this issue, or seem to be aware of it.

It’s actually an awesome thing to watch bright, accomplished people completely miss a fundamental point – even after it has been explained to them a few times. It’s also very humbling. It’s possible “genius” is just a synonym for clarity.

Dr. Spencer, I greatly respect your work and your postings on WUWT.

I was therefore concerned to see this statement in the above article: “With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.”

I am sure that you are quite aware that temperature change does not necessarily correlate to energy change, the most obvious case relevant to Earth’s climate being the fact that a substantial amount of energy is exchanged at CONSTANT temperature when ice melts to liquid water or when liquid water freezes to ice. These are not “exceptions” in Earth’s climate system but occur widely and daily all over the globe with the changing seasons (snow and ice storms), and from the tops of high mountains in the tropics to sea ice in the polar regions. The magic 0 C (32 deg-F) number for water/ice phase change is not at all unusual over the range of variability of Earth’s land, sea and atmospheric temperatures.

So, I believe I am correct in asserting that areas of Earth can hide a lot (that’s not a scientific term, I know) of energy gain or energy loss via the enthalpy of fusion of water, even in a “closed” system.

Can you please clarify your position on this vis-a-vis climate modeling . . . in particular, do you believe most of the global climate models relatively accurately capture the energy exchanges associated with melting ice/freezing water and variations therein over time (from seasonal to century timescales)?

Good point. Temperature is not a measure of the heat content of air. Enthalpy is.

Besides freezing, how about vaporization and condensation, specifically aerosol cloud condensation nuclei, and their dependence on ionizing radiation?

The paper seems to imply that it is impossible to write a valid climate model. I think we should keep trying anyway. I am really skeptical of self-propagating errors.

ATTP and Nick Stokes have both done “take downs” of Pat’s paper

Here is one

https://andthentheresphysics.wordpress.com/2019/09/10/propagation-of-nonsense-part-ii/

Pat’s mistake is using error propagation on a base state uncertainty.

+/-4 W/m2 is a BASE STATE FORCING uncertainty. You don’t propagate that. As Roy points out, at model start-up this is eliminated. If it wasn’t eliminated during model spin-up, the control runs would be all over the map. They are not.

I have a clock. I tell you its base state error is +/-4 minutes. That means AT THE START it is up to 4 minutes fast or 4 minutes slow. I let the clock run. That BASE STATE error does not accumulate.

The only thing that persists in all of this is Pat’s “base state” mistakes about error analysis of GCMs (and temperature products as well, but that’s a whole other issue)

Not even wrong yet

“A directly relevant GCM calibration metric is the annual average ±12.1% error in global annual average cloud fraction produced within CMIP5 climate models.”

Just for reference, the average flows of energy into and out of the Earth’s climate system are estimated to be around 235-245 W/m2, but we don’t really know for sure.

If it is 4 minutes slow at the start it will always be 4 minutes wrong. But if whatever made it 4 minutes wrong before you noticed is still acting, then it is probably losing 4 minutes a day.

Good luck in getting to work on time in 2 weeks.

* unless you work at home.

Wake us up when you get your clocks and stations right rather than adjusted.

Touche!

Mosher

You said, “I let the clock the run. That BASE STATE error does not accumulate.” That is correct as far as it goes. The problem is that, first of all, you are assuming that the clock is running at the correct rate and does not change; that may not be true for a natural phenomenon such as cloud coverage. Secondly, the problem is when the ‘time’ is used in a chain of calculations, particularly if the result of a calculation is then used in a subsequent calculation that again uses the time with an uncertainty that may be variable, for which you only know the upper and lower bounds. Rigorous uncertainty analysis requires that the worst case be considered for the entire chain of calculations, and not focus on just the uncertainty of one parameter.
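The disagreement over the clock analogy comes down to which error regime applies. The three regimes are easy to state side by side (the numbers are illustrative; which regime describes a GCM’s cloud error is precisely what is in dispute):

```python
# Three distinct error regimes for a clock read over n days:
def offset_error(e0, n):
    # Base-state offset (Mosher's case): wrong by e0 at the start,
    # and still wrong by exactly e0 after n days.
    return e0

def rate_error(e_per_day, n):
    # Systematic drift: a wrong rate accumulates linearly with time.
    return e_per_day * n

def random_step_error(sigma, n):
    # Independent per-day errors of size sigma combine in quadrature
    # (root-sum-square), so the uncertainty grows as sqrt(n).
    return sigma * n ** 0.5

n = 365
print(offset_error(4.0, n), rate_error(4.0, n), round(random_step_error(4.0, n), 1))
```

A constant offset stays at 4 minutes forever; a 4-minute-per-day drift reaches 1460 minutes in a year; and independent 4-minute daily errors grow to roughly 76 minutes. The root-sum-square case is the one Frank’s propagation formula describes.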

I think that Dr. Spencer’s analysis rests on this finding (a quote):

“If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do. Why? Because each of these models is already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.”

As I have commented before, the GCMs or simpler models do not have the cloud forcing element as a parameter varying over time. Do you think that modelers have enough knowledge to calculate what the cloud effect was 45 years ago, or what it will be 10 years in the future? They have no idea, and so they do not even try. How could they program these cloud forcing effects into their models? Can you show at least one reference showing the way they calculate cloud effects?

Another thing is that cloud forcing effects may be a major reason for the incorrect results of IPCC’s climate models because GCMs do not use cloud forcing effects. One of the competing theories is the so-called sun theory and you all know that clouds have a major role in this theory. The official IPCC climate science does not approve of this theory.

Please show that I am wrong.

Dr. Ollila,

Dr. Frank’s paper says the models mis-estimate cloud forcing. So, that is one reference showing that models calculate cloud effects.

Cloud is too complex to model, so I suppose it must be “paramatized” (it’s not even an English word!), but clearly the models do something to take into account changes in cloud. I suppose they must make some assumption for starting values of cloud cover when they model the past, but it is clearly being changed over time.

Thomas

But, “parameterized” is an English word.

Admit, skipped most comments. So maybe this is repetitive.

But, CtM alerted me to this critique coming at lunch today, so gave it some thought when appeared.

There are two fundamental problems with Roy Spencer’s critique, both illustrated by his figure 1.

First, figure 1 presumes an error bar. That is NOT the point of the paper. This confuses precision with accuracy, a derivative of the CtM Texas sharpshooter fallacy. (Patience, will soon be explained). Frank’s paper critiques accuracy in terms of uncertainty propagation error. That has nothing to do with precision, with model chaotic attractor stability, and all that. For a visual on this ‘distinction with a difference’ (h/t A. Watts), see my guest post here on Jason, unfit for purpose, which dissects accuracy vs. precision. CtM himself said today he found the explanatory graphic useful.

Put simply, the Texas sharpshooter fallacy draws a bullseye around the shots on the side of a barn and declares accuracy. Precision would be a small set of holes somewhere on the barn side but possibly missing the target. Frank’s paper simply calculates how big the side of the barn is. Answer: uselessly big.

Second, the issue (per CtM and Roy) is whether the ‘linear’ emulator equation used to propagate uncertainty encapsulates chaos and attractors (nonlinear dynamics, Lorenz, all that AR3 stuff including negative feedbacks and stable nodes). Now, this is a matter of perspective. Treating nonlinear dynamic (by definition chaotic) climate models as a chaotic black box and then deriving an emulator over a reasonable time frame of course does not guarantee that the emulator is valid over a much longer time frame. But then, neither do the models themselves. By definitional fit, CMIP5 is parameter tuned to hindcast 30 years from 2006. Not more. That period is fairly used by Frank. So the Dansgaard event and the LGM are by definition excluded from both. IMO reasonable, since in 2019 we are trying to understand ‘just’ 2100.

Bottom line is, that the esoteric and normative climate models will be seen to be not even vaguely right …. but, “precisely” wrong. Time to move on and bury this crazy humans impacting climate narrative which has sad parallels to Akhenaton and the Aztec priests and their ridiculous delusions of controlling the sun. Galileo would be proud of you Pat!

The size-of-the-barn analogy is the critical point of PF’s analysis. Almost all of the comments I have read, including R. Spencer’s, are centered on the grouping (pun intended). Very difficult concept for most of us to “get”. Overall this discussion is probably the best ever presented on WUWT. Keep it up, progress is being made here.

Well, let’s carry the barn analogy to its rightful end . . . the IPCC would take delight in the fact that they even hit the “uselessly big” barn with most of their climate models. The shot groupings seem of little concern to them, with at least a 3:1 dispersion in result magnitudes of global warming rates based on the lot of CMIP5 models.

That barn, full of buckshot, just happens to be our economy.

1. “Just for reference, the average flows of energy into and out of the Earth’s climate system are estimated to be around 235-245 W/m2, but we don’t really know for sure.”

2. “Frank’s paper takes an example known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) and assumes that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation”

–

I would assume that statement 2 sort of supports statement 1.

“Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux

If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.

Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.”

Dr. Spencer’s objections are somewhat more likely to be right if you think of the physics of the system, where there needs to be a net energy flow for temperature to rise. However, in a computer model, many things can happen that physics would not allow. For example, in a molecular dynamics model, with which I have a bit of experience, you can model an inert block of diamond with no radiative coupling at all, which will nevertheless warm dramatically all by itself. Thus virtually all MD models have a “thermostat” which artificially clamps the temperature. I doubt the climate models have explicit thermostats, but I’ll bet they have implicit ones, perhaps even unknown to their writers. That “unforced pre-industrial” flatline should wander a lot more than it does.
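The MD point can be illustrated with a toy set of harmonic oscillators (not a real MD code; all parameters are made up): explicit-Euler stepping pumps energy in numerically, and a Berendsen-style velocity rescaling “thermostat” clamps it.

```python
import random
random.seed(2)

def mean_ke(vs):
    # Mean kinetic energy per unit-mass oscillator (stand-in for "temperature")
    return 0.5 * sum(v * v for v in vs) / len(vs)

def simulate(thermostat, n=100, steps=5000, dt=0.05, target_ke=0.5):
    # Toy "MD": independent unit-frequency harmonic oscillators, stepped
    # with the explicit Euler method, which spuriously pumps energy in.
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    vs = [random.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        new_xs = [x + dt * v for x, v in zip(xs, vs)]
        new_vs = [v - dt * x for x, v in zip(xs, vs)]
        xs, vs = new_xs, new_vs
        if thermostat:
            # Berendsen-style rescaling: clamp the kinetic energy back
            # to its target, hiding the integrator's energy leak.
            lam = (target_ke / mean_ke(vs)) ** 0.5
            vs = [lam * v for v in vs]
    return mean_ke(vs)

hot = simulate(thermostat=False)   # warms dramatically all by itself
cold = simulate(thermostat=True)   # artificially clamped near target_ke
print(hot, cold)
```

Without the thermostat, the “temperature” runs away by orders of magnitude from pure numerical error; with it, the system sits flat at the target. A flatline, by itself, doesn’t prove the underlying physics is energy-conserving, which is the commenter’s point about implicit thermostats.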

Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux

If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do. Why? Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.

Pat Frank states “A directly relevant GCM calibration metric is the annual average ±12.1% error in global annual average cloud fraction produced within CMIP5 climate models. This error is strongly pair-wise correlated across models, implying a source in deficient theory. The resulting long-wave cloud forcing (LWCF) error introduces an annual average ±4 Wm–2 uncertainty into the simulated tropospheric thermal energy flux. ”

–

There seems to be a bit of dissonance here.

Roy is arguing about TOA Net Energy Flux being balanced before the model is run I presume.

Since he also states “the average flows of energy into and out of the Earth’s climate system are estimated to be around 235-245 W/m2, but we don’t really know for sure,” the message surely is that there is a +/-5 W/m2 degree of uncertainty in choosing the actual starting point. Obviously a possible inherent bias error.

–

“Because each of these models are already energy-balanced before they are run they have no inherent bias error to propagate.” This does not rule out a 4 W/m2 yearly variation developing in one of the subsidiary components: “A directly relevant GCM calibration metric is the annual average ±12.1% error in global annual average cloud fraction.”

–

Worse, “Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.” misses the point that there are other inbuilt variations affecting TOA estimation, combining to 5 W/m2.

–

Worse is the obvious conclusion, from the pre-industrial graph figures provided by Roy, that each of these models is continuously energy-balanced while it is run.

When Roy comments

“If what Dr. Frank is claiming was true, the 10 climate models runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling. But they don’t. You can barely see the yearly temperature deviations, which average about +/-0.11 deg. C across the ten models.”

I would do a Stephen McIntyre.

Average standard yearly deviation 0.11 C

Standard dev of model trends = 0.10 C/year.

So in 100 years, 100 years!, ten models vary by only 0.10 C from no warming!

Highly suspicious, temp varies a lot per century.

Then the yearly deviation is greater than the 100-year deviation.

What gives?

Proponents of coin tosses will note that the deviation under fair conditions, despite reversion to the mean, should be greater than this. And the conditions for temperature are never fair.
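The coin-toss point is the standard random-walk result: the expected spread after n fair tosses grows like the square root of n, even though the mean stays at zero (quick sketch; run counts are illustrative).

```python
import random
random.seed(3)

def net_heads(n):
    # Net displacement (heads minus tails) after n fair coin tosses
    return sum(random.choice((-1, 1)) for _ in range(n))

runs = 2000
for n in (100, 400, 1600):
    finals = [net_heads(n) for _ in range(runs)]
    rms = (sum(f * f for f in finals) / runs) ** 0.5
    print(n, round(rms, 1))  # rms spread grows as ~sqrt(n)
```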

This rigid, straitjacketed proof that the models are not working properly, but have adjustments in them to return everything to the programmed constant TOA, is shocking but perfectly compatible with a computer program.

–

A last question: if not, why not?

If Pat Frank states that “A directly relevant GCM calibration metric is the annual average ±12.1% error in global annual average cloud fraction produced within CMIP5 climate models.” then it either is or isn’t.

Roy.

Furthermore, if it is not, why not?

Further, if it is, what peregrinations is the rest of the algorithm performing to keep the TOA constant?

From the article.

Statement A: “The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles, otherwise all models would get very nearly the same cloud amounts.”

Statement B: “Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propogate.”

These two statements cannot both be true. If Statement A is true then the energy-balance is a statistical sleight of hand that does not reflect the physical reality. Therefore there could be many inherent biases that are hidden during the balancing but are quite able to propagate later on.

And Statement A is clearly true as all the models do not get very nearly the same cloud amounts.

This raises the question.

If the climate is dependent on the previous state then there is no reason to say the previous imbalance cannot affect the future imbalances. The rate of response to the previous state would then determine the propagation (as the article points out). But why say that year-on-year is inappropriate just because month-on-month is not appropriate?

“Dr. Frank has chosen 1 year as the time step (with a +/-4 W/m2 assumed energy flux error), which will cause a certain amount of error accumulation over 100 years. But if he had chosen a 1 month time step, there would be 12x as many error accumulations and a much larger deduced model error in projected temperature. This should not happen, as the final error should be largely independent of the model time step chosen. Furthermore, the assumed error with a 1 month time step would be even larger than +/-4 W/m2, which would have magnified the final error after a 100 year integrations even more. This makes no physical sense.”

–

It makes perfect mathematical sense.

I believe it is called compound interest.

People usually choose a year to make an annual change; it makes sense.

Unlike Nick Stokes and Mosher, to use the figure you do have to give it a time unit,

and the ±4 W/m2 assumed energy flux error is an annual rate.

You seem to apply it to each month as 1/12th of the ±4 W/m2 assumed energy flux error, for your assumption of a much larger deduced model error.

Of course it would increase the amount of error.

This should happen, as the final calculation is dependent on the model time step chosen, with an unchanging 1/12th annual rate applied in 12 steps.

It does reach a limit if done continuously.

Or you could do a Nick Stokes and apply the 4 W/m2 monthly to give a mammoth answer.

He does not believe it has a time component.

That is why, if you are doing a simple annual calculation, you use a simple annual rate of error.

If anyone were to do monthly calculations properly, you would have to include monthly changes which by definition would be smaller than 1/12 of 4 W/m2, so as to add up over the year to the annual ±4 W/m2 assumed energy flux error.

I think your statement should be rewritten to acknowledge that using different time steps, if done properly, should give a similar answer in all cases.
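One way to see the step-independence claim being argued here is a minimal Python sketch of root-sum-square accumulation (the propagation rule used in Pat Frank’s paper). Note the hedge: under quadrature, the monthly sigma consistent with an annual ±4 W/m2 is sigma/√12 (under simple linear addition it would be sigma/12); the function name and scaling choice are illustrative, not from either article.

```python
import math

def propagated_uncertainty(annual_sigma, years, steps_per_year):
    """Accumulate a per-step uncertainty in quadrature (root-sum-square),
    scaling the per-step sigma so the steps within one year combine
    (in quadrature) back to the stated annual sigma."""
    step_sigma = annual_sigma / math.sqrt(steps_per_year)
    return step_sigma * math.sqrt(years * steps_per_year)

# With that scaling, the 100-year result does not depend on the step chosen:
print(propagated_uncertainty(4.0, 100, 1))    # yearly steps,  ~40.0 W/m2
print(propagated_uncertainty(4.0, 100, 12))   # monthly steps, ~40.0 W/m2
```

The point of the sketch is only that a shorter time step does not by itself inflate the accumulated uncertainty, provided the per-step input is rescaled consistently.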

The comment on choosing a 1 month time step, “that there would be 12x as many error accumulations and a much larger deduced model error in projected temperature,” is being spread by Nick Stokes using misdirection. If you change the time frame you have to change the input: per second is different to per month, and different to 4 W/m2 annually, which they all have to add up to.

“and the ±4 W/m2 assumed energy flux error is an annual rate”

There is no evidence of that, and it isn’t true. Lauer and Hamilton said:

“These give the standard deviation and linear correlation with satellite observations of the total spatial variability calculated from 20-yr annual means.”

and gave the result as 4 W/m2, not 4 W/m2/year. In fact, they didn’t even do that at first. They said:

“For CMIP5, the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m−2)”

The primary calculation is of the correlation (from which the 4 W/m2 follows). How are you going to turn that into a rate? What would it be for 2 years? 1.86?

Nick,

The 4 W/m2 value is “calculated from 20-yr *annual* means.” So it is annual.

Nick,

I believe that the ±4 W/m2 energy flux error is a list of determined errors from observations (year by year) minus the model predictions. So some are on the high side, some on the low side.

So that is 20 single-value measurements of error. Is the standard deviation of these errors the value 4, or is 4 the average error amongst the set? Can you post the actual errors? Year to year, these errors have their own rates.

As the model is run at a specific time step, is the yearly (average?) error for that year used at each time step?

What is the time step period when the model is run?

The time step is about 30 minutes. That is the only period that could possibly make sense for accumulation, and would soon give an uncertainty of many hundreds of degrees. There is no reason why a GCM should take into account how Lauer binned his data for averaging.

Nick, “There is no reason why a GCM…”

But there is every reason to take a yearly mean error into account when estimating a projection uncertainty.

You admitted some long time ago the poverty of your position, Nick, when, in a careless moment, you admitted that climate models are engineering models rather than physical models.

Engineering models have little to no predictive value outside their parameter calibration bounds. They certainly have zero predictive value over extended limits past their calibration bounds.

You know that. But you cover it up.

It’s OK, Thomas, et al. Nick plain does not know the meaning of “20-yr annual means.”

For Nick, a per-year average taken over 20 years is not an average per year.

Nick’s is the level of thinking one achieves, apparently, after a career in numerical modeling.

Even Ben Santer says that models suck

– examples of systematic errors include a dry Amazon bias, a warm bias in the eastern parts of tropical ocean basins, differences in the magnitude and frequency of El Nino and La Nina events, biases in sea surface temperatures (SSTs) in the Southern Ocean, a warm and dry bias of land surfaces during summer, and differences in the position of the Southern Hemisphere atmospheric jet –

https://www.nature.com/articles/s41558-018-0355-y

Nick Stokes

“and the ±4 W/m2 assumed energy flux error is an annual rate”

There is no evidence of that, and it isn’t true.

Wrong.

You know it.

OK, produce the evidence.

Nick

The CMIP5 paper of 2019 shows that the fluxes under discussion are yearly calculations, also known as global annual mean sky budgets. Below is the shortwave component, but I fully expect it is also used for longwave TOA and TCF. QED.

Global budgets

Figure 2 shows the global annual mean clear-sky budgets as simulated by 38 CMIP5 GCMs at the surface (bottom panel), within the atmosphere (middle panel) and at the TOA (upper panel). The budgets at the TOA that govern the total amount of clear-sky absorption in the climate system, are to some extent tuned to match the CERES reference value, given at 287 Wm−2 for the global mean TOA shortwave clear-sky absorption. Accordingly, the corresponding quantity in the CMIP5 multi-model mean, at 288.6 Wm−2, closely matches the CERES reference (Table 2). Between the individual models, this quantity varies in a range of 10 Wm−2, with a standard deviation of 2.1 Wm−2, and with a maximum deviation of 5 Wm−2 from the CERES reference value (Table 2; Fig. 2 upper panel).


How is that evidence? Where is there a statement about rate? Scientists are careful about units. Lauer and Hamilton gave their rmse as 4 W/m2. No /year rate. Same here.

But it is also very clear that it isn’t an annual rate, and the argument that annual mean implies rate is just nonsense. The base quantity they are talking about is clear sky absorption at 287 W/m2. That may be an annual average, but it doesn’t mean that if you averaged over 2 years the absorption would be 574 W/m2. It is still 287 W/m2. And the sd of 2.1 W/m2 will be the same whether averaged over 1 year or 2 or 5. It isn’t a rate.
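The distinction being drawn here, an average versus a rate, can be put as a toy calculation (using the 287 W/m2 absorption figure from the quoted passage; the numbers are illustrative only):

```python
# An average is not a rate: averaging the same annual-mean flux over more
# years leaves the value unchanged, whereas a true per-year rate accumulates.
annual_means = [287.0, 287.0]        # W/m2, two years of annual means
average = sum(annual_means) / len(annual_means)
print(average)                       # still 287.0 W/m2, not 574

rate_per_year = 287.0                # if it *were* W/m2 per year...
print(rate_per_year * 2)             # ...two years would accumulate to 574.0
```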

Nick, “the argument that annual mean implies rate is just nonsense.”

No one makes that argument except you, Nick. And you’re right, it’s nonsense.

So, do stop.

they have no inherent bias error to propogate –> they have no inherent bias error to propagate.

A practitioner of any discipline must accept *some* tenets of that discipline. A physicist who rejects all physical laws won’t be considered a physicist by other physicists, and won’t be finding work as a physicist. Similarly, Dr Spencer must accept certain practices of his climate science peers, if only to have a basis for peer discussion and to be considered qualified for his climate work. Dr Frank doesn’t have that limitation in his overview of climate science — he is able to reject the whole climate change portfolio in a way which Dr Spencer can’t. This is the elephant in the room.

NZ Willy,

I was thinking the same thing. Dr. Spencer has skin in this game. It’s a shame that he plays along.

Andrew

It seems that modellers spend effort neutralising, cancelling out or suppressing the range of factors that may cause energy imbalance resulting in warming or cooling. Such factors may not be well understood, difficult to model and detract from the main purpose which is to predict the effect of increasing atmospheric carbon dioxide over a long period of time.

As a consequence, the models do not show the range of outcomes that would otherwise be expected and mainly show a relatively narrow range of CO2 induced warming as intended. Such models clearly do not simulate reality but are a complex and expensive way of carrying out a simple calculation.

A model containing all the factors that may introduce imbalance together with estimates of the unknowns such as magnitudes and consequences would produce a much less predictable outcome together with a higher probability that the model would fail to simulate credible reality. The predictive ability would certainly decrease sharply with each iteration of the model run.

It can be claimed that models effectively bypass these problems by reducing such imbalances to net zero before introducing the CO2. It is easy to remove them from the model, but more difficult in the real world.

Dr. Spencer formulated the same thing in other terms. I formulated it in very simple language, and I think that you realized the same thing: climate models do not contain any terms that the modelers do not know.

Great. Thanks!

I have to admit that I’m confused. I’ve been frequenting this site lately looking for information that I would like to use to debunk AGW theory. I thought Dr. Frank’s post would go a long way toward finally driving a stake through the AGW monster that won’t seem to die.

Unlike one AGW supporter on this site, I remember very well the AGC scam that was going on back in the 70s until they said (in my best Emily Litella voice) “never mind”. Frankly, I don’t believe that the minuscule amount of CO2 in the atmosphere affects our temperature even slightly, let alone the even MORE minuscule amount of it that is put there by human activity. I have often questioned why the “science community” ignores that small nuclear fusion reactor a mere 93 million miles away as having any impact on our temperatures. I read a quote from one contributor on WUWT that Mars has a higher percentage of CO2 than we have and yet is much colder, which further supported my view that CO2 is at best a very weak GHG and overall insignificant.

I know that during the time when the AGW crowd was saying the temperatures were changing the most here on earth, the ice caps on Mars were shrinking, further evidencing that the sun is more responsible for our climate and temperatures than man could ever be.

I don’t have enough of the background in the areas that Dr. Spencer or Dr. Frank or many of those on this site have to intelligently discuss the supporting statistics or math so I look to WUWT to provide cogent rational clear-enough-to-understand explanations to debunk AGW theory.

I met Dr. Spencer once long ago and respect him and I’ve been looking on this post for Dr. Frank’s rebuttal to see his explanation or comments to support his analysis that I was putting so much stock in. But haven’t seen it … yet.

Here is what I believe: CO2 has virtually no effect on our temperatures. Man’s contribution to the atmospheric concentration of CO2 in comparison to what nature puts there (3% I believe) is further miniscule and I propose irrelevant. If atmospheric CO2 does increase (likely it will) it will be beneficial for life. Atmospheric CO2 does not have an effect on severe weather events. Changes to CO2 are not responsible for temperature changes nor sea level rise – again, look to the sun.

I’d like to know if I’m wrong. (Yeah, I know that Loydo and Nick Stokes will tell me I’m wrong, but I’ve heard that before from warmunists. Those two present better supporting arguments, but they are inundated with links, utilizing the bandwidth of the internet, that I don’t find compelling; plus, on the backside, I just no longer believe it, nor the hysterics that go with it.)

Sam,

Carbon dioxide is known to be a minor player. Most of Earth’s greenhouse effect is due to water vapor. Models depend upon CO2 changes being amplified through feedback of increased water vapor.

Clouds are formed through a process that combines evaporative effects, convection, the lapse rate, etc. Don’t we know that tropospheric heat transfer is mostly through convection? Isn’t the failure to model clouds accurately due to both a grid scale too large and the inability to model convection? The focus on radiative forces to model tropospheric temperatures seems misplaced to me. As I said above, climate models have no ability to explain the temperature evolution that we have observed through the Holocene. This is the reason we observe such statements as “we have to get rid of the Medieval Warm Period.” The most recent CMIP5 spaghetti graphs that show a monotonic rise through time driven by CO2 concentrations are the climate community placing a bet that ocean currents, solar effects, changes in the earth’s magnetic field, etc. are second- or third-order effects compared to CO2 concentration levels. In my view, the entire climate community’s intellectual reputation is on the line. The good news is that it won’t take more than 5-10 years to see who is right. To be frank, I don’t see how anyone who has a familiarity with Holocene temperatures can believe that the output from climate models actually captures reality.

All,

Here is my rational explanation regarding traditional temperature prediction programs “tracking high”.

I work for an RE organization and am in charge of the temperature prediction computer program that helps keep a steady RE funding flow going.

I could be working at Dartmouth or UVM or MIT, or any entity dependent on an RE funding flow.

The flows likely would be from entities MAKING BIG BUCKS BY PLAYING THE RE ANGLE, like Warren Buffett.

My job security bias would be to adjust early data to produce low temperatures and later data to produce high temperatures.

I would use clever dampers and other tricks to make the program “behave”, with suitable squiggles to account for el ninos, etc.

Also I do not want to stick out by being too low or too high.

Everyone in the organization would know me as one of them, a team player.

Hence, about 60 or so temperature prediction programs behave the same way, if plotted on the same graph.

Along comes the graph based on 40 years of satellite data, which requires no adjustments at the low end or the high end.

It has plenty of ACCURATE data; no need to fill in any blanks or make adjustments.

Its temperature prediction SLOPE is about 50% of ALL THE OTHERS.

If I were a scientist, that alone would give me a HUGE pause.

However, I am merely an employee, good with numbers and a family to support.

Make waves? Not me.

If the above is not rational enough, here is another.

At Dartmouth the higher ups have decided burning trees is good.

Well, better than burning fossil fuels any way, which is not saying much.

By now all Dartmouth employees mouth in unison “burning trees is good, burning fossil fuels is bad”.

Job security is guaranteed for all.

But what about those pesky ground source heat pumps OTHER universities are using for their ENTIRE campus.

Oh, we looked at that and THEY are MUCH too expensive.

For now, ONLY THIRTY-FIVE YEARS, Dartmouth will burn trees.

Dartmouth, with $BILLIONS in endowments, could not possibly afford those heat pumps.

And so it goes, said Kurt Vonnegut, RIP.

… “The errors show that (for example) we do not understand clouds” …

https://youtu.be/8L1UngfqojI?t=50

Too cheesy? Sorry, couldn’t help it…

In response to comments from Dr. Frank and many others, I have posted a more precise explanation of my main objection, wherein I have quoted Pat’s main conclusion verbatim and explained why it is wrong. I would agree with him completely if climate models were periodically energy-balanced throughout their runs, but that’s not how they operate.

http://www.drroyspencer.com/2019/09/additional-comments-on-the-frank-2019-propagation-of-error-paper/

As for climate models, I am a lay person considering all the technicalities, formulas and esoteric reasoning about this uncertainty beast. I try not to get lost in the mist of the experts’ arguments about details. So I ask myself: “What is basic in this discussion?” I would say: the use of parameters in models as an argument to undo or side-step Dr. Frank’s reasoning.

Isn’t working with parameters like creating a magical black box? “Hey, turning these parameter knobs, it starts working! I don’t learn from it how the system it tries to emulate does work, but who cares! Magic!”

I would say: there is already this huge uncertainty problem, and now the parameter problem is added to it. Parameterization doesn’t enhance the models; it does exactly the opposite, it makes them even worse. Like turning a pretender into a conman.

Maybe my analysis is wrong, again, the details are beyond me. But I feel this is the essence of the discussion.

As I see it, Dr. Spencer and Dr. Frank are talking past each other. Heck, what Dr. Frank talked about was drilled into me in my analytical chemistry class; so I got his point immediately. Later, I did spectroscopy of various kinds. One of the main issues, to me, is the equivocation inherent in human language; so yes, semantics *do* matter, which got drilled into me from a debate class.

Rethinking all the arguments, I’m afraid Dr. Roy Spencer’s critique is right:

The reason is this: the climate models hardly propagate anything. They don’t take the last climate state and calculate from this the next climate state. There is only one climate state: the unperturbed state of the control run, as shown here in Figure 1. From this, there is just one influence that can really change the climate state: a change in the CO2 forcing.

The only thing that is propagated, if you will, is the amount of the CO2 forcing. All the other subsystems, ocean heat uptake, cloud fraction, water vapor, etc. are coupled via time lags to the development of the CO2 forcing.

In a way the climate models don’t run along the time axis as one might think; they run perpendicular to it. That’s exactly why it is so easy to emulate their behavior with Dr. Frank’s emulation equation 1. If the climate models truly propagated all the different states of their many variables, they would run out of control in a very short time and occupy the whole uncertainty range.

Maybe Dr. Spencer has misunderstood some of Dr. Frank’s arguments. Maybe he has sometimes used the wrong terminology to voice his critique, but his main point seems to be valid:

The propagation of errors is not a big issue with climate models. They are much more in error than that:

One big issue with climate models is the assumption that there is no internal variability, that the control run is valid when it looks like Figure 1.

The second big issue is the assumption that the CO2 forcing, minus aerosol cooling, minus changes in albedo, minus changes in forcings of all the subsystems of the climate system, must always be positive. (Kiehl, 2007)

In other words: the problem with the climate models is that Dr. Frank’s equation 1 (also shown here before by Willis Eschenbach) is such a good emulation of the climate models in the first place.

BP, “They don’t take the last climate state and calculate from this the next climate state.”

Yes, they do.

Working with Dr. Spencer’s climate model (.xls) from his website, you can’t make the temperature decrease year over year no matter what you set the parameters to.

The glaring elephant in the room is that climate models assume that there’s a “greenhouse roof” capping emissions to space… and NASA’s own SABER data shows this has never been the case. The atmosphere expands and “breathes” in response to solar input variation. This is old news, and unfortunately, completely ignored by “climate scientists”.

It was SABER which taught us that there was no “hidden heat” in the atmosphere, which set off the search for “hidden heat” in the oceans. Good luck with that.

https://spaceweatherarchive.com/2018/09/27/the-chill-of-solar-minimum/

There seems to be a misunderstanding afoot in the interpretation of the description of uncertainty in iterative climate models. I offer the following examples in the hopes that they clear up some of the mistaken notions apparently driving these erroneous interpretations.

Uncertainty: Describing uncertainty for human understanding is fraught with difficulties, evidence being the lavish casinos that persuade a significant fraction of the population that you can get something from nothing. There are many other examples, some clearer than others, but one successful description of uncertainty is that of the forecast of rain. We know that a 40% chance of rain does not mean it will rain everywhere 40% of the time, nor does it mean that it will rain all of the time in 40% of the places. We however intuitively understand the consequences of comparison of such a forecast with a 10% or a 90% chance of rain.

Iterative Models: Let’s assume we have a collection of historical daily high temperature data for a single location, and we wish to develop a model to predict the daily high temperature at that location on some date in the future. One of the simplest, yet effective, models that one can use to predict tomorrow’s high temperature is to use today’s high temperature. This is the simplest of models, but adequate for our discussion of model uncertainty. Note that at no time will we consider instrument issues such as accuracy, precision and resolution. For our purposes, those issues do not confound the discussion below.

We begin by predicting the high temperatures from the historical data from the day before. (The model is, after all, merely a single-day offset.) We then measure model uncertainty, beginning by calculating each deviation, or residual (observed minus predicted). From these residuals, we can calculate model adequacy statistics, and estimate the average historical uncertainty that exists in this model. Then, we can use that statistic to estimate the uncertainty in a single-day forward prediction.

Now, in order to predict tomorrow’s high temperature, we apply the model to today’s high temperature. From this, we have an “exact” predicted value ( today’s high temperature). However, we know from applying our model to historical data, that, while this prediction is numerically exact, the actual measured high temperature tomorrow will be a value that contains both deterministic and random components of climate. The above calculated model (in)adequacy statistic will be used to create an uncertainty range around this prediction of the future. So we have a range of ignorance around the prediction of tomorrow’s high temperature. At no time is this range an actual statement of the expected temperature. This range is similar to % chance of rain. It is a method to convey how well our model predicts based on historical data.

Now, in order to predict out two days, we use the “predicted” value for tomorrow (which we know is the same numerical value as today, but now containing uncertainty ) and apply our model to the uncertain predicted value for tomorrow. The uncertainty in the input for the second iteration of the model cannot be ‘canceled out’ before the number is used as input to the second application model. We are, therefore, somewhat ignorant of what the actual input temperature will be for the second round. And that second application of the model adds its ignorance factor to the uncertainty of the predicted value for two days out, lessening the utility of the prediction as an estimate of day-after-tomorrow’s high temperature. This repeats so that for predictions for several days out, our model is useless in predicting what the high temperature actually will be.

This goes on for each step, ever increasing the ignorance and lessening the utility of each successive prediction as an estimate of that day’s high temperature, due to the growing uncertainty.

This is an unfortunate consequence of the iterative nature of such models. The uncertainties accumulate. They are not biases, which are signal offsets. We do not know what the random error will be until we collect the actual data for that step, so we are uncertain of the value to use in that step when predicting.
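The persistence model and its growing ignorance described above can be sketched in a few lines of Python. The daily highs below are made-up numbers standing in for a historical record; the calibration statistic is the scatter of the one-step residuals, compounded in quadrature for multi-step predictions (assuming independent step errors, which is the simplest choice, not something the comment specifies):

```python
import math
import statistics

# Hypothetical daily high temperatures (deg C), a stand-in for real records.
highs = [20.1, 21.3, 19.8, 22.0, 20.5, 19.2, 21.1, 20.7, 19.9, 20.4]

# Persistence model: predict each day's high with the previous day's high.
# One residual (observed minus predicted) per historical pair.
residuals = [obs - pred for pred, obs in zip(highs, highs[1:])]

# Calibration statistic: the scatter of the one-step prediction errors.
step_sigma = statistics.stdev(residuals)

# Propagated uncertainty k steps ahead grows as sqrt(k), so each further
# day of prediction is a statement of greater ignorance.
for k in (1, 4, 9):
    print(f"{k} day(s) out: +/-{step_sigma * math.sqrt(k):.2f} C")
```

As in the prose above, the widening band is a statement about what we know, not a prediction that such temperatures will occur.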

Maybe Models aren’t Models at all, and should be recognized as what they truly are: ‘Creations’, ‘Informed Imaginings’, Frankensteinian attempts at recreating Nature.

Bill Haag’s example is very clever, and rings true.

However, let’s think about the same model a little differently.

Let’s say our dataset of thousands of days shows the hottest ever day was 34 degrees C and the lowest 5 degrees C. The mean is 20 degrees C, with a standard deviation of +/- 6 degrees C. It’s a nice place to live.

Let’s say today is 20 degrees C. Tomorrow is very unlikely to be colder than 5 or hotter than 34 C; it’s likely closer to 20 than 34. The standard deviation tells us that about two times out of three, tomorrow’s temperature will range between 14 and 26 degrees (and 19 times out of 20, within two standard deviations, between 8 and 32).

But is this the correct statistic to predict tomorrow’s temperature, given today’s?

Actually, that statistic is a little different. A better statistic would be the uncertainty of the change in temperature from one day to the next.

So let’s say we go back to the dataset and find that 19 out of 20 days are likely to be within +/- 5 degrees C of the day before.

Is this a more helpful statistic? When today’s temperature is in the middle of the range, +/- 5 degrees C sounds fair and reasonable. But what if today’s temperature was 33 degrees C, does +/- 5 degrees C still sound fair and reasonable, given that it’s never exceeded 34 degrees C ever? Is there really a 1 in 20 chance of reaching 38 degrees C? No, that’s not a good estimate of that chance.

It’s clear that the true uncertainty distribution for hot days is that the next day is more likely to be cooler than warmer. The uncertainty distribution of future temperatures after a very hot day is not symmetrical.

Let’s now try compounding uncertainties in the light of this dataset. Let’s say that we know our uncertainty is +/- 5 degrees, on average, starting at 20 degrees C, and we want to go out two days. Is the uncertainty range now +/- 10 degrees? If we went out 10 days, could the uncertainties add up to +/- 50 degrees Centigrade? Plainly not. We can’t just keep adding uncertainties like that, because should a day actually get hot two days in a row, it has great difficulty getting a lot hotter, and becomes more likely to get cooler.

Statistically, random unconstrained uncertainties add in quadrature, so the total grows as the square root of the step count. Let’s see how that might work out in practice. After four days, our uncertainty would double to 10 degrees C, and after 16 days, double again to 20 degrees C. This would proceed ad infinitum. After 64 days, the extrapolated uncertainty range becomes an impossible +/- 40 degrees C. Since such a range is truly impossible, there must be something wrong with our uncertainty calculation… and there is.

We have to recognise that the uncertainty range for a successive day depends greatly on the temperature of the present day, and that the absolute uncertainty range for any given starting temperature cannot exceed the uncertainty of all possible actual temperatures. In other words, the uncertainty range for any given day cannot exceed +/- 6 degrees C, the standard deviation of the dataset, no matter how far out we push our projection.

An analysis of this kind shows us that measures of uncertainty cannot be compounded infinitely, at least in systems of limited absolute uncertainty, like the example given by Bill Haag.

The same is true for the application of uncertainty extrapolations as performed by Dr. Frank. Their uncertainty bounds do not increase as he predicts. They may well have wide uncertainty bounds, but not for the reasons Dr. Frank proposes. His methodology is fundamentally flawed in that regard. A careful consideration of Bill’s model shows us why.

Your discussion is wrong, Chris Thompson, because you’re assigning physical meaning to an uncertainty.

Your mistake becomes very clear when you write, “could the uncertainties add up to +/- 50 degrees Centigrade? Plainly not. We can’t just keep adding uncertainties like that, because should a day actually get hot two days in a row, it has great difficulty getting a lot hotter, and becomes more likely to get cooler.”

That (+/-)50 C says nothing about what the temperature could actually be. It’s an estimate of what you actually know about the temperature 10 days hence. Namely, nothing.

That’s all it means. It’s a statement of your ignorance, not of temperature likelihood.

Thanks, Pat

I posted this in another thread as a reply to this comment from Chris,

———

Chris,

Thank you for the kind words in the first sentence.

However, you are not “thinking about the same model a little differently”; you are changing the model. So everything after is not relevant to my points. Perhaps to other points, but not to my example of the projection of uncertainty, which was my point.

Once again, the model was to use the prior day’s high temperature to predict each day’s high temperature. The total range of the data over however many days of data you have is irrelevant for this model. From the historical data, a set of residuals is calculated, one for each observed-minus-predicted pair. These residuals are the ‘error’ in each historical prediction. The residuals are then used to calculate a historical model-goodness statistic (unspecified here to avoid other disagreements posted on the specifics of such calculations).

This model is then used going forward. See the earlier post for details, but it is the uncertainty, not the error, that is propagated. The model estimate for the second day out from today is forced to use the uncertain estimated value from the model for the first day out, while contributing its own uncertainty to its prediction. And so it goes forward.
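A minimal sketch of this persistence model, under assumed example data: the `highs` series is invented, and the model-goodness statistic is taken here as a plain root-mean-square of the residuals, which is only one of the choices deliberately left unspecified above.

```python
import math

# Assumed historical daily highs (deg C); each day's high is "predicted"
# by the previous day's high (persistence model).
highs = [21.0, 24.0, 22.5, 19.0, 20.5, 23.0, 25.5, 24.0]

# Residuals: observed minus predicted, one per historical pair.
residuals = [obs - pred for pred, obs in zip(highs, highs[1:])]

# One possible model-goodness statistic: rms of the residuals.
rmse = math.sqrt(sum(r**2 for r in residuals) / len(residuals))

def forward_uncertainty(n_days):
    """Propagate the per-step uncertainty n days out: each day's
    prediction inherits the previous day's uncertainty and adds its
    own, combined in root-sum-square, giving sqrt(n) * rmse."""
    return math.sqrt(n_days * rmse**2)
```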

You also are confusing uncertainty with error. The uncertainty is a quantity that describes the ignorance of a predicted value. Like the 40% chance of rain, it is not a description of physical reality, or physical future. It doesn’t rain 40% of the time everywhere, nor does it rain all the time in 40% of the places. But the uncertainty of rainfall is communicated without our believing that one of the two physical realities is being predicted.

Bill

For the benefit of all, I’ve put together an extensive post that provides quotes, citations, and URLs for a variety of papers — mostly from engineering journals, but I do encourage everyone to closely examine Vasquez and Whiting — that discuss error analysis, the meaning of uncertainty, uncertainty analysis, and the mathematics of uncertainty propagation.

These papers utterly support the error analysis in “Propagation of Error and the Reliability of Global Air Temperature Projections.”

Summarizing: Uncertainty is a measure of ignorance. It is derived from calibration experiments.

Multiple uncertainties propagate as root-sum-square. Root-sum-square has positive and negative roots (+/-). Never anything else, unless one wants to consider the absolute value of the uncertainty.

Uncertainty is an ignorance width. It is not an energy. It does not affect energy balance. It has no influence on TOA energy or any other magnitude in a simulation, or any part of a simulation, period.

Uncertainty does not imply that models should vary from run to run, nor does it imply inter-model variation. Nor does it necessitate lack of TOA balance in a climate model.

For those who are scientists and who insist that uncertainty is an energy and influences model behavior (none of you will be engineers), or that a (+/-)uncertainty is a constant offset, I wish you a lot of good luck because you’ll not get anywhere.

For the deep-thinking numerical modelers who think rmse = constant offset or is a correlation: you’re wrong.

The literature follows:

Moffat RJ. Contributions to the Theory of Single-Sample Uncertainty Analysis. Journal of Fluids Engineering. 1982;104(2):250-8.

“Uncertainty Analysis is the prediction of the uncertainty interval which should be associated with an experimental result, based on observations of the scatter in the raw data used in calculating the result.

“Real processes are affected by more variables than the experimenters wish to acknowledge. A general representation is given in equation (1), which shows a result, R, as a function of a long list of real variables. Some of these are under the direct control of the experimenter, some are under indirect control, some are observed but not controlled, and some are not even observed.

R=R(x_1,x_2,x_3,x_4,x_5,x_6, . . . ,x_N)

It should be apparent by now that the uncertainty in a measurement has no single value which is appropriate for all uses. The uncertainty in a measured result can take on many different values, depending on what terms are included. Each different value corresponds to a different replication level, and each would be appropriate for describing the uncertainty associated with some particular measurement sequence.

The Basic Mathematical Forms: “The uncertainty estimates, dx_i or dx_i/x_i in this presentation, are based, not upon the present single-sample data set, but upon a previous series of observations (perhaps as many as 30 independent readings) … In a wide-ranging experiment, these uncertainties must be examined over the whole range, to guard against singular behavior at some points.

Absolute Uncertainty: x_i = (x_i)_avg (+/-)dx_i

Relative Uncertainty: x_i = (x_i)_avg (+/-)dx_i/x_i

Uncertainty intervals throughout are calculated as (+/-)sqrt[sum over (error)^2].

The uncertainty analysis allows the researcher to anticipate the scatter in the experiment, at different replication levels, based on present understanding of the system.

The calculated value dR_0 represents the minimum uncertainty in R which could be obtained. If the process were entirely steady, the results of repeated trials would lie within (+/-)dR_0 of their mean …”

Nth Order Uncertainty: “The calculated value of dR_N, the Nth order uncertainty, estimates the scatter in R which could be expected with the apparatus at hand if, for each observation, every instrument were exchanged for another unit of the same type. This estimates the effect upon R of the (unknown) calibration of each instrument, in addition to the first-order component. The Nth order calculations allow studies from one experiment to be compared with those from another ostensibly similar one, or with “true” values.”

Here replace “instrument” with ‘climate model.’ The relevance is immediately obvious. An Nth order GCM calibration experiment averages the expected uncertainty from N models, and allows comparison of the results of one model run with another in the sense that the reliability of their predictions can be evaluated against the general dR_N.

Continuing:

“The Nth order uncertainty calculation must be used wherever the absolute accuracy of the experiment is to be discussed. First order will suffice to describe scatter on repeated trials, and will help in developing an experiment, but Nth order must be invoked whenever one experiment is to be compared with another, with computation, analysis, or with the ‘truth.’”

Nth order uncertainty:

“* Includes instrument calibration uncertainty, as well as unsteadiness and interpolation.

“* Useful for reporting results and assessing the significance of differences between results from different experiments, and between computation and experiment.”

The basic combinatorial equation is the Root-Sum-Square:

dR = sqrt[sum over((dR/dx_i)*dx_i)^2]”

https://doi.org/10.1115/1.3241818
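Moffat’s combinatorial equation can be sketched numerically. The `propagate` helper below is an assumption for illustration: it estimates the partial derivatives dR/dx_i by finite differences and combines the terms in root-sum-square.

```python
import math

def propagate(R, x, dx, h=1e-6):
    """Root-sum-square propagation, dR = sqrt[sum((dR/dx_i)*dx_i)^2],
    with the partial derivatives estimated by forward differences."""
    total = 0.0
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        dR_dxi = (R(xp) - R(x)) / h   # numerical partial derivative
        total += (dR_dxi * dx[i]) ** 2
    return math.sqrt(total)

# Example: R = x1 * x2, with uncertainties 0.1 and 0.2 in the measurands.
R = lambda v: v[0] * v[1]
dR = propagate(R, [3.0, 4.0], [0.1, 0.2])  # ~ sqrt((4*0.1)^2 + (3*0.2)^2)
```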

Moffat RJ. Describing the uncertainties in experimental results. Experimental Thermal and Fluid Science. 1988;1(1):3-17.

“The error in a measurement is usually defined as the difference between its true value and the measured value. … The term “uncertainty” is used to refer to “a possible value that an error may have.” … The term “uncertainty analysis” refers to the process of estimating how great an effect the uncertainties in the individual measurements have on the calculated result.

The Basic Mathematics: “This section introduces the root-sum-square (RSS) combination (my bold), the basic form used for combining uncertainty contributions in both single-sample and multiple-sample analyses. In this section, the term dX_i refers to the uncertainty in X_i in a general and nonspecific way: whatever is being dealt with at the moment (for example, fixed errors, random errors, or uncertainties).

Describing One Variable: “Consider a variable X_i, which has a known uncertainty dX_i. The form for representing this variable and its uncertainty is

X=X_i(measured) (+/-)dX_i (20:1)

This statement should be interpreted to mean the following:

* The best estimate of X is X_i (measured)

* There is an uncertainty in X_i that may be as large as (+/-)dX_i

* The odds are 20 to 1 against the uncertainty of X_i being larger than (+/-)dX_i.

“The value of dX_i represents 2-sigma for a single-sample analysis, where sigma is the standard deviation of the population of possible measurements from which the single sample X_i was taken.”

The uncertainty (+/-)dX_i Moffat described exactly represents the (+/-)4 W/m^2 LWCF calibration error statistic derived from the combined individual model errors in the test simulations of 27 CMIP5 climate models.

“For multiple-sample experiments, dX_i can have three meanings. It may represent tS_(N)/sqrt(N) for random error components, where S_(N) is the standard deviation of the set of N observations used to calculate the mean value (X_i)_bar and t is the Student’s t-statistic appropriate for the number of samples N and the confidence level desired. It may represent the bias limit for fixed errors (this interpretation implicitly requires that the bias limit be estimated at 20:1 odds). Finally, dX_i may represent U_95, the overall uncertainty in X_i.”

From the “basic mathematics” section above, the overall uncertainty U = root-sum-square = sqrt[sum over((+/-)dX_i)^2] = the root-sum-square of errors (rmse). That is, U = sqrt[sum over((+/-)dX_i)^2] = (+/-)rmse.

“The result R of the experiment is assumed to be calculated from a set of measurements using a data interpretation program (by hand or by computer) represented by

R = R(X_1, X_2, X_3, …, X_N)

The objective is to express the uncertainty in the calculated result at the same odds as were used in estimating the uncertainties in the measurements.

“The effect of the uncertainty in a single measurement on the calculated result, if only that one measurement were in error, would be

dR_X_i = (dR/dX_i)*dX_i

When several independent variables are used in the function R, the individual terms are combined by a root-sum-square method.

dR = sqrt[sum over((dR/dX_i)*dX_i)^2]

“This is the basic equation of uncertainty analysis. Each term represents the contribution made by the uncertainty in one variable, dX_i, to the overall uncertainty in the result, dR.”

http://www.sciencedirect.com/science/article/pii/089417778890043X

Vasquez VR, Whiting WB. Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods. Risk Analysis. 2005;25(6):1669-81.

“[S]ystematic errors are associated with calibration bias in the methods and equipment used to obtain the properties. Experimentalists have paid significant attention to the effect of random errors on uncertainty propagation in chemical and physical property estimation. However, even though the concept of systematic error is clear, there is a surprising paucity of methodologies to deal with the propagation analysis of systematic errors. The effect of the latter can be more significant than usually expected.

“Usually, it is assumed that the scientist has reduced the systematic error to a minimum, but there are always irreducible residual systematic errors. On the other hand, there is a psychological perception that reporting estimates of systematic errors decreases the quality and credibility of the experimental measurements, which explains why bias error estimates are hardly ever found in literature data sources.

“Of particular interest are the effects of possible calibration errors in experimental measurements. The results are analyzed through the use of cumulative probability distributions (cdf) for the output variables of the model.”

“A good general definition of systematic uncertainty is the difference between the observed mean and the true value.”

“Also, when dealing with systematic errors we found from experimental evidence that in most of the cases it is not practical to define constant bias backgrounds. As noted by Vasquez and Whiting (1998) in the analysis of thermodynamic data, the systematic errors detected are not constant and tend to be a function of the magnitude of the variables measured.”

“Additionally, random errors can cause other types of bias effects on output variables of computer models. For example, Faber et al. (1995a, 1995b) pointed out that random errors produce skewed distributions of estimated quantities in nonlinear models. Only for linear transformation of the data will the random errors cancel out.”

“Although the mean of the cdf for the random errors is a good estimate for the unknown true value of the output variable from the probabilistic standpoint, this is not the case for the cdf obtained for the systematic effects, where any value on that distribution can be the unknown true. The knowledge of the cdf width in the case of systematic errors becomes very important for decision making (even more so than for the case of random error effects) because of the difficulty in estimating which is the unknown true output value. (emphasis in original)

“It is important to note that when dealing with nonlinear models, equations such as Equation (2) will not estimate appropriately the effect of combined errors because of the nonlinear transformations performed by the model.”

Equation (2) is the standard uncertainty propagation sqrt[sum over((+/-)sys error statistic)^2].

“In principle, under well-designed experiments, with appropriate measurement techniques, one can expect that the mean reported for a given experimental condition corresponds truly to the physical mean of such condition, but unfortunately this is not the case under the presence of unaccounted systematic errors.

“When several sources of systematic errors are identified, beta is suggested to be calculated as a mean of bias limits or additive correction factors as follows:

beta ~ sqrt[sum over(theta_S_i)^2], where i defines the sources of bias errors and theta_S is the bias range within the error source i. Similarly, the same approach is used to define a total random error based on individual standard deviation estimates,

e_k = sqrt[sum over(sigma_R_i)^2]

“A similar approach for including both random and bias errors in one term is presented by Deitrich (1991) with minor variations, from a conceptual standpoint, from the one presented by ANSI/ASME (1998).”

http://dx.doi.org/10.1111/j.1539-6924.2005.00704.x
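The two pooling rules quoted above can be sketched directly; every number below is an illustrative assumption, not a value from the paper.

```python
import math

# Assumed bias ranges theta_S_i from identified systematic-error sources,
# and standard deviations sigma_R_i from random-error sources.
theta_S = [0.5, 1.2, 0.8]
sigma_R = [0.3, 0.4]

# beta ~ sqrt[sum over (theta_S_i)^2]: pooled systematic term.
beta = math.sqrt(sum(t ** 2 for t in theta_S))

# e_k = sqrt[sum over (sigma_R_i)^2]: pooled random term.
e_k = math.sqrt(sum(s ** 2 for s in sigma_R))
```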

Kline SJ. The Purposes of Uncertainty Analysis. Journal of Fluids Engineering. 1985;107(2):153-60.

The Concept of Uncertainty: “Since no measurement is perfectly accurate, means for describing inaccuracies are needed. It is now generally agreed that the appropriate concept for expressing inaccuracies is an “uncertainty” and that the value should be provided by an “uncertainty analysis.”

“An uncertainty is not the same as an error. An error in measurement is the difference between the true value and the recorded value; an error is a fixed number and cannot be a statistical variable. An uncertainty is a possible value that the error might take on in a given measurement. Since the uncertainty can take on various values over a range, it is inherently a statistical variable.

The term “calibration experiment” is used in this paper to denote an experiment which: (i) calibrates an instrument or a thermophysical property against established standards; (ii) measures the desired output directly as a measurand so that propagation of uncertainty is unnecessary.

“The information transmitted from calibration experiments into a complete engineering experiment on engineering systems or a record experiment on engineering research needs to be in a form that can be used in appropriate propagation processes (my bold). … Uncertainty analysis is the sine qua non for record experiments and for systematic reduction of errors in experimental work.

“Uncertainty analysis is … an additional powerful cross-check and procedure for ensuring that requisite accuracy is actually obtained with minimum cost and time.

Propagation of Uncertainties Into Results: “In calibration experiments, one measures the desired result directly. No problem of propagation of uncertainty then arises; we have the desired results in hand once we complete measurements. In nearly all other experiments, it is necessary to compute the uncertainty in the results from the estimates of uncertainty in the measurands. This computation process is called “propagation of uncertainty.”

“Let R be a result computed from n measurands x_1, …, x_n, and let W denote an uncertainty, with the subscript indicating the variable. Then, in dimensional form, we obtain: W_R = sqrt[sum over(error_i)^2].”

https://doi.org/10.1115/1.3242449

Henrion M, Fischhoff B. Assessing uncertainty in physical constants. American Journal of Physics. 1986;54(9):791-8.

“‘Error’ is the actual difference between a measurement and the value of the quantity it is intended to measure, and is generally unknown at the time of measurement. ‘Uncertainty’ is a scientist’s assessment of the probable magnitude of that error.”

https://aapt.scitation.org/doi/abs/10.1119/1.14447