Critique of “Propagation of Error and the Reliability of Global Air Temperature Predictions”

From Dr. Roy Spencer’s Blog

September 11th, 2019 by Roy W. Spencer, Ph. D.

I’ve been asked for my opinion by several people about this new published paper by Stanford researcher Dr. Patrick Frank.

I’ve spent a couple of days reading the paper, programming his Eq. 1 (a simple “emulation model” of climate model output), and including his error propagation term (Eq. 6) to make sure I understand his calculations.

Frank has provided the numerous peer reviewers’ comments online, which I have purposely not read in order to provide an independent review. But I mostly agree with his criticism of the peer review process in his recent WUWT post where he describes the paper in simple terms. In my experience, “climate consensus” reviewers sometimes give the most inane and irrelevant objections to a paper if they see that the paper’s conclusion in any way might diminish the Climate Crisis™.

Some reviewers don’t even read the paper; they just look at the conclusions, see who the authors are, and make a decision based upon their preconceptions.

Readers here know I am critical of climate models in the sense they are being used to produce biased results for energy policy and financial reasons, and their fundamental uncertainties have been swept under the rug. What follows is not meant to defend current climate model projections of future global warming; it is meant to show that — as far as I can tell — Dr. Frank’s methodology cannot be used to demonstrate what he thinks he has demonstrated about the errors inherent in climate model projection of future global temperatures.

A Very Brief Summary of What Causes a Global-Average Temperature Change

Before we go any further, you must understand one of the most basic concepts underpinning temperature calculations: With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.

So, if energy loss is less than energy gain, warming will occur. In the case of the climate system, the warming in turn results in an increased loss of infrared radiation to outer space. The warming stops once the temperature has risen to the point that the increased loss of infrared (IR) radiation to outer space (quantified through the Stefan-Boltzmann [S-B] equation) once again achieves global energy balance with absorbed solar energy.

While the specific mechanisms might differ, these energy gain and loss concepts apply similarly to the temperature of a pot of water warming on a stove. Under a constant low flame, the water temperature stabilizes once the rate of energy loss from the water and pot equals the rate of energy gain from the stove.

The climate stabilizing effect from the S-B equation (the so-called “Planck effect”) applies to Earth’s climate system, Mars, Venus, and computerized climate models’ simulations. Just for reference, the average flows of energy into and out of the Earth’s climate system are estimated to be around 235-245 W/m2, but we don’t really know for sure.
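To make the energy-balance bookkeeping concrete, here is a minimal sketch (not any climate model’s actual code) of a zero-dimensional energy budget in which warming continues only until the Stefan-Boltzmann infrared loss again matches the absorbed solar input. The heat capacity, effective emissivity, and forcing values are illustrative assumptions only.

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
ABSORBED_SOLAR = 240.0   # globally averaged absorbed solar flux, W m^-2 (illustrative)
EPS_EFF = 0.612          # effective emissivity giving ~288 K at balance (illustrative)
HEAT_CAP = 1.0e8         # effective heat capacity, J m^-2 K^-1 (illustrative)
DT = 86400.0             # one-day time step, in seconds

def step(temp_k, forcing=0.0):
    """Advance temperature one step using the TOA energy imbalance (1st Law)."""
    imbalance = ABSORBED_SOLAR + forcing - EPS_EFF * SIGMA * temp_k**4  # W m^-2
    return temp_k + DT * imbalance / HEAT_CAP

temp = 288.0                                   # start near today's global mean, K
for _ in range(50 * 365):                      # ~50 years with a +4 W m^-2 forcing applied
    temp = step(temp, forcing=4.0)
print(f"New balance temperature: {temp:.1f} K")  # warming stops near the new balance point
```

With no feedbacks beyond the Planck (S-B) response, the +4 W/m2 forcing in this toy produces a bit over 1 deg. C of warming before balance is restored.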

What Frank’s Paper Claims

Frank’s paper takes an example known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) and assumes that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation, propagating the error forward in time during his emulation model’s integration. The result is a huge amount (as much as 20 deg. C or more) of spurious model warming (or cooling) in future global average surface air temperature (GASAT).

He claims (I am paraphrasing) that this is evidence that the models are essentially worthless for projecting future temperatures, as long as such large model errors exist. This sounds reasonable to many people. But, as I will explain below, the methodology of using known climate model errors in this fashion is not valid.

First, though, a few comments. On the positive side, the paper is well-written, with extensive examples, and is well-referenced. I wish all “skeptics” papers submitted for publication were as professionally prepared.

He has provided more than enough evidence that the output of the average climate model for GASAT at any given time can be approximated as just an empirical constant times a measure of the accumulated radiative forcing at that time (his Eq. 1). He calls this his “emulation model”, and his result is unsurprising, and even expected. Since global warming in response to increasing CO2 is the result of an imposed energy imbalance (radiative forcing), it makes sense you could approximate the amount of warming a climate model produces as just being proportional to the total radiative forcing over time.
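As a rough illustration of what such an emulation looks like in practice, here is a minimal sketch. The coefficient and the forcing series are invented placeholders, not the fitted values from Dr. Frank’s Eq. 1; the point is only that warming scales with accumulated forcing.

```python
import numpy as np

# Invented placeholders (NOT the paper's fitted values): an empirical coefficient
# and a series of annual greenhouse-gas forcing increments.
coeff = 0.4                                   # deg C per W m^-2, assumed for illustration
annual_forcing_step = np.full(100, 0.04)      # W m^-2 added each year, assumed

# Eq. 1-style emulation: warming at any time is taken as proportional to the
# radiative forcing accumulated up to that time.
accumulated_forcing = np.cumsum(annual_forcing_step)   # W m^-2
warming = coeff * accumulated_forcing                  # deg C
print(f"Emulated warming after 100 years: {warming[-1]:.2f} deg C")   # 1.60 deg C
```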

Frank then goes through many published examples of the known bias errors climate models have, particularly for clouds, when compared to satellite measurements. The modelers are well aware of these biases, which can be positive or negative depending upon the model. The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles; otherwise all models would get very nearly the same cloud amounts.

But there are two fundamental problems with Dr. Frank’s methodology.

Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux

If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.

Why?

Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.

For example, the following figure shows 100 year runs of 10 CMIP5 climate models in their pre-industrial control runs. These control runs are made by modelers to make sure that there are no long-term biases in the TOA energy balance that would cause spurious warming or cooling.

Figure 1. Output of Dr. Frank’s emulation model of global average surface air temperature change (his Eq. 1) with a +/- 2 W/m2 global radiative imbalance propagated forward in time (using his Eq. 6) (blue lines), versus the yearly temperature variations in the first 100 years of integration of the first 10 models archived at https://climexp.knmi.nl/selectfield_cmip5.cgi?id=someone@somewhere.

If what Dr. Frank is claiming was true, the 10 climate model runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling. But they don’t. You can barely see the yearly temperature deviations, which average about +/-0.11 deg. C across the ten models.

Why don’t the climate models show such behavior?

The reason is that the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance. It doesn’t matter how correlated or uncorrelated those various errors are with each other: they still sum to zero, which is why the climate model trends in Fig 1 are only +/- 0.10 C/Century… not +/- 20 deg. C/Century. That’s a factor of 200 difference.

This (first) problem with the paper’s methodology is, by itself, enough to conclude the paper’s methodology and resulting conclusions are not valid.

The Error Propagation Model is Not Appropriate for Climate Models

The new (and generally unfamiliar) part of his emulation model is the inclusion of an “error propagation” term (his Eq. 6). After introducing Eq. 6 he states,

“Equation 6 shows that projection uncertainty must increase in every simulation (time) step, as is expected from the impact of a systematic error in the deployed theory.”

While this error propagation model might apply to some issues, there is no way that it applies to a climate model integration over time. If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. It doesn’t somehow accumulate (as the blue curves indicate in Fig. 1) as the square root of the summed squares of the error over time (his Eq. 6).

Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step. Dr. Frank has chosen 1 year as the time step (with a +/-4 W/m2 assumed energy flux error), which will cause a certain amount of error accumulation over 100 years. But if he had chosen a 1 month time step, there would be 12x as many error accumulations and a much larger deduced model error in projected temperature. This should not happen, as the final error should be largely independent of the model time step chosen. Furthermore, the assumed error with a 1 month time step would be even larger than +/-4 W/m2, which would have magnified the final error after a 100 year integration even more. This makes no physical sense.
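The time-step sensitivity is easy to check numerically. The sketch below (my own construction, not code from the paper) applies a root-sum-square accumulation of the kind Eq. 6 describes: with a fixed per-step uncertainty the propagated total grows as the square root of the number of steps, so annual and monthly stepping of the same century give different answers unless the per-step statistic is rescaled.

```python
import math

def propagated_uncertainty(per_step_sigma, n_steps):
    """Root-sum-square accumulation of identical, independent per-step uncertainties."""
    return math.sqrt(n_steps * per_step_sigma**2)   # equals per_step_sigma * sqrt(n_steps)

# The same century, stepped two ways with the same (unscaled) per-step statistic
annual  = propagated_uncertainty(per_step_sigma=1.0, n_steps=100)    # 100 annual steps
monthly = propagated_uncertainty(per_step_sigma=1.0, n_steps=1200)   # 1200 monthly steps

print(f"annual steps : +/-{annual:.1f} (arbitrary units)")   # +/-10.0
print(f"monthly steps: +/-{monthly:.1f}")                    # +/-34.6, i.e. sqrt(12) larger
```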

I’m sure Dr. Frank is much more expert in the error propagation model than I am. But I am quite sure that Eq. 6 does not represent how a specific bias in a climate model’s energy flux component would change over time. It is one thing to invoke an equation that might well be accurate and appropriate for certain purposes, but that equation is the result of a variety of assumptions, and I am quite sure one or more of those assumptions are not valid in the case of climate model integrations. I hope that a statistician such as Dr. Ross McKitrick will examine this paper, too.

Concluding Comments

There are other, minor, issues I have with the paper. Here I have outlined the two most glaring ones.

Again, I am not defending the current CMIP5 climate model projections of future global temperatures. I believe they produce about twice as much global warming of the atmosphere-ocean system as they should. Furthermore, I don’t believe that they can yet simulate known low-frequency oscillations in the climate system (natural climate change).

But in the context of global warming theory, I believe the largest model errors are the result of a lack of knowledge of the temperature-dependent changes in clouds and precipitation efficiency (thus free-tropospheric vapor, thus water vapor “feedback”) that actually occur in response to a long-term forcing of the system from increasing carbon dioxide. I do not believe it is because the fundamental climate modeling framework is not applicable to the climate change issue. Having multiple modeling centers around the world, each performing multiple experiments with its climate model under different assumptions, is still the best strategy to get a handle on how much future climate change there *could* be.

My main complaint is that modelers are either deceptive about, or unaware of, the uncertainties in the myriad assumptions — both explicit and implicit — that have gone into those models.

There are many ways that climate models can be faulted. I don’t believe that the current paper represents one of them.

I’d be glad to be proved wrong.

Admin
September 11, 2019 2:13 pm

Roy, Pat accepts that the models have been tuned to avoid radical departures; his point is that the magnitude of the errors being swept under the carpet is evidence that the models are unphysical. Tuning doesn’t make the errors disappear, it just hides them. Agreement with past temperatures is not evidence the models are right, if the models get other things very wrong.

Roy W. Spencer
Reply to  Eric Worrall
September 11, 2019 3:07 pm

Eric:

A 12% variation across the model “errors” in LWCF is probably not much bigger than our uncertainty in LWCF. Treating satellite measurements of cloud properties as truth is itself somewhat uncertain: spatial resolution matters, as do detection thresholds and the very definition of “cloud”.

So, there is uncertainty in things like LWCF, right? Well, despite that fact, 20 different models from around the world, which have differing error-prone and uncertain values for all of the components impacting Earth’s radiative energy budget, give about the same results, varying in their warming rate only depending on (1) climate sensitivity (“feedbacks”), and (2) the rate of ocean heat storage. This suggests that their results don’t really depend upon model biases in specific processes.

That’s why they run different models with different groups in charge: to find out what a range of assumptions produces.

This is NOT where model projection uncertainty lies.

PTP
Reply to  Roy W. Spencer
September 11, 2019 3:59 pm

Is that not what the author was referring to, by the distinction between Precision and Accuracy; that statistical uncertainties may offset, but margins of physical error can only be cumulative?

What I got from the paper, is that regardless of what the uncertainties are, and no matter how precisely tuned the models might be, the margin of physical error is so broad, that scientifically, the models tell us… Absolutely Nothing.

Latitude
Reply to  Roy W. Spencer
September 11, 2019 4:20 pm

when they hindcast these models…which adjusted data set do they use?…

..and are they able to run the models fast enough…before they adjust past temperatures again? 🙂

Greg Goodman
Reply to  Latitude
September 11, 2019 5:47 pm

Agreement with past temperatures is not evidence the models are right, if the models get other things very wrong.

Models do not even get hindcasts right. I am unaware of any model which can reproduce the early 20th c. warming. They start too high and end too low, ie they are tuned to the average but do not represent the actual rate of warming, which is comparable to the late 20th c. warming.

Lacis et al 1992 used “basic physics” modelling to calculate volcanic forcing. The same group in Hansen et al 2005 had abandoned all attempts at physically realistic modelling and arbitrarily tuned the scaling of measured AOD to radiative forcing, the sole criterion being to tweak climate model output to fit the climate record. They jettisoned a valid parameter to gain a fudge factor.

Once you take this approach, not only have you lost the claim to be using known, basic physics but you basically have an ill-conditioned problem. You have a large number of unconstrained variables with which to fit your model. You will get a reasonable fit but there is no reason that it will be physically real and meaningful. Take any number of red noise series with arbitrary scaling and you can model any climate data you wish to obtain a vague fit which looks OK , if you close one eye.

This also affords you the possibility to fix one parameter (e.g. GHG forcing) at an elevated level and regress the model to fit other parameters around it. This, in truth, is what modellers are doing with the so-called “control runs”. That is a deceptive misnomer, since it does not control anything in the usual sense of the term in scientific studies, because we don’t have any data for a climate without the GHG forcing. They are simply running the model which has already been tuned, with one variable missing. The difference compared to the standard run is precisely the GHG effect they have hard-coded into the model. It is not a result, it is an input.

This is well known to those doing the modelling, and to present this as a “control run” and show it as a “proof” of the extent of CO2 warming is, to be honest, a scientific fraud.

Latitude
Reply to  Greg Goodman
September 12, 2019 4:49 am

When they have to adjust things…they don’t even understand….to get the results they want
If they understood…they wouldn’t need to adjust

Admin
Reply to  Roy W. Spencer
September 11, 2019 6:10 pm

Dr. Spencer, if you subtract observed cloud cover from predicted cloud cover over say a 20 year period:

cloud error = predicted – observed

Then flip the sign of the error:

cloud cover (test) = observed – cloud error

How much impact does using cloud cover (test) in the model have on predicted temperature, compared to the temperature predicted by the original projection?
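Written out as arithmetic (with invented numbers purely to show the bookkeeping of the proposed test):

```python
import numpy as np

# Hypothetical 20-year mean cloud-cover fields, in percent (invented values)
predicted = np.array([62.0, 55.0, 71.0, 48.0])
observed  = np.array([60.0, 58.0, 66.0, 50.0])

cloud_error = predicted - observed      # the model's cloud error
cloud_test  = observed - cloud_error    # error of the same size, opposite sign
print(cloud_test)                       # [58. 61. 61. 52.]
```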

Editor
Reply to  Roy W. Spencer
September 12, 2019 1:35 am

Roy – Thanks for your critical analysis. If the climate alarmist community were as open to critical analysis then there would be no alarm, no global warming scare.
But coming back to your analysis – I’m not sure that your argument re the “20 different models” is correct. All the models are tuned to the same very recent observation history, so their results are very unlikely to differ by much over quite a significant future period. In other words, the models are not as independent of each other as some would like to claim. In particular, they all seem to have very similar climate sensitivity – and that’s a remarkable absence of independence.

Crispin in Waterloo
Reply to  Eric Worrall
September 11, 2019 4:32 pm

Eric

People are confusing uncertainty with a coefficient of variation in the results. A CoV is not an uncertainty per se. It is based on the outputs and their variability. A model that has been validated can be used to predict results, with a confidence given based on an analysis of the output data set. However…

The uncertainty about the result is an inherent property of the measurement inputs or system. Uncertainties get propagated through the formulas. They can be expressed as an absolute error (±n) or a relative error (±n%). This is not the same thing as a one-sigma coefficient of variation, at all.

Roy is claiming that having a low CoV means the uncertainty is low. The mean output value, the CoV and the uncertainty are different things.
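A small numerical sketch of the distinction being drawn here, using invented numbers: the spread of an ensemble of outputs (a CoV) and the uncertainty propagated from the inputs are computed from different things and need not agree.

```python
import numpy as np

# Invented ensemble of model outputs that happen to agree closely
outputs = np.array([2.9, 3.0, 3.1, 3.0, 3.0])           # e.g. deg C of projected warming
cov = outputs.std(ddof=1) / outputs.mean()
print(f"CoV of the outputs: {100 * cov:.1f}%")           # ~2.4%, a small spread

# Uncertainty propagated from the inputs of a toy formula y = a * b, using the
# standard first-order rule for independent inputs:
#   (sigma_y / y)^2 = (sigma_a / a)^2 + (sigma_b / b)^2
a, sigma_a = 10.0, 2.0
b, sigma_b = 0.3, 0.06
y = a * b
sigma_y = y * np.sqrt((sigma_a / a) ** 2 + (sigma_b / b) ** 2)
print(f"y = {y:.1f} +/- {sigma_y:.1f}")                  # about +/-28%, regardless of the tight output spread
```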

Admin
Reply to  Crispin in Waterloo
September 11, 2019 5:43 pm

Pat’s point is the cloud error has to be propagated because the error becomes part of the input state of the next iteration of the model.

I asked Pat whether the period of the model iteration makes a difference, from memory he said he tried different periods and it made very little difference.

Dr. Spencer is right that the models don’t exhibit wild departures and provide a reasonable hindcast of energy balance, but I see this as curve fitting. It won’t tell you if, say, another pause is about to occur, because the physics of the model with respect to important features such as cloud cover is wrong.

JEHILL
Reply to  Crispin in Waterloo
September 12, 2019 6:30 am

“A model that has been validated, can be used to predict results and a confidence given based on an analysis of the output data set.”

Claiming an atmospheric model is validated is spurious or intellectually dishonest at best and probably just plain ole malfeasance.

A blank 16 foot (roughly 4.9 meters) wall, a blindfolded dart thrower and a dart would yield better results.

MattSt
Reply to  Eric Worrall
September 11, 2019 9:34 pm

Eric, I think another way of expressing your point is that independent errors will add in quadrature, and tend to cancel out. While the span of possible outcomes will increase greatly, the probability distribution remains centered on zero, and vast numbers of runs would be required to demonstrate the tails of the distribution. As a control system designer, I typically impose sanity checks to constrain multivariate systems to account for mathematically possible, but physically unreasonable outcomes. On a verifiable system, this is an interesting side effect of the imperfect mathematical models of physical processes. I suspect strongly that climate models are pushed beyond what is mathematically acceptable for any real world application.
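A quick Monte Carlo sketch of that intuition (invented numbers): independent per-step errors leave the ensemble mean near zero, the spread grows roughly as the square root of the number of steps, and the extreme trajectories in the tails are rare.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_steps, sigma = 10_000, 100, 1.0

# Each run accumulates 100 independent, zero-mean errors of size sigma
final = rng.normal(0.0, sigma, size=(n_runs, n_steps)).sum(axis=1)

print(f"mean of final values: {final.mean():+.2f}")                       # close to 0
print(f"spread (1 sigma)    : {final.std():.1f}")                          # ~ sigma * sqrt(100) = 10
print(f"runs beyond +/-30   : {(np.abs(final) > 30).sum()} of {n_runs}")   # the 3-sigma tail, rare
```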

Admin
Reply to  MattSt
September 12, 2019 5:22 am

The errors don’t tend to zero; they’re not random, they’re systematic. If I understand Pat’s point properly he is suggesting the models do go wild, but they have been tuned to constrain the temperature at the expense of wild variations in cloud cover.

Reply to  Eric Worrall
September 12, 2019 11:14 am

I’m going to post my reply to Roy here under Eric’s top comment.

But I’ll summarize Roy’s criticism in one line: Roy thinks a calibration error statistic is an energy.

Many of my climate modeler reviewers made the same mistake. It’s an incredible level of ignorance in a trained scientist.

Here’s the reply I posted on Roy’s blog, but without the introductory sentences.

Thanks for posting your thoughts here. I’m glad of the opportunity to dispel a few misconceptions. I’ll move right down to “What Frank’s Paper Claims.”

You start by stating that I take “an example known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) …”

If your “bias” means offset, it is misleading. The LWCF error is a theory error, not a bias offset. That is demonstrated by its pair-wise correlation among all the models. The (+/-)4 W/m^2 is a model calibration error statistic.

I do not assume “that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation.” That’s not an assumption.

It is justified several times in the paper on the grounds that it is an uncertainty in simulated tropospheric thermal energy flux. As such it conditions the simulated impact of CO2 forcing, which is also part of the very same tropospheric thermal energy flux.

Entry of the (+/-)4 W/m^2 into the emulation of projected global average air temperature is fully justified on those grounds.

You go on to write that I propagate “the error forward in time during his emulation model’s integration.” You’re implying here that the (+/-)4 W/m^2 is propagated forward. It’s not.

It’s the uncertainty in air temperature, consequent to the uncertainty in simulated forcing, that is propagated forward.

Then you write, “The result is a huge amount (as much as 20 deg. C or more) of spurious model warming (or cooling) in future global average surface air temperature (GASAT).”

I must say I was really sorry to read that. It’s such a basic mistake. The (+/-)20 C (your number) is not a temperature. It’s an uncertainty statistic. Propagated error does not impact model expectation values. It is evaluated separately from the simulation.

And consider this: the (+/-)20 C uncertainty bars are vertical, not offset. Your understanding of their meaning as temperature would require the model to imply the simultaneous coexistence of an ice house and a greenhouse state.

One of my reviewers incredibly saw the (+/-)20 C as implying the model to be wildly oscillating between hothouse and ice-house states. He did not realize that the vertical bars mean his interpretation of (+/-)20 C as temperature would require both states to be occupied simultaneously.

In any case, Roy, your first paragraph alone has enough mistakes in it to invalidate your entire critique.

The meaning of uncertainty is discussed in Sections 7.3 and 10 of the Supporting Information.

You wrote that, “The modelers are well aware of these biases, which can be positive or negative depending upon the model.”

The errors are both positive and negative across the globe for each model. This is clearly shown in my Figure 4, and in Figures throughout Lauer and Hamilton, 2013. The errors are not bias offsets, as you have them here.

You wrote, “If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.”

You’re mistaking the calibration error statistic for an energy. It is not. And you’ve assigned an implied positive sign to the error statistic, representing it as an energy flux. It isn’t. It’s (+/-)4 W/m^2. Recognizing the (+/-) is critical to understanding.

And let me ask you: what impact would a simultaneously positive and negative energy flux have at the TOA? After all, it’s (+/-)4 W/m^2. If that was a TOA energy flux, as you have it, it would be self-cancelling.

You wrote, “each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.”

And each of those models simulates cloud fraction incorrectly, producing an average calibration error of (+/-)4 W/m^2 in LWCF, even though they are overall energy-balanced. I point out in my paper that the internal climate energy-state can be wrong, even though the overall energy balance is correct.

That’s what the cloud fraction simulation error represents: an internally incorrect climate energy-state.

You wrote, “If what Dr. Frank is claiming was true, the 10 climate model runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling.”

No, they would not.

I’m sorry to say that your comment shows a complete lack of understanding of the meaning of uncertainty.

Calibration error statistics do not impact model expectation values. They are calculated after the fact from model calibration runs.

You wrote, “+/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance.”

I don’t assume that (+/-)4 W/m^2. It is the reported LWCF calibration error statistic in Lauer and Hamilton, 2013.

Second, offsetting errors do not make the underlying physics correct. The correct uncertainty attending offsetting errors is their combination in quadrature and their report as a (+/-) uncertainty in the reported result.

There is no reason to suppose that errors that happen to offset during a calibration period will continue to offset in a prediction of future states. No other field of physical science makes such awful mistakes in thinking.

You are using an incomplete or incorrect physical theory, Roy, adjusting parameters to get spuriously offsetting errors, and then assuming they correct the underlying physics.

All you’re doing is hiding the uncertainty by tuning your models.

Under “Error Propagation …” you wrote, “If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time.”

Once again, you imposed a positive sign on a (+/-) uncertainty error statistic. The error statistic is not an energy flux. It does not perturb the model. It does not show up at the TOA.

Your imposition of that positive sign facilitates your incorrect usage. It’s an enabling mistake.

I have run into this mistaken thinking repeatedly among my reviewers. It’s incredible. It’s as though no one in climate science is ever taught anything about error analysis in undergraduate school.

You wrote, “Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step.”

No, it does not. Several reviewers, including Prof. Zanchettin, raised this question. I answered it to his satisfaction.

The size of the simulation calibration uncertainty statistic will vary with the time over which it is appraised. When this is taken into account, the centennial uncertainty comes out the same every time.

And the time step is not assumed, as you have it. Lauer and Hamilton provided an annual mean error statistic. That annual average calibration error was applied to annual temperature time steps. None of that was assumed.

You should have looked at eqns. 5, and the surrounding text. Here’s the critical point, from the paper: “In equation 5 the step-wise GHG forcing term, ΔF_i, is conditioned by the uncertainty in thermal flux in every step due to the continual imposition of LWCF thermal flux calibration error.”

Eqn. 6 is a generalization of eqns 5.

I’m sorry Roy. You’ve made one very fundamental mistake after another. Your criticism has no force.

Lonny Eachus
September 11, 2019 2:16 pm

Dr. Spencer:

I admire your work but I perceive a logical error in your analysis.

You say: “It doesn’t matter how correlated or uncorrelated those various errors are with each other: they still sum to zero, which is why the climate model trends in Fig 1 are only +/- 0.10 C/Century… not +/- 20 deg. C/Century. That’s a factor of 200 difference.”

But your own description of climate models includes parameterization of clouds, for example. And we know how badly that often fails.

And yes, other parameters are adjusted to compensate for that, still giving the same net TOA output.

But that is not evidence that the models are correct; rather, it is evidence that they are not.

Roy W. Spencer
Reply to  Lonny Eachus
September 11, 2019 3:20 pm

Lonny,

I thought I made it clear, at the beginning and end, that I don’t believe the climate models are correct. I’m only faulting the analysis. Read my response to Eric, above. Parameterizations are fine if they reproduce the average behavior of clouds and the clouds’ dependence on a wide variety of variables. Model forecast errors in warming rates don’t seem to depend upon model biases in various processes. They depend upon (1) feedbacks, and (2) the rate of deep ocean storage.

I made it very clear I’m not saying models are right. I just don’t think they are wrong for the reason Pat gives.

Greg Goodman
Reply to  Roy W. Spencer
September 11, 2019 5:53 pm

Many thanks to Dr Spencer for his analysis. It was immediately obvious to me that the paper was spurious but I did not have the time to go into it in the detail that he did to come up with a direct refutation and solid reasons why.

Being a skeptic means being equally skeptical and critical of everything, not just the results you don’t like.

many thanks for the objective scientific approach.

Reply to  Greg Goodman
September 11, 2019 6:44 pm

+1

Reply to  Roy W. Spencer
September 12, 2019 5:33 am

The models do not give good answers because they assume feedback when there is no feedback. Please read the 5 postulates of thermodynamics plus the zeroth law. I further suggest you read engineering texts, because thermodynamics and heat transfer are engineering subjects. Maybe you should also read something about another engineering subject: dimensional analysis.

John Tillman
September 11, 2019 2:19 pm

As a Research Professor on the Scientific Staff at SLAC National Accelerator Laboratory for going on 34 years, Dr. Frank (Chemistry PhD., Stanford) is highly skilled at data analysis.

Objections to his paper raised by Dr. Spencer and Nick Stokes were also mentioned by reviewers, but the journal found Dr. Frank’s responses valid.

Roy W. Spencer
Reply to  John Tillman
September 11, 2019 3:23 pm

John, let’s be careful about appealing to the authority of journals. Pat had a hard time getting that paper published, and when he did, it was in the one ranked 48th out of 49 Earth science journals by ResearchGate (I keep track of such things in a spreadsheet). The work must stand on its own merits, whether published or not.

Warren
Reply to  Roy W. Spencer
September 11, 2019 4:03 pm

There may be good reasons for such a low ranking; however, that’s simply a side-swipe.
The quality of the editor and reviewers is what we should concentrate on.
Jing-Jia Luo formerly of Australian Bureau of Meteorology is highly regarded and has published in the top journals including on model biases.

John Tillman
Reply to  Roy W. Spencer
September 11, 2019 5:52 pm

Dr. Spencer,

Of course not all journals are created equal, nor their editors. However, the six years IMO owe more to resistance by the Team than to any inherent errors of analysis.

I didn’t mean to appeal to authority, but rather to the persuasiveness of Pat’s work, after subjected to rigorous criticism, then evaluated in that light by competent editors.

He and they might be wrong, but my point is that unbiased reviewers and editors considered objections such as yours, yet decided to go ahead and publish.

Reply to  John Tillman
September 11, 2019 6:47 pm

” the six years IMO owe more to resistance by the Team”
It is due to reviewers seeing what Roy Spencer saw.

John Tillman
Reply to  Nick Stokes
September 11, 2019 7:38 pm

Nick,

Do you know who all the reviewers were who read the paper in its submissions over six years?

I don’t. Pat probably doesn’t. How then can you justify this conclusion?

OTOH, we know that the Team colludes to keep skeptical papers from being published.

Michael S. Kelly LS, BSA Ret.
Reply to  Nick Stokes
September 11, 2019 7:40 pm

I agree with you, Nick, that this is probably one factor contributing to the delay.

Reply to  Nick Stokes
September 11, 2019 8:05 pm

“Do you know who all the reviewers were who read the paper in its submissions over six years?”
I know what they said. Pat posted a huge file of reviews. It’s linked in his “Mark ii” post and earlier ones.

Reply to  Nick Stokes
September 11, 2019 8:13 pm

Or it could just be they share a common misconception. This happens a lot when there are many competing beliefs for which set of partial explanations explains something controversial that has only one possible comprehensive explanation.

I see this all the time as the reflexive rejection that the SB Law quantifies the bulk relationship between the surface temperature and emissions at TOA as a gray body because there’s a common belief among many on both sides that the climate system must be more complicated than that. This appears to be a very powerful belief that unambiguous data has trouble overcoming and even the lack of other physical laws capable of quantifying the behavior in any other way is insufficient to quell that belief.

I put the blame on the IPCC, which has framed the science in a nonsensical way since the first AR, and on the three decades that this garbage has been stewing ever since.

Reply to  Roy W. Spencer
September 12, 2019 12:58 pm

You, Dr. Spencer, rightly criticized appealing to authority. Then you used a similar argument, equally illogical, denigrating the opponent’s authority (the publication).

To be able to both criticize illogic and use the very same illogic in a brief statement is likely an indication of a blind spot. The capacity to do this may be affecting your discourse.

September 11, 2019 2:23 pm

Thank you for your clarification, Dr. Spencer. I was a little suspicious of such a large propagating error in such a short time. Even if wrong, climate models are quite consistent in their response to increasing CO2 levels. That’s why they are easy to emulate with simple models.

Reply to  Javier
September 11, 2019 4:39 pm

I was a little suspicious of such a large propagating error in such a short time.

As was I. If such a large propagating error existed, then surely it would have already manifested itself as a bigger deviation in the model runs from observations over the forecast period (starts 2006).

Like Dr Spencer I am not arguing that the models are “right”, or above questioning; and anyone can see that observations are currently on the low side of the CMIP5 model runs overall. However, as things stand observations remain within the relatively narrow margins of the multi-model range.

If Pat’s hypothesis were right, and the error in the models was as big as he suggests, then after nearly 13 years we would already expect to see a much bigger deviation between model outputs and observations than we currently do.

Kudos to Roy Spencer and WUWT for demonstrating true skepticism here.

Reply to  TheFinalNail
September 11, 2019 7:32 pm

Tuning a dozen parameters on water physics keeps them running (outputs) to expectation. Which to me is the clearest reason the models are junk.

Tuning parameters are just fudge factors: because the values are so poorly constrained, a degeneracy exists across widely different parameter sets. With multiple sets of parameters that “work”, no one knows what the correct parameters in their models are.

Even Dr Spencer’s comment that all models close the energy budget at the TOA I think is incorrect. Some do, but many do not. If they try to close the energy budget in/out, then the model runs far too “hot.” Which is also why the hindcasts had to use excessive levels of aerosols to cool the runs to match the record, and then call that calibration.

Joe Crawford
Reply to  Joel O'Bryan
September 13, 2019 2:51 pm

“Tuning a dozen parameters on water physics keeps them running (outputs) to expectation. Which to me is the clearest reason the models are junk.”

To that I have to add one more thing that initially pegged my BS meter. Several years ago (it would now probably take me days or even weeks to find a reference) I remember several comments either here or on Steve M’s site that not only did the models include many adjustable parameters but in order to keep them within somewhat acceptable ranges on longer runs they had to include limit checks on various calculations in order to keep them from going totally off the rails.

I haven’t seen where the modelers ever got their code to the point where these limit checks have been removed. Unless/until then, there is no way I would be able to accept any generated output as anything other than SWAGs.

John_QPubliv
Reply to  TheFinalNail
September 11, 2019 11:16 pm

“If such a large propagating error existed, then surely it would have already manifested itself as a bigger deviation in the model runs from observations over the forecast period (starts 2006)…”

Wrong. That is the whole point of Pat Frank’s analysis. The reason you don’t see the deviations in the model is because the models don’t contain the physics necessary to model the reality that was used to determine the uncertainty. The uncertainty was derived from satellite measurements, and satellites do not need sophisticated models to determine cloud behavior. They just measure it.

Jean Parisot
Reply to  TheFinalNail
September 12, 2019 7:18 pm

“then after nearly 13 years we would already expect to see a much bigger deviation between model outputs and observations than we currently do.”

Why?

Latitude
Reply to  Javier
September 11, 2019 5:17 pm

Models are dependent on what numbers you feed them…

..and past temps have been so jiggered no model will ever be right..present temps not excluded either

Even the numbers they produce…an average….when they claim the Arctic has increased 3.6F…yet, the global average temp has only increased 1.4F….somewhere got colder

Reply to  Latitude
September 12, 2019 10:48 am

The temps have been jiggered to specifically match the CO2 concentration, which is why this graph by Tony Heller has a straight line with an R squared of 0.99
See here:
https://twitter.com/NickMcGinley1/status/1150523905293148160?s=20

Is this the data they feed into the models?
Is this the past conditions they are tuned against?
They have gamed all of the data at this point.

Greg
Reply to  Javier
September 11, 2019 6:02 pm

Even if wrong, climate models are quite consistent in their response to increasing CO2 levels. That’s why they are easy to emulate with simple models.

That is because they ARE simple models … all the complexity is a red scarf trick to add a pretense of deep understanding, but the basic aim is to produce some noise around the monotonic rise in CO2 and use things like volcanic forcing to provide a couple of dips in about the right places to make it look more realistic.

And when the first 10y of projections fails abysmally, rather than attempt to improve the models ( which would require reducing the CO2 forcing ) they simply change the data to fit the models : see Karlisation and the ‘pause buster’ paper.

Larry in Texas
September 11, 2019 2:24 pm

Great response, Roy, very informative. I have one question though: what is the basis for the assumed “energy balance” in their modeled system? Is it previous, hard and reliable temperature and other weather-type data that they can calculate an assumed “energy balance” from? If not, then how is it calculated? What assumptions go into determining an “energy balance” starting point? Is it possible a regnant bias could exist in how they calculate a reliable equilibrium from which to go forward?

Like you, I hope Ross McKitrick offers his thoughts on the subject.

Roy W. Spencer
Reply to  Larry in Texas
September 11, 2019 3:29 pm

Great questions, and I have harped on this for years. There is little basis for the assumption. We don’t know whether the climate system was in energy balance in the 1700s and 1800s. Probably not, since the evidence suggests it was warming during that time. But all the modelers assume the climate system wouldn’t change on multi-decadal time scales without human interference. That’s one of the main points I make: Recent warming of the global oceans to 2,000 m depth represents an energy imbalance of 1 part in 250. We don’t know any of the individual energy fluxes in nature to even 1 part in 100. That’s why I say human-caused warming has a large dose of faith. Not necessarily bad, but at least be honest about it.

Michael S. Kelly LS, BSA Ret.
Reply to  Roy W. Spencer
September 11, 2019 5:52 pm

Another way of stating your point, I believe, is that there is no evidence that the climate system has any equilibrium states. Thus the adjustment of models to start with an equilibrium state (energy I/O balanced at the top of the atmosphere) is wrong to begin with. Calculating any departure from an incorrect equilibrium state due to human CO2 release would then not have any meaning, as would calculating an equilibrium sensitivity to CO2 content. Please correct me if I’m wrong.

I do see Dr. Frank’s point about uncertainty in the initial TOA energy balance causing problems, but climate models have way more problems with initial conditions than that. For example, just setting up all of the temperatures, pressures, and velocities at the grid points is fraught with uncertainty, and the actual codes contain numerical damping schemes to keep them from blowing up due to an inconsistent set of initial conditions. Those schemes become part of the model equations, and often have unforeseen effects throughout the integration period. That’s just the tip of the iceberg.

As someone very familiar with engineering CFD, I look at the problem of making a global circulation model and wonder what honest practitioner of the art would ever tell anyone that it was possible. But then, the people writing the checks are so far from being able to understand the futility of the task that it’s easy to swindle them.

Prjindigo
Reply to  Roy W. Spencer
September 12, 2019 9:10 am

Given the energy balance point is 0°K I doubt the models are intrinsically accurate OR precise based on the input they receive.

Original author is correct, the model intentionally removes error and smooths input and thus is invalid from a mathematical standpoint. There are no exceptions when dealing with statistical analysis and statistical input… if you have to “insanetize” your inputs your output, no matter how cloyingly close to your desired output, is just as wrong as if the computer you ran it on exploded into flames and teleported to the next office.

You run the math and if it doesn’t give you the correct answer then your inputs were wrong.

September 11, 2019 2:29 pm

On the positive side, Dr Spencer yet again shows he is his own man. On the negative side, just because errors cancel out does not mean that the errors do not exist. On the contrary, it means that the total sum of RMS errors is even larger than if they did not cancel out.

However, as Dr Spencer has done us the courtesy of putting in the time to understand the paper, perhaps I should comment further only when I have done the same.

Lonny Eachus
Reply to  Mike Haseler (Scottish Sceptic)
September 11, 2019 2:43 pm

This was more-or-less the substance of my own reply.

Stephen Wilde
September 11, 2019 2:31 pm

In accountancy one has the interesting phenomenon of multiple ‘compensating’ errors self cancelling so that one thinks the accounts are correct when they are not.
This is similar.
Many aspects of the climate are currently unquantifiable so multiple potentially inaccurate parameters are inserted into the starting scenario.
That starting scenario is then tuned to match real world observations but it contains all those multiple compensating errors.
Each one of those errors then compounds with the passage of time and the degree of compensating between the various errors may well vary.
The fact is that over time the inaccurate net effect of the errors accumulates faster and faster with rapidly reducing prospects of unravelling the truth.
Climate models are like a set of accounts stuffed to the brim with errors that sometimes offset and sometimes compound each other such that with the passing of time the prospect of unravelling the mess reduces exponentially.
Pat Frank is explaining that in mathematical terms but given the confusion on the previous thread maybe it is best to simply rely on verbal conceptual imagery to get the point across.
Climate models are currently worthless and dangerous.
Roy appears to have missed the point.
The hockey stick graph is the perfect illustration of Pat’s point.
The errors accumulate more rapidly with time so that the model output diverges exponentially with time.
A hockey stick profile is to be expected from Pat’s analysis of the flaws in climate models as they diverge from reality more and more rapidly over time.

Mr.
Reply to  Stephen Wilde
September 11, 2019 2:54 pm

Good analogy about forced balancing of financial accounts.
You never know what degrees of shenanigans might have been committed

Reply to  Mr.
September 11, 2019 5:33 pm

The biggest shenanigan is assuming that natural emissions (which are an order of magnitude greater than fossil fuel burning emissions) are balanced out by natural sinks over time, leaving anthropogenic emissions to “accumulate” in the atmosphere. The cold polar water sinks don’t know the difference. Both will be absorbed at the same rate. Atmospheric concentrations of CO2 have been rising because the rates of natural emissions have been rising faster than the polar water sink rates. On top of that, emissions from man’s burning (excepting jets) are not likely ever to get to the polar regions, as absorption in clouds and rain will return them to the surface to become a small part of natural emissions.

Reply to  Fred Haynie
September 11, 2019 10:28 pm

“The biggest shenanigan is assuming that natural emissions (which are an order of magnitude greater than fossil fuel burning emissions) are balanced out by natural sinks over time, leaving anthropogenic emissions to “accumulate” in the atmosphere.”

They must balance. The first indicator is that they always did, at least during the Holocene, before we started burning and they went up 40%. But the mechanics of the “natural emissions” are necessarily cyclic. There are
1. Photosynthesis and oxidation. In a growing season, plants reduce about 10% of the CO2 that is in the air. But that reduced material cannot last in an oxidising environment. A fraction is oxidised during the season (leaves, grasses etc); woody materials may last a bit longer. But there is no large long-term storage, at least not one that varies. The oxidation flux, including respiration and wildfire, must match the reduction over a time scale of a few years.
2. Seasonal outgassing. This is the other big “natural emission”. As water warms in the spring, CO2 is evolved, because it is less soluble. But the same amount of CO2 was absorbed the previous autumn as the water cooled. It is an annual cycle.

There is a longer term cycle involving absorption in cold polar water that sinks. It is still a finite sink, and balances over about a thousand years. And it is small.

A C Osborn
Reply to  Nick Stokes
September 12, 2019 1:47 am

Why choose the Holocene? Because it remained low due to the temperature being 5 Degrees C lower, perhaps?
If it “must balance” as you claim, how do you think the Earth arrived at 200ppm from an historical high of 7000ppm?
How did the earth get back to 2000ppm from 200ppm in the Permian?
It has never balanced.

Reply to  Nick Stokes
September 12, 2019 2:44 am

“Why choose the Holocene”
Because it is a period of thousands of years when the climate was reasonably similar to present, and for which we have high resolution CO2 measures. Before we started burning, CO2 sat between about 260 and 280 ppm. These “natural emissions” were in fact just moving the same carbon around. Once we started burning fossil carbon and adding to the stock in circulation, CO2 shot up to now about 410ppm, in proportion to what we burnt.

Reply to  Nick Stokes
September 12, 2019 6:49 am

Yes, over those longer cycles the cyclical exchange between water and air must balance because the total amount of carbon is not changed. But as long as the earth rotates there will be a natural daily cycle in the net rate of emissions/”rain return” that changes from day to day as a function of cloud cover. This net exchange rate changes from year to year as a function of ocean currents which affect the surface temperature of the water.

bit chilly
Reply to  Nick Stokes
September 12, 2019 2:06 pm

I’m sorry Nick but that is rubbish. Are you saying cloudiness over the oceans occurs at identical levels each year, and that the cloud-covered/cloud-free parts of the oceans are at the same temperature every year? That is a huge assumption and not one I can find any supporting evidence for.

There is no way on earth the annual co2 flux is a known known.

AGW is not Science
Reply to  Fred Haynie
September 12, 2019 7:54 am

Agreed, 100%. The very notion that we “know” CO2 “sources” and “sinks,” none of which are being measured, were in “balance,” particularly when supported by the scientific incompetence of comparing proxy records, which via resolution limitations or other issues, don’t show the complete extent of atmospheric CO2 variability, with modern atmospheric measurements, which show every hiccup ppm change, and ignoring the “inconvenient” parts of the Earth’s climate history which have shown both much higher CO2 levels and much more range of variability, is absolute nonsense.

Me@Home
Reply to  Stephen Wilde
September 11, 2019 9:05 pm

Stephen when I taught accountancy I often reminded students that an apparent imbalance of $0.01 might disguise two errors, one of $100.00 and another of $100.01

Arachanski
Reply to  Stephen Wilde
September 12, 2019 12:48 am

Errors never compensate. In the simplest case square of the total error is equal to the sum of the squares of all errors. That is also why you can never get rid of the instrument measurement error, which Dr. Roy Spencer probably did in his UAH dataset. The total error is always bigger than the instrument error. If a satellite can only measure sea level down to 4cm – the total error will always be larger than 4cm. And the result will be 0±4cm (or worse) – ergo meaningless.

Really, everyone should at the very least read “An Introduction to Error Analysis” by John R. Taylor or “Measurements and their Uncertainties” by Hughes and Hase before taking part in this discussion.
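For readers who want the arithmetic, here is a one-screen sketch (with invented component values) of combining independent error components in quadrature; the combined uncertainty can never fall below the largest single component, such as an instrument floor.

```python
import math

# Invented error components, in cm: an instrument floor plus two other sources
components = [4.0, 1.5, 2.0]

total = math.sqrt(sum(e ** 2 for e in components))   # combination in quadrature
print(f"Combined uncertainty: +/-{total:.1f} cm")    # +/-4.7 cm, never below the 4 cm floor
```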

TRM
September 11, 2019 2:36 pm

WUWT should also publish the comment at Dr Spencer’s site by Dr Frank addressing this critique.

Janice Moore
Reply to  TRM
September 11, 2019 3:07 pm

Yes!

****************************************

WUWT should also publish the comment at Dr Spencer’s site by Dr Frank addressing this critique.

****************************************

As of ~3:05PM, 9/11/19, Dr. Frank’s response to Dr. Spencer has not been published on WUWT.

Simultaneous publication of that response would have been best practice journalism.

Publication of Frank’s response at all would be basic fairness.

Reply to  TRM
September 11, 2019 3:37 pm

WUWT republishes some articles from Climate Etc, and from drroyspencer.com, but doesn’t reproduce the comments. If you want a comment cross-published nothing can stop you from copy-pasting it. I can assure you it is not difficult. Don’t forget to credit the author of the comment and that’s it.

RobR
Reply to  Javier
September 11, 2019 7:35 pm

Yes, this is entirely accurate. This is an interesting debate and a fine example of social media-based scientific debate.

Anyone, (including Dr. Frank) is free to repost his rejoinder(s) from another site.

Let us hope things remain civil between all parties.

TRM
Reply to  Javier
September 11, 2019 8:21 pm

Having me do it will get very messy and some of the +/- notation doesn’t work on Dr Spencer’s site. Feel free to delete this if Dr Frank posts it. Thanks for the explanation as to why you didn’t post it.
—————————————————————–
Pat Frank says:
September 11, 2019 at 11:59 AM
Hi Roy,

Let me start by saying that I’ve admired your work, and John’s, for a long time. You and he have been forthright and honest in presenting your work in the face of relentless criticism. Not to mention the occasional bullet. 🙂

Thanks for posting your thoughts here. I’m glad of the opportunity to dispel a few misconceptions. I’ll move right down to “What Frank’s Paper Claims.”

You start by stating that I take “an example known bias in a typical climate model’s longwave (infrared) cloud forcing (LWCF) …”

If your “bias” means offset, it is misleading. The LWCF error is a theory error, not a bias offset. That is demonstrated by its pair-wise correlation among all the models. The 4 W/m^2 is a model calibration error statistic.

I do not assume “that the typical model’s error (+/-4 W/m2) in LWCF can be applied in his emulation model equation.” That’s not an assumption.

It is justified several times in the paper on the grounds that it is an uncertainty in simulated tropospheric thermal energy flux. As such it conditions the simulated impact of CO2 forcing, which is also part of the very same tropospheric thermal energy flux.

Entry of the 4 W/m^2 into the emulation of projected global average air temperature is fully justified on those grounds.

You go on to write that I propagate “the error forward in time during his emulation model’s integration.” You’re implying here that the 4 W/m^2 is propagated forward. It’s not.

It’s the uncertainty in air temperature, consequent to the uncertainty in simulated forcing, that is propagated forward.

Then you write, “The result is a huge amount (as much as 20 deg. C or more) of spurious model warming (or cooling) in future global average surface air temperature (GASAT).”

I must say I was really sorry to read that. It’s such a basic mistake. The 20 C (your number) is not a temperature. It’s an uncertainty statistic. Propagated error does not impact model expectation values. It is evaluated separately from the simulation.

And consider this: the 20 C uncertainty bars are vertical, not offset. Your understanding of their meaning as temperature would require the model to imply the simultaneous coexistence of an ice house and a greenhouse state.

One of my reviewers incredibly saw the 20 C as implying the model to be wildly oscillating between hothouse and ice-house states. He did not realize that the vertical bars mean his interpretation of 20 C as temperature would require both states to be occupied simultaneously.

In any case, Roy, your first paragraph alone has enough mistakes in it to invalidate your entire critique.

The meaning of uncertainty is discussed in Sections 7.3 and 10 of the Supporting Information.

You wrote that, “ The modelers are well aware of these biases, which can be positive or negative depending upon the model.”

The errors are both positive and negative across the globe for each model. This is clearly shown in my Figure 4, and in Figures throughout Lauer and Hamilton, 2013. The errors are not bias offsets, as you have them here.

You wrote, “If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.”

You’re mistaking the calibration error statistic for an energy. It is not. And you’ve assigned an implied positive sign to the error statistic, representing it as an energy flux. It isn’t. It’s (+/-)4 W/m^2. Recognizing the (+/-) is critical to understanding.

And let me ask you: what impact would a simultaneously positive and negative energy flux have at the TOA? After all, it’s 4 W/m^2. If that was a TOA energy flux, as you have it, it would be self-cancelling.

You wrote, “each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.”

And each of those models simulates cloud fraction incorrectly, producing an average calibration error of 4 W/m^2 in LWCF, even though they are overall energy-balanced. I point out in my paper that the internal climate energy-state can be wrong, even though the overall energy balance is correct.

That’s what the cloud fraction simulation error represents: an internally incorrect climate energy-state.

You wrote, “If what Dr. Frank is claiming was true, the 10 climate models runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling.”

No, they would not.

I’m sorry to say that your comment shows a complete lack of understanding of the meaning of uncertainty.

Calibration error statistics do not impact model expectation values. They are calculated after the fact from model calibration runs.

You wrote, “+/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance.”

I don’t assume that (+/-)4 W/m^2. It is the reported LWCF calibration error statistic in Lauer and Hamilton, 2013.

Second, offsetting errors do not make the underlying physics correct. The correct uncertainty attending offsetting errors is their combination in quadrature and their report as an uncertainty in the final result.

There is no reason to suppose that errors that happen to offset during a calibration period will continue to offset in a prediction of future states. No other field of physical science makes such awful mistakes in thinking.
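
To put numbers on that (the numbers are arbitrary): two tuned, offsetting errors sum to zero at the result level, but their combined uncertainty is the root-sum-square, which is not zero.

# Two offsetting component errors: the sum cancels, but the combined
# uncertainty is their root-sum-square, not zero. Numbers are arbitrary.
import math
err_a, err_b = +3.0, -3.0                    # offsetting errors, W/m^2
print(err_a + err_b)                         # 0.0  -- the tuned balance
print(math.sqrt(err_a**2 + err_b**2))        # ~4.24 -- the uncertainty that remains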

You are using an incomplete or incorrect physical theory, Roy, adjusting parameters to get spuriously offsetting errors, and then assuming they correct the underlying physics.

All you’re doing is hiding the uncertainty by tuning your models.

Under “Error Propagation …” you wrote, “If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. ”

Once again, you imposed a positive sign on an uncertainty statistic. The error statistic is not an energy flux. It does not perturb the model. It does not show up at the TOA.

Your imposition of that positive sign facilitates your incorrect usage. It’s an enabling mistake.

I have run into this mistaken thinking repeatedly among my reviewers. It’s incredible. It’s as though no one in climate science is ever taught anything about error analysis in undergraduate school.

You wrote, “Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step.”

No, it does not. Several reviewers, including Prof. Zanchettin, raised this question. I answered it to his satisfaction.

The size of the simulation calibration uncertainty statistic will vary with the time over which it is appraised. When this is taken into account, the centennial uncertainty comes out the same every time.

And the time step is not assumed, as you have it. Lauer and Hamilton provided an annual mean error statistic. That annual average calibration error was applied to annual temperature time steps. None of that was assumed.

You should have looked at eqns. 5, and the surrounding text. Here’s the critical point, from the paper: “In equation 5 the step-wise GHG forcing term, ΔF_i, is conditioned by the uncertainty in thermal flux in every step due to the continual imposition of LWCF thermal flux calibration error.”

Eqn. 6 is a generalization of eqns 5.
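
One way to see the step-length point numerically, purely as an illustration of the algebra and on the assumption that the per-step calibration variance is appraised in proportion to the step length (readers should check that scaling against eqns. 5 and 6 themselves):

# If the per-step calibration variance scales with the step length, the
# uncertainty accumulated over a fixed horizon is independent of step size.
import math

sigma_annual = 1.7      # assumed per-year uncertainty in temperature, K (illustrative)
horizon = 100           # years

for steps_per_year in (1, 12):
    n_steps = horizon * steps_per_year
    sigma_step = sigma_annual / math.sqrt(steps_per_year)   # variance ~ step length
    total = sigma_step * math.sqrt(n_steps)
    print(steps_per_year, "steps/yr ->", round(total, 2), "K")
# Both cases give sigma_annual * sqrt(100) = 17.0 K.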

I’m sorry Roy. You’ve made one very fundamental mistake after another. Your criticism has no force.

Pat Frank says:
September 11, 2019 at 12:03 PM
Hmm… it seems that none of the corrective plus/minus signs I included have come through.

Everyone should please read, wherever a 4W/m^2 statistic occurs in my comments, it should be (+/-)4W/m^2.

Matthew Schilling
Reply to  TRM
September 12, 2019 6:43 am

It is always a joy to read something so powerfully correct. Pat Frank’s paper takes a fundamental insight and applies it thoroughly and relentlessly. It is a paradigm slayer. It doesn’t really matter that so many are, sadly, too obtuse to grasp that, yet. It will start dawning on more and more people. The honest and humble ones first, then others, till finally even the shameless grifters realize it’s time to fold up the tent and slink away.
CAGW is doomed to be remembered as a tale told by idiots, full of sound and fury, signifying nothing.

Reply to  Matthew Schilling
September 14, 2019 10:27 am

One must realize that even if absolute or relative errors cancel themselves out (via tuning or by chance) at the result level, the error bars (uncertainties) never cancel each other out but get compounded and thus increase the uncertainty. The LWCF error alone generates uncertainty high enough (two orders of magnitude higher) to make any interpretation due to CO2 forcing useless. Any other (even compensating) errors at the result level will only increase the uncertainty of the result, making the model even worse.
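
For scale, a rough back-of-the-envelope comparison using the standard simplified CO2 forcing expression 5.35 x ln(C/C0) and an assumed growth rate of about 2.3 ppm/yr (both figures approximate):

# Rough comparison: annual increment in CO2 forcing vs. the (+/-)4 W/m^2
# LWCF calibration uncertainty. The growth rate and baseline are assumptions.
import math
c0, growth = 410.0, 2.3                            # ppm, ppm/yr (illustrative)
annual_forcing = 5.35 * math.log((c0 + growth) / c0)
print(round(annual_forcing, 3), "W/m^2 per year")  # ~0.03
print(round(4.0 / annual_forcing))                 # roughly two orders of magnitude smaller than 4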

bit chilly
September 11, 2019 2:37 pm

I assume there was no PDO or AMO, El Nino/La Nina in operation in the pre-industrial era, going by the results of the ten archived model runs?

Roy W. Spencer
Reply to  bit chilly
September 11, 2019 3:32 pm

BC:
Most of the models produce ENSO, so I would have to look at monthly time resolution of each. So I assume they are in there. I don’t know whether models produce AMO, I’ve never looked into it.

Jeff Alberts
Reply to  Roy W. Spencer
September 11, 2019 5:58 pm

But, models can’t tell you how intense an El Niño will be or the timing, can they?

Antero Ollila
Reply to  Jeff Alberts
September 11, 2019 9:18 pm

As far as I know, there are no models which can predict the intensity or timing of ENSO events. Or can somebody say what the ENSO status will be 10 years from now? Certainly not.

September 11, 2019 2:40 pm

I see a glaring problem with the pre-industrial control runs. The anomaly is too constant, even if nothing else is changing. The 12 month average temperature should be bouncing around within at least a 1C range which is about 10x larger than the models report. For one month averages, the seasonal signature of the N hemisphere is very visible since the seasonal variability in the N is significantly larger than that in the S. The lack of enough natural chaotic variability around the mean is one of the problems with modeling. Another is the assumption that the seasonal behaviors of the N and S hemispheres cancel.

There are many more. For example, no model can reproduce the bizarre behavior of cloud coverage vs. temperature and latitude shown in this scatter plot of monthly averages per 2.5 degree slice of latitude.

http://www.palisad.com/sens/st_ca.png

Notice how the first reversal occurs at 0C, where ice and snow melt away and clouds take on a larger albedo effect, and how the second reversal, at about 300K, occurs at the point where the latent heat of evaporation is enough to offset incremental solar input. Since balance can be achieved for any amount of clouds, something else is driving how the clouds behave.

Interestingly enough, this bizarre relationship is exactly what the system needs to drive the average ratio between the SB emissions of the surface and the emissions at TOA to a constant 1.62 corresponding to an equivalent gray body with an effective emissivity of 0.62.

This begs the question: which makes more sense? A climate system whose goal of a constant emissivity between the surface and space drives what the clouds must be, or a climate system whose strangely bizarre per-hemisphere relationship between cloud coverage, temperature and latitude just coincidentally results in a mostly constant effective emissivity from pole to pole?
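
For reference, the arithmetic behind the 1.62 ratio and 0.62 emissivity, using round canonical values for the mean surface temperature and TOA emission rather than the ISCCP-derived numbers:

# Ratio of surface Stefan-Boltzmann emission to TOA emission, using
# approximate canonical values (not the ISCCP-derived figures).
sigma = 5.67e-8          # W/m^2/K^4
t_surface = 288.0        # K, approximate global mean surface temperature
toa_emission = 240.0     # W/m^2, approximate outgoing longwave at TOA

surface_emission = sigma * t_surface**4            # ~390 W/m^2
print(round(surface_emission / toa_emission, 2))   # ~1.63
print(round(toa_emission / surface_emission, 2))   # ~0.62, the effective emissivity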

Reply to  co2isnotevil
September 11, 2019 4:35 pm

The data that demonstrates that the planet exhibits a constant equivalent emissivity from pole to pole is here:

http://www.palisad.com/co2/tp/fig1.png

The thin green line is the prediction of a constant emissivity of 0.62 and each little red dot is the monthly average temperature (Y) vs. monthly average emissions at TOA (X), for each 2.5 degree slice of latitude from pole to pole. The larger dots are the averages over about 3 decades of data. Note that the relationship between temperature and clouds is significantly different per hemisphere, while the relationship between surface temperature and emissions at TOA is identical for both indicating a constant effective emissivity.

The cloud amount and temperature come directly from the ISCCP data set. The emissions at TOA are a complicated function of several reported variables and radiant transfer models for columns of atmosphere representing various levels of clear skies, cloudy skies and GHG concentrations. The most influential factor in the equation is the per slice cloud amount representing the fraction of the surface covered by clouds and which modulates the effective emissivity of each slice.

The calculated emissions at TOA are cross checked as being within a fraction of 1 percent of the energy arriving to the planet (the albedo and solar input are directly available) when integrated over the entire surface and across a whole number of years, so if anything is off, it’s not off by much. None the less, the constant emissivity still emerges and any errors would likely push it away from being as constant as it is.

Krishna Gans
September 11, 2019 2:40 pm

I think it’s a question of fairness to repost Pat’s answer to Roy’s review.

DMA
September 11, 2019 2:57 pm

Error propagation analysis does not predict the actual error in some process. It gives bounds on the reliability of whatever result the process delivers. In conventional surveying, systematic errors cannot be eliminated, and in any traverse the position of the next point relies on the accepted position of the last point plus the errors in set-up, angle measurement, and distance measurement. The resulting “error ellipse” is a mathematical calculation expressed in the linear dimensions of the survey; it is not the expected error, but the positional area in which the found position can be expected to fall if no blunders were made. The ellipses define the areal extent of the uncertainty and grow with each set-up.

As that area grows, the true relation of the just-measured point to the initial point cannot be reported as the simple differences of northings and eastings computed from the trigonometry of angles and distances; it must carry the plus-or-minus dimensions of the ellipse. At that point the simple difference is only a hypothesis with large uncertainty, until the traverse is closed by measurement and the true error determined, allowing that error to be distributed throughout the traverse. A climate model cannot be “closed” like a traverse, so the accumulated uncertainty remains, like the error ellipse on the last traverse point, whose position relative to the beginning is very uncertain. Yet at no place in the traverse is the position of one point relative to the next outside normal bounds.
This seems very analogous to Dr. Frank’s analysis and conclusions.
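
A small numerical sketch of that picture (the per-set-up standard error below is invented for illustration): each individual leg looks fine on its own, but the positional uncertainty at the far end of an unclosed traverse grows as the root-sum-square of the set-ups.

# Positional uncertainty along a traverse: each set-up contributes an
# independent standard error, and the accumulated uncertainty grows as the
# root-sum-square even though every individual leg is within tolerance.
import math

per_setup_sigma = 0.01      # metres per set-up (invented for illustration)
for n_setups in (1, 10, 100):
    accumulated = per_setup_sigma * math.sqrt(n_setups)
    print(n_setups, "set-ups ->", round(accumulated, 3), "m of positional uncertainty")
# Without closing the traverse, no measurement is available to shrink this figure.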

Reply to  DMA
September 11, 2019 3:31 pm

“This “error ellipse” is a mathematical calculation that gives an answer in the linear dimensions of the survey”
GCMs are not surveying. They are solving differential equations for fluid flow and heat transfer. Differential equations have many solutions, depending on initial conditions. Error shifts from one solution to another, and propagates according to the difference between the new and old paths. The solution and its error-induced variant are subject to the laws of conservation of mass, momentum and energy that underlie the differential equations, and the physics they express. Errors propagate, but subject to those requirements. None of that is present in Pat Frank’s simplistic analysis.
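
A toy illustration of that point (a zero-dimensional relaxation model invented for this comment, not a GCM): a one-off flux error displaces the solution, and the restoring physics then damps the displacement instead of letting it accumulate.

# Toy energy-balance relaxation: dT/dt = (F - lambda_*T) / C.
# A one-time flux error shifts the trajectory, and the Planck-like restoring
# term pulls the perturbed solution back toward the unperturbed one.
lambda_ = 3.2        # W/m^2/K, assumed net feedback parameter
heat_cap = 8.0       # W*yr/m^2/K, assumed effective heat capacity
dt = 0.1             # years

def integrate(flux_error_at_step=None, n_steps=500):
    t_anom, forcing = 0.0, 1.0                      # constant 1 W/m^2 forcing
    for i in range(n_steps):
        f = forcing + (1.0 if i == flux_error_at_step else 0.0)
        t_anom += dt * (f - lambda_ * t_anom) / heat_cap
    return t_anom

print(round(integrate(), 3))                        # unperturbed equilibrium, ~0.31 K
print(round(integrate(flux_error_at_step=100), 3))  # perturbed run relaxes back to ~0.31 K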

DMA
Reply to  Nick Stokes
September 11, 2019 5:37 pm

NS
I believe that is why Dr. Frank computed the emulations of the model outputs. These are linear equations and react to uncertainty much like my surveys do.

Reply to  DMA
September 11, 2019 6:34 pm

“react to uncertainty much like my surveys do”
Yes, they do. But neither has anything to do with GCMs and their physics.

Michael Jankowski
Reply to  Nick Stokes
September 11, 2019 5:40 pm

Wow, speaking of “simplistic.”

Somehow you left-out parameter values and boundary conditions. And conservation of moisture.

The “laws” in models are represented by finite difference approximations to differential equations. They don’t have Mother Nature to force them into reality if they are in an unrealistic state. They can even be made to “blow up.”

Reply to  Michael Jankowski
September 11, 2019 6:41 pm

“Somehow you left-out parameter values and boundary conditions. And conservation of moisture.”
Parameter values are part of the equation system. Boundary conditions (basically the air-surface interface) are part of the spatial system that you integrate over time. And conservation of mass means conserving the various components, including water.

” They don’t have Mother Nature to force them into reality if they are in an unrealistic state.”
No, that is a core function of the program.

“They can even be made to “blow up.””
They will blow up if you get the numerics of conservation wrong; it is a very handy indicator.

Tommy
Reply to  Nick Stokes
September 12, 2019 10:31 am

This can’t be right:

“The reason is that the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance. It doesn’t matter how correlated or uncorrelated those various errors are with each other: they still sum to zero, which is why the climate model trends in Fig 1 are only +/- 0.10 C/Century”

Why do the errors sum to zero? That’s a nice trick, if it’s possible. I mean, if you have one error (which obviously doesn’t sum to zero), all you need to do is make some more so that they cancel each other out!

I should think, rather, that the reason they appear to sum to zero in the models is because the models are tuned to produce a reasonable looking signal.

What Dr. Frank has demonstrated is that the error in a single component of the purported model is enough to make the entire thing meaningless. Yes, the models produce reasonable looking results. The point, however, is that they’re not arriving at that conclusion because they’re accurately modeling the real world.

Barbara
Reply to  DMA
September 11, 2019 7:47 pm

DMA – I agree. I think this is an excellent analogy.

Joe Crawford
Reply to  DMA
September 13, 2019 3:04 pm

+42 +++ :<)

September 11, 2019 2:58 pm

OK, I have now read the paper and unfortunately, Dr Spencer is wrong. The fact that the models are made to have zero “error” does not in any way change the fact that errors exist … only that the errors are made to zero out at an arbitrary point which is the present time period of the model. That is a temporary state of affairs which quickly disappears (but see below).

The only doubt I have is how to treat the 4 W/m2 per year in projecting forward. The problem here is that I saw no analysis of the form of this variation. If that variation has frequency components with periods much longer than 100 years, then it should be treated very differently than if all the components have periods shorter than 100 years.

Indeed, if all the frequency components had periods longer than 100 years, Dr. Spencer would be (largely) right, but for the wrong reasons, because the calibration done up to the present would still have a significant nulling effect in 100 years.

Clyde Spencer
Reply to  Mike Haseler (Scottish Sceptic)
September 11, 2019 6:10 pm

The greatest sin in science is to be right for the wrong reason because it means that one does not really understand the phenomenon and one will almost certainly be wrong the next time — and it will be a surprise!

ih_fan
September 11, 2019 3:03 pm

Let me start off by saying that I am not an experienced scientist, but a humble comp sci engineer – relative to probably everyone on this forum I don’t know squat about thermodynamics, atmospheric physics, etc, etc.

One thing that I am curious about, though, is what the effect on global temperature is from the combustion of fuels and not the emissions. Since the combustion of oil, coal, natural gas, (and uranium fission…) results in a large amount of heat generation, could it possibly be that a sizable portion of any temperature rise is not so much the result of CO2 emissions but actual waste heat?

This has been something that I’ve wondered about for quite some time…

Reply to  ih_fan
September 11, 2019 5:12 pm

UHI effect

Reply to  ih_fan
September 12, 2019 12:42 am

We generate about 15 TW by combustion. That is about 0.03 W/m2. GHG forcing relative to preindustrial is estimated at about 2 W/m2.
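
The arithmetic, for anyone who wants to check it (taking Earth’s surface area as roughly 5.1e14 m^2):

# Globally averaged waste heat from ~15 TW of combustion, spread over Earth's surface.
earth_area = 5.1e14          # m^2
combustion = 15e12           # W (approximate figure quoted above)
print(round(combustion / earth_area, 3), "W/m^2")   # ~0.029, vs. ~2 W/m^2 of GHG forcing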

ih_fan
Reply to  Nick Stokes
September 12, 2019 7:53 am

Thanks for the clarification, Nick. Greatly appreciated!

AGW is not Science
Reply to  Nick Stokes
September 12, 2019 8:11 am

ASSUMING “all other things held equal.” Which is NOT the case, and never will be.

Reply to  AGW is not Science
September 12, 2019 3:06 pm

Indeed. I take them to mean when everything necessary and sufficient is held equal. That said, how do we know that we know what’s necessary and sufficient, especially when conditions change “unexpectedly”, as they so often do?

Scarface
September 11, 2019 3:12 pm

@Moderator

The link at the top: “From Dr. Roy Spencer’s Blog” links to WUWT

commieBob
September 11, 2019 3:17 pm

The warming stops once the temperature has risen to the point that the increased loss of infrared (IR) radiation to to outer space (quantified through the Stefan-Boltzmann [S-B] equation) once again achieves global energy balance with absorbed solar energy.

That is absolutely what theory says. On the other hand, when you look at what happens after a strong El Nino, you usually see something that looks like ringing (example). That implies a system modeled by a second order differential equation, i.e. the simple thermodynamic theory may not be accounting for all the processes involved.

My electronics-centered brain processes it thusly. If there are no energy storage components like capacitors or inductors, it’s linear and a differential equation is not needed. If there is one capacitor or inductor (but not both), the response to a step input is the familiar capacitor charge/discharge curve. It is modeled by a first order differential equation. If you have both a capacitor and an inductor, you can have ringing. That, you model with a second order differential equation (link).

If you’re modeling a thermodynamic system, energy can be stored as thermal inertia. Most people will tell you that that’s the only energy storage mechanism you have to worry about. So, a first order differential equation and no ringing. The temperature stops increasing when it reaches whatever temperature is exciting the system.
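
A quick numerical sketch of the two behaviours, with arbitrary time constants: the first-order step response creeps up to the new level, while an underdamped second-order response overshoots and rings.

# Step responses: first-order (no ringing) vs. underdamped second-order (ringing).
# Time constant, natural frequency and damping ratio are arbitrary illustration values.
import math

tau = 1.0                     # first-order time constant
omega, zeta = 2.0, 0.2        # second-order natural frequency and damping ratio

for t in (0.5, 1.0, 2.0, 4.0, 8.0):
    first = 1.0 - math.exp(-t / tau)
    wd = omega * math.sqrt(1.0 - zeta**2)
    second = 1.0 - math.exp(-zeta * omega * t) * (
        math.cos(wd * t) + zeta / math.sqrt(1.0 - zeta**2) * math.sin(wd * t))
    print(f"t={t:4.1f}  first-order={first:.2f}  second-order={second:.2f}")
# The second-order response overshoots past 1.0 before settling: the "ringing".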

Given the complexity of the Earth’s energy balance, I suspect there may be something like ringing because the heat transport is not just by conduction.

September 11, 2019 3:18 pm

On whether calibration gets rid of errors.

Imagine a situation where you have a perfect (builder’s) level. You lay it down on a surface with a -4mm per m unevenness, such that the level now doesn’t show level. You then “calibrate” the level so that it shows level (but on the section with a -4mm/m error). Does this reduce the error? Obviously not! It merely masks the unevenness of the surface by introducing an extra error.

Now, instead of being improved by the calibration, the level laid on a perfectly flat section shows +4mm/m, and it can show up to +8mm/m on a section with a +4mm/m error. So rather than reducing the error, this “calibration” actually increases the average error.
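
The same thought experiment in numbers:

# The level is "calibrated" on a section that actually slopes -4 mm/m, so the
# instrument now carries a +4 mm/m offset. Readings on other sections get worse.
calibration_offset = +4.0     # mm/m introduced by zeroing on the -4 mm/m section
for true_slope in (-4.0, 0.0, +4.0):
    print(f"true slope {true_slope:+.0f} mm/m -> level reads "
          f"{true_slope + calibration_offset:+.0f} mm/m")
# Readings: -4 -> 0 (looks "level"), 0 -> +4, +4 -> +8.
# The unevenness has not gone away; an instrument error has been added to it.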

Bernard Lodge
September 11, 2019 3:19 pm

“With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.”

Please give examples of when the temperature change is not due to an imbalance between energy gain and energy loss in a system. I have never seen the climate change industry ever admit to that fact.

I will give you one example – that undermines the entire case for CAGW (Catastrophic Anthropogenic Global Warming):

A source of new energy only increases the temperature of an object if the temperature of the emitting object is higher than the temperature of the absorbing body. If the temperature of the emitting object is lower than the temperature of the absorbing body then it does not matter how much energy is being emitted, the temperature of the absorbing body will not increase. The proof of this is that you can surround an object at room temperature with as many ice cubes as you like and the temperature of the body at room temperature will not go up.

This basic fact is ignored in the energy budget calculations behind all the climate models. They all assume that all sources increase temperatures. That is incorrect.

Since the temperature of the atmosphere is lower than the temperature of the earth’s surface, CO2 emissions from the atmosphere cannot increase the temperature of the surface.

Don’t respond with ‘It slows the cooling’. CAGW is based on the fear of maximum temperatures actually increasing, not minimum temperatures declining less.

Roy W. Spencer
Reply to  Bernard Lodge
September 11, 2019 3:38 pm

Bernard, the only ones I can think of for the climate system are (1) phase change, such as heat going into melting ice, and (2) changes in the rate of energy transfer between the ocean and the atmosphere, which have very different specific heats (which is why a 3-D global average of the whole land-ocean-atmosphere system has little thermodynamic meaning, as Chris Essex likes to point out). The temperatures can all change with no net energy gain or loss by the Earth system.

Beeze
Reply to  Roy W. Spencer
September 11, 2019 7:39 pm

I would say that those two processes are not properly described by the phrase “few exceptions”. Phase changes are far more energy intensive than temperature changes.

Potential energy changes may also be significant.

commieBob
Reply to  Bernard Lodge
September 11, 2019 3:42 pm

The Sun’s radiation temperature is 5800 K.

Fran
September 11, 2019 3:28 pm

Dr Spencer writes: ‘Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step. Dr. Frank has chosen 1 year as the time step (with a +/-4 W/m2 assumed energy flux error), which will cause a certain amount of error accumulation over 100 years. But if he had chosen a 1 month time step, there would be 12x as many error accumulations and a much larger deduced model error in projected temperature. ‘

This criticism is valid only if the +/-4 W/m2 is not given as applying to a specific time period, i.e. as W/m2 per year, or divided accordingly if propagated over months. I assumed Dr Frank meant this to be the case, but did not find it explained in the paper. But then, I cannot do the math.

I also do not understand why running a model on a supercomputer with many iterations supposedly gives a more accurate picture of climate projections than a few iterations (1/year, say) if you have all the parameters correct. What are the many iterations doing?

Greg Munger
September 11, 2019 3:37 pm

Regardless of who has the best take on the topic, there is a more important point. This is the type of scientific discussion, back and forth if you will, that should be part of every topic related to the “climate change” debate. It is something that has been sorely lacking for a long time. The discussion is actually refreshing in its lack of dogmatic bullshit and its reliance on actual scientific concepts. Please proceed.

Richie
Reply to  Greg Munger
September 12, 2019 6:25 am

” The discussion is actually refreshing in its lack of dogmatic bullshit and reliance on actual scientific concepts. Please proceed.”

Two thumbs up, Greg!

September 11, 2019 3:39 pm

I’ve always wondered how the models explain the Little Ice Age, the Medieval Warm Period, the Dark Ages, etc. Changes in CO2 don’t work. I also wonder what temperature records they use to calibrate the models. Given that adjustments are highly correlated with CO2, adjusted data is suspect.

Jeff Alberts
Reply to  Nelson
September 11, 2019 6:20 pm

C’mon man, those are just anecdotal. Can’t believe those old things, y’know.

Willem Post
Reply to  Nelson
September 11, 2019 8:21 pm

Just say the little ice age was a European event, not worldwide, as IPCC did.
Then there were comments from Japan and other places they had a little ice age as well.
Quick meeting by PR people at IPCC headquarters.
Stop denying the obvious.
Act dumb.
Ignore questions.
Divert attention to another issue.

Jean Parisot
Reply to  Willem Post
September 12, 2019 7:35 pm

I wish I had the time to discuss the implementation of spatial error in the climate data sampling and gcm models at this level.

September 11, 2019 3:39 pm

I was just taking another look at the paper to check for any assessment of long term frequency components and I spotted what appears to be a false assumption. The paper says that: “If the model annual TCF errors were random, then cloud error would disappear in multi-year averages. Likewise, the lag-1 autocorrelation of error would be small or absent in a 25-year mean. However, the uniformly strong lag-1 autocorrelations and the similarity of the error profiles (Figure 4 and Table 1) demonstrate that CMIP5 GCM TCF errors are deterministic, not random”

This surely is not correct, because if the errors have frequency components with periods longer than 25 years (perhaps due to the solar sunspot cycle, AMO, or similar long-term perturbations), then there would be lag-1 autocorrelation that was due to errors with long-term frequency components. So the assertion that they demonstrate deterministic, not random, errors seems incorrect.

Likewise, when the author says: “For a population of white noise random-value series with normally distributed pair-wise correlations, the most probable pair-wise correlation is zero.” … this again does not hold for pink noise and so the assertion “but instead imply a common systematic cause” is incorrect.

However, this false inference of “errors in theory” does not change the fact that around 4 W/m2 per year of error still exists; only that this may not be an error of theory but what would be called “natural variation”, which is not accounted for in the model.
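
A quick check of that pink-noise point with synthetic series (illustrative parameters): white noise gives a lag-1 autocorrelation near zero, while a series dominated by slow variation gives a strong one, with no “theory error” involved.

# Lag-1 autocorrelation: white noise vs. a slowly varying (AR(1), "red") series.
import numpy as np
rng = np.random.default_rng(0)

def lag1_autocorr(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

white = rng.normal(size=2000)
red = np.empty(2000)
red[0] = 0.0
for i in range(1, 2000):                     # AR(1) process with long memory
    red[i] = 0.95 * red[i - 1] + rng.normal()

print(round(lag1_autocorr(white), 2))        # ~0.0
print(round(lag1_autocorr(red), 2))          # ~0.95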

Crispin in Waterloo
September 11, 2019 3:45 pm

This analysis strikes me as strange. Without disrespect to the author, I feel the analysis above is made on the basis of a category error and does not address the main point of the original paper.

The category error is to attribute a propagated error as being an attribute of the output of the calculation made.

These are fundamentally different, conceptually. This analysis above attributes to the output value the attribute of “uncertainty” and claims that if a calculated result is uncertain, then the calculation will yield similarly variable output values. It is a synecdoche: the model output alone is being claimed to represent the whole of the result, which consists of two attributes, the result and a separate property, its uncertainty. It is normal to present the result of a calculation as some numerical value then a ±n value. The ±n has no relationship to the calculated value, which could be zero or a million. The model output cannot “contain” the attribute “uncertainty” because the ±n value is an inherent property of the experimental apparatus, in this case a climate model, not the numerical output value.

The fact that the outputs of a series of model runs are similar, or highly similar, has no bearing on the calculated uncertainty about the result. The claim that because the model results do not diverge to the limits of the uncertainty envelope they are therefore “more certain” is incorrect. The uncertainty is an attribute of the system, not its output. The output is part of the whole, not the whole answer.

It appears that the calculations are deliberately constrained by certain factors, are very repetitive using values with little or no variability, or represent a system with a low level of chaos. There are other possible explanations, such as that the same inputs give the same outputs because the workings are “mechanical”.

Analogies are best.

Suppose we multiply two values obtained from an experiment: 3 x 2. We are quite certain about the value 3 but uncertain about the value 2. The result is 6, every time. Repeating the calculation 100 times does not make the value “6” more certain, because the value of the 2 remains uncertain.

If I tell you that the 3 represent a number of people and the 2 represents the height of those people rounded to the nearest metre, selected for this experiment because they are in the category of “people whose height is 2 metres, rounded to the nearest metre”, you can visualise the problem. Most people’s height is 2 m, if rounded to the nearest metre. That does not mean that 3 times their rounded height value equals their actual combined height. The answer is highly uncertain, even if it is highly repeatable, given the rules.

It is inherent in the experimental calculations that the uncertainty about any individual’s height is half a metre, or 25% of the value. The result is actually a sum of three values, each with an uncertainty of 0.5 m. The uncertainty about the final answer is:

The square root of the sum of the squares of the uncertainty values

0.5^2 = 0.25
0.25 x 3 = 0.75

sqrt(0.75) = 0.866 metres

Just because the calculated value is always 6 does not mean the uncertainty about the result is less than 0.866.

One climate model might give a series of results that cluster around a value of 4 deg C warming per doubling of CO2. If the uncertainty about that value is ±30 C, it is right to claim that model is useless for informing policy.

Another model might give values clustered about 8 deg C and have an uncertainty of ±40 C. That is no more useful than the first.

The uncertainty of the model output is calculated independently of the values produced. Even if ALL climate models produced exactly the same result, the uncertainty is unaffected. Even if the measured values matched the model over time (which they do not) it would not reduce the uncertainty about projections because it is not a property of the output.

Climate Science is riddled with similar misconceptions (and worse). CAGW is for people who are not very good at math. Dr Freeman Dyson said that some years ago. Unlike climate models, his prediction has been validated.

Clyde Spencer
Reply to  Crispin in Waterloo
September 11, 2019 6:31 pm

Crispin
You said, “The fact that the outputs of a series of model runs are similar, or highly similar, has no bearing on the calculated uncertainty about the result.” I believe I have read that the models test the output of the steps for physicality, and apply corrections (fudge factors) to keep things within bounds. If that is the case, it isn’t too surprising that the runs are similar! With an ‘auto-correct’ built into the models that is independent of the ‘first principles,’ it would explain why the model results are similar and why they don’t blow up and show the potential divergence that Frank claims.

Rick C PE
Reply to  Crispin in Waterloo
September 11, 2019 7:39 pm

Crispin:

100 , +/- 0.01

Barbara
Reply to  Crispin in Waterloo
September 11, 2019 7:44 pm

I think you [Crispin] have captured the issue well. “…the ±n value is an inherent property of the experimental apparatus, in this case a climate model [I would revise to a “set of climate models”], not the numerical output value.” I believe this is precisely the point Dr. Frank has tried repeatedly to make, but which seems to keep being ignored.

And, you go on, “The uncertainty of the model output is calculated independently of the values produced. Even if ALL climate models produced exactly the same result, the uncertainty is unaffected. Even if the measured values matched the model over time (which they do not) it would not reduce the uncertainty about projections because it is not a property of the output.”

This is also how I read Dr. Frank’s paper (and his responses). The repeated criticism that if the measure of uncertainty were so large, it would be seen in model outputs demonstrates a fundamental misunderstanding of what the propagated error represents.

Matthew Schilling
Reply to  Crispin in Waterloo
September 11, 2019 7:58 pm

+(plug in whatever value you need to convey you are greatly impressed)

AGW is not Science
Reply to  Matthew Schilling
September 12, 2019 8:29 am

LOL, and + [As many more]

September 11, 2019 3:48 pm

And … now I realise even I’ve made a mistake!

Dr Frank has asserted that a lag-1 autocorrelation demonstrates a deficiency of theory. I then said that long-term variation could explain this without any deficiency of theory. However, I’d forgotten the important thing that natural variation isn’t distinct from what Dr Frank describes as theory.

To take a simple example, the long term Atlantic Multi-decadal Oscillation can be both described as “natural variation”, but also “theory”.

This is important, because the dividing line between “natural variation” and “theory” isn’t one enshrined in physics, but is instead one defined by the system boundaries we impose on the system. So, e.g., AMO could be described as “natural variation” if the boundary were set such that it did not fit our theoretical model. But if we can include it within our theory, then it is no longer natural variation.

Apologies for making this rather rudimentary error in climatic thermodynamics.

Lance Wallace
September 11, 2019 4:07 pm

Dr. Spencer writes: With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.

Does not this statement depend on the averaging time? For example, the earth has a very large capacitor called the ocean, which may take a thousand years to mix completely. Let’s suppose we have a true energy imbalance that persists for a few hundred years. It still might not show up in a change in temperature, due to the time taken for the ocean to mix.

Master of the Obvious
Reply to  Lance Wallace
September 11, 2019 5:21 pm

A more complete version of the earth’s energy balance would be:

Input – Output + Generation = Accumulation

The more simplistic Input = Output is the case only when Accumulation and Generation equal (or nearly enough equal) zero. Postulating heat gain/loss by the not-insubstantial water mass on the planet’s surface might cast some doubt on such an assumption.

The Generation term might be attributable to the heat transferring out from the core. However, that heat flux might be insubstantial compared to the solar gain/loss and might be properly assumed to be near zero.
