Critique of “Propagation of Error and the Reliability of Global Air Temperature Predictions”

From Dr. Roy Spencer’s Blog

September 11th, 2019 by Roy W. Spencer, Ph.D.

I’ve been asked for my opinion by several people about this new published paper by Stanford researcher Dr. Patrick Frank.

I've spent a couple of days reading the paper, programming his Eq. 1 (a simple "emulation model" of climate model output), and including his error propagation term (Eq. 6) to make sure I understand his calculations.

Frank has provided the numerous peer reviewers’ comments online, which I have purposely not read in order to provide an independent review. But I mostly agree with his criticism of the peer review process in his recent WUWT post where he describes the paper in simple terms. In my experience, “climate consensus” reviewers sometimes give the most inane and irrelevant objections to a paper if they see that the paper’s conclusion in any way might diminish the Climate Crisis™.

Some reviewers don't even read the paper; they just look at the conclusions, see who the authors are, and make a decision based upon their preconceptions.

Readers here know I am critical of climate models in the sense that they are being used to produce biased results for energy policy and financial reasons, and that their fundamental uncertainties have been swept under the rug. What follows is not meant to defend current climate model projections of future global warming; it is meant to show that — as far as I can tell — Dr. Frank's methodology cannot be used to demonstrate what he thinks he has demonstrated about the errors inherent in climate model projections of future global temperatures.

A Very Brief Summary of What Causes a Global-Average Temperature Change

Before we go any further, you must understand one of the most basic concepts underpinning temperature calculations: With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.

So, if energy loss is less than energy gain, warming will occur. In the case of the climate system, the warming in turn results in an increased loss of infrared radiation to outer space. The warming stops once the temperature has risen to the point that the increased loss of infrared (IR) radiation to outer space (quantified through the Stefan-Boltzmann [S-B] equation) once again achieves global energy balance with absorbed solar energy.

While the specific mechanisms might differ, these energy gain and loss concepts apply similarly to the temperature of a pot of water warming on a stove. Under a constant low flame, the water temperature stabilizes once the rate of energy loss from the water and pot equals the rate of energy gain from the stove.

The climate stabilizing effect from the S-B equation (the so-called “Planck effect”) applies to Earth’s climate system, Mars, Venus, and computerized climate models’ simulations. Just for reference, the average flows of energy into and out of the Earth’s climate system are estimated to be around 235-245 W/m2, but we don’t really know for sure.
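
As a rough illustration of this stabilizing effect, here is a minimal sketch (my own, not taken from the paper or from any model) that computes the effective emitting temperature implied by a given absorbed flux via the Stefan-Boltzmann law; the ~240 W/m2 value is the rough estimate quoted above.

```python
# Minimal sketch (illustrative only): the Stefan-Boltzmann "Planck effect".
# Equilibrium is reached when emitted longwave power matches absorbed solar power.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(absorbed_flux_wm2: float) -> float:
    """Effective emitting temperature (K) for a given absorbed flux (W/m^2)."""
    return (absorbed_flux_wm2 / SIGMA) ** 0.25

print(equilibrium_temperature(240.0))  # ~255 K, Earth's effective emitting temperature
```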

What Frank’s Paper Claims

Frank's paper takes an example known bias in a typical climate model's longwave (infrared) cloud forcing (LWCF) and assumes that the typical model's error (+/-4 W/m2) in LWCF can be applied in his emulation model equation, propagating the error forward in time during his emulation model's integration. The result is a huge amount (as much as 20 deg. C or more) of spurious model warming (or cooling) in future global average surface air temperature (GASAT).

He claims (I am paraphrasing) that this is evidence that the models are essentially worthless for projecting future temperatures, as long as such large model errors exist. This sounds reasonable to many people. But, as I will explain below, the methodology of using known climate model errors in this fashion is not valid.

First, though, a few comments. On the positive side, the paper is well-written, with extensive examples, and is well-referenced. I wish all "skeptic" papers submitted for publication were as professionally prepared.

He has provided more than enough evidence that the output of the average climate model for GASAT at any given time can be approximated as just an empirical constant times a measure of the accumulated radiative forcing at that time (his Eq. 1). He calls this his "emulation model", and his result is unsurprising, even expected. Since global warming in response to increasing CO2 is the result of an imposed energy imbalance (radiative forcing), it makes sense that you could approximate the amount of warming a climate model produces as just being proportional to the total radiative forcing over time.
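
To make that structure concrete, here is a minimal sketch of such a linear forcing-accumulation emulator (my own illustration with an arbitrary sensitivity coefficient; it is not Frank's fitted Eq. 1).

```python
# Minimal sketch of a linear forcing-accumulation emulator (illustrative only;
# the sensitivity value is arbitrary, not Frank's fitted coefficient).
def emulate_warming(forcing_increments_wm2, sensitivity_k_per_wm2=0.4):
    """Cumulative temperature change (K) for yearly forcing increments (W/m^2),
    assuming warming proportional to the accumulated forcing."""
    temps, accumulated = [], 0.0
    for dF in forcing_increments_wm2:
        accumulated += dF
        temps.append(sensitivity_k_per_wm2 * accumulated)
    return temps

# Example: a steady 0.04 W/m^2-per-year forcing increase over 100 years
print(emulate_warming([0.04] * 100)[-1])  # ~1.6 K after a century
```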

Frank then goes through many published examples of the known bias errors climate models have, particularly for clouds, when compared to satellite measurements. The modelers are well aware of these biases, which can be positive or negative depending upon the model. The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles, otherwise all models would get very nearly the same cloud amounts.

But there are two fundamental problems with Dr. Frank’s methodology.

Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux

If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.

Why?

Because each of these models is already energy-balanced before it is run with increasing greenhouse gases (GHGs), there is no inherent bias error to propagate.

For example, the following figure shows 100 year runs of 10 CMIP5 climate models in their pre-industrial control runs. These control runs are made by modelers to make sure that there are no long-term biases in the TOA energy balance that would cause spurious warming or cooling.

Figure 1. Output of Dr. Frank's emulation model of global average surface air temperature change (his Eq. 1) with a +/- 2 W/m2 global radiative imbalance propagated forward in time (using his Eq. 6) (blue lines), versus the yearly temperature variations in the first 100 years of integration of the first 10 models archived at https://climexp.knmi.nl/selectfield_cmip5.cgi?id=someone@somewhere.

If what Dr. Frank is claiming was true, the 10 climate model runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling. But they don't. You can barely see the yearly temperature deviations, which average about +/-0.11 deg. C across the ten models.

Why don’t the climate models show such behavior?

The reason is that the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance. It doesn't matter how correlated or uncorrelated those various errors are with each other: they still sum to zero, which is why the climate model trends in Fig. 1 are only +/- 0.10 C/Century… not +/- 20 deg. C/Century. That's a factor of 200 difference.
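
A minimal numerical sketch of that cancellation follows (the component values below are made up for illustration; only the +/-4 W/m2 LWCF figure comes from the discussion above).

```python
# Minimal sketch (made-up component values): large individual flux-component
# biases can sum to roughly zero at the top of the atmosphere, leaving no net
# imbalance to drive spurious warming or cooling.
component_biases_wm2 = {
    "longwave cloud forcing": +4.0,          # the LWCF bias discussed above
    "shortwave cloud forcing": -2.5,         # illustrative value only
    "clear-sky and other flux terms": -1.5,  # illustrative value only
}
net_toa_bias = sum(component_biases_wm2.values())
print(net_toa_bias)  # ~0 W/m^2: the net TOA imbalance, not the component errors, drives temperature
```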

This (first) problem with the paper’s methodology is, by itself, enough to conclude the paper’s methodology and resulting conclusions are not valid.

The Error Propagation Model is Not Appropriate for Climate Models

The new (and generally unfamiliar) part of his emulation model is the inclusion of an “error propagation” term (his Eq. 6). After introducing Eq. 6 he states,

"Equation 6 shows that projection uncertainty must increase in every simulation (time) step, as is expected from the impact of a systematic error in the deployed theory."

While this error propagation model might apply to some issues, there is no way that it applies to a climate model integration over time. If a model actually had a +4 W/m2 imbalance in the TOA energy fluxes, that bias would remain relatively constant over time. It doesn’t somehow accumulate (as the blue curves indicate in Fig. 1) as the square root of the summed squares of the error over time (his Eq. 6).

Another curious aspect of Eq. 6 is that it will produce wildly different results depending upon the length of the assumed time step. Dr. Frank has chosen 1 year as the time step (with a +/-4 W/m2 assumed energy flux error), which will cause a certain amount of error accumulation over 100 years. But if he had chosen a 1 month time step, there would be 12x as many error accumulations and a much larger deduced model error in projected temperature. This should not happen, as the final error should be largely independent of the model time step chosen. Furthermore, the assumed error with a 1 month time step would be even larger than +/-4 W/m2, which would have magnified the final error after a 100-year integration even more. This makes no physical sense.
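
The time-step sensitivity is easy to see in a minimal sketch (my own, using the general root-sum-square form of such a propagation; keeping the monthly per-step value at 4 W/m2 here is purely for illustration).

```python
# Minimal sketch (illustrative only): accumulating a fixed per-step uncertainty
# in quadrature makes the 100-year total depend on how many steps the century
# is divided into.
import math

def propagated_uncertainty(per_step_sigma: float, n_steps: int) -> float:
    """Root-sum-square accumulation of a constant per-step uncertainty."""
    return math.sqrt(n_steps * per_step_sigma ** 2)

print(propagated_uncertainty(4.0, 100))       # annual steps over a century: 40
print(propagated_uncertainty(4.0, 100 * 12))  # monthly steps, same per-step value: ~139
```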

I’m sure Dr. Frank is much more expert in the error propagation model than I am. But I am quite sure that Eq. 6 does not represent how a specific bias in a climate model’s energy flux component would change over time. It is one thing to invoke an equation that might well be accurate and appropriate for certain purposes, but that equation is the result of a variety of assumptions, and I am quite sure one or more of those assumptions are not valid in the case of climate model integrations. I hope that a statistician such as Dr. Ross McKitrick will examine this paper, too.

Concluding Comments

There are other, minor, issues I have with the paper. Here I have outlined the two most glaring ones.

Again, I am not defending the current CMIP5 climate model projections of future global temperatures. I believe they produce about twice as much global warming of the atmosphere-ocean system as they should. Furthermore, I don’t believe that they can yet simulate known low-frequency oscillations in the climate system (natural climate change).

But in the context of global warming theory, I believe the largest model errors are the result of a lack of knowledge of the temperature-dependent changes in clouds and precipitation efficiency (thus free-tropospheric vapor, thus water vapor "feedback") that actually occur in response to a long-term forcing of the system from increasing carbon dioxide. I do not believe it is because the fundamental climate modeling framework is not applicable to the climate change issue. Having multiple modeling centers around the world, each performing multiple experiments with its climate model under different assumptions, is still the best strategy to get a handle on how much future climate change there *could* be.

My main complaint is that modelers are either deceptive about, or unaware of, the uncertainties in the myriad assumptions — both explicit and implicit — that have gone into those models.

There are many ways that climate models can be faulted. I don’t believe that the current paper represents one of them.

I’d be glad to be proved wrong.

220 Comments
Ray g
September 11, 2019 4:27 pm

I note temperature rise after CO2 increase. Is it not the other way around? Please explain.

Loydo
Reply to  Ray g
September 11, 2019 5:19 pm

It's a two-way process, Ray. Warming causes an increase in atmospheric CO2 concentration (less being absorbed by a warmer ocean), but a rise in CO2 concentration also causes warming. The former is more pronounced, but that does not mean the latter doesn't exist.

Zig Zag Wanderer
Reply to  Loydo
September 12, 2019 1:36 am

In a nutshell, you have just described the most telling flaw in the CAGW narrative. If this were true, then any warming for any reason would result in runaway warming!

It doesn't, never has, and never will. That's the pin this whole debacle is pinned on. Realising that this is not true, and cannot be true, gives you the understanding that the CAGW hypothesis is also not valid.

Anyone who cannot see this is not fit to practice science in any way.

AGW is not Science
Reply to  Zig Zag Wanderer
September 12, 2019 8:57 am

Plus [insert really big number here]

This is "IT" in a nutshell. The ultimate falsification of the whole "climate catastrophe by CO2 emission" meme. When the FACT that it was temperature that increases or decreases FIRST, and atmospheric CO2 level that FOLLOWS, in the ice core reconstructions was revealed, and begrudgingly acknowledged, they trotted out this CO2 "contribution" canard, arguing that once the (give or take) 800-year time lag had elapsed and temperature AND atmospheric CO2 were BOTH rising, CO2 was "contributing to" the amount of warming.

HOWEVER, if one can read a graph it can easily be seen that this "argument" is pure nonsense. FIRST, no "acceleration" in the RATE of warming occurs at the point where the time lag has elapsed and CO2 begins to rise. SECOND, even if one could argue that the resolution of the graph wasn't sufficient to show the minuscule CO2 "contribution," there is one place where the supposed "contribution," or more correctly, the complete lack thereof, CANNOT hide – when the (excuse me) REAL cause of the temperature rise stops, what we SHOULD see is that, as long as atmospheric CO2 continues to rise, the temperature should continue to rise, at a reduced rate (that reduced rate being the, you know, CO2-related "contribution" to the warming). Instead, what we see is this: Temperatures start falling while atmospheric CO2 levels continue to rise, and then after the same time lag, CO2 levels begin to fall, once again FOLLOWING temperatures. And THIRD, temperatures always START rising when atmospheric CO2 is LOW, and START falling when atmospheric CO2 is high, which tells you that atmospheric CO2 is absolutely NOT a temperature or "climate" driver at all.

TEMPERATURE drives atmospheric CO2. Atmospheric CO2 DOES NOT “drive” temperature.

Sweet Old Bob
September 11, 2019 4:47 pm

Having read comments here and at Dr. Spencer's site, I come away with the thought that it is sort of like the arrow on FedEx trucks ….. some are able to see it, some cannot ….
😉

Reply to  Sweet Old Bob
September 11, 2019 11:55 pm

The concept of not knowing is a very difficult one for many people. How do you formally deal with something you know you don't know? It is particularly difficult for academia, because academia prides itself on knowing everything (even though the evidence shows it fails spectacularly).

For this reason, many academics tend to have a mental model which divides the world into things they can measure – and things they can’t which they call “error”. That division works well when the main “what is unknown” fits a model of random white noise, but when it includes long term unknown trends, this simple conceptual model breaks down and is more a hindrance than a help.

In contrast, real-world engineers and scientists are steeped in a real world full of things that cannot be known, not for some academic reason, but because of things like the production department deliberately fiddling the figures, or because it just isn't worth the time or effort to fully understand a system that would be cheaper to scrap and buy anew. As such, real-world engineers and scientists have no problem with the concept that they don't know everything, and they have usually found ways to adapt theory to make it workable in real-world situations with large amounts of unknown. They tend to be able to cope with "unknowables" that include not just white noise, but long-term trends and even deliberate manipulations.

That I think is why some can “see” … and others can’t. It’s also why I think that most ivory-tower academics are a real hindrance to understanding the subject of climate.

bit chilly
Reply to  Mike Haseler (Scottish Sceptic)
September 12, 2019 2:18 pm

Mike, that is a great post. Articulates my thoughts on the situation far better than I ever could.

Izaak Walton
September 11, 2019 4:53 pm

There is, I think, a simple analogy to explain why most climate scientists think that Frank's paper is wrong. Suppose I try to predict the trajectory of an object moving with zero friction and constant velocity. Newton's laws say that it will obey the equation:

x(t) = x0 + v*t

where x0 is the initial position, v is the velocity and t the time. Now if I measure the initial position to an accuracy of 1 cm, then my final answer will be out by 1 cm independent of the time. If on the other hand there is a 10% error in the velocity, then the error in the position will grow linearly with time. There is thus a crucial distinction between errors in the initial position and the initial velocity in terms of the accuracy of my predictions.

Dr. Spencer and most other climate scientists think that the errors in the forcings that Frank mentioned are similar to an error in the initial position, i.e. they will result in a fixed error independent of time. In contrast, Frank claims that the error in the forcing is similar to an error in the velocity, and so the predictions will become increasingly inaccurate with time.

Now the reason that Dr. Spencer and others think what they do is that if you run the global climate models with different values of the forcings due to cloud cover, the models converge to different temperatures after some time and are stable after that. None of the global climate models experience continuously increasing temperatures. Hence errors in the cloud forcings produce a constant error irrespective of time. And hence the errors do not grow in the fashion claimed by Frank.
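
A minimal sketch of the analogy above (illustrative numbers only, not part of the comment): a fixed initial-position error gives a constant offset, while a fractional velocity error grows linearly with time.

```python
# Minimal sketch of the x(t) = x0 + v*t analogy (illustrative numbers only).
def position(x0, v, t):
    return x0 + v * t

x0, v = 0.0, 10.0  # "true" initial position (m) and velocity (m/s)
for t in (1, 10, 100):
    err_from_position = position(x0 + 0.01, v, t) - position(x0, v, t)  # 1 cm offset
    err_from_velocity = position(x0, v * 1.10, t) - position(x0, v, t)  # 10% velocity error
    print(t, err_from_position, err_from_velocity)
# The position-offset error stays at 0.01 m; the velocity error grows as 1 m/s * t.
```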

Reply to  Izaak Walton
September 11, 2019 7:55 pm

Izaak, I like your analogy to explain how the propagation of errors will (possibly) vary as a function of starting assumptions.

However, you stated "None of the global climate models experience continuously increasing temperatures." Really? That's not what I see when I look at plots of the IPCC CMIP5 global model forecasts . . . all 90-plus models (and they do have different cloud cover forcings) project continuously increasing global temperatures out to 2050 and beyond. See, for example, https://www.climate-lab-book.ac.uk/comparing-cmip5-observations/

Izaak Walton
Reply to  Gordon Dressler
September 11, 2019 9:06 pm

Gordon,
I should have been more precise. In the absence of increased forcing the global climate models reach a steady state. The examples you present are what happens for increasing CO2 levels and thus increasing forcings.

Reply to  Gordon Dressler
September 11, 2019 11:06 pm

Gordon,

The temperatures used for calibration are all adjusted. The adjustments are highly correlated with CO2, which makes little sense to me. It's not surprising that the results all trend higher in time with CO2. It will be interesting to watch what happens when the AMO/PDO turn to negative phases and the sun stays relatively quiet compared to the modern solar maximum.

Rick C PE
Reply to  Izaak Walton
September 11, 2019 8:50 pm

Izaak: You may be right about what climate scientists are saying, but I think you missed Pat Frank’s point. He’s saying (per your analogy) that the velocity might have an uncertainty of say +/- 10% of the value used in the calculation. Therefore, any estimate of future position is subject to an uncertainty derived from the uncertainty of the velocity. Now once the time has passed and if you can measure the final position, you can then calculate what the actual velocity was. But you can’t ignore the uncertainty that exists at the time you started in making your prediction of the future.

Izaak Walton
Reply to  Rick C PE
September 11, 2019 9:14 pm

Rick,
I would agree that Pat Frank, in my analogy, is claiming that there is an error in the velocity. However, Roy Spencer and others claim that the error he has highlighted corresponds to an error in the starting point. The reason for that is that if you took a Global Climate Model and ran it three times, once with a central estimate for cloud forcing, once with the lowest estimate and once with the highest estimate, then after 100 years, and assuming no additional forcing (i.e. constant greenhouse gases), the three runs would converge to three different equilibrium temperatures. They would not continue to diverge as Pat Frank's model predicts. Thus a change in forcing corresponds to a change in x0 in my analogy and not a change in the velocity.

Reply to  Izaak Walton
September 11, 2019 9:55 pm

“I would agree that Pat Frank is in my analogy claiming that there is an error in the velocity.”
The fallacy here, as with Pat, is again just oversimplified physics – in this case motion with no other forces. Suppose you glimpse a planet, then try to work out its orbit. If you get the initial position wrong, you’ll make a finite error. If you get the velocity wrong, likewise.

Or starting a kid on a swing. You’d like to release to set up a desired amplitude. If you get the starting position wrong, you’ll make a limited error which won’t compound. Likewise with velocity. These are two simple cases where the analogy fails. It certainly fails for GCM.

Matthew Schilling
Reply to  Nick Stokes
September 12, 2019 6:47 am

If either your initial position or velocity is incorrect enough for said planet, you may very well calculate that it dies a fiery death in its parent star or escapes it and wanders off into space. Those wouldn't be "finite errors".

Don K
Reply to  Nick Stokes
September 12, 2019 8:40 am

Nick: “If you get the initial position wrong, you’ll make a finite error. If you get the velocity wrong, likewise.”

I think I don’t quite understand what you’re trying to say Nick. But …

I’m under the misapprehension that I actually somewhat understand how numerical integration of Newton’s Laws of motion works. Basically, you start with a position estimate and a velocity estimate. You step forward a short time and compute a new position. And you compute an acceleration vector based on a model of forces acting on the object (gravity, drag, radiation pressure) using that new position and the velocity as inputs. You have to do the acceleration step. Without it, you’ll have an object traveling in a straight line rather than in an “orbit”. Then you use the acceleration vector to adjust the velocity and use the adjusted velocity to compute the next position.

There’s usually some additional logic to manage step sizes, but that’s not relevant to this discussion.

One might think that position errors would be constant (e.g. 10 km in-track) and that velocity errors would increase linearly over time. But accelerations are position dependent, so the position error will usually increase over time, because position discrepancies affect accelerations and accelerations affect velocity. That means that given a bad initial position, velocities will be (increasingly) wrong even if they started off perfect. How wrong? Depends on the initial position estimate. And the initial velocity estimate. And the acceleration model.

And yes, it's actually more complex than that. For example, it's probably possible for errors to decrease with time for some time intervals. I'm not sure that a simple general error analysis is even possible.

I have no idea if analogous situations prevail in climate modelling. My guess would be that they do.

In any case, I’m unclear on what you’re trying to say and I suspect you may have picked a poor example.
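
A minimal sketch of the integration loop described above (a toy inverse-square orbit in arbitrary units, not any real model, and not part of the comment): step the position, recompute the acceleration at the new position, then update the velocity; because acceleration depends on position, an initial position error feeds into the velocity and grows.

```python
# Minimal sketch (toy units): numerical integration of orbital motion, where a
# position error feeds back into the acceleration and hence into the velocity.
import math

GM = 1.0    # gravitational parameter of the central body (arbitrary units)
dt = 0.001  # time step

def accel(x, y):
    """Inverse-square gravitational acceleration toward the origin."""
    r = math.hypot(x, y)
    return -GM * x / r**3, -GM * y / r**3

def integrate(x, y, vx, vy, n_steps):
    for _ in range(n_steps):
        x, y = x + vx * dt, y + vy * dt      # advance position
        ax, ay = accel(x, y)                 # acceleration depends on the new position
        vx, vy = vx + ax * dt, vy + ay * dt  # advance velocity
    return x, y

# Same initial velocity, slightly different initial positions: the trajectories drift apart.
print(integrate(1.00, 0.0, 0.0, 1.0, 10000))
print(integrate(1.01, 0.0, 0.0, 1.0, 10000))
```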

Reply to  Nick Stokes
September 12, 2019 10:40 am

Don,
What I've been saying more generally, e.g. here, is that how errors really propagate is via the solution space of the differential equation. An error at a point in time means you shift to a different solution, and the propagation of error depends on where that solution takes you.

The case of uniform motion is actually a solution of the equation acceleration = y'' = 0 (well, a 3D equivalent). And the solution space is all possible uniform-velocity motions. An error sets you on a different such path, which will diverge from where you would have gone if the error was in velocity (but not position).

For orbits, the solutions are just the possible orbits, and they depend on energy levels, with a bit about eccentricity etc. So an error just takes you to a different orbit. Now it's true that the equivalent velocity error will involve an increasing phase shift, but that can only take you finitely far from where you were. Compared to the uniform motion case, it's a 3D version of harmonic motion, y'' = -y.

It isn’t a particularly good example, because the solutions are limited in space, rather than having some restoring force that actually brings them closer after error, as could happen. I’m really just trying to point out that there is a range of ways error can propagate, and they are very dependent on the differential equation and its solution space.
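
A minimal sketch of that point (illustrative values, not part of the comment), using the analytic solutions of the two equations: the same velocity error grows without bound for y'' = 0 but stays bounded for y'' = -y.

```python
# Minimal sketch (analytic solutions, illustrative values): how a velocity error
# propagates under two different differential equations.
import math

def uniform_motion_error(dv, t):
    """Position error at time t from a velocity error dv, for y'' = 0."""
    return dv * t

def harmonic_motion_error(dv, t):
    """Position error at time t from a velocity error dv, for y'' = -y
    (solution y = y0*cos(t) + v0*sin(t), so the error term is dv*sin(t))."""
    return dv * math.sin(t)

for t in (1.0, 10.0, 100.0):
    print(t, uniform_motion_error(0.1, t), harmonic_motion_error(0.1, t))
# The uniform-motion error grows linearly; the harmonic one never exceeds 0.1.
```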

Reply to  Nick Stokes
September 12, 2019 11:14 am

Nick: finite error vs infinite error?

Only with a pendulum, maybe. If the motion is parabolic, then the velocity error compounds. At least that's the way it works in artillery. The strike of the round is off by a constant if the initial position is off. The strike is off by an increasing amount with time for a velocity error. The only things that make the velocity error constant are tube elevation, initial velocity and gravity.

Don K
Reply to  Nick Stokes
September 12, 2019 3:19 pm

Nick: “I’m really just trying to point out that there is a range of ways error can propagate, and they are very dependent on the differential equation and its solution space.”

Yes, that’s fine. I haven’t read every post in these threads in detail. And some I have read, I don’t really understand all that well. But I doubt many folks would argue error propagation is always trivially simple.

Tom Abbott
Reply to  Izaak Walton
September 12, 2019 3:52 am

“There is I think a simple analogy to explain why most climate scientists think that
Frank’s paper is wrong.”

Have you communicated with most climate scientists, Izaak?

September 11, 2019 5:04 pm

Climate models serve to do one thing – justify the existence of the keyboard jockeys. That’s it and that’s all. In reality they lead to academic discussion which leads to precisely nothing due to the inevitable and insurmountable accuracy problem indicated elsewhere. Their value does not come remotely close to that of historical observation.

Master of the Obvious
September 11, 2019 5:11 pm

A quick observation:

The dueling analysis here can be attributed to whether the climate temperature model is (thermodynamically) a state function or if it is a path function.

Many of the models that have been proposed have assumed (or implicitly assume) that the global temperature equals some historic steady-state temperature (I'd be leery of using the exalted thermodynamic term of equilibrium in reference to climatic behavior) plus some additive term for GHG forcing. Thus, this year's climate "temperature" is a determinative outcome of the current CO2 (the state variable). This year's temperature is independent of last year's temperature (and GHG forcing) and has no bearing on next year's temperature/forcing. If the model has such a thermodynamic construct, then there can be no accumulation of error.

However, if the thermodynamics have elements of a path function (which is often the case for systems with heat flows (Q) under non-isentropic conditions), then the current conditions are dependent on the pathway of the variables used to get here. All the nice heat cycles featured in various thermodynamic textbooks are prime examples. Consequently, this year’s temperature is dependent upon last year’s conditions and will affect next year’s outcome and error could accumulate.

Which is right? Beats me. As both authors have pointed out, the pertinent details have been either lightly explored or ruthlessly ignored.
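
A minimal sketch of the state-versus-path distinction described above (arbitrary numbers, not part of the comment, and not a claim about how any GCM is actually coded):

```python
# Minimal sketch (arbitrary numbers): in a "state function" model the anomaly
# depends only on the current forcing, so a per-step error never accumulates;
# in a "path function" model each step builds on the last, so it can.
import random

def state_model(forcings, sensitivity=0.5, err=0.1):
    """Anomaly depends only on the current forcing value."""
    return [sensitivity * f + random.uniform(-err, err) for f in forcings]

def path_model(forcings, sensitivity=0.5, err=0.1):
    """Anomaly carries forward from the previous step, so errors can accumulate."""
    anomalies, anomaly, prev_f = [], 0.0, 0.0
    for f in forcings:
        anomaly += sensitivity * (f - prev_f) + random.uniform(-err, err)
        anomalies.append(anomaly)
        prev_f = f
    return anomalies

forcings = [0.04 * year for year in range(100)]
print(state_model(forcings)[-1], path_model(forcings)[-1])  # compare century-end anomalies
```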

John_QPublic
Reply to  Master of the Obvious
September 11, 2019 5:33 pm

The fact that they are using a transient calculation indicates that it is a path function.

September 11, 2019 5:16 pm

Are we witnessing “the beginning of the end” of (pal) peer review?

This pair of threads may be historic, and noted for bringing intelligent climate discussion to a general audience (I'm a BSME with an MBA) who can listen to (and partially understand) the back and forth between true scientists who clearly respect each other despite disagreement.

Who knows what we would now be experiencing if Steve McIntyre and Dr Mann had the same back and forth dialogue in a similar venue over several months, with respected advocates from both sides chipping in (Dr. Lindzen, Chris Monckton, Dr Curry, Nick Stokes, Kerry Emanuel, our own Mosh and Willis, et. al.)

The ignoranti (speaking for myself) can even ask questions and both Drs. Spencer and Frank have obliged.

What would be the best way to leverage (and extend) such dialogue (vs an interview with Greta) for the citizens of the US and other nations to have the requisite information to come to an objective consensus on policy decisions? (It was noteworthy to me that very few of the comments in both threads were political.)

In another post, I claimed the 3 most underused words in the English language are “I DON’T KNOW”.
If we can re-establish “Nullius in Verba” there is a chance that those of us skeptical of a catastrophic future can make a good case to the residents of the developed world that they don’t have to throw away their children’s and grandchildren’s future on alarming but unproven threats of future catastrophe.

Chris Thompson
September 11, 2019 5:26 pm

Dr. Frank has calculated extremely wide future temperature uncertainty bounds for the models. He points out repeatedly in the discussions on the net that these are 'uncertainty bounds', not possible 'real' future temperatures that might actually happen, but the uncertainty of the model's predictions when a certain kind of error is propagated forward in a random fashion.
He also said that the only way to validate whether those wide uncertainty bounds were reasonable is, as is the case with any measurement uncertainty, to compare the actual performance of the models to the reality of the measured values that the models are predicting.

If a predicted uncertainty range is statistically correct, multiple models incorporating that potential error should spread themselves rather widely inside the envelope of the predicted error. One in 20 should, on average, go outside the error bounds.
If the models over time do not scatter widely around those error bounds, and if one in 20 doesn’t go outside them, the calculated uncertainty bounds are not predicting the true uncertainty of the models.

I think we already know that none of these models has come close to hitting such wide error bounds over the time that they have already been run. It's true that the models are running higher than reality, but some are close. They are deviating somewhat one from the other. But none seem to be deviating at the kind of rates consistent with the magnitude of uncertainty demonstrated by Dr. Frank. Nor is the actual temperature of the planet.

Consequently the error magnitude between model and actual temperatures seems, to date, to be far less than the uncertainty predicted by Dr. Frank.
If, as time passes, we find that the real temperature diverges sufficiently massively from the models, or the models diverge sufficiently from each other such that 1 in 20 of the models finds itself outside Dr. Frank's huge predicted uncertainty range, and the rest scatter themselves widely within those huge bounds, then Dr. Frank's proposed error extent will be proven correct.

However already we have had some considerable time with the models and they do not, over a sufficient time frame, show anything like that amount of deviation from each other or from the actual temperatures.

Now it could be that they’ve all been tuned to look good, and won’t stay good going forward. We will only know for sure in 50 years or so, I guess. However at present the data suggests that the true uncertainty measurement is not as wide as Dr. Frank suggests it is.

Dr. Frank is also, I think, perhaps trying to have his cake and eat it. When people pointed out that the world simply cannot warm or cool 15 degrees C in the time frame proposed, Dr. Frank said that uncertainty measurements are not actual temperature possibilities, they are just a statistical measurement of uncertainty. This is not true. An accurate uncertainty measurement statistically encompasses the full range of actual outcomes that might arise. For the uncertainty itself to be accurate, on average 1 in 20 of the models, or the earth itself, or both, simply must go outside those error bounds. If they don't, and if they actually all sit quite close together and don't scatter as widely as predicted, then the uncertainty assessment is simply incorrect.
That’s why valid measurement uncertainties must reflect ‘real potential errors’ if they are correct. Uncertainties are not theoretical values. If a machine makes holes and the drill has a certain radius uncertainty, the accuracy of that assessment can be validated by measuring the outcome of drilling lots of holes. If the machine does better than the predicted uncertainty, then the prediction was wrong. The predicted uncertainty was wrong.

Already we have enough data to know that Dr. Frank's possible range of uncertainty hasn't shown up in multiple model runs, or in the actual temperature of the earth. This suggests that his estimate of potential uncertainty is based on a false assumption. It does seem that the false assumption relates to whether or not the cloud forcing error can or can't propagate. The data to date invalidate his predicted uncertainty range, and therefore most likely the cloud forcing error does not propagate.

Reply to  Chris Thompson
September 11, 2019 10:19 pm

Chris Thompson,

The models are not relying on modeled cloud. They can’t because cloud is too complex. They are relying on parameterizations of cloud. So the error doesn’t propagate, but the uncertainty does.

Arachanski
Reply to  Chris Thompson
September 12, 2019 1:03 am

The programming doesn’t change the physical reality of the uncertainties.

A C Osborn
Reply to  Chris Thompson
September 12, 2019 2:08 am

Where do you get "One in 20 should, on average, go outside the error bounds"?
Why? How?

AGW is not Science
Reply to  Chris Thompson
September 12, 2019 9:51 am

“If a predicted uncertainty range is statistically correct, multiple models incorporating that potential error should spread themselves rather widely inside the envelope of the predicted error. One in 20 should, on average, go outside the error bounds. If the models over time do not scatter widely around those error bounds, and if one in 20 doesn’t go outside them, the calculated uncertainty bounds are not predicting the true uncertainty of the models.”

Better re-read much of this; that is NOT Dr. Frank's argument. He is not indicating that the results of the models will vary by that amount; he is indicating that the uncertainty in their results is that large, thereby making their "output" meaningless, regardless of how they are "constrained" by fudge factors.

“I think we already know that none of these models has come close to hitting such wide error bounds in the time that they have already been run over. Its true that the models are running higher than reality, but some are close. They are deviating somewhat one from the other. But none seem to be deviating at the kind of rates consistent with the magnitude of uncertainty demonstrated by Dr. Frank. Nor is the actual temperature of the planet.”

What we "know" is that the uncertainty in the models and the deficiencies in the models make their "output" worse than useless. And as noted above, their "outputs" do not need to "deviate" consistent with the magnitude of uncertainty demonstrated by Dr. Frank, because that is not what Dr. Frank claimed; he merely showed how large the uncertainties were, and how meaningless THAT makes the "output" of those models. And while it's true that "the models are running higher than reality," you'll notice a conspicuous absence of any model which runs "cooler" than reality – which is because they all contain the same INCORRECT assumption – that atmospheric CO2 level "drives" the temperature.

In summary, the models are not only so uncertain as to be meaningless, but are built on assumptions that are used to provide pseudo-support for harmful, purely hypothetical bullshit, which is the notion that atmospheric CO2 “drives” the climate, which has never been empirically shown to occur in the Earth’s climate history.

September 11, 2019 5:37 pm

Modelling the climate to determine how much GAST will increase due to increasing CO2 requires a guess as to how much each new ppm of CO2 will increase GAST. Since this quantity is not known and cannot be calculated without wildly unscientific assumptions, the entire concept is rather dubious at best.

"CO2's effect is logarithmic." But is it, really? Right now CO2's capacity to absorb and thermalize 15-micron radiation from the surface is saturated at around 10 meters altitude. So, increasing CO2 only raises the altitude at which the atmosphere is free to radiate to space, thus lowering the temperature at which the atmosphere radiates to space, thus lowering the amount of energy lost, thus increasing the energy in the atmosphere, thus increasing GAST. No one can calculate the magnitude of this effect.

Reply to  Michael Moon
September 11, 2019 11:26 pm

This is a debate about statistics, rather than physics.

So, CO2’s effects are known?

No they are not. If the effect on GAST since 1880 was all because of CO2, we could estimate it.

One, we do not know that.

Two, temperature records, not so good, and altered.

Three, Dr. Roy Spencer, have you not abandoned First Principles and adopted the assumptions of the Warmists? You do not need to beat them at their own game, you need to point out that their own game has no basis in physics.

Gentleman…

John_QPublic
September 11, 2019 5:47 pm

I would be very interested to see Dr. Spencer, or any critic,

ACKNOWLEDGE that Pat Frank has posited that the +/-4 W/m2 is in fact an uncertainty, and that the propagation calculation using this uncertainty is independent of the models, and is not intended to calculate a temperature at the end, but rather to make a statement about the validity of the models.

Then starting from that point, explain what their objections are.

Matthew Schilling
Reply to  John_QPublic
September 12, 2019 7:40 pm

+1

September 11, 2019 5:58 pm

It may be my ignorance, but why is Dr Frank’s original thread no longer available?

Reply to  George Daddis
September 11, 2019 7:22 pm

It is. You just have to go down to when he posted it.

John Tillman
Reply to  George Daddis
September 11, 2019 7:44 pm

It is. It’s just no longer “sticky” at the top of all posts.

Tom Abbott
Reply to  George Daddis
September 12, 2019 4:02 am

It’s still available to me because I left it open in a tab. Try this:

https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/

It’s up to 842 comments.

Mark Broderick
September 11, 2019 6:02 pm

“First water detected on potentially ‘habitable’ planet”

https://www.ucl.ac.uk/news/2019/sep/first-water-detected-potentially-habitable-planet

“K2-18b, which is eight times the mass of Earth, is now the only planet orbiting a star outside the Solar System, or ‘exoplanet’, known to have both water and temperatures that could support life.”

WOW !

John Tillman
Reply to  Mark Broderick
September 12, 2019 10:43 am

Although expected sooner or later, this is indeed big news.

However K2-18 is a red dwarf (M-type), the smallest and coolest kind of star on the main sequence. Red dwarfs are by far the most common type of star in the Milky Way, at least in the neighborhood of the Sun. But their frequent strong flaring might reduce the habitability of planets in their, necessarily close-in, habitable zones.

Dunno if planet K2-18b is tidally locked or not, but if so, this could also present problems for the development of life there.

Gerald Machnee
September 11, 2019 6:26 pm

This comment is interesting:
**The reason is that the +/-4 W/m2 bias error in LWCF assumed by Dr. Frank is almost exactly cancelled by other biases in the climate models that make up the top-of-atmosphere global radiative balance.**
So are the "other biases" deliberate, scientific, or accidental? Is there "tuning" to get a desired temperature forecast?
What would happen to the temperature error if the model was run without the additional elements?
Would there be runaway error as Dr. Frank indicated?

Jordan
Reply to  Gerald Machnee
September 12, 2019 2:03 am

I don’t think these claimed other biases reduce the uncertainty. In fact, if uncertainty is literally “what we don’t know”, suggesting other (unidentified) biases is an argument that uncertainty is even greater.

Pat is being too generous when his uncertainty range starts from zero.

And let’s not forget that Pat has only positioned his analysis using an estimate of low-end uncertainty. The reality is probably a good bit worse than this.

n.n
September 11, 2019 6:40 pm

The error is in finite representation. The model error is due to incomplete… insufficient characterization and an unwieldy space.

Anonymoose
September 11, 2019 6:57 pm

Dr. Spencer,

Isn't the uncertainty in the unknown behavior of physical clouds? It doesn't matter what the models are doing. The basic science doesn't know how clouds should behave. So the uncertainty of ±4 W/m2 per year exists around the calculations which are done, not within the calculations.

Reply to  Anonymoose
September 11, 2019 8:20 pm

Would I be correct to say that the uncertainty lies in the calculation process, NOT in the calculation outcome? — in how the calculations are done and not in what the calculations produce?

John_QPublic
Reply to  Anonymoose
September 11, 2019 11:03 pm

I believe this is correct, and none of the critics seem to want to address this issue, or seem to be aware of it.

Matthew Schilling
Reply to  John_QPublic
September 12, 2019 7:46 pm

It's actually an awesome thing to watch bright, accomplished people completely miss a fundamental point – even after it has been explained to them a few times. It's also very humbling. It's possible "genius" is just a synonym for clarity.

September 11, 2019 7:34 pm

Dr. Spencer, I greatly respect your work and your postings on WUWT.

I was therefore concerned to see this statement in the above article: “With few exceptions, the temperature change in anything, including the climate system, is due to an imbalance between energy gain and energy loss by the system. This is basic 1st Law of Thermodynamics stuff.”

I am sure that you are quite aware that temperature change does not necessarily correlate to energy change, the most obvious case relevant to Earth’s climate being the fact that a substantial amount of energy is exchanged at CONSTANT temperature when ice melts to liquid water or when liquid water freezes to ice. These are not “exceptions” in Earth’s climate system but occur widely and daily all over the globe with the changing seasons (snow and ice storms), and from the tops of high mountains in the tropics to sea ice in the polar regions. The magic 0 C (32 deg-F) number for water/ice phase change is not at all unusual over the range of variability of Earth’s land, sea and atmospheric temperatures.

So, I believe I am correct in asserting that areas of Earth can hide a lot (that’s not a scientific term, I know) of energy gain or energy loss via the enthalpy of fusion of water, even in a “closed” system.

Can you please clarify your position on this vis-a-vis climate modeling . . . in particular, do you believe most of the global climate models relatively accurately capture the energy exchanges associated with melting ice/freezing water and variations therein over time (from seasonal to century timescales)?

Reply to  Gordon Dressler
September 11, 2019 10:24 pm

Good point. Temperature is not a measure of the heat content of air. Enthalpy is.

Reply to  Gordon Dressler
September 12, 2019 8:03 am

Besides freezing, how about vaporization and condensation, specifically aerosol cloud condensation nuclei, and their dependence on ionizing radiation?

pochas94
September 11, 2019 8:16 pm

The paper seems to imply that it is impossible to write a valid climate model. I think we should keep trying anyway. I am really skeptical of self-propagating errors.

Steven Mosher
September 11, 2019 8:25 pm

ATTP and Nick Stokes have both done "takedowns" of Pat's paper.

Here is one

https://andthentheresphysics.wordpress.com/2019/09/10/propagation-of-nonsense-part-ii/

Pat’s mistake is using error propagation on a base state uncertainty.

+/-4 W/m2 is a BASE STATE FORCING uncertainty. You don't propagate that. As Roy points out, at model start-up this is eliminated. If it wasn't eliminated during model spin-up, the control runs would be all over the map. They are not.

I have a clock. I tell you its base state error is +/- 4 minutes. That means AT THE START it is up to 4 minutes fast or 4 minutes slow. I let the clock run. That BASE STATE error does not accumulate.

The only thing that persists in all of this is Pat's "base state" mistake about error analysis of GCMs (and temperature products as well, but that's a whole other issue).
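
A minimal sketch of the clock analogy above (illustrative numbers, not part of the comment): a base-state offset stays constant as the clock runs, whereas a rate error accumulates.

```python
# Minimal sketch (illustrative numbers): constant offset error versus a rate
# error that drifts as the clock runs.
def clock_error(offset_min, drift_min_per_day, days):
    """Total error (minutes) after `days`, given an initial offset and a daily drift."""
    return offset_min + drift_min_per_day * days

print(clock_error(4.0, 0.0, 14))  # pure base-state error: still 4 minutes after two weeks
print(clock_error(0.0, 4.0, 14))  # 4 min/day rate error: 56 minutes after two weeks
```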

AKSurveyor
Reply to  Steven Mosher
September 11, 2019 9:45 pm

Not even wrong yet

angech
Reply to  Steven Mosher
September 12, 2019 12:13 am

"A directly relevant GCM calibration metric is the annual average ±12.1% error in global annual average cloud fraction produced within CMIP5 climate models."
"Just for reference, the average flows of energy into and out of the Earth's climate system are estimated to be around 235-245 W/m2, but we don't really know for sure."
If it is 4 minutes slow at the start it will always be 4 minutes wrong. But if whatever made it 4 minutes wrong before you noticed it is still at work, it is probably losing 4 minutes a day.
Good luck in getting to work on time in 2 weeks.*
* unless you work at home.
Wake us up when you get your clocks and stations right rather than adjusted.

Bill Burrows
Reply to  angech
September 12, 2019 12:46 am

Touché!

Clyde Spencer
Reply to  Steven Mosher
September 12, 2019 10:11 am

Mosher
You said, “I let the clock the run. That BASE STATE error does not accumulate.” That is correct as far as it goes. The problem is that, first of all, you are assuming that the clock is running at the correct rate and does not change; that may not be true for a natural phenomenon such as cloud coverage. Secondly, the problem is when the ‘time’ is used in a chain of calculations, particularly if the result of a calculation is then used in a subsequent calculation that again uses the time with an uncertainty that may be variable, for which you only know the upper and lower bounds. Rigorous uncertainty analysis requires that the worst case be considered for the entire chain of calculations, and not focus on just the uncertainty of one parameter.

Antero Ollila
September 11, 2019 9:41 pm

I think that Dr. Spencer's analysis rests on this finding (a quote):

“If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do. Why? Because each of these models is already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.”

As I have commented before, the GCMs or simpler models do not have the cloud forcing element as a parameter varying over time. Do you think that modelers have enough knowledge to calculate what the cloud effect was 45 years ago, or what it will be 10 years in the future? They have no idea, and so they do not even try. How could they program these cloud forcing effects into their models? Can you show at least one reference on the way they calculate cloud effects?

Another thing is that cloud forcing effects may be a major reason for the incorrect results of IPCC’s climate models because GCMs do not use cloud forcing effects. One of the competing theories is the so-called sun theory and you all know that clouds have a major role in this theory. The official IPCC climate science does not approve of this theory.

Please show that I am wrong.

Reply to  Antero Ollila
September 11, 2019 10:40 pm

Dr. Ollila,

Dr. Frank’s paper says the models mis-estimate cloud forcing. So, that is one reference showing that models calculate cloud effects.

Cloud is too complex to model, so I suppose they must be paramatized (it's not even an English word!), but clearly the models do something to take into account changes in cloud. I suppose they must make some assumption for starting values of cloud cover when they model the past, but it is clearly being changed over time.

Clyde Spencer
Reply to  Thomas
September 12, 2019 10:16 am

Thomas
But, “parameterized” is an English word.

Rud Istvan
September 11, 2019 9:45 pm

Admit, skipped most comments. So maybe this is repetitive.
But, CtM alerted me to this critique coming at lunch today, so gave it some thought when appeared.

There are two fundamental problems with Roy Spencer’s critique, both illustrated by his figure 1.

First, figure 1 presumes an error bar. That is NOT the point of the paper. This confuses precision with accuracy, a derivative of the CtM Texas sharpshooter fallacy. (Patience, will soon be explained). Frank’s paper critiques accuracy in terms of uncertainty propagation error. That has nothing to do with precision, with model chaotic attractor stability, and all that. For a visual on this A Watts ‘distinction with a difference’, see my guest post here on Jason, unfit for purpose—which dissects accuracy v precision. CtM himself said today he found the explanatory graphic useful.

Put simply, the Texas sharpshooter fallacy draws a bullseye around the shots on the side of a barn and declares accuracy. Precision would be a small set of holes somewhere on the barn side but possibly missing the target. Franks paper simply calculates how big is the side of the barn. Answer, uselessly big.

Second, the issue (per CtM and Roy) is whether the 'linear' emulator equation used to propagate uncertainty encapsulates chaos and attractors (nonlinear dynamics, Lorenz, all that AR3 stuff including negative feedbacks and stable nodes). Now, this is a matter of perspective. Treating nonlinear dynamic (by definition chaotic) climate models as a chaotic black box, then deriving an emulator over a reasonable time frame, of course does not guarantee that the emulator is valid over a much longer time frame. But then, neither do the models themselves. By definitional fit, CMIP5 is parameter-tuned to hindcast 30 years from 2006. Not more. That period is fairly used by Frank. So the Dansgaard event and LGM are by definition excluded from both. IMO, reasonable, since in 2019 we are trying to understand 'just' 2100.

Harry Newman
Reply to  Rud Istvan
September 11, 2019 11:33 pm

Bottom line is that the esoteric and normative climate models will be seen to be not even vaguely right …. but "precisely" wrong. Time to move on and bury this crazy humans-impacting-climate narrative, which has sad parallels to Akhenaton and the Aztec priests and their ridiculous delusions of controlling the sun. Galileo would be proud of you, Pat!

Ethan Brand
Reply to  Rud Istvan
September 12, 2019 6:36 am

The size-of-the-barn analogy is the critical point of PF's analysis. Almost all of the comments I have read, including R. Spencer's, are centered on the grouping (pun intended). Very difficult concept for most of us to "get". Overall this discussion is probably the best ever presented on WUWT. Keep it up, progress is being made here.

Reply to  Rud Istvan
September 12, 2019 7:57 am

Well, let's carry the barn analogy to its rightful end . . . the IPCC would take delight in the fact that they even hit the "uselessly big" barn with most of their climate models. The shot groupings seem of little concern to them, with at least a 3:1 dispersion in the magnitudes of global warming rates across the lot of CMIP5 models.

Reply to  Gordon Dressler
September 12, 2019 8:58 am

That barn, full of buckshot, just happens to be our economy.

angech
September 11, 2019 11:06 pm

1. "Just for reference, the average flows of energy into and out of the Earth's climate system are estimated to be around 235-245 W/m2, but we don't really know for sure."
2. "Frank's paper takes an example known bias in a typical climate model's longwave (infrared) cloud forcing (LWCF) and assumes that the typical model's error (+/-4 W/m2) in LWCF can be applied in his emulation model equation."

I would assume that statement 2 sort of supports statement 1.

angech
September 11, 2019 11:26 pm

“Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux
If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do.
Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.”

J Storrs Hall, PhD
September 12, 2019 12:01 am

Dr. Spencer's objections are somewhat more likely to be right if you think of the physics of the system, where there needs to be a net energy flow for temperature to rise. However, in a computer model, many things can happen that physics would not allow. For example, in a molecular dynamics model, with which I have a bit of experience, you can model an inert block of diamond with no radiative coupling at all, which will nevertheless warm dramatically all by itself. Thus virtually all MD models have a "thermostat" which artificially clamps the temperature. I doubt the climate models have explicit thermostats, but I'll bet they have implicit ones, perhaps even unknown to their writers. That "unforced pre-industrial" flatline should wander a lot more than it does.

angech
September 12, 2019 12:02 am

Climate Models Do NOT Have Substantial Errors in their TOA Net Energy Flux
If any climate model has as large as a 4 W/m2 bias in top-of-atmosphere (TOA) energy flux, it would cause substantial spurious warming or cooling. None of them do. Why? Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.
Pat Frank states “A directly relevant GCM calibration metric is the annual average ±12.1% error in global annual average cloud fraction produced within CMIP5 climate models. This error is strongly pair-wise correlated across models, implying a source in deficient theory. The resulting long-wave cloud forcing (LWCF) error introduces an annual average ±4 Wm–2 uncertainty into the simulated tropospheric thermal energy flux. ”

There seems to be a bit of dissonance here.
Roy is arguing about the TOA Net Energy Flux being balanced before the model is run, I presume.
Since he also states "the average flows of energy into and out of the Earth's climate system are estimated to be around 235-245 W/m2, but we don't really know for sure," the message surely is that there is a ±5 W/m2 degree of uncertainty in choosing the actual starting point. Obviously a possible inherent bias error.

"Because each of these models are already energy-balanced before they are run they have no inherent bias error to propagate." This does not rule out a 4 W/m2 yearly variation developing in one of the subsidiary components: "A directly relevant GCM calibration metric is the annual average ±12.1% error in global annual average cloud fraction."

Worse, "Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate." misses the point that there are other inbuilt variations affecting TOA estimation, combining to 5 W/m2.

Worse is the obvious conclusion from the pre-industrial graph figures provided by Roy that each of these models is continuously energy-balanced while it is run.
When Roy comments
“If what Dr. Frank is claiming was true, the 10 climate models runs in Fig. 1 would show large temperature departures as in the emulation model, with large spurious warming or cooling. But they don’t. You can barely see the yearly temperature deviations, which average about +/-0.11 deg. C across the ten models.”
I would do a Stephen McIntyre.
Average standard yearly deviation: 0.11 C.
Standard deviation of model trends: 0.10 C/century.
So in 100 years, 100 years!, ten models vary by only 0.10 C from no warming!
Highly suspicious; temperature varies a lot per century.
And the yearly deviation is greater than the 100-year deviation.
What gives?
Proponents of coin tosses will note that the deviation under fair conditions, despite reversion to the mean, should be greater than this. And the conditions for temperature are never fair.
This rigid, straitjacketed proof that the models are not working properly, but have adjustments in them to return everything to the programmed constant TOA, is shocking but perfectly compatible with a computer program.

A last question: if not, why not?
If Pat Frank states that "A directly relevant GCM calibration metric is the annual average ±12.1% error in global annual average cloud fraction produced within CMIP5 climate models," then it either is or isn't.
Roy, furthermore, if it is not, why not?
And if it is, what peregrinations is the rest of the algorithm doing to keep the TOA constant?

September 12, 2019 12:21 am

From the article.
Statement A: The errors show that (for example) we do not understand clouds and all of the processes controlling their formation and dissipation from basic first physical principles, otherwise all models would get very nearly the same cloud amounts.
Statement B: Because each of these models are already energy-balanced before they are run with increasing greenhouse gases (GHGs), so they have no inherent bias error to propagate.

These two statements cannot both be true. If Statement A is true then the energy-balance is a statistical sleight of hand that does not reflect the physical reality. Therefore there could be many inherent biases that are hidden during the balancing but are quite able to propagate later on.

And Statement A is clearly true as all the models do not get very nearly the same cloud amounts.