Additional Comments on the Frank (2019) “Propagation of Error” Paper

From Dr Roy Spencer’s Blog

September 12th, 2019 by Roy W. Spencer, Ph. D.

NOTE: This post has undergone a few revisions as I try to be more precise in my wording. The latest revision was at 0900 CDT Sept. 12, 2019.

If this post is re-posted elsewhere, I ask that the above time stamp be included.

Yesterday I posted an extended and critical analysis of Dr. Pat Frank’s recent publication entitled Propagation of Error and the Reliability of Global Air Temperature Projections. Dr. Frank graciously provided rebuttals to my points, none of which have changed my mind on the matter. I have made it clear that I don’t trust climate models’ long-term forecasts, but that is for different reasons than Pat provides in his paper.

What follows is the crux of my main problem with the paper, which I have distilled to its essence, below. I have avoided my previous mistake of paraphrasing Pat, and instead I will quote his conclusions verbatim.

In his Conclusions section, Pat states, “As noted above, a GCM simulation can be in perfect external energy balance at the TOA while still expressing an incorrect internal climate energy-state.”

This I agree with, and I believe climate modelers have admitted to this as well.

But, he then further states, “LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.”

While I agree with the first sentence, I thoroughly disagree with the second. Together, they represent a non sequitur. All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!

Why?

If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior:

Figure 1. Yearly- and global-average longwave infrared energy flux variations at top-of-atmosphere from 10 CMIP5 climate models in the first 100 years of their pre-industrial “control runs”. Data available from https://climexp.knmi.nl/

Importantly, this forced-balancing of the global energy budget is not done at every model time step, or every year, or every 10 years. If that was the case, I would agree with Dr. Frank that the models are useless, and for the reason he gives. Instead, it is done once, for the average behavior of the model over multi-century pre-industrial control runs, like those in Fig. 1.

The ~20 different models from around the world cover a WIDE variety of errors in the component energy fluxes, as Dr. Frank shows in his paper, yet they all basically behave the same in their temperature projections for the same (1) climate sensitivity and (2) rate of ocean heat uptake in response to anthropogenic greenhouse gas emissions.

Thus, the models themselves demonstrate that their global warming forecasts do not depend upon those bias errors in the components of the energy fluxes (such as global cloud cover) as claimed by Dr. Frank (above).

That’s partly why different modeling groups around the world build their own climate models: so they can test the impact of different assumptions on the models’ temperature forecasts.

Statistical modelling assumptions and error analysis do not change this fact. A climate model (like a weather forecast model) has time-dependent differential equations covering dynamics, thermodynamics, radiation, and energy conversion processes. There are physical constraints in these models that lead to internally compensating behaviors. There is no way to represent this behavior with a simple statistical analysis.
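
As a rough illustration of that compensating behavior, here is a minimal sketch of a zero-dimensional energy-balance model; it is not any actual GCM, and all parameter values below are made-up round numbers. In a model with a restoring feedback term, a constant bias in one flux component shifts the base state but does not grow without bound, and the warming response to a slowly increasing forcing is essentially unchanged.

```python
# Toy zero-dimensional energy-balance sketch (illustrative only, not a GCM):
# dT/dt = (F + bias - lam*T) / C. A constant flux-component bias shifts the
# equilibrium state, but the feedback term restrains it, so it does not
# accumulate, and the *response* to increasing forcing is unaffected.
# All numbers are round, made-up values.

def run(years, forcing_per_year=0.04, bias=0.0, lam=1.2, heat_cap=8.0):
    """Integrate annually; lam in W/m2/K, heat_cap in W*yr/m2/K."""
    T = bias / lam                      # start from the (biased) equilibrium
    temps = []
    for yr in range(years):
        F = forcing_per_year * yr       # slowly increasing GHG forcing
        T += (F + bias - lam * T) / heat_cap
        temps.append(T)
    return temps

base, biased = run(100), run(100, bias=4.0)
print(round(base[-1] - base[0], 2))                               # warming response, unbiased run
print(round((biased[-1] - biased[0]) - (base[-1] - base[0]), 3))  # change in response due to the bias: ~0
```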

Again, I am not defending current climate models’ projections of future temperatures. I’m saying that errors in those projections are not due to what Dr. Frank has presented. They are primarily due to the processes controlling climate sensitivity (and the rate of ocean heat uptake). And climate sensitivity, in turn, is a function of (for example) how clouds change with warming, and apparently not a function of errors in a particular model’s average cloud amount, as Dr. Frank claims.

The similar behavior of the wide variety of different models with differing errors is proof of that. They all respond to increasing greenhouse gases, contrary to the claims of the paper.

The above represents the crux of my main objection to Dr. Frank’s paper. I have quoted his conclusions, and explained why I disagree. If he wishes to dispute my reasoning, I would request that he, in turn, quote what I have said above and why he disagrees with me.

Eliza
September 12, 2019 4:09 pm

All of Roy Spencer's satellite data, plus balloon etc. data, show that the models don't work. Pat Frank is correct; there is the proof!

John_QPublic
September 12, 2019 4:25 pm

Dr. Spencer:

I think I hear you saying that you believe Pat Frank is stating that the models' predictions are not accurate because of errors in cloud forcing. In a sense that is true, but what he's actually saying is that because the errors in cloud forcing are so high, the models are meaningless. In other words, quite possibly the whole idea of greenhouse gas forcing, which is the common theme of all the models, may be invalid.

Warren
September 12, 2019 4:37 pm
September 12, 2019 4:53 pm

Thank you Roy for taking the time to do an “objective” analysis from a “peer review” perspective. While others may have theories to the contrary, I suggest just wait till they submit their own WUWT articles, otherwise you’d just be responding to numerous “what-if” issues. Your stand-alone article is solid. Keep up the good work.

Anton Eagle
September 12, 2019 5:16 pm

Pat Frank is correct, in general, regarding how errors propagate. Dr Spencer is incorrect in his statement that balancing the models at the start proves anything (the real climate is never in balance at any time).

That said… it doesn’t matter.

This entire argument misses an important point. This battle won't be won by debating error bars (and it IS a battle). The general public will neither understand the nuances of error propagation, nor will they care even if they do understand. It's entirely the wrong debate to be having.

The problem with the warmist position is NOT the error bars on the data… the problem is the data itself. Data that has been manipulated, adjusted, is prone to inconsistencies due to station dropout, gridding, and on and on. The problem is… outside of the computer models… there is not one single shred of actual real-world evidence to support their position. None.

That is the argument we need to be always pounding… not error bars. Getting lost in the weeds debating error bars is a waste of time.

Reply to  Anton Eagle
September 12, 2019 5:46 pm

Well, it would not be a waste of time, if we had data about which we were confident. We still need to argue about the legitimacy of the tools that ultimately handle such worthy data, if we were ever to have it.

Rud Istvan
September 12, 2019 5:39 pm

I decided to let this percolate before weighing in, although I knew it was coming from lunch with CtM and addressing his own intuitive discomforts.

A lot of the disagreements here on Pat Frank's paper are based on fuzzy definitions. Accuracy versus precision, and error versus uncertainty, are the two biggies. Let's try to be less fuzzy.

Accuracy versus precision was illustrated in a figure from my guest post, Jason3: Fit for Purpose? (Answer: no), albeit the no-accuracy/no-precision part of the figure could have been a bit more obvious. Accuracy is how close the average shot pattern is to the bullseye; precision is how tight the shot grouping is, whether or not it is on the bullseye.

Error is a physical notion of observational instruments’ measurement problems, like the temperature record confounded by siting problems, or Jason 3 SLR struggling with waves, Earth’s non-symmetric geoid, and orbital decay. It is statistical in nature, and error bars express a ‘physical’ statistical uncertainty. Uncertainty itself is a mathematical rather than physical construct: how certain can we be that, whatever the observational answer (with error bars) and error may be, it will be within the theoretical uncertainty envelope? It is probability-theoretic in nature. It is perfectly possible that all ‘strange attractor’ Lorenz nonlinear dynamic stable error nodes lie well within the ‘so constrained’ probabilistic uncertainty envelope, because the two are different things, computed differently.

Frank’s paper says the accuracy uncertainty envelope from error propagation exceeds any ability of models to estimate precision with error bounds. That is very subtle, but fundamentally simple. Spencer’s rebuttal says possible error is different and probably constrained. True, but not relevant.

sycomputing
Reply to  Rud Istvan
September 13, 2019 7:28 am

Frank’s paper says . . . Spencer’s rebuttal says . . .

Thanks for the simplification. It seems the presuppositions underpinning the respective arguments are creating an epistemological language barrier – e.g., when the theologian discusses origins with the atheist.

Beyond this, didn’t the IPCC long ago admit Frank’s ultimate argument, i.e., that the models are worthless for prediction?

From the third assessment report, p. 774, section 14.2.2.2, “Balancing the need for finer scales and the need for ensembles”

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

September 12, 2019 5:39 pm

Roy, let me take a different approach to the problem.

We agree that all your GCMs produce an energy balance at the TOA. All of them accurately simulate the observed air temperature within the calibration bounds.

Nevertheless, they all make errors in simulating total cloud fraction within the same calibration bounds. That means they all make errors in simulated long wave cloud forcing, within those calibration bounds.

The simulated tropospheric thermal energy flux is wrong within those calibration bounds. Tropospheric thermal energy flux is the determinant of air temperature.

So the simulated calibration air temperature is correct while the simulated calibration tropospheric thermal energy flux is wrong. How is this possible?

Jeffrey Kiehl told us why in 2007.

The reason is that the models are all tuned to reproduce air temperature in their calibration bounds. The correctness of the calibration air temperature is an artifact of the tuning.

A large variety of tuned parameter sets will produce a good conformance with the observed air temperature (Kiehl, 2007). Therefore, model tuning hides the large uncertainty in simulated air temperature.

The simulated air temperature has a large uncertainty, even though it has a small data-minus-simulation error. That small error is a spurious artifact of the tuning. We remain ignorant about the physical state of the climate.

Uncertainty is an ignorance-width. The uncertainty in simulated air temperature is there, even though it is hidden, because the models do not reproduce the correct physics of the climate. They do not solve the problem of the climate energy-state.

Although the TOA energy is balanced, the energy within the climate-state is not partitioned correctly among the internal climate sub-states. Hence the cloud fraction error.

Even though the simulated air temperature is in statistical conformance with the observed air temperature, the simulated air temperature tells us nothing about the energy-state of the physically real climate.

The simulated calibration air temperature is an artifact of the offsetting errors produced by tuning.

Offsetting errors do not improve the physical description of the climate. Offsetting errors just hide the uncertainty in the model expectation values.

With incorrect physics inside the model, there is no way to justify an assumption that the model will project the climate correctly into the future.

With incorrect physics inside the model, the model injects errors into the simulation with every calculational step. Every single simulation step starts out with an initial values error.

That includes a projection starting from an equilibrated base-climate. The errors in the projection accumulate step-by-step during a projection.

However, we do not know the magnitude of the errors, because the prediction is of a future state where no information is available.

Hence, we instead calculate an uncertainty from a propagated calibration error statistic.

We know the average LWCF calibration error characteristic of CMIP5 models. That calibration error reflects the uncertainty in the simulated tropospheric thermal energy flux — the energy flux that determines air temperature.

It is the energy range within which we do not know the behavior of the clouds. The clouds of the physically real climate may adjust themselves within that energy range, but the models will not be able to reproduce that adjustment.

That’s because the simulated cloud error of the models is larger than the size of the change in the physically real cloud cover.

The size of the error means that the small energy flux that CO2 emissions contribute is lost within the thermal flux error of the models. That is, the models cannot resolve so small an effect as the thermal flux produced by CO2 emissions.

Propagating that model thermal-flux calibration error statistic through the projection then yields an uncertainty estimate for the projected air temperature. The uncertainty bounds are an estimate of the reliability of the projection; of our statement about the future climate state.

And that’s what I’ve done.
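
For readers who want the arithmetic of that last step spelled out, here is a minimal sketch of propagating a constant per-step calibration uncertainty in quadrature. The per-step uncertainty and the flux-to-temperature conversion factor below are illustrative placeholders, not the values derived in the paper, which uses its own emulation equation.

```python
import math

# Minimal sketch: propagate a constant per-step calibration uncertainty in
# quadrature through N projection steps. The numbers are placeholders for
# illustration only, not the values used in the paper.
u_flux = 4.0          # +/- W/m2 per step, LWCF calibration uncertainty (illustrative)
flux_to_temp = 0.4    # K per W/m2, placeholder conversion to a per-step temperature uncertainty
steps = 100           # e.g. 100 annual projection steps

u_step = u_flux * flux_to_temp
u_total = math.sqrt(sum(u_step ** 2 for _ in range(steps)))   # root-sum-square
# For identical steps this reduces to u_step * sqrt(steps): the uncertainty
# grows as the square root of the number of steps and never cancels.
print(f"+/- {u_total:.1f} K after {steps} steps")
```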

Beeze
Reply to  Pat Frank
September 12, 2019 6:59 pm

Perhaps I can simplify:

If you know the answer of a sum is 20 and you need one value to be 5 or higher, then it is simple matter of adjusting the other parameters to your heart’s content.

20=5+10+2+2+1
20=5*5-5
20=((5/10+100)*pi*r^2+(the number of albums Justin Bieber sold last year))xAlpha [where alpha is whatever it needs to be to make the equation balance]

None of this says anything about the accuracy of 20 as an answer, and would still not even if the answer was more precise like 20.01946913905.

You can add in any number of real parameters, it wouldn’t matter if you have enough fudge factors to compensate.

Reply to  Pat Frank
September 12, 2019 10:40 pm

Pat and Roy,

There is a lot of money at stake here. GCMs: yes, clouds are a huge weakness; the behavior of clouds has not been predicted, and there is no assurance that the behavior of clouds can be predicted as CO2 levels rise.

Goodness.

Fundamentally, there is no proof that rising CO2 ppm can heat the atmosphere! Saturated low down, it restricts the atmosphere from radiating freely to space up high, but no one can calculate this effect. It could be tiny, or even non-existent.

Speak the truth, both of you……

bobbyv
Reply to  Pat Frank
September 13, 2019 5:29 am

Dr Frank – perfect.

John_QPublic
Reply to  Pat Frank
September 13, 2019 9:49 am

That is perfectly easy to understand. I do not understand why distinguished and clearly intelligent scientists cannot understand that and, if they still have issues, address them from that understanding.

Editor
September 12, 2019 6:09 pm

Roy – You again refer to “~20 different models”, but then acknowledge that “they all basically behave the same in their temperature projections for the same …..”.

As I commented on your first article, I’m not sure that your argument re the “20 different models” is correct. All the models are tuned to the same very recent observation history, so their results are very unlikely to differ by much over quite a significant future period. In other words, the models are not as independent of each other as some would like to claim. In particular, they all seem to have very similar climate sensitivity – and that’s a remarkable absence of independence.

I would add that I find your statement “All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!” rather disturbing: the models misuse clouds for a large part of the CO2 effect, so I can’t accept that the models do show the effect of anthropogenic CO2 emissions, and I can’t accept that clouds can simply be ignored as you seem to suggest.

September 12, 2019 6:10 pm

“All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!
Why?
If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior:”

So long as the fudge turns out somewhat edible at the end, it’s all good?
My High School Physics teacher would have flunked me for that egregious fudging assumption.

Nor is “in global energy balance” a valid criterion.

“The similar behavior of the wide variety of different models with differing errors is proof of that. They all respond to increasing greenhouse gases, contrary to the claims of the paper.”

This statement astonishes me. A program that responds to increasing greenhouse gases is purposely written to respond.
Why there is not a standard defined for exactly how model programs respond to greenhouse gases puzzles me.
If all of the programs return different numbers, they are not in agreement, even if they stay within some weird boundary! Nor does adding up the program runs and then publishing the result cancel anything. That is a bland acceptance of bad programming while hoping to foist the results and costs on the unsuspecting public.

That the model programs all fail to adhere to reality over the long term is the sign that those programs are failures, especially as model results run into future weeks, months, years.

Apparently, propagation of error is uncontrolled! Those who assume the errors will cancel are making a gross assumption in the face of horrible model runs.
Those nitpicking an article about the “propagation of errors” should do so constructively, not harp about cancelling, balance, gross acceptance or whatever.

Pat Frank addresses one part of climate science’s refusal to address systemic error throughout global temperature monitoring, storage, handling, calculations and presentations.
Propagation error is a problem for climate science, but apparently ignored by many climate scientists.

Defending the propagation of error in model runs on the assumption that errors are cancelled out by other model biases is absurd. Nor is it valid to assume that TOA longwave radiative flux variation validates a GCM program.

n.n
September 12, 2019 6:52 pm

The models are injected with brown (“black”) matter to conform with real processes, which are chaotic (e.g. evolutionary), not monotonic (e.g. progressive). The system has been incompletely, and, in fact, insufficiently characterized, and is estimated in an unwieldy space. This is why the models have demonstrated no skill to hindcast, forecast, let alone predict climate change.

John Dowser
September 12, 2019 10:26 pm

The discussion appears to me revolving around multiple potential misunderstandings.

1. As often mentioned already: accuracy versus precision and error versus uncertainty

2. Simple statistical analysis on measurement & linear processing versus emulations running Navier-Stokes equations approaching various states of equilibrium and complex feedback.

While the uncertainty and general unreliability of climate models can be argued for, and seems well understood within the sciences, even without all the mathematics, Dr. Spencer appears to make the correct remark that known uncertainty levels do not propagate inside these types of emulation but over the “long run” cancel each other out within the equilibrium states. What’s left are more modest uncertainty bounds with the, I’d argue, well understood general short-coming of any model addressing reality. But the presence of unqualified, non-linear components in the real climate does not necessarily mean the model has no value when establishing a general trend for the future (through drawing scenarios, not merely predicting). The model can be overthrown each and every second by reality. This is no different than cosmology and astrophysics but that understanding will not make astrophysicists abandon their models on formation of stars or expansion of the universe. Of course nobody is asking yet for trillions of dollars based on arguments deriving from astrophysical models.

And that last bit is in my view the bigger problem: uncertainty versus money.

Ragnaar
Reply to  John Dowser
September 13, 2019 7:23 am

“Dr. Spencer appears to make the correct remark that known uncertainty levels do not propagate inside these types of emulation but over the “long run” cancel each other out within the equilibrium states.”

Most agree that most of the time the climate is an equilibrium engine. It searches for that. An equilibrium is its anchor or the thing it revolves around like a planet around its sun.

We can calculate an orbit of a planet with errors similar to the errors in a climate model. Now predict 100 years in the future. Measure Earth’s distance from the Sun’s average position. Now be Galileo and do the same thing with his technology. His errors can be argued to be huge. Yet his model was probably pretty good for figuring the future Earth/Sun change in distance.

If a tall building’s upper floors displace in high winds, we don’t add the errors. We can’t calculate how much they displace at any time to 6 decimal places. But these errors do not add. But if we calculate a difference at the 6th decimal place and keep iterating that error, we are going to get a displacement that indicates a building failure eventually.

Paul Penrose
Reply to  John Dowser
September 13, 2019 11:50 am

John,
No. What Pat is talking about is a specification error; that is to say a limit on accuracy. As such it can’t be cancelled or reduced in any way because it literally is a loss of information, like a black hole of knowledge. There’s no way to use mathematics to change “I don’t know” into “I know”.

KDenison
September 13, 2019 12:27 am

As a CME who studied the hard sciences and engineering to get a PhD, it is amazing to see that Dr. Spencer and others do not appear to understand the difference between error and uncertainty. Simple searches find many good explanations including this one (https://www.bellevuecollege.edu/physics/resources/measure-sigfigsintro/b-acc-prec-unc/) or this one (https://www.nhn.ou.edu/~johnson/Education/Juniorlab/Error-SigFig/SigFiginError-043.pdf).

In this case, we cannot know the error in the model projections because we do not know the true value for the temperature in the future. Anyone who is discussing errors is missing the point.

We must, however, estimate the uncertainty on our projection calculations so that we then know what we can say with certainty about the model projections, e.g. so we can say “the temperature 100 years from now lies between A and B degrees”, or more typically that “the temperature 100 years from now will be X +/- y degrees.”

The estimate of the uncertainty can be made without ever running a single simulation, as long as we have an idea of the errors in the “instruments” we are using for our experiments. This is what Pat Frank has done: estimated the uncertainty based on the estimated error in the parameterization of clouds that is used in all GCMs.

The result is that the best we can say is that we are certain that the future temperature (in 100 years) will be X +/- 18C where X is the output of your favorite GCM.

Reply to  KDenison
September 13, 2019 3:04 am

“Anyone who is discussing errors is missing the point.”
The paper is titled “Propagation of Error and the Reliability of Global Air Temperature Projections.” If you are going to insist that error can only mean a difference between a measured value and truth, then how can it be propagated?

KDenison
Reply to  Nick Stokes
September 13, 2019 6:45 am

Well, to be more explicit, the error in the “instruments” is what is propagated, resulting in the uncertainty. In this particular case, the “instrument” that has the error is the parameterization of the effects of clouds.

Matthew R Marler
Reply to  Nick Stokes
September 13, 2019 9:44 am

Nick Stokes: The paper is titled “Propagation of Error and the Reliability of Global Air Temperature Projections.” If you are going to insist that error can only mean a difference between a measured value and truth, then how can it be propagated?

As happens frequently, the phrase “propagation of error” has at least 2 distinct but related meanings.

a. It can mean the propagation of a known or hypothesized specific error;

b. It can mean the propagation of the probability distribution of the potential errors.

Pat Frank has been using it in the sense of (b).

Reply to  Matthew R Marler
September 13, 2019 5:32 pm

“Pat Frank has been using it in the sense of (b).”
So then what is the difference between “error”, meaning “the probability distribution of the potential errors”, and “uncertainty”?

Matthew R Marler
Reply to  Nick Stokes
September 13, 2019 7:01 pm

Nick Stokes: “the probability distribution of the potential errors”, and “uncertainty”?

The probability distribution of the potential outcomes is one of the mathematical models of uncertainty.

Reply to  Nick Stokes
September 13, 2019 7:24 pm

Well, Pat thumps the table with stuff like:
“you have no concept of the difference between error and uncertainty”
” the difference between error and uncertainty is in fact central to understanding the argument”
You’re making the difference pretty fuzzy.

Matthew R Marler
Reply to  Nick Stokes
September 13, 2019 7:48 pm

Nick Stokes: You’re making the difference pretty fuzzy.

Only when you ignore the “distribution” of the error, and treat the error as fixed. Consider for example the “standard error of the mean”, which is the standard deviation of the distribution of the potential error, not a fixed error. My reading of your comments and Roy Spencer’s comments is that you do ignore the distribution of the error.
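
For reference, the quantity being invoked here is the standard error of the mean: for $n$ independent measurements with standard deviation $\sigma$,

$$ \mathrm{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}}, $$

which is the standard deviation of the distribution of the potential error of $\bar{x}$, not the value of any particular realized error.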

Schrodinger's Cat
September 13, 2019 1:58 am

I have a model that calculates the temperature each year for a hundred years for a range of rcp trajectories. The results are excellent, closely matching my expectations but disappointing compared with observation.

I tried introducing my best estimates and consequences of different cloud cover conditions but the model output was all over the place.

I then introduced fudges that effectively suppressed the effect of clouds and the models returned to the former excellent performance. Pity about the observations.

This thought experiment illustrates that the uncertainty in simulating the climate exists whether or not I include cloud cover in my model, or whether I fudge its effect. The model will process only what it is programmed to do and is independent of the uncertainty. Ignoring elements of uncertainty (e.g. cloud cover) may make the model output look impressive but in fact introduces serious limitations in the simulation. These affect current comparisons with observation but have an unknown influence on future predictions.

In order to judge the predictive usefulness of my model I need to estimate the impact of all uncertainties.

Chris Thompson
September 13, 2019 2:51 am

Dr. Frank’s paper provides extremely wide uncertainty bounds for the various models. He says that the bounds he proposes are not possible real temperatures that might actually happen, just the uncertainty bounds of the model.

The normal way to validate uncertainty bounds is to assess their performance. Being statistical in nature (assuming 95% bounds), roughly 1 run in 20 should exceed those bounds, and a plot of multiple runs should show them scattering all around the range of the bounds.

The climate models have been run long enough to assess how widely they diverge. None of the models come close to that kind of variability. They all sit well within Dr. Frank's wide bounds. This indicates that the uncertainty bounds proposed are very unlikely to be as large as Dr. Frank calculates them to be.

Dr. Frank’s bounds connect to the uncertainty of the model predictions of the earth’s temperature. For his uncertainty bounds to be feasible, all values within the range must be physically achievable. An uncertainty bound for a physical measurement that is impossible to achieve is meaningless. If someone tried to tell me that the uncertainty range of the predicted midday temperature tomorrow was +/- 100 degrees C, it would be ludicrous, since temperatures within that range are impossible for this time of year. We can be certain that that uncertainty bound is incorrect. Even if the calculated uncertainty of the measurement technique used for the prediction was indeed that inaccurate, the derived uncertainty bears no association with the true uncertainty. It is a meaningless and wrong estimate of the true uncertainty.

We know the earth simply cannot warm or cool as much as Dr. Frank's uncertainty suggests. Therefore his estimate of the uncertainty of the models cannot be correct, because his uncertainty itself cannot be correct.

Both these simple observations indicate that the assumptions on which these bounds were calculated must be false and that the true uncertainty is far less. In other words, Dr. Frank's uncertainty bounds are themselves most uncertain.

Reply to  Chris Thompson
September 13, 2019 4:20 am

I fully agree; that is what I have been trying to say, more ineptly. The result doesn't pass the common-sense test. Neither the planet nor the models, as they have been published, can do that.

Jordan
Reply to  Chris Thompson
September 13, 2019 6:26 am

Chris and Javier, you are being far too literal in your reading of the uncertainty range.

Somebody mentioned the models will produce similar results because they operate within a “constraint corridor” (boundary conditions and assumptions like TOA energy balance). That’s a very appealing way to describe a significant aspect of their operation.

Does this “corridor” reduce uncertainty? Certainly not!

Uncertainty is LOST INFORMATION. Once lost, it’s gone forever as far as a model run is concerned. From any position, further modelling can only increase the uncertainty. And that’s essentially what Pat is telling you.

So what about a model which has a limited range of feasible outcomes? If Pat's theoretical uncertainty range exceeds the feasible range of outcomes, this only means the uncertainty cannot tell you anything about the future position within the range.

The fact that Pat's uncertainty bounds exceed this range is just surplus information about the uncertainty. Pat's method is not modelling the climate, so why would it need to be aware of a detail like the feasible range of MODEL outputs? As somebody else keeps telling us, uncertainty is a property of the model, NOT an output.

Like I said, you are being far too literal and inflexible in your interpretation of Pat's results. Your objections are ill-founded.

Reply to  Jordan
September 13, 2019 10:53 am

Following an example from above, if you cut a piece to a ±0.5 mm error and then assemble 100 units of the piece, your propagated error would be ±50 mm. Although quite unlikely, your assembly could be 50 mm off, and that is your uncertainty. There is a real, albeit small, possibility of that, but the possibility is not small that you could be 25 mm off.

If you make multiple runs with a model that has a ±15°C uncertainty, you should see plenty of ±7°C results. As that doesn't happen, models are either constrained as you say or programmed so that errors cancel. In both cases that reduces the uncertainty over the final result.

In any case, if Pat's mathematical treatment produces a result that does not agree with how models behave, it is either wrong or it has been made irrelevant by the way models work. It is as if, in the example, all pieces with an error above ±0.1 mm are discarded. Not very practical, but you won't get an assembly with >10 mm error even though the error in making the pieces is still large.

Paul Penrose
Reply to  Javier
September 13, 2019 12:00 pm

Javier,
No, you are talking about precision errors. Instead think about what would happen if you cut each piece to the same length within +/-0.1 mm, but your ruler was 0.5 mm too long (calibration error). Now how far would you be off after adding the 100 pieces? The precision errors would mostly cancel, but the resulting assembly would be 50 mm too long. Now, before you even started cutting, let's say you knew that the ruler could be +/-0.5 mm out of spec, but you don't know by how much. How could you predict what the length of the final assembly will be? What confidence would you have that it would be within +/- 2 mm?
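
A quick numerical sketch of the distinction described above, using made-up numbers matching the example (random precision error of ±0.1 mm per cut versus a systematic 0.5 mm ruler bias):

```python
import random

# Sketch of the distinction above (illustrative numbers only): each piece is
# cut with a random precision error of +/-0.1 mm, while the ruler may carry a
# systematic calibration bias. The random errors largely cancel over 100
# pieces; the calibration bias accumulates linearly.
random.seed(1)

def assembly_error(n_pieces=100, ruler_bias_mm=0.0):
    """Total deviation (mm) of the assembled length from nominal."""
    return sum(ruler_bias_mm + random.uniform(-0.1, 0.1) for _ in range(n_pieces))

print(round(assembly_error(ruler_bias_mm=0.0), 2))   # precision errors only: a fraction of a mm
print(round(assembly_error(ruler_bias_mm=0.5), 2))   # 0.5 mm calibration bias: about 50 mm off
```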

Jordan
Reply to  Javier
September 13, 2019 1:56 pm

Javier

This is the misinterpretation that Pat is being forced to play “Whack-a-Mole” with.

Uncertainty never cancels in the way you assume. Once information is lost (for a model run), it is lost for the remainder of the run. It can NEVER be recovered by constraints and other modelling assumptions. All these things do is add their own uncertainties for subsequent steps.

Where uncertainties are independent of each other (and that’s the general assumption until somebody can demonstrate otherwise), uncertainties propagate in quadrature (Pythagoras). They never reduce numerically, and they never reduce in practice.

Pat shows you how to do it. His expertise on the topic is way above anybody else's on this thread. We have a great opportunity to LEARN.

Reply to  Chris Thompson
September 13, 2019 9:24 am

In reply to Mr Thompson, it is precisely because all of the models’ predictions fall within Professor Frank’s uncertainty envelope that all of their predictions are valueless.

It does not matter that they all agree that the expected global warming will be between 2.1 and 5.4 K per CO2 doubling, because that entire interval falls within the envelope of uncertainty that Professor Frank has calculated, which is +/- 20 K.

Note that that uncertainty envelope is not a prediction. It is simply a statistical yardstick, external to the models but shaped by their inputs and calculated by the standard and well-demonstrated statistical technique of deriving propagation of uncertainty by summation in quadrature.

Or think of the yardstick as a ballpark. There is a ball somewhere in the ballpark, but we are outside the ballpark and we can’t see in, so, unless the ball falls outside the ballpark, we can’t find it.

What is necessary, then, is to build a much smaller ballpark – the smaller the better. Then there is more chance that the ball will land outside the ballpark and we’ll be able to find it.

In climate, that means understanding clouds a whole lot better than we do. And that’s before we consider the cumulative propagation of uncertainties in the other uncertain variables that constitute the climate object.

Subject to a couple of technical questions, to which I have sought answers, I reckon Professor Frank is correct.

Matthew Schilling
Reply to  Monckton of Brenchley
September 13, 2019 11:58 am

+1

Dave Day
Reply to  Monckton of Brenchley
September 13, 2019 12:07 pm

Bravo!!!

Carlo, Monte
Reply to  Monckton of Brenchley
September 13, 2019 2:09 pm

Dr. Frank's linearization of the model output is quite ingenious, which makes for an analytic uncertainty calculation from just a single parameter, the LWCF. In the Guide to the Expression of Uncertainty in Measurement (the GUM, referenced in Dr. Frank's paper), another way to obtain uncertainty values is with Monte Carlo methods (calculations). Treating a given GCM as a black box with numeric inputs and a single output (temperature), it may be possible to calculate the temperature uncertainty with the following exercise:

1) Identify all the adjustable parameters that are inputs to the model
2) Obtain or estimate uncertainty values for each parameter
3) Obtain or estimate probability distributions for each parameter
4) Randomly select values of each parameter, using the uncertainty statistics for each
5) Run the model, record the temperature output
6) Repeat 4-5 many times, such as 10,000 or more

The temperature uncertainty is then extracted from a histogram of the temperatures, which should dampen the “your number is too large” objections.

However, the usefulness of Monte Carlo methods is limited by computation time: the more input parameters there are, the more repetitions are needed. Does anyone know how many adjustable parameters these models have, and the computation time a single run requires?
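
For what it's worth, here is a sketch of the procedure in steps 1-6 above, with the GCM replaced by a trivial stand-in black box (an actual GCM run being far too expensive for this). The parameter names, nominal values, and distributions below are hypothetical placeholders.

```python
import random, statistics

# Sketch of the Monte Carlo exercise outlined in steps 1-6 above, with the
# GCM replaced by a trivial stand-in black box. Parameter names, nominal
# values and uncertainty distributions are hypothetical placeholders.
random.seed(0)

def black_box_model(cloud_param, aerosol_param):
    """Stand-in for one model run returning a single temperature output."""
    return 14.0 + 3.0 * cloud_param - 1.5 * aerosol_param

outputs = []
for _ in range(10_000):                      # step 6: many repetitions
    cloud = random.gauss(1.0, 0.1)           # steps 2-4: draw each parameter from
    aerosol = random.gauss(0.5, 0.2)         #   its assumed uncertainty distribution
    outputs.append(black_box_model(cloud, aerosol))   # step 5: run and record the output

print(f"{statistics.mean(outputs):.2f} +/- {statistics.stdev(outputs):.2f} "
      "(1-sigma spread of the output histogram)")
```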

Matthew R Marler
Reply to  Chris Thompson
September 13, 2019 5:25 pm

Chris Thompson: The climate models have been run long enough to assess how widely they diverge. None of the models come close to that kind of variability. They all sit well within Dr. Franks wide bounds. This indicates that the uncertainty bounds proposed are very unlikely to be as large as Dr. Frank calculates them to be.

The model runs have not systematically or randomly varied this parameter throughout its confidence interval, so information on the uncertainty in output associated with uncertainty in its value has not been computed.

Geoff Sherrington
September 13, 2019 3:13 am

Roy,

The first time I saw uncertainty estimates for the UAH lower troposphere temperatures, eyebrows went high, because this seemed to be remarkably good performance for any instrumental system, let alone one operating way up at satellite height and difficult to monitor and adjust for suspected in-situ errors. For years I had tried hard at the lab bench for such performance and failed.

It would be great if, as a result of comprehending the significance of Pat’s paper, you were able to issue a contemplative piece on whether you found a need to adjust your uncertainty estimates, or at least express them with different caveats.

In climate research, there are several major examples of wholesale junking of past results from older instruments when newer ones were introduced. Some examples are Argo buoys for SST, pH of ocean waters, aspects of satellite measurements of TOA flux, early rocketry T results versus modern, plus some that are candidates for junking, like either Liquid-in-glass thermometers or thermocouple/electronic type devices (one or the other, they are incompatible). There are past examples of error analysis favoring rejection of large slabs of data thought reliable, but overcome by better measurement devices. Science progresses this way if it is done well.

These comments are in no way unkind to your excellent work in simulation of air temperatures via microwave emissions from gases, one of the really good breakthroughs in climate research of the last 50 years. Geoff S

Paramenter
September 13, 2019 3:28 am

Hey Greg,

I do hydraulic studies (flood modeling). The object of the model isn’t to be precise, there is no way you can be precise. Yes the output is to 4 decimal places, but the storm you’re modeling isn’t a real storm

I appreciate that; what I'm trying to say is that some claim models closely follow actual air temperatures in recent decades. If that is the case, why is that? By mere luck? If uncertainty is huge, I would expect significant deviations from actual air temperature. If models consistently give results in tight ranges, and those results are close to actual temperature changes, then what's the point of complaining about massive uncertainty?

mothcatcher
September 13, 2019 3:55 am

Huge thanks to Pat Frank for this tenacious work, and also to Roy Spencer for providing a much needed critique. The fact that it comes from Dr. Spencer, who is much admired on the sceptic side, makes it all the more valuable. So, what is the result…does Dr Spencer have a handle on this?

After quite a lot of vacillation, I come down pretty clearly on the side of Dr. Frank. I really do think Roy Spencer has been defeated in this argument. Although always doubtful of the models, I am usually a sceptic of any challenge to the basics, always feeling that such challenges require very substantial evidence. I’m also somewhat limited mathematically, and was at first very sympathetic to the specific challenge by Nick Stokes and others, relating to the time units Pat introduced into the equations, and the sign on the errors. Took me a long time to get over that one, and I expect the argument will go on. Eventually I saw it as a diversion rather than a real obstacle to acceptance of the fundamental finding of Pat Frank’s work.

Stepping back for a moment, it is clear to see that it is in the very nature of the model programs that the errors must propagate with time, and can be restrained only by adjustment of the parameters used, and by a training program based on historical data. I would suggest that all of us – everybody, including Roy Spencer, including the modellers themselves – really know this is true. It cannot be otherwise. And it shouldn't take several years of hard slog by Pat Frank to demonstrate it.

Let's take an analogy that non-mathematicians and non-statisticians can relate to. That is, the weather models that are used routinely for short range weather forecasts. Okay, I understand that there are important differences between those and GCMs, but please bear with me. That forecasting is now good. Compared with 30 years ago, it is very good indeed. The combination of large computing power and a view from satellites has changed the game. I can now rely on the basics of the general forecast for my area enough to plan weather-sensitive projects pretty well. At least, about a day or a day and a half ahead. Thereafter, not so good. Already after a few hours the forecast is degrading. It is particularly poor for estimating local details of cloud cover, which is personally important for me, just hours ahead. After three or four days, it is of very little use (unless we are sitting under a large stationary weather system – when I can do my own pretty good predictions anyway!). After a week or so, it is not much better than guesswork. In truth, those short-range models are spiralling out of control, rapidly, and after a comparatively short time the weather map they produce will look not remotely like the actual, real weather map that develops. The reason is clear – propagation of error.

Weather forecasting organisations update their forecasts hourly and daily. Keep an eye on the forecasts, and watch them change! The new forecasts for a given day are more accurate than those they succeed. They can do that because they have a new set of initial conditions to work from, which cancels the errors that have propagated within that short space of time. And so on. But climate models can't control that error propagation, because they don't, by definition, have constantly new initial conditions to put their forecast – “projection” – back on track. Apologists for the models may counter that GCMs are fundamentally different, in that they are not projecting weather, but are projecting temperatures, decades ahead, and that these are directly linked to the basic radiative physics of greenhouse gases which are well reflected by modelling. Well, perhaps yes, but that smacks of a circular argument, doesn't it? As Pat Frank demonstrates, that is really all there is in the models… a linear dependence upon CO2. The rest is juggling. We've been here before.

Roy Spencer, I’d like you to consider the possibility you might be basing your critique on a very basic misconception of Dr Frank’s work.

Mark Broderick
Reply to  mothcatcher
September 13, 2019 4:34 am

….Well said…

David Wells
Reply to  mothcatcher
September 13, 2019 4:57 am

There is no purpose to this argument. Models use various means to achieve a balance which in nature does not exist. Ice ages? Then modellers feed in CO2 as a precursor for warming. Roy Spencer is correct. Climate change is accidental, not ruled by mathematical equations, which cannot under any circumstances represent the unpredictable nature of our climate. This argument is about how interested parties arrive at exactly the same conclusion. Models cannot predict our future climate, hence modellers' predilection for CO2. If you want to predict temperature based upon CO2, all you need is a sheet of graph paper, a pocket calculator, ruler and pencil. Models are dross.
What alarmism never contemplates is the absurdity of its own rhetoric. Hypothetically, if CO2 causes warming, then mitigation of CO2 would cause cooling. Historically there is no evidence that CO2 has caused warming or cooling. Models exist to give the misleading impression that we do understand the way in which our climate functions, when the only active ingredient upon which predictions can be postulated is CO2. The models of themselves are noise.
“In climate research and modelling, we should recognise that we are dealing with a coupled nonlinear chaotic system and therefore that long term prediction of our future climate states is not possible”. The Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report (2001), Section 14.2.2.2, Page 774.
https://wattsupwiththat.com/2016/12/29/scott-adams-dilbert-author-the-climate-science-challenge/

Jordan
Reply to  David Wells
September 13, 2019 9:00 am

David Wells, Pat's paper is a formal analysis to back up your assertions.

“If you want to predict temperature based upon Co2 all you need is a sheet of graph paper, a pocket calculator, ruler and pencil.”

Pat shows this with his emulation of GCMs. GAST projections are nothing more than iterative linear extrapolation of assumed CO2 forcing inputs. Forget all the detail and mystery that their creators like to hide behind, and just call them by what they do: iterative extrapolators. Forget the $bn sunk to get to this conclusion. Pat shows time and again that all we have is iterative linear extrapolators of assumed CO2 forcing.

Pat can then present familiar concepts of uncertainty propagation in iterative linear extrapolators to show that the outputs of GCMs are not reliable. There is a maximum degree of uncertainty they can tolerate to be able to discern the effect of CO2 forcing, and they fail to achieve this standard.

It’s a beautiful logical chain of reasoning, well supported by evidence and analysis.

Reply to  Jordan
September 13, 2019 12:02 pm

Excellent comment. Regardless of how complicated the GCMs are, their output in relation to CO2 is linear. Dr. Frank has shown this remarkable observation is true. The corollary then follows that uncertainty is calculated through well-known formulas.

Jordan
Reply to  Jim Gorman
September 13, 2019 1:46 pm

Agreed Jim.

Pat’s work is important, and it needs to be supported against the naysayers who cannot stand the blunt truth they are faced with.

Reply to  David Wells
September 13, 2019 9:13 am

Mr Wells has misunderstood Professor Frank’s method. Consider three domains. First, the real world, in which we live and move and have our being, and which we observe and measure. Secondly, the general-circulation models of the climate, which attempt to represent the behavior of the climate system. Thirdly, the various theoretical methods by which it is possible to examine the plausibility of the models’ outputs.

Consider our team's approach, which demonstrates that, if temperature feedback is correctly defined (as it is not in climatology), climate sensitivity is likely to be about a third of current midrange projections. To reach that result, we do not need to know in detail how the models work: we can treat them as a black box. We do need to know how the real world works, so that we can make sure the theory is correct. All we need to know is the key inputs to and outputs from the models. Everything in between is not necessary to our analysis.

Professor Frank is taking our approach. Just as we are treating the models as a black box and studying their inputs and outputs in the light of established control theory, so he is treating the models as a black box and studying their inputs and outputs in the light of the established statistical method of propagating uncertainty.

If Professor Frank is correct in saying that the models are finding that the uncertainty in the longwave cloud forcing, expressed as an annually-moving 20-year mean, is 4 Watts per square meter – and his reference is to the Lauer/Hamilton paper, where that figure is given – then applying the usual rules for summation in quadrature one can readily establish that the envelope of uncertainty in any model – however constructed – that incorporates such an annual uncertainty will be plus or minus 20 K, or thereby.

However, that uncertainty envelope is not, repeat not, a prediction. All it says is that if you have an annual uncertainty of 4 Watts per square meter anywhere in the model, then any projection of future global warming derived from that model will be of no value unless that projection falls outside the uncertainty envelope.

The point – which is actually a simple one – is that all the models’ projections fall within the uncertainty envelope; and, because they fall within the envelope, they cannot tell us anything about how much global warming there will be.

Propagation of uncertainty by summation in quadrature is simply a statistical yardstick. It does not matter how simple or complex the general-circulation models are. Since that yardstick establishes, definitively, that any projection falling within the envelope is void, and since all the models’ projections fall within that envelope, they cannot – repeat cannot – tell us anything whatsoever about how much global warming we may expect.
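
The yardstick referred to is the ordinary root-sum-square rule for independent uncertainties: for $N$ steps with per-step uncertainties $u_i$,

$$ u_{\mathrm{total}} = \sqrt{\sum_{i=1}^{N} u_i^{2}} = u\sqrt{N} \quad \text{when every } u_i = u, $$

so a fixed per-step uncertainty grows with the square root of the number of steps rather than cancelling.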

Ragnaar
September 13, 2019 5:47 am

I am still trying to reach a conclusion on this. Where is Steven Mosher when you need him?

Jordan
Reply to  Ragnaar
September 13, 2019 7:07 am

Uncertainty represents lost information. Once it is gone, there is no way to recover the lost information. This is the essence of Pat's analysis.

Roy Spencer seems to agree in principle, but doesn’t seem to accept Pat’s approach.

I have a number of points I’d like to add.

Uncertainty can only increase with each model step. A model has no prospect of “patching in” new assumptions to compensate for loss of information in earlier steps.

Pat's uncertainty bounds go beyond what some people consider to be a feasible range. Fine, then crop Pat's uncertainty to whatever range you are comfortable with. All you will conclude is the same thing: you have no way of knowing where the future will lie within your range. That's fundamentally the same conclusion as Pat's, but you have made it more palatable to yourself. It doesn't mean Pat is wrong in any way.

Models produce similar outputs because they are operating within “constraint corridors” (as somebody called it) which exclude them from producing a wider range of outputs. It is not evidence of reducing uncertainty. Lost information is gone, and lots of models running with similar levels of lost information cannot create any new information.

Constraints do not reduce uncertainty. They only introduce assumptions with their own inherent uncertainties, and therefore total uncertainty increases when a constraint is relied upon as a model step. For this, I would like to refer to the assumed TOA energy balance using the following very simple equation:

N(+/-n) = A(+/-a) + B(+/-b) + X(+/-x)

Uppercase are model OUTPUTS and lower case are uncertainties which are model PROPERTIES.

N has a value of zero because it is the model assumed TOA flux balance.

A and B balance, representing Roy’s biases and assumed (but unidentified) Counter biases when the assumed TOA constraint is applied.

X is zero (not recognised by the model) and represents concepts like Pat’s modelling errors.

The fact that the uppercase items can add to zero does not mean the lowercase uncertainties cancel each other. In fact the opposite is true. Roy's assumption of counter-biases represents more lost information (if we knew about them, we should be modelling them), so the value of ‘b’ has the effect of increasing ‘n’ as the uncertainties are compounded in quadrature.

Dave Day
Reply to  Jordan
September 13, 2019 10:35 am

To me, this is a very valuable comment.

Thank you, Jordan.

Dave Day

Reply to  Jordan
September 13, 2019 5:48 pm

“Uncertainty represents lost information. Once it us gone, there is no way to recover the lost information.”
GCMs famously do not keep a memory of their initial states. Nor do CFD programs. In this they correctly mimic reality. You can study the wind. What was its initial state? You can do it more scientifically in a wind tunnel. No-one tries to determine the initial state there either. It is irrelevant.

So yes, the lost information can’t be recovered, but it doesn’t matter. It didn’t contain anything you wanted to know. And much of this error is of that kind. The reason it doesn’t matter is that what you actually find out from GCM or CFD is how bulk properties interact. How does lift of a wing depend on oncoming flow? Or on angle of attack? None of these depend on the information you lost.

Jordan
Reply to  Nick Stokes
September 14, 2019 2:24 am

I totally disagree with that, Nick Stokes. But you have widely advertised your complete inability to understand these concepts on this thread. And your description of Eqn 1 as “cartoonish” was a breathtaking display of arrogance and lack of self-awareness. I really have no interest in what you have to say, so don't bother responding to my comments.

kribaez
Reply to  Jordan
September 14, 2019 6:09 am

What you write is correct as far as it goes, but now consider this system which is much closer to the model we are all supposed to be considering.

Temperature varies with the net flux (imbalance), N(t)
N(t) = A – B + F – lambda*deltaT + X(+/-x)
A = B with a correlation of 1

B = sum over i of (b_i(+/-)error_i)

Can you calculate the contribution of the uncertainty in b_i to X?

BallBounces
September 13, 2019 6:03 am

Franks: The uncertainty is huge so the results are meaningless; Spencer: models are adjusted to stay within the bounds of a physically-realizable outcome so this uncertainty is meaningless. Let me ask a question. If physics indicated that temperature swings could in fact be 25C or higher so that no artificial bounding of modeled results would be needed, would the models be producing different outcomes? If so, then I would say Franks is right — the models are physically meaningless.

BallBounces
Reply to  BallBounces
September 13, 2019 6:04 am

Frank not Franks.

Windchaser
Reply to  BallBounces
September 13, 2019 9:45 am

On the flip side: if the real world cannot plausibly vary in temperature this much, then it also implies that the uncertainty is also not this high. It doesn’t take a fancy computer model — simply the idea that cloud forcings could change by 20W/m2 within a few decades is itself pretty implausible.

And the only reason that Frank is saying that cloud forcing could vary this much is because he treated the cloud forcing uncertainty (4 W/m2) as a change in cloud forcing uncertainty (4 W/m2/year). So he can integrate that over time, and the uncertainty in cloud forcing in the real world grows over time, without end or bound, to infinity. Does that sound physically realistic?

Units are important, yo

John_QPublic
Reply to  Windchaser
September 13, 2019 10:05 am

Lauer et al indicate on average it changed +/- 4 W/m2 per year. Your argument then is with Lauer.

Windchaser
Reply to  John_QPublic
September 13, 2019 10:28 am

No, they indicated it changed 4W/m2, not per year. It is the same over any time period. At any given point in time, it can be within this +/- 4W/m2 range, and this does not change over time.

On a previous discussion on this point, someone went so far as to actually email Lauer himself. Here’s the reply (emph. mine)

“The RMSE we calculated for the multi-model mean longwave cloud forcing in our 2013 paper is the RMSE of the average *geographical* pattern. This has nothing to do with an error estimate for the global mean value on a particular time scale.”

So Lauer also says that there’s no particular timescale attached to the value. It’s just W/m2, not W/m2/year.

https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/#comment-1443

JohnQPublic
Reply to  Windchaser
September 13, 2019 10:49 am

Thanks. That is one interpretation I was considering. Still, it represents the uncertainty. In that case it may still propagate, but it may need to be treated differently. The division by sqrt(N) may be sufficient.

JohnQPublic
Reply to  Windchaser
September 13, 2019 10:54 am

This was the first question I had about Pat Frank’s study. I hope this gets clarified. Otherwise I still believe the approach is correct. Lauer’s paper strongly implied this was a 20-year, multi-model annual mean value.

Jordan
Reply to  Windchaser
September 13, 2019 1:40 pm

“At any given point in time, it can be within this ±4 W/m2 range, and this does not change over time.”

But the models ITERATE, Windchaser. That changes everything.

You are falling into the trap of assuming uncertainty cancellation over model steps. No, no, no!

Uncertainty propagates in quadrature (read Pat’s paper). Quadrature means the uncertainty never reduces or cancels.

It represents lost information. Once lost, information is lost for the remainder of the model run.

The model drifts along in its own merry way. Stable, but nevertheless blissfully unaware of how off-course it might be.

Uncertainty is the PROPERTY (not OUTPUT) that quantifies how far adrift the model could be. It is calculated separately from the model, as Pat shows.
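
The distinction being drawn here can be sketched in a few lines (an illustration of the general idea only, not Pat Frank’s emulator; the per-step numbers are invented): the model output is one smooth deterministic trajectory, while the uncertainty is separate bookkeeping that grows in quadrature alongside it.

```python
# Minimal sketch of the "property, not output" distinction above (illustration
# only; the per-step numbers are invented): a stable deterministic trajectory
# next to an uncertainty band computed separately, in quadrature.
import numpy as np

n_steps        = 100
step_increment = 0.02      # hypothetical per-step warming, K
per_step_sigma = 0.1       # hypothetical per-step uncertainty, K

trajectory = np.cumsum(np.full(n_steps, step_increment))      # smooth output
band = per_step_sigma * np.sqrt(np.arange(1, n_steps + 1))    # never shrinks

print("final model value:", trajectory[-1])   # 2.0 K, perfectly well-behaved
print("final +/- band   :", band[-1])         # 1.0 K, computed outside the model
```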

Reply to  Windchaser
September 13, 2019 5:52 pm

“But the models ITERATE windchaser”
Yes, they do. But Pat does not attach the accumulation to the iteration, which proceeds in 30-minute steps and would give absurd results. Instead he arbitrarily accumulates it over a year, which has nothing to do with the model structure. He then gets somewhat less absurd results.
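
The point about the accumulation interval can be put in a few lines (an illustrative sketch only; the 4 W/m2 figure is reused in both cases purely to show the scaling): if a fixed per-step error is summed in quadrature, the 100-year total is set entirely by how many steps you decide the run contains.

```python
# Minimal sketch of the step-size point above (illustration only): quadrature
# accumulation of a fixed per-step error over the same 100-year span, once per
# year vs. once per 30-minute model time step.
import math

per_step_error = 4.0                  # W/m^2, reused in both cases for scaling
years = 100

n_annual   = years                    # one accumulation per year
n_halfhour = years * 365 * 48         # one accumulation per 30-minute step

print("annual accumulation   :", per_step_error * math.sqrt(n_annual))    # ~40 W/m^2
print("30-minute accumulation:", per_step_error * math.sqrt(n_halfhour))  # ~5300 W/m^2
```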

Nelson
September 13, 2019 6:12 am

Here is what I think is going on – in very simple terms.

Does anyone think that in the development of climate models there weren’t cases where results wandered off into solutions that made no sense? Of course there were. What was/is the solution to the problem of runaway models? Introduce fudge factors to constrain the models to produce outcomes that at least make sense. The problem is that the fudge factors cover up the fact that much of the physics is imprecise.

Pat takes the errors introduced by inaccurate modeling of clouds and shows that, through time, they lead to huge potential variance in possible future outcomes. The whole point is that such errors accumulate through time. This makes perfect sense to me.

Roy comes back and argues that the results Pat shows are nonphysical. Climate models don’t produce those kinds of results. I think he is right in this regard, but the question is why.

As I said above, climate modelers have introduced error corrections to keep model output in the “feasible” choice set. The problem with this approach is that the underlying errors aren’t corrected through better physics. This leads to a situation where the internal dynamics are kept in line by ad hoc measures that are not governed by laws of physics, but rather by the need to get output that at least makes sense.

The way I think of what Pat is doing is that he is showing just how much tinkering needs to be done to keep the models in the real world, because if they were stripped of the ad hoc error correction they would produce nonphysical results.

The takeaway is that model results have little to no claim to getting the internal dynamics of the earth’s climate “right.” Climate models suffer from the same problem as weather models: as you move forward through time, the results go off the rails. With weather models, the results go off the rails in a matter of a few short weeks. I don’t think anyone would give much credence to predicted results from the GFS weather model 18 months out. Of course, because they update the forecast frequently, long-range predictions aren’t really the goal. I also believe that weather models don’t have the same amount of ad hoc adjustment that climate models have. Climate models need the ad hoc adjustments because without them the long-range forecast would tend toward extremes that make no sense.

My bottom line is that I think Pat and Roy are talking past each other. I don’t think either is wrong; they are just saying different things.

Antero Ollila
Reply to  Nelson
September 13, 2019 7:07 am

Quote: “Pat takes the errors introduced by inaccurate modeling of clouds and shows that, through time, they lead to huge potential variance in possible future outcomes. The whole point is that such errors accumulate through time. This makes perfect sense to me.”

If this were true, then the present climate models would not show logical warming results for GH gases. The reason is that only the water feedback is included. Those models do not have cloud forcing effects, because modelers do not have enough knowledge to formulate them mathematically. That is why a huge uncertainty has been shown for cloud forcing.

You can discuss this matter back and forth but you cannot find a solution.

Pat Frank
Reply to  Antero Ollila
September 13, 2019 9:30 am

Antero, “If this were true, then the present climate models would not show logical warming results for GH gases.”

Not correct. You’re equating a calibration error statistic with an energy flux, Antero. My critics make this mistake over and over again.

Calibration errors do not affect a simulation. Why this is so hard for some people to understand is beyond knowing.

mothcatcher
Reply to  Pat Frank
September 13, 2019 9:55 am

Pat –
yes, this is crazy! How many times have you got to say it?
There are some very smart people contributing to this thread, and yet they don’t seem to be getting the message. And doubtless some very smart people following it who don’t wish to get involved. I’d love to know what some of the prominent lurkers are thinking; perhaps they will feel obliged to address the actual paper.

David Wells
Reply to  Nelson
September 13, 2019 7:10 am

The IPCC says that we cannot predict our climate future. The models that the IPCC uses have all failed even the most basic tasks. Roy and Pat, whilst disagreeing on the minutiae, agree with the IPCC. It doesn’t matter how many or how exquisite the equations; the only thing that matters is the data. And the data contradict the belief, the alarmism, the prevarication and the predictions/projections predicated on model output, which are driven not by reality but by the need to misrepresent CO2 as a potential threat.

The IPCC and the alarmists use modelling as a front to disguise the purpose of their deceit. The IPCC uses the supposed complexity of models to overwhelm the gullible. Dame Julia Slingo, having left the Met Office, said that technology “needed to be at least 1,000 times faster before we have a cat in hell’s chance of using math to predict our climate future”. But if math cannot even begin to approach the accidental and unpredictable nature of our climate, as the IPCC has admitted, then what exactly is the point of spending countless billions on modelling when the climate has the potential to turn on a sixpence and freeze the planet at its convenience? Arguments about how to convince the gullible that modelling is nonsense, and that the rage about CO2 is idiotic, remain inadequate for that task. They remain the preserve of an elite, beyond what ordinary folk could ever comprehend, which is why alarmism continues to prosper: its message is simple.

Ban CO2 and we will have a stable climate, a golden age, a land of milk and honey.

Jurgen
September 13, 2019 6:13 am

I recently read the chapter “O Americano, Outra Vez!” in the book “Surely You’re Joking, Mr. Feynman!”. Teaching in Brazil, Feynman discovered to his amazement that studying physics there was limited to memorizing words and formulas. The students had no idea what they meant in the real world; they could not connect them to real physical phenomena. And the few teaching scientists who could make that connection were educated abroad. So Feynman concluded that “no science is being taught in Brazil”.

The parallel I see with the discussion here is the basic question: what does (or does not) the theory tell us about real physical phenomena? What is its meaning in the real world? To rephrase that question: “what do climate models tell us about the climate as observed in the real world? What do they teach us apart from the figures they spawn?”.

My understanding here is: Spencer concludes: “The figures they spawn are within realistic limits because of the way the models work”. Frank concludes: “The models fail to tell us how the climate operates. They cannot do this because their inner logic is flawed and meaningless for the real world.”

Analog Design Engineer
September 13, 2019 7:23 am

Can someone please explain to me in plain English which GCM accurately models the Earth? In other words, which one will accurately predict what happens in 2 years, 5 years, 10 years and so on? If there isn’t one then what use are they? If there is one then why the need for more than one model?

Mark Broderick
Reply to  Analog Design Engineer
September 13, 2019 7:39 am

The Russian model comes close… Oh no, another Russian conspiracy! lol

Thomas
September 13, 2019 8:02 am

Analog,

None of them will. Models are not designed to predict the future state of Earth’s climate.

As Dr. Spencer explained on his blog, the models are tweaked and parameterized so they produce a steady-state, unchanging climate in multi-decade test runs. This is not a model of the real climate. It’s a fake, steady-state climate. Then CO2, aerosols, etc. are added to see what effect those things might have on the fake-climate model. If Dr. Spencer is correct, the models could not reproduce the Little Ice Age, the warming in the early part of the 20th century, the cooling from the 1940s to the 1970s, etc. All of which were presumably natural climate phenomena.

One would think that things called “climate models” would actually model the climate, but they don’t. They are not designed to mimic the earth’s climate. They are only designed as scientific tools for calculating the warming caused by CO2.

As predictors of future climate, the models are “not even wrong.” They were not designed for that purpose, presumably because such a model would be far too complex to run on modern supercomputers.

However, Dr. Frank has shown that they are not fit for that purpose either. They get the physics so wrong that the uncertainty is much larger than the result. For me, knowing that the models get clouds so wrong is enough to prove that the models can’t be right.

The IPCC claims that they know man’s CO2 caused the warming because if they run the models without CO2, they don’t get any warming. I used to think that was circular reasoning. Now I see that it is a lie. The models don’t show warming in the absence of CO2 because they are programmed that way.

If I’m wrong about that, I sure would appreciate being corrected.

Analog Design Engineer
Reply to  Thomas
September 13, 2019 8:21 am

Thank you Thomas for the ‘plain English’ explanation.

September 13, 2019 8:02 am

That the models use TOA balance as an initial condition is itself a large source of error. CERES shows clearly that the TOA flux is NOT in balance.

[linked image: CERES TOA flux data]

The LW flux is flat to slightly increasing; the decrease in SW flux is responsible for the warming. CO2 is not a significant absorber of SW radiation.