September 12th, 2019 by Roy W. Spencer, Ph. D.

**NOTE:** *This post has undergone a few revisions as I try to be more precise in my wording. The latest revision was at 0900 CDT Sept. 12, 2019.*

*If this post is re-posted elsewhere, I ask that the above time stamp be included.*

Yesterday I posted an extended and critical analysis of Dr. Pat Frank’s recent publication entitled *Propagation of Error and the Reliability of Global Air Temperature Projections.* Dr. Frank graciously provided rebuttals to my points, none of which have changed my mind on the matter. I have made it clear that I don’t trust climate models’ long-term forecasts, but that is for different reasons than Pat provides in his paper.

What follows is the crux of my main problem with the paper, which I have distilled to its essence, below. I have avoided my previous mistake of paraphrasing Pat, and instead I will quote his conclusions *verbatim*.

In his Conclusions section, Pat states “*As noted above, a GCM simulation can be in perfect external energy balance at the TOA while still expressing an incorrect internal climate energy-state.*”

This I agree with, and I believe climate modelers have admitted to this as well.

But, he then further states, “*LWCF* [longwave cloud forcing] *calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models*.”

While I agree with the first sentence, I thoroughly disagree with the second. Together, they represent a *non sequitur*. **All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!**

Why?

If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior:

Figure 1. Yearly- and global-average longwave infrared energy flux variations at top-of-atmosphere from 10 CMIP5 climate models in the first 100 years of their pre-industrial “control runs”. Data available from https://climexp.knmi.nl/

**Importantly, this forced-balancing of the global energy budget is not done at every model time step, or every year, or every 10 years. If that was the case, I would agree with Dr. Frank that the models are useless, and for the reason he gives.** Instead, it is done once, for the average behavior of the model over multi-century pre-industrial control runs, like those in Fig. 1.

**The ~20 different models from around the world cover a WIDE variety of errors in the component energy fluxes, as Dr. Frank shows in his paper, yet they all basically behave the same in their temperature projections for the same (1) climate sensitivity and (2) rate of ocean heat uptake in response to anthropogenic greenhouse gas emissions.**

Thus, the models themselves demonstrate that their global warming forecasts do not depend upon those bias errors in the components of the energy fluxes (such as global cloud cover) as claimed by Dr. Frank (above).
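The argument above can be illustrated with a deliberately minimal sketch (not any actual GCM; the feedback parameter and bias values are assumed numbers chosen for illustration): two toy energy-balance models with opposite cloud-flux biases, each tuned once to a balanced control state, produce identical warming for the same forcing because they share the same climate sensitivity.

```python
# Toy illustration (not a GCM): two zero-dimensional energy-balance models
# with different cloud-flux biases, each tuned once to a balanced control
# state, respond identically to the same added forcing because they share
# the same feedback parameter (climate sensitivity).

LAMBDA = 3.2   # assumed feedback parameter, W/m^2 per K (same for both models)

def warming(added_forcing, cloud_bias):
    # One-time tuning step: an offset is chosen so the control state
    # balances, i.e. net flux = cloud_bias - offset = 0 before forcing.
    offset = cloud_bias
    # Equilibrium response: delta_T = (F + bias - offset) / lambda
    return (added_forcing + cloud_bias - offset) / LAMBDA

f = 3.7  # canonical forcing from doubled CO2, W/m^2
print(warming(f, cloud_bias=+4.0))  # model A, cloud flux biased high
print(warming(f, cloud_bias=-4.0))  # model B, cloud flux biased low
```

Both models print the same warming (about 1.16 K) despite opposite 4 W/m^2 cloud biases: the one-time tuning removes the bias from the forced response, which is the behavior described above.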

That’s partly why different modeling groups around the world build their own climate models: so they can test the impact of different assumptions on the models’ temperature forecasts.

Statistical modelling assumptions and error analysis do not change this fact. A climate model (like a weather forecast model) has time-dependent differential equations covering dynamics, thermodynamics, radiation, and energy conversion processes. There are physical constraints in these models that lead to internally compensating behaviors. There is no way to represent this behavior with a simple statistical analysis.

Again, I am not defending current climate models’ projections of future temperatures. I’m saying that errors in those projections are not due to what Dr. Frank has presented. They are primarily due to the processes controlling climate sensitivity (and the rate of ocean heat uptake). And climate sensitivity, in turn, is a function of (for example) *how clouds change with warming*, and apparently not a function of *errors in a particular model’s average cloud amount*, as Dr. Frank claims.

The similar behavior of the wide variety of different models with differing errors is proof of that. *They all respond to increasing greenhouse gases*, contrary to the claims of the paper.

The above represents the crux of my main objection to Dr. Frank’s paper. I have quoted his conclusions, and explained why I disagree. If he wishes to dispute my reasoning, I would request that he, in turn, quote what I have said above and why he disagrees with me.

Thank you both for these very enlightening posts. It is great to see civil disagreement with discussion of the technical issues.

Agreed. But I am curious why the basics of this theory weren’t settled before the policy makers attempted to implement one side of the equation.

Hearty agreement, TRM

One thought not mentioned elsewhere –

even if Dr. Frank’s paper is found in the end to be significantly flawed, it still should be published. It examines very serious issues which are rarely scrutinised.

“If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out”

No, Dr Spencer – only the top of atmosphere balances, but as there are plenty of fluxes into and out of the ocean, there is not the slightest reason to believe there is a balance.

Your argument is rubbish – it is the usual rubbish we get from those who can’t imagine that any heat goes into and out of the ocean and that the changes are at the TOA – the conclusion of +/-15C is still valid.

Mike:

The fact that climate can change as a result of changes in heat fluxes in and out of the ocean is a separate issue. And I already mentioned how changes in the rate of ocean heat storage are one of the things that cause models to differ.

Did you even bother to read what I wrote? Or do you only like Dr. Frank’s conclusion, so you will blindly follow it?

Roy

I am not sure I see your point. The models are in balance, and only forcing from GHGs takes them out of balance; if this were the case in the real world, there would have been no LIA or any of the several warming periods over the last 15,000 years.

+1

Roy,

Yes, that premise has been thoroughly falsified. The climate fluctuates at short intervals with no outside forcing. So why continue with models based on false assumptions?

There are things to learn running models with constraints that differ from (isolate and simplify) reality…BUT YOU CAN’T RESTRUCTURE THE WORLD ECONOMY AND POLITICAL SYSTEMS based on them.

“why continue with models based on false assumptions? ”

Because ‘ex falso, quodlibet’. If they assume the false things as true, they can prove anything. They can prove there is an invisible Godzilla made by humans which controls the climate, if they want to. It’s very convenient for the climastrological cargo cult.

What needs to be challenged here, is this term “pre-industrial control run”. This is not a “control” in the usual scientific sense, yet is used in a way to imply some control is being done.

What this is, is a calibration run resting on the ASSERTION that some arbitrary period in the climatological past was somehow in the perfect natural state and in equilibrium, such that if models are run from that state they should stay basically flat.

We only have to look at all that we know of the Earth’s past climate to know that this is a false supposition.

That models are designed and tuned to create stable output under such conditions is proof in itself that they are seriously defective. They are designed to demonstrate a pre-imposed assumption.

This “control-run” process ensures that all the egregious errors and assumptions ( parametrisations ) about clouds and whatever else in any individual model are carefully tweaked to be in overall balance and produce flat output in the absence of CO2. If cloud amounts are off by 14%, one or a group of opposing, ill-constrained parameters have been tweaked in the other direction to maintain the required balance.

Dr. Spencer is essentially right.

“This is not a ‘control’ in the usual scientific sense”

Yes, it is. It is the sense of an experimental control. You have a program that you can set to run with its properties unchanged. Then you run it with various changes made, according to scenarios.

There is no attachment to some halcyon climate. Nothing much is to be read into pre-industrial. They simply put in a set of gas concentrations corresponding to what was reckoned pre-industrial, and let it run. No doubt if it shows some major trends they check their inputs. But it isn’t designed to prove anything. It just answers the question, if a scenario shows some change happening, what would it have been without the scenario?

Nick Stokes said: «They simply put in a set of gas concentrations corresponding to what was reckoned pre-industrial, and let it run. No doubt if it shows some major trends they check their inputs.»

They do that and never observe the known natural fluctuation of unforced climate.

Sorry Nick, that is not a “control-run” for the REAL PHYSICAL CLIMATE. It is merely a control-run for that specific model of climate. Changing an input and monitoring the output only tells you HOW THAT MODEL reacts to the change, not how the REAL PHYSICAL CLIMATE reacts.

The “tweaking” being talked about here is simply being done without much if any true knowledge of how everything ties together. It is more “cut and fit” to see if any way can be found to make the energy balance using “fudge factors”. Consequently, everyone should recognize that the modeler(s) don’t really know how everything fits together in the real physical world.

Until you have a model that can accurately forecast climate an hour, day, month, year, or decade ahead, then you don’t have a model that can produce a “control-run” for the real physical climate. Until then you simply have a collection of assumptions, hunches, and guesses that make a model.

“They do that and never observe the known natural fluctuation of unforced climate.”

That observation is the point of the control, and of the diagram Dr Spencer showed at the top.

This is known as sensitivity analysis. It is not a “control.” A control is independent of the model or experiment run. A model run is NOT an experiment.

Base climate errors do not subtract away.

They get injected into the subsequent simulation step, where the initial value errors are mangled even further by the theory-error inherent in the GCM.

You folks make all sorts of self-serving assumptions.

Nick likes that.

Dr. Spencer,

I do not have the background to evaluate the GCMs. However, consider the reason for double-blind studies in drug trials.

You wrote,

“The similar behavior of the wide variety of different models with differing errors is proof of that. They all respond to increasing greenhouse gases, …”

Your statement seems to imply some credence to these findings on the basis that multiple, independent lines of research reached the same conclusion. However, this is not any type of proof of concept. I cannot conceive of any model run that outputs cooler global temperatures being reported in the literature – because everyone “knows” that GHGs warm the atmosphere!

I believe the GCMs are complicated enough, that they would be “debugged” until they produce a result that is reasonable, i.e. warming. When everyone already knows the “correct” answer it is very difficult for someone to produce a contrary result and publish.

The climate forecast models are constantly wrong — anyone can see that. Plus NOAA reports that the atmosphere has cooled during the past 2 years while CO2 concentrations skyrocket. Amazing.

Yep, JS, Roy has AMSU and only sees radiation. IPCC has models and sees nothing else. But researchers like ourselves, Piers Corbyn, and NASA Ames and Langley have been watching the Sun and see the Tayler Instability. That and the Ideal Gas Laws make all their dreams go poof!

Those of us who live in the real world see crops struggling from the Wild Jetstream Effect and wonder why the Southern Hemisphere warms not at all under its CO2 loading. The false weather-gods espoused are now being put to nature’s test. I sit under southern SSWs and am in little doubt of the result. I thank you, Pat Frank, and Roy too. But please open your eyes, Roy. Brett Keane, NZ

I don’t consider climate models independent lines of research. Perhaps if they were transparent and generally unmolested, but that’s far from the case.

As it is, it is more like an exercise in finding out how many ways you can set the dials to get the same (predetermined) result.

Roy, I think that rather than claiming Pat’s argument is semantic, you should deal with his question specifically.

I work with my hands, building and constructing. Every day I frame an out-of-square object at least once, so I deal frequently with fit. After so many years I’m quite good at skipping steps to achieve fit. But I deal in margins of 1 mm to 3 mm.

Years ago I was tasked with CNC parts manufacturing. Tolerances sometimes demanded accuracy to .0001 of an inch. This is where the whole game changes.

To get dimensional fit within those margins we need to understand how error propagates. Fit can be achieved in a number of ways. But to do it consistently without fear of error propagation, the method must be sound and tested.

Pat is making a very simple point. A point you have made for years. Just because we are getting relative agreement at TOA, does not mean we arrived at those numbers while properly representing the variables.

We are all eager to constrain model outputs. And we are all eager to identify where and why we need to employ qualifying theories. You yourself have proposed mechanisms to better constrain our understanding of vertical heat transport.

Pat’s paper is not making a claim beyond determining error within model physics. And he has, in my mind adequately explained how that error can propagate while at the same time achieving relative fit.

This is all perfectly clear to any of us who have had to wrestle with Ikea installations. 🙂

Saying all of that, I hope you feel humbled and thoroughly rebuked. For my part I have the deepest respect for your work and I continue to follow your career. It’s just nice to see you and Mr. Stokes agreeing on something, as well as nice to feel smarter than both of you in this moment.

The feeling is fleeting.

…to get that TOA agreement…they go back and adjust a whole pile of unknowns…without knowing who, what, when, or where.

Absolutely true, the height of a deck of cards is the same no matter how you stack the deck, but the order in which you stack them will greatly favor one game vs. another.

And that is exactly the point … the ‘models’ are constantly reset to compliance with the required objective. If the ‘models’ need manual intervention, then they are not modelling the system, as they ‘remove’ the error without knowing what the error is. Who knows what they are doing? You don’t know what you don’t know, but that is becoming increasingly obvious.

I wholeheartedly agree with Suppes above. Real world calibration, accuracy and resolution are critical factors.

Pat is using labels incorrectly, but he is correct in the main. He is correct that if the resolution of the cloud forcing accuracy is +/- 4 units; and suspected GHG forcing is only +1.6 units; you CANNOT trust any of the results, for any practical real world application! Period, end of discussion!

If I had to accurately weigh some critical items and the weight needed to be 1.6 grams (+0.1/-0.0), and my balance has a resolution of 1 gram, but it’s accuracy is +/-4 grams: I cannot use this balance for this determination.
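The balance example above can be sketched numerically (a hypothetical instrument; the offset and seed are purely illustrative):

```python
import random

# Hypothetical balance with +/-4 g accuracy (a fixed systematic offset,
# unknown to the user) and 1 g resolution (display rounds to whole grams).
def read_balance(true_mass_g, offset_g):
    return round(true_mass_g + offset_g)

random.seed(0)
offset = random.uniform(-4.0, 4.0)   # the instrument's unknown bias, within spec
reading = read_balance(1.65, offset) # item actually within the 1.6 (+0.1/-0.0) g spec
print(reading)  # across instruments, anything from -2 to 6 g can appear
```

No single reading can verify a 0.1 g tolerance; both the resolution and the accuracy exceed the quantity being judged, which is the point of the example.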

As to Pat’s use of W/m² – Pat if you apply that terminology, it is Power per unit Area, in this case per square meter. Period, end of discussion! Watts is power, Joules are energy. 1 Joule/second is 1 Watt.

If you intended to make it some dimensionless quantity say a coefficient of error, or a percent error you must not label it as Power per unit Area which is W/m².

Perhaps you both need some re-study of some fundamental first principles! (although you are both correct, the “models” are producing garbage results regards real world)

I think Pat is correct, for the simple fact that error propagation is the most misunderstood part of science. It seems to be why no one likes to put error bars on their graphs predicting the future. This seems to me to be Pat’s point: if your uncertainty/error is so great that the possible future temperature is predicted/projected as plus or minus 300x the predicted/projected value, then what is the value of the prediction? Precisely nothing. Hence the issue between accuracy and precision. They develop a precise method of missing the barn door by a mile.

v/r,

David Riser

I call W/m^2 energy flux, D. Boss. That’s the term of use in the literature.

Maybe sometimes I leave off the “flux,” if I’m going too fast or being a bit careless.


Dr Spencer, your argument is that of the creationists: because they cannot imagine that anything but the obvious things they can see could have caused the world around us, they claim god; likewise, you claim that the models must be in balance.

What Dr Frank has used is an analysis that accounts for the possibility of unknowns. You claim you don’t know of any unknowns and therefore falsely claim the climate must only be responding to what you know. Dr Frank has demonstrated that there are unknowns; I then point to the fact that we know heat goes into and out of the ocean as being quite capable of explaining all the unknown variations. So it is not, as you imply, a physical impossibility that unknowns exist.

Dr Frank has quantified these unknowns and when they are taken into account there is a +/-15C error at the end of 100 years.

You have taken a creationist-type argument: you can’t see anything that could change the climate, and therefore, because you don’t know of any unknowns, you conclude as an omniscient scientist that they don’t exist. And then you falsely conclude Dr Frank must be wrong.

We’re still trying to model a chaotic system? How rude!

Nice try at an adolescent drive-by swipe at religion. Epic fail, but nice try!

You have it exactly backwards on who lacks imagination. If we found a plate and a spoon on Mars that we knew for certain humans didn’t place there, it would be massive news. But the same people who would get the vapors over a plate and a spoon on Mars look at ATP Synthase and yawn.

Dr. Spencer, I think I agree a bit more with your view than Dr. Frank’s, but the difference in the views is subtle. I share your distrust of using simple statistics to dispute a theory (or hypothesis) based upon a lot of data and modeling; there are too many places to make mistakes with that approach. However, cloud cover is a huge unknown, and how cloud cover will change as climate changes is an even larger unknown. The modelers admit this; we all know it. Dr. Frank did not really disprove the model results, but he did point out how big the possible cloud cover error is relative to what the modelers are trying to estimate. This is a valuable contribution, at least in my humble opinion.

I tend to agree…

But I think that’s mostly because Dr. Spencer does a great job in communicating it in plain language to those of us who would prefer not to relive the nightmare of Differential Equations… 😉

Kudos to both Dr. Frank and Dr. Spencer and to Anthony Watts for hosting this discussion.

See my reply to Andy, David.

Or my replies to Roy on his blog.

He is arguing that a model calibration error statistic is an energy.

Pat:

I have no idea what your point is here. Please dispute a specific claim I have made in the current post.

I know you have no idea of my point, Roy. That’s the point.

An error statistic is not an energy.

You’re treating the ±4 W/m^2 as an energy.

You’re supposing that if it were important, the models would not be in TOA energy balance, and that they’d not all predict similar impacts from CO2 emissions. None of that is true or correct.

The models are all (required to be) in energy balance. Nevertheless, they make large errors in total cloud fraction. Different models even have different cloud fraction errors.

The error in cloud fraction means their simulated tropospheric thermal energy flux is wrong, even though the model is in over-all TOA energy balance.

That wrong thermal flux, averaged across all models, yields a model calibration error statistic. It conditions the model predictions.

This sort of thinking, common in the physical sciences, is apparently unknown in climatology. Hence your problem, Roy.

I want to support Pat on this point.

Roy is claiming that a propagated error, which is an attribute of the system, has to affect the calculated values, and further claims that because the calculated values are generally repeatable, therefore they are “certain”.

This is simply not how results and propagated uncertainty are reported. The result is the result and the uncertainty is appended, depending on the uncertainty of various inputs. X±n

The uncertainty ±n is independent of the output value X of the model.

If every model in the world gave the same answer 1000 times, it doesn’t reduce the propagated uncertainty one iota. Roy claims it does. It doesn’t. Look it up on Wikipedia, even. That is simply not how error propagation works.
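A minimal sketch of this point (the model and all numbers are invented for illustration): a perfectly repeatable model shows zero spread over a thousand runs, while the propagated uncertainty, computed from the input uncertainty, is untouched by that repeatability.

```python
# Sketch of the point above: repeatability of a deterministic model says
# nothing about propagated uncertainty. The toy model below always returns
# the same X, yet the uncertainty +/-n attached to X comes from the input
# uncertainty via the propagation rule, not from the spread of runs.

def model(forcing):            # a deterministic toy model
    return 0.3 * forcing       # 0.3 K per W/m^2 is an assumed sensitivity

runs = [model(250.0) for _ in range(1000)]
spread = max(runs) - min(runs)
print(spread)                  # 0.0 -- perfectly repeatable

sigma_input = 4.0                  # +/-4 W/m^2 input uncertainty
sigma_output = 0.3 * sigma_input   # first-order propagation: |dy/dx| * sigma_x
print(sigma_output)                # 1.2 -- unchanged by the 1000 identical runs
```

Running the model a thousand more times changes neither number: the spread stays zero and the propagated ±1.2 stays ±1.2.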

It is very instructive to see a highly intelligent, accomplished man utterly miss a key concept, even when it is succinctly explained to him a couple of times. It’s a very good reminder to stay humble and to be very thankful for Eureka! moments of clarity of thought.

Thank-you Crispin. Your description is very good.

Hoping you don’t mind, I reposted your comment on Roy’s site, as an example of someone who understands the case.

It is unfortunate that people who see (+/-) confuse the following number with sigma, which is the standard deviation, and its relation to variance. These are statistical calculations describing a population of data points. Sigma used as a statistical calculation assumes a mean value and the deviations from the mean.

This paper describes the propagation of error throughout calculations to the end result.

As an example, take the numbers 1, 2, and 3. This could be a population of data points.

You would get the following

(range = 2)

(variance = 0.667)

(standard deviation = 0.816)

Now assume these are consecutive measurements, in feet, to determine the length of an object and assume each measurement has a possible error of +/- 0.1. In other words they are added together to find a total.

(1 +/- 0.1) + (2 +/- 0.1) + (3 +/- 0.1). You can’t describe this sum as 6 +/- 0.1, because that is incorrect.

What are the worst possible outcomes? 1.1 + 2.1 + 3.1 = 6.3 ft, or 0.9 + 1.9 + 2.9 = 5.7 ft.

So the answer would be 6 +/- 0.3 feet. This is how measurement errors propagate.

Notice that (+/- 0.3) is different from (+/- 0.816). The standard deviation could easily remain at +/- 0.816 depending on the distribution of additional data points. However, +/- 0.3 will just keep growing for each iteration of additional measurements.
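The arithmetic above can be checked in a few lines (same numbers as in the comment):

```python
import statistics

# First, the numbers 1, 2, 3 treated as a population of data points:
data = [1.0, 2.0, 3.0]
print(max(data) - min(data))                 # range = 2.0
print(round(statistics.pvariance(data), 3))  # population variance = 0.667
print(round(statistics.pstdev(data), 3))     # population std dev = 0.816

# Then the same numbers treated as consecutive measurements, each carrying
# a possible error of +/-0.1 ft, where the worst-case errors add linearly:
err = 0.1
total = sum(data)                            # 6.0 ft
print(round(total + len(data) * err, 1))     # worst case high: 6.3 ft
print(round(total - len(data) * err, 1))     # worst case low: 5.7 ft
```

The two ± numbers answer different questions: 0.816 describes the scatter of the data points, while 0.3 is the worst-case accumulated measurement error, and only the latter grows with each additional measurement.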

*An error statistic is not an energy.*

First, thanks for all of the time you have put into this.

I was just wondering if there might be any way to simplify all of this with a gedanken experiment, maybe?

For example, suppose we have a planet orbiting a star and you use an equation/model to report the planet’s position each year, with some degree of uncertainty (distance). So, each year, even though your model might give you nearly the same position, the uncertainty would accumulate / propagate. If the process went on long enough, eventually the uncertainty (distance) would become as great as the orbital circumference. (and then we have a problem)

You could think of the uncertainty as a “real distance” up to that point, however once the uncertainty exceeds the orbital circumference, it can no longer be treated as a distance due to the overlap (positional degeneracy). So, in the end, after many iterations, you have a calculation that gives a specific position, but has a tremendous amount of uncertainty in it –> that if given as a distance would be meaningless.

I am not sure if that works?

cwfisher, that’s a pretty good analogy, thanks.

As the uncertainty increases, even though the predicted position of the planet is discrete, it becomes less and less likely we’ll find it there.

Eventually, when the uncertainty is larger than the orbital circumference, as you point out, we’d have no idea where the planet is, no matter the discrete prediction.
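For what it’s worth, the gedanken experiment can be put in numbers (all values hypothetical, chosen only to make the scales concrete):

```python
import math

# Numerical sketch of the orbit gedanken experiment: suppose each orbit
# adds +/-1000 km of fresh positional uncertainty. If the per-orbit errors
# are independent, the accumulated uncertainty grows as sqrt(N) * sigma;
# if they are systematic, it grows as N * sigma. Either way there is an N
# beyond which the uncertainty exceeds the orbital circumference and the
# discrete prediction no longer says where on the orbit the planet is.

circumference_km = 940e6      # roughly an Earth-like orbital circumference
sigma_km = 1000.0             # assumed fresh uncertainty per orbit

n_random = math.ceil((circumference_km / sigma_km) ** 2)  # random-walk case
n_systematic = math.ceil(circumference_km / sigma_km)     # systematic case
print(n_random, n_systematic)
```

The prediction itself stays perfectly discrete the whole time; it is only the attached uncertainty that eventually swallows the entire orbit.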

A very interesting point. Yet when we know the Newtonian pair-wise approach to a 3-body problem explodes, the reason is not uncertainty; rather, it is the linear pair-wise action-at-a-distance encoded in the maths. Which is why we needed General Relativity with curved spacetime, Mercury’s perihelion being the case in point.

Now, is it possible that, besides uncertainty and resolution (a very thorough, refreshing analysis by Pat Frank, thanks), the entirely linear, pair-wise, flat climate models encode the same error?

And this consideration leads me immediately to a typical MHD plasma model – has anybody run this kind of analysis on, for example, solar flares (outside Los Alamos, I mean)? If the error is not just statistics, but an encoded pair-wise ideology, we will never understand the Sun, nor contain fusion.

David M.

Diff Eq simplifies calculus; what’s not to like?

Diff Eq does not simplify calculus… Calculus, derivatives and integrals, was easy. Differential Equations, Cauchy-Euler, Variation of Parameters, etc., was a fracking nightmare. I have no idea how I got a B in it.

Do you think a statistic is an energy, Andy?

Because that equation is the essence of Roy’s argument.

Again, I support this. An uncertainty of 4 W/m^2 is an uncertainty about the value, not an additional forcing. It can also be written that the uncertainty of the forcing per square meter is “4”. Suppose the forcing was 250 Watts. It should be written

250 ±4 W/m^2

Just because a variation from a 4 W change doesn’t show up in the result(s) has no bearing on the uncertainty, which has its own source and propagation. If one wanted to find the results that would obtain at the limits, run the model with 254 and 246 W and observe the results. Simple. They will be different.

Because in the beginning the uncertainty is 4 W, it cannot be less than that at the end, after propagation through the formulas. Running the model with 250 and getting consistent answers doesn’t “improve” the quality of 250±4. You can’t claim, with a circular argument, that running it with 250 only means there is no 4 W uncertainty.
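As a sketch of that suggestion, here the “model” is just a bare Stefan-Boltzmann equilibrium temperature – a stand-in for illustration, not any actual GCM:

```python
# Run the stand-in "model" at the stated limits, 246 and 254 W/m^2,
# and observe the spread of results, as suggested above.

SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp_k(flux_wm2):
    # T such that the emitted flux sigma*T^4 balances the absorbed flux
    return (flux_wm2 / SIGMA_SB) ** 0.25

t_low = equilibrium_temp_k(246.0)
t_mid = equilibrium_temp_k(250.0)
t_high = equilibrium_temp_k(254.0)
# The +/-4 W/m^2 input range maps to roughly a +/-1 K temperature range:
print(round(t_mid, 1), "+/-", round((t_high - t_low) / 2, 1), "K")
```

Even in this crude stand-in, the ±4 W/m^2 at the input survives as a temperature range at the output; repeating the 250 W run any number of times does nothing to shrink it.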

+1

+1

Yep. Gets a plus 1 from me too

Dr Frank

Please clarify a point that I find a sticking point in trying to conceptualize your analysis.

Is this 4 W/m^2 uncertainty a yearly value that would be a different number for other step sizes, such as (4 W/m^2)/12 for monthly steps?
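One way to see what is at stake in the question, assuming a root-sum-square propagation of independent per-step uncertainties (the rule discussed in this thread): the per-step sigma cannot simply be divided by 12 for monthly steps without changing the result.

```python
import math

# Root-sum-square propagation of independent per-step uncertainties:
# sigma_total = sqrt(N) * sigma_step.

def propagated(sigma_step, n_steps):
    return math.sqrt(n_steps) * sigma_step

print(propagated(4.0, 100))        # +/-40 W/m^2 after 100 annual steps

# Under this rule, a naive monthly sigma of 4/12 does NOT reproduce one
# year of the annual calculation after 12 steps, while 4/sqrt(12) does:
print(round(propagated(4.0 / 12, 12), 2))             # ~1.15, too small
print(round(propagated(4.0 / math.sqrt(12), 12), 2))  # 4.0, consistent
```

So under this assumed rule the answer to the question hinges on how the per-step sigma is derived: a yearly ±4 corresponds to a monthly ±4/√12, not ±4/12.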

Pat, I agree with Crispin. It’s not an energy, but the uncertainty in the measurement. Your statistical analysis shows that the measurements and models are not accurate enough to estimate man’s contribution to climate change, and I thank you for doing this. Well done. But the statistical analysis does not disprove the models, it just shows they’ve been misused. A subtle difference to be sure, and perhaps not an important one, but there nonetheless.

I have made plenty of comments, but have been careful not to weigh in on the question of validity.

I am awaiting clarity to smack me over the head.

I try to keep it straight what I know is true, what I think may be true, and what I am not sure of.

After all, the whole question in so much of this from the get-go to the present discussion centers on false certainty.

It is not so much the subtlety of the difference in views that may exist from one knowledgeable person to the next.

For me it is not being sure how the models even purport to tell us what they tell us.

I am not even sure that we are given clear information about what exactly it is the models output, and what they do not.

Do they produce three dimensional temperature maps of the Earth?

Is temperature the only thing they keep track of?

We know they do not keep track of clouds, so how do they keep track of movements of energy from convection, latent heat, precip, etc?

The models are flawed from the get-go because they try to balance the TOA energy. A real model would balance the input of energy from the surface against the outward flux at TOA. The only problem with that is that the release of energy from the surface is chaotic, so it can’t be accurately modeled or predicted.

The incoming energy gets converted and stored in all kinds of media, water, ice, biomass, kinetic energy, GHGs, asphalt, etc. Some of that energy is stored for thousands, maybe even millions of years before it is released back to the atmosphere. As such the incoming energy on a particular day is fairly meaningless for that day. Today’s Energy penetrating the depths of the ocean is not going to affect the atmospheric temperature today, tomorrow or even next week, nor will it contribute to outgoing radiation today, tomorrow or next week. Energy stored in biomass this year may not be released for a couple of years, … or maybe even decades …. or in coal and oils case, millions of years.

The problem with the GHG theory, is that GHGs emit at the same rate they absorb. Thus, while GHG are yet another energy sink, they have no long term storage capacity, and thus just reflect the current state of energy flow from the surface to TOA. Doubling GHG doubles the amount of LW radiated down, but it also doubles the amount radiated up, so it is a zero sum game.

Balancing creates a propagated system error throughout the entire process. That error can never be random and can never be cancelled out. Any intensive property like temperature has to reflect a parameterization of the process. You can’t count up to get a temperature. You must do it by parameterization. The very fact of parameterization itself creates a systematic error. Any equation with a constant has been derived from experiment. There are systematic errors in those experiments.

“Global effect of anthropogenic CO2 emissions” presupposes that you know the non-anthropogenic CO2 emissions, which are highly inaccurate themselves. And we most certainly don’t know the mix of the two causes of temperature (anthropogenic and non-anthropogenic). We also don’t know to any great accuracy the actual global temperature increase. For climate models (that don’t know the answer to the last two points) to be fit for policy should mean that those answers are known to much less uncertainty than at present.

Roy is arguing that because one part of the system has been forced to be in balance, that lessens the system error. One can never lessen the system error unless the parameterization through experiments is calibrated more accurately. Unfortunately, whole-system experiments on the Earth system are difficult, to say the least.

Also correct. You can’t magic away errors that are non-Gaussian. Satellite sea level data products have the same issue.

If you double the amount of energy being emitted upwards, downwards, and sideways, then the emitting medium has to increase in temperature: both the CO2, to emit twice as much, and the rest of the air, due to the build-up in kinetic energy needed for the CO2 to be hot enough to absorb and emit at that increased rate.

It is eventually a zero sum in terms of energy in and out, but the bit that causes the emission is running along at a higher energy (heat) level.

Dr Deanster posted: “The problem with the GHG theory, is that GHGs emit at the same rate they absorb. Thus, while GHG are yet another energy sink, they have no long term storage capacity, and thus just reflect the current state of energy flow from the surface to TOA. Doubling GHG doubles the amount of LW radiated down, but it also doubles the amount radiated up, so it is a zero sum game.”

Well, there are so many problems with these statements, one hardly knows where to begin.

First, GHGs do NOT emit at the same rate as they absorb. GHGs can and do lose the energy they absorb as radiation via pure “thermal” exchange (collisional exchange of vibrational energy) with other non-GHG atmospheric constituents, mainly nitrogen and oxygen. I believe the physical concept is called thermal equilibration of mixed gases.

Second, if you examine the vertical energy exchange during nighttime, it is obvious that GHGs are only seeing incoming energy from Earth’s surface (the net radiant energy exchange within the atmosphere itself is negligible in comparison) . . . but the GHGs radiate equally in all directions (that is, isotropically). Therefore, approximately half of their radiation is directed back toward Earth’s surface. Hence, they radiate less energy outbound (to TOA and space) than the energy they receive from Earth’s surface over the range of nighttime temperatures.

Third, to the extent that GHGs have very little heat capacity in and of themselves (being such small mass fractions of the global atmosphere, excluding water vapor), they don’t qualify as “sinks” for energy, even considering short-term variations.

Finally, it is absurd to say that doubling any GHG doubles the amount of energy radiated up/down. It is well-known (well, perhaps outside of global climate models) that any gas absorbing radiation can become “saturated” in terms of radiation energy absorption if the “optical column length” exceeds a certain value, generally taken to be six e-folding lengths. This is well summarized in the following paragraph extracted from http://clivebest.com/blog/?p=1169 :

“There is a very interesting paper here : http://brneurosci.org/co2.html which describes the basic physics. The absorption length for the existing concentration of (atmospheric – GD) CO2 is around 25 meters i.e. the distance to reduce the intensity by 1/e. All agree that direct IR radiation in the main CO2 bands is absorbed well below 1 km above the earth. Increasing levels of CO2 merely cause the absorption length to move closer to the surface. Doubling the amount of CO2 does not double the amount of global warming. Any increase could be at most logarithmic, and this is also generally agreed by all sides.”
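The 1/e attenuation in the quoted paragraph can be sketched numerically. This is a minimal Beer-Lambert illustration, not taken from any model; the 25 m e-folding length is simply the figure quoted above:

```python
import math

# Beer-Lambert attenuation: I(z) = I0 * exp(-z / L), where L is the
# e-folding (absorption) length. The 25 m value is the figure quoted
# in the paragraph above, used here purely for illustration.
def transmitted_fraction(distance_m, efold_length_m=25.0):
    return math.exp(-distance_m / efold_length_m)

# After six e-folding lengths (150 m) the transmitted fraction is
# exp(-6), i.e. absorption is effectively "saturated":
print(transmitted_fraction(6 * 25.0))

# Doubling the absorber halves L: absorption completes nearer the
# surface rather than doubling any effect.
print(transmitted_fraction(6 * 25.0, efold_length_m=12.5))
```

The sketch only shows why the direct response is at most logarithmic, as the quoted paragraph says; it says nothing about feedbacks.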

This point touches on an issue I have with the models: Do we have any theory, or actual measurement, of what % of the heat energy at the Earth’s surface rises well up into the atmosphere (say above 50% of the CO2 in the atmosphere) via convection (especially with thunderstorms) vs what % rises by IR radiation?

Heat rising by convection to such a height would partially neutralise a rising CO2-greenhouse effect. If that % rises as the Earth warms a little, because storms become stronger (as we have been told recently re hurricanes, which are said to be strengthening with “climate change”), then this could constitute a significant negative feedback for global warming.

I would hazard a guess that if even a few % of the rising heat is due to convection, and if that % is sensitive to rising ocean temps, then we may have an explanation, separate from cloud cover changes, for why the climate models all seem to fail in predicting real-world temperature change. The heat-trapping effect of changing 300 ppm CO2 to 400 ppm CO2 might be neutralised by a correspondingly small change in convective heat loss.

Does anyone know of any real numbers on this % and its sensitivity to ocean temps?

How much Global Warming could be blamed on all the combusted exhaust leaving the chimneys poking out of the roofs of commercial buildings, industries, and power plants? These temperatures range from 250°F to 1500°F. All of this has to be recognized as wasted heat energy. Why is this being allowed if Global Warming really is the problem it is claimed to be?

Recovering that waste heat energy and utilizing it will greatly reduce the amount of fossil fuel that needs to be combusted. This will greatly reduce CO2 emissions. With natural gas, for every 1 million Btu’s recovered and utilized, 117 lbs of CO2 will not be put into the atmosphere. What natural gas is not wasted today will be there to be used tomorrow.

In every 1 million Btu’s of combusted natural gas there are 5 gallons of recoverable distilled water. To get at this water, the heat energy has to be removed from the combusted exhaust. This is done with the technology of Condensing Flue Gas Heat Recovery. The lower the exhaust temperature, the greater the volume of water produced. Have you ever seen combusted natural gas irrigate the lawns and flower beds?

In the summer time there will be times when the exhaust leaving these chimneys can be cooler than the outside air temperature. Global Cooling?

That should never happen. If the atmosphere is our “dead state,” then air entering it at a cooler temperature would mean we are wasting availability (exergy) by over-cooling a waste stream. In general, one should consider recuperating waste heat, but there are circumstances in which it has no purpose and is uneconomic to do so.

Only wrongly dimensioned condensers may be affected; if it’s too hot outside, they can’t liquefy.

I imagine if it were cost-effective, the utilities would already be doing this. If it’s not cost-effective but more effective than current green subsidies, then the government would be better off spending subsidy money on your idea.

I’m just a layman, so help me out here:

How can anyone recover waste heat energy and utilize it any manner that doesn’t result in most of the waste heat eventually entering the ocean or atmosphere?

If I use waste heat to generate steam, for example, and use that steam to drive a turbine, doesn’t the condensation of the steam release the part of the energy that didn’t go into driving the turbine?

You are correct that the reclaimed heat will turn into electricity and then into heat again.

But if you didn’t have this process the electricity would have to come from another source instead. So twice the amount of heat would be generated in total.

“If I use waste heat to generate steam, for example, and use that steam to drive a turbine, doesn’t the condensation of the steam release the part of the energy that didn’t go into driving the turbine?”

Yes and no.

The turbines cannot extract all the energy out of the steam, nor should they. If they did, the steam would condense within the turbines, and that is to be avoided for corrosion, erosion, and all sorts of other reasons harmful to turbine blades and bearings. So, the turbines want to accept high temperature high pressure steam and exhaust lower temperature and lower pressure steam but still well above condensation. This still highly energetic steam is then used to pre-heat incoming water headed to the boiler (which is actually recovered condensed steam from the same loop) where it begins to condense before it needs further cooling provided by external heat exchangers.

To get the entire system to extract energy from a heat source (burning fuel) you need a heat sink (external condensers). The thermal difference is what drives the system. How you use your waste heat stream to reheat condensed water will improve efficiency, but there are theoretical and practical limits regarding how much useful energy can be extracted from steam systems. The thermodynamics of steam energy regarding best temperature and pressure ranges for optimal energy conversion have been around for a very long time.

It’s been over 40 years since I did my thermo at the ’tute, but I remember the final exam. Only one question (with multiple parts). It gave the schematic of a power plant and asked, “Calculate the overall efficiency of the system, with calculations shown for each subsystem.”

There is this little thing called entropy that somewhat gets in the way of reusing the ‘wasted’ heat energy. It quickly gets to the point where capturing and using that wasted heat requires more energy expenditure than one can possibly capture, not to mention the general impossibility of getting it to someplace useful.

Yearly energy usage (actually used plus wasted) of all human activity is equal to only a few hours of one day of energy input from the sun. Which is to say, it is so far below the precision of measurement of total energy flux that it cannot affect any TOA energy balance measurements or theoretical calculations thereof.
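A back-of-the-envelope check of that scale comparison, using assumed round figures (roughly 600 EJ/yr of human primary energy use, ~1361 W/m² of solar irradiance over Earth’s cross-section), lands in the same order of magnitude:

```python
import math

# Both inputs are assumed round numbers, for illustration only:
human_energy_per_year_J = 600e18                    # ~600 EJ/yr (assumed)
solar_power_W = 1361.0 * math.pi * (6.371e6) ** 2   # ~1.7e17 W intercepted at TOA

# How many hours of solar input equal a year of human energy use?
hours_equivalent = human_energy_per_year_J / (solar_power_W * 3600.0)
print(round(hours_equivalent, 2))
```

With these assumed figures the answer comes out at roughly an hour, consistent with the order of magnitude claimed above.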

Wouldn’t an arbitrary constraint on the deviation of LWCF in response to increased CO2 introduce constraints in model results that may badly miss actual results, should LWCF respond in non-linear fashion to delta CO2?

There is no explicit constraint on how a model responds to increasing CO2. And the model allows clouds to change nonlinearly to warming resulting from increasing CO2. The models allow all kinds of complex nonlinear behavior to occur. That still doesn’t mean their projections are useful, though. And any nonlinearities add uncertainty. For example, I don’t think the models handle changes in precipitation efficiency with warming, which affects water vapor feedback, and thus how much warming. It’s not that they don’t handle the nonlinearity, it’s that they don’t have ANY relationship included. A known unknown.

Dr. Spencer, the UN IPCC climate models’ average earth temperatures vary up to 3C between them. How does that temperature difference affect the physics calculations within the different models?

Yes. There are all sorts of nonlinearities in the model. That’s why Frank is wrong, the uncertainty explodes exponentially in climate models, not linearly.

What he says is true about his linear model. As long as his model simulates the bullshit simulators well, his model provides a lower bound for uncertainty in the pure and absolute crap models.

So yes, it’s worse than anyone thought.

+1

Is it possible to express Pat’s figures as the chance of models hitting within, say, a 1C envelope over 20 years compared with the measured values? I guess if the chances are very slim and the models still hit the envelope, they do this with programmed boundaries. If so, actual prediction skill is close to zero.

I’m coming to the conclusion that if the models included estimates of the poorly understood factors that affect our climate in the real world, then we have Dr Frank’s scenario. If these factors are fiddled with to effectively neutralise them, and not included in the passage of time (contrary to nature), then you get Dr Spencer’s scenario.

If this over-simplified explanation is correct, then both scientists are correct, but surely the point is that if the models truly simulated the real world at their current state of development, we get Dr Frank’s scenario, i.e. they fail.

Dr. Frank’s paper says nothing about how the GCMs get to their projections. His whole point is that, given the uncertainty of the cloud cover and its repeated computation in the time-wise iterations of the models, the results are very uncertain. His point is that there is no reason to trust the accuracy (correctness) of their results even if their precision is reasonable. If his emulation of the GCM results is a valid analytic step, his final answer of uncertainty is correct. He is not saying the models cannot show a change from CO2 forcing; that is what they were programmed to do. He is saying that programmed change is well within the uncertainty bounds, so it is useful only for study and useless for prediction.

Exactly right, DMA. You’ve got it.

Dr Frank!

Do you agree with my angle of the argument a few lines above?

Not exactly, Mr. Z. It says even if the models happen to get close to a correct value, there’s no way to know.

Thanks,

If the error bars are as high as you calculate and they still manage to produce values within a 1C envelope over a 20 year period (and they do comparing calculated vs measured) there must be some kind of boundaries involved.

My layman’s logic tells me either there is a “boundary corridor” or the error bars are not properly calculated. A value cannot stay within range over 20 years by pure luck.

Please help me understand where my thinking is wrong. (Maybe with an example of a prediction where an initial uncertainty does not go ape).

“Not exactly, Mr. Z. It says even if the models happen to get close to a correct value, there’s no way to know.”

So, assuming the scenario that a model is reasonably close to the actual values, is it because of pure luck? How many possible trajectories are possible within the bounds of uncertainty? Surely dozens if not hundreds – does it mean each trajectory is equally probable?

To Paramenter:

I do hydraulic studies (flood modeling). The object of the model isn’t to be precise; there is no way you can be precise. Yes, the output is to 4 decimal places, but the storm you’re modeling isn’t a real storm, nor is the soil infiltration, the runoff routing, or the retention/detention values; even the tides and temperature (viscosity) will affect the flow values.

The point of a model is to produce a value where you can say “This will satisfy the requirements for safety and damage to property, for a given expected storm event”.

I get the impression that atmosphere modeling is similar if not many times more complex than flood modeling. Precision isn’t what you’re looking for. There is no such thing as precision with these dynamic random systems.

Mr. Z, error is not uncertainty.

If the physical theory is incorrect, one may produce a tight range of results, but they’ll have a large uncertainty bound because the calculated values provide no information about the physical state of the system.

Well, if that’s what Pat is saying, then I am wasting my time trying to explain this to people. It’s a very basic concept, and without understanding global energy balance and the basics of how models work, I will not be able to convince you that you can’t swoop in with statistical assumptions to analyze the problem in this fashion. I don’t know how to put it any more simply than what I have done in this post. If you do not understand what I have said from a physics point of view, we are wasting our time.

Where can we go for a clear explanation of what exactly the models do, how they do it, what they do not do, etc?

I think that at least some people believe the GCMs construct a miniature Earth as it exists in space, with the Sun shining on it, and with an atmosphere, and land, and oceans, just as they exist in real life, and then program in all the laws of physics and physical constants, and then just let the thing run into the future.

Roy W Spencer:

“I will not be able to convince you that you can’t swoop in with statistical assumptions to analyze the problem in this fashion.”

“Swoop” is obviously the wrong word for this multiyear project.

If you want freedom from statistical assumptions, you’ll probably have to go with bootstrapping, but a problem as large as this would require computers with about 100 times the speed of today’s supercomputers. Or, you can try to work with more accurate statistical assumptions than those used by Pat Frank.

Meanwhile, because there are many sources of uncertainty, Pat Frank has most likely underestimated the uncertainty in the final calculation.

I do understand what you’re saying from a physics point of view, Roy.

However, as soon as you convert an error statistic into a physical magnitude, you go off the physics rails.

You’re thinking of an error statistic as though it were a physical magnitude. It’s not.

Calibration error statistics condition predictions made using physical theory.

This is standard in physics and chemistry. But not in climate science.

Perhaps if you found a trusted experimental physicist in the UA physics department — not one engaged in climate studies — and asked about this it might help.

Ask that person whether a ±4 W/m^2 calibration error statistic will impact the magnitude of the simulated TOA energy balance of a climate model.

Dr. Roy

You said “you can’t swoop in with statistical assumptions to analyze the problem in this fashion.” Does this mean that Dr. Frank’s use of the linear emulations of the models is invalid as a step to understand the uncertainties in the models? If so, why? If Frank’s method is valid, his results are valid. If his method is not valid, we have no mathematical means of determining uncertainty in the complex equations used in the models. Then we are left with your suspicions that the models are not accurate, and the argument that their results rely on circular reasoning and are therefore potentially spurious and not fit for policy making.

I do support the explanation of Dr. Spencer.

I would ask in plain language what Dr. Spencer means with this clause: “It’s not that they don’t handle the nonlinearity, it’s that they don’t have ANY relationship included. A known unknown.”

I understand this clause to mean that climate models do not have a cloud fraction relationship included.

The most important feature of climate models is to calculate the climate sensitivity value. There is no cloud fraction variable included in the model, but it has been assumed to have the same effect at 280 ppm concentration as at 560 ppm concentration.

Pat Frank says that the climate sensitivity value – for example 1.8 degrees – has no meaning because the cloud fraction error destroys it totally through the error propagation mechanism. Dr. Spencer says that the cloud fraction propagation error does not destroy it if a model does not have this term in the model.

It is a separate analysis what cloud fraction can do in the real climate. At any time it can have its input, which may be +4 W/m2 or -4 W/m2 or something in between. It is anyway enough to destroy the radiative forcing of CO2, which is only +3.7 W/m2.

Antero,

“It is anyway enough to destroy the radiative forcing of CO2, which is only +3.7 W/m2 [per doubling].”

Exactly, complex statistical analysis is interesting but not necessary. Dr. Spencer himself has noted that a small change in cloud cover is enough to overwhelm CO2. It is one of his most persuasive arguments against catastrophic warming.

Antero, “I do support the explanation of Dr. Spencer.”

So, Antero, you think a calibration error statistic is an energy flux.

Great physical thinking, that.

Dr. Frank, as you note, this concept is absolutely critical. May I humbly suggest that you acknowledge that this concept is exceedingly difficult for most people to truly grasp, regardless of their background, experience, or education. It is an overwhelmingly pervasive impediment to really understanding whether any projection has usable “skill”. My 40 years or so in various fields of engineering have tried to teach me that unless you patiently work on finding ways to communicate this core concept, you will get nowhere. Dr. Spencer is listening; patience, understanding, and empathy with the real difficulty of grasping this concept are key.

Please keep looking for ways to clarify your position. Those who do not see it are not stupid, not willfully missing it; it’s just damned hard to grasp for most.

Regards,

Ethan Brand

Exactly.

DMA:

“He is saying that programmed change is well within the uncertainty bounds so is useful only for study and useless for prediction.”

That is the exact point that Dr Spencer misses in this quote:

“All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!”

The estimated sizes of the CO2 effect calculated from the models are well within the probable range of the error/uncertainty propagated through the model calculations. It is operationally indistinguishable from the effect of a chance variation in an important parameter.

In a simple linear case such as y = b0 + b1X + e, the random variation in the data produces random variation in the estimates of b0 and b1. If you then use the estimated values of b0 and b1 to predict/model

Y(n+1) = b0 + b1X(n+1), the uncertainties of b0 and b1 propagate through to uncertainties in the predicted Y(n+1). For the simple linear case, the uncertainty of Y(n+1) can be estimated, and for a given X(n+1) might be much greater than the estimate itself. In that case, you would have to accept that you had no useful information about the true value of Y(n+1). This happens all the time when you are trying to measure small concentrations of chemicals: the estimate is then called “below the minimum detection level”.
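The simple linear case can be sketched numerically. This is an illustration only; the true b0 = 2.0, b1 = 0.5, and the noise scale are all invented numbers:

```python
import numpy as np

# Invented data from a known line y = b0 + b1*x + e (b0=2.0, b1=0.5 assumed)
rng = np.random.default_rng(0)
x = np.arange(10, dtype=float)
y = 2.0 + 0.5 * x + rng.normal(scale=2.0, size=x.size)

# Ordinary least squares estimates of (b0, b1) and their covariance matrix
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (x.size - 2)       # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)       # Var/Cov of (b0, b1)

# The uncertainty of a prediction Y(n+1) at X(n+1) propagates from cov;
# far outside the data (extrapolation) it can dwarf the estimate itself.
x_new = np.array([1.0, 15.0])           # predict at X(n+1) = 15
var_pred = x_new @ cov @ x_new + s2     # + s2 for the new observation's noise
print(beta, np.sqrt(var_pred))
```

The same prediction made inside the data range carries a much smaller variance, which is the whole point: the further the model is pushed, the wider the uncertainty of Y(n+1).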

With more complex models, the uncertainty that is propagated through the calculations based on the uncertainties in the parameter estimates is harder to estimate. Pat Frank has shown that the result of (a reasonable first attempt) at a calculation shows the uncertainty of the CO2 effect computed by GCMs to be much greater than the nominal effect estimate. The CO2 effect is below the minimum detection level.

oops:

“In that case, you would have to accept that you had no useful information about the true value of Y(n+1).”

Except possibly that the true value was very small, if not 0.

+1

+1

Nice succinct explanation!

What Dr. Frank’s critics seem not to get is that it is quite possible for a measurement error to be small while the uncertainty of the measurement is large. For example I might have an instrument which is calibrated by comparison to a reference with a large uncertainty. Say a 100 kg scale calibration weight with an uncertainty of +/- 5 kg. If I then weigh something and the scale shows 100 kg, the error might well be very small because it’s quite possible the calibration weight was in fact very close to 100 kg. But, it might have been as low as 95 kg or as high as 105 kg. That’s what uncertainty means. We just don’t know. So while the actual error of our measurement might be small, we still must disclose that our measurement has an uncertainty of +/- 5 kg. Maybe not a big deal if we were weighing a bag of compost, but a different story if we’re weighing a bar of gold bullion.

And in the case of the scale, it might be very precise, so that repeated measurements of the same object are always within +/- 0.1 kg. But it doesn’t matter; we still don’t know the weight of the object to better than +/- 5 kg. If we try to weigh something that is less than 5 kg, we literally still won’t *know* what it *actually* weighs. Now consider if we have 10 scales, all of them calibrated to the same reference (with the large uncertainty). In that case, we should not be surprised if they all return close to the same result for the same object. But that does not prove that we know the *actual* weight of the object, because they all have the same level of uncertainty, just as all the GCMs are based on the same incomplete theory.

Paul P. Exactly right. Now consider what happens if we are asked to determine the total weight of 10 similar items. Since each weighs something close to 100 kg and our scale capacity is not much greater, we have to weigh them one by one and add up the weights. What’s the uncertainty of our result?

Hint SQRT(5^2 x 10) = +/- 15.8 kg.
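The hint’s quadrature sum is the independent-error case. A small sketch, using the numbers from the example above, contrasting it with the fully systematic case Paul P. describes (one shared calibration reference, so the same offset repeats):

```python
import math

n, u = 10, 5.0   # ten weighings, each carrying a +/-5 kg calibration uncertainty

# If the ten errors were independent, they would combine in quadrature
# (this is the case the hint's SQRT(5^2 x 10) describes):
u_independent = math.sqrt(n * u**2)   # ~15.8 kg

# If one miscalibrated scale (or one shared reference weight) is reused,
# the same offset repeats every time and the contributions add linearly:
u_systematic = n * u                  # 50.0 kg

print(round(u_independent, 1), u_systematic)
```

Which of the two applies depends on whether the calibration error is redrawn for each weighing or shared across all of them – the crux of the wider disagreement in this thread.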

In my original thoughts on Pat Frank’s paper I said that I wasn’t sure if the uncertainties stack-up as he suggested. I think Dr. Spencer’s essay here lays this out more specifically.

Let’s write the climate modeling process as a linear state system in the form most engineers would recognize:

dX/dt = AX + Bu + v
Y = CX + w

A is the matrix describing atmospheric dynamics. It is a huge matrix. Operate on a current state (X) with this matrix and what comes out is the time rate of change of the state (temperatures, pressure, etc.). There are inputs to this, like insolation, described in u, which feed into the rate through B; and there are uncertainties and noise, which v represents, which drive or disturb the dynamics, respectively. These drivers and uncertainties are vectors. Their effect on the state through the matrices A and B might correlate one with another, or even anti-correlate. It is not possible to tell without knowing A and B. The propagation of uncertainty in this situation is very complex.

BTW, in the state space model above, the vector Y is what we observe. It is a function of the current state, X, but not necessarily exactly equal to it. The dynamics of the system can be hidden and completely opaque to our observations. The uncertainty vector, w, represents the fact that our measurements (and the corrections we apply) are subject to their own uncertainties. We should propagate all the way to Y to fully characterize the model uncertainties.
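The covariance propagation this describes can be sketched for a toy discrete-time system. All the matrices below are invented purely for illustration; nothing is taken from an actual GCM:

```python
import numpy as np

# Toy 2-state system x[k+1] = A x[k] + v[k] (the input term B u is
# omitted from the sketch), with observation y[k] = C x[k] + w[k].
A = np.array([[0.9, 0.1],
              [0.0, 0.95]])
C = np.array([[1.0, 0.0]])

Q = 0.01 * np.eye(2)     # covariance of the process noise v (assumed)
R = np.array([[0.04]])   # covariance of the measurement noise w (assumed)

# Propagate the state covariance P forward in time: P <- A P A^T + Q.
P = np.zeros((2, 2))
for _ in range(100):
    P = A @ P @ A.T + Q

# The uncertainty of the observable Y is C P C^T + R, i.e. the
# propagation must indeed be carried "all the way to Y".
y_var = C @ P @ C.T + R
print(y_var[0, 0])
```

With a stable A the covariance settles to a steady value; an unstable or near-neutral A would let it grow without bound, which is exactly why the structure of A and B matters for how uncertainty propagates.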

Do you think a statistic is an energy, Kevin?

That’s Roy’s argument.

Pat,

I am pretty sure I understand the point you are trying to make. I am pretty well versed in propagation of error and figuring uncertainty because I have taught the subjects in physics courses and engineering courses. I don’t know that asking the question “Do you think a statistic is an energy?” necessarily illustrates the distinction between your respective viewpoints. No, I do not think the uncertainty estimate you use from calibration, or determine from propagating error, is a physical energy, and despite an ambiguous explanation in his Figure 1, I don’t think he does either. He can weigh in and tell me I’m wrong, of course.

I do understand that if one hopes to measure an effect, say a rise in mean earth temperature, and relate it to a calculated expectation (from a GCM), then the uncertainty in temperature expectations delivered by the GCM has to be smaller than, or at least of the same size as, the resolution of temperature measurements. And your efforts (which I do not minimize in any way) suggest that as temperatures would be expected to rise from warming, the uncertainty of the calculated expectations rises faster.

However, Dr. Spencer’s point appears to me to be: if the envelope of uncertainty is as large as you maintain, then why do model runs not show more variability, and why do they remain so close to energy conservation? I think he has a point, and a few days ago I referred to the NIST Statistics Handbook, which points to square-root variances derived from observations (the GCM models in this case) as valid estimates of uncertainty – propagation of error is another means. Now I don’t know if the somewhat close correspondence of calculated expectations has a physical basis (as Dr. Spencer says) or a sociological basis (which is well documented in many fields – see Fischoff and Henrion), but I pointed out above that propagation of error done using the matrices “A, B, C, D” in my post above – it is a measurement equation system, after all – might explain this, and would be more convincing.

I have been thinking of a simpler example to illustrate what I think are the issues between the two of you. Maybe I’ll post it if I think of one.

..my guess would be that after entering all the unknowns…differently…and going back and adjusting all those unknowns…differently

…they all ended up with X increase in CO2 causes X increase in temp

Kevin Kilty:

I think you also do not really see what Pat Frank is saying. Pat Frank is not talking about the performance of the models in their internal operation. What Pat Frank has done is taken known, measured, accepted, and verified experimental data, reduced it to an uncertainty, and imposed that signal on the models. The extreme in output indicates that the models did not model the real physics of the system in such a way as to be able to handle that imposed signal, signifying that the models are incapable of doing what they claim to do.

Model calibration runs don’t show variability, Kevin, because they are tuned to reproduce observables.

So-called perturbed physics tests, in which parameters are systematically varied within their uncertainty bounds in a given model, do show large-scale variability.

In any case, the core of Roy Spencer’s argument is that the ±4 W/m^2 LWCF error statistic should produce TOA imbalances. His view is a basic misunderstanding of its meaning.

I think this comes down to how many model runs are actually being produced. So you do a million runs and pick only the ones that seem good to report. Simple. Since everybody knows what sort of numbers you are looking for in advance, it is trivial to constrain it to an arbitrary degree of precision.

So the question is: Do modelers simply chuck the runs that show 20 degree warming per doubling of CO2? If they do, then there’s your answer right there.

If I run a model 100 times, I then have a population of data points. I can then determine all kinds of statistical information about that distribution. I can say the “true value” is the mean +/- the error of the mean. As pointed out above, this describes the precision of the results.

However, if a calibration error is included at the very beginning, assuming no other errors, that error is propagated throughout. It can result in an additive error for linear calculations, or worse when there are non-linear calculations. This type of error cannot be reduced by many “runs”; the uncertainty remains.
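A minimal sketch of that distinction, with invented numbers: averaging many runs shrinks the random scatter, but a shared calibration offset survives untouched:

```python
import numpy as np

# Invented for illustration: a "true" value of 10 plus a fixed
# calibration bias of 2, with random run-to-run scatter of 0.5.
rng = np.random.default_rng(1)
true_value, bias = 10.0, 2.0
runs = true_value + bias + rng.normal(scale=0.5, size=100)

mean = runs.mean()
sem = runs.std(ddof=1) / np.sqrt(runs.size)   # standard error of the mean

# Many runs make the ensemble look *precise* (small standard error),
# but the mean remains ~2 units from the true value: the systematic
# part is untouched by averaging.
print(round(mean, 2), round(sem, 3))
```

Running 10,000 realizations instead of 100 would shrink the standard error further while leaving the 2-unit offset exactly where it was.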

Kevin,

They all produce similar results because they are all based on the same incomplete theory. Internally they are all bound by energy balancing requirements, so they can wander only so far off track. But that doesn’t mean they are right. Just because a stopped (analog) clock shows a valid time does not mean it is correct. All you can know is that it is correct twice a day, but it is useless for telling you what the current time is. So don’t let the magnitude of the uncertainty envelope bother you – once it goes outside the range of possible physical solutions, it just means the model can’t tell you anything. Those large uncertainty values say nothing about the possible states of the real climate nor the models, and I don’t think Pat ever suggests they do.

“why do model runs not show more variability”

Well, that’s easy. I have a model which says ‘just pick a value between 0 and infinity and then stick to it’.

I randomly picked one gazillion pseudo-degrees of pseudo-warming and now I’m stuck with it, with zero variability. I’ll let you guess what the uncertainty is for this prediction. No, it’s not related to variability.

As for ‘why?’ (although it’s an irrelevant question), here is one reason, directly copied from a climastrological computer model (picked at random):

C**** LIMIT SIZE OF DOWNDRAFT IF NEEDED
      IF(DDRAFT.GT..95d0*(AIRM(L-1)+DMR(L-1)))
     *  DDRAFT=.95d0*(AIRM(L-1)+DMR(L-1))
      EDRAFT=DDRAFT-DDRUP

It’s copied and pasted from a computer model I downloaded to look into from time to time and have a good laugh. The file is CLOUDS2.f from a ‘GISS model IE’, modelE2_AR5_branch.

Now, see that .95 value in there? There is no fundamental law of nature behind it. The value is not magical. It’s just a non-physical bound they added to the code so the model would not blow up so badly. This does not limit the uncertainty at all, but it does limit the model variability.
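A toy sketch of that distinction: a hard clip narrows what the model shows without narrowing what we know. (The cap value below merely echoes the quoted 0.95 factor; nothing here models a real downdraft.)

```python
import random
import statistics

random.seed(1)
# widely scattered "raw" values standing in for an unbounded model quantity
raw = [random.gauss(0.0, 10.0) for _ in range(100_000)]
cap = 0.95  # hypothetical hard bound, echoing the 0.95 factor in the quoted code
clipped = [min(x, cap) for x in raw]

# Clipping shrinks the sample spread -- the model's apparent variability...
assert statistics.stdev(clipped) < statistics.stdev(raw)
# ...but it tells us nothing about where the unclipped value actually was;
# our ignorance of the true state is untouched.
assert max(clipped) <= cap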

The computer models are filled with such anti-physical bounds. Despite those, the models still explode spectacularly, within a very short span of simulated time.

Here is what I found in some readme for that project:

“Occasionally (every 15-20 model years), the model will produce very fast velocities in the lower stratosphere near the pole (levels 7 or 8 for the standard layering). This will produce a number of warnings from the advection (such as limitq warning: abs(a)>1) and then finally a crash (limitq error: new sn < 0).”

Now, their advice for such a case is to ‘go back in time’ a little and restart the model with some adjustment of values, such that the exponential amplification will lead the model evolution far away from the bullshit values. Of course, Nature does not work that way. It doesn’t go back in time and restart the Universe when it reaches ‘inconvenient’ values.

Also, anybody who thinks that, while such an increase in winds is non-physical, the evolution from a slightly adjusted past state is physical and not still exponentially far from reality, is highly delusional. Just because the cartoon physics of a fairy tale ‘looks’ real to you doesn’t mean it’s close to reality.

Another reason is the parametrization: they cannot simulate a lot of physical processes on Earth, so they use all sorts of parametrized bullshit instead. Very complex phenomena that would lead to very high variability if simulated are instead limited by the modelers’ cargo cult religious beliefs.

So, in short, you don’t see the real variability in the models because it is anti-physically limited, and also because they do not put crashes in the results, for obvious reasons (for example, the crashed runs don’t even reach the range of those 100 years).

Climate models are pure garbage.

“The propagation of uncertainty in this situation is very complex.”

Yes, but the principles are known. Propagation is via the solution space of the homogeneous part of your equation system

dX/dt = A.X

You can even write that explicitly if you want. If you make a proportional error e in X, then its propagation is as e*exp(∫A dt) (if it stays linear). The point is that you have to take the differential equation and its solution space into account.

Isn’t that what I just said?

“what I just said”

I’m expanding on your “very complex”. It isn’t totally inscrutable. You can identify the part of the system that propagates errors, and see what it does. The key for linear systems is the eigenvalues of that integral ∫A dt. If one is positive, the error, or uncertainty, will grow rapidly. That is what leads to blowing up, instability. If they are all negative (bounded away from zero), uncertainty will be propagated, but diminishing. That is what a proper analysis of propagation of uncertainty requires. And of course, none of it is present in Pat Frank’s cartoonish Eq 1.
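The claim that an initial error is carried as e·exp(∫A dt), its fate set by the sign of the eigenvalues, is easy to check numerically in the scalar case (a minimal sketch; forward Euler with a fine step stands in for the ODE solver):

```python
import math

def euler(a, x0, t_end, n=100_000):
    """Forward-Euler solution of dx/dt = a*x, the scalar case of dX/dt = A.X."""
    dt = t_end / n
    x = x0
    for _ in range(n):
        x += a * x * dt
    return x

t = 2.0
for a in (+1.0, -1.0):               # one positive, one negative "eigenvalue"
    x_ref = euler(a, 1.0, t)         # nominal trajectory
    x_err = euler(a, 1.0 + 1e-3, t)  # same equation, perturbed initial state
    growth = (x_err - x_ref) / 1e-3  # how much the perturbation was amplified
    # the perturbation is carried as exp(a*t): amplified if a>0, damped if a<0
    assert abs(growth - math.exp(a * t)) < 0.01 * math.exp(abs(a) * t)
```

With a = +1 the 10⁻³ perturbation grows by a factor of e² ≈ 7.4 over t = 2; with a = −1 it shrinks by the same factor — instability versus damped propagation, exactly as described.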

Nick:

You have the purely mathematical argument essentially correct. Although I claim no hands-on experience with GCMs, extensive work with other geophysical models makes me skeptical of a perpetually increasing envelope of uncertainty projected on the basis of some unknown, but fixed cloud-fraction error in the control runs used to determine the base state of model climate. A random-walk statistical model may be physically appropriate for diffusion processes, with wholly independent increments in time, but not for autocorrelated climate processes driven by known forces.

Nevertheless, it seems that what you call the “cartoonish Eq.1” quite fairly represents the way GCMs treat the LWIR-powered unicorn of “CO2 forcing,” ostensibly governing “climate change” from the model base state. Isn’t it this physical confusion between actual dynamic forcing and mere state of matter that opens up a Pandora’s box of uncertainty for GCM projections of surface temperatures, irrespective of any planetary TOA energy balancing?

Sky

“Nevertheless, it seems that what you call the ‘cartoonish Eq.1’ quite fairly represents the way GCMs treat the LWIR-powered unicorn of ‘CO2 forcing,’ ostensibly governing ‘climate change’ from the model base state.”

It is cartoonish because, while it may represent a particular solution of the equations, it in no way represents alternative solutions that would be followed if something varied. In fact, it is so bereft of alternative solutions that he has to make one up with the claim that the observed rmse of 4 W/m2 somehow compounds annually (why annually?).

I made a planetary analogy elsewhere. Suppose you had a planet in circular orbit under gravity. You could model it with a weight rotating without gravity but held to the centre with a weightless rod. You could get exactly the same solution. But how would it treat uncertainty about velocity? Or even position? It can’t show any variation in position (radially), while with velocity, it could move faster, but without the change in orbit that would follow under gravity. And velocity that isn’t orthogonal to the radius? All it can do is break the rod.

Propagation of uncertainty with differential equations depends entirely on how it is carried by the solution space. If you have a different solution space, or none at all, you’ll get meaningless answers.

Annually because it’s an annual error, Nick.

And that “cartoonish Eq. 1” accurately emulates the air temperature projections of CMIP5 models. Embarrassing for you, isn’t it. Hence your cartoonish disparagements.

The response of the emulation equation to forcing mirrors the response of the models to forcing. Its response to step-wise uncertainty in forcing then indicates the impact of step-wise uncertainty on the model response.

Nick Stokes:

“It is cartoonish because, while it may represent a particular solution of the equations, it in no way represents alternative solutions that would be followed if something varied.”

That is not true. Although this is sometimes called “error propagation”, what is being propagated is not an error or a few errors, but the probability distribution of a range of errors, through the propagation of the standard deviation of the random components of the results of the computations. Pat Frank assumes that the variances of the error distributions add at each step in the solution of the difference equation; he has calculated the correlations of successive errors in order to add in the appropriate covariances, instead of making the common assumption of independent errors. The cone shape results from his graphing the standard deviation of the error distribution instead of its variance.
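The bookkeeping described above — variances (plus covariances) added at each step, with the standard deviation plotted — can be sketched generically (the per-step sigma and the lag-1 correlation below are placeholders, not the paper's fitted values):

```python
# Step-wise accumulation of uncertainty: variances add at each step, and a
# positive lag-1 correlation r contributes an extra 2*r*sigma^2 of covariance
# per step (a simplified, lag-1-only version of the full covariance sum).
sigma = 4.0   # per-step uncertainty (illustrative, arbitrary units)
r = 0.0       # correlation between successive step errors (0 = independent)

variances = []
var = 0.0
for step in range(1, 101):
    var += sigma**2 + (2 * r * sigma**2 if step > 1 else 0.0)
    variances.append(var)

sds = [v ** 0.5 for v in variances]

# Variance grows linearly with step count, so the plotted standard deviation
# grows like sqrt(n) -- a widening cone, not a straight line.
assert abs(sds[99] - sigma * 100 ** 0.5) < 1e-9
assert sds[3] / sds[0] == 2.0  # sd at step 4 is twice sd at step 1
```

Plotting `sds` against step number gives the cone shape; plotting `variances` instead would give a straight line, which is the graphing distinction made above.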

One could, alternatively, run each GCM entire, many trials, changing each trial only the single parameter that is being studied, on a grid of evenly spaced values within its 95% CI. That would propagate particular possible errors, and produce a cone of model outputs, probably not shaped as prettily as Pat Frank’s cone. This would be less time-consuming than the bootstrap procedure I mentioned elsewhere, but still require faster computers than those available now.
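The grid-scan idea can be sketched without a GCM by standing in a cheap emulator (everything below — the toy model, the feedback parameter, its confidence interval — is hypothetical):

```python
def toy_model(feedback, n_years=100):
    """Hypothetical stand-in for one GCM run: a temperature trajectory with an
    uncertain feedback parameter. Not a real climate model."""
    temp = 0.0
    history = []
    for _ in range(n_years):
        temp += 0.02 * (1.0 + feedback)  # fixed forcing, scaled by feedback
        history.append(temp)
    return history

# Evenly spaced grid across the parameter's (hypothetical) 95% CI
lo, hi, n_grid = -0.5, 0.5, 11
grid = [lo + i * (hi - lo) / (n_grid - 1) for i in range(n_grid)]
ensemble = [toy_model(f) for f in grid]

# The envelope of final-year outputs is the "cone" of model responses:
finals = [run[-1] for run in ensemble]
assert min(finals) == toy_model(lo)[-1]
assert max(finals) == toy_model(hi)[-1]
```

Each grid point costs one full model run, which is why the approach is feasible for a toy emulator but expensive for a real GCM.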

“One could, alternatively, run each GCM entire, many trials, changing each trial only the single parameter that is being studied, on a grid of evenly spaced values”

A great deal has been said in these threads which is not only ignorant of how error really is propagated in differential equations, but of practice in dealing with it. Practitioners have to know about the solution space, partly because if propagation of error grows, the solution will fail (instability), and partly because they really are interested in what happens to uncertainty. If you look at the KNMI CMIP5 table of GCM results, you’ll see a whole lot of models, scenarios and result types. But if you look at the small number beside each radio button, it is the ensemble number. Sometimes it is only one – you don’t have to do an ensemble in every case. But very often it is 5, 6 or even 10, just for one program. CMIP has a special notation for recording whether the ensembles are varying just initial conditions or some parameter. You don’t have to do a complete scan of possibilities in each case. There is often not much difference following from the source of perturbation.

This is a far more rigorous and effective way of seeing what the GCM really does do to variation than speculating with random walks.

Nick, this thread has become a tangled mess of people looking at this in incongruous ways, I am afraid, but by looking at the eigenvalues of “A” what you are doing is verifying that the solution converges. This is not the same thing I am speaking of, which is something like looking at the difference of two solutions (both of which converge) given a small change of some parameter, and determining if one can, at some point, resolve one solution from the other, given the distribution of uncertainties in the problem parameters and initial data. I thought this is what Pat Frank was getting at, but I am no longer sure of that. I also thought Spencer had a valid point, but now I am not sure I have interpreted his point correctly.

I had thought about an example to illustrate this, but the whole discussion has become so unclear, that I don’t think we are even discussing the same things. I need to focus on my paying job today.

Nick Stokes:

“Practitioners have to know about the solution space, partly because if propagation of error grows, the solution will fail (instability), and partly because they really are interested in what happens to uncertainty.”

As experts sometimes say, I have done this a lot and you can have large errors without instability; you can get what looks like a nice stable solution that has a huge error, without any indication of error. The point about propagation of uncertainty still seems to be eluding you: the error is not known, only its distribution (at least approximated by a confidence interval), and it is the distribution of the error that is propagated.

This brings us back to the question addressed by Pat Frank, a question formerly ignored: Given the uncertainty in the parameter estimate, what is the best estimate of the uncertainty in the forecast? Hopefully, Pat Frank has started the ball rolling, and there will be lots more attempts at an answer in the future.

“This is not the same thing I am speaking of, which is explained as something like looking at the difference of two solutions (both of which converge) having a small change of some parameter, and determining if one can, at some point, resolve one solution from the other, given the distribution of uncertainties in the problem parameters and initial data.”

I think it is the same. You have formulated it as a first-order system, so it is characterised by its starting value. If you start from state s0, the solution is s0*exp(∫A dt). If you were wrong, and really started from s1, the solution is s1*exp(∫A dt). The evolution of the error is (s1-s0)*exp(∫A dt). It’s true that the exponential determines convergence of the solutions, but it also determines what happens to the error. It looks like it is all just scaling, but with non-linearity the separation of solutions can be more permanent than the convergence/divergence of solutions.

Kevin Kilty:

“The uncertainty vector, w, represents that our measurements (and the corrections we apply) are subject to their own uncertainties. We should propagate all the way to Y to fully characterize the model uncertainties.”

In applications, the matrices A, B, C, and D all have to be estimated from the data, hence are subject to random variation and uncertainty. When computing the modeled value Y for a new case of X, or for the next time step in the sequence of a solution of the differential equation, those uncertainties also contribute to the uncertainty in the modeled value of Y. I agree that “We should propagate all the way to Y to fully characterize the model uncertainty.” I have merely elaborated a detail of the process.
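One way to see that extra contribution is a small Monte Carlo on a scalar state-space model: perturbing the estimated coefficients within assumed standard errors widens the spread in Y beyond what input noise alone produces (all values below are illustrative):

```python
import random
import statistics

random.seed(2)

def simulate(a, b, c, u=1.0, steps=50):
    """Scalar state-space model: x' = a*x + b*u, y = c*x."""
    x = 0.0
    for _ in range(steps):
        x = a * x + b * u
    return c * x

# Nominal coefficient estimates and an assumed standard error for each
a0, b0, c0 = 0.9, 1.0, 1.0
se = 0.01

# Case 1: only the input u is noisy; the estimated matrices are taken as exact
y_input_only = [simulate(a0, b0, c0, u=1.0 + random.gauss(0, se))
                for _ in range(2000)]

# Case 2: the estimated coefficients carry their own sampling uncertainty too
y_full = [simulate(a0 + random.gauss(0, se),
                   b0 + random.gauss(0, se),
                   c0 + random.gauss(0, se),
                   u=1.0 + random.gauss(0, se)) for _ in range(2000)]

# Coefficient uncertainty adds its own spread on top of the input noise
assert statistics.stdev(y_full) > statistics.stdev(y_input_only)
```

Here the sensitivity to `a` is large (the steady state scales as 1/(1−a)), so even a 1% standard error in the estimated coefficient dominates the output spread — a concrete instance of matrix uncertainty propagating to Y.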

In his reply to me, Roy dismissed the distinction between an energy flux and an error statistic as “semantics.”

This extraordinary mistake shows up immediately in Roy’s post above. Quoting:

“But, he then further states, ‘LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.’

“While I agree with the first sentence, I thoroughly disagree with the second. Together, they represent a non sequitur. All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!”

Roy plain does not get the difference between a calibration error statistic and an energy flux. He is treating the ±4 W/m^2 long wave cloud forcing error statistic as an energy.

He clearly thinks this ±4 W/m^2 statistic should impact model expectation values.

I’ve pointed out over and over that calibration error statistics are derived from comparisons between simulations and observations.

See my replies to Roy on his site.

Chemists, physicists and engineers learn about error analysis and methods calibration in their first undergraduate year.

But Roy doesn’t get it.

Nor, apparently, do most other climate modelers (I exclude my reviewers at Frontiers).

Roy’s objection has no critical force in terms of physical science. None.

A statistic is not an energy. It’s that simple.

If ±4 W/m^2 isn’t a statistical bound on power (not energy!) density, then it has no place in physical science. It’s that simple.

It is an estimate of our ignorance. It places lower bounds on what the models can tell us about the climate state at any point in the future.

This tautological statement is uninstructive in the present case. Inasmuch as the purported effect of misestimating cloud fraction is mistakenly treated as a recurrent error in system forcing, rather than an initial error in an internal system state, the bounding problem is ill-posed at the outset. Such cavalier treatment tells us little about the uncertainty of future climate states, which depend upon many (sometimes chaotic) factors not considered here.

It isn’t meant to tell us anything about future climate states (as in the actual climate), but about the future climate states predicted by the models. And in that regard it is very instructive. And you have mischaracterized the error; it is a specification error (the base theory is incomplete), which like calibration errors puts a limit on the accuracy of the calculations. These types of errors accumulate at each iteration of the calculation, which is to say, at each time-step.

That the discussion is about errors in model predictions is self-evident. But model errors can be of a great variety, with entirely different effects upon model output. And, by definition, model error is deviation of model output from actual states. Without identifying the precise nature of the error in the context of the workings of the model, your “estimate of our ignorance” is uninstructive.

The model error to which I’m calling attention was clearly specified as the misestimation of the base state of INTERNAL power fluxes due to mistaken “cloud-forcing” during the calibration phase. Unless all the models handle this uniformly as a variable, rather than a presumed fixed parameter (as most models do with relative humidity), then there’s no “accumulation” of that particular error at each model time-step. No doubt, other errors may ensue and indeed propagate, but not according to the prescription given here by Frank.

Two questions that I have not seen addressed regarding climate models:

First, if models are being improved to, say, more realistically account for clouds, how is this improved model incorporated into projections? Specifically, there is some starting point (let’s say 1980) where a model is initialized and begins its “projection run”. Climate history before this period is used to adjust empirical constants (i.e. “tune” or “fudge”). Some time later, perhaps decades, along comes an improved version. However, now the actual climate history since model initiation is known. Does the run time of the model get restarted from the present? Or is the model re-initialized to 1980? If the latter, the actual projection time is obviously considerably shorter. Doesn’t this mean that there is insufficient projection time to really judge the quality of the model? If the former, what is to stop modelers from using the post-1980 climate record to “look at the answer key” and further “tune” their models?

Second, as I understand it, atmospheric CO2 levels in the model are not dynamically calculated but are inputs based on the Representative Concentration Pathways. I understand the reasons for this. Which RCP most closely matches actual data (so far) and how much have they differed?

I think the conclusion is if the models were improved in such a way that they accurately model the cloud forcing behaviors, then we might end up concluding that cloud forcing behaviors control the climate and that likely the CO2 component is insignificant.

Roy, it’s the only way to see this issue. Thanks!

Roy wrote, “The similar behavior of the wide variety of different models with differing errors is proof of that. They all respond to increasing greenhouse gases, contrary to the claims of the paper.”

Neither Roy nor anyone else can show that my paper says that models do not respond to increasing greenhouse gases.

That’s because such a statement is nowhere in the paper.

What the paper says is that the response of models to greenhouse gases has no physical meaning; a very, very, very different message than what Roy avers.

The reason for the lack of physical meaning is that the resolution lower limit of the models is much larger than the perturbation they claim to detect.

The models calculate a response. The calculated response is meaningless.

Roy, I regret to say this, but for all your work you plain have not understood anything about the paper. Not word one.

I’ve gotten emails from physicists who are happy with the paper.

I really do think that the cure for the mess that is climate science would be to require all of them to take an undergraduate major in experimental physics or chemistry, before going on to climate studies.

None of them seem to know anything about physical error analysis.

“None of them seem to know anything about physical error analysis.”

That would be my guess as well. Because I was always wondering how, with all the uncertainties, complex calculations and errors adding up over the running time of the models, they could still give such small confidence intervals.

Errors propagate and amplify in physical measurements combined from different sources. It makes no sense that they shouldn’t in modeled values over time; the errors should therefore be very large.

The exactness of my understanding of all this is puny compared to most here, but the glimmer of rational insight that I do have about it leads me to think that Pat is arguing on one level, while Roy and others are arguing on another. Pat is on the top-tier level, and the others have not quite ascended there yet. Hence, the dissonance between perspectives.

Pat talks about reality, while others talk about performance. If performance cannot be compared to reality with great certainty, then model performance is just model performance — of an interesting educational toy.

Even though I’m out of my league in terms of commanding the deep understanding, somehow I think I still get Pat’s drift and feel confident that his years of studying this and pursuing the explanation of this are not wasted.

I see people already trashing him, calling him nuts, painting him as lacking in some fundamental understanding, demeaning his publisher, etc., etc., … totally what I expected. I didn’t even need a model to predict this reliably.

Bingo. The critics do not seem to agree that the models’ outputs should in some way be compared to reality.

I think Dr Spencer’s quote,

“And climate sensitivity, in turn, is a function of (for example) how clouds change with warming, and apparently not a function of errors in a particular model’s average cloud amount, as Dr. Frank claims.”

shows the area of disagreement clearly and indicates to me that Dr Frank is correct. If, as we expect, clouds change with temperature and temperature IS propagated through the model, then any error in cloud functionality will necessarily propagate through the model.

Steve,

Your pinpoint dissection of the crucial distinction in this argument is the way I see it.

I’d like to see a detailed expose explaining how it could possibly be otherwise.

I have great respect for both Spencer and Frank. The only dog I have in this “fight” (discussion) is the truth.

I will continue to follow this thread.

Dr Spencer, thank you for your essay.

Pat Frank, thank you for your responses.

This is an interesting debate. Both Dr Frank and Dr Spencer agree that the models are faulty, but disagree on how.

I would be interested in evidence that the models are useful, especially for determining policy.

I have not heard of any debate on the pro-IPCC side in which fundamental assumptions behind the models are questioned. I would like to believe those debates occur and are as lively as this one.

Colin Landrum,

You say,

“I would be interested in evidence that the models are useful, especially for determining policy.”

A model can only be useful for assisting policy-making when it has demonstrated forecasting skill.

No climate model has existed for 50 or 100 years so no climate model has any demonstrated forecasting skill for such periods.

In other words, the climate models have no more demonstrated usefulness to policymakers than the casting of chicken bones to foretell the future.

Richard

Richard, is it possible to plug in the variables at the time of the 30s, 40s, 50s and forecast the cooling of the 70s?

Then plug in variables for the 60s, 70s to see the warming of the 80s, 90s?

Then what do we use for the pause?

Thanks

Derg,

You ask me,

“Richard is it possible to plug in the variables at the time of the 30s, 40s, 50s and forecast the cooling of the 70s?

Then what do we use for the pause?”

I answer as follows.

Several people have independently demonstrated that the advanced climate models project air temperature merely as a linear extrapolation of greenhouse gas (GHG) forcing. Some (i.e. Pat Frank and Willis Eschenbach) have reported their determinations of this on WUWT.

Therefore, if one were to “plug in the variables at the time of the 30s, 40s, 50s” the model would not “forecast the cooling of the 70s” because atmospheric GHGs have been increasing in the air to the present. However, it is possible to ‘plug in’ assumed cooling from atmospheric sulphate aerosols. Such a ‘plug in’ of historic cooling would be a ‘fix’ and not a forecast.
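The linear-extrapolation behaviour described above can be expressed as a one-line emulator (a generic sketch of the reported form; the sensitivity coefficient is illustrative, not a fitted constant from Frank’s or Eschenbach’s work):

```python
def emulated_anomaly(forcings, k=0.5):
    """Temperature anomaly as a linear function of cumulative GHG forcing.
    k (K per W/m^2) is an illustrative sensitivity, not a fitted value."""
    total = 0.0
    anomalies = []
    for df in forcings:          # annual forcing increments, W/m^2
        total += df
        anomalies.append(k * total)
    return anomalies

# Monotonically rising GHG forcing can only produce monotonically rising
# temperature in such an emulator -- it cannot "forecast the cooling of the
# 70s" unless a separate (e.g. aerosol) cooling term is plugged in by hand.
rising = [0.03] * 50
out = emulated_anomaly(rising)
assert all(b > a for a, b in zip(out, out[1:]))
```

This is why a hindcast of mid-century cooling from such a model requires the aerosol ‘plug in’ described above: the GHG term alone can only go one way.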

There is precedent for modellers using this ‘fix’ and I published a report of it; ref. Courtney RS, An Assessment of Validation Experiments Conducted on Computer Models of Global Climate (GCM) Using the General Circulation Model of the UK Hadley Centre, Energy & Environment, v.10, no.5 (1999).

That peer-reviewed paper concluded:

“The IPCC is basing predictions of man-made global warming on the outputs of GCMs. Validations of these models have now been conducted, and they demonstrate beyond doubt that these models have no validity for predicting large climate changes. The IPCC and the Hadley Centre have responded to this problem by proclaiming that the inputs which they fed to a model are evidence for existence of the man-made global warming. This proclamation is not true and contravenes the principle of science that hypotheses are tested against observed data.”

Importantly, global temperature has been rising intermittently for centuries as it recovers from the depths of the Little Ice Age (LIA). The estimates of global temperature show that most of that warming occurred before 1940 but 80% of the anthropogenic (i.e. human caused) GHG emissions were after that. Indeed, the start of the cooling period coincided with the start of the major emissions. Advocates of human-made global warming excuse this problem by attributing

(a) almost all the rise before 1940 to be an effect of the Sun,

(b) the cooling from 1940 to 1970 to be an effect of human emissions of aerosols, and

(c) the warming after 1970 to be mostly an effect of human emissions of greenhouse gases.

Evidence is lacking for this convoluted story to excuse the disagreement of the emissions with the temperature history. And they have yet to agree on an excuse for the ‘pause’ since 1998.

Furthermore, the climate models are based on assumptions that may not be correct. The basic assumption used in the models is that change to climate is driven by change to radiative forcing. And it is very important to recognise that this assumption has not been demonstrated to be correct. Indeed, it is quite possible that there is no force or process causing climate to vary. I explain this as follows.

The climate system is seeking an equilibrium that it never achieves. The Earth obtains radiant energy from the Sun and radiates that energy back to space. The energy input to the system (from the Sun) may be constant (although some doubt that), but the rotation of the Earth and its orbit around the Sun ensure that the energy input/output is never in perfect equilibrium.

The climate system is an intermediary in the process of returning (most of) the energy to space (some energy is radiated from the Earth’s surface back to space). And the Northern and Southern hemispheres have different coverage by oceans. Therefore, as the year progresses the modulation of the energy input/output of the system varies. Hence, the system is always seeking equilibrium but never achieves it.

Such a varying system could be expected to exhibit oscillatory behaviour. And, importantly, the length of the oscillations could be harmonic effects which, therefore, have periodicity of several years. Of course, such harmonic oscillation would be a process that – at least in principle – is capable of evaluation.

However, there may be no process because the climate is a chaotic system. Therefore, the observed oscillations (ENSO, NAO, etc.) could be observation of the system seeking its chaotic attractor(s) in response to its seeking equilibrium in a changing situation.

Very importantly, there is an apparent ~900 year oscillation that caused the Roman Warm Period (RWP), then the Dark Age Cool Period (DACP), then the Medieval Warm Period (MWP), then the Little Ice Age (LIA), and the present warm period (PWP). All the observed rise of global temperature in recent times could be recovery from the LIA that is similar to the recovery from the DACP to the MWP. And the ~900 year oscillation could be the chaotic climate system seeking its attractor(s). If so, then all global climate models are based on the false premise that there is a force or process causing climate to change when no such force or process exists.

But the assumption that climate change is driven by radiative forcing may be correct. If so, then it is still extremely improbable that – within the foreseeable future – the climate models could be developed to a state whereby they could provide reliable predictions. This is because the climate system is extremely complex. Indeed, the climate system is more complex than the human brain (the climate system has more interacting components – e.g. biological organisms – than the human brain has interacting components – e.g. neurones), and nobody claims to be able to construct a reliable predictive model of the human brain. It is pure hubris to assume that the climate models are sufficient emulations for them to be used as reliable predictors of future climate when they have no demonstrated forecasting skill.

This is a brief response to your important question and I hope this brief response is a sufficient answer.

Richard

Richard.

This is a critical point. The satellite data record is a fraction of the projected time.

The error factors in one-year seasonal hurricane models/forecasts highlight this problem. These forecasts are released only weeks prior to the season but can be wildly inaccurate – a six-month forecast based on 40 years of data.

So if my forecast of 3 Atlantic USA touchdowns and an ACE of 80 is the closest to the final outcome, does that make me the best forecaster? No, just the luckiest, despite the reasoning.

Martin Cropp,

Your point is true. However, it is important to note that the climate models have not existed for 50 years and, therefore, they have yet to provide any predictions that can be evaluated for skill at predicting climate over such periods.

Climate models have NO demonstrated predictive skill; none, zilch, nada.

Richard

CL, there isn’t any. See my previous model-specific posts here trying to explain why from a completely different first-principles perspective.

Computational intractability (the CFL constraint on numerical solutions to partial differential equations) forces parameterization, which forces parameter tuning, which brings in the attribution question. All from illustrated first principles, no fancy math needed. You can find my original ‘The Problem with Models’ post (riffing on the famous ‘The Trouble with Tribbles’ Star Trek episode) via the search sidebar at WUWT. There are also several related follow-ups.
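The CFL argument can be put in back-of-envelope form: the stable explicit timestep scales with the grid spacing, so refining the grid multiplies the cost on two axes at once (the wind speed and grid spacings below are purely illustrative):

```python
def cfl_max_timestep(dx_m, v_ms):
    """Largest stable explicit timestep under the CFL condition dt <= dx / v."""
    return dx_m / v_ms

def relative_cost(dx_coarse, dx_fine, v_ms=100.0):
    """Halving the horizontal grid spacing in 2D quadruples the cell count AND
    halves the allowed timestep: ~8x the work per halving (3D is worse)."""
    cells = (dx_coarse / dx_fine) ** 2
    steps = cfl_max_timestep(dx_coarse, v_ms) / cfl_max_timestep(dx_fine, v_ms)
    return cells * steps

assert cfl_max_timestep(100_000.0, 100.0) == 1000.0  # 100 km grid, 100 m/s wind
assert relative_cost(100_000.0, 50_000.0) == 8.0     # halve dx: ~8x the work
```

This is why grids fine enough to resolve individual clouds are out of reach, and why sub-grid processes like cloud formation must be parameterized instead.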

Climate models’ global energy balances may be stable, but that doesn’t mean they correctly replicate the true components of Earth’s energy balance. That’s an obvious point, and Dr Frank makes it, but Dr Spencer seems to take, as a starting point, that the replication *is* true. It may be that as a climate scientist, he has to. I made an important point in the other thread which bears repeating:

A practitioner of any discipline must accept *some* tenets of that discipline. A physicist who rejects all physical laws won’t be considered a physicist by other physicists, and won’t be finding work as a physicist. Similarly, Dr Spencer must accept certain practices of his climate science peers, if only to have a basis for peer discussion and to be considered qualified for his climate work. Dr Frank doesn’t have that limitation in his overview of climate science — he is able to reject the whole climate change portfolio in a way which Dr Spencer can’t. This is the elephant in the room.

NZWillie,

Science can’t start with an assumption, no matter if all agree and trillions of dollars are burnt in piety. May the elephant in the room become visible.

The king really is a buck naked embarrassment.

Exactly. And why climate “science” is unable to advance.

Dr Spencer, it would seem to me that the argument connected to Figure 1 is a total miss. It stands to reason that the test model runs shown in Figure 1 are either 1) run with all data inputs held constant (thus no deviation from “0”), or 2) tuned to show zero natural forcing on climate (again, no deviation from zero).

The point I took away from Frank is that the error associated with cloud data that is input into the system is larger than the CO2 forcing signal. As such, the error in the LWCF of the model, be it from the data itself or from the tuning within the system, is larger than the CO2 signal. It’s apparent that the tuning for LWCF moves with a change in data, thus if you hold the data steady in a test run, you get the resulting flat line.

Otherwise, the test models are nothing more than flat lines designed to show a forcing from CO2, which could be true as well.

Anything could be true, until data shows otherwise. So far, the data show that all climate prediction models are wrong. So while studies in error and sensitivity analysis are useful, it’s clear that the fundamental premise that CO2 controls the climate is wrong. If that CO2 premise were correct, the validating climate model would have been on the front page of the NYT long ago.

” . . . the fundamental premise that CO2 controls the climate is wrong.”

And here is where I think all climate models (well, perhaps with the singular exception of the Russian model) fail.

To the extent that basic physics says CO2 should act as a “greenhouse gas,” which is credible due to its absorption and re-radiation spectral bands, it likely became saturated in its ability to cause such an effect at much lower concentration levels (likely in the range of 200-300 ppm; see https://wattsupwiththat.com/2013/05/08/the-effectiveness-of-co2-as-a-greenhouse-gas-becomes-ever-more-marginal-with-greater-concentration/ ), now leaving only water vapor and methane as the current non-saturated GHGs.

More specifically, it is absurd to say the atmospheric CO2 forcing is linear going into the future (a belief held by the IPCC and most GCMs), i.e., that doubling any GHG doubles the amount of energy radiated up/down. It is well known (well, perhaps outside of global climate models) that any gas absorbing radiation can become “saturated” in terms of radiative energy absorption if the “optical column length” exceeds a certain value, generally taken to be six e-folding lengths. This is well summarized in the following paragraph extracted from http://clivebest.com/blog/?p=1169 :

“The absorption length for the existing concentration of (atmospheric – GD) CO2 is around 25 meters i.e. the distance to reduce the intensity by 1/e. All agree that direct IR radiation in the main CO2 bands is absorbed well below 1 km above the earth. Increasing levels of CO2 merely cause the absorption length to move closer to the surface. Doubling the amount of CO2 does not double the amount of global warming. Any increase could be at most logarithmic, and this is also generally agreed by all sides.”
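The saturation argument in the quoted paragraph can be made concrete with a short numeric sketch. This is a toy Beer-Lambert attenuation, not a radiative-transfer calculation; the 25 m absorption length comes from the quote above, the 400 ppm reference concentration is an assumption for illustration, and the logarithmic forcing expression is the widely used simplified fit of Myhre et al. (1998):

```python
import math

# Toy Beer-Lambert attenuation. The 25 m absorption length for band-centre
# CO2 IR is the figure from the quoted paragraph; 400 ppm is taken as the
# reference concentration (an assumption for illustration).
def absorption_length(c_ppm, c_ref=400.0, l_ref=25.0):
    # absorption length scales inversely with absorber concentration
    return l_ref * c_ref / c_ppm

def transmitted_fraction(path_m, c_ppm):
    # fraction of band-centre IR surviving a path of the given length
    return math.exp(-path_m / absorption_length(c_ppm))

# Doubling CO2 halves the absorption length, but direct transmission
# through 1 km is essentially zero either way: the band is saturated.
print(transmitted_fraction(1000.0, 400.0))  # ~4e-18
print(transmitted_fraction(1000.0, 800.0))  # ~2e-35

# The widely used simplified forcing fit (Myhre et al., 1998) is
# logarithmic, not linear, in concentration:
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)  # W/m^2

print(co2_forcing(560.0))  # one doubling from 280 ppm -> ~3.7 W/m^2
```

The logarithmic form means each successive doubling adds the same ~3.7 W/m², which is the "at most logarithmic" behavior the quoted paragraph describes.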

Dr. Spencer has a spreadsheet climate model posted on his website. It is set up to run daily. It’s a 50-year analysis (over 18,400 cells).

If CO2 forcing is on and the time step is daily, then there are parameters for:

Water depth (m)

Initial CO2 (W/m2)

CO2 increase per decade (W/m2)

Non-radiative random heat flux (a coefficient parameter)

Radiative random heat flux (a coefficient parameter)

Specified feedback parameter

Check it out.

http://www.drroyspencer.com/research-articles/

Excellent discussion! From Dr Spencer’s response I don’t quite get this:

“If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior.”

But that only shows that the model is internally consistent; it doesn’t show whether this consistency mirrors real life. A model can be consistent and internally well balanced but have nothing to do with physical reality. How was this control run over 100 years validated against actual temperature changes with such precision? Normally, parts of a simulation, as in finite element analysis, are run and compared with actual experiments, like the deformation of a material. Model consistency may come out of favourable assumptions or reduced sensitivity. As far as I understand, those models did not anticipate the ‘hiatus’ in global warming we saw recently for several years. If uncertainties in the energy fluxes balanced in a model are much greater than the signal due to CO2, I reckon Dr Frank is right to point at that.

How do they carry out control runs over 100 years with varying cloud fractions? From which source do they take cloud fraction data? Or do they have a mathematical model inside the model calculating cloud fractions?

No, they do not have this data, and they cannot calculate cloud fractions. Do you think the models can calculate an annual cloud fraction for the year 2020? I do not believe so, because clouds are a great unknown, which is not included in the models. Therefore there is no propagation error of cloud fractions.

I have been asking for a specific piece of information: in which way do climate models calculate the cloud fraction effect? No response.

Hey Antero,

“Do you think that the models can calculate an annual cloud fraction for the year 2020? I do not believe so, because clouds are a great unknown, which is not included in the models.”

These are good questions. The fact that a model is well balanced against energy fluxes doesn’t tell you much about how uncertainty was treated. Say we have 3 components: 50±3 W/m^2, 20±2 W/m^2 and 30±1 W/m^2, where the last two terms counteract the first one. If we run simulations with all terms at their middle values, it will all balance out nicely. But if the errors are systematic and we run a simulation with 48 W/m^2 for the first term and 22 W/m^2 and 31 W/m^2 for the other two, those fluxes will not balance out (53 > 48), pushing the simulation out of balance. So my guess is that those uncertainties were treated in such a way that they cancel out.
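As a minimal sketch of that systematic-error point, using the same made-up 50/20/30 W/m² numbers:

```python
# Toy energy-budget check for the three-component example above:
# one incoming flux of 50±3 W/m^2 balanced by outgoing fluxes of
# 20±2 and 30±1 W/m^2 (all values illustrative, not model output).

def imbalance(incoming, outgoing):
    # net flux error: positive means the outgoing terms exceed incoming
    return sum(outgoing) - incoming

# Mid-point values balance exactly:
print(imbalance(50.0, [20.0, 30.0]))   # 0.0

# Systematic (correlated) errors at the edges of the stated ranges
# do not cancel: the budget is off by 5 W/m^2 even though every
# component is within its quoted uncertainty.
print(imbalance(48.0, [22.0, 31.0]))   # 5.0
```

Tuning amounts to choosing offsets within the stated ranges so the imbalance is forced back to ~0, regardless of whether the individual component values are physically correct.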

You can search scholar.google.com and look for the answer in the climate modeling literature, Antero.

An explanation of how GCMs model clouds is outside the limits of this conversation.

The fact that they don’t calculate cloud fractions *is* the error. It is a missing part of the theory that the models were derived from.

Not a comment about Pat Frank’s paper or Roy Spencer’s reply but, rather, about WUWT.

I remember a while ago discussions of the need for an alternative to “pal review,” and that a blog forum such as WUWT could be something like that.

I think we’re seeing it here.

Two honest and real scientists who respect each other are having an open disagreement about what is in a paper.

It’s not time for either to “dig their heels in” but to consider what each other (and others) have to say.

Have I misunderstood this? Are the models tuned so that they explicitly do NOT reproduce past temperatures but rather an artificial stable state before human CO2 emissions were deemed significant? I must have been naive as I always assumed that the models were tuned to reproduce past climate change, which in itself would not be a guarantee that they were any use predicting the future but might give you a fighting chance. If anyone justifies this with reference to the horizontal line of a hockey stick then I might curse and swear!

Pat Frank’s model uncertainty (over a 30°C span by 2100) does not represent physical temperature uncertainty, as we know the global average temperature cannot change that much in such a short time, much less because of changes in cloud cover. It does not represent modeled temperature uncertainty, as we know that model temperatures cannot change that much in such a short time. It is unclear to me what it represents. Pat Frank says it represents statistical uncertainty, but if it has no connection with what physical or modeled temperatures can do, why is it relevant, and why is it expressed in °C? My guess is that it represents the uncertainty introduced by Pat Frank’s assumptions, which are different from the models’ assumptions.

In any case I am glad that Pat Frank’s article got published so these issues can be examined in detail. The idea that only correct science should be published is silly and contrary to the interests of science. Controversial hypotheses need to be published even if wrong, because nobody can predict the influence they will have on other researchers, and some of the best articles were highly controversial when published.

If Pat Frank is correct (and even after all the debate, I currently feel he is), then it represents the models’ inability to mimic reality. If the model cannot handle uncertainties that are experimentally known to be real, then the model clearly does not contain the mechanisms required to control and dampen the climate to achieve the type of result which makes more sense. The models are constrained to react only to greenhouse gas forcing, and nothing else.

If that is correct (and to a certain extent it probably is), then Pat Frank’s uncertainty makes even less sense, as GHGs show very limited capacity to change over time. Atmospheric CO2 changes by about 2-3 ppm per year, less than 1%, and the change over time is one of the most constant parameters in climate. It cannot build up to a huge uncertainty.

Hey Javier,

“Atmospheric CO2 changes by about 2-3 ppm per year, less than 1%, and the change over time is one of the most constant parameters in climate. It cannot build up to a huge uncertainty.”

My understanding of what Pat is saying is that the uncertainty due to cloud forcing alone is an order of magnitude greater than the forcing due to CO2. Hence the question is: how would you know that temperature changes are due to CO2 forcing and not due to unknowns in cloud forcing? In other words, changes in energy flux due to cloud forcing can completely eclipse the energy flux due to CO2.

If we are talking about the models, the answer is clearly no. The models are programmed to respond essentially to GHG changes and, temporarily, to volcanic eruptions, and little else. Cloud changes and solar changes are not a significant factor in climate models, and cloud changes (and albedo) are programmed to respond to the increase in GHGs in the same direction (more warming). We know what makes the models tick, and that cannot produce a huge uncertainty. If anything, models are way too predictable knowing the change in GHGs.

Frankly, the uncertainty is much higher in the real world. See the pause, for example. But unless there is a second coming of either Jesus or the Chicxulub impactor, there is no way we could see much more than one tenth of the temperature change by 2100 that Pat Frank’s uncertainty allows.

The reason that the models can’t build up to a huge temperature change is that they are force-fed code that constrains their volatility and temperature boundaries. They were known to blow up until modellers constrained this. Thus they all (except the Russian model) gravitate around a predefined range of temperature change based on a rather simple linear transposition of increased CO2 to increased temperature. This in no way lessens the system error. It simply artificially constrains it. Roy is arguing that because the energy system input/output is then balanced, that somehow lessens the system error. It is impossible to lessen the system error until you carry out real-world experiments that produce real-world data that then let you parameterize your equations better. The idea that their use of Navier-Stokes equations is reproducing the temperature profile of the globe is ludicrous.

It’s not even really related to the models. If you say that the real-world uncertainty in the cloud forcings is 4 W/m2/year, then you’re saying the uncertainty in our *actual* understanding of the cloud forcings grows by that much, year over year. That it is totally plausible that within ~15 years, cloud forcings could be 20 W/m2 lower, or 20 W/m2 higher.

That’s a major conclusion of Frank’s paper, and it feeds into the models. But it’s significant even before applying it to the models, because a huge uncertainty in climate forcings is going to cause a huge uncertainty in temperatures, in the real world *and* in any model, even an accurate one.

Likewise, if the real world could *not* plausibly warm up or cool down by these huge temperature amounts within 20 years, then our real-world climate forcing uncertainty cannot increase by 4 W/m2/year. So *in the real world*, this number must be wrong. Our uncertainty about climate forcings is either not this high, or does not increase this rapidly year over year.

I think this is the major problem with representing uncertainty as 4 W/m2/year, instead of 4 W/m2.

I am not sure that is the case. The plus or minus 4 W per square meter is an uncertainty; it is not an increase every year as you say. It could go up, it could go down, or it could be somewhere in between. We do not know. We do know it can change that much in a year. That is experimentally determined, verified, and accepted. The fact that the model cannot deal with known variations is a clear indication that the model is not truly predictive.

Right, but if you just integrate that uncertainty envelope over time (4 W/m2), you don’t *get* Pat Frank’s huge, ever-growing uncertainty in temperatures. You only get that if you integrate it as W/m2/*year*. Check the math!
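The difference between the two readings can be checked in a few lines (illustrative only; ±4 W/m² is the calibration figure under discussion in this thread): a fixed ±4 W/m² uncertainty stays a fixed envelope, while a ±4 W/m²/year uncertainty compounds in quadrature and grows as sqrt(n).

```python
import math

U_STEP = 4.0  # W/m^2 (or W/m^2/year), the figure under discussion

def envelope_constant(n_years, u=U_STEP):
    # uncertainty read as a fixed +/-4 W/m^2: no growth with time
    return u

def envelope_per_year(n_years, u=U_STEP):
    # uncertainty read as +/-4 W/m^2/year: independent per-year
    # uncertainties combine in quadrature, i.e. u * sqrt(n)
    return math.sqrt(sum(u**2 for _ in range(n_years)))

for n in (1, 9, 25, 81):
    print(n, envelope_constant(n), round(envelope_per_year(n), 1))
# constant reading stays 4.0; per-year reading gives 4, 12, 20, 36
```

This is exactly the 4*sqrt(9) = 12 and 4*sqrt(25) = 20 arithmetic discussed elsewhere in the thread.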

Windchaser

Incorrect.

Just like Roy, Nick and many others (I am utterly dumbfounded by the apparent ignorance of the purpose and meaning of uncertainty and error propagation; it is a real shame), you are assuming that uncertainty must somehow be bound within a certain limit based on reason x, y or z, or that uncertainty must manifest in the result.

There is no part of how uncertainty is defined that suggests this; it simply does not work this way.

I’ve been following along and this has already been explained many times in the comment section of all three articles. Yet Roy and Nick, in particular, seem to ignore the responses that clearly highlight the issue, instead choosing to persist with a critique based on a fundamentally flawed assumption that an exception to the above exists.

It doesn’t, resulting in what is essentially a straw man.

Uncertainty has been used in the same manner for centuries and does not care that climate models are involved.

If anything, climate modelling is the perfect example of the extent to which an output function’s value, shape, or apparent agreement with other models (or even reality) is disconnected from its uncertainty.

My assessment is that the increasing uncertainty is the widening of the bell curve and is not representative of actual temperatures. As the bell curve widens due to the compounding of uncertainties, it renders any prediction more and more meaningless, even though it may occasionally get the temperature prediction “correct” (or somewhere close to the global average at any given point in time). The confidence factor plummets. Perhaps the uncertainty should be expressed as a P value of confidence/significance, not a temperature.

Just a layman’s thoughts…..

Javier …. true, but he qualifies in the paper that the research is not on the actual prediction of the model, but rather on the statistical propagation of error. He lines up the arguments quite well, to me, as follows.

1) All GCMs are complicated models with all kinds of parameters, but in the end their output is nothing more than a theoretical linear model based on GHG forcing as determined by the model.

2) GHG forcing includes all related forcing, not just CO2, thus the LWR of clouds is included.

3) The parameters of the models, when run in their native states, have an error in predicting cloud cover that is latitude dependent, grossly overestimating or underestimating depending on latitude. AND … this is a systemic error, not just a random error.

(*I think this is where Dr Spencer and he are getting cross, as Frank’s analysis is by latitude, which if taken as a whole would largely cancel out globally, whereas I think Spencer is viewing it purely as a homogenized global effect.*)

4) If … the model is incorrectly predicting clouds, then by default it is incorrectly calculating the overall GHG forcing. … Thus, there is a systemic error being propagated throughout the system.

5) …. to your point, the predictions of GCMs are only expressed as the mean, as is the linear equation that matches GCM output almost perfectly. Neither is expressed with the uncertainty [error bars] of the propagated error …. BUT SHOULD BE …. and that error is huge. (*Somehow I don’t feel that would make a very convincing picture for public consumption if you were trying to halt fossil fuel usage.*)

Bottom line: the theory underlying the models is incomplete, the models contain a propagated systemic error, and the predictions of the models are worthless.

OK, but since GHG forcing changes little over time, that makes them highly predictable in their response, not highly uncertain. I’ve seen (and plotted) model runs and model averages, and if anything is clear it is that the spaghetti coming from multiple runs of a model or from multiple models’ averages all shows a consistent behavior that doesn’t make them highly uncertain.

With models, instead of error bars, what they do is multiple runs, and that gives an idea of the uncertainty. In the Texas sharpshooter analogy, the model is not shooting at the bull’s-eye, but it is producing a grouping close enough to paint a target. That means the uncertainty within the model is not huge, or there would not be a grouping to paint the target. With respect to the model/reality error (the distance from the painted target to the original target), it is clear that models run too hot, but again the difference is not enough to allow for a huge uncertainty.

Quite frankly, I can’t see where an uncertainty of ±15°C in 80 years could be produced without leaving a discernible trace. The difference between a glacial period and an interglacial is just 5°C.

I agree with what you’re saying, Jav. Consider this.

I think the big disconnect in Frank’s paper is that the systematic error associated with cloud cover is latitude dependent. Thus, for very high latitudes, according to his graph, he gets a huge error in one direction, while in the tropics he gets a huge error in the opposite direction. This gives the appearance that cloud cover for any grid cell has a huge uncertainty. As Roy and you point out, in the model runs for the total globe, the errors at each latitude combine and cancel each other out … thus a run on the globe will never be too far from the tuned average.

In reply to:

Javier’s “OK, but since GHG forcing changes little over time, that makes them highly predictable in their response, not highly uncertain. I’ve seen (and plotted) model runs and model averages and if anything is clear is that the spaghetti coming from multiple runs of a model or from multiple models averages all show a consistent behavior that doesn’t make them highly uncertain.”

What you say would be correct if CO2 caused the warming and if the rise in atmospheric CO2 were caused by human emissions. This is a logical fact, not an argument.

The GCM models’ response is anchored in our minds in the one-dimensional studies, some of which (Hansen’s) showed a warming of 1.5C while others showed a warming of 0.1C to 0.2C, as the lower tropical troposphere is close to water vapor saturation and the CO2 and water infrared emissions overlap.

Hansen’s one-dimensional study froze the lapse rate, which is physically incorrect, and ignored the fact that the lower tropical troposphere is saturated with water vapour. Hansen did that to get the warming up to 1.2C. An unbiased one-dimensional study gives a warming of around 0.2C for a doubling of atmospheric CO2.

https://drive.google.com/file/d/0B74u5vgGLaWoOEJhcUZBNzFBd3M/view?pli=1

http://hockeyschtick.blogspot.ca/2015/07/collapse-of-agw-theory-of-ipcc-most.html

..In the 1DRCM studies, the most basic assumption is the fixed lapse rate of 6.5 K/km for 1xCO2 and 2xCO2.

There is no guarantee, however, for the same lapse rate maintained in the perturbed atmosphere with 2xCO2 [Chylek & Kiehl, 1981; Sinha, 1995]. Therefore, the lapse rate for 2xCO2 is a parameter requiring a sensitivity analysis as shown in Fig. 1. In the figure, line B shows the FLRA giving a uniform warming for the troposphere and the surface. Since the CS (FAH) greatly changes with a minute variation of the lapse rate for 2xCO2, the computed results of the 1DRCM studies in Table 1 are theoretically meaningless along with the failure of the FLRA.

In physical reality, the surface climate sensitivity is 0.1~0.2 K from the energy budget of the earth and the surface radiative forcing of 1.1 W/m2 for 2xCO2. Since there is no positive feedback from water vapor and ice albedo at the surface, the zero-feedback climate sensitivity CS (FAH) is also 0.1~0.2 K. A 1 K warming occurs in response to the radiative forcing of 3.7 W/m2 for 2xCO2 at the effective radiation height of 5 km. This gives the slightly reduced lapse rate of 6.3 K/km from 6.5 K/km as shown in Fig. 2.

In the physical reality, with a bold line in Fig. 2, the surface temperature increases as much as 0.1~0.2 K with the slightly decreased lapse rate of 6.3 K/km from 6.5 K/km.

Since the CS (FAH) is negligibly small at the surface, there is no water vapor and ice albedo feedback, which are large positive feedbacks in the 3DGCM studies of the IPCC.

…. (c) More than 100 parameters are utilized in the 3DGCMs (William: Three-dimensional General Circulation Models, silly toy models), giving the canonical climate sensitivity of 3 K claimed by the IPCC with the tuning of them.

The following are supporting data for the Kimoto lapse rate theory above.

(A) Kiehl & Ramanathan (1982) shows the following radiative forcing for 2xCO2.

Radiative forcing at the tropopause: 3.7 W/m2.

Radiative forcing at the surface: 0.55~1.56 W/m2 (average 1.1 W/m2).

This denies the FLRA giving the uniform warming throughout the troposphere in the 1DRCM and the 3DGCM studies.

(B) Newell & Dopplick (1979) obtained a climate sensitivity of 0.24 K considering the evaporation cooling from the surface of the ocean.

(C) Ramanathan (1981) shows the surface temperature increase of 0.17 K with the direct heating of 1.2 W/m2 for 2xCO2 at the surface.

Javier:

“It is unclear to me what it represents. Pat Frank says it represents statistical uncertainty, but if it does not have any connection with what physical or modeled temperatures can do, why is it relevant and why is it expressed in °C?”

The propagation of uncertainty shows that values of that parameter that are within the CI may produce forecasts that are extremely deviant from what will actually happen, indeed forecasts that are already extremely doubtful from the point of view of known physics.

Sorry to disagree, but ±15°C in 80 years violates known physics. The models can’t do that. You would need a large asteroid impact to produce it, and models don’t do asteroids. It is simply not believable. The small progressive increase in GHGs that models apply cannot produce the uncertainty that Pat Frank describes. It would require more than two doublings of CO2. I don’t need to know much about error propagation to see that the result defies common sense.

The question remains: If models cannot under any circumstance reach the ±15°C limits of Pat Frank’s uncertainty, what does that uncertainty represent?

“The question remains: If models cannot under any circumstance reach the ±15°C limits of Pat Frank’s uncertainty, what does that uncertainty represent?”

Can we not expect a reduction in probability based on the magnitude of each delta T?

That is to say, a 3 deg. swing (within predictions) is much more likely than a 15 deg. swing.

With regard to a negative response (cooling), empirical data proves that, thus far, cloud formation at best acts as a brake on CO2-induced albedo deltas.

If all of the above is correct, then assigning a linear probability to a 30 deg. range is mathematically unsound.

Javier:

“If models cannot under any circumstance reach the ±15°C limits of Pat Frank’s uncertainty, what does that uncertainty represent?”

How do you know that the model cannot reach the limits of Pat Frank’s uncertainty if the parameter can reach the limit of its uncertainty? You do not know that, and in fact Pat Frank’s analysis shows that an error in the parameter could produce a model output that is ±15°C in error. The model output might then be recognized as absurd, or maybe not; but an absurd model result is compatible with what is known about the parameter. You’ll know from reading about the history of the models that some parameters have been intentionally adjusted and re-adjusted to eliminate the occasional absurd calculated results that have occurred.

“I don’t need to know much about error propagation to see that the result defies common sense.”

This is not a study of the climate; it is a study of a model. What it shows is that a model result that defies common sense is reasonably compatible with what is known about the uncertainty of one of the parameters.

So, an important part here is that Pat Frank treats the *real-world* cloud forcing uncertainty as W/m2/year, instead of W/m2. This means that when you integrate with respect to time, the probability envelope of the cloud forcing increases, year after year, without end. It increases with the square root of the integral, so sqrt(time).

Do you think it’s physically plausible that the *real-world* cloud forcings could vary up to 4 W/m2 this year, then in 9 years could sit anywhere within a 12 W/m2 envelope (i.e., 4*sqrt(9)), and in 25 years within a 20 W/m2 envelope, and so on? Do you think our uncertainty about cloud forcings is really growing this way? These are all real-world questions, nothing about the models yet. But they highlight the importance of the difference between an uncertainty that’s in W/m2 versus one that’s in W/m2/year.

Of course, if the actual cloud forcings can vary that much, that would produce *huge* swings in temperature, in both the real world and in the models. And Frank’s math has it growing without end, to infinity.

You all seem to be missing the point. Dr. Frank has said that he can emulate the output of GCMs’ relation to CO2 using a linear equation. That’s why so many people have doubts about GCMs to begin with. It’s also why the GCMs miss the cooling in the midst of CO2 growing.

Anyway, using that information and an iterative calculation, the uncertainty of any projection appears to be about +/- 20 degrees.
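The emulate-then-propagate argument can be sketched in a few lines. This is a paraphrase of the approach as described in this thread, not the paper's actual coefficients: the sensitivity slope and the per-year forcing increment below are hypothetical illustrative numbers, while the ±4 W/m²/year calibration figure is the one discussed above.

```python
import math

# Hypothetical illustrative numbers (NOT taken from Frank's paper):
SENSITIVITY = 0.5   # K per (W/m^2): slope of the linear emulator
U_LWCF = 4.0        # W/m^2/year calibration uncertainty, per the thread

def emulated_warming(annual_forcings):
    # linear emulator: warming proportional to accumulated forcing
    return SENSITIVITY * sum(annual_forcings)

def propagated_uncertainty(n_years):
    # per-year uncertainties combine in quadrature, growing as sqrt(n)
    return SENSITIVITY * U_LWCF * math.sqrt(n_years)

forcings = [0.04] * 80            # assumed ~0.04 W/m^2 extra GHG forcing/year
dT = emulated_warming(forcings)   # central projection, ~1.6 K
u = propagated_uncertainty(80)    # propagated uncertainty, ~ +/-18 K
print(round(dT, 2), round(u, 1))
```

With these made-up inputs, a modest central projection carries an uncertainty envelope on the order of ±18 K, which is the scale of the ±15-20°C figures being argued about in this thread.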

Here is a question to ask yourself. We know ice ages have occurred with higher levels of CO2. What parameters would have to change to cause these? Volcanoes and asteroids are transitory, so they can’t be the cause of long-term ice ages. Why are there no GCM studies that look at this? Can they not handle it? If they can’t, what good are they?

Javier,

This represents the lower resolution of the model output over a given time span. Since the models report temperature in degrees C, the resolution is expressed in the same units. It’s like having a scale that measures up to 1 kg but has an accuracy of ±20 kg: completely useless for weighing things, though it might be useful for comparing the weight of two things, depending on the precision.

Nitpicking aside, I can’t argue with the conclusion:

“The unavoidable conclusion is that a temperature signal from anthropogenic CO2 emissions (if any) cannot have been, nor presently can be, evidenced in climate observables.”

Computer games are useful for understanding a subject, or for finding a way forward if you are totally clueless.

Anyone taking them at face value is an idiot, albeit possibly a highly educated idiot.

Unless you’re playing ” Global Thermonuclear War”.

Hasn’t Kiehl 2007 shown that it is the inverse correlation between climate sensitivity and aerosol forcing, not the ocean heat uptake, that enables the models to behave the same in their temperature projections, despite a wide range of climate sensitivities applied?

Kiehl gives the uncertainty of the ocean heat uptake as only ±0.2 W/m2. The aerosol forcing, on the other hand, is wildly uncertain. It differs by a factor of three among the models and is highly correlated with the total forcing. High total forcing means low aerosol cooling in the model, and vice versa.

However, there are many more possible propagation errors that make the temperature projections of the models statistically unfit for any forecast of future climates and prediction of future weather states:

Source: Kiehl, J. T. (2007), Twentieth century climate model response and climate sensitivity, Geophys. Res. Lett., 34, L22710, doi:10.1029/2007GL031383.

All of Roy Spencer’s satellite data, plus balloon etc. data, show that the models don’t work. Pat Frank is correct; there is the proof!

Dr. Spencer:

I think I hear you saying that you believe Pat Frank is stating that the models’ predictions are not accurate because of errors in cloud forcing. In a sense that is true, but what he’s actually saying is that because the errors in cloud forcing are so high, the models are meaningless. In other words, quite possibly the whole idea of greenhouse gas forcing, which is the common theme of all the models, may be invalid.

Suggested reading for the basics:

https://physicstoday.scitation.org/doi/10.1063/1.882103

Thank you Roy for taking the time to do an “objective” analysis from a “peer review” perspective. While others may have theories to the contrary, I suggest just wait till they submit their own WUWT articles, otherwise you’d just be responding to numerous “what-if” issues. Your stand-alone article is solid. Keep up the good work.

Pat Frank is correct, in general, regarding how errors propagate. Dr Spencer is incorrect in his statement that balancing the models at the start proves anything (the real climate is never in balance at any time).

That said… it doesn’t matter.

This entire argument misses an important point. This battle won’t be won by debating error bars (and it IS a battle). The general public will neither understand the nuances of error propagation, nor will care even if they do understand. It’s entirely the wrong debate to be having.

The problem with the warmist position is NOT the error bars on the data… the problem is the data itself. Data that has been manipulated and adjusted, and that is prone to inconsistencies due to station dropout, gridding, and on and on. The problem is that, outside of the computer models, there is not one single shred of actual real-world evidence to support their position. None.

That is the argument we need to be always pounding… not error bars. Getting lost in the weeds debating error bars is a waste of time.

Well, it would not be a waste of time, if we had data about which we were confident. We still need to argue about the legitimacy of the tools that ultimately handle such worthy data, if we were ever to have it.

I decided to let this percolate before weighing in, although I knew it was coming from lunch with CtM and addressing his own intuitive discomforts.

A lot of the disagreements here on Pat Frank’s paper are based on fuzzy definitions. Accuracy versus precision, and error versus uncertainty, are the two biggies. Let’s try to be less fuzzy.

Accuracy versus precision was illustrated in a figure from my guest post, Jason-3: Fit for Purpose? (Answer: no), albeit the no-accuracy-and-no-precision part of the figure could have been a bit more obvious. Accuracy is how close the average shot pattern is to the bullseye; precision is how tight the shot grouping is, whether or not it is on the bullseye.

Error is a physical notion of observational instruments’ measurement problems, like the temperature record confounded by siting problems, or Jason-3 SLR struggling with waves, Earth’s non-symmetric geoid, and orbital decay. It is statistical in nature; error bars express a ‘physical’ statistical uncertainty. Uncertainty itself is a mathematical rather than physical construct: how certain can we be that, whatever the observational answer (with error bars) and error may be, it will lie within the theoretical uncertainty envelope? It is probability theory in nature. It is perfectly possible that all ‘strange attractor’ Lorenz nonlinear-dynamics stable error nodes lie well within the ‘so constrained’ probabilistic uncertainty envelope, because the two are different things, computed differently.
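The shot-pattern distinction above can be put in numbers (hypothetical one-dimensional shot coordinates, purely to illustrate the definitions):

```python
import statistics

# Shot-pattern analogy: accuracy = closeness of the mean to the target;
# precision = tightness of the grouping, regardless of where it lands.
# Coordinates are invented for illustration.
TARGET = 0.0
shots_biased_tight = [2.1, 1.9, 2.0, 2.2, 1.8]     # precise, not accurate
shots_centred_loose = [-1.5, 1.2, 0.3, -0.9, 1.0]  # accurate-ish, imprecise

def accuracy_error(shots, target=TARGET):
    # bias: distance of the mean from the target
    return abs(statistics.mean(shots) - target)

def precision_spread(shots):
    # sample standard deviation: spread of the grouping
    return statistics.stdev(shots)

print(accuracy_error(shots_biased_tight), precision_spread(shots_biased_tight))
print(accuracy_error(shots_centred_loose), precision_spread(shots_centred_loose))
```

The first grouping is tight but far from the bullseye (small spread, large bias); the second straddles the bullseye but scatters widely (small bias, large spread), which is exactly the accuracy/precision split described above.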

Frank’s paper says the accuracy uncertainty envelope from error propagation exceeds any ability of the models to estimate precision with error bounds. That is very subtle, but fundamentally simple. Spencer’s rebuttal says the possible error is different and probably constrained. True, but not relevant.

“Frank’s paper says . . . Spencer’s rebuttal says . . .”

Thanks for the simplification. It seems the presuppositions underpinning the respective arguments are creating an epistemological language barrier – e.g., when the theologian discusses origins with the atheist.

Beyond this, didn’t the IPCC long ago admit Frank’s ultimate argument, i.e., that the models are worthless for prediction?

From the third assessment report, p. 774, section 14.2.2.2, “Balancing the need for finer scales and the need for ensembles”

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

Roy, let me take a different approach to the problem.

We agree that all your GCMs produce an energy balance at the TOA. All of them accurately simulate the observed air temperature within the calibration bounds.

Nevertheless, they all make errors in simulating total cloud fraction within the same calibration bounds. That means they all make errors in simulated long wave cloud forcing, within those calibration bounds.

The simulated tropospheric thermal energy flux is wrong within those calibration bounds. Tropospheric thermal energy flux is the determinant of air temperature.

So the simulated calibration air temperature is correct while the simulated calibration tropospheric thermal energy flux is wrong. How is this possible?

Jeffrey Kiehl told us why in 2007.

The reason is that the models are all tuned to reproduce air temperature in their calibration bounds. The correctness of the calibration air temperature is an artifact of the tuning.

A large variety of tuned parameter sets will produce a good conformance with the observed air temperature (Kiehl, 2007). Therefore, model tuning hides the large uncertainty in simulated air temperature.

The simulated air temperature has a large uncertainty, even though it has a small data-minus-simulation error. That small error is a spurious artifact of the tuning. We remain ignorant about the physical state of the climate.

Uncertainty is an ignorance-width. The uncertainty in simulated air temperature is there, even though it is hidden, because the models do not reproduce the correct physics of the climate. They do not solve the problem of the climate energy-state.

Although the TOA energy is balanced, the energy within the climate-state is not partitioned correctly among the internal climate sub-states. Hence the cloud fraction error.

Even though the simulated air temperature is in statistical conformance with the observed air temperature, the simulated air temperature tells us nothing about the energy-state of the physically real climate.

The simulated calibration air temperature is an artifact of the offsetting errors produced by tuning.

Offsetting errors do not improve the physical description of the climate. Offsetting errors just hide the uncertainty in the model expectation values.

With incorrect physics inside the model, there is no way to justify an assumption that the model will project the climate correctly into the future.

With incorrect physics inside the model, the model injects errors into the simulation with every calculational step. Every single simulation step starts out with an initial values error.

That includes a projection starting from an equilibrated base-climate. The errors in the projection accumulate step-by-step during a projection.

However, we do not know the magnitude of the errors, because the prediction is of a future state where no information is available.

Hence, we instead calculate an uncertainty from a propagated calibration error statistic.

We know the average LWCF calibration error characteristic of CMIP5 models. That calibration error reflects the uncertainty in the simulated tropospheric thermal energy flux — the energy flux that determines air temperature.

It is the energy range within which we do not know the behavior of the clouds. The clouds of the physically real climate may adjust themselves within that energy range, but the models will not be able to reproduce that adjustment.

That’s because the simulated cloud error of the models is larger than the size of the change in the physically real cloud cover.

The size of the error means that the small energy flux that CO2 emissions contribute is lost within the thermal flux error of the models. That is, the models cannot resolve so small an effect as the thermal flux produced by CO2 emissions.

Propagating that model thermal-flux calibration error statistic through the projection then yields an uncertainty estimate for the projected air temperature. The uncertainty bounds are an estimate of the reliability of the projection; of our statement about the future climate state.

And that’s what I’ve done.
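The step-wise propagation described here can be sketched numerically. This is a minimal illustration, not Frank’s published calculation: it assumes a hypothetical constant per-step temperature uncertainty of ±1.8 K, accumulated in quadrature (root-sum-square) over 100 annual steps.

```python
import math

# Illustrative root-sum-square propagation of a constant per-step
# uncertainty. The +/-1.8 K per step is a hypothetical value chosen
# for illustration, not a number taken from Frank's paper.
def propagate(u_step: float, n_steps: int) -> float:
    """Total uncertainty after n_steps, each contributing u_step in quadrature."""
    return math.sqrt(sum(u_step ** 2 for _ in range(n_steps)))

# After 100 annual steps: sqrt(100) * 1.8 = +/-18 K
u_century = propagate(1.8, 100)
```

Note the uncertainty grows with the square root of the number of steps: it never shrinks as steps accumulate, which is the core of the propagation argument.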

Perhaps I can simplify:

If you know the answer of a sum is 20 and you need one value to be 5 or higher, then it is a simple matter of adjusting the other parameters to your heart’s content.

20=5+10+2+2+1

20=5*5-5

20=((5/10+100)*pi*r^2+(the number of albums Justin Bieber sold last year))xAlpha [where alpha is whatever it needs to be to make the equation balance]

None of this says anything about the accuracy of 20 as an answer, and it still wouldn’t even if the answer were more precise, like 20.01946913905.

You can add in any number of real parameters; it wouldn’t matter, so long as you have enough fudge factors to compensate.
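That degeneracy is easy to demonstrate. A minimal sketch (all values invented): once one free parameter is allowed to absorb the residual, any number of different parameter sets reproduce the same target.

```python
import random

# Sketch: many distinct "tuned" parameter sets reproduce the same target,
# so matching the target says nothing about the parameters themselves.
random.seed(0)
TARGET = 20.0
tuned_sets = []
for _ in range(5):
    a = random.uniform(0, 10)
    b = random.uniform(0, 10)
    alpha = TARGET - (a + b)      # the fudge factor absorbs whatever is left
    tuned_sets.append((a, b, alpha))

# Every set balances exactly, yet the parameter values are all different.
sums = [a + b + alpha for (a, b, alpha) in tuned_sets]
```

Every run hits 20 exactly, which is the point: agreement with the target constrains nothing about the internal state.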

Pat and Roy,

There is a lot of money at stake here. In the GCMs, clouds are a huge weakness: the behavior of clouds has not been predicted, and there is no assurance that the behavior of clouds can be predicted as CO2 levels rise.

Goodness.

Fundamentally, there is no proof that rising CO2 ppm can heat the atmosphere! Saturated low in the atmosphere, it restricts the atmosphere from radiating freely to space from high altitudes, but no one can calculate this effect. It could be tiny, or even non-existent.

Speak the truth, both of you……

Dr Frank – perfect.

That is perfectly easy to understand. I do not understand why distinguished and clearly intelligent scientists cannot understand that, and if they still have issues address it from that understanding.

Here’s more from year 2017:

https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/

Roy – You again refer to “~20 different models”, but then acknowledge that “they all basically behave the same in their temperature projections for the same …..”.

As I commented on your first article,

I’m not sure that your argument re the “20 different models” is correct. All the models are tuned to the same very recent observation history, so their results are very unlikely to differ by much over quite a significant future period. In other words, the models are not as independent of each other as some would like to claim. In particular, they all seem to have very similar climate sensitivity – and that’s a remarkable absence of independence.

I would add that I find your statement “All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!” rather disturbing: the models misuse clouds for a large part of the CO2 effect, so I can’t accept that the models do show the effect of anthropogenic CO2 emissions, and I can’t accept that clouds can simply be ignored as you seem to suggest.

So long as the fudge turns out somewhat edible at the end, it’s all good?

My High School Physics teacher would have flunked me for that egregious fudging assumption.

Nor is “in global energy balance” a valid criterion.

This statement astonishes me. A program that responds to increasing greenhouse gases is purposely written to respond.

Why there is not a standard defined for exactly how model programs respond to greenhouse gases puzzles me.

If all of the programs return different numbers, they are not in agreement, even if they stay within some weird boundary! Nor does adding up the program runs and then publishing the result cancel anything. That is a bland acceptance of bad programming while hoping to foist the results and costs on the unsuspecting public.

That the model programs all fail to adhere to reality over the long term is the sign that those programs are failures, especially as model results run into future weeks, months, and years.

Apparently, propagation of error is uncontrolled! Those who assume the errors will cancel are making a gross assumption in the face of horrible model runs.

Those nitpicking an article about the “propagation of errors” should do so constructively, not harp about cancelling, balance, gross acceptance, or whatever.

Pat Frank addresses one part of climate science’s refusal to address systemic error throughout global temperature monitoring, storage, handling, calculations and presentations.

Propagation error is a problem for climate science, but apparently ignored by many climate scientists.

Defending propagation of error in model runs because the assumption is that they are cancelled out by other model bias is absurd.

Nor is it valid to assume that TOA longwave radiative flux variation validates a GCM program.

The models are injected with brown (“black”) matter to conform with real processes, which are chaotic (e.g. evolutionary), not monotonic (e.g. progressive). The system has been incompletely, and, in fact, insufficiently characterized, and is estimated in an unwieldy space. This is why the models have demonstrated no skill to hindcast, forecast, let alone predict climate change.

The discussion appears to me revolving around multiple potential misunderstandings.

1. As often mentioned already: accuracy versus precision and error versus uncertainty

2. Simple statistical analysis on measurement & linear processing versus emulations running Navier-Stokes equations approaching various states of equilibrium and complex feedback.

While the uncertainty and general unreliability of climate models can be argued for, and seems well understood within the sciences even without all the mathematics, Dr. Spencer appears to make the correct remark that known uncertainty levels do not propagate inside these types of emulation but over the “long run” cancel each other out within the equilibrium states. What’s left are more modest uncertainty bounds with the, I’d argue, well understood general shortcoming of any model addressing reality.

But the presence of unqualified, non-linear components in the real climate does not necessarily mean the model has no value when establishing a general trend for the future (through drawing scenarios, not merely predicting). The model can be overthrown each and every second by reality. This is no different from cosmology and astrophysics, but that understanding will not make astrophysicists abandon their models on the formation of stars or the expansion of the universe. Of course, nobody is asking yet for trillions of dollars based on arguments deriving from astrophysical models.

And that last bit is in my view the bigger problem: uncertainty versus money.

“Dr. Spencer appears to make the correct remark that known uncertainty levels do not propagate inside these types of emulation but over the “long run” cancel each other out within the equilibrium states.”

Most agree that most of the time the climate is an equilibrium engine. It searches for that. An equilibrium is its anchor or the thing it revolves around like a planet around its sun.

We can calculate an orbit of a planet with errors similar to the errors in a climate model. Now predict 100 years in the future. Measure Earth’s distance from the Sun’s average position. Now be Galileo and do the same thing with his technology. His errors can be argued to be huge. Yet his model was probably pretty good for figuring the future Earth/Sun change in distance.

If a tall building’s upper floors displace in high winds, we don’t add the errors. We can’t calculate how much they displace at any given time to 6 decimal places, but these errors do not add. Yet if we calculate a difference at the 6th decimal place and keep iterating that error, we are eventually going to get a displacement that indicates a building failure.

John,

No. What Pat is talking about is a specification error; that is to say a limit on accuracy. As such it can’t be cancelled or reduced in any way because it literally is a loss of information, like a black hole of knowledge. There’s no way to use mathematics to change “I don’t know” into “I know”.

As a CME who studied the hard sciences and engineering to get a PhD, I find it amazing that Dr. Spencer and others do not appear to understand the difference between error and uncertainty. Simple searches find many good explanations, including this one (https://www.bellevuecollege.edu/physics/resources/measure-sigfigsintro/b-acc-prec-unc/) or this one (https://www.nhn.ou.edu/~johnson/Education/Juniorlab/Error-SigFig/SigFiginError-043.pdf).

In this case, we cannot know the error in the model projections because we do not know the true value for the temperature in the future. Anyone who is discussing errors is missing the point.

We must, however, estimate the uncertainty on our projection calculations so that we then know what we can say with certainty about the model projections, e.g. so we can say “the temperature 100 years from now lies between A and B degrees”, or more typically that “the temperature 100 years from now will be X +/- y degrees.”

The estimate of the uncertainty can be made without ever running a single simulation as long as we have an idea of the errors in the “instruments” we are using for our experiments. This is what Pat Franks has done, estimated the uncertainty based on the estimated error in the parameterization of clouds that is used in all GCMs.

The result is that the best we can say is that we are certain that the future temperature (in 100 years) will be X +/- 18C where X is the output of your favorite GCM.

“Anyone who is discussing errors is missing the point.”

The paper is titled “Propagation of Error and the Reliability of Global Air Temperature Projections.” If you are going to insist that error can only mean a difference between a measured value and truth, then how can it be propagated?

Well, to be more explicit, the errors in the “instruments” is what is propagated resulting in the uncertainty. In this particular case, the “instrument” that has the error is the parameterization of the effects of clouds.

Nick Stokes:

“The paper is titled ‘Propagation of Error and the Reliability of Global Air Temperature Projections.’ If you are going to insist that error can only mean a difference between a measured value and truth, then how can it be propagated?”

As happens frequently, the phrase “propagation of error” has at least 2 distinct but related meanings.

a. It can mean the propagation of a known or hypothesized specific error;

b. It can mean the propagation of the probability distribution of the potential errors.

Pat Frank has been using it in the sense of (b).

“Pat Frank has been using it in the sense of (b).”

So then what is the difference between “error”, meaning “the probability distribution of the potential errors”, and “uncertainty”?

Nick Stokes:

“the probability distribution of the potential errors”, and “uncertainty”?

The probability distribution of the potential outcomes is one of the mathematical models of uncertainty.

Well, Pat thumps the table with stuff like:

“you have no concept of the difference between error and uncertainty”

“the difference between error and uncertainty is in fact central to understanding the argument”

You’re making the difference pretty fuzzy.

Nick Stokes:

“You’re making the difference pretty fuzzy.”

Only when you ignore the “distribution” of the error, and treat the error as fixed. Consider for example the “standard error of the mean”, which is the standard deviation of the distribution of the potential error, not a fixed error. My reading of your comments and Roy Spencer’s comments is that you do ignore the distribution of the error.

I have a model that calculates the temperature each year for a hundred years for a range of rcp trajectories. The results are excellent, closely matching my expectations but disappointing compared with observation.

I tried introducing my best estimates and consequences of different cloud cover conditions but the model output was all over the place.

I then introduced fudges that effectively suppressed the effect of clouds and the models returned to the former excellent performance. Pity about the observations.

This thought experiment illustrates that the uncertainty in simulating the climate exists whether or not I include cloud cover in my model, or whether I fudge its effect. The model will process only what it is programmed to do, independent of the uncertainty. Ignoring elements of uncertainty (e.g. cloud cover) may make the model output look impressive but in fact introduces serious limitations into the simulation. These affect current comparisons with observation but have an unknown influence on future predictions.

In order to judge the predictive usefulness of my model I need to estimate the impact of all uncertainties.

Dr. Frank’s paper provides extremely wide uncertainty bounds for the various models. He says that the bounds he proposes are not possible real temperatures that might actually happen, just the uncertainty bounds of the model.

The normal way to validate uncertainty bounds is to assess their performance. Being statistical in nature, roughly 1 run in 20 should exceed those bounds, and a plot of multiple runs should show them scattering all around the range of the bounds.

The climate models have been run long enough to assess how widely they diverge. None of the models come close to that kind of variability; they all sit well within Dr. Frank’s wide bounds. This indicates that the uncertainty bounds proposed are very unlikely to be as large as Dr. Frank calculates them to be.

Dr. Frank’s bounds connect to the uncertainty of the model predictions of the earth’s temperature. For his uncertainty bounds to be feasible, all values within the range must be physically achievable. An uncertainty bound for a physical measurement that is impossible to achieve is meaningless. If someone tried to tell me that the uncertainty range of the predicted midday temperature tomorrow was +/- 100 degrees C, it would be ludicrous, since temperatures within that range are impossible for this time of year. We can be certain that that uncertainty bound is incorrect. Even if the calculated uncertainty of the measurement technique used for the prediction was indeed that inaccurate, the derived uncertainty bears no association with the true uncertainty. It is a meaningless and wrong estimate of the true uncertainty.

We know the earth simply cannot warm or cool as much as Dr. Frank’s uncertainty suggests. Therefore his estimate of the uncertainty of the models cannot be correct, because his uncertainty itself cannot be correct.

Both these simple observations indicate that the assumptions on which these bounds were calculated must be false, and that the true uncertainty is far less. In other words, Dr. Frank’s uncertainty bounds are themselves most uncertain.

I fully agree; that is what I have been trying to say, more ineptly. The result doesn’t pass the common-sense test. Neither the planet nor the models as they have been published can do that.

Chris and Javier, you are being far too literal in your reading of the uncertainty range.

Somebody mentioned the models will produce similar results because they operate within a “constraint corridor” (boundary conditions and assumptions like TOA energy balance). That’s a very appealing way to describe a significant aspect of their operation.

Does this “corridor” reduce uncertainty? Certainly not!

Uncertainty is LOST INFORMATION. Once lost, it’s gone forever as far as a model run is concerned. From any position, further modelling can only increase the uncertainty. And that’s essentially what Pat is telling you.

So what about a model which has a limited range of feasible outcomes? If Pat’s theoretical uncertainty range exceeds the feasible range of outcomes, this only means the uncertainty cannot tell you anything about the future position within the range.

The fact that Pat’s uncertainty bounds exceeds this range is just surplus information about the uncertainty. Pat’s method is not modelling the climate, why would it need to be aware of a detail like feasible range of MODEL outputs? As somebody else keeps telling us, uncertainty is a property of the model, NOT an output.

Like I said, you are being far too literal and inflexible in your interpretation of Pat’s results. Your objections are ill-founded.

Following the example from above: if you cut a piece to a ±0.5 mm error and then assemble 100 units of the piece, your propagated error would be ±50 mm. Although it is quite unlikely your assembly would be 50 mm off, that is your uncertainty. There is a real, albeit small, possibility of that, but the possibility is not small that you could be 25 mm off.

If you make multiple runs with a model that has a ±15°C uncertainty, you should see plenty of ±7°C results. As that doesn’t happen, the models are either constrained, as you say, or programmed so that errors cancel. In both cases that reduces the uncertainty over the final result.

In any case, if Pat’s mathematical treatment produces a result that does not agree with how the models behave, it is either wrong or it has been made irrelevant by the way the models work. It is as if, in the example, all pieces with an error above ±0.1 mm are discarded. Not very practical, but you won’t get an assembly with a >10 mm error even though the error in making the pieces is still large.

Javier,

No, you are talking about precision errors. Instead, think about what would happen if you cut each piece to the same length within ±0.1 mm, but your ruler was 0.5 mm too long (calibration error). Now how far off would you be after adding the 100 pieces? The precision errors would mostly cancel, but the resulting assembly would be 50 mm too long. Now, before you even started cutting, let’s say you knew that the ruler could be ±0.5 mm out of spec, but you didn’t know by how much. How could you predict the length of the final assembly? What confidence would you have that it would be within ±2 mm?
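The contrast is easy to simulate. A sketch using the numbers from this comment (±0.1 mm random cut error, a ruler 0.5 mm too long, 100 pieces): the random part largely cancels while the calibration bias accumulates linearly.

```python
import random

# Each cut has a random +/-0.1 mm precision error plus a fixed 0.5 mm
# systematic bias from the mis-marked ruler (numbers from the comment above).
random.seed(1)
N = 100
BIAS = 0.5  # mm, systematic calibration error per piece

errors = [random.uniform(-0.1, 0.1) + BIAS for _ in range(N)]
total_offset = sum(errors)             # dominated by N * BIAS = 50 mm
random_only = total_offset - N * BIAS  # the residual random part, near zero
```

The random residual stays within a few millimetres while the systematic part contributes the full 50 mm: averaging more pieces never removes a calibration error.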

Javier

This is the misinterpretation that Pat is being forced to play “Wack-a-Mole” with.

Uncertainty never cancels in the way you assume. Once information is lost (for a model run), it is lost for the remainder of the run. It can NEVER be recovered by constraints and other modelling assumptions. All these things do is add their own uncertainties for subsequent steps.

Where uncertainties are independent of each other (and that’s the general assumption until somebody can demonstrate otherwise), uncertainties propagate in quadrature (Pythagoras). They never reduce numerically, and they never reduce in practice.

Pat shows you how to do it. His expertise on the topic is way above anybody else’s on this thread. We have a great opportunity to LEARN.
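The quadrature rule for independent uncertainties takes only a few lines. This is a generic sketch of root-sum-square combination; the component values are arbitrary examples, not numbers from the paper.

```python
import math

# Independent uncertainties combine in quadrature (root-sum-square).
# The combined value is always at least as large as any single component:
# combination never reduces uncertainty.
def combine(*components: float) -> float:
    return math.sqrt(sum(c ** 2 for c in components))

u = combine(3.0, 4.0)   # sqrt(9 + 16) = 5.0, larger than either component
```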

In reply to Mr Thompson, it is precisely because all of the models’ predictions fall within Professor Frank’s uncertainty envelope that all of their predictions are valueless.

It does not matter that they all agree that the expected global warming will be between 2.1 and 5.4 K per CO2 doubling, because that entire interval falls within the envelope of uncertainty that Professor Frank has calculated, which is +/- 20 K.

Note that that uncertainty envelope is not a prediction. It is simply a statistical yardstick, external to the models but shaped by their inputs and calculated by the standard and well-demonstrated statistical technique of deriving propagation of uncertainty by summation in quadrature.

Or think of the yardstick as a ballpark. There is a ball somewhere in the ballpark, but we are outside the ballpark and we can’t see in, so, unless the ball falls outside the ballpark, we can’t find it.

What is necessary, then, is to build a much smaller ballpark – the smaller the better. Then there is more chance that the ball will land outside the ballpark and we’ll be able to find it.

In climate, that means understanding clouds a whole lot better than we do. And that’s before we consider the cumulative propagation of uncertainties in the other uncertain variables that constitute the climate object.

Subject to a couple of technical questions, to which I have sought answers, I reckon Professor Frank is correct.

+1

Bravo!!!

Dr. Frank’s linearization of the model output is quite ingenious, and it makes for an analytic uncertainty calculation from just a single parameter, the LWCF. In the Guide to the Expression of Uncertainty in Measurement (the GUM, referenced in Dr. Frank’s paper) another way to obtain uncertainty values is with Monte Carlo methods (calculations). Treating a given GCM as a black box with numeric inputs and a single output (temperature), it may be possible to calculate the temperature uncertainty with the following exercise:

1) Identify all the adjustable parameters that are inputs to the model

2) Obtain or estimate uncertainty values for each parameter

3) Obtain or estimate probability distributions for each parameter

4) Randomly select values of each parameter, using the uncertainty statistics for each

5) Run the model, record the temperature output

6) Repeat 4-5 many times, such as 10,000 or more

The temperature uncertainty is then extracted from a histogram of the temperatures, which should dampen the “your number is too large” objections.

However, the usefulness of Monte Carlo methods is limited by computation time: the more input parameters there are, the more repetitions are needed. Does anyone know how many adjustable parameters these models have, or the computation time a single run requires?
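Steps 1–6 above can be sketched against a toy stand-in for the GCM black box. Everything here is invented for illustration: the model function, the parameter distributions, and their widths are hypothetical, not taken from any GCM.

```python
import random
import statistics

# Toy stand-in for a GCM: temperature output as a function of two uncertain
# input parameters. The function and both distributions are invented.
def toy_model(lwcf: float, sensitivity: float) -> float:
    return 14.0 + sensitivity * lwcf

random.seed(7)
temps = []
for _ in range(10_000):
    # Step 4: draw each parameter from its assumed distribution
    lwcf = random.gauss(0.0, 4.0)   # hypothetical flux spread, W/m^2
    sens = random.gauss(0.5, 0.1)   # hypothetical sensitivity, K per W/m^2
    temps.append(toy_model(lwcf, sens))  # Step 5: record the output

# Step 6: the spread of the output histogram is the uncertainty estimate
mean_temp = statistics.mean(temps)
u_temp = statistics.stdev(temps)
```

For a real GCM the bottleneck is exactly as the comment says: each of the 10,000 draws would be a full model run, and the required number of draws grows with the number of uncertain parameters.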

Chris Thompson:

“The climate models have been run long enough to assess how widely they diverge. None of the models come close to that kind of variability. They all sit well within Dr. Frank’s wide bounds. This indicates that the uncertainty bounds proposed are very unlikely to be as large as Dr. Frank calculates them to be.”

The model runs have not systematically or randomly varied this parameter throughout its confidence interval, so information on the uncertainty in output associated with uncertainty in its value has not been computed.

Roy,

The first time I saw uncertainty estimates for the UAH lower troposphere temperatures, my eyebrows went high, because this seemed to be remarkably good performance for any instrumental system, let alone one operating way up at satellite height, difficult to monitor and adjust for suspected in-situ errors. For years I had tried hard at the lab bench for such performance and failed.

It would be great if, as a result of comprehending the significance of Pat’s paper, you were able to issue a contemplative piece on whether you found a need to adjust your uncertainty estimates, or at least express them with different caveats.

In climate research, there are several major examples of wholesale junking of past results from older instruments when newer ones were introduced. Some examples are Argo buoys for SST, pH of ocean waters, aspects of satellite measurements of TOA flux, early rocketry T results versus modern, plus some that are candidates for junking, like either Liquid-in-glass thermometers or thermocouple/electronic type devices (one or the other, they are incompatible). There are past examples of error analysis favoring rejection of large slabs of data thought reliable, but overcome by better measurement devices. Science progresses this way if it is done well.

These comments are in no way unkind to your excellent work in simulation of air temperatures via microwave emissions from gases, one of the really good breakthroughs in climate research of the last 50 years. Geoff S

Hey Greg,

“I do hydraulic studies (flood modeling). The object of the model isn’t to be precise; there is no way you can be precise. Yes, the output is to 4 decimal places, but the storm you’re modeling isn’t a real storm.”

I appreciate that. What I’m trying to say is that some claim models closely follow actual air temperatures in recent decades. If that is the case, why is that? By mere luck? If the uncertainty is huge, I would expect significant deviations from actual air temperature. If models consistently give results in tight ranges, and those results are close to actual temperature changes, then what’s the point of complaining about massive uncertainty?

Huge thanks to Pat Frank for this tenacious work, and also to Roy Spencer for providing a much needed critique. The fact that it comes from Dr. Spencer, who is much admired on the sceptic side, makes it all the more valuable. So, what is the result…does Dr Spencer have a handle on this?

After quite a lot of vacillation, I come down pretty clearly on the side of Dr. Frank. I really do think Roy Spencer has been defeated in this argument. Although always doubtful of the models, I am usually a sceptic of any challenge to the basics, always feeling that such challenges require very substantial evidence. I’m also somewhat limited mathematically, and was at first very sympathetic to the specific challenge by Nick Stokes and others relating to the time units Pat introduced into the equations, and the sign on the errors. It took me a long time to get over that one, and I expect the argument will go on. Eventually I saw it as a diversion rather than a real obstacle to acceptance of the fundamental finding of Pat Frank’s work.

Stepping back for a moment, it is clear that it is in the very nature of the model programs that the errors must propagate with time, and can be restrained only by adjustment of the parameters used and by a training program based on historical data. I would suggest that all of us – everybody, including Roy Spencer, including the modellers themselves – really know this is true. It cannot be otherwise. And it shouldn’t take several years of hard slog by Pat Frank to demonstrate it.

Let’s take an analogy that non-mathematicians and non-statisticians can relate to: the weather models that are used routinely for short-range weather forecasts. Okay, I understand that there are important differences between those and GCMs, but please bear with me. That forecasting is now good. Compared with 30 years ago, it is very good indeed. The combination of large computing power and a view from satellites has changed the game. I can now rely on the basics of the general forecast for my area enough to plan weather-sensitive projects pretty well. At least, about a day or a day and a half ahead. Thereafter, not so good. Already after a few hours the forecast is degrading. It is particularly poor for estimating local details of cloud cover, which is personally important for me, just hours ahead. After three or four days, it is of very little use (unless we are sitting under a large stationary weather system – when I can do my own pretty good predictions anyway!). After a week or so, it is not much better than guesswork. In truth, those short-range models are spiralling out of control, rapidly, and after a comparatively short time the weather map they produce will look not remotely like the actual, real weather map that develops. The reason is clear – propagation of error.

Weather forecasting organisations update their forecasts hourly and daily. Keep an eye on the forecasts, and watch them change! The new forecasts for a given day are more accurate than those they succeed. They can do that because they have a new set of initial conditions to work from, which cancels the errors that have propagated within that short space of time. And so on. But climate models can’t control that error propagation, because they don’t, by definition, have constantly new initial conditions to put their forecast – “projection” – back on track. Apologists for the models may counter that GCMs are fundamentally different, in that they are not projecting weather but are projecting temperatures, decades ahead, and that these are directly linked to the basic radiative physics of greenhouse gases, which are well reflected by modelling. Well, perhaps yes, but that smacks of a circular argument, doesn’t it? As Pat Frank demonstrates, that is really all there is in the models: a linear dependence upon CO2. The rest is juggling. We’ve been here before.

Roy Spencer, I’d like you to consider the possibility you might be basing your critique on a very basic misconception of Dr Frank’s work.

….Well said…

There is no purpose to this argument. Models use various means to achieve a balance which in nature does not exist. Ice ages? Then modellers feed in CO2 as a precursor for warming. Roy Spencer is correct. Climate change is accidental, not ruled by mathematical equations, which cannot under any circumstances represent the unpredictable nature of our climate. This argument is about how interested parties arrive at exactly the same conclusion. Models cannot predict our future climate, hence modellers’ predilection for CO2. If you want to predict temperature based upon CO2, all you need is a sheet of graph paper, a pocket calculator, ruler and pencil. Models are dross.

What alarmism never contemplates is the absurdity of its own rhetoric. Hypothetically, if CO2 causes warming, then mitigation of CO2 would cause cooling. Historically there is no evidence that CO2 has caused warming or cooling. Models exist to give the misleading impression that we do understand the way in which our climate functions, when the only active ingredient upon which predictions can be postulated is CO2. The models of themselves are noise.

“In climate research and modelling, we should recognise that we are dealing with a coupled nonlinear chaotic system and therefore that long term prediction of our future climate states is not possible.” Intergovernmental Panel on Climate Change (IPCC), Third Assessment Report (2001), Section 14.2.2.2, p. 774.

https://wattsupwiththat.com/2016/12/29/scott-adams-dilbert-author-the-climate-science-challenge/

David Wells, Pat’s paper is a formal analysis to back up your assertions.

“If you want to predict temperature based upon CO2 all you need is a sheet of graph paper, a pocket calculator, ruler and pencil.”

Pat shows this with his emulation of GCMs. GAST projections are nothing more than iterative linear extrapolation of assumed CO2 forcing inputs. Forget all the detail and mystery that their creators like to hide behind, and just call them by what they do: iterative extrapolators. Forget the $bn sunk to get to this conclusion. Pat shows time and again that all we have is iterative linear extrapolators of assumed CO2 forcing.

Pat can then present familiar concepts of uncertainty propagation in iterative linear extrapolators to show that the outputs of GCMs are not reliable. There is a maximum degree of uncertainty they can tolerate to be able to discern the effect of CO2 forcing, and they fail to achieve this standard.
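The “iterative linear extrapolator” description can be made concrete with a toy sketch. The sensitivity coefficient `k` and the annual forcing increment below are illustrative assumptions only, not values taken from Pat’s paper:

```python
# Toy sketch of a GCM "emulator" as an iterative linear extrapolator.
# The coefficient k (K per W/m^2) and the forcing increments are
# illustrative assumptions, not values from the paper under discussion.
def emulate_gast_anomaly(forcing_increments, k=0.5):
    """Accumulate a temperature anomaly as a linear response to
    yearly greenhouse-forcing increments."""
    anomaly = 0.0
    path = []
    for dF in forcing_increments:
        anomaly += k * dF          # linear response, iterated year by year
        path.append(anomaly)
    return path

# 100 years of an assumed constant 0.035 W/m^2 annual forcing increase
path = emulate_gast_anomaly([0.035] * 100)
```

However elaborate the internals of a GCM, the claim is that its global-mean temperature output is reproduced by a loop of this shape.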

It’s a beautiful logical chain of reasoning, well supported by evidence and analysis.

Excellent comment. Regardless of how complicated the GCMs are, their output in relation to CO2 is linear. Dr. Frank has shown this remarkable observation is true. The corollary then follows that uncertainty is calculated through well-known formulas.

Agreed Jim.

Pat’s work is important, and it needs to be supported against the naysayers who cannot stand the blunt truth they are faced with.

Mr Wells has misunderstood Professor Frank’s method. Consider three domains. First, the real world, in which we live and move and have our being, and which we observe and measure. Secondly, the general-circulation models of the climate, which attempt to represent the behavior of the climate system. Thirdly, the various theoretical methods by which it is possible to examine the plausibility of the models’ outputs.

Our team’s approach demonstrates that, if temperature feedback is correctly defined (as it is not in climatology), climate sensitivity is likely to be about a third of current midrange projections. To reach that result, we do not need to know in detail how the models work: we can treat them as a black box. We do need to know how the real world works, so that we can make sure the theory is correct. All we need to know is the key inputs to and outputs from the models. Everything in between is not necessary to our analysis.

Professor Frank is taking our approach. Just as we are treating the models as a black box and studying their inputs and outputs in the light of established control theory, so he is treating the models as a black box and studying their inputs and outputs in the light of the established statistical method of propagating uncertainty.

If Professor Frank is correct in saying that the models are finding that the uncertainty in the longwave cloud forcing, expressed as an annually-moving 20-year mean, is 4 Watts per square meter – and his reference is to the Lauer/Hamilton paper, where that figure is given – then applying the usual rules for summation in quadrature one can readily establish that the envelope of uncertainty in any model – however constructed – that incorporates such an annual uncertainty will be plus or minus 20 K, or thereby.

However, that uncertainty envelope is not, repeat not, a prediction. All it says is that if you have an annual uncertainty of 4 Watts per square meter anywhere in the model, then any projection of future global warming derived from that model will be of no value unless that projection falls outside the uncertainty envelope.

The point – which is actually a simple one – is that all the models’ projections fall within the uncertainty envelope; and, because they fall within the envelope, they cannot tell us anything about how much global warming there will be.

Propagation of uncertainty by summation in quadrature is simply a statistical yardstick. It does not matter how simple or complex the general-circulation models are. Since that yardstick establishes, definitively, that any projection falling within the envelope is void, and since all the models’ projections fall within that envelope, they cannot – repeat cannot – tell us anything whatsoever about how much global warming we may expect.
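The quadrature yardstick itself is elementary and can be sketched in a few lines. The per-step value of ±2 K below is an illustrative assumption chosen so that 100 iterated steps reproduce an envelope of roughly ±20 K, as quoted above; it is not the per-step value the paper derives from the ±4 W/m² LWCF error:

```python
import math

def propagate_in_quadrature(per_step_uncertainty, n_steps):
    """Root-sum-square of n identical, independent per-step uncertainties:
    u_total = sqrt(sum of u_i^2) = u * sqrt(n)."""
    return math.sqrt(sum(per_step_uncertainty ** 2 for _ in range(n_steps)))

# Illustrative: +/-2 K per annual step, iterated over a century
envelope = propagate_in_quadrature(2.0, 100)  # grows as sqrt(n_steps)
```

The point of the rule is that the envelope depends only on the per-step uncertainty and the number of iterations, not on anything else inside the model.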

I am still trying to reach a conclusion on this. Where is Steven Mosher when you need him?

Uncertainty represents lost information. Once it is gone, there is no way to recover the lost information. This is the essence of Pat’s analysis.

Roy Spencer seems to agree in principle, but doesn’t seem to accept Pat’s approach.

I have a number of points I’d like to add.

Uncertainty can only increase with each model step. A model has no prospect of “patching in” new assumptions to compensate for loss of information in earlier steps.

Pat’s uncertainty bounds go beyond what some people consider to be a feasible range. Fine, then crop Pat’s uncertainty to whatever range you are comfortable with. All you will conclude is the same thing: you have no way of knowing where the future will lie within your range. That’s fundamentally the same conclusion as Pat’s, but you have made it more palatable to yourself. It doesn’t mean Pat is wrong in any way.

Models produce similar outputs because they are operating within “constraint corridors” (as somebody called it) which exclude them from producing a wider range of outputs. It is not evidence of reducing uncertainty. Lost information is gone, and lots of models running with similar levels of lost information cannot create any new information.

Constraints do not reduce uncertainty. They only introduce assumptions with their own inherent uncertainties, and therefore total uncertainty increases when a constraint is relied upon as a model step. For this, I would like to refer to the assumed TOA energy balance using the following very simple equation:

N(+/-n) = A(+/-a) + B(+/-b) + X(+/-x)

Uppercase are model OUTPUTS and lower case are uncertainties which are model PROPERTIES.

N has a value of zero because it is the model-assumed TOA flux balance.

A and B balance, representing Roy’s biases and assumed (but unidentified) counter-biases when the assumed TOA constraint is applied.

X is zero (not recognised by the model) and represents concepts like Pat’s modelling errors.

The fact that the uppercase items can add to zero does not mean the lowercase uncertainties cancel each other. In fact the opposite is true. Roy’s assumption of counter-biases represents more lost information (if we knew about them we should be modelling them), so the value of ‘b’ has the effect of increasing ‘n’ as the uncertainties are compounded in quadrature.
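A minimal numerical sketch of this point, with purely illustrative values for A, B and X standing in for the balanced fluxes and the unrecognised error term:

```python
import math

# Outputs that sum to zero do not make their uncertainties cancel.
# Values mirror the N = A + B + X example above and are illustrative only.
A, a = 5.0, 0.5    # model output flux and its uncertainty
B, b = -5.0, 0.5   # counter-bias forced in so that the TOA budget balances
X, x = 0.0, 4.0    # unrecognised error term (e.g. a calibration error)

N = A + B + X                      # balances to exactly zero
n = math.sqrt(a**2 + b**2 + x**2)  # uncertainties compound in quadrature
```

The balance N = 0 is achieved by construction, yet the combined uncertainty n is larger than any single contribution: tuning outputs to cancel adds terms to the quadrature sum rather than removing them.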

To me, this is a very valuable comment.

Thank you, Jordan.

Dave Day

“Uncertainty represents lost information. Once it is gone, there is no way to recover the lost information.”

GCMs famously do not keep a memory of their initial states. Nor do CFD programs. In this they correctly mimic reality. You can study the wind. What was its initial state? You can do it more scientifically in a wind tunnel. No-one tries to determine the initial state there either. It is irrelevant.

So yes, the lost information can’t be recovered, but it doesn’t matter. It didn’t contain anything you wanted to know. And much of this error is of that kind. The reason it doesn’t matter is that what you actually find out from a GCM or CFD is how bulk properties interact. How does the lift of a wing depend on oncoming flow? Or on angle of attack? None of these depend on the information you lost.

I totally disagree with that, Nick Stokes. But you have widely advertised your complete inability to understand these concepts on this thread. And your description of Eqn 1 as “cartoonish” was a breathtaking display of arrogance and lack of self-awareness. I really have no interest in what you have to say, so don’t bother responding to my comments.

What you write is correct as far as it goes, but now consider this system which is much closer to the model we are all supposed to be considering.

Temperature varies with the net flux (imbalance), N(t)

N(t) = A – B + F – lambda*deltaT + X(+/-x)

A = B with a correlation of 1

B = sum over i of (b_i +/- error_i)

Can you calculate the contribution of the uncertainty in b_i to X?

Franks: The uncertainty is huge so the results are meaningless; Spencer: models are adjusted to stay within the bounds of a physically-realizable outcome so this uncertainty is meaningless. Let me ask a question. If physics indicated that temperature swings could in fact be 25C or higher so that no artificial bounding of modeled results would be needed, would the models be producing different outcomes? If so, then I would say Franks is right — the models are physically meaningless.

Frank not Franks.

On the flip side: if the real world cannot plausibly vary in temperature this much, then it also implies that the uncertainty is also not this high. It doesn’t take a fancy computer model — simply the idea that cloud forcings could change by 20W/m2 within a few decades is itself pretty implausible.

And the only reason that Frank is saying that cloud forcing *could* vary this much is because he treated the cloud forcing uncertainty (4 W/m2) as a *change* in cloud forcing uncertainty (4 W/m2/year). So he can integrate that over time, and the uncertainty in cloud forcing *in the real world* grows over time, without end or bound, to infinity. Does that sound physically realistic? Units are important, yo.
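The two readings of the same number lead to very different uncertainty behaviour, which a short sketch makes plain (quadrature growth is assumed for the per-year reading; all values illustrative):

```python
import math

def static_band(u, years):
    """Reading the figure as W/m^2: the same band at every year."""
    return [u for _ in range(years)]

def compounded_band(u_per_year, years):
    """Reading the same figure as W/m^2/year: quadrature growth u*sqrt(t)."""
    return [u_per_year * math.sqrt(t) for t in range(1, years + 1)]

static = static_band(4.0, 100)       # +/-4 W/m^2 at year 1 and at year 100
growing = compounded_band(4.0, 100)  # +/-4 at year 1, +/-40 by year 100
```

Which reading is correct is precisely what the commenters are arguing about; the sketch only shows that the choice of units decides whether the band stays fixed or grows without bound.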

Lauer et al indicate on average it changed +/- 4 W/m2 per year. Your argument then is with Lauer.

No, they indicated it changed 4 W/m2, not per year. It is the same over any time period. At *any* given point in time, it can be within this +/- 4 W/m2 range, and this does not change over time.

On a previous discussion on this point, someone went so far as to actually email Lauer himself. Here’s the reply (emph. mine)

So Lauer also says that there’s no particular timescale attached to the value. It’s *just* W/m2, not W/m2/year.

https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/#comment-1443

Thanks. That is one interpretation I was considering. Still it represents the uncertainty. In that case it may still be able to propagate, but may need to be treated differently. The division by sqrt(N) may be sufficient.
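For what it is worth, the two standard roles of sqrt(N) for N independent errors can be sketched side by side; whether the averaging rule or the accumulating rule applies to the Lauer figure is exactly the point in dispute here, and the values below are illustrative only:

```python
import math

# Two opposite textbook roles of sqrt(N) for N independent errors of size u:
# averaging N independent measurements tightens the mean's uncertainty,
# while iterating N steps that each add an error grows the running total.
u, N = 4.0, 20  # illustrative: +/-4 W/m^2 error, 20 annual values

uncertainty_of_mean = u / math.sqrt(N)  # a 20-year mean is tighter than one year
uncertainty_of_sum = u * math.sqrt(N)   # 20 accumulated steps grow the envelope
```

The same input figure can thus shrink or grow with N depending on whether it is treated as a repeated measurement of one quantity or as a fresh error injected at every step.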

This was the first question I had about Pat Frank’s study. I hope this gets clarified. Otherwise I still believe the approach is correct. Lauer’s paper strongly implied this was a 20-year multi-model annual mean value.