September 12th, 2019 by Roy W. Spencer, Ph. D.

**NOTE:** *This post has undergone a few revisions as I try to be more precise in my wording. The latest revision was at 0900 CDT Sept. 12, 2019.*

*If this post is re-posted elsewhere, I ask that the above time stamp be included.*

Yesterday I posted an extended and critical analysis of Dr. Pat Frank’s recent publication entitled *Propagation of Error and the Reliability of Global Air Temperature Projections.* Dr. Frank graciously provided rebuttals to my points, none of which have changed my mind on the matter. I have made it clear that I don’t trust climate models’ long-term forecasts, but that is for different reasons than Pat provides in his paper.

What follows is the crux of my main problem with the paper, which I have distilled to its essence, below. I have avoided my previous mistake of paraphrasing Pat, and instead I will quote his conclusions *verbatim*.

In his Conclusions section, Pat states “*As noted above, a GCM simulation can be in perfect external energy balance at the TOA while still expressing an incorrect internal climate energy-state.*”

This I agree with, and I believe climate modelers have admitted to this as well.

But, he then further states, “*LWCF* [longwave cloud forcing] *calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models*.”

While I agree with the first sentence, I thoroughly disagree with the second. Together, they represent a *non sequitur*. **All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!**

Why?

If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior:

Figure 1. Yearly- and global-average longwave infrared energy flux variations at top-of-atmosphere from 10 CMIP5 climate models in the first 100 years of their pre-industrial “control runs”. Data available from https://climexp.knmi.nl/

**Importantly, this forced-balancing of the global energy budget is not done at every model time step, or every year, or every 10 years. If that was the case, I would agree with Dr. Frank that the models are useless, and for the reason he gives.** Instead, it is done once, for the average behavior of the model over multi-century pre-industrial control runs, like those in Fig. 1.

**The ~20 different models from around the world cover a WIDE variety of errors in the component energy fluxes, as Dr. Frank shows in his paper, yet they all basically behave the same in their temperature projections for the same (1) climate sensitivity and (2) rate of ocean heat uptake in response to anthropogenic greenhouse gas emissions.**

Thus, the models themselves demonstrate that their global warming forecasts do not depend upon those bias errors in the components of the energy fluxes (such as global cloud cover) as claimed by Dr. Frank (above).

That’s partly why different modeling groups around the world build their own climate models: so they can test the impact of different assumptions on the models’ temperature forecasts.

Statistical modelling assumptions and error analysis do not change this fact. A climate model (like a weather forecast model) has time-dependent differential equations covering dynamics, thermodynamics, radiation, and energy conversion processes. There are physical constraints in these models that lead to internally compensating behaviors. There is no way to represent this behavior with a simple statistical analysis.
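Roy's cancellation argument can be sketched with a toy zero-dimensional energy balance. This is a minimal illustration only, assuming a linear feedback; the `run` function and every number in it are invented for the sketch and come from no actual GCM.

```python
# Toy zero-dimensional energy-balance sketch (NOT a GCM): a constant flux
# bias, once re-balanced in the control state, leaves the model's response
# to an added forcing unchanged. All parameter values are assumptions.

def run(forcing, bias=0.0, rebalance=0.0, years=100,
        sensitivity=-1.0, heat_capacity=8.0):
    """Integrate C*dT/dt = forcing + bias + rebalance + sensitivity*T."""
    T = 0.0
    for _ in range(years):
        net = forcing + bias + rebalance + sensitivity * T
        T += net / heat_capacity
    return T

# Unbiased model, forced with a nominal 3.7 W/m^2:
t_clean = run(3.7)
# Model with a +4 W/m^2 component bias, re-balanced once so its
# control run (forcing = 0) sits at equilibrium:
t_biased = run(3.7, bias=4.0, rebalance=-4.0)
print(t_clean, t_biased)  # essentially identical warming responses
```

In this linear sketch the once-applied rebalancing cancels the bias exactly, so the forced response is unchanged; that is the behavior Roy points to in Fig. 1. Whether real, nonlinear cloud errors cancel this cleanly is exactly what Dr. Frank disputes.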

Again, I am not defending current climate models’ projections of future temperatures. I’m saying that errors in those projections are not due to what Dr. Frank has presented. They are primarily due to the processes controlling climate sensitivity (and the rate of ocean heat uptake). And climate sensitivity, in turn, is a function of (for example) *how clouds change with warming*, and apparently not a function of *errors in a particular model’s average cloud amount*, as Dr. Frank claims.

The similar behavior of the wide variety of different models with differing errors is proof of that. *They all respond to increasing greenhouse gases*, contrary to the claims of the paper.

The above represents the crux of my main objection to Dr. Frank’s paper. I have quoted his conclusions, and explained why I disagree. If he wishes to dispute my reasoning, I would request that he, in turn, quote what I have said above and why he disagrees with me.

Thank you both for these very enlightening posts. It is great to see civil disagreements with discussion on the technical issues.

Agreed. But I am curious: why weren’t the basics of this theory settled before the policy makers attempted to implement one side of the equation?

Hearty agreement, TRM

One thought not mentioned elsewhere –

even if Dr. Frank’s paper is found in the end to be significantly flawed, it still should be published. It examines very serious issues which are rarely scrutinised.

“If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out”

No, Dr Spencer – only the top of the atmosphere balances, but as there are plenty of fluxes into and out of the ocean, there is not the slightest reason to believe there is a balance.

Your argument is rubbish – it is the usual rubbish we get from those who can’t imagine that any heat goes into and out of the ocean and that the changes are at the TOA. The conclusion of +/-15 C is still valid.

Mike:

The fact that climate can change as a result of changes in heat fluxes in and out of the ocean is a separate issue. And I already mentioned how differences in the rate of ocean heat storage are one of the things that cause models to differ.

Did you even bother to read what I wrote? Or do you only like Dr. Frank’s conclusion, so you will blindly follow it?

Roy

I am not sure I see your point. The models are in balance, and only forcing from GHGs takes them out of balance; if that were the case in the real world, there would have been no LIA or any of the several warming periods over the last 15,000 years.

+1

Roy,

Yes, that premise has been thoroughly falsified. The climate fluctuates at short intervals with no outside forcing. So why continue with models based on false assumptions?

There are things to learn from running models with constraints that differ from (isolate and simplify) reality… BUT YOU CAN’T RESTRUCTURE THE WORLD ECONOMY AND POLITICAL SYSTEMS based on them.

“why continue with models based on false assumptions? ”

Because ‘ex falso, quodlibet’. If they assume the false things as true, they can prove anything. They can prove there is an invisible Godzilla made by humans which controls the climate, if they want to. It’s very convenient for the climastrological cargo cult.

What needs to be challenged here is this term “pre-industrial control run”. This is not a “control” in the usual scientific sense, yet it is used in a way that implies some control is being exercised.

What this is, is a calibration run resting on the ASSERTION that some arbitrary period in the climatological past was somehow in a perfect natural state and in equilibrium, such that if models are run from that state they should stay basically flat.

We only have to look at all that we know of the Earth’s past climate to know that this is a false supposition.

That models are designed and tuned to create stable output under such conditions is proof in itself that they are seriously defective. They are designed to demonstrate a pre-imposed assumption.

This “control-run” process ensures that all the egregious errors and assumptions (parametrisations) about clouds and whatever else in any individual model are carefully tweaked to be in overall balance and produce flat output in the absence of CO2. If cloud amounts are off by 14%, one or a group of opposing, ill-constrained parameters has been tweaked in the other direction to maintain the required balance.

Dr. Spencer is essentially right.

“This is not a “control” in the usual scientific sense”

Yes, it is. It is the sense of an experimental control. You have a program that you can set to run with its properties unchanged. Then you run it with various changes made, according to scenarios.

There is no attachment to some halcyon climate. Nothing much is to be read into pre-industrial. They simply put in a set of gas concentrations corresponding to what was reckoned pre-industrial, and let it run. No doubt if it shows some major trends they check their inputs. But it isn’t designed to prove anything. It just answers the question, if a scenario shows some change happening, what would it have been without the scenario?

Nick Stokes said: «They simply put in a set of gas concentrations corresponding to what was reckoned pre-industrial, and let it run. No doubt if it shows some major trends they check their inputs.»

They do that and never observe the known natural fluctuation of unforced climate.

Sorry Nick, that is not a “control-run” for the REAL PHYSICAL CLIMATE. It is merely a control-run for that specific model of climate. Changing an input and monitoring the output only tells you HOW THAT MODEL reacts to the change, not how the REAL PHYSICAL CLIMATE reacts.

The “tweaking” being talked about here is simply being done without much if any true knowledge of how everything ties together. It is more “cut and fit” to see if any way can be found to make the energy balance using “fudge factors”. Consequently, everyone should recognize that the modeler(s) don’t really know how everything fits together in the real physical world.

Until you have a model that can accurately forecast climate an hour, day, month, year, or decade ahead, then you don’t have a model that can produce a “control-run” for the real physical climate. Until then you simply have a collection of assumptions, hunches, and guesses that make a model.

“They do that and never observe the known natural fluctuation of unforced climate.”

That observation is the point of the control, and of the diagram Dr Spencer showed at the top.

This is known as sensitivity analysis. It is not a “control.” A control is independent of the model or experiment run. A model run is NOT an experiment.

Base climate errors do not subtract away.

They get injected into the subsequent simulation step, where the initial value errors are mangled even further by the theory-error inherent in the GCM.

You folks make all sorts of self-serving assumptions.

Nick likes that.

Dr. Spencer,

I do not have the background to evaluate the GCMs. However, consider the reason for double-blind studies in drug trials.

You wrote,

“The similar behavior of the wide variety of different models with differing errors is proof of that. They all respond to increasing greenhouse gases, …”

Your statement seems to imply some credence to these findings on the basis that multiple, independent lines of research reached the same conclusion. However, this is not any type of proof of concept. I cannot conceive of any model run that outputs cooler global temperatures being reported in the literature – because everyone “knows” that GHGs warm the atmosphere!

I believe the GCMs are complicated enough, that they would be “debugged” until they produce a result that is reasonable, i.e. warming. When everyone already knows the “correct” answer it is very difficult for someone to produce a contrary result and publish.

The climate forecast models are constantly wrong — anyone can see that. Plus NOAA reports that the atmosphere has cooled during the past 2 years while CO2 concentrations skyrocket. Amazing.

Yep, JS, Roy has AMSU and only sees radiation. The IPCC has models and sees nothing else. But researchers like ourselves, Piers Corbyn, and NASA Ames and Langley have been watching the Sun and see the Tayler instability. That and the Ideal Gas Laws make all their dreams go poof!

Those of us who live in the real world see crops struggling from the Wild Jetstream Effect and wonder why the Southern Hemisphere warms not at all under its CO2 loading. The false weather-gods espoused are now being put to nature’s test. I sit under southern SSWs and am in little doubt of the result. I thank you, Pat Frank, and Roy too. But please open your eyes, Roy. Brett Keane, NZ

I don’t consider climate models independent lines of research. Perhaps if they were transparent and generally unmolested, but that’s far from the case.

As it is, it is more like an exercise in finding out how many ways you can set the dials to get the same (predetermined) result.

Roy, I think that rather than claiming Pat’s argument is semantic, you should deal with his question specifically.

I work with my hands, building and constructing. Every day I frame an out-of-square object at least once, so I deal frequently with fit. After so many years I’m quite good at skipping steps to achieve fit. But I deal in margins of 1 mm to 3 mm.

Years ago I was tasked with CNC parts manufacturing. Tolerances sometimes demanded accuracy to .0001 of an inch. This is where the whole game changes.

To get dimensional fit within those margins we need to understand how error propagates. Fit can be achieved in a number of ways. But to do it consistently without fear of error propagation, the method must be sound and tested.

Pat is making a very simple point. A point you have made for years. Just because we are getting relative agreement at TOA, does not mean we arrived at those numbers while properly representing the variables.

We are all eager to constrain model outputs. And we are all eager to identify where and why we need to employ qualifying theories. You yourself have proposed mechanisms to better constrain our understanding of vertical heat transport.

Pat’s paper is not making a claim beyond determining error within model physics. And he has, in my mind adequately explained how that error can propagate while at the same time achieving relative fit.

This is all perfectly clear to any of us who have had to wrestle with Ikea installations. 🙂

Saying all of that, I hope you feel humbled and thoroughly rebuked. For my part, I have the deepest respect for your work and I continue to follow your career. It’s just nice to see you and Mr. Stokes agreeing on something, as well as nice to feel smarter than both of you in this moment.

The feeling is fleeting.

…to get that TOA agreement, they go back and adjust a whole pile of unknowns, without knowing who, what, when, or where.

Absolutely true, the height of a deck of cards is the same no matter how you stack the deck, but the order in which you stack them will greatly favor one game vs. another.

and that is exactly the point … the ‘models’ are constantly reset to compliance with the required objective. If the ‘models’ need manual intervention, then they are not modelling the system, as they ‘remove’ the error without knowing what the error is. Who knows what they are doing? You don’t know what you don’t know – which is becoming increasingly obvious.

I wholeheartedly agree with Suppes above. Real world calibration, accuracy and resolution are critical factors.

Pat is using labels incorrectly, but he is correct in the main. He is correct that if the resolution of the cloud forcing accuracy is +/- 4 units and the suspected GHG forcing is only +1.6 units, you CANNOT trust any of the results for any practical real-world application! Period, end of discussion!

If I had to accurately weigh some critical items and the weight needed to be 1.6 grams (+0.1/-0.0), and my balance has a resolution of 1 gram, but it’s accuracy is +/-4 grams: I cannot use this balance for this determination.

As to Pat’s use of W/m² – Pat, if you apply that terminology, it is power per unit area, in this case per square meter. Period, end of discussion! Watts are power, Joules are energy. 1 Joule/second is 1 Watt.

If you intended to make it some dimensionless quantity say a coefficient of error, or a percent error you must not label it as Power per unit Area which is W/m².

Perhaps you both need to re-study some fundamental first principles! (Although you are both correct that the “models” are producing garbage results as regards the real world.)

I think Pat is correct, from the simple fact that error propagation is the most misunderstood part of science. It seems to be why no one likes to put error bars on their graphs predicting the future. This seems to me to be Pat’s point. If your uncertainty/error is so great that the possible future temperature is predicted/projected plus or minus 300x the predicted/projected value, then what is the value of the prediction? Precisely nothing. Hence the issue between accuracy and precision. They have developed a precise method of missing the barn door by a mile.

v/r,

David Riser
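The growth David describes can be sketched with the standard root-sum-square propagation rule for iterated steps. This is illustrative only: the per-step value below is an assumption for the sketch, not a figure taken from the paper.

```python
import math

# Root-sum-square accumulation of identical, independent per-step
# uncertainties over n iterated steps (illustrative numbers only).
def propagated_uncertainty(per_step_sigma, n_steps):
    """sigma_total = sqrt(n) * sigma_per_step for n independent steps."""
    return math.sqrt(n_steps * per_step_sigma ** 2)

per_step = 0.42  # assumed +/- per-step uncertainty, in arbitrary units
for n in (1, 25, 100):
    print(n, round(propagated_uncertainty(per_step, n), 2))
```

The point of the sketch: the propagated band grows with the square root of the number of steps even when each step's output looks perfectly repeatable.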

I call W/m^2 energy flux, D. Boss. That’s the term of use in the literature.

Maybe sometimes I leave off the “flux,” if I’m going too fast or being a bit careless.


Dr Spencer, your argument is that of creationists: because you can’t imagine anything but the obvious things that you can see could have caused the world we see, they claim god, you claim that the models must be in balance.

What Dr Frank has used is an analysis that accounts for the possibility of unknowns. You claim you don’t know of any unknowns and therefore falsely claim the climate must only be responding to what you know. Dr Frank has demonstrated that there are unknowns. I then point to the fact that we know heat goes into and out of the ocean, which is quite capable of explaining all the unknown variations. So it is not, as you imply, a physical impossibility that unknowns exist.

Dr Frank has quantified these unknowns and when they are taken into account there is a +/-15C error at the end of 100 years.

You have taken a creationist-type argument: you can’t see anything that could change the climate, and therefore, because you don’t know of any unknowns, you conclude as an omniscient scientist that they don’t exist. And then you falsely conclude Dr Frank must be wrong.

We’re still trying to model a chaotic system? How rude!

Nice try at an adolescent drive-by swipe at religion. Epic fail, but nice try!

You have it exactly backwards on who lacks imagination. If we found a plate and a spoon on Mars that we knew for certain humans didn’t place there, it would be massive news. But the same people who would get the vapors over a plate and a spoon on Mars look at ATP Synthase and yawn.

Dr. Spencer, I think I agree a bit more with your view than Dr. Frank’s. But the difference in the views is subtle. I share your distrust of using simple statistics in disputing a theory (or hypothesis) based upon a lot of data and modeling. Too many places to make mistakes with that approach. However, cloud cover is a huge unknown. How cloud cover will change as climate changes is an even larger unknown. The modelers admit this; we all know it. Dr. Frank did not really disprove the model results, but he did point out how big the possible cloud cover error is relative to what the modelers are trying to estimate. This is a valuable contribution, at least in my humble opinion.

I tend to agree…

But I think that’s mostly because Dr. Spencer does a great job in communicating it in plain language to those of us who would prefer not to relive the nightmare of Differential Equations… 😉

Kudos to both Dr. Frank and Dr. Spencer and to Anthony Watts for hosting this discussion.

See my reply to Andy, David.

Or my replies to Roy on his blog.

He is arguing that a model calibration error statistic is an energy.

Pat:

I have no idea what your point is here. Please dispute a specific claim I have made in the current post.

I know you have no idea of my point, Roy. That’s the point.

An error statistic is not an energy.

You’re treating the ±4 W/m^2 as an energy.

You’re supposing that if it were important, the models would not be in TOA energy balance, and that they’d not all predict similar impacts from CO2 emissions. None of that is true or correct.

The models are all (required to be) in energy balance. Nevertheless, they make large errors in total cloud fraction. Different models even have different cloud fraction errors.

The error in cloud fraction means their simulated tropospheric thermal energy flux is wrong, even though the model is in over-all TOA energy balance.

That wrong thermal flux averaged across all models, yields a model calibration error statistic. It conditions the model predictions.

This sort of thinking, common in the physical sciences, is apparently unknown in climatology. Hence your problem, Roy.

I want to support Pat on this point.

Roy is claiming that a propagated error, which is an attribute of the system, has to affect the calculated values, and further claims that because the calculated values are generally repeatable, they are therefore “certain”.

This is simply not how results and propagated uncertainty are reported. The result is the result and the uncertainty is appended, depending on the uncertainty of various inputs. X±n

The uncertainty ±n is independent of the output value X of the model.

If every model in the world gave the same answer 1000 times, it doesn’t reduce the propagated uncertainty one iota. Roy claims it does. It doesn’t. Look it up on Wikipedia, even. That is simply not how error propagation works.

It is very instructive to see a highly intelligent, accomplished man utterly miss a key concept, even when it is succinctly explained to him a couple of times. It’s a very good reminder to stay humble and to be very thankful for Eureka! moments of clarity of thought.

Thank-you Crispin. Your description is very good.

Hoping you don’t mind, I reposted your comment on Roy’s site, as an example of someone who understands the case.

It is unfortunate that people who see (+/-) confuse the number that follows with sigma, the standard deviation, and its relative, the variance. These are statistical calculations describing a population of data points. Sigma used as a statistical calculation assumes a mean value and describes deviations from that mean.

This paper describes the propagation of error throughout calculations to the end result.

As an example, take the numbers 1, 2, and 3. This could be a population of data points.

You would get the following

(range = 2)

(variance = 0.667)

(standard deviation = 0.816)

Now assume these are consecutive measurements, in feet, to determine the length of an object and assume each measurement has a possible error of +/- 0.1. In other words they are added together to find a total.

(1 +/- 0.1) + (2 +/- 0.1) + (3 +/- 0.1) You can’t describe this as 6 +/- 0.1 because that is incorrect.

What are the worst possible outcomes? 1.1 + 2.1 + 3.1 = 6.3 ft, or 0.9 + 1.9 + 2.9 = 5.7 ft.

So the answer would be 6 +/- 0.3 feet. This is how measurement errors propagate.

Notice that (+/- 0.3) is different from (+/- 0.816). The standard deviation could easily remain at +/- 0.816 depending on the distribution of additional data points. However, +/- 0.3 will just keep growing for each iteration of additional measurements.
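The comment's arithmetic can be checked in a few lines. The final root-sum-square line is an addition for contrast: it is the rule used when per-measurement errors are independent and random, whereas the comment's ±0.3 is the worst-case linear bound.

```python
import math
import statistics

values = [1.0, 2.0, 3.0]

# Statistics of the data points treated as a population:
rng = max(values) - min(values)     # range = 2.0
var = statistics.pvariance(values)  # population variance = 0.667
sd = statistics.pstdev(values)      # population standard deviation = 0.816

# Propagation of measurement error when the readings are summed:
err = 0.1
total = sum(values)                 # 6.0 ft
worst_case = len(values) * err      # +/- 0.3 ft, worst-case linear bound
rss = math.sqrt(len(values)) * err  # +/- 0.17 ft if errors are independent
```

Note the two propagated figures bracket the behavior: fully correlated errors add linearly (±0.3), fully independent errors add in quadrature (±0.17), and both keep growing as measurements are chained.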

*An error statistic is not an energy.*

First, thank for all of the time you have put into this.

I was just wondering if there might be any way to simplify all of this with a gedanken experiment, maybe?

For example, suppose we have a planet orbiting a star and you use an equation/model to report the planet’s position each year, with some degree of uncertainty (distance). So, each year, even though your model might give you nearly the same position, the uncertainty would accumulate / propagate. If the process went on long enough, eventually the uncertainty (distance) would become as great as the orbital circumference. (and then we have a problem)

You could think of the uncertainty as a “real distance” up to that point; however, once the uncertainty exceeds the orbital circumference, it can no longer be treated as a distance due to the overlap (positional degeneracy). So, in the end, after many iterations, you have a calculation that gives a specific position but carries a tremendous amount of uncertainty – uncertainty that, given as a distance, would be meaningless.

I am not sure if that works?

cwfisher, that’s a pretty good analogy, thanks.

As the uncertainty increases, even though the predicted position of the planet is discrete, it becomes less and less likely we’ll find it there.

Eventually, when the uncertainty is larger than the orbital circumference, as you point out, we’d have no idea where the planet is, no matter the discrete prediction.
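The analogy can be made concrete with a toy loop. Every number here is invented for illustration, and the root-sum-square growth is one common propagation assumption, not a claim about any real orbit.

```python
import math

# Toy version of the orbital gedanken experiment (all numbers are
# assumptions): per-orbit positional uncertainty accumulates in
# quadrature until it exceeds the circumference, after which the
# discrete predicted position carries no information.
circumference = 1000.0   # arbitrary units
per_orbit_sigma = 25.0   # assumed positional uncertainty added each orbit

orbit, sigma = 0, 0.0
while sigma < circumference:
    orbit += 1
    sigma = math.sqrt(orbit) * per_orbit_sigma  # root-sum-square growth
print(orbit)  # first orbit at which the uncertainty wraps the whole orbit
```

The model still reports one exact position every orbit; what degrades is only the information content of that prediction.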

A very interesting point. Yet when we know the Newtonian pair-wise approach to a 3-body problem explodes, the reason is not uncertainty; rather, it is the linear pair-wise action-at-a-distance encoded in the maths. Which is why we needed General Relativity with curved spacetime, Mercury’s perihelion being the case in point.

Now, is it possible that besides uncertainty and resolution (a very thorough, refreshing analysis by Pat Frank, thanks), the entire linear pair-wise flat climate models encode the same error?

And this consideration leads me immediately to a typical MHD plasma model – has anybody run this kind of analysis on, for example, solar flares (outside Los Alamos, I mean)? If the error is not just statistics but an encoded pair-wise ideology, we will never understand the Sun, nor contain fusion.

David M.

Diff Eq simplifies calculus; what’s not to like?

Diff Eq does not simplify calculus… Calculus, derivatives and integrals, was easy. Differential Equations, Cauchy-Euler, Variation of Parameters, etc., was a fracking nightmare. I have no idea how I got a B in it.

Do you think a statistic is an energy, Andy?

Because that equation is the essence of Roy’s argument.

Again, I support this. An uncertainty of 4 W/m^2 is an uncertainty about the value, not an additional forcing. It can also be written that the uncertainty of the forcing per square meter is “4”. Suppose the forcing was 250 W/m^2. It should be written

250 ±4 W/m^2

That a variation from a 4 W change doesn’t show up in the result(s) has no bearing on the uncertainty, which has its own source and propagation. If one wanted to find the results that would obtain at the limits, run the model with 254 and 246 W and observe the results. Simple. They will be different.

Because in the beginning the uncertainty is 4 W, it cannot be less than that at the end, after propagation through the formulas. Running the model with 250 and getting consistent answers doesn’t “improve” the quality of 250±4. You can’t claim, with a circular argument, that running it with 250 only means there is no 4 W uncertainty.
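Crispin's suggestion to probe the model at the limits of its input uncertainty can be sketched with a stand-in model. The `toy_model` response below is entirely made up; it only illustrates the procedure, not any real GCM behavior.

```python
# Probe a (hypothetical) model at the uncertainty limits of its input,
# 250 +/- 4 W/m^2, instead of treating repeatable output at 250 as
# evidence that the uncertainty has gone away.
def toy_model(flux_w_m2):
    """Stand-in for a model: a made-up monotone response (assumption)."""
    return 0.3 * (flux_w_m2 - 240.0)  # arbitrary illustrative relation

center = toy_model(250.0)
hi, lo = toy_model(254.0), toy_model(246.0)
spread = hi - lo  # output spread induced by the +/- 4 input uncertainty
print(center, hi, lo, spread)
```

Running only at 250 would give the same `center` every time, yet the band `[lo, hi]` shows the uncertainty the repeatable output conceals.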

+1

+1

Yep. Gets a plus 1 from me too

Dr Frank

Please clarify a point that I find a sticking point in trying to conceptualize your analysis.

Is this 4 W/m^2 uncertainty a yearly value that would be a different number for other-sized steps, such as (4 W/m^2)/12 for monthly steps?

Pat, I agree with Crispin. It’s not an energy, but the uncertainty in the measurement. Your statistical analysis shows that the measurements and models are not accurate enough to estimate man’s contribution to climate change, and I thank you for doing this. Well done. But the statistical analysis does not disprove the models; it just shows they’ve been misused. A subtle difference to be sure, and perhaps not an important one, but there nonetheless.

I have made plenty of comments, but have been careful not to weigh in on the question of validity.

I am awaiting clarity to smack me over the head.

I try to keep it straight what I know is true, what I think may be true, and what I am not sure of.

After all, the whole question in so much of this from the get-go to the present discussion centers on false certainty.

It is not so much the subtlety of the difference in views that may exist from one knowledgeable person to the next.

For me it is not being sure how the models even purport to tell us what they tell us.

I am not even sure that we are given clear information about what exactly it is the models output, and what they do not.

Do they produce three dimensional temperature maps of the Earth?

Is temperature the only thing they keep track of?

We know they do not keep track of clouds, so how do they keep track of movements of energy from convection, latent heat, precip, etc?

The models are flawed from the get-go because they try to balance the TOA energy. The real model would balance the input of energy at the surface against the outward flux at TOA. The only problem with that is that the release of energy from the surface is chaotic, so it can’t be accurately modeled or predicted.

The incoming energy gets converted and stored in all kinds of media: water, ice, biomass, kinetic energy, GHGs, asphalt, etc. Some of that energy is stored for thousands, maybe even millions of years before it is released back to the atmosphere. As such, the incoming energy on a particular day is fairly meaningless for that day. Today’s energy penetrating the depths of the ocean is not going to affect the atmospheric temperature today, tomorrow or even next week, nor will it contribute to outgoing radiation today, tomorrow or next week. Energy stored in biomass this year may not be released for a couple of years … or maybe even decades … or, in coal and oil’s case, millions of years.

The problem with the GHG theory is that GHGs emit at the same rate they absorb. Thus, while GHGs are yet another energy sink, they have no long-term storage capacity, and thus just reflect the current state of energy flow from the surface to TOA. Doubling GHGs doubles the amount of LW radiated down, but it also doubles the amount radiated up, so it is a zero-sum game.

Balancing creates a propagated system error throughout the entire process. That error can never be random and can never be cancelled out. Any intensive property like temperature has to reflect a parameterization of the process. You can’t count up to get a temperature. You must do it by parameterization. The very fact of parameterization itself creates a systematic error. Any equation with a constant has been derived from experiment. There are systematic errors in those experiments.

“Global effect of anthropogenic CO2 emissions” presupposes that you know the non-anthropogenic CO2 emissions, which are themselves highly inaccurate. And we most certainly don’t know the mix of the two causes of temperature change (anthropogenic and non-anthropogenic). We also don’t know the actual global temperature increase to any great accuracy. For climate models (which don’t know the answers to the last two points) to be fit for policy, those answers should be known to much less uncertainty than at present.

Roy is arguing that because one part of the system has been forced to be in balance, that lessens the system error. One can never lessen the system error unless the parameterization through experiments is calibrated more accurately. Unfortunately, whole-system experiments on the Earth system are difficult, to say the least.

Also correct. You can't magic away errors that are non-Gaussian. Satellite sea-level data products have the same issue.

If you double the amount of energy being emitted upwards, downwards and sideways, then the emitting medium has to increase in temperature: both the CO2, to emit twice as much, and the rest of the air, due to the build-up in kinetic energy needed for the CO2 to be hot enough to absorb and emit at that increased rate.

It is a zero sum eventually in terms of energy in and out, but the bit that causes the emission is running along at a higher energy (heat) level.

Dr Deanster posted: “The problem with the GHG theory, is that GHGs emit at the same rate they absorb. Thus, while GHG are yet another energy sink, they have no long term storage capacity, and thus just reflect the current state of energy flow from the surface to TOA. Doubling GHG doubles the amount of LW radiated down, but it also doubles the amount radiated up, so it is a zero sum game.”

Well, there are so many problems with these statements, one hardly knows where to begin.

First, GHGs do NOT emit at the same rate as they absorb. GHGs can and do lose energy they absorb via radiation by pure “thermal” exchange (collisional exchange of vibrational energy) with other non-GHG atmospheric constituents, mainly nitrogen and oxygen. I believe the physical concept is called thermal equilibration of mixed gases.

Second, if you examine the vertical energy exchange during nighttime, it is obvious that GHGs are only seeing incoming energy from Earth’s surface (the net radiant energy exchange within the atmosphere itself is negligible in comparison), but the GHGs radiate equally in all directions, that is, isotropically. Therefore, approximately half of their radiation is directed back toward Earth’s surface. Hence, they radiate less energy outbound (to TOA and space) than the energy they receive from Earth’s surface over the range of nighttime temperatures.

Third, to the extent that GHGs have very little heat capacity in and of themselves (being such small mass fractions of the global atmosphere, excluding water vapor), they don’t qualify as “sinks” for energy, even considering short-term variations.

Finally, it is absurd to say that doubling any GHG doubles the amount of energy radiated up/down. It is well-known (well, perhaps outside of global climate models) that any gas absorbing radiation can become “saturated” in terms of radiation energy absorption if the “optical column length” exceeds a certain value, generally taken to be six e-folding lengths. This is well summarized in the following paragraph extracted from http://clivebest.com/blog/?p=1169 :

“There is a very interesting paper here : http://brneurosci.org/co2.html which describes the basic physics. The absorption length for the existing concentration of (atmospheric – GD) CO2 is around 25 meters i.e. the distance to reduce the intensity by 1/e. All agree that direct IR radiation in the main CO2 bands is absorbed well below 1 km above the earth. Increasing levels of CO2 merely cause the absorption length to move closer to the surface. Doubling the amount of CO2 does not double the amount of global warming. Any increase could be at most logarithmic, and this is also generally agreed by all sides.”
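For what it's worth, the "at most logarithmic" point in the quoted paragraph matches the commonly cited simplified forcing expression ΔF ≈ 5.35·ln(C/C0) W/m². That expression is something I am supplying as an illustration; it does not come from the comment itself:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 fit).
    Note the logarithmic, not linear, dependence on concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Doubling 280 -> 560 ppm gives ~ +3.7 W/m^2; a second doubling to
# 1120 ppm adds only the same increment again, not twice as much.
delta_first_doubling = co2_forcing(560.0)
delta_second_doubling = co2_forcing(1120.0) - co2_forcing(560.0)
```

Each doubling contributes the same increment, which is why forcing grows far more slowly than concentration.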

This point touches on an issue I have with the models: Do we have any theory, or actual measurement, of what % of the heat energy at the Earth’s surface rises well up into the atmosphere (say above 50% of the CO2 in the atmosphere) via convection (especially with thunderstorms) vs what % rises by IR radiation?

Heat rising by convection to such a height would partially neutralise a rising CO2 greenhouse effect. If that % rises as the Earth warms a little, because storms become stronger (as we have been told recently re hurricanes, which are said to be strengthening with “climate change”), then this could constitute a significant negative feedback for global warming.

I would hazard a guess that if even a few % of the rising heat is due to convection, and if that % is sensitive to rising ocean temps, then we may have an explanation, separate from cloud-cover changes, for why the climate models all seem to fail in predicting real-world temperature change. The heat-trapping effect of changing 300 ppm CO2 to 400 ppm CO2 might be neutralised by a correspondingly small change in convective heat loss.

Does anyone know of any real numbers on this % and its sensitivity to ocean temps?

How much global warming could be blamed on all the combusted exhaust leaving the chimneys that poke out of the roofs of commercial buildings, industries and power plants? These temperatures range from 250F to 1500F. This all has to be recognized as wasted heat energy. Why is this being allowed if global warming really is the problem it is claimed to be?

Recovering that waste heat energy and utilizing it will greatly reduce the amount of fossil fuel that needs to be combusted. This will greatly reduce CO2 emissions. With natural gas, for every 1 million Btu that is recovered and utilized, 117 lbs of CO2 will not be put into the atmosphere. What natural gas is not wasted today will be there to be used tomorrow.

In every 1 million Btu of combusted natural gas are 5 gallons of recoverable distilled water. To get at this water, the heat energy has to be removed from the combusted exhaust. This is done with the technology of Condensing Flue Gas Heat Recovery. The further the exhaust temperature is lowered, the greater the volume of water produced. Have you ever seen combusted natural gas irrigate lawns and flower beds?
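Taking the per-million-Btu figures quoted in the two comments above at face value (117 lbs of CO2 and 5 gallons of water per million Btu; I have not verified them independently), the savings simply scale linearly:

```python
# Figures quoted in the comments above (not independently verified):
CO2_LB_PER_MMBTU = 117.0    # lbs of CO2 avoided per million Btu recovered
WATER_GAL_PER_MMBTU = 5.0   # gallons of condensate per million Btu combusted

def recovery_savings(mmbtu):
    """Return (lbs CO2 avoided, gallons of water recoverable) for a given
    number of millions of Btu of natural-gas heat recovered."""
    return CO2_LB_PER_MMBTU * mmbtu, WATER_GAL_PER_MMBTU * mmbtu

co2_lbs, water_gal = recovery_savings(1000.0)  # e.g. 1,000 MMBtu recovered
```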

In the summer time there will be times when the exhaust leaving these chimneys can be cooler than the outside air temperature. Global Cooling?

That should never happen. If the atmosphere is our “dead state” then air entering it at a cooler temperature would mean we are wasting availability (exergy) by over-cooling a waste stream. In general one should consider recuperating waste heat, but there are circumstances in which it has no purpose and is non-economic to do so.

Only wrongly dimensioned condensers would be affected; if it's too hot outside, they can't liquefy.

I imagine that if it were cost-effective, the utilities would already be doing this. If it is not cost-effective but more effective than current green subsidies, then the government would do better to spend subsidy money on your idea.

I’m just a layman, so help me out here:

How can anyone recover waste heat energy and utilize it any manner that doesn’t result in most of the waste heat eventually entering the ocean or atmosphere?

If I use waste heat to generate steam, for example, and use that steam to drive a turbine, doesn’t the condensation of the steam release the part of the energy that didn’t go into driving the turbine?

You are correct that the reclaimed heat will turn into electricity and then into heat again.

But if you didn’t have this process the electricity would have to come from another source instead. So twice the amount of heat would be generated in total.

“If I use waste heat to generate steam, for example, and use that steam to drive a turbine, doesn’t the condensation of the steam release the part of the energy that didn’t go into driving the turbine?”

Yes and no.

The turbines cannot extract all the energy out of the steam, nor should they. If they did, the steam would condense within the turbines, and that is to be avoided for corrosion, erosion, and all sorts of other reasons harmful to turbine blades and bearings. So the turbines accept high-temperature, high-pressure steam and exhaust lower-temperature, lower-pressure steam that is still well above condensation. This still highly energetic steam is then used to pre-heat incoming water headed to the boiler (which is actually recovered condensed steam from the same loop), where it begins to condense before needing further cooling from external heat exchangers.

To get the entire system to extract energy from a heat source (burning fuel) you need a heat sink (external condensers). The thermal difference is what drives the system. How you use your waste heat stream to reheat condensed water will improve efficiency, but there are theoretical and practical limits regarding how much useful energy can be extracted from steam systems. The thermodynamics of steam energy regarding best temperature and pressure ranges for optimal energy conversion have been around for a very long time.
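The "theoretical limits" mentioned above come down to the source and sink temperatures; here is a minimal sketch of the Carnot bound (the specific temperatures are illustrative values I chose, and real Rankine-cycle plants fall well below this ideal):

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Ideal upper bound on the fraction of heat convertible to work
    between a source at t_hot_k and a sink at t_cold_k (kelvins)."""
    return 1.0 - t_cold_k / t_hot_k

# Modern superheated steam (~813 K) rejecting to a ~300 K condenser:
eta_main = carnot_efficiency(813.0, 300.0)    # ~0.63 ideal; real plants lower
# A low-grade 400 K waste-heat stream against the same sink:
eta_waste = carnot_efficiency(400.0, 300.0)   # only 0.25 even before losses
```

The lower the temperature of the waste stream, the less useful work it can ever yield, which is the entropy point made a few comments down.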

It's been over 40 years since I did my thermo at the 'tute, but I remember the final exam. Only one question (with multiple parts). It gave the schematic of a power plant and asked, "Calculate the overall efficiency of the system, with calculations shown for each subsystem."

There is this little thing called entropy that somewhat gets in the way of reusing the 'wasted' heat energy. It quickly gets to the point of requiring more energy expenditure to capture and use that wasted heat than one can possibly capture, not to mention the general impossibility of getting it to someplace useful.

Yearly energy usage (actually used plus wasted) of all human activity is equal to only a few hours of one day of energy input from the sun. Which is to say, it is so far below the precision of measurement of total energy flux that it cannot affect any TOA energy-balance measurements or theoretical calculations thereof.
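A rough order-of-magnitude check of that claim, using round numbers I am supplying rather than figures from the comment (roughly 600 EJ per year of human primary energy use, and roughly 174 PW of solar power intercepted by Earth):

```python
# Round-number assumptions (mine, not the commenter's):
HUMAN_ENERGY_EJ_PER_YEAR = 600.0   # ~annual human primary energy use, exajoules
SOLAR_INTERCEPTED_W = 174.0e15     # ~solar power intercepted by Earth, watts

human_joules = HUMAN_ENERGY_EJ_PER_YEAR * 1.0e18
equivalent_sun_seconds = human_joules / SOLAR_INTERCEPTED_W
equivalent_sun_hours = equivalent_sun_seconds / 3600.0  # on the order of an hour
```

With these assumptions, a year of human energy use corresponds to roughly an hour of incoming sunlight, consistent with the "few hours" scale claimed above.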

Wouldn’t an arbitrary constraint on the deviation of LWCF in response to increased CO2 introduce constraints in model results that may badly miss actual results, should LWCF respond in non-linear fashion to delta CO2?

There is no explicit constraint on how a model responds to increasing CO2. And the model allows clouds to change nonlinearly to warming resulting from increasing CO2. The models allow all kinds of complex nonlinear behavior to occur. That still doesn’t mean their projections are useful, though. And any nonlinearities add uncertainty. For example, I don’t think the models handle changes in precipitation efficiency with warming, which affects water vapor feedback, and thus how much warming. It’s not that they don’t handle the nonlinearity, it’s that they don’t have ANY relationship included. A known unknown.

Dr. Spencer, the UN IPCC climate models’ average earth temperatures vary up to 3C between them. How does that temperature difference affect the physics calculations within the different models?

Yes. There are all sorts of nonlinearities in the model. That’s why Frank is wrong: the uncertainty explodes exponentially in climate models, not linearly.

What he says is true about his linear model. As long as his model simulates the bullshit simulators well, his model provides a lower bound for uncertainty in the pure and absolute crap models.

So yes, it’s worse than anyone thought.

+1

Is it possible to express Pat’s figures as the chance of the models hitting within, say, a 1C envelope over 20 years compared with the measured values? I guess if the chances are very slim and the models still hit the envelope, they do this with programmed boundaries. If so, actual prediction skill is close to zero.

I’m coming to the conclusion that if the models included estimates of the poorly understood factors that affect our climate in the real world, then we have Dr Frank’s scenario. If these factors are fiddled with to effectively neutralise them, and are not allowed to evolve with the passage of time (contrary to nature), then you get Dr Spencer’s scenario.

If this over-simplified explanation is correct, then both scientists are correct, but surely the point is that if the models truly simulated the real world at their current state of development, we get Dr Frank’s scenario, i.e. they fail.

Dr. Frank \’s paper says nothing about how the GCM’s get to there projections. His whole point is that , given the uncertainty of the cloud cover and its repeated computation in the time wise iterations of the models, the results are very uncertain. His point is that there is no reason to trust the accuracy (correctness) of their results even if their precision is reasonable. If his emulations of the GCM results is a valid analytic step, his final answer of uncertainty is correct. He is not saying the model cannot show a change for CO2 forcing. That is what they were programed to do. He is saying that programed change is well within the uncertainty bounds so is useful only for study and useless for prediction.

Exactly right, DMA. You’ve got it.

Dr Frank!

Do you agree with my angle of the argument a few lines above?

Not exactly, Mr. Z. It says even if the models happen to get close to a correct value, there’s no way to know.

Thanks,

If the error bars are as high as you calculate and they still manage to produce values within a 1C envelope over a 20-year period (and they do, comparing calculated vs measured), there must be some kind of boundaries involved.

My layman's logic tells me either there is a “boundary corridor” or the error bars are not properly calculated. A value cannot stay within range over 20 years by pure luck.

Please help me understand where my thinking is wrong. (Maybe with an example of a prediction where an initial uncertainty does not go ape).

“Not exactly, Mr. Z. It says even if the models happen to get close to a correct value, there’s no way to know.”

So, assuming the scenario that a model is reasonably close to the actual values, is it because of pure luck? How many trajectories are possible within the bounds of uncertainty? Surely dozens if not hundreds; does it mean each trajectory is equally probable?

To Paramenter:

I do hydraulic studies (flood modeling). The object of the model isn't to be precise; there is no way you can be precise. Yes, the output is to 4 decimal places, but the storm you're modeling isn't a real storm, nor is the soil infiltration, and the runoff routing, retention/detention values, tides and even temperature (viscosity) will all affect the flow values.

The point of a model is to produce a value where you can say “This will satisfy the requirements for safety and damage to property, for a given expected storm event”.

I get the impression that atmosphere modeling is similar if not many times more complex than flood modeling. Precision isn’t what you’re looking for. There is no such thing as precision with these dynamic random systems.

Mr. Z, error is not uncertainty.

If the physical theory is incorrect, one may produce a tight range of results, but they’ll have a large uncertainty bound because the calculated values provide no information about the physical state of the system.

Well, if that’s what Pat is saying, then I am wasting my time trying to explain this to people. It’s a very basic concept, and without understanding global energy balance and the basics of how models work, I will not be able to convince you that you can’t swoop in with statistical assumptions to analyze the problem in this fashion. I don’t know how to put it any more simply than what I have done in this post. If you do not understand what I have said from a physics point of view, we are wasting our time.

Where can we go for a clear explanation of what exactly the models do, how they do it, what they do not do, etc?

I think that at least some people believe the GCMs construct a miniature Earth as it exists in space, with the Sun shining on it, and with an atmosphere, and land, and oceans, just as they exist in real life, and then program in all the laws of physics and physical constants, and then just let the thing run into the future.

Roy W Spencer:

“I will not be able to convince you that you can’t swoop in with statistical assumptions to analyze the problem in this fashion.”

“Swoop” is obviously the wrong word for this multiyear project.

If you want freedom from statistical assumptions, you’ll probably have to go with bootstrapping, but a problem as large as this would require computers with about 100 times the speed of today’s supercomputers. Or you can try to work with more accurate statistical assumptions than those used by Pat Frank.

Meanwhile, because there are many sources of uncertainty, Pat Frank has most likely underestimated the uncertainty in the final calculation.

I do understand what you’re saying from a physics point of view, Roy.

However, as soon as you convert an error statistic into a physical magnitude, you go off the physics rails.

You’re thinking of an error statistic as though it were a physical magnitude. It’s not.

Calibration error statistics condition predictions made using physical theory.

This is standard in physics and chemistry. But not in climate science.

Perhaps if you found a trusted experimental physicist in the UA physics department — not one engaged in climate studies — and asked about this it might help.

Ask that person whether a ±4 W/m^2 calibration error statistic will impact the magnitude of the simulated TOA energy balance of a climate model.

Dr. Roy

You said “you can’t swoop in with statistical assumptions to analyze the problem in this fashion.” Does this mean that Dr. Frank’s use of linear emulations of the models is invalid as a step toward understanding the uncertainties in the models? If so, why? If Frank’s method is valid, his results are valid. If his method is not valid, we have no mathematical means of determining uncertainty in the complex equations used in the models. Then we are left with your suspicions that the models are not accurate, and the argument that their results rely on circular reasoning and are therefore potentially spurious and not fit for policy making.

I do support the explanation of Dr. Spencer.

I would ask in plain language what Dr. Spencer means with this clause: “It’s not that they don’t handle the nonlinearity, it’s that they don’t have ANY relationship included. A known unknown.”

I understand this clause to mean that climate models do not have a cloud fraction relationship included.

The most important feature of climate models is to calculate the climate sensitivity value. There is no cloud fraction variable included in the model, but it has been assumed to have the same effect at a 280 ppm concentration as at a 560 ppm concentration.

Pat Frank says that the climate sensitivity value – for example 1.8 degrees – has no meaning because the cloud fraction error destroys it totally through error propagation mechanism. Dr. Spencer says that the cloud fraction propagation error does not destroy it, if a model does not have this term in the model.

It is another analysis, what cloud fraction can do in the real climate. At any time it can have its input, which may be +4 W/m2 or -4 W/m2 or something in between. That is anyway enough to destroy the radiative forcing of CO2, which is only +3.7 W/m2.

Antero,

“It is anyway enough to destroy the radiative forcing of CO2, which is only +3.7 W/m2 [per doubling].”

Exactly, complex statistical analysis is interesting but not necessary. Dr. Spencer himself has noted that a small change in cloud cover is enough to overwhelm CO2. It is one of his most persuasive arguments against catastrophic warming.

Antero, “I do support the explanation of Dr. Spencer.”

So, Antero, you think a calibration error statistic is an energy flux.

Great physical thinking, that.

Dr. Frank, as you note, this concept is absolutely critical. May I humbly suggest that you acknowledge that this concept is exceedingly difficult for most people to truly grasp, regardless of their background, experience or education. It is an overwhelmingly pervasive impediment to really understanding whether any projection has usable “skill”. My 40 years or so in various fields of engineering has tried to teach me that unless you patiently work on finding ways to communicate this core concept, you will get nowhere. Dr. Spencer is listening; patience, understanding, and empathy with the real difficulty of grasping this concept are key.

Please keep looking for ways to clarify your position. Those who do not see it are not stupid, not willfully missing it; it’s just damned hard to grasp for most.

Regards,

Ethan Brand

Exactly.

DMA:

“He is saying that programed change is well within the uncertainty bounds so is useful only for study and useless for prediction.”

That is the exact point that Dr Spencer misses in this quote:

“All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!”

The estimated sizes of the CO2 effect calculated from the models are well within the probable range of the error/uncertainty propagated through the model calculations. It is operationally indistinguishable from the effect of a chance variation in an important parameter.

In a simple linear case such as y = b0 + b1·X + e, the random variation in the data produces random variation in the estimates of b0 and b1. If you then use the estimated values of b0 and b1 to predict/model

Y(n+1) = b0 + b1·X(n+1), the uncertainties of b0 and b1 propagate through to uncertainties in the predicted Y(n+1). For the simple linear case, the uncertainty of Y(n+1) can be estimated, and for a given X(n+1) it might be much greater than the estimate itself. In that case, you would have to accept that you had no useful information about the true value of Y(n+1). This happens all the time when you are trying to measure small concentrations of chemicals: the estimate is then called “below the minimum detection level”.
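The simple linear case described above can be simulated directly; the sketch below (variable names and numbers are illustrative, not from the comment) fits y = b0 + b1·X + e by least squares and propagates the parameter uncertainties to a prediction at a new X:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a known linear model with noise
b0_true, b1_true, sigma = 1.0, 0.5, 2.0
x = np.linspace(0.0, 10.0, 20)
y = b0_true + b1_true * x + rng.normal(0.0, sigma, x.size)

# Least-squares estimates of (b0, b1) and their covariance matrix
X = np.column_stack([np.ones_like(x), x])
beta, residuals, *_ = np.linalg.lstsq(X, y, rcond=None)
s2 = residuals[0] / (x.size - 2)        # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)       # covariance of the parameter estimates

# Propagate to a prediction at x_new: var = v^T cov v (+ s2 for a new obs.)
v = np.array([1.0, 12.0])               # [1, x_new] with x_new = 12
y_pred = v @ beta
se_pred = np.sqrt(v @ cov @ v + s2)
# When se_pred is comparable to y_pred itself, the prediction is
# "below the minimum detection level" in the sense described above.
```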

With more complex models, the uncertainty that is propagated through the calculations based on the uncertainties in the parameter estimates is harder to estimate. Pat Frank has shown that the result of (a reasonable first attempt) at a calculation shows the uncertainty of the CO2 effect computed by GCMs to be much greater than the nominal effect estimate. The CO2 effect is below the minimum detection level.

oops:

“In that case, you would have to accept that you had no useful information about the true value of Y(n + 1).”

Except possibly that the true value was very small, if not 0.

+1

+1

Nice succinct explanation!

What Dr. Frank’s critics seem not to get is that it is quite possible for a measurement error to be small while the uncertainty of the measurement is large. For example I might have an instrument which is calibrated by comparison to a reference with a large uncertainty. Say a 100 kg scale calibration weight with an uncertainty of +/- 5 kg. If I then weigh something and the scale shows 100 kg, the error might well be very small because it’s quite possible the calibration weight was in fact very close to 100 kg. But, it might have been as low as 95 kg or as high as 105 kg. That’s what uncertainty means. We just don’t know. So while the actual error of our measurement might be small, we still must disclose that our measurement has an uncertainty of +/- 5 kg. Maybe not a big deal if we were weighing a bag of compost, but a different story if we’re weighing a bar of gold bullion.

And in the case of the scale, it might be very precise, so that repeated measurements of the same object are always within +/- 0.1 kg. But it doesn’t matter; we still don’t know the weight of the object to better than +/- 5 kg. If we try to weigh something that is less than 5 kg, we literally still won’t *know* what it *actually* weighs. Now consider if we have 10 scales, all of them calibrated to the same reference (with the large uncertainty). In that case, we should not be surprised if they all return close to the same result for the same object. But that does not prove that we know the *actual* weight of the object, because they all have the same level of uncertainty, just as all the GCMs are based on the same incomplete theory.

Paul P. Exactly right. Now consider what happens if we are asked to determine the total weight of 10 similar items. Since each weighs something close to 100 kg and our scale capacity is not much greater, we have to weigh them one by one and add up the weights. What’s the uncertainty of our result?

Hint SQRT(5^2 x 10) = +/- 15.8 kg.
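Both points in this thread are easy to simulate: a single unknown calibration bias leaves repeated readings precise but not accurate, and the ±15.8 kg figure follows from root-sum-squaring ten independent ±5 kg uncertainties. A sketch, using the numbers from the comments above:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

CAL_UNCERT = 5.0                             # +/- 5 kg calibration uncertainty
bias = rng.uniform(-CAL_UNCERT, CAL_UNCERT)  # unknown but fixed for this scale
true_weight = 100.0

# 1000 repeated weighings: very precise (+/- 0.1 kg scatter)...
readings = true_weight + bias + rng.normal(0.0, 0.1, 1000)
mean_reading = readings.mean()   # ...but it converges to true_weight + bias,
                                 # not to true_weight: precision != accuracy.

# Root-sum-square uncertainty of adding ten independent ~100 kg weighings:
combined_uncert = math.sqrt(10 * CAL_UNCERT**2)   # ~15.8 kg
```

Note the root-sum-square step assumes the ten weighing uncertainties are independent; a bias shared across all ten weighings would instead add linearly.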

In my original thoughts on Pat Frank’s paper I said that I wasn’t sure if the uncertainties stack-up as he suggested. I think Dr. Spencer’s essay here lays this out more specifically.

Let’s write the climate modeling process as a linear state system in the form most engineers would recognize.

dX/dt = A·X + B·u + v, with observations Y = C·X + D·u + w. A is the matrix describing atmospheric dynamics. It is a huge matrix. Operate on a current state (X) with this matrix and what comes out is the time rate of change of the state (temperatures, pressure, etc.). There are inputs to this, like insolation, described in the vector u, which B feeds into the rate; and there are uncertainties and noise, which v represents, which drive or disturb the dynamics. These drivers and uncertainties are vectors. Their effect on dX/dt through the matrices A and B might correlate one with another, or even anti-correlate. It is not possible to tell without knowing A and B. The propagation of uncertainty in this situation is very complex.

BTW, in the state space model above the vector Y is what we observe. It is a function of the current state, X, but not necessarily exactly equal to it. The dynamics of the system can be hidden and completely opaque to our observations. The uncertainty vector, w, represents that our measurements (and the corrections we apply) are subject to their own uncertainties. We should propagate all the way to Y to fully characterize the model uncertainties.
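For a discrete-time version of the state-space sketch above, the textbook way to carry uncertainty forward is to iterate the state covariance, P ← A·P·Aᵀ + Q. Here is a minimal numerical illustration with made-up 2x2 matrices (none of these values come from an actual GCM):

```python
import numpy as np

# Made-up discrete-time dynamics: x[k+1] = A x[k] + (process noise, cov Q)
A = np.array([[1.0, 0.1],
              [0.0, 0.98]])
Q = np.diag([0.01, 0.02])        # process-noise covariance (illustrative)

P = np.zeros((2, 2))             # start with zero state uncertainty
for _ in range(100):
    P = A @ P @ A.T + Q          # covariance propagation, one step per iteration

# Map to the observed quantity Y = C x + (measurement noise, cov R):
C = np.array([[1.0, 0.0]])
R = np.array([[0.05]])           # measurement-noise covariance (illustrative)
obs_var = C @ P @ C.T + R        # uncertainty of what we actually observe
```

Even this toy case shows how the propagated uncertainty depends entirely on the structure of A, B, C, and the noise covariances, which is the point about correlation and anti-correlation made above.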

Do you think a statistic is an energy, Kevin?

That’s Roy’s argument.

Pat,

I am pretty sure I understand the point you are trying to make. I am pretty well versed in propagation of error and figuring uncertainty, because I have taught the subjects in physics and engineering courses. I don’t know that asking the question “Do you think a statistic is an energy?” necessarily illustrates the distinction between your respective viewpoints. No, I do not think the uncertainty estimate you use from calibration, or determine from propagating error, is a physical energy, and despite an ambiguous explanation in his Figure 1, I don’t think he does either. He can weigh in and tell me I’m wrong, of course.

I do understand that if one hopes to measure an effect, say a rise in mean earth temperature, and relate it to a calculated expectation (from a GCM), then the uncertainty in temperature expectations delivered by the GCM has to be smaller than, or at least of the same size as, the resolution of temperature measurements. And your efforts (which I do not minimize in any way) suggest that as temperatures would be expected to rise from warming, the uncertainty of the calculated expectations rises faster.

However, Dr. Spencer’s point appears to me to be: if the envelope of uncertainty is as large as you maintain, then why do model runs not show more variability, and why do they remain so close to energy conservation? I think he has a point. A few days ago I referred to the NIST Statistics Handbook, which points to square-root variances derived from observations (the GCM models in this case) as valid estimates of uncertainty; propagation of error is another means. Now, I don’t know if the somewhat close correspondence of calculated expectations has a physical basis (as Dr. Spencer says) or a sociological basis (which is well documented in many fields; see Fischhoff and Henrion), but I pointed out above that propagation of error done using the matrices A, B, C, D in my post above (it is a measurement-equation system, after all) might explain this, and would be more convincing.

I have been thinking of a simpler example to illustrate what I think are the issues between the two of you. Maybe I’ll post it if I think of one.

..my guess would be that after entering all the unknowns…differently…and going back and adjusting all those unknowns…differently

…they all ended up with X increase in CO2 causes X increase in temp

Kevin Kilty:

I think you also do not really see what Pat Frank is saying. Pat Frank is not talking about the performance of the models in their internal operation. What Pat Frank has done is taken known, measured, accepted, and verified experimental data, reduced it to an uncertainty, and imposed that signal on the models. The extreme output indicates that the models did not model the real physics of the system in such a way as to be able to handle that imposed signal, signifying that the models are incapable of doing what they claim to do.

Model calibration runs don’t show variability, Kevin, because they are tuned to reproduce observables.

So-called perturbed-physics tests, in which parameters are systematically varied within their uncertainty bounds in a given model, do show large-scale variability.

In any case, the core of Roy Spencer’s argument is that the ±4 W/m^2 LWCF error statistic should produce TOA imbalances. His view is a basic misunderstanding of its meaning.

I think this comes down to how many model runs are actually being produced. So you do a million runs and pick only the ones that seem good to report. Simple. Since everybody knows what sort of numbers you are looking for in advance, it is trivial to constrain the results to an arbitrary degree of precision.

So the question is: Do modelers simply chuck the runs that show 20 degree warming per doubling of CO2? If they do, then there’s your answer right there.

If I run a model 100 times, I then have a population of data points. I can then determine all kinds of statistical information about that distribution. I can say the “true value” is the mean +/- the error of the mean. As pointed out above, this describes the precision of the results.

However, if a calibration error is present at the very beginning, assuming no other errors, that error is propagated throughout. It can result in an additive error for a linear calculation, or worse when there are non-linear calculations. This type of error cannot be reduced by many “runs”; the uncertainty remains.
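That distinction between precision and a persistent calibration error can be demonstrated numerically (all values here are illustrative): averaging more runs shrinks the standard error of the mean, but the mean converges to the biased value, not the true one.

```python
import numpy as np

rng = np.random.default_rng(2)

TRUE_VALUE = 1.0
CAL_OFFSET = 0.4                 # shared systematic calibration error (illustrative)

results = {}
for n_runs in (10, 100, 10000):
    runs = TRUE_VALUE + CAL_OFFSET + rng.normal(0.0, 0.5, n_runs)
    sem = runs.std(ddof=1) / np.sqrt(n_runs)   # precision improves with n...
    results[n_runs] = (runs.mean(), sem)
# ...but every mean hovers near TRUE_VALUE + CAL_OFFSET = 1.4, not 1.0:
# more runs buy precision, never accuracy, against a systematic error.
```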

Kevin,

They all produce similar results because they are all based on the same incomplete theory. Internally they are all bound by energy-balancing requirements, so they can wander only so far off track. But that doesn’t mean they are right. Just because a stopped (analog) clock shows a valid time does not mean it is correct. All you can know is that it is correct twice a day, but it is useless for telling you the current time. So don’t let the magnitude of the uncertainty envelope bother you; once it goes outside the range of possible physical solutions, it just means the model can’t tell you anything. Those large uncertainty values say nothing about the possible states of the real climate or the models, and I don’t think Pat ever suggests they do.

“why do model runs not show more variability”

Well, that’s easy. I have a model which says ‘just pick a value between 0 and infinity and then stick to it’.

I randomly picked one gazillion pseudo-degrees of pseudo-warming and now I’m stuck with it, with zero variability. I’ll let you guess what the uncertainty is for this prediction. No, it’s not related to variability.

As for ‘why?’ (although it’s an irrelevant question), here is one reason, directly copied from a climastrological computer model (picked at random):

```fortran
C**** LIMIT SIZE OF DOWNDRAFT IF NEEDED
      IF(DDRAFT.GT..95d0*(AIRM(L-1)+DMR(L-1)))
     *  DDRAFT=.95d0*(AIRM(L-1)+DMR(L-1))
      EDRAFT=DDRAFT-DDRUP
```

It’s copied and pasted from a computer model I downloaded to look into from time to time for a good laugh. The file is CLOUDS2.f from a ‘GISS model IE’, modelE2_AR5_branch.

Now, see that .95 value in there? There is no fundamental law of nature behind it. The value is not magical. It’s just some non-physical bound they added in the code so the model would not blow up so badly. This does not limit the uncertainty at all, but it does limit the model variability.

The computer models are filled with such anti-physical bounds. Despite these, the models still blow up spectacularly within a very short span of simulated time.
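The effect of such a bound can be shown with a toy model (purely illustrative, no connection to the GISS code): clamping narrows the spread of the output without moving it any closer to the truth.

```python
import random

random.seed(0)

TRUTH = 1.0

def model_step():
    # toy model with a large systematic error and some scatter
    return 2.0 + random.gauss(0.0, 1.0)

raw = [model_step() for _ in range(5000)]
clamped = [min(x, 2.5) for x in raw]   # an ad hoc upper bound, like the .95 cap

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Variability drops, but the distance from the truth barely moves.
print(f"raw:     mean error {mean(raw) - TRUTH:+.2f}, variance {var(raw):.2f}")
print(f"clamped: mean error {mean(clamped) - TRUTH:+.2f}, variance {var(clamped):.2f}")
```

The clamped output looks better behaved, yet it is still nowhere near the true value: reduced variability, undiminished uncertainty.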

Here is what I found in some readme for that project:

“Occasionally (every 15-20 model years), the model will produce very fast velocities in the lower stratosphere near the pole (levels 7 or 8 for the standard layering). This will produce a number of warnings from the advection (such as limitq warning: abs(a)>1) and then finally a crash (limitq error: new sn < 0).”

Now, their advice for such a case is to 'go back in time' a little and restart the model with some adjustment of values in such a way that the exponential amplification will lead the model evolution far away from the bullshit values. Of course, Nature does not work that way. It doesn't go back in time and restarts the Universe when it reaches 'inconvenient' values.

Also, anybody who thinks that while such an increase in winds is non-physical, the evolution from a slightly adjusted past state is somehow physical and not still exponentially far from reality, is highly delusional. Just because the fairy-tale cartoon physics ‘looks’ real to you doesn’t mean it’s close to reality.

Another reason is the parametrization: they cannot simulate a lot of physical processes on Earth, so they use all sorts of parametrized bullshit instead. Very complex phenomena that would produce very high variability if actually simulated are instead limited by the modelers’ cargo-cult religious beliefs.

So, in short, you don’t see the real variability in the models because it’s anti-physically limited, and also because crashed runs are not included in the results, for obvious reasons (for example, they don’t even reach the end of those 100 years).

Climate models are pure garbage.

“The propagation of uncertainty in this situation is very complex.”

Yes, but the principles are known. Propagation is via the solution space of the homogeneous part of your equation system:

dX/dt = A.X

You can even write that explicitly if you want. If you make a proportional error e in X, then its propagation is as e*exp(∫A dt) (if it stays linear). The point is that you have to take the differential equation and its solution space into account.

Isn’t that what I just said?

“what I just said”

I’m expanding on your “very complex”. It isn’t totally inscrutable. You can identify the part of the system that propagates errors, and see what it does. The key for linear systems is the eigenvalues of that integral ∫A dt. If one is positive, the error, or uncertainty, will grow rapidly. That is what leads to blowing up, instability. If they are all negative (bounded away from zero), uncertainty will be propagated, but diminishing. That is what a proper analysis of propagation of uncertainty requires. And of course, none of it is present in Pat Frank’s cartoonish Eq 1.
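That eigenvalue behaviour can be checked with a minimal sketch (a generic diagonal 2x2 system, not any climate model): perturb the initial state and the error grows or shrinks according to the sign of each eigenvalue.

```python
import math

# dX/dt = A.X with A diagonal for clarity: eigenvalues +0.5 and -0.5
lam_grow, lam_decay = 0.5, -0.5
e0 = (0.01, 0.01)          # initial error in each eigen-direction
dt, steps = 0.01, 1000     # integrate to t = 10

ex, ey = e0
for _ in range(steps):
    # forward-Euler step of de/dt = lambda * e in each direction
    ex += dt * lam_grow * ex
    ey += dt * lam_decay * ey

# compare with the exact propagation e0 * exp(lambda * t)
t = dt * steps
print(ex, e0[0] * math.exp(lam_grow * t))   # grows roughly as e^5
print(ey, e0[1] * math.exp(lam_decay * t))  # shrinks roughly as e^-5
```

The positive eigenvalue amplifies the initial error by two orders of magnitude; the negative one damps it away. Nothing in a fixed per-step uncertainty figure captures this.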

Nick:

You have the purely mathematical argument essentially correct. Although I claim no hands-on experience with GCMs, extensive work with other geophysical models makes me skeptical of a perpetually increasing envelope of uncertainty projected on the basis of some unknown, but fixed cloud-fraction error in the control runs used to determine the base state of model climate. A random-walk statistical model may be physically appropriate for diffusion processes, with wholly independent increments in time, but not for autocorrelated climate processes driven by known forces.

Nevertheless, it seems that what you call the “cartoonish Eq.1” quite fairly represents the way GCMs treat the LWIR-powered unicorn of “CO2 forcing,” ostensibly governing “climate change” from the model base state. Isn’t it this physical confusion between actual dynamic forcing and mere state of matter that opens up a Pandora’s box of uncertainty for GCM projections of surface temperatures, irrespective of any planetary TOA energy balancing?

Sky

“Nevertheless, it seems that what you call the “cartoonish Eq.1” quite fairly represents the way GCMs treat the LWIR-powered unicorn of “CO2 forcing,” ostensibly governing “climate change” from the model base state.”

It is cartoonish because, while it may represent a particular solution of the equations, it in no way represents alternative solutions that would be followed if something varied. In fact, it is so bereft of alternative solutions that he has to make one up with the claim that the observed rmse of 4 W/m2 somehow compounds annually (why annually?).

I made a planetary analogy elsewhere. Suppose you had a planet in circular orbit under gravity. You could model it with a weight rotating without gravity but held to the centre with a weightless rod. You could get exactly the same solution. But how would it treat uncertainty about velocity? Or even position? It can’t show any variation in position (radially), while with velocity, it could move faster, but without the change in orbit that would follow under gravity. And velocity that isn’t orthogonal to the radius? All it can do is break the rod.

Propagation of uncertainty with differential equations depends entirely on how it is carried by the solution space. If you have a different solution space, or none at all, you’ll get meaningless answers.

Annually because it’s an annual error, Nick.

And that “cartoonish Eq. 1” accurately emulates the air temperature projections of CMIP5 models. Embarrassing for you, isn’t it? Hence your cartoonish disparagements.

The response of the emulation equation to forcing mirrors the response of the models to forcing. Its response to step-wise uncertainty in forcing then indicates the impact of step-wise uncertainty on the model response.

Nick Stokes:

“It is cartoonish because, while it may represent a particular solution of the equations, it in no way represents alternative solutions that would be followed if something varied.”

That is not true. Although this is sometimes called “error propagation”, what is being propagated is not an error or a few errors, but the probability distribution of a range of errors, through the propagation of the standard deviation of the random components of the results of the computations. Pat Frank assumes that the variances of the error distributions add at each step in the solution of the difference equation; he has calculated the correlations of successive errors in order to add in the appropriate covariances, instead of making the common assumption of independent errors. The cone shape results from his graphing the standard deviation of the error distribution instead of its variance.
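The arithmetic behind that cone is easy to sketch (the per-step uncertainty below is an arbitrary illustrative number, not Frank’s calibrated value):

```python
# Independent per-step uncertainties add in variance (quadrature), so the
# plotted standard deviation grows like sqrt(n): the familiar cone shape.
u = 0.1          # hypothetical per-step uncertainty (e.g. K per year)
years = 100

variance = 0.0
envelope = []
for _ in range(years):
    variance += u ** 2                # variances add at each step ...
    envelope.append(variance ** 0.5)  # ... but the standard deviation is graphed

# grows from u after one step to u*sqrt(years) after the last
print(envelope[0], envelope[-1])
```

With covariance terms included, as described above, the per-step increments are no longer strictly independent, but the envelope still widens monotonically.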

One could, alternatively, run each GCM entire, many trials, changing in each trial only the single parameter that is being studied, on a grid of evenly spaced values within its 95% CI. That would propagate particular possible errors, and produce a cone of model outputs, probably not shaped as prettily as Pat Frank’s cone. This would be less time-consuming than the bootstrap procedure I mentioned elsewhere, but would still require faster computers than those available now.

“One could, alternatively, run each GCM entire, many trials, changing each trial only the single parameter that is being studied, on a grid of evenly spaced values”

A great deal has been said in these threads which is not only ignorant of how error really is propagated in differential equations, but of practice in dealing with it. Practitioners have to know about the solution space, partly because if propagation of error grows, the solution will fail (instability), and partly because they really are interested in what happens to uncertainty. If you look at the KNMI CMIP5 table of GCM results, you’ll see a whole lot of models, scenarios and result types. But if you look at the small number beside each radio button, it is the ensemble number. Sometimes it is only one – you don’t have to do an ensemble in every case. But very often it is 5, 6 or even 10, just for one program. CMIP has a special notation for recording whether the ensembles vary just initial conditions or some parameter. You don’t have to do a complete scan of possibilities in each case. There is often not much difference following from the source of perturbation.

This is a far more rigorous and effective way of seeing what the GCM really does do to variation than speculating with random walks.

Nick, This thread has become a tangled mess of people looking at this in incongruous ways I am afraid, but by looking at the eigenvalues of “A” what you are doing is verifying that the solution converges. This is not the same thing I am speaking of, which is explained as something like looking at the difference of two solutions (both of which converge) having a small change of some parameter, and determining if one can, at some point, resolve one solution from the other, given the distribution of uncertainties in the problem parameters and initial data. I thought this is what Pat Frank was getting at, but I am no longer sure of that. I also thought Spencer had a valid point, but now I am not sure I have interpreted his point correctly.

I had thought about an example to illustrate this, but the whole discussion has become so unclear, that I don’t think we are even discussing the same things. I need to focus on my paying job today.

Nick Stokes:

“Practitioners have to know about the solution space, partly because if propagation of error grows, the solution will fail (instability), and partly because they really are interested in what happens to uncertainty.”

As experts sometimes say, I have done this a lot, and you can have large errors without instability; you can get what looks like a nice stable solution that has a huge error, without any indication of error. The point about propagation of uncertainty still seems to be eluding you: the error is not known, only its distribution (at least approximated by a confidence interval), and it is the distribution of the error that is propagated.

This brings us back to the question addressed by Pat Frank, a question formerly ignored: Given the uncertainty in the parameter estimate, what is the best estimate of the uncertainty in the forecast? Hopefully, Pat Frank has started the ball rolling, and there will be lots more attempts at an answer in the future.

“This is not the same thing I am speaking of, which is explained as something like looking at the difference of two solutions (both of which converge) having a small change of some parameter, and determining if one can, at some point, resolve one solution from the other, given the distribution of uncertainties in the problem parameters and initial data.”

I think it is the same. You have formulated it as a first order system, so it is characterised by its starting value. If you start from state s0, the solution is s0*exp(∫A dt). If you were wrong, and really started from s1, the solution is s1*exp(∫A dt). The evolution of the error is (s1-s0)*exp(∫A dt). It’s true that the exponential determines convergence of the solutions, but it also determines what happens to the error. It looks like it is all just scaling, but with non-linearity the separation of solutions can be more permanent than the convergence/divergence of solutions.
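For a scalar first-order system the claim is easy to verify directly (generic numbers, no climate content): the gap between solutions started at s0 and s1 evolves exactly as (s1 − s0)·exp(a·t).

```python
import math

a = -0.3               # a single negative "eigenvalue": solutions converge
s0, s1 = 1.0, 1.2      # two candidate starting states
t = 5.0

x0 = s0 * math.exp(a * t)        # solution from s0
x1 = s1 * math.exp(a * t)        # solution from s1
gap = x1 - x0                    # how the initial error has evolved
formula = (s1 - s0) * math.exp(a * t)

print(gap, formula)   # identical: the error shrinks along with the solutions
```

Flip the sign of a and the same formula shows the initial error growing instead; the solution operator, not a bolted-on statistic, governs both cases.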

Kevin Kilty:

“The uncertainty vector, w, represents that our measurements (and the corrections we apply) are subject to their own uncertainties. We should propagate all the way to Y to fully characterize the model uncertainties.”

In applications, the matrices A, B, C, and D all have to be estimated from the data, hence are subject to random variation and uncertainty. When computing the modeled value Y for a new case of X, or for the next time step in the sequence of a solution of the differential equation, those uncertainties also contribute to the uncertainty in the modeled value of Y. I agree that “we should propagate all the way to Y to fully characterize the model uncertainty.” I have merely elaborated a detail of the process.

In his reply to me, Roy dismissed the distinction between an energy flux and an error statistic as “semantics.”

This extraordinary mistake shows up immediately in Roy’s post above. Quoting:

“But, he then further states, ‘LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.’

“While I agree with the first sentence, I thoroughly disagree with the second. Together, they represent a non sequitur. All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!”

Roy plain does not get the difference between a calibration error statistic and an energy flux. He is treating the ±4 W/m^2 long wave cloud forcing error statistic as an energy.

He clearly thinks this ±4 W/m^2 statistic should impact model expectation values.

I’ve pointed out over and over that calibration error statistics are derived from comparisons between simulations and observations.

See my replies to Roy on his site.

Chemists, physicists and engineers learn about error analysis and methods calibration in their first undergraduate year.

But Roy doesn’t get it.

Nor, apparently, do most other climate modelers (I exclude my reviewers at Frontiers).

Roy’s objection has no critical force in terms of physical science. None.

A statistic is not an energy. It’s that simple.

If ±4 W/m^2 isn’t a statistical bound on power (not energy!) density, then it has no place in physical science. It’s that simple.

It is an estimate of our ignorance. It places lower bounds on what the models can tell us about the climate state at any point in the future.

This tautological statement is uninstructive in the present case. Inasmuch as the purported effect of misestimating cloud fraction is mistakenly treated as a recurrent error in system forcing, rather than an initial error in an internal system state, the bounding problem is ill-posed at the outset. Such cavalier treatment tells us little about the uncertainty of future climate states, which depend upon many (sometimes chaotic) factors not considered here.

It isn’t meant to tell us anything about future climate states (as in the actual climate), but about the future climate states predicted by the models. And in that regard it is very instructive. And you have mischaracterized the error; it is a specification error (the base theory is incomplete), which like calibration errors puts a limit on the accuracy of the calculations. These types of errors accumulate at each iteration of the calculation, which is to say, at each time-step.

That the discussion is about errors in model predictions is self-evident. But model errors can be of a great variety, with entirely different effects upon model output. And, by definition, model error is deviation of model output from actual states. Without identifying the precise nature of the error in the context of the workings of the model, your “estimate of our ignorance” is uninstructive.

The model error to which I’m calling attention was clearly specified as the misestimation of the base state of INTERNAL power fluxes due to mistaken “cloud-forcing” during the calibration phase. Unless all the models handle this uniformly as a variable, rather than as a presumed fixed parameter (as most models do with relative humidity), there’s no “accumulation” of that particular error at each model time-step. No doubt other errors may ensue and indeed propagate, but not according to the prescription given here by Frank.

Two questions that I have not seen addressed regarding climate models:

First, if models are being improved to, say, more realistically account for clouds, how is this improved model incorporated into projections? Specifically, there is some starting point (let’s say 1980) where a model is initialized and begins its “projection run”. Climate history before this period is used to adjust empirical constants (i.e. “tune” or “fudge”). Some time later, perhaps decades, along comes an improved version. However, now the actual climate history since model initiation is known. Does the run time of the model get restarted from the present? Or is the model re-initialized to 1980? If the former, the actual projection time is obviously considerably shorter. Doesn’t this mean that there is insufficient projection time to really judge the quality of the model? If the latter, what is to stop modelers from using the post-1980 climate record to “look at the answer key” and further “tune” their models?

Second, as I understand it, atmospheric CO2 levels in the model are not dynamically calculated but are inputs based on the Representative Concentration Pathways. I understand the reasons for this. Which RCP most closely matches actual data (so far) and how much have they differed?

I think the conclusion is if the models were improved in such a way that they accurately model the cloud forcing behaviors, then we might end up concluding that cloud forcing behaviors control the climate and that likely the CO2 component is insignificant.

Roy, it’s the only way to see this issue. Thanks!

Roy wrote:

“The similar behavior of the wide variety of different models with differing errors is proof of that. They all respond to increasing greenhouse gases, contrary to the claims of the paper.”

Neither Roy nor anyone else can show that my paper says that models do not respond to increasing greenhouse gases.

That’s because such a statement is nowhere in the paper.

What the paper says is that the response of models to greenhouse gases has no physical meaning; a very, very, very different message than what Roy avers.

The reason for the lack of physical meaning is that the resolution lower limit of the models is much larger than the perturbation they claim to detect.

The models calculate a response. The calculated response is meaningless.

Roy, I regret to say this, but for all your work you plain have not understood anything about the paper. Not word one.

I’ve gotten emails from physicists who are happy with the paper.

I really do think that the cure for the mess that is climate science would be to require all of them to take an undergraduate major in experimental physics or chemistry, before going on to climate studies.

None of them seem to know anything about physical error analysis.

“None of them seem to know anything about physical error analysis.”

That would be my guess as well, because I always wondered how, with all the uncertainties, complex calculations and errors accumulating over the running time of the models, they could still give such small confidence intervals.

Errors propagate and amplify when physical measurements from different sources are combined. It makes no sense that they shouldn’t do the same in values modeled over time, in which case the accumulated errors would be very large.

The exactness of my understanding of all this is puny compared to most here, but the glimmer of rational insight that I do have about it leads me to think that Pat is arguing on one level, while Roy and others are arguing on another. Pat is on the top-tier level, and the others have not quite ascended there yet. Hence the dissonance between perspectives.

Pat talks about reality, while others talk about performance. If performance cannot be compared to reality with great certainty, then model performance is just model performance — of an interesting educational toy.

Even though I’m out of my league in terms of commanding the deep understanding, somehow I think I still get Pat’s drift and feel confident that his years of studying this and pursuing the explanation of this are not wasted.

I see people already trashing him, calling him nuts, painting him as lacking in some fundamental understanding, demeaning his publisher, etc., etc., … totally what I expected. I didn’t even need a model to predict this reliably.

Bingo. The critics do not seem to agree that the models’ outputs should in some way be compared to reality.

I think Dr Spencer’s quote,

“And climate sensitivity, in turn, is a function of (for example) how clouds change with warming, and apparently not a function of errors in a particular model’s average cloud amount, as Dr. Frank claims,”

shows the area of disagreement clearly, and indicates to me that Dr Frank is correct. If, as we expect, clouds change with temperature, and temperature IS propagated through the model, then any error in cloud functionality will necessarily propagate through the model.

Steve,

Your pinpoint dissection of the crucial distinction in this argument is the way I see it.

I’d like to see a detailed exposé explaining how it could possibly be otherwise.

I have great respect for both Spencer and Frank. The only dog I have in this “fight” (discussion) is the truth.

I will continue to follow this thread.

Dr Spencer, thank you for your essay.

Pat Frank, thank you for your responses.

This is an interesting debate. Both Dr Frank and Dr Spencer agree that the models are faulty, but disagree on how.

I would be interested in evidence that the models are useful, especially for determining policy.

I have not heard of any debate on the pro-IPCC side in which fundamental assumptions behind the models are questioned. I would like to believe those debates occur and are as lively as this one.

Colin Landrum,

You say,

“I would be interested in evidence that the models are useful, especially for determining policy.”

A model can only be useful for assisting policy-making when it has demonstrated forecasting skill.

No climate model has existed for 50 or 100 years so no climate model has any demonstrated forecasting skill for such periods.

In other words, the climate models have no more demonstrated usefulness to policymakers than the casting of chicken bones to foretell the future.

Richard

Richard, is it possible to plug in the variables at the time of the 30s, 40s and 50s and forecast the cooling of the 70s?

Then plug in variables for the 60s, 70s to see the warming of the 80s, 90s?

Then what do we use for the pause?

Thanks

Derg,

You ask me,

“Richard is it possible to plug in the variables at the time of the 30s, 40s, 50s and forecast the cooling of the 70s?

Then what do we use for the pause?”

I answer as follows.

Several people have independently demonstrated that the advanced climate models project air temperature merely as a linear extrapolation of greenhouse gas (GHG) forcing. Some (i.e. Pat Frank and Willis Eschenbach) have reported their determinations of this on WUWT.

Therefore, if one were to “plug in the variables at the time of the 30s, 40s, 50s” the model would not “forecast the cooling of the 70s” because atmospheric GHGs have been increasing in the air to the present. However, it is possible to ‘plug in’ assumed cooling from atmospheric sulphate aerosols. Such a ‘plug in’ of historic cooling would be a ‘fix’ and not a forecast.
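That linear-extrapolation behaviour can be sketched as a toy emulator (the sensitivity and forcing numbers are made-up placeholders, not values from Frank’s or Eschenbach’s analyses):

```python
# A toy linear emulator: projected warming is just a running sum of
# greenhouse-gas forcing increments scaled by a fixed sensitivity.
LAMBDA = 0.5   # hypothetical sensitivity, K per (W/m^2)

def emulate(forcing_increments):
    """Temperature anomaly after each year, K (toy linear extrapolation)."""
    total_f, anomalies = 0.0, []
    for df in forcing_increments:
        total_f += df
        anomalies.append(LAMBDA * total_f)
    return anomalies

# steadily rising GHG forcing: ~0.04 W/m^2 per year for 50 years
traj = emulate([0.04] * 50)
print(traj[-1])   # monotonic warming; no mid-century cooling can emerge
```

With monotonically rising GHG forcing as the only input, the output can only warm, which is exactly why a mid-century cooling has to be ‘plugged in’ via aerosols rather than forecast.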

There is precedent for modellers using this ‘fix’ and I published a report of it; ref. Courtney RS, An Assessment of Validation Experiments Conducted on Computer Models of Global Climate (GCM) Using the General Circulation Model of the UK Hadley Centre, Energy & Environment, v.10, no.5 (1999).

That peer-reviewed paper concluded;

“The IPCC is basing predictions of man-made global warming on the outputs of GCMs. Validations of these models have now been conducted, and they demonstrate beyond doubt that these models have no validity for predicting large climate changes. The IPCC and the Hadley Centre have responded to this problem by proclaiming that the inputs which they fed to a model are evidence for existence of the man-made global warming. This proclamation is not true and contravenes the principle of science that hypotheses are tested against observed data.”

Importantly, global temperature has been rising intermittently for centuries as it recovers from the depths of the Little Ice Age (LIA). The estimates of global temperature show that most of that warming occurred before 1940 but 80% of the anthropogenic (i.e. human caused) GHG emissions were after that. Indeed, the start of the cooling period coincided with the start of the major emissions. Advocates of human-made global warming excuse this problem by attributing

(a) almost all the rise before 1940 to be an effect of the Sun,

(b) the cooling from 1940 to 1970 to be an effect of human emissions of aerosols, and

(c) the warming after 1970 to be mostly an effect of human emissions of greenhouse gases.

Evidence is lacking for this convoluted story to excuse the disagreement of the emissions with the temperature history. And they have yet to agree on an excuse for the ‘pause’ since 1998.

Furthermore, the climate models are based on assumptions that may not be correct. The basic assumption used in the models is that change to climate is driven by change to radiative forcing. And it is very important to recognise that this assumption has not been demonstrated to be correct. Indeed, it is quite possible that there is no force or process causing climate to vary. I explain this as follows.

The climate system is seeking an equilibrium that it never achieves. The Earth obtains radiant energy from the Sun and radiates that energy back to space. The energy input to the system (from the Sun) may be constant (although some doubt that), but the rotation of the Earth and its orbit around the Sun ensure that the energy input/output is never in perfect equilibrium.

The climate system is an intermediary in the process of returning (most of) the energy to space (some energy is radiated from the Earth’s surface back to space). And the Northern and Southern hemispheres have different coverage by oceans. Therefore, as the year progresses the modulation of the energy input/output of the system varies. Hence, the system is always seeking equilibrium but never achieves it.

Such a varying system could be expected to exhibit oscillatory behaviour. And, importantly, the length of the oscillations could be harmonic effects which, therefore, have periodicity of several years. Of course, such harmonic oscillation would be a process that – at least in principle – is capable of evaluation.

However, there may be no process because the climate is a chaotic system. Therefore, the observed oscillations (ENSO, NAO, etc.) could be observation of the system seeking its chaotic attractor(s) in response to its seeking equilibrium in a changing situation.

Very importantly, there is an apparent ~900 year oscillation that caused the Roman Warm Period (RWP), then the Dark Age Cool Period (DACP), then the Medieval Warm Period (MWP), then the Little Ice Age (LIA), and the present warm period (PWP). All the observed rise of global temperature in recent times could be recovery from the LIA that is similar to the recovery from the DACP to the MWP. And the ~900 year oscillation could be the chaotic climate system seeking its attractor(s). If so, then all global climate models are based on the false premise that there is a force or process causing climate to change when no such force or process exists.

But the assumption that climate change is driven by radiative forcing may be correct. If so, then it is still extremely improbable that – within the foreseeable future – the climate models could be developed to a state whereby they could provide reliable predictions. This is because the climate system is extremely complex. Indeed, the climate system is more complex than the human brain (the climate system has more interacting components – e.g. biological organisms – than the human brain has interacting components – e.g. neurones), and nobody claims to be able to construct a reliable predictive model of the human brain. It is pure hubris to assume that the climate models are sufficient emulations for them to be used as reliable predictors of future climate when they have no demonstrated forecasting skill.

This is a brief response to your important question and I hope this brief response is a sufficient answer.

Richard

Richard.

This is a critical point. The satellite data record is a fraction of the projected time.

The error factors in one-year seasonal hurricane models / forecasts highlight this problem. These forecasts are revealed only weeks prior to the season but can be wildly inaccurate – a six-month forecast based on 40 years of data.

So if my forecast of 3 Atlantic USA landfalls and an ACE of 80 is the closest to the final outcome, does that make me the best forecaster? No, just the luckiest, despite the reasoning.

Martin Cropp,

Your point is true. However, it is important to note that the climate models have not existed for 50 years and, therefore, they have yet to provide any predictions that can be evaluated for skill at predicting climate over such periods.

Climate models have NO demonstrated predictive skill; none, zilch, nada.

Richard

CL, there isn’t any. See my previous model-specific posts here trying to explain why from a completely different first-principles perspective.

Computational intractability (the CFL constraint on numerical solutions to partial differential equations) forces parameterization, which forces parameter tuning, which brings in the attribution question. All from illustrated first principles, no fancy math needed. You can find my original post, The Problem with Models (emulating the famous “Trouble with Tribbles” Star Trek episode), via the WUWT search sidebar. There are also several related followups.

Climate models’ global energy balances may be stable, but that doesn’t mean they correctly replicate the true components of Earth’s energy balance. That’s an obvious point, and Dr Frank makes it, but Dr Spencer seems to take, as a starting point, that the replication *is* true. It may be that as a climate scientist, he has to. I made an important point in the other thread which bears repeating:

A practitioner of any discipline must accept *some* tenets of that discipline. A physicist who rejects all physical laws won’t be considered a physicist by other physicists, and won’t be finding work as a physicist. Similarly, Dr Spencer must accept certain practices of his climate science peers, if only to have a basis for peer discussion and to be considered qualified for his climate work. Dr Frank doesn’t have that limitation in his overview of climate science — he is able to reject the whole climate change portfolio in a way which Dr Spencer can’t. This is the elephant in the room.

NZWillie,

Science can’t start with an assumption, no matter if all agree and trillions of dollars are burnt in piety. May the elephant in the room become visible.

The king really is a buck naked embarrassment.

Exactly. And why climate “science” is unable to advance.

Dr Spencer, it would seem to me that the argument connected to Figure 1 is a total miss. It stands to reason that the test model runs shown in Figure 1 are either 1) run with all data inputs held constant (thus no deviation from “0”), or 2) tuned to show zero natural forcing on climate (again, no deviation from zero).

The point I took away from Frank is that the error associated with the cloud data input into the system is larger than the CO2 forcing signal. As such, the error in the LWCF of the model, be it from the data itself or from the tuning within the system, is larger than the CO2 signal. It’s apparent that the tuning for LWCF moves with a change in data, thus if you hold the data steady in a test run, you get the resulting flat line.

Otherwise, the test models are nothing more than flat lines designed to show a forcing from CO2, which could be true as well.

Anything could be true, until data shows otherwise. So far, the data shows that all climate prediction models are wrong. So while studies in error and sensitivity analyses are useful, it’s clear that the fundamental premise that CO2 controls the climate is wrong. If that CO2 premise were correct, the validating climate model would have been on the front page of the NYT long ago.

” . . . the fundamental premise that CO2 controls the climate is wrong.”

And here is where I think all climate models (well, perhaps with the singular exception of the Russian model) fail.

To the extent that basic physics says that CO2 should act as a “greenhouse gas,” which is credible due to its absorption and re-radiation spectral bands, it likely became saturated in its ability to cause such an effect at much lower concentration levels (likely in the range of 200-300 ppm, see https://wattsupwiththat.com/2013/05/08/the-effectiveness-of-co2-as-a-greenhouse-gas-becomes-ever-more-marginal-with-greater-concentration/ ), now leaving only water vapor and methane as the current non-saturated GHGs.

More specifically, it is absurd to say the atmospheric CO2 forcing is linear going into the future (a belief held by the IPCC and most GCMs), i.e., that doubling any GHG doubles the amount of energy radiated up/down. It is well known (well, perhaps outside of global climate models) that any gas absorbing radiation can become “saturated” in terms of radiation energy absorption if the “optical column length” exceeds a certain value, generally taken to be six e-folding lengths. This is well summarized in the following paragraph extracted from http://clivebest.com/blog/?p=1169 :

“The absorption length for the existing concentration of (atmospheric – GD) CO2 is around 25 meters i.e. the distance to reduce the intensity by 1/e. All agree that direct IR radiation in the main CO2 bands is absorbed well below 1 km above the earth. Increasing levels of CO2 merely cause the absorption length to move closer to the surface. Doubling the amount of CO2 does not double the amount of global warming. Any increase could be at most logarithmic, and this is also generally agreed by all sides.”
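The saturation and logarithmic-response claims above can be sketched numerically. The 5.35·ln(C/C0) expression is the standard simplified forcing fit (Myhre et al., 1998), and the 25 m absorption length is the figure quoted in the excerpt; this is a minimal illustration, not a radiative transfer calculation:

```python
import math

# Simplified logarithmic forcing fit (Myhre et al., 1998):
# Delta F = 5.35 * ln(C / C0) W/m2. Illustrative only.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling of CO2 adds the same ~3.7 W/m2, regardless of the
# starting level, i.e. the response is logarithmic, not linear:
print(round(co2_forcing(560, 280), 2))   # ~3.71
print(round(co2_forcing(1120, 560), 2))  # ~3.71

# Beer-Lambert attenuation: with the quoted ~25 m absorption length,
# direct IR in the main CO2 band is reduced to exp(-6) (six e-folding
# lengths, ~0.25%) within ~150 m of the surface.
absorption_length_m = 25.0
frac_remaining = math.exp(-150.0 / absorption_length_m)
print(round(frac_remaining * 100, 2))    # ~0.25 (percent)
```

The point of the sketch is only that doubling the input concentration does not double the forcing term, which is the “generally agreed by all sides” claim in the excerpt.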

Dr. Spencer has a spreadsheet climate model posted on his website. It is set up to run daily. It’s a 50-year analysis (over 18,400 cells).

If CO2 forcing is on and the time step is daily, then there are parameters for:

Water depth (m)

Initial CO2 (W/m2)

CO2 increase per decade (W/m2)

Non-radiative random heat flux (a coefficient parameter)

Radiative random heat flux (a coefficient parameter)

Specified feedback parameter

Check it out.

http://www.drroyspencer.com/research-articles/

Excellent discussion! From Dr Spencer’s response I don’t quite get this:

“If a model has been forced to be in global energy balance, then energy flux component biases have been cancelled out, as evidenced by the control runs of the various climate models in their LW (longwave infrared) behavior”

But that only shows that the model is internally consistent; it doesn’t actually show whether this consistency mirrors real life. A model can be consistent and internally well balanced but have nothing to do with physical reality. How was this control run over 100 years validated against actual temperature changes with such precision? Normally, some parts of a simulation, such as a finite element analysis, are run and compared with actual experiments, like the deformation of a material. Model consistency may come from favourable assumptions or reduced sensitivity. As far as I understand, those models did not anticipate the ‘hiatus’ in global warming we saw recently for several years. If uncertainties in the energy fluxes balanced in a model are much greater than the signal due to CO2, I reckon Dr Frank is right to point at that.

How do they carry out control runs over 100 years with varying cloud fractions? From which source do they take cloud fraction data? Or do they have a mathematical model inside the model calculating cloud fractions?

No, they do not have this data and they cannot calculate cloud fractions. Do you think that the models can calculate an annual cloud fraction for the year 2020? I do not believe so, because clouds are a great unknown, which is not included in the models. Therefore there is no propagation error of cloud fractions.

I have been asking for a specific piece of information: in which way do climate models calculate the cloud fraction effect? No response.

Hey Antero,

“Do you think that the models can calculate an annual cloud fraction for the year 2020? I do not believe so, because clouds are a great unknown, which is not included in the models.”

These are good questions. The fact that a model is well balanced against energy fluxes doesn’t tell you much about how uncertainty was treated. Say we have three components: 50 +/- 3 W/m^2, 20 +/- 2 W/m^2, and 30 +/- 1 W/m^2, where the last two terms counteract the first one. If we run simulations with all terms at their middle values, it will all balance out nicely. But if the errors are systematic, and we run a simulation with a value of 48 W/m^2 for the first term and 22 W/m^2 and 31 W/m^2 for the other two, those fluxes will not balance (53 > 48), pushing the simulation out of balance. So my guess is that those uncertainties were treated in such a way that they cancel out.
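The toy numbers in that comment can be checked directly (a minimal sketch; the component names are placeholders, not actual GCM fluxes):

```python
# Three flux components with stated uncertainties, where the last
# two are meant to offset the first. Values are the commenter's.
incoming = 50.0          # +/- 3 W/m2
outgoing_a = 20.0        # +/- 2 W/m2
outgoing_b = 30.0        # +/- 1 W/m2

# Mid-range values balance exactly:
print(incoming - (outgoing_a + outgoing_b))   # 0.0

# But systematic (one-sided) errors within those same stated bounds
# do not balance, even though each value is individually "in spec":
incoming_low = 48.0      # 50 - 2, still inside +/- 3
outgoing_a_hi = 22.0     # 20 + 2
outgoing_b_hi = 31.0     # 30 + 1
print(incoming_low - (outgoing_a_hi + outgoing_b_hi))  # -5.0
```

The design point is the one the commenter makes: a balanced mid-range run tells you nothing about whether systematic biases within the stated uncertainties would unbalance it.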

You can search scholar.google.com and look for the answer in the climate modeling literature, Antero.

An explanation of how GCMs model clouds is outside the limits of this conversation.

The fact that they don’t calculate cloud fractions *is* the error. It is a missing part of the theory that the models were derived from.

Not a comment about Pat Frank’s paper or Roy Spencer’s reply but, rather, about WUWT.

I remember, a while ago, discussions of the need for an alternative to “pal-review,” and that a blog forum such as WUWT could be something like that.

I think we’re seeing it here.

Two honest and real scientists who respect each other are having an open disagreement about what is in a paper.

It’s not time for either to “dig their heels in” but to consider what each other (and others) have to say.

Have I misunderstood this? Are the models tuned so that they explicitly do NOT reproduce past temperatures, but rather an artificially stable state before human CO2 emissions were deemed significant? I must have been naive, as I always assumed that the models were tuned to reproduce past climate change, which in itself would not be a guarantee that they were any use predicting the future but might give you a fighting chance. If anyone justifies this with reference to the horizontal line of a hockey stick then I might curse and swear!

Pat Frank’s model uncertainty (over a 30°C span by 2100) does not represent physical temperature uncertainty, as we know the global average temperature cannot change that much in such a short time, much less because of changes in cloud cover. It does not represent modeled temperature uncertainty, as we know that modeled temperatures cannot change that much in such a short time. It is unclear to me what it represents. Pat Frank says it represents statistical uncertainty, but if it does not have any connection with what physical or modeled temperatures can do, why is it relevant and why is it expressed in °C? My guess is that it represents the uncertainty introduced by Pat Frank’s assumptions, which are different from the models’ assumptions.

In any case I am glad that Pat Frank’s article got published so these issues can be examined in detail. The idea that only correct science should be published is silly and contrary to the interests of science. Controversial hypotheses need to be published even if wrong, because nobody can predict the influence they will have on other researchers, and some of the best articles were highly controversial when published.

If Pat Frank is correct (and even after all the debate I currently feel he is), then it represents the models’ inability to mimic reality. If the model cannot handle uncertainties that are experimentally known to be real, then the model clearly does not contain the mechanisms required to control and dampen the climate to achieve the type of result which makes more sense. The models are so constrained as to react only to greenhouse gas forcing, and nothing else.

If that is correct (and to a certain extent it probably is), then Pat Frank’s uncertainty makes even less sense, as GHGs show very limited capacity to change over time. Atmospheric CO2 changes by about 2-3 ppm per year, less than 1%, and the change over time is one of the most constant parameters in climate. It cannot build up to a huge uncertainty.

Hey Javier,

“Atmospheric CO2 changes by about 2-3 ppm per year, less than 1%, and the change over time is one of the most constant parameters in climate. It cannot build up to a huge uncertainty.”

My understanding of what Pat is saying is that the uncertainty due to cloud forcing alone is an order of magnitude greater than the forcing due to CO2. Hence the question: how would you know that temperature changes are due to CO2 forcing and not due to unknowns in cloud forcing? In other words, changes in energy flux due to cloud forcing can completely eclipse the energy flux due to CO2.

If we are talking about the models the answer is clearly not. The models are programmed to respond essentially to GHG changes and temporarily to volcanic eruptions, and little else. Cloud changes or solar changes are not a significant factor in climate models, and cloud changes (and albedo) are programmed to respond to the increase in GHGs in the same direction (more warming). We know what makes the models tick and that cannot produce a huge uncertainty. If anything models are way too predictable knowing the change in GHGs.

Frankly, the uncertainty is much higher in the real world. See the pause, for example. But unless there is a second coming of either Jesus or the Chicxulub impactor, there is no way we could see much more than one tenth of the temperature change by 2100 that Pat Frank’s uncertainty allows.

The reason that the models can’t build up to a huge temperature change is that they are force-fed code that constrains their volatility and temperature boundaries. They were known to blow up until modellers constrained this. Thus they all (except the Russian model) gravitate around a predefined range of temperature change based on a rather simple linear transposition of increased CO2 to increased temperature. This in no way lessens the system error. It simply artificially constrains it. Roy is arguing that because the energy system input/output is then balanced, that somehow lessens the system error. It is impossible to lessen the system error until you carry out real-world experiments that produce real-world data that then lets you parameterize your equations better. The idea that their use of Navier-Stokes equations is reproducing the temperature profile of the globe is ludicrous.

It’s not even really related to the models. If you say that the real-world uncertainty in the cloud forcings is 4 W/m2/year, then you’re saying the uncertainty in our actual understanding of the cloud forcings is increasing that much, year over year; that it is totally plausible that within ~15 years, cloud forcings could be 20 W/m2 lower, or 20 W/m2 higher.

That’s a major conclusion of Frank’s paper, and it feeds into the models. But it’s significant even before applying it to the models, because a huge uncertainty in climate forcings is going to cause a huge uncertainty in temperatures, in the real world and in any model, even an accurate one.

Likewise, if the real world could not plausibly warm up or cool down by these huge temperature amounts within 20 years, then our real-world climate forcing uncertainty cannot increase by 4 W/m2/year. So in the real world, this number must be wrong. Our uncertainty about climate forcings is either not this high, or does not increase this rapidly year over year.

I think this is the major problem with representing uncertainty as 4 W/m2/year, instead of 4 W/m2.

I am not sure that is the case. The plus or minus 4 W per square meter is an uncertainty; it is not an increase every year as you say. It could go up, or down, or be somewhere in between. We do not know. We do know it can change that much in a year. That is experimentally determined, verified, and accepted. The fact that the model cannot deal with known variations is a clear indication that the model is not truly predictive.

Right, but if you just integrate that uncertainty envelope over time as a constant (4 W/m2), you don’t get Pat Frank’s huge, ever-growing uncertainty in temperatures. You only get that if you integrate it as W/m2/year. Check the math!
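The arithmetic behind the two readings can be sketched as follows (assuming the root-sum-square propagation used in Frank’s paper; the variable names are mine):

```python
import math

ANNUAL_UNC = 4.0  # W/m2, the calibration uncertainty under discussion
YEARS = 80

# Reading 1: a constant +/-4 W/m2 envelope. It never grows with time.
constant_envelope = [ANNUAL_UNC for _ in range(1, YEARS + 1)]

# Reading 2: +/-4 W/m2/year, compounded in quadrature over n years:
# sigma(n) = 4 * sqrt(n), which grows without bound.
compounding_envelope = [ANNUAL_UNC * math.sqrt(n) for n in range(1, YEARS + 1)]

print(constant_envelope[-1])               # 4.0 after 80 years
print(compounding_envelope[8])             # 4*sqrt(9) = 12.0 at year 9
print(round(compounding_envelope[-1], 1))  # ~35.8 at year 80
```

The sketch shows only the mathematical difference the commenter is pointing at: the per-year reading produces an envelope that grows as the square root of elapsed years, while the constant reading stays fixed.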

Windchaser

Incorrect.

Just like Roy, Nick and many others (I am utterly dumbfounded by the apparent ignorance of the purpose or meaning of uncertainty and error propagation; it is a real shame), you are assuming that uncertainty must somehow be bound within a certain limit based on reason x, y or z, or that uncertainty manifests in the result.

There is no part of how uncertainty is defined that suggests this; it simply does not work this way.

I’ve been following along and this has already been explained many times in the comment section of all three articles. Yet Roy and Nick, in particular, seem to ignore the responses that clearly highlight the issue, instead choosing to persist with a critique based on a fundamentally flawed assumption that an exception to the above exists.

It doesn’t, resulting in what is essentially a straw man.

Uncertainty has been used in the same manner for centuries and does not care that climate models are involved.

If anything, climate modelling is the perfect example of the extent to which an output function’s value, shape, or apparent agreement with other models (or even reality) is disconnected from its uncertainty.

My assessment is that the increasing uncertainty is the area under the Bell Curve and not representative of actual temperatures. As the Bell Curve grows due to the compounding of uncertainties, it renders any prediction more and more meaningless even though it may occasionally get the temperature prediction “correct” (or somewhere close to global average at any given point in time). The confidence factor plummets. Perhaps the uncertainty should be expressed as a P value of confidence/significance, not temperature.

Just a layman’s thoughts…..
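That layman’s intuition can be made concrete with a normal CDF: the probability mass remaining inside a fixed band around the central projection as the uncertainty (sigma) widens. The band and sigma values below are made-up illustrative numbers, not anything from Frank’s paper:

```python
import math

def prob_within(band_c, sigma_c):
    # P(|x| < band) for a zero-mean normal, via the error function
    return math.erf(band_c / (sigma_c * math.sqrt(2)))

# Probability that the outcome lands within a fixed +/-1.5 C band
# of the projection, for increasingly wide uncertainty envelopes:
for sigma in (0.5, 2.0, 15.0):
    print(f"sigma={sigma:>4}: P = {prob_within(1.5, sigma):.3f}")
```

As sigma grows, the probability attached to any fixed prediction band collapses toward zero, which is the sense in which a widening envelope makes a point prediction meaningless even if it occasionally lands close.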

Javier …. true, but he qualifies in the paper that the research is not on the actual prediction of the model, but rather on the statistical propagation of error. He lines up the arguments quite well, to me, as follows.

1) all GCMs are complicated models with all kinds of parameters, but in the end, their output is nothing more than a theoretical linear model based on GHG forcing as determined by the model.

2) GHG forcing includes all related forcing, not just CO2, thus the LWR of clouds is included.

3) The parameters of the models, when run in their native states, have an error in predicting cloud cover that is latitude dependent, grossly overestimating or underestimating depending on latitude. AND … this is a systemic error, not just a random error.

(I think this is where Dr Spencer and he are getting cross: Frank’s analysis is by latitude, which, taken as a whole, would largely cancel out globally, whereas Spencer is viewing it purely as a homogenized global effect.)

4) If the model is incorrectly predicting clouds, then by default it is incorrectly calculating the overall GHG forcing. Thus, there is a systemic error being propagated throughout the system.

5) To your point, the predictions of GCMs are only expressed as the mean, as is the linear equation that matches GCM output almost perfectly. Neither is expressed with the uncertainty [error bars] of the propagated error, BUT SHOULD BE, and that error is huge. (Somehow I don’t feel that would make a very convincing picture for public consumption if you were trying to halt fossil fuel usage.)

Bottom line: the theory underlying the models is incomplete, the models contain a propagated systemic error, and the predictions of the models are worthless.

OK, but since GHG forcing changes little over time, that makes them highly predictable in their response, not highly uncertain. I’ve seen (and plotted) model runs and model averages, and if anything is clear it is that the spaghetti coming from multiple runs of a model, or from multiple models’ averages, all shows a consistent behavior that doesn’t make them highly uncertain.

With models, instead of error bars, what they do is multiple runs, and that gives an idea of the uncertainty. In the Texas sharpshooter analogy, the model is not shooting at the bull’s eye, but it is producing a grouping close enough to paint a target. That means the uncertainty within the model is not huge, or there would not be a grouping on which to paint the target. With respect to the model/reality error, the distance from the painted target to the original target, it is clear that models run too hot, but again the difference is not enough to allow for a huge uncertainty.

Quite frankly, I can’t see where an uncertainty of ±15°C in 80 years could be produced without leaving a discernible trace. The distance between a glacial period and an interglacial is just 5°C.

I agree with what you’re saying, Jav ….. consider this.

I think the big disconnect in Frank’s paper is that the systematic error associated with cloud cover is latitude dependent. Thus, for very high latitudes, according to his graph, he gets a huge error in one direction, while in the tropics he gets a huge error in the opposite direction. This gives the appearance that cloud cover for any grid cell has a huge uncertainty. As Roy and you point out, in the model runs for the total globe, the errors at each latitude combine and cancel each other out … thus a run on the globe will never be too far from the tuned average.

In reply to:

Javier’s “OK, but since GHG forcing changes little over time, that makes them highly predictable in their response, not highly uncertain. I’ve seen (and plotted) model runs and model averages, and if anything is clear it is that the spaghetti coming from multiple runs of a model, or from multiple models’ averages, all shows a consistent behavior that doesn’t make them highly uncertain.”

What you say would be correct if CO2 caused the warming and if the rise in atmospheric CO2 were caused by human emissions. This is a logical fact, not an argument.

The GCM models’ response is anchored in our minds by the one-dimensional studies, some of which (Hansen’s) showed a warming of 1.5C while others showed a warming of 0.1C to 0.2C, as the lower tropical troposphere is close to water vapor saturation and the CO2 and water infrared emissions overlap.

Hansen’s one-dimensional study froze the lapse rate, which is not physically correct, and ignored the fact that the lower tropical troposphere is saturated with water vapour. Hansen did that to get the warming up to 1.2C. An unbiased one-dimensional study gives a warming of around 0.2C for a doubling of atmospheric CO2.

https://drive.google.com/file/d/0B74u5vgGLaWoOEJhcUZBNzFBd3M/view?pli=1

http://hockeyschtick.blogspot.ca/2015/07/collapse-of-agw-theory-of-ipcc-most.html

In the 1DRCM studies, the most basic assumption is the fixed lapse rate of 6.5K/km for 1xCO2 and 2xCO2.

There is no guarantee, however, for the same lapse rate maintained in the perturbed atmosphere with 2xCO2 [Chylek & Kiehl, 1981; Sinha, 1995]. Therefore, the lapse rate for 2xCO2 is a parameter requiring a sensitivity analysis as shown in Fig.1. In the figure, line B shows the FLRA giving a uniform warming for the troposphere and the surface. Since the CS (FAH) greatly changes with a minute variation of the lapse rate for 2xCO2, the computed results of the 1DRCM studies in Table 1 are theoretically meaningless along with the failure of the FLRA.

In physical reality, the surface climate sensitivity is 0.1~0.2K from the energy budget of the earth and the surface radiative forcing of 1.1W/m2 for 2xCO2. Since there is no positive feedback from water vapor and ice albedo at the surface, the zero feedback climate sensitivity CS (FAH) is also 0.1~0.2K. A 1K warming occurs in responding to the radiative forcing of 3.7W/m2 for 2xCO2 at the effective radiation height of 5km. This gives the slightly reduced lapse rate of 6.3K/km from 6.5K/km as shown in Fig.2.

In the physical reality with a bold line in Fig.2, the surface temperature increases as much as 0.1~0.2K with the slightly decreased lapse rate of 6.3K/km from 6.5K/km.

Since the CS (FAH) is negligible small at the surface, there is no water vapor and ice albedo feedback which are large positive feedbacks in the 3DGCMs studies of the IPCC.

…. (c) More than 100 parameters are utilized in the 3DGCMs (William: Three dimensional General Circulation Models, silly toy models) giving the canonical climate sensitivity of 3K claimed by the IPCC with the tuning of them.

The following is supporting data for the Kimoto lapse rate theory above.

(A) Kiehl & Ramanathan (1982) shows the following radiative forcing for 2xCO2.

Radiative forcing at the tropopause: 3.7W/m2.

Radiative forcing at the surface: 0.55~1.56W/m2 (averaged 1.1W/m2).

This denies the FLRA giving the uniform warming throughout the troposphere in the 1DRCM and the 3DGCMs studies.

(B) Newell & Dopplick (1979) obtained a climate sensitivity of 0.24K considering the evaporation cooling from the surface of the ocean.

(C) Ramanathan (1981) shows the surface temperature increase of 0.17K with the direct heating of 1.2W/m2 for 2xCO2 at the surface.

Javier:

“It is unclear to me what it represents. Pat Frank says it represents statistical uncertainty, but if it does not have any connection with what physical or modeled temperatures can do, why is it relevant and why is it expressed in °C?”

The propagation of uncertainty shows that values of that parameter that are within the CI may produce forecasts that are extremely deviant from what will actually happen; indeed, forecasts that are already extremely doubtful from the point of view of known physics.

Sorry to disagree, but ±15°C in 80 years violates known physics. The models can’t do that. You would need a large asteroid impact to produce it, and models don’t do asteroids. It is simply not believable. The small progressive increase in GHGs that models apply cannot produce the uncertainty that Pat Frank describes. It would require more than two doublings of CO2. I don’t need to know much about error propagation to see that the result defies common sense.

The question remains: If models cannot under any circumstance reach the ±15°C limits of Pat Frank’s uncertainty, what does that uncertainty represent?

“The question remains: If models cannot under any circumstance reach the ±15°C limits of Pat Frank’s uncertainty, what does that uncertainty represent?”

Can we not expect a reduction in probability based on the magnitude of each delta T?

That is to say, a 3 deg. swing (within predictions) is much more likely than a 15 deg. swing.

With regard to a negative response (cooling), empirical data proves that, thus far, cloud formation at best acts as a brake on CO2-induced albedo deltas.

If all of the above is correct, then assigning a linear probability to a 30 deg. range is mathematically unsound.

Javier:

“If models cannot under any circumstance reach the ±15°C limits of Pat Frank’s uncertainty, what does that uncertainty represent?”

How do you know that the model cannot reach the limits of Pat Frank’s uncertainty if the parameter can reach the limit of its uncertainty? You do not know that, and in fact Pat Frank’s analysis shows that an error in the parameter could produce a model output that is ±15°C in error. The model output might then be recognized as absurd, or maybe not; but an absurd model result is compatible with what is known about the parameter. You’ll know from reading about the history of the models that some parameters have been intentionally adjusted and re-adjusted to eliminate the occasional absurd calculated results that have occurred.

“I don’t need to know much about error propagation to see that the result defies common sense.”

This is not a study of the climate; it is a study of a model. What it shows is that a model result that defies common sense is reasonably compatible with what is known about the uncertainty of one of the parameters.

So, an important part here is that Pat Frank treats the real-world cloud forcing uncertainty as W/m2/year, instead of W/m2. This means that when you integrate with respect to time, the probability envelope of the cloud forcing increases, year after year, without end. It grows as the square root of elapsed time, i.e. sqrt(time).

Do you think it’s physically plausible that the real-world cloud forcings could vary up to 4 W/m2 this year, then in 9 years be anywhere within a 12 W/m2 envelope (i.e., 4*sqrt(9)), and in 25 years within a 20 W/m2 envelope, and so on? Do you think our uncertainty of cloud forcings is really growing this way?

These are all real-world questions; nothing about the models yet. But it highlights the importance of the difference between an uncertainty that’s in W/m2 and one that’s in W/m2/year.

Of course, if the actual cloud forcings can vary that much, that would produce huge swings in temperature, in both the real world and in the models. And Frank’s math has the envelope growing without end, to infinity.

You all seem to be missing the point. Dr. Frank has said that he can emulate the output of GCMs’ relation to CO2 using a linear equation. That’s why so many people have doubts about GCMs to begin with. It’s also why the GCMs miss the cooling in the midst of CO2 growing.

Anyway, using that information and an iterative calculation, the uncertainty of any projection appears to be about +/- 20 degrees.

Here is a question to ask yourself. We know ice ages have occurred with higher levels of CO2. What parameters would have to change to cause these? Volcanoes and asteroids are transitory, so they can’t be the cause of long-term ice ages. Why are there no GCM studies that look at this? Can they not handle this? If they can’t, what good are they?

Javier,

This represents the lower resolution limit of the model output over a given time span. Since the models report temperature in degrees C, the resolution is expressed in the same units. It’s like having a scale that measures up to 1 kg but has an accuracy of +/-20 kg: completely useless for weighing things, though it might be useful for comparing the weight of two things, depending on the precision.

Nitpicking aside, I can’t argue with the conclusion:

“The unavoidable conclusion is that a temperature signal from anthropogenic CO2 emissions (if any) cannot have been, nor presently can be, evidenced in climate observables.”

Computer games are useful for understanding a subject, or for finding a way forward if you are totally clueless.

Anyone taking them at face value is an idiot, albeit possibly a highly educated idiot.

Unless you’re playing ” Global Thermonuclear War”.

Hasn’t Kiehl 2007 shown that it is the inverse correlation between climate sensitivity and aerosol forcing, not the ocean heat uptake, that enables the models to behave the same in their temperature projections, despite a wide range of climate sensitivities applied?

Kiehl gives the uncertainty of the ocean heat uptake as only ±0.2 W per m2. The aerosol forcing, on the other hand, is wildly uncertain. It differs by a factor of three between the models and is highly related to the total forcing. High total forcing means low aerosol cooling in the model, and vice versa.

However, there are many more possible propagation errors that make the temperature projections of the models statistically unfit for any forecast of future climates and prediction of future weather states:

Source: Kiehl, J. T. (2007), Twentieth century climate model response and climate sensitivity, Geophys. Res. Lett., 34, L22710, doi:10.1029/2007GL031383.

All of Roy Spencer’s satellite data, plus balloon etc. data, show that the models don’t work. Pat Frank is correct; there is the proof!

Dr. Spencer:

I think I hear you saying that you believe Pat Frank is stating that the models’ predictions are not accurate because of errors in cloud forcing. In a sense that is true, but what he’s actually saying is that because the errors in cloud forcing are so high, the models are meaningless. In other words, quite possibly the whole idea of greenhouse gas forcing, which is the common theme of all the models, may be invalid.

Suggested reading for the basics:

https://physicstoday.scitation.org/doi/10.1063/1.882103

Thank you, Roy, for taking the time to do an “objective” analysis from a “peer review” perspective. While others may have theories to the contrary, I suggest just waiting until they submit their own WUWT articles; otherwise you’d just be responding to numerous “what-if” issues. Your stand-alone article is solid. Keep up the good work.

Pat Frank is correct, in general, regarding how errors propagate. Dr Spencer is incorrect in his statement that balancing the models at the start proves anything (the real climate is never in balance at any time).

That said… it doesn’t matter.

This entire argument misses an important point. This battle won’t be won by debating error bars (and it IS a battle). The general public will neither understand the nuances of error propagation nor care even if they do understand. It’s entirely the wrong debate to be having.

The problem with the warmist position is NOT the error bars on the data… the problem is the data itself. Data that has been manipulated, adjusted, is prone to inconsistencies due to station dropout, gridding, and on and on. The problem is… outside of the computer models… there is not one single shred of actual real-world evidence to support their position. None.

That is the argument we need to be always pounding… not error bars. Getting lost in the weeds debating error bars is a waste of time.

Well, it would not be a waste of time, if we had data about which we were confident. We still need to argue about the legitimacy of the tools that ultimately handle such worthy data, if we were ever to have it.

I decided to let this percolate before weighing in, although I knew it was coming from lunch with CtM and addressing his own intuitive discomforts.

A lot of the disagreements here on Pat Frank’s paper are based on fuzzy definitions. Accuracy versus precision, and error versus uncertainty, are the two biggies. Let’s try to be less fuzzy.

Accuracy versus precision was illustrated in a figure from my guest post, Jason3: Fit for Purpose? (Answer: no), albeit the “no accuracy and no precision” part of the figure could have been a bit more obvious. Accuracy is how close the average shot pattern is to the bullseye; precision is how tight the shot grouping is, whether or not it is on the bullseye.

Error is a physical notion of observational instruments’ measurement problems, like the temperature record confounded by siting problems, or Jason 3 SLR struggling with waves, Earth’s non-symmetric geoid, and orbital decay. It is statistical in nature; error bars express a ‘physical’ statistical uncertainty. Uncertainty itself is a mathematical rather than physical construct: how certain can we be that, whatever the observational answer (with error bars) and error may be, it will lie within the theoretical uncertainty envelope? It is probability theory in nature. It is perfectly possible that all ‘strange attractor’ Lorenz nonlinear-dynamics stable error nodes lie well within the (so constrained) probabilistic uncertainty envelope, because they are two different things, computed differently.

Frank’s paper says the accuracy uncertainty envelope from error propagation exceeds any ability of the models to estimate precision with error bounds. That is very subtle, but fundamentally simple. Spencer’s rebuttal says the possible error is different and probably constrained. True, but not relevant.

“Frank’s paper says . . . Spencer’s rebuttal says . . .”

Thanks for the simplification. It seems the presuppositions underpinning the respective arguments are creating an epistemological language barrier – e.g., when the theologian discusses origins with the atheist.

Beyond this, didn’t the IPCC long ago admit Frank’s ultimate argument, i.e., that the models are worthless for prediction?

From the third assessment report, p. 774, section 14.2.2.2, “Balancing the need for finer scales and the need for ensembles”

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

Roy, let me take a different approach to the problem.

We agree that all your GCMs produce an energy balance at the TOA. All of them accurately simulate the observed air temperature within the calibration bounds.

Nevertheless, they all make errors in simulating total cloud fraction within the same calibration bounds. That means they all make errors in simulated long wave cloud forcing, within those calibration bounds.

The simulated tropospheric thermal energy flux is wrong within those calibration bounds. Tropospheric thermal energy flux is the determinant of air temperature.

So the simulated calibration air temperature is correct while the simulated calibration tropospheric thermal energy flux is wrong. How is this possible?

Jeffrey Kiehl told us why in 2007.

The reason is that the models are all tuned to reproduce air temperature in their calibration bounds. The correctness of the calibration air temperature is an artifact of the tuning.

A large variety of tuned parameter sets will produce a good conformance with the observed air temperature (Kiehl, 2007). Therefore, model tuning hides the large uncertainty in simulated air temperature.

The simulated air temperature has a large uncertainty, even though it has a small data-minus-simulation error. That small error is a spurious artifact of the tuning. We remain ignorant about the physical state of the climate.

Uncertainty is an ignorance-width. The uncertainty in simulated air temperature is there, even though it is hidden, because the models do not reproduce the correct physics of the climate. They do not solve the problem of the climate energy-state.

Although the TOA energy is balanced, the energy within the climate-state is not partitioned correctly among the internal climate sub-states. Hence the cloud fraction error.

Even though the simulated air temperature is in statistical conformance with the observed air temperature, the simulated air temperature tells us nothing about the energy-state of the physically real climate.

The simulated calibration air temperature is an artifact of the offsetting errors produced by tuning.

Offsetting errors do not improve the physical description of the climate. Offsetting errors just hide the uncertainty in the model expectation values.

With incorrect physics inside the model, there is no way to justify an assumption that the model will project the climate correctly into the future.

With incorrect physics inside the model, the model injects errors into the simulation with every calculational step. Every single simulation step starts out with an initial values error.

That includes a projection starting from an equilibrated base-climate. The errors in the projection accumulate step-by-step during a projection.

However, we do not know the magnitude of the errors, because the prediction is of a future state where no information is available.

Hence, we instead calculate an uncertainty from a propagated calibration error statistic.

We know the average LWCF calibration error characteristic of CMIP5 models. That calibration error reflects the uncertainty in the simulated tropospheric thermal energy flux — the energy flux that determines air temperature.

It is the energy range within which we do not know the behavior of the clouds. The clouds of the physically real climate may adjust themselves within that energy range, but the models will not be able to reproduce that adjustment.

That’s because the simulated cloud error of the models is larger than the size of the change in the physically real cloud cover.

The size of the error means that the small energy flux that CO2 emissions contribute is lost within the thermal flux error of the models. That is, the models cannot resolve so small an effect as the thermal flux produced by CO2 emissions.

Propagating that model thermal-flux calibration error statistic through the projection then yields an uncertainty estimate for the projected air temperature. The uncertainty bounds are an estimate of the reliability of the projection; of our statement about the future climate state.

And that’s what I’ve done.
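The arithmetic can be sketched in a few lines. This is a hedged reconstruction, not the paper’s actual code: the ±4 W/m² annual LWCF calibration error, the 33.3 W/m² total greenhouse forcing F0, and the 0.42 × 33 K emulator coefficient are the values as I read them from the paper.

```python
import math

# Hedged sketch of the uncertainty propagation, using values as I read
# them from the paper (these specific numbers are my assumption):
LWCF_ERROR = 4.0     # W/m^2, CMIP5 annual-mean LWCF calibration error
F0 = 33.3            # W/m^2, total greenhouse forcing in the emulator
COEFF = 0.42 * 33.0  # K, linear emulator sensitivity coefficient

def projection_uncertainty(years):
    """Propagate the per-step calibration uncertainty in quadrature."""
    u_step = COEFF * LWCF_ERROR / F0       # ~1.7 K uncertainty per annual step
    return math.sqrt(years * u_step ** 2)  # root-sum-square over the steps

print(round(projection_uncertainty(100), 1))  # prints 16.6
```

A century of annual steps then gives roughly ±17 K, the order of magnitude discussed in this thread.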

Perhaps I can simplify:

If you know the answer of a sum is 20 and you need one value to be 5 or higher, then it is a simple matter of adjusting the other parameters to your heart’s content.

20=5+10+2+2+1

20=5*5-5

20=((5/10+100)*pi*r^2+(the number of albums Justin Bieber sold last year)) × alpha [where alpha is whatever it needs to be to make the equation balance]

None of this says anything about the accuracy of 20 as an answer, and it still wouldn’t even if the answer were more precise, like 20.01946913905.

You can add in any number of real parameters; it wouldn’t matter, so long as you have enough fudge factors to compensate.
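The point can be made concrete with a toy script (purely illustrative, not any actual tuning scheme): fix the answer at 20, let one free “fudge” parameter absorb whatever the others do, and any number of distinct parameter sets reproduces the target exactly.

```python
import random

# Toy illustration: the target is fixed, and "alpha" is the fudge factor
# that is whatever it needs to be to make the equation balance.
TARGET = 20.0
random.seed(0)

tuned_sets = []
for _ in range(5):
    a, b, c = (random.uniform(-10.0, 10.0) for _ in range(3))
    alpha = TARGET - (a + b + c)        # tune the free parameter
    tuned_sets.append((a, b, c, alpha))

# Every tuned set "verifies" perfectly, yet the parameters disagree wildly,
# so the match to 20 says nothing about the accuracy of a, b, c, or alpha.
for s in tuned_sets:
    print([round(x, 2) for x in s], "->", round(sum(s), 2))
```

Each printed set sums to the target, while the individual parameter values vary all over the place.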

Pat and Roy,

There is a lot of money at stake here. GCMs: yes, clouds are a huge weakness. The behavior of clouds has not been predicted, and there is no assurance that it can be predicted as CO2 levels rise.

Goodness.

Fundamentally, there is no proof that rising CO2 ppm can heat the atmosphere! Saturated low in the atmosphere, CO2 restricts the atmosphere from radiating freely to space from high up, but no one can calculate this effect. It could be tiny, or even non-existent.

Speak the truth, both of you……

Dr Frank – perfect.

That is perfectly easy to understand. I do not understand why distinguished and clearly intelligent scientists cannot understand it, and, if they still have issues, why they do not address them from that understanding.

Here’s more from year 2017:

https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/

Roy – You again refer to “~20 different models”, but then acknowledge that “they all basically behave the same in their temperature projections for the same …..”.

As I commented on your first article,

I’m not sure that your argument re the “20 different models” is correct. All the models are tuned to the same very recent observation history, so their results are very unlikely to differ by much over quite a significant future period. In other words, the models are not as independent of each other as some would like to claim. In particular, they all seem to have very similar climate sensitivity – and that’s a remarkable absence of independence.

I would add that I find your statement “All of the models show the effect of anthropogenic CO2 emissions, despite known errors in components of their energy fluxes (such as clouds)!” rather disturbing: the models misuse clouds for a large part of the CO2 effect, so I can’t accept that the models do show the effect of anthropogenic CO2 emissions, and I can’t accept that clouds can simply be ignored as you seem to suggest.

So long as the fudge turns out somewhat edible at the end, it’s all good?

My High School Physics teacher would have flunked me for that egregious fudging assumption.

Nor is “in global energy balance” a valid criterion.

This statement astonishes me. A program that responds to increasing greenhouse gases is purposely written to respond.

Why there is not a standard defined for exactly how model programs respond to greenhouse gases puzzles me.

If all of the programs return different numbers, they are not in agreement, even if they stay within some weird boundary! Nor does adding up the program runs and then publishing the result cancel anything. That is a bland acceptance of bad programming, while hoping to foist the results and costs on the unsuspecting public.

That the model programs all fail to adhere to reality over the long term is the sign that those programs are failures, especially as model results run into future weeks, months, and years.

Apparently, propagation of error is uncontrolled! Those who assume the errors will cancel are making a gross assumption in the face of horrible model runs.

Those nitpicking an article about the “propagation of errors” should do so constructively, not harp about cancelling, balance, gross acceptance or whatever.

Pat Frank addresses one part of climate science’s refusal to address systemic error throughout global temperature monitoring, storage, handling, calculations and presentations.

Propagation error is a problem for climate science, but apparently ignored by many climate scientists.

Defending propagation of error in model runs because the assumption is that they are cancelled out by other model bias is absurd.

Nor is assuming that TOA longwave radiative flux variation is validation of a GCM program.

The models are injected with brown (“black”) matter to conform with real processes, which are chaotic (e.g. evolutionary), not monotonic (e.g. progressive). The system has been incompletely, and, in fact, insufficiently characterized, and is estimated in an unwieldy space. This is why the models have demonstrated no skill to hindcast, forecast, let alone predict climate change.

The discussion appears to me revolving around multiple potential misunderstandings.

1. As often mentioned already: accuracy versus precision and error versus uncertainty

2. Simple statistical analysis on measurement & linear processing versus emulations running Navier-Stokes equations approaching various states of equilibrium and complex feedback.

While the uncertainty and general unreliability of climate models can be argued for, and seems well understood within the sciences even without all the mathematics, Dr. Spencer appears to make the correct remark that known uncertainty levels do not propagate inside these types of emulation but over the “long run” cancel each other out within the equilibrium states. What’s left are more modest uncertainty bounds, with the, I’d argue, well understood general shortcoming of any model addressing reality.

But the presence of unqualified, non-linear components in the real climate does not necessarily mean the model has no value when establishing a general trend for the future (through drawing scenarios, not merely predicting). The model can be overthrown each and every second by reality. This is no different from cosmology and astrophysics, but that understanding will not make astrophysicists abandon their models of star formation or the expansion of the universe. Of course, nobody is yet asking for trillions of dollars based on arguments deriving from astrophysical models.

And that last bit is in my view the bigger problem: uncertainty versus money.

“Dr. Spencer appears to make the correct remark that known uncertainty levels do not propagate inside these types of emulation but over the “long run” cancel each other out within the equilibrium states.”

Most agree that, most of the time, the climate is an equilibrium engine. It searches for that. An equilibrium is its anchor, the thing it revolves around, like a planet around its sun.

We can calculate an orbit of a planet with errors similar to the errors in a climate model. Now predict 100 years in the future. Measure Earth’s distance from the Sun’s average position. Now be Galileo and do the same thing with his technology. His errors can be argued to be huge. Yet his model was probably pretty good for figuring the future Earth/Sun change in distance.

If a tall building’s upper floors displace in high winds, we don’t add the errors. We can’t calculate how much they displace at any time to 6 decimal places, but these errors do not add. If, however, we calculate a difference at the 6th decimal place and keep iterating that error, we are eventually going to get a displacement that indicates a building failure.

John,

No. What Pat is talking about is a specification error; that is to say a limit on accuracy. As such it can’t be cancelled or reduced in any way because it literally is a loss of information, like a black hole of knowledge. There’s no way to use mathematics to change “I don’t know” into “I know”.

As a CME who studied the hard sciences and engineering to get a PhD, I find it amazing that Dr. Spencer and others do not appear to understand the difference between error and uncertainty. Simple searches find many good explanations, including this one (https://www.bellevuecollege.edu/physics/resources/measure-sigfigsintro/b-acc-prec-unc/) or this one (https://www.nhn.ou.edu/~johnson/Education/Juniorlab/Error-SigFig/SigFiginError-043.pdf).

In this case, we cannot know the error in the model projections because we do not know the true value for the temperature in the future. Anyone who is discussing errors is missing the point.

We must, however, estimate the uncertainty on our projection calculations so that we then know what we can say with certainty about the model projections, e.g. so we can say “the temperature 100 years from now lies between A and B degrees”, or more typically that “the temperature 100 years from now will be X +/- y degrees.”

The estimate of the uncertainty can be made without ever running a single simulation, as long as we have an idea of the errors in the “instruments” we are using for our experiments. This is what Pat Frank has done: estimated the uncertainty based on the estimated error in the parameterization of clouds that is used in all GCMs.

The result is that the best we can say is that we are certain that the future temperature (in 100 years) will be X +/- 18C where X is the output of your favorite GCM.

“Anyone who is discussing errors is missing the point.”

The paper is titled “Propagation of Error and the Reliability of Global Air Temperature Projections.” If you are going to insist that error can only mean a difference between a measured value and truth, then how can it be propagated?

Well, to be more explicit, the errors in the “instruments” are what is propagated, resulting in the uncertainty. In this particular case, the “instrument” that has the error is the parameterization of the effects of clouds.

Nick Stokes:

“The paper is titled ‘Propagation of Error and the Reliability of Global Air Temperature Projections.’ If you are going to insist that error can only mean a difference between a measured value and truth, then how can it be propagated?”

As happens frequently, the phrase “propagation of error” has at least 2 distinct but related meanings.

a. It can mean the propagation of a known or hypothesized specific error;

b. It can mean the propagation of the probability distribution of the potential errors.

Pat Frank has been using it in the sense of (b).

“Pat Frank has been using it in the sense of (b).”

So then what is the difference between “error”, meaning “the probability distribution of the potential errors”, and “uncertainty”?

Nick Stokes:

“the probability distribution of the potential errors”, and “uncertainty”?

The probability distribution of the potential outcomes is one of the mathematical models of uncertainty.

Well, Pat thumps the table with stuff like:

“you have no concept of the difference between error and uncertainty”

“the difference between error and uncertainty is in fact central to understanding the argument”

You’re making the difference pretty fuzzy.

Nick Stokes:

“You’re making the difference pretty fuzzy.”

Only when you ignore the “distribution” of the error, and treat the error as fixed. Consider for example the “standard error of the mean”, which is the standard deviation of the distribution of the potential error, not a fixed error. My reading of your comments and Roy Spencer’s comments is that you do ignore the distribution of the error.

I have a model that calculates the temperature each year for a hundred years for a range of rcp trajectories. The results are excellent, closely matching my expectations but disappointing compared with observation.

I tried introducing my best estimates and consequences of different cloud cover conditions but the model output was all over the place.

I then introduced fudges that effectively suppressed the effect of clouds and the models returned to the former excellent performance. Pity about the observations.

This thought experiment illustrates that the uncertainty in simulating the climate exists whether or not I include cloud cover in my model or whether I fudge its effect. The model will process only what it is programmed to do and is independent of the uncertainty. Ignoring elements of uncertainty (e.g. cloud cover) may make the model output look impressive but in fact introduce serious limitations in the simulation. These affect current comparisons with observation but have an unknown influence on future predictions.

In order to judge the predictive usefulness of my model I need to estimate the impact of all uncertainties.

Dr. Frank’s paper provides extremely wide uncertainty bounds for the various models. He says that the bounds he proposes are not possible real temperatures that might actually happen, just the uncertainty bounds of the model.

The normal way to validate uncertainty bounds is to assess their performance. Since the bounds are statistical in nature, roughly 1 run in 20 should exceed them, and a plot of multiple runs should show results scattered all around the range of the bounds.

The climate models have been run long enough to assess how widely they diverge. None of the models come close to that kind of variability. They all sit well within Dr. Frank’s wide bounds. This indicates that the uncertainty bounds proposed are very unlikely to be as large as Dr. Frank calculates them to be.
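That exceedance check can be sketched numerically with a toy distribution (the Gaussian here is an assumption; real model-run statistics would replace it): genuine 95% bounds should be exceeded by roughly 1 run in 20.

```python
import random

random.seed(7)
SIGMA = 1.0
BOUND = 1.96 * SIGMA  # two-sided 95% bounds for a normal distribution

# Draw many "runs" and count how often they land outside the bounds.
runs = [random.gauss(0.0, SIGMA) for _ in range(100_000)]
fraction_outside = sum(abs(r) > BOUND for r in runs) / len(runs)

print(round(fraction_outside, 3))  # close to 0.05, i.e. about 1 in 20
```

If actual model runs fall outside proposed bounds far less often than this, the bounds are suspect as statistical intervals, which is the point being made here.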

Dr. Frank’s bounds connect to the uncertainty of the model predictions of the earth’s temperature. For his uncertainty bounds to be feasible, all values within the range must be physically achievable. An uncertainty bound for a physical measurement that is impossible to achieve is meaningless. If someone tried to tell me that the uncertainty range of the predicted midday temperature tomorrow was +/- 100 degrees C, it would be ludicrous, since temperatures within that range are impossible for this time of year. We can be certain that that uncertainty bound is incorrect. Even if the calculated uncertainty of the measurement technique used for the prediction was indeed that inaccurate, the derived uncertainty bears no association with the true uncertainty. It is a meaningless and wrong estimate of the true uncertainty.

We know the earth simply cannot warm or cool as much as Dr. Frank’s uncertainty suggests. Therefore his estimate of the uncertainty of the models cannot be correct, because his uncertainty itself cannot be correct.

Both these simple observations indicate that the assumptions on which these bounds were calculated must be false, and that the true uncertainty is far less. In other words, Dr. Frank’s uncertainty bounds are themselves most uncertain.

I fully agree; that is what I have been trying to say, more ineptly. The result doesn’t pass the common-sense test. Neither the planet nor the models, as they have been published, can do that.

Chris and Javier, you are being far too literal in your reading of the uncertainty range.

Somebody mentioned the models will produce similar results because they operate within a “constraint corridor” (boundary conditions and assumptions like TOA energy balance). That’s a very appealing way to describe a significant aspect of their operation.

Does this “corridor” reduce uncertainty? Certainly not!

Uncertainty is LOST INFORMATION. Once lost, it’s gone forever as far as a model run is concerned. From any position, further modelling can only increase the uncertainty. And that’s essentially what Pat is telling you.

So what about a model which has a limited range of feasible outcomes? If Pat’s theoretical uncertainty range exceeds the feasible range of outcomes, this only means the uncertainty cannot tell you anything about the future position within the range.

The fact that Pat’s uncertainty bounds exceed this range is just surplus information about the uncertainty. Pat’s method is not modelling the climate; why would it need to be aware of a detail like the feasible range of MODEL outputs? As somebody else keeps telling us, uncertainty is a property of the model, NOT an output.

Like I said, you are being far too literal and inflexible in your interpretation of Pat’s results. Your objections are ill-founded.

Following an example from above: if you cut a piece to a ±0.5 mm error and then assemble 100 units of the piece, your propagated error would be ±50 mm. Although it is quite unlikely your assembly would be 50 mm off, that is your uncertainty. There is a real, albeit small, possibility of that, and the possibility is not small that you could be 25 mm off.

If you make multiple runs with a model that has a ±15°C uncertainty, you should see plenty of ±7°C results. As that doesn’t happen, models are either constrained, as you say, or programmed so that errors cancel. In both cases that reduces the uncertainty of the final result.

In any case, if Pat’s mathematical treatment produces a result that does not agree with how models behave, it is either wrong or it has been made irrelevant by the way models work. It is as if, in the example, all pieces with an error above ±0.1 mm are discarded. Not very practical, but you won’t get an assembly with >10 mm error even though the error in making the pieces is still large.

Javier,

No, you are talking about precision errors. Instead, think about what would happen if you cut each piece to the same length within +/-0.1mm, but your ruler was 0.5mm too long (calibration error). Now how far off would you be after adding the 100 pieces? The precision errors would mostly cancel, but the resulting assembly would be 50mm too long. Now, before you even started cutting, let’s say you knew that the ruler could be +/-0.5mm out of spec, but you don’t know by how much. How could you predict the length of the final assembly? What confidence would you have that it would be within +/- 2mm?
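A quick numerical sketch of that distinction (hypothetical numbers, matching the 100-piece example): random ±0.1 mm cutting errors largely cancel, while a fixed 0.5 mm ruler bias accumulates linearly.

```python
import random

random.seed(1)
N = 100
BIAS = 0.5  # mm: the ruler itself is 0.5 mm off (systematic error)

# Each cut carries a random error plus the same fixed calibration bias.
piece_errors = [random.uniform(-0.1, 0.1) + BIAS for _ in range(N)]
total_error = sum(piece_errors)

systematic_part = N * BIAS                   # grows linearly with N
random_part = total_error - systematic_part  # grows only ~sqrt(N) * sigma

print(systematic_part)             # prints 50.0
print(round(abs(random_part), 2))  # small residual from the random cuts
```

The assembly ends up roughly 50 mm long of spec no matter how precisely each piece was cut, which is the calibration-error point being made.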

Javier

This is the misinterpretation that Pat is being forced to play “Whack-a-Mole” with.

Uncertainty never cancels in the way you assume. Once information is lost (for a model run), it is lost for the remainder of the run. It can NEVER be recovered by constraints and other modelling assumptions. All these things do is add their own uncertainties for subsequent steps.

Where uncertainties are independent of each other (and that’s the general assumption until somebody can demonstrate otherwise), uncertainties propagate in quadrature (Pythagoras). They never reduce numerically, and they never reduce in practice.

Pat shows you how to do it. His expertise on the topic is way above anybody else’s on this thread. We have a great opportunity to LEARN.

In reply to Mr Thompson, it is precisely because all of the models’ predictions fall within Professor Frank’s uncertainty envelope that all of their predictions are valueless.

It does not matter that they all agree that the expected global warming will be between 2.1 and 5.4 K per CO2 doubling, because that entire interval falls within the envelope of uncertainty that Professor Frank has calculated, which is +/- 20 K.

Note that that uncertainty envelope is not a prediction. It is simply a statistical yardstick, external to the models but shaped by their inputs and calculated by the standard and well-demonstrated statistical technique of deriving propagation of uncertainty by summation in quadrature.

Or think of the yardstick as a ballpark. There is a ball somewhere in the ballpark, but we are outside the ballpark and we can’t see in, so, unless the ball falls outside the ballpark, we can’t find it.

What is necessary, then, is to build a much smaller ballpark – the smaller the better. Then there is more chance that the ball will land outside the ballpark and we’ll be able to find it.

In climate, that means understanding clouds a whole lot better than we do. And that’s before we consider the cumulative propagation of uncertainties in the other uncertain variables that constitute the climate object.

Subject to a couple of technical questions, to which I have sought answers, I reckon Professor Frank is correct.

+1

Bravo!!!

Dr. Frank’s linearization of the model output is quite ingenious, making for an analytic uncertainty calculation from just a single parameter, the LWCF error. In the Guide to the Expression of Uncertainty in Measurement (the GUM, referenced in Dr. Frank’s paper), another way to obtain uncertainty values is with Monte Carlo methods (calculations). Treating a given GCM as a black box with numeric inputs and a single output (temperature), it may be possible to calculate the temperature uncertainty with the following exercise:

1) Identify all the adjustable parameters that are inputs to the model

2) Obtain or estimate uncertainty values for each parameter

3) Obtain or estimate probability distributions for each parameter

4) Randomly select values of each parameter, using the uncertainty statistics for each

5) Run the model, record the temperature output

6) Repeat 4-5 many times, such as 10,000 or more

The temperature uncertainty is then extracted from a histogram of the temperatures, which should dampen the “your number is too large” objections.

However, the usefulness of Monte Carlo methods is limited by computation time: the more input parameters there are, the more repetitions are needed. Does anyone know how many adjustable parameters these models have, and how much computation time a single run requires?
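As a concrete sketch of the recipe above, with a toy two-parameter function standing in for the GCM black box (everything here is a hypothetical stand-in, not any real model, parameter, or uncertainty value):

```python
import random
import statistics

random.seed(42)

def toy_model(cloud_param, aerosol_param):
    # Stand-in for a GCM run returning a temperature anomaly (K).
    return 1.5 * cloud_param + 0.8 * aerosol_param

# Steps 2-3: assumed uncertainties and distributions for each parameter.
# Steps 4-6: draw random parameter values, run, record -- many times.
runs = []
for _ in range(10_000):
    cloud = random.gauss(1.0, 0.3)    # mean 1.0, 1-sigma uncertainty 0.3
    aerosol = random.gauss(0.5, 0.2)  # mean 0.5, 1-sigma uncertainty 0.2
    runs.append(toy_model(cloud, aerosol))

# The output uncertainty is read off the histogram of recorded outputs.
spread = statistics.stdev(runs)
print(round(statistics.mean(runs), 2), round(spread, 2))
```

For a real GCM the expense of step 5 is the limiting factor, which is exactly the computation-time question raised above.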

Chris Thompson:

The climate models have been run long enough to assess how widely they diverge. None of the models come close to that kind of variability. They all sit well within Dr. Franks wide bounds. This indicates that the uncertainty bounds proposed are very unlikely to be as large as Dr. Frank calculates them to be.The model runs have not systematically or randomly varied this parameter throughout its confidence interval, so information on the uncertainty in output associated with uncertainty in its value has not been computed.

Roy,

The first time I saw uncertainty estimates for the UAH lower troposphere temperatures, my eyebrows went high, because this seemed to be remarkably good performance for any instrumental system, let alone one operating way up at satellite height, difficult to monitor and adjust for suspected in-situ errors. For years I had tried hard at the lab bench for such performance, and failed.

It would be great if, as a result of comprehending the significance of Pat’s paper, you were able to issue a contemplative piece on whether you found a need to adjust your uncertainty estimates, or at least express them with different caveats.

In climate research, there are several major examples of wholesale junking of past results from older instruments when newer ones were introduced. Some examples are Argo buoys for SST, pH of ocean waters, aspects of satellite measurements of TOA flux, and early rocketry temperature results versus modern, plus some that are candidates for junking, like either liquid-in-glass thermometers or thermocouple/electronic devices (one or the other; they are incompatible). There are past examples of error analysis favoring rejection of large slabs of data thought reliable but overcome by better measurement devices. Science progresses this way if it is done well.

These comments are in no way unkind to your excellent work in simulation of air temperatures via microwave emissions from gases, one of the really good breakthroughs in climate research of the last 50 years. Geoff S

Hey Greg,

“I do hydraulic studies (flood modeling). The object of the model isn’t to be precise, there is no way you can be precise. Yes the output is to 4 decimal places, but the storm you’re modeling isn’t a real storm.”

I appreciate that. What I’m trying to say is that some claim models closely follow actual air temperatures in recent decades. If that is the case, why is that? By mere luck? If uncertainty is huge, I would expect significant deviations from actual air temperature. If models consistently give results in tight ranges, and those results are close to actual temperature changes, then what’s the point of complaining about massive uncertainty?

Huge thanks to Pat Frank for this tenacious work, and also to Roy Spencer for providing a much needed critique. The fact that it comes from Dr. Spencer, who is much admired on the sceptic side, makes it all the more valuable. So, what is the result…does Dr Spencer have a handle on this?

After quite a lot of vacillation, I come down pretty clearly on the side of Dr. Frank. I really do think Roy Spencer has been defeated in this argument. Although always doubtful of the models, I am usually a sceptic of any challenge to the basics, always feeling that such challenges require very substantial evidence. I’m also somewhat limited mathematically, and was at first very sympathetic to the specific challenge by Nick Stokes and others, relating to the time units Pat introduced into the equations, and the sign on the errors. Took me a long time to get over that one, and I expect the argument will go on. Eventually I saw it as a diversion rather than a real obstacle to acceptance of the fundamental finding of Pat Frank’s work.

Stepping back for a moment, it is clear that it is in the very nature of the model programs that the errors must propagate with time, and can be restrained only by adjustment of the parameters used and by a training program based on historical data. I would suggest that all of us, everybody, including Roy Spencer, including the modellers themselves, really know this is true. It cannot be otherwise. And it shouldn’t take several years of hard slog by Pat Frank to demonstrate it.

Let’s take an analogy that non-mathematicians and non-statisticians can relate to: the weather models that are used routinely for short-range weather forecasts. Okay, I understand that there are important differences between those and GCMs, but please bear with me. That forecasting is now good. Compared with 30 years ago, it is very good indeed. The combination of large computing power and a view from satellites has changed the game. I can now rely on the basics of the general forecast for my area enough to plan weather-sensitive projects pretty well. At least, about a day or a day and a half ahead. Thereafter, not so good. Already after a few hours the forecast is degrading. It is particularly poor for estimating local details of cloud cover, which is personally important for me, just hours ahead. After three or four days, it is of very little use (unless we are sitting under a large stationary weather system, when I can do my own pretty good predictions anyway!). After a week or so, it is not much better than guesswork. In truth, those short-range models are spiralling out of control, rapidly, and after a comparatively short time the weather map they produce will look not remotely like the actual, real weather map that develops. The reason is clear: propagation of error.

Weather forecasting organisations update their forecasts hourly and daily. Keep an eye on the forecasts, and watch them change! The new forecasts for a given day are more accurate than those they succeed. They can do that because they have a new set of initial conditions to work from, which cancels the errors that have propagated within that short space of time. And so on. But climate models can’t control that error propagation, because they don’t, by definition, have constantly new initial conditions to put their forecast (“projection”) back on track. Apologists for the models may counter that GCMs are fundamentally different, in that they are not projecting weather but temperatures, decades ahead, and that these are directly linked to the basic radiative physics of greenhouse gases, which are well reflected by modelling. Well, perhaps yes, but that smacks of a circular argument, doesn’t it? As Pat Frank demonstrates, that is really all there is in the models: a linear dependence upon CO2. The rest is juggling. We’ve been here before.

Roy Spencer, I’d like you to consider the possibility you might be basing your critique on a very basic misconception of Dr Frank’s work.

….Well said…

There is no purpose to this argument. Models use various means to achieve a balance which in nature does not exist. Ice ages? Then modellers feed in CO2 as a precursor for warming. Roy Spencer is correct. Climate change is accidental, not ruled by mathematical equations, which cannot under any circumstances represent the unpredictable nature of our climate. This argument is about how interested parties arrive at exactly the same conclusion. Models cannot predict our future climate, hence modellers’ predilection for CO2. If you want to predict temperature based upon CO2, all you need is a sheet of graph paper, a pocket calculator, ruler and pencil. Models are dross.

What alarmism never contemplates is the absurdity of its own rhetoric. Hypothetically, if CO2 causes warming, then mitigation of CO2 would cause cooling. Historically there is no evidence that CO2 has caused warming or cooling. Models exist to give the misleading impression that we do understand the way in which our climate functions, when the only active ingredient upon which predictions can be postulated is CO2. The models of themselves are noise.

“In climate research and modelling, we should recognise that we are dealing with a coupled nonlinear chaotic system and therefore that long term prediction of our future climate states is not possible”. The Intergovernmental Panel on Climate Change (IPCC), Third Assessment Report (2001), Section 14.2.2.2, page 774.

https://wattsupwiththat.com/2016/12/29/scott-adams-dilbert-author-the-climate-science-challenge/

David Wells. Pat’s paper is a formal analysis to back up your assertions.

“If you want to predict temperature based upon Co2 all you need is a sheet of graph paper, a pocket calculator, ruler and pencil.”

Pat shows this with his emulation of GCMs. GAST projections are nothing more than iterative linear extrapolation of assumed CO2 forcing inputs. Forget all the detail and mystery that their creators like to hide behind, and just call them by what they do: iterative extrapolators. Forget the $bn sunk to get to this conclusion. Pat shows time and again that all we have is iterative linear extrapolators of assumed CO2 forcing.

Pat can then present familiar concepts of uncertainty propagation in iterative linear extrapolators to show that the outputs of GCMs are not reliable. There is a maximum degree of uncertainty they can tolerate to be able to discern the effect of CO2 forcing, and they fail to achieve this standard.

It’s a beautiful logical chain of reasoning, well supported by evidence and analysis.

Excellent comment. Regardless of how complicated the GCMs are, their output in relation to CO2 is linear. Dr. Frank has shown this remarkable observation is true. The corollary then follows that uncertainty is calculated through well-known formulas.

Agreed Jim.

Pat’s work is important, and it needs to be supported against the naysayers who cannot stand the blunt truth they are faced with.

Mr Wells has misunderstood Professor Frank’s method. Consider three domains. First, the real world, in which we live and move and have our being, and which we observe and measure. Secondly, the general-circulation models of the climate, which attempt to represent the behavior of the climate system. Thirdly, the various theoretical methods by which it is possible to examine the plausibility of the models’ outputs.

Our team’s approach demonstrates that if temperature feedback is correctly defined (as it is not in climatology), climate sensitivity is likely to be about a third of current midrange projections. To reach that result, we do not need to know in detail how the models work: we can treat them as a black box. We do need to know how the real world works, so that we can make sure the theory is correct. All we need to know is the key inputs to and outputs from the models. Everything in between is not necessary to our analysis.

Professor Frank is taking our approach. Just as we are treating the models as a black box and studying their inputs and outputs in the light of established control theory, so he is treating the models as a black box and studying their inputs and outputs in the light of the established statistical method of propagating uncertainty.

If Professor Frank is correct in saying that the models are finding that the uncertainty in the longwave cloud forcing, expressed as an annually-moving 20-year mean, is 4 Watts per square meter – and his reference is to the Lauer/Hamilton paper, where that figure is given – then applying the usual rules for summation in quadrature one can readily establish that the envelope of uncertainty in any model – however constructed – that incorporates such an annual uncertainty will be plus or minus 20 K, or thereby.

However, that uncertainty envelope is not, repeat not, a prediction. All it says is that if you have an annual uncertainty of 4 Watts per square meter anywhere in the model, then any projection of future global warming derived from that model will be of no value unless that projection falls outside the uncertainty envelope.

The point – which is actually a simple one – is that all the models’ projections fall within the uncertainty envelope; and, because they fall within the envelope, they cannot tell us anything about how much global warming there will be.

Propagation of uncertainty by summation in quadrature is simply a statistical yardstick. It does not matter how simple or complex the general-circulation models are. Since that yardstick establishes, definitively, that any projection falling within the envelope is void, and since all the models’ projections fall within that envelope, they cannot – repeat cannot – tell us anything whatsoever about how much global warming we may expect.
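The “summation in quadrature” arithmetic above is easy to check numerically. In the sketch below, the per-step temperature uncertainty `u_step` is an illustrative assumption of mine, standing in for whatever kelvin-per-year figure one derives from the ±4 W/m² statistic; the only point is the square-root growth of the envelope:

```python
# Root-sum-square ("summation in quadrature") of identical, independent
# per-step uncertainties. u_step is an illustrative assumption, not a
# value taken from the paper.
import math

def propagated_uncertainty(u_step, n_steps):
    """Envelope after n_steps, each contributing +/-u_step independently."""
    return math.sqrt(sum(u_step**2 for _ in range(n_steps)))
    # equivalent closed form: u_step * math.sqrt(n_steps)

u_step = 1.7  # K per annual step, purely illustrative
for years in (1, 10, 50, 100):
    print(years, round(propagated_uncertainty(u_step, years), 1))
```

An envelope of this kind grows as the square root of the number of steps, which is how a modest per-step figure can compound into a band of tens of kelvin over decades.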

I am still trying to reach a conclusion on this. Where is Steven Mosher when you need him?

Uncertainty represents lost information. Once it is gone, there is no way to recover the lost information. This is the essence of Pat’s analysis.

Roy Spencer seems to agree in principle, but doesn’t seem to accept Pat’s approach.

I have a number of points I’d like to add.

Uncertainty can only increase with each model step. A model has no prospect of “patching in” new assumptions to compensate for loss of information in earlier steps.

Pat’s uncertainty bounds go beyond what some people consider to be a feasible range. Fine, then crop Pat’s uncertainty to whatever range you are comfortable with. All you will conclude is the same thing: you have no way of knowing where the future will lie within your range. That’s fundamentally the same conclusion as Pat’s, but you have made it more palatable to yourself. It doesn’t mean Pat is wrong in any way.

Models produce similar outputs because they are operating within “constraint corridors” (as somebody called it) which exclude them from producing a wider range of outputs. It is not evidence of reducing uncertainty. Lost information is gone, and lots of models running with similar levels of lost information cannot create any new information.

Constraints do not reduce uncertainty. They only introduce assumptions with their own inherent uncertainties, and therefore total uncertainty increases when a constraint is relied upon as a model step. For this, I would like to refer to the assumed TOA energy balance using the following very simple equation:

N(+/-n) = A(+/-a) + B(+/-b) + X(+/-x)

Uppercase are model OUTPUTS and lower case are uncertainties which are model PROPERTIES.

N has a value of zero because it is the model assumed TOA flux balance.

A and B balance, representing Roy’s biases and assumed (but unidentified) counter-biases when the assumed TOA constraint is applied.

X is zero (not recognised by the model) and represents concepts like Pat’s modelling errors.

The fact that the uppercase items can add to zero does not mean the lowercase uncertainties cancel each other. In fact the opposite is true. Roy’s assumption of counter-biases represents more lost information (if we knew about them we should be modelling them). So the value of ‘b’ has the effect of increasing ‘n’ as the uncertainties are compounded in quadrature.
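The distinction between OUTPUTS that cancel and uncertainty PROPERTIES that compound can be shown with a toy calculation (all numbers below are illustrative assumptions, not model values):

```python
# Toy version of N(+/-n) = A(+/-a) + B(+/-b) + X(+/-x): the outputs sum
# to zero (the assumed TOA balance) while the uncertainties, combined in
# quadrature, grow. All values are illustrative.
import math

A, a = +4.0, 2.0   # a bias and its uncertainty (W/m^2)
B, b = -4.0, 2.0   # the assumed counter-bias and its uncertainty
X, x =  0.0, 4.0   # unrecognised error term and its uncertainty

N = A + B + X                       # outputs cancel
n = math.sqrt(a**2 + b**2 + x**2)   # uncertainties compound

print(N)            # 0.0: "perfect" balance
print(round(n, 2))  # ~4.9 W/m^2: larger than any single term
```

The balanced output tells you nothing about the combined uncertainty, which only ever grows as terms are added.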

To me, this is a very valuable comment.

Thank you, Jordan.

Dave Day

“Uncertainty represents lost information. Once it is gone, there is no way to recover the lost information.”

GCMs famously do not keep a memory of their initial states. Nor do CFD programs. In this they correctly mimic reality. You can study the wind. What was its initial state? You can do it more scientifically in a wind tunnel. No-one tries to determine the initial state there either. It is irrelevant.

So yes, the lost information can’t be recovered, but it doesn’t matter. It didn’t contain anything you wanted to know. And much of this error is of that kind. The reason it doesn’t matter is that what you actually find out from GCM or CFD is how bulk properties interact. How does lift of a wing depend on oncoming flow? Or on angle of attack? None of these depend on the information you lost.

I totally disagree with that, Nick Stokes. But you have widely advertised your complete inability to understand these concepts on this thread. And your description of Eqn 1 as “cartoonish” was a breathtaking display of arrogance and lack of self-awareness. I really have no interest in what you have to say, so don’t bother responding to my comments.

What you write is correct as far as it goes, but now consider this system which is much closer to the model we are all supposed to be considering.

Temperature varies with the net flux (imbalance), N(t)

N(t) = A - B + F - lambda*deltaT + X(+/-x)

A = B with a correlation of 1

B = sum over i of (b_i +/- error_i)

Can you calculate the contribution of the uncertainty in b_i to X?

Franks: The uncertainty is huge so the results are meaningless; Spencer: models are adjusted to stay within the bounds of a physically-realizable outcome so this uncertainty is meaningless. Let me ask a question. If physics indicated that temperature swings could in fact be 25C or higher so that no artificial bounding of modeled results would be needed, would the models be producing different outcomes? If so, then I would say Franks is right — the models are physically meaningless.

Frank not Franks.

On the flip side: if the real world cannot plausibly vary in temperature this much, then it also implies that the uncertainty is also not this high. It doesn’t take a fancy computer model — simply the idea that cloud forcings could change by 20W/m2 within a few decades is itself pretty implausible.

And the only reason that Frank is saying that cloud forcing *could* vary this much is because he treated the cloud forcing uncertainty (4 W/m2) as a *change* in cloud forcing uncertainty (4 W/m2/year). So he can integrate that over time, and the uncertainty in cloud forcing *in the real world* grows over time, without end or bound, to infinity. Does that sound physically realistic? Units are important, yo.

Lauer et al indicate on average it changed +/- 4 W/m2 per year. Your argument then is with Lauer.

No, they indicated it changed 4 W/m2, not per year. It is the same over any time period. At *any* given point in time, it can be within this +/- 4 W/m2 range, and this does not change over time.

On a previous discussion on this point, someone went so far as to actually email Lauer himself. Here’s the reply (emph. mine)

So Lauer also says that there’s no particular timescale attached to the value. It’s *just* W/m2, not W/m2/year.

https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/#comment-1443
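For what it’s worth, the two competing readings of the ±4 W/m² statistic can be put side by side in a few lines. This sketch (the framing is mine) only illustrates what each reading implies over time; it does not settle which one Lauer and Hamilton intended:

```python
# Reading (1): +/-4 W/m^2 is a static calibration band, the same at any
# point in time. Reading (2): it is a per-step uncertainty compounded in
# quadrature at each annual iteration. Framing is illustrative.
import math

u = 4.0  # W/m^2, the Lauer & Hamilton figure

def static_band(years):
    return u                     # reading (1): never grows

def compounded(years):
    return u * math.sqrt(years)  # reading (2): grows as sqrt(t)

for t in (1, 20, 100):
    print(t, static_band(t), round(compounded(t), 1))
```

The whole dispute on this sub-thread is over which of these two functions correctly describes the statistic.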

Thanks. That is one interpretation I was considering. Still it represents the uncertainty. In that case it may still be able to propagate, but may need to be treated differently. The division by sqrt(N) may be sufficient.

This was the first question I had about Pat Frank’s study. I hope this gets clarified. Otherwise I still believe the approach is correct. Lauer’s paper strongly implied this was a 20-year multi-model annual mean value.

“ At any given point in time, it can be within this +/- 4W/m2 range, and this does not change over time.”

But the models ITERATE, Windchaser. That changes everything.

You are falling into the trap of assuming uncertainty cancellation over model steps. No, no, no!

Uncertainty propagates in quadrature (read Pat’s paper). Quadrature means uncertainty never reduces/cancels.

It represents lost information. Once lost, information is lost for the remainder of the model run.

The model drifts along in its own merry way. Stable, but nevertheless blissfully unaware of how off-course it might be.

Uncertainty is the PROPERTY (not OUTPUT) that quantifies how far adrift the model could be. It is calculated separately from the model, as Pat shows.

“But the models ITERATE, Windchaser”

Yes, they do. But Pat does not attach accumulation to the iteration, which is in 30 min steps, and would give absurd results. Instead he arbitrarily accumulates it over a year, which has nothing to do with model structure. He then gets somewhat less absurd results.

Here is what I think is going on – in very simple terms.

Does anyone think that in the development of climate models there weren’t cases where results wandered off into solutions that made no sense? Of course there were. What was/is the solution to the problem of runaway models? Introduce fudge factors to constrain the models to produce outcomes that at least make sense. The problem is that the fudge factors cover up the fact that much of the physics is imprecise.

Pat takes errors introduced from inaccurate modeling in clouds to show through time they lead to huge potential variance in possible future outcomes. The whole point is that such errors accumulate through time. This makes perfect sense to me.

Roy comes back and argues that the results Pat shows are nonphysical. Climate models don’t produce those kinds of results. I think he is right in this regard, but the question is why.

As I said above, climate modelers have introduced error corrections to keep model output in the “feasible” choice set. The problem with this approach is that the underlying errors aren’t corrected through better physics. This leads to a situation where the internal dynamics are kept in line by ad hoc measures that are not governed by laws of physics, but rather by the need to get output that at least makes sense.

The way I think of what Pat is doing is that he is showing just how much tinkering needs to be done to keep the models in the real world, because if they were stripped of the ad hoc error correction they would produce nonphysical results.

The takeaway is that model results have little to no claim to getting the internal dynamics of the earth’s climate “right.” Climate models suffer from the same problem as weather models: as you move through time, results go off the rails. With weather models, the results go off the rails in a matter of a few short weeks. I don’t think anyone would give much credence to predicted results from the GFS weather model 18 months out. Of course, because they update the forecast frequently, long-range predictions aren’t really the goal. I also believe that weather models don’t have the same amount of ad hoc adjustments that climate models have. Climate models need the ad hoc adjustments because without them the long-range forecast would tend toward extremes that make no sense.

My bottom line is that I think Pat and Roy are talking past each other. I don’t think either is wrong, they are just saying different things.

Quote: “Pat takes errors introduced from inaccurate modeling in clouds to show through time they lead to huge potential variance in possible future outcomes. The whole point is that such errors accumulate through time. This makes perfect sense to me.”

If this was true, then the present climate models would not show logical warming results for GH gases. The reason is that only water feedback is included. Those models do not have cloud forcing effects, because modelers do not have enough knowledge to formulate them mathematically. That is why a huge uncertainty has been shown for cloud forcing.

You can discuss this matter back and forth but you cannot find a solution.

Antero, “*If this was true, then the present climate models would not show logical warming results for GH gases.*”

Not correct. You’re equating a calibration error statistic with an energy flux, Antero. My critics make this mistake over and over again.

Calibration errors do not affect a simulation. Why this is so hard for some people to understand is beyond knowing.

Pat –

yes, this is crazy! How many times have you got to say it?

There are some very smart people contributing to this thread, and yet they don’t seem to be getting the message. And doubtless some very smart people following it who don’t wish to get involved… I’d love to know what some of the prominent lurkers are thinking… perhaps they will feel obliged to address the actual paper.

The IPCC says that we cannot predict our climate future. The models that the IPCC uses have all failed even the most basic tasks. Roy and Pat, whilst disagreeing on minutiae, agree with the IPCC. It doesn’t matter how many or how exquisite the equations; the only thing which matters is data. And the data contradicts the belief, alarmism, prevarication and the predictions/projections predicated on the model output, which are driven not by reality but by the need to misrepresent CO2 as a potential threat.

The IPCC and alarmism use modelling as a front to disguise the purpose of their deceit. The IPCC uses the supposed complexity of models to overwhelm the gullible and disguise their deceit. Dame Julia Slingo, having left the Met Office, said “technology needed to be at least 1,000 times faster before we have a cat in hell’s chance of using math to predict our climate future”. But if math cannot even begin to approach the accidental and unpredictable nature of our climate, as the IPCC has admitted, then what exactly is the point of spending countless billions on modelling when climate has the potential to turn on a sixpence and freeze the planet at its convenience? Arguments about how you convince the gullible that modelling is nonsense and rage about CO2 idiotic remain inadequate for that task. They remain the preserve of the elite, beyond what ordinary folk could ever comprehend, which is why alarmism continues to prosper: its message is simple.

Ban CO2 and we will have a stable climate, a golden age, a land of milk and honey.

I recently read the chapter “O Americano, Outra Vez!” in the book “Surely You’re Joking, Mr. Feynman!”. Teaching in Brazil, to his amazement he discovered that studying physics in Brazil was limited to memorizing words and formulas. The students had no idea what they meant in the real world; they could not connect them to real physical phenomena. And the few teaching scientists who could make that connection were educated abroad. So Feynman concluded that “no science is being taught in Brazil”.

The parallel I see with the discussion here is the basic question: what does (or does not) the theory tell us about real physical phenomena? What is their meaning in the real world? To rephrase that question: “what do climate models tell us about the climate as observed in the real world? What do they teach us apart from the figures they spawn?”.

My understanding here is: Spencer concludes: “The figures they spawn are within realistic limits because of the way the models work”. Frank concludes: “The models fail to tell us how the climate operates. They cannot do this because their inner logic is flawed and meaningless for the real world.”

Can someone please explain to me in plain English which GCM accurately models the Earth? In other words, which one will accurately predict what happens in 2 years, 5 years, 10 years and so on? If there isn’t one then what use are they? If there is one then why the need for more than one model?

The Russian model comes close….Oh no, another Russian conspiracy ! lol

Analog,

None of them will. Models are not designed to predict the future state of Earth’s climate.

As Dr. Spencer explained on his blog, the models are tweaked and parameterized so they produce a steady-state, unchanging climate in multi-decade test runs. This is not a model of the real climate. It’s a fake, steady-state climate. Then CO2, aerosols, etc. are added to see what effect those things might have on the fake-climate model. If Dr. Spencer is correct, the models could not reproduce the Little Ice Age, the warming at the early part of the 20th century, the cooling from the 1940s to the 1970s, etc. All of which were presumably natural climate phenomena.

One would think that things called “climate model” would be actually modeling climate, but they don’t. They are not designed to mimic the earth’s climate. They are only designed as scientific tools for calculating warming caused by CO2.

As predictors of future climate, the models are “not even wrong.” They were not designed for that purpose, presumably because such a model would be far too complex to run on modern supercomputers.

However, Dr. Frank has shown that they are not fit for that purpose either. They get the physics so wrong that the uncertainty is much larger than the result. For me, knowing that the models get clouds so wrong is enough to prove that the models can’t be right.

The IPCC claims that they know man’s CO2 caused the warming because if they run the models without CO2, they don’t get any warming. I used to think that was circular reasoning. Now I see that it is a lie. The models don’t show warming in the absence of CO2 because they are programmed that way.

If I’m wrong about that, I sure would appreciate being corrected.

Thank you Thomas for the ‘plain English’ explanation.

That the models use TOA balance as an initial condition is itself a large source of error. CERES shows clearly TOA flux is NOT in balance.

https://geosciencebigpicture.files.wordpress.com/2018/04/ceres-toa-all-sky-inverted1.png

LW flux is flat to slightly increasing. SW flux decrease is responsible for warming. CO2 is not a significant absorber of SW radiation.

I still remain unconvinced that it’s even possible to model our climate at all, let alone the effect of CO2 within our atmosphere. There are just too many moving parts! You have the external forces we don’t understand, from the sun, and we’ve been experiencing changes there like we’ve never been able to see or measure before. You have the ocean currents, and Judith Curry’s latest paper on that and the effect from solar activity. You have all sorts of moisture spewed into the air through stacks, exhaust pipes, and of course major irrigation that increases constantly. You have the greening from increased CO2, and its effect on everything. Planes delivering some CO2 up high, while also delivering moisture and manufacturing false cloud cover in the process.

I know of no computer that can model all of this effectively. I know people always consider this metric or that metric to be weather, not climate, but after 30 or 40 years or more of these predictions, one would think we would have some model, any model, that could get our world figured out? I certainly don’t think it’s possible. If the old adage about a butterfly flapping its wings in Peru, or wherever, can affect the weather here in MN, I’d say we are going to need a lot of processing power to figure out how our climate will be affected when, for all we know, we’re causing more or fewer of these butterflies to be created, or making them flap more or less when they are happy or sad! And then we imagine that this little trace gas is the culprit of a global disaster! Based on settled science, no less!

It seems we can’t even agree on the terms to model these things… but we should send the world back into poverty, just in case… of something that clearly is nowhere near as bad as it was touted to be. We need to put this whole thing on hold, and come up with a clear way to demonstrate real, measurable, “unfudgeable” results that hold up year over year. Then we can start tweaking parameters that make the difference when we have a model that actually works. Right now, all we have are wild theories, and none of them seem to work, but we call it settled science!

Good to spar, better to let others do it for you. Watching Roy v Pat with great interest.

Takeaway so far is Pat contending that he is just showing the propagation of systemic error in GCMs, quite different from random error, which both Pat and an oilman point out would tend to disappear with more runs.

Mathematically the systemic error or its root mean square used by Pat over a 65 year period has to lead to a large positive and negative uncertainty range as he shows.

He specifically states it is only the maths in computer models, not the physics, that he is discussing; not the real or observational world.

Mathematically this is sound.

Practically, its relevance is debatable, but it does open up a can of worms about the way computer programmes now project uncertainty into the future.

Unless this is specifically addressed and tightened up by the programmers (Mosher would agree, as he supports more open coding) so that we can all see the assumptions and uncertainties going into these programs, Pat is free to make his comments, and those disagreeing need to show that their programmes do not have the systemic bias he seems to have identified.

angech

Good point. Sounds good to the zero-level layman in the field discussed here.

Is it? He’s talking about the actual cloud forcing uncertainty, as measured in the real world, and saying this is 4W/m2/year, not 4W/m2. As a result, if you integrate this value with respect to time, the uncertainty in cloud forcing increases, year after year.

So is it really physically plausible that the *real world* cloud forcing uncertainty can increase, year-over-year, without bound?

Of course if you take that result and plug it into the model, you’ll get nonsense. It’s incorrect in the real world.

“He specifically states it is only the maths in computer models not the physics”

But he says nothing about the maths actually in computer models, and I don’t think he knows anything about it. He made up some maths.

No, I developed an equation that accurately emulates GCM air temperature projections, Nick.

That’s hardly making stuff up.

As usual, you mislead to confuse.

My take on this goes in another direction entirely. We continually see temperature measurements published to .1 or .01 degrees. But my own observation is that realistically, the measurement could not possibly have any significant digits past the decimal place, just from the fact that temperature can vary dramatically within a few feet. Yet we see an anomaly calculation to .01 or .001 degrees, meanwhile claiming that the 20th century warming of about 2 degrees is ‘unprecedented’. With a realistic minimum confidence interval of +/-1 degree, and probably more like +/-2 degrees, how can we do anything but laugh at claims of ‘unprecedented warming’ and an anomaly measured to hundredths of a degree?

They present the temperature *anomaly* to hundredths of a degree, not the temperature.

While temperature varies rather quickly with altitude, shade, distance to the coast, etc., temperature *anomaly*, the integral of how temperature changes with respect to time, is a bit more constant. There are a bunch of studies of this.

You ignore the problem of how the base was calculated to obtain the anomaly and what measurement error propagation calculations are included in that. There is also the problem of just what an average temperature tells you about enthalpy.

In taking the difference to get an anomaly, the uncertainty in the temperature and the root-mean-error in the base-period average must be combined in quadrature, Windchaser.

The uncertainty in the anomaly is always larger than the uncertainty in the temperature.

This common practice is entirely missing in the work done by the compilers of the GMST record. It’s hugely negligent.

Those folks assume all measurement errors are random, and just blithely average them all away. It’s totally self-serving and very wrong. As demonstrated by the systematic non-random measurement errors revealed in sensor calibration experiments.

I have published on that negligence, here (1 mb pdf)
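The quadrature rule Pat describes for anomalies is a one-line calculation. The two uncertainty values below are illustrative assumptions of mine, chosen only to show that the combined figure always exceeds the raw measurement uncertainty:

```python
# Forming an anomaly subtracts a base-period mean, so the measurement
# uncertainty and the base-period root-mean-error combine in quadrature.
# Both input values are illustrative.
import math

u_temp = 0.5  # K, uncertainty of a temperature measurement
u_base = 0.2  # K, root-mean-error of the base-period average

u_anomaly = math.sqrt(u_temp**2 + u_base**2)

print(round(u_anomaly, 3))  # ~0.539 K
print(u_anomaly > u_temp)   # True: the anomaly is the more uncertain number
```

Since squares are never negative, u_anomaly can never be smaller than u_temp, whatever values are plugged in.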

A challenge to Pat Frank:

Windchaser pointed out that Dr. Brown contacted Dr. Lauer, and Dr. Lauer indicated that the 4 W per square meter uncertainty was not intended to be premised on any given timeframe.

https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/#comment-1443

It is clear that Dr. Lauer derived this statistic from 20-year multi-model average annual means. This implies a one-year timeframe may be reasonable. The square root of N in the divisor may be sufficient to control the period. It could also be the case that this is a fixed uncertainty bar on all measurements at any given time.

Could you comment more on the statistic and how you justify using it as an annual uncertainty?

An average implies a division operation, does it not? What was the divisor? If it was time, then it has to be 4 W per square meter per something, depending on the unit of the divisor. So what was the divisor?


Say you have 100 people. Let’s say the average has $5, and let’s say there’s a little bit of variability, so there’s an uncertainty of +/- $1. In other words, if we randomly selected a person, ~95% of them would have between $4 and $6. That’s what that uncertainty means.

From these statistics you can say “there is $5/person, on average”, and then multiply by the number of people to get the *total* number of dollars. This is essentially integrating the average over the number of people to get the total number of dollars.

Or you can say “an average person has $5”. That’s the same thing.

But you *cannot* say “the average person has $5/person”, as that would imply that if you had a group of 4 people, the average person would have $20. Ex: 4 persons * “average has $5/person” = “average has $20”.

——————-

Now imagine you added an extra 100 people, basically as a copy of the first 100 people. Person #101 has the same $ as person #1. Person #102 has the same $ as person #2. And so on. A copy of the people and their $.

You now have *200* people, and the average person still has $5, right? Your average did not change just because you added more people.

Likewise, the uncertainty also stays the same. It’s still +/- $1. Both the uncertainty and the average do not change as you add more people.

So if we go out in the real world and measure the cloud forcing over some 20 years, and we get a value and some uncertainty, then the next year, we can still expect that the cloud forcing is going to be similar. And the year after that. And the year after that. (If it was properly measured and properly sampled.) There may be uncertainty, but if you’ve measured it right, that uncertainty stays the same, month after month, year after year, decade after decade. The uncertainty *already* accounts for any variation, and 95% of the time, at any given moment, the cloud forcing should fall within that measured value + its uncertainty.

Just as doubling the number of people does not double the average $ or the uncertainty, doubling the number of years should not double the cloud forcing or *its* uncertainty.

And while there is still an uncertainty, it still propagates through any proper mathematical calculation. But it propagates through as W/m2, not as W/m2/time.
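The duplication argument above can be checked directly; the dollar amounts below are made up for illustration:

```python
# Duplicating a population changes neither the mean nor the spread.
# The dollar figures are illustrative.
import statistics

people = [4.0, 4.5, 5.0, 5.5, 6.0]   # dollars per person
doubled = people + people            # add an exact copy of everyone

print(statistics.mean(people), statistics.mean(doubled))      # same mean
print(statistics.pstdev(people), statistics.pstdev(doubled))  # same spread
```

Both per-person statistics are invariant under duplication, which is the sense in which “adding more years” leaves a properly measured average and its uncertainty unchanged.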


I think you are putting too much emphasis on the units. I am not in a position to look at the paper now, but I do not believe that Pat Frank needs the units of flux per time. He just puts in the flux. When you do linear differential mathematics you just calculate the next state, and there are no considerations for the units of per-time.

Windchaser,

In your example, $5/person × 4 person = $20; not $20/person. The units (person) still cancel out.

Windchaser:

"In other words, if we randomly selected a person, ~95% of them would have between $4 and $6. That's what that uncertainty means."

And make the further assumption that the mean is 5, and for simplicity let 4 and 6 be the known range, instead of a CI.

So one person has between 4 and 6, but you don’t know how much, only that the expected value is 5.

Two people have a total between 8 and 12, but you don’t know how much, except that the mean of 2 is 10.

Three people have a total between 12 and 18, but you don't know how much, except that the mean is 15.

…

N people have a total between N*4 and N*6, but you don't know how much, except that the mean total for N is N*5. When the value added is unknown, but represented by an uncertainty interval, the uncertainty of the sum grows.

If (4,6) is a CI, then you have to work with the variance estimate, and sum the variances, and take the square root of the sum of variances (assuming independence, which Pat Frank does not do — he calculates the effect of the correlation). But the result is still that the uncertainty in the total grows with N.
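The claim above, that under independence the uncertainty of a total is the square root of the summed variances and so grows with N, can be sketched in a few lines (a toy illustration; the per-person SD of $0.5 is an assumption):

```python
import math

sd_per_person = 0.5  # assumed SD of one person's dollars (hypothetical)

def sd_of_total(n, sd):
    """Uncertainty of a sum of n independent values: sqrt of summed variances."""
    return math.sqrt(n * sd**2)

# The per-person average never changes, but the uncertainty of the TOTAL grows:
assert math.isclose(sd_of_total(1, sd_per_person), 0.5)
assert math.isclose(sd_of_total(4, sd_per_person), 1.0)    # sqrt(4) * 0.5
assert math.isclose(sd_of_total(100, sd_per_person), 5.0)  # sqrt(100) * 0.5
```

The uncertainty of the total grows as sqrt(N), which is slower than the total itself (which grows as N), but it grows nonetheless.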

"take the square root of the sum of variances (assuming independence, which Pat Frank does not do"

He took the square root of the sum of variances, as in Eq 6, or eq 9 of the SI. That assumes independence, although he doesn't say so. There is some earlier talk of autocorrelation, but I cannot see that it is used anywhere in the calculations. No values are stated there.

Nick Stokes:

"There is some earlier talk of autocorrelation, but I cannot see that it is used anywhere in the calculations."

Although not explicitly displayed, the correlation would be used in computing the covariance of the step(i) and step(i+1) uncertainties that is displayed in equation 4. The exponent 2 on the covariance might be a typo or a convention different from what I have read.

If he had omitted the correlation in this equation, then he would have a serious under-estimate of the final uncertainty. He does not say so (or I missed it or forgot it) but the partial autocorrelation for lags greater than 1 is probably 0 or close to it, so only the lag 1 covariances are needed.
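The propagation described above (sum the variances, add twice the lag-1 covariances, take the square root) can be sketched as follows; the correlation value and per-step uncertainties here are invented for illustration:

```python
import math

def propagated_sd(sds, rho1):
    """SD of a sum of sequential uncertainties with lag-1 correlation rho1.
    var(sum) = sum of variances + 2 * sum of lag-1 covariances
    (partial autocorrelations beyond lag 1 assumed ~0, as suggested above)."""
    var = sum(s**2 for s in sds)
    var += 2 * sum(rho1 * sds[i] * sds[i + 1] for i in range(len(sds) - 1))
    return math.sqrt(var)

sds = [4.0] * 10  # ten steps, each with an assumed +/-4 uncertainty

independent = propagated_sd(sds, rho1=0.0)  # plain root-sum-square, sqrt(10)*4
correlated = propagated_sd(sds, rho1=0.5)   # positive covariance makes it larger

assert math.isclose(independent, math.sqrt(10) * 4.0)
assert independent < correlated
```

This is why omitting a positive correlation would under-estimate the final uncertainty: the covariance terms only add.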

Matthew, I use a multi-model, multi-year average calibration error statistic.

It is static and does not covary. There's no covariance term to add to the propagation.

Paul:

There was a lot of averaging going on. Over 20 years over all grid points, and finally over all the models.

…Interestingly, we do not retain all of those modes of averaging in our units, to the point I just made to Windchaser about over-emphasizing units. I do not see anything about per grid point, per model, etc. So the fact that it does not have "per year" is not as significant as some people make it out to seem.

The final average is per model. I do not see that in the units.


When Dr. Frank does his calculation of the forcings, he adds in the uncertainty at each timestep. Since each timestep is a year long (in his calculation), and he re-adds this +/-4W/m2 in at each timestep, he's adding it in, each year, and treating it as a yearly value.

If you have a time-based equation, and you add something to the iterative function once per year, you're treating it as a yearly value. A fixed value would get added once at the beginning; a per-year value gets added in each year.

Note that his results come out completely differently if you change the length of the timesteps to, say, months, and still add this uncertainty in at each timestep. (Because then the equation would add in 4W/m2/month, not 4 W/m2/year).
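Windchaser's timestep objection can be made concrete with a toy sketch (the 100-year horizon and the +/-4 value follow the discussion; everything else is illustrative): if the same per-step uncertainty is compounded in quadrature at every step, the answer depends on how many steps you chop the century into.

```python
import math

def total_uncertainty(per_step, n_steps):
    # Root-sum-square accumulation of identical, independent per-step terms.
    return math.sqrt(n_steps * per_step**2)

years = 100
annual = total_uncertainty(4.0, years)        # +/-4 added once per year
monthly = total_uncertainty(4.0, years * 12)  # the same +/-4 added every month

# Same physical claim, very different answers: the monthly version comes out
# sqrt(12) times larger, which is the objection being raised.
assert math.isclose(monthly / annual, math.sqrt(12))
```

(Pat Frank's reply to this, further down the thread, is that the per-step error magnitude must itself be rescaled to the step length, which restores the invariance.)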

It’s a sequential series of calculations, Windchaser, with an uncertainty associated with every single input into every single step of the calculation.

That uncertainty must therefore be propagated through every step of the calculation.

There’s no mystery or ambiguity about it. Doing so is standard uncertainty analysis in every data-oriented experimental science.

Windchaser

“Note that his results come out completely differently if you change the length of the timesteps to, say, months, and still add this uncertainty in at each timestep. (Because then the equation would add in 4W/m2/month, not 4 W/m2/year).”

–

Windchaser this is something that you raise now and that Roy Spencer mentioned last article [wrongly, sorry Roy] and that Nick Stokes repeatedly uses.

–

There are 2 different ways to approach the 4W/m-2.

Remembering that a plain 4 W m-2 simply means a unit of energy over time: even though the word "time" does not appear explicitly, it is implicit in the term watts.

The watt is the SI unit of power in the International System of Units: 1 watt = 1 joule per second = 1 newton meter per second = 1 kg m2 s-3.

–

The “watt per square meter” is the SI unit for radiative and other energy fluxes in geophysics. The “solar constant”, the solar irradiance at the mean earth-sun distance is approximately 1370 W m-2. The global and annual average solar irradiance at the top of the earth’s atmosphere is one-fourth of the solar constant or about 343 W m-2.

–

Nick's first step is to use either of these two quite different definitions when he is arguing, and then to swap one in place of the other.

–

Hence the "solar constant" is repeatedly referred to as a timeless unit by both him and ATTP when ATTP supports Nick's argument.

Note that the solar constant does have a time dimension and an area dimension to it, despite numerous denials by people who should know better. A flux is not just a number without units. Secondly, it is not actually a constant; it varies with the distance from the sun.

–

So the timestep is 1 second, always.

–

Pat Frank uses the 4 W m-2 in his paper in a totally different way. He is discussing the

CMIP5 Model Calibration Error in Global Average Annual Total Cloud Fraction (TCF).

The words Global Average Annual mean just that. CMIP5's words, not Frank's.

He and they are comparing annual factors, not monthly, not per-second ones, and giving the calibration error as a percentage of the average annual rate per second.

Very important.

–

Three things spring to mind.

You say,

“his results come out completely differently if you change the length of the timesteps to, say, months, and still add this uncertainty in at each timestep.”

You ignore the fact that when you change the timestep you have to change the amount of energy received in that timestep by the same factor. The total amount of energy in 1 month is 1/12th of that in a year, i.e. 1/3 of a W m-2 per month.

–

Further, your contention ["and still add this uncertainty in at each timestep"] is wrong, in that you now use a compounding rate for different time lengths and assume that the result should be the same. Roy made the same compounding comment and inbuilt error.

If you use a month as your basis you have to assess monthly. You have to add each month's energy received together for 12 months, not compound them. The sun does not keep increasing heat each month. The earth is no hotter after 12 individual months in a row than after the same year. This is one of the basic ways of avoiding error: check that the end point agrees with the assumptions that your calculations give.

–

Thirdly that Nick Stokes should be called out each and every time that he purposely swaps terms. This has been quite difficult for years as he tends to parse his arguments, deliberately, into one part which is quite true and one that is ever so subtly wrong.

A subtle reminder is him attacking Pat Frank for using root mean squares.

This is a standard way of assessing the size of an error.

Nick and others still have comments up claiming that since a root mean square is always positive that Frank cannot use the figure as a plus or minus for the standard deviation in uncertainty.

This is despite this being the typical way that uncertainty is quantified: a deviation of plus or minus the root mean square.

Rant finished.

You’ll find my reply here, John.

https://patricktbrown.org/2017/01/25/do-propagation-of-error-calculations-invalidate-climate-model-projections-of-global-warming/#comment-1451

Specifically from the post: Here's Prof. Lauer as provided:

"The RMSE we calculated for the multi-model mean longwave cloud forcing in our 2013 paper is the RMSE of the average *geographical* pattern. This has nothing to do with an error estimate for the global mean value on a particular time scale."

Prof. Lauer is, of course, correct. The crux issue is that he referred the error to a "particular time scale." The LCF error they calculated is an error estimate for the global mean simulated value on an averaged time-scale. The mean error is a representative average time-scale error, not a particular time-scale error.

It should be clear to all that an average of errors says nothing about the magnitude of any particular error, and that an annual average says nothing about a particular year or a particular time-range.

Following from the step-by-step dimensional analysis above, the Lauer and Hamilton LCF rmse dimension is ±4 Wm⁻² year⁻¹ (grid-point)⁻¹, and is representative of CMIP5 climate models. The geospatial element is included.

Thank you, Pat. I did make a point about “per grid point”, etc. Even Lauer did not retain those units, but everyone knows they are there.

Similar things are done in other fields, such as protein-folding.

The model just can’t do it on its own. So it is helped (forced) to come to what is known to be the “correct” answer. In many instances this may well end up with something approximating the truth but does not advance the science because the model will still fail on problems where the answer has not already been determined by other means.

True. With protein folding [I presume] direct empirical validation is possible. What is powerful about Pat Frank's work is that he in fact used real, actual, experimental, validated, and accepted data to determine the uncertainty.

"in fact he used real, actual, experimental, validated, and accepted data to determine the uncertainty"

Really? Where?

The Lauer figure of 4 W/m2 is the difference between GCM results and a measurement. Or more precisely, the correlation of the difference.

The ±4 W/m^2 is a calibration error statistic, Nick.

Correlations are dimensionless.

It’s very disingenuous of you to transfer a nearby description over to an incorrect object.

You know exactly what you’re doing. You’re obfuscating to confuse, and it’s shameful.

The rmse is derived directly from the correlation; your entire basis for the 4 W/m2 is this statement of Lauer: "For CMIP5, the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m-2)"

I queried JQP's assertion "in fact he used real, actual, experimental, validated, and accepted data to determine the uncertainty"

No answer yet.

Nick, "your entire basis for the 4 W/m2 is this statement of Lauer: 'For CMIP5…'"

No it's not. My basis is the description in L&H of how they calculated their error.

To fix this problem with the models they will have to assume a certain distribution of cloud forcing error. Perhaps saying the error each year is normally distributed and independent of the previous year would work, but that assumption needs to be validated with historical actual cloud data.

These measurements, of cloud forcing and uncertainty, come from historical cloud data.

Yes, more specifically, he’s saying that there’s an emulated linear relationship between the forcings and the temperature.

Then he adds an uncertainty for the cloud forcing. Okay, so far, so good. This would (normally) result in an uncertainty for the temperature, right? There's a direct feed-through: if there's a linear relationship between forcings and temperature, then there's a linear relationship between forcing uncertainty and temperature uncertainty.

But Dr. Frank then makes the forcing uncertainty grow over time. And grow. And grow. And grow.

In the real world, is it plausible that the cloud forcing uncertainty grows forever, ad infinitum? Can it really have a value of +/-1000 W/m2, when you look at a long enough time period? This would be enough to boil the oceans or freeze the planet to 0 K.

No! It can't. This idea that cloud forcings could be somewhere between {huge ridiculous numbers} is itself wrong. But that's what Dr. Frank's paper says: that the cloud forcing uncertainty grows forever, without limit.

Maybe it could not grow ad infinitum. But certainly it could grow two or three years in a row, or maybe even 10. In that case one would expect some compensatory mechanism in the models to take over and dampen out the signal. Clearly the emulator is not doing that. That does beg the question of whether the full nonlinear-capable model would do that, but I think that is left up to the modelers to answer Pat Frank.

In finance, for example, a common model of interest rates employs mean reversion. Since interest rates in reality don't grow without limit, the models add a mean reversion term to force the rate to return to a mean.
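The mean-reversion idea mentioned above can be sketched with a toy discrete process (Vasicek-style; all parameter values here are invented for illustration): each step pulls the value back toward a long-run mean, so the spread does not grow without limit the way a plain random walk's does.

```python
import random

def mean_reverting_path(x0, mu, kappa, sigma, n_steps, seed=0):
    """Toy discrete mean-reverting process: each step pulls the value
    back toward the long-run mean mu at rate kappa, plus Gaussian noise."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        x += kappa * (mu - x) + sigma * rng.gauss(0, 1)
        path.append(x)
    return path

# Start far from the mean; reversion pulls the path back toward mu = 0.05.
path = mean_reverting_path(x0=0.20, mu=0.05, kappa=0.3, sigma=0.005, n_steps=200)

# Unlike a random walk, the path ends up far closer to the mean than it began.
assert abs(path[-1] - 0.05) < abs(path[0] - 0.05)
```

Whether GCMs contain an analogous damping of cloud-forcing excursions is exactly the question being put to the modelers here.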


Looks like Dr. Frank gives an uncertainty at the end of 100 years which is huge. What do the creators of the models say the error distribution is and how do they arrive at it ?

They do ensemble tests and publish the results.

The ensemble tests are only tests of precision, and only using the models themselves, with no external analysis of uncertainty as Pat Frank did.

People here love parroting precision and accuracy, but to no useful effect. You can’t determine accuracy of predictions of things that haven’t happened. The ensemble tests test exactly what is being talked about here – the propagation of uncertainty. They tell you how much a change in present state affects the future. There is no better test than to see what GCMs actually do to variation, via ensemble.

You may want to call it parroting, Nick, but the point is that the only error the ensemble test produces is error internal to the models themselves. Pat Frank is comparing the model to a real, actual, measured, verified error statistic. In other words, he is in a sense comparing the model to reality, not just checking the model for internal errors.

A long while ago I tried to find the formal proven theoretical framework by which the process, utility and results of these ensemble GCM tests can be evaluated. Does anybody have a current reference? My recollection is that it doesn't exist, but if it does, it would be good to know.

Nick, "The ensemble tests test exactly what is being talked about here – the propagation of uncertainty."

No they don't. Ensemble tests are about variability of model runs; how well models agree. That has no connection with propagation of error.

In fact, ensemble tests provide no bearing on accuracy at all. Or on reliability.

Reliability is what propagation of error reveals.

Model ensemble tests have nothing to say about whether we should believe climate simulations.

The LWCF average annual uncertainty is always ±4W/m^2, Windchaser. It doesn't grow.

However, as it is a model calibration error statistic representing theory error, the model injects it into every step of a simulation.

That means the uncertainty in projected temperature, not in forcing, grows with the number of simulation steps. All the projection uncertainty bounds I describe are ±T.

"The LWCF average annual uncertainty is always ±4W/m^2"

No, you have been saying emphatically that it is ±4W/m^2/year/model. But I see this dimensioning is quite variable in the SI. Eq 9 gives it as 4 W/m2, and in fact the units get messed up if you try to add a /year (let alone a /unit).

The SI has no eqn 9, Nick.

The derivation is in eqns. 6-1 through 6-4. Dimensional analysis necessarily produces a global ±4W/m^2/year/model.

I’m impressed with a level of professional acuity, yours, that denies averages their denominator.

Hey Javier,

"If we are talking about the models the answer is clearly not. The models are programmed to respond essentially to GHG changes and temporarily to volcanic eruptions, and little else."

Methinks that's not quite right. Models respond to several energy fluxes described in such models. But because much greater energy fluxes are in equilibrium, a relatively small CO2 signal may tip the balance and push the trend upwards. That's my understanding of what Dr Spencer is saying about models and their 'test runs' – balancing several energy flows. The question now is: what if the uncertainty associated with some energy fluxes (i.e. cloud forcing) is far larger than the entire signal due to CO2? The signal due to CO2 is simply lost in the noise of uncertainty. Even worse – because this 'uncertainty calibration error' propagates, the longer a simulation runs, the wider the uncertainty becomes.

I liked the analogy of the ruler used by one of the commentators.

Let me see whether I can use this analogy correctly here (repeating what has already been said probably):

Say we have a one-meter ruler whose length we are not sure of — the ruler is labeled as a one-meter ruler, but we know that the person who made it was impaired somehow and could have made it 4 cm longer or 4 cm shorter.

We want to use this ruler to measure the sizes of watermelons in a contest to declare someone the winner of the largest-watermelon award. It’s the only ruler available to us, and so we are stuck with it.

Six judges use the ruler to measure a series of melons, and the six judges come up with roughly the same results. We know that rulers, in general, are accurate within, let's say, one centimeter. This +/- centimeter is one sort of error, right? The judges, as a group, have great precision, because they are never off from one another by more than one centimeter – they always agree within +/- one centimeter.

But the uncertainty of the ruler’s ACTUAL length is an overriding uncertainty, no matter how many judges come within the +/-centimeter precision of their measures.

The cloud-forcing uncertainty in climate models (i.e., climate-forecasting rulers) is such an uncertainty, yes?, no?, maybe?, sort of?

If not even close, then forgive my confusion and my confusing others — try to forget you ever read this. If correct or close, then yipee, I get it.

Yes, but the error with the ruler is a scale error. It doesn’t compound.

I’d like another assessment of that.

Say we measure a melon, and observe a certain number of gradations on the ruler. Each gradation has an uncertainty about it, because we cannot be confident that a centimeter is a real centimeter.

We don't know when the gradation is or isn't a real centimeter. A centimeter on the ruler could be off by some millimeters either way, and, as we continue to measure, not only are we uncertain of exactly how many centimeters we are REALLY measuring, but also we are not certain whether the ruler's measure represents any one melon's true length.

The longer the melon, the more gradations of uncertain centimeters accumulate. How is this not compounded with each longer melon?

The measuring instrument itself does not have a knowable value that reflects the real quantity that it measures. With each increasing length, our uncertainty about what this measurement truly represents becomes greater and greater. We have less and less confidence about these measurements over a greater time, even though the output of the measuring instrument falls within certain bounds.

Remember, we don’t know what a gradation on the faulty ruler represents. A centimeter could be 0.6 cm or 1.4 cm or 1 cm, but we cannot know with confidence. And as the distance we measure increases, our uncertainty about what the ruler measures increases. Maybe 2 cm, as measured by the faulty ruler, is any combination of those three possibilities of 0.6, 1.4, or 1 in a sum, that is:

0.6 + 0.6, meaning 2 was really 1.2

0.6 + 1, meaning 2 was really 1.6

1.4 +1.4, ……………….. really 2.8

1 +1, ……………………. actually 2 (but we can’t know)

1 + 1.4, ………………… really 2.4

0.6 +1.4, ………………. actually 2 (it happens)

Then 3 cm is any combination of those, and so on and so on.

My head is a jumble, but I hope I’m making my point somewhat coherently, which is there seems to be some sort of propagation here, as we go longer and longer out with an instrument whose scale’s correspondence to reality is not certainly known.

The ruler is measuring lengths consistently. It’s just that the units are 0.96m rather than 1m.
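Nick's claim, that a mis-graduated ruler is a scale (multiplicative) error rather than an accumulating one, can be sketched with toy numbers (the 0.96 factor comes from his comment; the melon lengths are invented):

```python
import math

SCALE = 0.96  # the ruler's "meter" is really 0.96 m

def measured_length(true_length_m):
    # A scale error affects every reading by the same constant factor.
    return true_length_m / SCALE

short = measured_length(0.25)
long = measured_length(2.50)

# The RELATIVE error is identical for any length: it rescales, but
# it does not compound with the number of gradations counted.
rel_short = (short - 0.25) / 0.25
rel_long = (long - 2.50) / 2.50
assert math.isclose(rel_short, rel_long)
```

This is the crux of the bias-versus-variance dispute that follows: a fixed scale error is a bias, correctable once known, whereas an unknown spread propagates.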

Nick Stokes:

"The ruler is measuring lengths consistently. It's just that the units are 0.96m rather than 1m."

You are confounding bias and variance.

There is no variance here. There is just one ruler, incorrectly graduated.

Nick Stokes:

"There is just one ruler, incorrectly graduated"

You do not know by how much.

How much negative forcing from clouds do I have to use to get the models to show cooling?

“The similar behavior of the wide variety of different models with differing errors is proof of that. They all respond to increasing greenhouse gases, contrary to the claims of the paper”

I don’t believe that’s true, and here’s why:

the models don't do a "prediction run" until they have had their free (although constrained) parameters tuned so that they produce a stable state during a control run. This means that they are "tuned" to "cancel out" their errors, leaving only CO2 as a "driver". Which means that, within the tuning limits, they are all "similar". It can't be otherwise – they *must* show a change when a forcing (CO2) changes, and due to tuning and its constraints, this *must* be similar to the other models. There is also further post-hoc selection, where runs that "go off the rails" are discarded – like all "that's odd" moments, such runs contain valuable information on the operation of the models that is, alas, currently being ignored.

In general, I suspect that the disagreement comes down to the difference between engineering and science – engineers want usefully predictive models with known errors so they can make things work – to them, it doesn't matter if the model is "realistic" or not, in terms of its workings, only that it makes correct predictions.

Science, on the other hand, wants a model that explains the workings of the system accurately – having “correct” representations of the workings is more important than how accurate the prediction is.

Completely different goals that produce completely different interpretations of what is “useful”.

Both are valuable bits of information, for different reasons.

In short: “In theory [science], theoretically and practically are the same. In practice [engineering], they are different.”

There seems to be a misunderstanding afoot in the interpretation of the description of uncertainty in iterative climate models. I offer the following examples in the hopes that they clear up some of the mistaken notions apparently driving these erroneous interpretations.

Uncertainty: Describing uncertainty for human understanding is fraught with difficulties, evidence being the lavish casinos that persuade a significant fraction of the population that you can get something from nothing. There are many other examples, some clearer than others, but one successful description of uncertainty is that of the forecast of rain. We know that a 40% chance of rain does not mean it will rain everywhere 40% of the time, nor does it mean that it will rain all of the time in 40% of the places. We do, however, intuitively understand the consequences of comparing such a forecast with a 10% or a 90% chance of rain.

Iterative Models: Let’s assume we have a collection of historical daily high temperature data for a single location, and we wish to develop a model to predict the daily high temperature at that location on some date in the future. One of the simplest, yet effective, models that one can use to predict tomorrow’s high temperature is to use today’s high temperature. This is the simplest of models, but adequate for our discussion of model uncertainty. Note that at no time will we consider instrument issues such as accuracy, precision and resolution. For our purposes, those issues do not confound the discussion below.

We begin by predicting the high temperatures from the historical data from the day before. (The model is, after all, merely a single day offset) We then measure model uncertainty, beginning by calculating each deviation, or residual (observed minus predicted). From these residuals, we can calculate model adequacy statistics, and estimate the average historical uncertainty that exists in this model. Then, we can use that statistic to estimate the uncertainty in a single-day forward prediction.

Now, in order to predict tomorrow's high temperature, we apply the model to today's high temperature. From this, we have an "exact" predicted value (today's high temperature). However, we know from applying our model to historical data that, while this prediction is numerically exact, the actual measured high temperature tomorrow will be a value that contains both deterministic and random components of climate. The above calculated model (in)adequacy statistic will be used to create an uncertainty range around this prediction of the future. So we have a range of ignorance around the prediction of tomorrow's high temperature. At no time is this range an actual statement of the expected temperature. This range is similar to a % chance of rain. It is a method to convey how well our model predicts based on historical data.

Now, in order to predict out two days, we use the "predicted" value for tomorrow (which we know is the same numerical value as today, but now containing uncertainty) and apply our model to the uncertain predicted value for tomorrow. The uncertainty in the input for the second iteration of the model cannot be 'canceled out' before the number is used as input to the second application of the model. We are, therefore, somewhat ignorant of what the actual input temperature will be for the second round. And that second application of the model adds its ignorance factor to the uncertainty of the predicted value for two days out, lessening the utility of the prediction as an estimate of the day-after-tomorrow's high temperature. This repeats, so that for predictions several days out, our model is useless in predicting what the high temperature actually will be.

This goes on for each step, ever increasing the ignorance and lessening the utility of each successive prediction as an estimate of that day’s high temperature, due to the growing uncertainty.

This is an unfortunate consequence of the iterative nature of such models. The uncertainties accumulate. They are not biases, which are signal offsets. We do not know what the random error will be until we collect the actual data for that step, so we are uncertain of the value to use in that step when predicting.
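Bill's persistence-model example can be sketched numerically with a toy Monte Carlo (the daily-noise SD and starting temperature are assumptions): each iteration feeds an uncertain prediction into the next, so the spread of plausible outcomes widens with the forecast horizon.

```python
import random
import statistics

def persistence_forecast_spread(horizon_days, daily_sd=2.0, n_trials=2000, seed=1):
    """Spread of simulated outcomes when tomorrow = today + an unknown daily change."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_trials):
        temp = 20.0  # today's known high temperature
        for _ in range(horizon_days):
            temp += rng.gauss(0, daily_sd)  # the part we are ignorant of
        finals.append(temp)
    return statistics.pstdev(finals)

# Ignorance accumulates: the 10-day spread is much wider than the 1-day spread,
# even though the central (persistence) prediction never changes.
assert persistence_forecast_spread(10) > persistence_forecast_spread(1)
```

Note that the widening band is a statement about our ignorance of the outcome, not a prediction that the temperature itself will wander that far, which is exactly the bias-versus-uncertainty distinction Bill draws.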

Really a nice explanation, Bill.

It captures the difference between error offset and uncertainty very well.

Also, I like the analogy to the uncertainty in predictions of rain. That’s something everyone can get.

The uncertainties are not errors. They’re expressions of ignorance. Exactly right.

Thanks. 🙂

I think we are at the point where climate modelers ought to sit back and allow some statisticians or at least physical scientists and engineers with statistical error analysis background come in and add to the discussion.

John Q,

Agreed. And Haag is a statistician.

Windchaser:

"Now imagine you added an extra 100 people, basically as a copy of the first 100 people. Person #101 has the same $ as person #1. Person #102 has the same $ as person #2. And so on. A copy of the people and their $."

You are assuming that the individuals are known to have the same amount, and that the value is known. With propagation of error, neither is known, so the variance of the total increases.

Windchaser

“Note that his results come out completely differently if you change the length of the timesteps to, say, months, and still add this uncertainty in at each timestep. (Because then the equation would add in 4W/m2/month, not 4 W/m2/year).”

–

Windchaser this is something that you raise now and that Roy Spencer mentioned last article [wrongly, sorry Roy] and that Nick Stokes repeatedly uses.

–

There are 2 different ways to approach the 4W/m-2.

Remembering that a plain 4 W/m-2 simply means a unit of energy over time even though the word time does not appear in the subscript it is implicit and explicit in the term Watts.

The unit of power (the watt) in SI, the International System of Units. 1 watt = 1 joule per second = 1 newton meter per second = 1 kg m2 s-3.

–

The “watt per square meter” is the SI unit for radiative and other energy fluxes in geophysics. The “solar constant”, the solar irradiance at the mean earth-sun distance is approximately 1370 W m-2. The global and annual average solar irradiance at the top of the earth’s atmosphere is one-fourth of the solar constant or about 343 W m-2.

–

The first step of Nicks is to use either, both quite different definitions, when he is arguing and then swaps them in place of each other.

–

Hence the “solar constant” is repeatedly referred to as a timeless unit by both he and ATTP when he supports Nick’s argument.

Note that the solar constant does have a time dimension to it and an area dimension despite numerous denials by people who should know better. A flux is not just a number without units. Secondly it is not actually a constant, it varies as the distance from the sun.

–

So the timestep is 1 second, always.

–

Pat Frank uses the 4W/m-2. iin his paper in a totally different way. He is discussing the

CMIP5 Model Calibration Error in Global Average Annual Total Cloud Fraction (TCF).

The words Global Average Annual mean just that. CMIP5 , not Frank

He and they are comparing annual, not monthly, not per second factors and giving the calibration error as a percentage of the average annual rate per second.

Very important.

–

Three things spring to mind.

You say,

“his results come out completely differently if you change the length of the timesteps to, say, months, and still add this uncertainty in at each timestep.”

You ignore the fact that when you change the timestep you have to change the amount of energy received in that time step by the same factor, The total amount of energy in 1 month is 1/12th of that in a year = 1/3 of a W/m-2/month.

–

Further your contention [“and still add this uncertainty in at each timestep”] is wrong. In that you now use a compounding rate for different times lengths and assume that the result should be the same .Roy made the same compounding comment and inbuilt error.

If you use a month as your basis you have to assess monthly. You have to add each months energy received together for 12 months, not compound them. The sun does not keep increasing heat each month. The earth is no hotter after 12 individual months in a row than the same year. This is one of the basic ways of avoiding error. Check that the end point agrees with the assumptions that your calculations give.

–

Thirdly that Nick Stokes should be called out each and every time that he purposely swaps terms. This has been quite difficult for years as he tends to parse his arguments, deliberately, into one part which is quite true and one that is ever so subtly wrong.

A subtle reminder is his attacking Pat Frank for using root mean squares.

This is a standard way of assessing the size of an error.

Nick and others still have comments up claiming that since a root mean square is always positive that Frank cannot use the figure as a plus or minus for the standard deviation in uncertainty.

This is despite this being the typical way that uncertainty is quantified: a deviation of plus or minus the root mean square.

Rant finished.

Windchaser is wrong.

The propagated uncertainty does not change with time step because the magnitude of the LWCF error varies with the time step.

I pointed this out to Roy, and I’m pointing it out here. It’s an obvious point.

One of the reviewers asked about it. I showed the following calculation, which satisfied the concern.

Let’s first approach the shorter time interval. Suppose a one-day (86,400 seconds) GCM time step is assumed. The ±4 Wm^–2 calibration error is a root-mean-square annual mean of 27 models across 20 years; call that an rms average over 540 model-years. The model cloud simulation error across one day will be much smaller than the average simulation error across one year, because the change in both simulated and real global average cloud cover will be small over short times.

We can estimate the average per-day calibration error from (±4 Wm^–2)^2 = [sum over 365 days of (e_i)^2], where e_i is the per-day error. Working through this, e_i = ±0.21 Wm^–2. If we put that into the right side of equation 5.2 and set F_0 = 33.30 Wm^–2, then the one-day per-step uncertainty is ±0.087 C. The total uncertainty after 100 years is sqrt[(0.087)^2 * 365 * 100] = ±16.6 C.

Likewise, the estimated 25-year mean model calibration uncertainty is sqrt(16*25) = ±20 Wm^-2. Following from eqn. 5.2, the 25-year per-step uncertainty is ±8.3 C. After 100 years the uncertainty is sqrt[(8.3)^2 * 4] = ±16.6 C.
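The arithmetic above can be checked numerically. The sketch below is only a reconstruction from the numbers quoted in this comment: it assumes the per-step flux error scales as the square root of the step length (from the ±4 W/m^2 annual statistic), and that eqn. 5.2 amounts to multiplying the flux error by a fixed emulator coefficient of about 0.42 (inferred from the quoted values, since 0.42 × 4 ≈ 1.7 and 0.42 × 0.21 ≈ 0.087; the F_0 = 33.30 W/m^2 normalization is folded into that coefficient). Under those assumptions the 100-year total is the same whatever the step:

```python
import math

COEFF = 0.42       # emulator coefficient inferred from the quoted numbers (assumption)
ANNUAL_RMS = 4.0   # +/- W/m^2, the LWCF calibration error statistic
YEARS = 100

def total_uncertainty(years_per_step):
    """Propagate the per-step temperature uncertainty over 100 years.

    The per-step flux error scales as sqrt(step length in years),
    so shorter steps have smaller per-step errors but more of them.
    """
    steps = YEARS / years_per_step
    flux_err = ANNUAL_RMS * math.sqrt(years_per_step)  # W/m^2 per step
    temp_err = COEFF * flux_err                        # C per step (eqn 5.2, simplified)
    return temp_err * math.sqrt(steps)                 # root-sum-square over steps

for step in (1 / 365, 1.0, 25.0):              # daily, annual, 25-year steps
    print(round(total_uncertainty(step), 1))   # each prints 16.8
```

All three step lengths give ±16.8 C here; the ±16.6 C in the comment comes from rounding the daily value to 0.087 C before squaring.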

These are average uncertainties following from the 540 simulation years and the assumption of a linearly uniform error across the average simulation year. Individual models will vary.

Unfortunately, neither the observational resolution nor the model resolution is able to provide a per-day simulation error. However, the 25-year mean is relatively indicative because the time-frame is only modestly extended beyond the 20-year mean uncertainty calculated in Lauer and Hamilton, 2013.

You’re exactly right, angech.

I’ll be adding a long comment at the bottom of the post thread about how uncertainty is propagated and the meaning of root-mean-square, all taken from papers about engineering and model errors.

All the relevant WUWT and Roy Spencer blog posts will get their very own copy.

Those papers fully support the way I did the analysis.

It is obvious that many commenters here think that the “LW cloud forcing” is a forcing. Despite its misleading name, it is not. It forms part of the pre-run net flux balance.

Dr Frank wrote ““LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.”

Dr Spencer replied above:- “While I agree with the first sentence, I thoroughly disagree with the second. Together, they represent a non sequitur. ”

Pat Frank implies in the above statement that the LWCF is a forcing. It is not. In his uncertainty estimation, he further assumes that any and all flux errors in LWCF can be translated into an uncertainty in forcing in his emulator. No, it cannot.

Forcings – such as those used in Dr Franks’s emulator – are exogenously imposed changes to the net TOA flux, and can be thought of essentially as deterministic inputs. The cumulative forcing (which is what Dr Frank uses to predict temperature change in his emulator) is unambiguously applied to a system in net flux balance. The LWCF variable is a different animal. It is one of the multiple components in the net flux balance, and it varies in magnitude over time as other state-variables change, in particular as the temperature field changes.

They have the same dimensions, but they are not similar in their effect.

If I change a controlling parameter to introduce a +4 W/m^2 downward change in LWCF at TOA at the start of the 500 year spin-up period in any AOGCM, the effect on subsequent incremental temperature projections is small, bounded and may, indeed, be negligible. If, on the other hand, I introduce an additional 4 W/m^2 to the forcing series at the start of a run, then it will add typically about 3 deg C to the incremental temperature projection over any extended period.

The reason is that, during the spin-up period, the model will be brought into net flux balance. This is not achieved by “tweaking” or “intervention”. It happens because the governing equations of the AOGCM recognise that heating is controlled by net flux imbalance. If there is a positive/negative imbalance in net TOA flux at the aggregate level then the planet warms/cools until it is brought back into balance by restorative fluxes, most notably Planck. My hypothetical change of +4 W/m^2 in LWCF at the start of the spin-up period (with no other changes to the system) would cause the absolute temperature to rise by about 3 deg C relative to its previous base. Once forcings are introduced for the run (i.e. after this spin-up period), the projected temperature gain will be expressed relative to this revised base and will be affected only by any change in sensitivity arising. It is important to note that even if such a sensitivity change were visible, Dr Frank has no way to mimic any uncertainty propagation via a changing sensitivity. It would correspond to a change in his fixed gradient which relates temperature change to cumulative net flux, but he has no degree of freedom to change this.

None of the above should be interpreted to mean that it is OK to have errors in the internal energy of the system. It is only to emphasise that such errors and particularly systemic errors can not be treated as adjustments or uncertainties in the forcing.

I make it very clear, kribaez, that LWCF is part of the tropospheric thermal energy flux. Forcing is not my invented term. It’s the term in use.

The ±4 W/m^2 model LWCF calibration error is not a forcing. It’s an uncertainty in the CMIP5 simulation of forcing. It says nothing at all about TOA balance, either in the real climate or in a simulated climate.

You wrote, “errors and particularly systemic errors can not be treated as adjustments or uncertainties in the forcing.”

Adjustments, no. Uncertainties, yes.

Uncertainty isn’t error, kribaez. Uncertainty is a knowledge deficit. The ±4W/m^2 LWCF error means one does not know the physically correct intensity of simulated LWCF to better than ±4 W/m^2.

Whatever imbalance is caused by, e.g., a 0.035 W/m^2 annual perturbation from CO2 forcing is lost within that uncertainty.

The model will produce a simulation response to the addition of CO2 forcing, but that simulated response will be physically meaningless.

and Then There’s Physics made an interesting comment on his site

I append it and a comment I made

“The problem is that the intrinsic variability means that the range could represent the range of actually physically plausible pathways, rather than simply representing an uncertainty around the actual pathway. If we have a perfect model, and chaos was not a problem, then maybe we could determine the actual pathway, but we probably can’t. Hence, we shouldn’t necessarily expect the ensemble mean to represent the best estimate of reality.”

Thanks for this refreshing bit of reality.

Also I see.

“James Annan says the following view is implausible, because we can never know the “truth”:

–

This sums up the Pat Frank controversy.

He is pointing out that the way we do things still has lots of potential errors in it.

This means that there is a small chance that the arguments for AGW may be wrong.

Shooting the messenger is not the right response.

Improving the understanding as Paul suggests is the correct way to go.

People should be thanking him for raising the issue and addressing his concerns on their merits. Then doing the work to address the concerns.

–

My worry is that if he is correct, the models have a lot more self-regulation toward TOA balance built into them than they should, which in turn makes them run warm.

This discussion is really appalling and disappointing. It is clear that many if not most commenters as well as Dr. Spencer do not understand the uncertainty term. They don’t understand where it comes from, how it is calculated, or what it means. The uncertainty calculation is a completely separate calculation from the model output calculation. This is basic science. This is freshman engineering (first semester, even). If Pat Frank’s calculated uncertainty is correct then the models are useless for making predictions. End of story. Full stop. Whenever I have seen climatic temperature predictions I have never seen associated error bars or uncertainties. It is clear that climate scientists and modelers are simply not doing this critical work. They’ve never even heard of it apparently.

If you suspect your three year old child has a fever would you use a thermometer with a calibrated uncertainty of +/-5 degrees? If the thermometer indicated 98.6 degrees F (sorry, American here), which is normal, but the child is lethargic and feels hot, do you believe this thermometer? Or would you find a thermometer with a calibrated uncertainty of +/- 0.1 degrees F?

My example is simply the output of one single instrument. In a complicated model with many terms, each with its own uncertainty, the collective and combined uncertainty in the final output must be accounted for. You simply can’t produce a clean simple output of +3 degrees C from a model full of terms with their own unique uncertainties. The final output must also be uncertain, given that the inputs were uncertain. All Dr Frank has done is to calculate a final uncertainty for the models.

+1

Especially the “This discussion is really appalling and disappointing. It is clear that many if not most commenters as well as Dr. Spencer do not understand the uncertainty term.”

I apparently had really good teachers in what we called back in the 60s “accelerated” science classes. We were not allowed to submit work without a graphical representation and discussion of error propagation.

No “A”s were given without correct error bars.

You cannot with a straight face call yourself a scientist if you don’t understand and use these constraints on what you can claim to know.

I personally think WUWT has been too complacent about this, and as a community we should insist on discussions of error propagation in all posts involving calculations. Let’s start setting an example in addition to offering constructive criticisms.

I am just a lay person with a good basic set of foundations. For my part, I yesterday ordered several books on the subject from Amazon.

Dave Day

Thank-you, Mark.

You’re right. I just focused on end-state uncertainty produced by a lower limit of error.

Your point about parameter uncertainties also appears in the paper. They’re never, ever propagated through a projection.

Instead, what we get are “perturbed physics” tests, in which parameters are varied across their uncertainties in many model runs. Then the variability of the runs around their mean is presented as the projection uncertainty.

I’ve had climate modelers call that exercise, propagation of error. Really incredible.

I’ll be posting a long comment below, presenting the method of error propagation and the concept of uncertainty from a variety of papers in engineering journals. I suspect you know them all.

You may not know of Vasquez and Whiting, which looks at uncertainty estimation in the context of non-linear models. I found that paper particularly valuable.

Agreed, Dr. Frank. Your pointing this out was very important to me, since I had also noticed the same lack that you, and others, noticed. I think I’ve even commented to that effect years ago.

For the benefit of all, I’ve put together an extensive post that provides quotes, citations, and URLs for a variety of papers — mostly from engineering journals, but I do encourage everyone to closely examine Vasquez and Whiting — that discuss error analysis, the meaning of uncertainty, uncertainty analysis, and the mathematics of uncertainty propagation.

These papers utterly support the error analysis in “Propagation of Error and the Reliability of Global Air Temperature Projections.”

Summarizing: Uncertainty is a measure of ignorance. It is derived from calibration experiments.

Multiple uncertainties propagate as the root sum square. The root-sum-square has positive and negative roots (+/-), never anything else, unless one wants to consider the uncertainty’s absolute value.
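As a concrete illustration of the root-sum-square rule (a sketch only; the component values are made up):

```python
import math

def root_sum_square(uncertainties):
    """Combine independent +/- uncertainties into one +/- interval.

    The combined magnitude is always positive, but the interval it
    defines is two-sided: an ignorance width, not an offset.
    """
    magnitude = math.sqrt(sum(u * u for u in uncertainties))
    return (-magnitude, +magnitude)

# Illustrative values only: two independent component uncertainties.
lo, hi = root_sum_square([3.0, 4.0])
print(lo, hi)   # -5.0 5.0
```

Returning the pair makes the (+/-) explicit: sqrt(3^2 + 4^2) = 5, read as an interval of ±5 about the result, not as +5 added to it.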

Uncertainty is an ignorance width. It is not an energy. It does not affect energy balance. It has no influence on TOA energy or any other magnitude in a simulation, or any part of a simulation, period.

Uncertainty does not imply that models should vary from run to run, Nor does it imply inter-model variation. Nor does it necessitate lack of TOA balance in a climate model.

For those who are scientists and who insist that uncertainty is an energy and influences model behavior (none of you will be engineers), or that a (+/-)uncertainty is a constant offset, I wish you a lot of good luck because you’ll not get anywhere.

For the deep-thinking numerical modelers who think rmse = constant offset or is a correlation: you’re wrong.

The literature follows:

Moffat RJ. Contributions to the Theory of Single-Sample Uncertainty Analysis. Journal of Fluids Engineering. 1982;104(2):250-8.

“

Uncertainty Analysis is the prediction of the uncertainty interval which should be associated with an experimental result, based on observations of the scatter in the raw data used in calculating the result.

Real processes are affected by more variables than the experimenters wish to acknowledge. A general representation is given in equation (1), which shows a result, R, as a function of a long list of real variables. Some of these are under the direct control of the experimenter, some are under indirect control, some are observed but not controlled, and some are not even observed.

R = R(x_1, x_2, x_3, x_4, x_5, x_6, . . . , x_N)   (1)

It should be apparent by now that the uncertainty in a measurement has no single value which is appropriate for all uses. The uncertainty in a measured result can take on many different values, depending on what terms are included. Each different value corresponds to a different replication level, and each would be appropriate for describing the uncertainty associated with some particular measurement sequence.

The Basic Mathematical Forms

The uncertainty estimates, dx_i or dx_i/x_i in this presentation, are based not upon the present single-sample data set, but upon a previous series of observations (perhaps as many as 30 independent readings) … In a wide-ranging experiment, these uncertainties must be examined over the whole range, to guard against singular behavior at some points.

Absolute Uncertainty: x_i = (x_i)_avg (+/-) dx_i

Relative Uncertainty: x_i = (x_i)_avg (+/-) dx_i/x_i

Uncertainty intervals throughout are calculated as (+/-)sqrt[sum over (error)^2].

The uncertainty analysis allows the researcher to anticipate the scatter in the experiment, at different replication levels, based on present understanding of the system.

The calculated value dR_0 represents the minimum uncertainty in R which could be obtained. If the process were entirely steady, the results of repeated trials would lie within (+/-)dR_0 of their mean …”

Nth Order Uncertainty

The calculated value of dR_N, the Nth order uncertainty, estimates the scatter in R which could be expected with the apparatus at hand if, for each observation, every instrument were exchanged for another unit of the same type. This estimates the effect upon R of the (unknown) calibration of each instrument, in addition to the first-order component. The Nth order calculations allow studies from one experiment to be compared with those from another ostensibly similar one, or with “true” values.”

Here replace “instrument” with ‘climate model.’ The relevance is immediately obvious. An Nth order GCM calibration experiment averages the expected uncertainty from N models and allows comparison of the results of one model run with another, in the sense that the reliability of their predictions can be evaluated against the general dR_N.

Continuing: “

The Nth order uncertainty calculation must be used wherever the absolute accuracy of the experiment is to be discussed. First order will suffice to describe scatter on repeated trials, and will help in developing an experiment, but Nth order must be invoked whenever one experiment is to be compared with another, with computation, analysis, or with the “truth.”

Nth order uncertainty: “*Includes instrument calibration uncertainty, as well as unsteadiness and interpolation.

*Useful for reporting results and assessing the significance of differences between results from different experiment and between computation and experiment.

The basic combinatorial equation is the Root-Sum-Square:

dR = sqrt[sum over ((dR/dx_i)*dx_i)^2]”

https://doi.org/10.1115/1.3241818
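Moffat's combinatorial equation is straightforward to apply numerically. A sketch (the example function R and its input uncertainties are illustrative, not from the paper; the sensitivities dR/dx_i are estimated by central differences):

```python
import math

def propagate(R, x, dx, h=1e-6):
    """Moffat's root-sum-square: dR = sqrt(sum((dR/dx_i * dx_i)^2)).

    Each sensitivity dR/dx_i is estimated by a central finite difference.
    """
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dRdxi = (R(xp) - R(xm)) / (2 * h)   # numerical sensitivity
        total += (dRdxi * dx[i]) ** 2
    return math.sqrt(total)

# Illustrative: R = x1 * x2, with x = (3, 4) and uncertainties (0.1, 0.2).
# Analytically dR = sqrt((4*0.1)^2 + (3*0.2)^2) = sqrt(0.52) ~ 0.721
R = lambda x: x[0] * x[1]
print(round(propagate(R, [3.0, 4.0], [0.1, 0.2]), 3))   # 0.721
```

Each squared term is one variable's contribution to the overall uncertainty, exactly as the equation above states.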

Moffat RJ. Describing the uncertainties in experimental results. Experimental Thermal and Fluid Science. 1988;1(1):3-17.

“

The error in a measurement is usually defined as the difference between its true value and the measured value. … The term “uncertainty” is used to refer to “a possible value that an error may have.” … The term “uncertainty analysis” refers to the process of estimating how great an effect the uncertainties in the individual measurements have on the calculated result.

THE BASIC MATHEMATICS

This section introduces the root-sum-square (RSS) combination (my bold), the basic form used for combining uncertainty contributions in both single-sample and multiple-sample analyses. In this section, the term dX_i refers to the uncertainty in X_i in a general and nonspecific way: whatever is being dealt with at the moment (for example, fixed errors, random errors, or uncertainties).

Describing One Variable

Consider a variable X_i, which has a known uncertainty dX_i. The form for representing this variable and its uncertainty is

X=X_i(measured) (+/-)dX_i (20:1)

This statement should be interpreted to mean the following:

* The best estimate of X, is X_i (measured)

* There is an uncertainty in X_i that may be as large as (+/-)dX_i

* The odds are 20 to 1 against the uncertainty of X_i being larger than (+/-)dX_i.

The value of dX_i represents 2-sigma for a single-sample analysis, where sigma is the standard deviation of the population of possible measurements from which the single sample X_i was taken.

The uncertainty (+/-)dX_i Moffat described exactly represents the (+/-)4 W/m^2 LWCF calibration error statistic derived from the combined individual model errors in the test simulations of 27 CMIP5 climate models.

For multiple-sample experiments, dX_i can have three meanings. It may represent tS_(N)/sqrt(N) for random error components, where S_(N) is the standard deviation of the set of N observations used to calculate the mean value (X_i)_bar and t is the Student’s t-statistic appropriate for the number of samples N and the confidence level desired. It may represent the bias limit for fixed errors (this interpretation implicitly requires that the bias limit be estimated at 20:1 odds). Finally, dX_i may represent U_95, the overall uncertainty in X_i.

From the “basic mathematics” section above, the overall uncertainty U = root-sum-square = sqrt[sum over ((+/-)dX_i)^2] = the root-sum-square of errors (rmse). That is, U = sqrt[sum over ((+/-)dX_i)^2] = (+/-)rmse.

The result R of the experiment is assumed to be calculated from a set of measurements using a data interpretation program (by hand or by computer) represented by

R = R(X_1, X_2, X_3, …, X_N)

The objective is to express the uncertainty in the calculated result at the same odds as were used in estimating the uncertainties in the measurements.

The effect of the uncertainty in a single measurement on the calculated result, if only that one measurement were in error would be

dR_X_i = (dR/dX_i)*dX_i

When several independent variables are used in the function R, the individual terms are combined by a root-sum-square method.

dR = sqrt[sum over ((dR/dX_i)*dX_i)^2]

This is the basic equation of uncertainty analysis. Each term represents the contribution made by the uncertainty in one variable, dX_i, to the overall uncertainty in the result, dR.

http://www.sciencedirect.com/science/article/pii/089417778890043X

Vasquez VR, Whiting WB. Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods. Risk Analysis. 2006;25(6):1669-81.

[S]ystematic errors are associated with calibration bias in the methods and equipment used to obtain the properties. Experimentalists have paid significant attention to the effect of random errors on uncertainty propagation in chemical and physical property estimation. However, even though the concept of systematic error is clear, there is a surprising paucity of methodologies to deal with the propagation analysis of systematic errors. The effect of the latter can be more significant than usually expected.

Usually, it is assumed that the scientist has reduced the systematic error to a minimum, but there are always irreducible residual systematic errors. On the other hand, there is a psychological perception that reporting estimates of systematic errors decreases the quality and credibility of the experimental measurements, which explains why bias error estimates are hardly ever found in literature data sources.

“Of particular interest are the effects of possible calibration errors in experimental measurements. The results are analyzed through the use of cumulative probability distributions (cdf) for the output variables of the model.”

“A good general definition of systematic uncertainty is the difference between the observed mean and the true value.”

“Also, when dealing with systematic errors, we found from experimental evidence that in most of the cases it is not practical to define constant bias backgrounds. As noted by Vasquez and Whiting (1998) in the analysis of thermodynamic data, the systematic errors detected are not constant and tend to be a function of the magnitude of the variables measured.”

“Additionally, random errors can cause other types of bias effects on output variables of computer models. For example, Faber et al. (1995a, 1995b) pointed out that random errors produce skewed distributions of estimated quantities in nonlinear models. Only for linear transformation of the data will the random errors cancel out.”

Although the mean of the cdf for the random errors is a good estimate for the unknown true value of the output variable from the probabilistic standpoint, this is not the case for the cdf obtained for the systematic effects, where any value on that distribution can be the unknown true. The knowledge of the cdf width in the case of systematic errors becomes very important for decision making (even more so than for the case of random error effects) because of the difficulty in estimating which is the unknown true output value. (emphasis in original)

“It is important to note that when dealing with nonlinear models, equations such as Equation (2) will not estimate appropriately the effect of combined errors because of the nonlinear transformations performed by the model.”

Equation (2) is the standard uncertainty propagation sqrt[sum over (±sys error statistic)^2].

In principle, under well-designed experiments, with appropriate measurement techniques, one can expect that the mean reported for a given experimental condition corresponds truly to the physical mean of such condition, but unfortunately this is not the case under the presence of unaccounted systematic errors.

When several sources of systematic errors are identified, beta is suggested to be calculated as a mean of bias limits or additive correction factors as follows:

beta ~ sqrt[sum over (theta_S_i)^2], where i indexes the sources of bias errors and theta_S_i is the bias range within error source i. Similarly, the same approach is used to define a total random error based on individual standard deviation estimates,

e_k = sqrt[sum over(sigma_R_i)^2]

A similar approach for including both random and bias errors in one term is presented by Deitrich (1991), with minor variations, from a conceptual standpoint, from the one presented by ANSI/ASME (1998).

http://dx.doi.org/10.1111/j.1539-6924.2005.00704.x

Kline SJ. The Purposes of Uncertainty Analysis. Journal of Fluids Engineering. 1985;107(2):153-60.

The Concept of Uncertainty

Since no measurement is perfectly accurate, means for describing inaccuracies are needed. It is now generally agreed that the appropriate concept for expressing inaccuracies is an “uncertainty” and that the value should be provided by an “uncertainty analysis.”

An uncertainty is not the same as an error. An error in measurement is the difference between the true value and the recorded value; an error is a fixed number and cannot be a statistical variable. An uncertainty is a possible value that the error might take on in a given measurement. Since the uncertainty can take on various values over a range, it is inherently a statistical variable.

The term “calibration experiment” is used in this paper to denote an experiment which: (i) calibrates an instrument or a thermophysical property against established standards; (ii) measures the desired output directly as a measurand so that propagation of uncertainty is unnecessary.

The information transmitted from calibration experiments into a complete engineering experiment on engineering systems or a record experiment on engineering research needs to be in a form that can be used in appropriate propagation processes (my bold). … Uncertainty analysis is the sine qua non for record experiments and for systematic reduction of errors in experimental work.

Uncertainty analysis is … an additional powerful cross-check and procedure for ensuring that requisite accuracy is actually obtained with minimum cost and time.

Propagation of Uncertainties Into Results

In calibration experiments, one measures the desired result directly. No problem of propagation of uncertainty then arises; we have the desired results in hand once we complete measurements. In nearly all other experiments, it is necessary to compute the uncertainty in the results from the estimates of uncertainty in the measurands. This computation process is called “propagation of uncertainty.”

Let R be a result computed from n measurands x_1, …, x_n, and let W denote an uncertainty, with the subscript indicating the variable. Then, in dimensional form, we obtain: W_R = sqrt[sum over ((dR/dx_i)*W_x_i)^2].”

https://doi.org/10.1115/1.3242449

Henrion M, Fischhoff B. Assessing uncertainty in physical constants. American Journal of Physics. 1986;54(9):791-8.

“Error” is the actual difference between a measurement and the value of the quantity it is intended to measure, and is generally unknown at the time of measurement. “Uncertainty” is a scientist’s assessment of the probable magnitude of that error.

https://aapt.scitation.org/doi/abs/10.1119/1.14447

Bill Haag’s example is very clever, and rings true.

However, let’s think about the same model a little differently.

Let’s say our dataset of thousands of days shows the hottest ever day was 34 degrees C and the lowest 5 degrees C. The mean is 20 degrees C, with a standard deviation of +/- 6 degrees C.

Let’s say today is 20 degrees C. Tomorrow is likely closer to 20 than 34. The standard deviation tells us that about two times in three, tomorrow’s temperature will fall between 14 and 26 degrees.

But is this the correct statistic to predict tomorrow’s temperature, given today’s?

Actually, that statistic is a little different. A better statistic would be the uncertainty of the change in temperature from one day to the next.

So let’s say we go back to the dataset and find that 19 out of 20 days are likely to be within +/- 5 degrees C of the day before.

Is this a more helpful statistic? When today’s temperature is in the middle of the range, +/- 5 degrees C sounds fair and reasonable. But what if today’s temperature were 33 degrees C: does +/- 5 degrees C still sound fair and reasonable, given that the temperature has never exceeded 34 degrees C in the entire dataset?

It’s clear that the true uncertainty distribution for what happens after very hot or very cold days is that the next day is more likely to be closer to the average than to be still hotter or colder.

To properly calculate the uncertainty bounds for a question like this, one has to get the uncertainty bounds for each starting point, and then we find that, like the beams of a torch, they all point ahead, but towards the middle. And overall, it is never possible for the uncertainty to exceed that of the entire dataset, no matter what the previous day’s temperature. The uncertainty in prediction does not get bigger if we look 4 days or 40 or 400 days ahead. The outer bounds of the true uncertainty range are restricted by the range of temperatures that are actually possible. Extrapolating the single day-to-day uncertainty makes the bounds grow orders of magnitude faster than those physical limits allow.

To flesh this out, let’s try compounding uncertainties in the light of this dataset. Let’s say that we know our uncertainty is +/- 5 degrees, on average, starting at 20 degrees C. Is the uncertainty range for a prediction 2 days ahead +/- 10 degrees? If we went out 10 days, does the uncertainty grow to +/- 50 degrees Centigrade? Plainly not.

We can’t just keep adding uncertainties like that, because should a day actually get 5 degrees hotter two days in a row, it gets close to the record maximum for our location, and it has great difficulty getting much hotter than that, whereas it is very much more likely to get cooler.

Statistically, random unconstrained uncertainties grow, as Dr. Frank has pointed out, with the square root of the sample count. After four days, our uncertainty would double to 10 degrees C, and after 16 days, double again to 20 degrees C. After 64 days, the extrapolated uncertainty range becomes an impossible +/- 40 degrees C.

Since such a range is truly impossible, there must be something wrong with our uncertainty calculation… and there is.

The mistake was to extrapolate a daily uncertainty, ad infinitum, by the square-root-of-N method. Exactly the method used by Dr. Frank. It is wrong in this setting, and it was wrong in his.

The simple fact is that the uncertainty range for any given day cannot exceed +/- 6 degrees C, the standard deviation of the dataset, no matter how far out we push our projection. It wouldn’t matter much what the temperature of the start day was; future uncertainty doesn’t get any greater than that.
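This saturation behaviour can be illustrated with a small simulation, stdlib only. It compares an unconstrained accumulation of day-to-day errors, whose spread grows as the square root of the number of days, with a mean-reverting process whose spread is capped near the dataset's ±6 C standard deviation. The ±5 C step and ±6 C climate spread are the numbers from this thought experiment; the mean-reverting (AR(1)) form is an assumption made purely for illustration:

```python
import math
import random
import statistics

random.seed(1)

STEP_SD = 5.0      # day-to-day spread from the thought experiment
CLIMATE_SD = 6.0   # standard deviation of the whole dataset
N_PATHS, N_DAYS = 2000, 64

# Mean-reversion strength chosen so the stationary spread equals CLIMATE_SD:
# stationary variance = STEP_SD^2 / (1 - phi^2)
PHI = math.sqrt(1.0 - (STEP_SD / CLIMATE_SD) ** 2)

walk_finals, rev_finals = [], []
for _ in range(N_PATHS):
    walk = rev = 0.0   # temperature anomalies about the 20 C mean
    for _ in range(N_DAYS):
        walk += random.gauss(0.0, STEP_SD)            # unconstrained accumulation
        rev = PHI * rev + random.gauss(0.0, STEP_SD)  # pulled back toward the mean
    walk_finals.append(walk)
    rev_finals.append(rev)

print(round(statistics.stdev(walk_finals)))  # ~40 = 5*sqrt(64): grows as sqrt(N)
print(round(statistics.stdev(rev_finals)))   # ~6: saturates near CLIMATE_SD
```

The unconstrained walk reaches the impossible ±40 C spread after 64 days, while the constrained process never exceeds the climate of the dataset.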

An analysis of this kind shows us that measures of uncertainty cannot be compounded infinitely, at least in systems of limited absolute range.

Dr. Frank’s paper is based entirely on projecting yearly uncertainties out into the future. Unfortunately he is misusing the statistics he professes to understand so well.

That the various models have not spread and distributed themselves more widely, despite having been run for some time, indicates that the predicted uncertainties are excessively wide. Of course, in time, we will find the answer; Dr. Frank and I would agree that the test of an uncertainty estimate is resolved by how well the dataset ends up matching the uncertainty bounds. If the models stay well inside those boundaries and do not come close to their borders, then we will know that those uncertainty bounds were incorrect, and vice versa.

I am a practising researcher and I do understand the underlying statistical methods.

Chris,

Thank you for the kind words in the first sentence.

However you are not “thinking about the same model a little differently”, you are changing the model. So everything after is not relevant to my points. Perhaps to other points, but not to my example of the projection of uncertainty, which was my point.

Once again, the model was to use the prior day’s high temperature to predict each day’s high temperature. The total range of the data over however many days of data you have is irrelevant for this model. From the historical data, a set of residuals is calculated, one for each observed-minus-predicted pair. These residuals are the ‘error’ in each historical prediction. The residuals are then used to calculate a historical model-goodness statistic (unspecified here to avoid the disagreements posted elsewhere on the specifics of such calculations).

This model is then used going forward. See the earlier post for details, but it is the uncertainty, not the error, that is propagated. The model estimate for the second day out from today is forced to use the uncertain estimated value from the model for the first day out, while contributing its own uncertainty to its prediction. And so it goes, forward.
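
The propagation step described above can be sketched in a few lines (a minimal sketch, assuming a single fixed per-step residual standard deviation, which the description leaves unspecified):

```python
import math

def propagated_sigma(step_sigma, n_steps):
    """Compound a per-step prediction uncertainty forward in time.

    Each day's forecast inherits the uncertainty of the estimated
    value it was fed, and adds its own, in quadrature (root-sum-square).
    """
    total = 0.0
    for _ in range(n_steps):
        total = math.sqrt(total ** 2 + step_sigma ** 2)
    return total

# The iteration is equivalent to the closed form step_sigma * sqrt(n_steps).
```

Whether this quadrature rule is the right one for the system at hand is, of course, the very point under dispute in this thread.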

Bill

Chris,

You also are confusing uncertainty with error. The uncertainty is a quantity that describes the ignorance of a predicted value. Like the 40% chance of rain, it is not a description of physical reality, or physical future. It doesn’t rain 40% of the time everywhere, nor does it rain all the time in 40% of the places. But the uncertainty of rainfall is communicated without our believing that one of the two physical realities is being predicted.

Bill

While such iterative propagation of purely PROBABILISTIC uncertainty may be applicable to some problems and their models, the signal analysis conception of “prediction” necessarily entails unequivocal specification of a particular value of the variable of interest. That is the nature of predictions provided by Wiener or Kalman filters that are used in sophisticated modeling of geophysical time-series. In that context, model error is far more telling than some a priori academic specification of “uncertainty.”

GCMs, on the other hand, are a mish-mash of attempts at DETERMINISTIC physical modeling, with diverse simplifications and/or parametrizations substituting for genuine physics in treating key factors, such as cloud effects. What has not been adequately established here is how the different GCMs actually treat the posited ±4 W/m^2 uncertainty in the “cloud forcing.” Is that “forcing” simply held constant, as is relative humidity in most GCMs, or is it computed recursively at each time-step, and if so, how? Until this basic question is resolved, the applicability of the simple propagation model given here to the actual problem at hand will remain itself uncertain.

“simply held constant, as is relative humidity in most GCMs”

Relative humidity is not held constant in any GCM. They couldn’t do it even if they wanted to. Water is conserved.

I had in mind that RH remains fixed in GCMs on a global scale. As Isaac Held points out (https://www.gfdl.noaa.gov/blog_held/31-relative-humidity-in-gcms/):

In the first (albeit rather idealized) GCM simulation of the response of climate to an increase in CO2…[Manabe and Wetherald] found, in 1975, that water vapor did increase throughout the model troposphere at roughly the rate needed to maintain fixed RH.

You’re correct, however, that RH simulations vary spatio-temporally. In reference to his model animations, Held states:

But no one has answered my essential question of how the “cloud forcing” at issue here is handled by various models. I suspect that it’s treated differently by different models, with highly non-uniform consequences upon model uncertainty.

Wrong. The model outputs have nothing to do with the uncertainty Pat Frank has calculated. Uncertainty has a very specific meaning here. It is well-defined and it is basic science. Whether it is a climate model, some other kind of model, or a calculated temperature for a turbine engine, if the inputs have uncertainties then the output must also. And the inputs always have uncertainties. And therefore the output must also have an uncertainty, yet climate modelers neither calculate it nor state it.

The calculation of the uncertainty for the final output has nothing to do with the output of the model itself. The uncertainty calculation is a separate calculation. In my simple example above, a calibrated thermometer, you can take the temperature of 50, 100, 1000, or 10,000 people and the uncertainty will always be +/- 5 degrees for that thermometer. Period. The uncertainty of this thermometer does not improve the more you use it. You could then analyze this set of 10,000 temperature data points and calculate the standard deviation of this set if you wanted to. But that would be irrelevant to the uncertainty of the thermometer which would remain +/- 5 degrees. The only way you can reduce this device-specific uncertainty is to get a new thermometer with a tighter uncertainty. But it will have an uncertainty, too.
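
The thermometer point can be illustrated with a toy simulation (the bias and scatter values here are hypothetical, chosen only to sit inside the +/- 5 degree calibration band described above):

```python
import random
import statistics

random.seed(0)
true_temp = 37.0
bias = 3.2       # unknown fixed calibration offset, somewhere inside the +/-5 band
noise_sd = 0.3   # random reading-to-reading scatter

readings = [true_temp + bias + random.gauss(0, noise_sd) for _ in range(10_000)]

# Averaging shrinks the *random* scatter of the mean...
sem = statistics.stdev(readings) / len(readings) ** 0.5   # ~0.003

# ...but the mean is still ~3.2 degrees off, and nothing in the
# data itself reveals that. The +/-5 calibration uncertainty of
# the instrument is untouched by the sample size.
offset = statistics.mean(readings) - true_temp
```

Ten thousand readings buy a beautifully small standard error of the mean, yet the device-specific uncertainty is exactly what it was after one reading.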

If this thermometer is used in an experiment, along with a scale and a ruler, each device has an associated uncertainty. If we get readings from our thermometer, scale, and ruler and these readings are used in a calculation, the calculation’s final answer must also have an uncertainty because the inputs were uncertain. This final uncertainty is derived from the individual uncertainties of the elements of the end calculation and not the end calculation itself. It is a separate analysis and it only considers the individual uncertainties of the various elements that went into the calculation. It is agnostic to the output of our calculation. It is separate and distinct from it.
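
As a sketch of how such a combined uncertainty is computed, take a hypothetical quantity formed from the three readings (the values and instrument uncertainties below are invented for illustration, and standard propagation for a pure product/quotient is assumed):

```python
import math

# Hypothetical readings and instrument uncertainties
T, u_T = 300.0, 5.0      # thermometer: kelvin, +/-5
m, u_m = 2.00, 0.01      # scale: kg, +/-0.01
L, u_L = 0.500, 0.001    # ruler: m, +/-0.001

q = m * T / L            # some derived quantity of interest

# For a product/quotient, relative uncertainties combine in quadrature.
# Note: only the input uncertainties enter; the value of q itself
# plays no role in the relative uncertainty.
rel_u = math.sqrt((u_T / T) ** 2 + (u_m / m) ** 2 + (u_L / L) ** 2)
u_q = q * rel_u
```

The uncertainty analysis runs alongside the calculation, fed only by the instrument uncertainties, just as described above.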

If I model the internal temperature of a new clean-sheet turbine engine design and at one point it reaches 700 degrees F, that’s useful information, if the model is correct. But what if the uncertainty of our calculated (modeled) temperature is +/- 3000 degrees F? 700 degrees F +/- 3000 degrees F could easily put us in a place where things start to melt. This is not a useful model.

And as Pat Frank has demonstrated, neither are these climate models, based solely on the output uncertainty. And while there may be other reasons to reject this crop of climate models, as has Roy Spencer, there is really no reason to go further than the uncertainty analysis that Frank has performed. These models are like a hurricane cone spanning 180 degrees. That only tells us it’s going somewhere, which we already knew.

Your discussion is wrong Chris Thompson, because you’re assigning physical meaning to an uncertainty.

Your mistake becomes very clear when you write that, “could the uncertainties add up to +/- 50 degrees Centigrade? Plainly not. We can’t just keep adding uncertainties like that, because should a day actually get hot two days in a row, it has great difficulty getting a lot hotter, and becomes more likely to get cooler.”

That (+/-)50 C says nothing about what the temperature could actually be. It’s an estimate of what you actually know about the temperature 10 days hence. Namely, nothing.

That’s all it means. It’s a statement of your ignorance, not of temperature likelihood.

You’re a practicing researcher, Chris Thompson, but you do not understand the meaning of the statistics you use.

This illustration might clarify the meaning of (+/-)4 W/m^2 of uncertainty in annual average LWCF.

The question to be addressed is what accuracy is necessary in simulated cloud fraction to resolve the annual impact of CO2 forcing?

We know from Lauer and Hamilton that the average CMIP5 (+/-)12.1% annual cloud fraction (CF) error produces an annual average (+/-)4 W/m^2 error in long wave cloud forcing (LWCF).

We also know that the annual average increase in CO2 forcing is about 0.035 W/m^2.

Assuming a linear relationship between cloud fraction error and LWCF error, the (+/-)12.1% CF error is proportionately responsible for (+/-)4 W/m^2 annual average LWCF error.

Then one can estimate the level of resolution necessary to reveal the annual average cloud fraction response to CO2 forcing as, (0.035 W/m^2/(+/-)4 W/m^2)*(+/-)12.1% cloud fraction = 0.11% change in cloud fraction.

This indicates that a climate model needs to be able to accurately simulate a 0.11% feedback response in cloud fraction to resolve the annual impact of CO2 emissions on the climate.

That is, the cloud feedback to a 0.035 W/m^2 annual CO2 forcing needs to be known, and able to be simulated, to a resolution of 0.11% in CF in order to know how clouds respond to annual CO2 forcing.

Alternatively, we know the total tropospheric cloud feedback effect is about -25 W/m^2. This is the cumulative influence of 67% global cloud fraction.

The annual tropospheric CO2 forcing is, again, about 0.035 W/m^2. The CF equivalent that produces this feedback energy flux is again linearly estimated as (0.035 W/m^2/25 W/m^2)*67% = 0.094%.

Assuming the linear relations are reasonable, both methods indicate that the model resolution needed to accurately simulate the annual cloud feedback response of the climate, to an annual 0.035 W/m^2 of CO2 forcing, is about 0.1% CF.
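
The two linear estimates above amount to a few lines of arithmetic (assuming, as stated, that the linear scalings are reasonable):

```python
co2_forcing = 0.035      # W/m^2, annual average increase in CO2 forcing
lwcf_error = 4.0         # W/m^2, CMIP5 annual average LWCF calibration error
cf_error = 12.1          # %, corresponding annual cloud fraction error

# Method 1: scale the CF error by the forcing-to-error ratio
res1 = co2_forcing / lwcf_error * cf_error       # ~0.11 % cloud fraction

cloud_feedback = 25.0    # W/m^2, magnitude of total tropospheric cloud feedback
cloud_fraction = 67.0    # %, global cloud fraction producing that feedback
# Method 2: scale global CF by the forcing-to-feedback ratio
res2 = co2_forcing / cloud_feedback * cloud_fraction   # ~0.094 % cloud fraction
```

Both routes land near a 0.1% cloud-fraction resolution requirement, which is the figure used in the argument that follows.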

To achieve that level of resolution, the model must accurately simulate cloud type, cloud distribution and cloud height, as well as precipitation and tropical thunderstorms.

This analysis illustrates the meaning of the (+/-)4 W/m^2 LWCF error. That error indicates the overall level of ignorance concerning cloud response and feedback.

The CF ignorance is such that tropospheric thermal energy flux is never known to better than (+/-)4 W/m^2. This is true whether forcing from CO2 emissions is present or not.

GCMs cannot simulate cloud response to 0.1% accuracy. It is not possible to simulate how clouds will respond to CO2 forcing.

It is therefore not possible to simulate the effect of CO2 emissions, if any, on air temperature.

As the model steps through the projection, our knowledge of the consequent global CF steadily diminishes because a GCM cannot simulate the global cloud response to CO2 forcing, and thus cloud feedback, at all for any step.

It is true in every step of a simulation. And it means that projection uncertainty compounds because every erroneous intermediate climate state is subjected to further simulation error.

This is why the uncertainty in projected air temperature increases so dramatically. The model is step-by-step walking away from initial value knowledge further and further into ignorance.

On an annual average basis, the uncertainty in CF feedback is (+/-)144 times larger than the perturbation to be resolved.

The CF response is so poorly known, that even the first simulation step enters terra incognita.

Pat Frank, you say, “That (+/-)50 C says nothing about what the temperature could actually be. It’s an estimate of what you actually know about the temperature 10 days hence. Namely, nothing.”

All uncertainty estimates, especially unknown future uncertainty estimates, are based on certain assumptions. You are estimating the uncertainty of models of the earth’s future temperature. If your estimate of their uncertainty 5 years from now is so much wider than any actual possible range into which the earth itself could fall, then your method of estimating that uncertainty is incorrect. If the models were themselves to predict changes outside of that range, they would plainly be incorrect also, and for the same reason.

You said yourself that uncertainty is resolved as time passes. The test is whether or not actual events fit well in the middle of the uncertainty range, or deviate widely from it. A good, accurate uncertainty range is one in which, if the experiment is repeated multiple times, the outcome falls within the range of uncertainty 19 times out of 20. If repeated measurements either fall widely outside the predicted uncertainty range, or, the opposite, if they fall in a tight narrow band nowhere near the predicted uncertainty range, then that uncertainty range was wrong.
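
That 19-out-of-20 test can be sketched numerically (a toy check on invented Gaussian errors, not any particular climate model):

```python
import random

random.seed(2)

def coverage(true_sigma, claimed_sigma, trials=20_000):
    """Fraction of outcomes landing inside a claimed 95% interval.

    Outcomes deviate from the prediction with true spread true_sigma;
    the forecaster claims a 95% interval of +/- 1.96 * claimed_sigma.
    """
    hits = sum(
        abs(random.gauss(0.0, true_sigma)) <= 1.96 * claimed_sigma
        for _ in range(trials)
    )
    return hits / trials

# Honest bounds cover ~95% of outcomes. Bounds five times too wide
# cover essentially everything; bounds five times too narrow cover
# far less. Either mismatch shows the claimed uncertainty was wrong.
```

In this framing, outcomes that huddle deep inside very wide claimed bounds are themselves evidence against those bounds.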

Your future uncertainty estimates were about the uncertainty of the temperature of the earth as predicted by the models. You argue that there were errors in the models such that the future uncertainty range would widen, at a square-root-of-time rate, year after year, without limit. However, in reality this is not possible. You take this to indicate that the models are wrong. An equally valid interpretation is that the way you calculated the predicted future uncertainty range is wrong. The latter view is mine and that of many scientists around you.

The uncertainty range in the ‘let’s predict the temperature of some day in the future based on today’ example reaches a maximum value, since there is a time beyond which the uncertainty does not, and cannot, get any greater, no matter how many days we predict forward. I would be absolutely correct in saying the 95% confidence limit of any prediction going forward even 100 days sits within the standard deviation of all temperatures ever recorded, or close thereto. Whether I predict 20 or 50 or 100 or even 500 days ahead, the actual uncertainty of the prediction cannot get any bigger than that possible range. This is a simple example of bounded uncertainty.

I’m sure there are unbounded examples, where uncertainty limits can grow as the square root of time, without limit, forever. But this is a good example of one where the future uncertainty range is limited by an external bound and cannot increase infinitely over time. There are physical constraints that mean it cannot.

I do agree with you that the models you criticise become meaningless once their future uncertainty range, projected forward, reaches the maximum possible increase in temperature of the earth. Predicting absolutely impossible future events indicates that the uncertainty range is inappropriate. However, if the models stay, as time passes, well within your theoretical predicted uncertainty range for the models, then your uncertainty range estimated for them was false.

You seem to want to have your cake and eat it. You suggest that the models are wrong *in predicting the earth’s temperature* because they could be out by an extraordinarily wide uncertainty range in only a few years, but at the same time you say *but my uncertainty range has nothing to do with actual possible predictions of the earth’s temperature*. You can’t have it both ways. It either is an uncertainty about the earth’s future temperature, or it is not. If it is, then it is a constrained value; if it is not about the earth’s temperature, then it is irrelevant.

Chris, you wrote, “If your estimate of their uncertainty 5 years from now is so much wider than any actual possible range into which the earth itself could fall, then your method of estimating that uncertainty is incorrect.”

You’re confusing uncertainty with physical measurement, Chris. Uncertainty concerns the reliability of the prediction. It has nothing to do with the actual behavior of the physical system.

Uncertainty in the prediction tells us the reliability of the prediction. It is a measure of how reliable are the prediction methods one has in hand, in this case, climate models. Whatever the physical system does later has no bearing on the validity of the uncertainty estimate.

If the physical system produces values that are very far away from the uncertainty bound, then this means the errors and uncertainties we used to estimate the final uncertainty were not adequate to the job.

Uncertainty bounds are always an interval around the predicted value.

If the physical system produces values that are very far away from the uncertainty bound, this means that the predicted value itself is also very far away from the final state of the physical system.

This would mean the physical model is also very poor because it predicts results that are far away from reality.

It does not mean that the method used to make the uncertainty calculation (root-sum-square) was wrong.

This basic mistake informs your entire analysis. Wrong premise, wrong conclusions.

Let’s say I have a prediction method for future temperature that I think has narrow uncertainty bounds. And a whole bunch of other people have their own methods, and we all end up with similarly narrow uncertainty bounds. All are well within the ability of the earth to change by the amount predicted; let’s say they are all relatively modest in the extent of their change.

Someone else does some maths and suggests that these models actually have very wide uncertainty bounds. Far wider, in fact, than the earth can actually change in temperature.

After 15 years, it turns out that when we get the actual temperatures of the earth, and compare them to the predictions, the majority – let’s say 49 out of 50 – fit within the narrow uncertainty bounds originally estimated by the people who made the models. None of them come anywhere close to the much wider uncertainty bounds predicted by others.

Which of those uncertainty predictions just got proved to be a more accurate estimate of the true uncertainty of the models?

While you say, “uncertainty has nothing to do with the actual behaviour of the physical system”, that’s not correct. The uncertainty of a prediction has everything to do with the actual behaviour of the real physical system, because uncertainty is an estimate of the possible range of differences between the behaviour of the model and the behaviour of the real physical system: the range of likely future errors between the model and the real system. Uncertainty about how a real system may perform in the future cannot be imagined to be separate from the realities of the physical system with which the prediction is ultimately to be compared.

If a proponent of a model were to suggest that its uncertainty bounds lie well outside the acknowledged realm of possible future values of the physical system, that model would not get much traction, unless it was intended to show that the acknowledged realm was incorrect. A simple way to attack the utility of a predictive model is to suggest it has wider uncertainty limits than are physically possible. And that’s your premise.

In my example of predicting temperature going forward, if someone said that their estimate of the uncertainty of my future estimation method was +/- 200 degrees centigrade by 100 days ahead, when the greatest temperature range in the last 1000 days was only +/- 20 degrees centigrade, they would need extraordinary evidence that something truly amazing was about to happen, because otherwise the odds are that their uncertainty estimate is much too wide.

Even if my method of future estimation was to randomly take any number anywhere between the lowest and highest temperatures ever recorded, and say, “that’s what it will be in 100 days”, the precision of that estimate would not be much worse than it would be at 10 days. In fact, after a short time, the likely precision of the estimate of any particular temperature would depend only on the temperature value predicted, not the time ahead, since the ranges would not be symmetrically distributed for values further away from the mean, and each temperature would have a reduced probability of occurrence the further it was from the mean. It becomes apparent that the true uncertainty of the method depends, after a time, only on the value selected as the predicted future value, not on how far in advance the prediction is made. By your logic the uncertainty always grows and grows, infinitely; in a model of this kind, however, it does not.

Having found a good example of a simple model in which your approach can be shown to fail, you need to admit that there are some systems in which your approach can only lead to incorrect conclusions.

The error you make is this: a compounding-errors method using root-sum-square propagation may well be appropriate for some unbounded systems, but it is simply the wrong method to use for calculating the uncertainty of the future conditions of bounded systems.

The simplest example is the uncertainty in the location of a gas molecule over time. The range of uncertainty about its future position, relative to its present position, grows without limit, in proportion to the square root of elapsed time. But if we put that molecule in a box that it cannot escape from, suddenly our uncertainty model needs to change. That is the difference between predicting uncertainty in a bounded versus an unbounded system. Since the earth’s temperature is bounded, uncertainty predictions like yours, which sit way outside those bounds, are meaningless.
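
The molecule-in-a-box contrast can be seen in a toy random walk (a sketch only; the step size, box size, and clipping wall are arbitrary choices, not physics):

```python
import random
import statistics

random.seed(1)

def final_spread(bounded, steps=500, walkers=300, box=10.0):
    """Standard deviation of a walker's final position over many walks."""
    finals = []
    for _ in range(walkers):
        x = 0.0
        for _ in range(steps):
            x += random.gauss(0.0, 1.0)
            if bounded:
                x = max(-box, min(box, x))  # crude wall: clip at the box edge
        finals.append(x)
    return statistics.pstdev(finals)

# Free walk: spread grows like sqrt(steps), here roughly 22.
# Boxed walk: spread saturates near the box size and stops growing.
```

The unbounded walk keeps spreading with time; the bounded one stops. Which situation the climate system resembles is exactly the point being argued.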

Chris Thompson, your supposed model of making predictions and then seeing how they turn out is analogous to someone going into the future to see how the cards appear in a gambling casino, and then going back in time and saying that the game of chance has no uncertainty.

Your model is ludicrous.

You wrote, “… because uncertainty is an estimate of the possible range of the differences between the behaviour of the model and the actual behaviour of the real physical system.”

No, it is not. Uncertainty is the estimate of the predictive reliability of the model. It has nothing whatever to do with the final magnitude of error.

You continue to make the same mistake. Uncertainty is not error.

You wrote, “A simple way to attack the utility of a predictive model is to suggest it has wider uncertainty limits than are physically possible. And that’s your premise.”

Uncertainty bounds wider than physical possibility means that the prediction has no physical meaning. It means the model is predictively useless. That is not my premise. That is the analytical meaning of uncertainty.

I made a post here of material from published literature about the meaning of uncertainty. It is mostly from engineering journals, with links to the literature. Try looking at that, Chris. You’ll discover that your conception is entirely wrong.

Here is a small quote from S. J. Kline (1985), The Purposes of Uncertainty Analysis, Journal of Fluids Engineering 107(2), 153-160 (my bold):

“An uncertainty is not the same as an error. An error in measurement is the difference between the true value and the recorded value; an error is a fixed number and cannot be a statistical variable. An uncertainty is a possible value that the error might take on in a given measurement. Since the uncertainty can take on various values over a range, it is inherently a statistical variable.”

That quote alone refutes your entire position. I’m not going to dispute with you after this, Chris. There is no point in continuing to discuss with someone who insists on a wrong argument.

You wrote, “… but it is simply the wrong method to use for calculating the uncertainty of the future conditions of bounded systems.”

Once again you are confusing uncertainty with error. This is the core problem. You continue to argue from the wrong position. Uncertainty grows without bound. When the uncertainty bounds of a prediction are beyond the range of a physically bounded system, it means that the prediction has no physical meaning.

You need to study more, Chris. Try applying yourself to the papers abstracted in the post I linked. You’ll become a better scientist (as I did).

To me the basic error in these models stems from ignoring, or assuming away, any mechanisms in the science other than radiation that influence the climate equilibrium budget.

It is interesting to note that the TOA energy equilibrium is deemed (or forced) to be fixed in these models, and that all of them “respond to increasing greenhouse gases”, with the result that they behave similarly with respect to climate sensitivity and ocean heat uptake.

Analysis of the thermodynamic behaviour of water, particularly at the evaporative phase change, shows that there is a strong mechanism here which transports large energies (some 694 Watt-hours per kg) up through the atmosphere, due to the inherent buoyancy** of the vapor, oblivious to GHGs such as CO2, for dissipation on the way up and to space, thus providing a strong influence on the TOA energy balance.

Further, this process takes place at constant temperature and thus has a zero-value sensitivity coefficient (S) in the Planck relation dF = S*dT, which, if ignored in the calculation of the global sensitivity, would lead to an overestimate of its value; hence the models running HOT.

The thermodynamics also affects the calculation of ocean heat uptake; the effect is evident in the fact that we all sweat to keep cool.

IMO this simple omission, albeit very complex in cloud structures, is a root cause of much of the problems.

** Note: I find that in all the literature on the subject, both sceptic and otherwise, references to convection rarely if ever include this buoyancy factor, which is an entirely different mechanism and does not depend on a temperature differential.

A vital distinction when considering clouds.