Propagation of Error and the Reliability of Global Air Temperature Projections, Mark II.

Guest post by Pat Frank

Readers of Watts Up With That will know from Mark I that for six years I have been trying to publish a manuscript bearing the title of this post. Well, it has passed peer review and is now published in Frontiers in Earth Science: Atmospheric Science. The paper demonstrates that climate models have no predictive value.

Before going further, my deep thanks to Anthony Watts for giving a voice to independent thought. So many have sought to suppress it (freedom denialists?). His gift to us (and to America) is beyond calculation. And to Charles the moderator, my eternal gratitude for making it happen.

Onward: the paper is open access. It can be found and downloaded here; the Supporting Information (SI) is here (7.4 MB PDF).

I would like to publicly honor my manuscript editor Dr. Jing-Jia Luo, who displayed the courage of a scientist and a level of professional integrity found lacking in so many during my six-year journey.

Dr. Luo chose four reviewers, three of whom were apparently not conflicted by investment in the AGW status quo. They produced critically constructive reviews that helped improve the manuscript. To these reviewers I am very grateful. They provided the dispassionate professionalism and integrity that had been in very rare evidence within my prior submissions.

So, all honor to the editors and reviewers of Frontiers in Earth Science. They rose above the partisan and hewed to the principled standards of science when so many did not, and do not.

A digression into the state of practice: Anyone wishing a deep dive can download the entire corpus of reviews and responses for all 13 prior submissions, here (60 MB zip file, Webroot scanned virus-free). Choose “free download” to avoid advertising blandishment.

Climate modelers produced about 25 of the prior 30 reviews. You’ll find repeated editorial rejections of the manuscript on the grounds of objectively incompetent negative reviews. I have written about that extraordinary reality at WUWT here and here. In 30 years of publishing in Chemistry, I never once experienced such a travesty of process. For example, this paper overturned a prediction from Molecular Dynamics and so had a very negative review, but the editor published anyway after our response.

In my prior experience, climate modelers:

· did not know to distinguish between accuracy and precision.

· did not understand that, for example, a ±15 C temperature uncertainty is not a physical temperature.

· did not realize that deriving a ±15 C uncertainty to condition a projected temperature does *not* mean the model itself is oscillating rapidly between icehouse and greenhouse climate predictions (an actual reviewer objection).

· confronted standard error propagation as a foreign concept.

· did not understand the significance or impact of a calibration experiment.

· did not understand the concept of instrumental or model resolution, or that it has empirical limits.

· did not understand physical error analysis at all.

· did not realize that ‘±n’ is not ‘+n.’

Some of these traits consistently show up in their papers. I’ve not seen one that deals properly with physical error, with model calibration, or with the impact of model physical error on the reliability of a projected climate.

More thorough-going analyses have been posted at WUWT, here, here, and here, for example.

In climate model papers the typical uncertainty analyses are about precision, not about accuracy. They are appropriate to engineering models that reproduce observables within their calibration (tuning) bounds. They are not appropriate to physical models that predict future or unknown observables.

Climate modelers are evidently not trained in the scientific method. They are not trained to be scientists. They are not scientists. They are apparently not trained to evaluate the physical or predictive reliability of their own models. They do not manifest the attention to physical reasoning demanded by good scientific practice. In my prior experience they are actively hostile to any demonstration of that diagnosis.

In their hands, climate modeling has become a kind of subjectivist narrative, in the manner of the critical theory pseudo-scholarship that has so disfigured the academic Humanities and Sociology Departments, and that has actively promoted so much social strife. Call it Critical Global Warming Theory. Subjectivist narratives assume what should be proved (CO₂ emissions equate directly to sensible heat), their assumptions have the weight of evidence (CO₂ and temperature, see?), and every study is confirmatory (it’s worse than we thought).

Subjectivist narratives and academic critical theories are prejudicial constructs. They are in opposition to science and reason. Over the last 31 years, climate modeling has attained that state, with its descent into unquestioned assumptions and circular self-confirmations.

A summary of results: The paper shows that advanced climate models project air temperature merely as a linear extrapolation of greenhouse gas (GHG) forcing. That fact is multiply demonstrated, with the bulk of the demonstrations in the SI. A simple equation, linear in forcing, successfully emulates the air temperature projections of virtually any climate model. Willis Eschenbach also discovered that independently, a while back.

After showing its efficacy in emulating GCM air temperature projections, the linear equation is used to propagate the root-mean-square annual average long-wave cloud forcing systematic error of climate models, through their air temperature projections.
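For concreteness, here is a minimal sketch of what such a linear emulator looks like. The coefficients below (f_CO2 = 0.42, a 33 K greenhouse temperature, F0 = 33.30 W/m²) are illustrative stand-ins of roughly the right size, not the paper's fitted values; the only point is that the projected anomaly is linear in forcing:

```python
import math

def emulate_anomaly(delta_F, f_co2=0.42, F0=33.30, T_greenhouse=33.0):
    """Projected air-temperature anomaly (K) for a cumulative change in
    GHG forcing delta_F (W/m^2). Linear in forcing, nothing more.
    All coefficient values are illustrative stand-ins."""
    return f_co2 * T_greenhouse * (delta_F / F0)

f_2x = 5.35 * math.log(2.0)             # forcing from doubled CO2, ~3.71 W/m^2
anomaly_1x = emulate_anomaly(f_2x)      # roughly 1.5 K with these stand-ins
anomaly_2x = emulate_anomaly(2 * f_2x)  # exactly double: the emulator is a straight line
```

Double the forcing change and the emulated anomaly doubles exactly, which is the whole claim: the GCM projections collapse onto a straight line in forcing.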

The uncertainty in projected temperature is ±1.8 C after 1 year for a 0.6 C projection anomaly and ±18 C after 100 years for a 3.7 C projection anomaly. The predictive content in the projections is zero.
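Those two numbers follow from a simple propagation rule: a constant ±1.8 C annual uncertainty, combined step by step in quadrature (root-sum-square), grows as the square root of the number of steps. A sketch of that rule alone, not of the paper's full calculation:

```python
import math

def projection_uncertainty(u_step, n_years):
    """Root-sum-square propagation of a constant per-year uncertainty
    u_step (K) over n_years annual steps; the width grows as sqrt(N)."""
    return math.sqrt(sum(u_step ** 2 for _ in range(n_years)))

u_after_1 = projection_uncertainty(1.8, 1)      # +/- 1.8 K after one year
u_after_100 = projection_uncertainty(1.8, 100)  # +/- 18 K after a century
```

Because the envelope grows as √N while the projected anomaly creeps up slowly and roughly linearly, the uncertainty swamps the signal almost immediately.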

In short, climate models cannot predict future global air temperatures; not for one year and not for 100 years. Climate model air temperature projections are physically meaningless. They say nothing at all about the impact of CO₂ emissions, if any, on global air temperatures.

Here’s an example of how that plays out.


Panel a: blue points, GISS model E2-H-p1 RCP8.5 global air temperature projection anomalies. Red line, the linear emulation. Panel b: the same except with a green envelope showing the physical uncertainty bounds in the GISS projection due to the ±4 Wm⁻² annual average model long wave cloud forcing error. The uncertainty bounds were calculated starting at 2006.

Were the uncertainty to be calculated from the first projection year, 1850 (not shown in the Figure), the uncertainty bounds would be very much wider, even though the known 20th-century temperatures are well reproduced. The reason is that the underlying physics within the model is not correct. Therefore, there is no physical information about the climate in the projected 20th-century temperatures, even though they are statistically close to observations (due to model tuning).

Physical uncertainty bounds represent the state of physical knowledge, not of statistical conformance. The projection is physically meaningless.

The uncertainty due to annual average model long wave cloud forcing error alone (±4 Wm⁻²) is about 114 times larger than the annual average increase in CO₂ forcing (about 0.035 Wm⁻²). A complete inventory of model error would produce enormously greater uncertainty. Climate models are completely unable to resolve the effects of the small forcing perturbation from GHG emissions.
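The resolution argument is plain arithmetic and easy to check:

```python
lwcf_uncertainty = 4.0   # W/m^2, annual average long-wave cloud forcing error
co2_annual_step = 0.035  # W/m^2, annual average increase in CO2 forcing

# The per-year uncertainty in the simulated tropospheric flux is two
# orders of magnitude larger than the per-year perturbation being tracked.
resolution_ratio = lwcf_uncertainty / co2_annual_step  # ~114
```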

The unavoidable conclusion is that whatever impact CO₂ emissions may have on the climate cannot have been detected in the past and cannot be detected now.

It seems Exxon didn’t know, after all. Exxon couldn’t have known. Nor could anyone else.

Every single model air temperature projection since 1988 (and before) is physically meaningless. Every single detection-and-attribution study since then is physically meaningless. When it comes to CO₂ emissions and climate, no one knows what they’ve been talking about: not the IPCC, not Al Gore (we knew that), not even the most prominent of climate modelers, and certainly no political poser.

There is no valid physical theory of climate able to predict what CO₂ emissions will do to the climate, if anything. That theory does not yet exist.

The Stefan-Boltzmann equation is not a valid theory of climate, although people who should know better evidently think otherwise, including the NAS and every US scientific society. Their behavior in this is the most amazing abandonment of critical thinking in the history of science.

Absent any physically valid causal deduction, and noting that the climate has multiple rapid response channels to changes in energy flux, and noting further that the climate is exhibiting nothing untoward, one is left with no bearing at all on how much warming, if any, additional CO₂ has produced or will produce.

From the perspective of physical science, it is very reasonable to conclude that any effect of CO₂ emissions is beyond present resolution, and even reasonable to suppose that any possible effect may be so small as to be undetectable within natural variation. Nothing among the present climate observables is in any way unusual.

The analysis upsets the entire IPCC applecart. It eviscerates the EPA’s endangerment finding, and removes climate alarm from the US 2020 election. There is no evidence whatever that CO₂ emissions have increased, are increasing, will increase, or even can increase, global average surface air temperature.

The analysis is straightforward. It could have been done, and should have been done, 30 years ago. But it was not.

All the dark significance attached to whatever is the Greenland ice-melt, or to glaciers retreating from their LIA high-stand, or to changes in Arctic winter ice, or to Bangladeshi deltaic floods, or to Kiribati, or to polar bears, is removed. None of it can be rationally or physically blamed on humans or on CO₂ emissions.

Although I am quite sure this study is definitive, those invested in the reigning consensus of alarm will almost certainly not stand down. The debate is unlikely to stop here.

Raising the eyes, finally, to regard the extended damage: I’d like to finish by turning to the ethical consequence of the global warming frenzy. After some study, one discovers that climate models cannot model the climate. This fact was made clear all the way back in 2001, with the publication of W. Soon, S. Baliunas, S. B. Idso, K. Y. Kondratyev, and E. S. Posmentier, “Modeling climatic effects of anthropogenic carbon dioxide emissions: unknowns and uncertainties,” Climate Res. 18(3), 259-275, available here. The paper remains relevant.

In a well-functioning scientific environment, that paper would have put an end to the alarm about CO₂ emissions. But it didn’t.

Instead the paper was disparaged and then nearly universally ignored (Reading it in 2003 is what set me off. It was immediately obvious that climate modelers could not possibly know what they claimed to know). There will likely be attempts to do the same to my paper: derision followed by burial.

But we now know this for a certainty: all the frenzy about CO₂ and climate was for nothing.

All the anguished adults; all the despairing young people; all the grammar school children frightened to tears and recriminations by lessons about coming doom, and death, and destruction; all the social strife and dislocation. All the blaming, all the character assassinations, all the damaged careers, all the excess winter fuel-poverty deaths, all the men, women, and children continuing to live with indoor smoke, all the enormous sums diverted, all the blighted landscapes, all the chopped and burned birds and the disrupted bats, all the huge monies transferred from the middle class to rich subsidy-farmers.

All for nothing.

There’s plenty of blame to go around, but the betrayal of science garners the most. Those offenses would not have happened had not every single scientific society neglected its duty to diligence.

From the American Physical Society right through to the American Meteorological Society, they all abandoned their professional integrity, and with it their responsibility to defend and practice hard-minded science. Willful neglect? Who knows. Betrayal of science? Absolutely for sure.

Had the American Physical Society been as critical of claims about CO₂ and climate as they were of claims about palladium, deuterium, and cold fusion, none of this would have happened. But they were not.

The institutional betrayal could not be worse; worse than Lysenkoism because there was no Stalin to hold a gun to their heads. They all volunteered.

These outrages: the deaths, the injuries, the anguish, the strife, the misused resources, the ecological offenses, were in their hands to prevent and so are on their heads for account.

In my opinion, the management of every single US scientific society should resign in disgrace. Every single one of them. Starting with Marcia McNutt at the National Academy.

The IPCC should be defunded and shuttered forever.

And the EPA? Who exactly is it that should have rigorously engaged, but did not? In light of apparently studied incompetence at the center, shouldn’t all authority be returned to the states, where it belongs?

And, in a smaller but nevertheless real tragedy, who’s going to tell the so cynically abused Greta? My imagination shies away from that picture.

An Addendum to complete the diagnosis: It’s not just climate models.

Those who compile the global air temperature record do not even know to account for the resolution limits of the historical instruments, see here or here.

They have utterly ignored the systematic measurement error that riddles the air temperature record and renders it unfit for concluding anything about the historical climate, here, here and here.

These problems are in addition to bad siting and UHI effects.

The proxy paleo-temperature reconstructions, the third leg of alarmism, have no distinct relationship at all to physical temperature, here and here.

The whole AGW claim is built upon climate models that do not model the climate, upon climatologically useless air temperature measurements, and upon proxy paleo-temperature reconstructions that are not known to reconstruct temperature.

It all lives on false precision; a state of affairs fully described here, peer-reviewed and all.

Climate alarmism is artful pseudo-science all the way down; made to look like science, but which is not.

Pseudo-science not called out by any of the science organizations whose sole reason for existence is the integrity of science.

Janice A Moore
September 8, 2019 2:55 pm

Dear Pat,

So happy for you, a true scientist. This recognition is long overdue. Way to persevere. Truth, thanks to people like you, is marching on. And truth, i.e., data-based science, will, in the end, win.

But we now know this for a certainty: all the frenzy about CO₂ and climate was for nothing.

Selah.

Gratefully,

Janice

Reply to  Janice A Moore
September 8, 2019 6:00 pm

Thank-you, Janice, and very good to see you here again.

I’ve appreciated your support, not to mention your good humor. 🙂

Janice Moore
Reply to  Pat Frank
September 8, 2019 6:57 pm

Hi, Pat,

Thank you! 🙂

And, it was my pleasure.

Wish I could hang out on WUWT like I used to. I miss so many people… But the always-into-moderation comment delay along with the marked lukewarm atmosphere keep me away.

Also, I can’t post videos (and often images) anymore. Those were often essential to my creative writing here.

Miss WUWT (as it was).

Take care, down there,

Janice

Scott W Bennett
Reply to  Janice Moore
September 9, 2019 5:11 am

Janice,

Do keep an eye on us, and please comment every now and then.
The world needs every perspective, more than ever today, in this growing, globalist, google gulag!

WUWT is a shadow of its former self in terms of empowering individual commenters. No images, no editing, no real-time comments and it strains belief that all this is just circumstance! ;-(

cheers,

Scott

Janice Moore
Reply to  Scott W Bennett
September 9, 2019 9:43 am

Thank you, Scott, for the encouragement.

And, yes, I agree — with ALL of that. 🙁 There are those who make money off the perpetuation of misinformation about CO2 who influence WUWT. Too bad.

Take care, out there. Thank you, again, for the shout out, 🙂

Janice

Lizzie
September 8, 2019 3:05 pm

The unsinkable CAGW seems to be CQD. Congratulations – I admire the persistence!

Reply to  Lizzie
September 8, 2019 6:00 pm

Thanks Lizzie. 🙂

September 8, 2019 5:43 pm

So CAGW is related to a mere theory that is unsupported by observational data and unsupported by the climate models upon which the IPCC and their followers have relied.

John Q Public
Reply to  Mike Smith
September 8, 2019 6:35 pm

That appears to be the implication.

Don K
September 8, 2019 6:24 pm

Pat

Good paper. It seems to confirm my intuitive feeling that error accumulation almost certainly makes climate models pretty much worthless as predictors. Maybe there are usable ways to predict future climate, but step by step forward state integration from the current state seems to me a fundamentally unworkable approach. Heck, one couldn’t predict exactly where a satellite would be at 0000Z on January 1, 2029 given its current orbital elements. And that’s with far simpler physics than climate and only very slight uncertainties in current position and velocity.

There’s a lot of stuff there that requires some thinking about. And I suppose there could be actual significant flaws. But overall, it’s pretty impressive. Congratulations on getting it published.

Reply to  Don K
September 8, 2019 10:03 pm

Thanks, Don. I sweated bullets working on it. 🙂

Chris Hanley
September 8, 2019 6:38 pm

Clear, coherent, concise and thoroughly convincing.

Reply to  Chris Hanley
September 8, 2019 10:04 pm

Thanks, Chris.

September 8, 2019 6:51 pm

I hate to admit it, but I always thought that the plus-or-minus in a statement of error was a literal range of possible values for a real-world measure. Now Pat comes along and blows my mind with the revelation that I have been thinking incorrectly all these years [I think].

I need to dwell on this to rectify my dissonance.

Phil
Reply to  Robert Kernodle
September 8, 2019 8:45 pm

One should not confuse measurement error with modeling error. Your understanding is correct for a “real-world measure.” When a model is wildly inaccurate, as GCMs are, then its uncertainty can be greater than the bounds that we would expect for real-world temperatures. A model is not reality. When the model uncertainty is greater than the expected values of that which is being modeled (i.e., the world’s atmosphere), then the model is not informative. In short, the uncertainty that this paper refers to is not of that which is being estimated. While the model outputs seem to be within the bounds of the system being modeled, those outputs are probably constrained. Years ago, I remember discussions on Climate Audit about models “blowing up” (mathematically), i.e., becoming unstable or going out of bounds.

Reply to  Phil
September 8, 2019 10:05 pm

Dead on again, Phil. Thanks. I’m really glad you’re here.

Paul Penrose
Reply to  Phil
September 9, 2019 10:35 am

Phil,
Even in the real world there is accuracy error (uncertainty) and precision error (noise). Accuracy is generally limited by the resolution of your measurement device and its calibration. These are physical limitations, so accuracy can’t be improved by any post-measurement procedure, and this uncertainty must be propagated through all subsequent steps that use the data. Noise, if it is random and zero-mean, can be reduced post-measurement by mathematical means (averaging/filtering).
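Paul's distinction is easy to demonstrate numerically. The sketch below (entirely made-up numbers) shows that averaging many noisy readings shrinks the random scatter but leaves a systematic offset untouched:

```python
import random

random.seed(0)
true_value = 20.0  # the quantity being measured (illustrative)
bias = 0.5         # systematic (accuracy) error: survives averaging
noise_sd = 2.0     # random (precision) error: averages away

n = 100_000
readings = [true_value + bias + random.gauss(0.0, noise_sd) for _ in range(n)]
mean_reading = sum(readings) / n

# The random scatter shrinks roughly as noise_sd / sqrt(n), but the mean
# still sits ~0.5 above the true value: no amount of averaging removes
# the systematic offset.
residual_bias = mean_reading - true_value
```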

Clyde Spencer
Reply to  Phil
September 9, 2019 7:28 pm

Phil
+1

September 8, 2019 7:40 pm

If the possible range of error in the CMIP5 models is so wide then one wonders how it is that observed surface temperatures have so far been constrained within the relatively narrow model envelope: http://blogs.reading.ac.uk/climate-lab-book/files/2014/01/fig-nearterm_all_UPDATE_2018-1.png

According to the example given in the article, error margins since the forecast period began in 2006 could already be expected to have caused the models to stray by as much as around +/- 7 deg C from observations. Yet observations throughout the forecast period have been constrained within the model range of +/- less than 1 deg. C. Is this just down to luck?

Reply to  TheFinalNail
September 8, 2019 10:00 pm

Uncertainty is not physical error, TFN. The ±C are not temperatures. They are ignorance widths.

They do not imply excursions in simulated temperature. At all.

Reply to  Pat Frank
September 8, 2019 11:21 pm

Thanks Pat. But if the model range’s relatively narrow window constrains the observations, as it has done so far over the forecast period, then would you not agree that the model ensemble range has, so far anyway, been a useful predictive tool, even without any expressed ‘ignorance widths’ for the individual model runs? Rgds.

Reply to  TheFinalNail
September 9, 2019 7:24 pm

SO much ignorance on display here.

#1: You do not know the meaning of “constraint” (a limitation or restriction). The models “constrain” nothing in the real world. Note: in the fantasy world, models do constrain the “temperatures” used for initialization – you must start with the appropriate “temperatures” in order to ensure that your model predicts disaster. That those “temperatures” are not from observations has become more and more obvious over the years.

#2: The “ensemble range” is meaningless. The “ensemble mean” obviously has no relation to reality, as it continues to diverge more and more from the observations (the unadjusted ones, that is). An “ensemble” of models is completely meaningless. You have A model that works, or you do not. The models that have already diverged so significantly from reality are obviously garbage, and any real researcher would have sent them to the dust bin a long time ago. Of those that are left, which have been more or less tracking reality – they are in the category of “interesting, might be useful, but not yet proven.” Not enough time yet to tell whether they diverge from reality.

#3: For those who think that the models do so well on the past (hindcasting) – well, of course they do. The model is tweaked until it does “predict” the past (either the real past, or the fantasy past). The tweaks – adding and subtracting parameters, adjusting the values of the parameters, futzing with the observed numbers – have no relation to, or justification in, the real world; they simply cause the calculations to come out correctly. (An analogy would be if I thought I should be able to write a check for a new Ford F3500 at a dealership tomorrow. This is absolutely true, so long as my model ignores certain pesky withdrawals from my account, rounds some deposits up to the next $1,000, assumes a 25% interest rate on my savings account, etc. The bank, for some reason, doesn’t accept MY model of my financial condition…).
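Writing Observer's point #3 (tweak until the hindcast matches) can be illustrated with a toy that has nothing to do with any actual GCM: a model flexible enough to reproduce a short noisy "past" exactly says nothing about the "future":

```python
def lagrange(xs, ys, x):
    """Evaluate the unique polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

years = [0, 1, 2, 3, 4, 5]
trend = [0.02 * t for t in years]              # the underlying "real" process
wiggle = [0.0, 0.01, -0.01, 0.02, -0.02, 0.0]  # small bumps the model is tuned to
obs = [t + w for t, w in zip(trend, wiggle)]

# The degree-5 polynomial reproduces every "observation" essentially exactly...
hindcast_err = max(abs(lagrange(years, obs, t) - o) for t, o in zip(years, obs))

# ...but extrapolated ten "years" ahead it bears no relation to the trend.
forecast = lagrange(years, obs, 15)
forecast_err = abs(forecast - 0.02 * 15)
```

The hindcast is perfect; the forecast is off by orders of magnitude, because the flexibility that produced the fit carried no physics.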

Reply to  Writing Observer
September 9, 2019 10:55 pm

Writing Observer

It is simply a fact that observations have remained within the multi-model range over the forecast period, which started in 2006: http://blogs.reading.ac.uk/climate-lab-book/files/2014/01/fig-nearterm_all_UPDATE_2018-1.png

It is another fact that temperature projections across the model range up to the present time are contained (since you don’t like ‘constrained’) within less than 1 deg C, warmest to coolest.

Contrast this with Pat Frank’s claim, shown in his example chart in the main text, that by 2019 the models could already be expected to show an error of up to +/- 7 deg C. If the models really do have such a wide error range over such a short period, then it is remarkable that observations so far are contained within such a relatively narrow projected range of temperature.

Reply to  TheFinalNail
September 10, 2019 12:14 am

TFN, no, because the large uncertainty bounds show that the model cannot resolve the effect of the perturbation.

The lower limit of resolution is much, much larger than the perturbation; like trying to resolve a bug in a picture with yard-wide pixels.

The underlying physics is incorrect, so that one can have no confidence in the accuracy of the result, bounded or not.

The meaning of the uncertainty bound is an ignorance width.

Charlie
Reply to  TheFinalNail
September 9, 2019 3:53 pm

I also wonder about this; how are they as close as they are for past conditions?

Matthew Schilling
Reply to  Charlie
September 11, 2019 9:04 am

How closely did Ptolemaic models, with their wheels within wheels, match observations? How often were additional wheels added after an observation to make the model better emulate reality?

n.n
September 8, 2019 8:19 pm

The science of evaluating fitness of computer models and other simulations.

John Q Public
September 8, 2019 8:39 pm

How much has earth’s temperature varied throughout history? Is it possible clouds played a role in that variation?

Also, the propagated uncertainty range is more an expression of the unrealism of the models under the observed uncertainties. That range is not claimed to occur in the real world, only under the action of the GCM models.

Reply to  John Q Public
September 8, 2019 9:44 pm

John Q Public

… the propagated uncertainty range is more an expression of the unrealism of the models under the observed uncertainties. That range is not claimed to occur in the real world, only under the action of the GCM models.

If the model uncertainty range is as wide as claimed in this new paper then it’s remarkable that, so far at least, observations have remained within the relatively narrow model range over the forecast period (since Jan 2006).

The paper concludes that an AGW signal can not emerge from the climate noise “because the uncertainty width will necessarily increase much faster than any projected trend in air temperature.” This claim appears to be contradicted by a comparison of observations with the model range over the forecast period to date (13 years). Perhaps the modellers have just been fortunate so far; or perhaps their models are less unrealistic than this paper suggests.

John Q Public
Reply to  TheFinalNail
September 8, 2019 10:10 pm

I think it is related to the fact that the modelers are not including the uncertainty in their models. See my post below regarding how this could be done (not actually feasible, but “theoretically”).

Reply to  John Q Public
September 8, 2019 11:35 pm

As I understand it the model ensemble (representing the range of variation across the individual model runs) is intended to provide a de-facto range of uncertainty. If observations stray significantly outside the model range then clearly this would indicate that it is inadequate as a predictive tool. But I struggle to see how it can be dismissed as a predictive tool if observations remain inside its still relatively narrow range (relative compared to Pat’s suggested error margins, that is), as they have done so far throughout the forecast period (since 2006) and look set to continue to do in 2019.

Sweet Old Bob
Reply to  TheFinalNail
September 9, 2019 8:12 am

Observations do not “stray”; model outputs do.
If any model does not match observations, it needs to be reworked.
Or junked.

John Q Public
Reply to  TheFinalNail
September 9, 2019 9:15 am

I interpret the range to indicate where the output could move to within the parameterization scheme of the model itself, but under the influence of an externally determined (by NASA, Lauer) uncertainty in the model’s performance. In other words, NASA showed what the cloud situation was with satellites. This is compared to what the models predicted, and the uncertainty came from this comparison. This indicates that the model does not have sufficient predictive power to predict what NASA satellites observed, and when this lack of prediction is propagated (extrapolated) over many years, it shows the futility of the calculation.

Paul Penrose
Reply to  TheFinalNail
September 9, 2019 10:19 am

Nail,
What this paper tells us is that given the known errors in the theory that the models are built on, they can’t tell us anything useful about future climate. They are insufficient for that task. The fact that they are “close” to recent historic data over a short time-frame should not be too surprising since they were tuned to follow recent weather patterns. This is not proof in any way that they have predictive value. They might, but because of the systemic errors, we can’t know one way or the other.

Dave Bufalo
September 8, 2019 9:17 pm

Regarding “…no one at the EPA objected…”, Alan Carlin, a physicist and an economist who had a 30-plus-year career with the EPA as a senior policy analyst, wrote a book called “Environmentalism Gone Mad”, wherein he severely criticizes the EPA for supporting man-made global warming. Carlin was silenced by the EPA on his views.

John Q Public
September 8, 2019 10:03 pm

Another way to look at this.

Let’s say we took a CMIP5 model but modified it as such.

Take the first time step (call it one year, and use as many sub-time steps as needed). Consider that answer the annual mean. Now run two more first time step runs: a +uncertainty and -uncertainty run (where the uncertainty means modifying the cloud forcing by + or – 4 W/sqm). Now we have three realizations of theoretical climate states in the first year. A mean, a -uncertainty, and a plus uncertainty.

Now go to the second time step. For EACH of the three realizations of the first time step, repeat what we did for the first time step. We now have 9 realizations of the second time step in the second year.

Continue that for 100 years (computer makers become very rich, power consumption of the electric grid goes up exponentially) and we have 3^100 realizations of the 100th year, and 3^N for any year between 1 and 100.

Now for every year in the sequence take the highest and lowest temperatures of all the 3^N realizations for that year. Those become the uncertainty error bar for that year.

Or do it the way Pat Frank did it and probably save a boatload of money and end up with nearly the same answer.
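John's scheme can actually be run for a small N. The toy below (an illustrative response coefficient and step count, not anything from the paper) enumerates all 3^N branches of a linear response and compares the worst-case envelope with a root-sum-square combination:

```python
from itertools import product

LAMBDA = 0.5  # illustrative response, K per (W/m^2); linear, as in the emulation
U = 4.0       # per-year cloud-forcing uncertainty, W/m^2
N = 8         # years; there are 3**N branches, so keep N small

# Each year the forcing error is -U, 0, or +U; a linear model just sums them.
temps = [LAMBDA * sum(branch) for branch in product((-U, 0.0, U), repeat=N)]

envelope = (min(temps), max(temps))  # worst case: +/- LAMBDA*U*N, linear in N
rss = LAMBDA * U * N ** 0.5          # quadrature combination: grows as sqrt(N)
```

For this linear toy the brute-force envelope grows linearly in N (every error aligned), while the quadrature width grows only as √N; the enumeration brackets the quadrature result, which is the statistically conventional way to combine independent per-step uncertainties.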

John Q Public
September 8, 2019 10:04 pm

To my post above- you would actually need to repeat it with every model Pat Frank tested to do what Pat Frank did…

John Q Public
Reply to  John Q Public
September 9, 2019 8:59 am

oops… multimodel mean… Still main point stays.

September 8, 2019 11:16 pm

I come back to my earlier question of whether cloud forcing is part of climate models. I know that in simple climate models like that of the IPCC it is not: dT = CSP * 5.35 * ln(CO2/280). This model gives the same global warming values as GCMs from 280 ppm to 1370 ppm. There is no cloud forcing factor.
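The simple expression Antero quotes is easy to evaluate; the CSP value below is an illustrative 0.5 K/(W/m²), not an endorsed number:

```python
import math

def simple_dT(co2_ppm, csp=0.5):
    """dT = CSP * 5.35 * ln(CO2/280): the simple-model form quoted above.
    csp is the climate sensitivity parameter in K/(W/m^2); 0.5 is an
    illustrative value only."""
    return csp * 5.35 * math.log(co2_ppm / 280.0)

dT_doubling = simple_dT(560.0)  # doubling: forcing 5.35*ln(2) ~= 3.71 W/m^2
```

Note that no cloud term appears anywhere in the expression, which is the point of the question.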

Firstly two quotes from the comments above for my question about the cloud forcing:
1) John Tillman: “GCMs don’t do clouds. GIGO computer gamers simply parameterize them with a fudge factor.” This is also my understanding.
2) Pat Frank: “I can’t speak to what people do with, or put into, models, Antero, sorry. I can only speak to the structure of their air temperature projections.”

Then I copy the following quote from the manuscript: “The resulting long-wave cloud forcing (LWCF) error introduces an annual average +/- 4 W/m2 uncertainty into the simulated tropospheric thermal energy flux. This annual +/- 4 W/m2 simulation uncertainty is +/- 114 x larger than the annual average +/- 0.035 W/m2 change in tropospheric thermal energy flux produced by increasing GHG forcing since 1979.”

For me, it looks very clear that you have used the uncertainty of the cloud forcing as a very fundamental basis in your analysis, to show that this error alone destroys the temperature calculations of climate models. The next conclusion is from your paper: “Tropospheric thermal energy flux is the determinant of global air temperature. Uncertainty in simulated tropospheric thermal energy flux imposes uncertainty on projected air temperature.”

For me, it looks like you want to deny the essential part of your paper’s findings.

Paul Penrose
Reply to  Antero Ollila
September 9, 2019 10:01 am

Antero,
I think it is obvious that clouds do affect the climate, and given the amount of energy released by condensation, it can’t be insignificant. So, to the extent that GCMs don’t model clouds (whether they ignore them or parameterize them), this is an error in their physical theory. Even the IPCC and some modelers acknowledge as much. What this paper does is quantify that error and show how it propagates forward in simulation-time.

Reply to  Paul Penrose
September 9, 2019 9:10 pm

I do not deny the effects of clouds. I think they have an important role in the sun theory.

But they are not part of the IPCC’s climate models.

John Tillman
Reply to  Antero Ollila
September 9, 2019 11:21 am

As noted above, grid cells in GCMs are far larger than clouds, so the latter cannot be modelled directly. To do so would require too much computing power.

The cells in numerical models vary a lot in size, but a typical mid-latitude cell would be around 200 by 300 km, i.e. 60,000 sq km. Just guessing here, but an average cloud might be one square kilometer, and possibly smaller.
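A back-of-envelope check of that scale mismatch, using the figures above (the 1 km² cloud is the comment's own guess):

```python
# Scale mismatch between a mid-latitude GCM grid cell and a single cloud,
# using the rough numbers from the comment above (the cloud area is a guess).
cell_area_km2 = 200 * 300   # 60,000 km^2 per grid cell
cloud_area_km2 = 1          # guessed average cloud footprint

# Tens of thousands of clouds could sit inside one cell, so clouds can
# only be represented statistically (parameterized), not resolved.
print(cell_area_km2 // cloud_area_km2)
```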

Reply to  Antero Ollila
September 11, 2019 10:20 pm

The climate models include clouds, Antero.

Look at Lauer and Hamilton, 2013, from which paper I obtained the long wave cloud forcing error. They discuss the cloud simulations in detail.

Phoenix44
September 9, 2019 12:23 am

It’s not just climate modelers. Exactly the same problems exist in finance/economics (my current profession) and medicine/epidemiology (my training). Those asking for models don’t understand the limitations and just want proof; those running the models don’t understand what they are modelling; and neither of them checks the output against common sense. Far too many think models produce new knowledge rather than model the assumptions put in. They believe models produce emergent properties, but they do not – unless the assumptions used are empirically derived “laws”.

It is all a horrible mess.

Mike Haseler (Scottish Sceptic)
September 9, 2019 12:25 am

“The unavoidable conclusion is that a temperature signal from anthropogenic CO2 emissions (if any) cannot have been, nor presently can be, evidenced in climate observables.”

Or as I’ve been saying for more than a decade: “natural variation is more than enough to explain the entire temperature change”.

We now have almost all “scientific” institutions claiming a 95% (or is it now 99%?) confidence that the warming was due to CO2, while a sound systematic assessment of the models says that there can be no confidence at all that any of the warming is due to CO2.

In short, we can be 100% confident the “scientific” institutions are bonkers.

And, based on theory, and given we don’t have explicit evidence that CO2 causes cooling, we can say that on balance it should have caused some warming, with about 0.6 C (Harde) to 1 C (Hansen) per doubling of CO2 being the most likely range. But likewise, unless something dramatic changes, none of us is ever going to be able to be certain that that theory is correct.

Reply to  Mike Haseler (Scottish Sceptic)
September 9, 2019 11:48 am

I think the latest DSM, DSM-5, discourages the use of the problematic descriptor “bonkers” when referring to the described mental condition.
The approved phrase is “bananas”, although “nutty as a fruitcake” is gaining ground as the more appropriate phrasing.

Matheus Carvalho
September 9, 2019 3:56 am

I am hanging this pic at the university where I work:

https://imgur.com/a/llWHW23

Clyde Spencer
Reply to  Matheus Carvalho
September 9, 2019 7:33 pm

Matheus
I hope you have tenure!

September 9, 2019 4:44 am

Pat,
I am really happy that your paper on uncertainties has finally been published, and I applaud your very detailed and well-written comment here at WUWT… I especially admire that you did not cave in or get depressed when confronted with the many, often ugly, negative comments and peer reviews of the past.

Reply to  Francis MASSEN
September 10, 2019 10:27 pm

Thank-you Francis. I actually recommended you as a reviewer. 🙂

You’re a trained physicist, learned in meteorology, and you’d give a dispassionate, critical and honest review no matter what.

Paramenter
September 9, 2019 5:26 am

Hey Pat,

Massive well done for all your work, and congratulations on publishing your article! The mere fact that this article has made it through such a hostile ‘mainstream science’ environment means your message carries weight that cannot simply be ignored. Before I allow myself to ask a couple of questions about your article, I’d like to highlight that the findings presented in it are very consistent with another article published recently in Nature Communications (source). It’s basically a stark warning against putting too much faith in complex modelling. Have a look at that:

All model-knowing is conditional on assumptions. […] Unfortunately, most modelling studies don’t bother with a sensitivity analysis – or perform a poor one. A possible reason is that a proper appreciation of uncertainty may locate an output on the right side of Fig. 1, which is a reminder of the important trade-off between model complexity and model error.

Indeed, as your work proves in the field of climate modelling. Figure 1, which the author of the Nature article refers to, is a rather disturbing graph: basically, model error versus model complexity. As a model grows in complexity, its errors grow too, especially propagation errors. We see that, as model complexity grows, uncertainty in the input variables accumulates and propagates to the output of the model.

Whilst statisticians are getting lots of heat over the reproducibility crisis, ‘modellers’ are getting a free pass. That should not be the case because:

unlike statistics, mathematical modelling is not a discipline. It cannot discuss possible fixes in disciplinary fora under the supervision of recognised leaders. It cannot issue authoritative statements of concern from relevant institutions such as e.g., the American Statistical Association or the columns of Nature.

And the cherry on this cake:

Integrated climate-economy models pretend to show the fate of the planet and its economy several decades ahead, while uncertainty is so wide as to render any expectations for the future meaningless.

Indeed. Sorry for skidding towards the article in Nature, but I reckon it plays nicely with your findings!

John Q Public
Reply to  Paramenter
September 9, 2019 12:09 pm

His findings demonstrate the statement in a manner others cannot brush off [forever].

Alastair Brickell
September 9, 2019 6:05 am

Matheus Carvalho
September 9, 2019 at 3:56 am

Yes, it’s a tremendous, sobering and revealing graph of just what’s been going on for so long. Some sanity at last.
Thank you Pat and Anthony, Ctm and others for bringing it to light.

Mickey Reno
September 9, 2019 6:13 am

Congratulations on a very thoughtful paper, Pat. I’m sure it will be noticed, since I’m sure they occasionally look in here at WUWT, but then thoroughly ignored by the IPCC climate high priests and gatekeepers who claim to be scientists. More’s the pity.

BallBounces
September 9, 2019 6:14 am

I found these points from the author’s 2012 WUWT article helpful in understanding the concept:

* Climate models represent bounded systems. Bounded variables are constrained to remain within certain limits set by the physics of the system. However, systematic uncertainty is not bounded.

* The errors in each preceding step of the evolving climate calculation propagate into the following step.

* When the error is systematic, prior uncertainties do not diminish like random error. Instead, uncertainty propagates forward, producing a widening spread of uncertainty around time-stepped climate calculations. [Cf. the out-of-focus lens fuzz-factor mentioned earlier in comments]

* The uncertainty increases with each annual step.

* When the uncertainty due to the growth of systematic error becomes larger than the physical bound, the calculated variable no longer has any physical meaning….The projected temperature would become no more meaningful than a random guess.

* The uncertainty of (+/-)25 C does not mean… the model predicts air temperatures could warm or cool by 25 C between 1958 and 2020. Instead, the uncertainty represents the pixel size resolvable by a CMIP5 GCM. [Again, the out-of-focus lens analogy.]

* Each year, the pixel gets larger, meaning the resolution of the GCM gets worse. After only a few years, the view of a GCM is so pixelated that nothing important can be resolved.
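The growth of uncertainty described in those bullets can be sketched numerically. The coefficients below (0.42, 33 K, F0 ≈ 33.30 W/m², ±4 W/m² per year) follow my reading of the paper's linear emulator; the snippet itself is only an illustration, not the author's code:

```python
# Sketch of root-sum-square uncertainty propagation through annual steps.
# Each annual step contributes a temperature uncertainty
#   u = 0.42 * 33 K * (4 W/m^2 / F0),
# and the per-step uncertainties combine in quadrature, so the bound
# grows as u * sqrt(N). Coefficients follow my reading of the paper's
# emulator; treat this as an illustration only.
import math

F0 = 33.30          # W/m^2, total greenhouse forcing baseline (per paper)
LWCF_ERR = 4.0      # W/m^2, annual long-wave cloud forcing uncertainty
U_STEP = 0.42 * 33.0 * LWCF_ERR / F0   # ~1.66 K per annual step

def propagated_uncertainty(n_years):
    """Root-sum-square of n identical per-step uncertainties: u * sqrt(n)."""
    return math.sqrt(sum(U_STEP**2 for _ in range(n_years)))

# After a century the bound is roughly +/-17 K -- far larger than any
# projected warming, which is the widening "pixel" in the analogy above.
print(round(propagated_uncertainty(100), 1))
```

This is the sense in which the uncertainty bound, not the projection itself, swallows the physical range: the ±17 K is resolution, not a prediction of actual cooling or warming.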

John Bills
September 9, 2019 6:46 am

Models suck:
https://www.nature.com/articles/s41558-018-0355-y

Taking climate model evaluation to the next level
Earth system models are complex and represent a large number of processes, resulting in a persistent spread across climate projections for a given future scenario. Owing to different model performances against observations and the lack of independence among models, there is now evidence that giving equal weight to each available model projection is suboptimal.


Billy the Kid
September 9, 2019 7:52 am

I have some science background, but I’m having trouble putting my finger on the crux of the argument. Can someone explain it to me? Here’s what I understand.

These GCM models mis-forecast LWCF by ±4 W/m² per year. But elsewhere, the impact of doubling CO2 is supposed to add 3.7 W/m². Is it accurate to say that it’s just silly to expect you can forecast something that adds 3.7 W/m² if your cloud errors alone are ±4 W/m² per year?

What does “error propagation” mean? Thanks in advance.

John Tillman
Reply to  Billy the Kid
September 9, 2019 11:24 am
Billy the Kid
Reply to  John Tillman
September 9, 2019 11:29 am

Thank you. I understand the error propagation piece now. That’s what I thought it meant; I just hadn’t heard the word “propagation” used before.

And is the main source of the error the mis-forecasting of the cloud cover? I.e., does the ±4 W/m² over years of iterations make the uncertainty enormous compared with the forecast?

Is it right to say that the 4 W/m² error is HUGE compared to the overall forcing of CO2 today?

John Q Public
Reply to  Billy the Kid
September 9, 2019 11:51 am

” The resulting long-wave cloud forcing (LWCF) error introduces an annual average ±4 Wm^2 uncertainty into the simulated tropospheric thermal energy flux. This annual ±4 Wm^2 simulation uncertainty is ±114 × larger than the annual average ∼0.035 Wm^2 change in tropospheric thermal energy flux produced by increasing GHG forcing since 1979.”

https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/#comment-2790375

Reply to  John Tillman
September 9, 2019 11:38 am

Referring to a lecture or article is the easy way to do it, John Tillman.
Probably the better way.
Everyone interested in this, or who finds themselves at somewhat of a loss to understand such things as the distinction between precision and accuracy, can and should read reference material on these subjects.
Wikipedia is fine for reading about such subjects, although we know it is unreliable on more specific and controversial subjects.

John Tillman
Reply to  Nicholas McGinley
September 9, 2019 12:43 pm

Thanks. IMO better than my attempting an inexpert explanation or definition.

Wiki and other Net sites are great, so long as you check the original sources.

John Tillman
Reply to  Nicholas McGinley
September 9, 2019 12:59 pm

Nicholas,

Power went out just as I replied re. the need to check original sources for Internet references, so dunno if the response will appear.

If forced to state my own inexpert understanding of uncertainty propagation, I’d say that errors multiply with each succeeding measurement or analysis in a process, potentially leading to uncertainty greater than observed variation in the phenomenon under study.

Reply to  Billy the Kid
September 9, 2019 11:32 am

All measurements contain some uncertainty.
When you do iterative mathematical calculations using numbers that carry a certain amount of uncertainty, you are multiplying uncertain numbers by uncertain numbers, and doing so over and over again.
With each iteration, the uncertainty grows… because it is being multiplied (or added, or divided, or whatever).
At a certain point the uncertainty is larger than the measured quantity.
That is one sort of error propagation.
You learned about reporting significant figures in the science classes you took, no?
If you are measuring the velocity of some object in motion, and you use a stopwatch that you can only read to the nearest second together with a meter stick that measures to the hundredth of a meter, how many significant figures can you report in your measurement of velocity?
You might know the distance to three significant figures, but you only measured time to one sig fig.
Now let’s say you used that result to calculate something else, like maybe a force, in which you used other measurements.
And then you used that result to calculate something else again, using still more measurements or still more parameters.
If you are not careful to follow the rules regarding uncertainty, you might easily arrive at completely erroneous results due to propagated errors (or uncertainties) from any or all of your measurements.
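The stopwatch-and-meter-stick example above can be made concrete. For a quotient v = d/t, relative uncertainties combine in quadrature; the numbers here are made up for illustration:

```python
# Sketch of the velocity example above: distance read to 0.01 m, time read
# only to the nearest second. For v = d / t, relative uncertainties combine
# in quadrature: (u_v/v)^2 = (u_d/d)^2 + (u_t/t)^2. All values are
# invented for illustration.
import math

d, u_d = 1.23, 0.005   # metres; half the smallest division of the stick
t, u_t = 2.0, 0.5      # seconds; half the smallest division of the watch

v = d / t
rel_u = math.sqrt((u_d / d)**2 + (u_t / t)**2)
u_v = v * rel_u

# The 25% relative uncertainty in time dominates completely; the careful
# distance measurement buys almost nothing.
print(f"v = {v:.3f} +/- {u_v:.3f} m/s")
```

This is why significant-figure rules say the least precise measurement governs: the quadrature sum is dominated by its largest term.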

Billy the Kid
Reply to  Nicholas McGinley
September 9, 2019 11:41 am

Thank you! It’s all coming back to me from Gen Chem.

Stats question: the error bars – are they the standard deviation, or do they represent a 95% confidence interval of some sort?

Reply to  Billy the Kid
September 10, 2019 3:33 am

Error bars mean different things depending on the data and what is done with it.
Clyde Spencer (among others) has several recent articles here in which this entire subject is discussed at length.
One thing I like to keep in mind is that some data can be compared to a known (also called an accepted) value, while other measurements will never have an accepted value to compare a measured and/or calculated result to.
We do not know what the true GAST of the Earth was yesterday, or last year, and not in 1880.
And we never will.
We have many here who argue rather convincingly that there is no such number with a physical meaning, along with related arguments regarding the relationship of temperature to energy content, or enthalpy (e.g., what is an air temperature without an accompanying humidity value really telling us?).
I for one dislike the idea of one single number that purports to tell us about “the climate”.
My understanding of what exactly the word climate means tells me this is an exceptionally inane concept. The planet has climate regimes, not “a climate”. And there is far more to a climate than just a temperature.

As just one example, two locations can have the exact same average annual temperature, or even the same daily temperature, and yet have starkly contrasting weather. A rainforest might have a daily low of 82° and a daily high of 88°, and so have a daily average temp of 85°. A desert, on the same day of the year, might have a low of 49° and a high of 121°, and so have the same average temp. But nothing else about these two places is remotely similar. The rainforest is near the equator and has 12 hours of daylight and 12 hours of darkness, while the desert is at a high latitude and has a far different length of day, which could be either 8 hours or 16 hours.
There is very little information contained in the so-called GAST.
(Sorry, bit of a tangent there)

Reply to  Nicholas McGinley
September 10, 2019 7:55 am

You are close to some really important info concerning the so-called “global temperature”. I ran across a reference on meteorology that said weather folks must look at the temperature regime in an area to see whether it is similar to another area’s. They talked about a coastal city, where temperature ranges are buffered by the ocean, versus a location on the Great Plains, where temperature ranges can be large.

Another way to say this is that over a period of time the coastal temps will have a small standard deviation while the Great Plains location will have a large one. This raises a question: can you truly average the two temps together and claim a higher accuracy because of the standard error of the mean? I would say no. The population of temps at each location is different, with different standard deviations.

Can you average them at all? Sure, but you are artificially diminishing the range. Is the average temperature of the two locations an accurate description of temperature? Heck no. Consequently, does a “global temperature” tell you anything about climate? Not really.
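The two-station point above can be checked with basic variance algebra. The station spreads here are invented for illustration:

```python
# Sketch of averaging a low-variance coastal record with a high-variance
# plains record. For independent readings, the standard deviation of the
# two-station average is sqrt(sx^2 + sy^2) / 2, so it cannot be shrunk
# below half the noisier station's spread. Station sigmas are invented.
import math

sigma_coastal = 2.0   # degC spread, ocean-buffered (assumed)
sigma_plains = 10.0   # degC spread, continental (assumed)

sigma_avg = math.sqrt(sigma_coastal**2 + sigma_plains**2) / 2.0
print(round(sigma_avg, 2))  # ~5.1 degC: the plains variability dominates
```

Quoting a tighter "error of the mean" would implicitly treat the two stations as draws from one population, which, as the comment argues, they are not.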

John Q Public
September 9, 2019 8:17 am

” The resulting long-wave cloud forcing (LWCF) error introduces an annual average ±4 Wm^2 uncertainty into the simulated tropospheric thermal energy flux. This annual ±4 Wm^2 simulation uncertainty is ±114 × larger than the annual average ∼0.035 Wm^2 change in tropospheric thermal energy flux produced by increasing GHG forcing since 1979.”

https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/#comment-2790375

Jan E Christoffersen
September 9, 2019 8:44 am

Pat,

Paragraph 16, 3rd line: “multiply” should be “multiplely”

Picky, I know but such a good, hard-hitting post should be free of even one spelling error.

Reply to  Jan E Christoffersen
September 10, 2019 10:22 pm

I’d fix it if I could do, Jan, thanks. 🙂