Propagation of Error and the Reliability of Global Air Temperature Projections, Mark II.

Guest post by Pat Frank

Readers of Watts Up With That will know from Mark I that for six years I have been trying to publish a manuscript with the post title. Well, it has passed peer review and is now published at Frontiers in Earth Science: Atmospheric Science. The paper demonstrates that climate models have no predictive value.

Before going further, my deep thanks to Anthony Watts for giving a voice to independent thought. So many have sought to suppress it (freedom denialists?). His gift to us (and to America) is beyond calculation. And to Charles the moderator, my eternal gratitude for making it happen.

Onward: the paper is open access and can be downloaded here; the Supporting Information (SI) is here (7.4 MB PDF).

I would like to publicly honor my manuscript editor, Dr. Jing-Jia Luo, who displayed the courage of a scientist and a level of professional integrity found lacking in so many during my six-year journey.

Dr. Luo chose four reviewers, three of whom were apparently not conflicted by investment in the AGW status-quo. They produced critically constructive reviews that helped improve the manuscript. To these reviewers I am very grateful. They provided the dispassionate professionalism and integrity that had been in very rare evidence within my prior submissions.

So, all honor to the editors and reviewers of Frontiers in Earth Science. They rose above the partisan and hewed the principled standards of science when so many did not, and do not.

A digression into the state of practice: Anyone wishing a deep dive can download the entire corpus of reviews and responses for all 13 prior submissions, here (60 MB zip file, Webroot scanned virus-free). Choose “free download” to avoid advertising blandishment.

Climate modelers produced about 25 of the prior 30 reviews. You’ll find repeated editorial rejections of the manuscript on the grounds of objectively incompetent negative reviews. I have written about that extraordinary reality at WUWT here and here. In 30 years of publishing in Chemistry, I never once experienced such a travesty of process. For example, this paper overturned a prediction from Molecular Dynamics and so had a very negative review, but the editor published anyway after our response.

In my prior experience, climate modelers:

· did not know to distinguish between accuracy and precision.

· did not understand that, for example, a ±15 C temperature uncertainty is not a physical temperature.

· did not realize that deriving a ±15 C uncertainty to condition a projected temperature does *not* mean the model itself is oscillating rapidly between icehouse and greenhouse climate predictions (an actual reviewer objection).

· confronted standard error propagation as a foreign concept.

· did not understand the significance or impact of a calibration experiment.

· did not understand the concept of instrumental or model resolution, or that it has empirical limits.

· did not understand physical error analysis at all.

· did not realize that ‘±n’ is not ‘+n.’

Some of these traits consistently show up in their papers. I’ve not seen one that deals properly with physical error, with model calibration, or with the impact of model physical error on the reliability of a projected climate.
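The accuracy/precision distinction heading the list can be made concrete with a small numerical sketch (the instruments and readings below are invented for illustration, not taken from the paper):

```python
import statistics

# Hypothetical instruments measuring a known true value of 20.0 C:
# A is precise but inaccurate (tight readings, large bias);
# B is accurate but imprecise (scattered readings, small bias).
true_value = 20.0
readings_a = [22.41, 22.39, 22.40, 22.42, 22.38]
readings_b = [19.1, 21.2, 20.3, 18.9, 20.6]

for name, readings in (("A (precise, inaccurate)", readings_a),
                       ("B (accurate, imprecise)", readings_b)):
    bias = statistics.mean(readings) - true_value   # accuracy: offset from truth
    spread = statistics.stdev(readings)             # precision: repeatability
    print(f"{name}: bias = {bias:+.2f} C, spread = {spread:.2f} C")
```

Instrument A's tight spread says nothing about its 2.4 C bias: agreement among repeated runs (precision) is not agreement with reality (accuracy).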

More thoroughgoing analyses have been posted at WUWT, here, here, and here, for example.

In climate model papers the typical uncertainty analyses are about precision, not about accuracy. They are appropriate to engineering models that reproduce observables within their calibration (tuning) bounds. They are not appropriate to physical models that predict future or unknown observables.

Climate modelers are evidently not trained in the scientific method. They are not trained to be scientists. They are not scientists. They are apparently not trained to evaluate the physical or predictive reliability of their own models. They do not manifest the attention to physical reasoning demanded by good scientific practice. In my prior experience they are actively hostile to any demonstration of that diagnosis.

In their hands, climate modeling has become a kind of subjectivist narrative, in the manner of the critical theory pseudo-scholarship that has so disfigured the academic Humanities and Sociology Departments, and that has actively promoted so much social strife. Call it Critical Global Warming Theory. Subjectivist narratives assume what should be proved (CO₂ emissions equate directly to sensible heat), their assumptions have the weight of evidence (CO₂ and temperature, see?), and every study is confirmatory (it’s worse than we thought).

Subjectivist narratives and academic critical theories are prejudicial constructs. They are in opposition to science and reason. Over the last 31 years, climate modeling has attained that state, with its descent into unquestioned assumptions and circular self-confirmations.

A summary of results: The paper shows that advanced climate models project air temperature merely as a linear extrapolation of greenhouse gas (GHG) forcing. That fact is multiply demonstrated, with the bulk of the demonstrations in the SI. A simple equation, linear in forcing, successfully emulates the air temperature projections of virtually any climate model. Willis Eschenbach independently discovered the same thing a while back.

After showing its efficacy in emulating GCM air temperature projections, the linear equation is used to propagate the root-mean-square annual average long-wave cloud forcing systematic error of climate models, through their air temperature projections.

The uncertainty in projected temperature is ±1.8 C after 1 year for a 0.6 C projection anomaly and ±18 C after 100 years for a 3.7 C projection anomaly. The predictive content in the projections is zero.
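Those numbers follow the standard root-sum-square (quadrature) accumulation of a constant per-step uncertainty, which is how the paper propagates the calibration error through the projection years; a minimal sketch of the arithmetic:

```python
import math

def propagated_uncertainty(n_steps, u_per_step):
    """Root-sum-square accumulation of a constant per-step uncertainty:
    u(n) = sqrt(n * u_per_step**2) = u_per_step * sqrt(n)."""
    return u_per_step * math.sqrt(n_steps)

u1 = 1.8  # ±C, the paper's 1-year uncertainty
print(propagated_uncertainty(1, u1))    # ±1.8 C after 1 year
print(propagated_uncertainty(100, u1))  # ±18 C after 100 years
```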

In short, climate models cannot predict future global air temperatures; not for one year and not for 100 years. Climate model air temperature projections are physically meaningless. They say nothing at all about the impact of CO₂ emissions, if any, on global air temperatures.

Here’s an example of how that plays out.


Panel a: blue points, GISS model E2-H-p1 RCP8.5 global air temperature projection anomalies. Red line, the linear emulation. Panel b: the same except with a green envelope showing the physical uncertainty bounds in the GISS projection due to the ±4 Wm⁻² annual average model long wave cloud forcing error. The uncertainty bounds were calculated starting at 2006.

Were the uncertainty to be calculated from the first projection year, 1850 (not shown in the Figure), the uncertainty bounds would be very much wider, even though the known 20th century temperatures are well reproduced. The reason is that the underlying physics within the model is not correct. Therefore, there’s no physical information about the climate in the projected 20th century temperatures, even though they are statistically close to observations (due to model tuning).

Physical uncertainty bounds represent the state of physical knowledge, not of statistical conformance. The projection is physically meaningless.

The uncertainty due to annual average model long wave cloud forcing error alone (±4 Wm⁻²) is about 114 times larger than the annual average increase in CO₂ forcing (about 0.035 Wm⁻²). A complete inventory of model error would produce enormously greater uncertainty. Climate models are completely unable to resolve the effects of the small forcing perturbation from GHG emissions.
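The ratio quoted here is straightforward arithmetic:

```python
lcf_error = 4.0           # W/m^2: annual average long wave cloud forcing rmse
co2_forcing_step = 0.035  # W/m^2: approximate annual increase in CO2 forcing

# Ratio of the calibration error to the perturbation being resolved
print(lcf_error / co2_forcing_step)  # about 114
```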

The unavoidable conclusion is that whatever impact CO₂ emissions may have on the climate cannot have been detected in the past and cannot be detected now.

It seems Exxon didn’t know, after all. Exxon couldn’t have known. Nor could anyone else.

Every single model air temperature projection since 1988 (and before) is physically meaningless. Every single detection-and-attribution study since then is physically meaningless. When it comes to CO₂ emissions and climate, no one knows what they’ve been talking about: not the IPCC, not Al Gore (we knew that), not even the most prominent of climate modelers, and certainly no political poser.

There is no valid physical theory of climate able to predict what CO₂ emissions will do to the climate, if anything. That theory does not yet exist.

The Stefan-Boltzmann equation is not a valid theory of climate, although people who should know better evidently think otherwise, including the NAS and every US scientific society. Their behavior in this is the most amazing abandonment of critical thinking in the history of science.

Absent any physically valid causal deduction, and noting that the climate has multiple rapid response channels to changes in energy flux, and noting further that the climate is exhibiting nothing untoward, one is left with no bearing at all on how much warming, if any, additional CO₂ has produced or will produce.

From the perspective of physical science, it is very reasonable to conclude that any effect of CO₂ emissions is beyond present resolution, and even reasonable to suppose that any possible effect may be so small as to be undetectable within natural variation. Nothing among the present climate observables is in any way unusual.

The analysis upsets the entire IPCC applecart. It eviscerates the EPA’s endangerment finding, and removes climate alarm from the US 2020 election. There is no evidence whatever that CO₂ emissions have increased, are increasing, will increase, or even can increase, global average surface air temperature.

The analysis is straightforward. It could have been done, and should have been done, 30 years ago. But it was not.

All the dark significance attached to whatever is the Greenland ice-melt, or to glaciers retreating from their LIA high-stand, or to changes in Arctic winter ice, or to Bangladeshi deltaic floods, or to Kiribati, or to polar bears, is removed. None of it can be rationally or physically blamed on humans or on CO₂ emissions.

Although I am quite sure this study is definitive, those invested in the reigning consensus of alarm will almost certainly not stand down. The debate is unlikely to stop here.

Raising the eyes, finally, to regard the extended damage: I’d like to finish by turning to the ethical consequence of the global warming frenzy. After some study, one discovers that climate models cannot model the climate. This fact was made clear all the way back in 2001, with the publication of W. Soon, S. Baliunas, S. B. Idso, K. Y. Kondratyev, and E. S. Posmentier, “Modeling climatic effects of anthropogenic carbon dioxide emissions: unknowns and uncertainties,” Climate Res. 18(3), 259-275, available here. The paper remains relevant.

In a well-functioning scientific environment, that paper would have put an end to the alarm about CO₂ emissions. But it didn’t.

Instead the paper was disparaged and then nearly universally ignored (Reading it in 2003 is what set me off. It was immediately obvious that climate modelers could not possibly know what they claimed to know). There will likely be attempts to do the same to my paper: derision followed by burial.

But we now know this for a certainty: all the frenzy about CO₂ and climate was for nothing.

All the anguished adults; all the despairing young people; all the grammar school children frightened to tears and recriminations by lessons about coming doom, and death, and destruction; all the social strife and dislocation. All the blaming, all the character assassinations, all the damaged careers, all the excess winter fuel-poverty deaths, all the men, women, and children continuing to live with indoor smoke, all the enormous sums diverted, all the blighted landscapes, all the chopped and burned birds and the disrupted bats, all the huge monies transferred from the middle class to rich subsidy-farmers.

All for nothing.

There’s plenty of blame to go around, but the betrayal of science garners the most. Those offenses would not have happened had not every single scientific society neglected its duty to diligence.

From the American Physical Society right through to the American Meteorological Society, they all abandoned their professional integrity, and with it their responsibility to defend and practice hard-minded science. Willful neglect? Who knows. Betrayal of science? Absolutely for sure.

Had the American Physical Society been as critical of claims about CO₂ and climate as they were of claims about palladium, deuterium, and cold fusion, none of this would have happened. But they were not.

The institutional betrayal could not be worse; worse than Lysenkoism because there was no Stalin to hold a gun to their heads. They all volunteered.

These outrages: the deaths, the injuries, the anguish, the strife, the misused resources, the ecological offenses, were in their hands to prevent, and so are on their heads for account.

In my opinion, the management of every single US scientific society should resign in disgrace. Every single one of them. Starting with Marcia McNutt at the National Academy.

The IPCC should be defunded and shuttered forever.

And the EPA? Who exactly is it that should have rigorously engaged, but did not? In light of apparently studied incompetence at the center, shouldn’t all authority be returned to the states, where it belongs?

And, in a smaller but nevertheless real tragedy, who’s going to tell the so cynically abused Greta? My imagination shies away from that picture.

An Addendum to complete the diagnosis: It’s not just climate models.

Those who compile the global air temperature record do not even know to account for the resolution limits of the historical instruments, see here or here.

They have utterly ignored the systematic measurement error that riddles the air temperature record and renders it unfit for concluding anything about the historical climate, here, here and here.

These problems are in addition to bad siting and UHI effects.

The proxy paleo-temperature reconstructions, the third leg of alarmism, have no distinct relationship at all to physical temperature, here and here.

The whole AGW claim is built upon climate models that do not model the climate, upon climatologically useless air temperature measurements, and upon proxy paleo-temperature reconstructions that are not known to reconstruct temperature.

It all lives on false precision; a state of affairs fully described here, peer-reviewed and all.

Climate alarmism is artful pseudo-science all the way down; made to look like science, but which is not.

Pseudo-science not called out by any of the science organizations whose sole reason for existence is the integrity of science.

Richard S Courtney
September 8, 2019 12:50 am

Pat Frank,

You say,
“In my prior experience, climate modelers:
· did not know to distinguish between accuracy and precision.
· did not understand that, for example, a ±15 C temperature uncertainty is not a physical temperature.
· did not realize that deriving a ±15 C uncertainty to condition a projected temperature does *not* mean the model itself is oscillating rapidly between icehouse and greenhouse climate predictions (an actual reviewer objection).
· confronted standard error propagation as a foreign concept.
· did not understand the significance or impact of a calibration experiment.
· did not understand the concept of instrumental or model resolution or that it has empirical limits
· did not understand physical error analysis at all.
· did not realize that ‘±n’ is not ‘+n.’

Some of these traits consistently show up in their papers. I’ve not seen one that deals properly with physical error, with model calibration, or with the impact of model physical error on the reliability of a projected climate.

SADLY, I CAN REPORT THAT THE PROBLEM IS WORSE THAN YOU SAY AND HAS EXISTED FOR DECADES.

I first came across it in the last century and published on it; ref. Courtney RS, “An Assessment of Validation Experiments Conducted on Computer Models of Global Climate (GCM) Using the General Circulation Model of the UK Hadley Centre,” Energy & Environment, v.10, no.5 (1999).
That paper concluded;
“The IPCC is basing predictions of man-made global warming on the outputs of GCMs. Validations of these models have now been conducted, and they demonstrate beyond doubt that these models have no validity for predicting large climate changes. The IPCC and the Hadley Centre have responded to this problem by proclaiming that the inputs which they fed to a model are evidence for existence of the man-made global warming. This proclamation is not true and contravenes the principle of science that hypotheses are tested against observed data.”

The IPCC’s Fourth Assessment Report (AR4) was published in 2007 and the IPCC subsequently published a Synthesis Report. The US National Oceanic and Atmospheric Administration (NOAA) asked me to review each draft of the AR4 Report, and Rajendra Pachauri (the then IPCC Chairman) asked me to review the draft Synthesis Report.

My review comments on the first and second drafts of the AR4 were completely ignored. Hence, I did not bother to review the Synthesis Report.

I posted the following summary of my Review Comments of the first draft of the AR4.

“Expert Peer Review Comments of the first draft of the IPCC’s Fourth Assessment Report
provided by Richard S Courtney

General Comment on the draft Report.

My submitted review comments are of Chapters 1 and 2 and they are offered for use, but their best purpose is that they demonstrate the nature of the contents of the draft Report. I had intended to peer review the entire document but I have not bothered to complete that because the draft is of such poor quality that my major review comment is:

The draft report should be withdrawn and a report of at least acceptable scientific quality should be presented in its place.

My review comments include suggested corrections to
• a blatant lie,
• selective use of published data,
• use of discredited data,
• failure to state (important) limitations of stated information,
• presentation of not-evidenced assertions as information,
• ignoring of all pertinent data that disproves the assertions,
• use of illogical arguments,
• failure to mention the most important aerosol (it provides positive forcing greater than methane),
• failure to understand the difference between reality and virtual reality,
• arrogant assertion that climate modellers are “the scientific community”,
• claims of “strong correlation” where none exists,
• suggestion that correlation shows causality,
• claim that peer review proves the scientific worth of information,
• claim that replication is not essential to scientific worth of information,
• misleading statements,
• ignorance of the ‘greenhouse effect’ and its components,
• and other errors.

Perhaps the clearest illustration of the nature of the draft Report is my comment on a Figure title. My comment says;

Page 1-45 Chapter 1 Figure 1.3 Title
Replace the title with,
“Figure 1.3. The Keeling curve showing the rise of atmospheric carbon dioxide concentration measured at Mauna Loa, Hawaii”
because the draft title is untrue, polemical assertion (the report may intend to be a sales brochure for one very limited scientific opinion but there is no need to be this blatant about it).
Richard S Courtney (exp.) ”

I received no response to my recommendation that
“The draft report should be withdrawn and a report of at least acceptable scientific quality should be presented in its place”,
but I was presented with the second draft that contained many of the errors that I had asked to be corrected in my review comments of the first draft (that I summarised as stated above).

I again began my detailed review of the second draft of the AR4. My comments totalled 36 pages of text requesting specific changes. The IPCC made them available for public observation on the IPCC’s web site. I commented on the Summary for Policy Makers (SPM) and the first eight chapters of the Technical Summary. At this point I gave up and submitted the comments I had produced.

I gave up because it was clear that my comments on the first draft had been ignored, and there seemed little point in further review that could be expected to be ignored, too. Upon publication of the AR4 it became clear that I need not have bothered to provide any of my review comments.

And I gave up my review of the AR4 in disgust at the IPCC’s over-reliance on not-validated computer models. I submitted the following review comment to explain why I was abandoning further review of the AR4 second draft.

Page 2-47 Chapter 2 Section 2.6.3 Line 46
Delete the phrase, “and a physical model” because it is a falsehood.
Evidence says what it says, and construction of a physical model is irrelevant to that in any real science.

The authors of this draft Report seem to have an extreme prejudice in favour of models (some parts of the Report seem to assert that climate obeys what the models say; e.g. Page 2-47 Chapter 2 Section 2.6.3 Lines 33 and 34), and this phrase that needs deletion is an example of the prejudice.

Evidence is the result of empirical observation of reality.
Hypotheses are ideas based on the evidence.
Theories are hypotheses that have repeatedly been tested by comparison with evidence and have withstood all the tests.
Models are representations of the hypotheses and theories. Outputs of the models can be used as evidence only when the output data is demonstrated to accurately represent reality. If a model output disagrees with the available evidence then this indicates fault in the model, and this indication remains true until the evidence is shown to be wrong.

This draft Report repeatedly demonstrates that its authors do not understand these matters. So, I provide the following analogy to help them. If they can comprehend the analogy then they may achieve graduate standard in their science practice.
A scientist discovers a new species.
1. He/she names it (e.g. he/she calls it a gazelle) and describes it (e.g. a gazelle has a leg in each corner).
2. He/she observes that gazelles leap. (n.b. the muscles, ligaments etc. that enable gazelles to leap are not known, do not need to be discovered, and do not need to be modelled to observe that gazelles leap. The observation is evidence.)
3. Gazelles are observed to always leap when a predator is near. (This observation is also evidence.)
4. From (3) it can be deduced that gazelles leap in response to the presence of a predator.
5. n.b. The gazelle’s internal body structure and central nervous system do not need to be studied, known or modelled for the conclusion in (4) that “gazelles leap when a predator is near” to be valid. Indeed, study of a gazelle’s internal body structure and central nervous system may never reveal that, and such a model may take decades to construct following achievement of the conclusion from the evidence.

(Having read all 11 chapters of the draft Report, I had intended to provide review comments on them all. However, I became so angry at the need to point out the above elementary principles that I abandoned the review at this point: the draft should be withdrawn and replaced by another that displays an adequate level of scientific competence).”

I could have added that the global climate system is more complex than the central nervous system of a gazelle and that an incomplete model of a gazelle’s central nervous system could be expected to provide incorrect indications of gazelle behaviour.

Simply, the climate modellers are NOT scientists: they seem to think reality does not require modelling but, instead, reality has to obey ideas they present as models.

Richard

Reply to  Richard S Courtney
September 8, 2019 11:34 am

Richard, you’ve been a hero on this topic for many years. We can hope you finally get satisfaction.

Reply to  Pat Frank
September 8, 2019 2:54 pm

Ditto.

HAS
Reply to  ...and Then There's Physics
September 8, 2019 2:47 am

ATTP, I don’t think GCMs being stable to perturbations in initial conditions demonstrates that the cloud forcing error is an offset. The argument is that GCMs lack information about that forcing and are therefore imprecise, which makes their behavior an unreliable witness. The way they are constructed means they are likely to be stable regardless.

What the emulator does is give a simple model of GCMs with which to explore the impact of that imprecision without running lots of GCMs, and, assuming it is a good emulator, it says that current GCMs could be significantly out in their projections. Your line of argument needs to address whether the way the emulator is used to estimate the impact of the imprecision is robust; the behavior of the GCMs is not really relevant at this point.

However I’d add that if the emulator didn’t show the same behavior as the GCMs that would be relevant.

Reply to  HAS
September 8, 2019 3:50 am

The point about GCMs being stable to perturbations in the initial conditions is simply meant to illustrate that the cloud forcing uncertainty clearly doesn’t propagate as claimed by Pat Frank. A key point is that the uncertainty that Pat Frank claims is ±4 W/m²/year/model is really a root-mean-square error, which simply has units of W/m² (there is no year⁻¹ model⁻¹). It is essentially a base-state offset that should not be propagated from timestep to timestep. You can also read Nick Stokes’ new post about this.

https://moyhu.blogspot.com/2019/09/another-round-of-pat-franks-propagation.html

Reply to  ...and Then There's Physics
September 8, 2019 5:42 am

illustrate that the cloud forcing uncertainty clearly doesn’t propagate as claimed by Pat Frank.

But that’s not the point at all. Saying that the propagated error is much larger than the range of values returned over multiple runs doesn’t mean there is an expectation that runs can ever reach those values. It means that whatever value that is reached is meaningless.

Just because the models are constrained to stay within sensible boundaries doesn’t make the result meaningful and make no mistake, GCMs can and do spiral off outside those boundaries and need to be carefully managed to keep them in a sensible range.

For example

Global Climate Models and Their Limitations
http://solberg.snr.missouri.edu/gcc/_09-09-13_%20Chapter%201%20Models.pdf

Observational error refers to the fact that instrumentation cannot measure the state of the atmosphere with infinite precision; it is important both for establishing the initial conditions and validation. Numerical error covers many shortcomings including “aliasing,” the tendency to misrepresent the sub-grid scale processes as larger-scale features. In the downscaling approach, presumably errors in the large-scale boundary conditions also will propagate into the nested grid. Also, the numerical methods themselves are only approximations to the solution of the mathematical equations, and this results in truncation error. Physical errors are manifest in parameterizations, which may be approximations, simplifications, or educated guesses about how real processes work. An example of this type of error would be the representation of cloud formation and dissipation in a model, which is generally a crude approximation.

Each of these error sources generates and propagates errors in model simulations. Without some “interference” from model designers, model solutions accumulate energy at the smallest scales of resolution or blow up rapidly due to computational error.

John Q Public
Reply to  ...and Then There's Physics
September 8, 2019 12:04 pm

Good point. I spent some time trying to find the “per year” part in Frank’s ref. 8 (Lauer et al.) and found some evidence that this is what they intended to say, but it is not clear. Maybe Pat Frank can elaborate.

In Section 3, Lauer talks about “multiyear annual mean.” On page 3831 I read “Biases in annual average SCF…”, but on page 3833, where the ±4 W/m² is given, they just say “the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m⁻²) and ranges between 0.70 and 0.92 (rmse = 4–11 W m⁻²) for the individual models.” (Still in Section 3.)

John Q Public
Reply to  John Q Public
September 8, 2019 12:53 pm

In the conclusions, Lauer, et al. state “The CMIP5 versus CMIP3 differences in the statistics of **interannual** variability of SCF and LCF are quite modest, although a systematic overestimation in **interannual** variability of CA in CMIP3 is slightly improved over the continents in CMIP5.” (** added)

“The better performance of the models in reproducing observed annual mean SCF and LCF therefore suggests that this good agreement is mainly a result of careful model tuning rather than an accurate fundamental representation of cloud processes in the models”

John Q Public
Reply to  John Q Public
September 8, 2019 12:58 pm

At the start of the section where Lauer introduces the LCF ±4 W/m², he states for LWP:

“Just as for CA, the performance in reproducing the observed multiyear **annual** mean LWP did not improve considerably in CMIP5 compared with CMIP3. The rmse ranges between 20 and 129 g m⁻² in CMIP3 (multimodel mean = 22 g m⁻²) and between 23 and 95 g m⁻² in CMIP5 (multimodel mean = 24 g m⁻²).”

He continues with the other parameters, but appears to drop the formality of stating “observed multiyear annual mean” as a preface to the values. To me this strongly implies the 4 W/m² is an annual mean.

Reply to  John Q Public
September 8, 2019 1:03 pm

An annual mean is a yearly average is per year, John.

Reply to  ...and Then There's Physics
September 8, 2019 12:49 pm

Nick is wrong yet again.

He supposes that if one averages a time-varying error over a time range, that the average does not include error/time.

Tim the Tool Man above makes a fine analogy in terms of errors in steps per mile.

Nick would have it, and ATTP too, that if one averages the step error over a large number of steps, the final average would _not_ be error/step.

Starting out with this very basic mistake, they both go wildly off on irrelevant criticisms.

Nick goes on to say this: “I vainly pointed out that if he had gathered the data monthly instead of annually, the average would be assigned units/month, not /year, and then the calculated error bars would be sqrt(12) times as wide.”

No, the error bars would not be sqrt(12) times greater because the average error units would be twelve times smaller.

Earth to Nick (and to ATTP): 1/240*(sum of errors) is not equal to 1/20*(sum of errors).

See Section 6-2 in the SI.
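For readers keeping score, the disputed arithmetic can be laid out side by side. A neutral sketch with illustrative numbers (which per-step uncertainty magnitude is physically correct is exactly what is contested here, and this sketch does not settle it):

```python
import math

u_annual = 4.0  # ±W/m^2, treated as a per-year calibration uncertainty
n_years = 20

# Annual steps: quadrature accumulation over 20 steps.
total_annual = u_annual * math.sqrt(n_years)

# Stokes's reading: monthly steps with the SAME per-step magnitude
# accumulate over 12x as many steps, widening the bars by sqrt(12).
total_monthly_same_u = u_annual * math.sqrt(12 * n_years)

# Frank's rejoinder: the per-month error magnitude is itself smaller
# (he argues by a factor of 12), so the accumulated total shrinks.
total_monthly_smaller_u = (u_annual / 12) * math.sqrt(12 * n_years)

print(total_annual, total_monthly_same_u, total_monthly_smaller_u)
```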

Nick goes on to say, “There is more detailed discussion of this starting here. In fact, Lauer and Hamilton said, correctly, that the RMSE was 4 Wm-2. The year-1 model-1 is nonsense added by PF…”

Nick is leaving out qualifying context.

Here’s what Lauer and Hamilton actually write: “A measure of the performance of the CMIP model ensemble in reproducing observed mean cloud properties is obtained by calculating the differences in modeled (x_mod) and observed (x_obs) 20-yr means. These differences are then averaged over all N models in the CMIP3 or CMIP5 ensemble…”

A 20-year mean is an average per year. What’s to question?

Count apples in various baskets. Take the average: apples/basket. This is evidently higher math than Nick can follow.

The annual average of a sum of time-varying error values taken over a set of models is error per model per year. Apples per basket per room.

Lauer and Hamilton go on, “Figure 2 shows 20-yr annual means for liquid water path, total cloud amount, and ToA CF from satellite observations and the ensemble mean bias of the CMIP3 and CMIP5 models. (my bold)”

Looking at Figure 2, one sees positive and negative errors depicted across the globe. The global mean error is the root-mean-square, leading to ±error. Given that the mean error is taken across multiple models it represents ±error/model.

Given that the mean error is the annual error taken across multiple models taken across 20 years, it represents ±error/model/year.

This obvious result is also on the Nick Stokes/ATTP denial list.

Average the error across all the models: ±(error/model). Average the error for all the models across the calibration years: ±(error per model per year). Higher math, indeed.

This is first-year algebra, and neither Nick Stokes nor ATTP seems to get it.

For Long wave cloud forcing (LCF) error, Lauer and Hamilton describe it this way: “For CMIP5, the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m^-2) and ranges between 0.70 and 0.92 (rmse = 4–11 W m^-2) for the individual models. (my bold)”

Nick holds that rmse doesn’t mean root-mean-square error, i.e., ±error. He takes it to mean a positive-sign vertical offset.

Nick’s logic also requires that standard deviations around any mean are not ±, but mere positive-sign values. He even admits it: “Who writes an RMS as ±4? It’s positive.”

Got that? According to Nick Stokes, -4 (negative 4) is not a square root of 16.

When taking the mean of a set of values, and calculating the rmse about the mean, Nick allows only the positive values of the deviations.

It really is incredible.
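A minimal numeric sketch of the quantity under dispute, with invented deviations: the root-mean-square of a set of signed deviations comes out positive by construction, and quoting it as “±rmse” uses it as the half-width of an uncertainty interval about the mean.

```python
import math

# A minimal sketch (invented deviations): the root-mean-square of a set
# of signed deviations. The rms itself is positive by construction;
# writing "±rmse" uses it as the half-width of an uncertainty interval
# about the mean.
deviations = [3.0, -5.0, 4.0, -4.0]
rmse = math.sqrt(sum(d * d for d in deviations) / len(deviations))
print(round(rmse, 3))  # 4.062
```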

Reply to  Pat Frank
September 8, 2019 4:07 pm

Over on my blog, Steve and Nick joined in the discussion of significant digits and error calculation. (The URL is
https://jaschrumpf.wordpress.com/2019/03/28/talking-about-temperatures
if anyone is interested in reading the thread.)

In one post Steve stated that when they report the anomaly as, e.g., 0.745C, they are saying the prediction of 0.745C will have the smallest error of prediction; that it would be smaller than the error from using 0.7C or 0.8C.

However, what that number (the standard error in the mean) is saying is that if you resampled the entire population again, your new mean would stand a roughly 68% chance of being within that standard error of the first calculation of the mean.

It doesn’t mean that the mean is accurate to three decimals. If the measurements were in tenths of a degree, the mean has to be stated in tenths of a degree, regardless of how many decimals are carried in the calculation.

Neither seemed to have any grasp of the importance of that in scientific measurement at all.
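A minimal sketch of that point, with invented readings recorded to tenths of a degree: the computed mean and its standard error carry more decimals than the data resolution supports.

```python
import math

# A minimal sketch (invented readings recorded to tenths of a degree):
# the computed mean carries extra decimals and the standard error of
# the mean is small, but neither adds resolution the readings lack.
readings = [20.1, 20.3, 20.2, 20.4, 20.2, 20.3, 20.1, 20.4]
n = len(readings)
mean = sum(readings) / n
var = sum((x - mean) ** 2 for x in readings) / (n - 1)
sem = math.sqrt(var / n)

print(round(mean, 4))  # 20.25 -- more decimals than the data resolution
print(round(sem, 4))   # ~0.0423 -- small, but not extra resolution
print(round(mean, 1))  # the mean stated at the readings' own resolution
```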

Reply to  Pat Frank
September 8, 2019 5:36 pm

“A 20 year mean is average/year.”
No, it isn’t, in any sane world. It’s the same mean as if calculated for 240 months. But that doesn’t make it average/month.

You say in the paper
“The CMIP5 models were reported to produce an annual average LWCF RMSE = ±4 Wm^-2 year^-1 model^-1, relative to the observational cloud standard (Lauer and Hamilton, 2013).”
That is just misrepresentation. Lauer and Hamilton 2012 said, clearly and explicitly, as you quoted it:
“For CMIP5, the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m^-2) and ranges between 0.70 and 0.92 (rmse = 4–11 W m^-2) for the individual models. (my bold)”

Again, “rmse = 4 W m^-2”. No ± and no per year (or per model). It is just a stated figure. It is your invention that, because they binned their data annually, the units are per year. They didn’t say so, and it isn’t true. If they had binned their data some other way, or not at all, the answer would still be the same – 4 W m^-2.

Actually, we don’t even know how they binned their data. You’ve constructed the whole fantasy on the basis that they chose annual averages for graphing. There is no difference between averaging rmse and averaging temperature, say. You don’t say that Miami has an average temperature of 24°C/year because you averaged annual averages.

“Got that?”
Well we’ve been through that before, but without you finding any usage, anywhere, where people referred to rmse with ±. Lauer and Hamilton just give a positive number. This is just an eccentricity of yours, harmless in this case. But your invention of extra units feeds straight into your error arithmetic, and gives a meaningless result.

Reply to  Pat Frank
September 9, 2019 1:19 pm

James S –> I read your blog for the first time. I have been working on a paper discussing these same things since February and things keep interfering with my finishing it.

I wanted to point out that you are generally right in what you’re saying. But let me elucidate a little more. Let’s use very simple temp measurements that are reported to integer values with an error of +/- 0.5 degrees. For example, let’s use 50 and 51 to start.

When you see 50 +/- 0.5 degrees, this means the temperature could have been anywhere from 49.5 to 50.5. Similarly, 51 +/- 0.5 degrees means a temperature of 50.5 to 51.5. What is the probability of any particular temperature within this range? It is uniform: a temp of 49.6 is just as likely as 50.2516 for the lower recorded value. There is simply no way to know what the real temperature was at the time of the reading and recording. I call this ‘recording error’ and it is systematic.

This means recording errors of different measurements cannot be considered random, and the error of the mean is not an appropriate descriptor. The central limit theorem does not apply. That requires measuring the SAME THING with the same device multiple times, or taking multiple samples from a common population. Only when those conditions hold can you statistically derive a value that is close to the true value. What you have with temperature measurements are multiple non-overlapping populations. Measuring a temperature at a given point in time is ONE MEASUREMENT of ONE THING. There is simply no way to tighten the mean, since with N = 1, 1/sqrt(N) = 1.

What are the ramifications of this when averaging? Both temps could be at the low value or they could both be at the high value! You simply don’t know or have any way of knowing.

What is the average of the possible lows – 49.5 and 50.5? It is 50.

What is the average of the possible highs – 50.5 and 51.5? It is 51.

What is the correct way to report this? It is 50.5 +/- 0.5. This is the only time I know of where adding a significant digit is appropriate. However, you can only do this if the recording-error component is propagated throughout the calculations. You cannot characterize the value using the standard deviation or the error of the mean, because those remove the original range of what the readings could have been.
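The endpoint arithmetic above can be sketched directly:

```python
# A minimal sketch of the endpoint arithmetic above: averaging two
# readings, 50 +/- 0.5 and 51 +/- 0.5. The interval endpoints average,
# so the half-width of the result stays 0.5.
lo1, hi1 = 49.5, 50.5   # 50 +/- 0.5
lo2, hi2 = 50.5, 51.5   # 51 +/- 0.5

avg_lo = (lo1 + lo2) / 2             # average of the possible lows: 50.0
avg_hi = (hi1 + hi2) / 2             # average of the possible highs: 51.0
center = (avg_lo + avg_hi) / 2       # 50.5
half_width = (avg_hi - avg_lo) / 2   # 0.5
print(f"{center} +/- {half_width}")  # 50.5 +/- 0.5
```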

On your blog, Nick tried to straw-man this by using multiple measurements with a ruler to find a distance of 50 m. The simple answer is as above. Making multiple measurements of varying marks within the 50 m is not measuring the same thing multiple times. The measurement error of each measurement IS NOT reduced through a statistical calculation of the error of the mean. Why? You are not taking samples of a single population. If each measurement had an error of +/- 0.2, then they all could have been +0.2 or they all could have been -0.2. The appropriate report would be the measurements added together, with an uncertainty of 50 * (+/- 0.2) = +/- 10 cm. This is what uncertainty is all about. Now if you had made 50 attempts at measuring the 50 m, then you could have taken the error of the mean of the max and min measurements. But guess what? The measurement errors would still have to propagate.

Here is a little story to think about. Engineers deal with this all the time. I can take 10,000 1.5k +/- 20% ohm resistors and measure them. I can average the values and get a very, very accurate mean value. Let’s say, 1.483k +/- 0.01 ohms. Yet when I tell the designers what the tolerance is, can I use the 1.483k +/- 0.01 ohms (uncertainty of the mean), or do I specify the 1.48k +/- 18% ohms (the three-sigma tolerance)?
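A minimal simulation of the resistor story (invented data; a uniform spread is assumed purely for illustration): the mean of many parts is pinned down precisely, but the tolerance a designer must allow for any single part is set by the spread of the distribution, not by the error of the mean.

```python
import random

# A minimal simulation of the resistor story (invented data: a uniform
# spread is assumed purely for illustration). The standard error of the
# mean shrinks as 1/sqrt(N), but the tolerance on any single part is
# set by the spread of the distribution itself.
random.seed(0)
nominal, tol = 1500.0, 0.20
parts = [random.uniform(nominal * (1 - tol), nominal * (1 + tol))
         for _ in range(10_000)]

n = len(parts)
mean = sum(parts) / n
sd = (sum((r - mean) ** 2 for r in parts) / (n - 1)) ** 0.5
sem = sd / n ** 0.5

print(round(mean, 1))  # close to 1500 ohms
print(round(sem, 2))   # tiny: uncertainty of the mean
print(round(sd, 1))    # large: spread of individual parts (~173 for uniform)
```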

HAS
Reply to  ...and Then There's Physics
September 8, 2019 2:10 pm

I expanded a bit on what I see as the difficulty with your approach to your critique in my response to Nick Stokes above. You do need to be rigorous in separating out the various domains in play.

I must look more closely at the specific issue of cloud forcing and dimensions etc., but at first blush a systematic error in forcing in “emulator world”, based on the particular linear equations, would seem to propagate. If that seems inappropriate in either the real world or “GCM world” then that obviously needs to be explored.

Reply to  ...and Then There's Physics
September 8, 2019 11:36 am

And be sure to read the debate below Patrick Brown’s video.

He is a very smart guy, but has been betrayed by his professors. They taught him nothing about physical error analysis.

And you betray no such knowledge either, ATTP.

Reply to  Pat Frank
September 8, 2019 1:20 pm

Pat,
If I remember correctly, I was involved in the debate below Patrick Brown’s video. Just out of interest, how many physical scientists have you now encountered who – according to you – have no knowledge of physical error analysis? My guess is that it’s quite a long list. Have you ever pondered the possible reasons for this?

Reply to  ...and Then There's Physics
September 8, 2019 4:33 pm

Not one actual physical scientist with whom I discussed the analysis failed to understand it immediately, ATTP. Not one.

They all immediately understood and accepted it.

I have given my talk before audiences that included physicists and chemists. No objections were raised.

Nor did they raise any objections about averages having a denominator unit — your objection, and Patrick Brown’s, and Nick’s.

My climate modeler reviewers have raised amazingly naive objections, such as that a ±15 C uncertainty implied that the model must be wildly oscillating between greenhouse and ice-house states.

With that sort of deep insight — and that was far from the only example — no one can blame me for concluding a lack of training in physical error analysis.

However, none of them raised the particular objection that averages have no denominator unit, either. Patrick Brown was the only one. And apparently you and Nick.

Nick Stokes is no scientist, ATTP. Nor, I suspect, are you.

Reply to  Pat Frank
September 8, 2019 6:02 pm

And you, Mr. Pat Frank, are not a statistician. In the absolutely continuous case, the “average” of a random variable is its expected value:

E[X] = ∫_{-∞}^{∞} x f(x) dx

Good luck assigning per year, per month, per day, or per hour to that.

...
Reply to  Pat Frank
September 9, 2019 5:26 am

Pat,
How do you define a scientist?

John Tillman
Reply to  Pat Frank
September 9, 2019 12:52 pm

You didn’t ask me, but at a minimum, a researcher or analyst has to practice the scientific method, which a consensus of supposed experts isn’t.

John Tillman
Reply to  Pat Frank
September 9, 2019 12:54 pm

Donald,

Please help us out here by pointing out the statistical errors which you believe Pat has made.

Thanks!

Reply to  Pat Frank
September 9, 2019 8:17 pm

The responses to Pat Frank here posted before and at 7:11 PM PDT 9/9/2019 by someone using my name were not posted by me.

Reply to  Donald L. Klipstein
September 9, 2019 8:32 pm

all removed ~ctm

Reply to  Pat Frank
September 9, 2019 11:48 pm

Mr/s …, someone trained, self- or otherwise, in the practice of, and engaged in the forwarding of, falsifiable theory and replicable result.

That training will include the pragmatics of experiment. Neither Nick nor ATTP evidence any of that.

September 8, 2019 2:04 am

So much to say about this.
But it is late and I just want to say something very clearly:
Whenever you speak to someone who has been taken in by the global warming malarkey, just know you are speaking to someone who either has no idea what they are talking about, or is a deliberate and malicious liar.

Fool or liar.

Several flavors of each, but all are one of these.

September 8, 2019 2:47 am

I would love to be wrong, but this work will be essentially ignored. The climate debate has moved beyond science into psychological emotion: the emotion of impending Apocalypse, the emotion of saving the world through sacrifice. The emotional nature of the debate is personified in Greta Thunberg. You can’t fight that with science, much less when scores of scientists making a living out of the “climate emergency” will contradict you.

The fight has been lost; we are just a testimony that not everyone was overcome by climate madness. But we are irrelevant while climate directive after climate directive is being approved in Western countries.

Reply to  Javier
September 8, 2019 4:47 am

Wait until the lights start going out and crops start failing or just running out.
People and families freezing in the dark and with no food will not die quietly.
At least, that has never been what has happened in the past.
We all know how long it took for Venezuela to go from the most prosperous country in South America, to empty shelves, people eating dogs and cats, and hungry hordes scavenging in city dumps for morsels of food or scraps to sell.
Not long.
No idea where you live, but no fight has been lost here in the US.
We have not even had a real fight yet.
I would not bet on the snowflakes winning if and when one occurs.

Reply to  Javier
September 8, 2019 7:01 am

There is plenty of hope.
Don’t judge the world by what you read in the newspapers or see on the internet.
Large areas of this world (China, Russia, South America, Africa, Southeast Asia), that is, most of the non-Western and/or non-European world, which is most of the world, don’t buy into this stuff.
England, for example, makes a lot of noise about renewables and climate and CO2. England contributes 1.2% of the global human caused CO2 emissions. They don’t even count but you wouldn’t know that by their crowing.
Try to think of this global warming stuff like WW I, a madness affecting Europeans. Very self-destructive, and which overturned the status quo of Europe, but, in the end, things went back to “normal” and the world moved on. WW II was just a tidying up of the mess made by WW I. If we think of global warming like Marxism, then, yes, I would be much more worried, but unlike Marxism, global warming seems to have little attraction for non-Europeans.

PETER BUCHAN
Reply to  Javier
September 8, 2019 9:03 am

Javier, I must admit that with ever-greater frequency your posts – even though pithy and terse at times – keep rising in value to this site, with this one serving as a perfect example.

For what it is worth, I am neither a scientist nor a scholar, but I am an inveterate student, a serial entrepreneur with business interests and supply chains spanning 5 continents – and old and well traveled enough to have glimpsed the multi-layered currents at work as the world and human society grow ever more complex; enough to know that in Climate (and many other fields) appeals to “science”, “lived experience” and (bona fide) “cautionary principles” are now PROXIMATE, while the underlying and expedient economic, socio-political and geo-strategic doctrines are ULTIMATE.

Pat Frank’s work – and that of many others striving for sense as global society loses its mind ever more rapidly – may well get its moment in the sun. But that will come in a time of reflection, after the true effects of the borderless One-World-One-Mind-One-Currency utopian doctrine have bitten so hard that enough of the Mob comes to its senses “slowly, and one by one”.

As usual with such things, hope and salvation seem likely to spring from an unexpected direction. So take it pragmatically from me (if you will): we’ve entered the acceleration phase of a fundamental tectonic event in the global monetary system that promises to strip away the silky veneer covering the true intentions of the CAGW ideologues. Sure, global temperatures will continue to creep upward, but faced with far greater, more immediate and more tangible problems, billions of ordinary people will simply do what they have always done: adapt and mitigate.

Until the next existential crisis is harnessed, and to the exact same ends.

Keep up the good work, Sir. Not to belabour the point but when you threatened to get “outta here” a while back I wrote directly to Anthony to make the case that your absence from this forum would deal it a severe blow.

Reply to  PETER BUCHAN
September 8, 2019 5:57 pm

Thank you for your words, Peter. I am glad some people appreciate my modest contribution to this complex issue.

I agree very much with what you say, and I also think that the monetary experiment the central banks of the world embarked on after the great financial crisis is unlikely to have a good outcome in the end, and the climate worries of the people will evaporate the moment we have more serious problems.

30 years ago I would have found it a lot more difficult to believe that Europe would be stuck in negative interest rates than that we would be having a serious climate crisis. Yet here we are, with modest warming and insignificant sea-level rise, but with interest rates sinking below zero because lots of countries can hardly pay the interest on their debt. Yet people are worried about the climate. Talk about serious disconnect.

Clyde Spencer
Reply to  Javier
September 8, 2019 11:16 am

Javier
It is not at all unlike the behavior of superstitious primitives quick to sacrifice a virgin to the angry volcano god. It is hard to convince the natives that it was all in vain when the volcano eventually stops erupting, which they always do! The irony is that (in my experience) the liberals on the AGW bandwagon view themselves as being intellectually and morally superior to the “deplorables” in ‘fly over country.’ The reality is, they are no better than the primitive natives. They just think that they are superior, with little more evidence to prove it than they demand for the beliefs they hold.

Loydo
Reply to  Clyde Spencer
September 10, 2019 1:14 am

Says a tiny claque of angry white men, huddled around in an echo chamber. Huge changes are afoot but their blinkers hide it. They think everyone else (that is, every single scientific organisation and every meteorological organisation in the world) are “superstitious primitives”. They just think that they are superior.

Clyde Spencer
Reply to  Loydo
September 10, 2019 9:35 am

Loydo
The alarmists may not be pleased with your description of them.

Derg
Reply to  Loydo
September 10, 2019 9:38 am

Loydo…angry white men. You got the talking points down 😉

Latitude
Reply to  Loydo
September 10, 2019 7:11 pm

..that is so like…deep like

John Q Public
Reply to  Javier
September 8, 2019 1:15 pm

Had Hillary Clinton won, it would be game over. Trump won, and whether you like him or not, he has given reason a little breathing room. If he wins again, our chances increase.

Reply to  Javier
September 8, 2019 2:41 pm

We still have our Secret Weapon… Trump.

OK… he’s not so secret anymore. But Democrat’s in their elitist arrogance and hubris consistently misunderstand the man and his methods, thus they underestimate what is happening to them as they Sprint Leftward as a response to their derangement-induced insanity.

Trump is not the force but he is catalyzing the Left’s self-Destruction. By definitions “catalysis” only speeds up reaction. Trump’s just helping Democrat’s find their natural state of insanity at a much quicker pace.

Warren
Reply to  Joel O'Bryan
September 9, 2019 1:25 am

Well said Joel!

Knr
September 8, 2019 3:28 am

The process to deal with this paper, from the climate-doom perspective, is simple: starve it to death. Give it no coverage, and rely on the fact that the world moves on and lots of papers get published on a daily basis, so it will become old news very quickly.
Once again, this is a battle that has little to do with science. Showing their science to be wrong is not an effective way to beat them.

Editor
September 8, 2019 4:16 am

Pat,

Wow! I need to read this a few dozen times for it to fully sink in… But, this seems to literally be a “stake in the heart.”

Reply to  David Middleton
September 13, 2019 6:12 pm

You’re clearly unclear on the meaning of “literally.”

Pyrthroes
September 8, 2019 4:51 am

Though standard sources studiously omit all reference to Holmes’ Law (below), asserting that “no climate theory addresses CO2 factors contributing to global temperature” is quite wrong.

In December 2017, Australian researcher Robert Holmes’ peer-reviewed Molar Mass version of the Ideal Gas Law definitively refuted any possible CO2 connection to climate variations: where GAST temperature T = PM/(Rρ), any planet’s – repeat, any – near-surface global temperature derives from its atmospheric pressure P times mean molar mass M, over its gas constant R times atmospheric density ρ.

On this easily confirmed, objectively measurable basis, Holmes derives each planet’s well-established temperature with virtually zero error margin, meaning that no 0.042% (420 ppm) “greenhouse gas” (CO2) component has any relevance whatever.
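The quoted relation can be evaluated for Earth with standard sea-level values (my assumed figures, not given in the comment):

```python
# Evaluating the quoted relation T = P*M/(R*rho) for Earth, with
# standard sea-level values (assumptions for illustration):
P = 101325.0   # surface pressure, Pa
M = 0.0290     # mean molar mass of air, kg/mol
R = 8.314      # gas constant, J/(mol K)
rho = 1.225    # surface air density, kg/m^3

T = P * M / (R * rho)
print(round(T, 1))  # ~288.5 K, near the observed mean surface temperature
```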

ghalfrunt
September 8, 2019 5:01 am

OK so let’s assume that AGW is safe or even non existent.
We know we are in a period of quiet sun (lower TSI).
Milankovitch cycles are on a downward temperature trend, but in any case over 50+ years will have an insignificant effect.
Let us assume all ground based temperature sequences are fake.
We can see that all satellite temperatures show an increasing temperature.
So with lower TSI, Milankovitch cycles insignificant, TSI at its lowest for decades. Just what is causing the increase in temperature as shown by the satellite temperature record?
Things like the cyclical events (el Niño etc.) are just that – cyclical with no decadal energy increase so just what is the cause????

Reply to  ghalfrunt
September 8, 2019 5:29 am

I think if we had those same satellite temps going back to the turn of the 20th century, it would be obvious there is nothing to be concerned about.
Where is the catastrophe?
What climate crisis?

Phil
Reply to  ghalfrunt
September 8, 2019 10:14 am

The assumption is that the world’s climate is a univariate system with only one significant variable: carbon dioxide. The world’s climate is much more likely to be a multi-variate system with many significant variables. Carbon dioxide is a trace gas. Not significant. There can be many causes, including changes in cloud fraction. That cloud fraction is poorly modeled within GCMs is a red flag that the theory isn’t correct. Changes in cloud fraction can explain changes in observed temperatures. However, modeling clouds is difficult, so it is difficult to know exactly what is causing changes in observed temperatures. We are being presented with a false choice: changes in observed temperatures are caused by minute changes in a trace gas or not. There are more choices, but it has all been boiled down to a binary choice.

Reply to  ghalfrunt
September 10, 2019 6:59 pm

[1] So with lower TSI, Milankovitch cycles insignificant, TSI at its lowest for decades. Just what is causing the increase in temperature as shown by the satellite temperature record? [2] Things like the cyclical events (el Niño etc.) are just that – cyclical with no decadal energy increase so just what is the cause????

ghalfrunt – you’re OT but here’s the answer from my journey:

[1] TSI and its effects are misunderstood. The greatest climate risks derive from long-duration high solar activity cycles, and from the opposite condition, long-duration low solar activity. The types of climate risks go in different directions for each extreme, with one exception: high UVI under low TSI.

[2] Integrated MEI – mostly positive MEI during decades of predominately El Niños – drove HadSST3 and Total ACE higher, from higher sunspot activity (TSI). Higher climate risk from hurricanes/cyclones occurs from higher solar activity, higher TSI.

The temperature climbs from long-term high solar activity above 95 v2 SN.

The thing to know is the decadal solar ocean warming threshold of 95 v2 SN was exceeded handily in SC24, despite the low activity. Of all the numbered solar cycles, only #5 & #6 of the Dalton minimum were below that level. Cooling now in progress too from low solar…

Reply to  ghalfrunt
September 11, 2019 1:02 pm

Ghalfrunt, this is not a Sherlock Holmes mystery where the answer is revealed in the last chapter.
We are gaining understanding of what is clearly a “chaotic” system. Maybe some day we will understand all of the inter-relationships and can properly characterize the interdependent variables.

But until then, we must be satisfied with the world’s most underutilized 3-word phrase:
“WE DON’T KNOW.”

Ronny A.
September 8, 2019 5:29 am

I’m going to steal the title of one of Naomi Klein’s gas-o-ramas: “This Changes Everything”. Congratulations and unending gratitude from the peanut gallery.

Roy W. Spencer
September 8, 2019 5:52 am

I doubt that anyone here has actually read the whole paper and understands it. I don’t believe the author has demonstrated what he thinks he has demonstrated. I’d be glad to be shown otherwise.

Reply to  Roy W. Spencer
September 8, 2019 6:11 am

Dr. Spencer,

I think if you could explain where Pat went wrong, most of us would appreciate it. I have to admit, I don’t understand it enough to draw any firm conclusions… Of course, I’m a geologist, not an atmospheric physicist… So, I never fully understood Spencer & Braswell, 2010; but I sure enjoyed the way you took Andrew Dessler to task regarding the 2011 Texas drought.

Beta Blocker
Reply to  Roy W. Spencer
September 8, 2019 6:17 am

A more detailed exposition of your criticisms will be forthcoming, is that correct?

knr
Reply to  Roy W. Spencer
September 8, 2019 6:44 am

Then highlighting the errors would be the clear path to take, so why not do so?

Loydo
Reply to  knr
September 9, 2019 11:17 pm

Because he agrees with Nick Stokes, ATTP and others, but to elaborate would cruel Pat and the whole credulous cheer squad, like a “stake in the heart.”

“And yes, the annual average of maximum temperature would be 15 C/year.”
Um, no.

Reply to  Roy W. Spencer
September 8, 2019 7:38 am

Roy W. Spencer wrote:

I don’t believe the author has demonstrated what he thinks he has demonstrated.

On what is your belief based? If you yourself understand the whole paper, then I would appreciate your explanation of how it has caused your belief to be as it is.

Your comment seems very general. You speak of “what he thinks he has demonstrated”. Well, spell out for us what you are talking about. What is it that you think he has tried to demonstrate that you believe that he has not.

I believe that you might be hard pressed to do so, but I am open to being made to believe otherwise.

Clyde Spencer
Reply to  Roy W. Spencer
September 8, 2019 11:27 am

Roy
You are the one who has objected to the conclusion of Pat’s work. I think the onus is on you to demonstrate where you think that he has erred. Isn’t it normal practice in peer review to point out the mistakes made in a paper? I can understand that sometimes after reading something, one is left with an uneasy feeling that something is wrong, despite not being able to articulate it. I think that you would be doing everyone a great service if you could find the ‘syntax error.’

I have read the whole paper. While I won’t claim to completely understand everything, nothing stood out as being obviously wrong.

Eric Barnes
Reply to  Clyde Spencer
September 8, 2019 12:32 pm

The alternative is not appealing for Dr. Spencer. It’s the “I prefer to not have egg on my face” position.

Reply to  Roy W. Spencer
September 8, 2019 1:01 pm

I think I’ve demonstrated that projected global air temperatures are a linear extrapolation of GHG forcing.

Charles Taylor
Reply to  Pat Frank
September 8, 2019 4:42 pm

Yes you have. And quite well at that. I think the problem with people accepting it is that a simple linear model with minimum parameters reproduces who knows how many lines of code run on supercomputers coded by untold numbers of programmers and so on.

Reply to  Roy W. Spencer
September 8, 2019 3:36 pm

In response to Roy Spencer, I read every word of Pat’s paper before commenting on it, and have also had the advantage of hearing him lecture on the subject, as well as having most educative discussions with him. I am, therefore, familiar with the propagation of error (i.e., of uncertainty) in quadrature, and it seems to me that Pat has a point.

I have also seen various criticisms of Pat’s idea, but those criticisms seem to me, with respect, to have been misconceived. For instance, he is accused of having applied a 20-year forcing as though it were a one-year forcing, but that is to misunderstand the fact that the annual forcing may vary by +/- 4 W/m^2.

He is accused of not taking account of the fact that Hansen’s 1988 forecast has proven correct: but it is not correct unless one uses the absurdly exaggerated GISS temperature record, which depends so little on measurement and so much on adjustment that it is no longer a reliable source. Even then, Hansen’s prediction was only briefly correct at the peak of the 2016/17 el Nino. The rest of the time it has been well on the side of exaggeration.

Unless Dr Spencer (who has my email address) is able to draw my attention to specific errors in Pat’s paper, I propose to report what seems to me to be an important result to HM Government and other parties later this week.

sycomputing
Reply to  Roy W. Spencer
September 8, 2019 3:38 pm

Somehow this doesn’t jibe with what one would expect to hear from Dr. Spencer if he objected to any particular theory.

Is this the real Dr. Roy Spencer?

Moderators, haven’t there been recent confirmed instances of imposters using the names of known, long time commenters here (e.g., Geoff Sherrington) to forward some agenda driven opera of false witness against their neighbor? Is this the case here? You just never know what a scallywag might attempt to do.

ferd berple
Reply to  sycomputing
September 9, 2019 9:18 pm

Is this the real Dr. Roy Spencer?
=====================
I have serious doubts. The comment appears insulting and trivializes 6 years of work without substantiation. It seems completely out of character.

Loydo
Reply to  ferd berple
September 10, 2019 12:17 am

Mmm, a day and a half later…it was Roy alright.

Since when is “doubt” insulting? Oh, when you’ve pinned all your hopes on some lone rider on a white horse comin’ in ta clean up the town, only to realize it’s a clown on a donkey.

sycomputing
Reply to  Loydo
September 10, 2019 5:01 pm

Get thee behind me, Loydo, thou pre-amateur raconteur!

Don’t you contradict yourself?

Should you imprudently opine of equines bearing deceitful champions whilst you yourself churl about clownishly, borne atop your own neddy named Spencer?

Well should you?

I am the real Don Klipstein, and I first started reading this WUWT post at or a little before 10 PM EDT Monday 9/8/2019, 2 days after it was posted. All posts and attempted posts by someone using my name earlier than my submission at 7:48 PDT this day are by someone other than me.

(Thanks for the tip, cleaning up the mess) SUNMOD

https://wattsupwiththat.com/2019/09/07/propagation-of-error-and-the-reliability-of-global-air-temperature-projections-mark-ii/#comment-2791169

Loydo
Reply to  Loydo
September 10, 2019 11:21 pm

I don’t know who you are sir, or where you come from, but you’ve done me a power of good.

sycomputing
Reply to  Loydo
September 11, 2019 5:38 am

. . . but you’ve done me a power of good.

You betcha there Boodrow! Y’all come back any time now, ya hear?

🙂

Matthew R Marler
Reply to  Roy W. Spencer
September 9, 2019 12:46 pm

Roy W. Spencer: I doubt that anyone here has actually read the whole paper and understands it.

I read it. What do you need help with?

unka
Reply to  Roy W. Spencer
September 9, 2019 5:51 pm

Dr. Spencer,

I agree. The author is confused. A victim of self-deception. I am surprised the paper was published anywhere.

John Tillman
Reply to  unka
September 9, 2019 6:21 pm

Please expand on this drive-by baseless comment.

Thanks!

Skeptics really want to know what are your best arguments against physical reality.

unka
Reply to  John Tillman
September 10, 2019 2:24 pm

https://moyhu.blogspot.com/2019/09/another-round-of-pat-franks-propagation.html

“There has been another round of the bizarre theories of Pat Frank, saying that he has found huge uncertainties in GCM outputs that no-one else can see.”

https://moyhu.blogspot.com/2017/11/pat-frank-and-error-propagation-in-gcms.html

Reply to  unka
September 10, 2019 10:55 pm

Nick was wrong the first time around and has not improved his position since.

Reply to  Roy W. Spencer
September 9, 2019 8:38 pm

I did not read the paper, but the parabolic shape of the error range is noticeably typical of positive and negative square-root curves. It looks like the error is supposed to be up to +/- 1.8 degrees C (from an error of +/- 4 W/m^2), and every year an error of up to 1.8 degrees C (or 4 W/m^2) in either direction gets added, as if by adding the results of rolling a die every year. This looks like the expansion of the likely range of a 2-dimensional random walk as time goes on. However, I doubt an error initially that large in modeling the effect of clouds expands like that. I don’t see the cloud effect having the ability to drift like a two-dimensional random walk with no limit. Instead, I expect a large drift in the effect of clouds to eventually face over a 50% probability of running into something that reverses it, and under a 50% probability of running into something that maintains the drift’s increase.
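The random-walk picture Donald describes can be sketched numerically. The numbers below are illustrative only (the ±1.8 °C per-year bound is taken from his comment, not from the paper): a bounded error added independently each year yields an envelope that widens roughly as the square root of the number of years.

```python
import math
import random

# Sketch: add an independent error of up to +/-1.8 C each year and
# watch how the spread of the running total grows over 100 years.
random.seed(0)
step = 1.8          # assumed per-year error bound, deg C (from the comment)
years = 100
trials = 20000

finals = []
for _ in range(trials):
    total = 0.0
    for _ in range(years):
        total += random.uniform(-step, step)
    finals.append(total)

mean = sum(finals) / trials
sd = math.sqrt(sum((x - mean) ** 2 for x in finals) / trials)

# Theory: a uniform step on [-step, step] has sd = step/sqrt(3); after
# n independent steps the walk's sd is (step/sqrt(3)) * sqrt(n).
predicted = step / math.sqrt(3) * math.sqrt(years)
print(round(sd, 2), round(predicted, 2))
```

The simulated spread tracks the sqrt(n) prediction, which is the "parabolic" (square-root) envelope shape Donald notes.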

Reply to  Donald L. Klipstein
September 9, 2019 11:51 pm

You’re confusing error with uncertainty, Donald. The envelope is growth of uncertainty, not of error.

Jordan
Reply to  Pat Frank
September 10, 2019 1:43 am

Well said, Pat. It’s going to be a game of Whack-A-Mole on that point, especially when people don’t bother to read the paper.

But it’s going to be worth it because you are addressing a very widely misunderstood feature of modelling. The wider debate will improve from what your expertise brings to the party. As time goes on, you’ll have many others to help stamp out the miscomprehension.

Reply to  Pat Frank
September 10, 2019 7:22 am

Error, uncertainty … I’m used to bars showing range of uncertainty on graphs of global temperature datasets and projections being called error bars. Either way, I don’t see that from cloud effects growing as limitlessly as a 2-dimensional random walk.

September 8, 2019 6:50 am

I am no expert on statistics or climate, but I have a basic understanding of both. I am very aware of propagation of error. My training and experience have taught me that predictive equations with multiple variables and associated parameters have very poor predictive value due to:
1. Errors in the parameters
2. Interactions between variables.
3. Unaccounted-for variables. (If you have a lot of variables impacting your result, who is to say there isn’t one more?)

Serious propagation of error in this sort of situation is unavoidable. AND, since we are doing observational science, not experimental science, there is no way to really test your predictive equation by varying the inputs.
So it has always seemed obvious to me from the very start that these complicated computer models cannot have predictive value.
What is also obvious is that it is easy to “tune” your complicated predictive equations by adjusting your parameters and adding or dropping out certain variables.
It has also been obvious from the beginning that the modelers were frauds, since they admitted CO2 is a weak greenhouse gas but concocted a theory that this weak effect would cause a snowballing increase in water vapor which would lead to a change in climate.
These models were garbage.
There is no need to do anything complicated to discredit their models.

Kurt
Reply to  joel
September 8, 2019 10:58 am

“Serious propagation of error in this sort of situation is unavoidable. AND, since we are doing observational science, not experimental science, there is no way to really test your predictive equation by varying the inputs.”

But it’s not observational science, either. Observational science would be watching people eat the things they eat over time, and observing what percentages of people eating which diets get cancer. Experimental science would be force feeding people specific controlled diets over time compared to a control group and measuring the results. In climate science, the latter is impossible and the former would take too long for satisfaction of the climate professorial class, who want their precious peer reviewed research papers published now.

Running a computer simulation and pretending that the output is a measure of the real world, as a shortcut to the long and hard work of actual experimentation or actual measurement, is not science at all.

Steve O
September 8, 2019 6:52 am

“…simulation uncertainty is ±114 × larger than the annual average ∼0.035…”

To be precise, if we’re talking about the uncertainty itself, wouldn’t it be +114 larger? The range is 114 times wider. Am I reading it right?

Reply to  Steve O
September 8, 2019 1:15 pm

If you want to do the addition, Steve, then the ±4 W/m^2 is +113.286/-115.286 times the size of the ~0.035 W/m^2 average annual forcing change from CO2 emissions.

Steve O
Reply to  Pat Frank
September 10, 2019 6:49 pm

Okay, I see now how you’re doing it. I got hung up on something being “negative x times larger.”

September 8, 2019 6:53 am

The passion with which this author writes is disturbing. Do you think, given his emotional commitment to this theory, he would ever be able to admit he were wrong? Isn’t this very emotional commitment antithetical to science?

Reply to  joel
September 8, 2019 1:16 pm

Where’s the emotion in the paper or the SI, joel?

sycomputing
Reply to  joel
September 8, 2019 3:30 pm

Do you think, given his emotional commitment to this theory, he would ever be able to admit he were wrong?

Thank you for pointing out to the buffoons here how important it is that the author himself should be the arbiter of that which is true in his theory, and this based entirely upon his emotional commitment to it. Never mind the rigorous back and forth that normally accompanies manuscripts such as these in their respective field of study. I’m speaking of course about objections to the published theory, answers to the objections, objections to the answers to the objections and so forth and so on, until, in the end something about the truth of the theory gets worked out by those involved.

Begone, stagnant discourse, nauseous discussion and tedious debate in the search for Truth! Rather, come hither the pure, sweet redolence of only the word slinger’s passion to determine the veracity of his argument!

Reply to  sycomputing
September 9, 2019 4:11 am

Oh, the debate?
Right…the debate!
You obviously mean like what occurred prior to emergence of a consensus among 97% of every intelligent and civilized human being in the galaxy, that CO2 is the temperature control knob of the Earth, that a few degrees of warming is catastrophic and unsurvivable, that a warmer world has a higher number of ever more severe storms of every type, as well as being hotter, colder, wetter, dryer, and in general worse in every possible way, right?
Oh plus when it was agreed after much back and forth that every possible bad thing that could or has ever happened is due to man made CO2 and the accompanying global warming/climate change/climate crisis/climate catastrophe?
Something stagnant and nauseatingly redolent alrighty.
Funny how you only noticed it right at this particular point in time.
Funny how it is only ideas which you disagree with that need to be discussed at length prior to general acceptance.
It seemed to me that a discussion is exactly what we have been having, at great length, with years of endless back and forth, on the subject of this paper today and in the past, and regarding a great many aspects of related ideas.
It also seems to me that discussions moderated by adherents to one side, to one point of view, during all of this, have been curiously unwilling to tolerate any contrary opinion from appearing on their pages.
And that a scant few, such as this one right here, have allowed both sides of any discussion free and equal access.
Nauseating and redolent?
Like I said above…only fools and liars.

Lonny Eachus
Reply to  Nicholas McGinley
September 9, 2019 7:07 am

Mr. McGinley:

You START with a falsehood, and continue from there.

That “97%” figure is a myth, and always has been.

Reply to  Lonny Eachus
September 9, 2019 11:14 am

Maybe read what I said again, Lonny.
Did you read my comment to the end?
I am not sure how it might seem apparent I am arguing in favor of any consensus, even if one did exist.
My point is that climate alarmists and their CO2 induced global warming assertions have never engaged in the sort of back and forth that Sycomputing asserts is necessary prior to any idea being widely accepted.
And the alarmist side has systematically and unprecedentedly stifled debate, silenced contrary points of view, censored individuals from being able to participate in any public dialogue, etc.
None of the major news or science publications in the world have allowed a word of dissent or even back and forth discussion on the topic of climate or any related subject (even if only tangentially related) for many years now.
Many of them have completely shut down discussion pages on their sites, even after years of preventing any skeptical voices from intruding on the conversations there.
One might wonder if it was due to the amount of manpower and effort it took to silence contrary opinions or informative discussions. Or if perhaps it was because huge numbers of people were finding that any questions at all were met with instant censorship and banning of that individual from making any future comments.
Which all by itself is quite damning.
It occurs to me that sycomputing may in fact have intended his comment to be sarcastic, and if that is the case then I apologize, if such is necessary.
Poe’s law tells us that it is well nigh impossible to discern parody or sarcasm when discussing certain subject matter, and this is very much the case with the topic at hand.

Lonny Eachus
Reply to  Lonny Eachus
September 9, 2019 12:07 pm

My mistake.

I saw the “97%” and immediately jumped to the “true believer” conclusion.

I should know better.

Reply to  Lonny Eachus
September 9, 2019 12:49 pm

S’alright.
I may have done it myself with my comment to the person I was responding to.
I meant for this to be an early clue: “97% of every intelligent and civilized human being in the galaxy…”
😉

sycomputing
Reply to  Lonny Eachus
September 9, 2019 12:53 pm

It occurs to me that sycomputing may in fact have intended his comment to be [satire,] and if that is the case then I apologize, if such is necessary.

Absolutely no such thing is necessary. Quite the contrary. Physician, you’ve healed thyself, and in doing so accomplished at least 2 things for certain, and likely one more:

1) You’ve paid me (albeit unwittingly) a wonderful compliment for which I thank you!
2) You’ve contradicted joel’s theory above with irrefutable evidence.
3) You’ve shown Poe’s “law” ought to be relegated back to a theory, if not outright rejected as just so much empirically falsified nonsense!

You are my hero for the day. All the best!

Reply to  Lonny Eachus
September 10, 2019 10:36 am

Oh, heck…I can make mincemeat of Joel’s criticism very much more simply, by just pointing out that he has not actually offered any specific criticism of the paper.
All he has done is make an ad hominem smear.

Beyond that, I do not think any idea should be rejected or accepted depending on one’s own opinion of how the person who had the idea would possibly react if the idea was found to be in error. That does not even make any sense.

Imagine if we had an hypothesis that was only kept from the dustbin of history because the people who advocated for it jumped up and down and screamed very loudly anytime it looked like someone was about to shoot a big hole in the hypothesis?

Of course, jumping up and down and screaming is nothing compared to having people fired, refused tenure, prevented from publishing, subjected to outright character assassination, and so on.

I would have to say that if the only fault to be found with a scientific paper involves complaining that the personality of the author rubs someone the wrong way, or is found to be “disturbing”…that sounds like nothing has been found with the actual paper.
And that some people are delicate snowflakes who whine when “disturbed”.

It seems to me that making ad hominem remarks instead of addressing the subject material and the finding, is precisely antithetical to science.

sycomputing
Reply to  Lonny Eachus
September 10, 2019 12:11 pm

Oh, heck…I can make mincemeat of Joel’s criticism very much more simply . . .

Well certainly you’re able Nicholas, no doubt about it. But the innocently simplistic complexity in which the actual refutation emerged natürlich was just such a thing of poetic beauty was it not?

In common with Joel’s argument against Frank, here you were (or appeared to be) in quite the fit of passionate contravention yourself, heaping bucket after bucket of white hot reproof upon mine recalcitrant head, your iron fisted grip warping a steel rod of correction with each blow.

But then, after a moment, it occurred to you, “Hmm. Well now what if I was wrong?”

And thus, Joel’s original contemptible claptrap is so exquisitely refuted with pulchritudinous precision (or is it “accuracy”?) in a wholly natural progression within his very own thread on the matter.

Really good stuff. Love it!

Reply to  Lonny Eachus
September 11, 2019 6:30 am

Sycomputing,
Have you ever read any of Brad Keyes’ articles, or comment threads responding to comments he has made?
There are ones from years ago, and even more recently, that go on for days without anyone, as far as I can tell, realizing that Keyes is a skeptic, using parody and satire and sarcasm so effectively, that if Poe’s Law was not already named, it would have had to be invented and called Keyes Law.

On a somewhat more inane note, we have several comments right here on this thread in which various individuals are complaining that skeptics need to be more open to debate and criticism!

sycomputing
Reply to  Lonny Eachus
September 11, 2019 1:02 pm

Have you ever read any of Brad Keyes’ articles . . .

All of them that I could find. Believe it or not, Brad once sought me out to offer me the Keyes of grace, and on that day I understood what it means to be recognized by one’s hero. My own puny, worthless contribution to his legacy is above.

September 8, 2019 7:25 am

joel, I’m not understanding your comment:

The passion with which this author writes is disturbing. Do you think, given his emotional commitment to this theory, he would ever be able to admit he were wrong? Isn’t this very emotional commitment antithetical to science?

Are you referring to Pat Frank? Are you serious? Am I missing some obvious context?

Clarify, if you will. Thanks.

Steve S.
September 8, 2019 8:38 am

I looked at the paper (btw, there is an exponent missing in eqn. 3). Although I am sympathetic to its overall message, I am not convinced the methodology is solid. There is a lot of subtlety going on here, since a model of a model is being used. Extensive care is warranted since, if the paper is rock solid, then a LOT of time and money has been thrown into the climate-change modeling rat hole.
I will have to give it more thought.

TRM
September 8, 2019 9:00 am

Paste the URL for the article far and wide! Use the one from the published, peer reviewed article to avoid the filters that block WUWT.

information@sierraclub.org

Let’s spam, ahem, I mean INFORM every organization, site and group that supports CAGW.

Kevin kilty
September 8, 2019 9:19 am

I am slogging my way through the paper, and have a couple of points so far that I think are pertinent.

1. Quite a few people on this site have complained that error bars on observations are either never represented at all in graphics or are minimized. Certainly no one has ever made an estimate of the full range of uncertainty in climate simulations that I recall seeing. My suspicion is that all errors are treated statistically in the most optimistic way possible. One statement from the paper will illustrate what I mean…

However, the error profiles of the GCM cloud fraction means do not display random-like dispersions around the zero-error line.

In this case one wonders if the errors “stack up” as in a manufactured item. If they do, and they might if the simulation integrates sufficiently as it steps forward, then the “iron-clad rule” of stack-up is that one should not use root mean squares but rather add absolute values, in order not to underestimate uncertainty. I have never seen such a discussion applied in climate science, and it’s difficult to even suggest to some people that systematic errors might be significant.

2.

A large autocorrelation R-value means the magnitudes of the xi+1 are closely descended from the magnitudes of the xi. For a smoothly deterministic theory, extensive autocorrelation of an ensemble mean error residual shows that the error includes some systematic part of the observable. That is, it shows the simulation is incomplete.

I don’t think this is so necessarily. Magnitudes of x_{i+1} being highly correlated to x_{i} might reflect true climate dynamics if the climate system contains integrators, which it undoubtedly does. It might exaggerate the correlation if there is a pole too close to the unit circle in the system of equations of the model–a near unit root.

3. I had wondered about propagation of uncertainty in GCMs, but never launched into it more deeply because I thought one would really have to examine the codes themselves for the needed sensitivity parameters, and then find credible estimates of uncertainty per parameter. It looked like a Herculean task. The approach here is very interesting.

We usually calculate likely uncertainty through a “measurement equation” to obtain the needed sensitivity parameters, and then supply uncertainty values through calibration or experience. The emulation equation plays that role here, or at least plays part of the role. So it is an interesting approach for simplifying a complex problem.

One thing I do wonder about is this. If the uncertainty is truly as large as claimed in this paper, then do some model runs show it? If they do, are these results halted early, trimmed, or in some other way never reach being placed into an ensemble of model runs? Are the model runs so constrained by initial conditions that “model spread is never uncertainty”? (Victor Venema discusses this at http://variable-variability.blogspot.com/ for those interested).

If anyone thinks that uncertainty can only be supplied through propagation of error, and the author seems to imply this, then the NIST engineering handbook must be wrong, for it states that one can estimate it through statistical means.
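The stack-up comparison in Kevin's point 1 can be sketched numerically. The component uncertainties below are hypothetical values chosen only to show how far the two combination rules diverge:

```python
import math

# Two ways to combine component uncertainties:
#  - root-sum-square (RSS), which assumes the errors are independent
#    and partially cancel;
#  - worst-case stack-up, summing absolute values, which guards against
#    errors that share a systematic sign.
errors = [4.0, 2.5, 1.5, 1.0]   # hypothetical component uncertainties, W/m^2

rss = math.sqrt(sum(e ** 2 for e in errors))   # independence assumed
worst_case = sum(abs(e) for e in errors)       # conservative stack-up bound

print(round(rss, 2), round(worst_case, 2))
```

The RSS figure is always the smaller of the two; the gap between them is exactly what is at stake when one suspects the component errors are systematic rather than random.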

Kevin kilty
Reply to  Kevin kilty
September 8, 2019 10:58 am

I might add that the NIST Handbook suggests that uncertainty can be assessed through statistical means or other methods. The two other methods that come to mind are propagation of error and building an error budget from calibration and experience. However, no method is very robust in the presence of bias, which is something the “iron-clad” rule of stack-up tries to get at. The work of Fischhoff and Henrion showed that physical scientists are not very good at assessing bias in their models and experiments.

Eric Barnes
Reply to  Kevin kilty
September 8, 2019 7:18 pm

“scientists are not very good at assessing bias in their models”

Especially when they are paid for their results.

September 8, 2019 9:22 am

I have carried out tens of spectral calculations to find the radiative forcing (RF) values of GH gases. The reproduction of the equation by Myhre et al. gave about a 41% lower RF value for CO2. I have applied simple linear climate models because they give the same RF and temperature warming values as the GCM simulations referred to by the IPCC.

In my earlier comment, I mixed up cloud forcing and cloud feedback. It is clear that the IPCC models do not use cloud feedback in their climate models for climate sensitivity calculations (TCS).

The question is if cloud feedback has been really applied in the IPCC’s climate models. In the simple climate model, there is no such factor, because there is a dependency on the GH concentration and the positive water feedback only.

My question to Pat Frank is: in what way has cloud forcing been applied in simple climate models and in the GCMs? My understanding is that it is not included in the models as a separate factor.

John Tillman
Reply to  Antero Ollila
September 8, 2019 11:37 am

GCMs don’t do clouds. GIGO computer gamers simply parameterize them with a fudge factor.

John Tillman
Reply to  John Tillman
September 9, 2019 11:14 am

The short version is that the cells in numerical models are too big and clouds are too small. Also, modelling rather than parameterizing them would require too much computing power.

Reply to  Antero Ollila
September 8, 2019 4:39 pm

I can’t speak to what people do with, or put into, models, Antero, sorry.

I can only speak to the structure of their air temperature projections.

michel
September 8, 2019 9:56 am

This is really incredible. The argument is detailed but the point is extremely simple.

When you calculate with quantities which involve some margin of error, the errors propagate into the result according to standard formulae. In general the error in the result will exceed that in the individual quantities.

Well, Pat is saying that in all the decades of calculation and modeling of the physics of the end quantity, the warming, none of the researchers have used or referred to these standard formulae, none have taken account of the way error propagates in calculations, and therefore all of the projections are invalid.

Because if the errors had been correctly projected, the error bars would be so wide that the projection would have no information content.

He is saying, if I understand him correctly, that if you are trying to calculate something like the volume of a swimming pool by multiplying together length, width, and depth, the error in your estimate of the volume will be much greater than the errors in your estimates of the individual length, width, and depth.

If you are now dealing with something which changes over a century, like temperature, the initially perhaps quite small errors are not only higher to start with in year one, but rise with every year of the projection, until you end up saying that the global mean temperature will be somewhere in a 20 C range, which tells you nothing at all. I picked 20 C out of a hat for illustration purposes.

And he is saying, no-one has done this correctly in all these years?

Nick Stokes, where are you now we really need you?
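The propagation rule michel invokes for the pool example can be sketched as follows. The dimensions and uncertainties are made-up illustrative values; for a product of independent quantities, relative uncertainties combine in quadrature:

```python
import math

# Pool volume V = length * width * depth, each dimension measured with
# some uncertainty. For a product, the relative uncertainty of the
# result is the root-sum-square of the component relative uncertainties.
length, u_len = 25.0, 0.5    # metres, +/- uncertainty (illustrative)
width,  u_wid = 10.0, 0.3
depth,  u_dep = 2.0,  0.1

V = length * width * depth
rel = math.sqrt((u_len / length) ** 2
                + (u_wid / width) ** 2
                + (u_dep / depth) ** 2)
u_V = V * rel

print(V, round(rel, 4), round(u_V, 1))
```

Note how the 5% relative uncertainty in depth dominates: the combined relative uncertainty (about 6%) exceeds that of any single dimension, which is michel's point.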

Reply to  michel
September 8, 2019 5:48 pm

Close, but not quite, michel. It’s not that “you end up saying that the global mean temperature will be somewhere in a 20 C range.”

It’s that you end up not knowing what the temperature will be between ±20 C uncertainty bounds (choosing your value).

This uncertainty is far larger than any possible physically real temperature change. The projected temperature then provides no information about what the true future temperature will be.

In other words, the projection provides no information at all about the magnitude of the future temperature.

michel
Reply to  Pat Frank
September 9, 2019 2:11 am

Yes, thanks. It’s worse than we had thought!

I admit to feeling incredulous that a whole big discipline can have gone off the rails in such an obvious way. But I’m still waiting for someone to appear and show it has not, and that your argument is wrong.

The thing is, the logic of the point is very simple, and, if the argument is correct, quite devastating. It’s not a matter of disputing the calculations. If it really is true that they have all simply not done error propagation, they are toast, whether your detailed calculations have some flaws or not.

Windchaser
Reply to  michel
September 10, 2019 9:09 am

Michel, I’d recommend reading some of the other posts, e.g., at AndThenTheresPhysics or Nick Stokes’ post at moyhu.com. Those past posts cover this pretty well, I think.

The short version: the uncertainty mentioned here is a static uncertainty in forcing related to cloud cover: +/-4 W/m2. That is an uncertainty in a flow of energy, constantly applied: joules/s/m^2.
The actual forcing value is somewhere in this +/- 4W/m^2 range, not changing, not accumulating, just fixed. We just don’t know exactly what it is.

If you propagate this uncertainty, i.e., if you integrate it with respect to time, you get an uncertainty in the accumulated energy. An uncertainty of 4 W/m^2 means that each second, the energy absorbed could be anywhere from 4 joules higher to 4 joules lower, per square meter. And at the next second, the same. And so on. The accumulation of this error means a growing uncertainty in the energy/temperature of the system.

That adds up, certainly. But the Stefan-Boltzmann Law, the dominant feedback in the climate system, will restrict this energy-uncertainty pretty sharply so that it cannot grow without limit.

Mathematically, that’s how this error should be propagated through. But Frank changes the units of the uncertainty to W/m^2/year, and as a result the rest of the math is also wonky. Adding this extra “/year” means that the uncertainty *itself* is constantly growing with respect to time.
But that’s false. This would mean our measurements are getting worse each year; like, our actual ability to measure the cloud cover is getting worse, and worse, and worse, so the uncertainty grows year over year. (No, the uncertainty is static: a persistent uncertainty in what the cloud-cover forcing is.)

Ultimately, this is just a basic math mistake, which is why it’s so… I dunno, somewhere between hilarious and maddening. It’s an argument over the basic rules of statistics.
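The two competing readings of the ±4 W/m^2 statistic in this exchange can be put side by side numerically. This is a sketch of the disagreement, not a verdict on it:

```python
import math

# The disputed +/-4 W/m^2 cloud-forcing statistic, under the two readings
# argued over in this thread.
u = 4.0                      # W/m^2
years = list(range(1, 101))

# Reading 1 (Windchaser/Stokes): a static calibration uncertainty.
# The bound is the same at every year; nothing accumulates.
static = [u for _ in years]

# Reading 2 (as this thread characterizes Frank's propagation): a
# per-step uncertainty compounded in quadrature each year, so the
# envelope grows as sqrt(n).
propagated = [u * math.sqrt(n) for n in years]

print(static[-1], propagated[-1])  # prints 4.0 40.0
```

After a century the two readings differ by a factor of ten, which is why the "per year" question dominates the argument above.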

Reply to  Windchaser
September 10, 2019 10:48 pm

It’s not error growth, Windchaser, it’s growth of uncertainty.

You wrote, “But Frank changes the units of the uncertainty, to W/m^2/year,…

No, I do not.

Lauer and Hamilton calculated an annual mean error statistic. It’s right there in their paper.

The per year is therefore implicitly present in their every usage of that statistic.

Nick knows that. His objection is fake.

Reply to  Windchaser
September 11, 2019 1:08 am

“Lauer and Hamilton calculated an annual mean error statistic. It’s right there in their paper.”
What their paper says is:
“These give the standard deviation and linear correlation with satellite observations of the total spatial variability calculated from 20-yr annual means.”
And for those 20 years they give a single figure. 4 W/m2. Not 4 W/m2/year – you made that bit up.

Windchasers
Reply to  Windchaser
September 11, 2019 9:09 am

Lauer and Hamilton calculated an annual mean error statistic. It’s right there in their paper.

Lauer himself said that your interpretation is incorrect. I refer to this comment posted by Patrick Brown in a previous discussion:

I have contacted Axel Lauer of the cited paper (Lauer and Hamilton, 2013) to make sure I am correct on this point and he told me via email that “The RMSE we calculated for the multi-model mean longwave cloud forcing in our 2013 paper is the RMSE of the average *geographical* pattern. This has nothing to do with an error estimate for the global mean value on a particular time scale.”.

This extra timescale has nothing to do with it. A measurement (W/m2) has the same units as its uncertainty (W/m2). This works the same in all fields.

Reply to  michel
September 10, 2019 10:51 pm

Not only have they not done error propagation michel, but I have yet to encounter a climate modeler who even understands error propagation.

One of my prior reviewers insisted that projection variation about a model mean was propagated error.

chris
September 8, 2019 10:06 am

I’d love to read the paper and respond, but I’m on my way to Alabama to volunteer for storm-damage clean-up.

Phil
Reply to  chris
September 8, 2019 10:34 am

Hurricane Irma was first forecast to hit Southeast Florida, including the Miami area, so many people evacuated to the west coast of Florida. Then it was forecast to hit the Tampa-St. Pete area, so some people evacuated again to the interior. Then it went right up the middle of Florida with some people evacuating a third time to Georgia. Hurricanes are notoriously difficult to forecast just a few days out, so warnings tend to be overly broad. However, people are encouraged to “stay tuned” as forecasts can change rapidly. No one (and I mean no one) forecast that Dorian would park itself over the Bahamas as it did. Yet, we are to believe that forecasts of climate 100 years in the future are reliable. When you can forecast Hurricanes accurately (which no one can), then maybe your sarcasm is warranted.

sycomputing
Reply to  Phil
September 8, 2019 7:32 pm

When you can forecast Hurricanes accurately (which no one can), then maybe your sarcasm is warranted.

Are you sure chris is being sarcastic?

Thinking back on the bulk of the historic commentary from this user I can recall, I suspect he/she is telling the truth.

Phil
Reply to  sycomputing
September 8, 2019 9:41 pm

Since Sharpiegate has been in the news and Dorian not only missed Alabama, but appears to have affected Florida to only a limited extent when compared to early predictions, it does seem sarcastic to me. There has been no reported hurricane damage to Alabama. If it isn’t sarcastic, then it is confusing, because storm damage clean-up is needed in places that are somewhat removed geographically from Alabama. “Going to Alabama” would seem to imply from some other state or country other than Alabama. If one were in another state or country and one wanted to volunteer for “storm damage clean-up,” why wouldn’t you go directly to where you would be needed? It appears to be a “drive-by” comment.

John Tillman
Reply to  chris
September 8, 2019 11:32 am

On August 30, computer forecast of Dorian’s likely path still showed AL in danger:

https://www.youtube.com/watch?v=l36Ach0ZOeE

Luckily, the hurricane turned sharply right after slamming the northern Bahamas, following the coast, rather than crossing FL, then proceeding to GA and AL.

Reply to  John Tillman
September 8, 2019 1:30 pm

We were very close to giving the order to begin securing some of our GOM platforms for evacuation on the same models. The storm appeared to be veering towards the Gulf at the time. A day later, it was back to running up the Atlantic coast.

John Tillman
Reply to  David Middleton
September 8, 2019 4:34 pm

On the 30th, both the European and US models were wrong, but, as usual, the American was farther off, with the projected track more to the west.

ResourceGuy
September 8, 2019 10:34 am

Since the internet never forgets, I think it’s time for an updated list of science professional organizations that have stayed silent or joined in the pseudoscience parade and enforcement efforts against science process and science integrity.

Robert Stewart
September 8, 2019 11:44 am

Pat Frank, congratulations! I first learned of your work by listening to your T-shirt lecture (Cu?) on YouTube.
https://www.youtube.com/watch?v=THg6vGGRpvA dated July of 2016.

Like M&M’s critique of the Hockey Stick, your explanation and analysis made a great deal of sense, the sort of thing that should have been sufficient to cast all of the CO2 nonsense into the dust bin of history. But of course it didn’t. And like M&M, you have also had a great deal of trouble publishing in a “peer reviewed” form.

We are in a very strange place in the history of science. With the growth of the administrative state, the reliance on “credentials” and “peer review” has become armor for the activists who wield the powers of government through their positions as “civil servants”. At the same time, our universities have debased themselves providing the needed “credentials” in all sorts of meaningless interdisciplinary degrees that lack any substantial foundation in physics and mathematics, let alone a knowledge of history and the human experience.

In your remarks above, you said:
“The institutional betrayal could not be worse; worse than Lysenkoism because there was no Stalin to hold a gun to their heads. They all volunteered.” Which is exactly right. Sadly, few recent university graduates will have even a rudimentary understanding of the Lysenko and Stalin references. They Google such things and rely on an algorithm to lead them to “knowledge”, which resides in their short-term memory only long enough to satisfy a passing curiosity. We must realize that control of the “peer review” process is essential to those who seek to monopolize power in our society. The nominal and politically-controlled review process provides the logical structure that supports the bureaucrats who seek to rule us.

I would encourage everyone to approach these issues as a personal responsibility. Meaning that we must seek to understand these issues on their own merit, and not based on the word of some “credentialed” individual or group. The NAS review of Mann’s work should serve as fair warning that the rot goes deep, and reliance on “expert” opinion is a fool’s path to catastrophe. That said, I did enjoy Anthony’s response to DLK’s submission, where Anthony challenged DLK to provide “a peer reviewed paper to counter this one”. Hoisting them with their own petard!

Thank you for your persistence and devotion to speaking the truth. I look forward to digging into your supporting information in the pdf files.

onion
September 8, 2019 11:49 am

This may be a great paper. I have a query. As uncertainty propagates (in this case through time), the uncertainty due to all factors, including that from the annual average model long wave cloud forcing error alone (±4 Wm⁻²), is two orders of magnitude larger than the annual average increase in CO₂ forcing (about 0.035 Wm⁻²).

I have a thought experiment where uncertainty reduces through time. Imagine a model that predicts a coin-toss. It states that the coin lands heads with frequency 50%. The uncertainty bound on the first coin toss is [heads, tails, on its side]. The more coin tosses there are, the less the uncertainty becomes. By the millionth toss, the observed frequency of coin tosses landing ‘heads’ will be very close to 50% exactly.
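The convergence in this thought experiment can be sketched numerically: for independent tosses, the standard error of the observed frequency falls as 1/√n. A minimal Python sketch (my illustration, not from the paper or the comment):

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def observed_freq(n_tosses):
    """Fraction of heads in n_tosses simulated fair-coin flips."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

def std_error(n_tosses, p=0.5):
    """Standard error of the observed frequency: sqrt(p*(1-p)/n)."""
    return (p * (1 - p) / n_tosses) ** 0.5

for n in (100, 10_000, 1_000_000):
    print(n, round(observed_freq(n), 4), "+/-", round(std_error(n), 4))
```

The ± column shrinks from 0.05 at 100 tosses to 0.0005 at a million, which is the sense in which the coin-toss uncertainty "reduces through time." Whether GCM error behaves like independent random draws is exactly what is contested below.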

My understanding of the claims made by Alarmists is that the uncertainty from natural climate variability is steady year on year. As anthropogenic GHG concentrations rise, the ‘signal’ from GHG-warming (forcing) is first predicted to be observable and then overwhelms natural climate variability (Hansen predicted this to happen by 2000 with an approx 10y uncertainty bound). As GHG grows year on year, it overwhelms more and more other factors (including el Nino etc, a useful observable prediction). Essentially, they are arguing that GHG forcing is like the coin toss where uncertainty diminishes over time.

What is the counterargument against this?

Robert Stewart
Reply to  onion
September 8, 2019 12:36 pm

onion, examine your underlying assumption. You presume that the phenomenon is unchanging in time, that is, that the same coin is tossed over and over. As Lorenz found about 60 years ago, weather is a chaotic system. Assuming we could properly initialize a gigantic computer model, it would begin to drift away from reality after about two weeks due to the growth of tiny “errors” in the initialization process. And such a model, and the detailed data needed to initialize it, is the stuff of science fiction, wormholes and FTL travel, so to speak. It could be the case that there are conditions in the atmosphere that lend themselves to longer predictions, but it would take centuries of detailed data to identify these special cases. A week ago they were trying to predict where Dorian would go, and when it would get there. Need I say more?

Jordan
Reply to  onion
September 8, 2019 1:28 pm

Where is the evidence to say GHGs will overwhelm anything? All we have is some theorising including GCMs. Pat shows the GCMs are indistinguishable from linear extrapolation of GHG forcing with accumulating uncertainty.
Once uncertainty takes over, we can’t say much about any factor.

Reply to  Jordan
September 8, 2019 5:42 pm

The ±4 Wm⁻² is a systematic calibration error, deriving from model theory error.

It does not average away with time.

That point is examined in detail in the paper.

Jordan
Reply to  Pat Frank
September 8, 2019 9:39 pm

Thanks for your response Dr Frank. My response was addressed to onion, sorry if I wasn’t clear there. I wanted to challenge the assertion that GHG forcing would become overwhelming, developing your point that the propagation of uncertainty renders that assumption unsupportable.

S. Geiger
Reply to  Pat Frank
September 9, 2019 9:09 am

I’m still curious, as Nick Stokes pointed out, how it came to be that the +/- 4 W/m^2 was treated as an annual value. Is there some reason this was chosen (as opposed to, say, monthly, or even the equivalent time of each model step, as pointed out previously?) Is this an arbitrary decision OR is it stated in the original paper as to why the +/- 4 W/m^2 is treated as an annual average?

Thanks for any further info on this! Just trying to understand.

John Q Public
Reply to  S. Geiger
September 9, 2019 10:57 am

I see that on page 3833, Section 3, Lauer starts to talk about the annual means. He says:

“Just as for CA, the performance in reproducing the observed multiyear **annual** mean LWP did not improve considerably in CMIP5 compared with CMIP3.”

He then talks a bit more about LWP, then starts specifying the means for LWP and other means, but appears to drop the formalism of stating “annual” means.

For instance, immediately following the first quote he says, “The rmse ranges between 20 and 129 g m^-2 in CMIP3 (multimodel mean = 22 g m^-2) and between 23 and 95 g m^-2 in CMIP5 (multimodel mean = 24 g m^-2). For SCF and LCF, the spread among the models is much smaller compared with CA and LWP. The agreement of modeled SCF and LCF with observations is also better than that of CA and LWP. The linear correlations for SCF range between 0.83 and 0.94 (multimodel mean = 0.95) in CMIP3 and between 0.80 and 0.94 (multimodel mean = 0.95) in CMIP5. The rmse of the multimodel mean for SCF is 8 W m^-2 in both CMIP3 and CMIP5.”

A bit further down he gets to LCF (the uncertainty Frank employed): “For CMIP5, the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m^-2) and ranges between 0.70 and 0.92 (rmse = 4–11 W m^-2) for the individual models.”

I interpret this as just dropping the formality of stating “annually” for each statistic because he stated it up front in the first quote.

Reply to  S. Geiger
September 9, 2019 12:02 pm

“Lauer starts to talk about the annual means”
Yes, he talks about annual means. Or you could have monthly means. That is just binning. You need some period to average over. Just as if you average temperature in a place, you might look at averaging over a month or year. That doesn’t mean, as Pat insists, that the units of average temperature are °C/year (or °C/month). Lauer doesn’t refer to W/m2/year anywhere.

Phil
Reply to  S. Geiger
September 9, 2019 6:08 pm

Nick Stokes stated:

Lauer doesn’t refer to W/m2/year anywhere.

Lauer doesn’t have to. It is implicit. The 4 W/m2 is a flux. “Flux is a rate of flow through a surface or substance in physics”. Flow doesn’t exist without implicit time units. The unit of time for the 4 W/m2 is clearly a year.

Reply to  S. Geiger
September 9, 2019 6:41 pm

“The unit of time for the 4 W/m2 is clearly a year.”
As I asked above, why?
And consider my example above, the solar constant. It is a flux, and isn’t quite constant, so people average over periods of time: maybe a year, maybe a solar cycle, whatever. It comes to about 1361 W/m2, whatever period you use. That isn’t 1361 W/m2/year, or W/m2/cycle. It is W/m2.

S. Geiger
Reply to  S. Geiger
September 9, 2019 6:51 pm

In response to Phil, isn’t the ‘time’ dimension embedded in the ‘watt’ term (joules per second), at least as far as the ‘flux’ goes? However, I do see that we are talking about an uncertainty in that term that would seemingly have to evolve over a period of time (presumably, the longer the time period, the higher the uncertainty). From that standpoint I don’t really understand Nick’s criticism.

Clyde Spencer
Reply to  S. Geiger
September 9, 2019 7:23 pm

Stokes
Consider this: If you take 20 simultaneous measurements of a temperature, you can determine the average by dividing the sum by 20 (unitless), or to be more specific, use units of “thermometer,” so that you end up with “average temperature per thermometer.” There is more information in the latter than the former.

On the other hand, if you take 20 readings, each annually, then strictly speaking the units of the average are a temperature per year, because you divide the sum of the temperatures by 20 years, leaving units of 1/year. This tells the reader that they are not simultaneous or even contemporary readings. They have a dimension of time.

It has been my experience that mathematicians tend to be very cavalier about precision and units.

Reply to  S. Geiger
September 9, 2019 8:27 pm

Clyde,
“It has been my experience that mathematicians tend to be very cavalier about precision and units.”
So do you refer to averaged temperature as degrees per thermometer? Do you know anyone who does? Is it just mathematicians who fail to see the wisdom of this unit?

In fact, there are two ways to think about average. The math way is ∫T dS/∫1 dS, where you are integrating over S as time, or space or maybe something else. Over a single variable like time, the denominator would probably be expressed as the range of integration. In either case the units of S cancel out, and the result has the units of T.

More conventionally, the average is ΣTₖ/Σ1 summed over the same range, usually written ΣTₖ/N, where N is the count, a dimensionless integer. Again the result has the same dimension as T.

Reply to  S. Geiger
September 9, 2019 8:35 pm

S. Geiger
“From that standpoint I don’t really understand Nick’s criticism”
You expressed a critical version of it in your first comment. If you are going to simply accumulate the amounts of 4 W/m2, how often do you accumulate? That is critical to the result, and there is no obvious answer. The arguments for 1 year are extremely weak and arbitrary. Better is the case for per timestep of the calculation. Someone suggested that above, but Pat slapped that down. It leads to errors of hundreds of degrees within a few days, which numerical weather forecasting shows to be nonsense.

There may be an issue of how error propagates, but Pat Frank’s simplistic approach falls at that first hurdle.

S. Geiger
Reply to  S. Geiger
September 9, 2019 10:43 pm

OK, watched both Brown’s and Frank’s videos, and then read their back-and-forth at Brown’s blog. Here is my next question. I thought Brown actually missed the mark in several of his criticisms; however, the big outstanding issue still seems to be whether +/- 4 watts/m^2 is tethered to “per year”. I think both parties stipulate that it was derived based on 20 year model runs (and evaluating differences over that time period). Here is my question: would it be expected that the +/- 4 watt/m^2 number would be less had it been based on, say, 10 year model runs? Or, more, if it were based on 30 year model runs? (In other words, is that 4 watts/m^2 based on some rate of error that was integrated over 20 years?) As always, much appreciated if someone can respond.

Reply to  S. Geiger
September 9, 2019 11:18 pm

“would it be expected that the +/- 4 watt/m^2 number would be less had it been based on, say, 10 year model runs?”
I think not, but it is not really the right question here. The argument for per-year units, and subsequently adding in another 4 W/m2 every year, is not the 20-year period but the fact that Lauer and Hamilton used annual averages as an intermediate stage. This is binning; normally when you get the average of something like temperature (or equally LWCF correlation) you build up with monthly averages, then annual, and then average the annual over 20 years. That is a convenience; you’d get the same answer if you averaged the monthly over 20 years, or even the daily. But binning is convenient. You can choose whatever helps.

Pat Frank wants to base his claim for GCM error bars on the fact that Lauer used annual binning, when monthly or biannual binning would also have given 4 W/m2.
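The binning claim here, that equal-sized intermediate bins leave the overall mean unchanged, is easy to check with synthetic numbers (my illustration with made-up values, not Lauer and Hamilton's data):

```python
import statistics

# 20 years of synthetic monthly values (240 numbers); the particular values don't matter
monthly = [float(i % 17) for i in range(240)]

# Average all 240 months at once
direct = statistics.mean(monthly)

# Or bin into 20 annual means first, then average those
annual_bins = [statistics.mean(monthly[y * 12:(y + 1) * 12]) for y in range(20)]
via_annual = statistics.mean(annual_bins)

# Equal-sized bins give the identical overall mean, whatever the bin width
print(abs(direct - via_annual) < 1e-12)  # True
```

Note this only shows the two routes to the 20-year mean agree; it does not by itself settle the separate dispute over what time unit, if any, attaches to the resulting ±4 W/m².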

Reply to  S. Geiger
September 10, 2019 12:08 am

Nick, “Pat Frank wants to base his claim for GCM error bars on the fact that Lauer used annual binning, when monthly or biannual binning would also have given 4 W/m2.”

Really a clever argument, Nick.

I encourage everyone to read section 6-2 in the Supporting Information.

You’ll see that, according to Nick, 1/20 = 1/240 = 1/40.

Good demonstration of your thinking skills, Nick.

Lauer and Hamilton calculated a rmse, the square root of the error variance. It’s ±4W/m^2, not +4W/m^2 despite Nick’s repeated willful opacifications.

Reply to  Pat Frank
September 10, 2019 3:38 am

Pat,

“opacifications”
What a wonderful new word for me.
It fits beautifully with the adage – There are none so blind as those who will not see.

S. Geiger
Reply to  S. Geiger
September 10, 2019 6:56 am

Dr. Frank, does the issue of +/- 4 watts/m^2 vs. +4 watts/m^2 (as you keep pointing out) have anything to do with accruing the +/- value on a yearly basis in your accounting of the error? While you may be pointing out an error in Nick’s thinking, I’m not seeing the relevance to the (as I see it) crucial question of the validity of considering the value of some ‘annual’ uncertainty that needs to be added in every year of simulation (vs. some other arbitrary time period).

But aside from that, what does seem clear to me is that there IS some amount of uncertainty in these terms and that, to date, this hasn’t been appropriately discussed (or displayed) in the model outputs (above and beyond the ‘model spread’ which is typically shown). Seems the remaining question is HOW to incorporate this uncertainty into model results. Appreciate folks entertaining my simplistic questions on this.

Clyde Spencer
Reply to  S. Geiger
September 10, 2019 10:12 am

Stokes
“Time series data have a natural temporal ordering. This makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations…”
https://en.wikipedia.org/wiki/Time_series

You said, “More conventionally, the average is ΣTₖ/Σ1 summed over the same range, usually written ΣTₖ/N, where N is the count, a dimensionless integer.” I think that you are making my point, mathematician. You have assumed, without support, that N is always dimensionless.

Consider the following: You have an irregular hailstone that you wish to characterize by measuring its dimensions. You measure the diameters many times in a sufficiently short time as to reasonably call “instantaneous.” When calculating the average, it makes some sense to ignore the trivial implied units of “per measurement” that would yield “average diameter (per measurement).” Now, consider that you take a similar number of measurements during a period of time sufficiently long that the hailstone experiences melting and sublimation. The subsequent measurements will be smaller, and continue to decrease in magnitude. Here, one loses information in calculating the average diameter if it isn’t specified as “per unit of time.” For example, “x millimeters per minute, average diameter,” tells us something about the average diameter during observation, and is obviously different than the instantaneous measurements. It is not the same as the rate of decline, which would be the slope of a line at a specified point. As long as the units are carefully defined, and scrupulously assigned where appropriate, they should cancel out. That is more rigorous than assuming that the count in the denominator is always unitless.

Reply to  S. Geiger
September 10, 2019 4:49 pm

Clyde
“You have assumed, without support, that N is always dimensionless.”
I’m impressed by the ability of sceptics to line up behind any weirdness that is perceived to be tribal.
RMS and sd should be written with ±? Sure, I’ve always done that.
Averaged annual temperature for a location should be in °C/year – yes, of course, that’s how it’s done.

I can’t imagine any other time when the proposition that you get an average by summing and dividing by the number, to get a result of the same dimension, would be regarded as anything other than absolutely elementary.

“As long as the units are carefully defined, and scrupulously assigned where appropriate, they should cancel out. “
And as I said with the integral formulation, you can do that if you want. The key is to be consistent with numerator and denominator, so the average of a constant will turn out to be that constant, in the same units. As you say, if you do insist on putting units in the denominator, you will have to treat the numerator the same way, so they will cancel.

Reply to  S. Geiger
September 10, 2019 10:42 pm

S. Geiger, Nick argues for +4W/m^2 rather than the correct ±4 W/m^2 to give false cover to folks like ATTP who argue that all systematic error is merely a constant offset.

They then subtract that offset and argue a perfectly accurate result.

Patrick Brown made that exact claim a central part of his video, and ATTP argued it persistently both there, and since.

Nick would like them to have that ground, false though it is.

The ±4 W/m^2 is a systematic calibration error of CMIP5 climate models. Its source is the model itself. So, cloud error shows up in every step of a simulation.

This increases the uncertainty of the prediction with every calculational step, because it implies the simulation is wandering away from the physically correct trajectory of the real climate.

The annual propagation time is not arbitrary, because the ±4 W/m^2 is the annual average of error.
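The step-by-step growth described here is root-sum-square accumulation: with a constant per-step uncertainty, the combined uncertainty grows as √n. A minimal sketch, assuming one ±4 W/m² contribution per annual step (the per-step magnitude and period being exactly the points disputed in this thread), in flux units only, without the paper's conversion to temperature:

```python
def propagated_uncertainty(sigma_step, n_steps):
    """Root-sum-square accumulation of identical per-step uncertainties:
    sigma_total = sqrt(n_steps) * sigma_step."""
    return (n_steps * sigma_step ** 2) ** 0.5

sigma = 4.0  # W/m^2, the LWCF calibration error magnitude
for years in (1, 25, 100):
    print(years, round(propagated_uncertainty(sigma, years), 1))
# grows as sqrt(n): 4.0 after 1 step, 20.0 after 25, 40.0 after 100
```

Choosing a monthly rather than annual step in this sketch would multiply the 100-step result by √12, which is why the choice of step period matters so much to both sides.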

Reply to  S. Geiger
September 10, 2019 11:06 pm

“Nick argues for +4W/m^2 rather than the correct ±4 W/m^2 to give false cover to folks like ATTP who argue that all systematic error is merely a constant offset.”
This is nonsense. Let me wearily say it again. As your metrology source reinforced, there are two aspects to an uncertainty interval. There is the (half-width) σ, a positive square root of the variance, as the handbook said over and over. And there is the interval that follows, x±σ. Using the correct convention to express the width as a positive number (as everyone except Pat Frank does) does not imply a one-sided interval.

” the ±4 W/m^2 is the annual average of error”
It is, as Lauer said, the average over 20 years. It is not an increasing error. He chose to collect annual averages first, and then get the 20 year average.

Windchaser
Reply to  S. Geiger
September 11, 2019 9:32 am

Lauer doesn’t have to. It is implicit. The 4 W/m2 is a flux. “Flux is a rate of flow through a surface or substance in physics”. Flow doesn’t exist without implicit time units. The unit of time for the 4 W/m2 is clearly a year.

This is incorrect.

A “watt” is one joule per second. This describes the flow of energy – one joule per second.

If you want to “propagate” an uncertainty in W/m^2 (i.e., J/s/m2), you integrate with respect to time (s) and over the surface (m2). The result is an uncertainty in joules, which can be converted to an uncertainty in temperature through the heat capacity of the body in question.

In both real life and in the models, though, an uncertainty of temperature cannot grow without bounds; it is sharply limited by the Stefan-Boltzmann law, which says that hotter bodies radiate away heat much faster, and colder bodies much more slowly. Combining the two, the control from the SB law dominates the uncertainty, and the result of propagating the forcing uncertainty is a static uncertainty in temperature.

Now, if your uncertainty was in W/m2/year, meaning that your forcing uncertainty was growing year over year, then yeah, that’s something different.
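The integration step Windchaser describes, flux × time → energy → temperature via heat capacity, can be sketched with rough assumed numbers. The effective heat capacity below (roughly a 70 m ocean mixed layer) is my assumption for illustration, not a figure from either paper:

```python
SECONDS_PER_YEAR = 3.156e7

# Assumed effective heat capacity per m^2 for a ~70 m ocean mixed layer:
# 70 m * 1000 kg/m^3 * 4186 J/(kg K) ~= 2.9e8 J/(m^2 K)
HEAT_CAPACITY = 2.9e8  # J/(m^2 K)

def temp_equivalent(flux_w_per_m2, years):
    """Temperature change implied by a sustained flux imbalance over `years`."""
    joules = flux_w_per_m2 * SECONDS_PER_YEAR * years  # W/m^2 * s = J/m^2
    return joules / HEAT_CAPACITY                      # J/m^2 / (J/(m^2 K)) = K

print(round(temp_equivalent(4.0, 1), 2))  # ~0.44 K for one year of 4 W/m^2
```

This linear bookkeeping is only the unit conversion; it deliberately omits the Stefan-Boltzmann feedback described above, which is what would keep the temperature response from growing without bound.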

Reply to  onion
September 9, 2019 12:28 pm

Onion,
In addition to the counterarguments above, there is the issue of the magnitude of natural variability.
We have seen great effort expended by alarmists to convince everyone that natural variability is very small.
They have done so using a variety of deceptive means.
Unless one accepts hockey stick graphs based on proxies, and accepts highly dubious adjustments to historical data, there is no reason to believe what they say about recent warming being outside the bounds of natural variability.
There is no place on the globe where the current temperature regime is outside what has been observed and measured historically.
IOW…there is no place on Earth where the past year has been the warmest year ever measured and recorded, but we are to believe that somehow the whole planet is warmer than ever?
On top of that, almost all measured surface warming consists of less severe low temperatures in Winter, at night, and in the high latitudes.
Why are we not told we are having a global milding catastrophe then?

Reply to  onion
September 9, 2019 8:07 pm

The uncertainty of future throws is always the same. The throws are mutually exclusive and each throw stands on its own, even if you’ve already thrown a gazillion times.

The uncertainty can never diminish.

Windchasers
Reply to  Jim Gorman
September 11, 2019 9:42 am

But neither does it increase, as would be the case if your uncertainty had units of /time.

Reply to  Windchasers
September 11, 2019 10:32 am

Time in relation to the outcome of unique events has no meaning to begin with. Including time with coin tosses makes no sense at all. Trying to assign a time value to unique events that have a limited and finite outcome just doesn’t work. Coin tosses are not flows that have a value over a time interval.

Windchaser
Reply to  Jim Gorman
September 11, 2019 1:08 pm

Coin tosses are not flows that have a value over a time interval.

Sure. And the flows over a time interval have an uncertainty, sure. But that uncertainty is in the same units as the flows themselves.

W/m2 can also be described so as to make the time explicit: Joules, per second, per meters squared. J/m2/s. If you try to measure this, and do so imperfectly, your uncertainty is also J/m2/second.

Frank is adding an extra time unit on to this: J/m2/second/year. But just as changing m/s to m/s/s makes you go from velocity to its rate of change, acceleration, Frank’s change would also make this now describe the rate of change of the uncertainty.

The value given by these scientists was explicitly about the uncertainty. They measured the forcing (W/m2), and then gave an uncertainty value for it (also W/m2). The uncertainty can not also describe the rate of change of the uncertainty. They are two different things.

I think this is just a mistake with respect to units. Nothing more, nothing less.

September 8, 2019 1:44 pm

Pat
An alarming conclusion about CAGW alarmism!
The predictions may be invalid as you have shown due to chaotic and stochastic instability of the system and consequent uncontrolled error propagation.
But I guess that’s not the same thing as confirming the validity or otherwise about the hypothesised mechanism of CO2 back radiation warming.
That hypothesis runs into problems of its own also related to chaos and regulatory self-organisation.
But that’s not the same as the problems of error propagation that your paper deals with?
Is this a valid distinction or not?
Thanks.

Reply to  Phil Salmon
September 8, 2019 5:52 pm

Phil, the cloud fraction (CF) error need not be due to chaotic and stochastic instability. It could be due to deployment of incorrect theory.

The fact that the error in simulated CF is strongly pair-wise correlated in the CMIP5 models argues for this interpretation. They all make highly similar errors in CF.

Jordan
September 8, 2019 1:53 pm

Pat Frank. I have spent the day reading your paper and looking at the responses. I really like your approach and logical reasoning, and I expect it to be a worthy challenge to both the GCM community, and those who are so utterly dependent on GCM output to reach their “conclusions”.

I wonder if your point about “spread” as a measure of precision could have consequences for those who seem to consider GCM unforced variability is some kind of indicator of natural variability. Just a thought.

I see one source of indignation as (in effect) demonstrating that the billions spent on simulating the physical atmosphere make little overall difference (in terms of GAST) compared to linear extrapolation of CO2 forcing. That’s going to feel like a bit of a slap in the face.

Another challenge will be those who characterise your emulation of GCMs as tantamount to creating your own GCM (such as Stokes). It could take quite a lot of wiping to get this off the bottom of your shoe (figuratively speaking).

Reply to  Jordan
September 8, 2019 5:58 pm

Thanks, Jordan.

You’re right that some people mistakenly see the emulator as a climate model. This came up repeatedly among my reviewers.

But in the paper, I make it clear, repeating it several times, that the emulator has nothing to do with the climate. It has only to do with the behavior of GCMs.

It shows that GCM air temperature projections are just linear extrapolations of GHG forcing.

With that and the long wave CF error, the rest of the analysis follows.

You’re also right that there could be a huge money fallout. One can only hope, because it would rectify a huge abuse.