Propagation of Error and the Reliability of Global Air Temperature Projections, Mark II.

Guest post by Pat Frank

Readers of Watts Up With That will know from Mark I that for six years I have been trying to publish a manuscript bearing the title of this post. Well, it has passed peer review and is now published at Frontiers in Earth Science: Atmospheric Science. The paper demonstrates that climate models have no predictive value.

Before going further, my deep thanks to Anthony Watts for giving a voice to independent thought. So many have sought to suppress it (freedom denialists?). His gift to us (and to America) is beyond calculation. And to Charles the moderator, my eternal gratitude for making it happen.

Onward: the paper is open access. It can be found here, where it can be downloaded; the Supporting Information (SI) is here (7.4 MB pdf).

I would like to publicly honor my manuscript editor, Dr. Jing-Jia Luo, who displayed the courage of a scientist and a level of professional integrity that was found lacking among so many others during my six-year journey.

Dr. Luo chose four reviewers, three of whom were apparently not conflicted by investment in the AGW status quo. They produced critically constructive reviews that helped improve the manuscript. To these reviewers I am very grateful. They provided the dispassionate professionalism and integrity that had been in very rare evidence during the reviews of my prior submissions.

So, all honor to the editors and reviewers of Frontiers in Earth Science. They rose above the partisan and hewed to the principled standards of science when so many did not, and do not.

A digression into the state of practice: Anyone wishing a deep dive can download the entire corpus of reviews and responses for all 13 prior submissions, here (60 MB zip file, Webroot scanned virus-free). Choose “free download” to avoid advertising blandishment.

Climate modelers produced about 25 of the prior 30 reviews. You’ll find repeated editorial rejections of the manuscript on the grounds of objectively incompetent negative reviews. I have written about that extraordinary reality at WUWT here and here. In 30 years of publishing in Chemistry, I never once experienced such a travesty of process. For example, this paper overturned a prediction from Molecular Dynamics and so had a very negative review, but the editor published anyway after our response.

In my prior experience, climate modelers:

· did not know to distinguish between accuracy and precision.

· did not understand that, for example, a ±15 C temperature uncertainty is not a physical temperature.

· did not realize that deriving a ±15 C uncertainty to condition a projected temperature does *not* mean the model itself is oscillating rapidly between icehouse and greenhouse climate predictions (an actual reviewer objection).

· confronted standard error propagation as a foreign concept.

· did not understand the significance or impact of a calibration experiment.

· did not understand the concept of instrumental or model resolution, or that it has empirical limits.

· did not understand physical error analysis at all.

· did not realize that ‘±n’ is not ‘+n.’

Some of these traits consistently show up in their papers. I’ve not seen one that deals properly with physical error, with model calibration, or with the impact of model physical error on the reliability of a projected climate.

More thorough-going analyses have been posted up at WUWT, here, here, and here, for example.

In climate model papers the typical uncertainty analyses are about precision, not about accuracy. They are appropriate to engineering models that reproduce observables within their calibration (tuning) bounds. They are not appropriate to physical models that predict future or unknown observables.

Climate modelers are evidently not trained in the scientific method. They are not trained to be scientists. They are not scientists. They are apparently not trained to evaluate the physical or predictive reliability of their own models. They do not manifest the attention to physical reasoning demanded by good scientific practice. In my prior experience they are actively hostile to any demonstration of that diagnosis.

In their hands, climate modeling has become a kind of subjectivist narrative, in the manner of the critical theory pseudo-scholarship that has so disfigured the academic Humanities and Sociology Departments, and that has actively promoted so much social strife. Call it Critical Global Warming Theory. Subjectivist narratives assume what should be proved (CO₂ emissions equate directly to sensible heat), treat their assumptions as carrying the weight of evidence (CO₂ and temperature, see?), and find every study confirmatory (it’s worse than we thought).

Subjectivist narratives and academic critical theories are prejudicial constructs. They are in opposition to science and reason. Over the last 31 years, climate modeling has attained that state, with its descent into unquestioned assumptions and circular self-confirmations.

A summary of results: The paper shows that advanced climate models project air temperature merely as a linear extrapolation of greenhouse gas (GHG) forcing. That fact is multiply demonstrated, with the bulk of the demonstrations in the SI. A simple equation, linear in forcing, successfully emulates the air temperature projections of virtually any climate model. Willis Eschenbach also discovered that independently, a while back.
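As a purely illustrative sketch of what “linear in forcing” means (this is not the paper’s Eq. 1, and every number below is invented), one can fit a straight line of projected anomaly against cumulative GHG forcing and check how closely it tracks a projection:

```python
import numpy as np

# Illustrative only: invented numbers, not output from any real GCM and not the paper's Eq. 1.
years = np.arange(2006, 2101)
d_forcing = 0.035 * (years - 2006)       # assumed cumulative GHG forcing change, W m^-2

rng = np.random.default_rng(42)
# Stand-in for a GCM air-temperature projection: roughly linear in forcing plus some wiggle.
gcm_anomaly = 1.0 * d_forcing + 0.1 * rng.standard_normal(years.size)

# "Emulator": a straight line fitted to the GCM output.
slope, intercept = np.polyfit(d_forcing, gcm_anomaly, 1)
emulated = slope * d_forcing + intercept

r = np.corrcoef(gcm_anomaly, emulated)[0, 1]
print(f"fitted slope = {slope:.2f} K per W m^-2, correlation with 'GCM' output = {r:.3f}")
```

The paper and SI do this exercise against actual GCM projections and the standard forcing series; the point here is only that a one-line emulator of this form can track such projections closely.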

After showing its efficacy in emulating GCM air temperature projections, the linear equation is used to propagate the root-mean-square annual average long-wave cloud forcing systematic error of climate models, through their air temperature projections.

The uncertainty in projected temperature is ±1.8 C after 1 year for a 0.6 C projection anomaly and ±18 C after 100 years for a 3.7 C projection anomaly. The predictive content in the projections is zero.
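For readers checking the arithmetic: if the ±1.8 C figure is taken as the uncertainty contributed in each simulated year, and the yearly contributions are combined in quadrature (the accumulation attributed to the paper's Eq. 6 in the comment discussion below), the 100-year figure follows directly:

\[ u_{100} = \sqrt{\sum_{i=1}^{100} (1.8\ \mathrm{C})^{2}} = 1.8\ \mathrm{C}\times\sqrt{100} = 18\ \mathrm{C}. \]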

In short, climate models cannot predict future global air temperatures; not for one year and not for 100 years. Climate model air temperature projections are physically meaningless. They say nothing at all about the impact of CO₂ emissions, if any, on global air temperatures.

Here’s an example of how that plays out.

[Figure: GISS model E2-H-p1 RCP8.5 projection, linear emulation, and uncertainty envelope; caption below.]

Panel a: blue points, GISS model E2-H-p1 RCP8.5 global air temperature projection anomalies. Red line, the linear emulation. Panel b: the same except with a green envelope showing the physical uncertainty bounds in the GISS projection due to the ±4 Wm⁻² annual average model long wave cloud forcing error. The uncertainty bounds were calculated starting at 2006.

Were the uncertainty to be calculated from the first projection year, 1850 (not shown in the Figure), the uncertainty bounds would be very much wider, even though the known 20th-century temperatures are well reproduced. The reason is that the underlying physics within the model is not correct. Therefore, there’s no physical information about the climate in the projected 20th century temperatures, even though they are statistically close to observations (due to model tuning).

Physical uncertainty bounds represent the state of physical knowledge, not of statistical conformance. The projection is physically meaningless.

The uncertainty due to annual average model long-wave cloud forcing error alone (±4 Wm⁻²) is about 114 times larger than the annual average increase in CO₂ forcing (about 0.035 Wm⁻²). A complete inventory of model error would produce enormously greater uncertainty. Climate models are completely unable to resolve the effects of the small forcing perturbation from GHG emissions.
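The factor of about 114 is just the ratio of the two numbers quoted in this paragraph:

\[ \frac{4\ \mathrm{W\,m^{-2}}}{0.035\ \mathrm{W\,m^{-2}}} \approx 114. \]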

The unavoidable conclusion is that whatever impact CO₂ emissions may have on the climate cannot have been detected in the past and cannot be detected now.

It seems Exxon didn’t know, after all. Exxon couldn’t have known. Nor could anyone else.

Every single model air temperature projection since 1988 (and before) is physically meaningless. Every single detection-and-attribution study since then is physically meaningless. When it comes to CO₂ emissions and climate, no one knows what they’ve been talking about: not the IPCC, not Al Gore (we knew that), not even the most prominent of climate modelers, and certainly no political poser.

There is no valid physical theory of climate able to predict what CO₂ emissions will do to the climate, if anything. That theory does not yet exist.

The Stefan-Boltzmann equation is not a valid theory of climate, although people who should know better evidently think otherwise, including the NAS and every US scientific society. Their behavior in this is the most amazing abandonment of critical thinking in the history of science.

Absent any physically valid causal deduction, and noting that the climate has multiple rapid response channels to changes in energy flux, and noting further that the climate is exhibiting nothing untoward, one is left with no bearing at all on how much warming, if any, additional CO₂ has produced or will produce.

From the perspective of physical science, it is very reasonable to conclude that any effect of CO₂ emissions is beyond present resolution, and even reasonable to suppose that any possible effect may be so small as to be undetectable within natural variation. Nothing among the present climate observables is in any way unusual.

The analysis upsets the entire IPCC applecart. It eviscerates the EPA’s endangerment finding, and removes climate alarm from the US 2020 election. There is no evidence whatever that CO₂ emissions have increased, are increasing, will increase, or even can increase, global average surface air temperature.

The analysis is straightforward. It could have been done, and should have been done, 30 years ago. But it was not.

All the dark significance attached to whatever is the Greenland ice-melt, or to glaciers retreating from their LIA high-stand, or to changes in Arctic winter ice, or to Bangladeshi deltaic floods, or to Kiribati, or to polar bears, is removed. None of it can be rationally or physically blamed on humans or on CO₂ emissions.

Although I am quite sure this study is definitive, those invested in the reigning consensus of alarm will almost certainly not stand down. The debate is unlikely to stop here.

Raising the eyes, finally, to regard the extended damage: I’d like to finish by turning to the ethical consequence of the global warming frenzy. After some study, one discovers that climate models cannot model the climate. This fact was made clear all the way back in 2001, with the publication of W. Soon, S. Baliunas, S. B. Idso, K. Y. Kondratyev, and E. S. Posmentier, “Modeling climatic effects of anthropogenic carbon dioxide emissions: unknowns and uncertainties,” Climate Res. 18(3), 259-275 (2001), available here. The paper remains relevant.

In a well-functioning scientific environment, that paper would have put an end to the alarm about CO₂ emissions. But it didn’t.

Instead, the paper was disparaged and then nearly universally ignored. (Reading it in 2003 is what set me off; it was immediately obvious that climate modelers could not possibly know what they claimed to know.) There will likely be attempts to do the same to my paper: derision followed by burial.

But we now know this for a certainty: all the frenzy about CO₂ and climate was for nothing.

All the anguished adults; all the despairing young people; all the grammar school children frightened to tears and recriminations by lessons about coming doom, and death, and destruction; all the social strife and dislocation. All the blaming, all the character assassinations, all the damaged careers, all the excess winter fuel-poverty deaths, all the men, women, and children continuing to live with indoor smoke, all the enormous sums diverted, all the blighted landscapes, all the chopped and burned birds and the disrupted bats, all the huge monies transferred from the middle class to rich subsidy-farmers.

All for nothing.

There’s plenty of blame to go around, but the betrayal of science garners the most. Those offenses would not have happened had not every single scientific society neglected its duty of diligence.

From the American Physical Society right through to the American Meteorological Society, they all abandoned their professional integrity, and with it their responsibility to defend and practice hard-minded science. Willful neglect? Who knows. Betrayal of science? Absolutely for sure.

Had the American Physical Society been as critical of claims about CO₂ and climate as they were of claims about palladium, deuterium, and cold fusion, none of this would have happened. But they were not.

The institutional betrayal could not be worse. It is worse than Lysenkoism, because there was no Stalin to hold a gun to their heads. They all volunteered.

These outrages (the deaths, the injuries, the anguish, the strife, the misused resources, the ecological offenses) were in their hands to prevent, and so are on their heads for account.

In my opinion, the management of every single US scientific society should resign in disgrace. Every single one of them. Starting with Marcia McNutt at the National Academy.

The IPCC should be defunded and shuttered forever.

And the EPA? Who exactly is it that should have rigorously engaged, but did not? In light of apparently studied incompetence at the center, shouldn’t all authority be returned to the states, where it belongs?

And, in a smaller but nevertheless real tragedy, who’s going to tell the so cynically abused Greta? My imagination shies away from that picture.

An Addendum to complete the diagnosis: It’s not just climate models.

Those who compile the global air temperature record do not even know to account for the resolution limits of the historical instruments, see here or here.

They have utterly ignored the systematic measurement error that riddles the air temperature record and renders it unfit for concluding anything about the historical climate, here, here and here.

These problems are in addition to bad siting and UHI effects.

The proxy paleo-temperature reconstructions, the third leg of alarmism, have no distinct relationship at all to physical temperature, here and here.

The whole AGW claim is built upon climate models that do not model the climate, upon climatologically useless air temperature measurements, and upon proxy paleo-temperature reconstructions that are not known to reconstruct temperature.

It all lives on false precision; a state of affairs fully described here, peer-reviewed and all.

Climate alarmism is artful pseudo-science all the way down; made to look like science, but which is not.

Pseudo-science not called out by any of the science organizations whose sole reason for existence is the integrity of science.



886 Comments
bit chilly
September 7, 2019 5:00 pm

Well done Dr Frank, an excellent display of tenacity in the face of obstinacy. Have followed this story since you first wrote about it on WUWT and congratulate you on pushing it to its conclusion. I can’t wait to read what a certain Mr Stokes has to say.

John Q Public
September 7, 2019 5:25 pm

Conclusion: S/N ≈ 0

PATRICK J MICHAELS
September 7, 2019 5:34 pm

The rot affecting climate “science” (i.e. data trashing and acceptance of failing models) is not confined to just this corrupted field. On November 7, Cato Books will release my new “Scientocracy: The Tangled Web of Public Science and Public Policy.”

Besides climate science, we have fine contributions covering dietary fat, dietary salt, a general review of scientific corruption, the destructive opioid war, ionizing radiation and carcinogen regulations, PM 2.5 regulations, and massive government takings in the name of “science”, including the US’s largest uranium deposit and the world’s largest copper-gold-molybdenum deposit.

4 Eyes
Reply to  PATRICK J MICHAELS
September 7, 2019 8:32 pm

I very much look forward to getting a copy. Guys like you and Pat F and Anthony W and the many other fine highly qualified posters here give me confidence that all is not lost. Thank you all.

John F. Hultquist
Reply to  PATRICK J MICHAELS
September 7, 2019 8:55 pm

PJM,
Thanks for the heads-up.
I’ve always found the history of science interesting.
A good analogy to the current post can be found in the development of understanding of the mega-floods proposed as the cause of Eastern Washington’s Channeled Scablands. J. Harlen Bretz’s massive flooding hypothesis was seen as arguing for a catastrophic explanation of the geology, against the prevailing view of uniformitarianism.

Also, thanks to Pat Frank and those who support him.

September 7, 2019 5:40 pm

Where’s the Steven Mosher driveby?
And where’s Nick Stokes?

Clyde Spencer
Reply to  markx
September 7, 2019 8:35 pm

markx
The “local crew?” 🙂 I imagine we will eventually hear from them after they put their heads together with others to come up with some smoke to blow. If there was anything seriously wrong with Pat’s paper it would have jumped out at them and provided them with an immediate response.

Reply to  Clyde Spencer
September 7, 2019 11:13 pm

” If there was anything seriously wrong with Pat’s paper it would have jumped out”
The paper isn’t new. I’ve had plenty to say on previous threads, eg here, and it’s all still true. And it agrees with those 30 previous reviews that rejected it. They were right.

Here’s one conundrum. He starts out with a simple model that he says emulates very closely the behaviour of numerous GCM’s. He says, for example, “Figure 2 shows the further successful emulations of SRES A2, B1, and A1B GASAT projections made using six different CMIP3 GCMs.”
And that is basically over the coming century, and there is good agreement.

But then he says that the GCMs are subject to huge uncertainties, as shown in the head diagram. Eg “At the current level of theory an AGW signal, if any, will never emerge from climate noise no matter how long the observational record because the uncertainty width will necessarily increase much faster than any projected trend in air temperature.”

How then is it possible for his simple model to so closely emulate the GCM predictions if the predictions are so uncertain as to be meaningless?

HAS
Reply to  Nick Stokes
September 8, 2019 12:23 am

Nick, I trust you understand that the simple emulator emulates the GCMs without the systemic uncertainty. It is then used to identify the reduction in precision that would be in the GCMs had they included that uncertainty.

You might have some legit objections (I must look back), but this isn’t one of them.

Reply to  HAS
September 8, 2019 2:19 am

“emulates the GCMs without the systemic uncertainty”
It is calculated independently, using things that GCM’s don’t use, such as feedback factors and forcings. Yet it yields very similar results for a long period. How is this possible if GCM’s have huge inherent uncertainties? How did the emulator emulate the effects of those uncertainties to reproduce the same result?

“some legit objections (I must look back)”
Well, here is one you could start with. Central to the arithmetic is PF’s proposition that if you average 20 years of cloud cover variability (it comes to 4 W/m2) the units of the average are not W/m2, but W/m2 per year, because the data was binned in years. That then converts to a rate, which determines the error spread. If you binned in months you’d get a different (and much larger) estimate of GCM error.

Reply to  HAS
September 8, 2019 10:38 am

The emulation is of the GCM projected temperatures, Nick. They’re numbers. The uncertainty concerns the physical meaning of those numbers, not their magnitude.

But you knew that.

I deal with your prior objections, beginning here. None of your objections amounted to anything.

I don’t “say” the GCM emulation equation is successful, Nick. I demonstrate the success.

Reply to  HAS
September 8, 2019 11:29 am

“The uncertainty concerns the physical meaning of those numbers, not their magnitude.”
You say here
“The uncertainty in projected temperature is ±1.8 C after 1 year for a 0.6 C projection anomaly and ±18 C after 100 years for a 3.7 C projection anomaly. “
If ±18 C doesn’t refer to magnitude, what does it refer to?

“I demonstrate the success.”
Not disputed (here). My point is that how could an emulating process successfully predict the GCM results if the GCM’s labor under so much uncertainty? You say “The predictive content in the projections is zero.”. But then you produce emulating processes that seem to totally agree. You may say that they have no predictive value either. But how can two useless predictors agree so well?

HAS
Reply to  HAS
September 8, 2019 1:46 pm

Nick, you need to make a rigorous distinction between the domain of GCM results and the real world. If we stick to the former, then the emulator is fitted to it over the instrumental period and shows a good fit to the 100-year projections. The emulator won’t (necessarily) emulate stuff that isn’t in the GCM domain. If GCMs were changed so they modelled the cloud system accurately, then that would define a new domain of GCM results, and the current emulator would most likely not work. It is estimating the difference between the current and better GCM domains that this work addresses, as I read it.

Two additional comments:

1. It is likely that the better GCMs will converge and be stable around a different set of projections. The way they are developed and the intrinsic degrees of freedom mean that any that don’t will be discarded, and this is the error ATTP makes below. The fact that they aren’t unstable only tells us that Darwin was right.

I should add that your language seems to suggest you are thinking that the claim being made is that the GCMs are somehow individually unstable, rather the claim is that the error (lack of precision) is systemic, reinforcing the point about the likely convergence of better GCMs (think, better instruments).

2. One critique of the method is that the emulator might not be stable when applied to the better GCM domain, and therefore the error calculations derived from it can’t be mapped back i.e. errors derived in emulator world don’t apply in GCM domain. One thought (and this might have been done) is to simply apply the emulator to its observed inputs and run a projection with errors and compare that with the output of GCMs.

Anyway I need to look more closely, but as I say I think you are barking up the wrong tree.

Reply to  HAS
September 8, 2019 3:30 pm

“If ±18 C doesn’t refer to magnitude, what does it refer to?”

You know, it almost sounds as if Nick doesn’t know what uncertainty is actually a measurement of. I’m pretty sure that he and Steven think that the Law of Large Numbers improves the accuracy of the mean.

I know that Steven posted on my blog that BEST doesn’t produce averages, they produce predictions. However, this does not stop the BEST page from claiming “2018 — Fourth Hottest Year on Record” or what have you.

Reply to  HAS
September 8, 2019 4:50 pm

“You know, it almost sounds as if Nick doesn’t know “
So can you answer the question – what does it refer to?

Reply to  HAS
September 8, 2019 6:28 pm

Nick, “If ±18 C doesn’t refer to magnitude, what does it refer to?

It refers to an uncertainty bound.

Nick, “Not disputed (here).

Where is it disputed, Nick?

Nick, “My point is that how could an emulating process successfully predict the GCM results if the GCM’s labor under so much uncertainty?

Because uncertainty does not affect the magnitude of an expectation value. It provides an expression of the reliability of that magnitude.

Reply to  HAS
September 8, 2019 6:33 pm

One more point about “My point is that how could an emulating process successfully predict the GCM results if the GCM’s labor under so much uncertainty?”, which is that uncertainty is not simulation error.

You seem to be confused about the difference, Nick.

Reply to  HAS
September 8, 2019 7:54 pm

“Nick, “If ±18 C doesn’t refer to magnitude, what does it refer to?
It refers to an uncertainty bound.”

So what are the numbers? If you write ±18, it means some number has a range maybe 18 higher or lower. As in 24±18. But what is the number here? Is it the bound to which you apply the ±18?

“that uncertainty is not simulation error”
Well, they are using different data, and still get the same result. What else is there?

Phil
Reply to  HAS
September 8, 2019 8:30 pm

Stokes September 8, 2019 at 2:19 am

Central to the arithmetic is PF’s proposition that if you average 20 years of cloud cover variability (it comes to 4 W/m2) the units of the average are not W/m2, but W/m2 per year, because the data was binned in years.

OK, Nick, I’ll bite. You say the error is 4 W/m2 and not 4 W/m2 per year. That means that every time clouds are calculated the error in the model is 4 W/m2. IIRC, GCMs have a time step of around 20 minutes. Therefore, one would have to assume a propagation error of 4 W/m2 every step, or every 20 minutes. That would mean that Pat Frank is way wrong and has grossly underestimated the uncertainty, since he assumes the ridiculously low figure of 4 W/m2 per year. In one year there would be 26,300 or so iterations. Is that what you mean?
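For reference, the iteration count quoted above follows directly from the assumed 20-minute time step:

\[ \frac{60}{20}\times 24 \times 365.25 \approx 26{,}298\ \text{steps per year}. \]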

Reply to  HAS
September 8, 2019 8:35 pm

Nick Stokes
September 8, 2019 at 4:50 pm

“You know, it almost sounds as if Nick doesn’t know “
So can you answer the question – what does it refer to?

I explained it to you several times in that thread — do you still not remember? It’s the standard deviation of the sampling distribution of the mean. It is not an improvement on the accuracy of the mean — it says that if you repeat the sampling experiment, you will have a roughly 67% chance that the new mean will lie within one standard error of the first mean.

It does not say that if you take 10,000 temperature measurements reported to one decimal point, you can claim to know the mean to three decimal points.

It’s a reduction in the uncertainty, not an increase in the accuracy.
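A hedged numerical sketch of the distinction drawn above (all numbers invented; this is not BEST’s procedure): averaging many readings shrinks the standard error of the mean, but it leaves a shared systematic offset untouched.

```python
import numpy as np

rng = np.random.default_rng(1)
true_temp = 15.0          # "true" value, invented for illustration
systematic_offset = 0.3   # a bias shared by every reading, also invented

n = 10_000
# Each reading: true value + shared bias + random noise, reported to one decimal place.
readings = np.round(true_temp + systematic_offset + 0.5 * rng.standard_normal(n), 1)

mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(n)   # standard error of the mean

print(f"mean = {mean:.3f}, SEM = {sem:.4f}, offset from truth = {mean - true_temp:+.3f}")
# The SEM is a few thousandths of a degree, but the mean still sits ~0.3 from the
# true value: averaging reduced the random scatter, not the systematic error.
```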

Reply to  HAS
September 8, 2019 9:22 pm

“That would mean that Pat Frank is way wrong”
The conclusion is true, but not the premise. Your argument is a reductio ad absurdum; errors of thousands of degrees. But Pat’s conclusions are absurd enough to qualify.

In fact the L&H figure is based on the correlation between GCMs and observed, so it doesn’t make sense trying to express correlation on the scale of GCM timesteps. There has to be some aggregation. The point is that it is an estimate of a state variable, like temperature. You can average it by aggregating over months, or years. Whatever you do, you get what should be an estimate of the same quantity, which L&H express as 4 W m^-2.

It’s just a constant level of uncertainty, but Pat Frank wants to regard it as accumulating at a certain rate. That’s wrong, but at once the question would be, what rate? If you do it per timestep, you get obviously ridiculous results. Pat adjusts the L&H data to say 4 W m^-2/year, on the basis that they graphed annual averages, and gets slightly less ridiculous results. Better than treating it as a rate per month, which would be equally arbitrary. Goodness knows what he would have done if L&H had applied a smoothing filter to the unbinned data.
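As a purely numerical illustration of the binning objection above (whether this is the right reading of the paper's Eq. 6 is exactly what is disputed in this thread), accumulating the same ±4 W m⁻² figure in quadrature gives

\[ 4\sqrt{20} \approx 17.9\ \mathrm{W\,m^{-2}}\ \text{(annual binning, 20 years)} \quad\text{vs.}\quad 4\sqrt{240} \approx 62\ \mathrm{W\,m^{-2}}\ \text{(monthly binning, 240 months)}. \]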

Reply to  HAS
September 8, 2019 9:47 pm

Nick, “So what are the numbers? If you write ±18, it means some number has a range maybe 18 higher or lower.

Around an experimental measurement mean, yes.

Calibration error propagated as uncertainty around a model expectation value, no.

Reply to  HAS
September 8, 2019 9:50 pm

If he does, Phil, he’s wrong, because the ±4 W/m^2 is an annual average calibration error, not a 20 minute average.

Phil
Reply to  HAS
September 9, 2019 6:04 pm

Nick Stokes on September 8, 2019 at 9:22 pm

You state:

The point is that (4 W/m2) is an estimate of a state variable, like temperature.

The 4 W/m2 is a flux. “Flux is a rate of flow through a surface or substance in physics”. It is not a state variable. Flow doesn’t exist without implicit time units. The unit of time for the 4 W/m2 is clearly a year.

Reply to  HAS
September 9, 2019 6:26 pm

“The unit of time for the 4 W/m2 is clearly a year.”
Why a year? But your argument is spurious. Lauer actually says
“the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m^-2)”

So he’s actually calculating a correlation, which he then interprets as a flux. The issue is about averaging a quantity over time, whether a flux or not. If you’re trying to hang it on units, the unit of time is a second, not a year.

Suppose you were trying to estimate the solar constant. It’s a flux. You might average insolation over a year, and find something like 1361 W/m2. The fact that you averaged over a year doesn’t make it 1361 W/m2/year.

RW
Reply to  HAS
September 9, 2019 10:04 pm

Pat Frank was curve fitting. Sure, with some principles behind it and to reduce overly complicated GCMs to something manageable for error analysis. But the fact that we can curve fit doesn’t impart validity to the values the curve is trying to fit.

Reply to  HAS
September 10, 2019 5:53 pm

“So as I’ve suggested stop squabbling about Frank’s first law”
I’ve said way back, I don’t dispute Frank’s first law (pro tem). I simply note that if GCMs are held to have huge uncertainty due to supposed accumulation of cloud uncertainty, and if another simple model doesn’t have that uncertainty but matches almost exactly, then something doesn’t add up.

You haven’t said anything about Pat’s accumulation of what is a steady uncertainty, let alone why it should be accumulated annually.

Reply to  Nick Stokes
September 8, 2019 2:21 am

How then is it possible for his simple model to so closely emulate the GCM predictions if the predictions are so uncertain as to be meaningless?

Because models are simply responding to ever increasing CO2, that’s why they are so easy to imitate with a simple function. However as the climate has not been shown to respond in the same way to CO2, any small difference in the response will propagate creating a huge uncertainty. And the difference is likely to be huge because CO2 appears to have little effect on climate, much lower than the ever increasing effect in each model crop.

Reply to  Javier
September 8, 2019 2:50 am

” any small difference in the response will propagate creating a huge uncertainty”
But the emulator and GCMs are calculating the response differently. How can they come so close despite that “huge uncertainty”?

Reply to  Nick Stokes
September 8, 2019 5:53 am

Barely tracking above random noise isn’t really a version of “come so close.”

Fig. 1. Global (70S to 80N) Mean TLT Anomaly plotted as a function of time. The black line is the time series for the RSS V4.0 MSU/AMSU atmospheric temperature dataset. The yellow band is the 5% to 95% range of output from CMIP-5 climate simulations.

http://www.remss.com/research/climate/

Reply to  Javier
September 8, 2019 3:37 am

Obviously because the emulator “emulates” the result of the GCMs not the way they work. We have all seen the spaghetti of model simulations in CMIP5. Nearly all of them are packed with very similar trends, and that similarity is claimed to be reproducibility when in reality it means they all work within very similar specifications constrained by having to reproduce past temperature and respond to the same main forcing (CO2) in a similar manner. It is all a very expensive fiction.

Reply to  Javier
September 8, 2019 10:08 am

“Barely tracking above random noise”
But it is far above the random noise that Pat Frank claims – several degrees.

“Obviously because the emulator “emulates” the result of the GCMs not the way they work”
It has to emulate some aspect of the way they work, else there is no point in analysing its error propagation.

Reply to  Javier
September 8, 2019 3:50 pm

It has to emulate some aspect of the way they work

Their dependency on CO2 to produce the warming if I understood correctly.

HAS
Reply to  Javier
September 9, 2019 3:56 pm

Nick
“But the emulator and GCMs are calculating the response differently. How can they come so close despite that ‘huge uncertainty’?”

I think the point is that the emulator does well without the cloud uncertainty, but with it you get a large difference from the set of GCMs.

As I said above, the question to explore is why the emulator works OK with the variability in forcing, but not when a systematic cloud forcing error is introduced – is it that the GCMs have been tuned to stabilise the other forcings, but haven’t had to address variability in the clouds and therefore fall down, or is it that there is a problem with the incorporation of the cloud errors? (I still haven’t put in the hard yards on the latter.)

The basic question seems to be that if GCMs incorporated cloud variability (if possible), would we see pretty similar results to today’s efforts, or could they be quite different.

Reply to  Javier
September 9, 2019 4:23 pm

“I think the point is that the emulator does well without the cloud uncertainty, but with it you get a large difference from the set of GCMs.”
No, “emulator does well” means it agrees with GCMs. And that is without cloud uncertainty. I don’t see that large difference demonstrated.

What do you think of the accumulation claims? Does the fact that cloud rmse of 4 W/m2 was measured by first taking annual averages mean that you can then say it accumulates every year by that amount (actually in quadrature, as in Eq 6)? Would you accumulate every month if you had taken monthly averages?

For extra fun, see if you can work out the units of the output of Eq 6.

HAS
Reply to  Javier
September 9, 2019 8:37 pm

Nick

“No, ’emulator does well’ means it agrees with GCMs. And that is without cloud uncertainty. I don’t see that large difference demonstrated.”

I’m unclear what you mean by the first two sentences. The “No” sounds like you disagree, but then you go on and repeat what I say.

As to the last sentence the emulator will inevitably move away from existing GCMs with a systemic change to forcings because of its basic structure. In the emulator the forcings are linear with dT.

What we don’t know is whether that is also what the updated GCMs would do. That is what is moot.

As I said I haven’t had time to look at the nature of the errors issue.

Reply to  Javier
September 9, 2019 9:01 pm

“I’m unclear what you mean”
The emulator as expressed in Eq 1, with no provision for cloud cover, agrees with GCM output. Pat makes a big thing of this. Yet the GCM has the cloud uncertainty. How could the simple model achieve that emulation without some treatment of clouds?

But in a way, it’s circular. The inputs to Eq 1 are derived from GCM output (not necessarily the same ones), so really all that is being shown is something about how that derivation was done. It’s linear, which explains the linear dependence. It does not show, as Pat claims, that
“An extensive series of demonstrations show that GCM air temperature projections are just linear extrapolations of fractional greenhouse gas (GHG) forcing.”

It also means that it is doubtful that analysing this “emulator” is going to tell anything about error propagation in the GCM.

HAS
Reply to  Javier
September 10, 2019 3:59 am

Nick, the GCMs don’t have the cloud uncertainty, they have different parameterized approximations across the model set. The emulator just emulates what that set produces by way of temperature output. This bit is all quite simple, but you don’t seem to be taking my earlier advice about being rigorous in your thinking about the different domains involved.

Your claim that the forcings used are a product of GCMs and therefore the system is circular is incorrect out of sample and irrelevant within. The emulator has been defined and performs pretty well with the projections.

Pat’s wording that they “are just linear extrapolations” is obviously not correct, but had he said “can be modelled by linear extrapolations” could you object?

Just accept that there is a simple linear emulator of the set of GCMs that does a pretty good job. Science is littered with simple models of more complex systems and they’ve even helped people like Newton make a name for themselves.

Geoff Sherrington
Reply to  Javier
September 10, 2019 8:22 am

For Nick Stokes,
It might help your thrust if you described why the average of numerous GCMs is used in CMIP exercises.
It might help further if you describe how the error terms of each model are treated so that an overall error estimate can be made of this average.
This is, as you know, a somewhat loaded question given that many feel that such an average is meaningless in the real physics world. Geoff S

Reply to  Javier
September 10, 2019 1:31 pm

“had he said “can be modelled by linear extrapolations” could you object?”

Yes, because it only describes the result. It’s like dismissing Newton’s first law
“Every body persists in its state of being at rest or of moving uniformly straight forward …”
Bah, that’s just linear extrapolation.
If something is behaving linearly, and the GCM’s say so, that doesn’t reveal an inadequacy of GCMs.

I don’t object to the fact that in this case a simple emulator can get the GCM result. I just point out that it makes nonsense of the claim that the GCMs have huge uncertainty. If that were true, nothing could emulate them.

HAS
Reply to  Javier
September 10, 2019 2:24 pm

Nick, spend more time reading what I wrote (and what the author wrote). All the emulator does is emulate the current set of GCMs. The uncertainty only arises when there is a new set of GCMs that incorporate the uncertainty in the clouds and the way it might propagate. The current set of GCMs don’t do that.

Until you grasp that, any further critique is a waste of time. You aren’t understanding the proposition and, as I said, are barking up the wrong tree.

HAS
Reply to  Javier
September 10, 2019 2:58 pm

I was going to just let the First Law comment pass, but on reflection it also suggests you are misunderstanding what is being argued.

Frank’s first law of current GCM temperature projections, (“change in temperature from projected current GCMs is a linear function of forcings”) is exactly analogous to Newton’s contributions, and just as one needs to step out of the domain of classical mechanics to invalidate his laws, so (it appears) we need to step outside the domain of current GCMs to invalidate Frank’s first law.

(I hasten to add that Frank’s first law is much more contingent than Newton’s, but the analogy applies directly.)

So as I’ve suggested stop squabbling about Frank’s first law and move on to discussing what happens when you are no longer dealing with a domain of GCMs that simplify cloud uncertainty. That’s what helped to make Einstein and Planck famous.

Reply to  Javier
September 10, 2019 5:55 pm

“So as I’ve suggested stop squabbling about Frank’s first law”
I’ve said way back, I don’t dispute Frank’s first law (pro tem). I simply note that if GCMs are held to have huge uncertainty due to supposed accumulation of cloud uncertainty, and if another simple model doesn’t have that uncertainty but matches almost exactly, then something doesn’t add up.

You haven’t said anything about Pat’s accumulation of what is a steady uncertainty, let alone why it should be accumulated annually. Error propagation is important in solving differential equations – stability for one thing – but it doesn’t happen like this.

HAS
Reply to  Javier
September 10, 2019 7:01 pm

Nick, what is it about this that you find hard to understand?

The current set of GCMs don’t model the cloud uncertainty, therefore there is nothing that doesn’t add up about them being able to be modelled by a simple emulator.

It’s what would happen if the GCMs included the uncertainty, and the means to propagate it through the projections, that is under discussion.

The problem that you are creating for yourself is that you are coming across as though there is a fatal flaw where there isn’t, and that undermines the seriousness with which anyone will take your other claims.

Still haven’t had time to look at that. Too much to do, so little time.

Reply to  Javier
September 10, 2019 10:37 pm

David M
Javier puts forward a similar chart. A picture paints a thousand words. Calculus minutiae are always debatable, as Nick Stokes’ always valuable contributions point out.

How far from the RCP 4.5 model forecasts do actual temperatures have to deviate before the believers say, hold on a minute, there is something wrong?

Anthony talks of war; others quite rightly ask how to communicate the contents of Pat’s paper. I have regularly stated that the most powerful weapon is a simple, clean, easy-to-digest chart like Javier’s or David’s, with a simple description embedded below. Updated monthly. Top of page. So far no response.

When the general public ask why the divergence, you give them Pat Frank’s paper. Simple, structured communication. I think they call it marketing. That’s why they have science communicators. It’s these charts that I include in polite correspondence to political leaders. They are not stupid, just misinformed. Theory versus reality.

Well done Pat. I understand your paper better having read all of the comments below.
Regards

Gwan
Reply to  Nick Stokes
September 8, 2019 2:39 am

Nick Stokes. You have lost any respect that many of us had for your opinions with your attacks on this and many other papers whose authors have searched for the truth.
Climate models are JUNK, and now governments around the world are making stupid decisions based on junk science.
All but one climate model runs hot, so it is very obvious to anyone with a brain that the wrong parameters have been entered and that the formula putting CO2 in the driver’s seat is faulty when it is a very small bit player.
We know that you are a true believer in global warming, but clouds cannot be and have not been modeled, and that is where all climate models fail. Clouds both cool and warm the earth.
Surely, with the desperate searching that has taken place, the theoretical tropical hot spot would have been located and rammed down the throats of the climate deniers if it existed.
The tropical hot spot is essential to global warming theory.
Your defense of Mike Mann here on WUWT also tells a lot about you.
Graham

Phil
Reply to  Nick Stokes
September 8, 2019 9:21 am

How then is it possible for his simple model to so closely emulate the GCM predictions if the predictions are so uncertain as to be meaningless?

This is rhetorical BS. Uncertainties are always calculated separately from model results. The emulation model does a reasonable job of emulating the GCMs, as can be observed graphically, so it can be used to estimate the propagation of uncertainty. Word games.

Reply to  Phil
September 8, 2019 10:42 am

Thank-you Phil. Dead on.

Steve McIntyre used to call Nick, “Racehorse Nick Stokes.” And for the reason you just illuminated.

Clyde Spencer
Reply to  Nick Stokes
September 8, 2019 6:10 pm

Stokes,
I read the original article you linked to when it first came out; I also read, and contributed to, the comments.

It seems that you have it in your mind that your comments were devastating and conclusive. However, going back and re-reading, I see that most commenters were not only not convinced, but came back with reasons why YOU were wrong.

What is needed is a compelling explanation as to why Pat is wrong. So, far, I haven’t seen it. But then you have a reputation with me (and others) of engaging in sophistry to win an argument, with the truth be damned. That is, you have low credibility, particularly when your arguments can be challenged.

Reply to  Clyde Spencer
September 8, 2019 6:58 pm

Clyde,
So do you accept that, if you average annual maxima for London over 30 years, the result should be given as 15°C/year? That reframing as a rate is critical to the result here. And its wrongness is blatant and elementary.

Reply to  Clyde Spencer
September 8, 2019 8:40 pm

±4 Wm^-2/year is not a rate, Nick. Neither is 15 C/year. There’s no velocity involved.

And yes, the annual average of maximum temperature would be 15 C/year. The metric just would not be useful for very much.

Reply to  Clyde Spencer
September 9, 2019 12:26 am

“And yes, the annual average of maximum temperature would be 15 C/year. The metric just would not be useful for very much.”
I got it from the Wikipedia table here. They give the units as °C, as with the other averages in the table. I wonder if any other readers here think they should have given it as °C/year? Or if you can find any reference that follows that usage?

Clyde Spencer
Reply to  Clyde Spencer
September 9, 2019 9:39 am

Stokes
You asked about London temperatures, “… the result should be given as 15°C/year?” I’m not clear on what your point is. Can you be specific as to how your question pertains to the thesis presented by Pat?

The use of a time unit in a denominator implies a rate of change, or velocity, which may be instantaneous or an average over some unit of time. The determination of whether the denominator is appropriate can be judged by doing a unit analysis of the equation in question. If all the units cancel, or leave only the units desired in the answer, then the parameter is used correctly.

If you are implying that somehow the units in Pat’s equation(s) are wrong, make direct reference to that, rather than bringing up some Red Herring called London.

Reply to  Clyde Spencer
September 9, 2019 11:20 am

Clyde
“If you are implying that somehow the units in Pat’s equation(s) are wrong, make direct reference to that”
I have done so, very loudly. But so has Pat. He maintains the nutty view that if you average some continuous variable over time, the units of the result are different to those of the variable, acquiring a /year tag. Pat does it with the rmse of the cloud error quantity LWCF. The units of that are not so familiar, but it is exactly the same in principle as averaging temperature. And Pat has confirmed that in his thinking, the units of average temperature should be °C/year. You seem not so sure. I wondered what others think.

In fact, even that doesn’t get the units right. I have added an update to my post, which focuses on this equation (6) in his paper, which makes explicit the accumulation process, which goes by variance – ie addition in quadrature. So he claims that the rmse is not 4 W/m2, as his source says, but 4 W/m2/year. When added in quadrature over 20 years, say, that is multiplied by sqrt(20), since the numbers are the same. That is partly why he gets such big numbers. Now in normal maths, the units of that would still be W/m2/year, which makes no sense, because it is a fixed period. Pat probably wants to turn around his logic and say the 20 is years so that changes the units. But because it is added in quadrature, the answer is now not W/m2, but W/m2/sqrt(year), which makes even less sense.

But you claim to have read it and found it makes sense. Surely you have worked out the units?

John Q Public
Reply to  Clyde Spencer
September 9, 2019 12:26 pm

Nick Stokes:

Here is what I read in Lauer’s paper.

I see that on page 3833, Section 3, Lauer starts to talk about the annual means. He says:

“Just as for CA, the performance in reproducing the observed multiyear **annual** mean LWP did not improve considerably in CMIP5 compared with CMIP3.”

He then talks a bit more about LWP, then starts specifying the mean values for LWP and other means, but appears to drop the formalism of stating “annual” means.

For instance, immediately following the first quote he says,
“The rmse ranges between 20 and 129 g m^-2 in CMIP3 (multimodel mean = 22 g m^-2) and between 23 and 95 g m^-2 in CMIP5 (multimodel mean = 24 g m^-2). For SCF and LCF, the spread among the models is much smaller compared with CA and LWP. The agreement of modeled SCF and LCF with observations is also better than that of CA and LWP. The linear correlations for SCF range between 0.83 and 0.94 (multimodel mean = 0.95) in CMIP3 and between 0.80 and 0.94 (multimodel mean = 0.95) in CMIP5. The rmse of the multimodel mean for SCF is 8 W m^-2 in both CMIP3 and CMIP5.”

A bit further down he gets to LCF (the uncertainty Frank employed):
“For CMIP5, the correlation of the multimodel mean LCF is 0.93 (rmse = 4 W m^-2) and ranges between 0.70 and 0.92 (rmse = 4–11 W m^-2) for the individual models.”

I interpret this as just dropping the formality of stating “annually” for each statistic because he stated it up front in the first quote.

Reply to  Clyde Spencer
September 9, 2019 6:19 pm

Let’s leave off the per year, as you’d have it Nick: 15 Celsius alone is the average maximum temperature for 30 years.

We now want to recover the original sum. So, we multiply 15 C by 30 years.

We get 450 Celsius-years.

What are they, Nick, those Celsius-years? Does Wikipedia know what they are, do you think? Do you know? Does anyone know?

Let’s see you find someone who knows what a Celsius-year is. After all, it’s your unit.

You’ve inadvertently supplied us with a basic lesson about science practice, which is to always do the dimensional analysis of your equations. One learns that in high-school.

One keeps all dimensions present throughout a calculation.

One then has a check that allows one to verify that when all the calculations are finished, the final result has the proper dimensions. All the intermediate dimensions must cancel away.

The only way to get back the original sum in Celsius is to retain the dimensions throughout. That means retaining the per year one obtains when dividing a sum of annual average temperatures by the number of years going into the average.

On doing so, the original sum of temperatures is recovered: 15 C/year x 30 years = 450 Celsius.

The ‘years’ dimension cancels away. Amazing, what?

The ‘per year’ does not indicate a velocity. It indicates an average.

One has to keep track of meaning and context in these things.

Clyde Spencer
Reply to  Clyde Spencer
September 9, 2019 6:25 pm

Stokes
In your “moyhu,” you say, “Who writes an RMS as ±4? It’s positive.”
Yes, just as with standard deviation, the way that the value is calculated means only the absolute value is used, because the square root of a negative number isn’t defined. However, again as with standard deviation, it is implied that the absolute value has meaning that includes a negative deviation from the trend line. That is, the use of “±” explicitly recognizes that the RMSE has meaning as variation in both positive and negative directions from the trend line. It doesn’t leave to one’s imagination whether the RMSE should only be added to the signal. In that sense, it is preferred because it makes it very clear how the parameter should be used.

I’m working through your other complaints and will get back to you.

Reply to  Clyde Spencer
September 9, 2019 6:55 pm

“What are they, Nick, those Celsius-years? Does Wikipedia know what they are, do you think?”
The idea of average temperature is well understood, as are the units °C (or F). Most people could tell you something about the average temperature where they live. The idea of a sum of temperatures is not so familiar; as you are expressing it, it would be a time integral, and would indeed have the units °C year, or whatever.

And yes, Wikipedia does know about it.

Michael Jankowski
Reply to  Clyde Spencer
September 10, 2019 5:29 pm

“…The idea of average temperature is well understood, as are the units °C (or F). Most people could tell you something about the average temperature where they live…”

Matthew R Marler
Reply to  Nick Stokes
September 9, 2019 12:36 pm

Nick Stokes: How then is it possible for his simple model to so closely emulate the GCM predictions if the predictions are so uncertain as to be meaningless?

Pat Frank’s model reproduces GCM output accurately, but GCMs do not model climate accurately. You know that.

Reply to  Matthew R Marler
September 9, 2019 4:14 pm

But how can his model, which doesn’t include the clouds source of alleged accumulating error, match GCMs, which Pat says are overwhelmed by it?

Do you believe that the right units for average temperature in a location are °C/year, as Pat insists?

Matthew R Marler
Reply to  Matthew R Marler
September 9, 2019 5:30 pm

Nick Stokes: But how can his model, which doesn’t include the clouds source of alleged accumulating error, match GCMs, which Pat says are overwhelmed by it?

You are shifting your ground. Do you really not understand how the linear models reproduce the GCM-modeled CO2-temp relationship?

Reply to  Matthew R Marler
September 9, 2019 7:03 pm

“Do you really not understand how the linear models reproduce the GCM-modeled CO2-temp relationship?”
The linear relationship is set out in Equation 1. It is supposed to be an emulation of the process of generating surface temperatures, so much so that Pat can assert, as here and in the paper
“An extensive series of demonstrations show that GCM air temperature projections are just linear extrapolations of fractional greenhouse gas (GHG) forcing.”

Equation 1 contains no mention of cloud fraction. GCMs are said to be riddled with error because of it. Yet Eq 1 gives results that very much agree with the output of GCM’s, leading to Pat’s assertion about “just extrapolation”.

Matthew R Marler
Reply to  Matthew R Marler
September 9, 2019 7:51 pm

Nick Stokes: It is supposed to be an emulation of the process of generating surface temperatures, so much so that Pat can assert, as here and in the paper
“An extensive series of demonstrations show that GCM air temperature projections are just linear extrapolations of fractional greenhouse gas (GHG) forcing.”

It is clearly not an “emulation of the process” (my italics); all it “emulates” is the input-output relationship, which is indistinguishable from a linear extrapolation.

Matthew R Marler
Reply to  Matthew R Marler
September 9, 2019 8:19 pm

Nick Stokes: Do you believe that the right units for average temperature in a location are °C/year, as Pat insists?

Interesting enough question. You have to read the text to disambiguate that it is the average over a number of years, not a rate of change per year. Miles per hour is a rate, but yards per carry in American Football isn’t. These unit questions arise whenever you compute the mean of some quantity where the sum does not in fact refer to the accumulation of anything, like the center of lift of an aircraft wing, the mean weight of the offensive linemen, or the average height of an adult population. Usually the “per unit” is dropped, which also requires rereading the text for understanding. It’s a convention as important as spelling “color” or “colour” properly, or the correct pronunciation of “shibboleth”.

Reply to  Matthew R Marler
September 9, 2019 8:53 pm

“all it “emulates” is the input-output relationship, which is indistinguishable from a linear extrapolation.”
In a way that is true, but it makes nonsense of the claim that
“An extensive series of demonstrations show that GCM air temperature projections are just linear extrapolations of fractional greenhouse gas (GHG) forcing.”
That’s circular, because his model takes input (forcing) that is derived linearly from GCM output and says it proves that GCMs are linear extrapolations, when in fact it just regenerates the linear process whereby forcings and feedbacks are derived. And that probably undermines my argument from the coincidence of the results, but only by undermining a large part of the claims of Pat Frank’s paper. It means you can’t use the simple model to model error accumulation, because it is not modelling what the model did, but only whatever was done to derive forcings from GCM output.

“You have to read the text to disambiguate that it is the average over a number of years, not a rate of change per year. “
The problem is that he uses it as a rate of change. To get the change over a period, he sums (in quadrature) the 4 W/m2/year (his claimed unit) over the appropriate number of years, exactly as you would do for a rate. And so what you write as the time increment matters. In this case, he in effect multiplies the 4 W/m2 by sqrt(20) (see Eq 6). If the same figure had been derived from monthly averages, he would multiply by sqrt(240) to get a quite different result, though the measured rmse is still 4 W/m2.

And it doesn’t even work. If he wrote rmse as 4 W/m2/year and multiplied by sqrt(20 years), his estimate for the 20 years would be 4 W/m2/sqrt(year). Now there’s a unit!

Matthew R Marler
Reply to  Matthew R Marler
September 9, 2019 10:57 pm

Nick Stokes: In a way that is true, but it makes nonsense of the claim that
“An extensive series of demonstrations show that GCM air temperature projections are just linear extrapolations of fractional greenhouse gas (GHG) forcing.”
That’s circular, because his model takes input (forcing) that is derived linearly from GCM output and says it proves that GCMs are linear extrapolations, when in fact it just regenerates the linear process whereby forcings and feedbacks are derived.

Last first, it does not “regenerate” the “process” by which the forcings and feedbacks are derived; the analysis shows that despite the complications in the process (actually, because of the complications, but in spite of our expectations of complicated processes), the model's input-output relationship is linear. I think you are having trouble accepting that this is in fact an intrinsic property of the complex models. Second, it is true in the way that counts: it permits a simple linear model to predict, accurately, the output of the complex model.

To get the change over a period, he sums (in quadrature) the 4 W/m2/year (his claimed unit) over the appropriate number of years, exactly as you would do for a rate.

Not exactly as you would do for a rate; rather, exactly as you would do when calculating the mean squared error of the model (or the variance of a sum if the means of the summands were 0). A similar calculation is performed with CUSUM charts, where the goal is to determine whether the squared error (deviation of the product from the target) is constant; then you could say that the process was under control when the mean deviation of the batteries (or whatever) from the standard is less than 1% per battery. {It gets more complicated, but that will do for now.}

At RealClimate I once recommended that they annually compute the squared error of the yearly or monthly mean forecasts (for each of the 100+ model runs that they display in their spaghetti charts) and sum the squares as Pat Frank did here, and keep the CUSUM tally. Now that Pat Frank has shown the utility of computing the sum of squared error and the mean squared error and its root, perhaps someone will begin to do that. To date the CUSUMS are deviating from what they would be if the models were reasonably accurate, though the most recent El Nino put some lipstick on them, so to speak.
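A hedged Python sketch of the running tally described above, with invented placeholder numbers (neither series is any model's actual output):

import numpy as np

forecast = np.array([0.30, 0.35, 0.42, 0.50, 0.55])  # hypothetical annual anomalies, K
observed = np.array([0.28, 0.30, 0.33, 0.48, 0.40])  # hypothetical observed anomalies, K

sq_err = (forecast - observed) ** 2                    # squared error for each year
cusum = np.cumsum(sq_err)                              # running tally of squared error
rmse = np.sqrt(cusum / np.arange(1, sq_err.size + 1))  # running root-mean-square error
print(cusum.round(4), rmse.round(3))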

Reply to  Matthew R Marler
September 9, 2019 11:36 pm

Nick, “input (forcing) that is derived linearly from GCM output and says it proves that GCMs are linear extrapolations, … but only whatever was done to derive forcings from GCM output.

Wrong, Nick. The forcings are the standard SRES or RCP forcings, taken independently of any model.

The forcings weren’t derived from the models at all, or from their output. The forcings are entirely independent of the models.

The fitting enterprise derives the f_CO2. And its success is yet another indication that GCM air temperature projections are just linear extrapolations of forcing.

Nick ends up, “but only whatever was done to derive forcings from GCM output.

Premise wrong, conclusion wrong.

Nick, “The problem is that he uses it as a rate of change.

Not at all. I use it for what it is: theory-error reiterated in every single step of a climate simulation.

You’re just making things up, Nick.

Nick, “To get the change over a period, he sums (in quadrature) the 4 W/m2/year…

Oh, Gawd, Nick thinks uncertainty in temperature is a temperature.

Maybe you’re not making things up, Nick. Maybe you really are that clueless.

Nick, “In this case, he in effect multiplies the 4 W/m2 by sqrt(20) (see Eq 6).

No I don’t. Eqn. 6 does no such thing. There’s no time unit anywhere in it.

Eqn. 6 is just the rss uncertainty, Nick. Your almost favorite thing, including the ± you love so much.

Reply to  Matthew R Marler
September 10, 2019 12:35 am

” The forcings are the standard SRES or RCP forcings, taken independently of any model.”
Yes, but where do they come from? Forcings in W/m2 usually come from some stage of GCM processing, often from the output.

““To get the change over a period, he sums (in quadrature) the 4 W/m2/year…””
You do exactly as I describe, and as set out in Eq 6 here.

“Eqn. 6 does no such thing. There’s no time unit anywhere in it.”
Of course there is. In the paragraph introducing Eq 6 you say:
“For the uncertainty analysis below, the emulated air temperature projections were calculated in annual time steps using equation 1”
and
“The annual average CMIP5 LWCF calibration uncertainty, ±4 Wm-2 year-1, has the appropriate dimension to condition a projected air temperature emulated in annual time-steps.”

“annual” is a time unit. You divide the 20 (or whatever) years into annual steps and sum in quadrature.

You should read the paper some time, Pat.

Matthew R Marler
Reply to  Matthew R Marler
September 10, 2019 9:01 am

Nick Stokes: You should read the paper some time, Pat.

I think you are in over your head.

Reply to  Matthew R Marler
September 10, 2019 1:10 pm

“over your head”
OK, can you explain how it is that Eq 6 does not have time units when the text very clearly states that the steps are annual?

Matthew R Marler
Reply to  Matthew R Marler
September 10, 2019 6:52 pm

Nick Stokes: OK, can you explain how it is that Eq 6 does not have time units when the text very clearly states that the steps are annual?

Clearly, as you write, the time units on the index of summation would be redundant. What exactly is your problem?

Matthew R Marler
Reply to  Matthew R Marler
September 10, 2019 7:02 pm

Nick Stokes: OK, can you explain how it is that Eq 6 does not have time units when the text very clearly states that the steps are annual?

Let me rephrase my answer in the form of a question: Would you be happier if the index of summation were t(i) throughout: {t(i), i = 1, … N}?

Reply to  Matthew R Marler
September 10, 2019 8:59 pm

“What exactly is your problem?”
Well, actually, several
1. We were told emphatically that there are no time units. But there are. So what is going on?
2. Only one datum is quoted, Lauer’s 4 W/m2, with no time units. And as far as I can see, that is the μ, after scaling by the constant. But it is summed in quadrature n times. n is determined by the supposed time step, so the answer is proportional to √n. But the value of n depends on that assumed time step. If annual, it would be √20, for 20 years. If monthly, √240. These are big differences, and the basis for which to choose seems to me to be arbitrary. Pat seems to say it is annual because Lauer used annual binning in calculating the average. That has nothing to do with the performance of GCMs.
3. The units don’t work anyway. In the end, the uncertainty should have units W/m2, so it can be converted to T, as plotted. If μ has units W/m2, as Lauer specified, the RHS of 6 would then have units W/m2*sqrt(year). Pat, as he says there, clearly intended that assigning units W/m2/year should fix that. But it doesn’t; the units of the RHS are W/m2/sqrt(year), still no use.
4. The whole idea is misconceived anyway. Propagation of error with a DE system involves error-inducing components of other solutions, and how it evolves depends on how that goes. Pat’s Eq 1 is a very simple DE, with only one other solution. A GCM has millions, but more importantly, they are subject to conservation laws, i.e. physics. And whatever error does, it can’t simply accumulate by random walk, as Pat would have it; that is non-physical. A GCM will enforce conservation at every step.
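To make point 4 concrete, here is a toy Python comparison (entirely illustrative, not anything from the paper or from a GCM) of an error that accumulates as a free random walk versus the same shocks passed through a simple relaxation, a crude stand-in for a conservation constraint:

import numpy as np

rng = np.random.default_rng(0)
shocks = rng.normal(0.0, 4.0, size=100)   # one +/-4-style shock per step

free = np.cumsum(shocks)                  # unconstrained accumulation (random walk)

damped = np.zeros_like(shocks)
damped[0] = shocks[0]
for i in range(1, shocks.size):
    damped[i] = 0.7 * damped[i - 1] + shocks[i]   # crude relaxation toward zero

print(round(free.std(), 1), round(damped.std(), 1))  # the free walk typically wanders far more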

Matthew R Marler
Reply to  Matthew R Marler
September 10, 2019 11:21 pm

Nick Stokes: 4. The whole idea is misconceived anyway. Propagation of error with a DE system involved error inducing components of other solutions, and how it evolves depends on how that goes. Pat’s Eq 1 is a very simple de, with only one other solution

Well, I think this is the best that has been done on this topic, not to mention that it is the first serious effort. Now that you have your objections, take them and improve the effort.

I think you are thoroughly confused.

Sam Capricci
September 7, 2019 5:41 pm

The analysis upsets the entire IPCC applecart. It eviscerates the EPA’s endangerment finding, and removes climate alarm from the US 2020 election. There is no evidence whatever that CO₂ emissions have increased, are increasing, will increase, or even can increase, global average surface air temperature.

If only this were true. There is too much invested in the current fear mongering promoted by the media, governments and teachers to allow this paper to be given credibility or taken seriously. I find it VERY interesting and will be adding it to my bookmarks for reference but think of the number of people who would lose jobs or money if those promoting the AGW farce had to come out and say, never mind.

To many in power the AGW farce was a ticket to more power and control over people and businesses.

They had an easier transition when they went from global cooling to global warming; they only lost some believers, like me. That is when I became skeptical.
Thank you Pat.

Yooper
September 7, 2019 5:43 pm

Monckton of Brenchley: Your recent post here on WUWT (https://wattsupwiththat.com/2019/09/02/the-thermageddonites-are-studying-us-be-afraid-be-very-afraid/) melds nicely with this article, or should I call it a “peer reviewed paper”? One comment above said that CTM sent it out for review before he would post it. It actually looks like WUWT is doing more real science publication than the “traditional journals”, eh?
Bravo, Anthony!

Kevin kilty
September 7, 2019 6:23 pm

To be fair, it is not common for recent science Ph.D.s in any field to have much background in probability, statistics or error analysis. Recognizing this, the university where I work offered a course in these topics for new hires for no other reason than to improve the quality of research work. We have had budgetary and management problems now for the past 6 years or so, and I don’t know if we still offer this class. We are becoming more run-of-the-mill with every passing year.

Many papers submitted to journals are rejected with a single very negative review–this is not limited to climate science. Controversy is often very difficult for an editor to manage. Some journals do not have a process for handling papers with highly variable reviews, and many will not reconsider even if one demonstrates the incompetence of a review.

Reply to  Kevin kilty
September 8, 2019 2:24 am

Many papers submitted to journals are rejected with a single very negative review–this is not limited to climate science. Controversy is often very difficult for an editor to manage. Some journals do not have a process for handling papers with highly variable reviews, and many will not reconsider even if one demonstrates the incompetence of a review.

How true. Only mediocrity and consensus abiding have a free pass at publication.

September 7, 2019 6:24 pm

Reality bites doesn’t it?

Reply to  Donald L. Klipstein
September 7, 2019 7:08 pm

nope, your version of reality is just fluff

Reply to  Donald L. Klipstein
September 7, 2019 8:31 pm

Radiative convective equilibrium of the atmosphere with a given distribution of relative humidity is computed as the asymptotic state of an initial value problem.

And how useful do you see simple models that miss most of the relevant feedbacks?

The results show that it takes almost twice as long to reach the state of radiative convective equilibrium for the atmosphere with a given distribution of relative humidity than for the atmosphere with a given distribution of absolute humidity.

And one might wonder how they managed their humidity representation if it represented an atmosphere that wasn’t natural. “Here’s an unrealistic atmosphere, let’s see how it behaves” is just another version of what GCMs do when they think they’re representing a “natural” atmosphere and project into the future, where the atmosphere’s state is unknown to us and can’t even be confidently parameterised. But the differences are far too subtle for most people to understand.

September 7, 2019 6:39 pm

Congratulations on getting this important paper published. It says a great deal about the corruption of science and morality that such papers weren’t being published in the normal course of scientific work from the very beginning.

Normally, I just skim such threads, although I am an engineer and geologist involved in modelling ore deposits, mineral processing and hydrometallurgy, where you have to be substantially correct before financiers put up a billion dollars. But I have to say that your intellect, passion for science, outrage and compassion for the millions of victims of this horrible scam, and your mastery of language made me a willing captive.

I rank this essay a tie with that of Michael Crichton on the same subject. Thanks for this. You, Chris Monckton, Anthony Watts and a small but hearty band of others are the army that will win this battle for civilization and freedom and relief for the hundreds of millions of victims and even the willing perpetrators who seem to be unaware of the Dark Age they are working toward. The latter, of course, like the Nile crocodile will snap at the asses of those trying to save them.

Roy Edwards
September 7, 2019 6:40 pm

Nice to have a guest author.
But no bio?
Who is he, and why should I believe his paper?

Reply to  Roy Edwards
September 7, 2019 7:40 pm

Roy, you loosed a cheap shot. I will now take you brutally down.
Had you read the paper, you would have known that he is a senior professor at SLAC, fully identified in the front matter of the epub paper.

So, you prove hereby you did not read the paper. And also are a bigoted ignoramus.

Reply to  Rud Istvan
September 7, 2019 8:16 pm

Scientific staff, Rud, thanks. 🙂

I’m on LinkedIn, so people can find my profile there.

For those like Roy who need political reassurances, I have a Ph.D. (Stanford) and am a physical methods experimental chemist, using mostly X-ray absorption spectroscopy. I sweat physical error in all my work. The paper is about error analysis, merely applied to climate models.

I have international collaborators, and my publication record includes about 70 peer reviewed papers in my field, all done without the help of fleets of grad students or post-docs.

Roy Edwards
Reply to  Pat Frank
September 7, 2019 9:18 pm

Thank you Pat.
I am just a layperson trying to get a handle on reality regarding climate change.
I came across your guest post on Watts Up With That, a site I have only recently discovered.

No cheap shot intended. Just an honest attempt to discover who you are and your credentials (which I accept are great and do not challenge).

In my layman’s world (not being one of the in-crowd), my criticism is really with the Watts Up With That administrators.

TRM
Reply to  Roy Edwards
September 8, 2019 10:08 am

“Who is he and why should I believe his paper”

This is science. You should never believe. Belief is for religion, consensus is for politics and predictions are science.

DANNY DAVIS
Reply to  Roy Edwards
September 8, 2019 2:10 pm

Roy – the page you are reading is “WATTS up with That”
It is the passion of Anthony WATTS.
He has many friends in the world of science who are working together to present a solid source of analysis of the “Climate Change” collusion: the established cabal that wishes to discard the challenging sceptic voices that are the mark of true scientific investigation.

– Stay Tuned –

Reply to  Roy Edwards
September 9, 2019 10:52 am

TRM,
Exactly.
No one should be believed or given credence simply because of who they are, what degree program they have or have not completed, or how well one recognizes their name.
Science is about ideas and evidence.
There is a specific method that is used to help us elucidate that which is objectively true, and to differentiate it from that which is merely an idea, opinion, or assertion.
Believing some person because of who they are, and/or not believing someone else for the same reason, is not logical, and it is certainly not scientific.
It is in fact a large part of the problem we in the “skeptic” community have found common cause in addressing.
Believing or disbelieving some thing because of who tells you it is true, or how many people think some thing to be true, is not scientific, and is in fact exactly what the scientific method replaced.
Phlogiston is not a false concept because people stopped believing it, or because a consensus now believes it to be false. The miasma theory of disease is not false because the medical community decided they like other ideas better.
These ideas are believed to be false because of evidence to the contrary.
The evidence is what matters.
And it is important to note, that disproving one idea is not contingent on having an alternative explanation available.
Semmelweis did not prove that germs cause diseases.
But he did show conclusively that washing hands will greatly lower the incidence of disease. Thereby showing that filthy hands were in fact transmitting diseases to previously healthy people.

David Jay
Reply to  Roy Edwards
September 9, 2019 7:43 pm

Or, as the quip goes: In God We Trust; all others show code and data.

paul courtney
Reply to  Roy Edwards
September 11, 2019 12:19 pm

Roy Edwards: Not intended as a cheap shot, but clearly a half-cocked one. A constructive suggestion- next time you type that question, try to answer it yourself rather than posting your question first. You’ll appear much smarter by not appearing at all!
P.S.: It didn’t help you that Pat Frank comments here often, has other guest posts, is known to us laymen as one of the more sciency guys here. Sorry for that- he’s a legitimate scientist in the field of …….. well, I’d like to say “field of climate science”, but I’d rather refer to a scientific field.

Steven Fraser
Reply to  Roy Edwards
September 7, 2019 8:31 pm

Why not start with his short bio IN the paper, and work from there?

Kurt
Reply to  Roy Edwards
September 8, 2019 12:20 am

Who cares who he is. Anyone who needs to know the identity of a person making an argument in order to evaluate the persuasiveness of that argument is a person too comfortable with letting other people do his thinking for him.

Clyde Spencer
Reply to  Kurt
September 8, 2019 10:18 am

Kurt
+1

That is why I have avoided posting my CV. I want and expect my arguments to stand on their own merits, not on the subjective evaluation of my credentials. The position of those like Roy is equivalent to saying, “I’ll consider your facts if, and only if, you meet my subjective bar of competence.”

Charles Taylor
September 7, 2019 6:43 pm

I hope this stays as the top post on WUWT for a while.

Reply to  Charles Taylor
September 7, 2019 10:30 pm

Charles Taylor 6:43

Bingo!

John Q Public
September 7, 2019 7:00 pm

Did you directly address this point by reviewer 1?

“Thus, the error (or uncertainty) in the simulated warming only depends on the change
ΔB in the bias between the beginning and the end of the simulation, not on the
evolution in-between. For the coefficient 0.416 derived from the paper, a bias change
ΔB = ±4 Wm-2 would indicate an error of ±1.7 K in the simulated temperature change.
This is substantial, but nowhere near the ±15 K claimed by the paper. For producing
this magnitude of error in temperature change, ΔB should reach ±36 Wm-2 which is
entirely implausible.
In deriving the ±15 K estimate, the author seemingly assumes that the uncertainty in
the Fi :s in equation (6) adds up quadratically from year to year (equation 8 in the
manuscript). This would be correct if the Fi :s were independent. However, as shown
by (R1), they are not. Thus, their errors cancel out except for the difference between
the last and the first time step.”

Reply to  John Q Public
September 7, 2019 7:41 pm

There was no reviewer #1 at Frontiers, John. That reviewer didn’t submit a review at all.

You got that review comment from a different journal submission, but have neglected to identify it.

Let me know where that came from — I’m not going to search my files for it — and I’ll post up my reply.

If you got that comment from the zip file of reviews and responses I uploaded, then you already know how I replied. Let’s see: that would make your question disingenuous.

John Q Public
Reply to  Pat Frank
September 7, 2019 9:57 pm

Sorry- Adv Met Round 1, refereereport.regular.3852317.v1 is where I found it, right under the heading: Section 2, 2. Why the main argument of the paper fails

I find it interesting that he claims the errors cancel except for the first and last years.

Reply to  John Q Public
September 7, 2019 11:18 pm

In answer to John Q. Public, it matters not that the errors (i.e., the uncertainties) sum to zero except for the first and last years. For the error propagation statistic is determined in quadrature: i.e., as the square root of the sum of the squares of the individual uncertainties. That value will necessarily be positive. The reviewer, like so many modelers, and like the troll “John Q. Public”, appears not to have known that.

John Q Public
Reply to  Monckton of Brenchley
September 8, 2019 11:07 am

Thank you for the answer, troll “Monckton of Brenchley”, makes sense. Why don’t you look at some of my other responses.

Reply to  Monckton of Brenchley
September 8, 2019 3:18 pm

Note that in my previous response to JQ Public “That value will necessarily be positive” should read “That absolute value will necessarily be significant even where the underlying errors self-cancel”.

Reply to  John Q Public
September 8, 2019 10:56 am

Ah, yes. That was my Gavinoid reviewer.

Over my 6 years of effort, three different manuscript editors recruited him. He supplied the same mistake-riddled review each time.

I found 10 serious mistakes in the criticism you raised. I’m going to copy and paste my response here. Some of the equations won’t come out, because they’re pictures rather than text. But you should be able to get the thrust of the reply.

Here goes:
++++++++++++++
2.1. The reviewer referred parenthetically to a, “[bias] due to an error in the long-wave cloud forcing as assumed in the paper.”

The manuscript does not assume this error. The GCM average long-wave cloud forcing (LWCF) error was reported in Lauer and Hamilton, manuscript reference 59, [3] and given prominent notice in Section 2.4.1, page 25, paragraph 1: “The magnitude of CMIP5 TCF global average atmospheric energy flux error.”

In 2.1 above, the reviewer has misconstrued a published fact as an author assumption.

The error is not a “bias,” but rather a persistent difference between model expectation values and observation.

2.2. The reviewer wrote, “Suppose a climate model has a bias in its energy balance (e.g. due to an error in the long-wave cloud forcing as assumed in the paper). This energy balance bias (B) essentially acts like an additional forcing in (R3),…”

2.2.1. The reviewer has mistakenly construed that the LWCF error is a bias in energy balance. This is incorrect and represents a fatal mistake. It caused the review to go off into irrelevance.

LWCF error is the difference between simulated cloud cover and observed cloud cover. There is no energy imbalance.

Instead, the incorrect cloud cover means that energy is incorrectly partitioned within the simulated climate. The LWCF error means there is a ±4 Wm-2 uncertainty in the tropospheric energy flux.

2.2.2. The LWCF error is not a forcing. LWCF error is a statistic reflecting an annual average uncertainty in simulated tropospheric flux. The uncertainty originates from errors in cloud cover that emerge in climate simulations, from theory bias within climate models.

Therefore LWCF error is not “an additional forcing in R3.” This misconception is so fundamental as to be fatal, and perfuses the review.

2.2.3. The reviewer may also note the “±” sign attached to the ±4 Wm-2 uncertainty in LWCF and ask how “an additional forcing” can be simultaneously positive and negative.

That incongruity alone should have been enough to indicate a deep conceptual error.

2.3. “… leading to an error in the simulated warming:

ERR(Tt − T0) = 0.416((Ft + Bt) − (F0 + B0)) = 0.416(ΔF + ΔB)   (R4)”

2.3. Reviewer equation R4 includes many mistakes, some of them conceptual.

2.3.1. First mistake: the ±4 Wm-2 average annual LWCF error is an uncertainty statistic. The reviewer has misconceived it as an energy bias. R4 is missing the “±” operator throughout. On the right side of the equation, every +B should instead be ±U.

2.3.2. Second mistake: The “ERR” of R4 should be ‘UNC’ as in ‘uncertainty.’ The LWCF error statistic propagates into an uncertainty. It does not produce a physical error magnitude.

The meaning of uncertainty was clearly explained in manuscript Section 2.4.1 par. 2, which further recommended consulting Supporting Information Section 10.2, “The meaning of predictive uncertainty.” The reviewer apparently did not heed this advice. Statistical uncertainty is an ignorance width, as opposed to physical error which marks divergence from observation.

Further, manuscript Section 3, “Summary and Discussion” par. 3ff explicitly discussed and warned against the reviewer’s mistaken idea that the 4 Wm-2 uncertainty is a forcing (cf. also 2.2.2 above).

Correcting R4: it is given as:

ERR(Tt − T0) = 0.416((Ft + Bt) − (F0 + B0)) = 0.416(ΔF + ΔB)

Ignoring any further errors (discussed below), the “B” term in R4 should be ±U, and ERR should be UNC, thus:

UNC(Tt − T0) = 0.416((Ft ± Ut) − (F0 ± U0)) = 0.416(ΔF ± U)

because the LWCF root-mean-square error statistic, ±U, is not a positive forcing bias, +B.

2.3.3. Third mistake: correcting +B to ±U brings to the fore that the reviewer has ignored the fact that ±U arises from an inherent theory-error within the models. Theory error injects a simulation error into every projection step. Therefore ±U enters into every single simulation step.

An uncertainty ±Ui present in every step accumulates across n steps into a final result as ±√(ΣUi²). Therefore, UNC(Tt − T0) = ±Ut, not ±Ut − (±U0). Thus R4 is misconceived as it stands.

One notes that when ±Ui = ±4 Wm-2 average per annual step, after 100 annual steps it becomes ±√(100 × 4²) = ±40 Wm-2 of uncertainty, not error, and TUNC = 0.416 × (±40) = ±16.6 K, i.e., the manuscript result.

2.3.4. Fourth mistake incorporates two mistakes. In writing, “a bias change ΔB = ±4 Wm-2 would indicate an error of ±1.7 K”, the reviewer has not used eqn. R4, because the “±” term on the temperature error has no counterpart in reviewer R4. That is, reviewer R4 is ERR = 0.416(ΔF + ΔB). From where did the “±” in ±1.7 K come?

Second, in the quote above, the reviewer has set a positive bias “B” to be simultaneously positive and negative, i.e., “±4 Wm-2.” How is this possible?

2.3.5. Fifth mistake: the reviewer’s ±1.7 K is from 0.416(±U), not from 0.416(ΔF ± U), the way it should be if calculated from (corrected) R4.

Corrected eqn. R4 says ERROR = ΔT = 0.416(ΔF ± U) = ΔT_F ± ΔT_U. Thus the reviewer’s R4 error term should be ‘ΔT_F ± (the spread from ΔT_U).’

For example, from RCP 8.5, if ΔF(2000-2100) = 7 Wm-2, then from the reviewer’s R4 with a corrected U term, ERR = 0.416(7 ± 4) = 2.9 ± 1.7 K.

That is, the reviewer incorrectly represented ±1.7 K as ERR, when it is instead the spread in ERR.

2.3.6. Sixth mistake, the reviewer’s B0 does not exist. Forcing F0 does not have an associated LWCF uncertainty (or bias) because F0 is the base forcing at the start of the simulation, i.e., it is assigned before any simulation step.

This condition is explicit in manuscript eqn. 6, where subscript “i” designates the change in forcing per simulation step, ΔFi. Therefore, “i” can only begin at unity with simulation step one. There is no zeroth step simulation error because there is no zeroth simulation.

2.3.7. Seventh mistake: the reviewer has invented a magnitude for Bt.

The reviewer’s calculation in R4 (±4 Wm-2 → ±1.7 K error) requires that Bt − B0 = ΔB = ±4 Wm-2 (applying the 2.3.1 “±” correction).

The reviewer has supposed B0 = 4 Wm-2. However, the reviewer’s ΔB is also 4 Wm-2. Then it must be that Bt − 4 Wm-2 = 4 Wm-2, and the reviewer’s Bt must be 8 Wm-2.

From where did that 8 Wm-2 come? The reviewer does not say. It seems from thin air.

2.3.8. Eighth mistake: R4 says that for any simulated Tt the bias is always ΔBt = Bt − B0, the difference between the first and last simulation step.

However, B is misconstrued as an energy bias. Instead it is a simulation error statistic, ±U, that originates in an imperfect theory, and is therefore imposed on every single simulation step. This continuous imposition is an inexorable feature of an erroneous theory.

However, R4 takes no notice of intermediate simulation steps and their sequentially imposed error. It is not surprising then that having excluded intermediate steps, the reviewer concludes they are irrelevant.

2.3.9. Ninth mistake: The “t” is undefined in R4 as the reviewer has it. As written, the “t” can equally define a 1-step, a 2-step, a 10-step, a 43-, a 62-, an 87-, or a 100-step simulation.

The reviewer’s Bt = Bt-B0 always equals 4 Wm-2 no matter whether “t” is one year or 100 years or anywhere in between. This follows directly from having excluded intermediate simulation steps from any consideration.

This mistaken usage is in evidence in review Part 2, par. 2, where the reviewer applied the 4 Wm-2 to the uncertainty after a 100-year projection, stating, “a bias change B = 4 Wm-2 would indicate an error of 1.7 K [which is] nowhere near the 15 K claimed by the paper.” That is, for the reviewer, Bt=100 = 4 Wm-2.

However, the 4 Wm-2 is the empirical average annual LWCF uncertainty, obtained from a 20-year hindcast experiment using 26 CMIP5 climate models. [3]

This means an LWCF error is generated by a GCM across every single simulation year, and the 4 Wm-2 average uncertainty propagates into every single annual step of a simulation.

Thus, intermediate steps must be included in an uncertainty assessment. If the Bt represents the uncertainty in a final year anomaly, it cannot be a constant independent of the length of the simulation.

2.3.10. Tenth mistake: the reviewer’s error calculation is incorrect. The reviewer proposed that an annual average 4 Wm-2 LWCF error produced a projection uncertainty of 1.7 K after a simulation of 100 years.

This cannot be true (cf. 2.3.3, 2.3.8, and 2.3.9) because the average 4 Wm-2 LWCF error appears across every single annum in a multi-year simulation. The projection uncertainty cannot remain unchanged between year 1 and year 100.

This understanding is now applied to the uncertainty produced in a multi-year simulation, using the corrected R4 and applying the standard method of uncertainty propagation.

The physical error “ε” produced in each annual projection step is unknown because the future physical climate is unknown. However, the uncertainty “u” in each projection step is known because hindcast tests have revealed the annual average error statistic.

For a one-step simulation, i.e., 0 → 1, U0 = 0 because the starting conditions are given and there is no LWCF simulation bias.

However, at the end of simulation year 1 an unknown error ε0,1 has been produced, the ±4 Wm-2 LWCF uncertainty has been generated, and Ut = U0,1.

For a two-step simulation, 0 → 1 → 2, the zeroth year LWCF uncertainty, U0, is unchanged at zero. However, at the terminus of year 1, the LWCF uncertainty is U0,1.

Simulation step 2 necessarily initiates from the (unknown) error ε0,1 in simulation step 1. Thus, for step 2 the initiating ε is ε0,1.

Step 2 proceeds on to generate its own additional LWCF error ε1,2 of unknown magnitude, but for which U1,2 = ±4 Wm-2. Combining these ideas: step 2 initiates with uncertainty U0,1. Step 2 generates new uncertainty U1,2. The sequential change in uncertainty is then U0 = 0 → U0,1 → U1,2. The total uncertainty at the end of step 2 must then be the root-sum-square of the sequential step-wise uncertainties, U(t=0→2) = ±√[(U0,1)² + (U1,2)²] = ±5.7 Wm-2. [1, 2]

R4 is now corrected to take explicit notice of the sequence of intermediate simulation steps, using a three-step simulation as an example. As before, the corrected zeroth year LWCF U0 = 0 Wm-2.

Step 1: UNC(Tt − T0) = (T1 − T0) = 0.416((F1 ± U0,1) − (F0 ± U0)) = 0.416(ΔF0,1 ± U0,1) = u0,1
Step 2: UNC(Tt − T0) = (T2 − T1) = 0.416((F2 ± U0,2) − (F1 ± U0,1)) = 0.416(ΔF1,2 ± U1,2) = u1,2
Step 3: UNC(Tt − T0) = (T3 − T2) = 0.416((F3 ± U0,3) − (F2 ± U0,2)) = 0.416(ΔF2,3 ± U2,3) = u2,3

where “u” is uncertainty. These formalisms exactly follow the reviewer’s condition that “t” is undefined. But “t” must acknowledge the simulation annual step-count.

Each t+1 simulation step initiates from the end of step t, and begins with the erroneously simulated climate of prior step t. For each simulation step, the initiating T0 = Tt−1 and its initiating LWCF error ε is εt−1. For t > 1 the physical error is nonzero, but its magnitude is necessarily unknown.

The uncertainty produced in each simulation step “t” is ut−1,t, as shown. However, the total uncertainty in the final simulation step is the uncertainty propagated through each step. Each simulation step initiates from the accumulated error in all the prior steps, and carries the total uncertainty propagated through those steps.

Following NIST, and Bevington and Robinson, [1, 2] the propagated uncertainty in the final step is the root-sum-square of the error in each of the individual steps, i.e., ±√(Σui²). When ui = ±4 Wm-2, the above example yields a three-year simulation temperature uncertainty variance of σ² = 8.3 K².

As discussed both in the manuscript and in SI Section 10.2, this is not an error magnitude, but an uncertainty statistic. The distinction is critical. The true error magnitude is necessarily unknown because the future physical climate is unknown.

The projection uncertainty can be known, however, as it consists of the known simulation average error statistic propagated through each simulation step. The propagated uncertainty expresses the level of ignorance concerning the physical state of the future climate.

2.4 The reviewer wrote that, “For producing this magnitude of error in temperature change, ΔB should reach ±36 Wm-2, which is entirely implausible.”

2.4.1. The reviewer has once again mistaken an uncertainty statistic for an energetic perturbation. Under reviewer section 2, B is defined as an “energy balance bias (B),” i.e., an energetic offset.

One may ask the reviewer again how a physical energy offset can be both positive and negative simultaneously. That is, a “±energy-bias” is physically incoherent. This mistake alone renders the reviewer’s objection meritless.

As a propagated uncertainty statistic, the reviewer’s ±36 Wm-2 is entirely plausible because: a) it represents the accumulated uncertainty across 100 error-prone annual simulation steps, and b) statistical uncertainty is not subject to physical bounds.

2.4.2 The ±15 K that so exercises the reviewer is not an error in temperature magnitude. It is an uncertainty statistic. B is not a forcing and cannot be a forcing because it is an uncertainty statistic.

The reviewer has completely misconstrued uncertainty statistics to be thermodynamic quantities. This is as fundamental a mistake as is possible to make.

The ±15 K does not suggest that air temperature itself could be 15 K cooler or warmer in the future. The reviewer clearly supposes this incorrect meaning, however.

The reviewer has utterly misconceived the meaning of the error statistics. A statistical ±T is not a temperature. A statistical ±Wm-2 is not an energy flux or a forcing.

All of this was thoroughly discussed in the manuscript and the SI, but the reviewer apparently overlooked these sections.

2.5 In Section R2 par. 3, the reviewer wrote that review eqn. R1 shows the uncertainty is not independent of Fi and therefore cancels out between simulation steps.

However, R1 determines the total change in forcing, Ft-F0, across a projection. No uncertainty term appears in R1, making the reviewer’s claim a mystery.

2.5.2 Contrary to the reviewer’s claim, the average annual ±4 Wm-2 LWCF error statistic is independent of the magnitude of ΔFi. The ±4 Wm-2 is the constant average LWCF uncertainty revealed by CMIP5 GCMs (manuscript Section 2.3.1 and Table 1). GCM LWCF error is injected into each simulation year, and is entirely independent of the (GHG) ΔFi forcing magnitudes.

In particular, LWCF error is an average annual uncertainty in the global tropospheric heat flux, due to GCM errors in simulated cloud structure and extent.

2.5.3. The reviewer’s attempt at error analysis is found in eqn. R4, not R1. However, R4 also fails to correctly assess LWCF error. Sections 2.x.x above show that R4 has no analytical merit.

2.6 In section R2, par. 4, the reviewer supposes that use of 30-minute time-steps in an uncertainty propagation, rather than annual steps, must involve 17,520 entries of ±4 Wm-2 in an annual error propagation.

In this, the reviewer has overlooked the fact that ±4 Wm-2 is an annual average error statistic. As such it is irrelevant to a 30-minute time step, making the ±200 K likewise irrelevant.

2.7 In the R2 final sentence, the reviewer asks whether it is reasonable to assume that model biases in LWCF actually change by ±4 Wm-2.

However, the LWCF error is not itself a model bias. Instead, it is the observed average error between model simulated LWCF and observed LWCF.

The reviewer has misconstrued the meaning of the average LWCF error throughout the review. LWCF error is an uncertainty statistic. The reviewer has comprehensively insisted on misinterpreting it as a forcing bias — a thermodynamic quantity.

The reviewer’s question is irrelevant to the manuscript and merely betrays a complete misapprehension of the meaning of uncertainty.
+++++++++++++
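A short numeric check, in Python, of the propagation arithmetic in the response above; it uses only the ±4 Wm-2 annual statistic and the 0.416 K per Wm-2 coefficient quoted there, and reproduces the ±5.7 Wm-2, ±40 Wm-2, ±16.6 K and 8.3 K² figures (the function name is illustrative, not from the paper):

import math

u_annual = 4.0   # +/- W/m2, annual average LWCF calibration uncertainty
coeff = 0.416    # K per W/m2, the coefficient used in the response above

def propagated_unc(n_years):
    # identical +/-4 W/m2 entry in every annual step, summed in quadrature
    return math.sqrt(n_years * u_annual ** 2)

print(round(propagated_unc(2), 1))                  # 5.7 W/m2, the two-step example
print(round(propagated_unc(100), 1))                # 40.0 W/m2 after 100 annual steps
print(round(coeff * propagated_unc(100), 1))        # 16.6 K
print(round((coeff * propagated_unc(3)) ** 2, 1))   # 8.3 K^2, the three-year variance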

John Q Public
Reply to  Pat Frank
September 9, 2019 10:40 am

Thanks, I think that was included somewhere (maybe multiple places) in the review files. It just wasn’t clear that it was associated with the one I mentioned.

John Q Public
September 7, 2019 7:10 pm

You might re-couch this analysis in terms of S/N ratio and submit it to engineering publications. The noise propagation error is based on the reality observed (+/- 4 W/sqm, annually) regardless of what a model may predict.

Lonny Eachus
Reply to  John Q Public
September 8, 2019 12:48 am

Fail.

The point is that the physical error is not CARRIED THROUGH THE MODELS, as it necessarily must be.

Shoddy “science”, plain and simple.

Other scientists have been pointing this out for years. And yet others (like yourself), don’t seem to understand how that works.

John Q Public
Reply to  Lonny Eachus
September 9, 2019 10:43 am

I was analogously treating the model output as the signal and the propagated uncertainty as the noise.

September 7, 2019 7:57 pm

The take away point is that all assumptions based on proxy observations are deeply flawed due to previously unacknowledged factors that lead to all the proxies being unreliable indicators of past conditions.
That still leaves us with the question as to why the surface temperature of planets beneath atmospheres is higher than that predicted from the radiation only S-B equation.
So, Pat has done a great job in tearing down a false edifice but we are now faced with the task of reconstruction.
Start with a proper analysis of non-radiative energy transfers.

John Q Public
Reply to  Stephen Wilde
September 7, 2019 10:29 pm

This article, highlighted by Judith Curry on Twitter (the modern purveyor of scientific knowledge), may be relevant:

New Insights on the Physical Nature of the Atmospheric Greenhouse Effect Deduced from an Empirical Planetary Temperature Model, Ned Nikolov* and Karl Zeller, Environment Pollution and Climate Change

Reply to  John Q Public
September 7, 2019 11:10 pm

No: Nikolov and Zeller are not relevant. Their paper is an instance of the logical fallacy of petitio principii, or circular argument. They point out, correctly, that one can derive the surface temperature of a planetary body if one knows the insolation and the surface barometric pressure, and that one does not need to know the greenhouse-gas concentration. But they do not consider the fact that the barometric pressure is itself dependent upon the greenhouse-gas concentration.

Reply to  Monckton of Brenchley
September 7, 2019 11:36 pm

How is barometric pressure dependent on GHG concentration?

Lonny Eachus
Reply to  Monckton of Brenchley
September 8, 2019 1:01 am

I think this is a straw-man, and misses the point.

Of course barometric pressure is dependent on the combined partial pressures of the gases, but CO2 is 0.04% of the atmosphere, more or less.

I’d have to resort to the Ideal Gas Law to properly determine its partial pressure, but undoubtedly it is small.

Therefore according to Nikolov and Zeller’s own equations it should have a minuscule (though not zero) effect.

Philip Mulholland
Reply to  Lonny Eachus
September 8, 2019 5:57 am

“I’d have to resort to the Ideal Gas Law to properly determine its partial pressure, but undoubtedly it is small.”

Lonny Eachus,
No need to do that. Dalton’s Law of Partial Pressures supplies the answer because we already know the atmospheric composition by volume.
https://www.thoughtco.com/what-is-daltons-law-of-partial-pressures-604278
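As a quick worked example of the point via Dalton’s law (the ~410 ppm mixing ratio is assumed for illustration; the exact value does not matter for the argument):

co2_mole_fraction = 410e-6       # ~0.04% by volume; an assumed present-day value
surface_pressure = 101325.0      # Pa, standard sea-level pressure

p_co2 = co2_mole_fraction * surface_pressure   # Dalton: partial pressure = mole fraction x total
print(round(p_co2, 1), "Pa")                   # roughly 42 Pa out of ~101,325 Pa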

Chaswarnertoo
Reply to  Lonny Eachus
September 10, 2019 11:05 am

As I was about to point out to his lordship, the partial pressure is negligible, after sweating through my little-used A-level physics. I reckon his lordship owes me a partial apology, because I rescind my previous acknowledgement of his rightness.

Reply to  Monckton of Brenchley
September 8, 2019 1:37 am

You mean density? Pressure is given by the mass of the atmosphere.

Philip Mulholland
Reply to  Monckton of Brenchley
September 8, 2019 4:23 am

But they do not consider the fact that the barometric pressure is itself dependent upon the greenhouse-gas concentration.

Monckton of Brenchley,
Sir,
Your statement appears to imply that a rapidly rotating terrestrial planet with a 1 bar atmosphere of pure nitrogen illuminated by a single sun will not have any dynamic meteorology.
Surface barometric pressure is a direct consequence of the total quantity of volatile material (aka gas) in a planetary atmosphere. The mass of an atmosphere held on the surface of a terrestrial planet by gravity generates a surface pressure that is completely independent of the nature and form of the volatile materials that constitute that atmosphere.
Atmospheric opacity does not generate the climate. The sun generates the climate.

John Q Public
Here is the complete list of our climate modelling essays that Anthony kindly allowed to be published on WUWT:

1. Calibrating the CERES Image of the Earth’s Radiant Emission to Space
2. An Analysis of the Earth’s Energy Budget
3. Modelling the Climate of Noonworld: A New Look at Venus
4. Return to Earth
5. Using an Iterative Adiabatic Model to study the Climate of Titan

Reply to  John Q Public
September 7, 2019 11:32 pm

Ned and Karl set out the observation but do not provide a mechanism.
Philip Mulholland and I have provided the mechanism.

Steven Mosher
September 7, 2019 8:09 pm

still not even wrong, pat

John Tillman
Reply to  Steven Mosher
September 7, 2019 9:54 pm

Steven,

You have outdone yourself in the drive by sweepstakes.

If you have something concrete to contribute, please do so.

If not, why drive by?

Pat is a scientist. You, not so much. As in, not at all.

Reply to  Steven Mosher
September 7, 2019 10:37 pm

because….Mosh???????

Propagation of uncertainty in a parameter, in a model whose underlying algorithms use it in iterative loops, is a basic concept.

Example: If some cosmologist wants to study the expansion of space-time using iteratively looped calculations of his favorite theorems, and those calculations use a value of c (speed of light in vacuum) that (say) is only approximated to 1 part per thousand (~+/- 0.1%), then that approximation (uncertainty) error will rapidly propagate and build, so that well before 100 iterations anything you think you’re seeing in the model output on the evolution of an expanding universe is meaningless garbage. (We know c to an uncertainty of about 4 parts per billion now.)
That’s long accepted physics. That’s why everyone wants to use the most accurate constants possible and then recognize where uncertainty is propagating.

And it is also the underlying inevitable truncation error that digital computer calculations face with fixed float precision that led Edward Lorenz to realize that long range weather forecasting was hopelessly doomed. Climate models running temperature evolution projections years in to the future using a cloud forcing parameter that has orders of magnitude more uncertainty than uncertainty of the CO2 forcing they are studying are no different in this regard.
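As a toy Python illustration of the Lorenz point (the logistic map below is only a stand-in for any sensitive iterated calculation, not a climate model):

def logistic(x, r=3.9):
    # a standard chaotic map, used here only as a stand-in for a sensitive system
    return r * x * (1.0 - x)

a, b = 0.500000000, 0.500000001   # initial states differing by one part in a billion
for _ in range(60):
    a, b = logistic(a), logistic(b)

print(abs(a - b))   # an order-one difference: the tiny initial error has swamped the state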

So what Pat has shown here about the impact of cloud forcing uncertainty values on iteratively computed climate model outputs out to decades is no different. Their outputs are meaningless. Except that climate has become politicized. Vast sums of money have been spent in hopes of a renewable energy payday for many rich people. And tribal camps have formed to defend their cherished “consensus science” for selfish political and reputational reasons.

Not science. Climate modeling is junk science… all the way down.

That’s not denying CO2 is GHG. That’s not denying there will likely be some warning. But GCMs are not fit to the task of answering how much. The real science deniers are the deniers of that basic outcome.

So are you Denier now Steve?

Reply to  Joel O'Bryan
September 7, 2019 11:24 pm

I left out the “or” between “fixed float” precision: as in, “fixed or float precision.”
I understand the difference in computations. And I meant “warming,” not “warning.”
I also left out a few “a”‘s
I miss edit.

Reply to  Joel O'Bryan
September 8, 2019 11:05 am

Really great reply, Joel, thanks. 🙂

Reply to  Joel O'Bryan
September 9, 2019 1:34 pm

Joel, you are talking to people that believe more significant digits can be obtained just by adding up enough numbers and dividing.

I learned that fallacy in sixth grade. Now, I didn’t get error propagation in any of my coding classes, not even the FORTRAN ones, so I suppose that their ignorance is somewhat forgivable. I actually learned that from a numerical analysis and FORTRAN text, but one that was not used in any of my classes (published 1964).

In my opinion, nobody should be awarded a diploma in any field that uses mathematics without at least three to six credit hours devoted entirely to all of the ways in which you can get the wrong results.

Reply to  Steven Mosher
September 7, 2019 11:12 pm

Mr Mosher’s pathetic posting is, in effect, an abject admission of utter defeat. He has nothing of science or of argument to offer. He is not fit to tie the laces of Pat Frank’s boots.

Reply to  Steven Mosher
September 8, 2019 4:07 am

Steven,
Is this perhaps an attempt at humor?
Drawing a caricature of yourself with only five words!
It is laughable.
But not funny.
Don’t quit your day job.

Clyde Spencer
Reply to  Steven Mosher
September 8, 2019 10:35 am

Mosher
It is obvious that you think more highly of yourself than most of the readers here do! If you had a sterling reputation like Feynman, you might be able to get a nod to your expertise, and people would tentatively accept your opinion as having some merit. However, you aren’t a Feynman! Driving by and shouting “wrong” gets you nothing but eye rolling. If you have something to contribute (such as a defense of your opinion), contribute it. Otherwise, if you were as smart as you seem to think you are, you would realize that you are responsible for heaping scorn on yourself because of your arrogance. Behavior unbecoming even a teenager does nothing to bolster your reputation.

Reply to  Steven Mosher
September 8, 2019 11:02 am

Still not knowing what you’re talking about Steve.

Mark Broderick
Reply to  Pat Frank
September 8, 2019 12:42 pm

That’s OK, Pat, neither does Steve! : )

Reply to  Steven Mosher
September 8, 2019 2:15 pm

“…not even wrong…” was clever, witty, and original….. when it was first used.

But now it has become a transparently trite and meaningless comment to be used by everyone who happens to think he’s a little bit cleverer than everyone else, but can’t quite explain why.

Matthew R Marler
Reply to  Steven Mosher
September 10, 2019 12:10 am

Steven Mosher: still not even wrong, pat

Now that he has done it, plenty of people can follow along doing it wrong. You perhaps.

John Q Public
September 7, 2019 8:23 pm

For John Q Public one of the interesting outcomes is the following:

In order to be fair and assess the state of climate science, I talked to actual climate modelers, and they assured me that they do not just apply a forcing function (in the more advanced models). But what appears to be the case is that even though they do not explicitly do this, the net effect is that the outputs can still be represented as linear sequences of parameters. This is probably due to the use of a lot of linearization within the models to facilitate efficient computation.
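A sketch of what that claim amounts to in practice, with made-up numbers standing in for a GCM projection (the 0.42 slope and the noise level are assumptions, not model output): if the output really is linear in forcing, a one-parameter fit recovers it almost exactly.

import numpy as np

forcing = np.linspace(0.0, 6.0, 13)        # stand-in cumulative GHG forcing, W/m2
temps = 0.42 * forcing + np.random.default_rng(1).normal(0.0, 0.02, forcing.size)

slope, intercept = np.polyfit(forcing, temps, 1)
resid_rms = np.sqrt(np.mean((temps - (slope * forcing + intercept)) ** 2))
print(round(slope, 2), round(resid_rms, 3))  # slope near 0.42, residual near the noise level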

September 7, 2019 8:47 pm

Here is an analogy for consideration.

Suppose we take a large population of people and get them all to walk a mile. We carefully count the number of steps they take, noting the small fraction of a step that takes them beyond the mile, so we end up with an average number of steps for people to walk a mile and an average error or “overstepping”. Let’s say it’s 1,500 steps, with an average overstep of 0.5 steps.

Now we take a single person, tell them to take 15,000 steps and we expect they’ll have walked 10 miles +- 5 steps.

But we chose a person who was always going to take 17,000 steps because they had smaller than average steps. And furthermore the further they walked the more tired they got and the smaller steps they took….so it ends up taking 18,500 steps.

How does that +- 5 steps look now?
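Putting the analogy’s numbers into a few lines of Python (only the figures given above; nothing new is assumed):

steps_per_mile = 1500          # calibration average from the population
overstep_per_mile = 0.5        # calibration spread, steps per mile

miles = 10
expected_steps = miles * steps_per_mile            # 15,000
naive_spread = miles * overstep_per_mile           # +/-5 steps, scaled naively

actual_steps = 18500                               # the short-strided, tiring walker
print(expected_steps, naive_spread, actual_steps - expected_steps)
# the real miss (3,500 steps) dwarfs the +/-5 calibration figure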

Reply to  TimTheToolMan
September 7, 2019 10:46 pm

Edward Lorenz realized in 1961 that long-range weather forecasting was a mathematical impossibility, due to the unknowable, propagating, expanding impact of error on the output when simulating dynamical systems and their state at any long-range point.

http://www.stsci.edu/~lbradley/seminar/butterfly.html

Reply to  Joel O'Bryan
September 8, 2019 12:04 am

The AGW argument is that while we won’t know what the weather will be, we’ll know what the accumulated energy will be, and so they just create a lot of weather under those conditions, average it out, and call it climate.

Well, the problem is that they don’t know what the energy is going to be, because they don’t know how fast it will accumulate, and they don’t know how the weather will look at different earth energy levels and forcings either.

What we do know is that the GCMs get very little “right”. And what they do get “right” is because they were tuned that way.

To take the analogy a little further, suppose there is a hypothesis that if the person carried some helium balloons then they’d take slightly bigger steps and they model that 15,000 steps will take the person 11 miles instead of 10 miles.

So as before the actual person naturally takes smaller steps so they were below the 10 miles at 15,000 steps and the steps got smaller so they were below that even more. In fact they only got to 15,000/18500 * 10 miles = 8.1 miles with some due to the helium balloons… maybe. Are they able to say anything about their hypothesis at the end of that?

In that case the hypothesis was going to impact the result with a much smaller figure than the error in the steps…so in the same way Pat Frank is saying, there is nothing that can be said about the impact of the helium balloons.

Taylor Pohlman
Reply to  Joel O'Bryan
September 8, 2019 7:56 am

I sometimes refer to this as the ‘Lorenz, Edward Contradiction’. (Physics majors will get the joke).

Reply to  TimTheToolMan
September 9, 2019 4:37 pm

Hehe, from memory, the ‘mile’ in English came from Latin, which if I am remembering correctly, was 1000 steps taken by soldiers marching. Sure, there’d be variation; but for the purpose of having an army advance, it is good enough. Being one who was once in a marching band, after a bit of training, it got pretty facile to march at nearly one yard per stride on a football (US) field. An army’d likely take longer strides, so 1760 yards per mile follows, for me.

Steven Fraser
Reply to  cdquarles
September 10, 2019 1:56 pm

1000 paces. 1 pace= 2 steps.

Reply to  Steven Fraser
September 11, 2019 6:12 am

That makes it believable.
A mile is 5,280 feet, so each stride would need to be 5.28 feet.
Half that sounds very reasonable.
If you are gonna march all day, you do not extend your legs as far as you can.
I have had to work out the stride to take in order to have them be equal to 3 feet…it is a straight-legged slightly longer than completely natural step.
So ~4 1/3 inches less sounds right.

Sara Bennett
September 7, 2019 9:34 pm

This paper’s findings would appear to justify an immediate, swift, and complete end to funding for climate modelling.

What needs to be done, and by whom, to achieve that result?

John Q Public
Reply to  Sara Bennett
September 7, 2019 10:12 pm

It needs to get published, then debated. In the interim it will strengthen skeptics very significantly.

The fact that it could “justify an immediate, swift, and complete end to funding for climate modelling” is potentially the very reason this has not happened.

Reply to  Sara Bennett
September 8, 2019 4:36 am

We are currently living through a declared “climate catastrophe”, which has been announced by legislatures, confirmed by press reports, lamented by millions of hand-wringing and panic stricken citizens, and addressed by hundreds of billions in annual worldwide spending on endless studies and useless alternative energy money spigots.
And yet there is zero actual evidence of one single thing that is even a little unusual vs historical averages, let alone catastrophic in point of fact.

We have ample and growing reasons to be quite certain that GCMs are worthless, CO2 concentration cannot possibly be the thermostat knob of the planet, and in fact no reason to think warming is a bad thing on a planet which is in an ice age and has large portions of the surface perpetually frozen to deadly temperatures.

This has never been about evidence, science, logic, or truth.

As Pat Frank correctly points out:
“In climate model papers the typical uncertainty analyses are about precision, not about accuracy. They are appropriate to engineering models that reproduce observables within their calibration (tuning) bounds. They are not appropriate to physical models that predict future or unknown observables.
Climate modelers are evidently not trained in the scientific method. They are not trained to be scientists. They are not scientists. They are apparently not trained to evaluate the physical or predictive reliability of their own models. They do not manifest the attention to physical reasoning demanded by good scientific practice. In my prior experience they are actively hostile to any demonstration of that diagnosis.”

And:

“But we now know this for a certainty: all the frenzy about CO₂ and climate was for nothing.
All the anguished adults; all the despairing young people; all the grammar school children frightened to tears and recriminations by lessons about coming doom, and death, and destruction; all the social strife and dislocation. All the blaming, all the character assassinations, all the damaged careers, all the excess winter fuel-poverty deaths, all the men, women, and children continuing to live with indoor smoke, all the enormous sums diverted, all the blighted landscapes, all the chopped and burned birds and the disrupted bats, all the huge monies transferred from the middle class to rich subsidy-farmers.
All for nothing.”

And all the while:
“Those offenses would not have happened had not every single scientific society neglected its duty to diligence…”

The whole thing is a power grab and is fed and powered by a bureaucratic gravy-train juggernaut.
Such expenditures are virtually self perpetuating in the places in which they occur, which at this point seems to be virtually everywhere taxpayers exist who can be fleeced.

We are living through what I believe will be viewed as the most dramatic and widespread and long lasting case of mass hysteria ever to occur.

What needs to be done and by whom, to stop mass insanity, to end widespread delusions, and an epic worldwide pocket-picking and self inflicted economic destruction?

At this point I am wondering if skeptics are currently engaged in the hard part of the work to do that…or the easy part?

John Q Public
Reply to  Nicholas McGinley
September 8, 2019 11:22 am

“They are appropriate to engineering models that reproduce observables within their calibration (tuning) bounds. They are not appropriate to physical models that predict future or unknown observables.”

Or: interpolate but do not extrapolate (the engineering summary).

“We are living through what I believe will be viewed as the most dramatic, widespread, and long-lasting case of mass hysteria ever to occur.”

Will make the Tulip bubble look like a walk through a garden.

RW
Reply to  Sara Bennett
September 10, 2019 4:55 am

Sara, we’re dealing with a religious cult with hundreds of millions of followers. The first hurdle is to put together the contrary view and get it out there on film (not paywalled) and on the podcast/long-form interview circuit. The second hurdle is electing non-cynical politicians who are aware of the BS behind it. Good luck with that. The third hurdle is then defunding all the scaremongering research.

Kurt
September 8, 2019 12:16 am

I think that the source of the problem that the climate science community, and specifically the climate modeling community, has with Pat Frank’s analysis is that the climate scientists use models out of desperation as a substitute means to PRODUCE climate in the first instance, and not to measure something that HAS BEEN PRODUCED (sorry for the shouting – don’t know how to italicize in a post).

In the real world we can, say, measure the Shore hardness of the same block of metal 20 times in a calibration step, and take an average, knowing that there is some “true value” somewhere in there (the readings can’t all be simultaneously correct), as a way of asking how precise our measurement ability is. Then we can use that measurement instrument to actually measure a single thing in an experiment, assign it an error range, and let that error propagate through subsequent calculations. In the real world, it makes sense that precision and error have two conceptually different meanings.
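A minimal Python sketch of that two-step picture, with made-up hardness numbers (everything here is hypothetical, not from the paper): a calibration run of repeated measurements on one block yields a precision estimate, which is then attached as the uncertainty of a single later measurement and propagated through a simple calculation.

import statistics

# Step 1: calibration -- measure the same block 20 times (hypothetical readings).
readings = [61.8, 62.1, 61.9, 62.3, 62.0, 61.7, 62.2, 62.0, 61.9, 62.1,
            62.4, 61.8, 62.0, 62.2, 61.9, 62.1, 62.0, 61.8, 62.3, 62.0]
mean_cal = statistics.mean(readings)
sigma = statistics.stdev(readings)   # 1-sigma scatter = the instrument's precision

# Step 2: a single experimental measurement inherits that calibration uncertainty.
x = 58.7          # one measured value (hypothetical)
u_x = sigma       # its 1-sigma uncertainty, taken from the calibration step

# Step 3: propagate through a later calculation, e.g. y = a*x + b with exact a and b.
a, b = 1.5, 4.0
y = a * x + b
u_y = abs(a) * u_x   # first-order error propagation for a linear function

print(f"calibration: mean = {mean_cal:.2f}, sigma = {sigma:.2f}")
print(f"result: y = {y:.2f} +/- {u_y:.2f}")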

But to climate scientists, models do not produce results that are then measured. They just produce results (numbers) that are of necessity definitionally presumed to BE climate, or a possible version of climate. It’s just a single step, not two. When one model is run with different inputs, or when different models are run with different assumptions, neither “precision” nor “error” makes any sense at all, because each model run is a sample of a completely different (albeit theoretical) thing, and there is no actual way of determining the difference between a model run and a “true” version of climate. So in the end, you just get a spaghetti graph having absolutely no real-world meaning, and the climate modelers attach amorphous and nonsensical “95% confidence” bars to give the silly presentation a veneer of scientific meaning, when there really is none.

Mark Broderick
Reply to  Kurt
September 8, 2019 5:29 am

Kurt September 8, 2019 at 12:16 am
“(sorry for the shouting – don’t know how to italicize in a post).”

https://wattsupwiththat.com/test-2/

This is italicized text
This is bold text

Kurt
Reply to  Mark Broderick
September 8, 2019 10:11 am

Well, let’s try this out.

Mark Broderick
Reply to  Kurt
September 8, 2019 12:46 pm

See, we learn something new at WUWT every day! : )

Cheers…..

September 8, 2019 12:19 am

You say “In their hands, climate modeling has become a kind of subjectivist narrative”

This is so true. For the modellers, the models and the real world are separate.

An example of this from WG1AR5.

When talking about the difference between the models and the real world, from page 1011 of AR5 WG1, Chapter 11, above Figure 11.25:

“The assessment here provides only a likely range for GMST (Global Mean Surface Temperature).

Possible reasons why the real world might depart from this range include: … the possibility that model sensitivity to anthropogenic forcing may differ from that of the real world …

The reduced rate of warming … is related to evidence that ‘some CMIP5 models have a … larger response to other anthropogenic forcings … than the real world (medium confidence).’”

Math
September 8, 2019 12:21 am

Congratulations on the publication, Patrick! I think there is a minor typo in Eq. 3: the partial derivative ∂x/∂v should be squared, right? Not that it is of any importance for the paper, but I thought you might like to know.
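For reference, the standard first-order propagation formula for a quantity x(v_1, …, v_n) with independent uncertainties σ_{v_i} is given below; assuming Eq. 3 is meant to follow this usual form, each partial derivative does indeed enter squared:

\sigma_x^2 = \sum_{i=1}^{n} \left( \frac{\partial x}{\partial v_i} \right)^2 \sigma_{v_i}^2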

Reply to  Math
September 8, 2019 11:15 am

Oops, you’re right, Math.

That escaped me in the proof corrections. Thanks. 🙂

September 8, 2019 12:29 am

I doubt this paper will be endorsed by M. E. Mann. Without that, it has no authoritative standing – just denialist words on paper. How dare anyone of so-called learning suggest Trump is right on Climate Change!!

How can real scientists undo this mess? For example, who will admit that the billions spent on ambient, intermittent electricity-generating sources in Germany, California, and Australia are a complete waste? A massive lost opportunity for mankind. Humongous vested interests. The UN needs to be defunded and criminal proceedings begun.

What is the next step? How can Peter Ridd’s stand be amplified so that real scientists can reverse the course of this new religion?

Can the IPCC ever admit their massive error? Can their findings be properly scrutinised and challenged?

Lonny Eachus
September 8, 2019 12:43 am

Congratulations to Patrick Frank on driving the final stake into the heart of the undead vampire called AGW.

It somehow resisted all the garlic, crosses, and closed windows, but will not survive this.

Well done sir.

Joe H
September 8, 2019 12:49 am

Pat,

I take a lively interest in the field of error analysis. Previously, I researched instrumental resolution limits and whether such limits constitute a random or a systematic error. My research has turned up conflicting viewpoints. To my mind, instrument resolution limits are a systematic error, not a random one. Do you agree?

If so, it has significant implications for the assumed precision of ocean temperature rise estimates (and other environmental variables too). I recall Willis posting here on the limits of the 1/sqrt(n) rule for reducing standard error. If resolution error is systematic, surely that is a limiting factor on a shrinking SE as n increases?
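A small Python sketch of that point (purely illustrative numbers, not climate data): averaging n readings shrinks the random scatter roughly as 1/sqrt(n), but a fixed systematic offset, such as a resolution or calibration bias, passes through the average untouched and sets a floor that no amount of averaging removes.

import random
import statistics

random.seed(0)
TRUE_VALUE = 20.0
SYSTEMATIC_OFFSET = 0.25   # a fixed instrument bias (hypothetical)
RANDOM_SIGMA = 0.5         # random read-to-read scatter (hypothetical)

for n in (10, 100, 10000):
    readings = [TRUE_VALUE + SYSTEMATIC_OFFSET + random.gauss(0.0, RANDOM_SIGMA)
                for _ in range(n)]
    mean = statistics.mean(readings)
    sem = RANDOM_SIGMA / n ** 0.5    # the 1/sqrt(n) term applies to the random part only
    bias = mean - TRUE_VALUE         # the systematic part does not shrink with n
    print(f"n = {n:5d}  mean = {mean:7.3f}  random SEM = {sem:6.3f}  residual offset = {bias:+.3f}")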

Congrats btw on getting the paper finally published – I hope it receives the attention it deserves.

Joe

Lonny Eachus
Reply to  Joe H
September 8, 2019 1:25 am

I have read from countless sources that error is random and should therefore cancel out… but it is my understanding that instrumental and human errors tend not to be random.

Therefore there is no justification for “cancelling”.

Just my experience from reading so much of the literature on climate change.

So I would agree with you. In some cases the error could be additive, or even worse.

Reply to  Lonny Eachus
September 8, 2019 5:13 am

There are different classes of errors.
Some are random, and can be expected to generally cancel out, at least under certain scenarios.
But others are systematic, and do not tend to cancel.
And then there are errors related to device resolution, which affect, for example, how many significant figures can correctly be reported in a result.
When iterative calculations are performed using numbers that carry any form of error, those errors tend to compound rather than simply add up (a short sketch of this growth follows at the end of this comment).
And then there are statistical treatment errors.
One can reduce measurement error and uncertainty by making multiple measurements of the same quantity or parameter. The people who calculate global average temperatures have been using the assumption that measurements of air temperature at various locations, at various points in time, using different instruments can be dealt with as if they were all multiple measurements of the same thing.
Climate scientists think they know what the average temperature of the entire planet was 140 years ago, to within hundredths of a degree. They present graphs purporting as much, graphs that make no mention of error bars or uncertainty, let alone display them, even though back then measurements over most of the globe were sparse to nonexistent and device resolution was 100 times larger than the graduations on the graphs.
Accuracy, precision, device resolution, propagation of error…when science students ignore these, or even fail to know the exact rules for dealing with each…they get failing grades. At least that is how it used to be.
But we now have an entire branch of so-called science which somehow has come to wield a tremendous amount of influence regarding national economic and taxation and energy policies, and which seems to have no knowledge of these concepts.
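To make the iterative point above concrete, here is a minimal Python sketch of one standard treatment (root-sum-square accumulation, assuming each step's uncertainty contribution is independent; the numbers are purely illustrative): the propagated uncertainty after n steps grows as sqrt(n) times the per-step uncertainty rather than staying fixed.

import math

u_step = 0.1   # hypothetical uncertainty contributed by each calculational step
for n_steps in (1, 10, 100, 1000):
    u_total = math.sqrt(n_steps) * u_step   # root-sum-square growth over independent steps
    print(f"after {n_steps:4d} steps: propagated uncertainty is about +/-{u_total:.2f}")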

Lonny Eachus
Reply to  Nicholas McGinley
September 9, 2019 6:49 am

Yes, but…

Generally speaking, instrumental error is not random.

Also, making multiple measurements can reduce the uncertainty only under certain conditions, which often don’t apply in climate data. There was an excellent article on the subject posted here in October of 2017: “Durable Original Measurement Uncertainty”.

https://wattsupwiththat.com/2017/10/14/durable-original-measurement-uncertainty/

Reply to  Lonny Eachus
September 9, 2019 10:20 am

As to the first point, I think some sorts of instrument error may be random, while other sorts are almost certainly not random.
As to the second point, I agree completely. This was my point exactly.
My understanding is that making multiple measurements can reduce uncertainty only in very specific circumstances, most particularly when one makes multiple measurements of the same thing.
I believe I am not alone when I say that measuring the temperature of the air on different days in different places is in no way the same as making multiple measurements of the same thing.
I have found to my astonishment that there are people who have commented regularly on WUWT who feel that this is not the case… that they are all measurements of the same thing, the so-called global average temperature. I personally think this is ridiculous, but some individuals have argued the point tirelessly and at great length, and refuse to change their minds despite being shown to be logically incorrect by many separate people and lines of reasoning.

Lonny Eachus
Reply to  Lonny Eachus
September 9, 2019 12:06 pm

Nicholas:

Yes, I too understand that climatological data often does not meet the criteria for reducing uncertainty via multiple measurements, as has often been claimed.

For example: temperature data at different stations are separated in time and space, measurements may take place at different times of day, and, even more importantly, stepwise shifts are introduced when instrumentation or location changes.

This does not represent the continuous, consistent measurement of “the same thing”.

Reply to  Lonny Eachus
September 8, 2019 11:31 am

You’re right, Lonny.

Random error is the assumption common throughout the air temperature literature. It is self-serving and false.

Reply to  Joe H
September 8, 2019 11:30 am

I agree, Joe.

Resolution limits are actually a data limit. There are no data below the resolution limit.

The people who compile the global averaged surface temperature record completely neglect the resolution limits of the historical instruments.

Up to about 1980 and the introduction of the MMTS sensor, the instrumental resolution alone was no better than ±0.25 C. This by itself is larger than the allowed uncertainty in the published air temperature record for 1900.
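A toy Python illustration of what such a resolution limit means (the graduation size and temperatures are hypothetical): every true value within a quarter degree of a graduation is recorded as the same reading, so differences smaller than ±0.25 C are simply not present in the data.

def record(true_temp_c, resolution_c=0.5):
    # Round a reading to the nearest instrument graduation (here 0.5 C),
    # so the recorded value carries a +/-0.25 C resolution uncertainty.
    return round(true_temp_c / resolution_c) * resolution_c

for t in (14.76, 14.88, 15.01, 15.24):
    print(f"true {t:.2f} C -> recorded {record(t):.2f} C")
# All four values are recorded as 15.0 C; the differences are unrecoverable.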

It’s incredible, really, that such carelessness has gone unremarked in the literature. Except here.