Consensus Climatology in a Nutshell: Betrayal of Integrity

Guest essay by Pat Frank

Today’s offering is a morality tale about the clash of honesty with self-interest, of integrity with income, and of arrogance with ignorance.

I’m bringing out the events below for general perusal only because they’re a perfect miniature of the sewer that is consensus climatology.

And also because corrupt practice battens in the dark. With Anthony’s help, we’ll let in some light.

On November third Anthony posted about a new statistical method of evaluating climate models, published in “Geoscientific Model Development” (GMD), a journal then unfamiliar to me.

WUWT readers will remember my recent post about unsuccessful attempts to publish on error propagation and climate model reliability. So I thought, “A new journal to try!”

Copernicus Publications publishes Geoscientific Model Development under the European Geosciences Union.

The Journal advertises itself as, “an international scientific journal dedicated to the publication and public discussion of the description, development, and evaluation of numerical models of the Earth system and its components.”

It welcomes papers that include, “new methods for assessment of models, including work on developing new metrics for assessing model performance and novel ways of comparing model results with observational data.”

GMD is the perfect Journal for the new method of model evaluation by propagation of calibration error.

So I gave it a try, and submitted my manuscript, “Propagation of Error and the Reliability of Global Air Temperature Projections”; samizdat manuscript here (13.5 MB PDF). Copernicus assigned a “topical editor” by reference to manuscript keywords.

My submission didn’t last 24 hours. It was rapidly rejected and deleted from the journal site.

The topical editor was Dr. James Annan, a climate modeler. Here’s what he wrote in full:

Topical Editor Initial Decision: Reject (07 Nov 2017) by James Annan

 

“Comments to the Author:

 

“This manuscript is silly and I’d be embarrassed to waste the time of reputable scientists by sending it out for review. The trivial error of the author is the assumption that the ~4W/m^2 error in cloud forcing is compounded on an annual basis. Nowhere in the manuscript it is explained why the annual time scale is used as opposed to hourly, daily or centennially, which would make a huge difference to the results. The ~4W/m^2 error is in fact essentially time-invariant and thus if one is determined to pursue this approach, the correct time scale is actually infinite. Of course this is what underpins the use of anomalies for estimating change, versus using the absolute temperatures. I am confident that the author has already had this pointed out to them on numerous occasions (see refs below) and repeating this process in GMD will serve no useful purpose.”

Before I parse out the incompetent wonderfulness of Dr. Annan’s views, let’s take a very relevant excursion into GMD’s ethical guidelines about conflict of interest.

But if you’d like to anticipate the competence assessment, consult the 12 standard reviewer mistakes. Dr. Annan managed many ignorant gaffes in that one short paragraph.

But on to ethics: GMD’s ethical guidelines for editors include:

“An editor should give unbiased consideration to all manuscripts offered for publication…”

“Editors should avoid situations of real or perceived conflicts of interest in which the relationship could bias judgement of the manuscript.”

Copernicus Publications goes further and has a specific “Competing interests policy” for editors:

“A conflict of interest takes place when there is any interference with the objective decision making by an editor or objective peer review by the referee. Such secondary interests could be financial, personal, or in relation to any organization. If editors or referees encounter their own conflict of interest, they have to declare so and – if necessary – renounce their role in assessing the respective manuscript.”

 

In a lovely irony, my cover letter to chief editor Dr. Julia Hargreaves made this observation and request:

“Unfortunately, it is necessary to draw to your attention the very clear professional conflict of interest for any potential reviewer reliant on climate models for research. The same caution applies to a reviewer whose research is invested in the consensus position concerning the climatological impact of CO2 emissions.

 

“Therefore, it is requested that the choice of reviewers be among scientists who do not suffer such conflicts.

 

“I do understand that this study presents a severe test of professional integrity. Nevertheless I have confidence in your commitment to the full rigor of science.”

It turns out that Dr. Annan is co-principal of Blue Skies Research Ltd., a for-profit company that offers climate modeling for hire, and that has at least one corporate contract.

Is it reasonable to surmise that Dr. Annan might have a financial conflict of interest with a critically negative appraisal of climate model reliability?

Is it another reasonable surmise that he may possibly have a strong negative, even reflexive, rejectionist response to a study that definitively finds climate models to have no predictive value?

In light of his very evident financial conflict of interest, did editor Dr. Annan recuse himself, knowing the actuality, not just the appearance, of a serious and impending impropriety? Nope.

It gets even better, though.

Dr. Julia Hargreaves is the GMD Chief Executive Editor. I cc’d her on the email correspondence with the Journal (see below). It is her responsibility to administer journal ethics.

Did she remove Dr. Annan? Nope.

I communicated Dr. Annan’s financial and professional conflicts of interest to Copernicus Publications (see the emails below). The Publisher is the ultimate administrator of Journal ethics.

Did the publisher step in to excuse Dr. Annan? Nope.

It also turns out that GMD Chief Executive Editor Dr. Julia Hargreaves is the other co-principal of Blue Skies Research Ltd.

She shares the identical financial conflict of interest with Dr. Annan.

Julia Hargreaves and James Annan are also a live-in couple, perhaps even married.

One can’t help but wonder if there was a dinner-table conversation.

Is Julia capable of administering James’ obvious financial conflict of interest violation? Apparently no more than is James.

Is Julia capable of administering her own obvious financial conflict of interest? Does James have free rein at GMD, Julia’s Executive Editorship withal? Evidently, the answers are no and yes.

Should the financially conflicted Julia and James have any editorial responsibilities at all, at a respectable Journal pretending to critical appraisals of climate models?

Both Dr. Annan and Dr. Hargreaves also have a research focus on climate modeling. Their grant monies depend on the perceived efficacy of climate models.

They therefore have a separate professional conflict of interest with any critical study of climate models that comes to negative conclusions.

So much for conflict of interest.

Let’s proceed to Dr. Annan’s technical comments. This will be brief.

We can note his very unprofessional first sentence and bypass it in compassionate silence.

He wrote, “… ~4W/m^2 error in cloud forcing…” except it is ±4 W/m^2, not Dr. Annan’s positive-signed +4 W/m^2. Apparently, for Dr. Annan, ± = +.

And ±4 W/m^2 is a calibration error statistic, not an energetic forcing.

That one phrase alone engages mistakes 2, 4, and 6.

How does it happen that a PhD in mathematics does not understand rms (root-mean-square) and cannot distinguish a “±” from a “+”?

How is a PhD mathematician unable to discern a physically real energy from a statistic?
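For readers who want the distinction made concrete, here is a minimal sketch (Python, with invented residuals) of why an rms statistic is inherently a ± uncertainty magnitude and not a signed offset:

import math

# Hypothetical model-minus-observation residuals (W/m^2); values invented for illustration.
residuals = [3.1, -4.6, 5.0, -2.8, 4.2, -3.9]

# Root-mean-square: square (losing the signs), average, take the square root.
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
mean_error = sum(residuals) / len(residuals)

print(f"rmse = +/-{rmse:.1f} W/m^2")       # ~ +/-4.0: a magnitude, read as plus-or-minus
print(f"mean = {mean_error:+.1f} W/m^2")   # ~ +0.2: a signed offset is a different statistic

The rms of these residuals is about ±4 W/m^2 even though their signed mean is near zero; treating the ±4 as a constant +4 offset conflates the two statistics.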

Next, “the assumption that the [error] is compounded on an annual basis”

That “assumption” is instead a demonstration. Ten pages of the manuscript are dedicated to showing that the error arises within the models, is a systematic calibration error, and necessarily propagates stepwise.

Dr. Annan here qualifies for the honor of mistakes 4 and 5.

Next, “Nowhere in the manuscript it is explained why the annual time scale is used as opposed to hourly, daily or centennially,…”

Exactly “why” was fully explained in manuscript Section 2.4.1 (pp. 28-30), and the full derivation was provided in Supporting Information Section 6.2.

Dr. Annan merits a specialty award for extraordinarily careless reading.

On to, “The ~4W/m^2 error is in fact essentially time-invariant…”

Like Mr. andthentheresphysics, Nick Stokes, and Dr. Patrick Brown, Dr. Annan apparently does not understand that a time average is a statistic conveying, ‘mean magnitude per time-unit.’ This concept is evidently not covered in the Ph.D.

And then, “the correct time scale is actually infinite.”

Except it’s not infinite (see above), but here Dr. Annan has made a self-serving interpretative choice. Dr. Annan actually wrote that his +4 W/m^2 is “time-invariant,” which is also consistent with an infinitely short time step. The propagated uncertainty is then also infinite; good job, Dr. Annan.
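To see why the time-step question matters so much to both sides, here is a minimal sketch, assuming (as the manuscript argues) that a ±4 W/m^2 calibration uncertainty enters every simulation step and accumulates in quadrature; the step choices are purely illustrative:

import math

SIGMA = 4.0   # W/m^2, assumed per-step calibration uncertainty
YEARS = 100   # projection length

# Root-sum-square accumulation over N identical steps: sigma * sqrt(N).
for label, steps_per_year in [("annual", 1), ("monthly", 12), ("daily", 365)]:
    n = YEARS * steps_per_year
    print(f"{label:7s}: +/-{SIGMA * math.sqrt(n):7.1f} W/m^2 after {YEARS} years")

Shorter steps give a larger envelope (monthly is √12 ≈ 3.5 times annual), which is why the annual time scale, argued for physically in Section 2.4.1 of the manuscript, is the crux of the dispute.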

Penultimately, “this is what underpins the use of anomalies for estimating change…”

Dr. Annan again assumed the ±4 W/m^2 statistic is a constant +4 W/m^2 physical offset error, reiterating mistakes 4, 6, 7, and 9.

And it’s always nice to finish up with an irony: “I am confident that the author has already had this pointed out to them on numerous occasions…”

In this, finally, Dr. Annan is correct (except grammatically: he references a singular noun with a plural pronoun).

I have yet to encounter a single climate modeler who understands:

  • that “±” is not “+,”
  • that an error statistic is not a physical energy,
  • that taking anomalies does not remove physical uncertainty (see the sketch below),
  • that models can be calibrated at all,
  • or that systematic calibration error propagates through subsequent calculations.

Dr. Annan now joins that chorus.
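Since the anomaly point recurs throughout the discussion below, here is a minimal sketch of the standard uncertainty algebra for a difference of two uncertain quantities (the values are invented):

import math

# Quadrature rule: for y = a - b with independent uncertainties,
# sigma_y = sqrt(sigma_a**2 + sigma_b**2). Subtraction cancels a shared
# offset in the central values, but uncertainties carry no sign to cancel.
sigma_projected = 4.0  # hypothetical uncertainty in a projected quantity
sigma_baseline = 4.0   # hypothetical uncertainty in the baseline it is anomalized against

sigma_anomaly = math.sqrt(sigma_projected**2 + sigma_baseline**2)
print(f"+/-{sigma_anomaly:.1f}")  # ~5.7: the anomaly is more uncertain, not less

Taking an anomaly removes a known constant offset; it cannot remove an uncertainty.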

The predominance among climate modelers of mathematicians like Dr. Annan explains why climate modeling is in such a shambles.

Dr. Annan’s publication list illustrates the problem. Not one paper concerns incorporating new physical theory into a model. Climate modeling is all about statistics.

It hardly bears mentioning that statistics is not physics. But that absolutely critical distinction is obviously lost on climate modelers, and even on consensus-supporting scientists.

None of these people are scientists. None of them know how to think scientifically.

They have made the whole modeling enterprise a warm little pool of Platonic idealism, untroubled by the cold relentless currents of science and its dreadfully impersonal tests of experiment, observation, and physical error.

In their hands, climate models have become more elaborate but not more accurate.

In fact, apart from Lindzen’s Iris hypothesis, there doesn’t seem to have been any advance in the physical theory of climate since at least 1990.

Such is the baleful influence on science of unconstrained mathematical idealism.

The whole Journal response reeks of fake ethics and arrogant incompetence.

In my opinion, GMD ethics have proven to be window dressing on a house given over to corruption; a fraud.

Also in my opinion, this one episode is emblematic of all of consensus climate science.

Finally, the email traffic is reproduced below.

My responses to the Journal pointed out Dr. Annan’s conflict of interest and obvious errors. On those grounds, I asked that the manuscript be reinstated. I always cc’d GMD Chief Executive Editor Dr. Julia Hargreaves.

The Journal remained silent despite the clear violations of its own ethical pronouncements; as did Dr. Hargreaves.


1. GMD’s notice of rejection:

From: editorial@xxx.xxx

Subject: gmd-2017-281 (author) – manuscript not accepted

Date: November 7, 2017 at 6:07 AM

To: pfrankxx@xxx.xxx

Dear Patrick Frank,

We regret that your following submission was not accepted for publication in GMD:

Title: Propagation of Error and the Reliability of Global Air Temperature Projections

Author(s): Patrick Frank

MS No.: gmd-2017-281

MS Type: Methods for assessment of models

Iteration: Initial Submission

You can view the reasons for this decision via your MS Overview: http://editor.copernicus.org/GMD/my_manuscript_overview

To log in, please use your Copernicus Office user ID xxxxx.

We thank you very much for your understanding and hope that you will consider GMD again for the publication of your future scientific papers.

In case any questions arise, please contact me.

Kind regards,

Natascha Töpfer

Copernicus Publications

Editorial Support

editorial@xxx.xxx

on behalf of the GMD Editorial Board

+++++++++++++++

2. My first response:

From: Patrick Frank pfrankxx@xxx.xxx

Subject: Re: gmd-2017-281 (author) – manuscript not accepted

Date: November 7, 2017 at 7:46 PM

To: editorial@xxx.xxx

Cc: jules@xxx.xxx.xxx

Dear Ms. Töpfer,

Dr. Annan has a vested economic interest in climate modeling. He does not qualify as editor under the ethical conflict of interest guidelines of the Journal.

Dr. Annan’s posted appraisal is factually, indeed fatally, incorrect.

Dr. Annan wrongly claimed the ±4 W/m^2 annual error is explained “nowhere in the manuscript.” It is explained on page 30, lines 571-584.

The full derivation is provided in Supporting Information Section 6.2.

There is no doubt that the ±4 W/m^2 is an annual calibration uncertainty.

One can only surmise that Dr. Annan did not read the manuscript before coming to his decision.

Dr. Annan also made the naïve error of supposing that the ±4 W/m^2 calibration uncertainty is a constant offset physical error.

Plus/minus cannot be constant positive (or negative). It cannot be subtracted away in an anomaly.

Dr. Annan’s rejection is not only scientifically unjustifiable. It is not even scientific.

I ask that Dr. Annan be excused on ethical grounds, and on the grounds of an obviously careless and truly incompetent initial appraisal.

I further respectfully ask that the manuscript be reinstated and re-assigned to an alternative editor who is capable of non-partisan stewardship.

Thank-you for your consideration,

Pat

Patrick Frank, Ph.D.

Palo Alto, CA 94301

email: pfrankxx@xxx.xxx

++++++++++++++++

3. Journal response #1: silence.

+++++++++++++

4. My second response:

From: Patrick Frank pfrankxx@xxx.xxx

Subject: Re: gmd-2017-281

Date: November 8, 2017 at 8:08 PM

To: editorial@xxx.xxx

Cc: jules@xxx.xxx.xxx

Dear Ms. Töpfer,

One suspects the present situation is difficult for you. So, let me make things plain.

I am a Ph.D. physical methods experimental chemist with emphasis in X-ray spectroscopy. I work at Stanford University.

My email address there is xxx@xxx.edu, if you would like to verify my standing.

I have 30+ years of experience, international collaborators, and an extensive publication record.

My most recent paper is Patrick Frank, et al., (2017) “Spin-Polarization-Induced Pre-edge Transitions in the Sulfur K-Edge XAS Spectra of Open-Shell Transition-Metal Sulfates: Spectroscopic Validation of σ-Bond Electron Transfer” Inorganic Chemistry 56, 1080-1093; doi: 10.1021/acs.inorgchem.6b00991.

Physical error analysis is routine for me. Manuscript gmd-2017-281 strictly focuses on physical error analysis.

Dr. Annan is a mathematician. He has no training in the physical sciences. He has no training or experience in assessing systematic physical error and its impacts.

He is unlikely to ever have made a measurement, or worked with an instrument, or to have propagated systematic physical error through a calculation.

A survey of Dr. Annan’s publication titles shows no indication of physical error analysis.

His comments on gmd-2017-281 reveal no understanding of the physical uncertainty deriving from model calibration error.

He evidently does not realize that physical knowledge statements are conditioned by physical uncertainty.

Dr. Annan has no training in physical error analysis. He has no experience with physical error analysis. He has never engaged the systematic error that is the focus of gmd-2017-281.

Dr. Annan is not qualified to evaluate the manuscript. He is not competent to be the manuscript editor. He is not competent to be a reviewer.

Dr. Annan’s comments on gmd-2017-281 are no more than ignorant.

This is all in addition to Dr. Annan’s very serious conflict of financial and professional interest with the content of gmd-2017-281.

Journal ethics demand that he should have immediately recused himself. However, he did not do so.

I ask you to reinstate gmd-2017-281 and assign a competent and ethical editor capable of knowledgeable and impartial review.

Geoscientific Model Development can be a Journal devoted to science.

Or it can play at nonsense.

The choice is yours.

I will not bother you further, of course. Silence will be evidence of your choice for nonsense.

Best wishes,

Pat

Patrick Frank, Ph.D.

Palo Alto, CA 94301

email: pfrankxx@xxx.xxx

++++++++++++++++++

5. Journal response #2: silence.

++++++++++++++++++

The journal has remained silent as of 11 November 2017.

They have chosen to play at nonsense. So chooses all of consensus climate so-called science.

November 12, 2017 12:20 am

Why would people expect Dr. Annan to allow publication of a paper that will show fault with his work, when he is obviously a believer in the Phil Jones mantra of “why should I show you my data when all you want to do is find fault with it.”

Keep chasing them, Pat Frank.

November 12, 2017 12:23 am

Brilliant dismantling of pal review.

Alastair Brickell
November 12, 2017 12:57 am

Another very sad tale of loss of scientific integrity in this field. Thanks for bringing it to our attention.

pdtillman
Reply to  Alastair Brickell
November 12, 2017 7:19 am

Sad, blatant, unashamed….

For shame!

Hot under the collar
Reply to  Alastair Brickell
November 12, 2017 10:51 am

“It also turns out that GMD Chief Executive Editor Dr. Julia Hargreaves is the other co-principal of Blue Skies Research Ltd.
She shares the identical financial conflict of interest with Dr. Annan.
Julia Hargreaves and James Annan are also a live-in couple, perhaps even married.
One can’t help but wonder if there was a dinner-table conversation.”

I believe ‘caught red handed in the cookie jar’ would be more accurate than conflict of interest.

Science or Fiction
November 12, 2017 1:00 am

Without being knowledgeable about the issues raised in this article, I just remembered this quote from the IPCC AR5 WGI report:
“When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori. The bias correction or adjustment linearly corrects for model drift.”
(Ref: IPCC AR5 WGI, Chapter 11, Section 11.2.3 Prediction Quality, page 967)

I wonder what that means, but anyhow it doesn’t sound good.

Quinn the Eskimo
Reply to  Science or Fiction
November 12, 2017 5:40 am

That sounds kind of significant. In English, the statement that “Biases can be largely removed using empirical techniques a posteriori,” seems to mean that the claim of model-observation agreement is the product of “a posteriori” “adjustments” to “model drift,” which increases with “forecast time.”

“Empirical techniques” – pretty fancy name for fraud.

Reply to  Quinn the Eskimo
November 12, 2017 8:06 am

It means that their results come from their posterior.

Greg
Reply to  Quinn the Eskimo
November 12, 2017 11:20 am

It means that the models do not give credible results, so they frig them afterwards to give the impression that they do.

The corollary of this is that the remaining warming is merely an arbitrarily chosen value, chosen to be small enough to be credible whilst still serving the alarmist agenda.

jon
Reply to  Quinn the Eskimo
November 12, 2017 4:31 pm

My thought is that the models can be accurate with adjustments ‘a posteriori’: that is, after the forecasts/projections/predictions have been made and the time for the supposed event has passed, the model can be adjusted (changed from wrong to right) using the actual results.
So it’s useless as a prediction tool.

barryjo
Reply to  Science or Fiction
November 12, 2017 6:52 am

My attempt at translation. Basically, it means that we will continue revising our projections to agree with the observations on a periodic basis. Therefore, we will make our numbers look valid and we will get more grant money.

Nick Stokes
Reply to  Science or Fiction
November 12, 2017 8:22 am

This is not describing GCMs as currently implemented. It is describing the still-experimental process of decadal prediction. It has long been understood that chaotic processes cannot for long be determined by their initial conditions. So climate modelling focusses on the attractor, which is approached independently of starting point. Decadal prediction is trying to get away from this, and this para is just describing the basic difficulty in maintaining dependence on initial conditions.

RACookPE1978
Editor
Reply to  Nick Stokes
November 12, 2017 8:39 am

Nick Stokes

Decadal prediction is trying to get away from this, and this para is just describing the basic difficulty in maintaining dependence on initial conditions.

If that is the case, then how can the CAGW academic-bureaucratic industry claim that they were accurately able to “tune” EVERY climate model’s output between 1980 and 2017 for the only two irregular, short-term (8-10 month) intervals of volcanic activity between 1970 and 1990?

Nick Stokes
Reply to  Nick Stokes
November 12, 2017 8:54 am

I don’t know if that is true, but I see no connection.

Reply to  Nick Stokes
November 12, 2017 9:10 am

Nick,

You have illustrated one of the major failings of climate science, which is to conflate the chaos of the transition from one equilibrium state to another (weather) with the deterministic equilibrium state being transitioned to (climate). The reason this arises is because climate models attempt to model the chaos of interactions, rather than model what the states MUST be.

The General Circulation Models used for weather forecasting and climate modelling ostensibly model the physics, except that they have innumerable knobs and dials to fit them to subjectively interpreted historical data; moreover, when used for climate modeling, the temporal and spatial resolutions used are far larger than those used for weather forecasting. The hope is that the correct macroscopic behavior will emerge by simulating low-resolution weather far into the future based on constraints dictated by the recent past. To paraphrase a lunch time discussion with some colleagues about String Theory, “Given enough degrees of freedom any behavior can be reproduced by any model”. What this means for climate models is that the more adjustments are required to fit a model to expectations, the less certain its predictions of the future will be, and curve-fitting GCMs to expectations requires a plethora of adjustments.
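The degrees-of-freedom point is easy to illustrate; a minimal sketch (Python with numpy, invented data), not a claim about any actual GCM:

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 6)
y = rng.normal(size=6)  # arbitrary "behavior": pure noise

# With as many free parameters as data points, the fit is exact...
coeffs = np.polyfit(x, y, deg=5)
print(np.allclose(np.polyval(coeffs, x), y))  # True: a perfect hindcast

# ...yet it says nothing about points the fit never saw: extrapolation is wild.
print(np.polyval(coeffs, 1.2))

A perfect fit to the past, given enough knobs, carries no information about skill outside the fitted range.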

paqyfelyc
Reply to  Nick Stokes
November 12, 2017 9:14 am

“So climate modelling focusses on the attractor, ”
yeah, sure, that’s probably why, searching the whole AR5 with the IPCC’s own tool (http://www.ipcc.ch/report/ar5/index.shtml), you’ll find 11,000+ occurrences of the word “projection” and … 2 (TWO!!) occurrences of the word “attractor”, both in a single paragraph worth quoting:

A climate model driven with external forcing alone is not expected to replicate the observed evolution of internal variability, because of the chaotic nature of the climate system, but it should be able to capture the statistics of this variability (often referred to as ‘noise’). The reliability of forecasts of short-term variability is also a useful test of the representation of relevant processes in the models used for attribution, but forecast skill is not necessary for attribution: attribution focuses on changes in the underlying moments of the ‘weather attractor’, meaning the expected weather and its variability, while prediction focuses on the actual trajectory of the weather around this attractor.

Which translates into: “here we admit in weasel words that we bullshit you in the face, don’t say you had not been told”.

Just after that they write:
“the new guidance recognized that it may be possible, in some instances, to attribute a change in a particular variable to some external factor before that change could actually be detected in the variable itself”
I say WOW! I mean, with such a guidance, I can attribute to WUWT exposure the change of Nick Stokes’ state-of-mind index into the skeptic zone, before that change could actually be detected in the variable itself (no, Nick, you don’t get a say in this. This is MY index, proprietary technology).
And even with this post-modern science criterion, they didn’t succeed in attributing extreme events to climate change?

Science or Fiction
Reply to  Nick Stokes
November 12, 2017 9:33 am

How come a decadal model will drift towards imperfect climatology (whatever that is) while a centennial model will not drift towards imperfect climatology?

Reply to  Nick Stokes
November 12, 2017 9:51 am

paqyfelyc,

The true nature of the attractor is the end state as it constrains the chaos. Climate modellers hope that the attractor emerges, which will never happen, as the attractor keeps the chaos from running in an open-loop, or unconstrained, manner. This is why the models are so dependent on initial conditions. If the models are correct, they will eventually converge to the same state, independent of initial conditions. If the end state was as chaotic as claimed, every summer and every winter would be significantly different from each other. The fact that the seasonal climate is quite consistent from year to year across the globe is strong evidence that the climate is nowhere near as chaotic as is often presumed.

Running a model with varying initial conditions does not result in the emergence of the actual attractor by cancelling out the chaos as claimed, but converges to a false attractor quantified by the many assumptions, one of which is the assumption of a much larger effect from CO2 than is possible.

Nick Stokes
Reply to  Nick Stokes
November 12, 2017 9:52 am

“How come a decadal model will drift towards imperfect climatology…”
In science, as in life, we never have perfect knowledge. We have imperfect knowledge of the initial state, and imperfect knowledge of the climatology. GCM’s, recognising this, wind back a long way to start, so the imperfections in initial state will fade, to concentrate on climatology. Decadal tries to span this period, by working harder to get a good initial state. They would like to get the evolution from this state, for decadal prediction, but part of that gets confounded with the trend to (imperfect) climatology (which GCMs allow to go to completion before they start).

Science or Fiction
Reply to  Nick Stokes
November 12, 2017 9:59 am

The quote was from a section about decadal predictions. I find your explanation plausible. There are issues with tuning of general climate models but I guess these don’t drift away that fast.

Reply to  Nick Stokes
November 12, 2017 10:09 am

It is not uncommon for models to run off the rails, to either thermageddon hothouse or icehouse states. The tuning of multiple parameters is what creates a reliable attractor for the model output to run to. That of course means that, within limits (wide limits, it turns out, with enough parameters), the modeler can tweak to achieve whatever climate sensitivity their confirmation bias wants.

Those model outputs that don’t conform to the resulting GroupThink are excluded from the intercomparison projects. Funding dries up, publications get rejected. Conform or perish is the lesson the modelers have learned.

jclarke341
Reply to  Nick Stokes
November 12, 2017 10:13 am

“attractor” = initial assumption. As in: “It has long been understood that chaotic processes cannot for long be determined by their initial conditions. So climate modelling focusses on the initial assumption, which is approached independently of starting point.”

In other words, climate models are self-fulfilling prophecies, derived from initial assumptions and unhampered by reality, which is nothing more than the canvas on which a climate crisis is painted! The only time it is even necessary to mention reality is when it deviates from the models, at which point mainstream climate scientists come forth to explain why reality is wrong!

I would have thought that this astounding perversion of science would be impossible, but every time I think human stupidity has peaked, people come along and prove me wrong!

Quinn the Eskimo
Reply to  Nick Stokes
November 12, 2017 10:20 am

Despite Nick Stokes’ attempt to distinguish between decadal prediction and climate modeling, the text of this section of AR5 makes that impossible:

It is important to note that the systematic errors illustrated here are common to both decadal prediction systems and climate-change projections. The bias adjustment itself is another important source of uncertainty in climate predictions (e.g., Ho et al., 2012b). There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques. There are also difficulties in estimating the drift in the presence of volcanic eruptions.

AndyG55
Reply to  Nick Stokes
November 12, 2017 11:38 am

“It is describing the still experimental process of decadal prediction”

SAY WHAT. !!!!! Did you read what you typed, and keep a straight face ?????

So all these TRILLIONS of dollars have been spent based on an EXPERIMENTAL PROCESS that can’t even get decadal predictions correct.

YOU HAVE GOT TO BE JOKING

Nick, you have just DESTROYED the whole AGW agenda in that one sentence. !!

AndyG55
Reply to  Nick Stokes
November 12, 2017 11:42 am

““So climate modelling focusses on the attractor, ””

On the attractor of CLIMATE FUNDING !!

Reply to  Nick Stokes
November 12, 2017 3:29 pm

“co2isnotevil November 12, 2017 at 9:10 am

The General Circulation Models used for weather forecasting and climate modelling ostensibly model the physics, except that they have innumerable knobs and dials to fit them to subjectively interpreted historical data; moreover, when used for climate modeling, the temporal and spatial resolutions used are far larger than those used for weather forecasting. The hope is that the correct macroscopic behavior will emerge by simulating low-resolution weather far into the future based on constraints dictated by the recent past. To paraphrase a lunch time discussion with some colleagues about String Theory, “Given enough degrees of freedom any behavior can be reproduced by any model”. What this means for climate models is that the more adjustments are required to fit a model to expectations, the less certain its predictions of the future will be, and curve-fitting GCMs to expectations requires a plethora of adjustments.”

Excellent comment CO2isnotevil.

Wonderful phrasing!
“Subjectively interpreted historical data”, phrasing that reminds me of “interpretive dancers” working in red light districts.

Robert of Ottawa
Reply to  Science or Fiction
November 12, 2017 8:49 am

It means they pull the answer out of their posterior 🙂

How else can you have an a posteriori imulation.

Robert of Ottawa
Reply to  Robert of Ottawa
November 12, 2017 8:49 am

emulation

PiperPaul
Reply to  Robert of Ottawa
November 12, 2017 9:20 am

Maybe they need some Imodium…

jon
Reply to  Robert of Ottawa
November 12, 2017 4:40 pm

Reminds me of a previous Prime Minister who stated “No one is the suppository of all wisdom”.
But perhaps climate science is?

ferdberple
Reply to  Science or Fiction
November 12, 2017 8:51 am

using empirical techniques a posteriori.
========
the model will drift due to lack-of-precision errors. this is typically “corrected” by smearing the error linearly across all the nodes, which is itself a source of error.

ferdberple
Reply to  ferdberple
November 12, 2017 9:08 am

The ~4W/m^2 error is in fact essentially time-invariant
==========
this does not render the model immune to the effects of the error. this is at the heart of why computers are unable to reliably predict the future.

I do not agree that this is time invariant. However, it is not annual either, unless the model increments are annual. Otherwise the error compounds similarly to compound interest: 4% compounded daily is greater than 4% compounded annually.

as such, calculating error annually gives a lower bound, as models will cycle faster.
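The compound-interest analogy is easy to check numerically; a minimal sketch (Python, rates purely illustrative):

# Compounding the same nominal 4% at different frequencies: more steps
# per year yields a larger compounded result, so a coarser (annual) step
# understates the accumulation.
rate = 0.04
for label, n in [("annually", 1), ("monthly", 12), ("daily", 365)]:
    print(f"{label:8s}: {(1 + rate / n) ** n - 1:.4%}")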

Clyde Spencer
Reply to  ferdberple
November 12, 2017 10:51 am

ferdberple,
It seems to me that the error (uncertainty) is compounded with each calculation. The time interval is an artifact of the units of time assigned to the looping or iteration. That is, the interval between calculations is irrelevant. It is the total number of calculations that determines the magnitude of the final uncertainty.

Anonymoose
Reply to  ferdberple
November 12, 2017 9:35 pm

The paper says it is per year. Compared to the annual cloud forcing, 4 is quite significant: “Global cloud forcing (CF) is net cooling, with an estimated global average annual magnitude of about -27.6 Wm-2 [76, 77].”

Nick Stokes
Reply to  ferdberple
November 12, 2017 9:51 pm

” That is, the interval between calculations is irrelevant.”
No. The time interval of prediction is fixed, say to 2100. The interval determines the number of steps. If it’s monthly instead of annual, that’s 12 times as many steps till 2100, or about 3.5 times the spread. And who’s to say whether month or year is right?

“The paper says it is per year. “
Pat Frank’s paper says that. His source, Lauer and Hamilton, do not.

RW
Reply to  ferdberple
November 13, 2017 12:16 am

Nick. The root mean square error is just a standard deviation. The quote is a bit ambiguous, but if I had to guess I’d say the authors linearly regressed the observed values on the mean of the predicted values that come out of a bunch of models.

A common way to report an analysis like that is to report the correlation between the two variables. The correlation is pretty good. The authors also report the root of the average sum of squared deviations (observed minus predicted) – which is merely, as I already said above, a sample standard deviation.

If the model predictions and the observed values are annual, then the +/- 4 is per year. This value applies at each predicted value. It is subject to the assumption of homoscedasticity.

+/- 4, like any sample standard deviation, does not change systematically with N. The standard error of the mean, however, does change with N.

To propagate that +/- 4 error, one would divide the +/-4 by the root of N and then do some partial derivative calculus.

I would also like to point out that if the observed values are themselves averages, then even the +/- 4 per year underestimates the error.

To do textbook error propagation, you have to start with instrument error.

RW
Reply to  ferdberple
November 13, 2017 12:52 am

Clyde. There are rules for applying error propagation formulas. Every novel calculation from measured values requires error to be propagated. In some softer sciences, this is often not done at all. In harder sciences and of course engineering, the stakes are way too high to ignore them.

Nick should try to make time to study error propagation if he still cannot field a valid counter-argument to Pat Frank’s concern.

Nick Stokes
Reply to  ferdberple
November 13, 2017 1:28 am

RW
“If the model predictions and the observed values are annual, then the +/- 4 is per year.”
They aren’t. The model steps every 30 minutes or so. The observations are more frequent than annual. Pat expresses the algebra of combining the averages in these two equations from the Supplementary:
First the averaging within each year:
[equation (6-2) image not reproduced]

Then the averaging of the annuals over 20 years:
[equation (6-4) image not reproduced]

The first, (6-2), is normal averaging, and yields an average with the same units as the things averaged, cloud cover units. But when he averages over years (6-4), the units of the average change. It’s now cloud cover units per year. I haven’t been able to get any rational explanation of the inconsistency. But it determines the timescale. If he had summed over decades instead of years, say, then the units would have been per decade, with a three-times-slower propagation of error.

What do you think the units of average should be?

RW
Reply to  ferdberple
November 13, 2017 3:26 am

Nick. As you said, he takes an average of the model predictions within a given year, does this for each one of 20 years, then takes a grand 20-year mean. I need more context so I will have to go and read the bit myself. One reason he might have selected an annual unit is because the observed values are annual units. I would like to know how he aggregated the measured cloud cover values as well (if he did at all).

Whatever the case, to get the rmse, he seems to have done what I described before, based on the other part you quoted. Using regression, he can posit that the +/- 4 is uniform over the regression line, applying the error at small time scales as well. But I’m not sure that assumption would hold. Perhaps the observed values are sloppier or more biased (over- or under-estimates) in some years than in others. In the absence of any additional info, it seems safer to me to stick to the annual time scale, where things might behave in better accordance with the statistical assumptions.

Nick Stokes
Reply to  ferdberple
November 13, 2017 2:04 pm

” it seems safer to me to stick to the annual time scale”
It’s not a matter of safety. The time scale determines the alleged rate of growth of the error. And it’s arbitrary. If it’s safe to gather in years, then it’s safe to gather in decades (if only by summing the years). But you get a different result.

I’ve focussed on this because I think it shows the nonsense of his approach. I actually agree with Annan and the other referees that the errors described just don’t accumulate in the way Pat says. But when folks like Eric think they know all about that, there isn’t much point in pressing that directly. Instead I point to the flaw that I think should be evident to anyone. There just isn’t a proper timescale associated with the alleged accumulation. So they are just plucked out of the air. In an earlier thread Pat was claiming that this physical scale could be justified by the convention of publishing projections annually.

And I did think the sheer nuttiness of claiming that averaging changed the units (sometimes) would strike a chord. But apparently not.

Reply to  ferdberple
November 16, 2017 7:11 pm

The periodicity of the global cloud cycle is one year; following the seasonal cycle.

The same annual seasonal cycle forms the rationale for reporting the global temperature as annual averages. The seasonal cycle is finished in a year, and the average annual temperature is representative of the year.

When Jiang, et al, [1] reported their multi-year, multi-model assessment of CMIP5 simulated cloud error, they did so as annual average error, for that reason. The average CMIP5 simulation error in cloud cover was ±12.1%.

Nick would say Jiang, et al. were exhibiting “sheer nuttiness” for calling their time scale annual, because the per-month average error is numerically identical. So is the per-second average.

Likewise, Nick will have Lauer and Hamilton expressing “sheer nuttiness” for reporting their data throughout as “annual means,” because Nick insists they’re really just monthly means. Also daily means. Also per-second means.

But average annual cloud cover does not change much year-to-year. Jiang, et al., note this: “…no significant trends in clouds and water vapor are found in the model averaging periods. These multiyear means are regarded representative of ‘recent past climate,’ for which our analyses are intended.”

Sheer nuttiness, Nick?

A Nick Stokesian diagnosis of “sheer nuttiness” will also apply to Phil Jones, John Kennedy at UKMet, Gavin Schmidt, and Richard Muller for nuttily representing their global mean temperature records as an annual average.

Their annual average temperature is numerically identical to each year’s monthly average temperature.

Also daily, hourly, and per-second average temperatures, too. Because averaging smooths out the entire duration of the average into a single value at every time-scale.

The way out of the numerical dilemma is to recognize that only the yearly average has useful physical meaning.

What is the meaning of an average that supposedly represents an hourly temperature across 365 days, when each day varies strongly in temperature across 24 hours, and the hours traverse months that cycle through the seasons?

What is the physical utility or meaning of an hourly average temperature for a given year?

What is the physical sense of an average that says every month across the year, summer and winter, had an average monthly temperature of, e.g., 12 C?

The only average temperature that makes physical sense is the annual average, representing the full cycle of the seasons. Annual averages can be usefully compared. Each mean samples the entire seasonal cycle of the year.

Annual mean is the only physically rational mean.

The same reasoning applies to mean annual error in simulated long wave cloud forcing. The monthly errors combine into an annual error that samples and represents the entire year.

Cloudiness changes by the day and by the month across the seasons and across the years.

The annual average of simulated cloud error across the cycle of seasons is the only average that makes physical sense.

It is the only average that has any real physical utility for, or applicability to, a multi-year projection expressed in annual steps.

The mean annual simulation error averaged across 20 years smooths out any year-by-year variations.

It provides a useful, physically relevant, model calibration error metric that can be used to appraise the predictive value of an air temperature projection.

And the annual mean has physical meaning, no matter the other numerical constructs. An annual average error is the only error that can be propagated into an annual time-step.

Nick says the numerically identical monthly error could as well be propagated. And so it could be done, given monthly simulation time steps.

What would be the result? Hugely ballooning uncertainty envelopes. And they would be statistically valid.

However, they’d not tell us anything physically worthwhile because only the full annual seasonal cycle is representative of the range of simulation error produced by climate models.

Nor would their message be novel. The propagation through annual time steps already shows us that climate models have no predictive value.

A monthly propagation would reveal the identical conclusion. Nothing is gained. But physical relevance is reduced.

This kind of physical reasoning is a necessity within the physical sciences and engineering.

Nick has never displayed any understanding of how to think as a scientist. He has displayed no understanding of instruments, or of instrumental resolution, or of systematic measurement error.

And, as noted here, Nick doesn’t know how to extract physical meaning from an average.

And in these threads he’s used a tabular convention to make an opportunistic play that square roots are only positive, not plus/minus.

[1] Jiang, J. H., et al. (2012), Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations, J. Geophys. Res., 117(D14), D14105, doi: 10.1029/2011jd017237.

Reply to  ferdberple
November 16, 2017 7:24 pm

Nick, “I actually agree with Annan and the other referees that the errors described just don’t accumulate in the way Pat says.

It’s not “the way Pat says.” It’s the way Bevington and Robinson say.

It’s the way NIST says (see “Propagation of error formula”).

It’s the way every valid authority recommends propagating error through a calculation.

Long wave cloud forcing error is systematic model error. The models inject it into every single step of a simulation.

Propagation of that error is the only valid means of determining reliability of the projection of future temperature.

Nick Stokes
Reply to  ferdberple
November 16, 2017 8:34 pm

Pat,
“It’s the way NIST says (see “Propagation of error formula”).”
What NIST is talking about has no relation to what you are doing. They describe how to combine errors in a composite formula. It uses a derivative to linearise, and then expresses the variance of a weighted sum.

You are talking about the solution of a differential equation, with a driving term. After discretisation, this is a recurrence system in A, which could be linear
A_(i+1) = -S*A_i + f_i
where A is in a GCM a huge vector, S a non-negative definite matrix, and f a vector of driving terms, which in your case could be considered errors. The point is that this isn’t forming a simple sum of the errors f. At each stage, it modifies and effectively reduces the contribution of past f. If S is constant, the i-th term is
f_i - S*f_(i-1) + S*S*f_(i-2) - …
And that is what you have to get the variance of.

This is fundamental in the solution, because the condition for stability is that S, as applied, has no negative eigenvalues, and there is some fuss if there are zero eigenvalues. I spent a large part of my professional life dealing with these issues.
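To make the two positions concrete, here is a minimal sketch (Python, invented scalar parameters) contrasting an undamped random-walk accumulation with the kind of damped recurrence described above, in which the system attenuates the contribution of past driving terms:

import math
import random

random.seed(0)
STEPS, SIGMA = 100, 4.0  # invented: 100 steps with a +/-4 per-step error term

def final_spread(damping, trials=2000):
    """Monte Carlo spread of the final state of a[i+1] = damping*a[i] + f[i]."""
    finals = []
    for _ in range(trials):
        a = 0.0
        for _ in range(STEPS):
            a = damping * a + random.gauss(0.0, SIGMA)
        finals.append(a)
    mean = sum(finals) / trials
    return math.sqrt(sum((v - mean) ** 2 for v in finals) / trials)

print(f"undamped (random walk): +/-{final_spread(1.0):.1f}")  # ~ SIGMA*sqrt(STEPS) = 40
print(f"damped (factor 0.5):    +/-{final_spread(0.5):.1f}")  # saturates near +/-4.6

Whether GCM error growth behaves like the first line or the second is precisely what Pat Frank and Nick Stokes are arguing about.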

Reply to  ferdberple
November 16, 2017 9:52 pm

You’re making a simple problem complicated, Nick. GCM air temperature projections are no more than linear extrapolations of forcing. All your vector math notwithstanding.

As linear output machines, linear propagation of error is entirely justified no matter what goes on inside.

In any case, Nick, the NIST site refers the reader to Ku (1966), Notes on the Use of Propagation of Error Formulas, J. Res. NBS 70C(4), 263-273.

In that paper, Ku discusses systematic errors. He writes, “When there are a number of systematic errors to be propagated, one approach is to take |Δw| as the square root of the sum of squares of terms on the right-hand side of (2.12), instead of adding together the absolute values of all the terms. This procedure presupposes that some of the systematic errors may be positive and the others negative, and the two classes cancel each other to a certain extent. (my bold)”

Ku there recommends exactly the root-sum-square approach as I took, under exactly the conditions describing GCM LWCF calibration error.

Garofalo and Daniels (2014) Mass Point Leak Rate Technique with Uncertainty Analysis Res. Nondest. Eval. 25, 125-149 recommend propagating systematic (bias) errors through a calculation by means of root-sum-square (rss), which again is exactly my approach. See under 2.4.1 Bias and Precision.

The identical rss approach is recommended to propagate systematic error in Vasquez and Whiting Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods Risk Analysis 25(6), 1669-1681. See their equation 2.

Phillips, Eberhardt, and Parry (1997) Guidelines for Expressing the Uncertainty of Measurement Results Containing Uncorrected Bias J. Res. NIST 102, 577-585 throughout discuss propagating uncorrected systematic bias by various forms of rss.

There’s no way around it.
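For readers unfamiliar with the root-sum-square rule cited from Ku above, a minimal sketch (Python; the individual error magnitudes are invented):

import math

# Ku's recommendation for combining systematic errors that may take either
# sign: |dw| = square root of the sum of squared terms, rather than the
# worst-case sum of absolute values.
errors = [4.0, 2.5, 1.2]  # hypothetical systematic error contributions

rss = math.sqrt(sum(e * e for e in errors))
worst_case = sum(abs(e) for e in errors)

print(f"root-sum-square: +/-{rss:.2f}")        # ~ +/-4.87
print(f"sum of |terms|:  +/-{worst_case:.2f}") # +/-7.70, the pessimistic bound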

Nick Stokes
Reply to  ferdberple
November 17, 2017 2:04 am

I misspoke a little there. The conditions I wrote on S relate to the differential equation
dy/dt=-S*y+f
This will be unstable if S has negative eigenvalues. The condition for the corresponding recurrence relation is that the eigenvalues of S should have magnitude less than one.

November 12, 2017 1:11 am

We are with tou Pat Frank.
Corruption is rampant unfortunately.

BallBounces
Reply to  Glenn Thompson
November 12, 2017 7:29 am

You mean avec tou 😉

Sheri
Reply to  BallBounces
November 12, 2017 8:45 am

Avec tu?

Bartemis
Reply to  BallBounces
November 12, 2017 12:11 pm

Or, avec vous. I guess he just split the difference.

Nick Stokes
Reply to  BallBounces
November 12, 2017 12:54 pm

Avec toi sounds better.

benben
November 12, 2017 1:11 am

Oh Pat, it’s just a bad paper. And you know there are plenty of journals out there that will publish something like this (just try one of the Chinese journals, they give zero cares about the American culture wars). So the fact that you keep posting these rejection posts implies that you’re really more after stoking some anti-science outrage rather than just getting the thing out there. Sad state of affairs.

Cheers from a fellow scientist,
Ben

Reply to  benben
November 12, 2017 1:18 am

That may be true. But the peer reviewer was unable to find a valid criticism.
Nor could his partner.
So you are probably wrong.

Chris
Reply to  M Courtney
November 12, 2017 2:21 am

The peer reviewer was unable to find a valid criticism?

“The trivial error of the author is the assumption that the ~4W/m^2 error in cloud forcing is compounded on an annual basis. Nowhere in the manuscript it is explained why the annual time scale is used as opposed to hourly, daily or centennially, which would make a huge difference to the results. The ~4W/m^2 error is in fact essentially time-invariant and thus if one is determined to pursue this approach, the correct time scale is actually infinite. Of course this is what underpins the use of anomalies for estimating change, versus using the absolute temperatures. I am confident that the author has already had this pointed out to them on numerous occasions (see refs below) and repeating this process in GMD will serve no useful purpose.”

AndyG55
Reply to  M Courtney
November 12, 2017 2:58 am

Just needs to find someone who actually understands error propagation.

So far, it’s been beyond the reviewers’ understanding.

And no, Chris, that was not pointing out an error; it was pointing out that the reviewer didn’t understand.

ferdberple
Reply to  M Courtney
November 12, 2017 9:15 am

underpins the use of anomalies for estimating change, versus using the absolute temperatures.
========
nope. anomalies reduce the variance of the data. this reduces the standard error, making the result appear statistically more reliable than it actually is, while at the same time making natural variability appear smaller than it is.

paqyfelyc
Reply to  M Courtney
November 12, 2017 9:40 am

“The trivial error of the author is the assumption that the ~4W/m^2 error in cloud forcing is compounded on an annual basis.”
A trivial error needs no explanation.

“Nowhere in the manuscript it is explained why the annual time scale is used as opposed to hourly, daily or centennially, which would make a huge difference to the results. ”
Indeed. However, obviously, the shorter the time scale, the bigger the resulting error, so taking a year gives a lower bound of the error.

“The ~4W/m^2 error is in fact essentially time-invariant”
Nonsense; that contradicts the previous sentence. An error is like a profit margin, it is time dependent. 4% profit (or loss!) per hour, day or century are hugely different. Which was precisely stated in the previous sentence, meaning the reviewer contradicts himself.

“and thus if one is determined to pursue this approach, the correct time scale is actually infinite. ”
Please someone explain what “an infinite time scale” is supposed to mean in a time-step modelling process…?
Infinite time between two steps, that is, a single run, zero iterations in the process? That would be nonsense.
An error that compounds to finish at ~4W/m^2 at the end of the simulation (so, something like ~0.004 W/m^2 per step if the simulation has 1000 steps)? That wouldn’t make sense, either.
So, what does this mean?

“Of course this is what underpins the use of anomalies for estimating change, versus using the absolute temperatures. ”
nonsense again. The use of anomalies has nothing to do with errors. It is a basic linearization technique (… linear system, again…).

RW
Reply to  M Courtney
November 13, 2017 12:34 am

Chris, look up the definition of a valid argument. The premise of the editor’s criticism is false. To believe otherwise would be to believe that Pat Frank has no idea what he wrote. So, you think implying Pat Frank makes stuff up is a constructive way to debate or argue?

It is disturbing that the arguments of global warming advocates so often are at root pathetic ad hominem nonsense. Benben’s comment is right in that category too.

I Came I Saw I Left
Reply to  benben
November 12, 2017 5:22 am

benben, does exposing the corruption of scientific integrity bother you? If not, why call such efforts anti-science and even bother to comment? Without integrity science means nothing.

Old England
Reply to  I Came I Saw I Left
November 12, 2017 6:08 am

Benben’s comment suggests to me that he cannot be a ‘scientist’ as he claims to be.

F. Leghorn
Reply to  I Came I Saw I Left
November 12, 2017 7:23 am

Old England on November 12, 2017 at 6:08 am
Benben’s comment suggests to me that he cannot be a ‘scientist’ as he claims to be.

The standards for “climate scientist” are way different from what you “hard science” guys are used to. You probably don’t even know what the “unicorn hypothesis” is.

Reply to  I Came I Saw I Left
November 12, 2017 8:05 pm

Exactly.
It only takes one comment like this to safely ignore anything this person has to say from now until forever.

Streetcred
Reply to  I Came I Saw I Left
November 12, 2017 9:51 pm

I imagine somebody with the nickname “benben” in diapers, sporting a silly bonnet, a rattle in one uncontrollable hand and the look of a glazed doughnut.

RW
Reply to  I Came I Saw I Left
November 13, 2017 12:38 am

Agreed, Menicholas. I’ll add, though, that confronting and exposing a bully is often the best way to go. Each and every time.

Reply to  benben
November 12, 2017 8:03 am

Then, of course, the meme would be that it was published in a Chinese journal, and they will publish anything. Nice try, Ben – no cigar. Papers stand on their merits, not reviews by conflicted editors/reviewers.

Cheers from a fellow scientist,
Richard

Louis Hooffstetter
Reply to  benben
November 12, 2017 8:52 am

benben:

What are your credentials and experience as “a fellow scientist”?
I just want to make sure we all understand which orifice you are talking out of.

Reply to  benben
November 12, 2017 9:47 am

benben, “Oh Pat, it’s just a bad paper.”

Mount your criticism, benben, “fellow scientist.” Bet you can’t do it.

Your silence will fully tell your vacuous tale.

Streetcred
Reply to  Pat Frank
November 12, 2017 9:52 pm

Pat, I think benben is still trying to master his rattle.

Louis Hooffstetter
Reply to  Pat Frank
November 13, 2017 6:19 am

As Pat suspected, benben proves himself to be a cowardly troll.

Pete
November 12, 2017 1:12 am

There’s only one word for this: CORRUPTION OF THE SCIENTIFIC METHOD.

Science or Fiction
November 12, 2017 1:13 am

When I think about how the Coupled Model Intercomparison Project CMIP5 was run, it occurs to me that the exam set for the models gives a clue about which models were selected by the IPCC:
«RCP8.5 is a so-called ‘baseline’ scenario that does not include any specific climate mitigation target. The greenhouse gas emissions and concentrations in this scenario increase considerably over time, leading to a radiative forcing of 8.5 W/m2 at the end of the century. While many scenario assumptions and results of the RCP8.5 are already well documented, we review in this paper some of the main scenario characteristics with respect to the relative positioning compared to the broader scenario literature. In addition, we summarize main methodological improvements and extensions that were necessary to make the RCP8.5 ready for its main purpose, i.e., to serve as input to the Coupled Model Intercomparison Project Phase 5 (CMIP5) of the climate community. CMIP5 forms an important element in the development of the next generation of climate projections for the forthcoming IPCC Fifth Assessment Report (AR5).»
https://link.springer.com/content/pdf/10.1007%2Fs10584-011-0149-y.pdf

The CMIP5 was not exactly a blind test; the expected radiative forcing was given in the task, like:
Given that the expected output is 8.5 W/m2 at the end of the century for the inputs provided in this task, that is called “RCP 8.5”. What output, in the form of radiative forcing at the end of the century, is provided by your model?

Jane Rush
November 12, 2017 1:15 am

What are James Anan’s qualifications then? His partner Julia lists hers prominently on their Blue Skies website but nothing for him that I can see. Given his mistakes, I am curious as to what they are.

Julia Hargreaves’s qualifications are:
Institute of Astronomy & Corpus Christi College, Cambridge University, UK, 1991-1995
PhD in Astronomy and Astrophysics, 1995. Mass-to-light ratio of dwarf galaxies.

The Queen’s College, Oxford University, UK, 1988-1991.
BA in Physics (Class 2:1), 1991.

Adam Gallon
Reply to  Jane Rush
November 12, 2017 5:45 am

It helps if you spell Annan correctly, then a quick Google finds him: https://www.researchgate.net/profile/James_Annan He has a D.Phil from Oxford; he’s a mathematician.

Clyde Spencer
Reply to  Adam Gallon
November 12, 2017 11:07 am

Adam Gallon,

I see that they are both relatively ‘newly minted,’ and trying to get a reputation. They frequently co-publish, raising the question as to whether the two of them carry any more weight than any one of them publishing singly.

It seems that mathematicians are typically used to dealing with exact numbers. Thus, they are not focused on how uncertainties can affect their results. “Out of sight, out of mind.”

tom0mason
November 12, 2017 1:16 am

Thank you Patrick, you only reinforce my understanding that the climate models are nothing more than a circle-jerk routine for wannabe statisticians and math students out to make a name for themselves. Science it is not!

As I have said before (https://wattsupwiththat.com/2017/11/11/144-year-earliest-cold-record-for-new-york-city-to-be-broken/comment-page-1/#comment-2663398)

I feel that their models are a tragedy of incompetence. They have all the predictive value of homogenized astrology readings. If the GISTemp model (see https://chiefio.wordpress.com/gistemp/ ) is a good example then it is just a chaotic morass of unphysical, unscientific, codified guesswork, inaccurate estimations, and data manipulations. You might as well read tea-leaves!
Anyone here who works on these nonsensical models should be ashamed. Ashamed for taking money under false pretenses.

Louis Hooffstetter
Reply to  tom0mason
November 12, 2017 6:29 pm

+10
tom0mason nails it!

Reply to  tom0mason
November 13, 2017 3:52 am

Exactly right…incredibly expensive, wildly elaborate, amazingly detailed wild ass guesses…all the way down.
Some of us have come to this conclusion over time…and some of us have known it right from the very start.

November 12, 2017 1:16 am

The journal has remained silent as of 11 November 2017.

In fairness, it is the weekend.
This may not be over…

November 12, 2017 1:57 am

Well done Pat, the corrupt ‘pals review’ system needs outing.

Keep at it & Illegitimi non carborundum

Nick Stokes
November 12, 2017 1:59 am

“None of these people are scientists. None of them know how to think scientifically.”
That’s seven journals, now. And it must be about 30 reviewers. One would have to entertain the possibility that they are right and Pat Frank is wrong.

Reply to  Nick Stokes
November 12, 2017 2:10 am

Of course anything is possible. Taking into consideration the damage rampant conflict of interests can inflict internally, it’s much worse where you stand. Pity you didn’t see it.

Chris
Reply to  jaakkokateenkorva
November 12, 2017 2:28 am

“Taking into consideration the damage rampant conflict of interests can inflict internally, it’s much worse where you stand. Pity you didn’t see it.”

It’s a pity you don’t see the possibility it’s just a bad paper. Oh, you say “anything is possible”, but that’s a throwaway concession.

Mark T
Reply to  jaakkokateenkorva
November 12, 2017 7:06 am

It is a pity that neither of you can offer a valid criticism, either. Yet still you both beat your drums.

Nick Stokes
Reply to  jaakkokateenkorva
November 12, 2017 7:48 am

” neither of you can offer a valid criticism”
I have offered plenty. You might like to explain this one:
“How does it happen that a PhD in mathematics does not understand rms (root-mean-square) and cannot distinguish a “±” from a “+”?”
Do you think rms is a “±”? Do you know what he is talking about here?

LdB
Reply to  jaakkokateenkorva
November 12, 2017 8:35 am

I am curious where you are going with that one Nick … RMS is just a form of average, hence the “mean” in the name. You seem to be implying that ± is not possible, and if that is what you are implying, let me give you a warning by example.

In the USA you have 120 VAC, and Australia has 230 VAC; both are RMS numbers. The voltage range is still quoted with a plus and minus:
https://en.wikipedia.org/wiki/Mains_electricity
USA

120 V and allow a range of 114 V to 126 V (RMS) (−5% to +5%)

Australia

230 V as the nominal standard with a tolerance of +10%/−6%

If you are implying it can’t go negative you probably need to withdraw your statement.
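To make the sign question concrete, here is a minimal Python sketch (using only the textbook US mains figures above, nothing from the paper): the RMS comes out as a single positive magnitude, while the waveform it summarizes swings symmetrically between minus and plus the peak.

import numpy as np

# One full cycle of a 120 V RMS mains sine wave; peak = 120 * sqrt(2) ~ 170 V.
t = np.linspace(0.0, 1.0, 10000, endpoint=False)
v = 120.0 * np.sqrt(2.0) * np.sin(2.0 * np.pi * t)

rms = np.sqrt(np.mean(v ** 2))               # root of the mean of the squares
print(round(rms, 1))                         # 120.0 -- a positive magnitude
print(round(v.min(), 1), round(v.max(), 1))  # about -169.7 and +169.7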

Reply to  jaakkokateenkorva
November 12, 2017 8:35 am

A root mean square is not ±. They are two different things. And neither is a +.
But taking the rms will lose the sign of the original factor. Re-read the article and it does make sense.

You are giving the impression of wilfully missing the point.

Surely Nick, you don’t really think that a calculation error, which can be positive or negative, will propagate in the same way as an error that only goes one way?
More importantly for climate science, does your partner agree with you?

LdB
Reply to  jaakkokateenkorva
November 12, 2017 8:43 am

I must say, M Courtney, I read Nick’s answer the same way, and couldn’t believe he thought the square root somehow magically got applied to the systemic error to make it only positive. So you obviously got what I did from Nick’s comment.

Nick Stokes
Reply to  jaakkokateenkorva
November 12, 2017 9:00 am

” Re-read the article and it does make sense.”
Please explain that “sense”. RMS is a magnitude. It is positive. You square the voltage to make it positive, and then take the positive square root of the mean.

And LdB, you may have 120±something. But you don’t have ±120.

RACookPE1978
Editor
Reply to  Nick Stokes
November 12, 2017 9:03 am

And LdB, you may have 120±something. But you don’t have ±120.

Hmmmn. Last time I measured an alternating voltage, I did measure +120 volts, followed shortly thereafter by -120 volts, followed shortly thereafter by +120 volts … Never did measure a square root of -1 either, but I know it exists.

LdB
Reply to  jaakkokateenkorva
November 12, 2017 9:08 am

I agree with that, Nick, and if you had stated it that way it would have made sense. The whole “+” sign in your answer is very confusing, as no-one would ever put it in front of an RMS value, so you led us to think you must be talking about the error.

LdB
Reply to  jaakkokateenkorva
November 12, 2017 9:17 am

Can I ask one other question, Nick, on your response above. The author describes a

4W/m^2 error in cloud forcing

So is your contention that the 4W/m^2 is the constant, presumably with some smaller error tacked on the back?

Nick Stokes
Reply to  jaakkokateenkorva
November 12, 2017 9:40 am

“The whole “+” sign in your answer”
It’s not my answer. I was quoting Pat’s article. As to what the 4 W/m2 really means, you need to go to the source paper, which is Lauer and Hamilton, 2013.

Reply to  jaakkokateenkorva
November 12, 2017 11:37 am

Yes Chris and Mark. Some are willing to entertain the idea CO2 warms the outside air. Accepting that level of probability, anything is possible.

AndyG55
Reply to  jaakkokateenkorva
November 12, 2017 11:50 am

Poor Nick, you are showing just how out of your depth you are.

RMS does not have a sign. It is a magnitude that can be in either direction.

Stick to basic mathematics, Nick…. no need to actually UNDERSTAND.

AndyG55
Reply to  jaakkokateenkorva
November 12, 2017 11:55 am

Just like saying that a sine wave has an amplitude of +1 is nonsense.

It has an amplitude of 1 which can be +1 or -1.

Sorry if basic comprehension of reality is beyond you, Nick.

HAS
Reply to  jaakkokateenkorva
November 12, 2017 11:56 am

Nick Stokes: “Do you think rms is a “±”? Do you know what he is talking about here?”

Nice try Nick, but you and I both knew he was giving us a list of two separate issues.

Back to the substance …..

AndyG55
Reply to  jaakkokateenkorva
November 12, 2017 12:13 pm

“Do you think rms is a “±”?”

Nick, do you think it is NOT a “±”

REALLY ??

Is your comprehension that seriously lacking ???

AndyG55
Reply to  jaakkokateenkorva
November 12, 2017 12:16 pm

RMS = Root Mean Square.

Nick, when you take the square root of a number, it is ALWAYS “±”

Basic junior high school stuff !!

Nick Stokes
Reply to  jaakkokateenkorva
November 12, 2017 12:52 pm

“but you and I both knew he was giving us a list of two separate issues.”
Really? What are they?

“when you take the square root of a number , it is ALWAYS “±””
No. 2 is a square root of 4. -2 is another. RMS is the positive square root.

AndyG55
Reply to  jaakkokateenkorva
November 12, 2017 1:07 pm

You TRULY ARE IGNORANT !!!

AndyG55
Reply to  jaakkokateenkorva
November 12, 2017 1:09 pm

RMS is a magnitude.

It does not have a + or –

Where did you NOT learn your maths???

You seem to have a very simplistic comprehension of what anything actually means.

ZERO sense of any actual physical understanding.

AndyG55
Reply to  jaakkokateenkorva
November 12, 2017 1:11 pm

Only when a number is given a direction does it become + or –

RMS can be either

AndyG55
Reply to  jaakkokateenkorva
November 12, 2017 1:25 pm

You put a trend line through some numbers, then calculate the RMS error.

Are you saying the RMS error is always +ve and thus all errors are on one side of the trend line?

You truly are a mathematical INEPT !!

AndyG55
Reply to  jaakkokateenkorva
November 12, 2017 1:44 pm

In this context, RMS error (magnitude, ∴ no sign) exists in BOTH the + and – directions.

Get over it, and try to get some basic physical comprehension of what you are talking about.

Nick Stokes
Reply to  jaakkokateenkorva
November 12, 2017 1:47 pm

The classic RMS is standard deviation. It is always positive.

I would invite anyone to find any reputable publication that shows a negative or ± RMS. Anywhere.

Nick Stokes
Reply to  jaakkokateenkorva
November 12, 2017 2:13 pm

Is it a coincidence that the only people confused about the sign of RMS (apart from Pat Frank) come from the land of AC/DC?

HAS
Reply to  jaakkokateenkorva
November 12, 2017 6:32 pm

Nick Stokes in response to my ‘but you and I both knew he was giving us a list of two separate issues [RMS and ±]’:

“Really? What are they?”

Now back on the earlier WUWT thread you were donkey deep in a discussion with Pat Frank about RMS calibration error statistics, and how this was not an energetic forcing statistic. The point he repeats above.

He separately makes the point that “±” isn’t “+”.

Now I could follow that, and I wasn’t even party to the earlier conversation.

As I said, “Nice try Nick …”

Can you get back to the substance?

Nick Stokes
Reply to  jaakkokateenkorva
November 12, 2017 7:09 pm

HAS,
“He separately makes the point that “±” isn’t “+”.”
Separately? With respect to what, if not rmse?

As to substance, the key is not so much his addition of a ± to the 4 W/m2 found in L&H, but the extra adornment of a /year in the units. This is on the basis that if you average something over 20 years, the average acquires a /year unit. I think that is nonsense, but a critical immediate point is: why /year? Why not /month or /decade? Any ideas?

HAS
Reply to  jaakkokateenkorva
November 12, 2017 8:28 pm

He separately makes the points that

(1) there is a difference between applying a + to an RMSE and applying a +/-
(2) there is a difference between the way you should treat calibration errors and forcing statistics

The substance of the first comes down to how you are using the statistic, and the second is how you treat an annual error if you are propagating errors in a simulation. (The per-year bit is trivial; the statistic is derived from annual means.)

Reply to  jaakkokateenkorva
November 12, 2017 10:28 pm

Nick’s purpose here seems to be mostly drowning the fish. Pat provided evidence for a conflict-of-interest case. As a result, Copernicus Publications has about as much minority-interest value as The Watchtower.

AndyG55
Reply to  jaakkokateenkorva
November 13, 2017 1:05 am

Nick. YOU ARE TOTALLY and UTTERLY WRONG

“The classic RMS is standard deviation.”

Which is ALWAYS “±” about the mean. Thank you for CONFIRMING THAT POINT.

You seem to be bathing in your IGNORANCE of the difference between magnitude (which is signless) and applied direction.

I can only assume you skipped basic mathematics in high school.

Nick Stokes
Reply to  jaakkokateenkorva
November 13, 2017 1:08 am

“Which is ALWAYS “±” about the mean”
Ask the 6σ people how ± they feel.

Nick Stokes
Reply to  jaakkokateenkorva
November 13, 2017 1:13 am

“The per year bit is trivial”
It isn’t trivial. It is the crux of the case. The units given there determine how many steps per unit time are taken in the random walk, and hence how fast the errors grow, in Pat’s model. It determines the time scale in a propagation model that otherwise has none.

RW
Reply to  jaakkokateenkorva
November 13, 2017 1:25 am

RMS and RMSE are +/- because they are merely sample standard deviations. Just like all standard deviations, they become meaningful once an assumption is adopted concerning the nature of the distribution of the underlying population.

If the population under study is distributed Normal, 68% of the population will fall within +/- 1 standard deviation of the mean. Given the article quote Nick provided, +/- 4 reflects an estimate of the population standard deviation of the predicted variable, which in this case is the observed values. The RMSE applies to each value of the predictor variable, which in this case is the mean predicted value from a bunch of different models.

In the propagation formulas I have seen, one would divide the +/-4 by the root of N. If the values are annual scores, then N would be the number of years. This resulting standard error would then plug into the formula for error propagation. The formula ends up depending on the equation or function you are using to generate new numbers (i.e. Pat Frank’s neat linear model that closely approximates the climate model output).
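A minimal Python sketch of the recipe RW describes, for concreteness. The +/-4 W/m^2 is the figure quoted in this thread; the sensitivity and N below are made-up placeholders, and whether dividing by root-N is the right move here is part of what is in dispute.

import math

rmse = 4.0    # the +/-4 W/m^2 calibration statistic quoted in the thread
n = 20        # hypothetical number of annual means behind the statistic
lam = 0.42    # hypothetical linear sensitivity, K per W/m^2 (placeholder)

se = rmse / math.sqrt(n)     # RW's step 1: standard error of the mean
sigma_T = lam * se           # RW's step 2: push it through a linear emulator
print(round(se, 2), round(sigma_T, 2))   # +/- values, in W/m^2 and K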

Nick Stokes
Reply to  jaakkokateenkorva
November 13, 2017 1:42 am

“RMS and RMSE are +/- because they are merely sample standard deviations”

So how do you do a one-tailed test if σ is ±1?

AndyG55
Reply to  jaakkokateenkorva
November 13, 2017 2:07 am

“Ask the 6σ people how ± they feel.”

That is probably the WEAKEST, most moronic thing you have said all post.

Maths isn’t about “feelings”, Nick.

Seems you are destined to stay in the -3σ group.

You keep digging your ignorance deeper and deeper.

AndyG55
Reply to  jaakkokateenkorva
November 13, 2017 2:13 am

“So how do you do a one-tailed test “

You really are showing your ignorance, Nick

You CHOOSE which of the + or – tails you wish to test.

But according to you, a two-tailed test cannot exist, because σ is only positive.

And you are telling everyone that -σ does not exist.

You are getting DUMBER and DUMBER, Nick.. heading rapidly for DUMBEST !!!

Nick Stokes
Reply to  jaakkokateenkorva
November 13, 2017 2:16 am

“Maths isn’t about “feelings” NIck.”
But 6σ is.

Admin
Reply to  Nick Stokes
November 12, 2017 2:11 am

Pat’s point is very simple. The error produced at each model iteration step acts as a step in a random walk. Pat has calculated the approximate magnitude of each step of that random walk, and used that calculation to try to determine how the random walk will cause projected results to wander away from reality over time. The result is the joke-size range of possible projections Pat has produced.
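As a toy illustration of that reading (a sketch only; the step size and step count below are arbitrary, not Pat’s numbers): for accumulated independent +/- steps, the spread of outcomes grows with the square root of the number of steps.

import numpy as np

rng = np.random.default_rng(0)
sigma_step = 1.0                 # arbitrary per-iteration error magnitude
n_steps, n_runs = 100, 10000

# Accumulate independent +/- errors over n_steps iterations, many runs.
walks = rng.normal(0.0, sigma_step, size=(n_runs, n_steps)).cumsum(axis=1)

print(round(walks[:, -1].std(), 2))             # spread after 100 steps, ~10
print(round(np.sqrt(n_steps) * sigma_step, 2))  # sqrt(n) * step size = 10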

Count to 10
Reply to  Eric Worrall
November 12, 2017 9:04 am

I’ve been having trouble following the argument here, but if you are right, that clarifies things. The original model has an uncertain “forcing” that is fixed as a constant at the beginning of each run (or “realization”). The author’s criticism is that the forcing should be re-randomized within its uncertainty each year. The reviewer’s criticism of the author is that a year is an arbitrary time to perform that randomization.
The original model assumes the uncertainty in the forcing is our ignorance of the true value of a physical observable that does not change over time, while the author interprets that uncertainty as a year to year variation in the physical observable. This would actually be an entirely valid reason to reject the paper, since these two things are completely different. To publish the paper, the author would have to justify the year to year variation separately, because his whole argument depends on that interpretation.
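Count to 10’s two interpretations are easy to put side by side numerically. A sketch with made-up magnitudes: treat the error as ignorance of a fixed constant, drawn once per run, and the spread across runs stays flat over time; re-draw and accumulate it every year and the spread grows as the square root of the number of years.

import numpy as np

rng = np.random.default_rng(1)
sigma, n_years, n_runs = 4.0, 100, 20000

# Interpretation 1: an unknown but time-invariant offset, drawn once per run.
offsets = rng.normal(0.0, sigma, size=(n_runs, 1))
constant = np.repeat(offsets, n_years, axis=1)    # same offset every year

# Interpretation 2: a fresh independent error every year, accumulating.
walk = rng.normal(0.0, sigma, size=(n_runs, n_years)).cumsum(axis=1)

print(round(constant[:, -1].std(), 1))  # ~4.0 at year 100: spread never grows
print(round(walk[:, -1].std(), 1))      # ~40.0 at year 100: grows as sqrt(n)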

ferdberple
Reply to  Eric Worrall
November 12, 2017 9:28 am

Count to 10 on November 12, 2017 at 9:04 am
==========
Yes. The error propagation is cycle-dependent and thus becomes time-dependent, because each cycle has non-zero finite time.

As such the error would be like compound interest: 4% annual compounded daily, for a model that cycles once per day.

ferdberple
Reply to  Eric Worrall
November 12, 2017 9:32 am

Or, more likely, 4% compounded daily, ignoring the annual rate completely.
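ferdberple’s compound-interest analogy also shows why the choice of compounding interval is the crux. A toy Python sketch (the 4% figure is his analogy, nothing from the models): a fixed per-cycle error applied annually versus daily gives wildly different growth over one year.

rate = 0.04                      # per-cycle "error rate" in the analogy
for cycles_per_year, label in ((1, "annual"), (365, "daily ")):
    growth = (1 + rate) ** cycles_per_year   # growth factor over one year
    print(label, round(growth, 2))
# annual 1.04  vs  daily ~1.6 million -- the assumed cycle time dominates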

Admin
Reply to  Eric Worrall
November 12, 2017 7:57 pm

Count to 10, I checked with Pat; he is happy with my use of the phrase “random walk”, though he cautions not to assume the error averages to zero over time.

F. Leghorn
Reply to  Nick Stokes
November 12, 2017 3:37 am

I don’t know about the other journals, but this one stinks to high heaven. The arrogance and bias in Annan’s and Hargreaves’ words come through quite clearly.

Reply to  Nick Stokes
November 12, 2017 4:10 am

Even great papers were rejected by peer review: https://www.sciencealert.com/these-8-papers-were-rejected-before-going-on-to-win-the-nobel-prize Great papers, in real sciences. The situation is incomprehensibly worse in cargo-cult sciences, in post-normal and post-modern sciences.

Yeah, I know, pseudo-scientists will think of their favorite pseudo-science as being as scientific as, let’s say, physics, and will think of their 95% ‘certainty’ as equal to the five sigmas of, let’s say, particle physics. And of course they will think that statisticulation and a little pretend rational thinking, using logical fallacies in their ‘inferences’ or simply the principle of explosion in their ‘scientific’ method, is enough to pass as a science.

What would one expect when even in physics there are attacks against falsifiability nowadays?

Reply to  Adrian Roman
November 12, 2017 10:10 am

This is why engineers make better scientists. In engineering, not only do you have to understand the science, you need to understand it well enough for whatever it is you are engineering to actually work.

I Came I Saw I Left
Reply to  Nick Stokes
November 12, 2017 5:38 am

“That’s seven journals, now. And it must be about 30 reviewers. One would have to entertain the possibility that they are right and Pat Frank is wrong.”

That’s certainly possible, but IMO to be considered even reasonably intelligent one would also have to entertain the possibility that there is systemic corruption in the industry. And that the beneficiaries of said corruption would take a negative view of their profiteering being threatened by exposure.

Sheri
Reply to  Nick Stokes
November 12, 2017 8:50 am

Until someone points out the errors in the paper, I’ll go with Pat Frank is right.

Latitude
Reply to  Nick Stokes
November 12, 2017 8:56 am

“That’s seven journals, now. And it must be about 30 reviewers. One would have to entertain the possibility that they are right and Pat Frank is wrong.”

I suppose he could just go to some snarky low-rent overseas journal and pay them to publish…..

Louis Hooffstetter
Reply to  Nick Stokes
November 12, 2017 9:17 am

Nick: Dr. Richard Feynman on how science works:

“In general, we look for a new law by the following process: First, we guess it, no, don’t laugh, that’s really true. Then we compute the consequences of the guess, to see what, if this is right, if this law we guess is right, it would imply, and then we compare the computation results to nature, or we say compare to experiment or experience, compare it directly with observations to see if it works. If it disagrees with experiment, it’s wrong.

In that simple statement is the key to science.

It doesn’t make any difference how beautiful your guess is, it doesn’t make any difference how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it’s wrong. That’s all there is to it.”

For more than three decades, the projections of climate models have been negated by reality. They have proven to be less accurate than a monkey randomly flinging his poop at a wall of climate projections. So Dr. Frank is absolutely correct in saying “None of these people are scientists. None of them know how to think scientifically.” Having an advanced degree does not make you a scientist. To be a scientist, you must follow the scientific method. Climate modelers obviously don’t.

paqyfelyc
Reply to  Louis Hooffstetter
November 12, 2017 9:47 am

+1

Nick Stokes
Reply to  Louis Hooffstetter
November 12, 2017 9:55 am

“Having an advanced degree does not make you a scientist.”
Showing a Youtube of Feynman doesn’t either.

Stonyground
Reply to  Louis Hooffstetter
November 12, 2017 10:18 am

I have often wondered what would happen if these computer model projections were run with the alleged warming effect of CO2 removed. Let us assume that the warming effect of more CO2 in the atmosphere is non-existent and run the models on that basis, just to see what happens. Surely it would be easy and inexpensive to do, and if the projections produced were closer to reality that would be quite a significant discovery. My guess is that they have already tried it and are too terrified to let on. “You know all that money that you spent trying to reduce emissions because we said that it was a massive problem? Oh well, our bad, turns out it isn’t a problem after all, not even a little one.”

Latitude
Reply to  Louis Hooffstetter
November 12, 2017 10:20 am

..but the advanced degree helps in recognizing the significance

Nick Stokes
Reply to  Louis Hooffstetter
November 12, 2017 10:26 am

“I have often wondered what would happen if these computer model projections were run with the alleged warming effect of CO2 removed.”
That is usually the first thing they do. It’s called a control run. Attribution of CO2 effect is worked out as the difference with CO2 present and absent.

jclarke341
Reply to  Louis Hooffstetter
November 12, 2017 10:33 am

Showing a youtube video of Feynman doesn’t make one a scientist, but it at least shows that you acknowledge what science is. Doing something other than what Feynman describes in the video and proclaiming it is science, shows that you don’t know what science is, or you do know, and are lying.

jclarke341
Reply to  Louis Hooffstetter
November 12, 2017 10:51 am

“I have often wondered what would happen if these computer model projections were run with the alleged warming effect of CO2 removed.”

They do almost nothing. With the CO2 levels unchanging, the models are quasi-stationary. There are no significant changes in climate ever if CO2 does not change. That fact alone falsifies the models. CO2 levels have been relatively constant for the last 5 million years, while climate has been significantly more variable than CO2. All physical evidence (science) indicates that the minor changes in CO2 levels that have occurred over the last 5 million years have been driven by temperature, not the other way around.

Latitude
Reply to  Louis Hooffstetter
November 12, 2017 11:11 am

^ what jclarke just said…..+1

old engineer
Reply to  Louis Hooffstetter
November 12, 2017 8:34 pm

On Stonyground’s comment about running the model without CO2: Nick Stokes is right. The control run is most often the first thing that is done. For instance, see Figure 1 of Hanson’s 1988 paper. It is very interesting, by the way, as it shows cooling from about 2010 to 2040.

old engineer
Reply to  Louis Hooffstetter
November 12, 2017 8:37 pm

Of course that should be J. Hansen’s 1988 paper.

Reply to  Nick Stokes
November 12, 2017 11:05 am

As Roy Spencer puts it: “95% of the models agree. The measurements must be wrong.”

Clyde Spencer
Reply to  Nick Stokes
November 12, 2017 11:15 am

Nick Stokes,

By your own admission, numbers count. You routinely get a drubbing for your comments and analysis on this blog. You should entertain the possibility that the commenters here are right and that you are wrong.

Another possibility is that the conflict of interest is so entrenched in the lucrative publishing business that they have a vested interest in keeping the gates closed to gadflies.

Nick Stokes
Reply to  Clyde Spencer
November 12, 2017 12:48 pm

“Another possibility is that the conflict of interest is so entrenched “
Even Ronan Connolly?

AZeeman
Reply to  Nick Stokes
November 12, 2017 1:41 pm

Every religion has its “peer reviewed” holy book. Every religion has an overwhelming majority consensus that its holy book is correct and also that the religion itself is correct. But all the religions contradict each other. That’s why they are called religions and not science. Religions can neither be proven right nor wrong.
Science is based on hard numbers, facts and predictions that have the means to be proven wrong. Peer review is only a form of error correction; it doesn’t prove that a paper’s thesis is right or wrong, only that the proof given is consistent with the known laws of nature and is logically consistent. It’s not an appeal to authority. Just because an eminent scientist hasn’t found a mistake in a paper doesn’t make the paper correct. Computer programs are regularly peer reviewed, yet programs still crash and are hacked. A million tests can be run successfully only to have a failure from a single wrong bit in the input data.
Climate “science” does not have a mathematical basis, ignores the laws of thermodynamics, and relies only on consensus and various versions of its holy book, the IPCC report.
No climate scientist can come up with a proof of how CO2 will affect temperature using known natural laws. No climate scientist can come up with an average global temperature using known natural laws. It’s not even possible for a climate “scientist” to demonstrate the effects of CO2 on temperature using experimental techniques without resorting to fraud.
Until climate “science” has a solid mathematical basis like physics, it will remain forever a religion, forever argued over and forever unprovable, but with lots of peer review and a very robust consensus. Praise be to the IPCC and death to the heretics.

Reply to  Nick Stokes
November 14, 2017 11:47 am

I have entertained the idea Frank is wrong, and yet his reasoning seems far more sound than the repeated insistence by modellers that error has no time dimension and does not propagate. Like Pat says, this is not physics. The models might be mathematically interesting, even useful in some contexts, but they are not physics.

And as the old saying goes, what makes physics more interesting than other pursuits (like, say, abstract mathematics) is that physics actually describes the world around us.

It’s almost as though modellers have enormous incentives to ignore/misunderstand the problem…

Reply to  talldave2
November 16, 2017 7:27 pm

talldave2, see the email below I received from Dr. Didier Roche, who was assigned by the journal to re-evaluate Dr. Annan’s decision to deny review.

It is a study in ‘find some reason, any reason, to reject.’

Nick Stokes
November 12, 2017 2:31 am

“Today’s offering is a morality tale about the clash of honesty with self-interest, of integrity with income, and of arrogance with ignorance.”

I noted that one of the reviewers of the paper for Earth Space Sciences was none other than Dr Ronan Connolly. Pat Frank made the reviews available in the previous post; the link is here. Dr Connolly made a point of identifying himself. Dr Connolly, an independent scientist, will be known to WUWT readers through his frequent contributions here, often co-authored with Andy May; the latest was in August. He gave a relatively sympathetic review, citing Koutsoyiannis, and Willie Soon. He thought radical changes were needed, and was somewhat doubtful that they would be made, but he said the paper might be publishable if they were.

The response at that stage was firstly a blast for not remaining anonymous, and then the usual listing of reviewer errors, eg
“The reviewer’s recommended major revisions are misconceived and, if followed, would leave nothing publishable”
And so, of course no changes were made.

Dr Connolly was not impressed. He recommended rejection, noting among other things:
“Despite this, the author has decided to resubmit his rejected manuscript to ESS essentially unaltered, albeit with some small changes addressing a few minor technical points and typos identified by the reviewers. Instead of attempting to modify his manuscript in light of the major criticisms made by all five reviewers (including myself), the author has chosen to write lengthy responses to each of the reviews claiming that they: “[have] no critical merit” (“Review #1”); “[are]…misconstrued… mistaken…[and] confused” (“Review #3” and “Review #4”); “fundamentally misguided” and unable to “[survive] critical scrutiny” (“Review #5”); as well as involving “the mistake[s] of a naive college freshman” (“Review #6”).

Well, Dr Frank was not impressed either. He wrote in response:
“This review is no more than a disgraceful polemic. The editor would have done better to exclude it on the grounds of bringing ill repute to the Journal. “
A few of his summary points:
“Summary Response:
This review:
1. Is analytically vacuous throughout
2. Inadvertently validated the manuscript study (items 7.2.2, 7.3.2)
3. Was expressly dishonest (items 1.3.1, 1.3.2, 1.4, 1.5.1, 1.5.2, 2.1, 3.2.1, 5.2.1, 6.9, 6.10.2.2, 9.2.4, 12.8, and 12.9)”

etc (11 rather similar points in all)

Then a detailed list starting with
“Unnecessary and shallow introductory complaints are deleted. However, certain points of critical failure or dishonesty require attention.”

And the list enumerated many points of Dr Connolly’s alleged dishonesty.

Now Dr Connolly is indeed an independent scientist, who certainly has no financial interest in trying to suppress Dr Frank’s theories. Yet he is bundled in with the rest.

And FWIW, I agree with Dr Connolly.

Admin
Reply to  Nick Stokes
November 12, 2017 3:29 am

Like I said, Pat’s point is simple. The systemic errors are effectively a random walk, in terms of our ability to predict them. Therefore the errors accumulate over time. Pat estimated the approximate magnitude of the random walk step at each iteration of the calculation, and used the model calculation itself to determine how far the projection could drift from the correct value because of this random walk of errors. Pat’s calculations show that the drift occurs very rapidly – that the models are unphysical.

Reply to  Eric Worrall
November 14, 2017 12:56 am

Errors in such a non-linear system do not accumulate as steps in a random walk. They are amplified and explode exponentially.

Gerry Parker
Reply to  Nick Stokes
November 12, 2017 4:29 am

And yet, the models do not correctly predict the future.

Nick Stokes
Reply to  Gerry Parker
November 12, 2017 7:03 am

How do you know?

Mark T
Reply to  Gerry Parker
November 12, 2017 7:09 am

Because they haven’t.

Mark T
Reply to  Gerry Parker
November 12, 2017 7:15 am

Ever.

Reply to  Gerry Parker
November 12, 2017 8:38 am

But they will in the future. You need to have faith.
That is the error in this paper that prevents its publication.
It deals with physics, when climate science is a branch of theology.

Sheri
Reply to  Gerry Parker
November 12, 2017 8:54 am

We have no idea if the models correctly predict the future. To date, they have failed to do so. We haven’t seen the future, so we don’t know if the models are right or not. That being said, we also have no reason to heed the models. Random modeling would probably produce equally accurate results. Without predictability, the models do us no good.

Latitude
Reply to  Gerry Parker
November 12, 2017 8:58 am

How do you know?….

…by their own admission they have to leave out too many things that are not “understood”

I Came I Saw I Left
Reply to  Gerry Parker
November 12, 2017 10:10 am

Actually, we have an excellent idea of whether the models correctly predict the future. The future is now – for their many prior predictions that have not happened up to now. That destroys any confidence in their ability to predict beyond this point.

Reply to  Gerry Parker
November 12, 2017 11:38 am

Heh, not only is the future now, the future was also yesterday, given the track record.

One very glaring assumption, which has not been shown to be true at current conditions, is that the heating (increased internal kinetic energy) from shining a strong light on a bottled gas in a lab will carry over to the far less constrained open atmosphere.

Reply to  Gerry Parker
November 14, 2017 11:49 am

“How do you know we’re wrong?” seems to be the basis of the entire multi-trillion-dollar global policy consulting racket.

Well, Pat just explained how. Welcome to physics.

Reply to  Nick Stokes
November 12, 2017 11:35 am

“I have often wondered what would happen if these computer model projections were run with the alleged warming effect of CO2 removed.”
That is usually the first thing they do. It’s called a control run. Attribution of CO2 effect is worked out as the difference with CO2 present and absent.”

Actually Nick, you’ve identified the exact problem. Since we don’t have a smoking gun to show what else it could be……….it must all be from CO2. Since no other factors can be identified and represented in the model equations (from natural processes, for instance, that we know with certainty have had a powerful influence in the past: the Medieval Warm Period and Little Ice Age, for instance), the only way to get warming is to use not just CO2 (if we used just CO2 and its logarithmic effect on temperatures as it increases, that would not do it) but additional positive feedback equations from the increase in H2O……..based on a speculative theory that exists because we don’t know, or at least can’t model, the true effect of anything natural.

We can’t even correctly model the projected positive feedback from increasing H2O, which includes more low clouds. These block the more powerful SW radiation of the sun, especially when the sun angle is high in the sky (and has the most heating power). So a powerful negative feedback carries tremendous uncertainty in the models.

Instead of knowing the real reason for all the warming, we find the right equations, using CO2, then add the right positive feedback equations to get the desired result.
Comparing that to control models that have variations that don’t yield as much warming does not tell you that your equations accurately represent the actual processes in your simulation.

If I have a known answer to a simple addition problem that comes to 100, but don’t know the actual numbers which were added to yield 100, I can make up whatever numbers I want to get to 100. They don’t have to represent the real ones.
Maybe a climate modeler has some clues on some of the real numbers/equations that can be justified (like the physics of greenhouse gas warming from CO2). When that gets them part of the way to the solution, climate modelers look for additional equations (like those that represent positive feedback from H2O) to amplify the warming. When the solution(s) eventually match up to the desired warming……….that is not verification of anything except creativity in using mathematical equations that result in X amount of warming.

How can one have a 95% certainty level for a specific range with modeled data projections when much of this is just a guessing game?

And increase the certainty level after the models prove to be too warm?
A big element of certainty that we know about climate models so far is this: they have been too warm but are not being reconciled in a timely fashion. Instead, mainstream climate science defends the indefensible.
Change the equations (guesses) so that the global temperatures actually track close to the global climate model ensemble mean much of the time, with close to equal time below and above it.

Right now, the only time the global temperature can get to the model mean is at the top of an El Nino spike. Ignoring this is blatant bias and using the model for something other than science.

gwan
Reply to  Mike Maguire
November 12, 2017 9:00 pm

+10

Gerontius
Reply to  Nick Stokes
November 12, 2017 3:19 pm

Ronan Connolly, is he the one who proposed a new phase in the atmosphere? On the whole I think the other member of the Connolly clan, yes Billy, is the one who talks the most sense.

terra non firma
November 12, 2017 2:50 am

Climate modelers make one overwhelming error: they do not realize that complex systems cannot be resolved with merely complicated tools. They ought to study complexity theory.

Ed Zuiderwijk
November 12, 2017 3:09 am

‘None of these people are scientists’.

The word you are looking for is ‘quacks’.

Robert B
November 12, 2017 3:15 am

I had a similar problem. I couldn’t publish, because the editor wanted to keep a grub happy. Nothing to do with climate science or money, just personal.
When I pointed out how silly the objections were, including that I had in fact considered something I had supposedly neglected (there was a section with a very explicit title on exactly that), the editor gave it back to him to review. Rejected because of grammatical errors like writing “has been” instead of “had been”.

Admin
November 12, 2017 3:37 am

The errors are effectively a random walk. Over time the random walk of errors causes the model to drift away from reality. Pat demonstrated that the period during which the models could be considered reliable is impractically short – the accumulation of errors rapidly renders the projection useless.

What is so difficult to understand?

There is no fudge factor which can be applied to correct the error, because we can’t predict what the error will be. The best that can be done is to determine how quickly the error undermines the usefulness of the model projections – which in this case is almost immediately.

jclarke341
Reply to  Eric Worrall
November 12, 2017 11:32 am

“…the accumulation of errors rapidly renders the projection useless.” Yes. This was the main criticism of the models back in the 80’s. It is still the main criticism of the models today, and is basically the essence of Dr. Frank’s paper. But climate science took a turn away from science back in the 80s, and created a self-fulfilling prophecy, pretending it was science.

Nick Stokes summed it up nicely above, when he said: “It has long been understood that chaotic processes cannot for long be determined by their initial conditions. So climate modelling focusses on the attractor, which is approached independently of starting point.”

So what is the ‘attractor’ and what does it mean to ‘focus’ on it? It is none other than the CAGW theory itself! It is the assumed climate sensitivity of atmospheric temperatures to increasing CO2! Focusing on the assumed climate sensitivity simply means determining the temperature increase from our assumed energy increase, with all else being quasi-equal. The climate sensitivity was tuned by selecting a period of warming and assuming that the warming was entirely man-made. No other period of time would lead to such a high climate sensitivity. In fact, a similar time period immediately preceding the one selected, would have given a climate sensitivity of zero, or even negative.

Natural climate variability in the models is limited to volcanoes and the tiny changes in total solar irradiance. The calculation could be done on the back of an envelope. The models will always reach that same answer no matter what, since that is what they focus on. Exactly how they reach it can vary depending on the tweaks and nuances in the individual models, but they are all focusing on the ‘attractor’, and will get to it sooner or later. They cannot do otherwise.

The models are programed to reach the initial assumption. When they do, they are used as proof that the initial assumption was correct. What would you call that? I certainly wouldn’t call it science.

Latitude
Reply to  jclarke341
November 12, 2017 1:22 pm

If they had used the previous 30 years….they would have shown temps falling

noaaprogrammer
Reply to  jclarke341
November 12, 2017 10:32 pm

I thought that chaotic climate systems could have more than one attractor. If GCMs only consider one attractor, doesn’t that cripple their ability to have free rein in modeling reality?

jclarke341
Reply to  jclarke341
November 13, 2017 7:32 am

noaaprogrammer – you are assuming that the goal of the models is to model reality. That is not the case. The goal of the models is to model a man-made global warming crisis. The IPCC was tasked to find the human impact on climate. The IPCC has virtually ignored natural climate variability, aside from the most rudimentary understanding of it. While we have tons of historical and geological evidence that natural climate variability is robust and very significant, there is no attempt to understand or quantify it in mainstream climate science. In fact, there has been a significant effort to deny that it even exists.

One cannot begin to model reality if you refuse to even look at it.

Count to 10
Reply to  Eric Worrall
November 12, 2017 12:09 pm

So, it looks like the problem is that the “error” he is quoting reflects the measurement uncertainty in a quantity that does not vary over time (in the model), but he is propagating it as if it were the yearly variation of a well-known quantity. His paper is basically a lot of mathematical extrapolation from the single assertion that the uncertainty of a constant is actually the yearly variation of a parameter.
The bulk of his paper should actually be about justifying this assertion, and no journal should accept the paper without that justification.

Admin
Reply to  Count to 10
November 12, 2017 5:26 pm

The paper contains a justification of why the error is unpredictable and systemic. I confirmed with Pat that the error functions as a random walk in terms of the model’s ability to make reliable predictions. At each iteration of the model, the error introduces a random-walk drift. This drift then forms part of the input for the next iteration.

The fact that model hindcasting sort of works despite the error simply demonstrates the models have been fitted to past data. The measured magnitude of the error and its impact on projected values means future projections rapidly tend to nonsense.

Louis Hooffstetter
Reply to  Count to 10
November 12, 2017 7:04 pm

“The fact that model hindcasting sort of works despite the error simply demonstrates the models have been fitted to past data.”

Yes, and that’s called CHEATING! Yet I’m amazed at how proud these witch doctors are of their ability to cheat!

Google ‘Climate Model Hindcast’ and check out how many papers trumpet the fact that climate models can spit out the right number when the modellers know what the number is supposed to be.

RW
Reply to  Eric Worrall
November 13, 2017 1:37 am

Eric, so the errors are random because the target is moving? So some years have more global cloud cover than others? Correct?

Twobob
November 12, 2017 4:25 am

I don’t suppose it’s possible that Mr Trump could publish it?

John Bills
November 12, 2017 4:25 am

I sure think Nick Stokes is a quack.
The state of climate science in 2017:
https://climateaudit.org/2017/07/11/pages2017-new-cherry-pie/

Bill Illis
November 12, 2017 4:28 am

Even the CERES satellite is missing about 4 W/m2 of energy flows somewhere.

They just adjust ALL the numbers until they get something like the assumed annual energy accumulation rate (well, they use Hansen’s 2005 estimate of 0.85 W/m2/year, which is not even the real measured number, which is about 0.6 W/m2/year).

In climate science, you just adjust everything until it gives you what you want. It doesn’t have to reflect reality or even a known measured number, just whatever you want it to be.

https://ceres.larc.nasa.gov/science_information.php?page=EBAFbalance

paqyfelyc
November 12, 2017 4:40 am

A ranting article that is too long, because it tries to address too many issues.
“Conflict of interest”? Well, peer review is all about asking permission of people who are already deeply entrenched in the field, have reason to think they know (like being asked to teach), and obviously will find YOU are mistaken if you try to show them wrong. You are the pupil here. Ever tried to show your teacher wrong? It only works with the best of the best; for all practical purposes, it never works.

renbutler
Reply to  paqyfelyc
November 12, 2017 9:25 am

That sounds like the very basis of the lie “the science is settled.”

paqyfelyc
Reply to  renbutler
November 12, 2017 9:54 am

Not exactly. The peer-review system allows incremental additions to knowledge; it works provided the basis is solid and the whole edifice of the science is known to be “work in progress”, so peers welcome any addition (no threat to them).
Unfortunately CAGW doesn’t have a solid basis, and the “the science is settled” meme implies no work is needed anymore, so …

Louis Hooffstetter
Reply to  renbutler
November 12, 2017 7:21 pm

Peer-review is like Socialism in that it sounds like a great idea but turns out to be a complete disaster when it is actually implemented.

noaaprogrammer
Reply to  renbutler
November 12, 2017 10:48 pm

@ paqyfelyc: Even in areas that have a solid foundation, like mathematics, politics is involved in who gets published, who gets appointments, etc. Take for example Kronecker’s rejection of Georg Cantor’s revolutionary ideas in set theory, diagonalization, hierarchies of infinities, etc., all of which became mainstream.

paqyfelyc
Reply to  renbutler
November 13, 2017 3:43 am

@ Louis Hooffstetter
I agree. It lacks a destructive process: a review system where you don’t get points (fame etc.) when you get citations, but when you destroy peers’ papers for being wrong.
@ noaaprogrammer
I agree, in fact. Peer review is a filter, and as such it often turns out “false positives” (accepting bullshit or unworthy trivial results) and “false negatives” (rejecting good stuff, often for petty reasons).

markdwilkinson
November 12, 2017 5:34 am

Dr. Frank,

Can I suggest that you submit the paper to a post-publication-review journal such as PeerJ? They take papers in the Environmental Sciences. They are a well-respected journal (at least in my field), with a respectable impact factor. The paper is published as a preprint, and then allows the review process to be conducted openly and publicly, as well as encouraging wider public comment both on the paper and the reviews.

If your problem is with the reviewers, then maybe let the reviewers be reviewed at the same time as you are?!

Just a thought…

(I declare I have no CoI in this message)

Gerald Machnee
November 12, 2017 5:51 am

Nick:
**“None of these people are scientists. None of them know how to think scientifically.”
That’s seven journals, now. And it must be about 30 reviewers. One would have to entertain the possibility that they are right and Pat Frank is wrong.**
The Hockey Team and its pal-review network are large.

Tim
November 12, 2017 5:59 am

No matter whose scientific arguments are possibly correct, the one overriding factor here is an obvious conflict of interest from the top. To me, this known quantity alone makes any judgments inevitably biased and therefore invalid.

mikewaite
November 12, 2017 6:08 am

I was interested enough in the origin of the journals and the EGU (European Geosciences Union) to look a bit further, and found that a number of open-access journals are published under their aegis. I spent some time (which would have been better employed tidying up the garden) browsing through some of the articles in “Climate of the Past” and “Nonlinear Processes in Geophysics”, from which I found that the surface mass balance of ice in the Antarctic has apparently been increasing in recent years (well, to 2010), and that hurricane statistics in the Gulf and surrounding ocean are best described as being “on the edge of chaos”. I don’t know what that means, but I am pretty sure it is not the model that Al Gore and the BBC are putting out.
One article that rather undercut the image of climate scientists as being motivated by less-than-honest or self-interested motives is one which describes the cyclical changes in a simple (the authors call it a “toy model”) ocean-plus-vegetated-land model. Something like sawtoothed ice ages results. However, in their conclusions the authors are refreshingly and rather charmingly self-effacing:

“Our paper is only trying to make a case for the possibility of vegetation playing a more important role than contemplated heretofore and does not claim in the least to have definitively proven that this is so. A similar argument about local versus global effects has been made with respect to the oceans’ thermohaline circulation. Recall that the Stommel (1961) paper – much quoted recently in the context of multiple equilibria and symmetry breaking in the meridional overturning of the Atlantic or even global ocean – was originally written to explain seasonal changes in the overturning of “large semi-enclosed seas (e.g. Mediterranean and Red Seas)”; see, for instance, Dijkstra and Ghil (2005). There is no better way of concluding this broader assessment of our toy model’s results than by citing Karl Popper: “Science may be described as the art of systematic oversimplification” (Popper, 1982). It might be well to remember this statement, given an increasing tendency in the climate sciences to rely more and more on GCMs, to the detriment of simpler models in the hierarchy.”
https://www.nonlin-processes-geophys.net/22/275/2015/

“Science as the art of systematic oversimplification” – another Popper phrase to add to those that people here like to quote.

R. Shearer
November 12, 2017 6:32 am

Is a $10 million lawsuit in order?

November 12, 2017 6:40 am

Pat

I’ll comment here rather than on your recent post about your paper. The main contention in it is that the models are effectively mathematically equivalent to a linear sum of forcings that is then iterated. There may be a few noise terms in there, but this is the point in general. From here it’s easy to show that the uncertainties will compound, i.e. the growing systematic error envelope.

So the question is: can this equivalence be shown from first principles? As in, is it in the design itself, rather than just being similar in form? Because, as someone who has built models before and dealt with modellers, there may be a pedantic point to be made that the models are not linear sums even though they produce behaviour like it.

It’s a nuanced point, but it’s also a niggling one that means they can easily dismiss what you say.

jim
Reply to  mickyhcorbett75
November 12, 2017 7:03 am

The design replicates linear sums quite deliberately. As I have said before, replace all the variables with the timetables of London buses, leaving the forcing intact, and you end up with the same answers (and errors).
It’s numerology, GIGO, call it what you want; it’s nothing to do with science.
And people like Nick are so proud/bemused by their ‘shiny complicated models’ that they don’t realise it. And those that do stay dumb, because their noses are in the trough.

Reply to  jim
November 12, 2017 8:56 am

Jim

I went and searched for “climate model mathematics” and came across:

MATHEMATICAL MODELS OF LIFE SUPPORT SYSTEMS – Vol. I – Mathematical Models for Prediction of Climate – Dymnikov V.P.

Equation (1) in this short article (which is taken from a book) says that most if not all climate models can be reduced to a canonical form:

∂φ/∂t + K(φ)φ = −Sφ + f

where φ is the vector function characterizing the state of the system, φ ∈ Φ; Φ is the phase space of the system, which is regarded as a Hilbert space with the inner product (⋅,⋅); f is the external impact, which can depend on the solution; S is a positive definite operator describing the dissipation in the system, (Sφ,φ) ≥ μ(φ,φ), μ > 0; and K(φ) is a skew-symmetric operator depending linearly on the solution, (K(φ)φ,φ) = 0.

So integrating over multiple steps will integrate the uncertainties in f. So the thing that Pat gets wrong is that the time period should actually be the time period of integration, which may be a month.

A sensitivity analysis would show that the models are useless if f is not properly bounded.

The alternative is to theorise a value for f but that just means the models are hypothetical exercises and not fit for anything.

Nick Stokes
Reply to  jim
November 12, 2017 9:13 am

“So integrating over multiple steps will integrate the uncertainties in f.”
No, it doesn’t. That’s a misunderstanding of differential equations, which is somewhat relevant to where Pat goes wrong. Suppose K is zero and f is constant. The solution is f/S + C*exp(-S*t), where C is a constant that gives different solutions (fixed by initial conditions). It doesn’t increase linearly with f, as would an integral. In fact, it is a control equation which pulls the solution to a particular trajectory. And if f is something that fluctuates about zero, that will certainly not produce a greater rate of increase.

Nick Stokes
Reply to  jim
November 12, 2017 9:23 am

“It doesn’t increase linearly with f”
sorry, linearly with time t.
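Nick’s closed form is easy to check numerically. A minimal sketch with arbitrary constants (S, f and the starting value below are made up): forward-Euler stepping of dφ/dt = −Sφ + f lands on the same trajectory as f/S + C·exp(−S·t), pulled toward the equilibrium f/S whatever the starting point.

import math

S, f, phi0 = 0.5, 2.0, 10.0      # arbitrary positive S, constant f
dt, n = 0.01, 2000               # integrate out to t = 20

phi = phi0
for _ in range(n):
    phi += dt * (-S * phi + f)   # forward Euler on dphi/dt = -S*phi + f

t = n * dt
exact = f / S + (phi0 - f / S) * math.exp(-S * t)
print(round(phi, 4), round(exact, 4), f / S)   # both ~4.0003; equilibrium 4.0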

Reply to  jim
November 12, 2017 9:56 am

Nick

If you iterate the equation then the dt is removed and you move from step to step. If f is an external factor at t = 0 with uncertainty, then this will affect the change of state. At time = 1 step, the error in f will affect the next result, and so on.

You appear to be conflating continuous solutions with numerical methods. You are solving the equation rather than using the equation iteratively. My question originally was about whether forcing is treated as a linear factor. As it turns out, it is, canonically. So it should apply to all climate models.

And this equation shows that if you use a real-world value you need to be very careful if the uncertainties are large, as well as in finding a suitable time period before it blows up. As others have pointed out, that’s good enough for weather.

I saw the same thing modelling plasmas, compared to erosion caused by plasmas.

Nick Stokes
Reply to  jim
November 12, 2017 10:07 am

“You are solving the equation rather than using the equation iteratively.”
An iterative numerical process isn’t worth much if it doesn’t solve the equation.

Reply to  jim
November 12, 2017 10:26 am

An iterative process is a numerical way to model dynamics. You can run it to achieve a solution or just to see what happens. It depends on what you want to achieve. A control algorithm can be written as a differential equation but it doesn’t have a solution, just a range and possible limiting functions. I wrote such a function for ion thruster control.

The point is that if you have uncertainties in the inputs to each step, you can quickly diverge from what you expect unless you account for these. That’s a pretty standard check in numerical modelling, Nick. And it’s what Pat is talking about.

Nick Stokes
Reply to  jim
November 12, 2017 10:33 am

“The point is that if you have uncertainties in the inputs to each step you quickly can diverge from what you expect unless you account for these. That’s a pretty standard check in numerical modelling, Nick.”
It’s what I have spent a large part of my professional life dealing with. It is what is illustrated with your equation, with K=0. If S is positive, deviations from the solution are corrected. The solution is stable. If S is negative, the solution is unstable. Errors grow. If it is a system of equations, you need all the eigenvalues of S to be positive. Your text actually specifies that (S is positive definite). People really know about this stuff. They need to.
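Nick’s stability point can be illustrated with a toy integration (a sketch with arbitrary numbers, not any actual model physics): the same per-step noise stays bounded when S is positive and explodes when S is negative.

import numpy as np

def final_state(S, n=200, dt=0.1, noise=0.1, seed=2):
    # Forward-Euler iterate dphi/dt = -S*phi + eta, with random eta each step.
    rng = np.random.default_rng(seed)
    phi = 0.0
    for _ in range(n):
        phi += dt * (-S * phi + rng.normal(0.0, noise))
    return phi

print(abs(final_state(+0.5)))   # dissipative: stays small, order 0.01-0.1
print(abs(final_state(-0.5)))   # unstable: grows to order 100s and beyond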

Reply to  jim
November 12, 2017 10:39 am

Nick

They don’t know K, or phi, or S. That’s the point. They use the equation to try and solve for this. What is known is f (with uncertainty). So they iterate with a forcing number. That’s the issue.

paqyfelyc
Reply to  jim
November 12, 2017 10:52 am

Nick,
Your example is just irrelevant. As you pointed out, it is an example of a fully controlled, exponentially damped, equilibrium-bound system, where the pertinent variable isn’t φ but φ − f/S, and where time basically has exponentially decreasing importance (so of course the error does not depend on time: nothing does!).
Is climate this sort of system? No it isn’t…

No one disputes the fact that some systems are controllable, such that any error will be damped to effectively zero. That’s the whole point of control theory!

The purpose of the paper was to check what behavior can be expected in the case of climate models. The author says that in this case, the error propagates to infinity; that is, the ratio (φ−φ′)/(f−f′) is NOT bounded in any way
(where f′ is the real, unknown forcing; f−f′ is the error; and φ′ is the real trajectory with the real forcing f′).
Which is basically just a definition of a chaotic system, BTW.
In essence, the reviewer states that climate models are not chaotic. Which is doubly wrong. They ARE (as evidenced by the spaghetti), and since the climate is chaotic, they had better be chaotic too.

Nick Stokes
Reply to  jim
November 12, 2017 12:46 pm

“you example is just irrelevant”
For heaven’s sake, it isn’t my example.

jim
Reply to  jim
November 12, 2017 3:04 pm

Nick, you have been found out. You don’t really understand the application of those lovely complicated models you run. You ‘believe’, you don’t ‘know’. There is a world of difference.
Oh, and you continue to show a basic misunderstanding of statistics. But you continue to lie in vain attempts to cover your shortcomings.
‘People really know about this stuff. They need to.’ Yes, they do; otherwise ‘real things’ would break or fall down. You clearly don’t, but it doesn’t matter except for the support people like you give to those who bleed economies around the world financing useless ‘energy projects’. That will ultimately cost lives, millions of them. I hope you sleep well.

AndyG55
Reply to  jim
November 13, 2017 2:26 am

“I hope you sleep well.”

I doubt Nick has even the slightest bit of shame or conscience that he is supporting an agenda that, in its own words, is trying to bring down western society.

And he will LIE and squirm and deceive and misrepresent, against all rational maths and science, for as long as he can, to keep his support for that evil, irksome agenda going.

Reply to  mickyhcorbett75
November 12, 2017 8:03 pm

Micky, I haven’t checked the models themselves. I’ve only emulated their behavior.

However, a repeated criticism of my reviewers has been that the emulation equation is incomplete physics because it does not include a term for ocean heat capacity.

One can infer from that comment that the models do just incorporate a linear extrapolation of forcing, but that it’s modified by other thermal responses.

Also, in one of my responses, I noted that the IPCC itself states there is a linear relation between forcing and projected air temperature, which they express as ΔTs = λΔF, where λ is model climate sensitivity.

That’s in Pyle, J., et al. (2016), Chapter 1: Ozone and Climate: A Review of Interconnections, in “Safeguarding the Ozone Layer and the Global Climate System: Issues Related to Hydrofluorocarbons and Perfluorocarbons,” IPCC/TEAP, Geneva.

The emulation equation isn’t about physics, of course, which makes irrelevant the criticism that it’s physically incomplete.
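For concreteness, a one-line instance of that cited relation (the sensitivity value is an illustrative choice of mine, not a number from the manuscript):

```python
# Linear forcing-temperature relation, dTs = lambda * dF, as cited above.
lam = 0.5   # illustrative climate sensitivity, K per W/m^2 (model dependent)
dF = 3.7    # canonical forcing for doubled CO2, W/m^2
print(lam * dF)   # dTs ~ 1.85 K
```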

Reply to  Pat Frank
November 12, 2017 11:16 pm

I’ve read your paper, and based on what I looked at (the canonical equation above) and your findings with the emulation, the basic idea is that irrespective of what the details of climate models are doing, their behaviour is numerically equivalent to a much simpler linear sum of forcing. So it doesn’t matter what fancy maths is happening, or how differential equations are being solved or limited; the effect can easily be replicated by a much simpler equation.

In doing so it highlights the sensitivity of the models to forcing, and it appears that if you take, for example, the yearly forcing and include uncertainties in that value, the range of possible temperatures expands until the models are not very useful.

So the key point is that the net effect of the numerical wizardry is a much simpler relationship that can be emulated. And this linear relationship is also expressed by the IPCC.

The key is that if you can emulate with a simple relationship, and it shows very good agreement with a whole host of models of different types, then the resultant core of the models is linear, i.e. higher-order terms are being minimised.

It is actually a very nuanced argument, Pat. It’s like using a complicated polynomial expansion only to find out your higher terms are all zero over the range of values you use it on!

Reply to  Pat Frank
November 12, 2017 11:43 pm

Just to add: because you use an emulation, effectively like a reverse-engineering process, and you don’t necessarily need to know exactly what is going on in the models, I believe this is why you are getting these responses.

Playing Devil’s Advocate: first of all, it’s not a derivation or understanding from first principles. It also does not detail how the forcing values are used to calculate the internal states, or how differential relationships are run or solved, and so on. A modeller would look at all the whistles and bells and say, no, the model is not run like that.

However, mathematically, what matters is the result and how it behaves within a range of data. It is the reverse of the argument made to many here who look for higher-order terms in temperature data, only to be told that a linear fit applies.

Whatever higher-order terms and processes are going on, the result can be fit to a linear sum, which implies that the model produces a result with characteristics similar to a linear sum. One of those being sensitivity to uncertainties, as you have shown.

I don’t know if this is the way you would describe the argument, though, Pat. It might be lost in translation a bit. I could be wrong.

RW
Reply to  Pat Frank
November 13, 2017 2:02 am

micky, I interpret Pat Frank’s work similarly. I haven’t yet seen a single valid rebuttal of the core findings: 1) a very simple linear function can emulate complex climate model output, which is kind of embarrassing if you’re demanding supercomputers to run your complex models; and 2) uncertainty in the parameter values that come from measurement is neither reported nor accounted for in the model output, and the correspondence suggests that many climate modellers do not care for or understand error propagation, and do not possess a very good grasp of basic statistics.

Reply to  Pat Frank
November 13, 2017 5:21 am

RW

I also now see where Pat got the yearly error from. The RMS error of the year is quoted in the L&H model, so I can see why Pat uses the yearly emulation. It’s to highlight the problem, using L&H as a candidate example.

Nick Stokes
Reply to  Pat Frank
November 13, 2017 5:40 am

“RMS error of the year is quoted in the L&H model”
It isn’t. They just quote an RMS error of 4 W/m2. Nothing is said about “of the year”. This is crucial to Pat’s numbers.

Editor
Reply to  Pat Frank
November 13, 2017 7:26 am

Pat,

Pardon a layman’s question, but based on micky’s explanation of your paper (which helped put it in context for me), it sounds as though there are two separate issues: 1) climate models are essentially just linear sums of forcing, and 2) error propagation in a model of linear sums should be calculated in such-and-such a way.

If this is accurate, shouldn’t there be two papers then: One arguing and demonstrating the first point, and another the second? This seems especially necessary since the second point is the one that you’re really interested in, and it appears that it’s dependent on the first.

Just a thought.

rip

Reply to  Pat Frank
November 13, 2017 8:58 am

Apologies to Pat. I read section 2.4.1, and the yearly uncertainty is because even the 20-year value is a sum of yearly calculations. That’s why the uncertainty is per year. The basis is a yearly value.

November 12, 2017 6:58 am

There has got to be a free market solution to this problem:
1) The journals are controlled by activists not interested in the truth. A climate journal needs to be edited by unbiased and disinterested people in the statistics, engineering, and mathematics fields. Rejected climate articles should be submitted to statistics or mathematics journals for publication. The existing climate science would never pass the rigor needed for publication in a real science journal.
2) Reproducibility and application of the scientific method would be requirements for publication in any new climate journal. The very fact that the new journal announced those requirements would put the other journals on the defensive.
3) The new journal could start by simply doing what happens here on WUWT: existing published articles could be critiqued, and the flaws in their science and statistics could be exposed and validated by people in the math and science fields.

The first thing communist totalitarians do is take over the media and the educational system. They have to control the message and censor all opposition. That is their well-known MO. Real scientists need to break the truth embargo imposed by the slimate climatists. There has to be a market for the truth; an entrepreneur just has to tap it. Aren’t there any scientific journals interested in the truth anymore?

Reply to  co2islife
November 12, 2017 7:03 am

WUWT Site stats
333,102,862 views

There is enough firepower there to generate interest in a new Journal. WUWT could team up with the other Global Warming Blog and start publishing an Alt-Science Journal, a journal clearly intended to challenge the status quo. The Alt-Title might appeal to the rebellious Millennials. Bottom line, there are no real barriers to entry to the Science Journal Industry, and WUWT has a vehicle to bring everyone together.

Reply to  co2islife
November 12, 2017 7:57 am

1) Hit counters are meaningless
2) New Journal? Try this: https://theoas.org/journal-of-the-oas/

Mark T
November 12, 2017 7:01 am

These people don’t even demonstrate a solid understanding of statistics, let alone how statistics would connect to physical phenomena.

Larry Good, P.E. CSSBB
November 12, 2017 7:09 am

Errors of measurement seem to be far better understood by physical scientists and engineers than by mathematicians, which is ironic given the concept’s roots in statistics.

Tom Halla
November 12, 2017 7:24 am

The editor and “reviewer” have definite conflicts of interest that bias them towards the consensus.

Nick Stokes
Reply to  Tom Halla
November 12, 2017 7:43 am

Even Ronan Connolly?

Reply to  Nick Stokes
November 19, 2017 2:53 pm

What did Ronan Connolly get right, Nick?

Nick Stokes
November 12, 2017 7:41 am

More weirdnesses
“How does it happen that a PhD in mathematics does not understand rms (root-mean-square) and cannot distinguish a “±” from a “+”?”
Just about anyone understands that rms is positive. Who talks about their voltage being ±110V?

James Annan says
“Nowhere in the manuscript it is explained why the annual time scale is used as opposed to hourly, daily or centennially, which would make a huge difference to the results.”
He’s right; I made that point at some length. A referee pointed out this weirdness – PF takes a 20-year average of something in W/m2 and says that the result has units of W/m2/year. But why /year, just because the time period was described as 20 years? It’s also 240 months; why not W/m2/month? As James Annan says, it would make a huge difference to the result.
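To make the size of that difference concrete, a quick sketch under the contested assumption that the ±4 W/m2 compounds in quadrature once per chosen step: the same century gives very different totals depending on whether the step is a year or a month.

```python
import math

sigma = 4.0   # the calibration error under dispute, W/m^2
years = 100   # projection length, chosen only for illustration

per_year = sigma * math.sqrt(years)         # ~40 W/m^2 on an annual basis
per_month = sigma * math.sqrt(years * 12)   # ~138.6 W/m^2 on a monthly basis
print(per_year, per_month)
```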

Taylor Ponlman
Reply to  Nick Stokes
November 12, 2017 8:53 am

Nick,
“Who talks about their voltage being ±110V?” Well, an electrical engineer, for one. Put a diode on one side of that ± feed and measure the result. Now reverse the polarity of the diode – do you get the same result? Now add a capacitor of sufficient size across the circuit and repeat the procedure. Are the answers the same in all cases? I think not, particularly depending on the type of measuring instrument. So it depends on your perspective and your needs. The consumer relies on the fact that his 110V appliance works when plugged into a 110V AC outlet, but try plugging a transformer-based device into 110V DC. Obviously it matters, so please don’t attack an attempt at clarity with a trivialization of his point.

Nick Stokes
Reply to  Taylor Ponlman
November 12, 2017 9:18 am

No, the voltage may be ±, but the RMS measures the magnitude. An engineer would multiply the RMS by a phase term (after converting to peak to peak). It is James Annan who is correctly using magnitude.

LdB
Reply to  Taylor Ponlman
November 12, 2017 9:50 am

Nick, now try your answer with an AC voltage on a DC offset: the situation Taylor describes, with a half-wave ripple.
http://www.ka-electronics.com/Images/jpg/Crest_Factor.JPG
The full-wave rectified sine and the half-wave rectified sine both have an RMS value, but you often put a ± in front of it to show the direction of the DC offset.

I think I got your meaning, but be very careful trying to make that absolute.

Reply to  Taylor Ponlman
November 12, 2017 9:55 am

Nick,

You’re giving away some of your lack of knowledge. RMS is not the magnitude of an alternating waveform. It is merely the value (magnitude) of an equivalent DC voltage that gives the same power. You cannot determine the RMS value of an alternating current, especially an asymmetric one, by simply multiplying by a phase term.

As before, this is dealing with a real-world item. Simple math doesn’t always apply. By the way, what is the RMS value of a sine wave of ±110 VAC ± 5 V?

Nick Stokes
Reply to  Taylor Ponlman
November 12, 2017 10:03 am

RMS means root mean square. It’s as simple as that. There are no ± (or −) signs in LdB’s table. Yes, of course, for non-sinusoids you can’t use a simple amplitude-and-phase characterisation. But RMS still means root mean square. Positive.

LdB
Reply to  Taylor Ponlman
November 12, 2017 10:09 am

So you are saying that in your field the RMS of a series of all negative numbers is positive; that would make analysis fun 🙂

LdB
Reply to  Taylor Ponlman
November 12, 2017 10:12 am

I know what you mean, but in many fields we add the sign in for meaning. You can go through the process of trying to split hairs that the sign isn’t part of the RMS value, but that is being vexatious 🙂

Nick Stokes
Reply to  Taylor Ponlman
November 12, 2017 10:13 am

“So you are saying in your field the RMS of a series of all negative numbers is positive”
Of course it is.

LdB
Reply to  Taylor Ponlman
November 12, 2017 10:18 am

I should say that if you want to play vexatious, then I am going to tell you that you can’t take square roots of negative numbers, so any offset negative waveform can’t have an RMS 🙂

Nick Stokes
Reply to  Taylor Ponlman
November 12, 2017 10:21 am

“you can’t do square roots of negative numbers”
Again, RMS is root mean square. Root. Mean. Square. Before you do anything else, the argument is squared. Everything is then positive. The mean is positive, so has a sqrt. RMS(-V)=RMS(V).
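For reference, the computation as usually coded (numpy’s sqrt returns the principal, non-negative root):

```python
import numpy as np

v = np.array([-3.0, -1.0, -4.0, -1.0, -5.0])   # an all-negative series

def rms(x):
    return np.sqrt(np.mean(x**2))   # square, then mean, then principal root

# Squaring happens first, so the sign of the input never reaches the sqrt:
print(rms(v), rms(-v))   # identical non-negative values, ~3.22
```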

LdB
Reply to  Taylor Ponlman
November 12, 2017 10:26 am

You see where this goes, Nick: all you can do is add a minus out the front to get all the numbers positive,
−X + RMS.
Then I am going to tell you that that formula shows specifically that you can’t do the RMS on its own, because you had to put a term in front of it, and you kicked an own goal.

As I said, I would settle for your answer without vexatious extension 🙂

LdB
Reply to  Taylor Ponlman
November 12, 2017 10:31 am

The basic problem is that the negative has meaning: no electrician or QM person is going to accept a positive RMS value on a negative baseline offset, because you lose meaning. You may never accept our answer, but equally we can’t accept yours.

LdB
Reply to  Taylor Ponlman
November 12, 2017 10:42 am

To give you an example: if I had a −20 VRMS and a +20 VRMS waveform, I would correctly deduce that there are 40 volts RMS between them. In your case I would have 20 VRMS and 20 VRMS, and I would conclude there are 0 volts between them. Do you see that the answer is completely misleading? If I write it out longhand using your offset above, I get the right answer, −20 VDC + 20 VRMS and 20 VDC + 20 VRMS, but it’s a lot more complicated, so you can think of the signed form as shorthand.

AndyG55
Reply to  Taylor Ponlman
November 12, 2017 11:30 am

Nick says, “RMS is a magnitude”.

Yes, that means it can be in either direction

WAKE UP NICK !!

AndyG55
Reply to  Taylor Ponlman
November 12, 2017 12:51 pm

“RMS is root mean square.”

Nick, you mathematical IMBECILE.

Root = square root

ALWAYS a “±” answer.

A long time since you did junior high maths, isn’t it, Nick?

Go back and RE-LEARN.

Kurt
Reply to  Taylor Ponlman
November 12, 2017 2:06 pm

You guys do realize you’re wasting all this time and space on the purely semantic distinction between expressing RMS as a magnitude only, whose value must always be positive (and therefore implicitly understanding the +/- as part of the definition of RMS), or expressing RMS with the +/- signs?

Gnrnr
Reply to  Taylor Ponlman
November 12, 2017 2:24 pm

Is it just me or does Nick Stokes not understand the difference between a Magnitude and a Vector? RMS is magnitude. He keeps making it a positive vector.

Nick Stokes
Reply to  Taylor Ponlman
November 12, 2017 3:08 pm

Kurt,
“You guys do realize you’re wasting all this time and space on the purely semantic distinction”
You could say that. My point is that RMS is well defined and is positive. You could make sense of an alternative usage, and getting the semantics messed up is not the worst thing in the world. My point is, well, I’ll repeat PF:
“How does it happen that a PhD in mathematics does not understand rms (root-mean-square) and cannot distinguish a “±” from a “+””
He’s using JA’s perfectly conventional and correct usage to try to discredit him as a scientist.

Gnrnr,
One thing RMS and magnitude of a vector do have in common is that they are both positive.

Gnrnr
Reply to  Taylor Ponlman
November 12, 2017 3:33 pm

“One thing RMS and magnitude of a vector do have in common is that they are both positive.” Bzzzttt. You just failed a basic 1st year type engineering exam question. Vector has a direction, could be negative or positive. Magnitude has no direction, it is just a magnitude, not positive or negative.

Nick Stokes
Reply to  Taylor Ponlman
November 12, 2017 3:57 pm

“Vector has a direction, could be negative or positive.”
Well, it has multiple components. But I have not spoken of sign of a vector. Only its magnitude, which is positive (and scalar).

Nick Stokes
Reply to  Taylor Ponlman
November 12, 2017 4:08 pm

“He’s using JA’s perfectly conventional and correct usage to try to discredit him as a scientist.”
I should add that the issue to me isn’t the unfairness of that. It’s the ignorance. Undergrads, even school students, are supposed to know how to use RMS. You can possibly justify an alternative usage, with great care for consistency, but to slam Annan for orthodox use just shows ignorance of that undergrad teaching.

Gnrnr
Reply to  Taylor Ponlman
November 12, 2017 4:50 pm

You still aren’t understanding it Nick.

“Well, it has multiple components. But I have not spoken of sign of a vector. Only its magnitude, which is positive (and scalar).”

As soon as you assign a positive or negative to it, you change it from a magnitude to a vector, i.e. you give it a direction relative to some co-ordinate system. A magnitude is neither positive nor negative (but it is scalar). Gravity has a magnitude of 9.81 m/s^2; whether it is increasing your velocity or decreasing your velocity depends on the direction you assign it (+ or −) with respect to the co-ordinate system you are working with. These are very basic concepts of magnitudes vs vectors. You keep conflating them. Like I said earlier, you would fail basic 1st-year engineering exams with your comments thus far.

Nick Stokes
Reply to  Taylor Ponlman
November 12, 2017 5:56 pm

“As soon as you assign a positive or negative to it, you change it from a magnitude to a vector”
Does that work for your bank account? But anyway, I’m the one that is resisting applying signs. A magnitude is positive in the sense that your height is positive. What else would it be? In any arithmetic, it is treated as a positive number.

Remember, the excoriation of James Annan was for not providing a sign.

Gnrnr
Reply to  Taylor Ponlman
November 12, 2017 6:25 pm

“Does that work for your bank account?”

Most certainly does. Magnitude of the transaction is the $ amount of the transaction. Whether it adds to the account or subtracts from it makes it the vector (I personally like ones that add :)).

Gnrnr
Reply to  Taylor Ponlman
November 12, 2017 6:38 pm

“A magnitude is positive in the sense that your height is positive.”

Yes, people’s heights are always positive. Good observation. The magnitude of the error of measurements of those heights, if you take the RMS of the errors, will also be, to use your thinking, a positive number, e.g. 1 cm. The effect of that error, however, will sometimes be positive and sometimes negative, hence ±1 cm.

The magnitude is 4 W/m^2, but the effect is ±4 W/m^2, not +4 W/m^2. Do you still not see your logic error?

LdB
Reply to  Taylor Ponlman
November 12, 2017 8:45 pm

You’re wasting your time; Nick is clearly engaging in semantics, and that is all he is interested in to justify an answer. What Nick is not willing to discuss is the intent of RMS, which is, and let’s quote it:

To allow RMS amplitudes to sum directly

Nick is ignoring that intent to be deliberately deceptive.

Nick has the same argument that you can’t have negative money, hence a number such as −$10 can’t be written in an account. You either put it in a different column or color it red, would be Nick’s argument.

If Nick or climate science is going to engage in this level of semantics, they need to publish a formal definition of terms, because you can’t use any known standards that are in use by the general community.

LdB
Reply to  Taylor Ponlman
November 12, 2017 9:01 pm

Nick, I would also warn you that if you look at all the truly great science papers in physics and apply your level of semantics, I don’t know of any that would actually have been published.

There are a number of line-by-line analyses of Einstein’s 1905 paper around, and most will pick up the couple of errors. Using your level of semantics it would have been thrown out, or at the very least be completely wrong.

I am pretty sure you could reject any paper based on semantics if you really put your mind to it.

Nick Stokes
Reply to  Taylor Ponlman
November 12, 2017 9:03 pm

“Nick has the same argument that you can’t have negative money”
No. I just said that it would be odd to say from its sign status that it is a vector. I have experienced negative money.

“and lets quote it”
I don’t know what you are quoting there. But it is very rare that you can add RMS values directly. More often it is in quadrature. You can add the squares.

“Climate Science is going to engage in this level of semantics “
No, the semantics are from Pat Frank. He blasted James Annan for what is simply standard usage (also used by his source). And surely that raises the question – what’s going on here? What kind of world are we in?

LdB
Reply to  Taylor Ponlman
November 12, 2017 9:23 pm

If you and climate science in general go to this level you are on a slippery slope.

I haven’t got the time but if someone wants to do it go to all the important papers in climate science and just look at the quantities and exact wording. Find how many mistakes there are and then suggest they reject the papers based on semantic errors because that is where we have come to.

LdB
Reply to  Taylor Ponlman
November 12, 2017 9:30 pm

I guess it would also be interesting to ask Nick about papers with the expression −Energy in a physics paper. Energy is, after all, defined almost everywhere as a positive value. Can I have −Energy in a paper?

RW
Reply to  Taylor Ponlman
November 13, 2017 2:08 am

Wow. So much comment space was abused by this RMS nonsense.

Reply to  Nick Stokes
November 12, 2017 9:37 am

As above, don’t trivialize. RMS of what? A square, triangular, sine wave? How about an asymmetric waveform that could have a negative value? How about ±110 V ± 5 V?

LdB
Reply to  Jim Gorman
November 12, 2017 9:51 am

Sorry Jim, I missed that you had answered that. Yes, hopefully we have explained to Nick to be careful taking that too far.

Nick Stokes
Reply to  Jim Gorman
November 12, 2017 9:58 am

With RMS of anything, you square it, which has to be positive, take the mean, and then the positive square root. The answer is a magnitude and has to be positive.

LdB
Reply to  Jim Gorman
November 12, 2017 10:05 am

No it doesn’t, Nick: just look at the waveforms above and turn the 2nd and 3rd upside down. You need to be able to separate the two waveforms, and one is positive while the other is negative. You possibly can’t do that in your problem, but it happens in many problems. We get the same thing in QM, where we have RMS relative to some ket basis.

Reply to  Jim Gorman
November 12, 2017 7:39 pm

Nick, “With RMS of anything, you square it, which has to be positive, take the mean, and then the positive square root. The answer is a magnitude and has to be positive.”

Now you’re bringing in external physical meaning, Nick, which changes everything.

And which explanation (physical meaning) you’ve always resisted whenever it produced conclusions you didn’t like, such as in the physical meaning of a time-average.

When only one root of a square root has physical meaning, within the context of science or engineering, that root is chosen for that reason: i.e., by reason of an externally located physical meaning.

The rms calculation itself always, repeat always, produces the ±root.

The fact that only physically meaningful roots are chosen in science has no bearing on the general result that RMS is always plus/minus.

Nick Stokes
Reply to  Jim Gorman
November 12, 2017 8:09 pm

“The rms calculation itself always, repeat always, produces the ±root.”
Does your calculator say that? Your computer? It’s nothing about physical meaning. It is a standard definition. RMS is always the positive square root.

Again my challenge – if it is as you insist, point to just one reputable publication that uses that convention. For the actual RMS numbers. Your L&H source certainly doesn’t.

Reply to  Jim Gorman
November 12, 2017 9:40 pm

Here you go, Nick, Wiki itself:

In experimental sciences, the [plus/minus] sign commonly indicates the confidence interval or error in a measurement, often the standard deviation or standard error. The sign may also represent an inclusive range of values that a reading might have.

Standard deviation: rms conditioned by loss of one degree of freedom.

Nick Stokes
Reply to  Jim Gorman
November 12, 2017 10:09 pm

“Here you go, Nick”
Going round and round endlessly on this incredibly elementary stuff. That link is actually to a page on the ± symbol. And it describes its use in defining a confidence interval. That says that the CI is a±b, where b is some RMSE, SD, or a multiple. That is the CI, but b, the RMSE, is a positive number. It makes no sense to speak of a±±4.

Still no progress with the challenge – to find an RMSE actually specified as, say, ±4, as you say Annan and L&H should have done.

Nick Stokes
Reply to  Nick Stokes
November 12, 2017 3:01 pm

“you know you are referring to a quantity which alternates between +110 and -110”
No. It alternates between about +155 and −155. The point is that RMS is a well-defined term, and is positive. It wouldn’t matter so much that Pat Frank has an eccentric view on it (it isn’t his worst), but one has to respond if he uses Annan’s perfectly conventional usage to claim that he isn’t a scientist etc.

“your extremely selective quotation of a reviewers criticism without taking any account of the author’s response to that criticism”
I quoted both rounds, criticism and reply. But the main thing is the stream of accusations directed at Dr Connolly’s honesty (not to mention intellectual vacuity etc). Dr C is an independent scientist who often writes at WUWT. I think he’s a sceptic in good standing. So what is the basis for this? It can’t be supposed CoI.

Nick Stokes
Reply to  Nick Stokes
November 12, 2017 4:46 pm

“Your second point is just plain false.”
OK would you like to quote the parts of the author’s reply that would change the meaning from what I wrote?

Reply to  Nick Stokes
November 12, 2017 7:31 pm

Nick, “The point is that RMS is a well-defined term, and is positive.

Wrong. RMS is always ±.

4^2 = 16
(-4)^2 = 16

sqrt(16) = ±4 and nothing else.

It’s that easy and you never fail to get it wrong, Nick.

Reply to  Nick Stokes
November 12, 2017 7:44 pm

Nick’s numerical conundrum was resolved here, and again here, and by micro6500 here.

And that set doesn’t exhaust the retinue.

I can’t tell whether you really don’t get it, Nick, or whether you’re just sticking to an obscurantist narrative.

Nick Stokes
Reply to  Pat Frank
November 12, 2017 10:22 pm

“If asked I would say that RMS is a magnitude just like he does. If he is right..”
then James Annan was right. It’s Pat who is making an issue of something elementary that wouldn’t be significant anywhere else.

“it takes him nowhere”
It seems from these posts that it’s Pat’s paper that is going nowhere.

RW
Reply to  Pat Frank
November 13, 2017 2:23 am

Pat is right to take issue, since it is crucial to the topic that all parties understand, and are perceived to understand by one another, exactly what is being referred to by uncertainty and error. It is painfully obvious that many of the reviewers do not get it.

Reply to  Pat Frank
November 13, 2017 8:56 am

Just to elucidate a little: the RMS value of a waveform is the equivalent DC value that would generate the same heating if dissipated in a resistance. The DC value can be positive or negative with respect to ground. The same amount of heat is dissipated either way, i.e. ±RMS.
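A small sketch of that heating-equivalence definition (the 110 V amplitude and 10-ohm load are just illustrative): dissipated power goes as the square of the voltage, so DC levels of +RMS and −RMS heat the resistor identically.

```python
import numpy as np

A, R = 110.0, 10.0                      # sine amplitude (V) and load (ohms)
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
v = A * np.sin(2 * np.pi * 50 * t)      # one second of a 50 Hz waveform

vrms = np.sqrt(np.mean(v**2))           # ~A/sqrt(2), about 77.8 V
p_ac = np.mean(v**2) / R                # average power the waveform dissipates
p_pos = (+vrms)**2 / R                  # heating from the +RMS DC equivalent
p_neg = (-vrms)**2 / R                  # ...and from the -RMS DC equivalent
print(vrms, p_ac, p_pos, p_neg)         # the three powers agree
```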

Nick Stokes
Reply to  Nick Stokes
November 12, 2017 8:01 pm

Pat,
“RMS is always ±.”
Your link does not talk about RMS. It talks about what you must do when taking the square root to solve a quadratic equation. And then indeed the result must reflect the range of possible solutions of that equation. But that is not relevant here. RMS is a measure of the magnitude of variation. It was positive everywhere in Lauer and Hamilton. It was positive in the table of values that LdB showed. I repeat my challenge: if “RMS is always ±”, then just show any reputable publication where an RMS is written so. Now I expect, like LdB, you’ll come back with stuff like a±σ, where σ is some RMS or standard deviation. But while that does express the error range, the measure σ, the RMS, is a positive number. The expression wouldn’t make sense otherwise.

Now, as I said elsewhere, I’m not so bothered that this is yet another of your “Pat Frank only” notions. The issue is that you savagely condemned James Annan for his standard usage (exactly as used by L&H), which can only show that you just don’t understand it. And it is high school stuff.

Reply to  Nick Stokes
November 12, 2017 9:27 pm

Nick, that link shows why taking a square root always produces a plus/minus.

It proves the generality of which rmse is a particular case.

RW
Reply to  Nick Stokes
November 13, 2017 2:20 am

Nick, you are just saying that a standard deviation, as in the parameter, is typically expressed without +/−. So I can pass a parameter value into a function that describes a distribution, where that parameter is the standard deviation, and it is typically not passed as negative.

The +/- comes into play when a point estimate is made.

In Pat’s 4 +/- case, point estimates are made and there is no reason to suggest that only one tail is required, so +/- is correct.

Nick Stokes
Reply to  Nick Stokes
November 13, 2017 2:22 am

“so +/- is correct”
So do you mark your students wrong if they don’t provide it?

RW
Reply to  Nick Stokes
November 13, 2017 3:41 am

This is about precision in language as much as it is about precision in measurement. A lot of meta-review and meta-evaluation is happening rather than the math, stats, and error analysis that should be happening. We’ve all established that context determines the convention here. Knowledge and awareness of the context (among other things) signals fitness to evaluate the submission and the appropriateness of the conduct of the editors and reviewers. So under some circumstances I’d say Pat Frank’s point on +/− was ruthlessly pedantic, but in this case I think he’s within bounds. Picking on his point was a waste of time.

November 12, 2017 7:58 am

Qualifier: I am not a mathematician nor a statistician. So take what I say with a grain of salt:

Find a journal that specializes in the nuances of mathematics, statistics and physics, one that is dedicated to tearing apart models and looking for holes. You’ll reach a broader audience. The problem may stem from nihilism, or it may stem from an abject rejection of any idea that threatens the money train. But your problem is trying to beat down a fortified castle wall with the equivalent of a soap bubble.

Find another castle whose door is open… preferably a neighboring one that has tunnels running under the climate one, undermining it from within. Once an error is discovered, it cannot be ignored by those who understand the error.

Just a suggestion, but I’d start shopping that paper to other parties of interest.

JimG1
November 12, 2017 8:02 am

Debating the physics and statistical content of the paper is actually a distraction in this posting, as it is about integrity, as noted in the title of the post. It would seem that violating the reviewer’s own stated rules regarding maintenance of said integrity should be sufficient to make the point to all that bias is involved in the process and that it lacks integrity.

Sheri
Reply to  JimG1
November 12, 2017 9:09 am

Integrity only counts if we’re talking skeptics. Believers can’t have conflicts of interest because they are “pure and saintly”. Funding bias applies only to skeptics, as do bad motivations (oil company checks and so forth). Again, all believers are pure and saintly, untainted by personal gain. Everyone knows this. Trying to impugn these people is just very bad behaviour. And it hurts their teeny, tiny little feelings, so cut it out.

Schrodinger's Cat
November 12, 2017 9:03 am

I don’t pretend to understand the details of the error propagation at the centre of this dispute, but on the one hand it is seen as incompetence and lack of integrity and on the other side it is said to be an obvious error.

What surprises me is that even on its second exposure on this site the technical dispute has still not been resolved though there are plenty of critical comments on both sides. But then, why am I surprised? It seems that climate matters defy agreement as a universal rule.

Yet as others have pointed out, regardless of the technical matter, GMD seems not to have followed its own guidelines and illustrates the level of integrity we associate with pal review.

November 12, 2017 9:08 am

Outstanding essay. Every media outlet that aspires to maintain the highest ethical standards should publicize the essay as a case study of unethical behavior or worse. Dr. Frank: In my opinion, you could skip the “in my opinion” phrases.

Sheri
Reply to  Tom Bjorklund
November 12, 2017 9:11 am

“every media outlet that aspires to maintain the highest ethical standards”

I believe there may be no such thing out there.

Reply to  Sheri
November 12, 2017 9:31 am

Every non-governmental media outlet strives to maintain profits so that it can pay salaries and other operating costs with a residual profit to the owners. Any that doesn’t, doesn’t remain around very long.
If they call themselves a non-profit, private media outlet, then they are merely using a convenient accounting method for the purpose of tax-avoidance by the owners.

Every government media outlet merely serves the political interests of the government in power. Any government media outlet that doesn’t do so, doesn’t remain around very long.

Reply to  Tom Bjorklund
November 12, 2017 7:22 pm

I didn’t want to be sued for defamation, Tom B. Or to put Anthony into such peril.

November 12, 2017 9:12 am

So-called climate scientists refuse to deal with the real-world uncertainties and errors that arise in measuring real-world things like temperature. They are all mathematicians who are terribly sure that their number crunching and statistical analyses are done correctly, and they probably are. However, they are ignoring, mostly willfully, the most important part of the science: error and uncertainty.

Their mistake is that they are not dealing with pure numbers, and they refuse to admit that any given number can be fuzzy out to the limits of uncertainty (recorded temperatures), thereby contaminating their well-designed output of a simple number. They never dealt with that when learning how to do math on regular, finite numbers that were exact.

Every time they show a graph or state a number without including error and uncertainty, they are misleading people into thinking they are seeing exactly what will happen. Nothing could be further from the truth. Someday, people will ask them how they could ignore this. I hope they have a good answer.

November 12, 2017 9:14 am

“It turns out that Dr. Annan is co-principal of Blue Sky Research, Inc. Ltd., a for-profit company that offers climate modeling for hire”

The name of the modeling company is of course how a modeler’s model has to see the planet for it to have any physical validity. Which of course to say, that if Earth didn’t actually have a condensing 1%-3% precipitable GHG component, the models would do a much better job of estimating the forcing effects on climate of non-precipitable GHG changes.

A true blue-sky planet with a small ocean:land ratio – the Earth is not. The Earth of course has a 7:3 ocean:land ratio. Those large oceans have decadal-scale internal non-linear cycles which the models cannot replicate. The planet has significant clouds, cloudy days, and rain, which alter albedo over large areas in ways the models cannot calculate. Water’s phase changes and large latent heats produce significant convective energy transport (at scales smaller than the calculable grid box) in ways which cannot be calculated from first principles. Thus the modelers parameterize those quite non-blue-sky features.

So yes, a true blue sky planet is the climate modeler’s Platonic utopia. Damn those clouds and rain.

Reply to  Joel O’Bryan
November 12, 2017 10:37 am

Even a water world will have a deterministic climate, which is actually easier to calculate than our mix of land and water. The problem is that climate science sees the chaos of weather, which blinds it to how predictable the LTE state must be consequent to some change.

It would be interesting to see a water world modelled using a pedantic GCM. Many of its knobs and dials will go away, leaving only the core assumptions to affect the results.

paqyfelyc
Reply to  co2isnotevil
November 13, 2017 3:17 am

A decent modeler will do exactly that: simulate not just the current Earth, but Earth in a lot of states: snowball, no land, land in a single mass, all land, …

Reply to  paqyfelyc
November 13, 2017 8:29 am

Yes. One of the tests I do for any model of a causal system is to vary initial conditions and make sure it converges to the same answer each time. If it doesn’t, then there’s something wrong with the model.
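A minimal version of that test (with a toy stand-in for the model; a real GCM check would be far more involved):

```python
import numpy as np

def toy_model(x0, steps=5000, dt=0.01):
    """Stand-in causal system: relaxes toward a forced equilibrium."""
    x = x0
    for _ in range(steps):
        x += dt * (4.0 - 0.5 * x)
    return x

# Convergence check: sweep the initial condition and demand one common answer.
finals = np.array([toy_model(x0) for x0 in np.linspace(-50.0, 50.0, 11)])
assert np.ptp(finals) < 1e-6, "result depends on initial conditions"
print(finals[0])   # the shared equilibrium, ~8.0
```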

Yogi Bear
November 12, 2017 9:17 am

‘Consensus Climatology in a Nuccitellishell’

Sheri
November 12, 2017 9:18 am

As far as I can see, the problem with all of AGW is that the suppositions are built into the models. It’s a giant circular argument. However, to prove or disprove the theory would require starting over with scientists who knew nothing of the theory and were asked to account for the current temperature rise based on the elements of climate we currently understand. This is probably impossible. Almost all theories I’ve seen either assume CO2 is the driver or one of the major drivers, or assume one variable, such as the sun or ocean currents, is dominant, and all have to guesstimate many of the components of climate. Finding scientists who could take known factors and generate a model of today’s climate with no assumed conclusions may not be possible. Barring such an exercise, we will remain mired in the “my theory is right” battle forever, or until such time as the earth gets very hot or very cold. Even then, the blame game will continue.

jclarke341
Reply to  Sheri
November 13, 2017 12:56 pm

Yes! It is a giant circular argument. Still, the models could be scientifically useful if the modeling was done with the assumption that the result will be wrong, and then compared to what has really happened in an attempt to gain understanding. That would be in line with the scientific method. It would be a form of trial and error. But it would fail miserably as political motivation, which needs more definitive, alarming statements to persuade the masses.

The atmospheric scientific community was left with a choice: stick to science and settle for very little funding, or move away from science, giving the funders what they wanted, and vastly increase funding. Over time, they abandoned science for scientific funding. This is exactly what Eisenhower warned us about in 1961, when he said: “Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. We must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”

Both of these predictions have come to pass with an accuracy that climate models can only dream of. Government contracts have replaced intellectual curiosity as the main driver of the scientific community. And government policy is being driven by self-anointed, scientific elites, as the distinction between science and government fades, to the detriment of both.

November 12, 2017 9:57 am

The regrettable reality of the bias and ignorance issues highlighted by Dr. Frank’s essay is that the wall of censorship created by Geoscientific Model Development on a small scale and by apparatchiks in massive government bureaucracies on a very large scale, so far, has been impenetrable.

Consider the U.S. Global Change Research Program (USGCRP). USGCRP “was established by Presidential Initiative in 1989 and mandated by Congress in the Global Change Research Act (GCRA) of 1990 to “assist the Nation and the world to understand, assess, predict, and respond to human-induced and natural processes of global change.” Thirteen Federal entities participate in the USGCRP programs
(Department of Commerce, Department of Agriculture, Department of Defense, Department of Energy, Department of Health & Human Services, Department of the Interior, Department of State, Department of Transportation, Environmental Protection Agency, National Aeronautics & Space Administration, National Science Foundation, Smithsonian Institution, U.S. Agency for International Development).

In the 1990 mandate, human-induced climate change is a given. That implies to me that from at least 1990 to the present, billions of dollars have been spent to indoctrinate the world on the untested hypothesis of CO2-driven global warming and not a cent has been spent to challenge orthodox thinking. Unorthodox thinking is prohibited by the mandate. Every result from their studies is a “What if” result. Where are the “What if not” studies?

What chance do a few rational scientists have against this vast Establishment juggernaut of money and power? The resources are in place to accomplish good science. The mandate of the USGCRP must be redirected by Congress to emphasize finding the truth instead of managing a multi-billion-dollar program to promote a political doctrine. That would really be draining the swamp. I wish I could be optimistic that the efforts of Dr. Frank and a few others will lead to positive changes in thinking about climate change. Sadly, I fear that those who desire change are whistling in the wind.

Casual Reader
November 12, 2017 10:07 am

Looked up Annan & Hargreaves on Facebook (gasp). A married couple. She seems to be fond of cats and lists herself as self employed.

I Came I Saw I Left
Reply to  Casual Reader
November 12, 2017 10:18 am

Imagine… A married couple thwarting publication of a paper that threatens their livelihood. Simply inconceivable.

Casual Reader
Reply to  I Came I Saw I Left
November 12, 2017 11:35 am

I used to review potential contractors for construction projects. Companies that swept the floors on a project would claim to be project managers in their slick sheet brochures. You have to look at financial statements and tangential things, not just the company resumes. Is Blue Sky a slick sheet website trying to get on the AGW cash bandwagon? I doubt they carry enough weight to justify all this discussion.

November 12, 2017 10:57 am

The most recent product of the U.S. Global Change Research Program is a Congress-mandated quadrennial report that is archetypal of how the Establishment perpetuates groupthink on climate change. The report is filled with hyperbolic language intended to invoke apprehension about present and future threats of an out-of-control warming planet. The report has been widely publicized in scientific journals and elsewhere, including WUWT (https://wattsupwiththat.com/2017/11/03/what-you-wont-find-in-the-new-national-climate-assessment/). Check it out again in light of Dr. Frank’s experience with Geoscientific Model Development.

willhaas
November 12, 2017 11:23 am

There is no consensus on the AGW conjecture because scientists never registered and voted on the matter. Science is not a democracy. The laws of science are not some form of legislation. Scientific theories are not validated by a voting process. Much of the so called peer review is largely political in nature. Passing peer review does not make something correct or valid.

November 12, 2017 11:30 am

Clyde Spencer
Reply to  Steven Mosher
November 12, 2017 2:29 pm

This is an example of someone who is sloppy in his attention to detail in significant figures in measurements. He is demonstrating what is wrong with the alarmist position.

Greg
November 12, 2017 11:33 am

He wrote, “… ~4W/m^2 error in cloud forcing…” except it is ±4 W/m^2 not Dr. Annan’s positive sign +4 W/m^2. Apparently for Dr. Annan, ± = +.

Where is “Dr. Annan’s positive sign”? I don’t see one; this is a false charge made up to suit the rant.

Apparently for Pat Frank, ~ = +.

If that is where this vitriolic attack on Annan starts, I won’t waste my time on the rest.

I recall that the last time this “paper” came up it got a fairly dismal reception here on WUWT, too.

Mark T
Reply to  Greg
November 12, 2017 5:59 pm

When there is no sign it is a plus, by definition. I can understand why you had to stop, you simply don’t understand basic mathematical symbolism and were incapable of understanding the rest. This is probably the weakest of his arguments (and even that was too much for you), all things considered.

Reply to  Greg
November 12, 2017 9:23 pm

Unspecified sign is always positive by default, Greg.

Further, later in his paragraph, Dr. Annan wrote, “Of course this is what underpins the use of anomalies for estimating change…,” which treats the value as an offset error and is an unambiguous indicator that he meant positive-sign 4 W/m^2.

Nothing in my post constituted a “vitriolic attack.”

November 12, 2017 11:38 am

Pat gets the time scale wrong
https://youtu.be/rmTuPumcYkI?t=15m42s

AndyG55
Reply to  Steven Mosher
November 12, 2017 12:48 pm

no Mosh. Time-invariant means it does not change from year to year.

Why continue to display your total LACK of any mathematical training?

Mark T
Reply to  AndyG55
November 12, 2017 6:00 pm

He’s not even a number fiddler.

Reply to  Steven Mosher
November 12, 2017 6:08 pm

Again, here, Steve Mosher has linked only the youtube version of Patrick Brown’s video.

At the site of the original posting of that video, Patrick Brown and I had a long conversation.

I invite everyone to visit there and read through.

Patrick Brown’s analysis did not survive critical analysis; something for which Steve Mosher has never demonstrated an aptitude.

November 12, 2017 11:58 am

This is where I agree with Dr. Frank: “My most recent paper is Patrick Frank, et al., (2017) “Spin-Polarization-Induced Pre-edge Transitions in the Sulfur K‑Edge XAS Spectra of Open-Shell Transition-Metal Sulfates: Spectroscopic Validation of σ‑Bond Electron Transfer” Inorganic Chemistry 56, 1080-1093; doi: 10.1021/acs.inorgchem.6b00991.

Physical error analysis is routine for me. Manuscript gmd-2017-281 strictly focuses on physical error analysis.”

I’ve done this. The things discussed in his previous essay were also drilled into my head.

Talk about people talking past each other using the ‘same’ language. The kind of stuff done by climate scientists would have gotten an “F” in my chemistry lab classes. Yeah, the lab classes where previously described experiments get replicated. So when I do a bomb calorimetry setup and get a different number from the book, if I’ve done my error analysis correctly, I can then show why my result differed.

Admin
Reply to  cdquarles
November 12, 2017 12:38 pm

The part where critics are going wrong is that they don’t get their heads around the fact that the errors are a “random walk”.

The errors can’t be contained with a simple offset, because they are unpredictable. Just as you can’t predict the physical error with some types of laboratory measurement.

In a situation like climate, where the output of the previous iteration feeds through into the input of the next iteration, the errors have to propagate – because you have no way of knowing in advance what those unpredictable random walk errors will be.
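A sketch of that random-walk picture (unit per-step errors, purely illustrative): the ensemble spread grows with the square root of the step count, and no constant offset can remove it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_steps = 2000, 100

# Each iteration's output feeds the next, plus an unpredictable unit error:
walks = np.cumsum(rng.normal(0.0, 1.0, size=(n_runs, n_steps)), axis=1)

print(walks[:, 9].std(), walks[:, 99].std())   # ~3.2 after 10 steps, ~10 after 100
```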

Nick Stokes
Reply to  Eric Worrall
November 12, 2017 2:18 pm

“The part where critics are going wrong is they don’t get their head around the fact the errors are a “random walk”.”
There is a simple counter to that, put by James Annan and by the front slide of Mosh’s video (and by me). A random walk has a time scale. You take prescribed steps, but at some time rate. What is that rate here?

Pat Frank says once a year, but as JA says, there is no basis for that. It could be anything. And depending on what you choose, you can get any answer you like.

Clyde Spencer
Reply to  Eric Worrall
November 12, 2017 2:51 pm

NS,

You have on many occasions made the unsupported claim that the Law of Large Numbers cancels errors. While there is some truth to it, the reality is that to cancel periodicities you have to average over one year to reduce the effect of seasons. Yes, one could use any time increment for climate projections, but it does make sense to use annual steps because it is common practice to state temperature anomalies as annual anomalies, and to compare the average of one year with another, and to predict what future temperatures will be in a given year.

The uncertainty is propagated at each step in a series of calculations. It seems to me that we have a situation that is analogous to the Heisenberg Uncertainty Principle. That is, the finer the time resolution of the model, the less certain one is about the results. Using coarser time steps delays the projection from blowing up, and allows projections to be used farther into the future, at the cost of not being able to say anything at a resolution of less than a year. However, the one-year step illustrates the nature of the propagation of error.

Nick Stokes
Reply to  Eric Worrall
November 12, 2017 6:44 pm

“You have on many occasions made the unsupported claim that the Law of Large Numbers cancels errors.”
It’s not an unsupported claim. It is what the law of large numbers says. As sample size grows, sample mean approaches population mean. Or do you have a different version?
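That statement, as a three-line check (random scatter around a known population mean of 5.0):

```python
import numpy as np

rng = np.random.default_rng(2)
for n in (10, 1_000, 100_000):
    sample = rng.normal(5.0, 2.0, size=n)   # population mean 5.0, sd 2.0
    print(n, sample.mean())                 # sample mean closes in on 5.0
```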

Reply to  Eric Worrall
November 12, 2017 8:29 pm

When error is systematic, the distribution of the parent population is not known to be normal.

When the normality provision of the LLN is violated, reduction of uncertainty does not follow.
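A sketch of the systematic-error half of that point (a constant offset standing in for the systematic component; illustrative only): averaging many readings shrinks the random scatter but leaves the bias untouched.

```python
import numpy as np

rng = np.random.default_rng(3)
true_value, bias = 20.0, 4.0   # constant +4 offset plays the systematic error

readings = true_value + bias + rng.normal(0.0, 1.0, size=100_000)
print(readings.mean())   # ~24.0: the scatter averages away, the bias does not
```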

LdB
Reply to  Eric Worrall
November 13, 2017 1:28 am

Nick, would you like to see a physical example of your argument? Let’s have a quantum step of X, and let’s put the next quantum step size at Y. I have a photon level greater than X and less than Y. Fairly easy setup.

Do you know what will happen? It’s called Bell’s Inequality.

Here is the physics experiment we do with polarized light, and it’s a couple of minutes to explain.

Your QM radiative process works the same way: your photons have to match an exact energy level.

What it shows you is that the mathematics is not consistent in a classical sense, and taking statistics on an inconsistent mathematics just creates rubbish. You can use your statistics on the QM result, but you have to be careful just throwing statistics at an answer.

Nick Stokes
Reply to  Eric Worrall
November 13, 2017 1:47 am

“When normality provision of the LLN is violated”
There is no normality provision of the LLN.

paqyfelyc
Reply to  Eric Worrall
November 13, 2017 5:22 am

“There is no normality provision of the LLN.”
There is. Even Wikipedia knows that.

Nick Stokes
Reply to  Eric Worrall
November 13, 2017 5:36 am

“Even wikipedia knows that”
Then quote it, please.

jclarke341
Reply to  Eric Worrall
November 13, 2017 1:35 pm

“Pat Frank says once a year, but as JA says, there is no basis for that. It could be anything. And depending on what you choose, you can get any answer you like.”

Seems to me the unknown error becomes a problem in the next iteration of the model. What is the time step of the model iterations? That is the one and only answer. If you are looking at all of the models, one could make the point effectively by taking the average time step of all of them. Yet the argument is still effective if you choose a period of time that is greater than the average time step of the models. If you do so, and still get a result that falsifies the model, then it would certainly hold true with smaller time steps (more iterations per unit time) as well. If the time step in most climate models is less than or equal to one year, then Dr. Frank’s arguments are valid for the point he is trying to make, even if one year is ‘arbitrary’.

I have to admit, I do like most of your sentence, Nick, just not the subject. What if the subject was something more important than the time step of an error propagation? Then it could read something like: ‘The models all have a high climate sensitivity to changing CO2, but there is no basis for that. It could be anything. And, depending on what you choose, you can get any answer you like.’

Now there is some truth for you!

Nick Stokes
Reply to  Eric Worrall
November 13, 2017 8:59 pm

“What is the time step of the model iterations?”
About 30 minutes. And if you add the error at that rate, you’ll get error bars about a hundred times Pat’s already ridiculous levels.

But the reality is, this error does not compound in this way at all, as umpteen referees and editors have said. It’s just an ongoing uncertainty. That’s why you get only artificial and arbitrary results when you try to come up with a time scale.

November 12, 2017 12:32 pm

When the climate idiots and scaremongers reject you, that usually means you were right !

If a global warming nut ever read my climate change blog and agreed with anything I wrote,
it would ruin my day!

“Modern Climate Science” is wild guesses, scaremongering,
and character attacks paid for almost entirely by goobermints,
who can seize more power,
if people think a real climate catastrophe is in progress.

Most people on the planet are simpletons, easily fooled by religious and political leaders.

We currently live in the best climate ever for humans and animals — the only improvement would be adding more CO2 to the air to make our green plants happy — 1,000 ppm would be a good target.

You’d expect the liberals to lead the climate scaremongering — only dumb people who prefer socialism over free markets are capable of such a delusion!

The ‘coming runaway climate change catastrophe’ is the biggest fairy tale in human history, and is based entirely on wild guess computer game predictions … that have already been wrong for 30 years — because they assume CO2 controls the climate, when in fact, it is a minor variable … and may even have no measurable effect!

This is what the “warmunists” actually claim,
believe it or not … and it is very hard to believe:

1940: 
Natural climate change stops after 4.5 billion years of natural global warming and cooling (mainly cooling, and declining atmospheric CO2 levels) !
– Explanation: “Because we say so” (no explanation).

1940: 
Man made ‘aerosols’ suddenly dominate the climate.
– Explanation: “Because we say so” (no explanation). 

We had global cooling from 1940 to 1975:

1975: 
Man made aerosols suddenly ‘disappear’.
– Explanation: “Because we say so” (no explanation). 

1975: 
Man made CO2 suddenly dominates the climate.
– Explanation: “Because we say so” (no explanation). 

We had global warming from 1975 to 2000:
(mainly in the 1990s):

2000:
Man made CO2 goes on a temporary ‘vacation’.
– Explanation: “Because we say so” (no explanation).

We had a flat average temperature trend from 2000 to 2015:

2015 / 2016:
A warming spike — the cause is a natural, temporary “El Nino” (a cyclical Pacific Ocean heat release).

Warmunists bellow about the heat, but don’t mention that the cause is natural, the heat is local, and it is temporary.

We had a global warming spike in 2015 and 2016 caused by local El Nino warming in the Pacific Ocean.

The 2015/2016 El Nino warming peak global average temperature was only +0.1 degrees C warmer than the peak of the equally strong 1998 El Nino, 17 years earlier.

2017 to 20xx: 
Global warming is expected to return, 
with manmade CO2 expected 
to be dominating the climate again.
– Explanation: “Because we say so” (no explanation).

20xx (decades in the future):
Runaway global warming becomes unstoppable.
– Explanation: “Because we say so” (no explanation).

2xxx (hundreds of years in the future):
The end of all life on Earth!
– Explanation: “Because we say so” (no explanation).

Most of the above post was from my free, no ads,
climate blog for non-scientists, written as a public service:
http://www.elOnionBloggle.Blogspot.com

November 12, 2017 12:32 pm

Pat gets the net error wrong.

https://youtu.be/rmTuPumcYkI?t=18m2s

Reply to  Steven Mosher
November 13, 2017 8:42 pm

Steve Mosher has merely and uncritically reiterated Dr. Brown’s incorrect arguments.

I have posted my original reply to them downthread here.

For my full debate with Dr. Brown, see his personal website here.

Steve Mosher will not have understood any of it, possibly accounting for his excited declamations here.

However, those of you who understand algebra will be bemused seeing Dr. Brown’s attempt to save his argument by claiming that, when taking an average, the dimensions of a denominator are also in the numerator.

November 12, 2017 12:35 pm

Pat cherry picks only ONE element of energy balance

https://youtu.be/rmTuPumcYkI?t=20m56s

November 12, 2017 12:37 pm

More screwing up the errors by Pat

You folks get why Pat gets rejected? It’s because he is wrong.

https://youtu.be/rmTuPumcYkI?t=24m3s

Latitude
Reply to  Steven Mosher
November 12, 2017 1:26 pm

…is this like a movie in 6 parts?

Reply to  Steven Mosher
November 12, 2017 6:05 pm

Steve Mosher has linked only the youtube version of Patrick Brown’s video.

At the site of the original posting of that video, Patrick Brown and I had a long conversation.

I invite everyone to visit there and read through.

Patrick Brown’s analysis did not survive critical examination; something for which Steve Mosher has never demonstrated an aptitude.

photios
November 12, 2017 1:04 pm

James Annan: ‘I am confident that the author has already had this pointed out to them…’

Anyone who cannot distinguish between one and more-than-one cannot be a mathematician.
Anyone who can make the distinction but chooses not to (on, I presume, ideological grounds)
should not be reviewing scientific papers.

Tom Anderson
November 12, 2017 1:30 pm

“is such a shambles.” No “in.”
“Shamble,” in medieval times a market bench, particularly for meat. A collection of shambles (meat benches) was a slaughterhouse. Shakespeare: “Far be it from Richard to make a shambles of parliament.”

Ozonebust
November 12, 2017 1:57 pm

Pat
Put your findings in front of mathematicians for review, not climate scientists running models.
It is not a science problem, but one of compounding mathematical calculation.
Just a thought.

Nick Stokes
Reply to  Ozonebust
November 12, 2017 2:21 pm

“Put your findings in front of mathematicians for review”
From the article
“How does it happen that a PhD in mathematics does not understand rms (root-mean-square) and cannot distinguish a “±” from a “+”?”
His problem seems to be with people who understand mathematics.

Reply to  Nick Stokes
November 12, 2017 3:03 pm

Thanks Nick

Mark T
Reply to  Nick Stokes
November 12, 2017 6:11 pm

His problem is with people that are supposed to understand mathematics, but demonstrate they don’t. Big difference.

Nick Stokes
Reply to  Nick Stokes
November 12, 2017 6:21 pm

“His problem is with people that are supposed to understand mathematics, but demonstrate they don’t.”
So do you think failure to prefix a “±” shows lack of understanding? Do you always write RMS with a “±”?

November 12, 2017 2:45 pm

Held off commenting because wanted to see how thread developed, plus go back and reread Pat Frank’s previous guest post and his draft paper at issue. Conclusion: two wrongs do not make a right.
Annan and Hargreaves do in fact have multiple conflicts. Negative pal review at work. Wrong.
But, the paper is also full of errors needing substantial rework, as pointed out by several upthread. Wrong for Pat Frank to reject Ronan’s unconflicted feedback and think what he has done is fine when it isn’t. Mosher provides sufficient technical substance upthread for those that did not/cannot spot the paper’s problems themselves.

Reply to  ristvan
November 12, 2017 5:14 pm

FG, no. The only flaw in Mosher’s posted youtube explanation is point five, Hansen 1988, which uses an El Nino 2015 cherrypick to ‘justify’ a palpably wrong 1988 forecast. The first four points in the video are legitimate critiques.
I have guest posted here several times various ‘sound bites’ on unassailable climate model objections: unavoidable tuning drags in attribution assumptions, missing tropical troposphere hot spot, model sensitivity twice observed, and so on. Error propagation is esoteric, so of literally no use in the political debate, whether right or wrong. It is a political, no longer a scientific, debate. Adopt corresponding weapons. This complaint ain’t one.

Reply to  ristvan
November 12, 2017 6:24 pm

Steve Mosher has provided zero technical substance upthread, ristvan.

Patrick Brown’s analysis is wrong, as I demonstrated in our debate at his site.

Ronan’s feedback “thought experiment” required air temperature as the intensive variable in the uncertainty estimate. It isn’t. His idea shows a complete misunderstanding of what I did. Air temperature plays no part in the error propagation. Ronan’s thought experiment is nonsense.

In fact, the last sentence in Ronan’s contribution there, “Instead, the global warming projected by the CMIP5 models is mostly a consequence of rising GHG concentrations.” inadvertently validated my error analysis.

The propagation in fact involves fractional change in GHG forcing, which exactly follows from “rising GHG concentrations.”

Reply to  ristvan
November 13, 2017 8:33 pm

Rud, I’m impressed you didn’t catch Dr. Brown confusing a rmse statistic with a forcing.

FYI, I have posted my initial reply to Dr. Brown here.

Mosher’s posts are uncritical reiterations of refuted arguments.

My paper is not “full of errors” by any criterion stemming from Dr. Brown’s video criticism.

hunter
November 12, 2017 4:01 pm

Who was that academic at Mason University(?) who had the dubious climate hype NGO that was funneling money to his wife and/or kid?
This publication sounds more like an infomercial trade rag dressed up as a sciencey publication.
Move on and find a real publication to peer review it.
A math or stats or QC/QA ISO oriented publication.

Reply to  hunter
November 12, 2017 5:17 pm

Shukla and his wife and daughter. George Mason University in Virginia.

hunter
November 12, 2017 4:03 pm

There is no way that errors, if not honestly corrected, don’t propagate and grow in any system.

Reply to  hunter
November 12, 2017 5:18 pm

True. The question is how. And PF has the how just wrong, in my opinion.

Reply to  ristvan
November 12, 2017 8:24 pm

Your opinion is thus far unsubstantiated, ristvan.

Bartemis
November 12, 2017 4:17 pm

I would have to spend time studying this to render a verdict, time I do not have. So my prima facie impression could be off, but the nub of the issue seems to be poor definition on both sides.

The review says:

“The trivial error of the author is the assumption that the ~4W/m^2 error in cloud forcing is compounded on an annual basis. Nowhere in the manuscript it is explained why the annual time scale is used as opposed to hourly, daily or centennially, which would make a huge difference to the results.”

Yet, the paper clearly defines the error in units of W/m^2/year, implying a yearly evaluation. This error also appears to be propagated (no pun intended) in Mosher’s vid.

However, it appears both sides are basically showing a square-root dependence of this error, so they are both suggesting that we have broadband (essentially white) noise being integrated into a random walk, in which uncertainty builds as the square root of time. The units for the variable that quantifies such a process should be W/m^2/sqrt(time). E.g., if we have a process

dT/dt = k*W

with W in W/m^2 and k in K/(W/m^2), and W is afflicted with essentially white noise with spectral density sigma^2 in (W/m^2)^2/year, then T will experience a random walk with uncertainty k*sigma*sqrt(t) with t in years.

Since sigma is in W/m^2/sqrt(year), the units all work out to temperature in K.

So, it appears to me the reviewer is wrong – zero mean, wideband noise in an essentially cumulative system does propagate to a random walk. It appears that Dr. Frank has been careless with units. And the real question is whether the square root of spectral density of the disturbance is really as large as ±4 W/m^2/sqrt(year) or not.

Nick Stokes
Reply to  Bartemis
November 12, 2017 4:43 pm

“Yet, the paper clearly defines the error in units of W/m^2/year, implying a yearly evaluation.”
PF’s paper defines it so. But the actual number comes from Lauer and Hamilton, who give it as 4 W/m2. PF has added the /year, because he says that a quantity averaged over 20 years should acquire a /year unit. That seems to be where the yearly frequency comes from, but it is quite arbitrary. If you expressed the 20 years as 240 months, you’d have a monthly frequency.

Bartemis
Reply to  Nick Stokes
November 12, 2017 5:25 pm

Not if you used the correct units.

A parameter of 4 W/m^2/sqrt(year) gives you 4*sqrt(20) = 17.8 W/m^2 in 20 years.

A parameter of 4/sqrt(12) = 1.15 W/m^2/sqrt(months) gives you 1.15*sqrt(240) = 17.8 W/m^2 in 240 months which is 20 years.

Bartemis
Reply to  Nick Stokes
November 12, 2017 5:29 pm

So, the first question seems to be, is there a forcing variation of this magnitude from year to year? The next is, over what timeline can it be considered to induce a random walk before limiting factors assert themselves?

Nick Stokes
Reply to  Nick Stokes
November 12, 2017 5:39 pm

Bart,
“A parameter of 4 W/m^2/sqrt(year)”
Well, that is a new one. It’s a variant of the notion that if you average something (W/m2) over 20 years, you somehow get different units. But there is still the problem, why sqrt(year)? Why not sqrt(month) or sqrt(sec)?

Lauer and Hamilton, who actually derived the number, were having none of this. Their units were W/m2.

The sqrt() aspect occurred to me when I was contemplating Pat Frank’s proposition that
“The average height of people in a room is meters/person, not meters.”
The logic of that says the standard deviation is in meters/sqrt(person).

Bartemis
Reply to  Nick Stokes
November 12, 2017 6:10 pm

Continuous time white noise does not actually exist in nature, as it requires infinite energy. However, many processes can be considered “white” if they are uniform over a wide band of frequencies of interest.

Continuous zero mean white noise is quantified in units of whatever quantity it represents per square root of frequency (inverse time). This is because the autocorrelation of a white noise process w(t) is

E{w(t2)*w(t1)} = sigma^2 * delta(t2-t1)

where sigma is the quantifying parameter, and delta(t2-t1) is the Dirac delta function. The Dirac delta function is a function that integrates to unity, so it has units of inverse time. Thus, e.g., when you multiply the quantity units squared per sec^-1 times a Dirac function in units of sec^-1, you get the quantity units squared.

A random walk is a sampled Wiener process, which can be thought of as the integral of white noise. It has uncertainty increasing as the square root of time. So, when you multiply the units of sigma times those of the square root of time, you get the units of the quantity.

We measure the spectral density of a white noise process using the PSD. If we perform a PSD on the variability in the rate of change of forcing, and if it is white noise, it will produce a flat line at a level of some (W/m^2/year)^2/frequency_units. If frequency_units is scaled to years^-1, then that gives (W/m^2)^2/year. The square root of that is the sigma value, in W/m^2/sqrt(years).

Integrating the rate data would then produce a random walk with parameter in units of W/m^2/sqrt(year), and the uncertainty would increase with the square root of time.

The lack of fractional time units is telling me the people involved have neglected the most basic of validation methods one learns in undergraduate studies – the units have always got to match up. Whatever mathematical functions are applied, the units must map appropriately.

Bartemis
Reply to  Nick Stokes
November 12, 2017 6:17 pm

I can’t get too involved in this right now. But, my basic proposition is that if you have a random walk that has uncertainty increasing as

sigma*sqrt(t)

then, sigma must be in units of the quantity per square root of time.

What I suspect may be a problem is that the variation in rate of change is probably not white. If it has zero energy at low frequency (as it would if it is the rate of change of a white noise process), then it will not integrate into a random walk.

Reply to  Bartemis
November 12, 2017 5:53 pm

Nick is still bemused by the fact that in a time average, every lesser time interval within the averaged duration has the identical magnitude.

Every second is ±4 W/m^2 in the 20-year average, Nick. For you the time unit is then, equally, ±4 W/m^2/sec, isn’t it? What a mystery!

This numerical conundrum was explained to you, and again here, and micro6500 tried to explain it to you more than just that once.

And you’ve never gotten it.

Nick, “who give it as 4 W/m2” Wrong. Lauer and Hamilton give it as the rmse of the 20-year multimodel annual mean. Annual root-mean-square-error = ±4 W/m^2/year, not your positive sign value.

It appears you never lose an opportunity to misunderstand something in a convenient way, Nick.

Do you still insist that thermometers have infinite resolution, too?

Nick Stokes
Reply to  Pat Frank
November 12, 2017 6:04 pm

” Lauer and Hamilton give it as the rmse of the 20-year multimodel annual mean. Annual root-mean-square-error = ±4 W/m^2/year, not your positive sign value.”
Here is where they give it. No ±, no /year.

Reply to  Pat Frank
November 12, 2017 7:17 pm

RMSE, ‘a measure of the spread of the residuals.’

RMSE, ‘a measure of the spread of the y values around the average.’

RMSE, ‘the square root of the mean squared error … is the statistic that determines the width of the confidence intervals for predictions … so a 95% confidence interval for a forecast is approximately equal to the point forecast “plus or minus 2 standard errors”–i.e., plus or minus 2 times the standard error of the regression.’

Guess what ‘spread of values’ means.

Guess what ‘width of the confidence interval’ about a value means.

You’re wrong, Nick, and obviously wrong.

L&H’s “rmse = 4 W/m^2” denotes a confidence width: ±4 W/m^2.

Nick Stokes
Reply to  Pat Frank
November 12, 2017 7:25 pm

“‘width of the confidence interval’”
So how could a width be negative? But as for
“L&H’s “rmse = 4 W/m^2” denotes a confidence width: ±4 W/m^2.”
It denotes a rmse. And it is 4 W/m2, just like they said.

Reply to  Pat Frank
November 12, 2017 8:22 pm

Nick, “So how could a width be negative?”

Can an error be negative, Nick?

Can you guess the sign designation of the rms of positive and negative physical errors?

That’s called rmse. Its sign designation is ±.

Always.

Reply to  Bartemis
November 12, 2017 6:43 pm

In L&H, the dimensional analysis for cloud cover of a given simulation is: (cloud-cover)/grid-point × 1/year × 1/model × grid-points/globe = (cloud-cover) year^-1 model^-1 globe^-1.

The individual observational dimension is cloud-cover/grid-point/year.

The annual mean error for any given model is the difference between simulation and observation, which is Δ(cloud cover/grid-point/year).

The global rmse calculated from the mean annual errors of all the models is then of dimension sqrt{[Δ(cloud cover/year)]^2} = ±(cloud-cover)/year.

Error in ±cloud-cover/year is converted into error in long wave cloud forcing, in units of ±W/m^2/year. There’s no sqrt(year) dimension in the error metric.

Nick Stokes
Reply to  Pat Frank
November 12, 2017 6:51 pm

L&H is your source, and they give the numbers in exactly the same format as James Annan used. You wrote of that
“We can note his very unprofessional first sentence and bypass it in compassionate silence.
He wrote, “… ~4W/m^2 error in cloud forcing…” except it is ±4 W/m^2 not Dr. Annan’s positive sign +4 W/m^2. Apparently for Dr. Annan, ± = +.”

L&H is your source for the numbers. You obviously didn’t bypass that in compassionate silence.

Reply to  Pat Frank
November 12, 2017 8:17 pm

James Annan is just making the same mistake you are, Nick: ignoring the meaning of root-mean-square calibration error. It’s no big mystery.

Anyone who understands calibration experiments knows what the rmse means.

You don’t understand them. Neither does James Annan. And why should you? You have no training.

That doesn’t stop you carrying on in ignorance, though.

Nick Stokes
Reply to  Pat Frank
November 12, 2017 8:24 pm

“You don’t understand them. Neither does James Annan.”
Continuing to avoid two things
1. How does it happen that your source for the number, L&H, expresses it as a simple positive number, just as JA and I do?
2. My challenge – just point to some reputable source somewhere that expresses the number for RMS as other than a positive number.

Reply to  Pat Frank
November 12, 2017 9:01 pm

Nick, “Continuing to avoid two things
1. How does it happen that your source for the number, L&H, expresses it as a simple positive number, just as JA and I do?”

Search on my posts: I have never avoided either of those things.

You continue to take that number out of the context L&H provided: “rmse = 4 W/m^2”.

You continue to ignore the invariant meaning of rmse, which is plus/minus.

2. My challenge – just point to some reputable source somewhere that expresses the number for RMS as other than a positive number.

Provided for you here, third link.

Here’s another, Wiki itself: “In experimental sciences, the [plus/minus] sign commonly indicates the confidence interval or error in a measurement, often the standard deviation or standard error. The sign may also represent an inclusive range of values that a reading might have.

Note the reference to the standard deviation.

Here’s Wiki’s URL: https://en.wikipedia.org/wiki/Standard_deviation

SD is the rmse conditioned with loss of one degree of freedom and is plus/minus. Search the page for “±” and you’ll find it in use.

The case is now closed, and in your disfavor.

Nick Stokes
Reply to  Pat Frank
November 12, 2017 9:28 pm

Pat,
“Provided for you here, third link.”
No. As I predicted, you provide a definition of confidence interval in terms of the standard error. They say
“so a 95% confidence interval for a forecast is approximately equal to the point forecast “plus or minus 2 standard errors”–i.e., plus or minus 2 times the standard error of the regression.”
Clearly there the standard error is positive. How does ±±4 make sense?

Your first wiki link again is defining the use of ± in a confidence interval, not the sd. On the second wiki page, there are indeed a number of standard deviations quoted. Each one is a simple, positive number, eg
“Their standard deviations are 7, 5, and 1, respectively. “
“For example, the average height for adult men in the United States is about 70 inches (177.8 cm), with a standard deviation of around 3 inches (7.62 cm).”

My challenge was to find someone actually specifying an RMS with a ± in front. Lack of that is what you said was Annan’s failure that demonstrated his incapacity.

Reply to  Pat Frank
November 12, 2017 10:22 pm

Nick, rather, yes.

My bold throughout below.

The Wiki RMS page says, “The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values.

The standard deviation page says, “If a data distribution is approximately normal then about 68 percent of the data values are within one standard deviation of the mean (mathematically, μ ± σ, where μ is the arithmetic mean), about 95 percent are within two standard deviations (μ ± 2σ), and about 99.7 percent lie within three standard deviations (μ ± 3σ).

Anyone can make the obvious logical link: rmse = standard deviation = ±.

Is it anyone but you, Nick?

Reply to  Pat Frank
November 12, 2017 10:52 pm

Here’s another refutation for you, Nick, this time with some irony.

T. Chai and R. R. Draxler (2014) Root mean square error (RMSE) or mean absolute error (MAE)? – Arguments against avoiding RMSE in the literature Geosci. Model Dev. 7, 1247-1250.

Note the journal.

Chai and Draxler mention that, “One distinct advantage of RMSEs over MAEs is that RMSEs avoid the use of absolute value, which is highly undesirable in many mathematical calculations.”

Guess what “avoid the use of absolute value” means.

Chai and Draxler’s Table 1 lists some test RMSEs and MAEs, and all of them are presented as absolute values.

Does that mean Chai and Draxler contradicted themselves?

Or does it mean that it is so obvious that rmse is ± (not an absolute value) that they don’t feel a need to display the ±?

MAE and RMSE are defined in their eqns. 1 and 2.

1) MAE = 1/n(sum over |e|), where |e| is absolute value of error.
2) RMSE = sqrt[1/n(sum over e^2)]

RMSE is not given as a Nick Stokesian absolute value, i.e., |sqrt[1/n(sum over e^2)]|.

Guess why the distinction is important. Because “sqrt” always produces “±,” and that’s what Chai and Draxler meant to convey.

Reply to  Pat Frank
November 12, 2017 11:48 pm

Here’s a paper, Nick, where the obvious is stated explicitly.

P. Ineichen, et al. (1987) “The Importance of Correct Albedo Determination for Adequately Modeling Energy Received by Tilted Surfaces” Solar Energy 39(4) 301-305.

p. 302, “Each point [in figures 1(a) to 1(f)] is surrounded by plus/minus one relative root mean square deviation (RRMS).

In their Table 2, the RRMSs are given as positive values.

Let’s see, does that mean RRMS is sometimes ± and sometimes +?

Or does it mean that people write the positive value as a convention, knowing their audience understands that rms means ±?

Ineichen, et al., distinguish RRMS from relative mean bias error, by the way, which actually does have a plus or minus sign attached to each of the values.

No ground left for you, Nick.

There never was.

Nice try diverting the challenge away from meaning and into convention, though.

Nick Stokes
Reply to  Pat Frank
November 13, 2017 12:26 am

Pat,
“Guess what “avoid the use of absolute value” means.”
It’s very clear. Here is more context:
“One distinct advantage of RMSEs over MAEs is that RMSEs avoid the use of absolute value, which is highly undesirable in many mathematical calculations. For instance, it might be difficult to calculate the gradient or sensitivity of the MAEs with respect to certain model parameters.”
They avoid the abs value function because it is not differentiable at zero, while RMS is differentiable everywhere.

But again, every RMSE that is quoted is a simple positive number. And then there is this:
“When both metrics are calculated, the RMSE is by definition never smaller than the MAE. “
Now the MAE is just the mean of absolute values, and cannot be negative. And, they say, the RMSE is not less. Actually, the fact that they talk about ordering at all clearly means RMSE is not ±.

So remember, yet again, you excoriated Annan because he showed a RMSE without ±. Yet every link you have shown does exactly the same.

And Ineichen – same story. Look at table 2. A page full of RRMS. And each one expressed as a simple positive number. Just like JA.

P 302 – again just one of these cases where they express the CI in terms of ±RRMS. That defines the range, but RRMS is positive. Else again you would have ±±.

So you are reduced to saying – well, they give a positive number, but we know they plan to use it as ± in a CI. Well, even if so, the same could be said of Annan. They are doing exactly the same as he did.

Reply to  Pat Frank
November 14, 2017 12:27 am

Nick, I’ve further investigated your preposterous claims about rmse.

Not to establish you’re wrong throughout, already known, but just to ameliorate the question you will have generated in the minds of those less familiar with physical science.

So, then, let’s proceed.

In Bevington, P. R., and D. K. Robinson (2003), Data Reduction and Error Analysis for the Physical Sciences, 3rd ed., McGraw-Hill, Boston, pp 10-11: The standard deviation (SD) is a measure of the dispersion of the observations about the mean.

The standard deviation is the root mean square of the deviations…

An example on page 22 explicitly gives SD as ±.

Thus: RMSE = standard deviation = ±σ.

Next: Chapter 3 of my old copy of Skoog and West, Fundamentals of Analytical Chemistry, discusses statistical treatment of data. In text examples of RMSE/SD unambiguously require the ±.

Interestingly the tables of rmses provide only the positive values, while their in-text illustrative use includes the ±.

It’s clear that the tabulation of positive values for rmse is merely a convention, where the ± is present but implied.

The JCGM (100:2008), Evaluation of measurement data, defines SD as the dispersion of measurements about a mean and recommends reporting it as the positive root.

But within the JCGM, the illustrative usage SD is always ±σ.

Next: In H. W. Coleman and W. G. Steele (1995) Engineering Application of Experimental Uncertainty Analysis AIAA J. 33(10) 1888-1896, every use of SD is ±σ.

Coleman and Steele also specify propagation of error as the root-sum-square.

Next, the 2007 Guide to the Expression of Uncertainties for the Evaluation of Critical Experiments, published by the International Criticality Safety Benchmark Evaluation Project (ICSBEP):

p. 6 says, Decision theory tells us that if the distribution is to be summarized by just two numbers, it is best to give its mean x_avg and its variance var x = mean[(x − x_avg)^2], and to state the experimental result as x_avg ± Δx, where Δx ≡ sqrt(var x) is the standard deviation (root-mean-square error).

That’s definitive, isn’t it.

L. Lyons (1991), A Practical Guide to Data Analysis for Physical Science Students, Cambridge U., along with the usual equations on page 12, says, Thus σ is the RMS (root mean square) deviation from the mean and is also known as the ‘standard deviation’ of the mean.

And after a long discussion out to page 17, gives an example of method and usage reporting x = μ±σ.

Those examples completely settle the question that never had any real need for settling.

The only question remaining, Nick, is whether you as an astute mathematics guy really didn’t know that rmse is always ±σ, or was the tabular convention an occasion for some consciously opportunistic dissemblance.

Reply to  Pat Frank
November 14, 2017 1:38 am

Chai and Draxler prefer RMSE to MAE when model errors are normal. They don’t ground their preference at all in differentiability.

They also point out in their footnote 1 that the standard error (SE) is equivalent to the RMSE under unbiased error.

Standard error takes the ±. So does standard deviation. See https://en.wikipedia.org/wiki/Plus-minus_sign

Or do you now claim that standard error along with standard deviation are also positive-sign only?

Nick “And Ineichen – same story. Look at table 2. A page full of RRMS. And each one expressed as a simple positive number. Just like JA.

The same RRMS they describe as plus/minus and plot as plus/minus.

Here’s how Ineichen describe the RRMS in Table 2: “relative root mean square deviation (RRMS) describing the short-term fluctuation around the average bias” (my bold).

Do fluctuations about an average bias have only positive excursions? Does that sound statistically valid to you? It’s very clear that the Table 2 RRMS values include the implied ±.

The positive values in Table 2 merely follow tabular convention.

So you are reduced to saying – well, they give a positive number, but we know they plan to use it as ± in a CI.

I’m left saying that the RRMS is ± throughout, obviously so in Ineichen.

You are left with strained malaprops of obvious statistical meaning.

Your “challenge” Nick “was to find someone actually specifying an RMS with a ± in front.

I did so with Ineichen, and in response you shifted your ground.

Well, even if so, the same could be said of Annan. They are doing exactly the same as he did.

Ineichen are obviously not doing so. Ineichen are presenting RMSE as ± throughout.

Annan’s extended comment, “…this is what underpins the use of anomalies for estimating change,” clearly requires that he saw the rmse ±4 W/m^2/year calibration uncertainty statistic as a constant positive sign offset error in forcing.

He’s wrong on sign; wrong on error; wrong on forcing.

And you’re wrong, too, Nick.

November 12, 2017 4:52 pm

Have other people published papers about the propagation of errors in climate computer models? If not, that answers the question.
BTW, this sort of thing goes on all the time in every area. I gave up trying to publish anything in pathology because it wasn’t worth publishing something everybody knows, and if you try to publish something that is original or goes against the consensus, it gets rejected.
Yeah, it is corrupt. That is why you really need a marketplace of ideas.

Reply to  Joel
November 12, 2017 6:17 pm

Yes they have. Google is your friend. Gosh, even I have.

Reply to  ristvan
November 12, 2017 9:03 pm

No, they have not. They have calculated run standard deviation about an ensemble mean. That’s not the same thing at all. AT ALL.

Reply to  ristvan
November 15, 2017 7:52 pm

And you haven’t propagated errors through climate model simulations, either, Rud.

M Montgomery
November 12, 2017 5:05 pm

It boggles the mind what still goes on in this climate space on a daily basis, despite real scientists finally coming out of the woodwork to dispute the consensus ever more fervently (and less afraid of being fired) since Trump was elected. The alarmists are ever more emboldened today. They have been able to latch on and ride the Trump Derangement Syndrome to justify their continued false narrative, and its corresponding gravy train. It’s way too easy for them to continue the dishonesty given the money and marketing that have created such a vast following at this point.

daved46
November 12, 2017 5:13 pm

I’m not sure whose side I’m going to be supporting, but I find a large problem in terms of understanding what is being dealt with here, and it begins with W/m^2, watts per meter squared. Most everyone knows what a kilowatt-hour is. It’s the thing you pay the power company for; a tangible entity. Now divide that by a time unit, say a year, and we have kilowatt-hours per year, which is a rate. So the watt in W/m^2 carries a value which is not independent of the time over which we are measuring. If we want to work with watts, we will have a tangible entity which will accumulate over the time scale we’re working with. I think this is what Pat is alluding to: the energy change from not being able to have a good measure of cloud forcing will produce some sort of system change which accumulates over time.

Michael Carter
November 12, 2017 5:36 pm

cdquaries wrote:
“The kind of stuff done by climate scientists would have gotten an “F” in my chemistry lab classes. Yeah, the lab classes where previously described experiments get replicated. So when I do a bomb calorimetry setup and get a different number from the book, if I’ve done my error analysis correctly, I can then show why my result differed”

As someone who is not a mathematician, but rather an observer of natural systems (a geologist), I find this the most sensible post in an intriguing topic. Surely there are enough brains here to formulate an experiment that would validate or debunk Pat’s theory. Now that would be fun. The possibilities are endless and require an appreciation of the influence of variables. This is science, not just number crunching!

RACookPE1978
Editor
November 12, 2017 6:01 pm

Pat Frank

I recommend a practical experiment that compares radiation input values, and model output values.

The solar physics community maintains that changes in top-of-atmosphere (TOA) radiation levels since satellite measurements began are NOT large enough to have affected the earth’s global average temperature. Let us assume that is true.

Thus, according to these solar physicists, the actual TOA radiation has not changed since the mid-1980’s.

http://spot.colorado.edu/~koppg/TSI/TSI.png

However, the MEASURED TOA radiation levels HAVE CHANGED substantially during that period – decreasing from the 1985 levels of 1372 watts/m^2 to today’s 1362 watts/m^2. This is because, the solar scientists claim, the original instruments were calibrated incorrectly, and so “more recent” TOA radiation levels are now recorded as 10 watts/m^2 LESS than the original readings. The plot shows Total Solar Irradiance levels; TOA values are proportional to TSI, but vary according to each day’s distance from the sun around the earth’s orbit. (The alternative assumption – that actual solar energy levels have decreased by 10 watts/m^2 since 1985 – is rejected.)

According to the climate alarmist community, the computer climate model runs completed this year produce the same results (with the same uncertainty!) as the early model runs completed between 1980-1990. According to the same climate alarmist community, the only valid changes between the first model runs in 1985-1990 and today are the “start level” assigned to CO2 and – we assume – the initial input temperature conditions: today may be as much as 0.30 degrees warmer (on average) than in the 1970’s.

Now, every climate model can only be “theoretical”; NO climate model can measure or reflect the real world. Thus, each year’s run of every climate model at every computer laboratory across the world MUST USE some assumed TOA radiation value, assigned for the day it is programmed and for the minute its program starts running the near-infinite finite-element feedback loops inside each computer model. If ANY climate model is running on last year’s TOA value, the last decade’s TOA value, or a TOA value now 30 years out of date, it MUST BE discarded, right? If ANY climate model is run with an “invalid”, totally wrong TOA radiation value, the predicted results of that model running with invalid data are equally invalid, right?

NO self-claimed “climate scientist” can use a 100-year future prediction foretelling doom and (literally) “the end of life on this planet” based on a forecasted 3 watt/m^2 increase in the earth’s average heat balance, if the “year 0” of that 100-year prediction begins 10 watts/m^2 too large. Or can they?

The climate forecast model results from the early 1980-1990-2000 years are the same as from 2015-2016-2017, are they not? A climate model run in 2017 at 1362 watts/m^2 – a 10 watt/m^2 DECREASE in TOA solar radiation – yields the same end temperature as a model run in 1995 using 1372 watts/m^2?

How can they pretend a 3 watt/m^2 “forcing” due to an increase in CO2 levels makes a life-ending change, when a 10 watts/m^2 decrease in input energy levels makes no difference at all in their predicted temperatures (er, climates) 83 years from now?

(Or are they so careless with input parameters that they haven’t noticed every model run since 1996 has been dead wrong?)

paqyfelyc
Reply to  RACookPE1978
November 13, 2017 6:06 am

That’s because “climate scientist” models basically assume equilibrium in the first place, before any anthropo “forcing”. So if solar scientists tell them that TSI is 1372, or 1400, or 1320, or whatever, they assume it just cancels out with the relevant outgoing radiation, no matter what. Et voilà.
They effectively think like, “Nature is at equilibrium for whatever natural values TSI etc. have. Now let’s look at the effect of human forcing.”

Reply to  paqyfelyc
November 14, 2017 12:17 pm

It’s like no one ever taught them the dangers of inference from incomplete knowledge.

reallyskeptical
November 12, 2017 8:30 pm

I have always thought that no matter how lame a manuscript, it can get published somewhere. Frank’s paper disproves this hypothesis.

November 12, 2017 8:43 pm

Earlier on WUWT there was chat between Nick Stokes and self about how much you can interpolate between data points – see
https://wattsupwiththat.com/2017/11/08/the-uscrn-revisited/#comment-2659942
I asked Nick – “Do you ever reach a stage when you say, I shall not publicise this matter because the data are not good enough to support it?”
Part of his reply was “No. The data are what they are. I am not responsible for them; I just try to show them as clearly as possible.”
Nick’s response floored me. We are dealing with science, not with a popularity chat show. If I know that data are false, I do not publish them. Nick is more liberal.
This interchange again reminded me of how different sectors of the scientific community can be. There seems to be a group that thinks about responsibility more than “I am not responsible”. I’m part of that group. One of its properties is that accountability enters the equation. Accountability can be measured many ways, but I prefer the old way, by dollars, because that’s part of the reason why dollars were invented.
So I am trying to see if a geological anomaly I have found is likely to be a success. I estimate grade and shape and a host of other factors, some of them by interpolation between data points. Arriving at a summary, I say that there is a high probability that the geological thing can now be regarded as a resource rather than as a reserve, and I give estimates of grade and tonnes and economics and other relevant factors, like whether I am an independent consultant or whether I have a possibility of financial gain through covert acts.
Meanwhile Nick might be classed in another group. Its workers, research scientists in government organisations, might feel a general compulsion to do their work well, but they find it hard to express their work’s value in hard dollars because that is not a big part of their function. Their accountability is smaller – if their results are right or wrong, life goes on, and the government still pays the wages.
But, if I knowingly make a false assertion about my ore deposit, I can go to jail for fraud.
To me, it is wrong to publicise data that I know to be wrong. Nick’s ‘group’ seems to be less troubled: with data, it is what it is; wash my hands of accountability.
I find this difference to be astounding. It occurs again on this thread, where Nick notes at 9.52 a.m., “In science, as in life, we never have perfect knowledge. We have imperfect knowledge of the initial state, and imperfect knowledge of the climatology.” So, in essence, he is saying that he allows knowingly faulty information to go forward. There might be times when that is needed, but they must be qualified by careful, correct statements of the errors involved. This is part of Pat’s point.
I am finally drawn to the conclusion that there are, in this climate science field, groups of participants whose approach to scientific life ranges from rigid, with punishment for mistakes, to laissez-faire: close enough for government work, and to hell with the consequences.
That difference cannot be resolved with all the good will and thoughts of contributors to WUWT.
Its cure is more surgical, like a frontal lobotomy, though I’d prefer a bottle in front o’ me.
Geoff

LdB
Reply to  Geoff Sherrington
November 12, 2017 9:17 pm

I think the more scary part is the intent of the review process. I made the point to Nick above that none of the truly great papers in physics would stand up to a semantic attack like this; you would end up rejecting them all. The alarming part to me is that the discussion is not about the intent of a paper, it’s about semantics.

The danger here is that Nick and his friends would reject a paper if you wrote -$10 or -240V RMS. They are applying a literal interpretation in which the quantities can only be positive. The fact that the terms have unambiguous meaning, and are shorthand used by huge sections of the real world, is not important: go to the physical definition and it says they can only be positive quantities.

So a paper may be absolutely unambiguous about what it says and be very important but out it goes because it doesn’t meet the definitions as used by the gatekeepers.

Nick Stokes
Reply to  LdB
November 12, 2017 9:43 pm

“Nick and his friends would reject a paper if you wrote -$10 or -240V RMS it would be rejected”
No, they would ask for a change, as Ronan Connolly did. That is routine. It is Pat Frank’s responses to such requests that gum up the works.

But again, it is Pat with the semantics here. He is the one that blasted James Annan for writing ~4 W/m2.

Reply to  LdB
November 12, 2017 9:53 pm

Pointing out his (and your) basic sign error is not semantics, Nick.

LdB
Reply to  LdB
November 12, 2017 9:59 pm

Yeah, I have problems with where this goes. If classical physics had employed these tactics, they could have stopped QM papers by simply stating that energy is an absolutely positive thing and the universe is 3-dimensional. You have to be able to argue new meaning into quantities; nothing is fixed in stone, as we found in QM when we needed to add ket notation.

It will be really interesting with the expanding branch of Quantum Thermodynamics: you aren’t going to be able to use it in climate science because it won’t conform to your standards 🙂

Reply to  LdB
November 12, 2017 10:17 pm

Who said submitting your paper over and over again, expecting different results, was…?

LdB
Reply to  LdB
November 12, 2017 10:22 pm

Here is Nick in history.

Einstein: Dear Nick I attach my theory of General relativity.
Nick: Sorry Mr Einstein, we reject your paper; it implies a 4th dimension and the universe has only 3.
Einstein: Yes Nick that is the point of the paper I am showing there is a hidden 4th dimension.
Nick: Well if you go to the current textbook and look it clearly states the universe is only 3 dimensions you have made a silly mistake .. paper rejected.
Einstein: You don’t seem to be understanding, I am saying the text books are wrong.
Nick: They are textbooks Mr Einstein and they clearly define the universe as 3D … paper rejected.

Nick Stokes
Reply to  Geoff Sherrington
November 12, 2017 9:39 pm

“If I know that data are false, I do not publish them.”
I don’t know or believe that the data are false. On the contrary. But they are not my data. I generally think they are good. I graph them in various ways, as with the WebGL display of that thread. People need to know what they are.

This actually goes on endlessly at WUWT. People pile on to see how scornful they can be about temperature measures. Yet the posts are full of graphs of temperature data. If you can’t see what the data is, how can you talk about it?

Reply to  Nick Stokes
November 13, 2017 12:01 am

Nick Stokes “I don’t know or believe that the data are false.”

Nick, we were doing interpolation a while back. The mathematical method named Geostatistics was developed in part because of frequent questions like “How far apart can 2 data points be before one loses predictive power for the value of the other?” and the related “Is my sampling adequately dense, or do I need to resample and infill?”
Geostatistics evolved with hope for it as a better way to treat a problem, the inference being that there was a prior problem that needed fixing. If, as can happen, there are 2 representations of data, there is a probability that one is better than the other. How you define ‘better’ is more semantics, but it is usually associated with proficiency in understanding existing art. Patent examiners, for example, meet the problem of prior art, and some become capable enough to know, of two inventions, which one is better than the other.

You are trying to tell me that a researcher cannot know that one representation of data is better than another. I do not buy this. If it were so, we would not have the concept of peer review of journals.
So, we can have one representation that is better than another, the lesser of which, for brevity, I earlier labelled ‘false’. False in the sense that representing it as ‘best’, knowing it is not, is not scientifically good.

It is also not scientifically good to publicise concepts like GCMs as admissible devices if the errors involved are known to be too large for them to be useful. That is the type of falsity that should be considered for punishment.

You can plead before a court that you did not know that the errors were so large; that you did not know how to estimate them; that you thought you could just treat them as data; that you did not think you could be accountable for their failure; that you did not think it wrong to promote them without estimation of the financial consequences.

No, you cannot get off with arguments like that. You should know, before you try to influence others. Otherwise, you debase science and the populace.
Geoff.

paqyfelyc
Reply to  Nick Stokes
November 13, 2017 5:25 am

+1 Geoff Sherrington

Dr. S. Jeevananda Reddy
November 12, 2017 9:53 pm

Please find below the publisher’s details for my hard-cover book, “Climate Change and its Impacts: Ground Realities”:

http://www.bspublications.net/downloads/059e46035c9c40_Climate%20Change%20and%20its%20Impacts%20Ground%20Realities.pdf

Dr. S. Jeevananda Reddy

BillP
November 13, 2017 3:03 am

It is not often that I find myself agreeing with climate modellers, but I do this time.

I think that Pat Frank has misunderstood what the ±4 W/m^2 figure means, and hence its effect on climate models, for reasons already stated by others.

I caution other sceptics that supporting faulty criticism of climate models weakens our case against them; as it helps the modellers pretend that all criticism is faulty.

Pat Frank’s decision to accuse one of the reviewers of bias and to claim that bias is the sole reason his paper was rejected also damages the sceptic cause.

Reply to  BillP
November 14, 2017 1:55 am

I think the ±4 W/m^2 means what Lauer and Hamilton present it as: the 20-year annual mean uncertainty in simulated long-wave cloud forcing.

What do you think it means, BillP, and why?

I didn’t accuse the reviewers of bias. I showed they have a serious conflict of interest.

I’ve also never claimed that bias is the sole reason for rejection. I didn’t direct the word “bias” at them at all, in my post.

In my prior posts on the topic of rejection, here and here, I present evidence that the rejections were grounded in incompetence.

michael hart
November 13, 2017 4:51 am

With apologies…

So, so you think you can tell
Heaven from hell
Blue skies from pain
Can you tell a green field
From a cold steel rail?
A smile from a veil?
Do you think you can tell?

November 13, 2017 8:27 pm

Steve Mosher brought up Dr. Patrick Brown’s critique in several posts, starting here.

Rud Istvan has also weighed in positively on Dr. Brown’s critique, especially here and here.

Dr. Brown mistakes the rmse ±4 W/m^2/year calibration uncertainty statistic as a positive sign 4 W/m^2 forcing error. This misconstrues a statistic as an energy. This mistake alone is fatal to pretty much his entire argument.

Steve Mosher knows nothing of science. His attacks on me always display that ignorance. He regularly and uncritically quotes disproven criticisms. So, it’s no surprise that he should post excited declamations about Dr. Brown’s video.

However Rud is pretty well trained, and I’m surprised he didn’t catch Dr. Brown’s obvious misconstrual of a statistic for an energy, and uncertainty for physical error.

As these two have brought Dr. Brown’s video to the fore, I’ve decided to post my opening critique of it here as a reference for those interested in the debate here.

I’ll link this post upthread at Mosher’s and Rud’s comments.

The conversation at Dr. Brown’s site continued after the post below. Dr. Brown’s argument did not prevail. It got pretty desperate with his claim that in a division, the numerator includes the dimension of the denominator. But in any case Dr. Brown never accepted that a rmse statistic is not a forcing.
+++++++++Post Follows+++++++++++
Before proceeding, I’d like to thank Dr. Brown for kindly notifying me of his critique after posting it. His email was very polite and temperate; qualities that were very much appreciated. His video critique is thoughtful, very reasoned, and very clear and calm in presentation. Dr. Brown gave an accurate summary of my method. I also gratefully acknowledge Dr. Brown’s scientific integrity, very apparent in his presentation and especially in his deportment.

I also acknowledge that, in the first several minutes of his presentation, Dr. Brown correctly described the error propagation method I used.

I’ll begin by noting that my presentation shows beyond doubt that GCM global air temperature projections are no more than linear extrapolations of greenhouse gas forcing. Linear propagation of error is therefore directly warranted.

GCMs make large thermal errors. Propagation of these errors through a global air temperature projection will inevitably produce large uncertainty bars.

Even an uncertainty of ±1 W/m² in tropospheric thermal energy flux will propagate out to an uncertainty of ±4.3 C after 100 years, which is about the same size as the ~4 C mean 2000-2100 anomaly from RCP 8.5, and about 4 times the projection uncertainty admitted by the IPCC.

Before proceeding to specific points, I’ll mention that in minute 12:35, Dr. Brown observed that the ±17 C uncertainty envelope in RCP 8.5, derived from long wave cloud forcing (LCF) error, is, “a completely unphysical range of uncertainty, so it’s totally not plausible that temperature could decrease by 15 degrees as we’re increasing CO₂. And it’s implausible as well that temperature could increase by 17 degrees as we’re increasing CO₂ under the RCP 8.5 scenario. But as I understand it, this is the point Dr. Frank is trying to make.”

A temperature uncertainty statistic is not a physical temperature. Statistical uncertainties cannot be “unphysical” in the sense Dr. Brown implies. The large uncertainty bars do not indicate possible increases or decreases in air temperature. They indicate a state of knowledge. The uncertainty bars are an ignorance width. I made this very point in my DDP presentation, when the propagated uncertainty envelopes were first introduced.

It is true that the very large uncertainty bars subsume any possible future air temperature excursion. This condition indicates that no future air temperature can falsify a climate model air temperature projection. No knowledge of future air temperature is contained in, or transmitted by, a climate model temperature expectation value.

Dr. Brown continued, “So he’s essentially saying that when you properly account for the uncertainty in the climate model projections, the uncertainty becomes so large so quickly that you can’t actually draw any meaning from the projections that the climate models are making.” On this, we are agreed.

The assessment below of Dr. Brown’s presentation is long. To accommodate readers who do not wish to read through it, here’s a summary. Dr. Brown has:

• throughout mistaken the time-average statistic of a dynamical response error for a time-invariant error;
• throughout mistaken theory-bias error for base-state error;
• repeatedly and wrongly appended a plus/minus to a single-sign offset error, in effect creating a fictitious root-mean-square (rms) error;
• repeatedly and improperly propagated the fictitious rms error to produce uncertainty envelopes with one fictitious wing;
• apparently does not recognize that only a unique model expectation value qualifies as prediction in science.

This list is not exhaustive, but in-and-of itself is sufficient to vitiate the analytical merit of Dr. Brown’s analysis, in its entirety.

Now to specifics:

Dr. Brown’s critique was presented under five headings:
1. Arbitrary use of 1 year as the compounding time scale.
2. Use of spatial root-mean-square instead of global mean net error.
3. Use of error in one component of the energy budget rather than error in net imbalance.
4. Use of a base state error rather than a response error.
5. Reality check: Hansen (1988) projection.

These are taken in turn. I assume the readers are familiar with the contents of Dr. Brown’s video.

Minute 15:07, 1. Arbitrary use of 1 year as the compounding time scale.

From Lauer and Hamilton, page 3831: “A measure of the performance of the CMIP model ensemble in reproducing observed mean cloud properties is obtained by calculating the differences in modeled (x_mod) and observed (x_obs) 20-yr means. These differences are then averaged over all N models in the CMIP3 or CMIP5 ensemble to calculate the multimodel ensemble mean bias delta_mm, which is defined at each grid point as delta_mm = (1/N) × sum_over_i[(x_mod)_i − x_obs], for i = 1 to N.”

Page 3831 “The CF [cloud forcing] is defined as the difference between ToA [top of the atmosphere] all-sky and clear-sky outgoing radiation in the solar spectral range (SCF [short-wave cloud forcing]) and in the thermal spectral range (LCF [long-wave cloud forcing]).”

That is, the ±4 W/m² LCF root-mean-square-error (rmse) is the annual average CMIP5 thermal flux error. The choice of annual error compounding was therefore analytically based, not arbitrary.

Further, the ±4 W/m² is not a time-invariant error, as Dr. Brown suggested, but rather a time-average error of climate model cloud dynamics. It says that CMIP5 models will average ±4 W/m² error in long-wave cloud forcing each year, every year, while simulating the evolution of the climate.

Although Dr. Brown did not discuss it, part of my presentation showed that CMIP5 LCF error arises from a theory-bias error common to all tested models. A theory-bias error is an error in the physical theory deployed within the model. Theory-bias errors introduce systematic errors into individual model outputs, and continuing sequential errors into step-wise calculations.

CMIP5 models introduce an annual average ±4 W/m² LCF error into the thermal flux within the simulated troposphere, continuously and progressively each year and every year in a climate projection.

Next, Dr. Brown suggested that the annual average could be arbitrarily used for 20 years or for one second. It should now be obvious that he is mistaken. An annual average error can be applied only to a calculation of annual span.

Dr. Brown’s alternative propagation in 20-year steps used the ±4 W/m² one-year rmse LCF error. A 20-year time step requires a 20-year uncertainty statistic.

The CMIP5 ±4 W/m² annual average can be scaled back up to a 20-year average LCF rms uncertainty, “±u_20,” calculated as ±u_20 = sqrt[(4)^2 × 20] W/m² = ±17.9 W/m².

Using Dr. Brown’s RCP 8.5 scenario as the example, the 2000-2019 change in GHG forcing is 0.89 W/m². The year 2000 base greenhouse gas (GHG) forcing is taken as the sum of the contributions from CO₂ + N₂O + CH₄, and is 32.321 W/m², calculated from the equations in G. Myhre, et al., (1998) GRL 25(14), 2715-2718. GHG forcings have recently been updated, but the difference doesn’t impact the force of this demonstration.

Starting from year 2000, and using the linear model, the uncertainty across a projection consisting of a single 20-year time-step is ±[(0.42 × 33.833 × 17.9)/32.321] = ±7.9 C, where 33.833 C is the year 2000 net greenhouse temperature.

In comparison, at year 2019, i.e., after 20 years, the annual-step RCP 8.5 ±4 W/m² annual average uncertainty compounds to ±7.6 C.

Likewise, after a series of five 20-year time-steps, the propagated uncertainty at year 2100 is ±17.3 C.

In comparison, the RCP 8.5 centennial uncertainty obtained propagating the annual ±4 W/m² over 100 yearly time steps from 2000 to 2099 is ±17.1 C.

So, in both cases, the annually propagated uncertainties are effectively the same as those propagated in 20-year time-steps.

This comparison shows that, correctly calculated, the final propagated uncertainty is negligibly dependent on time-step size.
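A short numerical sketch makes the step-size point concrete. It uses the constants quoted above with a constant per-step uncertainty accumulated in quadrature; the slightly smaller figures quoted earlier (±7.6 C, ±17.1 C) come from the full year-by-year calculation, which this sketch does not reproduce.

```python
import math

# Constants from the RCP 8.5 example above:
F0 = 32.321     # W/m^2, year-2000 base GHG forcing
T_GH = 33.833   # C, year-2000 net greenhouse temperature
COEF = 0.42     # linear-model coefficient

def u_temp(u_flux):
    """Per-step temperature uncertainty (C) from a per-step flux uncertainty."""
    return COEF * T_GH * u_flux / F0

def propagate(u_step, n_steps):
    """Root-sum-square accumulation of a constant per-step uncertainty."""
    return math.sqrt(n_steps) * u_step

u_1yr = u_temp(4.0)                        # from the +/-4 W/m^2 annual LCF rmse
u_20yr = u_temp(math.sqrt(4.0**2 * 20))    # from the +/-17.9 W/m^2 20-year rmse

print(propagate(u_1yr, 20), propagate(u_20yr, 1))    # ~7.9 C vs ~7.9 C
print(propagate(u_1yr, 100), propagate(u_20yr, 5))   # ~17.6 C vs ~17.6 C
```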

All of this demonstrates that Dr. Brown’s conclusion at the end of section 1 (minute 16:50), though true, is misguided and irrelevant to the propagated error analysis.

Minute 17:10, 2. Use of spatial root-mean-square instead of global mean net error.

In his analysis, Dr. Brown immediately and incorrectly characterized the CMIP5 ±4 W/m² annual average LCF rms error as a “base-state error.”

However, the LCF rms error was derived from 20 years of simulated climate — the 1986-2005 global climate states. These model years were extracted from historical model runs starting from an 1850 base state.

The actual “base-state” error would be the difference between the simulated and observed 1850 climate. However, the 1850 climate is nearly unknown. Therefore the true base-state error is unknowable.

In contrast, the model ±4 W/m² LCF error represents the annual average dynamical misallocation of simulated tropospheric thermal energy flux, during the 20 years of simulation. It is not a base-state error.

As a relevant aside, looking carefully at the scale-bar to the left of Dr. Brown’s graphic of LCF model error (minute 17:57), the errors vary in general between +10 W/m² and –10 W/m² across the entire globe, with a scatter of deeper excursions.

With these ±10 W/m² errors in simulated tropospheric thermal flux, we are expected to credit that the models can resolve the effect of an annual GHG forcing perturbation of about 0.035 W/m²; a perturbation roughly 286 times smaller than the general level of error in Dr. Brown’s graphic.

Next, Dr. Brown says that by squaring the LCF error, one makes the error positive. This, he says, doesn’t make sense. However, that representation is incorrect. Squaring the error provides a positive variance. The uncertainty used is the square root of the error variance, which makes it “±,” i.e., plus/minus, not positive. This is not an “absolute value error,” as Dr. Brown represents.

In minute 18:30, Dr. Brown compares the mean LCF of 28 models with observed cloud LCF, showing that they are similar. By inference, this model mean error is what Dr. Brown means by “net error.”

However, taking a mean allows positive and negative errors to cancel. Considering only the mean hides the fact that models do in fact make both positive and negative errors in cloud forcing across the globe, as Dr. Brown’s prior graphic showed. These plus/minus errors indicate that the simulated climate state does not correspond to the physically correct climate state.

In turn, this climate state error puts uncertainty into the simulated air temperature because the climate simulation that produced the temperature is physically incorrect. Therefore, focusing on the mean model LCF hides the physical error in the simulated climate state, and confers a false certainty on the simulated air temperature.

The point is clearer when considering Dr. Brown’s minute 18:30 graphic. The 28 climate models shown there have differing LCF errors. Their simulated climate states not only do not represent the physically correct climate state, but their simulated states also are all differently incorrect.

That is, these models not only simulate the climate state incorrectly, but they produce simulation errors that vary across the model set. Nevertheless, the models all adequately reproduce the 1850-to-present global air temperature trend.

Temperature correspondence among the models means that the same air temperature can be produced by a wide variety of alternative and incorrectly simulated climate states. The question becomes, what certainty can reside in a simulated air temperature that is consistently produced by multiple climate states, all of which are not only physically incorrect, but also incorrect in different ways? Further, when it is known that climate states are simulated incorrectly, what certainty resides in the climate-state evolution in time?

Taking the mean error hides the plus/minus errors that indicate the simulated climate states are physically incorrect. The approach Dr. Brown prefers confers an invalid certainty on model results.

In minute 19:37, Dr. Brown then compared the FGOALS and GFDL climate models with widely differing mean LCF offset errors, of -9 W/m² or +0.1 W/m², respectively, and showed they produced hugely different uncertainty envelopes when propagated.

Propagating these errors is a mistake, however, because they are single-sign single-model mean offsets. They are not the root-mean-square error of each single-model global LCF simulation (see below).

Neither offset error is a plus/minus value, yet the right side of Dr. Brown’s graphic represents both as “±.” The strictly positive GFDL error can produce only a small positive wing, while the FGOALS calculation is restricted to a large negative wing. That is, Dr. Brown’s double-winged uncertainty envelopes resulted from improperly appending a “±” to mean errors that are strictly positive or strictly negative values.

Thus, both uncertainty calculations are wrong because a single-model single-sign mean offset was wrongly entered into a propagation scheme requiring a plus/minus rms error.

The global LCF error for a single model simulation is the rmse calculated from simulated minus observed LCF in the requisite unit-areas across the globe. Taking the root-mean-square of the individual errors produces the global mean single-model plus/minus LCF uncertainty. Propagation of the LCF “±” rmse then produces both positive and negative wings of the uncertainty envelope.
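The distinction between a mean error and a rms error is easy to see numerically. Here is a minimal sketch with invented grid-point errors of the sort visible in Dr. Brown’s global error map:

```python
import numpy as np

# Invented grid-point LCF errors (W/m^2) for one model, with positive and
# negative excursions of similar size, as in the global error map.
errors = np.array([8.0, -9.0, 10.0, -7.0, 6.0, -8.0])

mean_error = errors.mean()            # signed errors cancel: 0.0 W/m^2
rmse = np.sqrt(np.mean(errors**2))    # the +/- magnitude survives: ~8.1 W/m^2
print(mean_error, rmse)
```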

In minute 20:06, Dr. Brown asked, “Does it make sense that two models that predict similar amounts of warming by 2100 would have uncertainty ranges that differ by orders of magnitude?”

We’ve seen that Dr. Brown’s error ranges are wrongly calculated and pretty much meaningless.

Further, the fact that two models deploying the same physics make such different mean LCF errors shows that large parameter disparities are hidden in the models. In order to produce the same air temperature even though the respective mean LCF errors are widely different, the two models must have different suites of offsetting internal errors. That is, Dr. Brown’s objection here actually confirms my analysis. A large uncertainty must attach to a consistent air temperature emergent from disparately incorrect models.

Minute 20:30, 3. Use of error in one component of the energy budget rather than error in net imbalance.

Dr. Brown’s argument here does not take cognizance of the difference between the so-called instantaneous response to a forcing change and the equilibrium response. My analysis concerns the instantaneous response to GHG forcing. The equilibrium response includes the oceans, which respond on a much longer time scale. So, inclusion of the ocean heat capacity in Dr. Brown’s argument is a non-sequitur with respect to my error analysis.

Next, the choice of LCF rms error confines the uncertainty analysis to the tropospheric thermal energy flux, where GHG forcing makes its immediate impact on global air temperature. GHG forcing enters directly into the tropospheric thermal energy flux and becomes part of it. An uncertainty in tropospheric thermal energy flux imposes an uncertainty in the thermal impact of GHG forcing.

The CMIP5 ±4 W/m² annual average LCF error is roughly 114 times larger than the annual average ca. 0.035 W/m² forcing increase that CO₂ emissions introduce into the troposphere.

Dr. Brown proposed that a model with perfect global net energy balance would produce no uncertainty envelope in an error-propagation. However, restricting the question to global net flux in a perfectly balanced model neglects the problem of correctly partitioning the available energy flux among and within the climate sub-states.

A model with offsetting errors among short-wave cloud forcing, long-wave cloud forcing, albedo, aerosol forcing, etc., etc., can have perfect net energy balance all the while producing physically incorrect simulated climate states, because the available energy flux is misallocated among all the climate sub-states.

The necessary consequence is very large uncertainty envelopes associated with the time-wise projection of any simulated observable, no matter that the total energy flux is in balance.

Cognizance of these uncertainties requires a detailed accounting of the energy flux distribution within the climate. As noted above, the LCF error directly impacts the ability of models to resolve the very small additional forcing associated with GHG emissions.

This remains true in any model with an overall zero error in net global energy balance, but with significant errors in partitioned energy-flux among climate sub-states. Presently, this caveat to Dr. Brown’s argument includes all climate models.

Minute 23:50, 4. Use of a base state error rather than a response error.

Dr. Brown’s opening statement suggests I used a base-state error rather than a response error. This claim was discussed under item 1, where it was noted that the LCF rms error is not a time-invariant error, as Dr. Brown suggested, but a time-average error.

At the risk of being pedantic, but just to be absolutely clear, a time-invariant error is constant across all time. A time-average error is calculated from individual errors that may, and in this case do, vary across time. The time-average error derived from many models allows one to calculate a time-wise uncertainty that is representative of those models.
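The distinction can be put in one short sketch, using invented yearly errors for a single model:

```python
import numpy as np

# Invented yearly LCF errors (W/m^2) for one model, varying year to year.
yearly_errors = np.array([3.1, -4.6, 5.2, -2.8, 4.4, -3.9])

# A time-invariant error would be the same number every year; these are not.
# The time-average (rms) error summarizes their typical annual magnitude:
rms = np.sqrt(np.mean(yearly_errors**2))
print(rms)   # ~ +/-4.1 W/m^2 for these invented values
```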

This point was more extensively discussed under item 2 where it was noted that the model LCF error represents the model average of the dynamically misallocated simulated tropospheric thermal energy flux, not a base-state error.

In pursuing this line, Dr. Brown introduced a simple physical model of the climate, and investigated what would happen with a 5% positive offset (base-state) error in terrestrial emissivity in a temperature projection across 100 years, using that model.

However, a model positive offset error is not a correct analogy to global LCF rmse error. Correctly analogized to LCF rmse, Dr. Brown’s simple climate model should suffer from a rmse uncertainty of ±5% in terrestrial emissivity. Clearly a rmse uncertainty is not a constant offset error.

The positive offset error Dr. Brown invoked here represents the same mistaken notion as was noted under item 2, where Dr. Brown incorrectly used a strictly single-sign single-model mean LCF offset error rather than, properly, the single-model global LCF rms error.

In minute 26:55, Dr. Brown again improperly attached a “±” onto his strictly positive +5% emissivity offset error. This mistake allowed him to introduce the plus/minus uncertainty envelope said to represent the uncertainty calculated using the linear error model.

However, the negative wing of Dr. Brown’s uncertainty envelope is entirely fictitious. Likewise, as noted previously, a single-sign offset error cannot be validly propagated.

Next, when Dr. Brown’s model is correctly analogized, the ±5% emissivity error builds an uncertainty into the structure of the model. The emissivity of the base state has a ±5% uncertainty and so does the emissivity of the succeeding simulated climate states, because the ±5% uncertainty in emissivity is part of the model itself. The model propagates this error into everything it is used to calculate.

Correctly calculated, the base-state temperature suffers from an uncertainty imposed by the ±5% uncertainty in emissivity. The correct representation of the base-state temperature is 288 (+3.7/−3.5) K.

The model itself then imposes this uncertainty on the temperature of every subsequent simulated climate state in a step-wise projection.

The temperature of every simulation step “n-1” used to calculate the temperature of step “n” carries its “n-1” plus/minus temperature uncertainty with it. The temperature of simulated state “n” then suffers its own uncertainty because it was calculated with the model having the structural ±5% uncertainty in emissivity built into it. The total uncertainty of temperature “n” combines with the ±T uncertainty of step “n-1.”

These successive uncertainties combine as the root-sum-square (rss) in a temperature projection.

To show the effect of a ±5% uncertainty in emissivity, I duplicated Dr. Brown’s initial 100-year temperature calculation and obtained the same result, 288.04 K → 291.93 K after 100 years. I then calculated the temperature uncertainties resulting from a ±5% uncertainty in the value of the changing emissivity, as it was reduced step-wise by 5% across the 100 years. The rss uncertainty was then calculated for each step.

The result is that the initial 288 (+3.7/−3.5) K became 289.95 (+26.6/−25.0) K in the 50th simulation year, and 291.93 (+37.6/−35.3) K in the 100th.
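Readers can check the shape of these numbers with a small script. This is a sketch under stated assumptions only: a one-layer Stefan-Boltzmann model in which temperature varies as emissivity to the −1/4 power, a 288.04 K base state, and a constant per-step uncertainty accumulated as the root-sum-square. The full step-wise calculation described above will differ in detail.

```python
import math

T0 = 288.04   # K, base-state temperature of Dr. Brown's simple model

# A +/-5% emissivity uncertainty maps onto temperature as T ~ eps**(-1/4):
upper = T0 * (1 - 0.05) ** -0.25 - T0   # lower emissivity -> warmer: ~ +3.7 K
lower = T0 * (1 + 0.05) ** -0.25 - T0   # higher emissivity -> cooler: ~ -3.5 K
print(upper, lower)

# Accumulating a ~3.7 K per-step uncertainty as the root-sum-square:
print(math.sqrt(50) * upper)    # ~ +26 K by the 50th year
print(math.sqrt(100) * upper)   # ~ +37 K by the 100th year
```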

So, properly analogized and properly assessed, Dr. Brown’s model verifies the method and results of my original climate model error propagation.

Next, at minute 28:00, Dr. Brown showed that there is no relationship between model base-state error in global average air temperature and model equilibrium climate sensitivity (ECS). However, the Figure 9.42(a) he displayed merely shows the behavior of climate model simulations with respect to themselves. This is a measure of model precision. Figure 9.42(a) does not show the physical accuracy of the models, i.e., how well they represent the physically true climate.

The fact that Figure 9.42(a) says nothing about physical accuracy, means it also can say nothing about whether any actual systematic physical error leaks from a base-state simulation into projected states. There is no measure of physical error in Figure 9.42(a).

Figure 9.42(a) has another message, however. It shows that climate models deploying the same physical theory produce highly variable base-state temperatures and highly variable ECS values. This variability in model behavior demonstrates that the models are parameterized differently.

Climate modelers choose each parameter to be somewhere within its known uncertainty range. The high variability evident in Figure 9.42(a) shows that these uncertainty ranges are very significant. These parameter uncertainties must impose an uncertainty on any calculated air temperature. Indeed, there must be a large uncertainty in the air temperatures displayed in Figure 9.42(a). However, none of the points sport any uncertainty bars. For the same reason of hidden parameter uncertainties, the ECS values must be similarly uncertain, but there are no ECS uncertainty bars, either.

In standard physical science, parameter uncertainties are propagated through a calculation to indicate the reliability of a result. In consensus climate modeling, this is never done.

The parameter sets within climate models are typically tuned using known observables, such as the ToA flux, so as to generate parameter values that provide a reasonable base-state climate simulation and to project a reasonable facsimile of known climate observables over a validation time-range. However, tuned models are not known to accurately reproduce the physics of the true climate. Tuning a model parameter set to get a reasonable correspondence merely hides the uncertainty intrinsic to a simulation; an uncertainty that is obviously present when regarding Figure 9.42(a).

Next, Dr. Brown’s height-weight example is again an incorrect analogy because it is an empirical correlation within a non-causal epidemiological model, whereas a climate model is causal and deploys a physical theory. Dr. Brown’s comparison is categorically invalid.

A proper comparison would involve using some causal physical model of the human body complete with genetic inputs and resource availability to predict a future height vs. weight curve of a population given certain sets of conditions. Elements of this model would have plus/minus uncertainties associated with them that introduce uncertainties into the output.

Then, starting from year 2000, the calculation is made to predict the height vs. weight profile through to year 2100. The step-wise calculational uncertainties are propagated forward through the projection. The resulting uncertainty bars condition the prediction, and indicate its reliability.

The height-weight example marks the third time in his analysis that Dr. Brown misrepresented a constant offset error as a plus/minus uncertainty. He has again incorrectly appended a “±” to a positive-sign offset error. The negative wing of his calculated uncertainty envelope (minute 29:48) is again entirely fictitious.

This example also again shows that Dr. Brown continued to mistake a theory-bias error, i.e., a plus/minus rmse uncertainty within the structure of a physical theory, for a single-value offset error as might be present in a single calculation. This mistaken notion ramifies through Dr. Brown’s entire analysis.

Finally, this same mistake does similar violence to Dr. Brown’s step-size example in minute 30:30, where he, once again, mis-analogized theory-error as a base-state error.

In his example, the correct analogy with rmse LCF error is a rmse plus/minus uncertainty in the size of each step.

Dr. Brown correctly propagated the 2-foot uncertainty in step size as the rss, giving a distance traveled after three steps of 15 ± 3.46 feet.
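In code, the walking example is a single line of root-sum-square arithmetic:

```python
import math

# Three 5-foot steps, each with a +/-2 ft uncertainty in step size.
distance = 3 * 5.0               # 15 feet traveled
u = math.sqrt(3 * 2.0 ** 2)      # +/-3.46 ft, the root-sum-square
print(distance, u)
```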

Dr. Brown’s 5-foot offset error affects only the final distance from an initial reference point. It has nothing to do with an uncertainty in the distance traveled. It is not a correct analogy for the plus/minus LCF error statistic of climate models.

So, Dr. Brown’s final statement in this section (minute 31:53), that, “[A] bias or error in the base state should not be treated as the same thing as an error in the response (or change),” is correct, but completely irrelevant to propagation of the plus/minus LCF error statistic. The statement only illustrates Dr. Brown’s invariably mistaken notion of the sort of error under examination.

Again, the CMIP5 ±4 W/m² LCF error is not a constant, single-event base-state error, nor an offset error, nor a time-invariant error. The CMIP5 ±4 W/m² LCF error is a time-average error that arises from, and is representative of, the dynamical errors produced by climate models deploying an incorrect physical theory. It appears in every single step of a climate simulation and propagates forward through a time-wise projection.

Minute 32:14, 5. Reality check: Hansen (1988) projection.

Dr. Brown proposed a reality check, which was to plot the observed temperature trend over the Hansen, 1988 Model II scenario projections, shown in minute 34:02.

Dr. Brown’s mistake here is subtle but critically central. He is treating Hansen scenario B as a unique result; as though there were no other temperature projection possible, under the scenario GHG forcings.

Before getting to that, however, look carefully at Dr. Brown’s red overlay of observed temperatures. The ascent from scenario C to scenario B is due to the recent El Niño, which is presently in decline. Prior to 2015, before this El Niño, the observed temperature trend matches scenario C quite well, but does not match scenario B.

According to NASA, air temperatures are now “returning to normal” after the 2016 El Niño. The current air temperature trend shown at Carbon Brief illustrates this decline back to the pre-existing, non-scenario-B, state.

So, it appears that Dr. Brown’s model-observation correspondence claim rests upon a convenient transient.

Now back to the point concerning the absolutely critical need for unique results in the physical sciences. Unique results from theory are central to empirical test by falsification. Only unique results are testable against experiment or observation. If a physical model has so many internal uncertainties that it produces a wide spray of outputs (expectation values) for the same set of inputs, that model cannot be falsified by any accurate single-valued observation. Such a model does not produce predictions in a scientific sense, because even if one of its outputs corresponds to observations, a correspondence between the state of the model and physical reality cannot be inferred.

The discussion around Figure 9.42(a) above shows that the physics within climate models includes significantly large uncertainties. The models do not, and can not, produce unique results. Their projections are not predictions, and the internal state of the model does not imply the state of the physically real climate.

I discussed this point in detail in terms of “perturbed physics” tests of climate model projections, in a post at Anthony Watts’ Watts Up With That (WUWT) blog, here. Interested readers should refer to Figures 1 and 2, and the associated text, in that post.

The WUWT discussion featured the HADCM3L climate model. When model parameters are varied, the HADCM3L produces a large range of air temperature projections for the identical set of forcings. This result demonstrates the HADCM3L cannot produce a unique solution to the climate energy state. Nor can any other advanced climate model.

From the post, “No set of model parameters is known to be any more valid than any other set of model parameters. No projection is known to be any more physically correct (or incorrect) than any other projection.

“This means, for any given projection, the internal state of the model is not known to reveal anything about the underlying physical state of the true terrestrial climate.”

The same is true of Dr. Hansen’s 1988 projection. Variation of its parameters within their known range of uncertainties would have produced a large number of alternative air temperature trends. The displayed scenario B is just one of them, and is not unique to its set of forcings. Scenario B is not a prediction, and it is not validated as physically correct, merely because it happens to approximate the observed air temperature trend.

In his 2005 essay, “Michael Crichton’s ‘Scientific Method’,” Dr. Hansen himself wrote that the agreement between his scenario B and observed air temperature is fortuitous, in part because the Model II ECS was too large and also because of “other uncertain factors.” Dr. Hansen’s modestly described “other uncertain factors” are likely to be the large parameter uncertainties and the errors in the physical theory, as discussed above. Dr. Hansen’s 2005 article is available here: http://www.columbia.edu/~jeh1/2005/Crichton_20050927.pdf.

A fortuitous agreement does not support Dr. Brown’s claim of predictive validity.

Dr. Hansen went on to say about his 1988 scenario B that, “it is becoming clear that our prediction was in the right ballpark”, showing that he, too, apparently does not understand the critical requirement, indeed the sine qua non, of a unique result to qualify a calculation from theory as a scientific prediction.

Similar criticism applies to Dr. Brown’s Figure at minute 34:52, “Modeled and Observed Global Mean Surface Temperature.” The air temperature uncertainty envelope is merely the standard deviation of the CMIP5 model projections around the ensemble model mean. This is a measure of model precision, and indicates nothing about the physical accuracy of the mean projection.

The models have all been tuned to produce alternative suites of parameters that permit a reasonable-seeming projection. The HADCM3L example illustrates that under conditions of perturbed physics, each of those models would produce a range of projections with a spread much larger than Dr. Brown’s Figure admits, all with the identical set of forcings.

Neither the mean projection, nor any of the individual model projections represent a unique result. Tuning the parameter sets and reporting just the one projection has merely hidden the large uncertainty inherent in each projection.

The correct plus/minus uncertainty in the mean projection is the root-mean-square of the individual projection uncertainties, computed with (n−1) in the denominator. The occult uncertainty in the ensemble mean is therefore larger than the occult uncertainty in any individual projection.

Dr. Brown’s question at the end, “How long would observed temperature need to stay close to the climate model projections before we can say that climate models are giving us useful information about how temperature responds to greenhouse gas forcing?” is unfortunate.

Models have been constructed to require the addition of greenhouse gas forcing in order to reproduce global air temperature. Then turning around and saying that models with greenhouse gas forcings produce temperature projections close to observed air temperatures, is to invoke a circular argument.

Given the IPCC forcings, the linear model of my analysis reproduces the recent air temperature trend just as well as do the CMIP5 climate models. In the spirit of Dr. Brown’s question, we can just as legitimately ask, ‘How long would observed temperature need to stay close to the linear model projections before we can say that the linear model gives us useful information about how temperature responds to greenhouse gas forcing?’ The obvious answer is ‘forever,’ because the linear model will never ever give us such useful information.

And now that we know about the uncertainties hidden within the CMIP5, and prior, climate models, we also know the same, ‘forever, never, ever,’ answer applies to them as well.

We know the terrestrial climate has emerged from the Little Ice Age, and has been warming steadily since about 1850. Following Dr. Brown’s final question, even if the warming continues into the 21st century, and the projections of tuned, adjusted and tendentious (constructed to need the forcing from GHG emissions) climate models stay near that warming air temperature trend, the model projection uncertainties are so large, and the expectation values so non-unique, that any future correspondence cannot escape Dr. Hansen’s diagnosis of “fortuitous.”

Summary conclusion: Not one of Dr. Brown’s objections survives critical examination.

Reply to  Pat Frank
November 14, 2017 12:10 pm

Thanks Pat… I’ve gradually gotten less skeptical of your arguments here. They remind me a bit of when Lancet published a study on the Iraq war which claimed some number of people had been killed, even though it actually had error bars so large it could not even rule out the possibility that the war had saved lives.

The large uncertainty bars do not indicate possible increases or decreases in air temperature. They indicate a state of knowledge….Further, the ±4 W/m² is not a time-invariant error, as Dr. Brown suggested, but rather a time-average error of climate model cloud dynamics. It says that CMIP5 models will average ±4 W/m² error in long-wave cloud forcing each year, every year, while simulating the evolution of the climate.

That seems to be the crux. The sensible thing for modellers to do would be to acknowledge this is obviously true, that they just as obviously cannot hope to produce meaningful predictions with the current state of uncertainties, and shrug it all off as the best they can do right now. Instead we get… well, climate science.

The same is true of Dr. Hansen’s 1988 projection. Variation of its parameters within their known range of uncertainties would have produced a large number of alternative air temperature trends. The displayed scenario B is just one of them, and is not unique to its set of forcings. Scenario B is not a prediction, and it is not validated as physically correct, merely because it happens to approximate the observed air temperature trend.

Part of the problem with AGW policy is that when “predicting” a simple scalar trend in a system as complex and subject to uncertainty as terrestrial climate, the odds of getting the scalar trend right by accident are astronomically higher than the odds of modelling all the pieces of the system correctly simply because it has so many fewer degrees of freedom… even if the errors weren’t so large as to render the “prediction” meaningless.

Reply to  talldave2
November 19, 2017 4:16 pm

Thanks so much talldave2.

You’re right that the model-based continual and step-wise injection of error is the crux issue for propagating the error. The other issue is the linearity of model output.

Interesting point about the likelihood of accidental correspondence. Following on, the climate seems pretty stable over the short term in any case. Tuning a model to reproduce the immediate past, and then projecting a few parameterized alternatives forward seems like a pretty good scatter-gun way of getting something close to the short term trend of the climate.

You’re entirely right that such correspondences are physically meaningless, and are not predictions.

Your final general agreement, starting from a critical stance, gives me hope despite the obdurate obscurantist darkness that I have encountered among consensus climatologists.

November 16, 2017 7:50 pm

Geoscientific Model Development turned out to be sensitive to the notion that Dr. Annan has serious professional and financial conflicts of interest with the content of my manuscript, perhaps clouding the objectivity of his evaluation.

On Tuesday, November 14, I received an email out of the blue from Dr. Didier Roche.

Dr. Roche is an Executive Editor at GMD.

Dr. Roche is also a climate modeler employed at the IPSL/Laboratoire des Sciences du Climat et de l’Environnement.

A climate modeler who apparently is thought able to provide a dispassionate appraisal of a manuscript demonstrating that climate models are unreliable.

Here’s his emailed evaluation. It deserves wide appreciation as a fine example of consensus analysis.

My response is in the next post.
+++++++++++++++++

From: Didier M. Roche didier.roche@xxx.xxx.xx
Subject: Your manuscript gmd-2017-281
Date: November 14, 2017 at 7:13 AM
To: Patrick Frank pfrankzzz@xxx.xx

Dear Patrick Frank,

Following the rejection of your manuscript gmd-2017-281 and your subsequent email to Copernicus, it has been decided that it will be treated as an appeal of the rejection decision.

In such cases an Executive Editor is nominated to provide an independent evaluation of the manuscript in question to confirm or reject the previous decision.

In the case of your manuscript, I have been asked to handle the appeal. I have now read your manuscript in details two times and evaluated the decision of Dr. James Annan who previously rejected your submission.

My analysis of your manuscript is that indeed it is not suitable for publication in GMD as it is. The reasoning you develop is based on the premises that the error arising from simulated cloud cover on an annual mean is a 4 W.m-2 error in longwave radiation calculations in CMIP models.

However clouds are highly variable in time and space. By thus doing an average over the year, you ignore completely their variations over the year. Similarly, when you state that “Global Cloud forcing is net cooling” (page 30) you also ignore the fact that different types of clouds (low vs. high for example) have different radiation effects and that therefore their vertical distribution is also of major importance.

The point related to the annual timescale was already pointed to you by Dr. Annan. I agree with his analysis.

Let me also highlight that in your appeal you incorrectly stated that “Dr. Annan wrongly claimed the ±4 W/m^2 annual error is explained “nowhere in the manuscript.” It is explained on page 30, lines 571-584.”

However, the valid point of Dr. Annan is that the *annual* timescale is explain nowhere in the manuscript. He never claimed, as you seem to suggest in your answer, that you did not explained the calculation method for your ±4 W/m^2 error.

Based on my expertise and on the material I received from your submission and appeal, I thus fully confirm the rejection of your manuscript as submitted under number gmd-2017-281.

With best wishes,

Didier Roche (Exec. Editor GMD)

=========
Didier M. Roche

IPSL/Laboratoire des Sciences du Climat et de l’Environnement

Adresse:
Laboratoire des Sciences du Climat et de l’Environnement
Centre d’Etude de Saclay
CEA-Orme des Merisiers, bat. 701
F-91191 GIF-SUR-YVETTE CEDEX

Tel.:
+33 (0) x xx xx xx xx
Didier.Roche@xxx.xxx.xx

November 16, 2017 8:22 pm

Didier Roche addressed his evaluation only to me, by the way. Which I found a little peculiar.

Under the circumstances, wouldn’t the journal have wanted to be inclusively aware of his reasoning?

My reply included Executive Editor Julia Hargreaves and the journal office.

They deserve to know the full quality of editorial acuity they deploy.

November 16, 2017 8:23 pm

From: Patrick Frank pfrankzzz@xxxx.xx
Subject: Re: manuscript gmd-2017-281
Date: November 14, 2017 at 9:42 PM
To: Didier M. Roche didier.roche@xxx.xxx.fr
Cc: jules@xxxxx.xxx.uk, editorial@xxxx.org

Dear Dr. Roche,

Thank-you for your email.

This will be short. Quote and response.

You wrote, “premises that the error arising from simulated cloud cover on an annual mean is a 4 W.m-2 error in long wave radiation calculations in CMIP models.

This is not my premise. It is a result reported in Lauer and Hamilton, 2013.

The quantity ±4 W/m^2 is a rms uncertainty statistic. It is not a positive-sign physical error as you represented it.

By thus doing an average over the year, you ignore completely their variations over the year. … you also ignore the fact that different types of clouds (low vs. high for example) have different radiation effects and that therefore their vertical distribution is also of major importance.

Calculating annual GMST does not presume that every point on Earth is of uniform temperature every day, everywhere, all year.

Calculating global average irradiance does not presume that every point on Earth receives 340 W/m^2.

Calculating global average cloud forcing (Hartmann, 1992; Stephens, 2005) does not presume all clouds are the same everywhere.

Taking an average does not presume that a uniform magnitude reigns everywhere.

The complete ignorance reflected in your argument calls for a judgment of incompetence.

However, the valid point of Dr. Annan is that the *annual* timescale is explain nowhere in the manuscript.

My source, Lauer and Hamilton, 2013, reported annual means; mentioned in ms line 575.

Manuscript lines 578-579 show exactly how and where the annual timescale arises. SI Section 6.2 derived the annual timescale exactly.

Your statement is factually and demonstrably wrong; as was that of Dr. Annan.

You may have read the manuscript twice, but you did not understand it even once.

You are a climate modeler, Dr. Roche. Your publication list shows no relevant expertise; a condition obvious in the quality of your commentary.

Like Dr. Annan you have profound professional and career conflicts of interest with a manuscript demonstrating that climate models have no predictive value, which they do not.

You people are determinedly rejectionist. Protocol is your cover.

Yours sincerely,

Pat

Patrick Frank, Ph.D.
Palo Alto, CA 94301
email: pfrankzzz@xxxx.net

For those who have read this far (thanks), here’s what manuscript lines 578-579 say:

Dimensional analysis of the derivation yields the units of the calibration error statistic: (cloud-cover unit)/grid-point × 1/year × 1/model × grid-points/globe = (cloud-cover unit) year⁻¹ model⁻¹ globe⁻¹. This is a global annual average CMIP simulation error in cloud cover.

Throughout their paper, Lauer and Hamilton, 2013 invariably refer to their results as annual means.

November 16, 2017 8:46 pm

Somehow my reply to Dr. Roche has not appeared, after two attempts to post it.

...and Then There's Physics
Reply to  Pat Frank
November 18, 2017 2:38 am

Pat,
I think your paper is obviously flawed as I have explained on numerous occasions before. Patrick Brown gave a lengthy explanation as to the error. I believe Gavin Schmidt also explained the issue to you many years ago too. Now you have James Annan providing a similar explanation, plus an extra evaluation by the executive editor. It has now – IIRC – been rejected 7 times. Is there a stage at which you might take a step back and consider that you may actually be wrong?

Reply to  ...and Then There's Physics
November 19, 2017 3:10 pm

Argument from authority, ATTP. Objectively wrong arguments multiply repeated do not become right arguments. Your arguments are wrong, and you’re wrong.

And here you are again, claiming the ±4 W/m^2 calibration error statistic is instead a positive-sign +4 W/m^2 forcing offset error.

How hard is it to realize that a statistic is not an energy?

Your self-anointed appellation of “and Then There’s Physics” is an utterly unintentional but perfect irony. Your entire criticism abuses physics.

Further, neither you, nor Patrick Brown, nor Gavin Schmidt ever realized that linear extrapolation of forcing entirely justifies linear propagation of error. This is the core issue, and it’s opaque to you.

That GCM output is linear in forcing is fully demonstrated. The rest follows. It’s obvious. And none of you get it. Or won’t get it.

AndyG55
Reply to  ...and Then There's Physics
November 19, 2017 3:35 pm

Submitting the same paper over and over expecting different results (it being accepted) is well defined.

Reply to  ...and Then There's Physics
November 19, 2017 4:24 pm

Robert Kernodle, editors and reviewers vary with the journal.

One can always rationally hope to finally find intellectual courage in the one and competence in the other at a different journal.

Thus far, unfortunately, I have found neither at any. Except for three reviews that did recommend publication, including one of two at Earth Science Reviews.

That event should have warranted publication given a compelling response to the negative review. But editor Timothy Li at ESR is a climate modeler. Guess what he decided about a paper definitively showing that models are unreliable.

Nick Stokes
Reply to  ...and Then There's Physics
November 19, 2017 4:27 pm

“And here you are again, claiming the ±4 W/m^2 calibration error statistic is instead a positive-sign +4 W/m^2 forcing offset error.”
Despite my challenge, you have not found a single instance where someone writes a rmse as ±x. All the locations you found, and even your source, L&H, write it as a positive number.

I see in some recent posts, you still trot out cases where an interval is described as a±b, where b is an RMS. That does not imply a variable sign on b. You can write the interval 1 to 5 as 3±2. But 2 is still a positive number.

Reply to  ...and Then There's Physics
November 19, 2017 5:37 pm

Pat Frank, there are plenty of journals that will publish your work……all you have to do is pay the fee.

Reply to  ...and Then There's Physics
November 19, 2017 5:41 pm

Have you tried the Chinese journal where His Majesty, the Viscount of Brenchley, got his paper published?

Reply to  ...and Then There's Physics
November 19, 2017 6:13 pm

Nick, “Despite my challenge, you have not found a single instance where someone writes a rmse as ±x.”

To the contrary: here (November 12, 2017 at 10:22 pm), and here (November 12, 2017 at 11:48 pm, Ineichen, et al.), and here (November 14, 2017 at 12:27 am, several examples).

We’ve established that reporting standard deviations (rmse) as positive roots is a mere convention.

Square roots are always “±.” RMSE is always “±.”

The “interval from 1 to 5” is not a rms. Non-sequitur.

Your argument also implies, nonsensically, that error itself can only be of positive sign (November 12, 2017 at 8:22 pm).

You know all that very well. Your challenge is your attempt to divert the debate into meaningless convention. So we all now know you’re just lying, Nick.

Reply to  ...and Then There's Physics
November 19, 2017 6:18 pm

Robert Kernodle, you mean “Science Bulletin.” Yes, I did try them.

Their response was to say that my study didn’t meet their standards of “novelty and significance.”

In other words, the first ever propagation of error through a GCM projection and the first ever demonstration that GCMs have no predictive value wasn’t interesting enough to even consider.

Then they blocked my email account.

Reply to  ...and Then There's Physics
November 19, 2017 6:22 pm

Science Bulletin cannot “block” your email account. Ignoring you is not “blocking.”

Reply to  ...and Then There's Physics
November 19, 2017 6:42 pm

Anyone can block anyone’s email account, Robert. The domain name goes into the blocked sender list.

...and Then There's Physics
Reply to  ...and Then There's Physics
November 19, 2017 11:23 pm

Pat,

Argument from authority, ATTP. Objectively wrong arguments multiply repeated do not become right arguments. Your arguments are wrong, and you’re wrong.

I wasn’t really making an argument. I was asking a question, which you have still not answered. At what stage would you take a step back and consider that you may actually be wrong?

Reply to  ...and Then There's Physics
November 20, 2017 8:32 am

ATTP, I’ll step back when someone demonstrates an objective mistake.

You certainly have not done so. Neither has Patrick Brown, nor Nick Stokes.

Reply to  ...and Then There's Physics
November 20, 2017 9:19 am

And at what stage do you step back and admit you’re wrong, ATTP?

My demonstration is intact, that climate models have no predictive value.

You must know that your arguments that annual means are not annual means, and that a ±rmse statistic is an energy are utterly and *obviously* wrong.

When do you step back and admit it, ATTP? You have no case.

...and Then There's Physics
Reply to  ...and Then There's Physics
November 20, 2017 9:37 am

Pat,

ATTP, I’ll step back when someone demonstrates an objective mistake.

As far as I can see, many have already done this, so what you probably mean is “demonstrate an objective mistake to your satisfaction“. Given that this is probably not possible, is there some other point at which you would consider stepping back and considering that you have indeed made some kind of mistake?

Reply to  ...and Then There's Physics
November 20, 2017 3:33 pm

Let’s have your mistake, ATTP. We can have it out right here.

Many years ago, Gavin was reduced to a manufactured log(0) error; one I never made. That was his entire remaining criticism by the end of our debate. He lost.

Gavin also apparently thinks that a ±K uncertainty is a physical temperature. He thinks that an uncertainty interval implies the model is wildly oscillating. There’s your source of critical authority, ATTP. A paragon of incompetent ideas.

You and Patrick Brown both assert what is obviously a ±4 W/m^2/year calibration error statistic is instead a +4 W/m^2 positive offset forcing error. You don’t recognize the difference between a statistic and a physical energy.

Years ago I’d have thought such a mistake incredible. Now it’s a banal universality.

Of course, the same people think a numerical construct is a physical temperature.

Further, you evidently cannot be convinced by the explicit derivation I provided and the extracted dimensional analysis, each of which demonstrates my point and your mistake.

You people are wrong, I’ve shown you are wrong, and your repeated insistence on wrong arguments establishes nothing.

You’re apparently a trained physicist, ATTP, and you insistently argue that a rms calibration error statistic is an energetic forcing. You should feel ashamed of making such a naïve error.

I’m not surprised that Patrick Brown made that mistake, because I have yet to encounter a climate modeler who has any understanding whatever of calibration and physical error.

But you’re apparently a different case. You’ve evidently put aside all your training and taken up advocacy.

...and Then There's Physics
Reply to  ...and Then There's Physics
November 20, 2017 11:35 pm

Pat,
I take it that the answer to my question is no?

Reply to  ...and Then There's Physics
November 21, 2017 8:54 am

Stop playing around ATTP. If you have a mistake to air out, let’s see it.

Otherwise you’re just hiding behind rhetorical poses.

You claim to know my mistakes. So here’s your challenge, ATTP. Put up or shut up.

Patrick Brown thinks that a ±K uncertainty is a physical temperature; quote below.

Do you think that, too, ATTP? Is one of your objections, like his, that a propagated uncertainty of ±15 K is unphysical?

Quote from minute 12:35: the ±15 C uncertainty envelope is, “a completely unphysical range of uncertainty, so it’s totally not plausible that temperature could decrease by 15 degrees as we’re increasing CO₂. And it’s implausible as well that temperature could increase by 17 degrees as we’re increasing CO₂ under the RCP 8.5 scenario. But as I understand it, this is the point Dr. Frank is trying to make.”

There it is. He thinks an uncertainty statistic is a physical temperature.

Is that one of your main objections, too, ATTP? Grounded in the idea that a statistic is a temperature?

Are you, apparently a trained physicist, truly that naïve?

And if you’re not, would you not worry that such a basic misunderstanding of uncertainty might indicate a profound lack of training in physical error analysis?

...and Then There's Physics
Reply to  ...and Then There's Physics
November 21, 2017 8:59 am

Pat,
I’m not really playing around. I was trying to establish if there was some realistic scenario under which you would take a step back and maybe consider that you were wrong. I think I have now established (to my satisfaction, admittedly) that there is not. I’m more than happy to leave it at that.

Reply to  ...and Then There's Physics
November 21, 2017 10:42 am

You made a disparaging reference to my satisfaction above, ATTP. And now you’ve credited yours. That’s not very consistent of you, is it?

Not one of your analytical criticisms has been valid, ATTP. Not one. That also is apparently to your satisfaction.

Let the record show you dodged the opportunity to make your case.

...and Then There's Physics
Reply to  ...and Then There's Physics
November 21, 2017 11:58 am

Pat,
What I’m finding remarkable is that you’ve had a paper rejected 7 times (I think) and had numerous others criticise what you’ve presented. Rather than trying to find some way to engage better with your critics, you seem pretty convinced that you’re completely right, and they’re completely wrong, and you’ve managed to insult most of them in the process. Even if by some chance you are correct, this seems a poor way in which to establish this, but my guess is that that is not actually your intent. I may be wrong, of course.

AndyG55
Reply to  ...and Then There's Physics
November 21, 2017 12:19 pm

Pat ought to submit it to a mathematics journal.

Climatologists have proven they don’t have the wherewithal to comprehend.

They continue to be utterly blinkered because of their ideology and lower level of cognitive function.

Like you, mr ZERO-physics.

...and Then There's Physics
Reply to  ...and Then There's Physics
November 21, 2017 12:21 pm

Pat ought to submit it to a mathematics journal

Yes, this seems like a good suggestion.

Reply to  ...and Then There's Physics
November 21, 2017 1:34 pm

Where is a correct criticism, ATTP? Yours are certainly not correct. Nor are Patrick Brown’s, nor Nick Stokes’.

My responses to the reviewers are available for examination. Some of them made mistakes similar to yours. Others made different mistakes.

Not even one rejectionist reviewer ever grasped the core point that linear extrapolation of forcing warrants linear propagation of error. You have never shown any awareness of that either.

Nevertheless, that warrant alone validates the uncertainty envelopes and the conclusion that climate models are predictively worthless.

How else should I engage my critics except to produce a valid study?

I’ve engaged you. I’ve engaged Patrick Brown. I engaged Nick Stokes and Gavin Schmidt. I engaged my critics. All with polite and detailed argument. I have demonstrated the intellectual poverty of your criticisms.

What good has it done?

I’m in the position of someone who might have tried to publish on the failings of Marxism in Soviet journals. Rejectionist reviewers and editorial malfeasance would rule the process.

Would the utter uniformity of their rejection indicate an invalid study?

Any hope for publication in a consensus climate journal has turned out to be a chimerical dream.

No ideologist is willing to entertain a disproof. That’s you people.

As to insults, I’ve not insulted anyone. Those I’ve called incompetent have merited that judgment; by supposing that ±K is a temperature for example. A statement of fact is not an insult.

Reply to  ...and Then There's Physics
November 21, 2017 1:36 pm

James Annan is a mathematician. Gavin Schmidt is a mathematician. Neither of them know the first thing about physical error analysis.

...and Then There's Physics
Reply to  ...and Then There's Physics
November 21, 2017 1:37 pm

Pat,
Since you’re the one with the ground-breaking idea, you should probably be the one defending/explaining it. In that light, can you briefly explain the basics of energy balance in the context of our climate, mentioning most of the major forcings/feedbacks?

Reply to  ...and Then There's Physics
November 21, 2017 1:58 pm

Transparent attempt to divert from a losing argument, ATTP.

Reply to  ...and Then There's Physics
November 21, 2017 1:59 pm

For the sake of readers, ATTP’s objection is rendered here: https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2643716

The error that you’re trying to propagate is not an error at every timestep, but an offset.

ATTP makes two objections:
1. The error does not arise in every time step.
2. The error is a constant offset, which can be subtracted away.

In response:
1. The manuscript shows CMIP5 simulated cloud error is ≥0.95 correlated among models. The error is thus not random but arises from model theory bias. It arises from the incorrect and/or incomplete physical theory deployed within the model itself.

This means long wave cloud forcing (LWCF) error also arises from within the model. LWCF error then necessarily makes a new appearance in every single step of a climate projection.

A multi-model, multi-year root-mean-square (rms) of LWCF error is a general model calibration error statistic. Model calibration error statistics express the uncertainty in a predicted magnitude.

When an error-ridden model is used to make sequential calculations, the model-inherent error is injected into every calculational step.

Each stepwise result is then erroneous. Each erroneous result is used as the initial value for the subsequent calculational step. Error must accumulate across the sequence of calculations.

But the magnitude of the error in each step is unknown, because no observations are available for reference.

As the error magnitude is unknowable, propagated uncertainty is the only available measure of reliability.

It is completely justified to propagate a calibration error uncertainty statistic to account for the increasing uncertainty due to an error of unknown magnitude reiterated in every sequential step of a stepwise calculation.
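One way to see why the root-sum-square is the right accounting is a small Monte Carlo sketch: give every step an error of fixed magnitude but unknown sign (drawn randomly here, purely for illustration), and the spread of the final results grows as sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)
u, n, trials = 1.0, 100, 100_000

# Each of n steps injects an error of magnitude u with unknown sign.
final_error = rng.choice([-u, u], size=(trials, n)).sum(axis=1)

print(final_error.std())   # empirical spread: ~10
print(np.sqrt(n) * u)      # root-sum-square prediction: exactly 10
```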

2. Lauer and Hamilton 2013 reported a multi-model 20-year annual mean root-mean-square (rms) error for simulated long wave cloud forcing. RMS error is a ± uncertainty statistic. For CMIP5 models the reported annual average LWCF rms error was ±4 W/m^2/year; applicable to the air temperature projection of any CMIP5 model.

A plus/minus uncertainty statistic is not a single-sign constant physical error. It cannot be subtracted away.

ATTP’s objections do not withstand critical examination.

ATTP wants to know what would convince me to agree that his obviously incorrect objections are *not* incorrect.

...and Then There's Physics
Reply to  ...and Then There's Physics
November 21, 2017 2:01 pm

Pat,
I’m actually trying to establish if you even understand the basics well enough to make the claims that you’re making. There’s no reason why you wouldn’t want to at least try to illustrate that you do, is there?

...and Then There's Physics
Reply to  ...and Then There's Physics
November 21, 2017 2:15 pm

Pat,
Actually, I didn’t say “2. The error is a constant offset, which can be subtracted away.” In light of this, why don’t you just demonstrate that you do indeed understand the basics of energy balance in the context of our climate?

Reply to  ...and Then There's Physics
November 21, 2017 2:44 pm

Pat Frank says: “Anyone can block anyone’s email account, Robert”

No Pat, the only place an email account can be blacklisted is on the server. Your email client cannot tell its server what to blacklist. I guess you are unable to differentiate between “client” and “server” in the world of email.

Your ignorance of such things is telling. Does it carry over to your knowledge of climate models?

Reply to  ...and Then There's Physics
November 21, 2017 3:46 pm

If you’re not composing a diversion, ATTP, you’re just displaying ignorance.

My analysis has nothing to do with climate physics. Or energy balance.

It’s all about the observable behavior of climate models and physical error analysis.

Your challenge is an irrelevant non-sequitur through and through. Ignorance or diversion: there’s no third possibility.

You’ve continued to ignore that you agree ± = + and a statistic = an energy. How about ±K = K; do you agree with that, too?

Reply to  ...and Then There's Physics
November 21, 2017 4:46 pm

Robert Kernodle, so you’ve decided that Science Bulletin doesn’t have access to their email server. Your decision to invent criticisms lets us know that you’re just so sincere.

Reply to  ...and Then There's Physics
November 21, 2017 5:00 pm

ATTP, you wrote, “Actually, I didn’t say ‘2. The error is a constant offset, which can be subtracted away.’”

To the contrary, that’s been your position all along. In the cross post on your own site, you say (January 26, 2017 at 6:10 pm), “However, this doesn’t mean that one should propagate those uncertainties through the model, because either this error is being compensated for elsewhere, or the model is not correctly representing the absolute state, but might still be useful for determine how the system responds to changes. (my bold)”

The bolded statement, specifically “changes,” is standard usage in consensus work for taking differences to remove a supposedly constant-offset simulation error. Several of my reviewers emphatically make this claim.

Like you, they misconstrue the ±4 W/m^2 calibration statistic to be a +4 W/m^2 offset error in forcing. Like you, they claim a statistic is an energy.

The simulated change in climate is projected state minus base-state, right? And all your constant-offset errors just subtract away. Isn’t that convenient. And doesn’t that reveal why you all argue so vehemently that ± = +.

But ±rmse calibration uncertainty is not physical error, ATTP. And ± ≠ +.

And this, also your statement there, is relevant to our discussion (January 27, 2017 at 8:55 am): “However, Frank is essentially arguing that an individual model should have a large uncertainty because of the uncertainty in the cloud forcing, but the cloud forcing in an individual models does not vary wildly from step to step; it may differ – in an absolute sense – from observations, but that difference doesn’t mean that the cloud forcing in that model will wildly vary relative to observation.

You there show a complete misunderstanding of uncertainty. You treat a ±rmse uncertainty statistic as though it were a physical energetic perturbation on the model.

A ±rmse calibration uncertainty has no impact whatever on model expectation values. You make a freshman mistake, ATTP. Is that competent?

Reply to  ...and Then There's Physics
November 21, 2017 5:18 pm

Pat Frank writes: “The domain name goes into the blocked sender list.”
…..
So, should they want to block xyz@gmail.com, they block the domain? Do you realize how ignorant that is? If the domain is blocked, EVERYBODY with a gmail is blocked.
..
Next Pat Frank makes another dumb assertion, “you’ve decided that Science Bulletin doesn’t have access to their email server.” You misinterpreted what I said. I said, “Your email client cannot tell its server what to blacklist.”
..
For someone writing a paper about computer models, you display a lack of understanding of something as simple as email. The user of a mail server can access the server via POP, IMAP, and often by web browser. Such access does not include blacklisting a “domain” (your term).

Reply to  ...and Then There's Physics
November 21, 2017 6:18 pm

Robert Kernodle, “So, should they want to block xyz@gmail.com, they block the domain? Do you realize how ignorant that is? If the domain is blocked, EVERYBODY with a gmail is blocked.”

I guess you’ve got me Robert. I should have written the email address goes into a blocked sender list.

Whoop-de-do. That sure means I know nothing of physical error analysis, alright.

Your line of argument is over the border and into foolish-land, Robert. Try sticking to physical error analysis, the topic actually at hand.

...and Then There's Physics
Reply to  ...and Then There's Physics
November 21, 2017 11:21 pm

Pat,

“My analysis has nothing to do with climate physics. Or energy balance.”

Yes, this may well be true, ironically. The problem, though, is that error analysis does require some understanding of the actual calculation/measurements. I’m trying to establish if you do indeed understand the underlying principles, because that is required if you are to do a proper error analysis.

AndyG55
Reply to  ...and Then There's Physics
November 21, 2017 11:52 pm

Mr Empty of physics.

You have shown by your comments that your mathematical knowledge is not up to understanding even the basics of error propagation.

It is very obvious that your whole comprehension is deeply flawed.

It is just part of being you.

http://www.populartechnology.net/2015/01/who-is-and-then-theres-physics.html

Reply to  ...and Then There's Physics
November 22, 2017 7:47 am

ATTP, climate model air temperature projections are fully demonstrated to be nothing but linear extrapolations of forcing. Therefore they are subject to linear propagation of error.

Refute that.

If you can’t refute that, you have no case.
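For the shape of that claim, here is a minimal sketch of a linear forcing emulator; the sensitivity coefficient and forcing series below are invented for illustration and are not the manuscript’s fitted emulation values (the ~0.035 W/m^2 per year figure is the one cited later in this thread):

```python
import numpy as np

years = np.arange(0, 101)
dF = 0.035 * years   # W/m^2 above the base year: invented trajectory for illustration
a = 0.4              # K per (W/m^2), hypothetical linear sensitivity

dT = a * dF          # if projections are linear in forcing, anomalies follow this
print(dT[-1])        # 1.4 K after 100 years of this invented forcing history
```

If the emulation is exactly linear in forcing, standard linear propagation then maps a per-step flux uncertainty σ_F into a per-step temperature uncertainty |a|·σ_F.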

Your excursion into energy balance is an attempt (most likely studied) to divert the conversation into an irrelevancy; a bid to rescue a lost debate.

You have also signally ignored addressing your own mistakes of claiming a statistic is an energy, of claiming ± = +, and of claiming a ±rmse calibration uncertainty statistic is a single-sign positive offset physical error.

Given all that, one is left bemused that you’d offer yourself as a judge of whether someone else is capable of physical error analysis. Your mistaken claims demonstrate you know nothing about it.

Your silence on the point also indicates you now admit that your adamant support of offset errors indeed includes that they subtract away.

You’ve yet to answer by the way whether you, like Patrick Brown and Gavin Schmidt, think that ±K = K.

...and Then There's Physics
Reply to  ...and Then There's Physics
November 22, 2017 8:36 am

Pat,

“ATTP, climate model air temperature projections are fully demonstrated to be nothing but linear extrapolations of forcing. Therefore they are subject to linear propagation of error.”

If we’re talking about GCMs then they’re three-dimensional, dynamical simulations. However, it is roughly correct that the globally averaged change in temperature depends approximately linearly on the change in forcing. To propagate the error, you would need to know the error in the change in forcing, not simply the error in one component of the forcings.
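A sketch of the propagation ATTP describes here, with invented numbers throughout: on this reading, the quantity whose uncertainty is carried through is the change in forcing itself:

```python
# Hypothetical numbers only. If dT = a * dF, textbook linear propagation
# carries the uncertainty of dF -- the change in forcing -- into dT.
a = 0.4           # K per (W/m^2), invented sensitivity
dF = 3.5          # W/m^2, invented change in forcing over a projection
sigma_dF = 0.2    # +/- W/m^2, invented uncertainty in that change

sigma_dT = abs(a) * sigma_dF
print(sigma_dT)   # ~0.08 K under this reading of what should be propagated
```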

Reply to  ...and Then There's Physics
November 22, 2017 10:02 am

ATTP, thank-you for agreeing that GCM air temperature projections are linear extrapolations of forcing. Not your “roughly linear,” but demonstrated to be exactly linear.

Linear propagation of error obviously follows.

You wrote, “you would need to know the error in the change in forcing.”

I need only know the uncertainty in the simulated tropospheric thermal energy flux. The calibration statistic, LWCF ±rmse, gives me exactly that.

The average annual change in forcing since 1979 is about 0.035 W/m^2. The lower limit annual average uncertainty in simulated tropospheric thermal energy flux is ±4 W/m^2.

That annual 0.035 W/m^2 enters the tropospheric thermal energy flux and becomes part of it. The ±4 W/m^2 uncertainty tells us that we have no idea how clouds will respond to a 0.035 W/m^2 perturbation. That represents physical ignorance, ATTP, and is the source of the propagated uncertainty.

LWCF ±rmse represents a general model theory-bias error. It is injected into every single simulation step of a projection. Uncertainty in projected air temperature necessarily grows stepwise.
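A sketch of the stepwise growth being asserted, under this comment’s assumption that an identical ±4 W/m^2 calibration uncertainty enters every annual step and accumulates in quadrature (the very assumption the other side disputes):

```python
import numpy as np

sigma_step = 4.0        # +/- W/m^2 per annual step: the LWCF rmse quoted above
n = np.arange(1, 101)   # projection years

# Root-sum-square of n identical, independent per-step uncertainties:
sigma_total = np.sqrt(n) * sigma_step
print(sigma_total[[0, 24, 99]])   # [ 4. 20. 40.] -- the envelope grows as sqrt(n)
```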

...and Then There's Physics
Reply to  ...and Then There's Physics
November 22, 2017 10:43 am

Pat,

“ATTP, thank-you for agreeing that GCM air temperature projections are linear extrapolations of forcing. Not your “roughly linear,” but demonstrated to be exactly linear.”

No, they are not exactly linear.

“I need only know the uncertainty in the simulated tropospheric thermal energy flux. The calibration statistic, LWCF ±rmse, gives me exactly that.”

No, you need to know the change in forcing and its uncertainty. The latter is not the same as the uncertainty in the tropospheric thermal energy flux.

Reply to  ...and Then There's Physics
November 22, 2017 1:50 pm

ATTP, I used the IPCC SRES or Meinshausen forcings throughout. The emulations are excellent.

You wrote, “No, you need to know the change in forcing and its uncertainty. The latter is not the same as the uncertainty in the tropospheric thermal energy flux.”

Your second sentence is nearly correct. Your first is not.

The uncertainty I used is the ±rmse in simulated long wave cloud forcing. Cloud forcing and CO2 forcing jointly enter the tropospheric thermal energy flux.

Tropospheric thermal energy flux determines air temperature. Uncertainty in tropospheric thermal energy flux puts an uncertainty into the derived air temperature.

Simulated LWCF ±rmse is an uncertainty in tropospheric thermal energy flux. It puts an uncertainty into the derived air temperature.

I don’t need to know the uncertainty in CO2 forcing at all. I can derive an uncertainty in projected air temperature from simulated LWCF ±rmse.

If you can’t see that, you’re lost.

However, according to Etminan et al. (2016), Geophysical Research Letters, 43(24), 12,614–12,623, the uncertainty in CO2 long wave forcing is about ±5% (1 SD).

The simulated LWCF ±rmse provides a lower limit of uncertainty. If simulated LWCF ±rmse were combined with the uncertainty in CO2 forcing, the projection uncertainty envelopes would only become larger.
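In numbers (the CO2 forcing magnitude below is hypothetical; the ±4 W/m^2 and ~5% figures are the ones quoted above), a quadrature combination of the two components can only widen the envelope:

```python
import math

sigma_lwcf = 4.0           # +/- W/m^2, the LWCF calibration rmse quoted above
F_co2 = 1.8                # W/m^2, hypothetical CO2 forcing magnitude
sigma_co2 = 0.05 * F_co2   # ~5% (1 SD), per the Etminan et al. figure

combined = math.sqrt(sigma_lwcf**2 + sigma_co2**2)
print(combined)            # ~4.001 W/m^2 -- never smaller than either component
```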

...and Then There's Physics
Reply to  ...and Then There's Physics
November 22, 2017 2:25 pm

Pat,
I’ll repeat this one more time. To do what you’re trying to do, you need to know the uncertainty in the change in forcing (remember it’s linear in change in forcing). This is not what you are using.

Reply to  ...and Then There's Physics
November 22, 2017 3:43 pm

ATTP, “you need to know the uncertainty in the change in forcing.”

No, I don’t, ATTP. I need only know the uncertainty in the simulated tropospheric forcing. And that I do know, as a lower limit.

A GCM simulates the cloud forcing as it varies across every projection year. That simulation includes the effect of increasing [CO2] on clouds and whatever else.

The GCM LWCF calibration error statistic tells us that the simulated annual average tropospheric thermal energy flux, including the simulated response to the annual change in CO2 forcing, is accurate only to ±4 W/m^2.

The simulated tropospheric thermal energy flux is never known to more accuracy than that ±rmse; a lower limit of resolution.

That means the change in air temperature due to increased CO2 is conditioned by our ignorance of the physically correct magnitude of the tropospheric thermal energy flux, within which CO2 has its effect.

That effect is worth only 0.035 W/m^2/year. That perturbation is completely lost within the annual average LWCF uncertainty of ±4 W/m^2/year.

If the GCMs cannot simulate the tropospheric thermal energy flux to better resolution than ±4 W/m^2, they cannot resolve a 0.035 W/m^2 perturbation and cannot accurately simulate the cloud response to that perturbation.

That means they cannot simulate the change in tropospheric thermal energy flux due to that perturbation or the corresponding response of air temperature.

At every time step, the simulated cloud cover is always freshly wrong, by some unknown amount.

Our ignorance of the relative phase-space positions of the simulated air temperature and the physically correct air temperature increases with every single projection simulation time-step.

Hence the increasing uncertainty bounds.

This analysis and conclusion should be obvious to any trained physicist (or chemist, or engineer).
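As a one-line arithmetic check of the resolution claim, using the figures quoted in this comment:

```python
signal = 0.035       # W/m^2, the quoted annual-average change in forcing
resolution = 4.0     # +/- W/m^2, the quoted per-step LWCF calibration uncertainty

print(resolution / signal)   # ~114: the annual perturbation sits far below this bound
```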

AndyG55
Reply to  Pat Frank
November 18, 2017 2:41 am

Oh look, AGW troll “ZERO-physics” brings his arrant nonsense and base level ignorance……

….. in a vain attempt to help the drowning Nick.

AndyG55
Reply to  Pat Frank
November 18, 2017 2:42 am

Gavin Schmidt…… roflmao..

The guy wouldn’t even face up to Roy Spencer.

Cannot afford to have his mathematical malfeasance exposed.

Reply to  Pat Frank
November 21, 2017 4:41 pm

ATTP, “Actually, I didn’t say 2. The error is a constant offset, which can be subtracted away.”

To the contrary, that’s been your position all along. In the cross post of Patrick Brown’s video on your own site, you say (January 26, 2017 at 6:10 pm), “However, this doesn’t mean that one should propagate those uncertainties through the model, because either this error is being compensated for elsewhere, or the model is not correctly representing the absolute state, but might still be useful for determine how the system responds to changes.” (my bold)

The bolded statement, specifically “changes,” is standard usage in consensus work for taking differences to remove a supposedly constant-offset simulation error. Several of my reviewers emphatically make this claim.

The simulated change in climate is projected state minus base-state, right? And all your constant-offset errors just subtract away, don’t they. Isn’t that convenient. And doesn’t that reveal why you all argue so vehemently that ± = +.

The accuracy wonderfulness of taking differences to remove error has been your position right from the start. But ±uncertainty is not physical error, ATTP. And ± ≠ +.

And this, also your statement on your site, is relevant to our discussion here (January 27, 2017 at 8:55 am): “However, Frank is essentially arguing that an individual model should have a large uncertainty because of the uncertainty in the cloud forcing, but the cloud forcing in an individual models does not vary wildly from step to step; it may differ – in an absolute sense – from observations, but that difference doesn’t mean that the cloud forcing in that model will wildly vary relative to observation.”

You there show a complete misunderstanding of uncertainty. You treat a ±rmse uncertainty statistic as though it were a physical energetic perturbation on the model.

A ±rmse calibration uncertainty has no impact whatever on model expectation values. You make a freshman mistake, ATTP. Is that competent?

November 19, 2017 3:19 pm

Let’s try again: here’s my response to Dr. Roche.
+++++++++++++++
From: Patrick Frank pfrankzzz@xxxxx.net
Subject: Re: manuscript gmd-2017-281
Date: November 14, 2017 at 9:42 PM
To: Didier M. Roche didier.roche@xxx.xxx.fr
Cc: jules@xxxxx.xxx.uk, editorial@xxxxx.org

Dear Dr. Roche,

Thank-you for your email.

This will be short. Quote and response.

You wrote, “…premises that the error arising from simulated cloud cover on an annual mean is a 4 W.m-2 error in long wave radiation calculations in CMIP models.”

This is not my premise. It is a result reported in Lauer and Hamilton, 2013.

The quantity ±4 W/m^2 is an rms uncertainty statistic. It is not a positive-sign physical error, as you represented it.

“By thus doing an average over the year, you ignore completely their variations over the year. … you also ignore the fact that different types of clouds (low vs. high for example) have different radiation effects and that therefore their vertical distribution is also of major importance.”

Calculating annual GMST does not presume that every point on Earth is of uniform temperature every day, everywhere, all year.

Calculating global average irradiance does not presume that every point on Earth receives 340 W/m^2.

Calculating global average cloud forcing (Hartmann, 1992; Stephens, 2005) does not presume all clouds are the same everywhere.

Taking an average does not presume that a uniform magnitude reigns everywhere.

The complete ignorance reflected in your argument calls for a judgment of incompetence.

“However, the valid point of Dr. Annan is that the *annual* timescale is explain[ed] nowhere in the manuscript.”

My source, Lauer and Hamilton, 2013, reported annual means; mentioned in ms line 575. Manuscript lines 578-579 show exactly how and where the annual timescale arises. SI Section 6.2 derived the annual timescale exactly.

Your statement is factually and demonstrably wrong; as was that of Dr. Annan.

You may have read the manuscript twice, but you did not understand it even once.

You are a climate modeler, Dr. Roche. Your publication list shows no relevant expertise; a condition obvious in the quality of your commentary.

Like Dr. Annan you have profound professional and career conflicts of interest with a manuscript demonstrating that climate models have no predictive value, which they do not.

You people are determinedly rejectionist. Protocol is your cover.

Yours sincerely,

Pat

Patrick Frank, Ph.D.
Palo Alto, CA 94301
email: pfrankzzz@xxxx.xx
++++++++++++++++++++++++++++++++++++
These things are, we conjecture, like the truth;
But as for certain truth, no one has known it.
Xenophanes, 570-500 BCE
++++++++++++++++++++++++++++++++++++


November 19, 2017 3:24 pm

Dear Mod – attempts to post my reply to Dr. Roche seem to disappear into the ether. Can you please check if it’s in moderation?

[Done, two copies found. One deleted, one released. .mod]

November 19, 2017 3:27 pm

Oops. Mod, please never mind. I see my reply above. Didn’t notice it on first go-round. 🙂
Please delete my last comments about it.