A skeptic attempts to break the ‘pal review’ glass ceiling in climate modeling

Propagation of Error and the Reliability of Global Air Temperature Projections

Guest essay by Pat Frank

Regular readers at Anthony’s Watts Up With That will know that for several years, since July 2013 in fact, I have been trying to publish an analysis of climate model error.

The analysis propagates a lower limit calibration error of climate models through their air temperature projections. Anyone reading here can predict the result. Climate models are utterly unreliable. For a more extended discussion see my prior WUWT post on this topic (thank-you Anthony).

The bottom line is that when it comes to a CO2 effect on global climate, no one knows what they’re talking about.

Before continuing, I would like to extend a profoundly grateful thank-you! to Anthony for providing an uncensored voice to climate skeptics, over against those who would see them silenced. By “climate skeptics” I mean science-minded people who have assessed the case for anthropogenic global warming and have retained their critical integrity.

In any case, I recently received my sixth rejection; this time from Earth and Space Science, an AGU journal. The rejection followed the usual two rounds of uniformly negative but scientifically meritless reviews (more on that later).

After six tries over more than four years, I now despair of ever publishing the article in a climate journal. The stakes are just too great. It’s not the trillions of dollars that would be lost to sustainability troughers.

Nope. It’s that if the analysis were published, the career of every single climate modeler would go down the tubes, starting with James Hansen. Their competence comes into question. Grants disappear. Universities lose enormous income.

Given all that conflict of interest, what consensus climate scientist could possibly provide a dispassionate review? They will feel justifiably threatened. Why wouldn’t they look for some reason, any reason, to reject the paper?

Somehow climate science journal editors have seemed blind to this obvious conflict of interest as they chose their reviewers.

With the near hopelessness of publication, I have decided to make the manuscript widely available as samizdat literature.

The manuscript with its Supporting Information document is available without restriction here (13.4 MB pdf).

Please go ahead and download it, examine it, comment on it, and send it on to whomever you like. For myself, I have no doubt the analysis is correct.

Here’s the analytical core of it all:

Climate model air temperature projections are just linear extrapolations of greenhouse gas forcing. Therefore, they are subject to linear propagation of error.

Complicated, isn’t it. I have yet to encounter a consensus climate scientist able to grasp that concept.

Willis Eschenbach demonstrated that climate models are just linearity machines back in 2011, by the way, as did I in my 2008 Skeptic paper and at CA in 2006.

The manuscript shows that this linear equation …

ΔT(K) = fCO2 × 33 K × [(F0 + ΣΔFi)/F0] + a

… will emulate the air temperature projection of any climate model; fCO2 reflects climate sensitivity and “a” is an offset. Both coefficients vary with the model. The parenthetical term is just the fractional change in forcing. The air temperature projections of even the most advanced climate models are hardly more than y = mx+b.
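To make that claim concrete, here is a minimal numerical sketch of such an emulator in Python, assuming the equation takes the form shown above; the F0, fCO2, and a values are illustrative stand-ins, not coefficients fitted to any actual GCM.

import numpy as np

F0 = 33.946          # baseline greenhouse forcing (W/m^2); assumed value for illustration
f_co2 = 0.42         # model-specific sensitivity coefficient (illustrative)
a = -f_co2 * 33.0    # offset chosen so the anomaly is zero at zero added forcing

def emulate_dT(delta_F):
    # emulated air temperature anomaly (K) from annual forcing increments delta_F (W/m^2)
    return f_co2 * 33.0 * (F0 + np.cumsum(delta_F)) / F0 + a

years = np.arange(2005, 2100)
dF = np.full(years.size, 0.035)   # ~0.035 W/m^2 per year, a roughly RCP4.5-like ramp
print(f"{emulate_dT(dF)[-1]:.2f} K by {years[-1]}")   # ~1.4 K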

The manuscript demonstrates dozens of successful emulations, such as these:

[Figure: emulations of two CMIP5 GCM air temperature projections; see legend below]

Legend: points are CMIP5 RCP4.5 and RCP8.5 projections. Panel ‘a’ is the GISS GCM Model-E2-H-p1. Panel ‘b’ is the Beijing Climate Center Climate System GCM Model 1-1 (BCC-CSM1-1). The PWM lines are emulations from the linear equation.

CMIP5 models display an inherent calibration error of ±4 Wm-2 in their simulations of long wave cloud forcing (LWCF). This is a systematic error that arises from incorrect physical theory. It propagates into every single iterative step of a climate simulation. A full discussion can be found in the manuscript.
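If that is so, the propagation arithmetic is short enough to sketch. The snippet below assumes, as the manuscript argues, that the ±4 Wm-2 calibration error enters every annual projection step through the emulation equation and accumulates in root-sum-square; the coefficients are the same illustrative stand-ins used in the sketch above.

import numpy as np

F0, f_co2 = 33.946, 0.42            # illustrative values, as above
u_step = f_co2 * 33.0 * 4.0 / F0    # per-step uncertainty (K) from +/-4 W/m^2, ~1.6 K

n = np.arange(1, 96)                # projection steps 1..95 (2005-2100)
u_total = u_step * np.sqrt(n)       # root-sum-square of n identical per-step terms
print(f"+/-{u_total[-1]:.1f} K after {n[-1]} steps")   # ~ +/-16 K

That square-root-of-n growth is exactly the step the reviewers dispute.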

The next figure shows what happens when this error is propagated through CMIP5 air temperature projections (starting at 2005).

[Figure: CMIP5 projections with propagated LWCF calibration error envelopes; see legend below]

Legend: Panel ‘a’ points are the CMIP5 multi-model mean anomaly projections of the 5AR RCP4.5 and RCP8.5 scenarios. The PWM lines are the linear emulations. In panel ‘b’, the colored lines are the same two RCP projections. The uncertainty envelopes are from propagated model LWCF calibration error.

For RCP4.5, the emulation departs from the mean near projection year 2050 because the GHG forcing has become constant.

As a monument to the extraordinary incompetence that reigns in the field of consensus climate science, I have made the 29 reviews and my responses for all six submissions available here for public examination (44.6 MB zip file, checked with Norton Antivirus).

When I say incompetence, here’s what I mean and here’s what you’ll find.

Consensus climate scientists:

1. Think that precision is accuracy

2. Think that a root-mean-square error is an energetic perturbation on the model

3. Think that climate models can be used to validate climate models

4. Do not understand calibration at all

5. Do not know that calibration error propagates into subsequent calculations

6. Do not know the difference between statistical uncertainty and physical error

7. Think that a “±” uncertainty means a positive error offset

8. Think that fortuitously cancelling errors remove physical uncertainty

9. Think that projection anomalies are physically accurate (never demonstrated)

10. Think that projection variance about a mean is identical to propagated error

11. Think that a “±K” uncertainty is a physically real temperature

12. Think that a “±K” uncertainty bar means the climate model itself is oscillating violently between ice-house and hot-house climate states

Item 12 is especially indicative of the general incompetence of consensus climate scientists.

Not one of the PhDs making that supposition noticed that a “±” uncertainty bar passes through, and cuts vertically across, every single simulated temperature point. Not one of them figured out that their “±” vertical oscillations meant that the model must occupy the ice-house and hot-house climate states simultaneously!
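For readers who want the precision-versus-accuracy distinction (list items 1, 3, and 10) in a few lines, here is a toy illustration; every number in it is invented, and nothing comes from any climate model.

import numpy as np

rng = np.random.default_rng(0)
truth = 288.0                                        # hypothetical "true" value (K)
bias = 2.0                                           # shared systematic error (K)
ensemble = truth + bias + rng.normal(0, 0.1, 1000)   # tightly clustered model runs

print(f"ensemble spread (precision): {ensemble.std():.2f} K")              # ~0.10 K
print(f"mean error vs truth (accuracy): {ensemble.mean() - truth:.2f} K")  # ~2.00 K

Comparing ensemble members to one another recovers the 0.1 K spread but can never reveal the shared 2 K bias; only a comparison against the truth, i.e. a calibration, can.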

If you download them, you will find these mistakes repeated and ramified throughout the reviews.

Nevertheless, my manuscript editors apparently accepted these obvious mistakes as valid criticisms. Several have the training to know the manuscript analysis is correct.

For that reason, I have decided their editorial acuity merits them our applause.

Here they are:

  • Steven Ghan, Journal of Geophysical Research-Atmospheres
  • Radan Huth, International Journal of Climatology
  • Timothy Li, Earth Science Reviews
  • Timothy DelSole, Journal of Climate
  • Jorge E. Gonzalez-Cruz, Advances in Meteorology
  • Jonathan Jiang, Earth and Space Science

Please don’t contact or bother any of these gentlemen. On the other hand, one can hope some publicity leads them to blush in shame.

After submitting my responses showing the reviews were scientifically meritless, I asked several of these editors to have the courage of a scientist and publish over meritless objections. After all, in science, analytical demonstrations are bulletproof against criticism. However, none of them rose to the challenge.

If any journal editor or publisher out there wants to step up to the scientific plate after examining my manuscript, I’d be very grateful.

The above journals agreed to send the manuscript out for review. Determined readers might enjoy the few peculiar stories of non-review rejections in the appendix at the bottom.

Really weird: several reviewers inadvertently validated the manuscript while rejecting it.

For example, the third reviewer in JGR round 2 (JGR-A R2#3) wrote that,

“[emulation] is only successful in situations where the forcing is basically linear …” and “[emulations] only work with scenarios that have roughly linearly increasing forcings. Any stabilization or addition of large transients (such as volcanoes) will cause the mismatch between this emulator and the underlying GCM to be obvious.”

The manuscript directly demonstrated that every single climate model projection was linear in forcing. The reviewer’s admission of linearity is tantamount to a validation.

But the reviewer also set a criterion by which the analysis could be verified — emulate a projection with non-linear forcings. He apparently didn’t check his claim before making it (big oh, oh!) even though he had the emulation equation.

My response included this figure:

[Figure: linear emulations of Hansen's 1988 scenario A, B, and C projections; see legend below]

Legend: The points are Jim Hansen’s 1988 scenario A, B, and C. All three scenarios include volcanic forcings. The lines are the linear emulations.

The volcanic forcings are non-linear, but climate models extrapolate them linearly. The linear equation will successfully emulate linear extrapolations of non-linear forcings. Simple. The emulations of Jim Hansen’s GISS Model II simulations are as good as those of any climate model.

The editor was clearly unimpressed by the demonstration, and by the fact that the reviewer had inadvertently validated the manuscript analysis.

The same incongruity of inadvertent validations occurred in five of the six submissions: AM R1#1 and R2#1; IJC R1#1 and R2#1; JoC, #2; ESS R1#6 and R2#2 and R2#5.

In his review, JGR R2 reviewer 3 immediately referenced information found only in the debate I had (and won) with Gavin Schmidt at Realclimate. He also used very Gavin-like language. So, I strongly suspect this JGR reviewer was indeed Gavin Schmidt. That’s just my opinion, though. I can’t be completely sure because the review was anonymous.

So, let’s call him Gavinoid Schmidt-like. Three of the editors recruited this reviewer. One expects they called in the big gun to dispose of the upstart.

The Gavinoid responded with three mostly identical reviews. They were among the most incompetent of the 29. Every one of the three included mistake #12.

Here’s Gavinoid’s deep thinking:

“For instance, even after forcings have stabilized, this analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states.”

And there it is. Gavinoid thinks the increasingly large “±K” projection uncertainty bars mean the climate model itself is oscillating increasingly wildly between ice-house and hot-house climate states. He thinks a statistic is a physically real temperature.

A naïve freshman mistake, and the Gavinoid is undoubtedly a PhD-level climate modeler.

Gavinoid’s remaining analytical mistakes are mostly list items 2, 5, 6, 10, and 11. If you download the paper and Supporting Information, section 10.3 of the SI includes a discussion of the total hash the Gavinoid made of a Stefan-Boltzmann analysis.

And if you’d like to see an extraordinarily bad review, check out ESS round 2 review #2. It apparently passed editorial muster.

I can’t finish without mentioning Dr. Patrick Brown’s video criticizing the YouTube presentation of the manuscript analysis. This was my 2016 talk for the Doctors for Disaster Preparedness. Dr. Brown’s presentation was also cross-posted at “andthentheresphysics” (named with no appreciation of the irony) and on YouTube.

Dr. Brown is a climate modeler and post-doctoral scholar working with Prof. Kenneth Caldeira at the Carnegie Institution, Stanford University. He kindly notified me after posting his critique. Our conversation about it is in the comments section below his video.

Dr. Brown’s objections were classic climate modeler, making list mistakes 2, 4, 5, 6, 7, and 11.

He also made the nearly unique mistake of confusing a root-sum-square average of calibration error statistics with an average of physical magnitudes; nearly unique because one of the ESS reviewers made the same mistake.

Mr. andthentheresphysics weighed in with his own mistaken views, both at Patrick Brown’s site and at his own. His blog commentators expressed fatuous insubstantialities and his moderator was tediously censorious.

That’s about it. Readers moved to mount analytical criticisms are urged to first consult the list and then the reviews. You’re likely to find your objections critically addressed there.

I made the reviews easy to appraise by starting them with a summary list of reviewer mistakes. That didn’t seem to help the editors, though.

Thanks for indulging me by reading this.

I felt a true need to go public, rather than submitting in silence to what I see as reflexive intellectual rejectionism and indeed a noxious betrayal of science by the very people charged with its protection.

Appendix of Also-Ran Journals with Editorial ABM* Responses

Risk Analysis. L. Anthony (Tony) Cox, chief editor; James Lambert, manuscript editor.

This was my first submission. I expected a positive result because they had no dog in the climate fight, their website boasts competence in mathematical modeling, and they had published papers on error analysis of numerical models. What could go wrong?

Reason for declining review: “the approach is quite narrow and there is little promise of interest and lessons that transfer across the several disciplines that are the audience of the RA journal.”

Chief editor Tony Cox agreed with that judgment.

A risk analysis audience not interested to discover there’s no knowable risk to CO2 emissions.

Right.

Asia-Pacific Journal of Atmospheric Sciences. Songyou Hong, chief editor; Sukyoung Lee, manuscript editor. Dr. Lee is a professor of atmospheric meteorology at Penn State, a colleague of Michael Mann, and altogether a wonderful prospect for unbiased judgment.

Reason for declining review: “model-simulated atmospheric states are far from being in a radiative convective equilibrium as in Manabe and Wetherald (1967), which your analysis is based upon.” and because the climate is complex and nonlinear.

Chief editor Songyou Hong supported that judgment.

The manuscript is about error analysis, not about climate. It uses data from Manabe and Wetherald but is very obviously not based upon it.

Dr. Lee’s rejection follows either a shallow analysis or a convenient pretext.

I hope she was rewarded with Mike’s appreciation, anyway.

Science Bulletin. Xiaoya Chen, chief editor, unsigned email communication from “zhixin.”

Reason for declining review: “We have given [the manuscript] serious attention and read it carefully. The criteria for Science Bulletin to evaluate manuscripts are the novelty and significance of the research, and whether it is interesting for a broad scientific audience. Unfortunately, your manuscript does not reach a priority sufficient for a full review in our journal. We regret to inform you that we will not consider it further for publication.”

An analysis that invalidates every single climate model study for the past 30 years, demonstrates that a global climate impact of CO2 emissions, if any, is presently unknowable, and indisputably proves the scientific vacuity of the IPCC does not reach a priority sufficient for a full review in Science Bulletin.

Right.

Science Bulletin then courageously went on to immediately block my email account.

*ABM = anyone but me; a syndrome widely apparent among journal editors.

...and Then There's Physics
October 23, 2017 12:11 am

Pat,
This has already been explained to you numerous times, so it’s unlikely that this attempt will be any more successful than previous attempts. The error that you’re trying to propagate is not an error at every timestep, but an offset. It simply influences the background/equilibrium state, rather than suggesting that there is an increasing range of possible states at every step. For example, if we ran two simulations with different solar forcings (but everything else the same), this wouldn’t suddenly mean that they would/could diverge with time; it would mean that they would settle to different background/equilibrium states.

Old England
Reply to  ...and Then There's Physics
October 23, 2017 1:11 am

@ and Then There’s Physics

I’m a layman and no mathematician but having read the first few pages of the paper it seems to me that your points are answering the wrong question. (?)

The point made, or so it appears to me, is that where there is uncertainty in the assumptions being made within a model then – if, as they should be, those uncertainties are expressed and included within the model – as the time-steps are calculated the uncertainty grows into a wide band with a diverging top and bottom spread of values. In other words, they diverge.

If the uncertainties are not included as part of the model then surely it is linear and unable to produce meaningful results?

If you have multiple uncertainties, as in climate, which are input into a model, then the spread or divergence must become even greater with time.
Some of those would seem to be (but are far from limited to) temperature and its effect on atmospheric water vapour levels; cloud formation and cloud cover; solar activity; volcanic activity, etc. Each would have an effect on some of the others, and with an amount of uncertainty which would need to be expressed.

As I said, I am a layman and would appreciate it if you could enlighten me.
Thanks

...and Then There's Physics
Reply to  Old England
October 23, 2017 1:20 am

The point made, or so it appears to me, is that where there is uncertainty in the assumptions being made within a model then – if, as they should be, those uncertainties are expressed and included within the model – as the time-steps are calculated the uncertainty grows into a wide band with a diverging top and bottom spread of values. In other words, they diverge.

Except this is not correct. An uncertainty only propagates if it applies at every step (i.e., if there is some uncertainty in the expected value at every step). If, however, some value is “wrong” by some amount that is the same at all time steps, then this does not propagate (by “wrong” I mean potentially different to reality). In this case, it is quite possible that the cloud forcing is “wrong” by a few W/m^2. What this would mean is that the equilibrium state would also then be “wrong”. It doesn’t mean, however, that the range of possible equilibrium states will grow with time, since this error does not propagate.

As I mentioned in the first comment, imagine we could run a perfect model in which every parameter exactly matched reality. Now imagine running the same model, apart from the Solar forcing being different by a few W/m^2. What would happen is that this would change the equilibrium state (there would be a constant offset between the “perfect” model and this other model). It would not mean that the difference between the model with the different solar forcing, and the “perfect” model would grow with time.
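The two positions can be put side by side in a toy iteration; this is a sketch with invented numbers, not a GCM. A constant offset in a damped system merely shifts where the trajectory settles, while an error injected independently at every step accumulates like sqrt(n) when nothing damps it. Which regime describes a climate model is precisely what is in dispute in this thread.

import numpy as np

rng = np.random.default_rng(1)
n, runs, sigma = 100, 2000, 0.05

# Case 1: constant offset in a damped iteration -> a constant shift, no growth.
def damped(forcing):
    T = np.zeros(n)
    for i in range(1, n):
        T[i] = T[i-1] + 0.1 * (forcing - T[i-1])   # relax toward the forced state
    return T

print(f"offset shift at end: {damped(1.1)[-1] - damped(1.0)[-1]:.3f}")   # ~0.100

# Case 2: per-step error with no damping -> a random walk, spread ~ sigma*sqrt(n).
walk_end = rng.normal(0, sigma, (runs, n)).cumsum(axis=1)[:, -1]
print(f"undamped spread after {n} steps: {walk_end.std():.3f}")          # ~0.5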

Hivemind
Reply to  Old England
October 23, 2017 1:59 am

“An uncertainty only propagates if it applies at every step”

Um… No. A climate model is essentially an attempt to integrate a bunch of co-dependent variables numerically. If you knew anything about numerical integration, you would know that errors propagate wildly. The tool is fundamentally unsuited to the purpose to which it is being put.

TimTheToolMan
Reply to  Old England
October 23, 2017 2:13 am

ATTP, stop focusing on the output result and thinking its error is within an acceptable range and therefore OK to propagate.

The issue is that the uncertainty that is propagated at each time step isn’t seen in the output, because the output has been constrained by design to be within “reasonable” values. This is seen as evidence the model is “doing the right thing”, but the real problem is that at every single time step the output is meaningless for a climate calculation, because the climate signal is much smaller than the error, and what we’re left with is a fitted result.

Now you’ll arc up and suggest GCMs aren’t fits and are based on physics, but again you’re mistaken, because there are components (e.g. clouds) that aren’t: they’re approximations, they’re fits, and by including them the models themselves are reduced to fits.

The whole GCM enterprise further relies on the assumption that errors cancel at each step throughout, and that’s a ridiculous assumption. Completely unjustified and most certainly incorrect. In fact there is a small (unintentional) built-in bias that results in an expected result.

AndyG55
Reply to  Old England
October 23, 2017 2:43 am

“imagine we could run a perfect model in which every parameter exactly matched reality.”

Yet you are UNABLE to run one where ANY parameter matches reality.

The ONLY thing you have is hallucinogenic anti-science IMAGINATION and FAIRY-TALES

Nick Stokes
Reply to  Old England
October 23, 2017 3:52 am

” If you knew anything about numerical integration, you would know that errors propagate wildly.”
I know lots about numerical integration (so does ATTP). I have spent a large part of my professional life doing it, in computational fluid dynamics, a regular engineering activity of which GCMs are a subset. Your statement is nonsense.

Reply to  Old England
October 23, 2017 7:23 am

As I mentioned in the first comment, imagine we could run a perfect model in which every parameter exactly matched reality. Now imagine running the same model, apart from the Solar forcing being different by a few W/m^2. What would happen is that this would change the equilibrium state (there would be a constant offset between the “perfect” model and this other model). It would not mean that the difference between the model with the different solar forcing, and the “perfect” model would grow with time.

There’s no reason that difference would be equal over time. That’s a sign you have created a linear model, and it’s a decidedly non-linear system you’re modeling.

If this is what you think, you guys are lost.

Editor
Reply to  Old England
October 23, 2017 8:32 am

Micro,

I totally agree. This is probably the most telling comment of this whole discussion.

rip

Reply to  Old England
October 23, 2017 9:24 am

ATTP, I was thinking about this more. You totally do not get that WV acts as a regulating medium: it actively alters the outgoing radiation response based on cooling temperatures, and not the stupid SB 4th-power decay; this is on top of that, it’s the bends in the clear-sky cooling profile. And since this is decidedly non-linear, and it controls the response to CO2, you’re not accounting for it in your models.

Think about how much the atm column shrinks at night. When it’s calm, it can only cool by radiation, and radiation is omnidirectional. Also, for every gram of water vapor, there is a 4.21 J exchange of IR for a condense/re-evaporation cycle as, let’s say, a 3,000-meter-tall stack cools.
Interestingly, it cools really quickly till air temps near the dew point, then it stops cooling. It’s just that there’s about -50 W/m^2 of radiation to space just through the optical window based on SB calculations, yet net radiation is less than -20 W/m^2. There’s about 35 W/m^2 of sensible heat keeping the surface temp from falling as quickly.
There’s a 90F difference in the middle of the spectrum; I’ve measured over 100F differences.
How much energy is about a 1 psi change between morning min T and afternoon max temps at the surface (plus enthalpy lost, water condensed)? Oh wait, without the pressure change, an average of about 3,300 W/m^3.

beng135
Reply to  Old England
October 23, 2017 9:42 am

attp says:

Now imagine running the same model, apart from the Solar forcing being different by a few W/m^2. What would happen is that this would change the equilibrium state (there would be a constant offset between the “perfect” model and this other model). It would not mean that the difference between the model with the different solar forcing, and the “perfect” model would grow with time.

Funny, your example uses the ONLY independent variable in the whole shebang. Use any other co-dependent variable, and your example is busted.

Leo Smith
Reply to  Old England
October 23, 2017 10:05 am

Except this is not correct. An uncertainty only propagates if it applies at every step (i.e., if there is some uncertainty in the expected value at every step). If, however, some value is “wrong” by some amount that is the same at all time steps, then this does not propagate (by “wrong” I mean potentially different to reality)

Only in linear systems

In chaotic systems a single butterfly flapping its wings once….

…and that is a huge point. Climate models treat the climate as a linear system, because we do not have computational tools that can address the uncertainty of non-linear systems.

To accept chaotic behaviour is merely to affirm ‘we can’t predict where this is going at all’. Or to put it in the vernacular. Climate science is at that level just bunk.

Even those people here who look for ‘cycles’ in climate with the ardent passion of ‘chemtrail’ observers may in the end be barking up only a slightly less egregious gum tree than the climate scientists. Chaotic behaviour produces quasi-periodic fluctuations: that is, over short time spans it may look briefly like a cycle, but then as it moves towards new attractors, it will enter a different ‘cycle’, and those of us who have built electronic circuits utilising chaotic feedback (super-regenerative radios) know that, absent a forcing signal, what you get is NOISE pure and simple, with no detectable single spectral component.

Nothing is more infuriating than to have someone lecturing you on the characteristics of linear equations, challenging you to disprove their finer points, when your whole position is predicated on a provable assertion that what is being modelled cannot be represented by linear equations in the first place.

Mark - Helsinki
Reply to  Old England
October 23, 2017 10:10 am

“Except this is not correct. An uncertainty only propagates if it applies at every step (i.e., if there is some uncertainty in the expected value at every step). If, however, some value is “wrong” by some amount that is the same at all time steps”

Incorrect: the value increases with each step over time. You are a completely anti-scientific chappy, clueless.

ATheoK
Reply to  Old England
October 23, 2017 11:52 am

“…and Then There’s Physics October 23, 2017 at 1:20 am

Except this is not correct. An uncertainty only propagates if it applies at every step (i.e., if there is some uncertainty in the expected value at every step)…”

Typical attp tactic, start off with a lie then spin sophistry round your false strawman.

Mark - Helsinki
Reply to  Old England
October 23, 2017 12:10 pm

Micro, thanks for exposing ATTP’s cut-and-paste knowledge. Once you get in depth with him, he vanishes every time and runs back to his echo chamber.

Admin
Reply to  Old England
October 23, 2017 1:59 pm

this annual average ±4.0 Wm-2 year-1 uncertainty in simulated LWCF is approximately ±150% larger than all the forcing due to all the anthropogenic greenhouse gases put into the atmosphere since 1900 (~2.6 Wm-2), and approximately ±114× larger than the average annual ~0.035 Wm-2 year-1 increase in greenhouse gas forcing since 1979

The error DOES in my opinion propagate.

And Then There’s Physics says If, however, some value is “wrong” by some amount that is the same at all time steps, then this does not propagate.

If the correction for cloud fraction error was a simple linear adjustment to models to correct the error, we would never have known about it. The adjustment would have been applied, and the model prediction would have aligned with observed cloud fraction.

Since nobody can accurately predict how clouds respond to GHG forcing, the margin for error grows with every iteration step. The uncertainty of how clouds will respond to the GHG forcing applied in a single step has to be carried through to the next iteration.

When the margin for error drastically exceeds what is physically plausible, I think we can safely assume the predictions of the model are total nonsense.

Page 23 of Pat Frank’s paper, hindcast cloud fraction error of global climate models.

M Courtney
Reply to  Old England
October 23, 2017 2:50 pm

ATTP says,

An uncertainty only propagates if it applies at every step (i.e., if there is some uncertainty in the expected value at every step). If, however, some value is “wrong” by some amount that is the same at all time steps, then this does not propagate (by “wrong” I mean potentially different to reality).

This assumes a linear response. It assumes that climate (and thus, presumably, weather) is a linear function of forcings.

If the initial value is “wrong” by some amount – or inaccurate by some amount – then that will affect the next iteration in some way.
If the next iteration is affected by the same amount every single time then the response is always constant.

Once again we have pseudoscience pretending that clouds don’t exist. That phase changes (water vapour to water droplets, for example) are smooth.

Why does ATTP worry about a declining Arctic Icecap when he doesn’t believe in non-linear phase changes? Melting can’t exist in his understanding of climate!
Except he has no understanding. He’s just a climate fanatic. It’s faith, not science.

Pat Frank
Reply to  Old England
October 23, 2017 9:04 pm

You’ve got the essence, Old England: “that where there is uncertainty in the assumptions being made within a model then – if, as they should be, those uncertainties are expressed and included within the model – as the time-steps are calculated the uncertainty grows into a wide band…”

You have grasped the central point that continually eludes ATTP and virtually every single climate modeler.

The error is systematic, resident in the model, and is introduced into a simulation by the model itself. It enters every simulation time-step, and necessarily produces an increasing uncertainty in the projection.

Look at ATTP’s reply to you. His “wrong by some amount” supposes a constant offset error and is a completely wrong description of the systematic error.

Look at manuscript Figure 5. Every single model has a different error profile with positive and negative excursions. I pointed this out to ATTP in prior conversations. He ignores it. Perhaps because he doesn’t understand the significance. Change the parameter set of any one model, and its error profile will be different.

But ATTP (and others) want to add up all the errors to get one number, and then assume that number is a constant offset error that will correct any model expectation value to be error-free. His (their) idea is beyond parody.

Then he goes on to suppose statistical uncertainty is physical error, i.e., ATTP: “the range of possible equilibrium states will grow with time, since this error does not propagate.”

ATTP makes a standard mistake of my reviewers, here specifically number 6, but he has already also made mistakes 4, 5, 7 and 8.

He makes those same mistakes over, and over again.

Pat Frank
Reply to  Old England
October 23, 2017 9:11 pm

TimTheToolMan gets it right, as usual.

Tim, do you have any idea why uncertainty is so opaque to climate modelers?

It’s dead obvious to any experimental scientist or engineer.

Pat Frank
Reply to  Old England
October 23, 2017 9:17 pm

In this post, Nick Stokes admitted that GCMs are engineering models. I.e., Nick: “a regular engineering activity of which GCMs are a subset.”

Engineering models are useless outside their calibration bounds. Nick has repudiated the entire global warming scary-2100 enterprise.

Yet another inadvertent validation in an attempted refutation. Thank-you, Nick.

Pat Frank
Reply to  Old England
October 23, 2017 9:30 pm

Eric Worrall, your comment is right on.

Thanks for posting Figure 5. It shows that every model has a different error profile, with positive and negative excursions.

Mere inspection of the figure shows how ludicrous is ATTP’s idea that all those errors should be merely added together into a number. And then subtracted away to make everything accurate. Only in consensus climate science.

Nick Stokes
Reply to  Old England
October 23, 2017 9:31 pm

“Engineering models are useless outside their calibration bounds.”
So what are the “calibration bounds” of, say, Nastran? Or Fluent, or Ansys? Pat, you don’t have a clue about engineering models.

Reply to  Nick Stokes
October 24, 2017 4:43 am

So what are the “calibration bounds” of, say, Nastran? Or Fluent, or Ansys?

Well, it’s obvious you don’t understand this.
Calibration isn’t defined by the simulator, but by the models as applied to the design you’re evaluating. And it’s in comparison to the real circuit in operation.

TimTheToolMan
Reply to  Old England
October 23, 2017 9:54 pm

Pat writes

Tim, do you have any idea why uncertainty is so opaque to climate modelers?

I don’t think it is. I think even Nick gets it and one day might even accept it (no Nick, it doesn’t mean your CFD work is dead, or that weather models are wrong – models have their place still!), but no climate modeler can admit to it because it’d be, well… a career-limiting move. And as the GCMs are the cornerstone of so much of our science today, untangling the mess would be horrendous. Better to let sleeping dogs lie.

Reply to  Old England
October 24, 2017 7:04 am

Pat Frank,

Look at manuscript Figure 5.

Ok, it shows latitudinal profiles of 25-year averaged model cloud fraction error versus cloud fraction observations averaged over a similar timescale. It demonstrates latitudinal error offsets between models and observations, as well as showing differences between models.

Every single model has a different error profile with positive and negative excursions.

Yes, this is well known and clearly understood by ATTP. Different models, different offsets.

Change the parameter set of any one model, and its error profile will be different.

Yes, this would obviously be true but how is it relevant to error propagation within a projection? Within an individual model projection run the parameter set will remain the same, thereby maintaining the same offset error.

Put in context of your Figure 5, your error propagation suggests that those error profiles should change quite dramatically over time. Why would that happen?

Pat Frank
Reply to  Old England
October 26, 2017 6:28 pm

Nick Stokes, no matter your diversionary sneering, I’m clued in enough to know that engineering models are unreliable outside their calibration bounds.

Pat Frank
Reply to  Old England
October 26, 2017 6:40 pm

Paulski0: “Within an individual model projection run the parameter set will remain the same, thereby maintaining the same offset error.”

Not correct, for two reasons. The parameters are not unique. They have large uncertainty widths. One can get the same apparent error with different suites of parameters. A given error is just representative. It does not transmit the true range of model errors. The uncertainty is made cryptic unless this is taken into account.

Second, even with unchanging parameter sets, any given projection simulation step is wrong by some unknown amount. Those wrong climate states are projected forward. Every step begins with initial value errors.

The projection error from step to step therefore varies, and in unknowable ways.

In a futures projection, one can’t know the errors. One only knows the uncertainty, by way of the propagated calibration error statistic. And uncertainty grows with each projection step because of increasing ignorance of the relative positions of the simulated state and the correct physical state in phase space.

Error propagation says nothing about error profiles in projection simulations. It addresses the reliability of the projection. Not its error.

pbweather
Reply to  ...and Then There's Physics
October 23, 2017 1:20 am

I think you and the reviewers may be missing the point here attp. Millions spent on building climate models…and a simple linear model can recreate them very closely…..surely this is worth publishing and worth investigating further?

Leo Smith
Reply to  pbweather
October 23, 2017 10:25 am

…and a simple linear model can recreate them very closely…..

A simple linear model is what climate models are, stripped of decorative complexity; but whilst models may be represented by a model of that nature, reality, it seems, is just too complicated for that class of model to have a snowball’s chance in hell of representing the vagaries of actual climate.

So I don’t know what you are saying, but it’s not worth spending a copper nickel on.

I looked into cutting-edge attempts by seriously bright mathematicians to even discern whether a given set of non-linear partial differential equations leads to a bounded set of solutions (broadly, a climate that never goes below snowball earth or boils the oceans dry), and we can’t even do THAT. Observationally, climate is amazingly stable.

But wobbles a lot as well.

And we have absolutely no idea whether it could one day wobble off to a whole new regime, just because a butterfly flapped its wings, let alone by injecting tons of CO2 into it. All we can say is that in times gone by, when CO2 was way greater than it is today, or is likely to be in the foreseeable future, the climate seems to have been stable enough for life to flourish.

The state of climate change science, stripped down to the actual science, which is almost none, is simply stated:

1/. We don’t know.
2/. Even if we did know the partial differentials governing it, we still wouldn’t know what the climate will do.
3/. We lack both the mathematics and the computational power to ever know better than that.
4/. Climate change is therefore not worth spending any grant money on.
5/. Even WUWT has no function beyond pointing out points 1, 2, 3 and 4.
6/. The IPCC is an organization without any purpose, since it exists to advise governments on situations that have no existence in reality.
7/. Renewable energy is therefore a crock of excrement, a pointless waste of money.
8/. Anyone who disagrees with any of the points above is like a holocaust denier.
9/. There is an urgent need to set up an international organisation to help whole swathes of the population come to terms with the facts that:
– the cheque isn’t in the post
– the tooth fairy doesn’t exist
– he/she wont love you in the morning.
– ‘man made climate change’ is as real as Tinkerbelle.

Pat Frank
Reply to  pbweather
October 23, 2017 9:36 pm

Great post, Leo Smith. Your number 1 has been the conclusion of my AGW assessment from the first. 🙂

If only you were head of the US National Academy. Or Pres. Trump’s science advisor. 🙂

...and Then There's Physics
Reply to  ...and Then There's Physics
October 23, 2017 1:24 am

Another point, that I think I’ve made to Pat before, is that if he is correct he should be able to easily demonstrate this. If you’re running computational models, one way to estimate the uncertainty is to simply run them many times with different initial conditions. If the uncertainty propagates as Pat suggests, then the range of results should reflect this. As I understand it, this has been done, and they do not.

Nick Stokes
Reply to  ...and Then There's Physics
October 23, 2017 2:36 am

This is clear even from the published CMIP5 simulations. Pat Frank claims that the error arising from cloud uncertainty alone should accumulate to an extent of ±16°C by 2100. And he seems to infer the cloud error from disagreement between the models. But the CMIP5 models clearly do not diverge by 16°C by 2100. Here is a plot: [image: CMIP5 projection spaghetti plot]

The spread is mainly due to the different scenarios; for an individual scenario it is maybe ±0.6°C.

AndyG55
Reply to  ...and Then There's Physics
October 23, 2017 2:45 am

Yes Nick

Hundreds of scam CO2-hatred “scenarios”.

NOT ONE anywhere near REALITY.

Thanks for drawing that to everybody’s attention.

Bob boder
Reply to  ...and Then There's Physics
October 23, 2017 3:41 am

Nick

And that’s your argument to establish the effectiveness of the models? Really?

Steve Keppel-Jones
Reply to  ...and Then There's Physics
October 23, 2017 5:43 am

That’s not true Andy! Give Nick his due. ONE of those models is quite close to reality – the one at the very bottom. The rest of the models should clearly be fired. But that one should be given a prize, and it shows that temperatures in 2100 will be about the same as today. So according to the one believable model, there is no C in CAGW, and no real W either. Great! Can we all pack up and go home now? And stop wasting money on this nonsense?

MJB
Reply to  ...and Then There's Physics
October 23, 2017 5:56 am

@aTTP (1:24am) and Nick Stokes (2:36am)

It seems the issue is not in getting different results with different initial conditions but rather running slightly different models from the same initial condition.

The simplest setup would be to select a single tunable parameter (e.g. clouds), vary the value up or down to create 2 model formulations, and run them both from the same initial conditions. The different values may cause divergence or other feedbacks/interactions may dampen it to insignificant.

If I understand the source of Nick’s spaghetti graph, the graph demonstrates the differences between models, not the potential uncertainty inherent in any one model. Each spaghetti line has its own uncertainty band that is not displayed.

...and Then There's Physics
Reply to  ...and Then There's Physics
October 23, 2017 6:06 am

MJB,
Yes, you could also do what you suggest (i.e., run with the same initial conditions, but different parameters). If we consider clouds, then there is probably a range of a few W/m^2. This would correspond to a potential difference of about 1 K; not even close to the ±15 K suggested by Pat Frank.

As far as the spaghetti graph is concerned, I think it is a combination of individual models run more than once and different models, so you are correct that it isn’t a true uncertainty. However, it does illustrate that the range is unlikely to be as large as suggested by Pat Frank.

Frenchie77
Reply to  ...and Then There's Physics
October 23, 2017 7:14 am

If constraints are being applied for each calculation then you are not getting modelled outputs but constrained outputs. Do the runs with no constraints to see the inherent validity of the underlying physics, not the hand-tailoring needed to sell a story.

But hey, if the need is to sell a story….

Editor
Reply to  ...and Then There's Physics
October 23, 2017 8:20 am

I find it hilarious that Nick and others still think the chart at this comment is relevant since it is pseudoscience crap:

https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2643766

How can anyone think wild guesses out to year 2100 can be considered good science, when most of it is UNVERIFIABLE! Models are a TOOL for research, not a way to create actual fact-based science, since it lacks real data for the next 83 years. This is what the AGW conjecture is based on: a puddle of unverifiable guesses.

Bwahahahahahahahahaha!!!

Real Meteorologists, who do short-term modeling for weather prediction over the next few days, know how quickly short-term predictions can spiral out of reality. I see them adjusting their forecasts daily, sometimes even within hours, as new information comes in, but they can still be waaaaay off anyway, as they were in my city just yesterday.

Models are a TOOL for research, not a creator of data.

Clyde Spencer
Reply to  ...and Then There's Physics
October 23, 2017 8:28 am

…and Then There’s Physics ,
You said, “However, it does illustrate that the range is unlikely to be as large as suggested by Pat Frank.” I’m not sure that you can justify that statement. The propagation of errors provides a probabilistic uncertainty range, which is an upper bound, not the most likely outcomes. That is, with numerous ensemble runs, they are most likely to cluster around the most probable values, but that doesn’t preclude them from sometimes reaching the maximum values if a large enough number of runs are made.

Bryan A
Reply to  ...and Then There's Physics
October 23, 2017 8:29 am

Extrapolating the apparent arc of the upper limit from the spaghetti plot of model runs, you reach a maximum divergence value of approximately 8.5 K to 9.5 K, truly slightly more than half the 15 K to 16 K suggested.

...and Then There's Physics
Reply to  ...and Then There's Physics
October 23, 2017 8:36 am

Clyde,

I’m not sure that you can justify that statement. The propagation of errors provides a probabilistic uncertainty range, which is an upper bound, not the most likely outcomes. That is, with numerous ensemble runs, they are most likely to cluster around the most probable values, but that doesn’t preclude them from sometimes reaching the maximum values if a large enough number of runs are made.

Normally what’s presented are 1, or 2, sigma uncertainties. This would mean that about 68% (1 sigma), or 95% (2 sigma), of your results should lie within this range. Depending on what is presented, you would expect either about 1/3 of your results (1 sigma), or 5% of your results (2 sigma), to lie outside the range. Therefore, if you ran a lot of simulations and the results never ended up outside the range, then the range would probably be too large.

Bryan,

Extrapolating the apparent arc of the upper limit from the spaghetti plot of model runs, you reach a maximum divergence value of approximately 8.5 K to 9.5 K, truly slightly more than half the 15 K to 16 K suggested.

Except, the range is mostly because of the range of emission scenarios, rather than scatter for a single scenario. Therefore, the overall range isn’t representative of some kind of model uncertainty.
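The coverage fractions quoted above are easy to check numerically (the 1-sigma figure is closer to 68%); a quick draw from a normal distribution:

import numpy as np

x = np.random.default_rng(2).normal(0.0, 1.0, 1_000_000)
print(f"within 1 sigma: {np.mean(np.abs(x) < 1):.3f}")   # ~0.683
print(f"within 2 sigma: {np.mean(np.abs(x) < 2):.3f}")   # ~0.954

So if repeated runs never fell outside a quoted band, the band would indeed be suspiciously wide, which is the point being made above.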

crackers345
Reply to  ...and Then There's Physics
October 23, 2017 8:43 am

sun: the RCPs aren’t “guesses,” they’re assumptions.

Editor
Reply to  ...and Then There's Physics
October 23, 2017 8:55 am

Crackers, when they run to year 2100, they are indeed wild guesses, since there is ZERO evidence to support them; you are playing word games here. They are unverifiable; one can’t test a hypothesis on it, since most of it is far into the future, thus it qualifies as wild guesses.

He writes,

“sun: the RCPs aren’t “guesses,” they’re assumptions.”

Yawn, is this how low science literacy has fallen?

MarkW
Reply to  ...and Then There's Physics
October 23, 2017 9:10 am

The difference between an assumption and a guess is basically the reputation of the person making them.

Joe Crawford
Reply to  ...and Then There's Physics
October 23, 2017 9:28 am

Frenchie77,
You are of course correct. The fact that the models require constraints is enough to invalidate them.

I doubt there is a Mechanical Engineer in the crowd who would trust his/her family’s safety to a 5th-floor apartment deck that was designed with, or whose design was verified by, a stress analysis (i.e., modelling) program that required constraints be placed within it to keep the calculations within reasonable ranges.

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 9:31 am

ATTP, “one way to estimate the uncertainty is to simply run them many times with different initial conditions.”

No, it’s not. Your proposed method tells one nothing about physical uncertainty.

Mistakes 1, 3, 4, 6, and 10. Good job, ATTP.

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 9:36 am

Nick Stokes, “Pat Frank claims that the error arising from cloud uncertainty alone should accumulate to an extent of ±16°C by 2100.

No, I don’t, Nick. You’re proposing that ±16°C is a physically real temperature.

It’s an uncertainty statistic. An ignorance measure. It’s not physical error.

You’ve made mistakes 2, 6, 11 and, implicitly, 12.

You’ve many times now demonstrated knowing nothing about physical error analysis. Now it’s many times plus one more.

...and Then There's Physics
Reply to  ...and Then There's Physics
October 23, 2017 9:36 am

Pat,
Hold on. You’re suggesting the results from the models are far more uncertain than mainstream climate modellers suggest and yet you’re also suggesting that if you ran the models many times (with different initial conditions and using different parameter values) you would get an overall result that was not representative of the uncertainty. This doesn’t seem consistent.

whiten
Reply to  ...and Then There's Physics
October 23, 2017 9:58 am

Nick Stokes
October 23, 2017 at 2:36 am

My comment to you is not actually about the particular point you are trying to make there, but more about contemplating the validity of the whole argument in question here about GCMs.

You see, you have a clear, beautiful plot there, but it is not very relevant, as it does not show the corresponding ppm concentration trends.

Last time I checked, AGW is all about temps as per ppm…….and the correlation there…..

Ignoring this actually puts one in the position of misinterpreting the value of GCMs as an experiment…..either intentionally or not.

So the nice plot you posted, which maybe helps with your point, in its essence misleads towards misinterpretation and confusion about the actual value of GCMs as an experiment, which by the way are not climate models anyhow, and are very, very expensive experimental tools at that.

Don’t you think that the plot you provide, the way it stands, has not much support value for the RF or the fCO2 as contemplated by the AGW hypothesis, one way or another?

cheers

Mark - Helsinki
Reply to  ...and Then There's Physics
October 23, 2017 10:14 am

“That’s not true Andy! Give Nick his due. ONE of those models is quite close to reality – the one at the very bottom.”

yeah, predict 1 2 3 4 5 6 and throw a die, and one will be right.

Logic is not for you, Nick, or ATTP.

Idiots pretending to be scientists. Why not get English-lit Mosher in on the act too,

or some more pseudo-science sensitivity studies that are nothing but tuned junk driven by observations.

Mark - Helsinki
Reply to  ...and Then There's Physics
October 23, 2017 10:19 am

ATTP
“Pat,
Hold on. You’re suggesting the results from the models are far more uncertain than mainstream climate modellers suggest and yet you’re also suggesting that if you ran the models many times (with different initial conditions and using different parameter values) you would get an overall result that was not representative of the uncertainty. This doesn’t seem consistent.”

It’s not inconsistent.
The models are far more uncertain than claimed, because much comes from hindcast tuning, not physics (the physics being incomplete and in places not well understood). Unless you are going to be uber-absurd and claim that is not true.
The range of outcomes reflects uncertainty (in model physics, which leads to instability, not variability), error, and different tunings.

As with Mosher, logical examination is not for you; as usual, add Nick in there.

Leo Smith
Reply to  ...and Then There's Physics
October 23, 2017 10:29 am

However, if you replace the linear models with non-linear ones, the behaviour is exactly as he describes.
It is not the coherence of linear models that is under criticism; it is their applicability at all.

It is of no use to refute the fact that your cat scratched my leg by pointing out that dogs just don’t do that.

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 11:04 am

ATTP, physical uncertainty is with respect to physical reality, not with respect to model spread.

You’re conflating model precision with model accuracy (mistake #1). You make this mistake repeatedly. So do climate modelers. You all seem unable to grasp the difference.

Running a model over and over, with different initial conditions, tells you nothing, nothing, about physical uncertainty (mistake #3).

Unless (BIG! unless here) your model is falsifiable and produces physically unique predictions.

Climate models violate both conditions.

Run them until you’re blue in the face, and you’ll have learned nothing except how they move around.

Tom In Indy
Reply to  ...and Then There's Physics
October 23, 2017 11:08 am

Nick Stokes October 23, 2017 at 2:36 am
Nick, can you extend your chart so we can see how high the projections go for RCP 8.5? The chart cuts them off at the year ~ 2080.

I suspect that if you increase the vertical axis and in addition, include the uncertainty surrounding each run, you will end up with roughly the range suggested by the author of this post.

ATheoK
Reply to  ...and Then There's Physics
October 23, 2017 11:57 am

“…and Then There’s Physics October 23, 2017 at 1:24 am
Another point, that I think I’ve made to Pat before, is that if he is correct he should be able to easily demonstrate this. If you’re running computational models, one way to estimate the uncertainty is to simply run them many times with different initial conditions.”

Think!?
A never believable claim from confirmed liars or misdirection specialists.

If you believe your falsehood, write up a mathematical article and publish it.

Until then, your belief is just so much speculation.
Without proof or logic.

crackers345
Reply to  ...and Then There's Physics
October 23, 2017 12:01 pm

sun says ‘when they run to year 2100, they are indeed wild guesses, since there is ZERO evidence to support it’

no, they’re assumptions, not guesses. there can be no evidence from the future, only assumptions.

a model has to assume a path of future emissions. these are the RCPs. there are four of them, for different scenarios of future energy use.

unless you can predict for us that future path. go ahead and try.

ATheoK
Reply to  ...and Then There's Physics
October 23, 2017 12:08 pm

“Nick Stokes October 23, 2017 at 2:36 am
This is clear even from the published CMIP5 simulations. Pat Frank claims that the error arising from cloud uncertainty alone should accumulate to an extent of ±16°C by 2100. And he seems to infer the cloud error from disagreement between the models. But the CMIP5 models clearly do not diverge by 16°C by 2100. Here is a plot…”

So much for contributions from Nick.

What are the starting uncertainties in climate models, Nick?

Technically, adjusting a temperature record is an immediate admission of error and even roughly identifies the error range.
Yet, not one of the models initializes with that one uncertainty or propagates it through.

Gross assumptions regarding total lack of temperature equipment calibration or certification
Total lack of side by side measurements before swapping equipment.
Total lack of side by side measurements before moving the temperature station.
Total failure to track temperature station infestations or to identify errors caused.

Instead, Nick apparently espouses averaging temperatures repeatedly to accurize numbers and improve precision.
Run the models many times…

A solution that is far worse than claiming stopped clocks are correct twice a day.

RW
Reply to  ...and Then There's Physics
October 23, 2017 12:10 pm

The sample standard deviation (SD) in a statistical sense is only meaningful if the underlying population is normally distributed; the percentage of values claimed to fall within some error window depends on the shape of the population distribution. If instead you are talking about the standard deviation of a sampling distribution of a summary statistic (such as the mean), then the central limit theorem is invoked to adopt the assumption that the theoretical sampling distribution of that summary statistic (which you are sampling from) is distributed Normal. The standard error (SE) is the sample estimate of the standard deviation of that sampling distribution.

If the SE (or sometimes the SD, though far less likely) is used to support a statement of confidence about the population parameter, such as the mean, then the correct confidence statement is that the error window has some x chance of encompassing the population parameter. Again, assuming a Normal distribution. The notion that the one confidence window you calculate will contain x percent of ‘the data’ or of the sample statistic, should I run the process over and over again, is incorrect. Each time you sample, both the mean and the SE vary, and as such so will any confidence statement drawn from the sample statistics.

The proper statement of interpretation of confidence (or uncertainty) is that, in the long run of N (very large) samples of size ‘n’, my ‘x level of confidence’ error windows will capture the population parameter x percent of the time.

Error propagation is different altogether. Different formulae, and they also depend on what operations you are performing on your data.

Generating an error bar from a large collection of predictions from different models, and even, within each model, varying the initial conditions, is an ad hoc method to generate error intervals. It seems supremely naive to believe that varying these things will happen to capture the uncertainty in the accuracy of the coefficients and values of the model parameters, for any coefficient or parameter value that itself possesses some non-negligible and varied amount of uncertainty associated with it.

Even in a bivariate linear regression model, Y = B1X1 + B2, there is uncertainty in the prediction of Y (y’) and uncertainty in the estimate (b1) of the B1 coefficient and the estimate (b2) of the B2 intercept and, often times, even uncertainty in the observations (x) of X used to generate the model in the first place.

Suppose we sample from a linear system. We don’t know it, but the X’s in our model are all appropriate in explaining Y. Good for us so far. But we don’t know what the exact values for X are. So we sample Y, we also sample X1 to Xk, we then crunch the numbers (do the regression) and come up with the estimates (b1 to bk) of the coefficients (B1 to Bk). Thus, we now have a model. The accuracy of our measurements of Y and X1 to Xk (and, normally, the appropriateness of our X1 to Xk in explaining Y, but, again, here we are assuming they are appropriate) will help determine how well this model actually does in explaining and predicting Y. Y is unknown, as are (probably most of) the true values of X – presumably Time (year) would be one of them. Our measurements of Y and X1 to Xk (y and x1 to xk) are, for the most part, all we have, but we based our model off of the measurements. There are uncertainties in the measurements. We don’t know the direction of those errors or their magnitude (offsets, as I think the term is being used above), because we don’t know the relation of the measurements to their true value. These errors will propagate as the model is run iteratively, being fed its own outputs as inputs at each iteration.

Tweaking the estimated values of X1 to Xk and b1 to bk to generate different estimates of Y (y’) is an ad hoc attempt to quantify this additional uncertainty in X and Y through ’empirical’ simulation.
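
A toy sketch of that compounding, assuming a hypothetical one-coefficient growth model refit from noisy calibration data and then iterated on its own output (all numbers invented for illustration):

import numpy as np

rng = np.random.default_rng(1)
true_b = 1.02                                        # "true" growth coefficient
x = np.linspace(1.0, 10.0, 20)
finals = []
for _ in range(1000):
    y = true_b * x + rng.normal(0.0, 0.05, x.size)   # noisy measurements
    b_hat = np.polyfit(x, y, 1)[0]                   # estimated coefficient
    state = 1.0
    for _ in range(100):                             # model fed its own output
        state *= b_hat
    finals.append(state)
print(f"relative spread after 100 steps: {np.std(finals) / np.mean(finals):.2f}")

A coefficient known to a fraction of a percent still leaves tens of percent of spread after a hundred iterations.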

Pat Frank well-approximates the model temperature outputs using a simplified linear equation. He then focuses on the effect of cloud coverage on solar insolence (if memory serves) and (presumably) uses error propagation formulae to quantify the effect of this uncertainty in the estimate of temperature.

There is either a theoretical/mathematical explanation for why error propagation does not apply, or there isn’t and the modellers’ technique for evaluation is gravely misguided.

Mark - Helsinki
Reply to  ...and Then There's Physics
October 23, 2017 12:17 pm

Pat Frank October 23, 2017 at 9:36 am
Nick Stokes, “Pat Frank claims that the error arising from cloud uncertainty alone should accumulate to an extent of ±16°C by 2100.”

No, I don’t, Nick. You’re proposing that ±16°C is a physically real temperature.

It’s an uncertainty statistic. An ignorance measure. It’s not physical error.

You’ve made mistakes 2, 6, 11 and, implicitly, 12.

You’ve many times now demonstrated knowing nothing about physical error analysis. Now it’s many times plus one more.

_________________________

What is it they say Pat, a little knowledge is…. 😉

At least Nick might run off now and try to understand physical error analysis, seems the sort that does not like understanding things 🙂

Mark - Helsinki
Reply to  ...and Then There's Physics
October 23, 2017 12:19 pm

* Does not like Not understanding things.. heh, wish I could edit my stupidity instead of posting again 🙁

Nick Stokes
Reply to  ...and Then There's Physics
October 23, 2017 1:09 pm

“the effect of cloud coverage on solar insolence”
Busy old fool, unruly sun

blunder bunny
Reply to  ...and Then There's Physics
October 23, 2017 2:09 pm

Much as I hate to chip in in support of both Nick and aTTP, they are giving you accurate information. If the models were wrong in the ways described above… they would be “more” wrong, and it would be very obvious to even the most committed warmist modeller. All models are wrong; it’s inherent in modelling. Some are really, really wrong. But most of the ones in active use are not. I would agree that the current crop run hot, and I’m not a massive fan of Zeke’s recent work trying to show that they don’t. But we have to apply healthy scepticism and critical thought to all of this. We cannot push that all to one side because we simply like the sound of what’s being said. Mosher, to his previously sceptical credit, makes that point often. He sometimes, at least recently, doesn’t take his own advice. But I guess we are all guilty of that.

Depending on your nationality, there’s always the PNAS route to publishing. Pal reviews can cut both ways.

Editor
Reply to  ...and Then There's Physics
October 23, 2017 2:36 pm

Cracker,

Assumption

“a thing that is accepted as true or as certain to happen, without proof.”

Guess

“estimate or suppose (something) without sufficient information to be sure of being correct.”

Meanwhile you keep playing word games while I keep saying they are junk; you never disputed that they are junk.

I stated:

“Crackers, when they run to year 2100, they are indeed wild guesses, since there is ZERO evidence to support it; you are playing a word game here. They are unverifiable; you can’t test a hypothesis on it since most of it is far into the future, thus it qualifies as wild guesses.”

and,

“How can anyone think wild guesses to year 2100 can be considered good science, when most of it is UNVERIFIABLE! Models are a TOOL for research, not for creating actual fact-based science, since it lacks real data for the next 83 years. This is what the AGW conjecture is based on: a puddle of unverifiable guesses.”

You have NOTHING to sell here.

You are pathetic.

Gary Pearse
Reply to  ...and Then There's Physics
October 23, 2017 8:25 pm

Nick, by eyeball the spread is 8+ from the smudge at ~0°C in 2100 to the topmost line steeply exiting the top of the graph at about 2075. And these represent the models that survived the cut. You would still be wrong with a linear model, but more difficult to criticize, had you guys not been charged by Groucho-Marxist high-school dropout Maurice Strong (creator of both the UNFCCC and the IPCC) with the task of finding that burning fossil fuels will destroy the planet, thereby justifying trashing economies and freedoms and having global governance by elites. Models vs. observations to date show climate sensitivity to be at most ~1, but this takes the scare out of rising CO2.

I’m thinking we should crowd source a large fund and place a bet that with the collapse of the Paris agreement we will not achieve a rise of 1.5C going gangbusters with fracking oil and gas, burning coal, making concrete, etc. If we haven’t got over halfway there by 2050 we declare a win and make the fund available to third world economies for developing cheap reliable electricity generation. Honesty in temperature collection would need some resources and oversight.

Nick Stokes
Reply to  ...and Then There's Physics
October 23, 2017 9:27 pm

“Nick by eyeball, the spread is eight + from the smudge at ~0C in 2100 to the topmost steeply exiting the top of the graph at about 2075.”
The spread for each scenario is much smaller. The fact that scientists don’t know what will be done about GHGs and have to cover the range of possibilities has nothing to do with error propagation. But there is a real test of PF’s ridiculous errors. ±15°C would be about ±9°C in the 30 years since Hansen’s prediction. Now we quibble about small fractions of a degree difference in scenarios, and another small fraction that might be a transient for El Nino, but there is nothing like a 9°C error.

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 9:42 pm

Nick Stokes first thinks uncertainty is physical error (mistake #6), and then effortlessly moves on to suppose it’s a physical temperature instead (mistake 11).

Nick’s self-contradictory assignments also implicitly embrace mistakes 2, 4 and 12.

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 9:50 pm

Clyde Spencer, it’s even worse than that, because the cloud forcing error is inherent in the model and is systematic.

That means one never knows the most probable value.

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 9:55 pm

ATTP supposes that model precision is a measure of reliability.

Mistakes 1 and 3.

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 10:06 pm

blunder bunny wrote, “they would be “more” wrong and it would be very obvious to even the most committed warmist modeller.”

Not correct. GCMs are tuned to give a reasonable projection. That practice hides physical error and side-steps uncertainties.

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 10:09 pm

Mark – Helsinki, I can’t offer a rationale for it all. 🙂

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 10:13 pm

RW I can’t add anything to your thoughtful post, but can mention that,

Vasquez, V. R., and W. B. Whiting (2005), Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods, Risk Analysis, 25(6), 1669-1681, doi: 10.1111/j.1539-6924.2005.00704.x.

assess random and systematic errors in nonlinear numerical models and recommend propagating systematic model error as the root-sum-square.

The precedent of that paper, by the way, encouraged me to make Risk Analysis my first journal for submission. The rest is history. 🙂

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 10:17 pm

Nick Stokes, “But there is a real test of PF’s ridiculous errors. ±15°C …”

That’s not physical error, Nick.

Mistakes 4, 5, 6 and 11, and probably 12 implicitly.

Well done. 🙂

Jarryd Beck
Reply to  Pat Frank
October 24, 2017 3:36 am

Why don’t people understand uncertainty? They taught us that in first-year physics. I could easily make a model that only ever has one outcome, but if it propagates an uncertain value then the error bars will be huge by the end. That doesn’t mean my model will ever show that; assuming it would is conflating model precision with uncertainty. The error bar means that my model could be wrong by that much. Of course, if your model is wrong it won’t tell you; that’s the whole point of error bars.
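
A minimal sketch of that point, assuming a toy model with a fixed per-step increment and a per-step uncertainty combined in quadrature (illustrative numbers only):

import math

T, u, sigma_step = 0.0, 0.0, 0.1
for step in range(1, 101):
    T += 0.02                              # deterministic: identical every run
    u = math.sqrt(u**2 + sigma_step**2)    # root-sum-square propagation
    if step % 25 == 0:
        print(f"step {step:3d}: output = {T:.2f}, uncertainty = ±{u:.2f}")

The output is exactly 2.00 after 100 steps on every run, while the envelope has grown to ±1.00; the model itself never “shows” the uncertainty.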

RW
Reply to  ...and Then There's Physics
October 24, 2017 12:44 pm

Pat Frank, thanks for the reference. I have downloaded it and will check it out.

John Dowser
Reply to  ...and Then There's Physics
October 25, 2017 5:42 am

From ..and Then There’s Physics October 23, 2017 at 9:36 am

“you’re also suggesting that if you ran the models many times (with different initial conditions and using different parameter values) you would get an overall result that was not representative of the uncertainty.”

But if the model did indeed propagate the suggested systematic physical error throughout, it *would* be noticed. The current models are not sufficiently taking into account non-linear effects of known modelling errors. This then causes the accuracy of the model to decrease rapidly with each time-step, and explains perfectly the issues seen today when comparing measurements with runs from 10-20 years ago.

Many climate scientists seem to make the same mistake simply because they continue to apply tools without allowing rigorous review of the validity of using those tools that way. This is a larger systematic *human* error in that particular field. And it’s not the first time in recent history, but it is certainly becoming the most costly. The cause of it lies in the underlying role of politics, money and emotion, which has grown into something too big to “fail”. The cure here is “back to basics”: re-examination of the toolbox itself.

Pat Frank
Reply to  ...and Then There's Physics
October 26, 2017 6:46 pm

Jarryd Beck, thank-you. 🙂

It seems to me that training in climate modeling completely neglects physical error analysis. Not one climate modeler I’ve encountered has a clue about it. And they’re often hostile to it.

WTF
Reply to  ...and Then There's Physics
October 23, 2017 1:33 am

I like his self declared hero status after his sixth rejection, obviously due to the corrupt system and fear of what the analysis would unleash – no other explanation possible here.

MarkW
Reply to  WTF
October 23, 2017 6:44 am

The fact that the insiders circle the wagons when criticized is proof that the criticism is meritless.
Gotcha.

Mike Maguire
Reply to  WTF
October 23, 2017 9:18 am

“Imagine what real Meteorologists, who do short term modeling for weather prediction in the next few days know how quickly short term predictions can quickly spiral out of reality. I see them adjusting their forecasts daily,sometimes even in hours, as new information comes in,but can still be waaaaay off anyway”

This is a big reason why we real operational meteorologists (35 years for me) have such a high % of skeptics vs. other sciences. We must constantly reconcile the forecast with realities. Quickly adjust based on models that also quickly dial in new/fresh data and come out with a new scenario that can sometimes look much different than the previous one……….. with errors/changes often growing exponentially with time.

Individual ensemble members of the same model can look completely different beyond a week. Different models in week 2 can have very different outcomes, not just regionally but in the position of many large scale features that define the pattern.

However, despite this, climate models are much different, and they are not as affected by the random, chaotic short-term fluctuations in initial conditions that can never be captured perfectly and that lead to exponentially growing errors with time.

For instance, if the amount of solar forcing in a climate model was too high/low, one would not expect it to result in output/projections that amplify exponentially over time. It would remain pretty much constant. There would also be potential negative/positive feedbacks but they would be limited and probably not greater than the error from the solar forcing being too high/low.

Another difference: with weather models, we change the models/equations every several years or so to make potential slight improvements, with experimental models constantly being run and compared to the existing models…….. with mixed results.
I am not involved in modeling, but it seems clear that certain models are superior to others, especially when it comes to handling particular atmospheric dynamics. However, the gatekeepers of all models seem committed to making improvements to their models vs. justifying keeping the current one(s).
Skill scores for different time frames are constantly tracked, and accountability/performance is well known and acknowledged based on blatantly obvious, non-adjusted statistics for all to see.

I don’t see this being the case for climate models. Adjustments have lagged well behind the reality of observations screaming out loud and clear that the models are too warm. Anyone with a few objective brain cells can see that global temperatures are not increasing at the rate of model projections. If it takes an El Nino spike in global temperatures to get up close to the model-projection ensemble mean, instead of treading along the lower baseline of the range for a decade, then the models are too warm.

There can be no scientific justification to continue with those same models. They need to be adjusted. Wishing and hoping, and having decades before needing to truly reconcile models with reality because you are convinced the equations are right and the atmosphere will come around, is not authentic science………. it’s just a tool to be used for something other than authentic climate science.

Pat,
Thank you very much for this excellent article, the work and well thought out discussion. I may not agree entirely with everything but believe you make some great points and it deserves to be read/published………even if the gatekeepers don’t agree with all of it.
One wonders if they disagreed with just as much but it supported the CAGW narrative, if it would have been published.

Editor
Reply to  WTF
October 23, 2017 9:26 am

WTF,

Pat referred to a nice post Willis Eschenbach made a few years ago, which YOU should visit, that materially supports the main point Pat makes here. Here is a useful quote from Willis:

” Willis Eschenbach
May 16, 2011 at 12:01 am

Steve McIntyre has posted up R code for the analysis I’ve done, at ClimateAudit.

The main issue for me is that the climate model isn’t adding anything. I mean, if you can forecast the future directly from the forcings, then there’s no value-added. A good model should give you something that you can’t get from a simple transformation of the inputs. It should add information to the mix.

But the GCMs don’t add anything new, they just spit the forcings out in a slightly different form.

Now, you could say that the model is valuable because it allows us to calculate the variables of lambda and tau … except that each model comes out with a different value of those two.

The main problem, however, is that we have nothing to show us that the underlying concept is true, that forcing actually controls temperature linearly. So that means that the different lambdas and taus we might get from the model may mean nothing at all …

w.”

https://wattsupwiththat.com/2011/05/14/life-is-like-a-black-box-of-chocolates/#comment-661218

Imagine people trying to model chaos with linear functions………., using ZERO real data, only the not-yet-existing data of the future………

Ha ha ha ha ha…………..

Editor
Reply to  WTF
October 23, 2017 9:42 am

Mike, I wasn’t trying to denigrate meteorologists over their prediction being wrong in my city, just trying to point out that even short-term predictions based on REAL data can STILL be off from the forecast target.

You wrote,

“This is a big reason why us real operational meteorologists (for 35 years) have such a high % of skeptics vs in other sciences. We must constantly reconcile the forecast with realities. Quickly adjust based on models that also quickly dial in new/fresh data and come out with a new scenario that can sometimes look much different than the previous one………..with errors/changes often growing exponentially with time.”

The big difference is that you use real, updated data regularly to adjust the forecast with, while the IPCC creates a spaghetti of climate model runs using a lot of assumptions about forcings we know little about, and says we can make a forecast far into the future with significant confidence.

The whole thing is absurd!

Mike Maguire
Reply to  WTF
October 23, 2017 10:06 am

Sunset,
I never considered your comment as denigrating meteorologists. Just the opposite, a compliment with regards to how we are reality based in using models based on their usefulness.

I’ve busted at least hundreds of forecasts…….. it’s part of the job. The best busted forecast is the one that gets updated the quickest. I was on television for 11 years, and that means that thousands of people see the face of the person who busts forecasts, and you hear about it.

In the earliest years, I hesitated to update as quickly, because of believing the models when I made the first forecast and sort of hoping they would revert to the previous solution when they diverged the wrong way.
I also showed overconfidence because of too much trust in models.
The reality is that you can be the best model data analyst on the planet, but if the model is wrong, it doesn’t matter……… you will be wrong.
With experience, you learn to be more skeptical and to recognize certain model tendencies. With so many more models and ensembles available, there is an enormous opportunity to consider potentially different scenarios.

In the 1980’s, most of us just used one (or 2) operational model and went with whatever it showed.

Duster
Reply to  WTF
October 23, 2017 10:32 am

Reading the “climategate” emails, the “corruption” is well documented. I would not regard every single individual with bias as corrupt, since they also display expectation bias. Trenberth’s assertion that there must be something wrong with the data tells an entire story in one brief sentence. Other emails, such as Jones indicating that papers critical of model results and methods needed to be suppressed (not published) rather than addressed substantively, are also revealing. The “corruption” may initially have been due more to “noble cause” fixation than to economic bias, but once economics and university and agency policy enter the picture, the result can be outright corruption. Any of the journals could have published Dr. Frank’s paper and then left the podium open for actual discussion and demonstration of any mistake he might have made. Not doing so looks unscientific, and outright faith-based rather than grounded in scientific argument.

Pat Frank
Reply to  WTF
October 23, 2017 10:22 pm

WTF, ad hominem comment.

If you can’t appraise the manuscript and the reviews you have nothing worthwhile to offer.

So far, you’ve offered nothing more worthwhile than a view into your character.

...and Then There's Physics
Reply to  ...and Then There's Physics
October 23, 2017 2:08 am

Forrest,
I think that is pretty easy. Run a climate model many times with different initial conditions, and show that the range of outputs diverges as suggested by Pat’s proposed error propagation.

...and Then There's Physics
Reply to  ...and Then There's Physics
October 23, 2017 2:18 am

Forrest,
As far as I’m aware, they have. There is some uncertainty (i.e., running a model with different initial conditions does indeed produce a different path/output) but they do not show the output diverging as suggested by Pat Frank’s analysis. We expect the equilibrium state to be constrained by energy balance and so it is very hard to see how it could diverge, as suggested by Pat Frank, without violating energy conservation.

...and Then There's Physics
Reply to  ...and Then There's Physics
October 23, 2017 2:33 am

Forrest,

“I am also not convinced that an energy balance could reasonably be expected to produce a steady state as you seem to suggest. The earth’s geological history suggests that climate is anything but steady state.”

I wasn’t suggesting that the equilibrium state should be the same at all times; I’m pointing out that it should tend towards a state in which energy is in balance (i.e., energy coming in matches energy going out). The reason it has changed in the past is because things have happened to change the energy balance. The Sun’s output isn’t constant. Our orbit around the Sun can vary. Volcanoes can erupt. Ice sheets can retreat/advance (often due to orbital variations), greenhouse gases can be released/taken up, etc. However, the state to which it will tend will be one in which the energy coming in matches the energy going out.

So, if someone wants to argue that the range of possible temperature is 30K (as appears to be suggested by Pat Frank’s error analysis) then one should try to explain how these states all satisfy the condition that they should be in approximate energy balance (or tending towards it).
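
A zero-dimensional energy-balance toy shows the convergence being argued here; the heat capacity, effective emissivity, and albedo below are illustrative assumptions, not values from any GCM:

S, albedo, eps, sigma = 1361.0, 0.3, 0.61, 5.67e-8   # W/m^2, -, emissivity, SB constant
C = 4.0e8       # heat capacity, J/(m^2 K), ocean-scale
dt = 86400.0    # one-day step, s

for T0 in (250.0, 288.0, 320.0):
    T = T0
    for _ in range(365 * 200):              # integrate 200 years
        flux_in = S * (1 - albedo) / 4
        flux_out = eps * sigma * T**4
        T += dt * (flux_in - flux_out) / C
    print(f"start {T0:5.1f} K -> end {T:6.2f} K")   # all three starts converge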

...and Then There's Physics
Reply to  ...and Then There's Physics
October 23, 2017 2:51 am

“As far as I can tell that is the controversy.”

Yes, that is the controversy. Pat is essentially arguing that something that would produce an offset should be propagated – at every timestep – as an error. This is not correct, which should be pretty clear from Nick Stokes’s recent comment with the output from climate models.

AndyG55
Reply to  ...and Then There's Physics
October 23, 2017 3:11 am

I see that “nophysics” has very little comprehension of error propagation.

Why is that not a surprise?

Little errors GROW to be big errors… that is the way the climate change mantra works !!

pbweather
Reply to  ...and Then There's Physics
October 23, 2017 3:25 am

in response to ATTP,
This argument is seriously flawed.

I think that is pretty easy. Run a climate model many times with different initial conditions, and show that the range of outputs diverges as suggested by Pat’s proposed error propagation.

Just like shorter-range EPS global weather models, the outturn is constrained to within realistic climatic values… otherwise they do indeed blow out into a massive range of error. Climate models will be no different, but the constrained range means error propagation is limited at each time step.

Latitude
Reply to  ...and Then There's Physics
October 23, 2017 6:09 am

Seems to me that since the models are blatantly wrong… the offsets, forcings, whatever, are cancelling each other out…. either way, you end up with a linear result that exactly matches CO2…. something anyone could do with a ruler.
The first problem seems to be getting modelers to admit that…

But then they are handicapped from the get-go….. they are having to backcast to a fake temperature history in the first place.

crackers345
Reply to  Latitude
October 23, 2017 8:49 am

Forcings add (aerosol forcing is negative). By 2016 the anthro GHG forcings add up to a CO2-equivalent of 489 ppmv:
https://www.esrl.noaa.gov/gmd/aggi/aggi.html

paqyfelyc
Reply to  Latitude
October 23, 2017 1:12 pm

489 ppm CO2eq of anthropogenic forcing? …
The current level of CO2 is ~400 ppm, meaning that without human action Earth would “enjoy” −89 ppm CO2eq of GHG forcing. +2 K per CO2 doubling is also −2 K per CO2 halving, hence −2 K for the effect of going from 400 to 200, another −2 K for going from 200 to 100, etc. Let’s stop here, although the theory says we should keep going.
So the theory says that without human GHGs, Earth’s temperature would be no less than 4 K below the current level. Remember that the LIA was only 1 K below current (so says the IPCC), so imagine the effect.
I say: LOL !!!
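
For what it’s worth, the halving arithmetic above in one short sketch, assuming the comment’s constant 2 K per doubling (the sensitivity value is the comment’s assumption, not a result):

import math

S, C0 = 2.0, 400.0      # K per doubling (assumed), reference ppm
for C in (800, 400, 200, 100, 50):
    dT = S * math.log2(C / C0)
    print(f"{C:4d} ppm: {dT:+.1f} K vs {C0:.0f} ppm")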

crackers345
Reply to  Latitude
October 24, 2017 10:36 am

paqyfelyc claim – “THERE IS a line of code that says “this much more CO2 give this much less heat loss (aka warming)”

There is a line of code, a well-honed equation with evidence to back it up, that uses CO2’s radiative forcing (which is not warming), at the tropopause, not the surface.

Because it’s a fact that CO2 absorbs IR, and a fact that the earth emits IR. It’s not difficult to understand, with a model or equations, why that means more CO2 means more warming.

Reply to  crackers345
October 24, 2017 11:49 am

Because it’s a fact that CO2 absorbs IR, and a fact that the earth emits IR. It’s not difficult to understand, with a model or equations, why that means more CO2 means more warming.

You’re ignoring water vapor, which has 10 or 20 times the energy content, with a temperature sensitivity at sea-level air pressure and temperature. And it does what it wants.

crackers345
Reply to  Latitude
October 24, 2017 12:14 pm

micro: water vapor certainly isn’t ignored in climate models.

But water vapor in the atmosphere only changes when the temperature first changes; then it’s a feedback.

Reply to  crackers345
October 24, 2017 12:41 pm

But water vapor in the atmosphere only changes when the temperature first changes; then it’s a feedback.

Bzzzzzzzz! Wrong.
Do you live someplace where you get dew at night?

Oh, it’s a feedback, about −35 W/m^2.

A lot more than CO2’s forcing.

paqyfelyc
Reply to  Latitude
October 25, 2017 5:31 am

“Radiative forcing” is Orwellian newspeak. Indeed CO2 radiates (as does any matter…), and that’s the real radiation that should appear in the equations, not some “forcing”.

It’s not difficult to understand, even without a model or equations, why more CO2 means more RADIATION into and out of the atmosphere, and less radiation directly from Earth gets to space. If, and if so to what extent, this results in warming (or even cooling!) is much more questionable.

Reply to  paqyfelyc
October 25, 2017 7:06 am

If, and if so to what extent, this results in warming (or even cooling!) is much more questionable.

This is what my work addresses. Specifically, cooling under clear, calm skies. This is the only condition that really matters. But that’s another argument for another time.

What I found was that surface cooling rates adjust themselves: as the air gets near the dew point, water vapor condenses, and that sensible heat supplies a significant portion of the energy radiating to space. At dusk the surface was cooling at 3 or more degrees F/hr, but an hour or two later the rate can be near zero, with still 5 hours of dark left and still a 100F temperature difference (the other night here) that has to be radiating to space.

You can see this everywhere by just logging RH, dew point and air temp: under clear skies the temp stops falling some nights, and you can measure an 80-100F temperature differential with an IR thermometer, and it isn’t cloudy either.

Everyone assumed it was just reaching equilibrium; it is not. This is the biggest “discovery” in climate science in 100 years, because it shows us water vapor has been actively regulating temps, not GHGs.

Oh, and CS is just the ratio of the two cooling rates times the 1.1 C/doubling for CO2, so for the location measured above it’d be about 1.1 C/3, roughly 0.37 C/doubling.

Frankly I should get a Nobel Prize for this.

paqyfelyc
Reply to  Latitude
October 25, 2017 7:44 am

@micro6500
Your work makes sense, so much so that I don’t see anything new in it. Of course atmospheric water is a major heat buffer that prevents temperature from going down as long as there remains water vapor to turn into liquid water, and hence to compensate for heat escaping away through radiation. I doubt very much this deserves a Nobel, or Captain Obvious would already have been awarded one (but who knows? Al Gore and Obama got one, so with the right political connections…).
Even “climate scientists” know that, although I suspect they don’t care. The word “dew” doesn’t even appear in the description of the NCAR Community Atmosphere Model (CAM 3.0): the only water movements they care about are evaporation and cloud formation.

Reply to  paqyfelyc
October 25, 2017 8:00 am

But I figured out it was a negative feedback to CO2. Tell me anyone else who has proof of that.

But you’re right, it was stupid obvious. People just assumed it was something else. I recognized it for what it was: the end of the CO2 panic.

crackers345
Reply to  Latitude
October 25, 2017 8:10 am

paqyfelyc says – ““Radiative forcing” is Orwellian newspeak. Indeed CO2 radiates (as does any matter…), and that’s the real radiation that should appear in the equations, not some “forcing”.”

RF comes from solving the two-stream equations, which are obtained from applying energy conservation and the Planck law to the atmosphere.

Now, I think it’s the two-stream equations that appear in the models, and not the RF relations. See, for example, equations 4.229 & 4.230 in this model description:

http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf

Reply to  crackers345
October 25, 2017 8:35 am

RF comes from solving the two-stream equations, which are obtained from applying energy conservation and the Planck law to the atmosphere.

The problem then is that they are either doing the wrong terms or leaving the big one out. The assumption that CO2 adds is incomplete: it adds, but water vapor drops by nearly as much as was added; it is the negative feedback that is either unknown or ignored. And it only does so for part of the night; averaging a whole day hides the fact that it varies.

paqyfelyc
Reply to  Latitude
October 25, 2017 10:30 am

crackers345, October 25, 2017 at 8:10 am:

No. RF comes from the difference between two virtual numbers: the modeled radiation with [anything], and the same without. This has the same value as a seller pretending you gained $30 on a thing you paid $70 for, because he pretends its price should have been $100, or an official pretending he made $10M in savings while spending $110M instead of $90M the previous year, because without the savings he would have spent $120M. Pure bovine outgoing matter, which I wouldn’t buy if I were you.

You’ll find numerous instances of the word “forcing” in the model.

Just after one of them I found this extract:
“the large warm bias in simulated July surface temperature over the Northern Hemisphere, the systematic over-prediction of precipitation over warm land areas, and a large component of the stationary-wave error in CCM2, were also reduced as a result of cloud-radiation improvements”
Which translates to:
“the model can fit the elephant as need be; it has more than enough parameters to ‘improve’”.

BTW you still didn’t react to my comment
https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2644237

crackers345
Reply to  Latitude
October 25, 2017 12:48 pm

paqyfelyc claimed – “No, RF comes from the difference between two virtual numbers: the modeled radiation with [anything], and the same without.”

Difference of what?

This paper calculates RF, and describes its methods:

>> We use the Spectral Mapping for Atmospheric Radiative Transfer code, written by David Crisp [Meadows and Crisp, 1996], for our radiative transfer calculations. This code works at line-by-line resolution but uses a spectral mapping algorithm to treat different wave number regions with similar optical properties together, giving significant savings in computational cost. We evaluate the radiative transfer in the range 50–100,000 cm−1 (0.1–200 𝜇m) as a combined solar and thermal calculation.

Line data for all radiatively active gases are taken from the HITRAN 2012 database. Cross sections are taken from the NASA Astrobiology Institute Virtual Planetary Laboratory Spectral Database http://depts.washington.edu/naivpl/content/molecular-database. <<

B. Byrne and C. Goldblatt
http://onlinelibrary.wiley.com/doi/10.1002/2013GL058456/pdf

Reply to  crackers345
October 25, 2017 1:01 pm

Line data for all radiatively active gases are taken from the HITRAN 2012 database. Cross sections are taken from the NASA Astrobiology Institute Virtual Planetary Laboratory Spectral Database http://depts.washington.edu/naivpl/content/molecular-database. <<
B. Byrne and C. Goldblatt
http://onlinelibrary.wiley.com/doi/10.1002/2013GL058456/pdf

It’s worthless, unless you don’t want to know what it’s doing. Now, if they ran it over 24 hours and included H2O, you’d see H2O changing, negatively, in response to the increase.

But they leave that out.

Funny how they all seem to leave that out.

crackers345
Reply to  Latitude
October 25, 2017 12:49 pm

PS – RF is calculated at the tropopause, not the surface or TOA.

paqyfelyc
Reply to  Latitude
October 26, 2017 12:59 am

crackers345
We are talking about radiative forcing, and you make a long (and boring) quotation about… “radiative transfer”. WTF? Do you think these are the same?
And you still didn’t reply to my comment https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/comment-page-1/#comment-2644237

Phoenix44
Reply to  ...and Then There's Physics
October 23, 2017 6:36 am

If the climate settles into an equilibrium state, must that equilibrium produce a constant temperature? That is not necessarily how the equilibrium will look – why should it? Nothing in the climate ever stops changing, because it cannot ever do so. One of the biggest problems with modeling the climate is knowing what the starting point is. Get one parameter wrong by a little, and your projections can be wildly wrong.

crackers345
Reply to  Phoenix44
October 23, 2017 8:53 am

“Get one parameter wrong by a little, and your projections can be wildly wrong.”

Climate models are “spun up” so they start in an equilibrium state. See
http://www.oc.nps.edu/nom/modeling/initial.html

paqyfelyc
Reply to  Phoenix44
October 25, 2017 5:34 am

And why would they start in equilibrium, when the Earth is supposed to be an out-of-equilibrium system (that’s what the Gaia hypothesis is all about, as you probably don’t know)?

paqyfelyc
Reply to  ...and Then There's Physics
October 23, 2017 7:30 am

@ aTTP
What are the mathematical conditions for “if we ran two simulations with different [whatever source] perturbations [aka “forcings” in climate newspeak] (but everything else the same), this wouldn’t suddenly mean that they would/could diverge with time; it would mean that they would settle to different background/equilibrium states”?
Answer: a stable, non-chaotic system that can be treated through perturbation analysis.
You assume 1) equilibrium with null forcing, and 2) that forcing will just offset the equilibrium by some finite and calculable amount.
The first is obviously false regarding climate, since it varies wildly with zero forcing, as the truly chaotic system it is. The second assumption is “not even wrong” when the first isn’t true.
So your objection just means the “climate” you are modelling is from some other world.
Your pseudonym is undeserved.

Reply to  ...and Then There's Physics
October 23, 2017 7:58 am

“An uncertainty only propagates if it applies at every step”

This is pure comic gold!

MarkW
Reply to  micro6500
October 23, 2017 9:12 am

If the inputs to a function are uncertain, then the output of the function will at best be equally uncertain.
In reality, every time you perform an operation on uncertain data, you increase the uncertainty.
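
A sketch of that growth using the standard quadrature rule for independent uncertainties (toy values, purely illustrative):

import math

def add_uncertain(a, ua, b, ub):
    # sum of independent uncertain values; uncertainties combine in quadrature
    return a + b, math.sqrt(ua**2 + ub**2)

total, u = 10.0, 0.5
for _ in range(3):
    total, u = add_uncertain(total, u, 5.0, 0.5)
    print(f"value = {total:.1f} ± {u:.2f}")
# prints ±0.71, ±0.87, ±1.00: each operation on uncertain data grows the uncertainty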

Reply to  MarkW
October 23, 2017 9:42 am

And since this is solving simultaneous differential equations by time step, you have to allow all nodes to reach numerical stability prior to the next step. Each of these nodes carries the uncertainty into the next iteration. And they are modeling an abstraction of the real system.
I’ll point out I spent 15 years as a simulation subject-matter expert, covered about a dozen simulators, and created models and circuits that got checked out and reviewed by engineers who had actually built the real thing and tested it extensively on a lab bench, including simulators that operate the way GCMs operate. I also designed a chip for NASA GSFC, the fastest design for them at the time.

whiten
Reply to  micro6500
October 23, 2017 11:44 am

MarkW
October 23, 2017 at 9:12 am

If the inputs to a function are uncertain, then the output of the function will at best be equally uncertain.
In reality, every time you perform an operation on uncertain data, you increase the uncertainty.
———–

Maybe I am wrong, and missing your point, with this simplicity of mine, but just for the sake of it.
There is one “100% certainty” with these models: they all produce warming in very significant correlation with CO2. A SIGNIFICANT AND CLEAR CORRELATION BETWEEN THE WARMING TREND AND THE CO2 TREND in all of these simulations….. “100% certain”, as far as I am aware.

Also as far as I am aware, these models are not set up or made to do that; they just do it……. there is no line of code that “says”: “you get this much CO2, give me this much warming”,
or something like that….. and besides, as per my understanding of these models, they do not actually produce any “detectable” quantity of warming as caused by CO2…… as strange as that may seem.
Correlation does not necessarily mean causation; it still needs confirmation and some kind of validation, even in the case of the GCMs, even when and where it may seem from the outset to be so obvious.

cheers

paqyfelyc
Reply to  micro6500
October 23, 2017 1:18 pm

@whiten
THERE IS a line of code that says “this much more CO2 gives this much less heat loss (aka warming)”. If there wasn’t, CO2 wouldn’t appear at all.
The truth is, this sort of code cannot prove the assumption; it can only prove it is wrong. And it does, fairly well.

Crispin in Waterloo
Reply to  micro6500
October 23, 2017 2:17 pm

Everyone, listen to MarkW.

whiten
Reply to  micro6500
October 24, 2017 10:24 am

paqyfelyc
October 23, 2017 at 1:18 pm

I have not much choice but to wholly agree with you there….. in principle.

Fairly well, in prospect. 🙂

Considering that it could be proved at some point.

Thanks.

cheers

Clyde Spencer
Reply to  ...and Then There's Physics
October 23, 2017 8:45 am

…and Then There’s Physics,

You suggested, “Run a climate model many times with different initial conditions, and show that the range of outputs diverges as suggested by Pat’s proposed error propagation.”

Actually, that has been done: it is illustrated in the ‘spaghetti graph’ above supplied by Nick Stokes. One of the most critical input parameters is the assumed, and unknowable, RCP. I have not seen a similar presentation for other assumptions about all the input parameters that are known imperfectly even for their current values, let alone future values. I have rarely seen estimates of the albedo with a precision greater than 2 significant figures. What would the outputs of ensembles look like if a reasonable range of albedo values were used as initial conditions? When we start varying ALL the inputs, one at a time, that will give us a better idea how they may influence the total output. They might even come close to Frank’s upper-bound uncertainty.
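
A one-at-a-time variation of just the albedo in a toy zero-dimensional balance suggests the scale involved (parameter values illustrative, not from any model):

S, eps, sigma = 1361.0, 0.61, 5.67e-8     # toy-balance constants, assumed
for albedo in (0.29, 0.30, 0.31):
    T = (S * (1 - albedo) / 4 / (eps * sigma)) ** 0.25
    print(f"albedo {albedo:.2f}: toy equilibrium T = {T:.2f} K")
# a 0.01 change in assumed albedo shifts the toy equilibrium by roughly 1 K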

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 9:25 am

Right, ATTP. You say a plus/minus root-mean-square uncertainty statistic is a positive-signed physical offset error.

It’s not. (+/-) does not equal “+”. I know it’s a hard concept, but do try.

You’ve made mistake number 7. And that’s over and over, for you.

You also show no understanding of the difference between physical error, which can be known, and statistical uncertainty, which is an ignorance metric.

The first requires the observation as a test against a prediction.

The second conditions a prediction where the observation is not known.

You don’t get that distinction here. You’ve never gotten it in any of our conversations.

I rather doubt you’ll ever figure it out.

Crispin in Waterloo
Reply to  Pat Frank
October 23, 2017 2:18 pm

±1

ferdberple
Reply to  ...and Then There's Physics
October 23, 2017 10:14 am

very hard to see how it could diverge,
============
The result (the future) only converges over a narrow range of conditions, even if the energy is identical.

For example: hot land and cold ocean vs. cold land and hot ocean. The energy is the same but the climate is not. The nonlinearity of the system allows both possibilities to occur. Or at least it remains beyond current mathematics to calculate, any more than we can predict the next roll of the dice.

PureAbsolute
Reply to  ...and Then There's Physics
October 23, 2017 11:24 am

I’m a super layman here — however, TTP’s statement cried out to me. How is a solar forcing not applied at every step? If there is extra heat in step 1, then step 2 will proceed from that extra heat. Of course, we know that extra heat will be radiated out to some extent. Is that a linear process? Does the propagation of those errors not become cumulative also? Every joule not released back into space also accumulates.

So while I agree with your basic premise — the errors have to be added at every step — I disagree with your disagreement: the errors *do* add at every step.

talldave2
Reply to  ...and Then There's Physics
October 23, 2017 1:03 pm

No… each step has physical uncertainties. There is no “background,” just a series of steps, each of which contributes the potential for a certain amount of error.

I don’t know why this is so hard to understand. Consider the operation of moving a wheel 1 mm. Each time you perform the operation, you miss the 1 mm by a little bit. Some of those errors cancel out, but over the 1000-step process of moving the wheel 1 m the total possible error increases at each step.

If we ask “where is the wheel after 1000 steps?” we would have to qualify the answer with the total possible physical error in the process to give a true estimate of position. You can’t just run a bunch of simulations and say “Look! They converge near 1 m!” That’s a different question.

I don’t know that this quite renders models totally useless, but it certainly demonstrates some important limitations.
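
A sketch of the wheel example, assuming independent Normal errors on each move (numbers illustrative):

import numpy as np

rng = np.random.default_rng(2)
steps, sigma = 1000, 0.05                  # 1 mm moves, 0.05 mm error per move
runs = rng.normal(1.0, sigma, size=(10_000, steps)).sum(axis=1)
print(f"mean final position: {runs.mean():.1f} mm")   # clusters near 1000 mm
print(f"spread (1 sd): {runs.std():.2f} mm")          # ~ sigma * sqrt(1000)

The runs do converge near 1 m, yet the possible error in any single realization still grows with the square root of the step count; the two are answers to different questions.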

David A
Reply to  talldave2
October 23, 2017 2:16 pm

Yes and also various functions have feedbacks. Thus some errors propagate in this manner. The feedbacks are not necessarily linear either. Each of Nick’s runs in his chart above showing the disparate rcp scenarios also has error bars, widening the top and bottom of the existing spread.

michel
Reply to  ...and Then There's Physics
October 23, 2017 1:08 pm

“This has already been explained to you numerous…”

I draw everyone’s attention to this common rhetorical trick of speech. The attempt is rhetorically to position the speaker as the expert and teacher, the addressee as ignorant and naive.

Examples of usage from other contexts:

It has already been explained to you repeatedly that there were no camps or penal colonies under Stalin, and it is unlikely that this attempt will be any more successful than previous attempts. The allegation was invented by right wing anti party conspirators.

It has already been explained to you repeatedly that there was no famine under Mao…..

It has already been explained to you repeatedly that eating cholesterol raises blood cholesterol…

It has already been explained to you repeatedly…..

No, it hasn’t. What has happened is that someone has asserted these things. They have not explained repeatedly.

When the activists in a field commonly resort to this sort of speech, as if by a collective agreement, we know, and have explained repeatedly, that this is a bunch who have abandoned any critical thought and just mouth and parrot the party line.

Mark
Reply to  michel
October 24, 2017 7:04 pm

And sometimes it’s not a rhetorical trick; it’s just someone that’s frustrated because he really has explained it over and over.

bitchilly
Reply to  ...and Then There's Physics
October 23, 2017 1:47 pm

attp, i wonder what the reason for all the messing around with aerosols was? would models do what pat says if initial conditions are not constrained with variable parameters down the line after initiation of the model run?

Philo
Reply to  ...and Then There's Physics
October 23, 2017 1:59 pm

I believe the point Pat is trying to make, and nobody seems to get, is that when measurements are used as input to model equations, the measurements have a physically known error range, which should be traceable to a National Institute of Standards and Technology reference. Once that physical error is accounted for, it propagates through each iteration of the model. The reference errors are not a statistic of the measurement but an absolute value of the accuracy, i.e., a temperature could be any number that falls in the error range any time a measurement is made.
The equation y = ax + b, run once, generates an absolute error range of (ax + b)·(AE) + (AE). If a = 1, x = 100, b = 10 and the absolute error is 0.0001, the result is 110 × 0.0001 + 0.0001 = 0.0111. The next iteration, further extending the calculation, would start with an absolute error of 0.0111.

It’s easy to see that the potential error in the calculation can easily balloon after a number of iterations. Based solely on the absolute error of the instrument, the potential error can easily become much larger than any statistical test would suggest.

Observations are not the same as statistics of observations.
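
Carrying that accounting forward in a loop makes the ballooning visible; this mirrors the comment’s own scheme, not a standard propagation rule:

a, b = 1.0, 10.0
x, err = 100.0, 0.0001                   # starting value and absolute error
for i in range(1, 4):
    err = (a * x + b) * err + err        # pass 1: 110*0.0001 + 0.0001 = 0.0111
    x = a * x + b
    print(f"pass {i}: x = {x:.0f}, absolute error = {err:.4f}")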

Crispin in Waterloo
Reply to  ...and Then There's Physics
October 23, 2017 2:16 pm

aTTP

“Run a climate model many times with different initial conditions, and show that the range of outputs diverges as suggested by Pat’s proposed error propagation.”

This statement reflects a fundamental misunderstanding about what a model run is and what an uncertainty is. The uncertainty is an inherent property of a measurement, or in the case of clouds, an assumption. The uncertainty about a calculated value is not based on the variability of the result or of multiple runs of a model. It is an inherent property of the inputs and propagates through the calculations in a standard fashion according to strict rules. The output of a model might be exactly the same each time with different inputs! That doesn’t have any influence on the propagated uncertainty.

That this fact of mathematics escapes anyone in a position to affect public policy, I find concerning.

Because this math fundamental escapes so many in the modeling field, apparently, here is a primer from Wikipedia: https://en.wikipedia.org/wiki/Propagation_of_uncertainty

“When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate to the combination of variables in the function.”

In the case of clouds, which are poorly characterised, having to choose a forcing without knowing its real effect to better than, say, 4 W/m^2 (σ1) is the same as a measurement with an uncertainty of 4 W. Picking ‘the wrong number’ does not reduce the uncertainty about what follows. It is not “30 W ±4 W, therefore the true answer is between 26 and 34”. It is that any number selected has an uncertainty of 4 W. It is 26±4, 34±4, 30±4 or any other number, like 10 or 50.

“For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are approximately ± one standard deviation σ from the central value x, which means that the region x ± σ will cover the true value in roughly 68% of cases.” (ibid)

The Resistance formula shows that it is the largest input uncertainties that contribute most of the magnitude of a propagated uncertainty. Thus temperature, which has a relatively low % uncertainty, is minor compared with forcing due to clouds, where the uncertainty is large compared with its value.

I encourage everyone to read the Wiki entry and if it is too difficult, try putting in some numbers using the Resistance formula. It will show you that uncertainty never decreases through a calculation.

The author is correct, and the rebuffs from several journals suggest that they accept his arguments but excused themselves from publishing on the grounds that readers would not be interested in finding the true answers to this important question. It’s their call, but the rejection was not because the work is incorrect. Obviously many responses and reviews were inane. I am not surprised; I continue to be disappointed by the sorry state of climate science.
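
As one worked instance of the propagation Crispin describes, take the resistance example as R = V/I with independent input uncertainties (that reading, and the values, are assumptions for illustration):

import math

V, uV = 12.0, 0.5     # volts (illustrative)
I, uI = 2.0, 0.2      # amps (illustrative)
R = V / I
uR = R * math.sqrt((uV / V) ** 2 + (uI / I) ** 2)   # quadrature rule for R = V/I
print(f"R = {R:.2f} ± {uR:.2f} ohm")
# the larger relative input uncertainty (the current, at 10%) dominates uR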

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 8:40 pm

ATTP, “The error that you’re trying to propagate is not an error at every timestep, but an offset.”

You’re wrong. I show in the manuscript that long-wave cloud forcing error is systematic and inherent in the models. It enters every single time step.

I’ve explained that to *you* several times, and you never grasp the concept.

Just to elaborate further, adding up calibration errors of various models to get a final number does not make error a constant to be subtracted away from a prediction.

Model calibration errors vary with the model, with the forcings, and in each model with the choices of poorly constrained parameters. Your proposed subtraction is a meaningless exercise.

michel
Reply to  Pat Frank
October 24, 2017 5:21 am

Yes.

Pat Frank
Reply to  ...and Then There's Physics
October 23, 2017 10:32 pm

ATTP, “Pat is essentially arguing that something that would produce an offset should be propagated – at every timestep – as an error.”

No, I’m not. I’m propagating a model calibration error statistic.

Calibration error statistics are not offset errors.

Model cloud error is not an offset error (mere inspection of ms Figure 5, or the figure in Eric Worrall’s comment, is enough to prove the case).

It’s explained in my manuscript.

I’ve explained it to you repeatedly.

You insistently make the same mindless mistake over and over again.

It was wrong the first time you supposed it. It’s wrong this time. It’ll always be wrong.

It will never be right no matter how often you repeat it.

But that won’t stop you, will it.

Reply to  ...and Then There's Physics
October 24, 2017 3:44 pm

ATTP
It’s really this simple: the Earth system cannot be accurately simulated unless all the climatic variables are precisely accounted for. The tiniest inaccuracy will garbage the run, and it won’t be known where the mistake was created. Current models rely heavily on inference. They are all utterly unskilled at projecting.

Michael S. Kelly
Reply to  ...and Then There's Physics
October 24, 2017 4:25 pm

If anyone is getting a “background equilibrium state” out of a climate model, the model is worthless. The boundary conditions for climate change continuously (TOA solar intensity changes +/- 47 W/m^2 every 180 days, the tilt of the earth changes every 18.6 years, the cloud cover – and hence albedo – changes over a tremendous range hourly, water vapor distribution in the atmosphere – the major climate driver – changes constantly in a manner that isn’t even known, etc., etc.), some in a semi-periodic manner and some randomly. We don’t even know what all of the variables are, but from what we do know, the climate can never reach a state of equilibrium.

Having said that, if you take the position that the models are based on calculating perturbations away from a background equilibrium (a common technique for analyzing non-linear systems), then I think you’ve made Mr. Frank’s case in part. In that case, you have linearized a highly non-linear system, and his error propagation analysis is perfectly correct.

crackers345
Reply to  ...and Then There's Physics
October 25, 2017 12:53 pm

paqyfelyc wrote – “BTW you still didn’t react to my comment
https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2644237”

It’s completely wrong, and shows you don’t know the science.

Henry Galt
October 23, 2017 12:25 am

Trillion$. The simple number of reasons for rejection.

George Tetley
Reply to  Henry Galt
October 23, 2017 1:17 am

What is needed is the $trillions to publish it as a supplement in the NY Times? Or?
(Following the money trail always leads to the edge of a cliff.)

Henry Galt
October 23, 2017 12:29 am

attp. Nice to be able to comment, isn’t it. Under your rock … not so much.

Sheri
Reply to  Henry Galt
October 23, 2017 5:32 am

No one is allowed to comment in “wrong” ways on said site because it is RIGHT. You know, omnipotent. It’s an interesting trait found in most climate change propagandist sites. It used to be that science was smart enough to explain itself and win an argument, but the collective understanding has dropped to where silencing the opposition is the only answer. You remember the Dark Ages, right?

Colorado Wellington
Reply to  Sheri
October 23, 2017 9:59 am
Reply to  Sheri
October 23, 2017 10:04 am

Wellington, if you have a point to make, please make it so that you add something to this discussion.

(It is important to me, since there is a possibility he is using at least two or more accounts here, which is a bannable offense) MOD

Reply to  Sheri
October 23, 2017 12:00 pm

Mark: I could not recall at first what the references to “allowed to comment” were about. Then I remembered I’d read something way back, but everyone must judge the veracity of the link for himself.

Mod: I’m sorry. I do not know more than what my quick “memory refresher” search found.

Everyone: I care about the actual argument, not who is making it. However, I do consider circumstances like someone preventing an adversarial argument at one’s own site while engaging in it elsewhere (when that applies).

bitchilly
Reply to  Henry Galt
October 23, 2017 2:11 pm

indeed, i live not too far from attp. i may have a word about the moderation on his own site in person.

JWurts
Reply to  bitchilly
October 25, 2017 8:25 pm

Please, keep us informed

Thanks

JW

Writing Observer
October 23, 2017 12:29 am

Hmm. You identify one reviewer as a Gavinoid. Are you sure you were not observing a Nickoid instead? The plumages are remarkably similar…

NeedleFactory
Reply to  Writing Observer
October 23, 2017 9:35 am

The two comments above by WO and FG are unhelpful and violate WUWT commenting policy: “those without manners that insult others or begin starting flame wars may find their posts deleted.”

Reply to  Writing Observer
October 23, 2017 10:04 am

– I was not aware that WUWT had a new moderator?

In any case, there is somewhat of a difference between pointing and laughing at an opponent and viciously attacking them. Not much, but some. For example, you can look at some of Nick’s comments elsewhere here, which, in between his ad hominems, simply prove the point that Forrest makes.

HotScot
October 23, 2017 12:36 am

This’ll be interesting. I won’t understand a word but look forward to the debate.

Reply to  HotScot
October 23, 2017 1:02 am

Good response, Hotdog. You will have given some relief from guilt to thousands of skeptics like me who haven’t a clue about the subject nor the time to find out, but who nevertheless will be hoping that this is the definitive moment when the wall of pseudo-academic superiority behind all the modelling nonsense begins to be broken.

Steven Mosher
October 23, 2017 1:09 am

“Climate model air temperature projections are just linear extrapolations of greenhouse gas forcing. Therefore, they are subject to linear propagation of error.”

Err No.

The temperature outputs are the result of ALL THE INPUTS.
Those inputs include ALL KNOWN FORCINGS, not just GHGs, but solar, volcanic, land use, etc.
In addition there are feedbacks which cannot be predicted and which are emergent.

Your paper has not been accepted because you are wrong.

Sheri
Reply to  Steven Mosher
October 23, 2017 5:34 am

If there are feedbacks which cannot be predicted and which are emergent, there’s no reason to believe the models in the first place. They could be completely overturned tomorrow by a pesky emergent feedback.

Nick Stokes
Reply to  Sheri
October 23, 2017 5:46 am

The feedbacks emerge from the models. If you don’t believe the models, you don’t believe the feedbacks.

MarkW
Reply to  Sheri
October 23, 2017 6:45 am

Are you trying to argue that nobody knew about feedbacks until the models discovered them?
Sheesh, you don’t need models to determine that feedbacks exist. Just think for yourself.

Duster
Reply to  Sheri
October 23, 2017 10:52 am

Nick, the key word employed by both Mosher and Sheri was “emergent” – “feedback.” That is, unforeseen “feedbacks.” I’m pretty sure you are quibbling over terminology, but pay attention to the intent instead. Those “emergent” conditions would create unexpected, unmodeled behaviour in the empirical data, and create unanticipated divergences between modeled results and measured empirical conditions. If those “emergent” influences tend to have a bias that cannot be accounted for, then the mean model results and empirical data will diverge over time – creating “hiatuses” or “pauses,” possibly even long term states like Little Ice Ages.

whiten
Reply to  Sheri
October 23, 2017 12:08 pm

Sheri
October 23, 2017 at 5:34 am

But the models do only one significant feedback: temp to CO2, or maybe the other way around, where other feedbacks have no potential or detectable effect, as is actually supposed to be the case under an ever-increasing RF warming. That is the main standing point of AGW, if not the entire point of AGW.

So an RF warming cannot actually be messed up by other feedbacks, especially when in fast up-going trends.

So, the question: what actually ate all that supposed, expected AGW RF warming!?
A “dog feedback,” perhaps!

cheers

micro6500
Reply to  whiten
October 23, 2017 2:06 pm

So an RF warming cannot actually be messed up by other feedbacks, especially when in fast up-going trends.
So, the question: what actually ate all that supposed, expected AGW RF warming!?
A “dog feedback,” perhaps!

Water vapor lets it go to space until the surface cools off, then the energy stored in the atm column, and in water vapor, drains out to slow the cooling once air temps near the dew point.
https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/

crackers345
Reply to  Sheri
October 23, 2017 1:24 pm

Duster said – “Those “emergent” conditions would create unexpected, unmodeled behaviour in the empirical data, and create unanticipated divergences between modeled results and measured empirical conditions”

no. the feedbacks emerge as a result of the models’ underlying equations, viz. of the physics incorporated into the model.

example: ice-albedo feedback. basic warming from CO2 melts ice, so less sunlight is reflected back to space and so the ocean & air warm more.

this emerges from models, because they continually calculate ice extents. they assume ice has a certain albedo, and ocean another. thus, when ice melts, more warming occurs, beyond that of CO2 alone.
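
The mechanism crackers345 describes fits in a few lines of toy Python. This is a sketch only (the albedos, melt rule, and sensitivity constant below are invented numbers, and no GCM is remotely this simple), but it shows what “emergent” means here: only the rules are coded, yet the warming per step grows beyond what the imposed forcing alone would give.

    # Toy sketch of an ice-albedo loop. All numbers are invented for illustration.
    ALBEDO_ICE, ALBEDO_OCEAN = 0.6, 0.1
    S = 342.0            # mean incoming solar flux, W/m^2
    SENS = 0.3           # invented response, K per W/m^2 of extra absorbed flux
    FORCING_CO2 = 4.0    # imposed GHG forcing, W/m^2, which starts the loop

    def planet_albedo(ice_frac):
        # area-weighted albedo: less ice means a darker planet
        return ice_frac * ALBEDO_ICE + (1.0 - ice_frac) * ALBEDO_OCEAN

    ice_frac = 0.3
    baseline = S * (1.0 - planet_albedo(ice_frac))   # pre-forcing absorbed flux
    for step in range(6):
        absorbed = S * (1.0 - planet_albedo(ice_frac)) + FORCING_CO2
        d_temp = SENS * (absorbed - baseline)         # anomaly vs pre-forcing state
        ice_frac = max(0.0, ice_frac - 0.02 * d_temp) # warming melts ice
        print(f"step {step}: ice fraction {ice_frac:.3f}, dT = {d_temp:+.2f} K")

With these made-up constants the first step gives the 1.2 K the forcing alone would produce; every later step gives more, which is all “feedback beyond CO2 alone” means in this caricature.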

micro6500
Reply to  crackers345
October 23, 2017 2:56 pm

no. the feedbacks emerge as a result of the models’ underlying equations, viz. of the physics incorporated into the model.
example: ice-albedo feedback. basic warming from CO2 melts ice, so less sunlight is reflected back to space and so the ocean & air warm more.
this emerges from models, because they continually calculate ice extents. they assume ice has a certain albedo, and ocean another. thus, when ice melts, more warming occurs, beyond that of CO2 alone.

And if you implement it the way you described, it’s wrong. Once the incident angle gets under 20 degrees or so, open water has nearly the same albedo as ice, so there’s only about a quarter of the day, around solar noon, where the feedback is positive. The rest of the time, if the sky is clear, open water is a huge radiator to space.
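
micro6500’s grazing-angle claim can be checked against the Fresnel equations for unpolarized light over flat water (refractive index about 1.33). The sketch below ignores waves, foam, and the actual spectral albedo of ice, so treat it as illustrative only; it does show reflectance climbing toward ice-like values below roughly 10 degrees of solar elevation.

    import math

    N_WATER = 1.33  # refractive index of water

    def water_reflectance(elev_deg):
        # unpolarized Fresnel reflectance of flat water for sunlight
        # arriving at the given solar elevation angle (degrees)
        theta_i = math.radians(90.0 - elev_deg)           # incidence from vertical
        if theta_i == 0.0:
            return ((N_WATER - 1.0) / (N_WATER + 1.0)) ** 2
        theta_t = math.asin(math.sin(theta_i) / N_WATER)  # Snell's law
        rs = (math.sin(theta_i - theta_t) / math.sin(theta_i + theta_t)) ** 2
        rp = (math.tan(theta_i - theta_t) / math.tan(theta_i + theta_t)) ** 2
        return 0.5 * (rs + rp)

    for elev in (90, 45, 20, 10, 5, 2):
        print(f"solar elevation {elev:2d} deg: reflectance {water_reflectance(elev):.2f}")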

bitchilly
Reply to  Sheri
October 23, 2017 2:14 pm

micro, i notice you never get a response from nick or mosher to your posts. you should get your work written up and submitted.

micro6500
Reply to  bitchilly
October 23, 2017 3:05 pm

you should get your work written up and submitted

I’m not very good at that kind of stuff. And I know I would just get jerked around until either I got tired of it or they did. But it’d never be published while it matters.
So I published it at wordpress, and the code and reports on sourceforge. I’m sure it’s been seen by more people through social media than some pay-to-play journal.

And sooner or later it’ll be the end of this mess.
I just want it called the “Crow Effect” lol!!!!

David A
Reply to  Sheri
October 23, 2017 2:26 pm

Crackers, what you are describing is certainly NOT linear.

Joel O’Bryan
Reply to  Sheri
October 23, 2017 4:33 pm

I have seen Gavin tweets where he fully acknowledges the models do not model the feedbacks correctly if at all. One example, The ENSO pseudocycles are clearly chaotic responses that feed into GMST +/-, but the models are helpless on it.

RW
Reply to  Sheri
October 23, 2017 8:37 pm

Nick Stokes quibble over terminology? He lives for that sort of thing.

whiten
Reply to  Sheri
October 24, 2017 10:37 am

micro6500
October 23, 2017 at 2:06 pm

Thank you micro.

Appreciated a lot.

From my side, almost all comments of yours are appreciated.
But if I have not got this wrong (hopefully; I have made only a superficial pass at your work there), it seems mostly, as far as I can tell, to be a further-detailed and very interesting treatment of the Trenberth “Iris”, which may explain how the earth and atmospheric response work in relation to RF forcing in the short term.

Please do forgive me if I happen to have misunderstood your work, but it seems to be very important as a way to try to explain the non-linearity of the real climate versus the linearity propagated by the GCMs.

Please do let me know, if you would not mind, if I happen to have misunderstood your point. Nobody is perfect. :)

Thanks.

Cheers

micro6500
Reply to  whiten
October 24, 2017 12:36 pm

But if I have not got this wrong (hopefully; I have made only a superficial pass at your work there), it seems mostly, as far as I can tell, to be a further-detailed and very interesting treatment of the Trenberth “Iris”, which may explain how the earth and atmospheric response work in relation to RF forcing in the short term.

I’m not sure it operates like an iris; more like a turbo button.
I think what’s happening is that the sensible heat from the cooling atm column, including all the water vapor that is condensing (and then re-evaporating), keeps the surface warm, near the dew point temp, until the Sun comes up to store up energy to do it again.

So, it is more like a bucket of water with a hole in it which, after the air temp falls to the dew point, opens a spigot that supplements the water level so it doesn’t drop much further until the Sun comes up and fills them both back up (all the while the one is still draining).
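
micro6500’s bucket analogy can be caricatured in a few lines. The cooling rates and dew-point floor below are invented; the only point is the two-regime shape, fast radiative cooling first, then a near-floor once condensation starts paying latent heat back.

    # Toy "bucket with a spigot" night-time cooling sketch. Invented numbers.
    T_START, T_DEWPOINT = 25.0, 12.0   # deg C
    FAST_RATE, SLOW_RATE = 1.5, 0.1    # deg C per hour

    temp = T_START
    for hour in range(12):
        # fast radiative cooling until the air nears the dew point,
        # then latent-heat release slows the decline to a crawl
        rate = FAST_RATE if temp > T_DEWPOINT + 1.0 else SLOW_RATE
        temp -= rate
        regime = "radiative cooling" if rate == FAST_RATE else "latent-heat floor"
        print(f"hour {hour + 1:2d}: T = {temp:5.1f} C ({regime})")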

crackers345
Reply to  Sheri
October 25, 2017 8:37 pm

micro6500 commented “Water vapor let’s it go to space….”

what the he11 does this mean?

micro6500
Reply to  crackers345
October 26, 2017 12:37 am

Means you do not understand how the surface cools at night

Phoenix44
Reply to  Steven Mosher
October 23, 2017 6:44 am

Except that they have been shown not to be. You can make assertions about how models are supposed to work and how modelers think they do work, but unless you are a unique set of humans/modelers that never make errors, you are going to have to prove what you say is right.

As for emergent feedbacks from models, please. The idea that your model is so brilliant that it is showing us things we didn’t know, rather than being errors, is the sort of arrogance that gets modelers a really, really bad name.

Reply to  Steven Mosher
October 23, 2017 5:43 pm

putz

kyle_fouro
Reply to  Steven Mosher
October 23, 2017 9:35 pm

Mosher contradicts Mosher

https://imgur.com/a/DEZYf

Pat Frank
Reply to  Steven Mosher
October 23, 2017 10:47 pm

Steve Mosher, the linearity of GCM air temperature projections is demonstrated in dozens of examples right there in front of your eyes.

In the manuscript and the Supporting Information document.

I doubt you’ve even looked at either, though; much less read them, much less understood them.

Which might explain your denial of the demonstrated.

Nick Stokes
October 23, 2017 1:18 am

“Here’s the analytical core of it all:

Climate model air temperature projections are just linear extrapolations of greenhouse gas forcing. Therefore, they are subject to linear propagation of error.”

It’s the core of the nonsense. For a start, they aren’t “extrapolations” of forcing. You can find a curve fit, by fiddling parameters. So? That is true of many things. It doesn’t mean that the mechanism of the model is wrong or trivial, or even that its error propagation should follow the curve fit.

The statement that “therefore” they are subject to the linear propagation of error is just assertion. It has no basis.

“The volcanic forcings are non-linear, but climate models extrapolate them linearly.”
Gobbledygook. What does non-linear here even mean? With respect to what? But again, climate models don’t “extrapolate” them. They admit them as a forcing in the set of equations, and give an approximately proportional response. Not unexpected.

From the figure captions
“The points are Jim Hansen’s 1988 scenario A, B, and C. All three scenarios include volcanic forcings.”
Actually no. Scenario A did not include volcanics. Pat’s argument proceeds regardless.

TimTheToolMan
Reply to  Nick Stokes
October 23, 2017 2:39 am

Nick writes

For a start, they aren’t “extrapolations” of forcing. You can find a curve fit, by fiddling parameters. So? That is true of many things.

Including say clouds in the models. Fitted but meaningless.

AndyG55
Reply to  Nick Stokes
October 23, 2017 3:13 am

“It doesn’t mean that the mechanism of the model is wrong or trivial”

But you KNOW that it is wrong, don’t you NIck.

All that bluster to hide KNOWN errors.

So sad. !!

Nick Stokes
Reply to  Nick Stokes
October 23, 2017 3:45 am

“And Jim Hansen’s scenarios assumed NO volcanic forcings?”
You don’t read, and you don’t know anything. Scenario A assumed no volcanic forcings. B&C had forcings, clearly reflected in the featured figure.

RW
Reply to  Nick Stokes
October 23, 2017 12:29 pm

I didn’t take Pat Frank’s argument to imply the first part of what you wrote. I took his curve fit (as you put it) to be a simplified model of what the models output. His model of the models does a pretty good job of doing that. Using the curve fit, he then propagates a specific error to generate the uncertainty at each step of his model. The implication is that the more complex models are not properly propagating error.

The notion that Frank’s critique is inapplicable ‘because d-d-d-different models!’ is gibberish baloney.

Team up with the other ‘error propagation is not applicable’ folk around here and write a rebuttal guest post.

Pat Frank
Reply to  RW
October 23, 2017 11:12 pm

RW wrote, “I didn’t take Pat Frank’s argument to imply the first part of what you wrote. I took his curve fit (as you put it) to be a simplified model of what the models output.”

Thank-you RW. That’s a very succinct recapitulation.

Honestly, it’s a relief to read the remarks of folks here who get the analysis. Thank-you all. 🙂

RW
Reply to  RW
October 26, 2017 8:54 am

crackers345, so yes one could conceivably hard code the Earth’s orbital path into the code as an influence on solar insolation. I am not sure where the controversy lies here, and as I have said elsewhere I don’t know climate models.

RW
Reply to  RW
October 26, 2017 9:54 am

^ wrong subthread

bitchilly
Reply to  Nick Stokes
October 23, 2017 2:18 pm

which scenario was closest to reality, and how many volcanoes erupted while that reality played out?

Pat Frank
Reply to  Nick Stokes
October 23, 2017 11:03 pm

Nick Stokes wrote, “For a start, they aren’t “extrapolations” of forcing.

The emulations demonstrate GCMs do exactly that. In any case, any “projection” is an extrapolation of conditions into future states. So you’re wrong empirically and in principle, Nick, and all in one sentence.

You can find a curve fit, by fiddling parameters. So?

So, it means that climate model air temperature projections linearly extrapolate forcing to project air temperature. That’s all they do.

The consequence? Linear propagation of error. And that’s QED.

It doesn’t mean that the mechanism of the model is wrong or trivial, or even that its error propagation should follow the curve fit.

The demonstration has nothing to do with “the mechanism of the model.” The model is a black box.

The demonstration has to do with model output. It’s shown to be linear. That’s the only thing necessary to show, to justify linear propagation of error.

The linear equation successfully emulates the air temperature projections of any GCM. That makes it completely appropriate to use for propagation of projection error.

What does non-linear here even mean?

It means what the Gavinoid implied it means: inflective departure of forcing from a smooth curve. Take a look at the graph. Forcing does that when volcanoes enter the picture.

They admit them as a forcing in the set of equations, and give an approximately proportional response. Not unexpected.

With the bolded phrase, you’ve inadvertently validated my analysis, Nick. That’s twice now. Thanks again. 🙂

Scenario A did not include volcanics.

Scenario A included volcanics prior to 1990. The historical set when viewed from 1988. It’s right there in the graph.
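
For readers who want to try the linearity test for themselves, the recipe is a straight-line regression of a projection’s temperature anomaly against the fractional change in forcing. The series below is an invented placeholder, not CMIP5 output; substitute a real projection and its forcing history to run the test properly.

    import numpy as np

    # Invented placeholder series -- swap in a real GCM projection and its
    # forcing history to do this for real.
    rng = np.random.default_rng(0)
    frac_forcing = np.linspace(0.0, 0.5, 11)                 # (F - F0) / F0
    projected_dT = 0.1 + 4.2 * frac_forcing + rng.normal(0, 0.03, 11)

    m, a = np.polyfit(frac_forcing, projected_dT, 1)         # straight-line fit
    emulated = a + m * frac_forcing
    ss_res = np.sum((projected_dT - emulated) ** 2)
    ss_tot = np.sum((projected_dT - projected_dT.mean()) ** 2)
    print(f"slope = {m:.2f} K, offset = {a:.2f} K, R^2 = {1 - ss_res / ss_tot:.4f}")

An R² near 1 across many models and scenarios is the linearity claim at issue.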

HotScot
October 23, 2017 1:22 am

David Cosserat

I doubt it will be a definitive moment; that was supposed to be Climategate, which was all too easily swept under the carpet.

However, there is always room for debate on any subject and having it in the open is beneficial to us all, sceptics and alarmists.

As for me not understanding the content, I’m not a scientist, nor even well educated, but I long ago learned that the climate debate is more than just science. Besides, after 60 years of observation, I don’t see any meaningful change in the planet’s climate other than my garden plants growing better than they ever have.

BallBounces
Reply to  HotScot
October 23, 2017 5:22 am

“the climate debate is more than just science” Egg zactly. Make into a poster and plaster on every wall.

Colorado Wellington
Reply to  HotScot
October 23, 2017 10:14 am

I doubt it will be a definitive moment, that was supposed to be Climategate which was all to easily swept under the carpet.

As a system, the socio-political climate complex has shown high stability and resilience built on strong across-the-board negative feedback to any forcing.

They don’t even need a carpet.

Nick Stokes
October 23, 2017 1:30 am

I tried to download the paper and just got a complaint from the host about an ad-blocker, and promotional material.

Urederra
Reply to  Nick Stokes
October 23, 2017 2:50 am

The problem is on your side. I downloaded it without any problem, no pop-ups, pop-unders or anything. Here is a screenshot:

http://oi64.tinypic.com/15q4rx4.jpg

There is a 97% chance you may probably have a virus in your computer. Have you visited any naughty website?

Nick Stokes
Reply to  Urederra
October 23, 2017 3:41 am

No, I tried to download it directly, like you would any other file. When I went through their website, resisting their $5 blandishments, it came through.

john harmsworth
Reply to  Urederra
October 23, 2017 6:54 am

I modeled an attempt to download it and it worked every time.

sy computing
Reply to  Urederra
October 23, 2017 4:55 pm

I modeled an attempt to download it and it worked every time.

LOL

AndyG55
Reply to  Nick Stokes
October 23, 2017 2:50 am

diddums !! do you need a hanky?

Sheri
Reply to  Nick Stokes
October 23, 2017 5:35 am

Interesting. I do adblock and had no such problem…..Maybe the site simply doesn’t like you?

Duster
Reply to  Nick Stokes
October 23, 2017 11:00 am

Nick, you did notice that there are two download options. Only one asks for money. The other is labeled “Slow Download.” It really isn’t that slow unless you are using a 1990s modem for your connection.

Nick Stokes
Reply to  Duster
October 23, 2017 12:04 pm

PF gave a link, I right-clicked, said save link as, and got a whole lot of gibberish html.

John Mauer
Reply to  Nick Stokes
October 23, 2017 3:40 pm

Nick, it downloads fine. If you really don’t have a copy, I can put it in DropBox for you.

Nick Stokes
Reply to  John Mauer
October 23, 2017 3:59 pm

As I noted above, when I tried to save with “save link as”, I got that nonsense. When I got through to the page displayed by Atheok, I was able to download it, as I noted above. I have read it, quoted sections, and shown images of text from it.

Jan PC Lindstrom
October 23, 2017 1:31 am

To me it seems like an overall confusion between error and uncertainty? They are not the same according to the GUM standard (the Guide to the Expression of Uncertainty in Measurement). An error can be corrected (calibrated out) if known. An uncertainty cannot.

Pat Frank
Reply to  Jan PC Lindstrom
October 24, 2017 7:52 pm

You got a crux issue, Jan PC. 🙂 I have yet to encounter a climate modeler who understands that difference.

Another point raised in GUM is that random errors become systematic when they are propagated forward into a calculation. That’s another form of systematic error thoroughly ignored in climate modeling.
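
The GUM distinction is easy to state with toy numbers: a known error (a bias) can simply be subtracted out after the fact, while an uncertainty of the same size cannot be, and when a per-step uncertainty u enters each of n additive steps it accumulates as u·√n. The numbers below are invented for illustration.

    import math

    u_step = 0.1   # per-step calibration uncertainty, invented units
    bias = 0.5     # a KNOWN systematic error, invented
    n_steps = 100

    value = sum(1.0 + bias for _ in range(n_steps))  # raw result carries the bias
    corrected = value - n_steps * bias               # known error: correctable
    uncertainty = u_step * math.sqrt(n_steps)        # uncertainty: only accumulates
    print(f"corrected result = {corrected:.1f} +/- {uncertainty:.1f}")

The bias vanishes once it is known; the ±1.0 does not, no matter how well the code runs.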

willhaas
October 23, 2017 1:40 am

The computer simulations in question have hard coded in that an increase in CO2 causes warming. Hence these computer simulations beg the question of whether CO2 causes warming, and are therefore of no value. In terms of atmospheric physics there is plenty of reasoning to support the idea that the climate sensitivity of CO2 is really zero.

Nick Stokes
Reply to  willhaas
October 23, 2017 2:15 am

“The computer simulations in question have hard coded in that an increase in CO2 causes warming.”
Evidence?

Nick Stokes
Reply to  Nick Stokes
October 23, 2017 2:38 am

I’m asking for evidence that “have hard coded in that an increase in CO2 causes warming”. Do you have any?

AndyG55
Reply to  Nick Stokes
October 23, 2017 2:52 am

“I’m asking for evidence that “have hard coded in that an increase in CO2 causes warming””

Do you use GISS as a hind-cast?

There is your answer.

Or are you that naive? really ???

TimTheToolMan
Reply to  Nick Stokes
October 23, 2017 2:55 am

Nick writes

Evidence?

Well we know for a fact that adjustable parameters are changed to set the required radiative imbalance in the models. How’s that?

David Middleton
Reply to  Nick Stokes
October 23, 2017 3:04 am

The scenarios with more GHG’s lead to warmer model projections…

This is either a coincidence or a pretty good clue that the models “have hard coded in that an increase in CO2 causes warming.”

Nick Stokes
Reply to  Nick Stokes
October 23, 2017 3:12 am

‘This is either a coincidence or a pretty good clue that the models “have hard coded in that an increase in CO2 causes warming.”’
No. It suggests that the GHE physics means CO2 would cause warming, and that they correctly model the physics. But “hard-coded”. That is just made up.

On that logic you could say that computation could never reveal anything. Because if it predicts anything, then the result must have been hard-coded in.

AndyG55
Reply to  Nick Stokes
October 23, 2017 3:14 am

“It suggests that the GHE physics means CO2 would cause warming,”

So you ADMIT that ERRONEOUS science is programmed into the models.

FINALLY you are waking up to reality !

WELL DONE , Nick.

AndyG55
Reply to  Nick Stokes
October 23, 2017 3:16 am

Nick, you do realise that you just admitted to every word Forrest has said, don’t you ?

So FUNNY !!.

Try a new pair of socks.. those one don’t seem to be so tasty for you. !!

George Tetley
Reply to  Nick Stokes
October 23, 2017 3:31 am

Evidence (or dense?)
Unless Nick wrote it, it ain’t!

Nick Stokes
Reply to  Nick Stokes
October 23, 2017 3:38 am

So if an economic model predicts 2% inflation, they must have hard-coded 2% in?

Sheri
Reply to  Nick Stokes
October 23, 2017 5:37 am

If CO2 isn’t hard coded in the models, then what is the point?

Latitude
Reply to  Nick Stokes
October 23, 2017 7:25 am

“If CO2 isn’t hard coded in the models, then what is the point?”

……we have a winner!

Vanna has some great parting gifts for the rest of you…….

Editor
Reply to  Nick Stokes
October 23, 2017 8:27 am

Come on Nick, don’t be THAT stupid:

“Evidence?”

David made the OBVIOUS reply to your, …. he he….question.

Reply to  Nick Stokes
October 23, 2017 11:36 am

Nick Stokes October 23, 2017 at 2:15 am
“The computer simulations in question have hard coded in that an increase in CO2 causes warming.”
Evidence?

Nick Stokes October 23, 2017 at 3:38 am
So if an economic model predicts 2% inflation, they must have hard-coded 2% in?

I have friends and neighbors who still think that climate alarmists are arguing in good faith.

Unbelievable.

whiten
Reply to  Nick Stokes
October 23, 2017 12:48 pm

David Middleton
October 23, 2017 at 3:04 am

The scenarios with more GHG’s lead to warmer model projections…
—————-
Not trying to be picky, but at best the above is still no more than an assumption, even in the case of the GCMs, where it may seem so obvious and “certain”.

It still needs some kind of validation; otherwise it remains an assumption in principle.

Considering the strong correlation of the CO2 ppm trend with temps in GCMs, and their connective relation, it is not hard to see that detecting which “jumps first to increase”, the temps or the CO2 ppm, in any GCM scenario could clarify whether that is possible.
I know of no such trial ever attempted or performed as a way of validating the assumption!

For as long as this point remains unclarified, the assumption remains, at best, an assumption, no matter how strange it may seem to consider it that way under the circumstances.

cheers

talldave2
Reply to  Nick Stokes
October 23, 2017 1:10 pm

“If CO2 isn’t hard coded in the models, then what is the point?”

Sometimes they code in everything else, and then infer CO2 or GHG as what’s left. It’s a valid technique as long as you’re completely omniscient on every other factor involved.
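
A sketch of the attribution-by-residual bookkeeping talldave2 describes, with invented toy series. The caveat in his last sentence is the whole problem: any error in the “known” terms lands in the residual and gets labelled GHG.

    import numpy as np

    # Invented toy series standing in for "everything else we coded in".
    rng = np.random.default_rng(1)
    years = np.arange(30)
    solar = 0.02 * np.sin(2 * np.pi * years / 11)   # invented solar-cycle term
    volcanic = np.zeros(30)
    volcanic[12] = -0.3                              # invented eruption spike
    observed = 0.015 * years + solar + volcanic + rng.normal(0, 0.05, 30)

    inferred_ghg = observed - solar - volcanic       # "what's left" is called GHG
    print(f"inferred GHG trend: {np.polyfit(years, inferred_ghg, 1)[0]:.3f} per year")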

RW
Reply to  Nick Stokes
October 23, 2017 8:59 pm

More obfuscation from Nick. A model IS hard coded. The result predetermined. A model produces different results because it is initialized differently by the user, provided different values for the parameters by the user, or provided different parameters by the user, or perhaps because the code, for whatever insane reason, uses a random number generator. Where you draw the line between one model and the next is arbitrary. Comparing predictions to observations is the only way to definitively test a model. A given model is refuted when its prediction does not match observation. When a model is based on the observations it is used to predict, it is overfitted, liable to be modeling more noise than it should, and will underperform with new observations.

Nick Stokes
Reply to  Nick Stokes
October 23, 2017 9:15 pm

“A model IS hard coded. The result predetermined.”
This gets to silly quibbling. Of course you can say that any computer program is hard coded, and computers do what they are told. So when Deep Blue wins at chess against Kasparov, that was hard-coded. Gets a bit silly, but technically true. It doesn’t mean that the programmer put in the tricks that brought Kasparov undone.

There is a popular line of articles at WUWT about chaos and the unpredictability of GCMs (and CFD, and weather, for that matter). It’s true that GCMs approach attractor solutions that can’t be worked out from initial conditions without that computation. As with CFD, you learn things from computation that the programmers couldn’t have told you.
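
Nick’s point about attractors can be illustrated with something far simpler than a GCM: the logistic map below is one hard-coded line of arithmetic, yet whether it settles, oscillates, or wanders chaotically at a given r is learned only by running it, not by reading the code.

    # One hard-coded update rule; the long-run behaviour is not written in anywhere.
    def logistic_tail(r, x=0.2, warmup=500, keep=4):
        for _ in range(warmup):          # let transients die out
            x = r * x * (1.0 - x)
        tail = []
        for _ in range(keep):            # sample the attractor
            x = r * x * (1.0 - x)
            tail.append(round(x, 4))
        return tail

    for r in (2.8, 3.2, 3.5, 3.9):
        print(f"r = {r}: tail -> {logistic_tail(r)}")

A fixed point at r = 2.8, a 2-cycle at 3.2, a 4-cycle at 3.5, chaos at 3.9: all from the same coded rule.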

kyle_fouro
Reply to  Nick Stokes
October 23, 2017 9:39 pm
David Cosserat
Reply to  Nick Stokes
October 23, 2017 11:13 pm

1sky1
I said: “Without water vapour and CO2 and the other minor radiative/absorptive gases, the surface would be a rocky waterless planet with a mean surface temperature around that of the Moon.”

You said: “That would be the case only if the Moon had an GHG-less atmosphere of equal density!”

What you said is only true if you subscribe to the minority view that the pressure (hence density) of the GHG-less components of an atmosphere have a warming effect. That is no longer a mainstream sceptical view because the physics involved in this so-called non-GHG warming effect have never been satisfactorily described.

RW
Reply to  Nick Stokes
October 24, 2017 10:49 am

Nick. So I think we agree that this is just weed territory. Having said that, it is a waste of comment space to nitpick at stuff like the claim that CO2 is hard coded into the model. Clearly it is at some level hard coded in the model to increase temperature with CO2, all other things being equal. Just because there are other factors that might swamp that influence out in the model doesn’t mean the comment was worthy of additional scrutiny. It’s borderline troll territory in my view to get into weeds like that. I’m willing to grant intellectual charity to the poster. I’m willing to believe that they are aware that CO2 is not the only factor in these models and that, in fact, depending on some of the other factors, the model could predict reduced temperatures despite increasing levels of CO2.

RW
Reply to  Nick Stokes
October 24, 2017 10:58 am

Nick. No argument from me vis a vis the utility of modelling and running them to see what happens. I’m willing to believe though that there is a mind out there that is sharp enough to foresee what the model will do (or have a pretty good idea) without running it. But for sure, for the rest of us, we need to run the model to see what happens. The deterministic aspect does not hinge on our ability to work out what the model will output though. I don’t know climate models though, so if there is some built-in random number generation (simulated stochastic stuff) then obviously no one would be able to know what the model will output in advance.

crackers345
Reply to  Nick Stokes
October 25, 2017 8:43 pm

RW commented – “A model IS hard coded”

Given the 1/R^2 law of gravitation, is the orbital path of the Earth “hard coded?”
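
Taking the question literally: in the sketch below, only the inverse-square law is hard coded (normalized units with GM = 1 and an arbitrary starting state); the closed elliptical orbit emerges from the integration rather than being written in anywhere.

    # Two-body orbit: only the 1/r^2 force law is coded. Normalized units, GM = 1.
    GM, dt = 1.0, 0.001
    x, y = 1.0, 0.0          # start one unit from the sun
    vx, vy = 0.0, 0.9        # below circular speed, so expect an ellipse

    for step in range(20000):
        r3 = (x * x + y * y) ** 1.5
        vx -= GM * x / r3 * dt   # acceleration from the inverse-square law only
        vy -= GM * y / r3 * dt
        x += vx * dt             # semi-implicit Euler keeps the orbit stable
        y += vy * dt
        if step % 5000 == 0:
            print(f"step {step:5d}: distance from sun r = {(x*x + y*y) ** 0.5:.3f}")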

RW
Reply to  Nick Stokes
October 26, 2017 8:48 am

crackers345. If something is hard coded, it is baked into the architecture, the programmer’s code, rather than being a parameter. So, although the level of CO2 itself is undoubtedly a specifiable parameter, what is done with the value is hard coded – i.e. the complex function that outputs temperature, among other things, given the values of many other variables in the function. This is what I take willhaas to be saying when he wrote the bit Nick objected to. Not sure what your question has to do with programming a climate model, but it smells like more quibbling over arbitrary distinctions to me.

RW
Reply to  Nick Stokes
October 26, 2017 8:56 am

crackers345, so yes one could conceivably hard code the Earth’s orbital path into the code as an influence on solar insolation. I am not sure where the controversy lies here, and as I have said elsewhere I don’t know climate models.

RACookPE1978
Editor
Reply to  RW
October 26, 2017 9:43 am

To within 1/4 watt/m^2 at TOA, this formula agrees with Leif Svalgaard’s daily recorded TOA values.

TOA_DOY =1362.36+46.142*(COS(0.0167299*(DOY)+0.03150896)) for DOY = 1 on Jan 1 to 365. (Excel format)
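
For anyone who wants to plot it, here is a direct Python transcription of the Excel formula above, transcribed as given and not independently checked against the daily records mentioned.

    import math

    def toa_doy(doy: int) -> float:
        # top-of-atmosphere solar flux (W/m^2) for day-of-year 1..365,
        # per the Excel formula quoted above
        return 1362.36 + 46.142 * math.cos(0.0167299 * doy + 0.03150896)

    for doy in (1, 91, 182, 274, 365):
        print(f"DOY {doy:3d}: {toa_doy(doy):.1f} W/m^2")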

David Cosserat
Reply to  willhaas
October 23, 2017 3:11 am

Willhaas,

You say: In terms of atmospheric physics there is plenty of reasoning to support the idea that the climate sensitivity of CO2 is really zero.

I strongly disagree. You are (I hope inadvertently) undermining the mainstream sceptical position.

It is certain that the presence of CO2 in the atmosphere since time immemorial contributes in a minor way to the current mean surface temperature of 15degC (288K), the main (invisible) contributor being water vapour.

Without water vapour and CO2 and the other minor radiative/absorptive gases, the surface would be a rocky waterless planet with a mean surface temperature around that of the Moon, namely -75degC (198K) as determined from the NASA Moon orbiter. In the absence of all such gases, the earth’s atmosphere – reduced to its remaining constituents, Nitrogen and Oxygen, which are not significantly radiative/absorptive (by several orders of magnitude) at earth atmospheric temperatures – would be transparent.

Therefore, since CO2 is a minor contributor to the warm world we currently experience, a doubling in CO2 must, in logic, cause some change in mean surface temperature.

The real debate is: how much of a change? Sceptics say, not a lot. Alarmists say, by a dangerous amount. But there is no reason in physics (atmospheric or otherwise) that says the climate sensitivity to changing CO2 is exactly zero. To assert that is to walk straight into the climate alarmist trap…

jaakkokateenkorva
Reply to  David Cosserat
October 23, 2017 5:17 am

Your argument applies equally to homeopathics. Fortunately mainstream skeptic position is an oxymoron.

Sheri
Reply to  David Cosserat
October 23, 2017 5:40 am

David: I tend to agree. At one time, it was forbidden to say CO2 had no effect because it made skeptics look unscientific. Now, it seems common and almost mainstream here. To say it has “no effect” makes the same assumption as saying it has a great effect: that we know everything there is to know about climate. I can’t see how a real scientist can make a statement to that effect.

Hugs
Reply to  David Cosserat
October 23, 2017 7:46 am

‘argument applies equally to homeopathics’

Chutzpah you have but you are not a Jedi yet.

paqyfelyc
Reply to  David Cosserat
October 23, 2017 8:36 am

Actually, Earth radiates as if it were -18 C (255 K), for an average power of 240 W/m², while the moon, which has an albedo of only 0.13 (vs Earth’s 0.3), receives and radiates ~295 W/m². So this just doesn’t add up to a mean surface temperature of 198 K for the moon. Or, rather, to have things add up, you have to consider that the moon’s surface is so bumpy that its area is much larger than 4·Pi·R², but then you cannot compare this temperature from the NASA moon orbiter to Earth’s.
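
paqyfelyc’s arithmetic, spelled out with standard constants: a uniform sphere at the Moon’s albedo comes out near 269 K, far above the ~198 K measured mean, because an airless, slowly rotating surface is nowhere near isothermal, and the average of T is not the T of the average flux. The sketch below just reproduces the flux figures he quotes.

    # Stefan-Boltzmann equilibrium temperature of a uniform sphere.
    SIGMA = 5.670e-8   # W m^-2 K^-4
    S0 = 1361.0        # solar constant, W/m^2

    def uniform_sphere(albedo):
        absorbed = S0 * (1.0 - albedo) / 4.0          # averaged over the sphere
        return absorbed, (absorbed / SIGMA) ** 0.25   # flux and equilibrium temp

    for name, albedo in (("Earth", 0.30), ("Moon", 0.13)):
        flux, t_eq = uniform_sphere(albedo)
        print(f"{name}: absorbs {flux:.0f} W/m^2 -> T_eq = {t_eq:.0f} K")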

David Cosserat
Reply to  David Cosserat
October 23, 2017 9:10 am

jaakkokateenkorva, October 23, 2017 at 5:17 am.

You say: Your argument applies equally to homeopathics.

In relation to the totality of ‘greenhouse’ gases, CO2 is a small proportion of GHGs. Some say 25%, others say 5%, but either way, it is NOT a vanishingly small proportion. (Yes, it is a vanishingly small proportion of the atmosphere as a whole, including the non-radiative/absorptive gases, but these do NOT contribute to warming the planet.)

So the warming effect of CO2 is not in any way comparable to the charlatanry of homeopathy. It is real.

I find your ‘oxymoronic’ comment completely incomprehensible…

jaakkokateenkorva
Reply to  David Cosserat
October 23, 2017 9:16 am

Thanks Hugs, but yeah. I don’t have enough midi-chlorians to speak on behalf of all skeptics, 97% of scientists etc, and thus limit myself to writing only my own opinions. That doesn’t prevent me standing by the gas law pV=nRT though, meaning that the pressure, volume, temperature and mass of a gas are interrelated irrespective of composition.

jaakkokateenkorva
Reply to  David Cosserat
October 23, 2017 9:22 am

David. The concept of “mainstream skeptic position” is much like “scientific consensus”. In my opinion equally useful at best and oxymoronic at worst.

TA
Reply to  David Cosserat
October 23, 2017 10:45 am

“It is certain”

It used to be certain that humans were causing the Earth’s atmosphere to cool, back in the 1970s. Then the climate warmed up and we don’t hear that certainty anymore.

Being certain about something does not necessarily make it true. You are just guessing as to what CO2 is doing in the atmosphere. It may not be adding any net heat to the atmosphere at all. Prove it does.

This skeptic is skeptical that you or anyone else has any proof to the contrary. “Certain” is not good enough.

Am I hurting the skeptic’s cause by my assertions?

No, the skeptic’s cause is to demand proof of other’s assertions. If there is no proof, skeptics should say so. I’m saying so. Prove me wrong.

Duster
Reply to  David Cosserat
October 23, 2017 11:17 am

David Cosserat
October 23, 2017 at 9:10 am

… but these do NOT contribute to warming the planet…

While I agree about the basic argument you offered, you do make a mistake. If, for instance, you descend from Jerusalem to the Dead Sea, you experience sensible warming as you descend. Since the atmosphere has the same composition, the difference in temperature is not due to CO2, methane or water vapor.

RWturner
Reply to  David Cosserat
October 23, 2017 1:38 pm

The proponents of GHG planetary temperature say the surface temperature of a planet is mostly due to the atmospheric composition giving the greenhouse effect, plus irradiance; naysayers say it’s almost completely due to the density and molar mass of the atmosphere, as well as the irradiance on the planet.

Of course there are models showing the latter is correct, whereas the former has never been empirically demonstrated in the real world.

Take a look at the Galilean Moons in order of descending atmospheric density: Io, Callisto, Ganymede, Europa. Now, I’ll list these in order of descending average temperature: Io, Callisto, Ganymede, Europa (coincidence?). Now, the irradiance is quite similar for all these moons and only one has a significant amount of greenhouse gases comprising its atmosphere — Callisto.

Now, can someone tell me why Io has a higher average temperature than Callisto, despite having an atmosphere almost entirely comprised of sulfur compounds whereas Callisto has an atmosphere of CO2? Hint: Io’s surface pressure is orders of magnitude higher than Callisto’s.

Why wouldn’t quantized molecular vibrations induced by back radiation have a significant impact on planetary surface temperature? It’s because heat transfer in an atmosphere is dominated by unquantized kinetic energy (molecular collisions that theoretically occur every 10^-7 s at Earth’s surface pressure) and convection. Furthermore, molecular vibrations are quantized, you simply can’t add more vibrational energy to a molecule if it is already in its energized state. Molecules already in their energized state are transparent to the radiation that already put it into that energized state. Trying to make planetary surface temperature about quantized molecular vibrations is like trying to flood the sea by spitting in the ocean.