A skeptic attempts to break the ‘pal review’ glass ceiling in climate modeling

Propagation of Error and the Reliability of Global Air Temperature Projections

Guest essay by Pat Frank

Regular readers at Anthony’s Watts Up With That will know that for several years, since July 2013 in fact, I have been trying to publish an analysis of climate model error.

The analysis propagates a lower limit calibration error of climate models through their air temperature projections. Anyone reading here can predict the result. Climate models are utterly unreliable. For a more extended discussion see my prior WUWT post on this topic (thank-you Anthony).

The bottom line is that when it comes to a CO2 effect on global climate, no one knows what they’re talking about.

Before continuing, I would like to extend a profoundly grateful thank-you! to Anthony for providing an uncensored voice to climate skeptics, over against those who would see them silenced. By “climate skeptics” I mean science-minded people who have assessed the case for anthropogenic global warming and have retained their critical integrity.

In any case, I recently received my sixth rejection; this time from Earth and Space Science, an AGU journal. The rejection followed the usual two rounds of uniformly negative but scientifically meritless reviews (more on that later).

After six tries over more than four years, I now despair of ever publishing the article in a climate journal. The stakes are just too great. It’s not the trillions of dollars that would be lost to sustainability troughers.

Nope. It’s that if the analysis were published, the career of every single climate modeler would go down the tubes, starting with James Hansen. Their competence comes into question. Grants disappear. Universities lose enormous income.

Given all that conflict of interest, what consensus climate scientist could possibly provide a dispassionate review? They will feel justifiably threatened. Why wouldn’t they look for some reason, any reason, to reject the paper?

Somehow climate science journal editors have seemed blind to this obvious conflict of interest as they chose their reviewers.

With the near hopelessness of publication, I have decided to make the manuscript widely available as samizdat literature.

The manuscript with its Supporting Information document is available without restriction here (13.4 MB pdf).

Please go ahead and download it, examine it, comment on it, and send it on to whomever you like. For myself, I have no doubt the analysis is correct.

Here’s the analytical core of it all:

Climate model air temperature projections are just linear extrapolations of greenhouse gas forcing. Therefore, they are subject to linear propagation of error.

Complicated, isn’t it. I have yet to encounter a consensus climate scientist able to grasp that concept.

Willis Eschenbach demonstrated that climate models are just linearity machines back in 2011, by the way, as did I in my 2008 Skeptic paper and at CA in 2006.

The manuscript shows that this linear equation …

[image: the manuscript’s linear emulation equation]

… will emulate the air temperature projection of any climate model; fCO2 reflects climate sensitivity and “a” is an offset. Both coefficients vary with the model. The parenthetical term is just the fractional change in forcing. The air temperature projections of even the most advanced climate models are hardly more than y = mx+b.
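As a sketch of how simple the emulation is (every number below is hypothetical; the manuscript fits fCO2 and “a” separately for each model), the whole thing reduces to a straight line in the fractional change of forcing:

```python
# Sketch of the linear emulation described above: a straight line in the
# fractional change of forcing. f_co2 plays the role of a sensitivity
# coefficient and "a" is an offset; all values here are hypothetical.
def emulate_anomaly(cum_forcing, f0, f_co2, a):
    # anomaly = f_co2 * (dF / f0) + a, i.e. y = m*x + b
    return [f_co2 * (dF / f0) + a for dF in cum_forcing]

# Hypothetical cumulative greenhouse-gas forcing changes (Wm-2):
forcing = [0.5, 1.0, 1.5, 2.0]
anomalies = emulate_anomaly(forcing, f0=34.0, f_co2=30.0, a=0.1)
```

Fit the two coefficients to any one model’s output and the straight line tracks its projection; that is the entire claim.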

The manuscript demonstrates dozens of successful emulations, such as these:

[figure: two example emulations]

Legend: points are CMIP5 RCP4.5 and RCP8.5 projections. Panel ‘a’ is the GISS GCM Model-E2-H-p1. Panel ‘b’ is the Beijing Climate Center Climate System GCM Model 1-1 (BCC-CSM1-1). The PWM lines are emulations from the linear equation.

CMIP5 models display an inherent calibration error of ±4 Wm-2 in their simulations of long wave cloud forcing (LWCF). This is a systematic error that arises from incorrect physical theory. It propagates into every single iterative step of a climate simulation. A full discussion can be found in the manuscript.
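The mechanics of the propagation can be sketched in a few lines: when an uncertainty of the same magnitude enters every iterative step, the accumulated uncertainty is the root-sum-square over the steps. The ±4 Wm-2 value is the LWCF calibration error quoted above; treating each projection step as one unit interval is schematic.

```python
import math

# Root-sum-square propagation of a per-step calibration uncertainty.
# If an uncertainty sigma enters each of n iterative steps independently,
# the propagated uncertainty after n steps is sqrt(n) * sigma, so the
# envelope widens with every projection step.
def propagated_uncertainty(sigma, n_steps):
    return [math.sqrt(n) * sigma for n in range(1, n_steps + 1)]

envelope = propagated_uncertainty(4.0, 5)  # ±4 Wm-2 per step, 5 steps
```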

The next figure shows what happens when this error is propagated through CMIP5 air temperature projections (starting at 2005).

[figure: emulations with propagated uncertainty envelopes]

Legend: Panel ‘a’ points are the CMIP5 multi-model mean anomaly projections of the 5AR RCP4.5 and RCP8.5 scenarios. The PWM lines are the linear emulations. In panel ‘b’, the colored lines are the same two RCP projections. The uncertainty envelopes are from propagated model LWCF calibration error.

For RCP4.5, the emulation departs from the mean near projection year 2050 because the GHG forcing has become constant.

As a monument to the extraordinary incompetence that reigns in the field of consensus climate science, I have made the 29 reviews and my responses for all six submissions available here for public examination (44.6 MB zip file, checked with Norton Antivirus).

When I say incompetence, here’s what I mean and here’s what you’ll find.

Consensus climate scientists:

1. Think that precision is accuracy

2. Think that a root-mean-square error is an energetic perturbation on the model

3. Think that climate models can be used to validate climate models

4. Do not understand calibration at all

5. Do not know that calibration error propagates into subsequent calculations

6. Do not know the difference between statistical uncertainty and physical error

7. Think that a “±” uncertainty means a positive error offset

8. Think that fortuitously cancelling errors remove physical uncertainty

9. Think that projection anomalies are physically accurate (never demonstrated)

10. Think that projection variance about a mean is identical to propagated error

11. Think that a “±K” uncertainty is a physically real temperature

12. Think that a “±K” uncertainty bar means the climate model itself is oscillating violently between ice-house and hot-house climate states

Item 12 is especially indicative of the general incompetence of consensus climate scientists.

Not one of the PhDs making that supposition noticed that a “±” uncertainty bar passes through, and cuts vertically across, every single simulated temperature point. Not one of them figured out that their “±” vertical oscillations meant that the model must occupy the ice-house and hot-house climate states simultaneously!

If you download them, you will find these mistakes repeated and ramified throughout the reviews.

Nevertheless, my manuscript editors apparently accepted these obvious mistakes as valid criticisms. Several have the training to know the manuscript analysis is correct.

For that reason, I have decided their editorial acuity merits them our applause.

Here they are:

  • Steven Ghan, Journal of Geophysical Research-Atmospheres
  • Radan Huth, International Journal of Climatology
  • Timothy Li, Earth Science Reviews
  • Timothy DelSole, Journal of Climate
  • Jorge E. Gonzalez-cruz, Advances in Meteorology
  • Jonathan Jiang, Earth and Space Science

Please don’t contact or bother any of these gentlemen. On the other hand, one can hope some publicity leads them to blush in shame.

After submitting my responses showing the reviews were scientifically meritless, I asked several of these editors to have the courage of a scientist and publish over the meritless objections. After all, in science, analytical demonstrations are bulletproof against criticism. However, none of them rose to the challenge.

If any journal editor or publisher out there wants to step up to the scientific plate after examining my manuscript, I’d be very grateful.

The above journals agreed to send the manuscript out for review. Determined readers might enjoy the few peculiar stories of non-review rejections in the appendix at the bottom.

Really weird: several reviewers inadvertently validated the manuscript while rejecting it.

For example, the third reviewer in JGR round 2 (JGR-A R2#3) wrote that,

“[emulation] is only successful in situations where the forcing is basically linear …” and “[emulations] only work with scenarios that have roughly linearly increasing forcings. Any stabilization or addition of large transients (such as volcanoes) will cause the mismatch between this emulator and the underlying GCM to be obvious.”

The manuscript directly demonstrated that every single climate model projection was linear in forcing. The reviewer’s admission of linearity is tantamount to a validation.

But the reviewer also set a criterion by which the analysis could be verified — emulate a projection with non-linear forcings. He apparently didn’t check his claim before making it (big oh, oh!) even though he had the emulation equation.

My response included this figure:

[figure: emulations of Hansen’s 1988 scenarios]

Legend: The points are Jim Hansen’s 1988 scenario A, B, and C. All three scenarios include volcanic forcings. The lines are the linear emulations.

The volcanic forcings are non-linear, but climate models extrapolate them linearly. The linear equation will successfully emulate linear extrapolations of non-linear forcings. Simple. The emulations of Jim Hansen’s GISS Model II simulations are as good as those of any climate model.

The editor was clearly unimpressed by the demonstration, and by the fact that the reviewer had inadvertently validated the manuscript analysis.

The same incongruity of inadvertent validations occurred in five of the six submissions: AM R1#1 and R2#1; IJC R1#1 and R2#1; JoC, #2; ESS R1#6 and R2#2 and R2#5.

In his review, JGR R2 reviewer 3 immediately referenced information found only in the debate I had (and won) with Gavin Schmidt at Realclimate. He also used very Gavin-like language. So, I strongly suspect this JGR reviewer was indeed Gavin Schmidt. That’s just my opinion, though. I can’t be completely sure because the review was anonymous.

So, let’s call him Gavinoid Schmidt-like. Three of the editors recruited this reviewer. One expects they called in the big gun to dispose of the upstart.

The Gavinoid responded with three mostly identical reviews. They were among the most incompetent of the 29. Every one of the three included mistake #12.

Here’s Gavinoid’s deep thinking:

“For instance, even after forcings have stabilized, this analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states.”

And there it is. Gavinoid thinks the increasingly large “±K” projection uncertainty bars mean the climate model itself is oscillating increasingly wildly between ice-house and hot-house climate states. He thinks a statistic is a physically real temperature.

A naïve freshman mistake, and the Gavinoid is undoubtedly a PhD-level climate modeler.

The majority of Gavinoid’s analytical mistakes fall under list items 2, 5, 6, 10, and 11. If you download the paper and Supporting Information, section 10.3 of the SI includes a discussion of the total hash Gavinoid made of a Stefan-Boltzmann analysis.

And if you’d like to see an extraordinarily bad review, check out ESS round 2 review #2. It apparently passed editorial muster.

I can’t finish without mentioning Dr. Patrick Brown’s video criticizing the YouTube presentation of the manuscript analysis. This was my 2016 talk for the Doctors for Disaster Preparedness. Dr. Brown’s presentation was also cross-posted at “andthentheresphysics” (named with no appreciation of the irony) and on YouTube.

Dr. Brown is a climate modeler and post-doctoral scholar working with Prof. Kenneth Caldeira at the Carnegie Institute, Stanford University. He kindly notified me after posting his critique. Our conversation about it is in the comments section below his video.

Dr. Brown’s objections were classic climate modeler, making list mistakes 2, 4, 5, 6, 7, and 11.

He also made the nearly unique mistake of confusing a root-sum-square average of calibration error statistics with an average of physical magnitudes; nearly unique because one of the ESS reviewers made the same mistake.

Mr. andthentheresphysics weighed in with his own mistaken views, both at Patrick Brown’s site and at his own. His blog commentators expressed fatuous insubstantialities and his moderator was tediously censorious.

That’s about it. Readers moved to mount analytical criticisms are urged to first consult the list and then the reviews. You’re likely to find your objections critically addressed there.

I made the reviews easy to appraise by starting them with a summary list of reviewer mistakes. That didn’t seem to help the editors, though.

Thanks for indulging me by reading this.

I felt a true need to go public, rather than submitting in silence to what I see as reflexive intellectual rejectionism and indeed a noxious betrayal of science by the very people charged with its protection.

Appendix of Also-Ran Journals with Editorial ABM* Responses

Risk Analysis. L. Anthony (Tony) Cox, chief editor; James Lambert, manuscript editor.

This was my first submission. I expected a positive result because they had no dog in the climate fight, their website boasts competence in mathematical modeling, and they had published papers on error analysis of numerical models. What could go wrong?

Reason for declining review: “the approach is quite narrow and there is little promise of interest and lessons that transfer across the several disciplines that are the audience of the RA journal.”

Chief editor Tony Cox agreed with that judgment.

A risk analysis audience not interested to discover there’s no knowable risk to CO2 emissions.

Right.

Asia-Pacific Journal of Atmospheric Sciences. Songyou Hong, chief editor; Sukyoung Lee, manuscript editor. Dr. Lee is a professor of atmospheric meteorology at Penn State, a colleague of Michael Mann, and altogether a wonderful prospect for unbiased judgment.

Reason for declining review: “model-simulated atmospheric states are far from being in a radiative convective equilibrium as in Manabe and Wetherald (1967), which your analysis is based upon.” and because the climate is complex and nonlinear.

Chief editor Songyou Hong supported that judgment.

The manuscript is about error analysis, not about climate. It uses data from Manabe and Wetherald but is very obviously not based upon it.

Dr. Lee’s rejection follows either a shallow analysis or a convenient pretext.

I hope she was rewarded with Mike’s appreciation, anyway.

Science Bulletin. Xiaoya Chen, chief editor, unsigned email communication from “zhixin.”

Reason for declining review: “We have given [the manuscript] serious attention and read it carefully. The criteria for Science Bulletin to evaluate manuscripts are the novelty and significance of the research, and whether it is interesting for a broad scientific audience. Unfortunately, your manuscript does not reach a priority sufficient for a full review in our journal. We regret to inform you that we will not consider it further for publication.”

An analysis that invalidates every single climate model study for the past 30 years, demonstrates that a global climate impact of CO2 emissions, if any, is presently unknowable, and indisputably proves the scientific vacuity of the IPCC, does not reach a priority sufficient for a full review in Science Bulletin.

Right.

Science Bulletin then courageously went on to immediately block my email account.

*ABM = anyone but me; a syndrome widely apparent among journal editors.


Pat,
This has already been explained to you numerous times, so it’s unlikely that this attempt will be any more successful than previous attempts. The error that you’re trying to propagate is not an error at every timestep, but an offset. It simply influences the background/equilibrium state, rather than suggesting that there is an increasing range of possible states at every step. For example, if we ran two simulations with different solar forcings (but everything else the same), this wouldn’t suddenly mean that they would/could diverge with time; it would mean that they would settle to different background/equilibrium states.

Old England

@ and Then There’s Physics

I’m a layman and no mathematician but having read the first few pages of the paper it seems to me that your points are answering the wrong question. (?)

The point made, or so it appears to me, is that where there is uncertainty in the assumptions being made within a model then – if, as they should be those uncertainties are expressed and included within the model, as the time-steps are calculated then the uncertainty grows into a wide band with a diverging top and bottom spread of values. In other words they diverge.

If the uncertainties are not included as part of the model then surely it is linear and unable to produce meaningful results?

If you have multiple uncertainties, as in climate, which are input into a model then the spread or divergence must become even greater with time.
Some of those would seem to be (but are far from limited to): temperatures and their effect on atmospheric water vapour levels; cloud formation and cloud cover; solar activity; volcanic activity; etc. Each would have an effect on some of the others, and with an amount of uncertainty which would need to be expressed.

As I said, I am a layman and would appreciate it if you could enlighten me.
Thanks

The point made, or so it appears to me, is that where there is uncertainty in the assumptions being made within a model then – if, as they should be those uncertainties are expressed and included within the model, as the time-steps are calculated then the uncertainty grows into a wide band with a diverging top and bottom spread of values. In other words they diverge.

Except this is not correct. An uncertainty only propagates if it applies at every step (i.e., if there is some uncertainty in the expected value at every step). If, however, some value is “wrong” by some amount that is the same at all time steps, then this does not propagate (by “wrong” I mean potentially different to reality). In this case, it is quite possible that the cloud forcing is “wrong” by a few W/m^2. What this would mean is that the equilibrium state would also then be “wrong”. It doesn’t mean, however, that the range of possible equilibrium states will grow with time, since this error does not propagate.

As I mentioned in the first comment, imagine we could run a perfect model in which every parameter exactly matched reality. Now imagine running the same model, apart from the Solar forcing being different by a few W/m^2. What would happen is that this would change the equilibrium state (there would be a constant offset between the “perfect” model and this other model). It would not mean that the difference between the model with the different solar forcing, and the “perfect” model would grow with time.

Hivemind

“An uncertainty only propagates if it applies at every step”

Um… No. A climate model is essentially an attempt to integrate a bunch of co-dependent variables numerically. If you knew anything about numerical integration, you would know that errors propagate wildly. The tool is, fundamentally, unsuited to the purpose to which it is being put.
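A toy nonlinear iteration makes the point concrete (the logistic map in its chaotic regime; purely schematic, and nothing to do with any actual climate model): two runs started a hair apart do not stay a constant offset apart.

```python
# Logistic map with r = 3.9 (chaotic regime). Two runs starting 1e-6 apart
# decorrelate; the difference between them is anything but a constant offset.
def logistic(x0, r, n):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

run_a = logistic(0.400000, 3.9, 40)
run_b = logistic(0.400001, 3.9, 40)
diffs = [abs(x - y) for x, y in zip(run_a, run_b)]
# diffs starts at 1e-6 and grows; well before step 40 the runs have decorrelated
```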

TimTheToolMan

ATTP, stop focusing on the output result and thinking its error is within an acceptable range and therefore fine to propagate.

The issue is that the uncertainty propagated at each time step isn’t seen in the output, because the output has been constrained by design to stay within “reasonable” values. This is taken as evidence the model is “doing the right thing,” but the real problem is that at every single time step the output is meaningless for a climate calculation: the climate signal is much smaller than the error, and what we’re left with is a fitted result.

Now you’ll arc up and suggest GCMs aren’t fits and are based on physics, but again you’re mistaken, because there are components (e.g. clouds) that aren’t; they’re approximations, they’re fits, and by including them the models themselves are reduced to fits.

The whole GCM enterprise further relies on the assumption that errors cancel at each step throughout, and that’s a ridiculous assumption: completely unjustified and most certainly incorrect. In fact there is a small (unintentional) built-in bias that results in an expected result.

AndyG55

“imagine we could run a perfect model in which every parameter exactly matched reality.”

Yet you are UNABLE to run one where ANY parameter matches reality.

The ONLY thing you have is hallucinogenic anti-science IMAGINATION and FAIRY-TALES

” If you knew anything about numerical integration, you would know that errors propagate wildly.”
I know lots about numerical integration (so does ATTP). I have spent a large part of my professional life doing it, in computational fluid dynamics, a regular engineering activity of which GCMs are a subset. Your statement is nonsense.

As I mentioned in the first comment, imagine we could run a perfect model in which every parameter exactly matched reality. Now imagine running the same model, apart from the Solar forcing being different by a few W/m^2. What would happen is that this would change the equilibrium state (there would be a constant offset between the “perfect” model and this other model). It would not mean that the difference between the model with the different solar forcing, and the “perfect” model would grow with time.

There’s no reason that difference would be equal over time. That’s a sign you have created a linear model, and it’s a decidedly non-linear system you’re modeling.

If this is what you think, you guys are lost.

ripshin

Micro,

I totally agree. This is probably the most telling comment of this whole discussion.

rip

ATTP, I was thinking about this more. You totally do not get that WV acts as a regulating medium: it actively alters the outgoing radiation response based on cooling temperatures, and not the stupid SB 4th-power decay; this is on top of that, it’s the bends in the clear-sky cooling profile. And since this is decidedly non-linear, and it controls the response to CO2, you’re not accounting for it in your models.

Think about how much the atm column shrinks at night. When it’s calm, it can only cool by radiation, and radiation is omnidirectional. Also, for every gram of water vapor there is a 4.21 J exchange of IR for a condense-reevaporation cycle as, let’s say, a 3,000 meter tall stack cools.
Interestingly, it cools really quickly till air temps near dew point, then it stops cooling. It’s just that there’s about -50 W/m^2 of radiation to space through the optical window based on SB calculations, yet net radiation is less than -20 W/m^2. There’s about 35 W/m^2 of sensible heat keeping the surface temp from falling as quickly.
There’s a 90F difference in the middle of the spectrum; I’ve measured over 100F differences.
How much energy is about a 1 psi change between morning min T and afternoon max temps at the surface (plus enthalpy lost, water condensed)? Oh wait, without the pressure change, an average of about 3,300 W/m^3.

beng135

attp says:

Now imagine running the same model, apart from the Solar forcing being different by a few W/m^2. What would happen is that this would change the equilibrium state (there would be a constant offset between the “perfect” model and this other model). It would not mean that the difference between the model with the different solar forcing, and the “perfect” model would grow with time.

Funny, your example uses the ONLY independent variable in the whole shebang. Use any other co-dependent variable, and your example is busted.

Leo Smith

Except this is not correct. An uncertainty only propagates if it applies at every step (i.e., if there is some uncertainty in the expected value at every step). If, however, some value is “wrong” by some amount that is the same at all time steps, then this does not propagate (by “wrong” I mean potentially different to reality)

Only in linear systems

In chaotic systems a single butterfly flapping its wings once….

…and that is a huge point. Climate models treat the climate as a linear system, because we do not have computational tools that can address the uncertainty of non linear systems.

To accept chaotic behaviour is merely to affirm ‘we can’t predict where this is going at all’. Or, to put it in the vernacular: climate science, at that level, is just bunk.

Even those people here who look for ‘cycles’ in climate with the ardent passion of ‘chemtrail’ observers may in the end be barking up only a slightly less egregious gum tree than the climate scientists. Chaotic behaviour produces quasi-periodic fluctuations: that is, over short time spans it may look briefly like a cycle, but then, as it moves towards new attractors, it will enter a different ‘cycle’. Those of us who have built electronic circuits utilising chaotic feedback (super-regenerative radios) know that, absent a forcing signal, what you get is NOISE pure and simple, with no detectable single spectral component.

Nothing is more infuriating than to have someone lecturing you on the characteristics of linear equations, challenging you to disprove their finer points, when your whole position is predicated on the provable assertion that what is being modelled cannot be represented by linear equations in the first place.

Mark - Helsinki

“Except this is not correct. An uncertainty only propagates if it applies at every step (i.e., if there is some uncertainty in the expected value at every step). If, however, some value is “wrong” by some amount that is the same at all time steps”

Incorrect: the value increases with each step over time. You are a completely anti-scientific chappy, clueless

“…and Then There’s Physics October 23, 2017 at 1:20 am

Except this is not correct. An uncertainty only propagates if it applies at every step (i.e., if there is some uncertainty in the expected value at every step)…”

Typical attp tactic, start off with a lie then spin sophistry round your false strawman.

Mark - Helsinki

Micro thanks for exposing ATTP’s cut and paste knowledge. Once you get in depth with him, he vanishes every time and runs back to his echo chamber

this annual average ±4.0 Wm-2 year-1 uncertainty in simulated LWCF is approximately ±150% larger than all the forcing due to all the anthropogenic greenhouse gases put into the atmosphere since 1900 (~2.6 Wm-2), and approximately ±114× larger than the average annual ~0.035 Wm-2 year-1 increase in greenhouse gas forcing since 1979

The error DOES in my opinion propagate.

And Then There’s Physics says, “If, however, some value is ‘wrong’ by some amount that is the same at all time steps, then this does not propagate.”

If the correction for cloud fraction error was a simple linear adjustment to models to correct the error, we would never have known about it. The adjustment would have been applied, and the model prediction would have aligned with observed cloud fraction.

Since nobody can accurately predict how clouds respond to GHG forcing, the margin for error grows with every iteration step. The uncertainty of how clouds will respond to the GHG forcing applied in a single step has to be carried through to the next iteration.

When the margin for error drastically exceeds what is physically plausible, I think we can safely assume the predictions of the model are total nonsense.

Page 23 of Pat Frank’s paper, hindcast cloud fraction error of global climate models.

M Courtney

ATTP says,

An uncertainty only propagates if it applies at every step (i.e., if there is some uncertainty in the expected value at every step). If, however, some value is “wrong” by some amount that is the same at all time steps, then this does not propagate (by “wrong” I mean potentially different to reality).

This assumes a linear response. It assumes that climate (and thus, presumably, weather) is a linear function of forcings.

If the initial value is “wrong” by some amount – or inaccurate by some amount – then that will affect the next iteration in some way.
If the next iteration is affected by the same amount every single time then the response is always constant.

Once again we have pseudoscience pretending that clouds don’t exist. That phase changes (water vapour to water droplets, for example) are smooth.

Why does ATTP worry about a declining Arctic Icecap when he doesn’t believe in non-linear phase changes? Melting can’t exist in his understanding of climate!
Except he has no understanding. He’s just a climate fanatic. It’s faith, not science.

Pat Frank

You’ve got the essence, Old England, “that where there is uncertainty in the assumptions being made within a model then – if, as they should be those uncertainties are expressed and included within the model, as the time-steps are calculated then the uncertainty grows into a wide band…”

You have grasped the central point that continually eludes ATTP and virtually every single climate modeler.

The error is systematic, resident in the model, and is introduced into a simulation by the model itself. It enters every simulation time-step, and necessarily produces an increasing uncertainty in the projection.

Look at ATTP’s reply to you. His “wrong by some amount” supposes a constant offset error and is a completely wrong description of the systematic error.

Look at manuscript Figure 5. Every single model has a different error profile, with positive and negative excursions. I pointed this out to ATTP in prior conversations. He ignores it, perhaps because he doesn’t understand the significance. Change the parameter set of any one model, and its error profile will be different.

But ATTP (and others) want to add up all the errors to get one number, and then assume that number is a constant offset error that will correct any model expectation value to be error-free. His (their) idea is beyond parody.

Then he goes on to suppose statistical uncertainty is physical error, i.e., ATTP: “the range of possible equilibrium states will grow with time, since this error does not propagate.”

ATTP makes a standard mistake of my reviewers, here specifically number 6, but he has already also made mistakes 4, 5, 7 and 8.

He makes those same mistakes over, and over again.

Pat Frank

TimTheToolMan gets it right, as usual.

Tim, do you have any idea why uncertainty is so opaque to climate modelers?

It’s dead obvious to any experimental scientist or engineer.

Pat Frank

In this post, Nick Stokes admitted that GCMs are engineering models. I.e., Nick: “a regular engineering activity of which GCMs are a subset.”

Engineering models are useless outside their calibration bounds. Nick has repudiated the entire global warming scary-2100 enterprise.

Yet another inadvertent validation in an attempted refutation. Thank-you, Nick.

Pat Frank

Eric Worrall, your comment is right on.

Thanks for posting Figure 5. It shows that every model has a different error profile, with positive and negative excursions.

Mere inspection of the figure shows how ludicrous is ATTP’s idea that all those errors should be merely added together into a number. And then subtracted away to make everything accurate. Only in consensus climate science.

“Engineering models are useless outside their calibration bounds.”
So what are the “calibration bounds” of, say, Nastran? Or Fluent, or Ansys? Pat, you don’t have a clue about engineering models.

So what are the “calibration bounds” of, say, Nastran? Or Fluent, or Ansys?

Well, it’s obvious you don’t understand this.
Calibration isn’t defined by the simulator, but by the models as applied to the design you’re evaluating. And it’s in comparison to the real circuit in operation.

TimTheToolMan

Pat writes

Tim, do you have any idea why uncertainty is so opaque to climate modelers?

I don’t think it is. I think even Nick gets it, and one day might even accept it (no, Nick, it doesn’t mean your CFD work is dead, or that weather models are wrong; models still have their place!), but no climate modeler can admit to it because it’d be, well… a career-limiting move. And as the GCMs are the cornerstone of so much of our science today, untangling the mess would be horrendous. Better to let sleeping dogs lie.

Pat Frank,

Look at manuscript Figure 5.

Ok, it shows latitudinal profiles of 25-year averaged model cloud fraction error versus cloud fraction observations averaged over a similar timescale. It demonstrates latitudinal error offsets between models and observations, as well as showing differences between models.

Every single model has a different error profile with positive and negative excursions.

Yes, this is well known and clearly understood by ATTP. Different models, different offsets.

Change the parameter set of any one model, and its error profile will be different.

Yes, this would obviously be true but how is it relevant to error propagation within a projection? Within an individual model projection run the parameter set will remain the same, thereby maintaining the same offset error.

Put in context of your Figure 5, your error propagation suggests that those error profiles should change quite dramatically over time. Why would that happen?

Pat Frank

Nick Stokes, no matter your diversionary sneering, I’m clued in enough to know that engineering models are unreliable outside their calibration bounds.

Pat Frank

Paulski0, “Within an individual model projection run the parameter set will remain the same, thereby maintaining the same offset error.”

Not correct, for two reasons. The parameters are not unique. They have large uncertainty widths. One can get the same apparent error with different suites of parameters. A given error is just representative. It does not transmit the true range of model errors. The uncertainty is made cryptic unless this is taken into account.

Second, even with unchanging parameter sets, any given projection simulation step is wrong, but by some unknown amount. Those wrong climate states are projected forward. Every step begins with initial-value errors.

The projection error from step to step therefore varies, and in unknowable ways.

In a futures projection, one can’t know the errors. One only knows the uncertainty, by way of the propagated calibration error statistic. And uncertainty grows with each projection step because of increasing ignorance of the relative positions of the simulated state and the correct physical state in phase space.

Error propagation says nothing about error profiles in projection simulations. It addresses the reliability of the projection. Not its error.
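The growth of that uncertainty statistic can be sketched in a few lines (a minimal illustration; the per-step uncertainty value is hypothetical, not a number from the manuscript): a constant per-step calibration uncertainty propagated as the root-sum-square grows as the square root of the number of steps, regardless of what any single simulated trajectory looks like.

```python
import math

def propagated_uncertainty(u_step, n_steps):
    """Root-sum-square accumulation of a constant per-step
    calibration uncertainty over n_steps projection steps."""
    return math.sqrt(n_steps) * u_step

u_step = 0.5  # hypothetical per-step uncertainty (arbitrary units)
for n in (1, 4, 25, 100):
    # envelope widens as sqrt(n): 0.50, 1.00, 2.50, 5.00
    print(f"after {n:3d} steps: +/-{propagated_uncertainty(u_step, n):.2f}")
```

Note the envelope is an ignorance measure, not a predicted trajectory; no single model run need ever wander that far.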

pbweather

I think you and the reviewers may be missing the point here attp. Millions spent on building climate models…and a simple linear model can recreate them very closely…..surely this is worth publishing and worth investigating further?

Leo Smith

…and a simple linear model can recreate them very closely…..

A simple linear model is what climate models are, stripped of decorative complexity, but whilst models may be represented by a model of that nature, reality, it seems, is just too complicated for that class of model to have a snowball’s chance in hell of representing the vagaries of actual climate.

So I don’t know what you are saying, but it’s not worth spending a copper nickel on.

I looked into cutting-edge attempts by seriously bright mathematicians to even discern whether a given set of nonlinear partial differential equations led to a bounded set of solutions (broadly, a climate that never goes below snowball earth or boils the oceans dry) and we can’t even do THAT. Observationally, climate is amazingly stable.

But wobbles a lot as well.

And we have absolutely no idea whether it could one day wobble off to a whole new regime, just because a butterfly flapped its wings, let alone by injecting tons of CO2 into it. All we can say is that in times gone by, when CO2 was way greater than it is today, or is likely to be in the foreseeable future, the climate seems to have been stable enough for life to flourish.

The state of climate change science stripped down to the actual science, which is almost none, is simply stated

1/. We don’t know.
2/. Even if we did know the partial differentials governing it, we still wouldn’t know what the climate will do.
3/. We lack both the mathematics and the computational power to ever know better than that.
4/. Climate change is therefore not worth spending any grant money on.
5/. Even WUWT has no function beyond pointing out points 1, 2, 3 and 4.
6/. The IPCC is an organization without any purpose, since it exists to advise governments on situations that have no existence in reality.
7/. Renewable energy is therefore a crock of excrement, a pointless waste of money.
8/. Anyone who disagrees with any of the points above is like a holocaust denier.
9/. There is an urgent need to set up an international organisation to help whole swathes of the population come to terms with the facts that:
– the cheque isn’t in the post
– the tooth fairy doesn’t exist
– he/she won’t love you in the morning.
– ‘man made climate change’ is as real as Tinkerbelle.

Pat Frank

Great post, Leo Smith. Your number 1 has been the conclusion of my AGW assessment from the first. 🙂

If only you were head of the US National Academy. Or Pres. Trump’s science advisor. 🙂

Another point, that I think I’ve made to Pat before, is that if he is correct he should be able to easily demonstrate this. If you’re running computational models, one way to estimate the uncertainty is to simply run them many times with different initial conditions. If the uncertainty propagates as Pat suggests, then the range of results should reflect this. As I understand it, this has been done, and they do not.

This is clear even from the published CMIP5 simulations. Pat Frank claims that the error arising from cloud uncertainty alone should accumulate to an extent of ±16°C by 2100. And he seems to infer the cloud error from disagreement between the models. But the CMIP5 models clearly do not diverge by 16°C by 2100. Here is a plot: [embedded image of CMIP5 projections]

The spread is mainly due to the different scenarios; for an individual scenario it is maybe ±0.6°C.

AndyG55

Yes Nick

Hundreds of scam CO2-hatred “scenarios”.

NOT ONE anywhere near REALITY.

Thanks for drawing that to everybody’s attention.

Bob boder

Nick

And that’s your argument to establish the effectiveness of the models? Really?

Steve Keppel-Jones

That’s not true Andy! Give Nick his due. ONE of those models is quite close to reality – the one at the very bottom. The rest of the models should clearly be fired. But that one should be given a prize, and it shows that temperatures in 2100 will be about the same as today. So according to the one believable model, there is no C in CAGW, and no real W either. Great! Can we all pack up and go home now? And stop wasting money on this nonsense?

MJB

@aTTP (1:24am) and Nick Stokes (2:36am)

It seems the issue is not in getting different results with different initial conditions but rather running slightly different models from the same initial condition.

The simplest setup would be to select a single tunable parameter (e.g. clouds), vary the value up or down to create 2 model formulations, and run them both from the same initial conditions. The different values may cause divergence, or other feedbacks/interactions may dampen it to insignificance.

If I understand the source of Nick’s spaghetti graph, the graph demonstrates the differences between models, not the potential uncertainty inherent in any one model. Each spaghetti line has its own uncertainty band that is not displayed.
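A toy version of that experiment (purely illustrative; the logistic map stands in for a nonlinear model, and all parameter values are made up): two runs differing only in a single tunable parameter, started from the same initial condition, can diverge completely.

```python
def trajectory(r, x0, steps):
    """Iterate the logistic map x <- r*x*(1-x); a toy stand-in for a
    nonlinear model with one tunable parameter r."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(3.90, 0.5, 60)  # model variant 1
b = trajectory(3.91, 0.5, 60)  # same initial condition, parameter nudged ~0.25%

# Compare the late part of the two runs: the trajectories decorrelate
max_diff = max(abs(p - q) for p, q in zip(a[40:], b[40:]))
print(f"max divergence over steps 40-60: {max_diff:.3f}")
```

Whether a real parameter perturbation diverges or is damped depends on the model's feedbacks, which is exactly the question the experiment would probe.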

MJB,
Yes, you could also do what you suggest (i.e., run with the same initial conditions, but different parameters). If we consider clouds, then there is probably a range of a few W/m^2. This would correspond to a potential difference of about 1K; not even close to the ±15K suggested by Pat Frank.

As far as the spaghetti graph is concerned, I think it is a combination of individual models run more than once and different models, so you are correct that it isn’t a true uncertainty. However, it does illustrate that the range is unlikely to be as large as suggested by Pat Frank.

Frenchie77

If constraints are being applied for each calculation then you are not getting modelled outputs but constrained outputs. Do the runs with no constraints to see the inherent validity of the underlying physics, not the hand-tailoring needed to sell a story.

But hey, if the need is to sell a story….

Sunsettommy

I find it hilarious that Nick and others still think the chart at this comment is relevant since it is pseudoscience crap:

https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2643766

How can anyone think wild guesses to year 2100 can be considered good science, when most of it is UNVERIFIABLE! Models are a TOOL for research, not to create actual fact-based science, since it lacks real data for the next 83 years. This is what the AGW conjecture is based on, a puddle of unverifiable guesses.

Bwahahahahahahahahaha!!!

Imagine how real Meteorologists, who do short-term modeling for weather prediction over the next few days, know how quickly short-term predictions can spiral out of reality. I see them adjusting their forecasts daily, sometimes even hourly, as new information comes in, but they can still be waaaaay off anyway, as they were in my city just yesterday.

Models are a TOOL for research, not a creator of data.

Clyde Spencer

…and Then There’s Physics,
You said, “However, it does illustrate that the range is unlikely to be as large as suggested by Pat Frank.” I’m not sure that you can justify that statement. The propagation of errors provides a probabilistic uncertainty range, which is an upper bound, not the most likely outcome. That is, with numerous ensemble runs, they are most likely to cluster around the most probable values, but that doesn’t preclude them from sometimes reaching the maximum values if a large enough number of runs is made.

Bryan A

Extrapolating the apparent arc of the upper limit from the spaghetti plot of model runs, you reach a maximum divergence value of approximately 8.5K to 9.5K, truly slightly more than half the 15K to 16K suggested.

Clyde,

I’m not sure that you can justify that statement. The propagation of errors provides a probabilistic uncertainty range, which is an upper bound, not the most likely outcome. That is, with numerous ensemble runs, they are most likely to cluster around the most probable values, but that doesn’t preclude them from sometimes reaching the maximum values if a large enough number of runs is made.

Normally what’s presented are 1, or 2, sigma uncertainties. This would mean that about 68% (1 sigma), or 95% (2 sigma), of your results should lie within this range. Depending on what is presented, you would expect either about 1/3 of your results (1 sigma), or 5% of your results (2 sigma), to lie outside the range. Therefore, if you ran a lot of simulations and the results never ended up outside the range, then the range would probably be too large.
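Those coverage fractions (about 68.3% within 1 sigma and 95.4% within 2 sigma for a normal distribution) are easy to check by simulation (a quick sketch; the sample size is arbitrary):

```python
import random

random.seed(0)
n = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(n)]

# Fraction of standard-normal draws falling inside 1 and 2 sigma
within_1 = sum(abs(x) <= 1.0 for x in samples) / n  # expect ~0.683
within_2 = sum(abs(x) <= 2.0 for x in samples) / n  # expect ~0.954
print(f"within 1 sigma: {within_1:.3f}, within 2 sigma: {within_2:.3f}")
```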

Bryan,

Extrapolating the apparent arc of the upper limit from the spaghetti plot of model runs, you reach a maximum divergence value of approximately 8.5K to 9.5K, truly slightly more than half the 15K to 16K suggested.

Except, the range is mostly because of the range of emission scenarios, rather than scatter for a single scenario. Therefore, the overall range isn’t representative of some kind of model uncertainty.

sun: the RCPs aren’t “guesses,” they’re assumptions.

Sunsettommy

Crackers, when they run to year 2100, they are indeed wild guesses, since there is ZERO evidence to support it; you are playing word games here. They are unverifiable; you can’t run a hypothesis on it since most of it is far into the future, thus it qualifies as wild guesses.

He writes,

“sun: the RCPs aren’t “guesses,” they’re assumptions.”

Yawn, is this how low science literacy has fallen?

MarkW

The difference between an assumption and a guess is basically the reputation of the person making them.

Joe Crawford

Frenchie77,
You are of course correct. The fact that the models require constraints is enough to invalidate them.

I doubt there is a Mechanical Engineer in the crowd who would trust his/her family’s safety to a 5th floor apartment deck that was designed with, or the design was verified by, a stress analysis (i.e., modelling) program that required constraints be placed within it to keep the calculations within reasonable ranges.

Pat Frank

ATTP, “one way to estimate the uncertainty is to simply run them many times with different initial conditions.”

No, it’s not. Your proposed method tells one nothing about physical uncertainty.

Mistakes 1, 3, 4, 6, and 10. Good job, ATTP.

Pat Frank

Nick Stokes, “Pat Frank claims that the error arising from cloud uncertainty alone should accumulate to an extent of ±16°C by 2100.”

No, I don’t Nick. You’re proposing that ±16°C is a physically real temperature.

It’s an uncertainty statistic. An ignorance measure. It’s not physical error.

You’ve made mistakes 2, 6, 11 and, implicitly, 12.

You’ve many times now demonstrated knowing nothing about physical error analysis. Now it’s many times plus one more.

Pat,
Hold on. You’re suggesting the results from the models are far more uncertain than mainstream climate modellers suggest and yet you’re also suggesting that if you ran the models many times (with different initial conditions and using different parameter values) you would get an overall result that was not representative of the uncertainty. This doesn’t seem consistent.

whiten

Nick Stokes
October 23, 2017 at 2:36 am

My comment to you is not actually about the particular point you are trying to make there, but more about contemplating the validity of the whole argument in question here about GCMs.

You see, you have a clear, beautiful plot there, but it is really not much relevant, as it does not also show the corresponding ppm concentration trends.

Last time I checked AGW is all about temps as per ppm…….and the correlation there…..

Ignoring this actually puts one in the position of misinterpreting the value of GCMs as an experiment…..either intentionally or not.

So while the nice plot you posted may help with your point, in its essence it misleads towards misinterpretation and confusion about the actual value of GCMs as an experiment (which, by the way, are not climate models anyhow, but very, very expensive experimental tools at that).

Don’t you think that the plot you provide, the way it stands, has not much support value for the RF or the fCO2 as contemplated by the AGW hypothesis, one way or another?

cheers

Mark - Helsinki

“That’s not true Andy! Give Nick his due. ONE of those models is quite close to reality – the one at the very bottom.”

yeah, predict 1 2 3 4 5 6 and throw a dice, and one will be right.

Logic is not for you, Nick, or ATTP.

Idiots pretending to be scientists. Why not get English lit Mosher in on the act too

or some more pseudo-science sensitivity studies that are nothing but tuned junk driven by observations

Mark - Helsinki

ATTP
“Pat,
Hold on. You’re suggesting the results from the models are far more uncertain than mainstream climate modellers suggest and yet you’re also suggesting that if you ran the models many times (with different initial conditions and using different parameter values) you would get an overall result that was not representative of the uncertainty. This doesn’t seem consistent.”

It’s not inconsistent.
The models are far more uncertain than claimed because, for one, much comes from hindcast tuning, not physics (physics that is incomplete and in places not well understood). Unless you are going to be uber absurd and claim that is not true.
The range of outcomes is uncertainty (in model physics, which leads to instability, not variability), error, and different tunings.

as with Mosher, logical examination is not for you, as usual, add Nick in there

Leo Smith

However, if you replace the linear models by nonlinear ones, the behaviour is exactly as he describes.
It is not the coherence of linear models that is under criticism; it is their applicability at all.

It is of no use to refute the fact that your cat scratched my leg by pointing out that dogs just don’t do that.

Pat Frank

ATTP, physical uncertainty is with respect to physical reality, not with respect to model spread.

You’re conflating model precision with model accuracy (mistake #1). You make this mistake repeatedly. So do climate modelers. You all seem unable to grasp the difference.

Running a model over and over, with different initial conditions, tells you nothing, nothing, about physical uncertainty (mistake #3).

Unless (BIG! unless here) your model is falsifiable and produces physically unique predictions.

Climate models violate both conditions.

Run them until you’re blue in the face, and you’ll have learned nothing except how they move around.

Tom In Indy

Nick Stokes October 23, 2017 at 2:36 am
Nick, can you extend your chart so we can see how high the projections go for RCP 8.5? The chart cuts them off at the year ~ 2080.

I suspect that if you increase the vertical axis and in addition, include the uncertainty surrounding each run, you will end up with roughly the range suggested by the author of this post.

“…and Then There’s Physics October 23, 2017 at 1:24 am
Another point, that I think I’ve made to Pat before, is that if he is correct he should be able to easily demonstrate this. If you’re running computational models, one way to estimate the uncertainty is to simply run them many times with different initial conditions.”

Think!?
A never believable claim from confirmed liars or misdirection specialists.

If you believe your falsehood, write up a mathematical article and publish it.

Until then, your belief is just so much speculation.
Without proof or logic.

sun says ‘when they run to year 2100, they are indeed wild guesses,since there is ZERO evidence to support it’

no, they’re assumptions, not guesses. there can be no evidence from the future, only assumptions.

a model has to assume a path of future emissions. these are the RCPs. there are four of them for different scenarios of future energy use.

unless you can predict for us that future path. go ahead and try.

“Nick Stokes October 23, 2017 at 2:36 am
This is clear even from the published CMIP5 simulations. Pat Frank claims that the error arising from cloud uncertainty alone should accumulate to an extent of ±16°C by 2100. And he seems to infer the cloud error from disagreement between the models. But the CMIP5 models clearly do not diverge by 16°C by 2100. Here is a plot…”

So much for contributions from Nick.

What are the starting uncertainties in climate models, Nick?

Technically, adjusting a temperature record is an immediate admission of error and even roughly identifies the error range.
Yet, not one of the models initializes with that one uncertainty or propagates it through.

Gross assumptions regarding total lack of temperature equipment calibration or certification.
Total lack of side-by-side measurements before swapping equipment.
Total lack of side-by-side measurements before moving the temperature station.
Total failure to track temperature station infestations or to identify errors caused.

Instead, Nick apparently espouses averaging temperatures repeatedly to accurize numbers and improve precision.
Run the models many times…

A solution that is far worse than claiming stopped clocks are correct twice a day.

RW

The sample standard deviation (SD) in a statistical sense is only meaningful if the underlying population is normally distributed; the percentage of values claimed to fall within some error window depends on the shape of the population distribution. If instead you are talking about the standard deviation of a sampling distribution of a summary statistic (such as the mean), then the central limit theorem is invoked to adopt the assumption that the theoretical sampling distribution of that summary statistic (which you are sampling from) is distributed Normal. The standard error (SE) is the sample estimate of the standard deviation of that sampling distribution.

If the SE (or sometimes the SD, though far less likely) is used to support a statement of confidence about the population parameter, such as the mean, then the correct confidence statement is that the error window has some x chance of encompassing the population parameter. Again, assuming a Normal distribution. The notion that the one confidence window you calculate will contain x percent of ‘the data’ or of the sample statistic, were I to run the process over and over again, is incorrect. Each time you sample, both the mean and the SE vary, and as such so will any confidence statement drawn from the sample statistics.

The proper statement of interpretation of confidence (or uncertainty) is that, in the long run of N (very large) samples of size ‘n’, my ‘x level of confidence’ error windows will capture the population parameter x percent of the time.
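That long-run interpretation can be illustrated with a toy simulation (all numbers arbitrary): draw repeated samples from a population with a known mean, build a ~95% interval from each sample’s SE, and count how often the interval captures the true mean.

```python
import random
import statistics

random.seed(1)
true_mean = 0.0
n, trials, z = 30, 2000, 1.96  # sample size, number of repeats, ~95% z-value

hits = 0
for _ in range(trials):
    sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
    # Does this sample's interval capture the true population mean?
    if m - z * se <= true_mean <= m + z * se:
        hits += 1

coverage = hits / trials
print(f"long-run coverage: {coverage:.3f}")  # close to 0.95
```

Each individual interval either contains the true mean or it doesn’t; only the long-run capture rate is ~95%.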

Error propagation is different altogether. Different formulae, and they also depend on what operations you are performing on your data.

Generating an error bar from a large collection of predictions from different models, and even, within each model, varying the initial conditions, is an ad hoc method to generate error intervals. It seems supremely naive to believe that varying these things will happen to capture the uncertainty in the accuracy of the coefficients and values of the model parameters, for any coefficient or parameter value that itself possesses some non-negligible and varied amount of uncertainty associated with it.

Even in a bivariate linear regression model, Y = B1X1 + B2, there is uncertainty in the prediction of Y (y’) and uncertainty in the estimate (b1) of the B1 coefficient and the estimate (b2) of the B2 intercept and, often times, even uncertainty in the observations (x) of X used to generate the model in the first place.

Suppose we sample from a linear system. We don’t know it, but the X’s in our model are all appropriate in explaining Y. Good for us so far. But we don’t know what the exact values for X are. So, we sample Y, we also sample X1 to Xk, we then crunch the numbers (do the regression) and come up with the estimates (b1 to bk) of the coefficients (B1 to Bk). Thus, we now have a model. The accuracy of our measurements of Y and X1 to Xk (and, normally, the appropriateness of our X1 to Xk in explaining Y, but, again, here we are assuming they are appropriate) will help determine how well this model actually does in explaining and predicting Y. Y is unknown, as are (probably most of) the true values of X – presumably Time (year) would be one of them. Our measurements of Y and X1 to Xk (y and x1 to xk) are, for the most part, all we have, and we based our model off of the measurements. There are uncertainties in the measurements. We don’t know the direction of those errors or their magnitude (offsets, as I think is being used above), because we don’t know the relation of the measurements to their true values. These errors will propagate as the model is run iteratively, being fed its own outputs as inputs at each iteration.
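The iterative propagation just described can be sketched with a toy linear system whose growth coefficient is estimated with a small measurement-driven error (all values invented for illustration): even a 1% slope error compounds when the model is fed its own outputs.

```python
true_slope = 1.02   # hypothetical true per-step multiplier
est_slope = 1.03    # slope estimated with a small measurement-driven error

x_true = x_model = 1.0
for _ in range(50):          # iterate both systems 50 steps
    x_true *= true_slope
    x_model *= est_slope     # model fed its own output each step

# The small coefficient error compounds geometrically
rel_error = abs(x_model - x_true) / x_true
print(f"relative error after 50 steps: {rel_error:.1%}")
```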

Tweaking the estimated values of X1 to Xk and b1 to bk to generate different estimates of Y (y’) is an ad hoc attempt to quantify this additional uncertainty in X and Y through ‘empirical’ simulation.

Pat Frank well-approximates the model temperature outputs using a simplified linear equation. He then focuses on the effect of cloud coverage on solar insolence (if memory serves) and (presumably) uses error propagation formulae to quantify the effect of this uncertainty in the estimate of temperature.

There is either a theoretical/mathematical explanation for why error propagation does not apply, or there isn’t and the modellers’ technique for evaluation is gravely misguided.

Mark - Helsinki

Pat Frank October 23, 2017 at 9:36 am
Nick Stokes, “Pat Frank claims that the error arising from cloud uncertainty alone should accumulate to an extent of ±16°C by 2100.”

No, I don’t Nick. You’re proposing that ±16°C is a physically real temperature.

It’s an uncertainty statistic. An ignorance measure. It’s not physical error.

You’ve made mistakes 2, 6, 11 and, implicitly, 12.

You’ve many times now demonstrated knowing nothing about physical error analysis. Now it’s many times plus one more.

_________________________

What is it they say Pat, a little knowledge is…. 😉

At least Nick might run off now and try to understand physical error analysis, seems the sort that does not like understanding things 🙂

Mark - Helsinki

* Does not like Not understanding things.. heh, wish I could edit my stupidity instead of posting again 🙁

“the effect of cloud coverage on solar insolence”
Busy old fool, unruly sun

Much as I hate to chip in in support of both Nick and aTTP, they are giving you accurate information. If the models were wrong in the ways described above… they would be “more” wrong and it would be very obvious to even the most committed warmist modeller. All models are wrong; it’s inherent in modelling. Some are really, really wrong. But most of the ones in active use are not. I would agree that the current crop run hot, and I’m not a massive fan of Zeke’s recent work trying to show that they don’t. But we have to apply healthy scepticism and critical thought to all of this. We cannot push that all to one side because we simply like the sound of what’s being said. Mosher, to his previously sceptical credit, makes that point often. He sometimes, at least recently, doesn’t take his own advice. But I guess we are all guilty of that.

Depending on your nationality there’s always the PNAS route to publishing. Pal reviews can cut both ways.

Sunsettommy

Cracker,

Assumption

“a thing that is accepted as true or as certain to happen, without proof.”

Guess

“estimate or suppose (something) without sufficient information to be sure of being correct.”

Meanwhile you keep playing word games while I keep saying they are junk; you never disputed that they are junk.

I stated:

“Crackers, when they run to year 2100, they are indeed wild guesses, since there is ZERO evidence to support it; you are playing word games here. They are unverifiable; you can’t run a hypothesis on it since most of it is far into the future, thus it qualifies as wild guesses.”

and,

“How can anyone think wild guesses to year 2100 can be considered good science, when most of it is UNVERIFIABLE! Models are a TOOL for research, not to create actual fact-based science, since it lacks real data for the next 83 years. This is what the AGW conjecture is based on, a puddle of unverifiable guesses.”

You have NOTHING to sell here.

You are pathetic.

Gary Pearse

Nick, by eyeball, the spread is eight-plus from the smudge at ~0°C in 2100 to the topmost line steeply exiting the top of the graph at about 2075. And these represent the models that survived the cut. You would still be wrong with a linear model, but more difficult to criticize, had you guys not been charged with the task by Grouchmarxist high-school dropout Maurice Strong (creator of both the UNFCCC and IPCC) to find that burning fossil fuels will destroy the planet, thereby justifying trashing economies and freedoms and having global governance by elites. Models vs observations to date show climate sensitivity to be at most ~1, but this takes the scare out of rising CO2.

I’m thinking we should crowd source a large fund and place a bet that with the collapse of the Paris agreement we will not achieve a rise of 1.5C going gangbusters with fracking oil and gas, burning coal, making concrete, etc. If we haven’t got over halfway there by 2050 we declare a win and make the fund available to third world economies for developing cheap reliable electricity generation. Honesty in temperature collection would need some resources and oversight.

"Nick, by eyeball, the spread is eight-plus from the smudge at ~0°C in 2100 to the topmost line steeply exiting the top of the graph at about 2075."
The spread for each scenario is much smaller. The fact that scientists don’t know what will be done about GHGs and have to cover the range of possibilities has nothing to do with error propagation. But there is a real test of PF’s ridiculous errors. ±15°C would be about ±9°C in the 30 years since Hansen’s prediction. Now we quibble about small fractions of a degree difference in scenarios, and another small fraction that might be a transient for El Nino, but there is nothing like a 9°C error.

Pat Frank

Nick Stokes first thinks uncertainty is physical error (mistake #6), and then effortlessly moves on to suppose it’s a physical temperature instead (mistake 11).

Nick’s self-contradictory assignments also implicitly embrace mistakes 2, 4 and 12.

Pat Frank

Clyde Spencer it’s even worse than that, because the cloud forcing error is inherent in the model and is systematic.

That means one never knows the most probable value.

Pat Frank

ATTP supposes that model precision is a measure of reliability.

Mistakes 1 and 3.

Pat Frank

blunder bunny wrote, “they would be ‘more’ wrong and it would be very obvious to even the most committed warmist modeller.”

Not correct. GCMs are tuned to give a reasonable projection. That practice hides physical error and side-steps uncertainties.

Pat Frank

Mark – Helsinki, I can’t offer a rationale for it all. 🙂

Pat Frank

RW, I can’t add anything to your thoughtful post, but can mention that

Vasquez, V. R., and W. B. Whiting (2006), Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods, Risk Analysis, 25(6), 1669-1681, doi: 10.1111/j.1539-6924.2005.00704.x.

assess random and systematic errors in nonlinear numerical models and recommend propagating systematic model error as the root-sum-square.

The precedent of that paper, by the way, encouraged me to make Risk Analysis my first journal for submission. The rest is history. 🙂

Pat Frank

Nick Stokes, “But there is a real test of PF’s ridiculous errors. ±15°C …”

That’s not physical error, Nick.

Mistakes 4, 5, 6 and 11, and probably 12 implicitly.

Well done. 🙂

Jarryd Beck

Why don’t people understand uncertainty? They taught us that in first-year physics. I could easily make a model that only ever has one outcome, but if it propagates an uncertain value then the error bars will be huge by the end. That doesn’t mean my model will ever show that; assuming it would conflates model precision with uncertainty. The error bar means that my model could be wrong by that much. Of course if your model is wrong it won’t tell you; that’s the whole point of error bars.

RW

Pat Frank, thanks for the reference. I have downloaded it and will check it out.

John Dowser

From ..and Then There’s Physics October 23, 2017 at 9:36 am

“you’re also suggesting that if you ran the models many times (with different initial conditions and using different parameter values) you would get an overall result that was not representative of the uncertainty.”

But if the model did indeed propagate the suggested systematic physical error “throughout”, it *will* be noticed. The current models are not sufficiently taking into account non-linear effects of known modelling errors. This then causes the accuracy of the model to decrease rapidly with each time-step and explains perfectly the issues seen today comparing measurements with runs of 10-20 years ago.

Many climate scientists seem to make the same mistake simply because they continue to apply tools without allowing rigorous review of the validity of using those tools that way. This is a larger systematic *human* error in that particular field. And it’s not the first time in recent history but certainly becoming the most costly. And the cause of it lies within underlying role of politics, money and emotion, which has grown into something big to “fail”. The cure here is “back to basics”: re-examination of the toolbox itself.

Pat Frank

Jarryd Beck, thank-you. 🙂

It seems to me that training in climate modeling completely neglects physical error analysis. Not one climate modeler I’ve encountered has a clue about it. And they’re often hostile to it.

WTF

I like his self-declared hero status after his sixth rejection, obviously due to the corrupt system and fear of what the analysis would unleash – no other explanation possible here.

MarkW

The fact that the insiders circle the wagons when criticized is proof that the criticism is meritless.
Gotcha.

Mike Maguire

“Imagine what real Meteorologists, who do short term modeling for weather prediction in the next few days know how quickly short term predictions can quickly spiral out of reality. I see them adjusting their forecasts daily,sometimes even in hours, as new information comes in,but can still be waaaaay off anyway”

This is a big reason why us real operational meteorologists (for 35 years) have such a high % of skeptics vs in other sciences. We must constantly reconcile the forecast with realities. Quickly adjust based on models that also quickly dial in new/fresh data and come out with a new scenario that can sometimes look much different than the previous one………..with errors/changes often growing exponentially with time.

Individual ensemble members of the same model can look completely different beyond a week. Different models in week 2 can have very different outcomes, not just regionally but in the position of many large scale features that define the pattern.

However, despite this, climate models are much different, and they are not as affected by the random, chaotic short-term fluctuations in initial conditions that can never be captured perfectly and that lead to exponentially growing errors with time.

For instance, if the amount of solar forcing in a climate model was too high/low, one would not expect it to result in output/projections that amplify exponentially over time. It would remain pretty much constant. There would also be potential negative/positive feedbacks but they would be limited and probably not greater than the error from the solar forcing being too high/low.
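That expectation can be illustrated with a toy zero-dimensional energy-balance step (the restoring parameter and all numbers are assumed purely for illustration, not taken from any GCM):

```python
# A constant bias in the forcing shifts the equilibrium by a fixed amount;
# it does not amplify exponentially, because the restoring term damps it.
LAM = 1.2        # assumed restoring/feedback parameter, W/m^2 per K
STEP = 0.1       # assumed time-step factor (dt/C)

def equilibrate(forcing, steps=500):
    t = 0.0
    for _ in range(steps):
        t += STEP * (forcing - LAM * t)
    return t

t_true = equilibrate(3.7)          # "correct" forcing
t_biased = equilibrate(3.7 + 1.0)  # forcing 1 W/m^2 too high
offset = t_biased - t_true         # settles near 1.0/LAM, a constant offset
```

The biased run ends up warmer by a fixed offset; the gap between the two runs does not grow without bound.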

Another difference. With weather models, we change them/equations every several years or so to make potential slight improvements, with experimental models constantly being run and compared to the existing models……..with mixed results.
I am not involved in modeling, but it seems clear that certain models are superior to others, especially when it comes to handling particular atmospheric dynamics. However, the gatekeepers of all models seem committed to making improvements to their models vs justifying keeping the current one(s).
Skill scores for different time frames are constantly tracked, and accountability/performance is well known and acknowledged based on the blatantly obvious, non-adjusted statistics, for all to see.

I don’t see this being the case for climate models. Adjustments have lagged well behind the reality of observations screaming out loud and clear that they are too warm. Anyone with a few objective brain cells can see that global temperatures are not increasing at the rate of model projections. If it takes an El Nino spike higher in global temperatures to get up close to the model projections ensemble mean for instance, instead of treading along the lower baseline of the range for a decade, then the models are too warm.

There can be no scientific justification to continue with those same models. They need to be adjusted. Wishing and hoping and having decades before needing to truly reconcile models with reality because you are convinced the equations are right and the atmosphere will come around is not authentic science……….it’s just a tool to be used for something other than authentic climate science.

Pat,
Thank you very much for this excellent article, the work and well thought out discussion. I may not agree entirely with everything but believe you make some great points and it deserves to be read/published………even if the gatekeepers don’t agree with all of it.
One wonders if they disagreed with just as much but it supported the CAGW narrative, if it would have been published.

Sunsettommy

WTF,

Pat referred to a nice post Willis Eschenbach made a few years ago, which YOU should visit, that materially supports the main point Pat makes here. Here is a useful quote from Willis:

” Willis Eschenbach
May 16, 2011 at 12:01 am

Steve McIntyre has posted up R code for the analysis I’ve done, at ClimateAudit.

The main issue for me is that the climate model isn’t adding anything. I mean, if you can forecast the future directly from the forcings, then there’s no value-added. A good model should give you something that you can’t get from a simple transformation of the inputs. It should add information to the mix.

But the GCMs don’t add anything new, they just spit the forcings out in a slightly different form.

Now, you could say that the model is valuable because it allows us to calculate the variables of lambda and tau … except that each model comes out with a different value of those two.

The main problem, however, is that we have nothing to show us that the underlying concept is true, that forcing actually controls temperature linearly. So that means that the different lambdas and taus we might get from the model may mean nothing at all …

w.”

https://wattsupwiththat.com/2011/05/14/life-is-like-a-black-box-of-chocolates/#comment-661218

Imagine people trying to model chaos with linear functions… using ZERO real data, just not-yet-existing data of the future…

Ha ha ha ha ha…………..

Sunsettommy

Mike, I wasn’t trying to denigrate Meteorologists over their prediction being wrong in my city, just trying to point out that even short-term predictions based on REAL data can STILL be off from the forecast target.

You wrote,

“This is a big reason why us real operational meteorologists (for 35 years) have such a high % of skeptics vs in other sciences. We must constantly reconcile the forecast with realities. Quickly adjust based on models that also quickly dial in new/fresh data and come out with a new scenario that can sometimes look much different than the previous one………..with errors/changes often growing exponentially with time.”

The big difference is that you use real, updated data regularly to adjust the forecast with, while the IPCC creates a spaghetti of climate models using a lot of assumptions about forcings we know little about, and says we can make a forecast far into the future with significant confidence.

The whole thing is absurd!

Mike Maguire

Sunset,
I never considered your comment as denigrating meteorologists. Just the opposite, a compliment with regards to how we are reality based in using models based on their usefulness.

I’ve busted at least hundreds of forecasts… it’s part of the job. The best busted forecast is the one that gets updated the quickest. I was on television for 11 years, and that means that thousands of people see the face and person who busts forecasts, and you hear about it.

In the earliest years, I hesitated to update as quickly, because I believed the models when I made the first forecast and sort of hoped they would revert to the previous solution when they diverged the wrong way.
I also showed overconfidence because of too much trust in models.
The reality is that you can be the best model data analyst on the planet, but if the model is wrong, it doesn’t matter… you will be wrong.
With experience, you learn to be more skeptical of certain model tendencies. With so many more models and ensembles available, there is an enormous opportunity to consider potentially different scenarios.

In the 1980’s, most of us just used one (or 2) operational model and went with whatever it showed.

Duster

Reading the “climategate” emails, the “corruption” is well documented. I would not regard every single individual with bias as corrupt, since they also display expectation bias. Trenberth’s assertion that there must be something wrong with the data tells an entire story in one brief sentence. Other emails, such as Jones indicating that papers critical of model results and methods need to be suppressed (not published) rather than addressed substantively, are also revealing. The “corruption” may initially have been due more to “noble cause” fixation than to economic bias, but once economics and university and agency policy enter the picture, the result can be outright corruption. Any of the journals could have published Dr. Frank’s paper and then left the podium open for actual discussion and demonstration of any mistake he might have made. Not doing so looks unscientific, and outright faith-based rather than grounded in scientific argument.

Pat Frank

WTF, ad hominem comment.

If you can’t appraise the manuscript and the reviews you have nothing worthwhile to offer.

So far, you’ve offered nothing more worthwhile than a view into your character.

Forrest,
I think that is pretty easy. Run a climate model many times with different initial conditions, and show that the range of outputs diverges as suggested by Pat’s proposed error propagation.

Forrest,
As far as I’m aware, they have. There is some uncertainty (i.e., running a model with different initial conditions does indeed produce a different path/output) but they do not show the output diverging as suggested by Pat Frank’s analysis. We expect the equilibrium state to be constrained by energy balance and so it is very hard to see how it could diverge, as suggested by Pat Frank, without violating energy conservation.

Forrest,

I am also not convinced that an energy balance could reasonably be expected to produce a steady state as you seem to suggest. The earth’s geological history suggests that climate is anything but steady state.

I wasn’t suggesting that the equilibrium state should be the same at all times; I’m pointing out that it should tend towards a state in which energy is in balance (i.e., energy coming in matches energy going out). The reason it has changed in the past is because things have happened to change the energy balance. The Sun’s output isn’t constant. Our orbit around the Sun can vary. Volcanoes can erupt. Ice sheets can retreat/advance (often due to orbital variations), greenhouse gases can be released/taken up, etc. However, the state to which it will tend will be one in which the energy coming in matches the energy going out.

So, if someone wants to argue that the range of possible temperature is 30K (as appears to be suggested by Pat Frank’s error analysis) then one should try to explain how these states all satisfy the condition that they should be in approximate energy balance (or tending towards it).

As far as I can tell that is the controversy.

Yes, that is the controversy. Pat is essentially arguing that something that would produce an offset should be propagated – at every timestep – as an error. This is not correct, which should be pretty clear from Nick Stokes’s recent comment with the output from climate models.

AndyG55

I see that “nophysics” has very little comprehension of error propagation.

Why is that not a surprise?

Little errors GROW to be big errors… that is the way the climate change mantra works !!

pbweather

in response to ATTP,
This argument is seriously flawed.

I think that is pretty easy. Run a climate model many times with different initial conditions, and show that the range of outputs diverges as suggested by Pat’s proposed error propagation.

Just like shorter-range EPS global weather models, the outturn is constrained to within realistic climatic values… otherwise they do indeed blow out into a massive range of error. Climate models will be no different, but the constraint range means error propagation is limited with each time step.

Latitude

Seems to me that since the models are blatantly wrong… the offsets, forcings, whatever, are cancelling each other out… either way, you end up with a linear result that exactly matches CO2… something anyone could do with a ruler.
First problem seems to be getting modelers to admit that…

but then they are handicapped from the get-go… they are having to backcast to a fake temp history in the first place

forcings add (aerosol forcing is negative). by 2016 the anthro GHG forcings add to a CO2-equivalent of 489 ppmv.
https://www.esrl.noaa.gov/gmd/aggi/aggi.html

paqyfelyc

489 ppm CO2eq of anthropo forcing? …
current level of CO2 is ~400 ppm, meaning that without human action Earth would “enjoy” 89 ppmv less CO2eq GHG forcing. +2 K per CO2 doubling is also -2 K per CO2 halving, hence -2 K for the effect of going from 400 to 200, another -2 K for going from 200 to 100, etc. Let’s stop here, although the theory says we should keep going.
So the theory says that without human GHG, Earth’s temperature would be no less than 4 K below the current level. Remember that the LIA was only 1 K below current (so says the IPCC), so imagine the effect.
I say: LOL !!!
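The halving arithmetic in that comment can be made explicit with the usual logarithmic relation ΔT = S·log2(C2/C1), using the 2 K per doubling sensitivity assumed above (a sketch of the comment’s own numbers, not a claim about the true sensitivity):

```python
import math

S = 2.0  # K per CO2 doubling, the sensitivity assumed in the comment

def delta_t(c_from, c_to):
    """Temperature change for a CO2 change under the logarithmic relation."""
    return S * math.log2(c_to / c_from)

step1 = delta_t(400, 200)              # first halving: -2 K
step2 = delta_t(200, 100)              # second halving: -2 K
total_two_halvings = step1 + step2     # -4 K, where the comment stops
```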

paqyfelyc claimed – “THERE IS a line of code that says ‘this much more CO2 gives this much less heat loss (aka warming)’”

there is a line of code, a well-honed equation with evidence to back it up, that uses CO2’s radiative forcing (which is not warming), at the tropopause, not the surface.

because it’s a fact that CO2 absorbs IR, and a fact that the earth emits IR. it’s not difficult to understand, with a model or equations, why that means more CO2 means more warming.

“because it’s a fact that CO2 absorbs IR, and a fact that the earth emits IR. it’s not difficult to understand, with a model or equations, why that means more CO2 means more warming.”

You’re ignoring the water vapor, which has 10 or 20 times the energy content, with a temperature sensitivity at sea-level air pressure and temperature. And it does what it wants.

micro: water vapor certainly isn’t ignored in climate models.

but water vapor in the atmosphere only changes when the temperature first changes; then it’s a feedback.

“but water vapor in the atmosphere only changes when the temperature first changes; then it’s a feedback.”

Bzzzzzzzz! Wrong.
Do you live someplace that you get dew at night?

Oh, it’s a feedback, of about -35 W/m^2.

A lot more than CO2’s forcing.

paqyfelyc

“radiative forcing” is Orwellian newspeak. Indeed CO2 radiates (as does any matter…), and it is that real radiation that should appear in the equations, not some “forcing”.

it’s not difficult to understand, even without a model or equations, why more CO2 means more RADIATION into and out of the atmosphere and less radiation getting directly from Earth to space. Whether, and if so to what extent, this results in warming (or even cooling!) is much more questionable.

“Whether, and if so to what extent, this results in warming (or even cooling!) is much more questionable.”

This is what my work addresses. Specifically, cooling under clear, calm skies. This is the only condition that really matters. But that’s another argument for another time.

What I found was that surface cooling rates adjust themselves: as the air gets near the dew point, water vapor condenses, and that sensible heat supplies a significant portion of the energy radiating to space. At dusk this was cooling the surface at 3 or more degrees F/hr, but an hour or two later it can be near zero, with still 5 hours of dark left and still a 100°F (the other night here) temperature difference that has to be radiating to space.

You can see this everywhere just by logging RH, dew point and air temp: under clear skies the temp stops falling some nights, and you can measure an 80-100°F temperature differential with an IR thermometer, and it isn’t cloudy either.

Everyone assumed it was just reaching equilibrium; it is not. This is the biggest “discovery” in climate science in 100 years, because it shows us that water vapor has been actively regulating temps, not GHGs.

Oh, and CS is just the ratio of the two cooling rates times the 1.1 C/doubling for CO2, so for the location that got measured it’d be about 1.1 C/3, roughly 0.37 C/doubling.

Frankly, I should get a Nobel Prize for this.

paqyfelyc

@micro6500
Your work makes sense, so much so that I don’t see anything new in it. Of course atmospheric water is a major heat buffer that prevents temperature from going down as long as there remains water vapor to turn into liquid water, and hence to compensate for heat escaping away through radiation. I doubt very much this deserves a Nobel, or Captain Obvious would already have been awarded one (but who knows? Al Gore and Obama got one, so with the right political connections…).
Even “climate scientists” know that, although I suspect they don’t care. The word “dew” doesn’t even appear in the description of the NCAR Community Atmosphere Model (CAM 3.0); the only water movements they care about are evaporation and cloud formation.

But I figured out it was a negative feedback to CO2. Show me anyone else who has proof of that.

But you’re right, it was stupidly obvious. People just assumed it was something else. I recognized it for what it was: the end of the CO2 panic.

paqyfelyc says – “radiative forcing” is orwellian newspeak. Indeed CO2 radiates (as just any matter…), and that’s the real radiation that should appear in the equations, not some “forcing”.”

RF comes from solving the two-stream equations, which are obtained from applying energy conservation and the Planck law to the atmosphere.

now i think it’s the two-stream equations that appear in the models, and not the RF relations. see, for example, equations 4.229 & 4.230 in this model description:

http://www.cesm.ucar.edu/models/atm-cam/docs/description/description.pdf

“RF comes from solving the two-stream equations, which are obtained from applying energy conservation and the Planck law to the atmosphere.”

The problem then is that they are either doing the wrong terms or leaving the big one out. The assumption that CO2 adds is incomplete: it adds, but water vapor drops by nearly as much as was added; it is the negative feedback that is either unknown or ignored. And it only does so for part of the night; averaging over a whole day hides the fact that it varies.

paqyfelyc

@crackers345 October 25, 2017 at 8:10 am

No, RF comes from the difference between two virtual numbers: the modeled radiation with [anything], and the same without. This has the same value as a seller pretending you gained $30 on a thing you paid $70 for, because he pretends its price should have been $100, or an official pretending he made $10M in savings while he spent $110M instead of the previous year’s $90M, because without the savings he would have had to spend $120M. Pure bovine outgoing matter, which I wouldn’t buy if I were you.

You’ll find numerous instances of the word “forcing” in the model.

Just after one of them I found this extract:
“the large warm bias in simulated July surface temperature over the Northern Hemisphere, the systematic over-prediction of precipitation over warm land areas, and a large component of the stationary-wave error in CCM2, were also reduced as a result of cloud-radiation improvements”
Which translates to:
“the model can fit the elephant as needed; it has more than enough parameters to ‘improve’”

BTW you still didn’t react to my comment
https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2644237

paqyfelyc claimed – “No, RF comes from the difference between two virtual numbers: the modeled radiation with [anything], and the same without.”

difference of what?

this paper calculates RF; it describes their methods:

>> We use the Spectral Mapping for Atmospheric Radiative Transfer code, written by David Crisp [Meadows and Crisp, 1996], for our radiative transfer calculations. This code works at line-by-line resolution but uses a spectral mapping algorithm to treat different wave number regions with similar optical properties together, giving significant savings in computational cost. We evaluate the radiative transfer in the range 50–100,000 cm−1 (0.1–200 μm) as a combined solar and thermal calculation.

Line data for all radiatively active gases are taken from the HITRAN 2012 database. Cross sections are taken from the NASA Astrobiology Institute Virtual Planetary Laboratory Spectral Database http://depts.washington.edu/naivpl/content/molecular-database. <<

B. Byrne and C. Goldblatt
http://onlinelibrary.wiley.com/doi/10.1002/2013GL058456/pdf

“Line data for all radiatively active gases are taken from the HITRAN 2012 database. Cross sections are taken from the NASA Astrobiology Institute Virtual Planetary Laboratory Spectral Database http://depts.washington.edu/naivpl/content/molecular-database.”
B. Byrne and C. Goldblatt
http://onlinelibrary.wiley.com/doi/10.1002/2013GL058456/pdf

It’s worthless, unless you don’t want to know what it’s doing. Now if they ran it over 24 hours and included H2O, you’d see H2O changing, negatively, in response to the increase.

But they leave that out.

Funny how they all seem to leave that out.

ps – RF is calculated at the tropopause, not the surface or TOA

paqyfelyc

crackers345
We are talking about radiative forcing, and you make a long (and boring) quotation about … “radiative transfer”. WTF ? Do you think these are the same?
And you still didn’t reply to my comment https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/comment-page-1/#comment-2644237

Phoenix44

Even if the climate settles into an equilibrium state, nothing says that equilibrium must look like a constant temperature; why should it? Nothing in the climate ever stops changing, because it cannot ever do so. One of the biggest problems with modeling the climate is knowing what the starting point is. Get one parameter wrong by a little, and your projections can be wildly wrong.

“Get one parameter wrong by a little, and your projections can be wildly wrong.”

climate models are “spun up” so they start in an equilibrium state. see http://www.oc.nps.edu/nom/modeling/initial.html

paqyfelyc

And why would it start in equilibrium, when the Earth is supposed to be an out-of-equilibrium system (that’s what the Gaia hypothesis is all about, as you probably don’t know)?

paqyfelyc

@ aTTP
What are the mathematical conditions for “if we ran two simulations with different [whatever source] perturbations [aka “forcings” in climate newspeak] (but everything else the same), this wouldn’t suddenly mean that they would/could diverge with time; it would mean that they would settle to different background/equilibrium states”?
Answer: a stable, non-chaotic system that can be treated through perturbation analysis.
You assume 1) equilibrium with null forcing, and 2) that forcing will just offset the equilibrium by some finite and calculable amount.
The first is obviously false regarding climate, since it varies wildly with zero forcing, as the true chaotic system it is. The second assumption is “not even wrong” when the first isn’t true.
So your objection just means the “climate” you are modelling is from some other world.
Your pseudonym is usurped.

“An uncertainty only propagates if it applies at every step”

This is pure comic gold!

MarkW

If the inputs to a function are uncertain, then the output of the function will at best be equally uncertain.
In reality, every time you perform an operation on uncertain data, you increase the uncertainty.
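The standard first-order rules behind this claim can be sketched in a few lines (the input values are illustrative only):

```python
# Sums/differences combine absolute uncertainties in quadrature; products
# combine relative uncertainties in quadrature. Either way, the result is
# never more certain than the inputs.
import math

def add_u(ua, ub):
    return math.sqrt(ua ** 2 + ub ** 2)          # uncertainty of a +/- b

def mul_u(a, ua, b, ub):
    rel = math.sqrt((ua / a) ** 2 + (ub / b) ** 2)
    return abs(a * b) * rel                      # uncertainty of a * b

u_sum = add_u(0.3, 0.4)                  # 0.5: larger than either input
u_prod = mul_u(10.0, 0.3, 5.0, 0.4)      # ~4.27 on a product of 50
```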

And since this is solving simultaneous differential equations by time step, where you have to allow all nodes to reach numerical stability prior to the next step, each of these nodes carries the uncertainty into the next iteration. And they are modeling an abstraction of the real system.
I’ll point out I spent 15 years as a simulation subject-matter expert, covered about a dozen simulators, and created models and circuits that got checked out and reviewed by engineers who had actually built the real thing and tested it extensively on a lab bench, including simulators that operate the way GCMs operate. I also designed a chip for NASA GSFC, the fastest design for them at the time.

whiten

MarkW
October 23, 2017 at 9:12 am

If the inputs to a function are uncertain, then the output of the function will at best be equally uncertain.
In reality, every time you perform an operation on uncertain data, you increase the uncertainty.
———–

Maybe I am wrong, and also missing your point, with this simplicity of mine, but just for the sake of it.
There is a “100% certainty” with these models… they all produce warming in a very significant correlation with CO2… A SIGNIFICANT AND CLEAR CORRELATION BETWEEN THE WARMING TREND AND THE CO2 TREND in all of these simulations… “100% certain”, as far as I am aware.

Also, as far as I am aware, these models are not set up or made to do that; they just do it… there is not any line of code that “says”: “you get this much CO2, give me this much warming”, or something like that… and besides, as per my understanding of these models, they do not actually produce any “detectable” quantity of warming as caused by CO2… as strange as that may seem.
Correlation does not necessarily mean causation; it still needs confirmation and some kind of validation, even in the case of the GCMs, even when and where it may seem from the outset to be so obvious.

cheers

paqyfelyc

@whiten
THERE IS a line of code that says “this much more CO2 gives this much less heat loss (aka warming)”. If there weren’t, CO2 wouldn’t appear at all.
The truth is, this sort of code cannot prove the assumption; it can only prove that it is wrong. And it does, fairly well.

Crispin in Waterloo

Everyone, listen to MarkW.

whiten

paqyfelyc
October 23, 2017 at 1:18 pm

I have not much choice but to agree with you there wholly… in principle.

Fairly well in the prospect. 🙂

considering that it could be proved at some point.

thanks.

cheers

Clyde Spencer

…and Then There’s Physics,

You suggested, “Run a climate model many times with different initial conditions, and show that the range of outputs diverges as suggested by Pat’s proposed error propagation.”

Actually, that has been done: it is illustrated in the ‘spaghetti graph’ above supplied by Nick Stokes. One of the most critical input parameters is the assumed, and unknowable, RCP. I have not seen a similar presentation for the other assumptions about all the input parameters whose values are known imperfectly even for the present, let alone the future. I have rarely seen estimates of the albedo with a precision greater than 2 significant figures. What would the outputs of ensembles look like if a reasonable range of albedos were used as initial conditions? When we start varying ALL the inputs, one at a time, that will give us a better idea of how they may influence the total output. They might even come close to Frank’s upper-bound uncertainty.
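The one-at-a-time sweep suggested here can be sketched as follows (toy_response is a hypothetical stand-in for a model, and the parameter ranges are invented purely for illustration):

```python
# Vary one input at a time across its plausible range, holding the others
# at base values, and record how much the output moves for each input.
def toy_response(albedo, forcing):
    return forcing * (1.0 - albedo) * 10.0   # arbitrary illustrative model

base = {"albedo": 0.30, "forcing": 3.7}
ranges = {"albedo": (0.29, 0.31), "forcing": (3.5, 3.9)}

spread = {}
for name, (lo, hi) in ranges.items():
    outs = []
    for value in (lo, hi):
        args = dict(base)
        args[name] = value
        outs.append(toy_response(**args))
    spread[name] = max(outs) - min(outs)  # output range from this input alone
```

The per-input spreads give a first, crude ranking of which imperfectly known inputs matter most before attempting a full joint sweep.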

Pat Frank

Right, ATTP. You say a plus/minus root-mean-square uncertainty statistic is a positive sign physical offset error.

It’s not. (+/-) does not equal “+”. I know it’s a hard concept, but do try.

You’ve made mistake number 7. And that’s over and over, for you.

You also show no understanding of the difference between physical error, which can be known, and statistical uncertainty, which is an ignorance metric.

The first requires the observation as a test against a prediction.

The second conditions a prediction where the observation is not known.

You don’t get that distinction here. You’ve never gotten it in any of our conversations.

I rather doubt you’ll ever figure it out.

Crispin in Waterloo

±1

very hard to see how it could diverge,
============
The result (future) only converges over a narrow range of conditions even if the energy is identical.

For example. Hot land and. Cold ocean vs cold land and hot ocean. The energy is the same but the climate is not. The nonlinearity of the system allows both possibilities to occur. Or at least to remain outside of current mathematics to calculate any more than we can predict the next roll of the dice.

PureAbsolute

I’m a super layman here; however, ATTP’s statement cried out to me. How is a solar forcing not applied at every step? If there is extra heat in step 1, then step 2 will proceed from that extra heat. Of course, we know that extra heat will be radiated out to some extent. Is that a linear process? Does the propagation from those errors not become cumulative also? Every joule not released back into space also accumulates.

So while I agree with your basic premise — the errors have to be added at every step — I disagree with your disagreement — the errors *do* add at every step.

talldave2

No… each step has physical uncertainties. There is no “background,” just a series of steps, each of which contributes the potential for a certain amount of error.

I don’t know why this is so hard to understand. Consider the operation of moving a wheel 1 mm. Each time you perform the operation, you miss by a little bit. Some of those errors cancel out, but over the 1000-step process of moving the wheel 1 m the total possible error increases at each step.

If we ask “where is the wheel after 1000 steps?” we would have to qualify the answer with the total possible physical error in the process to give a true estimate of position. You can’t just run a bunch of simulations and say “Look! they converge near 1m!” That’s a different question.

I don’t know that this quite renders models totally useless, but it certainly demonstrates some important limitations.
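The wheel example can be simulated directly (the per-step error sigma and trial count are assumed, purely for illustration):

```python
# Monte Carlo of the 1000-step wheel move: each step intends 1 mm but
# misses by a small Gaussian error. Errors partly cancel, yet the spread
# of final positions still grows like sqrt(N) * sigma, not zero.
import math
import random

random.seed(0)
N, SIGMA, TRIALS = 1000, 0.01, 1000   # steps, mm error per step, trials

finals = []
for _ in range(TRIALS):
    pos = 0.0
    for _ in range(N):
        pos += 1.0 + random.gauss(0.0, SIGMA)
    finals.append(pos)

mean = sum(finals) / TRIALS
variance = sum((x - mean) ** 2 for x in finals) / TRIALS
spread = math.sqrt(variance)          # close to SIGMA * sqrt(N), ~0.316 mm
```

The mean lands near 1 m, which is the "they converge near 1 m" observation; the non-zero spread is the separate question of how far any single realization can be from it.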

David A

Yes and also various functions have feedbacks. Thus some errors propagate in this manner. The feedbacks are not necessarily linear either. Each of Nick’s runs in his chart above showing the disparate rcp scenarios also has error bars, widening the top and bottom of the existing spread.

michel

“This has already been explained to you numerous…”

I draw everyone’s attention to this common rhetorical trick of speech. The attempt is rhetorically to position the speaker as the expert and teacher, the addressee as ignorant and naive.

Examples of usage from other contexts:

It has already been explained to you repeatedly that there were no camps or penal colonies under Stalin, and it is unlikely that this attempt will be any more successful than previous attempts. The allegation was invented by right wing anti party conspirators.

It has already been explained to you repeatedly that there was no famine under Mao…..

It has already been explained to you repeatedly that eating cholesterol raises blood cholesterol…

It has already been explained to you repeatedly…..

No, it hasn’t. What has happened is that someone has asserted these things. They have not explained repeatedly.

When the activists in a field commonly resort to this sort of speech, as if by a collective agreement, we know, and have explained repeatedly, that this is a bunch who have abandoned any critical thought and just mouth and parrot the party line.

Mark

And sometimes it’s not a rhetorical trick; it’s just someone that’s frustrated because he really has explained it over and over.

bitchilly

attp, i wonder what the reason for all the messing around with aerosols was? would models do what pat says if initial conditions are not constrained with variable parameters down the line after initiation of the model run?

Philo

I believe the point Pat is trying to make, and nobody seems to get, is that when measurements are used as input to model equations, the measurements have a physically known error range, which should be traceable to a National Institute of Standards and Technology reference. Once that physical error is accounted for, it propagates through each iteration of the model. The reference errors are not a statistic of the measurement but an absolute value of the accuracy, i.e. a temperature could be any number that falls in the error range any time a measurement is made.
The equation y = ax + b, run once, generates an absolute error range of a·(AE) + b·(AE). If x = 100, b = 10 and the absolute error (AE) is 0.0001, the result carries an absolute error on the order of 110 × (0.0001) ≈ 0.011. The next iteration, further extending the calculation, would start with an absolute error of 0.011.

It’s easy to see that the potential error in the calculation can easily balloon after a number of iterations. Based solely on the absolute error of the instrument, the potential error can easily become much larger than any statistical test would suggest.
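That ballooning can be sketched directly. This is a toy illustration of the iteration described above; the values of `a`, `b`, `x0` and the instrument error are made-up numbers, not measurements:

```python
# Toy sketch: an absolute instrument error fed through repeated
# applications of y = a*x + b. All numeric values are illustrative
# assumptions, not measured quantities.
def propagate_abs_error(a, b, x0, abs_err, steps):
    """Worst-case absolute error after each iteration of y = a*x + b."""
    x, err = x0, abs_err
    errors = []
    for _ in range(steps):
        x = a * x + b
        err = abs(a) * err + abs_err   # prior error scaled, plus a fresh error
        errors.append(err)
    return errors

errs = propagate_abs_error(a=1.0, b=10.0, x0=100.0, abs_err=1e-4, steps=5)
# The worst-case error grows monotonically with every iteration.
```

Even with a = 1 (no amplification at all), the worst-case bound grows with every step, which is the commenter's point about iterated calculations.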

Observations are not the same as statistics of observations.

Crispin in Waterloo

aTTP

“Run a climate model many times with different initial conditions, and show that the range of outputs diverges as suggested by Pat’s proposed error propagation.”

This statement reflects a fundamental misunderstanding about what a model run is and what an uncertainty is. The uncertainty is an inherent property of a measurement, or in the case of clouds, an assumption. The uncertainty about a calculated value is not based on the variability of the result or of multiple runs of a model. It is an inherent property of the inputs and propagates through the calculations in a standard fashion according to strict rules. The output of a model might be exactly the same each time with different inputs! That doesn’t have any influence on the propagated uncertainty.

That this fact of mathematics escapes anyone in a position to affect public policy, I find concerning.

Because this math fundamental escapes so many in the modeling field, apparently, here is a primer from Wikipedia: https://en.wikipedia.org/wiki/Propagation_of_uncertainty

“When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate to the combination of variables in the function.”

In the case of clouds, which are poorly characterised, having to choose a forcing without knowing its real effect to better than, say, 4 W/m^2 (σ1) is the same as a measurement with an uncertainty of 4 W/m^2. Picking ‘the wrong number’ does not reduce the uncertainty about what follows. It is not “30 ±4 W/m^2, therefore the true answer is between 26 and 34”. It is that any number selected has an uncertainty of 4 W/m^2. It is 26±4, 34±4, 30±4 or any other number like 10 or 50.

“For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are approximately ± one standard deviation σ from the central value x, which means that the region x ± σ will cover the true value in roughly 68% of cases.” (ibid)

The Resistance formula shows that it is the largest input uncertainties that contribute most of the magnitude of a propagated uncertainty. Thus temperature, which has a relatively low % uncertainty, is minor compared with forcing due to clouds, where the uncertainty is large compared with its value.

I encourage everyone to read the Wiki entry and if it is too difficult, try putting in some numbers using the Resistance formula. It will show you that uncertainty never decreases through a calculation.
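The quadrature rule behind this can be checked numerically. The two sigmas below are illustrative stand-ins only: a small temperature-like uncertainty against a 4 W/m^2 cloud-like one, not values from any model:

```python
import math

# Standard propagation for a sum of independent inputs: uncertainties
# combine in quadrature (root-sum-square). The sigmas passed in below
# are illustrative stand-ins, not model or instrument values.
def combined_sigma(sigmas):
    return math.sqrt(sum(s * s for s in sigmas))

u = combined_sigma([0.5, 4.0])
# The combined uncertainty is dominated by the largest input and can
# never be smaller than it: sqrt(0.25 + 16) is barely above 4.
```

This is the sense in which the largest input uncertainty controls the result, and why uncertainty never decreases through the calculation.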

The author is correct, and the rebuffs from several journals suggest that they accepted his arguments but excused themselves from publishing on the grounds that readers would not be interested in finding the true answers to this important question. It’s their call, but the rejection was not because the work is incorrect. Obviously many responses and reviews were inane. I am not surprised; I continue to be disappointed by the sorry state of climate science.

Pat Frank

ATTP, “The error that you’re trying to propagate is not an error at every timestep, but an offset.”

You’re wrong. I show in the manuscript that long-wave cloud forcing error is systematic and inherent in the models. It enters every single time step.

I’ve explained that to *you* several times, and you never grasp the concept.

Just to elaborate further, adding up calibration errors of various models to get a final number does not make error a constant to be subtracted away from a prediction.

Model calibration errors vary with the model, with the forcings, and in each model with the choices of poorly constrained parameters. Your proposed subtraction is a meaningless exercise.

michel

Yes.

Pat Frank

ATTP, “Pat is essentially arguing that something that would produce an offset should be propagated – at every timestep – as an error.”

No, I’m not. I’m propagating a model calibration error statistic.

Calibration error statistics are not offset errors.

Model cloud error is not an offset error (mere inspection of ms Figure 5, or the figure in Eric Worrall’s comment, is enough to prove the case).

It’s explained in my manuscript.

I’ve explained it to you repeatedly.

You insistently make the same mindless mistake over and over again.

It was wrong the first time you supposed it. It’s wrong this time. It’ll always be wrong.

It will never be right no matter how often you repeat it.

But that won’t stop you, will it.

ATTP
It’s really this simple: the Earth system cannot be accurately simulated unless all the climatic variables are precisely accounted for. The tiniest inaccuracy will garbage the run, and it won’t be known where the mistake was created. Current models rely heavily on inference. They are all utterly unskilled in projecting.

Michael S. Kelly

If anyone is getting a “background equilibrium state” out of a climate model, the model is worthless. The boundary conditions for climate change continuously (TOA solar intensity changes +/- 47 W/m^2 every 180 days, the tilt of the earth changes every 18.6 years, the cloud cover – and hence albedo – changes over a tremendous range hourly, water vapor distribution in the atmosphere – the major climate driver – changes constantly in a manner that isn’t even known, etc., etc.), some in a semi-periodic manner and some randomly. We don’t even know what all of the variables are, but from what we do know, the climate can never reach a state of equilibrium.

Having said that, if you take the position that the models are based on calculating perturbations away from a background equilibrium (a common technique for analyzing non-linear systems), then I think you’ve made Mr. Frank’s case in part. In that case, you have linearized a highly non-linear system, and his error propagation analysis is perfectly correct.
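The perturbation-linearization technique Mr. Kelly describes can be sketched with a toy system. The logistic form below is an arbitrary illustrative choice, nothing climate-specific: a non-linear system dx/dt = f(x) is linearized about an equilibrium x0 where f(x0) = 0, after which small perturbations obey a linear equation:

```python
import math

# Toy illustration of perturbation linearization: near an equilibrium x0
# with f(x0) = 0, perturbations evolve by d(dx)/dt ≈ f'(x0) * dx.
def f(x):
    return x * (1.0 - x)            # arbitrary non-linear right-hand side

def fprime(x):
    return 1.0 - 2.0 * x            # its derivative

x0 = 1.0                             # equilibrium: f(1.0) == 0
lam = fprime(x0)                     # lam = -1 here: perturbations decay

def linearized_perturbation(dx0, t):
    """Perturbation about x0 under the linearized (hence linear) dynamics."""
    return dx0 * math.exp(lam * t)
```

Once the system is treated this way, the dynamics of the perturbation are linear by construction, which is the regime where linear error propagation applies.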

palc wrote – “BTW you still didn’t react to my comment
https://wattsupwiththat.com/2017/10/23/propagation-of-error-and-the-reliability-of-global-air-temperature-projections/#comment-2644237”

it’s completely wrong, and shows you don’t know the science.

Henry Galt

Trillion$. The simple number of reasons for rejection.

George Tetley

What is needed is the $trillions to publish as a supplement in the NY Times ? or ?
(following the money trail always leads to the edge of a cliff )

Henry Galt

attp. Nice to be able to comment isn’t it. Under your rock … not so much.

Sheri

No one is allowed to comment in “wrong” ways on said site because it is RIGHT. You know, omniscient. It’s an interesting trait found in most climate change propagandist sites. It used to be that science was smart enough to explain itself and win an argument, but the collective understanding has dropped to where silencing the opposition is the only answer. You remember the Dark Ages, right?

Colorado Wellington

Wellington, if you have a point to make, please make it so that you add something to this discussion.

(It is important to me, since there is a possibility he is using at least two or more accounts here, which is a bannable offense) MOD

Colorado Wellington

Mark: I could not recall at first what the references were about “allowed to comment”. Then I remembered I’ve read something way back but everyone must judge the veracity of the link for himself.

Mod: I’m sorry. I do not know more than what my quick “memory refresher” search found.

Everyone: I care about the actual argument, not who is making it. However, I do consider circumstances like someone preventing an adversarial argument at one’s own site while engaging in it elsewhere (when that applies).

bitchilly

indeed, i live not too far from attp. i may have a word about the moderation on his own site in person.

JWurts

Please, keep us informed

Thanks

JW

Hmm. You identify one reviewer as a Gavinoid. Are you sure you were not observing a Nickoid instead? The plumages are remarkably similar…

NeedleFactory

The two comments above by WO and FG are unhelpful and violate WUWT commenting policy: “those without manners that insult others or begin starting flame wars may find their posts deleted.”

@NeedleFactory – I was not aware that WUWT has a new moderator?

In any case, there is somewhat of a difference between pointing and laughing at an opponent, and viciously attacking them. Not much, but some. For example, you can look at some of Nick’s comments elsewhere here, which, in between his ad hominems, simply prove the point that Forrest makes.

This’ll be interesting. I won’t understand a word but look forward to the debate.

Good response, Hotdog. You will have given some relief from guilt to thousands of skeptics like me who haven’t a clue about the subject nor the time to find out, but who nevertheless will be hoping that this is the definitive moment when the wall of pseudo-academic superiority behind all the modelling nonsense begins to be broken.

“Climate model air temperature projections are just linear extrapolations of greenhouse gas forcing. Therefore, they are subject to linear propagation of error.”

Err No.

The temperature outputs are the result of ALL THE INPUTS.
Those inputs include ALL KNOWN FORCINGS, not just GHGs, but solar, volcanic, land use, etc.
In addition there are feedbacks which cannot be predicted and which are emergent.

Your paper has not been accepted because you are wrong.

Sheri

If there are feedbacks which cannot be predicted and which are emergent, there’s no reason to believe the models in the first place. They could be completely overturned tomorrow by a pesky emergent feedback.

The feedbacks emerge from the models. If you don’t believe the models, you don’t believe the feedbacks.

MarkW

Are you trying to argue that nobody knew about feedbacks until the models discovered them?
Sheesh, you don’t need models to determine that feedbacks exist. Just think for yourself.

Duster

Nick, the key word employed by both Mosher and Sheri was “emergent” feedback. That is, unforeseen “feedbacks”. I’m pretty sure you are quibbling over terminology, but pay attention to the intent instead. Those “emergent” conditions would create unexpected, unmodeled behaviour in the empirical data, and create unanticipated divergences between modeled results and measured empirical conditions. If those “emergent” influences tend to have a bias that cannot be accounted for, then the mean model results and empirical data will diverge over time, creating “hiatuses” or “pauses,” possibly even long term states like Little Ice Ages.

whiten

Sheri
October 23, 2017 at 5:34 am

But models do only one significant feedback…..temp to CO2, or maybe the other way around, where other feedback have no any potential or detectable effect, as actually is supposed to be under a RF warming ever increasing….the main standing point of AGW, for not saying the entire point of AGW….

So an RF warming can not actually be messed up by other feedback, especially when in fast up going trends….

So, the question: What actually ate all that supposed AGW RF expected warming!?
A “dog” feedback” perhaps!

cheers

So an RF warming can not actually be messed up by other feedback, especially when in fast up going trends….
So, the question: What actually ate all that supposed AGW RF expected warming!?
A “dog” feedback” perhaps!

Water vapor lets it go to space until the surface cools off, then drains energy stored in the atm column and as water vapor to slow cooling once air temps near dew points.
https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/

Duster said – “Those “emergent” conditions would create unexpected, unmodeled behaviour in the empirical data, and create unanticipated divergences between modeled results and measured empirical conditions”

no. the feedbacks emerge as a result of the models’ underlying equations, viz. of the physics incorporated into the model.

example: ice-albedo feedback. basic warming from CO2 melts ice, so less sunlight is reflected back to space and so the ocean & air warm more.

this emerges from models, because they continually calculate ice extents. they assume ice has a certain albedo, and ocean another. thus, when ice melts, more warming occurs, beyond that of CO2 alone.


And if you implement it like you described, it’s wrong. Once the incident angle gets under 20 degrees or so, open water has nearly the same albedo as ice, so there’s only about a quarter of the day around solar noon where the feedback is positive. The rest of the time, if the sky is clear, the open water is a huge radiator to space.
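The grazing-angle reflectance point can be illustrated with the Fresnel equations for a flat water surface. This is an idealized sketch with an assumed refractive index of about 1.33; real seas (waves, foam, whitecaps) behave differently:

```python
import math

# Fresnel reflectance of a flat water surface for unpolarized light.
# Flat-surface idealization, n ≈ 1.33 assumed; not an ocean-albedo model.
def water_reflectance(incidence_deg, n=1.33):
    ti = math.radians(incidence_deg)
    if ti == 0.0:
        return ((n - 1.0) / (n + 1.0)) ** 2   # normal-incidence limit
    tt = math.asin(math.sin(ti) / n)          # refraction angle (Snell's law)
    rs = (math.sin(ti - tt) / math.sin(ti + tt)) ** 2   # s-polarization
    rp = (math.tan(ti - tt) / math.tan(ti + tt)) ** 2   # p-polarization
    return 0.5 * (rs + rp)

# Near-normal sun: water is dark (about 2% reflected). Near grazing
# incidence, reflectance climbs steeply toward ice-like values.
```

Running it shows reflectance of roughly 2% at small incidence angles, rising past 50% near 85 degrees, which is the qualitative behavior the comment appeals to.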

bitchilly

micro, i notice you never get a response from nick or mosher to your posts. you should get your work written up and submitted.

you should get your work written up and submitted

I’m not very good at that kind of stuff. And I know I would just get jerked around until either I got tired of it or they did. But it’d never be published while it matters.
So I published it at wordpress, and code and reports on sourceforge. I’m sure it’s been seen by more people through social media than some pay to play journal.

And sooner or later it’ll be the end of this mess.
I just want it called the “Crow Effect” lol!!!!

David A

Crackers, what you are describing is certainly NOT linear.

joelobryan

I have seen Gavin tweets where he fully acknowledges the models do not model the feedbacks correctly if at all. One example, The ENSO pseudocycles are clearly chaotic responses that feed into GMST +/-, but the models are helpless on it.

RW

Nick Stokes quibble over terminology? He lives for that sort of thing.

whiten

micro6500
October 23, 2017 at 2:06 pm

Thank you micro.

Appreciated a lot.

From my point, almost all comments of yours appreciated in my part.
But if I have not got this wrong…hopefully, as only a superficial pass at your work there…..it seems mostly, as far as I can tell, as a further detailed and very interesting at that, about the Trenberth “Iris”……which may explain how the earth and atmospheric response works in relation to RF forcing in short term.

Please do forgive me if I happen to have misunderstood your work…..but it seems to be very important in away to try and explain the non linearity of reality of climate versus the linearity propagated by the GCMs…

Please do let me know, if you would not mind, if I happen to have misunderstood your point……..no body is perfect..:)

Thanks.

Cheers

But if I have not got this wrong…hopefully, as only a superficial pass at your work there…..it seems mostly, as far as I can tell, as a further detailed and very interesting at that, about the Trenberth “Iris”……which may explain how the earth and atmospheric response works in relation to RF forcing in short term.

I’m not sure it operates like an iris, more like a turbo button.
I think what’s happening is the sensible heat from the cooling atm column, including all the water vapor that is condensing (and then re-evaporating), keeps the surface warm, near dew point temp until the Sun comes up to store up energy to do it again.

So, more like a bucket of water with a hole in it: after it lowers air temp to dew point, it opens a spigot that supplements the water level so it doesn’t drop much more than this until the Sun comes up and fills them both back up (all the while the one is still draining).

micro6500 commented – “Water vapor lets it go to space…”

what the he11 does this mean?

Means you do not understand how the surface cools at night

Phoenix44

Except that they have been shown not to be. You can make assertions about how models are supposed to work and how modelers think they work, but unless you are a unique set of humans/modelers that never makes errors, you are going to have to prove what you say is right.

As for emergent feedbacks from models, please. The idea that your model is so brilliant that it is showing us things we didn’t know, rather than producing errors, is the sort of arrogance that gets modelers a really, really bad name.

putz

kyle_fouro

Mosher contradicts Mosher

https://imgur.com/a/DEZYf

Pat Frank

Steve Mosher, the linearity of GCM air temperature projections is demonstrated in dozens of examples right there in front of your eyes.

In the manuscript and the Supporting Information document.

I doubt you’ve even looked at either, though; much less read them, much less understood them.

Which might explain your denial of the demonstrated.

“Here’s the analytical core of it all:

Climate model air temperature projections are just linear extrapolations of greenhouse gas forcing. Therefore, they are subject to linear propagation of error.”

It’s the core of the nonsense. For a start, they aren’t “extrapolations” of forcing. You can find a curve fit, by fiddling parameters. So? That is true of many things. It doesn’t mean that the mechanism of the model is wrong or trivial, or even that its error propagation should follow the curve fit.

The statement that “therefore” they are subject to the linear propagation of error is just assertion. It has no basis.

“The volcanic forcings are non-linear, but climate models extrapolate them linearly.”
Gobbledegook. What does non-linear here even mean? With respect to what? But again, climate models don’t “extrapolate” them. They admit them as a forcing in the set of equations, and give an approximately proportional response. Not unexpected.

From the figure captions
“The points are Jim Hansen’s 1988 scenario A, B, and C. All three scenarios include volcanic forcings.”
Actually no. Scenario A did not include volcanics. Pat’s argument proceeds regardless.

TimTheToolMan

Nick writes

For a start, they aren’t “extrapolations” of forcing. You can find a curve fit, by fiddling parameters. So? That is true of many things.

Including say clouds in the models. Fitted but meaningless.

AndyG55

“It doesn’t mean that the mechanism of the model is wrong or trivial”

But you KNOW that it is wrong, don’t you NIck.

All that bluster to hide KNOWN errors.

So sad. !!

“And Jim Hansen’s scenarios assumed NO volcanic forcings?”
You don’t read, and you don’t know anything. Scenario A assumed no volcanic forcings. B&C had forcings, clearly reflected in the featured figure.

RW

I didn’t take Pat Frank’s argument to imply the first part of what you wrote. I took his curve fit (as you put it) to be a simplified model of what the model’s output. His model of the models does a pretty good job of doing that. Using the curve fit, he then propagates a specific error to generate the uncertainty at each step of his model. The implication is that the more complex models are not properly propagating error.

The notion that Frank’s critique is inapplicable ‘because d-d-d-different models!’ is gibberish baloney.

Team up with other ‘error propagation is not applicable’ folk around here and write a rebuttal guest post.

Pat Frank

RW wrote, “I didn’t take Pat Frank’s argument to imply the first part of what you wrote. I took his curve fit (as you put it) to be a simplified model of what the model’s output.”

Thank-you RW. That’s a very succinct recapitulation.

Honestly, it’s a relief to read the remarks of folks here who get the analysis. Thank-you all. 🙂

RW

crackers345, so yes one could conceivably hard code the Earth’s orbital path into the code as an influence on solar insolation. I am not sure where the controversy lies here, and as I have said elsewhere I don’t know climate models.

RW

^ wrong subthread

bitchilly

which scenario was closest to reality and how many volcanoes erupted while that reality played out ?

Pat Frank

Nick Stokes wrote, “For a start, they aren’t “extrapolations” of forcing.”

The emulations demonstrate GCMs do exactly that. In any case, any “projection” is an extrapolation of conditions into future states. So, you’re wrong empirically and in principle, Nick, and all in one sentence.

You can find a curve fit, by fiddling parameters. So?

So, it means that climate model air temperature projections linearly extrapolate forcing to project air temperature. That’s all they do.

The consequence? Linear propagation of error. And that’s QED.

It doesn’t mean that the mechanism of the model is wrong or trivial, or even that its error propagation should follow the curve fit.

The demonstration has nothing to do with “the mechanism of the model.” The model is a black box.

The demonstration has to do with model output. It’s shown to be linear. That’s the only thing necessary to show, to justify linear propagation of error.

The linear equation successfully emulates the air temperature projections of any GCM. That makes it completely appropriate to use for propagation of projection error.

What does non-linear here even mean?

It means what the Gavinoid implied it means: inflective departure of forcing from a smooth curve. Take a look at the graph. Forcing does that when volcanoes enter the picture.

They admit them as a forcing in the set of equations, and give an approximately proportional response. Not unexpected.

With the bolded phrase, you’ve inadvertently validated my analysis, Nick. That’s twice now. Thanks again. 🙂

Scenario A did not include volcanics.

Scenario A included volcanics prior to 1990. The historical set when viewed from 1988. It’s right there in the graph.
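For readers following the emulation argument in this subthread, its structure (a projection linear in forcing, with a per-step uncertainty carried forward in quadrature) can be sketched in a few lines. Everything numeric here is a placeholder: `coeff` stands in for a fitted linear coefficient and `step_sigma` for a per-step calibration uncertainty; neither is taken from the manuscript:

```python
import math

# Sketch of the two-part structure under discussion. All numbers the
# caller supplies are illustrative placeholders, not manuscript values.
def emulate_projection(step_forcings, coeff):
    """Temperature anomaly as a running linear extrapolation of forcing."""
    t, path = 0.0, []
    for f in step_forcings:
        t += coeff * f
        path.append(t)
    return path

def propagated_sigma(step_sigma, n_steps):
    """Uncertainty after n steps, each contributing an independent sigma,
    combined in quadrature: it grows as sqrt(n)."""
    return math.sqrt(n_steps) * step_sigma
```

The key property being argued over is visible in `propagated_sigma`: if every step contributes its own uncertainty, the envelope widens with the number of steps regardless of how the central projection behaves.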

David Cosserat

I doubt it will be a definitive moment, that was supposed to be Climategate which was all too easily swept under the carpet.

However, there is always room for debate on any subject and having it in the open is beneficial to us all, sceptics and alarmists.

As for me not understanding the content, I’m not a scientist, nor even well educated, but I long ago learned that the climate debate is more than just science. Besides, after 60 years of observation, I don’t see any meaningful change in the planet’s climate other than my garden plants growing better than they ever have.

BallBounces

“the climate debate is more than just science” Egg zactly. Make into a poster and plaster on every wall.

Colorado Wellington

I doubt it will be a definitive moment, that was supposed to be Climategate which was all too easily swept under the carpet.

As a system, the socio-political climate complex has shown high stability and resilience built on strong across-the-board negative feedback to any forcing.

They don’t even need a carpet.

I tried to download the paper and just got a complaint from the host about an ad-blocker, and promotional material.

Urederra

The problem is on your side. I downloaded it without any problem, no pop-ups, pop-unders or anything. Here is a screenshot:

http://oi64.tinypic.com/15q4rx4.jpg

There is a 97% chance you may probably have a virus in your computer. Have you visited any naughty website?

No, I tried to download it directly, like you would any other file. When I went through their website, resisting their $5 blandishments, it came through.

john harmsworth

I modeled an attempt to download it and it worked every time.

sy computing

I modeled an attempt to download it and it worked every time.

LOL

AndyG55

diddums !! do you need a hanky?

Sheri

Interesting. I do adblock and had no such problem…..Maybe the site simply doesn’t like you?

Duster

Nick, you did notice that there are two download options. Only one asks for money. The other is labeled “Slow Download.” It really isn’t that slow unless you are using a 1990s modem for your connection.

PF gave a link, I right-clicked, said save link as, and got a whole lot of gibberish html.

John Mauer

Nick, it downloads fine. If you really don’t have a copy, I can put it in DropBox for you.

As I noted above, when I tried to save with “save link as”, I got that nonsense. When I got through to the page displayed by Atheok, I was able to download it, as I noted above. I have read it, quoted sections, and shown images of text from it.

Jan PC Lindstrom

To me it seems like an overall confusion between error and uncertainty? They are not the same according to the GUM (the Guide to the Expression of Uncertainty in Measurement). An error can be corrected (calibrated) if known. An uncertainty cannot.

Pat Frank

You got a crux issue, Jan PC. 🙂 I have yet to encounter a climate modeler who understands that difference.

Another point raised in GUM is that random errors become systematic when they are propagated forward into a calculation. That’s another form of systematic error thoroughly ignored in climate modeling.
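The error-versus-uncertainty distinction being discussed can be made concrete with invented numbers (purely illustrative; no real instrument is described):

```python
# Invented numbers, purely to illustrate the GUM distinction above.
reading = 25.30                        # an instrument reading

# A *known* error (e.g. a calibration offset) can be corrected away:
known_offset = 0.20
corrected = reading - known_offset     # the corrected value

# An *uncertainty* cannot be subtracted; it attaches to the result.
sigma = 0.15
# The best available statement is: corrected value ± sigma,
# i.e. 25.10 ± 0.15. No arithmetic removes the ± 0.15.
```

The offset vanishes after correction; the sigma survives every subsequent calculation, which is the distinction the thread keeps circling.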

willhaas

The computer simulations in question have hard coded in that an increase in CO2 causes warming. Hence these computer simulations beg the question as to whether CO2 causes warming, and are therefore of no value. In terms of atmospheric physics there is plenty of reasoning to support the idea that the climate sensitivity of CO2 is really zero.

“The computer simulations in question have hard coded in that an increase in CO2 causes warming.”
Evidence?

I’m asking for evidence that “have hard coded in that an increase in CO2 causes warming”. Do you have any?

AndyG55

“I’m asking for evidence that “have hard coded in that an increase in CO2 causes warming””

Do you use GISS as a hind-cast?

There is your answer.

Or are you that naive? really ???

TimTheToolMan

Nick writes

Evidence?

Well we know for a fact that adjustable parameters are changed to set the required radiative imbalance in the models. How’s that?

The scenarios with more GHG’s lead to warmer model projections…
comment image

This is either a coincidence or a pretty good clue that the models “have hard coded in that an increase in CO2 causes warming.”

‘This is either a coincidence or a pretty good clue that the models “have hard coded in that an increase in CO2 causes warming.”’
No. It suggests that the GHE physics means CO2 would cause warming, and that they correctly model the physics. But “hard-coded”. That is just made up.

On that logic you could say that computation could never reveal anything. Because if it predicts anything, then the result must have been hard-coded in.

AndyG55

“It suggests that the GHE physics means CO2 would cause warming,”

So you ADMIT that ERRONEOUS science is programmed into the models.

FINALLY you are waking up to reality !

WELL DONE , Nick.

AndyG55

Nick, you do realise that you just admitted to every word Forrest has said, don’t you ?

So FUNNY !!.

Try a new pair of socks.. those one don’t seem to be so tasty for you. !!

George Tetley

Evidence ( or dense? )
Unless Nick wrote it, it ant !

So if an economic model predicts 2% inflation, they must have hard-coded 2% in?

Sheri

If CO2 isn’t hard coded in the models, then what is the point?

Latitude

“If CO2 isn’t hard coded in the models, then what is the point?”

……we have a winner!

Vanna has some great parting gifts for the rest of you…….

Sunsettommy

Come on Nick, don’t be THAT stupid:

“Evidence?”

David made the OBVIOUS reply to your, …. he he….question.
comment image

Colorado Wellington

Nick Stokes October 23, 2017 at 2:15 am
“The computer simulations in question have hard coded in that an increase in CO2 causes warming.”
Evidence?

Nick Stokes October 23, 2017 at 3:38 am
So if an economic model predicts 2% inflation, they must have hard-coded 2% in?

I have friends and neighbors who still think that climate alarmists are arguing in good faith.

Unbelievable.

whiten

David Middleton
October 23, 2017 at 3:04 am

The scenarios with more GHG’s lead to warmer model projections…
—————-
Not trying to be picky, but at best the above is still no more than an assumption, even in the case of the GCMs, where it may seem so obvious and “certain”.

It still needs some kind of validation… otherwise it remains an assumption in principle.

Considering the strong correlation of the CO2 ppm trend with temps in GCMs, and their connective relation, it is not hard to consider that detecting “who jumps first to increase” (the temps or the CO2 ppm) in any GCM scenario could clarify this. I never knew of any such trial ever attempted or performed as a way of validating the assumption!

For as long as this point remains unclarified, the assumption remains, at best, an assumption, no matter how strange it may seem to consider it that way under the circumstances.

cheers

talldave2

“If CO2 isn’t hard coded in the models, then what is the point?”

Sometimes they code in everything else, and then infer CO2 or GHG as what’s left. It’s a valid technique as long as you’re completely omniscient on every other factor involved.

RW

More obfuscation from Nick. A model IS hard coded. The result predetermined. A model produces different results because it is initialized differently by the user, provided different values for the parameters by the user, or provided different parameters by the user, or perhaps because the code, for whatever insane reason, uses a random number generator. Where you draw the line between one model and the next is arbitrary. Comparing predictions to observations is the only way to definitively test a model. A given model is refuted when its prediction does not match observation. When a model is based on the observations it is used to predict, it is overfitted, liable to be modeling more noise than it should, and will underperform with new observations.

“A model IS hard coded. The result predetermined.”
This gets to silly quibbling. Of course you can say that any computer program is hard coded, and computers do what they are told. So when Deep Blue wins at chess against Kasparov, that was hard-coded. Gets a bit silly, but technically true. It doesn’t mean that the programmer put in the tricks that brought Kasparov undone.

There is a popular line of articles at WUWT about chaos and the unpredictability of GCMs (and CFD, and weather, for that matter). It’s true that GCMs approach attractor solutions that can’t be worked out from initial conditions without that computation. As with CFD, you learn things from computation that the programmers couldn’t have told you.

kyle_fouro

1sky1
I said: “Without water vapour and CO2 and the other minor radiative/absorptive gases, the surface would be a rocky waterless planet with a mean surface temperature around that of the Moon.”

You said: “That would be the case only if the Moon had an GHG-less atmosphere of equal density!”

What you said is only true if you subscribe to the minority view that the pressure (hence density) of the GHG-less components of an atmosphere have a warming effect. That is no longer a mainstream sceptical view because the physics involved in this so-called non-GHG warming effect have never been satisfactorily described.

RW

Nick. So I think we agree that this is just weed territory. Having said that, it is a waste of comment space to nitpick at stuff like a claim that CO2 is hard coded into the model. Clearly it is at some level hard coded in the model to increase temperature with CO2, all other things being equal. Just because there are other factors that might swamp that influence out in the model doesn’t mean the comment was worthy of additional scrutiny. It’s borderline troll territory in my view to get into weeds like that. I’m willing to grant intellectual charity to the poster. I’m willing to believe that they are aware that CO2 is not the only factor in these models and that, in fact, depending on some of the other factors, the model could predict reduced temperatures despite increasing levels of CO2.

RW

Nick. No argument from me vis a vis the utility of modelling and running them to see what happens. I’m willing to believe though that there is a mind out there that is sharp enough to foresee what the model will do (or have a pretty good idea) without running it. But for sure, for the rest of us, we need to run the model to see what happens. The deterministic aspect does not hinge on our ability to work out what the model will output though. I don’t know climate models though, so if there is some built-in random number generation (simulated stochastic stuff) then obviously no one would be able to know what the model will output in advance.

RW commented – “A model IS hard coded”

Given the 1/R^2 law of
gravitation, is the orbital
path of the Earth
“hard coded?”

RW

crackers345. If something is hard coded, it is baked into the architecture – the programmer’s code – rather than being a parameter. So, although the level of CO2 itself is undoubtedly a specifiable parameter, what is done with the value is hard coded – i.e. the complex function that outputs temperature, among other things, given the values of many other variables in the function. This is what I take willhaas to be saying when he wrote the bit Nick objected to. Not sure what your question has to do with programming a climate model, but it smells like more quibbling over arbitrary distinctions to me.

RW

crackers345, so yes one could conceivably hard code the Earth’s orbital path into the code as an influence on solar insolation. I am not sure where the controversy lies here, and as I have said elsewhere I don’t know climate models.

To within 1/4 watt/m^2 at TOA, this formula agrees with Leif Svalgaard’s daily recorded TOA values.

TOA_DOY =1362.36+46.142*(COS(0.0167299*(DOY)+0.03150896)) for DOY = 1 on Jan 1 to 365. (Excel format)
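The same fit is easy to carry outside Excel; here is a minimal Python transcription of the commenter’s formula (the fit itself is taken on trust from the comment, not independently validated):

```python
import math

def toa_doy(doy):
    """TOA solar irradiance (W/m^2) from the commenter's Excel fit.
    DOY = 1 corresponds to Jan 1; valid for DOY = 1..365."""
    return 1362.36 + 46.142 * math.cos(0.0167299 * doy + 0.03150896)

# The peak falls in early January (perihelion) and the minimum in early
# July (aphelion): toa_doy(1) ~ 1408.4 W/m^2, toa_doy(186) ~ 1316.2 W/m^2
```

The ~46 W/m^2 swing simply reflects the eccentricity of Earth’s orbit.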

Willhaas,

You say: In terms of atmospheric physics there is plenty of reasoning to support the idea that the climate sensitivity of CO2 is really zero.

I strongly disagree. You are (I hope inadvertently) undermining the mainstream sceptical position.

It is certain that the presence of CO2 in the atmosphere since time immemorial contributes in a minor way to the current mean surface temperature of 15degC (288K), the main (invisible) contributor being water vapour.

Without water vapour and CO2 and the other minor radiative/absorptive gases, the surface would be a rocky waterless planet with a mean surface temperature around that of the Moon, namely -75degC (198K) as determined from the NASA Moon orbiter. In the absence of all such gases, the earth’s atmosphere – reduced to its remaining constituents, Nitrogen and Oxygen, which are not significantly radiative/absorptive (by several orders of magnitude) at earth atmospheric temperatures – would be transparent to outgoing radiation.

Therefore, since CO2 is a minor contributor to the warm world we currently experience, a doubling in CO2 must, in logic, cause some change in mean surface temperature.

The real debate is: how much of a change? Sceptics say, not a lot. Alarmists say, by a dangerous amount. But there is no reason in physics (atmospheric or otherwise) that says the climate sensitivity to changing CO2 is exactly zero. To assert that is to walk straight into the climate alarmist trap…

Your argument applies equally to homeopathy. Fortunately, a “mainstream skeptic position” is an oxymoron.

Sheri

David: I tend to agree. At one time, it was forbidden to say CO2 had no effect because it made skeptics look unscientific. Now, it seems common and almost mainstream here. To say it has “no effect” makes the same assumption as saying it has a great effect—that we know everything there is to know about climate. I can’t see how a real scientist can make a statement to that effect.

Hugs

‘argument applies equally to homeopathics’

Chutzpah you have but you are not a Jedi yet.

paqyfelyc

Actually, Earth radiates as if it were -18 C (255 K), for an average power of 240 W/m^2, while the moon, which has an albedo of only 0.13 (vs Earth’s 0.3), receives and radiates ~295 W/m^2. So this just doesn’t add up to a mean surface temperature of 198K for the moon. Or rather, to make things add up, you have to consider that the moon’s surface is so bumpy that its area is much larger than 4Pi R², but then you cannot compare this temperature from the NASA moon orbiter to Earth’s.
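The round numbers being traded here follow from the Stefan–Boltzmann balance for a uniformly warm sphere; a quick sketch (assuming a solar constant of 1361 W/m^2 and emissivity 1):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant at 1 AU, W/m^2

def absorbed_flux(albedo):
    # mean absorbed flux for a sphere: S0*(1 - albedo) spread over 4x the disc area
    return S0 * (1.0 - albedo) / 4.0

def effective_temp(albedo):
    # blackbody temperature that radiates the absorbed flux back out
    return (absorbed_flux(albedo) / SIGMA) ** 0.25

# Earth, albedo 0.30: ~238 W/m^2 absorbed -> ~255 K (the -18 C above)
# Moon,  albedo 0.13: ~296 W/m^2 absorbed -> ~269 K
# The measured lunar mean of ~198 K sits far below 269 K because the real,
# slowly rotating Moon is nowhere near the uniform-temperature idealization:
# T^4 averaging over extreme day/night contrasts pulls the mean down.
```

This is exactly the gap the two commenters are arguing over: the flux balance and the measured mean temperature are both right, but they answer different questions.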

jaakkokateenkorva, October 23, 2017 at 5:17 am.

You say: Your argument applies equally to homeopathics.

In relation to the totality of ‘greenhouse’ gases, CO2 is a small proportion of GHGs. Some say 25%, others say 5%, but either way it is NOT a vanishingly small proportion. (Yes, it is a vanishing proportion of the atmosphere as a whole, including the non-radiative/absorptive gases, but these do NOT contribute to warming the planet.)

So the warming effect of CO2 is not in any way comparable to the charlatanry of homeopathy. It is real.

I find your ‘oxymoronic’ comment completely incomprehensible…

Thanks Hugs, but yeah. I don’t have enough midi-chlorians to speak on behalf of all skeptics, 97% of scientists etc and, thus, limit myself to writing only my own opinions. Doesn’t prevent me standing by the gas law pV=nRT though. Meaning that the pressure, volume, temperature and amount of a gas are interrelated, irrespective of its composition.

David. The concept of “mainstream skeptic position” is much like “scientific consensus”. In my opinion equally useful at best and oxymoronic at worst.

TA

“It is certain”

It used to be certain that humans were causing the Earth’s atmosphere to cool, back in the 1970’s. Then the climate warmed up and we don’t hear that certainty anymore.

Being certain about something does not necessarily make it true. You are just guessing as to what CO2 is doing in the atmosphere. It may not be adding any net heat to the atmosphere at all. Prove it does.

This skeptic is skeptical you or anyone have any proof to the contrary. “Certain” is not good enough.

Am I hurting the skeptic’s cause by my assertions?

No, the skeptic’s cause is to demand proof of others’ assertions. If there is no proof, skeptics should say so. I’m saying so. Prove me wrong.

Duster

David Cosserat
October 23, 2017 at 9:10 am

… but these do NOT contribute to warming the planet…

While I agree with the basic argument you offered, you do make a mistake. If, for instance, you descend from Jerusalem to the Dead Sea, you experience sensible warming as you descend. Since the atmosphere has the same composition, the difference in temperature is not due to CO2, methane or water vapor.

RWturner

The proponents of GHG planetary temperature say the surface temperature of a planet is mostly due to the atmospheric composition giving the greenhouse effect and irradiance, naysayers say it’s almost completely due to the density and molar mass of the atmosphere, as well as irradiance on the planet.

Of course there are models showing the latter is correct, whereas the former has never been empirically demonstrated in the real world.

Take a look at the Galilean Moons in order of descending atmospheric density: Io, Callisto, Ganymede, Europa. Now, I’ll list these in order of descending average temperature: Io, Callisto, Ganymede, Europa (coincidence?). Now, the irradiance is quite similar for all these moons and only one has a significant amount of greenhouse gases comprising its atmosphere — Callisto.

Now, can someone tell me why Io has a higher average temperature than Callisto, despite having an atmosphere almost entirely comprised of sulfur compounds whereas Callisto has an atmosphere of CO2? Hint: Io’s surface pressure is orders of magnitude higher than Callisto’s.

Why wouldn’t quantized molecular vibrations induced by back radiation have a significant impact on planetary surface temperature? It’s because heat transfer in an atmosphere is dominated by unquantized kinetic energy (molecular collisions that theoretically occur every 10^-7 s at Earth’s surface pressure) and convection. Furthermore, molecular vibrations are quantized, you simply can’t add more vibrational energy to a molecule if it is already in its energized state. Molecules already in their energized state are transparent to the radiation that already put it into that energized state. Trying to make planetary surface temperature about quantized molecular vibrations is like trying to flood the sea by spitting in the ocean.

bitchilly

of course it has an effect. whether that effect leads to x amount of warming over x amount of time is an entirely different subject. remember negative feedback, i don’t see the oceans boiling off any time soon. nor turning to acid. david attenborough, what can i say, another childhood hero ruined.

1sky1

Without water vapour and CO2 and the other minor radiative/absorptive gases, the surface would be a rocky waterless planet with a mean surface temperature around that of the Moon.

That would be the case only if the Moon had a GHG-less atmosphere of equal density! What is overlooked here is the fact that moist convection–not LW radiation–is the principal mechanism of heat transfer from Earth’s surface. Were the Earth totally dry, its GHG-less atmosphere would still be warmed largely by convection, making the surface temperature problem far more thermodynamically complex than that of the Moon.

RW Turner

You said: The proponents of GHG planetary temperature say the surface temperature of a planet is mostly due to the atmospheric composition giving the greenhouse effect and irradiance, naysayers say it’s almost completely due to the density and molar mass of the atmosphere, as well as irradiance on the planet.

You justify your position by citing data concerning the Galilean Moons.

You say: Now, can someone tell me why Io has a higher average temperature than Callisto, despite having an atmosphere almost entirely comprised of sulfur compounds whereas Callisto has an atmosphere of CO2? Hint: Io’s surface pressure is orders of magnitude higher than Callisto’s.

No I can’t. But how reliable is your data and how reliable your conclusions? If you have such a dramatic example of non-GHG warming, surely you have investigated the data in depth and could further enlighten us and give us some references? Please!

1sky1

I said: “Without water vapour and CO2 and the other minor radiative/absorptive gases, the surface would be a rocky waterless planet with a mean surface temperature around that of the Moon.”

You said: “That would be the case only if the Moon had a GHG-less atmosphere of equal density!”

What you said is only true if you subscribe to the minority view that the pressure (hence density) of the GHG-less components of an atmosphere have a warming effect. That is no longer a mainstream sceptical view because the physics involved in this so-called non-GHG warming effect have never been satisfactorily described.

RWturner

“That is no longer a mainstream sceptical view because the physics involved in this so-called non-GHG warming effect have never been satisfactorily described.”

What are you talking about, never described? It’s been described for hundreds of years by the first chemists and expanded ever since. It’s such basic physics that I believe they start teaching it in primary school these days.

PV = nRT

https://www.researchgate.net/publication/317570648_New_Insights_on_the_Physical_Nature_of_the_Atmospheric_Greenhouse_Effect_Deduced_from_an_Empirical_Planetary_Temperature_Model

RW: that paper doesn’t use physics to
describe the claimed non-GHG heating,
it does curve fitting only.

it’s also very wrong, which is why
it couldn’t get published anywhere
except in
a
predatory journal.

Clyde Spencer

jaakkokateenkorva,

Anyone who has used a hand pump to inflate a tire, and has been careless enough to touch the outlet valve, has received an immediate affirmation of adiabatic heating by compression, as specified by the Universal Gas Law. However, it will be safe and comfortable to touch the same spot an hour later.

The Earth has had a thick atmosphere for billions of years. Because a hot body radiates in proportion to the fourth-power of the absolute temperature, the atmosphere would have been radiating at a much higher rate when first formed. Clearly, both the original molten surface and the atmosphere have cooled during the intervening billions of years. We still see local heating of an air mass when it rises over a mountain range and plunges down the leeward side, or when an air mass dives off a plateau into a depressed basin. However, in both cases, if the air mass became stationary, it would radiate the excess heat and cool down. That is, what we observe today is largely potential and kinetic energy being converted locally into palpable heat energy. If any of the parameters in the equation T=pV/nR are CHANGED, one can expect a change in the temperature. However, the major changes took place far enough back in time that most of the initial temperature increase has leaked away. The major role of the atmosphere, with respect to heating, is to be a transport medium for water vapor and clouds.
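The tyre-pump effect Clyde describes is the standard adiabatic-compression relation; a minimal sketch using the textbook formula for an ideal diatomic gas (the particular pressures and temperatures below are illustrative, not taken from the comment):

```python
def adiabatic_temp(t1_kelvin, p1, p2, gamma=1.4):
    """Temperature after reversible adiabatic compression:
    T2 = T1 * (p2/p1)^((gamma - 1)/gamma); gamma ~ 1.4 for air."""
    return t1_kelvin * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Compressing air from 1 bar to 3 bar starting at 293 K (20 C) gives
# roughly 401 K -- a one-off temperature spike that, as the comment
# says, simply leaks away once the compression stops.
```

The key point in the comment survives the sketch: compression heating is a one-time event for a fixed change of state, not a steady-state source of warmth.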

RW Turner,

Concerning my challenge to you (October 24, 2017 at 12.19am), I have been eagerly awaiting your response which sadly has not so far been forthcoming. So I did some research on Io and Callisto. I Googled Wikipedia (mainly) and Universe Today and assembled the following data:

Io
1. Mean orbit radius: 421,700 km
2. Mean diameter: 3643 km
3. Surface area: 41,910,000 km2 (0.082 Earths)
4. Atmospheric composition: 90% SO2
5. Surface atmospheric pressure: 1 nanobar [this from Universe Today]
6. Mean surface temperature 110K.
7. Albedo: 0.63
8. Internal heat source: 0.6 to 1.6×10^14 W (global total)*

*Note: Io’s main source of internal heat comes from tidal dissipation rather than radioactive isotope decay, the result of Io’s orbital resonance with Europa and Ganymede. Averaged across the whole surface, the figure given is equivalent to a significant heat flux of roughly 2.6 W/m^2.

Callisto
1. Mean orbit radius: 1,882,700 km
2. Mean diameter: 4820 km
3. Surface area: 73,000,000 km2 (0.143 Earths)
4. Atmospheric composition: 100% CO2
5. Surface atmospheric pressure: 7.5 picobar
6. Mean surface temperature: 134K
7. Albedo: 0.22
8. Internal heat source: Negligible

So, yes, as you said Io has an atmospheric pressure around 2 orders of magnitude greater than Callisto. But at a pressure of 1 nanobar this is still totally insignificant. In fact it is 9 orders of magnitude below earth’s surface atmospheric pressure! Even if a non-GHG such as SO2 were capable of doing the job that you claim it can do, it would not be enough to warm a single flying gnat.
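The arithmetic behind these comparisons is easy to check; a quick sketch using only the figures quoted above:

```python
# Io's internal heating spread over its surface (figures from the comment)
io_area_m2 = 41.91e6 * 1e6            # 41,910,000 km^2 expressed in m^2
io_heat_w = (0.6e14 + 1.6e14) / 2.0   # midpoint of the quoted tidal-heating range
io_flux = io_heat_w / io_area_m2      # mean internal heat flux, W/m^2 (~2.6)

# Surface-pressure comparisons quoted in the thread
io_vs_callisto = 1.0e-9 / 7.5e-12     # 1 nanobar vs 7.5 picobar: ~133x, ~2 orders
earth_vs_io = 1.0 / 1.0e-9            # ~1 bar vs 1 nanobar: 9 orders of magnitude
```

Both pressure claims in the paragraph above check out; only the quoted heat-flux range needs dividing through by the surface area.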

It is obvious that the surface temperatures of Io and Callisto are influenced simply by their respective albedos and, in the case of Io by its significant source of internal energy.

In any case, as a final blow to your theory, it appears that the surface of Io is NOT, as you have claimed, warmer than the surface of Callisto.

So I think your “Io versus Callisto” hypothesis is in total ruins. 🙂

RW Turner,

Re. your comment (October 24, 2017 at 8.46am), the physics involved in the so-called non-GHG warming effect has indeed been described over and over and over ad nauseam. But what I claimed to you was that it has not been satisfactorily described.

The two responses that follow on from yours, from Crackers and Clyde Spencer, say it all…

Crackers is correct – curve fitting alone will not persuade anybody unless the underlying physics is made clear. I am still hoping that my friends N&Z will be able to do this one day, but so far I have not been convinced.

Clyde Spencer is also correct – the tyre pumping analogy is complete crap because it does not represent a steady state flow of energy situation, as is the case with planetary surface temperature elevation.

RWturner

David, I think you found the brightness temperatures, not the true temperatures. I have the surface temperature as estimated by Voyager data at 143K and Callisto at 134K.

https://books.google.com/books?id=SO48AAAAIAAJ&pg=PA331&lpg=PA331&dq=average+temperature+of+io+143+K&source=bl&ots=h95La-5A01&sig=CrraabuweItecqyQr3tDY7t7ivs&hl=en&sa=X&ved=0ahUKEwjZ6uPCkYrXAhUB-GMKHQL0A3gQ6AEIQjAD#v=onepage&q=average%20temperature%20of%20io%20143%20K&f=false

https://www.space.com/16419-io-facts-about-jupiters-volcanic-moon.html

Crackers, the only criticism I’ve ever seen of that paper is the journal it is in and that it’s “simply wrong.” Sounds like a cultist argument. Perhaps you would like to be specific in how the paper is incorrect in its conclusions. It’s like saying the models show a near perfect fit, but I’ll stick with the models that don’t work. You do that, I’ll go with the empirical based models that do work. Furthermore, the paper doesn’t do simple curve fitting, it used physics based models to estimate surface temperatures of rocky planets and then compared that to empirical observations and then investigated why the models did or didn’t work.

“A key entailment from the model is that the atmospheric ‘greenhouse effect’ currently viewed as a radiative phenomenon is in fact an adiabatic (pressure-induced) thermal enhancement analogous to compression heating and independent of atmospheric composition. Consequently, the global down-welling long-wave flux presently assumed to drive Earth’s surface warming appears to be a product of the air temperature set by solar heating and atmospheric pressure. In other words, the so-called ‘greenhouse back radiation’ is globally a result of the atmospheric thermal effect rather than a cause for it”

You have all the time in the world, now go ahead and actually demonstrate how this is wrong.

RWturner

“So, yes, as you said Io has an atmospheric pressure around 2 orders of magnitude greater than Callisto. But at a pressure of 1 nanobar this is still totally insignificant.”

Totally insignificant? Yet Io has a significantly higher surface temperature than Ganymede and Europa has an even lower surface temperature corresponding with its atmospheric density being the lowest of the 4 moons.

This might help…

http://formulas.tutorvista.com/physics/work-done-by-gravity-formula.html

1sky1

David Cosserat:

If the well-known ideal gas law and the attendant adiabatic heating, which gives rise to a ubiquitous atmospheric lapse rate, are not physical explanation enough for you, consider the issue of convection of heat into a non-existent atmosphere. Your projection of Moon-like surface temperatures on a GHG-less Earth is physical nonsense.

1sky1

[I]f the air mass became stationary, it would radiate the excess heat and cool down.

Not so! Unlike a pressurized tire cooling down to the surrounding ambient atmospheric temperature, adiabatic heating applies to ALL parcels of air at a given elevation. There simply is no cooler air surrounding any parcel (unless introduced by advection).

Latitude

“It suggests that the GHE physics means CO2 would cause warming, and that they correctly model the physics.”

no…it means they were hindcast to a period of time when both CO2 and temps were rising

Micro6500 says: It never would have a temp like the moon, it has an atm. Enthalpy at daily Tmax / atm cubic meter of air at sea level is ~38.8kJ/kg/m^3, and drops to 24.9kJ/kg/m^3 at Tmin.

What is it about the thought experiment we are discussing, involving earth with a GHG-less atmosphere, that you now suddenly don’t get? Earlier, I thanked you for supporting me in saying that all the radiation to space would be from the surface, which would be at a comparable mean temperature to that of the Moon. Now you are contradicting yourself. Yes, its GHG-less atmosphere would have a heat content (enthalpy), but this atmosphere would not radiate. It would simply be at a comparable mean temperature to the surface, maintained by conduction/convection between the two.

I’m not sure it would be as cold as the moon; having an atm, even here, might be enough to start the water cycle, as with no GHGs the atm would be hard to cool, and transport might easily allow excursions over 0C – and the equator isn’t at the average temp of the earth anyway.

So my comments were on topic. I believe you are wrong about water on an Earth without GHG’s being ice. At least sometimes at the equator it will be water.

paqyfelyc

You say: Actually, Earth radiates as if it were -18 C (255 K) for an average power of 240W, while the moon, which has only 0.13 albedo (Vs 0.3 Earth’s), receives and radiates ~295 W. So, this just doesn’t add up to a mean surface temperature of 198K for the moon.

1. The earth without any GHGs would be a waterless rocky planet, with a similar albedo to the Moon.
2. Your calculations may not add up to a mean surface temperature of 198K but that is what the NASA Orbiter measured. So think again carefully about what is wrong with your reasoning…not what is wrong with the NASA data. 😊

paqyfelyc

I don’t think NASA’s measurements are wrong. Nor is my calculation (which isn’t mine, anyway!).
I just say you cannot compare the 198K of the moon’s surface to Earth’s surface temperature. You should use the temperature the moon would enjoy if it were perfectly flat, or dulled by an atmosphere so that it behaved as a flat surface (as Earth apparently does).

paqyfelyc

1. The earth without any GHGs would be a waterless rocky planet, with a similar albedo to the Moon.
Well, of course it would, since to remove GHGs you have to remove all water.
Now, would this be true without the GHG effect from water and other gases? No, it wouldn’t. Earth would still radiate as a 255K body (240 W/m^2 in and out) from somewhere around ~10km above the surface, and because of the lapse rate (which has nothing to do with GHGs), surface temp would still be such that liquid water would cover most of the planet.

paqyfelyc,

You are getting muddled. Without any GHGs in the earth’s atmosphere, the earth would not “radiate as a 255K body from somewhere around ~10km above the surface”, because non-GHG gases DON’T RADIATE!

The only radiation would be from the surface. Just like happens on the Moon.

Robert W Turner

I don’t know where the myth of non-GHGs not emitting radiation originated, but I’m pretty sure that everything with a temperature a few degrees above absolute zero is going to emit IR. Besides, how do you propose that an atmosphere with no GHGs loses heat? Convection and collisions will transmit heat within the atmosphere, but how is heat going to exit the atmosphere without emission from these gases?

paqyfelyc

@David Cosserat
As Robert W Turner pointed out:
even an otherworldly non-radiating atmosphere must have some way to lose energy, to compensate for the energy it will get from the surface (condensing water of clouds, for instance). Or else its energy and temperature will forever rise with no limit, which cannot happen.
Liquid water and ice of clouds, dust, etc. will do the job.
So one way or another, the atmosphere would still emit energy according to its temperature, itself set by its altitude.
GHGs probably play some role, so that the apparent emitting altitude would be lower without them, but it won’t be zero.

Robert W Turner commented – “Besides, how do you propose that an atmosphere with no GHGs loses heat?”

it escapes out the top
of the atmosphere

What?
This is the problem, hardly anyone actually understands EM wave propagation.
Non-GHGs don’t radiate. It would cool by IR radiation from the surface to space, based on the SB equations.

Robert W Turner,

You say: I’m pretty sure that everything with a temperature a few degrees above absolute zero is going to emit IR.

An oxygen/nitrogen-only atmosphere (the point of discussion here) would not radiate significantly at earth temperatures. Hence the standard use of the term “non-GHGs” to describe such gases.

You ask: …how do you propose that an atmosphere with no GHGs loses heat?

It doesn’t. As I explained previously to paqyfelyc, all the heat would be lost by radiation directly from the surface to space. Just like happens in the case of the Moon.

And the hypothetical nitrogen/oxygen-only atmosphere would be warm, just like the surface, but would have no way (or need) to lose that heat to space.

paqyfelyc,

You say: …even an otherworldly non radiating atmosphere must have some way to lose energy to compensate for the energy it will get from the surface (condensing water of clouds, for instance). Or else its energy and temperature will forever rise with no limit, which cannot happen.

You are muddled again. The subject of discussion is a non-GHG atmosphere. Such a case usually assumes a waterless earth.

But, to indulge you, if water were to be introduced into such a waterless world, it would be in the form of ice because the surface temperature would be very much less than the freezing point of water (just like the Moon’s). The ice would increase the albedo and therefore further reduce the surface temperature. 🙂

But, to indulge you, if water were to be introduced into such a waterless world, it would be in the form of ice because the surface temperature would be very much less than the freezing point of water

Not necessarily, it would depend on the orbit and Sun.

I’m not sure, if we are warm enough with a nitrogen/oxygen atm, whether water would be ice or not, but there’s nothing stopping such a planet; it wouldn’t have to be ice, and once the water cycle started, it would be a lot like earth.

Micro6500 says: …hardly anyone actually understands EM wave propagation. Non-GHG’s don’t radiate. It would cool by IR radiation from the surface to space based on SB equations.

Thanks for your coherent support. It seems it is not possible to have a sensible scientific discussion with people who have read just enough physics to learn that, yes, “all gases at temperatures above absolute zero radiate”, but who insist that oxygen and nitrogen radiate significantly when it is well established that they do not. Which is why it is standard practice, in the context of the earth’s current atmosphere, to refer to oxygen and nitrogen as “non-GHGs”.

Micro6500 says: Not necessarily, it would depend on the orbit and Sun.

Er…well, yes, true if you introduced water onto a non-GHG version of Mercury, for example!! But I think you slipped a cog… we were discussing a GHG-less earth having a surface temperature similar to that of the Moon. 🙂

So was I. The big difference is we have enough gravity to hold an atm; the moon doesn’t. All we would need is enough solar to get the water cycle running. Without GHGs, maybe we need a closer orbit, but nothing like Mercury, probably not even Venus. So there is a range where you could have a water cycle and no other GHGs. Although that isn’t going to happen anyway, but have at it.

we were discussing a GHG-less earth having a surface temperature similar to that of the Moon

It never would have a temp like the moon, it has an atm.
Enthalpy at daily Tmax / atm cubic meter of air at sea level is ~38.8kJ/kg/m^3, and drops to 24.9kJ/kg/m^3 at Tmin.

Micro6500 says: The big difference is we have enough gravity to hold an atm, the moon doesn’t. All we would need is enough solar to get the water cycle running. Without GHGs, maybe we need a closer orbit, but nothing like Mercury, probably not even Venus.

This is way off topic which is about the earth in its current orbit.

Micro6500 says: It never would have a temp like the moon, it has an atm. Enthalpy at daily Tmax / atm cubic meter of air at sea level is ~38.8kJ/kg/m^3, and drops to 24.9kJ/kg/m^3 at Tmin.

What is it about the thought experiment we are discussing, involving earth with a GHG-less atmosphere, that you now suddenly don’t get? Earlier, I thanked you for supporting me in saying that all the radiation to space would be from the surface, which would be at a comparable mean temperature to that of the Moon. Now you are contradicting yourself. Yes, its GHG-less atmosphere would have a heat content (enthalpy), but this atmosphere would not radiate. It would simply be at a comparable temperature to the surface, maintained by conduction/convection between the two.

Hivemind

My sympathies and best wishes. The warmists really are hoof-deep in the trough and will use any malign tools to protect their grants.

AndyG55

Hey Pat,

you KNOW you are over the target when you start taking flak from the self-appointed small guns of the AGW farce.

WELL DONE. 🙂

Pat Frank

Thanks, Andy. 🙂

Doubting Rich

You’re right about ESS reviewer 2, round 2. However, you were too focussed on the errors in what he said about your paper to notice this hilarious one:

“if the GMST of one year was solely a function of the GMST of the previous year, then GMST would be effectively constant over time.”

Maybe he should talk to Mandelbrot about Z[n+1] = a Z[n] (1 – Z[n])

I know the error is irrelevant in the context, but to think that someone claims to be able to understand modelling does not know about such sequences is … kind of frightening, given the power that has been given to these people.
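The point is easy to demonstrate; a minimal sketch of the logistic map Rich cites, where each value is solely a function of the previous one and yet the sequence is anything but constant (a = 3.9 puts it in the chaotic regime):

```python
def logistic_step(z, a=3.9):
    # Z[n+1] = a * Z[n] * (1 - Z[n]): the next value depends ONLY on the current one
    return a * z * (1.0 - z)

z = 0.4
trajectory = []
for _ in range(20):
    z = logistic_step(z)
    trajectory.append(z)
# trajectory wanders all over (0, 1) -- far from "effectively constant"
```

A one-step recurrence, nothing else, and the output never settles: exactly the counterexample to the reviewer’s claim.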

Pat Frank

Hi Rich — thanks and you’re right I was too focused to notice that.

I did notice that he left “function” undefined though. 🙂

The discussion in the paper about error propagation is just nutty. It says
“In a climate projection of “n” steps, each time step “i” initializes with the climate variables delivered by the “i-1” step.”
OK.
And then the linear fit in Sec 2.2 is described with annual steps. Well, OK, could be anything. It’s just a fit PF has chosen. You could have any step length.

So then he works out the accumulation of variance. It accumulates as sqrt(n), which he then takes to be n=100 (years). That’s where the ±15°C comes from.

But it’s supposed to be the error of the GCM, and they don’t have annual steps. They have steps of about 30min. That is about n=1.75e6 steps in a century, or ±419°C.

There are two things just wrong here:
1) The uncertainty claimed is not step to step. It’s a claimed error in the identification of TCF. That doesn’t switch between ±4 W/m2 every 30 mins, or even every year. But the error is calculated as if it did.
2) The model has physics. If it ever did wander by 15°C or 419°C, that would greatly change energy balance. i/o flux would change to restore.
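The root-sum-square bookkeeping at issue is simple to reproduce; a sketch assuming an independent error of the same size added at every step (the ±1.5 °C per annual step is chosen only because it reproduces the ±15 °C century figure quoted above):

```python
import math

def propagated_uncertainty(u_per_step, n_steps):
    # independent per-step errors add in quadrature: total = u * sqrt(n)
    return u_per_step * math.sqrt(n_steps)

century_annual = propagated_uncertainty(1.5, 100)  # 1.5 * sqrt(100) = +/-15 C
inflation = math.sqrt(1.75e6 / 100)                # ~132x more if the same per-step
                                                   # error is charged every 30 min
# Same arithmetic, same per-step magnitude, wildly different century totals:
# the chosen step length drives the headline number, which is the objection.
```

Nothing in the quadrature rule itself tells you what the "step" is; that choice is external to the statistics.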

AndyG55

“The model has ERRONEOUS physics.”

” If it ever did wander by 15°C or 419°C,”

OMG, you just keep digging your hole deeper and deeper.

“i/o flux would change to restore.”

With that one little clause, you have destroyed the WHOLE AGW meme.

but do you even realise it ?????? 😉

Beautifully done, Nick !!

Whose side are you really on? 🙂

Sheri

His own?

pbweather

The following image is a combination of the ECMWF 15 day and 46 day forecast temperature for the UK. Note how the outcome is constrained within upper and lower bounds. This is done to restrict output to within climatic normal values. What it doesn’t show is that if these constraints were not there, these individual ensemble members would blow out into a massive spread of solutions. I would imagine climate models are similarly constrained in output so as to give a cleaner, more desired outcome. This process restricts time step error propagation…but also allows modelers to steer the output towards a desired outcome.
[image: ECMWF 15/46-day ensemble temperature forecast for the UK]

“Note how the outcome is constrained within upper and lower bounds.”
And notice what they are. Between 14°C and 0°C. It isn’t an artificial computational limit. It’s just physics. There just isn’t enough heat coming in to sustain a temperature above 14°C. And too much to go below 0.

pbweather

In reply to Nick Stokes:
“It isn’t an artificial computational limit. It’s just physics. There just isn’t enough heat coming in to sustain a temperature above 14°C. And too much to go below 0.”

I am sorry, but these constraints are mostly mathematical, not physical. I have run basic models without these artificially applied constraints, and they blow up into wild output in no time. Hell, the American GFS weather model still does this sometimes.

For example, let's say ensemble member number one at time step 2 comes out with a forecast temp of 14 deg C. The model calculations can then make the next time step go warmer or cooler because of the multitude of input variables and their associated errors… if the next time step is warmer, and the next, and the next, then you have a blow-out. They have built-in climatic normal limits to make sure that these ensemble members do not venture outside of climatic normals. The physics of the models may act to some extent to constrain the range of outcomes, but the mathematics doesn't care about that when it comes to error propagation, so artificial climatic-normal constraints must be applied.

Gavin

Nick Stokes October 23, 2017 at 4:03 am
“Note how the outcome is constrained within upper and lower bounds.”
And notice what they are. Between 14°C and 0°C. It isn’t an artificial computational limit. It’s just physics. There just isn’t enough heat coming in to sustain a temperature above 14°C. And too much to go below 0.

That seems to imply that the model does not represent the modeled system realistically enough to constrain itself, and has to have limits set to prevent it “blowing out”, as pbweather describes?

Bob boder

Gavin

And yet the models are based on cumulative forcing and feedbacks; it seems to me total energy input and its resultant effect should limit such a model to within the bounds of what is possible without artificial constraint.

pbweather

Gavin

And yet the models are based on cumulative forcing and feedbacks; it seems to me total energy input and its resultant effect should limit such a model to within the bounds of what is possible without artificial constraint.

Not at all. You have to imagine that each new time step in the model is like the beginning of a new model, and that the input variables into that model are the result of the previous time step's calculations. If the result of the previous time step is an extreme outlier, then the input for the next time step starts with probably unrealistic input variables and from there could blow out into something physically/climatically not possible/likely. Hence weather models set restrictive bounds linked to climatology, not just annually but seasonally.

If this process were not included, you might have a situation, for example, where the model forecasts 35 deg C in the middle of January for the UK. Physically this is within the annual range of possibilities, but seasonally it is not possible or likely. To say that the physics of the models restricts this range is not completely true. It is a mathematical algorithm, checking input and output at each time step against normal climatic limits, that is the main limiting factor.
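The blow-up-versus-clamp behaviour pbweather describes can be sketched with a toy iterative model. Everything here is invented for illustration (the 1.05 gain, the noise level, the 0–14 °C bounds); it only shows how a slightly unstable update diverges without a climatological clamp and stays bounded with one.

```python
import random

random.seed(1)

def step(temp, gain=1.05, noise=0.5):
    # One toy time step: a slightly unstable update plus random error.
    return gain * temp + random.gauss(0, noise)

def run(n_steps, clamp=None, t0=7.0):
    t = t0
    for _ in range(n_steps):
        t = step(t)
        if clamp is not None:          # reset to "climatic normal" bounds
            lo, hi = clamp
            t = min(max(t, lo), hi)
    return t

print(abs(run(500)) > 1e6)                  # True: unconstrained run blows up
print(0 <= run(500, clamp=(0, 14)) <= 14)   # True: clamped run stays in bounds
```

The clamp makes the output look reasonable regardless of what the update rule is doing, which is exactly the point of contention in the thread.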

Tom in Florida

Just wondering: if commenters here were on a mission to be placed in orbit around Saturn, and the path to get there had been modeled to find the fastest and safest method, but the models included programmed constraints on what could happen, would any of you get on board that spacecraft?

Frenchie77

If you need constraints to stay reasonable, then your underlying physics is not well understood. For AGW we know that this can’t be the case, since “the science is settled”.

So prove it, run all the models without constraints and let the data fall where they may…What, have you got something to hide???

Steve Fraser

It would be nice, for once, to see just one of the traces from start to end, with the initial conditions (parameters).

I see that none of the trolls shows the slightest sign of having read the paper.

The overwhelming ridicule has already been achieved, at least from a metrology perspective, merely by the idea of measuring average global outside-air temperature anomalies with 0.1 °C precision and accuracy today. Planetary gas composition and energy-balance anomalies are even funnier. Modelling all these figures decades into the past and future takes it all into a new dimension. It might work in Star Trek, but I doubt they’ve gone that far yet.

Sheri

“I see that none of the trolls shows the slightest sign of having read the paper.” Translation: The speaker is omniscient and the conclusion self-evident, so if you disagree, you obviously did not read the paper.

Few things are self-evident, though in climate science it seems the entire field is treated as self-evident, and therefore anyone who disagrees did not read the material. Sorry, life does not work that way. Nor does science.

Stephen Rasey

@Nick Error accumulates as sqrt(n) if and only if the error is random with mean zero.
If the error is systematic, i.e. embedded in the program logic and function parameters, the error should accumulate faster than sqrt(n), possibly approaching n.
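The random-versus-systematic distinction can be checked numerically. A minimal sketch (the unit sigma and 0.1 bias are arbitrary illustration values): zero-mean random errors largely cancel, so their sum grows roughly as sqrt(n), while a constant systematic bias adds coherently and grows as n.

```python
import math
import random

random.seed(0)
n = 10_000
sigma = 1.0

# Zero-mean random errors: partial cancellation, sum ~ sqrt(n) * sigma.
random_sum = sum(random.gauss(0, sigma) for _ in range(n))

# Constant systematic bias: no cancellation, sum = n * bias exactly.
bias = 0.1
systematic_sum = bias * n

print(abs(random_sum) < 5 * math.sqrt(n) * sigma)   # True: order sqrt(n) ~ 100
print(systematic_sum)                               # 1000.0, i.e. order n
```

With n = 10,000 the random sum is of order 100 while the systematic sum is 1000, an order of magnitude larger, which is the asymmetry Stephen Rasey is pointing at.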

Pat Frank

Thanks, Stephen.

And when one is making a sequential series of calculations, each subsequent calculation depending on the last and each a prediction further out along some coordinate, the systematic uncertainty increases similarly.

When the prediction is a future time state, physical error is unknown and all one has to condition the prediction is accumulated uncertainty.

Clyde Spencer

NS,

You said, “i/o flux would change to restore.” Are you suggesting that prophecies of a Tipping Point and runaway Venus-like conditions are invalid?

RW

Nick’s point 1: Who said anything switches between +/- 4 W/m^2 at any time interval? Pat Frank didn’t. The error reflects a range of possibilities. Nick’s point 2: not all that relevant to whether or not error propagation is applicable to the climate models.

Nick Stokes

Stephen Rasey,
“@Nick Error accumulates as sqrt(n) if and only if the error is random with mean zero.”
RW,
“Who said anything switches between +/- 4 W/m^2 at any time interval?”

He’s saying that successive time intervals contribute independent errors with sd 4. And yes, he’s accumulating them as sqrt(n), as the second figure in the article shows. Here is the section of the paper:
[image: excerpt from the paper]

And why annual, when GCMs have steps of 30 minutes? It comes back to that averaging nonsense. He averaged 20 years of data, and said the average was 4 W/m2/year, not 4 W/m2. If he had expressed it as averaging 240 months of data, he would have taken monthly steps, and compounded the error at sqrt(12) times the rate.
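Nick Stokes’s objection here reduces, on his own premises, to simple arithmetic: if the same per-step uncertainty is applied at every step, the compounded total over a fixed span depends on how many steps you cut it into. A sketch, assuming (as the comment does) a fixed ±4 W/m² applied once per step:

```python
import math

sigma_per_step = 4.0   # the +/-4 W/m^2 figure, applied once per step
years = 20

# Same 20-year span, cut into annual vs monthly steps:
annual_total  = sigma_per_step * math.sqrt(years)
monthly_total = sigma_per_step * math.sqrt(years * 12)

# The monthly version compounds sqrt(12) times faster:
print(round(monthly_total / annual_total, 3))   # 3.464
```

This is the "sqrt(12) times the rate" factor in the comment above: under a fixed per-step statistic, the answer depends on the chosen step length.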

Pat Frank

Nick Stokes, “He’s saying that successive time intervals contribute independent errors with sd 4.”

No, I’m not. I’m saying that uncertainty increases, not that error increases.

The error of a future time state is entirely unknown.

You’ve made mistakes 2, 4, and 6 again.

Nick, “He averaged 20 years of data, and said the average was 4 W/m2/year, not 4 W/m2.”

I did no such thing. Serious mistake, Nick. And you claimed to have read the paper.

I averaged no set of data. Lauer and Hamilton averaged data, not me.

They calculated the 20-year annual mean calibration error statistic made by their set of 27 CMIP5 models.

Their 20-year annual average ensemble mean bias (eqn. 1) was calculated as Δ = (1/N) Σ_{i=1}^{N} (model_i − observed_i).

Guess the dimension of the 1/N divisor. Does that dimension enter the dividend?

I took their rms annual average long-wave cloud forcing error statistic. They reported that as ±4 Wm^-2 annual average = ±4 Wm^-2 year^-1.

It’s a statistical average, Nick. Not a measurement average.

Statistical averages are of dimension (property average)/(unit averaged). The average height of people in a room is meters/person, not meters.

Twenty calibration error magnitudes averaged over 20 years give an annual average error, i.e. error/year. Multiply by twenty and you recover the sum.

Your assessment is utterly wrong (again).

Benjamin Winchester

The average height of people in a room is meters/person, not meters.

wat

Toneb

Benjamin ….. exactly

Pat Frank

Back to middle school math for you, Benjamin Winchester, and take Toneb with you.

Pat Frank

Nick Stokes, “annual steps. Well, OK, could be anything. It’s just a fit PF has chosen. You could have any step length.”

Air temperature projections are published as annual time series. The CMIP5 LWCF calibration error statistic of Lauer and Hamilton is an annual rms average. Those realities completely justify my choices.

Nick, “[GCMs] don’t have annual steps. They have steps of about 30min. That is about n=1.75e6 steps in a century, or ±419°C.”

Nick supposes the LWCF calibration error statistic for 30 minutes is the same as for one year. Pretty naïve mistake, Nick.

Nick, “It’s a claimed error in the identification of TCF.” It’s a measured and published simulation error. No one claimed anything. Except you.

Nick, “doesn’t switch between ±4 W/m2 every 30 mins, or even every year. But the error is calculated as if it did.”

It’s not error, Nick. It’s an uncertainty statistic. Mistakes 2, 4, 5 and 6. You’re out.

Nick, “If it ever did wander by 15°C or 419°C, that would greatly change energy balance.”

You’re treating an uncertainty as a physical temperature and a calibration uncertainty statistic as an energetic perturbation on the model. Mistakes 2, 11, and implicitly 12. Seriously fatal mistakes.

In fact each of your mistakes is fatal, Nick.

Your analysis is wrong throughout.

Indeed, your analysis is so clueless it doesn’t even rise to wrong.

Nick Stokes

“Air temperature projections are published as annual time series. The CMIP5 LWCF calibration error statistic of Lauer and Hamilton is an annual rms average. Those realities completely justify my choices.”

That is nonsensical reasoning. Your results are entirely dependent on the time step. If you have 100 steps, you multiply by sqrt(100). If you have 1000, you would multiply by sqrt(1000). So you are telling us that the whole magnitude of your estimated error is dependent on arbitrary publishing conventions.

You have committed errors 0, 7, 32 and 77. Average error score 29 sec^-1. You didn’t last long.

Pat Frank

Nick Stokes, “Your results are entirely dependent on the time step.”

No, they’re not. You’re assuming that the average annual ±4 Wm^-2/year LWCF calibration error is invariant with the averaging interval.

I dealt with that in ESS Round 1 reviewer 4, item 4.

If the averaging interval changes, so does the calibration error value. Put those new values into the error propagation equation, and a completely comparable uncertainty comes out.

The centennial projection uncertainty is about ±15 K no matter what.
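Pat Frank’s counter-claim can also be written out as arithmetic. Under his stated assumption that shortening the averaging interval to m steps per year rescales the per-step statistic to sigma_annual/sqrt(m), the compounded centennial total is the same for any m. The 1.5 K annual figure is hypothetical, chosen only so the total comes out at ±15 K:

```python
import math

sigma_annual = 1.5   # hypothetical per-annual-step uncertainty, in K
years = 100

def centennial_u(steps_per_year):
    # Assumption being illustrated: the per-step statistic shrinks as
    # sigma_annual / sqrt(steps_per_year) when the interval shrinks.
    sigma_step = sigma_annual / math.sqrt(steps_per_year)
    n_steps = steps_per_year * years
    return sigma_step * math.sqrt(n_steps)

for m in (1, 12, 17_520):             # annual, monthly, 30-minute steps
    print(round(centennial_u(m), 6))  # 15.0 in every case
```

Whether that rescaling assumption holds for the Lauer and Hamilton statistic is precisely what the two sides dispute; the sketch only shows that, granted the assumption, the total is invariant.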

I’m horrified allegedly qualified scientists don’t know what error is.

Nick Stokes

Allegedly. Ah yes, only Pat Frank does.

Pat Frank

You certainly don’t Nick. The evidence for that is fully in view above.

By the way, my perception of error is taken from the literature. One can read all about it in Sections 7 and 10 of my Supporting Information document. You did read that too, didn’t you, Nick?

Sheri

They know. They just don’t care.

Benjamin Winchester

They know, which is why Pat Frank’s paper keeps getting rejected.

Lewskannen

Excellent article.
This is precisely where we need to focus.
Iterative models explode even the tiniest errors, and faster computers just generate garbage faster.
I will definitely be downloading and propagating this.

You ask – at 2:52 am in the same time zone that Willis is in.
Some of us need a night’s sleep now and again.

Sandra Graham

Hello Pat,

I am no scientist, although I did love learning about geology, as well as watching the stars.

Keep your head held high and hold to what you believe is true.

One of my favourite sayings: ‘To thine own self be true.’

Sandra

Pat Frank

Thanks, Sandra. 🙂

If models are moving away from linear propagation of everything to more life-like nonlinear behaviour, involving a mix of positive and negative feedbacks – then that is a move in the right direction.

benben

Haha Oh Pat Frank, you’ve lost the plot here.

“The stakes are just too great. It’s not the trillions of dollars that would be lost to sustainability troughers. Nope. It’s that if the analysis were published, the career of every single climate modeler would go down the tubes, starting with James Hansen. ”

I actually showed your work to a friend of mine who works on climate models. He was not impressed. Sloppy math. That’s all there is to it. No massive conspiracy that you could singlehandedly expose IF ONLY they’d publish it in the peer-reviewed corner of the internet rather than the WUWT corner.

Cheers
Ben

Butch2

So show us an example of the “Sloppy Math”!! Or do you just do Drive-By Whining?

john harmsworth

So why doesn’t your “friend” come and comment on it if you don’t understand the math? It seems, from the evidence, that anyone who works on models could use all the ideas for improvement they could get, seeing that not one of them can produce anything that looks like reality!

bitchilly

john, it’s hard to get imaginary friends to interact with real people in the real world.

MarkW

A climate modeler rejects criticism of his models.
Color me surprised.

pbweather

I actually showed your work to a friend of mine who works on climate models. He was not impressed.

LOL!

I have a mate who knows a mate…who knows something about ……errr….what are we talking about again?

Steve Fraser