A skeptic attempts to break the ‘pal review’ glass ceiling in climate modeling

Propagation of Error and the Reliability of Global Air Temperature Projections

Guest essay by Pat Frank

Regular readers at Anthony’s Watts Up With That will know that for several years, since July 2013 in fact, I have been trying to publish an analysis of climate model error.

The analysis propagates a lower limit calibration error of climate models through their air temperature projections. Anyone reading here can predict the result. Climate models are utterly unreliable. For a more extended discussion see my prior WUWT post on this topic (thank-you Anthony).

The bottom line is that when it comes to a CO2 effect on global climate, no one knows what they’re talking about.

Before continuing, I would like to extend a profoundly grateful thank-you! to Anthony for providing an uncensored voice to climate skeptics, over against those who would see them silenced. By “climate skeptics” I mean science-minded people who have assessed the case for anthropogenic global warming and have retained their critical integrity.

In any case, I recently received my sixth rejection; this time from Earth and Space Science, an AGU journal. The rejection followed the usual two rounds of uniformly negative but scientifically meritless reviews (more on that later).

After six tries over more than four years, I now despair of ever publishing the article in a climate journal. The stakes are just too great. It’s not the trillions of dollars that would be lost to sustainability troughers.

Nope. It’s that if the analysis were published, the career of every single climate modeler would go down the tubes, starting with James Hansen. Their competence comes into question. Grants disappear. Universities lose enormous income.

Given all that conflict of interest, what consensus climate scientist could possibly provide a dispassionate review? They will feel justifiably threatened. Why wouldn’t they look for some reason, any reason, to reject the paper?

Somehow climate science journal editors have seemed blind to this obvious conflict of interest as they chose their reviewers.

With the near hopelessness of publication, I have decided to make the manuscript widely available as samizdat literature.

The manuscript with its Supporting Information document is available without restriction here (13.4 MB pdf).

Please go ahead and download it, examine it, comment on it, and send it on to whomever you like. For myself, I have no doubt the analysis is correct.

Here’s the analytical core of it all:

Climate model air temperature projections are just linear extrapolations of greenhouse gas forcing. Therefore, they are subject to linear propagation of error.

Complicated, isn’t it. I have yet to encounter a consensus climate scientist able to grasp that concept.

Willis Eschenbach demonstrated that climate models are just linearity machines back in 2011, by the way, as did I in my 2008 Skeptic paper and at CA in 2006.

The manuscript shows that this linear equation …

[Image: the manuscript’s linear emulation equation]

… will emulate the air temperature projection of any climate model; fCO2 reflects climate sensitivity and “a” is an offset. Both coefficients vary with the model. The parenthetical term is just the fractional change in forcing. The air temperature projections of even the most advanced climate models are hardly more than y = mx+b.
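For readers who want to see how little machinery is involved, here is a minimal sketch of that linear form (my illustration, not the manuscript’s code; the coefficient values are invented, and any constant temperature scaling is absorbed into fCO2):

```python
# Minimal sketch of the linear emulation (illustrative only; coefficient
# values are invented, and any constant temperature scaling is absorbed
# into f_co2).

def emulate_anomaly(delta_F, F0, f_co2, a):
    """Emulated air temperature anomalies (K) from annual forcing
    increments delta_F (W/m^2), base greenhouse forcing F0 (W/m^2),
    a model-specific coefficient f_co2 (K), and an offset a (K)."""
    anomalies, total_dF = [], 0.0
    for dF in delta_F:
        total_dF += dF
        # fractional change in total forcing, mapped linearly to temperature
        anomalies.append(f_co2 * (F0 + total_dF) / F0 + a)
    return anomalies

# toy scenario: forcing grows 0.04 W/m^2 per year for 95 years (2005-2100)
T = emulate_anomaly([0.04] * 95, F0=34.0, f_co2=0.42, a=-0.4)
print(round(T[0], 3), round(T[-1], 3))
```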

The manuscript demonstrates dozens of successful emulations, such as these:

[Figure: emulations of two CMIP5 GCM projections; legend below]

Legend: points are CMIP5 RCP4.5 and RCP8.5 projections. Panel ‘a’ is the GISS GCM Model-E2-H-p1. Panel ‘b’ is the Beijing Climate Center Climate System GCM Model 1-1 (BCC-CSM1-1). The PWM lines are emulations from the linear equation.

CMIP5 models display an inherent calibration error of ±4 W/m² in their simulations of long wave cloud forcing (LWCF). This is a systematic error that arises from incorrect physical theory. It propagates into every single iterative step of a climate simulation. A full discussion can be found in the manuscript.
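To make the propagation step concrete, here is a minimal sketch of root-sum-square propagation of a constant per-step uncertainty (illustrative only; the per-step value of ±1.8 K is an assumption, standing in for the ±4 W/m² calibration error pushed through the linear sensitivity):

```python
import math

# Root-sum-square propagation of a constant per-step uncertainty:
# u(n) = sqrt(n * u_step^2) = u_step * sqrt(n), so the envelope grows
# with the square root of the number of annual steps.

def propagated_uncertainty(u_step, n_steps):
    return math.sqrt(sum(u_step ** 2 for _ in range(n_steps)))

# u_step is assumed here: a per-step temperature uncertainty (K) standing in
# for the +/-4 W/m^2 LWCF calibration error pushed through the linear
# sensitivity of the emulation.
u_step = 1.8
for years in (10, 50, 95):
    print(years, "years: +/-", round(propagated_uncertainty(u_step, years), 1), "K")
```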

The next figure shows what happens when this error is propagated through CMIP5 air temperature projections (starting at 2005).

[Figure: CMIP5 multi-model mean projections with propagated LWCF calibration uncertainty envelopes; legend below]

Legend: Panel ‘a’ points are the CMIP5 multi-model mean anomaly projections of the 5AR RCP4.5 and RCP8.5 scenarios. The PWM lines are the linear emulations. In panel ‘b’, the colored lines are the same two RCP projections. The uncertainty envelopes are from propagated model LWCF calibration error.

For RCP4.5, the emulation departs from the mean near projection year 2050 because the GHG forcing has become constant.

As a monument to the extraordinary incompetence that reigns in the field of consensus climate science, I have made the 29 reviews and my responses for all six submissions available here for public examination (44.6 MB zip file, checked with Norton Antivirus).

When I say incompetence, here’s what I mean and here’s what you’ll find.

Consensus climate scientists:

1. Think that precision is accuracy

2. Think that a root-mean-square error is an energetic perturbation on the model

3. Think that climate models can be used to validate climate models

4. Do not understand calibration at all

5. Do not know that calibration error propagates into subsequent calculations

6. Do not know the difference between statistical uncertainty and physical error

7. Think that a “±” uncertainty means a positive error offset

8. Think that fortuitously cancelling errors remove physical uncertainty

9. Think that projection anomalies are physically accurate (never demonstrated)

10. Think that projection variance about a mean is identical to propagated error

11. Think that a “±K” uncertainty is a physically real temperature

12. Think that a “±K” uncertainty bar means the climate model itself is oscillating violently between ice-house and hot-house climate states

Item 12 is especially indicative of the general incompetence of consensus climate scientists.

Not one of the PhDs making that supposition noticed that a “±” uncertainty bar passes through, and cuts vertically across, every single simulated temperature point. Not one of them figured out that their “±” vertical oscillations meant that the model must occupy the ice-house and hot-house climate states simultaneously!

If you download them, you will find these mistakes repeated and ramified throughout the reviews.

Nevertheless, my manuscript editors apparently accepted these obvious mistakes as valid criticisms. Several have the training to know the manuscript analysis is correct.

For that reason, I have decided their editorial acuity merits them our applause.

Here they are:

  • Steven Ghan, Journal of Geophysical Research: Atmospheres
  • Radan Huth, International Journal of Climatology
  • Timothy Li, Earth-Science Reviews
  • Timothy DelSole, Journal of Climate
  • Jorge E. Gonzalez-Cruz, Advances in Meteorology
  • Jonathan Jiang, Earth and Space Science

Please don’t contact or bother any of these gentlemen. On the other hand, one can hope some publicity leads them to blush in shame.

After submitting my responses showing the reviews were scientifically meritless, I asked several of these editors to have the courage of a scientist and publish over meritless objections. After all, in science analytical demonstrations are bulletproof against criticism. However, none of them rose to the challenge.

If any journal editor or publisher out there wants to step up to the scientific plate after examining my manuscript, I’d be very grateful.

The above journals agreed to send the manuscript out for review. Determined readers might enjoy the few peculiar stories of non-review rejections in the appendix at the bottom.

Really weird: several reviewers inadvertently validated the manuscript while rejecting it.

For example, the third reviewer in JGR round 2 (JGR-A R2#3) wrote that,

“[emulation] is only successful in situations where the forcing is basically linear …” and “[emulations] only work with scenarios that have roughly linearly increasing forcings. Any stabilization or addition of large transients (such as volcanoes) will cause the mismatch between this emulator and the underlying GCM to be obvious.”

The manuscript directly demonstrated that every single climate model projection was linear in forcing. The reviewer’s admission of linearity is tantamount to a validation.

But the reviewer also set a criterion by which the analysis could be verified — emulate a projection with non-linear forcings. He apparently didn’t check his claim before making it (big oh, oh!) even though he had the emulation equation.

My response included this figure:

[Figure: emulations of Hansen 1988 scenarios A, B, and C; legend below]

Legend: The points are Jim Hansen’s 1988 scenario A, B, and C. All three scenarios include volcanic forcings. The lines are the linear emulations.

The volcanic forcings are non-linear, but climate models extrapolate them linearly. The linear equation will successfully emulate linear extrapolations of non-linear forcings. Simple. The emulations of Jim Hansen’s GISS Model II simulations are as good as those of any climate model.
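To see why a transient makes no difference, here is a toy sketch (invented numbers, not the manuscript’s code) of the same linear form applied to a forcing trace with a one-year volcanic drop:

```python
# Toy illustration (invented numbers): the linear emulator applied to a
# forcing trace containing a one-year volcanic drop and rebound.

def emulate_anomaly(delta_F, F0=34.0, f_co2=0.42, a=0.0):
    total, out = 0.0, []
    for dF in delta_F:
        total += dF
        out.append(f_co2 * (F0 + total) / F0 + a)
    return out

dF = [0.04] * 30     # smooth GHG forcing ramp, W/m^2 per year
dF[10] = -2.0        # one-year volcanic forcing drop
dF[11] = 2.08        # rebound back onto the underlying ramp
print([round(t, 3) for t in emulate_anomaly(dF)])
```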

The editor was clearly unimpressed, both with the demonstration and with the fact that the reviewer had inadvertently validated the manuscript analysis.

The same incongruity of inadvertent validations occurred in five of the six submissions: AM R1#1 and R2#1; IJC R1#1 and R2#1; JoC, #2; ESS R1#6 and R2#2 and R2#5.

In his review, JGR R2 reviewer 3 immediately referenced information found only in the debate I had (and won) with Gavin Schmidt at Realclimate. He also used very Gavin-like language. So, I strongly suspect this JGR reviewer was indeed Gavin Schmidt. That’s just my opinion, though. I can’t be completely sure because the review was anonymous.

So, let’s call him Gavinoid Schmidt-like. Three of the editors recruited this reviewer. One expects they called in the big gun to dispose of the upstart.

The Gavinoid responded with three mostly identical reviews. They were among the most incompetent of the 29. Every one of the three included mistake #12.

Here’s Gavinoid’s deep thinking:

“For instance, even after forcings have stabilized, this analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states.”

And there it is. Gavinoid thinks the increasingly large “±K” projection uncertainty bars mean the climate model itself is oscillating increasingly wildly between ice-house and hot-house climate states. He thinks a statistic is a physically real temperature.

A naïve freshman mistake, and the Gavinoid is undoubtedly a PhD-level climate modeler.

Gavinoid’s analytical mistakes mostly fall under list items 2, 5, 6, 10, and 11. If you download the paper and Supporting Information, section 10.3 of the SI includes a discussion of the total hash Gavinoid made of a Stefan-Boltzmann analysis.

And if you’d like to see an extraordinarily bad review, check out ESS round 2 review #2. It apparently passed editorial muster.

I can’t finish without mentioning Dr. Patrick Brown’s video criticizing the YouTube presentation of the manuscript analysis. This was my 2016 talk for the Doctors for Disaster Preparedness. Dr. Brown’s presentation was also cross-posted at “andthentheresphysics” (named with no appreciation of the irony) and on YouTube.

Dr. Brown is a climate modeler and post-doctoral scholar working with Prof. Kenneth Caldeira at the Carnegie Institution for Science, Stanford University. He kindly notified me after posting his critique. Our conversation about it is in the comments section below his video.

Dr. Brown’s objections were classic climate modeler, making list mistakes 2, 4, 5, 6, 7, and 11.

He also made the nearly unique mistake of confusing a root-sum-square average of calibration error statistics with an average of physical magnitudes; nearly unique because one of the ESS reviewers made the same mistake.

Mr. andthentheresphysics weighed in with his own mistaken views, both at Patrick Brown’s site and at his own. His blog commentators expressed fatuous insubstantialities and his moderator was tediously censorious.

That’s about it. Readers moved to mount analytical criticisms are urged to first consult the list and then the reviews. You’re likely to find your objections critically addressed there.

I made the reviews easy to appraise by starting them with a summary list of reviewer mistakes. That didn’t seem to help the editors, though.

Thanks for indulging me by reading this.

I felt a true need to go public, rather than submitting in silence to what I see as reflexive intellectual rejectionism and indeed a noxious betrayal of science by the very people charged with its protection.

Appendix of Also-Ran Journals with Editorial ABM* Responses

Risk Analysis. L. Anthony (Tony) Cox, chief editor; James Lambert, manuscript editor.

This was my first submission. I expected a positive result because they had no dog in the climate fight, their website boasts competence in mathematical modeling, and they had published papers on error analysis of numerical models. What could go wrong?

Reason for declining review: “the approach is quite narrow and there is little promise of interest and lessons that transfer across the several disciplines that are the audience of the RA journal.”

Chief editor Tony Cox agreed with that judgment.

A risk analysis audience not interested to discover there’s no knowable risk from CO2 emissions.

Right.

Asia-Pacific Journal of Atmospheric Sciences. Songyou Hong, chief editor; Sukyoung Lee, manuscript editor. Dr. Lee is a professor of atmospheric meteorology at Penn State, a colleague of Michael Mann, and altogether a wonderful prospect for unbiased judgment.

Reason for declining review: “model-simulated atmospheric states are far from being in a radiative convective equilibrium as in Manabe and Wetherald (1967), which your analysis is based upon.” and because the climate is complex and nonlinear.

Chief editor Songyou Hong supported that judgment.

The manuscript is about error analysis, not about climate. It uses data from Manabe and Wetherald but is very obviously not based upon it.

Dr. Lee’s rejection follows either a shallow analysis or a convenient pretext.

I hope she was rewarded with Mike’s appreciation, anyway.

Science Bulletin. Xiaoya Chen, chief editor, unsigned email communication from “zhixin.”

Reason for declining review: “We have given [the manuscript] serious attention and read it carefully. The criteria for Science Bulletin to evaluate manuscripts are the novelty and significance of the research, and whether it is interesting for a broad scientific audience. Unfortunately, your manuscript does not reach a priority sufficient for a full review in our journal. We regret to inform you that we will not consider it further for publication.”

An analysis that invalidates every single climate model study for the past 30 years, demonstrates that a global climate impact of CO2 emissions, if any, is presently unknowable, and indisputably proves the scientific vacuity of the IPCC, does not reach a priority sufficient for a full review in Science Bulletin.

Right.

Science Bulletin then courageously went on to immediately block my email account.

*ABM = anyone but me; a syndrome widely apparent among journal editors.


629 Comments
SkepticGoneWild
October 23, 2017 6:49 pm

The climate models do not meet the important scientific concept of falsifiability, and are therefore unscientific.

Falsifiability is the principle that in hypothesis testing a proposition cannot be considered scientific if it does not admit the possibility of being shown to be false. For a proposition to be falsifiable, it must – at least in principle – be possible to make an observation that would show the proposition to be false, even if that observation has not actually been made [//psychology.wikia.com/wiki/Falsifiability]

No one in our lifetime can make an observation that would show the model to be false, since model projections extend to the year 2100. Therefore the models are unscientific. Just voodoo pseudo-science.

DWR54
Reply to  SkepticGoneWild
October 24, 2017 5:43 am

If observations fell outside the full model range at any point from the start of the forecast period (2006) to 2100 they would be falsified.

knr
Reply to  SkepticGoneWild
October 24, 2017 6:55 am

But by the ‘magic of climate science’ and the application of ‘heads you lose, tails I win’, this is not an issue.
After all, the first rule of climate ‘science’ is that when the models and reality differ in value, it is reality which is in error.

Reg Nelson
October 23, 2017 7:50 pm

The bigger picture is not that climate models have been so spectacularly wrong (they have been, 97% of the time : ); it is why they have been so wrong, and will always be so: because of political, not scientific, reasons.

I used to view Science as a noble profession, one that lifted an ordinary person like me to incredible heights, not imagined by my hard-working ancestors.

Sad that a once noble profession has been subjugated to political propaganda.

Tom Halla
Reply to  crackers345
October 23, 2017 10:26 pm

Crackers, bad boy! You are using what looks like GISSTEMP to validate models. Try defending the cooking by GISS first, then claim them a support. A complex question fallacy?

DWR54
Reply to  crackers345
October 24, 2017 6:01 am

Tom Halla

You are using what looks like GISSTEMP to validate models.

HadCRUT4 shows similar results. Annual observations in 2016 were very close to the multi-model average (fig. from Ed Hawkins).

http://www.climate-lab-book.ac.uk/files/2016/01/fig-nearterm_all_UPDATE_2016.png

Tom Halla
Reply to  DWR54
October 24, 2017 7:19 am

They are both FUBAR. Showing 1998 as notably cooler than 2017, or either as substantially warmer than 1938, are examples of creative writing.

bitchilly
Reply to  crackers345
October 24, 2017 6:52 pm

crackers, come back and post that crap again when la nina is in full swing. the response from you and dwr to tom halla is laughable.

crackers345
Reply to  crackers345
October 24, 2017 7:29 pm

many ENSOs have happened, yet the temperature has reached this point.

how will the next la nina compare to previous la nina years? they’ve each been getting warmer. why do el nino years keep getting warmer?

crackers345
Reply to  crackers345
October 24, 2017 7:33 pm

billy: for both NOAA surface and UAH LT, the latest el nino season (2015-16), la nina season (2016-17; weak) and neutral season (2014-15) all set record highs for their ENSO classification.

Geoff Sherrington
October 23, 2017 8:12 pm

How accurate are various GCMs in any case?
Here are some figures from the earlier CMIP3 exercise.
http://www.geoffstuff.com/DOUGLASS%20MODEL%20JOC1651.pdf Please refer to Table II. Your attention is drawn to the performance of the CSIRO Mark 3 model, coded 15, against the ensemble means at various altitudes. Trends are in millidegrees C per decade.
Pressure (hPa):  Surface 1000  925  850  700  600  500  400  300  250  200  150  100
Row 1 trend:         163  213  174  181  199  204  226  271  307  299  255  166   53
Row 2 trend:         156  198  166  177  191  203  227  272  314  320  307  268   78

Boiled down to its essence, we have our Australian CSIRO publishing model temperature trends whose least significant figure is 0.001 °C per decade.
At three altitudes, the model result is within +/- 0.001 °C per decade of the average of many simulations by others.

Outstanding!!
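A quick arithmetic check of the two rows quoted above (a sketch that simply differences the printed numbers):

```python
# Difference the two rows of trends quoted above (millidegrees C per decade)
# and list the pressure levels where they agree to within 1 mC/decade.
levels = ["Surface", "1000", "925", "850", "700", "600", "500",
          "400", "300", "250", "200", "150", "100"]
row1 = [163, 213, 174, 181, 199, 204, 226, 271, 307, 299, 255, 166, 53]
row2 = [156, 198, 166, 177, 191, 203, 227, 272, 314, 320, 307, 268, 78]

diffs = {lvl: a - b for lvl, a, b in zip(levels, row1, row2)}
print(diffs)
print("within +/-1:", [lvl for lvl, d in diffs.items() if abs(d) <= 1])
```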

jonesingforozone
Reply to  Geoff Sherrington
October 23, 2017 8:45 pm

Yes, the models provide unbelievable precision, absolutely unbelievable.

Robert B
October 23, 2017 10:55 pm

“Think that fortuitously cancelling errors remove physical uncertainty”. As in: if you’re lucky, the sum of all the positive differences between measurements and the real value will exactly equal the sum of the negative ones. Somehow that has become a law that if you take enough measurements then it will happen.
I used to use an example of a sniper to teach the difference between precision and accuracy. A good shooter will be precise, i.e. a small spread of holes in the target, even if the sight is off (not accurate).
If 10,000 shots are spread randomly around the bullseye over 10 cm, it would be quite fortuitous for the mean to be within 1 mm, i.e. perfectly random. For a climate scientist, it’s a law that it will happen, never mind that many shots after calibration will not make it better.
PS You’ve hit a nerve. This has been painful to type on this site.

Geoff Sherrington
Reply to  Robert B
October 24, 2017 12:52 am

Robert B,
We can use your example to illustrate propagation of error, though it is not so easy to invent a good analogy for it.
Suppose that there is a telescopic sight that can move. If it moves once, then steadies, it might send all shots to one side of the target. This is an offset, a term used above.
If, however, the sight was forever loose, it could move to left or right or up or down, this being an error that can be negative or positive but without the ability of negative excursions to be wiped out by positive ones – the shots with their errors are already in the wall. The +/- case.
We can take each new magazine as the equivalent of starting a new round of model computation. The errors of the sights will continue to be present with each magazine change. They do not go back to a magic zero error when the magazine is changed. Because there are more shots accumulating all the time, there is a probability that wider and wider errors will happen. The errors propagate. The uncertainty bounds look like Pat’s illustration with +/- 18 degrees.
Some who posted above seem to think only in terms of the sights being firm, but offset. This produces what they argue for, but it is a precision concept that they are left with, once the offset error is corrected. Theirs is a wrong, naive analogy. It is not the +/- case.
But Pat is dealing with the analogy of the loose sights that can end up with bullets anywhere, any time. Unconstrained except going in the general direction of the barn wall and not back to hit the shooter. Hi Pat, please correct if I am wrong.
Pat, I think you can add another climate science common error to your list. It is an a priori assumption that if variable A increases, variable B will be more likely to increase than decrease. It comes from thinking too often that if CO2 increases, then temperature will increase – almost by immutable law. Geoff.

DMA
Reply to  Geoff Sherrington
October 24, 2017 10:58 am

Geoff S. and Robert B.
The sniper analogy can be improved by adding a shooter that knows how to adjust the sights but doesn’t know enough to tighten the mounts. He sees each shot and adjusts the sight as if everything on that shot was good. He then aims at the center of the target and repeats the process. Not only is the loose sight causing error (mostly random), but the next shot is based on the position of the last. Thus the error propagates and the group spreads far beyond the accuracy capability of the rifle.
I believe the iterative process of the climate models works the same way, with uncertainty growing with each iteration to the point that they quickly get into the realm of meaningless results, even if they are constrained to give realistically possible results.
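A small Monte Carlo sketch of the two cases just described (illustrative only; the drift and spread values are invented): a firm but offset sight gives a constant spread, while a sight that is re-set from the last shot random-walks, so the spread keeps growing with shot count.

```python
import random

random.seed(0)

def fixed_offset_shots(n, offset=5.0, spread=1.0):
    """Firm but mis-aligned sight: constant bias, constant spread."""
    return [offset + random.gauss(0, spread) for _ in range(n)]

def drifting_sight_shots(n, drift=1.0, spread=1.0):
    """Loose sight adjusted from the last shot: the aim point random-walks,
    so the error accumulates shot after shot."""
    aim, shots = 0.0, []
    for _ in range(n):
        aim += random.gauss(0, drift)          # the mount slips again
        shots.append(aim + random.gauss(0, spread))
    return shots

def spread_of(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

for n in (10, 100, 1000):
    print(n, "shots | fixed offset:", round(spread_of(fixed_offset_shots(n)), 1),
          "| drifting sight:", round(spread_of(drifting_sight_shots(n)), 1))
```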

Pat Frank
Reply to  Geoff Sherrington
October 24, 2017 7:46 pm

Hi Geoff, it’s not that the gun can send bullets anywhere.

It’s that it systematically sends them somewhere but we don’t know where.

And every different gun sends them systematically somewhere else, and again we don’t know where.

All we know is that when we set up a nearby target and test all the different guns, the bullets get spattered in some way about the target with an average scatter of, say, ±2m.

And the problem is somewhere in the gun, or somewhere in the bullet, or both, but we don’t know where.

And when we shoot at the real target, we know it’s 1000 meters away but we have no idea where it’s located. Also, we can’t see the bullets, can’t find the bullets, and have no idea where they are with respect to the real target (which remains invisible).

But we do know that the bullets get to within ±2m of a target at 10 meters. 🙂

That’s climate models and their long wave cloud forcing calibration error.

On another topic, Geoff, you’ve probably noticed that ATTP thinks one can add up all the individual measurement errors in a calibration experiment, combine them into one number (so positive and negative calibration error values cancel), and then subtract that one final number from real measurement data and decide those data are now error-free.

As a seriously well-trained and experienced analytical chemist, do you think you can find a way to explain to him that he’s got a really bad idea?

I’ve tried many times, and he’s completely refractory.

Best wishes to you, Geoff. 🙂

Reply to  Geoff Sherrington
October 24, 2017 10:39 pm

Geoff, the sniper analogy can be improved by assuming the sniper is letting go of bloated balloons at his/her target. The balloons flazzzpt off in random directions. Certain physical laws allow for a spread of sorts, and so we take a mean of the spread and call it a projection.

RichardLH
October 24, 2017 4:05 am

Some noted Climate Scientists think that Nyquist only applies to time and not space. He would be turning in his grave. The sampling theorem, and what it says about all the samples/measurements we rely on, is not to be ignored. IMHO of course.

RichardLH
Reply to  RichardLH
October 24, 2017 5:22 am

Analogue is always accurate but never precise. Digital is always precise but never accurate.

That goes for writing down the figures as well as in the computer/instrument.

ferdberple
October 24, 2017 7:12 am

T(n+1) = T(n) + λ ΔF(n+1)/τ + ΔT(n) exp(−1/τ)

The “black box” equation is linear on each time slice. However, the result matches the model mean, not individual runs. Whether the error term converges or diverges would seem to me to be an important question.

My best guess is that climate does not have a fixed mean or variance, the law of large numbers does not apply, and the error term does not converge. Rather, climate is a fractal: a 1/f distribution, scale invariant at all time scales.

But keep in mind our actual climate is not a model mean. The climate we experience is like a single run of a climate model. We end up with just one outcome from all possible outcomes.
The climate models, however, are showing us the average of all outcomes, which is a very different beast statistically.

Fundamentally the climate models are wrong because they project future climate to be the mean of all possible climates.

In actual fact, all the climate models are showing us is that an infinite number of future climates are possible from a single forcing. And the spaghetti graph shows the boundaries.

The ensemble mean is simply a projection without predictive power. Climate models have been given a bad reputation because the model mean has been misrepresented to the public as having predictive power, which it does not, because this is a boundary value problem.
It is similar to a simulation of a roll of two dice: we know the boundaries are 2 and 12, but the average of 7 has no predictive power for what will actually be rolled.
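A trivial sketch of that dice point (illustrative only):

```python
import random
from collections import Counter

random.seed(1)
rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(10000)]
print("mean of all rolls:", round(sum(rolls) / len(rolls), 2))   # close to 7
print("distribution:", sorted(Counter(rolls).items()))           # spread over 2..12
print("one actual roll:", rolls[0])                              # anywhere from 2 to 12
```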

Clyde Spencer
Reply to  ferdberple
October 24, 2017 11:02 am

ferdberple,

The ‘Spaghetti Graph’ shows us the sensitivity of the models to initialization perturbations and differences in assumptions about future CO2 emissions. It shows us a range of possible outputs from the models. However, without some way of calibrating or validating the models, there is no reason to believe that the mean or median, or even the binned-mode, is the best estimate of the future state of temperatures. Even if the models had expertise in predicting temperatures, without a reliable prediction of future increases in CO2 (Assuming it actually is the control knob!) there is no hope of the ensembles having predictive value. At best, all the modelers can say honestly, is “Assuming that one of the RCPs is close to what the future emissions will be like, we think that the future temperatures will be within a range demonstrated for the ensemble for that particular RCP.”

At this point in time, it appears that only the Russian model (near the low extreme) is tracking the actual average global mean temperature. Logically, there can only be one best prediction. Averaging it with all the other predictions only reduces the quality of the prediction.

Philip Schaeffer
October 24, 2017 8:19 am

And yet again, this is just embarrassing. People who actually understand the math vs those who don’t. Well, it’s definitely one way to progress. The hard way.

Tom Monfort
October 24, 2017 11:26 am

Joe Crawford October 23 @9:28AM:

“I doubt there is a Mechanical Engineer in the crowd that would trust his/her family’s safety to a 5th floor apartment deck that was designed with, or the design was verified by, a stress analysis (i.e., modelling) program that required constraints be placed within it to keep the calculations within reasonable ranges.”

You have forgotten that there is a Mechanical Engineer in the crowd, Bill Nye ‘The Science Guy’.

Joe Crawford
Reply to  Tom Monfort
October 27, 2017 9:14 am

We don’t claim him, Tom. There is at least one exception to every rule.

Eli Rabett
October 26, 2017 7:53 am

Let Eli make this simple. Take some parameter B. Nick Stokes is saying that three values used for annual runs are

1. 2.0
2. 2.1
3. 1.9

Pat Frank is saying the three values must be

1. 1.0
2. 2.1
3. 2.9

In both cases the average is 2.0. Nick says this is an average of 2.0. Pat says this is an average of 2.0/yr

Now you would think that if Pat Frank were correct, running a model without changing the atmospheric composition would give wildly diverging results as the number of years increased. But GCMs don’t behave that way, and indeed doing such runs is a basic test of the model and tells something about the unforced variability in the model on different time scales which can be compared to the observed natural variability on those time scales.

Reply to  Eli Rabett
October 27, 2017 4:38 am

Now you would think that if Pat Frank were correct, running a model without changing the atmospheric composition would give wildly diverging results as the number of years increased. But GCMs don’t behave that way, and indeed doing such runs is a basic test of the model and tells something about the unforced variability in the model on different time scales which can be compared to the observed natural variability on those time scales.

Running the models with no changes to the gas mixture should replicate the range in temps due to the ocean cycles and El Niños.
These alone should make for a wide range of run results.

If you’re saying they don’t, that’s just more proof of how flawed they really are.
Oceans are thermal storage in a model; they have to have delayed thermal processes associated with them or they are not modeled correctly.

Pat Frank
Reply to  Eli Rabett
October 29, 2017 12:47 pm

Eli: “Now you would think that if Pat Frank were correct, running a model without changing the atmospheric composition would give wildly diverging results as the number of years increased.”

Eli would think that. So would my climate modeler reviewers. No trained physical scientist would think that, though, because they’d all know the difference between physical error and an uncertainty statistic.

They’d also have in mind that models are tuned to produce “reasonable” values. They’d know that tuning models does nothing to reduce uncertainty.

Eli doesn’t know any of that. Eli is not a physical scientist. He’s a member of this caste.

Nick Stokes
Reply to  Pat Frank
October 29, 2017 12:59 pm
Pat Frank
Reply to  Pat Frank
October 29, 2017 1:44 pm

In that case, Nick, he’s no credit to his profession. He’s a volunteer member of that caste.

Nick Stokes
Reply to  Pat Frank
October 29, 2017 1:58 pm

He is a trained physical scientist who thinks you are wrong. A very large number of scientists think you are wrong. You have not produced any who think you are right. And you give no references to support your nutty ideas on averaging.

Mark S. Johnson
Reply to  Pat Frank
October 29, 2017 2:08 pm

Pat Frank, if you were even close to being right, you wouldn’t have such a problem getting published.

Pat Frank
Reply to  Pat Frank
October 30, 2017 8:32 pm

Nick Stokes, argument from authority.

Eli is required to give a quantitative reason. He’s not done that. His “wildly diverging” was so wrong analytically as to imply complete ignorance.

Three of my reviews expressed agreement with the analysis and recommended publication. Every single physical scientist I’ve spoken to directly has understood and agreed with the analysis.

The only rejectionaires have been climate scientists, all of whom worked from a huge professional conflict of interest. And their arguments are demonstrated wrong, or like Eli’s, to be candid expressions of utter ignorance.

Pat Frank
Reply to  Pat Frank
October 30, 2017 8:35 pm

Mark S. Johnson, thank-you for your outstandingly naïve comment.

Brad
October 27, 2017 8:05 am

Good work. Not only do you show that mankind is risking vast amounts of resources on quasi-science, but you also show how they get away with the fraud by shutting out any opposing views. A class action lawsuit has to be launched.

Pat Frank
October 31, 2017 4:32 pm

As of this post and to my best knowledge, I have resolved all the objections on this thread.

If I am wrong and any remain unresolved, please point them out in reply below.