Are Climate Modelers Scientists?

Guest essay by Pat Frank

For going on two years now, I’ve been trying to publish a manuscript that critically assesses the reliability of climate model projections. The manuscript has been submitted twice to each of two leading climate journals and rejected each time, for a total of four rejections; all on the advice of nine of the ten reviewers. More on that below.

The analysis propagates climate model error through global air temperature projections, using a formalized version of the “passive warming model” (PWM) GCM emulator reported in my 2008 Skeptic article. Propagation of error through a GCM temperature projection reveals its predictive reliability.

Those interested can consult the invited poster (2.9 MB pdf) I presented at the 2013 AGU Fall Meeting in San Francisco. Error propagation is a standard way to assess the reliability of an experimental result or a model prediction. However, climate models are never assessed this way.

Here’s an illustration: the Figure below shows what happens when the average ±4 Wm-2 long-wave cloud forcing error of CMIP5 climate models [1], is propagated through a couple of Community Climate System Model 4 (CCSM4) global air temperature projections.

CCSM4 is a CMIP5-level climate model from NCAR, where Kevin Trenberth works, and was used in the IPCC AR5 of 2013. Judy Curry wrote about it here.

[Figure: CCSM4 RCP 6.0 and 8.5 projections; panel a with PWM emulations, panel b with propagated uncertainty envelopes]

In panel a, the points show the CCSM4 anomaly projections of the AR5 Representative Concentration Pathways (RCP) 6.0 (green) and 8.5 (blue). The lines are the PWM emulations of the CCSM4 projections, made using the standard RCP forcings from Meinshausen. [2] The CCSM4 RCP forcings may not be identical to the Meinshausen RCP forcings. The shaded areas are the range of projections across all AR5 models (see AR5 Figure TS.15). The CCSM4 projections are in the upper range.

In panel b, the lines are the same two CCSM4 RCP projections. But now the shaded areas are the uncertainty envelopes resulting when ±4 Wm-2 CMIP5 long wave cloud forcing error is propagated through the projections in annual steps.

The uncertainty is so large because ±4 W m-2 of annual long wave cloud forcing error is ±114× larger than the annual average 0.035 W m-2 forcing increase from GHG emissions since 1979. Typical error bars for CMIP5 climate model projections are about ±14 C after 100 years and ±18 C after 150 years.
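
For readers who want to check the arithmetic, here is a minimal sketch of the root-sum-square rule by which a per-step uncertainty compounds through an iterated projection. The ±1.4 C per annual step is an illustrative value chosen only to land in the ballpark just quoted; the manuscript derives its per-step uncertainties by propagating the ±4 W m-2 cloud forcing error through the PWM emulator.

```python
import math

# Minimal sketch of root-sum-square (RSS) error propagation through an
# iterated projection (refs. 3, 10). The per-step value is illustrative only;
# the manuscript's per-step uncertainties come from propagating the +/-4 W m-2
# long-wave cloud forcing error through the PWM emulator.
u_step = 1.4  # assumed per-step (annual) temperature uncertainty, C

def propagated_uncertainty(u_per_step, n_steps):
    """Uncertainty after n_steps, each contributing u_per_step, combined in quadrature."""
    return u_per_step * math.sqrt(n_steps)

for years in (50, 100, 150):
    print(f"after {years:3d} years: +/- {propagated_uncertainty(u_step, years):.0f} C")
# -> roughly +/-10 C at 50 years, +/-14 C at 100 years, +/-17 C at 150 years
```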

It’s immediately clear that climate models are unable to resolve any thermal effect of greenhouse gas emissions or tell us anything about future air temperatures. It’s impossible that climate models can ever have resolved an anthropogenic greenhouse signal; not now nor at any time in the past.

Propagation of errors through a calculation is a simple idea. It’s logically obvious. It’s critically important. It gets pounded into every single freshman physics, chemistry, and engineering student.

And it has escaped the grasp of every single Ph.D. climate modeler I have encountered, in conversation or in review.

That brings me to the reason I’m writing here. My manuscript has been rejected four times; twice each from two high-ranking climate journals. I have responded to a total of ten reviews.

Nine of the ten reviews were clearly written by climate modelers, were uniformly negative, and recommended rejection. One reviewer was clearly not a climate modeler. That one recommended publication.

I’ve had my share of scientific debates. A couple of them not entirely amiable. My research (with colleagues) has overthrown four ‘ruling paradigms,’ and so I’m familiar with how scientists behave when they’re challenged. None of that prepared me for the standards at play in climate science.

I’ll start with the conclusion, and follow on with the supporting evidence: never, in all my experience with peer-reviewed publishing, have I ever encountered such incompetence in a reviewer. Much less incompetence evidently common to a class of reviewers.

The shocking lack of competence I encountered convinced me that public exposure would be a corrective civic good.

Physical error analysis is critical to all of science, especially experimental physical science. It is not too much to call it central.

Result ± error tells what one knows. If the error is larger than the result, one doesn’t know anything. Geoff Sherrington has been eloquent about the hazards and trickiness of experimental error.

All of the physical sciences hew to these standards. Physical scientists are bound by them.

Climate modelers do not hew to them, and by their own lights are not bound by them.

I will give examples of all of the following concerning climate modelers:

  • They neither respect nor understand the distinction between accuracy and precision.
  • They understand nothing of the meaning or method of propagated error.
  • They think physical error bars mean the model itself is oscillating between the uncertainty extremes. (I kid you not.)
  • They don’t understand the meaning of physical error.
  • They don’t understand the importance of a unique result.

Bottom line? Climate modelers are not scientists. Climate modeling is not a branch of physical science. Climate modelers are unequipped to evaluate the physical reliability of their own models.

The incredible material that follows is verbatim reviewer transcript, quoted in italics. Every idea below is presented as the reviewer meant it. No quote is deprived of its context, and none has been truncated into something different from what the reviewer meant.

And keep in mind that these are arguments that certain editors of certain high-ranking climate journals found persuasive.

1. Accuracy vs. Precision

The distinction between accuracy and precision is central to the argument presented in the manuscript, and is defined right in the Introduction.

The accuracy of a model is the difference between its predictions and the corresponding observations.

The precision of a model is the variance of its predictions, without reference to observations.

Physical evaluation of a model requires an accuracy metric.

There is nothing more basic to science itself than the critical distinction of accuracy from precision.
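
For a concrete sense of the distinction just defined, here is a minimal numerical sketch. The numbers are invented purely for illustration:

```python
import statistics

# Invented-for-illustration example of precision without accuracy.
observed = 10.0                               # the corresponding observation
predictions = [14.1, 14.3, 13.9, 14.2, 14.0]  # five "ensemble" predictions

precision = statistics.stdev(predictions)                 # spread about the ensemble's own mean
accuracy_error = statistics.mean(predictions) - observed  # deviation of the mean from the observation

print(f"precision (spread about the ensemble mean):      {precision:.2f}")
print(f"accuracy error (ensemble mean minus observation): {accuracy_error:+.2f}")
# The predictions agree closely with one another (precise) while sitting about
# four units away from the observation (inaccurate). The spread alone says
# nothing about the distance to the truth.
```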

Here’s what climate modelers say:

“Too much of this paper consists of philosophical rants (e.g., accuracy vs. precision) …”

“[T]he author thinks that a probability distribution function (pdf) only provides information about precision and it cannot give any information about accuracy. This is wrong, and if this were true, the statisticians could resign.”

“The best way to test the errors of the GCMs is to run numerical experiments to sample the predicted effects of different parameters…”

“The author is simply asserting that uncertainties in published estimates [i.e., model precision – P] are not ‘physically valid’ [i.e., not accuracy – P]- an opinion that is not widely shared.”

Not widely shared among climate modelers, anyway.

The first reviewer actually scorned the distinction between accuracy and precision. This, from a supposed scientist.

The remainder are alternative declarations that model variance, i.e., precision, = physical accuracy.

The accuracy-precision difference was extensively documented in the manuscript, with citations to the relevant literature, e.g., [3, 4].

The reviewers ignored that literature. The final reviewer dismissed it as mere assertion.

Every climate modeler reviewer who addressed the precision-accuracy question similarly failed to grasp it. I have yet to encounter one who understands it.

2. No understanding of propagated error

“The authors claim that published projections do not include ‘propagated errors’ is fundamentally flawed. It is clearly the case that the model ensemble may have structural errors that bias the projections.”

I.e., the reviewer supposes that model precision = propagated error.

“The repeated statement that no prior papers have discussed propagated error in GCM projections is simply wrong (Rogelj (2013), Murphy (2007), Rowlands (2012)).”

Let’s take the reviewer examples in order:

Rogelj (2013) concerns the economic costs of mitigation. Their Figure 1b includes a global temperature projection plus uncertainty ranges. The uncertainties, “are based on a 600-member ensemble of temperature projections for each scenario…” [5]

I.e., the reviewer supposes that model precision = propagated error.

Murphy (2007) write, “In order to sample the effects of model error, it is necessary to construct ensembles which sample plausible alternative representations of earth system processes.” [6]

I.e., the reviewer supposes that model precision = propagated error.

Rowlands (2012) write, “Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations,” and go on to state that, “Perturbed-physics ensembles offer a systematic approach to quantify uncertainty in models of the climate system response to external forcing, albeit within a given model structure.” [7]

I.e., the reviewer supposes that model precision = propagated error.

Not one of this reviewer’s examples of propagated error includes any propagated error, or even mentions propagated error.

Not only that, but not one of the examples discusses physical error at all. It’s all model precision.

This reviewer doesn’t know what propagated error is, what it means, or how to identify it. This reviewer also evidently does not know how to recognize physical error itself.
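
The difference is easy to demonstrate. Below is a toy sketch, with invented numbers and no resemblance to any GCM, of why ensemble spread cannot stand in for propagated error: emulators that share the same systematic bias agree closely with one another while drifting ever farther from the unbiased trajectory.

```python
import random

# Toy sketch (not any GCM): ten step-wise emulators that all share the same
# systematic forcing bias. Their mutual spread stays small while their common
# drift away from the unbiased trajectory grows with every step. All numbers
# are invented for illustration.
random.seed(0)
N_YEARS, TREND, BIAS, NOISE = 100, 0.02, 0.05, 0.01   # per-step values, C

def trajectory():
    T, path = 0.0, []
    for _ in range(N_YEARS):
        T += TREND + BIAS + random.uniform(-NOISE, NOISE)  # shared bias enters every step
        path.append(T)
    return path

finals = [trajectory()[-1] for _ in range(10)]
spread = max(finals) - min(finals)                   # what an ensemble comparison sees
drift = sum(finals) / len(finals) - TREND * N_YEARS  # what it cannot see

print(f"ensemble spread after {N_YEARS} years: {spread:.2f} C")  # small: precision
print(f"common drift from the unbiased path:  {drift:.2f} C")    # large: unrevealed systematic error
```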

Another reviewer:

“Examples of uncertainty propagation: Stainforth, D. et al., 2005: Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature 433, 403-406.

“M. Collins, R. E. Chandler, P. M. Cox, J. M. Huthnance, J. Rougier and D. B. Stephenson, 2012: Quantifying future climate change. Nature Climate Change, 2, 403-409.”

Let’s find out. Stainforth (2005) includes three figures; every single one of them presents error as projection variation. [8]

Here’s their Figure 1:

[Stainforth et al. (2005), Figure 1]

Original Figure Legend: “Figure 1 Frequency distributions of Tg (colours indicate density of trajectories per 0.1 K interval) through the three phases of the simulation. a, Frequency distribution of the 2,017 distinct independent simulations. b, Frequency distribution of the 414 model versions. In b, Tg is shown relative to the value at the end of the calibration phase and where initial condition ensemble members exist, their mean has been taken for each time point.”

Here’s what they say about uncertainty: “[W]e have carried out a grand ensemble (an ensemble of ensembles) exploring uncertainty in a state-of-the-art model. Uncertainty in model response is investigated using a perturbed physics ensemble in which model parameters are set to alternative values considered plausible by experts in the relevant parameterization schemes.”

There it is: uncertainty is directly represented as model variability (density of trajectories; perturbed physics ensemble).

The remaining figures in Stainforth (2005) derive from this one. Propagated error appears nowhere and is nowhere mentioned.

Reviewer supposition: model precision = propagated error.

Collins (2012) state that adjusting model parameters so that projections approach observations is enough to “hope” that a model has physical validity. Propagation of error is never mentioned. Collins Figure 3 shows physical uncertainty as model variability about an ensemble mean. [9] Here it is:

[Collins et al. (2012), Figure 3]

Original Legend: “Figure 3 | Global temperature anomalies. a, Global mean temperature anomalies produced using an EBM forced by historical changes in well-mixed greenhouse gases and future increases based on the A1B scenario from the Intergovernmental Panel on Climate Change’s Special Report on Emission Scenarios. The different curves are generated by varying the feedback parameter (climate sensitivity) in the EBM. b, Changes in global mean temperature at 2050 versus global mean temperature at the year 2000, … The histogram on the x axis represents an estimate of the twentieth-century warming attributable to greenhouse gases. The histogram on the y axis uses the relationship between the past and the future to obtain a projection of future changes.”

Collins 2012, part a: model variability itself; part b: model variability (precision) represented as physical uncertainty (accuracy). Propagated error? Nowhere to be found.

So, once again, not one of this reviewer’s examples of propagated error actually includes any propagated error, or even mentions propagated error.

It’s safe to conclude that these climate modelers have no concept at all of propagated error. They apparently have no concept whatever of physical error.

Every single time any of the reviewers addressed propagated error, they revealed a complete ignorance of it.

3. Error bars mean model oscillation – wherein climate modelers reveal a fatal case of naive-freshman-itis.

“To say that this error indicates that temperatures could hugely cool in response to CO2 shows that their model is unphysical.”

“[T]his analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states.”

“Indeed if we carry such error propagation out for millennia we find that the uncertainty will eventually be larger than the absolute temperature of the Earth, a clear absurdity.”

“An entirely equivalent argument [to the error bars] would be to say (accurately) that there is a 2K range of pre-industrial absolute temperatures in GCMs, and therefore the global mean temperature is liable to jump 2K at any time – which is clearly nonsense…”

Got that? These climate modelers think that “±” error bars imply the model itself is oscillating (liable to jump) between the error bar extremes.

Or that the bars from propagated error represent physical temperature itself.

No sophomore in physics, chemistry, or engineering would make such an ignorant mistake.

But Ph.D. climate modelers invariably do. One climate modeler audience member did so verbally, during Q&A after my seminar on this analysis.

The worst of it is that both the manuscript and the supporting information document explained that error bars represent an ignorance width. Not one of these Ph.D. reviewers gave any evidence of having read any of it.
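
Since the point seems to need spelling out, here is a minimal sketch, with illustrative numbers only, of the difference between a projection and the ignorance width around it.

```python
import math

# Minimal sketch, illustrative numbers only: one smooth projection and the
# ignorance width around it. The envelope is a statement about what is known,
# not a trajectory the model follows.
TREND, U_STEP = 0.02, 1.2   # assumed warming per step and per-step uncertainty, C

for year in (1, 10, 50, 100):
    projection = TREND * year             # the model's single, smooth trajectory
    ignorance = U_STEP * math.sqrt(year)  # root-sum-square envelope half-width
    print(f"year {year:3d}: projection {projection:+5.2f} C, envelope +/- {ignorance:4.1f} C")
# The projection reaches +2 C after a century; it never "swings" to +/-12 C.
# The envelope only says its physical reliability is no better than +/-12 C.
```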

5. Unique Result – a concept unknown among climate modelers.

Do climate modelers understand the meaning and importance of a unique result?

“[L]ooking the last glacial maximum, the same models produce global mean changes of between 4 and 6 degrees colder than the pre-industrial. If the conclusions of this paper were correct, this spread (being so much smaller than the estimated errors of +/- 15 deg C) would be nothing short of miraculous.”

“In reality climate models have been tested on multicentennial time scales against paleoclimate data (see the most recent PMIP intercomparisons) and do reasonably well at simulating small Holocene climate variations, and even glacial-interglacial transitions. This is completely incompatible with the claimed results.”

“The most obvious indication that the error framework and the emulation framework presented in this manuscript is wrong is that the different GCMs with well-known different cloudiness biases (IPCC) produce quite similar results, albeit a spread in the climate sensitivities.”

Let’s look at where these reviewers get such confidence. Here’s an example from Rowlands (2012) of what models produce. [7]

[Rowlands et al. (2012), Figure 1, with annotation lines added]

Original Legend: “Figure 1 | Evolution of uncertainties in reconstructed global-mean temperature projections under SRES A1B in the HadCM3L ensemble.” [7]

The variable black line in the middle of the group represents the observed air temperature. I added the horizontal black lines at 1 K and 3 K, and the vertical red line at year 2055. Part of the red line is in the original figure, as the precision uncertainty bar.

This Figure displays thousands of perturbed physics simulations of global air temperatures. “Perturbed physics” means that model parameters are varied across their range of physical uncertainty. Each member of the ensemble is of equivalent weight. None of them are known to be physically more correct than any of the others.

The physical energy-state of the simulated climate varies systematically across the years. The horizontal black lines show that multiple physical energy states produce the same simulated 1 K or 3 K anomaly temperature.

The vertical red line at year 2055 shows that the identical physical energy-state (the year 2055 state) produces multiple simulated air temperatures.

These wandering projections do not represent natural variability. They represent how parameter magnitudes, varied across their uncertainty ranges, affect the temperature simulations of the HadCM3L model itself.

The Figure fully demonstrates that climate models are incapable of producing a unique solution to any climate energy-state.

That means simulations close to observations are not known to accurately represent the true physical energy-state of the climate. They just happen to have opportunistically wonderful offsetting errors.

That means, in turn, the projections have no informational value. They tell us nothing about possible future air temperatures.

There is no way to know which of the simulations actually represents the correct underlying physics. Or whether any of them do. And even if one of them happens to conform to the future behavior of the climate, there’s no way to know it wasn’t a fortuitous accident.

Models with large parameter uncertainties cannot produce a unique prediction. The reviewers’ confident statements show they have no understanding of that, or of why it’s important.

Now suppose Rowlands, et al., tuned the parameters of the HadCM3L model so that it precisely reproduced the observed air temperature line.

Would it mean the HadCM3L had suddenly attained the ability to produce a unique solution to the climate energy-state?

Would it mean the HadCM3L was suddenly able to reproduce the correct underlying physics?

Obviously not.

Tuned parameters merely obscure uncertainty. They hide the unreliability of the model. It is no measure of accuracy that tuned models produce similar projections. Or that their projections are close to observations. Tuning parameter sets merely offsets errors and produces a false and tendentious precision.
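
The offsetting-errors point can be illustrated with a toy calculation. The linear model and every number below are invented purely for illustration; nothing here is a GCM.

```python
# Toy illustration of non-uniqueness and tuning: two (sensitivity, offset)
# pairs that fit a short "calibration" record equally well, yet diverge when
# the forcing is extrapolated. The linear model and every number are invented.
calibration_forcing = [1.4, 1.5, 1.6, 1.7, 1.8]   # illustrative forcing units
observations = [1.12, 1.20, 1.28, 1.36, 1.44]     # generated here with pair A

pair_A = (0.80, 0.00)   # the "correct" physics in this toy
pair_B = (0.50, 0.48)   # compensating errors: too-low sensitivity, spurious offset

def simulate(forcings, sensitivity, offset):
    return [sensitivity * F + offset for F in forcings]

for name, (s, c) in (("A", pair_A), ("B", pair_B)):
    hindcast = simulate(calibration_forcing, s, c)
    worst_misfit = max(abs(h - o) for h, o in zip(hindcast, observations))
    projection = simulate([8.5], s, c)[0]          # extrapolate to a large forcing
    print(f"pair {name}: worst calibration misfit {worst_misfit:.2f}, projection at F = 8.5: {projection:.2f}")
# Both pairs reproduce the calibration record to within 0.06, comparable to a
# typical observational error, but their projections differ by about 2.
```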

Every single recent, Holocene, or Glacial-era temperature hindcast is likewise non-unique. Not one of them validates the accuracy of a climate model. Not one of them tells us anything about any physically real global climate state. Not one single climate modeler reviewer evidenced any understanding of that basic standard of science.

Any physical scientist would (should) know this. The climate modeler reviewers uniformly do not.

6. An especially egregious example in which the petard self-hoister is unaware of the air underfoot.

Finally, I’d like to present one last example. The essay is already long, and yet another instance may be overkill.

But I finally decided it is better to risk reader fatigue than to not make a public record of what passes for analytical thinking among climate modelers. Apologies if it’s all become tedious.

This last truly demonstrates the abysmal understanding of error analysis at large in the ranks of climate modelers. Here we go:

“I will give (again) one simple example of why this whole exercise is a waste of time. Take a simple energy balance model, solar in, long wave out, single layer atmosphere, albedo and greenhouse effect. i.e. sigma Ts^4 = S (1-a) /(1 -lambda/2) where lambda is the atmospheric emissivity, a is the albedo (0.7), S the incident solar flux (340 W/m^2), sigma is the SB coefficient and Ts is the surface temperature (288K).

“The sensitivity of this model to an increase in lambda of 0.02 (which gives a 4 W/m2 forcing) is 1.19 deg C (assuming no feedbacks on lambda or a). The sensitivity of an erroneous model with an error in the albedo of 0.012 (which gives a 4 W/m^2 SW TOA flux error) to exactly the same forcing is 1.18 deg C.

“This the difference that a systematic bias makes to the sensitivity is two orders of magnitude less than the effect of the perturbation. The author’s equating of the response error to the bias error even in such a simple model is orders of magnitude wrong. It is exactly the same with his GCM emulator.”

The “difference” the reviewer is talking about is 1.19 C – 1.18 C = 0.01 C. The reviewer supposes that this 0.01 C is the entire uncertainty produced by the model due to a 4 Wm-2 offset error in either albedo or emissivity.

But it’s not.

First reviewer mistake: If 1.19 C or 1.18 C are produced by a 4 Wm-2 offset forcing error, then 1.19 C or 1.18 C are offset temperature errors. Not sensitivities. Their tiny difference, if anything, confirms the error magnitude.

Second mistake: The reviewer doesn’t know the difference between an offset error (a statistic) and temperature (a thermodynamic magnitude). The reviewer’s “sensitivity” is actually “error.”

Third mistake: The reviewer equates a 4 W/m2 energetic perturbation to a ±4 W/m2 physical error statistic.

This mistake, by the way, again shows that the reviewer doesn’t know to make a distinction between a physical magnitude and an error statistic.

Fourth mistake: The reviewer compares a single step “sensitivity” calculation to multi-step propagated error.

Fifth mistake: The reviewer is apparently unfamiliar with the generality that physical uncertainties express a bounded range of ignorance; i.e., “±” about some value. Uncertainties are never constant offsets.

Lemma to five: the reviewer apparently also does not know that the correct way to express the uncertainties is as ±lambda or ±albedo.

But then, inconveniently for the reviewer, if the uncertainties are correctly expressed, the prescribed uncertainty is ±4 W/m2 in forcing. The uncertainty is then obviously an error statistic and not an energetic perturbation.

For those confused by this distinction, no energetic perturbation can be simultaneously positive and negative. Earth to modelers, over. . .

When the reviewer’s example is expressed using the correct ± statistical notation, 1.19 C and 1.18 C become ±1.19 C and ±1.18 C.

And these are uncertainties for a single step calculation. They are in the same ballpark as the single-step uncertainties presented in the manuscript.

As soon as the reviewer’s forcing uncertainty enters into a multi-step linear extrapolation, i.e., a GCM projection, the ±1.19 C and ±1.18 C uncertainties would appear in every step, and must then propagate through the steps as the root-sum-square. [3, 10]

After 100 steps (a centennial projection) ±1.18 C per step propagates to ±11.8 C.
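
For anyone who wants to check the numbers, here is a minimal sketch of the reviewer’s own toy model and the root-sum-square step just described. One assumption is flagged: the reviewer writes “albedo (0.7)”, which reproduces Ts = 288 K only if it is read as (1 - a) = 0.7, i.e. an albedo of 0.3.

```python
import math

# The reviewer's single-layer EBM: sigma*Ts^4 = S*(1 - a)/(1 - lambda/2).
# S = 340 W/m^2 and Ts = 288 K are the reviewer's values. Reading the quoted
# "albedo (0.7)" as (1 - a) = 0.7 (i.e. a = 0.3) is an assumption made here;
# it is the only reading that gives Ts near 288 K.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S, ONE_MINUS_A, TS0 = 340.0, 0.7, 288.0

# Emissivity implied by the stated baseline
lam = 2.0 * (1.0 - S * ONE_MINUS_A / (SIGMA * TS0**4))

def surface_T(one_minus_a, lam):
    return (S * one_minus_a / (SIGMA * (1.0 - lam / 2.0)))**0.25

# Equilibrium response to the lambda increase of 0.02 (the ~4 W/m^2 term)
dT = surface_T(ONE_MINUS_A, lam + 0.02) - TS0
print(f"single-step response: {dT:+.2f} C")          # ~ +1.2 C, as the reviewer says

# If that ~1.2 C enters every annual step of a centennial projection as a
# "+/-" uncertainty, root-sum-square propagation gives:
print(f"after 100 steps: +/- {abs(dT) * math.sqrt(100):.1f} C")   # ~ +/-12 C
```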

So, correctly done, the reviewer’s own analysis validates the very manuscript that the reviewer called a “waste of time.” Good job, that.

This reviewer:

  • doesn’t know the meaning of physical uncertainty.
  • doesn’t distinguish between model response (sensitivity) and model error. This mistake amounts to not knowing to distinguish between an energetic perturbation and a physical error statistic.
  • doesn’t know how to express a physical uncertainty.
  • and doesn’t know the difference between single step error and propagated error.

So, once again, climate modelers:

  • neither respect nor understand the distinction between accuracy and precision.
  • are entirely ignorant of propagated error.
  • think the ± bars of propagated error mean the model itself is oscillating.
  • have no understanding of physical error.
  • have no understanding of the importance or meaning of a unique result.

No working physical scientist would fall for any one of those mistakes, much less all of them. But climate modelers do.

And this long essay does not exhaust the multitude of really basic mistakes in scientific thinking these reviewers made.

Apparently, such thinking is critically convincing to certain journal editors.

Given all this, one can understand why climate science has fallen into such a sorry state. Without the constraint of observational physics, it’s open season on finding significations wherever one likes and granting indulgence in science to the loopy academic theorizing so rife in the humanities. [11]

When mere internal precision and fuzzy axiomatics rule a field, terms like consistent with, implies, might, could, possible, likely, carry definitive weight. All are freely available and attachable to pretty much whatever strikes one’s fancy. Just construct your argument to be consistent with the consensus. This is known to happen regularly in climate studies, with special mentions here, here, and here.

One detects an explanation for why political sentimentalists like Naomi Oreskes and Naomi Klein find climate alarm so homey. It is so very opportune to polemics and mindless righteousness. (What is it about people named Naomi, anyway? Are there any tough-minded skeptical Naomis out there? Post here. Let us know.)

In their rejection of accuracy and fixation on precision, climate modelers have sealed their field away from the ruthless indifference of physical evidence, thereby short-circuiting the critical judgment of science.

Climate modeling has left science. It has become a liberal art expressed in mathematics. Call it equationized loopiness.

The inescapable conclusion is that climate modelers are not scientists. They don’t think like scientists, they are not doing science. They have no idea how to evaluate the physical validity of their own models.

They should be nowhere near important discussions or decisions concerning science-based social or civil policies.


References:

1. Lauer, A. and K. Hamilton, Simulating Clouds with Global Climate Models: A Comparison of CMIP5 Results with CMIP3 and Satellite Data. J. Climate, 2013. 26(11): p. 3823-3845.

2. Meinshausen, M., et al., The RCP greenhouse gas concentrations and their extensions from 1765 to 2300. Climatic Change, 2011. 109(1-2): p. 213-241.

The PWM coefficients for the CCSM4 emulations were: RCP 6.0, fCO2 = 0.644, a = 22.76 C; RCP 8.5, fCO2 = 0.651, a = 23.10 C.

3. JCGM, Evaluation of measurement data — Guide to the expression of uncertainty in measurement. 100:2008, Bureau International des Poids et Mesures: Sevres, France.

4. Roy, C.J. and W.L. Oberkampf, A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Engineer., 2011. 200(25-28): p. 2131-2144.

5. Rogelj, J., et al., Probabilistic cost estimates for climate change mitigation. Nature, 2013. 493(7430): p. 79-83.

6. Murphy, J.M., et al., A methodology for probabilistic predictions of regional climate change from perturbed physics ensembles. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2007. 365(1857): p. 1993-2028.

7. Rowlands, D.J., et al., Broad range of 2050 warming from an observationally constrained large climate model ensemble. Nature Geosci, 2012. 5(4): p. 256-260.

8. Stainforth, D.A., et al., Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature, 2005. 433(7024): p. 403-406.

9. Collins, M., et al., Quantifying future climate change. Nature Clim. Change, 2012. 2(6): p. 403-409.

10. Bevington, P.R. and D.K. Robinson, Data Reduction and Error Analysis for the Physical Sciences. 3rd ed. 2003, Boston: McGraw-Hill. 320.

11. Gross, P.R. and N. Levitt, Higher Superstition: The Academic Left and its Quarrels with Science. 1994, Baltimore, MD: Johns Hopkins University Press. May be the most intellectually enjoyable book, ever.

Comments

ferdberple
February 24, 2015 3:21 am

Are Climate Modelers Scientists?
============
what is the formal scientific definition of Climate Change? What is the formal term for Natural Climate Change as opposed to Anthropogenic Climate Change?
Should not the term “Climate Change” refer to all forms of Climate Change, both Natural and Anthropogenic? Why does climate science not follow the standard rule of language, from general to specific?
Why in climate science does the general refer to the specific, while to refer to the general you must use the specific? Where else in science is this done?
How can you do science if you cannot even define your terms, except to violate standard practice in language?

Reply to  ferdberple
February 24, 2015 8:56 pm

Guess you called it, Ferd — non-scientist modelers ended up doing non-science.

Allanj
February 24, 2015 3:50 am

Oh, for the good old days of slide rules. With slide rules you had to think through the problem to get the right magnitude. Now with calculators and computers you can get ten digits of precision with absolutely no understanding of the problem.

Alex
Reply to  Allanj
February 24, 2015 4:22 am

Yeah. Had to know what you were doing with a slide rule. I had many. Mini ones and circular ones. For some reason I never made a mistake.

Chip Javert
Reply to  Alex
February 24, 2015 6:58 pm

You just did with that claim.

PiperPaul
Reply to  Allanj
February 24, 2015 7:43 am

+97 billion (+/- a trillion)
Oops, sorry. I had the climate science math co-processor enabled on my computer.

Joe Crawford
Reply to  Allanj
February 24, 2015 11:08 am

At least with my old K & E Log Log Deci-Trig you had to mentally calculate the decimal point and check the reasonableness of the result. Most kids today, using hand-helds, haven’t the foggiest idea whether the answer they get is even close to the right order of magnitude.

Walt D.
Reply to  Joe Crawford
February 24, 2015 12:16 pm

You forgot the zeroth law of Climate “Science?” – 1 is approximately equal to 10.

February 24, 2015 4:04 am

Pretty devastating really.

Mark Westaby
February 24, 2015 4:05 am

All computer models are wrong but some are useful. Climate “experts” fail to recognise or even accept this basic truth. There are many reasons why computer models should never be used to predict the future — and there are even more when they apply to a complex system such as climate — of which this is a very good example.
It is CRUCIAL that papers such as this be published and there must be a publisher somewhere who recognises the difference between PROPER, objective scientific review and what passes for this in today’s supposedly scientific media.

Alex
Reply to  Mark Westaby
February 24, 2015 4:30 am

Thermal distribution software for printed circuits (which you would imagine to be quite simple) still tells you that it is a simulation and you have to create the circuit and ‘suck it and see’.

Reply to  Mark Westaby
February 24, 2015 10:33 am

Truthseeker – chaotic non-linear systems are extremely difficult for classical physics to handle. The partial differential equations used are mathematics and do correctly describe particular phenomena but they cannot be solved as a unique value, only a numeric estimation. That estimate immediately evolves into a calculation error when numerous calculations involving small increments, like a climate model, are made. After some limited number of steps the errors overwhelm any actual result. But chaotic systems can still be studied scientifically; they just involve a whole series of intractable mathematical problems that haven’t been solved yet. Christopher Essex’s several lectures on the problems with computer numerical modelling and Lorenz’s original article on discovering the “butterfly effect” (through a climate model) are pretty much still up to date as an intro.
This essay on accuracy and precision and how to handle the errors in each shows that climate modelers haven’t really grasped the ideas yet. My physical chemistry class spent the better part of a quarter (72 hours of class) just covering the very basic stuff on errors in measurement and how they ballooned even in very simple calculations.
As the Essex lecture points out, and many of us have also, there is no such thing as a global temperature because the way it is constructed it doesn’t deal with observations but statistical constructs from the data. Using the “GAT” as an input to any kind of simulation becomes a simplistic method of getting wrong answers because the physics involved has nothing to do with the non-existent average temperature but the particular temperature affecting a process in a particular place.

Alx
Reply to  logicalchemist
February 24, 2015 3:53 pm

Well put.
I always thought of this problem as what happens to 2 parallel lines, when one end of the line offsets by a fraction of a degree. How long before the parallel lines become meters apart? Kilometers apart?
So yes tiny errors can result in huge errors down the processing line.
Put another way, “To err is human, to really f**k it up you need a computer.”
The lesson being humans do make errors but computers can then replicate and compound those errors a thousand times a second.

February 24, 2015 4:07 am

There is no way to know which of the simulations actually represents the correct underlying physics. Or whether any of them do. And even if one of them happens to conform to the future behavior of the climate, there’s no way to know it wasn’t a fortuitous accident.

Seeing as the hundreds of other models certainly don’t conform to the future behaviour of the climate – as they don’t all track each other – it must be a fortuitous accident.
If it isn’t, why not just run the one model that works?

whiten
Reply to  M Courtney
February 24, 2015 7:32 pm

@M Courtney
February 24, 2015 at 4:07 am
If it isn’t, why not just run the one model that works?
—————
The simple answer, as far as I can tell, is:
Because you do not get enough warming projected, very little warming actually.
They force the models by a simple trick to generate extra warming.
The warming projected is not a warming due to the GHG effect only; it is an artificially inflated warming.
They know that, because it is done on purpose, not accidentally, even if it may be considered as such, like one of these accidental errors.
They are not interested in doing that. Simply a conflict of interest.
They do not want to know that, the right model that works, because then there would be no AGW projections, if that were done.
That is why the beautiful and perfect work of Pat is rejected by these guys.
cheers

TLM
February 24, 2015 4:09 am

Brilliant piece! The “propagated error” point is a revelation to me. I have really learnt something here.
I read the Bank of England quarterly inflation reports and wondered why all their graphs had the same shape as the graph on the right of your figure. Now I know. They run economic models and clearly understand the uncertainties inherent in them and the effect of propagated error. Interestingly they add a probability function into their “fan charts” so that you can see that the chance that the errors are all in the same direction is lower than if they are more balanced, some positive, some negative. However the central point is that the actual result has a positive chance of being anywhere in the fan – and each quarter they critically compare their previous prediction with the actual outcome. Something environmental scientists seem reluctant to do.
See charts 5.1, 5.2 and 5.11 on the paper linked below:-
http://www.bankofengland.co.uk/publications/Documents/inflationreport/2015/feb5.pdf
I would really like to read this paper. Keep trying, perhaps you should try some Statistics journals rather than Environmental Science journals. They will have less of a vested interest in climate modelling and you might get reviewers who actually know what they are talking about. Maybe you could even get your paper accepted in an Economics journal, possibly rewritten to contrast the cleverness of economic modellers with the stupidity of climate modellers. Everybody responds to flattery!

Reply to  TLM
February 24, 2015 9:05 pm

Thanks, TLM. I’ll keep trying.

urederra
February 24, 2015 4:09 am

This is the best WUWT article I have read in a very long time.
The accuracy vs. precision problem is spot on. I have also noticed that some commenters in here have the same problem. When world temperatures are discussed, some people on both sides of the fence seem to have trouble telling them apart. They complain about error bars in world temperatures when the real problem is lack of accuracy. (Well, also that the concept of temperature of a system which is not in equilibrium is a messy one and does not equate to total energy of the system.)
Also, graph b is the type of graph one would expect when performing any kind of modelling that consists of taking the results of one iteration and use them as the starting point of the next iteration.

emsnews
Reply to  urederra
February 24, 2015 9:59 am

Hard to be accurate with gross temperature data tampering going on.

Reply to  urederra
February 24, 2015 9:07 pm

Thanks, urederra. I’ve yet to meet a climate modeler who’d understand your point.

milodonharlani
February 24, 2015 4:12 am

“Climate science” replaced climatology when NCAR got access to a supercomputer designed to model thermonuclear explosions.

knr
February 24, 2015 4:23 am

One very good question to ask is: if not models, what else?
In reality, when you ask this question you find that the evidence for the whole ‘we are doomed’ game is pretty much rubbish without the models. Given that, you can see why, despite their inabilities, the models have to be defended and promoted so heavily. There are a lot of careers, cash and political ambitions resting on their shoulders.

Alex
Reply to  knr
February 24, 2015 4:47 am

Quite simple really. We are in the age of virtual reality. Most people hate their lives and live in a virtual world. You can have virtual love, relationships, sex (there is an app and equipment for that). Soapies, tv series, movies of every kind to suit every taste. Models are no different to that. The MSM can make high drama out of this and most of the sheep lap it up.
I, for one, am hanging on to the toilet rim and refuse to be flushed down with the rest of the idiots.

Bubba Cow
Reply to  Alex
February 24, 2015 5:20 am

well here in reality that 340W/m2 has moved (expanded?) my model thermo-meter from -30F to -20F since dawn and with a probably pretty good albedo given the whiteness of my view – but blue skies for the transparent greenhouse so nada in the backradiation scam
On the interior, firewood is oxidizing nicely.

Jim Francisco
Reply to  Alex
February 24, 2015 11:07 am

I’m going kicking and scratching all the way too, Alex.

Jim Francisco
Reply to  Alex
February 24, 2015 12:16 pm

Bubba – you should get out of there! -30 is not fit for man nor beast. And that’s C or F degrees.

Reply to  knr
February 24, 2015 6:24 am

“One very good question to ask is , if not models what else?”
Ouija board,
Magic 8 ball,
tea leaves,
Mom’s intuition,
Great Zoltar
– I’m sure there are many others of comparable projection/prediction ability.

Quinn the Eskimo
Reply to  knr
February 24, 2015 7:03 am

EPA formally stated in the Endangerment Finding for GHGs that the attribution of warming to humans rests on 3 lines of evidence: 1. Temperature Records, 2. Physical Understanding of Climate, and 3. Models. They claimed >90% confidence based on these 3 lines of evidence. AR5 bumped that to 95%.
Nos. 2 and 3 are total crap.
Hot spot, anyone?
No. 1 – we are well within natural variability and so there is no basis for an inference that humans have caused an excursion beyond natural variability.

Reply to  knr
February 24, 2015 9:08 pm

I call it my trillion dollar paper, for exactly that reason, knr. 🙂

SanityP
February 24, 2015 4:44 am

Your work obviously belongs in a mathematical/statistical journal, not in “climate science”.

Reply to  SanityP
February 24, 2015 9:11 pm

Lots of climate modelers have their degrees in mathematics, SanityP. Science is pretty grubby to them, what with all that messy observational stuff and materiality (dirt). My instinct is to avoid such journals.

gaelansclark
February 24, 2015 4:48 am

Can you name the reviewers?
There are so-called “name and shame” campaigns that go after those who do not support the “consensus” position…. Why can we not know who it is that has zero understanding of their own models?

Alex
Reply to  gaelansclark
February 24, 2015 4:55 am

Closed shop. The reviewers are ‘anonymous’. Nice thought though.

whiten
Reply to  Alex
February 24, 2015 7:40 pm

Oh come on, Mann could not be there…..or could he!…:-)
cheers

Urederra
Reply to  gaelansclark
February 24, 2015 4:55 am

No, you are not allowed to know the name of the reviewers when you submit a paper to a journal.

Alex
Reply to  Urederra
February 24, 2015 5:05 am

Sometimes the reviewers know each other and the person presenting the papers. Draw your own conclusion from that.

rd50
Reply to  Urederra
February 24, 2015 4:58 pm

Close but no cigar. Yes you are allowed, indeed some journals now have “open peer review”. But these are exceptions, I will grant you this.
However, the best example of open peer review I can give in this AGW field is the original paper of the “Father” of AGW.
The title of the paper “The Artificial Production of Carbon Dioxide and its Influence on Temperature”, published in 1938 in Q.J.R.M.S (certainly a top scientific journal) by G.S. Callendar.
You can download a copy of it for free from here:
http://onlinelibrary.wiley.com/doi/10.1002/qj.49706427503/pdf
You can then read the comments of the reviewers, as well as their names, quite a few of them, under the Discussion of the paper. Then you can read the answers from Callendar to them. Surprise?
I certainly do not want to go of topic about peer review. But I also had a surprising experience.
In 1971, I submitted a review article and received comments from one reviewer in the typical anonymous fashion. When the article was published I was very surprised to see the name of the reviewer printed on the title page with the note that he was the reviewer of the article.
I am not sure, when I wrote the article I certainly was not yet established as a scientist in this particular field. He was over 60 and well respected.
Then the article was and is still cited and became a “fixture” in that field. Another surprise: several authors when citing the article added his name (I think by honest mistake) as a co-author!
A few years later, I had the pleasure of meeting him as we served on an advisory committee. We had a few drinks, a nice dinner and he was still teasing me about a small part of the article he did not like. I teased him about being a false co-author. So much for peer review. Never perfect, but needed.
My impression now is that with the Internet, we are seeing major changes in scientific publishing and we will also see major changes in peer reviews and open comments.
By the way, if you read the paper, you will see that the Father loved the increase in CO2!

whiten
Reply to  Urederra
February 24, 2015 7:44 pm

rd50
February 24, 2015 at 4:58 pm .
Funny, the Father was not even an AGWer, and certainly he would be mad if he was considered as such…..:-)
cheers

Reply to  gaelansclark
February 24, 2015 9:12 pm

Alex is right, gaelanclark. The reviewers were anonymous.

TRG
February 24, 2015 4:59 am

So, I wonder how many of the commenters here actually understand what Pat Frank wrote about. On casual reading, it was over my head.

Alex
Reply to  TRG
February 24, 2015 5:08 am

You want the truth? You can’t handle the truth. Develop a suspicious nature and read a little more widely. It’s something that is pervasive in all sciences.

Reply to  TRG
February 24, 2015 6:37 am

If I read this correctly…
Basically, he wrote a paper that pointed out that accuracy of models (how close they are to reality) is not the same as precision (how much they wobble around – which is a function of the models, not the real world).
The peer reviewers got confused between the two ideas and thus, conveniently, rejected the paper.
In addition he points out that errors in the start values (or maybe the model assumptions) are iterative, they are repeated. As such they add up.
“You owe me a fiver ± a friendly pint” is fine. No-one keeps count of the friendly pint.
But if the same thing happens day after day post-work then you can feel the resentment growing. That “friendly pint” becomes significant.
Yet the peer reviewers seems to think that the wobbles around the start are a limit to the number of friendly pints so they can be ignored. They are wrong – it repeats and adds up.
He also pointed out that error boundaries (how far from physical reality the models are expected to be) are not the same as the range of wobbles (precision) as the wobbling is not wobbling about the real world; it wobbles about what the models are centred on. Again the reviewers get a little confused. Apart from the one he thinks isn’t a “Climate Scientist” and he therefore thinks may be a competent scientist.
The rest was further illustrations of that theme. If I understood the author correctly.
Hope that helps.

Reply to  M Courtney
February 24, 2015 3:16 pm

Thank you, M Courtney 😉

whiten
Reply to  M Courtney
February 24, 2015 7:57 pm

Perfect explanation. MC..:-)
If you allow to add me a single line of conclusion….please.:-)
The modelers were told by Pat in a very fine and clear way that obviously their models break the very first Commandment for the models….and the answer basically was that that does not matter at all…..that is how they like their models regardless how wrong and perverse that could be.
cheers

Reply to  TRG
February 24, 2015 9:13 pm

M Courtney gave a good basic description, TRG. I, too, hope that helped. Thanks, M Courtney!

garymount
February 24, 2015 5:13 am

Interesting new developments in the computing world with people building supercomputers with cheap $35 computers arranged into computing nodes. Here is an example of a 32 node compute-cluster using the Raspberry Pi version 1 (version 2 has 4 cores instead of only one core for version one, so could total 128 computing cores for the same cost):
Imagine what we the skeptics might have available to us before the end of this decade to investigate (run) climate models on our own.

Alex
Reply to  garymount
February 24, 2015 5:31 am

I’ve been looking at that stuff. Looks cool. The possibility to stream live (10 minutes) the output of satellite data. It will probably upset some people. HaHa

Reply to  garymount
February 24, 2015 8:10 am

Thanks, an excellent post, Pat Frank.
Another nail in the IPCC coffin.

Reply to  Andres Valencia
February 24, 2015 9:14 pm

Thanks, Andres.

Reply to  garymount
February 24, 2015 8:12 am

I hope these computer builders don’t waste their time running useless IPCC models.

DirkH
Reply to  garymount
February 24, 2015 11:59 am

Nice toy but wrong approach for a number cruncher. Highest performance in TFlops/Watt – and price as well – can only be achieved with high concentrations of actual computing pipelines, SIMD arrays, like the NVidia cards or SoC’s like Xilinx ZynQ – which has 250 DSP slices embedded in an FPGA (and 2 ARM cores for controlling the thing).

garymount
Reply to  DirkH
February 24, 2015 4:41 pm

These things have GPU’s on them that could also be used for computing. It is an inexpensive way for a person like me who would want to try out code for climate models.
On the other hand, Intel has an 18 core hyper-threaded chip that can run 36 threads simultaneously, but is rather expensive – in the thousand dollar range.
Microsoft is building a Windows 10 variant for the Raspberry Pi. When that becomes available, I will seriously look into building my own compute cluster.

Ian W
Reply to  garymount
February 24, 2015 5:48 pm

They are nice toys, but the problems raised in this post still hold good. It is straightforward Lorenz: the start data are inaccurate and the models lack the capability to model everything in the chaotic climate system. Even the IPCC said: “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
Small errors in the initial state propagate but in a chaotic system they will not propagate uniformly. As the number of inaccurate variables is close to infinite and the chaotic system has “unknown unknowns” anyone who thinks that it is possible to model the climate with any level of correctness does not understand the climate.

Jonathan Abbott
February 24, 2015 5:20 am

A very good and interesting article. To a large extent I share your despair, but would encourage you to find other possible places to publish. My overview is that those working on GCMs have developed ‘ignorant expertise’: they have become expert in their own paradigm and groupthink but divorced from the tenets of science as a whole.

Reply to  Jonathan Abbott
February 24, 2015 9:17 pm

Thanks, Jonathan. I apologize for communicating despair; didn’t mean to. The ms has been submitted again, and I remain cautiously optimistic. You’re right about the modelers. Some came across as quite upset that I should suggest a means of analysis standard in physical science, but not standard in their field.

February 24, 2015 5:29 am

As I have used the phrase ‘climate-models-can’t-predict-squat’ in C3 articles multiple times, it’s a guilty pleasure to read an article that addresses the issue head-on.
Being a retired biz executive, the climate model output has always reminded me of marketing managers spending way too much time devising Excel algorithms that provide “empirical” evidence, with the end result always being that a new marketing campaign means total domination of a given market within a few years.
And these fairly smart sales/marketing manager types would truly come to believe their simulated outputs were the probable future reality. (This type of simulation “science” was also used to fertilize the crazed tech boom frenzy that ended badly with the severe 2000 dot-com bubble bust – instead of sales projections, it was the grandiose simulated predictions of ‘eyeballs captured’ that fed the investors’ appetites.)
Alas, the climate modelers are no different than the self-deceived jokers in the marketing/sales departments, who made faulty sales projections based on complex Excel formulas without an understanding/appreciation of the underlying nuances and unknown macro, micro, behavioral and innovation economics at work, globally, 24/7.
Climate modelers as scientists? Nope. Instead, they’re the climate science community’s jokers, closely related to their always failing brethren in the business world.

Jim Francisco
Reply to  C3 Editor
February 24, 2015 11:37 am

Sometimes I am amazed that we as a society ever got complicated machines like cars and airplanes built on such a large scale with such craziness going on. It seemed to me that in my world those who could not do their technical job very well realized their inabilities and therefore turned their attention to becoming managers. Many of them succeeded. The problem was that they were not good with determining who were technically competent and who were not. Eventually you are part of a group with a rightfully deserved bad reputation.

Rob Dawg
February 24, 2015 6:00 am

The dysfunction is more fundamental. Climate investigators don’t even know the difference between measurements, data and information.

February 24, 2015 6:04 am

Your manuscript rejection is largely due to claiming the naked emperor has no clothes on. How dare you challenge the cargo cult climate scientists.

Gary
February 24, 2015 6:09 am

The problem with swallowing the blue pill is that you no longer recognize the possibility of a red pill.
https://en.wikipedia.org/wiki/Red_pill_and_blue_pill
Modelers live in virtual reality so they can’t see what’s really happening. Computer programs are under their control and so give the illusion of mastery. Your frustration is akin to that of all teachers whose pupils just don’t have the capacity to understand. Thank you, though, for putting this on the record rather than just letting it go.

February 24, 2015 6:15 am

As Frank suggests between the lines, in science the purpose of a model is to make predictions of real world data. Science is a mapping on data to data.
The purpose of the climate modelers is to get published in an approved journal via peer review. To the latter, accuracy and precision mean no more than consistency in model results, and they have programs to make their models consistent, programs which have, by and large, been successful. GCMs predict future climate, but these predictions can and are never validated. They are near enough to require urgent funding, but far enough away to be untestable in our lifetimes.
In a brief, lucid moment, Richard Horton, Editor of Lancet, explained the modern publication process:
The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability – not the validity – of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed [jiggered, not repaired], often insulting, usually ignorant, occasionally foolish, and frequently wrong.
Science is not about voting. It is not about peer-review, publication, or consensuses. These are subjective. It’s about predictive power. Science is the (strictly) objective branch of knowledge.
Fortunately for science and society, and unfortunately for the climate modelers, the GCMs contain one accessible, implicit prediction: Climate Sensitivity. Data from the last decade and a half invalidate that prediction. The toast fell jelly side up.
Climate models fail – not because they are computer models, but because they butcher the physics of climate. They are incompetent. These postmodern modelers talk about feedback, but then leave out the most powerful feedback in all of climate, total cloud albedo, the number nominally put at about 31%, and which is in fact variable, gating the Sun on and off. It is a positive feedback, amplifying solar radiation (the burnoff effect) and a negative feedback, mitigating warming from any cause (from the Clausius-Clapeyron effect).
These top level aspects of the climate story can be widely understood, even reaching the general public.

rgbatduke
February 24, 2015 6:22 am

Good luck with that, Pat. You are still being way too nice to them. Additional points:
* They treat the PPE envelope as if it is error when it is not as you say. But they do not examine the structure of the individual traces, which themselves often have absolutely absurd variability and the wrong autocorrelation. I have remarked many times on what the wrong autocorrelation means physically via the fluctuation dissipation theorem. In a nutshell, if the autocorrelation times are not correct, then the physics of the open system is provably not correct, end of story.
* The models do not conserve energy per timestep. This means that at the end of every timestep the system has to be renormalized or it will run away. But they cannot fully renormalize it, or else the models would not run away the way they need them to. They therefore have to renormalize the energy balance enough to stabilize the model but in a way that permits GHGs to force the solution to grow over time. I won’t say that it is impossible to perform this sort of numerical magic without introducing all sorts of human bias into the result — I’ll just say that I am deeply skeptical about the entire process. It’s like solving a stiff set of coupled ODEs (very much like it, in fact, almost identical to it) so that it sort of diverges but doesn’t really diverge. How can you be sure that the result is actually a solution and not your beliefs about the solution?
* The Multi-Model-Mean is an abomination, and all by itself proves utter ignorance about statistics in climate modeling.
* In the end, how are the models any different from a simple direct physical computation of GHG forcing? They are obviously set up to have a median output around the centroid prediction of the usual logarithmic climate sensitivity, and everything else is just model-induced noise around this obvious trend. I could (and have) produced the centroid line just fitting and extrapolating the climate data in a one significant parameter purely statistical model fitting HadCRUT4. The PPE output is mere window dressing designed to make this fit somehow more plausible, or to emphasize that it COULD warm as much as 6 C — if there were no negative feedbacks in the system and all of the dice used in the model came up boxcars a hundred times in a row.
rgb

Walt D.
Reply to  rgbatduke
February 24, 2015 9:49 am

“The Multi-Model-Mean is an abomination, and all by itself proves utter ignorance about statistics in climate modeling.” You mean that 15 wrongs don’t make a right? 🙁 It would seem that if 15 models all give different results, and we consider differences of 0.02C to be significant, then at least 14 of them have to be wrong.

Harold
Reply to  Walt D.
February 24, 2015 3:54 pm

No, he means 5 possums, 3 raccoons, 4 starfish and 3 spiders don’t make an elephant.

Reply to  rgbatduke
February 24, 2015 9:24 pm

I’ve often wondered, rgb, why you don’t write a critical article. You’re so totally qualified, and you (unlike me) understand the physics and math right down to the bedrock. I’m still wondering. It would go nuclear. Why not do it? Think of the children. 🙂

February 24, 2015 6:53 am

Does this stem from modelers’ dubious claim that chaos averages out?

CaligulaJones
February 24, 2015 6:54 am

Seems peer review has mutated into “friend review” for bad science, and “enemy review” for protection of paradigms. And funding.

Quibble
February 24, 2015 7:00 am

Typo: “It’s impossible that climate models can ever have resolved an anthropogenic greenhouse signal”
Sorry, it’s all I can contribute.


Ralph Kramden
February 24, 2015 7:04 am

The Catastrophic Anthropogenic Global Warming (CAGW) theory has so many obvious flaws that in my opinion there are only two reasons someone might believe in it. Either they are being paid to or they’re not the sharpest tool in the box, i.e. they wear a polar bear suit to demonstrations.
