Are Climate Modelers Scientists?

Guest essay by Pat Frank

For going on two years now, I’ve been trying to publish a manuscript that critically assesses the reliability of climate model projections. The manuscript has been submitted twice to each of two leading climate journals and rejected all four times, on the advice of nine of ten reviewers. More on that below.

The analysis propagates climate model error through global air temperature projections, using a formalized version of the “passive warming model” (PWM) GCM emulator reported in my 2008 Skeptic article. Propagation of error through a GCM temperature projection reveals its predictive reliability.

Those interested can consult the invited poster (2.9 MB pdf) I presented at the 2013 AGU Fall Meeting in San Francisco. Error propagation is a standard way to assess the reliability of an experimental result or a model prediction. However, climate models are never assessed this way.

Here’s an illustration: the Figure below shows what happens when the average ±4 Wm-2 long-wave cloud forcing error of CMIP5 climate models [1], is propagated through a couple of Community Climate System Model 4 (CCSM4) global air temperature projections.

CCSM4 is a CMIP5-level climate model from NCAR, where Kevin Trenberth works, and was used in the IPCC AR5 of 2013. Judy Curry wrote about it here.

[Figure: CCSM4 RCP 6.0 and 8.5 anomaly projections; panel a, PWM emulations; panel b, propagated uncertainty envelopes]

In panel a, the points show the CCSM4 anomaly projections of the AR5 Representative Concentration Pathways (RCP) 6.0 (green) and 8.5 (blue). The lines are the PWM emulations of the CCSM4 projections, made using the standard RCP forcings from Meinshausen. [2] The CCSM4 RCP forcings may not be identical to the Meinshausen RCP forcings. The shaded areas are the range of projections across all AR5 models (see AR5 Figure TS.15). The CCSM4 projections are in the upper range.

In panel b, the lines are the same two CCSM4 RCP projections. But now the shaded areas are the uncertainty envelopes resulting when ±4 Wm-2 CMIP5 long wave cloud forcing error is propagated through the projections in annual steps.

The uncertainty is so large because ±4 W m-2 of annual long-wave cloud forcing error is ±114× larger than the annual average 0.035 W m-2 forcing increase from GHG emissions since 1979. Typical error bars for CMIP5 climate model projections are about ±14 C after 100 years and ±18 C after 150 years.

It’s immediately clear that climate models are unable to resolve any thermal effect of greenhouse gas emissions or tell us anything about future air temperatures. Climate models cannot ever have resolved an anthropogenic greenhouse signal; not now, nor at any time in the past.

Propagation of errors through a calculation is a simple idea. It’s logically obvious. It’s critically important. It gets pounded into every single freshman physics, chemistry, and engineering student.
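As a reminder of what that freshman lesson teaches: independent errors combine in quadrature, not by simple addition. A minimal sketch in Python (the measurement values here are invented, purely for illustration):

```python
import math

def propagate_sum(uncertainties):
    """Uncertainty of a sum of independent quantities: the root-sum-square."""
    return math.sqrt(sum(u ** 2 for u in uncertainties))

# Example: adding three lengths, each measured as 10.0 +/- 0.2 cm.
# The combined uncertainty is not 0.6 cm but sqrt(3) * 0.2 ~ 0.35 cm.
u_total = propagate_sum([0.2, 0.2, 0.2])
```

The same rule governs any multi-step calculation: each step contributes its own uncertainty, and the total grows as the root-sum-square of the contributions.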

And it has escaped the grasp of every single Ph.D. climate modeler I have encountered, in conversation or in review.

That brings me to the reason I’m writing here. My manuscript has been rejected four times; twice each from two high-ranking climate journals. I have responded to a total of ten reviews.

Nine of the ten reviews were clearly written by climate modelers, were uniformly negative, and recommended rejection. One reviewer was clearly not a climate modeler. That one recommended publication.

I’ve had my share of scientific debates, a couple of them not entirely amiable. My research (with colleagues) has overthrown four ‘ruling paradigms,’ so I’m familiar with how scientists behave when they’re challenged. None of that prepared me for the standards at play in climate science.

I’ll start with the conclusion and follow with the supporting evidence: never, in all my experience with peer-reviewed publishing, have I encountered such incompetence in a reviewer. Much less incompetence evidently common to a class of reviewers.

The shocking lack of competence I encountered makes public exposure a corrective civic good.

Physical error analysis is critical to all of science, especially experimental physical science. It is not too much to call it central.

Result ± error tells what one knows. If the error is larger than the result, one doesn’t know anything. Geoff Sherrington has been eloquent about the hazards and trickiness of experimental error.

All of the physical sciences hew to these standards. Physical scientists are bound by them.

Climate modelers do not and by their lights are not.

I will give examples of all of the following concerning climate modelers:

  • They neither respect nor understand the distinction between accuracy and precision.
  • They understand nothing of the meaning or method of propagated error.
  • They think physical error bars mean the model itself is oscillating between the uncertainty extremes. (I kid you not.)
  • They don’t understand the meaning of physical error.
  • They don’t understand the importance of a unique result.

Bottom line? Climate modelers are not scientists. Climate modeling is not a branch of physical science. Climate modelers are unequipped to evaluate the physical reliability of their own models.

What follows is verbatim reviewer transcript, quoted in italics. Every idea below is presented as the reviewer meant it. No quote is deprived of its context, and none has been truncated into something different from what the reviewer meant.

And keep in mind that these are arguments that certain editors of certain high-ranking climate journals found persuasive.

1. Accuracy vs. Precision

The distinction between accuracy and precision is central to the argument presented in the manuscript, and is defined right in the Introduction.

The accuracy of a model is the difference between its predictions and the corresponding observations.

The precision of a model is the variance of its predictions, without reference to observations.

Physical evaluation of a model requires an accuracy metric.

There is nothing more basic to science itself than the critical distinction of accuracy from precision.
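A toy illustration of the distinction, using numbers invented purely for demonstration: an ensemble of predictions can be tightly clustered (precise) while sitting far from the observations (inaccurate).

```python
import statistics

observations = [14.0, 14.1, 14.2, 14.1, 14.3]  # hypothetical "truth"
predictions  = [15.0, 15.1, 15.0, 15.1, 15.0]  # tightly clustered model runs

# Precision: the spread of the predictions, no reference to observations.
precision = statistics.stdev(predictions)

# Accuracy: the root-mean-square difference from the observations.
accuracy = (sum((p - o) ** 2 for p, o in zip(predictions, observations))
            / len(observations)) ** 0.5

# The ensemble is precise (stdev ~0.05 K) yet inaccurate (RMSE ~0.9 K):
# a small spread says nothing about closeness to the truth.
```

Reporting the small spread as the uncertainty of such an ensemble would understate the real error by more than an order of magnitude.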

Here’s what climate modelers say:

“Too much of this paper consists of philosophical rants (e.g., accuracy vs. precision) …”

“[T]he author thinks that a probability distribution function (pdf) only provides information about precision and it cannot give any information about accuracy. This is wrong, and if this were true, the statisticians could resign.”

“The best way to test the errors of the GCMs is to run numerical experiments to sample the predicted effects of different parameters…”

“The author is simply asserting that uncertainties in published estimates [i.e., model precision – P] are not ‘physically valid’ [i.e., not accuracy – P]- an opinion that is not widely shared.”

Not widely shared among climate modelers, anyway.

The first reviewer actually scorned the distinction between accuracy and precision. This, from a supposed scientist.

The remainder variously declare that model variance, i.e., precision, equals physical accuracy.

The accuracy-precision difference was extensively documented to relevant literature in the manuscript, e.g., [3, 4].

The reviewers ignored that literature. The final reviewer dismissed it as mere assertion.

Every climate modeler reviewer who addressed the precision-accuracy question similarly failed to grasp it. I have yet to encounter one who understands it.

2. No understanding of propagated error

“The authors claim that published projections do not include ‘propagated errors’ is fundamentally flawed. It is clearly the case that the model ensemble may have structural errors that bias the projections.”

I.e., the reviewer supposes that model precision = propagated error.

“The repeated statement that no prior papers have discussed propagated error in GCM projections is simply wrong (Rogelj (2013), Murphy (2007), Rowlands (2012)).”

Let’s take the reviewer examples in order:

Rogelj (2013) concerns the economic costs of mitigation. Their Figure 1b includes a global temperature projection plus uncertainty ranges. The uncertainties, “are based on a 600-member ensemble of temperature projections for each scenario…” [5]

I.e., the reviewer supposes that model precision = propagated error.

Murphy (2007) write, “In order to sample the effects of model error, it is necessary to construct ensembles which sample plausible alternative representations of earth system processes.” [6]

I.e., the reviewer supposes that model precision = propagated error.

Rowlands (2012) write, “Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. “ and go on to state that, “Perturbed-physics ensembles offer a systematic approach to quantify uncertainty in models of the climate system response to external forcing, albeit within a given model structure.” [7]

I.e., the reviewer supposes that model precision = propagated error.

Not one of this reviewer’s examples of propagated error includes any propagated error, or even mentions propagated error.

Not only that, but not one of the examples discusses physical error at all. It’s all model precision.

This reviewer doesn’t know what propagated error is, what it means, or how to identify it. This reviewer also evidently does not know how to recognize physical error itself.

Another reviewer:

“Examples of uncertainty propagation: Stainforth, D. et al., 2005: Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature 433, 403-406.

“M. Collins, R. E. Chandler, P. M. Cox, J. M. Huthnance, J. Rougier and D. B. Stephenson, 2012: Quantifying future climate change. Nature Climate Change, 2, 403-409.”

Let’s find out: Stainforth (2005) includes three Figures; every single one of them presents error as projection variation. [8]

Here’s their Figure 1:

clip_image004

Original Figure Legend: “Figure 1 Frequency distributions of T g (colours indicate density of trajectories per 0.1 K interval) through the three phases of the simulation. a, Frequency distribution of the 2,017 distinct independent simulations. b, Frequency distribution of the 414 model versions. In b, T g is shown relative to the value at the end of the calibration phase and where initial condition ensemble members exist, their mean has been taken for each time point.”

Here’s what they say about uncertainty: “[W]e have carried out a grand ensemble (an ensemble of ensembles) exploring uncertainty in a state-of-the-art model. Uncertainty in model response is investigated using a perturbed physics ensemble in which model parameters are set to alternative values considered plausible by experts in the relevant parameterization schemes.”

There it is: uncertainty is directly represented as model variability (density of trajectories; perturbed physics ensemble).

The remaining figures in Stainforth (2005) derive from this one. Propagated error appears nowhere and is nowhere mentioned.

Reviewer supposition: model precision = propagated error.

Collins (2012) state that adjusting model parameters so that projections approach observations is enough to “hope” that a model has physical validity. Propagation of error is never mentioned. Collins Figure 3 shows physical uncertainty as model variability about an ensemble mean. [9] Here it is:

clip_image006

Original Legend: “Figure 3 | Global temperature anomalies. a, Global mean temperature anomalies produced using an EBM forced by historical changes in well-mixed greenhouse gases and future increases based on the A1B scenario from the Intergovernmental Panel on Climate Change’s Special Report on Emission Scenarios. The different curves are generated by varying the feedback parameter (climate sensitivity) in the EBM. b, Changes in global mean temperature at 2050 versus global mean temperature at the year 2000, … The histogram on the x axis represents an estimate of the twentieth-century warming attributable to greenhouse gases. The histogram on the y axis uses the relationship between the past and the future to obtain a projection of future changes.”

Collins 2012, part a: model variability itself; part b: model variability (precision) represented as physical uncertainty (accuracy). Propagated error? Nowhere to be found.

So, once again, not one of this reviewer’s examples of propagated error actually includes any propagated error, or even mentions propagated error.

It’s safe to conclude that these climate modelers have no concept at all of propagated error. They apparently have no concept whatever of physical error.

Every single time any of the reviewers addressed propagated error, they revealed a complete ignorance of it.

3. Error bars mean model oscillation – wherein climate modelers reveal a fatal case of naive-freshman-itis.

“To say that this error indicates that temperatures could hugely cool in response to CO2 shows that their model is unphysical.”

“[T]his analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states.”

“Indeed if we carry such error propagation out for millennia we find that the uncertainty will eventually be larger than the absolute temperature of the Earth, a clear absurdity.”

“An entirely equivalent argument [to the error bars] would be to say (accurately) that there is a 2K range of pre-industrial absolute temperatures in GCMs, and therefore the global mean temperature is liable to jump 2K at any time – which is clearly nonsense…”

Got that? These climate modelers think that “±” error bars imply the model itself is oscillating (liable to jump) between the error bar extremes.

Or that the bars from propagated error represent physical temperature itself.

No sophomore in physics, chemistry, or engineering would make such an ignorant mistake.

But Ph.D. climate modelers invariably have done so. One climate modeler in the audience did so verbally, during Q&A after my seminar on this analysis.

The worst of it is that both the manuscript and the supporting information document explained that error bars represent an ignorance width. Not one of these Ph.D. reviewers gave any evidence of having read any of it.

5. Unique Result – a concept unknown among climate modelers.

Do climate modelers understand the meaning and importance of a unique result?

“[L]ooking the last glacial maximum, the same models produce global mean changes of between 4 and 6 degrees colder than the pre-industrial. If the conclusions of this paper were correct, this spread (being so much smaller than the estimated errors of +/- 15 deg C) would be nothing short of miraculous.”

“In reality climate models have been tested on multicentennial time scales against paleoclimate data (see the most recent PMIP intercomparisons) and do reasonably well at simulating small Holocene climate variations, and even glacial-interglacial transitions. This is completely incompatible with the claimed results.”

“The most obvious indication that the error framework and the emulation framework presented in this manuscript is wrong is that the different GCMs with well-known different cloudiness biases (IPCC) produce quite similar results, albeit a spread in the climate sensitivities.”

Let’s look at where these reviewers get such confidence. Here’s an example from Rowlands (2012) of what models produce. [7]

[Figure: Rowlands (2012), Figure 1 — thousands of HadCM3L perturbed-physics temperature projections under SRES A1B]

Original Legend: “Figure 1 | Evolution of uncertainties in reconstructed global-mean temperature projections under SRES A1B in the HadCM3L ensemble.” [7]

The variable black line in the middle of the group represents the observed air temperature. I added the horizontal black lines at 1 K and 3 K, and the vertical red line at year 2055. Part of the red line is in the original figure, as the precision uncertainty bar.

This Figure displays thousands of perturbed physics simulations of global air temperatures. “Perturbed physics” means that model parameters are varied across their range of physical uncertainty. Each member of the ensemble is of equivalent weight. None of them are known to be physically more correct than any of the others.

The physical energy-state of the simulated climate varies systematically across the years. The horizontal black lines show that multiple physical energy states produce the same simulated 1 K or 3 K anomaly temperature.

The vertical red line at year 2055 shows that the identical physical energy-state (the year 2055 state) produces multiple simulated air temperatures.

These wandering projections do not represent natural variability. They represent how parameter magnitudes, varied across their uncertainty ranges, affect the temperature simulations of the HadCM3L model itself.

The Figure fully demonstrates that climate models are incapable of producing a unique solution to any climate energy-state.

That means simulations close to observations are not known to accurately represent the true physical energy-state of the climate. They just happen to have opportunistically wonderful offsetting errors.

That means, in turn, the projections have no informational value. They tell us nothing about possible future air temperatures.

There is no way to know which of the simulations actually represents the correct underlying physics. Or whether any of them do. And even if one of them happens to conform to the future behavior of the climate, there’s no way to know it wasn’t a fortuitous accident.

Models with large parameter uncertainties cannot produce a unique prediction. The reviewers’ confident statements show they have no understanding of that, or of why it’s important.
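The non-uniqueness point can be sketched with a toy “perturbed physics” model. The linear form, parameter ranges, and forcing value below are all hypothetical, chosen only for illustration: many distinct parameter sets reproduce the same simulated anomaly, so matching an observed anomaly identifies nothing about the underlying physics.

```python
import random

random.seed(0)

# Toy "perturbed physics": anomaly = sensitivity * forcing + bias, with
# (sensitivity, bias) drawn across their assumed uncertainty ranges.
forcing = 3.7  # hypothetical forcing, W m^-2
members = [(random.uniform(0.3, 1.2), random.uniform(-2.0, 2.0))
           for _ in range(10_000)]

# Count the parameter pairs that land within 0.05 K of a 1 K "observed" anomaly.
target = 1.0
matches = [(s, b) for s, b in members if abs(s * forcing + b - target) < 0.05]

# Many distinct (sensitivity, bias) pairs reproduce the same anomaly, so
# agreement with the target does not constrain either parameter.
```

In this sketch, over a hundred different parameter pairs, spanning a wide range of sensitivities, all land on the same anomaly: agreement is cheap, uniqueness is absent.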

Now suppose Rowlands et al. tuned the parameters of the HadCM3L model so that it precisely reproduced the observed air temperature line.

Would it mean the HadCM3L had suddenly attained the ability to produce a unique solution to the climate energy-state?

Would it mean the HadCM3L was suddenly able to reproduce the correct underlying physics?

Obviously not.

Tuned parameters merely obscure uncertainty. They hide the unreliability of the model. It is no measure of accuracy that tuned models produce similar projections, or that their projections are close to observations. Tuning parameter sets merely offsets errors and produces a false and tendentious precision.

Every single recent, Holocene, or glacial-era temperature hindcast is likewise non-unique. Not one of them validates the accuracy of a climate model. Not one of them tells us anything about any physically real global climate state. Not one single climate modeler reviewer evidenced any understanding of that basic standard of science.

Any physical scientist would (should) know this. The climate modeler reviewers uniformly do not.

6. An especially egregious example in which the petard self-hoister is unaware of the air underfoot.

Finally, I’d like to present one last example. The essay is already long, and yet another instance may be overkill.

But I finally decided it is better to risk reader fatigue than to not make a public record of what passes for analytical thinking among climate modelers. Apologies if it’s all become tedious.

This last truly demonstrates the abysmal understanding of error analysis at large in the ranks of climate modelers. Here we go:

“I will give (again) one simple example of why this whole exercise is a waste of time. Take a simple energy balance model, solar in, long wave out, single layer atmosphere, albedo and greenhouse effect. i.e. sigma Ts^4 = S (1-a) /(1 -lambda/2) where lambda is the atmospheric emissivity, a is the albedo (0.7), S the incident solar flux (340 W/m^2), sigma is the SB coefficient and Ts is the surface temperature (288K).

“The sensitivity of this model to an increase in lambda of 0.02 (which gives a 4 W/m2 forcing) is 1.19 deg C (assuming no feedbacks on lambda or a). The sensitivity of an erroneous model with an error in the albedo of 0.012 (which gives a 4 W/m^2 SW TOA flux error) to exactly the same forcing is 1.18 deg C.

“This the difference that a systematic bias makes to the sensitivity is two orders of magnitude less than the effect of the perturbation. The author’s equating of the response error to the bias error even in such a simple model is orders of magnitude wrong. It is exactly the same with his GCM emulator.”

The “difference” the reviewer is talking about is 1.19 C – 1.18 C = 0.01 C. The reviewer supposes that this 0.01 C is the entire uncertainty produced by the model due to a 4 Wm-2 offset error in either albedo or emissivity.
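The reviewer’s arithmetic is easy to reproduce. A sketch in Python, assuming the review’s “(0.7)” refers to the absorbed fraction (1 − a), since that is the reading that makes Ts = 288 K come out of the stated equation:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 340.0        # incident solar flux, W m^-2, as stated in the review

def surface_temp(absorbed_fraction, lam):
    """Solve sigma * Ts^4 = S * (1 - a) / (1 - lam/2) for Ts."""
    return (S * absorbed_fraction / (SIGMA * (1.0 - lam / 2.0))) ** 0.25

# Calibrate lam so the base model gives Ts = 288 K with (1 - a) = 0.7.
f = 0.7
lam0 = 2.0 * (1.0 - S * f / (SIGMA * 288.0 ** 4))

# Response to the reviewer's +0.02 emissivity increase: ~1.19 C.
dT_base = surface_temp(f, lam0 + 0.02) - surface_temp(f, lam0)

# "Erroneous" model: albedo off by 0.012 (~4 W m^-2 of SW flux), same forcing:
# response ~1.18 C, a difference of ~0.01 C from the unbiased model.
f_err = f - 0.012
dT_err = surface_temp(f_err, lam0 + 0.02) - surface_temp(f_err, lam0)
```

This reproduces the reviewer’s 1.19 C and 1.18 C and their ~0.01 C difference, which makes it easy to see exactly what those numbers are and are not.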

But it’s not.

First reviewer mistake: If 1.19 C or 1.18 C are produced by a 4 Wm-2 offset forcing error, then 1.19 C or 1.18 C are offset temperature errors. Not sensitivities. Their tiny difference, if anything, confirms the error magnitude.

Second mistake: The reviewer doesn’t know the difference between an offset error (a statistic) and temperature (a thermodynamic magnitude). The reviewer’s “sensitivity” is actually “error.”

Third mistake: The reviewer equates a 4 W/m2 energetic perturbation to a ±4 W/m2 physical error statistic.

This mistake, by the way, again shows that the reviewer doesn’t know to make a distinction between a physical magnitude and an error statistic.

Fourth mistake: The reviewer compares a single step “sensitivity” calculation to multi-step propagated error.

Fifth mistake: The reviewer is apparently unfamiliar with the generality that physical uncertainties express a bounded range of ignorance; i.e., “±” about some value. Uncertainties are never constant offsets.

Lemma to five: the reviewer apparently also does not know the correct way to express the uncertainties is ±lambda or ±albedo.

But then, inconveniently for the reviewer, if the uncertainties are correctly expressed, the prescribed uncertainty is ±4 W/m2 in forcing. The uncertainty is then obviously an error statistic and not an energetic malapropism.

For those confused by this distinction, no energetic perturbation can be simultaneously positive and negative. Earth to modelers, over. . .

When the reviewer’s example is expressed using the correct ± statistical notation, 1.19 C and 1.18 C become ±1.19 C and ±1.18 C.

And these are uncertainties for a single step calculation. They are in the same ballpark as the single-step uncertainties presented in the manuscript.

As soon as the reviewer’s forcing uncertainty enters into a multi-step linear extrapolation, i.e., a GCM projection, the ±1.19 C and ±1.18 C uncertainties would appear in every step, and must then propagate through the steps as the root-sum-square. [3, 10]

After 100 steps (a centennial projection) ±1.18 C per step propagates to ±11.8 C.
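Under root-sum-square accumulation [3, 10], that arithmetic is one line:

```python
import math

per_step = 1.18  # degrees C, from the reviewer's own single-step numbers
steps = 100      # a centennial projection in annual steps

# Independent per-step uncertainties combine as the root-sum-square,
# i.e., per_step * sqrt(steps) = 1.18 * 10 = 11.8 C.
total = math.sqrt(sum(per_step ** 2 for _ in range(steps)))
```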

So, correctly done, the reviewer’s own analysis validates the very manuscript that the reviewer called a “waste of time.” Good job, that.

This reviewer:

  • doesn’t know the meaning of physical uncertainty.
  • doesn’t distinguish between model response (sensitivity) and model error. This mistake amounts to not knowing to distinguish between an energetic perturbation and a physical error statistic.
  • doesn’t know how to express a physical uncertainty.
  • and doesn’t know the difference between single step error and propagated error.

So, once again, climate modelers:

  • neither respect nor understand the distinction between accuracy and precision.
  • are entirely ignorant of propagated error.
  • think the ± bars of propagated error mean the model itself is oscillating.
  • have no understanding of physical error.
  • have no understanding of the importance or meaning of a unique result.

No working physical scientist would fall for any one of those mistakes, much less all of them. But climate modelers do.

And this long essay does not exhaust the multitude of really basic mistakes in scientific thinking these reviewers made.

Apparently, such thinking is critically convincing to certain journal editors.

Given all this, one can understand why climate science has fallen into such a sorry state. Without the constraint of observational physics, it’s open season on finding significations wherever one likes and granting indulgence in science to the loopy academic theorizing so rife in the humanities. [11]

When mere internal precision and fuzzy axiomatics rule a field, terms like consistent with, implies, might, could, possible, likely, carry definitive weight. All are freely available and attachable to pretty much whatever strikes one’s fancy. Just construct your argument to be consistent with the consensus. This is known to happen regularly in climate studies, with special mentions here, here, and here.

One detects an explanation for why political sentimentalists like Naomi Oreskes and Naomi Klein find climate alarm so homey. It is so very opportune to polemics and mindless righteousness. (What is it about people named Naomi, anyway? Are there any tough-minded skeptical Naomis out there? Post here. Let us know.)

In their rejection of accuracy and fixation on precision, climate modelers have sealed their field away from the ruthless indifference of physical evidence, thereby short-circuiting the critical judgment of science.

Climate modeling has left science. It has become a liberal art expressed in mathematics. Call it equationized loopiness.

The inescapable conclusion is that climate modelers are not scientists. They don’t think like scientists, they are not doing science. They have no idea how to evaluate the physical validity of their own models.

They should be nowhere near important discussions or decisions concerning science-based social or civil policies.


References:

1. Lauer, A. and K. Hamilton, Simulating Clouds with Global Climate Models: A Comparison of CMIP5 Results with CMIP3 and Satellite Data. J. Climate, 2013. 26(11): p. 3823-3845.

2. Meinshausen, M., et al., The RCP greenhouse gas concentrations and their extensions from 1765 to 2300. Climatic Change, 2011. 109(1-2): p. 213-241.

The PWM coefficients for the CCSM4 emulations were: RCP 6.0, fCO2 = 0.644, a = 22.76 C; RCP 8.5, fCO2 = 0.651, a = 23.10 C.

3. JCGM, Evaluation of measurement data — Guide to the expression of uncertainty in measurement. 100:2008, Bureau International des Poids et Mesures: Sevres, France.

4. Roy, C.J. and W.L. Oberkampf, A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Engineer., 2011. 200(25-28): p. 2131-2144.

5. Rogelj, J., et al., Probabilistic cost estimates for climate change mitigation. Nature, 2013. 493(7430): p. 79-83.

6. Murphy, J.M., et al., A methodology for probabilistic predictions of regional climate change from perturbed physics ensembles. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2007. 365(1857): p. 1993-2028.

7. Rowlands, D.J., et al., Broad range of 2050 warming from an observationally constrained large climate model ensemble. Nature Geosci, 2012. 5(4): p. 256-260.

8. Stainforth, D.A., et al., Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature, 2005. 433(7024): p. 403-406.

9. Collins, M., et al., Quantifying future climate change. Nature Clim. Change, 2012. 2(6): p. 403-409.

10. Bevington, P.R. and D.K. Robinson, Data Reduction and Error Analysis for the Physical Sciences. 3rd ed. 2003, Boston: McGraw-Hill. 320.

11. Gross, P.R. and N. Levitt, Higher Superstition: The Academic Left and its Quarrels with Science. 1994, Baltimore, MD: Johns Hopkins University. May be the most intellectually enjoyable book, ever.

Slywolfe
February 24, 2015 1:07 am

“For going on two years now, I’ve been trying to publish a manuscript that critically assesses the reliability of climate model projections”
Dana Nuccitelli says models do a good job.
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2015/feb/23/climatology-versus-pseudoscience-new-book-checks-whose-predictions-have-been-right

PaulC
Reply to  Slywolfe
February 24, 2015 2:08 am

The Guardian – enough said

Scottish Sceptic
Reply to  PaulC
February 24, 2015 2:23 am

Not one of their journalists, as far as I can see, has a degree relevant to climate. The closest is Moonbat, who at least has a science degree, but I hardly count a zoologist as qualified to comment on atmospheric physics or renewable energy.
But does that stop them attacking us sceptics, who overwhelmingly have these qualifications? Take our host Anthony Watts. Clearly qualified to speak about climate and the legitimate scientific dispute, producing the world-class, well-researched articles that make him the mainstream media in this area. And who attacks him? The uneducated, scientifically illiterate cut-and-paste “journalists” of the Guardian.

RWturner
Reply to  PaulC
February 24, 2015 11:34 am

Dana Nuccitelli — too much said.

Hivemind
Reply to  PaulC
February 27, 2015 10:33 pm

Grauniad, you mean.

knr
Reply to  Slywolfe
February 24, 2015 2:42 am

Dana Nuccitelli is a BS artist promoted well above his pay grade. That the Guardian has handed itself over to him and Bob ‘fast fingers’ Ward, to promote their egos and their paymasters’ financial outlooks, is a shame but not unexpected, as the paper has made it clear for years that only unquestioning support of ‘the cause’ is an acceptable stance. It tried to avoid even covering Climategate, and only did so when it became clear other newspapers would. It has always been an oddity of the paper that when it gets obsessed with a subject, like AGW, it tends to take an absolute stance, and the quality of its coverage goes downhill the more it covers it.

PeterK
Reply to  knr
February 24, 2015 9:36 am

Actually, Nuccitelli has long reached ‘his level of incompetence’ (Peter Principle). The only problem is he is not smart enough to know it.

Alex
Reply to  Slywolfe
February 24, 2015 3:41 am

Slywolfe
You are either a moron or a teenager (probably means the same at the moment). As a teenager you have a chance to grow out of it. As a moron you have no hope.

Editor
Reply to  Alex
February 24, 2015 4:10 am

I think he forgot “sarc”

Alex
Reply to  Slywolfe
February 24, 2015 4:11 am

Paul Homewood
Babies are like that

Alx
Reply to  Slywolfe
February 24, 2015 2:46 pm

Dana Nuccitelli? LOL. That’s all. LOL.
I have read enough Dana articles in the guardian to know he writes like a scientific idiot who is trying to forge a new career in polemics.

Pat Frank
Reply to  Slywolfe
February 24, 2015 7:13 pm

First, Anthony, thank-you very much for posting my essay about climate modelers. I am grateful for the opportunity.
Next, Slywolfe, if you understand the first figure of the essay, or the fourth, or the linked poster, you’ll know that climate models can’t make any predictions at all and so, ipso facto, can not “do a good job.” Unless making not-predictions is their job.
Crediting your cite, Dana doesn’t know what he’s talking about. And, as regards climate futures, neither does anyone else.

Joel O’Bryan
Reply to  Pat Frank
February 24, 2015 8:48 pm

Pat,
Thanks for generating a very worthwhile discussion on the GCM failures and allowing WUWT readers a “peer review” of the sorry state of climate science manuscript peer review. Bob Tisdale and Christopher Monckton (as you may be aware) regularly update WUWT readers with GCM external failures. Your elucidation of the internal reasons for those GCM failures (along with RGBatDuke, Ferdburple, Jimbo, and many others) is very much appreciated.
I understood most of what you presented and took away a very important refresher lesson on the importance of a “unique result” in any science-based model. I also remember that some months back someone at WUWT posted a comment that the GCM initializations used a single value for enthalpy of evaporation for 4º C water instead of 26º C, as it is for most tropical waters. They mentioned that this evaporation enthalpy error would propagate through the hundreds of iterations of the GCMs, compounding until nothing was left but essentially a random noise signal. That made me realize that the GCMs of the IPCC are total crap, built with circular logic to deliver a politically-desired output.
Joel O’Bryan, PhD

Pat Frank
Reply to  Pat Frank
February 26, 2015 2:02 pm

Thanks, Joel. I’d never have thought of that water enthalpy error. One expects that if all the physical errors of climate models were documented, their propagation would produce a centennial uncertainty envelope approximately the size of North America.

Reply to  Pat Frank
March 3, 2015 12:27 pm

Pat Frank
I’ll make it ultra-simple for you: Predicting the future (anything) is very difficult for humans. One might as well flip a coin.
.
The IPCC Report Summary is leftist personal opinions formatted to look like a real scientific study.
.
As you can see from the formerly beloved Mann Hockey Stick chart, ‘predicting the past’ is just as difficult for the “climate astrologers” as predicting the future.
.
It’s a climate change cult: a secular religion for people who reject traditional religions.
.
The coming global warming catastrophe scam is 99% politics and 1% science.
.
You can not debate a cult using data, logic and facts any more than you can debate the existence of god with a Baptist.
.
The long list of environmental boogeymen started with DDT in the 1960s, and as each new boogeyman lost its ability to scare people, a new boogeyman was created, and the old one was immediately forgotten.
If we are lucky, and it seems that we have been for two years so far, it will remain cold enough that the average person begins to doubt the coming global warming catastrophe predictions. Thank you, Mr. Sun and Mrs. Cosmic Rays, for riling up the leftists so they reveal their true bad character, with harsh attacks on scientists who do not deserve them.

Pat Frank
Reply to  Pat Frank
March 3, 2015 8:09 pm

Richard, I don’t disagree with your general point.
But consider that Maxwell’s equations do a darn good job predicting the future behavior of emitted electromagnetic waves. And Newton’s theory does a good job at predicting the future positions of the planets — at least out to a billion years or so. In my field, QM does a pretty good job of predicting the details of x-ray absorption spectra before any measurement.
So, physical science has a good array of predictive theories. Climate modelers have managed to convince people that they can predict future climate to high resolution. Their claim is supported only by the abandonment of standard scientific practice. Abandonment not just in climatology, but by august bodies such as the Royal Society and the American Physical Society.
In a way the modelers themselves are innocents, because my experience shows they’re not trained physical scientists at all. They couldn’t have abandoned a method they never knew or understood. The true fault lies with the physical scientists, especially the APS, who let climate modelers get away with their ignorance and scientific incompetence.
I agree with you that AGW alarm has been seized upon by progressives as their politically opportune proof positive that capitalism is inherently evil. The history of the 20th century has shown that their preferred alternative is manifestly monstrous. But as committed ideological totalitarians, finding a moral position in lying, cheating, and stealing to get their utopian way, remediative introspection has never been a progressive strong suit.

Chris Headleand
Reply to  Slywolfe
March 1, 2015 8:10 am

His article didn’t pass peer review. Why is it even being discussed past that?

Pat Frank
Reply to  Chris Headleand
March 1, 2015 5:02 pm

Discussed for reasons noted in the head post, Chris. Or do you find the reviewer arguments convincing?

Mervyn
February 24, 2015 1:21 am

Dana Nuccitelli thinks models do a good job. That is irrelevant. Models will never succeed in simulating the climate system and the reasoning is brilliantly presented in the following video:

VicV
Reply to  Mervyn
February 24, 2015 9:53 am

Very nice. I think the most interesting part, the part that really brought it home, was right at the end, when talking about time-scales.
So, for very complex problems, the computer would still give us GIGO: God’s-truth In, Garbage out.

Pat Frank
Reply to  VicV
February 24, 2015 7:15 pm

Regarding models, Willie Soon says GIGO = Garbage In, Gospel Out. 🙂

Paul mackey
February 24, 2015 1:38 am

Brilliant. Well done.

Pethefin
February 24, 2015 1:49 am

Stop wasting your time with “climate journals”. They continue their gate-keeping while your message is being missed in the climate policy debate. Why not try publishing your paper in a journal where peer review is not redefined in order to protect the AGW-hypothesis and the climate model industry?

1saveenergy
Reply to  Pethefin
February 24, 2015 2:10 am

“Why not try publishing your paper in a journal where peer review is not redefined in order to protect the AGW-hypothesis and the climate model industry?”
Try finding one … the cancer is well established.

Scottish Sceptic
Reply to  1saveenergy
February 24, 2015 2:18 am

It’s called a closed shop! If you are not part of the “doomsday warming” union then you are blacklisted, blackballed and if that doesn’t work blackmailed.

jeff
Reply to  1saveenergy
February 24, 2015 10:58 am

this should be published in a statistics journal.

Mike Maguire
Reply to  Pethefin
February 24, 2015 9:37 am

“Stop wasting your time with “climate journals””
He did a tremendous amount of authentic work on this manuscript. I’m guessing that he was confident it would be recognized as such by anybody resembling an authority/expert, and would be most relevant in a climate journal … even knowing the bias that exists.
Pat Frank,
I appreciate you taking the time to share this with us. It’s extraordinarily enlightening.
“Tuned parameters merely obscure uncertainty. They hide the unreliability of the model. It is no measure of accuracy that tuned models produce similar projections. Or that their projections are close to observations. Tuning parameter sets merely off-sets errors and produces a false and tendentious precision.”
Back in college, in a synoptic meteorology lab class, we had to create a simple weather model. I was sort of overwhelmed by all the mathematical equations and some of the other stuff, but did manage to get my model to work … by repetitive trial and error.
I had little confidence that the equations/parameters were the best ones to represent the atmosphere, but I knew that continual tweaking produced the changes that moved my model in the right direction until it finally showed what it was supposed to show, to get the result we needed to pass.

Pat Frank
Reply to  Mike Maguire
February 24, 2015 7:55 pm

I did think that in my early naive days, Mike. But no more. I didn’t realize that climate modelers don’t know the first thing about assessing accuracy. Plus I didn’t know that many editors apparently lack the courage to publish anything truly controversial in a poisonous atmosphere such as has been deliberately created in climate science.
Also, thank-you for your kind words. Your description of constructing a weather model sounds like a good experience. First, you learned to do it, second you overcame your fears, and third you gained a critical perception of models. Nothing to feel tentative about.

Pat Frank
Reply to  Pethefin
February 24, 2015 7:38 pm

I worried about that very problem, Pethefin. So my first submission was to the journal Risk Analysis. After three weeks of silence, the manuscript editor came back and told me that the paper was not appropriate to the journal. So he declined to even send it out for review.
This decision was backed up by the chief editor who, in essence, said that the analysis would be of limited interest to their audience. That is, a paper showing there’s no knowable greenhouse warming risk is of small interest to risk analysis professionals. Incredible, but not worth arguing. I suspect they were relieved to have dodged a political bullet.

pat
February 24, 2015 1:49 am

Dana’n’John should be busy investigating if modelling TC Marcia as a Cat 5 is valid:
23 Feb: Joannenova: Category Five storms aren’t what they used to be
The 295 km/hr wind speed was repeated on media all over the world, but how was it measured? Not with any anemometer apparently — it was modeled. If the BOM is describing a Cat 2 or 3 as a “Cat 5″, that’s a pretty serious allegation. Is the weather bureau “homogenising” wind speeds between stations?…
http://joannenova.com.au/2015/02/category-five-storms-arent-what-they-used-to-be/#comment-1684713
Australian CAGW sceptics have made the MSM, and the BOM has admitted no Cat 5 cyclone passed over Queensland. Not that most of the MSM and the public have grasped this fact as yet, such has been the hysteria:
24 Feb: Courier-Mail: Climate researcher questions Cyclone Marcia’s category 5 status
Jennifer Morohasy said the bureau had used computer modelling rather than early readings from weather stations to determine that Marcia was a category 5 cyclone, not a category 3…
Systems Engineering Australia principal Bruce Harper, a modelling and risk assessment consultant who analyses cyclones, said it was often difficult to determine whether a storm was a marginal 3, 4 or 5.
What was important was that after the bureau conducted its post-storm analysis, it told people that they experienced category 3 impacts as it passed over the land.
It was dangerous for residents to be thinking they had survived a category 5 when it was a storm that degraded quickly…
http://www.couriermail.com.au/news/queensland/climate-researcher-questions-cyclone-marcias-category-5-status/story-fnkt21jb-1227236188297

SandyInLimousin
Reply to  pat
February 24, 2015 4:45 am

But in folk memory this will be a Cat 5 Typhoon from now until the end of time. That’s how the propagandists of CAGW work, shout exaggerated claims from the rooftops, by the time they withdraw the claim the MSM have moved on.

Another Ian
Reply to  pat
February 24, 2015 1:39 pm
asybot
Reply to  Another Ian
February 24, 2015 9:07 pm

Did you read the comments after that report? Jen Morohasy got absolutely lambasted ad hominem.

me3
February 24, 2015 1:51 am

Amateur statisticians?

Will Nitschke
February 24, 2015 1:52 am

Oh, they’re scientists. Just lousy scientists. Doesn’t take much to get a science degree these days. As becomes evident from the quality of the papers being routinely published in this field.

Alex
Reply to  Will Nitschke
February 24, 2015 3:56 am

Agree. Memorise some shit, suck up to your tutor and graduate. Then get a job at Bank of America where they don’t care about your qualifications as long as you graduated in something. It’s the same in most industries.

Jim Francisco
Reply to  Alex
February 24, 2015 8:07 am

Very good, Alex. I would like to add a little to your observation. There are a few who are good at memorizing some shit but too dumb to realize that they should be working at Bank of America. They get hired by big businesses that have a very difficult time firing people who don’t have the abilities their diplomas say they should have.

JohnWho
Reply to  Will Nitschke
February 24, 2015 6:08 am

Will –
Is a scientist one who has a science degree of some sort or is a scientist one who practices the scientific method?

BullDurham
Reply to  JohnWho
February 24, 2015 10:58 am

This struck a chord with me. One of the best electronics engineers I ever knew (20 yrs USAF as engineer/project mgr + 26 yrs helping integrate hardware onto the ISS) had no degree whatsoever. He was entirely self-taught. He couldn’t get promoted beyond the grade he had when our company assumed a contract, because ‘company policy’ said you HAD to have a degree in ‘math, science, engineering or a related field’ to be an engineer. He taught me more about networks, software, hardware and how to solve integration problems than any of my four degrees. So, to answer the question: it depends on who you ask. Ask the journals or most academics, and it’s the degree that makes you a scientist. Ask anyone in the real world, and it’s your work that defines which category you should be placed in.
Based on the latter criteria, anyone publishing results of work that don’t match reality is something, but it ain’t a scientist!

Pat Frank
Reply to  JohnWho
February 24, 2015 8:01 pm

It’s the latter, John. But you knew that. 🙂

Reply to  JohnWho
February 24, 2015 8:55 pm

Nobody gives a rat’s arse if you’re following the scientific method or not. Friedrich Kekulé discovered the structure of benzene by dreaming of a snake coiled and biting its tail. Did he follow the scientific method?
Not following the scientific method only becomes a problem when people later find out that your research was a useless waste of time.

Reply to  JohnWho
March 5, 2015 7:43 am

A scientist is a person with common sense who is very skeptical about every conclusion (hypothesis) presented by scientists, including his own conclusions. A degree is not relevant — the quality of his scientific work determines whether he deserves to be called a “scientist”.
.
Predicting the future with computer games, has nothing to do with science.
A scientist would never focus on ONLY one variable, CO2, probably a very minor variable with no correlation with average temperature, when there are dozens of variables affecting Earth’s climate … and then further focus only on manmade CO2, for political reasons (only about 3% of all atmospheric CO2 can be blamed on humans). But that focus is the goal of climate modelers … along with getting more government grants.
But Big Government, which wants a “crisis” that must be “solved” by increasing government power over the private sector, could not possibly influence scientists getting government grants and/or salaries; and of course that funding NEVER has to be disclosed in an article, white paper or other report by any scientist on the goobermint dole.

Reply to  Will Nitschke
February 24, 2015 7:00 am

Perhaps grade inflation and lowered standards have combined with the world’s greatest fooling machine (computer+software) over the years to make peeple stoopids. It’s much easier these days to be a fraud and incompetent.

Pat Frank
Reply to  Will Nitschke
February 24, 2015 8:00 pm

They aren’t lousy scientists, Will. They exhibit no evidence of being scientists at all.

poitsplace
February 24, 2015 1:54 am

Unfortunately, with our current “everyone goes to college” mentality, the gifted scientists have been crowded out by an army of degreed morons.

ConfusedPhoton
February 24, 2015 2:04 am

“Are Climate Modelers Scientists?”
No they are just gamers

Reply to  ConfusedPhoton
February 24, 2015 5:35 am

If the Met Office put Climate Model Terminals in arcades around the UK, their high scores would be beaten inside a week.

NielsZoo
Reply to  ConfusedPhoton
February 24, 2015 2:14 pm

But if games reacted that poorly to input from the player and displayed a gaming “world” that was that whacked out from a “real” world… no one would buy or play them. You would need government to step in and mandate that everyone buy and force everyone to play those games… oh, we’re doing that now… never mind.

Pat Frank
Reply to  ConfusedPhoton
February 24, 2015 8:05 pm

Good point, CP. I think of climate modeling, as presently done, as video game science. It’s like trying to understand the physics of explosions by studying the hotel lobby explosion scene in “The Matrix.”

meltemian
Reply to  ConfusedPhoton
February 25, 2015 4:45 am

Well they sure as hell aren’t statisticians!

Brute
February 24, 2015 2:05 am

Pat, I appreciate the effort and, without getting into the merits of your work, rejection is part and parcel of academic/scientific publishing. An author who has not been rejected many more than four times is an author who is either a genius or a hustler. My advice is that you keep on trying. Don’t let the malice of incompetent reviewers get in your way.

Scottish Sceptic
Reply to  Brute
February 24, 2015 2:15 am

If it was about science, then he would get accepted. The simple fact is that this is motivated rejection of the science by those with a self-interest in keeping doomsday warming alive on life support.

Brute
Reply to  Scottish Sceptic
February 24, 2015 3:12 am

In another world, perhaps. In this one, academic/scientific publication requires unusual persistence no matter the content, no matter the field, no matter the venue.
You need some half a dozen major publications before experience allows you to bypass, with reasonable reliability, the entry-level hurdles that knock most rookies out.

Pat Frank
Reply to  Brute
February 24, 2015 8:10 pm

Thanks for the encouragement, Brute. I’ve worked through my share of rejections. These last have been set apart as unique by the uniform incompetence of the reviews. I do plan to keep trying.

RACookPE1978
Editor
Reply to  Pat Frank
February 24, 2015 9:02 pm

Pat Frank.
Help me out here, if I may ask a favor. I know 3D finite element analysis, have used it, have worked statistical control, have worked metrology, have worked reactor neutron flux curves and their shapes as the control rods are driven in and out at various levels of various poisons after shutdown at various times, have criticized answers (approximations) of stress-strain colored images from such models, and have worked in fluid dynamics problems with the solutions (approximations to the solutions) coming from such models. Fine. I know I know parts and pieces of the field fairly well. Others always should know more about their specialties.
In the context of the criticism of your paper, and of the problems and failures in global circulation models (now being called global climate models, by the way!), explain the different errors the global warming simulators are making, and the different errors and assumptions about their errors and their error margins, using this example.
I need to calculate the value of (e/pi)^10001.0001. Assume I set this problem up like the climate scientists have.
If I ran this problem using 2.7 / (22/7) what error am I making? Is climate science making this kind of error, and not knowing they are making this kind of error of simply using too many approximations of real world variables (albedos, transmission losses, cloud reflections, and everything else) that are NOT simple one-point constants?
If I ran this problem once using an exponent of 10002.0002, would I be duplicating their error? If not, what error am I making?
If I ran this problem, changing the accuracy of “pi” every time by 0.0001 percent, am I not propagating that error through every subsequent multiplication?
If I ran it 4000 times using 10002.0001 would I be more accurate (in their minds) even though I would never get the right answer?
If they ran this problem 300,000 times on a supercomputer using a different algorithm for both constants every time, could they use the average of the random errors of their results to (a) get a more accurate answer or (b) just be displaying random errors in their generation sequence of both “constants”?
If they ran this problem using a program that printed “40000.00001” every time, would today’s climate scientists claim they had greater accuracy than my sliderule?
I know absolutely that many FEA runs using exact “perfect” data on a “perfect” crystal or pure piece of metal machined exactly per the model dimensions under loads exactly as described by the modeled equations will yield (on average) results similar to the average of many model runs. Each model run under those circumstances “should” be exact and perfect, but each will be a bit different even in the ideal case of a simple stress-strain issue. But, is this what forms the CAGW “religion” ? A belief that they have described the problem exactly and perfectly so every run using the same “core equations” as its kernel can be averaged into today’s world?
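RACook’s (e/pi)^10001.0001 example can be put to numbers. A minimal sketch, not part of the original comment, working in base-10 logarithms because the true value is far too small to represent as a double:

```python
import math

# Exponent from the example above; 2.7 and 22/7 are the crude stand-ins
# proposed for e and pi.
exponent = 10001.0001

# (e/pi)^10001.0001 is on the order of 1e-629, which underflows a double,
# so compare base-10 logarithms of the two answers instead.
log10_exact = exponent * math.log10(math.e / math.pi)
log10_approx = exponent * math.log10(2.7 / (22 / 7))

print(round(log10_exact, 1))   # about -628.6
print(round(log10_approx, 1))  # about -659.7
# The approximate answer is ~10^31 times too small: a ~0.7% error in the
# base, compounded through the ~10001st power.
```

The point of the sketch: a tiny error in a constant, repeated through thousands of multiplications, does not stay tiny.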

Pat Frank
Reply to  Pat Frank
February 26, 2015 2:18 pm

RACook, the 2.7/(22/7) case would be an accuracy problem, increasing the error with every step. I have no specifications on the source of climate model error. I just assessed the error and calculated its consequence.
The FEA models you describe seem to be engineering models. These — the result of detailed physical experiments — accurately describe the behavior of their object within the model bounds. Outside those bounds is dangerous ground, as I’m sure you know.
Climate models are like engineering models in that sense. They can be made to describe the behavior of elements of the climate within the time bounds where tuning data exist. However, they’re being used to project behavior well outside those bounds. The claim is then made that they do this accurately, and that’s the problem.

BFL
Reply to  Pat Frank
February 27, 2015 2:01 pm

“The FEA models you describe seem to be engineering models. These — the result of detailed physical experiments — accurately describe the behavior of their object within the model bounds.”
And those model bounds have to be obtained in the real world through control of critical parameters such as maximum defect size and quantity and alloy material composition including unwanted contaminants that will reduce predicted performance. It is also necessary to understand to some degree how the defects will propagate under stress. To help insure this, intensive physical inspection and testing are usually incorporated throughout the manufacturing process.
There are so many parameters with so little accurate understanding of function covering such large areas of many differing kinds of surface, that there is no way possible to even begin to simulate anything similar with climate models.

Pat Frank
Reply to  Pat Frank
February 28, 2015 10:48 am

You’ve got it, BFL. Climate modelers abuse the process.

StefanL
February 24, 2015 2:05 am

Whew. +18C or -15C! That really would be Climate Change™
Luckily, Gaia and her negative feedback mechanisms are smarter than all the climate scientists and their models put together.
Loved this characterisation : “a liberal art expressed in mathematics”

DirkH
Reply to  StefanL
February 24, 2015 12:04 pm

It is only so small because the Stefan-Boltzmann law puts limits on how hot or cold the planet can get given constant insolation. If the error propagation continued unresisted by that -and absolute zero at 0 K-, it would be +/- 1000s of degrees C in 2100.

Pat Frank
Reply to  StefanL
February 24, 2015 8:17 pm

Hi Stefan — the (+/-) uncertainties are not temperatures. They are an ignorance width. When they become (+/-)15 C large, they just mean that the projection can’t tell us anything at all about the state of the future climate.

Windchaser
Reply to  Pat Frank
February 26, 2015 11:01 am

Pat,
I was puzzled by how your uncertainty ranges increased without bounds when there’s no time-aspect in your equations. But I think I figured it out.
It looks like you additively increase the cloud forcing uncertainty at each timestep. You compute a new cloud forcing at the present timestep, and add it to the last.
IIUC, this means that you’re not treating this +/- 4 as the uncertainty in the cloud forcing, but uncertainty in the change of the cloud forcing. In other words, your equations act as if the change in forcing from one year to the next must be within +/- 4 W/m2. Propagating this through allows the actual cloud forcing uncertainty in your equation to grow without bounds.
Compared to the actual cloud forcing (in W/m2), what you’re using is a completely different metric: W/m2/year. These two metrics are as different as speed and location. Uncertainty in the derivative of forcing is verrrrry different from uncertainty in the forcing itself.
If the actual cloud forcing uncertainty is between +/- 4 W/m2, then that range is fixed. It doesn’t change, it doesn’t increase without end. It already represents the entire range of cloud forcing uncertainty.
And this is why your model produces nonsensical results. No, actual cloud forcing cannot grow or fall without bounds. You already established the actual cloud forcing uncertainty: +/- 4 W/m2. The cloud forcing at any given timestep should be within those bounds 95% of the time.

Pat Frank
Reply to  Pat Frank
February 26, 2015 2:52 pm

Windchaser, there is an implied time aspect in the equation, found in the change in forcing over the time of the projection.
Error is propagated through a linear sum as the root-sum-square. That is the standard method.
I do not compute any new cloud forcings. I merely propagate the global annual average long-wave cloud forcing error made by CMIP5 climate models, in annual steps through a projection.
Sorry to say, YDUC. I am treating the (+/-)4 Wm^-2 as an error. It is injected into every annual modeled time step. The reason it is injected is that it is an error made by the models themselves; that is, an intrinsic, theory-bias error.
Every annual initiating state has a cloud forcing error, which is delivered to the start of the simulation of the subsequent state. The model makes a further long wave cloud forcing error when simulating that subsequent state. This sequence of error in, more error out is repeated with every step. Error is necessarily step-wise compounded.
However, we don’t know the magnitude of the error, because the simulated states lie in the future. But we can project the uncertainty by propagating the known average error. That’s what I’ve done.
Your “In other words, …” statement is not correct. Error in the forcing is propagated, not the change in forcing. Every step in a simulation simulates the entire climate, including the cloud forcing. Whatever the change in cloud forcing, the average error in the total long wave cloud forcing is (+/-)4 Wm^-2. Every time.
It’s not the error in the derivative of forcing. It’s the error in the forcing itself.
You’re right that the statistical dimension is W/m^2/year. But the annual change in GHG forcing is also W/m^2/year.
Windchaser, you’re clearly an intelligent guy. Let me try and explain this. When there is an average annual (+/-)4 Wm^-2 error in long wave cloud forcing, it means the available energy is not correctly partitioned among the climate substates.
This means that one is not simulating the correct climate, for that total energy state. That incorrect climate is then projected forward, but projected incorrectly relative to its particular and incorrect energy sub-states because the error derives from theory-bias.
So an already incorrect climate state is further projected incorrectly into the next step.
The uncertainty envelope describes the increasing lack of knowledge one has concerning the position of the simulated climate in its phase-space relative to the position of the physically correct climate. That lack of knowledge becomes worse and worse as the number of simulation steps increases, because of the unceasing injection and projection of error.
The uncertainty grows without bound, because it is not a physical quantity. It is an ignorance width. When the width becomes very large, it means the simulation no longer has any knowable information about the physically true climate state.
Such results are not nonsensical. They are cautionary; or should be.

Windchasers
Reply to  Pat Frank
February 27, 2015 1:03 pm

Error is propagated through a linear sum as the root-sum-square. That is the standard method.
Root-sum-square is the standard method for combining independent sources of error. For instance, let’s say I move in a straight line, twice. The first time, I measure that I have moved 100 m, +/- 10 m. The second time, 200m, +/- 5m.
The final error will be the root-sum-square of the previous, independent errors: the square root of (5*5 + 10*10). This is because each measurement and its error are independent.
Or, here’s another example: say I am traveling at 10 +/- 1 meters per second. At each timestep, no matter how long this continues, the error in my velocity remains the same, +/- 1. However, the error in the *distance* I’ve traveled grows as the root-sum-square, because the error occurring in distance traveled at each timestep is independent of the error at any other timestep. Each second has its own error in distance traveled, of +/- 1 m.
After 1 second, I travel 10 meters. The error is +/- 1m. After 1 more second, I travel another 10 +/- 1m, so now I have travelled 20 meters, +/- 1.41m. After another second, I’ve traveled 30 meters, +/- 1.73m. Etc.
No matter how many seconds have gone by, my speed and its error are the same. 10 +/- 1. But the error in the distance grows with time, as [sqrt(t)*1].
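Windchaser’s speed example is easy to check by simulation. A minimal sketch, not part of the original comment, assuming independent Gaussian per-second errors of +/- 1 m (the seed and trial count are arbitrary):

```python
import random
import statistics

random.seed(1)

# Speed 10 m/s, with an independent +/-1 m error in the distance covered
# each second, per the example above.
def distance_after(t_seconds, sigma=1.0, speed=10.0):
    return sum(speed + random.gauss(0.0, sigma) for _ in range(t_seconds))

t = 100
trials = [distance_after(t) for _ in range(20000)]
spread = statistics.stdev(trials)

# Root-sum-square prediction: sigma * sqrt(t) = sqrt(100) = 10.
print(round(spread, 1))  # close to 10.0
```

The per-second error stays fixed at +/- 1 m, but the spread in total distance after 100 seconds comes out near 10 m, matching the sqrt(t) growth described above.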
Similarly, if you determine an error in the overall cloud forcing, that error stays fixed from one timestep to the next. If this forcing is (f0 +/- 4) at the beginning, it should be (f0 +/- 4) at every timestep. Without some explicit relation to time, the errors do not propagate forward through time in the manner that you describe.
On the other hand, if this error, +/- 4, represented the derivative of the cloud forcing with respect to time, then, yes, the total cloud forcing uncertainty would grow over time and without bound, just like how, in the example above, the uncertainty in the derivative of distance-traveled caused the uncertainty in distance-traveled to grow over time without bound.
For verification, please refer to Bevington and Robinson, or to Larsen and Marx, or to whatever other book on statistics and errors that you prefer. But unless I’m really missing something, the uncertainty you calculated has no relationship to time, so it cannot propagate forward through time like you describe.

Pat Frank
Reply to  StefanL
February 28, 2015 10:42 am

windchaser, when you consulted Bevington and Robinson, or whatever, you found no time-dependence in the equations for propagating error.
All that’s required is a step-wise sequential calculation yielding some additive final result, with some error in every step. The error then propagates through the steps into the final result.
Those conditions are met in a projection of air temperature. The calculation is step-wise. The final state and each intermediate state is a linear sum of prior calculated terms. Each term has an associated error. Propagated error follows. Uncertainty grows with the number of steps.
No “explicit relation to time” is required.
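Pat Frank’s step-wise compounding can be written out directly. A minimal sketch, not from the manuscript itself, assuming the (+/-)4 Wm^-2 figure from the thread is injected as an independent error at every annual step and combined as the root-sum-square:

```python
import math

def envelope(step_error, n_steps):
    """Root-sum-square accumulation of a fixed per-step error."""
    total_sq = 0.0
    widths = []
    for _ in range(n_steps):
        total_sq += step_error ** 2        # inject the same error each step
        widths.append(math.sqrt(total_sq))  # running uncertainty width
    return widths

w = envelope(4.0, 100)  # +/-4 W/m^2 per annual step, for 100 years
print(w[0])   # 4.0 after one step
print(w[-1])  # 40.0 after 100 steps, i.e. 4 * sqrt(100)
```

Under this treatment the envelope grows as sqrt(n) with the number of steps, which is exactly the point in dispute between the two commenters.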
You wrote, “No matter how many seconds have gone by, my speed and its error are the same. 10 +/- 1. But the error in the distance grows with time, as [sqrt(t)*1].”
Actually, your uncertainty grows with distance traveled. It’s right there in your own description. Time doesn’t enter your calculation at all.
I’ve told you the origin of the (+/-)4 Wm^-2 error term. You can find it in Lauer and Hamilton, my reference [1].
It is the average long-wave cloud forcing error derived from comparing against observations, 20 years of hindcasts made by 26 CMIP5 models.
The (+/-)4 Wm^-2 is not the derivative of cloud forcing with time. It’s the average annual long wave cloud forcing error in the simulated total cloud fraction.
In every simulation step, every simulated annual total cloud fraction will produce an incorrect LW cloud forcing, which will be (+/-)4 Wm^-2 further divergent from the already incorrectly simulated prior LW cloud forcing. As such, the error will be present in every single simulation step of a step-wise projection. Such error must propagate, and the uncertainty in air temperature must grow with the number of simulation steps.
Your analysis is not at all relevant, windchaser.
Climate modelers can be either scientists and get down to the hard gritty business of physics, or they can be gamers. But they can’t be gamers and pretend to be scientists.

Windchasers
Reply to  Pat Frank
February 28, 2015 11:51 am

All that’s required is a step-wise sequential calculation yielding some additive final result, with some error in every step. The error then propagates through the steps into the final result.
Nope. Your units are wrong for this to be passed forward through time. Nor does that make intuitive sense.
Actually, your uncertainty grows with distance traveled. It’s right there in your own description. Time doesn’t enter your calculation at all.
/shrug. Same difference. We have a relationship between time and distance travelled, and the uncertainty is in that relationship. So as either time or distance travelled grows, so does the uncertainty.
Where is your uncertainty here? Just in the cloud forcing, no? It’s not in the cloud forcing’s relationship to other forcings, nor is it in some relationship to time. So the total uncertainty does not grow. It cannot.
Pull the equations out of Bevington and Robinson, if you like: you’ll notice that they discuss uncertainties in terms of differentials. Here, that would be the timestep, since that’s what you’re compounding it over, which means that your uncertainty must be in terms of time. With no differential / no relationship to time, there’s no multiple, independent uncertainties to perform a root-sum-square on.
The uncertainty you provide is fixed; it doesn’t change with respect to anything else. So how can you possibly compound it?
Your units are just wrong, which means your math is wrong.
“In every simulation step, every simulated annual total cloud fraction will produce an incorrect LW cloud forcing, which will be (+/-)4 Wm^-2 further divergent from the already incorrectly simulated prior LW cloud forcing.”
No. Again, this is nonsensical: you’re saying that if we stepped through time half as quickly, say at 0.5 years instead of 1 year, then the uncertainty would grow twice as quickly. This is nonsense: uncertainty propagation does not depend on the size of the timestep. And hey, look! If we use really large timesteps, then all this uncertainty goes away, and the model projections are fine again!
“Climate modelers can be either scientists and get down to the hard gritty business of physics”
*cough*. I’m not the one messing up basic statistics here. But by all means, keep insisting that you’re right, and keep submitting your paper and getting it rejected.

Windchasers
Reply to  Pat Frank
February 28, 2015 11:59 am

Here, I found a copy of Bevington and Robinson on the internet. We can go straight to them, if that’s the textbook that you prefer.
http://labs.physics.berkeley.edu/mediawiki/images/c/c3/Bevington.pdf
For error propagation, go to pages 39-41 of the book (page ~54 of the pdf). You can try to walk me through your math and its units, if you like, but I’ll be surprised if you can: your units are wrong; they don’t make any sense.

Pat Frank
Reply to  Pat Frank
February 28, 2015 3:05 pm

windchaser, your “nope” directly contradicts the generalized derivations in Bevington and Robinson.
The differentials for propagating error, (sigma_x)^2 = (sigma_u)^2(dx/du)^2 +…, are generalized to any “x” and do not necessarily refer to time.
Your intuition is no test of correctness.
I already discussed your units argument: the error unit is W/m^2/year. The annual change in GHG forcing is also W/m^2/year. The head post figure is Celsius per year.
You’ve got no case.
You shrugged off the time/distance mistake in your own criticism. But you ignored the time/forcing equivalence I pointed out here. Your criticism is therefore illogical and self-servingly inconsistent, and you’ve ignored your own dismissal of your own prior criticism; a self-goal.
The propagation time-step is annual, because the long wave cloud forcing error is the annual average. The error is from GCM theory bias, putting it freshly into every single annual simulation step. That is why it must be compounded.
The annual error in long-wave cloud forcing is propagated in relation to annual greenhouse gas forcing. I should have thought that was obvious given the first head post figure and the linked poster.
Long-wave cloud forcing contributes to the tropospheric thermal flux. So does GHG forcing. The change in GHG emissions enters an annual average 0.035 Wm^-2 forcing into a simulated tropospheric flux bath that is resolved only to (+/-)4 W/m^2. And that’s a lower limit of error.
The lower-limit but 114-times larger cloud simulation flux error completely swamps any GHG effect. GCMs cannot resolve the effect, if any, of GHG emissions.
Semi-annual cloud error may be of different magnitude. GCM simulations can proceed in quite small time steps. A detailed and specific error estimate and propagation could produce very different uncertainty widths. Possibly much wider than those in the head post figure, because of the multiple sources of error in a GCM.
If GCMs are ever able to project climate in 100 year jumps, your ludicrous “really large timesteps” argument might have relevance. But then, of course, we’d have to apply a 100-year average error, not just an annual average. What would be the magnitude of that, I wonder.
I have a personal copy of B&R. The units aren’t wrong (see above).
You’ve struggled hard, windchaser, and have gotten nowhere.

Windchasers
Reply to  Pat Frank
February 28, 2015 5:07 pm

You’re right that the statistical dimension is W/m^2/year. But the annual change in GHG forcing is also W/m^2/year.
Ahh, great! If that’s actually the case, and you’re calculating the error in the change in forcing over time, why do you present it in terms of W/m^2?
You can understand my confusion. Mathematically, you treat the number as the error in the derivative of cloud forcing with respect to time, but in terms of units, you present it as just a constant, flat error in the cloud forcing.
I haven’t looked very closely at your derivation, but it also seems to reflect just a flat (constant) error in cloud forcing, not something that changes from one timestep to the next, or that feeds back with any other terms in your equations. Is that incorrect? The poster suggests that you calculate the total average cloud forcing error over a block of time, not the error in the change in cloud forcing, for which the units would be W/m^2/year.
You seem to contradict yourself, as at other times in our discussion, you said: “The (+/-)4 Wm^-2 is not the derivative of cloud forcing with time. It’s the average annual long wave cloud forcing error in the simulated total cloud fraction.”
Which is it? Can you clarify this for me?
“The head post figure is Celsius per year.”
The head figure is in total change in Celsius. It’s not ambiguous: the right axis is labelled as delta-C, not delta-C per year. Likewise, the head equation in your poster is also in terms of the total forcing and total temperature change, not forcing per year or temperature change per year.
The one exception, of course, is the part of the equation where you sum over all the timesteps, summing the annual changes in forcing/year to get the total change in forcing. This is where you’d convert from W/m^2/year to W/m^2. But obviously, if you’re starting with an error in terms of W/m^2, you can’t integrate that over time to get W/m^2. It’d be like integrating speed over time and getting back your speed, instead of getting distance traveled.
Sorry if this sounds simplistic, but I’m trying to explain it in the simplest terms possible. If you integrate a quantity over time, then your units must change.

Pat Frank
Reply to  Pat Frank
February 28, 2015 5:45 pm

windchaser, you wrote, “Ahh, great! If that’s actually the case, and you’re calculating the error in the change in forcing over time,…”
No, windchaser. I’ve told you over and over again, it’s the (rms) average annual long wave cloud forcing error.
A twenty-year rms average, yielding the average annual error in the total forcing. Why is that so hard for you to understand?
“… why do you present it in terms of W/m^2?” Because that’s what it is, windchaser.
“…you treat the number as the error in the derivative of cloud forcing with respect to time…” No, I do not. I treat it for what it is: the annual average error. It has nothing whatever to do with dynamics.
I’ve made no “contradiction,” but have been clear and consistent throughout. The mistake has been yours from the outset, given your insistence that a linear root-mean-square annual error is a derivative.
It appears your exposure to physical error analysis is so lacking that you have no grasp of its meaning.
You wrote, “The head figure is in total change in Celsius. It’s not ambiguous: the right axis is labelled as delta-C, not delta-C per year.”
In what unit is the slope of the line in that figure, windchaser?
You wrote, “Likewise, the head equation in your poster is also in terms of the total forcing and total temperature change, not forcing per year or temperature change per year.”
In the PWM equation, what does the subscript “i” represent?
You wrote, “…you can’t integrate that over time to get W/m^2.”
It’s a linear sum, windchaser. No units change.
You’re still getting nowhere.
I have to congratulate you though. At least you’re struggling with error propagation. That’s more than any of my climate modeler reviewers did.
Save the one reviewer who clearly was not a climate modeler, understood the error analysis, and recommended publication.

Windchasers
Reply to  Pat Frank
March 1, 2015 10:07 am

“…I treat it for what it is: the annual average error.”
One does not add a total error to the same series, over and over again. It’s added once.
“It’s a linear sum, windchaser. No units change. …In the PWM equation, what does the subscript “i” represent?”
The ith timestep, of course.
What you’re doing in that equation is exactly a numerical integration: you take the change in greenhouse gas forcing at a given point in time. You multiply it by delta-t, a change in time: 1 year. You get delta-F, the amount of change over the timestep. Then you sum over all these delta-Fs, to get the total change in GHG forcing over all timesteps.
It’s just this.
change in F == Sum over t: [dF/dt * delta-t]
Sorry for the cludgy representation of the math, but that’s a textbook numerical integration. So if you’re including a cloud forcing error, which is in the same units as the LHS, it must either be in W/m^2, in which case it’s a constant, added to F. Or if the cloud forcing error is not constant with respect to time, it should be integrated over with respect to time, just like dF/dt.
Arright, the remedial calculus lessons are done. You said that I’m “still getting nowhere”, and I can indeed see that. Please: find a mathematician you trust and run this by him. Perhaps he can explain this to you better than I can.
Honestly, best of luck with publishing your manuscript. And thanks for the conversation. It’s been interesting.
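Windchasers’ “textbook numerical integration” point can be sketched as follows. The 0.035 W/m^2/year figure is borrowed from Pat Frank’s earlier comment purely for illustration, and the function name is my own:

```python
def total_forcing_change(dF_dt: float, dt: float, n_steps: int) -> float:
    """Forward-Euler numerical integration:
    Delta-F = sum over timesteps of (dF/dt) * dt, in W/m^2."""
    return sum(dF_dt * dt for _ in range(n_steps))

# Integrating a rate (W/m^2/year) over time changes the units to W/m^2,
# and the integral does not depend on the step size chosen:
a = total_forcing_change(0.035, 1.0, 100)   # 100 one-year steps
b = total_forcing_change(0.035, 0.5, 200)   # 200 half-year steps
print(a, b)  # both ~3.5 W/m^2, up to float rounding
```

The step-size independence is the same point made above about 0.5-year versus 1-year timesteps.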

Pat Frank
Reply to  Pat Frank
March 1, 2015 11:38 am

windchaser, you wrote, “One does not add a total error to the same series, over and over again. It’s added once.”
It’s a theory bias error. It enters into every single simulation step.
“The ith timestep, of course.” There goes your argument that “there’s no time-aspect in your equations.”
“You multiply it by delta-t,…” Where do you see a time delta-t anywhere in the equation?
The delta-F_i are the annual forcings recommended by the IPCC, e.g., for the standard SRES scenarios. Time enters only implicitly with the steps in forcing.
“Sorry for the cludgy representation of the math, but that’s a textbook numerical integration.” Not a problem. So you’d agree that numerical integration is just a linear sum, subject to linear propagation of error.
“So if you’re including a cloud forcing error, which is in the same units as the LHS, it must either be in W/m^2, in which case it’s a constant, added to F.”
Correct. For any step and including error, the forcing is [delta-F_i(+/-)4] Wm^-2. As mentioned umpteen times so far, the (+/-)4 Wm^-2 is the rms average CMIP5 LWCF error. As an average, it’s necessarily constant at every step, as a theory bias error it enters into every step, and its propagation yields a representative physical reliability of the projection.
“Please: find a mathematician you trust and run this by him.” I’ve done that. No problems found.
Windchaser, look at your own analysis. You’ve described numerical integration as a linear sum. Linear propagation of error follows directly. All you need do now is recognize the serial impact of a theory-bias error on the growth of uncertainty in a step-wise simulation.
“Honestly, best of luck with publishing your manuscript. And thanks for the conversation. It’s been interesting.”
Thanks, and likewise.

Scottish Sceptic
February 24, 2015 2:13 am

Are Climate Modelers Scientists?
No! And those falsely claiming to be scientists should be challenged by sceptics and the public made aware of their bogus claims.

Reply to  Scottish Sceptic
February 24, 2015 7:06 am

They seem to be like self-taught CAD operators – able to generate nice, professional-looking output but heaven help anyone trying to construct or use anything they’ve touched.

cloa5132013
February 24, 2015 2:21 am

It just shows that mathematicians are not scientists. They think an internally inconsistent quantity, the sum of all the positive integers, equals -1/12: a sum that exceeds infinity.

Reply to  cloa5132013
February 24, 2015 3:07 am

I don’t understand that, but I’m not going to ask you to spend time elaborating. 🙂 But it seems that in mathematics, there are more possibilities than can exist in the physical universe.
In maths, you can have negative speed, negative money, negative direction, negative temperature, etc. Those things are useful as tools. But the real world simply does not allow for negative speed, negative money, negative direction, negative temperature, etc.

ferdberple
Reply to  Karim D. Ghantous (@kdghantous)
February 24, 2015 3:43 am

negative money
====
I have plenty of that.

Louis
Reply to  Karim D. Ghantous (@kdghantous)
February 24, 2015 9:23 am

I’m going to tell the debt collectors there is no such thing as “negative money” in the real world and see how that goes. 🙂

JamesS
Reply to  Karim D. Ghantous (@kdghantous)
February 24, 2015 5:03 pm

I realized there was something seriously wrong with the universe when I learned about i (the sqrt of -1), and then found that it had real-world applications.

Admin
February 24, 2015 2:31 am

I think I’m going to start referring to climate models as “compound error machines”.

knr
February 24, 2015 2:34 am

Are Climate Modelers Scientists?
It’s a good question for another reason: when we consider what a climate scientist is, we find that there is in fact no agreed definition of the term. Given that it has been applied to failed politicians, railway engineers, and a host of others with no formal academic training in the area, we can see that in practice it’s far from clear what actually makes a person a climate scientist.
From the alarmist perspective it’s simple: a climate ‘scientist’ is someone who works to support AGW. It’s a very useful way of looking at it, because they can claim that no climate ‘scientist’ disagrees with them, and therefore that ‘non’-climate ‘scientists’ can safely be ignored.
However, it’s also an entirely dishonest way of looking at it, because even the climate ‘scientists’ they like vary in how they view the situation; the consensus is no such thing. Secondly, there are clearly those working in the area whose training and academic standing at least equal the others’, but who do not share the alarmist perspective, and who therefore should have the right to be called climate ‘scientists’ in any fair and honest system.
It’s always worth remembering, when the infamous 97% claim is pulled out, that in practice they simply have no idea how many scientists, climate or otherwise, there are, so they cannot know the size of the whole group of which the sub-group is supposed to be a percentage. So even setting aside the many problems of its methodology, the claim fails at a basic maths level; its value is the same as ‘nine out of ten cats prefer.

February 24, 2015 2:41 am

In a vague sense. As a meteorologist, I have more comprehension of all that climate stuff, by virtue of climate being front and center in forecasting. I can also call the bluffs of forecasters just by the wording they use.
I love WUWT!

February 24, 2015 2:52 am

You’re preaching to the choir on this site. Try posting this on skepticalscience or a Greenpeace website if you wish to educate someone new.
I will attach this link and post on a few myself 🙂

Katherine
Reply to  wickedwenchfan
February 24, 2015 3:04 am

Note that by posting his essay here, he got the attention of readers like you who can now pass it on. If he’d tried at SkS or Greenpeace, his article would never have seen the light—and you would never have become aware of it.
Besides, WUWT isn’t an echo chamber, nor is it a closed site. New readers come in and learn something here.

Reply to  Katherine
February 24, 2015 3:20 am

Lighten up 🙂

Alex
Reply to  wickedwenchfan
February 24, 2015 4:07 am

The only thing I would show Greenpeace is the door

Questing Vole
Reply to  Alex
February 24, 2015 5:17 am

Unless there was a more convenient window?

Alex
Reply to  Alex
February 24, 2015 5:23 am

Questing Vole
I show the door and throw out the window. I guess I am a little old fashioned.

Alex
Reply to  wickedwenchfan
February 24, 2015 5:21 am

wickedwenchfan
Attach away. You may find that you don’t get a good response to it. It is possible you would be banned or your posts deleted.

Pat Frank
Reply to  wickedwenchfan
February 24, 2015 8:22 pm

wwfan, it’s a peculiarity, and an obvious pointer to the truth of the matter, isn’t it, that only AGW-skeptical sites do not censor comments.

Truthseeker
February 24, 2015 2:55 am

Science is about discovery of something previously not known or defined.
A model cannot contain anything that is not already known. A model can be useful to describe well understood outcomes where the variables are controllable or known.
Climate is a chaotic non-linear system where the variables are not controllable or known to any great extent.
Therefore Climate models are not science.
Q.E.D.

Arsten
Reply to  Truthseeker
February 24, 2015 4:51 am

A model is an output of science – generally used in Engineering. Take a comparatively simple airflow model*: Science creates and researches the methodology, variables, and uses, as well as constraints or error bars in conjunction with, the model. This model can also then be revised and revised over time. The model is then used by engineers to make cars or airplanes or sacks of peanuts flow better through the air. An engineer using this model is not a scientist – but an engineer improving this model (ideally publishing and spreading the revisions) is a scientist performing science.
*Simple in the field of medium dynamics models. 🙂

Reply to  Arsten
February 24, 2015 10:30 am

I think I’m gonna have to disagree here. An aeronautical engineer will use a virtual model and simulator to design various shapes and test the desired performance parameters. But then they’ll build a physical scale model and test it in a wind tunnel. You don’t go from a virtual model to the 10mm to the cm scale directly.

Brad
Reply to  Arsten
February 24, 2015 11:07 am

To TomB,
You said”
I think I’m gonna have to disagree here. An aeronautical engineer will use a virtual model and simulator to design various shapes and test the desired performance parameters. But then they’ll build a physical scale model and test it in a wind tunnel. You don’t go from a virtual model to the 10mm to the cm scale directly.”
Tom,
You imply 3 steps, model, build, test, correct? Seems climate modelers stop at step 1? Testing against observations is a failure, so we can’t go there, and forget step 3. Already built, just don’t understand it.
What allowed them to get away with this, until the “PAUSE” occurred, is that they hoped the real results wouldn’t be known until after they retired and collected their government pensions. In many cases it was after they were dead.
I wonder if they realize that what they are doing to collect a paycheck, and justify their existence, has little to no real validity? (Who has read the “Black Widowers” series by Asimov?)
I recently spent 3 hours with an engineer who created a program (model) that uses 15 minute interval data from smart meters to analyze energy use in buildings. Spent years creating it. Explaining to him how much he is potentially missing was emotionally draining. He had not even been onsite to survey the facility yet was willing to talk about potential energy savings.

Reply to  Arsten
February 24, 2015 1:13 pm

Having used “models” for years to design structures, pipelines, water networks, sewerage and water treatment systems, project management systems, financial management, I would have to agree. Thing is with all those models I used, there were empirical tests to proof the model as well as real world tests to see if what we “modelled” actually gave us the projected results – often an iterative process to “tune” the models. But in every case – real world testing and proofs.
Meteorologists get to test their models every day/week.
So why are climate models wacky, and why does anyone accept them as anything but what they are: first-generation guesstimates?
Perhaps that is the difference between my engineering and “Climate Science”. Climate Models are still in the state of “Science” so real world proofing is unnecessary or beyond them.

firetoice2014
Reply to  Truthseeker
February 24, 2015 5:56 am

“A model cannot contain anything that is not already known.” …or, at least, assumed or postulated.
If climate models contained only what is KNOWN, their outputs would not differ as they do.
Climate is driven by some “known knowns”, several “known unknowns” and a myriad of “unknown unknowns”, making certainty about the future of climate somewhat difficult.

Jim Francisco
Reply to  Truthseeker
February 24, 2015 1:26 pm

“Science is about discovery of something previously not known or defined”
Reminds me of the problem I have with the word – research. I thought of myself as a researcher because I looked through old reports for answers that others had searched for and found.

Chip Javert
Reply to  Truthseeker
February 24, 2015 6:49 pm

Truthseeker
re “…A model cannot contain anything that is not already known…”
Einstein might disagree with you (so would I).
Relativity is a pretty interesting model that certainly appears to have taught lots of previously-unknown stuff to legions of physicists.

Pat Frank
Reply to  Truthseeker
February 24, 2015 8:25 pm

Truthseeker, models that include sufficiently well-developed physics can make unique predictions about observables. That opens them to falsification.
Prediction/observation/falsification (or not) is the way of science. So, physical models do have a critical part to play. Climate modelers, however, have removed their models from science, and sealed them away from the ruthless indifference of observation.

Dr. S. Jeevananda Reddy
February 24, 2015 2:55 am

In this connection, I would like to present my own experience in the early 90s. I submitted a paper to an international journal. One reviewer pointed out minor corrections and approved it for publication; the second reviewer gave it excellent marks but at the end made a statement saying it could also be fitted to a linear curve. With this, the regional editor rejected the paper for publication. I then wrote a detailed letter to the Editor-in-chief of the journal. He sent this letter to three regional editors. All agreed with my observations and asked me to split the paper into three parts. They published these in 1995. All three related to papers by editorial committee members. One of the papers related to climate change. The abstract states that “Climate change and its impact on environment, and thus the consequent effects on human, animal and plant life, is a hot topic for discussion at national and international forums both at scientific and political levels. However, the basis for such discussions are scientific reports. Unless these are based on sound foundation, the consequent effects will be costly to exchequer”. Here the authors tried to look into the impact of temperature and rainfall increase on ETp [evapotranspiration] and thus on crop factors. The percentage changes in ETp attributed to climate change can also be attributed [partly] to scientist-induced factors, such as (i) the choice of ETp model and ETp model vs environment; (ii) probable changes in meteorological parameters due to climate change, expressed as absolute change or percentage change; and (iii) ETp changes expressed in terms of absolute changes or percentage changes. All these were explained in the article using their own article. The second paper deals with overemphasis on energy terms in crop yield models: three different groups working under three different country conditions came up with different conclusions on the impact of the energy term on crop yield.
For models to be more meaningful, in a physical and practical sense, and to be applicable in a wider environmental context, they should be addressed as holistic systems, taking into account the abundant information available in the literature on all principal components of a model. With this, I presented an integrated curve that fits all three conditions.
Dr. S. Jeevananda Reddy

Pat Frank
Reply to  Dr. S. Jeevananda Reddy
February 24, 2015 8:28 pm

Congratulations, and more power to you, Dr. S. Jeevananda Reddy. 🙂

February 24, 2015 3:00 am

“neither respect nor understand the distinction between accuracy and precision.”
Damn, we learned that in year 11 chemistry. Our chemistry teacher was, arguably, better than our physics teacher. He had strict standards but was rarely unnecessarily strict.
I’d like to make a point which perhaps might be as clarifying to you as it is to me (although I could be totally off, like an athlete who keeps on running even though the race is over). I say that there is a huge difference between EMULATION and SIMULATION.
Climate simulators are exactly that – they are superficially modelling a climate system. But they are not emulators, and an emulator behaves exactly like the original. And it is becoming obvious to me that one cannot in fact emulate a climate.
Is anyone here into retro computing? Then you’ll know that a computer emulator lets you perfectly imitate a different platform on a foreign host. For example, you can run an Atari ST emulator on a modern Macintosh. That emulator lets you run the system software and applications as if it was the original – there is no difference, save for bugs.
An Atari ST emulator, one of several, can be downloaded here:
http://www.atari.st/pacifist/
I once wrote an ST simulator in JavaScript. It merely resembled the ST’s desktop – it could not run software, save data, or anything. It just superficially resembled the desktop, with its drop-down menus and icons. Here is what it sort of looked like, though this is not mine:
http://www.atari.st/
As for statistics, lots of blowhards think they know statistics. I am a hack with some knowledge but I don’t pretend otherwise.
The psychologist Dr. Richard Wiseman thought he had ‘debunked’ a parapsychology meta-analysis but used incorrect statistics to do it.
Naomi Oreskes seems to think that p values of <0.05 are just 'convention' and apparently knows nothing about standard deviation. And because we apparently know that AGW is true, we don't need those high standards, so let's settle for <0.10 (which, when you think about it, makes no sense, because if AGW is so obviously true, all uncertainty about it would easily fall below 0.05).
And of course we have the usual bullshit PR nonsense that mammograms are 80%+ accurate, that HIV tests are 99% accurate, etc, etc.
On the name of Naomi: One of my favourite actresses is Naomi Watts. One Naomi I know is very gentle. Another is an in-law and is beautiful and happy. I guess it depends on geography!

ferdberple
Reply to  Karim D. Ghantous (@kdghantous)
February 24, 2015 3:42 am

The Base Rate Fallacy shows up in all sorts of places. False positives and false negatives quickly overpower our intuition, leading to overconfidence in our results. For example:
A group of policemen have breathalyzers displaying false drunkenness in 5% of the cases in which the driver is sober. However, the breathalyzers never fail to detect a truly drunk person. 1/1000 of drivers are driving drunk. Suppose the policemen then stop a driver at random, and force the driver to take a breathalyzer test. It indicates that the driver is drunk. We assume you don’t know anything else about him or her. How high is the probability he or she really is drunk?
Many would answer as high as 0.95, but the correct probability is about 0.02. (50 false alarms versus 1 actual drunk driver)
http://en.wikipedia.org/wiki/Base_rate_fallacy
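ferdberple’s numbers can be checked directly with Bayes’ theorem. This is just a sketch of the textbook calculation; the function name and defaults are my own framing of the example above:

```python
def p_drunk_given_positive(prior: float = 0.001,
                           false_positive: float = 0.05,
                           sensitivity: float = 1.0) -> float:
    """Bayes' theorem for the breathalyzer example: the probability
    a randomly stopped driver is truly drunk given a positive test."""
    p_positive = sensitivity * prior + false_positive * (1.0 - prior)
    return sensitivity * prior / p_positive

# 1 true drunk per 1000 drivers vs ~50 false alarms (5% of the 999 sober):
print(round(p_drunk_given_positive(), 3))  # -> 0.02
```

The low prior (1 in 1000) is what drags the answer from the intuitive 0.95 down to about 0.02.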

Joel O'Bryan
Reply to  ferdberple
February 24, 2015 11:57 am

Your analogy works to a point. But BAC analyzer measurements are not a yes/no binary output.
Assume the legal definition of DUI is 0.08% BAC. You state that “the breathalyzers never fail to detect a truly drunk person.” That statement assumes high accuracy with some unstated precision; i.e., your analysis ignores measurement error. If the accused’s measured BAC is 0.085%, the analyzer may round up to display 0.09%, thus legally DUI. But if the manufacturer says the precision is +/- 0.01%, then the accused could be at 0.075%, under the legal limit. For anyone whose recorded BAC is, say, 0.10%, the probability that he or she is really drunk is quite high, and a DUI conviction is beyond a reasonable doubt.
The skeptics here at WUWT (myself included) often hammer the dishonest alarmists for willfully ignoring thermometer measurement precision in the temperature records when they proclaim “highest-ever” alarmism, with the differences claimed to hundredths of a degree.
Do not be guilty of the same mistake.
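Joel O’Bryan’s point about precision straddling the legal limit can be sketched as a simple check. The function and its defaults are my own framing of his numbers, purely for illustration:

```python
def reading_could_be_under_limit(measured: float,
                                 precision: float,
                                 limit: float = 0.08) -> bool:
    """True if the instrument's stated +/- precision leaves the
    true BAC possibly below the legal limit."""
    return measured - precision < limit

# 0.085 +/- 0.01: lower bound 0.075, so the driver may be under the limit.
# 0.10  +/- 0.01: lower bound 0.09, still over the limit.
print(reading_could_be_under_limit(0.085, 0.01),
      reading_could_be_under_limit(0.10, 0.01))  # -> True False
```

The same logic is what skeptics apply to “highest-ever” temperature claims made within the instruments’ error bars.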

garymount
Reply to  Karim D. Ghantous (@kdghantous)
February 24, 2015 5:02 am

I had some clients of the company I worked for use a math co-processor emulator so that they could run a certain CAD program that required that instruction set. My computer, utilizing an Intel 80386 processor, had the math co-processor, at a cost of several hundred dollars extra, and it ran the software 10 times faster than the clients’ computers could. An emulator always runs slower than the real thing, but if you are using modern hardware to run old hardware emulators, you are not going to notice. By the way, the next-generation Intel chips, the 80486 and then the Pentium series, had the math instructions built into them.
A more specific term might be: computer-based simulation of coupled non-linear .?. by the Finite Element Method. When playing computer games, much of the graphics is simulated, because more realistic ray-tracing algorithms take many orders of magnitude more computing time.

Reply to  garymount
February 24, 2015 7:38 am

Are you sure that’s not a 286 needing the co-processor? I recall having to install one in order to run AutoCAD on an IBM PS2 back in the 80s. My first computer was a 486DX66 which didn’t need a co-processor, so it couldn’t have been that one, and I never had a 386.

Reply to  garymount
February 24, 2015 8:32 am

I had a 386 with a co-processor for running a CAD program.

Reply to  garymount
February 24, 2015 9:11 am

Oops, looks like I’m talking about a physically separately-installed chip and you’re not.

Kevin Kilty
Reply to  garymount
February 24, 2015 9:45 am

The 386 had a problem with the math co-processor on board. It was renamed the 386SX, and the problem was corrected, I’m pretty sure, on subsequent revisions. Might have something to do with this.

D.J. Hawkins
Reply to  garymount
February 24, 2015 4:12 pm

I recall reading that when the co-processor became available, ALL 386 chips were printed with it. The co-processor tended to be unstable, so if it failed the burn-in, they laser-cut the traces and voilà! a 386SX chip was born.

garymount
Reply to  garymount
February 24, 2015 4:52 pm

When I bought my third computer (first a Commodore 64 (64KB of memory), second a Sanyo 8086 (or possibly 8088; I will have to look at the box, but it’s currently in my garage attic)), it was an 80386. I bypassed the 286, unlike everyone else in my office who went cheap and bought an obsolete 16-bit system (though it was the biggest-selling computer worldwide that year), and my boss paid for the separate math co-processor, about $400 or so back then, which plugged into the available socket on the motherboard/mainboard.
As for the 386SX, it did not have a math co-processor.
“In 1988, Intel introduced the 80386SX, most often referred to as the 386SX, a cut-down version of the 80386 with a 16-bit data bus mainly intended for lower cost PCs aimed at the home, educational, and small business markets while the 386DX would remain the high end variant used in workstations, servers, and other demanding tasks. The CPU remained fully 32-bit internally, but the 16-bit bus was intended to simplify circuit board layout and reduce total cost.[13] The 16-bit bus simplified designs but hampered performance”

Reply to  garymount
February 24, 2015 6:10 pm

Meant to type “separate” before co-processor. It wasn’t even made by Intel.
386SX could only address half the RAM of a 386. When the 386 was first released, there were no 32-bit co-processors available, so machines were made with sockets to take existing 16-bit co-processors, but I never saw one.

James Hein
Reply to  garymount
February 24, 2015 6:45 pm

I had a 286-based PC that I used to run a program I'd written to calculate the position of the moon over a month. It took about 30 minutes to complete all the calculations, with the occasional printer line being generated. When I eventually added a co-processor it was so 'fast' the printer couldn't keep up, and the speed seemed so wrong the first time that I couldn't even watch 🙂
Similar experience with downloads when moving from a 300 baud to a 9600 baud modem.

Pat Frank
Reply to  Karim D. Ghantous (@kdghantous)
February 24, 2015 8:34 pm

Karim DG, agree about learning error propagation in Chemistry. I learned it big-time when taking Analytical Chemistry as a college sophomore. I can’t speak to your distinction between emulation and simulation. Guess it’s a matter of applied meanings. But congratulations with your Naomis and you’ve got good taste in actresses.

Tony
February 24, 2015 3:01 am

The IPCC doesn’t have any models. They just have black-box hindcast curve fitting. It is impossible to model anything when most of the science is unknown. Finite approximations are a joke when the cells are so big as to completely miss climate features as huge as thunderstorms.
The real test of a real climate model will be whether it can predict the weather next week. Don’t hold your breath.

Alex
Reply to  Tony
February 24, 2015 4:18 am

The problem is that the IPCC does not make the claims that MSM makes. If you want to make a difference then eviscerate these activist journalists. Make them lose their jobs.

Jim Francisco
Reply to  Alex
February 24, 2015 1:44 pm

Brian Williams is one of those of whom you speak. I could not take any more of his alarmism about the climate, so I tuned him out months ago. Maybe he’s gone for good.

TYoke
Reply to  Tony
February 24, 2015 8:31 pm

“They just have black box hindcast curve fitting. ” That assessment may be overly harsh, but the focus is certainly correct.
Steve McIntyre recently did some analysis on the large difference in agreement between the forecast and observations, versus the hindcast and observations.
http://climateaudit.org/2014/12/11/unprecedented-model-discrepancy/
The comment thread below that article is also well worth reading.
The modelers all hotly deny that they are overfitting, but the dramatic difference between the success of the Hindcast versus the Forecast, shows without a doubt that they are indeed overfitting, however much they wish to deny it.
Wikipedia actually has some nice articles on this topic.
http://en.wikipedia.org/wiki/Overfitting
http://en.wikipedia.org/wiki/Data_dredging
http://en.wikipedia.org/wiki/Misuse_of_statistics

Dorian
February 24, 2015 3:04 am

Once again we have to do this stupid story.
Climate Science is based upon statistical analysis and computer modelling. Neither of these is scientific. At best, they may, and I say “may”, be assumed to be some sort of loose engineering, but science they are not, never have been, and never will be.
Statistical analysis is a methodology for extracting information out of raw data. The problem is that this supposed methodology has nothing to do with physical reality. It only deals with taking the biased assumptions of the statistician, looking for the best mathematical construct to overlay on the data, and then extrapolating a physical reason. Real science works the other way around: it takes solid, reality-based physical observations and lets the mathematics evolve from the facts and data. In other words, science is about taking physical events and looking for the math, while statistics, and therefore climate science, is about taking theoretical models and looking for the physics, which is not science but religion by Ex Cathedra.
Yes, Climate Science is an oxymoron; there is no science in “Climate Science”, it is just pure religion. And as I have been pointing out for quite some time now, Climate Science is not the only religion in science: we have the same problems in Physics (cosmology and astrophysics), Biology (evolution and BCS theory), Economics (macroeconomics), Archaeology (Egyptology and Sumerology), Anthropology, Paleontology and so on.
Focusing on the details of what is wrong with the reviewers will not change anything. Academia and academic practice are now largely determined by “Ex Cathedra”. The true scientific method and the pursuit of knowledge have now broadly been usurped by religious attitudes, protected by priests, high-priests and godlike mentalities. You can thank tenure for this. Tenure was once supposed to protect the voice of reason and progress, but like everything, once it has been around long enough it becomes corrupted. Tenure now largely serves to protect the criminal, the dishonest, the incompetent and the megalomaniacs.
For thousands of years society, knowledge and science evolved without the use of tenure. But now universities and schools are entering a new dark age of knowledge, where Science will be determined by the priests, popes and gods of the academic church, not only backed but empowered by the armed might of the police and military that governments wield through undemocratic legislation and the court system.
Science has only itself to blame for this disaster we are in. Complain all you like about the morons, idiots, incompetents, buffoons and megalomaniacs of academia. Just remember how they got there. They got there because every one of you who went to university played along with the rise of these corrupt people. Don’t forget, as much as you like to criticize these thieves and criminals: they too have Ph.Ds, they too have Chairs of distinction, they too control the journals of note (like Nature, Physics Letters etc.), they too run the departments as Deans, they too win the accolades, they too have built up reputations on their armies of acolytes and sycophants, they too have the ears and attention of corrupt governments, and ABOVE ALL THEY, NOT YOU, HAVE CONTROL OF THE AGENDA….ALL THANKS TO YOU FOR NOT DOING YOUR JOBS when you should have been doing them in the past.
And now everybody is crying poor Science! This has been a long time coming. I saw this when I went through my years at university in physics: where people lied in their papers about their results; where they took one set of data and published multiple papers (in one case I remember well, a Ph.D candidate got 7 papers from the same set of results into 7 different PRESTIGIOUS journals!); how the candidates who supported the right Professor got excellent funding, and those who didn’t got only hardship; how researchers all around the world formed cliques of mutual interest and passed each other’s papers through the review process; how data was conveniently fudged to look good, and when the raw data was asked for in confirmation, it was always conveniently lost. Ah, how many papers out there in all these fields of endeavour are false? How many? 10, 20, 30% or more? You would be surprised: if you applied the same analysis to all the papers, you would probably fail some 80 to 90% of them! How do I know this? For I too once reviewed. Once! Not any more, for I used to do the same analysis on every single paper, and barely 10% passed, and then only after revisions were made. Then the word came that I was failing too many papers, and that was the end of that. Standards must never be kept in Science; they must always be lowered, it so appears.
What are you people all crowing about! This blog entry is RIDICULOUS!
Go to any, ANY JOURNAL. Take a random sampling of 10 papers and put them through a thorough analytical review, INCLUDING checking all the references (and here is another sneaking perversion of science), and you will find that 8 or maybe 9 of them FAIL!
Complain all you like, guys. But the truth is this: we have fields of endeavour that now belong more in science fiction movies than in academia. For example, we give Nobel Prizes in Economics; are you aware that NOT A SINGLE THEORY IN ECONOMICS HAS EVER WORKED? That’s right, economics is pure poppycock, and yet we teach it. Evolution: absolutely no proof for evolution. Biological classification theory is totally based upon committee, NO SCIENCE. Cosmology: all based upon data and photos that nobody can verify. High Energy physics and String Theory have so many holes in them that they make a black hole look full. How about Psychology? There is no science in it; psychiatry and its bible the DSM are run by committee: no science, no evidence, no proof. How many people are declared ill and forced, even by the courts, to take meds that have ABSOLUTELY no proof of aid?
You guys are complaining about Climate Science! Bwahahahahahahah!
We have so many problems that are far, far greater. Our university systems need an overhaul. Our economies need to be restructured, our democratic rights are being eroded by corrupt, incompetent governments. Our scientific journals are FILLED with rubbish.
You have lost sight of reality if you think that doing a paper review is going to change this problem, the problem of lying cheaters who never should have gotten a degree in science in the first place, for ….
THE LUNATICS ARE NOW RUNNING THE ASYLUM (aka university).
THE QUESTION YOU SHOULD ALL BE ASKING IS THIS….
HOW DO WE GET RID OF THE LUNATICS AND GET SOME SANITY BACK INTO THE SYSTEM?
Reviewing papers is not the issue. MORONS, LUNATICS AND CHARLATANS RUNNING AROUND AND MASQUERADING AS SCIENTISTS, PHYSICISTS, BIOLOGISTS, ECONOMISTS, POLITICIANS, JUDGES, AND NOBEL PRIZE WINNERS ARE THE PROBLEM.
How do we clean up the system? Forget about the toilet paper these people write; that’s easy to fix, you flush it down the toilet. How do we stop creating more of these lunatics?

Alex
Reply to  Dorian
February 24, 2015 4:02 am

The only cure for stupidity is death. I’m not suggesting wholesale culling. Darwin will sort it out eventually.

Jim Francisco
Reply to  Alex
February 24, 2015 9:54 am

Yes Alex. In the good old days we used to allow the stupid people to kill themselves. Now we do our darnedest to stop them. I think that the reason we try so hard to stop them is that now more so than in the past they take many undeserving people with them.

JohnTyler
Reply to  Dorian
February 24, 2015 7:25 am

Regarding your comments about economic “science”: note that not one of their theories has been or can be subjected to any sort of controlled experiment. Theories promulgated by “influential” academics are considered sacrosanct and beyond refutation. Economists NEVER, EVER consider that their theories (actually more akin to conjectures) are wrong, even when, applied in the real world, they produce results contrary to those intended; and this occurs MOST of the time. Coin flipping would produce the correct strategy more often than the garbage produced by academic economists.
Economists produce papers that are awash in formal mathematics buried under unintelligible econo-jargon. What matters is “the model;” the more math, the better.
Astrologers at least can tell you where the planets will be sometime in the future. Economists CANNOT TELL YOU when a recession has hit until AFTER it has started!!! What kind of science is it that has ZERO predictive ability?
As a result of this “science,” based purely on opinion and on the popularity of a particular individual or group of individuals, we have the farce, the joke, the scam of “liberal” vs “conservative” economists.
WHAT !!!!………the POLITICAL IDEOLOGY of the economist determines the economic strategies that should be pursued.
If you are seeking a “science” that is more of a farce, a scam, a joke than climate “science,” take a gander at economic “science.” Unfortunately, just like the climate charlatans, the guinea pigs in their exercises are the citizenry, who get royally screwed over, once again, by the “elites.”

Jim Francisco
Reply to  JohnTyler
February 24, 2015 10:17 am

John. I received an economics lesson from my father about fifty years ago that makes me agree with your observation. He was a farmer as a youngster. He explained how the prices and more importantly the profits in animal feed and animal production would cycle and why they would cycle. It was a problem that they could live with until the government got involved and tried to fix it. All the government managed to do was lengthen the time period of the cycles. This made matters much worse for the farmers because it made the non profit period longer and thus more farmers had to throw in the towel. The more the government tried to fix things the more fixing was required. We are still living with the fixes and requiring more.

Neil Jordan
Reply to  JohnTyler
February 24, 2015 10:39 am

You drilled into a nerve on economics “science”. RJ Gilbert in Tau Beta Pi “Bent” issue Spring 1993 summarized it nicely: “. . .Economics is a difficult subject because it is not about the control of a passive system. Rather, it is about the design of policies in pursuit of complex objectives in a system comprised of people who are at least as intelligent as the government that is attempting to influence their behavior. . .”

dyugle
Reply to  JohnTyler
February 24, 2015 4:40 pm

Neo classical economics is based on numerous assumptions that are not true in the real world.
The two most ridiculous ones are 1. Perfect information and 2. Perfect Scalability.
Think about how many industries rely on selling information and the laws in place to protect information.
If we had perfect information none of them would need to exist.
Think about a mine and mineral processing plant. When the price drops, they high-grade the orebody, which means the mine actually produces more mineral. If the price goes up, the opposite happens, so there is actually less mineral production. This is the opposite of perfect scalability.

Bill McCarter
Reply to  Dorian
February 24, 2015 8:28 am

I totally agree with you. The reason for this inaccuracy in our knowledge base is the inadequacy of our use of spoken and written language, i.e. ”You keep using that word. I do not think it means what you think it means!” (S. Morgenstern). LOL
Misusing the word science to mean just about anything is a disservice to our gaining of knowledge. Indeed, misusing any word confuses the logical train of thought (see wealth, money and jobs). The general use of the word Science as a noun, or a verb, or an adjective will guarantee obfuscation of the real meaning of the thought conveyed. I prefer to treat science as a process, not a noun.
I am reminded of when I was cutting steel to precise sizes: I could not just use a ruler, measure it 500 times, and compute the average to find the dimension to a ten-thousandth.
Accuracy is easier to obtain when precision is in the mix.

Paul Marko
Reply to  Dorian
February 24, 2015 11:17 am

Why is it the Ted Kaczynski Manifesto comes to mind?

Harold
Reply to  Dorian
February 24, 2015 3:26 pm

I like it. The bumf solution.

Pat Frank
Reply to  Dorian
February 24, 2015 8:50 pm

Gotta say, Dorian, that my experience in Chemistry is not your experience in Physics — whatever branch it was. I review papers regularly, and most of them are competently done; possibly incomplete somewhere or perhaps not taking the analysis far enough. I’ve never been censured for being too critical. In fact, I’ve been thanked for being critical.
So, while science is certainly under vicious attack, mostly by Progressives these days, I tend to be long-term optimistic.
Tenure was a good idea, so long as academics honored their side of the contract. Their side is to speak as objectively as possible. The university’s side is that no one can fire them for doing so.
But academics, especially in the Humanities, the soft sciences like Cultural Anthropology, and in any department with a name ending in “Studies” no longer speak objectively. They’ve become openly and loudly partisan and political. In my opinion, this violates and, indeed, abrogates, the tenure contract. University presidents have been grossly remiss in allowing this to continue. Politically partisan faculty should be let go, as having fatally violated their tenure contract.

Ed Zuiderwijk
February 24, 2015 3:13 am

There are three aspects of the global weather/climate system that are fundamental to its workings: the Pacific Decadal Oscillation, the North Atlantic Oscillation and the El Nino/La Nina perturbations. Any atmosphere/ocean coupled model worth its salt should have phenomena similar to these emerge from its simulations (that is, with extents and time scales similar to the real thing). None of them do. Therefore some things very fundamental are not yet understood, let alone included in those models.
That climate modellers nevertheless think those models are good enough to base public policy on shows that they lack the self-criticism inherent in real science. They are therefore little more than glorified simulators. Their models relate to the real world as cartoon figures to real people.

Reply to  Ed Zuiderwijk
February 24, 2015 7:43 am

Ed Zuiderwijk:

That climate modellers nevertheless think those models are good enough to base public policy on shows that they lack the self-criticism inherent in real science. . .

If the agenda is really public policy (e.g. ‘global governance’, ‘climate justice’) then it doesn’t matter if the models have any basis in reality; they have been created to support the agenda with a specious ‘scientific’ legitimacy. They have the advantage of being so arcane that they are beyond the ken of ordinary people; only the high priests of climate scientism are admitted into their mysteries.
Clearly the author of this post, Pat Frank, has not been properly initiated, or he would have seen that so naive a concept as ‘error propagation’ does not apply to the sacred models, which inhabit a realm unblemished by mere empirical facts.
/Mr Lynn

Pat Frank
Reply to  L. E. Joiner
February 24, 2015 8:53 pm

Prescient comment, L.E.J. One of my reviewers dismissed it as “naive error propagation theory.” He went on to demonstrate in his review that he knew nothing of it.

FAH
February 24, 2015 3:17 am

It is with deep sadness we note the passing of the Null Hypothesis in climate science. Born about 1925, Null has had a long and distinguished career testing the significance of an immense variety of theories and conjectures. In particular, Null brought to science a realization of the medical injunction to “First do no harm,” by not claiming the truth of a hypothesis without clear evidence. Unfortunately, in recent years Null fell into declining health, contracting a serious case of consensus from which Null never fully recovered. Finally, when complications set in from natural variability and other signs of “bad data,” Null finally expired. In lieu of flowers, Null’s estate asks that you donate to the statistician of your choice.

Reply to  FAH
February 24, 2015 6:17 am

“…the passing of the Null Hypothesis in climate science.”
Well, I don’t mean to be harsh here at such a solemn time, but perhaps if Null had stayed out of politics…

Harold
Reply to  JohnWho
February 24, 2015 3:27 pm

Nope. Politics came looking for Null.

Joel O’Bryan
Reply to  FAH
February 24, 2015 6:37 am

The climate pseudoscience Null Hypothesis did not die a natural death. It was a back-alley mugging, a paid hit job. A sort of 9mm-sized brain haemorrhage, if you will.

Oatley
February 24, 2015 3:18 am

Take heart, your post is important.

Pat Frank
Reply to  Oatley
February 24, 2015 8:54 pm

Thanks, Oatley.

ferdberple
February 24, 2015 3:21 am

Are Climate Modelers Scientists?
============
what is the formal scientific definition of Climate Change? What is the formal term for Natural Climate Change as opposed to Anthropogenic Climate Change?
Should not the term “Climate Change” refer to all forms of Climate Change, both Natural and Anthropogenic? Why does climate science not follow the standard rule of language, from general to specific?
Why in climate science does the general refer to the specific, while to refer to the general you must use the specific? Where else in science is this done?
How can you do science if you cannot even define your terms, except to violate standard practice in language?

Pat Frank
Reply to  ferdberple
February 24, 2015 8:56 pm

Guess you called it, Ferd — non-scientist modelers ended up doing non-science.

Allanj
February 24, 2015 3:50 am

Oh, for the good old days of slide rules. With slide rules you had to think through the problem to get the right magnitude. Now with calculators and computers you can get ten digits of precision with absolutely no understanding of the problem.

Alex
Reply to  Allanj
February 24, 2015 4:22 am

Yeah. Had to know what you were doing with a slide rule. I had many. Mini ones and circular ones. For some reason I never made a mistake.

Chip Javert
Reply to  Alex
February 24, 2015 6:58 pm

You just did with that claim.

Reply to  Allanj
February 24, 2015 7:43 am

+97 billion (+/- a trillion)
Oops, sorry. I had the climate science math co-processor enabled on my computer.

Joe Crawford
Reply to  Allanj
February 24, 2015 11:08 am

At least with my old K & E Log Log Deci-Trig you had to mentally calculate the decimal point and check the reasonableness of the result. Most kids today, using hand-helds, haven’t the foggiest idea whether the answer they get is even close to the right order of magnitude.

Walt D.
Reply to  Joe Crawford
February 24, 2015 12:16 pm

You forgot the zeroth law of Climate “Science?” – 1 is approximately equal to 10.

Jimmy Haigh.
February 24, 2015 4:04 am

Pretty devastating really.

Mark Westaby
February 24, 2015 4:05 am

All computer models are wrong but some are useful. Climate “experts” fail to recognise or even accept this basic truth. There are many reasons why computer models should never be used to predict the future — and there are even more when they apply to a complex system such as climate — of which this is a very good example.
It is CRUCIAL that papers such as this be published and there must be a publisher somewhere who recognises the difference between PROPER, objective scientific review and what passes for this in today’s supposedly scientific media.

Alex
Reply to  Mark Westaby
February 24, 2015 4:30 am

Thermal distribution software for printed circuits (which you would imagine to be quite simple) still tells you that it is a simulation and that you have to build the circuit and ‘suck it and see’.

Reply to  Mark Westaby
February 24, 2015 10:33 am

Truthseeker, chaotic non-linear systems are extremely difficult for classical physics to handle. The partial differential equations used are mathematics and do correctly describe particular phenomena, but they cannot be solved for a unique value, only estimated numerically. That estimate immediately accrues calculation error when numerous calculations involving small increments, as in a climate model, are made. After some limited number of steps the errors overwhelm any actual result. But chaotic systems can still be studied scientifically; they just involve a whole series of intractable mathematical problems that haven’t been solved yet. Christopher Essex’s several lectures on the problems with computer numerical modelling, and Lorenz’s original article on discovering the “butterfly effect” (through a climate model), are pretty much still up to date as an intro.
This essay on accuracy and precision, and how to handle the errors in each, shows that climate modelers haven’t really grasped the ideas yet. My physical chemistry class spent the better part of a quarter (72 hours of class) just covering the very basic stuff on errors in measurement and how they balloon even in very simple calculations.
As the Essex lecture points out, and many of us have also, there is no such thing as a global temperature, because the way it is constructed does not deal with observations but with statistical constructs from the data. Using the “GAT” as an input to any kind of simulation becomes a simplistic method of getting wrong answers, because the physics involved has nothing to do with a non-existent average temperature but with the particular temperature affecting a process in a particular place.
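A minimal sketch of the stepwise divergence described above, using the Lorenz (1963) system mentioned in the comment. Everything here (step size, initial point, perturbation size) is chosen purely for illustration and is not from Frank's paper: two integrations started a hair apart end up macroscopically separated.

```python
# Sketch: sensitive dependence in the Lorenz (1963) system.
# Two trajectories starting 1e-8 apart diverge as the steps accumulate.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def separation_after(n_steps, eps=1e-8):
    """Euclidean distance between two runs after n_steps, started eps apart."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + eps, 1.0, 1.0)   # tiny initial perturbation
    for _ in range(n_steps):
        a, b = lorenz_step(a), lorenz_step(b)
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

early = separation_after(200)    # short run: error still tiny
late = separation_after(4000)    # long run: error has exploded
print(early, late)
```

The point is only the qualitative behavior: the separation grows by many orders of magnitude, exactly the kind of blow-up that makes long stepwise calculations untrustworthy without an error analysis.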

Alx
Reply to  logicalchemist
February 24, 2015 3:53 pm

Well put.
I always thought of this problem as what happens to two parallel lines when one is offset by a fraction of a degree. How long before the lines are meters apart? Kilometers apart?
So yes, tiny errors can result in huge errors down the processing line.
Put another way: “To err is human; to really f**k it up you need a computer.”
The lesson being that humans make errors, but computers can then replicate and compound those errors a thousand times a second.
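A back-of-envelope version of that parallel-lines picture (the specific numbers are mine, chosen just for illustration): the offset grows as length × tan(angle), so even a 0.1-degree error puts the lines meters apart within a kilometer.

```python
import math

def offset(length_m, angle_deg):
    """Lateral separation of two lines diverging by angle_deg, after length_m."""
    return length_m * math.tan(math.radians(angle_deg))

print(offset(1_000, 0.1))    # about 1.75 m apart after 1 km
print(offset(10_000, 0.1))   # about 17.5 m apart after 10 km
```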

M Courtney
February 24, 2015 4:07 am

There is no way to know which of the simulations actually represents the correct underlying physics. Or whether any of them do. And even if one of them happens to conform to the future behavior of the climate, there’s no way to know it wasn’t a fortuitous accident.

Seeing as the hundreds of other models certainly don’t conform to the future behaviour of the climate, as they don’t even all track each other, it must be a fortuitous accident.
If it isn’t, why not just run the one model that works?

whiten
Reply to  M Courtney
February 24, 2015 7:32 pm

@M Courtney
February 24, 2015 at 4:07 am
If it isn’t, why not just run the one model that works?
—————
The simple answer, as far as I can tell, is:
Because you do not get enough warming projected; very little warming, actually.
They force the models, by a simple trick, to generate extra warming.
The warming projected is not warming due to the GHG effect only; it is artificially inflated warming.
They know that, because it is done on purpose, not accidentally, even if it may be passed off as one of those accidental errors.
They are not interested in doing that. Simply a conflict of interest.
They do not want to know the right model that works, because then there would be no AGW projections.
That is why the beautiful and perfect work of Pat is rejected by these guys.
cheers

TLM
February 24, 2015 4:09 am

Brilliant piece! The “propagated error” point is a revelation to me. I have really learnt something here.
I read the Bank of England quarterly inflation reports and wondered why all their graphs had the same shape as the graph on the right of your figure. Now I know. They run economic models and clearly understand the uncertainties inherent in them and the effect of propagated error. Interestingly, they add a probability function to their “fan charts”, so you can see that the chance of the errors all being in the same direction is lower than of their being more balanced, some positive, some negative. The central point, however, is that the actual result has a positive chance of being anywhere in the fan, and each quarter they critically compare their previous prediction with the actual outcome. Something environmental scientists seem reluctant to do.
See charts 5.1, 5.2 and 5.11 on the paper linked below:-
http://www.bankofengland.co.uk/publications/Documents/inflationreport/2015/feb5.pdf
I would really like to read this paper. Keep trying, perhaps you should try some Statistics journals rather than Environmental Science journals. They will have less of a vested interest in climate modelling and you might get reviewers who actually know what they are talking about. Maybe you could even get your paper accepted in an Economics journal, possibly rewritten to contrast the cleverness of economic modellers with the stupidity of climate modellers. Everybody responds to flattery!

Pat Frank
Reply to  TLM
February 24, 2015 9:05 pm

Thanks, TLM. I’ll keep trying.

urederra
February 24, 2015 4:09 am

This is the best WUWT article I have read in a very long time.
The accuracy vs. precision problem is spot on. I have also noticed that some commenters here have the same problem. When world temperatures are discussed, some people on both sides of the fence seem to have trouble telling the two apart. They complain about error bars in world temperatures when the real problem is lack of accuracy. (Well, also the concept of the temperature of a system that is not in equilibrium is a messy one and does not equate to the total energy of the system.)
Also, graph b is the type of graph one would expect from any kind of modelling that takes the results of one iteration and uses them as the starting point of the next.
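A minimal sketch of how such an envelope grows under iteration, assuming a constant per-step uncertainty accumulated in quadrature (the standard root-sum-square rule for independent errors). Whether this matches the paper's actual propagation calculation, which converts forcing error into temperature uncertainty, is an assumption here; the ±4 W/m² figure is the CMIP5 cloud-forcing error from the head post.

```python
# Sketch: uncertainty envelope from a constant per-step error,
# accumulated in quadrature (root-sum-square) over each iteration.

def envelope(u_step, n_steps):
    """Total uncertainty after each of n_steps iterations."""
    total_sq = 0.0
    env = []
    for _ in range(n_steps):
        total_sq += u_step ** 2   # independent errors add in quadrature
        env.append(total_sq ** 0.5)
    return env

env = envelope(4.0, 100)   # e.g. a +/-4 W/m^2 error compounded over 100 steps
print(env[0], env[99])     # 4.0 after one step, 4*sqrt(100) = 40.0 after 100
```

The envelope widens monotonically as the square root of the number of steps, which is why stepwise projections fan out even when each individual step looks precise.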

emsnews
Reply to  urederra
February 24, 2015 9:59 am

Hard to be accurate with gross temperature data tampering going on.

Pat Frank
Reply to  urederra
February 24, 2015 9:07 pm

Thanks, urederra. I’ve yet to meet a climate modeler who’d understand your point.

milodonharlani
February 24, 2015 4:12 am

“Climate science” replaced climatology when NCAR got access to a supercomputer designed to model thermonuclear explosions.

knr
February 24, 2015 4:23 am

One very good question to ask is: if not models, what else?
In reality, when you ask this question you find that the evidence for the whole ‘we are doomed’ game is pretty much rubbish without the models. Given that, you can see why, despite their inabilities, the models have to be defended and promoted so heavily. There are a lot of careers, cash and political ambitions resting on their shoulders.

Alex
Reply to  knr
February 24, 2015 4:47 am

Quite simple really. We are in the age of virtual reality. Most people hate their lives and live in a virtual world. You can have virtual love, relationships, sex (there is an app and equipment for that); soapies, TV series, movies of every kind to suit every taste. Models are no different. The MSM can make high drama out of this, and most of the sheep lap it up.
I, for one, am hanging on to the toilet rim and refuse to be flushed down with the rest of the idiots.

Bubba Cow
Reply to  Alex
February 24, 2015 5:20 am

well here in reality that 340W/m2 has moved (expanded?) my model thermo-meter from -30F to -20F since dawn and with a probably pretty good albedo given the whiteness of my view – but blue skies for the transparent greenhouse so nada in the backradiation scam
On the interior, firewood is oxidizing nicely.

Jim Francisco
Reply to  Alex
February 24, 2015 11:07 am

I’m going kicking and scratching all the way, too, Alex.

Jim Francisco
Reply to  Alex
February 24, 2015 12:16 pm

Bubba – you should get out of there! -30 is not fit for man nor beast. And that’s C or F degrees.

Reply to  knr
February 24, 2015 6:24 am

“One very good question to ask is , if not models what else?”
Ouija board,
Magic 8 ball,
tea leaves,
Mom’s intuition,
Great Zoltar
– I’m sure there are many others of comparable projection/prediction ability.

Quinn the Eskimo
Reply to  knr
February 24, 2015 7:03 am

EPA formally stated in the Endangerment Finding for GHGs that the attribution of warming to humans rests on 3 lines of evidence: 1. Temperature Records, 2. Physical Understanding of Climate, and 3. Models. They claimed >90% confidence based on these 3 lines of evidence. AR5 bumped that to 95%.
Nos. 2 and 3 are total crap.
Hot spot, anyone?
No. 1 – we are well within natural variability and so there is no basis for an inference that humans have caused an excursion beyond natural variability.

Pat Frank
Reply to  knr
February 24, 2015 9:08 pm

I call it my trillion dollar paper, for exactly that reason, knr. 🙂

SanityP
February 24, 2015 4:44 am

Your work obviously belongs in a mathematical/statistical journal, not in “climate science”.

Pat Frank
Reply to  SanityP
February 24, 2015 9:11 pm

Lots of climate modelers have their degrees in mathematics, SanityP. Science is pretty grubby to them, what with all that messy observational stuff and materiality (dirt). My instinct is to avoid such journals.

gaelansclark
February 24, 2015 4:48 am

Can you name the reviewers?
There are so-called “name and shame” campaigns that go after those who do not support the “consensus” position… so why can we not know who it is that has zero understanding of their own models?

Alex
Reply to  gaelansclark
February 24, 2015 4:55 am

Closed shop. The reviewers are ‘anonymous’. Nice thought though.

whiten
Reply to  Alex
February 24, 2015 7:40 pm

Oh come on, Mann could not be there…..or could he!…:-)
cheers

Urederra
Reply to  gaelansclark
February 24, 2015 4:55 am

No, you are not allowed to know the name of the reviewers when you submit a paper to a journal.

Alex
Reply to  Urederra
February 24, 2015 5:05 am

Sometimes the reviewers know each other and the person presenting the papers. Draw your own conclusions from that.

rd50
Reply to  Urederra
February 24, 2015 4:58 pm

Close, but no cigar. Yes, you are allowed; indeed, some journals now have “open peer review”. But these are exceptions, I will grant you that.
However, the best example of open peer review I can give in this AGW field is the original paper of the “Father” of AGW.
The title of the paper “The Artificial Production of Carbon Dioxide and its Influence on Temperature”, published in 1938 in Q.J.R.M.S (certainly a top scientific journal) by G.S. Callendar.
You can download a copy of it for free from here:
http://onlinelibrary.wiley.com/doi/10.1002/qj.49706427503/pdf
You can then read the comments of the reviewers, as well as their names, quite a few of them, under the Discussion of the paper. Then you can read the answers from Callendar to them. Surprise?
I certainly do not want to go off topic about peer review. But I also had a surprising experience.
In 1971, I submitted a review article and received comments from one reviewer in the typical anonymous fashion. When the article was published I was very surprised to see the name of the reviewer printed on the title page with the note that he was the reviewer of the article.
I am not sure why; when I wrote the article I certainly was not yet established as a scientist in that particular field. He was over 60 and well respected.
Then the article was and is still cited and became a “fixture” in that field. Another surprise: several authors when citing the article added his name (I think by honest mistake) as a co-author!
A few years later, I had the pleasure of meeting him as we served on an advisory committee. We had a few drinks, a nice dinner and he was still teasing me about a small part of the article he did not like. I teased him about being a false co-author. So much for peer review. Never perfect, but needed.
My impression now is that with the Internet, we are seeing major changes in scientific publishing and we will also see major changes in peer reviews and open comments.
By the way, if you read the paper, you will see that the Father loved the increase in CO2!

whiten
Reply to  Urederra
February 24, 2015 7:44 pm

rd50
February 24, 2015 at 4:58 pm .
Funny, the Father was not even an AGWer, and certainly he would be mad if he was considered as such…..:-)
cheers

Pat Frank
Reply to  gaelansclark
February 24, 2015 9:12 pm

Alex is right, gaelanclark. The reviewers were anonymous.

TRG
February 24, 2015 4:59 am

So, I wonder how many of the commenters here actually understand what Pat Frank wrote about. On casual reading, it was over my head.

Alex
Reply to  TRG
February 24, 2015 5:08 am

You want the truth? You can’t handle the truth. Develop a suspicious nature and read a little more widely. It’s something that is pervasive in all sciences.

M Courtney
Reply to  TRG
February 24, 2015 6:37 am

If I read this correctly…
Basically, he wrote a paper that pointed out that accuracy of models (how close they are to reality) is not the same as precision (how much they wobble around – which is a function of the models, not the real world).
The peer reviewers got confused between the two ideas and thus, conveniently, rejected the paper.
In addition he points out that errors in the start values (or maybe in the model assumptions) are iterative – they are repeated at every step. As such they add up.
“You owe me a fiver ± a friendly pint” is fine. No-one keeps count of the friendly pint.
But if the same thing happens day after day post-work then you can feel the resentment growing. That “friendly pint” becomes significant.
Yet the peer reviewers seem to think that the wobbles around the start are a limit to the number of friendly pints, so they can be ignored. They are wrong – it repeats and adds up.
He also pointed out that error boundaries (how far from physical reality the models are expected to be) are not the same as the range of wobbles (precision), as the wobbling is not wobbling about the real world; it wobbles about whatever the models are centred on. Again the reviewers get a little confused. Apart from the one he thinks isn’t a “Climate Scientist” and who he therefore thinks may be a competent scientist.
The rest was further illustrations of that theme. If I understood the author correctly.
Hope that helps.
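The friendly-pint arithmetic above can be sketched in a few lines. This is a hypothetical illustration with made-up numbers, not the paper’s actual calculation: a fixed per-step uncertainty u, accumulated in quadrature over n iterative steps, grows as u·√n – it compounds, it never averages away.

```python
import math

# Hypothetical illustration (made-up numbers): a fixed per-step
# uncertainty u, repeated over n iterative steps, accumulates in
# quadrature to u * sqrt(n) -- it grows, it does not average away.
def propagated_uncertainty(u_per_step, n_steps):
    """Root-sum-square of n identical per-step uncertainties."""
    return math.sqrt(n_steps) * u_per_step

for n in (1, 10, 100):
    print(n, propagated_uncertainty(0.1, n))
```

After 100 steps the accumulated uncertainty is ten times the single-step value, which is the pint that “becomes significant”.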

Streetcred
Reply to  M Courtney
February 24, 2015 3:16 pm

Thank you, M Courtney 😉

whiten
Reply to  M Courtney
February 24, 2015 7:57 pm

Perfect explanation. MC..:-)
If you allow to add me a single line of conclusion….please.:-)
The modelers were told by Pat, in a very fine and clear way, that their models obviously break the very first Commandment for models… and the answer basically was that it does not matter at all – that is how they like their models, regardless of how wrong and perverse that may be.
cheers

Pat Frank
Reply to  TRG
February 24, 2015 9:13 pm

M Courtney gave a good basic description, TRG. I, too, hope that helped. Thanks, M Courtney!

garymount
February 24, 2015 5:13 am

Interesting new developments in the computing world, with people building supercomputers out of cheap $35 computers arranged into computing nodes. Here is an example of a 32-node compute cluster using the Raspberry Pi version 1 (version 2 has 4 cores instead of version 1’s single core, so the same cost could yield 128 computing cores):
Imagine what we the skeptics might have available to us before the end of this decade to investigate (run) climate models on our own.

Alex
Reply to  garymount
February 24, 2015 5:31 am

I’ve been looking at that stuff. Looks cool. The possibility of streaming the output of satellite data live (10 minutes). It will probably upset some people. HaHa

Andres Valencia
Reply to  garymount
February 24, 2015 8:10 am

Thanks, an excellent post, Pat Frank.
Another nail in the IPCC coffin.

Pat Frank
Reply to  Andres Valencia
February 24, 2015 9:14 pm

Thanks, Andres.

Reply to  garymount
February 24, 2015 8:12 am

I hope these computer builders don’t waste their time running useless IPCC models.

DirkH
Reply to  garymount
February 24, 2015 11:59 am

Nice toy but wrong approach for a number cruncher. Highest performance in TFlops/Watt – and price as well – can only be achieved with high concentrations of actual computing pipelines, SIMD arrays, like the NVidia cards or SoC’s like Xilinx ZynQ – which has 250 DSP slices embedded in an FPGA (and 2 ARM cores for controlling the thing).

garymount
Reply to  DirkH
February 24, 2015 4:41 pm

These things have GPUs on them that could also be used for computing. It is an inexpensive way for a person like me to try out climate model code.
On the other hand, Intel has an 18-core hyper-threaded chip that can run 36 threads simultaneously, but it is rather expensive – in the thousand-dollar range.
Microsoft is building a Windows 10 variant for the Raspberry Pi. When that becomes available, I will seriously look into building my own compute cluster.

Ian W
Reply to  garymount
February 24, 2015 5:48 pm

They are nice toys, but the problems raised in this post still hold. It is straightforward Lorenz: the start data are inaccurate, and the models lack the capability to model everything in the chaotic climate system. Even the IPCC said: “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
Small errors in the initial state propagate, but in a chaotic system they will not propagate uniformly. As the number of inaccurate variables is close to infinite and the chaotic system has “unknown unknowns”, anyone who thinks it is possible to model the climate with any level of correctness does not understand the climate.
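The Lorenz point can be illustrated with the textbook logistic map – a toy chaotic system, nothing to do with any actual GCM: two runs whose initial states differ by one part in a million end up completely decorrelated after a few dozen steps.

```python
# Toy chaotic system (hypothetical illustration, not a climate model):
# the logistic map at r = 4 is chaotic, so two trajectories starting a
# millionth apart separate until they are completely decorrelated.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-6  # initial states differing by one part in a million
max_separation = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_separation = max(max_separation, abs(x - y))
print(max_separation)  # far larger than the initial 1e-6 difference
```

No refinement of the model’s parameters recovers the information lost to that initial-state error – which is Lorenz’s original point.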

Jonathan Abbott
February 24, 2015 5:20 am

A very good and interesting article. To a large extent I share your despair, but I would encourage you to find other possible places to publish. My overview is that those working on GCMs have developed ‘ignorant expertise’: they have become expert in their own paradigm and groupthink but divorced from the tenets of science as a whole.

Pat Frank
Reply to  Jonathan Abbott
February 24, 2015 9:17 pm

Thanks, Jonathan. I apologize for communicating despair; didn’t mean to. The ms has been submitted again, and I remain cautiously optimistic. You’re right about the modelers. Some came across as quite upset that I should suggest a means of analysis standard in physical science, but not standard in their field.

February 24, 2015 5:29 am

As I have used the phrase ‘climate-models-can’t-predict-squat’ in C3 articles multiple times, it’s a guilty pleasure to read an article that addresses the issue head-on.
Being a retired biz executive, I’ve always been reminded by climate model output of marketing managers spending way too much time devising Excel algorithms that provide “empirical” evidence, with the end result always being that a new marketing campaign means total domination of a given market within a few years.
And these fairly smart sales/marketing manager types would truly come to believe their simulated outputs were the probable future reality. (This type of simulation “science” was also used to fertilize the crazed tech boom frenzy that ended badly with the severe 2000 dot-com bubble bust – instead of sales projections, it was the grandiose simulated predictions of ‘eyeballs captured’ that fed the investors’ appetites.)
Alas, the climate modelers are no different than the self-deceived jokers in the marketing/sales departments, who made faulty sales projections based on complex Excel formulas without an understanding/appreciation of the underlying nuances and unknown macro, micro, behavioral and innovation economics at work, globally, 24/7.
Climate modelers as scientists? Nope. Instead, they’re the climate science community’s jokers, closely related to their always failing brethren in the business world.

Jim Francisco
Reply to  C3 Editor
February 24, 2015 11:37 am

Sometimes I am amazed that we as a society ever got complicated machines like cars and airplanes built on such a large scale with such craziness going on. It seemed to me that in my world those who could not do their technical job very well realized their inabilities and therefore turned their attention to becoming managers. Many of them succeeded. The problem was that they were not good with determining who were technically competent and who were not. Eventually you are part of a group with a rightfully deserved bad reputation.

Rob Dawg
February 24, 2015 6:00 am

The dysfunction is more fundamental. Climate investigators don’t even know the difference between measurements, data and information.

Joel O’Bryan
February 24, 2015 6:04 am

Your manuscript rejection is largely due to claiming the naked emperor has no clothes. How dare you challenge the cargo-cult climate scientists!

Gary
February 24, 2015 6:09 am

The problem with swallowing the blue pill is that you no longer recognize the possibility of a red pill.
https://en.wikipedia.org/wiki/Red_pill_and_blue_pill
Modelers live in virtual reality so they can’t see what’s really happening. Computer programs are under their control and so give the illusion of mastery. Your frustration is akin to that of all teachers whose pupils just don’t have the capacity to understand. Thank you, though, for putting this on the record rather than just letting it go.

February 24, 2015 6:15 am

As Frank suggests between the lines, in science the purpose of a model is to make predictions of real-world data. Science is a mapping of data onto data.
The purpose of the climate modelers is to get published in an approved journal via peer review. To them, accuracy and precision mean no more than consistency in model results, and they have programs to make their models consistent – programs which have, by and large, been successful. GCMs predict future climate, but those predictions cannot be, and never are, validated. They are near enough to require urgent funding, but far enough away to be untestable in our lifetimes.
In a brief, lucid moment, Richard Horton, Editor of Lancet, explained the modern publication process:
The mistake, of course, is to have thought that peer review was any more than a crude means of discovering the acceptability – not the validity – of a new finding. Editors and scientists alike insist on the pivotal importance of peer review. We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed [jiggered, not repaired], often insulting, usually ignorant, occasionally foolish, and frequently wrong.
Science is not about voting. It is not about peer-review, publication, or consensuses. These are subjective. It’s about predictive power. Science is the (strictly) objective branch of knowledge.
Fortunately for science and society, and unfortunately for the climate modelers, the GCMs contain one accessible, implicit prediction: Climate Sensitivity. Data from the last decade and a half invalidate that prediction. The toast fell jelly side up.
Climate models fail – not because they are computer models, but because they butcher the physics of climate. They are incompetent. These postmodern modelers talk about feedback, but then leave out the most powerful feedback in all of climate, total cloud albedo, the number nominally put at about 31%, and which is in fact variable, gating the Sun on and off. It is a positive feedback, amplifying solar radiation (the burnoff effect) and a negative feedback, mitigating warming from any cause (from the Clausius-Clapeyron effect).
These top level aspects of the climate story can be widely understood, even reaching the general public.

rgbatduke
February 24, 2015 6:22 am

Good luck with that, Pat. You are still being way too nice to them. Additional points:
* They treat the PPE envelope as if it were error when, as you say, it is not. But they do not examine the structure of the individual traces, which themselves often have absolutely absurd variability and the wrong autocorrelation. I have remarked many times on what the wrong autocorrelation means physically via the fluctuation-dissipation theorem. In a nutshell, if the autocorrelation times are not correct, then the physics of the open system is provably not correct, end of story.
* The models do not conserve energy per timestep. This means that at the end of every timestep the system has to be renormalized or it will run away. But they cannot fully renormalize it, or else the models would not run away the way they need them to. They therefore have to renormalize the energy balance enough to stabilize the model, but in a way that permits GHGs to force the solution to grow over time. I won’t say that it is impossible to perform this sort of numerical magic without introducing all sorts of human bias into the result – I’ll just say that I am deeply skeptical about the entire process. It’s like solving a stiff set of coupled ODEs (very much like it, in fact, almost identical to it) so that it sort of diverges but doesn’t really diverge. How can you be sure that the result is actually a solution and not your beliefs about the solution?
* The Multi-Model Mean is an abomination, and all by itself proves utter ignorance about statistics in climate modeling.
* In the end, how are the models any different from a simple direct physical computation of GHG forcing? They are obviously set up to have a median output around the centroid prediction of the usual logarithmic climate sensitivity, and everything else is just model-induced noise around this obvious trend. I could produce (and have produced) the centroid line just by fitting and extrapolating the climate data in a one-significant-parameter, purely statistical model fit to HadCRUT4. The PPE output is mere window dressing designed to make this fit somehow more plausible, or to emphasize that it COULD warm as much as 6 C – if there were no negative feedbacks in the system and all of the dice used in the model came up boxcars a hundred times in a row.
rgb
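The autocorrelation point above can be illustrated with a toy AR(1) series – a hypothetical sketch, not GCM output: the series x[t] = φ·x[t-1] + noise relaxes on a timescale set by φ, and its lag-1 autocorrelation recovers φ. A model whose output shows the wrong autocorrelation therefore has demonstrably wrong relaxation physics, whatever its mean trend looks like.

```python
import random

random.seed(1)

# Hypothetical AR(1) sketch: x[t] = phi * x[t-1] + noise.
# The lag-1 autocorrelation of the output recovers phi, the
# parameter that sets the system's relaxation timescale.
def lag1_autocorr(xs):
    n, m = len(xs), sum(xs) / len(xs)
    num = sum((xs[t] - m) * (xs[t - 1] - m) for t in range(1, n))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

phi, x, series = 0.8, 0.0, []
for _ in range(20000):
    x = phi * x + random.gauss(0.0, 1.0)
    series.append(x)
print(lag1_autocorr(series))  # close to phi = 0.8
```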

Walt D.
Reply to  rgbatduke
February 24, 2015 9:49 am

“The Multi-Model-Mean is an abomination, and all by itself proves utter ignorance about statistics in climate modeling.” You mean that 15 wrongs don’t make a right? 🙁 It would seem that if 15 models all give different results, and we consider differences of 0.02C to be significant, then at least 14 of them have to be wrong.

Harold
Reply to  Walt D.
February 24, 2015 3:54 pm

No, he means 5 possums, 3 raccoons, 4 starfish and 3 spiders don’t make an elephant.

Pat Frank
Reply to  rgbatduke
February 24, 2015 9:24 pm

I’ve often wondered, rgb, why you don’t write a critical article. You’re so totally qualified, and you (unlike me) understand the physics and math right down to the bedrock. I’m still wondering. It would go nuclear. Why not do it? Think of the children. 🙂

February 24, 2015 6:53 am

Does this stem from modelers’ dubious claim that chaos averages out?

CaligulaJones
February 24, 2015 6:54 am

Seems peer review has mutated into “friend review” for bad science, and “enemy review” for protection of paradigms. And funding.

Quibble
February 24, 2015 7:00 am

Typo: “It’s impossible that climate models can ever have resolved an anthropogenic greenhouse signal”
Sorry, it’s all I can contribute.

Ralph Kramden
February 24, 2015 7:04 am

The Catastrophic Anthropogenic Global Warming (CAGW) theory has so many obvious flaws that in my opinion there are only two reasons someone might believe in it. Either they are being paid to or they’re not the sharpest tool in the box, i.e. they wear a polar bear suit to demonstrations.

Joseph Murphy
February 24, 2015 7:17 am

Great read, thank you Pat Frank!

Pat Frank
Reply to  Joseph Murphy
February 24, 2015 9:33 pm

Thanks, Joseph.

mysterian
February 24, 2015 7:20 am

Sounds like Bevington and Robinson make no appearance in the Climatology curriculum.
Peer review replaced replication because it is cheaper and faster. It also suffers the Wikipedia “activist editor” problems.

Pat Frank
Reply to  mysterian
February 24, 2015 9:34 pm

Truer words … mysterian. It’s as though the modelers I’ve encountered have never been exposed to physical error analysis. They’ve evidenced no concept of it.

JDN
February 24, 2015 7:24 am

@Pat
There is so much error in your position that I hardly know where to begin. I hope WUWT people don’t think this article is the last word on computer simulations. Simulations don’t sample a statistical distribution, unless that is directly programmed into the simulation. That’s why there is no “error propagation”. You can call this a fault of the simulation, but most simulations performed in all fields similarly lack a modeled error in the input parameters, and therefore do not and cannot propagate error.
The actual error of computer simulations is measured as propagation of truncation error due to limited precision of the computer. The way to establish a statistical distribution of your outputs is through sampling parameter space and observing the outcomes. This error estimate is sort-of what the climate modelers are doing and what you have a problem with. I don’t think they’re any different than modelers in other fields. SO, Pat Frank meet windmills; windmills, Pat Frank. 🙂
Also, your writing style is opaque. I had a hard time parsing what you meant. This may have also been a problem for your reviewers.
Finally, I have always been skeptical of the dogmatic “propagated error” rules in physics. The underlying assumption of these rules is that the input parameters are normally distributed with a small but finite probability of being infinitely in error. This is total nonsense. Using the well-known rules of error propagation, you can generate errors that are absurd and also depend upon the number of mathematical operations you perform without changing the underlying reality of how uncertain our knowledge is. It’s very clearly a game.
But suppose that I want proof of your claim that error propagation rules conform to reality. Where is your proof that error propagation in physics is correct in the physical world and not some inept game? How can you know *empirically* that error propagation calculations are correct in all fields? You don’t cite any articles saying that error propagation rules were observed to be correct!!! Everything you say is utter dogmatism without proof.

Joel O’Bryan
Reply to  JDN
February 24, 2015 8:02 am

JDN,
Your comments confirm the knowing refusal to accept the Null Hypothesis in Climate Science. Instead, climate modellers continue to redesign their bamboo control towers, adjust the layouts of their runways, and add bling to the controller’s headsets… and then wonder why the planes still do not land.

Steve Oregon
Reply to  JDN
February 24, 2015 9:05 am

JDN,
Are Climate Modelers scientists?

Quinn the Eskimo
Reply to  JDN
February 24, 2015 10:33 am

1. How or why does a lack of sampling of statistical distributions preclude error propagation? I may be a dumbass, but I have no idea why it is even a pertinent observation to say that simulations don’t sample statistical distributions. What difference does that make?
2. What does it mean to say that “most simulations” “lack a modeled error to the input parameters”? What difference does that make; how or why does this, whatever it is, preclude error propagation?

urederra
Reply to  JDN
February 24, 2015 10:36 am

The actual error of computer simulations is measured as propagation of truncation error due to limited precision of the computer.

It seems to me that you cannot tell the difference between ‘measurement precision’ and ‘floating point precision’, and maybe that is the problem the reviewers have as well.
I’ll give you an example. If Earth’s albedo cannot be measured to more than 2 significant digits, there is no point in performing calculations or writing code in double-precision or double-extended-precision format. Single precision will do. The third digit in your final calculation is not going to be significant anyway.
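The significant-digits point is easy to check in code (hypothetical numbers for illustration): round-tripping an albedo value through 32-bit floating point introduces an error orders of magnitude below a ±0.01 measurement uncertainty, so the extra machine precision carries no physical information.

```python
import struct

# Hypothetical example: albedo known to two significant digits,
# say 0.30 +/- 0.01. Single-precision roundoff (~1e-7 relative) is
# far below that measurement uncertainty.
albedo = 0.30
measurement_uncertainty = 0.01
single = struct.unpack('f', struct.pack('f', albedo))[0]  # 32-bit round trip
roundoff = abs(single - albedo)
print(roundoff, measurement_uncertainty)
```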

Matthew R Marler
Reply to  JDN
February 24, 2015 12:18 pm

JDN: The underlying assumption of these rules is that the input parameters are normally distributed with a small but finite probability of being infinitely in error.
that is not correct. The probability that a normally distributed random variable is greater than a particular large number is finite, but the probability goes to 0 as the large number is made larger. The mathematical fact that the normal distribution has infinite support has never prevented it being useful with lots of kinds of measurements that are physically bounded, such as Einstein’s model of Brownian motion.
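The tail behaviour is easy to make concrete with the standard identity P(X > z) = erfc(z/√2)/2 for a standard normal variate: the probability is finite for every z, but it falls off so fast that the distribution’s infinite support is harmless for bounded physical quantities.

```python
import math

# Upper-tail probability of a standard normal variate:
# P(X > z) = erfc(z / sqrt(2)) / 2. Finite for every z, but it
# decays rapidly, so the infinite support is harmless in practice.
def upper_tail(z):
    return math.erfc(z / math.sqrt(2.0)) / 2.0

for z in (1.0, 3.0, 6.0):
    print(z, upper_tail(z))
```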

Reply to  JDN
February 24, 2015 1:38 pm

JDN:
This is in reference to your concern over the compounding of uncertainty in model simulations. It is not an attempt to answer your epistemological challenge in the final paragraph.
From Wikipedia:

In statistics, propagation of uncertainty (or propagation of error) is the effect of variables’ uncertainties (or errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate to the combination of variables in the function.

Either climate modelers take proper account of inherent errors (in both measurements and in basic theory) and how they interact within each model or they do not. It seems like a legitimate area of investigation to me.

Pat Frank
Reply to  JDN
February 24, 2015 10:55 pm

JDN, error propagation does not require sampling statistical distributions. I propagated systematic error, which need not follow any standard statistical distribution at all.
And it was not an input error propagated, but a theory-bias error; one made by the models themselves and therefore present in every simulation step. Such errors can be estimated, and can always be propagated through a simulation. And should always be propagated through a simulation.
Truncation error is a numerical error. My post spoke to physical error. Do you understand the difference? Your idea that, “The way to establish a statistical distribution of your outputs is through sampling parameter space and observing the outcomes.” shows exactly the confusion about accuracy vs. precision evidenced by all my climate modeler reviewers. Sampling parameter space is about precision. It tells one nothing about physical error. Physical error is determined by comparing model expectation values against the relevant observational magnitudes.
You wrote, “This error estimate is sort-of what the climate modelers are doing and what you have a problem with.” Correct; it is what they do. Their method says nothing about the physical accuracy or reliability of their model projections.
You also wrote, “I don’t think they’re any different than modelers in other fields.” If you’d like to see an actual model reliability analysis, consult Vasquez, V.R. and Whiting, W.B. (2005) Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods. Risk Analysis 25(6): 1669-1681, doi:10.1111/j.1539-6924.2005.00704.x. They propagate error.
After reading that paper, I wrote to Whiting about it. His reply was illuminating: “Yes, it is surprisingly unusual for modelers to include uncertainty analyses. In my work, I’ve tried to treat a computer model as an experimentalist treats a piece of laboratory equipment. A result should never be reported without giving reasonable error estimates.” There’s the attitude of a practicing physical scientist for you. Rather diametrical to your modeling standards, isn’t it?
So, it seems you may be right, and that modelers elsewhere make the same mistake you do here. The mistake so obvious to a trained physical scientist and so ubiquitous among climate modelers: the inability or unwillingness to perform physical reliability analyses on their model results. An inability to understand the difference between physical accuracy and model precision.
So, accuracy, meet JDN, JDN, accuracy. I recommend you learn to know it well. If you want to do a physical science.
Sorry you found my writing style opaque. You seem to be the only one who has (so far as I’ve read). Maybe the problem has something to do with your immersion in modeler-thought.
Regarding the statistics of propagated error, when the error is empirical (made vs. observations) one doesn’t know the true distribution. The uncertainties from propagation of error are therefore always estimates. But so what? In physical science, one looks for useful indications of reliability.
Rules for propagating error are not “dogmatic.” That’s just you being rhetorically disparaging.
The results of error propagation are not absurd errors, but non-absurd uncertainties. Of course, the uncertainty increases with the number of step-wise calculations. Every step transmits its uncertainty forward as input into the next step and each step also has its own internal parameter error or theory bias error. That means one’s knowledge of state magnitudes must decrease with every calculational step. You may not like it, but it’s not mysterious.
Take a look at the derivations in Bevington and Robinson 2003 Data Reduction and Error Analysis for the Physical Sciences. Show me their derivational dependence on location in the physical world, or limitation by discipline or field. Their generalized derivations make it obvious that they are universally applicable wherever physical calculations are carried out. You’re just grasping at straws, JDN.
You’re welcome to carry out your climate modeling sealed hermetically away from physical error analysis, and from the threat of countervailing observations. But then don’t pretend that you’re doing science. And don’t pretend that your models have anything to do with physical reality.
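The accuracy-versus-precision distinction above can be sketched with a toy ensemble (hypothetical numbers, not any real model): runs that all share the same systematic theory bias show a tight spread – high precision – while every one of them sits far from the true value – low accuracy. Sampling the ensemble spread would tell you nothing about the offset.

```python
import random

random.seed(0)

# Toy ensemble (hypothetical numbers): every run carries the same
# systematic bias plus a little run-to-run noise. The spread
# (precision) is tiny, yet every run misses the truth by the bias
# (accuracy). Tight spread says nothing about physical accuracy.
truth, bias = 10.0, 2.0
runs = [truth + bias + random.gauss(0.0, 0.05) for _ in range(1000)]
mean = sum(runs) / len(runs)
spread = (sum((r - mean) ** 2 for r in runs) / len(runs)) ** 0.5
print(spread, abs(mean - truth))  # small spread, large offset
```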

JDN
Reply to  Pat Frank
February 25, 2015 5:06 am

I agree that propagating error in computer code is possible, and I think you agree that it’s not usually done. Should it be? The reason I brought up answering the question empirically is that, to my knowledge, our basic error propagation techniques were derived in fields that are suited to examining the results of these rules. For example, I’m reasonably sure the error-estimate propagation techniques will work in atomic physics. I’m completely uncertain whether error should be propagated in the same way for “big world” simulations. The Monte Carlo approach to precision is what most people go with. With the chemical simulations I’ve done, this is what I go with.
Your demand for “precision” makes you a character out of central casting. You would be the tragically flawed scientist who opposes the eventual hero. It’s just the “angry old man syndrome” talking. Just saying… “Precision” and “accuracy” are overloaded terms. They have been defined in so many ways, you can’t seriously expect people to *not* have cognitive dissonance reading these terms.
If you want to be understood, instead of just yelling at kids to get off your lawn, call these things something closer to what they are, maybe confidence interval of simulation output vs. variance from observation. See… that really clears things up. Demanding that people adhere to your jargon is unfriendly.
You can afford to be a friend because the climate simulations are completely bogus for so many other reasons.
And to answer the other commenters about my opinion of climate scientists, whether they’re scientists… they seem to be really bad ones. But my own field is gradually going this way as well. The corrosive effect of grant money means that the greatest scientific sin is to lack funding. It used to be an insult if someone said you would believe anything for money… now, you can put it on your CV. If you guys have so much time, stop messing around with criticizing bad stats and get control of the funding.

Pat Frank
Reply to  Pat Frank
February 25, 2015 9:29 am

JDN, your supposition that I am ‘demanding precision’ makes me think you haven’t actually read anything. My entire post is about the importance of accuracy to science. Demur as you like, but it is both appropriate and standard to propagate error through any multi-step calculation.
If you’d care to read Bevington and Robinson you’ll find accuracy and precision carefully defined, and in the standard manner. Accuracy and precision have not been defined in “many ways” as you have it, but in one way only. You lose sight of that at the peril of your work.
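For readers without Bevington and Robinson to hand, the standard first-order propagation rule they present can be sketched numerically (this helper function and its finite-difference derivative estimates are my own illustration, not code from the post): for independent uncertainties, sigma_f^2 = sum over i of (df/dx_i)^2 * sigma_i^2.

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """First-order (Taylor-series) error propagation for f(x1, ..., xn)
    with independent uncertainties `sigmas`, using central differences
    to estimate each partial derivative df/dx_i."""
    var = 0.0
    for i, s in enumerate(sigmas):
        up = list(values); up[i] += h
        dn = list(values); dn[i] -= h
        dfdx = (f(*up) - f(*dn)) / (2 * h)
        var += (dfdx * s) ** 2
    return math.sqrt(var)

# f = x * y with x = 2 +/- 0.1 and y = 3 +/- 0.2:
# the analytic result is sqrt((3*0.1)**2 + (2*0.2)**2) = 0.5
print(round(propagate(lambda x, y: x * y, [2.0, 3.0], [0.1, 0.2]), 3))  # -> 0.5
```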

richardscourtney
February 24, 2015 7:32 am

Pat Frank
Thank you for your fine article. I ask you to continue to press the issue because I have been pressing it with no success since 1999.
You summarise the problem when you write:

Every single recent, Holocene, or Glacial-era temperature hindcast is likewise non-unique. Not one of them validates the accuracy of a climate model. Not one of them tells us anything about any physically real global climate state. Not one single climate-modeler reviewer evidenced any understanding of that basic standard of science.

You are not the first to observe that the climate model fraternity lacks a “basic standard of science”. For example, in my peer review of the draft IPCC AR4 I wrote the following, which was ignored; i.e., my recommendation that “the draft should be withdrawn and replaced by another that displays an adequate level of scientific competence” had no effect.
Page 2-47 Chapter 2 Section 2.6.3 Line 46
Delete the phrase, “and a physical model” because it is a falsehood.
Evidence says what it says, and construction of a physical model is irrelevant to that in any real science.
The authors of this draft Report seem to have an extreme prejudice in favour of models (some parts of the Report seem to assert that climate obeys what the models say; e.g. Page 2-47 Chapter 2 Section 2.6.3 Lines 33 and 34), and this phrase that needs deletion is an example of the prejudice.
Evidence is the result of empirical observation of reality.
Hypotheses are ideas based on the evidence.
Theories are hypotheses that have repeatedly been tested by comparison with evidence and have withstood all the tests.
Models are representations of the hypotheses and theories.
Outputs of the models can be used as evidence only when the output data is demonstrated to accurately represent reality.
If a model output disagrees with the available evidence then this indicates fault in the model, and this indication remains true until the evidence is shown to be wrong.
This draft Report repeatedly demonstrates that its authors do not understand these matters. So, I provide the following analogy to help them. If they can comprehend the analogy then they may achieve graduate standard in their science practice.
A scientist discovers a new species.
1. He/she names it (e.g. he/she calls it a gazelle) and describes it (e.g. a gazelle has a leg in each corner).
2. He/she observes that gazelles leap. (n.b. the muscles, ligaments etc. that enable gazelles to leap are not known, do not need to be discovered, and do not need to be modeled to observe that gazelles leap. The observation is evidence.)
3. Gazelles are observed to always leap when a predator is near. (This observation is also evidence.)
4. From (3) it can be deduced that gazelles leap in response to the presence of a predator.
5. n.b. The gazelle’s internal body structure and central nervous system do not need to be studied, known or modeled for the conclusion in (4) that “gazelles leap when a predator is near” to be valid. Indeed, study of a gazelle’s internal body structure and central nervous system may never reveal that, and such a model may take decades to construct following achievement of the conclusion from the evidence.
(Having read all 11 chapters of the draft Report, I had intended to provide review comments on them all. However, I became so angry at the need to point out the above elementary principles that I abandoned the review at this point: the draft should be withdrawn and replaced by another that displays an adequate level of scientific competence.)
Richard

Chris Hanley
Reply to  richardscourtney
February 24, 2015 2:13 pm

I have a minor quibble.
“From (3) it can be deduced that gazelles leap in response to the presence of a predator”.
That would be a logical inference rather than a deduction.

Chip Javert
Reply to  Chris Hanley
February 24, 2015 7:14 pm

Actually, having been to Africa, some gazelles do not leap in response to the presence of a predator.
This is called “dinner”, and happens frequently enough to feed lots of predators.

richardscourtney
Reply to  Chris Hanley
February 24, 2015 11:55 pm

Chris Hanley
Yes, you are right. However, my post was not intended to introduce discussion of an illustration: I wrote in support of the assertion by Pat Frank that climate modelers lack an adequate “basic standard of science”.
Richard

Pat Frank
Reply to  richardscourtney
February 25, 2015 9:31 am

Thanks, Richard. You came to it earlier than I did.

jmrSudbury
February 24, 2015 7:35 am

I am not sure I can believe anyone who cannot count sequentially to 6. What do you have against 4, Pat?
Actually, I am just wondering if you forgot to cut and paste a section.
Hei hei
John M Reynolds

TImo Soren
Reply to  jmrSudbury
February 24, 2015 8:52 am

That’s a hoot. I read the entirety and failed to see the absent 4! Nice comment.
Also, an explanation of how errors propagate through a simple model would be nice: is it only the initial estimates, or are there additional errors introduced during a model run that can also propagate?

jmrSudbury
Reply to  TImo Soren
February 24, 2015 10:32 am

Timo
The propagated errors are the errors in the physical measurements, and they can take the calculations off the rails. Each measurement, be it TSI, downwelling radiation in W/m^2, surface air temperature, etc., has an error range associated with it. When you model these measurements into the future, you need to keep in mind that they all have a range of accuracy. Because the first step has a range of accuracy, the second step does not have a fixed starting point: the error from the first step has to be included in the second step. At each subsequent step, the error of the previous step has to be added to that of the current step.
Example from the text: using the root-sum-square, an uncertainty of ±1.18 C per step propagates to ±11.8 C after 100 steps (a centennial projection).
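That root-sum-square accumulation can be written out in a few lines (a minimal sketch; the constant, independent per-step uncertainty is the simplifying assumption):

```python
import math

def propagate_rss(per_step_uncertainty, n_steps):
    """Root-sum-square accumulation of a constant per-step uncertainty:
    sqrt(n) * sigma for n independent steps."""
    return math.sqrt(n_steps * per_step_uncertainty ** 2)

# +/-1.18 C per annual step over a 100-step centennial projection:
print(round(propagate_rss(1.18, 100), 1))  # -> 11.8
```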
Hei hei
John M Reynolds

Pat Frank
Reply to  TImo Soren
February 25, 2015 9:35 am

Thanks, John — that’s about it.

Pat Frank
Reply to  jmrSudbury
February 25, 2015 9:33 am

ulp, you’re right, jmrSudbury. No prejudice against 4, honestly. Must have missed a finger. 🙂

ImranCan
February 24, 2015 7:43 am

What was it Sinead O’Connor said? Have the strength to change the things you can, the courage to face the things you can’t, and the wisdom to know the difference.
You are not going to get a bunch of numpties to agree to something that would nullify their life’s work and render themselves irrelevant.
The truth will out in the end.

steveta_uk
Reply to  ImranCan
February 24, 2015 8:25 am

The Serenity Prayer is the common name for a prayer authored by the American theologian Reinhold Niebuhr (1892–1971), not Sinead O’Connor. ;)

Frederick Colbourne
February 24, 2015 8:09 am

Pat, why not contact Professor Chris Essex at the University of Western Ontario, Department of Applied Mathematics? He outlines his views in a lecture at: https://www.youtube.com/watch?v=19q1i-wAUpY
He and Ross McKitrick, a professor of economics, have a book on the subject of modeling:
Taken By Storm: The Troubled Science, Policy and Politics of Global Warming
http://www.amazon.com/Taken-Storm-Troubled-Science-Politics/dp/1552632121/ref=sr_1_13?ie=UTF8&qid=1424578417&sr=8-13&keywords=taken+by+storm

Pat Frank
Reply to  Frederick Colbourne
February 25, 2015 9:39 am

Frederick, I read “Taken by Storm” a coupla-three years ago, thanks. It’s an excellent book. If consensus climatology were a real field of science, books like that, and other published work, would have changed its direction long ago. I’ve been in touch with Chris Essex. He knows my work.

William Astley
February 24, 2015 8:29 am

“Kuhn’s book (William: Kuhn’s ‘The Structure of Scientific Revolutions’) challenged the popular belief that scientists (William: very much including ‘climate’ scientists, who are extraordinarily resistant to the piles and piles of logical arguments, observations, and analysis results indicating that their theories are urban myths) are skeptical, objective, and value-neutral thinkers. He argued instead that the vast majority of scientists are quite ‘conservative’; they have been indoctrinated with a set of core assumptions (William: over time it becomes unthinkable that the core assumptions and theories could be incorrect) and apply their intelligence to solving problems within the existing paradigms.
Scientists don’t test whether or not they are operating with a good (William: valid) paradigm so much as the paradigm tests whether or not they are good scientists. (William: ‘good’ scientific behaviour is defined as not publicly questioning the group’s core beliefs, or implying that those beliefs are an urban legend.)”
If there are fundamental errors in a base theory, there will be piles and piles of observational anomalies and paradoxes. The fact that there are piles and piles of anomalies in almost every field of ‘pure’, non-applied science indicates that something is fundamentally incorrect with the methodology, approach, and ‘culture’ of ‘pure’ science. It also explains why major, astonishing breakthroughs in pure science are possible. (Imagine decades and decades of research providing the observations and analysis needed to solve the puzzles, and a weird, irrational culture that stops people from solving them.)
An example of an in-your-face failure to solve a very important scientific problem: what is the origin of the earth’s atmosphere, oceans, ‘natural’ gas, crude oil, and black coal? The competing theories (though there was and is no real competition between them) are: 1) the ‘late veneer’ theory, which is connected with the fossil-fuel theory and in which a tiny amount of CO2 is recycled through the upper mantle; and 2) the deep-core CH4 theory (see the late astrophysicist Thomas Gold’s book ‘The Deep Hot Biosphere: The Myth of Fossil Fuels’), in which there is a large continuous input of CH4 and CO2 into the biosphere from CH4 extruded from the earth’s core as it solidifies, which would explain the paradox in Humlum et al.’s CO2 phase analysis and roughly 50 different geological anomalies and paradoxes.
A standard, effective, structured approach to problem solving, as basic as listing and organizing the anomalies and paradoxes in a long review paper or a short book, examining their logical implications, and formally exploring and developing alternative theories, is not taken, because it would highlight that there are fundamental errors in the base theories and that some of those theories are almost certainly urban myths (i.e., cannot possibly be correct).
It is embarrassing, even unthinkable, for a group of specialists to question their core science: to suggest that there are or could be fundamental errors in their base theory, that those errors could have gone unaddressed for decades, or that the base theory could be an obvious urban myth. New graduates who want to ‘progress’ in the field and obtain university teaching and research positions have no practical alternative but to continue supporting the incorrect paradigms and the ineffective approach to problem solving. (Imagine a research department of a couple of dozen professors, with seniority and a pecking order; each specialty with a few hundred teaching members and a thousand would-be teachers and researchers, again with a pecking order and with benefits that can be adjusted to enforce the culture.)
The IPCC climate models’ response to forcing changes is orders of magnitude too large: the general circulation models (GCMs) amplify forcing changes (positive feedback) rather than suppress and resist them (negative feedback). If the real world amplified forcing changes, the earth’s temperature would oscillate widely in response to, say, a large volcanic eruption or other large temporary forcing change.
The justification for the claim that the planet amplifies rather than resists forcing changes is that there are very large, very rapid climate changes in the paleoclimate record. Those changes are not random, however; they are cyclic. The ‘Rickies’ (rapid climate change events, RCCEs) correlate with massive solar magnetic cycle events and with unexplained geomagnetic changes.
A climate model that has positive feedback can be ‘tuned’, by adjusting its inputs and internal variables, to produce a rapid, very large, abrupt temperature response to a small forcing change. But, as noted above, since the earth’s temperature does not oscillate widely when there are large temporary forcing changes, the explanation for the cyclic abrupt climate changes in the paleo record is not that the planet amplifies forcing. The explanation for the Rickies is that the sun can and does change in a manner that causes very, very large changes in the earth’s climate, which is supported by the fact that there are cosmogenic isotope changes at each and every abrupt climate change event, and at the slower, smaller climate change events as well.

Joel O’Bryan
Reply to  William Astley
February 24, 2015 9:28 am

A few climate-science practitioners have grudgingly admitted their discomfort with the use of higher-than-observed mid-latitude and tropical aerosols to balance the GHG forcing so that the GCM ensemble results replicate the Arctic temperature rises.
That problem, in and of itself, if “climate sciencism” were a real science, should demand a major dumping of the inherent assumptions built into the models, and then of the models themselves.

Pat Frank
Reply to  William Astley
February 25, 2015 8:10 pm

Scientists are heir to all the foibles that infect humanity, William. There’s nothing new or revelatory about that.
The interplay of falsifiable theory and replicable observation, though, is the particular strength of science. It’s not present in any other field of study.
So long as this method is freely and honestly practiced, the problems you note will be only temporary, uncomplimentary and braking of progress though they may be.
The problem in climatology has been the deliberate subversion of free and honest scientific practice. Had the major scientific institutions, the APS and AIP especially, stood up against the politicization of climate science, we’d never have ended up in this Lysenkoist-like oppressive mess.

Joe Born
February 24, 2015 8:35 am

Rather than “Are modelers scientists?” Dr. Frank might as well have asked, “Are people logical?”
Having dealt extensively with the physicists, chemists, and engineers that Dr. Frank contrasts with climate modelers, I can assure you that they are more than capable of similarly flubbing basic distinctions.
In my working life I saw a group of them fail repeatedly to comprehend the difference between the flow of a fluid and the propagation of a disturbance through that fluid. I saw a large sum of money wasted because highly regarded scientists failed to focus on the distinction between length resolution and angle resolution. I could go on.
In the blog world–actually, on this very site–I’ve seen scientists repeatedly fail to distinguish between the stated conclusion of Robert G. Brown’s “Refutation of Stable Thermal Equilibrium Lapse Rates,” which can be interpreted as correct, and the logic by which that conclusion was reached, which is a farrago of latent ambiguities and unfounded assumptions defended by gauzy generalities and downright bad physics.
In the latter context I succumbed as Dr. Frank did to the temptation to be provocative, in my case by saying that we lawyers are justified in viewing science as too important to be left to scientists; it sometimes seems that you have to undergo a logicectomy in order to become a scientist. The truth, though, is that failure to recognize basic distinctions is less a characteristic of any particular occupation than a general human shortcoming that in varying degrees afflicts us all.