Guest essay by Pat Frank
For going on two years now, I’ve been trying to publish a manuscript that critically assesses the reliability of climate model projections. The manuscript has been submitted twice to each of two leading climate journals and rejected each time, for a total of four rejections; all on the advice of nine of ten reviewers. More on that below.
The analysis propagates climate model error through global air temperature projections, using a formalized version of the “passive warming model” (PWM) GCM emulator reported in my 2008 Skeptic article. Propagation of error through a GCM temperature projection reveals its predictive reliability.
Those interested can consult the invited poster (2.9 MB pdf) I presented at the 2013 AGU Fall Meeting in San Francisco. Error propagation is a standard way to assess the reliability of an experimental result or a model prediction. However, climate models are never assessed this way.
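For readers who have not seen it, the textbook rule for combining uncertainties [3, 10] is simple. For a result y = f(x1, …, xn) computed from inputs with uncorrelated uncertainties u(xi), the combined uncertainty is the root-sum-square

\[
u_c^2(y) \;=\; \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^2 u^2(x_i),
\]

so the uncertainty in a result is never smaller than the contribution of its largest input uncertainty.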
Here’s an illustration: the Figure below shows what happens when the average ±4 Wm-2 long-wave cloud forcing error of CMIP5 climate models [1], is propagated through a couple of Community Climate System Model 4 (CCSM4) global air temperature projections.
CCSM4 is a CMIP5-level climate model from NCAR, where Kevin Trenberth works, and was used in the IPCC AR5 of 2013. Judy Curry wrote about it here.
In panel a, the points show the CCSM4 anomaly projections of the AR5 Representative Concentration Pathways (RCP) 6.0 (green) and 8.5 (blue). The lines are the PWM emulations of the CCSM4 projections, made using the standard RCP forcings from Meinshausen. [2] The CCSM4 RCP forcings may not be identical to the Meinshausen RCP forcings. The shaded areas are the range of projections across all AR5 models (see AR5 Figure TS.15). The CCSM4 projections are in the upper range.
In panel b, the lines are the same two CCSM4 RCP projections. But now the shaded areas are the uncertainty envelopes resulting when ±4 Wm-2 CMIP5 long wave cloud forcing error is propagated through the projections in annual steps.
The uncertainty is so large because ±4 Wm-2 of annual long-wave cloud forcing error is ±114× larger than the 0.035 Wm-2 average annual increase in GHG forcing since 1979. Typical error bars for CMIP5 climate model projections are about ±14 C after 100 years and ±18 C after 150 years.
It’s immediately clear that climate models are unable to resolve any thermal effect of greenhouse gas emissions or tell us anything about future air temperatures. It’s impossible that climate models can ever have resolved an anthropogenic greenhouse signal; not now nor at any time in the past.
Propagation of errors through a calculation is a simple idea. It’s logically obvious. It’s critically important. It gets pounded into every single freshman physics, chemistry, and engineering student.
And it has escaped the grasp of every single Ph.D. climate modeler I have encountered, in conversation or in review.
That brings me to the reason I’m writing here. My manuscript has been rejected four times; twice each from two high-ranking climate journals. I have responded to a total of ten reviews.
Nine of the ten reviews were clearly written by climate modelers, were uniformly negative, and recommended rejection. One reviewer was clearly not a climate modeler. That one recommended publication.
I’ve had my share of scientific debates, a couple of them not entirely amiable. My research (with colleagues) has overthrown four ‘ruling paradigms,’ and so I’m familiar with how scientists behave when they’re challenged. None of that prepared me for the standards at play in climate science.
I’ll start with the conclusion and follow on with the supporting evidence: never, in all my experience with peer-reviewed publishing, have I encountered such incompetence in a reviewer; much less incompetence evidently common to an entire class of reviewers.
The shocking lack of competence I encountered made public exposure a civic corrective good.
Physical error analysis is critical to all of science, especially experimental physical science. It is not too much to call it central.
Result ± error tells what one knows. If the error is larger than the result, one doesn’t know anything. Geoff Sherrington has been eloquent about the hazards and trickiness of experimental error.
All of the physical sciences hew to these standards. Physical scientists are bound by them.
Climate modelers do not hew to these standards and, by their own lights, are not bound by them.
I will give examples of all of the following concerning climate modelers:
- They neither respect nor understand the distinction between accuracy and precision.
- They understand nothing of the meaning or method of propagated error.
- They think physical error bars mean the model itself is oscillating between the uncertainty extremes. (I kid you not.)
- They don’t understand the meaning of physical error.
- They don’t understand the importance of a unique result.
Bottom line? Climate modelers are not scientists. Climate modeling is not a branch of physical science. Climate modelers are unequipped to evaluate the physical reliability of their own models.
The incredibleness that follows is verbatim reviewer transcript, quoted in italics. Every idea below is presented as the reviewer meant it. No quote has been deprived of its context, and none has been truncated into something other than what the reviewer meant.
And keep in mind that these are arguments that certain editors of certain high-ranking climate journals found persuasive.
1. Accuracy vs. Precision
The distinction between accuracy and precision is central to the argument presented in the manuscript, and is defined right in the Introduction.
The accuracy of a model is the difference between its predictions and the corresponding observations.
The precision of a model is the variance of its predictions, without reference to observations.
Physical evaluation of a model requires an accuracy metric.
There is nothing more basic to science itself than the critical distinction of accuracy from precision.
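To make the distinction concrete, here is a minimal Python sketch with invented numbers. It reflects no particular model or dataset; it only illustrates that ensemble spread and error against observations are computed from different things and can tell opposite stories.

```python
# A minimal sketch of the accuracy/precision distinction, using made-up numbers.
import numpy as np

observations = np.array([0.10, 0.15, 0.22, 0.30, 0.41])      # hypothetical observed anomalies (C)
ensemble     = np.array([[0.30, 0.42, 0.55, 0.70, 0.88],      # hypothetical model runs (C)
                         [0.28, 0.40, 0.52, 0.67, 0.85],
                         [0.32, 0.44, 0.58, 0.73, 0.91]])

ensemble_mean = ensemble.mean(axis=0)

# Precision: spread of the runs about their own mean -- no observations involved.
precision = ensemble.std(axis=0, ddof=1).mean()

# Accuracy: root-mean-square difference between prediction and observation.
accuracy = np.sqrt(np.mean((ensemble_mean - observations) ** 2))

print(f"precision (ensemble spread): {precision:.3f} C")   # small: the runs agree with one another
print(f"accuracy  (rmse vs. obs):    {accuracy:.3f} C")    # large: they all miss the observations
```

A tight ensemble that sits far from the observations is precisely the combination at issue throughout this post.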
Here’s what climate modelers say:
“Too much of this paper consists of philosophical rants (e.g., accuracy vs. precision) …”
“[T]he author thinks that a probability distribution function (pdf) only provides information about precision and it cannot give any information about accuracy. This is wrong, and if this were true, the statisticians could resign.”
“The best way to test the errors of the GCMs is to run numerical experiments to sample the predicted effects of different parameters…”
“The author is simply asserting that uncertainties in published estimates [i.e., model precision – P] are not ‘physically valid’ [i.e., not accuracy – P]- an opinion that is not widely shared.”
Not widely shared among climate modelers, anyway.
The first reviewer actually scorned the distinction between accuracy and precision. This, from a supposed scientist.
The remainder are alternative declarations that model variance, i.e., precision, = physical accuracy.
The accuracy-precision difference was extensively documented to relevant literature in the manuscript, e.g., [3, 4].
The reviewers ignored that literature. The final reviewer dismissed it as mere assertion.
Every climate modeler reviewer who addressed the precision-accuracy question similarly failed to grasp it. I have yet to encounter one who understands it.
2. No understanding of propagated error
“The authors claim that published projections do not include ‘propagated errors’ is fundamentally flawed. It is clearly the case that the model ensemble may have structural errors that bias the projections.”
I.e., the reviewer supposes that model precision = propagated error.
“The repeated statement that no prior papers have discussed propagated error in GCM projections is simply wrong (Rogelj (2013), Murphy (2007), Rowlands (2012)).”
Let’s take the reviewer examples in order:
Rogelj (2013) concerns the economic costs of mitigation. Their Figure 1b includes a global temperature projection plus uncertainty ranges. The uncertainties, “are based on a 600-member ensemble of temperature projections for each scenario…” [5]
I.e., the reviewer supposes that model precision = propagated error.
Murphy (2007) write, “In order to sample the effects of model error, it is necessary to construct ensembles which sample plausible alternative representations of earth system processes.” [6]
I.e., the reviewer supposes that model precision = propagated error.
Rowlands (2012) write, “Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations,” and go on to state that, “Perturbed-physics ensembles offer a systematic approach to quantify uncertainty in models of the climate system response to external forcing, albeit within a given model structure.” [7]
I.e., the reviewer supposes that model precision = propagated error.
Not one of this reviewer’s examples of propagated error includes any propagated error, or even mentions propagated error.
Not only that, but not one of the examples discusses physical error at all. It’s all model precision.
This reviewer doesn’t know what propagated error is, what it means, or how to identify it. This reviewer also evidently does not know how to recognize physical error itself.
Another reviewer:
“Examples of uncertainty propagation: Stainforth, D. et al., 2005: Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature 433, 403-406.
“M. Collins, R. E. Chandler, P. M. Cox, J. M. Huthnance, J. Rougier and D. B. Stephenson, 2012: Quantifying future climate change. Nature Climate Change, 2, 403-409.”
Let’s find out: Stainforth (2005) includes three figures; every single one of them presents error as projection variation. [8]
Here’s their Figure 1:
Original Figure Legend: “Figure 1. Frequency distributions of Tg (colours indicate density of trajectories per 0.1 K interval) through the three phases of the simulation. a, Frequency distribution of the 2,017 distinct independent simulations. b, Frequency distribution of the 414 model versions. In b, Tg is shown relative to the value at the end of the calibration phase and where initial-condition ensemble members exist, their mean has been taken for each time point.”
Here’s what they say about uncertainty: “[W]e have carried out a grand ensemble (an ensemble of ensembles) exploring uncertainty in a state-of-the-art model. Uncertainty in model response is investigated using a perturbed physics ensemble in which model parameters are set to alternative values considered plausible by experts in the relevant parameterization schemes.”
There it is: uncertainty is directly represented as model variability (density of trajectories; perturbed physics ensemble).
The remaining figures in Stainforth (2005) derive from this one. Propagated error appears nowhere and is nowhere mentioned.
Reviewer supposition: model precision = propagated error.
Collins (2012) state that adjusting model parameters so that projections approach observations is enough to “hope” that a model has physical validity. Propagation of error is never mentioned. Collins Figure 3 shows physical uncertainty as model variability about an ensemble mean. [9] Here it is:
Original Legend: “Figure 3 | Global temperature anomalies. a, Global mean temperature anomalies produced using an EBM forced by historical changes in well-mixed greenhouse gases and future increases based on the A1B scenario from the Intergovernmental Panel on Climate Change’s Special Report on Emission Scenarios. The different curves are generated by varying the feedback parameter (climate sensitivity) in the EBM. b, Changes in global mean temperature at 2050 versus global mean temperature at the year 2000, … The histogram on the x axis represents an estimate of the twentieth-century warming attributable to greenhouse gases. The histogram on the y axis uses the relationship between the past and the future to obtain a projection of future changes.”
Collins 2012, part a: model variability itself; part b: model variability (precision) represented as physical uncertainty (accuracy). Propagated error? Nowhere to be found.
So, once again, not one of this reviewer’s examples of propagated error actually includes any propagated error, or even mentions propagated error.
It’s safe to conclude that these climate modelers have no concept at all of propagated error. They apparently have no concept whatever of physical error.
Every single time any of the reviewers addressed propagated error, they revealed a complete ignorance of it.
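For the record, the rule the reviewers could not identify is elementary [3, 10]: when a result is computed in N sequential steps and each step i contributes an uncertainty u_i, the uncertainty in the final result is the root-sum-square

\[
u(T_N) \;=\; \sqrt{\sum_{i=1}^{N} u_i^2},
\]

which for a constant per-step ±u grows as u√N. An ensemble spread is something else entirely: it measures how far the runs scatter about one another, and says nothing about how far all of them sit from the physically correct state.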
3. Error bars mean model oscillation – wherein climate modelers reveal a fatal case of naive-freshman-itis.
“To say that this error indicates that temperatures could hugely cool in response to CO2 shows that their model is unphysical.”
“[T]his analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states.”
“Indeed if we carry such error propagation out for millennia we find that the uncertainty will eventually be larger than the absolute temperature of the Earth, a clear absurdity.”
“An entirely equivalent argument [to the error bars] would be to say (accurately) that there is a 2K range of pre-industrial absolute temperatures in GCMs, and therefore the global mean temperature is liable to jump 2K at any time – which is clearly nonsense…”
Got that? These climate modelers think that “±” error bars imply the model itself is oscillating (liable to jump) between the error bar extremes.
Or that the bars from propagated error represent physical temperature itself.
No sophomore in physics, chemistry, or engineering would make such an ignorant mistake.
But Ph.D. climate modelers have invariably done so. One climate modeler audience member did so verbally, during the Q&A after my seminar on this analysis.
The worst of it is that both the manuscript and the supporting information document explained that error bars represent an ignorance width. Not one of these Ph.D. reviewers gave any evidence of having read any of it.
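To see that the “±” envelope and the projection are different objects, here is a minimal sketch with invented numbers: the 2 C/century trend is hypothetical, and the ±1.18 C per step is the value worked out from the reviewer’s own toy model in section 6 below. The envelope is a statement of ignorance that widens with each step; the projection is one smooth curve that neither swings nor jumps.

```python
# Minimal sketch (invented numbers): the projection is one smooth curve; the
# uncertainty envelope is a statement about ignorance, not a second trajectory.
import numpy as np

years = np.arange(1, 101)
projection = 0.02 * years                    # a hypothetical smooth 2 C/century projection
per_step_uncertainty = 1.18                  # C per annual step (worked out from the reviewer's toy model, section 6)
envelope = per_step_uncertainty * np.sqrt(years)   # root-sum-square growth of ignorance

# The projection itself never "oscillates" between the bounds; it is unchanged by them.
print(round(projection[-1], 2))                    # 2.0 : the model's single trajectory at year 100
print(round(projection[-1] - envelope[-1], 1),
      round(projection[-1] + envelope[-1], 1))     # -9.8 13.8 : the ignorance interval around it
```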
5. Unique Result – a concept unknown among climate modelers.
Do climate modelers understand the meaning and importance of a unique result?
“[L]ooking the last glacial maximum, the same models produce global mean changes of between 4 and 6 degrees colder than the pre-industrial. If the conclusions of this paper were correct, this spread (being so much smaller than the estimated errors of +/- 15 deg C) would be nothing short of miraculous.”
“In reality climate models have been tested on multicentennial time scales against paleoclimate data (see the most recent PMIP intercomparisons) and do reasonably well at simulating small Holocene climate variations, and even glacial-interglacial transitions. This is completely incompatible with the claimed results.”
“The most obvious indication that the error framework and the emulation framework presented in this manuscript is wrong is that the different GCMs with well-known different cloudiness biases (IPCC) produce quite similar results, albeit a spread in the climate sensitivities.”
Let’s look at where these reviewers get such confidence. Here’s an example from Rowlands (2012) of what models produce. [7]
Original Legend: “Figure 1 | Evolution of uncertainties in reconstructed global-mean temperature projections under SRES A1B in the HadCM3L ensemble.” [7]
The variable black line in the middle of the group represents the observed air temperature. I added the horizontal black lines at 1 K and 3 K, and the vertical red line at year 2055. Part of the red line is in the original figure, as the precision uncertainty bar.
This Figure displays thousands of perturbed physics simulations of global air temperatures. “Perturbed physics” means that model parameters are varied across their range of physical uncertainty. Each member of the ensemble is of equivalent weight. None of them are known to be physically more correct than any of the others.
The physical energy-state of the simulated climate varies systematically across the years. The horizontal black lines show that multiple physical energy states produce the same simulated 1 K or 3 K anomaly temperature.
The vertical red line at year 2055 shows that the identical physical energy-state (the year 2055 state) produces multiple simulated air temperatures.
These wandering projections do not represent natural variability. They represent how parameter magnitudes, varied across their uncertainty ranges, affect the temperature simulations of the HadCM3L model itself.
The Figure fully demonstrates that climate models are incapable of producing a unique solution to any climate energy-state.
That means simulations close to observations are not known to accurately represent the true physical energy-state of the climate. They just happen to have opportunistically wonderful offsetting errors.
That means, in turn, the projections have no informational value. They tell us nothing about possible future air temperatures.
There is no way to know which of the simulations actually represents the correct underlying physics. Or whether any of them do. And even if one of them happens to conform to the future behavior of the climate, there’s no way to know it wasn’t a fortuitous accident.
Models with large parameter uncertainties cannot produce a unique prediction. The reviewers’ confident statements show they have no understanding of that, or of why it’s important.
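As an illustration of the point (a toy of my own, not any published model): give one model structure an uncertain parameter, feed every version the identical forcing history, and you get a family of answers with nothing internal to the ensemble to say which, if any, is physically correct.

```python
# Toy illustration of non-uniqueness; the numbers and the "sensitivity" parameter
# are invented for this sketch and belong to no published model.
import numpy as np

rng = np.random.default_rng(0)
forcing_2100 = 4.0                                  # hypothetical accumulated forcing, W m-2
sensitivity = rng.uniform(0.4, 1.2, size=1000)      # K per (W m-2), sampled across an assumed plausible range

warming_2100 = sensitivity * forcing_2100           # one "equally valid" answer per parameter draw

print(f"identical forcing, identical structure: "
      f"{warming_2100.min():.1f} K to {warming_2100.max():.1f} K of projected warming")
# The spread of these answers is a precision. It says nothing about which answer,
# if any, is close to the physically true response.
```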
Now suppose Rowlands et al. had tuned the parameters of the HadCM3L model so that it precisely reproduced the observed air temperature line.
Would that mean the HadCM3L had suddenly attained the ability to produce a unique solution to the climate energy-state?
Would it mean the HadCM3L was suddenly able to reproduce the correct underlying physics?
Obviously not.
Tuned parameters merely obscure uncertainty. They hide the unreliability of the model. It is no measure of accuracy that tuned models produce similar projections, or that their projections are close to observations. Tuning parameter sets merely offsets errors and produces a false and tendentious precision.
Every single recent, Holocene, or glacial-era temperature hindcast is likewise non-unique. Not one of them validates the accuracy of a climate model. Not one of them tells us anything about any physically real global climate state. Not one single climate modeler reviewer evidenced any understanding of that basic standard of science.
Any physical scientist would (should) know this. The climate modeler reviewers uniformly do not.
6. An especially egregious example in which the petard self-hoister is unaware of the air underfoot.
Finally, I’d like to present one last example. The essay is already long, and yet another instance may be overkill.
But I finally decided it is better to risk reader fatigue than to not make a public record of what passes for analytical thinking among climate modelers. Apologies if it’s all become tedious.
This last truly demonstrates the abysmal understanding of error analysis at large in the ranks of climate modelers. Here we go:
“I will give (again) one simple example of why this whole exercise is a waste of time. Take a simple energy balance model, solar in, long wave out, single layer atmosphere, albedo and greenhouse effect. i.e. sigma Ts^4 = S (1-a) /(1 -lambda/2) where lambda is the atmospheric emissivity, a is the albedo (0.7), S the incident solar flux (340 W/m^2), sigma is the SB coefficient and Ts is the surface temperature (288K).
“The sensitivity of this model to an increase in lambda of 0.02 (which gives a 4 W/m2 forcing) is 1.19 deg C (assuming no feedbacks on lambda or a). The sensitivity of an erroneous model with an error in the albedo of 0.012 (which gives a 4 W/m^2 SW TOA flux error) to exactly the same forcing is 1.18 deg C.
“This the difference that a systematic bias makes to the sensitivity is two orders of magnitude less than the effect of the perturbation. The author’s equating of the response error to the bias error even in such a simple model is orders of magnitude wrong. It is exactly the same with his GCM emulator.”
The “difference” the reviewer is talking about is 1.19 C – 1.18 C = 0.01 C. The reviewer supposes that this 0.01 C is the entire uncertainty produced by the model due to a 4 Wm-2 offset error in either albedo or emissivity.
But it’s not.
First reviewer mistake: If 1.19 C or 1.18 C are produced by a 4 Wm-2 offset forcing error, then 1.19 C or 1.18 C are offset temperature errors. Not sensitivities. Their tiny difference, if anything, confirms the error magnitude.
Second mistake: The reviewer doesn’t know the difference between an offset error (a statistic) and temperature (a thermodynamic magnitude). The reviewer’s “sensitivity” is actually “error.”
Third mistake: The reviewer equates a 4 W/m2 energetic perturbation to a ±4 W/m2 physical error statistic.
This mistake, by the way, again shows that the reviewer doesn’t know to make a distinction between a physical magnitude and an error statistic.
Fourth mistake: The reviewer compares a single step “sensitivity” calculation to multi-step propagated error.
Fifth mistake: The reviewer is apparently unfamiliar with the generality that physical uncertainties express a bounded range of ignorance; i.e., “±” about some value. Uncertainties are never constant offsets.
Lemma to five: the reviewer apparently also does not know the correct way to express the uncertainties is ±lambda or ±albedo.
But then, inconveniently for the reviewer, if the uncertainties are correctly expressed, the prescribed uncertainty is ±4 W/m2 in forcing. The uncertainty is then obviously an error statistic and not an energetic malapropism.
For those confused by this distinction, no energetic perturbation can be simultaneously positive and negative. Earth to modelers, over. . .
When the reviewer’s example is expressed using the correct ± statistical notation, 1.19 C and 1.18 C become ±1.19 C and ±1.18 C.
And these are uncertainties for a single step calculation. They are in the same ballpark as the single-step uncertainties presented in the manuscript.
As soon as the reviewer’s forcing uncertainty enters into a multi-step linear extrapolation, i.e., a GCM projection, the ±1.19 C and ±1.18 C uncertainties would appear in every step, and must then propagate through the steps as the root-sum-square. [3, 10]
After 100 steps (a centennial projection) ±1.18 C per step propagates to ±11.8 C.
So, correctly done, the reviewer’s own analysis validates the very manuscript that the reviewer called a “waste of time.” Good job, that.
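For anyone who wants to check the arithmetic, here is a minimal Python sketch of the reviewer’s toy model as I read it (the reviewer’s “albedo (0.7)” evidently means the absorbed fraction 1 - a, since 288 K follows from that reading), together with the root-sum-square step.

```python
# The reviewer's single-layer toy model: sigma*Ts^4 = S*(1 - a)/(1 - lambda/2).
# This sketch reproduces the reviewer's ~1.19 C and ~1.18 C responses, then treats
# the corresponding per-step value as a +/- uncertainty and propagates it through
# 100 annual steps as the root-sum-square.
import numpy as np

sigma = 5.67e-8      # W m-2 K-4, Stefan-Boltzmann constant
S = 340.0            # W m-2, incident solar flux (reviewer's value)
absorbed = 0.7       # the reviewer's "0.7", read as (1 - albedo)

def Ts(absorbed_frac, lam):
    """Surface temperature of the toy model, K."""
    return (S * absorbed_frac / ((1.0 - lam / 2.0) * sigma)) ** 0.25

# Back out the emissivity that yields Ts = 288 K with the stated absorbed fraction.
lam0 = 2.0 * (1.0 - S * absorbed / (sigma * 288.0 ** 4))   # ~0.78

dT = Ts(absorbed, lam0 + 0.02) - Ts(absorbed, lam0)                          # response of the "correct" model
dT_biased = Ts(absorbed - 0.012, lam0 + 0.02) - Ts(absorbed - 0.012, lam0)   # response of the albedo-biased model
print(round(dT, 2), round(dT_biased, 2))   # ~1.19 and ~1.19 here; the reviewer quotes 1.19 and 1.18

# Expressed as a per-step uncertainty and carried through a 100-step serial projection:
u_step = 1.18                               # C per step, the reviewer's own number
print(round(u_step * np.sqrt(100), 1))      # +/- 11.8 C after a centennial projection
```

The two responses differ by about a hundredth of a degree, exactly as the reviewer says. What the reviewer missed is that each of them is itself a ±1.2 C-scale per-step uncertainty, not a sensitivity, and per-step uncertainties compound.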
This reviewer:
- doesn’t know the meaning of physical uncertainty.
- doesn’t distinguish between model response (sensitivity) and model error. This mistake amounts to not knowing to distinguish between an energetic perturbation and a physical error statistic.
- doesn’t know how to express a physical uncertainty.
- and doesn’t know the difference between single step error and propagated error.
So, once again, climate modelers:
- neither respect nor understand the distinction between accuracy and precision.
- are entirely ignorant of propagated error.
- think the ± bars of propagated error mean the model itself is oscillating.
- have no understanding of physical error.
- have no understanding of the importance or meaning of a unique result.
No working physical scientist would fall for any one of those mistakes, much less all of them. But climate modelers do.
And this long essay does not exhaust the multitude of really basic mistakes in scientific thinking these reviewers made.
Apparently, such thinking is critically convincing to certain journal editors.
Given all this, one can understand why climate science has fallen into such a sorry state. Without the constraint of observational physics, it’s open season on finding significations wherever one likes and granting indulgence in science to the loopy academic theorizing so rife in the humanities. [11]
When mere internal precision and fuzzy axiomatics rule a field, terms like consistent with, implies, might, could, possible, likely, carry definitive weight. All are freely available and attachable to pretty much whatever strikes one’s fancy. Just construct your argument to be consistent with the consensus. This is known to happen regularly in climate studies, with special mentions here, here, and here.
One detects an explanation for why political sentimentalists like Naomi Oreskes and Naomi Klein find climate alarm so homey. It is so very opportune to polemics and mindless righteousness. (What is it about people named Naomi, anyway? Are there any tough-minded skeptical Naomis out there? Post here. Let us know.)
In their rejection of accuracy and fixation on precision, climate modelers have sealed their field away from the ruthless indifference of physical evidence, thereby short-circuiting the critical judgment of science.
Climate modeling has left science. It has become a liberal art expressed in mathematics. Call it equationized loopiness.
The inescapable conclusion is that climate modelers are not scientists. They don’t think like scientists, they are not doing science. They have no idea how to evaluate the physical validity of their own models.
They should be nowhere near important discussions or decisions concerning science-based social or civil policies.
References:
1. Lauer, A. and K. Hamilton, Simulating Clouds with Global Climate Models: A Comparison of CMIP5 Results with CMIP3 and Satellite Data. J. Climate, 2013. 26(11): p. 3823-3845.
2. Meinshausen, M., et al., The RCP greenhouse gas concentrations and their extensions from 1765 to 2300. Climatic Change, 2011. 109(1-2): p. 213-241.
The PWM coefficients for the CCSM4 emulations were: RCP 6.0 fCO₂ = 0.644, a = 22.76 C; RCP 8.5, fCO₂ = 0.651, a = 23.10 C.
3. JCGM, Evaluation of measurement data — Guide to the expression of uncertainty in measurement. 100:2008, Bureau International des Poids et Mesures: Sevres, France.
4. Roy, C.J. and W.L. Oberkampf, A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Engineer., 2011. 200(25-28): p. 2131-2144.
5. Rogelj, J., et al., Probabilistic cost estimates for climate change mitigation. Nature, 2013. 493(7430): p. 79-83.
6. Murphy, J.M., et al., A methodology for probabilistic predictions of regional climate change from perturbed physics ensembles. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2007. 365(1857): p. 1993-2028.
7. Rowlands, D.J., et al., Broad range of 2050 warming from an observationally constrained large climate model ensemble. Nature Geosci, 2012. 5(4): p. 256-260.
8. Stainforth, D.A., et al., Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature, 2005. 433(7024): p. 403-406.
9. Collins, M., et al., Quantifying future climate change. Nature Clim. Change, 2012. 2(6): p. 403-409.
10. Bevington, P.R. and D.K. Robinson, Data Reduction and Error Analysis for the Physical Sciences. 3rd ed. 2003, Boston: McGraw-Hill. 320.
11. Gross, P.R. and N. Levitt, Higher Superstition: The Academic Left and its Quarrels with Science. 1994, Baltimore, MD: Johns Hopkins University Press. May be the most intellectually enjoyable book, ever.
Great read, thank you Pat Frank!
Thanks, Joseph.
Sounds like Bevington and Robinson make no appearance in the Climatology curriculum.
Peer review replaced replication because it is cheaper and faster. It also suffers the Wikipedia “activist editor” problems.
Truer words … mysterian. It’s as though the modelers I’ve encountered have never been exposed to physical error analysis. They’ve evidenced no concept of it.
@Patrick Guinness
There is so much error in your position that I hardly know where to begin. I hope WUWT people don’t think this article is the last word on computer simulations. Simulations don’t sample a statistical distribution, unless that is directly programmed into the simulation. That’s why there is no “error propagation”. You can call this a fault of the simulation, but most simulations performed in all fields similarly lack a modeled error to the input parameters, and therefore do not and cannot propagate error.
The actual error of computer simulations is measured as propagation of truncation error due to limited precision of the computer. The way to establish a statistical distribution of your outputs is through sampling parameter space and observing the outcomes. This error estimate is sort-of what the climate modelers are doing and what you have a problem with. I don’t think they’re any different than modelers in other fields. SO, Pat Frank meet windmills; windmills, Pat Frank. 🙂
Also, your writing style is opaque. I had a hard time parsing what you meant. This may have also been a problem for your reviewers.
Finally, I have always been skeptical of the dogmatic “propagated error” rules in physics. The underlying assumption of these rules is that the input parameters are normally distributed with a small but finite probability of being infinitely in error. This is total nonsense. Using the well-known rules of error propagation, you can generate errors that are absurd and also depend upon the number of mathematical operations you perform without changing the underlying reality of how uncertain our knowledge is. It’s very clearly a game.
But suppose that I want proof of your claim that error propagation rules conform to reality. Where is your proof that error propagation in physics is correct in the physical world and not some inept game? How can you know *empirically* that error propagation calculations are correct in all fields? You don’t cite any articles saying that error propagation rules were observed to be correct!!! Everything you say is utter dogmatism without proof.
JDN,
Your comments confirm the knowing refusal to accept the Null Hypothesis in Climate Science. Instead, climate modellers continue to redesign their bamboo control towers, adjust the layouts of their runways, and add bling to the controller’s headsets… and then wonder why the planes still do not land.
JDN,
Are Climate Modelers scientists?
1. How or why does a lack of sampling of statistical distributions preclude error propagation? I may be a dumbass, but I have no idea why it is even a pertinent observation to say that simulations don’t sample statistical distributions. What difference does that make?
2. What does it mean to say that “most simulations” “lack a modeled error to the input parameters”? What difference does that make; how or why does this, whatever it is, preclude error propagation?
It seems to me that you cannot tell the difference between ‘measurement precision’ and ‘floating point precision’, and maybe that is the problem the reviewers have as well.
I’ll give you an example. If Earth albedo cannot be measured to more than 2 significant digits, there is no point in performing calculations or writing code in double precision or double extended precision format. Single precision will do. The third digit in your final calculation is not going to be significant, anyway.
JDN: The underlying assumption of these rules is that the input parameters are normally distributed with a small but finite probability of being infinitely in error.
That is not correct. The probability that a normally distributed random variable is greater than a particular large number is finite, but the probability goes to 0 as the large number is made larger. The mathematical fact that the normal distribution has infinite support has never prevented it from being useful with lots of kinds of measurements that are physically bounded, such as Einstein’s model of Brownian motion.
JDN:
This is in reference to your concern over the compounding of uncertainty in model simulations. It is not an attempt to answer your epistemological challenge in the final paragraph.
From Wikipedia:
Either climate modelers take proper account of inherent errors (in both measurements and in basic theory) and how they interact within each model or they do not. It seems like a legitimate area of investigation to me.
JDN, error propagation does not require sampling statistical distributions. I propagated systematic error, which need not follow any standard statistical distribution at all.
And it was not an input error propagated, but a theory-bias error; one made by the models themselves and therefore present in every simulation step. Such errors can be estimated, and can always be propagated through a simulation. And should always be propagated through a simulation.
Truncation error is a numerical error. My post spoke to physical error. Do you understand the difference? Your idea that, “The way to establish a statistical distribution of your outputs is through sampling parameter space and observing the outcomes.” shows exactly the confusion about accuracy vs. precision evidenced by all my climate modeler reviewers. Sampling parameter space is about precision. It tells one nothing about physical error. Physical error is determined by comparing model expectation values against the relevant observational magnitudes.
You wrote, “This error estimate is sort-of what the climate modelers are doing and what you have a problem with.” Correct; it is what they do. Their method says nothing about the physical accuracy or reliability of their model projections.
“I don’t think they’re any different than modelers in other fields.” If you’d like to see an actual model reliability analysis, consult Vasquez, V.R. and Whiting, W.B., 2005. Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods. Risk Analysis 25(6): 1669-1681, doi:10.1111/j.1539-6924.2005.00704.x. They propagate error.
After reading that paper, I wrote to Whiting about it. His reply was illuminating: ”Yes, it is surprisingly unusual for modelers to include uncertainty analyses. In my work, I’ve tried to treat a computer model as an experimentalist treats a piece of laboratory equipment. A result should never be reported without giving reasonable error estimates.” There’s the attitude of a practicing physical scientist for you. Rather diametrical to your modeling standards, isn’t it.
So, it seems you may be right, and that modelers elsewhere make the same mistake you do here. The mistake so obvious to a trained physical scientist and so ubiquitous among climate modelers: the inability or unwillingness to perform physical reliability analyses on their model results. An inability to understand the difference between physical accuracy and model precision.
So, accuracy, meet JDN; JDN, accuracy. I recommend you learn to know it well. If you want to do a physical science.
Sorry you found my writing style opaque. You seem to be the only one who has (so far as I’ve read). Maybe the problem has something to do with your immersion in modeler-thought.
Regarding the statistics of propagated error, when the error is empirical (made vs. observations) one doesn’t know the true distribution. The uncertainties from propagation of error are therefore always estimates. But so what? In physical science, one looks for useful indications of reliability.
Rules for propagating error are not “dogmatic.” That’s just you being rhetorically disparaging.
The results of error propagation are not absurd errors, but non-absurd uncertainties. Of course, the uncertainty increases with the number of step-wise calculations. Every step transmits its uncertainty forward as input into the next step and each step also has its own internal parameter error or theory bias error. That means one’s knowledge of state magnitudes must decrease with every calculational step. You may not like it, but it’s not mysterious.
Take a look at the derivations in Bevington and Robinson 2003 Data Reduction and Error Analysis for the Physical Sciences. Show me their derivational dependence on location in the physical world, or limitation by discipline or field. Their generalized derivations make it obvious that they are universally applicable wherever physical calculations are carried out. You’re just grasping at straws, JDN.
You’re welcome to carry out your climate modeling sealed hermetically away from physical error analysis, and from the threat of countervailing observations. But then don’t pretend that you’re doing science. And don’t pretend that your models have anything to do with physical reality.
I agree that propagating error in computer code is possible, and I think you agree that it’s not usually done. Should it be? The reason I brought up answering the question empirically is because, to my knowledge, our basic error propagation techniques were derived in fields that are suited to examining the results of these rules. For example, I’m reasonably sure the error estimate propagation techniques will work in atomic physics. I’m completely uncertain whether error should be propagated in the same way for “big world” simulations. The Monte Carlo approach to precision is what most people go with. With chemical simulations I’ve done, this is what I go with.
Your demand for “precision” makes you a character out of central casting. You would be the tragically flawed scientist who opposes the eventual hero. It’s just the “angry old man syndrome” talking. Just saying… “Precision” and “accuracy” are overloaded terms. They have been defined in so many ways, you can’t seriously expect people to *not* have cognitive dissonance reading these terms.
If you want to be understood, instead of just yelling at kids to get off your lawn, call these things something closer to what they are, maybe confidence interval of simulation output vs. variance from observation. See… that really clears things up. Demanding that people adhere to your jargon is unfriendly.
You can afford to be a friend because the climate simulations are completely bogus for so many other reasons.
And to answer the other commenters about my opinion of climate scientists, whether they’re scientists… they seem to be really bad ones. But my own field is gradually going this way as well. The corrosive effect of grant money means that the greatest scientific sin is to lack funding. It used to be an insult if someone said you would believe anything for money… now, you can put it on your CV. If you guys have so much time, stop messing around with criticizing bad stats and get control of the funding.
JDN, your supposition that I am ‘demanding precision’ makes me think you haven’t actually read anything. My entire post is about the importance of accuracy to science. Demur as you like, but it is both appropriate and standard to propagate error through any multi-step calculation.
If you’d care to read Bevington and Robinson you’ll find accuracy and precision carefully defined, and in the standard manner. Accuracy and precision have not been defined in “many ways” as you have it, but in one way only. You lose sight of that at the peril of your work.
Pat Frank
Thank you for your fine article. I ask you to continue to press the issue because I have been pressing it with no success since 1999.
You summarise the problem when you write:
You are not the first to observe that the climate model fraternity lacks a “basic standard of science”. For example, in my peer review of the draft IPCC AR4 I wrote the following which was ignored; i.e. my recommendation saying “the draft should be withdrawn and replaced by another that displays an adequate level of scientific competence” had no effect.
“ Page 2-47 Chapter 2 Section 2.6.3 Line 46
Delete the phrase, “and a physical model” because it is a falsehood.
Evidence says what it says, and construction of a physical model is irrelevant to that in any real science.
The authors of this draft Report seem to have an extreme prejudice in favour of models (some parts of the Report seem to assert that climate obeys what the models say; e.g. Page 2-47 Chapter 2 Section 2.6.3 Lines 33 and 34), and this phrase that needs deletion is an example of the prejudice.
Evidence is the result of empirical observation of reality.
Hypotheses are ideas based on the evidence.
Theories are hypotheses that have repeatedly been tested by comparison with evidence and have withstood all the tests.
Models are representations of the hypotheses and theories.
Outputs of the models can be used as evidence only when the output data is demonstrated to accurately represent reality.
If a model output disagrees with the available evidence then this indicates fault in the model, and this indication remains true until the evidence is shown to be wrong.
This draft Report repeatedly demonstrates that its authors do not understand these matters. So, I provide the following analogy to help them. If they can comprehend the analogy then they may achieve graduate standard in their science practice.
A scientist discovers a new species.
1. He/she names it (e.g. he/she calls it a gazelle) and describes it (e.g. a gazelle has a leg in each corner).
2. He/she observes that gazelles leap. (n.b. the muscles, ligaments etc. that enable gazelles to leap are not known, do not need to be discovered, and do not need to be modeled to observe that gazelles leap. The observation is evidence.)
3. Gazelles are observed to always leap when a predator is near. (This observation is also evidence.)
4. From (3) it can be deduced that gazelles leap in response to the presence of a predator.
5. n.b. The gazelle’s internal body structure and central nervous system do not need to be studied, known or modeled for the conclusion in (4) that “gazelles leap when a predator is near” to be valid. Indeed, study of a gazelle’s internal body structure and central nervous system may never reveal that, and such a model may take decades to construct following achievement of the conclusion from the evidence.
(Having read all 11 chapters of the draft Report, I had intended to provide review comments on them all. However, I became so angry at the need to point out the above elementary principles that I abandoned the review at this point: the draft should be withdrawn and replaced by another that displays an adequate level of scientific competence).”
Richard
I have a minor quibble.
“From (3) it can be deduced that gazelles leap in response to the presence of a predator”.
That would be a logical inference rather than a deduction.
Actually, having been to Africa, some gazelles do not leap in response to the presence of a predator.
This is called “dinner”, and happens frequently enough to feed lots of predators.
Chris Hanley
Yes, you are right. However, my post was not intended to introduce discussion of an illustration: I wrote in support of the assertion by Pat Frank that climate modelers lack an adequate “basic standard of science”.
Richard
Thanks, Richard. You came to it earlier than I did.
I am not sure I can believe anyone who cannot count sequentially to 6. What do you have against 4, Pat?
Actually, I am just wondering if you forgot to cut and paste a section.
Hei hei
John M Reynolds
That’s a hoot; I read the entirety and failed to see the absent 4! Nice comment.
Also, it would be nice to explain how errors propagate through a simple model: is it only the initial estimates, or are there additional errors introduced during the model run that can also propagate?
Timo
Propagation of errors concerns the errors in the physical measurements, which can take the calculations off the rails. Each measurement, be it TSI, downwelling radiation in W/m^2, surface air temperature, etc., has an error range associated with it. When you model these measurements into the future, you need to keep in mind that they all have a range of accuracy. The first step has a range of accuracy. As a result, the second step does not have a fixed starting point. The error from the first step has to be included in the second step. At each subsequent step, the error of the previous step has to be added to that of the current step.
Example from the text: Using the root-sum-square, after 100 steps (a centennial projection) ±1.18 C per step propagates to ±11.8 C.
Hei hei
John M Reynolds
Thanks, John — that’s about it.
ulp, you’re right, jmrSudbury. No prejudice against 4, honestly. Must have missed a finger. 🙂
What was it Sinead O’Connor said? Have the strength to change the things you can, the courage to face the things you can’t, and the wisdom to know the difference.
You are not going to get a bunch of numpties to agree to something that would nullify their life’s work and render themselves irrelevant.
The truth will out in the end.
The Serenity Prayer is the common name for a prayer authored by the American theologian Reinhold Niebuhr[1][2] (1892–1971) – not Sinead O’Connor ;(
Pat, why not contact Professor Chris Essex at the University of Western Ontario, Department of Applied Mathematics. He outlines his views in a lecture at: https://www.youtube.com/watch?v=19q1i-wAUpY
He and Ross McKitrick, a professor of economics, have a book on the subject of modeling:
Taken By Storm: The Troubled Science, Policy and Politics of Global Warming
http://www.amazon.com/Taken-Storm-Troubled-Science-Politics/dp/1552632121/ref=sr_1_13?ie=UTF8&qid=1424578417&sr=8-13&keywords=taken+by+storm
Frederick, I read “Taken by Storm” a coupla-three years ago, thanks. It’s an excellent book. If consensus climatology were a real field of science, books like that, and other published work, would have changed its direction long ago. I’ve been in touch with Chris Essex. He knows my work.
“Kuhn’s book (William: Kuhn’s book “The Structure of Scientific Revolutions”) challenged the popular belief that scientists (William: particularly including ‘climate’ scientists, who are extraordinarily resistant to the piles and piles of logical arguments/observations/analysis results that indicate their theories are urban myths, due to the climate wars) are skeptical, objective, and value-neutral thinkers. He argued instead that the vast majority of scientists are quite ‘conservative’; they’ve been indoctrinated with a set of core assumptions (William: over time it becomes unthinkable that the core assumptions/theories could be incorrect) and apply their intelligence to solve problems within the existing paradigms.
Scientists don’t test whether or not they are operating with a good (William: validated) paradigm as much as the paradigm tests whether or not they are good scientists. (William: ‘good’ scientific behaviour is defined as not publicly questioning the group’s core beliefs, or implying that the group’s core beliefs are an urban legend.)”
If there are fundamental errors in the base theory, there will be piles and piles of observational anomalies and paradoxes. The fact that there are piles and piles of anomalies in almost every field of ‘pure’, non applied science indicates there is something fundamentally incorrect with the methodology/approach and ‘culture’ of ‘pure’ science. It also explains why major, astonishing breakthroughs in pure science are possible. (Imagine decades and decades of research which provides the observations/analysis to solve the puzzles and a weird irrational culture that stops people from solving the problem.)
An example of an in-your-face failure to solve a very important scientific problem/puzzle: what is the origin of the earth’s atmosphere, oceans, ‘natural’ gas, crude oil, and black coal? The competing theories (there was/is no theory competition) are: 1) the ‘late veneer theory’ (the late veneer theory is connected with the fossil fuel theory, where a tiny amount of CO2 is recycled in the upper mantle) vs. 2) the deep core CH4 theory (see the late astrophysicist Thomas Gold’s book ‘The Deep Hot Biosphere: The Myth of Fossil Fuels’, where there is a large continuous input of CH4 and CO2 into the biosphere from CH4 that is extruded from the core of the earth as it solidifies, which explains Humlum et al’s CO2 phase analysis result paradox and roughly 50 different geological paradoxes/anomalies) as to the origin of the earth’s atmosphere, oceans, and ‘natural’ gas/crude oil.
A standard, effective, structured approach to problem solving (as basic as listing and organizing the anomalies/paradoxes in a very long review paper or a short book, looking at the logical implications of the anomalies/paradoxes, and formally exploring/developing alternative theories) is not followed, because it would highlight the fact that there are fundamental errors in the base theories, and that some of the base theories are most certainly urban myths (i.e. cannot possibly be correct).
It is embarrassing, unthinkable for the group of specialists to question their core science, to suggest that there are or could be fundamental errors in their base theory and that those errors could have gone unaddressed for decades, that their base theory could be, is obvious an urban myth. New graduates that want to ‘progress’ in the field, that want to get university teaching and research positions (imagine a research department being made up of a couple of dozen professors, with seniority and a pecking order and each specialty field having a few hundred teaching members and a thousand want to be teachers/researchers, again with a pecking order and benefits that can be controlled/changed to encourage the culture) have no logical alternative but to continue to support the incorrect paradigms and ineffective approach to problem solving.
The IPCC climate models’ response to forcing changes is orders of magnitude too large (the general circulation models, GCMs, amplify forcing changes, i.e. positive feedback, rather than suppress and resist forcing changes, i.e. negative feedback). If the real world’s response to forcing changes were to amplify the forcing change, the earth’s temperature would oscillate widely in response to, let’s say, a large volcanic eruption or other large temporary forcing change.
The justification for the claim that the planet amplifies rather than resists forcing changes is the fact that there are, in the paleoclimate record, very large climate changes. These very large, very rapid climate changes are not, however, random; they are cyclic. The Rickies (rapid climate change events, RCCEs) correlate with massive solar magnetic cycle events and unexplained geomagnetic changes.
A climate model that has positive feedback can be ‘tuned’, by adjusting the inputs and internal model variables, to produce a rapid, very large, abrupt temperature response to a small forcing change. As noted above, however, since the earth’s temperature does not oscillate widely when there are large temporary forcing changes, the explanation for cyclic abrupt climate change in the paleo record is not that the planet amplifies the forcing. The explanation for the Rickies is that the sun can and does change in a manner that causes very, very large changes in the earth’s climate, which is supported by the fact that there are cosmogenic isotope changes at each and every abrupt climate change event and at the slower, smaller climate change events.
A small few of the climate science practitioners have grudgingly admitted their discomfort at the use of higher-than-observed mid-latitude and tropical aerosols to balance the GHG forcing if the Arctic temperature rises are to be replicated by the GCM ensemble results.
That problem, in and of itself if “climate sciencism” were a real science, should demand a major dumping of the inherent assumptions built into the models, and then the models themselves.
Scientists are heir to all the foibles that infect humanity, William. There’s nothing new or revelatory about that.
The interplay of falsifiable theory and replicable observation, though, is the particular strength of science. It’s not present in any other field of study.
So long as this method is freely and honestly practiced, the problems you note will be only temporary, uncomplimentary and braking of progress though they may be.
The problem in climatology has been the deliberate subversion of free and honest scientific practice. Had the major scientific institutions — the APS and AIP especially — stood up against the politicization of climate science, we’d never be in this Lysenkoist-like oppressive mess.
Rather than “Are modelers scientists?” Dr. Frank might as well have asked, “Are people logical?”
Having dealt extensively with the physicists, chemists, and engineers that Dr. Frank contrasts with climate modelers, I can assure you that they are more than capable of similarly flubbing basic distinctions.
In my working life I saw a group of them fail repeatedly to comprehend the difference between the flow of a fluid and the propagation of a disturbance through that fluid. I saw a large sum of money wasted because highly regarded scientists failed to focus on the distinction between length resolution and angle resolution. I could go on.
In the blog world–actually, on this very site–I’ve seen scientists repeatedly fail to distinguish between the stated conclusion of Robert G. Brown’s “Refutation of Stable Thermal Equilibrium Lapse Rates,” which can be interpreted as correct, and the logic by which that conclusion was reached, which is a farrago of latent ambiguities and unfounded assumptions defended by gauzy generalities and downright bad physics.
In the latter context I succumbed as Dr. Frank did to the temptation to be provocative, in my case by saying that we lawyers are justified in viewing science as too important to be left to scientists; it sometimes seems that you have to undergo a logicectomy in order to become a scientist. The truth, though, is that failure to recognize basic distinctions is less a characteristic of any particular occupation than a general human shortcoming that in varying degrees afflicts us all.
But Dr. Frank’s response is undoubtedly salutary; venting is often better for the soul than a stiff drink when others blandly reject what you see with crystal clarity. That it will also serve as “a civic corrective good” in this case is devoutly to be hoped.
Please see my response to William, Joe. It perhaps applies to your experience, too. That scientists are fallible and given to their own brand of foolishness or failure is a constant across modern history. Recognizing that won’t help the situation, but it may allow you to let go, and help you feel a bit more optimistic about things.
Nice one Pat. I occasionally drink in the same pub as one of those ignorant PhDs. While ignorance is curable, it would seem the cure is not particularly popular these days…
The cure for alcoholics is not (1) more alcohol, nor (2) more money to buy better alcoholic beverages, nor (3) removal of life’s personal/family responsibilities to negate the negative effects in order to justify staying drunk.
Climate modellers and their climate scientist followers practice analogies of all three of the above. As with alcoholism, the cure for climate modelism will be the complete removal of the funds which support the dysfunction. Of course they will resist with screaming fits and tantrums.
Thanks, PG. Today’s problem is, of course, state-mandated ignorance rather than mere ignorance itself.
It is true that we teach engineering juniors propagation of error (I prefer propagation of uncertainty), but I’m not so sure that undergraduates in physics or chemistry necessarily see the same. I don’t recall it from my physics undergraduate days of 40-44 years ago. I do not recall seeing it in any chemistry course I have taken. I made a presentation about this a few years ago to a group of community college educators (science and mathematics) and no one could recall seeing it before.
I am very disturbed by this quotation from one of the reviewers.
Not only is it badly worded (“could resign”?), the writer fails to comprehend systematic error, or bias, and that a PDF does not quantify such. In fact, if one considers that consistency means that collecting more data must lead to an outcome closer to truth, bias results in a method that lacks consistency. I think the reviewer “could resign.”
Kevin Kilty: Not only is it badly worded (“could resign”?), the writer fails to comprehend systematic error, or bias, and that a PDF does not quantify such. In fact, if one considers that consistency means that collecting more data must lead to an outcome closer to truth, bias results in a method that lacks consistency.
Yeh. That’s bad. In my experience, reviews almost always included at least a few fundamental errors or misunderstandings of statistics. In my case, they were not the deciding factors in acceptance/rejection.
Here, I think the basic disagreement between Pat Frank and the reviewers is that the reviewers want to treat the value of 4 W/m^2 as though it is really accurately known from considerations outside the modeling effort itself, and Pat Frank wants to show how much uncertainty in the modeling results is added when it is treated as any other poorly known parameter.
So, you are saying that the reviewers think there is no bias, and thus accuracy == precision. Fair enough. But they haven’t proven any such thing and have no basis for assuming such. I’ll repeat this again and again: even the metrologists measuring fundamental constants have trouble quantifying bias. Fischoff showed that one of the experimentally determined speed-of-light values, from back when the speed of light was not the definition of the meter, was 42 standard deviations away from the best accepted value of later experiments.
Kevin Kilty: But they haven’t proven any such thing and have no basis for assuming such.
I don’t disagree. But the value is based on reasoning outside the specific climate model. It’s widely cited as the best available working value for the effect of doubling CO2 concentration. Treating it as a random variable adds to prediction uncertainty (Pat Frank’s point, I think) without elucidating sources of model uncertainty (a cost, imho.) If you add enough to the prediction uncertainty, you can’t have (in my opinion) as much confidence that the models are wrong.
Matthew R Marler, February 24, 2015 at 2:09 pm
Nor could there be any confidence that the models were right. Indeed, with iterative models, where the previous ‘prediction’ is the source of data for the next model iteration, not only errors but also uncertainties will propagate. As the system is a coupled, non-linear, chaotic system, there is no way of telling in which direction or with what magnitude those errors and uncertainties will propagate. Indeed, the models are not very much better than random number generators. Presumably the modelers put in what is to them a reasonableness test for values in the model, which effectively means they have created a random number generator that is only allowed to show results the modeler thinks are reasonable. Presumably those that meet the requirements of the funding agency.
Ian W: Nor could there be any confidence that the models were right.
I agree, and I am not confident that the models are right.
Matthew, it’s (+/-)4 W/m^2 and is the average annual global long-wave cloud forcing error made by CMIP5 climate models. It’s derived in reference [1], above.
So, it’s a physical error statistic. And it shows that advanced climate models do not correctly partition thermal energy among the various climate sub-states. I show that cloud error is highly correlated among CMIP5 climate models, which implies that the (+/-)4 W/m^2 reflects a theory-bias.
This error must then enter into every single step of a global climate simulation. I also show that global air temperature projections are just linear extrapolations of GHG forcing (any forcing, really). Hence the linear propagation of cloud forcing error.
The disagreement with climate modelers arises because, first, they do not understand error propagation and so reject its diagnosis, and second, they don’t understand the difference between a physical error statistic and an energetic perturbation, and so treat the statistic as though it impacts the model expectation values, in this case air temperature. That’s why several of the reviewers supposed that the error bars imply the model itself is oscillating between hot house and ice house conditions.
As you might imagine, all of this was a stunning revelation to me.
Ian, my ms has a section discussing exactly the point you raise — the unknown magnitude of the error in a futures simulation, and the entry of prior error into subsequent steps.
It also discusses the difference between growth of error and growth of uncertainty. The latter grows without bound in a futures simulation, but the former does not. As the magnitude of the error is unknown and unknowable in a futures simulation, all one has is uncertainty. And the greater the number of modeling steps, the greater the state of ignorance about the final state.
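To make that distinction concrete, here is a minimal numerical sketch (my own illustration, not the manuscript’s actual code), assuming the per-step statistic is the ±4 W/m^2 figure and that step uncertainties accumulate in quadrature. The single simulated realization stands in for the unknown, possibly self-cancelling error; the √n term is the uncertainty, which only widens:

```python
# Minimal sketch (not Pat Frank's manuscript code) of error vs. uncertainty
# in a stepwise projection. Assumes a +/-4 W/m^2 per-step statistic and
# quadrature (root-sum-square) accumulation of uncertainty.

import math
import random

SIGMA_STEP = 4.0   # per-step uncertainty statistic, W/m^2
N_STEPS = 100      # number of annual projection steps

random.seed(0)
realization_error = 0.0

for n in range(1, N_STEPS + 1):
    # One hypothetical realization: the actual error in each step is unknown
    # and may partly cancel from step to step (drawn at random here purely
    # for illustration).
    realization_error += random.gauss(0.0, SIGMA_STEP)

    # The uncertainty, by contrast, only grows: the sign of each step's error
    # is never known, so the band widens as sqrt(n).
    uncertainty = math.sqrt(n) * SIGMA_STEP

    if n in (1, 10, 100):
        print(f"step {n:3d}: realization error = {realization_error:7.1f} W/m^2, "
              f"uncertainty = +/-{uncertainty:5.1f} W/m^2")
```

The numbers here stay in forcing units; converting the widening band into a temperature uncertainty requires an emulator of the projection, which is what the manuscript’s PWM approach does.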
Pat Frank: Matthew, it’s (+/-)4 W/m^2 and is the average annual global long-wave cloud forcing error made by CMIP5 climate models. It’s derived in reference [1], above.
Thank you.
Kevin, you hit the nail right on the head. There isn’t one branch of consensus climatology where the practitioners take systematic error into account. Not one. Not the modelers, not the global air temperature people, and certainly not the paleo-temperature reconstructionists.
I use “practitioners” deliberately, because their level of practice doesn’t qualify as science.
I was exposed to propagation of error in my undergraduate classes that included measurement labs, most especially analytical chemistry. It was also emphasized in physical chemistry — all the courses where measurement data was important.
One of the physicists at work recommended Bevington and Robinson to me, when I asked about a good text on error analysis. He said he took a junior-level course organized around that book. That led me to believe that error analysis is a formal part of the usual undergraduate physics major. But I’ve made no survey and could be wrong about that.
As you clearly know, nothing could be more important than understanding and propagating error when evaluating the level of quantitative knowledge yielded by some experiment, observation, or sequential calculation — including use of a model to predict a result.
But as you saw in your chosen example, and probably recognized throughout, climate modelers evidently don’t have a clue. One suspects their education has a huge and scientifically fatal hole.
Pat
Thanks, firstly for sharing your work and thanks for being so candid.
I’m guessing you’re really frustrated when even truisms are being dismissed as nonsense. The inability to distinguish between precision and accuracy is a telling one. I think that in many fields, however, uncertainty analysis and estimates of error propagation are conflated – particularly in engineering. But you are right.
On the issue of propagation, it seems like quite a hard thing to get right before the event. I’m guessing the only way to really measure this is to assess sensitivity to starting conditions: run many models with different data sets, even different floating-point precision, in order to see how these affect the model runs. But then they do this anyway when they collate the models from all the groups – or is this not the case? Am I wrong here?
I’ve actually been to a number of talks that presented the results of simulated and measured error propagation; the results showed that simulated propagation (under various assumptions) was actually far greater than measured error propagation (using sensitivity-type approaches). Most of the studies pertained to mechanical failure of rocks.
So perhaps you and the reviewers are talking at cross-purposes.
Thanks, cd, it’s been quite an experience. Let’s see, propagation of physical error is different from sensitivity to starting conditions or simulations with varying parameter sets or floating point precision. These modeling experiments test the variability of the model expectation values. They are a measure of model precision. They don’t say anything about whether the model is physically correct — whether it is accurate.
Evidence of physical accuracy comes with comparison against the relevant observable. Propagation of physical error takes the inaccuracy metric — the known observed physical error made by the model — and extrapolates it forward to produce a reliability estimate of model expectation values. The further out the model is pushed, the greater will be the uncertainty due to propagated physical error. The uncertainty is a measure of how much trust one can put in the expectation value magnitudes predicted by the model.
One could take the outcomes of different starting conditions, or use of different model parameters, and compare them against observations. This would give information about how accurate, or inaccurate, the model might be under those conditions.
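A toy numerical version of that distinction (made-up numbers, purely illustrative): the spread of an ensemble measures precision, while only comparison against an observation measures accuracy.

```python
# Toy illustration (made-up numbers) of precision vs. accuracy.
# Ensemble spread = precision; distance from the observation = accuracy.

import statistics

# Hypothetical projections of the same quantity from runs with perturbed
# initial conditions or parameters (arbitrary units).
ensemble = [2.90, 3.05, 3.10, 2.95, 3.00]

# Hypothetical observed value of that quantity.
observed = 2.0

spread = statistics.stdev(ensemble)           # precision: how tightly the runs agree
bias = statistics.mean(ensemble) - observed   # accuracy: how far they sit from reality

print(f"ensemble spread (precision): {spread:.2f}")
print(f"ensemble bias (accuracy):    {bias:.2f}")
# A tight spread with a large bias is precise but inaccurate: the spread
# alone says nothing about whether the model is physically correct.
```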
“But this blog does not qualify as peer-reviewed literature” (according to the eminent we-don’t-know-what-he-knows warrenlb…). Please check “is ±114´ larger” above. Should that be 114%, rather than a minute sign (´)?
RACook, it should have been 114×, i.e., times. The global annual average (+/-)4 W/m^2 of CMIP5 cloud forcing error is 114 times larger than the global annual average 0.035 W/m^2 increase in GHG forcing since 1979.
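For anyone checking the arithmetic, the factor is just the ratio of the two quoted numbers:

\[
\frac{4\ \mathrm{W\,m^{-2}}}{0.035\ \mathrm{W\,m^{-2}}} \approx 114
\]

That is, roughly 114 times, not 114%.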
In the real world, all the laws of physics are in effect all the time. The most important reason to attempt to develop a complex computer model is to learn whether you understand the situation well enough to include all relevant physical processes. Only after demonstrating that a model agrees with real-world scenarios is the model useful for predicting any real-world “what if” outcomes. Clearly, Climate Science has failed to learn all the relevant processes.
No doubt about it, Lester. The problem is that they’ve proceeded on to abuse the models.
The people who input the data into these models are not “modelers”; they are data entry clerks.
All they truly know is that they can “adjust” the program inputs to get the answers the managers want, and they get a paycheck every two weeks.
Forget the fact that the models can’t cope with reality; their financial situation can’t accept that concept.
Here is a thought for these modelers: Let’s have them submit their resumes to the top 5 aerospace firms to get a job modeling planes in flight. Do you think they would get hired?
Pat Frank wrote about the first figure:
I don’t see any blue in panel a. What am I missing?
… examine panel a closely and you will see a blue dotted line and a green dotted line within the shaded areas.
Thank you, joelobryan. Thought blue was black. 🙁
Thanks, Joel. My redundancy.
Policycritic, blue dots should show up near the purple line of the CCSM4 RCP8.5 simulation. They do in my browser — Firefox.
I have been saying for several years that the output of the GCMs provides no basis for rational discussion of future climates. See Section 1 at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
Here’s an excerpt:
“The modelling approach is also inherently of no value for predicting future temperature with any calculable certainty because of the difficulty of specifying the initial conditions of a sufficiently fine grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex: https://www.youtube.com/watch?v=hvhipLNeda4 ”
… The IPCC climate models are further incorrectly structured because they are based on three irrational and false assumptions. First, that CO2 is the main climate driver. Second, that in calculating climate sensitivity, the GHE due to water vapor should be added to that of CO2 as a positive feedback effect. Third, that the GHE of water vapor is always positive. As to the last point, the feedbacks cannot always be positive, otherwise we wouldn’t be here to talk about it.”
The climate models are built without regard to the natural 60-year and, more importantly, 1000-year periodicities so obvious in the temperature record. Their approach is simply a scientific disaster and lacks even average common sense. It is exactly like taking the temperature trend from, say, February to July and projecting it ahead linearly for 20 years or so. They back-tune their models over less than 100 years when the relevant time scale is millennial. This is scientific malfeasance on a grand scale.
In summary, the temperature projections of the IPCC–Met Office models, and all the impact studies which derive from them, have no solid foundation in empirical science, being derived from inherently useless and specifically structurally flawed models. They provide no basis for the discussion of future climate trends and represent an enormous waste of time and money. As a foundation for governmental climate and energy policy, their forecasts are already seen to be grossly in error and are therefore worse than useless. A new forecasting paradigm needs to be adopted.
For forecasts of the timing and extent of the coming cooling, based on the natural solar activity cycles – most importantly the millennial cycle – and using the neutron count and 10Be record as the most useful proxies for solar activity, check my blog post linked above.
The most important factor in climate forecasting is where the earth is in regard to the quasi-millennial natural solar activity cycle, which has a period in the 960-1020 year range. For evidence of this cycle see Figs 5-9. From Fig 9 it is obvious that the earth is just approaching, just at, or just past a peak in the millennial cycle. I suggest that, more likely than not, the general trends from 1000-2000 seen in Fig 9 will repeat from 2000-3000, with the depths of the next LIA at about 2650. The best proxy for solar activity is the neutron monitor count and the 10Be data. My view, based on the Oulu neutron count (Fig 14), is that the solar activity millennial maximum peaked in Cycle 22, in about 1991. There is a varying lag between the change in solar activity and the change in the different temperature metrics. There is a 12-year delay between the neutron peak and the probable millennial cyclic temperature peak seen in the RSS data in 2003. http://www.woodfortrees.org/plot/rss/from:1980.1/plot/rss/from:1980.1/to:2003.6/trend/plot/rss/from:2003.6/trend
There has been a declining temperature trend since then (usually interpreted as a “pause”). There is likely to be a steepening of the cooling trend in 2017-2018, corresponding to the very important Ap index break below all recent base values in 2005-6 (Fig 13). The polar excursions of the last few winters are harbingers of even more extreme winters to come, more frequently, in the near future.
Excellent Article! Relating model predictions to reality is indeed basic, but not frequent. Even Albert Einstein made this mistake. He suggested actual reality did not exist. He suggested perception, regardless of reference, was equally valid. But nothing can be stated as valid without relating it to a known absolute reality.
And thus were physics and astronomy led astray for decades and decades. Not that there hasn’t been some interesting stuff found out since Einstein, but they went off into la-la land a bit too far. It will take some hard-headed thinkers to undo some of the silliness that has transpired.
And this “absolute reality” is?
“a known absolute reality”
WTF???
Is that a bit like “truth”?
Pachauri has stepped back! Good news.
“Are Climate Modelers Scientists?”
####
Good question. My impression is that they are heavy in mathematics and programming, but have only a smattering of understanding of geophysics and radiative physics, and are utterly deficient in the other natural sciences. The study of climate is a multi-science affair that involves all fields of natural science.
I was educated as a physicist, but most of my career was in engineering. One thing that engineering taught me is that precious few people, especially pure “scientists,” effectively understood the difference between accuracy and precision. That is orders of magnitude more true of the general population. And among engineers it is worse for those who came to the field after the introduction of digital displays on everything, including calipers. If you’ve struggled with an old caliper, or even a slide rule, you intuitively understand these things. If your only experience is using a caliper that some idiot has tacked a 5-digits-past-the-decimal-point display onto, you’re less likely to understand.
I also did a great deal of computer simulation and modeling, but as this was engineering, not “pure” science, I was always constrained by reality. I wasn’t able to say, “well, the data must be wrong, the model is right!” To do so would have been a quick trip to the unemployment office.
All of which explains why I have so little respect for “climate science” as practiced by the majority of its adherents. Feynman is probably spinning in his grave so fast that, if you could attach a shaft to him, he would power a small city.
As a wannabe physicist who opted out of school but ended up in engineering myself, I can only say, “Amen!” to that.
Engineering will quite rapidly put your feet on solid ground. And any science that doesn’t have its feet on solid ground – WTF is it, anyway? I’ve shaken my head and rolled up my eyes now thousands of times at the silliness that academics/ivory tower dudes can come up with. I have to ask if they have ever done anything practical in their lives – ever had to make something that actually WORKS.
I wondrously got to spend 7+ years in R&D, and THAT allowed me to learn how to assess problems, possible solutions, and to derive REAL empirical experiments to test out what sounded like reasonable explanations. I was the person running the experiments – and sometimes the results confounded my best “reasonable” thinking. THAT is a lesson in humility. People would be amazed at how even in hard-nosed industry that many times “reasonable” ends up being wrong – when the experiments are actually done in the real world. And then extend that to “the frontiers of science” instead of industry – “reasonable ideas” are only starting points. It is reality that tells you what is science and what isn’t.
The relevant problem, Severian, is that climate modelers of my experience would have no idea what you’re talking about.
Thanks. I now have oatmeal in my sinuses and all over my computer. This should have been under lock and key until Friday Funny. WUWT, please handle dangerous material responsibly!
Well …
1) At least they are precise about misunderstanding accuracy.
2) They understand much about the meaning and method of propagating error. It’s called “socialism.”
3) There’s a physical error bar right down the street from me. It’s call: Mann. That place is oscillating every Friday and Saturday night.
4) Nor that the unformation is in the errers.
5) Of course they understand the importance of a unique result. Otherwise, why would they have so many of them?
We’re even. Now I had to clean my keyboard.
The errors associated with accuracy and precision are lost in the idea that there is such a thing as a global average temperature that changes with time, and in the assumptions that go into the models that are supposed to explain why that global average temperature changes. One false assumption that is critical to the AGW argument is that anthropogenic emissions of CO2 are the sole cause of the accumulation of CO2 in the atmosphere. I have done statistical mass-balance analyses on different regions of the globe that strongly show this critical assumption to be false. http://www.retiredresearcher.wordpress.com. At the end of this blog I dared to project the confidence limits of these statistical models into the future. However, I did not consider propagation of errors. I would greatly appreciate reviews of this work by anyone; your knowledge of propagation of errors would be especially valuable in such a review. Anyone who wishes to review this study can comment on my blog. You can get there by clicking on my name.
You should write up your work formally and submit it to some journal, fhhaynie. A formal systematic write-up will help your thinking because the process will cause you to think through every detail. You’ll discover any holes in your analysis. The entire exercise will give you the confidence to proceed to submission, and the knowledge to meet your reviews.
Did you read it or just scan it? I have spent a lot of time going through the details, and I’m asking anyone to do a critical review and let me know where I missed something. I thought that with your expertise I could get a good review. I haven’t written for journal publication in over twenty years and have no desire to begin again. For the last few years I have been studying the work of “climate scientists”, because I recognized years ago that their models were on the wrong track, and they have gotten worse since. I hope that by openly presenting what I have learned, some minds will be changed and my great-grandchildren will have a better future.
As for journal publication, I don’t recall ever having one of my papers rejected, though I have been advised by reviewers and editors to make changes and corrections (which I did). I maintained membership in three societies (not now), served as chair of committees and symposia, did many reviews of papers, and even served on the editorial board of a short-lived journal. You can find some of my published papers by googling Fred H. Haynie. My expertise was in atmospheric corrosion. If someone wants to write my blog post up for journal publication, I will help them.
Honestly, I just scanned it Fred. To do a proper review of your topic, which is outside of my professional field, would take days of effort. These, I presently do not have.
If you’ve written for publication, and have clearly already gone to significant effort, then it seems like a relatively small further effort to take the one last step and write it up formally. Energy and Environment is friendly to well-conceived critical papers about climate.
Like you I’m a veteran of peer-reviewed publication. All my professional manuscripts have been successfully published. It’s just that never have I run into such near uniformly poor-quality reviews as from these two top-flight climate journals. Like you, I’m in the fight for our future.
Thanks for being honest. Some journals have been hijacked, and I don’t consider them to be unbiased scientific journals; pal review is well documented. As for me needing to formally publish in a journal, I would prefer some qualified scientist, engineer, statistician, or economist (who is twenty or thirty years younger) to take what I have done, improve on it, and get it published. When I retired over 20 years ago, I retired from formal publishing. My wife thinks I spend too much time commenting on blogs as it is. My “honey-do” list is about five years long.
I had a similar experience long ago. Six months after final rejection, someone better known in the community published essentially the same result, though with less detail and utility. I do not lay all the blame on my lack of publishing credentials at the time – the paper was not written in a manner to engage the casual reader, or to make it immediately clear why it was important and how it differed from what had come before.
But the thing that rankled so much at the time was that some of the reviewers’ comments betrayed a total lack of awareness of some basic subject matter. I didn’t mind getting rejected so much as being rejected for completely wrong reasons.
It is pretty much a given in any profession, I think, that only a small minority really know what they are doing. You’ve got to spoon feed it to them, and clearly state what you are doing and why it is important, or at least different from what has come before. And, it helps to have a long paper trail and a well established reputation in the field.
In climate science, you’ve also got to have… well, maybe it’s just impossible in climate science today. After all, for the reviewers to approve, you’re actually asking them to stand at the window where so many have been defenestrated, even previously well-respected authorities.
I am surprised no one has posted one of these yet:
http://cdn.antarcticglaciers.org/wp-content/uploads/2013/11/precision_accuracy.png
GOOD ONE!
“A picture is worth a thousand words.” What a perfect example of that old truism.
I used that when teaching. Rather than an arrow, I used the analogy of a rifle shooter with good technique and a sight set up correctly, but a short barrel. The shots get spread over a wide range around the bull’s-eye, so he is still accurate, if not precise. The shooter with a longer barrel and a poorly adjusted sight is more precise, but inaccurate.
Great visual message, that really captures the essence of the difference. Climate models are presently upper left, with the precision-centric efforts of modelers bringing them inevitably to the upper right. There, they’ll stay, so long as current practice remains in force.
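For readers who prefer numbers to targets, here is a minimal sketch of the same rifle analogy (hypothetical shot coordinates, purely illustrative): the offset of the group centre from the bull’s-eye measures accuracy, while the scatter about the centre measures precision.

```python
# Numerical version of the rifle-target analogy (hypothetical shot groups).
# Bias of the group centre = accuracy; scatter about the centre = precision.

import math
import statistics

def describe(shots, label):
    """Report bias and spread of a shot group relative to a bull's-eye at (0, 0)."""
    xs, ys = zip(*shots)
    cx, cy = statistics.mean(xs), statistics.mean(ys)
    bias = math.hypot(cx, cy)  # distance of the group centre from the bull's-eye
    spread = statistics.mean(math.hypot(x - cx, y - cy) for x, y in shots)
    print(f"{label}: bias = {bias:.2f}, spread = {spread:.2f}")

# Short barrel, well-adjusted sight: scattered but centred (accurate, not precise).
describe([(-1.5, 1.0), (1.2, -1.3), (0.4, 1.6), (-0.8, -1.1)], "accurate, not precise")

# Long barrel, mis-adjusted sight: tight group, well off-centre (precise, not accurate).
describe([(2.9, 3.0), (3.1, 3.1), (3.0, 2.9), (3.05, 3.0)], "precise, not accurate")
```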