Guest essay by Pat Frank
For going on two years now, I’ve been trying to publish a manuscript that critically assesses the reliability of climate model projections. The manuscript has been submitted twice to each of two leading climate journals and rejected both times by each, for a total of four rejections, all on the advice of nine of the ten reviewers. More on that below.
The analysis propagates climate model error through global air temperature projections, using a formalized version of the “passive warming model” (PWM) GCM emulator reported in my 2008 Skeptic article. Propagation of error through a GCM temperature projection reveals its predictive reliability.
Those interested can consult the invited poster (2.9 MB pdf) I presented at the 2013 AGU Fall Meeting in San Francisco. Error propagation is a standard way to assess the reliability of an experimental result or a model prediction. However, climate models are never assessed this way.
Here’s an illustration: the Figure below shows what happens when the average ±4 Wm-2 long-wave cloud forcing error of CMIP5 climate models [1] is propagated through a couple of Community Climate System Model 4 (CCSM4) global air temperature projections.
CCSM4 is a CMIP5-level climate model from NCAR, where Kevin Trenberth works, and was used in the IPCC AR5 of 2013. Judy Curry wrote about it here.
In panel a, the points show the CCSM4 anomaly projections of the AR5 Representative Concentration Pathways (RCP) 6.0 (green) and 8.5 (blue). The lines are the PWM emulations of the CCSM4 projections, made using the standard RCP forcings from Meinshausen. [2] The CCSM4 RCP forcings may not be identical to the Meinshausen RCP forcings. The shaded areas are the range of projections across all AR5 models (see AR5 Figure TS.15). The CCSM4 projections are in the upper range.
In panel b, the lines are the same two CCSM4 RCP projections. But now the shaded areas are the uncertainty envelopes resulting when ±4 Wm-2 CMIP5 long wave cloud forcing error is propagated through the projections in annual steps.
The uncertainty is so large because the ±4 Wm-2 annual long-wave cloud forcing error is ±114× larger than the annual average forcing increase of 0.035 Wm-2 from GHG emissions since 1979. Typical error bars for CMIP5 climate model projections are about ±14 C after 100 years and ±18 C after 150 years.
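For readers who want to check the arithmetic, here is a minimal sketch in Python. The per-step temperature uncertainty of ±1.4 C used below is not taken from the manuscript; it is simply inferred from the quoted ±14 C at 100 years, assuming a constant per-step uncertainty accumulating as root-sum-square:

```python
import math

# Ratio of the CMIP5 long-wave cloud forcing error to the annual
# average GHG forcing increase since 1979.
cloud_error = 4.0    # +/- Wm-2, annual long-wave cloud forcing error
ghg_step = 0.035     # Wm-2, annual average GHG forcing increase
ratio = cloud_error / ghg_step
print(f"error / signal = {ratio:.0f}x")     # ~114x

# A constant per-step uncertainty u propagating as root-sum-square
# grows as u * sqrt(n) after n steps.
u = 1.4              # C per step (inferred from +/-14 C at 100 years)
u100 = u * math.sqrt(100)
u150 = u * math.sqrt(150)
print(f"after 100 years: +/-{u100:.0f} C")  # +/-14 C
print(f"after 150 years: +/-{u150:.0f} C")  # ~+/-17 C, the order of the quoted +/-18 C
```

The square-root growth reproduces the quoted centennial figure exactly and lands near the 150-year figure, consistent with root-sum-square accumulation of a roughly constant annual uncertainty.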
It’s immediately clear that climate models are unable to resolve any thermal effect of greenhouse gas emissions, or to tell us anything about future air temperatures. Climate models cannot ever have resolved an anthropogenic greenhouse signal, not now and not at any time in the past.
Propagation of errors through a calculation is a simple idea. It’s logically obvious. It’s critically important. It gets pounded into every single freshman physics, chemistry, and engineering student.
And it has escaped the grasp of every single Ph.D. climate modeler I have encountered, in conversation or in review.
That brings me to the reason I’m writing here. My manuscript has been rejected four times; twice each from two high-ranking climate journals. I have responded to a total of ten reviews.
Nine of the ten reviews were clearly written by climate modelers, were uniformly negative, and recommended rejection. One reviewer was clearly not a climate modeler. That one recommended publication.
I’ve had my share of scientific debates, a couple of them not entirely amiable. My research (with colleagues) has overthrown four ‘ruling paradigms,’ so I’m familiar with how scientists behave when they’re challenged. None of that prepared me for the standards at play in climate science.
I’ll start with the conclusion and follow with the supporting evidence: never, in all my experience with peer-reviewed publishing, have I encountered such incompetence in a reviewer. Much less incompetence evidently common to an entire class of reviewers.
The shocking lack of competence I encountered convinced me that public exposure would be a corrective civic good.
Physical error analysis is critical to all of science, especially experimental physical science. It is not too much to call it central.
Result ± error tells what one knows. If the error is larger than the result, one doesn’t know anything. Geoff Sherrington has been eloquent about the hazards and trickiness of experimental error.
All of the physical sciences hew to these standards. Physical scientists are bound by them.
Climate modelers do not hew to these standards and, by their own lights, are not bound by them.
I will give examples of all of the following concerning climate modelers:
- They neither respect nor understand the distinction between accuracy and precision.
- They understand nothing of the meaning or method of propagated error.
- They think physical error bars mean the model itself is oscillating between the uncertainty extremes. (I kid you not.)
- They don’t understand the meaning of physical error.
- They don’t understand the importance of a unique result.
Bottom line? Climate modelers are not scientists. Climate modeling is not a branch of physical science. Climate modelers are unequipped to evaluate the physical reliability of their own models.
What follows is verbatim reviewer transcript, quoted in italics. Every idea below is presented as the reviewer meant it. No quote is deprived of its context, and none has been truncated into something other than what the reviewer meant.
And keep in mind that these are arguments that certain editors of certain high-ranking climate journals found persuasive.
1. Accuracy vs. Precision
The distinction between accuracy and precision is central to the argument presented in the manuscript, and is defined right in the Introduction.
The accuracy of a model is the difference between its predictions and the corresponding observations.
The precision of a model is the variance of its predictions, without reference to observations.
Physical evaluation of a model requires an accuracy metric.
There is nothing more basic to science itself than the critical distinction of accuracy from precision.
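The distinction is easy to demonstrate numerically. Here is a minimal sketch in Python, with invented numbers purely for illustration: an ensemble of model predictions can cluster tightly (high precision) while sitting far from the observation (low accuracy).

```python
import statistics

# Hypothetical ensemble of model predictions of some observable,
# and the corresponding observation (all numbers invented).
predictions = [3.10, 3.00, 2.90, 3.05, 2.95]
observation = 1.00

# Precision: spread of the predictions about their own mean,
# with no reference to the observation.
precision = statistics.stdev(predictions)

# Accuracy: root-mean-square difference between the predictions
# and the observation.
accuracy = (sum((p - observation) ** 2 for p in predictions)
            / len(predictions)) ** 0.5

print(f"precision (ensemble spread): {precision:.2f}")  # ~0.08: tight agreement
print(f"accuracy (rms error):        {accuracy:.2f}")   # ~2.00: far from observation
```

The ensemble agrees with itself to better than a tenth of a unit while being wrong by two full units. No statistic computed from the predictions alone can reveal the error.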
Here’s what climate modelers say:
“Too much of this paper consists of philosophical rants (e.g., accuracy vs. precision) …”
“[T]he author thinks that a probability distribution function (pdf) only provides information about precision and it cannot give any information about accuracy. This is wrong, and if this were true, the statisticians could resign.”
“The best way to test the errors of the GCMs is to run numerical experiments to sample the predicted effects of different parameters…”
“The author is simply asserting that uncertainties in published estimates [i.e., model precision – P] are not ‘physically valid’ [i.e., not accuracy – P]- an opinion that is not widely shared.”
Not widely shared among climate modelers, anyway.
The first reviewer actually scorned the distinction between accuracy and precision. This, from a supposed scientist.
The remainder are alternative declarations that model variance, i.e., precision, = physical accuracy.
The accuracy-precision difference was extensively documented to relevant literature in the manuscript, e.g., [3, 4].
The reviewers ignored that literature. The final reviewer dismissed it as mere assertion.
Every climate modeler reviewer who addressed the precision-accuracy question similarly failed to grasp it. I have yet to encounter one who understands it.
2. No understanding of propagated error
“The authors claim that published projections do not include ‘propagated errors’ is fundamentally flawed. It is clearly the case that the model ensemble may have structural errors that bias the projections.”
I.e., the reviewer supposes that model precision = propagated error.
“The repeated statement that no prior papers have discussed propagated error in GCM projections is simply wrong (Rogelj (2013), Murphy (2007), Rowlands (2012)).”
Let’s take the reviewer examples in order:
Rogelj (2013) concerns the economic costs of mitigation. Their Figure 1b includes a global temperature projection plus uncertainty ranges. The uncertainties, “are based on a 600-member ensemble of temperature projections for each scenario…” [5]
I.e., the reviewer supposes that model precision = propagated error.
Murphy (2007) write, “In order to sample the effects of model error, it is necessary to construct ensembles which sample plausible alternative representations of earth system processes.” [6]
I.e., the reviewer supposes that model precision = propagated error.
Rowlands (2012) write, “Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. “ and go on to state that, “Perturbed-physics ensembles offer a systematic approach to quantify uncertainty in models of the climate system response to external forcing, albeit within a given model structure.” [7]
I.e., the reviewer supposes that model precision = propagated error.
Not one of this reviewer’s examples of propagated error includes any propagated error, or even mentions propagated error.
Not only that, but not one of the examples discusses physical error at all. It’s all model precision.
This reviewer doesn’t know what propagated error is, what it means, or how to identify it. This reviewer also evidently does not know how to recognize physical error itself.
Another reviewer:
“Examples of uncertainty propagation: Stainforth, D. et al., 2005: Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature 433, 403-406.
“M. Collins, R. E. Chandler, P. M. Cox, J. M. Huthnance, J. Rougier and D. B. Stephenson, 2012: Quantifying future climate change. Nature Climate Change, 2, 403-409.”
Let’s find out. Stainforth (2005) includes three Figures; every single one of them presents error as projection variation. [8]
Here’s their Figure 1:
Original Figure Legend: “Figure 1 Frequency distributions of Tg (colours indicate density of trajectories per 0.1 K interval) through the three phases of the simulation. a, Frequency distribution of the 2,017 distinct independent simulations. b, Frequency distribution of the 414 model versions. In b, Tg is shown relative to the value at the end of the calibration phase and where initial condition ensemble members exist, their mean has been taken for each time point.”
Here’s what they say about uncertainty: “[W]e have carried out a grand ensemble (an ensemble of ensembles) exploring uncertainty in a state-of-the-art model. Uncertainty in model response is investigated using a perturbed physics ensemble in which model parameters are set to alternative values considered plausible by experts in the relevant parameterization schemes.”
There it is: uncertainty is directly represented as model variability (density of trajectories; perturbed physics ensemble).
The remaining figures in Stainforth (2005) derive from this one. Propagated error appears nowhere and is nowhere mentioned.
Reviewer supposition: model precision = propagated error.
Collins (2012) state that adjusting model parameters so that projections approach observations is enough to “hope” that a model has physical validity. Propagation of error is never mentioned. Collins Figure 3 shows physical uncertainty as model variability about an ensemble mean. [9] Here it is:
Original Legend: “Figure 3 | Global temperature anomalies. a, Global mean temperature anomalies produced using an EBM forced by historical changes in well-mixed greenhouse gases and future increases based on the A1B scenario from the Intergovernmental Panel on Climate Change’s Special Report on Emission Scenarios. The different curves are generated by varying the feedback parameter (climate sensitivity) in the EBM. b, Changes in global mean temperature at 2050 versus global mean temperature at the year 2000, … The histogram on the x axis represents an estimate of the twentieth-century warming attributable to greenhouse gases. The histogram on the y axis uses the relationship between the past and the future to obtain a projection of future changes.”
Collins 2012, part a: model variability itself; part b: model variability (precision) represented as physical uncertainty (accuracy). Propagated error? Nowhere to be found.
So, once again, not one of this reviewer’s examples of propagated error actually includes any propagated error, or even mentions propagated error.
It’s safe to conclude that these climate modelers have no concept at all of propagated error. They apparently have no concept whatever of physical error.
Every single time any of the reviewers addressed propagated error, they revealed a complete ignorance of it.
3. Error bars mean model oscillation – wherein climate modelers reveal a fatal case of naive-freshman-itis.
“To say that this error indicates that temperatures could hugely cool in response to CO2 shows that their model is unphysical.”
“[T]his analysis would predict that the models will swing ever more wildly between snowball and runaway greenhouse states.”
“Indeed if we carry such error propagation out for millennia we find that the uncertainty will eventually be larger than the absolute temperature of the Earth, a clear absurdity.”
“An entirely equivalent argument [to the error bars] would be to say (accurately) that there is a 2K range of pre-industrial absolute temperatures in GCMs, and therefore the global mean temperature is liable to jump 2K at any time – which is clearly nonsense…”
Got that? These climate modelers think that “±” error bars imply the model itself is oscillating (liable to jump) between the error bar extremes.
Or that the bars from propagated error represent physical temperature itself.
No sophomore in physics, chemistry, or engineering would make such an ignorant mistake.
But Ph.D. climate modelers invariably make it. One audience member, a climate modeler, did so verbally during the Q&A after my seminar on this analysis.
The worst of it is that both the manuscript and the supporting information document explained that error bars represent an ignorance width. Not one of these Ph.D. reviewers gave any evidence of having read any of it.
5. Unique Result – a concept unknown among climate modelers.
Do climate modelers understand the meaning and importance of a unique result?
“[L]ooking the last glacial maximum, the same models produce global mean changes of between 4 and 6 degrees colder than the pre-industrial. If the conclusions of this paper were correct, this spread (being so much smaller than the estimated errors of +/- 15 deg C) would be nothing short of miraculous.”
“In reality climate models have been tested on multicentennial time scales against paleoclimate data (see the most recent PMIP intercomparisons) and do reasonably well at simulating small Holocene climate variations, and even glacial-interglacial transitions. This is completely incompatible with the claimed results.”
“The most obvious indication that the error framework and the emulation framework presented in this manuscript is wrong is that the different GCMs with well-known different cloudiness biases (IPCC) produce quite similar results, albeit a spread in the climate sensitivities.”
Let’s look at where these reviewers get such confidence. Here’s an example, from Rowlands (2012), of what models produce. [7]
Original Legend: “Figure 1 | Evolution of uncertainties in reconstructed global-mean temperature projections under SRES A1B in the HadCM3L ensemble.” [7]
The variable black line in the middle of the group represents the observed air temperature. I added the horizontal black lines at 1 K and 3 K, and the vertical red line at year 2055. Part of the red line is in the original figure, as the precision uncertainty bar.
This Figure displays thousands of perturbed physics simulations of global air temperatures. “Perturbed physics” means that model parameters are varied across their range of physical uncertainty. Each member of the ensemble is of equivalent weight. None of them are known to be physically more correct than any of the others.
The physical energy-state of the simulated climate varies systematically across the years. The horizontal black lines show that multiple physical energy states produce the same simulated 1 K or 3 K anomaly temperature.
The vertical red line at year 2055 shows that the identical physical energy-state (the year 2055 state) produces multiple simulated air temperatures.
These wandering projections do not represent natural variability. They represent how parameter magnitudes, varied across their uncertainty ranges, affect the temperature simulations of the HadCM3L model itself.
The Figure fully demonstrates that climate models are incapable of producing a unique solution to any climate energy-state.
That means simulations close to observations are not known to accurately represent the true physical energy-state of the climate. They just happen to have opportunistically wonderful off-setting errors.
That means, in turn, the projections have no informational value. They tell us nothing about possible future air temperatures.
There is no way to know which of the simulations actually represents the correct underlying physics. Or whether any of them do. And even if one of them happens to conform to the future behavior of the climate, there’s no way to know it wasn’t a fortuitous accident.
Models with large parameter uncertainties cannot produce a unique prediction. The reviewers’ confident statements show they have no understanding of that, or of why it’s important.
Now suppose Rowlands et al. tuned the parameters of the HadCM3L model so that it precisely reproduced the observed air temperature line.
Would it mean the HadCM3L had suddenly attained the ability to produce a unique solution to the climate energy-state?
Would it mean the HadCM3L was suddenly able to reproduce the correct underlying physics?
Obviously not.
Tuned parameters merely obscure uncertainty. They hide the unreliability of the model. It is no measure of accuracy that tuned models produce similar projections, or that their projections are close to observations. Tuning parameter sets merely offsets errors and produces a false and tendentious precision.
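A toy example makes the point. In this sketch (Python, all numbers invented), two entirely different parameter sets of a trivially simple linear “model” are each tuned to reproduce the same calibration observation exactly, yet they diverge badly in projection. Agreement with observations cannot, by itself, certify that either parameter set captures the correct physics.

```python
# Toy "model": temperature anomaly as a linear response to forcing,
# T(f) = a * f + b, with tunable parameters a and b (numbers invented).
def model(a, b, f):
    return a * f + b

# Calibration observation: anomaly of 1.0 at forcing 1.0.
f_cal, t_obs = 1.0, 1.0

# Two very different parameter sets, both tuned to hit the observation.
params_A = (1.0, 0.0)   # strong response, no offset
params_B = (0.2, 0.8)   # weak response, large offset

assert model(*params_A, f_cal) == t_obs
assert model(*params_B, f_cal) == t_obs

# Projection at a larger future forcing: the tuned models diverge.
f_future = 4.0
print(model(*params_A, f_future))  # 4.0
print(model(*params_B, f_future))  # 1.6
```

Both parameter sets are perfectly “consistent with” the calibration data; neither the calibration fit nor the mutual agreement of tuned models identifies which, if either, is physically correct.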
Every single recent, Holocene, or glacial-era temperature hindcast is likewise non-unique. Not one of them validates the accuracy of a climate model. Not one of them tells us anything about any physically real global climate state. Not one single climate modeler reviewer evidenced any understanding of that basic standard of science.
Any physical scientist would (should) know this. The climate modeler reviewers uniformly do not.
6. An especially egregious example in which the petard self-hoister is unaware of the air underfoot.
Finally, I’d like to present one last example. The essay is already long, and yet another instance may be overkill.
But I decided it is better to risk reader fatigue than to leave no public record of what passes for analytical thinking among climate modelers. Apologies if it’s all become tedious.
This last truly demonstrates the abysmal understanding of error analysis at large in the ranks of climate modelers. Here we go:
“I will give (again) one simple example of why this whole exercise is a waste of time. Take a simple energy balance model, solar in, long wave out, single layer atmosphere, albedo and greenhouse effect. i.e. sigma Ts^4 = S (1-a) /(1 -lambda/2) where lambda is the atmospheric emissivity, a is the albedo (0.7), S the incident solar flux (340 W/m^2), sigma is the SB coefficient and Ts is the surface temperature (288K).
“The sensitivity of this model to an increase in lambda of 0.02 (which gives a 4 W/m2 forcing) is 1.19 deg C (assuming no feedbacks on lambda or a). The sensitivity of an erroneous model with an error in the albedo of 0.012 (which gives a 4 W/m^2 SW TOA flux error) to exactly the same forcing is 1.18 deg C.
“This the difference that a systematic bias makes to the sensitivity is two orders of magnitude less than the effect of the perturbation. The author’s equating of the response error to the bias error even in such a simple model is orders of magnitude wrong. It is exactly the same with his GCM emulator.”
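The reviewer’s numbers can in fact be reproduced. Below is a minimal sketch (Python) of the quoted energy-balance model. Note an assumption: reproducing Ts = 288 K requires reading the quoted albedo “0.7” as the absorbed fraction (1 - a), i.e., a = 0.3; the emissivity lambda is then fixed by the baseline state at about 0.78.

```python
# One-layer grey-atmosphere energy balance model as quoted:
#   sigma * Ts^4 = S * (1 - a) / (1 - lambda/2)
# Assumption: the quoted "albedo (0.7)" is evidently (1 - a), so a = 0.3.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m-2 K-4
S = 340.0                # incident solar flux, W m-2
A = 0.3                  # albedo

def surface_temp(lam, a=A):
    """Surface temperature (K) for emissivity lam and albedo a."""
    return (S * (1.0 - a) / (SIGMA * (1.0 - lam / 2.0))) ** 0.25

# Solve lambda so the baseline gives Ts = 288 K.
lam0 = 2.0 * (1.0 - S * (1.0 - A) / (SIGMA * 288.0 ** 4))

# Reviewer's first case: increase lambda by 0.02 (a ~4 W/m2 forcing).
dT_base = surface_temp(lam0 + 0.02) - surface_temp(lam0)

# Reviewer's second case: the same forcing applied to a model carrying
# a 0.012 albedo error (a ~4 W/m2 SW TOA flux error).
a_err = A + 0.012
dT_err = surface_temp(lam0 + 0.02, a_err) - surface_temp(lam0, a_err)

print(f"sensitivity, correct model: {dT_base:.2f} C")  # ~1.19
print(f"sensitivity, biased model:  {dT_err:.2f} C")   # ~1.19
print(f"difference:                 {dT_base - dT_err:.3f} C")
```

Running it reproduces the reviewer’s two near-identical single-step responses of about 1.19 C, which is exactly the point taken up below: those are single-step figures, not propagated uncertainties.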
The “difference” the reviewer is talking about is 1.19 C – 1.18 C = 0.01 C. The reviewer supposes that this 0.01 C is the entire uncertainty produced by the model due to a 4 Wm-2 offset error in either albedo or emissivity.
But it’s not.
First reviewer mistake: If 1.19 C or 1.18 C are produced by a 4 Wm-2 offset forcing error, then 1.19 C or 1.18 C are offset temperature errors. Not sensitivities. Their tiny difference, if anything, confirms the error magnitude.
Second mistake: The reviewer doesn’t know the difference between an offset error (a statistic) and temperature (a thermodynamic magnitude). The reviewer’s “sensitivity” is actually “error.”
Third mistake: The reviewer equates a 4 W/m2 energetic perturbation to a ±4 W/m2 physical error statistic.
This mistake, by the way, again shows that the reviewer doesn’t know to distinguish between a physical magnitude and an error statistic.
Fourth mistake: The reviewer compares a single step “sensitivity” calculation to multi-step propagated error.
Fifth mistake: The reviewer is apparently unfamiliar with the generality that physical uncertainties express a bounded range of ignorance; i.e., “±” about some value. Uncertainties are never constant offsets.
Lemma to five: the reviewer apparently also does not know that the correct way to express the uncertainties is ±lambda or ±albedo.
But then, inconveniently for the reviewer, if the uncertainties are correctly expressed, the prescribed uncertainty is ±4 W/m2 in forcing. The uncertainty is then obviously an error statistic and not an energetic malapropism.
For those confused by this distinction, no energetic perturbation can be simultaneously positive and negative. Earth to modelers, over. . .
When the reviewer’s example is expressed using the correct ± statistical notation, 1.19 C and 1.18 C become ±1.19 C and ±1.18 C.
And these are uncertainties for a single step calculation. They are in the same ballpark as the single-step uncertainties presented in the manuscript.
As soon as the reviewer’s forcing uncertainty enters into a multi-step linear extrapolation, i.e., a GCM projection, the ±1.19 C and ±1.18 C uncertainties would appear in every step, and must then propagate through the steps as the root-sum-square. [3, 10]
After 100 steps (a centennial projection) ±1.18 C per step propagates to ±11.8 C.
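A minimal sketch of that root-sum-square accumulation, assuming the same ±1.18 C uncertainty entering at every annual step [3, 10]:

```python
import math

def propagate_rss(per_step_uncertainty, n_steps):
    """Root-sum-square propagation of a constant per-step uncertainty;
    equivalent to per_step_uncertainty * sqrt(n_steps)."""
    return math.sqrt(sum(per_step_uncertainty ** 2 for _ in range(n_steps)))

# +/-1.18 C entering at each annual step of a centennial (100-step) projection:
print(f"+/-{propagate_rss(1.18, 100):.1f} C")  # prints +/-11.8 C
```

With a constant per-step term the sum collapses to u·√n, which is why 100 steps multiplies the ±1.18 C single-step uncertainty by exactly ten.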
So, correctly done, the reviewer’s own analysis validates the very manuscript that the reviewer called a “waste of time.” Good job, that.
This reviewer:
- doesn’t know the meaning of physical uncertainty.
- doesn’t distinguish between model response (sensitivity) and model error. This mistake amounts to not knowing to distinguish between an energetic perturbation and a physical error statistic.
- doesn’t know how to express a physical uncertainty.
- and doesn’t know the difference between single step error and propagated error.
So, once again, climate modelers:
- neither respect nor understand the distinction between accuracy and precision.
- are entirely ignorant of propagated error.
- think the ± bars of propagated error mean the model itself is oscillating.
- have no understanding of physical error.
- have no understanding of the importance or meaning of a unique result.
No working physical scientist would fall for any one of those mistakes, much less all of them. But climate modelers do.
And this long essay does not exhaust the multitude of really basic mistakes in scientific thinking these reviewers made.
Apparently, such thinking is critically convincing to certain journal editors.
Given all this, one can understand why climate science has fallen into such a sorry state. Without the constraint of observational physics, it’s open season on finding significations wherever one likes and granting indulgence in science to the loopy academic theorizing so rife in the humanities. [11]
When mere internal precision and fuzzy axiomatics rule a field, terms like consistent with, implies, might, could, possible, likely, carry definitive weight. All are freely available and attachable to pretty much whatever strikes one’s fancy. Just construct your argument to be consistent with the consensus. This is known to happen regularly in climate studies, with special mentions here, here, and here.
One detects an explanation for why political sentimentalists like Naomi Oreskes and Naomi Klein find climate alarm so homey. It is so very opportune to polemics and mindless righteousness. (What is it about people named Naomi, anyway? Are there any tough-minded skeptical Naomis out there? Post here. Let us know.)
In their rejection of accuracy and fixation on precision, climate modelers have sealed their field away from the ruthless indifference of physical evidence, thereby short-circuiting the critical judgment of science.
Climate modeling has left science. It has become a liberal art expressed in mathematics. Call it equationized loopiness.
The inescapable conclusion is that climate modelers are not scientists. They don’t think like scientists, they are not doing science. They have no idea how to evaluate the physical validity of their own models.
They should be nowhere near important discussions or decisions concerning science-based social or civil policies.
References:
1. Lauer, A. and K. Hamilton, Simulating Clouds with Global Climate Models: A Comparison of CMIP5 Results with CMIP3 and Satellite Data. J. Climate, 2013. 26(11): p. 3823-3845.
2. Meinshausen, M., et al., The RCP greenhouse gas concentrations and their extensions from 1765 to 2300. Climatic Change, 2011. 109(1-2): p. 213-241.
The PWM coefficients for the CCSM4 emulations were: RCP 6.0, fCO₂ = 0.644, a = 22.76 C; RCP 8.5, fCO₂ = 0.651, a = 23.10 C.
3. JCGM, Evaluation of measurement data — Guide to the expression of uncertainty in measurement. 100:2008, Bureau International des Poids et Mesures: Sevres, France.
4. Roy, C.J. and W.L. Oberkampf, A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Engineer., 2011. 200(25-28): p. 2131-2144.
5. Rogelj, J., et al., Probabilistic cost estimates for climate change mitigation. Nature, 2013. 493(7430): p. 79-83.
6. Murphy, J.M., et al., A methodology for probabilistic predictions of regional climate change from perturbed physics ensembles. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2007. 365(1857): p. 1993-2028.
7. Rowlands, D.J., et al., Broad range of 2050 warming from an observationally constrained large climate model ensemble. Nature Geosci, 2012. 5(4): p. 256-260.
8. Stainforth, D.A., et al., Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature, 2005. 433(7024): p. 403-406.
9. Collins, M., et al., Quantifying future climate change. Nature Clim. Change, 2012. 2(6): p. 403-409.
10. Bevington, P.R. and D.K. Robinson, Data Reduction and Error Analysis for the Physical Sciences. 3rd ed. 2003, Boston: McGraw-Hill. 320.
11. Gross, P.R. and N. Levitt, Higher Superstition: The Academic Left and its Quarrels with Science. 1994, Baltimore, MD: Johns Hopkins University Press. May be the most intellectually enjoyable book, ever.
“For going on two years now, I’ve been trying to publish a manuscript that critically assesses the reliability of climate model projections”
Dana Nuccitelli says models do a good job.
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2015/feb/23/climatology-versus-pseudoscience-new-book-checks-whose-predictions-have-been-right
The Guardian – enough said
Not one of their journalists, as far as I can see, has a degree relevant to climate. The closest is Moonbat, who at least has a science degree, but I hardly count a zoologist as qualified to comment on atmospheric physics or renewable energy.
But does that stop them attacking us sceptics, who overwhelmingly have these qualifications? Take our host Anthony Watts: clearly qualified to speak about climate and the legitimate scientific dispute, producing the world-class, well-researched articles that make him the mainstream media in this area. And who attacks him? The uneducated, scientifically illiterate, cut-and-paste “journalists” of the Guardian.
Dana Nuccitelli — too much said.
Grauniad, you mean.
Dana Nuccitelli is a BS artist promoted well above his pay grade. That the Guardian has handed itself over to him and Bob ‘fast fingers’ Ward to promote their egos and their paymasters’ financial outlooks is a shame, but not unexpected, as it has made clear for years that only unquestioning support of ‘the cause’ is an acceptable stance. It tried to avoid even covering Climategate, and only did so when it became clear other newspapers would. It has always been an oddity of the paper that when it gets obsessed with a subject, like AGW, it tends to take an absolute stance, and the quality of its coverage goes downhill the more it covers it.
Actually, Nuccitelli has long reached ‘his level of incompetence’ (Peter Principle). The only problem is he is not smart enough to know it.
Slywolfe
You are either a moron or a teenager (probably means the same at the moment). As a teenager you have a chance to grow out of it. As a moron you have no hope.
I think he forgot “sarc”
Paul Homewood
Babies are like that
Dana Nuccitelli? LOL. That’s all. LOL.
I have read enough Dana articles in the guardian to know he writes like a scientific idiot who is trying to forge a new career in polemics.
First, Anthony, thank-you very much for posting my essay about climate modelers. I am grateful for the opportunity.
Next, Slywolfe, if you understand the first figure of the essay, or the fourth, or the linked poster, you’ll know that climate models can’t make any predictions at all and so, ipso facto, can not “do a good job.” Unless making not-predictions is their job.
Crediting your credit, Dana doesn’t know what he’s talking about. And, as regards climate futures, neither does anyone else.
Pat,
Thanks for generating a very worthwhile discussion of the GCM failures, and for allowing WUWT readers a “peer review” of the sorry state of climate science manuscript peer-reviewing. Bob Tisdale and Christopher Monckton (as you may be aware) regularly update WUWT readers on GCM external failures. Your elucidation of the internal reasons for those GCM failures (along with RGBatDuke, Ferdburple, Jimbo, and many others) is very much appreciated.
I understood most of what you presented and took away a very important refresher lesson on the importance of a “unique result” in any science-based model. I also remember that, some months back, someone at WUWT posted a comment that the GCM initializations used a single value for the enthalpy of evaporation, that of 4º C water, instead of the value for 26º C water typical of most tropical waters. They mentioned that the evaporation enthalpy error would propagate through the hundreds of iterations of the GCMs, compounding until nothing was left but essentially a random noise signal. That made me realize that the GCMs of the IPCC are total crap, built with circular logic to deliver a politically-desired output.
Joel O’Bryan, PhD
Thanks, Joel. I’d never have thought of that water enthalpy error. One expects if all the physical errors of climate models were documented, their propagation would produce a centennial uncertainty envelope of approximately the size of North America.
Pat Frank
I’ll make it ultra-simple for you: Predicting the future (anything) is very difficult for humans. One might as well flip a coin.
The IPCC Report Summary is leftist personal opinions formatted to look like a real scientific study.
As you can see from the formerly beloved Mann Hockey Stick chart, ‘predicting the past’ is just as difficult for the “climate astrologers” as predicting the future.
.
It’s a climate change cult: a secular religion for people who reject traditional religions.
.
The coming global warming catastrophe scam is 99% politics and 1% science.
.
You can not debate a cult using data, logic and facts any more than you can debate the existence of god with a Baptist.
.
The long list of environmental boogeymen started with DDT in the 1960s, and as each new boogeyman lost its ability to scare people, a new boogeyman was created, and the old one was immediately forgotten.
If we are lucky, and it seems that we have been for two years so far, it will remain cold enough that the average person begins to doubt the coming global warming catastrophe predictions. Thank you, Mr. Sun and Mrs. Cosmic Rays, for riling up the leftists so they reveal their true bad character with harsh attacks on scientists who do not deserve them.
Richard, I don’t disagree with your general point.
But consider that Maxwell’s equations do a darn good job predicting the future behavior of emitted electromagnetic waves. And Newton’s theory does a good job at predicting the future positions of the planets — at least out to a billion years or so. In my field, QM does a pretty good job of predicting the details of x-ray absorption spectra before any measurement.
So, physical science has a good array of predictive theories. Climate modelers have managed to convince people that they can predict future climate to high resolution. Their claim is supported only by the abandonment of standard scientific practice. Abandonment not just in climatology, but by august bodies such as the Royal Society and the American Physical Society.
In a way the modelers themselves are innocents, because my experience shows they’re not trained physical scientists at all. They couldn’t have abandoned a method they never knew or understood. The true fault lies with the physical scientists, especially the APS, who let climate modelers get away with their ignorance and scientific incompetence.
I agree with you that AGW alarm has been seized upon by progressives as their politically opportune proof positive that capitalism is inherently evil. The history of the 20th century has shown that their preferred alternative is manifestly monstrous. But as committed ideological totalitarians, finding a moral position in lying, cheating, and stealing to get their utopian way, remediative introspection has never been a progressive strong suit.
His article didn’t pass peer review. Why is it even being discussed past that?
Discussed for reasons noted in the head post, Chris. Or do you find the reviewer arguments convincing?
Dana Nuccitelli thinks models do a good job. That is irrelevant. Models will never succeed in simulating the climate system and the reasoning is brilliantly presented in the following video:
Very nice. I think the most interesting part, the part that really brought it home, was right at the end, when talking about time-scales.
So, for very complex problems, the computer would still give us GIGO: God’s-truth In, Garbage out.
Regarding models, Willie Soon says GIGO = Garbage In, Gospel Out. 🙂
Brilliant. Well done.
Stop wasting your time with “climate journals”. They continue their gate-keeping while your message is being missed in the climate policy debate. Why not try publishing your paper in a journal where peer review is not redefined in order to protect the AGW-hypothesis and the climate model industry?
“Why not try publishing your paper in a journal where peer review is not redefined in order to protect the AGW-hypothesis and the climate model industry?”
Try finding one,….the cancer is well established.
It’s called a closed shop! If you are not part of the “doomsday warming” union then you are blacklisted, blackballed and if that doesn’t work blackmailed.
This should be published in a statistics journal.
“Stop wasting your time with “climate journals””
He did a tremendous amount of authentic work on this manuscript. I’m guessing that he was confident it would be recognized as such by anybody resembling an authority or expert, and would be most relevant in a climate journal, even knowing the bias that exists.
Pat Frank,
I appreciate you taking the time to share this with us. It’s extraordinarily enlightening.
“Tuned parameters merely obscure uncertainty. They hide the unreliability of the model. It is no measure of accuracy that tuned models produce similar projections. Or that their projections are close to observations. Tuning parameter sets merely off-sets errors and produces a false and tendentious precision.”
Back in college, in a synoptic meteorology lab class, we had to create a simple weather model. I was sort of overwhelmed by all the mathematical equations and some of the other stuff but did manage to get my model to work………..by repetitive trial and error.
I had little confidence that the equations/parameters were the best ones to represent the atmosphere, but I knew that continual tweaking produced the changes that moved my model in the right direction until it finally showed what it was supposed to show, the result we needed to pass.
I did think that in my early naive days, Mike. But no more. I didn’t realize that climate modelers don’t know the first thing about assessing accuracy. Plus I didn’t know that many editors apparently lack the courage to publish anything truly controversial in a poisonous atmosphere such as has been deliberately created in climate science.
Also, thank-you for your kind words. Your description of constructing a weather model sounds like a good experience. First, you learned to do it, second you overcame your fears, and third you gained a critical perception of models. Nothing to feel tentative about.
I worried about that very problem, Pethefin. So my first submission was to the journal Risk Analysis. After three weeks of silence, the manuscript editor came back and told me that the paper was not appropriate to the journal. So he declined to even send it out for review.
This decision was backed up by the chief editor who, in essence, said that the analysis would be of limited interest to their audience. That is, a paper showing there’s no knowable greenhouse warming risk is of small interest to risk analysis professionals. Incredible, but not worth arguing. I suspect they were relieved to have dodged a political bullet.
Dana’n’John should be busy investigating if modelling TC Marcia as a Cat 5 is valid:
23 Feb: Joannenova: Category Five storms aren’t what they used to be
The 295 km/hr wind speed was repeated on media all over the world, but how was it measured? Not with any anemometer apparently — it was modeled. If the BOM is describing a Cat 2 or 3 as a “Cat 5″, that’s a pretty serious allegation. Is the weather bureau “homogenising” wind speeds between stations?…
http://joannenova.com.au/2015/02/category-five-storms-arent-what-they-used-to-be/#comment-1684713
Australian CAGW sceptics have made the MSM, and the BOM has admitted no Cat 5 cyclone passed over Queensland. Not that most of the MSM and the public have grasped this fact as yet, such has been the hysteria:
24 Feb: Courier-Mail: Climate researcher questions Cyclone Marcia’s category 5 status
Jennifer Morohasy said the bureau had used computer modelling rather than early readings from weather stations to determine that Marcia was a category 5 cyclone, not a category 3…
Systems Engineering Australia principal Bruce Harper, a modelling and risk assessment consultant who analyses cyclones, said it was often difficult to determine whether a storm was a marginal 3, 4 or 5.
What was important was that after the bureau conducted its post-storm analysis, it told people that they experienced category 3 impacts as it passed over the land.
It was dangerous for residents to be thinking they had survived a category 5 when it was a storm that degraded quickly…
http://www.couriermail.com.au/news/queensland/climate-researcher-questions-cyclone-marcias-category-5-status/story-fnkt21jb-1227236188297
But in folk memory this will be a Cat 5 Typhoon from now until the end of time. That’s how the propagandists of CAGW work, shout exaggerated claims from the rooftops, by the time they withdraw the claim the MSM have moved on.
Pat,
Jen is in the Courier again this morning.
http://www.couriermail.com.au/news/queensland/views-on-global-warming-led-to-weather-bureau-staff-exaggerating-strength-of-cyclone-marcia-claims-scientist/story-fnkt21jb-1227237685342
Did Marcia blow in some winds of change as well?
Did you read the comments after that report? Jen Morohasy got absolutely lambasted ad hominem.
Amateur statisticians?
Oh, they’re scientists. Just lousy scientists. Doesn’t take much to get a science degree these days. As becomes evident from the quality of the papers being routinely published in this field.
Agree. Memorise some shit, suck up to your tutor and graduate. Then get a job at Bank of America where they don’t care about your qualifications as long as you graduated in something. It’s the same in most industries.
Very good, Alex. I would like to add a little to your observation. There are a few who are good at memorizing some shit but too dumb to realize that they should be working at Bank of America. They get hired by big businesses that have a very difficult time firing people who don’t have the abilities their diplomas say they should have.
Will –
Is a scientist one who has a science degree of some sort or is a scientist one who practices the scientific method?
This struck a chord with me. One of the best electronics engineers I ever knew (20 yrs USAF as engineer/project mgr + 26 yrs helping integrate hardware onto the ISS) had no degree whatsoever. He was entirely self-taught. He couldn’t get promoted beyond the grade he had when our company assumed a contract, because ‘company policy’ said you HAD to have a degree in ‘math, science, engineering or a related field’ to be an engineer. He taught me more about networks, software, hardware and how to solve integration problems than any of my four degrees. So, to answer the question: it depends on who you ask. Ask the journals or most academics, and it’s the degree that makes you a scientist. Ask anyone in the real world, and it’s your work that defines which category you should be placed in.
Based on the latter criterion, anyone publishing results that don’t match reality is something, but it ain’t a scientist!
It’s the latter, John. But you knew that. 🙂
Nobody gives a rat’s arse if you’re following the scientific method or not. Friedrich Kekulé discovered the structure of benzene after dreaming of a snake coiled and biting its tail. Did he follow the scientific method?
Not following the scientific method only becomes a problem when people later find out that your research was a useless waste of time.
A scientist is a person with common sense who is very skeptical about every conclusion (hypothesis) presented by scientists, including his own conclusions. A degree is not relevant — the quality of his scientific work determines whether he deserves to be called a “scientist”.
.
Predicting the future with computer games, has nothing to do with science.
A scientist would never focus on ONLY one variable, CO2, probably a very minor variable with no correlation with average temperature, when there are dozens of variables affecting Earth’s climate … and then further focus only on manmade CO2, for political reasons (only that 3% of all atmospheric CO2 can be blamed on humans … which is the goal of climate modelers … along with getting more government grants.)
But Big Government, which wants a “crisis” that must be “solved” by increasing government power over the private sector, could not possibly influence scientists getting government grants and/or salaries, and of course such funding NEVER has to be disclosed as part of an article, white paper or other report by any scientist on the goobermint dole.
Perhaps grade inflation and lowered standards have combined with the world’s greatest fooling machine (computer+software) over the years to make peeple stoopids. It’s much easier these days to be a fraud and incompetent.
They aren’t lousy scientists, Will. They exhibit no evidence of being scientists at all.
Unfortunately, with our current “everyone goes to college” mentality, the gifted scientists have been crowded out by an army of degreed morons.
“Are Climate Modelers Scientists?”
No they are just gamers
If the Met Office put climate-model terminals in arcades around the UK, their high scores would be beaten inside a week.
But if games reacted that poorly to input from the player and displayed a gaming “world” that was that whacked out from a “real” world… no one would buy or play them. You would need government to step in and mandate that everyone buy and force everyone to play those games… oh, we’re doing that now… never mind.
Good point, CP. I think of climate modeling, as presently done, as video game science. It’s like trying to understand the physics of explosions by studying the hotel lobby explosion scene in “The Matrix.”
Well they sure as hell aren’t statisticians!
Pat, I appreciate the effort and, without getting into the merits of your work, rejection is part and parcel of academic/scientific publishing. An author who has not been rejected many more than four times is an author who is either a genius or a hustler. My advice is that you keep on trying. Don’t let the malice of incompetent reviewers get in your way.
If it was about science, then he would get accepted. The simple fact is that this is motivated rejection of the science by those with a self-interest in keeping doomsday warming alive on life support.
In another world, perhaps. In this one, academic/scientific publication requires unusual persistence no matter the content, no matter the field, no matter the venue.
You need to have some half-dozen major publications before experience allows you to bypass, with some reliability, the entry-level hurdles that knock most rookies off.
Thanks for the encouragement, Brute. I’ve worked through my share of rejections. These last have been set apart as unique by the uniform incompetence of the reviews. I do plan to keep trying.
Pat Frank.
Help me out here, if I may ask a favor. I know 3D finite element analysis, have used it, have worked statistical control, have worked metrology, have worked reactor neutron-flux curves and their shapes as the control rods are driven in and out at various levels of various poisons after shutdown at various times, have criticized answers (approximations) of stress-strain colored images from such models, and have worked in fluid dynamics problems with the solutions (approximations to the solutions) coming from such models. Fine. I know parts and pieces of the field fairly well. Others always should know more about their specialties.
In the context of the criticism of your paper, and of the problems and failures in global circulation models (now being called global climate models, by the way!), explain the different errors the global warming simulators are making, and their different assumptions about those errors and their error margins, using this example.
I need to calculate the value of (e/pi)^10001.0001. Assume I set this problem up the way the climate scientists have.
If I ran this problem using 2.7 / (22/7), what error am I making? Is climate science making this kind of error, without knowing it, by simply using too many approximations of real-world variables (albedos, transmission losses, cloud reflections, and everything else) that are NOT simple one-point constants?
If I ran this problem once using an exponent of 10002.0002, would I be duplicating their error? If not, what error am I making?
If I ran this problem changing the accuracy of “pi” every time by 0.0001 percent, am I not propagating that error through every subsequent multiplication?
If I ran it 4000 times using 10002.0001, would I be more accurate (in their minds) even though I would never get the right answer?
If they ran this problem 300,000 times on a supercomputer using a different algorithm for both constants every time, could they use the average of the random errors of their results to (a) get a more accurate answer or (b) just be displaying random errors in their generation sequence of both “constants”?
If they ran this problem using a program that printed “40000.00001” every time, would today’s climate scientists claim they had greater accuracy than my sliderule?
I know absolutely that many FEA runs using exact “perfect” data on a “perfect” crystal or pure piece of metal machined exactly per the model dimensions under loads exactly as described by the modeled equations will yield (on average) results similar to the average of many model runs. Each model run under those circumstances “should” be exact and perfect, but each will be a bit different even in the ideal case of a simple stress-strain issue. But, is this what forms the CAGW “religion” ? A belief that they have described the problem exactly and perfectly so every run using the same “core equations” as its kernel can be averaged into today’s world?
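The 2.7/(22/7) question can be put to a quick numerical check (a sketch of my own, not from the head post or the comments): even a sub-one-percent error in the base compounds through every multiplication implied by the exponent.

```python
import math

exact_base = math.e / math.pi        # the "true" constants
approx_base = 2.7 / (22 / 7)         # the rough approximations from the question
N = 10001.0001

# The per-step relative error in the base is under one percent...
base_rel_err = abs(approx_base - exact_base) / exact_base

# ...but it compounds once per multiplication. Work in logs, because
# (e/pi)^N itself underflows ordinary floating point.
log_ratio = N * math.log(approx_base / exact_base)

print(f"relative error in the base: {base_rel_err:.3%}")
print(f"final result is off by a factor of e^({log_ratio:.0f})")
```

A fraction-of-a-percent error in the base leaves the final result wrong by dozens of orders of magnitude, which is the accuracy-compounding point of the question.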
RACook, the 2.7/(22/7) case would be an accuracy problem, increasing the error with every step. I have no specifications on the source of climate model error. I just assessed the error and calculated its consequence.
The FEA models you describe seem to be engineering models. These — the result of detailed physical experiments — accurately describe the behavior of their object within the model bounds. Outside those bounds is dangerous ground, as I’m sure you know.
Climate models are like engineering models. They can be made to describe the behavior of elements of the climate within the time bounds where tuning data exist. However, they’re being used to project behavior well outside those bounds. The claim is then made that they do this accurately, and that’s the problem.
“The FEA models you describe seem to be engineering models. These — the result of detailed physical experiments — accurately describe the behavior of their object within the model bounds.”
And those model bounds have to be obtained in the real world through control of critical parameters such as maximum defect size and quantity and alloy material composition including unwanted contaminants that will reduce predicted performance. It is also necessary to understand to some degree how the defects will propagate under stress. To help insure this, intensive physical inspection and testing are usually incorporated throughout the manufacturing process.
There are so many parameters with so little accurate understanding of function covering such large areas of many differing kinds of surface, that there is no way possible to even begin to simulate anything similar with climate models.
You’ve got it, BFL. Climate modelers abuse the process.
Whew. +18C or -15C ! That really would be Climate Change™
Luckily, Gaia and her negative feedback mechanisms are smarter than all the climate scientists and their models put together.
Loved this characterisation : “a liberal art expressed in mathematics”
It is only so small because the Stefan-Boltzmann law puts limits on how hot or cold the planet can get given constant insolation. If the error propagation continued unresisted by that (and by absolute zero at 0 K), it would be +/- thousands of degrees C in 2100.
Hi Stefan — the (+/-) uncertainties are not temperatures. They are an ignorance width. When they become (+/-)15 C large, they just mean that the projection can’t tell us anything at all about the state of the future climate.
Pat,
I was puzzled by how your uncertainty ranges increased without bounds when there’s no time-aspect in your equations. But I think I figured it out.
It looks like you additively increase the cloud forcing uncertainty at each timestep. You compute a new cloud forcing at the present timestep, and add it to the last.
IIUC, this means that you’re not treating this +/- 4 as the uncertainty in the cloud forcing, but uncertainty in the change of the cloud forcing. In other words, your equations act as if the change in forcing from one year to the next must be within +/- 4 W/m2. Propagating this through allows the actual cloud forcing uncertainty in your equation to grow without bounds.
Compare this to the actual cloud forcing (in W/m2): what you’re using is a completely different metric, W/m2/year. These two metrics are as different as speed and location. Uncertainty in the derivative of forcing is verrrrry different from uncertainty in the forcing itself.
If the actual cloud forcing uncertainty is between +/- 4 W/m2, then that range is fixed. It doesn’t change, it doesn’t increase without end. It already represents the entire range of cloud forcing uncertainty.
And this is why your model produces nonsensical results. No, actual cloud forcing cannot grow or fall without bounds. You already established the actual cloud forcing uncertainty: +/- 4 W/m2. The cloud forcing at any given timestep should be within these bounds.
Windchaser, there is an implied time aspect in the equation, found in the change in forcing over the time of the projection.
Error is propagated through a linear sum as the root-sum-square. That is the standard method.
I do not compute any new cloud forcings. I merely propagate the global annual average long-wave cloud forcing error made by CMIP5 climate models, in annual steps through a projection.
Sorry to say, YDUC. I am treating the (+/-)4 Wm^-2 as an error. It is injected into every annual modeled time step. The reason it is injected is that it is an error made by the models themselves; that is, it is intrinsic theory-bias error.
Every annual initiating state has a cloud forcing error, which is delivered to the start of the simulation of the subsequent state. The model makes a further long wave cloud forcing error when simulating that subsequent state. This sequence of error in, more error out is repeated with every step. Error is necessarily step-wise compounded.
However, we don’t know the magnitude of the error, because the simulated states lie in the future. But we can project the uncertainty by propagating the known average error. That’s what I’ve done.
Your “In other words, …” statement is not correct. Error in forcing is propagated, not the change in forcing. Every step in a simulation simulates the entire climate, including the cloud forcing. Whatever the change in cloud forcing, the average error in the total long wave cloud forcing is (+/-)4 Wm^-2. Every time.
It’s not the error in the derivative of forcing. It’s the error in the forcing itself.
You’re right that the statistical dimension is W/m^2/year. But the annual change in GHG forcing is also W/m^2/year.
Windchaser, you’re clearly an intelligent guy. Let me try and explain this. When there is an average annual (+/-)4 Wm^-2 error in long wave cloud forcing, it means the available energy is not correctly partitioned among the climate substates.
This means that one is not simulating the correct climate, for that total energy state. That incorrect climate is then projected forward, but projected incorrectly relative to its particular and incorrect energy sub-states because the error derives from theory-bias.
So an already incorrect climate state is further projected incorrectly into the next step.
The uncertainty envelope describes the increasing lack of knowledge one has concerning the position of the simulated climate in its phase-space relative to the position of the physically correct climate. That lack of knowledge becomes worse and worse as the number of simulation steps increases, because of the unceasing injection and projection of error.
The uncertainty grows without bound, because it is not a physical quantity. It is an ignorance width. When the width becomes very large, it means the simulation no longer has any knowable information about the physically true climate state.
Such results are not nonsensical. They are cautionary; or should be.
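The step-wise compounding described above reduces to simple arithmetic. A minimal sketch (mine, not Pat’s code; it stays in W/m^2, whereas the head-post figure further converts the envelope to Celsius through the PWM emulator):

```python
import math

SIGMA_STEP = 4.0  # W/m^2: average annual CMIP5 LW cloud forcing error (Lauer & Hamilton)

def uncertainty_after(n_steps, sigma=SIGMA_STEP):
    """Root-sum-square of n identical, independent per-step errors: sigma * sqrt(n)."""
    return math.sqrt(sum(sigma ** 2 for _ in range(n_steps)))

# The ignorance width grows with the number of annual steps, without bound.
for years in (1, 10, 50, 100):
    print(f"after {years:3d} annual steps: +/- {uncertainty_after(years):.1f} W/m^2")
```

Because the same per-step error enters every annual step, the envelope widens as the square root of the number of steps and has no upper limit, which is exactly the “ignorance width” behavior at issue in this exchange.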
Error is propagated through a linear sum as the root-sum-square. That is the standard method.
Root-sum-square is the standard method for combining independent sources of error. For instance, let’s say I move in a straight line, twice. The first time, I measure that I have moved 100 m, +/- 10 m. The second time, 200m, +/- 5m.
The final error will be the root-sum-square of the previous, independent errors: the square root of (5*5 + 10*10). This is because each measurement and its error are independent.
Or, here’s another example: say I am traveling at 10 +/- 1 meters per second. At each timestep, no matter how long this continues, the error in my velocity remains the same, +/- 1. However, the error in the *distance* I’ve traveled grows as the root-sum-square, because the error occurring in distance traveled at each timestep is independent of the error at any other timestep. Each second has its own error in distance traveled, of +/- 1 m.
After 1 second, I travel 10 meters. The error is +/- 1m. After 1 more second, I travel another 10 +/- 1m, so now I have travelled 20 meters, +/- 1.41m. After another second, I’ve traveled 30 meters, +/- 1.73m. Etc.
No matter how many seconds have gone by, my speed and its error are the same. 10 +/- 1. But the error in the distance grows with time, as [sqrt(t)*1].
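The sqrt(t) growth in that speed example can be checked numerically. A quick Monte Carlo sketch (the trial count and Gaussian error model are my own assumptions, added for illustration):

```python
import random
import statistics

random.seed(42)
T = 100        # seconds of travel
TRIALS = 5000  # independent repetitions of the whole journey

# Each second covers 10 +/- 1 m, with the error independent from second to second.
final_distances = []
for _ in range(TRIALS):
    final_distances.append(sum(random.gauss(10.0, 1.0) for _ in range(T)))

sd = statistics.stdev(final_distances)
print(f"spread after {T} s: {sd:.2f} m (theory: sqrt({T}) * 1 = {T ** 0.5:.2f} m)")
```

The empirical spread of the final distance comes out close to sqrt(100) * 1 = 10 m, matching the root-sum-square growth described in the comment.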
Similarly, if you determine an error in the overall cloud forcing, that error stays fixed from one timestep to the next. If this forcing is (f0 +/- 4) at the beginning, it should be (f0 +/- 4) at every timestep. Without some explicit relation to time, the errors do not propagate forward through time in the manner that you describe.
On the other hand, if this error, +/- 4, represented the derivative of the cloud forcing with respect to time, then, yes, the total cloud forcing uncertainty would grow over time and without bound, just like how, in the example above, the uncertainty in the derivative of distance-traveled caused the uncertainty in distance-traveled to grow over time without bound.
For verification, please refer to Bevington and Robinson, or to Larsen and Marx, or to whatever other book on statistics and errors that you prefer. But unless I’m really missing something, the uncertainty you calculated has no relationship to time, so it cannot propagate forward through time like you describe.
windchaser, when you consulted Bevington and Robinson, or whatever, you found no time-dependence in the equations for propagating error.
All that’s required is a step-wise sequential calculation yielding some additive final result, with some error in every step. The error then propagates through the steps into the final result.
Those conditions are met in a projection of air temperature. The calculation is step-wise. The final state and each intermediate state is a linear sum of prior calculated terms. Each term has an associated error. Propagated error follows. Uncertainty grows with the number of steps.
No “explicit relation to time” is required.
You wrote, “No matter how many seconds have gone by, my speed and its error are the same. 10 +/- 1. But the error in the distance grows with time, as [sqrt(t)*1].”
Actually, your uncertainty grows with distance traveled. It’s right there in your own description. Time doesn’t enter your calculation at all.
I’ve told you the origin of the (+/-)4 Wm^-2 error term. You can find it in Lauer and Hamilton, my reference [1].
It is the average long-wave cloud forcing error derived from comparing against observations, 20 years of hindcasts made by 26 CMIP5 models.
The (+/-)4 Wm^-2 is not the derivative of cloud forcing with time. It’s the average annual long wave cloud forcing error in the simulated total cloud fraction.
In every simulation step, every simulated annual total cloud fraction will produce an incorrect LW cloud forcing, which will be (+/-)4 Wm^-2 further divergent from the prior, already incorrectly simulated LW cloud forcing. As such, the error will be present in every single step of a step-wise projection. Such error must propagate, and the uncertainty in air temperature must grow with the number of simulation steps.
Your analysis is not at all relevant, windchaser.
Climate modelers can be either scientists and get down to the hard gritty business of physics, or they can be gamers. But they can’t be gamers and pretend to be scientists.
All that’s required is a step-wise sequential calculation yielding some additive final result, with some error in every step. The error then propagates through the steps into the final result.
Nope. Your units are wrong for this to be passed forward through time. Nor does that make intuitive sense.
Actually, your uncertainty grows with distance traveled. It’s right there in your own description. Time doesn’t enter your calculation at all.
/shrug. Same difference. We have a relationship between time and distance travelled, and the uncertainty is in that relationship. So as either time or distance travelled grows, so does the uncertainty.
Where is your uncertainty here? Just in the cloud forcing, no? It’s not in the cloud forcing’s relationship to other forcings, nor is it in some relationship to time. So the total uncertainty does not grow. It cannot.
Pull the equations out of Bevington and Robinson, if you like: you’ll notice that they discuss uncertainties in terms of differentials. Here, that would be the timestep, since that’s what you’re compounding it over, which means that your uncertainty must be in terms of time. With no differential / no relationship to time, there’s no multiple, independent uncertainties to perform a root-sum-square on.
The uncertainty you provide is fixed; it doesn’t change with respect to anything else. So how can you possibly compound it?
Your units are just wrong, which means your math is wrong.
In every simulation step, every simulated annual total cloud fraction will produce an incorrect LW cloud forcing, which will be (+/-)4 Wm^-2 further divergent from the prior also and already incorrectly simulated prior LW cloud forcing.
No. Again, this is nonsensical: you’re saying that if we stepped through time half as quickly, say at 0.5 years instead of 1 year, then the uncertainty would grow twice as quickly. This is nonsense: uncertainty propagation does not depend on the size of the timestep. And hey, look! If we use really large timesteps, then all this uncertainty goes away, and the model projections are fine again!
Climate modelers can be either scientists and get down to the hard gritty business of physics
*cough*. I’m not the one messing up basic statistics here. But by all means, keep insisting that you’re right, and keep submitting your paper and getting it rejected.
Here, I found a copy of Bevington and Robinson on the internet. We can go straight to them, if that’s the textbook that you prefer.
http://labs.physics.berkeley.edu/mediawiki/images/c/c3/Bevington.pdf
For error propagation, go to page 39-41 of the book (page ~54 of the pdf). You can try to walk me through your math and its units, if you like, but I’ll be surprised if you can: your units are wrong; they don’t make any sense.
windchaser, your “nope” directly contradicts the generalized derivations in Bevington and Robinson.
The differentials for propagating error, (sigma_x)^2 = (sigma_u)^2(dx/du)^2 +…, are generalized to any “x” and do not necessarily refer to time.
Your intuition is no test of correctness.
I already discussed your units argument: the error unit is W/m^2/year. The annual change in GHG forcing is also W/m^2/year. The head post figure is Celsius per year.
You’ve got no case.
You shrugged off the time/distance mistake in your own criticism, but you ignored the time/forcing equivalence I pointed out here. Your criticism is therefore illogical and self-servingly inconsistent, and you’ve ignored your own dismissal of your own prior criticism; an own goal.
The propagation time-step is annual, because the long wave cloud forcing error is the annual average. The error is from GCM theory bias, putting it freshly into every single annual simulation step. That is why it must be compounded.
The annual error in long-wave cloud forcing is propagated in relation to annual greenhouse gas forcing. I should have thought that was obvious given the first head post figure and the linked poster.
Long-wave cloud forcing contributes to the tropospheric thermal flux. So does GHG forcing. The change in GHG emissions enters an annual average 0.035 Wm^-2 forcing into a simulated tropospheric flux bath that is resolved only to (+/-)4 W/m^2. And that’s a lower limit of error.
The lower-limit but 114-times larger cloud simulation flux error completely swamps any GHG effect. GCMs cannot resolve the effect, if any, of GHG emissions.
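For readers following the numbers, the “114-times” figure is just the ratio of the two quantities quoted above. A quick sanity check in Python (both values are the ones given in this thread):

```python
lwcf_error = 4.0     # W/m^2: rms annual CMIP5 long-wave cloud forcing error (stated lower limit)
ghg_step = 0.035     # W/m^2: average annual increment in GHG forcing

# The annual GHG forcing increment enters a simulated flux bath resolved
# only to +/-4 W/m^2, i.e. the per-step uncertainty is ~114x the signal.
ratio = lwcf_error / ghg_step
print(round(ratio))  # 114
```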
Semi-annual cloud error may be of different magnitude. GCM simulations can proceed in quite small time steps. A detailed and specific error estimate and propagation could produce very different uncertainty widths. Possibly much wider than those in the head post figure, because of the multiple sources of error in a GCM.
If GCMs are ever able to project climate in 100 year jumps, your ludicrous “really large timesteps” argument might have relevance. But then, of course, we’d have to apply a 100-year average error, not just an annual average. What would be the magnitude of that, I wonder.
I have a personal copy of B&R. The units aren’t wrong (see above).
You’ve struggled hard, windchaser, and have gotten nowhere.
You’re right that the statistical dimension is W/m^2/year. But the annual change in GHG forcing is also W/m^2/year.
Ahh, great! If that’s actually the case, and you’re calculating the error in the change in forcing over time, why do you present it in terms of W/m^2?
You can understand my confusion. Mathematically, you treat the number as the error in the derivative of cloud forcing with respect to time, but in terms of units, you present it as just a constant, flat error in the cloud forcing.
I haven’t looked very closely at your derivation, but it also seems to reflect just a flat (constant) error in cloud forcing, not something that changes from one timestep to the next, or that feeds back with any other terms in your equations. Is that incorrect? The poster suggests that you calculate the total average cloud forcing error over a block of time, not the error in the change in cloud forcing, for which the units would be W/m^2/year.
You seem to contradict yourself, as at other times in our discussion, you said: “The (+/-)4 Wm^-2 is not the derivative of cloud forcing with time. It’s the average annual long wave cloud forcing error in the simulated total cloud fraction.”
Which is it? Can you clarify this for me?
The head post figure is Celsius per year.
The head figure is in total change in Celsius. It’s not ambiguous: the right axis is labelled as delta-C, not delta-C per year. Likewise, the head equation in your poster is also in terms of the total forcing and total temperature change, not forcing per year or temperature change per year.
The one exception, of course, is the part of the equation where you sum over all the timesteps, summing the annual changes in forcing/year to get the total change in forcing. This is where you’d convert from W/m^2/year to W/m^2. But obviously, if you’re starting with an error in terms of W/m^2, you can’t integrate that over time to get W/m^2. It’d be like integrating speed over time and getting back your speed, instead of getting distance traveled.
Sorry, if this sounds simplistic, but I’m trying to explain it in the simplest terms possible. If you integrate a quantity over time, then your units must change.
windchaser, you wrote, “Ahh, great! If that’s actually the case, and you’re calculating the error in the change in forcing over time,…”
No, windchaser. I’ve told you over and over again, it’s the (rms) average annual long wave cloud forcing error.
A twenty-year rms average, yielding the average annual error in the total forcing. Why is that so hard for you to understand?
”… why do you present it in terms of W/m^2?” Because that’s what it is, windchaser.
“…you treat the number as the error in the derivative of cloud forcing with respect to time…” No, I do not. I treat it for what it is: the annual average error. It has nothing whatever to do with dynamics.
I’ve made no “contradiction,” but have been clear and consistent throughout. The mistake has been yours from the outset, given your insistence that a linear root-mean-square annual error is a derivative.
It appears your exposure to physical error analysis is so lacking that you evidently have no grasp of its meaning.
You wrote, “The head figure is in total change in Celsius. It’s not ambiguous: the right axis is labelled as delta-C, not delta-C per year.”
In what unit is the slope of the line in that figure, windchaser?
You wrote, “Likewise, the head equation in your poster is also in terms of the total forcing and total temperature change, not forcing per year or temperature change per year.”
In the PWM equation, what does the subscript “i” represent?
You wrote, “…you can’t integrate that over time to get W/m^2.”
It’s a linear sum, windchaser. No units change.
You’re still getting nowhere.
I have to congratulate you though. At least you’re struggling with error propagation. That’s more than any of my climate modeler reviewers did.
Save the one reviewer who clearly was not a climate modeler, understood the error analysis, and recommended publication.
“…I treat it for what it is: the annual average error.”
One does not add a total error to the same series, over and over again. It’s added once.
“It’s a linear sum, windchaser. No units change. …In the PWM equation, what does the subscript “i” represent?”
The ith timestep, of course.
What you’re doing in that equation is exactly a numerical integration: you take the change in greenhouse gas forcing at a given point in time. You multiply it by delta-t, a change in time: 1 year. You get delta-F, the amount of change over the timestep. Then you sum over all these delta-Fs, to get the total change in GHG forcing over all timesteps.
It’s just this.
change in F == Sum over t: [dF/dt * delta-t]
Sorry for the cludgy representation of the math, but that’s a textbook numerical integration. So if you’re including a cloud forcing error, which is in the same units as the LHS, it must either be in W/m^2, in which case it’s a constant, added to F. Or if the cloud forcing error is not constant with respect to time, it should be integrated over with respect to time, just like dF/dt.
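The “textbook numerical integration” being described is just a Riemann sum; a minimal sketch, with a made-up forcing-rate series purely for illustration:

```python
# Hypothetical forcing rate dF/dt in W/m^2 per year (illustrative values only).
forcing_rate = [0.03, 0.035, 0.04, 0.045]  # one entry per annual timestep
dt = 1.0  # timestep in years

# Summing (dF/dt * dt) over timesteps converts W/m^2/year into W/m^2:
# the same unit change as integrating speed over time to get distance.
total_delta_F = sum(rate * dt for rate in forcing_rate)
print(round(total_delta_F, 3))  # 0.15 W/m^2 total change in forcing
```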
Arright, the remedial calculus lessons are done. You said that I’m “still getting nowhere”, and I can indeed see that. Please: find a mathematician you trust and run this by him. Perhaps he can explain this to you better than I can.
Honestly, best of luck with publishing your manuscript. And thanks for the conversation. It’s been interesting.
windchaser, you wrote, “One does not add a total error to the same series, over and over again. It’s added once.”
It’s a theory bias error. It enters into every single simulation step.
”The ith timestep, of course.” There goes your argument that “there’s no time-aspect in your equations.”
“You multiply it by delta-t,…” where do you see a time delta-t anywhere in the equation?
The delta-F_i are the annual forcings recommended by the IPCC, e.g., for the standard SRES scenarios. Time enters only implicitly with the steps in forcing.
“Sorry for the cludgy representation of the math, but that’s a textbook numerical integration.” Not a problem. So you’d agree that numerical integration is just a linear sum. Subject to linear propagation of error.
“So if you’re including a cloud forcing error, which is in the same units as the LHS, it must either be in W/m^2, in which case it’s a constant, added to F.”
Correct. For any step and including error, the forcing is [delta-F_i(+/-)4] Wm^-2. As mentioned umpteen times so far, the (+/-)4 Wm^-2 is the rms average CMIP5 LWCF error. As an average, it’s necessarily constant at every step, as a theory bias error it enters into every step, and its propagation yields a representative physical reliability of the projection.
“Please: find a mathematician you trust and run this by him.” I’ve done that. No problems found.
Windchaser, look at your own analysis. You’ve described numerical integration as a linear sum. Linear propagation of error follows directly. All you need do now is recognize the serial impact of a theory-bias error on the growth of uncertainty in a step-wise simulation.
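For readers keeping score, the two propagation rules at issue in this exchange can be stated side by side. If an identical per-step error re-enters every step of a linear sum and the step errors combine in quadrature (as in the general Bevington and Robinson formula), the total uncertainty grows as sqrt(N); if instead the error is a single constant offset applied once, it does not grow at all. A sketch (illustrative only; which rule applies to GCM projections is exactly what is being debated here):

```python
import math

sigma_u = 4.0    # per-step error, W/m^2 (the rms LWCF error quoted above)
n_steps = 100    # annual steps

# Rule 1: the error enters every step and the steps combine in quadrature,
# so sigma_x = sigma_u * sqrt(N).
sigma_quadrature = sigma_u * math.sqrt(n_steps)

# Rule 2: the error is a single constant offset, added once to the whole
# series, so it does not grow with N.
sigma_once = sigma_u

print(sigma_quadrature)  # 40.0
print(sigma_once)        # 4.0
```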
“Honestly, best of luck with publishing your manuscript. And thanks for the conversation. It’s been interesting.”
Thanks, and likewise.
Are Climate Modelers Scientists?
No! And those falsely claiming to be scientists should be challenged by sceptics and the public made aware of their bogus claims.
They seem to be like self-taught CAD operators – able to generate nice, professional-looking output but heaven help anyone trying to construct or use anything they’ve touched.
It just shows that mathematicians are not scientists. They think an internally inconsistent quantity, the sum of all positive integers, equals -1/12. A sum that exceeds infinity.
I don’t understand that, but I’m not going to ask you to spend time elaborating. 🙂 But it seems that in mathematics, there are more possibilities than can exist in the physical universe.
In maths, you can have negative speed, negative money, negative direction, negative temperature, etc. Those things are useful as tools. But the real world simply does not allow for negative speed, negative money, negative direction, negative temperature, etc.
negative money
====
I have plenty of that.
I’m going to tell the debt collectors there is no such thing as “negative money” in the real world and see how that goes. 🙂
I realized there was something seriously wrong with the universe when I learned about i (the sqrt of -1), and then found that it had real-world applications.
I think I’m going to start referring to climate models as “compound error machines”.
Are Climate Modelers Scientists?
It’s a good question for another reason: when we consider what a climate scientist is, we find that there is in fact no agreed definition of what this means. Given it is a term that has been applied to failed politicians and railway engineers and a host of others who have had no formal academic training in the area, we can see that in practice it’s far from clear what actually makes a person a climate scientist.
From the alarmist perspective it’s simple: a climate ‘scientist’ is someone who works to support AGW. It’s a very useful way of looking at it, because they can claim that no climate ‘scientist’ disagrees with them, and therefore other ‘non-’ climate ‘scientists’ can safely be ignored.
However, it’s also an entirely dishonest way of looking at it, because even those climate ‘scientists’ they like vary in how they view the situation; the consensus is no such thing. Secondly, there are clearly those who work in the area whose training and academic standing at least equal the others’, but who do not share the alarmist perspective, and who therefore should have the right to be called climate ‘scientists’ in any fair and honest system.
Always worth remembering, when the infamous 97% claim is pulled out, that in practice they simply have no idea how many scientists, climate or otherwise, there are, so they cannot know the size of the whole group of which the sub-group is supposed to be a percentage. So even setting aside the many problems of its methodology, the claim itself fails at a basic maths level; its value is about the same as ‘nine out of ten cats prefer…’.
In a vague sense. As a meteorologist, I hold more comprehension of all that climate stuff by virtue of climate being front and center in forecasting. I can also call the bluffs of forecasters just by the wording they use.
I love WUWT!
You’re preaching to the choir on this site. Try posting this on skepticalscience or a Greenpeace website if you wish to educate someone new.
I will attach this link and post on a few myself 🙂
Note that by posting his essay here, he got the attention of readers like you who can now pass it on. If he’d tried at SkS or Greenpeace, his article would never have seen the light—and you would never have become aware of it.
Besides WUWT isn’t an echo chamber, nor is it a closed site. New readers come in and learn something here.
Lighten up 🙂
The only thing I would show Greenpeace is the door
Unless there was a more convenient window?
Questing Vole
I show the door and throw out the window. I guess I am a little old fashioned.
wickedwenchfan
Attach away. You may find that you don’t get a good response to it. It is possible you would be banned or your posts deleted.
wwfan, it’s a peculiarity, and an obvious pointer to the truth of the matter, isn’t it, that only AGW skeptical sites do not censor the comments.
Science is about discovery of something previously not known or defined.
A model cannot contain anything that is not already known. A model can be useful to describe well understood outcomes where the variables are controllable or known.
Climate is a chaotic non-linear system where the variables are not controllable or known to any great extent.
Therefore Climate models are not science.
Q.E.D.
A model is an output of science – generally used in Engineering. Take a comparatively simple airflow model*: Science creates and researches the methodology, variables, and uses, as well as constraints or error bars in conjunction with, the model. This model can also then be revised and revised over time. The model is then used by engineers to make cars or airplanes or sacks of peanuts flow better through the air. An engineer using this model is not a scientist – but an engineer improving this model (ideally publishing and spreading the revisions) is a scientist performing science.
*Simple in the field of fluid dynamics models. 🙂
I think I’m gonna have to disagree here. An aeronautical engineer will use a virtual model and simulator to design various shapes and test the desired performance parameters. But then they’ll build a physical scale model and test it in a wind tunnel. You don’t go from a virtual model to the 10mm to the cm scale directly.
To TomB,
You said”
I think I’m gonna have to disagree here. An aeronautical engineer will use a virtual model and simulator to design various shapes and test the desired performance parameters. But then they’ll build a physical scale model and test it in a wind tunnel. You don’t go from a virtual model to the 10mm to the cm scale directly.”
Tom,
You imply 3 steps, model, build, test, correct? Seems climate modelers stop at step 1? Testing against observations is a failure, so we can’t go there, and forget step 3. Already built, just don’t understand it.
What allowed them to get away with this, until the “PAUSE” occurred, is that the real results, they hoped, wouldn’t be known until after they retire and collect their govt pensions. In many cases it was after they were dead.
I wonder if they realize that what they are doing to collect a paycheck, and justify their existence, has little to no real validity? (Who has read the “Black Widowers” series by Asimov?)
I recently spent 3 hours with an engineer who created a program (model) that uses 15-minute interval data from smart meters to analyze energy use in buildings. He spent years creating it. Explaining to him how much he is potentially missing was emotionally draining. He had not even been onsite to survey the facility, yet was willing to talk about potential energy savings.
Having used “models” for years to design structures, pipelines, water networks, sewerage and water treatment systems, project management systems, financial management, I would have to agree. Thing is with all those models I used, there were empirical tests to proof the model as well as real world tests to see if what we “modelled” actually gave us the projected results – often an iterative process to “tune” the models. But in every case – real world testing and proofs.
Meteorologists get to test their models every day/week.
So why are climate models whacky, and why does anyone accept them as anything but what they are: first-generation guesstimates?
Perhaps that is the difference between my engineering and “Climate Science”. Climate Models are still in the state of “Science” so real world proofing is unnecessary or beyond them.
“A model cannot contain anything that is not already known.” …or, at least, assumed or postulated.
If climate models contained only what is KNOWN, their outputs would not differ as they do.
Climate is driven by some “known knowns”, several “known unknowns” and a myriad of “unknown unknowns”, making certainty about the future of climate somewhat difficult.
“Science is about discovery of something previously not known or defined”
Reminds me of the problem I have with the word – research. I thought of myself as a researcher because I looked through old reports for answers that others had searched for and found.
Truthseeker
re “…A model cannot contain anything that is not already known…”
Einstein might disagree with you (so would I).
Relativity is a pretty interesting model that certainly appears to have taught lots of previously-unknown stuff to legions of physicists.
Truthseeker, models that include sufficiently well-developed physics can make unique predictions about observables. That opens them to falsification.
Prediction/observation/falsification (or not) is the way of science. So, physical models do have a critical part to play. Climate modelers, however, have removed their models from science, and sealed them away from the ruthless indifference of observation.
In this connection, I would like to present my own experience from the early 90s. I submitted a paper to an international journal. One reviewer pointed out minor corrections and approved it for publication; the 2nd reviewer gave it excellent marks, but at the end made a statement saying it could also be fitted to a linear curve. With this, the regional editor rejected the paper for publication. I then wrote a detailed letter to the Editor-in-chief of the journal. He sent this letter to three regional editors. All agreed with my observations and asked me to split the paper into three parts. They published these in 1995. All three relate to papers by Editorial committee members.

One of the papers related to climate change. The abstract states: “Climate change and its impact on environment, and thus the consequent effects on human, animal and plant life, is a hot topic for discussion at national and international forums both at scientific and political levels. However, the basis for such discussions is scientific reports. Unless these are based on a sound foundation, the consequent effects will be costly to the exchequer.” Here the authors tried to look into the impact of temperature and rainfall increase on ETp [evapotranspiration] and thus on crop factors. The percentage changes in ETp attributed to climate change can also be attributed [partly] to scientist-induced factors, such as (i) the choice of ETp model, and ETp model vs environment; (ii) probable changes in meteorological parameters due to climate change, expressed as absolute change or percentage change; and (iii) ETp changes expressed in terms of absolute changes or percentage changes. All these are explained in the article, using their article. The second paper deals with overemphasis on energy terms in crop yield models: three different groups working under three different country conditions came up with different conclusions on the impact of the energy term on crop yield.
Models, to be more meaningful in a physical and practical sense, and to be applicable in a wider environmental context, should be addressed as holistic systems, taking into account the abundant information available in the literature on all principal components of a model. With this, I presented an integrated curve that fits all three conditions.
Dr. S. Jeevananda Reddy
Congratulations, and more power to you, Dr. S. Jeevananda Reddy. 🙂
“neither respect nor understand the distinction between accuracy and precision.”
Damn, we learned that in year 11 chemistry. Our chemistry teacher was, arguably, better than our physics teacher. He had strict standards but was rarely unnecessarily strict.
I’d like to make a point which perhaps might be as clarifying to you as it is to me (although I could be totally off, like an athlete who keeps on running even though the race is over). I say that there is a huge difference between EMULATION and SIMULATION.
Climate simulators are exactly that – they are superficially modelling a climate system. But they are not emulators, and an emulator behaves exactly like the original. And it is becoming obvious to me that one cannot in fact emulate a climate.
Is anyone here into retro computing? Then you’ll know that a computer emulator lets you perfectly imitate a different platform on a foreign host. For example, you can run an Atari ST emulator on a modern Macintosh. That emulator lets you run the system software and applications as if it was the original – there is no difference, save for bugs.
An Atari ST emulator, one of several, can be downloaded here:
http://www.atari.st/pacifist/
I once wrote an ST simulator in JavaScript. It merely resembled the ST’s desktop – it could not run software, save data, or anything. It just superficially resembled the desktop, with its drop-down menus and icons. Here is what it sort of looked like, though this is not mine:
http://www.atari.st/
As for statistics, lots of blowhards think they know statistics. I am a hack with some knowledge but I don’t pretend otherwise.
The psychologist Dr. Richard Wiseman thought he had ‘debunked’ a parapsychology meta-analysis but used incorrect statistics to do it.
Naomi Oreskes seems to think that p values of <0.05 are just 'convention' and apparently knows nothing about standard deviation. And because we apparently know that AGW is true, we don't need those high standards, so let's settle for <0.10 (which, when you think about it, makes no sense, because if AGW is so obviously true, all uncertainty about it would easily fall below 0.05).
And of course we have the usual bullshit PR nonsense that mammograms are 80%+ accurate, that HIV tests are 99% accurate, etc, etc.
On the name of Naomi: One of my favourite actresses is Naomi Watts. One Naomi I know is very gentle. Another is an in-law and is beautiful and happy. I guess it depends on geography!
The Base Rate Fallacy shows up in all sorts of places. False positives and false negatives quickly overpower our intuition, leading to overconfidence in our results. For example:
A group of policemen have breathalyzers displaying false drunkenness in 5% of the cases in which the driver is sober. However, the breathalyzers never fail to detect a truly drunk person. 1/1000 of drivers are driving drunk. Suppose the policemen then stop a driver at random, and force the driver to take a breathalyzer test. It indicates that the driver is drunk. We assume you don’t know anything else about him or her. How high is the probability he or she really is drunk?
Many would answer as high as 0.95, but the correct probability is about 0.02. (50 false alarms versus 1 actual drunk driver)
http://en.wikipedia.org/wiki/Base_rate_fallacy
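The arithmetic behind that 0.02 is a one-line application of Bayes’ theorem, using exactly the numbers given above:

```python
# Base-rate check for the breathalyzer example:
# P(drunk) = 1/1000, false-positive rate = 5%, true-positive rate = 100%.
p_drunk = 0.001
p_pos_given_drunk = 1.0      # breathalyzer never misses a truly drunk driver
p_pos_given_sober = 0.05     # 5% false positives on sober drivers

# Bayes: P(drunk | positive) = P(pos|drunk) * P(drunk) / P(pos)
p_pos = p_pos_given_drunk * p_drunk + p_pos_given_sober * (1 - p_drunk)
p_drunk_given_pos = p_pos_given_drunk * p_drunk / p_pos
print(round(p_drunk_given_pos, 3))  # 0.02: about 50 false alarms per real drunk
```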
Your analogy works to a point. But BAC analyzer measurements are not a yes/no binary output.
Assume the legal definition of DUI is 0.08% BAC. You state that “the breathalyzers never fail to detect a truly drunk person.” That statement assumes high accuracy with some un-stated precision, i.e. your analysis ignores measurement error. If the accused’s measured BAC is 0.085%, the analyzer may round up to display 0.09%, thus legally DUI. But if the manufacturer says precision is +/- 0.01%, then the accused could be at 0.075%, under the legal limit. For anyone whose recorded BAC is, say, 0.10%, the probability that he or she is really drunk is quite high, and a presumed DUI conviction is beyond a reasonable doubt.
The skeptics here at WUWT (myself included) often hammer the dishonest alarmists over their willful ignoring of thermometer measurement precision in temperature records when they proclaim “highest-ever” alarmism over differences of hundredths of a degree.
Do not be guilty of the same mistake.
I had some clients of the company I worked for use a math co-processor emulator so that they could run a certain CAD program that required that instruction set. My computer, with an Intel 80386 processor plus the math co-processor (several hundred dollars extra), ran the software 10 times faster than the clients’ computers could. An emulator always runs slower than the real thing, but if you are using modern hardware to run old-hardware emulators, you are not going to notice. By the way, the next-generation Intel chips, the 80486 and then the Pentium series, had the math instruction set built in.
A more specific term might be: computer-based simulation of coupled non-linear .?. by the Finite Element Method. When playing computer games, much of the graphics is simulated, because more realistic ray-tracing algorithms take many orders of magnitude more computing time.
Are you sure that’s not a 286 needing the co-processor? I recall having to install one in order to run AutoCAD on an IBM PS2 back in the 80s. My first computer was a 486DX66 which didn’t need a co-processor, so it couldn’t have been that one, and I never had a 386.
I had a 386 with a co-processor for running a CAD program.
Oops, looks like I’m talking about a physically separately-installed chip and you’re not.
The 386 had a problem with the math co-processor on board. It was renamed the 386SX and the problem was corrected I’m pretty sure on subsequent revisions. Might have something to do with this.
I recall reading that when the co-processor became available, ALL 386 chips were printed with it. The co-processor tended to be unstable, so if it failed the burn-in, they laser-cut the traces and voilà! a 386SX chip was born.
When I bought my third computer (first a Commodore 64 (64KB of memory), second a Sanyo 8086 (or possibly 8088; I’d have to look at the box, but it’s currently in my garage attic)), it was an 80386. I bypassed the 286, unlike everyone else in my office who went cheap and bought an obsolete 16-bit system (though it was the biggest-selling computer worldwide that year), and my boss paid for the separate math co-processor, about $400 or so back then, that plugged into the available socket on the motherboard / mainboard.
As for the 386SX, it did not have a math co-processor.
“In 1988, Intel introduced the 80386SX, most often referred to as the 386SX, a cut-down version of the 80386 with a 16-bit data bus mainly intended for lower cost PCs aimed at the home, educational, and small business markets while the 386DX would remain the high end variant used in workstations, servers, and other demanding tasks. The CPU remained fully 32-bit internally, but the 16-bit bus was intended to simplify circuit board layout and reduce total cost.[13] The 16-bit bus simplified designs but hampered performance”
Meant to type “separate” before co-processor. It wasn’t even made by Intel.
386SX could only address half the RAM of a 386. When the 386 was first released, there were no 32-bit co-processors available, so machines were made with sockets to take existing 16-bit co-processors, but I never saw one.
I had a 286-based PC that I used to run a program I’d written to calculate the position of the moon over a month. It took about 30 minutes to complete all the calculations, with the occasional printer line being generated. When I eventually added a co-processor, it was so ‘fast’ the printer couldn’t keep up, and the speed seemed so wrong that the first time I couldn’t even watch 🙂
Similar experience with downloads when moving from a 300 baud to 9600 baud modem.
Karim DG, agree about learning error propagation in Chemistry. I learned it big-time when taking Analytical Chemistry as a college sophomore. I can’t speak to your distinction between emulation and simulation. Guess it’s a matter of applied meanings. But congratulations with your Naomis and you’ve got good taste in actresses.
The IPCC doesn’t have any models. They just have black-box hindcast curve fitting. It is impossible to model anything when most of the science is unknown. Finite approximations are a joke when the cells are so big as to completely miss climate features as huge as thunderstorms.
The real test of a real climate model will be whether it can predict the weather next week. Don’t hold your breath.
The problem is that the IPCC does not make the claims that MSM makes. If you want to make a difference then eviscerate these activist journalists. Make them lose their jobs.
Brian Williams is one of whom you speak. I could not take any more of his alarmism about the climate so I tuned him out months ago. Maybe he’s gone for good.
“They just have black box hindcast curve fitting. ” That assessment may be overly harsh, but the focus is certainly correct.
Steve McIntyre recently did some analysis on the large difference in agreement between the forecast and observations, versus the hindcast and observations.
http://climateaudit.org/2014/12/11/unprecedented-model-discrepancy/
The comment thread below that article is also well worth reading.
The modelers all hotly deny that they are overfitting, but the dramatic difference between the success of the hindcast and the forecast shows without a doubt that they are indeed overfitting, however much they wish to deny it.
Wikipedia actually has some nice articles on this topic.
http://en.wikipedia.org/wiki/Overfitting
http://en.wikipedia.org/wiki/Data_dredging
http://en.wikipedia.org/wiki/Misuse_of_statistics
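The hindcast/forecast asymmetry under overfitting is easy to demonstrate generically. A toy sketch (all numbers invented; this is a statistics illustration, not a claim about any particular GCM): fit an over-flexible polynomial to noisy “historical” data that is really just a straight line, then ask it to extrapolate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = x + 0.1 * rng.standard_normal(20)   # noisy linear "history"

# An overfitted degree-15 polynomial hugs the hindcast data...
coeffs = np.polyfit(x, y, 15)
hindcast_rmse = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))

# ...but goes wild when asked to "forecast" beyond the fitted range,
# where the true relationship is still just y = x.
x_fut = np.linspace(1.05, 1.5, 10)
forecast_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_fut) - x_fut) ** 2))

print(bool(hindcast_rmse < forecast_rmse))  # True: great hindcast, lousy forecast
```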
Once again we have to do this stupid story.
Climate Science is based upon statistical analysis and computer modelling, neither of which is scientific. At best, they may, and I say “may”, be assumed to be some sort of loose engineering, but science they are not, never have been, and never will be.
Statistical analysis is a methodology for extracting information out of raw data. The problem is that this supposed methodology has nothing to do with physical reality. It only deals with taking the statistician’s biased assumptions, looking for the best mathematical construct to overlay on the data, and then extrapolating a physical reason. Real science works the other way around: it is about taking solid, reality-based physical observations and letting the mathematics evolve from the facts and data. In other words, science is about taking physical events and looking for the math, while statistics, and therefore climate science, is about taking theoretical models and looking for the physics, which is not science but religion by Ex Cathedra.
Yes Climate Science is an oxymoron, there is no science in “Climate Science”, it is just pure religion. And as I have been pointing out for quite some time now, Climate Science is not the only religion in science, we have the same problems in Physics (cosmology and astrophysics), Biology (evolution and BCS theory), Economics (macroeconomics), Archaeology (Egyptology and Sumerology), Anthropology, Paleontology and so on.
Focusing on the details of what is wrong with the reviewers will not change anything. Academia and academic practice are now largely determined by “Ex Cathedra”. The true scientific method and the pursuit of knowledge have now broadly been usurped by religious attitudes, protected by priests, high-priests and godlike mentalities. You can thank tenure for this. Tenure was once supposed to protect the voice of reason and progress, but like everything, once it has been around long enough it becomes corrupted. Tenure now largely serves to protect the criminal, the dishonest, the incompetent and the megalomaniacs.
For thousands of years society, knowledge and science evolved without the use of tenure. But now universities and schools are entering a new dark age of knowledge, where Science will be determined by the priests, popes and gods of the academic church, not only backed but empowered by the might of arms of the police and military that governments wield through undemocratic legislation and the court system.
Science has only itself to blame for this disaster we are in. Complain all you like about the morons, idiots, incompetents, buffoons and megalomaniacs of academia. Just remember how they got there: they got there because every one of you who went to university played along with the rise of these corrupt people. Don’t forget, as much as you like to criticize these thieves and criminals, they too have Ph.D.s, they too hold Chairs of distinction, they too control the journals of note (like Nature, Physics Letters etc.), they too run the departments as Deans, they too win the accolades, they too have built up reputations on their armies of acolytes and sycophants, they too have the ears and attention of corrupt governments, and ABOVE ALL, THEY, NOT YOU, HAVE CONTROL OF THE AGENDA… ALL THANKS TO YOU FOR NOT DOING YOUR JOBS when you should have been doing them.
And now everybody is crying poor Science! This has been a long time in coming. I saw it during my years at university in physics: people lied in their papers about their results; they took one set of data and published multiple papers (in one case I remember well, a Ph.D. candidate got 7 papers from the same set of results in 7 different PRESTIGIOUS journals!); candidates who supported the right Professor got excellent funding, while those who didn’t got only hardship; researchers all around the world formed cliques of mutual interest and passed each other’s papers in the review process; data was conveniently fudged to look good, and when the raw data was asked for as confirmation, it was always conveniently lost. Ah, how many papers out there in all these fields of endeavour are false? How many? 10, 20, 30% or more? You would be surprised: if you applied the same analysis as above to all the papers, you would probably fail some 80 to 90% of them! How do I know this? For I too once reviewed, once! Not any more. I used to apply that same analysis to every single paper, and barely 10% passed, and that only after revisions were made. Then the word came down that I was failing too many papers, and that was the end of that. Standards must never be kept in Science; they must always be lessened, it so appears.
What are you people all crowing about! This blog entry is RIDICULOUS!
Go to any, ANY JOURNAL. Take a random sampling of 10 papers and put them through a thorough analytical review, INCLUDING checking all the references (and there lies another sneaking perversion of science), and you will find that 8 or maybe 9 of them FAIL!
Complain all you like, guys. But the truth is this: we have fields of endeavour that now belong more in science-fiction movies than in academia. For example, we give Nobel Prizes in Economics; are you aware that NOT A SINGLE THEORY IN ECONOMICS HAS EVER WORKED? That’s right, economics is pure poppycock, and yet we teach it. Evolution: absolutely no proof for evolution. Biological Classification theory is totally based upon committee, NO SCIENCE. Cosmology is all based upon data and photos that nobody can verify. High Energy physics and String Theory have so many holes that they make a black hole look full. How about Psychology? There is no science in it; psychiatry and its bible the DSM are run by committee: no science, no evidence, no proof. How many people are declared ill and forced, even by the courts, to take meds that have ABSOLUTELY no proof of aid?
You guys are complaining about Climate Science! Bwahahahahahahah!
We have so many problems that are far, far greater. Our university systems need an overhaul. Our economies need to be restructured, our democratic rights are being eroded by corrupt, incompetent governments. Our scientific journals are FILLED with rubbish.
You have lost sight of reality if you think that doing a paper review is going to change this problem, the problem of lying cheaters who never should have gotten a degree in science in the first place, for ….
THE LUNATICS ARE NOW RUNNING THE ASYLUM (aka university).
THE QUESTION YOU SHOULD ALL BE ASKING IS THIS….
HOW DO WE GET RID OF THE LUNATICS AND GET SOME SANITY BACK INTO THE SYSTEM?
Reviewing papers is not the issue; MORONS, LUNATICS AND CHARLATANS RUNNING AROUND AND MASQUERADING AS SCIENTISTS, PHYSICISTS, BIOLOGISTS, ECONOMISTS, POLITICIANS, JUDGES, AND NOBEL PRIZE WINNERS ARE THE PROBLEM!
How do we clean up the system? Forget about the toilet paper these people write; that’s easy to fix, you flush it down the toilet. How do we stop creating more of these lunatics?
The only cure for stupidity is death. I’m not suggesting wholesale culling. Darwin will sort it out eventually.
Yes Alex. In the good old days we used to allow the stupid people to kill themselves. Now we do our darnedest to stop them. I think that the reason we try so hard to stop them is that now more so than in the past they take many undeserving people with them.
Regarding your comments about economic “science,” note that not one of their theories has been or can be subjected to any sort of controlled experiment. Theories promulgated by “influential” academics are considered sacrosanct and beyond refutation. Economists NEVER, EVER consider that their theories (actually more akin to conjectures) are wrong, even when, applied in the real world, they produce results contrary to those intended; and this occurs MOST of the time. Coin flipping would produce the correct strategy more often than the garbage produced by academic economists.
Economists produce papers that are awash in formal mathematics buried under unintelligible econo-jargon. What matters is “the model;” the more math, the better.
Astrologists at least can tell you where the planets will be sometime in the future. Economists CANNOT TELL YOU when a recession has hit until AFTER it has started!!! What kind of science has ZERO predictive ability?
As a result of this “science,” based purely on opinion and the popularity of a particular individual or group of individuals, we have the farce, the joke, the scam of “liberal” vs “conservative” economists.
WHAT!!!… the POLITICAL IDEOLOGY of the economist will determine the economic strategies that should be pursued.
If you are seeking a “science” more of a farce, a scam, a joke than climate “science,” take a gander at economic “science.” Unfortunately, just like the climate charlatans’, the guinea pigs in their exercises are the citizenry, who get royally screwed over, once again, by the “elites.”
John, I received an economics lesson from my father about fifty years ago that makes me agree with your observation. He was a farmer as a youngster. He explained how the prices, and more importantly the profits, in animal feed and animal production would cycle, and why. It was a problem farmers could live with until the government got involved and tried to fix it. All the government managed to do was lengthen the period of the cycles. This made matters much worse, because it made the unprofitable stretch longer, and thus more farmers had to throw in the towel. The more the government tried to fix things, the more fixing was required. We are still living with the fixes and requiring more.
You drilled into a nerve on economics “science”. RJ Gilbert in Tau Beta Pi “Bent” issue Spring 1993 summarized it nicely: “. . .Economics is a difficult subject because it is not about the control of a passive system. Rather, it is about the design of policies in pursuit of complex objectives in a system comprised of people who are at least as intelligent as the government that is attempting to influence their behavior. . .”
Neo classical economics is based on numerous assumptions that are not true in the real world.
The two most ridiculous ones are 1. Perfect information and 2. Perfect Scalability.
Think about how many industries rely on selling information and the laws in place to protect information.
If we had perfect information none of them would need to exist.
Think about a mine and mineral processing plant. When the price drops they high-grade the orebody, which means the mine actually produces more mineral. If the price goes up, the opposite happens, so there is actually less mineral production. This is the opposite of perfect scalability.
I totally agree with you. The reason for this inaccuracy in our knowledge base is the inadequate use of our spoken and written language, i.e. “You keep using that word. I do not think it means what you think it means!” (S. Morgenstern). LOL
Misusing the word science to mean just about anything is a disservice to our gaining of knowledge. Indeed, misusing any word confuses the logical train of thought (see wealth, money and jobs). The general use of the word Science as a noun, a verb, or an adjective will guarantee obfuscation of the real meaning of the thought conveyed. I prefer to use science as a process, not a noun.
I am reminded of when I was cutting steel to precise sizes: I could not just use a ruler, measure it 500 times, and compute the average to find the dimension to a ten-thousandth.
Accuracy is easier to obtain when precision is in the mix.
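The steel-cutting point can be sketched numerically: averaging many readings shrinks the random scatter, but it cannot remove a systematic calibration error in the instrument. The bias and noise values below are invented purely for illustration.

```python
import random
import statistics

random.seed(0)

TRUE_LENGTH = 10.0   # inches, the actual dimension being cut
BIAS = 0.05          # hypothetical calibration error of the ruler
NOISE = 0.02         # random reading error (one standard deviation)

# 500 independent readings taken with the same biased ruler
readings = [TRUE_LENGTH + BIAS + random.gauss(0.0, NOISE) for _ in range(500)]
mean = statistics.mean(readings)

# Averaging crushes the random part (standard error ~ NOISE / sqrt(500)),
# but the 0.05" calibration bias passes straight through to the average.
print(f"average deviates from true length by {mean - TRUE_LENGTH:+.4f} in")
```

The average lands very tightly on a wrong value: precise, but not accurate, which is the distinction being drawn here.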
Why is it the Ted Kaczynski Manifesto comes to mind?
I like it. The bumf solution.
Gotta say, Dorian, that my experience in Chemistry is not your experience in Physics — whatever branch it was. I review papers regularly, and most of them are competently done; possibly incomplete somewhere or perhaps not taking the analysis far enough. I’ve never been censured for being too critical. In fact, I’ve been thanked for being critical.
So, while science is certainly under vicious attack, mostly by Progressives these days, I tend to be long-term optimistic.
Tenure was a good idea, so long as academics honored their side of the contract. Their side is to speak as objectively as possible. The university’s side is that no one can fire them for doing so.
But academics, especially in the Humanities, the soft sciences like Cultural Anthropology, and in any department with a name ending in “Studies” no longer speak objectively. They’ve become openly and loudly partisan and political. In my opinion, this violates and, indeed, abrogates, the tenure contract. University presidents have been grossly remiss in allowing this to continue. Politically partisan faculty should be let go, as having fatally violated their tenure contract.
There are three aspects of the global weather/climate system that are fundamental to its workings: the Pacific Decadal Oscillation, the North Atlantic Oscillation and the El Nino/La Nina perturbations. Any atmosphere/ocean coupled model worth its salt should have phenomena similar to these emerge from simulations (that is, with extent and time scales similar to the real thing). None of them does. Therefore some things very fundamental are not yet understood, let alone included in those models.
That climate modellers nevertheless think those models are good enough to base public policy on shows that they lack the self-criticism inherent in real science. They are therefore little more than glorified simulators. Their models relate to the real world as cartoon figures relate to real people.
Ed Zuiderwijk:
If the agenda is really public policy (e.g. ‘global governance’, ‘climate justice’) then it doesn’t matter if the models have any basis in reality; they have been created to support the agenda with a specious ‘scientific’ legitimacy. They have the advantage of being so arcane that they are beyond the ken of ordinary people; only the high priests of climate scientism are admitted into their mysteries.
Clearly the author of this post, Pat Frank, has not been properly initiated, or he would have seen that so naive a concept as ‘error propagation’ does not apply to the sacred models, which inhabit a realm unblemished by mere empirical facts.
/Mr Lynn
Prescient comment, L.E.J. One of my reviewers dismissed it as “naive error propagation theory.” He went on to demonstrate in his review that he knew nothing of it.
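For readers unfamiliar with the idea, step-wise error propagation of the kind described in the head post can be sketched in a few lines: a constant per-step forcing error, accumulated in quadrature over annual steps, yields an uncertainty envelope that grows as the square root of the number of steps. The 0.42 K per W m-2 linear sensitivity below is an invented stand-in for illustration, not a figure from the manuscript or the PWM emulator.

```python
import math

FORCING_ERROR = 4.0   # W m^-2: the CMIP5 average long-wave cloud forcing error [1]
SENSITIVITY = 0.42    # K per W m^-2: a notional linear factor, illustration only

def uncertainty_envelope(years):
    """Root-sum-square accumulation of a constant per-step error.

    Each annual step contributes the same +/- temperature uncertainty,
    so after n independent steps the envelope is per_step * sqrt(n).
    """
    per_step = SENSITIVITY * FORCING_ERROR
    return [per_step * math.sqrt(n) for n in range(1, years + 1)]

env = uncertainty_envelope(100)
# After 100 annual steps the envelope is sqrt(100) = 10 times the one-step value.
```

The point of such an exercise is not the particular numbers but the growth law: because the envelope widens monotonically with each step, a projection's stated trend can quickly become small compared with its accumulated uncertainty.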
It is with deep sadness we note the passing of the Null Hypothesis in climate science. Born about 1925, Null has had a long and distinguished career testing the significance of an immense variety of theories and conjectures. In particular, Null brought to science a realization of the medical injunction to “First do no harm,” by not claiming the truth of a hypothesis without clear evidence. Unfortunately, in recent years Null fell into declining health, contracting a serious case of consensus from which Null never fully recovered. Finally, when complications set in from natural variability and other signs of “bad data,” Null finally expired. In lieu of flowers, Null’s estate asks that you donate to the statistician of your choice.
“…the passing of the Null Hypothesis in climate science.”
Well, I don’t mean to be harsh here at such a solemn time, but perhaps if Null had stayed out of politics…
Nope. Politics came looking for Null.
The climate pseudoscience Null Hypothesis did not die a natural death. It was a back-alley mugging, a paid hit job. A sort of 9mm-sized brain haemorrhage, if you will.
Take heart, your post is important.
Thanks, Oatley.