This comment is from rgbatduke, who is Robert G. Brown of the Duke University Physics Department, on the "No significant warming for 17 years 4 months" thread. It has gained quite a bit of attention because it speaks clearly to the truth. So that all readers can benefit, I'm elevating it to a full post.
Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!
This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.
Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a "most likely" projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed "noise" (representing uncertainty) in the inputs.
What I'm trying to say is that the variance and mean of the "ensemble" of models is completely meaningless statistically, because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
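To make the point concrete, here is a minimal Python sketch (all numbers invented for illustration, not GCM output) contrasting an "ensemble" of structurally biased models with genuinely iid measurements of the same quantity. The mean and standard error of the iid sample bracket the truth in the familiar way; the same arithmetic applied to the biased ensemble says nothing about it.

```python
# Minimal sketch with made-up numbers: mean +/- standard error of an
# "ensemble" of structurally biased models is not a confidence interval,
# while the same arithmetic on iid unbiased measurements is.
import numpy as np

rng = np.random.default_rng(0)
truth = 1.0  # the quantity every "model" is trying to predict

# Case 1: 30 models, each with its own systematic (non-zero-mean) bias.
model_bias = rng.uniform(0.5, 2.0, size=30)
models = truth + model_bias + rng.normal(0, 0.1, size=30)

# Case 2: 30 iid, unbiased measurements of the same truth.
measurements = truth + rng.normal(0, 0.5, size=30)

for name, x in [("biased ensemble", models), ("iid measurements", measurements)]:
    mean, sem = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
    print(f"{name:16s}  mean={mean:5.2f}  2*SEM={2 * sem:4.2f}  "
          f"truth within 2*SEM: {abs(mean - truth) < 2 * sem}")

# The iid case covers the truth roughly 95% of the time; the biased
# ensemble essentially never does, no matter how many models you average.
```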
So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R-squared or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
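A toy illustration of that extrapolation failure (nothing here is the real temperature record; it is just a nonlinear function and a short fitting window):

```python
# Minimal sketch: a linear (first-order Taylor-like) fit over a short window
# of a nonlinear function looks fine locally but fails badly when extrapolated.
import numpy as np

x_fit = np.linspace(0.0, 0.1, 50)        # short fitting interval
y_fit = np.sin(2 * np.pi * x_fit)        # stand-in nonlinear "record"

slope, intercept = np.polyfit(x_fit, y_fit, 1)

for x in (0.05, 0.5, 1.0):               # inside, then well outside, the window
    print(f"x={x:4.2f}  linear fit={slope * x + intercept:6.2f}  "
          f"actual={np.sin(2 * np.pi * x):6.2f}")
# Inside the window the fit tracks the curve; outside it, the higher-order
# terms dominate and the extrapolated line is simply wrong.
```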
Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a Ouija board as the basis of claims about the future climate history as use the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.
Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (resolving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)
Somebody else could say "Wait, this ignores the Pauli exclusion principle and the requirement that the electron wavefunction be fully antisymmetric." One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, and perform the self-consistent field computation to convergence. (This is Hartree-Fock.)
A third party could then note that this still underestimates what is called the "correlation energy" of the system, because treating the electron cloud as a continuous distribution through which the electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom "too small" and "too tightly bound". A variety of schemes have been proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.
A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).
In the end, one might well have an "ensemble" of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF; although one can derive some density functionals from first principles (e.g. the Thomas-Fermi approximation), they usually don't do particularly well because they aren't valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.
Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.
So, should we take the mean of the ensemble of "physics based" models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon's quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronic structure in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) ) you will note that I cheated — I snuck in a semi-empirical method.
Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable "physics based approach") the same as LDF in the "ensemble" average, you guarantee that the error in this "mean" will be significant.
Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!). Still, nobody would pretend that the average of a theory with an improved theory is "likely" to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF, because in fact they often do not.
What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.
Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that "ensemble" mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some "epoch starting point" — one that does not matter in the long run, and we'll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a "failed" bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
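As a sketch of how simple that bookkeeping could be (the "observations" and "model runs" below are invented placeholders, not CMIP output), one might rank each run by its misfit to the observed record and bin everything outside the top few:

```python
# Minimal sketch with invented data: score each model run against the
# observed record, keep the closest five, and bin the rest as "failed".
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2014)
obs = 0.01 * (years - 2000) + rng.normal(0, 0.05, len(years))  # stand-in observations

# 20 hypothetical model runs with assorted trends.
runs = {f"model_{i:02d}": 0.01 * rng.uniform(0.5, 4.0) * (years - 2000)
                          + rng.normal(0, 0.05, len(years))
        for i in range(20)}

rmse = {name: np.sqrt(np.mean((series - obs) ** 2)) for name, series in runs.items()}
ranked = sorted(rmse, key=rmse.get)

keep, failed = ranked[:5], ranked[5:]
print("keep:  ", ", ".join(keep))
print("failed:", ", ".join(failed))
# No ensemble mean anywhere: each run is compared to reality on its own merits.
```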
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.
Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes, it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.
Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away; things look predictable, and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R-squared derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this "ensemble" fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that's when it really is governed by underlying IID processes (see "Green Jelly Beans Cause Acne"). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
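The 19-to-1 point is easy to check numerically. A minimal sketch (assuming SciPy is available; the samples are synthetic, drawn from an exactly true null):

```python
# Minimal sketch: even when the null hypothesis is exactly true, p < 0.05
# "wins" about one bet in twenty, by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_tests, rejections = 10_000, 0
for _ in range(n_tests):
    sample = rng.normal(0.0, 1.0, size=30)       # null: true mean really is 0
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < 0.05:
        rejections += 1

print(f"rejections at p < 0.05 under a true null: {rejections / n_tests:.3f}")
# Prints roughly 0.05 -- the 19-to-1 odds quoted above, lost one time in twenty.
```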
So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.
It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.
Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they've been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.
rgb
The latter alternative should be $4.270 billion according to US practice and $4,270 billion according to Finnish practice.
Gail Combs says, June 19, 2013 at 1:39 am.
Gail, great, great comments – a brilliant demolition of the IPCC rationale, point by point.
BUT you are technically wrong in your specific comment about water, when you say: "Water, probably THE most important on-planet climate parameter gets demoted to a CO2 feed back and isn't even listed by the IPCC as a forcing. That point alone make ALL the models nothing but GIGO."
No! In a feedback system, a forcing is an externality – a change (whether natural or anthropogenic) that is external to the feedback system. A change in atmospheric water vapour composition on the other hand is an important feedback. It will only occur as a consequence of those changing externalities.
As a hard line skeptic, I am extremely happy for water to be described as a feedback. This is NOT a relegation of its significance, as you imply. Just correct science. Don’t forget that feedbacks can be negative or positive. There is simply no hard evidence that water feedback is net-positive rather than net-negative. If net-negative, water would be a cooling agent, would it not?
Logically speaking, the fact that the models don’t average nearer to current temperature says, in general, they are all seeking the same answer.
cn
Nick Stokes — "bitch slap"? Taking it too personally, huh?
I am an avid WUWT fan though I don’t post often. James Delingpole wrote a blog article yesterday which, as usual, was quite amusing. http://blogs.telegraph.co.uk/news/jamesdelingpole/100222585/wind-farms-ceausescu-would-have-loved-em
Even more amusing was the tearing to shreds by Cassandra1963, cheshirered2 and itzman of a warmista who posts under the name 'soosoos'. I reproduce the thread below (apologies if the formatting is all over the place):-
Original Post by:- cheshirered2 . 17 hours ago
The science of global warming is represented in statistical form by those now-famous computer models. Those models have been a true disaster. That’s not ‘right wing’ rhetoric, it’s a statistical fact when we compare how the models performed against observed reality.
Look here and weep as Dr Roy Spencer shows how 73 out of 73 IPCC-influencing models failed dismally. (Yes, Ed Davey, that’s a 100% failure rate). http://www.drroyspencer.com/20…
Now read on as one of the most brutal posts ever on the legendary http://www.wattsupwiththat.com smashes the alarmist case – reliant as it is on entirely failed computer models, to pieces. (It’s a heavy read, but persist and howl with derision at the alarmist stupidity). http://wattsupwiththat.com/201…
On such failed and now entirely falsified anti-science is predicated the government's ludicrously suicidal charge to 'decarbonise' the UK economy, with wind 'power' ridiculously being at the forefront. On a day when politicians supported the idea of bankers going to jail if they lose punters' money, we're surely entitled to ask what will happen to politicians who cause untold financial misery and thousands of deaths due to their lunatic energy policies?
And here’s the punchline: if the models which represent the theory fail against real observations – and they do, then the theory that’s represented by those models must also fail.
Posted by: soosoos (Responding to cheshirered2). 17 hours ago
Look here and weep as Dr Roy Spencer shows how 73 out of 73 IPCC-influencing models failed dismally. (Yes, Ed Davey, that’s a 100% failure rate)
Oh look – yet another climate change denialist is regurgitating this nonsense! No one else has answered the following criticisms levelled at it. Perhaps you could?
How do you distinguish between a failure of the models and… a failure of the observations?
There is a precedent for this with satellite data, after all (also involving Spencer): their failure to accommodate orbital decay that led to a spurious cooling trend that contradicted models. The models were later exonerated. The satellite data should be treated very cautiously here.
First, they are unambiguously contaminated by stratospheric temperatures, which will impart a cooling bias (yes: there is tropospheric warming and stratospheric cooling, consistent with AGW but not solar warming).
Secondly, that graph averages UAH/RSS data. This is unfortunate because the two datasets explicitly disagree with each other. The confidence limits of their trends don’t even overlap. Something is rotten in the state of Denmark. RSS on its own is in much tighter agreement with the models – even with a stratospheric cooling bias.
But who needs a nuanced discussion of data and their misrepresentation when you can just have mindless regurgitation of nonsense from WUWT. Lol.
Posted by: Cassandra1963 (Responding to Soosoos).
Soosoos, you reality deniers are truly desperate now and it shows in the way you pick and choose and cherry pick and evade the central message. Denial, anger, bargaining, acceptance, the steps to finding out that everything you believed in was a pack of lies and a giant delusion. You are going to have a very difficult time coming to terms with the demise of the global warming fraud, I hope it doesn’t destroy you mentally as it may well do with some of the more committed cultists.
Posted by: BlueScreenOfDeath (Responding to Soosoos). 16 hours ago
“RSS on its own is in much tighter agreement with the models – even with a stratospheric cooling bias.”
Arrant nonsense. Which models? There are seventy-odd of the things and they are all over the place. Either you haven’t looked at the graph, or you are taking the piss.
Plus, you seem to ignore the radiosonde balloon data – is that because it doesn’t agree with your glib, baseless dismissal? Mind you, if an AGW sceptic scientist made an assertion that water was wet, you’d find some method of criticising it.
Posted by: cheshirered2 (Responding to Soosoos). 15 hours ago
Oh so sorry, I forgot the alarmist mantra; when the theory doesn’t match observed reality….reality is wrong! Talk about anti-science. You lot were happy enough to use ‘catastrophic’ model projections to drive the CO2 scare, and we’re now blessed with the resultant stupid energy policy.
You lot were happy to quote chapter and verse IPCC projections based on the AGW theory, and used your models as a means to an end. You lot were also happy enough to accept actual observations which fell in your favour, like Arctic ice in 2007. But now the models have been proven to have failed – please note those two words, PROVEN and FAILED, why, you only want to revise reality itself!!!
Faced with factual proof that your models have failed, you simply refuse to accept it. The embarrassment must be so acute.
You spout pure, unvarnished bullshit.
Posted by: itzman (Responding to Soosoos). 4 hours ago
Hey, we didn't pick the data, the IPCC did. It's all very well to say WE cherry picked the data, and are therefore silly, but when YOU cherry pick the data – the one set that sort of agrees with the models – that's science. The IPCC didn't fail on OUR criteria, it failed on its OWN criteria.
WUWT et al are not doing any more than showing what the IPCC predicted, and what their own data sets that were USED TO JUSTIFY THE MODELS, actually did 17 years later. Utterly REFUTE their OWN models. Using THEIR OWN DATA.
You appear to be saying that the only data that is valid, is the data that supports the hypothesis, and that when it refutes the hypothesis, oh well here’s some other data that doesn’t. That’s called confirmation bias, or more commonly, being in denial, or simply ‘a denier’.
A phrase that will be hung round the neck of every refuted Green Believer*
*wasn’t that a Monkees song?
That shows that his veneer is wearing thin as all his BS has been systematically exposed.
@Nick
“As I said on the other thread, what is lacking here is a proper reference. Who does this? Where? “Whoever it was that assembled the graph” is actually Lord Monckton. But I don’t think even that graph has most of these sins, and certainly the AR5 graph cited with it does not.
Where in the AR5 do they make use of ‘the variance and mean of the “ensemble” of models’?”
They must use it (even if they don’t present it) if they are making this projection:
"Despite considerable advances in climate models and in understanding and quantifying climate feedbacks, the assessed literature still supports the conclusion from AR4 that climate sensitivity is likely in the range 2–4.5°C, and very likely above 1.5°C. The most likely value remains near 3°C. An ECS greater than about 6–7°C is very unlikely, based on combination of multiple lines of evidence"
They specify the range of possibilities (ie the spread) and a “most likely” which would be the mean, unless they’re just picking numbers out of thin air and deciding that 3 degrees is the winner by fiat.
Further: chapter 12-13
"The climate change projections in this report are based on ensembles of climate models. The ensemble mean is a useful quantity to characterize the average response, but does not convey any information on model robustness, uncertainty, likelihood of change, or magnitude relative to unforced climate variability."
And just to back up his assertion that all of the models do things differently, take different things into account, etc, and shouldn’t be accumulated into an ensemble mean, here’s one example from ar5.
“Treatment of the CO2 emissions associated with land cover changes is also model-dependent. Some models do not account for land cover changes at all, some simulate the biophysical effects but are still forced externally by land cover change induced CO2 emissions (in emission driven simulations), while the most advanced ESMs simulate both the biophysical effects of land cover changes and their associated CO2 emissions.”
On 12-26: (12.4.1.2)
“Uncertainties in global mean quantities arise from variations in internal natural variability, model response and forcing pathways. Table 12.2 gives two measures of uncertainty in the CMIP5 model projections, the standard deviation and range (min/max) across the model distribution.”
@David Socrates.
“There is simply no hard evidence that water feedback is net-positive rather than net-negative. If net-negative, water would be a cooling agent, would it not?”
You have to be careful how you describe things. Cooling agent of "where"?
H2O in its varied forms acts as an energy transfer medium from the Earth's surface to the lower atmosphere (cloud level). So yes, it is a cooling agent with respect to the Earth's surface.
Excellent post and I agree with every word of it. One of the best I read here.
And even if multi-model means are used less today than yesterday (but they still are), what is left is the irreducible chaos in the climate system, which can't be meaningfully simulated on coarse grids of 100s of km. Even saying the words Navier-Stokes in this context is an insult to Navier-Stokes.
Of course space averaging of dynamical variables in a chaotic system is utter nonsense too, as many have been saying for several years already.
Just for posterity, the actual Figure 11.33 caption in the leaked AR5 draft report actually states:
"Figure 11.33: Synthesis of near-term projections of global mean surface air temperature. a) Projections of global mean, annual mean surface air temperature (SAT) 1986–2050 (anomalies relative to 1986–2005) under all RCPs from CMIP5 models (grey and coloured lines, one ensemble member per model), with four observational estimates (HadCRUT3: Brohan et al., 2006; ERA-Interim: Simmons et al., 2010; GISTEMP: Hansen et al., 2010; NOAA: Smith et al., 2008) for the period 1986–2011 (black lines); b) as a) but showing the 5–95% range for RCP4.5 (light grey shades, with the multi-model median in white) and all RCPs (dark grey shades) of decadal mean CMIP5 projections using one ensemble member per model, and decadal mean observational estimates (black lines). The maximum and minimum values from CMIP5 are shown by the grey lines. An assessed likely range for the mean of the period 2016–2035 is indicated by the black solid bar. The '2°C above pre-industrial' level is indicated with a thin black line, assuming a warming of global mean SAT prior to 1986–2005 of 0.6°C. c) A synthesis of ranges for the mean SAT for 2016–2035 using SRES CMIP3, RCPs CMIP5, observationally constrained projections (Stott et al., 2012; Rowlands et al., 2012; updated to remove simulations with large future volcanic eruptions), and an overall assessment. The box and whiskers represent the likely (66%) and very likely (90%) ranges. The dots for the CMIP3 and CMIP5 estimates show the maximum and minimum values in the ensemble. The median (or maximum likelihood estimate for Rowlands et al., 2012) are indicated by a grey band."
Second Order Draft Chapter 11 IPCC WGI Fifth Assessment Report, page 11-126
http://www.stopgreensuicide.com/Ch11_near-term_WG1AR5_SOD_Ch11_All_Final.pdf
Anyone who thinks that figure is not meant to give the impression that the climate models, either by grouping together models or by averaging/medianing and/or putting on error bars, can predict the future is a fool.
And any rational person can see that the models are diverging from reality.
“An ECS greater than about 6–7°C is very unlikely, based on combination of multiple lines of evidence”
And if it's very unlikely, why are these models still being included in the graph instead of discarded? Probably because it increases the mean and widens the uncertainty.
Latitude says:
June 19, 2013 at 6:55 am
Roy Spencer says:
June 19, 2013 at 4:10 am
We don’t even know whether the 10% of the models closest to the observations are closer by chance (they contain similar butterflies) or because their underlying physical processes are better.
===================
thank you……over and out
————————————————————–
If the 10 percent of the models that are closest to the observations are all STILL wrong in the SAME direction, then this is, logically speaking, a clue that even those "best" models are systemically wrong and STILL oversensitive. In science, being CONSISTENTLY wrong in ONE DIRECTION is a clue that a basic premise is wrong. In science, being wrong is informative.
When one is ALWAYS wrong to the oversensitive side of the equation, then you do not assume that the average of all your unidirectionally wrong predictions gets you closer to the truth.
Nick, it really is that simple.
Gail Combs says:
June 19, 2013 at 5:58 pm
{ Now the TRUE face of the World Bank, and the elite
This Graph shows world bank funding for COAL fired plants in China, India and elsewhere went from $936 billion in 2009 to $4,270 billion in 2010.
pat says:
June 19, 2013 at 10:12 pm
Time to stop arguing about climate change: World Bank
LONDON, June 19 (Reuters) – The world should stop arguing about whether humans are causing climate change and start taking action to stop dangerous temperature rises, the president of the World Bank said on Wednesday…
http://www.pointcarbon.com/news/reutersnews/1.2425075?&ref=searchlist }
Nick,
I think you are well intentioned, albeit somewhat misguided in arguing semantics.
Regardless, even you should be able to determine, acknowledging the statistics confirmed above, that whether Mother Earth is warming or cooling there will never be any rational attempt by any governmental agency to institute a prescription to do a d@mn thing about it.
It’s a suffering horse being beaten to death for control and taxing authority.
Rebuttal by Briggs:
http://wmbriggs.com/blog/?p=8394
Open letter to Lord Monckton.
Sir,
As you are an IPCC reviewer, please consider requesting the following stipulations be appended to the AR5 climate models regarding the ubiquitous “predictive skill” contained in explanatory verbiage in each of the previous IPCC reports:
As proof of predictive skill in hindcasting, all graphic representations of models should contain representative hindcasts transposed over actual/reconstructed temperatures, to include all previously published model runs. I believe such a stipulation is easily addressed if the models truly demonstrate predictive skill in hindcasting.
It is axiomatic that periodic 'refreshing' of the models with actual temperature data is akin to hitting the reset button and provides a means of instilling inflated confidence in model robustness. Furthermore, the splicing of graphs from previous reports with the AR5 report may prove instrumental in revealing this so-called predictive skill in both hindcasting and forecasting of climatic conditions.
Humanity has the right to see a graphic historical representation of this predictive skill sans repeated actual temperature updates.
Your kind consideration is appreciated,
Rob Ricket
Tennekes saw this coming. Well, a lot of people did, but Tennekes is by far the earliest and most authoritative of those who saw it coming.
“In a meeting at ECMWF in 1986, I gave a speech entitled “No Forecast Is Complete Without A Forecast of Forecast Skill.” This slogan gave impetus to the now common procedure of Ensemble Forecasting, which in fact is a poor man’s version of producing a guess at the probability density function of a deterministic forecast. The ever-expanding powers of supercomputers permit such simplistic research strategies.
Since then, ensemble forecasting and multi-model forecasting have become common in climate research, too. But fundamental questions concerning the prediction horizon are being avoided like the plague. There exists no sound theoretical framework for climate predictability studies. As a turbulence specialist, I am aware that such a framework would require the development of a statistical-dynamic theory of the general circulation, a theory that deals with eddy fluxes and the like. But the very thought is anathema to the mainstream of dynamical meteorology.”
Janice Moore says:
June 18, 2013 at 8:07 pm
=====
Excellent. Reminds me of Richard Feynman’s rant about green stars and adding temperatures.
Sorry, I missed the reposting of my comment. First of all, let me apologize for the typos and so on. Second, to address Nick Stokes in particular (again) and put it on the record in this discussion as well, the AR4 Summary for Policy Makers does exactly what I discuss above. Figure 1.4 in the unpublished AR5 appears poised to do exactly the same thing once again: turn an average of ensemble results, and standard deviations of the ensemble average, into explicit predictions for policy makers regarding probable ranges of warming under various emission scenarios.
This is not a matter of discussion about whether it is Monckton who is at fault for computing an R-value or p-value from the mish-mosh of climate results and comparing the result to the actual climate — this is, actually, wrong and yes, it is wrong for the same reasons I discuss above, because there is no reason to think that the central limit theorem and by inheritance the error function or other normal-derived estimates of probability will have the slightest relevance to any of the climate models, let alone all of them together. One can at best take any given GCM run and compare it to the actual data, or take an ensemble of Monte Carlo inputs and develop many runs and look at the spread of results and compare THAT to the actual data.
In the latter case one is already stuck making a Bayesian analysis of the model results compared to the observational data (PER model, not collectively) because when one determines e.g. the permitted range of random variation of any given input one is basically inserting a Bayesian prior (the probability distribution of the variations) on TOP of the rest of the statistical analysis. Indeed, there are many Bayesian priors underlying the physics, the implementation, the approximations in the physics, the initial conditions, the values of the input parameters. Without wishing to address whether or not this sort of Bayesian analysis is the rule rather than the exception in climate science, one can derive a simple inequality that suggests that the uncertainty in each Bayesian prior on average increases the uncertainty in the predictions of the underlying model. I don’t want to say proves because the climate is nonlinear and chaotic, and chaotic systems can be surprising, but the intuitive order of things is that if the inputs are less certain and the outputs depend nontrivially on the inputs, so are the outputs less certain.
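A minimal sketch of that intuition, using a deliberately trivial stand-in model (the function and numbers below are hypothetical, chosen only to show the propagation of prior uncertainty, not to resemble a GCM):

```python
# Minimal sketch: the wider the prior on an uncertain input parameter,
# the wider the spread of the model's predictions.
import numpy as np

rng = np.random.default_rng(3)

def toy_model(sensitivity, forcing=3.7):
    # stand-in nonlinear response to a fixed forcing (purely illustrative)
    return sensitivity * np.log1p(forcing)

for prior_sd in (0.1, 0.5, 1.5):                 # increasingly uncertain prior
    sensitivity = rng.normal(1.0, prior_sd, size=10_000)
    prediction = toy_model(sensitivity)
    print(f"prior sd = {prior_sd:3.1f} -> prediction spread (sd) = {prediction.std():.2f}")
# The output uncertainty grows with the input (prior) uncertainty, as claimed.
```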
I will also note that one of the beauties of Bayes’ theorem is that one can actually start from an arbitrary (and incorrect) prior and by using incoming data correct the prior to improve the quality of the predictions of any given model with the actual data. A classic example of this is Polya’s Urn, determining the unbiased probability of drawing a red ball from an urn containing red and green balls (with replacement and shuffling of the urn between trials). Initially, we might use maximum entropy and use a prior of 50-50 — equal probability of drawing red or green balls. Or we might think to ourselves that the preparer of the urn is sneaky and likely to have filled the urn only with green balls and start with a prior estimate of zero. After one draws a single ball from the urn, however, we now have additional information — the prior plus the knowledge that we’ve drawn a (say) red ball. This instantly increases our estimate of the probability of getting red balls from a prior of 0, and actually very slightly increases the probability of getting a red ball from 0.5 as well. The more trials you make (with replacement) the better your successive approximations of the probability are regardless of where you begin with your priors. Certain priors will, of course, do a lot better than others!
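The urn bookkeeping itself takes only a few lines; in the sketch below the true fraction of red balls and the prior pseudo-counts are illustrative choices, and the updating is the standard Beta-Bernoulli rule:

```python
# Minimal sketch of Polya's urn / Beta-Bernoulli updating: two very different
# priors both converge toward the empirical fraction of red balls.
import numpy as np

rng = np.random.default_rng(4)
true_fraction_red = 0.3
draws = rng.random(200) < true_fraction_red          # True = red ball drawn

# Prior pseudo-counts (alpha = red, beta = green).
priors = {"sneaky preparer (almost no red)": (0.1, 10.0),
          "maximum entropy (50-50)": (1.0, 1.0)}

for label, (a, b) in priors.items():
    for n in (0, 1, 20, 200):
        reds = int(draws[:n].sum())
        post_mean = (a + reds) / (a + b + n)         # posterior mean of P(red)
        print(f"{label:32s} after {n:3d} draws: P(red) ~ {post_mean:.2f}")
# Whatever the prior, the estimates converge on the same answer as data accumulate.
```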
I therefore repeat to Nick the question I made on other threads. Is the near-neutral variation in global temperature for at least 1/8 of a century (since 2000, to avoid the issue of 13, 15, or 17 years of "no significant warming" given the 1997/1999 El Nino/La Nina one-two punch, since we have no real idea of what "significant" means given observed natural variability in the global climate record that is almost indistinguishable from the variability of the last 50 years) strong evidence for warming of 2.5 C by the end of the century? Is it even weak evidence for it? Or is it in fact evidence that ought, at least to some extent, to decrease our degree of belief in aggressive warming over the rest of the century, just as drawing red balls from the urn ought to cause us to alter our prior beliefs about the probable fraction of red balls in Polya's urn, completely independent of the priors used as the basis of the belief?
In the end, though, the reason I posted the original comment on Monckton's list is that everybody commits this statistical sin when working with the GCMs. They have to. The only way to convince anyone that the GCMs might be correct in their egregious predictions of catastrophic warming is by establishing that the current flat spell is somehow within their permitted/expected range of variation. So no matter how the spaghetti of GCM predictions is computed and presented (and in figure 11.33b — not 11.33a — they are presented as an opaque range, BTW), presenting their collective variance in any way whatsoever is an obvious visual sham, one intended to show that the lower edge of that variance barely contains the actual observational data.
Personally, I would consider that evidence that, collectively or singly, the models are not terribly good and should not be taken seriously because I think that reality is probably following the most likely dynamical evolution, not the least likely, and so I judge the models on the basis of reality and not the other way around. But whether or not one wishes to accept that argument, two very simple conclusions one has little choice but to accept are that using statistics correctly is better than using it incorrectly, and that the only correct way to statistically analyze and compare the predictions of the GCMs one at a time to nature is to use Bayesian analysis, because we lack an ensemble of identical worlds.
I make this point to put the writers of the Summary for Policy Makers for AR5 on notice that if they repeat the egregious error made in AR4 and make any claims whatsoever for the predictive power of the spaghetti snarl of GCM computations, if they use the terms “mean and standard deviation” of an ensemble of GCM predictions, if they attempt to transform those terms into some sort of statement of probability of various future outcomes for the climate based on the collective behavior of the GCMs, there will be hell to pay, because GCM results are not iid samples drawn from a fixed distribution, thereby fail to satisfy the elementary axioms of statistics and render both mean behavior and standard deviation of mean behavior over the “space” of perturbations of model types and input data utterly meaningless as far as having any sort of theory-supported predictive force in the real world. Literally meaningless. Without meaning.
The probability ranges published in AR4’s summary for policy makers are utterly indefensible by means of the correct application of statistics to the output from the GCMs collectively or singly. When one assigns a probability such as “67%” to some outcome, in science one had better be able to defend that assignment from the correct application of axiomatic statistics right down to the number itself. Otherwise, one is indeed making a Ouija board prediction, which as Greg pointed out on the original thread, is an example deliberately chosen because we all know how Ouija boards work! They spell out whatever the sneakiest, strongest person playing the game wants them to spell.
If any of the individuals who helped to actually write this summary would like to come forward and explain in detail how they derived the probability ranges that make it so easy for the policy makers to understand how likely to certain it is that we are en route to catastrophe, they should feel free to do so. And if they in fact did form the mean of many GCM predictions as if GCMs are some sort of random variate, form the standard deviation of the GCM predictions around the mean, and then determine the probability ranges on the basis of the central limit theorem and standard error function of the normal distribution (as it is almost certain they did, from the figure caption and following text) then they should be ashamed of themselves and indeed, should go back to school and perhaps even take a course or two in statistics before writing a summary for policy makers that presents information influencing the spending of hundreds of billions of dollars based on statistical nonsense.
And for the sake of all of us who have to pay for those sins in the form of misdirected resources, please, please do not repeat the mistake in AR5. Stop using phrases like “67% likely” or “95% certain” in reference to GCM predictions unless you can back them up within the confines of properly done statistical analysis and mere common wisdom in the field of predictive modeling — a field where I am moderately expert — where if anybody, ever claims that a predictive model of a chaotic nonlinear stochastic system with strong feedbacks is 95% certain to do anything I will indeed bitch slap them the minute they reach for my wallet as a consequence.
Predictive modeling is difficult. Using the normal distribution in predictive modeling of a complex multivariate system is (as Taleb points out at great length in The Black Swan) easy but dumb. Using it in predictive modeling of the most complex system of nominally deterministic equations — a double set of coupled Navier-Stokes equations with imperfectly known parameters on a rotating inhomogeneous ball in an erratic orbit around a variable star with an almost complete lack of predictive skill in any of the inputs (say, the probable state of the sun in fifteen years), let alone the output — is beyond dumb. Dumber than dumb. Dumb cubed. The exponential of dumb. The phase space filling exponential growth of probable error to the physically permitted boundaries dumb.
In my opinion — as admittedly at best a well-educated climate hobbyist, not as a climate professional, so weight that opinion as you will — we do not know how to construct a predictive climate model, and will never succeed in doing so as long as we focus on trying to explain "anomalies" instead of the gross nonlinear behavior of the climate on geological timescales. An example I recently gave for this is understanding the tides. Tidal "forces" can easily be understood and derived as the pseudoforces that arise in an accelerating frame of reference relative to Newton's Law of Gravitation. Given the latter, one can very simply compute the actual gravitational force on an object at an actual distance from (say) the moon, compare it to the actual mass times the acceleration of the object as it moves at rest relative to the center of mass of the Earth (accelerating relative to the moon), and compute the change in e.g. the normal force that makes up the difference and hence the change in apparent weight. The result is a pseudoforce that varies like R_e/R_lo^3 (compared to the force of gravity, which varies like 1/R_lo^2; here R_e is the radius of the Earth and R_lo the radius of the lunar orbit). This is a good enough explanation that first year college physics students can, with the binomial expansion, compute the lunar tidal force, and can even compute the nonlinear tidal force stressing e.g. a solid bar falling into a neutron star if they are first year physics majors.
It is not possible to come up with a meaningful heuristic for the tides lacking a knowledge of both Newton’s Law of Gravitation and Newton’s Second Law. One can make tide tables, sure, but one cannot tell how the tables would CHANGE if the moon was closer, and one couldn’t begin to compute e.g. Roche’s Limit or tidal forces outside of the narrow Taylor series expansion regime where e.g. R_e/R_lo << 1. And then there is the sun and solar tides making even the construction of an heuristic tide table an art form.
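The leading-order claim above is easy to verify numerically. A minimal sketch (standard textbook constants, nothing fitted) comparing the exact difference of Newtonian accelerations across the Earth with the first binomial-expansion term, 2*G*M*R_e/R_lo^3:

```python
# Minimal sketch: lunar tidal acceleration, exact difference of accelerations
# versus the leading term of the binomial expansion (scales as R_e / R_lo**3).
G      = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_moon = 7.342e22    # lunar mass, kg
R_e    = 6.371e6     # Earth radius, m
R_lo   = 3.844e8     # mean Earth-Moon distance, m

a_center = G * M_moon / R_lo**2               # acceleration of Earth's centre
a_near   = G * M_moon / (R_lo - R_e)**2       # acceleration of the near-side surface

tidal_exact  = a_near - a_center              # the pseudoforce per unit mass
tidal_taylor = 2 * G * M_moon * R_e / R_lo**3 # first term of the expansion

print(f"exact : {tidal_exact:.3e} m/s^2")
print(f"Taylor: {tidal_taylor:.3e} m/s^2")
# The two agree to a few percent because R_e/R_lo << 1; halve R_lo and the
# tidal term grows roughly eightfold, which no empirical tide table could tell you.
```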
The reason we cannot make sense of it is that the actual interaction and acceleration are nonlinear functions of multiple coordinates. Note well, simple and nonlinear, and we are still a long way from solving anything like an actual equation of motion for the sloshing of oceans or the atmosphere due to tidal pseudoforces even though the pseudoforces themselves are comparatively simple in the expansion regime. This is still way simpler than any climate problem.
Trying to explain the nonlinear climate by linearizing around some set of imagined “natural values” of input parameters and then attempting to predict an anomaly is just like trying to compute the tides without being able to compute the actual orbit due to gravitation first. It is building a Ptolemaic theory of tidal epicycles instead of observing the sky first, determining Kepler’s Laws from the data second, and discovering the laws of motion and gravitation that explain the data third, finding that they explain more observations than the original data (e.g. cometary orbits) fourth, and then deriving the correct theory of the tidal pseudoforces as a direct consequence of the working theory and observing agreement there fifth.
In this process we are still at the stage of Tycho Brahe and Johannes Kepler, patiently accumulating reliable, precise observational data and trying to organize it into crude rules. We are only decades into it — we have accurate knowledge of the Ocean (70% of the Earth’s surface) that is at most decades long, and the reliable satellite record is little longer. Before that we have a handful of decades of spotty observation — before World War II there was little appreciation of global weather at all and little means of observing it — and at most a century or so of thermometric data at all, of indifferent quality and precision and sampling only an increasingly small fraction of the Earth’s surface. Before that, everything is known at best by proxies — which isn’t to say that there is not knowledge there but the error bars jump profoundly, as the proxies don’t do very well at predicting the current temperature outside of any narrow fit range because most of the proxies are multivariate and hence easily confounded or merely blurred out by the passage of time. They are Pre-Ptolemaic data — enough to see that the planets are wandering with respect to the fixed stars, and perhaps even enough to discern epicyclic patterns, but not enough to build a proper predictive model and certainly not enough to discern the underlying true dynamics.
I assert — as a modest proposal indeed — that we do not know enough to build a good, working climate model. We will not know enough until we can build a working climate model that predicts the past — explains in some detail the last 2000 years of proxy derived data, including the Little Ice Age and Dalton Minimum, the Roman and Medieval warm periods, and all of the other significant decadal and century scale variations in the climate clearly visible in the proxies. Such a theory would constitute the moral equivalent of Newton’s Law of Gravitation — sufficient to predict gross motion and even secondary gross phenomena like the tides, although difficult to use to compute a tide table from first principles. Once we can predict and understand the gross motion of the climate, perhaps we can discern and measure the actual “warming signal”, if any, from CO_2. In the meantime, as the GCMs continue their extensive divergence from observation, they make it difficult to take their predictions seriously enough to condemn a substantial fraction of the world’s population to a life of continuing poverty on their unsupported basis.
Let me make this perfectly clear. WHO has been publishing absurdities such as the “number of people killed every year by global warming” (subject to a dizzying tower of Bayesian priors I will not attempt to deconstruct but that render the number utterly meaningless). We can easily add to this number the number of people a year who have died whose lives would have been saved if some of the half-trillion or so dollars spent to ameliorate a predicted disaster in 2100 had instead been spent to raise them up from poverty and build a truly global civilization.
Does anyone doubt that the ratio of the latter to the former — even granting the accuracy of the former — is at least a thousand to one? Think of what a billion dollars would do in the hands of Unicef, or Care. Think of the schools, the power plants, the business another billion dollars would pay for in India, in central Africa. Go ahead, think about spending 498 more billions of dollars to improve the lives of the world’s poorest people, to build up its weakest economies. Think of the difference not spending money building inefficient energy resources in Europe would have made in the European economy — more than enough to have completely prevented the fiscal crisis that almost brought down the Euro and might yet do so.
That is why presenting numbers like "67% likely" on the basis of gaussian estimates of the variance of averaged GCM numbers as if it has some defensible predictive force to those who are utterly incapable of knowing better is not just dumb, it is at best incompetently dumb. The nicest interpretation of it is incompetence. The harshest is criminal malfeasance — deliberately misleading the entire world in such a way that millions have died unnecessarily, whole economies have been driven to the wall, and worldwide suffering is vastly greater than it might have been if we had spent the last twenty years building global civilization instead of trying to tear it down!
Even if the predictions of catastrophe in 2100 are true — and so far there is little reason to think that they will be based on observation as opposed to extrapolation of models that rather appear to be failing — it is still not clear that we shouldn’t have opted for civilization building first as the lesser of the two evils.
I will conclude with my last standard "challenge" for the warmists, those who continue to firmly believe in an oncoming disaster in spite of no particular discernible warming (at anything like a "catastrophic" rate for somewhere between 13 and 17 years), in spite of an utterly insignificant rate of SLR, in spite of the growing divergence between the models and reality. If you truly wish to save civilization, and truly believe that carbon burning might bring it down, then campaign for nuclear power instead of solar or wind power. Nuclear power would replace carbon burning now, and do so in such a way that the all-important electrical supply is secure and reliable. Campaign for research at levels not seen since the development of the nuclear bomb into thorium burning fission plants, as the US has a thorium supply in North Carolina alone that would supply its total energy needs for a period longer than the Holocene, and so do India and China — collectively a huge chunk of the world's population right there (and thorium is mined with rare earth metals needed in batteries, high efficiency electrical motors, and more, reducing prices of all of these key metals in the world marketplace). Stop advocating the subsidy of alternative energy sources where those sources cannot pay for themselves. Stop opposing the burning of carbon for fuel while it is needed to sustain civilization, and recognize that if the world economy crashes, if civilization falls, it will be a disaster that easily rivals the worst of your fears from a warmer climate.
Otherwise, while “deniers” might have the blood of future innocents on their hands if your future beliefs turn out to be correct, you’ll continue to have the blood of many avoidable deaths in the present on your own.
rgb
The share buttons should precede the comment section…
I do believe prof Brown has bitch-slapped the Warmists and their anti-carbon doctrine into tomorrow. Let the whining and mis-direction from Stokes et al begin…
rgbatduke says:
June 20, 2013 at 10:04 am
Agree on nuke plants v. windmills, although both of course produce CO2 from making the cement needed in their concrete.
The human contribution to GHGs is negligible. The effect on climate of a slightly increased GH effect from the rise in CO2 since 1850 is similarly trivial. No catastrophe looms. So far more CO2 has been good for most living things on the planet, including humans.
Warmth & more luxurious plant growth means bigger animals, if not better.
PS: I should have said land animals, because the increasingly frigid conditions since the Oligocene & especially of the past 2.4 million years have led to evolution of the largest animals known, the baleen whales. (It’s possible that the biggest sauropods rivaled these marine giants, however.) Phytoplankton of course can get CO2 both from the air & sea, so can benefit from cold water retaining more carbon dioxide. The land mammals more massive than modern elephants went extinct during Oligocene & Miocene cooling from Eocene warmth.