Nassim Taleb Strikes Again

Guest Post by Willis Eschenbach

Following up on his brilliant earlier work “The Black Swan”, Taleb has written a paper called Error, Dimensionality, and Predictability (draft version). I could not even begin to do justice to this tour-de-force, so let me just quote the abstract and encourage you to read the paper.


Abstract—Common intuitions are that adding thin-tailed variables with finite variance has a linear, sublinear, or asymptotically linear effect on the total combination, from the additivity of the variance, leading to convergence of averages. However it does not take into account the most minute model error or imprecision in the measurement of probability. We show how adding random variables from any distribution makes the total error (from initial measurement of probability) diverge; it grows in a convex manner. There is a point in which adding a single variable doubles the total error. We show the effect in probability (via copulas) and payoff space (via sums of r.v.).

Higher dimensional systems – if unconstrained – become eventually totally unpredictable in the presence of the slightest error in measurement regardless of the probability distribution of the individual components.

The results presented are distribution free and hold for any continuous probability distribution with support in R.

Finally we offer a framework to gauge the tradeoff between added dimension and error (or which reduction in the error at the level of the probability is necessary for added dimension).

Dang … talk about alarmism, that’s scary stuff. Here’s one quote:

In fact errors are so convex that the contribution of a single additional variable could increase the total error more than the previous one. The nth variable brings more errors than the combined previous n-1 variables!

The point has some importance for “prediction” in complex domains, such as ecology or in any higher dimensional problem (economics). But it also thwarts predictability in domains deemed “classical” and not complex, under enlargement of the space of variables.
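A toy way to see the convexity (my own illustration, not a construction from the paper): if each of n independent probabilities is measured with a small relative error ε, the joint probability picks up a relative error of (1+ε)^n − 1, which grows convexly in n.

```python
# Toy illustration (not from Taleb's paper): a small relative error eps in each
# independently measured probability compounds convexly in the joint
# probability of n events.
def joint_error(n: int, eps: float) -> float:
    """Relative error in the joint probability of n independent events
    when each individual probability is off by a relative factor (1 + eps)."""
    return (1 + eps) ** n - 1

# With a 1% error per variable, the total error more than doubles
# when the number of variables doubles from 50 to 100.
assert joint_error(100, 0.01) > 2 * joint_error(50, 0.01)
```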

Read the paper. Even without an understanding of the math involved, the conclusions are disturbing, and I trust Taleb on the math … not that I have much option.

H/T to Dr. Judith Curry for highlighting the paper on her excellent blog.


As Usual: Let me request that if you disagree with someone, please quote the exact words you are referring to. That way we can all understand the exact nature of your objections.

Frank Wood
July 11, 2015 2:25 pm

Dang Willis you find good stuff!

Reply to  Frank Wood
July 11, 2015 8:58 pm

You should thank Prof. Curry. She is the one who brought it to light, and she has been saying this for years now.
Her Uncertainty Monster is alive and well.
BTW, any weather freak will tell you the best of the best of models are accurate a day ahead, OK for 3 days, poor at 5 days, and absolutely useless at a week and longer.

Reply to  Eyal Porat
July 12, 2015 10:33 pm

He gave her a hat tip.

Winnipeg Boy
Reply to  Eyal Porat
July 13, 2015 10:22 am

You are correct on the weather. By their own measure, the Climate Prediction Center spends much of its time at worse-than-random levels and some of the time at 'less than no skill' levels.

Mary Brown
Reply to  Eyal Porat
July 13, 2015 12:15 pm

Depends on time of year and variable being forecast

Steve Oak
Reply to  Eyal Porat
July 17, 2015 6:23 pm

That weather models exhibit accuracy as described above seems to be well known by those who generate them. If you examine any 'extended' forecast, the predicted conditions become less and less distinguishable from average beyond 3 days.

Barclay E MacDonald
July 11, 2015 2:32 pm

This would seem to apply to the predictability and usefulness of climate models!

Reply to  Barclay E MacDonald
July 11, 2015 3:38 pm

No, it limits the predictability of numerical weather forecasting. That’s been understood from the beginning, and is very familiar to anyone who tries to solve initial value problems.
GCM’s do not claim to solve initial value problems. They look at the statistics of forced processes. Often characterised as solving a boundary value problem, rather than initial value.

Reply to  Nick Stokes
July 12, 2015 2:26 am

” GCM’s do not claim to solve initial value problems. They look at the statistics of forced processes. Often characterised as solving a boundary value problem, rather than initial value.”
That's part of their argument for being useful, but the ocean models required are not treated as boundary problems, and the GCMs are trained with real data.

Reply to  Nick Stokes
July 12, 2015 5:27 am

The track record of GCMs indicate that they are useless for predictions.
Deal with it.

Barclay E MacDonald
Reply to  Nick Stokes
July 13, 2015 12:39 pm

Thanks Nick! I would not have understood that.

Reply to  Nick Stokes
July 13, 2015 2:20 pm

Nick writes “GCM’s do not claim to solve initial value problems. They look at the statistics of forced processes.”
And they are artificially constrained under certain conditions. I've seen plenty of references to their stability problems. No, Nick, GCMs are not immune to this just because they solve a different problem from weather models, and there is no evidence their accumulated errors don't make them useless.

Reply to  Nick Stokes
July 14, 2015 9:23 pm

Can you explain what is the actual difference between a boundary-value problem and an initial-value problem? As far as the math behind solving partial differential equations, there is none. An initial value is simply a boundary value in the time dimension. The math doesn’t care whether the dimension is time or some other, though the form of the particular equations used to describe the system mean that varying the time often has different effects from varying the space. Is this what you meant?
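For concreteness, the one-dimensional heat equation carries both kinds of data at once, and the distinction is only which coordinate the data sits on:

```latex
% Heat equation on 0 < x < L: the same PDE takes an initial condition in t
% and boundary conditions in x. Mathematically, both are data on the
% boundary of the space-time domain.
\begin{aligned}
  \partial_t u &= \kappa\,\partial_{xx} u, \\
  u(x,0) &= f(x)       && \text{(initial value: data on the boundary } t=0\text{)},\\
  u(0,t) &= u(L,t) = 0 && \text{(boundary values in the space dimension)}.
\end{aligned}
```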

M Seward
Reply to  Barclay E MacDonald
July 11, 2015 4:21 pm

No Barclay…. to the unpredictability and utter uselessness of climate ‘models’

Barclay E MacDonald
Reply to  M Seward
July 11, 2015 6:24 pm

I stand corrected:)

Reply to  Barclay E MacDonald
July 12, 2015 5:31 am

You got it all wrong Barclay…This paper concerns an ant farm and the likelihood of ants tunneling up, down, left, right or sideways. How brainless are you to not recognize such a simple model?

Reply to  Barclay E MacDonald
July 12, 2015 10:28 am

They seem to adjust these models to keep them in line:
“When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years (Hazeleger et al., 2013a). Biases can be largely removed using empirical techniques a posteriori (Garcia-Serrano and Doblas-Reyes, 2012; Kharin et al., 2012). The bias correction or adjustment linearly corrects for model drift (e.g., Stockdale, 1997; Garcia-Serrano et al., 2012; Gangstø et al., 2013). The approach assumes that the model bias is stable over the prediction period (from 1960 onward in the CMIP5 experiment). This might not be the case if, for instance, the predicted temperature trend differs from the observed trend (Fyfe et al., 2011; Kharin et al., 2012). Figure 11.2 is an illustration of the time scale of the global SST drift, while at the same time showing the systematic error of several of the forecast systems contributing to CMIP5. It is important to note that the systematic errors illustrated here are common to both decadal prediction systems and climate-change projections. The bias adjustment itself is another important source of uncertainty in climate predictions (e.g., Ho et al., 2012b). There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques. There are also difficulties in estimating the drift in the presence of volcanic eruptions.”
Ref: Contribution from Working Group I to the fifth assessment report by IPCC; Page 967.
Chapter 11 Near-term Climate Change: Projections and Predictability

July 11, 2015 2:37 pm

“the conclusions are disturbing.” Why, exactly?

M Courtney
Reply to  JPS
July 11, 2015 3:21 pm

I think because all the expectations about linear systems (mainly a bell curve distribution) don’t apply to systems more complex than 1:1.
Therefore we don’t have a clue.
And the predictions that are the source of policies are, therefore, not built on rock.

Reply to  M Courtney
July 11, 2015 4:31 pm

“we don’t have a clue”? I have to disagree. Our society and technology could not have advanced to where it is without a clue.

Reply to  M Courtney
July 11, 2015 4:35 pm

Society yes. Complex models with hundreds of variables, no.

Reply to  M Courtney
July 11, 2015 4:44 pm

Mark W- Society has far more variables than your average climate model.

Reply to  M Courtney
July 11, 2015 4:57 pm

Ah, but society does not have one problem solving model it has millions if not billions of problem solving models. They are called people and concentrating that problem solving into fewer and fewer of them is where the stupidity of the left really lies …

Jonas N
Reply to  M Courtney
July 11, 2015 6:30 pm

JPS, are you saying that we ‘understand’ society (in any meaningful way)?

Reply to  M Courtney
July 13, 2015 6:23 am

In order for society to advance it was never a necessary condition for anyone to comprehend all aspects of it and predict the changes in advance.
Same with climate…it will do what it does, regardless of the GCMs being all but worthless as a predictive tool.

Reply to  JPS
July 11, 2015 4:08 pm

The predictions of global climate models with a very large number of parameters (high dimensionality) for 100 years from now likely have little value because the error is so high – so the extreme confidence in them is not warranted. And now we add rapidly exploding errors to non-linear chaotic coupled systems.

Reply to  David L. Hagen
July 11, 2015 4:29 pm

OK but that is hardly “disturbing”

Reply to  David L. Hagen
July 11, 2015 4:48 pm

It’s disturbing how much money is being spent based on predictions we know have no predictive skill.

Reply to  David L. Hagen
July 11, 2015 8:46 pm

“extreme confidence”?
The models have utterly failed.
Anyone who has any confidence in them whatsoever is, in my view, deluded.

Gloria Swansong
Reply to  David L. Hagen
July 11, 2015 8:47 pm

Extremely deluded.

Reply to  David L. Hagen
July 12, 2015 12:01 am

But this can’t be right and here’s the proof :-
As the error in the climate models got larger the level of IPCC confidence also got higher so that clearly disproves the hogwash in this predictability thingy above – there that was easy wasn’t it!!

Reply to  JPS
July 11, 2015 5:14 pm

Taleb’s paper means: even models that work only work until they don’t. Each model’s failure is certain and unpredictable. M Courtney has perhaps chosen exactly the wrong word: a ‘clue’ is all models can produce, and you can never know for certain if it is a good clue. Other than that, Courtney is right.

M Courtney
Reply to  willybamboo
July 12, 2015 1:00 am

Thanks. I accept the clarification and am grateful for the general endorsement of my understanding.

k scott denison
Reply to  JPS
July 11, 2015 6:21 pm

JPS, if you have such a clue please predict the progress of society over the next 100 years so we can track how accurate you are.

Reply to  k scott denison
July 12, 2015 6:42 am

That’s easy, just read Gibbon’s description of 4th century Rome, or Thucydides on the steady collapse of the Athenian demos. We love to think that we, either individually or collectively, are something incredibly new and special, but in fact this has all happened before. We have a few new toys, but we’re the same people doing the same old crap as they were.

Reply to  k scott denison
July 12, 2015 7:25 am

I don’t and would not claim to be able to do so. I’m just saying that to extrapolate from this paper that “we don’t have a clue” flies in the face of the development of civilization.

Reply to  k scott denison
July 12, 2015 8:11 am

“I don’t and would not claim to be able to do so. I’m just saying that to extrapolate from this paper that ‘we don’t have a clue’ flies in the face of the development of civilization.”
To the contrary. The development of civilization has followed from an empirical process of learning by observation of experiment what is or is not fit for purpose — and often without foreknowledge of what specific purpose the outcome of a given new experiment may serve. If the same results hold up repeatedly thereafter, we can rely on them as a foundation for progress. Otherwise not.
It seems mankind might be better classified taxonomically as “man the keen observer” rather than “man the wise” — so we don’t get too far ahead of ourselves and fall into the exact trap Taleb describes elsewhere of looking back after the fact and imagining discovery of “obvious” cause and effect relationships which were no more than the operation of random chance.

Reply to  k scott denison
July 12, 2015 5:41 pm

Whether or not “we don’t have a clue” depends on what the meaning of “we” is.

Reply to  k scott denison
July 13, 2015 6:33 am

Prairie dog society has advanced too. Does anyone think it is because they “have a clue” (whatever that means)? Or is it because a group of individuals each looking out for their own well being and the well being of their kin, and a certain amount of altruism garnished here and there, lead to such advances?
People learn things. They can use that knowledge to try new things. They can communicate what they have found and done. Other people can see and imitate things that others do. If the things that are learned and done and communicated offer advantages in survival or fitness or comfort, then these new ideas and processes will become entrenched, taught to new generations, and spread to other groups of people.
The advances of modern society and culture did not affect those cultures that were unaware of them.

Reply to  k scott denison
July 13, 2015 6:35 am

Sorry, commented before seeing what bh2 had written already. Same basic idea.

Mary Brown
Reply to  k scott denison
July 13, 2015 12:20 pm

“over the next 100 years so we can track how accurate you are.” You won’t track it. You’ll be dead. Which is one of the secrets of climate forecasts… keep the most outrageous stuff out in the future so it never quite gets here but it all sounds so scary…and the climate forecasters will be dead when it fails to verify.

Reply to  JPS
July 12, 2015 2:27 am

“the conclusions are disturbing.” Why, exactly?

In engineering, standard practice is that, to get better accuracy, you add another variable.
Example: Successive approximations to the DC characteristics of a diode.
1 – A diode conducts current in one direction and not the other
2 – A diode has a fixed forward drop
3 – A diode has a fixed forward drop plus resistance
4 – The forward drop is logarithmic
5 – Temperature matters
From the first electronics course on, young engineers are taught to add more variables to get a “better” answer. It’s counter-intuitive that adding an extra variable will make you worse off. Experienced engineers understand that striving for more accuracy is often a waste of time but that’s not the same as actually being worse off.
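Those successive approximations can be sketched in code; the component values here (0.7 V drop, 0.1 ohm, Is = 1e-12 A) are illustrative placeholders, not from any datasheet:

```python
import math

def diode_current(v: float, model: int) -> float:
    """Forward current (amps) under successively richer diode models.
    All component values are illustrative, not from a real datasheet."""
    if model == 1:                       # 1: ideal switch, conducts one way only
        return float("inf") if v > 0 else 0.0
    if model == 2:                       # 2: fixed 0.7 V forward drop
        return float("inf") if v > 0.7 else 0.0
    if model == 3:                       # 3: fixed drop plus series resistance
        return max(v - 0.7, 0.0) / 0.1
    # 4: Shockley equation -- the forward drop is logarithmic in the current.
    # (Approximation 5 enters through Vt = kT/q: that is where temperature matters.)
    i_s, v_t = 1e-12, 0.02585
    return i_s * (math.exp(v / v_t) - 1.0)
```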

Reply to  commieBob
July 12, 2015 7:21 am

again, how is any of this disturbing? as you point out, experienced engineers balance complexity with diminishing (or negative) returns all the time.

Reply to  commieBob
July 12, 2015 8:15 am

The part you’ve left out is that the additional variables are trained on real world data.
If, on the other hand, you have nothing but theory, it is then that the additional variables represent ever greater potential sources of error.
A good example would be the disappearance, then return of electromigration of metal. This was a problem in the very early years of semiconductor design – went away for decades – then returned with the nanometer scale processes. Why weren’t the electromigration equations used in the middle? Because not only did they not describe any real world effect, they would skew the rest of the results.

Reply to  ticketstopper
July 12, 2015 4:38 pm

” A good example would be the disappearance, then return of electromigration of metal. This was a problem in the very early years of semiconductor design – went away for decades – then returned with the nanometer scale processes. Why weren’t the electromigration equations used in the middle? Because not only did they not describe any real world effect, they would skew the rest of the results.”
There’s different metallurgy. In the early 90’s I think our process had 4% silicon; we’d tested different amounts of silicon as well as copper, but copper was harder to process. As long as the vertical coverage over steps was adequate and we didn’t exceed the current density, we didn’t have electromigration problems. We also didn’t have electromigration “equations”, we had design rules; same with the design tools I supported years later: design rules and current densities set by the process developers.
When they did fail, it was along grain boundaries that would pull apart, which led to higher current densities, which led to more voiding, ultimately leading to a failure. A finer pitch would more likely span fewer grains and would likely have to have a lower density.

Reply to  commieBob
July 12, 2015 8:55 am

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”
– John von Neumann
Adding more parameters may make the model more physical, but more likely that you’re just fooling yourself by fitting the curve.
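A generic overfitting sketch (not von Neumann's actual elephant construction): a degree-7 polynomial always matches the training data at least as well as a straight line, even when the truth is the line plus noise.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(8, dtype=float)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=8)   # truth: a line plus noise

# Nested least-squares fits: the degree-7 polynomial contains every line as a
# special case, so its training error can only be smaller -- yet the extra
# parameters are fitting the noise, not the physics.
lin = np.polyfit(x, y, 1)
wig = np.polyfit(x, y, 7)

def sse(coeffs):
    """Sum of squared residuals on the training data."""
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

assert sse(wig) <= sse(lin)   # "better" on the data it was tuned to
```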

Reply to  commieBob
July 12, 2015 12:09 pm

JPS says:
July 12, 2015 at 7:21 am
again, how is any of this disturbing?

We’re not talking about the law of diminishing returns where extra effort doesn’t produce a worthwhile result. We’re talking about a situation where extra effort produces a much worse result. If that results in an unwelcome surprise, it should leave scars on the soul of any decent engineer.

Reply to  commieBob
July 12, 2015 3:42 pm

Harold said: “Adding more parameters may make the model more physical, but more likely that you’re just fooling yourself by fitting the curve.”
Actually, all adding more parameters does is make the model potentially more complex. Complexity doesn’t make anything physical; only testing against known physical behavior does.
What Taleb is saying above is that adding more variables not only makes a model more complex, but, even for supposedly constrained variables, introduces even more ways in which the model can fail.

Reply to  ticketstopper
July 13, 2015 5:31 am

I think the real problem in the case of GCMs is that the output of one round of calculations becomes the input to the next round, so any error is compounded over and over.
You have to watch for this in many types of simulations – electronics simulations, for instance. That’s why you want that model of a diode to include all of those parameters, but those parameters come from a lab, where they are measured. And there are different types of diode models; you have to use the right one for your simulation to work. There’s also some art to operating simulators: you have to understand the models, the inputs, and how simulations differ from operating a real circuit in the lab.
I supported a dozen different types of electronic simulators for almost 15 years, and spent a lot of that time working with engineers, explaining how to use simulators to understand how a circuit will work once it’s made, and what else the simulator tells you. For instance, digital models can have their min or max delays applied, a real collection of chips can have chips with different delays in combination, and, at least at the time, there were different simulators to analyze the worst-case delay (some chips at min while others are at max simultaneously).
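The feedback-of-errors point can be shown with a textbook chaotic toy (the logistic map, standing in for any iterated calculation; it is of course not a GCM): feed each output back in as the next input and a tiny initial error swamps the result.

```python
def iterate(x0: float, steps: int) -> list[float]:
    """Repeatedly feed the output of the chaotic logistic map back in as input."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = iterate(0.2, 80)
b = iterate(0.2 + 1e-10, 80)   # initial error of one part in two billion
gap = [abs(u - v) for u, v in zip(a, b)]

# After enough round trips, the two trajectories bear no resemblance:
# the compounded error is of the same order as the signal itself.
assert max(gap[50:]) > 0.1
```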

Tim Hammond
Reply to  JPS
July 13, 2015 2:12 am

Really? You think deciding public policy that affects billions of people on the basis of “evidence” that is simply not evidence is not disturbing?

July 11, 2015 2:38 pm

“The nth variable brings more errors than the combined previous n-1 variables!” You don’t need a degree in statistics or mathematics to understand.

Olaf Koenders
Reply to  markl
July 11, 2015 4:15 pm

Exactly. When I was writing software in assembler code, adding one incorrect memory address variable to another in a loop quickly overwrote the boundaries of the allocated memory, such as an image frame in an animation – machine go boom.. 🙂

Reply to  Olaf Koenders
July 11, 2015 5:21 pm

Been there, done that. If you throw a full carrot or chopped up remains into your garbage disposal, the final results are the same

Reply to  Olaf Koenders
July 11, 2015 7:01 pm

I believe the phrase was ‘halt & catch fire’

Barclay E MacDonald
July 11, 2015 2:38 pm

So, assuming some level of teleconnection of numerous proxies, simply combining them and taking their average may increase the error and not improve the reliability of the result?

Mark from the Midwest
July 11, 2015 2:57 pm

In an odd way this makes too much sense, but it’s late on a Saturday, I’ve been hanging with my trusty Stihl 309 much of the day, and now I’m drinking beer, so I’ll need to do a good read on the full paper tomorrow..

Mark from the Midwest
Reply to  Mark from the Midwest
July 11, 2015 3:07 pm

It’s actually a fairly brief paper, but it will demand a bit of effort to work through the math

Reply to  Mark from the Midwest
July 11, 2015 3:10 pm

It’s actually not finished. That makes it kinda hard to comment on. Let’s return to this when it’s finished.

Reply to  Mark from the Midwest
July 11, 2015 5:52 pm

I’m with Harold. The paper is barely started.
It’s fairly well understood that the central limit theorem applies to the average of the sample and not the sum of the sample but people always seem surprised when the sum diverges from n*avg. I suspect that’s what the author is doing but there’s not enough meat to see where he’s going with this.

Reply to  Mark from the Midwest
July 11, 2015 6:11 pm

OFFS. I just read the last paragraph.
“M(1) =nμ….
whether increase is of the sqrt(n)…..”
That, i.e. “when the sum diverges from n*avg”, is exactly what is going on. With increasing n, the first moment is wandering away at a rate of μ.
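A toy version of that "wandering" first moment (my numbers, not the paper's): a fixed error in the measured mean washes out of the sample average but accumulates linearly in the sum.

```python
def prediction_errors(n: int, delta: float) -> tuple[float, float]:
    """(error in predicted mean, error in predicted sum) for n i.i.d. variables
    whose true mean has been mis-measured by a fixed amount delta."""
    return delta, n * delta

mean_err, sum_err = prediction_errors(1000, 0.01)
# The average's error stays at delta; the sum's error has grown 1000-fold.
assert mean_err == 0.01 and sum_err == 10.0
```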

Ray Prebble
July 11, 2015 2:59 pm

It sounds like the butterfly effect. I always thought it was weird that people could happily talk about how even a minuscule event such as a butterfly flapping its wings could change the weather on a different continent (made famous in the first Jurassic Park movie) and then in the next breath warn about global warming.

David Case
Reply to  Ray Prebble
July 12, 2015 11:25 pm

Perhaps there’s potential for a career in training butterflies?

July 11, 2015 3:14 pm

You write:
“Even without an understanding of the math involved, the conclusions are disturbing, and I trust Taleb on the math … not that I have much option.”
I have absolutely no option!
But – again – many thanks for many fascinating posts.

Ray Kuntz
July 11, 2015 3:28 pm

As someone with no background in math, it is my understanding from reading him that Taleb believes the threats posed by global warming are so severe that prudence indicates a proactive stance. Correct me if I’m wrong.

Reply to  Ray Kuntz
July 11, 2015 4:03 pm

The opposite. The large number of variables in global climate models suggest that their error may be so high as to have little value for public policy. Consequently, plan to manage the extremes in weather that we have seen historically and in the geological record – including both higher and much lower temperatures than the IPCC alarms over.

Mark from the Midwest
Reply to  David L. Hagen
July 11, 2015 4:50 pm

I concur with this interpretation, I read it as: If the model has sufficient detail to represent the real world then the model will also be highly prone, from time to time, to making absurdly-absurd predictions

Reply to  David L. Hagen
July 11, 2015 11:09 pm

You may be missing Ray’s point. And Taleb, with respect to climate models, seems to miss his own.

Flyover Bob
Reply to  David L. Hagen
July 12, 2015 9:38 am

Kim, what is Ray’s point as you see it? Likewise, what is Taleb’s point that, by your account, he is missing?

Ray Kuntz
Reply to  Ray Kuntz
July 12, 2015 7:01 am

That should have been “…from reading his other writings, Taleb believes…”

Reply to  Ray Kuntz
July 12, 2015 7:10 am

Yep, he has demonstrated cognitive dissonance with respect to this issue. He’s not the Lone Ranger in that.

Flyover Bob
Reply to  Ray Kuntz
July 12, 2015 9:40 am

Kim, what is Ray’s point as you see it? Likewise, what is Taleb’s point that, by your account, he is missing? Sorry, posted in the wrong place first.

Tony K
Reply to  Ray Kuntz
July 12, 2015 8:46 am

Hi Ray,
I think the folks in this thread are thinking that Taleb has the models in mind. If that’s what he is thinking, then fine. We can all agree that the models are not good.
However, based on his recent work about the precautionary principle, I suspect he is referring to the uncertainty in the system, not the models.
I am not convinced that this work applies to the climate system, but I think you may be right in your interpretation.

Jeff Mitchell
Reply to  Tony K
July 12, 2015 10:04 pm

If the precautionary principle has any meaning, then the alarmists need to apply it to cooling as well. Cooling is much, much more dangerous. I first learned this from practical experience, when I was collecting reptiles and asked our Division of Wildlife why we couldn’t collect certain species of snakes or lizards. They would throw out “We don’t know much about them, and if we allow collection we might inadvertently hurt those populations.” So I returned with “Well, if you don’t allow collection, supply becomes low, and people will be able to get high prices for poached specimens.” So they hurt the population one way or the other.
We turned the corner on this subject by going to the legislature rather than the agency. After we got done, limited collection was allowed, and people can now commercially breed them so there is a legal source that doesn’t come from the wild. This same approach needs to be applied to NOAA and EPA in getting Congress to do something about these rogue agencies that are out of the bounds that were set for them. If enough people are willing to vote out the enablers of this climate fraud, the tide will turn much more quickly.

Reply to  Ray Kuntz
July 12, 2015 9:12 am

You are not wrong. I have no references at hand, but Taleb subscribes to avoiding recognizable risks which may or may not materialize and which also may or may not prove to be consequential.
He includes man-caused climate change as one such avoidable risk but claims no insight about what magnitude or direction of future climate change may eventually come to pass.

July 11, 2015 3:58 pm

“1) Adding variables vs. adding observations: Let us explore how errors and predictability degrades with dimensionality. Some mental bias leads us to believe that as we add random variables to a forecasting system (or some general model), the total error would grow in a concave manner (√n as is assumed, or some similar convergence speed), just as we add observations. An additional variable would increase error, but less and less marginally.”
Excuse me – but exactly whose mental bias is he talking about?

Reply to  Science or Fiction
July 11, 2015 4:04 pm

Everyone using the global climate models and adding more parameters to them in the belief that that will improve their results.

Reply to  David L. Hagen
July 11, 2015 4:29 pm

And averaging the results???

Reply to  Science or Fiction
July 11, 2015 4:25 pm

This sounds peculiar.
If the model output happens to be insensitive to the variable you add, the new variable shouldn’t affect the result at all – no matter what.
If the model output happens to be dominated by the variable you add, and the uncertainty of the new variable dominates the uncertainty budget, then the uncertainty of the new variable should dominate the uncertainty of the model output?
I would think that you have to understand the sensitivity of the model’s output to the new variable, and also take into consideration the uncertainty of the new variable, to understand what effect this new variable will have on the uncertainty of the model output?

Reply to  Science or Fiction
July 11, 2015 5:02 pm

For uncorrelated input variables, the uncertainty of the output variable should be equal to the square root of the sum, over all input variables, of:
(sensitivity of the output variable to that input variable)²
multiplied by
(the uncertainty of that input variable)².
Ref: Guide to the expression of uncertainty in measurement (freely available), Section 5.1.2.
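In code, that propagation rule (GUM Section 5.1.2, shown here with made-up sensitivities and uncertainties) looks like:

```python
import math

def combined_uncertainty(sensitivities: list[float],
                         uncertainties: list[float]) -> float:
    """GUM 5.1.2 law of propagation for uncorrelated inputs:
    u_c = sqrt( sum_i (c_i * u_i)^2 )."""
    return math.sqrt(sum((c * u) ** 2
                         for c, u in zip(sensitivities, uncertainties)))

# Illustrative numbers only: two inputs with sensitivities 2 and 3
# and standard uncertainties 0.1 and 0.2.
u_c = combined_uncertainty([2.0, 3.0], [0.1, 0.2])   # sqrt(0.04 + 0.36)
```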

July 11, 2015 4:00 pm

Are CMIP5 models UNPREDICTABLE and INSIGNIFICANT, lacking skill?
From Taleb’s preliminary results, given their very large number of variables,

do the CMIP5 global climate models have sufficiently “higher dimensionality” as to be “totally unpredictable”? (i.e., lacking predictive “skill”)

John Christy shows that the mean of 102 CMIP5 climate model predictions from 1979 is ~500% hotter than the actual mid-tropospheric tropical temperature since then.
Yet mathematician Valen Johnson calls for five times more stringent statistics for results to be significant or highly significant:

To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25–50:1, and to 100–200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.

Consequently, do CMIP5 global climate models lack the skill needed for public policy? i.e., are they now just “INsignificant” or actually “highly INsignificant”?

July 11, 2015 4:21 pm

What is a “thin tailed variable”? I assume he means a variable that has a distribution with thin tails, which narrows it down to just about all non-truncated data.
Perhaps he’s trying to say that no matter how much data you collect, you won’t be able to differentiate whether the data came, for example, from a Normal or a Burr distribution, and both give markedly different results because of differences in their tails? Random system changes make such differentiation impossible.

Reply to  Tony
July 11, 2015 5:04 pm

I heard science was settled.

Reply to  Tony
July 11, 2015 5:05 pm

I assumed he meant leptokurtic.

Reply to  Tony
July 12, 2015 6:12 am

A thin tailed variable vs. a thick tailed variable concerns consumption, digestion and the resulting bolus when expelled from the system. There are numerous combinations: Long, thin tailed variables; long, short tailed variables; short, thin tailed variables and simply, round, small variables of various sizes, and various other results. GIGO.

July 11, 2015 4:56 pm

Maths can be complex, but never “hard”. Boiled down, it is always only a sequence of individual sums of two numbers, completed in a specified order.

July 11, 2015 5:04 pm

I couldn’t get past the “Relation to the curse of dimensionality”.
No, the distance is not (k/2)^(1/d). The distance is (d·(k/2)²)^(1/2), i.e. (d^(1/2))·(k/2). Unless I’m missing something, start over.
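For what it's worth, a quick check of that geometry, assuming the quantity at issue is the distance from the centre of a d-dimensional hypercube of side k to a vertex:

```python
import math

def centre_to_vertex(d: int, k: float) -> float:
    """Distance from the centre of a d-dimensional hypercube of side k to a
    vertex: each of the d coordinates contributes (k/2)^2 under Pythagoras."""
    return math.sqrt(d * (k / 2) ** 2)

# Grows like sqrt(d) times k/2, as the comment says -- not like (k/2)^(1/d).
assert abs(centre_to_vertex(4, 2.0) - 2.0) < 1e-12   # sqrt(4) * (2/2) = 2
```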

July 11, 2015 5:07 pm

“Some mental bias leads us to believe that as we add random variables to a forecasting system (or some general model), the total error would grow in a concave manner (√n as is assumed, or some similar convergence speed), just as we add observations. An additional variable would increase error, but less and less marginally.” No, it’s not mental bias if you believe this, it’s mental retardation! Two errors’ effects must be multiplied together, not divided! How could anyone think otherwise? Multiplication causes a concave effect, not a convex one. Seriously, if it has taken this paper to make you realise that modelling variables is scientifically useless, your brain has been switched off all your life.

July 11, 2015 5:19 pm

The heading of page 4 states:
What does that mean?
Does it mean that it is OK to put out on the net unfinished products which do not seem to have been reviewed by anybody other than the author?
I guess it will be harder and harder to find quality products on the net as time goes by – if all drafts are published on the net?

Reply to  Science or Fiction
July 12, 2015 4:34 am

You’re joking, right?!?

Reply to  George P Williams
July 12, 2015 7:09 am

Hate to say it – but I was just being very stupid. Thanks 🙂

Scott Scarborough
July 11, 2015 5:24 pm

Simply explains why planned economies cannot work.

Jimmy Haigh
July 11, 2015 5:31 pm

As I’ve said all along, even thinking about modeling something as complex as the climate is a total waste of time and our money.

Ian H
July 11, 2015 5:45 pm

I need an elevator explanation. Is the following on the right track?
The words that have leapt out at me so far are the words “unconstrained” and “thin tailed”. That means each variable has a tiny chance of taking extremely large values. My elevator explanation might be that when the number of variables gets big enough, these highly improbable but extremely large values start to dominate. Each additional variable makes it more likely that one of them will give one of these absurdly large values out on the thin tail that will blow the result out of the water.
If that is the correct elevator explanation, then this is really just a technical and theoretical issue with no practical consequences, because we never actually use unconstrained thin tailed variables. Daily temperature at some location, for example, might seem to be normally distributed, but really it isn’t. A normal distribution is unbounded with a thin tail, which means there would be an extremely small but non-zero probability of a temperature of ten million degrees for a daily high. Practically, however, ten million as a daily high temperature is impossible. It would vaporise the instrument, melt the planet it was sitting on, and if that isn’t enough impossibility for you, we’d also reject it as an outlier in the first stage of data analysis.
We use thin tailed unbounded distributions as approximations for real world distributions because they are mathematically simple and are good approximations in the region away from the thin tail. Normally we don’t care that the thin tail is very unrealistic, because values out there are highly improbable. But when you start to stack up lots and lots of these variables in your calculation, the chance of a value in the thin tail showing up in one of them starts to become significant.
In conclusion, this is an interesting technical issue affecting theoretical calculations, of little practical import. It also looks like it has a simple fix – just bound all the variables. This is something that tends to happen anyway when things get represented in a computer, since computers are not good at handling unbounded values.
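Ian H’s tail bookkeeping checks out arithmetically. Here is a back-of-the-envelope sketch (my own numbers, not from the paper) of how the chance of seeing at least one 4-sigma value grows with the number of thin-tailed variables in play:

```python
from math import erf, sqrt

# P(Z > 4) for a single standard normal variable: a thin-tail event.
p_single = 0.5 * (1 - erf(4 / sqrt(2)))  # about 3.2e-5

def p_at_least_one(n):
    # Chance that at least one of n independent such variables lands
    # out past 4 sigma: 1 - (1 - p)^n.
    return 1 - (1 - p_single) ** n

print(p_at_least_one(10))       # still negligible
print(p_at_least_one(100_000))  # roughly 0.96: a "rare" value is now expected
```

With enough variables, the individually negligible tail events become collectively near-certain, which is the sense in which "these highly improbable but extremely large values start to dominate."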

Reply to  Ian H
July 12, 2015 1:09 am

“The words that have leapt out at me so far are the words “unconstrained” and “thin tailed”. That means each variable has a tiny chance of taking extremely large values. My elevator explanation might be that when the number of variables gets big enough, these highly improbable but extremely large values start to dominate. Each additional variable makes it more likely that one of them will give one of these absurdly large values out on the thin tail that will blow the result out of the water.”
This is the argument Taleb is using against GMO foods. One of them is going to go haywire and infect the world. GMO opponents have been delighted to make use of his name and his approach.

July 11, 2015 5:51 pm

Climate alarmism has become the shamanism of our time.
Ridiculously irrelevant and meaningless readings of sheep entrails (aka climate models) by the shamans (aka alarmists) insist society must offer human sacrifices to appease the gods (aka Statists) to keep the manna flowing (aka research grants) and save the world from destruction…
The errors baked into sheep entrails only get worse over time because all the assumptions are wrong. This modern-day shamanism is only still taken seriously because those that doubt the shamans and the readings of sheep entrails are dubbed heretics, with some religious fanatics calling for the doubters to be thrown in the dungeon…
The sheep entrails prophecies are now off by 2 standard deviations, and within 5~7 years they’ll likely be off by 3 standard deviations. When the acolytes aren’t looking, the shamans frantically push and poke the sheep entrails around with their magic wands to get the sheep entrails to divine as prophesied, but sooner or later even the dumbest acolyte will see the shamans are cheating.
Sheep entrails have become a joke.

July 11, 2015 6:01 pm

“Higher dimensional systems – if unconstrained – become eventually totally unpredictable in the presence of the slightest error in measurement regardless of the probability distribution of the individual components.”
Ok, so the first step is to “constrain” the climate. Well, that ought to be easy peasy: first we set the Sun on “constant output mode”, then we fix the albedo, next we limit the cloud cover to a small fixed range (29.7775 – 29.7776 percent ought to do it). Oh, and of course we’ll need speed limits on all the winds……
Heck, I’m starting to think that discovering the meaning of life is a whole lot easier than modelling the climate….
Cheers, KevinK.

George Steiner
July 11, 2015 6:26 pm

My stepdaughter gave me Taleb’s book ANTIFRAGILE. I wrote a review of it for her. If Mr Watts allows a rather long comment on this fellow Taleb, I’ll post it for you guys.
Nassim Taleb
Described by the Times of London as the most important thinker today. So who is Nassim Taleb?
He is a Christian Lebanese born fifty-some years ago. His father was a well-to-do doctor, and the family had merchants and civil servants among them. He was schooled in Lebanon at a French lycée and was sent to the US by his father after the civil war.
The Lebanese education was probably good in the classical sense. He is said to be literate in French, English and Arabic and to know Latin, Greek, Hebrew and Italian. He studied further at the University of Paris (Sorbonne) and at the Wharton School for an MBA, finally getting a doctorate, also at the Sorbonne.
The Arab cultural tradition is to pursue a career in medicine, commerce or government. In the US he earned his living as a trader in the financial markets for 20 years, in various capacities and at many banks and institutions. As you know, trading in the financial markets is done in products called instruments, which range from stocks of companies to mortgage-backed securities, as well as commodities such as wheat and copper.
He is said to have made money in the crisis of 2008 by anticipating the decline of the market. We also anticipated the decline of the market, but in our case we just didn’t lose money as a result.
Having become financially independent, Taleb wrote several books, the last two being The Black Swan and Antifragile. In Antifragile and in The Black Swan Taleb talks a lot about himself; his likes and dislikes are illuminated vividly. What emerges from his background and the book is a man shaped by his origin (Arab), his education (humanist), and a desire to be recognised as a philosopher.
As an Arab he suffers from what I call the Arab sickness: taking any opinion less than complimentary as a personal insult that he does not excuse easily. He strongly dislikes economists as a profession. While reading the book I wondered why. He tells that one of his early books, Dynamic Hedging, was not liked by the economists who reviewed it – intolerable, isn’t it?
A humanist education does not give passage to engineering and science. In these areas his knowledge is less than feeble.
But is he a philosopher, a deep thinker, a man who can tell us “how to live in a world we don’t understand” as the subtitle suggests? I don’t think so. My reaction to the subtitle is: Mr. Taleb, if you don’t understand the world, why should I trust you to tell me how to live in it? Not to mention that maybe he should first try to understand it. What do you think, Mariko?
In Antifragile the big idea is this. There are things that are fragile, that can be broken or damaged easily. The opposite of fragile, you may think, is robust. But Taleb says antifragile is better than robust. Something that is antifragile actually becomes stronger. If you are looking for a title for a best seller, Antifragile is better than robust for sure.
In addition to a good title you need pages, lots of pages. Over 500 pages. So you start with a Prologue (28 pages), then you end with a Glossary, Appendix and Bibliography (93 pages). And before all this there are the table of contents and chapter summaries.
To every book there is style and there is content. This one is no different. The style is informal; the language is verbose and very loose with the meanings of words. For example, “harm” is used liberally with every conceivable shade from injury to irritation. The word “stressor” is used without ever saying what is really meant by it. The words “convex”, “concave” and “nonlinear” are sprinkled all over the place with great self-assurance but no explanation. Anecdotes about Taleb, Greek mythology and Roman history are many.
Taleb says at one point: “For I am a pure autodidact”. A few paragraphs later: “Again, I wasn’t exactly an autodidact, since I did get degrees; I was rather a barbell autodidact as I studied the exact minimum necessary to pass any exam, overshooting accidentally once in a while, and only getting in trouble a few times by undershooting. But I read voraciously, initially in the humanities, later in mathematics and science and now in history….” In an interview with the Financial Times he again makes an issue of his being an autodidact. I agree with him; he is not.
You get the desired impression that Nassim Taleb is a very well-read, cultivated, cultured man of substantial intellect. Here and there are quotations in Latin, and lots of names are dropped. Not surprisingly, journalists, who are described aptly by an acquaintance of mine as “les pires putains du monde” (the worst whores in the world), swoon.
A book written in this style is well on the way to becoming a best seller. Which it did. It is said that Taleb got a four million dollar advance for it. This looks like a lot of money, but at $20 apiece it is only 200,000 books. On the US market alone that will easily be exceeded.
Is the content any better than the style? Taleb finds fragility everywhere. From the traffic pattern of New York to drug companies to research to education to… well every aspect of modern life. He tries to squeeze in antifragility arguments all over the place.
“The real world relies on the intelligence of antifragility, but no university would swallow that—just as interventionists don’t accept that things can improve without their intervention. Let us return to the idea that universities generate the wealth and the growth of useful knowledge in society. There is a causal illusion here; time to bust it”.
“So here is something to use. The technique, a simple heuristic called the fragility and antifragility detection heuristic, works as follows. Let’s say you want to check whether a town is overoptimized. Say you measure that when traffic increases by ten thousand cars, travel time grows by ten minutes. But if traffic increases by ten thousand more cars, travel time extends by an extra thirty minutes. Such acceleration of traffic time shows that traffic is fragile and you have too many cars, and need to reduce traffic until the acceleration becomes mild (acceleration, I repeat, is acute concavity, or negative convexity effect)”.
It turns out that Taleb is a critic of modernity. His favorite place is the Arab souk; cities are too big, economists are dumb, universities are not useful, medical practice is harmful, traffic in cities is too heavy, countries are too complex, we don’t do enough to prevent the unpredictable, amateur tinkerers are the best, big research is useless; the litany goes on and on. You get the picture.
Why does such a book resonate today in particular? It resonates because there is a quasi-religious green and environmental movement that shares his critical views. According to this movement the modern world is not working. We should go back to a time when things were simpler and there were fewer people. If you ask when that time was, you will not get an answer. Or if you do, it is because they don’t know history.
The enviro-religious movement is mainly in the first world. The second world and the third world are not interested. For them hardship is real. For the first-world enviro-religious, hardship is a subject for intellectual discussion. Taleb himself is an enviro-religious fellow traveler. He as much as says so.
And does he put his money where his mouth is? No! He lives in New York – not exactly a village, is it? – surrounded by the fine things of life, and there is not even a souk.
In the end what about the book? There is a saying “there is no book that in some way would not be useful”. Should you read it? No, unless you have a mighty lot of time on your hands. Then why is it successful? Many will be amused by the anecdotes, many will hope to find out how to live in a world they don’t understand (they will be disappointed), many will want to sit at the feet of a deep thinker. It will be enough for Taleb and the publisher to make a lot of money.

Reply to  George Steiner
July 11, 2015 7:02 pm

Excellent review. I suspect anyone who wants to be regarded as a philosopher probably isn’t much of a philosopher.
By the way, you could have stopped at “he studied at Wharton” and we would have gotten the gist.
Enduring the shame of having graduated from there myself, I can assure you that “ANTIFRAGILE” is just the kind of piffle one would expect.

Gloria Swansong
Reply to  Max Photon
July 11, 2015 8:01 pm

There is piffle and anti-piffle.
IMO, Taleb is somewhere in between.
I’m not ready to join the Taleban, but he has his points worth hearing out. IMO.

Reply to  Max Photon
July 11, 2015 8:04 pm

“Frag(gi)le Rock”

Ian H
Reply to  George Steiner
July 11, 2015 7:37 pm

Thanks for that. Deserves to be promoted to a headline article.

Reply to  George Steiner
July 11, 2015 9:31 pm

Entertaining review. I don’t agree with your conclusions or assessments, but you certainly write quite well, George. Nassim Taleb’s books are not for the faint-of-heart, and could have used a bit of judicious editing. I thoroughly enjoyed his style, though, and the insights into his personality. His works have had a great deal of influence on some of my personal decisions, both at work and in my home life. Recognizing the possibility of Black Swan Incidents, and working towards creating an Anti-Fragile standard of living, is not a bad philosophy. As Thoreau said: “I say beware of all enterprises that require new clothes . . .”. [Just my humble opinion of Nassim Taleb’s books.]

Reply to  George Steiner
July 11, 2015 10:35 pm

You aren’t the George Steiner, are you? The one who wrote Tolstoy or Dostoevsky.

Reply to  willybamboo
July 13, 2015 3:47 pm

If he is, I’d be interested in knowing if he’s ever read Menand’s The Metaphysical Club and what he thought of it.
(FWIW, I come down on the Dostoevsky side.)

Reply to  George Steiner
July 12, 2015 1:30 am

“Taking any opinion less than complementary as a personal insult that he does not excuse easily.”
“Nassim Taleb’s books are not for the faint-of-heart, and could have used a bit of judicious editing.”
–Janice the Elder (downthread)
In Black Swan he has a mini-rant somewhere against editors who have dared to make or suggest corrections to his manuscripts. About three years ago he posted a 20-page article online. I noticed a lot of typos and awkward constructions. I sent hims about 30 fixes. (I once worked as a proofreader.) I didn’t get a response, but six months later he sneered in his next emission at people who did what I had done to him.
He ought to be less fragile.

Reply to  rogerknights
July 12, 2015 1:31 am

Oops: “hims”–>”him”

Reply to  rogerknights
July 12, 2015 7:24 am

Ya know, I’ve considered recommending you as Bob Tisdale’s editor, but every time I read him I’m further impressed with the elegance of his own natural style.

Reply to  rogerknights
July 12, 2015 6:03 pm

A super callous fragilistic hexed with mild psychosis?

Reply to  George Steiner
July 12, 2015 6:05 am

LOL, this attack on Taleb’s works seems personal.

Reply to  George Steiner
July 12, 2015 6:52 am

Thank you George Steiner for a valuable book review.
A few observations:
With regard to society, the most important variables are Rule of Law and Personal Liberty, which must be kept in balance.
With regard to models, complexity and prediction, why is it that some individuals have a strong predictive track record with complex systems (such as weather and even climate), while others have a negative predictive track record (being consistently wrong, like the warmists and the IPCC)?
For example, we published the following statement in 2002:
“Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”
Since 1997, there has been NO significant global warming.
It is also apparent that models that some consider “obsolete”, such as analogue weather models, appear to function better than “modern” computer weather and climate models. The failure of modern computer weather models may be due to incorrect model equations or input variables, or the problem may be that the computer models cannot match the knowledge that is inherent in the analogue models.
For example, Environment Canada and the US National Weather Service both failed to predict the extremely cold winters in the eastern 2/3 of North America for the past TWO winters, while some independent forecasters made accurate long-range forecasts.
Common sense in matters of public policy seems to be increasingly rare. For example, false fears of dangerous global warming have led to foolish investments in “green energy” that are not green and produce little useful energy.
We also published the following statement in the same 2002 article:
“The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.”
Since then, several trillion dollars have been squandered on nonsensical green energy schemes, funds that could have been allocated to solving real societal problems, not imaginary ones.
We also published the following statement in the same 2002 article:
“Kyoto wastes enormous resources that are urgently needed to solve real environmental and social problems that exist today. For example, the money spent on Kyoto in one year would provide clean drinking water and sanitation for all the people of the developing world in perpetuity.”
Since then, some slow progress has been made on clean water and sanitation systems, but that progress has been hampered by inadequate resources. About 50 million kids below the age of five have died from contaminated water since global warming mania began.
We also published the following statement in the same 2002 article:
“Kyoto will actually hurt the global environment – it will cause energy-intensive industries to move to exempted developing countries that do not control even the worst forms of pollution.”
Since then, the air quality in industrial China has become toxic due to pollution from new and relocated industries.
I also wrote in another article, also published in 2002:
“If (as I believe) solar activity is the main driver of surface temperature rather than CO2, we should begin the next cooling period by 2020 to 2030.”
I hope to be wrong about imminent global cooling – we will soon see.
Regards, Allan

Reply to  Allan MacRae
July 12, 2015 7:22 am

Our most powerful digital apparatuses are pitifully inadequate simulacra for modeling the great analog computer that is the heat engine that is the earth. You got it with ‘lack of knowledge’.

Reply to  Allan MacRae
July 12, 2015 10:51 am

“If (as I believe) solar activity is the main driver of surface temperature rather than CO2, we should begin the next cooling period by 2020 to 2030.”
I hope to be wrong about imminent global cooling – we will soon see.
-Allan MacRae

You will not be wrong about that Allan.

Reply to  Allan MacRae
July 13, 2015 5:41 am

“If (as I believe) solar activity is the main driver of surface temperature rather than CO2, we should begin the next cooling period by 2020 to 2030.
I hope to be wrong about imminent global cooling – we will soon see.
-Allan MacRae”
Bob Weber said on July 12, 2015 at 10:51 am
“You will not be wrong about that Allan.”
Thank you Bob.
Here is why I am concerned about naturally-caused global cooling, which I believe is imminent:
Kim said:
“Paleontology shows no limit to the net benefits of warming and always shows detriment from cooling.”
Agreed Kim – and not just paleontology.
Globally, cold weather kills many more people every year than hot weather, EVEN IN WARM CLIMATES.
We know this is true from many sources, from modern studies of Excess Winter Mortality to the great die-offs that occurred during the cold Maunder and Dalton Minimums.
Accordingly, it is logical that fewer Excess Winter Deaths would occur in a warmer world, and Excess Winter Deaths would increase in a colder world.
Regards, Allan
The numbers are shocking. Excess Winter Deaths now total approximately 10,000 per year in Canada, up to 50,000 per year in the United Kingdom and about 100,000 per year in the USA. I have been writing and researching about Excess Winter Mortality since ~2009 and I am confident that these alarmingly-high numbers are correct. Here is our recent article:
Cold weather kills 20 times as many people as hot weather, according to an international study analyzing over 74 million deaths in 384 locations across 13 countries.
On May 25, 2015 veteran meteorologist Joe d’Aleo and I published our paper entitled “Winters not Summers Increase Mortality and Stress the Economy”
Our objective is to draw attention to the very serious issue of Excess Winter Deaths, which especially targets the elderly and the poor.
It is hard to believe that anyone could be so foolish as to drive up the cost of energy AND also reduce the reliability of the electrical grid, which is what politicians have done by subsidizing grid-connected wind and solar power.
When uninformed politicians fool with energy systems, real people suffer and die.
Cheap, reliable, abundant energy is the lifeblood of modern society. It IS that simple.
Best wishes to all, Allan

Reply to  Allan MacRae
July 13, 2015 9:48 am

Allan, here’s surface forcing, along with the response in temperatures for global stations with 360+ samples per year. [image]
You can see forcing goes up prior to temps responding (lots of scaling, but I included the factor I used).

Steve (Paris)
Reply to  George Steiner
July 12, 2015 12:24 pm

I think WUWT is a splendid example of the modern world functioning well. Asimov, or maybe A. C. Clarke, wrote a story about a spaceship stuck in a black hole because all the computers are wiped out. It eventually escapes after one of the crew teaches everyone to use a simple abacus. That simple tool and the combined brainpower of the crew allow the calculation of the right angle to escape the black hole. I think of Anthony as the crewman with the abacus (Surface Stations project) and the blog as the brainpower that will eventually get us to the other side. Bon voyage.

Reply to  George Steiner
July 13, 2015 3:34 pm

George Steiner,
I enjoyed your post. If I had to guess, your review of Antifragile was probably too charitable. I can’t say for certain since I found Taleb’s Black Swan thesis pretty silly and I’m not inclined to read anything else of his. His thoughts about Black Swans ran from “irrelevantly true” to “not even wrong.” I mean no disrespect to anyone, but Taleb brilliant? I truly do not see it.
My quickie review of The Black Swan would go something like this. Taleb’s choice of metaphor actually disproves his thesis: there are black and white people, black and white sheep, doves and crows, etc. Only the consideration of swans in the abstract without recourse to anything else in human knowledge could make the appearance of a black swan surprising, not to mention less than trivial. Hence it is a perfect ivory tower exercise, but unrelated to life as it is actually lived.
In the opening paragraphs of the Black Swan book he mentions 9-11 as such an event. He also lists three criteria for such events – rare, extreme in impact, and retrospectively unpredictable – none of which is met by the 9-11 example. Even the attack on the WTC itself was not rare. That very site had already been attacked once, in 1993, and there is, sadly, nothing rare about terrorism in general. Extreme in its impact? The US economy didn’t skip a beat. The skies are full of planes. The Pentagon was quickly fixed. The WTC has been rebuilt. We fought two fairly desultory wars at little financial cost. Unpredictable, retrospectively or otherwise? Again, it had already actually occurred in 1993, and similar attacks were a staple of fiction and threat prognostication. 9-11 was actually prophesied by a fellow named Richard Rescorla (google him if you want the details).
He also brings up things like WWI, the subsequent rise of Hitler and the “precipitous” collapse of the Soviet Union. Let’s allow that any seemingly prophetic insights about the first two are what Taleb calls the result of “explanations drilled into your cranium by your dull high school teacher,” and take the latter. Richard Pipes, Bernard Levin and Ronald Reagan predicted it, and the latter set about doing everything he could to precipitate and accelerate it. Sure, that was against the best estimates of the professional prognosticators, but you cannot say that was not predicted, and by someone who was in a position to do something about it. Taleb ends that paragraph by saying “Literally, just about everything of significance around you might qualify [as a Black Swan].” (Literally? Oi.) Well, if everything is a black swan, then nothing is. Either that or we must be a singularly unobservant species to have missed them all, and a singularly fortunate one to have lasted this long. Or maybe, the Black Swans aren’t as significant as he thinks. Or more likely, it’s a case of Taleb having nothing but a hammer, so everything looks like a nail. (Apparently this is a problem in Antifragile as well.)
It would have served a better purpose for Taleb to explore the implications of his thesis’s weaknesses, or for my money, its outright failure.
Going back to the 9-11 example, given our actual losses to terrorism, our levels of preparation could arguably be considered reasonable. In fiction, tens of thousands, even hundreds of thousands of deaths were predicted. Tom Clancy’s The Sum of All Fears had terrorists nuking the Super Bowl to the tune of 75,000 immediate deaths, IIRC. As 9-11 was underway 10,000 deaths were thought anywhere from possible to likely. I mean no disrespect to those who did die that day, and whose loss is mourned, but compared to what was expected, it is no disservice to them to point out we did not suffer nearly as badly as we thought we would. Even when the swans don’t come up white – and they nearly always do, we usually deal with them pretty quickly. The tactic used against us on 9-11 was dealt with in 90 minutes by the heroes of Flight 93. There has been no recurrence.
Excessive focus on black swans is actually pernicious as it is likely to prevent us from seeing the more significant fact: the vast majority of swans come up white. In his opening paragraphs, Taleb says that “reading the newspaper actually decrease[s our] knowledge of the world.” Apparently, he means that journalism, focusing on minutiae, misses the big, rare events that have, on his view, huge impact. Has he never read a paper? “If it bleeds, it leads” as they say. Sensationalisms abound in the midst of the minutiae and the media lives for the big, rare event so they can make it seem even more important than it is. In fact, journalism is a prime example of doing what Taleb apparently wants: focusing on Black Swans, that is, Outliers. The inimitable GK Chesterton pointed out this inherent flaw a century ago:
“It is the one great weakness of journalism as a picture of our modern existence, that it must be a picture made up entirely of exceptions. We announce on flaring posters that a man has fallen off a scaffolding. We do not announce on flaring posters that a man has not fallen off a scaffolding. Yet this latter fact is fundamentally more exciting, as indicating that … a man is still abroad upon the earth. That the man has not fallen off a scaffolding is really more sensational; and it is also some thousand times more common. But journalism cannot reasonably be expected thus to insist upon the permanent miracles. Busy editors cannot be expected to put on their posters, “Mr. Wilkinson Still Safe,” or “Mr. Jones, of Worthing, Not Dead Yet.” They cannot announce the happiness of mankind at all. They cannot describe all the forks that are not stolen, or all the marriages that are not judiciously dissolved. Hence the complex picture they give of life is of necessity fallacious; they can only represent what is unusual. However democratic they may be, they are only concerned with the minority.” – The Ball and the Cross, part IV: “A Discussion at Dawn”, 2nd paragraph.
The white swans are the amazing thing. Black swan thinking distorts our outlook by focusing on exceptions that can never be assessed, to such a degree that we miss the benefits that accrue from seeing them for what they are – outliers – and treating them accordingly. Taleb seems little more than another Cassandra wannabe, both a victim and a practitioner of sensationalism.
Put me down for Piffle. In fact, I don’t think Piffle Lite would be all that unfair.

Vangel Vesovski
Reply to  Langenbahn
July 14, 2015 7:52 pm

It would have served a better purpose for Taleb to explore the implications of his thesis’s weaknesses, or for my money, its outright failure.
What failure? Taleb is absolutely right about the stupidity of ‘experts’. His argument that decentralized efforts produce better outcomes than directed research seems quite sound. He is right about the preference for sensationalism and a desire for permanent miracles. And given the fact that most of the supposed ‘experts’ today cannot see the biggest bubble in history, the sovereign debt market, I doubt that his critics have a clue about what it is that they are talking about most of the time.

Reply to  Langenbahn
July 14, 2015 8:13 pm

I thought Taleb made an astute observation: some very large complex systems can endure for a good long time, all the while growing very fragile. They become more fragile as they become bigger, more complex and dominant. Taleb is observing investments and investors, specifically the big financial houses. He explains there are anti-fragile systems that grow less fragile as they expand and grow more complex. Mostly he is talking about networked distribution: the more widely something is distributed across a network of distribution points, the more reliable and shock-resistant the expanding network becomes. With important caveats, this is a valid observation, and Taleb tells it in concise and understandable prose. It isn’t research, and it isn’t math; it’s just something we all can observe, and it’s reasonable. It’s logical.
I agree with Taleb’s critics. Taleb’s attempt to construct a grand, unifying thesis falls short. I think there is a lot of merit in trying to organize an enterprise so it can take advantage of networked distribution. It really is always pretty simple: what course of action will increase our company’s value to its customers?
Like so many authors, Taleb should have written a shorter book.

Vangel Vesovski
Reply to  George Steiner
July 14, 2015 7:42 pm

Why does such a book resonate today in particular? It resonates because there is a quasi religious green and environmental movement that shares his critical views.
I think that the review misses Taleb’s point entirely. His critique is that the supposed ‘experts’ that modernity uses to drive policy decisions are arrogant fools who think that they know far more than they do. He has seen economists fail miserably and while he has some respect for Hayek and the Austrians, he thinks that most of modern economics is a wasteland.

July 11, 2015 6:46 pm

More variables … kinda like the stretchy-pants of fat-assed mathematical models.

July 11, 2015 7:15 pm

I have never understood why people think a model with many variables is better than a simple model.
In medicine, we often attempt to predict the probability of a particular outcome (e.g. disease-free survival in cancer). We begin by directly observing the outcomes of many cancer patients with long-term follow-up, and then look at various variables to see which ones are associated with outcome, e.g. age, gender, type of cancer, stage of cancer, type of treatment, and so on. You can get quite a long list.
Then, using commercial, not proprietary software, regression analysis is used to find variables which correlate with outcome and to assign a parameter to this variable. Usually, we look at variables individually at first. So, we might find that gender, stage of cancer, age, and type of treatment all correlate with outcome. But, then by doing multiple regression analysis, we might find that gender and age are highly correlated in our study, and we can use age and ignore gender in our predictive equation. Then we enter stage into our analysis. Whoa. Once we account for stage, outcome is the same for all ages, so age drops out as a predictive parameter. Now, we enter type of treatment. Gads. The treatment is so good for this disease, that all patients treated had such a good survival, that stage of disease drops out. So, the only meaningful parameter is treatment versus non-treatment.
Now, this doesn’t mean that age and stage of disease are not predictors of outcome in every situation, but in the presence of an effective treatment, they are no longer predictive of outcome at the p < 0.05 level. But that is just for this one study. If we had a larger group of patients, for example, we might have had more statistical power, and might have found that another variable now had a parameter different from zero with p < 0.05.
Notice in this example that there are no errors in the measurements of our variables. We know for certain the age, gender, stage, type of cancer, and treatment status. Imagine if we had significant doubts about the actual values for these variables in any given patient. For example, there was only an 80% chance that we actually knew the age or the gender of the patient. In some cases it was not recorded, in others it was recorded incorrectly.
This is just a very superficial look at a real-world situation, and it helps to explain why doctors want a simple, rather than a complex, way to predict outcome.
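The “variable drops out” effect described above can be sketched numerically. This is a toy simulation with entirely invented data and coefficients, not real patient statistics: outcome depends only on treatment, but stage looks predictive on its own because, in this toy world, low-stage patients are treated more often.

```python
# Synthetic sketch of a variable "dropping out" of a multiple regression.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
stage = rng.integers(1, 5, n).astype(float)      # cancer stage 1..4
p_treat = 1.0 - stage / 5.0                      # low-stage patients treated more often
treated = (rng.random(n) < p_treat).astype(float)
outcome = 10.0 * treated + rng.normal(0, 1, n)   # outcome depends ONLY on treatment

def ols(columns, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *columns])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_uni = ols([stage], outcome)             # stage alone: looks predictive
b_multi = ols([stage, treated], outcome)  # add treatment: stage drops out

print("stage coefficient, univariate:      %.3f" % b_uni[1])
print("stage coefficient, with treatment:  %.3f" % b_multi[1])
print("treatment coefficient:              %.3f" % b_multi[2])
```

Run it and the univariate stage coefficient is clearly nonzero, while in the multiple regression it collapses toward zero once the dominant predictor is included.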

July 11, 2015 7:17 pm

Many years ago I analysed the hospital claims history over several years for about one million individuals. I found that the distribution of these claims varied by age (as expected), but these distributions were highly skewed (skewness greater than 6 across most ages) and very leptokurtic (a kurtosis of 60 or more). In simple terms, these distributions had a very fat one-sided tail. At the time I had delusions that I might be able to model such distributions and so develop mathematical models that would be extremely useful for health insurers, with flow-on uses in economic modeling and a host of other applications with underlying fat-tailed distributions. However I discovered, after consulting some of the smartest mathematicians in the world, that such distributions are not capable of being expressed in the form of a mathematical model, even using copulas. Unfortunately there are many distributions in nature (and economics) with similar skewness and kurtosis properties, and many modelers seem not to understand that substituting other distributions in their models makes them too simplistic to represent these distributions accurately over long periods of time. Hence the long-term conclusions obtained from these models are simply mathematically inappropriate.
Because of this the models used by traders (for example) to price derivative contracts are overly simplistic and do not properly allow for so-called “black swans” (think of this as high kurtosis). Hence the derivative pricing failures that contributed to, for example, the Global Financial Crisis. Of course if the mathematics were available to price derivatives correctly then that market would almost certainly be a lot smaller as the true prices of many derivatives would be much higher. The same is true for the models used by climate scientists. Were they based on mathematics that really represented the risks being modeled then the scammers who are making money out of climate change wouldn’t be able to promote their scams.
So thanks Willis for publishing this. It just adds to the weight of evidence that exposes the climate change fraudsters.
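The kind of skewness and kurtosis figures mentioned above are easy to reproduce with a generic fat-tailed distribution. This is only a rough numerical sketch, not the commenter's actual claims data: a lognormal sample (arbitrary parameters) already produces enormous and unstable moment estimates.

```python
# Sketch of fat-tail moment behavior with an invented lognormal "claims" sample.
import numpy as np

def sample_moments(x):
    """Return (skewness, excess kurtosis) of a sample."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z**3).mean(), (z**4).mean() - 3.0

rng = np.random.default_rng(1)
for trial in range(3):
    # Lognormal, heavy one-sided tail; parameters are arbitrary
    claims = rng.lognormal(mean=8.0, sigma=1.5, size=100_000)
    skew, kurt = sample_moments(claims)
    print(f"trial {trial}: skewness={skew:.1f}, excess kurtosis={kurt:.1f}")
```

The estimates are huge and jump around from one sample to the next, which is exactly why naive moment-based models of such data mislead over the long run.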

July 11, 2015 7:30 pm

Just to enlarge on my prior comment.
My example above is highly simplified. Imagine if instead of a simple equation like:
y = a1*x1 + a2*x2 + a3*x3
where all the variables assumed to be simply additive, we had an equation like:
y = ((a1*x1)^(a2*x2))*a3*x3
The propagation of error in this formula could be horrendous, with one bad parameter invalidating the whole effort.
So, with mathematical models of the REAL world, simplicity is highly to be desired. It is not rocket science.
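joel's point is easy to check numerically. This sketch uses arbitrary parameter values (my own, purely for illustration) and perturbs a single coefficient by 1% in both of his equations.

```python
# Compare how a 1% error in one parameter propagates through joel's two models.
# All parameter values are arbitrary.
x1, x2, x3 = 2.0, 3.0, 1.5
a1, a2, a3 = 1.1, 0.9, 1.2

def linear(a1, a2, a3):
    # y = a1*x1 + a2*x2 + a3*x3
    return a1*x1 + a2*x2 + a3*x3

def nonlinear(a1, a2, a3):
    # y = ((a1*x1)^(a2*x2)) * a3*x3
    return ((a1*x1) ** (a2*x2)) * a3*x3

for f in (linear, nonlinear):
    y0 = f(a1, a2, a3)
    y1 = f(a1 * 1.01, a2, a3)   # 1% error in a1 only
    print(f"{f.__name__}: 1% error in a1 -> {100*abs(y1-y0)/y0:.2f}% error in y")
```

In the additive model the output error stays well under 1%; in the nested-exponent model the same 1% input error is already amplified severalfold, and it gets far worse as the exponent grows.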

Peter Sable
Reply to  joel
July 12, 2015 12:09 pm

What I didn’t understand about Taleb’s math was: was he referring to the first equation, which combines the variables x1..x3 into the model y linearly, or to the second, non-linear equation?

July 11, 2015 7:30 pm

Always consider the obvious first, and if that does not work out, then progress. But I agree: with weather forecasting we start small and go large… if necessary.

July 11, 2015 7:33 pm

Even without an understanding of the math involved, the conclusions are disturbing

July 11, 2015 7:37 pm

Taleb is a fascinating man. I listen to him whenever I can; he has a wonderful gift for communication. Find him interviewed several times, and in depth, by Russ Roberts, professor of economics at George Mason, on his weekly podcast ‘EconTalk’. Who wants to listen to a weekly podcast on economics? Well, I suppose I do. It’s a real gem, entertaining and fascinating every week. Give it a spin.

Reply to  Grant
July 11, 2015 7:40 pm

Russ did a great interview a week ago with Matt Ridley which I enjoyed very much.

Reply to  Grant
July 11, 2015 7:46 pm

Links ??

Gloria Swansong
Reply to  Grant
July 11, 2015 7:59 pm
Vangel Vesovski
Reply to  Grant
July 14, 2015 7:57 pm

Sorry Grant. While I have a high regard for Taleb and most of his ideas, he comes across as way too arrogant in most of the interviews, including those with Russ. I was also not very impressed with his argument against GM and about the need to take the AGW argument more seriously because of the fat-tail issue. Like many smart math guys he sometimes gets lost in the detail and forgets to look around at the divergence between the propositions and the reality. The Ridley argument on the fat-tail issue was a good one. Taleb is being fooled by the data that he is given and is totally unaware that it does not reflect the actual measurements or the real distributions.

July 11, 2015 7:45 pm

BTW, Am I the only one who when reading an interview of Taleb is reminded of the Jeff Goldblum character from Jurassic Park?

Gloria Swansong
Reply to  Dinostratus
July 11, 2015 7:53 pm

In the June 3, 2007 New York Times Sunday Book Review, a survey “Read Any Good Books Lately?” featured celebrities giving their recommendations.
Michael Crichton’s response:
“Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable. The second volume by the author of Fooled by Randomness continues his theme — our blindness to the randomness of life — in an even more provocative, wide-ranging and amusing mode. A book that is both entertaining and difficult.”
OK, Crichton wrote “Jurassic Park” in 1990, but still…
Great minds think alike, apparently.
Chaos theory was already well advanced by 1990, but sadly ignored by climastrologists.

Keith Minto
Reply to  Gloria Swansong
July 11, 2015 9:04 pm

From that link you provided above,
Taleb on Hayek :
“So, Hayek was against–what? Against a top down social planner who thinks he knows things in advance, can’t foresee results. And makes–because the person first of all has arrogant claims that may harm us, but also because of mistakes–he’s not going to foresee his own mistakes; and mistakes will be large.”
Sounds very, very, familiar.

Reply to  Gloria Swansong
July 11, 2015 9:43 pm

Which is really, really odd.
In 1987 James Gleick wrote “Chaos: Making a New Science” and his brother is none other than Peter Gleick, the eco policy scientist. I mean, don’t they even talk over the holidays?

Reply to  Gloria Swansong
July 12, 2015 6:47 am

“Chaos theory was already well advanced by 1990, but sadly ignored by climastrologists.”
True, and I always find that fascinating because one of the first great books on chaos was “Chaos: Making a New Science” by James Gleick, brother of Peter Gleick, climate alarmist extraordinaire. I read “Chaos” back in 1987 and it left me with a lasting skepticism about the ability of modelers to properly fence or account for complexity. I get the same thing from Taleb who seems to write extensively about what could be characterized as the unpredictable boundary between order and chaos. Taleb is an alarmist, obsessed with the sudden onset of turbulence after a prolonged period of laminar flow. He sees the potential for catastrophic discontinuities everywhere, a perspective reasonably colored by his experience with the sudden breakdown of civilized society in Lebanon. I enjoy his books, seeing them as an assault on sanguine central planners and the complacent certainty of modelers. But Taleb appears to buy in to the climate alarmism with a hearty “It may suddenly become so much worse than it now appears!” rather than applying his otherwise excellent skepticism to the elaborate models of the climate community. Clearly our world is at least somewhat Anti-Fragile or we would not still have life after all these billions of years. I sometimes wonder whether James is the same way when sitting down to lunch with Peter? “Yes Peter, we can’t predict the weather seven days in the future, but clearly these complex models predicting the global thermageddon are onto something.” Or does he quietly eat his bisque and roll his eyes at the certainty?

Vangel Vesovski
Reply to  Gloria Swansong
July 14, 2015 7:58 pm

“Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable. The second volume by the author of Fooled by Randomness continues his theme — our blindness to the randomness of life — in an even more provocative, wide-ranging and amusing mode. A book that is both entertaining and difficult.”
For what it is worth, starting with Fooled by Randomness would be a good idea. It was brilliant and a great effort that shows Taleb’s genius.

Ian H
July 11, 2015 8:33 pm

Chaos theory, Catastrophe theory and Fuzzy sets are the three bad boys of mathematics. The mathematics is real enough. But by virtue of an excessively sexy name, these get over-promoted, over-hyped, misunderstood and applied inappropriately all over the place in areas outside mathematics; Jeff Goldblum pontificating about “nature finding a way” is just one example.

Reply to  Ian H
July 11, 2015 9:26 pm

And don’t forget the mathematics of marriage: The Principle of Least Action.

July 11, 2015 8:43 pm

I noticed that no one mentioned computer ALU round-off errors due to using a finite number of bits. Nor was the step size of the calculations mentioned.
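The round-off concern is easy to demonstrate in a few lines. This is a generic illustration, not tied to any particular climate model: repeatedly adding a value that has no exact binary representation accumulates a small but measurable drift, which compensated summation (`math.fsum`) avoids.

```python
# Accumulated round-off from naive summation vs compensated summation.
import math

step = 0.1                      # not exactly representable in binary
n = 1_000_000
naive = 0.0
for _ in range(n):
    naive += step               # each addition rounds; errors accumulate
exact = n * step                # 100000.0, computed in one operation
compensated = math.fsum([step] * n)   # correctly rounded sum

print("naive :", naive)
print("fsum  :", compensated)
print("drift :", abs(naive - exact))
```

The drift here is tiny, but in an iterative model where each step feeds the next, such errors are compounded rather than averaged away.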

July 11, 2015 8:47 pm

Thanks to the help of Chinese Checkers (a much more appropriate term than “hackers”)…a document internal to the “economic establishment” has been found, addressing this very publication. We reproduce it here in full:
Your Honor, allow me to present EXHIBIT A of the general rabble’s case against Mr. Taleb.
It will become very clear that Mr. (or is it Dr.?) Taleb has not only had the “chutzpah” to make himself some number of millions of dollars, due to his “irrational” approach to the markets and his belief in using
…shall we say, “intuitive” (some say GUT-feeling) methods with regard to active floor trading, but he
has now…once again, produced a rigorous mathematical effort to “show off” his considerable analytical skills.
It should be noted, however, that as before, Mr. Taleb’s analytical skills are being applied for one primary reason, i.e. to expose our long-standing fraud of the existence of any and all methods to use historical data to predict future outcomes. (This could be applied to other fields outside of economics also; we may be able to recruit allies in our effort to quash Dr. Taleb.)
As you may well be aware, our livelihood is highly contingent on the number of FOOLS (excuse me, clients) in whom we can establish and maintain a belief…by high-sounding words and our own charts and graphs (trending upward, downward or sideways)…by which we can continue a delusion of our “economic invincibility”. One may recall how successful we were in preaching the “peak oil” mantra, with graphs of declining well curves, historical analyses of oil-field limits, growth of demand…and how we had managed to maneuver the oil markets above $100 a barrel. (Adding immensely to the personal wealth of many of our clients.)
Unfortunately, to borrow a phrase from Mr. Taleb, a “black swan” occurred…(viz. N.D. oil, lateral drilling, modern fracking methods) which made many of the tenets of our predictions moot.
Now we have this INSULT to Dr. Taleb’s INJURY, in his saying …in a more direct analytical form, that OUR analytical methods have as much value as “stuff you spread on your garden to make it grow”.
It is the PLEA of our clients, your HONOR, that something be done to stop Mr. Taleb and put a halt to his dangerous activities, before we are brought before the court of public opinion for a complete session of ridicule and disbelief!
Yours, the ECONOMIC establishment

July 11, 2015 9:11 pm

I think this all leads to the conclusion that all we are left with in the climate “prediction” (or projection) field is the good old “gut feeling”.
As uncertainty goes, it is much less prone to errors…

July 11, 2015 9:25 pm

Isn’t this the same thing as saying that any errors in the initial conditions will compound to the nth power, where n = the number of subsequent iterations using said initial conditions?

Reply to  hswiseman
July 11, 2015 9:34 pm

That’s exactly what popped into my mind as I glanced over the article.

Reply to  hswiseman
July 11, 2015 10:18 pm

Not exactly how compounding is calculated, but I hope you get my point.

Reply to  hswiseman
July 12, 2015 1:45 am

A more tangible example from my childhood comes to my mind:
Like attempting to saw 100 planks that are all 2,000 meter long +/- 2 millimeter.
Measure the first one with a tape measure.
And thereafter
– by eye only and no other regards to the size being reasonable –
always using the previous plank you sawed as a template for the next.
I tried that with 10 planks – my first experience with error propagation. 🙂
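The plank story above can be simulated directly. The numbers here are my own assumptions (a per-cut copying error with a 2 mm standard deviation), not the commenter's workshop measurements: because each plank is copied from the previous one, the error performs a random walk and the later planks drift far outside the per-cut tolerance.

```python
# Chained-measurement error propagation: each plank is cut using the
# previous plank as the template. Error model and numbers are assumed.
import random

random.seed(7)
target_mm = 2000.0
cut_error_mm = 2.0          # per-cut copying error (std dev, assumed)

length = target_mm           # plank 1: measured with the tape
worst = 0.0
for plank in range(2, 101):  # planks 2..100, each copied from the previous
    length += random.gauss(0.0, cut_error_mm)
    worst = max(worst, abs(length - target_mm))

print(f"final plank: {length:.1f} mm, worst drift: {worst:.1f} mm")
```

After ~100 cuts the expected drift is roughly 2·√99 ≈ 20 mm, an order of magnitude beyond the single-cut error: the same compounding the article describes, in sawdust form.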

Reply to  Science or Fiction
July 12, 2015 2:22 pm

A reminder to American readers, that in Europe the comma “,” is used as a decimal point. Those planks aimed at two meters precisely in length, not two kilometers — which would have been truly memorable.

George Devries Klein, PhD, PG, FGSA
July 11, 2015 10:30 pm

Please post a link to final publication when it is available.

July 11, 2015 10:30 pm

This is yet another nail in the coffin of 19th century thought: That given enough time we could discover the underlying equations of nature and predict everything and control anything.
In the 20th century one theorem after another demonstrated that we couldn’t do that. Gödel’s incompleteness theorem. Turing’s work on the halting problem. The development of chaos mathematics. Quantum probability theory.
Few people, even those deep in science themselves, realise just how narrow is the field to which conventional linear analysis – the sort that has led the way in one scientific breakthrough after another – is confined.
Looks like this paper is just another reminder of how little use (current) science and mathematics are in complex real world problems.
What the paper seems to be saying is that a function of a large number of variables, all of which are well controlled within normally distributed variation, will exhibit much larger variation than its input terms: that is, the probability of its output is not a linear function of the combined probability of its inputs.
But that is the case for all non-linear functions.
Consider the case where a man stands under a one-ton rock suspended by a thin wire, at which a sniper is firing rounds from a distance.
The rounds may be distributed at random, bell-curve-shaped distances from the wire. However, if one strikes the wire, the rock falls on the man’s head. The outcomes as far as the man is concerned are not a smooth set of probabilities, but a binary case: he either gets away unscathed or dies.
What I am not mathematician enough to understand is whether Taleb’s analysis somehow applies to linear functions of many variables as well.
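The rock-and-wire example above can be sketched as a Monte Carlo experiment. Every number here is invented (wire width, aim spread, shot count); the point is only the shape of the result: smoothly distributed inputs, cliff-edge binary outcome.

```python
# Monte Carlo sketch of the rock-and-wire example: Gaussian shot offsets,
# binary outcome for the man. All numbers are invented.
import random

random.seed(0)
wire_half_width = 0.002     # metres; a ~4 mm wire (assumed)
aim_spread = 0.5            # std dev of shot offset from the wire, metres (assumed)
shots = 100_000

hits = sum(abs(random.gauss(0.0, aim_spread)) < wire_half_width
           for _ in range(shots))
p_hit_per_shot = hits / shots
# Survival over n shots is (1 - p)^n: a smooth per-shot probability
# becomes a near-certain catastrophe over enough shots.
p_dead_after_1000 = 1.0 - (1.0 - p_hit_per_shot) ** 1000

print(f"per-shot hit probability ~ {p_hit_per_shot:.4f}")
print(f"P(rock falls within 1000 shots) ~ {p_dead_after_1000:.3f}")
```

Each individual shot is overwhelmingly likely to miss, yet the man's fate over many shots is close to sealed: the nonlinearity sits entirely in the payoff, not in the distribution of the inputs.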

Reply to  Leo Smith
July 12, 2015 2:05 am

My recollection is that Newton spent a considerable amount of time working on nonlinear functions.
It’s pretty much an accepted fact that linear analysis is appropriate for only a tiny fraction of phenomena in the world…

July 11, 2015 11:21 pm

The best example of what he’s talking about:
Let’s say you want to calculate the exact motion of a drop of water’s molecules in microgravity.
You have to have positions for each molecule, the temperature, the gravity of the environment, and that’s before you start calculating the interactions of the molecules. What he’s saying is that the error in the actual positions quickly grows.
The higher the number of requirements, the quicker the error grows; the more molecules you’re calculating, the quicker the error grows. It’s possible that this particular problem can never be solved, but even if you could solve it for 5 molecules, 6 might be impossible; or if you add a vibration to the 5-molecule problem, it could become unsolvable.
And my example is simple compared to global climate.

Reply to  micro6500
July 12, 2015 2:25 pm

The bell curve wrings from many, one / Describes all things; determines none.

John Robertson
July 11, 2015 11:32 pm

Read Nassim’s earlier book Fooled By Randomness – I haven’t had time to read his latest, but I think the previous book is even better than The Black Swan. You can find PDFs of Fooled easily enough…well worth it!

John Robertson
July 11, 2015 11:37 pm

Nassim also voiced opinions contrary to the ‘consensus’ back in 2009:
He certainly ruffles the GMO folks’ feathers…

July 11, 2015 11:38 pm

The climate will do what it does. When modeling, it may be wise to remember that “the map is not the territory”.

Dan Freedman
July 11, 2015 11:43 pm

Taleb originally wrote about the limitations of analysis in the financial sector.
I’ve long thought that the same limitations and associated social responses are evident in climate science.
And we see similar problems in other fields – pharmaceuticals, nutrition, etc.
There’s a book to be written about the rise and fall of the “analyst”…
Has anyone attempted to draw these clear parallels and offer a general thesis?

July 11, 2015 11:58 pm

Does anyone here know of a model/simulation that is used to find out NEW characteristics of effects?
My understanding is that a model/simulation is designed with correct functionality such as an aircraft flight simulator, and can then be used to train pilots.
Do we have any such models/simulations that have been created which are then used to ‘predict’ something we did not know?
I assume that the ‘center of gravity shift’ observed when an aircraft transitions to supersonic flight was found by physical testing, not as a result of a simulation.
Since, until we went supersonic we were unaware of the issue and could not ‘program it in’.
( I know this predates large computers but… )

Reply to  steverichards1984
July 12, 2015 2:28 pm

So models interpolate well (except for phase boundaries). Extrapolate, not so much.

Village Idiot
July 12, 2015 1:34 am

Climate modellers – Read and Learn!!

Ivor Ward
July 12, 2015 2:22 am

Relating this to my own experience leads to the axiom that “No plan survives first contact with the enemy.” The variables spiral rapidly out of control no matter what you thought you knew in the planning phase. The solution is to train your commanders to adapt and if that does not work……drop a nuke. It would seem that the climate chatteratty have already reached that stage but it failed to go off and we are somehow still here in the same climate with the same weather at the same temperature as we have been for a lifetime. Sucks when your bomb doesn’t explode.

Reply to  Ivor Ward
July 12, 2015 7:15 am

Ordered from Acme.

Reply to  kim
July 12, 2015 7:17 am

We could simply say that climate has roadrunner skills. Nothing wicked about it.

July 12, 2015 2:38 am

Hah! I have known this for years. It is summed up in our family by the saying “Anything can happen in the next half hour”.

Dodgy Geezer
July 12, 2015 4:06 am

…I assume that the ‘center of gravity shift’ observed when an aircraft transitions to supersonic flight was found by physical testing, not as a result of a simulation. Since, until we went supersonic we were unaware of the issue and could not ‘program it in’….
Since air travels over different parts of an aircraft at different speeds, we encountered the problem gradually, as aircraft increased in performance and reached high speeds in dives. It’s known as ‘Mach tuck’, and in susceptible aircraft it can occur at quite low speeds.
When the problem was first encountered it was replicated in wind tunnels, understood in theory, and so by the time planes were being designed to travel supersonically there was no problem with ‘programming it in’.
Two aircraft stand out in this research process – a modified Mk XI Spitfire flown at Farnborough in 1944, which achieved controllable speeds of M0.9 in a dive (over 600mph!) and the Miles M52 (which provided the design for the Bell X-1). The stories of both are well worth reading…

Reply to  Dodgy Geezer
July 12, 2015 7:50 am

I think the aircraft that really stands out in this process was the DeHavilland 110 prototype, which tried to go supersonic at the Farnborough air show in 1952 and disintegrated, killing many people in the crowd.

Søren Bundgaard
July 12, 2015 4:15 am

Please see Prof. John Christy of the University of Alabama in Huntsville speaking on the subject of climate change. Christy is a climate scientist and responsible for the UAH satellite datasets.

Reply to  Søren Bundgaard
July 12, 2015 6:57 am

This man is dead wrong about nuclear power.
Yes, it is ‘safe’ so long as it is operational and under control. But when it is NOT under control and something very bad happens (and it is inevitable that this can and will happen roughly every 20 years or so), the entire region around these dangerous entities becomes uninhabitable and everyone loses everything and is forced to flee forever.
Then the genetic destruction moves forward relentlessly. We have no idea for how long, but we know that 40 years later it is still very nasty where previous accidents happened, and it may be never-ending; we just do not know.
Sneering that ‘no one is dying’ is crazy when the deaths are slow, relentless, unavoidable and, above all, invisible: you can’t tell when you enter a death zone via any of your senses, except perhaps by noticing a lot of deformed living things scattered all over the place.

Reply to  emsnews
July 12, 2015 8:11 am

Your strong words seem to be out of place. So I’ll sneer at you.
As an old boomer sailor I know that nuclear power can be the ultimate power source. Of course, it is not something to go off half-cocked about, as you did above. Your fears seem to be based on what you learned in your younger life and not on experience. After all, a nuclear power plant is not a BOMB. Just as windmills are not placed in an area just to kill wildlife. And what about the mess your thinking causes when people begin to use batteries for all the automobiles on the road? Buy yourself a golf cart and take care of it for a few years, so you can gain experience of the horrible mess such things can cause in the hands of the general public.
We all know that your side has in the back of their mind that OVERPOPULATION must be stopped without considering that we are already down that road without help from firm legislation.
So I suggest to you to TONE IT DOWN!!!!

Reply to  emsnews
July 12, 2015 8:15 am

“. . . every 20 years or so, the entire region around these dangerous entities is uninhabitable and everyone loses everything and is forced to flee forever.”
Not if the power plant is anchored 20 miles offshore in a specially designed fail-safe module.

Reply to  emsnews
July 12, 2015 8:45 am

How many nuclear power plants are ‘anchored offshore’, and would this work with tsunamis?
The answer is zero, and of course violent storms and tsunamis are all too common ‘offshore’. Then there is warfare: the #1 target in a war is a nuclear power plant.

Reply to  emsnews
July 13, 2015 6:18 pm

emsnews, you say “the deaths are slow, relentless and unavoidable and above all, invisible that is, you don’t know when you enter a death zone via any of your senses except perhaps if you notice a lot of deformed living things scattered all over the place.” I assume (perhaps wrongly) that you are talking about the area around Chernobyl. I have watched the news about Chernobyl ever since the accident, now many years ago, out of professional curiosity. It would appear that, other than the destruction from the perfectly ordinary industrial accident and explosion, the release of radioactive particles has not harmed the flora or fauna of the area. The unfortunate deaths of the men who were initially sent in could easily have been prevented by cycling people in and out of the area. The animals and plants still in the area seem to be thriving, with no apparent increase in deformities or genetic aberrations. There are even people who have moved into the area, with no immediately apparent problems. There is actually much more genetic damage from chemical exposures than from increased radiation exposure.
I realize that this will not make any change to your opinions, emsnews. However, for the other people that read this, I would advise that they do their own investigations into this. Radiation and radioactivity are often not reported on in a fair and even-handed manner, and there is a lot of misinformation out in the wild.

Reply to  Søren Bundgaard
July 12, 2015 7:53 am

Søren – thanks for posting the above video of John Christy. While watching it I was thinking of all the real-time data I have been collecting. The collection process can be complicated, and you can never be sure that the “data” is even real. I guess it is time for the other side to consider the possibility that they could be wrong. That their research is really based upon homogenized data instead of past-ur-eyes data.

July 12, 2015 5:40 am

All modeled processes become unpredictable if sufficiently complex and/or if the time scale of the prediction is long. Even solar system motion is unpredictable on a sufficiently long time scale. So I’m not sure there is anything very surprising in the claim that adding variables increases unpredictability.

July 12, 2015 6:14 am

The usual assumption of novices to statistics is that variations (and errors) are independent. That is taught as nearly axiomatic in any introductory statistics class – the number that comes up on one die is unrelated to the number that comes up on the other.
When valid, this assumption allows reasonable estimates of the as-yet-unmeasured. Casinos thrive on the validity of this assumption for small, well-characterized and controlled systems.
The problem is that adding that nth variable can link the errors in the ith and jth variables in unanticipated ways so that they are no longer independent. From that point all bets are off. Only a fool would roll dice that are tied together.
Something similar can be seen in the calculations of chaos theory where the last (past) value and the current (present) value are used to calculate the next (future) value. Due to the iterative nature of the calculations a small variation or error introduced anywhere in the line is propagated indefinitely, and cannot be ‘corrected’.
Subsequent calculations can only continue to diverge, as we see daily in ordinary weather forecasts which only use a comparatively small ensemble of variables (typically air temperature, pressure, wind speed, direction, and humidity for a few score locations), yet have no predictive value at all beyond a week.
The subtleties of advanced mathematics and the connections to physical systems are often incompletely understood by the very investigators who work most closely with them. The very fact they are ‘investigating’ suggests they acknowledge to themselves there are aspects of the situation which they do not understand. This warrants the warning: “Do not try this at home.”
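The point about the nth variable linking previously independent errors can be sketched numerically. This is a generic toy example (invented correlation and sample sizes): for independent errors the variance of a sum is the sum of the variances, but once the errors are coupled, the combined variance is much larger than the additivity rule predicts.

```python
# Variance additivity holds for independent errors but fails once they
# are coupled. Correlation strength and sample size are assumed.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
e1 = rng.normal(0, 1, n)

# Independent case: a second error unrelated to the first
e2_indep = rng.normal(0, 1, n)
# Coupled case: a shared factor links the errors (correlation ~0.9)
rho = 0.9
e2_coupled = rho * e1 + np.sqrt(1 - rho**2) * rng.normal(0, 1, n)

var_indep = np.var(e1 + e2_indep)      # ~ 1 + 1 = 2
var_coupled = np.var(e1 + e2_coupled)  # ~ 2 + 2*rho = 3.8

print(f"independent: {var_indep:.2f}, coupled: {var_coupled:.2f}")
```

With just two variables the inflation is a factor of ~1.9; with many coupled variables the covariance terms (n² of them) swamp the variance terms (n of them), which is why "all bets are off" once independence fails.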

July 12, 2015 7:02 am

The real issue here is that computers have made it ridiculously easy to take any parameter at all and run it to infinity. Very few systems run to infinity in nature, but computers do this easily; it is natural for them. I remember the University of Chicago’s Univac system, which my father used to make celestial calculations and which was used for nuclear bomb data too, for example.
The scientists back then discussed the tendency of Univac runs to blow up to infinity and how to avoid this. It was seen as a problem, not as something real we should worry about in the real world. That is, the runs to infinity produced by computer programs designed to show ‘global warming’ are actually artifacts of how computers respond to data inputs that are poorly constrained against blowing up!
Do note that more and more real astronomers are fed up with these stupid predictive computer programs for the weather.

July 12, 2015 7:36 am

This brings to mind a chemical synthesis I was responsible for many years ago. We were making a component of “carbonless carbon paper”. Sometimes it worked well, the next time we got garbage. We tried to hold all the known variables constant, but the result was unpredictable. After a total failure one day, I was called to a meeting to answer for my failure. After I had been berated sufficiently, the boss asked the development chemist what happened. His answer was “I don’t know, the same thing happens to us in the lab and we have no idea why”. Some unknown in the process was controlling the reaction path and we didn’t know what it was. Likely an unknown unknown was at work. I suspect there are several such unknown unknowns at work in weather and climate.

July 12, 2015 7:41 am

This is my thing. I have written predictive modeling code for models with hundreds of dimensions, and the well-known curse of dimensionality makes building usable models an exercise in highly advanced mathematics, especially optimization theory. It appears in many specific fields of physics as well — e.g. systems with broken ergodicity, complex systems, spin glasses and the like (which is where I first learned of it).
In part it is related to the way volume scales in high dimensional Euclidean spaces (although modelling may be a mix of discrete and continuous inputs). Almost all of the volume of a high-dimensional hypersphere is in a thin differential shell at its maximum radius. I rather suspect that this is the fundamental geometric origin of Taleb’s observation regarding error. A single step in a single variable can alter the total hyperspherical volume of possibilities by more than the total volume of the system before the step.
One does need to be a bit careful in one’s conclusions, though. Many systems with high dimensionality have many irrelevant or insensitive dimensions. Others have structure that is projective — that lives in a (possibly convoluted!) hypervolume of much lower dimensionality than the full space. The trick is being able to construct a joint probability distribution function that has the same projective structure and dimensionality. So it isn’t quite fair to say that one can’t do anything with 100-dimensional modeling. I’ve built excellent predictive models on top of as many as 400-500 variables, and that was on computers twenty years ago. There are methods that would probably work for 4000 variables — for some problems. OTOH, he is quite right — for other problems even 200 variables are inaccessible.
I like to illustrate the curse of dimensionality by imagining a mere 100-dimensional space of binary variables. That is, suppose you were trying to describe a person with nothing but yes/no questions: Smoker (Y/N), Male (Y/N), etc. Then your space would, in principle, have 2^100 ≈ 1.3 × 10^30 cells, each holding a unique combination of answers. If you sample from this space, there aren’t enough humans, even counting every human who has ever lived, to have a good chance of populating more than a vanishing fraction of the cells. It seems as though forming an accurate joint probability distribution for anything on top of this space is pointless — one needs at least 30+ individuals in a cell for its probability and variance to start to have much predictive meaning.
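A quick back-of-envelope check of those numbers (a sketch of mine, not the commenter’s; the ~1.1 × 10^11 figure for every human who has ever lived is a rough assumption):

```python
# Sketch (mine, not the commenter's): how sparsely the entire human
# population would sample a 100-question binary questionnaire space.
# The ~1.1e11 "humans who have ever lived" figure is a rough assumption.
n_questions = 100
n_cells = 2 ** n_questions      # distinct yes/no answer profiles, ~1.27e30
humans_ever = 1.1e11

print(f"cells: {n_cells:.2e}")
print(f"max fraction of cells occupied: {humans_ever / n_cells:.1e}")
```

Even if every person who ever lived landed in a different cell, well under a 10^-19 fraction of the space would be occupied.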
But of course, this is not true! We can predict the probability of an individual going into the men’s room or the ladies room with remarkable accuracy given a few hundred random samples drawn from the population of individuals who use a public bathroom. Nearly all of the variables are irrelevant but gender and age (with a tail involving probable occupation that might be much more difficult to resolve). The trick is finding a way of building the model that can discover the correct projective subspace from comparatively sparse data on the larger space and that is capable of building a nonlinear function on this space that approximates the joint probability distribution for the targeted behavior or result.
There are a number of methods one can use for this, but they are not trivial! They involve some of the most difficult math and computation on the planet, and success or failure is often empirical — beyond around 20 variables, one cannot be certain of success because a sufficiently convoluted joint probability distribution or one with pathological structure simply cannot be discovered with the methods available and a “reasonable” amount of data. A model that is truly (projectively) 100 dimensional in a 1000 dimensional space is very, very difficult to discover or build.
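The bathroom example can be made concrete with a toy simulation (my own construction, not the commenter’s; feature 0 plays the role of “gender” and, for simplicity, determines the outcome exactly). The full joint table over 2^100 cells is empty almost everywhere, yet the projection onto the one relevant variable is recovered from a few hundred samples:

```python
import random

random.seed(0)

N_FEATURES = 100      # binary "questionnaire" dimensions
N_SAMPLES = 500       # vastly fewer than the 2**100 cells in the full space

# Synthetic data: only feature 0 ("gender") drives the outcome (assumed rule).
data = []
for _ in range(N_SAMPLES):
    x = tuple(random.randint(0, 1) for _ in range(N_FEATURES))
    y = x[0]          # deterministic here, purely for illustration
    data.append((x, y))

# The full joint table is hopeless: essentially every sampled cell is unique.
distinct_cells = len({x for x, _ in data})
print(distinct_cells, "occupied cells out of 2**100")

# But projecting onto the single relevant variable recovers the rule.
for g in (0, 1):
    ys = [y for x, y in data if x[0] == g]
    print(f"P(y=1 | feature0={g}) = {sum(ys) / len(ys):.2f}")
```

The hard part in practice, as the comment says, is that real data does not tell you in advance which projection is the right one.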

July 12, 2015 8:33 am

I read this first in a Facebook group, not the WUWT group, and the poster mentioned:

I’m a reformed organic chemist and physician.
Can someone mathematical here tell me if this new draft paper from Nassim Nicholas Taleb suggests we should be more skeptical about climate models?

I replied:
The Abstract starts with:

Common intuitions are that adding thin-tailed variables with finite variance has a linear, sublinear, or asymptotically linear effect on the total combination, from the additivity of the variance, leading to convergence of averages. However it does not take into account the most minute model error or imprecision in the measurement of probability. We show how adding random variables from
any distribution makes the total error (from initial measurement of probability) diverge; ….

I guess it can be applied. There’s some discussion that weather extremes are more “thick-tailed” than a normal (Gaussian) distribution, and that’s presented as an explanation of why extreme weather is more frequent than expected (e.g. the joke that one-in-a-hundred-year events seem to happen every 20 years).
Warmists like to point to major floods in the last 15 years as evidence that the weather is becoming more extreme, but they look at the recent event distribution instead of looking at the whole weather record. For example, in New England we had several major events between 1927 and 1938. While 1938 involved a hurricane, the first of a couple of decades of hurricanes reaching New England, the ground was already saturated from previous heavy rains.
See my…/weather-before-and-after…/ for more.
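For what it’s worth, the “hundred-year event” joke is easy to quantify (my numbers, purely illustrative; the 50-site figure is an assumption): watch one site for twenty years, or many sites at once, and “rare” events stop looking rare.

```python
# Illustrative arithmetic (my numbers): how often a "1-in-100-year" event
# shows up without any change in the underlying climate.
p_yearly = 0.01                 # definition of a 100-year event at one site
years = 20

p_hit = 1 - (1 - p_yearly) ** years
print(f"P(event in {years} yr at one site): {p_hit:.1%}")       # ~18%

sites = 50                      # assumed number of independent locations
p_somewhere = 1 - (1 - p_hit) ** sites
print(f"P(event somewhere among {sites} sites): {p_somewhere:.1%}")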

Reply to  Ric Werme
July 12, 2015 8:35 am
July 12, 2015 11:25 am

Have to admit – the hockey stick graph cracked me up.

Peter Sable
July 12, 2015 12:42 pm

Given the birthday paradox, the probability of two independent variables both being outside the bog-standard 95% confidence interval is 1 − e^(−n(n−1)/(2·possibilities)); with possibilities = 20 and number of variables n = 2, that’s 4.9%. With 5 variables that’s a 40% chance, and with 10 variables it’s a 90% chance that two variables are outside the confidence interval. I haven’t calculated it for “two or more” yet, but I should.
Too bad there are two threads on this. not sure where to post this.
If you reverse this and you want 95% confidence that no two independent variables are outside their respective 95% confidence interval, for n=5 variables you need p=0.008 for each variable and for n=10 you need p=0.001 for each variable. (found via goal-seek in Excel).
This analysis should be distribution independent but IANNT (I Am Not Nicholas Taleb)
Since most measurement error bars are posted at 95% confidence (2-sigma), this applies to real-world measurements. If I combine those measurements into a model, I’ll get a quickly increasing likelihood of GIGO as I add measurements to the model. It should also apply to multiple ANOVA or any model involving multiple variables, each with its own distribution.
Feel free to smash away at my bad assumptions and math. If you really need help programming the simple equation into Excel I’ll post it on request…
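The approximation above is easy to check numerically (a sketch of mine, not the commenter’s; note that inverting the same formula in closed form gives values in the same ballpark as, though not identical to, the Excel goal-seek figures quoted above):

```python
from math import exp, log

def p_collision(n, m=20):
    """Birthday-style approximation: probability that at least two of n
    independent variables land in the same 1-in-m tail (m=20 ~ a 95% CI)."""
    return 1 - exp(-n * (n - 1) / (2 * m))

for n in (2, 5, 10):
    print(f"n={n:2d}: {p_collision(n):.1%}")   # ~4.9%, ~39%, ~90%

def p_required(n, target=0.05):
    """Invert the approximation: per-variable tail probability needed so the
    chance of any coincidence stays below `target`."""
    return -2 * log(1 - target) / (n * (n - 1))

print(f"n=5 needs p = {p_required(5):.4f}, n=10 needs p = {p_required(10):.4f}")
```

Like any birthday-paradox formula, this is an approximation that assumes independence; it says nothing about the shape of the individual distributions, only about their 5% tails.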

July 12, 2015 6:40 pm

Don’t waste your time on this paper. It’s written without acceptable definitions and is filled with non sequiturs. Because he fails to establish a basis for his discussion and fails to proceed logically, his conclusions (whatever they may be) are irrelevant. Tragically, many mathematics papers are published in this format.
@Willis: I’m surprised that a bunch of mathematical techno-babble impresses you.
But you like quotations, so let’s start with the first paragraph:
“Let us explore how errors and predictability degrades with dimensionality. ” (Sophomoric prose)
“Some mental bias leads us to believe that as we add random variables to a forecasting system (or some general model), the total error would grow in a concave manner (√n as is assumed or some similar convergence speed), just as we add observations.” (Sophomoric. And assuming that the rate of growth is related to √n seems to be about right, so what is this guy talking about? Furthermore, he’s editorializing about mental bias in a math paper before he’s demonstrated anything. Very bad form!)
“An additional variable would increase error, but less and less marginally.” (He isn’t proposing anything here. It’s uninterpretable.)
[Next paragraph]
“The exact opposite happens. This is reminiscent of the old parable of the chess game, with the doubling of the grains of rice leading to more and more demands for each incremental step, an idea also used with the well-known intractability of the traveling salesman problem.” (He has given no reason to believe that the opposite happens. Pure editorial!)
[Next paragraph]
“In fact errors are so convex that the contribution of a single additional variable could increase the total error more than the previous one. The nth variable brings more errors than the combined previous n−1 variables!” (This is in the *BACKGROUND* section!?! Indeed, the nth variable could make the error transfinite, but that is left as an exercise 🙂 )
I refuse to go on in this vein. If you can’t explain yourself properly in the first few paragraphs, why is everything going to go smoothly after that?

Reply to  JDN
July 12, 2015 8:54 pm

” I refuse to go on in this vein. ”
It couldn’t be that his paper is another reason GCMs are worthless?

Reply to  JDN
July 13, 2015 9:32 am

Here was my favorite:
“There are ways to derive the PDF of the product of beta distributed independent random variables in terms of Meijer functions (Tang and Gupta, [7], Bhargaval et al, [8], most recent Dunkl, [9]). But it is not necessary at this stage as we can get the moments using the product rule and compute the relative …”
Why mention “There are ways…”? Who cares? If that’s all that is “necessary at this stage”, then why the non sequitur? Taleb is just trying to show us what a smarty-pants he is, and to those who have published such derivations he doesn’t come across as one. He’s just baffling those who can’t follow the details of his argument.

July 12, 2015 10:21 pm

“Don’t waste your time on this paper. It’s written without acceptable definitions and is filled with non sequiturs.”
“There are three kinds of lies. Lies, damned lies, and statistics.”
This guy is saying something about the lack of proven facts in, what, GCMs, statistical analyses, or so-called meta-analysis? Duh. Some of the most unintelligible language I have seen on this blog, even worse than Tisdale. Really?
QED. Otherwise, what exactly have you done for us???

July 13, 2015 9:07 am

At first glance, it appears that extrapolationist concavities are abstrusely ill-confabulated in this paper.

Pass Through
July 13, 2015 10:05 am

You guys realize NNT has gone on record saying he’s “super-Green” and that “my [Taleb’s] position on the climate is to avoid releasing pollutants into the atmosphere, regardless of current expert opinion”, right? Hell, he even took part in some climate change conference hosted by the king of Sweden.
Read on for yourselves:

Mary Brown
July 13, 2015 12:36 pm

Nassim Taleb writes clever theoretical stuff about “black swans” and predictability, but to me he is like the wise professors I knew in college who were brilliant but couldn’t forecast their way out of a paper bag. The ability to forecast is a rare trait which requires knowledge of the operable “physics”, statistical smarts, and a good dose of common sense. Academics, and especially climate scientists, generally have just one of these traits.
This is one reason good weather forecasters tend to be climate skeptics. They understand the complexity of forecasting especially with a small verification sample size.
Taleb ran a tail-hedging hedge fund. It did not do well and closed down in 2004. Now he writes about risk but doesn’t make his money from actual trading. As a real-world forecast modeller, I’m unimpressed.
