Nassim Taleb Strikes Again

Guest Post by Willis Eschenbach

Following up on his brilliant earlier work “The Black Swan”, Taleb has written a paper called Error, Dimensionality, and Predictability (draft version). I could not even begin to do justice to this tour-de-force, so let me just quote the abstract and encourage you to read the paper.

Abstract—Common intuitions are that adding thin-tailed variables with finite variance has a linear, sublinear, or asymptotically linear effect on the total combination, from the additivity of the variance, leading to convergence of averages. However it does not take into account the most minute model error or imprecision in the measurement of probability. We show how adding random variables from any distribution makes the total error (from initial measurement of probability) diverge; it grows in a convex manner. There is a point in which adding a single variable doubles the total error. We show the effect in probability (via copulas) and payoff space (via sums of r.v.).

Higher dimensional systems – if unconstrained – become eventually totally unpredictable in the presence of the slightest error in measurement regardless of the probability distribution of the individual components.

The results presented are distribution free and hold for any continuous probability distribution with support in R.

Finally we offer a framework to gauge the tradeoff between added dimension and error (or which reduction in the error at the level of the probability is necessary for added dimension).

Dang … talk about alarmism, that’s scary stuff. Here’s one quote:

In fact errors are so convex that the contribution of a single additional variable could increase the total error more than the previous one. The nth variable brings more errors than the combined previous n-1 variables!

The point has some importance for “prediction” in complex domains, such as ecology or in any higher dimensional problem (economics). But it also thwarts predictability in domains deemed “classical” and not complex, under enlargement of the space of variables.
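For a feel of that claim without following the math, here is a minimal numerical sketch (a toy construction of my own, not taken from the paper): suppose each of n independent events has true probability p, but we measure it with a small error eps. The error compounds multiplicatively in the joint probability, so the relative error grows convexly with n rather than levelling off.

```python
# Toy sketch (assumed setup, not Taleb's general result): a small error
# eps in the measured probability of each of n independent events
# compounds multiplicatively in their joint probability, so the relative
# error grows convexly with n instead of flattening out.
p, eps = 0.5, 0.01  # true per-event probability, and its measurement error

for n in (1, 2, 5, 10, 25, 50, 100):
    rel_err = (1 + eps / p) ** n - 1  # = (p + eps)**n / p**n - 1
    print(f"n = {n:3d}   relative error in joint probability = {rel_err:8.2%}")
```

With p = 0.5 and a mere 1% measurement error, the relative error in the joint probability is about 2% at n = 1 but over 600% at n = 100.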

Read the paper. Even without an understanding of the math involved, the conclusions are disturbing, and I trust Taleb on the math … not that I have much option.

H/T to Dr. Judith Curry for highlighting the paper on her excellent blog.

w.

As Usual: Let me request that if you disagree with someone, please quote the exact words you are referring to. That way we can all understand the exact nature of your objections.


189 Comments
Frank Wood
July 11, 2015 2:25 pm

Dang Willis you find good stuff!

Eyal Porat
Reply to  Frank Wood
July 11, 2015 8:58 pm

You should thank Prof. Curry. She is the one who brought it to light, and she has been saying this for years now.
Her Uncertainty Monster is alive and well.
BTW, any weather freak will tell you the best of the best of models are accurate for a day ahead, OK for 3 days, poor at 5 days, and absolutely useless at a week and longer.

rokshox
Reply to  Eyal Porat
July 12, 2015 10:33 pm

He gave her a hat tip.

Winnipeg Boy
Reply to  Eyal Porat
July 13, 2015 10:22 am

You are correct on the weather. See http://www.cpc.ncep.noaa.gov/products/verification/summary/index.php?page=chart
By their own measure, the Climate Prediction Center spends much of the time at worse-than-random levels and some of the time at ‘less than no skill’ levels.

Mary Brown
Reply to  Eyal Porat
July 13, 2015 12:15 pm

Depends on time of year and variable being forecast

Steve Oak
Reply to  Eyal Porat
July 17, 2015 6:23 pm

That weather models exhibit accuracy as noted above seems to be well known by those who generate them. If you examine any ‘extended’ forecast, the predicted conditions become less and less distinguishable from average beyond 3 days.

Barclay E MacDonald
July 11, 2015 2:32 pm

This would seem to apply to the predictability and usefulness of climate models!

Nick Stokes
Reply to  Barclay E MacDonald
July 11, 2015 3:38 pm

No, it limits the predictability of numerical weather forecasting. That’s been understood from the beginning, and is very familiar to anyone who tries to solve initial value problems.
GCM’s do not claim to solve initial value problems. They look at the statistics of forced processes. Often characterised as solving a boundary value problem, rather than initial value.

Reply to  Nick Stokes
July 12, 2015 2:26 am

” GCM’s do not claim to solve initial value problems. They look at the statistics of forced processes. Often characterised as solving a boundary value problem, rather than initial value.”
That’s part of their argument for being useful, but the ocean models required are not treated as boundary problems, and the GCMs are trained with real data.

hunter
Reply to  Nick Stokes
July 12, 2015 5:27 am

Nick,
The track record of GCMs indicates that they are useless for predictions.
Deal with it.

Barclay E MacDonald
Reply to  Nick Stokes
July 13, 2015 12:39 pm

Thanks Nick! I would not have understood that.

Reply to  Nick Stokes
July 13, 2015 2:20 pm

Nick writes “GCM’s do not claim to solve initial value problems. They look at the statistics of forced processes.”
And are artificially constrained under certain conditions. I’ve seen plenty of references to their stability problems… No, Nick, GCMs are not immune to this just because they solve a different problem. Yes, GCMs solve a different problem than weather forecasting, but there is no evidence their accumulated errors don’t make them useless.

Danny
Reply to  Nick Stokes
July 14, 2015 9:23 pm

Nick,
Can you explain what the actual difference is between a boundary-value problem and an initial-value problem? As far as the math behind solving partial differential equations goes, there is none. An initial value is simply a boundary value in the time dimension. The math doesn’t care whether the dimension is time or some other, though the form of the particular equations used to describe the system means that varying the time often has different effects from varying the space. Is this what you meant?

M Seward
Reply to  Barclay E MacDonald
July 11, 2015 4:21 pm

No Barclay…. to the unpredictability and utter uselessness of climate ‘models’

Barclay E MacDonald
Reply to  M Seward
July 11, 2015 6:24 pm

I stand corrected:)

Reply to  Barclay E MacDonald
July 12, 2015 5:31 am

You got it all wrong Barclay…This paper concerns an ant farm and the likelihood of ants tunneling up, down, left, right or sideways. How brainless are you to not recognize such a simple model?

Science or Fiction
Reply to  Barclay E MacDonald
July 12, 2015 10:28 am

They seem to adjust these models to keep them in line:
“When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years (Hazeleger et al., 2013a). Biases can be largely removed using empirical techniques a posteriori (Garcia-Serrano and Doblas-Reyes, 2012; Kharin et al., 2012). The bias correction or adjustment linearly corrects for model drift (e.g., Stockdale, 1997; Garcia-Serrano et al., 2012; Gangstø et al., 2013). The approach assumes that the model bias is stable over the prediction period (from 1960 onward in the CMIP5 experiment). This might not be the case if, for instance, the predicted temperature trend differs from the observed trend (Fyfe et al., 2011; Kharin et al., 2012). Figure 11.2 is an illustration of the time scale of the global SST drift, while at the same time showing the systematic error of several of the forecast systems contributing to CMIP5. It is important to note that the systematic errors illustrated here are common to both decadal prediction systems and climate-change projections. The bias adjustment itself is another important source of uncertainty in climate predictions (e.g., Ho et al., 2012b). There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques. There are also difficulties in estimating the drift in the presence of volcanic eruptions.”
Ref: Contribution from Working Group I to the fifth assessment report by IPCC; Page 967.
http://www.ipcc.ch/report/ar5/wg1/
Chapter 11 Near-term Climate Change: Projections and Predictability

JPS
July 11, 2015 2:37 pm

“the conclusions are disturbing.” Why, exactly?

Reply to  JPS
July 11, 2015 3:21 pm

I think it’s because all the expectations about linear systems (mainly a bell curve distribution) don’t apply to systems more complex than 1:1.
Therefore we don’t have a clue.
And the predictions that are the source of policies are, therefore, not built on rock.

JPS
Reply to  M Courtney
July 11, 2015 4:31 pm

“we don’t have a clue”? I have to disagree. Our society and technology could not have advanced to where it is without a clue.

MarkW
Reply to  M Courtney
July 11, 2015 4:35 pm

Society yes. Complex models with hundreds of variables, no.

JPS
Reply to  M Courtney
July 11, 2015 4:44 pm

Mark W- Society has far more variables than your average climate model.

Truthseeker
Reply to  M Courtney
July 11, 2015 4:57 pm

Ah, but society does not have one problem-solving model; it has millions if not billions of problem-solving models. They are called people, and concentrating that problem solving into fewer and fewer of them is where the stupidity of the left really lies …

Jonas N
Reply to  M Courtney
July 11, 2015 6:30 pm

JPS, are you saying that we ‘understand’ society (in any meaningful way)?

Reply to  M Courtney
July 13, 2015 6:23 am

In order for society to advance it was never a necessary condition for anyone to comprehend all aspects of it and predict the changes in advance.
Same with climate…it will do what it does, regardless of the GCMs being all but worthless as a predictive tool.

David L. Hagen
Reply to  JPS
July 11, 2015 4:08 pm

The predictions of global climate models with a very large number of parameters (high dimensionality) for 100 years from now likely have little value because the error is so high – so the extreme confidence in them is not warranted. So we now add rapidly exploding errors to non-linear chaotic coupled systems.

JPS
Reply to  David L. Hagen
July 11, 2015 4:29 pm

OK but that is hardly “disturbing”

schitzree
Reply to  David L. Hagen
July 11, 2015 4:48 pm

It’s disturbing how much money is being spent based on predictions we know have no predictive skill.

Reply to  David L. Hagen
July 11, 2015 8:46 pm

“extreme confidence”?
The models have utterly failed.
Anyone who has any confidence in them whatsoever is, in my view, deluded.

Gloria Swansong
Reply to  David L. Hagen
July 11, 2015 8:47 pm

Extremely deluded.

ColA!
Reply to  David L. Hagen
July 12, 2015 12:01 am

But this can’t be right and here’s the proof :-
As the error in the climate models got larger, the level of IPCC confidence also got higher, so that clearly disproves the hogwash in this predictability thingy above – there, that was easy, wasn’t it!!

Reply to  JPS
July 11, 2015 5:14 pm

Taleb’s paper means: even models that work only work until they don’t. Each model’s failure is certain and unpredictable. M Courtney has perhaps chosen exactly the wrong word: a ‘clue’ is all models can produce, and you can never know for certain if it is a good clue. Other than that Courtney is right.

Reply to  willybamboo
July 12, 2015 1:00 am

Thanks. I accept the clarification and am grateful for the general endorsement of my understanding.

k scott denison
Reply to  JPS
July 11, 2015 6:21 pm

JPS, if you have such a clue please predict the progress of society over the next 100 years so we can track how accurate you are.

wws
Reply to  k scott denison
July 12, 2015 6:42 am

That’s easy, just read Gibbon’s description of 4th century Rome, or Thucydides on the steady collapse of the Athenian demos. We love to think that we, either individually or collectively, are something incredibly new and special, but in fact this has all happened before. We have a few new toys, but we’re the same people doing the same old crap as they were.

JPS
Reply to  k scott denison
July 12, 2015 7:25 am

I don’t and would not claim to be able to do so. I’m just saying that to extrapolate from this paper that “we don’t have a clue” flies in the face of the development of civilization.

bh2
Reply to  k scott denison
July 12, 2015 8:11 am

“I don’t and would not claim to be able to do so. I’m just saying that to extrapolate from this paper that “we don’t have a clue” flies in the face of the development of civilization.”
To the contrary. The development of civilization has followed from an empirical process of learning, by observation and experiment, what is or is not fit for purpose — and often without foreknowledge of what specific purpose the outcome of a given new experiment may serve. If the same results hold up repeatedly thereafter, we can rely on them as a foundation for progress. Otherwise not.
It seems mankind might be better classified taxonomically as “man the keen observer” rather than “man the wise” — so we don’t get too far ahead of ourselves and fall into the exact trap Taleb describes elsewhere of looking back after the fact and imagining discovery of “obvious” cause and effect relationships which were no more than the operation of random chance.

Reply to  k scott denison
July 12, 2015 5:41 pm

Whether or not “we don’t have a clue” depends on what the meaning of “we” is.

Reply to  k scott denison
July 13, 2015 6:33 am

Prairie dog society has advanced too. Does anyone think it is because they “have a clue” (whatever that means)? Or is it because a group of individuals each looking out for their own well being and the well being of their kin, and a certain amount of altruism garnished here and there, lead to such advances?
People learn things. They can use that knowledge to try new things. They can communicate what they have found and done. Other people can see and imitate things that others do. If the things that are learned and done and communicated offer advantages in survival or fitness or comfort, then these new ideas and processes will become entrenched and taught to new generations, and spread to other groups of people.
The advances of modern society and culture did not affect those cultures that were unaware of them.

Reply to  k scott denison
July 13, 2015 6:35 am

Sorry, commented before seeing what bh2 had written already. Same basic idea.

Mary Brown
Reply to  k scott denison
July 13, 2015 12:20 pm

“over the next 100 years so we can track how accurate you are.” You won’t track it. You’ll be dead. Which is one of the secrets of climate forecasts… keep the most outrageous stuff out in the future so it never quite gets here but it all sounds so scary…and the climate forecasters will be dead when it fails to verify.

commieBob
Reply to  JPS
July 12, 2015 2:27 am

“the conclusions are disturbing.” Why, exactly?

In engineering, standard practice is that, to get better accuracy, you add another variable.
Example: Successive approximations to the DC characteristics of a diode.
1 – A diode conducts current in one direction and not the other
2 – A diode has a fixed forward drop
3 – A diode has a fixed forward drop plus resistance
4 – The forward drop is logarithmic
5 – Temperature matters
From the first electronics course on, young engineers are taught to add more variables to get a “better” answer. It’s counter-intuitive that adding an extra variable will make you worse off. Experienced engineers understand that striving for more accuracy is often a waste of time but that’s not the same as actually being worse off.
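As a sketch of what those successive approximations look like numerically (a toy of my own; the values Is, Vt and Rs are assumed for illustration, not from any datasheet), here is the forward voltage each refinement predicts at a given current:

```python
import math

# Successive diode approximations (models 2-4 of commieBob's list;
# component values assumed for illustration, not from any datasheet).
Is = 1e-12    # saturation current (A), assumed
Vt = 0.02585  # thermal voltage at ~300 K (V)
Rs = 0.5      # series resistance (ohm), assumed

def v_fixed(i):     # model 2: fixed forward drop
    return 0.7

def v_fixed_r(i):   # model 3: fixed drop plus resistance
    return 0.7 + i * Rs

def v_shockley(i):  # model 4: logarithmic (Shockley equation)
    return Vt * math.log(i / Is + 1)

for i in (0.001, 0.01, 0.1):
    print(f"I = {i * 1000:6.1f} mA   fixed: {v_fixed(i):.3f} V   "
          f"+R: {v_fixed_r(i):.3f} V   log: {v_shockley(i):.3f} V   "
          f"log+R: {v_shockley(i) + i * Rs:.3f} V")
```

Each refinement is a smaller correction than the last – which is exactly the intuition Taleb says fails once the parameters are not pinned down by measurement.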

JPS
Reply to  commieBob
July 12, 2015 7:21 am

Again, how is any of this disturbing? As you point out, experienced engineers balance complexity with diminishing (or negative) returns all the time.

Reply to  commieBob
July 12, 2015 8:15 am

The part you’ve left out is that the additional variables are trained on real world data.
If, on the other hand, you have nothing but theory, it is then that the additional variables represent ever greater potential sources of error.
A good example would be the disappearance, then return of electromigration of metal. This was a problem in the very early years of semiconductor design – went away for decades – then returned with the nanometer scale processes. Why weren’t the electromigration equations used in the middle? Because not only did they not describe any real world effect, they would skew the rest of the results.

Reply to  ticketstopper
July 12, 2015 4:38 pm

” A good example would be the disappearance, then return of electromigration of metal. This was a problem in the very early years of semiconductor design – went away for decades – then returned with the nanometer scale processes. Why weren’t the electromigration equations used in the middle? Because not only did they not describe any real world effect, they would skew the rest of the results.”
There’s different metallurgy. In the early 90’s I think our process had 4% silicon; we’d tested different amounts of silicon as well as copper, but copper was harder to process. As long as the vertical coverage over steps was good and we didn’t exceed the current density limits, we didn’t have electromigration problems. We also didn’t have electromigration “equations”; we had design rules – same with the design tools I supported years later – design rules, with current densities set by the process developers.
When they did fail, it was along grain boundaries that would pull apart, which led to higher current densities and more voiding, ultimately leading to a failure. A finer pitch would more likely span fewer grains and would likely have to run at a lower current density.

Harold
Reply to  commieBob
July 12, 2015 8:55 am

“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”
– John von Neumann
Adding more parameters may make the model more physical, but more likely you’re just fooling yourself by fitting the curve.
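A quick toy demonstration of that curve-fitting trap (my own sketch, not von Neumann’s elephant): each added polynomial degree fits the noisy sample better, while the fit to the underlying truth typically gets worse.

```python
import numpy as np

# Fit noisy samples of a straight line with ever-higher-degree polynomials:
# in-sample error always falls, but error against the noise-free truth
# typically rises once the extra parameters start fitting the noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + rng.normal(0.0, 0.2, x.size)  # truth is the line y = 2x
x_fine = np.linspace(0.0, 1.0, 100)
y_truth = 2.0 * x_fine                       # noise-free truth for testing

for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x, y, degree)
    mse_fit = np.mean((np.polyval(coeffs, x) - y) ** 2)
    mse_truth = np.mean((np.polyval(coeffs, x_fine) - y_truth) ** 2)
    print(f"degree {degree}: in-sample MSE = {mse_fit:.4f}   "
          f"MSE vs truth = {mse_truth:.4f}")
```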

commieBob
Reply to  commieBob
July 12, 2015 12:09 pm

JPS says:
July 12, 2015 at 7:21 am
Again, how is any of this disturbing?

We’re not talking about the law of diminishing returns where extra effort doesn’t produce a worthwhile result. We’re talking about a situation where extra effort produces a much worse result. If that results in an unwelcome surprise, it should leave scars on the soul of any decent engineer.

Reply to  commieBob
July 12, 2015 3:42 pm

Harold said: “Adding more parameters may make the model more physical, but more likely that you’re just fooling yourself by fitting the curve.”
Actually, all adding more parameters does is make the model potentially more complex. Complexity doesn’t make anything physical – only testing against known physical behavior does.
What Taleb is saying above is that adding more variables not only makes a model more complex but – even for supposedly constrained variables – introduces ever more ways in which the model can fail.

Reply to  ticketstopper
July 13, 2015 5:31 am

I think the real problem in the case of GCMs is that the output of one round of calculations becomes the input to the next round of calculations, so any error is compounded over and over.
You have to watch for this in many types of simulations – electronics simulations, for instance. That’s why you want that model of a diode to include all of those parameters, but those parameters come from a lab, where they are measured. And there are different types of diode models; you have to use the right one for your simulation to work. There’s also some art to operating simulators: you have to understand the models, the inputs, and how simulations differ from operating a real circuit in the lab.
I supported a dozen different types of electronic simulators for almost 15 years, and spent a lot of that time working with engineers, explaining how to use simulators to understand how a circuit will work once it’s made, and what else the simulator tells you. For instance, digital models can have their min or max delays added to them; a real collection of chips can have chips with different delays in combination, and at least at the time, there were different simulators to analyze the worst-case delay (some chips at min while others are at max simultaneously).
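A classic toy of that output-becomes-input compounding (the logistic map, not a GCM): two runs whose starting states differ by one part in a billion soon disagree completely.

```python
# Iterate a simple nonlinear map, feeding each output back in as the next
# input; a 1e-9 difference in the initial state is amplified every step.
def step(x, r=3.9):
    return r * x * (1.0 - x)

a, b = 0.500000000, 0.500000001  # initial states differing by 1e-9
for n in range(1, 61):
    a, b = step(a), step(b)
    if n % 10 == 0:
        print(f"step {n:2d}:  A = {a:.6f}   B = {b:.6f}   |A-B| = {abs(a - b):.2e}")
```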

Tim Hammond
Reply to  JPS
July 13, 2015 2:12 am

Really? You think deciding public policy that affects billions of people on the basis of “evidence” that is simply not evidence is not disturbing?

markl
July 11, 2015 2:38 pm

“The nth variable brings more errors than the combined previous n-1 variables!” You don’t need a degree in statistics or mathematics to understand.

Olaf Koenders
Reply to  markl
July 11, 2015 4:15 pm

Exactly. When I was writing software in assembler code, adding one incorrect memory address variable to another in a loop quickly overwrote the boundaries of the allocated memory, such as an image frame in an animation – machine go boom.. 🙂

jbandrsn
Reply to  Olaf Koenders
July 11, 2015 5:21 pm

Been there, done that. If you throw a full carrot or chopped up remains into your garbage disposal, the final results are the same

Bill Marsh
Editor
Reply to  Olaf Koenders
July 11, 2015 7:01 pm

I believe the phrase was ‘halt & catch fire’

Barclay E MacDonald
July 11, 2015 2:38 pm

So assuming some level of teleconnection of numerous proxies and simply combining them together and taking their average may increase the error and not improve the reliability of the result?

Mark from the Midwest
July 11, 2015 2:57 pm

In an odd way this makes too much sense, but it’s late on a Saturday, I’ve been hanging with my trusty Stihl 309 much of the day, and now I’m drinking beer, so I’ll need to do a good read on the full paper tomorrow..

Mark from the Midwest
Reply to  Mark from the Midwest
July 11, 2015 3:07 pm

It’s actually a fairly brief paper, but it will demand a bit of effort to work through the math

Harold
Reply to  Mark from the Midwest
July 11, 2015 3:10 pm

It’s actually not finished. That makes it kinda hard to comment on. Let’s return to this when it’s finished.

Dinostratus
Reply to  Mark from the Midwest
July 11, 2015 5:52 pm

I’m with Harold. The paper is barely started.
It’s fairly well understood that the central limit theorem applies to the average of the sample and not the sum of the sample but people always seem surprised when the sum diverges from n*avg. I suspect that’s what the author is doing but there’s not enough meat to see where he’s going with this.

Dinostratus
Reply to  Mark from the Midwest
July 11, 2015 6:11 pm

OFFS. I just read the last paragraph.
“M(1) = nμ …
whether increase is of the sqrt(n) …”
That, i.e. “when the sum diverges from n*avg”, is exactly what is going on. With increasing n, the first moment is wandering away at a rate of μ.
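A quick Monte Carlo check of that point (my own sketch): the spread of the sample average shrinks like 1/sqrt(n), while the spread of the sample sum grows like sqrt(n).

```python
import random
import statistics

# The CLT tames the sample AVERAGE (spread ~ 1/sqrt(n)); the SUM's spread
# keeps growing like sqrt(n), so the sum keeps wandering as n increases.
random.seed(1)
trials = 2000
for n in (10, 100, 1000):
    sums = [sum(random.gauss(0, 1) for _ in range(n)) for _ in range(trials)]
    sd_sum = statistics.stdev(sums)                    # grows ~ sqrt(n)
    sd_mean = statistics.stdev([s / n for s in sums])  # shrinks ~ 1/sqrt(n)
    print(f"n = {n:5d}   sd(sum) = {sd_sum:6.2f}   sd(mean) = {sd_mean:.4f}")
```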

Ray Prebble
July 11, 2015 2:59 pm

It sounds like the butterfly effect. I always thought it was weird that people could happily talk about how even a minuscule event such as a butterfly flapping its wings could change the weather on a different continent (made famous in the first Jurassic Park movie) and then in the next breath warn about global warming.

David Case
Reply to  Ray Prebble
July 12, 2015 11:25 pm

Perhaps there’s potential for a career in training butterflies?

auto
July 11, 2015 3:14 pm

Willis,
You write:
“Even without an understanding of the math involved, the conclusions are disturbing, and I trust Taleb on the math … not that I have much option.”
I have absolutely no option!
But – again – many thanks for many fascinating posts.
Auto

Ray Kuntz
July 11, 2015 3:28 pm

As someone with no background in math, it is my understanding from reading him that Taleb believes that the threats posed by global warming are so severe that prudence indicates a proactive stance. Correct me if I’m wrong.

David L. Hagen
Reply to  Ray Kuntz
July 11, 2015 4:03 pm

Ray
The opposite. The large number of variables in global climate models suggests that their error may be so high as to have little value for public policy. Consequently, plan to manage the extremes in weather that we have seen historically and in the geological record – including both higher and much lower temperatures than the IPCC alarms over.

Mark from the Midwest
Reply to  David L. Hagen
July 11, 2015 4:50 pm

I concur with this interpretation, I read it as: If the model has sufficient detail to represent the real world then the model will also be highly prone, from time to time, to making absurdly-absurd predictions

kim
Reply to  David L. Hagen
July 11, 2015 11:09 pm

You may be missing Ray’s point. And Taleb, with respect to climate models, seems to miss his own.
=============

Flyover Bob
Reply to  David L. Hagen
July 12, 2015 9:38 am

Kim, what is Ray’s point as you see it? Likewise, what is Taleb’s point that, by your account, he is missing?

Ray Kuntz
Reply to  Ray Kuntz
July 12, 2015 7:01 am

That should have been “…from reading his other writings, Taleb believes…”

kim
Reply to  Ray Kuntz
July 12, 2015 7:10 am

Yep, he has demonstrated cognitive dissonance with respect to this issue. He’s not the Lone Ranger in that.
===========

Flyover Bob
Reply to  Ray Kuntz
July 12, 2015 9:40 am

Kim, what is Ray’s point as you see it? Likewise, what is Taleb’s point that, by your account, he is missing? Sorry, posted in the wrong place first.

Reply to  Ray Kuntz
July 12, 2015 8:46 am

Hi Ray,
I think the folks in this thread are thinking that Taleb has the models in mind. If that’s what he is thinking, then fine. We can all agree that the models are not good.
However, based on his recent work about the precautionary principle, I suspect he is referring to the uncertainty in the system, not the models.
I am not convinced that this work applies to the climate system, but I think you may be right in your interpretation.
Tony

Jeff Mitchell
Reply to  Tony K
July 12, 2015 10:04 pm

If the precautionary principle has any meaning, then the alarmists need to apply it to cooling as well. Cooling is much, much more dangerous. I first learned this from practical experience, when I was collecting reptiles and asked our Division of Wildlife why we couldn’t collect certain species of snakes or lizards. They would throw out: “We don’t know much about them, and if we allow collection we might inadvertently hurt those populations.” So I returned with: “Well, if you don’t allow collection, supply becomes low, and people will be able to get high prices for poached specimens.” So they hurt the population one way or the other.
We turned the corner on this subject by going to the legislature rather than the agency. After we got done, limited collection was allowed, and people can now commercially breed them so there is a legal source that doesn’t come from the wild. This same approach needs to be applied to NOAA and EPA in getting Congress to do something about these rogue agencies that are out of the bounds that were set for them. If enough people are willing to vote out the enablers of this climate fraud, the tide will turn much more quickly.

bh2
Reply to  Ray Kuntz
July 12, 2015 9:12 am

You are not wrong. I have no references at hand, but Taleb subscribes to avoiding recognizable risks which may or may not materialize and which also may or may not prove to be consequential.
He includes man-caused climate change as one such avoidable risk but claims no insight about what magnitude or direction of future climate change may eventually come to pass.

Science or Fiction
July 11, 2015 3:58 pm

“1) Adding variables vs. adding observations: Let us explore how errors and predictability degrades with dimensionality. Some mental bias leads us to believe that as we add random variables to a forecasting system (or some general model), the total error would grow in a concave manner (√n as is assumed or some similar convergence speed), just as we add observations. An additional variable would increase error, but less and less marginally.”
Excuse me – but exactly whose mental bias is he talking about?

David L. Hagen
Reply to  Science or Fiction
July 11, 2015 4:04 pm

Everyone using the global climate models and adding more parameters to them in the belief that that will improve their results.

ExArdingJas
Reply to  David L. Hagen
July 11, 2015 4:29 pm

And averaging the results???

Science or Fiction
Reply to  Science or Fiction
July 11, 2015 4:25 pm

This sounds peculiar.
If the model output happens to be insensitive to the variable you add – the new variable shouldn’t affect the result at all – no matter what.
If the model output happens to be dominated by the variable you add and the uncertainty of the new variable is dominating the uncertainty budget – the uncertainty of the new variable should dominate the uncertainty of the output of the model?
I would think that you will have to understand the sensitivity of the model’s output to the new variable, and also take into consideration the uncertainty of the new variable, to understand what effect this new variable will have on the uncertainty of the model output?

Science or Fiction
Reply to  Science or Fiction
July 11, 2015 5:02 pm

For uncorrelated input variables, the uncertainty of the output variable should be equal to the square root of the sum, over all input variables, of (sensitivity of the output to that input variable)² × (uncertainty of that input variable)²; that is, u(y) = sqrt( Σᵢ (∂y/∂xᵢ)² · u(xᵢ)² ).
Ref: Guide to the expression of uncertainty in measurement (Freely available):
http://www.bipm.org/utils/common/documents/jcgm/JCGM_100_2008_E.pdf
Section: 5.1.2
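A minimal sketch of that formula (the helper function, the example model P = V²/R, and the values and uncertainties below are hypothetical illustrations of mine, not from the Guide):

```python
# Combined standard uncertainty per GUM (JCGM 100:2008) section 5.1.2,
# for uncorrelated inputs: root-sum-square of sensitivity x uncertainty.
def combined_uncertainty(f, x, u, h=1e-6):
    """f: model function of a list of inputs; x: input estimates;
    u: standard uncertainties of the inputs (assumed uncorrelated)."""
    total = 0.0
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        sensitivity = (f(xp) - f(x)) / h  # numerical partial derivative
        total += (sensitivity * u[i]) ** 2
    return total ** 0.5

# Hypothetical example: power P = V**2 / R, with V = 10 +/- 0.1 V
# and R = 50 +/- 0.5 ohm.
P = lambda z: z[0] ** 2 / z[1]
print(combined_uncertainty(P, [10.0, 50.0], [0.1, 0.5]))  # ~0.045 W
```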

David L. Hagen
July 11, 2015 4:00 pm

Are CMIP5 models UNPREDICTABLE and INSIGNIFICANT, lacking skill?
From Taleb’s preliminary results, given their very large number of variables,

do the CMIP5 global climate models have sufficiently “higher dimensionality” as to be “totally unpredictable”? (i.e., lacking predictive “skill”)

John Christy shows that the mean of 102 CMIP5 climate model predictions from 1979 is ~500% hotter than the actual mid-tropospheric tropical temperature since then.
Yet mathematician Valen Johnson calls for statistical thresholds five times more stringent for results to be declared significant or highly significant:

To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25–50:1, and to 100–200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.

Consequently, do CMIP5 global climate models lack the skill needed for public policy? i.e., are they now just “INsignificant” or actually “highly INsignificant”?

July 11, 2015 4:21 pm

What is a “thin tailed variable”? I assume he means a variable that has a distribution with thin tails, which narrows it down to just about all non-truncated data.
Perhaps he’s trying to say that no matter how much data you collect, you won’t be able to differentiate whether the data came, for example, from a Normal or a Burr distribution, and both give markedly different results because of differences in their tails? Random system changes make such differentiation impossible.

u.k.(us)
Reply to  Tony
July 11, 2015 5:04 pm

I heard science was settled.

Reply to  Tony
July 11, 2015 5:05 pm

I assumed he meant leptokurtic.

Reply to  Tony
July 12, 2015 6:12 am

A thin tailed variable vs. a thick tailed variable concerns consumption, digestion and the resulting bolus when expelled from the system. There are numerous combinations: Long, thin tailed variables; long, short tailed variables; short, thin tailed variables and simply, round, small variables of various sizes, and various other results. GIGO.

July 11, 2015 4:56 pm

Maths can be complex, but never “hard”. Boiled down, it is always only a sequence of individual sums of two numbers, completed in a specified order.

Hoser
July 11, 2015 5:04 pm

I couldn’t get past the “Relation to the curse of dimensionality”.
No, the distance is not (k/2)^(1/d). The distance is (d·(k/2)²)^(1/2), i.e. (d^(1/2))·(k/2). Unless I’m missing something, start over.
https://en.wikipedia.org/wiki/Euclidean_distance
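A quick numerical check of the corrected formula (my own sketch): the center-to-corner distance of a d-dimensional hypercube of side k grows with the square root of the dimension.

```python
# Distance from the center of a d-dimensional hypercube of side k to a
# corner: sqrt(d * (k/2)**2) = sqrt(d) * (k/2), which grows with d.
k = 2.0
for d in (1, 2, 3, 10, 100):
    dist = (d * (k / 2) ** 2) ** 0.5
    print(f"d = {d:3d}   center-to-corner distance = {dist:.3f}")
```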

July 11, 2015 5:07 pm

“Some mental bias leads us to believe that as we add random variables to a forecasting system (or some general model), the total error would grow in a concave manner (√n as is assumed or some similar convergence speed), just as we add observations. An additional variable would increase error, but less and less marginally.” No, it’s not mental bias if you believe this, it’s mental retardation! Two error effects must be multiplied together, not divided! How could anyone think otherwise? Multiplication causes a concave effect, not a convex one. Seriously, if it has taken this paper to make you realise that modelling variables is scientifically useless, your brain has been switched off all your life.

Science or Fiction
July 11, 2015 5:19 pm

The heading of page 4 states:
EXTREME RISK INITIATIVE —NYU SCHOOL OF ENGINEERING WORKING PAPER SERIES
What does that mean?
Does it mean that it is OK to put out on the net unfinished products which do not seem to have been reviewed by anybody other than the author?
I guess it will be harder and harder to find quality products on the net as time goes on – if all drafts are published on the net?

Reply to  Science or Fiction
July 12, 2015 4:34 am

You’re joking, right?!?

Science or Fiction
Reply to  George P Williams
July 12, 2015 7:09 am

Hate to say it – but I was just being very stupid. Thanks 🙂

Scott Scarborough
July 11, 2015 5:24 pm

Simply explains why planned economies cannot work.

July 11, 2015 5:31 pm

As I’ve said all along, even thinking about modeling something as complex as the climate is a total waste of time and our money.

Ian H
July 11, 2015 5:45 pm

I need an elevator explanation. Is the following on the right track?
The words that have leapt out at me so far are the words “unconstrained” and “thin tailed”. That means each variable has a tiny chance of taking extremely large values. My elevator explanation might be that when the number of variables gets big enough, these highly improbable but extremely large values start to dominate. Each additional variable makes it more likely that one of them will give one of these absurdly large values out on the thin tail that will blow the result out of the water.
If that is the correct elevator explanation then this is really just a technical and theoretical issue with no practical consequences because we never actually use unconstrained thin tailed variables. For example daily temperature at some location (for example) might seem to be normally distributed, but really it isn’t. A normal distribution is unbounded with a thin tail which means there would be an extremely small but non-zero probability of a temperature of ten million degrees for a daily high. Practically however ten million as a daily high temperature is impossible. It would vaporise the instrument, melt the planet it was sitting on, and if that isn’t enough impossibility for you, we’d also reject it as an outlier in the first stage of data analysis.
We use thin tailed unbounded distributions as approximations for real world distributions because they are mathematically simple and are good approximations in the region away from the thin tail. Normally we don’t care that the thin tail is very unrealistic because values out there are highly improbable. But when you start to stack up lots and lots of these variables in your calculation, the chance of a value in the thin tail showing up in one of them starts to become significant.
In conclusion, this is an interesting technical issue affecting theoretical calculations, of little practical import. It also looks like it has a simple fix – just bound all the variables. This is something that tends to happen anyway when things get represented in a computer, since computers are not good at handling unbounded values.
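A small sketch of that stacking effect (my own construction): the chance that at least one of n independent standard normal variables lands five sigmas out in the thin tail grows quickly with n, even though each one individually almost never does.

```python
from math import erf, sqrt

# P(Z > k) for a standard normal, then the chance that at least one of
# n independent such variables exceeds k sigma: 1 - (1 - p)**n.
def tail_prob(k):
    return 0.5 * (1.0 - erf(k / sqrt(2.0)))

k = 5.0
p = tail_prob(k)  # ~2.9e-7 per variable
for n in (1, 10_000, 1_000_000, 10_000_000):
    p_any = 1.0 - (1.0 - p) ** n
    print(f"n = {n:10,d}   P(some variable > {k} sigma) = {p_any:.4f}")
```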

rogerknights
Reply to  Ian H
July 12, 2015 1:09 am

“The words that have leapt out at me so far are the words “unconstrained” and “thin tailed”. That means each variable has a tiny chance of taking extremely large values. My elevator explanation might be that when the number of variables gets big enough, these highly improbable but extremely large values start to dominate. Each additional variable makes it more likely that one of them will give one of these absurdly large values out on the thin tail that will blow the result out of the water.”
This is the argument Taleb is using against GMO foods. One of them is going to go haywire and infect the world. GMO opponents have been delighted to make use of his name and his approach.

SAMURAI
July 11, 2015 5:51 pm

Climate alarmism has become the shamanism of our time.
Ridiculously irrelevant and meaningless readings of sheep entrails (aka climate models) by the shamans (aka alarmists) insist society must offer human sacrifices to appease the gods (aka Statists) to keep the manna flowing (aka research grants) and save the world from destruction…
The errors baked into sheep entrails only get worse over time because all the assumptions are wrong. This modern-day shamanism is only still taken seriously because those who doubt the shamans and the readings of sheep entrails are dubbed heretics, with some religious fanatics calling for the doubters to be thrown in the dungeon…
The sheep entrails prophesies are now off by 2 standard deviations and within 5~7 years, they’ll likely be off by 3 standard deviations. When the acolytes aren’t looking, the shamans frantically push and poke the sheep entrails around with their magic wands to get the sheep entrails to divine as prophesied, but sooner or later, even the dumbest acolyte will see the shamans are cheating.
Sheep entrails have become a joke.

KevinK
July 11, 2015 6:01 pm

“Higher dimensional systems – if unconstrained – become eventually totally unpredictable in the presence of the slightest error in measurement regardless of the probability distribution of the individual components.”
Ok, so the first step is to “constrain” the climate. Well, that ought to be easy peasy: first we set the Sun on “constant output mode”, then we fix the albedo, next we limit the cloud cover to a small fixed range (29.7775 – 29.7776 percent ought to do it). Oh, and of course we’ll need speed limits on all the winds……
Heck, I’m starting to think that discovering the meaning of life is a whole lot easier than modelling the climate….
Cheers, KevinK.

George Steiner
July 11, 2015 6:26 pm

My stepdaughter gave me Taleb’s book ANTIFRAGILE. I wrote a review of it for her. If Mr Watts allows a rather long comment on this fellow Taleb, I’ll post it for you guys.
ANTIFRAGILE
Nassim Taleb
Described by the Times of London as the most important thinker today. So who is Nassim Taleb?
He is a Christian Lebanese born fifty-some years ago. His father was a well-to-do doctor and the family had merchants and civil servants among them. He was schooled in Lebanon at a French Lycee and was sent to the US by his father after the civil war.
The Lebanese education was probably good in the classical sense. He is said to be literate in French, English and Arabic and knows Latin, Greek, Hebrew and Italian. He studied further at the University of Paris (Sorbonne) and at the Wharton School for an MBA, finally getting a doctorate, also at the Sorbonne.
The Arab cultural tradition is to pursue a career in medicine, commerce or government. In the US he earned his living as a trader for 20 years in the financial markets, in various capacities and at many banks and institutions. As you know, trading in the financial markets is done in products called instruments, which range from stocks of companies to mortgage-backed securities as well as commodities such as, for example, wheat and copper.
He is said to have made money in the crisis of 2008 by anticipating the decline of the market. We also anticipated the decline of the market, but in our case we just didn’t lose money as the result.
Having become financially independent, Taleb wrote several books, the last two The Black Swan and Antifragile. In Antifragile and in the Black Swan Taleb talks a lot about himself, his likes and dislikes are illuminated vividly. What emerges from his background and the book is a man shaped by his origin, (Arab), his education, (humanist), and a desire to be recognised as a philosopher.
As an Arab he suffers from what I call the Arab sickness: taking any opinion less than complimentary as a personal insult that he does not excuse easily. He strongly dislikes economists as a profession. While reading the book I wondered why. He tells us that one of his early books, Dynamic Hedging, was not liked by the economists who reviewed it – intolerable, isn’t it?
A humanist education does not give passage to engineering and science. In these areas his knowledge is less than feeble.
But is he a philosopher, a deep thinker, a man who can tell us “how to live in a world we don’t understand” as the subtitle suggests? I don’t think so. My reaction to the subtitle is: Mr. Taleb, if you don’t understand the world, why should I trust you to tell me how to live in it? Not to say that maybe he should first try to understand it. What do you think, Mariko?
In Antifragile the big idea is this. There are things that are fragile, that can be broken or damaged easily. The opposite of fragile, you may think, is robust. But Taleb says antifragile is better than robust. Something that is antifragile actually becomes stronger when stressed. If you are looking for a title for a best seller, Antifragile is better than robust for sure.
In addition to a good title you need pages, lots of pages. Over 500 pages. So you start with a Prologue of 28 pages, then you end with a Glossary, Appendix and Bibliography of 93 pages. And before all this there are the table of contents and chapter summaries.
To every book there is style and there is content. This one is no different. The style is informal, the language is verbose and very loose with the meanings of words. For example “harm” is used liberally with every conceivable shade from injury to irritation. The word “stressor” is used without ever saying what is really meant by it. The words “convex”, “concave”, “nonlinear” are sprinkled all over the place with great self-assurance but no explanation. Anecdotes about Taleb, Greek mythology and Roman history are many.
Taleb says at one point: “For I am a pure autodidact”. A few paragraphs later: “Again I wasn’t exactly an autodidact, since I did get degrees: I was rather a barbell autodidact as I studied the exact minimum necessary to pass any exam, overshooting accidentally once in a while, and only getting in trouble a few times by undershooting. But I read voraciously, initially in the humanities, later in mathematics and science and now in history….” In an interview with the Financial Times he again makes an issue of his being an autodidact. I agree with him, he is not.
You get the desired impression that Nassim Taleb is a very well read, cultivated, cultured man of substantial intellect. Here and there are quotations in Latin and lots of names are dropped. Not surprisingly, journalists – described aptly by an acquaintance of mine as “les pires putains du monde” (the worst whores in the world) – swoon.
A book written in this style is well on the way to becoming a best seller. Which it did. It is said that Taleb got a four million dollar advance for it. This looks like a lot of money, but at $20 a piece it is only 200,000 books. On the US market alone that will easily be exceeded.
Is the content any better than the style? Taleb finds fragility everywhere. From the traffic pattern of New York to drug companies to research to education to… well every aspect of modern life. He tries to squeeze in antifragility arguments all over the place.
Example:
“The real world relies on the intelligence of antifragility, but no university would swallow that—just as interventionists don’t accept that things can improve without their intervention. Let us return to the idea that universities generate the wealth and the growth of useful knowledge in society. There is a causal illusion here; time to bust it”.
Heh??
Example:
“So here is something to use. The technique, a simple heuristic called the fragility and antifragility detection heuristic, works as follows. Let’s say you want to check whether a town is overoptimized. Say you measure that when traffic increases by ten thousand cars, travel time grows by ten minutes. But if traffic increases by ten thousand more cars, travel time extends by an extra thirty minutes. Such acceleration of traffic time shows that traffic is fragile and you have too many cars and need to reduce traffic until acceleration becomes mild (acceleration, I repeat, is acute concavity, or negative convexity effect)”.
Heh??
It turns out that Taleb is a criticizer of modernity. His favorite place is the Arab souk, cities are too big, economists are dumb, universities are not useful, medical practice is harmful, traffic in cities is too heavy, countries are too complex, we don’t do enough to prevent the unpredictable, amateur tinkerers are the best, big research is useless; the litany goes on and on. You get the picture.
Why does such a book resonate today in particular? It resonates because there is a quasi-religious green and environmental movement that shares his critical views. According to this movement the modern world is not working. We should go back to a time when things were simpler, when there were fewer people. If you ask when that time was, you will not get an answer. Or if you do, it is because they don’t know history.
The enviro-religious movement is mainly in the first world. The second world and the third world are not interested. For them hardship is real. For the first world enviro-religious, hardship is a subject for intellectual discussion. Taleb himself is an enviro-religious fellow traveler. He as much as says so.
And does he put his money where his mouth is? No! He lives in New York – not exactly a village, is it? – surrounded by the fine things of life, and there is not even a souk.
In the end what about the book? There is a saying “there is no book that in some way would not be useful”. Should you read it? No, unless you have a mighty lot of time on your hands. Then why is it successful? Many will be amused by the anecdotes, many will hope to find out how to live in a world they don’t understand (they will be disappointed), many will want to sit at the feet of a deep thinker. It will be enough for Taleb and the publisher to make a lot of money.

Reply to  George Steiner
July 11, 2015 7:02 pm

Excellent review. I suspect anyone who wants to be regarded as a philosopher probably isn’t much of a philosopher.
By the way, you could have stopped at “he studied at Wharton” and we would have gotten the gist.
Enduring the shame of having graduated from there myself, I can assure you that “ANTIFRAGILE” is just the kind of piffle one would expect.

Gloria Swansong
Reply to  Max Photon
July 11, 2015 8:01 pm

There is piffle and anti-piffle.
IMO, Taleb is somewhere in between.
I’m not ready to join the Taleban, but he has his points worth hearing out. IMO.

Reply to  Max Photon
July 11, 2015 8:04 pm

Max
“Frag(gi)le Rock”

Ian H
Reply to  George Steiner
July 11, 2015 7:37 pm

Thanks for that. Deserves to be promoted to a headline article.

Janice the Elder
Reply to  George Steiner
July 11, 2015 9:31 pm

Entertaining review. I don’t agree with your conclusions or assessments, but you certainly write quite well, George. Nassim Taleb’s books are not for the faint-of-heart, and could have used a bit of judicious editing. I thoroughly enjoyed his style, though, and the insights into his personality. His works have had a great deal of influence on some of my personal decisions, both at work and in my home life. Recognizing the possibility of Black Swan Incidents, and working towards creating an Anti-Fragile standard of living, is not a bad philosophy. As Thoreau said: “I say beware of all enterprises that require new clothes . . .”. [Just my humble opinion of Nassim Taleb’s books.]

Reply to  George Steiner
July 11, 2015 10:35 pm

You aren’t the George Steiner, are you? The one who wrote Tolstoy or Dostoevsky.

Langenbahn
Reply to  willybamboo
July 13, 2015 3:47 pm

If he is, I’d be interested in knowing if he’s ever read Menand’s The Metaphysical Club and what he thought of it.
(FWIW, I come down on the Dostoevsky side.)

rogerknights
Reply to  George Steiner
July 12, 2015 1:30 am

“Taking any opinion less than complementary as a personal insult that he does not excuse easily.”
“Nassim Taleb’s books are not for the faint-of-heart, and could have used a bit of judicious editing.”
–Janice the Elder (downthread)
In Black Swan he has a mini-rant somewhere against editors who have dared to make or suggest corrections to his manuscripts. About three years ago he posted a 20-page article online. I noticed a lot of typos and awkward constructions. I sent hims about 30 fixes. (I once worked as a proofreader.) I didn’t get a response, but six months later he sneered in his next emission at people who did what I had done to him.
He ought to be less fragile.

rogerknights
Reply to  rogerknights
July 12, 2015 1:31 am

Oops: “hims”–>”him”

kim
Reply to  rogerknights
July 12, 2015 7:24 am

Ya know, I’ve considered recommending you as Bob Tisdale’s editor, but every time I read him I’m further impressed with the elegance of his own natural style.
==============

Reply to  rogerknights
July 12, 2015 6:03 pm

A super callous fragilistic hexed with mild psychosis?

Reply to  George Steiner
July 12, 2015 6:05 am

LOL, this attack on Taleb’s works seems personal.

Reply to  George Steiner
July 12, 2015 6:52 am

Thank you George Steiner for a valuable book review.
A few observations:
With regard to society, the most important variables are Rule of Law and Personal Liberty, which must be kept in balance.
With regard to models, complexity and prediction, why is it that some individuals have a strong predictive track record with complex systems (such as weather and even climate), while others have a negative predictive track record (being consistently wrong, like the warmists and the IPCC)?
For example, we published the following statement in 2002:
“Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”
Since 1997, there has been NO significant global warming.
It is also apparent that models that some consider “obsolete”, such as analogue weather models, appear to function better than “modern” computer weather and climate models. The failure of modern computer weather models may be due to incorrect model equations or input variables, or the problem may be that the computer models cannot match the knowledge that is inherent in the analogue models.
For example, Environment Canada and the US National Weather Service both failed to predict the extremely cold winters in the eastern 2/3 of North America for the past TWO winters, while some independent forecasters made accurate long-range forecasts.
Common sense in matters of public policy seems to be increasingly rare. For example, false fears of dangerous global warming have led to foolish investments in “green energy” that are not green and produce little useful energy.
We also published the following statement in the same 2002 article:
“The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.”
Since then, several trillion dollars have been squandered on nonsensical green energy schemes, funds that could have been allocated to solving real societal problems, not imaginary ones.
We also published the following statement in the same 2002 article:
“Kyoto wastes enormous resources that are urgently needed to solve real environmental and social problems that exist today. For example, the money spent on Kyoto in one year would provide clean drinking water and sanitation for all the people of the developing world in perpetuity.”
Since then, some slow progress has been made on clean water and sanitation systems, but that progress has been hampered by inadequate resources. About 50 million kids below the age of five have died from contaminated water since global warming mania began.
We also published the following statement in the same 2002 article:
“Kyoto will actually hurt the global environment – it will cause energy-intensive industries to move to exempted developing countries that do not control even the worst forms of pollution.”
Since then, the air quality in industrial China has become toxic due to pollution from new and relocated industries.
I also wrote in another article, also published in 2002:
“If (as I believe) solar activity is the main driver of surface temperature rather than CO2, we should begin the next cooling period by 2020 to 2030.”
I hope to be wrong about imminent global cooling – we will soon see.
Regards, Allan

kim
Reply to  Allan MacRae
July 12, 2015 7:22 am

Our most powerful digital apparati are pitifully inadequate simulacrums to model the great analog computer that is the heat engine that is the earth. You got it with ‘lack of knowledge’.
==================

Bob Weber
Reply to  Allan MacRae
July 12, 2015 10:51 am

“If (as I believe) solar activity is the main driver of surface temperature rather than CO2, we should begin the next cooling period by 2020 to 2030.”
I hope to be wrong about imminent global cooling – we will soon see.
-Allan MacRae

You will not be wrong about that Allan.

Reply to  Allan MacRae
July 13, 2015 5:41 am

“If (as I believe) solar activity is the main driver of surface temperature rather than CO2, we should begin the next cooling period by 2020 to 2030.
I hope to be wrong about imminent global cooling – we will soon see.
-Allan MacRae”
Bob Weber said on July 12, 2015 at 10:51 am
“You will not be wrong about that Allan.”
__________________________
Thank you Bob.
Here is why I am concerned about naturally-caused global cooling, which I believe is imminent:
http://wattsupwiththat.com/2015/07/04/2c-or-not-2cthat-is-the-question/#comment-1978818
Kim said:
“Paleontology shows no limit to the net benefits of warming and always shows detriment from cooling.”
Agreed Kim – and not just paleontology.
Globally, cold weather kills many more people every year than hot weather, EVEN IN WARM CLIMATES.
We know this is true from many sources, from modern studies of Excess Winter Mortality to the great die-offs that occurred during the cold Maunder and Dalton Minimums.
Accordingly, it is logical that fewer Excess Winter Deaths would occur in a warmer world, and Excess Winter Deaths would increase in a colder world.
Regards, Allan
Notes:
The numbers are shocking. Excess Winter Deaths now total approximately 10,000 per year in Canada, up to 50,000 per year in the United Kingdom and about 100,000 per year in the USA. I have been writing and researching about Excess Winter Mortality since ~2009 and I am confident that these alarmingly-high numbers are correct. Here is our recent article:
http://wattsupwiththat.com/2015/05/24/winters-not-summers-increase-mortality-and-stress-the-economy/
Cold weather kills 20 times as many people as hot weather, according to an international study analyzing over 74 million deaths in 384 locations across 13 countries.
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(14)62114-0/abstract
____________________________
http://wattsupwiththat.com/2015/06/20/stanford-research-finds-climate-change-regulation-burden-heaviest-on-poor/#comment-1969683
On May 25, 2015 veteran meteorologist Joe d’Aleo and I published our paper entitled “Winters not Summers Increase Mortality and Stress the Economy”
Our objective is to draw attention to the very serious issue of Excess Winter Deaths, which especially targets the elderly and the poor.
It is hard to believe that anyone could be so foolish as to drive up the cost of energy AND also reduce the reliability of the electrical grid, which is what politicians have done by subsidizing grid-connected wind and solar power.
When uninformed politicians fool with energy systems, real people suffer and die.
Cheap, reliable, abundant energy is the lifeblood of modern society. It IS that simple.
Best wishes to all, Allan

Reply to  Allan MacRae
July 13, 2015 9:48 am

Allan, here’s surface forcing, along with the response in temperatures for global stations with 360+ samples per year. [chart]
You can see forcing goes up prior to temps responding (lots of scaling, but I included the factor I used)

Steve (Paris)
Reply to  George Steiner
July 12, 2015 12:24 pm

I think WUWT is a splendid example of the modern world functioning well. Asimov, or maybe A. C. Clarke, wrote a story about a spaceship stuck in a black hole because all the computers are wiped out. It eventually escapes after one of the crew teaches everyone to use a simple abacus. That simple tool and the combined brainpower of the crew allow the calculation of the right angle to escape the black hole. I think of Anthony as the crewman with the abacus (surface stations project) and the blog as the brainpower that will eventually get us to the other side. Bon voyage.

Langenbahn
Reply to  George Steiner
July 13, 2015 3:34 pm

George Steiner,
I enjoyed your post. If I had to guess, your review of Antifragile was probably too charitable. I can’t say for certain since I found Taleb’s Black Swan thesis pretty silly and I’m not inclined to read anything else of his. His thoughts about Black Swans ran from “irrelevantly true” to “not even wrong.” I mean no disrespect to anyone, but Taleb brilliant? I truly do not see it.
My quickie review of The Black Swan would go something like this. Taleb’s choice of metaphor actually disproves his thesis: there are black and white people, black and white sheep, doves and crows, etc. Only the consideration of swans in the abstract without recourse to anything else in human knowledge could make the appearance of a black swan surprising, not to mention less than trivial. Hence it is a perfect ivory tower exercise, but unrelated to life as it is actually lived.
In the opening paragraphs of the Black Swan book he mentions 9-11 as such an event. He also lists three criteria for such events (rare, extreme in impact, and retrospectively unpredictable), none of which is met by the 9-11 example. The attack on the WTC itself was not even rare. That very site had already been attacked once, in 1993, and there is, sadly, nothing rare about terrorism in general. Extreme in its impact? The US economy didn’t skip a beat. The skies are full of planes. The Pentagon was quickly fixed. The WTC has been rebuilt. We fought two fairly desultory wars at little financial cost. Unpredictable, retrospectively or otherwise? Again, it had already actually occurred in 1993, and similar attacks were a staple of fiction and threat prognostication. 9-11 was actually prophesied by a fellow named Richard Rescorla (Google him if you want the details).
He also brings up things like WWI, the subsequent rise of Hitler and the “precipitous” collapse of the Soviet Union. Let’s allow that any seemingly prophetic insights about the first two are what Taleb calls the result of “explanations drilled into your cranium by your dull high school teacher,” and take the latter. Richard Pipes, Bernard Levin and Ronald Reagan predicted it, and the latter set about doing everything he could to precipitate and accelerate it. Sure, that was against the best estimates of the professional prognosticators, but you cannot say that was not predicted, and by someone who was in a position to do something about it. Taleb ends that paragraph by saying “Literally, just about everything of significance around you might qualify [as a Black Swan].” (Literally? Oi.) Well, if everything is a black swan, then nothing is. Either that or we must be a singularly unobservant species to have missed them all, and a singularly fortunate one to have lasted this long. Or maybe, the Black Swans aren’t as significant as he thinks. Or more likely, it’s a case of Taleb having nothing but a hammer, so everything looks like a nail. (Apparently this is a problem in Antifragile as well.)
It would have served a better purpose for Taleb to explore the implications of his thesis’s weaknesses, or for my money, its outright failure.
Going back to the 9-11 example, given our actual losses to terrorism, our levels of preparation could arguably be considered reasonable. In fiction, tens of thousands, even hundreds of thousands of deaths were predicted. Tom Clancy’s The Sum of All Fears had terrorists nuking the Super Bowl to the tune of 75,000 immediate deaths, IIRC. As 9-11 was underway, 10,000 deaths were thought anywhere from possible to likely. I mean no disrespect to those who did die that day, and whose loss is mourned, but compared to what was expected, it is no disservice to them to point out that we did not suffer nearly as badly as we thought we would. Even when the swans don’t come up white (and they nearly always do), we usually deal with them pretty quickly. The tactic used against us on 9-11 was dealt with in 90 minutes by the heroes of Flight 93. There has been no recurrence.
Excessive focus on black swans is actually pernicious, as it is likely to prevent us from seeing the more significant fact: the vast majority of swans come up white. In his opening paragraphs, Taleb says that “reading the newspaper actually decrease[s our] knowledge of the world.” Apparently, he means that journalism, focusing on minutiae, misses the big, rare events that have, on his view, huge impact. Has he never read a paper? “If it bleeds, it leads,” as they say. Sensationalism abounds in the midst of the minutiae, and the media lives for the big, rare event so it can make it seem even more important than it is. In fact, journalism is a prime example of doing what Taleb apparently wants: focusing on Black Swans, that is, outliers. The inimitable G. K. Chesterton pointed out this inherent flaw a century ago:
“It is the one great weakness of journalism as a picture of our modern existence, that it must be a picture made up entirely of exceptions. We announce on flaring posters that a man has fallen off a scaffolding. We do not announce on flaring posters that a man has not fallen off a scaffolding. Yet this latter fact is fundamentally more exciting, as indicating that … a man is still abroad upon the earth. That the man has not fallen off a scaffolding is really more sensational; and it is also some thousand times more common. But journalism cannot reasonably be expected thus to insist upon the permanent miracles. Busy editors cannot be expected to put on their posters, “Mr. Wilkinson Still Safe,” or “Mr. Jones, of Worthing, Not Dead Yet.” They cannot announce the happiness of mankind at all. They cannot describe all the forks that are not stolen, or all the marriages that are not judiciously dissolved. Hence the complex picture they give of life is of necessity fallacious; they can only represent what is unusual. However democratic they may be, they are only concerned with the minority.” – The Ball and the Cross, part IV: “A Discussion at Dawn”, 2nd paragraph.
The white swans are the amazing thing. Black swan thinking distorts our outlook by focusing on exceptions that can never be properly assessed, to such a degree that we miss the benefits that accrue from seeing them for what they are – outliers – and treating them accordingly. Taleb seems little more than another Cassandra wannabe, both a victim and a practitioner of sensationalism.
Put me down for Piffle. In fact, I don’t think Piffle Lite would be all that unfair.

Reply to  Langenbahn
July 14, 2015 7:52 pm

It would have served a better purpose for Taleb to explore the implications of his thesis’s weaknesses, or for my money, its outright failure.
What failure? Taleb is absolutely right about the stupidity of ‘experts’. His argument that decentralized efforts produce better outcomes than directed research seems quite sound. He is right about the preference for sensationalism and a desire for permanent miracles. And given the fact that most of the supposed ‘experts’ today cannot see the biggest bubble in history, the sovereign debt market, I doubt that his critics have a clue about what it is that they are talking about most of the time.

Reply to  Langenbahn
July 14, 2015 8:13 pm

I thought Taleb made an astute observation: some very large complex systems can endure for a good long time – all the while they are growing very fragile. They become more fragile as they become bigger, more complex and dominant. Taleb is observing investments/investors and specifically the big financial houses. He explains that there are anti-fragile systems that grow less fragile as they expand and grow more complex. Mostly he is talking about networked distribution. The more widely something is distributed across a network of distribution points, the more reliable and shock-resistant the expanding network becomes. With important caveats, this is a valid observation, and Taleb tells it in concise and understandable prose. It isn’t research, and it isn’t math; it’s just something we all can observe and something reasonable. It’s logical.
I agree with Taleb’s critics. Taleb’s attempt to construct a grand, unifying thesis falls short. I think there is a lot of merit in trying to organize an enterprise so it can take advantage of networked distribution. It really is always pretty simple. What course of action will increase our company’s value to its customers?
Like so many authors, Taleb should have written a shorter book.

Reply to  George Steiner
July 14, 2015 7:42 pm

Why does such a book resonate today in particular? It resonates because there is a quasi-religious green and environmental movement that shares his critical views.
I think that the review misses Taleb’s point entirely. His critique is that the supposed ‘experts’ that modernity uses to drive policy decisions are arrogant fools who think that they know far more than they do. He has seen economists fail miserably and while he has some respect for Hayek and the Austrians, he thinks that most of modern economics is a wasteland.

July 11, 2015 6:46 pm

More variables … kinda like the stretchy-pants of fat-assed mathematical models.

July 11, 2015 7:15 pm

I have never understood why people think a model with many variables is better than a simple model.
In medicine, we often attempt to predict the probability of a particular outcome (e.g. disease-free survival in cancer). We begin by observing the outcomes of many cancer patients with long-term follow-up, and then look at various variables in an attempt to see which ones are associated with outcome, e.g. age, gender, type of cancer, stage of cancer, type of treatment, and so on. You can get quite a long list.
Then, using commercial (not proprietary) software, regression analysis is used to find variables which correlate with outcome and to assign a parameter to each. Usually, we look at variables individually at first. So, we might find that gender, stage of cancer, age, and type of treatment all correlate with outcome. But then, by doing multiple regression analysis, we might find that gender and age are highly correlated in our study, so we can use age and ignore gender in our predictive equation. Then we enter stage into our analysis. Whoa. Once we account for stage, outcome is the same for all ages, so age drops out as a predictive parameter. Now, we enter type of treatment. Gads. The treatment is so good for this disease that all treated patients had such good survival that stage of disease drops out. So, the only meaningful parameter is treatment versus non-treatment.
Now, this doesn’t mean that age and stage of disease are not predictors of outcome in other situations, but in the presence of an effective treatment they are no longer significant predictors of outcome (at p < 0.05). And that is just for this one study. If we had a larger group of patients, for example, we would have had more statistical power and might have found that another variable had a parameter different from zero with p < 0.05.
Notice that in this example there are no errors in the measurements of our variables: we know for certain the age, gender, stage, type of cancer, and treatment status. Imagine if we had significant doubts about the actual values of these variables in any given patient – for example, if there were only an 80% chance that we actually knew the age or the gender of the patient, because in some cases it was not recorded and in others it was recorded incorrectly.
This is just a very superficial look at a real-world situation, and it helps to explain why doctors want a simple, rather than a complex, way to predict outcome.
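For concreteness, here is a minimal sketch in Python of the stepwise pattern described above. Everything in it is invented for illustration – the variable names, the effect sizes, and a deliberately confounded stage/treatment relationship – so it is a toy version of the idea, not anyone’s actual analysis:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

treated = rng.integers(0, 2, n)         # 0 = untreated, 1 = treated
# Assumed confounding: untreated patients tend to present at a later
# stage, so stage alone looks predictive even though, in this toy
# data, only treatment drives the outcome.
stage = rng.integers(1, 5, n) + (1 - treated)
age = rng.normal(65, 10, n)             # unrelated to outcome here

prob = 1 / (1 + np.exp(-(-0.5 + 2.5 * treated)))
outcome = rng.binomial(1, prob)         # 1 = disease-free survival

# Step 1: regress on each variable individually.
for name, x in [("age", age), ("stage", stage), ("treated", treated)]:
    fit = sm.Logit(outcome, sm.add_constant(x)).fit(disp=0)
    print(f"{name} alone: p = {fit.pvalues[1]:.3g}")

# Step 2: multiple regression. Stage "drops out" (large p-value)
# once treatment status enters the model.
X = sm.add_constant(np.column_stack([age, stage, treated]))
joint = sm.Logit(outcome, X).fit(disp=0)
print("joint p-values (age, stage, treated):", joint.pvalues[1:].round(4))

On its own, stage shows a small p-value because it is standing in for treatment; in the joint fit its p-value balloons while treatment stays significant – the same “drops out” behaviour described above, produced here by correlation among predictors rather than by measurement error.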

July 11, 2015 7:17 pm

Many years ago I analysed several years of hospital claims history for about one million individuals. I found that the distribution of these claims varied by age (as expected), but the distributions were highly skewed (skewness greater than 6 across most ages) and very leptokurtic (that is, a kurtosis of 60 or more). In simple terms, these distributions had a very fat one-sided tail.
At the time I had delusions that I might be able to model such distributions and so develop mathematical models that would be extremely useful for health insurers, with flow-on uses in economic modeling and a host of other applications with underlying fat-tailed distributions. However, after consulting some of the smartest mathematicians in the world, I discovered that such distributions are not capable of being expressed in a tractable mathematical model, even when using copulas. Unfortunately there are many distributions in nature (and economics) with similar skewness and kurtosis properties, and many modelers seem not to understand that substituting better-behaved distributions makes their models too simplistic to represent the real ones accurately over long periods of time. Hence the long-term conclusions obtained from these models are simply mathematically inappropriate.
Because of this, the models used by traders (for example) to price derivative contracts are overly simplistic and do not properly allow for so-called “black swans” (think of this as high kurtosis). Hence the derivative-pricing failures that contributed to, for example, the Global Financial Crisis. Of course, if the mathematics were available to price derivatives correctly, then that market would almost certainly be a lot smaller, as the true prices of many derivatives would be much higher. The same is true for the models used by climate scientists. Were they based on mathematics that really represented the risks being modeled, the scammers who are making money out of climate change wouldn’t be able to promote their scams.
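As a minimal sketch of that instability (in Python, and assuming a lognormal as a stand-in for the claims distribution, since the real data isn’t available): a lognormal with sigma = 1 has population skewness of about 6.2 and excess kurtosis of about 111, in the same territory as the claims data described above, yet even huge samples estimate those moments badly:

import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
sigma = 1.0  # lognormal(0, 1): true skewness ~ 6.2, true excess kurtosis ~ 111

for trial in range(5):
    claims = rng.lognormal(mean=0.0, sigma=sigma, size=1_000_000)
    print(f"trial {trial}: skew = {skew(claims):5.2f}, "
          f"excess kurtosis = {kurtosis(claims):7.1f}")

Even at a million draws per trial, the kurtosis estimate swings over a wide range from run to run, because it is dominated by a handful of extreme draws. That is exactly why a model calibrated to one period’s sample moments can badly misprice the tail.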
So thanks Willis for publishing this. It just adds to the weight of evidence that exposes the climate change fraudsters.
