Extreme events and how to live with them by Nassim Nicholas Taleb

There is a severe error in reasoning that you often hear from people giving you empirical data, telling you, for example, that we are idiots to worry about Ebola, which killed two Americans, when more people slept with Kim Kardashian that year. That was effectively the figure cited: more people slept with Kim Kardashian in 2014 than died of Ebola. Sometimes you see numbers like these, and this is the kind of thing that, if you read something called the New York Times, you still find in it: factually right, but bogus. It is the kind of thing they teach in psychology departments, that so many Americans die from eating too many hamburgers, from smoking, from too much alcohol.
Now let's think about it in terms of tails. Say Steven Pinker gives you the number that 3,000 Americans die in their bathtubs every year, whereas two or three have died of Ebola. Now let's do a thought experiment. You go off to Mars, come back, and read on Google News that two billion people have died. What is more likely to have killed them: diabetes, obesity, falling, sleeping with Kim Kardashian, or Ebola? Ebola. There we go.
So you cannot compare them. Rule number one: thou shalt not compare a multiplicative, fat-tailed process in Extremistan, one in the subexponential class, to a thin-tailed process that has what we call Chernoff bounds and is meant for mediocre, Mediocristan-type stuff, simply because of the catastrophe principle. And we know, for example, that it is very cheap to protect yourself from Ebola. You see, the probability that the number of people dying from smoking gets multiplied by ten next year is on the order of one in 10^30; the probability that the rate of people dying from Ebola triples is vastly higher.
You cannot compare the two processes; thin tails and fat tails are not comparable. This is not empiricism; it is what I call naive empiricism, which is actually worse. So you cannot compare two processes like these and say that we worry too much about Ebola and too little about diabetes. That is one error of reasoning that comes from not understanding fat tails.
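To make the thin-tail versus fat-tail contrast concrete, here is a minimal simulation sketch (mine, not from the talk). A Gaussian stands in for the Mediocristan process and a Pareto with tail exponent 1.5 for the Extremistan one; the distributions, parameters, and the x3 / x10 thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 1_000_000  # simulated "years" (illustrative)

# Thin-tailed process: yearly totals roughly Gaussian around a stable mean (Mediocristan-like).
thin = rng.normal(loc=100.0, scale=10.0, size=n_years)

# Fat-tailed process: yearly totals Pareto-distributed with tail exponent 1.5 (Extremistan-like).
alpha = 1.5
fat = (rng.pareto(alpha, size=n_years) + 1.0) * 10.0  # Pareto I with minimum 10

def p_exceed(x, multiple):
    """Empirical probability that a yearly value exceeds `multiple` times its own sample mean."""
    return np.mean(x > multiple * x.mean())

for multiple in (3, 10):
    print(f"x{multiple}: thin-tailed {p_exceed(thin, multiple):.2e}   "
          f"fat-tailed {p_exceed(fat, multiple):.2e}")
```

Under these assumed parameters the thin-tailed process essentially never triples or decuples (the empirical probability is zero in a million draws), while the fat-tailed one triples a few percent of the time, which is the catastrophe-principle asymmetry in miniature.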
Now let me show you the law of large numbers. Everything you have learned in statistics is based on the law of large numbers working well. In other words, as you add observations, the mean you observe becomes very stable, and it converges at a rate governed by the square root of n, the number of observations. Agreed? That is what you have on the left. Now, on the right, for a fat-tailed process, the mean exists, but it takes much longer to observe it. Much longer: many, many more observations.
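The "much longer to observe the mean" point can be illustrated with a short simulation (again my sketch, not the speaker's slide). Both distributions below have a finite mean; the Gaussian running mean settles quickly, while the Pareto one, with an assumed tail exponent of 1.2, keeps wandering even after tens of thousands of observations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Thin-tailed sample: the standard error of the mean shrinks like sigma / sqrt(n).
thin = rng.normal(loc=1.0, scale=1.0, size=n)

# Fat-tailed sample: Pareto with tail exponent 1.2; the mean exists (alpha > 1)
# but the running mean converges very slowly.
alpha = 1.2
fat = rng.pareto(alpha, size=n) + 1.0            # Pareto I with minimum 1
true_fat_mean = alpha / (alpha - 1.0)            # = 6 for alpha = 1.2

counts = np.arange(1, n + 1)
running_thin = np.cumsum(thin) / counts
running_fat = np.cumsum(fat) / counts

for k in (100, 1_000, 10_000, 100_000):
    print(f"n={k:>7}: Gaussian running mean {running_thin[k-1]:6.3f} (true 1.000) | "
          f"Pareto running mean {running_fat[k-1]:7.3f} (true {true_fat_mean:.3f})")
```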

Why Organizations Win, According to Musk, Sinek, and Paul Graham

Finite players play to beat the people around them. Infinite players play to be better than themselves.

I could’ve been reading an article analyzing Roger Federer and Rafael Nadal. Except I wasn’t.

ENTER: Simon Sinek

I was listening to Sinek (he was talking at Google) use game theory to describe the kinds of ‘games’ companies engage in: finite games, where the objective is to win (or cause all other participants to stop playing), or infinite games, where the objective is to continue the game as long as possible.

I first came across this line a few months ago, and it has stayed with me since. Suddenly, companies and leaders were falling into one of these two categories for me. If you are an entrepreneur, Sinek's line quite likely speaks to you too, and you'll start placing companies into one of these categories. I hope you will, at least.

Take Sinek's examples as a starting point. Microsoft executives (under Ballmer's leadership) used to tout how much better their products were compared to Apple's. Apple, at the same time, talked more about the end results it was working to achieve. Microsoft, says Sinek, was playing a finite game, whereas Apple was onto an infinite game of self-improvement. Which approach is better, you ask? The business results of both Microsoft and Apple from that time speak volumes about the merits of each approach.

ENTER: Paul Graham

Understanding finite vs. infinite games isn't merely an exercise in the abstract. There is more. Lace it with investor and writer Paul Graham's mental model of good vs. bad tests, and I'd argue we have a conceptual framework that is critical for business leaders anywhere.

Graham’s recent post about unlearning is a masterclass in understanding the merits of startup life relative to life at institutions like schools or large corporations.

The most damaging thing you learned in school wasn’t something you learned in any specific class. It was learning to get good grades.

Graham's core idea is that it's important to know whether you're spending energy on a challenge that's directly connected to reality (a good test) or on one that isn't and is usually imposed by an authority (a bad test). Recognizing and destroying bad mental models may be even more valuable than adding new ones.

What is the litmus test for a good or bad test? Bad tests are inherently ‘hackable,’ meaning that with clever and directed energy, we can often find a shortcut to ‘scoring well’ on that test and acquiring a label of success without fully solving the underlying challenge. Good tests, on the other hand, are ‘unhackable,’ so we can either succeed at solving the challenge to a varying extent, or fail entirely.

A test in a class is supposed to measure not just how well you did on that particular test, but how much you learned in the class.

Good tests like curing cancer, making education free for anyone, or even winning a tennis match are inherently more engaging because the best scientist, entrepreneur, or player will typically win in each respective scenario.

ENTER: Elon Musk

Sinek and Graham’s models seemed familiar the first time I encountered them, and I wondered why. In filing away these new mental models, I was reminded of their neighbor in the idea world: thinking from first principles, especially as popularized by Elon Musk:

Physics teaches you to reason from first principles rather than by analogy. So I said, okay, let’s look at the first principles. What is a rocket made of? Aerospace-grade aluminum alloys, plus some titanium, copper, and carbon fiber. Then I asked, what is the value of those materials on the commodity market? It turned out that the materials cost of a rocket was around two percent of the typical price.

SpaceX went on to cut the cost of rocket launch by ~90%, while still making a profit.

Why didn’t the giant aerospace incumbents figure this out first?! In my view, the incumbents were busy playing finite games against their competitors, hacking bad tests, and thinking derivatively from their last quarterly results, rather than from first principles. Textbook opportunity for disruption.

Sinek + Graham + Musk For the Win

Good tests map beautifully to infinite games and first principles thinking. All three mental models seem to reinforce a simple message: think like a scientist.

Infinite leaders, says Sinek, filter decisions first through the unchanging values of a company. And only then, factor in the company’s dynamic interests. This may result in sub-optimal single decisions and failures along the way. However, over the years, the long string of decisions strung together will be more cohesive and, therefore, valuable (assuming the company’s values are well set up).

The political environment of a classroom, or large company, is often set up to reward those who hack bad tests and play finite games, since the isolated outcome looks favorable. We don't consider how that outcome will eventually be strung together with other outcomes in order to fully connect with reality.

The book What Have You Changed Your Mind About? chronicles painful realizations by experts playing a finite game in their area of expertise. In it, a successful hedge fund manager, Nassim Taleb (author of the excellent book Antifragile), talks of how he lost faith in probability as a guiding light for making decisions.

Good tests, infinite games, and first-principles thinking aren't for everyone. For others, they're the only way to go.

Helpfully, and devastatingly, startups afford little-to-no buffer from the real world. The company either solves the challenge, or dies. This kind of instant feedback and intolerance for ‘hacks’ forces infinite game leadership at startups, painful in the short term, but ultimately more rewarding for everyone involved. And the true test of an entrepreneurial leader? As Satya Nadella said, while transitioning Microsoft from a finite game to an infinite game, leaders must find the rose petals in a field of S#$@.

What bad tests or finite games are you putting energy into? Where have you applied first principles thinking?

Taleb Says World Is More Fragile Today Than in 2007

Oct.31 — Nassim Nicholas Taleb, scientific advisor at Universa Investments, discusses the factors causing global fragility, hidden liabilities in global markets, and what he sees as safe trades in the current market. He speaks with Bloomberg’s Erik Schatzker on “Bloomberg Markets.”

High-end real estate will be the first to fall.

How to be Rational about Rationality

So when we look at religion and, to some extent, ancestral superstitions, we should consider what purpose they serve, rather than focusing on the notion of “belief”, epistemic belief in its strict scientific definition. In science, belief is literal belief; it is right or wrong, never metaphorical. In real life, belief is an instrument to do things, not the end product. This is similar to vision: the purpose of your eyes is to orient you in the best possible way, and get you out of trouble when needed, or help you find prey at a distance. Your eyes are not sensors aimed at getting the electromagnetic spectrum of reality. Their job description is not to produce the most accurate scientific representation of reality; rather, the most useful one for survival.

The Fourth Quadrant: A Map of the Limits of Statistics: Nassim Nicholas Taleb

Statistical and applied probabilistic knowledge is the core of knowledge; statistics is what tells you if something is true, false, or merely anecdotal; it is the “logic of science”; it is the instrument of risk-taking; it is the applied tools of epistemology; you can’t be a modern intellectual and not think probabilistically—but… let’s not be suckers. The problem is much more complicated than it seems to the casual, mechanistic user who picked it up in graduate school. Statistics can fool you. In fact it is fooling your government right now. It can even bankrupt the system (let’s face it: use of probabilistic methods for the estimation of risks did just blow up the banking system).

.. After 1998, when a “Nobel-crowned” collection of people (and the crème de la crème of the financial economics establishment) blew up Long Term Capital Management, a hedge fund, because the “scientific” methods they used misestimated the role of the rare event, such methodologies and such claims on understanding risks of rare events should have been discredited.

.. While most human thought (particularly since the enlightenment) has focused us on how to turn knowledge into decisions, my new mission is to build methods to turn lack of information, lack of understanding, and lack of “knowledge” into decisions—how, as we will see, not to be a “turkey”.

.. certain class of relationships that “look good” in research papers almost never replicate in real life (in spite of the papers making some claims with a “p” close to infallible)

.. For instance you do not “need evidence” that the water is poisonous to not drink from it. You do not need “evidence” that a gun is loaded to avoid playing Russian roulette, or evidence that a thief is on the lookout to lock your door. You need evidence of safety—not evidence of lack of safety—a central asymmetry that affects us with rare events. This asymmetry in skepticism makes it easy to draw a map of danger spots.

.. My classical metaphor: a turkey is fed for 1,000 days—every day confirms to its statistical department that the human race cares about its welfare “with increased statistical significance”. On the 1,001st day, the turkey has a surprise.
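A minimal sketch of the turkey's statistics department, with made-up numbers: a rule-of-three style upper bound on the daily risk shrinks as the safe days accumulate, yet says nothing about day 1,001.

```python
# Naive inference from the turkey's point of view: after d days without harm,
# the observed daily probability of harm is 0, and the classic "rule of three"
# gives an approximate 95% upper bound of 3 / d on that probability.
# The numbers here are illustrative, not from the essay.
for d in (10, 100, 1_000):
    upper_bound = 3.0 / d
    print(f"after {d:>5} safe days: observed risk 0.000, ~95% upper bound {upper_bound:.4f}")

# Day 1,001: the process that generated the data changes (Thanksgiving),
# and the whole past record says nothing about the new regime.
print("day 1,001: surprise. The bound described the past regime only.")
```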

.. And one Professor Ben Bernanke pronounced right before the blowup that we live in an era of stability and “great moderation”

.. I have nothing against economists: you should let them entertain each other with their theories and elegant mathematics, and help keep college students inside buildings. But beware: they can be plain wrong, yet frame things in a way to make you feel stupid arguing with them. So make sure you do not give any of them risk-management responsibilities.

.. In Mediocristan, exceptions occur but don’t carry large consequences. Add the heaviest person on the planet to a sample of 1000. The total weight would barely change. In Extremistan, exceptions can be everything (they will eventually, in time, represent everything). Add Bill Gates to your sample: the total wealth will jump by a factor of >100,000. So, in Mediocristan, large deviations occur but they are not consequential—unlike Extremistan.
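A quick numerical version of that comparison (my sketch; the weight and wealth figures are rough, illustrative assumptions, not Taleb's): add one extreme individual to a sample of 1,000 and look at that individual's share of the total.

```python
# Mediocristan: body weight. A sample of 1,000 people averaging ~70 kg,
# plus the heaviest person ever recorded (~635 kg, an illustrative figure).
sample_weight_total = 1_000 * 70.0
heaviest_person = 635.0
weight_share = heaviest_person / (sample_weight_total + heaviest_person)

# Extremistan: net worth. A sample of 1,000 people averaging ~$100k,
# plus one ~$100B fortune (a Bill Gates-scale, illustrative figure).
sample_wealth_total = 1_000 * 100_000.0
richest_person = 100_000_000_000.0
wealth_share = richest_person / (sample_wealth_total + richest_person)

print(f"heaviest person's share of total weight: {weight_share:.2%}")   # ~0.9%
print(f"richest person's share of total wealth:  {wealth_share:.2%}")   # ~99.9%
```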

.. Fourth Quadrant: Complex decisions in Extremistan: Welcome to the Black Swan domain. Here is where your limits are. Do not base your decisions on statistically based claims. Or, alternatively, try to move your exposure type to make it third-quadrant style (“clipping tails”).
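One way to picture "clipping tails" is capping the exposure per event, for example with a stop-loss, a position limit, or an insurance layer. The sketch below uses an assumed Pareto loss distribution and an arbitrary cap of 10 units; it illustrates the idea of moving an exposure out of the fourth quadrant, not a recipe from the essay.

```python
import numpy as np

rng = np.random.default_rng(2)

# Raw exposure: losses drawn from a fat-tailed Pareto (tail exponent 1.3, illustrative).
alpha = 1.3
losses = rng.pareto(alpha, size=1_000_000) + 1.0   # Pareto I with minimum 1

# "Clipping the tail": cap the loss per event at 10 units (the cap is arbitrary here).
clipped = np.minimum(losses, 10.0)

for name, x in (("raw", losses), ("clipped", clipped)):
    print(f"{name:>7}: mean {x.mean():7.2f}, 99.9th pct {np.quantile(x, 0.999):8.2f}, "
          f"max {x.max():14.2f}")
```

The clipped exposure is bounded and its statistics are dominated by ordinary events, whereas the raw exposure's mean and extremes are driven by a handful of rare, huge losses.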