Antifragile Notes
Rare Events are Rare
Life has a lot more rare-but-extreme events than people expect.
- They are rare, so the recent record of any domain will typically show none.
- Suppression of small corrections creates short-term stability but accumulates the strain that causes longer-term, catastrophic corrections.
- Many effects, particularly in complex systems, are non-linear, so extreme effects are outsized to their causes.
- Rare events are so rare that we cannot estimate their likelihood; we have no empirical basis to judge their probability.
- All complex plans and models are therefore highly risky.
Acting without Omniscience
Rare events cannot be predicted. However, one can still plan and act in the face of these unknown particulars.
- It is not possible to estimate the likelihood, and hence the risk, of rare events.
- However, one can measure one's exposure to benefit or harm from extreme events.
- A thing and the effect of that thing are different.
- x, f(x), and E[f(x)] are all different.
- It is not necessary to know x with certainty to know E[f(x)].
- Jensen's inequality states that for a convex function f(x), f(E[x]) <= E[f(x)].
- (p 439 ff -- the central idea of the book) "Some people talk about f(x) thinking they are talking about x. This is the problem of conflation of event and exposure."
- "One can become antifragile to x without understanding x, through the convexity of f(x)."
- "The answer to the question 'what do you do in a world you don't understand [i.e., without omniscience]?' is, simply, work on the undesirable states of f(x)."
- "It is often easier to modify f(x) than to get better knowledge of x." (In other words, robustification rather than forecasting Black Swans.)
- Why payoff matters more than probability: "Where p(x) is the density, the expectation, that is the integral of f(x)p(x)dx, will depend increasingly on f rather than p, and the more nonlinear f, the more it will depend on f rather than p."
- Antifragile == convexity re benefits == more to gain than to lose
- Fragile == negative convexity re benefits == more to lose than to gain
- If one arranges one's affairs such that one is antifragile across as many domains as possible, one will typically benefit from the future even while not knowing what that future will be. In fact, the more volatile the better!
- The first step to avoid failure is to protect against downside risk. This biases away from the deathline. It leaves primarily upside. It makes one antifragile.
- If one cannot fully eliminate the downside, one wants two things:
- Multiple, small trials so that your actual experience approximates E[f(x)]
- Buffer, so that no run of bad outcomes makes you hit the deathline
- Barbell strategy: avoid medians; invest in extremes; work hard to reduce downside and boost upside. E.g. hold cash plus 1/n speculative bets, organic construction, etc.
- Many things act like options: no downside; some, perhaps unbounded, upside
- Small businesses, self-employment, free time, money in the bank
- Small changes with the option to abort
- Knowledge
- Many things are short options: lots of downside but only small upside
- Employment in a large corporation (little upside to discovering an opportunity; full downside of getting fired)
- Debt
- A fixed schedule
- Anything that leads to a squeeze
- Living things can adapt to novelty, hence are antifragile; inanimate things cannot take advantage of novelty, hence are fragile. Living things grow stronger after episodes of adversity, up to a point.
- A complex, fragile system needs to be modularized, split into loosely-coupled parts that can be managed separately
- Mega-Pareto: Most results come from a few cases
- "Just worry about Black Swan exposure and life is easy"
Complex Systems
- Effects in complex systems are non-linear
- Iatrogenics: a high probability of small benefits plus a small probability of large harm makes it likely that interventions will create more harm than good.
- Things which have survived a long time have survived many stressors
- Anything fragile will eventually fail
- Mere long-duration survival is evidence of antifragility to the full range of factors one has been exposed to
- Therefore, changing anything in a complex system is likely to cause more harm than good. (But doesn't antifragility imply just the opposite? Or is Taleb saying that the complex system was optimized for all prior factors, but introducing a new factor or suppressing an existing factor drives the system into a new state where the balance among the parts is upset? What about monarchies such as Ancient Egypt or Feudal Christendom? These persisted a long time and yet were not good, not by any rational standard of individual human life. They survived only because nothing better was available and they had means of suppressing the better. They were a stable, local maximum, barely better than anarchy. The serfs would have benefitted from randomizing changes that drove the system out of the range of its inferior, local maximum, provided those changes occurred within a context of awareness of a better political system. E.g. the United States developed largely isolated from the monarchies of Europe, and the Founding Fathers had an idea of a much better political system, a superior maximum -- better, that is, for individual men. What do I conclude from this? That it matters a great deal what the standard of value is.)
- Via negativa: improve by removal
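The iatrogenics point is plain expected-value arithmetic; the probabilities and magnitudes below are invented for illustration:

```python
# Hypothetical intervention: a 95% chance of a small benefit (+1) and a
# 5% chance of a large harm (-40). The "it usually helps" framing hides
# a negative expectation.
p_benefit, benefit = 0.95, 1.0
p_harm, harm = 0.05, -40.0

expected_value = p_benefit * benefit + p_harm * harm
# 0.95 * 1 + 0.05 * (-40) = 0.95 - 2.00 = -1.05: net harm on average.
```

Because the harm term is nonlinear in size relative to the benefit, the rare bad outcome dominates the frequent good one.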
Theft by Transfer of Fragility
Some people and institutions are predators, prospering by forcibly transferring the costs of fragility to others.
- Statist governments, including bureaucrats and central banks
- This is a particular case of the more general problem of theft, here made easier by the obscurity and time delay of the transferred risk: one robs Peter by shifting the fragility to him. (Simple theft directly takes a tangible value. Indirect theft imposes an unchosen cost, such as buying something and leaving someone else to pay the bill without their consent. Transfer of fragility is still more indirect, and so can be far greater in consequence, because it leaves the victim with hidden, delayed damage.)
Philosophical issues
- Lots of argument against Platonism and for skepticism & empiricism but I'm skipping all that
- He seems to be groping for the concept of objectivity. E.g. he roundly rejects rationalism and attempts to avoid skepticism, favoring what he calls empiricism, but without specifics
- Believes without proof that morality == altruism
- Strong, emotional conviction to integrity, but expressed imprecisely as (no-skin, skin, soul)
- Lots of argument against large corporations based on complexity and the agency problem of their members. But note that this assumes (falsely) that it is in one's interest to lie and steal. Also note that there are many endeavors that require coordinated action among many people toward a single purpose; hence corporations and armies.
- Taleb says that ideas survive not because the ideas are true but because the people who hold the ideas survive and prosper. (But why do those people survive? Because something in their ideas is more fit than average, that is, corresponds more to reality. This does admit that a valid strategy based on a false premise might succeed for hidden reasons, but it will be less effective than the possibilities opened up by true premises.)
Books
- Antifragile: Things That Gain from Disorder, by Nassim Nicholas Taleb. Taleb is one of the few modern thinkers seriously considering how to act and thrive in a dynamic world. For most of history, people have feared change per se and tried to suppress it; for example, the 3,500-year, static Ancient Egyptian culture. Today, people may seek safety rather than growth in their jobs, or favor government policies that appear to deliver stability at the expense of liberty. Taleb argues that dynamism is necessary to life, and details a strategy to capitalize on variety and change. One can arrange one's affairs to benefit from a dynamic future, even while not knowing what that future will be!