The Black Swan
Taleb divides the world into three kinds of events:
- White swans. This is "mild" randomness, where one new measurement cannot affect your sample average too much. This applies to quantities like a person's height or weight, where large deviations are physically impossible. The Gaussian distribution works well for white swans, "taming" their randomness and yielding predictions that we can be confident about.
- Gray swans. This is "wild" randomness of the tractable kind. One new measurement can dwarf all previous ones, thus moving the average quite far. This applies to socially constructed quantities such as book sales for a sample of authors, or net wealth for a sample of people. We can model such distributions using power laws, but we should be wary that the exponent of a power law is more difficult to estimate from data than the parameters of a Gaussian. Taleb argues that we should use power-law models only to describe phenomena, not to make concrete predictions.
- Black swans. This is "wild" randomness of the intractable kind. Events in this category represent the unknown unknown, e.g. a new technology that changes the world, a sudden war after a long period of peace, and so on. Anything that is unexpected but has extremely high impact is a black swan.
In Taleb's colorful terminology, white swans live in Mediocristan, while gray and black swans live in Extremistan. His central thesis is that we almost always assume a variable is in Mediocristan, when in fact it's in Extremistan. Taleb is an options trader who made a lot of money in several stock-market crashes by betting on extremely unlikely events. As such, he has a lot of distaste for economists and financial forecasters, whose prediction track record is extremely poor, though hardly anyone notices (more about this in a minute). Taleb accuses these forecasters of being Locke's madmen: people who reason correctly from erroneous premises. He claims that their fancy models rest on white-swan, Gaussian variables, and are thus completely useless in a world dominated by gray- and black-swan events.
Why do gray and black swans rule the world? This has to do with the idea of scalability: If a book sells a thousand copies, then selling the one-thousand-and-first copy requires no extra effort from the writer. But if a baker sells a thousand cookies, the one-thousand-and-first cookie still requires work. So the writer's profession is scalable, whereas the baker's is not. Scalable phenomena exhibit winner-takes-all effects, thus giving rise to a power-law distribution. (There is one J. K. Rowling and she sells her books everywhere.) Non-scalable phenomena are necessarily local. (Every city has its own bakery.) Technologies such as the alphabet, the printing press, and sound recording enable scalability, and so they make our world less like Mediocristan and more like Extremistan. But our brains evolved in Mediocristan, and are ill-equipped to deal with scalable phenomena.
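The difference between the two worlds is easy to see in a quick simulation (my own sketch, not from the book, with illustrative numbers): heights are drawn from a Gaussian, while "book sales" are drawn from a power-law (Pareto) distribution with a hypothetical tail exponent of 1.1. In the Gaussian sample the largest single observation is a negligible slice of the total; in the power-law sample a single lucky author typically accounts for a very visible share of all sales.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Mediocristan: heights in cm, roughly Gaussian.
heights = rng.normal(loc=170, scale=10, size=n)

# Extremistan: "book sales" from a classical Pareto distribution
# (hypothetical tail exponent alpha = 1.1, minimum of 1000 copies).
sales = (rng.pareto(1.1, size=n) + 1) * 1000

for name, sample in [("heights", heights), ("book sales", sales)]:
    share_of_max = sample.max() / sample.sum()
    print(f"{name}: the largest observation is {share_of_max:.2%} of the total")
```

The exponent 1.1 is just an illustrative choice; the closer a power law's exponent gets to 1, the heavier the tail and the more extreme the winner-takes-all effect, which is also why Taleb warns that such exponents are hard to estimate from finite data.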
In what ways do we pretend that black swans don't exist? Here Taleb describes a constellation of fallacies and heuristics that make us less rational than we'd like to believe we are:
- Confirmation bias. We tend to overvalue confirming evidence, failing to realize that no amount of it can prove a hypothesis. We tend to ignore or disqualify falsifying evidence, even though a single piece of it is enough to prove our hypothesis false.
- The narrative fallacy. Our memories require dimensionality reduction, so we weave events into stories, making them easier to remember. In the process, we keep the details that match the story, and forget everything else. We invent causes for events, when the real causal graph might be more complicated, or even unidentifiable. (My cognitive psychology professor liked to say that we humans are obsessive pattern seekers, happy to see patterns even where randomness is at work.)
- Hindsight bias. In retrospect, it seems inevitable that WW2 was bound to happen. We can find lots of confirming evidence, and weave a convincing story. But if you look at the writings from that time, no one knew such a big war was coming.
- The various biases and heuristics that we use in day-to-day decision making. For example, we give more weight to anecdotal evidence, especially when it comes with a vivid image (a plane crashing into a skyscraper), than to statistical evidence (more deaths per mile traveled in a car than in a plane).
- Silent evidence. We only hear from those who lived to tell the story. For example, "beginner's luck" in gambling is explained like this: those who had bad luck at the beginning did not take up gambling; therefore a disproportionate fraction of gamblers did, in fact, have luck at the start. Silent evidence gives us an illusion of stability and safety: we underestimate the risks we took in the past, because we survived them all. If we hadn't, we wouldn't be here to contemplate the question. So we compute odds from the point of view of the winning gambler, and not based on everyone who started in our cohort. (A tiny simulation after this list makes the effect concrete.)
- The ludic fallacy. We assume that the world is like a game of chance, where the probabilities are known and well behaved. But the world is full of high-impact risks that we don't even model.
- Epistemic arrogance. We overestimate what we know, and underestimate what we don't know. We fail to appreciate that the forward process (starting at A and observing B) is much simpler than the backward process (observing B and inferring that its cause was A). Historians think that they understand causes, but Taleb doesn't trust them.
- Self-serving bias. When our prediction turns out to be right, we attribute our success to our skills and insight. When our prediction turns out to be wrong, we blame our failure on external circumstances, unforeseen events, and outliers. This bias allows us to be blind to how bad our prediction track record truly is.
- Platonification. We cut the world up into crisp categories, and then assume that the world matches our model precisely. (If it's not in my model, it doesn't exist.)
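To see how strong the silent-evidence effect mentioned above can be, here is a tiny Monte Carlo sketch of the beginner's-luck story (my own illustration, with made-up numbers): everyone plays ten fair rounds, nobody has any edge, but only the people who finished those rounds ahead "take up gambling" and stick around to be interviewed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people = 100_000

# Everyone plays 10 fair rounds that pay +1 or -1 with equal probability.
first_rounds = rng.choice([-1, 1], size=(n_people, 10))
early_winnings = first_rounds.sum(axis=1)

# Silent evidence: only those who were ahead at the start become gamblers
# and are around later to tell us how they got into the game.
survivors = early_winnings > 0

print("average early winnings, everyone:  ", early_winnings.mean())
print("average early winnings, survivors: ", early_winnings[survivors].mean())
```

The game is perfectly fair, so the unconditional average is essentially zero, yet the survivors' average is solidly positive. The "beginner's luck" is created entirely by throwing away the losers.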
So what are Taleb's take-home lessons? Don't trust the forecasters. Train yourself to make important decisions rationally, not intuitively. Focus on the consequences of an event (which are knowable), and not on its probability (which is unknowable except in Mediocristan). Maximize your exposure to positive black swans (go to parties, seize anything that looks like an opportunity). Remain paranoid about negative black swans.
This book was a lot of fun to read. Taleb combines his insights with his life story, and plenty of personal philosophizing. His tone is at times rambling and arrogant. Here is a surprisingly bitter quote from chapter 14:
Is the world unfair? I have spent my entire life studying randomness, practicing randomness, hating randomness. The more that time passes, the worse things seem to me, the more scared I get, the more disgusted I am with Mother Nature. The more I think about my subject, the more I see evidence that the world we have in our minds is different from the one playing outside. Every morning the world appears to me more random than it did before, and humans seem to be even more fooled by it than they were the previous day. It is becoming unbearable. I find writing these lines painful; I find the world revolting.
Nassim Nicholas Taleb in The Black Swan
If you can handle that, you can handle the whole book. Seriously, most of it is not nearly as bitter. Or you could grab his first book instead, which I haven't read. Some reviewers on Amazon think it's less rambling.