Off-Topic: Understanding the Economic Crisis

October 9, 2008

Nassim Taleb and Benoît Mandelbrot must be watching this crisis with very different eyes from the rest of us — tragically, in a way. Some people are probably tired of this, but I’ll repeat my recommendation to read the works of both.

See this excerpt from The Black Swan (2007) by Nassim Taleb:

Globalization creates interlocking fragility, while reducing volatility and giving the appearance of stability. In other words this creates devastating Black Swans. We have never before lived under the threat of a global collapse. Financial Institutions have been merging into a smaller number of very large banks. Almost all banks are interrelated. So the financial ecology is swelling into gigantic, incestuous, bureaucratic banks — when one fails, they all fall. The increased concentration among banks seems to have the effect of making financial crisis less likely, but when they happen they will be more global in scale and hit us very hard. We have moved from a diversified ecology of small banks, with varied lending policies, to a more homogeneous framework of firms that all resemble each other. True, we now have fewer failures, but when they occur … I shiver at the thought.

The government-sponsored institution, Fannie Mae, when I look at its risks, seems to be sitting on a barrel of dynamite, vulnerable to the slightest hiccup. But not to worry: their large teams of scientists deem these events “unlikely.”

No, Nassim is not a prophet, a fortune-teller, or a guru. He’d actually hate being called a guru. He simply made an observation about something all of us ignore: unfortunately, our human machinery is very poor at handling abstractions such as randomness.

This isn’t an argument against globalization or other pseudo-socialist nonsense either. The really important part is the second paragraph of the quote.

It’s Happened Before

Long-Term Capital Management was the darling of the financial world. An American hedge fund founded in 1994, it counted among its partners not just one, but two Nobel Prize winners in Economics: Robert C. Merton and Myron Scholes. They believed they had the mathematics (cough Gaussian cough) capable of predicting any type of event. In 1998, the Russian Financial Crisis destroyed their Gaussian models, which completely ignored and underestimated Black Swans, and LTCM lost USD 4.6 billion in less than four months.

And do you think people learned their lesson? Unfortunately we are more stubborn than that: Scholes and Merton’s theories are taught in economics programs to this day.

This isn’t a quality unique to the economic world: various other fields accept and employ absurd theories, without foundation and without results, and still treat them as if they were the next great revolutions. Ordinary people are easily fooled by names and credentials alone. In short, if credentials were worth anything, a Nobel Prize blowing up in 1998 should have taught us something.

I often say the following: companies and charlatans who sell methodologies (of any type: financial, human resources, management, etc.) have the easiest job in the world — they just need to be good salespeople.

If your client succeeds after implementing the methodology: “See? You succeeded because you implemented our revolutionary methodology.” Of course this client then becomes a “success case” in their portfolios.

If your client fails after implementing the methodology: “Of course it failed — you didn’t implement it exactly as we said, your teams lacked commitment. There’s nothing wrong with our methodology, just with you.” And, obviously, this client will never appear in the portfolio, since nobody likes publicizing failures.

It’s very easy to fool people. And don’t look the other way: you’re implementing a methodology like this right now. I know it!

Falsifiability

People have the terrible habit of asking the wrong questions: “How do we know if a theory is true?” That’s exactly why they also receive the wrong answers, which leads to even more wrong decisions.

Once again, as I’ve said countless times in previous articles: we are made to be fooled. Worse than that — we consciously don’t exercise our skeptical abilities.

Every time we see a “success case,” we automatically accept the theory as “true.” Or, in an even more twisted way: “I’ve never heard of a case where this methodology failed, so it must be valid.” That’s the conclusion most “managers” and “executives” reach. How many times do we need to repeat this?

Absence of evidence is not evidence of absence.

Today we use a lot of statistical data to make decisions. Statistics is very useful when used correctly. Used the wrong way, it is a recipe for disaster.

Let’s see: “In the last 3 years we’ve been growing 2% every month, so we can conclude with certainty that we’ll continue growing 2% in the coming months.” Everyone has done this, until a Black Swan happens, and then the excuse changes: “I don’t know what happened, it was an accident; according to the data this shouldn’t have happened.”
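
To make the trap concrete, here is a minimal Python sketch of exactly this reasoning. The numbers are invented for illustration: the average growth of the first 36 months forecasts the future perfectly, right up until a single rare shock makes the “certain” forecast meaningless.

```python
# Hypothetical revenue series: steady 2% monthly growth, except for
# one rare "Black Swan" month. All numbers are invented for illustration.
revenue = [100.0]
for month in range(1, 48):
    shock = -0.60 if month == 40 else 0.0  # one rare -60% event
    revenue.append(revenue[-1] * (1.02 + shock))

# The naive forecast: average the growth observed in the first 36 months
# and extrapolate it "with certainty", exactly as the quoted manager does.
growth = [revenue[i] / revenue[i - 1] - 1 for i in range(1, 37)]
avg = sum(growth) / len(growth)

forecast = revenue[36] * (1 + avg) ** 11  # projected month 47
print(f"average observed growth: {avg:.2%} per month")
print(f"forecast for month 47:   {forecast:8.1f}")
print(f"actual at month 47:      {revenue[47]:8.1f}")
```

The forecast comes out around 250 while reality lands near 100: three years of clean data said nothing about the one month that mattered.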

Let’s be clear: a rare event is called rare precisely because it doesn’t happen all the time. And it’s exactly this type of rare, random, unpredictable event that tends to cause the billion-dollar losses or billion-dollar gains, depending on whether your world is Gaussian or Paretian.
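
A quick way to feel the difference between those two worlds is to simulate them. This is only a sketch, using Python’s standard-library generators as stand-ins for “thin-tailed” and “fat-tailed” reality: in the Gaussian sample no single draw matters, while in the Paretian sample one rare draw can dominate everything.

```python
import random

random.seed(1)
N = 100_000

# Thin tails: absolute values of standard Gaussian draws.
gauss = [abs(random.gauss(0, 1)) for _ in range(N)]

# Fat tails: Pareto draws with shape alpha = 1.1 (mean barely finite).
pareto = [random.paretovariate(1.1) for _ in range(N)]

for name, xs in (("Gaussian", gauss), ("Paretian", pareto)):
    share = max(xs) / sum(xs)
    print(f"{name}: largest single draw = {share:.2%} of the sample's total")
```

In the Gaussian run the largest draw is a negligible fraction of the total; in the Paretian run a single observation can account for a sizable share of everything. That is where the billion-dollar swings live.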

But returning to the initial problem: are statistical data from the past not sufficient to ensure the validity of a theory?

Obviously NOT.

What past data can do, at most, is falsify a theory. As Nassim explains in Fooled by Randomness: with past data we can indeed prove that a theory is invalid, but we can never prove that a theory is valid.
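
The asymmetry fits in a few lines of code. A toy sketch (my own, not Taleb’s): a single counterexample settles the matter, while any number of confirmations only ever yields “not falsified yet.”

```python
# Toy illustration of Popper's asymmetry: one counterexample falsifies
# a theory; a million confirmations only mean "not falsified yet".

def test_theory(predicate, observations):
    for obs in observations:
        if not predicate(obs):
            return "FALSIFIED"          # a single failure settles it
    return "not falsified (so far)"     # never "proven true"

def all_swans_are_white(swan):
    return swan == "white"

print(test_theory(all_swans_are_white, ["white"] * 1_000_000))
print(test_theory(all_swans_are_white, ["white"] * 1_000_000 + ["black"]))
```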

For example, today we know that Newton’s classical mechanics is not universally valid, because Einstein’s Theory of Relativity superseded it. But we also know within which contexts the classical theory can still be used and when we need relativity. Good scientific theories are those that offer criteria for judging their falsifiability, never their validity.

Astrology and other pseudo-sciences are dogmatic. Anything dogmatic should be automatically discredited, precisely because it blocks any attempt to test its falsifiability. That is the signature of charlatans.

The argument I described in the previous section is exactly what charlatans do: if it worked it’s thanks to the theory; if it didn’t work it’s because you didn’t apply the theory correctly. The prediction didn’t come true despite Mars being aligned with Saturn because in reality it was “a little bit” off position — otherwise it would have worked…

Nassim gave another great example: “I’ve been conducting a study on George Bush’s life. After 20,000 observations I can assure you that in none of them did he die. I can therefore state with certainty, based on this historical data, that Bush has never died; he is, therefore, immortal.”

Fallacies

Recently I’ve given several tips about things to avoid. If we’re talking about management, or about controlling non-mechanical things (especially people), and the theory doesn’t take Pareto into account, throw it out.

If someone offers a theory but leaves no room to evaluate its falsifiability, it’s charlatanism: throw it out.

Self-help books are like that: they offer flowery theories, covered in honey, packaged attractively. But unlike scientific theories, they don’t let us try to prove they don’t work — they only assert that they work, cite various “success cases,” and hide all failed attempts.

Since the beginning of last month I’ve been traveling to give talks every week, and I always pass by airport and bus station bookstores. Paying closer attention, I noticed that the featured books are exactly of this type: cheap charlatanism. If not all of them, almost all.

The “beautiful” theory only exists today because it hasn’t yet encountered a Black Swan in its path. “Ah, it’ll never happen, because it’s never happened until now.” And that’s precisely why the chances of it happening may be even greater: because it never has!

Watch out for the following Fallacies:

  • Induction Fallacy: deriving a general rule from a limited set of observations that cannot actually guarantee it. Induction means trying to extract general principles from known facts; the fallacy is treating the generalization as proven.

  • Narrative Fallacy: the creation of a post-hoc story such that the event seems to have had an identifiable cause.

  • Regressive Statistical Fallacy: believing that the probability of future events is predictable by examining occurrences of past events.

  • Ludic Fallacy: believing that the structured randomness found in games resembles the unstructured randomness found in life. It’s the problem of confusing the map (model) with the territory (reality).

This last one takes into account:

  • it’s impossible to have all the information
  • very small variations in data can lead to enormous impact (the Butterfly Effect; yes, it happens all the time, and there’s a sketch of it right after this list)
  • theories/models based on empirical data are flawed, since events that haven’t happened yet cannot be taken into account
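
On that second point, the classic demonstration (my illustration here, not from the article) is the chaotic logistic map: two trajectories that start 0.0000000001 apart become completely unrelated within a few dozen steps.

```python
# Butterfly Effect in one function: the logistic map x -> r*x*(1-x)
# with r = 4.0 is chaotic, so a 1e-10 difference in the starting
# point grows until the two trajectories have nothing in common.

def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a, b = 0.3, 0.3 + 1e-10
for steps in (10, 30, 50):
    print(steps, round(logistic(a, steps=steps), 6),
                 round(logistic(b, steps=steps), 6))
```

At 10 steps the two runs still agree to six decimal places; by 50 steps they are unrecognizable. No amount of measurement precision saves the long-range forecast.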

It’s the example Taleb explains: suppose you’re pulling colored balls from a covered box, without being able to see inside. You draw 5 white balls and 5 red balls and, from this empirical data, reach the conclusion that “1 red ball will always come out for every 2 balls drawn.” But little do you know that the box has a hole in the bottom, and a boy is hiding under the table. Upon hearing your assertion, he starts feeding in more white balls than red. That’s reality.
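
The same story as a simulation. The setup is from the article; the numbers, the seed, and the Box class are my own illustration. The empirical “law” is fitted on the first draws, and the moment the hidden boy changes the contents, the “validated” model is simply wrong.

```python
import random

random.seed(7)

class Box:
    """A covered box whose contents can change behind your back."""
    def __init__(self, p_red):
        self.p_red = p_red
    def draw(self):
        return "red" if random.random() < self.p_red else "white"

box = Box(p_red=0.5)                      # honest box at first
sample = [box.draw() for _ in range(10)]
p_red = sample.count("red") / len(sample)
print(f"empirical 'law' after 10 draws: P(red) = {p_red:.2f}")

box.p_red = 0.1                           # the boy feeds in white balls
future = [box.draw() for _ in range(1000)]
print(f"reality afterwards:             P(red) = {future.count('red') / 1000:.2f}")
```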

We exercise our skepticism far too little. This isn’t about becoming paranoid — it’s about evaluating things just a little better than the mediocre way we do it today.

Emotions

As Malcolm Gladwell says in Blink: we effectively make decisions in the blink of an eye. Of course, the decision will be only as good as our experience, knowledge, and skills.

Taleb describes the case of a person who, because of a tumor, had to undergo brain surgery that removed the part of the brain responsible for our emotions. Everything else remained intact.

“Excellent!” someone might think: here is a 100% rational person, who won’t let emotions interfere with reason. You could deduce that this person would be capable of making intelligent and rational decisions.

Surprise: this person became completely incapable of making any decision. They couldn’t even bring themselves to get out of bed. Studies like this suggest that our decisions are driven far more by the emotional part than by the rational one, contrary to what we imagine.

Anyone who has dealt with artificial intelligence reaches the same conclusion: we humans must rely on approximation mechanisms, since it’s simply impossible to take every variable into account. Evaluating everything we know would take so long that predators would have driven us extinct millennia ago.

Deciding in the blink of an eye, or worse, trying to be purely “rational,” has its advantages and disadvantages. Open-minded people who are studious, experienced in many areas, and rich in skills will probably make many correct decisions very quickly (some by luck, and some still wrong). Closed-minded, mediocre people will make many wrong ones.

Example of a wrong decision: trusting sellers of pseudo-sciences.

As Karl Popper would say, “one should not take science too seriously,” precisely because science allows itself to be wrong and refines itself over time. If you take everything too seriously, you risk relying on theories that simply haven’t been falsified yet, and your next decision might be the very Black Swan that falsifies them.

Conclusion

Be very, very careful with experts. I don’t want to denigrate all of them (there are indeed many good ones), but experts in abstract, intangible things like “methodologies” and “economics” should always be viewed with suspicion. A good credential doesn’t make a theory better or worse; it’s simply irrelevant.

Don’t let your decision be biased by non-scientific theories (pseudo-science, superstition, astrology, homeopathy, etc.).

Again, read Taleb: what he says is obvious, but for some reason we all ignore it. Few people truly understand what randomness is. Stop frenetically watching Bloomberg, stop refreshing your browser every 5 seconds to check financial indices — none of that will help you: you’ve already ignored the Black Swan, you’ve already lost.

Just to tie this back to Agile philosophy: it should now be clearer why Agile methodologies insist so much on short Sprints/Iterations. They know it’s impossible to predict the long-term future, and that’s precisely why they prioritize what’s truly important and plan only the short term: only what’s effectively possible. Agile methodologies seem especially designed to defend against the Black Swans that still haunt traditional software development teams, as I explained in my previous article.

Everyone has only known white swans in the past and infers that Black Swans don’t exist. That’s where the danger lies!