The Psychology of Risk

Humans are lousy at judging risk. Why is that? Let’s dive into the psychology and neuroscience behind it.

Imagine you hand a four-year-old a cello and demand they play Bach’s Cello Suite No. 1 in five years’ time, insisting on a month-by-month practice plan. Absurd, isn’t it? Yet every project kick-off meeting follows the very same script: complete roadmaps for teams that barely know which notes they need, where those notes are located, or what a note even is. Agile teams water these down into milestone plans to feel less like waterfall, but the idea is the same. It is why we end up with what I call placebo confidence: we fool ourselves into thinking we are confident in our iffy plans when in reality we have no clue, but we are too afraid to admit that, either publicly or to ourselves. We pretend it is all copacetic as long as no one exposes the placebo. Fortunately, few do, because they have no idea either.

Sometimes people try to compensate by making risk explicit in plans, adding a discrete scale for confidence (e.g. low, medium, high). These labels are entirely arbitrary, even if someone decides to link them to confidence percentage ranges. Let’s say that “low” ends at 40%. How do you measure or substantiate that figure? You cannot, and that’s the point: it is placebo confidence. Any thresholds, whether confidence percentages or even business impact, are going to be made up. It appears rigorous, but it has the rigour of the roll of a die.
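To see how flimsy such a scale is, here is a minimal sketch of the mapping; every threshold in it is invented, which is exactly the point:

    # A sketch of such a scale; every threshold is made up, which is the point.
    CONFIDENCE_BANDS = {
        "low":    (0.00, 0.40),  # why does "low" end at 40%? no one can say
        "medium": (0.40, 0.75),  # why 75%? equally unanswerable
        "high":   (0.75, 1.01),  # nudge past 1.0 so full confidence fits too
    }

    def label(confidence: float) -> str:
        """Map an unmeasurable confidence figure onto an arbitrary label."""
        for name, (low, high) in CONFIDENCE_BANDS.items():
            if low <= confidence < high:
                return name
        raise ValueError("confidence must lie between 0 and 1")

Change any number in that table and the process works exactly as well, or exactly as badly, as before. That is what the rigour of a die roll looks like in code.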

Back to Africa

So, why does this happen? Because we remain foragers, lost in a sea of spreadsheets. Back on the savannah, overreacting to a rustle saved lives. That survival instinct hardened into loss aversion: Kahneman and Tversky write that losses loom larger than gains, though not twice as large, as is often cited, and the effect’s robustness has since been questioned. Still, loss aversion is so primal that even capuchin monkeys reject gambles framed as losses, despite identical payoffs. In today’s boardrooms, our brains are still wired to treat risk as a life-or-death bet; since very few business risks are, the alarm never fires and we tend to ignore the dangers.

Boehm’s cone of uncertainty shows that early software estimates can be off by a factor of four in either direction. And still we expect teams to estimate delivery dates months or years away. The only real surprise is that we act surprised when such self-imposed deadlines are missed.
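To make that spread concrete, here is the arithmetic for an assumed twelve-month estimate:

    # Boehm's cone at project inception: off by ~4x in either direction.
    estimate_months = 12
    low, high = estimate_months / 4, estimate_months * 4
    print(f"A {estimate_months}-month estimate really spans "
          f"{low:.0f} to {high:.0f} months")  # -> 3 to 48 months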

But software is not the only area where this happens. Buehler et al. found that students were 84% confident they could meet self-imposed deadlines for various tasks, yet only 44% did. People are notoriously bad at incorporating uncertainty into their plans.

Stories over statistics

Part of the problem lies in the fact that humans think in stories, not statistics. That is because stories activate brain regions that process emotions (e.g. the amygdala), whereas statistics rely on the error-prone calculations of the prefrontal cortex. Bruner calls this the narrative mode of the mind: we crave plots over abstract data. Shiller’s research on narrative economics demonstrates how booms, busts, and fads spread through catchy tales, not through in-depth analyses and serious discourse. In projects, too, we spin upbeat yarns instead of grappling with probabilities and risk.

Which brings us to our probabilistic blind spot. Through unconscious mental shortcuts, we translate a probabilistic statement such as “This venture has a 42% chance of failure” into a statement about our confidence. In practice, that means we mentally round off probabilities: a 5% risk of failure feels like zero, and an 80% chance of success reads as a guaranteed win. When we recast odds as 42 out of 100 (instead of 42% or 0.42), we can boost correct Bayesian inference by a factor of nearly three, because our brains tally frequencies more naturally than they manipulate decimals. What is more, cognitive overload is why people prefer fewer, not more, statistics when making decisions.
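Here is a small sketch of that frequency effect, using the textbook screening numbers common in the natural-frequency literature (assumed here purely for illustration). The decimal route and the tally route give the same answer, but the tallies are the version our heads can actually follow:

    # Bayes' rule two ways; the screening numbers are illustrative assumptions.
    base_rate = 0.01        # 1% of people have the condition
    sensitivity = 0.80      # 80% of the sick test positive
    false_positive = 0.096  # 9.6% of the healthy also test positive

    # Decimal route: P(sick | positive) via Bayes' rule.
    p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive
    posterior = base_rate * sensitivity / p_positive

    # Frequency route: imagine 1,000 people and simply tally.
    sick_and_positive = 8      # 80% of the 10 who are sick
    healthy_and_positive = 95  # 9.6% of the 990 who are healthy, rounded
    tallied = sick_and_positive / (sick_and_positive + healthy_and_positive)

    print(f"decimals: {posterior:.2f}, tallies: {tallied:.2f}")  # both ~0.08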

We also crave the illusion of control. People tend to assign personal value to bets based on subjective beliefs rather than objective probabilities. The Ellsberg paradox shows that humans prefer known odds over ambiguous ones, even at extra cost: in the classic urn experiment, people bet on a draw with known one-in-three odds rather than on an ambiguous draw whose odds might well be better. In other words, people prefer a placebo to the messy truth.

What can we do?

We cannot “framework” our way out of the problem: we cannot manufacture certainty through a theatre of process. Admitting we are cavemen in suits or hoodies is step one. Step two is rewiring our tools and culture. First and foremost, all of us, leaders included, must admit “I do not know yet” more often.

Second, we must reward such honesty. Praise the person who admits uncertainty more than the one who slaps a confident veneer on shaky assumptions. Psychological safety beats forced positivity every time.

Third, we must perform pre-mortems prior to every commitment: imagine the project has already failed and ask, “What killed it?” If we list failure criteria upfront, we can incorporate safeguards now, not later.

Fourth, any estimate ought to be grounded in reference-class forecasts: use similar past experiences to predict what might happen rather than rely on instinct and guesses.
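A minimal sketch of what that can look like in practice; the ratios below are made up, and in reality you would pull them from your own delivery history:

    # Reference-class forecasting: scale a gut estimate by how similar
    # projects actually turned out. All numbers here are invented.
    import statistics

    # Ratios of actual to estimated duration for comparable past projects:
    past_ratios = [0.9, 1.1, 1.2, 1.3, 1.4, 1.6, 2.0, 2.4]

    estimate_weeks = 10  # the gut-feel estimate for the new project

    median_ratio = statistics.median(past_ratios)           # typical outcome
    p80_ratio = statistics.quantiles(past_ratios, n=10)[7]  # 80th percentile

    print(f"Typical outcome: about {estimate_weeks * median_ratio:.0f} weeks")
    print(f"Eight times out of ten: within {estimate_weeks * p80_ratio:.0f} weeks")

The arithmetic is trivial; the discipline lies in taking the numbers from history rather than from optimism.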

Face the music

Let’s now return to our aspiring cellist. No one would fault the child for ignorance, because we understand that real mastery requires deliberate practice, feedback, and, above all, effort over time. The same is true for knowledge work. Insisting on perfect foresight when you cannot even pick up the bow or curl your fingers around the neck of the cello is not management; it is madness. So, embrace uncertainty and let the roadmap evolve. Only then will teams play real music instead of rehearsing fantasies.