How to Correct Your Intuition

Thinking takes time and effort, so we can’t apply it to everything we do; otherwise it would take us too long to get anywhere.

For this reason we rely a great deal on intuition. It helps us navigate familiar roads, put our socks and shoes on, and get breakfast together. For the most part, it does its job and lets us be more selective in what we think about.

But intuition doesn’t always do a good job, and what we need in these cases is to inject some cognition. The question is whether we’re very good at recognising when this is the case, and how effective we are at following through.

Gut Override

In 2005, psychologist Shane Frederick developed the cognitive reflection test. It aims to measure how effective we are at identifying a flaw in our intuition and using our head to resolve the problem.

The test consists of 3 questions:

▪️A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

▪️If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

▪️In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

In each case, intuition has something to say.

But given the nature of the questions and the context in which I’m asking them, it should be obvious that they’re designed to elicit intuitive answers that are wrong.

They are trick questions, but they nevertheless highlight how intuitions go awry, and they give us a chance to see how good people are at recovering.

Succeeding on this test requires wrestling with your intuition, and being able to do that is a skill that reaches far beyond the bounds of these questions.

How to Stage a Successful Override

Succeeding on the cognitive reflection test requires a few things.

1️⃣ You have to recognise the problem. If nothing seems amiss, you won’t have a reason to question the gut response.

2️⃣ When a conflict has been detected, you need the motivation to override the intuitive response. If the problem is insignificant or you don’t have the energy, you won’t bother to correct the problem you recognised.

3️⃣ You need the right tools for the override. What good is identifying the problem if you don’t know how to fix it?

A Closer Look at Each Level

Detection 👁️

To know you have to question the intuitive response, you need to have an intuitive response. If there were none, you would have no choice but to work through the problem, or settle for not having an answer.

But that intuitive response also needs to make itself open to questioning. There needs to be the hint of an error, a warning label of low confidence.

In a sense, recognising that you have to question your intuition is itself an intuition.

Here’s the widget question again:

▪️If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

While you bear witness to the gut yelling “100!”, you might feel that there’s a problem with that answer and go in to check it out.

If the first intuition tells you 100, unaccompanied by any feeling of uncertainty or caution, the default is to trust your gut.

Research into the fluency effect is relevant here. Studies have consistently shown that the easier something is to process, the more likely we are to trust it.

For instance, one study found that a hard-to-read font can make people more critical of the information in front of them:

When asked how many animals of each kind Moses took on the ark, a clear font gets more answers of “two”, while a difficult-to-read font leads more people to recognise that Moses never had an ark; Noah did.

In response to this research, some designers created a font called Sans Forgetica. It’s deliberately difficult to read, with the aim of engaging the reader’s mental muscles, making them more critical of the information and better able to remember its contents.

In a review of the research on fluency, Adam Alter and Daniel Oppenheimer write:

“Whether a stimulus is easy to perceive visually, easy to process linguistically, easy to retrieve from memory, or semantically activated, people believe that it is truer than its less fluently processed counterparts.”

In the widget question, the trick hides in a pattern so easy to pick up that it’s difficult to avoid. If one pattern is 5 5 5, and the other 100 [?] 100, it’s not hard to fill in the gap.

But if 5 machines make 5 widgets in 5 minutes, each machine takes 5 minutes to make 1 widget, meaning 100 machines can make 100 widgets in 5 minutes.
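
To make the rate explicit, here’s a minimal sketch of that reasoning in Python (the function name and structure are mine, purely for illustration):

```python
# Each machine works independently at a fixed rate:
# 5 machines making 5 widgets in 5 minutes means each machine
# makes 1 widget every 5 minutes.
MINUTES_PER_WIDGET = 5

def minutes_needed(widgets: int, machines: int) -> float:
    """Minutes for `machines` working in parallel to make `widgets`."""
    widgets_per_machine = widgets / machines
    return widgets_per_machine * MINUTES_PER_WIDGET

print(minutes_needed(5, 5))      # 5.0 -- the original setup
print(minutes_needed(100, 100))  # 5.0 -- not the intuitive 100
```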

Motivation 🏋️

We don’t often do things for no reason. We’ll think if we expect it to pay off somehow.

The rewards for thinking can be external, such as earning money or avoiding punishment; or internal, such as exploring something out of curiosity or enjoying a challenge.

Sometimes we think for the pleasure of thinking, other times we’re pushed into thinking by our environment. Sometimes we think while something is at stake, other times we think just to pass the time.

When it comes to what we do with a dubious intuition, the role of motivation is key. If the problems and uncertainties presented by our intuition tug at the right heartstrings, we’ll give them our attention.

Here’s the bat and ball question again:

▪️If a bat and a ball together cost $1.10, and the bat costs $1 more than the ball, how much does the ball cost?

It turns out people aren’t very good at finding the right answer. In Thinking, Fast and Slow, Daniel Kahneman wrote that “More than 50% of students at Harvard, MIT, and Princeton gave the intuitive—incorrect—answer.”
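
For the record, the intuitive answer is 10 cents and the correct answer is 5 cents. Here’s a minimal sketch that checks both candidates against the question’s two constraints (the variable names are mine):

```python
# Constraint 1: bat + ball = 1.10
# Constraint 2: bat = ball + 1.00
for ball in (0.10, 0.05):
    bat = ball + 1.00
    print(f"ball=${ball:.2f}  bat=${bat:.2f}  total=${bat + ball:.2f}")

# ball=$0.10 gives a total of $1.20, violating the $1.10 constraint.
# ball=$0.05 gives a total of $1.10, so the ball costs 5 cents.
```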

However, watching other people as they make mistakes doesn’t offer an easy way to distinguish between not detecting an error and not having the incentive to fix it.

For that, we can look at research by Wim De Neys, Sandrine Rossi, and Olivier Houdé. They found that people who fail the cognitive reflection test aren’t oblivious to their errors: they go with the intuitive response, but they are less confident in it.

The fact that they were aware there might be a problem but went along with the intuitive answer anyway suggests there wasn’t enough incentive for them to bother trying to fix it.

While people no doubt vary, from those who enjoy engaging with these kinds of problems to those who prefer to invest minimal effort, some have argued that in general people are intellectually lazy, that we’re cognitive misers.

“The rule that human beings seem to follow is to engage the brain only when all else fails—and usually not even then.”

—David Hull, Science and Selection

There is a trade-off between the ease and speed of intuitive thought and the slow, expensive process of deliberate thinking, and we have a natural preference for the easy route.

We have to be rather stingy with our mental effort because life is short and attention is narrow. Each person will draw their own line between what’s worth thinking about and what’s not. Not all problems are worth solving.

Mindware 🌐

Thinking alone is far from a guarantee of the correct response. You also need to know how to solve the problem.

In the cognitive reflection test, you need to know the required mathematical operations, or at least know how you can learn them. Without that, even if you recognise an error and have the desire to fix it, you won’t be able to.
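
Take the lily-pad question from earlier: the mindware it calls for is an understanding of exponential growth. A patch that doubles daily and fills the lake on day 48 must have covered half the lake one day earlier, on day 47, not on day 24. A minimal sketch of that backward step:

```python
# The patch doubles each day and covers the whole lake on day 48.
# Working backwards: halving the coverage undoes one day of growth.
coverage, day = 1.0, 48  # fully covered on day 48

while coverage > 0.5:
    coverage /= 2
    day -= 1

print(day)  # 47 -- half coverage is just one doubling from full
```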

Effort alone can backfire. It’s too easy to go looking for reasons to stick with the intuitive response, or to seek out evidence that supports what we want to be right. We could get to the point of overthinking, ruminating and going around in circles without ever making progress.

Effort has to be directed in the right way to get the right response. This is where learning comes in. We need a repertoire of rules, processes, systems, and models that we can use to understand and solve problems.

Keith Stanovich calls these tools mindware. He writes:

“The mindware necessary to perform well on heuristics and biases tasks is disparate, encompassing knowledge in the domains of probabilistic reasoning, causal reasoning, scientific reasoning and numeracy.”

Many of the intuitions and heuristics we have now exist for a reason. When we succumb to biased reasoning or a misguided intuition, it is often because some rule that worked in our ancestral past remains embedded in our mind and kicks into action in situations it wasn’t adapted to.

Rather than mindware, others point to mental models.

“[Mental models] are chunks of knowledge from different disciplines that can be simplified and applied to better understand the world.”

—Shane Parrish, The Great Mental Models

There are many mental models we could learn. They include ideas like supply and demand, opportunity costs, and regression to the mean. There are models for different disciplines, from math to biology and economics.

Parrish recommends a variety of models from different disciplines, as that helps us see problems from different perspectives:

“By default, a typical Engineer will think in systems. A psychologist will think in terms of incentives. A biologist will think in terms of evolution. By putting these disciplines together in our head, we can walk around a problem in a three dimensional way.”

Refining Intuition

At first, the mental models we learn will be applied at the third stage. We will recognise a problem, we will want to fix it, and we will have the tools necessary to think it through and find the correct solution.

Over time, the mental models can themselves become the intuitive processes. When overlearned, they become automatic. Someone familiar with the cognitive reflection test might find the right answers easily enough that they don’t really think about it anymore.

“Largely subconscious, mental models operate below the surface,” writes Shane. “We’re not generally aware of them and yet they’re the reason when we look at a problem we consider some factors relevant and others irrelevant. They are how we infer causality, match patterns, and draw analogies. They are how we think and reason.”

In a world full of information, statistics, numbers, and opinions, it can prove monumentally difficult to make sense of it all. Problems are often far more complex and ambiguous than the difference in price between a bat and a ball.

It pays to ensure we are using the right models when we try to understand bigger problems.

Our intuitions might be convincing, but they might also be missing something, or answering an easier question than the one we want answered. We need mental models to alert and orient us towards better solutions.

To explore some mental models, check out these posts from James Clear and Shane Parrish’s blog Farnam Street. Shane also has a book of them which is well worth a read.
