Why don’t facts change people’s minds?

“These are the facts!  Why won’t you believe them?” is one of the more common complaints you hear during an argument.  Most of us know the feeling of talking to someone about, let’s say, climate change, gun control, abortion or the death penalty.  You think they’re wrong and you have the evidence to prove it, but they just won’t listen to the facts.  Why do people do this?  Why don’t facts change people’s minds?

To answer this question, we have to ask another one that may seem so self-evident you've probably never bothered to ask it: what do our brains use information for?

If you said something like "to know what's true" or "to get an accurate picture of reality", you are only half-right.  Our brains use information for many purposes, and "knowing truth" is just one.  Others include "confirming existing beliefs", "protecting our self-image" and "maintaining group loyalty" (1).  I'm going to call these "non-truth goals": ways our brains use information that don't need to correspond to the truth.

Psychologists have long been aware that we reason towards goals other than truth; they call it "motivated reasoning" (2), a rarely used term that explains much of why people stubbornly believe things that aren't true.  So how is motivated reasoning different from regular reasoning?

Most of us know reasoning as a "bottom-up" process.  We gather and analyse facts, and use them as evidence to support true statements about reality.  This is what we mean when we say "follow the facts wherever they lead".  Motivated reasoning, however, is a top-down process.  We start with the psychological goals we want to pursue ("confirming existing beliefs", "protecting self-image", "maintaining group loyalty") and then believe whatever information helps us fulfil them.  Think of your brain as a courtroom.  Regular reasoning is the judge, who weighs up all the evidence and tries to make the best decision.  Motivated reasoning is more like a lawyer: it starts with the position it wants to reach and then collects only the information that supports that case. (3)

Motivated reasoning reframes the problem of false belief.  Believing falsehoods is not always the case of a truth-machine that has malfunctioned, but of a complex brain with multiple competing motivations being somehow induced to pursue something other than truth.

What is this inducement?  What flips the switch between our inner lawyer and our inner judge?  The answer is emotion.  If I told you that Denmark has an area of 42,924 square kilometres and that the atomic number of polonium is 84, you would likely believe me without a second thought, because your brain would register almost no emotional response to either statement.  Despite this so-called "post-truth" era, most facts still fall into this category.  However, if I told you facts about climate change, the death penalty, abortion or gun control, you would feel an immediate pang of positive or negative emotion before you had even had a chance to consciously process the statement itself (4), and that pang would trigger your nefarious inner lawyer.  Importantly, this emotional trigger usually occurs outside of conscious awareness, so people can accept and reject facts based on it and still feel like they are being objective.

This is why motivated reasoning is so effective and so frustrating.  It doesn't make us blindly adhere to outright falsehoods without a shred of evidence; that would be too obvious.  Instead it uses subtle emotional cues to subconsciously steer us towards the "evidence" that supports what we want to believe.  Not only are we convinced of our position, we even have the "evidence" to prove it!

Towards the end of any discussion about psychology, post-truth and polarisation it’s obligatory to offer some vague advice to give the illusion that something can be done.  It usually takes the form of “Hey, you know that universal, inbuilt tendency I just explained that makes us process information badly without even realising it?  Let’s all agree to do it less”.  The problem with pervasive, unconscious, and automatic processes is that they are pervasive, unconscious and automatic.

I could say: next time you are in a conversation with someone who just won't listen to the facts, remember they aren't doing it deliberately.  Think about the other motivations they, and you, might have for your beliefs, and rather than trying to overwhelm them with evidence, try addressing the subtle emotional reasons they might have for rejecting the information.  Does it threaten their identity?  Their self-image?  Would believing it put them at odds with their friends and family?

That might all work, but I think the more practical advice is: next time you are in a conversation with someone who just won't listen to the facts, don't hate them for it.



(1) Jost, J. T., Hennes, E. P., & Lavine, H. (2013). "Hot" political cognition: Its self-, group-, and system-serving purposes. Oxford Handbook of Social Cognition, 851-875.

(2) Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.

(3) Ditto, P. H., Pizarro, D. A., & Tannenbaum, D. (2009). Motivated moral reasoning. Psychology of Learning and Motivation, 50, 307-338.

(4) Lodge, M., & Taber, C. S. (2007). The Rationalizing Voter: Unconscious Thought in Political Information Processing.


What turns political disagreement into contempt?

The biggest threat facing Western society is not climate change, immigration, terrorism or nuclear war; it's political polarisation. Democracies may break down arguing over how to deal with these issues long before the issues themselves can take their existential toll. Pew Research data (below) shows the US as two icebergs slowly shearing apart. The things that unite Americans are bigger than the things that divide them, but that may not be true for much longer.


Disagreement is not inherently a bad thing; indeed, it is the engine of democracy.  However, a functioning society requires some charity towards people we disagree with, a concession that, at the very least, they are not bad people trying to destroy the country.  Increasingly, however, ideological consistency goes hand-in-hand with contempt.  As Pew stated in 2014, "disliking the other party is nothing new in politics. But today, these sentiments are broader and deeper than in the recent past."  Partisans don't just disagree with the other side; they see them as threats and traitors.

Psychologists call the growing dislike between political tribes "affective polarisation".  A 2016 Pew survey found that "For the first time in surveys dating to 1992, majorities in both parties express not just unfavourable but very unfavourable views of the other party."  The same study showed Republicans and Democrats describing their political opponents as more "close-minded", "immoral", "lazy" and "unintelligent" than other Americans.  About 1 in 5 partisans say they would be "unhappy" if a member of their family married someone from the opposite side; in 1960 it was less than 5%.  A CBS News poll found that around a quarter of people thought those in the opposing party were not just "people they disagree with" but "a threat to their way of life".


When does ideological polarisation become affective polarisation?  In other words, what turns disagreement into contempt?  Many have focused on what I call supply-side factors: partisan TV networks, social media bubbles, lying politicians and the agenda-driven bloggers who do their bidding, all of which generate a toxic information supply.  What is often overlooked, however, are the demand-side factors.  People aren't just passive absorbers of hate from above; they go looking for it, and we need to understand the psychological factors that make certain people want to believe the worst about their opponents.  The demand side says that if we want to understand polarisation, we need to look not only at the media but at ourselves.

I want to focus on one aspect of the demand side: how we explain other people's opinions to ourselves.  When faced with a divergent opinion, we often automatically generate an account of how that person came to hold it, ranging from the charitable ("their genuinely-held values conflict with mine"), to the external ("they get all their news from cable TV"), to the negative ("they're too stupid to know better"), to the downright immoral ("they're just racist").  We all operate with a set of theories about how people work that helps us explain others' beliefs: a battery of folk-psychological diagnostic tools that we use to interpret the world.  Unfortunately, most of us have a faulty toolset that leads us to the most polarising explanations: that our political opponents aren't well-intentioned and principled, but immoral, ignorant, lazy and biased.  It's no coincidence these were the most popular descriptors in the Pew study.

If this is correct, much of the hatred between partisans is not deliberate, but the result of a misunderstanding about how moral positions are formed.  We don't try to hate our opponents; we are simply using a poor set of explanatory theories that makes stupidity, evil and bias the most easily accessible explanations.  Ironically, we hate our political opponents not because they are ignorant, but because we are.

So what are the faulty premises that cause us to hate each other?

#1 – We think our moral positions are formed through objective, unbiased evaluations of the world.
Most of us believe we perceive the world in an objective, unbiased manner, "as it really is", rather than through the subjective lens of our desires and experiences, an illusion known as naive realism.  As comedian George Carlin said: "Have you ever noticed when you're out driving, everyone driving slower than you is an idiot, and everyone driving faster than you is a maniac?"  We all operate on the default assumption that our thoughts and behaviours are the natural ones that follow from an unbiased view of the world.

From a day-to-day perspective, this is not just useful but necessary; constantly second-guessing our senses and beliefs would be incapacitating.  The downside is that if we use our "objectivity" as a platform from which to assess other people's views, it follows that other unbiased, reasonable people should share our views, and that those who don't must be biased, irrational or otherwise ignorant to the extent that they disagree.  As Benjamin Franklin said, "most men . . . think themselves in possession of all truth, and that wherever others differ from them, it is so far error."  Our innate feeling of objectivity gives us an irresistible, one-size-fits-all explanation for anyone who disagrees: they must be biased, irrational, or in some way not seeing things clearly.

#2 – We rely too much on external influences to judge others.
All our pet theories about why we, and others, believe the things we do suffer from an inescapable information asymmetry: we can hear our own thoughts, but not those of others.  We can only see other people's external influences: the TV shows they watch, the friends they talk to, their social media bubbles.  This asymmetry causes us to rely far too much on internal deliberations to explain our own opinions, yet use mostly external influences to explain the opinions of others.  This over-reliance is called "the introspection illusion".  Consciousness is but a fraction of our brain function, and our internal monologue is not a good indicator of our true motivations.  Yet "hearing" these thoughts and justifications makes us feel as though we have "reasoned" our way to all our opinions.  Conversely, we look around and feel that others just parrot their favourite TV anchors.  As a result, we consistently underestimate our own bias and overestimate the bias of others, a phenomenon called the bias blind spot.

We all think it's other people who toe the party line and fall for advertising, yet studies show we are far more influenced by external forces than we realise, and others are less so than we give them credit for.  Thanks to the introspection illusion, we often explain disagreements by presuming that we are reasoned, independent thinkers while others are slaves to TV shows, political advertising and party talking points.

#3: We think people ‘build up’ their moral positions using facts.
Most people view morality as a "bottom-up" process in which we assess facts and evidence and, over time, build up opinions about what is right and wrong.  On this model, "giving people the facts" should readily sway their views, as facts form the foundations of belief.

However, morality more often works from the "top down": we begin with intuitions about what is right and wrong and then rationalise justifications for them, or use them to filter the facts we accept.

Psychologist Jonathan Haidt argues this misconception breeds contempt because political debate becomes an unwitting exercise in "shadow-boxing": each participant trades in facts, apparently landing knock-out blows, but rarely engages with the true basis of their opponent's opinions.  When neither side falls, each walks away feeling their opponents "won't see facts" and are closed-minded, ignorant, or intellectually dishonest.  But this misunderstands the causal direction between facts and moral values: we don't usually update our values based on facts; we judge facts against our existing values.

#4: We think ‘knowing truth’ is the only goal of thinking.
Our brains are not truth machines; they are survival machines, and they have evolved to process information in pursuit of goals other than truth.  This is called motivated reasoning.  While we sometimes think in order to know truth, we also think in order to protect our self-interest, maintain our self-esteem or affirm our existing beliefs, and so we can easily believe things that aren't right or true if they fulfil these other goals.

When it comes to politics, there is another important non-truth goal of reasoning: facts and beliefs can also function as symbols of group membership.  In this sense they are not used to "know" things but to "be" things; they act to bind groups together and identify outsiders.

While understanding the non-truth goals of thinking doesn't justify false belief, it is a far more useful tool for explaining it.  If we see "truth" as the only goal of thinking, any deviation from it is obviously a sign of stupidity.  Motivated reasoning tells us that instead of asking "why is this person so stupid?" we should be asking "what is this person believing this for?"

#5: We think being smart makes us less biased.
Many smart, politically minded or scientifically literate people think that their superior intellect protects them from bias.  However, studies have shown that subjects with higher cognitive sophistication, cognitive reflection and numeracy are just as likely to fit new evidence to their existing dispositions and dismiss evidence that doesn't fit.  We have a misconception that the rational, intelligent part of our brain corrects our biases; sometimes it facilitates them.

The notion that intelligence corrects for bias gives us an easy path to dismissing people we disagree with as cognitively deficient: "they think that way because they can't or won't use their brain properly".  But while smarter people are sometimes able to direct their brainpower towards seeking the truth, they are also able to use it to protect their false beliefs.  As we've seen, naive realism already makes us assume people we disagree with are biased, and if we also think intelligence and bias are mutually exclusive, it follows they must be stupid too.  In reality, we can't infer one from the other.  Highly intelligent people can still be hopelessly biased, and we might be among them.


When we look to explain other people's opinions, most of us don't search for the best explanation; we settle for the first explanation that loosely fits our observations.  These premises are so destructive precisely because they make the most polarising explanations the most irresistible ones: the people we disagree with must simply be stupid, lazy, biased or evil.  This may seem like cause for despair, but there is hope in the idea that our animosity emerges almost accidentally, from a faulty set of assumptions that makes bias and stupidity seem like the only reasonable explanations.  While we will never, and should never, all agree, a better understanding of how morality works should help us decouple disagreement from contempt, so that we can disagree robustly without tearing the fabric of goodwill and shared purpose that is tenuously holding society together.