By Jesse Singal
A heated political story in the United States last weekend perfectly illustrates how tribal politics can supercharge a human weakness that psychologists have been studying for some time – our deep-seated tendency to accept evidence that supports our existing beliefs, and to ignore evidence that contradicts them.
It involved a conflict near the Lincoln Memorial featuring a handful of Black Israelites (a radical black nationalist group), a large group of mostly White American high-schoolers, some in Donald Trump hats, who were in town for an anti-abortion march, and a small group of Native American protesters, one of whom found himself in the midst of the high-schoolers.
Following the initial reports of what happened, and spurred along by a short video and dramatic photos suggesting that the teens had encircled and confronted the Native American protester in an apparent act of intimidation, the teenagers were widely condemned, doxxed, and subjected to calls for their suspension or expulsion from school. But what's telling is what happened when new details emerged, most notably a longer video showing that it was the protester who had waded into the sea of teens (because, he said later, he wanted to break up the conflict between them and the Black Israelites), and which complicated other aspects of the narrative as well: many commentators simply continued to interpret events in line with their own political leanings. In fact, the cacophonous online argument about what happened only seemed to explode in volume when the longer video was released. More information didn't resolve things. At all.
As the Georgetown University professor Jonathan Ladd put it so well on Twitter: “Regarding the incident at the Lincoln Memorial,” he wrote, “it’s fascinating to see motivated reasoning play out in real time over a 24 hr period … Despite lots of video, all interpretations now match people’s partisanship.”
These politically motivated cognitive gymnastics are the subject of an important new paper lead-authored by Anup Gampa and Sean P. Wojcik that's just been made available as a preprint (and due to be published in Social Psychological and Personality Science). Specifically, Gampa and Wojcik, working with a team that includes the open-science advocate Brian Nosek, decided to test the effects of politically motivated reasoning using syllogisms: logical arguments in which the premises are assumed to be true, and the question is simply whether the conclusion follows from them.
The syllogism the researchers use as an example at the top of their paper nicely shows how this sort of thing works:
All things made of plants are healthy. [premise]
Cigarettes are made of plants. [premise]
Therefore, cigarettes are healthy. [conclusion]
By the rules of logic and the conceit of syllogisms, this argument is logically valid (even though its first premise, and therefore its conclusion, is factually false). But because "cigarettes are healthy" clangs loudly against people's beliefs about the world, some people will reject this syllogism as invalid, even after having the rules of syllogisms explained to them. This example isn't particularly political: no one really thinks cigarettes are healthy at this point. What the researchers wanted to know was whether politicised syllogisms (pertaining to abortion, for instance) would be more likely to be misjudged as invalid when valid, and valid when invalid, based on readers' political beliefs, even though those beliefs should be irrelevant to assessing a syllogism's logical validity.
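The judgement participants were asked to make can be mechanised, which underlines the point: validity depends only on the argument's form, not on whether any statement is true in the real world. Here is a minimal sketch (not from the paper, and the category names are just placeholders) that models "All A are B" statements as directed edges and asks whether a conclusion is forced by chaining the premises:

```python
# A toy validity checker for chains of "All A are B" statements.
# Validity here means: the conclusion follows from the premises by
# form alone, regardless of whether the premises are actually true.

def follows(a, b, premises):
    """Does 'All a are b' follow from the premises by transitivity?"""
    if a == b:
        return True
    seen, frontier = {a}, [a]
    while frontier:
        x = frontier.pop()
        for (p, q) in premises:
            if p == x and q not in seen:
                if q == b:
                    return True
                seen.add(q)
                frontier.append(q)
    return False

premises = [
    ("made of plants", "healthy"),     # All things made of plants are healthy.
    ("cigarettes", "made of plants"),  # Cigarettes are made of plants.
]

# Valid: the (false) conclusion is still forced by the (false) premises.
print(follows("cigarettes", "healthy", premises))   # True

# Invalid: the reversed conclusion is not forced by these premises.
print(follows("healthy", "cigarettes", premises))   # False
```

The checker returns True for the cigarette syllogism precisely because validity ignores facts, which is what makes the conclusion so jarring to human readers.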
So, in a series of studies drawing on thousands of visitors to YourMorals.org and Project Implicit (the first two studies), and a nationally representative online sample of Americans (the third), Gampa and Wojcik asked a group of participants about their political beliefs and then presented them with a series of syllogisms designed to tap into either liberal or conservative sentiments, or no such sentiments at all — sometimes presented in traditional, formal syllogistic structure, and sometimes in more everyday language. The participants’ task was simply to determine which syllogisms were valid, and which were invalid.
As prior research and theory on this subject would suggest, in the first study both liberals and conservatives were more likely to wrongly evaluate syllogisms whose conclusions clashed with their politics.
The researchers found a similar pattern in their second study, which dealt with arguments presented in more common language, and in a third drawing on a bigger, national American sample — though in that one “the effect was somewhat less pronounced[.]”
Summing things up, the researchers write, “participants evaluated the logical structure of entire arguments based on whether they believed in or agreed with the arguments’ conclusions. Although these effects were modest in magnitude, they were persistent: we observed these biases in evaluations of both classically-structured logical syllogisms and conversationally-framed political arguments, across a variety of polarised political issues, and in large Internet and nationally representative samples.”
As the furore over the recent events at the Lincoln Memorial illustrates, this is an interesting finding that is of course reflected constantly, albeit in a different and more nuanced form, in the real world: People routinely dismiss perfectly sound arguments that would cause them cognitive discomfort by threatening their political beliefs, or their sense of themselves as members in good standing within their political tribe, and so on. As a general rule, the more political and emotional and social ties we have to an idea — the more an idea matters, in a deep way, to our sense of ourselves — the less likely we'll be to let it go, even in the face of strong evidence that it is false.
This study can’t contribute to the debate over whether liberals or conservatives are more likely to commit such errors, the researchers write, because the stimuli weren’t constructed to be equally polarising to the two “sides” (though some prior research suggests both tribes are equally vulnerable). It also doesn’t tell us what can be done about this sort of ill-formed reasoning. Though on that front, at least, Gampa, Wojcik, and their colleagues do have some ideas: “A takeaway from this research… may be that reasoners should strive to be epistemologically humble. If logical reasoning is to serve as the antidote to the poison of partisan gridlock, we must begin by acknowledging that it does not merely serve our objectivity, but also our biases.” That is, people should disabuse themselves of the notion that when they sit down to reason a problem through carefully, the act of doing so automatically shields them from the effects of political bias. Because bias isn’t a problem endemic to any one political movement: It’s a problem endemic to having a human brain.
Post written by Jesse Singal (@JesseSingal) for the BPS Research Digest. Jesse is a contributing writer at BPS Research Digest and New York Magazine, and he publishes his own newsletter featuring behavioral-science talk. He is also working on a book about why shoddy behavioral-science claims sometimes go viral for Farrar, Straus and Giroux.