
Sunday, February 5, 2017

The Selective Laziness Of Human Reasoning

August 1, 2016 · 11:20 AM ET · Commentary · Tania Lombrozo

Democrat: "Those arguments by Republicans are preposterous!"

Republican: "Those arguments by Democrats are absurd!"

Sound familiar?

There are plenty of reasons why political disputes can be divisive, and a host of psychological mechanisms that contribute to a preference for one's own views.

For one thing, political preferences aren't just reasoned opinions; they're often markers of personal and cultural identity with strong emotional resonance. For another, we tend to expose ourselves to sources that support our own views, reinforcing rather than challenging our beliefs.

An article forthcoming in the journal Cognitive Science adds another mechanism into the mix: We're more critical of arguments offered by others than of those we produce ourselves. Authors Emmanuel Trouche, Petter Johansson, Lars Hall and Hugo Mercier describe this as the "selective laziness of reasoning." We reserve effortful scrutiny for others and often give ourselves a free pass.

To test this idea, the researchers exploited a phenomenon known as choice blindness: Under the right conditions, many people fail to recognize that a choice they made previously has been swapped with an alternative. For example, people who choose one of two photographs as more attractive will often fail to notice when the photograph they're subsequently presented isn't the one they chose, and will nonetheless go on to explain why they found the (non-chosen) option more attractive. The phenomenon has been replicated for choices in a variety of domains, including jam and tea preferences, moral judgments and political attitudes.

Trouche and colleagues adapted this technique to the case of arguments and, in so doing, created situations in which people were asked to evaluate arguments that they didn't recognize as their own. This revealed that people were willing to generate arguments that — when presented as coming from another person — they could readily recognize as flawed.

Here's how it worked. Across two studies, more than 400 participants recruited online were presented with word problems that required them to draw inferences from limited information. For example, they might read about a fruit and vegetable shop that carries apples as well as other products and learn that none of the apples are organic. They would then be asked what follows "for sure" from this information, and were given a variety of options to choose from: that all the fruits are organic (false), that none of the fruits are organic (unknown), that some of the fruits are not organic (true), and so on. Participants made a selection and provided an argument to justify their choice.
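To see why only one of those options follows "for sure," here is a minimal illustrative sketch (not drawn from the study's materials; the fruit names and world sizes are assumptions for illustration). It enumerates small "possible worlds" consistent with the premises and checks which candidate conclusion holds in every one of them:

```python
from itertools import product

# Premises (from the example above): the shop carries apples, and none of the
# apples are organic. Each "world" is a list of (fruit_kind, is_organic) pairs.
# A conclusion follows "for sure" only if it is true in every consistent world.

def consistent(world):
    has_apple = any(kind == "apple" for kind, _ in world)
    apples_not_organic = all(not organic for kind, organic in world
                             if kind == "apple")
    return has_apple and apples_not_organic

def all_fruits_organic(world):
    return all(organic for _, organic in world)

def no_fruits_organic(world):
    return all(not organic for _, organic in world)

def some_fruits_not_organic(world):
    return any(not organic for _, organic in world)

# Generate small worlds: an apple plus two other (hypothetical) fruits,
# each either organic or not.
kinds = ["apple", "pear", "banana"]
worlds = [list(zip(kinds, flags))
          for flags in product([True, False], repeat=len(kinds))
          if consistent(list(zip(kinds, flags)))]

for name, conclusion in [("all the fruits are organic", all_fruits_organic),
                         ("none of the fruits are organic", no_fruits_organic),
                         ("some of the fruits are not organic", some_fruits_not_organic)]:
    values = {conclusion(w) for w in worlds}
    verdict = ("true for sure" if values == {True}
               else "false for sure" if values == {False}
               else "unknown")
    print(f"{name}: {verdict}")
```

Running the sketch reports "all the fruits are organic" as false for sure, "none of the fruits are organic" as unknown, and "some of the fruits are not organic" as true for sure, matching the answer key described above.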

In a subsequent phase of the experiment, participants were presented with the same problems, along with choices and arguments purportedly provided by other participants. In each case the choice was presented as an alternative to what the participant had selected initially, and participants were invited to reconsider their own choice in light of the argument.

But within this set was a fake: a problem for which the participant's original choice had been swapped, such that the "alternative" response was the one that the participant had actually provided, and the corresponding argument was the participant's own. About half of participants failed to notice the swap: They were the victims of choice blindness. For these participants, the experimenters succeeded in creating the conditions they were after, putting people in the position of evaluating arguments they had produced as if they had been produced by someone else.

And what they found was this: that people rejected their own arguments over 50 percent of the time, failing to find them sufficiently compelling to change what they thought was their initial response. In other words, people were less critical of the very same arguments when they produced them themselves than when they were later presented as coming from another person. Evaluating these arguments also led to an overall improvement in performance: Accuracy increased from around 40 percent in the initial phase to around 60 percent after participants evaluated their own argument in disguise.

There are several ways to interpret these results. The authors of the study take them as evidence for a theory according to which human reasoning is principally geared towards effective argumentation rather than knowledge-seeking. But for present purposes, we can draw a timely, if more modest, conclusion: that when it comes to evaluating arguments across the political spectrum — especially those that challenge our own views — we would do well to bear in mind the selective laziness of reasoning.

It makes sense to evaluate other people's arguments with careful scrutiny, but we should apply the same consideration to our own.

Tania Lombrozo is a psychology professor at the University of California, Berkeley. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what she is thinking on Twitter: @TaniaLombrozo

http://www.npr.org/sections/13.7/2016/08/01/488228453/the-selective-laziness-of-human-reasoning
