10/02/2020

Fake News, Social Discourse and Rationality





Why do we believe so-called ‘fake news’? One explanation is our well-documented susceptibility to confirmation bias: the tendency to fasten onto anything that seems to confirm our previously held beliefs. So, someone who dislikes Hillary Clinton might be more inclined to believe the headline, ‘FBI Agent Who Exposed Hillary Clinton’s Corruption Found Dead’, while someone who dislikes Donald Trump might believe that a Trump Tower was opening in Pyongyang (these were two of the biggest false stories of 2018). If it is true we are so riddled with biases that our ability to reason clearly is undermined, it would be a serious blow to the proponents of deliberative democracy. Deliberative forms of democracy, such as citizens’ assemblies, rely on citizens being able to actively evaluate reasons. Jürgen Habermas envisioned a deliberative atmosphere as one where the only thing that prevails is the ‘forceless force of the better argument’. If human reasoning really is so biased, this vision seems very distant.


Luckily, though, there has been a substantial amount of work that adds more nuance to our understanding of the frailties of human reasoning and our susceptibility to misinformation. For Hugo Mercier and Dan Sperber (among others), human reasoning actually works well in social environments. It is when we reason alone and in isolation that biases are most likely to occur. Can deliberation, an ultimately social activity, be used to combat ‘fake news’? More specifically, could well-designed online deliberation mitigate the spread of fake news?

The term ‘fake news’ has been given a multitude of definitions. One helpful way to understand it is not as a type of content, but as a characteristic of how content circulates online, and how this circulation is situated in mediating infrastructures and participatory cultures. The problem is not simply inaccurate information, but also how social media platforms encourage the production and spread of this type of misinformation. This is also why fake news is novel and distinct from more traditional forms of misinformation such as political propaganda.

In order to address this growing challenge, the House of Commons’ Digital, Culture, Media and Sport select committee released a report in July 2018 with 42 recommendations for the UK government to combat misinformation online, ranging from a levy on social media companies to fund social media training, to a code for advertising through social media during political campaigns. Only three of the 42 recommendations were accepted by the government. However, the committee’s report contained no recommendations taken from the perspective of participatory democracy or the potential of deliberation to counter untruths. Is it a mistake not to include deliberative models when seeking solutions for this specific problem?

In their 2017 book, The Enigma of Reason, Mercier and Sperber argue that human reasoning actually works pretty well in social environments, largely because in collective settings, unlike someone sitting alone at their computer, we are frequently made to justify our beliefs and actions to others. We are ‘designed’ to reason collectively and socially, and it is not mainly ‘motivated’ reasoning (reasoning that is motivated by, for example, our pre-existing political beliefs) that makes us susceptible to misinformation. Instead, the culprit is simply a lack of any substantial reasoning at all. Or, as Pennycook and Rand put it, we are ‘lazy, not biased’. Herein lies the potential of public deliberation, where we can collectively and effectively reason ourselves away from false and ungrounded information. It is only in our interactions with others in the crucible of social discourse that our arguments are properly tested, developed and improved.

So, how can we harness the ‘truth-tracking’ power of public deliberation to combat the fake news phenomenon? An online solution for this online problem would seem most natural. Hopeful advocates for online democracy see great potential in the internet, since it opens up a new virtual public sphere that can bring a diverse group of people together with low barriers to entry. Examples such as Wikipedia show how powerful the internet can be for enabling collaboration, and delivering extensive accounts of knowledge. Additionally, the possibility of remaining anonymous online could rid online deliberation of uneven power structures. These circumstances seem to bode well for the deliberative ideal. However, Habermas himself warns that the type of mediated communication of the internet, where there is a lack of reciprocity and face-to-face interaction, undermines the deliberative environment.

Facebook is the online forum that has been most in the spotlight when it comes to fake news, and it is particularly ill-designed for encouraging high-quality deliberation. There is little to no moderation in, for example, the comment sections of a news article posted on Facebook. Discussions often happen in real time, which discourages reflection and encourages personal attacks and short messages without developed arguments.

On Facebook, and in fact all social media based on self-selection in terms of who you follow and interact with, there is a tendency to only have contact with people who hold similar beliefs to yourself. This is amplified by so-called ‘filter bubbles’ where internet algorithms feed you with content based on your past internet activity. Cass Sunstein describes a law of group polarisation where members of a deliberating group will move towards a more extreme viewpoint.

Taken together, the above factors suggest that the online spaces where fake news seems to be an endemic problem are not conducive to deliberation and ‘truth-tracking’. Though there are numerous examples of online platforms that are designed to accommodate and encourage deliberation, such as the Womenspeak consultation that was set up by the UK government in 2000 to inform policy with women’s experiences of domestic violence, the fact remains that these platforms have not integrated into our online way of life in the way that social media platforms such as Facebook have.

We therefore need to look beyond an online model for participatory democracy, to consider whether this online problem needs an offline solution. John Dewey spoke about democracy as a way of life, where all of our activities should be infused with open and informed communication. If we had more opportunities to use our reasoning as it was designed to be used, in social environments, we might be less susceptible to fake news. This could take the form of deliberative forums or citizens’ assemblies. For example, the Citizens’ Initiative Review in Oregon involves panels of citizens who evaluate ballot measures and provide recommendations in preparation for upcoming elections. Similarly, the Citizen’s Assembly on Electoral Reform in British Columbia invited a group of randomly selected citizens to formulate recommendations on how the electoral system could be improved. Both of these examples include citizens participating directly, and collectively, in the processing of information. Higher engagement in such activities will turn people away from fake news and misinformation, and towards modes of gathering information that are more based on active reasoning and evaluation.

Deliberation in these face-to-face forums can be a powerful tool for instilling the public with knowledge while breaking down boundaries between polarised groups. Policies aimed at improving democracy online and related to combating fake news should, of course, consider online solutions regarding the regulation of misinformation, but it is perhaps more interesting to take a wider perspective and see what participatory models can be used to combat polarisation and disrupt some of the forces behind fake news. Democratic reforms could also consider how online behaviour, and susceptibility to certain types of information, fit inside the larger frame of what opportunities for deliberation and participation citizens have. Humans are made for public deliberation, but many of the social media platforms we use are not, and this ultimately strips individuals of the ability to protect themselves against fake news.


Deliberative democracy could be used to combat fake news – but only if it operates offline. By Clara Wikforss. Democratic Audit, February 6, 2020.






People hold potentially misguided beliefs for all sorts of reasons. People want to be loyal to the values of their family, friends, political party, or religion. Some want to make a good impression on their boss and potential future employers. Others want to avoid conflict with those they know will disagree. In other words, there are many reasons, some of which are actually rational, why we often reason poorly.


Are Humans Rational?

It has seemingly become a pastime of cognitive psychologists to find all the instances in which human reasoning flounders. Experts tell us that humans are poor reasoners. We fail miserably at rather simple conditional logic tasks of the following form: You are presented with four cards, each of which has a number on one side and a letter on the other; the cards show, say, 4, 7, E, and K. You are given the following task: Which two cards must you turn over to test the truth of the proposition that if a card shows a vowel on one side then there is an even number on the other side? Take a guess.

The correct answer is that you turn over the card with the E (that’s the easy one) and you turn over the card with the number 7 (huh?). Most people get this wrong. In fact, research shows that less than 10% of people flip over the correct cards (Evans et al., 1993). Most people find themselves flipping the letter E, which is correct, and flipping the card with the number 4, which is incorrect.
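The logic becomes clearer when the rule is spelled out: it can only be broken by a card that pairs a vowel with an odd number, so the only cards worth flipping are those whose hidden side could complete that forbidden pairing. Here is a minimal sketch in Python (my own illustration, not from the original article) that makes the check explicit:

```python
# Wason selection task: which visible faces could reveal a violation of the
# rule "if a card shows a vowel on one side, then the other side is even"?
# Only a card whose hidden side might complete a (vowel, odd) pairing matters.

VOWELS = set("AEIOU")

def can_falsify(visible: str) -> bool:
    """Return True if flipping this card could disprove the rule."""
    if visible.isalpha():
        # Hidden side is a number: only a vowel face can end up paired with
        # an odd number, so consonants (like K) can never break the rule.
        return visible in VOWELS
    # Hidden side is a letter: only an odd number can end up paired with a
    # vowel, so even numbers (like 4) can never break the rule.
    return int(visible) % 2 == 1

cards = ["4", "7", "E", "K"]
print([card for card in cards if can_falsify(card)])  # -> ['7', 'E']
```

The 4 is the trap: whatever letter sits behind it, the rule survives, which is why flipping it tells you nothing.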

Moreover, a long list of cognitive biases has been generated showing how we reason differently about two pieces of information which are exactly equal in logic but differ in wording (framing effect), judge a conjunction of two events as more probable than one of those events alone (conjunction fallacy), reason about the rate of something based on how easily we can recall events (availability heuristic), find evidence to confirm our preexisting beliefs (confirmation bias), and much more (Tversky & Kahneman, 1974).

We Don’t Reason Like Computers Do

From the evidence presented thus far, one might be prepared to conclude that humans are poor reasoners. Yet what is often conspicuously missing from this literature is the question "compared to what?" Compared to computers we are poor reasoners. But that’s always going to be the case since we developed computers to be perfectly logical. The question is, is it reasonable to suppose that humans reason like computers? Were humans designed to be perfectly logical? From an evolutionary perspective, there is little reason to suppose that this is the case.

To use a phrase I’ve heard voiced by cognitive psychologist Steven Pinker, “Reality is a powerful selection pressure.” And so it makes sense that truth and rationality are destinations reasoning minds can sometimes stumble upon. While we are poor reasoners in comparison to computers, compared to nearly all other animal species, our ability to reason is remarkable. We inhibit base impulses and defer and delay present gratification based on future concerns. We model behavioral patterns and sequences of actions in our minds before playing them out in reality. That way we can problem solve without physically suffering the ramifications of failure.


We outsmart every other species because we set goals, plan in advance, think before acting, remember what works and what doesn’t work and update our behavior in light of that information (Pinker, 2010; Tooby & DeVore, 1987).

However, truth and rationality also have unfortunate qualities to them. The truth can ruin someone’s day. It cares little for our feelings, rips off the masks we wear to conceal our vulnerabilities and flaws, penetrates through our petty attempts at infallibility, omniscience, and righteousness to reveal the mere mortal hiding in the corner behind it all. These features of truth and rationality, among others, may have pushed human reasoning off the path of perfect logicality.

We don’t always want the truth. Instead, we often want to convince others of our opinionated hot-take masquerading as truth in order to persuade them to join our cause. We distort the truth to make ourselves and others feel better, look better, and appear to be the godlike beings that we aren’t. We find evidence for, and deny evidence against, opinions and beliefs we hold for groups to which we belong and for people with whom we socialize (a fact that will take you much further in understanding climate change denial than scientific ignorance will).

Bringing Evolution Into the Picture

The assumption traditionally underlying the field of reasoning psychology has been that human reasoning functions to enhance individual cognition (Mercier & Sperber, 2011). The field began within, and is still to a large degree framed by, the Aristotelian logical framework in which human reason functions to lead us to the logical answer. And so, human reasoning has been mostly assessed using deductive reasoning tasks in the form of syllogisms. But these are most likely not the conditions under which reasoning evolved, because the truth is often exactly what we want to avoid, and avoiding it could have been evolutionarily advantageous, especially for those who were especially good at persuading, deceiving, and arguing themselves out of truth’s crosshairs.

I propose that Aristotle needs to be supplanted with Darwin in this respect. Darwin offered the only theory of how organic design comes into being: evolution by natural selection. And natural selection is the process by which organisms become fitted to the environments in which they evolve. Reasoning is almost certainly an adaptive mental faculty, or set of mental faculties that was selected for through evolutionary time. But reasoning, as I stated above, likely didn’t evolve to enhance individual cognition or lead to truth.

Cognitive scientists Dan Sperber and Hugo Mercier have offered an evolutionarily informed perspective on reasoning as a mental faculty that came about due to adaptations for aiding in human social life, in particular, communication (Mercier & Sperber, 2011). Specifically, for them, the main function of reasoning is argumentative. Its proper function is to devise and evaluate arguments in a communicative context to convince others who would otherwise not accept what one says on the basis of trust.

Indeed, studies show that when reasoning is situated in argumentative contexts, people actually are good reasoners (Mercier & Sperber, 2011). It has been shown that people’s performance solving abstract logical syllogisms increases when these syllogisms are situated in an argumentative context (Mercier & Sperber, 2011; Petty & Wegener, 1998; Thompson et al., 2005).

Additionally, studies in motivated reasoning show that when people are motivated to reject a conclusion (e.g., when that conclusion implies something bad about them) they will use the evidence presented to them to disconfirm the conclusion. However, when people are motivated to accept a conclusion (e.g., when that conclusion implies something good about them) they will discount that very same information (Ditto & Lopez, 1992). This argumentative theory of reasoning not only explains the apparent lack of reasoning skills in traditional tasks used to assess reasoning, but also explains key properties of reasoning such as strong differences in producing versus evaluating arguments.

Since reasoning functions to argue one’s case persuasively, this theory predicts that people should be both biased and lazy in the production of arguments. Specifically, they should have a strong confirmation bias in the production of arguments, producing arguments that favor their own point of view and attack that of their opponent. In this sense, confirmation bias is a feature of reasoning rather than a flaw, since it aids in the overall function of reasoning in arguing one’s case. Additionally, people are predicted to be lazy in the production of their arguments, quickly coming up with arguments without anticipating counterarguments (Mercier & Sperber, 2011).

On the other hand, however, this theory predicts that when evaluating arguments, especially arguments of an interlocutor with opposing views, rather than being biased and lazy people are demanding and objective. Demanding because they don’t want to be swayed by any counterargument, but objective because they still want to be convinced of strong and truthful arguments, which is the whole point of arguing in the first place.

A number of studies show support for the distinction between the production and evaluation of arguments (Mercier & Sperber, 2011; Mercier & Sperber, 2017). When people reason alone relying on the production of arguments in isolation, they reason poorly. However, when people reason in group dialogic contexts in which arguments are being produced and evaluated people reason very well (Mercier, 2016; Mercier & Sperber, 2011). Additionally, this theory explains many of the seeming gaps in human reasoning such as the confirmation bias, laziness in reasoning, motivated reasoning, and the phenomenon of justification and rationalization in reasoning tasks (Gigerenzer, 2018; Mercier & Sperber, 2017).


As far as we know, the human brain is the most intricate and complex thing that exists in the universe. The list of “biases” and so-called “systematic errors” in our thinking needs to be reexamined in light of Darwinian evolution. In fact, when information is made to better match our intuitions and the conditions in which we evolved, many of these biases begin to look less like errors in cognition and more like errors in theory (Gigerenzer, 2018; Mercier & Sperber, 2017).


Human reason does not function to calculate the right answer like a perfectly logical computer. All adaptations, including the ones that give rise to human reasoning, exist because they led to increased reproductive fitness, and in the case of reasoning that benefit has been proposed to lie mainly in human social life. We use reason to improve communication: to justify ourselves and the ideas we hold, to persuade others of our case, and to evaluate reasons objectively, among other things. This does not mean that we cannot reason logically like a computer (don’t forget we developed computers!), but just that logic is not the primary function of human reasoning. To understand how something like reasoning works in a biological sense requires that we understand how it evolved, which has implications for what it can and cannot do.

Bird wings are adaptations, and yet they cannot fly in the upper atmosphere where air particles are too far apart; vertebrate eyes are adapted organs which allow organisms to visually perceive the world and yet they cannot perceive radio waves, X-rays, infrared light, or any other kind of light other than the visible spectrum. Similarly, human reasoning has an adaptive function and yet there is little agreement on what functions it evolved to do. Understanding the conditions under which reasoning evolved and its proper function—the problem it was designed to solve—will help us to understand reasoning’s actual function—everything reasoning can actually do (Sperber, 1994). A better understanding of this will not only help us in understanding the mechanisms involved in reasoning, but also inform us about which contexts are most suitable for eliciting reasoned arguments. This is deeply important when it comes to policymakers and decision-makers in society, but also for education. What kind of classroom context best enables students to think straight about important topics?

Bottom Line

Fortunately, we already understand many of the ways in which reasoning fails and is facilitated. For instance, of the various reasons why we should have our writing edited by another person, one is that he or she will raise counterarguments that we failed to anticipate and address by virtue of not knowing what we don’t know. Similarly, when we individually come to a conclusion about a controversial issue it is important to converse with others, especially with those with whom we disagree, because they will raise counterarguments that will give much-needed nuance, revision, and ultimately strength to our own arguments. Knowledge is forever provisional, and humans are forever fallible. Therefore, it is important that our reasons, arguments, seemingly settled conclusions and points of view be exposed to continual debate and criticism.

Are Humans Rational? By Glenn Geher. Psychology Today, November 6, 2019.






In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.



A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.


This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.


Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring.

Why Facts Don’t Change Our Minds. By Elizabeth Kolbert. The New Yorker, February 27, 2017.




It is often suggested that reason is what makes us human. But if reason is so useful, why didn't it evolve in other animals too? And why do we so often use our reasoning to produce nonsensical conclusions? These are the questions that cognitive scientists Hugo Mercier and Dan Sperber set out to solve in their book "The Enigma of Reason", which takes a look at the evolution and workings of reason. Here, they discuss their argument that reason helps humans exploit their social environments by helping them justify their beliefs and actions to others.

Why did you decide to address the subject of why reason evolved?

Hugo: Since my undergrad, I have been interested in evolutionary approaches to the human mind. Dan Sperber, with whom I wanted to work, had previously put forward the intriguing suggestion that the function of human reason was not to think better but to produce arguments in order to convince others.

Dan: This was just a sketchy hypothesis. During his PhD, Hugo fleshed it out, reviewed the literature, and conducted new experiments to test it. It took us several more years of common work to come to a novel and, we hope, illuminating account of reason.

How has the human capacity for reason been misunderstood in the past?

There have been two main misunderstandings about human reason, one bearing on what it is, the other on what it is for.

Reason is often seen as a very general capacity to solve problems, make better decisions, and arrive at sounder beliefs. A modern instantiation of this view takes the form of the System 1 / System 2 distinction made popular by Daniel Kahneman. In this view of the mind, System 1 corresponds roughly to our intuitions, which function well most of the time, but are subject to systematic mistakes. System 2, by contrast, corresponds to our capacity to reason in a rule-governed way, which enables us to fix System 1’s mistakes.

The first problem with this view is that it’s not clear how reason so understood could work. How could a single mechanism be responsible for fixing the rest of the mind? How could it be superior to the knowledge and experience encapsulated in all our other intuitions?

The second problem is that, if such a mechanism that can fix everything else could somehow have evolved, why would it have evolved only in humans? Why aren’t other cognitively complex animals also endowed with reason or at least some rudimentary form of it?

We suggest that reason is very much like any other cognitive mechanism—it is itself a form of intuition. Like other intuitions, it is a specialised mechanism. The specificity of reason is to bear... on reasons. Reason delivers intuitions about relationships between reasons and conclusions: some reasons are intuitively better than others. When you want to convince someone, you use reason to construct arguments. When someone wants to convince you of something, you use reason to evaluate their arguments. We are swayed by reasons that are intuitively compelling and indifferent to reasons that are intuitively too weak. Reason, then, does not contrast with intuition as would two quite different systems. Reason, rather, is just a higher order mechanism of intuitive inference.

Most of the time, we operate without thinking of reasons. When you drive to work in the morning, you are not thinking of reasons for every turn, every push on the gas pedal, every thought that comes into your mind as you listen to the radio, etc. And these intuitions that drive the vast majority of our behaviours and inferences function remarkably well; they are not intuitions about reasons.

Why do we sometimes bother reasoning then? We suggest that the selective pressure behind the evolution of reason is not for solitary reasoners to improve on their thoughts and decisions, but for making social interaction more efficient. Reason has evolved chiefly to serve two related social purposes. Thanks to reason, people can provide justifications for their words and deeds and thereby mutually adjust their expectations. Thanks to reason, people can devise arguments to convince others. And, thanks to reason, people can evaluate the justifications and arguments offered by others and accept or reject them accordingly.

We all take these uses of reason for granted, but imagine how difficult even the most mundane interactions would be if we couldn’t exchange justifications and arguments. We would constantly run the risk of misjudging others and of being misjudged, and we would get stuck as soon as a disagreement emerges. You are driving with a colleague, and know that there is roadwork along the usual route, so you take a longer way. If you can’t explain why you chose this roundabout itinerary, your colleague will think you have no sense of direction. Or he’s the one driving, and you want to convince him to take the longer route. If he doesn’t trust your sense of direction over his, and you can’t defend your suggestion, there’s no way to change his mind, and you’ll end up stuck with the roadwork.

How would you define rationality?

There are many different senses of “rationality.” Two of them are quite useful. Cognition works by using a variety of inputs (from perception, communication, and memory) to draw inferences about how things are and how to act. In a wide sense, rationality is just the property exhibited by well-functioning inferential systems, whether those of flies, octopi, or humans.

In a narrower sense, rationality is the property of reasons that are intuitively recognisable as good reasons, and, by extension, of the opinions, decisions, actions or policies that can be justified or argued for by using such good reasons. Just as the use of reasons in justification and argumentation plays a major role in human interaction, appeal to reason-based rationality is a central feature of our attempts to come to terms with one another.

What is the function of irrationality?

If rationality in the wide sense corresponds to effective inference, and rationality in the narrow sense corresponds to effective uses of reason proper, then irrationality can no more have a function than any other form of cognitive or bodily impairment. On the other hand, irrationality can be put to use in a number of more or less disingenuous ways: as a defense for oneself, as a means to attack others, as an excuse for letting go, and so on. These uses of irrationality may themselves, on occasion, be quite rational.

What place do flaws in reasoning, like confirmation bias, have in your view of reason?

The so-called confirmation bias consists in a strong tendency to find evidence and arguments in support of our preexisting opinions or hunches and to ignore counter-evidence or counter-arguments. It is better called the “myside bias” since we demonstrate it only in our own favour: we are not disposed, even if asked to do so, to “confirm” ideas that we do not share.

The myside bias, we argue, makes sense in interactive contexts. If you want to justify yourself, be your own advocate, not your own prosecutor. If you want to convince others of some opinion you already hold, look for the strengths, not the weaknesses of your viewpoint. Contrary to the dominant view, the myside bias is not a bug, but an adaptive feature of reason.

It is an important and original part of our hypothesis that the myside bias is characteristic of the production of reasons, but not of their evaluation. People must be much more objective in evaluating the reasons provided by others than in producing their own. This might seem surprising, but the evidence suggests that this is what people actually do. By and large, they respond well to good arguments, even if this means revising their own beliefs.

How can your view of the evolutionary/social function of reason be applied practically?

To make the best of our capacity to reason, we should keep in mind that it typically yields its best results in social settings. When we reason on our own, our natural inclination is to keep finding arguments for our point of view—because of the myside bias. As a result, we are unlikely to change our initial point of view—whether it is correct or not—and we might end up becoming overconfident. By contrast, if we discuss with people who disagree with us, but share some overarching goal—to make a good decision or have more accurate opinions—we are better equipped to evaluate their arguments and they are better equipped to evaluate ours. As a result, the myside biases of the different parties may be held in check, potentially yielding an efficient division of cognitive labour.

We live in an era when "reason" seems far removed from much political discourse ("alternative facts" and "fake news"). How would you explain this trend?

One element of explanation is that many of the arguments we run into in the political realm are not really meant to convince anyone, but simply to bolster the views of like-minded people. As a result, they can spread without being properly evaluated.

Ditto increasing political polarisation in the US, UK, and elsewhere in Europe: why is this happening if our capacity for reason has a social function?

For reason to function well in a social setting, both disagreement and a common interest in reaching better knowledge and decisions are critical. When people who share deeply entrenched opinions discuss together, arguments supporting one and the same point of view are likely to pile up, largely unexamined. In such conditions, people typically end up developing even more extreme views.

The question is, then, why do people keep reasoning on their own or with like-minded people? We think that the main impetus for such reasoning is the anticipation of being challenged or of having the opportunity to challenge one’s adversaries. People are typically rehearsing arguments and justifications to be used in such confrontations. A typical cue that such rehearsing may be useful is the knowledge that there are people around who strongly disagree with us. When we learn of different political views on TV, in the newspaper, on the Internet, etc., it is difficult not to spontaneously think of arguments defending our own views. The issue, however, is that we rarely end up actually talking with the people whose contrary views we are exposed to (and even less often discussing in an open-minded, constructive manner). So we don’t know how they would have answered our arguments. The myside bias is left to run amok.

The psychology of human reasoning, however, is only one relevant consideration among many in addressing the broad and complex historical, social, and cultural issues raised by the remarkable current political situation. In our book, we just aimed, more modestly - if this is the right word - to solve the challenge that human reason poses to scientific psychology.


Why do we use reason to reach nonsensical conclusions? Q&A with Hugo Mercier and Dan Sperber, authors of a new book about the evolution of reason. By Samira Shackle. New Humanist, April 20, 2017.






Ian Sample and Nicola Davis delve into the world of reason and ask why do we have it? How does it work? And what insights might our evolutionary past provide?

Long heralded as one of the last remaining barriers between “man and beast”, our ability to use reason and logic has historically been seen as the most human of behaviours. But as the field of neuroscience and psychology continues to probe our cognitive processes, are the foundations of reasoning now experiencing a shake up? Or, as many argue, are they somehow immune?

Sitting down with Ian Sample in the studio this week to explore a new theory of reason is Professor Dan Sperber, philosopher and cognitive scientist at the Central European University, Budapest, and the Institut Jean Nicod. Laid out in his new book, ‘The Enigma of Reason: A New Theory of Human Understanding’, Sperber (alongside his co-author Dr Hugo Mercier of the Institut des Sciences Cognitives Marc Jeannerod) looks to our evolutionary past and proposes a more social (or “interactionist”) function of reasoning, which includes what they call a “dark side”. Along the way, we also hear from neuroscientist Dr Daniel Levitin of the Minerva Schools at KGI, California, about how the likes of reason - and other cognitive processes - may leave us susceptible to the rise of ‘fake news’.

The evolution of reason: a new theory of human understanding – Science Weekly podcast.

The Guardian, April 17, 2017.






In the 1970s, two psychologists proved, once and for all, that humans are not rational creatures. Daniel Kahneman and Amos Tversky discovered “cognitive biases,” showing that humans systematically make choices that defy clear logic.

But what Kahneman and Tversky acknowledged, and is all too often overlooked, is that being irrational is a good thing. We humans don’t always make decisions by carefully weighing up the facts, but we often make better decisions as a result.

To fully explore this, it’s important to define “rational,” which is an unexpectedly slippery term. Hugo Mercier, a researcher at the Institut des Sciences Cognitives-Marc Jeannerod in France and the co-author of “The Enigma of Reason,” says that he’s never fully understood quite what “rational” means.

“Obviously rationality has to be defined according to how well you accomplish some goals. You can’t be rational in a vacuum, it doesn’t mean anything,” he says. “The problem is there’s so much flexibility in defining what you want.”

So, for example, it’s an ongoing philosophical debate about whether drug addicts are rational—for in taking drugs they are, after all, maximizing their pleasure, even if they harm themselves in the process.

Colloquially, “rational” has several meanings. It can describe a thinking process based on an evaluation of objective facts (rather than superstition or powerful emotions); a decision that maximizes personal benefit; or simply a decision that’s sensible. In this article, the first definition applies: Rational decisions are those grounded on solid statistics and objective facts, resulting in the same choices as would be computed by a logical robot. But they’re not necessarily the most sensible.

Despite the growing reliance on “big data” to game out every decision, it’s clear to anyone with a glimmer of self-awareness that humans are incapable of constantly rational thought. We simply don’t have the time or capacity to calculate the statistical probabilities and potential risks that come with every choice.

But even if we were able to live life according to such detailed calculations, doing so would put us at a massive disadvantage. This is because we live in a world of deep uncertainty, in which neat logic simply isn’t a good guide. It’s well-established that data-based decisions don’t inoculate against irrationality or prejudice, but even if it were possible to create a perfectly rational decision-making system based on all past experience, this wouldn’t be a foolproof guide to the future.

Unconvinced? There’s an excellent real-world example of this: The financial crisis. Experts created sophisticated models and were confident that the events of the 2007 crisis were statistically impossible. Gerd Gigerenzer, Director of the Max Planck Institute for Human Development in Germany, who studies decision-making in real world settings, says there is a major flaw in any system that attempts to be overly rational in our highly uncertain world.

“If you fine-tune on the past with an optimization model, and the future is not like the past, then that can be a big failure, as illustrated in the last financial crisis,” he explains. “In a world where you can calculate the risks, the rational way is to rely on statistics and probability theory. But in a world of uncertainty, not everything is known—the future may be different from the past—then statistics by itself cannot provide you with the best answer anymore.”

Henry Brighton, a cognitive science and artificial intelligence professor at Tilburg University in the Netherlands, who’s also a researcher at the Max Planck Institute, adds that, in a real-world setting, most truly important decisions rely at least in part on subjective preferences.

“The number of objective facts deserving of that term is extremely low and almost negligible in everyday life,” he says. “The whole idea of using logic to make decisions in the world is to me a fairly peculiar one, given that we live in a world of high uncertainty which is precisely the conditions in which logic is not the appropriate framework for thinking about decision-making.”

Instead of relying on complex statistics to make choices, humans tend to make decisions according to instinct. Often, these instincts rely on “heuristics,” or mental shortcuts, where we focus on one key factor to make a decision, rather than taking into account every tiny detail.

However, these heuristics aren’t simply time-savers. They can also be incredibly accurate at selecting the best option. Heuristics tune out the noise, which can mislead an overly-complicated analysis. This explains why simply dividing your money equally among assets can outperform even the most sophisticated portfolios.

“In a world where all options and probabilities are known, a heuristic can only be faster but never more accurate,” says Gigerenzer. “In a world of uncertainty, which is typically the situation we face, where one cannot optimize by definition, heuristics tend to be more robust.”

For example, the recognition heuristic explains why we’re more likely to buy a product we know, or look for familiar faces in a crowd. And though this can be taken advantage of by advertisers, Gigerenzer’s work has shown that name recognition can predict the winners of Wimbledon tournaments better than the complex ATP rankings or other criteria.

Though they’re not perfect in all circumstances—our instincts can lead us to bias or racist assumptions, for example—heuristics are a highly useful tool for making decisions in our unstable world. “These are evolved capacities that have probably evolved for a reason,” says Brighton. “You could argue it’s irrational to try and weigh up all these unknown factors and it’s more rational to try and rely on their gut—which, for all we know, may be taking into account cues that aren’t obvious.”

Kahneman and Tversky recognized that heuristics and cognitive biases can be highly effective mechanisms, but all too often these biases are portrayed as flaws in our thought process. However, Gigerenzer insists that such biases are only weaknesses in very narrow settings. Cognitive biases tend to be highlighted in lab experiments, where the human decisions are contrasted with probability theory. This is often “the wrong yardstick,” says Brighton.

For example, hyperbolic discounting is a well-known cognitive bias, whereby people will instinctively prefer $50 now over $100 in a year’s time, even though that ultimately leads to a lesser reward. But while that may seem silly in a perfect economic model setting, imagine the scenario in the real world: If a friend offered you a sum of money now or double in twelve months’ time, you might well go ahead and take the money immediately on offer. After all, he could forget, or break his promise, or you could become less friendly. The many variables in the real world mean that it makes sense to hold on to whatever rewards we can quickly get our hands on.
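That trade-off can be made concrete with the standard hyperbolic discounting formula, in which the subjective value of a reward A received after a delay D is V = A / (1 + kD), with k an individual discount rate. The snippet below is a minimal sketch (the k values are arbitrary, chosen purely for illustration) of how a steep enough discount rate makes $50 now feel worth more than a promised $100 in a year:

```python
# A sketch of hyperbolic discounting (illustration only, not from the article):
# the felt value of a delayed reward A after delay D is V = A / (1 + k * D),
# where k is an individual's discount rate.

def hyperbolic_value(amount: float, delay_years: float, k: float) -> float:
    """Subjective present value of `amount` received after `delay_years`."""
    return amount / (1 + k * delay_years)

now_50, later_100 = 50.0, 100.0   # $50 immediately vs. $100 promised in a year

for k in (0.2, 2.0):              # a patient chooser and an impatient one
    felt = hyperbolic_value(later_100, delay_years=1.0, k=k)
    choice = "take the $50 now" if now_50 > felt else "wait for the $100"
    print(f"k={k}: the delayed $100 feels like ${felt:.2f} -> {choice}")
```

On this reading, the 'bias' is simply a high k, which is a defensible setting whenever the promise itself is uncertain.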

Though calling someone hot-headed or overly emotional is generally a critique of their thinking process, emotions are in fact essential to decision-making. There’s even research to show that those who suffer brain damage in the part of the organ governing emotions often struggle to make decisions. They can weigh up the pros and cons, but can’t come down on one side.

This makes sense, given that positive emotions are often the ultimate ends of our decisions—we can only choose what course to take if we know what will make us happy. “You can very well know that the world is going to end tomorrow but if you have no desire to live or do anything then you shouldn’t give a damn about it. Facts on their own don’t tell you anything,” says Mercier. “It’s only paired with preferences, desires, with whatever gives you pleasure or pain, that can guide your behavior. Even if you knew the facts perfectly, that still doesn’t tell you anything about what you should do.”

Though emotions can derail highly rational thought, there are occasions where overly rational thinking would be highly inappropriate. Take finding a partner, for example. If you had the choice between a good-looking high-earner who your mother approves of, versus someone you love who makes you happy every time you speak to them—well, you’d be a fool not to follow your heart.

And even when feelings defy reason, it can be a good idea to go along with the emotional rollercoaster. After all, the world can be an entirely terrible place and, from a strictly logical perspective, optimism is somewhat irrational. But it’s still useful. “It can be beneficial not to run around in the world and be depressed all the time,” says Gigerenzer.

The same goes for courage. Courageous acts and leaps of faith are often attempts to overcome great and seemingly insurmountable challenges. (It wouldn’t take much courage if it were easy to do.) But while courage may be irrational or hubristic, we wouldn’t have many great entrepreneurs or works of art without those with a somewhat illogical faith in their own abilities.

There are, of course, occasions where we’d benefit from humans being more rational. Like politics, for example. The fallibility of human reasoning has been much discussed recently following unexpected and controversial populist uprisings (such as Britain’s “Brexit” referendum and the election of US President Trump). There’s understandable consternation about why people would vote against their own interests.

But, as a recent New Yorker piece explains, our attitude to facts makes evolutionary sense given that humans developed to be social creatures, not logicians analyzing GDP trends. Dan Sperber, a cognitive scientist at Central European University and Mercier’s co-author, says that the social implications of any decision are far from irrelevant. “Even if a decision seems to bring a benefit, if it is ill-judged by others, then there’s a cost,” he says. “The main role of reasoning in decision-making is not to arrive at the decision but to be able to present the decision as something that’s rational.”

He believes we only use reason to retrospectively justify the decision, and largely rely on unquestioned instincts to make choices. It makes good sense that, on occasion, instincts would encourage us to arrive at the same conclusion as those around us. After all, endless arguments about who’s right can easily lead to social ostracization.

Similarly, we’re happy to unthinkingly agree with others’ seeming expertise because this trait is key to our capacity to collaborate. It can be problematic when we unquestioningly go along with pundits on TV, but it does have its uses.

“Relying on our community of knowledge is absolutely critical to functioning. We could not do anything alone,” says Philip Fernbach, cognitive scientist at the University of Colorado. “This is increasingly true. As technology gets more complex it is increasingly the case that no one individual is a master of all elements of it.”

Even the cognitive biases that can lead to irrational political decisions do have some advantages. After all, refusing to rely on others’ reasoning and failing to consider how our responses would be socially received would likely leave us isolated and unable to get much done.

Of course, no human is perfect, and there are downsides to our instincts. But, overall, we’re still far better suited to the real world than the most perfectly logical thinking machine. We’re inescapably irrational, and far better thinkers as a result.

Humans are born irrational, and that has made us better decision-makers. By Olivia Goldhill.
Quartz, March 4, 2017.






The Enigma of Reason. Harvard University Press