Sci-Phi: Rational decisions

Mathew Iredale discovers why myth-busting doesn’t work

A recent review of research into rational decision making, led by Dr. Norbert Schwarz of the Institute for Social Research at the University of Michigan, has once again illustrated the extraordinary fallibility of human judgment.

Research going back decades has consistently shown that our ability to make what we consider to be rational decisions can sometimes fall far short of a rational ideal. Over the years an increasing number of systematic biases have been discovered which underlie the errors in our judgment and decision making.

To earlier researchers, the solution to such fallibility seemed obvious: if people only thought hard enough about the issues at hand, considered all the relevant information and employed proper reasoning strategies, their decision making would surely improve. But as Schwarz et al. report, these attempts to improve decision making often fail to achieve their goals, even under conditions assumed to foster rational judgment.

For example, models of rational choice assume that people will expend more time and effort on getting it right when the stakes are high, in which case providing proper incentives should improve judgment. But the experimental evidence shows that it rarely does. Similarly, increasing people’s accountability for their decisions improves performance in some cases but impedes it in others. A further problem described by Schwarz is that increased effort will only improve performance when people already possess strategies that are appropriate for the task at hand; “in the absence of such strategies, they will just do the wrong thing with more gusto.”

But even when no particularly sophisticated strategy is required, trying harder will not necessarily lead to better decision making. For example, asking people to “consider the opposite” is one of the most widely recommended debiasing strategies. And yet the more reasons people try to generate for why their initial judgment might be wrong, the more they convince themselves that it was right on target.

Why should this be so? Schwarz argues that the strategy of “consider the opposite” often fails to achieve the desired effect because it ignores the metacognitive experiences that accompany the reasoning process.

Most theories of judgment and decision making focus on the role of declarative information, that is, on what people think about, and on the inference rules they apply to accessible thought content. But human reasoning is accompanied by a variety of metacognitive experiences: the ease or difficulty with which information can be brought to mind and thoughts can be generated, the fluency with which new information can be processed, and emotional reactions to that information.

According to Schwarz, these experiences qualify the implications of accessible declarative information, with the result that we can accurately predict people’s judgments only by taking the interplay of declarative and experiential information into account.

A similar situation arises with another popular strategy used to counter false beliefs: presenting contradictory evidence. Given its use in public information campaigns, this is perhaps the most widespread mechanism for countering erroneous beliefs. It is perhaps also the most dangerous, given that it often doesn’t work. Amazingly, this rather pertinent piece of information has been common knowledge for some 60 years (ever since Floyd Allport and Milton Lepkin’s pioneering research into erroneous beliefs during the Second World War), and yet the contradictory evidence strategy is still very much in use. And it still doesn’t work, as a recent study by Ian Skurnik, Carolyn Yoon and Schwarz himself has shown.

The Centers for Disease Control and Prevention (CDC) in America has published a flyer, available online, which health professionals can download and give to their patients. It illustrates a common format of information campaigns that counter misleading information by confronting “myths” with “facts.” In this case, the myths are erroneous beliefs about flu vaccination (e.g. the side effects are worse than the flu), which are confronted with a number of facts (e.g. not everyone can take flu vaccine).

Skurnik et al. split their participants into two groups, giving one the CDC’s “Facts & Myths” flyer and the other a “Facts” version of the flyer (presenting only the facts). They wanted to learn how the different flyers would affect participants’ beliefs about the flu and their intention to receive the flu vaccination. These measures were assessed either immediately after participants read the respective flyer or 30 minutes later.

Participants who read the “Facts & Myths” flyer were then given a list of statements repeating the facts and myths and asked to indicate for each whether it was true or false. Right after reading the flyer, participants had good memory for the presented information and made only a few random errors, identifying 4% of the myths as true and 3% of the facts as false. But after only 30 minutes, their judgments showed a systematic error pattern: they now misidentified 15% of the myths as true, while their misidentification of facts as false stayed low, at 2%.

Schwarz comments: “This is the familiar pattern of illusion-of-truth effects: once memory for substantive details fades, familiar statements are more likely to be accepted as true than to be rejected as false. This familiarity bias results in a higher rate of erroneous judgments when the statement is false rather than true, as observed in the present study. On the applied side, these findings illustrate how the attempt to debunk myths facilitates their acceptance after a delay of only 30 minutes.”

These findings suggest that participants drew on the declarative information provided by the flyers when it was highly accessible. As this information faded from memory, they increasingly relied on the perceived familiarity of the information to determine its truth value, resulting in the observed backfire effects.

As with the “consider the opposite” strategy, Schwarz concludes that the failure of the “Facts & Myths” flyer arises “because the educational strategy focuses solely on information content and ignores the metacognitive experiences that are part and parcel of the reasoning process.”

Unfortunately, such errors of judgment are all too common in decision making that involves memory recall. For example, people assume that information that is well represented in memory is easier to recall than information that is poorly represented; that recent events are easier to recall than distant events; that important events are easier to recall than unimportant ones; and that thought generation is easier when one has high rather than low expertise relevant to the subject matter of the memory. Relying on the felt ease or difficulty of recall as a guide to truth or importance is exactly where judgment goes astray.

How, then, can we guard ourselves against such errors? The answer, at the present time, is not entirely clear. Despite years of research, “much remains to be learned about the role of metacognitive experiences”, says Schwarz.

In the end, it may be the case that we simply cannot avoid making mistakes; that our thought processes are simply too complicated, too rich with emotion and content, to avoid systematic biases and the errors that they give rise to. And if this is the price that we have to pay for a full conscious experience, then we should not be too despondent; it is probably one that is well worth paying.

Suggested reading
“Metacognitive Experiences and the Intricacies of Setting People Straight: Implications for Debiasing and Public Information Campaigns” by Norbert Schwarz, Lawrence J. Sanna, Ian Skurnik and Carolyn Yoon (2007), Advances in Experimental Social Psychology, Vol. 39, pp. 127–161.

Mathew Iredale’s Sci-Phi column appears every issue in tpm