Category Archives: Columns

Why cheat?

Stephen Mumford explains why it is wrong to cheat in sport. This article appears in Issue 61 of The Philosophers’ Magazine. Please support TPM by subscribing.

Widespread moral outrage has been prompted by Lance Armstrong finally coming clean on his use of performance enhancing drugs in his sport. Some purchasers of his autobiography have demanded refunds on the ground of the work having been bought as fact that is now considered fiction. Armstrong was a cheat; and we feel cheated.

There is a good reason for this. Ethical and aesthetic values can be closely connected, as the case of sport illustrates. In Watching Sport, I argued that moral flaws can detract from the aesthetic value of sport, while moral virtues can increase it, and I used Lance Armstrong as one illustration. Back then, Armstrong fell into the latter category. The beauty of his victories was enhanced by his return from cancer. Now that we know there was a different kind of enhancement involved, the aesthetic is ruined. Ben Johnson’s 1988 Olympic sprint was similarly destroyed aesthetically by its basis in cheating.

If we assume that sporting beauty can be defeated by an ethical vice, we had better be sure that the use of drugs in sport really is wrong. While the judgements of sporting authorities are all or nothing – guilt or innocence – the arguments are not always so cut and dried. Chemicals in bodies come in degrees, and disqualifications do not. Some such chemicals are naturally occurring, such as testosterone. Up to a certain level, an athlete is innocent of wrong-doing. The slightest degree over the limit, and they bear absolute guilt. Might an athlete then try to get as close to the legal limit as they can, without exceeding it? Some other cases are claimed to have been a result of accidental ingestion of a drug, as in the case of British skier Alain Baxter, who was stripped of his 2002 Winter Olympic medal after the use of a shop-bought inhaler. And although the drug in his body was on the banned list, it was acknowledged to be an inactive variety of it. The drug had no performance-enhancing value. The wrongness of drug-cheating in sport does not, therefore, rest only on level-playing field considerations or unfair advantage. And it is not always a question of protection of the athletes either. Some performance-enhancers are damaging to health, but not all are. Maybe the harmless ones should be allowed.

Here is a different approach. Perhaps the wrongness of drugs in sport resides in nothing more than that it is against the rules. It’s wrong because it’s cheating. We want the rules obeyed, whatever they are, and some of those just happen to concern drug use. After all, some of the arguments above could apply to other cases of cheating in sport or violation of the rules. A snooker player cannot hope to escape sanction just because a hanging sleeve knocked a ball accidentally, and nor would it matter if the ball’s movement was of no advantage. It’s a foul either way. And in football, the ball can be taken as close to the boundary of play as one likes, but once it crosses that line – no matter how little – it is out of play absolutely. All or nothing calls are frequently essential in sport. At least the rules for drugs in sport are relatively clear.

Why, though, would anyone knowingly cheat, even if they thought they could escape punishment? In one of the finest books in the philosophy of sport, Bernard Suits’ The Grasshopper, the playing of games is defended as an end in itself. One plays for its own sake, and consequently we should expect game-playing to be the centrepiece of any imagined utopia. It follows, argues Suits, that one accepts the rules of a game precisely because doing so is a precondition of playing the game. One accepts a lusory goal in sport, an inefficient means of achieving some task, precisely because without the constraint of rules, game playing would not be possible. Thus, one accepts that one has to jump over the bar, rather than walk under it; one has to run all the way around the track instead of cutting across the infield; and one has to get the ball in the hole by hitting it with a club instead of carrying it and dropping it in there. Some of these constraints are fairly arbitrary. Games can and do evolve in all sorts of ways. But unless one accepts those rules, one is not playing. You may get to the other side of the bar, but unless you have done so by jumping over it, you are not playing high jump. And similarly, it can be contended, if one breaks the rules of drug use, one has opted out of the sport.

Assuming that is right, what tempts someone like Armstrong to knowingly opt out of the sport? Why did he voluntarily stop competing in cycling? Indeed, why would anyone cut across the infield in a 400-metre race, even if they could do so undetected? And would they really have “won” the race if they did so? Arguably not. But if someone willingly stops playing, while adopting the appearance of playing, doesn’t that show that something has gone wrong with sport? It is no longer befitting Suits’ utopia. It is not being done for its own sake but, rather, for the rewards of finance and fame. In that case, something has gone wrong in society’s institutionalisation of sport.

And here is a more general lesson, for cheating does not occur only in sport. Academics are acutely aware of incidents of plagiarism. One website was found offering to write undergraduate essays, with a pricing scale determined by length and class of the essay. The same site even offered to write PhD theses, for a price. Why would anyone want to take up that offer? Why graduate knowing that it is not one’s own achievement? Just like sport, learning should be its own reward. If we reach the point where the instrumental value of such achievements outweighs their intrinsic value, then we have created a defective society and a recipe for cheating.

Stephen Mumford is professor of metaphysics in the department of philosophy and dean of the faculty of arts at the University of Nottingham, as well as adjunct professor at the Norwegian University of Life Sciences. He is author of Watching Sport: Aesthetics, Ethics and Emotion (Routledge, 2011) and Metaphysics: A Very Short Introduction (Oxford, 2012).

The skeptic

Libel laws are wrong, not bogus, argues regular columnist Wendy Grossman

Tomorrow, as I write this, is the launch of the Westminster Skeptics; the day after is the date of Simon Singh’s court appearance. The former is a sort of preparatory rally for the latter, since the speakers will include Singh and Ben Goldacre, both butts of libel suits over their various discussions of “alternative” therapies. The problem for all of us is this: how do we discuss science-related claims critically without falling afoul of England’s world-beating libel laws?

Exactly what Singh said in the Guardian about chiropractors has been analysed (and republished) in many places throughout the British media. The brief summary: he took them to task for promoting chiropractic as a remedy for six childhood ailments despite evidence that it’s ineffective. Somewhere in there, he used the word “bogus”. He went on to explain what he meant by it in the next paragraph – he did not mean that they were dishonest – but the British Chiropractic Association (BCA) sued both Singh personally and the Guardian corporately on behalf of its members. The judge in the case, who has presided over some of the most famous and disputed libel cases in recent years, ruled that he meant something rather different and more actionable. Almost everyone was shocked by the judgement; the exceptions in my acquaintance were two lawyers well versed in English libel law, who took the view that the judgement was legally correct, even if no one likes it.

The Guardian extricated itself rather quickly; there’s a limit to how many libel actions it can afford and it’s got some other battles going on already. I would guess that the BCA imagined that Singh, like most writers, would be too poor and too intimidated to pursue the matter on his own.

Unfortunately for the BCA, Singh has several bestselling books in his back list, a pugnacious attitude, and noisy friends. Since the judgement, therefore, the BCA has found itself under far more public and unyielding criticism, and a campaign has been launched to get libel laws out of science. MPs are beginning to take note, and it’s just possible that reform of libel law may come out of this.

The question that I imagine will have to be answered is this: how far out of science should libel laws be kept? Libel lawyers defend the law: you can’t allow people to say anything they want without redress for people who have been harmed. Scientists and skeptics tend to feel that disputes about scientific claims ought to be exempt on the grounds that it’s a matter of public interest whether a treatment offered to help cure colic and frequent ear infections is effective or not. Otherwise, you could find yourself in a libel suit for having poorly phrased an attack on the view that the earth is not flat.

Somewhere in between those extremes is where the line will eventually be drawn. One possibility might be to exempt categories of critics – say, scientists and media (like MPs in Parliament). I don’t like this idea personally because the boundaries of these categories are muddy. You would presumably identify scientists by requiring them to work for an accredited institution; that would eliminate someone who, like James Randi, is vastly knowledgeable but uncredentialed. Similarly, if you require journalists to work for some sort of accredited media organisation you eliminate freelances much of the time, and also bloggers and others working in newer media.

It seems to me that the logical place to draw the line for exempt criticism is between the person and the person’s work. That is, it seems to me that ad hominem attacks should legitimately be covered by libel law, while critiques of claims and the (pseudo)science behind them should be exempt the way Parliamentary debates are exempt. That might actually still leave Singh’s original comments on the wrong side of the law, but it would be clear and understandable, and it would be something you could teach would-be journalists in a class or seminar. (That’s assuming there are any would-be journalists left after the financial collapse of traditional media.)

Of course, it’s never that simple. You still would not be able to call a fraud a fraud without risking having to prove your contention in court at vast expense. (Singh has a chart that shows that the cost of a libel trial in the UK is 100 times the cost of one in Europe.) And “bogus” would probably still be a dangerous word. But you could, under such a scheme, point out the lack of evidence for such treatments and their continued promotion and confidently expect not to receive a writ to shut you up.

In addition, therefore, I think it should be possible to mount a “public interest” defence that gives greater protection. Critics should, for example, have considerable latitude to discuss whether a treatment claimed to cure sick children actually does help them.

The really good thing that’s come out of all this – if you’re not a chiropractor – is that a much wider audience has been made aware of Singh’s critique of chiropractic than would have been had the BCA simply ignored him. If libel reform follows, the BCA will have done everyone except itself and its members a huge favour.

Wendy M Grossman is founder and former editor (twice) of The Skeptic magazine.

Imagine that

Raw – what is it good for? Jean Kazez’s regular arts column

Blogging at 3 Quarks Daily, the legal philosopher Gerald Dworkin recently discussed whether food can be considered art. Cooking is a “minor art form”, he argues, but if he’s right, food doesn’t always lend itself to discussion. Food writers tend to tell me more than I want to know about the state of their taste buds, seldom making the jump to any bigger issues. No wonder: most of the time, food isn’t about anything. But it can be, as I discovered during a trip to a raw food restaurant in Dallas, promisingly called Bliss.

Sandwiched between a busy street and an elevated train track, the tiny place had outdoor seating only. Across the street was a “can this be real?” liquor store where bikini-clad women brought orders out to customers sitting in cars and pick-up trucks – possibly all a holdover from a strip club next door that seemed to have shut down. We looked over menu options like Rawsagna Supreme, Rawko Taco and Naked Pizza while suffering a sense of impending doom, thanks to the vapid, end-of-the-world soundtrack that was being piped in. Our children had to be reassured that we were safe, despite the panhandler who reached his hand in and asked for train fare.

No doubt I was receptive to the semiotic possibilities because I had been reading The Year of the Flood, a new novel by Margaret Atwood. The restaurant staff could have been members of “The Gardeners”, a cult in the novel’s near-future world that uses organic gardening, veganism, science, and a little Old Testament religion to hold their own in a world overrun by mega-corporations, environmental devastation, and genetic engineering run amok. The Gardeners live in Pleebland, a violence-infested neighbourhood outside the wealthy, gated HelthWyzer community. Under the leadership of Adam One, they prepare for a prophesied flood by honing survival skills and respect for animals and nature. Ren grew up a Gardener, but works in a strip club called “Scales and Tails” when the year of the flood – a waterless pandemic, as it turns out – arrives. She has to do more than deliver liquor to cars, but maintains her wits and doesn’t forget her Gardener roots.

All organic and vegan, Bliss goes a step further than the Gardeners, and eschews cooking. As we waited for our food, we wondered about this. It’s environmentally sound to eat local and organic, and good for animals to eat vegan, but how is it better to eat raw? I pondered the fact that heat is just the motion of molecules. Were we to prefer less motion, the way Puritans disapproved of dancing?

Later on I looked into health claims made by raw foodists. Cooking robs food of vitamins, and some of the compounds formed by cooking are possible carcinogens. But in a new book called Catching Fire: How Cooking Made Us Human, Harvard primatologist Richard Wrangham argues that cooking has its benefits, or at least had them, back in the days when we were barely past apehood. Heating changes the chemistry of food, making it more readily digested. Our forebears got more energy from their food when they started cooking it and quickly developed smaller guts and bigger brains. Social relations were altered and time was freed up for other pursuits, at least for men (Wrangham says that cooking is universally women’s work). If it weren’t for cooking, we’d still be chewing our food for six hours a day, like chimpanzees. Plus, cooking is needed to make things taste good. Isn’t it?

I picked up Atwood’s book not because I was thinking about going to Bliss, but because I’d been paying attention to a debate about animals and genetic engineering. Atwood’s novel is full of perverse animal life. Most of our familiar species have gone extinct, and greedy corporate scientists are busy engineering new and curious species. There are colourful Mo’hair sheep with human hair, rakunks made from raccoon and skunk genes, and pigs with human brains. The Liobam has been created to support the biblical prophecy about the lion lying down with the lamb.

Back here in the real world, I’d been reading about a proposal to genetically alter factory farmed animals so that they can’t feel pain. Though there are thorny arguments to be considered, novelists can help us imagine who we will have become, by the time we are using bioengineering to remake the animal world. We will have become a species on the precipice of extinction, Atwood’s novel says. There’s nothing that isn’t strange in this novel, but there’s both strange-good and strange-bad. The Gardeners, though gently mocked throughout the book, are strange-good. Though they are greener than green, I’m pretty sure we are meant to heed their messages.

But what about going one step beyond – going green, organic, vegan, and raw? When our food finally arrived, I was stunned. It was absolutely delicious. The flavours were intense and unique, and sheer heat was not missed. In fact, it turns out that hot spices are just as warming as high temperatures. And we did not sit there chewing for hours like chimpanzees.

Atwood’s novel was delicious too – as an exploration of science and religion, environmental ethics, and our planet’s future, but also as just plain riveting fiction.

Jean Kazez is the author of The Weight of Things: Philosophy and the Good Life and Animalkind: What We Owe to Animals (Wiley-Blackwell). She teaches philosophy at Southern Methodist University in Dallas.

Sci-Phi: God and the brain

Mat Iredale on the natural basis of supernatural thinking

Why is it that humans are so attracted to religion? Is there something about our brain that predisposes us to religion – is there even a unique domain for religion in the human mind – or is it just a coincidence that markedly similar religions have arisen from otherwise vastly different human societies?

Recent research from cognitive psychology, neuroscience, anthropology and archaeology has given rise to a new science of religion that is beginning to provide answers to these questions that “promise to change our view of religion” according to the anthropologist Pascal Boyer, author of Religion Explained: The Evolutionary Origins of Religious Thought.

One finding that is emerging is that humans do seem predisposed towards a religious world view. As Boyer points out, unlike other social animals, humans are very good at establishing and maintaining relations with agents beyond their physical presence; social hierarchies and coalitions, for instance, include temporarily absent members. From childhood, humans form enduring, stable and important social relationships with fictional characters, imaginary friends, deceased relatives, unseen heroes and fantasised mates. Boyer suggests that the extraordinary social skills of humans, compared with other primates, may be honed by constant practice with imagined or absent partners.

He concludes that it is “a small step from having this capacity to bond with non-physical agents to conceptualising spirits, dead ancestors and gods, who are neither visible nor tangible, yet are socially involved.” Boyer thinks that this may explain why, in most cultures, at least some of the superhuman agents in which people believe have moral concerns: “Those agents are often described as having complete access only to morally relevant actions. Experiments show that it is much more natural to think ‘the gods know that I stole this money’ than ‘the gods know that I had porridge for breakfast’.”

Research has also shown that tacit assumptions are extremely similar in different cultures and religions, unlike conscious beliefs, which differ widely from one culture or religion to another. Boyer believes that these similarities may stem from the peculiarities of human memory. Experiments suggest that people best remember stories that include a combination of counterintuitive physical feats (in which characters go through walls or move instantaneously) and plausibly human psychological features (perceptions, thoughts, intentions), a memory bias that contributes to the cultural success of gods and spirits.

Other experiments suggest that children are predisposed to assume both design and intention behind natural events, prompting some psychologists and anthropologists to believe that children, left entirely to their own devices, would invent some conception of God. But this should come as little surprise, given that our minds have evolved to detect patterns in the world. We tend to detect patterns that aren’t actually there, whether it be faces in clouds or a divine hand in the workings of Nature.

Our cognitive predisposition towards a religious world view helps to explain the failure of the once popular prediction that the spread of industrialised society would spell the end of religion. Karl Marx, Sigmund Freud and Max Weber, together with various other sociologists, historians, psychologists and anthropologists influenced by their work, all expected religious belief to decline in the face of the modern developing world. Not only has this not happened, but as the philosopher and neuroscientist Sam Harris points out, religion remains one of the most prominent features of human life in the 21st century with orthodox religion being “in full bloom” throughout the developing world. Whether it is the rise of Islamism throughout the Muslim world or the spread of Pentecostalism throughout Africa, Harris says that it is clear that religion will have geopolitical consequences well into the 21st century.

Harris was part of a Los Angeles-based research team that recently carried out the first systematic study into the difference between religious and non-religious belief. Using functional magnetic resonance imaging (fMRI) to measure signal changes in the brains of thirty subjects (fifteen committed Christians and fifteen nonbelievers) as they evaluated the truth and falsity of religious and nonreligious statements, Harris et al. were able to compare those parts of the brain that “lit up” when the subjects were asked a series of questions that were either of a religious nature or religion-neutral.

Whilst admitting that gradations of belief are certainly worth investigating, the authors wanted their experiment to characterise belief and disbelief in their purest form. They therefore excluded from the trial anyone who could not consistently respond “true” or “false” with conviction to the various statements. In a similar manner, the statements shown to the subjects were designed to elicit only a yes or no answer, rather than a maybe, and were designed, as far as possible, to have the same semantic structure and content. The statements were shown to the subjects in groups of four (true and false; religious and nonreligious), for example: The Biblical God really exists (Christian true/nonbeliever false); The Biblical God is a myth (Christian false/nonbeliever true); Santa Claus is a myth (both groups true); Santa Claus really exists (both groups false).

After each statement was shown, the subjects pressed a button to indicate whether the statement was true or false. The statements were designed to produce roughly equal numbers of believed and disbelieved trials. What they found was that while the human brain responds very differently to religious and nonreligious statements, the process of believing or disbelieving a statement, whether religious or not, seems to be governed by the same areas in the brain.

Contrasting belief and disbelief yielded increased activity in an area of the brain called the ventromedial prefrontal cortex, thought to be associated with self-representation, emotional associations, reward, and goal-driven behaviour. The authors report that this region showed greater signal “whether subjects believed statements about God, the Virgin Birth, etc. or statements about ordinary facts.”

A comparison of all religious with all nonreligious statements suggested that religious thinking is more associated with brain regions that govern emotion, self-representation and cognitive conflict in both believers and nonbelievers, while thinking about ordinary facts is more reliant upon memory retrieval networks. Activity in a region of the brain called the anterior cingulate cortex, an area associated with response to conflict and which has been negatively correlated with religious conviction, suggested that both believers and nonbelievers experienced greater uncertainty when evaluating religious statements.

The authors admit that one cannot reliably infer the presence of a mental state on the basis of brain data alone, unless the brain regions in question are known to be truly selective for a single state of mind. As the brain is an evolved organ, with higher order states emerging from lower order mechanisms, very few of its regions are so selective as to fully justify inferences of this kind. Nevertheless, they argue that their results “appear to make at least provisional sense of the emotional tone of belief. And whatever larger role our regions of interest play in human cognition and behaviour, they appear to respond similarly to putative statements of fact, irrespective of content, in the brains of both religious believers and nonbelievers.”

They conclude that there is no reason to expect that any regions of the human brain are dedicated solely to belief and disbelief, but that their research suggests that these opposing states of cognition can be discriminated by functional neuroimaging and are intimately tied to networks involved in self-representation and reward. And they argue that their results may have many areas of application, “ranging from the neuropsychology of religion, to the use of ‘belief-detection’ as a surrogate for ‘lie-detection,’ to understanding how the practice of science itself, and truth-claims generally, emerge from the biology of the human brain.”

Further Reading
“Being human: Religion: Bound to believe?” by Pascal Boyer, Nature, 455, 1038-1039 (23 October 2008)
“The Neural Correlates of Religious and Nonreligious Belief”, S. Harris, J.T. Kaplan, A. Curiel, S.Y. Bookheimer, M. Iacoboni, et al., PLoS ONE 4(10): 2009

Mathew Iredale’s Sci-Phi column appears every issue in tpm

Word of Mouse

Luciano Floridi takes an ehealth check-up

Monsieur Homais is one of the less likeable characters in Madame Bovary. The deceitful pharmacist feigns a deep friendship with Charles Bovary. In fact, he constantly undermines his reputation with his patients, thus contributing to Charles’ ruin. Monsieur Homais is not merely wicked. A clever man, he has been convicted in the past for practising medicine without a licence. So he worries, very reasonably, that Charles might denounce him to the authorities for the illicit business of health advice and personal consultations that he keeps organising in his pharmacy.

The ultimate success of the pharmacist’s dodgy schemes is not surprising. Those were the days when blacksmiths and barbers could regularly act as dentists and surgeons (after all, Charles is not a doctor either, but only a “health officer”); patients and doctors had to meet face to face in order to interact; and access to health information was the privilege of a few. Mail and telegraph messages were of course commonly available, but neither allowed real-time conversations.

Madame Bovary was serialised in 1856, exactly twenty years before Bell was awarded a patent for the electric telephone by the United States Patent and Trademark Office. Once ICT (information and communication technologies) of all kinds began to make possible quick consultations and rapid responses, being “on call” acquired a new meaning, telemedicine was born, and the Monsieurs Homais around the world started to find it increasingly difficult to make a living.

Today, we ordinarily speak of e-Health or Health 2.0 as the most recent development in healthcare practices, which are increasingly patient-centred, not just patient-oriented. Definitions vary, but put simply, e-Health is the answer to “what have computer scientists ever done for our health?” From the empowerment of individuals, who regularly access health-related information on the web, to specialised applications for monitoring populations of patients through their mobile phones, e-Health is a macroscopic phenomenon, which is fast spreading and has immense potentialities.

Two conferences recently organised in the Netherlands – the Second Health 2.0 Conference and the First International E-Mental Health Summit – well illustrate the exponential growth of e-Health and its popularity.

Behind the success of ICT-based medicine and well-being lie two phenomena and two trends. The first phenomenon may be labelled “the transparent body”. By measuring, monitoring and managing our bodies ever more deeply, accurately and non-invasively, ICT have made us more easily explorable, have increased the scope of possible interactions from without and from within our bodies (e.g. nanotechnology), and have made the boundaries between body and environment increasingly porous (e.g. fMRI). We were black boxes; we are quickly becoming white boxes through which anyone can see.

The second phenomenon is that of “the shared body”. “My” body can now be easily seen as a “type” of body, thus easing the shift from “my health conditions” to “health conditions I share with others”. And it is more and more natural to consider oneself not only the source of information (what I tell the doctor) or the owner of information about oneself (see my Google health profile), but also a channel that transfers DNA information and corresponding biological features between past and future generations.

The correlated trends are, first, a democratisation of health information, which is available to, accessible by, and owned by more citizens of any modern Yonville than ever before; and second, the socialisation of health conditions: you only need to check “multiple sclerosis” on YouTube, for example, to appreciate how easily and significantly ICT can shape and transform our sense of belonging to a community.

By 2018, the world population will consist of more people over 65 than children under 5, for the first time in the history of humanity. We are getting older, more educated and wealthier, so e-Health can only become an increasingly common daily experience, one of the pillars of future medical care, and obviously a multi-billion-dollar business, some of which will inevitably be dodgy. Your Inbox is full of dubious medical advice and pharmaceutical products. Which of course leads us back to Monsieur Homais. Emma learns from him how to acquire the arsenic with which she will commit suicide. During her horrible agony, her husband desperately “tried to look up his medical dictionary, but could not read it”. Nowadays you only need the usual Wikipedia. Just check under “Arsenic poisoning”. You will find there both diagnosis and treatment.

Luciano Floridi holds the Research Chair in Philosophy of Information at the University of Hertfordshire and is president of the International Association for Computing and Philosophy.

Imagine that

The first book I almost read last summer was Perfection: A Memoir of Betrayal and Renewal, to be found on many lists of perfect beach reading. The author is Julie Metz, a young New York book designer whose husband suddenly dropped dead at age 44, whereupon she discovered he had been cheating on her throughout their 13-year marriage. She devoted the next year to reading his love letters and confronting his mistresses, including a close friend, and now tells all – every last, lurid detail. The problem is that I got hung up on the question “why read it?” – why did I need to know about Julie Metz’s husband’s affairs? – and concluded that although I would have ripped the book off the shelf and devoured it in one sitting as a teenage babysitter, I just couldn’t justify it at this advanced stage of life. Sadly, I’ve come to think that books should have some nutritional value.

And so I decided to read The Believers, by Zoe Heller, in the hopes of being both entertained and edified. Entertainment seemed like a sure thing, since Heller is the author of Notes on a Scandal, the basis of the extremely good movie starring Cate Blanchett and Judi Dench. Edification was hinted at by the cover, which depicts a tangled thread festooned with crosses, stars of David, dollar signs, and other symbols. Perhaps there would be crises of faith and conversations about God, though hopefully none of the speechifying that makes the classic “belief” novel, The Brothers Karamazov, slow going. I was sort of hoping for the thinking woman’s (man’s) beach book.

As it turns out, the main character of The Believers has a husband of 40 years who collapses into a coma, whereupon she discovers he’s not only been a philanderer (she already knew that) but had another long-term partner and child. But no, she doesn’t need to know the details, and doesn’t seek out confrontation. The book really is about belief. Audrey Litvinoff and her comatose husband Joel are radical leftists, he a famous New York lawyer and she his devoted British-born wife. They are militant and utterly certain of all their views, including their contempt for religion – and particularly the Jewish religion they were born into. They are not just atheists, Heller tells us, but anti-theists (nice distinction).

To add to this, Audrey is blunt, insensitive, irascible, and very, very funny. She has no patience for anyone, and feels real love only for her comatose husband and her drug-addled adopted son. That leaves out her two daughters, Karla, a depressed, self-loathing, overweight woman who can’t get pregnant with her loutish husband; and Rosa, a mirror image of Audrey who’s shocked the family by developing an interest in orthodox Judaism.

The cast of characters that surrounds Audrey creates comedy and conflict, all a little thinly developed. The most central of these stories is about Rosa, who keeps taking two steps forward toward orthodox Judaism, then one step back. This daughter of anti-theists winds up being drawn to the most fundamentalist and non-rational variety of religion, first rebelling against its many peculiar commandments, and against its subordination of women, and then acquiescing. Meanwhile, Audrey shows herself to be a woman of faith too, devoted to her husband, and ready to suppress all her doubts about him.

What’s Zoe Heller trying to say? Fiction, unlike philosophy, needn’t say that ____ (fill in the blank). A novel is merely about things. This novel is about over-confident, over-certain, not-very-nice leftist atheists, and the way they are not as far as they think from faith and fundamentalism. Heller’s observations ring true about these fictional characters, and perhaps also elucidate a certain type of person we all must have run into.

Still, it’s hard not to see the book as a volley in the current debate between “the new atheists” and their much-perturbed critics. What has emerged as the mantra of many is that atheism is really, underneath it all, a kind of fundamentalism. It’s really a form of faith, you see. Readers who buy that kind of thing will find this book a perfect illustration of their point. The book has been optioned, which means the “atheism = faith” crowd may have a delicious movie treat coming their way.

When I finished the book I was surprised to learn that Heller is herself a member of a real-life Litvinoff clan. Jewish, atheist, and leftist (like half the people I know), Heller is probably not trying to say exactly what her novel seems to say. If The Believers is a cautionary tale, Heller really means to caution not against atheism, or left-wing politics, but against a certain sort of bull-headedness and failure to empathise; against being a Believer, instead of just having beliefs. The message that leaps off the page is a little more pointed, making this book provocative, but still a pleasure.

Jean Kazez is the author of The Weight of Things: Philosophy and the Good Life and Animalkind: What We Owe to Animals (Wiley-Blackwell). She teaches philosophy at Southern Methodist University in Dallas.

Sci-Phi: Rational decisions

Mathew Iredale discovers why myth-busting doesn’t work

A recent review of research into rational decision making, led by Dr. Norbert Schwarz of the Institute for Social Research at the University of Michigan, has once again illustrated the extraordinary fallibility of human judgment.

Research going back decades has consistently shown that our ability to make what we consider to be rational decisions can sometimes fall far short of a rational ideal. Over the years an increasing number of systematic biases have been discovered which underlie the errors in our judgment and decision making.

To earlier researchers, the solution to such fallibility seemed obvious: if people only thought enough about the issues at hand, considered all the relevant information and employed proper reasoning strategies, their decision making would surely improve. But as Schwarz et al. report, these attempts to improve decision making often fail to achieve their goals, even under conditions assumed to foster rational judgment.

For example, models of rational choice assume that people will expend more time and effort on getting it right when the stakes are high, in which case providing proper incentives should improve judgment. But the experimental evidence shows that it rarely does. Similarly, increasing people’s accountability for their decisions improves performance in some cases, but impedes it in others. A further problem described by Schwarz is that increased effort will only improve performance when people already possess strategies that are appropriate for the task at hand; “in the absence of such strategies, they will just do the wrong thing with more gusto.”

But even when no particularly sophisticated strategy is required, trying harder will not necessarily lead to better decision making. For example, asking people to “consider the opposite” is one of the most widely recommended debiasing strategies. And yet the more people try to consider the reasons why their initial judgment might be wrong, the more they convince themselves that their initial judgment was right on target.

Why should this be so? Schwarz argues that the strategy of “consider the opposite” often fails to achieve the desired effect because it ignores the metacognitive experiences that accompany the reasoning process.

Most theories of judgment and decision making focus on the role of declarative information, that is, on what people think about, and on the inference rules they apply to accessible thought content. But human reasoning is accompanied by a variety of metacognitive experiences: the ease or difficulty with which information can be brought to mind and thoughts can be generated, and the fluency with which new information can be processed as well as emotional reactions to that information.

According to Schwarz, these experiences qualify the implications of accessible declarative information, with the result that we can accurately predict people’s judgments only by taking the interplay of declarative and experiential information into account.

A similar situation occurs in another popular strategy used to counter false beliefs: using contradictory evidence. Given its use in public information campaigns, this is perhaps the most widespread mechanism for countering erroneous beliefs. It is perhaps also the most dangerous, given that it often doesn’t work. Amazingly, this rather pertinent piece of information has been common knowledge for some 60 years (ever since Floyd Allport and Milton Lepkin’s pioneering research into erroneous beliefs during the Second World War) and yet the contradictory evidence strategy is still very much in use. And it still doesn’t work, as a recent study by Ian Skurnik, Carolyn Yoon, and Schwarz himself, has shown.

The Centers for Disease Control and Prevention (CDC) in America has published a flyer, available online, which health professionals can download and give to their patients. It illustrates a common format of information campaigns that counter misleading information by confronting “myths” with “facts.” In this case, the myths are erroneous beliefs about flu vaccination (e.g. the side effects are worse than the flu), which are confronted with a number of facts (e.g. not everyone can take flu vaccine).

Skurnik et al split their participants into two groups, giving one the CDC’s “Facts & Myths” flyer and the other a “Facts” version of the flyer (presenting only the facts). They were interested to learn how the different flyers would affect participants’ beliefs about the flu and their intention to receive the flu vaccination. These measures were assessed either immediately after participants read the respective flyer or 30 minutes later.

Participants who read the “Facts & Myths” flyer received a list of statements that repeated the facts and myths and indicated for each statement whether it was true or false. Right after reading the flyer, participants had good memory for the presented information and made only a few random errors, identifying 4% of the myths as true and 3% of the facts as false. But after only thirty minutes, their judgments showed a systematic error pattern: they now misidentified 15% of the myths as true (their misidentification of facts as false stayed low, at 2%).

Schwarz comments: “This is the familiar pattern of illusion-of-truth effects: once memory for substantive details fades, familiar statements are more likely to be accepted as true than to be rejected as false. This familiarity bias results in a higher rate of erroneous judgments when the statement is false rather than true, as observed in the present study. On the applied side, these findings illustrate how the attempt to debunk myths facilitates their acceptance after a delay of only 30 minutes.”

These findings suggest that participants drew on the declarative information provided by the flyers when it was highly accessible. As this information faded from memory, they increasingly relied on the perceived familiarity of the information to determine its truth value, resulting in the observed backfire effects.

As with the “consider the opposite” strategy, Schwarz concludes that the failure of the “Facts & Myths” flyer arises “because the educational strategy focuses solely on information content and ignores the metacognitive experiences that are part and parcel of the reasoning process.”

Unfortunately, such errors of judgement are all too common in decision making involving memory recall. For example, people wrongly assume that information that is well represented in memory is easier to recall than information that is poorly represented; that recent events are easier to recall than distant events; that important events are easier to recall than unimportant ones; and that thought generation is easier when one has high rather than low expertise relevant to the subject matter of the memory.

How, then, can we guard ourselves against such errors? The answer, at the present time, is not entirely clear. Despite years of research, “much remains to be learned about the role of metacognitive experiences”, says Schwarz.

In the end, it may be the case that we simply cannot avoid making mistakes; that our thought processes are simply too complicated, too rich with emotion and content, to avoid systematic biases and the errors that they give rise to. And if this is the price that we have to pay for a full conscious experience, then we should not be too despondent; it is probably one that is well worth paying.

Suggested reading
“Metacognitive Experiences and the Intricacies of Setting People Straight: Implications for Debiasing and Public Information Campaigns” by Norbert Schwarz, Lawrence J. Sanna, Ian Skurnik and Carolyn Yoon, Advances in Experimental Social Psychology, Vol. 39 (2007), pp. 127-161.

Mathew Iredale’s Sci-Phi column appears every issue in tpm

The skeptic

No offence, but you’re a loon, says Wendy M. Grossman

“I am offended,” a friend instant-messaged me recently, with a URL. I clicked. (Why? So I could be offended, too? So I could give offensive stuff more traffic, to encourage the owner to be offensive some more?)

The story was a little (an appropriate word, I think) opinion piece on the Web site shared by the San Francisco Chronicle and the San Francisco Examiner, members of that endangered species, metropolitan US newspapers. It was, to be fair, on the astrology part of the Web site, but it was way wackier than anything else I’ve ever seen on an astrology page (let’s face it; like it or not, astrology is mainstream).

The gist: some Japanese scientists had scheduled the lunar orbiter Kaguya to crash into the moon to generate debris that the scientists could study to understand the moon’s composition. The writer, Satya Harvey, compared this to a schoolboy cutting up a live frog, and noted that the moon “represents the feminine” and that women are “connected to the moon by their menstrual cycles”.

Then she demanded to know: “Did these scientists talk to the moon? Tell her what they were doing? Ask her permission? Show her respect?”

It’s worth noting that all the comments basically called her a loon.

You really don’t know where to start with stuff like this. A dignified silence seems the most logical response: deliberate stupidity on this level isn’t worth debunking. There are a few snide things to say, such as that if this is what newspapers are going to publish, the sooner they die off the better.

There is also the principle of respecting free speech and not advocating censorship. But this presumably wasn’t free; I assume the paper actually paid her to produce this bit of nonsense. And for good reason: extreme stupidity, like extreme opinions, behaviour, or stories of any kind, gets people clicking, commenting, and linking. Stupidity may be bad for the soul and bad for society, but if it gets a struggling newspaper hits on its Web site that means advertising revenues. The “journalist” who pulls in hits is the one who’s worth paying. That astrologer may be stupid, but she’s not stupid.

From the newspaper’s point of view, stupidity is also a lot cheaper to produce than quality. Quality requires research, legwork, and effort, all of which has to be paid for. Stupidity a writer can dream up not only without ever leaving the house but without ever moving off the first news story they find on the Internet.

The same piece could have been more expensively but less stupidly written had Harvey bothered to look up the history of the ideas she was citing. She could, for example, have noted that taking things apart to see how they work is a long-standing, respected technique of science that has, among other things, enabled her to grow to maturity and live a rich, fulfilling life in (one presumes) reasonable health. In the IT field, it’s known as reverse-engineering.

Alternatively, she could have taken a moment or two to find out what Kaguya was doing up there in the first place. In fact, as Phil Plait posted in his Bad Astronomy blog, the orbiter was up there collecting some kick-ass video of the moon’s surface. The controlled impact was just the final word in a two-year mission collecting data. Would Harvey rather the Japanese had let the orbiter just crash any old way? It would have hurt the moon’s feelings just as much but given us no new information. What a waste.

What I resent most is the implication that this is all being said in some kind of defence of Woman. It’s bad enough that techies constantly use older women as examples of the technically challenged. Do we really need our own players making us look bad?

There is apparently a school of thought that believes that because “menstruation” is etymologically related to “moon” the Moon influences menstrual cycles. There is no clear evidence that suggests this. My favourite debunking of this one comes from Cecil Adams’ The Straight Dope column. The Earth is affected by the pull of the Moon, but somehow I don’t think gravity is capable of distinguishing between male and female.

I really do think this kind of thing is at least partially a failure of feminism. When I was a child in the 1960s, the common belief, even among many teachers, was that girls were biologically second-rate when it came to math and science. You would think that feminists seeking equal pay for equal work and seeking to knock down barriers would have sought to get more women into science as an imperative. Instead, many seem to have decided that science is male, rational, and evil and instead embraced the cult of the goddess. No wonder an astronomy writer of my acquaintance once said to me in exasperation of the astrology columns in every women’s magazine, “If women want us to take them seriously, they’ve got to stop reading this crap.” Amen, brother.

As for the question of why my friend sent me the URL to click on: some mysteries are too deep to penetrate.

Wendy M Grossman is founder and former editor (twice) of The Skeptic magazine.

Word of mouse

Luciano Floridi finds you only live twice

On May 23 2007, the Maldives became the first country to open an embassy in Second Life (SL), the web-based, virtual world inhabited by more than 6.5 million avatars (computer-generated residents), beating Sweden by a week. A quick look at the daily press shows that the popularity of SL is increasing exponentially. But this is not the main reason why philosophers should pay attention to it. SL is nothing short of the largest and most realistic thought experiment ever attempted, a true mine for philosophical research. Of course, this is not exactly how Linden Lab sees it, but just a few examples can easily drive the point home.

Ontologically, in SL the existential criterion seems to be some degree of “interactability” (x exists only if it can be interacted with) rather than a modal or temporal feature (x exists only if its essence is eternal and immutable) or an epistemic test (x exists only if it can be perceived). Your avatar might be there, but if it does not interact with its environment it counts as less than a ghost. It is an odd but instructive experience to be treated as thin air.

Of course, this prompts further questions about personal identity, the individual and social construction of the self, the emergence of communities with their rules and ethical codes, and so forth. New questions also begin to arise: think of the new digital flavour of “Platonic love”, or of the time when granny will leave to her grandchildren not only her earrings, but also that daring avatar, with spiky blue hair, that she started developing as an undergraduate in the twenties (of this millennium).

Epistemically, in SL one may enjoy a new opportunity to indulge in some old and idle speculations (“Am I an avatar in First Life, who is really being manoeuvred by another puppeteer in Zero Life?”; “do we live in a digital simulation?”) or probe more serious, new issues, for example about epistemic social trust, about ethical software design, or about the value of different theories of truth in a world that is in constant flux and which can easily answer to our wishes (in SL all sorts of things can be animated, not just automatic doors, so “snow is white” may become performative).

Semantically, imagine how interesting it can be to test new ideas about meaning and reference in a context where everybody is mute, and must communicate in a kind of typed shorthand. But hurry up, because people are already “vocalising” SL and learning how to speak. Teleported to new places (known as islands), you may be tempted to experiment with different identities (try being many people at once, or someone with a different gender from the one you have in First Life) or diverse profiles (having a pair of wings is rather normal) or alternative personalities.

In these and many other cases, SL could easily replace Plato’s Ring of Gyges, Descartes’ malicious demon, Nozick’s pleasure machine and other similar thought experiments. But do not misinterpret me. I am not talking about SL as a mere colourful illustration of some philosophical theory. Reducing SL to PowerPoint material would be as pathetic and unimaginative as using the Matrix to illustrate Putnam’s brains in a vat argument, and as interesting as the garnish around the steak. What I am suggesting is that we should engage seriously with the new phenomenon and try to conceptualise it from within. For once, in order to think differently we may need to think inside the box.

Let me close this invitation to do some serious philosophy about SL with two warnings. One is political. We should never forget that someone is shaping our SL by running the system. A serious ethical investigation might be crucial in order to approach phenomena like SL critically. The other is psychological. We are mental animals, who live most of our lives wrapped in a semantic infosphere, even when this may mean just being obsessed with the local darts team. A virtual world that unleashes our imagination and allows us to be who we like and behave as we wish, where every exploration can be safe because utterly reversible, and (second) life is literally what we make it, is very dangerous. For it could certainly be as addictive as the most powerful drug. Watch out for SL-ics Anonymous.

Luciano Floridi holds the Research Chair in Philosophy of Information at the University of Hertfordshire and is president of the International Association for Computing and Philosophy.

Sci-Phi: Consciousness

Mind the gap

Mathew Iredale

Until recently, consciousness was something of a taboo subject for scientists, most of whom seemed happy to leave such esoteric studies to philosophers and psychoanalysts. It is almost universally agreed that consciousness arises from the activity of the brain, and yet we are still waiting for a simple, satisfactory, brain-centred explanation of consciousness. An unhappy state of affairs, but one that may not be around for much longer, for scientists have finally woken from their slumber: armed with a flood of new discoveries from the neurosciences, they have been able to provide several competing models of consciousness.

One of these models, the Global Workspace Model, has recently gained support from a study by Raphael Gaillard and his colleagues in Paris, who showed that conscious visual information is rapidly and widely distributed across the brain, provoking the synchronised brain activity that is taken by the model to be the hallmark of consciousness.

The Global Workspace Model was first put forward by the cognitive scientist Bernard Baars in the late 1980s. It proposes that at any given time, many modular cerebral networks are active in parallel, processing information in an unconscious manner. Incoming visual information, for example, becomes conscious if and only if the following three conditions are met:

Condition 1: information must be explicitly represented by the neuronal firing of perceptual networks located in the primary visual cortex at the rear of the brain.

Condition 2: this neuronal representation must reach a minimal threshold of duration and intensity in order to bring in a second stage of processing, distributed across the brain’s cortex, and especially involving the prefrontal cortex, which is believed to be a major centre for associating multiple kinds of information.

Condition 3: through joint bottom-up propagation (condition 1) and top-down attentional amplification (condition 2), the ensuing brain-scale neural assembly must “ignite” into a self-sustained reverberant state of coherent activity that involves many neurons distributed throughout the brain.
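For readers who like things schematic, the model's gate on consciousness amounts to a conjunction of the three conditions. The sketch below is only an illustration: the class, the field names and the numeric thresholds are invented for the purpose, not part of the model as the scientists state it.

```python
from dataclasses import dataclass

@dataclass
class VisualSignal:
    explicitly_represented: bool  # condition 1: perceptual networks fire
    duration_ms: float            # condition 2: must reach a minimal duration...
    intensity: float              # ...and intensity (thresholds below are made up)
    ignited: bool                 # condition 3: self-sustained coherent activity

MIN_DURATION_MS = 100.0  # hypothetical threshold, for illustration only
MIN_INTENSITY = 0.5      # hypothetical threshold, for illustration only

def becomes_conscious(s: VisualSignal) -> bool:
    """Toy rendering of the Global Workspace Model's three-way conjunction."""
    cond1 = s.explicitly_represented
    cond2 = s.duration_ms >= MIN_DURATION_MS and s.intensity >= MIN_INTENSITY
    cond3 = s.ignited
    return cond1 and cond2 and cond3
```

The point the caricature preserves is that failing any one condition – a representation too brief, too weak, or one that never “ignites” – leaves the information unconscious.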

Why would this ignited state correspond to a conscious state? Gaillard et al. argue that the key idea behind the workspace model is that because of its massive interconnectivity, the active coherent assembly of workspace neurons can distribute its contents to a great variety of other brain processors, thus making this information globally available. The global workspace model postulates that this global availability of information is what we subjectively experience as a conscious state.

Just how did Gaillard and his colleagues set about measuring the neural signature of the conscious perception of a visual stimulus? Using patients with medically intractable epilepsy who, in preparation for surgery, had multiple shallow recording electrodes implanted within their cerebral cortices to locate seizure activity, they were able to directly record the patients’ neural activity as it happened. According to the scientists, this “unique opportunity” afforded greater spatial and temporal resolution than noninvasive methods used previously to probe the neural basis of consciousness, such as functional magnetic resonance imaging, which can only scan the brain about once every two seconds. They compared neural activity concomitant with conscious and non-conscious processing of words by using a visual masking procedure that allowed them to manipulate the conscious visibility of briefly masked words.

The scientists showed the subjects a computer screen upon which they projected a set of hash marks (which act as a mask) for 400 milliseconds (ms), then a word for 29ms, and then either a blank screen or a set of ampersands (another mask) for 71ms. The entire sequence thus took only half a second, but in both cases the word was registered at the earliest stages of visual processing, as shown by electrical activity in the primary visual cortex, thus meeting the first condition of the global workspace model.
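For concreteness, the trial timeline can be laid out as data, with the timings as reported above (the labels and the little check are mine, not the authors'):

```python
# Stimulus sequence for a masked trial; on unmasked trials the final
# ampersand mask is replaced by a blank screen of the same duration.
MASKED_TRIAL = [
    ("forward mask (hash marks)", 400),   # ms
    ("target word", 29),
    ("backward mask (ampersands)", 71),
]

total_ms = sum(duration for _, duration in MASKED_TRIAL)
print(total_ms)  # 500 -- the full sequence lasts exactly half a second
```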

When the subjects were exposed to a word followed by a second mask, they could only guess at the nature of the word they saw. But when subjects were exposed to a word and the second mask was absent, the word was consciously reportable and readable, so the scientists could compare masked (non-conscious) perception and unmasked (conscious) perception of briefly flashed words.

Non-conscious perception of words elicited short-lasting activity across multiple cortical areas, including parietal and visual areas. In sharp contrast, only consciously perceived words were accompanied by long-lasting effects (>200 ms) across a great variety of cortical sites, with a special involvement of the prefrontal lobes. This sustained pattern of neural activity was characterised by a specific increase of coherence between distant areas, suggesting that conscious perception is broadcast widely across the cortex.

Gaillard and his colleagues do acknowledge one shortcoming with their research, which is that “whenever a subject is conscious, he is necessarily conscious of a given mental content.” Consciousness is “intentional” in form (it is “about” a certain content), and the scientists admit it may be illusory to look for a “pure” form of consciousness independent of its particular contents and of the tasks that initiate it.

When applied to neuroscientific experiments, this implies that when imaging a brain having some conscious experience, one will necessarily observe activations corresponding to a specific conscious content. Can one really extrapolate from this to make generalisations about consciousness as a whole? Gaillard et al. believe that further experiments with other kinds of stimuli are clearly necessary, as they will reveal which late-stage, widespread brain events are common to all conscious processing, and which are specific to the experiment at hand.

Another shortcoming is perhaps more telling. Gaillard’s research joins a growing body of scientific evidence gathered over the last few decades (from such techniques as electroencephalography, positron emission tomography, and functional magnetic resonance imaging) that has helped to show how observable brain activity correlates with our inner feelings of blueness, sadness, pleasure, pain, and all the other subjective qualities that make up our conscious awareness.

But these neural correlates of consciousness, as they are known, do nothing to explain just how it is that a particular group of neurons brings about our feeling of blueness. They do nothing to close what philosophers term the explanatory gap, as Ned Block and Robert Stalnaker explain:

“Suppose that consciousness is identical to a property of the brain – say activity in the pyramidal cells of layer 5 of the cortex involving reverberatory circuits from cortical layer 6 to the thalamus and back to layers 4 and 6 – as Crick and Koch have suggested for visual consciousness. Still, that identity itself calls out for explanation!”

For all its sophistication and advancement upon earlier research, in the end the research of Gaillard and his colleagues does nothing to close the explanatory gap between the scientific explanation of a mental content and the actual experiencing of the content itself.

As the philosopher Uriah Kriegel has put it: “There is a persistent feeling that scientific theories of consciousness do not do much to explain phenomenal consciousness. Moreover, there is a widespread sense that there is something principled about the way in which they fail to do so.”

But it is to this phenomenal side of consciousness that scientists must attend if they are to provide a complete explanation of consciousness. Gaillard and his colleagues have undoubtedly taken us a further step along the path to understanding consciousness, but there is clearly still a long way to go.

Suggested Reading
“Converging Intracranial Markers of Conscious Access” by Gaillard R, Dehaene S, Adam C, Clémenceau S, Hasboun D, et al. PLoS Biology Vol. 7, No. 3, 2009.
“Exploring the ‘Global Workspace’ of Consciousness” by Richard Robinson PLoS Biology Vol. 7, No. 3, 2009.
“Conceptual Analysis, Dualism, and the Explanatory Gap” by Ned Block and Robert Stalnaker, The Philosophical Review, Vol. 108, No. 1, January 1999.

Mathew Iredale’s Sci-Phi column appears every issue in tpm