Monthly Archives: October 2009

Walkie talkies

Julian Baggini takes a celluloid stroll

Slavoj Žižek is down in the dumps

As a breed, philosophers are not exactly cinematic. When they are occasionally captured on camera it’s usually as static talking heads. So when film maker Astra Taylor decided she wanted to make a feature-length documentary about how top philosophers make sense of the world, she “desperately needed a way to actually make a piece of cinema as opposed to just, I don’t know, a radio show or something.” The solution involved changing the traditional formula by just one letter: behold, the walking head.

In Examined Life, Taylor takes nine thinkers on walks in a variety of different settings. We see Peter Singer discussing the ethics of poverty and affluence outside the exclusive boutiques of New York’s Fifth Avenue, Kwame Anthony Appiah talking about cosmopolitanism at Toronto airport, and Slavoj Žižek critiquing ecology at a London rubbish dump.

This isn’t just a practical way of making sure the result was a genuine “movie”. There is a “nice history of philosophers thinking on their feet,” as Taylor explained to me over the phone from New York. She’s thinking about “Aristotle and the peripatetic philosophers, or Socrates walking around Athens and raising hell. Rousseau’s Reveries of the Solitary Walker is one of my favourite books, Nietzsche famously took his walks in the Alps, and there’s Kierkegaard with his melancholy rambling around. And it was also symbolic of this idea to break out of the conventional spaces of academic intellectual discourse. Not just in the sense of taking philosophy to the streets where philosophy isn’t usually found, but revealing the philosophy already in the streets.”

This is Taylor’s second film about philosophy, following up on Žižek!, a portrait of the eccentric Slovenian cultural critic which was a surprise art house and festival hit. Taylor has always been interested in political and ethical thought, right from the age of nine, when she started doing “ideology critique” by interviewing her fellow school kids in order to try to prove that “kids are innate vegetarians”, brainwashed by their family and society into eating meat. It seems a natural part of a hippy upbringing, in which she and her siblings were allowed to decide themselves whether to go to school. She took the option not to, until high school. “There was a maxim in the family: question authority.”

When she left school, Taylor initially continued to pursue these intellectual interests. “By the time I was 19 I went off to the New School, which was attractive to me because of its whole history of having a relationship with the Frankfurt School, critical theory and all this. I had immersed myself at that point in Deleuze and Guattari and A Thousand Plateaus, and I got obsessed with that, and then when I went to the New School my horizons broadened a bit. But there was a moment around 2000 when I just thought I was going off the deep end in an academic sense and I knew it wasn’t my nature to specialise. It didn’t feel right and there was this side of me which was much more interested in what was happening in the world politically.”

Despite having no experience in filmmaking, she managed to “finagle this gig going to make a documentary about infant malnutrition in southern Senegal.” The result was “this very literal-minded social justice film: there are people starving, make a film about it.”

Having caught the film-making bug, Taylor went on to combine it with her philosophical interests in Žižek! “I always thought of the Žižek film as sort of getting away with something. It took me exactly two years from conceiving of it to its premiering, and I thought, well this is my film school.”

There is a link of message as well as medium between the Senegal film and the philosophical ones that followed. The thinkers that interest Taylor are those who deal with the ethical and political issues which connect with the social justice issues that move her.

“You’ve actually touched on my fundamental conflict that I’m always revisiting in such a tired way,” she says when I suggest this. “What is socially-conscious enough, or activist enough, and what is too arty, too self-indulgent and I sort of include philosophy on that spectrum, secretly. There’s a part of me that thinks I should be more on the front line, doing something of use.”

The diverse thinkers in Examined Life all share this concern for the ethical and the political. But originally, the idea for the film didn’t have this clear focus. “The first walk I actually filmed was with Colin McGinn (which will be a DVD extra) and was more about phenomenology and philosophy of mind. That walk turned out quite well: it was on the beach, and because of the subject matter there was a lot more opportunity to play with visuals, look at light bouncing off water, and perception is a visually rich subject to address. Yet my heart wasn’t in it. Ethical conversations are what brought me into this stuff, from childhood, and I knew for me to be able to really commit to doing this project and do it well that had to be the fundamental theme: what are our responsibilities to others? This whole theme of basic interdependence is fundamental.”

The selection of thinkers was a combination of accident and design. Taylor had already met a few of them while working as an intern at the publisher Verso. The questions Taylor asked of potential interviewees were “did they say something that I had to mull over that lingered with me? Did they change my perception of an issue?” It was also important that “they all had to be already committed to the process of taking philosophy to a wider public. It’s not like I’m finding these left-field thinkers who are in their little offices and taking them out for some sunshine.”

What she is doing is presenting people as flesh and blood who are usually encountered solely through their words. Yet Taylor is not sure whether “this whole aspect of embodiment really matters. Does it bring anything to the table? I’m working through this question with these films. I haven’t really decided. Is philosophy a body of knowledge or does it require a body?”

There are certainly times in the film where the physical action does seem to be saying something. Martha Nussbaum, for example, walks so quickly while she talks that you wonder how the camera keeps up with her.

“She’s marching, she’s on an ethical mission, and she’s an unstoppable force,” says Taylor. “When you start reading the body language it’s interesting how it relates to the thought.”

Although Taylor was interested in bringing together “different thinkers looking at similar issues but from different angles”, she was keen that the result would not lead to “a sense of total moral confusion. Sometimes there’s this sense that if we don’t embrace a particular philosophy you’re going to be lost in this quagmire of conflicting viewpoints and moral relativism. I wanted to reveal what I’ve been calling an ethic of intellectual inquiry and political commitment, even if there’s not necessarily a consensus between all the philosophers in the film.”

The idea of filming philosophers on the streets might sound like a too-desperate plea for their relevance. In fact, the result is far more ambiguous. Quite often, there is a disconnect between the walking heads and the world around them that makes their thinking appear quite detached. So you have Peter Singer looking at the luxury shoppers as though he is a baffled anthropologist from Mars. You see Avital Ronell walking around the park talking about transcendental signifiers, with the camera going past people on benches, headphones plugged in, oblivious to this academic world of ideas. And you see Cornel West in the back of a car, talking about the people walking around with nothing going on in their heads.

“I had to include that moment with Cornel West,” says Taylor. “It was quite funny when he appeared with the film in New York, one of the first things he did was recall that scene and cringe and apologise for it. There are moments when they’re really connecting to the environment and then moments when you just realise they’ve got this whole world in their heads and it doesn’t matter what’s going on beyond them.

“Is philosophy connected, is it disconnected? That’s a question I’m still unresolved about. Should we be asking for relevance or for connection from philosophy? I’m not sure that’s a just demand. The fact that I’m still undecided on certain questions is what’s motivating me.”

It might also be what motivates people to go and see the film.

The DVD of Examined Life has already been released in the USA. The film will be screened at the ICA in London from 20-30 November and released on DVD in the UK in February. The companion book, containing full interview transcripts, is published by New Press.

How to be agnostic

Mark Vernon argues against atheism and belief

I used to be a priest in the Church of England. Then – to cut a long story short – a few years later, I left, an atheist. The transition came about partly as a result of disillusionment with the church and its conflicts over issues in sexuality and gender; and partly, at a more intellectual level, because I started to read humanist philosophers. There I found a mixture of scepticism about the existence of God, deconstruction of the power games inherent in theology, and the rationalist call to “Grow Up!”. I lost my faith, though it felt like a liberation.

But then, something else unexpected happened. I found I was actually becoming an agnostic. Over time, I came to feel that the triumphalism that too often seems to be part and parcel of atheism entails a poverty of spirit that is detrimental to our humanity. It tends to ignore or ridicule the “big” questions of life – those questions of existence that are natural to ask, if never finding conclusive answers – for fear of letting theology in through the back door. Plus, I came to think that whether or not God exists is an open question, having pondered the arguments for and against several times over. And that keeping it open, rather than trying to find a knockout blow one way or another, is key. This is because, for all religion’s ills, for all its irrationality, religious traditions preserve a way of life that human beings are the poorer without. As Bernard Williams put it: “That religion can be a nasty business is a fact built into any religion worth worrying about, and that is one reason why it has seemed to so many people the only adequate response to the nasty business that everything is.”

My agnosticism gradually became more committed and passionate. It seemed to me to embody an attitude to life that is severely, even dangerously, lacking in public life. Think of the endless skirmishes between science and religion. They are at best a cul-de-sac, and at worst a risky self-indulgence. They are a cul-de-sac in the same way that arguing about whether God exists only goes round and round in circles. They are dangerous because, by forcing people to take sides, they push them to fundamentalist extremes – whether based on religious or scientific dogma. This rides roughshod over the intellectual ground that is genuinely fascinating, humanly enriching, and socially essential: the places where science and religion reach the respective limits of their understanding and meet. The militant atheist, like the fundamentalist believer, tries to rubbish such engagement because it offends their faith that science, or religion, can and should say it all.

The question, though, is how can such an agnosticism be fleshed out? Can it be made to bear intellectual weight, and not just be reduced to a sophisticated shrug of the shoulders?

Consider a comparison between two agnostic figures in the history of philosophy, Bertrand Russell and Socrates. That Russell was an agnostic, though one who was “atheistically inclined”, we can take as right because it was what he called himself. He described the position saying: “An agnostic is a man who thinks that it is impossible to know the truth in matters such as God and a future life with which the Christian religion and other religions are concerned. Or, if not for ever impossible, at any rate impossible at present.”

Socrates is an agnostic figure for a different reason. From what can be gleaned via Plato, he came to understand that the key to wisdom is not being able to prove beliefs, but understanding the extent of your ignorance. He was agnostic in not assenting to philosophical systems, and instead went around ancient Athens asking awkward questions. For him, the importance of reason was not that it could potentially understand all but that it exposed the limitations of all our understanding. Hence the word philosopher was invented for him; he was a lover of wisdom that is desired precisely because it is lacked. Philosophy, it seems, was not about establishing truths. And its aim of thinking clearly was a means to an end. That end was a matter of learning what it is to be an “in between” animal – in between the brute ignorance of the beasts, and the true wisdom of the gods: ignorant but not pig ignorant.

Now, Socrates differs from Russell in all sorts of ways, of course. However, their different “agnosticisms” are illuminating. For if Russell’s agnosticism made him tend towards atheism, Socrates’ agnosticism made him want to hold onto god-talk and religious practice: it did not leave him “atheistically inclined”. Quite the opposite, in fact. It intensified his sense of what it was to be religious. For in theology, “god-talk”, he found the perfect reflection of human uncertainty, since matters divine are nothing if not ultimately unknown. It richly reflected his conviction of the “in between” status of human beings. Unlike that of the orthodox believer, Socrates’ uncertain attitude undermines any certain beliefs. But equally, unlike that of the committed atheist (or “near” atheist), his agnostic sensibility remains open to what god-talk might reveal about being human. Socrates’ agnosticism lies at the heart of his fascination with the big questions of how to live and where to find meaning in life.

This religiously-inclined agnosticism can be illuminated further by reconsidering one of the most common arguments in the theist/atheist debate – that of whether the basis for morality is morality itself or God. It originates in one of Socrates’ most famous theological arguments, found in the dialogue the Euthyphro. Plato tells the story of Socrates’ conversation with a young man, after whom the dialogue is named. Euthyphro had come to the Athenian courts to prosecute a charge of murder, and no ordinary murder, but one allegedly committed by his father. What is even more startling about the case is that the person whom his father had supposedly killed was a slave. The sequence of events was that this slave had himself killed another slave in a drunken rage, Euthyphro’s father had bound the offender and dumped him in a ditch, and had then forgotten about him; left there, the slave died of exposure. Euthyphro is a puritanical young man who feels his father must be brought to justice to cleanse what he considers to be a stain on his family. And this is what interests Socrates. Socrates thinks that for Euthyphro to pursue such a headline-grabbing case, he must be very sure that the moral benefit he would gain from the prosecution would not be outweighed by the offence of dishonouring his father. In short, Euthyphro is acting dogmatically – as if he has very certain knowledge of what it means to be pious.

Euthyphro argues that he is right to prosecute his father because he believes that the gods denounce murderous acts. This is what makes the crime so bad. Socrates is fascinated by this assumption too. In it, he sees a more general thesis: what is good is what the gods love. And, conversely, what is wrong is what the gods hate. Moreover, thinks Socrates, this thesis raises a wider question still. Is what is good loved by the gods because it is good, or is it good because it is loved by the gods?

The reason this dilemma is remembered is that it is taken to be profoundly undermining of theistic belief. The good is good because it is good, not because of any feelings someone, even a god, might have for it. So it suggests that what is good is prior to anything a deity may say, which not only implies that the deity is subject to something over which it has no say, but that morally speaking we do not need theism to tell us what is good.

The standard reply to this challenge is that God is goodness itself. The atheist’s argument is flawed, theists say, because it suggests that there is some kind of separation between the virtue and the being of the divinity which in the case of God there is not. But, replies the atheist, you cannot escape the fact that you say God is good because God has the properties of goodness. In which case, you should be able to list the properties of goodness without reference to God. And so the argument goes round and round.

What is interesting about the original account of it in the Euthyphro, though, is that Socrates does not pose any arguments like this at all. It apparently never occurs to him, or Euthyphro, that the dilemma is a challenge to the gods. This could be put down to a number of things. Perhaps the pressing matter in the dialogue is not whether the gods exist but whether Euthyphro should prosecute his father; however, the conversation broadens out in other ways, so why not in this direction? Alternatively, it might be thought that Socrates lived in a society in which the existence of the gods was basically beyond question; ancient Athenians did not experience the world as disenchanted in the way that we, it is said, do today. But agnostic and atheistic ideas did circulate in fifth-century Athens, so it is significant that Plato does not choose to make something of them here.

I think that Socrates does not see the dilemma as troubling vis-à-vis the gods because of his conviction about the “in between” nature of the human condition. This implies, first, that he thinks that no one, with any seriousness, can presume to know what may or may not cause a divinity a sleepless night. And, second, it implies that what is far more obvious to him is that the dilemma should be troubling to human beings. Whatever it may be to be a god, it is human beings who must grapple with what it means to be good, not the gods. (Socrates would also be concerned about the assumption behind the modern atheist’s reply that the properties of goodness can be listed. What are these properties of goodness, he would ask: tell me that and you are a wiser man than I.)

So what does this suggest about Socrates’ approach to theology and how it connects with his philosophical way of life? First, it implies that Socrates was not very interested in debates about whether gods exist or not. Perhaps he suspected that when conducted as a knock-out between a theist and an atheist they go nowhere fast. Having said that, he was interested in theological debate: if god-talk can avoid getting hung up on “proofs”, then it can become a way of critiquing human knowledge. Examining what people take to be divine is valuable because it reminds them that they are made lower than gods and that aspirations to god-like knowledge will remain just that – aspirations.

Further, this attitude itself becomes a valuable source of insight, for its humility is a sign of having embraced the human condition. With it, the vain attempt to overcome human limits is ditched, and the challenge to understand is taken on. And this, in turn, is what makes life worthwhile. It produces the best kind of human beings, people who are not merely ignorant, but recognise the ways in which they are. To this extent, they become wise and lovers of wisdom.

What are the implications of this for the debate between theists and atheists today; what might the agnostic’s contribution be? The 15th-century cardinal and philosopher Nicholas of Cusa provides some suggestions. His best-known work was entitled De Docta Ignorantia, “Of Learned Ignorance”. In it he pointed out that wise people from Solomon to Socrates realised that the most interesting things are difficult and unexplainable in words, and that they know nothing except that they do not know. How, then, are we to interpret human beings’ desire to know nonetheless? The answer is that we desire to know that we do not know. This is the great challenge of the intellect:

“If we can fully attain unto this [knowledge of our ignorance], we will attain unto learned ignorance. For a man – even one very well versed in learning – will attain unto nothing more perfect than to be found to be most learned in the ignorance which is distinctively his. The more he knows that he is unknowing, the more learned he will be.”

In this learning, one learns something about what one does not know, as it were. Nicholas was a Platonist and so expressed this with the thought that truth is unitary, simple and absolute – and this is why it is unknowable: insofar as we know anything, human beings know in ways that are always multiple, complex and relative. The nature of human knowledge, therefore, is that it always results in contradictions. Which is not surprising, since it is in the coincidentia oppositorum – the realm in which all contradictions meet – that the divinity would dwell.

Nicholas’ words carry challenging implications for atheists and theists alike. For atheists, he makes the point that whatever conception of God they reject, they must allow that image to be the most perfect thing possible. Otherwise, they are lambasting not God but mere idols, something the believer would want to cast down too. For theists, he emphasises that they need the “sacred ignorance” of negative theology to remember that God is ineffable. He concludes that strictly speaking God is known neither in this life, nor in the life to come, since, God being infinite, only infinity can comprehend itself. “The precise truth shines incomprehensibly within the darkness of our ignorance,” is a typically paradoxical formulation of his message.

So much for that. But I think the agnostic contribution to these debates is not merely academic. It matters because today we live in a culture with a lust for certainty. Dogmatic science would have us believe that it has all the answers and can feed us body and soul. Religion, too, is being hijacked by a conservatism that turns the quest for the unknown God into a feel-good experience on a Sunday morning. Agnosticism matters because it rejects an equal and opposite militant atheism or fundamentalist retreat. Daniel J. Boorstin put it well: “I have observed that the world has suffered far less from ignorance than from pretensions to knowledge. It is not skeptics or explorers but fanatics and ideologues who menace decency and progress.”

Further, if science has limits, as only the dogmatist denies, it is an ever-curious agnosticism that best expresses wonder at the world. Ultimately, it does not seek to explain everything but to nurture a piety towards creation. This agnosticism too understands the religious quest not as the imposition of answers, but as the pursuit of connections and questions. It is not just those individuals disillusioned with dogmatic science and strident religion who might turn to Socrates for clues as to how to be agnostic. Our flourishing as human beings, as “in between” creatures, needs this tradition too.

Mark Vernon’s latest book is Plato’s Podcasts.

Review: Kierkegaard, Metaphysics and Political Theory

Kierkegaard, Metaphysics and Political Theory: Unfinished Selves by Alison Assiter (Continuum) £65 (hb)

Søren Kierkegaard

While there has been no shortage of secondary literature on the work of Søren Kierkegaard, until quite recently the general consensus amongst scholars seems to have been that the writings of this troubled religious thinker have little to offer contemporary debates in social and political thought.

Instead, Kierkegaard has long been considered the father of modern existentialism, and, following the critique of Emmanuel Levinas and others, is often thought to offer little more than a philosophy of the drastically individual and asocial human subject, for whom truth is nothing but a matter of subjective decision. Rather than offering a social and political critique which pre-figures contemporary critical theory, Kierkegaard’s philosophy has often been seen as an existential analysis of personal religious salvation.

Recently, however, a number of texts have begun to appear seeking to recover Kierkegaard as a potential resource for critical social theory. Alison Assiter’s Kierkegaard, Metaphysics and Political Theory: Unfinished Selves is the most recent intervention in this growing body of social and political Kierkegaard scholarship.

Whereas the majority of work on Kierkegaard proceeds by placing his thought in the context of recent European philosophy, Assiter instead considers Kierkegaard in relation to the metaphysical assumptions underlying the liberal political tradition. Of primary concern for this project is a critique of the conception of the self Assiter identifies as underlying the broadly Rawlsian liberal political project. The secondary concern is to construct an alternative conception of the self to underlie both liberal theory and the accompanying liberal human rights tradition.

Assiter’s analysis begins by drawing out the implicit metaphysical picture of the person underlying the liberal tradition, which is that of an autonomous, self-interested and rational individual. Assiter wants to oppose this liberal conception of the self with a self that is embodied, connected with others, needy, and loving. Rather than attempting to critique the liberal human rights tradition as such, Assiter seeks to replace the underlying picture of the self at the heart of this tradition.

Assiter’s primary critique of Rawls and the liberal tradition targets the inherent sense of reason and justice it takes to be possessed by all rational individuals. The underside of this assumption is that those who fall outside the community of the reasonable can be regarded as evil, and subsequently excluded from the community of those with reason and rights. In this tradition, the individual precedes all communal and collective relations, and the mad or unreasonable person can be excluded.

Against this individualism, Assiter claims that there is a common nature to all humans which precedes the emergence of the individual. The import of this point is that no one is excluded from the human community on the grounds of being mad or without reason. Rather, Assiter argues for an original and universal community which pre-figures any individualism. Rather than grounding morality in universal reason, Assiter wants to make love the metaphysical foundation of morality and ethics.

Indeed, Kierkegaard’s name can sometimes seem out of place in the title of this work, since Assiter only deals explicitly with his thought in the final chapters.

Assiter avoids engaging with the religious tropes present in the work of Kierkegaard and instead attempts to naturalise his political and social thought. Whereas Kierkegaard theorises love as a relation between the human and the divine, Assiter removes this divine-human relationship and instead makes humanity’s self-love the primary form of loving existence. While useful for making Kierkegaard more palatable to secular political theory, Assiter too easily glosses over the subtle complexity of Kierkegaard’s vision of love.

Rather than advocating either a purely supernatural or natural view of the human, Kierkegaard offers a paradoxical theory by which the human, and love, occur at the point of intersection between the supernatural and natural, or, infinite and finite. Assiter seems to miss this ontological subtlety all too easily, and subsequently fails to realise the relational and socio-political potential of Kierkegaard’s paradoxical theorisation of reality.

Another problem in the work is Assiter’s confusing use of terminology. While she frames the work around developing an alternative metaphysic of the self, it seems that what she is developing is rather an alternative anthropology, and the work as a whole would be much more effective if the key issue were framed in explicitly anthropological terms.

While this book will be of interest to those looking to push beyond the individualism of liberal political theory, it will likely disappoint those who expect a full-scale study of Kierkegaard in relation to metaphysics and politics. While only hinting at the political potential within the work of Kierkegaard, Assiter nonetheless opens up the space for further research into this untapped socio-political potential within his work.

Michael O’Neill Burns is a PhD Student in philosophy at the University of Dundee


Stoics might not have been so stoical if they’d had bloggers to deal with, says Ophelia Benson

Alain de Botton

Seneca remarked in “On Firmness” (subtitled “the wise man can receive neither injury nor insult”) that Socrates “took in good part the published and acted gibes directed against him in comedies.” That kind of thing takes a lot of living up to. Alain de Botton praised Seneca in his 2000 book The Consolations of Philosophy and his 2001 TV series Philosophy: A Guide to Happiness; the latter included an episode titled “Seneca on Anger”. Yet de Botton has lately revealed that he is capable of an occasional moment of pique himself.

Nina Power of Infinite Thought wrote a very funny and not entirely admiring post about de Botton’s The Pleasures and Sorrows of Work last February, taking particular exception to the blurb, which described him as “intrigued” by work’s pleasures and pains.

“The man is…intrigued?! What, like a captivity-raised squirrel suddenly let out into the world for the first time, little sparkly opal eyes blinking at the overwhelming wonder and diversity of it all? Gasping at the, ahem, ‘sheer strangeness’ of the modern workplace?! I’m sure cleaners setting off on the 472 at 4am to get the first tube to Canary Wharf find their pitiful paycheck ‘strange’ and ‘beautiful’.”

De Botton sent Power a furious email, which started “Your latest blog makes my blood boil” and ended “I don’t know what you think you’re doing writing such blogs other than adding to the not already inconsiderable sum of human misery. If you’ve got any honesty or sincerity, you’ll take the post down immediately and if you’ve got a trace of courage, you’ll reply to this email and confront me as one person.” Power replied politely but trenchantly, saying interesting things about class and privilege and who gets to write for newspapers and magazines, ending “If indeed the email I received really was from the real Alain de Botton, I’m very surprised that you would be bothered by such a post, let alone write to its author!” De Botton replied more cordially and asked Power to publish their correspondence, which she did.

In April, the day de Botton’s book was published, an anonymous blogger who works in a bookshop wrote a mildly critical paragraph about him in a post largely about a book by Paco Underhill. De Botton wrote the first comment, ending “How dare you suggest for even a second that I don’t know about hard work. It’s 10 o’clock at night, I’ve been up since 5.30am working my guts out – blogs like yours make me want to be sick, you ignorant vindictive and mean-spirited person.” Subsequent comments from other people expressed a certain surprise at de Botton’s tone.

Two days later, Stephen Law wrote a brief facetious post about the book. De Botton commented again. “Stephen, your blog is normally great but this time, you’ve really hit a low. For a start, you write about a book you haven’t even read and imagine what it is…You’re such a fool to blunder in like this.”

Law apologised, then on further consideration added a few points in his defence, while still expressing regret.

Then, in June, the journalist and critic Caleb Crain wrote a review of The Pleasures and Sorrows of Work for The New York Times Book Review, and posted a link to the review on his blog Steamboats Are Ruining Everything, saying regretfully that he wasn’t crazy about it and adding that he had written a favourable review of de Botton’s earlier How Proust Can Change Your Life. But that was not enough to save him. A few days later de Botton pounced. Sadly for him, what he said is now notorious.

“Caleb, you make it sound on your blog that your review is somehow a sane and fair assessment. In my eyes, and all those who have read it with anything like impartiality, it is a review driven by an almost manic desire to bad-mouth and perversely depreciate anything of value. The accusations you level at me are simply extraordinary. I genuinely hope that you will find yourself on the receiving end of such a daft review some time very soon – so that you can grow up and start to take some responsibility for your work as a reviewer. You have now killed my book in the United States, nothing short of that … I will hate you till the day I die and wish you nothing but ill will in every career move you make. I will be watching with interest and schadenfreude.”

A few days later the Telegraph ran a story on the dust-up, and the day after that the literary journalist Edward Champion posted a Q&A with de Botton, first asking if he had indeed posted the comments on Crain’s blog. “He confirmed that he had, and he felt very bad about his outburst.” De Botton explained his frustration: his intention was to “write a book that would open our eyes to the beauty, complexity, banality and occasional horror of the working world,” partly in order to correct for its absence from literature, and Crain’s review suggested “that I was interested rather in patronising and insulting people who had jobs and that I was mocking anyone who worked.” He added that he wished he had put his response in an envelope, not on the internet. Perhaps he also wishes he had remembered his Seneca, if not his Socrates.


Anthony Cox on how fear of chimeras is interfering with a rational assessment of DNA research. (NB: This article was first published in Spring 2007.)

When H G Wells wrote The Island of Doctor Moreau, the British scientific community was embroiled in a debate about vivisection. His novel played on those contemporary concerns, with Moreau’s animal victims transmogrified into humanlike monsters. When film producers revisited the story in 1996, Marlon Brando’s Dr Moreau had moved with the times, tampering with his animals’ DNA in order to create his creatures. In the past few weeks, a controversy about so-called “Frankenbunnies” has sparked a new debate about the relatively uncharted ethics surrounding genetic science.

The revolution in genetic science came to wider public attention after the cloning of Dolly the sheep in 1996, but has yet to deliver tangible benefits to the public. However, some researchers believe that the key to understanding the pathology of degenerative diseases, and eventually finding potential cures, lies with embryonic stem cell research. It is suggested that the research may help future sufferers of conditions like Alzheimer’s disease, Parkinson’s disease, cystic fibrosis, motor neurone disease and Huntington’s disease.

It is the nature of this research that has sparked a public debate about ethics in genetic research. Human embryos are in short supply, and there are obvious ethical concerns about experimenting on fertilised human embryos. For this reason, scientists have focused on ways to create stocks of similar embryos using non-embryonic human genetic material in animal embryo structures. Animal embryos are readily available as a by-product of the food industry.

In brief, the animal embryo (rabbit or cow) is cleared of its resident genetic material, and human genetic material is inserted. The resultant embryo is 99.9% human, with only a residual amount of animal genetic material remaining in the cell structure. The egg is then stimulated to create stem cells for research. After the stem cells are extracted, and before it reaches 14 days of age, the embryo is destroyed. These embryos are called human-to-animal hybrids, or cybrids – a term considered more accurate by scientists in the field, since it avoids the suggestion that a true hybrid organism is being created: only 0.1% of the embryo would be animal, only a few cells would be created, and there is no intent to produce a viable foetus which would become a hybrid “animal”.

Despite the UK government’s generally positive views on stem cell research, in stark contrast to the debate in the United States, they initially blanched at the idea of human-to-animal hybrids. In the process of updating the sixteen-year-old Human Fertilisation and Embryology Act, the government produced a White Paper in December 2006, which appeared to propose outlawing research using “hybrid embryos”.

This aversion sprang from what the government termed “considerable public unease with the possible creation of embryos combining human and animal material”. The considerable public unease appeared to consist of a number of responses to a Human Fertilisation and Embryology Authority (HFEA) consultation, which had only briefly considered the ethical issues concerning cybrids. Critics of the White Paper, such as the Liberal Democrat MP Evan Harris, argued that the consultation gave a misleading impression of the strength of opposition, and was more reflective of a well-organised lobby against such research than of widespread public opposition. In addition, those involved in this form of embryonic research submitted a joint statement to the consultation process, thus diminishing their impact to one voice amongst seemingly many.

Nor were signals from government sources reassuring for the scientists involved, with a Department of Health spokesman stating that “previous research in this area shows ongoing and widespread support for a ban on creating human-animal hybrids and chimeras for research purposes”. However, despite such initial statements, under questioning Prime Minister Tony Blair noted that “If there’s research that’s going to help people then we want to see it go forward.”

By the time the HFEA was due to deliver its widely expected rejection of two applications for research involving cybrids, the mood had changed. The HFEA noted that cybrids would potentially fall within its remit to regulate and license, and that such research was not prohibited by current legislation, and argued for a full and proper debate on the issue of cybrids.

So what are some of the key issues? One is the “Yuck” factor: an almost instinctive hostility to the mixing of species. One of the opponents of cybrid research, Josephine Quintavalle of Comment on Reproductive Ethics (CORE), has suggested this has made the anti-cybrid argument easier to make: “There’s the Yuck factor. There is an innate repugnance that we would mix the species in this way.” Even the UK scientific community, perhaps pragmatically, accepts the strength of this feeling, since it maintains a self-imposed ban on the injection of human stem cells into developing embryos of another species.

One explanation ethicists put forward for the Yuck factor is that the creation of interspecies creatures evokes the same feelings as bestiality, widely considered immoral, and some may see the erotic mixing of species as directly analogous to biotechnological mixing. Another, perhaps more plausible, explanation lies in the concepts of boundaries that humans create to order their world, and the taboos that operate to avoid mixing items from distinct categories. By being neither human nor animal, cybrids threaten such social and moral concepts and boundaries – which are what set mankind apart from other creatures. They become an abomination and threaten our human identity. Andrew Ferguson of the Christian Fellowship puts it thus: “We are creating a being that is not completely human. We should not alter the whole future of what it means to be human. We should not blur the distinction that’s been there in nature since the dawn of time”.

What is the nature of the cybrids produced? If they are 99.9% human, and 0.1% animal, then are they part animal and part human, or do we have to place them within one category? This is important, since some argue that the human-to-animal hybrids that are created have rights in themselves – rights we would generally apply only to those in the human camp. Andrew Ferguson argues that 99.9%-human embryos should be considered human, and therefore that the ending of such a life to obtain stem cells is unethical. In the case of a debate about the nature of the embryo, he claims “we should give him or her the benefit of the doubt”. Paradoxically, the stress that pro-cybrid researchers put on minimising the animal genetic content of the embryo, presumably in order to reduce the Yuck factor, actually strengthens this argument. Dr Calum MacKellar of the Scottish Council on Human Bioethics has also stated that an animal-human embryo is “not just a pile of cells, but [has] a special moral status as a human person.”

The counter-argument to this is that elevating cybrid embryos to the status of human persons with rights is emotive, and that cybrids are created purely with the intent of harvesting stem cells. They are not intended to be viable embryos for the creation of an organism, nor have they been created from viable human embryos.

Crossing interspecies boundaries in other ways has proved less problematic. Previous animal hybrid work has produced such cross-species creatures as the geep (a cross between a sheep and a goat), and the public have been largely unconcerned by the sight of a mouse with a human ear grown on its back, or for that matter by medical xenotransplantation. The use of replacement pig heart valves in those with heart valve defects is now routine, and is not generally opposed by those who are concerned about cybrid research. However, none of these examples threatens our understanding of human identity, or raises issues of new moral obligations.

Another ethical issue is the value of such research to human health. In the debate following the potential HFEA ban, both the Medical Research Council and the Wellcome Trust have backed the validity of cybrid research, suggesting it offers the possibility of significant improvements in the treatment of disease. A group of more than 40 leading UK doctors, scientists, ethicists, and politicians wrote to The Times on the 10th of January 2007 arguing that there were “clear potential benefits to human health” from this line of research. Opponents are less sure that such benefits will accrue.

Stephen Minger, director of the stem cell biology laboratory at King’s College London, is optimistic that the consultation will give scientists the opportunity to explain the underlying science, and why the development of animal-human hybrids for stem cell research regulated by the authority is essential. Nor are researchers opposed to greater regulation, since the field is currently a regulatory void; they welcome an informed debate.

Despite the Frankenbunny headlines, the debate appears to have moved in favour of those who think such research is warranted. David King, the Government’s chief scientific adviser, argued that such research should be allowed under tight control; when put under pressure at the Parliamentary Science Committee, Caroline Flint, Minister of State for Public Health, suggested that her department’s position had been misrepresented. Acknowledging the changing climate she stated, “What is emerging now—which I think is positive—is possibly far more science engagement on this issue and more ideas and evidence coming forward as to developments than was provided at the time of the consultation.”

Anthony Cox is a pharmacovigilance pharmacist who works at the West Midlands Centre for Adverse Drug Reactions, and also as a tutor at Aston University’s School of Pharmacy. He blogs at Black Triangle.

The birth of a killer

Daniel Shaw laments the reduction of Hannibal Lecter

Given the advancing age of Anthony Hopkins, and the continuing success of the Hannibal Lecter series, the third sequel had to be a prequel. Hannibal Rising, however, is the only film in the series to have lost money. While Lecter’s power over others, and over the environs in which he moves, is still as complete as in the first three films, something crucial has been lost. Has author and screenwriter Thomas Harris reduced Lecter to a psychological case study here? In my view, a figure like Hannibal Lecter is made to seem less powerful if he is seen as not being in control of his actions. This connects up with the contention of Friedrich Nietzsche that control over self is one of the most fundamental senses of power.

Hannibal Rising chronicles Lecter’s journey from ordinary boy to serial killer. As alluded to in Hannibal, Lecter’s beloved sister Mischa was killed and eaten by foragers in Lithuania at the end of World War II. This incident, coupled with the deaths of his parents and tutor in a Nazi Stuka attack, so traumatised young Hannibal that he blocked out the memory of the event. He would not speak for years, and was tormented by nightmares that moved him to scream out his sister’s name in the dead of night.

The teenage Hannibal (Gaspard Ulliel) is clearly suffering from several extremely neurotic symptoms. In classic Freudian fashion, he has repressed the memory of a particularly traumatic event (although he still recalls the death of his parents and tutor), and that repressed trauma has given rise to aphonia and repetitive nightmares which rehearse it over and over again. In Hannibal Rising, we are given to understand that he is transformed into a psychopathic killer by these events.

Having escaped being eaten himself, Hannibal was taken by the Russians, and raised in an orphanage that they established in Lecter castle. Preternaturally strong, he was a troublemaker who beat up bullies that picked on small children. It is Hannibal’s great good fortune to be saved from this hell-hole by an uncle and his exotic wife, and be given the chance to live in comfort in Paris.

The strikingly beautiful Lady Murasaki (Gong Li) takes Hannibal under her wing, and there is more than a hint of romantic tension between them from the very start. He commits his first murder as a gesture of chivalry, when he guts a crass butcher who made lewd remarks to her at an outdoor market. He then launches on a vendetta to keep his promise to Mischa and avenge her death. Against all odds, he succeeds in tracking down her killers one by one, exacting his revenge in a progressively more brutal fashion.

In the course of this murder spree, Lecter is depicted as becoming less and less human. He seems, at one point, to love Lady Murasaki, but his passion for her recedes, and he succumbs neither to her emotional pleas nor to her seduction. She wants him to promise to turn the rest of the foragers (who are wanted for war crimes) over to the authorities, but he protests that his promise to Mischa must take precedence. Later, she notices that he has grown erotically indifferent to her presence. Despairing of ever reaching him again, she returns to her former home in Hiroshima, Japan.

In the rest of the series, Thomas Harris had successfully resisted the temptation to explain Lecter’s behaviour, until he raised the curtain on Lecter’s memory palace in the second half of Hannibal. But Harris embraces the classic psychoanalytic model of behavioural explanation in Hannibal Rising. For instance, when Lady Murasaki says, “I fold cranes for your soul, Hannibal. You are drawn into the dark,” Lecter responds, “Not drawn. When I couldn’t speak I was not drawn into silence, silence captured me.”

I take Hannibal to be saying that he was not responsible for his aphonia, nor for his murderous vendetta. Rather, the role of avenging angel had been thrust upon him by circumstances beyond his control, and he seems to have little choice but to carry it out. This diminishes Hannibal in the eyes of the audience. If he is merely a psychotic, with no choice but to do what he does, then he is to be pitied, not blamed for his actions. As soon as you start pitying Hannibal Lecter, the spell is broken and he ceases to be mesmerising. If he can’t control himself, he appears to be much less powerful as a result.

In Hannibal Rising (as in Hannibal), Lecter’s victims deserve to die for their heinous war crimes, and he can be blamed only for taking the law into his own hands (except in the case of the greasy butcher, whose punishment for insulting Lady Murasaki was indeed excessive). Yet Lecter still has the power to fascinate us, for several reasons. His murders are strikingly creative, exhibiting his signature aesthetic sensibility (which captivated audiences in Silence of the Lambs). Lecter slices the butcher in a fashion reminiscent of the original insult to Lady Murasaki (“Is it true that your pussy runs crossways?”). He beheads the first war criminal with a rope pulled by his beloved childhood horse Cesar (making a brochette from his cheeks), and in the set piece of the film, he drowns the second one in a vat of formaldehyde while Inspector Popil quizzes him about his other victims.

The fact that Lady Murasaki assists with his vendetta also helps to secure our sympathy. But she is increasingly appalled by the ferocity of these slayings. By the time Hannibal is slashing a big “M” into Grutas’ chest, she has totally given up, responding to his declaration of love by saying, “What is left in you to love?” This is what precipitates her return to Japan.

She has come to believe that Hannibal is an inhuman (and out of control) psycho killer. But nothing Lecter has yet accomplished can compare to slaying an unskilled flute player for marring performances by the Baltimore Symphony, or feeding a former patient his own face for making a homosexual pass at the good doctor. Carving an “M” into the chest of Grutas is humanly understandable, since he had falsely claimed that Lecter relished the soup they made out of Mischa.

In a sense, the climax of the film is classically heroic. Several members of the gang had taken Lady Murasaki captive and were holding her on Grutas’ houseboat. Hannibal rescued her and killed them all, saved from being shot in the back by her ritual short sword (which he carried in a sheath between his shoulder blades). In the conclusion of the film, he drives off, having completed his vendetta by tracking down the last of the foragers in Canada.

But Hannibal has become too easy to sum up, as Inspector Popil (Dominic West) does after learning that he had eaten the cheeks of his second victim. “We must arrest him, and the Court has to declare him insane,” he says. “Then, in the hospital, the doctors can study him and find out what he is…the little boy Hannibal died in 1944 out in that snow … his heart died with Mischa. What he is now, there is no word for it … except monster.” He had been called insane before, but never did the epithet ring so true. Consequently, he becomes a less empathetic (because less powerful) character in the process.

In scenes before his family was slain, Hannibal is a loving brother and obedient son, with an IQ that is off the charts and a fascination for mathematical proofs. There isn’t a hint of his later sadism and brutality. He even loves animals, playing with the swans in the castle moat and bringing carrots to the family horse. He tenderly nurses his sick sister, pre-chewing a stale crust of bread before putting it in her mouth. He is shown adopting his cannibalistic ways as a particularly fitting punishment for the first target of his vendetta.

After witnessing something so violating that he had to block it out of his conscious mind, Hannibal becomes an entirely different person. He loses all moral sense, dedicating his genius to a life of crime. He also seems to lose the ability to love, turning down the virtually irresistible Lady Murasaki in favour of completing his merciless task. He doesn’t regain that ability until he reunites with Starling in Hannibal.

In both Silence and Red Dragon, he relishes inflicting pain on others for the feeling of power that it gives him. This willingness to choose to do what most people consider to be evil stems from his conviction that ultimately, in a world without a perfect creator God, everything is permitted and nothing is extolled. While denying deterministic behaviourism, he also denies the existence of any absolute moral standards (by virtue of which one can praise some actions as good and condemn others as evil).

The Lecter of Silence would have no truck with the liberal tendency to explain away criminal behaviour as the necessary result of heredity and environment. His atheism, arrived at when his fervent prayers for Mischa’s life went unanswered, gives him free rein to do as he pleases. But one thing is clear here: part of the pleasure that he takes in doing evil is his conviction that he is choosing to do so. He is not simply a Satanic figure, forced to make evil his good and do his part in God’s Divine Plan, or a psycho-killer compelled to go on a blind rampage.

But the Lecter of Hannibal Rising now seems to be the victim of a traumatic childhood. He is shown to become progressively less in control of his actions. His growing lack of interest in Lady Murasaki is a symptom of this. Furthermore, while most of his vendetta is fuelled by blind rage, his final execution is cold-blooded. By the end of the novel (and the film) he is no longer an avenging angel, but a serial killer who relishes murder for its own sake, considering it to be a form of superior entertainment preferable to the mundane activities most people enjoy.

In providing such a detailed and plausible explanation for his (now apparently compulsive) behaviour, author Thomas Harris has made Lecter more sympathetic, but less empathetic. I can no longer feel with a character that is a compulsive psychotic, in large measure because I believe that I have the capacity to freely choose at least some of my actions. I can still feel for Lecter, all the more so because his pitiful past is now seen to be the cause of his criminal present. But what I feel for him is pity, not awe or grudging esteem anymore.

Fact or fancy, our belief that we have control of (at least some of) our choices is at the heart of our sense of self-confidence. This belief, however self-deceptive, makes it easier to sustain our conviction that what we do matters, in part because we could have done otherwise. While Nietzsche waffled about whether we could indeed exert such control, he was univocal in his contention that the belief that we can makes it more likely that we will take our actions seriously as making a difference in our lives. At his most cynical, Nietzsche characterises the illusion of free will as one of the lies we need to tell ourselves in order to survive.

Something is lost in transforming Lecter from antagonist to protagonist. Lecter is the hero of Hannibal Rising, and we are clearly meant to root for him as he picks off his former tormenters one by one. But in the process, he has become a less awesome figure. For one thing, we are not appalled by his activities. His victims deserve their fates, and are depicted as virtually subhuman. Unlike Lady Murasaki, we do not shrink from Hannibal as he enacts his revenge. If anything, I felt for him when he turned down a life with her to complete his vendetta, for he seemed to have no choice but to do so.

But, again, this is part of the problem. Lecter is not a sympathetic figure in Silence, despite the avuncular concern he shows for Clarice. Yet I revelled in his aesthetically staged slaughter of his captors, and the brilliance of his escape. He did not deserve to go free, and the guards did not deserve to die such grisly deaths for simply doing their jobs. But it is at that moment that his power is confirmed in the world of action. This seals our empathy with him as well. We feel his power and vicariously share in it, knowing full well that we are empathising with a criminal. I, too, cheered at the end of Silence, when Hannibal disappears into the indigenous crowd while planning to have Chilton for dinner.

As Nietzsche observed, we feel pity for those to whom we feel superior. Sympathy is really an act of condescension on the part of the sympathiser. His critique of pity is based on this fact, and on his belief that neither the sympathiser nor those with whom he sympathises are made stronger in the process. My reaction to the end of Hannibal Rising was not as exhilarating as what I experienced at the end of Silence. The former left me with a sense of loss, and of sympathy for the innocent little genius who was turned into a monster by the Second World War. The latter allowed me to revel in Lecter’s exercise of power over others with no excuses or explanations to get in the way.

Empathy, to my mind, is the basis of identification. We do not identify with characters that we do not, in some sense, want to be ourselves. That sense may vary greatly, but whether it is moral (Paul Rusesabagina in Hotel Rwanda) or aesthetic (Lord Henry Wotton in The Picture of Dorian Gray), political (Jefferson Smith in Mr. Smith Goes to Washington) or horrific (Hannibal Lecter in Silence), we empathise with such characters because we can picture ourselves being them.

We sympathise, on the other hand, with many humans (both real and fictitious) with whom we would never be willing to trade places, even for a second. The concentration camp victim, the starving African child, the helpless damsel in distress, the isolated senior citizen, all elicit our sympathy without for a second tempting us to wish that we were one of them. At his peak, I do wish I could be Hannibal (at least for a day), wielding such complete power over others and over myself. But I never for a second found myself wanting to be young Hannibal, except when he stirred the embers of romantic love in Lady Murasaki. After he forsakes her so readily, only my pity remains.

Daniel Shaw is the editor of the journal Film and Philosophy, author of Film And Philosophy: Taking Movies Seriously (Wallflower Press) and co-editor of Dark Thoughts: Philosophic Reflections on Cinematic Horror (Scarecrow Press).

The skeptic

Not all bets are off on the paranormal

James Randi

James Randi

The first I heard of it was a message on the Usenet newsgroup (where it was decidedly off-topic) gloating that James Randi had cancelled his million-dollar challenge because he had lost and was refusing to pay. The message added, “James Randi (is this even a REAL NAME?)”.

The first part is true: as of March 6, 2010, Randi will close the million-dollar challenge to psychics to prove they have paranormal powers under proper observing conditions. In his SWIFT newsletter for January 4, 2008, Randi notes, “While the JREF earns a certain income from having the prize money very conservatively invested, that sum could certainly be used more productively if it were made freely available to us.” He adds, “Ten years is long enough to wait. The hundreds of poorly-constructed applications, and the endless hours of phone, e-mail, and in-person discussions we’ve had to suffer through, will be things of the past, for us at the JREF.” (You can read the rest for yourself.)

Most of the applicants he had to deal with were deluded amateurs; the professionals stayed away in droves. I find this unsurprising, for two reasons. First, although relatively few big-name professionals make millions, they’re probably making a good enough living not to want to risk the kind of publicity they’d get from taking up Randi’s challenge. Second, speaking as a former full-time performer myself, someone who routinely performs in front of thousands of people has to know he can reliably deliver a good show, and therefore has to have full command of whatever techniques he’s using. I would logically expect conscious frauds to form a higher percentage of successful stage performers than of small-timers who largely deal with people one-on-one. The smart conscious fraud would not want to hand Randi the opportunity to expose him.

For skeptics, though, it’s the end of an era. The million-dollar challenge was only ten years old, but its predecessor was a $10,000 challenge that went for something like 35 years. When Randi set up the James Randi Educational Foundation, he began accepting pledges to increase the prize, and then eventually set up the million-dollar trust fund. Commenters to the BoingBoing blog, which posted the news shortly after Randi released it, suggested that a much bigger prize could be offered via an insurance policy, but, as another commenter pointed out, that would just attract more kooks and greater demands on Randi’s finite time and energy.

There are plenty of psychics who claim they’ve been tested by scientists and passed, but what Randi brought to the situation was a lot of experience, a lot of experts in various fields that he could call, and the right kind of magical thinking. I mean, he thinks like a magician, not that he thinks fantasies will come true.

Randi was my original inspiration to get involved with the skeptics: he did a lecture/demonstration at Cornell in 1982 that amazed me because he was able to explain woolly stuff I’d been hearing for years around the folk scene. What really sealed the deal was mathematical games writer Martin Gardner’s books on the subject; I’d known Gardner’s work for years, ever since my high school math teacher had talked about him. It was only later that I learned that Gardner, like Randi, had a background in magic; to go with his mathematical games and skeptical books, Gardner has published a number of well-regarded books on magic.

Magicians make the best skeptics because they are experts on diverting attention. The best of them have the cleverness of Hollywood stunt people in devising tricks and stratagems for making the impossible seem real. They are good at designing things and building things, too. As stage performers, they are experts at capturing the audience’s vision. And so when the magician says, “Pick a card, any card,” somehow you always pick the card the magician intended you to take (as Bernard Woolley observed in Yes, Minister). And when the scarf disappears into the left hand and reappears in the right, the magician’s performing ability ensures that you don’t notice the moment it changes hands. Learning to dissect what you see, to look in the direction the magician is trying to divert you from, to think how someone would do a trick, and to plot a stratagem to make the trick impossible – those are skills few scientists have.

The great thing about the challenge was that the rest of us could say to someone who was deeply impressed with his own abilities, “If you’re so psychic, why aren’t you going for Randi’s prize?” We could trust that if Randi mounted a test it would be a good, well-designed test that eliminated the possibility of fraud. And we could trust that anyone who ever won the money would really damn well be able to produce those paranormal powers.

I can’t blame Randi for wanting to hang up his checkbook after so many years, but I will miss knowing that the challenge is there.

Oh, and to answer the poster’s question: “James Randi” isn’t exactly the name Randi was born with. He adopted it as his stage name when he was a teenager. I think he’s been using it long enough for it to be real.

Wendy M Grossman is founder and former editor (twice) of The Skeptic magazine.

Review: Conversations on Truth

Conversations on Truth, Edited by Mick Gordon and Chris Wilkinson (Continuum) £14.99/$19.95 (pb)

A collection of interviews on the theme of truth, honesty, and associated notions is hardly inapposite. Gulf War II garnered what support it enjoyed because of lies, or at least untruthfulness; British MPs are apparently quite a fraudulent lot when it comes to expenses; and the global economy is in a mess because banks stopped trusting each other – for good reason given the prevalence of toxic debt. In short, public life is in the toilet. There is also much popular debate about the need for a new Enlightenment to combat a motley group of dark forces, ranging from the Taliban to palm readers and various sociologists. Two of the present interviewees are such self-styled crusaders (Simon Blackburn and AC Grayling); one may also mention Paul Boghossian and, of course, Richard Dawkins.

Being attuned to the Zeitgeist, however, does not necessarily make for an edifying read. One can’t complain at the very idea of the present volume. The value of the result will depend on the assembled cast. One should be thankful in this regard that the editors avoided the risible John Gray. What we do have is a bunch of media-friendly philosophers; a smattering of journalists from “left” (e.g., Nick Davies) and “right” (e.g., Peter Oborne); a historian (Richard J Evans) and a lawyer (Bruce Houlder); and Noam Chomsky, who has his political hat on. Gregory Chaitin, the mathematician, is also interviewed. His views are fascinating, although somewhat out of place here. Finally, there is the odd choice of John Humphrys. I have no particular gripe against the BBC man, but why would anyone care that he thinks that “there is no objective truth”?

The responsibility for Humphrys’s silliness lies more in the questions asked, which occasionally give rise to absurdity. Thus, we have Mary Midgley being asked about “theories of everything” in physics. She opines that the pursuit of unity is “quite irrational” and that the inconsistency of general relativity with quantum mechanics is not “too surprising”. Oh, silly physicists! The fact is that gravity evades a unity that encompasses all other known forces – surprising, surely. The search for a wider unity still is far from irrational. Modern cosmology, for instance, just makes no sense without some unification of nuclear and gravitational forces.

The journalists offer more interest, with some quirks. Nick Davies calls Chomsky’s analysis of the media “crap” on no basis other than that journalists wouldn’t agree with it. Nothing Davies says, however, contradicts Chomsky’s position that the media function to distract the “bewildered herd” or set the agenda in which political discourse might take place, an agenda in line with major commercial interests. Davies is simply more interested in the micro-detail; besides, it is no more the business of journalists to sanction theories of the mass media than it is of rocks to sanction the theory of plate tectonics. Interestingly, Oborne, a self-described Tory, considers Chomsky “just stunning… his argument accords exactly with my practical experience.”

Chomsky himself is his usual blend of erudition, insight, and savage irony: How can we have a serious debate about potential Iranian “interference” in an invaded country? In line with Chomsky is Dan Hind, an independent author and blogger. He rightly complains of the neo-Enlightenment crowd targeting relatively peripheral issues, such as faith healing and theism (for what it’s worth, I am appalled by both); a genuine neo-Enlightenment should be concerned with contemporary impediments to our free, intellectual advancement (such as corporate hegemony, press independence, privatisation of education, a servile media), not the impediments of two hundred years ago. Hind is on less steady ground, I think, in his qualified interest in conspiracy theories. He is right that “conspiracy often serves to discredit legitimate concerns”. But the fact remains that states are guilty of enough that is known by all without the need for anyone to invent or root out often less nefarious crimes.

In a similar vein, Martin Kusch (a sociologist of science) upbraids Boghossian’s ill-informed polemics against the sociology of science and continental philosophy. It is a pity that Kusch has to state the obvious: science has a complex history, and inquiry into the social conditions that shape scientific development does not affect to usurp the science itself. It is of no little irony that the neo-Enlightenment lot, being unduly enamoured with common sense, tend to have a flat-footed conception of science. For sure, there is large-scale ignorance of science (nothing new there), but a better remedy is for people to learn something of its history, rather than read misinformed “popular philosophy”.

In general, the collection is of mixed interest. If you want philosophy, then there is little on offer here. On the other hand, Chomsky, Davies, Oborne, and Hind all provide righteous anger, and some entertainment along the way.

John Collins is a lecturer in philosophy at UEA and the author of Chomsky: A Guide for the Perplexed (Continuum)

Seizing power from the divine

Nicholas Rescher argues that Kant’s radicalism is widely underestimated

To an extent that seems surprising, and that intimidates even his most dedicated followers, Immanuel Kant holds that the lawful order of the world’s phenomena inheres in the operation of our minds. He flatly maintains that “the order and regularity in the phenomena, which we entitle nature, we ourselves introduce”. He insists that nature’s laws are such that “we could never find them there, had not we ourselves, or rather had not the nature of our mind, originally put them there”. And as though rubbing salt in the wound of natural-law realists, he explicitly insists that “however exaggerated and obscure it may sound, to say that the understanding is itself the source of the laws of nature, such an assertion is nevertheless correct … Nature’s empirical laws are only the special determinations of the prime laws of understanding.” What we have here is a well-guarded secret, for Kantians all too seldom acknowledge this drastically idealistic feature of his approach to natural philosophy.

On Kant’s teaching, Nature’s particular laws implement and concretise the generic conceptions of lawfulness that characterise our thought through being mandated by the principles of reason. That the operation of the human mind is the basis and ground of the lawfulness of nature is a salient thesis of Kant’s critical philosophy. And he puts this idea to work across a wide terrain.

Now when Kant characterises man as the lawgiver of nature he subverts the Leibnizian philosophy by putting man in place of God. The lawfulness of Nature is grounded not – as Leibniz saw it – in the creative decrees of God, but in the formative make-up of human reason. Kant thought that just as Newton gave us the key to the laws of the material world, so Rousseau gave us the key to the laws of the human world. But this meant that the human world is also lawful. On this basis, a prioritising of rules, regulations, maxims pervades Kant’s philosophy, alike on the theoretical as on the practical side. For in being law-conformable in thought and action we are being true to our nature as a component of the wider nature of which it is a part. And the maxims of personal conduct in practical philosophy should form part of a spectrum of universal regularity that pervades the realms of nature and man alike. On this basis, Kant, a devoted student of the classics, resumed the Stoic theme that the realm of human agency should mirror that of nature as a manifestation of lawful regularity.

With Leibniz, the supreme duty of the human is to act so as to emulate God’s kingdom of grace; for Kant it is to emulate Newton’s kingdom of nature. Leibniz with his Principle of Perfection sought to render the operations of nature divine; Kant with his Categorical Imperative sought to make the operations of humanity natural – albeit in the manner of the lawful nature of Newtonian cosmology. And just here – in the concept of lawfulness – lies the fundamental unity of Kant’s theoretical and his practical philosophy.

But something even more far-reaching and grandiose is also at work here. What Kant is after throughout his philosophical work is to give us the philosophy of Leibniz without God – to put man in place of God and to have our human modus operandi substitute for that of the deity. Where Leibniz theologised nature through divine governance, Kant subordinated it to the mind of man. Space, time and causality – the very fabric of the universe – are for Kant no more than thought-forms of the human sensibility and understanding. And, analogously, for him the principles of morality are not divinely instituted but rather are inherent demands of our human rationality. Here too the mind of man is once again the pivot.

The early critics who charged Kant with being a follower of Berkeley could not have been more wrong. For with Berkeley – just as with Descartes and Leibniz – God did all of the heavy lifting in philosophical explanation. But with Kant the inherent workings of the human intellect accomplished the needed work.

To Kant’s mind, all of the tasks that Western philosophical thought has traditionally assigned to the deity as institutor of a rational world-order do indeed need to be accomplished, but humanity – we mere mortals – is up to the task. What we have here is a philosophy not so much of enlightenment as of enormous hubris. For Kant qualifies – and perhaps saw himself – as the philosophical Prometheus who brought the power of God down into the domain of humanity.

Nicholas Rescher is University Professor of Philosophy at the University of Pittsburgh and the author of over 100 books, including Philosophical Dialectics: An Essay on Metaphilosophy (SUNY)

My philosophy: Lewis Wolpert

Lewis Wolpert tells Julian Baggini why philosophy is a waste of time

Lewis Wolpert

“I was thinking before you came, if philosophy hadn’t existed – apart from Aristotle – what would we not know? The answer is that it wouldn’t have made the slightest difference.”

I had gone to see the biologist Lewis Wolpert in his North London home expecting to be told the subject at the heart of my work was total rubbish, and he did not disappoint. I first came across his uncompromising views back in 1992 when I saw him give a lecture at University College London. He had nearly finished a captivating talk about his book, The Unnatural Nature of Science, when, almost as an afterthought, he briskly dismissed all philosophy of science as having nothing useful to say.

What he said must have stuck because when, a few years later, I was putting together a dummy of what tpm would look like, I included in the contents an interview with Wolpert. It took over a decade, however, before I actually got around to conducting it.

Over that time, Wolpert’s star as a public figure has risen tremendously. His book on depression, Malignant Sadness (1999), was a breakthrough success, combining a thorough overview of all we know about what depression is with some very personal sections dealing with his own battles with it. The book spawned a television series, and in 2006 his book on the evolutionary origins of belief, Six Impossible Things Before Breakfast, was a popular science bestseller.

Now 78, Wolpert has not exactly mellowed when it comes to his hostility to philosophers. He is personally charming, but when we got to philosophy, the phrases “totally unintelligible”, “no use whatsoever” and “gobbledegook” were bandied around with a vigour that was somewhere between irritation and zest.

We got off to a good start when I asked him when he first came into contact with philosophy.

“It was probably in relation to the philosophy of science, and I can’t even remember where it was, but it was quite late in life. I did read Popper’s book, and I hated it. I once wrote that it was the most over-rated book in the last 500 years.”

Wolpert had first-hand experience of how scientists worked, and simply found Popper’s ideas about the scientific method had nothing to do with that, and no one else he has come across since has been any better.

“Nothing in Popper or in any other philosophy of science has anything relevant to say about science. I don’t know of any scientist who takes the slightest interest in the philosophy of science, although I do think Peter Medawar was quite keen on Popper, to my surprise.”

A lot of people who claim philosophy is a waste of time can be tricked into conceding at least something by being drawn into an obviously philosophical discussion about the value of philosophy. With commendable consistency, Wolpert repeatedly rebuffed attempts to open up that kind of dialogue. So, for instance, when I challenged his view that philosophy of science is irrelevant by saying that it surely depended on what it was supposed to be relevant to, he retorted, “It’s not relevant to anything.”

But then came a small concession: “I’m not talking about political philosophy, I’m talking about the nature of the world.” But as if he had already granted too much, he added, “It’s clever, but totally irrelevant. Most of it seems to me just nonsense, it’s very hard to know what they’re talking about.”

How then does Wolpert explain the fact that so many great minds over history have been seduced by a subject which he claims is totally irrelevant?

“That’s a very good question, and I think it’s a bit like religion. I’ve just been to a meeting on science and religion and I can’t understand what most people are talking about. They’re not unclever, they’re clever people but it just seems gobbledegook, babble.”

Wolpert is clearly not lacking in self-belief, but what makes him so confident it’s a failing of philosophy rather than himself that he finds it gobbledegook?

“Because it wouldn’t matter one hoot – science has done very well without any philosophy whatsoever. Take biology over the last 100 years – philosophy has had zero impact.”

Aren’t there people who think it might help at least with theoretical physics?

“I don’t think that it’s philosophy that will solve it in any way whatsoever, because it’s all about language and words, not science, and physics is about science.”

I couldn’t resist pointing out the contrast with the person interviewed in this slot last issue, the physicist Alan Sokal, who was rather more generous about the contribution philosophers make.

“I’m not at all generous about philosophy,” says Wolpert. “I think they’re very clever but have nothing useful to say whatsoever.”

Nothing useful for the practice of science by scientists, perhaps.

“No, nothing useful for the practice of anything,” he insists. “Perhaps morals, politics and things like that, that may well be. John Stuart Mill and justice and so on, that’s important stuff, but about the nature of the world, absolutely nothing to say whatsoever.”

What about the nature of knowledge itself?

“Absolutely nothing useful to say at all.”

So the questions that are asked don’t need to be asked? They’re just interesting puzzles for clever people?

“That’s exactly what they are. They’re something for philosophers to dabble in.”

Wolpert is an immoveable object who clearly believes philosophy is an eminently resistible force. His most fiery response came when I suggested to him that his dislike of philosophy may be just a temperamental matter: philosophical problems just don’t turn him on.

“No it’s an intellectual rejection!” he says, sounding quite offended. “It’s certainly not temperament.”

No bait that I offer him is taken. For instance, I told him how I had been caught up in an ongoing exchange with a Christian about belief, one which has forced us to consider what knowledge is.

“Well I will not get into such a discussion,” he says. “I think there’s no meeting between religion and science whatsoever.”

But in order to make the claim that there is no meeting between religion and science, isn’t he forced to do some philosophy to justify that?

“Absolutely not. There’s no evidence for the existence of God, and that’s all there is to it. You just provide me with some evidence. As for the evidence from the Bible, all the studies that have been done show that no one who wrote bits of the Bible was there at the time. I’m not against religion and I have a moderately religious son, as long as religious people don’t interfere.”

I try one of my more involved attempts to draw Wolpert into philosophy’s net. He says that you don’t need philosophy to discuss religion because you simply ask where the evidence for the existence of God is, you find there is none, and it’s the end of the story. But what if a clever theologian or philosopher of religion comes around and says that Wolpert is demanding a scientific form of evidence for something that is not scientific? So he’s not really saying there is no reason to believe in God, he’s saying there is no scientific reason to believe in God. To answer that objection, doesn’t he have to go into philosophical questions as to whether or not scientific reasons are the same as reasons in general and so forth?

“No, I think I’m already asleep, because it’s really about evidence. I usually say to people that if I tell people I have found a fish that speaks Afrikaans – I’m South African – they would want to get some evidence that this fish actually exists. It’s the same with God, I’d have to bring some evidence.”

But isn’t the question of what makes something reasonable evidence a philosophical one?

“I don’t think so, no. Funnily enough I’ve just looked it up in Ted Honderich’s Encyclopedia of Philosophy, the word ‘evidence’, which gets about six lines.” Touché.

However, somewhat surprisingly for a man who thinks everything hinges on evidence, Wolpert himself is a theoretician. “I’m hopeless in the lab. I’m good at getting other people to work – that’s my skill. I like using the results of experiments but I don’t like doing them myself.”

Despite the bluster, there are corners of the philosophical world which Wolpert does have time for. He thinks that Aristotle’s logic “was very important for science”, although “his science was terrible.” However, even when we do find some common ground, Wolpert seems determined to stamp all over it: “What I’m curious about is that, unlike science, I’m not sure how much progress there’s been in philosophy. I wonder whether if Aristotle came back, not much would have changed.”

Wolpert also “fell in love with David Hume at one stage, although I disagree with him about causality, but on religion he is just wonderful, describing how no miracle should be believed in unless it is so miraculous you couldn’t avoid belief.”

He also has some nice things to say about John Rawls in The Unnatural Nature of Science. “Ethical issues, and issues related to the law and justice, I think that’s where philosophers really can make a contribution. I’m a bit hostile to bioethicists, but that’s another matter altogether. Some of them are very good and ask perfectly sensible questions. But a lot of them really are looking for problems rather than trying to solve them. We’ve got this bill going through parliament now, and I think there are really very few ethical problems there. You’ve got to have ethical committees for experiments relating to human beings and if there are philosophical issues involved there I have no problem whatsoever.”

More surprisingly, he says he likes Thomas Kuhn. “When I met him briefly I felt he was a relativist and I was rather disappointed. But I think his original thing about different paradigms and their influence on how one did scientific research was important to the historian of science. Maybe he made people slightly aware that you’ve got to be careful that you are really in the right paradigm for thinking about how things work.”

But such concessions are the exception, not the rule. Of Feyerabend he says “He’s terrible,” Against Method being “absolute junk”. Worst of all, “I hate relativists of course. Those people are just terrifying, people who say science is just a social construct. I think it’s striking that there’s only one science, there aren’t different sciences around the world. Those relativists are just stupid.”

If Wolpert seems rather broad-brushed in his dismissal of philosophy, it is not because he hasn’t thought about some of the more specific ideas in the philosophy of science. For example, there is the famous underdetermination thesis, which states that the evidence always leaves room for multiple theories which explain it.

“I don’t think there is much in it,” says Wolpert. “I don’t know of any other theories in biology that will explain the available data. There may be occasionally a couple of theories which do and there is nothing to choose between them, but they’d be so similar that I don’t think they would be different theories. Underdetermination is a very rare phenomenon, in the sense that there are many theories that can explain the same thing; no, I don’t believe that.

“I think philosophers are probably quite jealous of science and this is why they come up with all this nonsense to try to show it’s not as reliable as people like to think it is. Look at how successful science is – philosophy is not successful – it’s achieved nothing.”

Wolpert also has specific criticisms of Popper’s idea that science works by coming up with theories that it then tries to falsify.

“That’s where Popper is wrong. When we scientists are working with something we’re not trying to falsify. We might on occasion. We’re really trying to see whether we can show that the theory is right or wrong.”

Wolpert believes the whole enterprise of trying to codify the scientific method is misguided.

“The essence of the scientific method is really quite simple: you have your observables, you mustn’t have any logical contradictions, and the theory must fit with the facts, and you don’t worry particularly in biology about what the facts are. Now your facts can be wrong – there is no question that you can make errors and they get discovered. So I don’t think knowing philosophy of science helps you in any way whatsoever.”

He thinks scientists learn this method mainly by working with other scientists. There are some other lessons worth learning to do science well, but you won’t be surprised to hear that Wolpert doesn’t think they come from philosophers.

“Peter Medawar said that science is the art of the soluble – you must choose a problem that can be solved. There’s no point in choosing too difficult a problem. You’ve got to define the problem in such a way that you think it can be soluble.

“Sydney Brenner – one of my heroes, a fellow South African, Nobel prize winner – said the way to make progress in science is not to know too much about the subject you’re working on, because you’re already constrained. So you need to come into a field where you don’t know too much, but you must know a lot about fields outside that, and then you should question the fundamental idea of the field that you’re moving into.”

Given Wolpert’s acceptance of moral and political philosophy, I make one last attempt to get him to concede there might be some value in other areas of philosophy. Let us allow that science deals with all those matters concerning the nature of the physical world. Let us also allow that, since one cannot determine in a scientific manner what the right or wrong thing to do is, yet still has to make rational decisions about such things, we also have ethics. Between the two aren’t there questions about what is knowledge and so forth which are not scientific, but which, as curious rational beings, we find ourselves asking?

“No we don’t have questions about what is knowledge!” insists Wolpert. “This is a cup, I have no doubt that this is a cup, and I think some of my theories in science are right, and others are hypotheses, and there may be beliefs that are much less reliable, but nobody struggles with those.”

But what about when it comes to other types of knowledge? Does Jack know Jill loves him?

“They’re not scientific issues.”

They are issues though.

“Yes, but neither philosophy nor science helps you very much.”

This piques my interest because of Wolpert’s interest in and experience of depression. Although some of that is down to chemical imbalances in the brain, pure and simple, couldn’t philosophy perhaps be useful in coming to terms with the issues and questions which lead to depression?

“Well it’s the way one thinks, it’s one’s negativity. My argument is that depression is malignant sadness, it’s sadness, which is a normal human condition, becoming extreme. Now how that becomes extreme is complex and to pretend that we understand depression is simply wrong.”

Whether philosophy can help is something he remains agnostic about. “I don’t know, it’s tricky. If it helps, I’m for it.”

Perhaps surprisingly, Wolpert is actually good friends with some philosophers, such as Ted Honderich and AC Grayling. “They’re very nice people and I like them.” But how does he maintain good relations with people whose subject he views as a waste of time? “I don’t think we ever discuss philosophy.”

Perhaps his irritation at philosophy’s lack of contribution to the sum of human knowledge misses the point. After all, one could say the same of poetry, which he would say has some value.

“Well it’s a good point, I’ve never thought of philosophy as poetry, philosophy as impenetrable poetry. If people enjoy it, and there’s no question that philosophers enjoy it and a lot of people like philosophy enormously. I’ve got a grandson who’s very keen on philosophy at the moment, he’s fifteen, loves it.”

Maybe Wolpert could agree with Wittgenstein, who thought that philosophy had no instrumental use but should only ever be pursued for its own sake, because one is gripped by a philosophical problem. The thing is simply to not pretend it has any instrumental value and just get on with it, as you might paint a picture or write a poem.

“Yes,” he agrees, “but then it shouldn’t be in universities.”

I wonder what else might be removed from universities if we head down this road. Literature departments?

“No, I think literature is important. Maybe philosophy doesn’t do any harm – we could keep a very small philosophy department. But if it’s just like poetry I don’t want them in university, no.”

But the political and moral philosophers would be allowed to stay?

“Oh, absolutely, and then you’d probably need the others there anyhow, because they have the same techniques, so I’d leave the philosophy there because you probably need the moral, political and legal philosophy.”

Perhaps that’s a good point to end on: when it comes to talking philosophy with Lewis Wolpert, that counts as quitting while you’re still ahead.

Julian Baggini‘s latest book is Should You Judge This Book by Its Cover?