My philosophy: Lucy Eyre

Julian Baggini meets the debut philosophical novelist Lucy Eyre

I’ve just finished reading an introduction to philosophy for young people which might also be appreciated by adults. The story concerns a child who is taught about philosophy with the help of a supernatural guide. Sound familiar? If you think I’m talking about Sophie’s World, however, you’d be wrong. If Minds Had Toes has some superficial similarities to Jostein Gaarder’s bestseller, but as its author Lucy Eyre put it, “I like to think mine’s a bit sillier, with more jokes, a bit more of a story, and shorter.” To my mind that’s four improvements on Gaarder’s rather long-winded, didactic text.

If Minds Had Toes is set in England and its antithesis, the World of Ideas – a kind of philosophical heaven, or hell, if you think spending eternity with people who go on and on about ancient arguments sounds like torture. Eyre offers various clues that her world is a better literary device than it would be a place to actually inhabit. For instance, after hearing Socrates talk about his trial again, she has one man interrupt, saying, “One might hope that you would be bored of telling this story after two and a half thousand years.” But of course, each generation comes to the history of philosophy afresh. “I think the World of Ideas was a slight metaphor for the idea that the ideas keep going, and literally here the people keep going,” she says.

Eyre currently lives neither in England nor the World of Ideas, but in the Ethiopian capital Addis Ababa. I caught up with her while she was back in her home country to launch her first book. We talked in the west London home of her parents, Richard Eyre, the theatre and film director, and television producer Sue Birtwistle, whose CV includes the adaptation of Pride and Prejudice with Colin Firth.

Needless to say, in such a home Eyre grew up with plenty of intellectual discussion and stimulation. “When I was a child, one of my favourite books, which my mum used to love and which she got me into, was [Norton Juster’s] The Phantom Tollbooth. That’s been a definite influence, that sort of quirkiness, taking expressions literally and taking the implications to their full extent. That’s not explicitly philosophical but I think there is a kind of philosophical view in that book that really hooked me as a child.”

As she grew older, her exposure to philosophy came mainly through literary and artistic channels. “In almost all films, books or plays there are ideas that you might generally call philosophical, content-wise. But I do think there’s this whole leap to the academic stuff or virtually anything that’s written down about it.”

Eyre took her first steps toward that leap after one of her school teachers suggested that she might read Philosophy, Politics and Economics at Oxford University.

“I started reading the usual, Thomas Nagel kind of things,” she says, “but it was only really when I went to Oxford that I started loving it. I had some great tutors there. It was the first thing and probably still the only thing that I found really satisfying and challenging.

“I was very lucky. I had two terms of tutorials on my own with Galen Strawson and two terms of tutorials, with one other student, with Jonathan Glover. That’s quite special. Then you can really do philosophy like it used to be done and as it works best.”

However, philosophy was left on the back burner for several years after she graduated in 1997. She worked in radio and television for a few years, then went into economics, working as a consultant on competition policy. Her feelings about this period in her life are reflected in the fact that, in her book, an economist is thrown out of the World of Ideas for being an impostor.

“That’s a bit of a dig,” explains Eyre, “because I was working as an economist when I was writing it and I was desperate to get away from that world.”

Although she never wanted to become an academic, there was a sense of unfinished business with philosophy.

“When I studied philosophy I really loved it and wanted to tell everyone about these new things I was thinking about, but I felt I couldn’t do it by myself. I didn’t just want to have a discussion and start teaching them about it. But there weren’t that many obvious books that didn’t require quite a lot of concentration for people to get something out of them. There were basic philosophy books, but almost all were aimed at being a first step to studying philosophy and were quite hard going. I was hoping with my book to bridge that gap and translate what I learned at university and spent quite a few years getting my head round into something that I would tell people, if I had the energy, but it was easier to write it.

“I always wanted to write but it took me a while to get the confidence to just sit down and write it. In fact I was a bit ill so I gave up work for a few months and as soon as you step back from that particular career you think ‘of course, I can do anything’. Day to day I was doing it thinking ‘I must do something else one day,’ but it’s hard. So I took a few months off and I didn’t go back basically. I went to a very similar firm but part-time so I could write the book. I gave myself two years and it took just over that.”

Eyre was aware that philosophy for children was a burgeoning genre, but she deliberately didn’t read other people’s attempts while she was making her own.

“I tried to avoid it because I already knew quite clearly how I wanted to do it and I was sure there might be similar things out there and I didn’t want to risk reading something else and then thinking, no, I might be stealing that idea.”

She made an exception for Sophie’s World as, already familiar with it, she felt she needed to check if her book was going to be sufficiently different to merit the effort.

“I read about a third of Sophie’s World when I was a student, and got a bit bogged down in it. Then I conceived this idea and thought it’s a bit similar, so I’d better read it and just make sure that within the boundaries of what I wanted to do, mine was as different as possible. I think the philosophy is actually different in mine. I’ve done dialogues, it’s based on ideas and it is not a potted history of philosophy.”

Those differences avoid making If Minds Had Toes read like a primer, even though its central dialogues are effectively introductions to the key themes of western philosophy.

“The philosophers are deliberately not real people so there’s no one who can say ‘you’ve misrepresented Kant’s view here’. Clearly some people have a bit of David Hume in them, or a Kantian influence, and stuff like that, but I wanted to be able to use old arguments and new arguments in the mouth of the same person, which is just not feasible if you make them John Locke or something.”

Nevertheless, it is possible to get arguments terribly wrong, and one of the great strengths of Eyre’s book is that her telegraphic discussions of the great debates always get the philosophy right. How did she achieve that, when she had not exactly been immersed in philosophy for years?

“I did go back to my university notes and essays. In a sense, when I started writing the book I knew what I wanted to write about and I knew what I wanted to say. I immediately wrote the list of the themes that the dialogues were about, and I knew that I wanted two opposing views. One person might move through several arguments but there were broadly two positions. I decided I didn’t want to teach myself any new philosophy. I wanted to write about things that I already felt comfortable with and had a view on, that had been sitting in the back of my mind for those five years. But of course I went back to remind myself of the subtleties and the different arguments and then I did spend a lot of time paring it down and simplifying it.”

The simplification might seem necessary because the main character, Ben, is 15, but it could equally be said that the age of the character was necessary because of the level of simplification Eyre sought.

“A lot of people assume it’s a book for young adults, and it is, but not wholly. The reason I have a 15-year-old is that I wanted the arguments to be as if you were explaining them to your 15-year-old friend – that simple. I didn’t want to use any philosophical vocabulary as short cuts, in the way that people start using it to avoid having to explain what these words mean, assuming everyone knows. Everyone doesn’t know. They’re very specific; they’re quite off-putting; after you’ve heard about three of them you forget which is which, and automatically you’re having to concentrate much harder than is necessary. But I thought it would be oddly patronising if I made the character 35. It worked better to have him fifteen. I think everyone would like to read philosophy for fifteen-year-olds really, even if they don’t admit it, so that’s why the device is there.”

A central conceit of the book is that Plato and Wittgenstein have a bet as to whether philosophy can change a life for the better. The actual result is, of course, inevitable. “Clearly the bet has to end that way otherwise it would be self-defeating. I would have spent the whole book explaining why my book was pointless, which it’s clearly not!”

But does philosophy really change how people live on a day-to-day basis? Is Eyre’s life significantly different from, or better than, those of the expat diplomatic staff and NGO workers she is surrounded by in Ethiopia? Can’t we exaggerate the importance of living the Socratic examined life?

“If you take the examination to be a little bit broader than actual philosophy, you could say a life without fiction is not worth living, a life without thought and music and novels and some sort of beyond the necessities of life mental aspect is not worth living. Someone asked me a very difficult question: what about Ethiopia where clearly such things are a bit of a luxury? Then you’re forced to conclude that maybe their life is lacking something important, which is a difficult conclusion I don’t want to go to.”

I’m not sure why. Surely the whole point of helping the developing world is that we think there is more to life than mere subsistence and people should be helped to more than survive. That’s not to say more basic lives aren’t worth living, but that they could be much better. If that’s true, philosophy can add something worthwhile.

“I suppose the World of Ideas is a little bit of a metaphor for things you can find in books that are lurking if you open them and you can go and meet all these people. So in the broader sense I would have to say thinking in this way does provide something.”

The book leaves the possibility of a sequel open, but Eyre says, “At the moment I’m not in the mood to write it. My problem is that all the bits of philosophy that I really know about and love are already in there.” At the moment she’s working on a conventional novel. “The idea now is that I’ve started on a career as a writer, not particularly of philosophy, not a kind of Alain de Botton path, but you never know. I think there’s still more mileage to get out of the world of ideas and its interaction with the real world and Ben going back to visit.”

If Minds Had Toes is published by Bloomsbury

Julian Baggini is editor of tpm.

Review: The Wounded Animal by Stephen Mulhall

The Wounded Animal: J.M. Coetzee & the Difficulty of Reality in Literature and Philosophy
by Stephen Mulhall
(Princeton University Press)
£19.95/$26.95 (pb)

Literature and philosophy have a very long history of mutual entanglement. One might recall, as Stephen Mulhall does at the beginning of The Wounded Animal, that Plato’s engagement with poetry (advocating, famously, that the poets be ejected from his just city) was the first instance in the “quarrel … by which philosophy distinguishes itself … as an autonomous form of intellectual inquiry”; one of the ways, in other words, that philosophy has been able to understand itself as philosophy by articulating its relationship to and, in particular, its difference from literature.

Today, different schools of philosophy have varying approaches to the question of literature. While continental philosophers have often acknowledged the influence of novelists and poets, analytical philosophers generally stand at a greater distance from the literary tradition. When it comes to those trained in mainline Anglo-American philosophy, the exceptions seem to prove the rule.

Even given this disciplinary situation, a monograph by a philosopher on the work of a single novelist, like Mulhall’s The Wounded Animal, is rarer still. And the fact that this book focuses on just two works of a prolific contemporary author might well make it seem an extremely perplexing endeavour. But if any writer currently active is fit for this sort of attention, it’s Coetzee, whose career has taken a strange turn over the course of the last decade.

The first writer to win the Booker Prize twice (in 1983 for Life & Times of Michael K and in 1999 for Disgrace), Coetzee had long been known as a writer of novels that, even if formally complex and intellectually challenging, were still clearly recognisable as novels – narratives focused on characters and their relationships, series of events that constitute a plot and so on. But in the years just before and especially after he won the Nobel Prize for literature in 2003, his output changed quite significantly. The simplest (if slightly reductive) way to describe this turn is to say that every time he is asked to give a talk, he offers instead a fiction; and every time the public expects a new fiction, he now seems inclined to produce a talk, an essay, a work of non-fiction. For instance, his most recent novel, Diary of a Bad Year, presents as its main text a set of position papers written by an ageing novelist for a collection of political essays, along with a narrative – restricted to the bottom of the page, where footnotes would appear in a different sort of work – about the author’s relationship with a young woman whom he employs as a typist. Conversely, when Coetzee gave his Nobel lecture – normally an occasion that calls for the recipient of the award to make a serious-minded statement about literature and world events – he chose instead to present an ambiguous short story that complexly re-imagines the relationship between Daniel Defoe and his character Robinson Crusoe.

The two “novels”, if that is still the right word, that Mulhall focuses on in The Wounded Animal – Coetzee’s The Lives of Animals and Elizabeth Costello – are drawn from the beginning of this strange period in Coetzee’s career. Both works centre on a fictional character named Elizabeth Costello, an ageing and celebrated female Australian novelist. But rather than straightforwardly narrating a series of events in her life, the greater part of both novels is made up of a series of public lectures and seminars, along with a limited amount of description of the situations at which these presentations are given. In shaping these novels in this way, Coetzee refashions what is often called the “novel of ideas” into a narrative exploration of what it means to have an idea and to deliver it to an audience, and of the relationship between the delivered idea and the human being who does the delivering.

It is this interest in the contextualisation of ideas, the deliberate revelation of the position from which a thinker speaks, that fuels Mulhall’s interest in Coetzee. His primary goal in this work seems to be to make a case for fictional ways of knowing, narrative forms of the understanding of reality. As he has it, Costello’s approach amounts to “challenging the philosopher’s way of understanding what it is for reality to make an impression on us” – a challenge on behalf of fiction against philosophical abstraction and too-easily-assumed universality.

For instance, Elizabeth Costello’s critique, in The Lives of Animals, of the philosophical deployment of abstract examples and thought experiments enables Mulhall to defamiliarise this standard practice: “They are explicitly constructed so as to strip away the complexity and detail of real-life situations, in order to isolate a specific conceptual or theoretical issue in as stark and plain a manner as possible, thereby allowing us to exercise our judgement about it free of any distortions that might result from the actual entanglement of this particular issue with a range of others in everyday experience.” That is to say, it is important for philosophers to remember that what they gain in terms of clarity from reductive constructions may well come at the cost of experiential or even ethical circumspection.

Above all else, Mulhall finds in Coetzee’s complex fictions a provocation that resists the idea of the work – say a philosophical paper or scholarly talk – as merely a transparent container for cleanly extractable arguments. In his final pages, he signals the possibility of a “philosophy that is both realist and modernist – committed to achieving a lucid grasp of reality, and willing to put in question any prevailing philosophical conventions concerning that enterprise that appear at present to block or subvert its progress.” The Wounded Animal seems to call for philosophical realism that is as attuned to the vicissitudes of the representation of reality – as sensitive to the distortions brought to bear by the act and context of writing – as the most sophisticated fiction of the last century.

For all the persuasiveness of Mulhall’s argument about philosophy’s resistance to the lessons of fiction, The Wounded Animal nevertheless repeats some of the very problems that its author is out to correct. First of all, while Mulhall presents provocative and detailed analyses of Coetzee’s works themselves, it is somewhat disappointing to note that there are almost no references to the increasingly large body of critical work on the author composed by literary scholars, which feels like an omission in a monograph so closely focused on just two short literary works. If Mulhall means to suggest that his call for philosophers to attend to fiction doesn’t quite extend to the consideration of literary scholarship, he should at least have addressed the issue at some point in his monograph.

Further, and especially important given the argument of the work, the question of animality that would seem to be at the centre of the project never receives the direct treatment that we might have expected. That is to say, if Elizabeth Costello intervenes against the philosophical deployment of animals as merely abstract examples, place-holders in arguments indifferent to the lived specificity of animal experience, Mulhall’s book suffers in a sense from the problem Coetzee’s character addresses. For the most part, Mulhall’s interest in the question of animals stops short at the comparative critique of philosophical engagement with them – as if they are simply a topic of debate that clarifies certain meta-philosophical issues rather than living creatures worthy of interest in their own right.

Michael Sayeau is a lecturer in the English department at University College London

Just deserts?

Brad Hooker asks if the idea of desert belongs at the foundation of ethics

There is widespread (though not universal) agreement that one thing morality and justice require is that people get what they deserve. But there are divergences in the use of the term “deserve”. And there are important disagreements about whether all principles of desert are morally derivative or whether at least some of them are morally foundational.

A very wide use of the term “deserve” occurs when “so-and-so deserves x” is used to mean nothing more than that this person should be given x. To say that someone deserves x in this maximally wide use of “deserves” is not to say why it would be good for the person to get x. “Desert” in this maximally wide sense neither picks out, nor rules out, any distinctive kind of moral reason. Claims about what people deserve in this wide sense cannot explain why certain acts are required or wrong. Claims about what people deserve in this wide sense instead just express that certain acts are required or wrong.

A more discriminating use of the term “desert” ties it to a distinctive kind of moral reason, one that contrasts with moral considerations such as benefit, need, and equality. You may deserve some good more than I do even though I would benefit more from getting it, I need it more, and my getting it would result in more equality between us.

What is the basis of your greater desert? A common view is that you deserve rewards or punishments to the extent that you are virtuous or vicious. Aristotle’s ideas about proportionality are incorporated here: reward or punishment should be proportional to virtue or vice. So if your virtue is greater than mine, then this view holds that you deserve greater rewards than I do.

In what does virtue consist? One view, often linked to Hume, is that virtues are those character traits beneficial to oneself or to others or to both. Another view is that virtues are those character traits beneficial both to oneself and to others. Still another view is that virtues are those character traits beneficial to others. None of those views is plausible unless qualified and attenuated in at least a couple of ways. “Beneficial” must be attenuated to “probably beneficial”. Another qualification must be added so that the claim is about what is probably beneficial in at least fairly normal circumstances. A character trait that happens to be beneficial in very odd circumstances is not a virtue but a fluke.

A quite different view about the nature of virtue has recently been revived by Thomas Hurka, in his book Virtue, Vice, and Value. This view maintains that virtue is loving the good and hating the evil, and vice is loving the evil and hating the good. Here, loving something amounts to desiring, pursuing, and taking pleasure in it. Again, proportionality is part of the view: virtue requires loving the good with an intensity that reflects the amount of good and hating the evil with an intensity that reflects the amount of evil.

When these ideas about proportionality are put together with the idea that virtue deserves reward proportional to the amount of virtue, I get nervous. Opera is better than folk music, because opera is deeper, more complex, etc. But I do not love opera. That is, I don’t desire, pursue, or take pleasure in it. You, being more musically sophisticated than I, love opera. So there is an aspect of the good you love and I don’t. This makes you deserve more rewards (e.g. pleasure) than I deserve? Surely not!

Here is a different example making the same point. Although Jack and Jill both love knowledge, Jill loves it more than Jack does. Perhaps Jack is in some sense mistaken not to love (i.e., not to desire, pursue, and take pleasure in) knowledge more than he does. But there seems to be no evil here. And Jill is not more virtuous than Jack simply because she loves knowledge more than he does. She may be more admirable, but not more virtuous.

If, because of ignorance or weak will, I choose lesser goods for myself than I could have chosen, I lose. As long as I am not here failing others as well, it would be inappropriate for morality to require punishment of me. Any morality that requires punishment for purely self-destructive action goes too far. People’s lives are their own property. If people impair their own property without harming anyone else, there is no ground for punishment.

In short, what does not seem plausible is the conjunction of a view about desert and a view about virtue and vice. The view about desert here is that the more virtuous always deserve greater rewards and the more vicious greater punishments. The view about virtue and vice here is that something can be a virtue or vice purely because of its actual or probable effects on the agent. Which of these views should be abandoned, or at least modified?

Perhaps both should be modified. As Bernard Williams pointed out, not every character trait beneficial to the person who has it can be a virtue. His example was some character trait that makes one sexually appealing to others. Such a character trait can certainly be advantageous, but it hardly seems a virtue. Having said this, I do not deny that character traits beneficial to those who have them can be virtues. What I do deny is that these traits deserve rewards, or at least rewards from other agents.

* * * * *

Let me now focus on the issue of the appropriate source of rewards or punishments. This issue is not prominent in the standard picture of desert. The standard picture holds that desert is a three-place relation between (i) the deserving person, (ii) the basis of this person’s desert, and (iii) the deserved good. In other words, (i) the deserving person, (ii) on the basis of facts about this person, which might include facts about how she is in some way related or connected to others, deserves (iii) such and such. There is nothing in this picture about who is responsible for supplying the such and such.

Admittedly, the picture of desert as a three-place relation fits easily with our practice of assessing states of affairs in terms of whether people get what they deserve in those states of affairs. If you are virtuous but have a life devoid of pleasure and other benefits, then this is a state of affairs that offends against desert. You deserve good things because of your virtue, but, unjustly, you don’t get them. Furthermore, the picture of desert as a three-place relation fits with our talk about promoting justice as a shared goal, something that we can all work towards with absolutely no conflict among us.

However, I think desert is often a four-place relation. There is (i) the deserving person, (ii) the basis of this person’s desert, (iii) the deserved good, and (iv) the person or persons responsible for supplying that good. In other words, a claim about desert should identify who the deserving person is, why she is deserving, what she deserves, and who is responsible for supplying what she deserves. Suppose you did something kind for me. Here, (i) you, (ii) because you did something kind for me, deserve (iii) thanks (iv) from me. The desert claim here must incorporate that fourth element.
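Put schematically (the subscripted predicates below are merely illustrative shorthand, not standard notation), the two pictures are:

\[
\textit{Deserves}_3(p,\, b,\, g) \quad \text{versus} \quad \textit{Deserves}_4(p,\, b,\, g,\, s),
\]

where \(p\) is the deserving person, \(b\) the basis of desert, \(g\) the deserved good, and \(s\) the party responsible for supplying it. On the four-place reading, the gratitude case just given becomes \(\textit{Deserves}_4(\text{you},\, \text{your kindness to me},\, \text{thanks},\, \text{me})\).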

I admit that the three-place relation picture might be salvageable by being more sophisticated about the deserved good. For example, it might be held that the deserved good wasn’t merely thanks, but thanks-from-me. I nevertheless think the picture of desert as a four-place relation is superior. Who has responsibility for supplying a deserved good or imposing a deserved punishment is often absolutely crucial. The four-place relation picture makes this aspect more prominent than the three-place relation picture does. And that sometimes everyone is obligated collectively or severally to supply the deserved reward or punishment doesn’t count against the usefulness of the four-place picture.

The case for the four-place picture strengthens when we turn from cases of moral desert to cases of legal and economic desert.

Jesse James robbed banks and trains and in the process shot fifteen people. On that basis, he deserved punishment. But legal punishment needed to come from the right source. Legal punishment of Jesse James couldn’t come from France or China or Virginia or California. It needed to come from Kansas, Iowa, Kentucky, Minnesota, Tennessee, Alabama or Missouri (i.e. from a state in whose jurisdiction he committed crimes). It isn’t just that there is a person who committed a crime for which this person deserves punishment; the punishment must come from a legitimate source.

(Incidentally, Jesse James was murdered by being shot in the back of his head by the Ford brothers, who were gang members Jesse had living in his house. The Ford brothers surrendered, were sentenced to hang for murder, but then were quickly pardoned by the governor of Missouri, Thomas T. Crittenden, who also cut them in on the reward money put up for Jesse’s capture. So the governor of Missouri apparently conspired in the murder of Jesse. Perhaps Jesse got what he morally deserved. But he didn’t get the trial or punishment he legally deserved.)

Now consider economic desert — what people deserve for the work they do. A moment’s reflection shows that economic desert is a different concept from moral desert. If you devote your weekends to a second job and I spend mine going on long walks, our economic deserts differ though our moral deserts may not.

Economic desert seems normally a four-place relation. (i) You, (ii) on the basis of so and so, deserve (iii) such and such from (iv) your employers or customers. That is, you deserve the economic rewards from certain people, not just from anyone.

What determines economic desert? Two factors often cited are contribution and effort. But it isn’t obvious how contribution and effort are to be understood.

One view is that the contribution made by your doing some job or by your making some investment is a function of how much better off people are when you do it than they are when you don’t do it and they make other arrangements. This view may not be correct. But rather than say more about how to understand contribution, I turn to effort.

Effort is typically understood as everything negative about the activity. For example, it can include the amount of training required for this activity, the unpleasantness and strenuousness of the activity, and the health and other risks imposed on the person who engages in the activity.

In order to calculate someone’s economic desert, should we add together the two factors, effort and contribution, or should we multiply one by the other? Suppose someone devotes lots of sweat over many hours to making something that no one else likes. Here effort is considerable but contribution nothing. Well, if effort and contribution are to be added together, this person deserves some economic reward, which seems implausible. But, if effort and contribution are to be multiplied together, then this person deserves no economic reward (since any amount of effort multiplied by zero contribution equals zero). In that case, multiplying effort by contribution comes out with the intuitively plausible result.

But in other cases the formula has counter-intuitive implications. For example, suppose I put a huge amount of work into doing something that benefits only one person a little. Suppose this beneficiary is the only one available to reward me economically. Does she owe me an amount determined by my contribution to her multiplied by the effort I expended? Well, that amount, because it is so heavily shaped by my high level of effort, could easily outstrip the benefit to her of the work I did.
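A toy calculation (the numbers are purely illustrative) brings out the difference between the two rules. Write \(e\) for effort and \(c\) for contribution, each measured on some common scale:

\[
D_{\text{add}} = e + c, \qquad D_{\text{mult}} = e \times c.
\]

With \(e = 10\) and \(c = 0\) (much sweat, nothing anyone wants), \(D_{\text{add}} = 10\), an implausible positive desert, while \(D_{\text{mult}} = 0\). But with \(e = 10\) and \(c = 1\) (huge labour, a small benefit to a single beneficiary), \(D_{\text{mult}} = 10\), which far outstrips the benefit of 1 that the beneficiary actually received.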

The problem with the idea that effort is to be multiplied by contribution comes out most clearly in cases where contribution is high but effort, in the sense of negative aspects of the activity, is zero. On the view we are discussing, effortless contributions cannot generate any economic desert. That is ridiculous. Think of anyone for whom their “work” is their greatest passion and enjoyment. I don’t know who the highest paid footballer is at the moment. But, whoever he is, he might so love the game that he would pay to play. And yet he is so good, brings in so many fans, gets his team on TV so often, that teams aggressively bid against one another to pay him. He does deserve some economic rewards, even if playing is for him entirely effortless.

The same is going to be true, albeit with a few zeros knocked off the end of the salary figure, for a number of other sports stars, and for at least some painters, computer geeks, novelists, and even philosophers. Think of two eminently successful philosophers. Suppose one of them enjoys her work, and the other finds it frustrating and tedious. Do you think the one who enjoys it deserves less pay? Do you think her employer would pay her less? My bet is that her employer would pay her more, because enthusiasm is infectious and makes for pleasant company.

An assumption behind those comments is that a free market by and large determines the price of labour and capital on the basis of perceived contribution, not effort. Except where altruism intervenes, the market pays nothing for contribution-less efforts. But it often pays very well for effortless contributions.

Of course negative aspects of work and investment will help determine the payment required to attract labour and capital. In this indirect way, effort will have an impact on what people are willing to accept. But this is quite different from saying that effort must be a factor in determining economic desert.

In referring to a free market, I do not mean to be ignoring its inadequacies. A free market combined with substantial egoism is brutal to those who have too little to sell or trade. And in some other contexts it is completely ineffective in supplying wanted goods and services. Still, in so far as the market enables mutually beneficial exchanges between consenting parties, and in so far as it respects autonomy while efficiently supplying wanted goods and services, the market has profound attractions. In any case, the market seems now the natural model for thinking about economic desert.

Perhaps even more compelling than the influence of contribution on economic desert is the role of freely negotiated contracts. Employment contracts are often sensitive to levels of contribution. This arrangement is normally intended to give people economic incentives to provide more or better goods and services that others want. But, for various good reasons, we might agree to a contract insensitive to productivity and contribution. If for whatever good reasons we have contracts with our employer that pay us the same no matter how big our respective contributions are, it seems mistaken to claim that whoever makes a bigger contribution deserves a bigger economic reward.

No matter what determines economic desert, economic desert is not always the most important consideration. You might deserve some benefits more than I do because you worked and I didn’t. But if my life is endangered if I don’t get some of those benefits, then your desert might be outweighed by my need.

* * * * *

It seems to me plausible that social practices, institutions, and agreements determine economic desert. But are we here referring to actually existing social practices, institutions, and agreements or ideal ones? Economic desert is at least partly determined by actually existing social practices, institutions, and agreements. The same seems true of legal desert. The potential gap between moral desert and actually existing social practices, institutions, and agreements is even greater. Where actually existing social practices, institutions, and agreements are terrible, they do not have much influence on people’s moral desert.

That was a point about actually existing social practices, institutions, and agreements, not about ideal ones. What is the relation of ideal rules, practices, and institutions to desert? Some philosophers think that moral desert should come into our moral thinking at the deepest level, that is, at a level prior to selection of ideal moral rules, social practices, and institutions. They would say that no system of rules and practices can be justified unless it respects certain principles of moral desert (e.g., that the virtuous should be rewarded in proportion to their virtue and the vicious punished in proportion to their vice).

Other philosophers think that the appropriate moral first principle — that is, the appropriate principle for selecting ideal moral rules, social practices, and institutions — does not mention desert. Among philosophers who think the first principle does not mention desert, some think that rules, practices, and institutions should be selected on the basis that everyone has sufficient reason to accept them. Others think rules, practices, and institutions should be selected on the basis that their acceptance would maximize overall social benefit. Still others have other non-desert first principles. All these philosophers who think the right first principle does not mention desert nevertheless agree that the rules, practices, and institutions selected by the appropriate first principle will then specify what is to be rewarded and what punished. Even philosophers who think desert does not appear at the foundational level of moral theory will accept that it has a very important role at a derived level.

Which side wins the debate about how foundational desert is? If there is a moral theory that has entirely plausible implications (including entirely plausible implications about desert) but does not incorporate theses about desert into its first principle, then this theory is best. Such a theory would be best because it would have entirely plausible implications but fewer assumptions. It is an open question whether there is such a theory. If there is not, desert appears at the foundational level of morality.

Further reading
Joel Feinberg, Doing and Deserving (Princeton University Press)
Louis Pojman and Owen McLeod (eds), What Do We Deserve? (Oxford University Press)

Brad Hooker is a professor in the Department of Philosophy at the University of Reading. His Ideal Code, Real World was published by Oxford University Press in 2000.

Teaching jurisprudence in Namibia

Mark Hannam and Jonathan Wolff answer a surprising call to help facilitate the understanding of legal positivism in Africa

Windhoek, Namibia (cc) Clyde Robinson

In setting out the development priorities for Southern Africa, it seems unlikely that many would place “helping facilitate the understanding of legal positivism” very high on the agenda. Yet last year we were invited to Namibia for exactly this purpose. Of course, in our own cases it might (rightly) be thought that we don’t have all that much else to offer. Still, it was an extraordinary invitation, and thus one we could not turn down. Furthermore, we came to see that discussions in jurisprudence – the philosophy of law – are relevant to wider questions about social and political development in Africa, just as they are elsewhere.

Many law departments encourage their students to consider the question “What is law?” by making jurisprudence a compulsory course for their undergraduates. The University of Namibia (UNAM) takes this approach, and in August 2008 we travelled to Windhoek to run a two-day conference on jurisprudence, hosted by Manfred Hinz, for UNAM’s fourth-year law students.

Teaching jurisprudence is a paradoxical activity: despite being a highly specialised field that looks at a very specific question – namely, what is the nature of law? – the subject cannot be properly studied without a good understanding of a wide range of ideas from philosophy and other disciplines. For example, natural law theories appeal to a set of ideas drawn from metaphysics, theology, science and ethics. The sociology of law, our second topic at the conference, demands familiarity with a number of sociological methods and a wide range of historical sources, as well as ideas from political science and international relations. Finally, while the debate between Hart and Dworkin starts with a good dose of analytical philosophy, it soon encompasses a discussion of the methods of literary criticism and other interpretive disciplines.

The breadth of the subject is most likely a consequence of the way in which law and the legal process are now pervasive throughout modern societies. It is perhaps also the reason why an understanding of jurisprudence turns out to be relevant to questions of social and political development. Two examples that stood out during our discussions with the students are these: first, whether there is such a thing as an African natural law tradition and therefore a distinctive African jurisprudence; second, how to judge between the competing claims of customary law and constitutional law.

In Europe the natural law tradition starts with the ancient Greeks, who noticed that, while the laws and customs of the various Greek city-states differed, there were also many underlying consistencies. From this observation they concluded that some laws were natural, meaning that they did not change from people to people, or from place to place. This idea was polished by Cicero and then passed on to the Christian theologians, reaching its fullest expression in the work of Aquinas: when God created nature he also created the rules or laws by which it is governed. By the use of our powers of reason we can learn these natural laws, which we can then use as the basis of our legal systems.

The Dutch philosopher Hugo Grotius argued that even if we reject the theological basis for natural law, these laws themselves were sufficiently evident to reason for us to feel obligated to follow them. This step allowed natural law theory to survive the Enlightenment’s assault on religious belief, and to continue to flourish up to our day as a secular theory of law and obligation.

Is there an African tradition of natural law and if so how does it compare with the European tradition? In part this is an empirical question: what evidence is there among the various tribes and peoples of Africa that, despite differences in language, custom and religious beliefs, there are identifiable similarities in their basic legal precepts? This is not an easy question to answer since there is a widespread lack of documentation of the customary laws of African tribes, and it is not clear how much reliance we can place on oral traditions in determining the beliefs and practices of these tribes in centuries past.

Even if we could establish sufficient clarity with regard to the customary laws of a reasonable range of different tribes and peoples, we would still need to determine what counts as evidence for shared legal precepts. So, for example, if one tribe recognises matrilineal succession and another tribe recognises patrilineal succession, do we judge that these two tribes are radically different in their view as to how succession should be managed through the generations? Or do we judge that they both share a strong belief in the importance of succession as a principle of social organisation, but a minor variation exists in the form that this succession takes?

Let us assume, for the sake of argument, that we can identify some legal precepts that are widespread, perhaps even ubiquitous in African customary law. Does this establish the case for the existence of natural law in Africa? Some African philosophers have argued for a distinctive African jurisprudence, different from that which has evolved in Europe and North America, claiming that African jurisprudence is better suited to the needs of contemporary African societies. But the desire to show that natural law exists in Africa and the desire to establish a distinctive African jurisprudence run counter to each other: the former asserts the universality of legal precepts while the latter asserts their particularity.

If we thought that European natural law and African natural law were radically different, that would imply that Europe and Africa are not just different places (which they clearly are) but also different types of place, which is much less obvious and much more controversial. It makes no sense to argue in favour of the idea of natural law and then to limit its scope to only one part of nature. It follows from this that while a study of African customary law should help to amplify and refine the contents of natural law, it is unlikely to establish a separate body of law unique to Africa.

In Namibia, as in many other parts of Africa, customary law continues to play an important role for ordinary people, by setting the framework of behaviour that the law expects of them and, in return, what protections they can expect from the law. This role is today increasingly under challenge from the growing importance of constitutional law. The contrasts between customary law and constitutional law are clear. Customary law is ancient, it has been established by habit and shared practice, it is speedy and accessible, and it is able to exert its influence from the bottom up through the network of households that make up each tribal grouping. By contrast, the constitution is recent, has been established by legislative process in national parliaments, is often slow and hard to follow, and is imposed top-down through the judicial system. Unsurprisingly there are many cases where customary and constitutional law come into conflict, two of which we discussed at the conference.

The first example concerns an individual’s right to gather fallen wood (an issue that Karl Marx was much exercised about in Germany in 1842). Customary law protects living trees from being felled because they are crucial to the maintenance of the character and quality of the pastoral farmland. However, dead wood is available for anyone to collect and use as firewood. Fallen wood is common property and can be gathered by anyone. Under constitutional law, wood within enclosed farmland is recognised as the property of the landowner, whether live or dead. It is privately owned even as it falls to the ground. Since (in Namibia) constitutional law trumps customary law, so too the rights of the landowner now trump the rights of the wood gatherer.

Our second example concerns polygamy (something Marx appears to have been less exercised by, although Friedrich Engels discusses it in The Origin of the Family, Private Property and the State). African customary law frequently allows for polygamy – normally one man with multiple wives – and there are economic and social reasons that explain why this tradition has persisted up to the present day. In some African countries legal protections have been introduced to regularise polygamy, for example by giving all wives in one marriage defined property rights. However, human rights groups have argued that in Namibia the practice of polygamy often provides cover for the forced marriage of teenage girls in direct contravention of their constitutional rights.

A theory of jurisprudence that is relevant for Namibia today must be able to provide a credible solution to the competing claims of constitutional and customary law in cases such as these. In a community where customary law is strong there is likely to be resistance to the introduction of constitutional law if it disrupts traditional patterns of property ownership, resource use, and family life, since these are fundamental to the way of life of the community. However, some community members will welcome the introduction of constitutional law if it affords them and their property greater legal protection from the interference of others.

There are plenty of Namibian lawyers who are already working on answers to these questions. Soon they will be joined by some of the students from our conference as they leave UNAM for careers in the legal profession. Judging from the enthusiasm, the critical skills and the inventiveness that they brought to our discussions, their answers will be worth listening to. Meanwhile we will be back in Windhoek next summer, teaching another group of law students, but this time also helping to facilitate another conference that will address the wider development priorities for Southern Africa over the next ten years.

Mark Hannam is an honorary research fellow at the Institute of Philosophy, University of London. Jonathan Wolff is professor and head of the department of philosophy at University College London

Truth or dare

Simon Critchley tells Julian Baggini about philosophy without fear

Judging philosophers by their book covers is a perilous business, especially in these days of absurd blurb-inflation, which means that even the worst book comes plastered with some sort of glowing quote. But the especially impressive endorsements on Simon Critchley’s latest [as of September 2007] book, Infinitely Demanding, are signs that Critchley currently commands the admiration of his most esteemed peers. Alain Badiou assures us “Reading and discussing this essay is utterly essential,” an adjective also employed by Ernesto Laclau. Cornel West says he is “the most powerful and provocative philosopher now writing about the complex relations of ethical subjectivity and reinvigorated democracy.” Slavoj Žižek also offers warm praise, but ever since he told me that he provided some puff for Hardt and Negri’s Empire without having read it, I’ve taken such effusiveness with a pinch of salt.

With eminent friends like these, you inevitably end up with enemies too. Brian Leiter counts Critchley among the “philosophical used car salesmen”. But exposing himself to such criticism is part and parcel of Critchley’s whole approach to philosophy.

“I remember when I was teaching at the University of Sydney in 2000, there was a story I was told about someone who’d been offered a job,” the New York-based Critchley told me while on a trip back to his native England. “He got this job on the basis of articles which had been accepted by prestigious mainstream journals, and then withdrew those articles. When he was asked why he did that, he responded that you have to make yourself as small a target as possible. Now there’s a strand of philosophy just about making yourself as small a target as possible, and I think that’s a philosophy which is dominated by a fear of falsehood.

“There’s nothing virtuous about falseness, but there’s another way of doing philosophy, which would be to say that philosophy is about the pursuit of truth. And the pursuit of truth is about looking at the whole picture, with as much information as you can possibly find. That means you’re going to make some mistakes and be found wanting in all sorts of areas. The consequence of the fear of falseness is that philosophy ends up as a narrow, inward-looking, professional discipline. And that just doesn’t really interest me. I’ve tried at times to engage with philosophy at that level and in that way but it’s not why I do this. I want philosophy that produces all sorts of daring hypotheses and which says something specific but general about what it means to be human.”

Infinitely Demanding is certainly not lacking in such ambition. It combines, among other things, a diagnosis of the times, a fundamental theory of the structure of morality, and a political prescription.

“To me, the framing philosophical problem of the modern period is the problem of nihilism,” says Critchley. “By nihilism, I understand the way Nietzsche formulates it, that the highest values have become devalued. The meaning of history is the process whereby we realise that the idols that we’ve set up to worship are ones that we ourselves have created and thereby we depose them.

“Then that leaves us with the huge question as to what is the basis for meaning. The position that Nietzsche calls nihilism is the affirmation of the meaninglessness of existence. Nietzsche sees the philosophical task, and I agree with him, as how we overcome or resist nihilism.

“Now for me in the contemporary world, there are two dominant forms of nihilism, what I call passive nihilism and active nihilism.

“Passive nihilism is roughly the idea that the world is a chaotic, disorderly place that’s blowing itself to pieces. What we do in the face of that is to withdraw, make ourselves into an island, make ourselves as peaceful as possible and try and shut our eyes to the reality. I see that passive nihilism as being a completely legitimate response to the world.

“The active nihilist looks at the world, finds it chaotic and meaningless, and decides to destroy it. My pathology of the active nihilist is to look at the traditions of violent, revolutionary vanguardism in political groups like Lenin’s bolshevism, which is all about the overcoming of nihilism as a construction of the new man; or Marinetti’s Futurism, where the processes of war and violence and technology can be used to construct the new situation. So there’s this alternative response to the meaninglessness of the world, which is to try and actively destroy it and bring another world into being.

“That active nihilism strand has its most powerful representation in groups like Al-Qaeda. I think a huge misunderstanding of groups like Al-Qaeda and of Jihadism or Islamic fundamentalism is to see it as some sort of other to western civilisation and thereby to construct some sort of clash of civilisations. Jihadism has, in perfect continuity with a strand of western thought, this violent, revolutionary, vanguardist tradition.”

Although, like all quotes in this interview, that is but an edited version of Critchley’s own oral précis of his book, it does give you a sense of how far he is happy to stick his neck out. But how can he even think of attempting such a sweeping Zeitdiagnose without years of hard social scientific research to back it up?

“Just sheer chutzpah, I think. The tradition of philosophy that I still identify with is one that would see the philosophical task as having two faces. The philosophical task is to look at the state of the world, using all the available data, using whatever social scientific research or other research that would be available, but also just making hunches and guesses. Hegel came to his picture of the enlightenment based on a reading of Diderot’s Rameau’s Nephew, which is just a book. To that extent, I wouldn’t see a radical division of labour between philosophy and forms of social research. For me, a big chunk of philosophy is critical social research.

“The other side of philosophy is coming up with some picture of how things might be changed, how things might be looked at in another way, or at its crudest, imagining that another world is possible. I think if you look back at the philosophy of antiquity, right back to Heraclitus and Pythagoreans and then through to Plato, it’s both things.”

Critchley’s talk of hunches and guesses doesn’t do justice to the very careful, analytic nature of much of his book. This is particularly evident in his writing about moral motivation, which combines a tightly-argued theoretical position and more speculative stabs at capturing the zeitgeist.

“My metaethical claim is that at the core of ethics are the two concepts of demands and approval. I basically try and show that in every moral theory and indeed every major philosopher, there’s something doing the work of a demand at the core of their work. In Plato it’s the demand of the Good; in Paul and Augustine it’s the demand of the resurrected Christ; in Kant it’s the demand of the moral law; in utilitarianism it’s the demand of happiness; and so on and so forth. An ethical subject for me takes shape around a demand that’s approved of.”

For Critchley, however, there is a paradox at the heart of this, one which gives the book its title. For the moral demands that fall upon us can never be met, since they are without limit. So when I ask him how he responds to the old claim that “ought implies can” – that one can only be morally obliged to do what one can actually do – he happily retorts, “I think ought implies cannot.”

Critchley is not embarrassed by the appearance of paradox in his position. “I think philosophy can be about the production of paradoxes. Carneades, head of the later Platonic academy, went to Rome, and gave two public lectures, one arguing in favour of justice, one arguing against justice. I think that is a quintessentially philosophical thing to do. Philosophy should be about the cultivation of certain forms of paradoxes in the face of what passes as common sense.”

But why prefer a self-defeating formulation? Why choose a view which is paradoxical over one which is not?

“It’s more stringent, and I think it’s truer to the ethical demands to which we should submit ourselves. I think if ethics is based on a feeling of being able or having the capacity to do something, or being satisfied by one’s actions, then one is lost. My position then gets very close to Christian ethics. This is something that at one level repels me and another level interests me. In the Sermon on the Mount, Christ says that you have heard it told that you should love your neighbour, but I say unto you, you should love your enemies as well as your neighbours, you should love those who despise you, you should love those who persecute and hate you. If you don’t do that, you’ll end up like what he calls the publicans, the people that go along with Roman authority, the people that agree with the status quo. And then he ends this passage in this sermon by saying ‘be ye perfect as God who made thee,’ something like that. So Christ’s ethical demand is to be God-like.

“Now it’s a completely paradoxical and ridiculous thing to say, because you can’t ask human beings to be God-like. Then the question arises, is the person who’s making that claim God? For a Christian, he is God, to me he’s just this rabbi on a mount in Palestine making this extraordinary demand that involves one in a deeply paradoxical position.”

Religion turns out to play an important role in this atheist philosopher’s thinking about what he calls the “motivational deficit in contemporary liberal democratic societies.”

“You can see this exemplified in phenomena like the decline of interest in political institutions – the democratic deficit, as it’s called – and the decline of traditional forms of activism. People are demotivated, and that leads on the one hand to passive nihilism, a rejection of the world, and on the other to active nihilism. This motivational deficit in western liberal democracy is a failure of secularism. Secularism just does not have the wherewithal to motivate citizens unless it’s against some theological threat.

“I’m not a theist, but I’ve always been very interested in Jewish and Christian theology, and it seems to me that Christian theology particularly has something deep to say about what motivates human beings to act on the good or to fail to act on the good. There are descriptive or conceptual resources in the religious tradition which can help us think through the failures of secularism.”

If the quasi-theological idea of the infinite demand of morality sounds somewhat removed from real life, Critchley brings his thesis right back down to earth with his prescription of what a politics of the future should look like.

“I’m arguing for an ethical anarchism. I think anarchism is a belief that human beings can co-operate with each other freely and equally without the intervention of state, law and government. So anarchism is order: the A is always within the O of the symbol. That’s the classical 19th century vision. You find it as far back as Godwin, and you find it in Bakunin and in Kropotkin. The belief there was that if you took the shackles of the state, the law and government off, human beings would co-operate freely and equally with each other and be happy.

“What interests me about anarchism firstly is that it’s a political philosophy whose core is ethical, rather than a political philosophy whose core is analysis of capital and revolutionary strategy, like say in Lenin.”

If you think putting money on neo-anarchism to emerge as the dominant form of politics in the near future wouldn’t be a sound investment, Critchley would agree.

“If you were going to bet on any option in the future as being what will determine the shape of the future, I think it’s military neo-liberalism,” by which he means “neo-liberal economic policy, justified by a discourse of human rights and a theology of freedom, backed up with military force.”

However, the only other option currently on the table is, Critchley believes, “what I call neo-Leninism. Neo-Leninism is very much like how I described active nihilism before. Given that the world is dominated by this combination of economic neo-liberalism, a western dominated model of globalisation, and a secular model of rights and military force, you have something like Jihadist revolutionary Islam as a response to that. And I see that as a form of violent confrontation with that violence.”

Faced with the choice between military neo-liberalism and neo-Leninism, you can see why neo-anarchism looks like an attractive option. “Neo-anarchism is an attempt to think about the nature and tactics of political resistance over the last 10-15 years in a way that burst into immediate visibility with the Seattle events in late 1999. The Seattle protests, as other people have shown, really find their motivation in movements like the Zapatistas, and the Landless Movement in Brazil, which offer a new form of political organisation and political resistance. I see that as an essentially anarchist strategy, because what you’re seeing is just the combination of groups with radically different demands being formed into a common bloc by having a common element. And what is driving that protest is not some common ideological world view like Marxism, it’s simply an ethical concern, a sense that there are grievances or wrongs that need to be addressed.”

What makes this different from traditional anarchism is that it is “not about the construction of society without a state, which always was the classical anarchist dream. It’s about the construction of, or the articulation of, a distance from state authority, what I call in the book interstitial distance: the idea that a political protest, a political movement or a labour movement in a specific part of the world might create a new space where human beings can autonomously co-operate in a way that’s free from outside intervention. So the best hope, I think, for politics at this point is the creation of a distance from the state.”

The second major innovation of neo-anarchism is that “it is an anarchism of responsibility rather than an anarchism of freedom.”

“Anarchism classically and right through to the 1960s was concerned with freedom. It was deeply libertarian. In particular, it was about sexual liberation, for example in Marcuse. I think the forms of protest we’ve seen in the last 10 to 15 years have been very different. It’s an anarchism which is concerned with the fact that another’s freedom is not being respected and we need to do something about it, or it’s about responsibility for certain wrongs that multi-national capitalism might be doing.”

Although neo-anarchism can sound exciting, on closer inspection it is not as radical as it might at first seem, for by seeking a distance from the state it does not seek radically to change the political structures of the world – a fact Critchley is resigned to accepting.

“I don’t think that, at this point, capitalism can be overcome. I think capitalism is a permanent feature of the socio-economic landscape – there we are. That means one has to work with that. I think the idea of revolution as an overthrow of capitalism was dependent on the possibility of a very specific epoch when there was something like a Bolshevik party, which claimed it could speak for the proletariat, and therefore for humanity as such, and construct an entirely new economic system. I think that is not possible. We are stuck with capitalism. But that doesn’t mean that we’re stuck with increasingly oligarchic, expropriating capitalism that leads to forms of disgusting inequality. It means that we have to re-think what the political objectives are.

“The way I have always seen socialism, it is the perfection of capitalism, which exchanges the impoverishment of the many for forms of co-operation. So if we look at, say, classical social democracy, or the Scandinavian model, that doesn’t mean rejecting structures of trade or the market; it means redistributing those things more equitably. And it seems that the situation that we’re in, in the west, and in particular in Britain and the United States, is a riotous celebration of inequality, and the belief is that anything that’s done to criticise that is going to be anti-free trade and will lead to all these billionaires leaving the country. So the choice is not between things as they are now and some sort of revolutionary overthrow of capitalism. It’s a choice between things as they are now and some sort of much more equitable co-operative form of society.”

Critchley’s own immediate future looks brighter than that of the world he described. Infinitely Demanding is in many ways the culmination of his philosophical work to date, the one he says he would save from a metaphorical fire in the library of humankind. In 2008, he is also publishing his most commercial work yet, The Book of Dead Philosophers. You may not buy a used car from this man, but more people than ever are going to be acquiring and exchanging his ideas.

Simon Critchley’s home page

Julian Baggini is editor of tpm.

Debate: academic freedom

Steve Fuller and Alan Haworth debate the merits of a ‘Statement of Academic Freedom’

Dear Steve,

It is easy to empathise with your hostility to the attempts of politicians, bureaucrats, and the like, to interfere with academic freedom. Nevertheless, I am reluctant to sign up to the AFAF statement. I think there can be no doubt that we are living through a period during which freedom of thought and opinion are under threat from many quarters. However, it seems to me that any argument genuinely capable of meeting those threats at the intellectual level needs to be far more considered than the case you present.

Statement of Academic Freedom by Academics for Academic Freedom (www.afaf.org.uk)

‘We, the undersigned, believe the following two principles to be the foundation of academic freedom:

(1) that academics, both inside and outside the classroom, have unrestricted liberty to question and test received wisdom and to put forward controversial and unpopular opinions, whether or not these are deemed offensive, and

(2) that academic institutions have no right to curb the exercise of this freedom by members of their staff, or to use it as grounds for disciplinary action or dismissal.’

By way of illustration, take that piece of nonsense about ‘offensiveness’. One frequently hears it reiterated these days that ‘there is no such thing as a right not to be offended’, but the claim is clearly false. It is obvious that I would be doing something highly offensive if I were to approach some person in the street and say – e.g. – ‘It is my opinion that you and people like you are imbeciles with low standards of personal hygiene’, and it is equally obvious that, in ordinary morality, there is a strong presumption against such behaviour. Moreover, it would make no difference if the offensive remark were made by an academic; if I had said, for example, ‘I am a professor, and it is my opinion that you and people like you, … etcetera’. Neither would it matter whether I was inside or outside a classroom. Therefore, your first principle, according to which academics should have unrestricted liberty to express unpopular opinions ‘whether or not these are deemed offensive’ is, to say the least, questionable, and I don’t think you help your case by denying the obvious.

Now, I don’t mean to suggest that offensiveness is never permissible, only that we need to think much more carefully about the question of when the prima facie injunction against offensiveness can be overridden. In the case of academic freedom, that means paying close attention to the question of when, and why, the academic’s commitment to the pursuit of truth and understanding renders the injunction void. The simple argument that academics should have carte blanche, just because they are academics, is inadequate.

I think it’s worth adding here that, in many of the cases which raise issues of academic freedom, ‘offensiveness’ is hardly the point. Take the example of Holocaust denial. As I see it, Holocaust denial is an exercise in bullying anti-semitism; a way of saying, in so many words, ‘You Jews – always whingeing about nothing!’ You only have to consider the nexus of historical events and contemporary power relations which set the context within which the exercise takes place to appreciate that ‘offending people’ is far too feeble an expression with which to describe it. True, it sometimes happens that the spurious fabrications of Holocaust deniers are dressed up as serious historical scholarship, but it is not at all clear why this should legitimise them. And I am sure that parallel considerations apply to many other activities.

Yours,
Alan Haworth

Dear Alan,

It strikes me that you start with the assumption that free speech is something already out there that needs to be curtailed under certain sensitive circumstances. This is a peculiarly liberal way of thinking about the issue that we take for granted in the English-speaking world. ‘Academic freedom’, after all, was a 19th century German invention. The background political presumption was much more authoritarian – namely, no one has a right to free speech unless it is delegated, which in turn requires legislation and hence a clear sense of rights and obligations.

With that in mind, I read the AFAF statement as asserting a guild right for academics, which implies a corresponding set of obligations that academics have to those whom they might offend. It is not saying that academics can say whatever they want simply because they are academics. As with all guild rights, the issue turns on the use of the tools of the trade, and here the phrase ‘question and test’ in the AFAF statement is crucial to the scope of the freedom being defended.

I have no problem with academics arguing – either in the classroom or on television – that the Holocaust never took place, that Blacks are intellectually inferior to Whites, or that thermodynamics renders evolution impossible. However, they are then obliged to provide arguments that can be subject to critical scrutiny at the same level of publicity. They cannot get away with saying that it is just their opinion or an article of their faith, full stop. In fact, very few controversial academics are so reticent with their reasons. But those who refuse to offer reasons debase the currency of academic life – even, I might add, when they assert quite inoffensive positions.

No doubt academics, like everyone else, hold views they cannot defend with the tools of their trade. In that case, the terms of academic freedom require that they keep their mouths shut. However, this policy is seriously compromised by a climate of political correctness, partly influenced by increased university auditing. Academics might nowadays be reluctant to mobilize the intellectual resources needed (e.g. by applying for grants) to give their more outlandish views a fair public hearing because of the censure that voicing such opinions would bring down on them.

As for the more fearless academics who publicly defend offensive positions, at the very least they force opponents to state the precise grounds on which they take offence, which is never a bad thing in a society that fancies itself rational. That the repeated airing of offensive positions might give solace to undesirable political factions strikes me as a fair risk for an enlightened society to take. Once again, if the words of a controversial academic are touted as supporting such a faction, the academic is obliged to state where he or she stands on the matter. It is not sufficient simply to say one’s words are being opportunistically used. This point goes to the guild element of protecting the tools of intellectual trade.

Yours,
Steve Fuller

Dear Steve,

As you raise the subject of free speech, let me begin by stating my position on that. I hold that there is a presumption in favour of everyone’s being free to say whatever they like on any occasion whatsoever. That makes me a liberal. However, it is, I think, essential to answer the question: what justifies the restriction of that liberty? It’s perfectly obvious that something must. Otherwise, I would be entitled to stand outside your house with a megaphone at three o’clock in the morning, keeping you awake with readings from my favourite political manifesto. Or, to borrow a frequently cited example, I would be entitled to shout ‘Fire!’ in a crowded theatre, simply for the sake of causing mayhem. By the way, this is not at all equivalent to the view you attribute to me; the view according to which free speech is somehow ‘out there’ (it isn’t) but that there are ‘certain sensitive circumstances’ in which it ought to be restricted. On the contrary, I am simply pointing out that any serious philosophical account of free speech must define the specific category, or categories, of acts it deems worthy of protection, and explain why they are.

The same goes for academic freedom, so let me now turn to that. You and I are clearly agreed that academics, insofar as they aspire to be good intellectuals, must concern themselves with the pursuit of knowledge and understanding, and that this requires the protection of conditions within which even the most wild and controversial theses can be tested against standards of reason and evidence. To borrow J.S. Mill’s expression, there must be full ‘liberty of thought and discussion’ for those engaged in the pursuit of truth. This being so, however, it follows that ‘academic freedom’ is a confusing expression for what is at issue between us. I take it that ‘academics’ are, by definition, individuals who work in universities or similar institutions, but the freedom really at issue here is intellectual freedom, and it is just not true that all intellectuals are academics. Of course intellectuals should be free to argue and debate, but, so far as I can see, there is no argument for absolving academics of normal responsibilities and treating them as a privileged cadre with distinct ‘guild rights’ of their own.

Moreover, I think it’s important for academics, in their capacity as intellectuals, to recognise that when they move from the relative privacy of the seminar room to the public realm, they go from a place in which seekers after truth confront each other as equals in rational debate (or try to), to a world fraught with dark moral complexity and ambivalence. Finally, then, you say that you ‘have no problem’ with academics arguing, in public, ‘that the Holocaust never took place, that Blacks are intellectually inferior to Whites, or that thermodynamics renders evolution impossible’, so let me ask you this. Would you be prepared to debate, in public, the proposition that Elvis Presley is alive and well and living in Moscow? I can’t believe that you would, because I can’t believe that you would be prepared to go through the pretence of treating it as the legitimate ‘other side’ in a serious debate. So, why adopt a different attitude to, e.g., ‘The Holocaust never took place’? (By the way, I’ve borrowed this example from Deborah Lipstadt’s impressive study, Denying the Holocaust.) For all their faults, academics do have a certain standing in the public eye, and they should remember that the opportunity to strike an ‘academic’ posture can serve duplicitous purposes. Do you really want to say that fascists, racists, and the like are just harmless wishful-thinkers, like those sad Elvis fans?

Yours,
Alan

Dear Alan,

I am sorry you find the concept of academic freedom confusing. However, the fact that intellectual freedom is conceptually broader – perhaps even vaguer – than academic freedom does not mean that academic freedom presupposes a more general concept of intellectual freedom. On the contrary, the distinctiveness of intellectual freedom is based on extending the conditions of academic freedom to society at large. The main problem with your example of arbitrarily shouting ‘Fire!’ in a crowded theatre is that while it involves speech there is nothing especially intellectual about it: the problem it raises is simply that of licence in liberal societies, solutions to which depend on how much a society can tolerate and who is authorized to judge.

However, to exercise intellectual freedom is to enable our ideas to die in our stead, to recall Karl Popper’s neat phrase. I have called it ‘the right to be wrong’, the ability to assert now without compromising one’s ability to assert in the future, even if one’s assertions are shown to be false. Intellectual freedom in this sense presupposes an institutionalised dualism, such that, literally, you need not put your money where your mouth is: ‘speculation’ in the intellectual and financial senses is kept apart. A true believer in intellectual freedom would thus wish for an environment in which one can commit what statisticians call Type I errors with impunity – that is to say, err on the side of boldness (‘false positives’).

The modern model for this environment is academic tenure, which was originally introduced to simulate the property ownership requirement for citizenship in ancient Athens. This historical link was forged by the founder of the modern university, Wilhelm von Humboldt, to whom Mill’s On Liberty is dedicated. On the one hand, an Athenian citizen who was voted down in the public forum could return to his estate without concern for his material security; on the other, his economic significance for the city obliged him to offer opinions in the forum at the next opportunity. Citizens who refrained from self-expression were routinely ridiculed as cowards.

Correspondingly, if academic tenure were policed more rigorously for its entailed obligations, then the conditions surrounding its current erosion would not be tolerated. To the increasing number of academics who know only the current neo-liberal knowledge production regime, tenure looks like an excuse never to stray from one’s intellectual comfort zone. But even if many – if not most – tenured academics conform to that stereotype, such conformity is entirely against the spirit of tenure and indeed arguably merits censure.

At the same time, a much more charitable view should be taken towards tenured academics deemed ‘publicity seekers’ who self-consciously – yet often sincerely – advance outrageous views in the public forum. These people routinely expose themselves to criticism, in response to which the life of the mind is performed for society at large. Whether they ultimately win or lose these struggles is less important than the occasion they provide for thinking aloud, a process to which others may subsequently contribute, the result of which raises the overall level of social intelligence.

The sort of people I have in mind – say, Alan Dershowitz, Bjørn Lomborg, Richard Dawkins – most genuinely embody the spirit of intellectual responsibility. And, yes, I would add to this list the Holocaust revisionists, the eugenicists, the racists, the creationists and academia’s other despised elements. If you genuinely believe that society needs to be protected from the views of these people, then it has not earned the right to intellectual freedom.

Yours,
Steve

Dear Steve,

This is my last letter, so let me take the opportunity to summarise my position. The main points are these.

First: There is an ‘Elvis test’ for opinions. I see that you didn’t answer the question I put to you last time, so let me raise it again. Would you be prepared to debate, in public, the proposition that Elvis Presley is alive and well and living in Moscow? Actually, I doubt that you would – except on Red Nose Day, perhaps. Let me vary the example. Suppose that you are a member of a committee charged with funding research projects. There is a proposal to investigate the hypothesis that Elvis is alive and living in Moscow. The funds will enable a researcher to fly there and, if possible, conduct a full-length interview with Elvis. Would you support the project on the grounds that every hypothesis, however wild, should be given an equal hearing?

Should you answer in the affirmative, I’m afraid I would find it impossible to take you seriously, and so, I believe, would most readers. Such readers might well be prompted to reflect that the pursuit of knowledge and understanding through debate cannot be a rough-and-tumble affair. On the contrary, it requires the recognition, implicit or explicit, of certain conventions, including the recognition that not all opinions are equally contestable. Both Holocaust denial and Elvis assertion fail on this score, as do creationism and attempts to draw significant connections between race and intelligence. It follows that protection of a ‘right’ to advance such theses is not the protection of intellectual freedom, but misguided collusion with the opportunistic and the deranged in the pretence that their activities are intellectually ‘respectable’.

Second: I wish you would cease insinuating that my reluctance to sign up to the AFAF statement stems from a wish to protect sensibilities. This time, you have saddled me with a desire to ‘protect society’ from the views of racists, creationists, and others. I would certainly be an easy target if I held such an exaggerated view of the power academics wield, but I don’t. I know very well that racists and religious bigots will continue to express their opinions in public whatever academics do. We now inhabit a ‘global village’, do we not? In fact, all I am expressing is a disinclination to endorse the activities of Nazis and others with a licence stamped ‘academic freedom’.

Third: It is certainly true that the dog is a four-legged animal, but it would be a simple logical error to conclude from this that all four-legged animals are dogs. Likewise, it is certainly the case that academics (i.e. university teachers and the like) have a responsibility to protect and foster freedom – intellectual freedom, that is – but it doesn’t follow that the responsibility falls solely on the shoulders of academics. It also falls on those of others, including others who work in universities. Moreover, it can be abused by anyone. Let me reiterate a point I made last time, namely that intellectual freedom – that is, freedom to pursue truth and understanding through open debate – is distinct from what you mean by ‘academic freedom’. What you would like is the creation of a caste apart, with special rights and privileges (hence your arcane disquisition on the subject of tenure). Against this, I hold that universities are not private debating clubs, and that administrators are right to worry if racism and creationism are being passed off as serious history and serious science within their institutions.

In conclusion then: The AFAF statement may have a stirring ring to it, but I would recommend circumspection to anyone thinking of signing it. Consider how its principles are, in reality, likely to be interpreted and the causes they could well be used to defend.

Yours,
Alan

Dear Alan,

I passed silently over the ‘Elvis test’ because it is a poor thought experiment with which to define our differences. Like you, I probably would not fund the ‘Living Elvis’ hypothesis. But that would be less because of the hypothesis’s unlikelihood than out of deference to other hypotheses that, even if shown false, would illuminate more of what matters.

You would do better to distinguish the Living Elvis hypothesis from the one you elide it with: Holocaust Denial. This is clearly an issue that does matter, where the hypothesis is very likely false, yet one that I believe should be funded so as to subject its strongest version to critical scrutiny. Like so many hypotheses of this kind, its falseness is most evident when taken as literally as its advocates would have us do. However, the effort we expend to falsify these hypotheses forces us to turn a diagnostic eye on the de facto limits we place on ‘free inquiry’ in the name of ‘political correctness’.

Let’s stick with Holocaust Denial. The ‘six million Jews’ figure was originally advanced as a back-of-an-envelope estimate during the 1946 Nuremberg Trial. Normally a figure constructed under such politicised circumstances would be hotly debated, if not treated with outright scepticism. At the very least, researchers would be expected to raise or lower the figure as they weighed the evidence. Holocaust deniers make much of the fact that, in their own case, these norms seem to be suspended, or at least attenuated. It is important to understand why they may be right on this point, even if their overall case is wrong and perhaps even malicious. It goes to why ‘intellectual freedom’ makes no sense other than as a generalisation of academic freedom.

A society that genuinely enjoyed the freedom we protect in academia would treat your gross phrase ‘the activities of Nazis’ as in need of serious wheat-and-chaff treatment. Ideally it should sound more like ‘the activities of liberals’ or ‘the activities of conservatives’: People would then publicly disaggregate the Nazi activities and judge them on their own terms, questioning whether they need to be bundled with the heinous activities historically associated with them.

We should be able to conclude – without fear or loathing – that Nazi sympathisers, regardless of their ulterior motives, deserve credit for, say, sensitising us to how our desperation for clear moral benchmarks compromises our critical capacities. I do not believe our moral outrage would be diminished, were we to learn that the Nazis exterminated only 6000 rather than 6 million Jews. But perhaps those like Alan who would censure Holocaust denial do not trust the maturity of collective human judgement?

Traditionally children and primitives had to be regaled with exaggerated accounts of unspeakable evil out of fear they would otherwise not do good. The Enlightenment was all about escaping from this state of ‘nonage’, as Kant put it. He wanted people to be legally recognised as adults empowered to discuss and decide matters for themselves through public deliberation. However, Kant’s most politically effective follower, Wilhelm von Humboldt, realized that this Enlightenment ideal required an institutional vehicle through which all of society may be slowly but surely encompassed. With that in mind he invented the modern university.

You complain about the ‘elitism’ of academic freedom but I’m afraid that all universalist projects have been about extending to the many what had been possessed by the few. Of course, as Hegel was especially fond of observing, various things may be lost and gained in the process. But without awareness of this process, it is all too easy to slip into metaphysical appeals to ‘intellectual freedom’ underwritten by chimerical intuitions married to half-baked notions of human nature.

Yours,
Steve

Steve Fuller is professor of sociology at the University of Warwick
Alan Haworth is senior research fellow at the Global Policy Institute

The unnatural selection of consciousness

Ray Tallis argues that there is no evolutionary explanation of consciousness

We have grown accustomed – perhaps too accustomed – to the idea that every characteristic of living creatures has been generated by the operation of natural selection on spontaneous variation; that it is there because it has, or at the very least once had, survival value or was a consequence of other things that had a survival value. Consciousness, even human consciousness, we are told, is no exception to this rule. Biology does not tolerate anything biologically useless and, given that my brain consumes 20% of my energy supply, and quite a lot of this seems to be used by neurones that are supposed to be responsible for keeping me conscious, consciousness must have a use. And it follows from this that all the things that consciousness enables us to get up to – not only fleeing predators whom we are aware of but also creating art or writing books like The Origin of Species – must also be directly or indirectly related to survival – now, or at some time in the past. Whether or not this is true, the ubiquity of “neuro-evolutionary” accounts of everyday human life is a testimony to belief in the power of evolution to explain consciousness.

But how well-founded is this belief? Was it really natural selection that eventually brought into being creatures that could see that they were naturally selected? Was it the blind laws of physics that so organised the matter in us that it could see the laws of physics and that they were blind? If we are going to address these questions properly, we need to start far enough back to see them clearly. We need to ask by what means consciousness could have come into being – if it was not there in the beginning – and what advantages it confers.

The zero point of evolution is a primitive self-replicator, perhaps a silicate, hardly differentiated, though exquisitely structured, like a crystal. A succession of steps over huge stretches of time, and unconsciously guided by natural selection, led to single cell organisms with their nuclei, organelles, membranes and, eventually, one or two bits of kit such as flagella to aid swimming. That was the story of life for 2.5 billion years until the Cambrian explosion 500 million years ago. Then multi-cellular forms arrived; after which came more complex organisms, with distinctive organs and systems, to deal with the business of keeping the organism stable, accessing nutrients, evading predators, and – when sexual reproduction came on the scene to give natural selection more genetic variation to get its teeth into – finding mates.

The confidence that these developments can be explained in Darwinian terms seems increasingly well-founded; so let us set aside the Creationist appeal to “irreducible complexity” as evidence that higher organisms could not have evolved step-by-step, and the related claim that Intelligent Design is required to explain the emergence of exquisite structures such as the eye. But what of the other great story: the emergence of sentience, and of more complex consciousness, and ultimately of self-consciousness? How well does this fit into the Darwinian picture?

Very badly, notwithstanding Richard Dawkins’ claim that “Cumulative selection, once it has begun, seems…powerful enough to make the evolution of intelligence probable, if not inevitable” (The Blind Watchmaker p.146). Consider vision: let us begin with the notional “ur-eye”, the light-sensitive spot on the skin of some ancestral creature. This might confer a tiny survival advantage, perhaps making it easier to avoid predators. And one could see how an ever more complex sensitive surface, wiring the organism into ever more exquisitely discriminated and versatile behaviour, might be explained by natural selection. There are now very good accounts of gradual changes, each conferring an advantage, leading to the emergence of the orbit, the retina, the lens and so on, without appealing to Intelligent Design. And there are plenty of intermediate forms, demonstrating the benefits of having photosensitive structures marking the staging posts to the kind of complex eyes seen in higher organisms. But this story doesn’t address three problems that a satisfactory evolutionary account of consciousness would need to deal with. Consider the emergence of sight from photosensitivity.

Firstly, chemical or electrochemical sensitivity to light is not the same as awareness of light. Secondly, the content of awareness of light – brightness, colour, never mind beauty or meaning – is not to be found in electromagnetic radiation, which is not intrinsically bright, coloured, beautiful or meaningful. These secondary and tertiary qualities are not properties of the physical world or of the energy in question. Thirdly, it is not clear how certain organisations of matter manage to be aware – of impingements of energy, and later of objects, and (in the case of humans) of themselves – when very similar organisations of matter do not have this property. This problem is more evident much further down the evolutionary path, when we look at neurones that are, and those that are not, associated with consciousness in the human brain and see how little distinguishes them. The biological story of the evolution of the eye from single cells to full-blown eyes tells us nothing about the journey from light incident on photosensitive cells, producing a programmed response, to the gaze that looks out and sees, and peers at, and inquires into, a visible world.

There is no reason to assume that photosensitivity brings awareness of light, however cunningly the relevant structure is wired into discriminative behaviour that will promote survival. Computers, after all, do not get any nearer to being conscious as their inputs become more complexly related to their outputs, however many stages and layers of processing intervene between the two. There is nothing, in short, that will explain why matter in a certain form will go “mental”. Or not unless we anticipate and borrow, on account as it were, the notion of an organism that is aware of its environment. We have to be on our guard: this anticipatory borrowing may be implicit even in the conceptual distinction between organism and environment; it slips the notion of viewpoint into a starter pack that consists only of matter. Indeed, the contrast between environment and organism already contains an embryonic hint of the differentiation between a subject and its objects, howsoever this might be concealed by treating organisms as physical systems. Without this fudge, it is difficult to see how energy exchanges between parts of a physical system would count as “inputs” and “outputs”. The fudge conceptually smoothes out the steps towards consciousness and makes the extraordinary claim that when matter assumes certain configurations it acquires mentality seem less extraordinary.

This forestalls our asking an entirely valid question: even if consciousness conferred advantage, how could it become available to genes via the organisms that are the vehicles ensuring their replication? This question arises whether we are considering a single photosensitive cell, or a human eye, or the human being aware of other human beings in a shared world built up out of pooled experience. The explanatory gap – the jump from energy exchanges to awareness – just happens to be more evident in the case of single energy-sensitive cells, which lie at the putative beginning of consciousness, though it is concealed by the assumption that the single cell has only a “teeny weenie” bit of consciousness that can be smuggled into the material world without its laws being bent or broken. But the question remains: How is it that certain configurations of matter should be aware, should suffer, enjoy, fear etc? What is there in matter, such that eventually certain configurations of it (human beings) pool that experience and live in a public world? No answer is forthcoming, which is why many materialistically inclined philosophers like to deny the real existence of consciousness, in particular those basic elements of subjective experience, so-called “qualia”.

Even if we were able to explain how matter in organisms manages to go mental, it is not at all clear what advantage that would confer. Why should consciousness of the material world around their vehicles (the organisms) make certain (material) replicators better able to replicate? Given that, as we noted, qualia do not correspond to anything in the physical world, this seems problematic. There may be ways round this awkward fact but not round the even more awkward fact that, long before self-awareness, memory, foresight and powers of conscious deliberation emerge to give an advantage over those creatures that lack those things, there is a more promising alternative to consciousness at every step of the way: more efficient unconscious mechanisms, which seem equally or more likely to be thrown up by spontaneous variation. Think, after all, what unconscious mechanisms can achieve: the evolution of most of the universe; the processes that are supposed to have created life and conscious organisms; the growth, development and most of the running of even highly conscious organisms such as ourselves. If you had to undertake something really difficult – for example, growing in utero a brain with all its connexions in place – consciousness is the last thing you would want overseeing the task. OK, successful intra-uterine development relies, in the case of higher organisms, on a conscious mother choosing the right mate and getting the right food and so on. But that is to put the cart before the horse. Once you have a species that depends on consciousness, then it is essential for its members to remain conscious. But if we assume the materialist viewpoint and, unlike many evolutionary biologists, adhere to it consistently, and set aside an anthropocentric viewpoint that sees the entire evolutionary process as something that was always leading up to us or creatures like us, it seems highly implausible that, in an unconscious biosphere, consciousness, even if it were on offer, would seem like a good option.

Those who think consciousness confers advantage tend also to believe that it confers even more advantage as it gets more complex. They argue that complex consciousness permits planning, deliberation, the rehearsal of possible courses of action before commitment to one particular course (putting scenarios rather than flesh on the line), and (as David Hodgson brilliantly argued in The Mind Matters) engagement with wholes – with singular combinations that cannot be captured by general laws. Leaving aside the fact that “parts” and “wholes” count as such only in the context of a consciousness that puts them together or pulls them apart, this illustrates a deeper problem, common to many evolutionary apologists for consciousness: that of approaching its origin from the wrong direction – through the lens of existing life, indeed existing conscious life. Looking prospectively from the beginning rather than retrospectively, one could argue that an organism that has to plan, to deliberate, to rehearse possible courses of action, and has to see wholes so as to deal with singulars, in order to survive, is in a mess. Of course, once in the mess, it would be better off with better consciousness – and this applies irrespective of whether we are considering threats and opportunities from the material environment, other species, or competition from conspecifics. Yes, my genes would have a better chance of replicating if I had better memory or more foresight than you. But we need to start further back and ask by what disastrous processes conscious, especially complex conscious, species got into this situation – where there are errors to be avoided or corrected. After all, mechanisms do not make mistakes: they are simply the expression of the unbreakable laws of physics. A deliberating creature that has increased capacity to get things right does so only because it has a propensity to get things wrong. A fully adapted organism would not have to deliberate.

And what is deliberation, planning, anyway? Do organisms really operate on the laws of physics as if from the outside? And if they do not, as the materialist view of life requires us to believe, then there is no way that planning and deliberation – or the illusion of planning or deliberation – could serve a function. No wonder many evolutionary biologists and neuroscientists deny that free will is possible and marginalise the role of consciousness in human life. How, within the materialist world picture, is consciousness able to inflect the laws of physics to make the world a more hospitable place for the organism that is conscious? From the standpoint of a consistent materialism, no organism that was going to make the cut would have to work deliberately with the laws of physics, never mind work against them or trick them into doing its will.

And there is a serious difficulty with the notion of “better and better” consciousness that will compensate for the disadvantage of having to work through consciousness in the first place. We have seen how consciousness is profoundly dissociated from the material world in which organisms are generated and their fate decided; so it is difficult to see how the content of consciousness could get closer to the relevant truths of that material world which for materialists is all that there is. For some writers, such as Paul Churchland (see Matter and Consciousness), the criterion for intelligence – that catch-all term for premier cru consciousness – is that the organism is more closely coupled into its environment. If that were true, then a silicate crystal, so hard-wired into its environment that no wires are required, would be just the thing to be. Of course, intelligence makes us loosely rather than tightly wired – hence the possibility of deliberation between possible courses of action.

If it is difficult (though not impossible) to see how life emerged out of the operation of the laws of physics on lifeless matter, it is even less clear how consciousness emerged or why it should be of benefit to those creatures that have it – or, more precisely, why evolution should have thrown up species with a disabling requirement to be conscious and to do things deliberately and make judgements. Why would life evolve towards such losers, who have to get things right in order to do the right thing by themselves? We humans have of course benefited enormously from being conscious: we dominate the planet. But it is only very recently that our consciousness, and its pooling in an extraordinary shared human world, has significantly increased our traction on the laws that are supposed to have brought us into being – and made up for the disabling burden of consciousness and the requirement to be more conscious to get ahead of the game, including the most intimate competition between replicators: that between members of the same species.

Consciousness makes evolutionary sense only if one does not start far enough back; if, that is to say, one fails to assume a consistent and sincere materialist position, beginning with a world without consciousness, and then considers whether there could be putative biological drivers for organisms to become conscious. This is the only valid starting point for those who look to evolution to explain consciousness, given that the history of matter has overwhelmingly been without conscious life, indeed without history. Once the viewpoint of consistent materialism is assumed, it ceases to be self-evident that it is a good thing to experience what is there, that it will make an organism better able so to position itself in the causal net as to increase the probability of replication of its genomic material. On the contrary, even setting aside the confusional states it is prone to, and the sleep it requires, consciousness seems like the worst possible evolutionary move.

If there isn’t an evolutionary explanation of consciousness, then the world is more interesting than biologists would allow. And it gets even more interesting if we unbundle different modes of consciousness. There are clearly separate problems in trying to explain on the one hand the transition to sentience and on the other the transition from sentience to the propositional awareness of human beings that underpins the public sphere in which they live and have their being, where they consciously utilise the laws of nature, transform their environment into an artefactscape, appeal to norms in a collective that is sustained by deliberate intentions rather than being a lattice of dovetailing automaticities, and write books such as The Origin of Species. Those who are currently advocating evolutionary or neuro-evolutionary explanations of the most complex manifestations of consciousness in human life, preaching neuro-evolutionary aesthetics, law, ethics, economics, history, theology etc, should consider whether the failure to explain any form of consciousness, never mind human consciousness, in evolutionary terms, might not pull the rug from under their fashionable feet.

Raymond Tallis is emeritus professor of geriatric medicine at the University of Manchester, a philosopher, poet, novelist and cultural critic. His many books include The Enduring Significance of Parmenides: Unthinkable Thought (Continuum) and The Kingdom of Infinite Space (Atlantic).

The return of Xenophon

Robin Waterfield argues for the philosophical credentials of a neglected chronicler of Socrates

Xenophon, short man on the right

Xenophon the philosopher? But wasn’t he a historian – the man who wrote the famous and thrilling campaign narrative The Expedition of Cyrus (in Greek, The Anabasis) and who completed Thucydides’ unfinished history of the Peloponnesian War? This is true, but there was a lot more to him besides; he was a restless man, both in his life and in the range of his writings. In his long life (from roughly 430 to shortly after 355 BC), apart from a number of years campaigning abroad as a mercenary commander, he wrote technical treatises on Athenian economics and on the Spartan constitution, on hunting, horse-breeding, cavalry command and estate-management; he wrote a short dialogue on tyranny and a eulogistic biography of King Agesilaus of Sparta; he wrote a long, largely fictional and irrepressibly rose-tinted account of the upbringing of Cyrus the Great, founder of the Persian empire; and, as a disciple of Socrates, he wrote a version of the defence speech Socrates delivered at his trial in 399, and a lively account of a symposium at which Socrates was present, and four volumes of memoirs of Socrates.

Xenophon, however, is the poor relation of classical Athenian philosophy, consistently dismissed to the margins. Although Socrates was his mentor no less than he was Plato’s, Plato’s dialogues are mined for Socratic thought and methods, while Xenophon’s are largely dismissed. Whole books on Socrates are written with scarcely a mention of Xenophon. After Socrates’ death, quite a few of his followers or self-professed followers wrote philosophical works with their master as the protagonist; none of this sub-genre of prose literature has survived, except the Socratic works of Plato and those of Xenophon. When time has been so mean, can we afford to ignore half of our evidence for the western world’s first and greatest philosopher?

This dismissal of Xenophon seems to philosophers to be a matter of necessity, since they read Xenophon as banal and shallow, while Plato’s Socrates is enigmatic, multifaceted and profound, and continues to stimulate and bewilder readers. From a historical perspective, however, the marginalization of Xenophon comes across as sheer prejudice. The prejudice shows clearly in an influential assertion by Bertrand Russell – influential because his A History of Western Philosophy, in which the assertion appears, sold well for decades. Russell described Xenophon as “a military man, not very liberally endowed with brains, and on the whole conventional in his outlook”. Now, historians of philosophy before Russell had been inclined to regard Xenophon as a good source for Socratic thought – as a better source, in fact, than Plato, on the grounds that Plato’s very brilliance made it far more likely that he had his own agenda in writing. So Russell went on: “There has been a tendency to think that everything Xenophon said must be true, because he had not the wits to think of anything untrue. This is a very invalid line of argument. A stupid man’s report of what a clever man says is never accurate, because he unconsciously translates what he hears into something that he can understand. I would rather be reported by my bitterest enemy among philosophers than by a friend innocent of philosophy.”

In actual fact, the only rational conclusion to draw is that neither Plato nor Xenophon can tell us much about the historical Socrates. Neither of them (and the same goes for all the other Socratics whose works are lost) was writing biography, but dramatized fiction, based loosely on their mentor’s practices. At best, what they wrote was what Socrates might have said, had he been involved in such-and-such a conversation with so-and-so on this or that topic. More likely, they simply used Socrates as a mouthpiece for their own ideas. For instance, we happen to know what four Socratics (Plato, Xenophon, Aristippus and Antisthenes) said about the role of pleasure in life. There is hardly any overlap at all: either pleasure is the goal of life (Plato, in Protagoras), or it is to be avoided like the plague (Antisthenes), or only certain pleasures are acceptable (Xenophon, Aristippus). Yet all four claimed to be good Socratics, and in the two certain cases (Plato, Xenophon) they attributed their views to Socrates. Only when the ground has been cleared of prejudice can we read Plato’s works as reflecting Socrates’ influence on the individual Plato, and Xenophon’s works as reflecting the same influence on Xenophon, with his individual interests and personality.

What philosophy, then, do we find in Xenophon? In whatever genre he was writing, a moral vein makes its presence strongly felt. In fact, it is precisely Xenophon’s exclusive focus on ethics that gives his works their somewhat humdrum appearance: he is not interested in (indeed, he has his Socrates dismiss) flights of speculative fancy in science, metaphysics, epistemology, and so on, and he consistently displays Socrates (and his other heroes) in pretty much the same situations – that is, interacting with friends, acquaintances and subordinates, since interpersonal relationships constitute the field of practical ethics. This exclusive focus is the source of the Russellian prejudice.

But though we may not get fireworks, we get the mature thoughts of a man who had reflected on his Socratic heritage. His moral ideal is summed up in the kaloskagathos, a compound word probably invented in the second half of the fifth century by Athenian aristocrats to summarize their own distinctive features: they were “the beautiful and the good”, beautiful for their honed bodies, and good not just morally but also because they were, or were claiming to be, good at administering the city. A superficial reading of Xenophon, then, has his ideal man (along with almost everyone of his time, he largely ignored women) being no more than an Athenian gentleman, a member of the ruling class. This glib dismissal needs to be remedied.

A survey of the many, scattered descriptions Xenophon gives of his heroes readily shows us the kinds of qualities a Xenophontic kaloskagathos should have. He should be well educated; he should have the ability to make other good men his friends and to get on with people; he should be able to function within the Greek norm of aristocratic reciprocity, which is doing good to one’s friends and harming one’s enemies; he should be able to manage his estate and, if necessary, his country; he should have the ability to lead others, to gain their willing obedience by the example of his superiority and by making it plain that he knows how to guide matters for the best, in both military and political circumstances; he should have the traditional virtues, such as wisdom, justice, self-control, courage and piety; and he should have freedom, or self-sufficiency, gained as a result of the ability to control his desires.

Thoughtful reflection on morality with a practical and prudential emphasis: this is Xenophon’s signature. Self-discipline is important not just for itself, but because it enables a person not to be distracted by his appetites from doing his duty. Education is desirable provided it stops short of useless theoretical studies and idle speculation. You do good to your friends so that they stick by you, defend you from your enemies and otherwise repay you. The purposes of the management of an estate are to create wealth and to train a man to administer his country. Hunting too is ideal training for future defenders of the state.

But these external and prudential aims should not distract us from the internal emphasis that underlies them. All his descriptions of the kaloskagathos make it clear that there is one quality above all that is essential, and this is self-discipline. This is the foundation of true goodness and the sine qua non of any other moral virtue. You cannot manage your estate, let alone your country, if you cannot manage yourself; you cannot do good to your friends unless you can restrain your appetites; you cannot be a true leader, in control of others, unless you are in control of yourself. Self-sufficiency (inseparable in Xenophon’s mind from self-discipline, as the product from the cause) is also the foundation of happiness: I am more likely to be happy if I adapt my needs until they are easier to satisfy. This is a truly Socratic conception of how to attain happiness, reflected also (though more dimly) in Plato’s works.

One of the difficulties with appreciating the profundity of this notion is that, once stated, it is strikingly obvious. Of course we would all be happier if we did not succumb to illusory desires, did not want more than we could have. This obviousness disguises the fact that the theory is incredibly difficult to put into practice, and indeed lies at the heart of lifelong practices such as Buddhism. Consider, then, what kind of person Xenophon is portraying Socrates (and, to a lesser extent, his other heroes) as being. He is someone who can consistently live in this adaptive fashion, free from temptation and in full control of his desires, wishes and expectations. It is no wonder that it was Xenophon’s Socrates who became the model sage for the Stoics. To deny that Xenophon is a philosopher is to cast doubt on the philosophical acuity of the Stoics, which no one nowadays would do.

Xenophon may have upheld traditional values, but he did not do so in an unthinking way (and neither did Plato’s Socrates). Xenophon had come to the conclusion that external activity requires certain internal conditions if it is to be genuine morality, rather than merely imitative action. And so the kaloskagathos is self-sufficient – free rather than in servile dependence on others for his livelihood, self-esteem, actions, feelings and opinions. If Xenophon had merely been putting his weight superficially behind the traditional Greek virtues, as countless scholars have accused him of doing, there would have been no need for him to stress self-sufficiency to this extent. Self-sufficiency did not occupy so central a place in the life of a traditional Athenian gentleman, who, if asked whether he was free, would have assumed that the question referred to his social status rather than to any internal state, and who spent much of his life pursuing honour – a goal which, as Aristotle pointed out, depends on others and is therefore far from the ideal of self-sufficiency. Xenophon learnt from Socrates, thought things through by himself, and tweaked the traditional conception of goodness.

Xenophon’s moral concerns to a certain extent informed even his historical works: he wrote what has been called “paradigmatic” or “exemplary” history, focusing especially on the actions of past leaders, who were to stand as paradigms for current and future leaders. He structured his presentation of events and people (and even occasionally suppressed events, or selected among alternative versions) in order to communicate various subtextual messages, such as the inevitable downfall, engineered by the gods, of arrogant leaders. Philosophy for Xenophon, as for many of his contemporaries, was not an academic exercise, but a practical, if arduous, way to try to attain moral virtue as a steady state. Since examples of virtuous people can help an aspirant, even history-writing could serve a moral purpose. Whatever he was writing, Xenophon always had a moral and educational agenda; ancient authors were right to classify him more commonly as a philosopher than as a historian, and modern authors should no longer ignore him as a source for fourth-century philosophical thinking. He is a quiet thinker; he doesn’t trumpet his views, but a great deal of careful thought underpins his work.

Robin Waterfield is the author of Xenophon’s Retreat: Greece, Persia and the End of the Golden Age, published by Faber and Faber.

A presidential anatomy

A round-up of reviews of two arch-provocateurs

Gray’s Anatomy by John Gray (Allen Lane) £20 (hb)
Simon Blackburn writes in the Sunday Times that John Gray’s “fierce tone and unalloyed pessimism provide a kind of unity” to the thirty years’ worth of essays collected in Gray’s Anatomy, adding sardonically that “the warm embrace of his misanthropic book Straw Dogs by many readers might have suggested to Gray that he is not quite as alone as he would have us believe.”

“For someone so contemptuous of reason and its constructions,” Blackburn concludes, “it must have been horrid to spend a life looking at political theories, all of which Gray despises. He does however admire some of the myths of religion. He revels in the idea of original sin, but blames Christianity for inventing the idea of salvation, or, in other words, Progress. Gray could be comfortable only in a religion with no faith, no hope, and no charity.”

Christopher Howse in the Telegraph compares John Gray to Socrates-as-gadfly, but with a difference: “In our own times, John Gray stings our polity on its most sensitive parts, and we simply put him on Start the Week.”

“If Gray belittles political ideologies as religions by another name, his insights into religion remain unsatisfying. He says, for example, that he prefers to ‘see’ rather than to believe. This cannot in practice be true, for to love other people entails believing them and believing in them.”

The novelist John Banville is much more impressed. “It is not too much to say that Gray considers the Enlightenment to have been little short of a catastrophe,” Banville writes in the Guardian, “for it was the philosophers, unconsciously pining for the certainties of the old religion, who instituted the notion of the human adventure as an ever-ascending journey towards perfection and worldly redemption. For Gray, the Enlightenment idea of the soul progressing in tandem with technological advances is pernicious. Progress in science is real – painless dentistry and the flush lavatory, he concedes, are certain goods – but spiritual progress is a myth.”

In The Times, Kenan Malik suggests that “when it comes to Gray’s writing, it is genuinely difficult to separate the serious from the satirical.”

“The essays in Gray’s Anatomy span more than 30 years. The world, and Gray, have changed hugely in that time. What has remained constant is his despair about what it is to be human.”

The Meaning of Sarkozy by Alain Badiou (Verso) £12.99/$24.95 (hb)
“To some, the 72-year-old philosopher Alain Badiou is a god,” Lucy Wadham wrote in the New Statesman, adding, “To others, this unrepentant Maoist is a chronic nostalgic – a dangerous apologist, even, for left-wing totalitarianism.”

“At least Badiou’s is not another piece of limp, defeatist political journalism of the kind you read all the time in the bien-pensant publications critical of Sarkozy, Le Nouvel Observateur, or Marianne, or Le Monde Diplomatique. Instead, he has produced a thundering, rallying tirade. The language is violent. Sarkozy is the ‘disgusting’ Rat Man (for which Badiou’s enemies have groundlessly accused him of anti-Semitism), ‘the jittery cop’, guilty of oppression and persecution, carried into power (just like Hitler and Pétain) by the electoral system itself. As if preaching to the converted – which is, of course, what revolutionary literature does – Badiou’s essay is not evidence-based and rational, but emotional and sentimental and replete with prejudice.”

Christopher Bickerton in the English edition of Le monde diplomatique reported: “His friend and fellow philosopher, Slavoj Žižek, observed that what was most welcoming about Badiou’s success was that it signalled the beginning of the end of what was known in France in the late 1970s as the ‘new philosophy’ – the anti-totalitarian moralism of Bernard-Henri Lévy, Alain Finkielkraut, André Glucksmann and others. Finkielkraut himself described Badiou’s success as ‘the symptom of a return of radicality and the collapse of anti-totalitarianism’. Žižek believes that Badiou has helped us see this ‘new philosophy’ for what it is: an insignificant exercise in sophism, a pseudo-theorisation of the most opportunistic fears and survival instincts, a sign of the provincialisation of French thought over the last few decades.”

Michael Cronin in the Irish Times writes that Badiou views Sarkozy “as the figurehead of what he describes as ‘transcendental Pétainism’. What Badiou understands by this is a culture of defeat, of abject submission to the laws of the powerful.”

Steven Poole in the Guardian calls the book an “enjoyably bilious essay” but adds that “what is really disappointing about it is the juvenility of its abuse. By the end of the book, one has learned next to nothing about the man except that he is not very tall.”

The drug laws don’t work

Michael Huemer argues that we shouldn’t fight a war on drugs; we should legalise them

Let me begin with a story, and see what you think about it. A man named Flip owned a computer. Flip, however, took very bad care of his computer. He often ate and drank over the computer, which resulted in his spilling Coke on the keyboard on three occasions, ruining the keyboard each time. He installed software that slowed the machine’s performance and caused the operating system to become unstable. Flip thought these programs were “cool”, but most industry experts considered them shoddy products whose drawbacks far outweighed their usefulness. Finally, three weeks ago, Flip got angry at his computer and threw it on the floor. The motherboard and several other components were fatally damaged, so that Flip no longer has a working computer. End of story.

Flip was an imprudent and irresponsible computer owner. He made several bad decisions. It would clearly have been better had he taken care of his computer, not installed harmful software, and never thrown it on the floor. This would have been better for the computer, for Flip, and even for society, for Flip would have been a more productive citizen with a working computer. So a question naturally arises: how might we prevent people from behaving like Flip?

A solution fairly thrusts itself on our imagination (or at any rate, on the imagination of those who take their cue from modern politics): we could send the police after Flip, to drag him off and throw him in jail. That would send a message to other would-be computer abusers.

What are we to think of this plan? Of course, as things stand, Flip will not be sent to prison because he has violated no law. But that just invites the question: Should Flip’s behaviour be against the law? It is clear enough that the behaviour was foolish and without redeeming social value. So why isn’t it illegal?

Here is why: because what Flip did, he did to his own computer. If Flip had destroyed someone else’s computer, say a computer in a public library, then he would be held accountable by the state, and rightly so. Likewise, if he had destroyed his computer in a manner directly harmful to someone else – say, by throwing the computer at another person – then he would deserve to be punished. But he does not deserve to be punished purely for what he does, privately, with his own property.

It isn’t that what Flip did was not sufficiently harmful. Flip could have had a top-of-the-line, $5,000 computer worth three months of his salary. It could have contained the only copies of his personal correspondence over the last twenty years, plus the Ph.D. dissertation he had been working on for the last five years. None of that matters to assessing his legal liability. Nor is the point that no one else was made worse off by Flip’s behaviour. Suppose that scholarship in his field of study will be set back thirty years by the tragic loss of Flip’s brilliant dissertation. Furthermore, Flip will be unhappy for the next several months because of the loss of his computer, causing his family and friends also to be unhappy. Again, none of this matters to assessing Flip’s fitness for public sanctions. As long as what Flip destroyed is clearly understood to have been entirely his own property, and as long as he did not in the process take or damage anything that another person had a right to, his deplorable behaviour would not and should not be legally punished. Almost no one would favour changing the laws in that regard.

This is our conception of property. When you own something, you may do with it as you wish, even including damaging or destroying it, up until the point at which you violate another person’s rights over what is theirs. This is what we generally accept when it comes to our material possessions—our computers, our clothes, our cars, and so on.

Now, what about our bodies? Shouldn’t we have at least as much right to control our own bodies as we have to control a computer? For what is more authentically yours than your own body?

Somehow, the majority of people disagree. Consider the case of Trip. Trip has a body. But like many of us, Trip takes rather poor care of this body. He often puts substances into it that damage his health and have little or no nutritional value. He claims that these substances give him great enjoyment, but most medical experts consider their harms to far outweigh their benefits. By ingesting them, Trip greatly reduces his overall life expectancy.

What can be done about Trip? Whereas nearly everyone is content to leave Flip alone to make his own mistakes with computers, most individuals and governments would see Trip hauled away and forcibly confined for years, in punishment for his indiscretions. At least, they would if Trip’s preferred “substances” include marijuana, cocaine, heroin, or any of several other specific substances named in the law. We would not, however, see Trip punished if his preferred substances include only alcohol, tobacco, and fatty foods. Although more people are killed every year by the latter substances than by illegal drugs – over twenty times more in the case of tobacco – most would consider legal sanctions for smoking, alcohol consumption, and overeating to be taking things too far.

Why this contrast in our attitudes? Why is consumption of illegal drugs viewed so differently from consumption of other harmful substances, or participation in self-harming activities more generally? One reason is the value of purity. Psychologist Jonathan Haidt has identified five broad kinds of ideas that influence people’s moral attitudes—ideas about harm, fairness, loyalty, authority, and purity. Conservatives tend to be more strongly influenced than liberals by the last three. In particular, many see illegal drugs as impure and as pollutants of the body. But few see alcohol, tobacco, or fatty foods in that way, even if they are objectively more harmful. This is partly because of social conditions created by the legal regime itself: we see normal, respected individuals drinking, smoking, and eating unhealthy foods in public all the time. We see no such thing for illegal drugs, which we tend to associate with sordid, poor, dangerous neighbourhoods and people.

What we should realize, when we are influenced by these feelings, is that illegal drugs are not inherently unclean, any more than alcohol, tobacco, or canola oil. All of these are simply chemicals that people choose to ingest for enjoyment, and that can harm our health if used to excess. Most of the sordid associations we have with illegal drugs are actually the product of the drug laws: it is because of the laws that drugs are sold on the black market, that Latin American crime bosses are made rich, that government officials are corrupted, and that drug users rob others to buy drugs. The drug laws create a regime in which crime burgeons: because legitimate businesses are prevented from providing the goods demanded by the market, criminals step in to provide the product, at greatly inflated prices. During America’s experiment with alcohol prohibition, organized crime grew bold and powerful from its booming trade in illegal alcohol. When prohibition ended, the alcohol business was taken over by legitimate businesses. Today, alcohol is sold in stores, not on the streets; it is shipped from breweries in the daylight, not smuggled across borders by criminal organizations; no government is corrupted by alcohol money; and virtually no one robs other people to get money to buy alcohol. The difference between the situation of alcohol and that of illegal drugs lies not in their chemical or pharmacological properties; it lies in the law. In the case of the drug trade, we have created laws that protect criminal organizations from noncriminal competition, thereby granting obscene profits to criminals.

None of this is to deny that misuse and overuse of drugs create problems for the user and those around him. The same is true, as I have said, of alcohol, tobacco, and unhealthy foods. But without prohibition, the problems created by drug use would be mainly private problems; it is the law that enables the drug trade to damage and corrupt society.

Most philosophers, in my experience, can be brought to agree with the case for legalisation. But there is another reason – apart from sentiments about purity – why most non-philosophers do not accept this case: the legalisation position strikes many as defeatist. We have a problem: drugs ruin many people’s lives. The legalisers’ position offers no solution to this problem, counselling instead that we merely learn to live with it. We hear this complaint often from the prohibitionists’ side. As the conservative U.S. Senator Jon Kyl put it, “What [the legalisation advocates] all have in common is a defeatist mentality that America is losing the war on drugs, and a shared faith that we can somehow win it by surrendering.” The Senator’s last remark is of course a misstatement: legalisers do not propose to win the war on drugs by surrendering. They propose no way of winning at all. Those who advocated the repeal of alcohol prohibition likewise offered no way of winning the battle against drunkenness. Rather than trying to win that battle, we have learned to live with the enemy as best we can.

Many conservatives feel repelled by this cynical approach. Those who feel this way would do well to learn from the views of their conservative colleagues about economic policy. Conservatives have long pointed up the unworkability of socialism. Many socialists find this critique cynical and defeatist: market economics offers no solution to the problems of human greed, poverty, and inequality. How can we be content to acquiesce in these problems, when another ideology offers a plan to perfect society and human nature?

The key insight that these sentiments miss is the one expressed in Voltaire’s dictum, “The best is the enemy of the good.” Seeking the best imaginable result – seeking to eliminate a social problem – often leaves us worse off. That is because the best is almost always unattainable, and our pursuit of it interferes with more modest strategies that could have achieved a good result, reducing the problem’s size and its collateral effects. The insight conservatives bring to the discussion of at least some social issues is that human society is imperfect because human nature is imperfect. As long as there are human beings, there will be selfishness, there will be crime, there will be foolish choices and people who ruin their own and others’ lives. The root of the drug problem is not that there are too many drugs around, or that they are too cheap, or that people have yet to be sufficiently educated as to their harmful effects. The root of the problem lies in the fact that some human beings desire escape from their troubles, and that human beings are tempted more by immediate pleasures than by the long-term good. Until humans are replaced by angels, that problem will not be solved, and to declare “war” on it is to declare war on human nature.

This is not to justify complacency in the face of vice and suffering. The point is rather to reframe our task. Our goal should not be to solve problems or to win metaphorical wars. Our goal should be to mitigate problems. We should not ask “What will stop drug abuse in our society?” but “What will reduce the problems associated with drug use in the most cost-effective manner?”

The war on drugs is not the answer to the latter question. Legalisation would reduce the social costs of the drug trade, for the reasons I have mentioned. It would greatly reduce crime and corruption, free up state resources, and restore respect for individual rights. Once we properly frame our task in confronting social problems, this will strike us, not as a defeatist appeal, but as the realistic and responsible approach.

Michael Huemer is professor of philosophy at the University of Colorado at Boulder.