Jean Kazez on an impassioned plea to give more

The Life You Can Save: Acting Now to End World Poverty
by Peter Singer
(UK: Picador, US: Random House)
It’s not easy being an uncompromising philosopher who wants to change the world. Peter Singer has been arguing for 35 years that it isn’t merely nice for well-off people to do something about extreme poverty: they are morally obligated to give, and give in large amounts. His 1971 article “Famine, Affluence, and Morality” declared we ought to give to the point of marginal utility – the point at which, if we gave any more, we’d make ourselves as badly off as the people we’re trying to help. That’s immensely thought-provoking, but it’s not going to convince Joe Public.
Singer took a turn for the practical and persuasive in two New York Times Magazine articles published in 1999 and 2006. The Life You Can Save goes even further in the direction of realism. It starts with the staggering statistics. Once it was easy to remember: 1 billion people around the world live on less than a dollar a day. In 2008 the poverty line changed to $1.25 a day, and now there are 1.4 billion people living on less than that. Ten million children die every year due to poverty-related causes.
The argument for extreme giving is just what it was in 1971. Premise 1: suffering and death from lack of food, shelter, and medical care are bad. Premise 2: if it is in your power to prevent something bad from happening, without sacrificing anything nearly as important, it is wrong not to do so. Premise 3: you can do just that by donating to aid agencies. Conclusion: contribute, and then contribute some more.
What’s different about this book is that Singer confronts the features of human psychology that make us reluctant to give. They don’t give us an excuse, but they do have to be understood and worked around. Research shows that donors will give more if they’re told about one child in need, rather than two or eight, let alone a billion; and it’s better not to mention the bigger picture of which the child’s problems are a part. We’re more likely to help if we’re the only one in a position to help; failing that, we’ll help more if we perceive others as helping too. If we countenance the big picture at all, we want to make a big difference; we don’t want our donation to be just a drop in the proverbial bucket.
Wisely, Singer puts enough real life detail in this book to fill quite a few “Save the Children” appeals. There are also suggestions about how to get around the perception that others aren’t giving. We ought to talk openly about our donations, he suggests, violating the taboo that says charity ought to be discreet. If openness makes us wonder about the purity of our motives, so what? The point was not to be pure, but to prevent death and suffering. Groups like the “50% club”, which promise half their income to aid agencies, make high levels of giving the norm within small communities.
How much should we give? Strictly speaking, still to the point of marginal utility. It’s not totally impossible, as Singer demonstrates with profiles of some extraordinary people. Paul Farmer, the amazing doctor portrayed in the great book Mountains Beyond Mountains, by Tracy Kidder, does just about that. Even his own daughter barely takes priority over other people’s children. A radical Singer acolyte? Hardly. Farmer simply takes seriously the Christian injunction to love your neighbour as yourself. “I can’t,” says Farmer, “but I’m gonna keep on trying.”
Singer knows that for many, a demanding message is likely to backfire. Though brought up in the Judeo-Christian tradition that extols charity, we love our iPods and SUVs and vacations much too much. The last chapter tones it down. A great deal could be accomplished if the wealthiest 10% of Americans gave at graduated rates starting around 5%. That, then, is Singer’s modest prescription, but he dispenses it with a clear warning that this is only what he thinks we might be willing to do, not all he thinks we should do. It makes you wonder about the meaning of saying “give X”, if it’s not to say “X is what you owe”. Perhaps it’s to create a “new normal” that could solve the global poverty problem, if not make us all perfectly good.
Singer seems particularly genuine when he lets loose a bit, even at the risk of offending some of his affluent readers. He writes a great couple of pages about the ludicrous yacht collection owned by Microsoft co-founder and not-so-impressive philanthropist Paul Allen. He also asks great questions about arts funding. In 2004, the Metropolitan Museum of Art in New York spent $45 million on a small painting of a Madonna and child. For that amount of money you could buy 900,000 sight-restoring cataract operations in a developing country, or perhaps save 45,000 lives. “How can a painting, no matter how beautiful and historically significant, compare with that?”
The deep philosophical question of the book is about such comparisons, and the obligations they give rise to. But Singer doesn’t want us to get tied up in knots. Give much more, he seems to say. Just do it.
Jean Kazez is the author of The Weight of Things: Philosophy and the Good Life (Blackwell).
Julian Baggini meets Anthony Kenny, one of the towering figures of late twentieth century British philosophy
If there’s one philosopher ideally placed to talk about the importance of certainty and doubt, Sir Anthony Kenny fits the bill, for sure. He has just finished writing the fourth volume of his major history of western philosophy, even as the third was being published, so he certainly has the big picture in mind. Furthermore, he has recently published a kind of intellectual autobiography, What I Believe, in which he talks about why, as an agnostic, he doesn’t believe very much at all.
Kenny’s lack of religious belief is important to him, since he trained for the Catholic priesthood and was ordained in 1955. Doubt was to come later.
“I don’t remember having any really serious doubts about the truth of Catholicism while I was at school in a junior seminary to the age of eighteen,” he told me at his London club. “When I was then sent to the English college in Rome in 1949 I did the philosophy course at the Gregorian University, which in those days was taught by the Jesuits in Latin. I was very disillusioned about that. It was neo-scholastic philosophy. It was supposed to be in accordance with the mind of St Thomas Aquinas, but we didn’t get to read anything of Aquinas except one of his small treatises.”
It went on like this for three of the course’s seven years. “If this kind of philosophy is necessary for somebody to be an educated Catholic then there is something wrong with it,” he thought. So when he returned home for the only break in the course’s entire duration, he “was pretty disillusioned about the whole thing and very nearly quit at that point. However, when I went back and started the theology course, oddly enough I found it much more believable, and the people who were teaching me were very much better as philosophers. I was very lucky to be taught by some of the finest minds in the Jesuit order at the time. By the time I was due to take the final decision whether to get ordained or not, I had regained my enthusiasm and delight in Catholic teaching and practice.”
But doubts were soon to resurface. “The possibility of proving the existence of God was something I found very difficult. The proofs didn’t seem to me to hold water. I believed that one could know God but not perhaps by means of a proof.
“When I became a curate in a parish, having to teach practical Catholic moral theology, I was very unhappy with a lot of particular Catholic moral teachings, such as the opposition to contraception, and at that time the church’s support of the nuclear deterrent, which I thought, and still think, was immoral. I wrote several pieces against British nuclear deterrent policy and after one of them, which was in a popular Catholic newspaper, I was forbidden to write by the bishop. But this was only really a catalyst; the much more fundamental worry was about the provability of the existence of God.”
Crunch time came when he refused to complete his doctorate. “There were the various stages of ordination which I completed as a priest, but for the doctorate, because it was at a pontifical university, at that time you had to take the anti-modernist oath, which included stating that the existence of God could be proved, and I didn’t believe that.”
Listening to the story it becomes clear that Kenny’s doubts were very specific to Catholicism. “It’s true that the God in whom I came to disbelieve was very much the God of Catholicism.”
But although atheism “always seemed a possibility – that’s what an agnostic position is”, Kenny has “never found the arguments against the existence of God much more convincing than the arguments for the existence of God.”
That’s a crucial point, because it is often argued that agnosticism is the best position for those who lack faith, since it is impossible to prove God does not exist. But this is a strange argument. No one has proved Yetis don’t exist, but the balance of evidence is such that you would plan a trip up Everest on the assumption that they do not. Kenny’s agnosticism does not stem from the lack of conclusive proof of God’s non-existence, but from his belief that the evidence and arguments are more finely balanced than most atheists and believers think.
So why does he think that? Surely if he lacks good reasons to believe in God, the default position is not to. Think of why people believe there were no WMD in Iraq at the time of the American-led invasion. Clearly no survey of the country has been thorough enough to make that absolutely certain; in fact, it’s conceivable we might still find some. But the evidence we have found gives us very little reason to believe they were there. What’s more – and this is where the analogy is perhaps quite strong – even if there is something there, it’s not going to be anything like what was thought to be there in the beginning. In the same way, the argument for atheism is that we can’t disprove God’s existence, and he might even exist in some sense, but with no good reasons to suppose the God we would recognise exists, why not work on the assumption that he doesn’t?
“Well, like you, the position which seems to me clearly to be rejected is dogmatic atheism, not being open to any eventual development of the argument the other way,” begins his reply. “I also agree with you to the extent that I think that if there is a God, it’s going to be very different to the traditional God of the great monotheistic religions, if it makes sense to talk of that as a single God. I wrote a book called The God of the Philosophers in which I gave a fairly precise definition of God as omniscient, omnipotent, benevolent and so on and said that I think one can show that that God can’t exist.
“But then I ended up by saying that is not the only possible, reasonable meaning one could give to God, and I took as an example the god described by John Stuart Mill in his posthumously published essays on theism: a being of great but not unlimited power who is concerned for us but not overwhelmingly so.
“Spinoza, who is a philosopher I have a very great respect for, constantly speaks of deus sive natura – god or nature – and you can either take this as meaning that he took nature as a revelation of God, in which case he’s a kind of god-intoxicated man; or he thinks that all god can mean is nature, and then he’s just adopting a type of reverential attitude towards nature. But I think if you start from the nature end rather than the God end, the history of evolution is not very satisfactorily explained by the total absence of any kind of design – not only evolutionary beings but the cosmological constants and so on. There seems to me to be a difficulty for people who want to say there’s nothing more than the material universe. And I also think in a kind of Spinozistic way that even just nature as it reveals itself in history is something that should provoke our awe and in a way gratitude. So, as it were, instead of starting from the God end and stripping off the clothes and showing that it’s just naked nature, one could start with nature and think that perhaps that deserves some of the reactions that people have made to God.
“More recently I have been saying that though I believe religions are not literally true, they have a great poetic value, and that philosophers have not really done enough reflecting about poetic kinds of meaningfulness and how they fit in with science on the one hand, and how one should live one’s life on the other.
“The atheist conclusion, at least as expounded by most vociferous atheists in our day, is that there isn’t anything left to explain once science has done its best, and that doesn’t seem to be right.”
Although Kenny has moved from faith to agnosticism, in the discipline as a whole he has witnessed a remarkable rehabilitation of religious belief. The denouement of his doctoral saga came “shortly after an anthology called New Essays in Philosophical Theology had been published by Tony Flew and Alasdair MacIntyre. It made a big splash at that time. MacIntyre at that time was an Anglican, Flew a very belligerent atheist, and certainly a lot of believers were shaken by the collection of analytic studies of philosophy of religion. What has now happened, of course, is that MacIntyre has become a Roman Catholic and Flew has recently announced his conversion to deism – he’s given up atheism.
“Philosophers of religion were in a pretty shattered state when I first came to the subject, and thought it would be wonderful if they could prove that religious propositions had meaning, let alone that they were true. In the fifty years I’ve been in philosophy I think there’s been a great revival of confidence among philosophers of religion. Plantinga and Swinburne I suppose are the two names that come to mind in that period.
“Not that there were not a lot of believers who were philosophers: Catholics like Anscombe, Geach and Dummett. But at that time they didn’t write much about philosophy of religion.
“It was certainly out of fashion then,” Kenny says of religious belief at Oxbridge in the post-war period. “I think two principal things have changed. The Oxford which I first went to as a philosopher was a very self-confident philosophy department. We thought we were the best in the world. We thought there had been a revolution in philosophy and that we were leading the revolution. People came from all over the world to study in Oxford. Because of this self-confidence we were not very interested in the history of philosophy. Aristotle and Plato never disappeared from the syllabus but the centre of Oxford teaching was very much contemporary, analytic ordinary-language philosophy.
“I think that Oxford lost self-confidence, for better and for worse. Gradually most of the best and most talented philosophers either went to or were in the United States rather than the UK. With that lack of self-confidence came a great interest in the history of philosophy and willingness to learn from it; to some extent a willingness to learn from philosophers in other places as well as from other times. As part of this there came back a great interest in medieval philosophy which of course linked to some extent with an interest in the philosophy of religion.”
In a wide-ranging career, however, the philosophy of religion has been just one small part. Kenny’s breadth of learning comes in part from a rejection of the narrow path of the academic specialist. For that, we can thank the late, great Donald Davidson.
“Davidson came to philosophy from social science rather later than I had come to philosophy from theology, and he came to see me before he published anything in philosophy. He was writing a review of [Kenny’s 1963 book] Action, Emotion and Will. He didn’t publish the review in the end; it turned into his article on actions and events. For a time I was quite close to him and it became clear to me that he was a much better philosopher than I was. But also I felt that the system he was producing was a kind of artificial system, which really had very little relation to the philosophy of mind and action as I understood it. It would sparkle and be exciting for a while but it wouldn’t be a fundamental contribution to the subject. And I thought, well if he’s so much better than I am and he can’t make a contribution to the subject, I would be much better employed not trying to make a contribution myself. I found in the course of teaching that I was quite good at explaining in terms which modern people understood what kind of thing was being meant by Plato, Aristotle, Aquinas and so on and so I would devote most of my work to that.
“I was then paid to teach philosophy for about 13 years and from then on my day job was really as an administrator, so I no longer had any obligation to read all the periodicals and keep up with the state of the art in philosophy, but I could just wallow in the great minds of the past and I found that much more pleasant.”
When he looks back over the history of philosophy he has so brilliantly chronicled, how important does he think the quest for certainty has been?
“I think it was Descartes who made certainty the aim of philosophy. I don’t think in classical and medieval philosophy you get the emphasis on certainty. You get the emphasis on knowledge, but not the irrevocable certainty that Descartes wants. The topic of certainty does I think emerge in Medieval philosophy because of the religious context, because of faith being a state of mind which resembles knowledge in its irrevocability, the certainty of it. That doesn’t resemble knowledge because it doesn’t have the grounds on which knowledge rests, and so you get certainty coming in that way. But I think it was very much the crisis of faith of the Renaissance and Reformation – [Richard] Popkin’s book on scepticism is very good about this – that suddenly you have equal and opposite certainties on both sides of the Reformation divide; and then you get people like Descartes trying to produce, as it were, a non-sectarian certainty.
“I think the kind of certainty that Descartes was looking for was really a will o’ the wisp. Starting from immediate private data and creating the world with certainty from that doesn’t work. But I think there is an extremely interesting topic of the things of which we are quite certain but which are neither pieces of knowledge, nor commitments of religious faith. The people who I think have written most interestingly about this are John Henry Newman and Wittgenstein. Newman’s Grammar of Assent is all about that, about our certainty of such things as that Great Britain is an island or that we are each of us going to die, which are not objects of faith. On the other hand, if you’re actually asked to produce evidence for them, the evidence is immensely flimsy – it’s never certain. And yet they provide much more of the bedrock of our structure of knowledge than anything people offer as evidence for them.”
I was curious to hear Kenny say that certainty wasn’t of interest to the ancients. Wasn’t Plato very much interested in establishing knowledge of the forms, unchanging, unalterable constants, true knowledge of which would not rest on any assumptions?
“I think that you’re putting together some things other than certainty. What I take to be the mark of certainty is something that can’t be called into question, something that stands fast no matter what may happen. I suppose you might say that means something that rests on no assumptions – I don’t think that was Descartes’ view. Sure, he wanted to start there, but a lot of other things were to become certain and unshakeable. Whereas with Plato it’s not so much that he’s looking for a frame of mind that is unshakeable, but rather that he wants knowledge of entities that are unchanging, absolute and non-relative. So I think you might say, sure, philosophers have also been interested in certainty, but with Plato it’s an objective certainty – a certainty of the things that you know – whereas with Descartes it’s a subjective certainty, a state of consciousness which is unalterable.”
This illustrates an ambiguity in the word certain itself, doesn’t it? There’s certainty as a state of mind and then there are states of affairs which are certain.
“Newman distinguishes these, calling the platonic thing certainty and the state of mind certitude. I think it’s a useful distinction but I don’t think it’s been used by others.”
Wittgenstein is the philosopher who has most influenced Kenny, “with regard to the philosophy of mind and language,” he qualifies. “I wouldn’t say Wittgenstein was a great ethicist.” Why did he move us on from both Plato and Descartes?
“I think it was cutting the ground underneath both Cartesianism and empiricism – the whole project, common to Descartes, Locke and Hume, of building up all knowledge on the basis of immediate, private thought and experience. Wittgenstein turns this around, showing that even our most private thoughts wouldn’t be the thoughts they are unless they were related to the public language that we all use. It’s not building up the public on the basis of the private but doing justice to the private on the basis of what we share socially. It turns around the way you look at everything.”
However, unlike many Wittgensteinians, Kenny has not become monomaniacally obsessed with the Austrian. “Wittgenstein wasn’t a philosopher I wanted to be looking at all the time, he was somebody who had given me a pair of eyes with which to look at the other things.”
Another of his philosophical heroes is Aristotle, who seems to be effortlessly unperturbed by either extreme of certainty and doubt. Whereas some philosophers seem always in fear of being overcome by doubt, or grasping for certainty, Aristotle just seems to be able to be satisfied with the best we can accomplish. “That’s one of the things I admire about him,” agrees Kenny. Is this something he also sees in Wittgenstein?
“I think what you’ve said is quite important. Though he was himself a tortured person, his eventual idea of philosophy was really very close to Aristotle’s, in that passage where he says the real discovery in philosophy is the one that allows me to stop doing philosophy whenever I want to. Philosophy doesn’t have to have foundations which can themselves be called into question; there are separate problems and we work on each problem as it comes, and I think that is a kind of Aristotelian idea. Of course one has to admit that, unlike Wittgenstein, Aristotle was a foundationalist: he did believe that there was this thing called first philosophy which was a foundation of all the other branches of philosophy. But I think that doesn’t much affect his practical way of working. The way he works on distinctions between actuality and potentiality and so on, though it’s in the book called Metaphysics, is actually quite similar to what Wittgenstein was doing in The Blue and Brown Books.”
Of one thing, however, Kenny remains at least fairly certain. I ask him if, after all these years of working on the philosophy of the greats rather than trying to join them, he still thinks that was the right choice.
“Oh yes,” he smiles, “I think so.”
Julian Baggini is editor of tpm
Richard Reeves on J.S. Mill’s rejection of the quiet life
By rights, this should be a year of celebration for liberals. 2009 marks the 150th birthday of both On Liberty and the Liberal Party. But liberalism is on the defensive. Economic liberals are taking the blame for the meltdown of the global economy, while social liberals are accused of creating a “broken”, amoral society.
Liberals are able to mount a sturdy defence, however. It is true that liberals have always recognised the moral and pragmatic value of free trade and markets. As a general principle, the freer the market, the freer the people within it. Protectionism, monopolies, tariffs, cartels: all are frequent targets of liberal rhetoric and politics. But the market, like all institutions, should be subject to the same test as the one T.H. Green posed for any action by government: “does it liberate individuals by increasing their self-reliance or their ability to add to human progress?” Over the last two years, free markets have performed patchily against this benchmark: but not over the last two hundred, or even two thousand. The point however is that the operations of a market are good only to the extent that they give people more power to lead a good life of their own choosing. If “neo-liberal” economics means a blind faith in free markets as inherently good, then it is not liberal at all.
The second challenge to liberalism is that it has corroded the institutions and social norms that make good communal life possible – and it is this claim I attempt to refute in what follows. According to social conservatives (of both left and right), liberal “relativism”, in the ascendancy since the 1960s, has undermined marriage, manners and morality. By elevating individual autonomy, liberals have given up any notion of “the good”.
It is certainly true that liberals are slow to pass judgement on others. They do not believe they have some magical insight into what constitutes a successful life for each and every person. But liberalism is not amoral; it does not encourage atomisation and incivility; and it certainly does not rely on a selfish view of human nature. On the contrary, liberalism is the most morally demanding philosophy because it insists that each of us generates our own moral resources rather than relying on externally-provided supplies.
Mill’s famous “harm” principle has done some damage to the reputation of liberalism: not because of the principle itself, which is sound as far as it goes, but because of its fame. The idea that our actions should be free from interference so long as they do not harm others remains as strong today as when Mill expressed it in On Liberty. (In fact he trailed it in the Principles of Political Economy eleven years earlier, but nobody noticed.) Amartya Sen’s updated version reads: “responsible adults must be in charge of their own well-being; it is for them to decide how to use their capabilities.”
But for too many people, on both sides of the argument, the harm principle has become the Alpha and Omega of the liberal faith. In fact, the harm principle plays a supporting role to the essential argument of both On Liberty and liberalism itself: that each person should be free to be, and become, whoever they wish. Critics of liberalism wrongly suggest it promotes selfishness, in the shape of individualism. It is, rather, individuality that most concerns liberals. Liberty provides the conditions within which people can grow.
Mill’s principal goal, in On Liberty and in all his subsequent works, was to describe and advocate a particular view of a good life, one based on the free and energetic development of individual character. In his Autobiography, Mill described On Liberty as “a kind of philosophical text-book of a single truth”. Many writers, Gertrude Himmelfarb included, have wrongly assumed that he was here referring to the much-disputed “simple principle”, or harm principle. Here is how Mill actually went on to describe this single truth: “the importance, to man and society, of a large variety in types of character, and of giving full freedom to human nature to expand itself in innumerable and conflicting directions.”
In On Liberty, he wrote: “The only freedom which deserves the name … is that of pursuing our own good in our own way.” Individuality is the lodestar of the entire work. For Mill, a good life is one led on our own terms. This is why, for him, the idea of coercing people into happier or better states – with the important exceptions of children and “barbarian” peoples – was a contradiction in terms. As Alan Ryan rather brilliantly puts it, Mill “wanted volunteers for virtue, not conscripts”. For Mill, it was vitally important that individuals be not only authors of their opinions, but also architects of their lives: “he who lets the world, or his own portion of it, choose his plan of life for him, has no need of any other faculty than the ape-like one of imitation,” he wrote. “He who chooses his plan for himself employs all his faculties”.
This did not mean that each individual had to follow a path different from everyone else’s, but that their path should be positively chosen, rather than sheepishly followed. “Originality does not consist solely in making great discoveries,” he wrote, strongly echoing his 1833 article “On Genius”; “whoever thinks out a subject with his own mind, not accepting the phrases of his predecessors instead of facts, is original.” For Mill, a real “genius” was someone more individual than others, and “less capable, consequently, of fitting themselves … into any of the small number of moulds which society provides in order to save its members the trouble of forming their own character.”
Liberty, then, was not simply the absence of coercion or restraint, but active “self-creation”. The ugly term “autonomy” – literally, self-governing – gets closer in some ways to what Mill meant than liberty, and it is striking that he used the phrase “l’autonomie de l’individu” to describe the main theme of On Liberty to a French correspondent.
Nor, for Mill, was liberty a static state of affairs; rather, it was manifested in each person progressing “nearer to the best thing they can be”. Mill prefixed his essay with what he called a “motto” from Wilhelm von Humboldt’s Sphere and Duties of Government, published in 1854: “The grand, leading principle, towards which every argument unfolded in these pages directly converges, is the absolute and essential importance of human development in its richest diversity.”
The passage was not idly chosen; the theme of human development is the golden thread running through On Liberty, and indeed most of Mill’s subsequent major works. Humboldt’s name appears nine times in an essay in which Hume, Voltaire and Hobbes are absent, and Bentham, Locke and Kant make a single appearance each. Mill especially endorsed Humboldt’s claim that “the end of man … is the highest and most harmonious development of his powers to a complete and consistent whole”.
Throughout the essay, Mill used organic metaphors for individuals. While the 18th-century philosopher Immanuel Kant had declared that “out of the crooked timber of humanity no straight thing was ever made”, for Mill people were not dead timber, but living trees – driven by their very nature to grow, stretch and seek the light. “Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it,” he insisted, “but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.” When Mill argued against repression, he did not use spatial terms like “invade” or “interfere”. For him, repression inhibited natural growth, with people turned into “pollards” or “compressed”, “cramped”, “pinched”, “dwarfed”, “starved” or “withered”. The reason Mill wanted people to be free was so that they could grow, and in his highly optimistic view of human nature, the tendency towards growth was immanent in each and every person. As Alan Ryan, one of the leading Mill scholars of the 20th century, put it: “Mill’s concern with self-development and moral progress is a strand in his philosophy to which almost everything else is subordinate.”
The result of individuals following their own life plans would be a kaleidoscope of different lifestyles – which Mill welcomed. “As it is useful that while mankind are imperfect there should be different opinions,” he wrote, “so is it that there should be different experiments in living; that free scope should be given to the varieties of character … and the worth of different modes of life should be proved practically, when any one thinks fit to try them.” The only way people could discover the best way to live, Mill believed, was by trying out a variety of different ways, and comparing notes on the results. In his diary, the day before sending Harriet their planned list of essays in February 1854, Mill railed against Goethe’s ideal of a cultivated individual as “rounded off and made symmetrical like a Greek temple or a Greek drama … Not symmetry, but bold, free expansion in all directions is demanded by the needs of modern life and the instincts of the modern mind.” Mill wanted individuals to ask when planning their lives, “what do I prefer? or, what would suit my character and disposition? or, what would allow the best and highest in me to have fair play, and enable it to grow and thrive?”. In the interests of social experimentation, Mill wanted more “spontaneity” and “eccentricity”.
Mill’s vision of a good, autonomous life – continuous self-creation along with an openness to new ideas, experiments and opportunities – demanded a great deal of personal energy. One of the more telling charges that Mill levelled at Christian morality, at least in its existing form, was that its ideal was “passive rather than active”, more concerned with “Abstinence from Evil, rather than energetic Pursuit of the Good”.
The words “energy”, “active” and “vital” (or their derivatives) appear forty-four times in On Liberty, compared to thirty-one mentions of “individuality” and forty-nine of “freedom”. Mill’s economics, politics, feminism and moral philosophy all had at their heart the same ideal conception not just of a good life, but of a good way of living: autonomously, socially, and strenuously. To many readers, it could all sound a bit exhausting. Certainly this was the view of the fictional Richard Phillotson, husband of Sue Bridehead in Hardy’s 1895 novel, Jude the Obscure:
Sue continued: “She, or he, ‘who lets the world, or his own portion of it, choose his plan of life for him, has no need of any other faculty than the ape-like one of imitation.’ J.S. Mill’s words those are. I have been reading it up. Why can’t you act upon them? I wish to, always.”
“What do I care about J.S. Mill!” moaned he. “I only want to lead a quiet life!”
But a quiet life was not what Mill had in mind. For him, the energetic pursuit of a self-defined, self-improving life was a vital component of individual character. On Liberty was in fact the most important instalment of Mill’s lifelong concern with what he had called, in his essay on Coleridge, the “culture of the inward man as the problem of problems.”
Mill’s lifelong interest in character, and its development, was based on a conviction that independence could only be achieved by individuals with the educational, social and moral resources to actively shape their own lives. This is, of course, a million miles away from the selfish atomism with which liberalism is falsely charged. Liberals know that virtue, morality and character all matter hugely: they know, indeed, that a liberal society cannot survive without them. But they also know these goods cannot be delivered to people by an external agency such as the church or state. As Mill put it in On Liberty, “it really is of importance, not only what men do, but also what manner of men they are that do it.”
John Skorupski on the relationship between the freedoms of thought and speech
Mill’s essay On Liberty asserts “one very simple principle, as entitled to govern absolutely the dealings of society with the individual in the way of compulsion and control, whether the means used be physical force in the form of legal penalties, or the moral coercion of public opinion.”
The main thrust of the principle (call it the Liberty Principle) is that legal penalty and moral coercion should not be used to make individuals do things for their own good. It does not forbid the use of coercion to prevent an action that will do harm to others; whether or not that is justified depends on other principles. Nor, as Mill makes clear later in the essay, does it forbid legal penalty or moral coercion to prevent offensive or distracting invasions of public places. In shared space one can be expected to behave in ways that show due consideration for others. If I insist on having a loud and obscene conversation with my friends in a railway carriage, or on copulating at high noon in a crowded park, it is not an infraction of Mill’s liberty principle to prevent me, whether by legal penalty or moral coercion. This is so even if my activity does not harm others, as against offending them, or merely distracting them from their own pursuits. In contrast, if I want to do the very same things with my friends in my own house Mill’s principle says that others have no right to stop me.
Famously, in chapter two of On Liberty Mill also provides an inspiring defence of liberty of thought and discussion. However, he does not explore its limits in anything like the detail in which he explores the limits of the Liberty Principle. Nor, connectedly, does he answer at all clearly two important questions about it. Is liberty of thought and discussion just a special case of the Liberty Principle, or does it protect thought and discussion in a stronger way than the Liberty Principle alone would do? Does it derive from the same bases as the Liberty Principle, or from different ones? These questions have become especially important today, since the principles of free speech are once again in active political play.
On the one hand speech is a species of action, so one might think that interference with it by means of “compulsion and control” would be governed by general principles which determine when such interference in any action is legitimate – that is, by the Liberty Principle. On the other hand speech (including writing, etc.) is widely treated as a special case that requires a distinct degree of protection.
As Mill himself remarks at one point, “no-one pretends that actions should be as free as opinions.” That remark still speaks for our prevailing assumption. If, for example, we think that moral, political or religious demonstrations in streets or parks should not be interfered with, though they inconvenience, distract or positively offend others, then we are already treating speech as a special case. For we do not class such demonstrations as invasions of shared space, in the way we class other kinds of comparably distracting or offending behaviour. There will still be limits, of course, but the mere offence, let alone the inconvenience or distraction, a demonstration may cause is not sufficient ground for stopping it.
Likewise, many would say that it is wrong to prevent dissemination of moral or philosophical views, or factual inquiries, just because they may cause harm to some people. Here too there are limits, turning on tricky issues about incitement, intimidation, defamation, privacy. Mill makes a stab in this direction: he accepts that speech can be controlled when it constitutes “a positive instigation to a mischievous act.” This pregnant phrase is clearly meant as a more stringent criterion than the general test of harm to others.
But why give a special protection to freedom of speech, as against freedom of action in general? If liberty of thought and discussion demands a special degree, or distinctive kind, of protection – as I think it does – why does it?
The answer lies in the fundamental commitment of a certain western tradition of liberalism: namely, commitment to free thought.
Free thought is thought ruled by its own principles and by nothing else; that is, by principles of thinking that it discovers, or makes explicit, simply by reflecting on its own activity. It acknowledges no external constraints placed on it by doctrines of faith, revelation or received authority: it scrutinises such teachings in the light of its own principles. The contrast is with apologetic thought, in the traditional sense of that word: thought which seeks to make intelligible, so far as possible, the ways of God to man, without claiming to know those ways by its own principles alone. Apologetics is fideistic. It holds that free thought alone cannot tell us what to believe: natural reason must be a servant of faith, or at most a co-sovereign with it.
Apologetic thought says that certain truths are known extra-rationally, by revelation or tradition; furthermore since these characteristically are, or give rise to, important moral truths, they should serve as the foundation of the social order. (I am using “moral” in the widest sense, to cover what is of value and how we should live, not just specific doctrines about moral obligation.) The apologetic task of philosophy is to expound and defend these truths against their detractors and misinterpreters. But the task need not be just a task for philosophy. It may also be a task for censors and book-burners.
Now the liberal tradition that I have in mind, to which Mill belongs, also holds that the social order must be founded on moral truth. But the important difference is that it at the same time holds that free thought is the only reliable guide to truth. The difference is not that this particular liberal tradition denies there is such a thing as moral truth. The difference is about the epistemology of moral truth: how we know it.
Under what conditions does free thought give us rational belief? There must be free cognitive spontaneity – people must be free to accept a claim only because they find it belief-worthy by their own lights, and not simply because someone tells them to believe it. At the same time, there must be free discussion: openness to what others find belief-worthy, a readiness to give respect to views that deserve respect, and critical readiness to revise one’s own assumptions in the light of what others sincerely say.
Hence the distinctive importance that this liberal tradition attaches to free speech. While free thought may lead, and liberals think it does lead, to all the other characteristic doctrines of political liberalism, liberty of thought and discussion itself has a certain “transcendental” status among these doctrines, in the sense that it is the precondition of knowing what a social order based on moral truth can be. “Complete liberty of contradicting and disproving our opinion,” wrote Mill, “is the very condition which justifies us in assuming its truth for purposes of action; and on no other terms can a being with human faculties have any rational assurance of being right.”
Plainly that applies to this opinion itself, as well as to any other moral doctrines a liberal may want to put forward. It therefore demands a serious response to the apologetic conception of philosophy, and the epistemology on which it is based. That, I believe, was the main motive behind a project on which Mill spent much time and energy: his System of Logic, which is a treatise of naturalistic and fallibilist epistemology. This work and the essay On Liberty were the two by which he thought he would be remembered.
In the 20th century the liberal tradition to which Mill belongs came to be mistrusted by many people who counted themselves liberal, mostly for bad reasons, such as the notion that there is no such thing as moral truth. Two rather more cogent worries about it are that the drive to base social order on moral truth will lead to authoritarianism, and that it will lead to intransigent conflict among partisans of conflicting moral doctrines, conflict that may itself disturb liberal order.
There is some psychological, as against philosophical, truth in the first of these; we know very well how authoritarianism, or unyielding partisanship, can grow psychologically from the claim that one has sole access to the truth. Yet that does not mean that the moral truth, if we could but grasp it, would justify an authoritarian social order. Why should a liberal believe that? And if she does, why is she a liberal? There is also truth in the second worry, as we are presently in a very good position to see. Suppose we accept that complete freedom of thought and discussion is the only reliable road to truth, and also that it can be socially destabilising. Which of these truths, in our actual circumstances, should we steer by? It seems to me that you have to be grossly pessimistic to steer by the second. That is the basic argument against (for example) laws that outlaw holocaust denial.
Dialogue, unconstrained truth-seeking discussion, is nothing but the social expression of free thought. Given the distortions and manipulations to which free thought is subject, only continued full exposure to free discussion can give us continued rational warrant for our beliefs. Socially possessed truth and disinterested, rational qualities of mind among citizens are public goods. Hence, protection of free dialogue can be based directly on its social benefit, rather than indirectly, by an argument going through the rights of the individual. We could say that free speech is a democratic right – but better, we should say that it is a democratic obligation: that we have an obligation to protect and promote it for the sake of maintaining a liberal order based on moral truth.
A common objection to this ideal of democratic intellect is that it assumes unrealistically high standards of integrity and disinterested rationality from too many people on too many subjects. The more we are struck by human irrationality and ignorance, and by its unequal distribution, the more inclined we shall be to restrict open dialogue on this or that important subject to an appropriate elite – at least initially, while the main principles are being worked out. We may not do it openly, but we will still quietly do it.
Elites, however, are also made up of fallible and corruptible human beings. They acquire the interests and solidarity of a special-interest group, and uncriticisable ideological doctrines to sustain those interests. Dialogue must appeal to the common reason of all human beings; to leave it in the hands of one group is to provide no mechanism for eliminating the particular distorting perspectives of that group. Further, rationality and responsibility are qualities developed by education and practice. People who are shut out of free discussion are stunted and diminished – they are prone to the diseases of reason, to paranoia, to the defensive aggression that arises from ignorance and low self-esteem, to exploitation by demagogues.
Mill was not a thinker to whom the dangers of free controversy had never occurred. Society, he thought, needs beliefs and feelings which provide enduring rallying points of allegiance and inspiration. Can such sources of allegiance survive if there is total liberty to criticise all sources of allegiance? That is the danger of destabilisation. Then there is another danger, the danger of uncritical group thinking, hardening into “moral coercion”. In a democracy, does unrestricted open dialogue undermine mediocre conformism, or does it, on the contrary, accelerate democracy’s tendency to sell out to celebrity, the politics of simple-minded causes, the glamour of simplistic myths? Or rather, since both occur, which prevails? This was a question that deeply worried Mill throughout his life. In the end, however, the Millian doctrine of free speech places a very large bet that the former prevails. A main question for liberal policy is what can be done to make that bet safer.
John Skorupski is professor of moral philosophy at the University of St Andrews and author of Why Read Mill Today? (Routledge)
TPM editor Julian Baggini has just released a special extra edition of his Philosophy Monthly podcast, a discussion between five shortlisted authors for the £10,000 2009 Bristol Festival of Ideas book prize. Taking part are Nick Davies (Flat Earth News), Misha Glenny (McMafia: Seriously Organised Crime), Richard Holmes (The Age of Wonder), Mark Leonard (What Does China Think?) and Sara Maitland (A Book of Silence). Also shortlisted was Susan Faludi (The Terror Dream). Download or listen by clicking here or by going to the iTunes store.
John Kampfner gives Britain a freedom audit
How has it happened that a Labour government, which came to power promising to restore public faith in democracy, will go down in history as one of the most illiberal in modern British history?
There is no single answer, but there are many clues. Some are narrowly political. Tony Blair had inherited the package of reform measures from his predecessor, but his heart was not in them. Why disperse power when you have suddenly acquired so much of it? The large parliamentary majorities secured in 1997 and 2001 engendered in the Prime Minister and those around him a hubris that would be their undoing – manifested most famously in the Iraq war, but also in domestic preoccupations. That hubris was accompanied by a less perceptible but equally damaging under-confidence. Blair believed essentially that Britain was both a Conservative and a conservative country, and that he would achieve little if he did not acquiesce to the tastes of the majority view as represented to him by pollsters and selected newspaper magnates and editors. The terrorist attacks of 11 September, 2001, allowed him to elide his instincts with theirs, but his journey towards an authoritarian mind-set had begun before then. Throw in technological advances such as biometric data-collection, and the mix became potent.
By the time Blair left office in 2007, he had bequeathed to his successor a surveillance state unrivalled anywhere in the democratic world. Parliament passed 45 criminal justice laws – more than the total for the whole of the previous century – creating more than 3,000 new criminal offences. That corresponded to two new offences for each day the House of Commons sat during his premiership. The scope was extensive: police and security forces were given greater powers of arrest and detention; all institutions of state were granted increased rights to snoop; individuals were required to hand over unprecedented forms of data. Abroad, the government colluded with the transport of terrorist suspects by the US government to secret prisons around the world, giving landing rights at British airports for these so-called “rendition” flights. At home, new crimes were created such as glorifying terrorism or inciting religious hatred. Control Orders were imposed on people who were deemed a security threat but who, the government said, could not be prosecuted because the evidence that had been gathered by bugging or other means would not be admissible in courts. Anti-Social Behaviour Orders were imposed on people for actions that were not illegal in themselves and for which the burden of proof was considerably lower. In 2005 double jeopardy was removed for serious offences, meaning people could be tried for a second time even after being acquitted. The government also tried to cut back the scope of trial by jury, suggesting that some cases, such as serious fraud, were too complex for ordinary people to understand.
The more the state intruded into people’s lives, the harder it became to convince people it was making them safer. Whenever figures registered a rise in crime, particularly violent crime, newspapers indulged in a bout of moral panic. Even though the public doubted the effectiveness of many of these laws, most opinion polls suggested either support for, or acquiescence in, the general idea of being tough – particularly when the specific measures were not explained to them. Civil liberties groups secured one or two notable victories in curbing ministerial zeal, but they felt they were battling against a popular tide.
I have never denied the important role of the state, at national and local level, in helping to provide more equitable social outcomes or to secure safer streets. Several spells of living in continental Europe had made me appreciate the more communitarian spirit alive in a number of countries. The social responsibility of the Germany where I lived during the 1980s compared well to the more mean-spirited atmosphere of Britain, in which Margaret Thatcher had famously said there “was no such thing as society”. In the mid-1990s, as Labour prepared for office, I, like many, welcomed a recalibration away from selfish individualism. But a few years of life under Blair and a succession of Home Secretaries with a thirst for authoritarianism led me to change my mind.
The problem for the government was that the public sent out mixed signals. The problem for the public was that the media sent out mixed signals. On the one hand, newspapers fed on fear, implying that Britons had never felt so unsafe; on the other, they warned of a police state, forever telling people what to do and punishing them if they did not. Ministers’ problem was not confusion, but dogma. They reduced the debate on the relationship between state and individual to a simple matrix in which you could be either a naïve libertarian who worried only about individual rights or a responsible citizen ever on the alert for threats. For me, and others like me, that was one of the most dispiriting aspects of the age.
In truth, there has never been a “golden age” of freedom in the UK. Governments of all colours have trodden similar paths, prime ministers in awe of the daily security briefings they receive. Blair and his ministers followed in the footsteps of the Conservatives in setting as their default position a disdain for a liberal “dinner party set” that obsessed about human rights. Their pollsters told them that the British public would put up with just about anything, from cameras, to identity cards, to pre-emptive custody, to stop and search, in order to feel safer. They bought the line that only those with something to hide had something to fear. If you kept out of trouble, you would not get into trouble.
Every country has its pact between liberty, security and prosperity. Blair expressed his approach with the eminently reasonable notion of “a society with rules but without prejudices”. Britain became, at least on paper, more tolerant. Laws were passed recognising “civil partnership” for gay couples, and increasing penalties for discrimination on the basis of race, religion, gender, sexuality and age. Blair was marking the divide between public freedoms, which were negotiable, and private freedoms, which were sacrosanct. In the private realm, Britons had never been freer to lead their lives in the way they chose. In the public realm, from their behaviour in the local park, to their utterances in the media and in demonstrations, Britons were given ever narrower boundaries in which to operate. Overstep them, and the state would be given unbridled powers to hunt you down. Blair summed it up like this: “I believe in live and let live, except where your behaviour harms the freedoms of others.” But who and what determines harm?
As he angled for Blair’s job, Gordon Brown hinted that he would take a less cavalier approach to Britain’s constitutional norms and civil liberties. To a small degree, he did change procedures. But as time went on, whenever he was faced with a choice of curbing state excess or demonstrating that he was being “tough” on crime, he opted for the latter. He initially seemed uninterested in pursuing the battle over prolonging pre-trial detention. But, as with Blair, he was guided by two imperatives – the perennial warning by security chiefs that the situation “out there” was “more dangerous”, and the perennial advice of pollsters to “talk tough”. He was advised to seek an increase but to portray it as a compromise. So instead of the wished-for 90 days, 42 was the number his officials randomly chose. In spite of lobbying for the change, police chiefs could not point to one instance where they had needed the extra time. They based their justification on the precautionary principle, on the basis that they might, one day, need it, thereby re-interpreting criminal justice law on the basis of an undefined possible future threat. This was a variant on the “if only you knew what I know” line adopted by Blair in reference to the elusive weapons of mass destruction in Iraq.
The government was reinterpreting its task as not to minimise risk, calibrating security needs against liberty, but to set itself up as the guardian against all risk. Figures showed that the number of victims of terrorism among the British public was actually lower than before. In the decade of the Labour government terrorists had killed around 150 people in the UK (87 in Northern Ireland and almost all the rest on 7/7). This marked a decline of 88 per cent on the 1980s. Yet, security experts and ministers countered, that does not take into account the number of attacks that might have happened, that were foiled. Some of the cases came into the public domain, with terrorist cells broken and suspects convicted. But in many other cases, the information was not brought to light, for fear of jeopardising future intelligence operations. This debate came down ultimately to the public being asked to trust the politicians and the security services when they warned of dangers.
Not all the erosions in liberties could be attributable to government or state action. I had become increasingly mindful of the extent to which public organisations were self-censoring, not just in the media, but more broadly in cultural life. Public bodies across the land were tip-toeing away from areas of controversy. When chairing a round-table discussion organised by the Arts Council, I was startled to hear theatre directors and art gallery curators admitting that they avoided tackling issues of race or religion. Some cited police advice about potential unrest; others cited concern from funders or local authorities. Others took a more fundamental view, accepting the right of communities to be protected from offence. This right appears in the minds of some cultural figures to have taken precedence over the right to free speech.
The increased use of Britain’s restrictive libel laws has produced a chilling effect on free speech and investigative journalism; it has also hindered the work of non-governmental organisations which have long relied on confidential informants in reporting on tyrannical regimes around the world. They now spend large parts of their budget trying to shield themselves from litigation. English libel law was singled out for particular criticism in a UN Human Rights Committee report, which noted that it served to discourage critical media reporting on matters of serious public interest and adversely affected the ability of scholars and journalists to publish their work.
With such weak parliamentary scrutiny, much of the burden of holding the executive to account has fallen to Britain’s media. The results have been mixed. I was struck by the herd mentality of parliamentary journalists, following each other and thinking only of the next day’s headline. Issues such as the state of democracy were derided by reporters just as much as they were by politicians. Around the turn of the millennium, I had a chat with a colleague who had just quit working for a newspaper to become a government information officer. It was one of those periods when Fleet Street was taking rhetorical pot shots at Blair, and I asked my friend, now that the poacher had turned gamekeeper, how it felt to be part of an embattled government. He laughed, saying he had been shocked to find out how little reporters – let alone the public – knew what was going on in Whitehall. “I reckon on any given day you’ll be lucky to find out one per cent.”
The threat to robust inquiry is perhaps greater now than ever before in our system. Newspapers vent their spleen, but they uncover little of what is being done. Much of British journalism has become supine in the face of intimidation from state organs and from libel and other laws. For some time reporters have complained that editors and proprietors are shying away from difficult stories for fear of “getting into trouble”: in so doing Britain’s once-fearless press is merely following a global trend.
From ID cards to CCTV, to a universal DNA database, to detention without charge, to restrictions on protest and the press, the government has tried to rewrite the relationship between state and individual. In 1997, the government contained cabinet ministers such as Robin Cook and Mo Mowlam who cared about these issues. Their views were quickly sidelined, and they were replaced by machine politicians who saw the “delivery” of outcomes as the most important marker of success. Civil liberties were reduced to a lobby, instead of a core part of the political project.
Instead, ministers believed passionately in the role of the interventionist state to change behaviour for the common good. The philosophical underpinnings for increased state power lie in the ideas of the social reformer Jeremy Bentham. His utilitarian notion of the greatest happiness for the greatest number was re-cast by the British government as the greatest security for the greatest number – the “do whatever it takes” line of thinking. The Right took hold of the argument and re-framed it in patriotic, libertarian tones, claiming as one of its own John Stuart Mill, who wrote: “The only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant.”
For 10 years many Britons enjoyed increasing prosperity, indulging in their favourite hobby of borrowing and spending money. The people who really mattered, the top one per cent, were indulged as never before. Blair and Brown resisted all attempts to tax them or to regulate them. Meanwhile, ministers vowed that they would stop at nothing, literally nothing, to keep them safe. This was an arrangement that suited all sides, as the polls showed. Britain signed up to a pact. It is hard to make the case that people were duped. Blair and his ministers, even before the events of 9/11 and 7/7, had been fairly frank about their priorities. The role of government was to create the environment for wealth-creation, and to stop in their tracks those who threatened that public good.
The tragedy for Britain is that over the past decade it had an extraordinary opportunity to combine an emphasis on social justice with civil liberties. As the man who was in charge of prosecuting criminals and terrorists on behalf of the state, the former chief prosecutor Ken Macdonald is perhaps best equipped to cast the final judgement. He juxtaposed the impunity of the bankers who had brought global finance to its knees with the treatment of the rest of the population. “If you mug someone in the street and you are caught, the chances are that you will go to prison. In recent years mugging someone out of their savings or their pension would probably earn you a yacht,” he wrote. Britain, he said, had a business regulatory system that ignored malfeasance and a criminal justice system that was an auction of fake toughness. “So no one likes terrorists? Let’s bring in lots of terror laws, the tougher the better. Let’s lock up nasty people longer, and for longer before they are charged. Let’s pretend that outlawing offensiveness makes the world less offensive. This frequently made useful headlines. But it didn’t make our country or any other country a better or safer place to live. It didn’t respect our way of life. It brought us the War on Terror and it didn’t make it any easier for us to progress into the future with comfort and security.”
Jo Ellen Jacobs argues that Harriet Taylor was the co-author of On Liberty.
Scholars have debated the role of Harriet Taylor Mill in the composition of On Liberty almost continuously since the text appeared. Some commentators say she didn’t have anything to do with it, others that she did – and that explains why the book is not very good. Only a very few of us argue that her contribution was both significant and positive. A contemporary Mill scholar, Alan Ryan, suggests that “it would be more foolish to exaggerate Harriet’s role than to deny it.” Perhaps I am an exaggerating fool. I’ve been called worse.
Harriet Taylor Mill was a Victorian radical, a feminist economist, a philosopher, and the author of “The Enfranchisement of Women”, an influential article published in The Westminster Review. For twenty years she worked and travelled with John Stuart Mill, before marrying him in 1851. Harriet has been labelled many things: “a philosopher in petticoats”; “one of the meanest and dullest ladies in literary history, a monument of nasty self-regard, as lacking in charm as in grandeur”; a “tempestuous” “shrew”; “a female autocrat”; a “domineering, … perverse and selfish, invalid woman”; a “vain and vituperative, proud and petulant” masochist; and “a very clever, imaginative, passionate, intense, imperious, paranoid, unpleasant woman.” Harriet has been branded everything short of the Wicked Witch of the West by John’s biographers and historians of philosophy.
Accompanying such personal invective, many historians insist that however Harriet “helped” John in his intellectual work, her effort did not, did not, did not, amount to co-authorship.
The definition of authorship has evolved gradually and resulted in the view of the individual self as the “knower”, or, as Margaret Atwood describes it, “a kind of spider, spinning out his entire work from within. This view depends on a solipsism, the idea that we are all self-enclosed monads, with an inside and an outside, and that nothing from the outside ever gets in.” Yet not every text has the kind of author Atwood describes. Think about an advertisement for The Philosophers’ Magazine, an article in that periodical, a contract for the sale of a house, an Associated Press news release, a poem, and instructions for your new iPhone. Only some of these written pieces have Atwoodian authors. How do we determine the difference between collaboration, influence, inspiration, and co-authorship? Three sources of evidence suggest themselves: textual evidence, testimony of others, and testimony of those involved. None of these is foolproof.
One kind of textual evidence would look for correlations between previous work and the text in question. When this is done, it is clear that the ideas found in On Liberty can be found in both Harriet’s work, much of it written in the 1830s, and in John’s previous writings. Examples of the parallels between Harriet’s writing and On Liberty abound (and can be found in detail in The Voice of Harriet Taylor Mill), but here I offer three that are central.
Liberty is “the chief value of life” according to Harriet, but how can it be developed when “almost all educationists seem to think [their goal is] filling the mind with an undigested mass from the minds of others”? We need a new kind of education that encourages “the desire, power, and habit of using the person’s own mind.” Or as they say in On Liberty, students who “have never thrown themselves into the mental position of those who think differently from them, and considered what such persons may have to say … do not, in any proper sense of the word, know the doctrine which they themselves profess.”
We also need “experiments” in living – those willing to try to live apart from their husbands, those willing to have a convention arguing for women’s rights, those willing to risk entering careers that are socially unacceptable. For example, during the period Harriet and John worked on the manuscript of On Liberty, her daughter Helen became a professional actress. In a letter that later echoes in their joint manuscript, Harriet says, “Try your experimental life, & as far as I can judge at present this seems to me the best.” People deemed “odd” should be reconsidered. “Eccentricity should be prima facie evidence for the existence of principle,” Harriet writes, while in On Liberty they proclaim, “Eccentricity has always abounded when and where strength of character has abounded; and the amount of eccentricity in a society has generally been proportional to the amount of genius, mental vigour, and moral courage it contains.”
And what is the use of education that enlivens the mind and the freedom to use that mind to think, discuss, publish and live experimentally? These activities are central to human development. Further, the development of human capacity must be supported, according to Harriet, by “good laws, laws which pay … regard to human liberty.” Specifically, “No government has a right to interfere with the personal freedom … which does not interfere with the happiness of some other.” This anticipates, of course, the “very simple principle” at the heart of On Liberty: “the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others.”
Were John and Harriet the only ones thinking about liberty and its importance? Hardly. But studying Harriet’s work makes it easier to acknowledge that many of the ideas in On Liberty were Harriet’s as well as John’s.
Another kind of textual evidence for authorship is in the essay’s rhetoric. The text flaunts a passionate commitment to the critical examination of ideas, to the insistence on a full-throated dialogue in the search for the truth. The words live because they were born in the lives of their authors. On Liberty celebrates a collaborative theory of knowing exemplified in the way Harriet and John worked together.
They believed fervently in the power of individuals struggling together to grasp the truth – including both the “idealistic” belief that there is truth as opposed to mere subjective opinion, and a deep scepticism about the beliefs accepted by the majority. They argued that the tyranny of the majority was to become the greatest danger of the future.
It is a future we find ourselves in now – with a majority willing to accept a definition of liberty that equates freedom with the ability to select between the ephemera of farting-noise or koi-pond applications for an iPhone, or 50 different varieties of breakfast cereal, while remaining unconcerned that the media have been reduced to a mouthpiece for half a dozen owners whose views seep into every sleepy eye and ear. The implicit orthodoxy is that anyone who questions this definition of freedom seeks a freedom that is unattainable, dangerous or both.
Further, many in the academic establishment have slouched into a cynical postmodern nihilism that rejects the mere mention of truth. The Mills resisted this future: liberty was required for self-development. We need freedom not to buy one object rather than another, but freedom to struggle with each other as we search for truth, however elusive. They lived this prickly and invigorating relationship, and they wrote about it, asserting, for example, that “truth has no chance but in proportion as every side of it, every opinion which embodies any fraction of the truth, not only finds advocates, but is so advocated as to be listened to.”
In addition to textual evidence, the testimony of others can help define authorship, although one must always be suspicious of the motives of witnesses – and especially in this case. The nasty comments about Harriet mentioned above and those of contemporary commentators point to a misogyny that cannot be ignored. Listen to just one of many I could cite: Jack Stillinger comments, “It is reasonably clear in fact that Harriet was no originator of ideas, however much she may have aided Mill by ordinary wifely discussion and debate. … It is unfortunate that Mill did not simply thank his wife for encouragement, perhaps also for transcribing a manuscript or making an index, and let it go at that.” (Emphasis added.) Uppity women cannot be acknowledged as co-authors in our history of philosophy.
Yet the testimony of those closest to Harriet and John – Harriet’s daughter Helen and John’s colleague and collaborator, Alexander Bain – confirms Harriet’s contribution to this philosophical work. Helen verified Harriet’s role in On Liberty when she told Kate Amberley that “in ‘Liberty’ [Kate] should see [Harriet’s] mind & thoughts for they were mostly hers.” Alexander Bain, a vocal critic of Harriet, admitted in his biography of John that “the Liberty was the chief production of his married life: and in it, [Harriet] bore a considerable part.”
Finally, authorship can be established on the testimony of those involved in the collaboration. Many feminists have uncovered women’s unacknowledged contributions to texts published under men’s names. But here, John quite openly announces Harriet’s various roles, and does not thank Harriet merely for vague moral support or inspiration. He specifically thanks her for her contribution to the ideas and the writing published in his name alone. John makes it clear that the writing was actually the result of “the fusion of two” minds working together.
According to John, Harriet’s ideas are most evident in On Liberty. As early as the summer of 1853, John indicates in a letter to Harriet that the next book they plan to write will be their best: “But I shall never be satisfied unless you allow our best book, the book which is to come, to have our two names in the title page. … [T]he book which will contain our best thoughts, if it has only one name to it, that should be yours. I should like every one to know that I am the Dumont & you the originating mind, the Bentham, bless her!”
Just as Dumont made Bentham’s ideas intelligible to the public, John hoped to present Harriet’s insights in a coherent text. Both in his public autobiography and in private letters he attributes co-authorship of On Liberty to Harriet. “The Liberty,” he wrote, “was more directly and literally our joint production than anything else which bears my name, for there was not a sentence of it that was not several times gone through by us together, turned over in many ways, and carefully weeded of any faults, either in thought or expression, that we detected in it. It is in consequence of this that, … it far surpasses, as a mere specimen of composition, anything which has proceeded from me either before or since. With regard to the thoughts, it is difficult to identify any particular part or element as being more hers than all the rest. The whole mode of thinking of which the book was the expression, was emphatically hers.”
In a letter he wrote, “I can at least put in order for publication what had been already written in concert with her.” Written shortly after Harriet died, the dedication of On Liberty continues the recognition and was John’s first public proclamation of her contribution to their work. All of J.S. Mill’s statements about their collaboration have been brushed aside by historians of philosophy as if they were sentimental blather, devoid of fact. The evidence of their collaboration is so persuasive, however, that it is impossible to ignore.
The reasons for claiming sole authorship when work is collaborative easily come to mind: fame, fortune, tenure, ego. Why someone would claim co-authorship when the work is solo is less obvious. Critics use the attacks on Harriet mentioned above to answer that question: Harriet clawed her way into John’s heart and then psychologically pistol-whipped him into claiming she was the co-author of some of his work. Furthermore, none of the manuscripts are in her handwriting, so that “proves” that she did not write the texts.
Collaboration is the result of verbal debates, suggestions, additions, creation, and editing that occur in private. There is no physical record of the musings and questions that lead to arguments that someone then writes into a text. No smoking guns reveal how half thoughts become whole during a conversation. In addition, because few academics are familiar with the first-hand experience of collaborative writing, they do not understand the futility of asking whose ideas are whose when an essay is jointly produced. A more trustworthy analysis of any common writing project comes from the participants themselves.
Despite the dedication to On Liberty, and despite his claim in 1853 that the book should bear both their names as authors or, if only one, hers, only John’s name originally appeared on the book. Why? Does this mean that he did not think Harriet was co-author?
In addition to the predictable observation that a book by J.S. Mill would receive a fairer hearing by a philosophical audience than one by John and Harriet Mill, John may have also hesitated to place Harriet’s name on a text he feared would be seen as “an infidel book”. The tirade against Christianity in On Liberty and the questioning of the belief in God were the most stridently anti-religious writing yet published in Mill’s name. John was quite conscious of the sanctions an author might suffer for stating such unpopular beliefs. John may have wanted to shield her name from “a chorus of indignant protest.”
I am disappointed that Harriet’s name was not on the cover of On Liberty as John intended when they began to write the text. The decision to point to her contributions only in the dedication may have been the patronising protection of his dead wife’s legacy, or a desire to make sure the work received an unbiased reception. But all his other claims that Harriet is co-author cannot be overturned by the name missing on the title page.
One final feature complicates how we attribute authorship to this text: namely, its fame. Historians regularly misascribe co-authorship to The Subjection of Women, even though it was written two years after Harriet died – and it contains many ideas that Harriet would have despised. But that work is about women. And it’s not considered very important by most philosophers. So, it can be labelled co-authored without much ado. But On Liberty is important. Therefore, a) it cannot be co-authored and b) it cannot be by a woman. Women have historical and intellectual permission to deal with feminine topics, but it seems that permission is either not granted or revoked when they stray beyond those boundaries.
Harriet and John believed philosophy often required at least two parents. In letters, forewords, dedications, autobiographies, and drafts, they naturally refer to “our” ideas and the work “we” did. Throughout the history of philosophy, no one has quite believed what they were saying, and historians of philosophy have assumed a single-parent household for every philosophical child. However, the work of Harriet and John can serve as an example of an alternative model of philosophical production. Their cooperative production demonstrates how future philosophers might come to see philosophy as “plural work”. We philosophers need to consider what we have lost by writing alone and what we could gain by writing collaboratively.
On Liberty, written by a woman? Yes.
Just as it is time to recognize that Eli Whitney was hired to build a machine conceived by Mrs. Catharine Littlefield Greene, we also need to properly acknowledge Harriet’s co-authorship of On Liberty. Neither partner was a scribe for the other – instead they creatively exceeded the sum of their parts.
Jo Ellen Jacobs is professor of philosophy at Millikin University and the author of The Voice of Harriet Taylor Mill (Indiana University Press)
Jonathan M Riley celebrates 150 years of J.S. Mill’s classic essay with an overview of its central arguments.
Before discussing the essay, it’s worth emphasising that Mill’s grand theory of liberal democracy is revealed for the most part in his other writings. He argues in Considerations on Representative Government (1861) that a form of constitutional democracy is the best kind of government for any civil society, for example, and he insists in Principles of Political Economy (1848) that a form of free market capitalism will remain the most expedient kind of economy for the foreseeable future, although he leaves open the possibility that a decentralised form of socialism might eventually emerge spontaneously as egalitarian reforms of the laws of private property are gradually introduced. The philosophy he defends in Utilitarianism (1861) is nothing like any version of utilitarianism as that family of doctrines has come to be understood today, and, in my opinion, is entirely compatible with liberal democratic institutions. These various key elements of his liberal democratic theory are not the main concern of On Liberty, and readers must not expect to find his views on them spelled out in the essay.
The main aim of On Liberty is to defend “one very simple principle” of individual liberty. As I understand it, the principle states that any mature individual – anyone who is “capable of rational persuasion” – has a basic moral right to complete liberty with respect to his or her “purely self-regarding conduct”. This basic right ought to be recognised and enforced as a legal right in every civil society. By implication, society has no legitimate authority to coercively interfere with the individual’s purely self-regarding choices. Government has no legitimate authority to force the individual to obey social laws that regulate self-regarding acts and omissions, for instance, and organised pressure groups have no legitimate authority to compel the individual to comply with social customs of self-regarding conduct.
To appreciate the simple principle of liberty and its important practical implications for civil societies, key ideas must be clarified. Mill is clear that by a “right” he means a claim on society to protect some important personal interest, in this case, the mature person’s interest in complete liberty with respect to self-regarding conduct. Other people have correlative duties to respect the individual’s claim, and society must employ expedient penal sanctions to compel the fulfilment of these duties by people who otherwise would fail to do so. The individual who possesses the claim to liberty should also possess powers to enforce or waive the correlative duties, as well as immunities from having his or her claim altered or abrogated by other people. The Millian right to liberty is thus a complex instrument, including powers and immunities surrounding a core claim.
Mill’s idea of liberty must not be conflated with his idea of a right. By “liberty” he means “doing as one desires”. Whereas a right is a claim to be free from coercive interference by others, liberty in his sense is spontaneously choosing to do something in accord with one’s own judgement and inclinations. Liberty is supposed to be complete or absolute with respect to self-regarding conduct: the individual is supposed to be free to choose among all of his or her feasible self-regarding acts and omissions as he or she wishes. Liberty is said in turn by Mill to be the primary source of the individual’s self-development or individuality, “one of the principal ingredients of human happiness.”
Mill indicates that by “purely self-regarding conduct” he means conduct that does not “directly, and in the first instance” harm or benefit other people, or, if it does, “only with their free, voluntary, and undeceived consent and participation”. Moreover, although he is not so clear about this, he suggests that “harm” is not mere dislike or annoyance but some form of “perceptible damage” such as physical injury, financial loss, breach of contract, injury to reputation, and so forth; just as “benefit” is not mere liking or agreement but perceptible gain or improvement of one sort or another.
Consistently with this, he tells us that self-regarding conduct can, and indeed should, affect other people’s feelings – their likes and dislikes. But it does not alter others’ personal circumstances without their consent. Conduct is not self-regarding if it forces another person to experience any perceptible damage to his or her body or property, for example, or if it compels the other to experience a perceptible improvement by consuming medicine, undergoing surgeries, or investing economic resources against his wishes.
According to Mill, society’s legitimate authority to compel obedience to rules is properly restricted to what he calls a “social” domain of conduct. Properly defined, an individual’s social conduct, or what some commentators (though not Mill) call “other-regarding” conduct, includes any act or omission that directly and immediately causes others to experience perceptible changes in their personal circumstances without their own consent and participation. An individual’s social conduct includes coercively interfering with another’s self-regarding choices. Society can legitimately use coercion against a mature individual only, Mill insists, to prevent the individual from harming others without their consent, or to punish the individual for having done so. Since non-consensual harm to others includes coercive interference with their self-regarding choices, society legitimately uses coercion to deter or punish coercive interference with another’s self-regarding choices.
The simple principle of individual liberty evidently does identify particular rights as rights which ought to be recognised and enforced by the laws and customs of every civil society, namely, the rights of self-regarding liberty and individuality. If sex between consenting adults is purely self-regarding conduct under some conditions, for instance, then adults should have a right to spontaneously engage in sex under those conditions if they wish. The sex is not genuinely consensual, of course, if one party misleads the other by concealing his or her sexually transmitted disease or by lying about his or her birth control measures. Nor is it purely self-regarding if third parties are directly and immediately harmed without their consent, such as when obligations of a marriage contract are broken, or unwanted children are brought into being and aborted. When it is purely self-regarding, however, mature individuals should be perfectly free to engage in sex with other consenting adults as they please. As Mill puts it in a diary entry dated March 26, 1854, “what any persons may freely do with respect to sexual relations should be deemed to be an unimportant and purely private matter, which concerns no one but themselves.”
To take another example, a Millian case exists that adults should have a right to freely consume drugs and alcohol in the privacy of their own homes or clubs because such consumption is purely self-regarding conduct. This does not preclude expedient social regulation of the sellers and marketers of drugs and alcohol, because trade is not self-regarding conduct but rather social conduct: sellers may harm consumers without their consent by selling defective products, for example, and marketers may force one another out of business as a result of their competitive practices. But an expedient scheme of social regulation must not amount to social prohibition of any sale of the drugs and alcohol: such a ban would violate the mature consumer’s right to freely use these products in self-regarding ways.
Mill recognises that self-regarding conduct typically will, and should, have an impact on others’ feelings, including their convictions and likes or dislikes respecting the agent of the self-regarding conduct. Indeed, he emphasises that certain “natural penalties” are inseparable from self-regarding liberty because other people have equal rights to freely avoid the company of any adult whose self-regarding conduct displeases them: “We have a right … to act upon our unfavourable opinion of any one, not to the oppression of his individuality, but in the exercise of ours … [A] person may suffer very severe penalties at the hands of others, for faults which directly concern only himself.”
But others’ mere dislike never justifies coercive interference with an individual’s self-regarding liberty. Those who are merely upset by the self-regarding conduct have equal rights to freely avoid the agent and warn their acquaintances against him. Their acting on their aversion does not injure them without their consent, though it may result in harm to the agent of the self-regarding conduct: “the natural penalties … cannot be prevented from falling on those who incur the distaste or the contempt of those who know them.” Given that all parties involved form their own beliefs and preferences without coercive interference, these consenting adults are reasonably held responsible for any acts and omissions they choose to take based on these feelings.
Moreover, it cannot be overemphasised that self-regarding conduct can directly and immediately harm others as long as they consent to the damage, whatever conceptions they may have of their vital interests as human beings. Strictly speaking, Mill’s concern is not to prevent perceptible damage to others but rather to prevent harm inflicted on them without their consent. As he indicates, society does have legitimate authority to check in non-coercive ways whether people are genuinely consenting to participate in activities that directly and immediately cause set-backs to their personal circumstances (apart from their feelings). This is so even when others are not involved but the individual is inclined to engage in self-injurious behaviour. Thus, public officials may take a number of expedient steps, including: discussing with any individual whether he really knows what he is doing and even attempting to persuade him to abandon potentially self-injurious courses of conduct; posting signs, product labels and other warnings of potential injuries; and even demanding that he provide formal evidence of his wishes before allowing him to venture on some highly dangerous course of action. Once society has assured itself that an agent is genuinely consenting to self-harm, however, that is the end of the matter: society has no proper authority to coercively interfere with the self-regarding conduct that is the source of the consensual harm.
Society’s authority to scrutinise whether consent to self-injury is genuine shows that the self-regarding sphere is not properly a matter of social indifference. Moreover, if society determines that an individual’s consent is not really “free, voluntary and undeceived”, then it may resort to coercive interference, if necessary. If the individual is being misled by others, for example, then society has a right to employ coercion to prevent the other people from inflicting grave harm on him without his consent. Similarly, if an individual is found to be a child or otherwise incompetent, then society can rightfully interfere with his own behaviour to prevent him from harming himself unintentionally. Strictly speaking, the interference with self-injurious behaviour is not coercive in these cases because the behaviour is not truly intended by the individual. Rather, the individual is being forced by others or by his own incompetence to engage in unintentional behaviour that carries a definite risk of harm to self. He does not genuinely consent to engage in such behaviour.
Society may also reasonably decide that genuine consent is simply impossible in some situations such as slavery contracts or even marriage contracts in perpetuity, in which case it may enact into law its disapproval of such contracts, its refusal to enforce them, and its guarantee that one party will not be permitted to force another to keep to the terms of such contracts. These measures do not imply, however, that practices like voluntary slavery and marriage without possibility of divorce will be completely stamped out. Such practices may well persist, despite society’s disapproval, until all individuals learn that they have better ways to manage their self-regarding affairs.
Mill argues that society can legitimately use coercion against an individual only to prevent the individual from harming others without their consent. But it must not be thought that prevention of non-consensual harm to others, though necessary, is always sufficient to justify social coercion. As he emphasises, it may be generally expedient to permit people to freely compete to some extent even though the losers will be harmed without their consent, because the social benefits of free competition may often outweigh the harms. Such a laissez-faire approach is typically held to be generally expedient with respect to trade and speech, for example, although this does not mean that sellers and speakers should have complete liberty to market products or opinions as they please no matter how severe the harms caused to others without their consent.
Social coercion can also be legitimately used to prevent people from distributing benefits to others because the distribution may put some (perhaps many) at a relative disadvantage and thereby harm them without their consent. Nevertheless, this does not mean that the prospect of such relative disadvantage always justifies coercion to prevent it. Rather, general expediency may dictate permitting some individuals, including even the political authorities, to harm others in this way without their consent in order to provide large social benefits. Mill allows, for example, that the authorities should sometimes raise tax revenues to fund schools, museums, or other public goods that many people oppose and will not voluntarily help establish with private funds. Even so, he frowns on government provision of these goods and services in a civil society except as a last resort, because he wants to forestall the growth of a large-scale central bureaucracy that smothers independent individual initiative.
The argument for a basic right to complete self-regarding liberty does not presuppose that everyone must agree that he personally has vital interests in liberty and individuality. Mill recognises that many members of civil societies may not have any desire to do as they please in self-regarding matters, so that reasons other than the promotion of their own individualities must be supplied to make the ethical case for the basic right. What social benefits can the many expect if everyone is given the right to absolute self-regarding liberty? Mill points to such benefits as: new and better ideas, practices and technologies discovered by the few individuals of “genius” through their research and “experiments of living”; more effective government as a result of critical advice supplied by the intellectual and moral elite; the encouragement of personal diversity and tolerance as opposed to a totalitarian uniformity; and the prevention of a “despotism of custom”, which he associates with social stagnation and decline. Evidently, he believes that every mature individual, whatever his conception of good, can appreciate these sorts of social benefits. The argument does not presuppose that everyone views his own good in the same way, such that vital interests in self-regarding liberty and individuality are seen by all as principal ingredients of personal well-being.
In On Liberty, Mill concentrates on making the case that every civil society should recognise and protect an empirically observable “purely self-regarding” domain of conduct as a minimum domain of absolute liberty for every mature individual. His radical step is to argue for a basic right to complete self-regarding liberty as an essential component of civil and political liberty.
Jonathan M Riley is professor of philosophy at Tulane University and the author of J.S. Mill: On Liberty (Routledge)
Reflection on the ethics of climate change can get you into trouble. It can get you into philosophical trouble, because it’s easy to make mistakes when thinking about rights and wrongs on a planetary scale. Morality, whatever it is, seems to waver out of focus when applied to the big picture. It doesn’t like that sort of thing and feels more comfortable in homey contexts, probably because it grew up in small towns and copes best with little wrongs. Reflection on the ethics of climate change, even on a smaller scale, can get you into other sorts of trouble too. Primarily, it annoys other people. Not only can it end up sounding like moralizing, rather than moral philosophy, but it gets us where we live. It issues in the conclusion that our comfy lives of high-energy consumption have to change, that we in the developed world should make some serious sacrifices for other people. Arriving at this conclusion isn’t all that difficult, but seeing it clearly and acting on it certainly is. We’ll have a go at the bare bones of it in what follows.
A lot of people accept that the present state of play is somehow unjust or wrong. You can arrive at this conclusion in a paragraph. Burning fossil fuels thickens the blanket of greenhouse gases which swaddles our world and warms it up. The warmer our planet becomes, the more suffering we are in for – suffering caused by failed crops, hotter days and nights, rising sea levels, dwindling water supplies, altered patterns of disease, conflict over shifting resources, more dramatic weather, as well as the suffering of our fellow creatures who are also struggling to adapt. The warming happens partly because our planet’s carbon sinks cannot absorb all of our emissions. The sinks are therefore a limited and valuable resource. Some countries on the planet, the richer, more developed, industrialised ones, have used up more than a fair share of the sinks and have therefore caused more of the suffering which is under way and on the cards. If you think a little about fairness or justice or responsibility or the importance of doing something about unnecessary suffering (pick whichever works for you), then you will quickly be drawn to the conclusion that the rich countries have a moral obligation to reduce emissions, probably dramatically. Maybe they should pay for a few sea walls, possibly foot the bill for a bit of disaster relief, too. The fact that they haven’t taken meaningful action is an obvious wrong. It seems easy enough to see.
Peter Singer, who is better at this than I am, needs only two sentences to make the point in his book One World: “To put it in terms a child could understand, as far as the atmosphere is concerned, the developed nations broke it. If we believe that people should contribute to fixing something in proportion to their responsibility for breaking it, then the developed nations owe it to the rest of the world to fix the problem with the atmosphere.”
What are the proportions actually like? Brace yourself for a few numbers (all from www.unstats.un.org). The USA, with less than 5% of the world’s population, is responsible for the largest share of carbon dioxide emissions by country each year, about 26% of global emissions. China is second on the list, with 14.5% of global emissions. Try to bear in mind, as you think about this, that China has about a billion more people in it than the United States. The numbers then drop off pretty quickly, with Russia responsible for about 6% of the global total of emissions. You can come to the conclusion that the US is most responsible for the damage being caused to our planet. It therefore has the largest obligation to do something about it. Others in the West have similarly-sized obligations. The fact that so little has been done is a moral wrong.
Thinking a little about room for reduction and capacity for reduction can cement this thought in your mind. Consider room for reduction first. Not all emissions have the same moral standing. The greenhouse gases resulting from a long-haul flight for a weekend break are not on a par with the emissions resulting from the efforts of a subsistence farmer toiling in the field. As Henry Shue puts it, some emissions are luxury emissions and others are subsistence emissions, and if cuts must be made, it’s the former which have to go first. It goes nearly without saying that the West emits more luxury emissions than the developing world, and it therefore has more room for reduction.
Think now about capacity for reduction, the varying abilities of states to make cuts in emissions or otherwise shift resources around. It’s pretty clear that the West is best placed to make large cuts in a number of senses. The developed world has the strength to move mountains; it has the infrastructure and the technological know-how, the manpower, the money, and on and on and on. It has not just the room for reduction, but also the capacity to do what’s right. Again, the fact that it has done so little smells like a moral wrong.
You can come to the conclusion that the present situation is morally outrageous. The developed world is primarily responsible for a problem which results in a lot of unnecessary suffering. If the sharp end of some predictions has it right, the amount of suffering ahead is nearly too horrible to contemplate. Worse, the developed world has the room and capacity to do what’s right, but fails or refuses to do so. If, like me, you see all of this as clearly wrong, a moral outrage, you might be drawn to an uncomfortable conclusion. It might be that our individual lives are morally outrageous too. It’s consistency, nothing else, which leads to this unpleasant thought. Try not to take it so personally that you stop taking it seriously. If it helps, it’s not just you, but me and everyone else living lives of high energy-consumption.
Consistency is at the heart of reflection on moral matters. Morality, whatever else it is, insists on a kind of humane consistency. If I am in a certain situation and contend that I should be treated in such and such a way, then morality demands that I conclude that others in a similar situation are entitled to that sort of treatment too. It’s why the categorical imperative enshrines the notion of universalizability, and why a utilitarian thinks everyone’s pain matters. If the thoughts just scouted above lead you to the conclusion that the behaviour of the West is a moral outrage, then maybe consistency of principle will lead you to the conclusion that your own behaviour is a moral outrage too.
If it’s correct to think that the US and other countries in the West are wrong to do nothing despite being responsible for the lion’s share of emissions, then it’s correct to think that we are wrong to do nothing in our everyday lives despite being responsible for the lion’s share of emissions per capita. If you live in the US, your yearly activities result in more than 20 metric tons of carbon dioxide on average. Australians contribute 19 metric tons, Canadians emit about 18 metric tons on average, and UK residents are responsible for about 10 metric tons. Many people, residents of more than a third of the countries on the planet, are responsible for less than a metric ton each year. Some are responsible for no measurable emissions at all.
If it’s correct to think that the West does wrong by doing nothing despite having the room to reduce emissions and the capacity to do so, then it’s correct to think that we’re doing wrong too, in our everyday lives. Plenty of your emissions are luxury emissions; most do not result from securing the real necessities of life. Probably, also, you’ve got the brains to work out the right course of action. If that’s too rich for you, then maybe you’ve been formally educated for longer than most people on the planet. At least it’s true that you’re the sort of person who reads philosophy without being forced to do so, which maybe suggests that you’re a bright spark. You probably also have a bit of cash to spare, compared to others on our planet. You have a healthy disposable income, if you think about it. You’ve got the brains and the money and the resources generally to do something about your emissions if that’s what you want to do. Doing something, by the way, means a bit more than buying the bulbs and recycling. Your emissions might be as much as 20 times those of others in the world; you might be doing as much as 20 times the damage to the planet. The bulbs are not enough.
I’ll calm down now. The point of these reflections is not to attack my fellow recyclers or to tell you that the long, hot showers in your life are a kind of sin. The aim is to get past a clutch of thoughts which stands in the way of thinking about the ethics of climate change. The thoughts have something to do with the belief that our little effects can’t matter all that much. What difference could overfilling the kettle make? What difference could a flight abroad make? What difference could leaving the DVD player on standby make? If you’re a consequentialist, and it turns out that your effects have insignificant consequences, then how could they possibly be wrong?
Those are good, tough questions, and they come up a lot when one is hunkered down over a drink, arguing about morality and our warming world. I try to answer them with the consistency move I’ve just sketched for you. If you think, for example, that the US does wrong for such and such a reason, then consistency demands that you apply the same principles operative in your thinking about the US to your own life, and see what you get. Sometimes I have to point out that I’m not arguing by analogy or mistaking the properties of a state for the properties of a person. We’ve all read the Republic and know what sorts of trouble you can get into by doing that. The argument is just a demand for consistency in our thinking, and that’s as legitimate a move in a moral debate as you are likely to get.
Thinking about your own, minimal consequences can lead you to one last conclusion. Suppose you conclude that your life has to change, that the individual choices you make every day have to be much greener. Good for you. However, given the way our societies are set up, given the fact that we are all enmeshed in a fossil-fuel burning world, it’s hard to make those sorts of choices. You can worry that every well-intentioned effort to reduce your carbon footprint results in some backhanded wrong. You can choose to ride a bike to work and depress yourself with the thought that you’ve just made room for one more car. Having breakfast without doing some sort of damage can seem impossible. Maybe no choice you make on your own can get you entirely clear of moral trouble. Thinking along these lines can tug in two directions. Some people find themselves dragged back to the thought that nothing they’ll ever do can make a difference. Others surprise themselves with the thought that we have to change not just our lives, but our societies.
This sort of change is something you and your little consequences can’t bring about on your own. It can go either way. You can fall back into the thought that nothing you can do will make a difference. Or you can set your jaw, give in to a high-minded hope or two, and push with others for a greener world.
James Garvey is the author of The Ethics of Climate Change (Continuum).