Author Archives: TPM

Sound the Philosopher-Signal: We need to save A level Philosophy

Zara Bain argues philosophers need to recognise that A level philosophy may soon cease to exist, and mobilise to ensure that this doesn’t happen. This article appears only on the web.

In January this year, the exam board AQA released a new specification for its A Level in Philosophy. Despite having agitated for changes to the specification for years, many teachers threw up their hands in disgust. Staff at AQA have been subject to personal abuse. The board have been called “totalitarian” for removing Political Philosophy and for the absence of prolonged consultation with the community-at-large in the redevelopment of the specification. Articles appeared in the press slamming AQA for the changes, a rare instance of national attention made all the worse because the perspectives offered rested mainly on factual inaccuracy and shoddy argument.

All this despite it having leaked into the public domain that the future of A level Philosophy is under threat. A senior source at AQA suggests that the cost to the board’s reputation has been so substantial that the subject narrowly avoided being scrapped in Autumn 2013. It may not be offered for teaching beyond a single cohort of AS and A2 students unless we can reverse the perception that the subject and the community that delivers it are “toxic”.

That there is a new specification at all results from a last-ditch attempt by a handful of individuals at AQA to stop it being thrown on the pyre. In a matter of weeks the specification was redrafted with input from teachers, professional philosophers, the British Philosophical Association and a world-leading expert in assessment practices. That is why there was no year-long consultation.

Until its approval by OfQual in Spring 2014, it was not clear that A Level Philosophy would be available for teaching in September. OfQual approval means it will go ahead even though the course’s future beyond June 2015 remains uncertain. Not one other board has offered a replacement course should this one be cut. If we and our subject are toxic, what would motivate them to do so?

The old specification: not the gold standard some think

The charge of toxicity is, of course, unfair. The history of A level Philosophy is complicated: for a long time, teachers and experts expressed grave concerns over the handling of the subject. This cannot be overstated. People are pissed off, and they have good historical reasons for this.

Confidence in the course from anyone but AQA’s senior examining team was low. Previous specifications were criticised for philosophical inaccuracy and problematic assessment practices. The exams were not entirely unlike students being dropped into an A level equivalent of a dystopian game of survival, run by a mysterious authority whose documented positions only sometimes lined up with what it practised.

Predicted grades regularly failed to match actual grades; re-marks were not only common but expected; and exam centre after exam centre wrote letter after letter to AQA complaining about just how hard it was to convert student and teacher effort into stable, predictable results. The British Philosophical Association (BPA) commissioned reports evaluating previous incarnations of the course, and a teacher-led Campaign to Improve AQA Philosophy flourished on Facebook.

In these days when results matter not only to league tables but to students concerned to maximise their chances of getting into a good university, school and college managers became increasingly unhappy stumping up the resources for this unpredictable, unresponsive and often small-cohort subject. Teachers have been forced to drop the course and switch to A Levels in Religious Studies or else risk losing their jobs altogether. This is the fault not of the new specification, but of the old one.

The competition with Religious Studies is nothing new, and Philosophy is losing

AQA’s A Level Philosophy sits amongst those small and specialist subjects whose existence depends on revenues generated by higher-entry subjects. Only 6000 students take this course at AS and only 50% of those continue to full A level. Compared with subjects like History, Physics or Economics, the numbers – including conversions from AS to A2 – are low. It makes the board no money: they run it to fulfil a supererogatory notion that it is in the public interest to do so.

This is the only formal pre-university qualification in Philosophy in England, Northern Ireland and Wales. There are no GCSEs in Philosophy and no purely philosophical alternative A level offered by another board.

The notable exceptions are the popular modules in Philosophy of Religion & Ethics offered within various Religious Studies A levels. Many students who describe themselves as doing “Philosophy A level” actually study these courses.

Over three times as many students sit exams in Religious Studies across Edexcel, AQA and OCR as sit AQA’s Philosophy course, with 85% of them taking one or both of these philosophical modules. This has led some to suggest that Religious Studies benefits from this false conflation, given that 16-18 year olds are hungry for the opportunities for critical theoretical reflection absent from many A levels, while remaining keenly aware of grades.

These courses, while perfectly adequate in their context, restrict their focus to two very specific areas of Philosophy, both of which remain tethered to religious studies and its discursive methods, a fact recognised by the BPA.

This has consequences for students entering Philosophy degrees on the back of these courses, who are frequently heard to opine that questions in philosophical logic and epistemology bear very little resemblance to the what and how of the Philosophy with which they are familiar.

The popularity of Philosophy of Religion & Ethics modules – as options within both A Level Philosophy and Religious Studies – explains their inclusion on the new specification. In a situation where restoring teacher and centre confidence is paramount, and where playing to teacher strengths is imperative if the course is to be delivered well, this is reasonable. It’s also working: centres offering only Religious Studies A level are already making enquiries about switching to AQA’s new Philosophy course.

If there’s competition between these subjects, it’s nothing new. So far, Philosophy has been losing. This new specification is clearly an attempt to help it stay in the game.

The new specification is a good start, but much work remains to be done

None but the British Philosophical Association have spoken out in support of the new specification. They claim it represents a “step in the right direction” albeit one “constrained… by OfQual and AQA processes.” Given the realities of nationalised pre-university qualifications, this is unsurprising. Indeed, the BPA have praised AQA for their willingness to work with academic philosophers to ensure a credible future for the course.

The BPA are right. The suggestion that this specification is worse than the previous one is simply false, as are the accusations that it creates competition with Religious Studies that did not already exist, or that it promotes rote learning over philosophical dexterity.

By making argument skills and textual engagement baseline requirements in every single module, this course comes far closer to the rigours of first-year undergraduate Philosophy than ever before. Prima facie, this course will be easier to teach: gone is the mind-reading demanded by the previous specification and its mark schemes. Gone, too, are the assessment problems that led to so much justified shouting and attrition, although a new senior examining team will be needed to steer the qualification aright.

Of course, the new specification is not impervious to critique. There are legitimate concerns over the way the assessment objectives have been reframed and communicated in the sample examinations and mark schemes. Clarification is needed on the demandingness of textual components. Options have been removed, though this is true for much larger-entry subjects too: it represents a trend in A levels across the board, not a directed attack on the pedagogical preferences of philosophy teachers.

If we want options, we need to look at how to achieve three to four times the current number of students taking Philosophy at A level. While we’re at it, we might also interrogate the nagging problem of well-qualified Philosophy graduates being unable to qualify as Philosophy teachers unless they first train to teach Citizenship or Religious Studies. These are questions warranting exploration once we’re sure there will be a course at all.

Philosophers: The A Level needs you

Let’s be clear: there is a counterfactual world in which things didn’t turn out this way. In this world, the grievances expressed by teachers, centres, the BPA, resigning examiners and the Campaign to Improve AQA Philosophy were taken far more seriously, far sooner. That this didn’t happen constitutes a failure on the part of AQA and various individuals therein. For the teacher community, weary from years of battling with AQA, this must be acknowledged.

All the same we don’t live in this counterfactual reality: in the actual world, this is the course we’ve got. AQA are not the enemy, since without them there is no A level at all. We need to stop locating spurious grounds to put the boot in as reward for their efforts to salvage the subject, however angry we are over these historical failings.

We face a situation where A Level Philosophy may soon no longer exist for us to lament its supposed flaws at all, and where the general perception of the subject amongst exam boards – including but not limited to AQA – is one of toxicity bordering on viciousness.

We must therefore heed Kant’s point that a thing must exist for us to ascribe properties to it at all – including grounds for complaint. We must also be mindful that over two thousand years after Plato’s defence of philosophers from the accusation of viciousness, we too need to dismantle the negative perception of our subject by rallying to support it.

We must do so to save the only formal pre-university qualification dedicated to the study of Philosophy in England, Wales and Northern Ireland. If we don’t, we risk there being nothing to complain about – let alone teach – twelve months from now.

Zara Bain (@zaranosaur) has taught and examined A level and Undergraduate philosophy for almost a decade and holds degrees in Philosophy from Heythrop College and King’s College London. She tweets about teaching & learning in introductory-level philosophy at @falasafaz and also runs a blog dedicated to highlighting the experiences of disabled postgraduates in academia (http://phdisabled.wordpress.com; @PhDisabled).

Does Philosophy Betray Both Reason and Humanity?

We need a revolution in the academy, argues Nicholas Maxwell. This article appears in Issue 62 of The Philosophers’ Magazine. Please support TPM by subscribing.

Our world suffers from bad philosophy. Universities around the world have, built into their intellectual/institutional structure, a seriously defective philosophy of inquiry we have inherited from the past. This holds that, in order to help promote human welfare, academia must devote itself to the pursuit of knowledge. First, knowledge is to be acquired; then, once acquired, it can be applied to help solve social problems. It is this ‘knowledge-inquiry’ philosophy that betrays both reason and humanity.

The extraordinarily successful pursuit of knowledge and technological know-how has been of immense benefit, and has made the modern world possible. It has also made possible all our current global problems. Modern science and technology have made possible modern industry, agriculture, medicine and hygiene, which in turn have made possible global warming, lethal modern warfare, explosive population growth, the destruction of natural habitats and the rapid extinction of species, the pollution of earth, sea and air, and vast inequalities of wealth and power around the globe.

The problem is the gross and very damaging irrationality of knowledge-inquiry. What we need is a kind of academic inquiry that puts problems of living at the heart of the enterprise, and is rationally designed and devoted to helping humanity learn how to make progress towards as good and wise a world as possible. The basic intellectual aim should be to seek and promote wisdom, understood to be the capacity to realise what is of value in life, for oneself and others, thus including knowledge and technological know-how, but much else besides. ‘Wisdom-inquiry’ along these lines would differ dramatically from what we have at present, academia organised in accordance with the edicts of the false philosophy of knowledge-inquiry.

Wisdom-inquiry gives intellectual priority to the problems that primarily need to be solved if we are to create a better world, namely problems of living – personal, social, global. The central intellectual tasks of wisdom-inquiry are (1) to articulate, and improve the articulation of, our problems of living, and (2) to propose and critically assess possible solutions – possible actions, policies, political programmes, philosophies of life. The pursuit of knowledge and technological know-how emerges out of, and feeds back into, these fundamental intellectual activities. A transformed social science, devoted to helping humanity tackle problems of living in increasingly cooperatively rational ways, is intellectually more fundamental than natural science. Wisdom-inquiry seeks to help humanity learn what our global problems are, and what we need to do about them. A basic task of the university is to help people discover what is genuinely of value in life, and how it is to be realised.

None of this can be done as long as our universities are dominated by knowledge-inquiry. Giving priority to tackling problems of knowledge excludes tackling problems of living from the intellectual domain of inquiry – or pushes the task to the periphery and marginalises its importance. What universities most need to do to help humanity make progress towards a wiser world cannot be done at all – or can only be done very ineffectually. The fundamental endeavour to help humanity learn how to resolve conflicts and problems of living in increasingly cooperatively rational and wise ways cannot be undertaken by the university because to commit the university to such a political programme would, according to the edicts of knowledge-inquiry, sabotage the objectivity of academic inquiry and subvert the pursuit of knowledge.

The charge is very, very serious. Bad philosophy lies at the heart of our current global problems. It is at the root of our current incapacity to tackle them effectively and wisely. One might think that philosophers would be eager either to show what is wrong with the argument, if that is what it deserves, or – if the argument is valid – to proclaim to fellow academics, politicians and the public that our future is threatened by a bad philosophy built into universities around the world, and we urgently need to bring about an academic revolution.

Not a bit of it. The case for the urgent need for an academic revolution, from knowledge to wisdom, has not been taken up, criticised, proclaimed, attacked, fought over. It has been ignored. The silence is deafening.

Do we have the kind of academic inquiry we really need? Is knowledge-inquiry really damagingly irrational, and at the root of many of our current crises, or is it, on the contrary, the best that we can have? What grounds are there for holding that wisdom-inquiry serves the interests of reason and humanity better than knowledge-inquiry? What kind of academic inquiry do we really need? What kind of inquiry could best help us make progress towards as good a world as possible?

These questions ought to lie at the heart of philosophy. At present they are all but ignored. I suggest that philosophers should start to take very seriously the possibility that a bad philosophy of inquiry, inherited from the past, and built into the intellectual/institutional structure of universities round the world, is at the root of many of the troubles of our world today. What philosophers do should take account of this possibility – if philosophy is not to be the intellectual equivalent of Nero fiddling while Rome burns.

Nicholas Maxwell is emeritus reader at University College London, where he taught philosophy of science for twenty-nine years, and author of From Knowledge to Wisdom (Blackwell, 1984; 2nd ed., Pentire Press, 2007).

Good work: What should you do with your life?

Benjamin Todd on a philosophical non-profit organisation which asks, what should you do with your life? This article appears in Issue 64 of The Philosophers’ Magazine. Please support TPM by subscribing.

“What career should you pursue if you want to make a difference?” It turns out that this is a largely philosophical question, but it has received very little attention from philosophers – and good luck finding a careers adviser who can help you navigate moral philosophy.

It’s clear that your choice of career is an immensely important decision, and not just to you. There are huge problems in the world. More than a billion people live on less than $1.25 a day. One in one hundred people take their own lives. Civilisation itself faces risks from climate change and nuclear weapons. But these problems can be solved, and the contribution you make depends on your choice of career.

What’s lacking is information and guidance to help you navigate the maze of tricky moral dilemmas and messy empirical issues that arise from trying to make a big difference with your career. Despite philosophy’s alleged irrelevance, what’s needed is philosophically informed careers advice. Philosophers can address these tough moral questions, and the clear thinking encouraged by philosophical training makes them well placed to navigate the empirical problems too.

We called ourselves 80,000 Hours because that’s roughly how much total time you have to make a difference in your career. That’s a lot of time, but it’s also finite. The idea behind the name is that it’s worth investing the time to really think about what you’re going to do with those hours. If an improved career choice can make that time 1% more valuable, then it would be worth spending 800 hours (5 months of full-time work) thinking through how to do that before getting started. It seems like we often spend much less time thinking about our choice of career, and there’s much more than 1% of the value at stake.
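To make the arithmetic explicit, here is a back-of-the-envelope sketch. The 80,000-hour figure and the 1% improvement are from the article; the 40-hour working week used for the conversion is an assumption.

```python
# Back-of-the-envelope version of the calculation above.
# The 80,000-hour figure and the 1% improvement come from the article;
# the 40-hour week used for the conversion is an assumption.

career_hours = 80_000        # rough total working hours in a career
improvement = 0.01           # a career choice made 1% more valuable

break_even_hours = career_hours * improvement
print(break_even_hours)          # 800.0 hours worth investing, break-even

weeks = break_even_hours / 40    # assuming 40-hour working weeks
print(weeks)                     # 20.0 weeks, roughly 5 months
```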

Ultimately we’re interested in helping people use the 80,000 hours they have in the best possible way – finding the ethically optimal career. In practice, this means we care a great deal about which careers have the largest positive consequences for the world. As Rawls said, “All ethical doctrines worth our attention take consequences into account in judging rightness.” The consequences of your different career options, as a wealthy westerner, vary a great deal. Some careers are likely to be net harmful. In others you can literally save thousands of lives. Because the world is like this, the size of the consequences of different careers is one of the most morally significant factors you can consider. That’s not to say only consequences matter. It’s just that they seem very important given how the world is today.

Another reason why we think it’s useful to consider which careers have the largest positive consequences is because no one has systematically tried to answer the question before, and there is a lot of relevant data out there. This means, despite the difficulty of the question, we think we can make progress on it.

In practice, what is 80,000 Hours doing? Each week, we do in-depth coaching with one or two people. We work out the most important, resolvable questions they have about the potential social impact of their options. Then we try to answer these questions. We write up everything for our blog. Over time, we’re building up an answer to the overarching question “in which career can you have the largest social impact?”

What issues do we tend to focus on? First, we help people pick a cause – e.g. global health, climate change, nuclear disarmament and so on. We think the potential for impact in some causes is hundreds of times higher than in others, so working within the right cause is extremely important. For this, we draw on a significant body of research and data created by economists and by groups like the charity evaluator GiveWell.

Second, we evaluate specific career paths. Within career paths, we focus on two factors. We look at what leverage the career gives you to effect change, whether that’s a public platform, influence over an organisation, money or something else. We also look at how much career capital you gain; that is, how much it improves your ability to get better opportunities in the future. In particular, we evaluate the usefulness of the skills you gain, the quality of the network and what you gain from the credentials provided. To do this evaluation, we conduct lots of informational interviews, gather research where available, and make back-of-the-envelope estimates.

Our team has found studying philosophy useful in coming up with these broad strategies, but how exactly does philosophy come up in the specific questions we look at? Here are a couple of examples of how philosophy matters in career choice.

First of all, a philosopher might wonder what “making a difference” actually means. “I want to make a difference” is the slogan of ethical careers, but it’s rarely examined. It seems that people often make a pretty fundamental mistake when using it, which means that thousands of people end up incorrectly comparing their options.

To us, it seems clear that “make a difference” means “do some good that wouldn’t have happened otherwise” i.e. do good relative to the counterfactual situation in which you don’t take that action. But people often don’t assess the counterfactual when aiming to make a difference with their careers.

If you make a serious attempt at evaluating the counterfactual, it leads to a pretty dramatic shift in how you view careers that make a difference. Here’s one example. Suppose a charity runs a huge campaign raising money for medicine to prevent deaths due to HIV/AIDS, and raises millions of pounds. Their direct impact was to raise a huge amount of money for a good cause. Champagne all round.

But what would have happened if they hadn’t run the campaign? Some of the donors would have given the money anyway. And some of them would have given the money to other charities instead. That’s because many people give a more or less fixed amount of their income to charity each year. If they give to you, then they become slightly less inclined to give over the rest of the year. In fact, it seems like the average donor is of this type. We know this because the total fraction of national income given to charity is fairly stable over time. It probably is possible to raise money that wouldn’t have been given to charity otherwise, but it’s difficult and requires innovative techniques. (For instance, we think most of the money raised by our sister charity, Giving What We Can, is genuinely additional because they encourage people to donate at least 10% of their income, and very few people would do this otherwise).

Now consider, if lots of fundraising drives are not really raising new money, then they’re just moving money from one charity to another, and so at least some fundraisers are making things worse. That’s because some will be taking money from more effective charities and moving it to charities that can’t do as much with the money.

(Geek’s aside: There’s reason to think that charity effectiveness has a log-normal distribution, i.e. there are lots of middling charities and a small number of really good ones. If that’s true, then mean charity effectiveness is much higher than median effectiveness, and the large majority of charities fall below the mean. That would mean that the majority of charity fundraisers are reducing the good done by the charity sector.)
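A quick simulation makes the aside concrete. This is a sketch only; the log-normal parameters below are arbitrary assumptions chosen for illustration, not estimates of real charity effectiveness.

```python
# Illustrative only: simulate charity 'effectiveness' as log-normal and
# compare the mean with the median. Parameters are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
effectiveness = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)

print(f"mean:   {effectiveness.mean():.2f}")
print(f"median: {np.median(effectiveness):.2f}")
print(f"share of charities below the mean: "
      f"{(effectiveness < effectiveness.mean()).mean():.0%}")
# With these parameters the mean is roughly three times the median, and
# the large majority of charities fall below the mean -- exactly the
# shape the aside describes.
```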

So, by thinking carefully about the counterfactual, we see that an activity most people think is clearly good – fundraising for charity – may often make things worse. There are lots of other examples of how forgetting to evaluate the counterfactual when making a difference can give a seriously misleading impression of your impact. You can see a few more on our blog. For instance, we’ve shown that doctors actually don’t save many lives, and there are reasons to think it can be good to work in harmful industries. A little bit of conceptual thinking turns out to be really important for making a difference.

Another example of philosophy figuring into career choice comes up when we try to weigh the interests of future generations. There’s a pretty strong argument that what matters most about your actions in terms of consequences is their effect on the far future. This argument was made in a recently published philosophy thesis, “On the overwhelming importance of shaping the far future” by one of our trustees, Nick Beckstead. The basic idea is this: (i) future civilisation has the potential to be extremely long and valuable, (ii) our actions today can have a non-tiny impact on that future (for instance, we could trigger a nuclear war that wipes us out), (iii) thus, the impact of our actions on the trajectory of future civilisation is likely to be the most important thing about them.

We can illustrate this with some extremely rough numbers. The Earth will remain habitable for about a billion years. If we don’t wipe ourselves out, then civilisation could last at least that long. If you think there’s even just a small chance of not wiping ourselves out, then the expected length of civilisation is much longer than what has come before. If you think there’s even a small chance of colonising other planets, then the expected length of civilisation could be much more than a billion years.

If there are realistic actions that could decrease the chance of extinction in the next hundred years by more than one in a million, then they would be extremely valuable compared to the impact we can have on people in the near-term. In fact, if the expected length of civilisation is a billion years, a one in a million reduction in the chance of extinction would be as valuable as 1,000 years of future civilisation.
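Spelled out, the expected-value arithmetic behind that claim is just a multiplication. A sketch using only the figures quoted above:

```python
# Expected-value sketch of the figures quoted above.
expected_future_years = 1_000_000_000   # a billion years of civilisation
risk_reduction = 1e-6                   # one-in-a-million lower extinction risk

# Expected years of future civilisation gained:
gain = expected_future_years * risk_reduction
print(gain)   # 1000.0 -- as valuable, in expectation, as 1,000 years
```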

It seems like there are realistic ways to increase our chances of surviving by more than one in a million. For instance, the chance of extinction by asteroid impact in the next century is about one in a million. This risk could be almost entirely mitigated by setting up a comprehensive tracking and deflection system, at a cost of around $20 billion. In fact, the risk has already been substantially reduced by NASA’s existing tracking systems.

There are many ways to respond to this kind of argument. One line of thought is that our ethics should be “person-affecting”. Action A is only better than action B to the extent that it’s better for someone. On the strict version of this view, it’s impossible to help non-existing people, so we need not include the potential interests of future generations in our moral calculus. This leads us into the minefield that is population ethics. I’m tempted to agree with Derek Parfit, as he argues in Reasons and Persons, that we should reject the person-affecting view, but you might disagree.

If the argument about far future consequences is right, then it raises a fascinating empirical question: which actions today yield the largest benefits to the future path of civilisation? This question is tentatively being explored systematically for the first time at the Future of Humanity Institute in Oxford and the Cambridge Centre for the Study of Existential Risk. Since this question is only just being explored, much of the work that needs to be done is conceptual, and philosophers are making a significant contribution.

We routinely hear the objection that you can’t measure the good done by a career. Unpicking this objection is important: left unchallenged, it encourages the attitude that “anything goes” in doing good, so long as you have the right intentions. Unfortunately, when you look at the evidence, lots of common-sense ways of doing good seem to make very little difference at all. An unwillingness to make explicit comparisons means we squander our limited time. Dealing with this objection is tricky, since it contains several overlapping ideas.

One confusion I think is common is the idea that measurement must be precise. Think back to a paradigm measurement from science classes, like using a pair of scales to weigh something. These normally gave you a “precise” number as the result. When comparing careers, however, we don’t get precise numbers, so you might think the good you do is not measurable.

But this isn’t a good way to think about measurement. No measurement is perfectly precise. Weighing with scales actually only gives you a range of probable values for the true weight, because the scales are not 100 per cent accurate. There will be faults in the design, tiny tremors and other small imperfections. In reality, there’s always some uncertainty. Our choice is only ever how much uncertainty is OK given our purposes.

If there’s always some uncertainty left, then in practice what it means to measure some quantity is to reduce uncertainty about its magnitude. That’s all that’s actually possible. Now that we’ve given up an unrealistic definition of measurement, we can see that lots of things can be measured – all we’re saying is that there is something we can do to reduce our uncertainty about them. (To see much more on this approach, see How to Measure Anything by Douglas Hubbard.)

Turning to the question of which careers have the most potential for impact, there are all kinds of things we could find out that would reduce our uncertainty about the relevant questions: how many people you affect, how good those people say these effects are, how impactful experts say the work is, and so on. We’re never going to be anywhere close to certain, but that doesn’t mean it’s impossible to measure impact.

So, we’ve addressed one conceptual version of the objection that we can’t measure the impact of careers. Other objections are moral, most commonly the idea that the value of different careers is incommensurable. Other objections are methodological. For instance, when we’re dealing with extremely uncertain outcomes, should we use quantified comparisons of the value of different options, or should we do something else? The charity evaluator GiveWell, for its part, aims to identify the funding opportunities that maximise the good done per dollar, but is sceptical of the value of expected value estimates. Navigating these kinds of problems is exactly what philosophers are trained for, except that in this instance the answers really do matter.

I find it fascinating to think about which career is best for the world – it’s an extremely important but almost completely neglected question. Moreover, there is plenty of work for philosophers here, work with real world consequences. We’ve seen three examples – analysing what it means to make a difference, estimating the importance of future generations, and understanding the objections to comparing the value of different careers. Many of the people we’ve advised have completely changed their lives after doing some philosophy. It’s not often something like that happens.

Benjamin Todd is co-founder and executive director of 80,000 Hours. In less than two years, 80,000 Hours has grown from a student society to a fully-fledged Oxford-affiliated non-profit, which has been covered by the BBC, the Washington Post, TED and more. If you’re interested in getting involved or thinking about your career, find out more on the website: 80000hours.org

Judgements of similarity

Similarity is not as simple as one might have hoped, argues Neil Greenspan. This article appears in Issue 62 of The Philosophers’ Magazine. Please support TPM by subscribing.

This past spring, on the American radio interview show “Fresh Air”, the host asked the guest, science writer Carl Zimmer, to address the difference between bacteria and viruses. Zimmer responded by stating that bacteria were more similar to us than viruses are. It is true, as Zimmer suggested, that bacterial cells share key properties with our own cells, such as metabolism and the broad outlines of cell division, that are not attributes of viruses, which are obligate parasites. However, viral components can be much more similar to the components of their host cells than they are to bacterial components, and are generally more similar to host-cell components than bacterial components are.

Viral proteins, carbohydrates, and lipids are derived from the synthetic processes that operate in their host cells, whereas bacteria employ their own cellular mechanisms to produce proteins and other complex molecules. These processes differ significantly in some of their details from their counterparts in human cells. Therefore, in several respects Zimmer’s statement presents an incomplete and oversimplified picture.

So, similarity is not as simple as one might have hoped. Similarity can involve many aspects or dimensions of an entity or process. Consequently the degree of similarity characterizing some pairs of entities or processes can be assessed in a multitude of ways.

Consider an example involving the assessment of entities much simpler than cells or viruses. Which is more like a square of side length 4 units: a square with sides of 2 units or a circle with a diameter of 4 units? The ‘correct’ answer can be seen to depend on whether, for instance, the more relevant attribute is 1) area, in which case the circle is the more similar geometrical object or 2) shape, in which case the smaller square is the more similar geometrical object.
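The area comparison is easy to verify numerically. A minimal sketch:

```python
# Verify the area comparison in the similarity example above.
import math

area_big_square   = 4 ** 2                   # 16 square units
area_small_square = 2 ** 2                   # 4 square units
area_circle       = math.pi * (4 / 2) ** 2   # ~12.57 square units

print(abs(area_big_square - area_circle))        # ~3.43
print(abs(area_big_square - area_small_square))  # 12
# By area, the circle is closer to the 4x4 square; by shape, of course,
# the 2x2 square is identical and the circle is not a square at all.
```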

Since any exercise in classification hinges on assessments of similarity, the above considerations apply to some extent to our efforts to put what we find in the world into categories that will facilitate thinking about a range of issues and questions. Thus, any thinking that requires the use of categories, arguably most thinking, requires grappling with the potential complexities inherent in assessing similarity.

Everyone encounters sets or their equivalents – categories with absolute criteria for membership, such as possession of all of a finite list of attributes or adherence to a precise rule. An example of the latter from mathematics can illustrate the point. Even integers are those that are divisible by two with no remainder. Consequently, if you are an integer, you are either even or you are odd. There is no ambiguity. A pictorial way to convey this certainty and precision about membership in a grouping is to represent the group boundary as an infinitesimally thin line.

This geometric approach immediately raises interesting questions, as well as offering the prospect that geometry can repay the debt to logic incurred by Euclid. If the boundaries of what we can call classical categories or sets are extraordinarily sharp lines, what other kinds of category boundaries might there be and what might they suggest about the kinds of categories that are possible?

Imagine boundaries that are not best represented by sharp lines but that are more like zones of membership gradually fading from high to low inclusion in the category. We are all familiar with variables that vary continuously or nearly so. This reality is the basis for so-called fuzzy sets for which membership is not all-or-none but is quantitative.

Consider a category of substantial relevance to current economic policy debates, “rich people”. Although we could pick a single dollar value of income or total net wealth to cleanly divide the universe of citizens into the rich and the non-rich, it is more informative and more useful for most purposes to acknowledge the extent of variation in income and wealth instead of reducing the variation to just two categories. Given a threshold for being rich of $10,000,000, consider whether two individuals with net worth of, respectively, $50,000 and $9,999,999 are more similar financially than two individuals with net worth of, respectively, $10,000,001 and $9,999,999. The sharp threshold groups the first pair together and splits the second, even though the second pair differ by only two dollars. Therefore, there is a strong case for maintaining that rich people, in the financial sense, constitute a fuzzy set with degrees of membership that can vary from 100% (e.g., Bill Gates) to 0% (e.g., an individual with no net worth).
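One way to picture a fuzzy set is as a membership function that returns a degree between 0 and 1 rather than a yes-or-no verdict. The sketch below is illustrative only; the ramp endpoints of $0 and $10,000,000 are assumptions carried over from the example, not a serious proposal for defining wealth.

```python
# Illustrative fuzzy membership in the category 'rich'.
# The $0 and $10,000,000 ramp endpoints are arbitrary assumptions.

def rich_membership(net_worth: float) -> float:
    """Degree of membership in 'rich', from 0.0 to 1.0."""
    lower, upper = 0.0, 10_000_000.0
    if net_worth <= lower:
        return 0.0
    if net_worth >= upper:
        return 1.0
    return (net_worth - lower) / (upper - lower)

# Near-identical fortunes get near-identical degrees of membership,
# unlike a sharp threshold, which would put them in different classes.
print(rich_membership(9_999_999))    # ~0.9999999
print(rich_membership(10_000_001))   # 1.0
print(rich_membership(50_000))       # 0.005
```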

Another type of category, a polythetic class, consists of members or elements that need only possess some, instead of all, of a list of attributes. An example would be mothers who can possess varying numbers of the following attributes with respect to a son or daughter: 1) female nuclear gene donor, 2) female mitochondrial gene donor, 3) female egg donor, 4) the possessor of the incubating womb, 5) chief post-natal female caretaker, and 6) spouse of the sperm donor. The prototypical mother possesses all six attributes, but many women who function and are regarded as mothers may possess only one or some of these six attributes. Such classes are particularly useful for entities that evolve, since entities (e.g., viruses, cells, organisms) subject to mutation and selection cannot be expected to be or remain identical over time in all attributes, even attributes that may be viewed by some observers as defining.
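A polythetic class can likewise be sketched as a membership test that requires only some minimum number of attributes from a list. The attribute names below follow the motherhood example; the threshold of one attribute is an assumption about how permissive the class is.

```python
# Sketch of a polythetic class: membership requires only some of the
# listed attributes, not all. The threshold is chosen for illustration.

MOTHER_ATTRIBUTES = {
    "nuclear_gene_donor", "mitochondrial_gene_donor", "egg_donor",
    "incubating_womb", "chief_postnatal_caretaker", "spouse_of_sperm_donor",
}

def is_mother(attributes: set[str], minimum: int = 1) -> bool:
    """Polythetic membership: at least `minimum` attributes from the list."""
    return len(attributes & MOTHER_ATTRIBUTES) >= minimum

prototype = MOTHER_ATTRIBUTES                   # possesses all six attributes
adoptive  = {"chief_postnatal_caretaker"}       # possesses only one

print(is_mother(prototype))   # True
print(is_mother(adoptive))    # True -- membership without all attributes
```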

Yet another type of category, referred to as a scale-dependent class, can be imagined. While category borders can be represented as thin lines, as is typically the case for countries on a map, complexities can arise at or near the border such that the precise placement of the boundary depends on the scale at which it is being analyzed. Concrete examples of this sort of category were provided by Frank Jacobs in his “Borderlines” blog in the New York Times, in which he described enclaves and exclaves that are relatively small zones of country X surrounded by country Y. To take one example, Jacobs identifies a number of small regions of Belgium and The Netherlands that are fully contained within the territory of the neighboring country.

In addition to the diversity of types of categories there is a further source of classificatory complexity. This particular notion can be summarized by what I would like to call the Principle of Purpose-Dependent Ontology. The claim is that the purpose motivating a question or inquiry can substantially influence what precise category boundaries and perhaps what kinds of category boundaries are most sensible.

Returning to the motherhood example, if one is interested in who is referred to by a child as “mother”, then all or most of the adult-child relationships delineated above may be relevant. However, if the motivation for identifying the ‘mother’ of an individual is based solely on concern about inherited genes that might predispose to disease, then only the individual who served as the source of those genes would be relevant.

These reflections lead me to a tentative conclusion: most of the time, most people do not know precisely what they are talking about. Disagreements and erroneous inferences, some consequential, can arise from assuming that all useful groupings, categories, classes, or sets must be of the variety for which membership is all-or-none and completely certain. Reality is often richer and more interesting than can be captured by rigid dichotomies or other simplistic schemes of classification, although such frameworks will have their uses.

Neil Greenspan is professor of pathology at Case Western Reserve University School of Medicine and a senior correspondent at the Evolution and Medicine Review.

Reviewing the situation

Marcela Herdova and Stephen Kearns examine the implications of “situationism” for our understanding of free will. This is a web only article. Please support The Philosophers’ Magazine by subscribing.

This is a tricky situation,
I’ve only got myself to blame…
–Freddie Mercury, It’s a Hard Life

On March 13, 1964, Kitty Genovese, a young woman from New York, was brutally assaulted on her way home. The attacker left Kitty with multiple stab wounds, to which she succumbed during transport to a hospital once help finally arrived. This gruesome case motivated a large body of work in social psychology on what has become known as the “bystander effect”. While the exact details of this case remain uncertain, it is said that approximately a dozen people witnessed the crime. The assault lasted around half an hour, and yet during this time none of the witnesses directly intervened or called the police.

As social psychologists later explained, such emergency situations have a strong and perhaps unexpected impact on our behaviour: the likelihood of intervention inversely correlates with the number of people who are part of the situation. In other words, the larger the number of people present, the less likely it is that any of these individuals will help. This has become, over the years, a well-documented phenomenon. In one bystander study, conducted by John Darley and Bibb Latané (1968), the experimental subjects overheard a staged epileptic-like attack. In those cases where participants thought they were the only witness to this event, 85% of the subjects intervened. This starkly contrasts with the mere 31% who intervened among subjects who believed that the attack was overheard by four other people.

The bystander experiments, and other research in social psychology, point to the fact that situations have a great impact on how we behave. What’s more, this research suggests that we are not aware of the impact these environmental factors have; that is, if we are aware of these situational aspects at all. This is the thesis of situationism: environmental cues have a great deal of influence on how we act without us being aware of this fact. In the bystander experiments, the researchers explain that people typically fail to reference the presence of other bystanders as something that would impact on their choices and actions.

Other experiments frequently cited in support of the thesis that circumstances exert great power over individuals’ behaviour include the famous obedience experiments conducted by Stanley Milgram (1963/1974) and the Stanford prison experiment led by Philip Zimbardo (1971). In the obedience studies, participants were led to believe that they were taking part in a learning experiment, part of which was to deliver what appeared to be electrical shocks to other experimental subjects. Participants were to use a range of levers to deliver the shocks, each lever associated with a different degree of shock. Astonishingly, roughly two-thirds of the participants, obeying the instructions of the experimenter, continued to deliver shocks all the way – up to the highest degree of shock, which involved pulling levers labelled “extreme intensity shock” or “danger: severe shock”. Many participants continued their involvement despite the fact that these shocks appeared to cause great distress to their recipients. In the Stanford prison experiment, a group of college students took on the roles of guards and prisoners in a simulated prison environment. Those who played guards exhibited a range of cruel behaviours towards the prisoner participants. The experiment, which was originally to last for two weeks, was terminated after only six days. In both of these experiments, extremely powerful situations influenced people to behave in ways they would ordinarily hesitate to, and which clashed with some of their core values (such as not intentionally causing physical or emotional distress).

In another famous study, by Isen and Levin (1972), subjects who found a dime in a phone booth were strikingly more willing to help out a stranger than those who did not: only 4% of the participants who did not find a dime helped, but almost 88% of those who did offered help. A well-known study by Darley and Batson (1973) involved Princeton seminary students who were asked to deliver a lecture in a nearby building. On their way to deliver the lecture, the seminary students came across a person who appeared to be in need of medical help. Whether the students helped largely depended on how much time they were told they had: only 10% of those in the high-hurry condition offered assistance, while 63% of those in the low-hurry condition did. In both of these experiments, it seems to have been environmental and circumstantial factors which had an impact on how people were going to act, rather than their own values and beliefs. The second experiment is perhaps even more striking given that some of the seminary students were on their way to deliver a talk on the parable of the Good Samaritan (while others were to speak on job prospects). The content of the lecture did not seem to make a significant difference to whether one offered help or not.

The implications of these experimental studies have been discussed in psychology and philosophy alike. In philosophy, one common worry with regard to the situationist research is that it challenges the existence of free will and, relatedly, moral responsibility. We think such a challenge is somewhat exaggerated. In what follows, we present five situationist threats to free will and our responses to them.

Threat 1: The situationist literature reveals that agents are unaware of the situational factors that causally influence their behaviour. If situationism is true, agents do not know why they do what they do. Indeed, worse than this, they think they know. Agents unaware of the power of the situation confabulate plausible but false stories that explain, to their own satisfaction, why they acted in a certain way. These facts do not sit easily with our conception of ourselves as autonomous agents. Being free and responsible requires our making sufficiently informed choices, which includes being aware of the factors that might causally impact our actions. If agents lack such knowledge, they are unable to effectively combat pernicious influences and thus should not be blamed if such influences manifest in their behaviour.

Response 1: Is it plausible that the experimental subjects have no awareness of the circumstantial factors that influence their behaviour? Take the bystander experiments. It might be that people, at least in some cases, confabulate reasons for behaving the way they did, not because they are genuinely ignorant of the relevant factors, but because they recognise their lack of intervention as a bad choice which they then try to rationalise to the experimenter. One rather compelling explanation of the behaviour of participants in the obedience experiments is that they are aware that someone ‘in the know’, who they believe has a better assessment of the situation, and who has authority, told them to keep going. The prison guards are presumably aware that how they are acting is at least somewhat influenced by their assigned role. The seminary students who refrain from aiding someone know they are in a hurry, and it is difficult to tell whether they are entirely ignorant of the causal influence of this fact.

Even if it is true that the subjects of these experiments are truly unaware of some of the causes of their actions, it is unclear why this should be a threat to their responsibility. It borders on trivial that we are unaware of many of the causes of our actions. We do not know exactly how our brain activity gives rise to actions, nor do we know how our upbringing affects our behaviour. Events from before we are born, and of which we know nothing, play some causal role in what we do. Unawareness of these influences does not diminish our responsibility, so why should unawareness of those influences highlighted by the situationist literature do so? Furthermore, any retrospective confabulation concerning what we do cannot affect our responsibility for our actions, because such rationalisation occurs afterwards, when the actions have come and gone.

Threat 2: Situationism seems to suggest that agents do not act on good reasons (or, indeed, on bad reasons), but rather on situational cues that are largely irrelevant to the moral goodness or badness of their options. Instead of helping the stranger because he is in need, people help because they have just found a dime. The bystanders to a crime do nothing not because they judge it unnecessary to act, and not for more sinister motives, but because they are amongst many other people. Participants in the Milgram experiment submit to the authority of the scientist instead of freely assessing the available options on their merits and demerits. Responsibility requires, however, that we are sensitive to genuine reasons for our actions. Agents are morally assessable only to the extent that they are appropriately responsive to moral considerations. If we are not, as situationism suggests, we are not morally responsible agents.

Response 2: It is entirely consistent with the data that the agents in situationist experiments do perform their actions for good (or bad) reasons. The explanations mentioned above of their behaviour do not compete with explanations in terms of reasons. People do help the stranger in part because she is in need. After all, if she weren’t in need (or rather, didn’t appear to be), no one would help her. Perhaps finding the dime boosts a person’s mood and brings to mind more readily just those good reasons to help the stranger. This hypothesis is by no means ruled out by the data. In essence, while morally irrelevant situational cues might make a difference as to how someone acts, it simply does not follow that her action is insensitive to reasons.

Furthermore, an agent’s doing something for no reason is no guarantee of exculpation. If an agent performs an action on a whim while knowing their action is wrong, and having the capacity to act on good reasons to do otherwise, such an agent can still be blameworthy. Even if the situationist literature highlights that practically irrelevant differences affect behaviour, not only does this not show that practically relevant differences don’t affect behaviour, but even if it did show this, it still does not suggest that this behaviour is unfree.

Threat 3: Situationism arguably entails that agents exercise less control than commonly believed. If situations are more powerful than we think, then we are less powerful than we think. Our practices of responsibility and our attributions of free will rest on the assumption that we exercise rather considerable control over our actions. We are the authors, the originators, of our behaviour. Situationism does away with this conception of ourselves. We are not unmoved movers imposing our will on the world, but rather the puppets of our environment. How can our actions be up to us if they are controlled to such a large extent by the situations we find ourselves in?

Response 3: While situations might be more powerful than we recognise, situational factors do not compel us to act. In other words, it is not automatic that a certain situational aspect should lead, without any checks or constraints, to any particular action (the subjects of the experiments described above do not all act in the same way, even though they share a situation). So while situations may take away some of our agentive powers, in the sense that there are external factors – which include our present circumstances – that have an impact on our behaviour, we are far from mindless puppets of our circumstances. Situational factors do not make us behave in an impulsive, knee-jerk manner: our deliberative and rational capacities, which are responsible for how we choose to behave, are not bypassed. Situationism may highlight that a certain conception of ourselves is misguided (that of the unmoved mover acting outside the causal nexus), but it does not cast doubt on the efficacy of our choices, deliberative capacities and application of our skills. A realistic account of free will should do without the need for a Cartesian mind acting without influence.

Threat 4: If situations have the kind of power that situationists claim, and we find ourselves in them for reasons largely outside of our control, then it seems that what we do is, to a large extent, due to luck. Whether I find the dime in the phone booth or not is simply a matter of fortune. And given that this is the main predicting factor concerning whether I help the stranger in need, whether I help is largely due to luck. But luck excludes responsibility. One person’s moral status as blameworthy, and another’s as praiseworthy, cannot rest simply on the latter’s getting lucky. Can those who helped the stranger sincerely blame those who did not, knowing that, had they not found the dime themselves, they very likely would have done exactly the same thing? Can we morally criticize participants in the Milgram experiments, or in the Stanford Prison Experiment, now that we know, to paraphrase John Bradford, there but for the Grace of God go us?

Response 4: While we may not be able to control what situations we find ourselves in, the fact that we are not powerless in these situations makes the situation itself irrelevant with regard to the question of free will and moral responsibility. While it may be just a matter of luck what environment we are in and what kinds of situations we encounter, how we deal with a situation is not a matter of luck, given that the situation does not diminish our agentive powers to an extent that would warrant exemption from the usual practices of moral responsibility. Though the external circumstances we find ourselves in might be a matter of luck (though even this is not always true), and thus not up to us, it is (for all we know) up to us how we react to such circumstances. Situationism simply highlights that people’s actions in the same circumstances often conform to certain patterns – it does not imply that these patterns do not arise from freely-chosen behaviour.

Threat 5: Another apparent consequence of situationism is that we lack cross-situationally stable characters. Our actions result not from personality traits or virtues or vices, but from largely uniform reactions to environmental cues. Some have suggested, however, that agents are responsible for their actions only if those actions reflect their moral characters. To be free is to (be able to) act in line with one’s deep self – i.e. those stable traits and values with which one intimately identifies. Freedom is self-expression. We are responsible for actions that are ours. If there is no deep self, no character, there is, then, no freedom or responsibility.

Response 5: The death of character is somewhat exaggerated. It is implausible that situationism shows that we have no character and that what we do is determined by situations alone. What it shows, at most, is that our character traits and values adapt to the different situations we might find ourselves in. To see this, we must bear in mind three things. First, many of the scenarios that lend support to situationism are extremely powerful situations which are not all that common in everyday life. While it may be the case that extreme circumstances exert unusually strong influence on how we behave, without our being aware of this influence, we should be wary of over-generalising these results. Even if it is true that highly charged and stressful situations afford us less flexibility with regard to how we act, we should be cautious about extending such conclusions to more ordinary circumstances in which we might have more time to deliberate and in which we experience less pressure over our decisions and actions. Second, even in the situations studied, compliance was far from 100%. Two-thirds of Milgram’s subjects pulled all the levers, but this means, of course, that one third did not. Third, the situationist literature does away with only a very strict understanding of character, according to which no practically irrelevant environmental factors influence behaviour. We should simply give up on this conception of character.

When it comes to free will, this seems to be good news rather than bad news. If our character and our values were to determine us to act in certain ways irrespective of the wider context, this would make us very inflexible and rather predictable agents. If freedom is self-expression, this self-expression goes hand in hand with increased flexibility rather than with behaviour stemming from unchangeable and isolated character traits.

Even if we suppose that agents lack characters, it is not a simple step from this idea to the claim that agents lack responsibility. This leap relies on the controversial idea that character is necessary for responsibility. But if an agent without character exercises control of her actions, knows what she is doing, knows right from wrong, is not compelled to act a certain way, and could have done otherwise, she is a prime candidate for having free will, and being responsible, even if she lacks a character.

Situationism, free will and responsibility can coexist. Situations may influence our behaviour, but do not excuse it. With this in mind, we end with advice from Freddie Mercury’s band-mate:

Pull yourself together,
‘Cos you know you should do better
That’s because you’re a free man
–John Deacon, Spread Your Wings

(This doesn’t mean women are off the hook.)

Marcela Herdova is a research fellow at King’s College London and Stephen Kearns is an assistant professor at Florida State University.

A transhuman future

Russell Blackford scouts the conceptual core of a new intellectual movement. This article appears in Issue 62 of The Philosophers’ Magazine. Please support TPM by subscribing.

Technology has given us tools to manipulate the world around us. In such forms as telescopes and microscopes, it has augmented our perceptual capacities and opened up the universe to our inspection on new scales. Technologies such as writing and computers have even extended our minds and memories. As we learn to handle tools and machines, we alter our neurological pathways in what can be seen as a kind of cyborgisation.

Nonetheless, much remains the same. The underlying capacities of contemporary people are not so different from those of early Homo sapiens. In principle, technology can do much more to change us.

We have reached a point in history where there are realistic prospects of boosting our cognitive, emotional, perceptual, and physical capacities through deliberate and increasingly direct technological interventions. These might be, for example, genetic (modifying human DNA), prosthetic (incorporating tools or machines into our bodies), or pharmaceutical (fine-tuning our bodies with drugs to be smarter, stronger, happier, or longer-lived). Even if these technologies arrive too late for us to alter ourselves, it might not be too late for our children or our grandchildren. New technologies would, of course, be added to more familiar ones that can already be understood as enhancing human capacities, rather than as ‘merely’ therapeutic. Consider, for example, vaccinations for enhanced immune responses, the contraceptive pill to control female fertility, cosmetic surgery to ‘improve’ (whatever that means in this context) physical appearance, and anabolic steroids to help build muscle mass. Even rather innocuous substances such as coffee can be thought of as enhancement technologies, not to mention more powerful (and often illegal) drugs that can alter the moods and perceptions of healthy people.

Even as it becomes more familiar, the idea of technologically enhancing human capacities inspires much anxiety. Often, indeed, it generates social and political opposition – fierce enough to have procured numerous legal prohibitions on genetic tampering with human embryos. These have been enacted in recent decades by many countries at quite varied stages of industrial development.

Much of the anxiety is probably irrational, based on poorly articulated intuitions about playing God, violating nature, disrespecting human dignity, or ushering in a scary future with bizarre social arrangements. In other cases, however, the arguments have greater claims to our rational consideration. Some of the mooted technologies might be difficult to achieve, and the attempt might distract us from more important problems. Others might be all-too-achievable, but dangerous or open to abuse. Critics often present us with the spectre of a caste society divided into a highly enhanced ruling class (based largely on family wealth) and a more ‘natural’ population of subordinates with less ready access to the new technologies. The possibilities for oppression and suffering are fairly obvious. Whether such an outcome would ever occur in practice, it is at least difficult to rule out.

There is much to be said, pro and con, about whether we should welcome a future of technologically driven enhancement of human capacities. Many politicians, academics, church leaders, and lobby groups are passionately resistant to the very idea of human enhancement, but one intellectual and cultural movement embraces the idea with all its arms. I’m referring, of course, to transhumanism.

Transhumanist views have a long history, although the contemporary transhumanist movement is a product of twentieth-century advances in scientific understanding and technological capability. Precursors can be found, for example, in the futurist speculations of J B S Haldane and J D Bernal in the 1920s. Haldane’s Daedalus; or, Science and the Future (1924) and Bernal’s The World, the Flesh and the Devil (1929) envisage future societies that employ advanced science to alter human traits and direct our future evolution as a species. In the following decades, these writings were highly influential on science-fiction authors – perhaps most notably Olaf Stapledon and Arthur C Clarke – but they also produced a backlash, as in Aldous Huxley’s Brave New World (1932), much of the work of C S Lewis, and a veritable ocean of dystopian fiction and cinema.

Ideas such as Bernal’s and Haldane’s provide a continuing undercurrent in twentieth-century thought. However, transhumanism as a consciously organised movement took shape only in the 1970s and 1980s, with much of the early ferment happening in the high-tech enclaves of America’s west coast. Early transhumanist thinker F M Esfandiary published his UpWingers: A Futurist Manifesto in 1973. Shortly thereafter, he changed his legal name to FM-2030 partly in anticipation of his hundredth birthday in 2030 (he actually died in 2000, and his body was placed in cryonic suspension). He lectured at the University of California, Los Angeles, and in 1989 published his influential book Are You a Transhuman? During the 1980s, FM-2030 and others developed transhumanism as a visionary movement with outreach into philosophy, science, and the arts.

Most recently, long-time advocates of transhumanism Max More and Natasha Vita-More have published The Transhumanist Reader, a large volume of older, newer, and freshly commissioned essays that explore transhumanism’s ideals and ideas. It includes contributions by many of the philosophers, scientists, artists, creative writers, and others who have been closely associated with transhumanism over the past three decades.

What, then, is transhumanism when you boil it all down? In its current form, it is a broad movement – not so much a philosophy as a class of philosophical claims and cultural practices. As The Transhumanist Reader amply demonstrates, transhumanism is lively with internal debates. Still, it has a discernible core of ideas. One is that of human beings in transition: this word puts the ‘trans’ in ‘transhumanism’ and ‘transhumanist’. Transhumanists sometimes speak of transcendence or of transformations, but the idea of a transition is crucial. Transition, then, from what to what?

Transhumanists foresee a time when technological interventions in the capacities of the human body and mind will lead to extreme alterations in our capacities. For fully-fledged transhumanist thinkers, these alterations will be so dramatic that it makes intuitive sense to think of the deeply-altered people of the future as posthuman (adjective) or posthumans (noun). Posthumans, so it is theorised, will be continuous with us but unlike us in many ways. In particular, they will live far longer lives than ordinary human beings, and they will be happier and cleverer. An extreme variation on this idea is that we might upload our personalities into advanced and highly durable computer hardware, interfacing with the world in complex ways.

Optimistically, you and I might become posthuman one day, if we just live long enough, but more likely the posthumans could be our children or our grandchildren. Given this picture, we are not posthuman yet, and perhaps it makes little sense to call ourselves ‘transhumans’. Still, so the argument goes, Homo sapiens has reached a point in technological development where the current generations of human beings form a bridge between historical humans and future people with greatly superior capacities. For transhumanists, this may not be inevitable – a point that I’ll return to – but it is a logical progression of past and current trends.

Transhumanism adds one more core idea: the idea that the transition from human to posthuman is essentially desirable. More generally, it is desirable to increase our capacities, including the span of human life, through whatever means are available, including direct technological interventions in the functioning of our bodies. Transhumanists pursue their aims through technological innovation, personal practice, artistic expression, and advocacy in the public square. Note, however, that there is no body of doctrines subscribed to by the movement as a whole. Transhumanist thinkers come with many analyses, priorities, and ideas, and they frequently disagree with each other about specific goals and the most effective methods for achieving them.

Earlier, I called transhumanism a philosophical and cultural movement, but is it really something more like a religion or an apocalyptic cult? Alternatively, should we see it as merely a bizarre fad that has spun off from the science-fiction and IT communities of California?

First, it is worth noting that, whatever else it might be, transhumanism is not an unworldly belief system. Nothing supernatural is involved, and any transformations that take place will be produced by purely this-worldly means. If there is anything that transhumanists want to transcend, it is the current limits on human capacities, not the natural world itself.

Transhumanism does not invoke any being, entity, or force that transcends the natural world, and nor does it postulate otherworldly dimensions to human (or posthuman) flourishing. Although it imagines us changing in various ways, these fall within ordinary worldly desires, such as the desire for a longer life in this world. Most transhumanists would emphatically deny that transhumanism is a religious system, and they can make a strong case for that position.

At the same time, there is, indeed, something rather apocalyptic and cultish about at least some transhumanist activity and writing, particularly when we are promised a rapid transition to vastly extended life spans and a post-scarcity economy. One suggestion frequently presented in the transhumanist literature is that this will happen via something called the Singularity, a near-future event involving sudden and unprecedented technological advances. The idea is that the exponential curve of technological improvement over time is approaching a point where it will go almost vertical, leaving what lies on the other side radically beyond human prediction.

Often, this near-vertical take-off in technology is associated with the prospect of rapidly and recursively self-improving artificial intelligence of some sort. This might, in theory, lead to the appearance of cybernetic beings so cognitively superior to ourselves that they would defy human comprehension, not to mention human attempts to control them. They would possess an extraordinary ability to make further conceptual and practical advances, presumably for human betterment. If these beings ever do come into existence, let us hope that our new computer overlords turn out to be friendly!

In fact, much activity from committed transhumanists working in the field of cognitive science involves efforts to ensure that any machine super-intelligences of the future will, indeed, be benevolent in their attitudes to human beings. Some researchers are expending much time and brainpower in a quest to understand the nature of benevolence, human sympathy, and the like, and how these could ever be programmed into the deepest design levels of machine super-intelligences. Whether or not this research program, sometimes known as ‘Friendly AI’, will eventually bear fruit, it generates much interesting interdisciplinary discussion (involving moral philosophers and cognitive scientists, among others).

Transhumanists who emphasise the prospect of the Singularity envisage change on a vast scale within a very short period. However, not all transhumanists think in such an apocalyptic way, and indeed ‘Singularitarian’ ideas are rejected by many transhumanists who have less grandiose ambitions for the human future.

How seriously should we take all this? Is transhumanism merely a fad that we can dismiss? Not so quickly, I suggest. I expect that we will, indeed, discover new ways to enhance human capacities, whether or not any of them are as spectacular as mind uploading or a technological Singularity. If the science of enhancement continues for long enough, why wouldn’t it eventually produce people very different from us or our ancestors? Furthermore, I accept in a general way that enhancement of human capacities is desirable, whatever qualms we might have about the details and about important political issues such as those of distributive justice. If expressed at a sufficiently general level, the transhumanist thesis is plausible, even attractive. Or so it seems to me.

In any event, let us assume for argument’s sake that the most dramatic – some might say ‘horrific’ – visions of a posthuman future are exaggerated. All the same, something less sudden and disruptive might happen as new technologies cumulatively transform what human minds and bodies can do, probably with substantial social effects. Compare, for example, the myriad consequences of electrification, motor cars, the Pill, air transport, computers, and the Internet.

Nonetheless, this raises difficult problems, not least for philosophers. Some might challenge the very notion of ‘improvement’ or ‘enhancement’ of human capacities. Mightn’t enhanced human beings or posthumans be merely different, not better by any objective test? In which case, why go to so much trouble? Alternatively, how much change in ourselves does it make sense to want? Is a distinctly posthuman life a good one for beings who started out as human? And even if it is, can beings like us safely obtain it? Beyond certain limits, will we even be us if we undergo sufficiently extraordinary changes? Such questions encourage philosophers with an interest in transhumanism to grapple with issues relating to value theory, personal identity, and survival over time.

Transhumanist thinkers often seem keen on objective values. For example, they may think it objectively better for us to persist longer, feel happier, have increased abilities to affect the universe around us, and attain more complexity in our organised functioning. If they are correct about this, they have the beginnings of an argument that it is better to be posthuman than human – and perhaps better to be a human being with certain technological enhancements than one without them.

Such an objectivist approach to value may seem implausible, but arguably the notion of ‘improvement’ need not rely on the idea that certain kinds of functioning are objectively better than others, independent of our current preferences. If we consider such traits as general health, disease-resistance, and intelligence (perhaps understood as a broad-based problem-solving ability or a cluster of such abilities), it is always possible to imagine bizarre cases where these are detrimental. Nonetheless, they are of benefit in a vast range of situations that we encounter under many social and other conditions. It seems perfectly reasonable to prefer these things to their opposites, and to speak of improving them.

There might still be a question as to whether there could be too much of a good thing: might we prefer a longer but not vastly longer span of life, for example? And as I suggested, there is a question as to what conditions are required for us to preserve our identities at all. If I lived for thousands of years, would a time come when I could no longer be considered the person I am now? Or should this be seen much as we currently view growing up? I am very different from the child I was at, say, the age of four, and I retain few memories from that age. Yet we do not deny that the child has survived as the much older person I am now.

We might also wonder whether transhumanists are naïve about technology, human nature, and the idea of progress. Technology can be used for deliberately destructive purposes, and it can have unforeseen and unwanted effects; human nature has its dark side; and we cannot assume in the manner of Whig historians that things are always getting better. Indeed, the results of climate change, overpopulation, and depletion of natural resources could combine to offer humanity a very bleak future indeed.

However, transhumanists need not be naïve about any of this. The central transhumanist ideas do not rule out that things could go wrong, and cautionary points can be advanced within the transhumanist movement as well as by its critics. Indeed, much recent transhumanist thought has a surprisingly dystopian tinge, as the authors concerned contemplate what might go very wrong, perhaps preventing any posthuman transition from taking place after all, and perhaps threatening human civilization or human life itself. Thus, one focus of current transhumanist thought relates to existential risks – what could go very wrong on a global scale, and what might be done in an attempt to prevent it.

My own concern about the transhumanist movement is not any of the above. Many transhumanist ideas will doubtless prove impractical, but others may come to fruition. Meanwhile, the discussions are challenging and exciting; there is much scope within the movement for brainstorming, arguments, and new ideas. At its best, transhumanism is far more valuable than a fad or a cult, but there is always the danger that it could rigidify.

Given its vision of a highly desirable future for humanity (and whatever takes humanity’s place), the transhumanist movement has at least some potential to develop into a fanatical, even apocalyptic ideology. As I observe in my own contribution to More and Vita-More’s collection, no one has ever been imprisoned, sterilised, starved, or burned to death in the name of transhumanism; nonetheless, there is a risk of transhumanism developing into something dogmatic, illiberal, even downright nasty. The greatest danger is if a particular vision for the future hardens into a new orthodoxy. This can be avoided, but only with self-awareness and self-scrutiny. That, I think, is an ongoing challenge for transhumanist thinkers.

Russell Blackford is a conjoint lecturer in the school of humanities and social science, University of Newcastle, NSW. He is a contributor to The Transhumanist Reader, ed. Max More and Natasha Vita-More (Wiley-Blackwell 2013), and author of Humanity Enhanced (forthcoming from MIT Press).

What’s wrong with gay marriage?

John Corvino dismantles the objections to gay marriage. This article appears in Issue 62 of The Philosophers’ Magazine. Please support TPM by subscribing.

The gay-marriage movement has lately made dizzying progress. In the UK, which currently allows ‘civil partnerships’, the British and Scottish parliaments are close to recognising same-sex marriage. Last November, voters in three US states (Maine, Maryland, and Washington) extended marriage rights to same-sex couples; this year, legislators in Rhode Island, Delaware, and Minnesota have done the same, while those in Illinois, Nevada, and New Mexico have taken steps in that direction. Uruguay, New Zealand and France now allow same-sex couples to marry. Even staunch opponents of homosexuality concede that the tide is turning against them – and yet they continue to put up a vigorous fight.

Where are the philosophers amidst this clash? Perhaps surprisingly, they have remained largely silent. Many believe that the right of same-sex couples to marry is so obvious as to be unworthy of serious debate. On the opposing side, a small but prominent group of socially conservative academics contend that the first group is simply blind to objective moral reality. According to their view, same-sex ‘marriage’ isn’t just bad policy: it’s a conceptual confusion, fashionable only because the sexual revolution has so badly distorted the proper understanding of sex and marriage.

This objection is best expressed by self-styled ‘new natural lawyers’ such as John Finnis at Oxford and Notre Dame, Robert P George at Princeton, and others, although one can find shades of their position in common-variety conservative arguments as well. Indeed, their view can be understood as a sophisticated defence of the familiar slogan ‘Marriage = One Man + One Woman’, sometimes rendered in religious garb as ‘It’s Adam and Eve, not Adam and Steve’.

The argument finds its fullest elaboration in a recent book by Sherif Girgis, Robert P George, and Ryan Anderson – What is Marriage? Man and Woman: A Defense – and the basic idea is as follows. In order to decide whether same-sex couples should be allowed to marry, one must first ask What is marriage? But (the argument continues) the correct answer to that question shows that marriage is, by its very nature, a male-female union. So whatever it is that same-sex couples are asking for, it isn’t marriage. ‘Same-sex marriage’ is thus an oxymoron, like ‘married bachelor’ or ‘four-sided triangle’ or ‘deconstructionist theory’. Call this the Definitional Objection to same-sex marriage.

Examples of the Definitional Objection abound. Former US Senator Rick Santorum used it on the campaign trail in his 2012 Republican presidential primary bid. Waving a napkin in the air, he announced, ‘Marriage existed before governments existed. This is a napkin. I can call this napkin a paper towel. But it is a napkin. Why? Because it is what it is. Right? You can call it whatever you want, but it doesn’t change the character of what it is.’

In a similar vein, Dr John Sentamu, Archbishop of York and the second most senior cleric in the Church of England, argues that ‘Marriage is a relationship between a man and a woman. I don’t think it is the role of the state to define what marriage is. It is set in tradition and history and you can’t just [change it] overnight, no matter how powerful you are.’ He went on to compare the push for same-sex ‘marriage’ with the behaviour of dictators.

In a law review article, Alliance Defense Fund attorney Jeffery Ventrella contends that ‘to advocate same-sex “marriage” is logically equivalent to seeking to draw a “square circle”: One may passionately and sincerely persist in pining about square circles, but the fact of the matter is, one will never be able to actually draw one.’

There is something profoundly unsatisfying about the Definitional Objection, although it’s initially hard to put one’s finger on what. One might worry that it involves a kind of verbal trick. After all, same-sex relationships – unlike square circles – surely do exist, and some jurisdictions legally recognise them as marriages. So the dispute seems to be less about whether something exists and more about what to call it.

But this way of putting it actually misses the Definitional Objection’s underlying concern: What we call things – and in particular, how the law treats them – can have a profound effect. If we group items together under the same legal name, people may conclude that there are no important differences between them. Conversely, if we maintain a verbal and legal distinction, people may better notice any underlying ‘natural’ distinctions.

An example will help to illustrate this point. Suppose Kate and William are arguing about whether to serve champagne at their anniversary party: Kate says yes; William says no. Kate relents: ‘Fine, you handle the beverages!’

On the day of the party, Kate is delighted to see waiters passing out crystal flutes filled with bubbly liquid. ‘I thought we weren’t serving champagne,’ she says to William.

‘We’re not,’ he responds: ‘That’s prosecco.’

But Kate doesn’t normally distinguish between champagne – which technically must originate in the Champagne region of France – and other kinds of sparkling wine; to her it’s all just ‘champagne’.

So far, it appears that Kate and William had a mere verbal dispute: they meant different things by the word ‘champagne’, and their initial argument consisted in miscommunication.

But now (at the risk of spoiling their party) let us imagine the argument going further: ‘Silly William,’ Kate says, ‘“champagne” is a perfectly fine term for any sparkling wine.’

‘No, no, no!’ William retorts. ‘They’re very different! And if you start calling them all by the same name, people won’t appreciate that difference.’

Proponents of the Definitional Objection have a worry similar to William’s. (You could say that they’re the wine snobs of the marriage debate.) Heterosexual marriage and committed same-sex relationships are fundamentally different, they argue, and using the term ‘marriage’ for both confuses people not only about marriage’s distinctive nature, but also about its value – a moral good which (all sides agree) is far more important than the pleasures of wine.

But what is marriage’s distinctive nature, and why does it exclude same-sex couples? The new natural lawyers answer that marriage is a comprehensive union: a union of both mind and body, exclusive and lifelong. As a comprehensive union, marriage must include bodily union. But the only way human beings achieve bodily union is in procreative-type acts – that is, in coitus: penis-in-vagina sex. Obviously, same-sex couples cannot perform coitus. Therefore, they cannot marry.

The usual response here invokes permanently infertile heterosexual couples: Why are they permitted to marry whereas same-sex couples are not? The new natural lawyers answer that the sterile heterosexual couple’s sex can still be ‘of the procreative type’. But this answer just stretches the meaning of words (ironic, for those offering a Definitional Objection): Sex in which procreation is known to be impossible seems to be precisely not of the ‘procreative type’.

Perhaps there is some looser sense in which coitus – even for permanently infertile couples – is ‘of the procreative type’ in a way that, say, oral or anal sex is not: It shares certain formal features with typical procreative sex. The real question is, what’s so special about that? More specifically, why is it a necessary condition for marriage?

Girgis, George, and Anderson’s answer hearkens back to the notion of comprehensive union, which requires bodily union, which requires coitus: ‘In coitus, and there alone, a man and woman’s bodies participate by virtue of their sexual complementarity in a coordination that has the biological purpose of reproduction – a function that neither can perform alone.’ In effect, the two become one – indeed, elsewhere George has claimed that the act makes them ‘literally, not metaphorically, one organism.’

Put aside the biological strangeness of the ‘one organism’ claim. Suppose we accept, purely for the sake of argument, that marriage requires bodily union and that only coitus can achieve such union. Then the proper counterexample for the view is not infertile heterosexuals, but rather those who cannot achieve coitus.

Consider a hypothetical couple I’ll call Bob and Jane. Bob and Jane were high school sweethearts. Eventually, Bob proposed marriage, and Jane accepted. But prior to their wedding, tragedy struck: Bob was in a terrible car accident which paralysed him from the waist down. As a result, he would never be capable of coitus. Bob offered to cancel the engagement, but Jane would have none of it: ‘You are the same person I have always loved,’ she declared. ‘We will make this work.’ So Bob and Jane legally wed, spent many years together, and eventually raised several adopted children. Although coitus was impossible, they engaged in other acts of sexual affection, which enhanced the special intimacy between them. For decades, until parted by death, they enjoyed each other and the happy family they jointly created.

Question: Were Bob and Jane married? They were certainly legally married, and also according to virtually everyone’s common-sense understanding of marriage. But not according to the new natural law view. On that view, Bob and Jane’s inability to engage in coitus prevented the bodily union necessary for the comprehensive union of marriage.

I’ve raised this objection to Girgis, George, and Anderson before, and they have responded – in an endnote buried in the next-to-last page of their book. (See what you miss by not reading endnotes?) There they bite the bullet and concede that the ‘strong’ version of their view entails that Bob and Jane were not really married. They quickly add that good marriage policy would continue recognising such marriages legally, however, since inquiring into their true status would be invasive. (Why it would be more invasive than, say, a blood test – required for marriage in many jurisdictions – they never explain.)

They also gesture at a ‘softer’ version of the view in which Bob and Jane’s relationship could be marital as long as coitus were possible ‘in principle’. It is not clear how this softer version gets off the ground, however. Any random male-female pair could engage in coitus in principle, but marriage does not consist in what people might do if the world were different; it consists in what they actually do. Suppose Bob were kidnapped before the wedding and never returned to Jane. In that case, they would (sadly) never marry, even though they could marry ‘in principle’ and even though their failure to do so is fully involuntary.

The upshot is that the new natural law view avoids the infertile-couples objection only to get stuck with something worse: the paraplegic counterexample. By making coitus a necessary condition for marriage, the new natural lawyers must conclude that Bob and Jane’s ‘marriage’ is a counterfeit.

How did we end up in such a spot? Part of the problem is that ‘comprehensive union’ is a rather vague and slippery notion: suitable for greeting-card poetry, perhaps, but not the sort of thing on which to build a marriage theory. It is clear that comprehensive union doesn’t mean that spouses must do everything together: they may have independent friendships, professional collaborations, tennis partners and so on. It is also clear that sex is part of our usual understanding of marriage. But is it strictly necessary? And must it be coital?

Girgis, George and Anderson answer yes to both questions, because ‘your body is an essential part of you, not a vehicle driven by the real “you”, your mind; nor a mere costume you must don … Because of that embodiedness, a union of two people must include bodily union to be comprehensive. If it did not, it would leave out – it would fail to be extended along – a basic part of each person’s being.’

This is the sort of explanation that not only fails to make the case; it actually contradicts the point it is intended to serve. Insofar as our bodies are an integral part of us, it follows that any union between two people must include bodily union. Disembodied minds do not form friendships, collaborate on professional projects, play tennis, and so forth. It thus remains unclear why ‘comprehensive union’ requires coitus any more than it requires professional collaboration. (This is not to deny that sex is an important feature of marriage – only to say that it doesn’t fall out of ‘comprehensive union’ in any clear and unproblematic way.)

So what’s the alternative? Proponents of the Definitional Objection, including Girgis, George, and Anderson, often complain that ‘revisionists’ like me offer no clear definition of marriage. They’re right if they mean that I don’t have a simple phrase like ‘comprehensive union’ which purportedly captures the necessary and sufficient conditions for marriage – conditions that all and only marriages will satisfy. But that’s because marriage, as a complex social institution, doesn’t lend itself to that sort of pithy definition. It’s not definable in the same way that, say, ‘bachelor’ or ‘triangle’ is. As Martha Nussbaum puts it, marriage ‘is plural in both content and meaning’ – involving a diverse cluster of goods and defining elements.

The best anyone can offer is a rough and qualified definition. Here’s mine: ‘Marriage is the social institution recognising committed adult unions which are presumptively sexual, exclusive, and lifelong; and which typically involve shared domestic life, mutual care and concern, and the begetting and rearing of children.’ The ‘presumptively’ and ‘typically’ are crucial: there will be exceptions, as well as ‘grey areas’. (Are ‘temporary marriages’ marriages? What about ‘marriages of convenience’?) Notice, however, that loose edges are typical in definitions of social institutions. (Does secular humanism count as a religion? Do tribal councils count as governments?)

Bob and Jane exhibit enough of marriage’s defining features to count as married, even without coitus. But once we abandon the idea that coitus is strictly necessary for marriage, we eliminate the new natural lawyers’ bar to recognising same-sex unions as marriages – and thus the most powerful available version of the Definitional Objection.

Having argued that the new natural lawyers give the wrong answer to ‘What is marriage?’ I’d now like to argue that they’re asking the wrong question.

To see why, consider what I like to call the Marriage/Schmarriage Maneuver. Suppose I were wrong about what marriage is. And suppose that, realising my error, I approached the new natural lawyers and said:

‘You know what? You’re right! This thing I’ve been advocating isn’t marriage at all. It’s something else – let’s call it schmarriage. But schmarriage is better than marriage: it’s more inclusive, it helps gay people without harming straight people, etcetera. We’d all be better off if we replaced marriage with schmarriage. Now, it’s unlikely that the word “schmarriage” will catch on – and besides, it’s harder to say than “marriage”. So from now on, let’s have schmarriage – which includes both heterosexual and homosexual unions – but let’s just call it by the homonym “marriage”, as people currently do in Canada, Spain, Uruguay, South Africa and elsewhere. Okay?’

Their answer would surely be ‘Not okay!’ – but why? The reason is that they reject the idea that schmarriage is better than marriage. They maintain that marriage, traditionally understood, has a distinctive value, and they don’t want that value to get lost in a new, more inclusive terminology.

But if that’s the crux of the issue – marriage’s distinctive value – why not focus on that? After all, the marriage debate is primarily a moral debate, not a conceptual or metaphysical one. So instead of asking ‘What is marriage?’, shouldn’t we be asking why it’s morally important to maintain an exclusively heterosexual institution for recognising committed relationships?

Of course, many have asked the latter question, and the answers have been unsatisfying. For example, some argue that an exclusively heterosexual marital institution is important because children do best when raised by their own (biological) mothers and fathers. The problem with this argument – aside from its resting on dubious interpretations of existing data – is that it requires a blatant non-sequitur. Even if one grants that children do best with their own (biological) mother and father, it does not follow that same-sex marriage (or ‘schmarriage’) should be prohibited, because there is no reason to think that prohibiting it will result in more children getting their own (biological) mothers and fathers.

In fact, paradoxically, same-sex marriage may have the result that fewer same-sex couples raise children. Currently, the majority of same-sex couples with children have them not via adoption or artificial insemination, but rather through prior heterosexual relationships. In a world where same-sex relationships were more accepted – where gays and lesbians could aspire to ‘happily ever after’ in marriage just like their heterosexual counterparts – fewer would feel pressure to enter heterosexual relationships for which they are not suited, and thus fewer children would experience the breakup of such marriages. From the standpoint of child welfare, fewer divorces is surely a good thing.

Let me be clear: I fully grant that marriage, institutionally and individually, is important for child welfare. But it is also important for adults, including those who don’t want or can’t have children. Relationships are good for people in myriad ways. They are good not only for those in them, but also for those around them, because happy, stable partners make happy, stable neighbours, co-workers, family members and so on. At the same time, long-term romantic relationships are challenging, and they benefit from public commitment, legal protection, and social support – the very things that marriage provides. All of these reasons apply to gay people as well as heterosexual ones.

If the Definitional Objection appears unsatisfying, that is partly because it seeks to impose a tidy definition where such definitions are inapt. But it is mainly because appealing to definitions is generally unhelpful in this context. The marriage debate occurs precisely because of conflicting intuitions about what marriage is, or can become. Clever rhetoric about square circles gets us no further toward reconciling those intuitions; worse yet, it distracts us from the urgent moral question of how to treat gay and lesbian individuals, couples, and their families.

John Corvino is chair of the philosophy department at Wayne State University, the author of What’s Wrong with Homosexuality?, and the co-author (with Maggie Gallagher) of Debating Same-Sex Marriage. Read more at www.johncorvino.com.

Islamophobia or fair critique?

Brian D Earp argues that criticising a religious practice from the perspective of secular ethics is not the same thing as being prejudiced against the religion.

In a now-notorious ruling, a regional court in Cologne, Germany decided that non-therapeutic circumcision of young boys violates their constitutional rights to bodily integrity and to self-determination – even if carried out with parental permission, and even for religious reasons. The German legislature passed an emergency statute to protect religious circumcision from any future legal challenges, but the initial court decision sparked a firestorm of controversy. Muslim and Jewish commentators were outraged. Child rights activists and a handful of humanitarian groups were overjoyed. Professional bioethicists were not entirely surprised.

Why not? Ritual circumcision is a pre-Enlightenment tribal tradition. The Jewish version is openly sexist – females are left out of the divine covenant, perhaps to their great relief – and males lose functional erogenous tissue to an excruciating surgery done years before they are old enough to give their consent. Islam is more egalitarian: it allows for circumcision of boys and girls, although there appears to be no heavenly commandment involved in either case, and the procedure takes place in later childhood as opposed to pre-verbal infancy. Both versions are consistent with the norms of patriarchal tribalism; both elevate the concerns of the community over the freedom of the individual to make decisions about his own body in his own time; and both brand a child with a permanent mark of religious belonging despite the significant possibility that he may one day fail to embrace the belief system and/or cultural practices of his parents. Medical ethics, on the other hand, along with much of Western law, came to fruition in a post-Enlightenment world that favours notions like autonomy, consent, individual rights, and a child’s entitlement to an open future. Going by strict definitions, the medically irrelevant excision of healthy genital tissue – whether it’s taken from the vulvas of little girls, or the penises of little boys – is equivalent to criminal assault of a minor under the legal codes of most developed nations. The tension was bound to cause cracks somewhere.

And there is a genuine tension here. The religious metaphysic – which appeals to things like community rights, ritual continuity, and obedience to divine command – just doesn’t square very well with the normative basis of much of contemporary philosophical ethics, or with the underlying legal paradigm of secular constitutional democracies. You normally don’t get to cut off non-diseased, non-regenerating, functional and protective body parts from other people without first getting their permission, whether you think God told you to do it or not. Even religious freedom has its limits. But this point has not been very thoroughly acknowledged by the most vocal of Muslim and Jewish commentators in the ongoing aftermath of the Cologne decision: cries of religious persecution and even of outright Islamophobia and anti-Semitism came very quickly to the tongue. They’re still ringing in the air.

Yet as Russell Blackford recently reminded us in his wonderful essay, “Excessive tolerance?” (tpm 59), it really is OK to criticise religious practices on moral, ethical, or legal grounds. If one can pull off one’s critique in a spirit of fairness, that is, and without any sort of undue spite. His topic happened to be the burqa. Reviewing Martha Nussbaum’s recent failure in The New Religious Intolerance (Harvard University Press) to find “anything problematic at all” about veiling norms within Islam, Blackford brought up the existence of a handful of plausible, respectable, cogent, and relatively simple-to-pose moral objections to these contentious norms that have nothing whatsoever to do with irrational prejudice against Muslims.

Blackford’s assessment of Nussbaum is right on the mark, and provides a general lesson. While it’s true that “the state”, as he put it, “ought to adopt a degree of epistemic modesty about religious issues, and many moral ones (as well),” individual thinking citizens, and philosophers above all, need not be quite so timid about taking a clear ethical stand on potentially harmful customs, whether they are religious in nature or otherwise. As Blackford puts it with characteristic pithiness:

“We are well within our rights to conclude, from within our respective understandings of the world and conceptions of the good, that a particular religion has its dark side, or that a moral norm favoured by some religion is preposterous and harmful.”

Of course, compelling women and young girls to hide themselves in cloth bags in the name of modesty is (at least arguably) one such moral norm. So too is cutting off parts of their genitals in the name of chastity. Likewise, and – again, as at least a sensible, non-prejudiced, non-bigoted collection of arguments can reasonably be taken to show – so too is amputating functional erogenous tissue from the penises of male babies and other minor boys.

As Douglas Adams once observed, “If somebody votes for a party that you don’t agree with, you’re free to argue about it as much as you like … everybody will have an argument but nobody feels aggrieved by it.” Same for different views on economic policy, or whatever else might come up for spirited and productive debate. But if somebody mentions something about her religious practices: “Here is an idea or a notion that you’re not allowed to say anything bad about; you’re just not. Why not? Because you’re not!”

Adams’ point was plain enough. But it’s worth spelling out as a reminder – especially given recent debates about the moral and practical limits of freedom of speech in a climate of dangerous, and sometimes deadly, taking-of-offence. There is absolutely no good reason to think that we must refrain at all times from criticising an idea or custom just because it is rooted in religion. Indeed, sometimes we have an obligation to do the opposite. What if the pious practice is harmful? What if it flies in the face of certain ethical norms? What if we think those norms should count for something and are worth defending in the strongest of terms?

The circumcision debate will rage on for some time to come, and there are decent arguments to make on every side of it. But we do have to have the debate. Criticising a religious practice from the perspective of secular ethics is not the same thing as being prejudiced against the religion, nor does it imply any sort of ill-will toward members of a particular faith group. This distinction bears repeating at every turn. We simply have to be able to talk these things through.

Brian D Earp is a research fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. With Julian Savulescu, he is writing a book on the science and ethics of “love drugs” and the neuroenhancement of human relationships. His Academia.edu page is here.

The need for moral enhancement

Erik Parens reviews “Unfit for the Future: The Need for Moral Enhancement” by Ingmar Persson and Julian Savulescu. This article appears in Issue 62 of The Philosophers’ Magazine. Please support TPM by subscribing.

In the early 1990s, the Georgetown bioethicist LeRoy Walters began to ask, What if we could use biomedical means to ‘enhance’ people morally? What if, for example, we could use such means to reduce our ferocious tendencies and increase our generous ones? For those predisposed to be critical of ‘enhancement’ and also prepared to be honest, that was a hard question. Criticising the prospect of better athletes is one thing, criticising the prospect of morally better people another. That hardness may help explain why the critics of enhancement, at least, tried for the next several years to focus the enhancement debate on relatively easier questions concerning traits like strength, mood, or intelligence.

In 2008, however, Thomas Douglas, then still a student at Oxford’s Centre for Practical Ethics, published in the Journal of Applied Philosophy a much-discussed paper that was explicitly about moral enhancement and was explicitly enthusiastic about it. In the paper he sought to expose the wrongness of the thesis he attributed to the critics: that it is ‘always morally impermissible’ or ‘absolutely objectionable’ to use biomedical means to enhance people morally. Even if no critic ever said that such means are always morally impermissible or absolutely objectionable, Douglas deserves credit for opening up an important conversation, which, indeed, cannot proceed reasonably from any such absolute claim.

Picking up from where Douglas left off, Julian Savulescu (who directs the Oxford center where Douglas was a student) and Ingmar Persson (who is a research fellow in the same center) have now published the first book-length assessment of ‘moral enhancement’. Proceeding from the assumption that using biomedical means is not absolutely objectionable, most of the book describes the moral dispositions and commonsense morality we have evolved to have, and describes the disastrous mismatch between those moral resources and our acquired capacity to wreak destruction with technology.

Readers familiar with Persson and Savulescu’s enthusiasm for using technology to enhance human capacities in pursuit of a ‘transhuman’ future may be surprised here by their despair before the technological power that brings us to the brink of catastrophe. Alluding to Hiroshima and Nagasaki, they write: ‘We are inclined to believe that … a half century ago or so, when scientific technology provided us with means of causing Ultimate Harm, technological development reached a stage at which it became worse all things considered for us to have the current means of scientific technology, given that we are not capable of handling them in a morally responsible way.’ They identify two paths to Ultimate Harm. One is being trodden by all-too-careful terrorists, who are hell-bent on using ever more sophisticated technologies to destroy liberal democracies. The other is being trodden by careless members of those liberal democracies, hell-bent on using ever more sophisticated technologies to promote their profligate ‘overconsumption’. Indeed, Persson and Savulescu sometimes sound more like the prophet Jeremiah than, say, the prophet Ray (Kurzweil).

They do spend some time analysing that first, terrorist path. Their primary conclusion is that, to block it, liberal democracies will have to increase surveillance of citizens. This conclusion rests on their argument that privacy is a legal – not a moral – right, and that such a right can be restricted for the sake of the state’s survival. Moreover, they suggest, members of a liberal democracy well might vote for increased surveillance out of self-interest. They spend most of their time, however, analysing that second path to Ultimate Harm: the selfish overconsumption that is leading to environmental destruction. Here is where ‘biomoral enhancement’ comes in.

Appealing to evolutionary psychology, Persson and Savulescu remind us that our contemporary moral psychologies – and ‘commonsense morality’ – are adaptations to ways of life that are now 150,000 years old. Living in small groups, we evolved to care for those who are genetically and geographically close, but to fear those genetically and geographically far. We also evolved an overriding preoccupation with our own survival, at the cost of caring about the survival of future generations of our species, much less other species or the planet. As they assert over and over, because it has always been easier for human beings to harm a complex system (like a human organism) than it has been to benefit such a system, our morality has always emphasised the imperative not to harm others over the imperative to benefit them. Our ‘commonsense’ belief that acts of commission are morally weightier than acts of omission is a by-product of that same fundamental feature of our reality.

While those moral dispositions and that commonsense morality were once adaptive for people living in small groups and in possession of relatively crude technologies, they are no longer. If we accept that traditional moral means are inadequate to our current predicament – as evidenced by our failure to use them to achieve the moral improvement we desperately need – and if we accept that there is no plausible absolute objection to using biomedical means, then, Savulescu and Persson conclude, it is time to try a new approach: enhancing ourselves morally by means of biomedicine.

They say rather little, however, about how we might actually get from our current biomedical knowledge to practicable moral enhancements. They make the fair, if banal, point that moral behaviours like altruism and a sense of justice ‘have biological bases’, and they allude to a couple of specific biological targets – the neurotransmitters serotonin and oxytocin – that, in principle, might be manipulated to achieve the sorts of moral enhancements they envision. They fully acknowledge, however, that it is hard to imagine how those particular targets could, in practice, be manipulated to reliably produce enhanced moral behaviour. As they allow, even if we could, say, reliably reduce the concentration of neurotransmitter X in the brain to reduce an individual’s proneness to some action Y, whether such an action led to a good or bad moral outcome Z would depend on the context.

Think, for example, of the ferocity of the passenger who led the assault on the terrorists in the cockpit of the ill-fated flight over Pennsylvania on 9/11. Indeed, at the end of the book, when Persson and Savulescu finally do try to envision what biomedical moral enhancements might actually look like, they have to acknowledge the point from which they began: it is easier to harm a complex system, whether a human organism or an ecosystem, than to benefit it. (In the journal Neuroethics, Chris Zarpentine has recently elaborated just how gigantic are the practical obstacles.)

Savulescu and Persson are also keenly aware of – and ultimately sympathetic to – a closely related but deeper worry, which has preoccupied those prone to criticism from the beginning of the enhancement debates: the worry that in our effort to enhance ourselves, we will inadvertently diminish ourselves. Here the worry is specifically that, in our efforts to make ourselves more moral, we will make ourselves unfree, and thus incapable of being moral. Savulescu and Persson go out of their way to emphasise that they do not endorse any sort of biomedical intervention that would make moral behaviour ‘irresistible’. They want to liberate humans who have moral deficits to act morally, not deprive them of the freedom to choose. They want to ‘amplify’ the capacities of empathy and sympathetic concern in those who lack them. We might say that all they want is to ‘treat’ those who are morally ill, those who can’t experience the sense of altruism or justice experienced by those who are healthy.

Unfortunately, they spend little time explaining how, if the technology were practicable, the liberal democracies might implement a program of moral enhancement. They say that hundreds of millions of young people would need to be morally enhanced, but say virtually nothing about who would identify those children. As they acknowledge several times, there is a huge bootstrapping problem: Who exactly is sufficiently moral to identify those children and to implement the appropriate program?

It is hard to disagree with what they call their main point: ‘liberal democracies are in need of moral enhancement in order to deal safely with the overwhelming power of modern technology.’ It is striking, however, that by the end of the book, after they have acknowledged the magnitude of the problems associated with biomoral enhancement, they back way off from the claim that they sometimes seemed to be making earlier: that only biomedicine can treat the disease created by the mismatch between our paltry moral resources and our burgeoning technological capacities.

Indeed, the subtitle of the book, ‘The Need for Moral Enhancement’, avoids indicating that we need biomedical means to achieve such enhancement – while allowing the potential reader to imagine that only those means will do. After all, it’s not terribly controversial to assert that we need to use traditional, social means to become morally better. A more accurate, if less sexy, subtitle would have been: ‘The need for human beings to improve their moral behaviour is so great that using biomedical means should not be off the table, even though such improvement won’t be feasible in the foreseeable future, given the complexity of our moral natures, the crudeness of the available biomedical means, and the ethical and political obstacles to creating such a program.’

Although the remedy they sometimes appear to be prescribing doesn’t seem feasible even to them, the disease they diagnose couldn’t be more serious. When those prone to enthusiasm can acknowledge that an intervention that made us unable to choose freely wouldn’t be worthy of the name ‘enhancement’ (as John Harris recently put it in Bioethics), and when those prone to criticism can acknowledge that biomedical means aimed at enhancement aren’t absolutely or always objectionable (as I did above), we have entered into what we might view as a ‘second wave’ of the enhancement debates. Perhaps we’re closer than we were in the 1990s to having a conversation about what true enhancement is, and about how we can deploy all of the resources at our disposal to fight the disease that Persson and Savulescu persuasively describe.

Unfit for the Future: The Need for Moral Enhancement by Ingmar Persson and Julian Savulescu (Oxford University Press), £21.00/$35.00.

Erik Parens is a senior research scholar at The Hastings Center, a bioethics research institute in Garrison, New York.

Gettier and justified true belief: fifty years on

On the fiftieth anniversary of Gettier’s famous paper, Fred Dretske explains what we should have learned from it. This article appears in Issue 61 of The Philosophers’ Magazine. Please support TPM by subscribing.

This is the golden – the fiftieth – anniversary of Edmund Gettier’s remarkable paper on why knowledge isn’t justified true belief. It seems like an appropriate time, therefore, to evaluate what we have learned – or should have learned – from his elegant counterexamples.

Gettier’s paper had a tremendous impact on contemporary epistemology. Measured in terms of impact per page, his three-page paper (yes, only three pages) rates among the most influential twentieth-century essays in philosophy. Prior to Gettier it was more or less assumed (without explicit defence) that knowledge, knowing that some proposition P was true (when it was in fact true), was to be distinguished from mere belief (opinion) that it was true, by one’s justification, evidence, or reasons for believing it true. I could believe – truly believe – that my horse would win the third race without knowing it would win. To know it would win, I need more – some reason, evidence or justification (the race is fixed?) that would promote my true belief to the status of knowledge. Gettier produced examples to show that this simple equation of knowledge (K) with justified true belief (JTB) was too simplistic. His examples triggered a widespread search for a more satisfactory account of knowledge.

Gettier’s counterexamples are constructed on the basis of two assumptions about justification, both of which were (at the time he made them) entirely uncontentious. The first of these was that:

1: The justification one needs to know that P is true is a justification one can have for a false proposition.

Almost all philosophers who aren’t sceptics accept 1 without hesitation. After all, if one can, as we all believe we can – sometimes at least – come to know (just by looking) that there are bananas in the fruit bowl and (by glancing at the fuel gauge) fuel in the automobile tank, then given the existence of wax bananas and defective gauges, the justification, the kind of evidence, needed to know is clearly less than conclusive. It is something one can have for a false proposition.

Nonetheless, despite the overwhelming appeal of 1, accepting it lands one in the epistemological soup. Well, almost in the soup. The added push is supplied by Gettier’s second assumption:

2: If you are justified in believing P, and you know that P entails Q and accept Q as a result, you are justified in believing Q.

The idea behind 2, of course, is that one does not lose justification by performing deductive inferences one knows to be valid. If you have reasons to believe P is true, and you know P can’t be true unless Q is true, then you have equally good reasons to believe Q is true. It is difficult to see how 2 could be false if logic is to be regarded as a useful tool for expanding one’s corpus of rationally held beliefs.

But, alas, accepting both 1 and 2 lands you in deep trouble. Gettier explains why. Suppose you are justified (to a degree needed for knowledge) in believing a false proposition F – that, say, Tom lives in San Francisco. Letters, telephone calls, and a recent visit have convinced you of this. If asked, you would say you know where he lives. Nonetheless, despite your justification, F is false. He lives in Los Angeles. Gettier 1 tells us this can happen. Realising that if Tom lives in San Francisco he must live in California, you, quite naturally, also believe he lives in California. You now (via 2) have a justified true belief in the proposition that he lives in California. Yet, because your belief in this proposition is reached by such defective means (via the false belief that he lives in San Francisco), you don’t know he lives in California. So knowledge is not justified true belief. K ≠ JTB. It is something more. Or something different.
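For readers who like the argument compressed, here is a schematic restatement (the notation is mine, not Gettier’s): write J for justification of the kind needed to know, T for truth, K for knowledge, F for the false proposition that Tom lives in San Francisco, and P for the true proposition that he lives in California.

\[
\begin{array}{lll}
\text{(i)} & J(F),\ \lnot T(F) & \text{assumption 1: one can be justified in believing a falsehood}\\
\text{(ii)} & F \vDash P & \text{San Francisco entails California, and you deduce } P \text{ from } F\\
\text{(iii)} & J(P) & \text{from (i) and (ii), by assumption 2}\\
\text{(iv)} & T(P) & \text{Tom does live in California}\\
\text{(v)} & \lnot K(P) & \text{the belief is right only by luck}
\end{array}
\]

Lines (iii)–(v) give a justified true belief that is not knowledge, which is all the counterexample needs.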

What, you may ask, is so troublesome about that? Isn’t this exactly what Gettier was trying to show? Isn’t this what kept philosophers busy for years trying to patch up the JTB analysis of knowledge so as to make it immune to this style of counterexample? What about amending the analysis to exclude the possibility of reaching one’s true belief in P by such defective means – via, in particular, false propositions that entail P? Knowledge is not JTB. It is JGTB, where JG (“G” for Gettierproof) is some suitably amended version of J, a form of justification that is immune to Gettier-style counterexamples. This is not “trouble” at all. It is certainly not deep trouble. On the contrary, it represents philosophical progress. Thanks to Gettier we now have a better picture of what knowledge really is. Or isn’t.

This is a response to Gettier that seeks to solve the problem by accepting the two assumptions that led to it. But it doesn’t work. You can’t avoid the problem and still accept both 1 and 2. Something has to give. To see why, consider some hypothetical improvement on the standard JTB analysis. Knowledge isn’t JTB, it is JGTB, where JG is designed to avoid Gettier counterexamples while satisfying both 1 and 2. JG might, for instance, specify that one’s justification for P must not depend in any essential way (as Gettier counterexamples do) on belief in any false proposition. This amendment automatically rules out as knowledge your belief that Tom lives in California, since it is reached via the false proposition that he lives in San Francisco. You may be justified (in some ordinary sense) in believing he lives in California, yes, but you are not justifiedG in believing it. So you don’t know it, because knowledge now requires JG justification and that is something you do not have. Problem solved. Knowledge is not JTB. It is JGTB.

The problem is not solved. Remember, the Gettierproof justification, JG, in K = JGTB was supposed to satisfy both of Gettier’s opening assumptions. JustificationG may satisfy 1 (you can be justifiedG in believing Tom lives in San Francisco even though this is false), but it fails to satisfy 2. Why? Because even though your belief that Tom lives in California is a result of a justifiedG belief that he lives in San Francisco, you are not justifiedG in believing he lives in California. So contrary to 2, justificationG, the kind of justification needed for knowledge, does not exist for propositions (Tom lives in California) you know to be implied by things (he lives in San Francisco) you are justifiedG in believing. So JG does not satisfy 2.

As far as I can see, this result is perfectly general. If assumption 1 is true, if you can have a justification, JG, the kind of justification needed for knowledge, for a false proposition, F, then 2 is false: this justification will not give you JG for the true propositions known to be implied by F. One will not be justifiedG in believing propositions one knows to be implied by what one is justifiedG in believing.
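The generality can be put as a schematic reductio (again my compression, not Dretske’s). Suppose JG satisfied both of Gettier’s assumptions:

\[
\begin{array}{lll}
\text{(a)} & J_G(F) \text{ for some false } F & \text{assumption 1 applied to } J_G\\
\text{(b)} & F \vDash P,\ P \text{ true} & \text{belief in } P \text{ reached via } F\\
\text{(c)} & J_G(P) & \text{by assumption 2}\\
\text{(d)} & \lnot J_G(P) & J_G \text{ was defined to exclude routes through falsehoods}
\end{array}
\]

Since (c) and (d) contradict each other, no Gettierproof justification can satisfy both assumptions at once.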

This is the epistemological soup, the “deep trouble” I spoke of earlier. What to do about it? I suggest that the only reasonable option is to reject 1. One is not justified – not in the way needed for knowledge – in believing false propositions. If one is justified in the way needed to know P, then P has to be true. If your reasons for believing P are such that you can be wrong about P, you don’t know that P. As I once put it, JG justification is conclusive. You can’t have it for a false proposition. That is what gives JG justification the power to transform mere belief that P into knowledge that P. It provides – as knowledge is supposed to provide – security from error. That is why we all hanker after knowledge.

This sounds like pretty strong medicine. Is it too strong to swallow? Is the cure worse than the disease? Isn’t rejection of 1 simply the first step on the way to scepticism, since we seldom if ever have conclusive reasons to believe the things we take ourselves to know? Isn’t there room to manoeuvre here? What about rejecting 2?

I’ll come back to these questions in a moment, but for now let’s look at some of the other reasons for targeting 1 as the bad apple in this epistemological barrel.

Lottery examples suggest that you cannot know that P is true if there is a chance, however small, that P is false. If your only reason for thinking you are going to lose in the lottery is that you have only one of the million tickets sold, then it may be perfectly reasonable for you to be pessimistic, to believe you are going to lose, but you do not know you are going to lose. Why? Because, given merely those reasons – the fact that, given the number of tickets sold, your chances of losing are .999999 – you nonetheless still might win. It is possible. Someone with exactly this small chance (.000001) of winning will win and – who knows? – it might be you. So if, given your overwhelming reasons for believing it, P might be false, you don’t know it is true. That, of course, is just another way of saying that to know something you need conclusive reasons to believe it, the kind of reasons you can’t have for P if it is false.
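The arithmetic behind the example is worth displaying (the million-ticket figure is Dretske’s; the generalisation to n tickets is mine):

\[
\Pr(\text{lose}) = \frac{999{,}999}{1{,}000{,}000} = 0.999999 < 1, \qquad \text{and in general } \Pr(\text{lose}) = \frac{n-1}{n} < 1 \text{ for every finite } n.
\]

However many tickets are sold, the ticket count alone never closes the gap to 1, which is why, on the conclusive-reasons standard, it never yields knowledge.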

The Conjunction Principle says that if you know P and know Q, then you know P and Q. At least your evidence is good enough to know P and Q even if you don’t happen to put two and two together and entertain the two propositions together in your mind. This sounds like an eminently reasonable principle, one that a theory of knowledge should preserve. Nonetheless, if we suppose that one could know that P is true when P might (given your evidence) be false – whenever the probability of P was, say, at least .99 – then the Conjunction Principle would turn out false. Why? Because one might know P (P is .99 probable), know Q (also .99 probable) and not know P and Q since the probability of the conjunction P and Q = [(the probability of P) x (the probability of Q)] and this will generally be less than .99. It will always be less than the probability of the conjuncts when these conjuncts describe independent conditions. The only way to avoid this result and, thus, preserve the Conjunction Principle is to insist that the evidence or justification needed for knowing something must make the probability of it equal to 1. Not close to 1, not nearly 1, but 1. The justification must be conclusive since otherwise conjunctions will fail to meet the justificational standards of their conjuncts, and the Conjunction Principle will fail.
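Making the multiplication explicit, with the .99 threshold the paragraph supposes and P and Q independent:

\[
\Pr(P \wedge Q) = \Pr(P) \times \Pr(Q) = 0.99 \times 0.99 = 0.9801 < 0.99.
\]

Only a threshold of 1 survives conjunction, since 1 × 1 = 1; any lower threshold is eventually breached as conjuncts accumulate, because $0.99^n$ shrinks towards 0.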

The third consideration is a bit more tricky, but it carried significant weight with me when I first thought about it.

Why can’t one have a reason to believe false something one knows to be true? One can, after all, have reasons to believe false things one has reasons, even very good reasons, for believing true. One’s reasons for believing it true are just much better than the reasons for thinking it false. But, to repeat, it doesn’t seem as though one can have reasons to believe P false if one knows it is true.

Think about Sue, Sam and the cookie jar. Sue knows there are cookies in the jar. She just looked. Sam does not know this. He did not look. They both watch a hungry boy peer into the cookie jar, replace the lid without taking anything, and leave with a disappointed look on his face. Sam, we may suppose, has now acquired a reason to think there are no cookies in the jar. Sue has not. Sue knows there are cookies in the jar. For her the child’s behaviour is not (as it is for Sam) a reason to think that the jar is empty. It is, rather, a puzzling fact to be explained. Why didn’t he take a cookie? Doesn’t he like peanut butter cookies? If Sue, despite an earlier peek into the jar, now takes the child’s behaviour as a reason to think that there are no cookies in the jar, then – poof! – her knowledge that there are cookies there vanishes. It vanishes because Sue is now taking the absence of cookies as a possible explanation of the child’s behaviour, and this is something she cannot do if she knows that explanation is false. You can do that if you merely have good – maybe even excellent – reasons to think the jar has cookies in it, but you cannot do that if you know it has cookies in it. The explanation for this curious fact is that the reasons required for knowing the jar has cookies in it eliminate an empty jar as a possible explanation of anything, and the only reasons that do that are conclusive reasons. Very good reasons, reasons that leave open the possibility that the jar is empty, reasons that satisfy Gettier 1, won’t do the trick. So, once again, Gettier 1 has to go.

Resistance to this conclusion might come from a misunderstanding of what is required for a “conclusive” justification. If one thinks of a conclusive justification, a justification one cannot have for a false proposition, as something like a logical proof, then, of course, this conclusion will sound absurd. It would be a way of saying that to know that P one must be able to prove that it is true. Sceptics may believe that, but no one else does. But that isn’t what is meant. That would set the standard for knowledge much too high. All that is meant – or, better, all that needs to be meant – is that the reason why one believes P (this is called an explanatory reason) is some existing condition R such that the following statement (CR) defining a conclusive reason is true.

CR: R would not be the case unless P were true.

If CR is true (you don’t have to know it is true), then R, the reason why you believe P, is a conclusive reason for believing P (this is called a justifying reason). If you have conclusive reasons for believing P, you cannot be wrong about P. But this is not because R is a proof that P is true. It is because R – perhaps unknown to the person for whom R is his reason for believing P – satisfies a condition (viz., CR) that it cannot satisfy for a false P. As it turns out, therefore, conclusive reasons are not that hard to come by. We have them for many of our ordinary beliefs.
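In the notation of conditional logic, CR can be glossed (this rendering is mine, not the article’s) as a subjunctive conditional:

\[
\lnot P \;\Box\!\!\rightarrow\; \lnot R
\]

that is, had P not been true, R would not have been the case. Note that the condition must merely hold; the believer need not know, or even be able to state, that it holds.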

If the reason a person has for believing there is gas in his automobile tank is that his gas gauge indicates there is, then if the gauge is functioning properly, he has conclusive reasons for thinking he has gas in his tank. I assume here that a “properly functioning” fuel gauge is one that wouldn’t indicate there was gas in the tank unless there was gas in the tank. This, too, is the sort of justification a person might have for thinking that my birthday is in December – viz., I told him it was. If I am a truthful sort of guy, the sort of person who wouldn’t have said my birthday was in December if it wasn’t, then my saying my birthday is in December is a conclusive reason for believing it is.

Conclusive reasons, then, are reasons you can’t have for a false P. They don’t satisfy Gettier 1. If these are the kind of reasons one needs for knowledge, then – voilà! – Gettier’s counterexamples are rendered ineffective. You can – via Gettier 2 – come to a justified true belief that Tom lives in California by way of a false belief that he lives in San Francisco, but because your reasons for believing he lives in San Francisco are not conclusive, they won’t (or needn’t) be conclusive for thinking he lives in California either. So your belief that he is in California won’t qualify as knowledge despite its being a justified true belief that he lives in California.

I have so far ignored the question of whether Gettier’s second assumption should be accepted. I have only argued that if it is accepted, one must reject Gettier 1. But what if one doesn’t accept 2? Don’t we have some wiggle room here?

For many philosophers this will be an academic question. There is no wiggle room. For these philosophers rejecting 2 brings on epistemological Armageddon. I have myself rejected principles (they are called closure principles) like 2. I believe there are things people don’t know (and sometimes have no way of knowing) that they know are implied by things they know. But this is a controversial issue that we can comfortably set aside here. Even philosophers like myself who reject closure principles are willing to accept them when they are restricted to the kind of obvious implications at work in familiar Gettier examples – implications such as that [P] implies [P or Q] or that [Tom did it] implies [Someone did it]. Controversy about closure arises with implications such as: [There are cookies in the cookie jar] implies [Idealism is false]. Since cookies are genuine physical objects, not ideas in the minds of conscious beings, anyone who thought himself able to see that there were cookies in a cookie jar would think himself able (with the help of closure) to refute Bishop Berkeley’s idealism by looking into cookie jars. I don’t think it is that easy to refute Berkeley’s philosophy. It isn’t that easy because that use of closure is illicit. Closure is not a generally valid principle. You can’t always use it to come to know what you know is implied by what you know. But its use in typical Gettier examples seems quite unobjectionable.
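The closure principle at issue can be stated compactly (standard epistemic-logic notation, not Dretske’s own):

\[
\bigl(Kp \wedge K(p \rightarrow q)\bigr) \rightarrow Kq
\]

Dretske’s position, as described above, is that this schema fails in full generality but is harmless when the entailment is as obvious as that from [P] to [P or Q].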

I conclude, therefore, that Gettier 1 is the troublemaker. The kind of justification one needs to know is not the kind one can have for a false proposition. That, I submit, is what we should have learned from Gettier.

Fred Dretske is senior research scholar in philosophy at Duke University and author of Knowledge and the Flow of Information, Explaining Behavior, and Naturalizing the Mind.