Ronald Aronson on the false choice between god and science
A significant share of those reading this article will probably agree that it is possible to talk meaningfully about morality. There are, after all, rights and wrongs. We live amidst, and subscribe to, not only norms of behaviour but also standards of truth, both of which are somehow inherent in human social life and indeed in reason itself. These are not Platonic Ideas but rather emerge historically and differentially within societies, and from one society to another. Although we know that cross- and intra-cultural disagreement over specific norms and standards is inescapable, we also know that this does not make thinking and arguing useless. Humans understand and develop ethical norms and standards of reason, however imperfect and provisional, however historically situated. Indeed, a good number of us would say it is possible to speak meaningfully if never absolutely about truth with a small “t,” despite the complexity and difficulty entailed in clarifying it, and despite the social and cultural stakes contending in some of its competing versions.
What’s more, in the United States at least, most ordinary citizens would tend to agree. When asked by the most recent Pew Survey where they get their ideas of right and wrong, 29% answered “religious teachings and beliefs”, while 5% of respondents looked most for guidance to “scientific information”. But the largest number, 61%, saw themselves standing on somewhat less substantial ground, choosing either “philosophy and reason” (9%) or “practical experience and common sense” (52%). These figures suggest that the vast majority of Americans have a clear sense of the role played by human thought in reaching moral conclusions – that is, norms of right and wrong exist without being sanctioned by divine authority or science. In fact, when the most fundamentalist and literalist of believers are excluded from the survey’s calculations, 80% of Americans draw their moral principles from areas other than religion or science.
You wouldn’t know this by reading battlefront reports from the Truth War. Consider the current argument between Sam Harris, “new atheist” author of the best-selling The End of Faith, and Francis S. Collins, former head of the Human Genome Project and now director of the National Institutes of Health. “Argument” may not quite capture the asymmetrical character of this particular skirmish: while Harris denounces Collins fiercely and loudly, America’s most famous believer-scientist replies gently and indirectly, as befits an important public official.
Three years ago, Harris savaged Collins’s The Language of God: A Scientist Presents Evidence for Belief. His review, titled “The Language of Ignorance”, ends with these devastating sentences: “This is an American book, attesting to American ignorance, written for Americans who believe that ignorance is stronger than death. Reading it should provoke feelings of collective guilt in any sensitive secularist. We should be ashamed that this book was written in our own time.”
Then this past summer, when Collins was appointed to head the NIH, Harris published a hostile op-ed in the New York Times citing some of Collins’s most recent statements about the limits of science and the importance of religion, including a lecture (with slides) on God’s activities: creating the universe, setting evolution in motion, and endowing humans with a moral sense. All of these belong to “God’s plan”. In this lecture Collins had rejected scientific efforts to derive morality from evolution. Harris ended his op-ed by asking whether American biomedical research should be entrusted “to a man who sincerely believes that a scientific understanding of human nature is impossible.”
In reply, the new head of the National Institutes of Health sought to reassure Americans that “I have no religious agenda.” But he has been using his celebrity as a scientist to promote an Evangelical Christianity that accepts evolution but places God behind it, doing so both in public lectures and through his BioLogos Foundation, which is dedicated to combining science and faith. For Harris the question is not only whether Collins can keep his other commitments from affecting his work at the NIH, but also what effect Collins’s theme of the “limits of science”, and his religious lecturing and writing, will have on the sorry state of scientific literacy in the USA.
For those who have followed President Barack Obama’s religious politics, it is clear that Collins was appointed not despite his faith or independently of it, but because of it. In appointing someone who is both a scientist and a highly public believer, the administration invokes the mantra of reconciling science and religion. But this is a rather one-sided pursuit of common ground between the religious and the irreligious, which in fact consists of an occasional shout-out to secularists – who have nowhere to go but the Democratic Party – while actively courting religious believers. This is of a piece with Obama’s policy of revitalising George W. Bush’s “Faith-Based Initiative”, which funds churches to perform vital social services.
Yet it would be a mistake to take sides too quickly in the Harris-Collins controversy. It demonstrates one of the paradoxes of the Culture War, in which opposites conspire with each other against the rest of us. A troubling agreement links Collins’s embrace of Evangelical Christianity and Harris’s attack on Collins: together, these non-philosophers would exclude some of the most important functions of human reason. Both offer an impoverished, narrow conception of reason and knowledge, proposing a stark choice to the rest of us: approach life’s important questions through science, or turn to religion. Though dressed up as a conflict over whether, for example, the latest research from neuroscience and evolutionary psychology needs to be supplemented by religion, it is really an old-fashioned, quite nineteenth-century debate over whether science or religion is our ultimate answer. Either science will finally explain right and wrong and unlock the mysteries of human nature and the human mind, or we must turn to God to do so.
But surely this was a false choice two hundred years ago, and it remains so today. Collins’s turn to religion is based on what he calls the universal “Moral Law”, in capital letters, which by its universal and binding nature seems to transcend humans and can only come from a superhuman force, God. Harris on the contrary believes that “our understanding of human nature can be derived from neuroscience, psychology, cognitive science and behavioral economics, among others.”
In a sense the debate comes down to what we can hope for from evolutionary science: whether we anticipate a deeper and more revealing look into the sources of human meanings, or simply more data demanding illumination from elsewhere. No less an evolutionist than Stephen Jay Gould set out the path Collins now follows with his effort to define science and religion as presiding over “non-overlapping magisteria”. In Rocks of Ages, Gould proposed his “resolution to … the supposed conflict between science and religion”: separating human inquiry, except for realms such as art and beauty, into two domains according to the “appropriate tools” of each. Science “covers the empirical realm: what the universe is made of (fact) and why it works this way (theory). The magisterium of religion extends over questions of ultimate meaning and moral value.” Gould was not only seeking a political truce in the Culture War; like Collins, he was also skeptical of resolving all vital life questions through the scientific method.
In contrast, proponents of the current enthusiasms for the all-explaining character of cognitive science, neuroscience, and evolutionary science hold out the will-o’-the-wisp of resolving life’s important questions through them. Imagine basing value judgments on something more than the contentious and fragile character of human history and rationality: imagine that brain circuitry or a distant evolutionary process might substantiate morality! One group of neuroscientists cited by Jürgen Habermas has proclaimed, for example, that “within the foreseeable future science will be able to both explain and predict thoughts and decisions, as well as sensations and emotions, based on observable physiochemical processes in the human brain.”
Obviously something is wrong with this picture. What might be a philosophical response to such projects as well as the disagreements they provoke? Correcting the picture entails changing the entire direction of the conversation, and Habermas and other philosophers of the rationalist and continental traditions have provided us with rich materials for doing so, as have the social and historical worlds in which we dwell. In what follows I would like not so much to present a definitive answer to the questions raised by Collins and Harris as to sketch a better way of thinking about them, one that includes a richer, more complex notion of reason and one that is willing to sift through and take on board its historical accomplishments.
Habermas would say that we disappear as persons if we give in to the temptation to take ourselves as no more than objects of explanation. No wonder believers like Collins, facing all-embracing claims for the value of scientific approaches to life’s important questions, feel the need to look elsewhere: something is missing. The question must always be asked: what are the purposes of observations, and what values do the observers attach to them? Where do these values come from? Not from heaven: operating in all of the fields championed by Harris are conscious, acting subjects, embedded in language and history, creators of norms and goals that transcend any individual and cannot be validated by the methods of science. These subjects can never be completely reduced to scientific objects of knowledge because they are actors and knowers, intentional beings. They stand behind every observation, every act of knowing. These intentional, rational participants who follow standards and give reasons for their actions are, after all, social beings. With others, they have over time developed collective identities, shared perspectives and values, using specific languages and following specific cultural styles. They have a history.
It is as subjects, indeed social subjects, that we know, we decide on truth, and we judge right and wrong. As social subjects we decide on the rules of “communicative action” in which these activities take place. And these rules include the existence of such a thing as objective truth, and the active belief that people are capable of arriving at it. If we are truth-seeking animals, we might of course ask how we got that way, but we must also ask what our truths are and what the rules are for arriving at them.
This of course entails giving up any pretense to absolute knowledge. It entails allowing oneself to enter into discussion, to submit what one says to the judgment of others, to be proven wrong by them, to be seen as fallible, and thus to realise that any particular piece of knowledge is always tentative, always demanding verification. This in turn implies a commitment to a communicative process in which we are always in dialogue with others, and in which they are always looking over our shoulders and commenting on what we claim to be true.
Truth shifts historically and is framed according to one’s disciplinary standpoint. It is never absolute, but it is objective. It is never raised above humans, but is always worked out with, for, and about others. This is even, or especially, true of science. Its knowledge is necessarily provisional; it can be challenged and even overturned – which makes it dramatically different from the supposed “absolute knowledge” conferred by religious faith. But this also places science within a larger human project, rather than supporting any scientific claim to ultimacy.
But we should not stop here. We should also tally some results. Can we do so where morality is concerned? In fact, some of these results have been codified, in the United Nations Universal Declaration of Human Rights and then in the Covenants on Civil and Political Rights and on Economic, Social, and Cultural Rights. It is worth appreciating these truth claims and their posture of objectivity and universality. The Universal Declaration begins by proclaiming that “recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world.” It then stresses civil and political rights, including the prohibition of slavery and torture, the right to citizenship, property, equal protection under the law, democratic participation in government, and freedom of religion. Since it was promulgated in 1948, people all over the world have further spelled out what it means to be human – in thinking, writing, and teaching, in response to new technical, political, and cultural possibilities, and most sharply and dramatically, in political struggles.
The process of redefining what “we” are entitled to is always a process of redefining who “we” are. Previously excluded and subordinated groups have joined, and expanded, a once narrower discourse. Many who were at first voiceless, especially colonial peoples in Africa and Asia, demanded entry, which explains in part why the Universal Declaration of Human Rights was followed, in 1966, by the International Covenants on Civil and Political Rights and on Economic, Social, and Cultural Rights. Both of these stress that “the ideal of free human beings enjoying freedom from fear and want can only be achieved if conditions are created whereby everyone may enjoy his economic, social, and cultural rights, as well as his civil and political rights.”
Obviously these are still only promises, broken as often as honoured. They reach considerably beyond the practice of any particular contemporary nation-state, calling for higher education “to be made equally accessible to all”, the right to work, the right to “the highest standard of physical and mental health”, the right to an adequate standard of living, and, despite the male pronoun, equal rights for women, and equal pay for equal work. But all of this suggests a slowly developing universal vision.
Rather than dismissing these espoused rights as empty rhetoric, it is more correct to place these ideas historically. They could never have been put on a collective agenda of humanity three hundred, two hundred, or even a hundred years earlier. They began to be formulated during the Enlightenment, long before there was any collective institution to speak for humanity. But by the middle of the twentieth century, particular countries, and political and social movements almost everywhere, had developed to a point where such ideas were becoming widespread and politically compelling. Historical and social change had put on the agenda of humankind “the advent of a world in which human beings shall enjoy freedom of speech and belief and freedom from fear and want.” Something remarkable occurred: rulers and ruled together accepted the demands contained in the Declaration and Covenants as the norms by which they would agree to have their societies be judged.
The articles of these documents embody the international consensus concerning human rights achieved by the late twentieth century. They contain recent thinking about human social morality, and they contain a partially fulfilled, still-frustrated human project. They demonstrate how, without invoking God or science, humans can confidently talk about morality. And they demonstrate that such ideas are more and other than merely ideas. The Declaration and Covenants claim a moral force that reaches well beyond the governments that agreed to them: they are our promises to ourselves.
They testify to the many-sided and long-term development of a universal morality, its peculiar character of historical objectivity. This proceeds through rational argument and social struggles, is experienced in upheavals of social relations, and becomes embodied in government institutions and legal codes. We can still say that, buttressed by their own kind of evolved solidity, norms matter, truth matters – as hypothesised, argued for, demanded, struggled over, won sometimes wholly and sometimes in part, and lived.
Ronald Aronson’s latest book is Living without God (Counterpoint), from which portions of this article were taken
I read this and was somewhat taken by it. However, it seems disjointed.
In particular, I would guess it was the latter part of the essay you wanted to get at and the earlier stuff is just a lead in; Harris and Collins by the end were hung out to dry, so to speak.
Being a student of proceedings of this sort, I decided to reread this with a more discerning eye. I gave up when you put me in the position of accepting that I “… subscribe to … norms of … standards of truth … which are somehow inherent … in reason itself.” Perhaps if I’m able to figure out what this means I might agree with you, but may I suggest you don’t hold your breath.
The culture of myth has been practiced by humanity for 10,000+ years. Once we have dabbled in the modern sciences for that length of time, we’ll reflect back and know there is/are no god(s). Of course, I’m kidding myself since, I’m sure, this “debate” will still rage on. Why not face the fact there is no gray area for discussion? “Believers” live with their superstitions and need for a “higher power.” Evolutionists/humanists have gladly unhooked themselves from the myth and squarely placed themselves in the slowly evolving sciences. At the moment of our reluctant last breath, all of us should be content.
This seems to be two separate articles together, and something of a frustrating read as a result.
I think the author is saying that most ordinary people learn their values through experience and common sense and are therefore distrustful of both science and religion. A pretty obvious point but one that is surprisingly frequently overlooked by god-botherers and militant atheists alike.
Does it make sense to base morality on objective reason, rather than on personal conscience or intuition, ‘that still small voice’ that arises when choices have to be made moment to moment?
Should reason, a function of consciousness, be given god-like status and deified in a supposedly philosophically objective view of what morality is supposed to be? Can this be extrapolated to apply in all times and in all circumstances? Mission statements can be smoke screens for nefarious behavior, for which a philosophical point of view can be the jumping-off point.
Science and philosophy are objective, impersonal, and cold. They lack affect (the feeling function), which should be engaged, along with reason, in choosing what is moral in a particular situation.
Morality does best evolving from the individual to the group. Objective standards do not always translate into subjective standards, as they are too impersonal.
Morality is in our nature, and it’s in the form of conscience; it’s just a matter of listening to it or not.
Society has a great impact on molding our morality, as do the lessons we learn from our decisions in life.