Against Bernard Williams

Utilitarianism is one of the most influential schools of normative ethics in modern philosophy. Broadly defined, utilitarianism is a form of consequentialism and teaches that “the morally right action is the action that produces the most good.” Utilitarianism “is also distinguished by impartiality and agent-neutrality,” meaning that “Everyone’s happiness counts the same.” Jeremy Bentham is generally regarded as the founder of classical utilitarianism, though the term was only later popularized by John Stuart Mill. The Australian animal rights activist and effective altruism theorist Peter Singer is probably the most influential utilitarian alive today.

The English moral philosopher Bernard Williams (1929–2003) was known, among other things, for his hostility to utilitarianism. Like other critics, he argued that its principles are too demanding, to the point of potentially harming the common good by distracting moral agents from everyday “projects” and “commitments” that serve it in less obvious ways. The distinguishing feature of his argument, however, is that utilitarianism turns the moral agent into a cog in a machine, destroying their integrity (to be understood as their “ability to originate actions, […] to be something more than a conduit for the furtherance of others’ initiatives, purposes, or concerns…”).

In the closing words of his essay “A critique of utilitarianism” that formed the second half of Utilitarianism: For & Against (1973), co-written with J. J. C. Smart, Williams wrote: “The important issues that utilitarianism raises should be discussed in contexts more rewarding than that of utilitarianism itself. The day cannot be too far off in which we hear no more of it.”

Fifty years later, that day has yet to arrive. Utilitarianism is an immensely flexible theory, reconcilable with both deeply demanding and deeply permissive codes of conduct. This is its great strength and its great weakness, as Williams seems to have realized. Hence, my argument for its enduring relevance can be formulated as follows:

P1: The utilitarian notion that desirable states of affairs are worth pursuing seemingly underlies all moral philosophies (at least all that most would consider endorsing). Arguably, this is the only good to be said of utilitarianism, and it may not be considered enough to make it a philosophy worth embracing. As Williams writes: “I take it to be the central idea of consequentialism that the only kind of thing that has intrinsic value is states of affairs, and that anything else that has value has it because it conduces to some intrinsically valuable state of affairs. How much, however, does this say? Does it succeed in distinguishing consequentialism from anything else?”

P2: Williams makes a valid point but succeeds too well in dismantling the systematization of moral philosophy. Biographies of Williams emphasize that he was suspicious of any systematization of ethics; yet he continues to demand that we seek standards by which to live morally.

Conclusion: Williams thereby inadvertently exposes the main, and indispensable, function of utilitarianism: as a general framework on which to balance more particular moral considerations.

Utilitarianism and Schizophrenia

Simon Critchley, in Tragedy, the Greeks, and Us (2019), argues that Ancient Greek tragedy is a polyphonic expression of human contradictions, of the impossibility of arriving at a unified understanding of ourselves. Hence the hostility of philosophers like Socrates to the genre: there is no reconciling the schizophrenic nature of tragedy with a coherent value system. “Perhaps philosophy labors under the delusive ideal of the unity of rationality and freedom, that one’s true interests as a rational self correspond to one’s other, perhaps baser, interests and can redeem such baseness,” writes Critchley. “Whatever we can say of tragedy, it does not toil under the burden of such an ideal.”

In making his case, Critchley refers occasionally to the work of Bernard Williams, who rejected “progressivist” interpretations of tragedy. Williams recognized that the ancients had developed tragedy as a way of dealing with issues that are still with us. Critchley quotes an observation by Williams that, in tragedy, humans deal “sensibly, foolishly, sometimes catastrophically, sometimes nobly, with a world that is only partially intelligible to human agency and in itself is not necessarily well adjusted to ethical aspirations.”

There is an echo here of Williams’ skepticism vis-à-vis moral systematization. Throughout his critique of utilitarianism, Williams emphasizes that any attempt to impose order on the chaos of existence will likely do more harm than good.

How, then, are we to develop our values? Williams offers no suggestion. The standard foil to consequentialism is the deontological approach to ethics. Deontological theories are inherently rule-based, in the manner of biblical commandments, wherein “certain actions can be right even though not maximizing of good consequences, for the rightness of such actions consists in their instantiating certain norms…” Yet Williams rejected deontology as well, presumably because it, too, deprives the moral agent of integrity.

Where does that leave us? The consequentialist Peter Railton, in “Alienation, Consequentialism, and the Demands of Morality” (1984), acknowledges the danger of taking utilitarian premises too far: obsessively seeking to do the most good, to saintlike extremes, can crowd out equally or more important commitments, like those to friends and family, resulting in what he calls alienation. But he also argues that it is not as difficult as Williams suggests to incorporate consequentialist principles into one’s lifestyle without going to extremes: “[T]he tension between autonomy [Williams’ integrity] and non-alienation should not be exaggerated.” As someone who was raised Catholic and is vulnerable to obsessive-compulsiveness, I take the danger of sliding into saintlike scrupulousness very seriously, yet I find Railton’s argument the more convincing.

Likewise, Williams’ sparring partner Smart, in the other half of Utilitarianism: For & Against (“An outline of a system of utilitarian ethics”), acknowledges that we would go insane if we tried to foresee every possible consequence of all our actions, and blamed ourselves for every negative consequence of our past actions. Consequentialism is a general guideline, nothing more: “The utilitarian criterion, then, is designed to help a person, who could do various things if he chose to do them, to decide which of these things he should do.”

The only alternative would be to make decisions without recourse to any set of values, schizophrenically, and the end result of that would probably be, ahem, tragedy.

Utility and the Future

So, where does this leave us? What I find most intriguing in Williams’ argument is his acknowledgement that the consequentialist preoccupation with “states of affairs” makes sense insofar as we must attribute intrinsic value to certain things in order to formulate value systems. This implies that states of affairs are as good a thing as any to which to attach intrinsic value. What Williams takes issue with is, of course, the notion that we should formulate value systems at all, but his larger argument posits, compellingly, that utilitarianism is too vague to function as a value system anyway.

There is a subtle contradiction here, as I see it. Specifically, if we should not formulate value systems, what’s wrong with a system too vague to function as one?

This is why we have not outgrown utilitarianism, and why it may be more important now than ever. It is obviously impossible to formulate a perfectly coherent set of morals, but it is absolutely essential that we do our best. We cannot simply adopt our values at random and expect anything less than chaos to ensue.

A major problem for consequentialists has always been the difficulty of collecting sufficient empirical data, in a timely manner, to accurately predict the consequences of our actions. In this age of big data, automation, robotization, artificial intelligence, and virtual reality, this paradigm may be changing. Moreover, these phenomena are making it more important for entities big and small to adopt coherent sets of standards, grounded in moral values, all over an increasingly interconnected planet. We are fast approaching the day when, as Yuval Noah Harari writes in 21 Lessons for the 21st Century (2018), it may be possible to sue a philosopher for programming a self-driving car in a manner which caused a consumer grievous harm.

Therefore, we need a basic framework on which to build our moral values. Utilitarianism, the principle that we should pursue the greatest possible benefit for the greatest possible number, strikes me as the best framework available. With all due respect to the many sources from which we draw our specific values—religion, culture, school, family, government, media—we cannot justify any set of morals without appealing to some larger, general benefit of adopting those values. There can be no pluralism without a collective commitment to some deeper, unifying principle. Deontological theories always seem to fall back on some form of consequential reasoning to justify their prescriptions.

I am no expert on moral philosophy, but I have never encountered any school of thought that seemed to offer a better means of judging prospective moral values, laws, projects, policies, commitments, and so on than utilitarianism. As such, it seems safe to say that Williams’ dreamt-of day when we will hear no more of it remains a long way off.
