Shorter Sam Harris: FAQ :: How can you derive an “ought” from an “is”?:
This is not quite how he’d probably shorter himself, but that’s not the immediately important issue. The immediately important issue is that he thinks we could get from “is” to “ought” by saying that actions which produce the worst possible suffering for all sentient beings are bad actions, and we ought to do something else. In other words, the “ought” Harris manages to derive (by various assumptions) is that we should not do the very worst thing possible.
As a reply to his critics (cf.), this goes nowhere special. Philosophically, I don’t think this really cuts it. Even if we grant all his arguments, he’s not gotten from “is” to “ought,” but from “is” to “ought not.” It strikes me that an “ought” should actually say what we “ought” to do.
Even that bit of the argument doesn’t hold together, but Harris argues not only that this “ought not” is a useful contribution; he insists that it is all that’s needed: “All lesser ethical concerns and obligations follow from this.” This strikes me as either demonstrably false or just useless. I don’t know of anyone who thinks that their ethical system will tend to produce the most possible suffering for all sentient beings, or whose ethical system outside observers claim would maximize suffering for all sentient beings. A guide to which moral values are right or wrong but which cannot actually distinguish between any extant moral systems is fundamentally not helpful.
Let’s take a fairly standard extreme of a profoundly evil value system: Nazi Germany. Hitler did a lot of horrible things, slaughtering millions, waging unprovoked wars of aggression, wantonly attacking civilian centers with bombs and rockets, crafting a state policy dedicated to eradicating certain classes of people based on ethnicity, disability, sexuality, religious practice, etc. The suffering he caused was astonishing. But some people benefitted from his system, not least Hitler himself, but more generally all healthy members of the Aryan race.
In short, it is not clear to me that even Hitler violated Harris’s injunction against causing “the worst possible misery for everyone” (emphasis from original, repeatedly), there being more misery Hitler could have wanted to impose. Indeed, someone who undertook to destroy all life on earth but for himself or herself would still not, by Harris’s standard, be “wrong” (emphasis original). Only if this person proposed to torture and then kill (in a maximally painful manner) all sentient life including himself or herself (and whatever sentient life exists on other planets) could we call that action wrong. Forgive me if I’m nonplussed by this moral insight. Indeed, if the capacity of sentient life to suffer is upwardly unbounded (which is not immediately implausible), then value systems could never produce “the worst possible suffering,” and one could argue that no value system would ever violate Harris’s prohibition.
Things would be better if he could generate some plausible metric for what we “ought” to do, so that we could identify peaks and valleys within a moral landscape. But doing that would involve making a value judgment. And that’s where Harris’s list of alleged “facts” really falls apart. Because most of it is deeply value-laden, and few if any of the claims are actually fact claims (as opposed to syllogistic logic or value-laden premises for waffly syllogisms).
Ophelia Benson has pointed some aspects of this out in Harris’s comment thread and her blog, noting for instance the fallaciously excluded middle in Harris’s attempt to get from ought-not to ought. If we ought-not create maximal suffering for everyone, he argues, we should seek to increase well-being for everyone. As Ophelia observes, most moral systems:
lead to well-being for some people and misery for other people. It just isn’t usually the case that cultural practice X leads to well-being for everyone or that cultural practice Y leads to misery for everyone. One of the things that cultural practices do is sort people and allot more well-being to some than to others.
Harris’s contention (underlined in all its occurrences) is that our concern should be to avoid creating “the worst possible misery for everyone,” where “everyone” is treated throughout as “all sentient beings.” That’s a big value judgment. Ophelia is dead-on in pointing out that most of the variation among moral systems is variation regarding who counts in terms of the moral calculus. “Do unto others…” is uncontroversial, but who counts among those “others”? Other members of your family (weighted by genetic similarity, perhaps)? Members of your racial group? Members of your nation? Residents in your town? Only those who share your gender? Only those who share your sexual orientation? Only humans? Only sentient beings? Only living things? Only physical entities (but not corporations or abstract ideas)? Only physical entities or conglomerates of physical entities? Only sentient beings or conglomerates of sentient beings (i.e., corporations)?
Any of those can be and has been defended by some group at some point. The Supreme Court during the Lochner era gave greater weight to the rights of corporations than to individual workers, a doctrine that the Roberts court seems intent on reviving. I think that’s immoral, but I don’t know if it causes the most suffering possible. I do know that corporations would not even be part of Harris’s moral calculus. Which is also a fair choice, but not one motivated purely by empirical evidence.
Nor, troublingly, does Harris allow any inherent moral status to the natural world. Aldo Leopold argued convincingly for a “land ethic” which would bring the natural world into the system of ethical obligations we feel towards family and society more broadly. The issue, he argued, is not the life or wellbeing of individual deer on a mountain, but the integrity and wellbeing of the ecosystem as a whole. Killing off the wolves may leave a lot of happy deer on the mountain, but it ultimately causes overgrazing and degradation of the landscape. This doesn’t extend the same rights to mountains as we would to people, and it also runs exactly counter to the ethics underlying the animal rights movement. For Leopold, the individual animals aren’t what matters. Suffering is part of life (as the Buddhists say), and the important thing is to ensure the integrity of the natural system itself. If that means hunting deer or weeding out some plants or reintroducing wolves (who can be crueler hunters than humans), then that’s the right thing to do. Animal rights activists, by contrast, argue that animals have moral status and rights as individuals, including at minimum a right to their own lives, and generally also a right to individual agency (thus, not to be kept as pets or for purposes of labor or experimentation, let alone the harvesting of flesh, skin, eggs, honey, milk, etc.).
Leopold’s land ethic is a foundation of modern environmental ethics, and a major factor in the growth of environmentalism in the 20th century. The closest one could come to wedging it into a Harrisian ethical framework would be to evaluate the ways in which environmental degradation contributes to the suffering of sentient beings. But the central ethical claim of a land ethic is that it is wrong to treat the natural world as a means to an end, that the integrity of natural systems is an end unto itself. Harris’s system thus not only fails to account for environmental ethics (a nontrivial subset of the modern discourse on ethics), it is actively at odds with the principles of environmental ethics. I’d bet money that I could find similar examples from other fields (space exploration comes to mind, where a similar concern for non-interference in the natural state of other planets is a major topic of discussion). Animal rights would not uniformly fall into Harris’s scheme, nor would phenomena like fruitarianism, where not only animals but plants are extended certain moral rights. That these systems fall outside Harris’s framework doesn’t say that they are wrong, it just shows how narrow his own view of moral philosophy is. He’s just bundling all his assumptions into a structure that he thinks he can pass off as scientific. I’d think his goal was to impose this on others, but once you scratch the surface, he hasn’t actually got anything to impose.
Let’s briefly consider his 9 claimed facts:
FACT #1: There are behaviors, intentions, cultural practices, etc. which potentially lead to the worst possible misery for everyone. There are also behaviors, intentions, cultural practices, etc. which do not, and which, in fact, lead to states of wellbeing for many sentient creatures, to the degree that wellbeing is possible in this universe.
Set aside that “the worst possible suffering” is a standard so absurd as to be meaningless. Focus instead on the choice to emphasize suffering and sentience. He does this because the ability to recognize pain is a mark of sentience, so he can claim that using suffering as a metric here is not an arbitrary value judgment. But the choice of sentience is still just such a value judgment, and not an uncontroversial one. Again, it creates a potential direct conflict with environmental ethics (in which hunting and other killing of wild animals or plants can be not only acceptable but morally obligatory). Then note that Harris is treating wellbeing and suffering as opposites. What about people who get joy from suffering? Does this mean it’s immoral to be a masochist? That sadists must be made to forgo their own wellbeing (which is enhanced by causing suffering) in order to avoid diminishing the wellbeing of others? What about the unresolved problems with defining wellbeing?
This “fact” is simply not a fact claim. It is too deeply embedded in value judgments to be a fact on the order of a claim like “objects with mass exert an attractive force on one another.”
FACT #2: While it may often be difficult in practice, distinguishing between these two sets is possible in principle.
Which “two sets”? What about the excluded middle, in which suffering is caused to some but the wellbeing of others is increased? This is not a fact claim, it is an attempted bit of logical deduction from the previous claim, and thus packages on top of its own logical fallacy all of the non-factual claims from the previous point.
FACT #3: Our “values” are ways of thinking about this domain of possibilities. If we value liberty, privacy, benevolence, dignity, freedom of expression, honesty, good manners, the right to own property, etc.—we value these things only in so far as we judge them to be part of the second set of factors conducive to (someone’s) wellbeing.
The first sentence is at best an attempt at definition, is value-dependent itself, and is in any event not a matter of scientifically testable fact. Lots of people do value liberty, freedom, privacy, etc., etc. on their own merits, regardless of whether they enhance wellbeing in all cases. Consider the ACLU’s defense of the free speech rights of Nazis to march in Skokie. The ACLU wasn’t happy to have Nazis marching through Skokie, Skokians weren’t happy to have Nazis march through their city, and I expect that the Nazis would have been just as happy to be able to make a ruckus over being censored as they were to be able to march. The ACLU defended them not because doing so enhanced anyone’s immediate wellbeing, but because they regard free speech as an end unto itself. I support the ACLU’s work because I share that value. To the extent Harris’s first sentence is meant to be descriptive rather than normative (a description of how “value” is described rather than how it ought to be described), it is simply false. If the claim is not an is but an ought, well, it undermines his claim to have gotten from is to ought (which he doesn’t claim to do until “fact” 9).
FACT #4: Values, therefore, are (explicit or implicit) judgments about how the universe works and are themselves facts about our universe (i.e. states of the human brain). (Religious values, focusing on God’s will or the law of karma, are no exception: the reason to respect God’s will or the law of karma is to avoid the worst possible misery for many, most, or even all sentient beings).
The second word of this “fact” is misleading. Saying “therefore” suggests that there is some sort of logical necessity linking the prior statements to this one. No such logical necessity is obvious, if it exists at all. And even if it did exist, that would make this a deduction, not a fact.
FACT #5: It is possible to be confused or mistaken about how the universe works. It is, therefore, possible to have the wrong values (i.e. values which lead toward, rather than away from, the worst possible misery for everyone).
If we needed proof that “[i]t is possible to be confused or mistaken about how the universe works,” we need look no farther than Mr. Harris. Alas, Harris ignores the category of “not even wrong,” a category including untestable claims such as value judgments.
FACT #6: Given that the wellbeing of humans and animals must depend on states of the world and on states of their brains, and science represents our most systematic means of understanding these states, science can potentially help us avoid the worst possible misery for everyone.
That wellbeing “must depend on states of the world” has not been established here. The experience of suffering in life as we know it is the result of physical brain states (which is not an undisputed point, but Harris and I agree there and I don’t want to quibble), so I suppose one could argue that suffering is empirically measurable. But it doesn’t seem obvious that “wellbeing” is empirically measurable. Nor would I want to get too hung up on the details of brain state as the sole definition of suffering. Fetuses which do not yet have the capacity to feel pain let alone to process it intellectually still deserve some status in our moral calculus, as do people with severe brain damage that prevents them from recognizing pain, or with neurological disorders that prevent their nerves from transmitting pain signals. This also doesn’t account for the feelings of sympathy we have for robots and other entities which we know haven’t actually got an internal emotional state. Consider Mark Frauenfelder’s encounter with the Pleo, an encounter which reminds Mark of a novel chapter anthologized by Doug Hofstadter and Dan Dennett, and which reminds me of an Army colonel’s sympathetic defensiveness toward a mine-destroying robot:
At the Yuma Test Grounds in Arizona, the autonomous robot, 5 feet long and modeled on a stick-insect, strutted out for a live-fire test and worked beautifully… Every time it found a mine, blew it up and lost a limb, it picked itself up and readjusted to move forward on its remaining legs, continuing to clear a path through the minefield.
Finally it was down to one leg. Still, it pulled itself forward. …The machine was working splendidly.
The human in command of the exercise, however — an Army colonel — blew a fuse.
The colonel ordered the test stopped.
The colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg.
This test, he charged, was inhumane.
This possibility, that sentient beings might care for the wellbeing of non-sentient beings, is not even on Harris’s radar. Harris might say that this sort of sympathy is misplaced, that it is a value which is “wrong,” but given that neither this nor anything else under discussion actually constitutes the worst suffering possible for everyone, I don’t see how he’d justify that. The borders around “suffering” and “wellbeing” and “sentience” are too blurry for the fine distinctions Harris is trying to make, and the grounds for using those concepts as the basis for moral choices are too ambiguous.
FACT #7: In so far as our subsidiary values can be in conflict—e.g. individual rights vs. collective security; the right to privacy vs. freedom of expression—it may be possible to decide which priorities will most fully avoid the worst possible misery for many, most, or even all sentient beings. Science, therefore, can in principle (if not always in practice) determine and prioritize our subsidiary values (e.g. should we value “honor”? If so, when and how much?).
“It may be possible” is not a testable claim, thus not legitimate as a scientific fact. Nor can one legitimately go from “it may be possible” to “science can.” At best, he’s saying “science may be able to determine and prioritize our subsidiary values.” And few people would disagree that science can inform those choices. The question is whether other factors enter into those choices, a question that this point fails to dismiss.
Every extant moral system already has a complex system for weighing the importance of certain values against others in context-dependent ways. I think (but accept the possibility of error) every major conflict over moral questions either boils down to a disagreement about which value ought to take precedence in a given situation, or which group affiliation should take priority. If Harris is unable to establish that science can do this, he’s not creating anything that can parallel extant moral systems.
FACT #8: One cannot reasonably ask, “But why is the worst possible misery for everyone bad?”—for if the worst possible misery for everyone isn’t bad, the word “bad” has no meaning. (This would be like asking, “But why is a perfect circle round?” The question can be posed, but it expresses only confusion, not an intelligible basis for skeptical doubt.) Likewise, one cannot ask, “But why ought we avoid the worst possible misery for everyone?”—for if the term “ought” has any application at all, it is in urging us away from the worst possible misery for everyone.
I have infinite faith in the ability of philosophers to concoct reasonable ways to ask questions that seem utterly absurd, so I hesitate to fully endorse this claim. I will say that it’s a uselessly weak claim that does nothing to get us to a genuine “ought.” It also leaves us grasping a bit for how to empirically measure “misery,” how to determine which misery is “worst,” which entities to include within the scope of “everyone,” and how best to aggregate their suffering to determine which misery is “worst for everyone.” In short, every noun and adjective in this phrase, a phrase he repeats and underlines at every occurrence, is value-laden.
FACT #9: One can, therefore, derive “ought” from “is”: for if there is a behavior, intention, cultural practice, etc. that seems likely to produce the worst possible misery for everyone, one ought not adopt it. (All lesser ethical concerns and obligations follow from this).
Incorporating by reference all the previously cited flaws with the underlined phrase, and my general sense that this “ought not” is not the sort of “ought” which anyone could find useful as a moral code, I will note that the “therefore” is not part of any obvious syllogistic structure that compels the truth of the subsequent claim. The parenthetical might rescue this from a charge of utter uselessness, but Harris makes no effort to explain how he would derive any subsidiary values from the injunction against doing the absolutely worst thing possible. No value system in wide use seems intent on creating the maximum suffering possible for all sentient beings, so nothing at all really follows from this injunction.