Moral Standing
views expressed dated: 2025-09
Anyone who says that life matters less to an animal than it does to us has not held in his hands an animal fighting for its life. The whole of the being of the animal is thrown into that fight, without reserve. When you say that the fight lacks a dimension of intellectual or imaginative horror, I agree. It is not the mode of being animals to have an intellectual horror: their whole being is in the living flesh...I urge you to walk, flank to flank, beside the beast that is prodded down the chute to his executioner. — J. M. Coetzee, "The Lives of Animals"
Introduction
The purpose of this note is to explore the concept and puzzles of moral standing. Which beings have moral standing? Why? How do we know? Exploring the philosophical puzzles of moral standing is like pulling on a small root connected to a vast underground network of other entanglements. What at first seems like a self-contained question, "what kinds of beings have moral standing, and why?", inevitably unearths deeper philosophical commitments the more we pull. Ideally, one could declare a view on moral standing without also serving up a heavy helping of metaphysics, metaphilosophical commitments, and epistemology along with it. I confess myself quite unable to do that, and so this note will chart a path through several of these domains on its way to answering the questions of moral standing it poses.
In today's world, one's view on moral standing is one of the most readily actionable philosophical beliefs, one that can be acted on daily. In a world of scale, we increasingly face collective choices that impact the other creatures that inhabit this world with us. Factory farming is perhaps one of the worst moral atrocities occurring on the planet at this moment, if the animals that are its victims have moral standing. Or is it? Perhaps it is not, because those animals "don't count"? Which beings do count? On what basis do they count? Is moral standing a spectrum gradually increasing from the bacterium to the dolphin to the human, or is it a binary switch that at some point turns on? And if some beings have more moral standing than others, how do we compare the moral status of one with that of another?
In this note, I attempt to tackle these questions with a focus on the deeper metaethical commitments and the roots that they take in metaphysics, metaphilosophy, and epistemology. I am not unaware of the shallow treatment that practical issues receive here, but they are not my primary target. I will put forward a view of moral standing that is quite different from one I previously endorsed, and attempt to resolve a few of the deeper puzzles that arise along the way. The view that emerges is a pragmatist/Kantian parity account—borrowing from Christine Korsgaard's philosophy with a pragmatist spin—that claims that respecting the well-being of any creature for whom valuing is possible is demanded by rationality and rational consistency, once one understands that concepts of moral weight apply to that creature. These moral concepts apply almost invariably to what can be called "a living being", but with varying degrees of certainty and correctness in their application. I will thus argue that moral standing is graduated because the moral reasons different beings provide (not give) us to act come in varying intensities. Along the way, I will attempt to show how the view outlined here sidesteps many of the problems with essentialism, functionalism, and views of moral reasons that lead to relativism, as well as many of the pitfalls that broadly Kantian accounts of the moral standing of non-reasoning creatures face.
On Harming Ants, Dogs, and Neighbors
I invite the reader to imagine a walk on a hot day in which one notices—in the lazy ambulatory flicking of the leg—that one is about to step on an ant. In a lapse of attention, perhaps we do not step out of the way and tread on the ant.
Does this matter? Does the ant have moral standing? That is, is it bad that the ant was stepped on? Probably not, we want to say, or at least not that bad. I have started with an ant because ants are an ambiguous case. They are perhaps simple automata without real feelings? How do we know? What are real feelings? Must I experience them to know? What grounds do we have to doubt that a writhing ant feels pain? Is it just that they are unlike us? Clearly there can be more to it than that. We can investigate their nervous system and so on. Allow an odd question to strike you:
Why do we think the presence of nervous systems tells us that creatures feel pain?
Upping the stakes a bit, let's suppose that I tread on the neighbor's dog. This case surely stands out to us as bad. Is it because the dog is more like us? Perhaps because it yelps and is more expressive? Again, we think the answer must be deeper than that.
Suppose further that I step on my snoozing neighbor himself. And here we have no issue claiming this is bad. There is no lingering mystery, at least for all but the nihilist, as to whether I ought to assault my neighbor. Is it as bad to tread on the dog? Can the two be compared? Perhaps stepping on the dog isn't bad, except only in some way that reflects on my lack of character? Maybe all "bads" are insulated from comparison between one another, only to be known by their havers?
One type of reader might already see a hierarchy here: ants don't count or barely count morally, dogs do a bit (?), people do without question. Yet another kind of reader, a Kant or a Descartes, might claim that only the person has worth, but the other two just trigger our moral feelings. Yet another view might suppose that there is some common denominator to all value havers, either in the natural world or an idealized one. Perhaps all morally relevant beings share this common denominator in all possible worlds? At this point, considering any of these views, we can hardly hold back a deluge of further philosophical questions.
The mystery is how we can justify attributions of moral standing, let alone a hierarchy, whether continuous or in discrete steps (on/off). Merely invoking superficial similarity to humans is clearly wrong. It cannot be beyond imagination that a species, real or fictional, could have genuine moral standing while being absolutely nothing like us. Perhaps there is, or must be, a deeper similarity we can turn to between ourselves and other beings of moral standing? Before exploring these questions more deeply, I want to paint with a broader brush and survey, at the highest level, the philosophical strategies we might employ to tackle them.
Expressing or Describing?
There are two broad categories of views we might adopt to address this philosophical puzzle. The first is a kind of expressivism. The second is a kind of descriptivism. The expressivist might claim that we are merely praising or disparaging ant-stomping, dog-stomping, and neighbor-stomping. I take this line of reasoning to be a dead end for several reasons. The heaviest roadblock in the path of this strategy is the fact that claims about moral standing seem to have propositional content, and so we encounter the Frege-Geach problem here. If we parse
"Stomping on ants is wrong"
as
"Booo stomping on ants!"
then we fail to make sense of its embedded use in conditionals like:
"If {stomping on ants is wrong} then {I shall be more careful}"
because
"If {Booo stomping on ants!} then {I shall be more careful}"
is unintelligible. So flat-footed expressivism will not work here. Perhaps the expressivist can pull this strategy out of its apparent nosedive, though. More contemporary expressivist strategies attempt to maintain the non-cognitivist (expressive of attitudes rather than truth-apt) construal of moral language but move to deflate truth in order to lower the cost of allowing value-claims to be truth-apt. This I think either collapses into a kind of realism, leaving our question unanswered, or leaves us back at a more flat-footed expressivism.
Much more could be said about expressivism, but I believe neither of these moves will work, not to mention the vicious relativism that expressivism can threaten without a standard for correct assessments of value. If whatever we feel straightforwardly dictates what is right and wrong, then we simply are not talking about right and wrong anymore.
The Motivation for Emotivism
The impetus for emotivism is often a kind of Humean one, and it has a grain of truth to it. Felt emotions do cause us to evaluate and to realize moral truths. Additionally, they can often serve as evidence for some moral claims. These functions, though, are not to be confused with felt emotions being reasons for moral claims. If felt emotions are allowed to serve as reasons for moral claims, then without a definition of correctly felt emotions, we have no intersubjectively or objectively available criteria to avoid relativism. However, if we begin to define correctly felt, then we have simply moved the bump deeper under the rug.
Intuitionism?
Another view which somewhat awkwardly fits into the box of views where moral standing is "expressive" is intuitionism, though many intuitionists might deny this. Some intuitionist philosophers think we do not need to plant moral status in non-moral conceptual grounds. While I am sympathetic to this move (I do not think a non-moral premise can serve as a moral justification), I am deeply skeptical of the justificatory power of said "intuitions" or "seemings". If intuitions are a kind of state (physical or mental, whatever the account), then it is difficult to see how intuitions, qua state, could be justificatory. We at least deserve an account from the intuitionist of how this is so. I think this task is impossible as, like expressivism, it necessarily obliterates the distinction between "seems right" and "is right".
Descriptivism: a Platonic Dead End?
Turning now to the descriptivist route: the descriptivist will attempt to find some property (natural or platonic) that underlies all correct attributions of moral standing. Perhaps ants are not complex enough, dogs barely so, and humans just complex enough for moral standing? Complexity, qualia, neurological consciousness, having a 'global workspace', having certain behaviors, etc. It seems that whichever metaphysical/physical attribute we pick, we leave ourselves open to some kind of chauvinism or other. What if something can feel pain but isn't conscious or very complex? Can a rock have qualia? (Some panpsychists think so.) What about animals that respond to anesthesia but do not have any of the neurological markers of "consciousness"? Problems abound, for reasons I will spell out later in this note.
Ramsification and "Best Realizers"
Perhaps the better descriptivist strategy is to look for a higher-level property, defined functionally, which supervenes on these lower-level ones. What we are looking for in something that grounds moral standing is some property that can serve as the variable X in all the sentences (Ramsey sentences, loosely speaking) of the form:
"There is some attribute set X, having attributes (P,Q,R etc...), such that if a being has X then it has moral standing."
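Loosely formalized (a sketch only; "Has" and "MoralStanding" are placeholder predicates introduced here for illustration, not notation used elsewhere in this note), the schema reads:

$$\exists X \; \forall b \, \big( \mathrm{Has}(b, X) \rightarrow \mathrm{MoralStanding}(b) \big)$$

where X ranges over candidate attribute sets such as {P, Q, R, ...}, and the descriptivist's task is to find the best realizer to substitute for X.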
As I have already hinted, I will be unwilling to accept something fully non-normative in the place of X here. This rules out a handful of strategies that appeal to metaphysical or natural properties, but perhaps not a supervenience-based one. More on exactly how will come later in this note.
Importantly, too—and it almost goes without stating—this approach should rule out candidate Xs that have obviously absurd moral implications. Such examples would be
- X = ability to own property
- X = ability to speak
- X = having a brain
- X = creativity
- X = intelligence
and so on. I will not rehearse the obvious counterexamples to these realizers here. It will suffice to say that we want an X that involves the least bullet biting and handwaving.
Sentientism?
Ask anyone why they think dogs have moral standing or why frying ants with a magnifying glass is bad and you are likely to get a quick answer to the tune of
"Well they have feelings! They feel pain! Look at them!"
Trying to convince my angry neighbor, after I have stepped on his dog, that dogs do not have feelings would border on insanity, even setting aside the affective reactions we have to animals' yelps.
Perhaps "having feelings", good or bad, can be said to be the grounds for moral standing? This view is called sentientism. Indeed, it seems promising, but we need to be careful to avoid pitfalls here.
Sentientism seems to be on the right track. Sentientism claims that the having of moral status is simply the having of states that are "good/bad for" the given being. But what does that mean? Properly construed, this is not to be confused with "the ability to feel pain", as not all pain is bad; but suffering—in Derek Parfit's words, a "metahedonic state"—is bad by definition. We could continue to ask more "why" questions, but, as my strategy will later show, we need not continue this into an infinite regress.
Surely no one intent on being moral would deny something like the following:
"Causing a being suffering is prima facie wrong."
I think something like sentientism is correct on a skin-deep level, but many versions of it are radically confused as to what sentience is. What higher-level (functional) or lower-level properties are we looking for when we look for sentience? I will argue that any essentialist account of sentientism will not do, and that certain brands of functionalism will not do either. Pinning moral standing to sentience might seem to solve our problem, but it becomes less and less clear how this is supposed to work when we begin to think about edge cases (octopuses, insects, AI, etc.), how we know about our own case, and the cases of others.
Sentientism as Common Denominator
Further, going too far down the path of sentientism can lead us to a view that fails to recognize that there is a distinction between being wronged and being harmed. One may be wronged without being harmed and vice versa. Consider a scenario where you read my diary and discover that I am going to be put into harm's way soon, and therefore act to prevent this. Here I have been wronged (you read my diary without permission), but in fact I have been helped, not harmed. I believe utilitarianism, among its other problems, makes this mistake.
If sentientism is part of the right answer, it plays the role of a common denominator or precursor to moral standing, which can then instantiate a variety of reasons. It cannot be viewed as the total source of all moral reasons.
Descriptive & Expressive, Not Mutually Exclusive
It should be noted that I will not claim, nor do I think anyone should, that claims of moral standing are merely descriptive. They are, of course, also normative. To attribute moral status may involve describing, but it necessarily involves something more.
A Definition of Sentience?
How do I know a being (including myself) is sentient? I will offer what I think are two mistaken answers and then offer what I believe to be the correct one.
One method of attributing sentience might be to claim that it just is some set of natural properties, as hinted at above, picked out by the functional role of the concept of sentience and its best realizers in the furniture of the natural world. Jonathan Birch's "The Edge of Sentience" fascinatingly explores strange edge cases and philosophical puzzles which make it far from obvious what the correct total account of these natural properties ought to be. For example, some crustaceans and insects do not bear many typical markers of pain, yet when given anesthesia in certain limbs they respond remarkably differently, no longer avoiding noxious stimuli. Birch's book assumes sentience to be of great moral importance, but adheres to a broad practical definition and claims the spade is turned there. I think this is the right track. Sentience is clearly, if anything is, a moral common denominator, and practically speaking we could just stop there without grave moral error, but I want to go a bit deeper in building a philosophically respectable view of sentience.
Will Functionalism Save Sentience?
Non-functionalist, essentialist accounts of sentience seem to me a dead end. As soon as we declare some property of a system, like having a brain, to be the grounds of moral standing, we can simply imagine some functional system lacking that attribute which nevertheless convincingly matters morally.
In "Mad Pain and Martian Pain", David Lewis explores the apparent trouble with identity theories and functionalist accounts of normative concepts (I take the mental concepts Lewis explores to be just as normative as the moral-standing-related ones). Lewis asks us to imagine "pain" in two radically different ways:
- Mad Pain: a being has the brain state typically associated with pain (C-fibers firing, etc.) but behaves as if he were calmly thinking about mathematics—no writhing, screaming, avoidance behaviors, etc.
- Martian Pain: Martians with hydraulic systems instead of neurons, but they exhibit all the typical pain behaviors (writhing, avoidance, reports of distress, etc.).
Our intuitions are pulled in separate directions here. Clearly, if we are non-functionalist essentialists, we must deny that Martian Pain is pain. An absurd conclusion. However, if we are functionalists of a certain flat-footed kind, it seems we must deny that Mad Pain is pain as well. Lewis's solution is that—roughly—we species-relativize the function of the realizers of the concept of pain. In that case, since the Mad individual is having a brain state associated (for his kind) with pain, we are part of the way there, but Lewis argues that the state still does not fully qualify for its kind-relative pain-role. I think this view is just another kind of essentialism and is not going to work either.
Four Candidate Definitions of Sentience
Consider four brands of sentience-definition:
1. Sentience is just some set of natural properties (X, Y, Z ...): C-fibers firing, the CNS doing such-and-such, and so on.
2. Sentience is just some set of functional properties (P, Q, R ...).
3. Sentience is an inherently subjective raw feel (qualia).
4. Sentience is just some set of kind-relativized functional properties (A, B, C ...).
Definition 1 seems to rule out Martian Pain and rule in Mad Pain; definition 2 seems to rule out Mad Pain. As for 3, I will argue—with Birch—that it is simply an unhelpful dead end. I have many more issues with the notion of qualia, some of which will become clear here. Definition 4 certainly seems more promising, but I think it still goes wrong.
I think all of these views make the same mistake. They assume that Pain/Sentience concepts are going to map cleanly onto some worldly properties or thing(s), but this cannot be the case. Making that thing a supervenient functional-level thing, while more promising, ultimately commits us to a view that is deeply at odds with how our concepts work.
The Pragmatist move
In the pragmatist order of explanation, the use of terms comes first, and through that use, we get the theoretical conceptual content of those terms. Any theoretical content those terms purport to have must serve to explain use, or it cannot be semantically justified. We cannot just declare the meaning of "sentience" as "A, B, C functionally defined properties". The correctness of the application of our concepts is ours through and through. This means that looking to what "sentience really refers to" in the world, if it divorces the term from our practices, is a mistake. We might ask, in pursuit of this pragmatist end: Did we learn how to deploy the concept of sentience by internalizing some supervenient functional property or other?
In this practices-first turn, the pull of relativism can be felt. If the practices are "ours through and through," then could they just be anything? Is moral standing, left up to our socio-normative practices of applying "sentience concepts", whatever they may be? Towards the conclusion of this note, I will argue that these fears of relativism are unwarranted.
Wittgenstein and Pragmatism
When we look at how we learn whether things are sentient, we learn from behavior. That is not to claim that sentience just is behavior. The Wittgenstein of the "Philosophical Investigations" is onto something here (not behaviorism).
If I see someone writhing in pain with evident cause I do not think: all the same, his feelings are hidden from me.
Wittgenstein's point is not eliminativism about inner states, as he is often mistakenly interpreted as saying. Rather, the point is that my perception of this writhing person's pain stops nothing short of perceiving their pain.
When we say, and mean, that such-and-such is the case, we—and our meaning—do not stop anywhere short of the fact.
What Wittgenstein is claiming here—I think correctly—is that all there is to ground the use of the word "pain" is socio-normative practices around biological dispositions we have to certain stimuli, reactions to those responses in others, inferences to and from those states, and so on, all forming a complex network of ever-changing practices that constitute the use of the concept pain. This nominalism is motivated both by the pragmatist order of explanation (use determines content) and by the fact that this is simply how pain-talk works, functionally speaking.
I share Wittgenstein's pragmatist strategy here for assessing the conceptual content of the concept {pain}. What are the implications of this move for our current task of understanding moral standing?
Further Motivation for Pragmatism
One important motivation of this kind of nominalist move is that we cannot pin sentience to some physical/non-physical property because of the "gerrymandering problem". Whatever functional property we pick to pin the meaning of sentience to, it is likely to seem hopelessly gerrymandered to fit our use of the term anyway. This indicates that the order of operations here is backwards.
Imagine attempting to find a functional kind to underpin "writing utensil", and the same problem arises. The set of attributes we would have to pick out would have such wildly unrelated parts that it could not tenably be put under one umbrella. Even if we did, the norms around the term's use were clearly in the driver's seat for that selection of functional properties anyway, and so this methodology seems backwards. The question is not what sentience really is. That is a mistake. Instead, the question is how the concept of sentience functions and what we can conclude from that.
It might be objected that our concept of sentience could be wrong from the get-go, in which case prying at it from the use-end can never get us towards truth. This objection is deeply confused, though. The meanings of our terms are not somehow accessible sideways-on from our practices. The very meaning of correctly applying a concept necessarily presupposes first understanding its correct application as it is learned.
Definitions of Sentience, Tenable and Not
I will use {sentience} to indicate a socio-normatively picked-out concept and ⟨sentience⟩ to indicate a metaphysically picked-out one, whether natural, natural-functional, or platonic.
{sentience} shall always mean: defined extrinsically by its role in the functions of inference, action, and dispositions that a being has to respond to its environment.
⟨sentience⟩ shall always mean: defined extrinsically (extensionally: "the thing that underpins moral status") or intrinsically ("I feel this").
I will use Birch's definition of "sentience" here, which is as follows:
A sentient being [...] is a system with the capacity to have valenced experiences, such as experiences of pain and pleasure.
I will soon present my argument that {sentience} is the best candidate for what descriptive content we might attribute to claims of moral status.
A Concept of Dependent Origin, Not an Intrinsically Known Thing
Wittgenstein is painfully unclear at times about what he is driving at in his talk about pains and sensations. I believe Wilfrid Sellars, in "Empiricism and the Philosophy of Mind", to have contemporaneously made the same point in much clearer terms. Sellars used the term "methodological behaviorism" to describe how we might come to know the content of talk about inner states. What Sellars and Wittgenstein are after is not the rejection of inner states, but rather an extrinsic and socio-normatively articulated understanding of them. For Sellars, this is the rejection of the "Myth of the Given"; for Wittgenstein, it is the "beetle in the box" and the denial of a private language: the "S" sign which is supposed to privately stand for my sensation.
What does this move amount to? To borrow a transparently thin slice of Buddhist philosophy (many Buddhist philosophers also share Sellars' and Wittgenstein's insight), we could say that the move is to mark the content of the mental as being of "dependent-origin" on inherently public factors. It is defined only in reference to all of the things that give it its place in the network of behaviors, words, biological dispositions of us as creatures of natural selection, actions, and so forth. That does not entail that feelings are themselves public, just that these extrinsic factors mark the boundary of a shape, only defined by such boundaries.
When we see inner states, pains, sensations, etc., in this way, we can no longer see sentience as something mysterious. We can no longer see it as something that might be hiding behind an appearance. Our assessments of sentience stop nothing short of {sentience}. That is all there is to the concept on the correct view, nothing deeper, nothing missing.
A Non-Cartesian Understanding
Where does this leave our earlier move to the partial-descriptivist strategy? If sentience is to be the property which underlies moral standing, then it cannot be the kind of sentience of the Cartesian sort, the kind that only I can know that I have, and the kind that I could always doubt the presence of in others without being them.
This means that sentience is attributed on the basis of its standing in an ever-changing and public inferential network of dispositions, actions, implications, etc. We may doubt a certain creature has real suffering, but not because of the creeping possibility that their mind may not be "real like we know ours is", in the Cartesian sense. Rather, we may doubt that ants really feel pain because, upon further inspection, their behaviors may not fit as tightly into the shape carved out by our practices around the concept of suffering. For instance, they may be only "writhing to signal" and we see that their actual circuitry is not that of pain-having creatures. Or perhaps, as in Birch's experiments, they may not respond to an anesthetic and so on.
On the Cartesian view, I should not stomp on ants, neighbor's dogs, or neighbors because I have at most an unprovable suspicion that "they have what I have".
Does anyone really suspend judgment on account of this, though?
Doing/Saying
I am claiming that in moral standing claims like:
"It is wrong to stomp on ants."
or
"Animals have rights."
what we are doing is expressing an emotional and normative (yet truth-apt) stance towards wronging animals. Further—departing from the expressivist even more—we are saying (describing) that animals exhibit certain behavioral dispositions that make their behavior fit clearly (or not) with {sentience}, that is, with what we know to be the correct application of our expanding concept of sentience.
Immediately, the question arises as to what to do in fuzzy cases. I argue, along with Birch, that we should adopt a "Principle of Precaution", and eventually I will argue for a kind of gradualism, seeing moral standing as a gradual spectrum (the weight of a being's moral standing being given by the weight of the moral reasons that being is capable of providing), not a discrete scale. This Principle of Precaution is not precaution in the Cartesian "we may never know that they have ⟨sentience⟩" sense, but rather in the sense that further discoveries may change the status of a creature's behavior in relation to {sentience}, as with crustaceans and anesthesia.
Why Should Moral Standing Claims Compel Us?
There is a further difficulty here, though, and it concerns the relation between whatever content we find in the use of a concept that underpins moral standing and the actual reasons we have to act. Does seeing a {sentient} creature give us reason not to harm it? If we are going to claim that moral standing comes from some property ⟨sentience⟩, then we have a mysterious question to ask about ⟨sentience⟩. How does it get a normative grip on us? What is it about ⟨sentience⟩ that makes us normatively compelled to act in some way?
The Humean faces this question about desires. If desires ultimately explain all "oughts" as "ought to if desire..." then what compels me to follow my desires? How do desires have this normative force? The moral realist faces the same question about moral properties (what J.L. Mackie called their "queerness"). What could it be about these things, brain states, platonic moral properties, functional moral properties, etc, that makes them have force over us?
These views make the mistake of trying to get something that is not normative in the first place to do normative work, and then being puzzled at how it can or cannot do the job. The move from ⟨sentience⟩ to {sentience} evaporates these worries. Contained already in the ability to use {sentience} (as it is socio-normatively articulated—in part) is the necessary presupposition of a normative commitment. This commitment is necessary for rational discourse and is binding on us through our recognition of each other as givers and takers of reasons. In order to understand the very concept of {sentience} properly, I will argue, I must understand it as providing me with reason to act.
The business of morality, properly understood, runs on the currency of reasons.
What About Non-Reasoning Beings Then?
Very well for explaining the mystery of normative properties away, but doesn't this leave us with a worse problem? How can the neighbor's dog deal in reasons if it cannot speak? We run dangerously close to making the mistake Kant made when he claimed that only reasoners have moral standing.
Clearly, any correct account of moral standing must not include only rational agents. It must also include animals, comatose people, animals if no humans ever existed, perhaps some AI in the future, and so on.
Views that draw the line at reasoners, or functional property-havers, which end up being gerrymandered to a small class, typically make a move like
Real moral-standing-havers normatively demand moral treatment; the rest is merely 'virtuous' or 'indicative of good character' to treat well
The problem with this kind of view from the pragmatist standpoint is that it is either untrue or involves morally unacceptable bullet biting. This claim
If good character no longer matters or virtuousness is unimportant, then I shall torture my dog
would have to be a permissible inference on this view. Suppose that my dog and I are the last living beings on Earth. Then is it ok? If not, then was the view really an account of virtue all along? Virtuousness is obviously more complex than this, but it is hard to see how it could be a kind of sidecar to the Harley-Davidson of morality without itself being a full-blown moral normative claim.
How Do Non-Reasoning Beings Have Moral Standing?
If reasons are the currency of morality, though, where does this leave non-reasoning animals? Do we just throw them out as Kant did? No.
Consider Rorty's view in "Philosophy and the Mirror of Nature", which makes something like the "reasoners only" move, but which I believe crosses the line into absurdity.
moral prohibitions against hurting babies and the better looking sorts of animals are not “ontologically grounded” in their possession of feeling. It is, if anything, the other way around. The moral prohibitions are expressions of a sense of community based on the imagined possibility of conversation.
or
Pigs rate much higher than koalas on intelligence tests, but pigs don't writhe in quite the right humanoid way, and the pig's face is the wrong shape for the facial expressions which go with ordinary conversation. So we send pigs to slaughter with equanimity, but form societies for the protection of koalas. This is not "irrational," any more than it is irrational to extend or deny civil rights to the moronic (or fetuses, or aboriginal tribes, or Martians).
Rorty claims that a being's moral standing is predicated on its membership in an actual or potential linguistic community. While I agree with Rorty in the pragmatist sense (not looking for ⟨sentience⟩, but rather {sentience}), I believe Rorty makes some awful mistakes here.
It is not the alethic modal possibility of the pig being able to speak or reason that grants it moral status. If this were the case, then beings without the ability to speak or reason in principle could never have moral status. Rather, the pig has moral standing because it is {sentient} and we know {sentience} to have moral status. Further, if modal properties are the source of moral standing, then we end up in the conceptual rat's nest of deciding how and why would-be-speaking pigs have different moral standing from the would-be-speaking furniture of children's cartoons. If a chair could speak, would it ask you not to sit on it? An absurd question, surely.
Pigs are likely not {sentient} in exactly the same way that we are, but they have states similar enough to the ones we have, the ones that enable us (as reasoners) to give the moral reasons that we do and to exist in an ongoing network of them. We could not consistently hold that our reason-giving has the value we know it to have while the dispositions involved in {sentience}, {feelings}, {pain}, etc., that make this moral reason-giving possible have no moral status in the absence of reason-giving.
This is all not to mention the kind of terrible relativism that Rorty's view is infamous for. More on that later.
Non-Reasoning Beings Provide Reasons Without Giving Them
We might call such beings as pigs providers of reasons, rather than "givers" of reasons. These beings are the ones that have dispositions, behaviors, and abilities that we regard as granting them recognition as members of the moral community. We may be right or wrong about this, independent of what any of us thinks. On my view, there is a correct account of which non-reasoning beings do and do not have these statuses. These beings lack reason themselves, but do not need it for moral status any more than a forever comatose person does.
Non-reasoning beings (animals, comatose humans) have genuine moral standing not because they can give reasons themselves, but because they have the underlying capacities that enable reason-giving—they are "providers" of reasons through concepts which we already understand as being publicly articulated, norm-laden, and thus morally compelling when understood. I will argue that these capacities, when examined more deeply, turn out to be best described as {sentience}, that is, best described as in accord with something like our use of the concept of sentience.
Notice that this is not just a claim based in our similarity to animals, but in a kind of deeper conceptual affinity we have with other beings that fall under a certain description. Against similarity-based approaches, we could level the objection that if human beings had been different, then we would have reached different conclusions about which beings do and do not have moral status. Clearly, that is a kind of unacceptable relativism.
What Bears Value?
A comatose person has moral standing not because they can give reasons themselves, or because they can be imagined to give reasons more conceivably than chairs, but because they have the underlying capacities that enable reason-giving of a certain morally relevant kind, paradigmatically {sentience}. We may come to know this via comparisons between ourselves and other creatures, but it is not based in that comparison, as we will soon see.
Dogs can engage in a kind of proto-conceptual "speech" with us through body language that leaves an emotional mark on us. Very well for its emotional impact, but what moral content does the dog's "pleading" or whining have? I am arguing that these actions do have moral significance, just that the moral content that a dog's body language has is sense-dependent (not reference-dependent) on our discursive activities (which the dog cannot engage in).
It would be bad to hit the dog, not because of the Disney-esque possibility of it speaking, contra Rorty, but because if I acknowledge that the necessary antecedent—{sentience}—to me (or one of us reason-givers) giving full-fledged reasons for not hurting me is shared by the dog, then I cannot consistently and rationally say it is not wrong to hit it. Imagining the dog speaking is only one shaky way of making this evident.
{Statuses} Have Value, Not Just the Ability to Claim {Statuses}
The statuses that our moral conceptual activity takes root in, like {sentience}, have value, even if that value comes second in the order of knowing to the moral conceptual activity that those statuses make possible. Our rational recognition of our ability to value at all—in a morally relevant sense—commits us to valuing the very same capacity to value in other beings (not only the capacity for reason itself, as Kant thought). {Sentience} has moral value, and this commitment is implicit in a vast network of our claims and actions, all pointing back to our ability to value at all as {sentient} beings, havers of valenced states which are perceived as such.
Valenced states have value, not because of how they are expressed, that they are spoken or anything else in the natural world, but because they conceptually count as valenced in a network of moral claims.
Rorty's mistake is that of identifying the enabling factor with the sense-dependent source. This is the esse est percipi of language and moral reasons. I believe we can avoid Rorty's mistake here while also avoiding the essentialist mistake of assuming one natural property or set of natural properties must "ontologically ground" moral status.
The Thermostat Problem
In saying that the capacity to have valenced states—or to be a valuer—grants moral status, we have to be somewhat careful not to be too permissive. Is a thermostat a valuer just because it has the "end" (a would-be valenced state) of achieving equilibrium between its setting and the current room temperature? A thermostat does not know it is warm, because it cannot provide reasons or justification. It also obviously cannot give us moral reasons not to interfere with it, but the same is true of a comatose person or a whale. Why are whales and comatose people valuers when the thermostat is not?
I believe the answer is that whales and comatose people can be known to have valenced states which are morally relevant, but these can only be known as such from within the space of reasons. We cannot step outside of our moral conceptual apparatus to check which natural concepts have moral import, as Lewis seems to want to do. Having "ends", having {sentience}, and having valenced states are all attributions that must measure up against the normative moral concepts we have now, not some other set of concepts detached from our practices and therefore detached from any practical meaning.
Whales and comatose people have moral standing because we can make a stronger claim to their having {sentience}, the ability to perceive states which have the conceptual content of valenced states, based on our current understanding of morality and biology. Thermostats do not have moral standing because they have no such states. We know this because the states of thermostats are only at home in a very limited and well-known network of conceptual implications, none of which have moral weight.
Korsgaard's "Fellow Creatures"
I am making an argument with Kantian roots along the lines of the one Christine Korsgaard makes in her fantastic book "Fellow Creatures", but with some carefully drawn battlelines. I will attempt to sketch those battlelines here.
In "Fellow Creatures" Korsgaard gives a Kantian—and Aristotelian—account of moral standing. Her account focuses on a definition of animals as beings for whom their own well-functioning is their own "final good". A "final good" for Korsgaard is a good that is good-for a given creature and is therefore "good absolutely". Getting some of Korsgaard's technical jargon on the drawing board:
- Absolutely Good: Good for all possible beings
- Functional-Good: Good-for X in the functional sense (e.g. "A good knife is sharp")
- Final Good: Good-for a creature, not just in the functional sense (e.g. "Oxygen is good for humans")
- Animals: Beings for whom their functional good just is their final good. (Hereafter "animals", in this technical sense.)
Korsgaard is correct, I believe, to discard the pursuit of a God's-Eye account of the good. Instead, she claims that all value is "value-for". The ultimate thing of value, then, cannot be anything which is not dependent on some valuING. She concludes this line of reasoning with a definition of "absolutely good" as follows:
The Absolute Goodness of Goodness-For: It is absolutely good, good-for us all, that every sentient creature get the things that are good-for her, and avoid all the things that are bad-for her.
Korsgaard's technical definition of animals, as beings for whom their own well-functioning is their own "final good", captures well, I believe, the sense of {sentience} that I want to push here as the best grounds for moral standing. I have no issues so far with how Korsgaard lays out the problem of moral standing and begins to suggest a solution.
My issue comes in the meat of the argument, which I will attempt to summarize as follows:
- We cannot pursue our ends, qua sentient rational beings, without acknowledging them to be our final good and therefore absolutely good.
- (We take ourselves to have good reasons for these ends, our final good.)
- An animal (a being for whom their functional good just is their final good) has absolute value because its existence is absolutely good.
- (The self-preservation of an animal (in this technical sense) is its functional good, and definitionally its final good and is therefore absolutely good.)
- Animals take their ends to be absolutely good.
- We cannot take ourselves, qua sentient rational beings, to be bearers of fundamental value unless we also take other sentient beings to be bearers of absolute value.
- We cannot decide to pursue our ends unless we take all sentient beings to be bearers of absolute value.
I think the thrust of Korsgaard's argument is correct, if it is read as a parity argument like
For the same deep reasons that we respect our own claims to moral standing, we should respect animals' claims to moral standing
But there is a snag that needs working out here. Animals do not "make claims" to moral standing. They do not "take themselves" (Korsgaard's terms) to have absolutely good ends. If they did, then so would a thermostat, and that is a consequence which is surely indicative of a mistake upstream. What is the mistake? The mistake is attributing the content of the animals' {ends} to their taking them to be good. Animals do no such thing, as they are not users of concepts. However, it does not follow that their states have zero conceptual content. I have been arguing that their valenced states—the ones that will grant them moral standing—do have content, but that that content is, for its sense, parasitic on our moral conceptual machinery.
Korsgaard is right to point out that we must take our ends to have absolute value, but she makes the wrong parity move as a result. Her mistake is in attributing to animals an ability to "take" themselves as having ends which are their functional goods. If no such "taking" takes place, then we need another story as to how animals' states get their "content". If Korsgaard means {ends} and not ⟨ends⟩, then I have no issue. This does not seem to be the case, based on her core argument:
[...] animals necessarily take themselves to be ends in themselves in this sense: that is simply animal nature, since an animal just is a being that takes its own functional good as the end of action. Each animal does this at the level of cognition or intentionality of which she is capable.
⟨Ends⟩ are not a thing to be found metaphysically in the unmediated world—even in the strivings and states of creatures—and, for the reasons detailed above, any attempt to locate them there is doomed to fail. I believe that once we give the correct account of how the {ends} of animals get their conceptual content, we can agree with most, if not all, of Korsgaard's argument.
Again, valenced states have value, not because of how they are expressed, that they are spoken, or anything else in the natural world, but because they conceptually count as valenced in a network of moral claims.
How the account on offer differs from Korsgaard's with respect to content
On the Korsgaardian view, rationality commits us to valuing our own rationality (via Kant's Formula of Humanity) and it commits us to valuing sentience (via Korsgaard's argument above). Animal being has the content of valuing simply because of what animals do, qua animal (in Korsgaard's technical sense of the term).
On the view I am proposing, animal states have the content of valuing because we can correctly apply (via a parity move) moral concepts to them which we must apply to our own similar states. On this view, rationality commits us to valuing reason (via something like Kant's Formula of Humanity, though I would pitch it differently), while empirical discoveries and conceptual truths lead us to see other, non-rational beings as providing us with real reasons. We can follow Korsgaard's argument above and see that we must value our own ends; we then turn to explaining animal behavior, and we ascribe—correctly—the very same capability (namely {sentience}).
This is a theoretical ascription, but it is no less real for that. Pluto was no less real for having been a theoretical postulation to explain the movement of other bodies. "Theoretical" is not a second-class kind of thing. It is rather a way of knowing things. We therefore grant animals (non-rational beings to whom moral concepts correctly apply) a theoretical—but no less real—status of having reason-providing capacities: {sentience}. Being realists about this capacity, we can now conclude the story about how these states get their content.
We can say the following about the morally relevant content of non-rational beings:
- This content grants them moral standing via parity with content of ours which we are rationally bound to take as valuable (we know it conceptually as such).
- This content is unintelligible without reference to our moral conceptual toolkit, which is publicly understood, normatively binding, and not part of the furniture of the natural world.
- This content is also unintelligible without broader reference to a network of functional states and "functional goods" (conceptually understood).
- This moral standing is theoretically asserted but no less real.
- Acknowledgement of this moral standing is required for rational consistency with the very concept of ourselves, correctly understood.
This allows us to secure Korsgaard's important conclusion that non-rational beings are valuable "for their own sakes", in her words. A typical pitfall of Kantian accounts of animals' moral standing is the kind of "secondary status" animals end up with. The account offered here—clarified in contrast with Korsgaard's—secures her conclusion while keeping a rationalist distinction between things which actually reason ("taking things to be X", properly speaking) and things which do not. Non-rational beings of the right sort can matter just as much morally as we do, but the knowledge that they matter is only attainable within the space of reasons.
Objections to Korsgaard's account and mine
It was fascinating and flummoxing to read "Fellow Creatures" while nailing down my own positions on these knotty issues. I am no doubt borrowing from Korsgaard's argument to enhance the clarity of my own. In doing so, I would be remiss if I did not address some objections to her view that apply equally to my own. Some of these objections are deep objections to any Kantian account of morality, which I will not have space to tackle here. I will address only objections that grant a central premise of Kant's moral view: loosely, that "value comes from reason". I am interested less in saving Korsgaard's arguments exactly as she states them than in how the pragmatist moves I have added here can save them.
Must we rationally act as if our ends are our final good?
In the above reconstruction, Korsgaard claims
We cannot pursue our ends, qua sentient rational beings, without acknowledging them to be our final good and therefore absolutely good.
Critics of Korsgaard's approach here point out that this might not be so. In order for the above to not be the case it would have to be possible to pursue an end (not just a mere inclination, as Korsgaard is careful to point out) without really acknowledging it to be our final good. For example, Birch 2020 says:
I am doubtful of the “guise of the good” assumption: the idea that we must take our ends to be good absolutely to act rationally.
It seems rather uncontroversial that what I take to be my ends, I must see as my final good in order to act as a rational agent. This seems baked into any respectable notion of rationality. I cannot rationally say:
I shall save my friend from falling to his death, but it would be bad for me if I were to succeed in saving him.
Of course, it might end up later that saving my friend was indeed bad for me, but I cannot coherently assume that during or prior to the action of extending my hand to save him. So one might take the route of objecting to Korsgaard's formulation of "good absolutely" as
The Absolute Goodness of Goodness-For: It is absolutely good, good-for us all, that every sentient creature get the things that are good-for her, and avoid all the things that are bad-for her.
Parfit, for example, rejects this move in "On What Matters", moving towards a staunchly objectivist notion of the good. I think Korsgaard's view of the good occupies a Goldilocks zone between too relativistic a subjectivism and a "God's eye view" objectivism that seems to divorce objectivity from the job it was there to accomplish in the first place. So, unless doubters of Korsgaard's claim that
We cannot pursue our ends, qua sentient rational beings, without acknowledging them to be our final good and therefore absolutely good.
can show how one could coherently see one's rational ends as not their final good, I remain unconvinced by this objection. The objections that arise here often seem to play on the ambiguity between a rational choice and a mere inclination, as Birch does here:
If I eat a cake, I will agonise for a while about whether I can regard the eating of the cake as good—but I’d be no less rational, it seems to me, if I regarded eating the cake as bad and ate it anyway, deciding on this occasion to do something bad.
Must we value as good ANY ends which are not our ends qua rational being?
Korsgaard's argument, and mine via a less direct route, both try to show that in order to act rationally at all, we must assume some set of ends to be good. These ends include those we have qua rational beings (saving my friend) and those we have qua sentient being. Again, we must value our own ability to value. Birch takes issue with this, granting its Kantian premises, and argues that in order to show that there is something we value qua sentient being:
... you’d have to find at least one clear example of an end that we must value even though it is not good for us qua rational beings
Korsgaard foresees this objection:
... we also value ourselves as animate beings. This becomes especially clear when we reflect on the fact that many of the things that we take to be good-for us are not good for us in our capacity as autonomous rational beings. Food, sex, comfort, freedom from pain and fear, are all things that are good for us insofar as we are animals.
Birch is unsatisfied with this move, claiming that all these are valued by us just qua rational beings because of their rationality-sustaining nature. Korsgaard's argument, and mine, must show that rationality presupposes the valuing of some things that are good for us qua sentient beings, not just qua rational beings. Birch's "divide-and-rule strategy" here is to demand an example of an end that is good for us qua sentient beings but not qua rational beings.
The problem with the line that this objection takes is that sentience and rationality are not cleanly divisible like this. This is true both empirically and conceptually. Empirically speaking, Birch's objection here is akin to worrying about how we could value a flame's light without valuing thereby the fuel that sustains it. Flames simply need fuel. Conceptually speaking, if we have granted Korsgaard's definition of sentience as something like the ability to experience valenced states (which Birch countenances in his own work), then it seems that ascribing rationality correctly entails ascribing sentience of some kind, however watered down. Kant's definition of rationality is the autonomous capacity to choose our ends. How could it be that we can ascribe this ability, but fail to ascribe anything like "the ability to experience valenced states"?
This entailment may still be unclear, so I will spell it out a bit further. Clearly, a thermostat does not autonomously choose its end in turning on, or at least it does so only in a sense of "choose" which is critically bereft of something important that we mean when we make the same ascription to rational beings. What is the difference? Again, the giving of reasons is the difference. The thermostat does not know the temperature is correct; it merely has a reliable differential responsive disposition to the readings of its temperature sensor. To get to the right sense of "choose", I suggest we need to get on the Sellarsian track and say that rationality is inherently normative and inherently social.
To say that man is a rational animal, is to say that man is a creature not of habits, but of rules.
For Sellars, rationality is mastery of concepts in an inferential network (both practical and theoretical). This notion of rationality may seem like a red herring here, but I argue that it is the one stricter Kantians ought to have. These rules and the network of perceptions, actions, and inferences they necessarily exist in cannot be learned without having certain reliable differential responsive dispositions—to keep using the Brandomian terminology. I am arguing that:
- the autonomous capacity to choose our ends ⇒ conceptual mastery ⇒ reliable differential responsive dispositions
If we fail to see these entailments, we have either interpreted "choose" mistakenly as mere "responding to" (à la the thermostat), or we have defined rationality in a way that is far too metaphysically detached from how we pragmatically ascribe it. With these entailments properly recognized, we see that Birch's demand for a sentience-only end commits a category error in treating sentience and rationality as cleanly separable in both directions. There are certainly sentient-but-not-rational animals, but it is not intelligible that there be rational-but-not-sentient ones. This failure to see the conceptual overlap between {sentience} and rationality is like demanding to see how a computer program runs in "pure code" (not run through any actual instantiation of a Turing machine).
Do others' final goods (or goods in the absolute-good sense) give us reasons?
Another common criticism of Korsgaard's view, and therefore of mine, is as follows. Even if we grant the argument that rationality entails valuing one's own final goods as good-absolutely, it does not follow that this "absolute good" (the good of any other valuing thing) provides me with any reason to act. As Peter Godfrey-Smith puts it:
We can make all these concessions – only considering goals that aren’t selfish and disruptive, only looking for minimal respect from others – and there’s still no leverage being gained here, of the kind that should get other people on board with your projects, or you on board with theirs.
Now this can be interpreted in one of two ways: simply as egoism, or as a denial that the recognition of something's "absolute goodness" gives me a reason to act. I doubt that Korsgaard's argument being open to an egoist objection is what Godfrey-Smith is driving at here, but I will address both. Here is Godfrey-Smith's main counter-argument, first in my distillation and then in his own words:
Some trouble comes from the word ‘absolute’, which Korsgaard uses when talking about the important kind of goodness. She does not mean absolute in a lofty sense. Something is absolutely good when it can be recognised as good by everyone. But there are two ways something can be recognised as good by everyone. It might be recognised, by everyone, as good for anyone who is in shoes like mine. That does not mean it is recognised as good in a further sense where it becomes part of a shared good, a good that everyone has reason to pursue.
Godfrey-Smith's point is that there are two different senses of "recognized as good by everyone":
- Contextual recognition: "Good for anyone in circumstances like yours"
- Universal obligation: "Good in a way that creates shared reasons for everyone"
Godfrey-Smith is correct to point out that Korsgaard's account does not close the gap between the "absolute good" and obligations on us, but I think we can help her across the finish line.
I think the kind of view that would leave us baffled as to how the recognizable good-for-others gives us reasons to act is actually not morality at all. I argue that any conception of the moral that does not come with strong impartiality is broken-backed from the get-go. As for skepticism as to the ability of the "absolute good" to give me reasons to act, I believe that it stems from the same egoist mistake. For views of morality that are inherently communal—as I think the correct view is—this issue does not arise. The correct view of morality and rationality sees them as "we" endeavors. As stated above, I also think it is a mistake to think that one can cleanly cleave practical reasoning from theoretical reasoning. The necessary impartiality of the moral and the breakdown of the distinction between the practical (moral) and the theoretical (epistemic) together lead me to agree with Peirce in this otherwise cryptic remark:
He who would not sacrifice his own soul to save the whole world, is illogical in all his inferences, collectively
I take the view of Peirce and Spinoza that ethics and logic are both built from the same normative material.
Korsgaard has other arguments elsewhere that are supposed to seal this gap by closing off the possibility of "private reasons"; Godfrey-Smith sees these as failing. What these arguments suggest is that the 'gap' Godfrey-Smith identifies between recognition and obligation dissolves when we realize that rational agency itself cannot be conceived in purely individual terms.
Further disagreements with Korsgaard's account
Korsgaard claims that comparing final goods is actually incoherent. If all good is "good-for", then in some deep way it cannot make sense to compare our goods against one another's. I think this approach is correct in the way that it brushes aside odd questions like
Would it be better for you to be a pig?
but I think the inability to make tradeoffs that it threatens, and its denial of any kind of gradualism, surely lead to absurd consequences. I think these consequences stem from two key mistakes in Korsgaard's arguments. The first is regarding animals as "taking their ends to have value," and the second is not viewing the primary unit of morality as a "we". Regarding the first, as stated above, animals do not have the conceptual machinery (at least so far as we are aware) to count as "taking" anything; we must ascribe that status to them in a theoretical—but no less real—way. This means, however, that their reasons are not hidden from us: they are available to us through how well they fit the moral concepts we already grasp, the only measuring stick we can hold up to them. Regarding the second mistake, when the proper unit of morality is seen as a "we" unit—a conclusion I will not argue for at length here—then comparisons are perfectly intelligible. This move is also crucial for making sense of moral sacrifice in any Kantian picture. We don't try to measure fox-goods against human-goods and throw our hands up, unable to compare. Rather, we apply the moral concepts we know with the best epistemic responsibility and impartiality that we can, and we make a choice. This choice may turn out—in the final account—to be wrong. The same is true of any claim or action.
Korsgaard is committed to a non-gradualism because she holds a view that disallows comparisons. This stems from a kind of subjectivism and non-communal view of morality that I think leaves us with untenable notions of practical reasoning where moral sacrifice across species is impossible to justify. This seems wrong and I believe this aspect of the account should be rejected.
Gradualism Instead
My view is more gradualist. Reasons vary in moral weight and can be evaluated against each other. This is evident in many good practical inferences like
I would kill 10,000 mice to save my mother
Practical reasons vary based on many factors. Setting epistemic certainty (expected value) aside and focusing only on the specifically moral variance, we can see that even moral reasons alone vary in weight. It is worse to steal a book than not to return a borrowed book for two years, even if doing so causes the book's owner the same amount of harm. It is worse to kill a dolphin than an ant. Why do these weights vary? We might say theoretical reasons vary in weight by their certainty of achieving truth, or "truth-aptness". Practical reasons similarly vary in their certainty of achieving good, or "good-aptness", but I won't dig much further into that issue here. It will suffice to say that we must account for the varying moral weights of different states of different beings. I think we can do this simply by the moral severity of the concepts that apply—as in itches vs agonies. Different beings will have different moral standings based on the richness of the moral concepts that apply to them and, therefore, the moral reasons they provide us with to act. The threat of chauvinism can be felt here, and it will not go unaddressed.
Sentience and Moral Standing
I can now complete the—admittedly loose—argument that something like Korsgaard's notion of sentience is the best realizer for what descriptive content we might attribute to claims of moral standing. If the pragmatist order of explanation commits us to "reasons first" thinking, and there must be some descriptive content to ascriptions of moral standing, then I think sentience is that best realizer, following Korsgaard's arguments for why sentient beings have a constitutive good that we are rationally bound to recognize. I have offered an alternative explanation to Korsgaard's of how we recognize the moral standing of non-reasoning animals, and I have disagreed with her rejection of the varying moral weights of reasons provided by different beings, but I have largely agreed with the main thrust of her arguments.
Statuses are Defined Socio-normatively
Let me be very clear. The statuses that have value can only be ones that we have socio-normative justification, from inside the space of reasons, to claim. I am rejecting any intrinsic "what-it's-like-ness" as the bearer of value, and instead claiming that sentience, socio-normatively understood, is the minimal and most important bearer of value in assessing moral status. Any attempt to find some essential property or set of properties that grounds moral standing metaphysically is doomed to be hopelessly gerrymandered to fit the concept or to make arbitrary exclusions. The pragmatist strategy here has been to embrace this normative dimension. There may be a fear that this move tips us towards relativism or a kind of idealism. I will later show that these worries are misplaced.
The Problems with Functionalism
Is this view of sentience not just functionalism? If so, is it susceptible to the kind of "Mad Pain" worries of David Lewis, or the kind of "anaesthesia by genocide" that Eric Schwitzgebel points out as a flaw in functionalist accounts? The latter is the idea that if the "sentience" status of my states is determined by how normal beings of my class function, then eliminating the rest of my class with the snap of God's fingers would render my state not one of sentience at all. These are devastating objections to purely functional accounts of statuses like sentience.
Both of these objections are allowed in the door by a kind of "functionalist essentialism": the assumption that some mental state is a metaphysical (though physical) kind, rather than the product of the correct application of an evolving set of complex and interconnected concepts. I am saying sentience is the "best realizer" of moral standing, but—when we turn the crank one more time and ask what sentience is—I part with identity theorists and functionalist accounts that try to pin sentience to one natural set of functions or things.
My approach may raise concerns about objectivity. If I have traded metaphysical facts for socio-normative statuses, have I just said "sentience is whatever we'd like to apply the term to" and cast out all hope of a respectable notion of correctness? I think not, and I will demonstrate why shortly.
If Standing is Derivative, Is It Less Real? Is This a Kind of Idealism?
If moral standing depends on the normative application of our concepts, then are we trapped in a kind of moral standing idealism? Moral standing just is the concept we have of it? A hint towards the correct account is available in the idea that it must apply to "animals or the comatose if no reasoning humans ever existed". The later C. S. Peirce claims that
"A diamond at the bottom of the ocean, never touched, is hard."
is an inherently modal or subjunctive claim. Sellars, much later, also points out that ordinary empirical concepts are, in fact, unintelligible without this modal component. For Sellars, the difference between describing and "merely labeling" is this modal component. To probe the strength of the Peircian claim, let's consider the opposite.
A diamond is only hard if investigated to be hard
This claim is clearly false. So what about the following?
It was bad that {sentient} beings suffered before humans were around to talk about it.
Surely diamonds were hard, and {sentient} beings had moral standing, before humans were around to talk about {hard} and {sentient}.
Or as Abraham Lincoln is said to have put it:
How many legs does a dog have if you call the tail a leg?
Four. Calling a tail a leg doesn’t make it a leg.
How can it be that the account is correct without committing ourselves to the absurdities of claiming these properties were dependent on us for their reality? The key move is to distinguish between their sense-dependence and their reference-dependence, in Frege-speak.
The hardness of diamonds and the moral standing of pre-human animals are sense-dependent on us for their intelligibility, but they are not reference-dependent on us or any of our social or linguistic practices.
This derivative status is no less real for depending on us for its sense. When a pre-human sentient animal killed another, neither was culpable (neither being a reasoner), but it was a bad outcome that some such animal suffered. This suffering, while parasitic on our moral practices of reason-giving for its intelligibility as a moral status, was no less real a moral status than a diamond's hardness was before any human lived to investigate it.
Reduction
I claim that the current view can give us a fully naturalist account of moral standing, but another difficulty stands in our way before I can confidently make that claim. I have claimed that sentience is of moral value and have shown why it is; however, I have left it unclear in what sense sentience is a natural property, if it is one at all.
I have rejected any essentialist view of sentience, whether natural or in some other platonic sense. I am not making an identity claim between sentience and {XYZ} natural properties. How can that be so if I claim there is nothing over and above the natural involved?
The key to this apparent contradiction is a distinction between naturalized in the reductive sense and naturalized in the sense I will call "undergirded by" natural phenomena. The distinction is this:
- Natural-Reductive: sentience supervenes on natural phenomena, and we can tell an exclusively natural story about sentience
- Natural-Undergirded: sentience supervenes on natural phenomena, and we cannot (in principle) tell an exclusively natural story about sentience
As per the pragmatist's practice-based order of explanation detailed above, committing to the Natural-Reductive account would be both untrue to our lives and semantically untenable. Notice that the Natural-Undergirded account does not exclude the possibility of telling a non-exclusively natural story about sentience.
Consider the following. Suppose we want to "reduce" a dragonfly's field of vision to pixels. We want to tell a pixels-only story about dragonfly vision. But this is impossible. In order to cast the pixels on the inside of our now-black IMAX theatre, we would need to draw on the structure of those eyes in order to map the pixels to where they ought to go. Similarly, with our practices of using {pain}, {sentience}, {hammer}, etc., we cannot tell an exclusively physical-particle story about hammers. There is no purely physical property of hammers that picks out, or ever will pick out, {hammer}. The norms (the dragonfly's eye-schema) are in the driver's seat, not the metaphysical selection of a realizer (the pixels).
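A toy sketch of the analogy (the numbers and the schema are invented purely for illustration): given only a bag of pixel intensities, nothing determines where each one ought to be cast; the image is recoverable only by drawing on the eye-schema that maps each ommatidium to a position.

```python
# Invented pixel intensities from a dragonfly's eye: the "pixels-only story".
# Nothing in this bare list says where any pixel belongs on the theatre wall.
pixels = [0.2, 0.9, 0.4, 0.7]

# The eye-schema: which ommatidium projects to which (row, column) position.
eye_schema = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}

def cast_on_wall(pixels, schema):
    """Reconstruct the image; impossible without the schema in hand."""
    wall = {}
    for index, intensity in enumerate(pixels):
        wall[schema[index]] = intensity
    return wall

print(cast_on_wall(pixels, eye_schema))
# Without eye_schema, the pixels alone underdetermine the image:
# any permutation of them is as good a "reduction" as any other.
```

The schema, not the pixels, settles what counts as getting the picture right; that is the sense in which the norms are in the driver's seat.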
So—I claim—the present account is fully naturalistic, but in the above sense non-reductive.
Summary Up to Here
Let us summarize up to here. We began with the puzzle of moral standing—why and how do some beings have moral standing which others do not? I have argued that neither an expressivist nor a purely descriptivist approach can adequately untangle the knot.
Some candidate solutions presented themselves. Expressivism fails due to the Frege-Geach problem and to collapsing into either realism or relativism. The descriptivist approach, while closer to the mark in identifying sentience as morally relevant, goes astray when it tries to pin sentience to some property discoverable in the physical or platonic world. Following Wittgenstein and Sellars, I proposed that sentience should be understood extrinsically—as constituted by its role in our social practices of inference, action, and response rather than by any intrinsic "what-it's-like-ness." Further, I have argued that sentience cannot be defined, even functionally, as some natural property, on pain of seeming hopelessly gerrymandered (the norms are in the driver's seat). These pragmatist moves dissolve the mysterious question of how sentience gets a normative grip on us: the normative commitment is already built into competent use of the concept, through being a rational speaker at all.
But these moves threatened to exclude non-reasoning beings from moral consideration entirely. If reasoning is the currency of morality, then how can the comatose matter? The solution lies, following Korsgaard with some pragmatist amendments, in recognizing that animals and other non-reasoners have dispositions and behaviors that grant them moral community membership, because those very traits are essential to our having it. Further, we must see that such beings have real but derivative moral standing: sense-dependent on our practices for intelligibility, but not reference-dependent on us for their moral reality. A pre-human animal's suffering was genuinely bad, even if no reasoners existed to articulate this badness.
More Summary
This gives us a gradualist, precautionary approach to moral standing grounded in sentience as socio-normatively articulated, avoiding both Cartesian skepticism and arbitrary chauvinism. It is gradualist because the grounds for moral standing (which are modally robust) cannot responsibly be supposed, empirically, to contain hard cutoffs. The light of reason-giving has an on-off switch, but the light of reason-providing (having sentience) is on a dimmer switch. This gradient of moral standing is a product of the average weightiness of the morally relevant statuses that beings have, conceptually understood. Moral standing's intensity is on a spectrum because moral reasons come on a spectrum: ants, dogs, humans.
This approach is precautionary because it is fallibilist. There is a fact of the matter—independent of the claims of any finite community—about which beings have moral standing and with what intensity. At any point, we may be wrong about it. In moral matters, our caution under uncertainty ought to be proportional to the worst-case outcomes.
Elephants and Crustaceans
If we see that a crustacean feels pain, we ought not to hurt it without overriding reason to, but its pain is not as deep as an elephant's upon losing a calf. Why? How do I know that? Well, the elephant will mourn, become depressed for days, and so on.
How do I know that your breaking a leg was a worse pain than stubbing your toe?
Reverse Chauvinism?
But is this view open to a kind of reverse-chauvinism that has absurd consequences? In other words, could we ever imagine super-moral beings, morally superior to ourselves? If not, it could indicate a deep problem with the view at hand, because if the view has the consequence of leaving us as the supreme beings of moral standing, then surely that indicates a flaw.
I argue that we could imagine beings with moral reasons of greater weight than ours and therefore of greater moral standing than us. Reason-givers with moral status superior to ours are imaginable as beings with dispositions similar to our own, but who react far more severely to the same events, demonstrating deep hurt at even a paper cut, perhaps writing volume upon volume of laments for some small, negatively valenced event. Should we ever discover such beings, we should view their sacrifices as deeper than ours. Of course, to be real morally weighty reasons, the reasons of these creatures would have to display something like the complex web of interactions that ours do: prompting emotion, prompting grief or celebration, having deep homeostatic impacts, and so on.
Even non-reason givers might be coherently said to have stronger moral reasons than we do. Elephants, for example, have extremely complex grieving emotions and death rituals. We may, in time, discover that there is strong evidence that their grief goes deeper than ours.
Time Chauvinism?
In "Kinds of Minds", Dan Dennett introduces the concept of "time chauvinism". Dennett, somewhat mockingly, invites us to imagine that trees have moral standing because they have some pain-like homeostatic responses, but we fail to notice them as they unfold over long and silent timescales.
I think Dennett's mocking here is unwarranted. Whatever Dennett's motives in his mocking tone, we must not confuse defensiveness about the boundaries of our duties (do we have a lightweight duty to trees?) with what could be the true view of moral standing. Dennett declines to offer one.
Excluding the Right Cases
It is a requirement of any adequate view of moral standing that it not only include the right beings, but that it exclude the wrong kinds. For example, my MacBook should not morally count, nor should a primitive robot, and so on. This requirement can be easier to dance around for gradualist views, but I do not think I need to dance around it. The requirement is met rather straightforwardly by the present view. Since sentience, being integral to which beings matter, is possessed (on a graduated scale) by virtually all living beings, things which do not qualify as sentient will not meet the requirement for ANY moral standing at all to be attributed.
Clearly, we could imagine artificial beings that were sentient. They would have to be beings for whom their functional good just is their final good: a kind of self-preserving thing that also had states to which we had some evidence for attributing moral, normative significance. Such beings are certainly possible, but it is unclear why we should be hesitant to count them as having moral standing. It seems that any such being would be recognizably "striving to live" just as animals are. We would introduce deep inconsistencies into our moral reasoning if we did not attribute moral standing to them.
Precaution and Modality
It would be irresponsible to claim, now, that no chauvinism is present in the developing view. We cannot know that now any more than we can know that we have made no mistakes in understanding particle physics up to this point. We need only make a pessimistic meta-induction from our past treatment of animals or disabled human beings to see this is the case. A fear of arbitrariness may sneak in with these words:
... The correct view of moral standing is the view we would have when we had correctly assessed the properties of all beings ...
That fear should be fully assuaged by reading this "would" in the strong subjunctive or counterfactual sense of
An iron bar WOULD rust if it were left in water for years
There is some correct view as to which beings do and do not have moral status. Following Peirce, we are only able to refer to this view now as the view that would be indefeasible in the infinite limit of inquiry into moral standing. For that inquiry to be correctly carried out, we need only be rationally consistent and open to new evidence. Further, we are unlikely to get any account of moral standing that exactly matches our status quo beliefs without that account being absurdly gerrymandered, self-serving, and inconsistent.
Objectivity? Relativism?
So far, I have only been issuing promissory notes for a notion of "correctly applied" moral concepts and for how my view is to avoid relativism if it relies on norms and not on metaphysical facts. If all of our morally relevant concepts are socio-normatively articulated, then what grounds them? What counts as "correctly applied" moral concepts? In my account so far, the notion of the correct application of these concepts is doing a lot of work, no doubt. On my view, dogs have more moral standing than ants precisely because we can correctly apply morally weightier concepts to them than we can to ants—and with greater certainty. But what counts as correct application, and how do we know? Surely if we all believed that sentience did not apply to dolphins, we would not be right just by virtue of believing so.
I think it is a mistake to view objectivity as requiring some sort of extra-human articulation of the thing(s) we are beholden to for our correctness. I will argue—again, contra Rorty—that we can get a respectable notion of objectivity straight from norms with no need of extra-human facts, provided we understand those norms correctly.
The worry about objectivity here stems from conflating two distinct questions: how concepts get their content (through social practices) and what makes our applications of those concepts correct or incorrect. The worry is that if we don't get the correctness from an appeal to metaphysical facts, then nothing can truly be correct. I think this is a false and unwarranted assumption. I will argue that we can achieve genuine objectivity straight from norms with no need for a kind of representationalism or foundationalism. We do not need—nor should we want—correspondence to some metaphysical reality for a notion of objectivity. All the objectivity we need is available through the transcendental requirements that make rational discourse intelligible at all. In short, I am advocating something like the objectivity of Davidson, Brandom, Peirce, or Habermas (with important differences from each).
Being rational, in the sense of being reasoners, is not an optional game we might choose to play or abandon. The very possibility of engaging in discourse—of making claims, giving reasons, or even questioning objectivity itself—commits us to certain transcendental norms through recognizing each other as reasoners. These norms are not arbitrary social constructions but requirements for discursiveness as such.
Objectivity Promissory Note:
I do not have space in this note to fully flesh out these discourse-transcendental notions of truth and objectivity. I do not pretend to offer a complete defense of them here, but I wanted to add a flyover, as the other views expressed here could otherwise be read as collapsing into a sort of relativism. That is not a sin I am willing to be read as committing.
I will leave a link to a deeper exposition and defense of them here.
A Pragmatist's Truth & Objectivity
A pre-human animal's suffering was genuinely bad, not because it tracked some metaphysical moral property, but because the correct application of the correct moral concepts, following the transcendental norms that make such application possible, would yield that judgment in the limit of inquiry (following Peirce, this is a mathematical limit; the "end" of inquiry need not be actual or even possible).
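To make the mathematical sense of "limit" invoked here explicit (a simple analogy, not a formalization of the moral claim), consider:

```latex
a_n = 1 - \tfrac{1}{n}, \qquad \lim_{n \to \infty} a_n = 1,
\qquad \text{although } a_n < 1 \text{ for every finite } n.
```

The limit is perfectly determinate even though no finite stage of the sequence attains it; the point of the analogy is only that the "end" of inquiry can do this kind of work without being an actual or even possible stage of inquiry.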
This gives us all the objectivity we need: moral standing claims that reach beyond any finite community's attitudes, that constrain what we can rationally believe, and that allow for genuine moral progress—all without requiring unexplained correspondence to a moral reality independent of the practices through which we make such judgments intelligible and without mistakenly equating causal natural-world properties with moral ones.
(I won't exhaustively defend this objective pragmatist view of truth here, but I think it has powerful advantages over alternative accounts, chief among them being that it can secure objectivity without divorcing that objective-X from our practices of giving and asking for reasons. This view is not a convergence view, nor is it an "ideal conditions" view, given the above transcendental arguments.)
Serial Killers
There is no extra-moral justification for the moral any more than there is an extra-epistemic justification for the epistemic. Indeed, I view the two as two sides of one coin, theoretical (epistemic) and practical (moral) reason, both being inseparable components of reason together. If someone—a sociopath perhaps—is intent on going around stepping on dogs and neighbors and will accept no reasons against it, we cannot convince them they are wrong morally through anything extra-rational. We can appeal to consistency, rationality, and reasonable ends, but beyond that, what more can we expect? Moral claims are not incantatory devices of persuasion.
When questions of moral objectivity come up, 'how we can coerce someone' is often conflated with 'how we can show that they are in the wrong', that they ought to do otherwise. I don't think we need a God's Eye objectivity to show that the sociopath is wrong. More coercive, God's Eye views might be more effective at persuading some, as in:
"You will go to hell for stepping on dogs."
The aim of moral philosophy is not to find the scariest ways to persuade irrational people to believe things. It is rather to find the true view of what we ought to do—how we ought to reason practically—and why.
Implications of This Account of Moral Standing
The implications of this view of moral standing are far-reaching. Somewhat strangely, we end up with a pragmatist version of the ancient Vedic notion of Ahimsa, a principle of nonviolence towards all living beings, if we use Korsgaard's definition of sentience with careful attention to the application of moral concepts. If my arguments here are right, we have a moral obligation, on a sliding spectrum, to all beings so interpretable as having statuses that we know conceptually to be morally laden. That obligation is proportional to the correct applicability of morally laden, reason-providing concepts—paradigmatically {pain}. It would then follow, pace Dennett, that since plants have homeostatic pain-like responses, we have a mild precautionary moral commitment to them. Moving up the hierarchy, we get to something like ants, where the concept of {pain} can take deeper root. And so on, all the way up to dogs, whose moral standing is undeniably on a par with statuses we intimately know ourselves to have.
Tradeoffs and Gradualism
An advantage of this gradualist, communitarian view is that it allows for tradeoffs. Further, it allows for tradeoffs without the oversteps of utilitarianism in denying the inherent worth of sentient or rational beings. It is an important feature of any correct view of normative ethics that it allows for tradeoffs. Only the most austere normative ethical views (Kant's unmodified views included) do not allow for any weighing of the moral balance. This is important in confronting the absurdity that some will see in my claiming that stepping on an ant is wrong. The view espoused here is that this is prima facie wrong, but can be overridden by stronger moral reasons.
The kind of gradualism here has the theoretical advantage of not being unreasonably anthropocentric, while not going to the opposite extreme, as Korsgaard does, of claiming that we cannot even compare our standing or reasons with those of other creatures. In keeping with good naturalistic tendencies, we should be skeptical of views that stray too far in either direction. We ought not to deny that nature is full of differences of degree, nor that there seems to be something distinctive about human activity (without letting that push us to a kind of species chauvinism).
Comparisons are Messy, and That is Fine (A View Between Consequentialism and Deontology)
Moral tradeoffs are allowed on the view I am espousing here because morality is an inherently communal matter. Purely consequentialist views wrongly treat value as a quantity that swings free of value-havers, and so these views naturally invite tradeoffs. I believe these views make the error of seeing value as an unexplained metaphysical property that somehow inheres in our experiences (and presumably those of others). This is a mistake, and it leads to a view where tradeoffs become too threatening (think of the "repugnant conclusion" from Parfit's "Reasons and Persons"). On the other hand, purely deontological views seem heinously restrictive to the point of being useless ethical systems, because they would not allow us to lie to the Nazis asking whether we were hiding any Jews under the floorboards, or to vaccinate people because doing so would violate their rights.
The correct view lies between these, seeing the grounds of morality as rational and deontological, but the application of those grounds as subject to the feedback of consequences, circumstances, and good reasons. The laws of morality are categorical. We derive the right actions from them, but the actions themselves—how to apply these laws in a given circumstance—are subject to consequential feedback.
An objection to these views might be that such tradeoffs and matters of degree are too unclear to be intelligible. I think this objection neglects the plenitude of perfectly functional qualitative concepts we have, all of which involve messy tradeoffs and matters of degree without any practical issues at all. Lack of quantification or bright lines does not preclude there being a fact of the matter about some inequality. Take the concept "strong", for example. Strength comes in degrees and—except in very narrow circumstances—is not quantified. I might say that one friend is stronger than another, and that might clearly be true, so I might enlist the stronger one to help me move my couch up the stairs. We might imagine picking crews to portage canoes and wanting each crew to be "of equal strength", therefore making tradeoffs: "No. Swap those three out for those four, then it will be even", and so on.
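A small sketch of how such comparisons can drive tradeoffs without any quantification (the judgment function below is a hypothetical stand-in for someone's ordinal verdicts; only its verdicts, never a numeric score, ever reach the balancing procedure):

```python
def balance(crew_a, crew_b, stronger, max_rounds=20):
    """Move members between crews until neither is judged clearly stronger.

    `stronger(xs, ys)` is any ordinal judgment returning "a", "b", or "even";
    the procedure itself never sees a numeric measure of strength.
    """
    for _ in range(max_rounds):
        verdict = stronger(crew_a, crew_b)
        if verdict == "even":
            break
        # Shift one member from the stronger crew to the weaker one.
        donor, receiver = (crew_a, crew_b) if verdict == "a" else (crew_b, crew_a)
        receiver.append(donor.pop())
    return crew_a, crew_b

# A stand-in oracle so the example runs; in practice this is just someone
# eyeballing the two crews and saying which looks stronger.
def eyeball_judgment(xs, ys):
    if len(xs) > len(ys) + 1:
        return "a"
    if len(ys) > len(xs) + 1:
        return "b"
    return "even"

print(balance(["p1", "p2", "p3", "p4", "p5"], ["p6"], eyeball_judgment))
```

The point is only that ordinal, qualitative verdicts are enough to make the tradeoff intelligible; nothing in the procedure requires a scale.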
Normative Ethical Implications: What Ought We to Do?
I will say little about what we ought to do practically in light of the metaethical views I have laid out here; my focus is metaethical. However, I do think it is worth stopping to make a few practical ethical remarks. If we have good moral knowledge, which most adult human beings do, we should strive to treat all beings to whom our moral concepts coherently apply—in a theoretical but no less real sense—with respect and dignity. We should not economically support practices like factory farming, which are effectively industrial-scale torture. We should not use our moral theories to make excuses and justify our habits. And we should not ignore emotions when they may be evidence of the incoherence of our moral views, just because we like our comfortable habits.
As was alluded to earlier, emotions are not typically good moral reasons, but they can be evidence for good moral reasons. The pain and anger we feel, whether we suppress it or not, when we witness or read about animals being harmed, is evidence (among other things, clearly) that these animals deserve the same moral consideration we do. These affective responses we have are evidence that the same {sentient} states undergird their behavior and that we ought to respect that if we respect that same capacity in ourselves.
Conclusion
This investigation began with the puzzle of moral standing—which beings possess it and on what basis. When confronted with moral matters ranging over subjects of different species and constitution, how ought we to reason? This question gives way to two broad approaches, descriptivism and expressivism. Flat-footed expressivism was rejected for its inadequacy in handling truth-apt applications of moral language. Pure descriptivism was rejected because it denies the normative element of moral discourse, and a kind of descriptivist essentialism was dismissed because it denies that the norms of concept use are in the driver's seat, instead attempting to pin moral standing to some specific property, resulting in an indefensibly gerrymandered concept or too much bullet-biting.
These considerations led our inquiry to a view of concepts as socio-normatively articulated by our ongoing network of inferences, action, and perception. Whatever the grounds of moral standing are to be, they must be addressed without essentialism and without denying how our practices actually give rise to their meaning. This pragmatist order of explanation—starting with the role of a concept in a web of inferences, perception, and action—blocks the route to a Cartesian view of moral properties that would leave us puzzled at how we could know them in others. Having clarified how any candidate realizer of moral standing must be understood, I then put forward sentience as the best candidate, following Birch and Korsgaard.
Seeing the home turf of our moral concepts to be our practices of giving and asking for reasons grounds our theoretical language in practice, avoids essentialist mistakes, avoids the conceptual knots of functionalist accounts, and removes the mystery of the normative force of these reasons. However, this strategy—loosely Kantian—comes with a challenge. If reasons are the currency of morality, then how can non-reasoning creatures achieve moral standing? The fear was that a reasons-based view of morality would leave us in Kant's untenable position that animals have either no moral standing or one that is only instrumental to ours. This fear was assuaged by seeing that non-reasoning beings can provide reasons without giving them. When the possession of shared capacities that underlie the very capability to value anything at all is acknowledged, following Korsgaard, as a necessary presupposition of rational action, we see that we are already bound to treat these beings with respect. Important differences from Korsgaard's account were acknowledged along the way, avoiding certain mistakes and yielding a theoretical—though no less real—account of the moral standing of non-reasoning beings, as well as a community-based view that allows for comparisons and tradeoffs between the moral reasons of different beings on a shared basis.
These arguments have leaned heavily on notions of normativity and correctness of concept use, leading to some worry about the threats of relativism or arbitrariness. I have attempted to address these worries by giving a brief set of transcendental arguments to the effect that our very engagement in factive discourse presupposes the outright denial of such a relativism.
These concept-application arguments have also threatened a kind of idealism that would suggest that moral standing has no reality outside of the application of our concepts. I have attempted to show why I think these worries are unwarranted by acknowledging the modal nature of concepts applying to the ordinary empirical world, as well as the sense-dependence—not reference-dependence—of moral concepts on our practices. The conclusion was that there is no more reason to see moral concepts as less real in our absence than there is to see diamonds as 'not hard' because we are not there to test them.
As was stated in the opening to this note, moral standing is an issue that is likely impossible to give a philosophical account of without also unearthing a whole entangled web of other commitments with it. I have certainly done that, arguably without restraint. While it makes my position more coherent, it leaves this note with many half-explained odds and ends of my philosophical system lying about. Rather than try to address them all satisfactorily here, I have decided (for now) to be content with the picture provided here, even if that means it is a little fuzzy around the edges.