Non-Kantian Autonomy
In the comments to my post on abortion, laurenhat asks how a sentience-based theory such as the one I propose deals with the fact that sentience comes in degrees:
I agree that sentience is a good basis on which to accord autonomy and rights. But you talk about sentience like it's a binary thing -- while I can see that it basically is when it comes to woman vs. fetus, there's no clear line in my head. Birth seems to me to be a good place to draw the line for killing living beings, because (a) we have to draw the line somewhere, and (b) the baby's health and mother's health are no longer closely tied after birth. But it's not like I think that birth is the moment of sentience. I also don't think living creatures are all equally sentient, and I'm curious about your views there and how it affects your decisions. Is it as important to you not to exploit honeybees as fish, or fish as pigs? Why/how does sentience play into that, or not?
I think the key to answering this lies in getting away from the Kantian model of moral considerability. In the Kantian model (which echoes Christian theories of soul-possession), morality begins by assigning intrinsic value to various entities on the basis of some characteristic, such as membership in Homo sapiens or sentience. Then moral action is that which respects the intrinsic value of those entities which hold it. To respect an entity's intrinsic value is generally held to consist, at least in part, of granting it autonomy.
This model becomes tricky when you are using a characteristic such as sentience which admits of degrees (i.e., pretty much every criterion of considerability that has been proposed except raw speciesism). You're faced with two choices: 1) you can set a threshold such that anything that's even a tiny bit more sentient than the threshold gets full intrinsic value and anything that's even a tiny bit less gets no intrinsic value, or 2) you can give things greater or lesser levels of intrinsic value. Neither of these is very satisfying.
To me, the solution is to drop out the step of assigning intrinsic value. Morality then works as follows. Sentience, as I use the term, simply means the ability to care what happens to you. Autonomy means the condition in which an entity's caring about its fate makes a difference in what actually happens to it (that is, the entity "gets its way," or is presented with some justification as to why it didn't get its way in this case)*. Moral action consists in granting as much autonomy to all entities concerned with some action as possible. Thus, it's meaningless to talk about granting autonomy to things beyond the bounds of their sentience -- you can't get your way on an issue you don't care about. Autonomy thus scales automatically to track differences in sentience levels between entities and over time. So in questions like fish versus pigs, what matters is the degree to which the fish and pigs care about whatever it is I'm proposing to do to them -- there can be no principle like "pigs are more important than fish" except as a rough empirical generalization about the species' typical ability to enjoy autonomy.
*Note the difference between "autonomy" as I'm using it and the common notion of "independence." Independence is the condition in which an entity enjoys its autonomy through its own unaided exertions. The value of independence is parasitic on autonomy -- independence is only good insofar as the independent entity values not just some outcome, but also the fact of getting that outcome through its own work. Modern Western cultures tend to have an exaggerated view of how attainable independence is (given our embeddedness in social structures).
4 Comments:
*nod* Okay. How do you assess the degree to which fish, pigs, termites, fetuses, etc., care about their possible futures?
I don't think current research bears you out on this point:
"This model becomes tricky when you are using a characteristic such as sentience which admits of degrees (i.e., pretty much every criterion of considerability that has been proposed except raw speciesism)."
Animal intelligence research has revealed that animals are a lot more intelligent than we tend to assume, but still far less intelligent than any human. For example, no ape has really been able to use language in a more sophisticated sense than saying things like "give food give food want food hungry give food." It's trickier to assess with animals that aren't primates, especially dolphins, though even there the research is conflicting.
Lauren -- It's a combination of observing behavior and drawing analogies from similarities in anatomical structure. Keep in mind that it's rarely necessary to make summary judgments ("pigs have X units of sentience"); in practice, the focus can stay on particular situations.
Alon -- I don't think my argument in this post is dependent on there being overlap in intelligence between humans and animals (it's not a version of the "marginal cases" argument). Even if there's a clear threshold separating human and animal levels of intelligence, you still have to ask: 1) Does this threshold happen at a morally relevant spot on the rock-to-Einstein scale of intelligence? and 2) Does it matter that intelligence differs *within* the human species (and within the non-human animal world)?
It seems like you need to say something about entities whose degree of sentience changes with time. The principle that "you can't get your way on an issue you don't care about" would seem to suggest that I have no right to specify what medical treatment I want in the event that I fall into a coma, or that a woman cannot be held morally responsible for heavy drinking during pregnancy.
Moving away from hot-button culture-war issues: If I set up a charitable foundation in my will, do the administrators of that foundation have no obligation to pursue the goals I set for it? Is it wrong for me to heat my home with nuclear power if I can reasonably anticipate that the nuclear waste will get into the water table a century from now?