
Why A Successful Ethics Has to Apply to 'Beings' Generally

cosmos_mind

What 'beings'?

May all beings look at me with a friendly eye, may I do likewise, and may we look at each other with the eyes of a friend.

-- from the Yajurveda, 1000 B.C.

Many ethical theories unjustifiably limit their application exclusively to human beings or very human-like beings. As I have read more and more ethics, I have continually been surprised by this. To me, it has always seemed like a very serious pre-ethical question: What makes a being "count" ethically?

Is it that it has states that are "sentient" or "hedonic" in the utilitarian sense (I am developing my own views on that), or that it can have preferences or rights, or that it can accept contracts or be virtuous? This 'cutoff' for ethical-entity membership is a very important one, however we choose to carve the cake. It will only grow more important as our species advances, and membership criteria that are too anthropocentric or too indiscriminate could have dire consequences. For the purpose of these short thoughts, I will call beings that make the ethical-entity membership cutoff "ethically relevant beings".

We will likely soon live in a world where artificial intelligence will meet or surpass our intellectual capabilities, where 'trans-human' adaptations might occur, or where we might encounter life from another world. In all of these cases, ethical theories that are too tightly bound to a conception of ethically relevant entities being "human-like" will get us into trouble.

While this may sound like science fiction for now, it is the kind of thinking we would hope a more advanced being than us, whom we had yet to encounter, would be doing. Think of the collisions between civilizations at differing degrees of technological advancement in human history. The result is usually very bad for the less advanced group. Since the impact of such an encounter would be so large for the worse-off party, it is well worth thinking about whether our theories account for ethically relevant beings that are "lower down the ethical totem pole" than us, and why. How might we want to be treated by a being that viewed us as "lower down the ethical totem pole"? This is why I think that a successful ethical theory, from the basement to the roof, has to account for cosmological scale and a vast variety of ethically relevant beings.

The right ethical theories have to apply to beings generally and not make the cutoff too high or too low. Here are some thoughts on the risks of "too high" and "too low":

Setting the Bar Too High

If the cutoff for being ethically relevant is too high, as many current ethical theories have it, I think that could be very harmful. A small example is our horrific treatment of animals. Many people believe that this treatment is justified by high-bar reasoning of the sort "chickens don't have rights" or "fish aren't sentient". Surely we would not want a vastly superior being (however we countenance ethically relevant beings) to view us that way. That would be nothing short of a disaster for humanity.

So what is the upper bound? It cannot be personhood. Maybe the ability to 'feel'? Many respond quickly with: "Well, the cutoff is obviously consciousness!" To me, this view is dangerous and very un-Copernican. Imagine a mental or functional state (however you like it) called "Schmonsciousness".

Schmonsciousness is consciousness³.

Humans cannot even fathom what it is like to be Schmonscious, just as we cannot imagine a worm having the depth of experience we have. Would it be OK for Schmonscious beings to eradicate us? For them to use us as a means? If there are many other intelligent beings in this universe (biological or artificial), then something like Schmonsciousness is not that far-fetched. Under that assumption, we should reason as if we are a randomly selected member of the set of all such beings, via the "self-sampling assumption".
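As a toy illustration of this self-sampling style of reasoning (the grades and population counts below are entirely made up for the example, not claims about the actual universe), here is how the odds shake out if you treat yourself as a random draw from the set of all observers:

```python
# Toy self-sampling illustration with made-up numbers.
# Suppose the universe contains a few "grades" of minds, each with some
# (entirely invented) population count.
populations = {
    "worm-grade": 10**14,
    "human-grade": 10**11,
    "schmonscious-grade": 10**9,
}

total = sum(populations.values())

# Under the self-sampling assumption, treat yourself as a random draw from
# all observers: the probability of finding yourself in a given grade is
# simply that grade's share of the total population.
for grade, count in populations.items():
    print(f"P(I am {grade}) = {count / total:.6f}")
```

The point is only structural: nothing in this style of reasoning privileges the human grade, and where the probability mass actually sits depends on numbers we do not know.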

The point of this goofy example is that when we set the bar erroneously high, everyone below it loses, so we need to be very certain that the chosen property is the right one, if there is such a property at all. Consciousness, as currently understood, is not that property.

Setting the Bar Too Low

If the cutoff for being an ethically relevant being is too low, then we might end up with absurd conclusions that force us to countenance inanimate objects or automata as ethically relevant beings. This type of theory may have negative consequences as well if it casts the net so wide that it becomes absurd to actually follow.

Ahimsa is the Buddhist/Jain principle of non-violence. According to another ancient text, the Atharvaveda, "Ahimsa is not causing pain to any living being at any time through the actions of one's mind, speech or body." For many adherents, this means not even walking on grass. Maybe this view has something right, but it is hard to see how we would feed 8 billion people on the planet without harming some life form. To the ethical intuitions of most, starving millions is worse than hurting many plants. Unless we can bioengineer ourselves to photosynthesize, this approach does not seem reasonable at this point in time. And that is to say nothing of why we should suppose that all living things are ethically relevant beings in the first place. What about bacteria?

If it is discovered that there is no hard line between life and non-life (artificial or natural, whatever that distinction means on a cosmic scale), setting the bar too low may become even more absurd. Calling anything that can do something like 'avoid' an ethically relevant being would lead us to admit that a world where all the magnets are touching opposite dipoles would be worse. That is surely absurd.


What should the criteria for ethically relevant beings be?

This is a problem I am fascinated by but do not yet have a satisfactory answer for. As I was considering above, maybe the ability to 'feel' is a good cutoff? Even that is difficult to pin down in a way we could apply to all beings. I am reminded of David Lewis's "Mad Pain and Martian Pain".

Here are some quickly assembled thoughts on where to start:

An ethically relevant being is:


  • Causally efficacious (it can affect other things and be affected by them)

  • Spatiotemporally contiguous (it isn't made up of random unconnected things in space and time)

  • Made of the right machinery: it has information-processing, mental states, and/or functional states


It seems like the third bullet is probably the most important and most difficult to establish. My reasons for picking these are a bit beyond the scope of this post and more could be said about this topic. It is something I would love to think more deeply about.
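Purely to make these three criteria concrete, here is a toy sketch of how they might be encoded as a checklist. Everything in it (the class, the flags, treating the criteria as jointly necessary) is my own invented scaffolding, not a worked-out theory:

```python
from dataclasses import dataclass

@dataclass
class CandidateBeing:
    """A toy description of a candidate entity, with one flag per criterion."""
    causally_efficacious: bool         # can it affect and be affected by other things?
    spatiotemporally_contiguous: bool  # is it a connected thing in space and time?
    has_right_machinery: bool          # information-processing, mental, and/or functional states

def is_ethically_relevant(being: CandidateBeing) -> bool:
    # Treat the three criteria as jointly necessary. The hard philosophical
    # work, of course, hides inside deciding how to set these flags.
    return (
        being.causally_efficacious
        and being.spatiotemporally_contiguous
        and being.has_right_machinery
    )

# A magnet is causally efficacious and contiguous, but (plausibly) lacks the
# right machinery, so on this toy scheme it does not make the cut.
magnet = CandidateBeing(True, True, False)
print(is_ethically_relevant(magnet))  # False
```

The sketch makes the third flag look as cheap as the other two, which is exactly the part doing all the hidden work.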