For much of human history, children and women had no rights. Universal suffrage and women’s rights were unimaginable for centuries before the modern era. Today, most developed countries protect the rights of animals and the (living) environment to some extent. Technological development now raises the question of whether we should grant rights to machines. Should we stop beating our robots?
Could robots and artificially intelligent creatures acquire rights? Will overworked carmaker robots establish a union one day? Will abolitionists come to the aid of sex robots? Whom should we blame when a robot harms a worker in a factory? Will an artificial intelligence go to jail, and if so, will it be incarcerated with human inmates? These questions may sound impractical, even like science fiction, today, but remember that children’s, women’s, and animals’ rights were once absent from public discourse as well. So let’s examine, one by one, the features that make something a subject of moral consideration.
Moral agency and patiency
The morality of machine intelligence can be approached from two distinct directions. 1) When we ask whether machines can be liable for their acts, can set their own goals, and are capable of deliberate and conscious action, we raise issues of moral agency. 2) When we ask whether machines can be used as sex toys or can be beaten, we inquire whether they are mere artifacts or entities we should take care of. This consideration is called moral patiency.
As the title of this post suggests, we deal with moral patiency in detail, but this does not mean that moral agency is excluded from our argument, since we assume that moral agency entails patiency. More precisely, patiency is a necessary condition of agency. We do not argue for this position here; we take it as a premise.
Sentience and patiency
Following the Cartesian tradition, Western culture regarded animals as machines until the 1970s. The treatment of animals has changed radically since then, thanks to activists and to books like Peter Singer’s Animal Liberation. According to Singer, something can be a subject of moral consideration if it is a sentient being, or to put it simply, if it can suffer. If we accept this view, we have to examine whether machines and artificial minds can be sentient.
As of today, no robot or artificial intelligence can feel anything; the technology is far from producing a sentient machine. However, there are many projects aiming to develop some sort of digital or robotic companion: chatbots for customer relations, chatbots for therapeutic use, support robots for the elderly, and robots as sex toys, to mention a few. These projects do not aim to build a fully autonomous general artificial intelligence, but to create reliable and useful tools for social interactions.
Human-Computer Interaction researchers illustrate the companion machines of the future with the analogy of working and companion dogs. Guide dogs are generally very smart and are trained to excel at helping humans move freely; in this respect they resemble companion machines. Moreover, this study from the Family Dog Project argues that qualities of companion dogs, such as faithfulness, kindness, and intelligence, should be implemented in companion robots to help humans accept machines. In this way, we may ascribe attributes, feelings, and emotional states to robots just as we do to dogs.
The projection of these qualities raises an important question. If machines exhibit feelings and emotions, are they actually in an emotional state? Or, to push the question further, can a robot be in an emotional state identical to that of a human?
The problem of other minds
It may sound like a silly question at first, but why do we attribute sentience to animals? Is it just another form of anthropomorphism, or do they really have feelings? For that matter, how does one know that another human being shares one’s feelings and emotions? On what basis can one attribute a rational mind to others? Philosophers of mind call this “the problem of other minds”.
According to Wittgenstein, this is a linguistic question. If I hit my finger while driving a nail, I cry out and say “awwwww!”, because I learned this behavior from my environment. My parents and all the adults around me did the same when I was a child, so I learned to do it too. I learned what to say when I feel terrible physical pain, just as I learned to say “Hello” to my neighbors when I meet them. All these things constitute a language game, or a form of life, and they are social by their very nature. I cannot feel pain without expressing it. I cannot feel anything if I cannot name it. Hence language is a precondition of other minds. This is how Wittgenstein’s argument goes. Consequently, speaking is a precondition of emotional states and mental activity.
Our everyday experience contradicts the view described above. We do attribute mental and emotional states to animals, although they cannot speak. We even speak about physical objects as if they were persons, e.g. “Why does my computer not want to work?” Philosophers call this the “intentional strategy”: treating something as if it had mental states. If something behaves like an intentional agent, the best way to deal with it is to assume that it really is intentional.
But what can we know about the mental states of other creatures? Can we imagine what it is like to be a bat? More precisely, can we put ourselves in the place of a bat? What would it be like to navigate using only our ears? Some philosophers of mind hold that echolocation cannot be imagined and that we cannot know what it is like to be a bat, since being a bat and being a human come with different qualia, i.e. distinct ways of perceiving and experiencing the world around us.
If we want to attribute sentience to animals and machines, we need something more than the intentional strategy. We have to identify the behavioral patterns that animals share and the physiological structures that produce them. Some behavioral patterns are produced by very similar physiological structures; others arise from structures that differ physically but are functionally very similar. If a behavioral pattern can be “implemented” by various organic structures, it can be implemented by inorganic ones as well. In the philosophers’ terminology: if functionalism is true, we can build sentient machines.
One of the first lessons of robotics came from phenomenology and cognitive science. The minds of autonomous biological agents do not end at the skull. Humans and animals have bodies, and they sense the world through their organs. Nor do they merely navigate their environment passively; they actively use the environment to extend their minds for various tasks, for example by using landmarks for navigation. Human and animal cognition is thus embodied and extended at the same time. These embodied and extended minds created the abstract space of morality, or more precisely, they are constantly creating it.
Rights and obligations
Although there is still much to do, animal rights are established and codified in almost all developed countries around the globe. The most common argument for laws protecting animals is that animals, or at least vertebrates, are sentient beings. Although rights are granted to animals, they are not exercised by them. In the case of minors and animals, it is caretakers and the public who act on their behalf and exercise their rights. Moreover, animals are aware neither of their rights nor of the moral consequences of their acts.
Consider the case of a dog that bit a postman. No one would blame the dog for its act, but its owner would be in big trouble: on the one hand, he would be charged with causing harm to the postman, on the other, with mistreating his dog, which might have caused its aggressive behavior. But who should be blamed when an intelligent machine does harm? Its owner, its manufacturer, or the programmer who trained it? And what shall we do with such a machine? Can we simply switch it off, or would that count as an execution?
Over the course of history, minors, women, and minorities were treated as sentient beings with limited rationality. As a result, they were deprived of the rights enjoyed by adults, mostly privileged and wealthy men. They also had specific obligations, e.g. to follow the orders of the head of the household, who was usually an adult man, and of anyone above them in the social hierarchy.
Machines have no obligations, since they are not living beings; they are built to handle and execute various tasks. If you hire a gardener, she has an obligation to trim your lawn, but the lawnmower has no obligation, although it was built to trim lawns. Likewise, horses and companion dogs have no obligations, yet their owners keep them for various tasks. If a lawnmower stops working, its owner can throw it away. If a horse is sick or refuses to jump fences all day, its owner cannot simply throw it away. What about a sentient machine? What if a sex robot becomes sentient one day and has negative feelings when someone uses it? What if tastes change and its model goes out of fashion? Can its owner throw it away?
It’s not about the future, it’s about the present of humanity
If you think the issues we have raised are neither reasonable nor practical, it is high time to shed light on why philosophizing about beating robots matters. When we consider the moral acceptability of beating a robot, we are thinking not only about the moral status of robots but also about our own. What kind of traits do we want to cultivate in ourselves?
The questions of ethics are perennial, although there are no exact, timeless answers to them. The recent surge of Artificial Intelligence has made us chew over these problems again and again, as the technology evolves rapidly.
- Peter Singer: Animal Liberation, Harper Perennial Modern Classics, 2009
- Ludwig Wittgenstein: Philosophical Investigations, John Wiley and Sons, 2016
- Thomas Nagel: “What Is It Like to Be a Bat?” In: Thomas Nagel: Mortal Questions, Cambridge University Press, 2003
- Paul M. Churchland: Matter and Consciousness, MIT Press, 1998
- Rosalind Hursthouse and Glen Pettigrove: “Virtue Ethics”, The Stanford Encyclopedia of Philosophy (Winter 2018 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2018/entries/ethics-virtue/>
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.