Must we grant rights to robots/artificial intelligence?

An analysis of whether we as a human race have an obligation to grant rights to artificially intelligent entities


Artificial intelligence systems have slowly crept into the underlying social constructs of human society. As we grow more dependent on algorithmic systems to power our decision-making processes, the need to consider the role these technologies play in human society continues to rise. This essay analyzes whether we as a human race have an obligation to grant artificial intelligence rights, as follows:

  • Support the idea that a degree of consciousness is attainable by artificial forms, which implies their deserving of rights within human society.
  • Evaluate Bryson’s utilitarian counterarguments that robots are owned by humans and that any artificial intelligence is still a result of human innovation, thus dismissing the human obligation of giving rights to robots.
  • Argue that the counterargument is insufficient because it misunderstands the definitions of consciousness and creation, especially as robotic consciousness may be imperceptible to us, but not impossible.
  • Suggest that moral patiency is instead subject to perceptions by other moral agents, shifting the focus on whether robots are deserving of rights based on human interactions.
  • Address the counterargument that robots could potentially be designed to serve human purposes and further the instrumentalist perspectives that robots are still subordinate to humans and therefore do not deserve rights.
  • Argue the immorality of engineering humans and by analogy the immorality of engineering robots due to their innate potential for self-autonomy and greater purpose.
  • Conclude that even if robots and artificial intelligence are the result of human innovation and thus a human-owned creation, any technological entity with an intelligence considered to hold consciousness and capable of moral agency is entitled to a degree of rights like that of organic lifeforms.

Dissecting the Problem Statement

To consider the relationship between robots and human society, it is important to break down the meaning of the question ‘Must we grant rights to robots/artificial intelligence?’. First, we should consider the definition of rights, which are largely social constructs created by humanity. The question suggests that ‘robots/artificial intelligence’ should be given a place within human society where they can exercise their own desires. The two terms ‘robots’ and ‘artificial intelligence’ will be used interchangeably, but their definitions within the scope of rights in human society will be discussed later.

Returning to the structure of the prompt, it can be seen as considering four distinct scenarios:

  • Robots do not have the capacity and should not be given rights.
  • Robots have the capacity but should not be given rights.
  • Robots do not have the capacity but should be given rights.
  • Robots have the capacity and should be given rights.

The above question brings the word ‘must’ into focus. We must evaluate whether robots even have the capacity to bear rights. This is directly related to whether robots can be moral agents and therefore be held responsible for their own actions. This is increasingly important because of the effects that artificial intelligence-based decision-making can have on other moral patients. Thus, we should consider how morality is defined. Mark Alfano defines five distinct elements of human morality: patiency, agency, sociality, reflexivity, and temporality¹. Notably, Kantianism is reflected in most modern Western moral theories and will thus be used in this reflection on morality. With this in consideration, the areas of moral agency and patiency are of primary focus.

Consciousness in Robots

Let us break down whether robots have the potential to be conscious. Within the scope of current technological and artificial intelligence systems, it is easy to assume the primitivity of such technology and dismiss the future possibility of what has long been seen as the pinnacle of computing. Current forms of artificial intelligence are examples of weak AI — highly specialized systems usually focused on one area or task. In fairness, any current or future attempt at artificial intelligence may not directly replicate human intelligence in the way its mental processes function. However, the primary goal of strong AI research is to develop a general form of artificial intelligence that is versatile in responding to a variety of events, much like the constant decision-making we experience as humans from day to day.

It is easy to assume that the human brain and its associated consciousness form a highly complex ‘machine’ that can only bear consciousness because of its incredible complexity. Yet this fails to consider forms of consciousness in other animals, especially those with smaller brains and lesser intelligence⁸. With this in mind, Moore’s Law suggests that computation will continue to increase exponentially, signaling an upcoming intersection between human and machine computing potential. Current machines already exceed human capability in certain respects, such as mathematical computation, and it is entirely possible for other aspects of machine intelligence to match or exceed our own. However, even if artificial intelligence can reach a level of computing far beyond what we have today, how do we distinguish between ‘true’ artificial intelligence and other forms of artificial intelligence — the primary debate between strong and weak AI? Notably, there are many conditions that determine whether a being has achieved consciousness, including factors such as intelligence and free will. Are these factors sufficient to determine consciousness? This is the basis of the problem of other minds: we can only perceive the behavior of others, which does not grant us access to determine whether they bear a mind⁶. If we cannot even determine whether other human beings bear consciousness, we can hardly be fair judges of which other beings do. Perhaps the issue lies not in fitting new ideas and entities into existing definitions, but in reopening the definitions of existing terminology, such as the meaning of consciousness⁶.
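The exponential growth that Moore’s Law describes can be made concrete with a minimal sketch. The function name and the starting figures below are illustrative assumptions, not data from the essay’s sources; the point is only what a doubling every two years implies arithmetically:

```python
def projected_transistors(base_count: int, base_year: int, year: int,
                          doubling_period: float = 2.0) -> int:
    """Project a transistor count forward, assuming Moore's Law style
    exponential doubling every `doubling_period` years."""
    doublings = (year - base_year) / doubling_period
    return round(base_count * 2 ** doublings)

# With a (rounded, illustrative) base of 2.3 billion transistors in 2010,
# three doublings by 2016 imply roughly an 8x increase.
print(projected_transistors(2_300_000_000, 2010, 2016))
```

Whether such raw growth in computation translates into the kind of general intelligence discussed above is, of course, precisely the open question.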

Regardless, as uncertain as we may be about our own consciousness and thus the consciousness of others, we cannot dismiss the possibility of future artificial intelligence bearing forms of thinking similar to or better than our own. The next area of debate is this: if robots have the potential to bear consciousness, are they capable of having rights? Fortunately, this is a relatively simple question to resolve. The idea of rights is largely a social construct created by human society. Based on current definitions of consciousness, we can assume these robots have at least autonomy, free will, and intelligence. Whether they can exercise their personal desires is determined by the society they are involved in, specifically human society. Therefore, if artificial intelligence has the capacity to bear rights, are we obligated as humans to give them rights in our society? As moral agents can make decisions on their own, a moral self-recognition of their status as equal moral agents with unequal standing in society is inevitable. This is likely to lead to future conflict that could otherwise be avoided by treating them as the equal members of society they deserve to be. Therefore, we have an obligation as moral agents to extend the same rights to other moral agents in our society.

Unnatural Entities and Rights

Notably, it is a common argument that robots are unnatural and therefore cannot replicate the same processes as living beings, specifically consciousness. In his publication Robots Should Be Slaves, Bryson makes a strongly utilitarian argument that because artificial intelligences are inorganic, they are subject to ownership and able to be used, as is anything inorganic³. Technology and robots are owned by humans. Any degree of intelligence is programmed and therefore created by humans. Bryson acknowledges that as technological systems become more complex, the influence of human intelligence on the development of further technologies becomes less obvious, though no less significant³. Regardless, machines operate within the limits set upon them by their human creators. In the case where these robots commit an immoral action upon a moral agent, the fault lies with the creator who defined those limits, whether knowingly or unknowingly. Why should robots be given moral and legal responsibilities if they are fully owned by us? Furthermore, Bryson counters the argument from the immorality of slavery. The term slavery is highly anthropocentric, and an entity can be in servitude without being a person. Many of our existing technologies are already subservient to humans, used to serve and reduce our responsibilities by deferring them to a lesser entity; by extension, he argues, what makes robots any different? Bryson notes that the reduction of robots to non-humans protects humans from having to give them rights³. Because robots are fully owned by us, and largely serve as tools to reduce the responsibilities of mankind, they should not be given legal or moral responsibility for their actions.

Redefining Consciousness

The argument that an unnatural, manufactured body is incapable of replicating cognitive thinking and consciousness rests on several fallacies. Dennett directly disputes many such arguments⁵. The argument from the unnaturalness of artificial intelligence can be broken into several parts. First, the claim that consciousness is a purely natural process that cannot be replicated outside a natural body has its fallacies⁵. The basis for this claim largely rests on fear and distrust of the unknown. As historical precedent shows, supernatural or otherworldly entities were once used to explain natural phenomena like weather and magnetic fields. History has also shown how many of these explanations have been debunked by natural science, including inaccurate theories once based on scientific evidence. With that in mind, is it fair to believe that the brain is the only connection between our world and another unexplained dimension? Modern artificial intelligence computing is modeled on the inner workings of the brain, using neuron-like digital representations to power computing processes. Furthermore, even assuming that consciousness and high-level cognitive thinking can only occur in an organic brain, this does not dismiss the possibility of unifying inorganic and organic bodies into a hybrid entity — one with an organic brain and an inorganic body.

Another argument is that only bodies that are natural and not manufactured can exhibit genuine consciousness⁵. This explores the idea that non-unique, manufactured bodies cannot exhibit genuine consciousness because of their lack of uniqueness. As organic technologies advance, researchers have been exploring the possibility of DNA cloning. Setting aside the many ethical implications of cloning and focusing on consciousness: on this basis, would two identical animals with fundamentally the same DNA no longer exhibit genuine consciousness? Yet these are two different bodies, each with its own capacity for autonomy and free will. Therefore, by contradiction, the uniqueness argument fails, and there is no solid proof that robots cannot develop a consciousness of their own.

Pain, Suffering, and Human-Robot Relationships

On the surface, organic and inorganic entities can seem very different; however, many mechanical and biological processes can be reduced to the same basic functions, such as the seemingly simple cycle of life and death. For a biological being, this means birth and death; for a mechanical being, it may mean the first time it is supplied with electricity and the moment it is disconnected and destroyed. Humans also interact regularly with manufactured objects, which invites exploration of human-robot relationships. With that in mind, robots could potentially experience pain and suffering while also being on the receiving end of human relationships, making them moral patients and therefore deserving of rights. Perhaps the pain we experience as organic entities is unique to ourselves; inorganic entities could perceive pain in a different way. Studies show that emotional fear can lead to physical pain in humans. Could the fear of being removed from power, then, be a form of pain for electricity-powered machines? Furthermore, amid the rapid cycles of innovation, Prescott cites Metzinger’s concern that an artificial being capable of suffering could be created unintentionally⁸. Without knowing that this artificial being can experience pain, we risk causing unnecessary pain to a sentient being, which goes against human morality. Although it is easy to dismiss artificial intelligence as not encompassing all the facets of human intelligence, recent biological research into pain and suffering in other animals suggests that even animals with lower intelligence can still experience pain and suffering.

However, perhaps the fundamental argument about the morality of robots does not lie in determining whether robots can experience the same human qualities, such as pain and suffering. If robots were able to experience a suffering-like state, much like their human counterparts, wouldn’t that make them moral patients? Robots may have the ability to be recipients of actions performed by moral agents and to react accordingly based on both external and internal factors. Perhaps this does not happen like pain processes in humans, which developed through evolution. However, that is not strong enough proof to dismiss the possibility of internal states akin to human emotions. Different intelligences may experience pain differently, but that does not mean one or the other does not experience it.

On the other hand, it is valuable to consider the current and future relationships between humans and robots. Prescott suggests two perspectives for looking at artificial beings: ontological and psychological⁸. These perspectives focus on ‘what robots are’ and ‘how robots are seen’, respectively. Perhaps the sentience of an entity should instead be determined by how we as a human society treat it. Notably, humans are built to humanize other things. This has led to the wide acceptance of animal pets, but this cross-species empathy extends to physical objects as well, including rocks and robots. For example, humans experience emotional turmoil at the loss of, say, a favorite robot. Furthermore, existing robots have primitive abilities to remember and communicate with humans, which introduces a social dimension to the potential integration of robots into society. It is naive to dismiss the potential for relationships to form between humans and robots.

Coeckelbergh notes that the way we look at other objects is largely shaped by how our society talks about them, how we live with them, and the cultural influences that affect how we perceive them⁴. We already see highly humanized things in life, such as pets and early iterations of electronic pets. Therefore, in combination with their potential to be moral patients and participants in human relationships, robots could well be deserving of rights.

Engineering Robot Servitude

First and foremost, it is important to consider the origin of the word ‘robot’. Derived from the Czech word robota, meaning ‘forced labor’, it implicitly defines robots as slaves to human society⁷. Understandably, the connotation of slavery is dangerous, especially given the historical precedent of minority groups being subjugated into slavery. However, Petersen raises a valuable distinction between ‘robot servitude to human aims’ and ‘robot slavery’⁷. Under slavery, robots would be subjected to human will against their own, while ‘robot servitude to human aims’ suggests personal motivation on the robot’s part to further human aims. Petersen himself concedes that slavery is wrong for intelligent creatures⁷, and intelligent creatures are subject to rights regardless of their natural or unnatural construction. Yet does a potential engineering solution lie in the symbiotic relationships found in nature? Such relationships occur throughout the animal kingdom, suggesting a possible middle ground in the dynamic between robots and humans. Theoretically, robots could be engineered such that it is in their own best interest to perform tasks and actions beneficial to both humans and themselves. This is otherwise known as Engineered Robot Servitude (ERS)⁷.

Immorality of Engineered Human Servitude

Arguably, the counterargument makes some valuable points. Designing robots such that it is in their own best interest to serve greater human interests poses a significant workaround to the ethics of forcing a sentient being to act against its will. Yet we should compare the human equivalent of Engineered Robot Servitude: Engineered Human Servitude. This enters the realm of eugenics, which operates on the same basis of designing or selectively breeding humans to serve a specific purpose. To consider its morality, we will bring the perspectives of three historical philosophers into this discussion: Kant, Aristotle, and Mill. In Kantian philosophy, specifically designing intelligent beings to perform relatively trivial tasks leaves no room for a more fulfilling life⁷; furthermore, a rational being cannot rationally consent to being used as a mere means. Aristotelian thinking holds that humans have a particular way of living and that changing that way of living is immoral⁷. Millian philosophy considers any such engineering a means of exchanging one’s greater pleasures for lower ones⁷. These views all consider engineering humans immoral. Notably, using the morality of Engineered Human Servitude to determine the morality of Engineered Robot Servitude implies some degree of equivalency, which can be argued for. It could be argued that being engineered is in the innate nature of artificial beings. But if they have the potential to achieve greater things in their lifetime, as opposed to serving some need of human society, they should have the right to do so. Aristotle’s philosophy is not an exact parallel to Engineered Robot Servitude, but the immoral part of such servitude lies prior to the creation of the being, in the design process: it is not moral for an intelligent being with autonomy to design another intelligent being without autonomy to serve its own purposes.

Sex Robots

Another contentious point of concern is the future position of robots in our sexual lives. This discussion draws analogies between the current ethical debates surrounding prostitution and the emerging debates surrounding sex robots.

Supporters of sex work are likely to be strong supporters of sex robots as well, on the basis that they are a suitable response to a seeming societal trend toward greater demand for sexual services, as well as a means of reducing many sex-related crimes. In a society where sex work can be seen as demeaning, sex robots could fill that void and thus leave greater fulfillment to humans. On the other hand, opponents of sex work argue that it amounts to slavery and demeans the parties involved, and that those providing sexual services can be taken advantage of. From a broader perspective, the rise of the internet has led to the wide consumption of pornographic material, which can also fuel sexism. Richardson is a strong advocate against sex robots, arguing that their instrumentalized use perpetuates sexist views and behaviors⁹, particularly given that the audience for sexual services is predominantly male. Furthermore, part of the main appeal of sex robots is the objectification of the providing party in the temporary relationship. We already face the objectification of women in society, and allowing objects to serve as members of sexual relationships strengthens the association between sex and objects, resulting in greater objectification of other human beings. It is also fair to reason that greater availability of sexual services may create more demand, forming a reinforcement cycle.

However, as discussed earlier, whether an artificial entity possesses intelligence or consciousness is a significant factor in how we should approach this issue. If intelligent beings can make decisions for themselves, is it our place to determine what they can or cannot do? To coexist with other intelligent beings, we must accept the interactions that will occur as well. This reinforces the prior argument that designing intelligent robots for a predetermined goal, such as serving the sexual needs of a human, is immoral. If an intelligent robot has autonomy and willfully chooses to engage in sexual acts with a human, it is within its rights to do so. As for non-intelligent artificial sexual entities, sex toys are not morally wrong; the line lies in depicting living beings in a non-intelligent body unable to consent to sexual acts. Legislation should prevent the depiction of children or the replication of real-life humans in sex toys. In terms of rights, however, only intelligent forms of robots must be protected with rights.

Biased AI

One sharp concern about the growing influence of artificial intelligence in human society is whether we should refuse rights to robots on the basis that they harm humans. Understandably, there are concerns about how AIs can be biased and affect human society in adverse ways. We currently face a multitude of ethical issues in the technology industry. Corporations harness user data to further their businesses, as with Google’s reCAPTCHA program and Tesla’s self-driving program. Current artificial intelligence programs have already created much concern, from racist chatbots to predictive policing systems that perpetuate prior discrimination. As technology becomes more automated, the need for greater human contribution and quality assurance rises, suggesting that not all problems should be solved with artificial intelligence. Although artificial intelligence can often offer quick solutions, it can endanger human rights if not monitored carefully². Furthermore, we cannot dismiss the possibility of human enslavement by a sentient artificial intelligence².

Fully sentient artificial beings would be one of the most defining marks in the technologization of humanity, and it is in our best interest to ensure harmony with artificial beings in our society. Reflexive programming can only help so much, since bias stems from human data that may carry historical or cultural prejudices. The larger issue is how we choose to treat robots: treating other groups fairly reduces conflict and the likelihood of such outcomes.

Conclusion

We cannot dismiss the possibility of fully conscious, sentient artificial beings becoming largely integrated with human society. It is our responsibility as moral agents to guarantee their rights of autonomy and free will. Not guaranteeing those rights undermines our humanity and risks future conflict between intelligent biological and mechanical beings. Therefore, even if robots and artificial intelligence are the result of human innovation and thus a human-owned creation, the capacity to hold consciousness entitles any technological entity to rights like those of organic lifeforms.


References

  1. Alfano, M. 2016. Moral psychology: An introduction. Cambridge: Polity Press.
  2. Birhane, A. and van Dijk, J. 2020. Robot Rights? Let’s Talk about Human Welfare Instead. In: AAAI/ACM Conference on AI, Ethics, and Society, 7 February 2020, New York. pp.207–213.
  3. Bryson, J. 2009. Robots Should Be Slaves. In: Wilks, Y. ed. Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues. Amsterdam: John Benjamins, pp.63–74.
  4. Coeckelbergh, M. 2014. The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics. Philosophy & Technology. 27(1), pp.61–77.
  5. Dennett, D. 2014. Consciousness in Human and Robot Minds. In: Scharff, R. and Dusek, V. eds. Philosophy of technology: the technological condition: an anthology. 2nd ed. Chichester, West Sussex: Wiley Blackwell, pp.588–596.
  6. Gunkel, D. 2017. The other question: can and should robots have rights?. Ethics and information technology. 20(2), pp.87–99.
  7. Petersen, S. 2007. The ethics of robot servitude. Journal of experimental & theoretical artificial intelligence. 19(1), pp.43–54.
  8. Prescott, T. 2017. Robots are not just tools. Connection Science. 29(2), pp.142–149.
  9. Richardson, K. 2016. The asymmetric ‘relationship’: parallels between prostitution and the development of sex robots. Computers & Society. 45(3), pp.290–293.